Dataset schema (reconstructed from the flattened viewer output):

| Column | Type | Value stats |
|---|---|---|
| title | string | length 4-295 |
| pmid | string | length 8-8 |
| background_abstract | string | length 12-1.65k |
| background_abstract_label | categorical | 12 classes |
| methods_abstract | string | length 39-1.48k |
| methods_abstract_label | string | length 6-31 |
| results_abstract | string | length 65-1.93k |
| results_abstract_label | categorical | 10 classes |
| conclusions_abstract | string | length 57-1.02k |
| conclusions_abstract_label | categorical | 22 classes |
| mesh_descriptor_names | sequence | |
| pmcid | string | length 6-8 |
| background_title | string | length 10-86 |
| background_text | string | length 215-23.3k |
| methods_title | string | length 6-74 |
| methods_text | string | length 99-42.9k |
| results_title | string | length 6-172 |
| results_text | string | length 141-62.9k |
| conclusions_title | string | length 9-44 |
| conclusions_text | string | length 5-13.6k |
| other_sections_titles | sequence | |
| other_sections_texts | sequence | |
| other_sections_sec_types | sequence | |
| all_sections_titles | sequence | |
| all_sections_texts | sequence | |
| all_sections_sec_types | sequence | |
| keywords | sequence | |
| whole_article_text | string | length 6.93k-126k |
| whole_article_abstract | string | length 936-2.95k |
| background_conclusion_text | string | length 587-24.7k |
| background_conclusion_abstract | string | length 936-2.83k |
| whole_article_text_length | int64 | 1.3k-22.5k |
| whole_article_abstract_length | int64 | 183-490 |
| other_sections_lengths | sequence | |
| num_sections | int64 | 3-28 |
| most_frequent_words | sequence | |
| keybert_topics | sequence | |
| annotated_base_background_abstract_prompt | categorical | 1 class |
| annotated_base_methods_abstract_prompt | categorical | 1 class |
| annotated_base_results_abstract_prompt | categorical | 1 class |
| annotated_base_conclusions_abstract_prompt | categorical | 1 class |
| annotated_base_whole_article_abstract_prompt | categorical | 1 class |
| annotated_base_background_conclusion_abstract_prompt | categorical | 1 class |
| annotated_keywords_background_abstract_prompt | string | length 28-460 |
| annotated_keywords_methods_abstract_prompt | string | length 28-701 |
| annotated_keywords_results_abstract_prompt | string | length 28-701 |
| annotated_keywords_conclusions_abstract_prompt | string | length 28-428 |
| annotated_keywords_whole_article_abstract_prompt | string | length 28-701 |
| annotated_keywords_background_conclusion_abstract_prompt | string | length 28-428 |
| annotated_mesh_background_abstract_prompt | string | length 53-701 |
| annotated_mesh_methods_abstract_prompt | string | length 53-701 |
| annotated_mesh_results_abstract_prompt | string | length 53-692 |
| annotated_mesh_conclusions_abstract_prompt | string | length 54-701 |
| annotated_mesh_whole_article_abstract_prompt | string | length 53-701 |
| annotated_mesh_background_conclusion_abstract_prompt | string | length 54-701 |
| annotated_keybert_background_abstract_prompt | string | length 100-219 |
| annotated_keybert_methods_abstract_prompt | string | length 100-219 |
| annotated_keybert_results_abstract_prompt | string | length 101-219 |
| annotated_keybert_conclusions_abstract_prompt | string | length 100-240 |
| annotated_keybert_whole_article_abstract_prompt | string | length 100-240 |
| annotated_keybert_background_conclusion_abstract_prompt | string | length 100-211 |
| annotated_most_frequent_background_abstract_prompt | string | length 67-217 |
| annotated_most_frequent_methods_abstract_prompt | string | length 67-217 |
| annotated_most_frequent_results_abstract_prompt | string | length 67-217 |
| annotated_most_frequent_conclusions_abstract_prompt | string | length 71-217 |
| annotated_most_frequent_whole_article_abstract_prompt | string | length 67-217 |
| annotated_most_frequent_background_conclusion_abstract_prompt | string | length 71-217 |
| annotated_tf_idf_background_abstract_prompt | string | length 74-283 |
| annotated_tf_idf_methods_abstract_prompt | string | length 67-325 |
| annotated_tf_idf_results_abstract_prompt | string | length 69-340 |
| annotated_tf_idf_conclusions_abstract_prompt | string | length 83-403 |
| annotated_tf_idf_whole_article_abstract_prompt | string | length 70-254 |
| annotated_tf_idf_background_conclusion_abstract_prompt | string | length 71-254 |
| annotated_entity_plan_background_abstract_prompt | string | length 20-313 |
| annotated_entity_plan_methods_abstract_prompt | string | length 20-452 |
| annotated_entity_plan_results_abstract_prompt | string | length 20-596 |
| annotated_entity_plan_conclusions_abstract_prompt | string | length 20-150 |
| annotated_entity_plan_whole_article_abstract_prompt | string | length 50-758 |
| annotated_entity_plan_background_conclusion_abstract_prompt | string | length 50-758 |
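For orientation, here is a minimal sketch of how one record of this schema might look in Python, using values from the example row below (PMID 33626925). The dict literal is illustrative only, standing in for a row returned by whatever loader is used; long text fields are omitted.

```python
# Illustrative sketch of one record of this dataset, using values from the
# example row below (PMID 33626925). The dict stands in for a loaded row;
# the long text fields (abstracts, section texts, prompts) are omitted.
row = {
    "title": ("Predictive Nomogram of Subsequent Liver Metastasis After "
              "Mastectomy or Breast-Conserving Surgery in Patients With "
              "Nonmetastatic Breast Cancer."),
    "pmid": "33626925",
    "pmcid": "8482719",
    "background_abstract_label": "BACKGROUND",
    "mesh_descriptor_names": [
        "Adult", "Breast Neoplasms", "Female", "Humans", "Liver Neoplasms",
        "Mastectomy", "Neoplasm Staging", "Nomograms",
    ],
}

# pmid is a fixed-width digit string (length 8-8 in the schema), not an int.
assert len(row["pmid"]) == 8 and row["pmid"].isdigit()
# Labels come from a small closed vocabulary (stringclasses in the schema).
assert row["background_abstract_label"] == "BACKGROUND"
# MeSH descriptors are stored as a sequence of strings.
assert all(isinstance(m, str) for m in row["mesh_descriptor_names"])
```

Example row: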
Predictive Nomogram of Subsequent Liver Metastasis After Mastectomy or Breast-Conserving Surgery in Patients With Nonmetastatic Breast Cancer.
33626925
Metastasis accounts for the majority of deaths in patients with breast cancer, and liver metastasis is reported to be common in breast cancer patients. The purpose of this study was to construct a nomogram to predict the likelihood of subsequent liver metastasis in patients with nonmetastatic breast cancer, so that high-risk patient populations can receive preventive treatment and monitoring.
BACKGROUND
A total of 1840 patients with stage I-III breast cancer were retrospectively included and analyzed. A nomogram was constructed to predict liver metastasis based on multivariate logistic regression analysis, and the SEER database was used for external validation. The C-index, calibration curves and decision curve analysis were used to evaluate the predictive performance of the model.
METHODS
The nomogram included 3 variables related to liver metastasis: HER2 status (odds ratio (OR) 1.86, 95% CI 1.02 to 3.41; P = 0.045), tumor size (OR 3.62, 95% CI 1.91 to 6.87; P < 0.001) and lymph node metastasis (OR 2.26, 95% CI 1.18 to 4.34; P = 0.014). The C-indexes of the training cohort, internal validation cohort and external validation cohort were 0.699, 0.814 and 0.791, respectively. The nomogram was well calibrated, with no statistical difference between the predicted and observed probabilities.
RESULTS
We have developed and validated a robust tool for predicting subsequent liver metastasis in patients with nonmetastatic breast cancer. Identifying the population of patients at high risk of liver metastasis will facilitate preventive treatment or monitoring of liver metastasis.
CONCLUSION
[ "Adult", "Breast Neoplasms", "Female", "Humans", "Liver Neoplasms", "Mastectomy", "Neoplasm Staging", "Nomograms" ]
8482719
Introduction
Breast cancer is one of the leading causes of cancer-related deaths among women worldwide.1 Although only 6%-10% of breast cancer patients are diagnosed with metastatic disease, approximately 30% of women diagnosed with nonmetastatic disease will relapse after treatment.2,3 Breast cancer mainly metastasizes to bone, lung, liver and brain through the circulation; among these sites, the liver is the third most common site of distant metastasis.4 Liver metastasis is reported to be responsible for approximately 20% to 35% of metastatic breast cancer deaths.5-7 Studies have shown that breast cancer patients with liver metastasis have a poor prognosis and short median survival: about 4-8 months without any treatment8 compared to 13-31 months after systemic treatment.2,3,9,10 Systemic therapy remains the backbone of treatment for breast cancer patients with liver metastasis, although surgery, radiofrequency ablation, or radiotherapy can also be used for liver metastasis.11,12 Currently, the occurrence of liver metastasis from breast cancer cannot be accurately predicted. Nomograms built from known prognostic factors are increasingly and widely used to predict specific outcomes.13,14 We hypothesized that a nomogram could be constructed by combining selected clinical and pathological variables in a multivariate model to predict the likelihood of postoperative liver metastasis in early breast cancer patients. Such a nomogram could be used to identify subgroups of high-risk patients, to develop targeted screening and new preventive treatment strategies for early-stage breast cancer patients, and even to improve quality of life and survival outcomes.15,16 We therefore constructed and validated such a nomogram using retrospective study data from 2 breast cancer patient populations.
Methods
The electronic database of the department of breast cancer at Sun Yat-sen University Cancer Center was searched and retrospectively reviewed from January 2008 to December 2010. Information from 1840 consecutive patients with nonmetastatic breast cancer was collected to serve as the basis for this study. The inclusion criteria were: (1) female; (2) stage I-III breast cancer; (3) underwent mastectomy or breast-conserving surgery; (4) pathologically confirmed invasive carcinoma. The exclusion criteria were: (1) male; (2) distant metastasis at initial diagnosis; (3) incomplete information, such as missing TNM staging or pathological diagnosis; (4) other primary malignancies, whether diagnosed before the breast cancer or during its follow-up. Tumor staging was based on the eighth edition of the TNM classification of malignant tumors, and the primary tumor and metastases were confirmed by pathology. Tumor size was classified as follows: T1, maximum diameter ≤ 20 mm; T2, maximum diameter > 20 mm but ≤ 50 mm; T3, maximum diameter > 50 mm; T4, direct invasion of the chest wall or skin regardless of tumor size. The diagnostic criteria for breast cancer liver metastasis (BCLM) were as follows: (1) BCLM confirmed by histopathological examination; or (2) when the patient was unable to undergo pathological examination of the liver metastases, BCLM diagnosed mainly on the basis of clinical manifestations and imaging examinations; for patients without pathological examination, breast oncologists and imaging physicians jointly confirmed the diagnosis of BCLM based on ultrasound, CT, and MRI. Consecutive patients from January 2008 to December 2009 were included in the training cohort, while consecutive patients from January 2010 to December 2010 were included in the internal validation cohort from the same institutional database using the same criteria as the derivation cohort.
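The tumor-size rules quoted above are a simple threshold classification; a minimal sketch follows (the function name and the boolean flag are illustrative, not from the paper):

```python
def t_category(max_diameter_mm: float, invades_chest_wall_or_skin: bool = False) -> str:
    """T category from the maximum tumor diameter, following the size rules
    quoted in the Methods (TNM classification, 8th edition)."""
    if invades_chest_wall_or_skin:
        return "T4"  # direct invasion of chest wall or skin, regardless of size
    if max_diameter_mm <= 20:
        return "T1"
    if max_diameter_mm <= 50:
        return "T2"
    return "T3"

assert t_category(20) == "T1"   # boundary case: T1 is <= 20 mm
assert t_category(35) == "T2"
assert t_category(60) == "T3"
assert t_category(10, invades_chest_wall_or_skin=True) == "T4"
```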
The database includes the patients’ treatment and pathological variables (age, menopausal status, tumor size, lymph node metastasis, histological grade, ER, PR, HER2, lymphovascular invasion, etc.). The criterion for HER2 positivity was IHC 3+ (defined as > 30% of invasive tumor cells with uniform, strong membrane staining) or FISH amplification (defined as a HER2/CEP17 ratio > 2.2, or an average HER2 gene copy number greater than 6 signals per nucleus when no internal reference probe was used in the detection system). The retrospectively maintained database of early breast cancer patients, as well as this study, was approved by the Institutional Review Board of the Sun Yat-sen University Cancer Center.
Statistical Analysis
Statistical analysis was performed with SPSS® version 21.0 (IBM, Armonk, New York, USA) and the statistical software package R version 3.5.1 (http://www.r-project.org). The clinical characteristics of the patients were summarized mainly as categorical variables, and comparisons between groups were performed with the chi-square test. Receiver operating characteristic (ROC) curves were built with SPSS®. We first used univariate and multivariate analyses to determine factors predicting postoperative liver metastasis, identifying these factors with a binary logistic regression model. We then constructed a nomogram for predicting postoperative liver metastasis from the results of the multivariate analysis using the rms package in R. Internal validation of the nomogram was performed by bootstrapping with 1000 resamples. We evaluated the predictive performance of the nomogram by the C-index, and calibration curves were plotted to assess model calibration, tested with the Hosmer-Lemeshow test [20].
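The HER2 positivity criterion described above can likewise be sketched as a small decision rule (function and parameter names are illustrative, not from the paper):

```python
from typing import Optional

def her2_positive(ihc_score: int,
                  fish_ratio: Optional[float] = None,
                  avg_copy_number: Optional[float] = None) -> bool:
    """HER2 positivity as defined in the Methods: IHC 3+, or FISH
    amplification (HER2/CEP17 ratio > 2.2, or average HER2 gene copy
    number > 6 when no internal reference probe is used)."""
    if ihc_score == 3:
        return True
    if fish_ratio is not None and fish_ratio > 2.2:
        return True
    if avg_copy_number is not None and avg_copy_number > 6:
        return True
    return False

assert her2_positive(3)                        # IHC 3+
assert her2_positive(2, fish_ratio=2.5)        # FISH-amplified
assert not her2_positive(2, fish_ratio=1.8)    # equivocal IHC, FISH-negative
assert her2_positive(1, avg_copy_number=7)     # copy-number criterion
```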
Decision curve analysis in the internal validation set was used to determine the clinical value of the nomogram by quantifying the net benefit at different threshold probabilities [21, 22]. In addition, we constructed a clinical impact curve to assess the clinical impact of the risk prediction model alongside the decision curve analysis [23]. Finally, the breast cancer cohort of the SEER database was used for external validation.
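Two of the evaluation measures named above are easy to state in code. Below is a minimal pure-Python sketch of the C-index (which, for a binary outcome, equals the ROC AUC) and of the Hosmer-Lemeshow statistic over equal-sized risk groups; this is an illustration, not the rms/R implementation the paper actually used.

```python
def c_index(y_true, y_score):
    """Concordance index for a binary outcome: the fraction of
    (event, non-event) pairs in which the event case received the higher
    predicted risk; ties count 0.5. Equals the ROC AUC for binary outcomes."""
    concordant, total = 0.0, 0
    for yi, si in zip(y_true, y_score):
        for yj, sj in zip(y_true, y_score):
            if yi == 1 and yj == 0:
                total += 1
                if si > sj:
                    concordant += 1.0
                elif si == sj:
                    concordant += 0.5
    return concordant / total

def hosmer_lemeshow_stat(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow chi-square statistic over risk-sorted groups.
    (The P value would come from a chi-square distribution with
    n_groups - 2 degrees of freedom; omitted here to stay dependency-free.)"""
    data = sorted(zip(y_prob, y_true))
    size = len(data) / n_groups
    stat = 0.0
    for g in range(n_groups):
        chunk = data[int(g * size):int((g + 1) * size)]
        if not chunk:
            continue
        n = len(chunk)
        observed = sum(y for _, y in chunk)   # observed events in the group
        expected = sum(p for p, _ in chunk)   # expected events in the group
        p_bar = expected / n
        if 0 < p_bar < 1:
            stat += (observed - expected) ** 2 / (n * p_bar * (1 - p_bar))
    return stat

assert c_index([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1]) == 0.75
assert hosmer_lemeshow_stat([1, 0, 1, 0, 0, 0], [0.8, 0.6, 0.7, 0.3, 0.2, 0.1]) >= 0.0
```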
Results
A total of 817 patients underwent breast-conserving surgery (including axillary lymph node dissection or sentinel lymph node biopsy) and 1023 patients underwent modified radical mastectomy. The distribution of molecular subtypes in the training cohort was 68.8% HR+/HER2-, 14.3% HR+/HER2+, 7.0% HR-/HER2+, and 10.0% triple-negative (TN); in the internal validation cohort it was 63.5% HR+/HER2-, 11.1% HR+/HER2+, 7.5% HR-/HER2+, and 12.0% TN. In the training cohort of 1149 patients with early breast cancer, 51 patients (4.44%) had clinical evidence of liver metastasis after a median follow-up of 71 months. Among them, 4 patients had extensive systemic metastases, 2 had lung metastases, 2 had bone metastases, and 1 had brain metastases. The demographic and clinical characteristics of the patients in both cohorts are summarized in Table 1. In the internal validation cohort, 28 patients (4.05%) developed liver metastasis after a median follow-up of 59 months. There was no significant difference in liver metastasis between the 2 cohorts (P = 0.723). In univariate analysis, HER2 status, tumor size, and lymph node metastasis were associated with liver metastasis in breast cancer (Table 1). Subsequent liver metastasis was not significantly associated with age at diagnosis (P = 0.062), ER status (P = 0.271), PR status (P = 0.361), lymphovascular invasion (P = 0.078), pathologic stage (P = 0.33) or menopausal status at diagnosis (P = 0.881). In addition, we used Cox regression to analyze time-dependent risk factors for liver metastasis. The results in the training cohort suggested that HER2 status (hazard ratio (HR) 1.80, 95% CI 1.01-3.22; P = 0.047), tumor size (HR 4.45, 95% CI 2.41-8.24; P < 0.001) and lymph node metastasis (HR 2.19, 95% CI 1.15-4.15; P = 0.016) were independent risk factors for liver metastasis (Table S1).
The Cox regression of the internal validation cohort yielded broadly similar results, although the effect of HER2 status on the time to liver metastasis did not reach statistical significance.
Clinical and Pathological Features of Patients in the Training and Validation Cohort Based on Liver Metastatic Status.
Development of Nomogram
The logistic regression model identified 3 variables associated with liver metastasis: tumor size (odds ratio (OR) 3.62, 95% CI 1.91 to 6.87; P < 0.001), lymph node metastasis (OR 2.26, 95% CI 1.18 to 4.34; P = 0.014) and HER2 status (OR 1.86, 95% CI 1.02 to 3.41; P = 0.045) (Table 2).
Risk Factors for Liver Metastasis as Determined by Logistic Regression. OR: Odds Ratio.
A nomogram containing these 3 factors was constructed (Figure 1). Good agreement between prediction and observation was displayed by the calibration curve of the training group (Figure S1A, supporting information). The Hosmer-Lemeshow test yielded a P value of 0.853, indicating that the model fit well. The C-index of the nomogram was 0.699.
Nomogram to predict postoperative liver metastasis in patients with nonmetastatic breast cancer. There are 6 rows in the nomogram. The significant variables are displayed in rows 2 through 4, and the points for each variable are read from the scale in row 1. The points of the 3 variables are added to a total, which is marked on the scale in row 5. The risk of liver metastasis is read from the scale in row 6 by drawing a vertical line down from the total points marked in row 5; the total points translate to the probability of liver metastasis.
Validation of Nomogram
The calibration curve showed good agreement between predicted and observed liver metastasis in the internal validation set (Figure S1, supporting information), with a non-significant P value (0.972) from the Hosmer-Lemeshow test. The C-index of the nomogram was 0.814. ROC curves were constructed for the derivation and validation groups (Figure 2). For the training and internal validation groups, the areas under the curve (AUC) were 0.699 and 0.815, respectively, and the corresponding cutoff values were 0.052 and 0.031.
Receiver operating characteristic (ROC) curves for the training (A) and internal validation (B) cohorts; areas under the ROC curve are 0.699 (A) and 0.815 (B). Cutoff values (marked with a symbol) are 0.052 (A) and 0.031 (B).
Furthermore, we used the breast cancer cohort of the SEER database to further validate this nomogram.
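The point scales described in the figure caption follow directly from the logistic model: each predictor's points are proportional to its log-odds coefficient, with the largest coefficient scaled to 100 points, and total points map to a probability through the logistic function. A sketch using the reported odds ratios; the intercept is not reported in the paper, so the value below is purely a placeholder, and treating each predictor as a 0/1 variable is an assumption for illustration.

```python
import math

# Log-odds coefficients recovered from the reported odds ratios (OR = exp(beta)).
BETAS = {
    "tumor_size": math.log(3.62),
    "lymph_node_metastasis": math.log(2.26),
    "her2_positive": math.log(1.86),
}
INTERCEPT = -4.0  # hypothetical: the paper does not report the intercept

# Nomogram convention: points proportional to the coefficient, with the
# largest coefficient scaled to 100 points.
BETA_MAX = max(BETAS.values())
POINTS = {name: 100.0 * b / BETA_MAX for name, b in BETAS.items()}

def predicted_risk(active_factors):
    """Probability of liver metastasis for a set of active risk factors."""
    lp = INTERCEPT + sum(BETAS[name] for name in active_factors)
    return 1.0 / (1.0 + math.exp(-lp))

assert round(POINTS["tumor_size"]) == 100          # largest coefficient -> 100 points
assert POINTS["her2_positive"] < POINTS["lymph_node_metastasis"] < 100.0
assert predicted_risk(BETAS) > predicted_risk([])  # more risk factors, higher risk
```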
The SEER results further confirmed the findings in the training cohort, supporting HER2 status (OR 1.88, 95% CI 1.74-2.04; P < 0.001), tumor size (OR 6.72, 95% CI 6.12-7.38; P < 0.001) and lymph node metastasis (OR 3.11, 95% CI 2.87-3.37; P < 0.001) as risk factors for breast cancer liver metastasis. The C-index of the nomogram in the SEER cohort was 0.791. The calibration curve showed good agreement between predicted and observed liver metastasis in the SEER cohort.
Clinical Use
The decision curve analysis of the nomogram is shown in Figure 3A.
This analysis showed that when the threshold probability was in the range of 0-0.7, using the nomogram to predict liver metastasis added more net benefit than either the treat-all or the treat-none strategy. Within this range, the net benefit curves overlapped at several points.
Decision curve analysis (A) and clinical impact curve analysis (B) of the validation cohort. (A): The y-axis represents net benefit and the x-axis the threshold probability. “All” assumes that every patient has liver metastasis, and “none” assumes that no patient has liver metastasis; within the threshold range of 0 to 0.7, the nomogram yields more net benefit than either strategy. (B): The red curve (number high risk) indicates the number of patients classified as positive (high risk) by the model at each threshold probability; the blue curve (number high risk with events) is the number of true positives at each threshold probability.
The clinical impact curve analysis of the nomogram is shown in Figure 3B.
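The net benefit plotted in a decision curve is, at threshold probability p_t, TP/n - FP/n * p_t/(1 - p_t); the treat-all strategy counts every patient as test-positive, and treat-none is zero by definition. A minimal sketch with toy data (variable names are illustrative):

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of a risk model at threshold probability p_t:
    TP/n - FP/n * p_t / (1 - p_t)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_treat_all(y_true, threshold):
    """Net benefit of the treat-all strategy (every patient test-positive)."""
    prevalence = sum(y_true) / len(y_true)
    return prevalence - (1 - prevalence) * threshold / (1 - threshold)

# Toy data (illustrative): one event, assigned a high predicted risk.
y = [1, 0, 0, 0]
p = [0.9, 0.8, 0.1, 0.1]
assert net_benefit(y, p, 0.5) > net_benefit_treat_all(y, 0.5)
assert net_benefit_treat_all(y, 0.5) == -0.5  # 0.25 - 0.75 at p_t = 0.5
```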
Conclusion
In summary, we developed a nomogram that is a powerful tool for predicting subsequent liver metastasis in nonmetastatic breast cancer patients. The model will help identify patients at high risk of liver metastasis, for whom corresponding preventive trials could be designed. Further research is needed to determine whether it can be applied to other subgroups of patients.
[ "Statistical Analysis", "Development of Nomogram", "Validation of Nomogram", "Clinical Use" ]
[ "Statistical analysis was performed using SPSS® version 21.0 (IBM, Armonk, New York, USA) and statistical software package R version 3.5.1 (http://www.r-project.org). The clinical characteristics of the patients were summarized mainly as categorical variables. Comparisons between groups were performed using a chi-square test for categorical variables. Receiver operating characteristic (ROC) curves were built using SPSS® software. We first used univariate and multivariate analysis to determine factors that predict postoperative liver metastasis. Then, we identified the factors predicting postoperative liver metastasis by using a binary logistic regression model. Subsequently, we constructed a nomogram for predicting postoperative liver metastasis from the results of multivariate analysis using the rms package in R. Internal validation of the nomogram was performed by bootstrapping with 1000 resamples. Later, we evaluated the predictive performance of the established nomogram by the C-index, and the calibration curve was plotted to assess the calibration of the model, tested by the Hosmer-Lemeshow test [20]. Decision curve analysis in the internal validation was used to determine the clinical value of the nomogram by quantifying the net benefit when considering different threshold probabilities [21, 22]. Besides, we constructed a clinical impact curve to assess the clinical impact of risk prediction models with decision curve analysis [23]. 
Finally, the breast cancer cohort of the SEER database was used for external validation.", "The logistic regression model identified 3 variables associated with liver metastasis: tumor size (odds ratio (OR) 3.62, 95% CI 1.91 to 6.87; P < 0.001), lymph node metastasis (OR 2.26, 95% CI 1.18 to 4.34; P = 0.014) and HER2 status (OR 1.86, 95% CI 1.02 to 3.41; P = 0.045) (Table 2).\nRisk Factors for Liver Metastasis as Determined by Logistic Regression.\nOR: Odds Ratio.\nA nomogram containing these 3 factors was constructed (Figure 1). Good agreement between prediction and observation was displayed by the calibration curve of the training group (Figure S1A, supporting information). The Hosmer-Lemeshow test yielded a P value of 0.853, indicating that the model fit well. The C-index of the nomogram was 0.699.\nNomogram to predict postoperative liver metastasis in patients with nonmetastatic breast cancer. There are 6 rows in the nomogram. The significant variables are displayed in rows 2 through 4, and the points for each variable are read from the scale in row 1. The points of the 3 variables are added to a total, which is marked on the scale in row 5. The risk of liver metastasis is read from the scale in row 6 by drawing a vertical line down from the total points marked in row 5; the total points translate to the probability of liver metastasis.", "The calibration curve showed good agreement between predicted and observed liver metastasis in the internal validation set (Figure S1, supporting information), with a non-significant P value (0.972) from the Hosmer-Lemeshow test. The C-index of the nomogram was 0.814. ROC curves were constructed for the derivation and validation groups (Figure 2). 
For the training and internal validation groups, the areas under the curve (AUC) were 0.699 and 0.815, respectively, and the corresponding cutoff values were 0.052 and 0.031.\nReceiver operating characteristic (ROC) curves for the training (A) and internal validation (B) cohorts; areas under the ROC curve are 0.699 (A) and 0.815 (B). Cutoff values (marked with a symbol) are 0.052 (A) and 0.031 (B).\nFurthermore, we used the breast cancer cohort of the SEER database to further validate this nomogram. The SEER results further confirmed the findings in the training cohort, supporting HER2 status (odds ratio (OR) 1.88, 95% CI 1.74-2.04; P < 0.001), tumor size (OR 6.72, 95% CI 6.12-7.38; P < 0.001) and lymph node metastasis (OR 3.11, 95% CI 2.87-3.37; P < 0.001) as risk factors for breast cancer liver metastasis. The C-index of the nomogram in the SEER cohort was 0.791. The calibration curve showed good agreement between predicted and observed liver metastasis in the SEER cohort.", "The decision curve analysis of the nomogram is shown in Figure 3A. This analysis showed that when the threshold probability was in the range of 0-0.7, using the nomogram to predict liver metastasis added more net benefit than either the treat-all or the treat-none strategy. Within this range, the net benefit curves overlapped at several points.\nDecision curve analysis (A) and clinical impact curve analysis (B) of the validation cohort. (A): The y-axis represents net benefit. The x-axis shows the threshold probability. “All” assumes that every patient has liver metastasis, and “none” assumes that no patient has liver metastasis; within the threshold range of 0 to 0.7, the nomogram yields more net benefit than either strategy. 
(B): The red curve (number high risk) indicates the number of patients classified as positive (high risk) by the model at each threshold probability; the blue curve (number high risk with events) is the number of true positives at each threshold probability.\nThe clinical impact curve analysis of the nomogram is shown in Figure 3B." ]
[ null, null, null, null ]
[ "Introduction", "Methods", "Statistical Analysis", "Results", "Development of Nomogram", "Validation of Nomogram", "Clinical Use", "Discussion", "Conclusion", "Supplemental Material" ]
[ "Breast cancer is one of the leading causes of cancer-related deaths among women worldwide.1 Although only 6%-10% of breast cancer patients are diagnosed with metastatic disease, approximately 30% of women diagnosed with nonmetastatic disease will relapse after treatment.2,3 Breast cancer mainly metastasizes to bone, lung, liver and brain through the circulation; among these sites, the liver is the third most common site of distant metastasis.4 Liver metastasis is reported to be responsible for approximately 20% to 35% of metastatic breast cancer deaths.5-7 Studies have shown that breast cancer patients with liver metastasis have a poor prognosis and short median survival. The median survival time of those patients without any treatment was about 4-8 months8 compared to 13-31 months2,3,9,10 after systemic treatment. Systemic therapy remains the backbone of treatment for breast cancer patients with liver metastasis, although surgery, radiofrequency ablation, or radiotherapy can also be used for liver metastasis.11,12\n\nCurrently, the occurrence of liver metastasis from breast cancer cannot be accurately predicted. Nomograms built from known prognostic factors are increasingly and widely used to predict specific outcomes.13,14 We hypothesized that a nomogram could be constructed by combining selected clinical and pathological variables in a multivariate model to predict the likelihood of postoperative liver metastasis in early breast cancer patients. 
This nomogram can be used to identify subgroups of high-risk patients, to develop targeted screening and new preventive treatment strategies for early-stage breast cancer patients, and even to improve quality of life and survival outcomes.15,16 Therefore, we constructed and validated such a nomogram using retrospective study data from 2 breast cancer patient populations.", "The electronic database of the department of breast cancer at Sun Yat-sen University Cancer Center was searched and retrospectively reviewed from January 2008 to December 2010. Information from 1840 consecutive patients with nonmetastatic breast cancer was collected to serve as the basis for this study. The inclusion criteria for this study were: (1) female; (2) stage I-III breast cancer; (3) underwent mastectomy or breast-conserving surgery; (4) confirmed by pathological diagnosis as invasive carcinoma. Exclusion criteria were: (1) male; (2) distant metastasis at initial diagnosis; (3) incomplete information, such as missing TNM staging or pathological diagnosis; (4) other primary malignancies, whether diagnosed before the breast cancer or during follow-up. Tumor staging was based on the eighth edition of the TNM malignant tumor classification. The primary tumor and metastases were confirmed by pathology. Tumor size was classified as follows: T1, maximum diameter ≤ 20 mm; T2, maximum diameter > 20 mm but ≤ 50 mm; T3, maximum diameter > 50 mm; T4, direct invasion of the chest wall or skin regardless of tumor size. The diagnostic criteria for breast cancer liver metastasis (BCLM) were as follows: (1) BCLM was confirmed by histopathological examination; (2) when a patient was unable to undergo pathological examination of the liver metastases, BCLM was diagnosed mainly on the basis of clinical manifestations and imaging examinations.
For patients without pathological examination, breast oncologists and imaging physicians jointly confirmed the diagnosis of BCLM based on ultrasound, CT, and MRI.\nConsecutive patients from January 2008 to December 2009 were included in the training cohort, while consecutive patients from January 2010 to December 2010 were included in the internal validation cohort from the same institutional database using the same criteria as the derivation cohort.\nThe database includes the patients’ treatment and pathological variables (age, menopausal status, tumor size, lymph node metastasis, histological grade, ER, PR, HER2, lymphovascular invasion, etc.). The criteria for HER2 positivity were IHC 3+ (defined as > 30% of invasive tumor cells with uniform strong membrane staining) or FISH amplification (defined as a HER2 to CEP17 ratio > 2.2, or an average HER2 gene copy number greater than 6 signals per nucleus when no internal reference probe is used in the detection system).\nThe retrospectively maintained database of early breast cancer patients, as well as this study, was approved by the Institutional Review Board of the Sun Yat-sen University Cancer Center.\nStatistical Analysis Statistical analysis was performed using SPSS® version 21.0 (IBM, Armonk, New York, USA) and the statistical software package R version 3.5.1 (http://www.r-project). The clinical characteristics of the patients are summarized mainly as categorical variables. Comparisons between groups were performed using the chi-square test for categorical variables. Receiver operating characteristic (ROC) curves were built using SPSS® software. We first used univariate and multivariate analyses to determine factors that predict postoperative liver metastasis. We then identified the factors predicting postoperative liver metastasis using a binary logistic regression model.
Subsequently, we constructed a nomogram for predicting postoperative liver metastasis from the results of the multivariate analysis using the rms package in R. Internal validation of the nomogram was performed with bootstrapping (1000 resamples). We then evaluated the predictive performance of the established nomogram by the C-index, and a calibration curve was plotted to assess model calibration, tested with the Hosmer-Lemeshow test [20]. Decision curve analysis in the internal validation was used to determine the clinical value of the nomogram by quantifying the net benefit at different threshold probabilities [21, 22]. In addition, we constructed a clinical impact curve to assess the clinical impact of the risk prediction model alongside the decision curve analysis [23]. Finally, the breast cancer cohort of the SEER database was used for external validation.", "Statistical analysis was performed using SPSS® version 21.0 (IBM, Armonk, New York, USA) and the statistical software package R version 3.5.1 (http://www.r-project). The clinical characteristics of the patients are summarized mainly as categorical variables. Comparisons between groups were performed using the chi-square test for categorical variables. Receiver operating characteristic (ROC) curves were built using SPSS® software. We first used univariate and multivariate analyses to determine factors that predict postoperative liver metastasis. We then identified the factors predicting postoperative liver metastasis using a binary logistic regression model. Subsequently, we constructed a nomogram for predicting postoperative liver metastasis from the results of the multivariate analysis using the rms package in R. Internal validation of the nomogram was performed with bootstrapping (1000 resamples). We then evaluated the predictive performance of the established nomogram by the C-index, and a calibration curve was plotted to assess model calibration, tested with the Hosmer-Lemeshow test [20]. Decision curve analysis in the internal validation was used to determine the clinical value of the nomogram by quantifying the net benefit at different threshold probabilities [21, 22].
Besides, we constructed a clinical impact curve to assess the clinical impact of the risk prediction model alongside the decision curve analysis [23]. Finally, the breast cancer cohort of the SEER database was used for external validation.", "A total of 817 patients underwent breast-conserving surgery (including axillary lymph node dissection or sentinel lymph node biopsy); 1023 patients underwent modified radical mastectomy. The distribution of molecular subtypes in the training set was 68.8% HR+/HER2-, 14.3% HR+/HER2+, 7.0% HR-/HER2+, and 10.0% TN. The distribution of molecular subtypes in the validation set was 63.5% HR+/HER2-, 11.1% HR+/HER2+, 7.5% HR-/HER2+, and 12.0% TN. Our data analysis showed that in the training group of 1,149 patients with early breast cancer, 51 patients (4.44%) had clinical evidence of liver metastasis after a median follow-up of 71 months. Of these, 4 patients had extensive systemic metastases, 2 had lung metastases, 2 had bone metastases, and 1 had brain metastases. The demographic and clinical characteristics of the patients in both cohorts are summarized in Table 1. In the internal validation cohort, 28 patients (4.05%) developed liver metastasis after a median follow-up of 59 months. There was no significant difference in liver metastasis between the 2 cohorts (P = 0.723). In univariate analysis, HER2 status, tumor size, and lymph node metastasis were associated with liver metastasis in breast cancer (Table 1). Subsequent liver metastasis was not significantly associated with age at diagnosis (P = 0.062), ER status (P = 0.271), PR status (P = 0.361), lymphovascular invasion (P = 0.078), pathologic stage (P = 0.33) or menopause status at diagnosis (P = 0.881). In addition, we also used Cox regression to analyze the time-dependent risk factors for liver metastasis.
The results of the training cohort suggested that HER2 status (hazard ratio (HR) 1.80, 95% CI (1.01-3.22); P = 0.047), tumor size (HR 4.45, 95% CI (2.41-8.24); P < 0.001) and lymph node metastasis (HR 2.19, 95% CI (1.15-4.15); P = 0.016) were independent risk factors for liver metastasis (Table S1). Cox regression in the internal validation cohort yielded broadly similar results, although the effect of HER2 status on the time to liver metastasis recurrence did not reach statistical significance.\nClinical and Pathological Features of Patients in the Training and Validation Cohort Based on Liver Metastatic Status.\nDevelopment of Nomogram The logistic regression model identified 3 variables associated with liver metastasis: tumor size (odds ratio (OR) 3.62, 95% CI 1.91 to 6.87; P < 0.001), lymph node metastasis (OR 2.26, 95% CI 1.18 to 4.34; P = 0.014) and HER2 status (OR 1.86, 95% CI 1.02 to 3.41; P = 0.045) (Table 2).\nRisk Factors for Liver Metastasis as Determined by Logistic Regression.\nOR: Odds Ratio.\nA nomogram containing these 3 factors was constructed (Figure 1). Good agreement between prediction and observation was displayed by the calibration curve of the training group (Figure S1A, supporting information). The Hosmer-Lemeshow test yielded a P value of 0.853, indicating that the model fit well. The C-index of the predictive nomogram was 0.699.\nNomogram to predict postoperative liver metastasis in patients with nonmetastatic breast cancer. There are 6 rows in the nomogram. The significant variables are displayed in lines 2 through 4, and the points for each variable are read from the scale of line 1. Add the points of the 3 variables to obtain the total and mark it on the scale of line 5.
The risk of liver metastasis is read from the scale of line 6 by drawing a vertical line from the total point marked in line 5, and the points are translated to the probability of liver metastasis.\nValidation of Nomogram The calibration curve shows good agreement between predicted and observed liver metastasis in the internal validation set (Figure S1, supporting information), consistent with the non-significant P value (0.972) produced by the Hosmer-Lemeshow test. The C-index of the predictive nomogram was 0.814. ROC curves were constructed for the derivation and validation groups (Figure 2).
For the training and validation groups, the area under the curve (AUC) was 0.699 and 0.815, respectively, and the corresponding cutoff values were 0.052 and 0.031.\nThe receiver operating characteristic (ROC) curves for the training (A) and internal validation (B) cohorts; areas under the ROC curve are 0.699 (A) and 0.815 (B). Cut-off values (marked with a symbol) are 0.052 (A) and 0.031 (B).\nFurthermore, we used the breast cancer cohort of the SEER database to further validate this nomogram. The results of the SEER database confirmed those of the training cohort, also supporting that HER2 status (odds ratio (OR) 1.88, 95% CI (1.74-2.04); P < 0.001), tumor size (OR 6.72, 95% CI (6.12-7.38); P < 0.001) and lymph node metastasis (OR 3.11, 95% CI (2.87-3.37); P < 0.001) were risk factors for breast cancer liver metastasis. The C-index of the SEER cohort nomogram was 0.791. The calibration curve showed good agreement between predicted and observed liver metastasis in the SEER cohort.\nClinical Use The decision curve analysis of the nomogram is shown in Figure 3A. This analysis showed that when the threshold probability was in the range of 0-0.7, using the nomogram to predict liver metastasis added more net benefit than either the treat-all or treat-none strategy. Within this range, the net benefit curves based on the nomogram overlapped at several points.\nDecision curve analysis (A) and clinical impact curve analysis (B) of the validation cohort. (A): The y-axis represents net benefit. The x-axis shows the threshold probability. “All” assumes that all patients have liver metastasis, and “none” assumes that no patient has liver metastasis. When the threshold probability is in the range of 0 to 0.7, using the nomogram to predict liver metastasis adds more net benefit than the treat-all or treat-none strategy. (B): The red curve (Number high risk) indicates the number of patients who are classified as positive (high risk) by the simple model at each threshold probability; the blue curve (Number high risk with events) is the number of true positives at each threshold probability.\nThe clinical impact curve analysis of the nomogram is shown in Figure 3B.
The red curve (Number high risk) indicated the number of patients who were classified as positive (high risk) by the simple model at each threshold probability; the blue curve (Number high risk with events) was the number of true positives at each threshold probability.", "The logistic regression model identified 3 variables associated with liver metastasis: tumor size (odds ratio (OR) 3.62, 95% CI 1.91 to 6.87; P < 0.001), lymph node metastasis (OR 2.26, 95% CI 1.18 to 4.34; P = 0.014) and HER2 status (OR 1.86, 95% CI 1.02 to 3.41; P = 0.045) (Table 2).\nRisk Factors for Liver Metastasis as Determined by Logistic Regression.\nOR: Odds Ratio.\nA nomogram containing these 3 factors was constructed (Figure 1). Good agreement between prediction and observation was displayed by the calibration curve of the training group (Figure S1A, supporting information). The Hosmer-Lemeshow test yielded a P value of 0.853, indicating that the model fit well. The C-index of the predictive nomogram was 0.699.\nNomogram to predict postoperative liver metastasis in patients with nonmetastatic breast cancer. There are 6 rows in the nomogram. The significant variables are displayed in lines 2 through 4, and the points for each variable are read from the scale of line 1. Add the points of the 3 variables to obtain the total and mark it on the scale of line 5. The risk of liver metastasis is read from the scale of line 6 by drawing a vertical line from the total point marked in line 5, and the points are translated to the probability of liver metastasis.", "The calibration curve shows good agreement between predicted and observed liver metastasis in the internal validation set (Figure S1, supporting information), consistent with the non-significant P value (0.972) produced by the Hosmer-Lemeshow test. The C-index of the predictive nomogram was 0.814. ROC curves were constructed for the derivation and validation groups (Figure 2).
For the training and validation groups, the area under the curve (AUC) was 0.699 and 0.815, respectively, and the corresponding cutoff values were 0.052 and 0.031.\nThe receiver operating characteristic (ROC) curves for the training (A) and internal validation (B) cohorts; areas under the ROC curve are 0.699 (A) and 0.815 (B). Cut-off values (marked with a symbol) are 0.052 (A) and 0.031 (B).\nFurthermore, we used the breast cancer cohort of the SEER database to further validate this nomogram. The results of the SEER database confirmed those of the training cohort, also supporting that HER2 status (odds ratio (OR) 1.88, 95% CI (1.74-2.04); P < 0.001), tumor size (OR 6.72, 95% CI (6.12-7.38); P < 0.001) and lymph node metastasis (OR 3.11, 95% CI (2.87-3.37); P < 0.001) were risk factors for breast cancer liver metastasis. The C-index of the SEER cohort nomogram was 0.791. The calibration curve showed good agreement between predicted and observed liver metastasis in the SEER cohort.", "The decision curve analysis of the nomogram is shown in Figure 3A. This analysis showed that when the threshold probability was in the range of 0-0.7, using the nomogram to predict liver metastasis added more net benefit than either the treat-all or treat-none strategy. Within this range, the net benefit curves based on the nomogram overlapped at several points.\nDecision curve analysis (A) and clinical impact curve analysis (B) of the validation cohort. (A): The y-axis represents net benefit. The x-axis shows the threshold probability. “All” assumes that all patients have liver metastasis, and “none” assumes that no patient has liver metastasis. When the threshold probability is in the range of 0 to 0.7, using the nomogram to predict liver metastasis adds more net benefit than the treat-all or treat-none strategy.
(B): The red curve (Number high risk) indicates the number of patients who are classified as positive (high risk) by the simple model at each threshold probability; the blue curve (Number high risk with events) is the number of true positives at each threshold probability.\nThe clinical impact curve analysis of the nomogram is shown in Figure 3B. The red curve (Number high risk) indicated the number of patients classified as positive (high risk) by the simple model at each threshold probability; the blue curve (Number high risk with events) was the number of true positives at each threshold probability.", "We have developed and validated a nomogram that predicts the development of postoperative liver metastasis in early breast cancer patients. The nomogram included 3 items, tumor size, lymph node metastasis, and HER2 status, and showed good agreement between the predicted and actual probabilities in the derivation and validation cohorts.\nLiver metastasis is a growing problem in the treatment of breast cancer.17 Liver metastasis severely affects patients’ quality of life and prognosis. Therefore, identifying breast cancer patients at higher risk of liver metastasis will enrich the population that should be treated more specifically and thereby improve clinical outcomes in these patients.18\n\nBased on this nomogram, consider a hypothetical breast cancer patient with a T3-4 tumor, lymph node metastasis, and HER2-positive status: her total score would be 205, as shown in Figure 1. Using the nomogram, this patient would be expected to have a 20% probability of developing liver metastasis. Therefore, patients with the above characteristics would be expected to benefit from liver metastasis screening.\nIn contrast, a hypothetical patient with a T1-2 tumor, no lymph node metastasis, and HER2-negative status would have a total score of 0, as shown in Figure 1.
Using the nomogram, the predicted chance for this patient to develop liver metastasis is relatively low (less than 5%).\nThere is currently no specific preventive treatment to reduce the incidence of liver metastasis in breast cancer. However, because local liver treatments (surgery, intrahepatic local chemotherapy, etc.) are available, strengthened surveillance may benefit breast cancer patients at high risk of liver metastasis. We are not the only ones trying to establish a nomogram for breast cancer liver metastasis. Lin and colleagues constructed a nomogram using variables such as sex, histology type, N stage, grade, age, ER, PR, and HER2 status.19 The problem with their nomogram is that the patients they enrolled had de novo liver metastasis, meaning that liver metastasis and breast cancer were diagnosed simultaneously; thus their model lacks sufficient predictive value. The patients included in this study were those who had liver recurrences after early breast cancer treatment. Thus, our nomogram has superior predictive value compared with theirs. Additionally, when T1 and T2 are divided into 2 groups, the ROC curve and the calibration curve of this new nomogram are almost the same as the previous results (Figure S2, supporting information).\nIt is worth noting that this study also has some limitations. First, this nomogram was constructed using retrospective data, so prospective studies should be performed for further validation. Second, we did not evaluate the impact of adjuvant chemotherapy, endocrine therapy, and targeted therapy due to the unavailability of data. Third, we also did not evaluate the effect of the eighth edition of TNM staging on liver metastasis due to the unavailability of data. Finally, the AUC in the ROC analysis of the training cohort is relatively low.", "In summary, we developed a nomogram, which is a powerful tool for predicting subsequent liver metastasis in nonmetastatic breast cancer patients.
Our model will help identify patients at high risk of liver metastasis, allowing us to design corresponding preventive trials for them. Further research is needed to determine whether it can be applied to other subgroups of patients.", "Supplemental Material, sj-jpg-1-ccx-10.1177_1073274821997418 for Predictive Nomogram of Subsequent Liver Metastasis After Mastectomy or Breast-Conserving Surgery in Patients With Nonmetastatic Breast Cancer by Anli Yang, Weikai Xiao, Shaoquan Zheng, Yanan Kong, Yutian Zou, Mingyue Li, Feng Ye and Xiaoming Xie in Cancer Control\nSupplemental Material, sj-pdf-1-ccx-10.1177_1073274821997418 for Predictive Nomogram of Subsequent Liver Metastasis After Mastectomy or Breast-Conserving Surgery in Patients With Nonmetastatic Breast Cancer by Anli Yang, Weikai Xiao, Shaoquan Zheng, Yanan Kong, Yutian Zou, Mingyue Li, Feng Ye and Xiaoming Xie in Cancer Control\nSupplemental Material, sj-tif-1-ccx-10.1177_1073274821997418 for Predictive Nomogram of Subsequent Liver Metastasis After Mastectomy or Breast-Conserving Surgery in Patients With Nonmetastatic Breast Cancer by Anli Yang, Weikai Xiao, Shaoquan Zheng, Yanan Kong, Yutian Zou, Mingyue Li, Feng Ye and Xiaoming Xie in Cancer Control" ]
[ "intro", "methods", null, "results", null, null, null, "discussion", "conclusions", "supplementary-material" ]
[ "breast cancer", "postoperative liver metastasis", "nomogram", "prediction", "validation" ]
Introduction: Breast cancer is one of the leading causes of cancer-related deaths among women worldwide.1 Although only 6%-10% of breast cancer patients are diagnosed with metastatic disease, approximately 30% of women diagnosed with nonmetastatic disease will relapse after treatment.2,3 Breast cancer mainly metastasizes to bone, lung, liver and brain through the circulation; among these sites, the liver is the third most common distant metastatic site of breast cancer.4 Liver metastasis is reported to be responsible for approximately 20% to 35% of metastatic breast cancer deaths.5-7 Studies have shown that breast cancer patients with liver metastasis exhibit poor prognosis and short median survival: about 4-8 months without any treatment8 compared to 13-31 months after systemic treatment.2,3,9,10 Regarding treatment, systemic therapy remains the backbone for breast cancer patients with liver metastasis, although surgery, radiofrequency ablation, or radiotherapy can also be used for liver metastases.11,12 Currently, the occurrence of liver metastasis from breast cancer cannot be accurately predicted. Nomograms based on known prognostic factors are increasingly constructed and widely used to predict specific outcomes.13,14 We hypothesized that a nomogram could be constructed by combining selected clinical and pathological variables in a multivariate model to predict the likelihood of postoperative liver metastasis in early breast cancer patients. This nomogram can be used to identify subgroups of high-risk patients, to develop targeted screening and new preventive treatment strategies for early-stage breast cancer patients, and even to improve quality of life and survival outcomes.15,16 Therefore, we constructed and validated such a nomogram using retrospective study data from 2 breast cancer patient populations.
Methods: The electronic database of the department of breast cancer at Sun Yat-sen University Cancer Center was searched and retrospectively reviewed from January 2008 to December 2010. Information from 1840 consecutive patients with nonmetastatic breast cancer was collected to serve as the basis for this study. The inclusion criteria for this study were: (1) female; (2) stage I-III breast cancer; (3) underwent mastectomy or breast-conserving surgery; (4) confirmed by pathological diagnosis as invasive carcinoma. Exclusion criteria were: (1) male; (2) distant metastasis at initial diagnosis; (3) incomplete information, such as missing TNM staging or pathological diagnosis; (4) other primary malignancies, whether diagnosed before the breast cancer or during follow-up. Tumor staging was based on the eighth edition of the TNM malignant tumor classification. The primary tumor and metastases were confirmed by pathology. Tumor size was classified as follows: T1, maximum diameter ≤ 20 mm; T2, maximum diameter > 20 mm but ≤ 50 mm; T3, maximum diameter > 50 mm; T4, direct invasion of the chest wall or skin regardless of tumor size. The diagnostic criteria for breast cancer liver metastasis (BCLM) were as follows: (1) BCLM was confirmed by histopathological examination; (2) when a patient was unable to undergo pathological examination of the liver metastases, BCLM was diagnosed mainly on the basis of clinical manifestations and imaging examinations. For patients without pathological examination, breast oncologists and imaging physicians jointly confirmed the diagnosis of BCLM based on ultrasound, CT, and MRI. Consecutive patients from January 2008 to December 2009 were included in the training cohort, while consecutive patients from January 2010 to December 2010 were included in the internal validation cohort from the same institutional database using the same criteria as the derivation cohort.
The database includes the patients’ treatment and pathological variables (age, menopausal status, tumor size, lymph node metastasis, histological grade, ER, PR, HER2, lymphovascular invasion, etc.). The criteria for HER2 positivity were IHC 3+ (defined as > 30% of invasive tumor cells with uniform strong membrane staining) or FISH amplification (defined as a HER2 to CEP17 ratio > 2.2, or an average HER2 gene copy number greater than 6 signals per nucleus when no internal reference probe is used in the detection system). The retrospectively maintained database of early breast cancer patients, as well as this study, was approved by the Institutional Review Board of the Sun Yat-sen University Cancer Center. Statistical Analysis Statistical analysis was performed using SPSS® version 21.0 (IBM, Armonk, New York, USA) and the statistical software package R version 3.5.1 (http://www.r-project). The clinical characteristics of the patients are summarized mainly as categorical variables. Comparisons between groups were performed using the chi-square test for categorical variables. Receiver operating characteristic (ROC) curves were built using SPSS® software. We first used univariate and multivariate analyses to determine factors that predict postoperative liver metastasis. We then identified the factors predicting postoperative liver metastasis using a binary logistic regression model. Subsequently, we constructed a nomogram for predicting postoperative liver metastasis from the results of the multivariate analysis using the rms package in R. Internal validation of the nomogram was performed with bootstrapping (1000 resamples). We then evaluated the predictive performance of the established nomogram by the C-index, and a calibration curve was plotted to assess model calibration, tested with the Hosmer-Lemeshow test [20].
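The internal validation step (1000 bootstrap resamples, discrimination summarized by the C-index, which for a binary outcome equals the area under the ROC curve) can be sketched in plain Python. The authors used R's rms package, so this is only an illustrative stand-in:

```python
import random

def c_index(probs, outcomes):
    """Concordance index: fraction of (event, non-event) pairs in which
    the event case received the higher predicted probability
    (ties count as 0.5)."""
    pairs = conc = 0
    for pi, yi in zip(probs, outcomes):
        for pj, yj in zip(probs, outcomes):
            if yi == 1 and yj == 0:
                pairs += 1
                conc += 1.0 if pi > pj else 0.5 if pi == pj else 0.0
    return conc / pairs

def bootstrap_c_index(probs, outcomes, n_boot=1000, seed=0):
    """Average C-index over bootstrap resamples, as in the internal
    validation described in the text."""
    rng = random.Random(seed)
    n = len(probs)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [outcomes[i] for i in idx]
        if 0 < sum(ys) < len(ys):  # resample must contain both classes
            stats.append(c_index([probs[i] for i in idx], ys))
    return sum(stats) / len(stats)
```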
Statistical Analysis: Statistical analysis was performed using SPSS® version 21.0 (IBM, Armonk, New York, USA) and R version 3.5.1 (http://www.r-project.org). The clinical characteristics of the patients are summarized mainly as categorical variables, and comparisons between groups were made with the chi-square test. Receiver operating characteristic (ROC) curves were built in SPSS®. We first used univariate and multivariate analyses to identify factors predicting postoperative liver metastasis, with the multivariate analysis based on a binary logistic regression model. We then constructed a nomogram for predicting postoperative liver metastasis from the results of the multivariate analysis using the rms package in R. Internal validation of the nomogram was performed with 1000 bootstrap resamples. The predictive performance of the nomogram was evaluated by the C-index, and a calibration curve was plotted to assess model calibration, tested with the Hosmer-Lemeshow test [20]. Decision curve analysis on the internal validation set was used to determine the clinical value of the nomogram by quantifying the net benefit across different threshold probabilities [21, 22]. In addition, we constructed a clinical impact curve to assess the clinical impact of the risk prediction model alongside the decision curve analysis [23]. Finally, the breast cancer cohort of the SEER database was used for external validation. Results: A total of 817 patients underwent breast-conserving surgery (including axillary lymph node dissection or sentinel lymph node biopsy), and 1023 patients underwent modified radical mastectomy. The distribution of molecular subtypes in the training set was 68.8% HR+/HER2-, 14.3% HR+/HER2+, 7.0% HR-/HER2+, and 10.0% triple-negative (TN).
The distribution of molecular subtypes in the validation cohort was 63.5% HR+/HER2-, 11.1% HR+/HER2+, 7.5% HR-/HER2+, and 12.0% TN. In the training cohort of 1,149 patients with early breast cancer, 51 patients (4.44%) had clinical evidence of liver metastasis after a median follow-up of 71 months. Among them, 4 patients had extensive systemic metastases, 2 had lung metastases, 2 had bone metastases, and 1 had brain metastases. The demographic and clinical characteristics of the patients in both cohorts are summarized in Table 1. In the internal validation cohort, 28 patients (4.05%) developed liver metastasis after a median follow-up of 59 months. There was no significant difference in liver metastasis between the 2 cohorts (P = 0.723). In univariate analysis, HER2 status, tumor size, and lymph node metastasis were associated with liver metastasis (Table 1). Subsequent liver metastasis was not significantly associated with age at diagnosis (P = 0.062), ER status (P = 0.271), PR status (P = 0.361), lymphovascular invasion (P = 0.078), pathologic stage (P = 0.33), or menopausal status at diagnosis (P = 0.881). In addition, we used Cox regression to analyze time-dependent risk factors for liver metastasis. In the training cohort, HER2 status (hazard ratio (HR) 1.80, 95% CI 1.01-3.22; P = 0.047), tumor size (HR 4.45, 95% CI 2.41-8.24; P < 0.001), and lymph node metastasis (HR 2.19, 95% CI 1.15-4.15; P = 0.016) were independent risk factors for liver metastasis (Table S1). Cox regression in the internal validation cohort yielded broadly similar results, although the effect of HER2 status on time to liver metastasis did not reach statistical significance. Clinical and Pathological Features of Patients in the Training and Validation Cohort Based on Liver Metastatic Status.
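The univariate group comparisons above use the chi-square test on 2×2 contingency tables. A stdlib-only sketch (for 1 degree of freedom the tail probability has a closed form via the complementary error function):

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]].

    Returns (statistic, p_value); with 1 degree of freedom,
    P(X^2 > x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, math.erfc(math.sqrt(stat / 2))
```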
Development of Nomogram: The logistic regression model identified 3 variables associated with liver metastasis: tumor size (odds ratio (OR) 3.62, 95% CI 1.91 to 6.87; P < 0.001), lymph node metastasis (OR 2.26, 95% CI 1.18 to 4.34; P = 0.014), and HER2 status (OR 1.86, 95% CI 1.02 to 3.41; P = 0.045) (Table 2). Risk Factors for Liver Metastasis as Determined by Logistic Regression. OR: Odds Ratio. A nomogram containing these 3 factors was constructed (Figure 1). The calibration curve of the training cohort showed good agreement between prediction and observation (Figure S1A, supporting information). The Hosmer-Lemeshow test yielded a P value of 0.853, indicating that the model fit well. The C-index of the nomogram was 0.699. Nomogram to predict postoperative liver metastasis in patients with nonmetastatic breast cancer. There are 6 rows in the nomogram. The significant variables are displayed in rows 2 through 4, and the points for each variable are read from the scale of row 1. The points of the 3 variables are added to a total, which is marked on the scale of row 5. The risk of liver metastasis is read from the scale of row 6 by drawing a vertical line from the total marked in row 5; the total points translate to the probability of liver metastasis. Validation of Nomogram: The calibration curve showed good agreement between predicted and observed liver metastasis in the internal validation set (Figure S1, supporting information), with a non-significant P value (0.972) on the Hosmer-Lemeshow test. The C-index of the nomogram in this cohort was 0.814.
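Calibration in this paper is summarized by the Hosmer-Lemeshow test. A stdlib-only sketch of the usual decile-grouped version (the closed-form chi-square tail used here assumes an even number of degrees of freedom, as with the standard 10 groups):

```python
import math

def hosmer_lemeshow(probs, outcomes, groups=10):
    """Hosmer-Lemeshow goodness of fit: sort cases by predicted
    probability, split into `groups` bins, and compare observed vs
    expected event counts. Returns (statistic, p_value) with
    groups - 2 degrees of freedom (assumed even for the tail formula)."""
    pairs = sorted(zip(probs, outcomes))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        exp = sum(p for p, _ in chunk)   # expected events in the bin
        obs = sum(y for _, y in chunk)   # observed events in the bin
        pbar = exp / len(chunk)
        stat += (obs - exp) ** 2 / (len(chunk) * pbar * (1 - pbar))
    m = (groups - 2) // 2
    x = stat / 2
    # chi-square survival function for 2m degrees of freedom
    p = math.exp(-x) * sum(x ** i / math.factorial(i) for i in range(m))
    return stat, p
```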
ROC curves were constructed for the derivation and validation cohorts (Figure 2). For the training and validation cohorts, the area under the curve (AUC) was 0.699 and 0.815, respectively, with corresponding cutoff values of 0.052 and 0.031. Receiver operating characteristic (ROC) curves for the training (A) and internal validation (B) cohorts; areas under the ROC curve are 0.699 (A) and 0.815 (B). Cutoff values (marked with a symbol) are 0.052 (A) and 0.031 (B). Furthermore, we used the breast cancer cohort of the SEER database to further validate this nomogram. The SEER results confirmed those of the training cohort, again supporting HER2 status (odds ratio (OR) 1.88, 95% CI 1.74-2.04; P < 0.001), tumor size (OR 6.72, 95% CI 6.12-7.38; P < 0.001), and lymph node metastasis (OR 3.11, 95% CI 2.87-3.37; P < 0.001) as risk factors for breast cancer liver metastasis. The C-index of the nomogram in the SEER cohort was 0.791, and the calibration curve showed good agreement between predicted and observed liver metastasis. Clinical Use: The decision curve analysis of the nomogram is shown in Figure 3A. When the threshold probability was in the range of 0-0.7, using the nomogram to predict liver metastasis added more net benefit than either the treat-all or the treat-none strategy. Within this range, the net benefit curves overlapped at several points. Decision curve analysis (A) and clinical impact curve analysis (B) of the validation cohort. (A): The y-axis represents net benefit; the x-axis shows the threshold probability. “All” assumes that every patient has liver metastasis, and “none” assumes that no patient does. If the threshold probability is in the range of 0 to 0.7, predicting with the nomogram yields more net benefit than the treat-all or treat-none strategy.
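Net benefit in decision curve analysis has a standard closed form. A minimal sketch (the authors used R packages for this; the function names here are only illustrative):

```python
def net_benefit(probs, outcomes, threshold):
    """Decision-curve net benefit at a given threshold probability:
    TP/n - FP/n * (pt / (1 - pt)), treating patients whose predicted
    probability is >= threshold as positive."""
    n = len(probs)
    tp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, outcomes) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_treat_all(outcomes, threshold):
    """'Treat-all' strategy: everyone is classified positive, so net
    benefit reduces to prevalence - (1 - prevalence) * threshold odds.
    The 'treat-none' strategy has net benefit 0 at every threshold."""
    prev = sum(outcomes) / len(outcomes)
    return prev - (1 - prev) * threshold / (1 - threshold)
```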
(B): The red curve (number at high risk) indicates the number of patients classified as positive (high risk) by the model at each threshold probability; the blue curve (number at high risk with events) is the number of true positives at each threshold probability. The clinical impact curve analysis of the nomogram is shown in Figure 3B. Discussion: We have developed and validated a nomogram that predicts the development of postoperative liver metastasis in early breast cancer patients. The nomogram includes 3 items, tumor size, lymph node metastasis, and HER2 status, and showed good agreement between predicted and actual probabilities in the derivation and validation cohorts. Liver metastasis is a growing problem in the treatment of breast cancer.17 It severely affects patients’ quality of life and prognosis. Therefore, identifying breast cancer patients at higher risk of liver metastasis will enrich the population who should be treated more specifically and thereby improve clinical outcomes in these patients.18 Based on this nomogram, a hypothetical breast cancer patient with a T3-4 tumor, lymph node metastasis, and a HER2-positive tumor has a total score of 205, as shown in Figure 1; such a patient is expected to have a 20% probability of developing liver metastasis. Patients with these characteristics are therefore expected to benefit from liver metastasis screening. In contrast, a hypothetical patient with a T1-2 tumor, no lymph node metastasis, and HER2-negative status has a total score of 0; the predicted probability of liver metastasis for this patient is relatively low (less than 5%).
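The point-and-sum procedure of a nomogram is equivalent to evaluating the underlying logistic model. In this sketch the coefficients are the natural logs of the published odds ratios, while the intercept is a hypothetical value chosen only for illustration, so the resulting points and probabilities approximate, but do not exactly reproduce, the paper's worked example:

```python
import math

# Log-odds coefficients taken from the published odds ratios
COEF = {"tumor_size_T3_4": math.log(3.62),
        "node_positive":   math.log(2.26),
        "her2_positive":   math.log(1.86)}
INTERCEPT = -3.4  # hypothetical baseline log-odds, for illustration only

def points(patient, scale=100):
    """Nomogram points: each variable's contribution rescaled so the
    largest single contribution equals `scale` (the row-1 Points axis)."""
    biggest = max(COEF.values())
    return sum(scale * COEF[k] / biggest for k, v in patient.items() if v)

def probability(patient):
    """Predicted probability of liver metastasis from the logistic model."""
    lp = INTERCEPT + sum(COEF[k] for k, v in patient.items() if v)
    return 1 / (1 + math.exp(-lp))
```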
There is currently no specific preventive treatment to reduce the incidence of liver metastasis in breast cancer. However, given the availability of local liver treatments (surgery, intrahepatic local chemotherapy, etc.), strengthened surveillance may benefit patients at high risk of metastasis. We are not the only group to attempt a nomogram for breast cancer liver metastasis. Lin and colleagues constructed a nomogram using variables such as sex, histology type, N stage, grade, age, and ER, PR, and HER2 status.19 The limitation of their nomogram is that the patients they enrolled had de novo liver metastasis, meaning that liver metastasis and breast cancer were diagnosed simultaneously, so their model has limited predictive value. The patients included in our study were those who developed liver recurrence after early breast cancer treatment; our nomogram therefore has superior predictive value. Additionally, when T1 and T2 are divided into 2 groups, the ROC curve and calibration curve of the resulting nomogram are almost the same as the previous results (Figure S2, supporting information). It is worth noting that this study has some limitations. First, the nomogram was constructed from retrospective data, so prospective studies are needed for further validation. Second, we did not evaluate the impact of adjuvant chemotherapy, endocrine therapy, or targeted therapy because these data were unavailable. Third, we also did not evaluate the effect of the eighth edition of TNM staging on liver metastasis due to the unavailability of data. Finally, the AUC in the ROC analysis of the training cohort is relatively low. Conclusion: In summary, we developed a nomogram that is a powerful tool for predicting subsequent liver metastasis in nonmetastatic breast cancer patients. Our model will help identify patients at high risk of liver metastasis, for whom preventive trials could be designed accordingly.
Further research is needed to determine whether it can be applied to other subgroups of patients. Supplemental Material: Supplemental material (sj-jpg-1, sj-pdf-1, and sj-tif-1, ccx-10.1177_1073274821997418) for Predictive Nomogram of Subsequent Liver Metastasis After Mastectomy or Breast-Conserving Surgery in Patients With Nonmetastatic Breast Cancer by Anli Yang, Weikai Xiao, Shaoquan Zheng, Yanan Kong, Yutian Zou, Mingyue Li, Feng Ye and Xiaoming Xie in Cancer Control is available online.
Background: Metastasis accounts for the majority of deaths in patients with breast cancer, and liver metastasis is reported to be common in these patients. The purpose of this study was to construct a nomogram to predict the likelihood of subsequent liver metastasis in patients with nonmetastatic breast cancer, so that high-risk patient populations can receive preventive treatment and monitoring. Methods: A total of 1840 patients with stage I-III breast cancer were retrospectively included and analyzed. A nomogram was constructed to predict liver metastasis based on multivariate logistic regression analysis. The SEER database was used for external validation. The C-index, calibration curve, and decision curve analysis were used to evaluate the predictive performance of the model. Results: The nomogram included 3 variables related to liver metastasis: HER2 status (odds ratio (OR) 1.86, 95% CI 1.02 to 3.41; P = 0.045), tumor size (OR 3.62, 1.91 to 6.87; P < 0.001), and lymph node metastasis (OR 2.26, 1.18 to 4.34; P = 0.014). The C-indices of the training cohort, internal validation cohort, and external validation cohort were 0.699, 0.814, and 0.791, respectively. The nomogram was well calibrated, with no statistical difference between the predicted and observed probabilities. Conclusions: We have developed and validated a robust tool to predict subsequent liver metastasis in patients with nonmetastatic breast cancer. Distinguishing a population of patients at high risk of liver metastasis will facilitate preventive treatment or monitoring of liver metastasis.
Introduction: Breast cancer is one of the leading causes of cancer-related deaths among women worldwide.1 Although only 6%-10% of breast cancer patients are diagnosed with metastatic disease, approximately 30% of women diagnosed with nonmetastatic disease will relapse after treatment.2,3 Breast cancer mainly metastasizes to bone, lung, liver, and brain through the circulation; among these, the liver is the third most common distant metastatic site.4 Liver metastasis is reported to be responsible for approximately 20% to 35% of metastatic breast cancer deaths.5-7 Studies have shown that breast cancer patients with liver metastasis exhibit poor prognosis and short median survival: about 4-8 months without any treatment8 compared with 13-31 months after systemic treatment.2,3,9,10 Systemic therapy remains the backbone of treatment for breast cancer with liver metastasis, although surgery, radiofrequency ablation, or radiotherapy can be used for liver metastases.11,12 Currently, the occurrence of liver metastasis from breast cancer cannot be accurately predicted. Nomograms based on known prognostic factors are increasingly constructed and widely used to predict specific outcomes.13,14 We hypothesized that a nomogram could be constructed by combining selected clinical and pathological variables in a multivariate model to predict the likelihood of postoperative liver metastasis in early breast cancer patients. Such a nomogram could be used to identify subgroups of high-risk patients, to develop targeted screening and new preventive treatment strategies for early-stage breast cancer patients, and even to improve quality of life and survival outcomes.15,16 We therefore constructed and validated such a nomogram using retrospective data from 2 breast cancer patient populations.
5,469
283
[ 262, 266, 293, 290 ]
10
[ "metastasis", "liver", "liver metastasis", "nomogram", "curve", "breast", "cancer", "patients", "breast cancer", "risk" ]
[ "liver metastasis relatively", "liver metastasis cohorts", "predict liver metastasis", "metastasis breast cancer", "metastatic breast cancer" ]
[CONTENT] breast cancer | postoperative liver metastasis | nomogram | prediction | validation [SUMMARY]
[CONTENT] breast cancer | postoperative liver metastasis | nomogram | prediction | validation [SUMMARY]
[CONTENT] breast cancer | postoperative liver metastasis | nomogram | prediction | validation [SUMMARY]
[CONTENT] breast cancer | postoperative liver metastasis | nomogram | prediction | validation [SUMMARY]
[CONTENT] breast cancer | postoperative liver metastasis | nomogram | prediction | validation [SUMMARY]
[CONTENT] breast cancer | postoperative liver metastasis | nomogram | prediction | validation [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | Female | Humans | Liver Neoplasms | Mastectomy | Neoplasm Staging | Nomograms [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | Female | Humans | Liver Neoplasms | Mastectomy | Neoplasm Staging | Nomograms [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | Female | Humans | Liver Neoplasms | Mastectomy | Neoplasm Staging | Nomograms [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | Female | Humans | Liver Neoplasms | Mastectomy | Neoplasm Staging | Nomograms [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | Female | Humans | Liver Neoplasms | Mastectomy | Neoplasm Staging | Nomograms [SUMMARY]
[CONTENT] Adult | Breast Neoplasms | Female | Humans | Liver Neoplasms | Mastectomy | Neoplasm Staging | Nomograms [SUMMARY]
[CONTENT] liver metastasis relatively | liver metastasis cohorts | predict liver metastasis | metastasis breast cancer | metastatic breast cancer [SUMMARY]
[CONTENT] liver metastasis relatively | liver metastasis cohorts | predict liver metastasis | metastasis breast cancer | metastatic breast cancer [SUMMARY]
[CONTENT] liver metastasis relatively | liver metastasis cohorts | predict liver metastasis | metastasis breast cancer | metastatic breast cancer [SUMMARY]
[CONTENT] liver metastasis relatively | liver metastasis cohorts | predict liver metastasis | metastasis breast cancer | metastatic breast cancer [SUMMARY]
[CONTENT] liver metastasis relatively | liver metastasis cohorts | predict liver metastasis | metastasis breast cancer | metastatic breast cancer [SUMMARY]
[CONTENT] liver metastasis relatively | liver metastasis cohorts | predict liver metastasis | metastasis breast cancer | metastatic breast cancer [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | nomogram | curve | breast | cancer | patients | breast cancer | risk [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | nomogram | curve | breast | cancer | patients | breast cancer | risk [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | nomogram | curve | breast | cancer | patients | breast cancer | risk [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | nomogram | curve | breast | cancer | patients | breast cancer | risk [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | nomogram | curve | breast | cancer | patients | breast cancer | risk [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | nomogram | curve | breast | cancer | patients | breast cancer | risk [SUMMARY]
[CONTENT] cancer | breast cancer | breast | treatment | cancer patients | breast cancer patients | patients | survival | liver | metastatic [SUMMARY]
[CONTENT] analysis | tumor | criteria | curve | clinical | performed | breast | cancer | mm | database [SUMMARY]
[CONTENT] metastasis | curve | liver | number | liver metastasis | probability | threshold probability | 95 | 95 ci | ci [SUMMARY]
[CONTENT] patients | trials correspondingly researches needed | applied subgroups | help | design preventive trials correspondingly | determine applied | determine applied subgroups | determine applied subgroups patients | tool | tool predicting [SUMMARY]
[CONTENT] metastasis | liver | liver metastasis | curve | nomogram | cancer | breast | patients | breast cancer | analysis [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] 1840 ||| ||| SEER ||| [SUMMARY]
[CONTENT] 3 | 1.86 | 1.02 | 3.41 | P = | 0.045 | 3.62 | 1.91 | 6.87 | P < 0.001 | 2.26 | 1.18 | 4.34 | P | 0.014 ||| 0.699 | 0.814 | 0.791 ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| 1840 ||| ||| SEER ||| ||| ||| 3 | 1.86 | 1.02 | 3.41 | P = | 0.045 | 3.62 | 1.91 | 6.87 | P < 0.001 | 2.26 | 1.18 | 4.34 | P | 0.014 ||| 0.699 | 0.814 | 0.791 ||| ||| ||| [SUMMARY]
Revisional Surgery of One Anastomosis Gastric Bypass for Severe Protein-Energy Malnutrition.
35684155
One anastomosis gastric bypass (OAGB) is safe and effective. Its strong malabsorptive component might cause severe protein-energy malnutrition (PEM), necessitating revisional surgery. We aimed to evaluate the safety and outcomes of OAGB revision for severe PEM.
BACKGROUND
This was a single-center retrospective analysis of OAGB patients undergoing revision for severe PEM (2015-2021). Perioperative data and outcomes were retrieved.
METHODS
Ten patients underwent revision for severe PEM. Our center's incidence is 0.63% (9/1425 OAGB). All patients were symptomatic. Median (interquartile range) EWL and lowest albumin were 103.7% (range 57.6, 114) and 24 g/L (range 19, 27), respectively, and 8/10 patients had significant micronutrient deficiencies. Before revision, nutritional optimization was undertaken. The median OAGB-to-revision interval was 18.4 months (range 15.7, 27.8). Median BPL length was 200 cm (range 177, 227). Reversal (n = 5), BPL shortening (n = 3), and conversion to Roux-en-Y gastric bypass (RYGB) (n = 2) were performed. One patient had an anastomotic leak after BPL shortening. No death occurred. Median BMI and albumin increased from 22.4 kg/m2 (range 20.6, 30.3) and 35.5 g/L (range 29.2, 41), respectively, at revision to 27.5 kg/m2 (range 22.2, 32.4) and 39.5 g/L (range 37.2, 41.7), respectively, at follow-up (median 25.4 months, range 3.1, 45). Complete resolution occurred after conversion to RYGB or reversal to normal anatomy, but not after BPL shortening.
RESULTS
Revisional surgery of OAGB for severe PEM is feasible and safe after nutritional optimization. Our results suggest that the type of revision may be an important factor for PEM resolution. Comparative studies are needed to define the role of each revisional option.
CONCLUSIONS
[ "Albumins", "Gastric Bypass", "Humans", "Obesity, Morbid", "Postoperative Complications", "Protein-Energy Malnutrition", "Retrospective Studies", "Weight Loss" ]
9183067
1. Introduction
One anastomosis gastric bypass (OAGB) is the third most commonly performed bariatric metabolic surgery (BMS) worldwide [1], and the most common in our bariatric center. It combines restriction with a dominant malabsorptive component, and it is simple to perform, with a short learning curve [2,3]. OAGB is considered safe and effective as both primary and secondary BMS. It confers good outcomes in terms of satisfactory weight reduction and resolution of obesity-associated medical problems [4,5,6]. OAGB results in equal or better outcomes compared with Roux-en-Y gastric bypass (RYGB) on the one hand, and more nutritional deficiencies on the other, owing to its stronger malabsorptive component [7,8], i.e., the longer biliopancreatic limb (BPL). Severe protein–energy malnutrition (PEM) is an uncommon complication following OAGB that rarely requires corrective surgery [9], as in other BMS [10]. There is no uniform definition in the bariatric literature for protein–energy malnutrition, its severity grading, or the threshold for revisional surgery. We aimed to evaluate the safety and outcomes of revisional surgery for severe PEM.
null
null
3. Results
Ten patients underwent revisional surgery due to severe PEM. The incidence of revisional surgery for PEM in our bariatric center is only 0.63% (9/1425 OAGB). Patients' characteristics at OAGB are presented in Table 1. The male-to-female ratio was 2:8, the median (interquartile range) age was 47.5 years (range 42.2, 58.2), and the median BMI was 44.2 kg/m2 (range 35.8, 47). The OAGB was primary or secondary in five patients each. Previous BMS included sleeve gastrectomy, adjustable gastric banding, and silastic ring vertical gastroplasty (n = 2, 2, and 1, respectively). The median BPL length was 200 cm (range 177, 200). All patients were in good nutritional status before undergoing OAGB, and 3/10 had a mild micronutrient deficiency (iron, B12, or vitamin D). The patients' characteristics of PEM are presented in Table 2. The median EWL was 103.7% (range 57.6, 114), and the median TWL was 43% (range 35.7, 56.9). Symptoms included mainly marked weakness, dizziness, syncope, peripheral edema, and diarrhea (watery or steatorrhea). The median lowest albumin was 24 g/L (range 19, 27), and 8/10 patients had >1 significant vitamin/mineral deficiency (iron, calcium, vitamin D3, B1, B6, B12, folic acid). Nutritional optimization included nutritional supplements and medications to aid absorption, consisting of enteral nutrition (EN), total parenteral nutrition (TPN), multivitamins (MV), loperamide, and pancreolipase (n = 10, 5, 10, 4, 8, and 7, respectively). The patients' characteristics at revisional surgery are presented in Table 3. Revision was performed at a median interval of 18.4 months (range 15.7, 27.8) after OAGB. At revision, the median age, BMI, and highest albumin were 49.5 years (range 43.5, 61.5), 22.4 kg/m2 (range 20.6, 30.3), and 35.5 g/L (range 29.2, 41), respectively. The type of revision included reversal to normal anatomy, BPL shortening, and conversion to RYGB, performed in five, three, and two patients, respectively.
Revisional surgery was laparoscopic in 8/10 patients and through laparotomy in 2/10 (one reversal and one conversion to RYGB). The BPL was found to be longer than reported in 2/10 patients (250 instead of 200 cm, and 350 instead of 250 cm), and the median actual BPL length was 200 cm (range 177, 227). At revision, the common channel was found to be at least 300 cm in all patients except one, whose OAGB had been performed at another hospital. The median operative time was 88.5 min (range 66.8, 117). The median LOS was 8 days (range 7.2, 12.2), and an early complication of CD grade ≥3 occurred in one patient (1/10), who had an anastomotic leak after BPL shortening and underwent debridement with primary repair. No mortality occurred. None of the patients had liver function derangements. None of the patients had villous atrophy or other gastrointestinal tract pathologies on pre-revision investigation (0/10) or in the examined operative specimens (0/6). The median follow-up after revision was 25.4 months (range 3.1, 45). The median weight, BMI, and albumin increased from 60 kg (range 51.7, 79.2), 22.4 kg/m2 (range 20.6, 30.3), and 35.5 g/L (range 29.2, 41), respectively, at revision to 73.5 kg (range 59, 89), 27.5 kg/m2 (range 22.2, 32.4), and 39.5 g/L (range 37.2, 41.7), respectively, at last follow-up. PEM resolved in 7/10 patients, improved in 2/10 (who remain on supportive medications for PEM and nutritional support), and did not change in 1/10; the latter had undergone BPL shortening and is a candidate for reversal to normal anatomy. BPL shortening did not result in complete resolution, as opposed to conversion to RYGB and reversal. In a univariate analysis including all of the aforementioned parameters, we found that only the type of revision had a significant correlation with complete resolution of PEM.
Prior to revision, all patients had a BPL length >150 cm, and in a univariate analysis, none of the variables was identified as a risk factor for the development of PEM. The BMS effects were maintained in 9/10 patients, whereas 1/10 regained weight significantly, to a BMI >40 kg/m2. All patients who had type 2 diabetes (T2D, n = 3) and non-alcoholic fatty liver disease (NAFLD, n = 4) at OAGB had complete resolution at revisional surgery and last follow-up.
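The univariate analysis described above relied on Fisher's exact test for dichotomous data (per the Methods). A minimal, self-contained sketch is given below; the 2×2 table is reconstructed from the reported outcomes (7 patients with reversal or RYGB conversion, all with complete resolution; 3 with BPL shortening, none with complete resolution) and is illustrative, not taken from the study's analysis files.

```python
# Hypothetical sketch: two-sided Fisher's exact test on revision type
# vs. complete PEM resolution, using only the Python standard library.
# The counts are reconstructed from the reported outcomes; illustrative only.
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact p-value for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    # Hypergeometric probability of a table with x in the top-left cell,
    # given the fixed row and column margins.
    def pmf(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    lo, hi = max(0, col1 - row2), min(row1, col1)
    p_obs = pmf(a)
    # Sum the probabilities of all tables as extreme as, or more extreme
    # than, the observed one (standard two-sided definition).
    return sum(pmf(x) for x in range(lo, hi + 1) if pmf(x) <= p_obs * (1 + 1e-9))

# Rows: reversal/RYGB conversion vs. BPL shortening;
# columns: PEM completely resolved vs. not resolved.
p = fisher_exact_two_sided([[7, 0], [0, 3]])
```

With this reconstructed table the p-value falls below 0.05, consistent with the reported significant correlation between revision type and PEM resolution.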
5. Conclusions
Revisional surgery of OAGB for severe PEM is feasible and safe after nutritional optimization. Our results suggest that the type of revision may be an important factor for PEM resolution. Comparative studies are needed to better define the role of each revisional option.
[ "2. Materials and Methods", "2.1. Statistical Analysis", "2.2. Revisional Surgery Technique Highlights" ]
[ "A retrospective review of a single bariatric center database and the hospital computerized medical record system was performed. Ten OAGB patients reoperated at our medical center (January 2015 to December 2021) due to severe PEM were included in this study. Nine of them underwent the OAGB in our medical center. During the study period, 1425 cases of OAGB were performed in our bariatric center. Prior to OAGB, all patients were found eligible for BMS by the multidisciplinary bariatric team, according to the American Society for Metabolic and Bariatric Surgery (ASMBS) guidelines.\nBefore revisional surgery, all patients underwent a thorough workup by a multidisciplinary team, including psychological and/or psychiatric evaluation, nutritional assessment, anthropometric studies (weight, height, body mass index (BMI)), laboratory tests, and gastrointestinal investigation (upper and lower gastrointestinal endoscopy, computed tomography, barium swallow test, abdominal ultrasound). All patients received nutritional support, including enteral and/or total parenteral nutrition and micronutrient supplements. High-dose loperamide and pancreolipase were given to 8/10 and 7/10 patients, respectively. All patients gave their written informed consent, following elaborate explanations about the indication for reoperation, surgical options, possible complications, surgical re-interventions, and implications for future bariatric and metabolic outcomes.\nOur threshold for revisional surgery: we defined severe PEM as the criterion for revision.\nOur definition of severe PEM: significant symptoms of hypoalbuminemia, such as edema and weakness, with or without micronutrient deficiency, that persisted or recurred despite nutritional optimization.\nSignificant micronutrient deficiency: moderate-to-severe deficiency of iron, calcium, vitamin D3, B1, B6, B12, or folic acid [11].\nData retrieved at both surgeries (OAGB and reoperation) included demographic information, clinical characteristics, work-up findings, nutritional parameters and interventions, medical treatment, operative information, and postoperative outcomes. Early (30-day) and late (>30-day) complications were graded according to the Clavien–Dindo (CD) classification [12].\n2.1. Statistical Analysis: Continuous data are presented as medians (interquartile range (IQR)). Proportions are presented as n (%). Dichotomous data were analyzed using Fisher's exact test. Continuous data were analyzed using the Mann–Whitney test.\n2.2. Revisional Surgery Technique Highlights:\n1. Reversal to normal anatomy: a. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler; the old staple line is then resected off the distal pouch. b. Construction of a wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self-retaining suture for defect closure.\n2. BPL shortening: a. Horizontal antimesenteric transection of the gastro-jejunostomy, as above. b. Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and creation of a new side-to-side gastro-jejunostomy.\n3. Conversion to RYGB: a. Resection of the gastro-jejunal complex, using 3 firings of a linear stapler. b. Re-establishment of bowel continuity by construction of a side-to-side jejuno-jejunostomy. c. Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively).\nThe study was approved by the Tel-Aviv Sourasky Medical Center Institutional Review Board (TLV-16-0325/2019) and was performed in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in this study.", "Continuous data are presented as medians (interquartile range (IQR)). Proportions are presented as n (%). Dichotomous data were analyzed using Fisher's exact test. Continuous data were analyzed using the Mann–Whitney test.", "1. Reversal to normal anatomy: a. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler; the old staple line is then resected off the distal pouch. b. Construction of a wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self-retaining suture for defect closure.\n2. BPL shortening: a. Horizontal antimesenteric transection of the gastro-jejunostomy, as above. b. Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and creation of a new side-to-side gastro-jejunostomy.\n3. Conversion to RYGB: a. Resection of the gastro-jejunal complex, using 3 firings of a linear stapler. b. Re-establishment of bowel continuity by construction of a side-to-side jejuno-jejunostomy. c. Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively).\nThe study was approved by the Tel-Aviv Sourasky Medical Center Institutional Review Board (TLV-16-0325/2019) and was performed in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in this study." ]
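The descriptive statistics used throughout the record (medians with interquartile range, per the Statistical Analysis section) can be reproduced with the Python standard library alone. This is a minimal sketch; the BMI values are invented for illustration and are not the study's data.

```python
# Minimal sketch of median (IQR) reporting with the Python stdlib.
# The BMI values below are invented for illustration, not study data.
from statistics import median, quantiles

bmi_at_revision = [20.6, 21.5, 22.0, 22.4, 22.4, 23.9, 26.1, 28.7, 30.3, 31.0]

# quantiles(n=4) returns the three quartile cut points Q1, Q2, Q3;
# method="inclusive" interpolates between observed data points.
q1, _, q3 = quantiles(bmi_at_revision, n=4, method="inclusive")
summary = f"{median(bmi_at_revision):.1f} kg/m2 (IQR {q1:.1f}, {q3:.1f})"
```

Note that `method="inclusive"` treats the sample as the whole population of interest, which matches the usual hand calculation of quartiles from a small case series like this one.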
[ null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Statistical Analysis", "2.2. Revisional Surgery Technique Highlights", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "One anastomosis gastric bypass (OAGB) is the third most commonly performed bariatric metabolic surgery (BMS) worldwide [1], and the most common in our bariatric center. It combines restriction with a dominant malabsorptive component, and it is simple to perform, with a short learning curve [2,3]. OAGB is considered safe and effective as both primary and secondary BMS. It confers good outcomes in terms of satisfactory weight reduction and resolution of obesity-associated medical problems [4,5,6]. OAGB results in equal or better outcomes compared with Roux-en-Y gastric bypass (RYGB) on the one hand, and more nutritional deficiencies on the other, owing to its stronger malabsorptive component [7,8], i.e., the longer biliopancreatic limb (BPL).\nSevere protein–energy malnutrition (PEM) is an uncommon complication following OAGB that rarely requires corrective surgery [9], as in other BMS [10]. There is no uniform definition in the bariatric literature for protein–energy malnutrition, its severity grading, or the threshold for revisional surgery.\nWe aimed to evaluate the safety and outcomes of revisional surgery for severe PEM.", "A retrospective review of a single bariatric center database and the hospital computerized medical record system was performed. Ten OAGB patients reoperated at our medical center (January 2015 to December 2021) due to severe PEM were included in this study. Nine of them underwent the OAGB in our medical center. During the study period, 1425 cases of OAGB were performed in our bariatric center. Prior to OAGB, all patients were found eligible for BMS by the multidisciplinary bariatric team, according to the American Society for Metabolic and Bariatric Surgery (ASMBS) guidelines.\nBefore revisional surgery, all patients underwent a thorough workup by a multidisciplinary team, including psychological and/or psychiatric evaluation, nutritional assessment, anthropometric studies (weight, height, body mass index (BMI)), laboratory tests, and gastrointestinal investigation (upper and lower gastrointestinal endoscopy, computed tomography, barium swallow test, abdominal ultrasound). All patients received nutritional support, including enteral and/or total parenteral nutrition and micronutrient supplements. High-dose loperamide and pancreolipase were given to 8/10 and 7/10 patients, respectively. All patients gave their written informed consent, following elaborate explanations about the indication for reoperation, surgical options, possible complications, surgical re-interventions, and implications for future bariatric and metabolic outcomes.\nOur threshold for revisional surgery: we defined severe PEM as the criterion for revision.\nOur definition of severe PEM: significant symptoms of hypoalbuminemia, such as edema and weakness, with or without micronutrient deficiency, that persisted or recurred despite nutritional optimization.\nSignificant micronutrient deficiency: moderate-to-severe deficiency of iron, calcium, vitamin D3, B1, B6, B12, or folic acid [11].\nData retrieved at both surgeries (OAGB and reoperation) included demographic information, clinical characteristics, work-up findings, nutritional parameters and interventions, medical treatment, operative information, and postoperative outcomes. Early (30-day) and late (>30-day) complications were graded according to the Clavien–Dindo (CD) classification [12].\n2.1. Statistical Analysis: Continuous data are presented as medians (interquartile range (IQR)). Proportions are presented as n (%). Dichotomous data were analyzed using Fisher's exact test. Continuous data were analyzed using the Mann–Whitney test.\n2.2. Revisional Surgery Technique Highlights:\n1. Reversal to normal anatomy: a. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler; the old staple line is then resected off the distal pouch. b. Construction of a wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self-retaining suture for defect closure.\n2. BPL shortening: a. Horizontal antimesenteric transection of the gastro-jejunostomy, as above. b. Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and creation of a new side-to-side gastro-jejunostomy.\n3. Conversion to RYGB: a. Resection of the gastro-jejunal complex, using 3 firings of a linear stapler. b. Re-establishment of bowel continuity by construction of a side-to-side jejuno-jejunostomy. c. Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively).\nThe study was approved by the Tel-Aviv Sourasky Medical Center Institutional Review Board (TLV-16-0325/2019) and was performed in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in this study.", "Continuous data are presented as medians (interquartile range (IQR)). Proportions are presented as n (%). Dichotomous data were analyzed using Fisher's exact test. Continuous data were analyzed using the Mann–Whitney test.", "1. Reversal to normal anatomy: a. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler; the old staple line is then resected off the distal pouch. b. Construction of a wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self-retaining suture for defect closure.\n2. BPL shortening: a. Horizontal antimesenteric transection of the gastro-jejunostomy, as above. b. Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and creation of a new side-to-side gastro-jejunostomy.\n3. Conversion to RYGB: a. Resection of the gastro-jejunal complex, using 3 firings of a linear stapler. b. Re-establishment of bowel continuity by construction of a side-to-side jejuno-jejunostomy. c. Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively).\nThe study was approved by the Tel-Aviv Sourasky Medical Center Institutional Review Board (TLV-16-0325/2019) and was performed in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in this study.", "Ten patients underwent revisional surgery due to severe PEM. The incidence of revisional surgery for PEM in our bariatric center is only 0.63% (9/1425 OAGB). Patients' characteristics at OAGB are presented in Table 1. The male-to-female ratio was 2:8, the median (interquartile range) age was 47.5 years (range 42.2, 58.2), and the median BMI was 44.2 kg/m2 (range 35.8, 47). The OAGB was primary or secondary in five patients each. 
Previous BMS included sleeve gastrectomy, adjustable gastric banding, and silastic ring vertical gastroplasty (n = 2, 2, and 1, respectively). The median BPL length was 200 cm (range 177, 200). All patients were in good nutritional status before undergoing OAGB, and 3/10 had mild micronutrient deficiency (iron, B12, or vitamin D).\nThe patients’ PEM characteristics are presented in Table 2. The median EWL was 103.7% (range 57.6, 114), and the median TWL was 43% (range 35.7, 56.9). Symptoms included mainly marked weakness, dizziness, syncope, peripheral edema, and diarrhea (watery or steatorrhea). The median lowest albumin was 24 g/L (range 19, 27), and 8/10 patients had more than one significant vitamin/mineral deficiency (iron, calcium, vitamin D3, B1, B6, B12, folic acid). Nutritional optimization included nutritional supplements and medications to aid absorption, consisting of enteral nutrition (EN), total parenteral nutrition (TPN), multivitamins (MV), loperamide, and pancreolipase (n = 10, 5, 10, 4, 8, and 7, respectively).\nThe patients’ characteristics at revisional surgery are presented in Table 3. Revision was performed at a median interval of 18.4 months (range 15.7, 27.8) after OAGB. At revision, the median age, BMI, and highest albumin were 49.5 years (range 43.5, 61.5), 22.4 kg/m2 (range 20.6, 30.3), and 35.5 g/L (range 29.2, 41), respectively. The type of revision included reversal to normal anatomy, BPL shortening, and conversion to RYGB, performed in five, three, and two patients, respectively. Revisional surgery was laparoscopic in 8/10 patients, and through laparotomy in 2/10 patients (one reversal and another conversion to RYGB). BPL was found longer than reported in 2/10 patients (250 instead of 200, and 350 instead of 250 cm), and the median actual BPL length was 200 cm (range 177, 227). At revision, the common channel was found to be at least 300 cm in all but one patient, whose OAGB had been performed at another hospital. The median operative time was 88.5 min (range 66.8, 117). The median LOS was 8 days (range 7.2, 12.2), and early complications of CD grade ≥3 occurred in one patient (1/10), who had an anastomotic leak after BPL shortening and underwent debridement with primary repair. There was no mortality. None of the patients had liver function derangements. None of the patients had villous atrophy or other gastrointestinal tract pathologies on pre-revision investigation (0/10) or in the examined operative specimens (0/6). \nThe median follow-up after revision was 25.4 months (range 3.1, 45). The median weight, BMI, and albumin increased from 60 kg (range 51.7, 79.2), 22.4 kg/m2 (range 20.6, 30.3), and 35.5 g/L (range 29.2, 41), respectively, at revision to 73.5 kg (range 59, 89), 27.5 kg/m2 (range 22.2, 32.4), and 39.5 g/L (range 37.2, 41.7), respectively, at last follow-up. PEM resolved in 7/10 patients, improved in 2/10 (still on supportive medications and nutritional support), and did not change in 1/10. The latter had undergone BPL shortening and is a candidate for reversal to normal anatomy. BPL shortening did not result in complete resolution, as opposed to conversion to RYGB and reversal. In a univariate analysis including all of the aforementioned parameters, we found that only the type of revision had a significant correlation with the complete resolution of PEM. Prior to revision, all patients had a BPL length >150 cm, and in a univariate analysis, none of the variables was identified as a risk factor for the development of PEM. The BMS effects were maintained in 9/10 patients, whereas 1/10 regained significant weight, to a BMI >40 kg/m2. All patients who had T2D (n = 3) and NAFLD (n = 4) at OAGB had a complete resolution at revisional surgery and last follow-up.", "OAGB is one of three commonly performed, acceptable BMSs [13]. 
There is cumulative evidence of satisfactory excess weight loss—88% at 2 years, 77% at 6 years, and 70% at 12 years postoperatively [6]—and remission or improvement of obesity-associated medical problems, especially type II diabetes (T2D) resolution—85–90% at 1 year postoperatively [4,14] and ~70% at 5–15 years [15]. OAGB is safe, with 5% overall morbidity and 0.2% mortality rates [5]. However, as in other BMSs, worrisome, delayed-onset complications might develop and affect patients’ health and quality of life. These include mainly bile reflux, anastomotic ulcer, and PEM and may infrequently require revisional surgery [16]. Khrucharoen et al. showed in a systematic review that 26% (46/179) of OAGB revisions were for PEM [17]. \nThe incidence of revisions for PEM varies between studies. Rutledge reported that 31/2410 patients (1.28%) underwent reversal due to excessive weight loss [18]. Parmar et al. found a cumulative rate of 0.71% (range, 0–3.8%) in a systematic review of 12,807 OAGB patients [19]. Recent studies, such as the Italian multi-institutional survey of Musella et al. [16] and the single-center studies of Jedamzik et al. [20] and Almuhanna et al. [15], reported rates of 0.18% (16/8676 OAGB patients), 0.9% (9/1025 OAGB patients), and 2.3% (51/2223 OAGB patients), respectively. In the current single-center study, the rate is only 0.63% (9/1425 OAGB). In line with the literature, most of our patients (7/10) had an OAGB-to-revision interval within the second postoperative year. However, PEM can develop thereafter. We encourage more surgeons to report their long-term rates of this uncommon complication. \nPEM after OAGB is mainly attributed to its strong malabsorptive component, i.e., the BPL length. In a comparative study of BPL lengths of 150, 180, and 250 cm, Ahuja et al. [21] found no significant difference in OAGB effectiveness at 1 year (i.e., EWL, resolution of T2D, HTN); however, a 250 cm BPL was associated with worse nutritional deficiencies, and one patient died of liver failure. In another comparative study of BPL lengths of 150, 180, and 200 cm, Pizza et al. [22] did not find any significant difference in effectiveness or nutritional status at 2 years, except for iron and ferritin levels, which were significantly lower in the 200 cm BPL group. Nevertheless, in both studies, none of the patients underwent revisional surgery. In contrast, the current study focuses on patients operated on for severe PEM, and none of them had a BPL length of 150 cm (median 200 cm, range 177, 227). In 2/10 patients, the actual BPL found at revision was longer than reported at OAGB. We therefore advise counting the BPL length out loud during OAGB, for team double-checking. We do not routinely measure the total small bowel length (SBL) at the OAGB. In the first years, we used a fixed ~200 cm BPL and occasionally a 250 cm BPL for BMI > 60, without measuring the total SBL. Following accumulating data and our own experience, including the uncommon occurrence of significant diarrhea, excessive weight loss, and nutritional deficiencies, we changed the BPL length to 150–200 cm, depending mainly on the patient’s BMI and the indication (primary or secondary OAGB). Ramos et al. have outlined the 2020 IFSO consensus conference statement on OAGB, postulating a clear consensus that a BPL length <200 cm is adequate, that the BPL should only be >200 cm if total SBL is measured and suitably long, and that BPL length can be determined according to the BMI [23]. Total SBL varies widely [24]; therefore, some surgeons routinely measure the total SBL and tailor the BPL accordingly, in order to achieve adequate bariatric and metabolic goals while minimizing the occurrence of PEM. Komaei et al. 
[25] retrospectively compared outcomes of a fixed 200 cm BPL without SBL measurement with a BPL tailored to 40% of total SBL. There was no significant difference in terms of efficacy, but more patients in the fixed BPL group had nutritional deficiencies (p < 0.05), though none required revision. Many surgeons have not endorsed this approach, and there is no consensus regarding its necessity or the ideal BPL percentage of total SBL [23]. Furthermore, we share other bariatric surgeons’ concern that routine total SBL measurement adds a risk of unintentional bowel injury. Instead, we believe such measurements are important at revisional surgery of OAGB in cases of severe PEM or following weight regain (WR), when malabsorption intensification is considered, as in RYGB [26]. \nAs with WR, the etiology of PEM is probably multifactorial [26]. An altered gut–hormonal balance may explain cases of PEM that are refractory to intensive treatment and cases of ongoing PEM despite BPL shortening. Further investigation is needed to support or refute our assumption. New-onset anorexia or pre-existing psychiatric disorders are other possible causes. Some patients do not comply with the strict nutritional recommendations, perhaps due to their high cost, even in the event of PEM. Secondary pathologies such as colitis, enteritis with atrophy of the villi, undiagnosed celiac disease, pancreatic insufficiency, or small intestinal bacterial overgrowth could serve as another explanation. In our cohort, all patients were in good nutritional status before undergoing OAGB, and none of them had any evidence of the aforementioned GI pathologies, but they were non-compliant with nutritional recommendations after OAGB. Revisional surgery for PEM should be preceded by nutritional status optimization, which is also partly dependent on patients’ compliance. \nThere are several revision options described in the literature, including BPL shortening, conversion to RYGB, conversion to SG, and reversal to normal anatomy. All these surgical options are generally feasible and relatively safe. However, PEM-wise, it is difficult to assess the cumulative complication and resolution rates for each revision option. This is for several reasons. Most studies do not focus on PEM as a sole indication for revision. The complication rate of revision for PEM may be higher than for other indications, especially if the patient is not amenable to optimal correction prior to revision, as occurred in one of our patients. In addition, most reports do not elaborate on PEM outcomes after revision, and some studies did not specify which revision types were performed. Furthermore, there is a lack of standardized criteria for revisional surgery, and most studies do not specify their definition of severe PEM, instead simply using terms such as ‘malnutrition’, ‘protein–energy/calorie malnutrition’, ‘excessive weight loss’, ‘macro- or micronutrient deficiency’, ‘hypoalbuminemia’, etc. There is no standardized protocol of nutritional status optimization prior to revision either. In addition, there is variance regarding BPL lengths and other OAGB parameters. Hence, it is difficult to assess the best revisional surgery for PEM or to delineate guidelines or an algorithm for the management of severe PEM after OAGB. \nDespite these limitations, a few observations can be made regarding the different revisional options. Reversal of OAGB is generally feasible, safe, and simple to perform [27]. Gesner et al. [28] reported a high rate of overall morbidity, which was reduced when they switched from simple transection of the gastro-jejunostomy to resection and construction of jejuno-jejunostomy (50 vs. 8.3%, p = 0.03). 
In their first 14 cases, they left the old gastro-jejunal staple line untouched and consequently had 4 complications there (leak and stenosis), while in the current study, although we had fewer patients, we prevented these complications (0/5 patients) by resecting the old staple line and ensuring adequate patency of the intestinal lumen. We therefore caution against two pitfalls. First, simple transection of the gastro-jejunal anastomosis should be performed only if it is possible to preserve adequate intestinal patency (i.e., without narrowing the lumen) and provided the old staple line is completely resected. Otherwise, resection with jejuno-jejunal anastomosis is advised. Second, the gastro-gastrostomy should be wide enough (>6 cm) to prevent stricture. Reversal to normal anatomy is advantageous since it ensures resolution of PEM, and we believe it is more appropriate for patients with severe PEM, especially non-compliant ones. The disadvantage of reversal is the possible risk of significant weight regain and recurrence of obesity-associated problems. Consequently, many patients refuse this option and prefer one of the revisional alternatives, which are more likely to preserve BMS outcomes. \nBPL shortening maintains the ‘OAGB structure’ while reducing the malabsorptive component and therefore is supposed to result in PEM resolution. Interestingly, while Hussain et al. [29] reported the resolution of intractable diarrhea, PEM, or deranged liver functions in eight patients undergoing BPL shortening to 150 cm (from >200 cm), none of our three patients had resolution of PEM, despite a shorter revised BPL of 65–120 cm (from 170 or 220 cm). We assume that for a longer initial BPL, shortening may sometimes be adequate; however, this revisional option mandates careful surveillance due to the risk of ongoing PEM. More data are needed. \nConversion to RYGB or SG is another option that is likely to preserve BMS outcomes. Chen et al. 
[30] performed conversion to sleeve gastrectomy by transecting the gastro-jejunal anastomosis, applying a hand-sewn anastomosis of the distal gastric pouch to the antrum of the remnant stomach and vertical resection of the remnant stomach over an endoscope. They reported significant improvement in malnutrition with a maintained weight loss at early follow-up. Their study included, however, both OAGB and RYGB patients and other indications for revision, and the overall complication rate was 8.1% for the entire cohort. We did not perform this type of revisional surgery because of the possibility of replacing PEM with other complications of SG (for example, gastroesophageal reflux disease or stricture) and because it prevents future reversal to normal anatomy in case of persistent nutritional deficiencies. Jedamzik et al. performed conversions to RYGB and concluded that BPL shortening to 35–100 cm with conversion to RYGB is feasible and safe for severe malnutrition [20]. Similarly, two of our patients were converted to RYGB with a 50–100 cm revised BPL and achieved PEM resolution. \nKhrucharoen et al. [17] concluded that although revision to RYGB was technically simpler than revision to SG or normal anatomy, it should be avoided in PEM, due to the risk of further malabsorption posed by the remaining bypassed BPL. They also mentioned that a three-limb measurement is important to decide whether to relocate the gastrojejunal anastomosis or simply create a jejunojejunostomy, since the latter option might exacerbate malnutrition. They also concluded that reversal to original anatomy is beneficial, though it can be technically challenging and may be associated with an increased risk of complications (i.e., gastrojejunal leak and stenosis). Haddad et al. [1] reported in the IFSO worldwide OAGB survey the operative data of 239/277 patients who underwent revision for malnutrition or steatorrhea. Revisions included: conversion to RYGB (43%), reversal (32%), BPL shortening (20%), and conversion to SG (5%). BPL length data were available in 244/277 patients, revealing that the most common BPL length in these patients was 200 cm. They also reported that 5/98 OAGB mortalities (5%) were due to liver failure or malnutrition. Kermansaravi [31] reported a cumulative rate of revisions for PEM of 0.84% (153/17,938 patients) and suggested avoiding creating a BPL >150 cm to reduce the rates of PEM, given that BMS outcomes seem similar for 150 cm compared with 200 cm [32]. In the current study, we found that the revision type is an important factor for complete resolution of PEM; however, large comparative studies are needed to examine all aspects of revisional surgery for severe PEM following OAGB. \nOur study has several limitations. This is a retrospective observational study of a small cohort without comparative groups. The real incidence of PEM may be underestimated, due to a ~30% loss to follow-up. Large comparative studies are needed to determine the proper surgical management of severe PEM, to delineate a uniform threshold for revisional surgery, and to identify risk factors for the development of PEM.\nDespite these limitations, this is a very important issue for further discussion and investigation, since severe PEM may rarely lead to hepatic insufficiency and death. We have described in detail this uncommon complication and the outcomes of three different revisional options. We believe that OAGB is a very good surgery to treat severe obesity, and the delayed complications seen in other BMS occur only uncommonly after it. ", "Revisional surgery of OAGB for severe PEM is feasible and safe after nutritional optimization. Our results suggest that the type of revision may be an important factor for PEM resolution. Comparative studies are needed to better define the role of each revisional option. " ]
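The Results quote percent excess weight loss (EWL) and percent total weight loss (TWL). A minimal sketch of these standard bariatric metrics, with hypothetical patient numbers (not the study's data) and the common convention of defining ideal weight at BMI 25 kg/m2:

```python
# Standard bariatric weight-loss metrics (EWL, TWL). All numbers below are
# hypothetical; ideal weight is assumed at BMI 25 kg/m2, a common convention.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m2."""
    return weight_kg / height_m ** 2

def pct_twl(initial_kg: float, current_kg: float) -> float:
    """Percent total weight loss: weight lost as a share of initial weight."""
    return 100.0 * (initial_kg - current_kg) / initial_kg

def pct_ewl(initial_kg: float, current_kg: float, height_m: float,
            ideal_bmi: float = 25.0) -> float:
    """Percent excess weight loss: weight lost as a share of excess weight,
    where excess is the weight above the ideal-BMI weight."""
    ideal_kg = ideal_bmi * height_m ** 2
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# Hypothetical patient: 1.65 m tall, 120 kg at surgery, 70 kg at follow-up
print(round(bmi(120, 1.65), 1))          # initial BMI
print(round(pct_twl(120, 70), 1))        # %TWL
print(round(pct_ewl(120, 70, 1.65), 1))  # %EWL
```

Note how a TWL above roughly 40% of a high initial weight can translate to an EWL near or above 100%, as seen in this cohort.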
[ "intro", null, null, null, "results", "discussion", "conclusions" ]
[ "one anastomosis gastric bypass", "protein", "malnutrition", "revisional surgery", "reversal", "conversion", "bariatric metabolic surgery" ]
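The Statistical Analysis section reports medians with IQR and uses Fisher's exact test for dichotomous data and the Mann–Whitney test for continuous data. A minimal sketch of that analysis with scipy; the group values and 2x2 table below are hypothetical, not the study's data:

```python
# Illustrative sketch of the statistical methods in Section 2.1: medians with
# IQR, Mann-Whitney U for continuous data, Fisher's exact for dichotomous data.
# All numbers are hypothetical and are NOT the study's data.
import numpy as np
from scipy import stats

def median_iqr(values):
    """Return the median and the interquartile range (25th, 75th percentiles)."""
    v = np.asarray(values, dtype=float)
    return float(np.median(v)), (float(np.percentile(v, 25)),
                                 float(np.percentile(v, 75)))

# Hypothetical continuous outcome (e.g., albumin, g/L) in two revision groups
group_a = [35, 38, 40, 37, 39]
group_b = [29, 31, 34, 30, 33]

med, iqr = median_iqr(group_a)

# Mann-Whitney U test for the continuous comparison
u_stat, p_continuous = stats.mannwhitneyu(group_a, group_b,
                                          alternative="two-sided")

# Fisher's exact test for a dichotomous outcome
# (e.g., PEM resolved vs. not resolved, by revision type): a 2x2 table
table = [[5, 0],   # revision type 1: resolved / not resolved
         [2, 3]]   # revision type 2: resolved / not resolved
odds_ratio, p_dichotomous = stats.fisher_exact(table)
```

With small samples such as this cohort (n = 10), both tests are appropriate because they make no normality assumption and Fisher's test is exact for sparse contingency tables.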
1. Introduction: One anastomosis gastric bypass (OAGB) is the third most commonly performed bariatric metabolic surgery (BMS) worldwide [1], and the most common in our bariatric center. It combines restriction with a dominant malabsorptive component, and it is simple to perform, with a short learning curve [2,3]. OAGB is considered safe and effective as both primary and secondary BMS. It confers good outcomes in terms of satisfactory weight reduction and resolution of obesity-associated medical problems [4,5,6]. OAGB results in equal or better outcomes compared with Roux-en-Y gastric bypass (RYGB) on the one hand and more nutritional deficiencies on the other hand, owing to its stronger malabsorptive component [7,8], i.e., the longer biliopancreatic limb (BPL). Severe protein–energy malnutrition (PEM) is an uncommon complication following OAGB that rarely requires corrective surgery [9], as in other BMS [10]. There is no uniform definition in the bariatric literature for protein–energy malnutrition, its severity grading, or a threshold for revisional surgery. We aimed to evaluate the safety and outcomes of revisional surgery for severe PEM. 2. Materials and Methods: A retrospective review of a single bariatric center's database and the hospital's computerized medical record system was performed. Ten OAGB patients reoperated at our medical center (January 2015 to December 2021) due to severe PEM were included in this study. Nine of them underwent the OAGB in our medical center. In the period corresponding to the study, 1425 cases of OAGB were performed in our bariatric center. Prior to OAGB, all patients were found eligible for BMS by the multidisciplinary bariatric team, according to the American Society for Metabolic and Bariatric Surgery (ASMBS) guidelines. 
Before revisional surgery, all patients underwent a thorough workup by a multidisciplinary team, including psychological and/or psychiatric evaluation, nutritional assessment, anthropometric studies (weight, height, body mass index (BMI)) laboratory tests, gastrointestinal investigation (upper and lower gastrointestinal endoscopy, computed tomography, barium swallow test, abdominal ultrasound). All patients received nutritional support, including enteral and/or total parenteral nutrition and micronutrient supplements. High-dose Loperamide and Pancreolipase were given to 8/10 and 7/10 patients, respectively. All patients gave their written informed consent, following elaborate explanations about the indication for reoperation, surgical options, possible complications, surgical re-interventions, and implications on future bariatric and metabolic outcomes. Our threshold for revisional surgery: we defined severe PEM as the criterion for revision. Our definition of severe PEM: significant symptoms of hypoalbuminemia, such as edema, and weakness, with or without micronutrient deficiency, that persisted or recurred despite nutritional optimization. Significant micronutrient deficiency: moderate-severe deficiency of iron, calcium, vitamin D3, B1, B6, B12, and folic acid [11]. Data retrieved at both surgeries (OAGB and reoperation) included demographic information, clinical characteristics, work-up findings, nutritional parameters and intervention, medical treatment, operative information, and postoperative outcomes. Early (30-day) and late (>30-day) complications were graded according to the Clavien–Dindo (CD) classification [12]. 2.1. Statistical Analysis Continuous data are presented as medians (interquartile range (IQR)). Proportions are presented as n (%). Dichotomous data were analyzed using the Fisher’s exact test. Continuous data were analyzed using the Mann–Whitney test. Continuous data are presented as medians (interquartile range (IQR)). 
Proportions are presented as n (%). Dichotomous data were analyzed using the Fisher’s exact test. Continuous data were analyzed using the Mann–Whitney test. 2.2. Revisional Surgery Technique Highlights 1.Reversal to normal anatomy: a.Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch. b.Construction of wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self- retaining suture for defect closure.2.BPL shortening: a.Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch.b.Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and the creation of a new side-to-side gastro-jejunostomy.3.Conversion to RYGB: a.Resection of the gastro-jejunal complex, using 3 firings of a linear stapler.b.Re-establishment of bowel continuity by construction of side-to-side jejuno-jejunostomy.c.Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively). Reversal to normal anatomy: a.Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch. b.Construction of wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self- retaining suture for defect closure. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch. Construction of wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self- retaining suture for defect closure. 
BPL shortening: a.Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch.b.Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and the creation of a new side-to-side gastro-jejunostomy. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch. Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and the creation of a new side-to-side gastro-jejunostomy. Conversion to RYGB: a.Resection of the gastro-jejunal complex, using 3 firings of a linear stapler.b.Re-establishment of bowel continuity by construction of side-to-side jejuno-jejunostomy.c.Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively). Resection of the gastro-jejunal complex, using 3 firings of a linear stapler. Re-establishment of bowel continuity by construction of side-to-side jejuno-jejunostomy. Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively). The study was approved by the Tel-Aviv Sourasky Medical Center Institutional Review Board (TLV-16-0325/2019) and was performed in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in this study. 1.Reversal to normal anatomy: a.Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch. 
The study was approved by the Tel-Aviv Sourasky Medical Center Institutional Review Board (TLV-16-0325/2019) and was performed in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in this study.

2.1. Statistical Analysis: Continuous data are presented as medians (interquartile range (IQR)). Proportions are presented as n (%). Dichotomous data were analyzed using Fisher's exact test. Continuous data were analyzed using the Mann–Whitney test.

2.2. Revisional Surgery Technique Highlights:
1. Reversal to normal anatomy:
   a. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch.
   b. Construction of a wide side-to-side gastro-gastrostomy, using a 60 mm cartridge linear stapler and a continuous self-retaining suture for defect closure.
2. BPL shortening:
   a. Horizontal antimesenteric transection of the gastro-jejunostomy, while preserving adequate intestinal continuity, using a linear stapler. The old staple line is then resected off the distal pouch.
   b. Refashioning of the BPL length at 65, 100, and 120 cm (from 170, 170, and 220 cm, respectively) and creation of a new side-to-side gastro-jejunostomy.
3. Conversion to RYGB:
   a. Resection of the gastro-jejunal complex, using three firings of a linear stapler.
   b. Re-establishment of bowel continuity by construction of a side-to-side jejuno-jejunostomy.
   c. Conversion to RYGB, with a BPL length of 50 and 100 cm (instead of 350 and 200 cm, respectively).
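The statistical methods in Section 2.1 (medians with IQR, Fisher's exact test for dichotomous data, Mann–Whitney for continuous data) can be sketched in pure Python. All sample values, variable names, and the 2x2 table below are illustrative assumptions, not the study's actual data:

```python
# Minimal pure-stdlib sketch of the statistics described in Section 2.1.
# Sample values and names are illustrative, not the study's data.
from itertools import product
from math import comb

def median(values):
    """Median of a sample."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def median_iqr(values):
    """Median with Tukey-style quartiles (median of each half)."""
    s = sorted(values)
    n = len(s)
    half = n // 2
    return median(s), median(s[:half]), median(s[n - half:])

def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for xs vs. ys (ties count 0.5)."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0
               for x, y in product(xs, ys))

def fisher_exact_2x2(table):
    """Two-sided Fisher's exact p-value for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def pmf(x):  # hypergeometric probability that cell (0, 0) equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = pmf(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(pmf(x) for x in range(lo, hi + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# Dichotomous example: revision type vs. complete PEM resolution
# (illustrative 2x2, loosely modeled on the 7 resolved / 3 unresolved split).
p = fisher_exact_2x2([[7, 0], [0, 3]])
```

This avoids external dependencies; with SciPy available, `scipy.stats.fisher_exact` and `scipy.stats.mannwhitneyu` provide the same tests with p-values.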
3. Results: Ten patients underwent revisional surgery due to severe PEM. The incidence of revisional surgery for PEM in our bariatric center is only 0.63% (9/1425 OAGB). Patients' characteristics at OAGB are presented in Table 1. The male-to-female ratio was 2:8, the median (interquartile range) age was 47.5 years (range 42.2, 58.2), and the median BMI was 44.2 kg/m2 (range 35.8, 47). The OAGB was primary in five patients and secondary in five. Previous BMS included sleeve gastrectomy, adjustable gastric banding, and silastic ring vertical gastroplasty (n = 2, 2, and 1, respectively). The median BPL length was 200 cm (range 177, 200).
All patients were in good nutritional status before undergoing OAGB, and 3/10 had a mild micronutrient deficiency (iron, B12, or vitamin D). The patients' characteristics of PEM are presented in Table 2. The median EWL was 103.7% (range 57.6, 114), and the median TWL was 43% (range 35.7, 56.9). Symptoms included mainly marked weakness, dizziness, syncope, peripheral edema, and diarrhea (watery or steatorrhea). The lowest albumin was a median of 24 g/dL (range 19, 27), and 8/10 patients had >1 significant vitamin/mineral deficiencies (iron, calcium, vitamin D3, B1, B6, B12, folic acid). Nutritional optimization included nutritional supplements and medications to aid absorption, consisting of enteral nutrition (EN), total parenteral nutrition (TPN), multivitamins (MV), loperamide, and pancreolipase (n = 10, 5, 10, 4, 8, and 7, respectively). The patients' characteristics at revisional surgery are presented in Table 3. Revision was performed at a median interval of 18.4 months (range 15.7, 27.8) after OAGB. At revision, the median age, BMI, and highest albumin were 49.5 years (range 43.5, 61.5), 22.4 kg/m2 (range 20.6, 30.3), and 35.5 g/dL (range 29.2, 41), respectively. The type of revision included reversal to normal anatomy, BPL shortening, and conversion to RYGB, performed in five, three, and two patients, respectively. Revisional surgery was laparoscopic in 8/10 patients and through laparotomy in 2/10 patients (one reversal and one conversion to RYGB). The BPL was found to be longer than reported in 2/10 patients (250 instead of 200 cm, and 350 instead of 250 cm), and the median actual BPL length was 200 cm (range 177, 227). At revision, the common channel was found to be at least 300 cm, except in one patient, whose OAGB had been performed at another hospital. The median operative time was 88.5 min (range 66.8, 117).
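The %EWL and %TWL figures above follow the standard bariatric definitions; a minimal sketch (the function names and the ideal-weight convention of BMI 25 kg/m2 are our assumptions, not necessarily the study's):

```python
# Standard bariatric weight-loss metrics. Function names are ours, and the
# "ideal weight at BMI 25 kg/m^2" convention is one common choice.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def pct_twl(initial_kg, current_kg):
    """Percent total weight loss (%TWL)."""
    return 100.0 * (initial_kg - current_kg) / initial_kg

def pct_ewl(initial_kg, current_kg, height_m, ideal_bmi=25.0):
    """Percent excess weight loss (%EWL). Values above 100% mean the
    patient has fallen below the ideal weight, as seen in severe PEM."""
    ideal_kg = ideal_bmi * height_m ** 2
    return 100.0 * (initial_kg - current_kg) / (initial_kg - ideal_kg)

# Hypothetical patient: 120 kg at 1.65 m (BMI ~44), down to 60 kg.
twl = pct_twl(120, 60)        # 50.0 %TWL
ewl = pct_ewl(120, 60, 1.65)  # exceeds 100%: below ideal weight
```

An EWL above 100%, as in this cohort's median of 103.7%, indicates that patients lost more than their entire excess weight.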
The median LOS was 8 days (range 7.2, 12.2), and early complications of Clavien–Dindo (CD) grade >3 occurred in one patient (1/10), who had an anastomotic leak after BPL shortening and underwent debridement with primary repair. There was no mortality. None of the patients had liver function derangements. None of the patients had villous atrophy or other gastrointestinal tract pathologies on pre-revision investigation (0/10) or in the examined operative specimens (0/6). The median follow-up after revision was 25.4 months (range 3.1, 45). The median weight, BMI, and albumin increased from 60 kg (range 51.7, 79.2), 22.4 kg/m2 (range 20.6, 30.3), and 35.5 g/dL (range 29.2, 41), respectively, at revision to 73.5 kg (range 59, 89), 27.5 kg/m2 (range 22.2, 32.4), and 39.5 g/dL (range 37.2, 41.7), respectively, at last follow-up. PEM resolved in 7/10 patients, improved in 2/10 (who remain on supportive medications and nutritional support), and did not change in 1/10. The latter had undergone BPL shortening and is a candidate for reversal to normal anatomy. BPL shortening did not result in complete resolution, as opposed to conversion to RYGB and reversal. In a univariate analysis including all of the aforementioned parameters, we found that only the type of revision had a significant correlation with complete resolution of PEM. Prior to revision, all patients had a BPL length >150 cm, and in a univariate analysis, none of the variables was found to be a risk factor for the development of PEM. The BMS effects were maintained in 9/10 patients, whereas 1/10 patients regained weight significantly, to a BMI >40 kg/m2. All the patients who had T2D (n = 3) and NAFLD (n = 4) at OAGB had complete resolution at revisional surgery and last follow-up. 4. Discussion: OAGB is one of the three commonly performed, accepted BMS procedures [13].
There is cumulative evidence of satisfactory excess weight loss—88% at 2 years, 77% at 6 years, and 70% at 12 years postoperatively [6]—and of remission or improvement of obesity-associated medical problems, especially type II diabetes (T2D) resolution—85–90% at 1 year postoperatively [4,14] and ~70% at 5–15 years [15]. OAGB is safe, with 5% overall morbidity and 0.2% mortality rates [5]. However, as in other BMS, worrisome, delayed-onset complications might develop and affect patients' health and quality of life. These include mainly bile reflux, anastomotic ulcer, and PEM, and may infrequently require revisional surgery [16]. Khrucharoen et al. showed in a systematic review that 26% (46/179) of OAGB revisions were for PEM [17]. The incidence of revisions for PEM varies between studies. Rutledge reported that 31/2410 patients (1.28%) underwent reversal due to excessive weight loss [18]. Parmar et al. found a cumulative rate of 0.71% (range, 0–3.8%) in a systematic review of 12,807 OAGB patients [19]. Recent studies, such as the Italian multi-institutional survey of Musella et al. [16] and the single-center studies of Jedamzik et al. [20] and Almuhanna et al. [15], reported rates of 0.18% (16/8676 OAGB patients), 0.9% (9/1025 OAGB patients), and 2.3% (51/2223 OAGB patients), respectively. In the current single-center study, the rate is only 0.63% (9/1425 OAGB). In line with the literature, most of our patients (7/10) had their OAGB-to-revision interval within the second postoperative year; however, PEM can develop thereafter. We encourage more surgeons to report their long-term rates of this uncommon complication. PEM after OAGB is mainly attributed to its strong malabsorptive component, i.e., the BPL length. In a comparative study of BPL lengths of 150, 180, and 250 cm, Ahuja et al.
[21] found no significant difference in OAGB effectiveness at 1 year (i.e., EWL, resolution of T2D, HTN); however, a 250 cm BPL was associated with worse nutritional deficiencies, and one patient died of liver failure. In another comparative study of BPL lengths of 150, 180, and 200 cm, Pizza et al. [22] did not find any significant difference in effectiveness or nutritional status at 2 years, except for iron and ferritin levels, which were significantly lower with a 200 cm BPL. Nevertheless, in both studies, none of the patients underwent revisional surgery. In contrast, the current study focuses on patients operated on for severe PEM, and none of them had a BPL length of 150 cm (median 200 cm, range 177, 227). In 2/10 patients, the actual BPL found at revision was longer than reported at OAGB. We therefore advise counting the BPL length out loud during OAGB, so the team can double-check it. We do not routinely measure the total small bowel length (SBL) at OAGB. In the first years, we used a fixed ~200 cm BPL, and occasionally a 250 cm BPL for BMI >60, without measuring the total SBL. Following accumulating data and personal experience with the uncommon occurrence of significant diarrhea, excessive weight loss, and nutritional deficiencies, we changed the BPL length to 150–200 cm, depending mainly on the patient's BMI and the indication (primary or secondary OAGB). Ramos et al. outlined the 2020 IFSO consensus conference statement on OAGB, postulating a clear consensus that a BPL length <200 cm is adequate, that the BPL should only be >200 cm if the total SBL is measured and suitably long, and that the BPL length can be determined according to the BMI [23]. Total SBL varies widely [24]; therefore, some surgeons routinely measure the total SBL and tailor the BPL accordingly, in order to achieve adequate bariatric and metabolic goals on the one hand and minimize PEM occurrence on the other. Komaei et al.
[25] retrospectively compared outcomes between a fixed 200 cm BPL without SBL measurement and a BPL tailored to 40% of the total SBL. There was no significant difference in efficacy, but more patients in the fixed-BPL group had nutritional deficiencies (p < 0.05), although none of them required revision. Many surgeons have not endorsed this approach, and there is no consensus regarding its necessity or the ideal BPL percentage of total SBL [23]. Furthermore, we share the fear of other bariatric surgeons that routine total SBL measurement adds a risk of unintentional bowel injury. Instead, we believe such measurements are important at revisional surgery of OAGB in cases of severe PEM, or following weight regain (WR) when malabsorption intensification is considered, as in the case of RYGB [26]. As with WR, the etiology of PEM is probably multifactorial [26]. An altered gut–hormonal balance may explain cases of PEM that are refractory to intensive treatment and cases of ongoing PEM despite BPL shortening. Further investigation is needed to support or refute this assumption. New-onset anorexia or pre-existing psychiatric disorders are other possible causes. Some patients do not comply with the strict nutritional recommendations, perhaps due to their high cost, even in the event of PEM. Secondary pathologies such as colitis, enteritis with villous atrophy, undiagnosed celiac disease, pancreatic insufficiency, or small intestinal bacterial overgrowth could serve as another explanation. In our cohort, all patients were in good nutritional status before undergoing OAGB, and none of them had any evidence of the aforementioned GI pathologies, but they were non-compliant with nutritional recommendations after OAGB. Revisional surgery for PEM should be preceded by nutritional status optimization, which is also partly dependent on patient compliance.
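The limb-length considerations above (a fixed BPL vs. one tailored to a fraction of measured total SBL, within the 2020 IFSO consensus bounds) can be encoded as a schematic decision helper. This is our illustrative reading only: the fixed-length choices, the 40% tailoring fraction (per Komaei et al.), and the 300 cm minimum common channel are assumptions, not a published algorithm:

```python
# Schematic BPL-length helper based on the consensus points discussed above.
# All thresholds and defaults here are illustrative assumptions.
def choose_bpl_cm(bmi, total_sbl_cm=None, tailored_fraction=0.40,
                  min_common_channel_cm=300):
    if total_sbl_cm is not None:
        # Tailored approach: a fraction of the measured total SBL,
        # capped so the common channel stays sufficiently long.
        bpl = tailored_fraction * total_sbl_cm
        return min(bpl, total_sbl_cm - min_common_channel_cm)
    # Fixed approach without SBL measurement: at most 200 cm,
    # leaning shorter for lower BMI.
    return 150 if bmi < 50 else 200

print(choose_bpl_cm(45))                    # 150
print(choose_bpl_cm(62, total_sbl_cm=600))  # 240.0 (40% of 600 cm)
```

A BPL above 200 cm is only returned when the total SBL was measured, mirroring the consensus point that longer limbs require a measured, suitably long small bowel.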
Several revision options are described in the literature, including BPL shortening, conversion to RYGB, conversion to SG, and reversal to normal anatomy. All these surgical options are generally feasible and relatively safe. However, PEM-wise, it is difficult to assess the cumulative complication and resolution rates for each revision option, for several reasons. Most studies do not focus on PEM as the sole indication for revision. The complication rate of revision for PEM may be higher than for other indications, especially if the patient is not amenable to optimal correction prior to revision, as occurred in one of our patients. In addition, most reports do not elaborate on PEM outcomes after revision, and some studies did not specify which revision types were performed. Furthermore, there is a lack of standardized criteria for revisional surgery, and most studies do not specify their definition of severe PEM, simply using terms such as 'malnutrition', 'protein–energy/calorie malnutrition', 'excessive weight loss', 'macro- or micronutrient deficiency', and 'hypoalbuminemia'. There is no standardized protocol of nutritional status optimization prior to revision either. In addition, there is variance in BPL lengths and other OAGB parameters. Hence, it is difficult to determine the best revisional surgery for PEM or to delineate guidelines or an algorithm for the management of severe PEM after OAGB. Despite these limitations, a few observations can be made regarding the different revisional options. Reversal of OAGB is generally feasible, safe, and simple to perform [27]. Gesner et al. [28] reported a high rate of overall morbidity, which was reduced when they switched from simple transection of the gastro-jejunostomy to resection and construction of a jejuno-jejunostomy (50 vs. 8.3%, p = 0.03).
In their first 14 cases, they left the old gastro-jejunal staple line untouched and consequently had 4 complications there (leak and stenosis), while in the current study, although we had fewer patients, we prevented these complications (0/5 patients) by resecting the old staple line and assuring adequate patency of the intestinal lumen. We therefore suggest being aware of two pitfalls. First, simple transection of the gastro-jejunal anastomosis should be performed only if it is possible to preserve adequate intestinal patency (i.e., without narrowing the lumen) and provided the old staple line is completely resected; otherwise, resection with a jejuno-jejunal anastomosis is advised. Second, the gastro-gastrostomy should be wide enough (>6 cm) to prevent stricture. Reversal to normal anatomy is advantageous since it ensures resolution of PEM, and we believe it is more adequate for patients with severe PEM, especially non-compliant ones. The disadvantage of reversal is the possible risk of significant weight regain and recurrence of obesity-associated problems. Consequently, many patients refuse this option and prefer one of the revisional alternatives, which are more likely to preserve BMS outcomes. BPL shortening maintains the 'OAGB structure' while reducing the malabsorptive component and is therefore expected to result in PEM resolution. Interestingly, while Hussain et al. [29] reported resolution of intractable diarrhea, PEM, or deranged liver functions in eight patients undergoing BPL shortening to 150 cm (from >200 cm), none of our three patients had resolution of PEM, despite a shorter revised BPL of 65–120 cm (from 170 or 220 cm). We assume that for longer initial BPLs, shortening may sometimes be adequate; however, this revisional option mandates careful surveillance due to the risk of ongoing PEM. More data are needed. Conversion to RYGB or SG are other options that are likely to preserve BMS outcomes. Chen et al.
[30] performed conversion to sleeve gastrectomy by transecting the gastro-jejunal anastomosis, applying a hand-sewn anastomosis of the distal gastric pouch to the antrum of the remnant stomach, and vertically resecting the remnant stomach over an endoscope. They reported significant improvement in malnutrition with maintained weight loss at early follow-up. Their study, however, included both OAGB and RYGB patients and other indications for revision, and the overall complication rate was 8.1% for the entire cohort. We did not perform this type of revisional surgery because of the possibility of replacing PEM with other complications of SG (for example, gastroesophageal reflux disease or stricture) and because it precludes future reversal to normal anatomy in case of persistent nutritional deficiencies. Jedamzik et al. performed conversions to RYGB and concluded that BPL shortening to 35–100 cm with conversion to RYGB is feasible and safe for severe malnutrition [20]. Similarly, two of our patients were converted to RYGB with a revised BPL of 50–100 cm and achieved PEM resolution. Khrucharoen et al. [17] concluded that although revision to RYGB was technically simpler than revision to SG or to normal anatomy, it should be avoided in PEM, due to the risk of further malabsorption posed by the remaining bypassed BPL. They also mentioned that a three-limb measurement is important for deciding whether to relocate the gastro-jejunal anastomosis or simply create a jejuno-jejunostomy, since the latter option might exacerbate malnutrition. They also concluded that reversal to the original anatomy is beneficial, though it can be technically challenging and may be associated with an increased risk of complications (i.e., gastro-jejunal leak and stenosis). Haddad et al. [1] reported in the IFSO worldwide OAGB survey the operative data of 239/277 patients who underwent revision for malnutrition or steatorrhea.
Revisions included conversion to RYGB (43%), reversal (32%), BPL shortening (20%), and conversion to SG (5%). BPL length data were available for 244/277 patients, revealing that the BPL length in these patients was most commonly 200 cm. They also reported a 5% (5/98 OAGB mortalities) mortality rate due to liver failure or malnutrition. Kermansaravi [31] reported a cumulative rate of revisions for PEM of 0.84% (153/17,938 patients) and suggested avoiding a BPL >150 cm to reduce the rate of PEM, given that BMS outcomes seem similar for 150 cm compared with 200 cm [32]. In the current study, we found that the revision type is an important factor for complete resolution of PEM; however, large comparative studies are needed to examine all aspects of revisional surgery for severe PEM following OAGB. Our study has several limitations. It is a retrospective observational study of a small cohort, with no comparison groups. The real incidence of PEM may be underestimated, due to a ~30% loss to follow-up. Large comparative studies are needed to establish the proper surgical management of severe PEM, to delineate a uniform threshold for revisional surgery, and to identify risk factors for the development of PEM. Despite these limitations, this is a very important issue for further discussion and investigation, since severe PEM might rarely lead to hepatic insufficiency and death. We have described in detail this uncommon complication and the outcomes of three different revisional options. We believe that OAGB is a very good operation for severe obesity, and its delayed complications, which also occur after other BMS, are uncommon. 5. Conclusions: Revisional surgery of OAGB for severe PEM is feasible and safe after nutritional optimization. Our results suggest that the type of revision may be an important factor for PEM resolution. Comparative studies are needed to better define the role of each revisional option.
Background: One anastomosis gastric bypass (OAGB) is safe and effective. Its strong malabsorptive component might cause severe protein-energy malnutrition (PEM), necessitating revisional surgery. We aimed to evaluate the safety and outcomes of OAGB revision for severe PEM. Methods: This was a single-center retrospective analysis of OAGB patients undergoing revision for severe PEM (2015-2021). Perioperative data and outcomes were retrieved. Results: Ten patients underwent revision for severe PEM. Our center's incidence is 0.63% (9/1425 OAGB). All patients were symptomatic. Median (interquartile range) EWL and lowest albumin were 103.7% (range 57.6, 114) and 24 g/dL (range 19, 27), respectively, and 8/10 patients had significant micronutrient deficiencies. Before revision, nutritional optimization was undertaken. Median OAGB to revision interval was 18.4 months (range 15.7, 27.8). Median BPL length was 200 cm (range 177, 227). Reversal (n = 5), BPL shortening (n = 3), and conversion to Roux-en-Y gastric bypass (RYGB) (n = 2) were performed. One patient had anastomotic leak after BPL shortening. No death occurred. Median BMI and albumin increased from 22.4 kg/m2 (range 20.6, 30.3) and 35.5 g/dL (range 29.2, 41), respectively, at revision to 27.5 (range 22.2, 32.4) kg/m2 and 39.5 g/dL (range 37.2, 41.7), respectively, at follow-up (median 25.4 months, range 3.1, 45). Complete resolution occurs after conversion to RYGB or reversal to normal anatomy, but not after BPL shortening. Conclusions: Revisional surgery of OAGB for severe PEM is feasible and safe after nutritional optimization. Our results suggest that the type of revision may be an important factor for PEM resolution. Comparative studies are needed to define the role of each revisional option.
1. Introduction: One anastomosis gastric bypass (OAGB) is the third most commonly performed bariatric metabolic surgery (BMS) worldwide [1], and the most common in our bariatric center. It combines restriction with a dominant malabsorptive component, and it is simple to perform, with a short learning curve [2,3]. OAGB is considered safe and effective as both a primary and a secondary BMS. It confers good outcomes in terms of satisfactory weight reduction and resolution of obesity-associated medical problems [4,5,6]. OAGB results in equal or better outcomes compared with Roux-en-Y gastric bypass (RYGB) on the one hand, and more nutritional deficiencies on the other, owing to its stronger malabsorptive component [7,8], i.e., the longer biliopancreatic limb (BPL). Severe protein–energy malnutrition (PEM) is an uncommon complication following OAGB that rarely requires corrective surgery [9], as in other BMS [10]. There is no uniform definition in the bariatric literature of protein–energy malnutrition, its severity grading, or the threshold for revisional surgery. We aimed to evaluate the safety and outcomes of revisional surgery for severe PEM.
Evaluation of Micronuclei and Cytomorphometric Changes in Patients with Different Tobacco Related Habits Using Exfoliated Buccal Cells.
34181342
Tobacco is one of the main reasons behind the occurrence of oral cancer. Oral cancer, even though being the tenth most common cancer in the world, gets diagnosed at an advanced stage and ends up with poor prognosis. So early diagnosis is the need of the hour. Our study aimed to evaluate the genotoxic changes in patients with different tobacco habits using buccal exfoliated cells.
BACKGROUND
Buccal smears were taken from smokers (30), smokeless tobacco users (30), combined tobacco users (30) and controls (30) with clinically normal oral mucosa. All the smears were stained with Papanicolaou stain and Feulgen stain and viewed under light microscope for the evaluation of mean number of micronuclei, mean micronuclei per cell, frequency of cells showing micronuclei, nuclear area, cytoplasmic area, nuclear-cytoplasmic ratio.
METHODS
Mean number of micronuclei, mean micronuclei per cell, frequency of cells showing micronuclei, and nuclear area were significantly increased in tobacco users than controls, especially in combined tobacco users. Nuclear-cytoplasmic ratio was increased and cytoplasmic area was decreased in tobacco users than controls.
RESULTS
Tobacco in any consumable form is genotoxic. Smoking and smokeless tobacco, when consumed together, synergistically cause greater genetic damage. Different tobacco habits have different deleterious effects on the oral mucosa, and these effects are more pronounced when patients have combined habits. Detecting these genotoxic changes through exfoliative cytology can therefore serve as a simple yet reliable marker for the early detection of carcinogenesis.
CONCLUSION
[ "Adult", "Case-Control Studies", "Female", "Humans", "Male", "Micronuclei, Chromosome-Defective", "Micronucleus Tests", "Mouth Mucosa", "Mouth Neoplasms", "Mutagenicity Tests", "Tobacco Use Disorder" ]
8418841
Introduction
Oral cancer is one of the top three cancers in India, accounting for 30% of all cancers. Oral cancer largely results from tobacco consumption in any form, which is closely associated not only with the development of oral cancer but also with a poor prognosis (Kashyap et al., 2012). The aggressive chemicals present in tobacco cause extensive genetic damage to the human body, some of which is irreversible. Genetic damage begins long before the clinical lesion appears, so early diagnosis and prevention are essential. Buccal cells, being the first barrier, represent a preferred target site for early genotoxic events induced by carcinogenic agents through the inhalation or ingestion route, and are capable of metabolizing proximate carcinogens to reactive products (Torres-Bugarin et al., 2014). These changes include the formation of micronuclei and alterations in nuclear size, cell size, nuclear-cytoplasmic ratio, nuclear shape, nuclear discontinuity, optical density and nuclear texture. Exfoliative cytology could be of great value for identifying these genotoxic changes. The present study was undertaken to assess genotoxic changes such as micronuclei frequency, nuclear area, cytoplasmic area and nuclear-cytoplasmic ratio in squames from clinically normal buccal mucosa of tobacco users (smokers, tobacco chewers and a combined-habit group) and non-users of tobacco, and to compare and correlate the findings.
null
null
Results
Results obtained were similar using either PAP or Feulgen stain in almost all the parameters evaluated. The mean number of micronuclei, mean micronuclei per cell, and frequency of cells showing micronuclei were significantly higher in tobacco users (Groups I, II and III) when compared with controls (Group IV). Among the participants habituated to tobacco, all the parameters related to micronuclei were highest in the combined tobacco users (Group III), followed by smokeless tobacco users (Group II) and smokers (Group I) (p<0.001) (Table 1). Cytomorphometric assessment of nuclear area, cytoplasmic area (cell area - nuclear area), and nuclear-cytoplasmic ratio was done using Papanicolaou stain. Mean nuclear area was significantly higher in tobacco users when compared with controls. Among the habit groups, nuclear area was significantly increased in smokers, followed by smokeless tobacco users and combined tobacco users (p<0.001) (Table 2). Comparison of mean cytoplasmic area and nuclear-cytoplasmic ratio using one-way ANOVA showed no significant difference among the various study groups.
Table and figure captions: Intergroup Comparisons of Mean Number of Micronuclei, Mean Micronuclei Per Cell, Frequency of Cells Showing Micronuclei among Various Study Groups Using Feulgen Stain (†, Kruskal-Wallis, post hoc; two-sided p-value ≤ 0.05); Comparison of Mean Nuclear Area among the Study Groups Using Pap Stain; Intergroup Comparison of Mean Nuclear Area among the Different Groups Using Pap Stain; Smears Stained with Feulgen Stain Showing Micronuclei (Arrows); Cytomorphometric Analysis of Nuclear Area, Cytoplasmic Area using Pap Stain
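The three micronucleus parameters reported above (mean number of micronuclei, mean micronuclei per cell, frequency of cells showing micronuclei) can all be derived from per-cell micronucleus counts. A minimal sketch in Python; the counts below are hypothetical illustrations, not the study's data:

```python
# Hypothetical per-cell micronucleus counts for one participant; the study
# scored 100 cells per patient, only 10 are shown here for illustration.
counts = [0, 0, 1, 0, 2, 0, 0, 1, 0, 0]

def micronucleus_parameters(counts):
    """Return (total micronuclei, mean micronuclei per cell,
    frequency of cells showing at least one micronucleus)."""
    total = sum(counts)
    mean_per_cell = total / len(counts)
    frequency = sum(1 for c in counts if c > 0) / len(counts)
    return total, mean_per_cell, frequency

print(micronucleus_parameters(counts))  # (4, 0.4, 0.3)
```

The per-patient values computed this way would then be compared across the four groups with the Kruskal-Wallis and post-hoc tests named in the Methods.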
null
null
[ "Author Contribution Statement" ]
[ "Study conception and design - Kokila Sivakumar and Harikrishnan Prasad. Clinical studies and sample collection - Kokila Sivakumar. Data acquisition and analysis - Kokila Sivakumar. Statistical analysis - Harikrishnan Prasad. General supervision - Srichinthu Keniyan Kumar, Rajmohan Muthusamy, Mahalakshmi Loganathan, Shanmuganathan Sivanandham, Prema Perumal. Manuscript preparation - Kokila Sivakumar and Harikrishnan Prasad. Manuscript editing and suggestions – Harikrishnan Prasad and Srichinthu Keniyan Kumar. All the authors contributed equally to the study and the manuscript work. All of them reviewed the results and approved the final version of the manuscript." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Author Contribution Statement" ]
[ "Oral cancer is one of the top three cancers in India, accounting for 30% of all cancers. Oral cancer largely results from tobacco consumption in any form, which is closely associated not only with the development of oral cancer but also with a poor prognosis (Kashyap et al., 2012). The aggressive chemicals present in tobacco cause extensive genetic damage to the human body, some of which is irreversible. Genetic damage begins long before the clinical lesion appears, so early diagnosis and prevention are essential. Buccal cells, being the first barrier, represent a preferred target site for early genotoxic events induced by carcinogenic agents through the inhalation or ingestion route, and are capable of metabolizing proximate carcinogens to reactive products (Torres-Bugarin et al., 2014). These changes include the formation of micronuclei and alterations in nuclear size, cell size, nuclear-cytoplasmic ratio, nuclear shape, nuclear discontinuity, optical density and nuclear texture. Exfoliative cytology could be of great value for identifying these genotoxic changes. The present study was undertaken to assess genotoxic changes such as micronuclei frequency, nuclear area, cytoplasmic area and nuclear-cytoplasmic ratio in squames from clinically normal buccal mucosa of tobacco users (smokers, tobacco chewers and a combined-habit group) and non-users of tobacco, and to compare and correlate the findings.", "Institutional ethical clearance was obtained before commencing the study. 
A total of 120 individuals without oral lesions were included in the study.\n• Group I - Individuals habituated with smoking tobacco - 30\n• Group II - Individuals habituated with smokeless tobacco - 30\n• Group III - Individuals habituated with both smoking and smokeless tobacco - 30\n• Group IV - Individuals without any deleterious habits – 30 (Controls)\nIndividuals with any history of systemic disease, a recent history of viral infection or hospitalization, recent exposure to radiologic investigations, or a habit of alcohol use were excluded from the study.\nSmears were taken by gently scraping the buccal mucosa of the participants in the premolar-molar area with a wooden spatula. The smears were placed on pre-cleaned, number-coded microscope slides and fixed in 70% ethanol. Four smears were collected from each individual. All the smears were stained with Papanicolaou stain (PAP) following the manufacturer-recommended protocol provided in the Rapid PAP kit. Feulgen staining was done using the protocol described by Gopal and Padma (2018). All the PAP- and Feulgen-stained slides were viewed under a light microscope, and cytomorphometric analysis was done with the help of Jenoptik pRogress software. One hundred cells per patient were evaluated for micronuclei using the criteria of Tolbert et al. (1992). The extranuclear cytoplasmic DNA fragments satisfying the following criteria were counted as micronuclei. \n• Micronuclei must be clearly separated from the main nucleus. \n• Micronuclei must have a smooth, oval or round shape. \n• Texture similar to the nucleus. \n• Less than a third the diameter of the associated nucleus, but large enough to discern shape and color. \n• Staining intensity similar to the nucleus. \n• Same focal plane as the nucleus. \nThe criteria of Tolbert et al. (1992) for excluding cells from micronuclei assessment were also followed. 
The cells with the following features were not taken for micronuclei assessment: \n• Cells with two nuclei.\n• Dead or degenerating cells (karyolysis, karyorrhexis, nuclear fragmentation). \n• Nuclear blebbings (micronucleus-like structures connected with the main nucleus by a bridge). \n• Anucleated cells.\nThe mean number of micronuclei, mean micronuclei per cell, and frequency of cells showing micronuclei were evaluated for each patient. Cytomorphometric assessment of nuclear area, cytoplasmic area and nuclear-cytoplasmic ratio was done for 100 cells in each patient using Jenoptik pRogress software tools. Results obtained were analysed using one-way ANOVA, the Kruskal-Wallis test, and the Mann-Whitney test, followed by post-hoc tests.", "Results obtained were similar using either PAP or Feulgen stain in almost all the parameters evaluated. The mean number of micronuclei, mean micronuclei per cell, and frequency of cells showing micronuclei were significantly higher in tobacco users (Groups I, II and III) when compared with controls (Group IV). Among the participants habituated to tobacco, all the parameters related to micronuclei were highest in the combined tobacco users (Group III), followed by smokeless tobacco users (Group II) and smokers (Group I) (p<0.001) (Table 1). \nCytomorphometric assessment of nuclear area, cytoplasmic area (cell area - nuclear area), and nuclear-cytoplasmic ratio was done using Papanicolaou stain. Mean nuclear area was significantly higher in tobacco users when compared with controls. Among the habit groups, nuclear area was significantly increased in smokers, followed by smokeless tobacco users and combined tobacco users (p<0.001) (Table 2). Comparison of mean cytoplasmic area and nuclear-cytoplasmic ratio using one-way ANOVA showed no significant difference among the various study groups. 
\nIntergroup Comparisons of Mean Number of Micronuclei, Mean Micronuclei Per Cell, Frequency of Cells Showing Micronuclei among Various Study Groups Using Feulgen Stain\n†, Kruskal-Wallis, post hoc; two-sided p-value ≤ 0.05\nComparison of Mean Nuclear Area among the Study Groups Using Pap Stain\nIntergroup Comparison of Mean Nuclear Area among the Different Groups Using Pap Stain\nSmears Stained with Feulgen Stain Showing Micronuclei (Arrows)\nCytomorphometric Analysis of Nuclear Area, Cytoplasmic Area using Pap Stain", "Oral cancer is a multistage disease; it arises from normal mucosa, progresses to dysplasia and ultimately ends as cancer. Development of oral cancer proceeds through discrete genetic changes that occur due to loss of genomic integrity after continuous exposure to carcinogenic agents (Park et al., 2011).\nThe carcinogenic effect of tobacco habits, inducing genotoxic effects on oral mucosal cells, can be detected with appropriate investigations. It is widely accepted that genotoxic studies in exfoliated buccal cells remain among the reliable, sensitive markers for the early diagnosis of oral cancer in tobacco users (Singam et al., 2019). Our study assesses the genotoxic effect of different types of tobacco on the oral mucosa before lesions appear in the oral cavity. Our study was designed to evaluate micronuclei and cytomorphometric changes (nuclear area, cytoplasmic area, nuclear-cytoplasmic ratio) associated with smokers, smokeless tobacco users, combined tobacco users and healthy individuals without any habits. Buccal smears from all subjects in the four study groups were stained with PAP and Feulgen stain and the parameters were evaluated.\nWe observed that the mean number of micronuclei and mean micronuclei per cell were higher in combined tobacco users than in smokers and smokeless tobacco users. Our results were in accordance with the studies conducted by (Sellapa et al., 2009; Dash et al., 2018). 
On the other hand, our findings were contradictory to the findings observed by (Bonassi et al., 2003; Pradeep et al., 2014), who stated that the number of micronuclei was higher in smokers than in other groups. We also found that the frequency of cells showing micronuclei was significantly increased in combined tobacco users when compared with other groups. Similar findings were observed by (Upadhyay et al., 2019; Chandirasekar et al., 2019). The micronuclei-related genotoxic alterations in the cells may be due to the direct exposure of buccal mucosa cells to the carcinogenic amines present in tobacco (Proia et al., 2006). Cells bearing damaged DNA will mostly survive and replicate with the damage, resulting in a higher frequency of micronuclei (Moghaddam et al., 2020). Tobacco-specific nitrosamines are believed to be responsible for the induction of micronuclei (Muhammed et al., 2021). The increase in all micronuclei-related changes may be due to the synergistic effect of the combined use of smoking and smokeless tobacco, which results in higher genotoxicity in buccal mucosa cells than when either is consumed alone (Dash et al., 2018). Heat and chemical exposure from smoking, together with continuous exposure to tobacco-specific amines from smokeless tobacco, prevents the cells from dividing further; the nuclei then disintegrate due to the carcinogenic exposure, inducing the formation of micronuclei. Smoking and smokeless tobacco consumed together have been associated with an increased risk of oral squamous cell carcinoma (Mello et al., 2019). Individuals who smoke and drink alcohol together are more prone to oral cancers than those who have the habits separately (Liu et al., 2015).\nWe found that the smokers group showed the highest mean nuclear area when compared to smokeless tobacco users and combined tobacco users. Similar findings were observed by (Einstein et al., 2005; Khot et al., 2015). 
Tobacco causes an increase in the nuclear size of buccal cells, due to cellular adaptation in response to the carcinogens in tobacco. Buccal epithelial cells have a decreased turnover and remain in the cell cycle for long periods, which in turn increases the nuclear area. We found that mean cytoplasmic area was significantly higher in the control group when compared to the tobacco user groups. Similar findings were observed by (Parmar et al., 2010; Babuta et al., 2014; Santos et al., 2017). The decrease in the cytoplasmic area of smokeless tobacco users may be due to the close contact between the smokeless tobacco and the oral mucosa. This allows carcinogenic by-products to infiltrate the mucosa, since the tobacco is kept in the oral cavity for long periods. As a result, the cell undergoes dehydration, causing shrinkage of the cytoplasm. We also found that the nuclear-cytoplasmic ratio was significantly higher in the smokeless tobacco group when compared to smokers and controls. Similar findings were observed by (Singh et al., 2014; Khot et al., 2015). The increase in the nuclear-cytoplasmic ratio in smokeless tobacco users might be due to the synchronous increase in nuclear area and decrease in cytoplasmic area. Our results were in accordance with the studies conducted by (Parmar et al., 2015; Mohan et al., 2017).\nThere are limited studies evaluating micronuclei and cytomorphometric changes in clinically normal-appearing oral mucosa across different types of tobacco users. To the best of our knowledge, this is the first study to compare the cytotoxic effects of smoking, smokeless tobacco use and combined tobacco use in clinically healthy mucosa. \nWe observed that all parameters related to micronuclei were increased in the habit groups. They were highest in the combined users group, which suggests that the synergistic effect of smoking and smokeless tobacco could cause greater genomic damage. 
The smokers group, however, showed pronounced alterations in cytomorphometric parameters, especially the nuclear area. Smokeless tobacco users had an elevated nuclear-cytoplasmic ratio, suggesting that individuals with a smokeless tobacco habit show both nuclear alterations and changes in cytoplasmic area. Based on the findings of our study, we conclude that different tobacco-related habits have different deleterious effects on the buccal mucosal cells, and these effects are more pronounced when patients have both types of habits together.", "Study conception and design - Kokila Sivakumar and Harikrishnan Prasad. Clinical studies and sample collection - Kokila Sivakumar. Data acquisition and analysis - Kokila Sivakumar. Statistical analysis - Harikrishnan Prasad. General supervision - Srichinthu Keniyan Kumar, Rajmohan Muthusamy, Mahalakshmi Loganathan, Shanmuganathan Sivanandham, Prema Perumal. Manuscript preparation - Kokila Sivakumar and Harikrishnan Prasad. Manuscript editing and suggestions – Harikrishnan Prasad and Srichinthu Keniyan Kumar. All the authors contributed equally to the study and the manuscript work. All of them reviewed the results and approved the final version of the manuscript." ]
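The Tolbert et al. (1992) inclusion criteria quoted in the Methods amount to a rule-based filter over candidate structures. A minimal sketch, assuming a simple dict representation for measured candidates and nuclei; the field names and values are illustrative assumptions, not from the study:

```python
def is_micronucleus(candidate, nucleus):
    """Apply the inclusion criteria: clearly separated, smooth oval/round shape,
    texture and staining intensity similar to the nucleus, diameter under one
    third of the nuclear diameter, and same focal plane."""
    return bool(
        candidate["separated"]
        and candidate["shape"] in ("oval", "round")
        and candidate["texture"] == nucleus["texture"]
        and candidate["diameter"] < nucleus["diameter"] / 3
        and candidate["intensity"] == nucleus["intensity"]
        and candidate["focal_plane"] == nucleus["focal_plane"]
    )

# Illustrative measurements: a 2-unit round fragment next to a 9-unit nucleus.
nucleus = {"diameter": 9.0, "texture": "fine", "intensity": "dark", "focal_plane": 0}
candidate = {"separated": True, "shape": "round", "texture": "fine",
             "diameter": 2.0, "intensity": "dark", "focal_plane": 0}
print(is_micronucleus(candidate, nucleus))  # True
```

In the study's workflow, the cell-level exclusion rules (binucleated, degenerating, blebbed or anucleated cells) would be applied first, before candidates in the remaining cells are scored with a filter of this kind.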
[ "intro", "materials|methods", "results", "discussion", null ]
[ "Micronuclei", "cytomorphometry", "tobacco", "cytoplasmic area", "Feulgen", "exfoliated buccal cells" ]
Introduction: Oral cancer is one of the top three cancers in India, accounting for 30% of all cancers. Oral cancer largely results from tobacco consumption in any form, which is closely associated not only with the development of oral cancer but also with a poor prognosis (Kashyap et al., 2012). The aggressive chemicals present in tobacco cause extensive genetic damage to the human body, some of which is irreversible. Genetic damage begins long before the clinical lesion appears, so early diagnosis and prevention are essential. Buccal cells, being the first barrier, represent a preferred target site for early genotoxic events induced by carcinogenic agents through the inhalation or ingestion route, and are capable of metabolizing proximate carcinogens to reactive products (Torres-Bugarin et al., 2014). These changes include the formation of micronuclei and alterations in nuclear size, cell size, nuclear-cytoplasmic ratio, nuclear shape, nuclear discontinuity, optical density and nuclear texture. Exfoliative cytology could be of great value for identifying these genotoxic changes. The present study was undertaken to assess genotoxic changes such as micronuclei frequency, nuclear area, cytoplasmic area and nuclear-cytoplasmic ratio in squames from clinically normal buccal mucosa of tobacco users (smokers, tobacco chewers and a combined-habit group) and non-users of tobacco, and to compare and correlate the findings. Materials and Methods: Institutional ethical clearance was obtained before commencing the study. A total of 120 individuals without oral lesions were included in the study. 
• Group I - Individuals habituated with smoking tobacco - 30 • Group II - Individuals habituated with smokeless tobacco - 30 • Group III - Individuals habituated with both smoking and smokeless tobacco - 30 • Group IV - Individuals without any deleterious habits – 30 (Controls) Individuals with any history of systemic disease, a recent history of viral infection or hospitalization, recent exposure to radiologic investigations, or a habit of alcohol use were excluded from the study. Smears were taken by gently scraping the buccal mucosa of the participants in the premolar-molar area with a wooden spatula. The smears were placed on pre-cleaned, number-coded microscope slides and fixed in 70% ethanol. Four smears were collected from each individual. All the smears were stained with Papanicolaou stain (PAP) following the manufacturer-recommended protocol provided in the Rapid PAP kit. Feulgen staining was done using the protocol described by Gopal and Padma (2018). All the PAP- and Feulgen-stained slides were viewed under a light microscope, and cytomorphometric analysis was done with the help of Jenoptik pRogress software. One hundred cells per patient were evaluated for micronuclei using the criteria of Tolbert et al. (1992). The extranuclear cytoplasmic DNA fragments satisfying the following criteria were counted as micronuclei. • Micronuclei must be clearly separated from the main nucleus. • Micronuclei must have a smooth, oval or round shape. • Texture similar to the nucleus. • Less than a third the diameter of the associated nucleus, but large enough to discern shape and color. • Staining intensity similar to the nucleus. • Same focal plane as the nucleus. The criteria of Tolbert et al. (1992) for excluding cells from micronuclei assessment were also followed. The cells with the following features were not taken for micronuclei assessment: • Cells with two nuclei. • Dead or degenerating cells (karyolysis, karyorrhexis, nuclear fragmentation). 
• Nuclear blebbings (micronucleus-like structures connected with the main nucleus by a bridge). • Anucleated cells. The mean number of micronuclei, mean micronuclei per cell, and frequency of cells showing micronuclei were evaluated for each patient. Cytomorphometric assessment of nuclear area, cytoplasmic area and nuclear-cytoplasmic ratio was done for 100 cells in each patient using Jenoptik pRogress software tools. Results obtained were analysed using one-way ANOVA, the Kruskal-Wallis test, and the Mann-Whitney test, followed by post-hoc tests. Results: Results obtained were similar using either PAP or Feulgen stain in almost all the parameters evaluated. The mean number of micronuclei, mean micronuclei per cell, and frequency of cells showing micronuclei were significantly higher in tobacco users (Groups I, II and III) when compared with controls (Group IV). Among the participants habituated to tobacco, all the parameters related to micronuclei were highest in the combined tobacco users (Group III), followed by smokeless tobacco users (Group II) and smokers (Group I) (p<0.001) (Table 1). Cytomorphometric assessment of nuclear area, cytoplasmic area (cell area - nuclear area), and nuclear-cytoplasmic ratio was done using Papanicolaou stain. Mean nuclear area was significantly higher in tobacco users when compared with controls. Among the habit groups, nuclear area was significantly increased in smokers, followed by smokeless tobacco users and combined tobacco users (p<0.001) (Table 2). Comparison of mean cytoplasmic area and nuclear-cytoplasmic ratio using one-way ANOVA showed no significant difference among the various study groups. Intergroup Comparisons of Mean Number of Micronuclei, Mean Micronuclei Per Cell, Frequency of Cells Showing Micronuclei among Various Study Groups Using Feulgen Stain †, Kruskal-Wallis, post hoc; 
two-sided p-value ≤ 0.05 Comparison of Mean Nuclear Area among the Study Groups Using Pap Stain Intergroup Comparison of Mean Nuclear Area among the Different Groups Using Pap Stain Smears Stained with Feulgen Stain Showing Micronuclei (Arrows) Cytomorphometric Analysis of Nuclear Area, Cytoplasmic Area using Pap Stain Discussion: Oral cancer is a multistage disease; it arises from normal mucosa, progresses to dysplasia and ultimately ends as cancer. Development of oral cancer proceeds through discrete genetic changes that occur due to loss of genomic integrity after continuous exposure to carcinogenic agents (Park et al., 2011). The carcinogenic effect of tobacco habits, inducing genotoxic effects on oral mucosal cells, can be detected with appropriate investigations. It is widely accepted that genotoxic studies in exfoliated buccal cells remain among the reliable, sensitive markers for the early diagnosis of oral cancer in tobacco users (Singam et al., 2019). Our study assesses the genotoxic effect of different types of tobacco on the oral mucosa before lesions appear in the oral cavity. Our study was designed to evaluate micronuclei and cytomorphometric changes (nuclear area, cytoplasmic area, nuclear-cytoplasmic ratio) associated with smokers, smokeless tobacco users, combined tobacco users and healthy individuals without any habits. Buccal smears from all subjects in the four study groups were stained with PAP and Feulgen stain and the parameters were evaluated. We observed that the mean number of micronuclei and mean micronuclei per cell were higher in combined tobacco users than in smokers and smokeless tobacco users. Our results were in accordance with the studies conducted by (Sellapa et al., 2009; Dash et al., 2018). On the other hand, our findings were contradictory to the findings observed by (Bonassi et al., 2003; Pradeep et al., 2014), who stated that the number of micronuclei was higher in smokers than in other groups. 
We also found that the frequency of cells showing micronuclei was significantly increased in combined tobacco users when compared with other groups. Similar findings were observed by (Upadhyay et al., 2019; Chandirasekar et al., 2019). The micronuclei-related genotoxic alterations in the cells may be due to the direct exposure of buccal mucosa cells to the carcinogenic amines present in tobacco (Proia et al., 2006). Cells bearing damaged DNA will mostly survive and replicate with the damage, resulting in a higher frequency of micronuclei (Moghaddam et al., 2020). Tobacco-specific nitrosamines are believed to be responsible for the induction of micronuclei (Muhammed et al., 2021). The increase in all micronuclei-related changes may be due to the synergistic effect of the combined use of smoking and smokeless tobacco, which results in higher genotoxicity in buccal mucosa cells than when either is consumed alone (Dash et al., 2018). Heat and chemical exposure from smoking, together with continuous exposure to tobacco-specific amines from smokeless tobacco, prevents the cells from dividing further; the nuclei then disintegrate due to the carcinogenic exposure, inducing the formation of micronuclei. Smoking and smokeless tobacco consumed together have been associated with an increased risk of oral squamous cell carcinoma (Mello et al., 2019). Individuals who smoke and drink alcohol together are more prone to oral cancers than those who have the habits separately (Liu et al., 2015). We found that the smokers group showed the highest mean nuclear area when compared to smokeless tobacco users and combined tobacco users. Similar findings were observed by (Einstein et al., 2005; Khot et al., 2015). Tobacco causes an increase in the nuclear size of buccal cells, due to cellular adaptation in response to the carcinogens in tobacco. 
Buccal epithelial cells have a decreased turnover and remain in the cell cycle for long periods, which in turn increases the nuclear area. We found that mean cytoplasmic area was significantly higher in the control group when compared to the tobacco user groups. Similar findings were observed by (Parmar et al., 2010; Babuta et al., 2014; Santos et al., 2017). The decrease in the cytoplasmic area of smokeless tobacco users may be due to the close contact between the smokeless tobacco and the oral mucosa. This allows carcinogenic by-products to infiltrate the mucosa, since the tobacco is kept in the oral cavity for long periods. As a result, the cell undergoes dehydration, causing shrinkage of the cytoplasm. We also found that the nuclear-cytoplasmic ratio was significantly higher in the smokeless tobacco group when compared to smokers and controls. Similar findings were observed by (Singh et al., 2014; Khot et al., 2015). The increase in the nuclear-cytoplasmic ratio in smokeless tobacco users might be due to the synchronous increase in nuclear area and decrease in cytoplasmic area. Our results were in accordance with the studies conducted by (Parmar et al., 2015; Mohan et al., 2017). There are limited studies evaluating micronuclei and cytomorphometric changes in clinically normal-appearing oral mucosa across different types of tobacco users. To the best of our knowledge, this is the first study to compare the cytotoxic effects of smoking, smokeless tobacco use and combined tobacco use in clinically healthy mucosa. We observed that all parameters related to micronuclei were increased in the habit groups. They were highest in the combined users group, which suggests that the synergistic effect of smoking and smokeless tobacco could cause greater genomic damage. The smokers group, however, showed pronounced alterations in cytomorphometric parameters, especially the nuclear area. 
Smokeless tobacco users had an elevated nuclear-cytoplasmic ratio, suggesting that individuals with a smokeless tobacco habit show both nuclear alterations and changes in cytoplasmic area. Based on the findings of our study, we conclude that different tobacco-related habits have different deleterious effects on the buccal mucosal cells, and these effects are more pronounced when patients have both types of habits together. Author Contribution Statement: Study conception and design - Kokila Sivakumar and Harikrishnan Prasad. Clinical studies and sample collection - Kokila Sivakumar. Data acquisition and analysis - Kokila Sivakumar. Statistical analysis - Harikrishnan Prasad. General supervision - Srichinthu Keniyan Kumar, Rajmohan Muthusamy, Mahalakshmi Loganathan, Shanmuganathan Sivanandham, Prema Perumal. Manuscript preparation - Kokila Sivakumar and Harikrishnan Prasad. Manuscript editing and suggestions – Harikrishnan Prasad and Srichinthu Keniyan Kumar. All the authors contributed equally to the study and the manuscript work. All of them reviewed the results and approved the final version of the manuscript.
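The cytoplasmic area used throughout the Results is derived as cell area minus nuclear area, and the nuclear-cytoplasmic ratio follows from those two values. A small worked sketch; the area values are hypothetical numbers for illustration, not measurements from the study:

```python
def cytomorphometry(cell_area, nuclear_area):
    """Return (cytoplasmic area, nuclear-cytoplasmic ratio) from raw areas."""
    cytoplasmic_area = cell_area - nuclear_area  # cytoplasmic = cell - nuclear
    return cytoplasmic_area, nuclear_area / cytoplasmic_area

# For a hypothetical cell of 2400 square units with a 400 square-unit nucleus:
print(cytomorphometry(cell_area=2400.0, nuclear_area=400.0))  # (2000.0, 0.2)
```

This also illustrates why the ratio can rise even when the nuclear area alone changes little: a shrinking cytoplasm (smaller denominator) and an enlarging nucleus (larger numerator) both push the ratio up, as discussed for the smokeless tobacco group.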
Background: Tobacco is one of the main causes of oral cancer. Although oral cancer is the tenth most common cancer in the world, it is usually diagnosed at an advanced stage and therefore carries a poor prognosis, making early diagnosis essential. Our study aimed to evaluate the genotoxic changes in patients with different tobacco habits using exfoliated buccal cells. Methods: Buccal smears were taken from smokers (30), smokeless tobacco users (30), combined tobacco users (30) and controls (30), all with clinically normal oral mucosa. All smears were stained with Papanicolaou stain and Feulgen stain and viewed under a light microscope for evaluation of the mean number of micronuclei, mean micronuclei per cell, frequency of cells showing micronuclei, nuclear area, cytoplasmic area, and nuclear-cytoplasmic ratio. Results: Mean number of micronuclei, mean micronuclei per cell, frequency of cells showing micronuclei, and nuclear area were significantly higher in tobacco users than in controls, especially in combined tobacco users. The nuclear-cytoplasmic ratio was higher and the cytoplasmic area lower in tobacco users than in controls. Conclusions: Tobacco in any consumable form is genotoxic. Smoking and smokeless tobacco, when consumed together, synergistically cause greater genetic damage. Different tobacco habits have different deleterious effects on the oral mucosa, and these effects are more pronounced when patients have combined habits. Detecting these genotoxic changes through exfoliative cytology can therefore serve as a simple yet reliable marker for the early detection of carcinogenesis.
null
null
2,250
291
[ 102 ]
5
[ "tobacco", "nuclear", "micronuclei", "area", "users", "tobacco users", "cells", "cytoplasmic", "smokeless", "smokeless tobacco" ]
[ "development oral cancer", "genotoxic effect oral", "genotoxic studies exfoliated", "response carcinogens tobacco", "carcinogens tobacco buccal" ]
null
null
null
[CONTENT] Micronuclei | cytomorphometry | tobacco | cytoplasmic area | Feulgen | exfoliated buccal cells [SUMMARY]
null
[CONTENT] Micronuclei | cytomorphometry | tobacco | cytoplasmic area | Feulgen | exfoliated buccal cells [SUMMARY]
null
[CONTENT] Micronuclei | cytomorphometry | tobacco | cytoplasmic area | Feulgen | exfoliated buccal cells [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Female | Humans | Male | Micronuclei, Chromosome-Defective | Micronucleus Tests | Mouth Mucosa | Mouth Neoplasms | Mutagenicity Tests | Tobacco Use Disorder [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Female | Humans | Male | Micronuclei, Chromosome-Defective | Micronucleus Tests | Mouth Mucosa | Mouth Neoplasms | Mutagenicity Tests | Tobacco Use Disorder [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Female | Humans | Male | Micronuclei, Chromosome-Defective | Micronucleus Tests | Mouth Mucosa | Mouth Neoplasms | Mutagenicity Tests | Tobacco Use Disorder [SUMMARY]
null
[CONTENT] development oral cancer | genotoxic effect oral | genotoxic studies exfoliated | response carcinogens tobacco | carcinogens tobacco buccal [SUMMARY]
null
[CONTENT] development oral cancer | genotoxic effect oral | genotoxic studies exfoliated | response carcinogens tobacco | carcinogens tobacco buccal [SUMMARY]
null
[CONTENT] development oral cancer | genotoxic effect oral | genotoxic studies exfoliated | response carcinogens tobacco | carcinogens tobacco buccal [SUMMARY]
null
[CONTENT] tobacco | nuclear | micronuclei | area | users | tobacco users | cells | cytoplasmic | smokeless | smokeless tobacco [SUMMARY]
null
[CONTENT] tobacco | nuclear | micronuclei | area | users | tobacco users | cells | cytoplasmic | smokeless | smokeless tobacco [SUMMARY]
null
[CONTENT] tobacco | nuclear | micronuclei | area | users | tobacco users | cells | cytoplasmic | smokeless | smokeless tobacco [SUMMARY]
null
[CONTENT] nuclear | tobacco | cancer | changes | oral cancer | genotoxic | oral | genotoxic changes | genetic damage | form [SUMMARY]
null
[CONTENT] area | mean | nuclear | groups | stain | micronuclei | tobacco users | users | nuclear area | tobacco [SUMMARY]
null
[CONTENT] tobacco | nuclear | micronuclei | area | users | tobacco users | cytoplasmic | cells | mean | smokeless [SUMMARY]
null
[CONTENT] ||| tenth ||| the hour ||| [SUMMARY]
null
[CONTENT] ||| [SUMMARY]
null
[CONTENT] ||| tenth ||| the hour ||| ||| 30 | 30 | 30 | 30 ||| Papanicolaou | Feulgen ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
null
[Analysis of histoprognostic factors for non-metastatic rectal cancer in a west Algerian series of 58 cases].
27583069
The aim of our study was to analyze histoprognostic factors in patients with non-metastatic rectal cancer operated on in the division of surgery "A" in Tlemcen, western Algeria, over a period of six years.
INTRODUCTION
Retrospective study of 58 patients with rectal adenocarcinoma. The evaluation criterion was survival. The parameters studied were sex, age, tumor stage and tumor recurrence.
METHODS
The average age was 58 years; 52% were men and 48% women (sex ratio 1.08). Tumor location was the middle rectum in 41.37%, the lower rectum in 34.48% and the upper rectum in 24.13%. According to TNM clinical staging, patients were classified as stage I (17.65%), stage II (18.61%), stage III (53.44%) and stage IV (7.84%). Median overall survival was 40 ± 2.937 months. Survival by tumor stage: stages III and IV had a lower 3-year survival rate (19%) than stages I and II, which had a survival rate of 75% (P = 0.000, 95% CI). Patients with tumor recurrence had a lower 3-year survival rate than those without recurrence (30.85% vs 64.30%, P = 0.043).
RESULTS
In this series, univariate analysis of prognostic factors affecting survival retained only three factors influencing survival: tumor size, stage and tumor recurrence. In multivariate analysis using the Cox model, only one factor was retained: tumor recurrence.
CONCLUSION
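The survival figures above come from Kaplan-Meier estimation (per the Methods), which multiplies conditional survival probabilities at each observed event time. A minimal product-limit sketch with made-up follow-up data in months (not this series' actual records):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times: follow-up durations (e.g. months).
    events: 1 = event (death) observed, 0 = censored at that time.
    Returns a list of (time, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data if tt == t)            # subjects leaving at t
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            s *= (n_at_risk - deaths) / n_at_risk             # conditional survival at t
            curve.append((t, s))
        n_at_risk -= ties
        i += ties
    return curve

# Toy cohort: deaths at 6, 12, 20 and 40 months, censoring at 12 and 30 months.
km = kaplan_meier([6, 12, 12, 20, 30, 40], [1, 1, 0, 1, 0, 1])
```

By convention, subjects censored at an event time are still counted at risk for that event, which is what the tie handling above does.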
[ "Adenocarcinoma", "Adult", "Aged", "Aged, 80 and over", "Algeria", "Female", "Humans", "Male", "Middle Aged", "Multivariate Analysis", "Neoplasm Recurrence, Local", "Neoplasm Staging", "Prognosis", "Proportional Hazards Models", "Rectal Neoplasms", "Retrospective Studies", "Survival Rate" ]
4992382
Introduction
Le cancer du rectum (CR) est souvent intégré dans les îlots des cancers colorectaux (CCR). Il est de plus en plus fréquent et pose un réel problème de diagnostic et de prise en charge dans les pays en développement [1]. En Algérie, son incidence annuelle est de 31,8 cas pour 100 000 habitants, soit une moyenne annuelle de 1500 cas incidents. Le CR se situe au 2ème rang des cancers digestifs [2]. Il représente la deuxième cause de mortalité dans les pays développés [1–3]. Les progrès réalisés en matière de diagnostic et de thérapeutique (traitement néo-adjuvant) ont amélioré la survie, qui ne dépasse toutefois pas 50% à 5 ans [4]. Reconnaître les facteurs pronostiques du CR conditionne la survie à long terme. Déterminer ces facteurs est d'une importance capitale, permettant ainsi d'orienter les patients vers un protocole thérapeutique plus adéquat avec un calendrier de surveillance mieux adapté. L'objectif de notre travail est d'analyser les facteurs histo-pronostiques des cas de cancer du rectum pris en charge durant une période de six ans au service de chirurgie «A» du centre hospitalo-universitaire de Tlemcen, à l'ouest de l'Algérie.
null
null
null
null
Conclusion
Dans cette série, l'étude univariée des différents facteurs pronostiques conditionnant la survie n'a permis de retenir que trois facteurs influençant la survie, à savoir la taille tumorale, le stade et les récidives tumorales. En analyse multivariée utilisant le modèle de Cox, un seul facteur a été retenu: la récidive tumorale. Etat des connaissances actuelles sur le sujet: la survie est influencée par deux facteurs: la taille tumorale et la récidive tumorale. Contribution de notre étude à la connaissance: les facteurs suivants n'influencent pas la survie: le type de chirurgie, le siège de la tumeur, l'association ou non à un traitement néo-adjuvant; on confirme que la taille tumorale et la récidive ont une influence sur la survie.
[ "Méthodes", "Résultats", "Etat des connaissances actuelles sur le sujet", "Contribution de notre étude à la connaissance" ]
[ "Pour répondre à la problématique sus-énoncée, une étude pronostique monocentrique a été menée, portant sur les dossiers de malades admis et opérés pour CR au service de chirurgie viscérale «A» au centre hospitalo-universitaire de Tlemcen, sur une période de 6 ans allant de janvier 2009 à mai 2015. Nous avons inclus dans cette étude tout patient présentant un adénocarcinome prouvé histologiquement et dont le siège de la tumeur était situé entre 0 et 12 cm de la marge anale. Étaient exclus de l'étude tout adénocarcinome rectal associé à des métastases à distance (hépatiques, pulmonaires, péritonéales), une tumeur située au-delà de 12 cm de la marge anale et celles de la jonction recto-sigmoïdienne, ainsi que toute tumeur rectale non glandulaire, un traitement non curatif, un cancer opéré en urgence et enfin tout dossier incomplet. Cinquante-huit patients qui réunissaient les critères de l'étude étaient retenus, pour lesquels une courbe de survie a été réalisée. Les paramètres étudiés étaient le sexe, l'âge, le siège tumoral, le degré de différenciation de l'adénocarcinome, le dosage des antigènes carcino-embryonnaires (ACE), les traitements associés à celui de la chirurgie (néo-adjuvant et adjuvant), le type de traitement chirurgical, le nombre de ganglions envahis, le stade tumoral, et enfin l'existence ou non d'une récidive tumorale. 
Pour ce qui est de l'analyse statistique, on procède d'abord à une description de la population de l'étude, exprimée en pourcentages pour les données qualitatives et sous forme de moyenne ± écart-type pour les données quantitatives; puis, pour l'analyse de la survie globale et en fonction des facteurs pronostiques, on utilise la méthode de Kaplan-Meier en estimant la moyenne et la médiane de survie, avec des comparaisons de la survie en fonction des facteurs pronostiques par le test du log-rank (p < 0,05).\nLes facteurs pronostiques ayant un seuil de signification statistique ≤ 3% étaient introduits dans un modèle de régression de Cox pour l'analyse multivariée.", "Sur l'ensemble des 86 dossiers d'adénocarcinome du rectum, nous avons colligé 58 patients qui présentaient un cancer du rectum prouvé histologiquement et dont les dossiers étaient complets. L'âge moyen des patients était de 58 ans ± 11,6 avec des extrêmes «30-84 ans». Il s'agissait de 52% d'hommes (n = 30) et de 48% de femmes (n = 28), avec un sex-ratio de 1,08. L'examen endoscopique montrait que la tumeur était située au moyen rectum dans 41,37% (n = 24), au bas rectum dans 34,48% (n = 20) et au haut rectum dans 24,13% (n = 14). Sur le plan histologique, la biopsie avait montré que l'adénocarcinome lieberkühnien était bien différencié dans 75,86% (n = 44), moyennement différencié dans 20,68% (n = 12) et peu différencié dans 3,44% (n = 2). L'antigène carcino-embryonnaire (ACE) a été dosé dans 48,77% (n = 28) et révélait un taux supérieur à 5 ng/ml chez 28,57% (n = 8) des patients. Sur le plan thérapeutique, un traitement néo-adjuvant par radio-chimiothérapie (RCC) était pratiqué chez 18,95% (n = 11) et une radiothérapie seule en cycle court pour 6,90% (n = 4). Une exérèse chirurgicale de type amputation abdomino-pelvienne (AAP) était réalisée chez 22,41% des cas (n = 13), alors qu'une chirurgie conservatrice de l'appareil sphinctérien était possible dans 77,58% des cas (n = 45). 
Les suites postopératoires étaient marquées par une mortalité de l'ordre de 8,62% (n = 5), et une morbidité de 24,15%(n = 14). La durée de séjour hospitalier était de 17, 90 jours ±6,24 avec des extrêmes de «6-32 jours». L'examen anatomopathologique sur pièce opératoire avait précisé que la taille moyenne tumorale était de 4,42 cm ±1,98 avec des extrêmes «1-8cm» et que la taille tumorale était supérieure à 5 cm dans 15,70% (n = 10) et inférieure à 5 cm dans 84,30% (n = 48). En analysant l'envahissement ganglionnaire, le curage avait permis de prélever en moyenne 10,77 ganglions ±5,513 avec des extrêmes de «00-33». Étaient envahis en moyenne 2,25 ganglions ±2,928 avec des extrêmes de «00-17». L’étude histologique avait permis de classer les patients selon la classification TNM avec 17,65% des patients au stade I (n = 10),18,61% au stade II (n = 11), 53,44% au stade III(n = 33) et 7,84% au stade IV (n = 4). En postopératoire vingt deux patients (41,50%) avait bénéficié d'une radio-chimiothérapie, vingt neuf patients (54,71%), d'une chimiothérapie systémique et deux patients d'une radiothérapie seule (3,77%).\nPar ailleurs, dix patients (18,86%) avaient présenté une récidive et dont le délai moyen était de 18,90±9,53 mois avec des extrêmes de« 6-36 mois» (Tableau 1). La survie médiane globale était de 40 mois ±2,937 mois (Figure 1). L'analyse univariée de la survie en fonction du sexe ne retrouvait pas de différence significative P = 0,661. De même en comparant la survie en fonction de l’âge, entre un groupe de patients moins de 50 ans et un groupe âgé plus de 50 ans. Le siège de la tumeur n'avait aucune différence significative sur la survie entre le haut, moyen et bas rectum. Selon le geste pratiqué, celui-ci ne montrait pas une différence significative entre un geste ne conservant pas l'appareil sphinctérien (AAP) et un geste conservant l'appareil sphinctérien (RA) (Tableau 2). 
La survie en fonction d'un traitement néo adjuvant n'avait pas de différence significative. En analysant la survie en fonction du stade tumoral, le stade III et IV avait un faible taux de survie (19%) a 3 ans tandis que le stade I, II avait un taux de survie de (75%) à 3 ans. (P = 0,000) (IC 95%) (Figure 2). La survie à 3 ans en fonction de la taille tumorale était significativement différente, lorsque la taille tumorale était supérieure à 5 cm. La survie était faible par rapport à une taille inférieur à 5 cm (P = 0,021) (Figure 3). Les patients ayant présenté des récidives tumorales avait un taux de survie faible à 3 ans par rapport à ceux n'ayant pas eu de récidives tumorales (30,85% contre 64,30% P = 0,043) (Figure 4). D'autres facteurs ont été analysés, tel que l'existence d'un traitement adjuvant, l'envahissement ganglionnaire et le taux de l'ACE. Mais il n'avait aucune influence sur la survie dans notre étude. L'analyse univariée avait permis l'identification de trois variables significatives (taille, stade tumorale et récidive). 
Ces derniers étaient inclus dans un modèle de Cox et un seul facteur déterminant, à savoir la récidive tumorale, en sortait et avait une influence significative sur la survie à trois ans.\nAnalyse de la survie globale des malades opérés pour cancer du rectum non métastatique\nAnalyse de la survie en fonction du stade de la maladie tumorale, 1-Stade tardif (III, IV) 0-Stade précoce (I-II)\nAnalyse de la survie en fonction de la taille tumorale en centimètres\nAnalyse de la survie en fonction des récidives tumorales\nCaractéristiques des patients dans notre série selon les variables étudiées\n28 patients ont eu le dosage\nradio-chimiothérapie préopératoire\nAnalyse univariée des facteurs influençant la survie des malades opérés pour cancer du rectum non métastatique dans notre série", "\nLa survie est influencée par deux facteurs: la taille tumorale; la récidive tumorale.", "\nLes facteurs suivants n'influencent pas la survie: le type de chirurgie, le siège de la tumeur, l'association ou non à un traitement néo-adjuvant; on confirme que la taille tumorale et la récidive ont une influence sur la survie." ]
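The Methods compare survival between prognostic-factor groups with the log-rank test before feeding significant factors into the Cox model. A minimal two-group sketch of the log-rank chi-square statistic (illustrative only, not the study's code):

```python
def log_rank(times_a, events_a, times_b, events_b):
    """Two-group log-rank statistic (chi-square with 1 degree of freedom).

    times_*: follow-up durations; events_*: 1 = event observed, 0 = censored.
    """
    # Distinct times at which at least one event occurred, pooled over groups.
    pooled = sorted({t for t, e in zip(times_a + times_b, events_a + events_b) if e == 1})
    observed_a = expected_a = variance = 0.0
    for t in pooled:
        n_a = sum(1 for x in times_a if x >= t)   # at risk in group A just before t
        n_b = sum(1 for x in times_b if x >= t)
        d_a = sum(1 for x, e in zip(times_a, events_a) if x == t and e == 1)
        d_b = sum(1 for x, e in zip(times_b, events_b) if x == t and e == 1)
        n, d = n_a + n_b, d_a + d_b
        observed_a += d_a
        expected_a += d * n_a / n                 # events expected in A under H0
        if n > 1:
            variance += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return (observed_a - expected_a) ** 2 / variance

# Identical groups: observed equals expected, so the statistic is zero.
same = log_rank([5, 10, 15], [1, 1, 1], [5, 10, 15], [1, 1, 1])
# Group B dies much earlier than group A: the statistic is clearly positive.
diff = log_rank([20, 25, 30], [1, 1, 1], [5, 8, 10], [1, 1, 1])
```

The statistic is compared against a chi-square distribution with one degree of freedom to obtain the p-values reported in the Results.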
[ null, null, null, null ]
[ "Introduction", "Méthodes", "Résultats", "Discussion", "Conclusion", "Etat des connaissances actuelles sur le sujet", "Contribution de notre étude à la connaissance" ]
[ "Le cancer du rectum (CR) est souvent intégré dans les îlots des cancers colorectaux (CCR). Il est de plus en plus fréquent et pose un réel problème de diagnostic et de prise en charge dans les pays en développement [1]. En Algérie, son incidence annuelle est de 31,8 cas pour 100 000 habitants, soit une moyenne annuelle de 1500 cas incident. Le CR se situe au 2émè rang des cancers digestifs [2]. Il représente la deuxième cause de mortalité dans les pays développés [1–3]. Les progrès réalisés en matière de diagnostic et thérapeutique (Traitement néo-adjuvant) ont amélioré la survie qui ne dépasse pas 50% à 5 ans [4]. Reconnaître les facteurs pronostics du CR conditionne la survie à long terme. Déterminer ces facteurs est d'une importance capitale permettant ainsi d'orienter les patients vers un Protocol thérapeutique plus adéquat avec un calendrier de surveillance mieux adapté. L'objectif de notre travail est d'analyser les facteurs histo-pronostiques des cas de cancer du rectum pris en charge durant une période de six ans au service de chirurgie «A» du centre hospitalo-universitaire de Tlemcen à ouest Algérien.", "Répondre à la problématique sus énoncé, une étude pronostic mono centrique a été menée, portant sur les dossiers de malades admis et opérés pour CR au service de chirurgie viscérale «A» au centre hospitalo-universitaire de Tlemcen, sur une période de 6 ans allant de Janvier 2009 à Mai 2015. Nous avons inclus dans cette étude tout patient présentant un adénocarcinome prouvé histologiquement et dont le siège de la tumeur était situé entre 0 et 12 cm de la marge anale. Était exclus l’étude, tout adénocarcinome rectal associé à des métastases à distance (hépatiques, pulmonaires, péritonéales), une tumeur située au delà de 12 cm le la marge anale et ceux de la jonction récto-sigmoïdienne, ainsi que toute tumeur rectal non glandulaire, un traitement non curatif, cancer opéré en urgence et enfin tout dossier incomplet. 
Cinquante-huit patients qui réunissaient les critères de l’étude étaient retenus, pour lesquels une courbe de survie a été réalisée. Les paramètres étudiés étaient, le sexe, l’âge, le siège tumoral, le degrés de différenciation de l'adénocarcinome, le dosage des anti gènes carcino-embryonnaires (ACE), les traitements associés a celui de la chirurgie (néo-adjuvant et adjuvant), le type de traitement chirurgical, nombres de ganglions envahis, stade tumoral, et enfin l'existence ou non d'une récidive tumorale. Pour ce qui est de l'analyse statistique, on procède d'abord à une description de la population de l’étude on exprimons par des pourcentages pour des données qualitatives et sous forme de moyenne ± écart type pour les données quantitatives, puis une analyse de survie globale et en fonction des facteurs pronostics on utilisera la méthode de Kaplan Meier tout on estimons la moyenne et la médiane de survie avec des comparaisons de la survie on fonction des facteurs pronostic par le test de Long-Rang (p < 0,05).\nLes facteurs pronostics ayant un seuil de signification statistique < ou = 3% étaient introduites dans un modèle de régression Cox pour l'analyse multi-variée.", "Sur l'ensemble des 86 dossiers d'adénocarcinome du rectum, nous avons colligé 58 patients qui présentaient un cancer du rectum prouvé histologiquement et dont les dossiers étaient complets. L’âge moyen des patients étaient de 58 ans ± 11,6 avec des extrêmes «30-84ans». Il s'agissait de 52% d'hommes (n = 30) et de 48% de femmes (n = 28) avec sex-ratio (1,08). L'examen endoscopique montrait que la tumeur était située au moyen rectum dans 41,37% (n = 24), 34,48% (n = 20) au bas rectum et dans 24,13% (n = 14) au niveau du haut rectum. Sur le plan histologique la biopsie avait montré que l'adénocarcinome liberkunien était bien différencié dans 75,86% (n = 44), moyennement différencié dans 20,68% (n = 12) et dans 3,44% (n = 2) peu différencié. 
L'antigène carcino-embryonnaire (ACE) été dosé dans 48,77% (n = 28) et révélait un taux supérieur à 5ng/ml chez 28,57% (n = 8) patients. Sur le plan thérapeutique, un traitement néo-adjuvant par radio-chimiothérapie (RCC) était pratiqué chez 18,95% (n = 11) et une radiothérapie seul cycle court pour 6,90% (n = 4). Une exérèse chirurgicale de type amputation abdomino pelvienne (AAP) était réalisée chez 22,41% des cas (n = 13), alors qu'une chirurgie conservatrice de l'appareil sphinctérien était possible dans 77,58% des cas (n = 45). Les suites postopératoires étaient marquées par une mortalité de l'ordre de 8,62% (n = 5), et une morbidité de 24,15%(n = 14). La durée de séjour hospitalier était de 17, 90 jours ±6,24 avec des extrêmes de «6-32 jours». L'examen anatomopathologique sur pièce opératoire avait précisé que la taille moyenne tumorale était de 4,42 cm ±1,98 avec des extrêmes «1-8cm» et que la taille tumorale était supérieure à 5 cm dans 15,70% (n = 10) et inférieure à 5 cm dans 84,30% (n = 48). En analysant l'envahissement ganglionnaire, le curage avait permis de prélever en moyenne 10,77 ganglions ±5,513 avec des extrêmes de «00-33». Étaient envahis en moyenne 2,25 ganglions ±2,928 avec des extrêmes de «00-17». L’étude histologique avait permis de classer les patients selon la classification TNM avec 17,65% des patients au stade I (n = 10),18,61% au stade II (n = 11), 53,44% au stade III(n = 33) et 7,84% au stade IV (n = 4). En postopératoire vingt deux patients (41,50%) avait bénéficié d'une radio-chimiothérapie, vingt neuf patients (54,71%), d'une chimiothérapie systémique et deux patients d'une radiothérapie seule (3,77%).\nPar ailleurs, dix patients (18,86%) avaient présenté une récidive et dont le délai moyen était de 18,90±9,53 mois avec des extrêmes de« 6-36 mois» (Tableau 1). La survie médiane globale était de 40 mois ±2,937 mois (Figure 1). 
L'analyse univariée de la survie en fonction du sexe ne retrouvait pas de différence significative P = 0,661. De même en comparant la survie en fonction de l’âge, entre un groupe de patients moins de 50 ans et un groupe âgé plus de 50 ans. Le siège de la tumeur n'avait aucune différence significative sur la survie entre le haut, moyen et bas rectum. Selon le geste pratiqué, celui-ci ne montrait pas une différence significative entre un geste ne conservant pas l'appareil sphinctérien (AAP) et un geste conservant l'appareil sphinctérien (RA) (Tableau 2). La survie en fonction d'un traitement néo adjuvant n'avait pas de différence significative. En analysant la survie en fonction du stade tumoral, le stade III et IV avait un faible taux de survie (19%) a 3 ans tandis que le stade I, II avait un taux de survie de (75%) à 3 ans. (P = 0,000) (IC 95%) (Figure 2). La survie à 3 ans en fonction de la taille tumorale était significativement différente, lorsque la taille tumorale était supérieure à 5 cm. La survie était faible par rapport à une taille inférieur à 5 cm (P = 0,021) (Figure 3). Les patients ayant présenté des récidives tumorales avait un taux de survie faible à 3 ans par rapport à ceux n'ayant pas eu de récidives tumorales (30,85% contre 64,30% P = 0,043) (Figure 4). D'autres facteurs ont été analysés, tel que l'existence d'un traitement adjuvant, l'envahissement ganglionnaire et le taux de l'ACE. Mais il n'avait aucune influence sur la survie dans notre étude. L'analyse univariée avait permis l'identification de trois variables significatives (taille, stade tumorale et récidive). 
Ces derniers étaient inclus dans un modèle de Cox et un seul facteur déterminant à savoir la récidive tumorale qui sortait et avait une influence significative sur la survie à trois ans.\nAnalyse de la survie globale des malades opères pour cancer du rectum non métastatique\nAnalyse de la survie en fonction du stade de la maladie tumorale, 1-Stade tardif (III, IV) 0-Stade précoce (I-II)\nAnalyse de la survie en fonction de la taille tumorale en centimètres\nAnalyse de la Survie en fonction des récidives tumorale\nCaractéristiques des patients dans notre série selon les variables étudiés\n28 patients ont eu le dosage\nradio chimiothérapie préopératoire\nAnalyse univariée des facteurs influençant la des malades opérés pour cancer du rectum non métastatique dans notre série", "L'incidence du CR est plus élevée dans les pays du nord qu'en Afrique [5]. Comme très peu de régions en Afrique sont couvertes par un registre, en Algérie l'incidence du CR reste sous-estimée et difficile à préciser. Et d'après les dernier données épidémiologiques publiées en 2009 [2], le CR en Algérie occupe le 4ème rang parmi les cancers chez l'homme et le 5ème chez la femme. L’âge moyen de nos patients était de 58 ans comparable à celui rapporté aux autres séries Africaines Nigérienne et Béninoise [6, 7] où l’âge moyen variait entre 46,7 à 51,2 ans. Ainsi, le CR apparaît à un âge relativement plus bas chez les africains que chez les occidentaux où le pic de fréquence se situe entre 60 et 70 ans [5]. Nos patients semblent plus jeunes en raison de la jeunesse de notre population. Dans notre série le sex-ratio était de 1,08 identique à celui qui était retrouvé dans la littérature [8, 9]. L’étude des facteurs pronostiques permet au clinicien de sélectionner les patients pour un traitement donné. Si le principal facteur reste le stade évolutif de la tumeur au moment du diagnostic [10], il est important de déterminer d'autres facteurs pronostiques qui conditionnent la survie. 
Parmi ces facteurs pronostiques cliniques étudiés dans la littérature: l’âge, ce dernier reste un facteur discutable. Six études sur quinze qui évaluaient ce facteur avaient conclu que la survenue d'un CR chez le sujet jeune était de mauvais pronostic [11]. Dans notre étude nous n'avions pas trouvé de différence significative en termes de survie en fonction des tranches d’âge de nos patients. La prédominance masculine était dominante dans notre série. Trois études multi variées avaient affirmé que la survie à long terme était meilleure chez la femme par rapport à l'homme [12–15]. Cette constatation n’était pas identifiée dans notre série puisque nous n'avons pas trouvé de différence significative de survie entre les deux sexes. Selon la topographie et en comparant la survie en fonction du siège tumoral, une étude de Jatzco [16] qui étudiait l'influence du siège tumoral sur la survie et ces constatations avaient conclus qu'il n'y avait pas d'influence. Il en est de même dans notre série où il n'y avait pas de différence significative selon le siège tumoral (P = 0,123).\nSur le plan biologique, parmi les marqueurs tumoraux, l'ACE est le marqueur tumoral le plus utilisé en pathologie colorectale. Toute élévation de ce marqueur en pré-opératoire, était un facteur de mauvais pronostic dans plusieurs études publiées [17, 18, 10]. Dans notre série, l’élévation du taux sérique de l'ACE n’était pas un facteur influençant la survie. En ce qui concerne les facteurs thérapeutiques étudiés, depuis la fin des années 90, l'association de la chimiothérapie à la radiothérapie a encore amélioré le pronostic carcinologique et fonctionnel du CR. Une méta-analyse réalisée en 2013 [19] qui comparait la radio-chimiothérapie néo-adjuvante versus chirurgie seule, avait prouvé qu'il n'y avait pas de différence significative sur la survie globale à long terme. 
Dans notre étude, il n'y avait pas de différance significative (p = 0,576) entre un traitement néo-adjuvant chirurgie versus chirurgie seule. Mais ceci reste à prendre avec précaution car l’échantillon de notre série était faible. En ce qui concerne le traitement chirurgical, deux études prospectives réalisées chez 2136 et 1219 malades [20, 21] avaient comparé les différentes techniques chirurgicales à savoir les amputations abdomino-périnéales et une résection antérieure pour la tumeur du moyen et bas rectum. Ces études n'avaient pas trouvé de différence significative sur la survie globale. Ce qui a été retrouvé dans notre étude. En dehors des facteurs pronostiques cliniques, biologiques et thérapeutiques, d'autres facteurs d'importance capitale ont été étudiés par différentes études. C'est l’étude anatomopathologique qui a analysé l'aspect macroscopique et microscopique de la tumeur. Parmi les facteurs analysés macroscopiquement, c'est l'influence de la taille tumorale sur la survie, qui reste controversée dans la littérature. Park JY et He WJ [22, 23] rapportait dans leurs analyses multi-variées que la taille de la tumeur n’était pas un facteur pronostic influençant la survie. Par contre, d'autres études ont prouvé le contraire tel l’étude de Xu FY qui avait trouvé que lorsque la taille supérieure à 6 cm était de mauvais pronostic [24]. Dans notre analyse, la taille tumorale était un facteur influençant sur la survie globale de façon significative (P = 0,023). En ce qui concerne l'influence de l'envahissement ganglionnaire du CR sur la survie, des études ont confirmés cette influence sur la survie [25, 26]. Pour notre part l'envahissement ganglionnaire et le nombre de ganglions n’étaient pas un facteur influençant. En comparant la survie des différents stades tumoraux dans notre série, les stades III et IV (19% a trois ans) avaient un taux de survie plus faible, tandis que les stades I et II avaient meilleur taux de survie (79% à trois ans). 
Ces mêmes résultats sont retrouvés dans 2 études multivariées comparant les stades tumoraux [15–27]. Une étude analytique tunisienne réalisée en 2006 [10] avait démontré que les patients présentant une récidive avaient un taux de survie plus faible. Cette conclusion a été retrouvée dans notre analyse, puisqu'il y avait une différence significative entre les deux groupes (P < 10-3)", "Dans cette série, l'étude univariée des différents facteurs pronostiques conditionnant la survie n'a permis de retenir que trois facteurs influençant la survie, à savoir la taille tumorale, le stade et les récidives tumorales. En analyse multivariée utilisant le modèle de Cox, un seul facteur a été retenu: la récidive tumorale.\n Etat des connaissances actuelles sur le sujet \nLa survie est influencée par deux facteurs: la taille tumorale; la récidive tumorale.\n Contribution de notre étude à la connaissance \nLes facteurs suivants n'influencent pas la survie: le type de chirurgie, le siège de la tumeur, l'association ou non à un traitement néo-adjuvant; on confirme que la taille tumorale et la récidive ont une influence sur la survie.", "\nLa survie est influencée par deux facteurs: la taille tumorale; la récidive tumorale.", "\nLes facteurs suivants n'influencent pas la survie: le type de chirurgie, le siège de la tumeur, l'association ou non à un traitement néo-adjuvant; on confirme que la taille tumorale et la récidive ont une influence sur la survie." ]
[ "intro", null, null, "discussion", "conclusion", null, null ]
[ "Adénocarcinome rectale", "survie", "récidive", "Rectal adenocarcinoma", "survival", "recurrence" ]
Introduction: Le cancer du rectum (CR) est souvent intégré dans les îlots des cancers colorectaux (CCR). Il est de plus en plus fréquent et pose un réel problème de diagnostic et de prise en charge dans les pays en développement [1]. En Algérie, son incidence annuelle est de 31,8 cas pour 100 000 habitants, soit une moyenne annuelle de 1500 cas incident. Le CR se situe au 2émè rang des cancers digestifs [2]. Il représente la deuxième cause de mortalité dans les pays développés [1–3]. Les progrès réalisés en matière de diagnostic et thérapeutique (Traitement néo-adjuvant) ont amélioré la survie qui ne dépasse pas 50% à 5 ans [4]. Reconnaître les facteurs pronostics du CR conditionne la survie à long terme. Déterminer ces facteurs est d'une importance capitale permettant ainsi d'orienter les patients vers un Protocol thérapeutique plus adéquat avec un calendrier de surveillance mieux adapté. L'objectif de notre travail est d'analyser les facteurs histo-pronostiques des cas de cancer du rectum pris en charge durant une période de six ans au service de chirurgie «A» du centre hospitalo-universitaire de Tlemcen à ouest Algérien. Méthodes: Répondre à la problématique sus énoncé, une étude pronostic mono centrique a été menée, portant sur les dossiers de malades admis et opérés pour CR au service de chirurgie viscérale «A» au centre hospitalo-universitaire de Tlemcen, sur une période de 6 ans allant de Janvier 2009 à Mai 2015. Nous avons inclus dans cette étude tout patient présentant un adénocarcinome prouvé histologiquement et dont le siège de la tumeur était situé entre 0 et 12 cm de la marge anale. Était exclus l’étude, tout adénocarcinome rectal associé à des métastases à distance (hépatiques, pulmonaires, péritonéales), une tumeur située au delà de 12 cm le la marge anale et ceux de la jonction récto-sigmoïdienne, ainsi que toute tumeur rectal non glandulaire, un traitement non curatif, cancer opéré en urgence et enfin tout dossier incomplet. 
Fifty-eight patients meeting the study criteria were retained, for whom a survival curve was constructed. The parameters studied were sex, age, tumor site, degree of differentiation of the adenocarcinoma, carcinoembryonic antigen (CEA) level, treatments combined with surgery (neoadjuvant and adjuvant), type of surgical procedure, number of invaded lymph nodes, tumor stage, and the presence or absence of tumor recurrence. For the statistical analysis, the study population was first described using percentages for qualitative data and mean ± standard deviation for quantitative data; overall survival and survival according to prognostic factors were then analyzed with the Kaplan-Meier method, estimating mean and median survival, and survival was compared across prognostic factors with the log-rank test (p < 0.05). Prognostic factors with a statistical significance threshold ≤ 3% were entered into a Cox regression model for the multivariate analysis. Results: Of the 86 records of rectal adenocarcinoma, we collected 58 patients with histologically proven rectal cancer and complete records. The mean age of the patients was 58 ± 11.6 years, with extremes of 30-84 years. There were 52% men (n = 30) and 48% women (n = 28), a sex ratio of 1.08. Endoscopic examination showed that the tumor was located in the middle rectum in 41.37% (n = 24), the lower rectum in 34.48% (n = 20), and the upper rectum in 24.13% (n = 14). Histologically, the biopsy showed that the Lieberkühn-type adenocarcinoma was well differentiated in 75.86% (n = 44), moderately differentiated in 20.68% (n = 12), and poorly differentiated in 3.44% (n = 2).
Carcinoembryonic antigen (CEA) was measured in 48.77% (n = 28) and showed a level above 5 ng/mL in 28.57% (n = 8) of these patients. Therapeutically, neoadjuvant radiochemotherapy was given in 18.95% (n = 11) and short-course radiotherapy alone in 6.90% (n = 4). Surgical excision by abdominoperineal resection (APR) was performed in 22.41% of cases (n = 13), while sphincter-preserving surgery was possible in 77.58% of cases (n = 45). The postoperative course was marked by a mortality of 8.62% (n = 5) and a morbidity of 24.15% (n = 14). The mean hospital stay was 17.90 ± 6.24 days, with extremes of 6-32 days. Pathological examination of the surgical specimen showed a mean tumor size of 4.42 ± 1.98 cm, with extremes of 1-8 cm; tumor size was greater than 5 cm in 15.70% (n = 10) and less than 5 cm in 84.30% (n = 48). Regarding lymph node involvement, dissection retrieved a mean of 10.77 ± 5.513 nodes, with extremes of 0-33; a mean of 2.25 ± 2.928 nodes were invaded, with extremes of 0-17. Histological study classified the patients according to the TNM classification: 17.65% at stage I (n = 10), 18.61% at stage II (n = 11), 53.44% at stage III (n = 33), and 7.84% at stage IV (n = 4). Postoperatively, twenty-two patients (41.50%) received radiochemotherapy, twenty-nine patients (54.71%) systemic chemotherapy, and two patients radiotherapy alone (3.77%). In addition, ten patients (18.86%) presented a recurrence, with a mean delay of 18.90 ± 9.53 months and extremes of 6-36 months (Table 1). Median overall survival was 40 ± 2.937 months (Figure 1). Univariate analysis of survival by sex found no significant difference (P = 0.661).
Likewise, comparing survival by age between patients under 50 years and those over 50 years showed no significant difference. Tumor site had no significant effect on survival between the upper, middle, and lower rectum. Nor did the surgical procedure show a significant difference between non-sphincter-preserving surgery (APR) and sphincter-preserving surgery (anterior resection) (Table 2). Survival according to neoadjuvant treatment showed no significant difference. Analyzing survival by tumor stage, stages III and IV had a low 3-year survival rate (19%), whereas stages I and II had a 3-year survival rate of 75% (P = 0.000, 95% CI) (Figure 2). Three-year survival differed significantly by tumor size: when the tumor was larger than 5 cm, survival was lower than for tumors smaller than 5 cm (P = 0.021) (Figure 3). Patients with tumor recurrence had a lower 3-year survival rate than those without recurrence (30.85% versus 64.30%, P = 0.043) (Figure 4). Other factors were analyzed, such as adjuvant treatment, lymph node involvement, and CEA level, but none influenced survival in our study. Univariate analysis identified three significant variables (tumor size, stage, and recurrence). These were entered into a Cox model, and a single determining factor emerged, namely tumor recurrence, which had a significant influence on 3-year survival.
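The survival workflow described above (Kaplan-Meier estimation, log-rank comparison across prognostic factors, then Cox regression on the significant ones) centers on the Kaplan-Meier product-limit estimator. A minimal pure-Python sketch on synthetic follow-up times (not the study's data); in practice, libraries such as R's `survival` package or Python's `lifelines` also provide the log-rank test and Cox model:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times  : follow-up time for each patient (e.g. months)
    events : 1 = death observed at that time, 0 = censored
    Returns a list of (time, survival probability) at each event time:
    S(t) is multiplied by (1 - deaths/at_risk) at every observed death time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        total = sum(1 for tt, _ in data if tt == t)  # deaths + censored at t
        if deaths > 0:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= total
        i += total
    return curve

# Illustrative synthetic cohort: deaths at 6, 12, 24, 36 months,
# one patient censored at 18 months.
curve = kaplan_meier([6, 12, 18, 24, 36], [1, 1, 0, 1, 1])
```

A log-rank test then compares two such curves (e.g. stage I-II versus stage III-IV), and factors below the chosen significance threshold enter the multivariate Cox model.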
Figure 1: overall survival of patients operated on for non-metastatic rectal cancer. Figure 2: survival by tumor stage (1 = late stage III-IV, 0 = early stage I-II). Figure 3: survival by tumor size in centimeters. Figure 4: survival by tumor recurrence. Table 1: characteristics of the patients in our series according to the variables studied (28 patients had the assay; preoperative radiochemotherapy). Table 2: univariate analysis of factors influencing the survival of patients operated on for non-metastatic rectal cancer in our series. Discussion: The incidence of RC is higher in northern countries than in Africa [5]. Since very few regions in Africa are covered by a cancer registry, the incidence of RC in Algeria remains underestimated and difficult to specify. According to the latest epidemiological data published in 2009 [2], RC in Algeria ranks 4th among cancers in men and 5th in women. The mean age of our patients was 58 years, comparable to that reported in other African series from Niger and Benin [6, 7], where the mean age ranged from 46.7 to 51.2 years. RC thus appears at a relatively younger age in Africans than in Western populations, where the peak frequency lies between 60 and 70 years [5]. Our patients seem younger because of the youth of our population. In our series the sex ratio was 1.08, identical to that found in the literature [8, 9]. The study of prognostic factors allows the clinician to select patients for a given treatment. While the main factor remains the stage of the tumor at diagnosis [10], it is important to identify other prognostic factors that condition survival. Among the clinical prognostic factors studied in the literature, age remains debatable.
Six of fifteen studies evaluating this factor concluded that RC occurring in young subjects carried a poor prognosis [11]. In our study we found no significant difference in survival across age groups. Males predominated in our series. Three multivariate studies reported that long-term survival was better in women than in men [12–15]. This finding was not observed in our series, as we found no significant difference in survival between the sexes. Regarding topography, a study by Jatzko [16] examining the influence of tumor site on survival concluded that there was none. The same was true in our series, where there was no significant difference by tumor site (P = 0.123). Biologically, among tumor markers, CEA is the most widely used in colorectal disease. Preoperative elevation of this marker was a poor prognostic factor in several published studies [17, 18, 10]. In our series, an elevated serum CEA level did not influence survival. Concerning the therapeutic factors studied, since the late 1990s the combination of chemotherapy with radiotherapy has further improved the oncological and functional prognosis of RC. A meta-analysis performed in 2013 [19] comparing neoadjuvant radiochemotherapy versus surgery alone found no significant difference in long-term overall survival. In our study there was likewise no significant difference (p = 0.576) between neoadjuvant treatment plus surgery and surgery alone, although this must be interpreted with caution given the small size of our series.
Regarding surgical treatment, two prospective studies of 2,136 and 1,219 patients [20, 21] compared the different surgical techniques, namely abdominoperineal resection and anterior resection, for tumors of the middle and lower rectum. These studies found no significant difference in overall survival, which matches our findings. Beyond clinical, biological, and therapeutic prognostic factors, other factors of capital importance have been examined in various studies, namely the pathological analysis of the macroscopic and microscopic appearance of the tumor. Among the macroscopic factors, the influence of tumor size on survival remains controversial in the literature. Park JY and He WJ [22, 23] reported in their multivariate analyses that tumor size was not a prognostic factor influencing survival. Other studies have shown the opposite, such as that of Xu FY, which found that a size greater than 6 cm carried a poor prognosis [24]. In our analysis, tumor size significantly influenced overall survival (P = 0.023). Concerning the influence of lymph node involvement on survival in RC, several studies have confirmed such an influence [25, 26]. In our series, lymph node involvement and the number of invaded nodes were not influencing factors. Comparing survival across tumor stages in our series, stages III and IV had a lower survival rate (19% at three years), while stages I and II had a better rate (79% at three years). The same results were found in two multivariate studies comparing tumor stages [15, 27].
A Tunisian analytical study performed in 2006 [10] showed that patients with recurrence had a lower survival rate. This conclusion was confirmed in our analysis, since there was a significant difference between the two groups (P < 0.001). Conclusion: In this series, univariate analysis of the various prognostic factors conditioning survival retained only three factors influencing survival, namely tumor size, stage, and tumor recurrence. In multivariate analysis using the Cox model, a single factor was retained: tumor recurrence. What is known about this topic: survival is influenced by two factors: tumor size and tumor recurrence. What this study adds: the following factors do not influence survival: type of surgery, tumor site, and the presence or absence of neoadjuvant treatment; we confirm that tumor size and recurrence influence survival.
What is known about this topic: survival is influenced by two factors: tumor size and tumor recurrence. What this study adds: the following factors do not influence survival: type of surgery, tumor site, and the presence or absence of neoadjuvant treatment; we confirm that tumor size and recurrence influence survival.
Background: The aim of our study was to analyze histoprognostic factors in patients with non-metastatic rectal cancer operated on at the division of surgery "A" in Tlemcen, western Algeria, over a period of six years. Methods: Retrospective study of 58 patients with rectal adenocarcinoma. The evaluation criterion was survival. The parameters studied were sex, age, tumor stage, and tumor recurrence. Results: The mean age was 58 years, with 52% men and 48% women (sex ratio 1.08). Tumor site was the middle rectum in 41.37%, the lower rectum in 34.48%, and the upper rectum in 24.13%. According to TNM clinical staging, patients were classified as stage I (17.65%), stage II (18.61%), stage III (53.44%), and stage IV (7.84%). Median overall survival was 40 ± 2.937 months. By tumor stage, stages III and IV had a lower 3-year survival rate (19%) versus stages I and II, which had a survival rate of 75% (P = 0.000, 95% CI). Patients with tumor recurrence had a lower 3-year survival rate than those without recurrence (30.85% vs 64.30%, P = 0.043). Conclusions: In this series, univariate analysis of prognostic factors affecting survival retained only three factors influencing survival: tumor size, stage, and tumor recurrence. In multivariate analysis using the Cox model, only one factor was retained: tumor recurrence.
Introduction: Rectal cancer (RC) is often grouped with colorectal cancers (CRC). It is increasingly frequent and poses a real problem of diagnosis and management in developing countries [1]. In Algeria, its annual incidence is 31.8 cases per 100,000 inhabitants, an annual average of 1,500 incident cases. RC ranks 2nd among digestive cancers [2]. It is the second leading cause of cancer mortality in developed countries [1–3]. Advances in diagnosis and treatment (neoadjuvant therapy) have improved survival, which nevertheless does not exceed 50% at 5 years [4]. Recognizing the prognostic factors of RC conditions long-term survival. Identifying these factors is of capital importance, as it makes it possible to direct patients toward a more appropriate therapeutic protocol with a better-adapted surveillance schedule. The objective of our work was to analyze the histoprognostic factors of rectal cancer cases managed over a six-year period in surgery department "A" of the Tlemcen university hospital center in western Algeria. Conclusion: In this series, univariate analysis of the various prognostic factors conditioning survival retained only three factors influencing survival, namely tumor size, stage, and tumor recurrence. In multivariate analysis using the Cox model, a single factor was retained: tumor recurrence. What is known about this topic: survival is influenced by two factors: tumor size and tumor recurrence.
What this study adds: the following factors do not influence survival: type of surgery, tumor site, and the presence or absence of neoadjuvant treatment; we confirm that tumor size and recurrence influence survival.
Background: The aim of our study was to analyze histoprognostic factors in patients with non-metastatic rectal cancer operated on at the division of surgery "A" in Tlemcen, western Algeria, over a period of six years. Methods: Retrospective study of 58 patients with rectal adenocarcinoma. The evaluation criterion was survival. The parameters studied were sex, age, tumor stage, and tumor recurrence. Results: The mean age was 58 years, with 52% men and 48% women (sex ratio 1.08). Tumor site was the middle rectum in 41.37%, the lower rectum in 34.48%, and the upper rectum in 24.13%. According to TNM clinical staging, patients were classified as stage I (17.65%), stage II (18.61%), stage III (53.44%), and stage IV (7.84%). Median overall survival was 40 ± 2.937 months. By tumor stage, stages III and IV had a lower 3-year survival rate (19%) versus stages I and II, which had a survival rate of 75% (P = 0.000, 95% CI). Patients with tumor recurrence had a lower 3-year survival rate than those without recurrence (30.85% vs 64.30%, P = 0.043). Conclusions: In this series, univariate analysis of prognostic factors affecting survival retained only three factors influencing survival: tumor size, stage, and tumor recurrence. In multivariate analysis using the Cox model, only one factor was retained: tumor recurrence.
3,128
290
[ 373, 1058, 34, 91 ]
7
[ "de", "la", "survie", "le", "la survie", "en", "les", "un", "dans", "des" ]
[ "carcinologique fonctionnel du", "récidives tumorales en", "cancer du rectum", "récidive tumorale contribution", "le pronostic carcinologique" ]
null
null
[CONTENT] Adénocarcinome rectale | survie | récidive | Rectal adenocarcinoma | survival | recurrence [SUMMARY]
null
null
[CONTENT] Adénocarcinome rectale | survie | récidive | Rectal adenocarcinoma | survival | recurrence [SUMMARY]
[CONTENT] Adénocarcinome rectale | survie | récidive | Rectal adenocarcinoma | survival | recurrence [SUMMARY]
[CONTENT] Adénocarcinome rectale | survie | récidive | Rectal adenocarcinoma | survival | recurrence [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Aged, 80 and over | Algeria | Female | Humans | Male | Middle Aged | Multivariate Analysis | Neoplasm Recurrence, Local | Neoplasm Staging | Prognosis | Proportional Hazards Models | Rectal Neoplasms | Retrospective Studies | Survival Rate [SUMMARY]
null
null
[CONTENT] Adenocarcinoma | Adult | Aged | Aged, 80 and over | Algeria | Female | Humans | Male | Middle Aged | Multivariate Analysis | Neoplasm Recurrence, Local | Neoplasm Staging | Prognosis | Proportional Hazards Models | Rectal Neoplasms | Retrospective Studies | Survival Rate [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Aged, 80 and over | Algeria | Female | Humans | Male | Middle Aged | Multivariate Analysis | Neoplasm Recurrence, Local | Neoplasm Staging | Prognosis | Proportional Hazards Models | Rectal Neoplasms | Retrospective Studies | Survival Rate [SUMMARY]
[CONTENT] Adenocarcinoma | Adult | Aged | Aged, 80 and over | Algeria | Female | Humans | Male | Middle Aged | Multivariate Analysis | Neoplasm Recurrence, Local | Neoplasm Staging | Prognosis | Proportional Hazards Models | Rectal Neoplasms | Retrospective Studies | Survival Rate [SUMMARY]
[CONTENT] carcinologique fonctionnel du | récidives tumorales en | cancer du rectum | récidive tumorale contribution | le pronostic carcinologique [SUMMARY]
null
null
[CONTENT] carcinologique fonctionnel du | récidives tumorales en | cancer du rectum | récidive tumorale contribution | le pronostic carcinologique [SUMMARY]
[CONTENT] carcinologique fonctionnel du | récidives tumorales en | cancer du rectum | récidive tumorale contribution | le pronostic carcinologique [SUMMARY]
[CONTENT] carcinologique fonctionnel du | récidives tumorales en | cancer du rectum | récidive tumorale contribution | le pronostic carcinologique [SUMMARY]
[CONTENT] de | la | survie | le | la survie | en | les | un | dans | des [SUMMARY]
null
null
[CONTENT] de | la | survie | le | la survie | en | les | un | dans | des [SUMMARY]
[CONTENT] de | la | survie | le | la survie | en | les | un | dans | des [SUMMARY]
[CONTENT] de | la | survie | le | la survie | en | les | un | dans | des [SUMMARY]
[CONTENT] de | les | en | du | est | cas | dans les | plus | cr | en charge [SUMMARY]
null
null
[CONTENT] la | tumorale la | tumorale | survie | la survie | taille tumorale la | tumorale la récidive | la taille tumorale la | taille tumorale la récidive | la récidive [SUMMARY]
[CONTENT] la | de | survie | le | la survie | tumorale | tumorale la | les | en | un [SUMMARY]
[CONTENT] la | de | survie | le | la survie | tumorale | tumorale la | les | en | un [SUMMARY]
[CONTENT] Tlemcen | Algeria | six years [SUMMARY]
null
null
[CONTENT] only three ||| Cox | only one [SUMMARY]
[CONTENT] Tlemcen | Algeria | six years ||| 58 ||| ||| ||| ||| 58 years | 52% | 48% | 1,08 ||| 41.37% | 34.48% | 24.13% ||| 17.65% | 18.61% | 53.44% | 7.84% ||| Median | 40 months ±2,937 months ||| III | IV | 3 years | 19% | 75% | 0.000 | 95% ||| 3 years | 30.85% | 64.30% | 0.043 ||| only three ||| Cox | only one [SUMMARY]
[CONTENT] Tlemcen | Algeria | six years ||| 58 ||| ||| ||| ||| 58 years | 52% | 48% | 1,08 ||| 41.37% | 34.48% | 24.13% ||| 17.65% | 18.61% | 53.44% | 7.84% ||| Median | 40 months ±2,937 months ||| III | IV | 3 years | 19% | 75% | 0.000 | 95% ||| 3 years | 30.85% | 64.30% | 0.043 ||| only three ||| Cox | only one [SUMMARY]
Potential anticancer properties of bioactive compounds of Gymnema sylvestre and its biofunctionalized silver nanoparticles.
25565802
Gymnema sylvestre is an ethno-pharmacologically important medicinal plant used in many polyherbal formulations for its potential health benefits. Silver nanoparticles (SNPs) were biofunctionalized using aqueous leaf extracts of G. sylvestre. The anticancer properties of the bioactive compounds and the biofunctionalized SNPs were compared using the HT29 human colon adenocarcinoma cell line.
BACKGROUND
The preliminary phytochemical screening for bioactive compounds from aqueous extracts revealed the presence of alkaloids, triterpenes, flavonoids, steroids, and saponins. Biofunctionalized SNPs were synthesized using silver nitrate and characterized by ultraviolet-visible spectroscopy, scanning electron microscopy, energy-dispersive X-ray analysis, Fourier transform infrared spectroscopy, and X-ray diffraction for size and shape. The characterized biofunctionalized G. sylvestre were tested for its in vitro anticancer activity against HT29 human colon adenocarcinoma cells.
METHODS
The biofunctionalized G. sylvestre SNPs showed the surface plasmon resonance band at 430 nm. The scanning electron microscopy images showed the presence of spherical nanoparticles of various sizes, which were further determined using the Scherrer equation. In vitro cytotoxic activity of the biofunctionalized green-synthesized SNPs (GSNPs) indicated that the sensitivity of HT29 human colon adenocarcinoma cells to cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents, and also higher than that of the bioactive compound of the aqueous extract.
RESULTS
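The Scherrer equation mentioned above estimates mean crystallite size from XRD peak broadening as D = Kλ/(β cos θ). A minimal sketch; the peak position and width below are hypothetical illustrative values (a typical Ag(111) reflection), not the study's measured data:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Scherrer crystallite size D = K * lambda / (beta * cos(theta)).

    two_theta_deg : diffraction peak position 2-theta, in degrees
    fwhm_deg      : peak full width at half maximum, in degrees
    wavelength_nm : X-ray wavelength (default: Cu K-alpha, 0.15406 nm)
    k             : dimensionless shape factor (~0.9 for near-spherical grains)
    Returns the size D in nanometers; beta must be converted to radians.
    """
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical peak near 2-theta = 38.1 degrees with 0.5 degree FWHM.
size_nm = scherrer_size(38.1, 0.5)
```

Broader peaks (larger β) give smaller crystallite estimates, which is why instrument broadening is normally subtracted before applying the formula.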
Our results show that the anticancer properties of the bioactive compounds of G. sylvestre can be enhanced through biofunctionalizing the SNPs using the bioactive compounds present in the plant extract without compromising their medicinal properties.
CONCLUSION
[ "Animals", "Antineoplastic Agents", "Chlorocebus aethiops", "Gymnema sylvestre", "HT29 Cells", "Humans", "Microscopy, Electron, Scanning", "Nanoparticles", "Plant Extracts", "Plant Leaves", "Plants, Medicinal", "Saponins", "Silver", "Silver Nitrate", "Spectroscopy, Fourier Transform Infrared", "Triterpenes", "Vero Cells", "X-Ray Diffraction" ]
4274148
Introduction
For treatment of various diseases, bioactive components from medicinal plants that are similar to chemical compounds are used.1 In recent years, the use of ethno-botanical information in medicinal plant research has gained considerable attention in some segments of the scientific community.2 In one of the ethno-botanical surveys of medicinal plants commonly used by the Kani tribals in Tirunelveli hills of the Western Ghats in Tamil Nadu, India, it was revealed that Gymneme sylvestre is the most important species based on its use.2 The use of plant parts and isolated phytochemicals for the prevention and treatment of various health ailments has been in practice for many decades.3 G. sylvestre R. Br, commonly known as “Meshasringi”, is distributed over most of India and has a reputation in traditional medicine as a stomachic, diuretic, and a remedy to control diabetes mellitus. G. sylvestre R. Br4 is a woody, climbing plant that grows in the tropical forests of Central and Southern India and in parts of Asia.5 It is a pubescent shrub with young stems and branches, and has a distichous phyllotactic opposite arrangement pattern of leaves which are 2.5–6 cm long and are usually ovate or elliptical. The flowers are small, yellow, and in umbellate cymes, and the follicles are terete, lanceolate, and up to 3 inches in length.6 In homeopathy, as well as in folk and ayurvedic medicine, G. sylvestre has been used for diabetes treatment.7 G. 
sylvestre has bioactive components that can cure asthma, eye ailments, snakebite, piles, chronic cough, breathing troubles, colic pain, cardiopathy, constipation, dyspepsia, hemorrhoids, and hepatosplenomegaly, as well as assist in family planning.8 In addition, it also possesses antimicrobial,9 antitumor,5 anti-obesity,10 anti-inflammatory,11 anti-hyperglycemic,12 antiulcer, anti-stress, and antiallergic activity.13 The presence of flavonoids, saponins, anthraquinones, quercitol, and other alkaloid have been reported in the flowers, leaves, and fruits of G. sylvestre.14 The presence of other therapeutic agents, such as gymnemagenin, gymnemic acids, gymnemanol, and β-amyrin-related glycosides, which play a key role in therapeutic applications, have also been reported. The focus of the present work is to assess the potential therapeutic medicinal value of this herb and to understand/enhance the mechanistic action of their bioactive components.14 G. sylvestre contains triterpenes, saponins, and gymnemic acids belonging to the oleane and dammarene classes.15,16 The plant extract has also tested positive for alkaloids, acidic glycosides, and anthraquinone derivatives. Oleanane saponins are gymnemic acids and gymnema saponins, while dammarene saponins are gymnemasides. As reported by Thakur et al14 the aqueous extracts of the G. sylvestre leaves showed the presence of gymnemic acids Ι–VΙ, while the saponin fraction of the leaves tested positive for the presence of gymnemic acids XV–XVIII. The gymnemic acid derivative of gymnemagenin was elucidated from the fraction VIII–XII, which is responsible for the antidiabetic activity, and the fraction VIII stimulates the pancreas for insulin secretion. The novel D-glucoside structure with anti-sweet principle is present in the I–V saponin fraction. 
The presence of pentatriacontane, α- and β-chlorophylls, phytin, resins, D-quercitol, tartaric acid, formic acid, butyric acid, lupeol, and stigmasterol has been reported as other plant constituents of G. sylvestre,14 while the extract has also tested positive for alkaloids.13,17 Sharma et al have reported the antioxidant activity of oleane saponins from G. sylvestre plant extract and determined the IC50 values for 2,2-diphenylpicrylhydrazyl (DPPH) scavenging, superoxide radical scavenging, inhibition of in vitro lipid peroxidation, and protein carbonyl formation as 238 μg/mL, 140 μg/mL, 99 μg/mL, and 28 μg/mL, respectively, which may be due to the presence of flavonoids, phenols, tannins, and triterpenoids.18 The enhanced radiation (8 Gy)-induced augmentation of lipid peroxidation and depletion of glutathione and protein in mouse brain were reported by Sharma et al18 using multiherbal ayurvedic formulations containing extracts of G. sylvestre, such as “Hyponidd” and “Dihar”. They also demonstrated the antioxidant activity by increasing the levels of superoxide dismutase, glutathione, and catalase in rats through in vivo studies.19 Kang et al20 proved the role of antioxidants from G. sylvestre in diabetic rats using ethanolic extracts and several antioxidant assays, eg, the thiobarbituric acid assay with slight modifications, the egg yolk lecithin or 2-deoxyribose (associated with lipid peroxidation) assay, the superoxide dismutase-like activity assay, and the 2,2′-azinobis (3-ethylbenzothiazoline-6-sulfonic acid) assay. The potent anticancer activity of G. sylvestre against the human lung adenocarcinoma cell line (A549) and human breast carcinoma cell line (MCF7) using alcoholic extracts of the herb has been reported by Srikant et al.21 Also, Amaki et al22 reported the inhibition of the breast cancer resistance protein using the alcoholic extract of G. sylvestre.
Many plant-derived saponins, eg, ginsenosides, soyasaponins, and saikosaponins, have been found to exhibit significant anticancer activity. The anticancer activity of gymnemagenol on HeLa cancer cell lines in in vitro conditions was determined by the MTT cell proliferation assay for cytotoxic activity of saponins. Using 5 μg/mL, 15 μg/mL, 25 μg/mL, and 50 μg/mL concentrations of gymnemagenol, the IC50 value was found to be 37 μg/mL after 96 hours. The isolated bioactive constituent, gymnemagenol, showed a high degree of inhibition to HeLa cancer cell line proliferation, and saponins were not found to be toxic to the growth of normal cells under in vitro conditions.23 Already many researchers have reported that the leaves of G. sylvestre lower blood sugar, stimulate the heart, uterus, and circulatory systems, and exhibit anti-sweet and hepatoprotective activities.20,24–31 Administration of G. sylvestre extract to diabetic rats increased superoxide dismutase activity and decreased lipid peroxide by either directly scavenging the reactive oxygen species, due to the presence of various antioxidant compounds, or by increasing the synthesis of antioxidant molecules (albumin and uric acid).24,30,32 Therefore, in this study, an attempt was made to synthesize the silver nanoparticles (SNPs) from aqueous extracts of the G. sylvestre leaves. These green-synthesized SNPs (GSNPs) of G. sylvestre were examined by ultraviolet–visible (UV–vis) spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis for studying their size and shape. The synthesized and well-characterized nanoparticles (NPs) were tested for their cytotoxicity effect. 
Our findings clearly demonstrate that it is indeed possible to synthesize SNPs in a much greener way without compromising their antibacterial properties; plant extracts may thus prove to be a good alternative route to NPs with improved antibacterial and antiviral properties for diabetic wound healing applications. Goix et al33 and Boholm and Arvidsson34 have pointed out that silver can be either beneficial or harmful in relation to four main values: the environment, health, sewage treatment, and product effectiveness. As reported by Barua et al,35 poly(ethylene glycol)-stabilized colloidal SNPs showed nonhazardous anticancer and antibacterial properties. Jin et al36 have reported therapeutic applications of plant-extract-based scaffolds for wound healing and skin reconstitution studies.
Qualitative and quantitative phytochemical analysis
The qualitative phytochemical analysis of G. sylvestre extracts was performed following the methods of Parekh and Chanda38 to determine the presence of alkaloids (Mayer, Wagner, Dragendorff), flavonoids (alkaline reagent, Shinoda), phenolics (lead acetate, alkaline reagent test), triterpenes (Liebermann–Burchard test), saponins (foam test), and tannins (gelatin).39 The results were qualitatively expressed as positive (+) or negative (−).40 The chemicals used for the study were purchased from Sigma-Aldrich (Chennai, India). The phytochemical quantitative analyses are described briefly in our previous paper.13 The total phenolic content was measured using the Folin–Ciocalteu colorimetric method. The flavonoids were estimated using the aluminum chloride colorimetric method. Gallic acid was used as the standard for the analysis of total antioxidant capacity, and the DPPH radical scavenging activity was assayed following the method described by Blois.41
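The DPPH readout reported in the Results (≈52% scavenging) follows from the standard Blois arithmetic: percentage scavenging = (Acontrol − Asample)/Acontrol × 100, with absorbance read at 517 nm. A minimal sketch, with placeholder absorbance values chosen only for illustration:

```python
# Hedged sketch of the standard DPPH calculation (Blois method).
# The absorbance values are illustrative placeholders, not measurements
# from this study; they merely land near the reported ~52 % figure.

def dpph_scavenging(a_control, a_sample):
    """% DPPH radical scavenging = (Ac - As) / Ac * 100."""
    return (a_control - a_sample) / a_control * 100.0

print(round(dpph_scavenging(0.842, 0.403), 2))  # → 52.14
```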
Results
Phytochemical screening of G. sylvestre leaf extract
The preliminary phytochemical screening of aqueous extracts of G. sylvestre revealed the presence of alkaloids, phenols, flavonoids, sterols, tannins, and triterpenes (Table 1). As shown in Table 2, 125.62±26.84 μg/g of total flavonoids, 285.23±1.11 μg/g of total phenols, and 111.53±15.13 μg/g of tannin were present in the aqueous extract of G. sylvestre. The flavonoids and phenolic compounds exhibited a wide range of biological activities, such as antioxidant and lipid peroxidation inhibition.13 The estimated total antioxidant activity was 9.13±0.04 μg/g, and the DPPH radical scavenging activity was 52.14%±0.32% (Table 2).

Characterization of biofunctionalized SNPs
The color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and the reduction of AgNO3. The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference.
UV–vis spectrometry is a reliable and reproducible technique that can be used to characterize metal NPs accurately, though it does not provide direct information about particle sizes. The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding medium. Figure 2 shows the time-dependent intensity of the absorption band, which reached its maximum at 12 hours, after which no further change in the spectrum was observed, indicating that the precursors had been consumed. Initially, the UV–vis spectrum showed no absorption in the region 350–600 nm, but after the addition of extract a distinct band appeared at 432 nm. When silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde groups present in the flavonoids (125.6 μg/g), which were in turn oxidized to carboxyl groups. The carboxyl groups of the phenols from the G. sylvestre extract (285.23 μg/g) also acted as a surfactant, attaching the major phytochemicals from the plant extract to the surface of the SNPs. Our previous studies on the synthesis of SNPs using aqueous extracts of Memecylon edule,37 Memecylon umbellatum,47 Chrysopogon zizanioides,46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of GSNPs changed from colorless/straw color to ruby red. Our results are also comparable with other available reports of plant-extract-mediated synthesis of SNPs. From Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images clearly indicate a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. The total phenolic content, flavonoids, and tannins were mostly responsible for the bioreduction of the SNPs.
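Following the surface plasmon band over time, as described above, amounts to locating the wavelength of maximum absorbance in each 0.5 nm-step scan. A minimal sketch over a synthetic spectrum (the Gaussian band shape and its parameters are assumptions for illustration, not measured data):

```python
# Minimal sketch: locating the surface plasmon resonance peak (lambda-max)
# in a UV-vis spectrum sampled at 0.5 nm steps, as used here to follow the
# 432 nm band. The Gaussian-shaped spectrum below is synthetic test data.

def lambda_max(wavelengths, absorbances):
    """Wavelength (nm) at which absorbance is maximal."""
    return max(zip(absorbances, wavelengths))[1]

wl = [350 + 0.5 * i for i in range(501)]  # 350-600 nm scan window
spec = [1.2 * 2.718281828 ** (-((w - 432) / 40) ** 2) for w in wl]  # synthetic SPR band

print(lambda_max(wl, spec))  # → 432.0
```

In practice one would run this on each timed scan and watch the peak intensity grow until the 12-hour plateau.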
In this green synthesis, the phytochemicals from the plant extract acted as a surfactant to prevent aggregation of the synthesized SNPs. The SEM images from our earlier research revealed that the eco-friendly biological synthesis of NPs utilizing the leaf extracts of M. edule,37 C. zizanioides,46 and M. umbellatum47 likewise showed no aggregation, owing to the biomolecules from the plant extract. The mechanism behind this aggregation-free particle formation may be spontaneous nucleation and isotropic growth of the NPs in the presence of the plant extract. As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures, forming the nanospherical particles typically observed in this synthesis.51 The elemental composition of the green-synthesized Ag NPs was analyzed through EDAX. These measurements confirmed the presence of the elemental silver signal of the SNPs. The vertical axis displays the number of X-ray counts, and the horizontal axis displays the energy in keV. The EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows strong signals from silver atoms along with weaker signals from the carbon and oxygen of biomolecules in the plant extract. The elemental silver peak at 2–4 keV, the major emission peak specified for metallic silver, appeared with minor C and O peaks due to the capping of the Ag NPs by the biomolecules of G. sylvestre leaf extract; the absence of other peaks evidenced the purity of the Ag NPs. The FTIR analysis in Figure 5 shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites, such as saponins, terpenoids, and the gymnemagenin derivative of gymnemic acid, containing amine, aldehyde, carboxylic acid, and alcohol functional groups.
The presence of the amide linkages seen in Figure 5 suggests that the different functional groups of the proteins present in the plant extract might cap the NPs and play an important role in the stabilization of the green NPs formed. The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibrations of aromatic and aliphatic amines, respectively, which agrees with the earlier reports of Suman et al.52 The positions of these bands were comparable to those reported for the phytochemicals quantified in the G. sylvestre extract as total phenols, flavonoids, and tannins (Table 2). Thus, we can confirm that the nanocapping by phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs. The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of the phyto-capped Ag NPs, confirming the role of the phytoconstituents (mostly gymnemic acid) in protecting the Ag NPs from aggregation. Also, during our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particle diameters of the SNPs formed were known to a high degree of accuracy. A detailed study of the large-scale synthesis and elemental composition of the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch; in the future, the elemental analysis can be carried out as described earlier by other researchers.53,54 The XRD analysis of the NPs represented in Figure 6 shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to exhibit monocrystallinity.
The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulata, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46 Table 3 shows the characteristic features of GSNPs synthesized using various parts of different plant species, as reported by various researchers, along with our previous reports.
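The plane assignments above can be cross-checked with Bragg's law. A small sketch, assuming Cu Kα radiation (λ ≈ 1.5406 Å, not stated in the text) and the standard fcc silver lattice constant (a ≈ 4.086 Å); the predicted 2θ values land within about 0.3° of the reported peaks:

```python
# Hedged check: for fcc silver, Bragg's law 2d·sinθ = λ with
# d = a / sqrt(h² + k² + l²) predicts 2θ positions close to the reported
# 38.2°, 44.5°, 64.7°, and 77.7° peaks. Cu K-alpha wavelength is assumed.
import math

A_AG = 4.086   # Å, fcc silver lattice constant (standard literature value)
WL = 1.5406    # Å, Cu K-alpha wavelength (assumption; not stated in the text)

def two_theta(h, k, l, a=A_AG, wl=WL):
    d = a / math.sqrt(h * h + k * k + l * l)  # interplanar spacing
    return math.degrees(2 * math.asin(wl / (2 * d)))

for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0), (3, 1, 1)]:
    print(hkl, round(two_theta(*hkl), 1))  # ≈ 38.1, 44.3, 64.4, 77.4
```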
In vitro anticancer activity
From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay. The GSNPs were taken up by mammalian cells through different mechanisms, such as pinocytosis, endocytosis, and phagocytosis.75 Once inside the cells, the NPs interact with cellular components and cause DNA damage and cell death. The GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that the G. sylvestre plant extract alone at 85 μg/mL showed 30.77% inhibition of HT29 cell growth. From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and thereby caused cell death. The control cells were clustered, healthy, and viable (Figure 8A), whereas proliferation of the HT29 cells was significantly inhibited by the G. sylvestre extract (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), and the clearly visible cell debris in Figure 8D is due to cell death caused by the 85 μg/mL SNP treatment. These results indicate that the HT29 human colon cancer cell line is more sensitive to these cytotoxic agents than the Vero cell line. Sahu et al76 have reported four new triterpenoid saponins, gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan has identified acylation with diangeloyl groups at the C21–22 positions of triterpenoid saponins as essential for cytotoxicity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, involving both the BAD-mediated intrinsic and the caspase-8-mediated extrinsic apoptotic signaling pathways. Promising saponins have been further studied as potential anticancer agents by many researchers. Ai et al79 proposed a qualitative method to recognize the presence or absence of cancer cells using gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al,80 size effects and multifunctionality are the main characteristics of NPs, so our one-step synthesis of SNPs using the aqueous extract of G. sylvestre may yield a potential anticancer drug for cancer therapy.
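The inhibition percentages quoted above (95.23% for GSNPs, 30.77% for the extract alone) follow from the usual MTT readout arithmetic on formazan absorbance of treated versus untreated wells. A sketch with invented absorbance values (the triplicate layout follows the assay described here; the numbers are not the paper's raw data):

```python
# Sketch of the usual MTT readout arithmetic: percentage growth inhibition
# from formazan absorbance of treated vs untreated wells, averaged over
# triplicates as in the assay here. Absorbance values are invented examples.

def percent_inhibition(control_abs, treated_abs):
    """(1 - mean treated absorbance / mean control absorbance) * 100."""
    mc = sum(control_abs) / len(control_abs)
    mt = sum(treated_abs) / len(treated_abs)
    return (1.0 - mt / mc) * 100.0

control = [0.81, 0.79, 0.80]     # untreated HT29 wells (triplicate)
treated = [0.040, 0.037, 0.038]  # 85 μg/mL GSNP wells -- hypothetical values

print(round(percent_inhibition(control, treated), 2))  # → 95.21
```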
Further studies have to be carried out to understand the nature of cytotoxicity and the death or proliferation of cells caused by GSNPs from G. sylvestre leaf extract.
Conclusion
The green synthesis of biofunctionalized SNPs from the leaves of G. sylvestre was economical, nontoxic, and environmentally benign. Owing to the reducing and capping nature of the bioactive phytocompounds present in the aqueous extract of G. sylvestre, a stable cap was formed around the silver of the biofunctionalized SNPs. The presence of the functional groups of the bioactive compounds was confirmed by FTIR spectra. The particle size and the spherical shape of the SNPs were determined by XRD and SEM analyses. Since both the plant extract and the biofunctionalized SNPs showed anticancer activity against cancer cells, G. sylvestre may serve as a source of potential anticancer drugs. The present study demonstrated the anticancer activities of both the bioactive compounds of the leaf extract and the biofunctionalized SNPs against HT29 human adenocarcinoma cells in vitro. Our studies provide an important basis for the application of NPs for in vitro anticancer activity against human colon adenocarcinoma cells. Our earlier reports have also shown the potential antiulcer properties of G. sylvestre in mice.13 G. sylvestre is therefore a good plant candidate for further studies in alternative medicine, owing to its multifunctional medicinal properties.
[ "Collection of plants", "Preparation of aqueous extract", "Synthesis of SNPs", "Characterization of SNPs", "In vitro anticancer activity", "Cell line and culture medium", "Cell viability by MTT assay", "Morphological changes", "Phytochemical screening of G. sylvestre leaf extract", "Characterization of biofunctionalized SNPs", "In vitro anticancer activity", "Conclusion" ]
[ "Fresh leaves of G. sylvestre from plants of same age group of a single population were collected from the experimental Herbal Garden, Tamil University, Thanjavur, Tamil Nadu, India, in July, 2010. The herbarium was prepared for authentication (Ref No: SRM\\CENR\\PTC\\2010\\03), and taxonomic identification was done by Dr Jayaraman, Professor, Department of Botany, Madras Christian College, Tambaram, Chennai, Tamil Nadu. The herb sample is maintained in the research laboratory for further reference.", "The leaves of G. sylvestre were washed with distilled water to remove the dirt and further washed with mild soap solution and rinsed thrice with distilled water. The leaves were blot-dried with tissue paper and shade dried at room temperature for 2 weeks. After complete drying, the leaves were cut into small pieces and powdered in a mixer and sieved using a 20-μm mesh sieve to get a uniform size range for further studies. Twenty grams of the sieved leaf powder was added to 100 mL of sterile distilled water in a 500-mL Erlenmeyer flask and boiled for 5 minutes. The flask was kept under continuous dark conditions at 30°C in a shaker. The extract was filtered and stored in an airtight container and protected from sunlight for further use.37", "Silver nitrate (AgNO3) was purchased from Sigma-Aldrich (St Louis, MO, USA), and all solutions were freshly made for the synthesis of SNPs. The aqueous leaf extract of G. sylvestre was used for the bioreduction synthesis of the NPs. The SNPs were synthesized by adding 5 mL of plant extract to 15 mL of 1 mM aqueous AgNO3 solution in a 250-mL Erlenmeyer flask and incubated in a rotary shaker at 150 rpm in dark. The synthesis of NPs was confirmed spectrophotometrically at every 30-minute interval till no reduction was observed. 
The reduction was observed by the color change in the colloidal solution, which confirmed the formation of SNPs.42,43", "The GSNPs were characterized periodically by measuring the bioreduction of AgNO3 using a UV–vis 3000+ double-beam spectrophotometer (Lab India, Maharashtra, India). The spectrometric range was 200–800 nm, and scanning interval was 0.5 nm.\nThe surface morphology of the biofunctionalized SNPs was characterized by high-resolution SEM analysis (JSM-5600LV; JEOL, Tokyo, Japan) and the elemental compositions were determined by EDAX analysis (S-3400N; Hitachi, Tokyo, Japan). The functional characterization of biomolecules present in the GSNPs from the leaf extract of G. sylvestre was done by FTIR spectrometry (RX1; Perkin-Elmer, Waltham, MA, USA).44\nThe crystal lattice and the size of the synthesized NPs were determined by XRD measurements using an XRD-6000 X-ray diffractometer (Shimadzu, Kyoto, Japan). The crystal-lite domain size was calculated from the width of the XRD peaks, as described in our previous papers.17,45–47", " Cell line and culture medium Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17\nVero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. 
The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17\n Cell viability by MTT assay MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48\nMTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. 
sylvestre were also similarly assayed for the anticancer activity for comparison.17,48\n Morphological changes The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49\nThe cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49", "Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17", "MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48", "The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49", "The preliminary phytochemical screening of aqueous extracts of G. sylvestre revealed the presence of alkaloids, phenols, flavonoids, sterols, tannins, and triterpenes (Table 1). 
As shown in Table 2, 125.62±26.84 μg/g of total flavonoids, 285.23±1.11 μg/g of total phenols, and 111.53±15.13 μg/g of tannin were present in the aqueous extract of G. sylvestre. The flavonoids and phenolic compounds exhibited a wide range of biological activities, such as antioxidant and lipid peroxidation inhibition.13\nThe estimated total antioxidant activity was 9.13±0.04 μg/g and the DPPH radical scavenging activity was 52.14%±0.32% (Table 2).", "The color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and reduced AgNO3. The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference.\nUV–vis spectrometry is a reliable and reproducible technique that can be used to accurately characterize the metal NPs though it does not provide direct information regarding the particle sizes. The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding media.\nFigure 2 shows the time-dependent intensity of the absorption band, which reached its maximum peak at 12 hours, after which no further change in the spectrum was observed indicating that the precursors had been consumed. Initially, the UV–vis spectrum did not show evidence of any absorption in the region 350–600 nm, but after the addition of extract a distinct band was observed at 432 nm.\nWhen silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde group present in the flavonoids (125.6 μg/g), which was further oxidized to the carboxyl group. 
Also, the carboxyl groups of the phenols from the G. sylvestre extract (285.23 μg/g) acted as a surfactant, attaching the major phytochemicals from the plant extract to the surface of the SNPs. Our previous studies on the synthesis of SNPs using aqueous extracts of Memecylon edule,37 Memecylon umbellatum,47 Chrysopogon zizanioides,46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of GSNPs changed from colorless/straw to ruby red. Our results are also comparable with other available reports of plant-extract-mediated synthesis of SNPs.

From Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images clearly indicated a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. The total phenolic content, flavonoids, and tannins were mostly responsible for the bioreduction of the SNPs. In this green synthesis, the phytochemicals from the plant extract acted as a surfactant that prevented aggregation of the synthesized SNPs. The SEM images from our earlier research on the eco-friendly biological synthesis of NPs using the leaf extracts of M. edule,37 C. zizanioides,46 and M. umbellatum47 likewise showed no aggregation, owing to the biomolecules from the plant extract. The mechanism behind this aggregation-free particle formation may be the spontaneous nucleation and isotropic growth of NPs in the presence of the plant extract. As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures, forming the nanospherical particles typically observed in this synthesis.51

The elemental composition of the green-synthesized Ag NPs was analyzed through EDAX. These measurements confirmed the presence of the elementary silver signal of the SNPs.
The vertical axis displays the number of X-ray counts and the horizontal axis displays the energy in keV.

The EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows strong signals from silver atoms along with weaker signals from the carbon and oxygen of the biomolecules of the plant extract. The elemental silver peak at 2–4 keV, the major emission peak specified for metallic silver, was observed together with minor C and O peaks arising from the capping of the Ag NPs by the biomolecules of G. sylvestre leaf extract; the absence of other peaks evidenced the purity of the Ag NPs.

FTIR analysis (Figure 5) shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites such as saponins, terpenoids, and the gymnemagenin derivative of gymnemic acid, containing the functional groups of amines, aldehydes, carboxylic acids, and alcohols.

The presence of the amide linkages seen in Figure 5 suggests that the different functional groups of the proteins present in the plant extracts might be capping the NPs and playing an important role in the stabilization of the green NPs formed. The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibrations of aromatic and aliphatic amines, respectively, which agrees with earlier reports of Suman et al.52 The positions of these bands are comparable to those reported for the phytochemicals found in the G. sylvestre extract as total phenols, flavonoids, and tannins (Table 2). Thus, we can confirm that the nanocapping by the phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs.
The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of the phyto-capped Ag NPs, confirming the role of the phytoconstituents (mostly gymnemic acid) in protecting the Ag NPs from aggregation.

In our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particle diameters of the SNPs formed were known to a high degree of accuracy. A detailed study of large-scale synthesis and of the elemental composition of the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch. In future, the elemental analysis can be carried out as described earlier by other researchers.53,54

The XRD analysis of the NPs, represented in Figure 6, shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to be monocrystalline. The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulata, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46

Table 3 shows the characteristic features of GSNPs prepared from various parts of different plant species, as reported by various researchers, along with our previous reports.

In vitro anticancer activity
From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay.
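The percent-viability numbers from the MTT readout reduce to a ratio of formazan absorbances against the untreated control. A minimal sketch with illustrative absorbance values (not data from this study):

```python
# Sketch of the MTT viability arithmetic: viability and growth inhibition
# relative to untreated control wells. All absorbance values are
# illustrative placeholders, not measurements from this study.

def percent_viability(abs_treated, abs_control):
    """Viability (%) = A_treated / A_control * 100."""
    return abs_treated / abs_control * 100.0

def percent_inhibition(abs_treated, abs_control):
    """Inhibition (%) = 100 - viability."""
    return 100.0 - percent_viability(abs_treated, abs_control)

# Triplicate wells, averaged before the calculation
control = sum([1.21, 1.19, 1.20]) / 3   # untreated wells
treated = sum([0.06, 0.05, 0.07]) / 3   # nanoparticle-treated wells
print(f"{percent_inhibition(treated, control):.1f}% inhibition")  # -> 95.0% inhibition
```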
The GSNPs are taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with the cellular materials and cause DNA damage and cell death.

The GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that the G. sylvestre plant extract alone at 85 μg/mL concentration showed 30.77% inhibition of HT29 cell growth. From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and thereby caused cell death. The control cells were clustered, healthy, and viable (Figure 8A), whereas proliferation of the HT29 cells was significantly inhibited by GS (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), and the clearly visible cell debris in Figure 8D is due to cell death caused by the 85 μg/mL SNP treatment.

These results indicate that the sensitivity of the HT29 human colon cancer cell line to cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents. Sahu et al76 have reported the presence of four new triterpenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan77 has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxicity toward tumor cells. Tang et al78 have reported that saponin could induce apoptosis of U251 cells, with both the BAD-mediated intrinsic apoptotic signaling pathway and the caspase-8-mediated extrinsic apoptotic signaling pathway involved in the apoptosis.

These promising saponins were further studied as potential anticancer agents by many researchers.
Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al,80 size effects and multifunctionality are the main characteristics of NPs, so the SNPs from our one-step synthesis using the aqueous extract of G. sylvestre may serve as a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of the cytotoxicity and the death or proliferation of cells caused by GSNPs from G. sylvestre leaf extract.

Conclusion
The green synthesis of biofunctionalized SNPs from the leaves of G. sylvestre was economical, nontoxic, and environmentally benign. Owing to the reducing and capping nature of the bioactive phytocompounds present in the aqueous extract of G. sylvestre, a stable cap was formed around the silver ions of the biofunctionalized SNPs. The presence of the functional groups of the bioactive compounds was confirmed by FTIR spectra. The particle size and the spherical shape of the SNPs were determined by XRD and SEM analyses. Since both the plant extract and the biofunctionalized SNPs showed anticancer activity against cancer cells, G. sylvestre may serve as a source for potential anticancer drugs. The present study showed the anticancer activities of both the bioactive compounds of the leaf extract and the biofunctionalized SNPs synthesized against HT29 human adenocarcinoma cells in vitro. Our studies provide an important basis for the application of NPs for in vitro anticancer activity against human colon adenocarcinoma cells. Our earlier reports have also shown the potential antiulcer properties of G. sylvestre in mice.13 G. sylvestre is therefore a good plant candidate for further studies in alternative medicine, owing to its multifunctional medicinal properties.
Introduction
For the treatment of various diseases, bioactive components from medicinal plants that are similar to chemical compounds are used.1 In recent years, the use of ethno-botanical information in medicinal plant research has gained considerable attention in some segments of the scientific community.2 One ethno-botanical survey of medicinal plants commonly used by the Kani tribals in the Tirunelveli hills of the Western Ghats in Tamil Nadu, India, revealed that Gymnema sylvestre is the most important species based on its use.2 The use of plant parts and isolated phytochemicals for the prevention and treatment of various health ailments has been in practice for many decades.3

G. sylvestre R. Br, commonly known as “Meshasringi”, is distributed over most of India and has a reputation in traditional medicine as a stomachic, diuretic, and a remedy to control diabetes mellitus. G. sylvestre R. Br4 is a woody, climbing plant that grows in the tropical forests of Central and Southern India and in parts of Asia.5 It is a pubescent shrub with young stems and branches, and has a distichous phyllotactic opposite arrangement pattern of leaves, which are 2.5–6 cm long and usually ovate or elliptical. The flowers are small, yellow, and in umbellate cymes, and the follicles are terete, lanceolate, and up to 3 inches in length.6

In homeopathy, as well as in folk and ayurvedic medicine, G. sylvestre has been used for diabetes treatment.7 G.
sylvestre has bioactive components that can cure asthma, eye ailments, snakebite, piles, chronic cough, breathing troubles, colic pain, cardiopathy, constipation, dyspepsia, hemorrhoids, and hepatosplenomegaly, as well as assist in family planning.8 In addition, it also possesses antimicrobial,9 antitumor,5 anti-obesity,10 anti-inflammatory,11 anti-hyperglycemic,12 antiulcer, anti-stress, and antiallergic activity.13

The presence of flavonoids, saponins, anthraquinones, quercitol, and other alkaloids has been reported in the flowers, leaves, and fruits of G. sylvestre.14 The presence of other therapeutic agents, such as gymnemagenin, gymnemic acids, gymnemanol, and β-amyrin-related glycosides, which play a key role in therapeutic applications, has also been reported. The focus of the present work is to assess the potential therapeutic medicinal value of this herb and to understand and enhance the mechanistic action of its bioactive components.14

G. sylvestre contains triterpenes, saponins, and gymnemic acids belonging to the oleane and dammarene classes.15,16 The plant extract has also tested positive for alkaloids, acidic glycosides, and anthraquinone derivatives. The oleanane saponins are gymnemic acids and gymnema saponins, while the dammarene saponins are gymnemasides.

As reported by Thakur et al,14 the aqueous extracts of the G. sylvestre leaves showed the presence of gymnemic acids Ι–VΙ, while the saponin fraction of the leaves tested positive for the presence of gymnemic acids XV–XVIII. The gymnemic acid derivative gymnemagenin was elucidated from fractions VIII–XII, which are responsible for the antidiabetic activity, and fraction VIII stimulates the pancreas for insulin secretion. The novel D-glucoside structure with the anti-sweet principle is present in the I–V saponin fraction.
The presence of pentatriacontane, α- and β-chlorophylls, phytin, resins, D-quercitol, tartaric acid, formic acid, butyric acid, lupeol, and stigmasterol has been reported among the other plant constituents of G. sylvestre,14 while the extract has also tested positive for alkaloids.13,17

Sharma et al have reported the antioxidant activity of oleane saponins from G. sylvestre plant extract and determined the IC50 values for 2,2-diphenylpicrylhydrazyl (DPPH) scavenging, superoxide radical scavenging, inhibition of in vitro lipid peroxidation, and protein carbonyl formation as 238 μg/mL, 140 μg/mL, 99 μg/mL, and 28 μg/mL, respectively, which may be due to the presence of flavonoids, phenols, tannins, and triterpenoids.18 The enhanced radiation (8 Gy)-induced augmentation of lipid peroxidation and depletion of glutathione and protein in mouse brain were reported by Sharma et al18 using multiherbal ayurvedic formulations containing extracts of G. sylvestre, such as “Hyponidd” and “Dihar”. They also demonstrated the antioxidant activity by increasing the levels of superoxide dismutase, glutathione, and catalase in rats through in vivo studies.19

Kang et al20 proved the role of antioxidants from G. sylvestre in diabetic rats using ethanolic extracts and several antioxidant assays, eg, a thiobarbituric acid assay with slight modifications, an egg yolk lecithin or 2-deoxyribose (associated with lipid peroxidation) assay, a superoxide dismutase-like activity assay, and a 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) assay.

The potent anticancer activity of G. sylvestre against human lung adenocarcinoma (A549) and human breast carcinoma (MCF7) cell lines using alcoholic extracts of the herb has been reported by Srikant et al.21 Also, Amaki et al22 reported the inhibition of the breast cancer resistance protein using the alcoholic extract of G.
sylvestre.

Many plant-derived saponins, eg, ginsenosides, soyasaponins, and saikosaponins, have been found to exhibit significant anticancer activity. The anticancer activity of gymnemagenol on HeLa cancer cell lines under in vitro conditions was determined by the MTT cell proliferation assay for the cytotoxic activity of saponins. Using 5 μg/mL, 15 μg/mL, 25 μg/mL, and 50 μg/mL concentrations of gymnemagenol, the IC50 value was found to be 37 μg/mL after 96 hours. The isolated bioactive constituent, gymnemagenol, showed a high degree of inhibition of HeLa cancer cell line proliferation, and the saponins were not found to be toxic to the growth of normal cells under in vitro conditions.23

Many researchers have already reported that the leaves of G. sylvestre lower blood sugar, stimulate the heart, uterus, and circulatory systems, and exhibit anti-sweet and hepatoprotective activities.20,24–31 Administration of G. sylvestre extract to diabetic rats increased superoxide dismutase activity and decreased lipid peroxide, either by directly scavenging the reactive oxygen species, due to the presence of various antioxidant compounds, or by increasing the synthesis of antioxidant molecules (albumin and uric acid).24,30,32

Therefore, in this study, an attempt was made to synthesize silver nanoparticles (SNPs) from aqueous extracts of G. sylvestre leaves. These green-synthesized SNPs (GSNPs) of G. sylvestre were examined by ultraviolet–visible (UV–vis) spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis to study their size and shape. The synthesized and well-characterized nanoparticles (NPs) were tested for their cytotoxicity effect.
Our findings clearly demonstrate that it is indeed possible to synthesize SNPs in a much greener way without compromising their antibacterial properties, and thus plant extracts may prove to be a good alternative for obtaining such NPs with improved antibacterial and antiviral properties for diabetic wound healing applications. Goix et al33 and Boholm and Arvidsson34 have pointed out that silver is either beneficial or harmful in relation to four main values: the environment, health, sewage treatment, and product effectiveness. As reported by Barua et al,35 poly(ethylene glycol)-stabilized colloidal SNPs showed nonhazardous anticancer and antibacterial properties. Jin et al36 have reported the therapeutic applications of plant-extract-based scaffolds for wound healing and skin reconstitution studies.

Materials and methods

Collection of plants
Fresh leaves of G. sylvestre from plants of the same age group of a single population were collected from the experimental Herbal Garden, Tamil University, Thanjavur, Tamil Nadu, India, in July, 2010. The herbarium was prepared for authentication (Ref No: SRM\CENR\PTC\2010\03), and taxonomic identification was done by Dr Jayaraman, Professor, Department of Botany, Madras Christian College, Tambaram, Chennai, Tamil Nadu. The herb sample is maintained in the research laboratory for further reference.

Preparation of aqueous extract
The leaves of G.
sylvestre were washed with distilled water to remove the dirt and further washed with mild soap solution and rinsed thrice with distilled water. The leaves were blot-dried with tissue paper and shade dried at room temperature for 2 weeks. After complete drying, the leaves were cut into small pieces and powdered in a mixer and sieved using a 20-μm mesh sieve to get a uniform size range for further studies. Twenty grams of the sieved leaf powder was added to 100 mL of sterile distilled water in a 500-mL Erlenmeyer flask and boiled for 5 minutes. The flask was kept under continuous dark conditions at 30°C in a shaker. The extract was filtered and stored in an airtight container and protected from sunlight for further use.37

Qualitative and quantitative phytochemical analysis
The qualitative phytochemical analysis of G.
sylvestre extracts was performed following the methods of Parekh and Chanda38 to determine the presence of alkaloids (Mayer, Wagner, Dragendorff), flavonoids (alkaline reagent, Shinoda), phenolics (lead acetate, alkaline reagent test), triterpenes (Liberman-Burchard test), saponins (foam test), and tannins (gelatin).39 The results were qualitatively expressed as positive (+) or negative (−).40 The chemicals used for the study were purchased from Sigma-Aldrich (Chennai, India).

The phytochemical quantitative analyses are described briefly in our previous paper.13 The total phenolic content was measured using the Folin–Ciocalteu colorimetric method. The flavonoids were estimated using the aluminum chloride colorimetric method. Gallic acid was used as the standard for the analysis of total antioxidant capacity, and the DPPH radical scavenging activity was assayed following the methods described by Blios.41

Synthesis of SNPs
Silver nitrate (AgNO3) was purchased from Sigma-Aldrich (St Louis, MO, USA), and all solutions were freshly made for the synthesis of SNPs. The aqueous leaf extract of G. sylvestre was used for the bioreduction synthesis of the NPs. The SNPs were synthesized by adding 5 mL of plant extract to 15 mL of 1 mM aqueous AgNO3 solution in a 250-mL Erlenmeyer flask and incubating in a rotary shaker at 150 rpm in the dark. The synthesis of NPs was monitored spectrophotometrically at every 30-minute interval until no further reduction was observed. The reduction was observed by the color change in the colloidal solution, which confirmed the formation of SNPs.42,43

Characterization of SNPs
The GSNPs were characterized periodically by measuring the bioreduction of AgNO3 using a UV–vis 3000+ double-beam spectrophotometer (Lab India, Maharashtra, India). The spectrometric range was 200–800 nm, and the scanning interval was 0.5 nm.

The surface morphology of the biofunctionalized SNPs was characterized by high-resolution SEM analysis (JSM-5600LV; JEOL, Tokyo, Japan), and the elemental compositions were determined by EDAX analysis (S-3400N; Hitachi, Tokyo, Japan).
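For the synthesis step above, the working AgNO3 concentration in the reaction mixture follows from simple dilution arithmetic (5 mL of extract added to 15 mL of 1 mM AgNO3):

```python
# Sketch of the dilution arithmetic for the synthesis mixture described
# above. Only the volumes and stock concentration stated in the text
# are used; the function name is ours.

def final_concentration_mM(c_stock_mM, v_stock_mL, v_added_mL):
    """C_final = C_stock * V_stock / (V_stock + V_added)."""
    return c_stock_mM * v_stock_mL / (v_stock_mL + v_added_mL)

print(final_concentration_mM(1.0, 15.0, 5.0))  # -> 0.75 (mM AgNO3 in the 20 mL mixture)
```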
The functional characterization of the biomolecules present in the GSNPs from the leaf extract of G. sylvestre was done by FTIR spectrometry (RX1; Perkin-Elmer, Waltham, MA, USA).44

The crystal lattice and the size of the synthesized NPs were determined by XRD measurements using an XRD-6000 X-ray diffractometer (Shimadzu, Kyoto, Japan). The crystallite domain size was calculated from the width of the XRD peaks, as described in our previous papers.17,45–47

In vitro anticancer activity

Cell line and culture medium
Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India.
The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17\nVero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17\n Cell viability by MTT assay MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48\nMTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. 
The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48\n Morphological changes The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49\nThe cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49\n Cell line and culture medium Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17\nVero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17\n Cell viability by MTT assay MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). 
Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48\nMTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48\n Morphological changes The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49\nThe cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49", "Fresh leaves of G. sylvestre from plants of same age group of a single population were collected from the experimental Herbal Garden, Tamil University, Thanjavur, Tamil Nadu, India, in July, 2010. 
The herbarium was prepared for authentication (Ref No: SRM\\CENR\\PTC\\2010\\03), and taxonomic identification was done by Dr Jayaraman, Professor, Department of Botany, Madras Christian College, Tambaram, Chennai, Tamil Nadu. The herb sample is maintained in the research laboratory for further reference.", "The leaves of G. sylvestre were washed with distilled water to remove the dirt and further washed with mild soap solution and rinsed thrice with distilled water. The leaves were blot-dried with tissue paper and shade dried at room temperature for 2 weeks. After complete drying, the leaves were cut into small pieces and powdered in a mixer and sieved using a 20-μm mesh sieve to get a uniform size range for further studies. Twenty grams of the sieved leaf powder was added to 100 mL of sterile distilled water in a 500-mL Erlenmeyer flask and boiled for 5 minutes. The flask was kept under continuous dark conditions at 30°C in a shaker. The extract was filtered and stored in an airtight container and protected from sunlight for further use.37", "The qualitative phytochemical analysis of G. sylvestre extracts were performed following the methods of Parekh and Chanda38 to determine the presence of alkaloids (Mayer, Wagner, Dragendorff), flavonoids (alkaline reagent, Shinoda), phenolics (lead acetate, alkaline reagent test), triterpenes (Liberman-Burchard test), saponins (foam test), and tannins (gelatin).39 The results were qualitatively expressed as positive (+) or negative (−).40 The chemicals used for the study were purchased from Sigma-Aldrich (Chennai, India).\nPhytochemical quantitative analyses are described briefly in our previous paper.13 The total phenolic content was measured using the Folin–Ciocalteu colorimetric method. The flavonoids were estimated using aluminum chloride colorimetric method. 
Gallic acid was used as the standard for the analysis of total antioxidant capacity, and the DPPH radical scavenging activity was assayed following the method described by Blois.41

Silver nitrate (AgNO3) was purchased from Sigma-Aldrich (St Louis, MO, USA), and all solutions were freshly made for the synthesis of SNPs. The aqueous leaf extract of G. sylvestre was used for the bioreduction synthesis of the NPs. The SNPs were synthesized by adding 5 mL of plant extract to 15 mL of 1 mM aqueous AgNO3 solution in a 250-mL Erlenmeyer flask and incubating in a rotary shaker at 150 rpm in the dark. The synthesis of NPs was monitored spectrophotometrically at 30-minute intervals until no further reduction was observed. The reduction was observed as a color change in the colloidal solution, which confirmed the formation of SNPs.42,43

The GSNPs were characterized periodically by measuring the bioreduction of AgNO3 using a UV–vis 3000+ double-beam spectrophotometer (Lab India, Maharashtra, India). The spectrometric range was 200–800 nm, and the scanning interval was 0.5 nm.

The surface morphology of the biofunctionalized SNPs was characterized by high-resolution SEM analysis (JSM-5600LV; JEOL, Tokyo, Japan), and the elemental compositions were determined by EDAX analysis (S-3400N; Hitachi, Tokyo, Japan). The functional characterization of biomolecules present in the GSNPs from the leaf extract of G. sylvestre was done by FTIR spectrometry (RX1; Perkin-Elmer, Waltham, MA, USA).44

The crystal lattice and the size of the synthesized NPs were determined by XRD measurements using an XRD-6000 X-ray diffractometer (Shimadzu, Kyoto, Japan). The crystallite domain size was calculated from the width of the XRD peaks, as described in our previous papers.17,45–47

Cell line and culture medium

Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India.
The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17

Cell viability by MTT assay

The MTT assay was performed to determine the cytotoxic properties of the biofunctionalized SNPs against HT29 cell lines by adding 1×10⁵ cells/well in 12-well plates and incubating with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates, and the appropriate volumes of GSNP stock solutions were added to the cultures to obtain the respective concentrations of the NPs; the plates were incubated for 48 hours at 37°C. Untreated cells were used as the control. The incubated cultured cells were subjected to the MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of G. sylvestre was also assayed similarly for anticancer activity for comparison.17,48

Morphological changes

The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49

Phytochemical screening of G. sylvestre leaf extract

The preliminary phytochemical screening of aqueous extracts of G. sylvestre revealed the presence of alkaloids, phenols, flavonoids, sterols, tannins, and triterpenes (Table 1). As shown in Table 2, 125.62±26.84 μg/g of total flavonoids, 285.23±1.11 μg/g of total phenols, and 111.53±15.13 μg/g of tannins were present in the aqueous extract of G. sylvestre. The flavonoids and phenolic compounds exhibit a wide range of biological activities, such as antioxidant and lipid peroxidation inhibition.13 The estimated total antioxidant activity was 9.13±0.04 μg/g, and the DPPH radical scavenging activity was 52.14%±0.32% (Table 2).

Characterization of biofunctionalized SNPs

The color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and reduced AgNO3.
The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference.

UV–vis spectrometry is a reliable and reproducible technique for characterizing metal NPs, although it does not provide direct information on particle size. The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding medium.

Figure 2 shows the time-dependent intensity of the absorption band, which reached its maximum at 12 hours, after which no further change in the spectrum was observed, indicating that the precursors had been consumed. Initially, the UV–vis spectrum showed no absorption in the region 350–600 nm, but after the addition of extract a distinct band was observed at 432 nm.

When silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde groups present in the flavonoids (125.6 μg/g), which were further oxidized to carboxyl groups. In addition, the carboxyl groups of the phenols from the G. sylvestre extract (285.23 μg/g) acted as a surfactant, attaching the major phytochemicals from the plant extract to the surface of the SNPs. Our previous studies on the synthesis of SNPs using aqueous extracts of Memecylon edule,37 Memecylon umbellatum,47 Chrysopogon zizanioides,46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of the NPs changed from colorless/straw color to ruby red. Our results are also comparable with other available reports on plant-extract-mediated synthesis of SNPs.

From Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images clearly indicate a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. The total phenolic content, flavonoids, and tannins were mostly responsible for the bioreduction of the SNPs. In this green synthesis, the phytochemicals from the plant extract acted as a surfactant to prevent aggregation of the synthesized SNPs. The SEM images from our earlier research revealed that this eco-friendly biological synthesis of NPs using the leaf extracts of M. edule,37 C. zizanioides,46 and M. umbellatum47 showed no aggregation, owing to the biomolecules from the plant extract. The mechanism behind this aggregation-free particle formation may be the spontaneous nucleation and isotropic growth of NPs in the presence of the plant extract. As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures, forming the nanospherical particles typically observed in this synthesis.51

The elemental composition of the green-synthesized Ag NPs was analyzed through EDAX. These measurements confirmed the presence of the elemental silver signal of the SNPs. The vertical axis displays the number of X-ray counts, and the horizontal axis displays the energy in keV.

The EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows strong signals from silver atoms along with weaker signals from carbon and oxygen originating from the biomolecules of the plant extract. The elemental silver peak at 2–4 keV, which is the major emission peak for metallic silver, appeared with minor peaks of C and O due to the capping of the Ag NPs by the biomolecules of G. sylvestre leaf extract; the absence of other peaks evidenced the purity of the Ag NPs.

FTIR analysis in Figure 5 shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites such as saponins, terpenoids, and the gymnemagenin derivative of gymnemic acid, containing the functional groups of amines, aldehydes, carboxylic acids, and alcohols.

The presence of amide linkages in Figure 5 suggests that the different functional groups of the proteins present in the plant extract might be capping the NPs and playing an important role in stabilizing the green NPs formed. The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibrations of aromatic and aliphatic amines, respectively, in agreement with earlier reports by Suman et al.52 The positions of these bands were comparable to those reported for the phytochemicals of the G. sylvestre extract, namely total phenols, flavonoids, and tannins (Table 2). Thus, we can confirm that the nanocapping of phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs. The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of the phyto-capped Ag NPs, confirming the role of the phytoconstituents (mostly gymnemic acid) in protecting the Ag NPs from aggregation.

Also, during our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particle diameters of the SNPs formed were known to a high degree of accuracy. A detailed study on large-scale synthesis and the elemental composition of the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch.
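The crystallite domain size referred to in the Methods is conventionally estimated from XRD peak broadening using the Scherrer equation, D = Kλ/(β cos θ). A minimal Python sketch of that calculation, assuming Cu Kα radiation and a hypothetical peak width (the FWHM value below is illustrative, not taken from this study):

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Estimate crystallite size (nm) from XRD peak broadening.

    two_theta_deg : peak position 2-theta, in degrees
    fwhm_deg      : full width at half maximum of the peak, in degrees
    wavelength_nm : X-ray wavelength (default Cu K-alpha, 0.15406 nm)
    k             : shape factor (~0.9 for near-spherical crystallites)
    """
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle in radians
    beta = math.radians(fwhm_deg)              # FWHM in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical FWHM of 0.5 deg for the [111] reflection at 2-theta = 38.2 deg
size_nm = scherrer_size(38.2, 0.5)
```

Instrumental broadening would normally be subtracted from the measured FWHM before applying the equation; this sketch omits that correction.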
In future, the elemental analysis can be carried out as described earlier by other researchers.53,54\nXRD analysis of NPs represented in Figure 6 shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to exhibit monocrystallinity. The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulate, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46\nTable 3 shows the characteristic features of the GSNPs using various plant parts of different plant species reported by various researchers along with our previous reports.\nThe color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and reduced AgNO3. The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference.\nUV–vis spectrometry is a reliable and reproducible technique that can be used to accurately characterize the metal NPs though it does not provide direct information regarding the particle sizes. 
The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding media.\nFigure 2 shows the time-dependent intensity of the absorption band, which reached its maximum peak at 12 hours, after which no further change in the spectrum was observed indicating that the precursors had been consumed. Initially, the UV–vis spectrum did not show evidence of any absorption in the region 350–600 nm, but after the addition of extract a distinct band was observed at 432 nm.\nWhen silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde group present in the flavonoids (125.6 μg/g), which was further oxidized to the carboxyl group. Also, the carboxyl groups of the phenols from the G. sylvestre extract (285.23 μg/g) acted as a surfactant to attach the major phytochemicals from the plant extract to the surface of the SNPs. Our previous study on the synthesis of SNPs using aqueous extracts of Memecylon edule,37\nMemecylon umbellatum,47\nChrysopogon zizanioides46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of GSNPs changed to ruby red color from colorless/straw color. Our results are also comparable with the other available reports for plant-extract-mediated synthesis of SNPs.\nFrom Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images in the figure clearly indicated a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. Mostly the total phenolic content, flavonoids, and tannins were responsible for the bioreduction of the SNPs. In this green synthesis, the phytochemicals from the plant extract acted as a surfactant to prevent the aggregation of the synthesized SNPs. 
The SEM images of our earlier research had revealed that this biologically eco-friendly synthesis of NPs utilizing the leaf extracts of M. edule,37\nC. zizanioides,46 and M. umbellatum47 showed no aggregation due to the biomolecules from the plant extract. And the mechanism behind this particle formation with no aggregation may be the spontaneous nucleation and isotropic growth of NPs along with the plant extract. As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures forming nanospherical particles which can be typically observed from this synthesis.51\nThe elemental composition of green-synthesized AgNPs was analyzed through EDAX. These measurements confirmed the presence of the elementary silver signal of the SNPs. The vertical axis displays the number of X-ray counts and the horizontal axis displays the energy in keV.\nThe EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows the strong signals from silver atoms along with the weaker signals from carbon and oxygen present from biomolecules of the plant extract. The elemental silver peak at 2–4 keV, which is the major emission peak specified for metallic silver, with minor peaks of C and O were also seen due to the capping of Ag NPs by the biomolecules of G. sylvestre leaf extract, and the absence of other peaks evidenced the purity of the Ag NPs.\nFTIR analysis in Figure 5 shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites such as saponins, terpenoids, and gymnemagenin derivative of gymnemic acid containing the functional groups of amines, aldehydes, carboxylic acids, and alcohols.\nThe presence of the amide linkages seen in Figure 5 suggests that the different functional groups of the proteins present in the plant extracts might be capping the NPs and playing an important role in the stabilization of the green NPs formed. 
The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibration of aromatic and aliphatic amines, respectively, which agrees with earlier reports of Suman et al.52\nThe positions of these bands were comparable to those reported for phytochemicals reported in the G. sylvestre extract as total phenols, flavonoids, and tannins (Table 2). Thus, we can confirm that the nanocapping of the phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs. The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of phyto-capped Ag NPs, confirming the role of the phyto constituents (mostly gymnemic acid) in protecting the Ag NPs from aggregation.\nAlso, during our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particles diameters of the SNPs formed were known to a high degree of accuracy. A detailed study on the large-scale synthesis and elemental composition on the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch. In future, the elemental analysis can be carried out as described earlier by other researchers.53,54\nXRD analysis of NPs represented in Figure 6 shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to exhibit monocrystallinity. The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. 
Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulate, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46\nTable 3 shows the characteristic features of the GSNPs using various plant parts of different plant species reported by various researchers along with our previous reports.\n In vitro anticancer activity From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay. The GSNPs were taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with the cellular materials and cause DNA damage and cell death.\nThe GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that G. sylvestre plant extracts alone at 85 μg/mL concentration showed 30.77% inhibition of HT29 cell lines growth. From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and caused cell death. The control cells were clustered, healthy, and viable cells (Figure 8A), whereas the HT29 cells’ proliferation was significantly inhibited by GS (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), also the clearly visible cell debris in Figure 8D is due to cell death by 85 μg/mL SNP treatment.\nThese results indicate that the sensitivity of HT29 human colon cancer cell line for cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents. Sahu et al76 have reported the presence of four new tritepenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. 
sylvestre, while Chan has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxcity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, and both BAD-mediated intrinsic apoptotic signaling pathway and caspase-8-mediated extrinsic apoptotic signaling pathway were involved in the apoptosis.\nThe promising saponins were further studied as potential anticancer agents by many researchers. Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al80 the size effects and multifunctionality are the main characteristics of NPs, so our method of one-step synthesis of SNPs using the aqueous extract of G. sylvestre may serve as a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of cytotoxicity and the death or proliferation of cells caused by GSNPs from G. sylvestre leaf extract.\nFrom Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay. The GSNPs were taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with the cellular materials and cause DNA damage and cell death.\nThe GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that G. sylvestre plant extracts alone at 85 μg/mL concentration showed 30.77% inhibition of HT29 cell lines growth. 
From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and caused cell death. The control cells were clustered, healthy, and viable cells (Figure 8A), whereas the HT29 cells’ proliferation was significantly inhibited by GS (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), also the clearly visible cell debris in Figure 8D is due to cell death by 85 μg/mL SNP treatment.\nThese results indicate that the sensitivity of HT29 human colon cancer cell line for cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents. Sahu et al76 have reported the presence of four new tritepenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxcity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, and both BAD-mediated intrinsic apoptotic signaling pathway and caspase-8-mediated extrinsic apoptotic signaling pathway were involved in the apoptosis.\nThe promising saponins were further studied as potential anticancer agents by many researchers. Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al80 the size effects and multifunctionality are the main characteristics of NPs, so our method of one-step synthesis of SNPs using the aqueous extract of G. sylvestre may serve as a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of cytotoxicity and the death or proliferation of cells caused by GSNPs from G. 
sylvestre leaf extract.", "The preliminary phytochemical screening of aqueous extracts of G. sylvestre revealed the presence of alkaloids, phenols, flavonoids, sterols, tannins, and triterpenes (Table 1). As shown in Table 2, 125.62±26.84 μg/g of total flavonoids, 285.23±1.11 μg/g of total phenols, and 111.53±15.13 μg/g of tannin were present in the aqueous extract of G. sylvestre. The flavonoids and phenolic compounds exhibited a wide range of biological activities, such as antioxidant and lipid peroxidation inhibition.13\nThe estimated total antioxidant activity was 9.13±0.04 μg/g and the DPPH radical scavenging activity was 52.14%±0.32% (Table 2).", "The color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and reduced AgNO3. The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference.\nUV–vis spectrometry is a reliable and reproducible technique that can be used to accurately characterize the metal NPs though it does not provide direct information regarding the particle sizes. The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding media.\nFigure 2 shows the time-dependent intensity of the absorption band, which reached its maximum peak at 12 hours, after which no further change in the spectrum was observed indicating that the precursors had been consumed. 
Initially, the UV–vis spectrum did not show evidence of any absorption in the region 350–600 nm, but after the addition of extract a distinct band was observed at 432 nm.\nWhen silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde group present in the flavonoids (125.6 μg/g), which was further oxidized to the carboxyl group. Also, the carboxyl groups of the phenols from the G. sylvestre extract (285.23 μg/g) acted as a surfactant to attach the major phytochemicals from the plant extract to the surface of the SNPs. Our previous study on the synthesis of SNPs using aqueous extracts of Memecylon edule,37\nMemecylon umbellatum,47\nChrysopogon zizanioides46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of GSNPs changed to ruby red color from colorless/straw color. Our results are also comparable with the other available reports for plant-extract-mediated synthesis of SNPs.\nFrom Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images in the figure clearly indicated a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. Mostly the total phenolic content, flavonoids, and tannins were responsible for the bioreduction of the SNPs. In this green synthesis, the phytochemicals from the plant extract acted as a surfactant to prevent the aggregation of the synthesized SNPs. The SEM images of our earlier research had revealed that this biologically eco-friendly synthesis of NPs utilizing the leaf extracts of M. edule,37\nC. zizanioides,46 and M. umbellatum47 showed no aggregation due to the biomolecules from the plant extract. And the mechanism behind this particle formation with no aggregation may be the spontaneous nucleation and isotropic growth of NPs along with the plant extract. 
As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures forming nanospherical particles which can be typically observed from this synthesis.51\nThe elemental composition of green-synthesized AgNPs was analyzed through EDAX. These measurements confirmed the presence of the elementary silver signal of the SNPs. The vertical axis displays the number of X-ray counts and the horizontal axis displays the energy in keV.\nThe EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows the strong signals from silver atoms along with the weaker signals from carbon and oxygen present from biomolecules of the plant extract. The elemental silver peak at 2–4 keV, which is the major emission peak specified for metallic silver, with minor peaks of C and O were also seen due to the capping of Ag NPs by the biomolecules of G. sylvestre leaf extract, and the absence of other peaks evidenced the purity of the Ag NPs.\nFTIR analysis in Figure 5 shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites such as saponins, terpenoids, and gymnemagenin derivative of gymnemic acid containing the functional groups of amines, aldehydes, carboxylic acids, and alcohols.\nThe presence of the amide linkages seen in Figure 5 suggests that the different functional groups of the proteins present in the plant extracts might be capping the NPs and playing an important role in the stabilization of the green NPs formed. The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibration of aromatic and aliphatic amines, respectively, which agrees with earlier reports of Suman et al.52\nThe positions of these bands were comparable to those reported for phytochemicals reported in the G. sylvestre extract as total phenols, flavonoids, and tannins (Table 2). 
Thus, we can confirm that the nanocapping of the phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs. The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of the phyto-capped SNPs, confirming the role of the phytoconstituents (mostly gymnemic acid) in protecting the SNPs from aggregation. Moreover, during our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particle diameters of the SNPs formed were known to a high degree of accuracy. A detailed study on the large-scale synthesis and elemental composition of the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch. In the future, the elemental analysis can be carried out as described earlier by other researchers.53,54

XRD analysis of the NPs, represented in Figure 6, shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to be monocrystalline. The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulata, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46 Table 3 shows the characteristic features of GSNPs prepared from various plant parts of different plant species as reported by various researchers, along with our previous reports.

From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay.
The GSNPs were taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with the cellular materials and cause DNA damage and cell death. The GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that the G. sylvestre plant extract alone at 85 μg/mL concentration showed 30.77% inhibition of HT29 cell line growth. From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and thereby caused cell death. The control cells were clustered, healthy, and viable (Figure 8A), whereas HT29 cell proliferation was significantly inhibited by the G. sylvestre extract (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), and the clearly visible cell debris in Figure 8D is due to cell death caused by the 85 μg/mL SNP treatment.

These results indicate that the sensitivity of the HT29 human colon cancer cell line to cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents. Sahu et al76 have reported the presence of four new triterpenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxicity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, with both the BAD-mediated intrinsic apoptotic signaling pathway and the caspase-8-mediated extrinsic apoptotic signaling pathway involved in the apoptosis. The promising saponins were further studied as potential anticancer agents by many researchers.
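The inhibition percentages quoted above come directly from the MTT absorbance readout. A minimal sketch of that arithmetic is shown below; the absorbance values are hypothetical, chosen only to reproduce the reported 95.23% figure (the actual optical densities are not given in the paper):

```python
def percent_inhibition(a_treated, a_control, a_blank=0.0):
    """Percent growth inhibition from MTT absorbances (read at ~570 nm).

    Viability is the blank-corrected treated/control absorbance ratio;
    inhibition is its complement.
    """
    viability = (a_treated - a_blank) / (a_control - a_blank) * 100.0
    return 100.0 - viability

# Hypothetical absorbances chosen to reproduce the reported 95.23% at 85 ug/mL
print(round(percent_inhibition(a_treated=0.0477, a_control=1.0), 2))  # 95.23
```

The same formula, applied to extract-only wells, would yield the 30.77% figure reported for the plant extract alone.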
Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al80 size effects and multifunctionality are the main characteristics of NPs, so our method of one-step synthesis of SNPs using the aqueous extract of G. sylvestre may serve as a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of the cytotoxicity and the death or proliferation of cells caused by GSNPs from G. sylvestre leaf extract.

Conclusion: The green synthesis of biofunctionalized SNPs from the leaves of G. sylvestre was economical, nontoxic, and environmentally benign. Due to the reducing and capping nature of the bioactive phytocompounds present in the aqueous extract of G. sylvestre, a stable cap was formed around the silver ions of the biofunctionalized SNPs. The presence of the functional groups of the bioactive compounds was confirmed by FTIR spectra. The particle size and the spherical shape of the SNPs were determined by XRD and SEM analyses. Since the plant extract and the biofunctionalized SNPs showed anticancer activity against cancer cells, G. sylvestre may serve as a source for potential anticancer drugs. The present study demonstrated the anticancer activities of both the bioactive compounds of the leaf extract and the biofunctionalized SNPs synthesized against HT29 human adenocarcinoma cells in vitro. Our studies provide an important basis for the application of NPs for in vitro anticancer activity against human colon adenocarcinoma cells. Our earlier reports have also shown the potential antiulcer properties of G. sylvestre in mice,13 so G. sylvestre is a good plant candidate for further studies in alternative medicine due to its multifunctional medicinal properties.
Keywords: Gymnema sylvestre, gymnemic acid, biofunctionalized silver nanoparticles, anticancer activity, HT29 cell line
Introduction: Bioactive components from medicinal plants, which act similarly to synthetic chemical compounds, are used for the treatment of various diseases.1 In recent years, the use of ethno-botanical information in medicinal plant research has gained considerable attention in some segments of the scientific community.2 An ethno-botanical survey of medicinal plants commonly used by the Kani tribals in the Tirunelveli hills of the Western Ghats in Tamil Nadu, India, revealed that Gymnema sylvestre is the most important species based on its use.2 The use of plant parts and isolated phytochemicals for the prevention and treatment of various health ailments has been in practice for many decades.3 G. sylvestre R. Br, commonly known as “Meshasringi”, is distributed over most of India and has a reputation in traditional medicine as a stomachic, diuretic, and a remedy to control diabetes mellitus. G. sylvestre R. Br4 is a woody, climbing plant that grows in the tropical forests of Central and Southern India and in parts of Asia.5 It is a pubescent shrub with young stems and branches, and has a distichous phyllotactic opposite arrangement pattern of leaves, which are 2.5–6 cm long and usually ovate or elliptical. The flowers are small, yellow, and in umbellate cymes, and the follicles are terete, lanceolate, and up to 3 inches in length.6 In homeopathy, as well as in folk and ayurvedic medicine, G. sylvestre has been used for diabetes treatment.7 G.
sylvestre has bioactive components that can cure asthma, eye ailments, snakebite, piles, chronic cough, breathing troubles, colic pain, cardiopathy, constipation, dyspepsia, hemorrhoids, and hepatosplenomegaly, as well as assist in family planning.8 In addition, it also possesses antimicrobial,9 antitumor,5 anti-obesity,10 anti-inflammatory,11 anti-hyperglycemic,12 antiulcer, anti-stress, and antiallergic activity.13 The presence of flavonoids, saponins, anthraquinones, quercitol, and other alkaloids has been reported in the flowers, leaves, and fruits of G. sylvestre.14 The presence of other therapeutic agents, such as gymnemagenin, gymnemic acids, gymnemanol, and β-amyrin-related glycosides, which play a key role in therapeutic applications, has also been reported. The focus of the present work is to assess the potential therapeutic medicinal value of this herb and to understand/enhance the mechanistic action of its bioactive components.14 G. sylvestre contains triterpenes, saponins, and gymnemic acids belonging to the oleanane and dammarane classes.15,16 The plant extract has also tested positive for alkaloids, acidic glycosides, and anthraquinone derivatives. Oleanane saponins are gymnemic acids and gymnema saponins, while dammarane saponins are gymnemasides. As reported by Thakur et al14 the aqueous extracts of the G. sylvestre leaves showed the presence of gymnemic acids I–VI, while the saponin fraction of the leaves tested positive for the presence of gymnemic acids XV–XVIII. The gymnemagenin derivative of gymnemic acid was elucidated from fractions VIII–XII, which are responsible for the antidiabetic activity, and fraction VIII stimulates the pancreas for insulin secretion. The novel D-glucoside structure with the anti-sweet principle is present in the I–V saponin fraction.
The presence of pentatriacontane, α- and β-chlorophylls, phytin, resins, D-quercitol, tartaric acid, formic acid, butyric acid, lupeol, and stigmasterol has been reported among the other plant constituents of G. sylvestre,14 and the extract has also tested positive for alkaloids.13,17 Sharma et al have reported the antioxidant activity of oleanane saponins from G. sylvestre plant extract and determined the IC50 values for 2,2-diphenylpicrylhydrazyl (DPPH) scavenging, superoxide radical scavenging, inhibition of in vitro lipid peroxidation, and protein carbonyl formation as 238 μg/mL, 140 μg/mL, 99 μg/mL, and 28 μg/mL, respectively, which may be due to the presence of flavonoids, phenols, tannins, and triterpenoids.18 The enhanced radiation (8 Gy)-induced augmentation of lipid peroxidation and depletion of glutathione and protein in mouse brain were reported by Sharma et al18 using multiherbal ayurvedic formulations containing extracts of G. sylvestre, such as “Hyponidd” and “Dihar”. They also demonstrated the antioxidant activity by increasing the levels of superoxide dismutase, glutathione, and catalase in rats through in vivo studies.19 Kang et al20 proved the role of antioxidants from G. sylvestre in diabetic rats using ethanolic extracts and several antioxidant assays, eg, the thiobarbituric acid assay with slight modifications, the egg yolk lecithin or 2-deoxyribose (associated with lipid peroxidation) assay, the superoxide dismutase-like activity assay, and the 2,2′-azinobis (3-ethylbenzothiazoline-6-sulfonic acid) assay. The potent anticancer activity of G. sylvestre against human lung adenocarcinoma (A549) and human breast carcinoma (MCF7) cell lines using alcoholic extracts of the herb has been reported by Srikant et al.21 Also, Amaki et al22 reported the inhibition of the breast cancer resistance protein using the alcoholic extract of G. sylvestre.
Many plant-derived saponins, eg, ginsenosides, soyasaponins, and saikosaponins, have been found to exhibit significant anticancer activity. The anticancer activity of gymnemagenol on HeLa cancer cell lines in in vitro conditions was determined by the MTT cell proliferation assay for cytotoxic activity of saponins. Using 5 μg/mL, 15 μg/mL, 25 μg/mL, and 50 μg/mL concentrations of gymnemagenol, the IC50 value was found to be 37 μg/mL after 96 hours. The isolated bioactive constituent, gymnemagenol, showed a high degree of inhibition to HeLa cancer cell line proliferation, and saponins were not found to be toxic to the growth of normal cells under in vitro conditions.23 Already many researchers have reported that the leaves of G. sylvestre lower blood sugar, stimulate the heart, uterus, and circulatory systems, and exhibit anti-sweet and hepatoprotective activities.20,24–31 Administration of G. sylvestre extract to diabetic rats increased superoxide dismutase activity and decreased lipid peroxide by either directly scavenging the reactive oxygen species, due to the presence of various antioxidant compounds, or by increasing the synthesis of antioxidant molecules (albumin and uric acid).24,30,32 Therefore, in this study, an attempt was made to synthesize the silver nanoparticles (SNPs) from aqueous extracts of the G. sylvestre leaves. These green-synthesized SNPs (GSNPs) of G. sylvestre were examined by ultraviolet–visible (UV–vis) spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis for studying their size and shape. The synthesized and well-characterized nanoparticles (NPs) were tested for their cytotoxicity effect. 
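Dose–response figures such as the IC50 of 37 μg/mL quoted above for gymnemagenol are typically read off the viability curve by interpolation between the two doses that bracket 50% viability. A minimal log-linear interpolation sketch is given below; the viability numbers are hypothetical, shaped only loosely like the 5–50 μg/mL assay described in the text:

```python
import math

def ic50_loglinear(doses, viabilities):
    """Interpolate the dose giving 50% viability on a log-dose scale."""
    points = list(zip(doses, viabilities))
    for (d0, v0), (d1, v1) in zip(points, points[1:]):
        if v0 >= 50.0 >= v1:  # this dose interval brackets 50% viability
            frac = (v0 - 50.0) / (v0 - v1)
            log_d = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_d
    raise ValueError("50% viability is not bracketed by the data")

# Hypothetical percent-viability data over the 5-50 ug/mL dose range
doses = [5, 15, 25, 50]        # ug/mL
viability = [88, 71, 62, 38]   # percent, illustrative only
print(round(ic50_loglinear(doses, viability), 1))
```

Curve-fitting packages fit a full four-parameter logistic instead, but the bracketing-and-interpolating step above is the core of the calculation.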
Our findings clearly demonstrate that it is indeed possible to have a much greener way to synthesize SNPs without compromising their antibacterial properties, and thus plant extracts may prove to be a good alternative to obtain such NPs with improved antibacterial and antiviral properties for diabetic wound healing applications. Goix et al33 and Boholm and Arvidsson34 have pointed out that silver is either beneficial or harmful in relation to four main values: the environment, health, sewage treatment, and product effectiveness. As reported by Barua et al35 poly(ethylene glycol)-stabilized colloidal SNPs showed nonhazardous anticancer and antibacterial properties. Jin et al36 have reported the therapeutic applications of plant-extract-based scaffolds for wound healing and skin reconstitution studies.

Materials and methods: Collection of plants
Fresh leaves of G. sylvestre from plants of the same age group of a single population were collected from the experimental Herbal Garden, Tamil University, Thanjavur, Tamil Nadu, India, in July, 2010. The herbarium was prepared for authentication (Ref No: SRM\CENR\PTC\2010\03), and taxonomic identification was done by Dr Jayaraman, Professor, Department of Botany, Madras Christian College, Tambaram, Chennai, Tamil Nadu. The herb sample is maintained in the research laboratory for further reference.

Preparation of aqueous extract
The leaves of G.
sylvestre were washed with distilled water to remove the dirt and further washed with mild soap solution and rinsed thrice with distilled water. The leaves were blot-dried with tissue paper and shade dried at room temperature for 2 weeks. After complete drying, the leaves were cut into small pieces and powdered in a mixer and sieved using a 20-μm mesh sieve to get a uniform size range for further studies. Twenty grams of the sieved leaf powder was added to 100 mL of sterile distilled water in a 500-mL Erlenmeyer flask and boiled for 5 minutes. The flask was kept under continuous dark conditions at 30°C in a shaker. The extract was filtered and stored in an airtight container and protected from sunlight for further use.37

Qualitative and quantitative phytochemical analysis
The qualitative phytochemical analysis of G.
sylvestre extracts was performed following the methods of Parekh and Chanda38 to determine the presence of alkaloids (Mayer, Wagner, Dragendorff), flavonoids (alkaline reagent, Shinoda), phenolics (lead acetate, alkaline reagent test), triterpenes (Liebermann–Burchard test), saponins (foam test), and tannins (gelatin).39 The results were qualitatively expressed as positive (+) or negative (−).40 The chemicals used for the study were purchased from Sigma-Aldrich (Chennai, India). Phytochemical quantitative analyses are described briefly in our previous paper.13 The total phenolic content was measured using the Folin–Ciocalteu colorimetric method. The flavonoids were estimated using the aluminum chloride colorimetric method. Gallic acid was used as the standard for the analysis of total antioxidant capacity, and the DPPH radical scavenging activity was assayed following the methods described by Blois.41
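The DPPH scavenging figure reported in the Results reduces to a simple ratio of absorbances, conventionally read at 517 nm. A minimal sketch follows; the absorbance values are hypothetical, chosen only to land near the 52.14% scavenging reported for the extract:

```python
def dpph_scavenging(a_control, a_sample):
    """Percent DPPH radical scavenging: (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical 517 nm absorbances: A_control is DPPH alone, A_sample is DPPH + extract
print(round(dpph_scavenging(1.000, 0.4786), 2))  # 52.14
```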
Synthesis of SNPs
Silver nitrate (AgNO3) was purchased from Sigma-Aldrich (St Louis, MO, USA), and all solutions were freshly made for the synthesis of SNPs. The aqueous leaf extract of G. sylvestre was used for the bioreduction synthesis of the NPs. The SNPs were synthesized by adding 5 mL of plant extract to 15 mL of 1 mM aqueous AgNO3 solution in a 250-mL Erlenmeyer flask and incubated in a rotary shaker at 150 rpm in dark. The synthesis of NPs was confirmed spectrophotometrically at every 30-minute interval till no reduction was observed. The reduction was observed by the color change in the colloidal solution, which confirmed the formation of SNPs.42,43

Characterization of SNPs
The GSNPs were characterized periodically by measuring the bioreduction of AgNO3 using a UV–vis 3000+ double-beam spectrophotometer (Lab India, Maharashtra, India). The spectrometric range was 200–800 nm, and the scanning interval was 0.5 nm. The surface morphology of the biofunctionalized SNPs was characterized by high-resolution SEM analysis (JSM-5600LV; JEOL, Tokyo, Japan) and the elemental compositions were determined by EDAX analysis (S-3400N; Hitachi, Tokyo, Japan).
The functional characterization of biomolecules present in the GSNPs from the leaf extract of G. sylvestre was done by FTIR spectrometry (RX1; Perkin-Elmer, Waltham, MA, USA).44 The crystal lattice and the size of the synthesized NPs were determined by XRD measurements using an XRD-6000 X-ray diffractometer (Shimadzu, Kyoto, Japan). The crystallite domain size was calculated from the width of the XRD peaks, as described in our previous papers.17,45–47

In vitro anticancer activity
Cell line and culture medium
Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India.
The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17

Cell viability by MTT assay
MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells.
The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre was also similarly assayed for the anticancer activity for comparison.17,48

Morphological changes
The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49
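The XRD analysis described under "Characterization of SNPs" involves two standard calculations: indexing the measured peaks against a cubic lattice via Bragg's law, and estimating crystallite size from peak width via the Scherrer equation. A numerical sketch follows, using the four 2θ positions reported in the Results; note that the X-ray wavelength (Cu Kα, 1.5406 Å) and the peak FWHM (0.45°) are assumptions, not values stated in the paper:

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha in angstroms -- assumed; the paper does not state the anode

def lattice_constant(two_theta_deg, hkl, lam=WAVELENGTH):
    """Bragg's law d = lam / (2 sin theta); cubic cell edge a = d * sqrt(h^2 + k^2 + l^2)."""
    theta = math.radians(two_theta_deg / 2)
    d = lam / (2 * math.sin(theta))
    return d * math.sqrt(sum(i * i for i in hkl))

def scherrer_size(two_theta_deg, fwhm_deg, k=0.9, lam=WAVELENGTH):
    """Scherrer equation D = K * lam / (beta * cos theta), beta (FWHM) in radians; returns angstroms."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return k * lam / (beta * math.cos(theta))

# The four peaks reported in the Results, with their fcc Miller indices
peaks = {38.2: (1, 1, 1), 44.5: (2, 0, 0), 64.7: (2, 2, 0), 77.7: (3, 1, 1)}
# Each peak should give nearly the same cell edge, close to fcc silver's 4.086 A
print([round(lattice_constant(tt, hkl), 3) for tt, hkl in peaks.items()])

# Crystallite size from a hypothetical 0.45-degree FWHM on the (111) peak, in nm
print(round(scherrer_size(38.2, 0.45) / 10, 1), "nm")
```

The near-constant cell edge across all four peaks is what justifies the fcc indexing claimed in the Results.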
Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48 MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. sylvestre were also similarly assayed for the anticancer activity for comparison.17,48 Morphological changes The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49 The cytotoxicity effects were observed using an inverted microscope, and the morphological changes were photographed.49 Collection of plants: Fresh leaves of G. sylvestre from plants of same age group of a single population were collected from the experimental Herbal Garden, Tamil University, Thanjavur, Tamil Nadu, India, in July, 2010. 
The herbarium was prepared for authentication (Ref No: SRM\CENR\PTC\2010\03), and taxonomic identification was done by Dr Jayaraman, Professor, Department of Botany, Madras Christian College, Tambaram, Chennai, Tamil Nadu. The herb sample is maintained in the research laboratory for further reference. Preparation of aqueous extract: The leaves of G. sylvestre were washed with distilled water to remove the dirt and further washed with mild soap solution and rinsed thrice with distilled water. The leaves were blot-dried with tissue paper and shade dried at room temperature for 2 weeks. After complete drying, the leaves were cut into small pieces and powdered in a mixer and sieved using a 20-μm mesh sieve to get a uniform size range for further studies. Twenty grams of the sieved leaf powder was added to 100 mL of sterile distilled water in a 500-mL Erlenmeyer flask and boiled for 5 minutes. The flask was kept under continuous dark conditions at 30°C in a shaker. The extract was filtered and stored in an airtight container and protected from sunlight for further use.37 Qualitative and quantitative phytochemical analysis: The qualitative phytochemical analysis of G. sylvestre extracts were performed following the methods of Parekh and Chanda38 to determine the presence of alkaloids (Mayer, Wagner, Dragendorff), flavonoids (alkaline reagent, Shinoda), phenolics (lead acetate, alkaline reagent test), triterpenes (Liberman-Burchard test), saponins (foam test), and tannins (gelatin).39 The results were qualitatively expressed as positive (+) or negative (−).40 The chemicals used for the study were purchased from Sigma-Aldrich (Chennai, India). Phytochemical quantitative analyses are described briefly in our previous paper.13 The total phenolic content was measured using the Folin–Ciocalteu colorimetric method. The flavonoids were estimated using aluminum chloride colorimetric method. 
Gallic acid was used as standard for the analysis of total antioxidant capacity, and the DPPH radical scavenging activity was done following the methods described by Blios.41 Synthesis of SNPs: Silver nitrate (AgNO3) was purchased from Sigma-Aldrich (St Louis, MO, USA), and all solutions were freshly made for the synthesis of SNPs. The aqueous leaf extract of G. sylvestre was used for the bioreduction synthesis of the NPs. The SNPs were synthesized by adding 5 mL of plant extract to 15 mL of 1 mM aqueous AgNO3 solution in a 250-mL Erlenmeyer flask and incubated in a rotary shaker at 150 rpm in dark. The synthesis of NPs was confirmed spectrophotometrically at every 30-minute interval till no reduction was observed. The reduction was observed by the color change in the colloidal solution, which confirmed the formation of SNPs.42,43 Characterization of SNPs: The GSNPs were characterized periodically by measuring the bioreduction of AgNO3 using a UV–vis 3000+ double-beam spectrophotometer (Lab India, Maharashtra, India). The spectrometric range was 200–800 nm, and scanning interval was 0.5 nm. The surface morphology of the biofunctionalized SNPs was characterized by high-resolution SEM analysis (JSM-5600LV; JEOL, Tokyo, Japan) and the elemental compositions were determined by EDAX analysis (S-3400N; Hitachi, Tokyo, Japan). The functional characterization of biomolecules present in the GSNPs from the leaf extract of G. sylvestre was done by FTIR spectrometry (RX1; Perkin-Elmer, Waltham, MA, USA).44 The crystal lattice and the size of the synthesized NPs were determined by XRD measurements using an XRD-6000 X-ray diffractometer (Shimadzu, Kyoto, Japan). 
The crystal-lite domain size was calculated from the width of the XRD peaks, as described in our previous papers.17,45–47 In vitro anticancer activity: Cell line and culture medium Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17 Vero cell line (derived from the normal kidney of adult monkeys) and human adenocarcinoma colon HT29 cells were purchased from the National Center for Cell Sciences, Pune, India. The cells were cultured under standard conditions in Dulbecco’s Modified Eagle Medium (DMEM), supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 U/mL of penicillin, and 100 U/mL of streptomycin in a humidified incubator set at 37°C with 5% CO2.17 Cell viability by MTT assay MTT assay was performed to determine the cytotoxic properties of biofunctionalized SNPs against HT29 cell lines by adding 1×105 cells/well in 12-well plates and incubated with various concentrations of biofunctionalized particles (83 μg/mL, 84 μg/mL, and 85 μg/mL). Vero cells were used as a monolayer for culturing the HT29 cells. The cell lines were seeded in 96-well tissue culture plates and the appropriate concentrations of GSNP stock solutions were added to the cultures to obtain the respective concentration of the NPs and incubated for 48 hours at 37°C. The untreated cells were used as control. The incubated cultured cells were subjected to MTT colorimetric assay. All assays were performed in triplicate, and the aqueous leaf extract of the G. 
sylvestre was also assayed similarly for anticancer activity for comparison.17,48 Morphological changes: The cytotoxic effects were observed using an inverted microscope, and the morphological changes were photographed.49 Results: Phytochemical screening of G. sylvestre leaf extract: The preliminary phytochemical screening of aqueous extracts of G. sylvestre revealed the presence of alkaloids, phenols, flavonoids, sterols, tannins, and triterpenes (Table 1). As shown in Table 2, 125.62±26.84 μg/g of total flavonoids, 285.23±1.11 μg/g of total phenols, and 111.53±15.13 μg/g of tannins were present in the aqueous extract of G. sylvestre. The flavonoids and phenolic compounds exhibit a wide range of biological activities, such as antioxidant activity and inhibition of lipid peroxidation.13 The estimated total antioxidant activity was 9.13±0.04 μg/g, and the DPPH radical scavenging activity was 52.14%±0.32% (Table 2). Characterization of biofunctionalized SNPs: The color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and the reduction of AgNO3. The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference. UV–vis spectrometry is a reliable and reproducible technique for characterizing metal NPs, though it does not provide direct information on particle sizes. The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding medium. Figure 2 shows the time-dependent intensity of the absorption band, which reached its maximum at 12 hours, after which no further change in the spectrum was observed, indicating that the precursors had been consumed. Initially, the UV–vis spectrum showed no absorption in the region 350–600 nm, but after the addition of extract a distinct band was observed at 432 nm. When silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde groups present in the flavonoids (125.6 μg/g), which were further oxidized to carboxyl groups. Also, the carboxyl groups of the phenols from the G.
sylvestre extract (285.23 μg/g) acted as a surfactant, attaching the major phytochemicals from the plant extract to the surface of the SNPs. Our previous studies on the synthesis of SNPs using aqueous extracts of Memecylon edule,37 Memecylon umbellatum,47 Chrysopogon zizanioides,46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of GSNPs changed from colorless/straw color to ruby red. Our results are also comparable with other available reports of plant-extract-mediated synthesis of SNPs. From Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images in the figure clearly indicated a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. The total phenolic content, flavonoids, and tannins were mostly responsible for the bioreduction of the SNPs. In this green synthesis, the phytochemicals from the plant extract acted as a surfactant to prevent the aggregation of the synthesized SNPs. The SEM images from our earlier research revealed that this biologically eco-friendly synthesis of NPs utilizing the leaf extracts of M. edule,37 C. zizanioides,46 and M. umbellatum47 showed no aggregation, owing to the biomolecules from the plant extract. The mechanism behind this aggregation-free particle formation may be the spontaneous nucleation and isotropic growth of NPs in the presence of the plant extract. As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures, forming the nanospherical particles typically observed in this synthesis.51 The elemental composition of the green-synthesized AgNPs was analyzed through EDAX. These measurements confirmed the presence of the elemental silver signal of the SNPs. The vertical axis displays the number of X-ray counts and the horizontal axis displays the energy in keV.
The EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows strong signals from silver atoms along with weaker signals from carbon and oxygen arising from biomolecules of the plant extract. The elemental silver peak at 2–4 keV, which is the major emission peak specified for metallic silver, appeared with minor peaks of C and O due to the capping of the Ag NPs by the biomolecules of G. sylvestre leaf extract, and the absence of other peaks evidenced the purity of the Ag NPs. FTIR analysis in Figure 5 shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites such as saponins, terpenoids, and the gymnemagenin derivative of gymnemic acid, containing the functional groups of amines, aldehydes, carboxylic acids, and alcohols. The presence of the amide linkages seen in Figure 5 suggests that the different functional groups of the proteins present in the plant extracts might be capping the NPs and playing an important role in the stabilization of the green NPs formed. The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibrations of aromatic and aliphatic amines, respectively, which agrees with earlier reports of Suman et al.52 The positions of these bands were comparable to those reported for the phytochemicals quantified in the G. sylvestre extract (total phenols, flavonoids, and tannins; Table 2). Thus, we can confirm that the nanocapping by the phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs. The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of the phyto-capped Ag NPs, confirming the role of the phytoconstituents (mostly gymnemic acid) in protecting the Ag NPs from aggregation.
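The crystallite domain size obtained from XRD peak broadening (as referenced in the methods and abstract) follows the Scherrer equation, D = Kλ/(β cos θ). A minimal Python sketch; the Cu Kα wavelength (0.15406 nm) and the peak width are assumptions for illustration, as the text does not state them:

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak full width at half maximum converted to radians;
    the default wavelength assumes a Cu K-alpha source (not stated in the text).
    """
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle from 2-theta
    beta = math.radians(fwhm_deg)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative: the [111] reflection near 2-theta = 38.2 deg with a
# hypothetical 0.5 deg FWHM gives a crystallite size of roughly 17 nm.
size = scherrer_size_nm(38.2, 0.5)
```

Narrower diffraction peaks correspond to larger crystallite domains, which is why peak width is the quantity of interest here.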
Also, during our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particle diameters of the SNPs formed were known to a high degree of accuracy. A detailed study on the large-scale synthesis and elemental composition of the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch. In future, the elemental analysis can be carried out as described earlier by other researchers.53,54 The XRD pattern of the NPs in Figure 6 shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to exhibit monocrystallinity. The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulata, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46 Table 3 shows the characteristic features of GSNPs synthesized using various parts of different plant species, as reported by various researchers along with our previous reports. In vitro anticancer activity: From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay. The GSNPs were taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with cellular components and cause DNA damage and cell death. The GSNPs at 85 μg/mL showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that the G. sylvestre plant extract alone at 85 μg/mL showed 30.77% inhibition of HT29 cell growth. From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, causing cell death. The control cells were clustered, healthy, and viable (Figure 8A), whereas HT29 cell proliferation was significantly inhibited by the G. sylvestre extract (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), and the clearly visible cell debris in Figure 8D is due to cell death caused by the 85 μg/mL SNP treatment. These results indicate that the sensitivity of the HT29 human colon cancer cell line to cytotoxic drugs is higher than that of the Vero cell line to the same cytotoxic agents.
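The percent-inhibition figures from the MTT assay follow from standard viability arithmetic on the colorimetric readings: viability is the treated-to-control absorbance ratio, and inhibition is its complement. A minimal sketch with hypothetical absorbance readings (the values are illustrative, not data from this study):

```python
def percent_inhibition(abs_treated: float, abs_control: float) -> float:
    """Growth inhibition from MTT absorbances.

    viability% = A_treated / A_control * 100
    inhibition% = 100 - viability%
    """
    return 100.0 - (abs_treated / abs_control) * 100.0

# Illustrative: a treated well reading 0.06 against a control of 1.20
# corresponds to 95% inhibition.
print(percent_inhibition(0.06, 1.20))  # → 95.0
```

In practice each condition is read in triplicate (as in the assay described above) and the mean absorbance is used in this calculation.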
Sahu et al76 have reported the presence of four new tritepenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxcity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, and both BAD-mediated intrinsic apoptotic signaling pathway and caspase-8-mediated extrinsic apoptotic signaling pathway were involved in the apoptosis. The promising saponins were further studied as potential anticancer agents by many researchers. Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al80 the size effects and multifunctionality are the main characteristics of NPs, so our method of one-step synthesis of SNPs using the aqueous extract of G. sylvestre may serve as a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of cytotoxicity and the death or proliferation of cells caused by GSNPs from G. sylvestre leaf extract. From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay. The GSNPs were taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with the cellular materials and cause DNA damage and cell death. The GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that G. sylvestre plant extracts alone at 85 μg/mL concentration showed 30.77% inhibition of HT29 cell lines growth. 
From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and caused cell death. The control cells were clustered, healthy, and viable cells (Figure 8A), whereas the HT29 cells’ proliferation was significantly inhibited by GS (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), also the clearly visible cell debris in Figure 8D is due to cell death by 85 μg/mL SNP treatment. These results indicate that the sensitivity of HT29 human colon cancer cell line for cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents. Sahu et al76 have reported the presence of four new tritepenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxcity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, and both BAD-mediated intrinsic apoptotic signaling pathway and caspase-8-mediated extrinsic apoptotic signaling pathway were involved in the apoptosis. The promising saponins were further studied as potential anticancer agents by many researchers. Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al80 the size effects and multifunctionality are the main characteristics of NPs, so our method of one-step synthesis of SNPs using the aqueous extract of G. sylvestre may serve as a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of cytotoxicity and the death or proliferation of cells caused by GSNPs from G. 
sylvestre leaf extract. Phytochemical screening of G. sylvestre leaf extract: The preliminary phytochemical screening of aqueous extracts of G. sylvestre revealed the presence of alkaloids, phenols, flavonoids, sterols, tannins, and triterpenes (Table 1). As shown in Table 2, 125.62±26.84 μg/g of total flavonoids, 285.23±1.11 μg/g of total phenols, and 111.53±15.13 μg/g of tannin were present in the aqueous extract of G. sylvestre. The flavonoids and phenolic compounds exhibited a wide range of biological activities, such as antioxidant and lipid peroxidation inhibition.13 The estimated total antioxidant activity was 9.13±0.04 μg/g and the DPPH radical scavenging activity was 52.14%±0.32% (Table 2). Characterization of biofunctionalized SNPs: The color change observed in the aqueous silver nitrate solution showed that the SNPs were formed rapidly within 30 minutes of incubation of the plant extract with aqueous AgNO3 solution. The colorless solution changed to ruby red, confirming the formation of SNPs (Figure 1). The intensity of the red color increased with time because of the excited surface plasmon resonance effect and reduced AgNO3. The control aqueous AgNO3 solution (without leaf extract) showed no change of color with time and was taken as the blank reference. UV–vis spectrometry is a reliable and reproducible technique that can be used to accurately characterize the metal NPs though it does not provide direct information regarding the particle sizes. The surface plasmon bands (absorbance spectra) are influenced by the size and shape of the NPs produced, along with the dielectric constant of the surrounding media. Figure 2 shows the time-dependent intensity of the absorption band, which reached its maximum peak at 12 hours, after which no further change in the spectrum was observed indicating that the precursors had been consumed. 
Initially, the UV–vis spectrum did not show evidence of any absorption in the region 350–600 nm, but after the addition of extract a distinct band was observed at 432 nm. When silver nitrate was added to the aqueous plant extract of G. sylvestre, it was reduced to SNPs by the aldehyde group present in the flavonoids (125.6 μg/g), which was further oxidized to the carboxyl group. Also, the carboxyl groups of the phenols from the G. sylvestre extract (285.23 μg/g) acted as a surfactant to attach the major phytochemicals from the plant extract to the surface of the SNPs. Our previous study on the synthesis of SNPs using aqueous extracts of Memecylon edule,37 Memecylon umbellatum,47 Chrysopogon zizanioides46 and Indigofera aspalathoides50 showed that the color of the reaction mixture during the formation of GSNPs changed to ruby red color from colorless/straw color. Our results are also comparable with the other available reports for plant-extract-mediated synthesis of SNPs. From Figure 3, it is clear that the synthesized SNPs were approximately spherical and of different sizes. The SEM images in the figure clearly indicated a thin layer of phytochemicals from the plant extract covering the synthesized SNPs. Mostly the total phenolic content, flavonoids, and tannins were responsible for the bioreduction of the SNPs. In this green synthesis, the phytochemicals from the plant extract acted as a surfactant to prevent the aggregation of the synthesized SNPs. The SEM images of our earlier research had revealed that this biologically eco-friendly synthesis of NPs utilizing the leaf extracts of M. edule,37 C. zizanioides,46 and M. umbellatum47 showed no aggregation due to the biomolecules from the plant extract. And the mechanism behind this particle formation with no aggregation may be the spontaneous nucleation and isotropic growth of NPs along with the plant extract. 
As these chains grow in diameter with increasing silver deposition, spherical particles break off from these structures forming nanospherical particles which can be typically observed from this synthesis.51 The elemental composition of green-synthesized AgNPs was analyzed through EDAX. These measurements confirmed the presence of the elementary silver signal of the SNPs. The vertical axis displays the number of X-ray counts and the horizontal axis displays the energy in keV. The EDAX spectrum of the biofunctionalized SNPs in Figure 4 clearly shows the strong signals from silver atoms along with the weaker signals from carbon and oxygen present from biomolecules of the plant extract. The elemental silver peak at 2–4 keV, which is the major emission peak specified for metallic silver, with minor peaks of C and O were also seen due to the capping of Ag NPs by the biomolecules of G. sylvestre leaf extract, and the absence of other peaks evidenced the purity of the Ag NPs. FTIR analysis in Figure 5 shows that the SNPs produced by G. sylvestre extract were coated by phytocompounds and secondary metabolites such as saponins, terpenoids, and gymnemagenin derivative of gymnemic acid containing the functional groups of amines, aldehydes, carboxylic acids, and alcohols. The presence of the amide linkages seen in Figure 5 suggests that the different functional groups of the proteins present in the plant extracts might be capping the NPs and playing an important role in the stabilization of the green NPs formed. The band at 1,443 cm−1 was assigned to the methylene scissoring vibrations of proteins, and the bands located at 1,318 cm−1 and 1,089 cm−1 are due to the C–N stretching vibration of aromatic and aliphatic amines, respectively, which agrees with earlier reports of Suman et al.52 The positions of these bands were comparable to those reported for phytochemicals reported in the G. sylvestre extract as total phenols, flavonoids, and tannins (Table 2). 
Thus, we can confirm that the nanocapping of the phytochemicals from the G. sylvestre extract is responsible for the reduction and subsequent stabilization of the SNPs. The absorption bands that appear in the IR spectrum of the aqueous extract could also be seen in the IR spectra of phyto-capped Ag NPs, confirming the role of the phyto constituents (mostly gymnemic acid) in protecting the Ag NPs from aggregation. Also, during our repeated experiments there were no batch-to-batch variations in size, regardless of the isotopic composition, and the particles diameters of the SNPs formed were known to a high degree of accuracy. A detailed study on the large-scale synthesis and elemental composition on the synthesized NPs can be carried out using inductively coupled plasma mass spectrometry to obtain reproducible compositions in every batch. In future, the elemental analysis can be carried out as described earlier by other researchers.53,54 XRD analysis of NPs represented in Figure 6 shows several size-dependent features leading to irregular peak position, height, and width. XRD was mainly carried out to study the crystalline nature of the green-synthesized G. sylvestre SNPs. From the figure, the GSNPs are seen to exhibit monocrystallinity. The XRD peaks at 38.2°, 44.5°, 64.7°, and 77.7° can be indexed to the [111], [200], [220], and [311] planes, indicating that the SNPs are highly crystalline. Similar results were reported for Abelmoschus esculentus, Citrus limon, Citrus reticulate, and Citrus sinensis55,56 and in our previous studies using C. zizanioides.46 Table 3 shows the characteristic features of the GSNPs using various plant parts of different plant species reported by various researchers along with our previous reports. In vitro anticancer activity: From Figure 7, it can be observed that, as the concentration of the GSNPs increased, the percentage of viable cells decreased in the cytotoxicity studies by MTT assay. 
The GSNPs were taken up by mammalian cells through different mechanisms such as pinocytosis, endocytosis, and phagocytosis.75 Once the NPs enter the cells, they interact with the cellular materials and cause DNA damage and cell death. The GSNPs at 85 μg/mL concentration showed 95.23% inhibition of HT29 cell growth. The concentration of the NPs was chosen based on the TC ID50 value (results not shown). Another promising result was that G. sylvestre plant extracts alone at 85 μg/mL concentration showed 30.77% inhibition of HT29 cell lines growth. From our results, it can be concluded that the GSNPs could have induced intracellular reactive oxygen species generation, which can be evaluated using intracellular peroxide-dependent oxidation, and caused cell death. The control cells were clustered, healthy, and viable cells (Figure 8A), whereas the HT29 cells’ proliferation was significantly inhibited by GS (Figure 8B). The SNP-treated cells showed increased apoptotic morphological changes (Figure 8C), also the clearly visible cell debris in Figure 8D is due to cell death by 85 μg/mL SNP treatment. These results indicate that the sensitivity of HT29 human colon cancer cell line for cytotoxic drugs is higher than that of the Vero cell line for the same cytotoxic agents. Sahu et al76 have reported the presence of four new tritepenoid saponins, namely gymnemasins A, B, C, and D, from the leaves of G. sylvestre, while Chan has identified the presence of acylation with diangeloyl groups at the C21–22 positions in triterpenoid saponins, which is essential for cytotoxcity toward tumor cells.77 Tang et al78 have reported that saponin could induce apoptosis of U251 cells, and both BAD-mediated intrinsic apoptotic signaling pathway and caspase-8-mediated extrinsic apoptotic signaling pathway were involved in the apoptosis. The promising saponins were further studied as potential anticancer agents by many researchers. 
Ai et al79 proposed a qualitative method that can be used to recognize the presence or absence of cancer cells with gold NPs for targeted cancer cell imaging and efficient photodynamic therapy. As reported by Raghavendra et al,80 size effects and multifunctionality are the main characteristics of NPs, so our one-step synthesis of SNPs using the aqueous extract of G. sylvestre may serve as a route to a potential anticancer drug for cancer therapy. Further studies have to be carried out to understand the nature of the cytotoxicity and the death or proliferation of cells caused by GSNPs from G. sylvestre leaf extract. Conclusion: The green synthesis of biofunctionalized SNPs from the leaves of G. sylvestre was economical, nontoxic, and environmentally benign. Owing to the reducing and capping nature of the bioactive phytocompounds present in the aqueous extract of G. sylvestre, a cap was formed around the silver ions, and the resulting biofunctionalized SNPs were stable. The presence of the functional groups of the bioactive compounds was confirmed by FTIR spectra. The particle size and the spherical shape of the SNPs were determined by XRD and SEM analyses. Since both the plant extract and the biofunctionalized SNPs showed anticancer activity against cancer cells, G. sylvestre may serve as a source of potential anticancer drugs. The present study showed the anticancer activities of both the bioactive compounds of the leaf extract and the biofunctionalized SNPs against HT29 human adenocarcinoma cells in vitro. Our studies provide an important basis for the application of NPs for in vitro anticancer activity against human colon adenocarcinoma cells. Our earlier reports have also shown the potential antiulcer properties of G. sylvestre in mice.13 Thus, G. sylvestre is a good plant candidate for further studies in alternative medicine due to its multifunctional medicinal properties.
Background: Gymnema sylvestre is an ethno-pharmacologically important medicinal plant used in many polyherbal formulations for its potential health benefits. Silver nanoparticles (SNPs) were biofunctionalized using aqueous leaf extracts of G. sylvestre. The anticancer properties of the bioactive compounds and the biofunctionalized SNPs were compared using the HT29 human colon adenocarcinoma cell line. Methods: The preliminary phytochemical screening for bioactive compounds from aqueous extracts revealed the presence of alkaloids, triterpenes, flavonoids, steroids, and saponins. Biofunctionalized SNPs were synthesized using silver nitrate and characterized by ultraviolet-visible spectroscopy, scanning electron microscopy, energy-dispersive X-ray analysis, Fourier transform infrared spectroscopy, and X-ray diffraction for size and shape. The characterized biofunctionalized G. sylvestre SNPs were tested for their in vitro anticancer activity against HT29 human colon adenocarcinoma cells. Results: The biofunctionalized G. sylvestre SNPs showed the surface plasmon resonance band at 430 nm. The scanning electron microscopy images showed the presence of spherical nanoparticles of various sizes, which were further determined using the Scherrer equation. In vitro cytotoxic activity of the biofunctionalized green-synthesized SNPs (GSNPs) indicated that the sensitivity of HT29 human colon adenocarcinoma cells to cytotoxic drugs is higher than that of the Vero cell line to the same cytotoxic agents and also higher than that to the bioactive compounds of the aqueous extract. Conclusions: Our results show that the anticancer properties of the bioactive compounds of G. sylvestre can be enhanced through biofunctionalizing the SNPs using the bioactive compounds present in the plant extract without compromising their medicinal properties.
Introduction: Bioactive components from medicinal plants, comparable in action to synthetic chemical compounds, are used for the treatment of various diseases.1 In recent years, the use of ethno-botanical information in medicinal plant research has gained considerable attention in some segments of the scientific community.2 In one ethno-botanical survey of medicinal plants commonly used by the Kani tribals in the Tirunelveli hills of the Western Ghats in Tamil Nadu, India, Gymnema sylvestre was revealed to be the most important species based on its use.2 The use of plant parts and isolated phytochemicals for the prevention and treatment of various health ailments has been in practice for many decades.3 G. sylvestre R. Br, commonly known as "Meshasringi", is distributed over most of India and has a reputation in traditional medicine as a stomachic, diuretic, and a remedy to control diabetes mellitus. G. sylvestre R. Br4 is a woody, climbing plant that grows in the tropical forests of Central and Southern India and in parts of Asia.5 It is a pubescent shrub with young stems and branches, and has a distichous, opposite phyllotactic arrangement of leaves, which are 2.5–6 cm long and usually ovate or elliptical. The flowers are small, yellow, and in umbellate cymes, and the follicles are terete, lanceolate, and up to 3 inches in length.6 In homeopathy, as well as in folk and ayurvedic medicine, G. sylvestre has been used for diabetes treatment.7 G.
sylvestre has bioactive components that can cure asthma, eye ailments, snakebite, piles, chronic cough, breathing troubles, colic pain, cardiopathy, constipation, dyspepsia, hemorrhoids, and hepatosplenomegaly, as well as assist in family planning.8 In addition, it also possesses antimicrobial,9 antitumor,5 anti-obesity,10 anti-inflammatory,11 anti-hyperglycemic,12 antiulcer, anti-stress, and antiallergic activity.13 The presence of flavonoids, saponins, anthraquinones, quercitol, and other alkaloids has been reported in the flowers, leaves, and fruits of G. sylvestre.14 The presence of other therapeutic agents, such as gymnemagenin, gymnemic acids, gymnemanol, and β-amyrin-related glycosides, which play a key role in therapeutic applications, has also been reported. The focus of the present work is to assess the potential therapeutic medicinal value of this herb and to understand/enhance the mechanistic action of its bioactive components.14 G. sylvestre contains triterpenes, saponins, and gymnemic acids belonging to the oleanane and dammarene classes.15,16 The plant extract has also tested positive for alkaloids, acidic glycosides, and anthraquinone derivatives. Oleanane saponins are gymnemic acids and gymnema saponins, while dammarene saponins are gymnemasides. As reported by Thakur et al,14 the aqueous extracts of the G. sylvestre leaves showed the presence of gymnemic acids Ι–VΙ, while the saponin fraction of the leaves tested positive for the presence of gymnemic acids XV–XVIII. The gymnemic acid derivative of gymnemagenin, which is responsible for the antidiabetic activity, was elucidated from fractions VIII–XII, and fraction VIII stimulates the pancreas for insulin secretion. A novel D-glucoside structure with an anti-sweet principle is present in saponin fractions I–V.
The presence of pentatriacontane, α- and β-chlorophylls, phytin, resins, D-quercitol, tartaric acid, formic acid, butyric acid, lupeol, and stigmasterol has been reported among the other plant constituents of G. sylvestre,14 while the extract has also tested positive for alkaloids.13,17 Sharma et al have reported the antioxidant activity of oleanane saponins from G. sylvestre plant extract and determined the IC50 values for 2,2-diphenylpicrylhydrazyl (DPPH) scavenging, superoxide radical scavenging, inhibition of in vitro lipid peroxidation, and protein carbonyl formation as 238 μg/mL, 140 μg/mL, 99 μg/mL, and 28 μg/mL, respectively, which may be due to the presence of flavonoids, phenols, tannins, and triterpenoids.18 The enhanced radiation (8 Gy)-induced augmentation of lipid peroxidation and depletion of glutathione and protein in mouse brain were reported by Sharma et al18 using multiherbal ayurvedic formulations containing extracts of G. sylvestre, such as "Hyponidd" and "Dihar". They also demonstrated the antioxidant activity by increasing the levels of superoxide dismutase, glutathione, and catalase in rats through in vivo studies.19 Kang et al20 proved the role of antioxidants from G. sylvestre in diabetic rats using ethanolic extracts and several antioxidant assays, eg, the thiobarbituric acid assay with slight modifications, the egg yolk lecithin or 2-deoxyribose (associated with lipid peroxidation) assay, the superoxide dismutase-like activity assay, and the 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) assay. The potent anticancer activity of G. sylvestre against human lung adenocarcinoma (A549) and human breast carcinoma (MCF7) cell lines using alcoholic extracts of the herb has been reported by Srikant et al.21 Also, Amaki et al22 reported the inhibition of the breast cancer resistance protein using the alcoholic extract of G. sylvestre.
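IC50 values such as those quoted for DPPH scavenging are conventionally derived from percent-scavenging curves. A sketch of the standard DPPH calculation with hypothetical absorbance readings (the 238 μg/mL point is set to land at 50% purely for illustration; these are not the cited study's data):

```python
# Standard DPPH radical-scavenging arithmetic (assumed formula):
#   % scavenging = (A_control - A_sample) / A_control * 100
# Absorbance values at ~517 nm below are hypothetical.

def percent_scavenging(a_sample, a_control):
    """% of DPPH radical quenched relative to the extract-free control."""
    return 100.0 * (a_control - a_sample) / a_control

a_control = 0.820  # DPPH solution without extract (hypothetical A517)
for conc, a in [(50, 0.740), (150, 0.580), (238, 0.410)]:
    print(f"{conc} ug/mL extract: {percent_scavenging(a, a_control):.1f}% DPPH scavenged")
```

The concentration at which the curve crosses 50% scavenging is then reported as the IC50.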
Many plant-derived saponins, eg, ginsenosides, soyasaponins, and saikosaponins, have been found to exhibit significant anticancer activity. The anticancer activity of gymnemagenol on HeLa cancer cell lines under in vitro conditions was determined by the MTT cell proliferation assay for the cytotoxic activity of saponins. Using 5 μg/mL, 15 μg/mL, 25 μg/mL, and 50 μg/mL concentrations of gymnemagenol, the IC50 value was found to be 37 μg/mL after 96 hours. The isolated bioactive constituent, gymnemagenol, showed a high degree of inhibition of HeLa cancer cell line proliferation, and the saponins were not found to be toxic to the growth of normal cells under in vitro conditions.23 Many researchers have already reported that the leaves of G. sylvestre lower blood sugar; stimulate the heart, uterus, and circulatory system; and exhibit anti-sweet and hepatoprotective activities.20,24–31 Administration of G. sylvestre extract to diabetic rats increased superoxide dismutase activity and decreased lipid peroxide either by directly scavenging the reactive oxygen species, due to the presence of various antioxidant compounds, or by increasing the synthesis of antioxidant molecules (albumin and uric acid).24,30,32 Therefore, in this study, an attempt was made to synthesize silver nanoparticles (SNPs) from aqueous extracts of the G. sylvestre leaves. These green-synthesized SNPs (GSNPs) of G. sylvestre were examined by ultraviolet–visible (UV–vis) spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis to study their size and shape. The synthesized and well-characterized nanoparticles (NPs) were tested for their cytotoxic effect.
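An IC50 such as the 37 μg/mL reported for gymnemagenol is typically read off a dose-response curve. A sketch of linear interpolation on a log-dose axis between the two points bracketing 50% inhibition, using the cited dose levels but hypothetical inhibition values:

```python
import math

# Hedged sketch: IC50 by log-dose linear interpolation. The dose levels mirror
# those cited for gymnemagenol (5, 15, 25, 50 ug/mL); the inhibition
# percentages are hypothetical, not the study's measurements.

def ic50(doses, inhibitions):
    """Interpolate the dose giving 50% inhibition (doses must be ascending)."""
    pairs = list(zip(doses, inhibitions))
    for (d0, i0), (d1, i1) in zip(pairs, pairs[1:]):
        if i0 <= 50.0 <= i1:
            frac = (50.0 - i0) / (i1 - i0)
            log_dose = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** log_dose
    raise ValueError("50% inhibition not bracketed by the data")

doses = [5.0, 15.0, 25.0, 50.0]
inhibitions = [12.0, 30.0, 44.0, 62.0]  # hypothetical %
print(f"IC50 = {ic50(doses, inhibitions):.1f} ug/mL")
```

In practice a four-parameter logistic fit over all points is preferred; the bracketing interpolation above is just the simplest defensible estimate.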
Our findings clearly demonstrate that it is indeed possible to synthesize SNPs in a much greener way without compromising their antibacterial properties; thus, plant extracts may prove to be a good alternative for obtaining NPs with improved antibacterial and antiviral properties for diabetic wound healing applications. Goix et al33 and Boholm and Arvidsson34 have pointed out that silver is either beneficial or harmful in relation to four main values: the environment, health, sewage treatment, and product effectiveness. As reported by Barua et al,35 poly(ethylene glycol)-stabilized colloidal SNPs showed nonhazardous anticancer and antibacterial properties. Jin et al36 have reported the therapeutic applications of plant-extract-based scaffolds for wound healing and skin reconstitution studies.
11,155
288
[ 90, 144, 126, 175, 543, 91, 154, 16, 115, 1211, 490, 202 ]
16
[ "extract", "snps", "cells", "sylvestre", "cell", "ml", "nps", "μg", "plant", "aqueous" ]
[ "medicinal plants commonly", "medicinal plant", "phytochemicals reported sylvestre", "botanical surveys medicinal", "sylvestre plant extract" ]
[CONTENT] Gymnema sylvestre | gymnemic acid | biofunctionalized silver nanoparticles | anticancer activity | HT29 cell line [SUMMARY]
[CONTENT] Animals | Antineoplastic Agents | Chlorocebus aethiops | Gymnema sylvestre | HT29 Cells | Humans | Microscopy, Electron, Scanning | Nanoparticles | Plant Extracts | Plant Leaves | Plants, Medicinal | Saponins | Silver | Silver Nitrate | Spectroscopy, Fourier Transform Infrared | Triterpenes | Vero Cells | X-Ray Diffraction [SUMMARY]
[CONTENT] medicinal plants commonly | medicinal plant | phytochemicals reported sylvestre | botanical surveys medicinal | sylvestre plant extract [SUMMARY]
[CONTENT] extract | snps | cells | sylvestre | cell | ml | nps | μg | plant | aqueous [SUMMARY]
[CONTENT] reported | sylvestre | anti | μg ml | saponins | activity | plant | gymnemic acids | μg | gymnemic [SUMMARY]
[CONTENT] test | following methods | reagent | colorimetric method | following | methods | alkaline reagent | alkaline | phytochemical | method [SUMMARY]
[CONTENT] figure | snps | extract | nps | plant | cells | plant extract | cell | reported | sylvestre [SUMMARY]
[CONTENT] bioactive | snps | biofunctionalized snps | anticancer | biofunctionalized | extract biofunctionalized | adenocarcinoma cells | extract biofunctionalized snps | bioactive compounds | showed anticancer [SUMMARY]
[CONTENT] cells | ml | cell | snps | extract | μg | sylvestre | nps | μg ml | plant [SUMMARY]
[CONTENT] Gymnema ||| G. sylvestre ||| [SUMMARY]
[CONTENT] ||| Fourier ||| G. sylvestre [SUMMARY]
[CONTENT] G. sylvestre | 430 ||| Scherrer ||| Vero [SUMMARY]
[CONTENT] G. sylvestre [SUMMARY]
[CONTENT] Gymnema ||| G. sylvestre ||| ||| ||| Fourier ||| G. sylvestre ||| G. sylvestre | 430 ||| Scherrer ||| Vero ||| G. sylvestre [SUMMARY]
Assembly of tetraspanins, galectin-3, and distinct N-glycans defines the solubilization signature of seminal prostasomes from normozoospermic and oligozoospermic men.
34540145
Prostasomes, extracellular vesicles (EVs) abundantly present in seminal plasma, express distinct tetraspanins (TS) and galectin-3 (gal-3), which are supposed to shape their surface through the assembly of different molecular complexes. In this study, detergent-sensitivity patterns of membrane-associated prostasomal proteins were determined, aiming at the solubilization signature as an intrinsic multimolecular marker and a new parameter suitable as a reference for the comparison of EV populations in health and disease.
BACKGROUND
Prostasomes were disrupted by Triton X-100 and analyzed by gel filtration under conditions that maintained complete solubilization. Redistribution of TS (CD63, CD9, and CD81), gal-3, gamma-glutamyltransferase (GGT), and distinct N-glycans was monitored using solid-phase lectin-binding assays, transmission electron microscopy, electrophoresis, and lectin blot.
METHODS
Comparative data on prostasomes under normal physiology and conditions of low sperm count revealed similarity regarding the redistribution of distinct N-glycans and GGT, all presumed to be mainly part of the vesicle coat. In contrast to this, a greater difference was found in the redistribution of integral membrane proteins, exemplified by TS and gal-3. Accordingly, they were grouped into two molecular patterns mainly consisting of overlapped CD9/gal-3/wheat germ agglutinin-reactive glycoproteins and CD63/GGT/concanavalin A-reactive glycoproteins.
RESULTS
Solubilization signature can be considered as an all-inclusive distinction factor regarding the surface properties of a particular vesicle since it reflects the status of the parent cell and the extracellular environment, both of which contribute to the composition of spatial membrane arrangements.
CONCLUSIONS
[ "Galectin 3", "Humans", "Male", "Polysaccharides", "Semen", "Spermatozoa", "Tetraspanins" ]
8431989
Introduction
Tetraspanin-web and galectin-glycoprotein lattices represent distinct multi/macromolecular complexes assembled at the plasma membrane and are supposed to facilitate different biological activities/functions (1–3). Extracellular vesicles (EVs), membranous structures originating from plasma- or intracellular membranes, are considered enriched in tetraspanins (TS), which are used as canonical markers (4). Regarding the presence of lectins, including galectins, although not widely studied in this context, there are data indicating that galectin-3 (gal-3) is involved in the biogenesis of EVs and can be used as a reliable marker (5). Both TS and gal-3 are thought to shape not only the surface of EVs but also the cargo composition (6). Prostasomes, EVs originating from the prostate and abundantly present in human seminal plasma (SP), are reported to express TS: CD63, CD9, and CD81, as well as gal-3 (2, 7–9). In addition, gal-3 and mannosylated/sialylated glycans were found to contribute to the prostasomal surface in a specific way in terms of accessibility and native presentation which could be altered in pathological conditions associated with male fertility (2). This study aimed to delineate the positions of selected TS: CD63, CD9, and CD81, that is, to obtain data on their molecular associations and relate them to gal-3 and selected N-glycans, all known to reside on the surface of prostasomes. Although of possible general importance for the identification of distinct vesicle types and their functions in different heterogeneous extracellular landscapes, related data are still missing. Thus, it is presumed that solubilization signature is an all-inclusive distinction factor regarding surface properties of a particular vesicle, since it can reflect the status of the parent cell and the extracellular environment, both of which contribute to the composition of spatial membrane arrangements. 
By establishing the solubilization signature, we aimed at determining a new qualitative parameter suitable as a reference for the comparison of any type of vesicle. The existence of distinct prostasomal surface molecular complexes was deduced from their detergent resistance (revealing TS-primary ligand association, gal-3-glycoprotein association, and insoluble membranes) or detergent sensitivity (revealing solubilized glycoprotein–glycolipid complexes). Related molecular patterns established from the mode of response to disruption by a non-ionic detergent of high stringency were used as a reference to annotate and/or compare prostasomal preparations from normozoospermic and oligozoospermic men. In general, this approach is readily applicable, and it is not supposed to be significantly affected by different isolation procedures. Getting insight into the distribution patterns of TS on different membrane domains can add new value to their common use (based on presence only) as EV markers. Moreover, since TS have distinct biological activities involved in cell adhesion, motility, and metastasis, as well as cell activation and signal transduction (10, 11), possible differences in their organization may have biomedical consequences. Thus, solubilization signatures of EVs might relate their structure to functional alterations in distinct pathological conditions.
Results
Prostasomes from human seminal plasma of normozoospermic men: influence of TX-100 treatment Distributions of distinct surface-associated markers of prostasomes from human SP of normozoospermic men (sPro-N) after solubilization with TX-100 are shown in Figure 1a–d. In general, their gel filtration elution positions differed in terms of a more or less noticeable shift from the void volume where they were co-eluted on native vesicles, revealing new patterns of associations. Surface-associated glycoproteins and gamma-glutamyl transferase on the seminal prostasomes of normozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of normozoospermic men (sPro-N) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-N (eluted at void volume) were shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused spilled appearance of dots and background staining. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3: galectin 3; F: fraction. In eluted fractions, total mannosylated and sialylated glycans, which can reside on both glycoprotein and glycolipids, were monitored by lectin-binding reactivity (Figure 1a, b), and the GGT as an individual protein marker was monitored by enzymatic activity (Figure 1c). Thus, Con A-reactive glycoproteins were solubilized as evidenced by a striking decrease at the initial position (Figure 1a). 
However, the related redistribution can be rather deduced than clearly shown according to the reactivity of fractions in the included column volume, possibly due to the influence of their structure on immobilization. In contrast to this, WGA-reactive glycoproteins seemed to be partly solubilized and released as aggregated, judging by the corresponding elution profile revealing broad peaks entering the column and trailing down along the entire chromatogram (Figure 1b). Moreover, they produced a small but distinct peak before that of intact vesicles, which suggests formation of larger protein complexes. As for GGT, it was also clearly solubilized from vesicles with TX-100 and exhibited an elution profile distinct from the examined glycans (Figure 1c). The influence of TX-100 on the distribution of TS: CD63, CD9, and CD81, chosen as integral membrane proteins, and gal-3, chosen as a soluble but membrane-associated molecule, was monitored by the immunodot blot as adequate method (in contrast to western blot) for monitoring surface-associated changes (Figure 1d). TS and gal-3 co-localized on native vesicles at a position overlapping the detected Con A- and WGA-reactive glycoproteins and GGT (Figure 1c, data not shown). After TX-100 treatment, CD63 was clearly released and exhibited broad distribution (Figure 1d, fractions 16–30), but also remained close to its initial position. In contrast to this, CD9 (Figure 1d, fractions 16–18) and gal-3 (Figure 1d, fractions 16–18) retained narrow distributions, that is, they were slightly shifted during elution. Moreover, their patterns overlapped completely. As for CD81, it could not be detected after detergent treatment (Figure 1d). To further follow up the influence of TX-100 on prostasomes, changes in their ultrastructure were analyzed (Figure 2). 
In the region where all examined markers remained more or less co-localized after TX-100 treatment, the microscopic inspection revealed the presence of structures that correspond to reorganized detergent-resistant domains of vesicular membranes. They appeared as broken vesicles surrounded with leaking content or smaller vesicles with disrupted irregular surfaces seemingly shrunken with no associated material. In contrast to this, in the region where released glycoproteins were separated (Figure 1, fractions 20–22), no such structures were visible, only irregular, possibly, protein deposits. To complement the results obtained for detergent-treated samples as such, changes in the patterns of total prostasomal glycoproteins were analyzed under denaturing and reducing conditions by electrophoresis (Figure 3) and lectin blot (Figure 4). Compared to the native vesicles (Figure 3a), in the TX-100-treated ones (Figure 3b), the proteins exhibiting a prostasome-like pattern remained clustered close to the initial position, that is, they were marginally shifted in elution. However, their abundance was noticeably lower. Specifically, discrete changes in terms of the abundance of major bands in the region below 66 kDa (encompass masses of TS and gal-3) were detected. In addition, clear loss of bands in the region corresponding to the masses of prostasomal signature bands (90–150 kDa) was also detected. Consequently, some of them were visible as shifted in a cluster of protein bands included in column volume (Figure 3b, fractions 20–22). Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of normozoospermic men (sPro-N) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 1). Bar 500 nm. 
F14: rare vesicles; F17: broken vesicles surrounded with leaking content; F19: vesicles with disrupted irregular surface with no associated material; F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a) and TX-100 treated sPro-N (b) were resolved by electrophoresis and stained with silver. Although aggregation is present as judged by intense bands at the border of the stacking and separating gel, the referent protein bands are preserved. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to keep on elution profiles. The numbers (in kDa) indicate the position of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a, c) and TX-100 treated sPro-N (b, d) were subjected to lectin-blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel. In contrast to proteins, prostasomal glycoproteins were considerably reorganized after TX-100 treatment. Regarding Con A-reactive glycoproteins, what the lectin-binding assay suggested was confirmed by the lectin blot (Figure 4). 
Thus, the pattern of Con A-reactive glycoproteins from native vesicles comprised a high molecular mass band at the border of the stacking and separating gel and a smear band in the stacking gel, as well as four distinct lower molecular mass bands (Figure 4a, fractions 15–17). After TX-100 treatment, a striking loss of high molecular mass components was observed. It can be related to different modes of redistribution of distinct lower molecular mass Con A-reactive bands. Specifically, the major 97 kDa band remained partly close to the initial position (Figure 4b, fraction 17) but was also released, that is, redistributed (Figure 4b, fractions 19–22). In addition, almost complete release of those with molecular masses below 66 kDa was observed (Figure 4a, b, fractions 15–17). The pattern of WGA-reactive glycoproteins of native vesicles was comparable with that of the Con A-reactive ones in the high molecular mass region (Figure 4c). However, after TX-100 treatment, their patterns were strikingly different (Figure 4d), as initially observed in the corresponding lectin-binding assay (Figure 1b). Thus, the high molecular mass band was completely lost, and a shift of the weak 97 kDa band was also detected. Taken together, the profiles related to the cluster of Con A- and WGA-binding glycoproteins might be influenced by their presence in different but overlapping complexes. Moreover, it is possible that diverse, but overlapped, bands with matching molecular mass and lectin binding were detected, as indicated by the disparate/selective presence/abundance of particular ones in the subsequently eluted fractions.
Prostasomes from human seminal plasma of oligozoospermic men: influence of TX-100 treatment
In parallel, prostasomes isolated from human seminal plasma of oligozoospermic men (sPro-O) were subjected to TX-100 treatment and analyzed in the same manner. Compared to the native sample, the treated ones exhibited patterns of Con A- (Figure 5a) and WGA-reactive glycans (Figure 5b) as well as GGT (Figure 5c) that indicated decrease/loss and/or redistribution, similarly as found for sPro-N. In addition, similarity with sPro-N was also noticed regarding the effect on the redistribution of TS: CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position as observed for sPro-N, they exhibited partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable. Observation at the ultrastructural level suggested more general disruption of sPro-O (Figure 6) than sPro-N. In general, vesicular structures were of low abundance and their morphology was clearly different from sPro-N. 
The integrity of the TX-100-treated vesicles was reflected in the patterns of total proteins (Figure 7) and glycoproteins (Figure 8), which indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with native sPro-O (Figure 7a), a significant loss of protein across the entire range of molecular masses was observed; the protein that remained was mostly in the region below 66 kDa (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, the lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100-treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more intensive aggregation into complexes that, in general, interfered with or prevented detection of the contributing components. Surface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from the Sephadex G-200 column (eluted at the void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of dots and background staining. Barely detectable CD9- and gal-3-immunoreactivity is indicated by a circle. A450, absorbance at 450 nm; GGT, gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3, galectin 3; F, fraction. 
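The fraction-wise readouts used throughout (A450 per fraction for lectin reactivity, enzyme activity per fraction for GGT) invite a simple quantitative summary of marker redistribution. The following is a minimal sketch, not part of the authors' analysis, using made-up illustrative values, of how one could compute the peak fraction and an intensity-weighted elution centroid for a marker before and after detergent treatment:

```python
# Minimal sketch (not the authors' code): summarize per-fraction A450
# readouts as a peak fraction and an intensity-weighted elution centroid,
# to quantify how far a marker shifts after detergent treatment.
# All numeric values below are invented for illustration only.

def elution_summary(fractions, a450):
    """Return (peak_fraction, centroid) for one elution profile."""
    peak = fractions[a450.index(max(a450))]
    total = sum(a450)
    centroid = sum(f * a for f, a in zip(fractions, a450)) / total
    return peak, centroid

fractions = list(range(14, 23))                              # F14..F22
native = [0.9, 1.4, 0.8, 0.3, 0.2, 0.1, 0.1, 0.1, 0.1]      # co-eluted near void volume
treated = [0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3]     # released/redistributed

p_n, c_n = elution_summary(fractions, native)
p_t, c_t = elution_summary(fractions, treated)
print(f"native: peak F{p_n}, centroid {c_n:.2f}")
print(f"treated: peak F{p_t}, centroid {c_t:.2f}; shift {c_t - c_n:+.2f} fractions")
```

A positive centroid shift toward higher fraction numbers corresponds to the delayed elution described above for markers released from the vesicles.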
Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 5). Bar 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made; that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the position of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel.
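GGT activity in the elution profiles is reported as a concentration (U/L), and the gel filtration protocol collects 1 mL fractions, so converting a fraction's reading into absolute enzyme units and summing across a peak is simple arithmetic. A minimal sketch with hypothetical values (the function and numbers are illustrative, not from the study):

```python
# Minimal sketch (illustrative values, not measured data): GGT activity is
# reported as a concentration (U/L); with 1 mL fractions, each reading
# corresponds to reading / 1000 absolute units (U) in that fraction.

FRACTION_VOLUME_L = 0.001  # 1 mL fractions, as in the gel filtration protocol

def total_units(activities_u_per_l, volume_l=FRACTION_VOLUME_L):
    """Sum absolute enzyme units (U) over a set of fractions."""
    return sum(a * volume_l for a in activities_u_per_l)

# hypothetical GGT readings (U/L) across a released-enzyme peak
peak_readings = [120.0, 480.0, 950.0, 610.0, 140.0]
print(f"total GGT in peak: {total_units(peak_readings):.3f} U")
```

Summing absolute units rather than comparing raw U/L readings makes peaks of different widths directly comparable between chromatograms.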
null
null
[ "Materials", "Human semen samples", "Isolation of prostasomes from human seminal plasma", "Gel filtration", "SDS-PAGE", "Western blot and dot blot", "Transmission electron microscopy (TEM)", "Prostasomes from human seminal plasma of normozoospermic men: influence of TX-100 treatment", "Prostasomes from human seminal plasma of oligozoospermic men: influence of TX-100 treatment" ]
[ "Monoclonal anti-CD63 antibody (clone TS63) was from Abcam (Cambridge, UK), monoclonal anti-CD81 (clone M38) and monoclonal anti-CD9 (clone MEM-61) were from Invitrogen by Thermo Fisher Scientific (Carlsbad, CA, USA), and biotinylated goat anti-galectin-3 (gal-3) antibodies were from R&D Systems (Minneapolis, USA). 3,3′,5,5′-tetramethylbenzidine (TMB), bovine serum albumin (BSA), and Triton X-100 (TX-100) were from Sigma (St. Louis, MO, USA). Biotinylated goat anti-mouse IgG, biotinylated plant lectins: Con A (Concanavalin A), wheat germ agglutinin (WGA), and the Elite Vectastain ABC kit were from Vector Laboratories (Burlingame, CA, USA). Sephadex G-200 was from Pharmacia AB (Uppsala, Sweden). The silver stain kit and SDS-PAGE molecular mass standards (broad range) were from Bio-Rad (Hercules, CA, USA). Nitrocellulose membrane and Pierce ECL Western Blotting Substrate were from Thermo Scientific (Rockford, IL, USA). Microwell plates were from Thermo Scientific (Roskilde, Denmark).", "This study was performed on the leftover, anonymized specimens of human semen taken for routine analysis, and since existing human specimens were used, it is not considered as research on human subjects. It was approved by the institutional ethics committee according to the guidelines (No. # 02-832/1), which conforms to the Helsinki Declaration, 1975 (revised 2008). Sperm parameters were assessed according to the recommended criteria of the World Health Organization (released in 2010.), concerning numbers, morphology, and motility.\nSperm cells and other debris were removed from the ejaculate by centrifugation at 1,000 × g for 20 min.", "Two pools of human SP of normozoospermic men and two pools of human SP of oligozoospermic men were used for the isolation of prostasomes. Each pool contained 10 individual SP samples. Prostasomes from normozoospermic men (sPro-N) and oligozoospermic men (sPro-O) were isolated from SP according to the modified protocol of Carlsson et al. (12). 
CD63-, CD9-, and CD81-immunoreactivities were used as the indicator of EVs’ presence. These prostasomal preparations were subjected to detergent treatment by incubation with 1% TX-100 for 1 h at room temperature and then subjected to gel filtration as the method of choice (13, 14). We monitored the redistribution of selected markers during gel filtration as an indicator of release from vesicles using the combined analysis of intact fractions (solid-phase assay with immobilized fractions and microscopy) and methods analyzing denatured fractions (electrophoresis and lectin blot).", "Gel filtration separation profiles of TX-100-treated prostasomes were obtained under conditions where TX-100 was present during elution to ensure maintenance of total solubilization (15). Thus, the detergent-treated seminal prostasome preparation (1 mL) was loaded on a Sephadex G-200 column (bed volume 35 mL) equilibrated and eluted with 0.03 M Tris-HCl, pH 7.6, containing 0.13 M NaCl and 1% TX-100. Fractions of 1 mL were collected. The elution was monitored as described previously (16). Briefly, gel filtration-separated fractions were coated on microwell plates at 4°C overnight. After washing steps (3 × 300 μL with 0.05 M phosphate-buffered saline, PBS), they were blocked with 50 μL 1% BSA for 1.5 h and then washed again. Biotinylated plant lectins: Con A and WGA (50 μL, 0.5 mg/mL) were allowed to react for 30 min at room temperature, washed out, and followed by the addition of 50 μL of avidin/biotin–HRPO complex (Elite, Vectastain ABC Kit, prepared according to the manufacturer’s instructions). After incubation for 30 min, at room temperature, the plates were rinsed and developed using 50 μL TMB substrate solution. The reaction was stopped with 50 μL 2 N sulfuric acid. Absorbance was read at 450 nm using a Wallac 1420 Multilabel counter Victor3V (Perkin Elmer, Waltham, MA, USA). 
The elution profile of gamma-glutamyl transferase (GGT) was monitored by measuring enzyme activity using GGT colorimetric assay kits (Bioanalytica, Madrid, Spain), according to the manufacturer’s instructions for Biosystems A25 (Barcelona, Spain). The selected fractions were further analyzed by electrophoresis and blotting.\nNative seminal prostasome preparations were analyzed in the same way except that TX-100 was not added to the elution buffer.", "Corresponding samples were resolved on 10% separating gel with 4% stacking gel under denaturing and reducing conditions (17) and stained with silver nitrate, using a silver stain kit (Bio-Rad) according to the manufacturer’s instructions. The gel was calibrated with SDS-PAGE molecular weight standards (broad range).", "Samples were transferred onto nitrocellulose membrane by semi-dry blotting using a Trans-blot SD (Bio-Rad Laboratories). The conditions were as follows: transfer buffer, 0.025 M Tris containing 0.192 M glycine and 20% methanol, pH 8.3 under a constant current of 1.2 mA/cm2 for 1 h. The membrane was blocked with 3% BSA in 0.05 M PBS, pH 7.2, for 2 h at room temperature, and then used for lectin-blotting (1) as described below.\nFor dot blot, 3 μL of each corresponding fraction was applied to the nitrocellulose membrane, dried, blocked as described above, and subjected to immunoblotting (2).\nLectin blotting\nLectin blotting was performed as described earlier (18). The membrane was incubated with the chosen biotinylated plant lectin (0.2 μg/mL in 0.05 M PBS, pH 7.2) for 1 h at room temperature and then washed six times in 0.05 M PBS, pH 7.2. Avidin/biotinylated horseradish peroxidase (HRPO) from Vectastain Elite ABC kit (prepared according to the manufacturer’s instructions) was added and incubated for 30 min at room temperature. 
The membrane was then rinsed again six times in 0.05 M PBS, pH 7.2, and the proteins were visualized using Pierce ECL substrate solution (Thermo Scientific, Rockford, IL, USA), according to the manufacturer’s instructions.\nImmunoblotting\nImmunodot blot was performed as previously established (18). For immunoblotting, the membrane was incubated with the corresponding antibodies: anti-CD63 antibody (0.5 μg/mL), anti-CD81 antibody (0.25 μg/mL), anti-CD9 antibody (0.5 μg/mL), and biotinylated anti-gal-3 antibodies (0.025 μg/mL), overnight at 4°C. After a washing step, bound antibody was detected by incubation with biotinylated goat anti-mouse IgG for 30 min at room temperature. The membrane was rinsed and the avidin/biotinylated HRPO mixture from the Elite Vectastain ABC kit was added, followed by incubation for 30 min at room temperature. After another washing step, the blots were visualized using Pierce ECL Western blotting substrate according to the manufacturer’s instructions.", "TEM was performed as described previously (19). Samples were applied to the formvar-coated, 200 mesh, Cu grids by grid flotation on 10 μL sample droplets, for 45 min at room temperature. This was followed by steps of fixation (2% paraformaldehyde, 10 min), washing (PBS, 3 × 2 min), post-fixing (2% glutaraldehyde, 5 min), and a final wash with distilled H2O (2 min). Grids were then air-dried, and the images were collected using a Philips CM12 electron microscope (Philips/FEI, Eindhoven, the Netherlands).", "Distributions of distinct surface-associated markers of prostasomes from human SP of normozoospermic men (sPro-N) after solubilization with TX-100 are shown in Figure 1a–d. 
In general, their gel filtration elution positions differed in terms of a more or less noticeable shift from the void volume where they were co-eluted on native vesicles, revealing new patterns of associations.\nSurface-associated glycoproteins and gamma-glutamyl transferase on the seminal prostasomes of normozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of normozoospermic men (sPro-N) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-N (eluted at void volume) were shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused spilled appearance of dots and background staining. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3: galectin 3; F: fraction.\nIn eluted fractions, total mannosylated and sialylated glycans, which can reside on both glycoprotein and glycolipids, were monitored by lectin-binding reactivity (Figure 1a, b), and the GGT as an individual protein marker was monitored by enzymatic activity (Figure 1c). Thus, Con A-reactive glycoproteins were solubilized as evidenced by a striking decrease at the initial position (Figure 1a). However, the related redistribution can be rather deduced than clearly shown according to the reactivity of fractions in the included column volume, possibly due to the influence of their structure on immobilization. 
In contrast to this, WGA-reactive glycoproteins seemed to be partly solubilized and released as aggregated, judging by the corresponding elution profile revealing broad peaks entering the column and trailing down along the entire chromatogram (Figure 1b). Moreover, they produced a small but distinct peak before that of intact vesicles, which suggests formation of larger protein complexes. As for GGT, it was also clearly solubilized from vesicles with TX-100 and exhibited an elution profile distinct from the examined glycans (Figure 1c). The influence of TX-100 on the distribution of TS: CD63, CD9, and CD81, chosen as integral membrane proteins, and gal-3, chosen as a soluble but membrane-associated molecule, was monitored by the immunodot blot as adequate method (in contrast to western blot) for monitoring surface-associated changes (Figure 1d). TS and gal-3 co-localized on native vesicles at a position overlapping the detected Con A- and WGA-reactive glycoproteins and GGT (Figure 1c, data not shown). After TX-100 treatment, CD63 was clearly released and exhibited broad distribution (Figure 1d, fractions 16–30), but also remained close to its initial position. In contrast to this, CD9 (Figure 1d, fractions 16–18) and gal-3 (Figure 1d, fractions 16–18) retained narrow distributions, that is, they were slightly shifted during elution. Moreover, their patterns overlapped completely. As for CD81, it could not be detected after detergent treatment (Figure 1d). To further follow up the influence of TX-100 on prostasomes, changes in their ultrastructure were analyzed (Figure 2). In the region where all examined markers remained more or less co-localized after TX-100 treatment, the microscopic inspection revealed the presence of structures that correspond to reorganized detergent-resistant domains of vesicular membranes. 
They appeared as broken vesicles surrounded with leaking content or smaller vesicles with disrupted irregular surfaces seemingly shrunken with no associated material. In contrast to this, in the region where released glycoproteins were separated (Figure 1, fractions 20–22), no such structures were visible, only irregular, possibly, protein deposits. To complement the results obtained for detergent-treated samples as such, changes in the patterns of total prostasomal glycoproteins were analyzed under denaturing and reducing conditions by electrophoresis (Figure 3) and lectin blot (Figure 4). Compared to the native vesicles (Figure 3a), in the TX-100-treated ones (Figure 3b), the proteins exhibiting a prostasome-like pattern remained clustered close to the initial position, that is, they were marginally shifted in elution. However, their abundance was noticeably lower. Specifically, discrete changes in terms of the abundance of major bands in the region below 66 kDa (encompass masses of TS and gal-3) were detected. In addition, clear loss of bands in the region corresponding to the masses of prostasomal signature bands (90–150 kDa) was also detected. Consequently, some of them were visible as shifted in a cluster of protein bands included in column volume (Figure 3b, fractions 20–22).\nTransmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of normozoospermic men (sPro-N) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 1). Bar 500 nm. F14: rare vesicles; F17: broken vesicles surrounded with leaking content; F19: vesicles with disrupted irregular surface with no associated material; F21: irregular deposits.\nProtein composition of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. 
Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a) and of TX-100-treated sPro-N (b) were resolved by electrophoresis and stained with silver. Although aggregation is present, as judged by intense bands at the border of the stacking and separating gel, the reference protein bands are preserved. No adjustment of glycoprotein content (equal concentration per lane) was made; the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the positions of molecular mass standards.

Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a, c) and of TX-100-treated sPro-N (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the positions of molecular mass standards. Arrows indicate the border of the stacking and separating gel.

In contrast to the proteins, prostasomal glycoproteins were considerably reorganized after TX-100 treatment. Regarding Con A-reactive glycoproteins, what the lectin-binding assay suggested was confirmed by the lectin blot (Figure 4). Thus, the pattern of Con A-reactive glycoproteins from native vesicles comprised a high molecular mass band at the border of the stacking and separating gel and a smeared band in the stacking gel, as well as four distinct lower molecular mass bands (Figure 4a, fractions 15–17). After TX-100 treatment, a striking loss of the high molecular mass components was observed. This can be related to different modes of redistribution of the distinct lower molecular mass Con A-reactive bands.
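The band regions invoked here and above (masses below 66 kDa covering TS and gal-3, and the 90–150 kDa prostasomal signature region) can be expressed as a simple lookup. The thresholds are taken from the text; the function itself is only an illustrative sketch, not part of the study's analysis:

```python
# Bucket an apparent molecular mass (kDa) from SDS-PAGE into the regions
# discussed in the text. Thresholds follow the text; the function name and
# labels are our own.

def band_region(mass_kda):
    if mass_kda < 66:
        return "TS/gal-3 region"
    if 90 <= mass_kda <= 150:
        return "prostasomal signature region"
    return "other"

print(band_region(97))  # the major Con A-reactive band falls in the signature region
```

Such a bucketing makes it explicit why a loss of 90–150 kDa bands and changes below 66 kDa are reported separately.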
Specifically, the major 97 kDa band remained partly close to the initial position (Figure 4b, fraction 17) but was also released, that is, redistributed (Figure 4b, fractions 19–22). In addition, an almost complete release of the bands with molecular masses below 66 kDa was observed (Figure 4a, b, fractions 15–17).

The pattern of WGA-reactive glycoproteins of native vesicles was comparable to that of the Con A-reactive ones in the high molecular mass region (Figure 4c). After TX-100 treatment, however, their patterns were strikingly different (Figure 4d), as initially observed in the corresponding lectin-binding assay (Figure 1b). Thus, the high molecular mass band was completely lost, and a shift of the weak 97 kDa band was also detected.

Taken together, the profiles related to the cluster of Con A- and WGA-binding glycoproteins might be influenced by their presence in different but overlapping complexes. Moreover, it is possible that diverse but overlapping bands with matching molecular mass and lectin binding were detected, as indicated by the disparate/selective presence and abundance of particular bands in the subsequently eluted fractions.

In parallel, prostasomes isolated from human seminal plasma of oligozoospermic men (sPro-O) were subjected to TX-100 treatment and analyzed in the same manner. Compared to the native sample, the treated ones exhibited patterns of Con A-reactive (Figure 5a) and WGA-reactive glycans (Figure 5b), as well as of GGT (Figure 5c), that indicated decrease/loss and/or redistribution, similar to the findings for sPro-N. Similarity with sPro-N was also noticed regarding the effect on the redistribution of the TS CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position, as observed for sPro-N, they exhibited only partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable.
Observation at the ultrastructural level suggested a more general disruption of sPro-O (Figure 6) than of sPro-N. In general, vesicular structures were of low abundance, and their morphology clearly differed from that of sPro-N. The integrity of the TX-100-treated vesicles was reflected in the patterns of total proteins (Figure 7) and glycoproteins (Figure 8), which indicated a more severe perturbation than for sPro-N (Figure 3a). In comparison with native sPro-O (Figure 7a), a significant loss of protein across the entire range of molecular masses was observed, with the remainder mostly in the region below 66 kDa (Figure 7b). In agreement with this, the Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, the lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100-treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more extensive aggregation into complexes which, in general, interfere with or prevent the detection of the contributing components.

Surface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from a Sephadex G-200 column (eluted at the void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in the eluted fractions caused a spilled appearance of the dots and background staining.
Barely detectable CD9- and gal-3-immunoreactivity is indicated by a circle. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L (units per liter); gal-3: galectin-3; F: fraction.

Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 5). Bar: 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits.

Protein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and of TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made; the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the positions of molecular mass standards.

Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and of TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the positions of molecular mass standards. Arrows indicate the border of the stacking and separating gel.
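The redistribution arguments above rest on reading per-fraction signals (lectin-binding A450, immunoreactivity, GGT activity) off the chromatogram: a marker counts as released when its signal moves from the void-volume peak into later, included fractions. A minimal sketch of that readout, with hypothetical absorbance values and helper names of our own (not from the study):

```python
# Locate the void-volume peak of an elution profile and estimate the share of
# signal that has redistributed into later (included-volume) fractions.
# The A450 values below are hypothetical, for illustration only.

def peak_fraction(profile):
    """Index of the fraction with the highest signal."""
    return max(range(len(profile)), key=profile.__getitem__)

def released_share(profile, void_peak):
    """Share of total signal eluting after the void-volume peak."""
    total = sum(profile)
    return sum(profile[void_peak + 1:]) / total if total else 0.0

native = [0.05, 0.10, 0.90, 0.40, 0.10, 0.05, 0.05]   # sharp peak at the void volume
treated = [0.05, 0.08, 0.30, 0.25, 0.20, 0.15, 0.10]  # broadened, trailing profile

void = peak_fraction(native)
print(void, round(released_share(treated, void), 2))
```

On profiles like those in Figures 1 and 5, a narrow peak with a low released share corresponds to detergent-resistant markers, while broad, trailing profiles such as those described for the WGA-reactive glycoproteins yield a high released share.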
Contents
Introduction
Material and methods: Materials; Human semen samples; Isolation of prostasomes from human seminal plasma; Gel filtration; SDS-PAGE; Western blot and dot blot; Transmission electron microscopy (TEM)
Results: Prostasomes from human seminal plasma of normozoospermic men: influence of TX-100 treatment; Prostasomes from human seminal plasma of oligozoospermic men: influence of TX-100 treatment
Discussion
Introduction

Tetraspanin-web and galectin-glycoprotein lattices represent distinct multi/macromolecular complexes assembled at the plasma membrane and are supposed to facilitate different biological activities/functions (1–3). Extracellular vesicles (EVs), membranous structures originating from plasma or intracellular membranes, are considered enriched in tetraspanins (TS), which are used as canonical markers (4). Regarding the presence of lectins, including galectins, although not widely studied in this context, there are data indicating that galectin-3 (gal-3) is involved in the biogenesis of EVs and can be used as a reliable marker (5). Both TS and gal-3 are thought to shape not only the surface of EVs but also the cargo composition (6).

Prostasomes, EVs originating from the prostate and abundantly present in human seminal plasma (SP), are reported to express the TS CD63, CD9, and CD81, as well as gal-3 (2, 7–9). In addition, gal-3 and mannosylated/sialylated glycans were found to contribute to the prostasomal surface in a specific way in terms of accessibility and native presentation, which could be altered in pathological conditions associated with male fertility (2). This study aimed to delineate the positions of the selected TS (CD63, CD9, and CD81), that is, to obtain data on their molecular associations and relate them to gal-3 and selected N-glycans, all known to reside on the surface of prostasomes. Although of possible general importance for the identification of distinct vesicle types and their functions in different heterogeneous extracellular landscapes, related data are still missing. It is presumed that the solubilization signature is an all-inclusive distinguishing factor regarding the surface properties of a particular vesicle, since it can reflect the status of the parent cell and the extracellular environment, both of which contribute to the composition of spatial membrane arrangements.
By establishing the solubilization signature, we aimed to provide new qualitative data suitable as a reference for the comparison of any type of vesicle.

The existence of distinct prostasomal surface molecular complexes was deduced from their detergent resistance (revealing TS-primary ligand associations, gal-3-glycoprotein associations, and insoluble membranes) or detergent sensitivity (revealing solubilized glycoprotein–glycolipid complexes). The related molecular patterns, established from the mode of response to disruption by a non-ionic detergent of high stringency, were used as a reference to annotate and/or compare prostasomal preparations from normozoospermic and oligozoospermic men.

In general, this approach is readily applicable and is not expected to be significantly affected by different isolation procedures. Insight into the distribution patterns of TS across different membrane domains can add new value to their common use (based on presence only) as EV markers. Moreover, since TS have distinct biological activities involved in cell adhesion, motility, and metastasis, as well as cell activation and signal transduction (10, 11), possible differences in their organization may have biomedical consequences. Thus, solubilization signatures of EVs might relate their structure to functional alterations in distinct pathological conditions.

Material and methods

Materials

Monoclonal anti-CD63 antibody (clone TS63) was from Abcam (Cambridge, UK), monoclonal anti-CD81 (clone M38) and monoclonal anti-CD9 (clone MEM-61) were from Invitrogen by Thermo Fisher Scientific (Carlsbad, CA, USA), and biotinylated goat anti-galectin-3 (gal-3) antibodies were from R&D Systems (Minneapolis, USA). 3,3′,5,5′-Tetramethylbenzidine (TMB), bovine serum albumin (BSA), and Triton X-100 (TX-100) were from Sigma (St. Louis, MO, USA).
Biotinylated goat anti-mouse IgG, the biotinylated plant lectins Con A (concanavalin A) and wheat germ agglutinin (WGA), and the Elite Vectastain ABC kit were from Vector Laboratories (Burlingame, CA, USA). Sephadex G-200 was from Pharmacia AB (Uppsala, Sweden). The silver stain kit and SDS-PAGE molecular mass standards (broad range) were from Bio-Rad (Hercules, CA, USA). Nitrocellulose membrane and Pierce ECL Western Blotting Substrate were from Thermo Scientific (Rockford, IL, USA). Microwell plates were from Thermo Scientific (Roskilde, Denmark).

Human semen samples

This study was performed on leftover, anonymized specimens of human semen taken for routine analysis; since existing human specimens were used, it is not considered research on human subjects. It was approved by the institutional ethics committee according to the guidelines (No. # 02-832/1), which conform to the Helsinki Declaration, 1975 (revised 2008).
Sperm parameters were assessed according to the recommended criteria of the World Health Organization (released in 2010), concerning sperm numbers, morphology, and motility.

Sperm cells and other debris were removed from the ejaculate by centrifugation at 1,000 × g for 20 min.

Isolation of prostasomes from human seminal plasma

Two pools of human SP from normozoospermic men and two pools of human SP from oligozoospermic men were used for the isolation of prostasomes. Each pool contained 10 individual SP samples. Prostasomes from normozoospermic men (sPro-N) and oligozoospermic men (sPro-O) were isolated from SP according to the modified protocol of Carlsson et al. (12). CD63-, CD9-, and CD81-immunoreactivities were used as indicators of the presence of EVs. These prostasomal preparations were subjected to detergent treatment by incubation with 1% TX-100 for 1 h at room temperature and then to gel filtration as the method of choice (13, 14).
We monitored the redistribution of selected markers during gel filtration as an indicator of release from vesicles, using combined analysis of intact fractions (solid-phase assay with immobilized fractions, and microscopy) and methods analyzing denatured fractions (electrophoresis and lectin blot).

Gel filtration

Gel filtration separation profiles of TX-100-treated prostasomes were obtained under conditions where TX-100 was present during elution to ensure maintenance of total solubilization (15). Thus, the detergent-treated seminal prostasome preparation (1 mL) was loaded on a Sephadex G-200 column (bed volume 35 mL) equilibrated and eluted with 0.03 M Tris-HCl, pH 7.6, containing 0.13 M NaCl and 1% TX-100. Fractions of 1 mL were collected. The elution was monitored as described previously (16). Briefly, gel filtration-separated fractions were coated on microwell plates at 4°C overnight. After washing steps (3 × 300 μL with 0.05 M phosphate-buffered saline, PBS), they were blocked with 50 μL of 1% BSA for 1.5 h and then washed again.
The biotinylated plant lectins Con A and WGA (50 μL, 0.5 mg/mL) were allowed to react for 30 min at room temperature and washed out, followed by the addition of 50 μL of avidin/biotin–HRPO complex (Elite Vectastain ABC Kit, prepared according to the manufacturer's instructions). After incubation for 30 min at room temperature, the plates were rinsed and developed using 50 μL of TMB substrate solution. The reaction was stopped with 50 μL of 2 N sulfuric acid. Absorbance was read at 450 nm using a Wallac 1420 Multilabel counter Victor3V (Perkin Elmer, Waltham, MA, USA). The elution profile of gamma-glutamyl transferase (GGT) was monitored by measuring enzyme activity using GGT colorimetric assay kits (Bioanalytica, Madrid, Spain), according to the manufacturer's instructions for the Biosystems A25 (Barcelona, Spain). The selected fractions were further analyzed by electrophoresis and blotting.

Native seminal prostasome preparations were analyzed in the same way, except that TX-100 was not added to the elution buffer.

SDS-PAGE

The corresponding samples were resolved on a 10% separating gel with a 4% stacking gel under denaturing and reducing conditions (17) and stained with silver nitrate using a silver stain kit (Bio-Rad), according to the manufacturer's instructions. The gel was calibrated with SDS-PAGE molecular weight standards (broad range).

Western blot and dot blot

Samples were transferred onto nitrocellulose membrane by semi-dry blotting using a Trans-blot SD (Bio-Rad Laboratories). The conditions were as follows: transfer buffer, 0.025 M Tris containing 0.192 M glycine and 20% methanol, pH 8.3, under a constant current of 1.2 mA/cm2 for 1 h.
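Since the transfer is specified as a current density, the current to set on the power supply scales with membrane area. A quick arithmetic check (the membrane dimensions below are hypothetical, purely to illustrate the calculation):

```python
# Total transfer current (mA) for semi-dry blotting at the stated constant
# current density of 1.2 mA/cm^2. The membrane size is hypothetical.

CURRENT_DENSITY_MA_PER_CM2 = 1.2

def transfer_current_ma(width_cm, height_cm):
    return CURRENT_DENSITY_MA_PER_CM2 * width_cm * height_cm

print(transfer_current_ma(8.0, 7.0))  # e.g. an 8 cm x 7 cm membrane
```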
The membrane was blocked with 3% BSA in 0.05 M PBS, pH 7.2, for 2 h at room temperature and then used for lectin blotting (1) as described below.

For dot blot, 3 μL of each corresponding fraction was applied to the nitrocellulose membrane, dried, blocked as described above, and subjected to immunoblotting (2).

Lectin blotting

Lectin blotting was performed as described earlier (18). The membrane was incubated with the chosen biotinylated plant lectin (0.2 μg/mL in 0.05 M PBS, pH 7.2) for 1 h at room temperature and then washed six times in 0.05 M PBS, pH 7.2. Avidin/biotinylated horseradish peroxidase (HRPO) from the Vectastain Elite ABC kit (prepared according to the manufacturer's instructions) was added and incubated for 30 min at room temperature. The membrane was then rinsed again six times in 0.05 M PBS, pH 7.2, and the proteins were visualized using Pierce ECL substrate solution (Thermo Scientific, Rockford, IL, USA), according to the manufacturer's instructions.

Immunoblotting

Immunodot blot was performed as previously established (18). For immunoblotting, the membrane was incubated with the corresponding antibodies: anti-CD63 antibody (0.5 μg/mL), anti-CD81 antibody (0.25 μg/mL), anti-CD9 antibody (0.5 μg/mL), and biotinylated anti-gal-3 antibodies (0.025 μg/mL), overnight at 4°C. After a washing step, bound antibody was detected by incubation with biotinylated goat anti-mouse IgG for 30 min at room temperature. The membrane was rinsed, the avidin/biotinylated HRPO mixture from the Elite Vectastain ABC kit was added, and incubation continued for 30 min at room temperature. After another washing step, the blots were visualized using Pierce ECL Western blotting substrate according to the manufacturer's instructions.

Transmission electron microscopy (TEM)

TEM was performed as described previously (19). Samples were applied to formvar-coated, 200 mesh Cu grids by grid flotation on 10 μL sample droplets for 45 min at room temperature. This was followed by fixation (2% paraformaldehyde, 10 min), washing (PBS, 3 × 2 min), post-fixation (2% glutaraldehyde, 5 min), and a final wash with distilled H2O (2 min). Grids were then air-dried, and the images were collected using a Philips CM12 electron microscope (Philips/FEI, Eindhoven, the Netherlands).

Results

Prostasomes from human seminal plasma of normozoospermic men: influence of TX-100 treatment

Distributions of distinct surface-associated markers of prostasomes from human SP of normozoospermic men (sPro-N) after solubilization with TX-100 are shown in Figure 1a–d. In general, their gel filtration elution positions differed in terms of a more or less noticeable shift from the void volume, where they co-eluted on native vesicles, revealing new patterns of associations.

Surface-associated glycoproteins and gamma-glutamyl transferase on the seminal prostasomes of normozoospermic men: influence of detergent treatment.
Prostasomes from human seminal plasma of normozoospermic men (sPro-N) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-N (eluted at the void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c), monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a diffuse appearance of the dots and background staining. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity, expressed in U/L (units per liter); gal-3: galectin 3; F: fraction.
In eluted fractions, total mannosylated and sialylated glycans, which can reside on both glycoproteins and glycolipids, were monitored by lectin-binding reactivity (Figure 1a, b), and GGT, as an individual protein marker, was monitored by enzymatic activity (Figure 1c). Thus, Con A-reactive glycoproteins were solubilized, as evidenced by a striking decrease at the initial position (Figure 1a). However, the related redistribution could be deduced rather than clearly demonstrated from the reactivity of fractions in the included column volume, possibly owing to the influence of their structure on immobilization. In contrast, WGA-reactive glycoproteins seemed to be partly solubilized and released as aggregates, judging by the corresponding elution profile, which revealed broad peaks entering the column and trailing along the entire chromatogram (Figure 1b). Moreover, they produced a small but distinct peak ahead of that of intact vesicles, suggesting the formation of larger protein complexes. As for GGT, it was also clearly solubilized from the vesicles by TX-100 and exhibited an elution profile distinct from those of the examined glycans (Figure 1c).
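The elution shifts described here can be expressed as a gel filtration partition coefficient, Kav = (Ve − V0)/(Vt − V0), where V0 is the void volume (the position of intact vesicles), Vt the total bed volume, and Ve the elution volume of a given marker. A minimal sketch with hypothetical column volumes (the text does not report column dimensions):

```python
# Partition coefficient for gel filtration (e.g., Sephadex G-200):
#   Kav = (Ve - V0) / (Vt - V0)
# Kav is 0 for species eluting at the void volume (intact vesicles)
# and approaches 1 for fully included small solutes.
# All volumes below are hypothetical illustrations.

def kav(ve_ml: float, v0_ml: float, vt_ml: float) -> float:
    """Partition coefficient of a peak eluting at ve_ml."""
    if not v0_ml < vt_ml:
        raise ValueError("void volume must be smaller than total bed volume")
    return (ve_ml - v0_ml) / (vt_ml - v0_ml)

V0, VT = 35.0, 100.0  # hypothetical void and total bed volumes (mL)

print(kav(35.0, V0, VT))   # 0.0 -> co-eluting with intact vesicles
print(kav(61.0, V0, VT))   # 0.4 -> shifted into the included volume
```

A marker released from the vesicle surface by detergent would move from Kav ≈ 0 toward larger Kav values, which is the kind of shift the elution profiles above describe qualitatively.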
The influence of TX-100 on the distribution of the TS CD63, CD9, and CD81, chosen as integral membrane proteins, and gal-3, chosen as a soluble but membrane-associated molecule, was monitored by immunodot blot, an adequate method (in contrast to western blot) for monitoring surface-associated changes (Figure 1d). TS and gal-3 co-localized on native vesicles at a position overlapping the detected Con A- and WGA-reactive glycoproteins and GGT (Figure 1c, data not shown). After TX-100 treatment, CD63 was clearly released and exhibited a broad distribution (Figure 1d, fractions 16–30) but also remained close to its initial position. In contrast, CD9 (Figure 1d, fractions 16–18) and gal-3 (Figure 1d, fractions 16–18) retained narrow distributions; that is, they were only slightly shifted during elution. Moreover, their patterns overlapped completely. As for CD81, it could not be detected after detergent treatment (Figure 1d). To further follow the influence of TX-100 on prostasomes, changes in their ultrastructure were analyzed (Figure 2). In the region where all examined markers remained more or less co-localized after TX-100 treatment, microscopic inspection revealed structures corresponding to reorganized detergent-resistant domains of vesicular membranes. They appeared as broken vesicles surrounded by leaking content or as smaller vesicles with disrupted, irregular surfaces, seemingly shrunken, with no associated material. In contrast, in the region where the released glycoproteins were separated (Figure 1, fractions 20–22), no such structures were visible, only irregular deposits, possibly of protein. To complement these results, changes in the patterns of total prostasomal glycoproteins were analyzed under denaturing and reducing conditions by electrophoresis (Figure 3) and lectin blot (Figure 4).
Compared to the native vesicles (Figure 3a), the proteins of the TX-100-treated ones (Figure 3b) exhibiting a prostasome-like pattern remained clustered close to the initial position; that is, they were only marginally shifted in elution. However, their abundance was noticeably lower. Specifically, discrete changes in the abundance of major bands in the region below 66 kDa (encompassing the masses of TS and gal-3) were detected. In addition, a clear loss of bands in the region corresponding to the masses of the prostasomal signature bands (90–150 kDa) was detected. Consequently, some of them became visible as a shifted cluster of protein bands in the included column volume (Figure 3b, fractions 20–22).
Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of normozoospermic men (sPro-N) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 1). Bar: 500 nm. F14: rare vesicles; F17: broken vesicles surrounded by leaking content; F19: vesicles with a disrupted, irregular surface and no associated material; F21: irregular deposits.
Protein composition of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a) and TX-100-treated sPro-N (b) were resolved by electrophoresis and stained with silver. Although aggregation is present, as judged by intense bands at the border of the stacking and separating gels, the reference protein bands are preserved. No adjustment of glycoprotein content (equal concentration per lane) was made; that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles.
The numbers (in kDa) indicate the positions of molecular mass standards.
Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a, c) and TX-100-treated sPro-N (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the positions of molecular mass standards. Arrows indicate the border of the stacking and separating gels.
In contrast to the proteins, the prostasomal glycoproteins were considerably reorganized after TX-100 treatment. Regarding Con A-reactive glycoproteins, what the lectin-binding assay suggested was confirmed by the lectin blot (Figure 4). Thus, the pattern of Con A-reactive glycoproteins from native vesicles comprised a high molecular mass band at the border of the stacking and separating gels and a smeared band in the stacking gel, as well as four distinct lower molecular mass bands (Figure 4a, fractions 15–17). After TX-100 treatment, a striking loss of the high molecular mass components was observed. It can be related to different modes of redistribution of the distinct lower molecular mass Con A-reactive bands. Specifically, the major 97 kDa band remained partly close to the initial position (Figure 4b, fraction 17) but was also released, that is, redistributed (Figure 4b, fractions 19–22). In addition, almost complete release of the bands with molecular masses below 66 kDa was observed (Figure 4a, b, fractions 15–17).
The pattern of WGA-reactive glycoproteins of native vesicles was comparable with that of the Con A-reactive ones in the high molecular mass region (Figure 4c).
However, after TX-100 treatment, their patterns were strikingly different (Figure 4d), as initially observed in the corresponding lectin-binding assay (Figure 1b). Thus, the high molecular mass band was completely lost, and a shift of the weak 97 kDa band was also detected.
Taken together, the profiles related to the cluster of Con A- and WGA-binding glycoproteins might be influenced by their presence in different but overlapping complexes. Moreover, it is possible that diverse but overlapping bands with matching molecular mass and lectin binding were detected, as indicated by the disparate/selective presence/abundance of particular bands in the subsequently eluted fractions.
Prostasomes from human seminal plasma of oligozoospermic men: influence of TX-100 treatment
In parallel, prostasomes isolated from human seminal plasma of oligozoospermic men (sPro-O) were subjected to TX-100 treatment and analyzed in the same manner. Compared to the native sample, the treated ones exhibited patterns of Con A-reactive (Figure 5a) and WGA-reactive glycans (Figure 5b), as well as of GGT (Figure 5c), that indicated decrease/loss and/or redistribution, similar to the findings for sPro-N. In addition, similarity with sPro-N was also noticed regarding the effect on the redistribution of the TS CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position, as observed for sPro-N, they exhibited only partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable. Observation at the ultrastructural level suggested a more general disruption of sPro-O (Figure 6) than of sPro-N. In general, vesicular structures were of low abundance, and their morphology clearly differed from that of sPro-N. The integrity of the TX-100-treated vesicles was reflected in the patterns of total proteins (Figure 7) and glycoproteins (Figure 8) and indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with native sPro-O (Figure 7a), a significant loss of protein across the entire range of molecular masses was observed; the remaining protein was mostly in the region below 66 kDa (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b).
Moreover, lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100-treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more intensive aggregation into complexes which, in general, interfere with or prevent the detection of the contributing components.
Surface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O (eluted at the void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c), monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a diffuse appearance of the dots and background staining. Barely detectable CD9 and gal-3 immunoreactivity is indicated by a circle. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity, expressed in U/L (units per liter); gal-3: galectin 3; F: fraction.
Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 5). Bar: 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits.
Protein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men.
Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made; that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the positions of molecular mass standards.
Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the positions of molecular mass standards. Arrows indicate the border of the stacking and separating gels.
In general, vesicular structures were low abundant and their morphology was clearly different from sPro-N. Integrity of the TX-100-treated vesicles reflected on patterns of total proteins (Figure 7) and glycoproteins (Figure 8) and indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with the native sPro-O (Figure 7a), a significant loss of protein in the entire range of molecular masses, which remained mostly in the region below 66 kDa, was observed (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100 treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more intensive aggregation to form complexes which, in general, interfere with or prevent the detection of contributing components.\nSurface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from Sephadex G-200 column (eluted at void volume) were shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused spilled appearance of dots and background staining. Barely detectable CD9- and gal-3-immunoreactivity is indicated by circle. 
A450, absorbance at 450 nm; GGT, gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3, galectin 3; F, fraction.\nTransmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 5). Bar 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits.\nProtein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to keep on elution profiles. The numbers (in kDa) indicate the position of molecular mass standards.\nDistribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel.", "Distributions of distinct surface-associated markers of prostasomes from human SP of normozoospermic men (sPro-N) after solubilization with TX-100 are shown in Figure 1a–d. 
In general, their gel filtration elution positions differed in terms of a more or less noticeable shift from the void volume where they were co-eluted on native vesicles, revealing new patterns of associations.\nSurface-associated glycoproteins and gamma-glutamyl transferase on the seminal prostasomes of normozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of normozoospermic men (sPro-N) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-N (eluted at void volume) were shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused spilled appearance of dots and background staining. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3: galectin 3; F: fraction.\nIn eluted fractions, total mannosylated and sialylated glycans, which can reside on both glycoprotein and glycolipids, were monitored by lectin-binding reactivity (Figure 1a, b), and the GGT as an individual protein marker was monitored by enzymatic activity (Figure 1c). Thus, Con A-reactive glycoproteins were solubilized as evidenced by a striking decrease at the initial position (Figure 1a). However, the related redistribution can be rather deduced than clearly shown according to the reactivity of fractions in the included column volume, possibly due to the influence of their structure on immobilization. 
In contrast to this, WGA-reactive glycoproteins seemed to be partly solubilized and released as aggregated, judging by the corresponding elution profile revealing broad peaks entering the column and trailing down along the entire chromatogram (Figure 1b). Moreover, they produced a small but distinct peak before that of intact vesicles, which suggests formation of larger protein complexes. As for GGT, it was also clearly solubilized from vesicles with TX-100 and exhibited an elution profile distinct from the examined glycans (Figure 1c). The influence of TX-100 on the distribution of TS: CD63, CD9, and CD81, chosen as integral membrane proteins, and gal-3, chosen as a soluble but membrane-associated molecule, was monitored by the immunodot blot as adequate method (in contrast to western blot) for monitoring surface-associated changes (Figure 1d). TS and gal-3 co-localized on native vesicles at a position overlapping the detected Con A- and WGA-reactive glycoproteins and GGT (Figure 1c, data not shown). After TX-100 treatment, CD63 was clearly released and exhibited broad distribution (Figure 1d, fractions 16–30), but also remained close to its initial position. In contrast to this, CD9 (Figure 1d, fractions 16–18) and gal-3 (Figure 1d, fractions 16–18) retained narrow distributions, that is, they were slightly shifted during elution. Moreover, their patterns overlapped completely. As for CD81, it could not be detected after detergent treatment (Figure 1d). To further follow up the influence of TX-100 on prostasomes, changes in their ultrastructure were analyzed (Figure 2). In the region where all examined markers remained more or less co-localized after TX-100 treatment, the microscopic inspection revealed the presence of structures that correspond to reorganized detergent-resistant domains of vesicular membranes. 
They appeared as broken vesicles surrounded with leaking content or smaller vesicles with disrupted irregular surfaces seemingly shrunken with no associated material. In contrast to this, in the region where released glycoproteins were separated (Figure 1, fractions 20–22), no such structures were visible, only irregular, possibly, protein deposits. To complement the results obtained for detergent-treated samples as such, changes in the patterns of total prostasomal glycoproteins were analyzed under denaturing and reducing conditions by electrophoresis (Figure 3) and lectin blot (Figure 4). Compared to the native vesicles (Figure 3a), in the TX-100-treated ones (Figure 3b), the proteins exhibiting a prostasome-like pattern remained clustered close to the initial position, that is, they were marginally shifted in elution. However, their abundance was noticeably lower. Specifically, discrete changes in terms of the abundance of major bands in the region below 66 kDa (encompass masses of TS and gal-3) were detected. In addition, clear loss of bands in the region corresponding to the masses of prostasomal signature bands (90–150 kDa) was also detected. Consequently, some of them were visible as shifted in a cluster of protein bands included in column volume (Figure 3b, fractions 20–22).\nTransmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of normozoospermic men (sPro-N) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 1). Bar 500 nm. F14: rare vesicles; F17: broken vesicles surrounded with leaking content; F19: vesicles with disrupted irregular surface with no associated material; F21: irregular deposits.\nProtein composition of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. 
Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a) and TX-100 treated sPro-N (b) were resolved by electrophoresis and stained with silver. Although aggregation is present as judged by intense bands at the border of the stacking and separating gel, the referent protein bands are preserved. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to keep on elution profiles. The numbers (in kDa) indicate the position of molecular mass standards.\nDistribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a, c) and TX-100 treated sPro-N (b, d) were subjected to lectin-blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel.\nIn contrast to proteins, prostasomal glycoproteins were considerably reorganized after TX-100 treatment. Regarding Con A-reactive glycoproteins, what the lectin-binding assay suggested was confirmed by the lectin blot (Figure 4). Thus, the pattern of Con A-reactive glycoproteins from native vesicles comprised high molecular mass band at the border of the stacking and separating gel and smear band in the stacking gel as well as four distinct lower molecular mass bands (Figure 4 a, fractions 15–17). After TX-100 treatment, a striking loss of high molecular mass components was observed. It can be related to different modes of redistribution of distinct lower molecular mass Con A-reactive bands. 
Specifically, the major 97 kDa band remained partly close to the initial position (Figure 4b, fraction 17) but was also released, that is, redistributed (Figure 4b, fractions 19–22). In addition, almost complete release of those with molecular masses below 66 kDa was observed (Figure 4a, b, fractions 15–17).\nThe pattern of WGA-reactive glycoproteins of native vesicles was comparable with that of the Con A-reactive ones in the high molecular mass region (Figure 4c). However, after TX-100 treatment, their patterns were strikingly different (Figure 4d), as initially observed in the corresponding lectin-binding assay (Figure 1b). Thus, the high molecular mass band was completely lost, and a shift of the weak 97 kDa band was also detected.\nTaken together, the profiles related to the cluster of Con A- and WGA-binding glycoproteins might be influenced by their presence in different but overlapping complexes. Moreover, it is possible that diverse but overlapping bands with matching molecular mass and lectin binding were detected, as indicated by the disparate/selective presence and abundance of particular bands in successively eluted fractions.", "In parallel, prostasomes isolated from human seminal plasma of oligozoospermic men (sPro-O) were subjected to TX-100 treatment and analyzed in the same manner. Compared to the native sample, the treated ones exhibited patterns of Con A- (Figure 5a) and WGA-reactive glycans (Figure 5b), as well as of GGT (Figure 5c), that indicated decrease/loss and/or redistribution, similar to the findings for sPro-N. In addition, similarity with sPro-N was also noticed regarding the effect on the redistribution of TS: CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position, as observed for sPro-N, they exhibited partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable. 
Observation at the ultrastructural level suggested more extensive disruption of sPro-O (Figure 6) than of sPro-N. In general, vesicular structures were of low abundance, and their morphology was clearly different from that of sPro-N. The integrity of the TX-100-treated vesicles was reflected in the patterns of total proteins (Figure 7) and glycoproteins (Figure 8) and indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with native sPro-O (Figure 7a), a significant loss of protein across the entire range of molecular masses was observed; the remaining protein was mostly in the region below 66 kDa (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100-treated sPro-O (Figure 8d). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more extensive aggregation into complexes which, in general, interfere with or prevent the detection of the contributing components.\nSurface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from a Sephadex G-200 column (eluted at void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of the dots and background staining. 
Barely detectable CD9- and gal-3-immunoreactivity is indicated by a circle. A450, absorbance at 450 nm; GGT, gamma-glutamyl transferase activity expressed in U/L, units per liter; gal-3, galectin-3; F, fraction.\nTransmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 5). Bar 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits.\nProtein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made; that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the position of molecular mass standards.\nDistribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. 
Arrows indicate the border of the stacking and separating gel.", "The results obtained revealed novel patterns of surface-associated prostasomal proteins and related them to the molecular disposition on detergent-sensitive/resistant membrane domains that exist under normal physiology and under conditions of low sperm count. In summary, distinct differences were found in the influence of detergent on the solubilization of each tetraspanin, as well as in their relation to the other examined surface-associated molecules. Accordingly, they were grouped into two patterns, mainly consisting of overlapping CD9/gal-3/WGA-reactive glycoproteins and CD63/Con A-reactive glycoproteins/GGT. When the effect of TX-100 on sPro-N is compared with that on sPro-O, an overall similarity can be seen regarding the redistribution of the examined surface-associated glycoproteins including GGT, all presumed to be mainly part of the vesicle coat. In contrast, a greater difference was found in the redistribution of true integral membrane proteins, exemplified by TS, as well as of gal-3, which could exist as membrane-associated through carbohydrate- or protein-binding interactions. More specifically, in sPro-O, the perturbation of CD9 and gal-3 was found to be related to engagement in different high molecular mass complexes and mutual segregation, rather than to co-localization in detergent-resistant membranous structures as could be deduced for sPro-N.\nThe existence of a TS-web on EVs, in general, has not been studied (4). In addition, there are few data on the composition of prostasomal surface glycans and gal-3 (2, 16). TS are a family of integral membrane proteins that may be involved in three levels of interaction with their molecular partners (11, 20, 21). As a result of these interactions, they are grouped into detergent-soluble tetraspanin-enriched membranes (1, 10, 22) that are clearly different from other types of higher-order molecular complexes (23). 
Some detergents used for the investigation of prostasomal proteins (24) may differently affect TS interactions with their molecular partners. Since these interactions were out of the scope of this study, and we actually wanted to disrupt TS–TS interactions, we chose TX-100 treatment as the most commonly used method for the extraction of the selected molecules (both TS and GGT) (15, 22). In addition to TS, the recruitment of different molecules into organized complexes may involve galectins (25, 26). Although soluble proteins, galectins are readily found membrane-associated through interactions achieved by carbohydrate binding (cross-linking N-glycans as ligands) or through other protein- or lipid-binding domains; gal-3 is a distinct member of this family of lectins in this respect (5, 23, 27).\nThus, TS are expected to be readily solubilized, and since they are medium-sized (~250 amino acids), this can cause a broad elution pattern due to the shift to higher molecular masses (depending on the composition of complexes with ligands). Indeed, the observed redistribution of CD63 was in agreement with this, suggesting abundant release in molecular complexes in response to detergent treatment of both sPro-N and sPro-O. However, part of the CD63-immunoreactivity remained at the initial position. In contrast to CD63, the elution of CD9 was only slightly shifted, suggesting that it remains in high molecular mass complexes in both sPro-N and sPro-O. However, the complexes in sPro-N and sPro-O seemed to differ, judging by the influence of detergent treatment on their structure. Thus, TEM suggested that CD9 in sPro-N remained in detergent-insoluble membranes/vesicular structures, whereas in sPro-O it was rather part of aggregated protein complexes (both eluted in the void volume). This is supported by the gal-3 distribution, which overlapped completely with CD9 in sPro-N but only partially in sPro-O. 
The possibility of glycan-mediated or protein–protein interaction of gal-3 with CD9 is supported by their detected co-localization, since neither type of interaction is expected to be influenced by TX-100. As for CD81, it could not be detected after TX-100 treatment in either sPro-N or sPro-O. This can be related to data indicating that different antibodies differentially recognize CD81 depending on whether it is associated with the TS-web or the web is disrupted using TX-100 (28). CD81 was previously reported to interact with GGT (29), which was also monitored. In relation to this, our results for the redistribution of prostasomal surface-associated GGT, monitored by measuring the enzymatic activity, clearly indicated its release from vesicles and appearance in molecular complexes (30). However, at this stage, it cannot be confirmed whether this GGT pattern is due to its complex with CD81. Differences in the solubilizing properties of TS might be related to the fact that CD9 is a glycosylated proteolipid, CD63 is a glycosylated protein, and CD81 is a non-glycosylated protein (1). Thus, the TS structure itself and the specificities of the prostasomal surface microenvironment (in terms of distribution and presentation of glycans) may also influence the results obtained for detergent sensitivity. It has been reported that the examined TS and gal-3 are found co-isolated with prostasomal lipid rafts (9). It is known that lipid rafts have an unusual lipid composition rich in cholesterol, which renders them insoluble upon detergent treatment (9, 31), and that they may associate with TS by lateral crosstalk between membrane domains. 
In relation to this, the existence of several prostasomal gal-3 isoelectric variants, including a truncated form (carbohydrate recognition domain only) (7), which could reside in different membrane microenvironments and consequently organize related but distinct molecular complexes, may be responsible for the specific redistribution profiles of glycoproteins/TS.\nDetergent-soluble glycoproteins, molecules which can, on the one hand, penetrate the membrane core and anchor hydrophobically and, on the other hand, constitute a specific coat (32, 33), could also influence the stability and accessibility of the domain that may be intercalated with detergent. In this study, in intact sPro-N and sPro-O, WGA and Con A revealed a cluster of distinct, partially overlapping glycoproteins. They were almost completely released from vesicles upon TX-100 treatment. However, the detergent-induced changes were distinct, influencing their detection depending on the experimental conditions used, especially for the WGA-reactive ones. Significant shielding in auto-aggregates/heterologous complexes, which can interfere with lectin binding, or a mixed release of glycolipids/lipoproteins, which escape detection by the methods used, is consistent with the observed behavior, much more pronounced in sPro-O. It is interesting that the majority of Con A-reactive glycoproteins were revealed in the region overlapping the prostasome signature bands in both intact sPro-N and sPro-O. These glycoproteins comprise the integral membrane protein CD13 (150 kDa), transmembrane and soluble CD26 (82–110 kDa), and soluble CD10 (94 kDa) (34). Regarding TS, all this indicated mixed patterns of different Con A-reactive glycoproteins and their preferential association with CD63, which exhibited an overlapping solubilization pattern.\nAnalysis of membrane proteins is very difficult, since they usually exhibit anomalous behavior in many standardly used protein techniques (35–37). 
The possibility that some proteins could aggregate, despite the detergent concentration being above the critical micelle concentration, owing to their abundance or inherent structure, and that resolved peaks could in fact be sets of peaks from protein, protein complexes, lipid, and detergent, must be taken into consideration (38). Thus, the obtained molecular patterns themselves are descriptive and provide qualitative data. As a rule, they comprise numerous, differently abundant bands. Some of them may be variably present, while others are constitutively present. Similar to the total prostasomal protein pattern exemplified by the three prostasome signature bands (34), the solubilization signature provided data in terms of annotating the main glycoproteins/TS to distinct detergent-sensitive or -insensitive prostasomal patterns. Consequently, they can be used reliably for the comparison of prostasomes, or of any EVs (after establishing their own solubilization signature), under different physiological conditions. In terms of the presumed role of EVs as a communication tool (39, 40), these initial data could be a basis for addressing the role of scaffolding in enabling the membrane functionality of EVs. In addition, they may extend investigations of the basic issues of membrane complexity (41–43), usually deduced from the cell-surface plasma membrane, to the field of the EV membrane." ]
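The detergent-concentration point above can be made concrete with simple arithmetic: 1% (w/v) TX-100 corresponds to roughly 15 mM, far above the micellar threshold, so solubilization takes place well within the micellar regime. A minimal back-of-envelope sketch; the average molar mass of ~647 g/mol (TX-100 is polydisperse) and the ~0.2–0.9 mM CMC range are common literature approximations, not values reported in this study:

```python
# Check how far 1% (w/v) Triton X-100 sits above its critical micelle
# concentration (CMC). Both constants below are literature approximations
# (assumptions), since TX-100 is a polydisperse surfactant.

TX100_MOLAR_MASS_G_PER_MOL = 647.0   # average molar mass; assumption
CMC_RANGE_MM = (0.2, 0.9)            # commonly cited CMC range in mM; assumption

percent_w_v = 1.0                     # 1% (w/v) = 10 g/L
conc_mm = percent_w_v * 10.0 / TX100_MOLAR_MASS_G_PER_MOL * 1000.0  # g/L -> mM

print(f"1% (w/v) TX-100 is about {conc_mm:.1f} mM")
print(f"that is roughly {conc_mm / CMC_RANGE_MM[1]:.0f}-{conc_mm / CMC_RANGE_MM[0]:.0f}x above the CMC range")
```

The order-of-magnitude excess over the CMC is what makes residual aggregation attributable to protein abundance or inherent structure rather than to insufficient detergent.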
[ "intro", "material|methods", null, null, null, null, null, null, null, "results", null, null, "discussion" ]
[ "Extracellular vesicles", "detergent sensitivity", "CD63", "CD9", "gamma-glutamyl transferase", "molecular patterns", "normozoospermia", "oligozoospermia" ]
Introduction: Tetraspanin-web and galectin-glycoprotein lattices represent distinct multi/macromolecular complexes assembled at the plasma membrane and are supposed to facilitate different biological activities/functions (1–3). Extracellular vesicles (EVs), membranous structures originating from the plasma membrane or intracellular membranes, are considered enriched in tetraspanins (TS), which are used as canonical markers (4). Regarding the presence of lectins, including galectins, although not widely studied in this context, there are data indicating that galectin-3 (gal-3) is involved in the biogenesis of EVs and can be used as a reliable marker (5). Both TS and gal-3 are thought to shape not only the surface of EVs but also the cargo composition (6). Prostasomes, EVs originating from the prostate and abundantly present in human seminal plasma (SP), are reported to express TS: CD63, CD9, and CD81, as well as gal-3 (2, 7–9). In addition, gal-3 and mannosylated/sialylated glycans were found to contribute to the prostasomal surface in a specific way in terms of accessibility and native presentation, which could be altered in pathological conditions associated with male fertility (2). This study aimed to delineate the positions of selected TS: CD63, CD9, and CD81, that is, to obtain data on their molecular associations and relate them to gal-3 and selected N-glycans, all known to reside on the surface of prostasomes. Although of possible general importance for the identification of distinct vesicle types and their functions in different heterogeneous extracellular landscapes, related data are still missing. Thus, it is presumed that the solubilization signature is an all-inclusive distinguishing factor regarding the surface properties of a particular vesicle, since it can reflect the status of the parent cell and the extracellular environment, both of which contribute to the composition of spatial membrane arrangements. 
By establishing the solubilization signature, we aimed to provide new qualitative data suitable as a reference for the comparison of any type of vesicle. The existence of distinct prostasomal surface molecular complexes was deduced from their detergent resistance (revealing TS-primary ligand association, gal-3-glycoprotein association, and insoluble membranes) or detergent sensitivity (revealing solubilized glycoprotein–glycolipid complexes). Related molecular patterns, established from the mode of response to disruption by a non-ionic detergent of high stringency, were used as a reference to annotate and/or compare prostasomal preparations from normozoospermic and oligozoospermic men. In general, this approach is readily applicable and is not expected to be significantly affected by different isolation procedures. Insight into the distribution patterns of TS on different membrane domains can add new value to their common use (based on presence only) as EV markers. Moreover, since TS have distinct biological activities involved in cell adhesion, motility, and metastasis, as well as cell activation and signal transduction (10, 11), possible differences in their organization may have biomedical consequences. Thus, solubilization signatures of EVs might relate their structure to functional alterations in distinct pathological conditions. Material and methods: Materials Monoclonal anti-CD63 antibody (clone TS63) was from Abcam (Cambridge, UK), monoclonal anti-CD81 (clone M38) and monoclonal anti-CD9 (clone MEM-61) were from Invitrogen by Thermo Fisher Scientific (Carlsbad, CA, USA), and biotinylated goat anti-galectin-3 (gal-3) antibodies were from R&D Systems (Minneapolis, USA). 3,3′,5,5′-tetramethylbenzidine (TMB), bovine serum albumin (BSA), and Triton X-100 (TX-100) were from Sigma (St. Louis, MO, USA). 
Biotinylated goat anti-mouse IgG, biotinylated plant lectins: Con A (Concanavalin A), wheat germ agglutinin (WGA), and the Elite Vectastain ABC kit were from Vector Laboratories (Burlingame, CA, USA). Sephadex G-200 was from Pharmacia AB (Uppsala, Sweden). The silver stain kit and SDS-PAGE molecular mass standards (broad range) were from Bio-Rad (Hercules, CA, USA). Nitrocellulose membrane and Pierce ECL Western Blotting Substrate were from Thermo Scientific (Rockford, IL, USA). Microwell plates were from Thermo Scientific (Roskilde, Denmark). Human semen samples This study was performed on the leftover, anonymized specimens of human semen taken for routine analysis, and since existing human specimens were used, it is not considered as research on human subjects. It was approved by the institutional ethics committee according to the guidelines (No. # 02-832/1), which conforms to the Helsinki Declaration, 1975 (revised 2008). 
Sperm parameters were assessed according to the recommended criteria of the World Health Organization (released in 2010), concerning numbers, morphology, and motility. Sperm cells and other debris were removed from the ejaculate by centrifugation at 1,000 × g for 20 min. Isolation of prostasomes from human seminal plasma Two pools of human SP of normozoospermic men and two pools of human SP of oligozoospermic men were used for the isolation of prostasomes. Each pool contained 10 individual SP samples. Prostasomes from normozoospermic men (sPro-N) and oligozoospermic men (sPro-O) were isolated from SP according to the modified protocol of Carlsson et al. (12). CD63-, CD9-, and CD81-immunoreactivities were used as the indicator of EVs’ presence. These prostasomal preparations were subjected to detergent treatment by incubation with 1% TX-100 for 1 h at room temperature and then subjected to gel filtration as the method of choice (13, 14). We monitored the redistribution of selected markers during gel filtration as an indicator of release from vesicles using the combined analysis of intact fractions (solid-phase assay with immobilized fractions and microscopy) and methods analyzing denatured fractions (electrophoresis and lectin blot). 
Gel filtration Gel filtration separation profiles of TX-100-treated prostasomes were obtained under conditions where TX-100 was present during elution to ensure maintenance of total solubilization (15). Thus, the detergent-treated seminal prostasome preparation (1 mL) was loaded on a Sephadex G-200 column (bed volume 35 mL) equilibrated and eluted with 0.03 M Tris-HCl, pH 7.6, containing 0.13 M NaCl and 1% TX-100. Fractions of 1 mL were collected. The elution was monitored as described previously (16). Briefly, gel filtration-separated fractions were coated on microwell plates at 4°C overnight. After washing steps (3 × 300 μL with 0.05 M phosphate-buffered saline, PBS), they were blocked with 50 μL 1% BSA for 1.5 h and then washed again. Biotinylated plant lectins: Con A and WGA (50 μL, 0.5 mg/mL) were allowed to react for 30 min at room temperature, washed out, and followed by the addition of 50 μL of avidin/biotin–HRPO complex (Elite, Vectastain ABC Kit, prepared according to the manufacturer’s instructions). 
After incubation for 30 min at room temperature, the plates were rinsed and developed using 50 μL TMB substrate solution. The reaction was stopped with 50 μL 2 N sulfuric acid. Absorbance was read at 450 nm using a Wallac 1420 Multilabel counter Victor3V (Perkin Elmer, Waltham, MA, USA). The elution profile of gamma-glutamyl transferase (GGT) was monitored by measuring enzyme activity using GGT colorimetric assay kits (Bioanalytica, Madrid, Spain), according to the manufacturer’s instructions for Biosystems A25 (Barcelona, Spain). The selected fractions were further analyzed by electrophoresis and blotting. Native seminal prostasome preparations were analyzed in the same way except that TX-100 was not added to the elution buffer. 
SDS-PAGE Corresponding samples were resolved on 10% separating gel with 4% stacking gel under denaturing and reducing conditions (17) and stained with silver nitrate, using a silver stain kit (Bio-Rad) according to the manufacturer’s instructions. The gel was calibrated with SDS-PAGE molecular weight standards (broad range). Western blot and dot blot Samples were transferred onto nitrocellulose membrane by semi-dry blotting using a Trans-blot SD (Bio-Rad Laboratories). The conditions were as follows: transfer buffer, 0.025 M Tris containing 0.192 M glycine and 20% methanol, pH 8.3, under a constant current of 1.2 mA/cm2 for 1 h. The membrane was blocked with 3% BSA in 0.05 M PBS, pH 7.2, for 2 h at room temperature, and then used for lectin blotting (1) as described below. For dot blot, 3 μL of each corresponding fraction was applied to the nitrocellulose membrane, dried, blocked as described above, and subjected to immunoblotting (2). Lectin blotting Lectin blotting was performed as described earlier (18). 
The membrane was incubated with the chosen biotinylated plant lectin (0.2 μg/mL in 0.05 M PBS, pH 7.2) for 1 h at room temperature and then washed six times in 0.05 M PBS, pH 7.2. Avidin/biotinylated horseradish peroxidase (HRPO) from the Vectastain Elite ABC kit (prepared according to the manufacturer’s instructions) was added and incubated for 30 min at room temperature. The membrane was then rinsed again six times in 0.05 M PBS, pH 7.2, and the proteins were visualized using Pierce ECL substrate solution (Thermo Scientific, Rockford, IL, USA), according to the manufacturer’s instructions. Immunoblotting Immunodot blot was performed as previously established (18). For immunoblotting, the membrane was incubated with the corresponding antibodies: anti-CD63 antibody (0.5 μg/mL), anti-CD81 antibody (0.25 μg/mL), anti-CD9 antibody (0.5 μg/mL), and biotinylated anti-gal-3 antibodies (0.025 μg/mL), overnight at 4°C. After a washing step, bound antibody was detected by incubation with biotinylated goat anti-mouse IgG for 30 min at room temperature. The membrane was rinsed and the avidin/biotinylated HRPO mixture from the Elite Vectastain ABC kit was added, followed by incubation for 30 min at room temperature. After another washing step, the blots were visualized using Pierce ECL Western blotting substrate according to the manufacturer’s instructions. 
Transmission electron microscopy (TEM) TEM was performed as described previously (19). Samples were applied to the formvar-coated, 200 mesh, Cu grids by grid flotation on 10 μL sample droplets, for 45 min at room temperature. This was followed by steps of fixation (2% paraformaldehyde, 10 min), washing (PBS, 3 × 2 min), post-fixing (2% glutaraldehyde, 5 min), and a final wash with distilled H2O (2 min). Grids were then air-dried, and the images were collected using a Philips CM12 electron microscope (Philips/FEI, Eindhoven, the Netherlands). 
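The gel filtration readout used throughout the methods can be framed quantitatively via the partition coefficient Kav = (Ve − V0)/(Vt − V0), which places each eluted fraction between the void volume (intact vesicles, Kav ≈ 0) and the total accessible volume (fully included solutes, Kav ≈ 1). A minimal sketch; the void and total elution positions below are illustrative assumptions chosen to mimic the vesicle (F14) and column-volume (F20–22) regions mentioned in the results, not reported calibration values:

```python
# Locate gel filtration fractions between the void volume (intact vesicles)
# and the total volume (fully released, included molecules) via Kav.
# The volumes below are illustrative assumptions, not values from the study.

def kav(v_elute_ml: float, v_void_ml: float, v_total_ml: float) -> float:
    """Partition coefficient Kav = (Ve - V0) / (Vt - V0); 0 = void, 1 = total."""
    return (v_elute_ml - v_void_ml) / (v_total_ml - v_void_ml)

# Assumed geometry: 1 mL fractions, void at ~fraction 14, total at ~fraction 22,
# loosely matching where vesicles and released glycoproteins elute in the text.
V0_ML, VT_ML = 14.0, 22.0

for fraction in (14, 17, 19, 21):
    print(f"F{fraction}: Kav = {kav(float(fraction), V0_ML, VT_ML):.2f}")
```

Fractions with Kav near 0 carry detergent-resistant vesicular structures, while Kav near 1 marks solubilized, released components, which is the distinction the elution profiles exploit.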
Materials: Monoclonal anti-CD63 antibody (clone TS63) was from Abcam (Cambridge, UK), monoclonal anti-CD81 (clone M38) and monoclonal anti-CD9 (clone MEM-61) were from Invitrogen by Thermo Fisher Scientific (Carlsbad, CA, USA), and biotinylated goat anti-galectin-3 (gal-3) antibodies were from R&D Systems (Minneapolis, MN, USA). 3,3′,5,5′-Tetramethylbenzidine (TMB), bovine serum albumin (BSA), and Triton X-100 (TX-100) were from Sigma (St. Louis, MO, USA). Biotinylated goat anti-mouse IgG, the biotinylated plant lectins concanavalin A (Con A) and wheat germ agglutinin (WGA), and the Elite Vectastain ABC kit were from Vector Laboratories (Burlingame, CA, USA). Sephadex G-200 was from Pharmacia AB (Uppsala, Sweden). The silver stain kit and SDS-PAGE molecular mass standards (broad range) were from Bio-Rad (Hercules, CA, USA). Nitrocellulose membrane and Pierce ECL Western Blotting Substrate were from Thermo Scientific (Rockford, IL, USA). Microwell plates were from Thermo Scientific (Roskilde, Denmark). Human semen samples: This study was performed on leftover, anonymized specimens of human semen taken for routine analysis; because only existing human specimens were used, it is not considered research on human subjects. The study was approved by the institutional ethics committee according to the guidelines (No. # 02-832/1), which conform to the Helsinki Declaration of 1975 (revised 2008).
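The semen samples above were subsequently grouped as normozoospermic or oligozoospermic. A minimal sketch of such a grouping by sperm concentration alone; the 15 × 10^6 spermatozoa/mL threshold is an assumption taken from the WHO 2010 (5th edition) lower reference limits, since the text cites the WHO criteria but not the numeric cut-off:

```python
# Minimal sketch: grouping semen samples by sperm concentration.
# ASSUMPTION: the 15 x 10^6 spermatozoa/mL lower reference limit is taken
# from the WHO 2010 (5th edition) manual; the text cites the WHO criteria
# but does not state the numeric cut-off. Morphology and motility, also
# assessed in the study, are ignored here.

WHO_2010_CONC_LOWER_LIMIT = 15e6  # spermatozoa per mL (assumed)

def classify_by_concentration(conc_per_ml):
    """Label a sample by sperm concentration alone."""
    if conc_per_ml >= WHO_2010_CONC_LOWER_LIMIT:
        return "normozoospermic"
    return "oligozoospermic"
```

In the actual study, classification also takes morphology and motility into account; this sketch covers only the concentration criterion that separates the two pools.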
Sperm parameters were assessed according to the criteria recommended by the World Health Organization (2010), concerning sperm number, morphology, and motility. Sperm cells and other debris were removed from the ejaculate by centrifugation at 1,000 × g for 20 min. Isolation of prostasomes from human seminal plasma: Two pools of human SP from normozoospermic men and two pools of human SP from oligozoospermic men were used for the isolation of prostasomes. Each pool contained 10 individual SP samples. Prostasomes from normozoospermic men (sPro-N) and oligozoospermic men (sPro-O) were isolated from SP according to the modified protocol of Carlsson et al. (12). CD63-, CD9-, and CD81-immunoreactivities were used as indicators of the presence of EVs. These prostasomal preparations were subjected to detergent treatment by incubation with 1% TX-100 for 1 h at room temperature and then to gel filtration as the method of choice (13, 14). We monitored the redistribution of selected markers during gel filtration as an indicator of release from vesicles, using combined analysis of intact fractions (solid-phase assay with immobilized fractions and microscopy) and of denatured fractions (electrophoresis and lectin blot). Gel filtration: Gel filtration separation profiles of TX-100-treated prostasomes were obtained under conditions where TX-100 was present during elution to maintain total solubilization (15). Thus, the detergent-treated seminal prostasome preparation (1 mL) was loaded on a Sephadex G-200 column (bed volume 35 mL) equilibrated and eluted with 0.03 M Tris-HCl, pH 7.6, containing 0.13 M NaCl and 1% TX-100. Fractions of 1 mL were collected. The elution was monitored as described previously (16). Briefly, gel filtration-separated fractions were coated on microwell plates at 4°C overnight.
After washing steps (3 × 300 μL with 0.05 M phosphate-buffered saline, PBS), they were blocked with 50 μL 1% BSA for 1.5 h and then washed again. The biotinylated plant lectins Con A and WGA (50 μL, 0.5 mg/mL) were allowed to react for 30 min at room temperature and washed out, followed by the addition of 50 μL of avidin/biotin–HRPO complex (Elite Vectastain ABC Kit, prepared according to the manufacturer’s instructions). After incubation for 30 min at room temperature, the plates were rinsed and developed using 50 μL TMB substrate solution. The reaction was stopped with 50 μL 2 N sulfuric acid. Absorbance was read at 450 nm using a Wallac 1420 Multilabel counter Victor3V (Perkin Elmer, Waltham, MA, USA). The elution profile of gamma-glutamyl transferase (GGT) was monitored by measuring enzyme activity using GGT colorimetric assay kits (Bioanalytica, Madrid, Spain), according to the manufacturer’s instructions for the Biosystems A25 analyzer (Barcelona, Spain). The selected fractions were further analyzed by electrophoresis and blotting. Native seminal prostasome preparations were analyzed in the same way, except that TX-100 was not added to the elution buffer. SDS-PAGE: Corresponding samples were resolved on a 10% separating gel with a 4% stacking gel under denaturing and reducing conditions (17) and stained with silver nitrate using a silver stain kit (Bio-Rad), according to the manufacturer’s instructions. The gel was calibrated with SDS-PAGE molecular mass standards (broad range). Western blot and dot blot: Samples were transferred onto nitrocellulose membrane by semi-dry blotting using a Trans-blot SD (Bio-Rad Laboratories). The conditions were as follows: transfer buffer, 0.025 M Tris containing 0.192 M glycine and 20% methanol, pH 8.3, under a constant current of 1.2 mA/cm2 for 1 h. The membrane was blocked with 3% BSA in 0.05 M PBS, pH 7.2, for 2 h at room temperature, and then used for lectin blotting (1) as described below.
For dot blot, 3 μL of each corresponding fraction was applied to the nitrocellulose membrane, dried, blocked as described above, and subjected to immunoblotting (2). Lectin blotting: Lectin blotting was performed as described earlier (18). The membrane was incubated with the chosen biotinylated plant lectin (0.2 μg/mL in 0.05 M PBS, pH 7.2) for 1 h at room temperature and then washed six times in 0.05 M PBS, pH 7.2. Avidin/biotinylated horseradish peroxidase (HRPO) from the Vectastain Elite ABC kit (prepared according to the manufacturer’s instructions) was added and incubated for 30 min at room temperature. The membrane was then rinsed again six times in 0.05 M PBS, pH 7.2, and the proteins were visualized using Pierce ECL substrate solution (Thermo Scientific, Rockford, IL, USA), according to the manufacturer’s instructions. Immunoblotting: Immunodot blot was performed as previously established (18). For immunoblotting, the membrane was incubated with the corresponding antibodies (anti-CD63 antibody, 0.5 μg/mL; anti-CD81 antibody, 0.25 μg/mL; anti-CD9 antibody, 0.5 μg/mL; biotinylated anti-gal-3 antibodies, 0.025 μg/mL) overnight at 4°C. After a washing step, bound antibody was detected by incubation with biotinylated goat anti-mouse IgG for 30 min at room temperature. The membrane was rinsed, the avidin/biotinylated HRPO mixture from the Elite Vectastain ABC kit was added, and the membrane was incubated for 30 min at room temperature. After another washing step, the blots were visualized using Pierce ECL Western blotting substrate according to the manufacturer’s instructions. Transmission electron microscopy (TEM): TEM was performed as described previously (19). Samples were applied to formvar-coated, 200 mesh Cu grids by grid flotation on 10 μL sample droplets for 45 min at room temperature.
This was followed by fixation (2% paraformaldehyde, 10 min), washing (PBS, 3 × 2 min), post-fixation (2% glutaraldehyde, 5 min), and a final wash with distilled H2O (2 min). Grids were then air-dried, and images were collected using a Philips CM12 electron microscope (Philips/FEI, Eindhoven, the Netherlands). Results: Prostasomes from human seminal plasma of normozoospermic men: influence of TX-100 treatment: Distributions of distinct surface-associated markers of prostasomes from human SP of normozoospermic men (sPro-N) after solubilization with TX-100 are shown in Figure 1a–d. In general, their gel filtration elution positions differed in terms of a more or less noticeable shift from the void volume, where they co-eluted on native vesicles, revealing new patterns of associations. Surface-associated glycoproteins and gamma-glutamyl transferase on the seminal prostasomes of normozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of normozoospermic men (sPro-N) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-N (eluted at the void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of dots and background staining. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L, units per liter; gal-3: galectin 3; F: fraction.
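The shift of markers away from the void-volume position described above can be expressed numerically from fraction-wise readings. A minimal sketch, assuming hypothetical A450 values per fraction (the numbers below are illustrative, not read from Figure 1):

```python
# Minimal sketch: quantifying the shift of a marker's elution peak away
# from the void-volume fraction. The A450 values and fraction numbers
# below are hypothetical, not data from the study.

def peak_fraction(profile):
    """Fraction number with the highest A450 signal."""
    return max(profile, key=profile.get)

def shift_from_void(profile, void_fraction):
    """Peak shift (in fractions) relative to the void-volume position."""
    return peak_fraction(profile) - void_fraction

# fraction number -> A450 (hypothetical)
native = {14: 0.90, 16: 0.40, 18: 0.10, 20: 0.05}   # peak at the void volume
treated = {14: 0.20, 16: 0.50, 18: 0.80, 20: 0.30}  # peak in the included volume
```

For the hypothetical native profile the shift is 0; for the treated one it is 4 fractions, mirroring the qualitative description of markers redistributing into the included column volume after TX-100 treatment.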
In eluted fractions, total mannosylated and sialylated glycans, which can reside on both glycoproteins and glycolipids, were monitored by lectin-binding reactivity (Figure 1a, b), and GGT, as an individual protein marker, was monitored by enzymatic activity (Figure 1c). Thus, Con A-reactive glycoproteins were solubilized, as evidenced by a striking decrease at the initial position (Figure 1a). However, the related redistribution can be deduced rather than clearly shown from the reactivity of fractions in the included column volume, possibly due to the influence of their structure on immobilization. In contrast to this, WGA-reactive glycoproteins seemed to be partly solubilized and released in aggregated form, judging by the corresponding elution profile revealing broad peaks entering the column and trailing down along the entire chromatogram (Figure 1b). Moreover, they produced a small but distinct peak before that of intact vesicles, which suggests the formation of larger protein complexes. As for GGT, it was also clearly solubilized from vesicles with TX-100 and exhibited an elution profile distinct from the examined glycans (Figure 1c). The influence of TX-100 on the distribution of the TS CD63, CD9, and CD81, chosen as integral membrane proteins, and gal-3, chosen as a soluble but membrane-associated molecule, was monitored by immunodot blot, an adequate method (in contrast to western blot) for monitoring surface-associated changes (Figure 1d). TS and gal-3 co-localized on native vesicles at a position overlapping the detected Con A- and WGA-reactive glycoproteins and GGT (Figure 1c, data not shown). After TX-100 treatment, CD63 was clearly released and exhibited a broad distribution (Figure 1d, fractions 16–30), but also remained close to its initial position. In contrast to this, CD9 (Figure 1d, fractions 16–18) and gal-3 (Figure 1d, fractions 16–18) retained narrow distributions, that is, they were only slightly shifted during elution.
Moreover, their patterns overlapped completely. As for CD81, it could not be detected after detergent treatment (Figure 1d). To further follow up the influence of TX-100 on prostasomes, changes in their ultrastructure were analyzed (Figure 2). In the region where all examined markers remained more or less co-localized after TX-100 treatment, microscopic inspection revealed structures that correspond to reorganized detergent-resistant domains of vesicular membranes. They appeared as broken vesicles surrounded by leaking content, or as smaller vesicles with disrupted irregular surfaces, seemingly shrunken, with no associated material. In contrast to this, in the region where released glycoproteins were separated (Figure 1, fractions 20–22), no such structures were visible, only irregular deposits, possibly of protein. To complement the results obtained for detergent-treated samples as such, changes in the patterns of total prostasomal glycoproteins were analyzed under denaturing and reducing conditions by electrophoresis (Figure 3) and lectin blot (Figure 4). Compared to the native vesicles (Figure 3a), in the TX-100-treated ones (Figure 3b) the proteins exhibiting a prostasome-like pattern remained clustered close to the initial position, that is, they were only marginally shifted in elution. However, their abundance was noticeably lower. Specifically, discrete changes in the abundance of major bands in the region below 66 kDa (encompassing the masses of TS and gal-3) were detected. In addition, a clear loss of bands in the region corresponding to the masses of prostasomal signature bands (90–150 kDa) was also detected. Consequently, some of them were visible as shifted into a cluster of protein bands in the included column volume (Figure 3b, fractions 20–22). Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of normozoospermic men.
Ultrastructural appearance of prostasomes from human seminal plasma of normozoospermic men (sPro-N) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 1). Bar 500 nm. F14: rare vesicles; F17: broken vesicles surrounded by leaking content; F19: vesicles with disrupted irregular surfaces and no associated material; F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a) and TX-100-treated sPro-N (b) were resolved by electrophoresis and stained with silver. Although aggregation is present, as judged by intense bands at the border of the stacking and separating gel, the reference protein bands are preserved. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the positions of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a, c) and TX-100-treated sPro-N (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the positions of molecular mass standards. Arrows indicate the border of the stacking and separating gel. In contrast to the proteins, prostasomal glycoproteins were considerably reorganized after TX-100 treatment. Regarding Con A-reactive glycoproteins, what the lectin-binding assay suggested was confirmed by the lectin blot (Figure 4).
Thus, the pattern of Con A-reactive glycoproteins from native vesicles comprised a high molecular mass band at the border of the stacking and separating gel and a smear in the stacking gel, as well as four distinct lower molecular mass bands (Figure 4a, fractions 15–17). After TX-100 treatment, a striking loss of high molecular mass components was observed. It can be related to different modes of redistribution of the distinct lower molecular mass Con A-reactive bands. Specifically, the major 97 kDa band remained partly close to the initial position (Figure 4b, fraction 17) but was also released, that is, redistributed (Figure 4b, fractions 19–22). In addition, almost complete release of those with molecular masses below 66 kDa was observed (Figure 4a, b, fractions 15–17). The pattern of WGA-reactive glycoproteins of native vesicles was comparable with that of the Con A-reactive ones in the high molecular mass region (Figure 4c). However, after TX-100 treatment, their patterns were strikingly different (Figure 4d), as initially observed in the corresponding lectin-binding assay (Figure 1b). Thus, the high molecular mass band was completely lost, and a shift of the weak 97 kDa band was also detected. Taken together, the profiles related to the cluster of Con A- and WGA-binding glycoproteins might be influenced by their presence in different but overlapping complexes. Moreover, it is possible that diverse but overlapping bands with matching molecular mass and lectin binding were detected, as indicated by the disparate/selective presence/abundance of particular bands in the subsequently eluted fractions.
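Apparent molecular masses such as the 97 and 66 kDa values above are read from gels calibrated with molecular mass standards, conventionally by interpolating log10(mass) against relative migration (Rf). A minimal sketch; the (Rf, kDa) calibration pairs are made up for illustration and are not measurements of the broad-range standards used in the study:

```python
import math

# Minimal sketch: estimating apparent molecular mass from SDS-PAGE by
# interpolating log10(mass) against relative migration (Rf).
# ASSUMPTION: the (Rf, kDa) pairs below are invented for illustration.
STANDARDS = [(0.10, 200.0), (0.25, 116.0), (0.40, 66.0), (0.60, 45.0), (0.80, 21.0)]

def estimate_mass(rf):
    """Apparent mass (kDa) for a band at relative migration rf."""
    for (rf1, m1), (rf2, m2) in zip(STANDARDS, STANDARDS[1:]):
        if rf1 <= rf <= rf2:
            # linear interpolation in log10(mass) between bracketing standards
            t = (rf - rf1) / (rf2 - rf1)
            log_m = math.log10(m1) + t * (math.log10(m2) - math.log10(m1))
            return 10 ** log_m
    raise ValueError("Rf outside the calibrated range")
```

The semi-log relation holds only over the resolving range of the gel, which is why bands trapped at the stacking/separating gel border (as seen for the high molecular mass aggregates above) cannot be assigned a mass this way.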
Prostasomes from human seminal plasma of oligozoospermic men: influence of TX-100 treatment: In parallel, prostasomes isolated from human seminal plasma of oligozoospermic men (sPro-O) were subjected to TX-100 treatment and analyzed in the same manner. Compared to the native sample, the treated ones exhibited patterns of Con A-reactive (Figure 5a) and WGA-reactive glycans (Figure 5b), as well as GGT (Figure 5c), that indicated decrease/loss and/or redistribution, similar to what was found for sPro-N. In addition, similarity with sPro-N was also noticed regarding the effect on the redistribution of the TS CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position, as observed for sPro-N, they exhibited partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable. Observation at the ultrastructural level suggested a more general disruption of sPro-O (Figure 6) than of sPro-N. In general, vesicular structures were of low abundance and their morphology was clearly different from that of sPro-N.
The integrity of the TX-100-treated vesicles was reflected in the patterns of total proteins (Figure 7) and glycoproteins (Figure 8), which indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with native sPro-O (Figure 7a), a significant loss of protein across the entire range of molecular masses was observed; the remaining protein was mostly in the region below 66 kDa (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100-treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more intensive aggregation into complexes which, in general, interfere with or prevent the detection of contributing components. Surface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from the Sephadex G-200 column (eluted at the void volume) are shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of dots and background staining. Barely detectable CD9- and gal-3-immunoreactivity is indicated by a circle. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L, units per liter; gal-3: galectin 3; F: fraction.
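GGT activity in the elution profiles above is expressed in U/L (1 U = 1 µmol of substrate converted per minute). For a kinetic colorimetric assay of this type, activity follows from the Beer–Lambert law applied to the rate of absorbance change. A minimal sketch; the molar absorptivity, path length, and volumes are illustrative assumptions, not parameters of the kit used in the study:

```python
# Minimal sketch: converting a kinetic colorimetric read to enzyme
# activity in U/L (1 U = 1 umol/min) via the Beer-Lambert law.
# ASSUMPTION: all numeric parameters are illustrative; they are not
# taken from the GGT kit used in the study.

def ggt_activity_u_per_l(delta_a_per_min, epsilon, path_cm, v_total_ml, v_sample_ml):
    """Activity in U/L.

    delta_a_per_min : absorbance change per minute
    epsilon         : molar absorptivity of the chromophore (L mol^-1 cm^-1)
    path_cm         : optical path length (cm)
    v_total_ml      : total reaction volume (mL)
    v_sample_ml     : sample volume (mL)
    """
    # mol L^-1 min^-1 in the cuvette, corrected for sample dilution,
    # then converted to umol L^-1 min^-1 (= U/L)
    return delta_a_per_min / (epsilon * path_cm) * (v_total_ml / v_sample_ml) * 1e6
```

In practice the kit manufacturer folds these constants into a single multiplication factor applied to the measured ΔA/min, which is what the analyzer reports.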
Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fractions (F) are shown (Figure 5). Bar 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the positions of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the positions of molecular mass standards. Arrows indicate the border of the stacking and separating gel.
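Elution positions on the Sephadex G-200 column (bed volume 35 mL, 1 mL fractions) can be normalized as the gel-filtration partition coefficient Kav = (Ve - V0)/(Vt - V0), which is 0 at the void volume and 1 at the total column volume. A minimal sketch; the void volume below is an assumption chosen so that it falls near fraction 14, where native vesicles peak in the figures, and is not a measured value from the study:

```python
# Minimal sketch: normalizing elution volumes on a gel-filtration column
# as the partition coefficient Kav = (Ve - V0) / (Vt - V0).
# ASSUMPTION: V_VOID is illustrative (placed near F14, where native
# vesicles elute); only the 35 mL bed volume comes from the text.

V_TOTAL = 35.0  # bed volume (mL), from the text
V_VOID = 14.0   # assumed void volume (mL); with 1 mL fractions, ~F14

def kav(ve_ml, v0_ml=V_VOID, vt_ml=V_TOTAL):
    """Gel-filtration partition coefficient for an elution volume ve_ml:
    0 at the void volume, 1 at the total column volume."""
    return (ve_ml - v0_ml) / (vt_ml - v0_ml)
```

On this scale, intact vesicles excluded from the gel sit at Kav near 0, while solubilized markers redistributed into the included volume (e.g. fractions 19–22 in the profiles) take intermediate Kav values.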
In addition, similarity with sPro-N was also noticed regarding the effect on the redistribution of TS: CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position as observed for sPro-N, they exhibited partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable. Observation at the ultrastructural level suggested more general disruption of sPro-O (Figure 6) than sPro-N. In general, vesicular structures were low abundant and their morphology was clearly different from sPro-N. Integrity of the TX-100-treated vesicles reflected on patterns of total proteins (Figure 7) and glycoproteins (Figure 8) and indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with the native sPro-O (Figure 7a), a significant loss of protein in the entire range of molecular masses, which remained mostly in the region below 66 kDa, was observed (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100 treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more intensive aggregation to form complexes which, in general, interfere with or prevent the detection of contributing components. Surface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from Sephadex G-200 column (eluted at void volume) were shown for comparison. 
Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of dots and background staining. Barely detectable CD9- and gal-3-immunoreactivity is indicated by a circle. A450, absorbance at 450 nm; GGT, gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3, galectin 3; F, fraction. Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 5). Bar 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the position of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). 
Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel. Prostasomes from human seminal plasma of normozoospermic men: influence of TX-100 treatment: Distributions of distinct surface-associated markers of prostasomes from human SP of normozoospermic men (sPro-N) after solubilization with TX-100 are shown in Figure 1a–d. In general, their gel filtration elution positions differed in terms of a more or less noticeable shift from the void volume where they were co-eluted on native vesicles, revealing new patterns of associations. Surface-associated glycoproteins and gamma-glutamyl transferase on the seminal prostasomes of normozoospermic men: influence of detergent treatment. Prostasomes from human seminal plasma of normozoospermic men (sPro-N) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-N (eluted at void volume) were shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of dots and background staining. A450: absorbance at 450 nm; GGT: gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3: galectin 3; F: fraction. In eluted fractions, total mannosylated and sialylated glycans, which can reside on both glycoproteins and glycolipids, were monitored by lectin-binding reactivity (Figure 1a, b), and the GGT as an individual protein marker was monitored by enzymatic activity (Figure 1c). 
Thus, Con A-reactive glycoproteins were solubilized as evidenced by a striking decrease at the initial position (Figure 1a). However, the related redistribution can be deduced rather than clearly shown from the reactivity of fractions in the included column volume, possibly due to the influence of their structure on immobilization. In contrast to this, WGA-reactive glycoproteins seemed to be partly solubilized and released as aggregates, judging by the corresponding elution profile revealing broad peaks entering the column and trailing down along the entire chromatogram (Figure 1b). Moreover, they produced a small but distinct peak before that of intact vesicles, which suggests formation of larger protein complexes. As for GGT, it was also clearly solubilized from vesicles with TX-100 and exhibited an elution profile distinct from the examined glycans (Figure 1c). The influence of TX-100 on the distribution of TS: CD63, CD9, and CD81, chosen as integral membrane proteins, and gal-3, chosen as a soluble but membrane-associated molecule, was monitored by the immunodot blot, an adequate method (in contrast to western blot) for monitoring surface-associated changes (Figure 1d). TS and gal-3 co-localized on native vesicles at a position overlapping the detected Con A- and WGA-reactive glycoproteins and GGT (Figure 1c, data not shown). After TX-100 treatment, CD63 was clearly released and exhibited a broad distribution (Figure 1d, fractions 16–30), but also remained close to its initial position. In contrast to this, CD9 (Figure 1d, fractions 16–18) and gal-3 (Figure 1d, fractions 16–18) retained narrow distributions, that is, they were slightly shifted during elution. Moreover, their patterns overlapped completely. As for CD81, it could not be detected after detergent treatment (Figure 1d). To further follow up the influence of TX-100 on prostasomes, changes in their ultrastructure were analyzed (Figure 2). 
In the region where all examined markers remained more or less co-localized after TX-100 treatment, the microscopic inspection revealed the presence of structures that correspond to reorganized detergent-resistant domains of vesicular membranes. They appeared as broken vesicles surrounded with leaking content or smaller vesicles with disrupted irregular surfaces seemingly shrunken with no associated material. In contrast to this, in the region where released glycoproteins were separated (Figure 1, fractions 20–22), no such structures were visible, only irregular deposits, possibly of protein. To complement the results obtained for detergent-treated samples as such, changes in the patterns of total prostasomal glycoproteins were analyzed under denaturing and reducing conditions by electrophoresis (Figure 3) and lectin blot (Figure 4). Compared to the native vesicles (Figure 3a), in the TX-100-treated ones (Figure 3b), the proteins exhibiting a prostasome-like pattern remained clustered close to the initial position, that is, they were marginally shifted in elution. However, their abundance was noticeably lower. Specifically, discrete changes in terms of the abundance of major bands in the region below 66 kDa (encompassing the masses of TS and gal-3) were detected. In addition, clear loss of bands in the region corresponding to the masses of prostasomal signature bands (90–150 kDa) was also detected. Consequently, some of them were visible as shifted in a cluster of protein bands included in the column volume (Figure 3b, fractions 20–22). Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of normozoospermic men (sPro-N) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 1). Bar 500 nm. 
F14: rare vesicles; F17: broken vesicles surrounded with leaking content; F19: vesicles with disrupted irregular surface with no associated material; F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a) and TX-100-treated sPro-N (b) were resolved by electrophoresis and stained with silver. Although aggregation is present as judged by intense bands at the border of the stacking and separating gel, the reference protein bands are preserved. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the position of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from normozoospermic men. Selected gel filtration (Figure 1) fractions (F) of native prostasomes from human seminal plasma of normozoospermic men (sPro-N) (a, c) and TX-100-treated sPro-N (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel. In contrast to proteins, prostasomal glycoproteins were considerably reorganized after TX-100 treatment. Regarding Con A-reactive glycoproteins, what the lectin-binding assay suggested was confirmed by the lectin blot (Figure 4). 
Thus, the pattern of Con A-reactive glycoproteins from native vesicles comprised a high molecular mass band at the border of the stacking and separating gel and a smear band in the stacking gel, as well as four distinct lower molecular mass bands (Figure 4a, fractions 15–17). After TX-100 treatment, a striking loss of high molecular mass components was observed. It can be related to different modes of redistribution of distinct lower molecular mass Con A-reactive bands. Specifically, the major 97 kDa band remained partly close to the initial position (Figure 4b, fraction 17) but was also released, that is, redistributed (Figure 4b, fractions 19–22). In addition, almost complete release of those with molecular masses below 66 kDa was observed (Figure 4a, b, fractions 15–17). The pattern of WGA-reactive glycoproteins of native vesicles was comparable with that of the Con A-reactive ones in the high molecular mass region (Figure 4c). However, after TX-100 treatment, their patterns were strikingly different (Figure 4d), as initially observed in the corresponding lectin-binding assay (Figure 1b). Thus, the high molecular mass band was completely lost, and a shift of the weak 97 kDa band was also detected. Taken together, the profiles related to the cluster of Con A- and WGA-binding glycoproteins might be influenced by their presence in different but overlapping complexes. Moreover, it is possible that diverse but overlapping bands with matching molecular mass and lectin binding were detected, as indicated by the disparate/selective presence/abundance of particular ones in the subsequently eluted fractions. Prostasomes from human seminal plasma of oligozoospermic men: influence of TX-100 treatment: In parallel, prostasomes isolated from human seminal plasma of oligozoospermic men (sPro-O) were subjected to TX-100 treatment and analyzed in the same manner. 
Compared to the native sample, the treated ones exhibited patterns of Con A- (Figure 5a) and WGA-reactive glycans (Figure 5b) as well as GGT (Figure 5c) that indicated decrease/loss and/or redistribution, similar to what was found for sPro-N. In addition, similarity with sPro-N was also noticed regarding the effect on the redistribution of TS: CD63 and CD81 (Figure 5d). However, although CD9 (Figure 5d, fraction 14) and gal-3 (Figure 5d, fractions 14–16) both remained close to the initial position as observed for sPro-N, they exhibited partially overlapping profiles. Moreover, the immunoreactivity of both TS (CD63 and CD81) was barely detectable. Observation at the ultrastructural level suggested more general disruption of sPro-O (Figure 6) than sPro-N. In general, vesicular structures were of low abundance and their morphology was clearly different from sPro-N. The integrity of the TX-100-treated vesicles was reflected in the patterns of total proteins (Figure 7) and glycoproteins (Figure 8), which indicated more severe perturbation than for sPro-N (Figure 3a). In comparison with the native sPro-O (Figure 7a), a significant loss of protein across the entire range of molecular masses was observed; the remaining protein was found mostly in the region below 66 kDa (Figure 7b). In agreement with this, Con A-reactive glycoproteins of native sPro-O (Figure 8a) were also strikingly decreased, including the major one at 97 kDa (Figure 8b). Moreover, lectin blot failed to detect any WGA-reactive glycoproteins in the TX-100-treated sPro-O (Figure 8b). Compared with those for sPro-N, the profiles of both types of released glycoproteins for sPro-O indicated more intensive aggregation to form complexes which, in general, interfere with or prevent the detection of contributing components. Surface-associated glycoproteins and gamma-glutamyl transferase on seminal prostasomes of oligozoospermic men: influence of detergent treatment. 
Prostasomes from human seminal plasma of oligozoospermic men (sPro-O) were subjected to Triton X-100 (TX-100) treatment followed by gel filtration on a Sephadex G-200 column. Reference elution profiles of native sPro-O from Sephadex G-200 column (eluted at void volume) were shown for comparison. Elution of (a) concanavalin A-reactive glycans (Con A-R) and (b) wheat germ agglutinin-reactive glycans (WGA-R). (c) Elution of GGT. (d) Distribution of tetraspanins (CD63, CD9, and CD81) and gal-3 (indicated in panel c) was monitored by measuring the immunoreactivity of dot blot-immobilized fractions. The presence of TX-100 in eluted fractions caused a spilled appearance of dots and background staining. Barely detectable CD9- and gal-3-immunoreactivity is indicated by a circle. A450, absorbance at 450 nm; GGT, gamma-glutamyl transferase activity expressed in U/L, unit per liter; gal-3, galectin 3; F, fraction. Transmission electron microscopy of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Ultrastructural appearance of prostasomes from human seminal plasma of oligozoospermic men (sPro-O) treated with TX-100. Selected gel filtration-resolved fraction (F) was shown (Figure 5). Bar 500 nm. F14–F17: rare vesicles; F19–F21: irregular deposits. Protein composition of the detergent-treated prostasomes from human seminal plasma of oligozoospermic men. Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a) and TX-100-treated sPro-O (b) were resolved by electrophoresis and stained with silver. No adjustment of glycoprotein content (equal concentration per lane) was made, that is, the eluted fractions were loaded as such (equal volume) to preserve the elution profiles. The numbers (in kDa) indicate the position of molecular mass standards. Distribution of Con A-reactive and WGA-reactive glycoproteins of the detergent-treated seminal prostasomes from oligozoospermic men. 
Selected gel filtration (Figure 5) fractions (F) of native prostasomes from human seminal plasma of oligozoospermic men (sPro-O) (a, c) and TX-100-treated sPro-O (b, d) were subjected to lectin blot using Con A (a, b) and WGA (c, d). Con A-R: concanavalin A-reactive glycoproteins; WGA-R: wheat germ agglutinin-reactive glycoproteins. The numbers (in kDa) indicate the position of molecular mass standards. Arrows indicate the border of stacking and separating gel. Discussion: The results obtained revealed novel patterns of surface-associated prostasomal proteins and related them to the molecular disposition on detergent-sensitive/resistant membrane domains that exist under normal physiology and conditions of low sperm count. In summary, distinct differences were found in the influence of detergent on solubilization of each tetraspanin as well as their relation with the other examined surface-associated molecules. Accordingly, they were grouped into two patterns mainly consisting of overlapped CD9/gal3/WGA-reactive glycoproteins and CD63/Con A-reactive glycoproteins/GGT. When the effect of TX-100 on sPro-N is compared with that on sPro-O, the overall similarity can be seen regarding the redistribution of examined surface-associated glycoproteins including GGT, all presumed to be mainly part of the vesicle coat. In contrast to this, greater difference was found in the redistribution of true integral membrane proteins exemplified by TS as well as gal-3, which could exist as membrane-associated through carbohydrate/protein-binding interactions. More specifically, in sPro-O, perturbation of CD9 and gal-3 was found to be related to engagement in different high molecular mass complexes and mutual segregation rather than co-localization in detergent-resistant membranous structures as could be deduced for sPro-N. The existence of a TS-web on EVs, in general, has not been studied (4). 
In addition, there are few data on the composition of prostasomal surface glycans and gal-3 (2, 16). TS are a family of integral membrane proteins that may be involved in three levels of interaction with their molecular partners (11, 20, 21). As a result of these interactions, they are grouped into detergent-soluble tetraspanin-enriched membranes (1, 10, 22) that are clearly different from other types of higher order molecular complexes (23). Some detergents used for the investigation of prostasomal proteins (24) may differently affect TS interactions with their molecular partners. Since these interactions were outside the scope of this study, and we actually wanted to disrupt TS–TS interactions, we chose TX-100 treatment as the most commonly used method for the extraction of selected molecules (both TS and GGT) (15, 22). In addition to TS, the recruitment of different molecules into organized complexes may involve galectins (25, 26). Although they are soluble proteins, galectins are readily found membrane-associated through interactions achieved by carbohydrate binding (cross-linking N-glycans as ligands) or by other protein- or lipid-binding domains; gal-3 is a distinct member of this family of lectins (5, 23, 27). Thus, TS are expected to be readily solubilized, and since they are medium-sized (~250 amino acids), this can cause a broad elution pattern due to the shift to higher molecular masses (depending on the composition of complexes with ligands). Indeed, the observed redistribution of CD63 was in agreement with this, suggesting abundant release in molecular complexes in response to detergent treatment of both sPro-N and sPro-O. However, one part of CD63-immunoreactivity remained at the initial position. In contrast to CD63, the elution of CD9 was only slightly shifted, suggesting that it remains in high molecular mass complexes in both sPro-N and sPro-O. 
However, the complexes in sPro-N and sPro-O seemed to differ judging by the influence of detergent treatment on their structure. Thus, TEM suggested that CD9 in sPro-N remained in detergent-insoluble membranes/vesicular structures, whereas in sPro-O it was rather a part of aggregated protein complexes (both eluted in the void volume). This is supported by gal-3 distribution, which overlapped completely with CD9 in sPro-N but partially in sPro-O. The possibility of glycan-mediated or protein–protein interaction of gal-3 with CD9 can be supported by their detected co-localization, since neither type of interaction is expected to be influenced by TX-100. As for CD81, it could not be detected after TX-100 treatment in either sPro-N or sPro-O. This can be related to the data indicating that different antibodies differentially recognize CD81 if it is associated with the TS-web, or if the web is disrupted using the TX-100 (28). CD81 was previously reported to interact with GGT (29), which was also monitored. In relation to this, our results for the redistribution of prostasomal surface-associated GGT, monitored by measuring the enzymatic activity, clearly indicated its release from vesicles and appearance in molecular complexes (30). However, at this stage, it cannot be confirmed if this GGT pattern is due to its complex with CD81. Differences in the solubilizing properties of TS might be related to the facts that CD9 is a glycosylated proteolipid, CD63 is a glycosylated protein, and CD81 is a non-glycosylated protein (1). Thus, the TS structure itself and the specificities of the prostasomal surface microenvironment (in terms of distribution and presentation of glycans) may also influence the results obtained for detergent sensitivity. So far, it was reported that the examined TS and gal-3 are found co-isolated with prostasomal lipid rafts (9). 
It is known that lipid rafts contain an unusual lipid composition rich in cholesterol, which renders them insoluble upon detergent treatment (9, 31), and that they may associate with TS by lateral crosstalk between membrane domains. In relation to this, the existence of several prostasomal gal-3 isoelectric variants including a truncated form (carbohydrate recognition domain only) (7), which could reside in a different membrane microenvironment and consequently organize related but distinct molecular complexes, may be responsible for specific redistribution profiles of glycoproteins/TS. Detergent-soluble glycoproteins as molecules, which, on the one hand, can penetrate the membrane core and anchor hydrophobically, and, on the other hand, constitute a specific coat (32, 33), could also influence the stability and accessibility of the domain that may be intercalated with detergent. In this study, in intact sPro-N and sPro-O, WGA and Con A revealed a cluster of distinct, partially overlapping glycoproteins. They were almost completely released from vesicles upon TX-100 treatment. However, the detergent-induced changes were distinct, influencing their detection depending on the experimental conditions used, especially for WGA-reactive ones. Significant shielding in auto-aggregates/heterologous complexes which can interfere with lectin binding, or mixed release of glycolipids/lipoproteins which escape detection by the methods used, is in agreement with the observed behavior, which was much more pronounced in sPro-O. It is interesting that the majority of Con A-reactive glycoproteins were revealed in the region overlapping prostasome signature bands in both intact sPro-N and sPro-O. They are glycoproteins comprising the integral membrane protein CD13 of 150 kDa, transmembrane and soluble CD26 of 82–110 kDa, and soluble CD10 of 94 kDa (34). 
Regarding TS, all this indicated mixed patterns of different Con A-reactive glycoproteins and their preferential associations with CD63, which exhibited an overlapped solubilization pattern. Analysis of membrane proteins is very difficult, since they usually exhibit anomalous behavior in many standard protein techniques (35–37). The possibility that some proteins could aggregate even at detergent concentrations above the critical micelle concentration, due to their abundance or inherent structure, as well as that resolved peaks could be a set of peaks from protein, protein complexes, lipid, and detergent, must be taken into consideration (38). Thus, the obtained molecular patterns themselves are descriptive and provide qualitative data. As a rule, they comprise numerous differently abundant bands. Some of them could have variable presence and some of them are constitutively present. Similar to the total prostasomal protein pattern exemplified by three prostasome signature bands (34), the solubilization signature provided data in terms of annotation of the main glycoproteins/TS to distinct detergent-sensitive or -insensitive prostasomal patterns. Consequently, they can be used reliably for the comparison of prostasomes/any EVs (after establishing their own solubilization signature) in different physiological conditions. In terms of the presumed role of EVs as a communication tool (39, 40), these initial data could be a base for addressing the place of scaffolding in enabling the membrane functionality of EVs. In addition, it may initiate broader investigations of the basic issues of membrane complexity (41–43), usually deduced from the cell surface plasma membrane, extending them to the field of the EV membrane.
Background: Prostasomes, extracellular vesicles (EVs) abundantly present in seminal plasma, express distinct tetraspanins (TS) and galectin-3 (gal-3), which are supposed to shape their surface by an assembly of different molecular complexes. In this study, detergent-sensitivity patterns of membrane-associated prostasomal proteins were determined aiming at the solubilization signature as an intrinsic multimolecular marker and a new parameter suitable as a reference for the comparison of EVs populations in health and disease. Methods: Prostasomes were disrupted by Triton X-100 and analyzed by gel filtration under conditions that maintained complete solubilization. Redistribution of TS (CD63, CD9, and CD81), gal-3, gamma-glutamyltransferase (GGT), and distinct N-glycans was monitored using solid-phase lectin-binding assays, transmission electron microscopy, electrophoresis, and lectin blot. Results: Comparative data on prostasomes under normal physiology and conditions of low sperm count revealed similarity regarding the redistribution of distinct N-glycans and GGT, all presumed to be mainly part of the vesicle coat. In contrast to this, a greater difference was found in the redistribution of integral membrane proteins, exemplified by TS and gal-3. Accordingly, they were grouped into two molecular patterns mainly consisting of overlapped CD9/gal-3/wheat germ agglutinin-reactive glycoproteins and CD63/GGT/concanavalin A-reactive glycoproteins. Conclusions: Solubilization signature can be considered as an all-inclusive distinction factor regarding the surface properties of a particular vesicle since it reflects the status of the parent cell and the extracellular environment, both of which contribute to the composition of spatial membrane arrangements.
null
null
14,412
306
[ 215, 120, 170, 348, 61, 418, 119, 1659, 914 ]
13
[ "figure", "spro", "100", "tx 100", "tx", "fractions", "glycoproteins", "prostasomes", "reactive", "men" ]
[ "glycoproteins lectin", "lectins including galectins", "galectins widely studied", "web galectin glycoprotein", "galectin glycoprotein lattices" ]
null
null
null
[CONTENT] Extracellular vesicles | detergent sensitivity | CD63 | CD9 | gamma-glutamyl transferase | molecular patterns | normozoospermia | oligozoospermia [SUMMARY]
null
[CONTENT] Extracellular vesicles | detergent sensitivity | CD63 | CD9 | gamma-glutamyl transferase | molecular patterns | normozoospermia | oligozoospermia [SUMMARY]
null
[CONTENT] Extracellular vesicles | detergent sensitivity | CD63 | CD9 | gamma-glutamyl transferase | molecular patterns | normozoospermia | oligozoospermia [SUMMARY]
null
[CONTENT] Galectin 3 | Humans | Male | Polysaccharides | Semen | Spermatozoa | Tetraspanins [SUMMARY]
null
[CONTENT] Galectin 3 | Humans | Male | Polysaccharides | Semen | Spermatozoa | Tetraspanins [SUMMARY]
null
[CONTENT] Galectin 3 | Humans | Male | Polysaccharides | Semen | Spermatozoa | Tetraspanins [SUMMARY]
null
[CONTENT] glycoproteins lectin | lectins including galectins | galectins widely studied | web galectin glycoprotein | galectin glycoprotein lattices [SUMMARY]
null
[CONTENT] glycoproteins lectin | lectins including galectins | galectins widely studied | web galectin glycoprotein | galectin glycoprotein lattices [SUMMARY]
null
[CONTENT] glycoproteins lectin | lectins including galectins | galectins widely studied | web galectin glycoprotein | galectin glycoprotein lattices [SUMMARY]
null
[CONTENT] figure | spro | 100 | tx 100 | tx | fractions | glycoproteins | prostasomes | reactive | men [SUMMARY]
null
[CONTENT] figure | spro | 100 | tx 100 | tx | fractions | glycoproteins | prostasomes | reactive | men [SUMMARY]
null
[CONTENT] figure | spro | 100 | tx 100 | tx | fractions | glycoproteins | prostasomes | reactive | men [SUMMARY]
null
[CONTENT] ts | evs | distinct | surface | extracellular | gal | data | cell | different | glycoprotein [SUMMARY]
null
[CONTENT] figure | spro | reactive | glycoproteins | 100 | fractions | tx | tx 100 | men | treated [SUMMARY]
null
[CONTENT] figure | spro | 100 | fractions | gel | tx | tx 100 | min | glycoproteins | men [SUMMARY]
null
[CONTENT] Prostasomes ||| [SUMMARY]
null
[CONTENT] ||| TS ||| two [SUMMARY]
null
[CONTENT] Prostasomes ||| ||| Prostasomes | Triton X-100 ||| lectin blot ||| ||| TS ||| two ||| [SUMMARY]
null
Tibial component rotation in total knee arthroplasty.
26883741
Both the range of motion (ROM) technique and the tibial tubercle landmark (TTL) technique are frequently used to align the tibial component into proper rotational position during total knee arthroplasty (TKA). The aim of the study was to assess the intra-operative differences in tibial rotation position during computer-navigated primary TKA using either the TTL or ROM techniques. The ROM technique was hypothesized to be a repeatable method and to produce different tibial rotation positions compared to the TTL technique.
BACKGROUND
A prospective, observational study was performed to evaluate the antero-posterior axis of the cut proximal tibia using both the ROM and the TTL technique during primary TKA without postoperative clinical assessment. Computer navigation was used to measure this difference in 20 consecutive knees of 20 patients who underwent a posterior stabilized total knee arthroplasty with a fixed-bearing polyethylene insert and a patella resurfacing.
METHODS
The ROM technique is a repeatable method with an intraclass correlation coefficient (ICC2) of 0.84 (p < 0.001). The trial tibial baseplate was on average 4.56 degrees externally rotated compared to the tubercle landmark. This difference was statistically significant (p = 0.028). The amount of maximum intra-operative flexion and the pre-operative mechanical axis were positively correlated with the magnitude of difference between the two methods.
RESULTS
It is important for the orthopaedic surgeon to realise that there is a significant difference between the TTL technique and ROM technique when positioning the tibial component in a rotational position. This difference is correlated with high maximum flexion and mechanical axis deviations.
CONCLUSIONS
[ "Aged", "Aged, 80 and over", "Arthroplasty, Replacement, Knee", "Female", "Humans", "Male", "Middle Aged", "Prospective Studies", "Range of Motion, Articular", "Rotation", "Tibia" ]
4756521
Background
Rotational alignment of the components in total knee arthroplasty (TKA) is an important factor for both survival and the performance of the prostheses [1, 2]. The majority of the attention has focussed on the rotational alignment of the femoral component [3–6], which has resulted in the widespread use of the transepicondylar axis and the antero-posterior axis (Whiteside’s Line) of the distal femur as the reference axes for the rotational alignment of the femoral component [3–6]. However, there is more debate about the rotational alignment of the tibial component, in part because of the difficulty of clinically assessing tibial component rotation. Furthermore, a whole range of anatomical landmarks can be used, including the medial border of the tibial tuberosity, the medial third of the tibial tuberosity, the anterior tibial crest, the posterior tibial condylar line, the second ray and the first web space of the foot. Aligning the tibial component to the tibial tubercle is one of the most popular landmark methods [7–9]. The disadvantage of all anatomical landmark techniques is that they do not account for femoro-tibial kinematics [10]. To address this problem, the ROM technique was introduced; in this technique, the rotational alignment of the tibial tray is determined through conformity to the femoral component when the knee is put through a series of full flexion-extension cycles [11]. However, the position of the tibial tray is not exclusively determined by the femoral component but is also influenced by the extensor mechanism, the patellar component, the ligament balancing and the tibial cut [12, 13]. The rationale behind the ROM technique lies in the theoretical advantage of aligning the tibial component in relation to the femoral component while respecting the soft tissue torsion forces to create optimal femoro-tibial kinematics [14]. For this method to work, the femoral component should be positioned accurately. 
Using computer-assisted surgery may improve the accuracy of positioning [15]. Several studies have demonstrated variability in the relationships between different landmarks and techniques for establishing rotational alignment of tibial components in total knee arthroplasty [11, 16–18]. A review reported that there is no gold standard measurement of tibial component rotation [18]. Whether the ROM technique is a repeatable method, and whether there is a significant difference in tibial component rotational position between the TTL technique and the ROM technique in computer-navigated TKA with patella resurfacing, remain unanswered questions in the literature. The primary purpose of this study was therefore to intra-operatively evaluate the repeatability of the ROM technique. The secondary aim was to evaluate the difference in rotational alignment of the trial tibial component with the use of the TTL and ROM techniques during computer-navigated TKA with patella resurfacing. Additionally, the factors that influenced the positioning of the trial tibial component with both techniques were investigated. Postoperative clinical and radiological data were not collected.
null
null
null
null
Conclusions
The ROM technique is a repeatable intra-operative method for determining the rotational position of the tibial trial component. Because the best method to determine the intra-operative position of the tibial component is still under debate, TKA surgeons should be aware that there is a difference between the ROM and TTL methods, particularly in patients with high peri-operative ranges of motion and/or high pre-operative varus/valgus alignment.
[ "Methods", "Ethics and consent", "Patient demographics", "Operative procedure", "Statistics", "Results", "Discussion", "Limitations of the study", "Conclusions" ]
[ "A prospective, observational study of 20 consecutive primary posterior stabilized TKAs in 20 patients with fixed-bearing polyethylene inserts (Scorpio Flex PS, Stryker Corporation, Mahwah, NJ USA) was performed by a single surgeon (MvS).\nData collection began with 10 consecutive TKAs to determine whether there was a difference between the alignment techniques and if so to gather data to perform a power analysis. Using the acquired data, we determined that a total of 20 subjects were needed to achieve 90 % power assuming a minimum detectable difference of 5.0 degrees, a standard deviation of 7.7 degrees and a significance level of alpha < 0.05.\nThe mean pre-operative mechanical leg axis was 3.65° ± SD 7.15 of varus. The mean pre-operative mechanical leg axis of the varus knees (N = 15) was 7.0° ± SD 4.05, of the valgus knees (N = 5) was −6.4° ± SD −4.15.\nPositive values indicated varus alignment, negative values indicated valgus alignment.\nThe mean posterior slope was 1.5° ± SD 1.02.\n Ethics and consent The Medical Ethics Committee of the Maastricht University Medical Centre has concluded that the described research does not apply to the Dutch Medical Research involving Human Subjects Act (WMO), therefore the patient was not required to provide consent regarding the use of the material.\nFurthermore, every patient in the Maastricht University Medical Centre is provided with information regarding these kinds of studies. If they do not wish to contribute to these studies, this information will be included in their file. 
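The sample-size calculation described above (minimum detectable difference 5.0°, SD 7.7°, 90 % power, alpha < 0.05) can be sketched with a standard normal-approximation formula. This is an illustrative reconstruction, not the authors' actual calculation: a one-sided calculation lands near the reported 20 subjects, while a two-sided one gives a slightly higher n, so the exact figure depends on the test and software used.

```python
import math
from statistics import NormalDist

def sample_size(delta: float, sd: float, power: float = 0.90,
                alpha: float = 0.05, two_sided: bool = True) -> int:
    """Normal-approximation sample size for detecting a mean
    difference `delta` given a standard deviation `sd`."""
    z_alpha = NormalDist().inv_cdf(1 - (alpha / 2 if two_sided else alpha))
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(((z_alpha + z_beta) * sd / delta) ** 2)

# Parameters reported in the study: delta = 5.0 deg, SD = 7.7 deg
n_one_sided = sample_size(5.0, 7.7, two_sided=False)  # 21
n_two_sided = sample_size(5.0, 7.7, two_sided=True)   # 25
```

The normal approximation slightly underestimates the n required for a t-test, which is one reason published sample sizes can differ from this back-of-the-envelope figure.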
The patient involved in this study did not make an objection against the use of his/her material for research purposes.\n Patient demographics The patient demographics are summarized in Table 1. Table 1. Patient demographics (positive values indicate varus alignment, negative values valgus alignment): age 69.8 ± 10 years; preoperative leg axis 3.7° ± 7.2 varus; sex 8 male/12 female; side 10 right/10 left; preoperative ROM 117° ± 16; preoperative flexion contracture 5.6° ± 4.7.\n Operative procedure The Stryker Knee Navigation System (Stryker Navigation System II, version 3.1) was used in this study. 
The prosthesis (Scorpio PS Stryker Howmedica Osteonics, Allendale, NJ USA) used in these surgeries allows five degrees of rotation between the tibial insert and the femoral component. In all cases, a tourniquet was applied for the entire duration of the surgery. After a standard midline skin incision and a medial parapatellar arthrotomy, the active wireless trackers of the navigation system were fixed to the femur and tibia. The required landmarks were entered into the navigation computer, and the rotation centre of the hip was determined by a special algorithm executed in customized software. The transepicondylar axis of the distal femur and the Whiteside’s line were set in each case exactly perpendicular to each other to improve the accuracy of positioning the femoral component. The femoral component was aligned parallel to the transepicondylar axis. The AP axis of the proximal tibia was determined by placing the tip of the pointer on the centre of the line between the intercondylar eminences and aligning it to the medial 1/3 of the tibial tubercle. This AP axis was saved in the navigation program as 0 degrees of rotation. The proximal tibial and distal femoral cuts were performed and examined with the navigation system. The tibial posterior slope was set according to the patient’s natural slope. The polyethylene insert (Scorpio-Flex PS fixed bearing tibial insert) had an additional four degrees of posterior down-slope. The rotation of the femoral component was oriented according to the transepicondylar line and the AP axis (Whiteside’s Line) of the distal femur as currently advised in the literature [3, 5, 17]. After soft tissue balancing and achievement of the maximal range of motion, the patella was prepared. The patellar button position may affect femoro-tibial kinematics; therefore, all of the trial components, including the patella and the PS tibial trial insert, were placed before the tibial component was subjected to the ROM technique. 
One navigation tracker was applied to the alignment handle of the trial tibial tray to check the position according to the given 0 degrees of rotation. Flexion and extension were measured intra-operatively after the approach was made and the trackers were placed. Positive values for extension represent hyperextension, while negative values represent flexion contracture.\nThe tibial component was inserted and checked for smooth movement on the tibial cut surface. The knee was then put through five full flexion-extension cycles while the surgeon held the ankle only. During the ROM cycles, no hands were touching the knee to prevent manual manipulation, and no varus/valgus stress was applied. The movement was followed on the navigation computer to confirm that no varus/valgus stress was applied. After performing the ROM cycles, the rotational position of the trial tibial component was recorded as indicated by the navigation computer. Positive values indicated that the trial component was in internal rotation according to the given TTL axis, and negative values indicated that the trial component was in external rotation. While this measurement was being acquired, the patella was lying in the patellar groove to facilitate optimal patellofemoral tracking and to prevent lateral pull on the patella tendon that could cause the tibia to rotate externally. The rotational position was noted (measurement 1). After removing and reinserting the components, the ROM technique was applied two additional times with five full flexion-extension movements and corresponding subsequent measurements from the navigation system (measurement 2 and 3).\nAfter completing the operative procedure, the final tibial tray was cemented up to 1/3 of the medial border of the tibial tubercle.\n Statistics The statistical analyses were performed with SPSS statistical software (Version 12, SPSS Inc., Chicago, IL USA). 
The reproducibility of the ROM technique was evaluated using the intra-class correlation coefficient (ICC). For each target, there was one ‘rater’ (MvS) who performed the three consecutive attempts at positioning the tibial component using the ROM technique. Since the exact same rater made ratings on every patient and it was assumed that both patients and observer were drawn randomly from larger populations, the ICC2 was used [19]. The ICC2 reflects the reliability of this single rater. Means were compared with paired T-tests in cases of normal distributions [20]. A level of p < 0.05 was considered statistically significant. The mean of the 3 ROM measurements was used to evaluate the difference between the ROM and TTL techniques. We evaluated potential factors associated with the difference between the ROM and TTL techniques, including leg axis, intra-operative flexion, intra-operative extension and posterior slope (Table 2). The associations between each variable and the difference between the ROM and TTL were examined with univariable regression analyses. Factors that were associated with the outcome in univariable analyses (p-values < 0.20) were included in multivariable regression analyses. In multivariable regression analyses p-values < 0.05 were considered significant. Regression coefficients with their 95 % confidence intervals are reported. Table 2. Outcome and available covariates assessed for inclusion in the regression model (mean ± SD, range; positive extension values represent hyperextension, negative values flexion contracture): difference ROM and TTL −4.6° ± 8.6 (−27.0 to 11.5); varus mechanical leg axis 3.7° ± 7.2 (−13.0 to 15.0); intra-operative flexion 121.9° ± 6.7 (108.0 to 134.0); intra-operative extension −0.1° ± 3.0 (−5.5 to 7.0); posterior slope 1.5° ± 1.0 (0.0 to 3.5).", "The Medical Ethics Committee of the Maastricht University Medical Centre has concluded that the described research does not apply to the Dutch Medical Research involving Human Subjects Act (WMO), therefore the patient was not required to provide consent regarding the use of the material.\nFurthermore, every patient in the Maastricht University Medical Centre is provided with information regarding these kinds of studies. If they do not wish to contribute to these studies, this information will be included in their file. The patient involved in this study did not make an objection against the use of his/her material for research purposes.", "The patient demographics are summarized in Table 1. Table 1. Patient demographics (positive values indicate varus alignment, negative values valgus alignment): age 69.8 ± 10 years; preoperative leg axis 3.7° ± 7.2 varus; sex 8 male/12 female; side 10 right/10 left; preoperative ROM 117° ± 16; preoperative flexion contracture 5.6° ± 4.7.", "The Stryker Knee Navigation System (Stryker Navigation System II, version 3.1) was used in this study. The prosthesis (Scorpio PS Stryker Howmedica Osteonics, Allendale, NJ USA) used in these surgeries allows five degrees of rotation between the tibial insert and the femoral component. 
In all cases, a tourniquet was applied for the entire duration of the surgery. After a standard midline skin incision and a medial parapatellar arthrotomy, the active wireless trackers of the navigation system were fixed to the femur and tibia. The required landmarks were entered into the navigation computer, and the rotation centre of the hip was determined by a special algorithm executed in customized software. The transepicondylar axis of the distal femur and the Whiteside’s line were set in each case exactly perpendicular to each other to improve the accuracy of positioning the femoral component. The femoral component was aligned parallel to the transepicondylar axis. The AP axis of the proximal tibia was determined by placing the tip of the pointer on the centre of the line between the intercondylar eminences and aligning it to the medial 1/3 of the tibial tubercle. This AP axis was saved in the navigation program as 0 degrees of rotation. The proximal tibial and distal femoral cuts were performed and examined with the navigation system. The tibial posterior slope was set according to the patient’s natural slope. The polyethylene insert (Scorpio-Flex PS fixed bearing tibial insert) had an additional four degrees of posterior down-slope. The rotation of the femoral component was oriented according to the transepicondylar line and the AP axis (Whiteside’s Line) of the distal femur as currently advised in the literature [3, 5, 17]. After soft tissue balancing and achievement of the maximal range of motion, the patella was prepared. The patellar button position may affect femoro-tibial kinematics; therefore, all of the trial components, including the patella and the PS tibial trial insert, were placed before the tibial component was subjected to the ROM technique. One navigation tracker was applied to the alignment handle of the trial tibial tray to check the position according to the given 0 degrees of rotation. 
Flexion and extension were measured intra-operatively after the approach was made and the trackers were placed. Positive values for extension represent hyperextension, while negative values represent flexion contracture.\nThe tibial component was inserted and checked for smooth movement on the tibial cut surface. The knee was then put through five full flexion-extension cycles while the surgeon held the ankle only. During the ROM cycles, no hands were touching the knee to prevent manual manipulation, and no varus/valgus stress was applied. The movement was followed on the navigation computer to confirm that no varus/valgus stress was applied. After performing the ROM cycles, the rotational position of the trial tibial component was recorded as indicated by the navigation computer. Positive values indicated that the trial component was in internal rotation according to the given TTL axis, and negative values indicated that the trial component was in external rotation. While this measurement was being acquired, the patella was lying in the patellar groove to facilitate optimal patellofemoral tracking and to prevent lateral pull on the patella tendon that could cause the tibia to rotate externally. The rotational position was noted (measurement 1). After removing and reinserting the components, the ROM technique was applied two additional times with five full flexion-extension movements and corresponding subsequent measurements from the navigation system (measurement 2 and 3).\nAfter completing the operative procedure, the final tibial tray was cemented up to 1/3 of the medial border of the tibial tubercle.", "The statistical analyses were performed with SPSS statistical software (Version 12, SPSS Inc., Chicago, IL USA). The reproducibility of the ROM-technique was evaluated using the intra-class correlation coefficient (ICC). 
For each target, there was one ‘rater’ (MvS) who performed the three consecutive attempts at positioning the tibial component using the ROM technique. Since the exact same rater made ratings on every patient and it was assumed that both patients and observer were drawn randomly from larger populations, the ICC2 was used [19]. The ICC2 reflects the reliability of this single rater. Means were compared with paired T-tests in cases of normal distributions [20]. A level of p < 0.05 was considered statistically significant. The mean of the 3 ROM measurements was used to evaluate the difference between the ROM and TTL techniques. We evaluated potential factors associated with the difference between the ROM and TTL techniques, including leg axis, intra-operative flexion, intra-operative extension and posterior slope (Table 2). The associations between each variable and the difference between the ROM and TTL were examined with univariable regression analyses. Factors that were associated with the outcome in univariable analyses (p-values < 0.20) were included in multivariable regression analyses. In multivariable regression analyses p-values < 0.05 were considered significant. Regression coefficients with their 95 % confidence intervals are reported. Table 2. Outcome and available covariates assessed for inclusion in the regression model (mean ± SD, range; positive extension values represent hyperextension, negative values flexion contracture): difference ROM and TTL −4.6° ± 8.6 (−27.0 to 11.5); varus mechanical leg axis 3.7° ± 7.2 (−13.0 to 15.0); intra-operative flexion 121.9° ± 6.7 (108.0 to 134.0); intra-operative extension −0.1° ± 3.0 (−5.5 to 7.0); posterior slope 1.5° ± 1.0 (0.0 to 3.5).", "The tibial component can be reliably positioned in terms of rotation using the ROM technique, as demonstrated by an ICC2 of 0.84 (95 % CI 0.70–0.93; p < 0.001), indicating nearly perfect repeatability. Because the ROM technique was nearly perfectly reliable, the means of the 3 ROM measurements were used to evaluate the difference between the two techniques. With the ROM technique, the tibial component was on average 4.56 (± SD 8.59) degrees externally rotated compared to the tubercle landmark. This difference was statistically significant (p = 0.028).\nIt appeared from the multivariable regression analyses that a more valgus pre-operative mechanical leg axis (−0.54 (95%CI −0.98 to −0.10); p = 0.019), intra-operative flexion (0.57 (95%CI 0.13 to 1.00); p = 0.014) and intra-operative extension (1.41 (95%CI 0.50 to 2.32); p = 0.005) were associated with a greater difference between tibial component positioning using the ROM and TTL techniques. 
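The repeatability statistic used here, ICC2, comes from a two-way random-effects ANOVA decomposition of the n-patients × k-repeats measurement matrix. A minimal pure-Python sketch of ICC(2,1); the rotation values below are invented for demonstration and are not the study's measurements:

```python
def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `data` is a list of n rows (subjects), each with k repeated ratings."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    # Mean squares for rows (subjects), columns (raters) and residual error
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 4 knees, 3 repeated ROM rotation measurements each
# (degrees; negative = external rotation, matching the study's convention)
rotations = [
    [-6.1, -5.8, -6.0],
    [ 2.3,  2.6,  2.1],
    [-1.0, -0.7, -1.2],
    [ 4.8,  5.1,  4.9],
]
icc = icc2_1(rotations)  # close to 1: the three repeats agree well
```

In practice a statistics package would also provide the confidence interval and p-value reported above; the point of the sketch is only the variance decomposition behind ICC2.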
These results (Table 3) indicate that increasing the pre-operative varus mechanical leg alignment by 1 degree resulted in an increase in the external rotation of the tibial component of 0.54 degrees relative to the tibial tubercle using the ROM technique. Table 3. Results of univariable and multivariable regression analyses, coefficient (95%CI) and p-value: varus mechanical leg axis, univariable −0.58 (−1.09 to −0.05), p = 0.030, multivariable −0.54 (−0.98 to −0.10), p = 0.019; intra-operative flexion, univariable 0.82 (0.33 to 1.31), p = 0.002, multivariable 0.57 (0.13 to 1.00), p = 0.014; intra-operative extension, univariable 1.19 (−0.09 to 2.48), p = 0.070, multivariable 1.41 (0.50 to 2.32), p = 0.005; posterior slope, univariable 2.63 (−1.31 to 6.56), p = 0.180, multivariable 0.04 (−3.05 to 3.13), p = 0.979.\nIn the varus knees, the tibial component was on average 5.9 (± SD 8.7) degrees externally rotated; in the valgus knees, the mean external rotation was 0.4 (± SD 7.6) degrees relative to the tibial tubercle. The difference between the ROM and TTL rotational alignments did not differ significantly between the varus and valgus knees (p = 0.221).\nThe cut posterior slope (0.04 (95%CI −3.05 to 3.13); p = 0.979) was not significantly related to the difference in the rotational alignment of the tibial component according to the tibial tubercle with either the ROM or TTL technique.", "This study revealed that the ROM technique is a repeatable method for aligning the tibial component. Using the ROM technique, the tibial component was on average 4.56 (± SD 8.59) degrees externally rotated compared to the tubercle landmark. Our result contradicts that of Ikeuchi et al. [11], who found that the ROM technique resulted in a more internally rotated tibial component position and reported widely variable results. 
However, in line with our findings, Berhouet [21] and Chotanaputhi [22] found the ROM technique reproducible, with the tibial tray externally rotated in comparison with the medial border of the tibial tubercle and the posterior tibial condylar line, respectively. Ikeuchi [11] used cruciate-retaining (CR) TKA components without patellar resurfacing. The mean posterior slope of the cut tibia was 5 degrees, and they compared the position of the tibial component to the Akagi line [23]. The amount of tibial slope, the design of the prosthesis and the patella resurfacing all might have influenced the outcome. Rossi [24] stated that the ROM technique is a reproducible method to establish tibial rotation during TKA, having found that components were positioned in 0.35 degrees of external rotation relative to the Akagi line.\nUsing anatomical landmarks for the rotational alignment of the femoral and tibial components is a widely accepted method. Alignment to the medial 1/3 of the tibial tubercle (Insall’s reference [25]) is based on the papers of Nicoll [26], Lawrie [27], Lützner [28] and Yin [29], who found the medial 1/3 of the tibial tubercle to be the most accurate and reliable anatomical landmark. However, determining the component positions separately can lead to rotational mismatch between the femoral and tibial components [30]. Using the dynamic ROM technique may allow the tibial tray to align itself according to the femoral component position, ligament balancing and extensor mechanism alignment.\nRetrieval and biomechanical studies have indicated that femoro-tibial rotational mismatches cause increased contact stress on the tibial insert and patellar component, leading to accelerated polyethylene wear [31–33]. Steinbrück et al. [34] recommended rotational alignment of the tibial component to the medial 1/3 of the tibial tubercle to achieve the lowest retro-patellar pressure. 
Using the ROM technique, the tibial component was externally rotated by a mean of 4.56 degrees with respect to the tibial tubercle, which might have resulted in increased retro-patellar peak pressure. Kim et al. [35] found the best survival rate when the tibial component was aligned between 2 degrees of internal rotation and 5 degrees of external rotation relative to the medial 1/3 of the tibial tubercle. External rotational errors were not associated with pain in a study by Nicoll [26].\nMeasuring the rotational difference between the ROM and TTL techniques in degrees may not be easy during TKAs performed without navigation. Computer navigation is an ideal method for measuring the difference in trial tibial tray position between the TTL and ROM techniques given its reported accuracy of one degree [36]. Furthermore, the use of computer navigation can help to optimize the femoral component position, which is crucial for the performance of the ROM technique [15, 37]. Computer navigation has no advantage regarding the identification of the correct positions of the anatomical landmarks, but it does have advantages in comparing the positions of landmarks to other landmarks (e.g., the transepicondylar axis to Whiteside’s line) and in positioning the implants according to the identified landmarks.\nHuddleston et al. [16] found that when the ROM technique is applied to varus knees, the antero-posterior axis of the tibial tray is significantly more externally rotated than when this technique is applied to valgus knees. The same result was found in our study, although this difference did not reach statistical significance, likely because our study was underpowered regarding this aspect. However, the pre-operative mechanical leg axis was correlated with the difference between the ROM and TTL techniques (p = 0.019). 
With increasing pre-operative varus alignment, the ROM technique results in increasing external rotation of the tibial component.\nThe maximum degrees of flexion (p = 0.014) and extension (p = 0.005) during surgery were also correlated with the difference between the ROM technique and the tubercle landmark in this study, which indicates that the use of the ROM technique for a patient with greater preoperative flexion would result in a more internally rotated tibial component position compared with a patient with less preoperative flexion.\n Limitations of the study The number of patients (observations) in our study is rather low. Various studies have suggested that for each variable studied in multiple regression analysis at least 10 observations are required [38–40], although a recent study showed that this number could be lower in certain circumstances [41]. The results should therefore be interpreted with some caution.\nClinical and radiological data were not collected postoperatively. Therefore, no results are available regarding potential differences in clinical outcomes between the two techniques. It is known that a tourniquet affects the intra-operative patello-femoral tracking [42, 43]. Therefore, it is likely that the tourniquet had some effect on the tibial component rotational alignment in the ROM technique. The use of a tourniquet during all of the operations and keeping it inflated while performing the ROM cycles might have affected our results.\nAlthough tibial rotational alignment is also affected by ligament balancing, we did not measure the gaps intra-operatively.\nThe design of the prosthesis (CR or PS version) and the design of the tibial tray (symmetric or anatomical) may also have an influence on the outcome [44].", "The number of patients (observations) in our study is rather low. Various studies have suggested that for each variable studied in multiple regression analysis at least 10 observations are required [38–40], although a recent study showed that this number could be lower in certain circumstances [41]. The results should therefore be interpreted with some caution.\nClinical and radiological data were not collected postoperatively. Therefore, no results are available regarding potential differences in clinical outcomes between the two techniques. It is known that a tourniquet affects the intra-operative patello-femoral tracking [42, 43]. Therefore, it is likely that the tourniquet had some effect on the tibial component rotational alignment in the ROM technique. 
The use of a tourniquet during all of the operations and keeping it inflated while performing the ROM cycles might have affected our results.\nAlthough tibial rotational alignment is also affected by ligament balancing, we did not measure the gaps intra-operatively.\nThe design of the prosthesis (CR or PS version) and the design of the tibial tray (symmetric or anatomical) may also influence the outcome [44].", "The ROM technique is a repeatable intra-operative method for determining the rotational position of the tibial trial component. Because the best method to determine the intra-operative position of the tibial component is still under debate, TKA surgeons should be aware that there is a difference between the ROM and TTL methods, particularly in patients with high peri-operative ranges of motion and/or high pre-operative varus/valgus alignment." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Ethics and consent", "Patient demographics", "Operative procedure", "Statistics", "Results", "Discussion", "Limitations of the study", "Conclusions" ]
[ "Rotational alignment of the components in total knee arthroplasty (TKA) is an important factor for both survival and the performance of the prostheses [1, 2]. The majority of the attention has focussed on the rotational alignment of the femoral component [3–6], which has resulted in the widespread use of the transepicondylar axis and the antero-posterior axis (Whiteside’s Line) of the distal femur as the reference axes for the rotational alignment of the femoral component [3–6].\nHowever there is more discussion about the rotational alignment of the tibial component in part because of the difficulty of clinically assessing tibial component rotation. Furthermore, a whole range of anatomical landmarks can be used, including the medial border of the tibial tuberosity, the medial third of the tibial tuberosity, the anterior tibial crest, the posterior tibial condylar line, the second ray and the first web space of the foot. Aligning the tibial component to the tibial tubercle is one of the most popular landmark methods [7–9]. The disadvantage of all anatomical landmark techniques is that they do not account for femoro-tibial kinematics [10]. To address this problem, the ROM technique was introduced; in this technique, the rotational alignment of the tibial tray is determined through conformity to the femoral component when the knee is put through a series of full flexion-extension cycles [11]. However, the position of the tibial tray is not exclusively determined by the femoral component but is also influenced by the extensor mechanism, the patellar component, the ligament balancing and the tibial cut [12, 13]. The rationale behind the ROM technique lies in the theoretical advantage of aligning the tibial component in relation to the femoral component while respecting the soft tissue torsion forces to create optimal femoro-tibial kinematics [14]. For this method to work, the femoral component should be positioned accurately. 
Using computer-assisted surgery may improve the accuracy of positioning [15].\nSeveral studies have demonstrated variability in the relationships between different landmarks and techniques for establishing rotational alignment of tibial components in total knee arthroplasty [11, 16–18]. A review reported that there is no gold standard measurement of tibial component rotation [18]. Whether the ROM technique is a repeatable method, and whether there is a significant difference in tibial component rotational position between the TTL technique and the ROM technique in computer-navigated TKA with patella resurfacing, remain unanswered questions in the literature. The primary purpose of this study was therefore to intra-operatively evaluate the repeatability of the ROM technique. The secondary aim was to evaluate the difference in rotational alignment of the trial tibial component with the use of the TTL and ROM techniques during computer-navigated TKA with patella resurfacing. Additionally, the factors that influenced the positioning of the trial tibial component with both techniques were investigated. Postoperative clinical and radiological data were not collected.", "A prospective, observational study of 20 consecutive primary posterior stabilized TKAs in 20 patients with fixed-bearing polyethylene inserts (Scorpio Flex PS, Stryker Corporation, Mahwah, NJ USA) was performed by a single surgeon (MvS).\nData collection began with 10 consecutive TKAs to determine whether there was a difference between the alignment techniques and, if so, to gather data to perform a power analysis. Using the acquired data, we determined that a total of 20 subjects were needed to achieve 90 % power assuming a minimum detectable difference of 5.0 degrees, a standard deviation of 7.7 degrees and a significance level of alpha < 0.05.\nThe mean pre-operative mechanical leg axis was 3.65° ± SD 7.15 of varus. 
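The sample-size statement above (minimum detectable difference 5.0 degrees, SD 7.7 degrees, alpha < 0.05, 90 % power) can be approximated with the textbook normal-approximation formula for a two-sided paired comparison. This is an illustrative sketch only, not the authors' actual calculation; the exact subject count depends on the test and software used:

```python
import math
from scipy.stats import norm

def paired_sample_size(delta: float, sd: float, alpha: float = 0.05,
                       power: float = 0.90) -> int:
    """Normal-approximation sample size for a two-sided paired comparison.

    delta: minimum detectable mean difference; sd: SD of the differences.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z * sd / delta) ** 2)

# Inputs quoted in the text: difference 5.0 degrees, SD 7.7 degrees.
print(paired_sample_size(5.0, 7.7))  # -> 25 by this approximation
```

This simple approximation suggests roughly 25 subjects; differences from the reported 20 can arise from the specific power routine, one- versus two-sided testing, or rounding of the inputs.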
The mean pre-operative mechanical leg axis of the varus knees (N = 15) was 7.0° ± SD 4.05; that of the valgus knees (N = 5) was −6.4° ± SD 4.15.\nPositive values indicated varus alignment, negative values indicated valgus alignment.\nThe mean posterior slope was 1.5° ± SD 1.02.\n Ethics and consent The Medical Ethics Committee of the Maastricht University Medical Centre has concluded that the described research does not apply to the Dutch Medical Research involving Human Subjects Act (WMO); therefore, the patient was not required to provide consent regarding the use of the material.\nFurthermore, every patient in the Maastricht University Medical Centre is provided with information regarding these kinds of studies. If they do not wish to contribute to these studies, this information will be included in their file. The patient involved in this study did not object to the use of his/her material for research purposes.\n Patient demographics The patient demographics are summarized in Table 1.\nTable 1 Patient demographics (mean ± SD). Positive values indicated varus alignment, negative values indicated valgus alignment. Age: 69.8 ± 10 years; preoperative leg axis: 3.7 ± 7.2 degrees varus; sex: 8 male / 12 female; side: 10 right / 10 left; preoperative ROM: 117 ± 16 degrees; preoperative flexion contracture: 5.6 ± 4.7 degrees.\n Operative procedure The Stryker Knee Navigation System (Stryker Navigation System II, version 3.1) was used in this study. The prosthesis (Scorpio PS Stryker Howmedica Osteonics, Allendale, NJ USA) used in these surgeries allows five degrees of rotation between the tibial insert and the femoral component. In all cases, a tourniquet was applied for the entire duration of the surgery. After a standard midline skin incision and a medial parapatellar arthrotomy, the active wireless trackers of the navigation system were fixed to the femur and tibia. The required landmarks were entered into the navigation computer, and the rotation centre of the hip was determined by a special algorithm executed in customized software. The transepicondylar axis of the distal femur and the Whiteside’s line were set in each case exactly perpendicular to each other to improve the accuracy of positioning the femoral component. The femoral component was aligned parallel to the transepicondylar axis. 
The AP axis of the proximal tibia was determined by placing the tip of the pointer on the centre of the line between the intercondylar eminences and aligning it to the medial 1/3 of the tibial tubercle. This AP axis was saved in the navigation program as 0 degrees of rotation. The proximal tibial and distal femoral cuts were performed and examined with the navigation system. The tibial posterior slope was set according to the patient’s natural slope. The polyethylene insert (Scorpio-Flex PS fixed bearing tibial insert) had an additional four degrees of posterior down-slope. The rotation of the femoral component was oriented according to the transepicondylar line and the AP axis (Whiteside’s Line) of the distal femur as currently advised in the literature [3, 5, 17]. After soft tissue balancing and achievement of the maximal range of motion, the patella was prepared. The patellar button position may affect femoro-tibial kinematics; therefore, all of the trial components, including the patella and the PS tibial trial insert, were placed before the tibial component was subjected to the ROM technique. One navigation tracker was applied to the alignment handle of the trial tibial tray to check the position according to the given 0 degrees of rotation. Flexion and extension were measured intra-operatively after the approach was made and the trackers were placed. Positive values for extension represent hyperextension, while negative values represent flexion contracture.\nThe tibial component was inserted and checked for smooth movement on the tibial cut surface. The knee was then put through five full flexion-extension cycles while the surgeon held the ankle only. During the ROM cycles, no hands were touching the knee to prevent manual manipulation, and no varus/valgus stress was applied. The movement was followed on the navigation computer to confirm that no varus/valgus stress was applied. 
After performing the ROM cycles, the rotational position of the trial tibial component was recorded as indicated by the navigation computer. Positive values indicated that the trial component was in internal rotation according to the given TTL axis, and negative values indicated that the trial component was in external rotation. While this measurement was being acquired, the patella was lying in the patellar groove to facilitate optimal patellofemoral tracking and to prevent lateral pull on the patella tendon that could cause the tibia to rotate externally. The rotational position was noted (measurement 1). After removing and reinserting the components, the ROM technique was applied two additional times with five full flexion-extension movements and corresponding subsequent measurements from the navigation system (measurements 2 and 3).\nAfter completing the operative procedure, the final tibial tray was cemented up to 1/3 of the medial border of the tibial tubercle.\n Statistics The statistical analyses were performed with SPSS statistical software (Version 12, SPSS Inc., Chicago, IL USA). The reproducibility of the ROM-technique was evaluated using the intra-class correlation coefficient (ICC). For each target, there was one ‘rater’ (MvS) who performed the three consecutive attempts at positioning the tibial component using the ROM technique. Since the exact same rater made ratings on every patient and it was assumed that both patients and observer were drawn randomly from larger populations, the ICC2 was used [19]. The ICC2 reflects the reliability of this single rater. Means were compared with paired T-tests in cases of normal distributions [20]. A level of p < 0.05 was considered statistically significant. The mean of the 3 ROM measurements was used to evaluate the difference between the ROM and TTL techniques.
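The ICC2 described above (a two-way random-effects, absolute-agreement, single-measure ICC, often written ICC(2,1)) can be computed directly from the two-way ANOVA mean squares. The sketch below uses synthetic numbers, not the study's measurements; the matrix shape merely mirrors the 20 knees with 3 repeated ROM measurements each:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `ratings` is an (n subjects) x (k repeated measurements) matrix.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-attempt means

    # Mean squares from the two-way ANOVA decomposition.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # attempts/raters
    sse = np.sum((ratings - row_means[:, None]
                  - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic example: 20 'knees', 3 repeated ROM measurements each, with
# large between-knee variation and small measurement noise (all invented).
rng = np.random.default_rng(42)
knee = rng.normal(-4.6, 8.6, size=(20, 1))           # per-knee rotation (deg)
ratings = knee + rng.normal(0.0, 2.0, size=(20, 3))  # three noisy attempts
print(round(icc2_1(ratings), 2))
```

For identical columns (perfect agreement) the formula returns 1.0; a value around 0.84, as reported in the Results, indicates strong single-rater reliability.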
We evaluated potential factors associated with the difference between the ROM and TTL techniques, including leg axis, intra-operative flexion, intra-operative extension and posterior slope (Table 2). The associations between each variable and the difference between the ROM and TTL were examined with univariable regression analyses. Factors that were associated with the outcome in univariable analyses (p-values < 0.20) were included in multivariable regression analyses. In multivariable regression analyses, p-values < 0.05 were considered significant. Regression coefficients with their 95 % confidence intervals are reported.\nTable 2 Outcome and available covariates assessed for inclusion in the regression model (mean ± SD, range). Positive values for extension represent hyperextension, while negative values indicate flexion contracture. Difference ROM and TTL: −4.6 ± 8.6° (−27.0 to 11.5); varus mechanical leg axis: 3.7 ± 7.2° (−13.0 to 15.0); intra-operative flexion: 121.9 ± 6.7° (108.0 to 134.0); intra-operative extension: −0.1 ± 3.0° (−5.5 to 7.0); posterior slope: 1.5 ± 1.0° (0.0 to 3.5).", "The Medical Ethics Committee of the Maastricht University Medical Centre has concluded that the described research does not apply to the Dutch Medical Research involving Human Subjects Act (WMO); therefore, the patient was not required to provide consent regarding the use of the material.\nFurthermore, every patient in the Maastricht University Medical Centre is provided with information regarding these kinds of studies. If they do not wish to contribute to these studies, this information will be included in their file. 
The patient involved in this study did not object to the use of his/her material for research purposes.", "The patient demographics are summarized in Table 1.\nTable 1 Patient demographics (mean ± SD). Positive values indicated varus alignment, negative values indicated valgus alignment. Age: 69.8 ± 10 years; preoperative leg axis: 3.7 ± 7.2 degrees varus; sex: 8 male / 12 female; side: 10 right / 10 left; preoperative ROM: 117 ± 16 degrees; preoperative flexion contracture: 5.6 ± 4.7 degrees.", "The Stryker Knee Navigation System (Stryker Navigation System II, version 3.1) was used in this study. The prosthesis (Scorpio PS Stryker Howmedica Osteonics, Allendale, NJ USA) used in these surgeries allows five degrees of rotation between the tibial insert and the femoral component. In all cases, a tourniquet was applied for the entire duration of the surgery. After a standard midline skin incision and a medial parapatellar arthrotomy, the active wireless trackers of the navigation system were fixed to the femur and tibia. The required landmarks were entered into the navigation computer, and the rotation centre of the hip was determined by a special algorithm executed in customized software. The transepicondylar axis of the distal femur and the Whiteside’s line were set in each case exactly perpendicular to each other to improve the accuracy of positioning the femoral component. The femoral component was aligned parallel to the transepicondylar axis. The AP axis of the proximal tibia was determined by placing the tip of the pointer on the centre of the line between the intercondylar eminences and aligning it to the medial 1/3 of the tibial tubercle. This AP axis was saved in the navigation program as 0 degrees of rotation. The proximal tibial and distal femoral cuts were performed and examined with the navigation system. 
The tibial posterior slope was set according to the patient’s natural slope. The polyethylene insert (Scorpio-Flex PS fixed bearing tibial insert) had an additional four degrees of posterior down-slope. The rotation of the femoral component was oriented according to the transepicondylar line and the AP axis (Whiteside’s Line) of the distal femur as currently advised in the literature [3, 5, 17]. After soft tissue balancing and achievement of the maximal range of motion, the patella was prepared. The patellar button position may affect femoro-tibial kinematics; therefore, all of the trial components, including the patella and the PS tibial trial insert, were placed before the tibial component was subjected to the ROM technique. One navigation tracker was applied to the alignment handle of the trial tibial tray to check the position according to the given 0 degrees of rotation. Flexion and extension were measured intra-operatively after the approach was made and the trackers were placed. Positive values for extension represent hyperextension, while negative values represent flexion contracture.\nThe tibial component was inserted and checked for smooth movement on the tibial cut surface. The knee was then put through five full flexion-extension cycles while the surgeon held the ankle only. During the ROM cycles, no hands were touching the knee to prevent manual manipulation, and no varus/valgus stress was applied. The movement was followed on the navigation computer to confirm that no varus/valgus stress was applied. After performing the ROM cycles, the rotational position of the trial tibial component was recorded as indicated by the navigation computer. Positive values indicated that the trial component was in internal rotation according to the given TTL axis, and negative values indicated that the trial component was in external rotation. 
While this measurement was being acquired, the patella was lying in the patellar groove to facilitate optimal patellofemoral tracking and to prevent lateral pull on the patella tendon that could cause the tibia to rotate externally. The rotational position was noted (measurement 1). After removing and reinserting the components, the ROM technique was applied two additional times with five full flexion-extension movements and corresponding subsequent measurements from the navigation system (measurements 2 and 3).\nAfter completing the operative procedure, the final tibial tray was cemented up to 1/3 of the medial border of the tibial tubercle.", "The statistical analyses were performed with SPSS statistical software (Version 12, SPSS Inc., Chicago, IL USA). The reproducibility of the ROM-technique was evaluated using the intra-class correlation coefficient (ICC). For each target, there was one ‘rater’ (MvS) who performed the three consecutive attempts at positioning the tibial component using the ROM technique. Since the exact same rater made ratings on every patient and it was assumed that both patients and observer were drawn randomly from larger populations, the ICC2 was used [19]. The ICC2 reflects the reliability of this single rater. Means were compared with paired T-tests in cases of normal distributions [20]. A level of p < 0.05 was considered statistically significant. The mean of the 3 ROM measurements was used to evaluate the difference between the ROM and TTL techniques. We evaluated potential factors associated with the difference between the ROM and TTL techniques, including leg axis, intra-operative flexion, intra-operative extension and posterior slope (Table 2). The associations between each variable and the difference between the ROM and TTL were examined with univariable regression analyses. Factors that were associated with the outcome in univariable analyses (p-values < 0.20) were included in multivariable regression analyses. 
In multivariable regression analyses, p-values < 0.05 were considered significant. Regression coefficients with their 95 % confidence intervals are reported.\nTable 2 Outcome and available covariates assessed for inclusion in the regression model (mean ± SD, range). Positive values for extension represent hyperextension, while negative values indicate flexion contracture. Difference ROM and TTL: −4.6 ± 8.6° (−27.0 to 11.5); varus mechanical leg axis: 3.7 ± 7.2° (−13.0 to 15.0); intra-operative flexion: 121.9 ± 6.7° (108.0 to 134.0); intra-operative extension: −0.1 ± 3.0° (−5.5 to 7.0); posterior slope: 1.5 ± 1.0° (0.0 to 3.5).", "The tibial component can be reliably positioned in terms of rotation using the ROM technique, as demonstrated by an ICC2 of 0.84 (95 % CI 0.70–0.93; p < 0.001), indicating nearly perfect repeatability. Because the ROM technique was nearly perfectly reliable, the means of the 3 ROM measurements were used to evaluate the difference between the two techniques. With the ROM technique, the tibial component was on average 4.56 (± SD 8.59) degrees externally rotated compared to the tubercle landmark. This difference was statistically significant (p = 0.028).\nIt appeared from the multivariable regression analyses that a more valgus pre-operative mechanical leg axis (−0.54 (95 % CI −0.98 to −0.10); p = 0.019), intra-operative flexion (0.57 (95 % CI 0.13 to 1.00); p = 0.014) and intra-operative extension (1.41 (95 % CI 0.50 to 2.32); p = 0.005) were associated with a greater difference between tibial component positioning using the ROM and TTL techniques. 
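The two-step modelling strategy described in the Statistics section (univariable screening at p < 0.20, then a multivariable model reported with 95 % confidence intervals) can be sketched as follows. The data are simulated and the variable names merely mirror Table 2; scipy's `linregress` handles the univariable step and plain ordinary least squares via numpy the multivariable step:

```python
import numpy as np
from scipy import stats

# Simulated stand-in data: names mirror Table 2, numbers are invented.
rng = np.random.default_rng(7)
n = 40
X = {
    "leg_axis": rng.normal(3.7, 7.2, n),
    "flexion": rng.normal(121.9, 6.7, n),
    "extension": rng.normal(-0.1, 3.0, n),
    "slope": rng.normal(1.5, 1.0, n),
}
# Simulated outcome: difference between ROM and TTL rotation (degrees).
y = -0.5 * X["leg_axis"] + 0.5 * (X["flexion"] - 121.9) + rng.normal(0, 3, n)

# Step 1: univariable screening -- keep covariates with p < 0.20.
selected = [v for v in X if stats.linregress(X[v], y).pvalue < 0.20]

# Step 2: multivariable OLS on the screened covariates, with 95 % CIs.
A = np.column_stack([np.ones(n)] + [X[v] for v in selected])
beta, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
dof = n - A.shape[1]
sigma2 = (rss[0] if rss.size else np.sum((y - A @ beta) ** 2)) / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(A.T @ A)))
tcrit = stats.t.ppf(0.975, dof)
for name, b, s in zip(["const"] + selected, beta, se):
    print(f"{name}: {b:.2f} (95% CI {b - tcrit*s:.2f} to {b + tcrit*s:.2f})")
```

With the simulated effects used here, the two genuinely associated covariates survive the screen and keep their signs in the multivariable model, mirroring the leg-axis and flexion findings in Table 3 qualitatively (the magnitudes are arbitrary).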
These results indicate that increasing the pre-operative varus mechanical leg alignment by 1 degree resulted in an increase in the external rotation of the tibial component of 0.54 degrees relative to the tibial tubercle using the ROM technique (Table 3).\nTable 3 Results of univariable and multivariable regression analyses (coefficient (95 % CI); p-value). Varus mechanical leg axis: univariable −0.58 (−1.09 to −0.05), p = 0.030; multivariable −0.54 (−0.98 to −0.10), p = 0.019. Intra-operative flexion: univariable 0.82 (0.33 to 1.31), p = 0.002; multivariable 0.57 (0.13 to 1.00), p = 0.014. Intra-operative extension: univariable 1.19 (−0.09 to 2.48), p = 0.070; multivariable 1.41 (0.50 to 2.32), p = 0.005. Posterior slope: univariable 2.63 (−1.31 to 6.56), p = 0.180; multivariable 0.04 (−3.05 to 3.13), p = 0.979.\nIn the varus knees, the tibial component was on average 5.9 (± SD 8.7) degrees externally rotated; in the valgus knees, the mean external rotation was 0.4 (± SD 7.6) degrees relative to the tibial tubercle. The difference between the rotational alignments using the ROM and TTL techniques in the varus versus the valgus knees was not statistically significant (p = 0.221).\nThe cut posterior slope (0.04 (95 % CI −3.05 to 3.13); p = 0.979) was not significantly related to the difference in the rotational alignment of the tibial component according to the tibial tubercle in either the ROM or TTL techniques.", "This study revealed that the ROM technique is a repeatable method for aligning the tibial component. Using the ROM technique, the tibial component was on average 4.56 (± SD 8.59) degrees externally rotated compared to the tubercle landmark. Our result contradicts that of Ikeuchi et al. [11], who found that the ROM technique results in a more internally rotated tibial component position, with widely variable results. 
However, in line with our findings, Berhouet [21] and Chotanaputhi [22] found the ROM technique reproducible, with the tibial tray externally rotated in comparison with the medial border of the tibial tubercle and the posterior tibial condylar line, respectively. Ikeuchi [11] used cruciate-retaining (CR) TKA components without patellar resurfacing. The mean posterior slope of the cut tibia was 5 degrees, and they compared the position of the tibial component to the Akagi line [23]. The amount of tibial slope, the design of the prosthesis and the patella resurfacing all might have influenced the outcome. Rossi [24] stated that the ROM technique is a reproducible method to establish tibial rotation during TKA, having found that components were positioned in 0.35 degrees of external rotation relative to the Akagi line.\nUsing anatomical landmarks for the rotational alignment of the femoral and tibial components is a widely accepted method. Alignment to the medial 1/3 of the tibial tubercle (Insall’s reference [25]) is based on the papers of Nicoll [26], Lawrie [27], Lützner [28] and Yin [29], who found the medial 1/3 of the tibial tubercle to be the most accurate and reliable anatomical landmark. However, determining the component positions separately can lead to rotational mismatch between the femoral and tibial components [30]. Using the dynamic ROM technique may allow the tibial tray to align itself according to the femoral component position, ligament balancing and extensor mechanism alignment.\nRetrieval and biomechanical studies have indicated that femoro-tibial rotational mismatches cause increased contact stress on the tibial insert and patellar component, which leads to accelerated polyethylene wear [31–33]. Steinbrück et al. [34] recommended rotational alignment of the tibial component to the medial 1/3 of the tibial tubercle to achieve the lowest retro-patellar pressure. 
Using the ROM technique, the tibial component was externally rotated by a mean of 4.56 degrees with respect to the tibial tubercle, which might have resulted in increased retro-patellar peak pressure. Kim et al. [35] found the best survival rate when the tibial component was aligned between 2 degrees of internal rotation and 5 degrees of external rotation relative to the medial 1/3 of the tibial tubercle. External rotational errors were not associated with pain in a study by Nicoll [26].

Measuring the rotational difference between the ROM and TTL techniques in degrees may not be easy during TKAs performed without navigation. Computer navigation is an ideal method for measuring the difference in trial tibial tray position between the TTL and ROM techniques given its reported accuracy of one degree [36]. Furthermore, the use of computer navigation can help to optimize the femoral component position, which is crucial for the performance of the ROM technique [15, 37]. Computer navigation has no advantage regarding the identification of the correct positions of the anatomical landmarks, but it does have advantages in comparing the positions of the landmarks to other landmarks (e.g., the transepicondylar axis to Whiteside's line) and in positioning the implants according to the identified landmarks.

Huddleston et al. [16] found that when the ROM technique is applied to varus knees, the antero-posterior axis of the tibial tray is significantly more externally rotated than when this technique is applied to valgus knees. The same pattern was found in our study, although the difference did not reach statistical significance, likely because our study was underpowered regarding this aspect. However, the pre-operative mechanical leg axis was correlated with the difference between the ROM and TTL techniques (p = 0.019).
With increasing pre-operative varus alignment, the ROM technique results in increasing external rotation of the tibial component.

The maximum degrees of flexion (p = 0.014) and extension (p = 0.005) during surgery were also correlated with the difference between the ROM technique and the tubercle landmark in this study, which indicates that using the ROM technique in a patient with greater preoperative flexion would result in a more internally rotated tibial component position than in a patient with less preoperative flexion.

Limitations of the study: The number of patients (observations) in our study is rather low. Various studies have suggested that for each variable studied in multiple regression analysis at least 10 observations are required [38–40], although a recent study showed that this number could be lower in certain circumstances [41]. The results should therefore be interpreted with some caution.

Clinical and radiological data were not collected postoperatively. Therefore, no results are available regarding potential differences in clinical outcomes between the two techniques. It is known that a tourniquet affects intra-operative patello-femoral tracking [42, 43]. Therefore, it is likely that the tourniquet had some effect on the tibial component rotational alignment in the ROM technique. The use of a tourniquet during all of the operations, kept inflated while performing the ROM cycles, might have affected our results.

Although tibial rotational alignment is also affected by ligament balancing, we did not measure the gaps intra-operatively.

The design of the prosthesis (CR or PS version) and the design of the tibial tray (symmetric or anatomical) may also have influenced the outcome [44].
Conclusions: The ROM technique is a repeatable intra-operative method for determining the rotational position of the tibial trial component. Because the best method to determine the intra-operative position of the tibial component is still under debate, TKA surgeons should be aware that there is a difference between the ROM and TTL methods, particularly in patients with high peri-operative ranges of motion and/or high pre-operative varus/valgus alignment.
Keywords: Total knee arthroplasty, Tibial rotation, ROM technique, TTL technique, Computer navigation
Background: Rotational alignment of the components in total knee arthroplasty (TKA) is an important factor for both survival and the performance of the prostheses [1, 2]. The majority of the attention has focussed on the rotational alignment of the femoral component [3–6], which has resulted in the widespread use of the transepicondylar axis and the antero-posterior axis (Whiteside’s Line) of the distal femur as the reference axes for the rotational alignment of the femoral component [3–6]. However there is more discussion about the rotational alignment of the tibial component in part because of the difficulty of clinically assessing tibial component rotation. Furthermore, a whole range of anatomical landmarks can be used, including the medial border of the tibial tuberosity, the medial third of the tibial tuberosity, the anterior tibial crest, the posterior tibial condylar line, the second ray and the first web space of the foot. Aligning the tibial component to the tibial tubercle is one of the most popular landmark methods [7–9]. The disadvantage of all anatomical landmark techniques is that they do not account for femoro-tibial kinematics [10]. To address this problem, the ROM technique was introduced; in this technique, the rotational alignment of the tibial tray is determined through conformity to the femoral component when the knee is put through a series of full flexion-extension cycles [11]. However, the position of the tibial tray is not exclusively determined by the femoral component but is also influenced by the extensor mechanism, the patellar component, the ligament balancing and the tibial cut [12, 13]. The rationale behind the ROM technique lies in the theoretical advantage of aligning the tibial component in relation to the femoral component while respecting the soft tissue torsion forces to create optimal femoro-tibial kinematics [14]. For this method to work, the femoral component should be positioned accurately. 
Using computer-assisted surgery may improve the accuracy of positioning [15]. Several studies have demonstrated variability in the relationships between different landmarks and techniques for establishing rotational alignment of tibial components in total knee arthroplasty [11, 16–18]. A review reported that there is no gold-standard measurement of tibial component rotation [18]. Whether the ROM technique is a repeatable method, and whether there is a significant difference in tibial component rotational position between the TTL technique and the ROM technique in computer-navigated TKA with patella resurfacing, remain unanswered questions in the literature. The primary purpose of this study was therefore to intra-operatively evaluate the repeatability of the ROM technique. The secondary aim was to evaluate the difference in rotational alignment of the trial tibial component with the use of the TTL and ROM techniques during computer-navigated TKA with patella resurfacing. Additionally, the factors that influenced the positioning of the trial tibial component with both techniques were investigated. Postoperative clinical and radiological data were not collected.

Methods: A prospective, observational study of 20 consecutive primary posterior-stabilized TKAs in 20 patients with fixed-bearing polyethylene inserts (Scorpio Flex PS, Stryker Corporation, Mahwah, NJ, USA) was performed by a single surgeon (MvS). Data collection began with 10 consecutive TKAs to determine whether there was a difference between the alignment techniques and, if so, to gather data to perform a power analysis. Using the acquired data, we determined that a total of 20 subjects were needed to achieve 90% power, assuming a minimum detectable difference of 5.0 degrees, a standard deviation of 7.7 degrees and a significance level of alpha < 0.05. The mean pre-operative mechanical leg axis was 3.65° ± SD 7.15 of varus.
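As a rough cross-check of a sample-size calculation like the one above, the textbook normal-approximation formula for a paired comparison can be sketched as follows. This is a generic approximation, not the authors' actual software; the exact n depends on one- versus two-sided testing and on t-distribution corrections:

```python
from math import ceil
from statistics import NormalDist

def paired_n(delta, sd, alpha=0.05, power=0.90, two_sided=True):
    """Normal-approximation sample size (number of pairs) for a paired t-test.

    delta: minimum detectable mean difference; sd: SD of the paired differences.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2) if two_sided else z(1 - alpha)
    z_beta = z(power)
    effect = delta / sd
    return ceil((z_alpha + z_beta) ** 2 / effect ** 2)

# With the study's inputs (delta = 5.0 deg, SD = 7.7 deg, 90% power):
n_two_sided = paired_n(5.0, 7.7)                    # about 25 pairs
n_one_sided = paired_n(5.0, 7.7, two_sided=False)   # about 21 pairs
```

With these inputs the approximation gives roughly 21 to 25 pairs, the same order of magnitude as the 20 subjects reported; the small discrepancy is expected given the approximation and unspecified sidedness.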
The mean pre-operative mechanical leg axis of the varus knees (N = 15) was 7.0° ± SD 4.05; that of the valgus knees (N = 5) was −6.4° ± SD 4.15. Positive values indicated varus alignment, negative values indicated valgus alignment. The mean posterior slope was 1.5° ± SD 1.02.

Ethics and consent: The Medical Ethics Committee of the Maastricht University Medical Centre concluded that the described research does not fall under the Dutch Medical Research Involving Human Subjects Act (WMO); therefore, the patients were not required to provide consent regarding the use of the material. Furthermore, every patient in the Maastricht University Medical Centre is provided with information regarding these kinds of studies. If they do not wish to contribute to such studies, this is recorded in their file. The patients involved in this study did not object to the use of their material for research purposes.

Patient demographics: The patient demographics are summarized in Table 1.

Table 1. Patient demographics (mean ± SD). Positive leg-axis values indicate varus alignment, negative values valgus alignment.
Age (years): 69.8 ± 10
Preoperative leg axis (degrees varus): 3.7 ± 7.2
Sex (male/female): 8/12
Side (right/left): 10/10
Preoperative ROM (degrees): 117 ± 16
Preoperative flexion contracture (degrees): 5.6 ± 4.7

Operative procedure: The Stryker Knee Navigation System (Stryker Navigation System II, version 3.1) was used in this study. The prosthesis (Scorpio PS, Stryker Howmedica Osteonics, Allendale, NJ, USA) used in these surgeries allows five degrees of rotation between the tibial insert and the femoral component. In all cases, a tourniquet was applied for the entire duration of the surgery. After a standard midline skin incision and a medial parapatellar arthrotomy, the active wireless trackers of the navigation system were fixed to the femur and tibia. The required landmarks were entered into the navigation computer, and the rotation centre of the hip was determined by a special algorithm executed in customized software. The transepicondylar axis of the distal femur and Whiteside's line were set in each case exactly perpendicular to each other to improve the accuracy of positioning the femoral component. The femoral component was aligned parallel to the transepicondylar axis. The AP axis of the proximal tibia was determined by placing the tip of the pointer on the centre of the line between the intercondylar eminences and aligning it to the medial 1/3 of the tibial tubercle. This AP axis was saved in the navigation program as 0 degrees of rotation. The proximal tibial and distal femoral cuts were performed and examined with the navigation system.
The tibial posterior slope was set according to the patient's natural slope. The polyethylene insert (Scorpio-Flex PS fixed-bearing tibial insert) had an additional four degrees of posterior down-slope. The rotation of the femoral component was oriented according to the transepicondylar line and the AP axis (Whiteside's line) of the distal femur, as currently advised in the literature [3, 5, 17]. After soft tissue balancing and achievement of the maximal range of motion, the patella was prepared. The patellar button position may affect femoro-tibial kinematics; therefore, all of the trial components, including the patella and the PS tibial trial insert, were placed before the tibial component was subjected to the ROM technique. One navigation tracker was applied to the alignment handle of the trial tibial tray to check its position relative to the given 0 degrees of rotation. Flexion and extension were measured intra-operatively after the approach was made and the trackers were placed. Positive values for extension represent hyperextension, while negative values represent flexion contracture. The tibial component was inserted and checked for smooth movement on the tibial cut surface. The knee was then put through five full flexion-extension cycles while the surgeon held only the ankle. During the ROM cycles, no hands were touching the knee, to prevent manual manipulation, and no varus/valgus stress was applied; the movement was followed on the navigation computer to confirm this. After performing the ROM cycles, the rotational position of the trial tibial component was recorded as indicated by the navigation computer. Positive values indicated that the trial component was in internal rotation relative to the given TTL axis, and negative values indicated that the trial component was in external rotation.
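The navigation readout convention described above (the TTL axis fixed at 0 degrees; positive values internal rotation, negative values external rotation) and the later averaging of repeated ROM readings can be sketched as a small helper. The function names and the example angles below are illustrative, not from the study:

```python
def describe_rotation(angle_deg: float) -> str:
    """Translate a navigation readout into words; the TTL axis is 0 degrees,
    positive = internal rotation, negative = external rotation (study convention)."""
    if angle_deg > 0:
        return f"{angle_deg:.1f} deg internal rotation vs TTL axis"
    if angle_deg < 0:
        return f"{abs(angle_deg):.1f} deg external rotation vs TTL axis"
    return "aligned with TTL axis"

def rom_vs_ttl(measurements):
    """Mean of the repeated ROM readings; the TTL reference is 0 by definition."""
    return sum(measurements) / len(measurements)

# e.g. three hypothetical ROM trials of -5.0, -4.0 and -4.5 deg average to
# -4.5 deg, i.e. 4.5 deg of external rotation relative to the tubercle landmark.
mean_reading = rom_vs_ttl([-5.0, -4.0, -4.5])
```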
While this measurement was being acquired, the patella was lying in the patellar groove to facilitate optimal patellofemoral tracking and to prevent lateral pull on the patellar tendon, which could cause the tibia to rotate externally. The rotational position was noted (measurement 1). After removing and reinserting the components, the ROM technique was applied two additional times, each with five full flexion-extension movements and a corresponding measurement from the navigation system (measurements 2 and 3). After completing the operative procedure, the final tibial tray was cemented in alignment with the medial 1/3 of the tibial tubercle.

Statistics: The statistical analyses were performed with SPSS statistical software (Version 12, SPSS Inc., Chicago, IL, USA). The reproducibility of the ROM technique was evaluated using the intra-class correlation coefficient (ICC). For each target, there was one 'rater' (MvS), who performed the three consecutive attempts at positioning the tibial component using the ROM technique. Since the same rater made ratings on every patient and it was assumed that both patients and observer were drawn randomly from larger populations, the ICC2 was used [19]. The ICC2 reflects the reliability of this single rater. Means were compared with paired t-tests in cases of normal distributions [20]. A level of p < 0.05 was considered statistically significant. The mean of the 3 ROM measurements was used to evaluate the difference between the ROM and TTL techniques. We evaluated potential factors associated with the difference between the ROM and TTL techniques, including leg axis, intra-operative flexion, intra-operative extension and posterior slope (Table 2). The associations between each variable and the difference between the ROM and TTL were examined with univariable regression analyses. Factors that were associated with the outcome in univariable analyses (p-values < 0.20) were included in multivariable regression analyses.
In the multivariable regression analyses, p-values < 0.05 were considered significant. Regression coefficients with their 95% confidence intervals are reported.

Table 2. Outcome and available covariates assessed for inclusion in the regression model (mean ± SD, range). Positive extension values represent hyperextension; negative values indicate flexion contracture.
Difference between ROM and TTL (°): −4.6 ± 8.6 (−27.0 to 11.5)
Varus mechanical leg axis (°): 3.7 ± 7.2 (−13.0 to 15.0)
Intra-operative flexion (°): 121.9 ± 6.7 (108.0 to 134.0)
Intra-operative extension (°): −0.1 ± 3.0 (−5.5 to 7.0)
Posterior slope (°): 1.5 ± 1.0 (0.0 to 3.5)
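The single-rater, two-way random-effects ICC used here (often written ICC(2,1) in the Shrout and Fleiss notation the paper cites) can be computed from the 20 x 3 matrix of repeated measurements via a two-way ANOVA decomposition. A minimal sketch, using synthetic data rather than the study's measurements:

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    Y: array of shape (n subjects, k repeated measurements)."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-attempt means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between attempts
    resid = Y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic example: 20 'patients', 3 repeated ROM attempts with small noise;
# repeated attempts that agree closely yield an ICC near 1.
rng = np.random.default_rng(42)
true_rotation = rng.uniform(-15.0, 10.0, 20)             # per-patient tray rotation
attempts = true_rotation[:, None] + rng.normal(0.0, 0.5, (20, 3))
icc = icc2_1(attempts)
```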
Results: The tibial component can be reliably positioned in terms of rotation using the ROM technique, as demonstrated by an ICC2 of 0.84 (95% CI 0.70 to 0.93; p < 0.001), indicating nearly perfect repeatability. Because the ROM technique was nearly perfectly reliable, the means of the 3 ROM measurements were used to evaluate the difference between the two techniques. With the ROM technique, the tibial component was on average 4.56 (± SD 8.59) degrees externally rotated compared to the tubercle landmark. This difference was statistically significant (p = 0.028). The multivariable regression analyses showed that a more valgus pre-operative mechanical leg axis (−0.54 (95% CI −0.98 to −0.10); p = 0.019), intra-operative flexion (0.57 (95% CI 0.13 to 1.00); p = 0.014) and intra-operative extension (1.41 (95% CI 0.50 to 2.32); p = 0.005) were associated with a greater difference between tibial component positioning using the ROM and TTL techniques.
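To make the per-degree interpretation of such coefficients concrete, the multivariable model can be sketched with ordinary least squares on synthetic data. This is a hypothetical illustration: the data are generated (noise-free) from the reported coefficients, the intercept is arbitrary, and nothing here is the study's actual dataset:

```python
import numpy as np

# Model: diff(ROM - TTL, deg) = b0 + b1*varus + b2*flexion + b3*extension + b4*slope
rng = np.random.default_rng(0)
n = 50
varus = rng.uniform(-13.0, 15.0, n)      # mechanical leg axis (deg varus)
flexion = rng.uniform(108.0, 134.0, n)   # intra-operative flexion (deg)
extension = rng.uniform(-5.5, 7.0, n)    # intra-operative extension (deg)
slope = rng.uniform(0.0, 3.5, n)         # cut posterior slope (deg)

b_true = np.array([-70.0, -0.54, 0.57, 1.41, 0.04])  # intercept b0 is arbitrary
X = np.column_stack([np.ones(n), varus, flexion, extension, slope])
y = X @ b_true

b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# b_hat[1] recovers ~ -0.54: each additional degree of pre-operative varus shifts
# the ROM-positioned tray about 0.54 deg further into external rotation vs the tubercle.
```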
(Table 3) These results indicate that increasing the pre-operative varus mechanical leg alignment by 1 degree increased the external rotation of the tibial component by 0.54 degrees relative to the tibial tubercle when using the ROM technique.

Table 3. Results of univariable and multivariable regression analyses. Values are coefficient (95 % CI), p-value.
  Varus mechanical leg axis: univariable −0.58 (−1.09 to −0.05), p = 0.030; multivariable −0.54 (−0.98 to −0.10), p = 0.019
  Intra-operative flexion: univariable 0.82 (0.33 to 1.31), p = 0.002; multivariable 0.57 (0.13 to 1.00), p = 0.014
  Intra-operative extension: univariable 1.19 (−0.09 to 2.48), p = 0.070; multivariable 1.41 (0.50 to 2.32), p = 0.005
  Posterior slope: univariable 2.63 (−1.31 to 6.56), p = 0.180; multivariable 0.04 (−3.05 to 3.13), p = 0.979

In the varus knees, the tibial component was on average 5.9 (± SD 8.7) degrees externally rotated; in the valgus knees, the mean external rotation was 0.4 (± SD 7.6) degrees relative to the tibial tubercle. The difference between the rotational alignments obtained with the ROM and TTL techniques did not differ significantly between the varus and valgus knees (p = 0.221). The cut posterior slope (0.04 (95 % CI −3.05 to 3.13); p = 0.979) was not significantly related to the difference in rotational alignment of the tibial component relative to the tibial tubercle between the ROM and TTL techniques.

Discussion: This study revealed that the ROM technique is a repeatable method for aligning the tibial component. Using the ROM technique, the tibial component was on average 4.56 (± SD 8.59) degrees externally rotated compared to the tubercle landmark. This result contradicts Ikeuchi et al. [11], who found that the ROM technique resulted in a more internally rotated position of the tibial component, with widely variable results.
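The two-stage procedure behind Table 3 (univariable screening at p < 0.20, then a multivariable linear model at p < 0.05) can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical variable names, not a reconstruction of the study's analysis:

```python
# Sketch of the two-stage variable-selection procedure described in the
# Statistics section: univariable screening at p < 0.20, then a
# multivariable ordinary-least-squares model at p < 0.05.
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """OLS fit; returns coefficients and two-sided p-values.
    X must already contain an intercept column."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)                  # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)             # coefficient covariance
    t = beta / np.sqrt(np.diag(cov))
    return beta, 2 * stats.t.sf(np.abs(t), n - p)

n = 60
idx = np.arange(n)
predictors = {
    "leg_axis": np.linspace(-10, 10, n),   # strongly related to y below
    "flexion": np.cos(0.9 * idx),          # noise-like series
}
y = 3.0 + 2.0 * predictors["leg_axis"] + 0.5 * np.sin(idx)

# Stage 1: univariable screening, keep variables with p < 0.20
screened = []
for name, x in predictors.items():
    _, pvals = ols_pvalues(np.column_stack([np.ones(n), x]), y)
    if pvals[1] < 0.20:
        screened.append(name)

# Stage 2: multivariable model with the screened variables
Xm = np.column_stack([np.ones(n)] + [predictors[v] for v in screened])
coefs, pvals = ols_pvalues(Xm, y)
significant = [v for v, p in zip(screened, pvals[1:]) if p < 0.05]
print(screened, significant)
```

A strongly associated predictor survives both stages; a variable screened in at p < 0.20 may still drop out of the multivariable model, as the posterior slope did in Table 3.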
However, in line with our findings, Berhouet [21] and Chotanaputhi [22] found the ROM technique reproducible, with the tibial tray externally rotated relative to the medial border of the tibial tubercle and the posterior tibial condylar line, respectively. Ikeuchi [11] used cruciate-retaining (CR) TKA components without patellar resurfacing; the mean posterior slope of the cut tibia was 5 degrees, and the position of the tibial component was compared to the Akagi line [23]. The amount of tibial slope, the design of the prosthesis and the patellar resurfacing all might have influenced the outcome. Rossi [24] stated that the ROM technique is a reproducible method to establish tibial rotation during TKA, having found that components were positioned in 0.35 degrees of external rotation relative to the Akagi line. Using anatomical landmarks for the rotational alignment of the femoral and tibial components is a widely accepted method. Alignment to the medial 1/3 of the tibial tubercle (Insall's reference [25]) is based on the papers of Nicoll [26], Lawrie [27], Lützner [28] and Yin [29], who found the medial 1/3 of the tibial tubercle to be the most accurate and reliable anatomical landmark. However, determining the component positions separately can lead to rotational mismatch between the femoral and tibial components [30]. Using the dynamic ROM technique may allow the tibial tray to align itself according to the femoral component position, ligament balancing and extensor mechanism alignment. Retrieval and biomechanical studies have indicated that femoro-tibial rotational mismatch causes increased contact stress on the tibial insert and patellar component, leading to accelerated polyethylene wear [31–33]. Steinbrück et al. [34] recommended rotational alignment of the tibial component to the medial 1/3 of the tibial tubercle to achieve the lowest retro-patellar pressure.
Using the ROM technique, the tibial component was externally rotated by a mean of 4.56 degrees with respect to the tibial tubercle, which might have resulted in increased retro-patellar peak pressure. Kim et al. [35] found the best survival rate when the tibial component was aligned between 2 degrees of internal rotation and 5 degrees of external rotation relative to the medial 1/3 of the tibial tubercle. External rotational errors were not associated with pain in a study by Nicoll [26]. Measuring the rotational difference between the ROM and TTL techniques in degrees may not be easy during TKAs performed without navigation. Computer navigation is an ideal method for measuring the difference in trial tibial tray position between the TTL and ROM techniques given its reported accuracy of one degree [36]. Furthermore, computer navigation can help optimize the femoral component position, which is crucial for the performance of the ROM technique [15, 37]. Computer navigation has no advantage in identifying the correct positions of the anatomical landmarks, but it does have advantages in comparing the positions of landmarks to other landmarks (e.g., the transepicondylar axis to Whiteside's line) and in positioning the implants according to the identified landmarks. Huddleston et al. [16] found that when the ROM technique is applied to varus knees, the antero-posterior axis of the tibial tray is significantly more externally rotated than when the technique is applied to valgus knees. The same pattern was found in our study, although the difference did not reach statistical significance, likely because our study was underpowered in this respect. However, the pre-operative mechanical leg axis was correlated with the difference between the ROM and TTL techniques (p = 0.019): with increasing pre-operative varus alignment, the ROM technique results in increasing external rotation of the tibial component.
The maximum degrees of flexion (p = 0.014) and extension (p = 0.005) during surgery were also correlated with the difference between the ROM technique and the tubercle landmark in this study, which indicates that using the ROM technique in a patient with greater preoperative flexion would result in a more internally rotated tibial component position than in a patient with less preoperative flexion.

Limitations of the study: The number of patients (observations) in our study is rather low. Various studies have suggested that at least 10 observations are required for each variable studied in multiple regression analysis [38–40], although a recent study showed that this number could be lower in certain circumstances [41]. The results should therefore be interpreted with some caution. Clinical and radiological data were not collected postoperatively; therefore, no results are available regarding potential differences in clinical outcomes between the two techniques. It is known that a tourniquet affects intra-operative patello-femoral tracking [42, 43]. It is therefore likely that the tourniquet had some effect on the tibial component rotational alignment in the ROM technique; the use of a tourniquet during all of the operations, kept inflated while performing the ROM cycles, might have affected our results. Although tibial rotational alignment is also affected by ligament balancing, we did not measure the gaps intra-operatively. The design of the prosthesis (CR or PS version) and the design of the tibial tray (symmetric or anatomical) may also have influenced the outcome [44].

Conclusions: The ROM technique is a repeatable intra-operative method for determining the rotational position of the tibial trial component. Because the best method to determine the intra-operative position of the tibial component is still under debate, TKA surgeons should be aware that there is a difference between the ROM and TTL methods, particularly in patients with high peri-operative ranges of motion and/or high pre-operative varus/valgus alignment.
Background: Both the range of motion (ROM) technique and the tibial tubercle landmark (TTL) technique are frequently used to align the tibial component into proper rotational position during total knee arthroplasty (TKA). The aim of the study was to assess the intra-operative differences in tibial rotation position during computer-navigated primary TKA using either the TTL or ROM technique. The ROM technique was hypothesized to be a repeatable method and to produce different tibial rotation positions compared to the TTL technique. Methods: A prospective, observational study was performed to evaluate the antero-posterior axis of the cut proximal tibia using both the ROM and the TTL technique during primary TKA, without postoperative clinical assessment. Computer navigation was used to measure this difference in 20 consecutive knees of 20 patients who underwent a posterior stabilized total knee arthroplasty with a fixed-bearing polyethylene insert and patella resurfacing. Results: The ROM technique is a repeatable method, with an intraclass correlation coefficient (ICC2) of 0.84 (p < 0.001). The trial tibial baseplate was on average 4.56 degrees externally rotated compared to the tubercle landmark; this difference was statistically significant (p = 0.028). The amount of maximum intra-operative flexion and the pre-operative mechanical axis were positively correlated with the magnitude of the difference between the two methods. Conclusions: It is important for the orthopaedic surgeon to realise that there is a significant difference between the TTL technique and the ROM technique when positioning the tibial component rotationally. This difference is correlated with high maximum flexion and mechanical axis deviations.
Background: Rotational alignment of the components in total knee arthroplasty (TKA) is an important factor for both the survival and the performance of the prostheses [1, 2]. The majority of attention has focussed on the rotational alignment of the femoral component [3–6], which has resulted in the widespread use of the transepicondylar axis and the antero-posterior axis (Whiteside's line) of the distal femur as the reference axes for the rotational alignment of the femoral component [3–6]. However, there is more debate about the rotational alignment of the tibial component, in part because of the difficulty of clinically assessing tibial component rotation. Furthermore, a whole range of anatomical landmarks can be used, including the medial border of the tibial tuberosity, the medial third of the tibial tuberosity, the anterior tibial crest, the posterior tibial condylar line, the second ray and the first web space of the foot. Aligning the tibial component to the tibial tubercle is one of the most popular landmark methods [7–9]. The disadvantage of all anatomical landmark techniques is that they do not account for femoro-tibial kinematics [10]. To address this problem, the ROM technique was introduced; in this technique, the rotational alignment of the tibial tray is determined through conformity to the femoral component when the knee is put through a series of full flexion-extension cycles [11]. However, the position of the tibial tray is not exclusively determined by the femoral component but is also influenced by the extensor mechanism, the patellar component, the ligament balancing and the tibial cut [12, 13]. The rationale behind the ROM technique lies in the theoretical advantage of aligning the tibial component in relation to the femoral component while respecting the soft-tissue torsion forces, to create optimal femoro-tibial kinematics [14]. For this method to work, the femoral component must be positioned accurately.
Using computer-assisted surgery may improve the accuracy of positioning [15]. Several studies have demonstrated variability in the relationships between different landmarks and techniques for establishing rotational alignment of tibial components in total knee arthroplasty [11, 16–18]. A review reported that there is no gold-standard measurement of tibial component rotation [18]. Whether the ROM technique is a repeatable method, and whether there is a significant difference in tibial component rotational position between the TTL technique and the ROM technique in computer-navigated TKA with patella resurfacing, remain unanswered questions in the literature. The primary purpose of this study was therefore to intra-operatively evaluate the repeatability of the ROM technique. The secondary outcome was to evaluate the difference in rotational alignment of the trial tibial component with the use of the TTL and ROM techniques during computer-navigated TKA with patella resurfacing. Additionally, the factors that influenced the positioning of the trial tibial component with both techniques were investigated. Postoperative clinical and radiological data were not collected.
6,662
301
[ 2763, 112, 80, 679, 392, 457, 1291, 216, 79 ]
10
[ "tibial", "rom", "component", "values", "technique", "tibial component", "rom technique", "alignment", "axis", "navigation" ]
[ "alignment femoral", "rotational alignment tibial", "femoral tibial components", "tibial distal femoral", "tibial rotational alignment" ]
[CONTENT] Total knee arthroplasty | Tibial rotation | ROM technique | TTL technique | Computer navigation [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Arthroplasty, Replacement, Knee | Female | Humans | Male | Middle Aged | Prospective Studies | Range of Motion, Articular | Rotation | Tibia [SUMMARY]
Relationship of obesity to physical activity, domestic activities, and sedentary behaviours: cross-sectional findings from a national cohort of over 70,000 Thai adults.
21970620
Patterns of physical activity (PA), domestic activity and sedentary behaviours are changing rapidly in Asia. Little is known about their relationship with obesity in this context. This study investigates in detail the relationship between obesity, physical activity, domestic activity and sedentary behaviours in a Thai population.
BACKGROUND
74,981 adult students aged 20-50 from all regions of Thailand attending the Sukhothai Thammathirat Open University in 2005-2006 completed a self-administered questionnaire, including providing appropriate self-reported data on height, weight and PA. We conducted cross-sectional analyses of the relationship between obesity, defined according to Asian criteria (Body Mass Index (BMI) ≥25), and measures of physical activity and sedentary behaviours (exercise-related PA; leisure-related computer use and television watching ("screen-time"); housework and gardening; and sitting-time) adjusted for age, sex, income and education and compared according to a range of personal characteristics.
METHODS
Overall, 15.6% of participants were obese, with a substantially greater prevalence in men (22.4%) than women (9.9%). Inverse associations between being obese and total weekly sessions of exercise-related PA were observed in men, with a significantly weaker association seen in women (p(interaction) < 0.0001). Increasing obesity with increasing screen-time was seen in all population groups examined; there was an overall 18% (15-21%) increase in obesity with every two hours of additional daily screen-time. There were 33% (26-39%) and 33% (21-43%) reductions in the adjusted risk of being obese in men and women, respectively, reporting housework/gardening daily versus seldom or never. Exercise-related PA, screen-time and housework/gardening each had independent associations with obesity.
RESULTS
Domestic activities and sedentary behaviours are important in relation to obesity in Thailand, independent of exercise-related physical activity. In this setting, programs to prevent and treat obesity through increasing general physical activity need to consider overall energy expenditure and address a wide range of low-intensity high-volume activities in order to be effective.
CONCLUSIONS
[ "Activities of Daily Living", "Adult", "Body Mass Index", "Cohort Studies", "Cross-Sectional Studies", "Exercise", "Female", "Humans", "Male", "Middle Aged", "Obesity", "Odds Ratio", "Prevalence", "Risk Factors", "Sedentary Behavior", "Sex Factors", "Surveys and Questionnaires", "Thailand", "Time Factors", "Young Adult" ]
3204261
Background
The prevalence of obesity is rising rapidly in most Asian countries, with increases of 46% in Japan and over 400% in China observed from the 1980s to early 2000s [1]. In Thailand, the prevalence of obesity increased by around 19% from 1997 to 2004 alone [2]. There have been accompanying increases in morbidity related to conditions such as diabetes and cardiovascular disease in Asian countries [3,4]. It is well established in Western populations that increasing purposeful or leisure-time physical activity (PA) is associated with reduced rates of obesity [5,6]. Recent evidence, also from Western countries, suggests that sedentary activities, such as watching television or using a computer, are associated with increasing obesity, independent of purposeful PA [7-9]. The role of incidental PA and overall energy expenditure, in influencing obesity has been highlighted [10,11]. The interplay between these factors and their combined effects on obesity are not well understood and information relevant to Asian populations is particularly scarce. Furthermore, the relationship between domestic activities and obesity is unclear [12-14]. This is important because physical activity related to patterns of daily activity differs between Asia and Western countries [15] and because many Asian countries are experiencing rapid health and lifestyle transitions [16]. This paper examines in detail the relationships between obesity, exercise-related PA, domestic activities and sedentary behaviours in Thailand, with particular emphasis on the interaction between these factors.
Methods
Study population: The Sukhothai Thammathirat Open University (STOU) Cohort Study is designed to provide evidence regarding the transition in risk-factor profiles, health outcomes and other factors accompanying development and is described in detail elsewhere [17]. In brief, from April to November 2005, enrolled STOU students across Thailand who had completed at least one semester were mailed a 20-page health questionnaire and asked to join the study by completing the questionnaire, providing signed consent for follow-up, and returning these in a reply-paid envelope. A total of 87 134 men and women aged 15-87 years (median 29 years) joined the cohort.

Data: All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (see Additional files 1 and 2 for questionnaires).
Where possible, questionnaire items that had been standardised and validated were used. Self-reported weight and height were used to calculate participants' BMI, as their weight in kilograms, divided by the square of their height in metres. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25, respectively, in accordance with International Obesity Taskforce recommendations [18] and studies in other Asian populations [19]. Information on exercise-related PA was obtained through a question asking: "During a typical week (7-day period), how many times on average do you do the following kinds of exercise?", with responses requested for: "Strenuous exercise (heart beats rapidly) for more than 20 minutes, e.g. heavy lifting, digging, aerobics or fast bicycling, running, soccer, trakraw"; "Moderate exercise (not exhausting but breathe harder than normal) for more than 20 minutes, e.g. carrying light loads, cycling at a regular pace"; "Mild exercise (minimal effort) for more than 20 minutes, e.g. yoga, Tai-Chi, bowling" and; "Walking for at least 10 minutes e.g. at work, at home, exercise". This question is a sessions-based measure of physical activity, similar to the sessions component of the International Physical Activity Questionnaire and the Active Australia Survey [20]; these session measures have been shown to provide a reliable index of sufficiency of physical activity in non-Thai populations [21]. It incorporates the three major intensities of activity (strenuous, moderate and walking), included in these measures, as well as an additional "mild" category that was created specifically for this study to cover common types of activity in Thailand. The responses to this question were used to derive a weighted measure of overall metabolically-adjusted exercise-related PA, calculated as "2 × strenuous + moderate + mild + walking" exercise sessions, in keeping with previous calculations of this quotient [20]. 
The frequency of reported housework and gardening was used as a measure of incidental PA and was classified into 5 groups according to the response to the question "How often do you do household cleaning or gardening work?", with options ranging from seldom or never to most days. Total daily leisure-related screen-time and sitting time were classified according to the participant's response to the question "How many hours per day do you usually spend: Watching TV or playing computer games? Sitting for any purpose (e.g. reading, resting, working, thinking)?". Sitting time could also include screen-time, as participants were not specifically asked to exclude screen-time from this measure. The availability of domestic appliances was classified according to the response to the question "Which of the following does your home have now?", with options including microwave oven, refrigerator, water heater and washing machine. The questions on housework/gardening, screen-time, sitting time and availability of domestic appliances were devised specially for the Thai Cohort Study. Education attainment was classified as: secondary school graduation or less; post-secondary school certificate or diploma; and tertiary graduate. Personal monthly income was recorded in Thai Baht in four categories (≤7,000; 7,001-10,000; 10,001-20,000; >20,000). Respondents recorded the frequency of eating deep-fried food and of soft drinks and Western-style fast foods such as pizza (known as "junkfood") on a five-point Likert scale ranging from never or less than once a month to once or more a day; this was categorised as consumption <3 times and ≥3 times per week for fried food, and seldom (never or less than once per month) and regularly (≥ once per month) for "junkfood". Fruit and vegetable intakes were noted as serves eaten per day and categorised as <2 and ≥2 serves per day.
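The derived measures described above, BMI from self-reported height and weight with the Asian cut-points (overweight BMI ≥23, obese BMI ≥25) and the weighted exercise score "2 × strenuous + moderate + mild + walking", can be sketched as follows. Function names are ours, not the study's:

```python
# Sketch of the study's derived measures (illustrative helper functions).

def bmi(weight_kg, height_m):
    # BMI = weight in kilograms divided by the square of height in metres
    return weight_kg / height_m ** 2

def bmi_category(b):
    # Asian cut-points used in this study: overweight >= 23, obese >= 25
    if b >= 25:
        return "obese"
    if b >= 23:
        return "overweight"
    return "not overweight"

def weighted_pa_sessions(strenuous, moderate, mild, walking):
    # Metabolically weighted weekly session count: 2 x strenuous + others
    return 2 * strenuous + moderate + mild + walking

b = bmi(70, 1.70)   # ~24.2, "overweight" under the Asian cut-points
print(round(b, 1), bmi_category(b))
print(weighted_pa_sessions(strenuous=2, moderate=1, mild=0, walking=3))  # 8
```

Note that a BMI of 24.2 is "overweight" under these Asian cut-points but would fall below the conventional WHO overweight threshold of 25.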
They were asked if they have ever smoked, when they started and when they quit and were categorised as current smokers or not current smokers, with similar questions and categories for alcohol consumption. Analysis was restricted to the 95% of individuals aged between 20 and 50 years, with BMIs between 11 and 50. Individuals were excluded from the analyses if they were missing data on age or sex (n = 2), height or weight (n = 1030) or physical activity or inactivity (n = 6863), leaving 74,981 participants. All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (See Additional files 1 and 2 for questionnaires). Where possible, questionnaire items that had been standardised and validated were used. Self-reported weight and height were used to calculate participants' BMI, as their weight in kilograms, divided by the square of their height in metres. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25, respectively, in accordance with International Obesity Taskforce recommendations [18] and studies in other Asian populations [19]. Information on exercise-related PA was obtained through a question asking: "During a typical week (7-day period), how many times on average do you do the following kinds of exercise?", with responses requested for: "Strenuous exercise (heart beats rapidly) for more than 20 minutes, e.g. 
heavy lifting, digging, aerobics or fast bicycling, running, soccer, trakraw"; "Moderate exercise (not exhausting but breathe harder than normal) for more than 20 minutes, e.g. carrying light loads, cycling at a regular pace"; "Mild exercise (minimal effort) for more than 20 minutes, e.g. yoga, Tai-Chi, bowling"; and "Walking for at least 10 minutes, e.g. at work, at home, exercise". This question is a sessions-based measure of physical activity, similar to the sessions component of the International Physical Activity Questionnaire and the Active Australia Survey [20]; these session measures have been shown to provide a reliable index of sufficiency of physical activity in non-Thai populations [21]. It incorporates the three major intensities of activity (strenuous, moderate and walking) included in these measures, as well as an additional "mild" category that was created specifically for this study to cover common types of activity in Thailand. The responses to this question were used to derive a weighted measure of overall metabolically-adjusted exercise-related PA, calculated as "2 × strenuous + moderate + mild + walking" exercise sessions, in keeping with previous calculations of this quotient [20].

The frequency of reported housework and gardening was used as a measure of incidental PA and was classified into 5 groups according to the response to the question "How often do you do household cleaning or gardening work?", with options ranging from seldom or never, to most days. Total daily leisure-related screen-time and sitting time were classified according to the participant's response to the question "How many hours per day do you usually spend: Watching TV or playing computer games? Sitting for any purpose (e.g. reading, resting, working, thinking)?". Sitting time could also include screen-time, as participants were not specifically asked to exclude screen-time from this measure.
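As a concrete sketch, the BMI categorisation (with the Asian cut-points cited above) and the weighted exercise-session score could be computed as follows; the function and variable names are illustrative, not taken from the study's own code:

```python
def bmi(weight_kg, height_m):
    """BMI = weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def bmi_category(b):
    # Cut-points per IOTF recommendations for Asian populations:
    # overweight at BMI >= 23, obese at BMI >= 25
    if b < 18.5:
        return "underweight"
    if b < 23:
        return "healthy"
    if b < 25:
        return "overweight"
    return "obese"

def weighted_pa_sessions(strenuous, moderate, mild, walking):
    # Weighted weekly score: "2 x strenuous + moderate + mild + walking"
    return 2 * strenuous + moderate + mild + walking

# Example: 70 kg at 1.60 m gives BMI 27.34, classified as obese here
print(bmi_category(bmi(70, 1.6)))
```

Note the weighting only doubles strenuous sessions; mild sessions and walking count the same as moderate sessions in this quotient.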
The availability of domestic appliances was classified according to the response to the question "Which of the following does your home have now?", with options including microwave oven, refrigerator, water heater and washing machine. The questions on housework/gardening, screen-time, sitting time and availability of domestic appliances were devised specifically for the Thai Cohort Study.

Educational attainment was classified as: secondary school graduation or less; post-secondary school certificate or diploma; and tertiary graduate. Personal monthly income was recorded in Thai Baht in four categories (≤7,000; 7,001-10,000; 10,001-20,000; >20,000). Respondents recorded the frequency of eating deep-fried food, and of consuming soft drinks and Western-style fast foods such as pizza (together known as "junkfood"), on a five-point Likert scale ranging from never or less than once a month, to once or more a day; this was categorised as consumption <3 times and ≥3 times per week for fried food, and as seldom (never or less than once per month) and regularly (≥ once per month) for "junkfood". Fruit and vegetable intakes were recorded as serves eaten per day and categorised as <2 and ≥2 serves per day. Respondents were asked if they had ever smoked, when they started and when they quit, and were categorised as current smokers or not current smokers, with similar questions and categories for alcohol consumption.

Analysis was restricted to the 95% of individuals aged between 20 and 50 years, with BMIs between 11 and 50. Individuals were excluded from the analyses if they were missing data on age or sex (n = 2), height or weight (n = 1030) or physical activity or inactivity (n = 6863), leaving 74,981 participants.

Statistical methods

The relationships between a range of personal characteristics and exercise-related PA, housework/gardening and leisure-related screen-time were examined, as well as the correlation between the individual measures of physical activity and inactivity.
Variables were categorised into the groups listed in the various tables. The proportion of the study population classified as obese according to exercise-related PA, housework, leisure-related screen-time and sitting time was examined. Prevalence odds ratios (OR) and 95% CIs for obesity according to PA, housework, screen-time and sitting time were estimated using unconditional logistic regression; crude and adjusted odds ratios were computed. ORs were presented separately for men and women and adjusted for age (as a continuous variable), income and educational attainment, with exploration of the effect of additional adjustment for factors such as marital status, smoking, alcohol consumption and urban/rural residence. We evaluated the significance of interaction terms using a likelihood ratio test, comparing the model with and without the interaction terms.

We examined how much of any association of a specific PA or sedentary behaviour with obesity was attributable to differences in total physical activity level by modelling simultaneously the three PA variables and their two-way and three-way interactions. We also examined how much of the association of certain sedentary behaviours could be attributed to the effect of other sedentary behaviours and to consumption of fried foods and soft drinks and Western-style junkfood, using mutual adjustment.

All analyses were carried out in STATA version 9.2. All statistical tests were two-sided, using a significance level of p < 0.05. Due to the large sample size, conclusions were based on both significance and the effect size.
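The odds ratios above were estimated from logistic models fitted in Stata; as a minimal illustration of the quantity being estimated, a crude prevalence OR with a Woolf (log-based) 95% CI can be computed directly from a 2 × 2 table. The counts below are invented for illustration and do not come from the study:

```python
import math

def crude_or_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the summed reciprocal counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: obesity by high vs low screen-time
or_, lo, hi = crude_or_ci(50, 100, 30, 120)
print(or_, lo, hi)  # OR = 2.0 with its 95% CI
```

In the study itself the adjusted ORs additionally condition on age, income and education within the logistic model, which a 2 × 2 table cannot capture; this sketch only shows the crude estimator.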
Ethical approval

Ethics approval was obtained from Sukhothai Thammathirat Open University Research and Development Institute (protocol 0522/10) and the Australian National University Human Research Ethics Committee (protocol 2004344). Informed written consent was obtained from all participants.
Conclusions
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/11/762/prepub
Background

The prevalence of obesity is rising rapidly in most Asian countries, with increases of 46% in Japan and over 400% in China observed from the 1980s to early 2000s [1]. In Thailand, the prevalence of obesity increased by around 19% from 1997 to 2004 alone [2]. There have been accompanying increases in morbidity related to conditions such as diabetes and cardiovascular disease in Asian countries [3,4].

It is well established in Western populations that increasing purposeful or leisure-time physical activity (PA) is associated with reduced rates of obesity [5,6]. Recent evidence, also from Western countries, suggests that sedentary activities, such as watching television or using a computer, are associated with increasing obesity, independent of purposeful PA [7-9]. The role of incidental PA and overall energy expenditure in influencing obesity has been highlighted [10,11]. The interplay between these factors and their combined effects on obesity are not well understood, and information relevant to Asian populations is particularly scarce. Furthermore, the relationship between domestic activities and obesity is unclear [12-14]. This is important because physical activity related to patterns of daily activity differs between Asia and Western countries [15] and because many Asian countries are experiencing rapid health and lifestyle transitions [16].

This paper examines in detail the relationships between obesity, exercise-related PA, domestic activities and sedentary behaviours in Thailand, with particular emphasis on the interaction between these factors.

Study population

The Sukhothai Thammathirat Open University (STOU) Cohort Study is designed to provide evidence regarding the transition in risk factor profiles, health outcomes and other factors accompanying development and is described in detail elsewhere [17]. In brief, from April to November 2005, enrolled STOU students across Thailand who had completed at least one semester were mailed a 20-page health questionnaire and asked to join the study by completing the questionnaire, providing signed consent for follow-up, and returning these in a reply-paid envelope. A total of 87 134 men and women aged 15-87 years (median 29 years) joined the cohort.
Results

Of 74 981 participants with appropriate data, 41 351 (55.2%, 95% CI 54.8-55.5%) were classified as being of healthy weight (BMI 18.5-22.9), 10 733 (14.4%, 14.1-14.6%) were underweight (BMI < 18.5), 11 241 (15.0%, 14.7-15.2%) were overweight but not obese (BMI 23.0-24.9) and 11 616 (15.6%, 15.2-15.7%) were obese (BMI ≥ 25.0).

Men were far more likely to be overweight (21.7%, 21.3-22.1%) or obese (22.4%, 22.0-22.9%) than women (9.5% and 9.9%, respectively), while women were more likely to be underweight (21.3%, 20.9-21.7%) than men (5.9%, 5.6-6.1%). Compared to other members of the study cohort, obesity prevalence was higher in older participants and urban dwellers and in those with higher consumption of fried food (data not shown) [22].

Patterns of exercise-related PA varied between men and women, with 12.5% (12.2-12.9%) of men reporting 0-3 sessions and 26.3% (25.8-26.8%) reporting ≥18 sessions of exercise-related PA per week, compared to 22.2% (21.8-22.6%) and 12.1% (11.8-12.4%), respectively, for women. The mean number of sessions of exercise-related PA per week was 11.6 [sd 12.1] overall; 13.9 [sd 13.5] for men and 9.7 [sd 10.6] for women. A higher level of exercise-related PA was associated with having less than a tertiary education, being of lower income and eating more fruit and vegetables, but was not strongly related to other factors (Table 1). The pattern of PA making up the total weekly sessions also differed between the sexes, with women much less likely than men to report strenuous or moderate PA (Table 2).

Characteristics of study population according to total physical activity, housework/gardening and daily screen-time
aMeasured as top decile from 3 items assessing physical limitations in the past 4 weeks (eg how much bodily pain did you have in the past 4 weeks?)
bOnly 1% and 0.6%, respectively, of females are current smokers and regular drinkers

Relationship between being obese and measures of exercise-related physical activity (PA)
*adjusted for age, income and education

Overall, 49.4% (48.9-49.9%) of women and 34.4% (33.8-34.8%) of men reported doing household cleaning or gardening on most days of the week, while 3.7% (3.5-3.9%) of women and 8.8% (8.5-9.1%) of men reported that they did these seldom or never. Housework/gardening was more common among those who were married, not tertiary educated, of lower income and with greater fruit and vegetable intake than other cohort members (Table 1).

Leisure-related screen-time did not vary markedly between men and women: 17.8% (17.4-18.2%) of women and 22.2% (21.8-22.7%) of men reported less than two hours of daily screen-time, while 3.4% (3.2-3.6%) of women and 2.8% (2.6-2.9%) of men reported 8 hours or more. Average daily leisure-related screen-time was 2.9 hours [sd 1.9]; it was 3.0 hours [sd 1.9] in women and 2.8 hours [sd 1.8] in men. Higher levels of screen-time were more common among cohort members who were younger, unmarried, urban residents and of lower income, and who ate fried food daily and soft drinks or Western-style junkfood once a month or more often (Table 1).

Women tended to have greater levels of sitting time than men, with 46.6% (46.0-46.9%) of women and 36.8% (36.2-37.3%) of men reporting 8 or more hours of daily sitting time. Average daily sitting time was 6.6 hours [sd 3.8] overall; 6.8 hours [sd 3.9] in women and 6.2 hours [sd 1.8] in men.

The number of hours of daily screen-time was poorly but significantly inversely correlated with the number of weekly sessions of exercise-related PA (r = -0.016; 95% CI: -0.024 to -0.009) and doing household cleaning or gardening (r = -0.022; -0.029 to -0.014) but was more strongly and positively related to number of hours sitting per day (r = 0.16; 0.15 to 0.16). The number of weekly sessions of exercise-related PA was positively correlated with doing household cleaning or gardening (r = 0.15; 0.14 to 0.16). The correlations between sitting time and number of weekly sessions of exercise-related PA and cleaning/gardening were -0.054 (-0.061 to -0.047) and -0.041 (-0.049 to -0.034) respectively.

Obesity and exercise-related physical activity, housework, and gardening

In men, the OR for being obese decreased steadily and significantly with increasing weighted total weekly sessions of exercise-related PA, such that those reporting 18 or more sessions had an OR of obesity of 0.69 (0.63-0.75) compared to those with 0-3 sessions (p(trend) < 0.0001), and this relationship was observed particularly for moderate and strenuous PA (Table 2). There was no apparent relationship between being obese and moderate and strenuous PA in women, and the relationship between strenuous activity and obesity differed significantly between men and women (p(interaction) < 0.0001). However, in women an inverse relationship with being obese was observed mainly for mild PA and walking (Table 2).

For both sexes, the risk of being obese was consistently lower with increasing frequency of housework/gardening, with 33% (26-39%) and 33% (21-43%) lower adjusted ORs in men and women, respectively, in those reporting these activities daily versus seldom or never (Table 3).
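Confidence intervals of this form around a Pearson correlation are conventionally obtained with the Fisher z transformation; a minimal sketch follows. The r and n below are illustrative (per-variable missingness means this need not exactly reproduce the intervals reported above):

```python
import math

def corr_ci(r, n, z=1.96):
    """Approximate 95% CI for a Pearson correlation via Fisher's z."""
    fz = math.atanh(r)         # Fisher z = arctanh(r)
    se = 1 / math.sqrt(n - 3)  # approximate standard error on the z scale
    lo = math.tanh(fz - z * se)
    hi = math.tanh(fz + z * se)
    return lo, hi

lo, hi = corr_ci(-0.016, 74981)
print(lo, hi)
```

With a sample this large, even a correlation as weak as -0.016 has an interval that excludes zero, which is why the text stresses effect size alongside significance.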
The lower risk of obesity with increasing housework or gardening was independent of the level of exercise-related PA, in that the OR did not change materially (i.e. changed by <10%) following additional adjustment for exercise-related PA (see below) and a similar relationship was observed within separate categories of exercise-related PA (Figure 1). The inverse relationship between housework/gardening and obesity was still present following additional adjustment for screen-time and exercise-related PA. Compared to people who did housework or gardening seldom or never, the ORs (95% CI) for being obese in men were: 0.85 (0.76-0.94) for people who did housework or gardening 1-3 times per month; 0.79 (0.71-0.87) for 1-2 times per week; 0.80 (0.72-0.90) for 3-4 times per week; and 0.73 (0.66-0.80) for housework/gardening on most days, adjusting for age, income, education, screen-time and weighted weekly sessions of exercise-related PA. For women, the ORs for the same categories were: 0.95 (0.79-1.15); 0.76 (0.64-0.90); 0.75 (0.63-0.90); and 0.71 (0.60-0.84), respectively.

Relationship between being obese and gardening/housework, leisure-related computer or television use ("screen-time") and sitting time
*adjusted for age, income and education
numbers do not always sum to total due to missing values

Odds ratios (OR) for being obese in relation to weighted weekly sessions of exercise-related physical activity (ERPA), hours of daily screen-time and frequency of housework/gardening.
Obesity and leisure-related screen-time

Increasing leisure-related screen-time was associated with significantly and substantially increasing risk of being obese in both men and women, with 85% and 116% increases in risk, respectively, for 8 or more hours of daily screen-time versus <2 hours (Table 3, p(trend) < 0.0001). Overall, sitting time was not significantly related to the OR of being obese (p(trend) = 0.32), although a significant trend was observed in women (Table 3).

The positive relationship between screen-time and being obese was still present following additional adjustment for housework/gardening and exercise-related PA; the ORs (95% CI) for obesity in men were: 1.22 (1.14-1.30); 1.38 (1.27-1.50); 1.58 (1.36-1.83); and 1.80 (1.53-2.13), and for women were: 1.15 (1.05-1.27); 1.50 (1.35-1.67); 1.62 (1.36-1.92) and 2.13 (1.78-2.55), for people with: 2.0-2.9 hours; 3.0-3.9 hours; 4-7.9 hours; and ≥8 hours of daily screen-time versus 0-1.9 hours, respectively, adjusted for age, income, education, housework/gardening and exercise-related PA. Additional adjustment for consumption of fried foods, soft drink and junkfood and smoking and alcohol consumption did not materially alter the OR (data not shown).

Figure 1 shows the cohort divided into four groups according to their level of exercise-related PA (ERPA in the figure).
The ORs for being obese were then presented within each group according to the number of hours of total daily screen-time, with separate lines according to the frequency of housework/gardening. This figure shows increasing risk of being obese with increasing screen-time within each exercise-related PA group and within each housework/gardening group. It also shows that the finding of lower risks of being obese with increasing frequency of housework/gardening persists, even when screen-time and exercise-related PA were accounted for. When the relationships between being obese and exercise-related PA, screen-time and housework/gardening were modelled together, no significant interactions were observed (likelihood ratio χ² (39 df) = 38.54, p = 0.49), indicating that they were each independently associated with obesity.

The sex, income and education-adjusted OR of obesity per two-hour increase in daily screen-time is shown separately according to a variety of factors, including according to total exercise-related PA and according to housework and gardening, in Figure 2. There was an 18% (15-21%) increase in the risk of being obese with every two additional hours of daily screen-time overall, and a significant elevation in the risk of being obese with increasing screen-time was seen in all of the population sub-groups examined (Figure 2). There was a significantly greater increase in the risk of being obese with increasing screen-time in unmarried compared to married individuals (p(heterogeneity) < 0.0001). The relationship between being obese and screen-time was attenuated significantly in older cohort members (p(heterogeneity) = 0.02) and in those with higher incomes (p(heterogeneity) = 0.02).
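A per-increment OR of this kind compounds multiplicatively on the odds scale. As an illustration only, under the assumption that the 18% (15-21%) per-two-hour estimate applies uniformly across the range, the implied OR for four two-hour increments (eight extra hours) would be:

```python
# Illustrative compounding of a per-2-hour OR over 8 extra hours of screen-time
or_per_2h = 1.18
or_8h = or_per_2h ** 4           # four 2-hour increments
or_8h_lo = 1.15 ** 4             # compounding the lower CI bound
or_8h_hi = 1.21 ** 4             # compounding the upper CI bound
print(or_8h, or_8h_lo, or_8h_hi)
```

This assumption of log-linearity is a simplification; the categorical ORs reported above (1.80 in men, 2.13 in women for ≥8 hours) are the study's directly estimated figures.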
No significant variation in the relationship between screen-time and being obese was seen according to the other factors examined, including: sex; urban/rural residence history; education; smoking status; alcohol, fruit, vegetable, junkfood and fried food intake; disability; level of exercise-related PA; and frequency of housework/gardening.

Odds ratios (OR) for being obese per 2 hour increase in daily screen-time, in different population sub-groups, adjusted for age, sex, income and education, where appropriate.
 Obesity and domestic appliances The risk of being obese was significantly higher in men and women from households with a refrigerator, microwave oven or washing machine and in men from households with a water heater (Table 4). The risk of being obese increased significantly with the increasing number of such appliances within a household. These results were not altered materially following additional adjustment for housework/gardening frequency, smoking, alcohol consumption and consumption of fried foods, Western-style junkfood, fruit and vegetables (data not shown).\nRelationship between being obese and ownership of household appliances\n*adjusted for age, income and education
", "In men, the OR for being obese decreased steadily and significantly with increasing weighted total weekly sessions of exercise-related PA, such that those reporting 18 or more sessions had an OR of obesity of 0.69 (0.63-0.75) compared to those with 0-3 sessions (p(trend) < 0.0001) and this relationship was observed particularly for moderate and strenuous PA (Table 2). There was no apparent relationship between being obese and moderate and strenuous PA in women and the relationship between strenuous activity and obesity differed significantly between men and women (p(interaction) < 0.0001). However, in women an inverse relationship with being obese was observed mainly for mild PA and walking (Table 2).\nFor both sexes, the risk of being obese was consistently lower with increasing frequency of housework/gardening, with 33% (26-39%) and 33% (21-43%) lower adjusted ORs in men and women, respectively, in those reporting these activities daily versus seldom or never (Table 3). The lower risk of obesity with increasing housework or gardening was independent of the level of exercise-related PA, in that the OR did not change materially (i.e. changed by <10%) following additional adjustment for exercise-related PA (see below) and a similar relationship was observed within separate categories of exercise-related PA (Figure 1). The inverse relationship between housework/gardening and obesity was still present following additional adjustment for screen-time and exercise-related PA.
Compared to people who did housework or gardening seldom or never, the ORs (95% CI) for being obese in men were: 0.85 (0.76-0.94) for people who did housework or gardening 1-3 times per month; 0.79 (0.71-0.87) for 1-2 times per week; 0.80 (0.72-0.90) for 3-4 times per week; and 0.73 (0.66-0.80) for housework/gardening on most days, adjusting for age, income, education, screen-time and weighted weekly sessions of exercise-related PA. For women, the ORs for the same categories were: 0.95 (0.79-1.15); 0.76 (0.64-0.90); 0.75 (0.63-0.90); and 0.71 (0.60-0.84), respectively.\nRelationship between being obese and gardening/housework, leisure-related computer or television use (\"screen-time\") and sitting time\n*adjusted for age, income and education\nnumbers do not always sum to total due to missing values\nOdds ratios (OR) for being obese in relation to weighted weekly sessions of exercise-related physical activity (ERPA), hours of daily screen-time and frequency of housework/gardening.", "Increasing leisure-related screen-time was associated with significantly and substantially increasing risk of being obese in both men and women, with 85% and 116% increases in risk, respectively, for 8 or more hours of daily screen-time versus <2 hours (Table 3, p(trend) < 0.0001). 
Overall, sitting-time was not significantly related to the OR of being obese (p(trend) = 0.32), although a significant trend was observed in women (Table 3).\nThe positive relationship between screen-time and being obese was still present following additional adjustment for housework/gardening and exercise-related PA; the ORs (95% CI) for obesity in men were: 1.22 (1.14-1.30); 1.38 (1.27-1.50); 1.58 (1.36-1.83); and 1.80 (1.53-2.13) and for women were: 1.15 (1.05-1.27); 1.50 (1.35-1.67); 1.62 (1.36-1.92) and 2.13 (1.78-2.55), for people with: 2.0-2.9 hours; 3.0-3.9 hours; 4-7.9 hours; and ≥8 hours of daily screen-time versus 0-1.9 hours, respectively, adjusted for age, income, education, housework/gardening and exercise-related PA. Additional adjustment for consumption of fried foods, soft drink and junkfood and smoking and alcohol consumption did not materially alter the OR (data not shown).\nFigure 1 shows the cohort divided into four groups according to their level of exercise-related PA (ERPA in the figure). The ORs for being obese were then presented within each group according to the number of hours of total daily screen-time, with separate lines according to the frequency of housework/gardening. This figure shows increasing risk of being obese with increasing screen-time within each exercise-related PA group and within each housework/gardening group. It also shows that the finding of lower risks of being obese with increasing frequency of housework/gardening persists, even when screen-time and exercise-related PA were accounted for. 
When the relationships between being obese and exercise-related PA, screen-time and housework/gardening were modelled together, no significant interactions were observed (likelihood ratio χ² = 38.54, 39 df, p = 0.49), indicating that they were each independently associated with obesity.\nThe sex, income and education-adjusted OR of obesity per two-hour increase in daily screen-time is shown separately according to a variety of factors, including according to total exercise-related PA and according to housework and gardening, in Figure 2. There was an 18% (15-21%) increase in the risk of being obese with every two additional hours of daily screen-time overall and a significant elevation in the risk of being obese with increasing screen-time was seen in all of the population sub-groups examined (Figure 2). There was a significantly greater increase in the risk of being obese with increasing screen-time in unmarried compared to married individuals (p(heterogeneity) < 0.0001). The relationship between being obese and screen-time was attenuated significantly in older cohort members (p(heterogeneity) = 0.02) and in those with higher incomes (p(heterogeneity) = 0.02). No significant variation in the relationship between screen-time and being obese was seen according to the other factors examined, including: sex; urban/rural residence history; education; smoking status; alcohol, fruit, vegetable, junkfood and fried food intake; disability; level of exercise-related PA; and frequency of housework/gardening.\nOdds ratios (OR) for being obese per 2 hour increase in daily screen-time, in different population sub-groups, adjusted for age, sex, income and education, where appropriate.", "The risk of being obese was significantly higher in men and women from households with a refrigerator, microwave oven or washing machine and in men from households with a water heater (Table 4).
The risk of being obese increased significantly with the increasing number of such appliances within a household. These results were not altered materially following additional adjustment for housework/gardening frequency, smoking, alcohol consumption and consumption of fried foods, Western-style junkfood, fruit and vegetables (data not shown).\nRelationship between being obese and ownership of household appliances\n*adjusted for age, income and education", "In this cohort of Thai men and women, the risk of being obese is consistently higher in those with greater time spent in leisure-related television watching and computer games and inversely associated with time spent doing housework or gardening. Inverse associations between obesity and total weekly sessions of exercise-related PA are observed in men with a significantly weaker association seen in women. Exercise-related PA, screen-time and housework/gardening each have independent associations with obesity. The magnitude of the associations between these risk factors and obesity is substantial. Individuals reporting daily housework/gardening have a 33% lower risk of being obese compared to those reporting these activities seldom or never and there is an 18% increase in the risk of obesity with every two hours of additional daily screen-time.\nThe findings reported here show an inverse relationship between exercise-related PA and being obese that is stronger in men than in women and may be somewhat weaker than that observed in Western populations. The inverse relationship between obesity and exercise, usually leisure-related PA, is well established in Western countries [5,6,21]. Although a reduced risk of obesity with increasing PA has been demonstrated in certain Asian populations, including those in China [23] and Korea [24], the specific relationship of leisure-related PA to obesity is less clear, and may be of lesser magnitude. The reason for this is not known.
Potential explanations include: the lack of data relevant to Asia; the possibility that the proportion of total energy expenditure attributable to leisure-related PA is lower in the Asian context [15]; differing types and intensities of leisure-related PA compared to the West; and differences in measurement error.\nWe were unable to locate any previous studies in adults of the relationship between being obese and television and computer use in Asia. Studies in Western populations consistently show increases in obesity with increasing time spent in sedentary activities, particularly screen-time [7,8,25-27]. The direct relationship between sedentary behaviours and obesity is observed in both cross-sectional [7,25,26] and prospective studies [8,15,28]. Studies have varied in the way they have measured and categorised screen-time and other sedentary behaviours, as well as obesity-related outcomes, so it is difficult to summarise quantitatively the magnitude of the risk involved. However, the 18% increase in obesity risk per 2 hours of additional daily screen-time observed here is consistent with the 23% increase observed in US nurses [8] and older Australian adults [9].\nThe one previous publication we were able to locate examining the relationship between BMI and domestic activity in the Asian context demonstrated a significantly lower BMI in men with increasing time spent in domestic activities and a non-significant relationship in women, in China [29]. Studies in Western populations have generally not found a significant relationship between BMI and domestic activity [12,13,30], even heavy domestic activity, although one study in older US adults found house cleaning, but not gardening, to be associated with decreased BMI on multivariate analysis [14] and another found decreased all-cause mortality with increasing domestic PA [30].
The study presented here is the largest to date investigating the issue and shows a decreasing risk of being obese with increasing frequency of housework and gardening, independent of exercise-related PA and screen-time.\nAlthough the lack of a positive finding in the Western context may reflect measurement error, the play of chance, small sample sizes or other factors, it is also possible that domestic PA in Asian countries differs from that in Western countries, for example, due to use of labour saving devices or differing practices. Increasing use of labour saving devices is part of the transition accompanying industrialisation and is associated with reduced energy expenditure in domestic tasks [31]. Decreasing domestic physical activity over time has been noted in one Chinese study [29]. We found household ownership of domestic appliances to be significantly associated with increasing risk of being obese, with increasing risks of being obese accompanying increasing numbers of appliances within the household. However, the lack of specificity in the relationship of the different household appliances to obesity and the apparently greater effect in men compared to women suggests that this may well not be a causal relationship; it may instead reflect a broader difference in socioeconomic status and lifestyle between households with and without appliances.\nStrengths of the current study include its large size and inclusion of adults from a wide range of social and economic backgrounds. Although the cohort is somewhat younger and more urbanised than the Thai general population, it represents well the geographic regions of Thailand and exhibits substantial heterogeneity in the distribution of other factors [17]. For example, 35% of males and 47% of females had low incomes (<7000 Baht per month or $5.50 US per day). 
Participants in the Thai Cohort Study in 2005 were very similar to the STOU student body in that year for sex ratio, age distribution, geographic region of residence, income, education and course of study [32]. Much of the health-risk transition underway in middle-income countries is mediated by education [33,34] and the cohort is, by definition, ahead of national education trends. It is therefore likely to provide useful early insights into the effects and mediators of the health-risk transition in middle-income countries. Previous relevant studies from the cohort include examination of the broader health-related correlates of obesity [22] and the relationship between gender, socioeconomic status and obesity [35].\nThe proportion of the cohort classified as overweight but not obese is similar to the 18% found in the third Thai National Health Examination Survey (2004), while obesity is much lower among STOU women (10% compared to 36% in the National Health Examination Survey) but similar among men in the two studies (23%) [2]. It should be noted that the Thai Cohort Study, like the vast majority of cohort studies, is not designed to be representative of the general population, but is meant to provide sufficient heterogeneity of exposure to allow reliable estimates of relative risk based on internal comparisons [36]. The \"healthy cohort effect\" and the 44% response rate for this study mean that the estimates of relative risk shown here are likely to be conservative, since community members with more extreme behaviours and health conditions may be less likely to be attending an open university or to participate. However, it is important to note that ORs comparing groups within the cohort remain valid and can be generalised more broadly [36,37].
Furthermore, the major comparisons in this paper are between obese and non-obese individuals, rather than obese and \"healthy weight\" individuals, which is also likely to lead to more conservative estimates of association.\nThe limitations of the study should also be borne in mind. The measures used for tobacco smoking and alcohol consumption were brief and the physical activity measure used has, to our knowledge, only been validated in Western populations. BMI was based on self-reported height and weight, which have been shown to provide a valid measure of body size in this population, with correlation coefficients for BMI based on self-reported versus measured height and weight of 0.91 for men and 0.95 for women [38]. However, BMI based on self-reported measures was underestimated by an average of 0.77 kg/m² for men and 0.62 kg/m² for women [38]. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25; weight-related disease in Asian populations occurs least when BMI is about 22 or less [39], and significantly increases with BMI ≥23 [40]. The excellent correlation between measured and self-reported values means that self-reported values are generally reliable for ranking participants according to BMI in epidemiological studies, as was done here. The general underestimation of BMI means that absolute values of BMI from self-reported measures are less reliable.\nSelf-reported leisure-related television and computer use has good test-retest reliability and validity, although respondents have a tendency to under-report the number of hours involved [41]. We were not able to locate any studies validating these measures in Asian populations. Time spent in habitual, incidental physical activity is difficult to measure [42] and domestic activities, screen-time and overall physical activity are all likely to be reported with differing degrees of measurement error.
Although both domestic activities and screen-time remain predictive of obesity within categories of physical activity, it is possible that this is the result of greater measurement error in ascertaining overall physical activity than in ascertaining domestic and screen-related activities [25]. However, the lack of strong correlation between screen-time and overall physical activity (r = -0.016) goes against this argument, as does the previous observation that television viewing remains a significant predictor of obesity in women, following adjustment for pedometer measured physical activity [42].\nObesity and overweight are the result of sustained positive energy balance, whereby energy intake exceeds energy expenditure. Although dietary factors are important, there is mounting evidence that insufficient energy expenditure is likely to be a key factor underlying the global obesity epidemic [10]. The main determinant of individual energy expenditure is the basal metabolic rate, which typically accounts for 70% of all kilojoules burned [15]. A further 10% of energy expenditure comes from the thermic effect of food and the remaining 20% comes from PA [15]. PA is often conceptualised as comprised of purposeful and non-purposeful physical activity; the latter is also termed \"incidental\" PA. A recent study from the US found that over half of population-level energy expenditure from PA was from sedentary and low-intensity tasks, 16% was attributed to occupational activity above and beyond sitting at work, 16% was attributable to domestic activities and yard work of at least moderate intensity and less than 5% was attributable to leisure-time PA [43]. This evidence is consistent with the suggestion that differences in incidental physical activity are responsible for the greatest variations in energy expenditure between individuals and populations [10]. 
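The shares quoted above (roughly 70% basal metabolic rate, 10% thermic effect of food, 20% physical activity, with under 5% of the PA component from leisure-time PA in the cited US estimate) can be turned into a back-of-envelope partition of one day's energy expenditure. The 10 000 kJ total below is an assumed round number for illustration, not a figure from the study:

```python
# Illustrative arithmetic only: splitting an assumed total daily energy
# expenditure by the typical shares quoted in the text (~70% basal
# metabolic rate, ~10% thermic effect of food, ~20% physical activity).
# The 10 000 kJ total is a hypothetical round number, not a study figure.

TOTAL_KJ = 10_000  # assumed daily energy expenditure for one person

SHARES_PCT = {
    "basal metabolic rate": 70,
    "thermic effect of food": 10,
    "physical activity": 20,
}

components = {name: TOTAL_KJ * pct // 100 for name, pct in SHARES_PCT.items()}
# components == {"basal metabolic rate": 7000,
#                "thermic effect of food": 1000,
#                "physical activity": 2000}

# Leisure-time PA accounts for <5% of PA energy in the US estimate cited
# above, i.e. under ~100 kJ of this hypothetical day's total:
leisure_upper_kj = components["physical activity"] * 5 // 100
print(f"leisure-time PA upper bound: {leisure_upper_kj} kJ")  # 100 kJ
```

On these assumed numbers, leisure-time exercise is a very small slice of total expenditure, which is the quantitative point behind the emphasis on incidental activity.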
Moreover, recent evidence indicates that one of the most potent mechanisms determining cardiovascular risk factors, including obesity and metabolic disorders, is the amount of time spent in high volume daily intermittent low-intensity postural or ambulatory activities, which account for as much as 90% of energy expended in physical activity [11].\nSedentary behaviours generally involve sitting or lying down and are characterised by low energy expenditure (metabolic equivalent intensity <2) [41]. A substantial amount of time spent in sedentary activities is likely to contribute to obesity through reduced overall energy expenditure, mainly through its impact on incidental physical activity, since high sedentary time may co-exist with relatively high levels of exercise-related physical activity. Screen-time, particularly television watching, is also associated with other health behaviours, such as eating fatty foods. However, the finding of increased obesity among those watching greater amounts of television persisted in this dataset after adjustment for intake of fatty food and in other studies following adjustment for total energy intake [8] and foods eaten while watching television [27], so this is unlikely to explain much of its effects.\nIn lower- and middle-income countries, including Thailand, industrialisation is generally accompanied by increasing urbanisation, a more sedentary lifestyle, with increasing car and computer use and a higher fat diet dominated by more refined foods [16]. It is also characterised by a shift in work patterns for a substantial proportion of the population, from high energy expenditure activities such as farming, mining and forestry to less energy-demanding jobs in the service sector [16]. All of these changes are likely to increase population obesity.
There are a number of specific barriers to increasing physical activity in many Asian countries, including environmental factors such as heat, inadequate urban infrastructure, pollution and other hazards. Furthermore, chronic malnutrition has been common in many Asian countries, leading to stunting in significant proportions of the population and rendering them vulnerable to obesity as food availability improves.\nThe importance of obesity, the metabolic syndrome and diabetes in Thailand has been highlighted extensively [44-46]. In Thailand, and in many other countries, social factors are key upstream determinants of the major influences on obesity. For example, domestic duties are often divided along gender lines and many wealthier households have servants, particularly to do the heavier work. In this cohort, higher socioeconomic status was accompanied by increasing risk of being obese in men and decreasing risk of being obese in women [35]. This pattern is believed to represent an intermediate stage in the health-risk transition between less-developed countries such as China, in which high socioeconomic status is associated with increased obesity in both men and women [47], and Western populations where high socioeconomic status is associated with reduced obesity in both men and women [2,35].\nThe study was able to investigate simultaneously a number of activity-related measures and the large numbers allowed quantification of the association of these factors with obesity within a range of population subgroups. However, the analyses presented here are cross-sectional so it is not possible to directly attribute causality to the relationships observed or to exclude reverse causality. Reverse causality occurs when an exposure varies because of the specific condition under investigation. In this case, reverse causality would mean that obesity might result in reduced exercise-related PA, increased sedentary behaviour and decreased domestic PA.
There are a priori reasons why it is likely that certain elements of the PA-BMI relationship are causal, i.e. that reduced energy expenditure due to reduced PA results in increased BMI. However, it is also possible that people with high BMI may change their level of PA. Intuitively, this might apply more to exercise-related PA than to screen-time or domestic activities; people with a high BMI may do more exercise-related PA in order to lose weight or may reduce their exercise-related PA, due to the extra exertion required because of their weight or obesity-related health issues (e.g. joint problems). In Thai society, women in particular are under pressure to be thin and the increased walking among women of higher BMI may reflect this. Going against a large role for reverse causality is the fact that increasing inactivity has been shown to result in increased obesity in longitudinal data [8,48] and experimental studies show that increasing BMI by overfeeding lean individuals does not result in increased sedentary behaviour [49]. This issue is not resolved entirely by using longitudinal data, since the major risk factor for incident obesity is having a high BMI at baseline [49]. We propose that the relationship between sedentary behaviour and obesity is likely to be complex, with a causal relationship between inactivity and obesity predominating. There is likely to be some contribution of obesity leading to inactivity [48], or indeed a \"spiral\" relationship, whereby inactivity leads to obesity, which further exacerbates inactivity, leading to further increases in obesity [50].", "In common with many middle- to low-income countries, the prevalence of overweight and obesity in Thailand is lower than that seen in many Western nations, but is increasing rapidly. Avoiding the transition to the obesity patterns seen in the West is a key priority.
The data presented here suggest that habitual, high volume, low intensity PA is likely to be important for maintaining a healthy weight and are in keeping with other data that show that increasing exercise-related leisure-time PA alone is unlikely to be sufficient to prevent population obesity [15]. Leisure-related television and computer use were strongly related to the risk of being obese. Research focusing on habitual activities and sedentary behaviours is relatively new. Effective interventions to reduce sedentary time and increase incidental activity are being developed; innovative interventions applicable to the Asian context are needed urgently." ]
[ "Background", "Methods", "Study population", "Data", "Statistical methods", "Ethical approval", "Results", "Obesity and exercise-related physical activity, housework, and gardening", "Obesity and leisure related screen-time", "Obesity and domestic appliances", "Discussion", "Conclusions", "Supplementary Material" ]
[ "The prevalence of obesity is rising rapidly in most Asian countries, with increases of 46% in Japan and over 400% in China observed from the 1980s to early 2000s [1]. In Thailand, the prevalence of obesity increased by around 19% from 1997 to 2004 alone [2]. There have been accompanying increases in morbidity related to conditions such as diabetes and cardiovascular disease in Asian countries [3,4].\nIt is well established in Western populations that increasing purposeful or leisure-time physical activity (PA) is associated with reduced rates of obesity [5,6]. Recent evidence, also from Western countries, suggests that sedentary activities, such as watching television or using a computer, are associated with increasing obesity, independent of purposeful PA [7-9]. The role of incidental PA and overall energy expenditure, in influencing obesity has been highlighted [10,11]. The interplay between these factors and their combined effects on obesity are not well understood and information relevant to Asian populations is particularly scarce. Furthermore, the relationship between domestic activities and obesity is unclear [12-14]. This is important because physical activity related to patterns of daily activity differs between Asia and Western countries [15] and because many Asian countries are experiencing rapid health and lifestyle transitions [16].\nThis paper examines in detail the relationships between obesity, exercise-related PA, domestic activities and sedentary behaviours in Thailand, with particular emphasis on the interaction between these factors.", " Study population The Sukhothai Thammathirat Open University (STOU) Cohort Study is designed to provide evidence regarding the transition in risk factor profiles, health outcomes and other factors accompanying development and is described in detail elsewhere [17]. 
In brief, from April to November 2005, enrolled STOU students across Thailand who had completed at least one semester were mailed a 20-page health questionnaire and asked to join the study by completing the questionnaire, providing signed consent for follow-up, and returning these in a reply-paid envelope. A total of 87 134 men and women aged 15-87 years (median 29 years) joined the cohort.\n Data All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (See Additional files 1 and 2 for questionnaires). Where possible, questionnaire items that had been standardised and validated were used.\nSelf-reported weight and height were used to calculate participants' BMI, as their weight in kilograms, divided by the square of their height in metres.
Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25, respectively, in accordance with International Obesity Taskforce recommendations [18] and studies in other Asian populations [19].\nInformation on exercise-related PA was obtained through a question asking: \"During a typical week (7-day period), how many times on average do you do the following kinds of exercise?\", with responses requested for: \"Strenuous exercise (heart beats rapidly) for more than 20 minutes, e.g. heavy lifting, digging, aerobics or fast bicycling, running, soccer, trakraw\"; \"Moderate exercise (not exhausting but breathe harder than normal) for more than 20 minutes, e.g. carrying light loads, cycling at a regular pace\"; \"Mild exercise (minimal effort) for more than 20 minutes, e.g. yoga, Tai-Chi, bowling\" and; \"Walking for at least 10 minutes e.g. at work, at home, exercise\". This question is a sessions-based measure of physical activity, similar to the sessions component of the International Physical Activity Questionnaire and the Active Australia Survey [20]; these session measures have been shown to provide a reliable index of sufficiency of physical activity in non-Thai populations [21]. It incorporates the three major intensities of activity (strenuous, moderate and walking), included in these measures, as well as an additional \"mild\" category that was created specifically for this study to cover common types of activity in Thailand. 
The responses to this question were used to derive a weighted measure of overall metabolically-adjusted exercise-related PA, calculated as \"2 × strenuous + moderate + mild + walking\" exercise sessions, in keeping with previous calculations of this quotient [20].\nThe frequency of reported housework and gardening was used as a measure of incidental PA and was classified into 5 groups according to the response to the question \"How often do you do household cleaning or gardening work?\" with options ranging from seldom or never, to most days. Total daily leisure-related screen-time and sitting time were classified according to the participant's response to the question \"How many hours per day do you usually spend: Watching TV or playing computer games? Sitting for any purpose (e.g. reading, resting, working, thinking)?\". Sitting time could also include screen-time, as participants were not specifically asked to exclude screen-time from this measure. The availability of domestic appliances was classified according to the response to the question \"Which of the following does your home have now?\", with options including microwave oven, refrigerator, water heater and washing machine. The questions on housework/gardening, screen-time, sitting time and availability of domestic appliances were devised specially for the Thai Cohort Study.\nEducational attainment was classified as: secondary school graduation or less; post-secondary school certificate or diploma; and tertiary graduate. Personal monthly income was recorded in Thai Baht in four categories (≤7,000; 7,001-10,000; 10,001-20,000; >20,000).
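The two central derived measures in this section can be sketched in code. The weighting formula ("2 × strenuous + moderate + mild + walking") and the Asian BMI cut-points (overweight ≥23, obese ≥25) are taken from the text; the function names and the example values are ours, for illustration only:

```python
# Sketch of the derived measures described above. The weighting formula
# and BMI cut-points are from the text; names and examples are hypothetical.

def weighted_pa_sessions(strenuous: int, moderate: int, mild: int, walking: int) -> int:
    """Weighted weekly sessions of exercise-related PA:
    2 x strenuous + moderate + mild + walking."""
    return 2 * strenuous + moderate + mild + walking

def bmi_category(weight_kg: float, height_m: float) -> str:
    """BMI = kg / m^2, classified with the Asian cut-points used here
    (overweight >= 23, obese >= 25)."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 25:
        return "obese"
    if bmi >= 23:
        return "overweight"
    return "not overweight"

print(weighted_pa_sessions(strenuous=2, moderate=3, mild=1, walking=5))  # 13
print(bmi_category(70, 1.65))  # BMI 25.7 -> "obese"
```

Note that strenuous sessions count double, so the weighted total is an index of metabolically-adjusted volume rather than a simple session count.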
Respondents recorded the frequency of consumption of deep-fried food, and of soft drinks and Western-style fast foods such as pizza (collectively known as "junkfood"), on a five-point Likert scale ranging from never or less than once a month to once or more a day; this was categorised as <3 times and ≥3 times per week for fried food, and as seldom (never or less than once per month) and regularly (≥ once per month) for "junkfood". Fruit and vegetable intakes were recorded as serves eaten per day and categorised as <2 and ≥2 serves per day. Participants were asked whether they had ever smoked, when they started and when they quit, and were categorised as current smokers or not current smokers, with similar questions and categories for alcohol consumption.

Analysis was restricted to the 95% of individuals aged between 20 and 50 years with BMIs between 11 and 50. Individuals were excluded from the analyses if they were missing data on age or sex (n = 2), height or weight (n = 1030) or physical activity or inactivity (n = 6863), leaving 74,981 participants.

All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (see Additional files 1 and 2 for questionnaires). Where possible, questionnaire items that had been standardised and validated were used.

Self-reported weight and height were used to calculate participants' BMI as their weight in kilograms divided by the square of their height in metres.
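As a minimal sketch (function names are ours, not from the study), the BMI calculation and the Asian-specific categories described above can be expressed as:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify_bmi(b):
    """Asian-specific categories used in the study (IOTF recommendations):
    underweight < 18.5, healthy 18.5-22.9, overweight 23.0-24.9,
    obese >= 25.0."""
    if b < 18.5:
        return "underweight"
    if b < 23.0:
        return "healthy weight"
    if b < 25.0:
        return "overweight"
    return "obese"

# Example: 70 kg at 1.65 m gives a BMI of about 25.7
print(classify_bmi(bmi(70, 1.65)))  # -> obese
```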
Statistical methods

The relationships between a range of personal characteristics and exercise-related PA, housework/gardening and leisure-related screen-time were examined, as well as the correlations between the individual measures of physical activity and inactivity. Variables were categorised into the groups listed in the tables.

The proportion of the study population classified as obese according to exercise-related PA, housework, leisure-related screen-time and sitting time was examined. Prevalence odds ratios (OR) and 95% CIs for obesity according to PA, housework, screen-time and sitting time were estimated using unconditional logistic regression; crude and adjusted odds ratios were computed.
ORs were presented separately for men and women and adjusted for age (as a continuous variable), income and educational attainment, with exploration of the effect of additional adjustment for factors such as marital status, smoking, alcohol consumption and urban/rural residence. We evaluated the significance of interaction terms using a likelihood ratio test, comparing the model with and without the interaction terms.

We examined how much of any association of a specific PA or sedentary behaviour with obesity was attributable to differences in total physical activity level by modelling the three PA variables and their two-way and three-way interactions simultaneously. We also examined how much of the association of certain sedentary behaviours could be attributed to the effect of other sedentary behaviours and to consumption of fried foods, soft drinks and Western-style junkfood, using mutual adjustment.

All analyses were carried out in STATA version 9.2. All statistical tests were two-sided, using a significance level of p < 0.05. Given the large sample size, conclusions were based on both statistical significance and effect size.
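The study estimated ORs with unconditional logistic regression in STATA. As a minimal illustration of the underlying arithmetic, for a binary exposure the crude prevalence OR and its Wald 95% CI can be computed directly from a 2×2 table (this equals the OR from an unadjusted logistic model); the counts below are hypothetical, not from the study:

```python
import math

def crude_or(a, b, c, d):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: obesity by high vs low screen-time
or_, lo, hi = crude_or(300, 700, 200, 800)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.71 (95% CI 1.40-2.11)
```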
Ethical approval

Ethics approval was obtained from Sukhothai Thammathirat Open University Research and Development Institute (protocol 0522/10) and the Australian National University Human Research Ethics Committee (protocol 2004344). Informed written consent was obtained from all participants.

The Sukhothai Thammathirat Open University (STOU) Cohort Study is designed to provide evidence regarding the transition in risk factor profiles, health outcomes and other factors accompanying development and is described in detail elsewhere [17].
In brief, from April to November 2005, enrolled STOU students across Thailand who had completed at least one semester were mailed a 20-page health questionnaire and asked to join the study by completing the questionnaire, providing signed consent for follow-up, and returning these in a reply-paid envelope. A total of 87,134 men and women aged 15-87 years (median 29 years) joined the cohort.

Of 74,981 participants with appropriate data, 41,351 (55.2%, 95% CI 54.8-55.5%) were classified as being of healthy weight (BMI 18.5-22.9), 10,733 (14.4%, 14.1-14.6%) were underweight (BMI < 18.5), 11,241 (15.0%, 14.7-15.2%) were overweight but not obese (BMI 23.0-24.9) and 11,616 (15.6%, 15.2-15.7%) were obese (BMI ≥ 25.0).

Men were far more likely to be overweight (21.7%, 21.3-22.1%) or obese (22.4%, 22.0-22.9%) than women (9.5% and 9.9%, respectively), while women were more likely to be underweight (21.3%, 20.9-21.7%) than men (5.9%, 5.6-6.1%). Compared to other members of the study cohort, obesity prevalence was higher in older participants, urban dwellers and those with higher consumption of fried food (data not shown) [22].

Patterns of exercise-related PA varied between men and women, with 12.5% (12.2-12.9%) of men reporting 0-3 sessions and 26.3% (25.8-26.8%) reporting ≥18 sessions of exercise-related PA per week, compared to 22.2% (21.8-22.6%) and 12.1% (11.8-12.4%), respectively, for women. The mean number of sessions of exercise-related PA per week was 11.6 [sd 12.1] overall: 13.9 [sd 13.5] for men and 9.7 [sd 10.6] for women. A higher level of exercise-related PA was associated with having less than a tertiary education, being of lower income and eating more fruit and vegetables, but was not strongly related to other factors (Table 1).
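The proportions reported above carry normal-approximation 95% CIs; as a sketch of how such an estimate can be reproduced (small rounding differences from the published figures are expected), the healthy-weight prevalence works out as follows:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Prevalence (%) with a normal-approximation 95% CI."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return 100 * p, 100 * (p - z * se), 100 * (p + z * se)

# Healthy-weight participants: 41,351 of 74,981
p, lo, hi = prevalence_ci(41351, 74981)
print(f"{p:.1f}% (95% CI {lo:.1f}-{hi:.1f}%)")  # ~55.1% (54.8-55.5%)
```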
The pattern of PA making up the total weekly sessions also differed between the sexes, with women much less likely than men to report strenuous or moderate PA (Table 2).

Table 1: Characteristics of study population according to total physical activity, housework/gardening and daily screen-time. (a Measured as top decile from 3 items assessing physical limitations in the past 4 weeks, e.g. how much bodily pain did you have in the past 4 weeks? b Only 1% and 0.6%, respectively, of females are current smokers and regular drinkers.)

Table 2: Relationship between being obese and measures of exercise-related physical activity (PA). (*Adjusted for age, income and education.)

Overall, 49.4% (48.9-49.9%) of women and 34.4% (33.8-34.8%) of men reported doing household cleaning or gardening on most days of the week, while 3.7% (3.5-3.9%) of women and 8.8% (8.5-9.1%) of men reported that they did these seldom or never. Housework/gardening was more common among those who were married, not tertiary educated, of lower income and with greater fruit and vegetable intake than other cohort members (Table 1).

Leisure-related screen-time did not vary markedly between men and women: 17.8% (17.4-18.2%) of women and 22.2% (21.8-22.7%) of men reported less than two hours of daily screen-time, while 3.4% (3.2-3.6%) of women and 2.8% (2.6-2.9%) of men reported 8 hours or more. Average daily leisure-related screen-time was 2.9 hours [sd 1.9]: 3.0 hours [sd 1.9] in women and 2.8 hours [sd 1.8] in men. Higher levels of screen-time were more common among cohort members who were younger, unmarried, urban residents and of lower income, and who ate fried food daily and consumed soft drinks or Western-style junkfood once a month or more often (Table 1).

Women tended to have greater levels of sitting time than men, with 46.6% (46.0-46.9%) of women and 36.8% (36.2-37.3%) of men reporting 8 or more hours of daily sitting time.
Average daily sitting time was 6.6 hours [sd 3.8] overall: 6.8 hours [sd 3.9] in women and 6.2 hours [sd 1.8] in men.

The number of hours of daily screen-time was weakly but significantly inversely correlated with the number of weekly sessions of exercise-related PA (r = -0.016; 95% CI: -0.024 to -0.009) and with doing household cleaning or gardening (r = -0.022; -0.029 to -0.014), but was more strongly and positively related to the number of hours sitting per day (r = 0.16; 0.15 to 0.16). The number of weekly sessions of exercise-related PA was positively correlated with doing household cleaning or gardening (r = 0.15; 0.14 to 0.16). The correlations between sitting time and number of weekly sessions of exercise-related PA and cleaning/gardening were -0.054 (-0.061 to -0.047) and -0.041 (-0.049 to -0.034), respectively.

Obesity and exercise-related physical activity, housework, and gardening

In men, the OR for being obese decreased steadily and significantly with increasing weighted total weekly sessions of exercise-related PA, such that those reporting 18 or more sessions had an OR of obesity of 0.69 (0.63-0.75) compared to those with 0-3 sessions (p(trend) < 0.0001); this relationship was observed particularly for moderate and strenuous PA (Table 2). There was no apparent relationship between being obese and moderate or strenuous PA in women, and the relationship between strenuous activity and obesity differed significantly between men and women (p(interaction) < 0.0001). However, in women an inverse relationship with being obese was observed mainly for mild PA and walking (Table 2).

For both sexes, the risk of being obese was consistently lower with increasing frequency of housework/gardening, with 33% (26-39%) and 33% (21-43%) lower adjusted ORs in men and women, respectively, in those reporting these activities daily versus seldom or never (Table 3).
The lower risk of obesity with increasing housework or gardening was independent of the level of exercise-related PA, in that the OR did not change materially (i.e. changed by <10%) following additional adjustment for exercise-related PA (see below), and a similar relationship was observed within separate categories of exercise-related PA (Figure 1). The inverse relationship between housework/gardening and obesity was still present following additional adjustment for screen-time and exercise-related PA. Compared to people who did housework or gardening seldom or never, the ORs (95% CI) for being obese in men were: 0.85 (0.76-0.94) for housework or gardening 1-3 times per month; 0.79 (0.71-0.87) for 1-2 times per week; 0.80 (0.72-0.90) for 3-4 times per week; and 0.73 (0.66-0.80) for most days, adjusting for age, income, education, screen-time and weighted weekly sessions of exercise-related PA. For women, the ORs for the same categories were: 0.95 (0.79-1.15); 0.76 (0.64-0.90); 0.75 (0.63-0.90); and 0.71 (0.60-0.84), respectively.

Table 3: Relationship between being obese and gardening/housework, leisure-related computer or television use ("screen-time") and sitting time. (*Adjusted for age, income and education; numbers do not always sum to total due to missing values.)

Figure 1: Odds ratios (OR) for being obese in relation to weighted weekly sessions of exercise-related physical activity (ERPA), hours of daily screen-time and frequency of housework/gardening.

Obesity and leisure-related screen-time

Increasing leisure-related screen-time was associated with significantly and substantially increasing risk of being obese in both men and women, with 85% and 116% increases in risk, respectively, for 8 or more hours of daily screen-time versus <2 hours (Table 3, p(trend) < 0.0001). Overall, sitting time was not significantly related to the OR of being obese (p(trend) = 0.32), although a significant trend was observed in women (Table 3).

The positive relationship between screen-time and being obese was still present following additional adjustment for housework/gardening and exercise-related PA; the ORs (95% CI) for obesity in men were: 1.22 (1.14-1.30); 1.38 (1.27-1.50); 1.58 (1.36-1.83); and 1.80 (1.53-2.13), and for women were: 1.15 (1.05-1.27); 1.50 (1.35-1.67); 1.62 (1.36-1.92); and 2.13 (1.78-2.55), for people with 2.0-2.9, 3.0-3.9, 4.0-7.9 and ≥8 hours of daily screen-time versus 0-1.9 hours, respectively, adjusted for age, income, education, housework/gardening and exercise-related PA. Additional adjustment for consumption of fried foods, soft drink and junkfood, and for smoking and alcohol consumption, did not materially alter the ORs (data not shown).

Figure 1 shows the cohort divided into four groups according to their level of exercise-related PA (ERPA in the figure).
The ORs for being obese were then presented within each group according to the number of hours of total daily screen-time, with separate lines according to the frequency of housework/gardening. This figure shows increasing risk of being obese with increasing screen-time within each exercise-related PA group and within each housework/gardening group. It also shows that the lower risk of being obese with increasing frequency of housework/gardening persists even when screen-time and exercise-related PA are accounted for. When the relationships between being obese and exercise-related PA, screen-time and housework/gardening were modelled together, no significant interactions were observed (likelihood ratio χ² (39 df) = 38.54, p = 0.49), indicating that each was independently associated with obesity.

The sex-, income- and education-adjusted OR of obesity per two-hour increase in daily screen-time is shown separately according to a variety of factors, including total exercise-related PA and housework/gardening, in Figure 2. There was an 18% (15-21%) increase in the risk of being obese with every two additional hours of daily screen-time overall, and a significant elevation in the risk of being obese with increasing screen-time was seen in all of the population sub-groups examined (Figure 2). There was a significantly greater increase in the risk of being obese with increasing screen-time in unmarried compared to married individuals (p(heterogeneity) < 0.0001). The relationship between being obese and screen-time was attenuated significantly in older cohort members (p(heterogeneity) = 0.02) and in those with higher incomes (p(heterogeneity) = 0.02).
No significant variation in the relationship between screen-time and being obese was seen according to the other factors examined, including: sex; urban/rural residence history; education; smoking status; alcohol, fruit, vegetable, junkfood and fried food intake; disability; level of exercise-related PA; and frequency of housework/gardening.\nOdds ratios (OR) for being obese per 2 hour increase in daily screen-time, in different population sub-groups, adjusted for age, sex, income and education, where appropriate.\n Obesity and domestic appliances The risk of being obese was significantly higher in men and women from households with a refrigerator, microwave oven or washing machine and in men from households with a water heater (Table 4). The risk of being obese increased significantly with the increasing number of such appliances within a household. These results were not altered materially following additional adjustment for housework/gardening frequency, smoking, alcohol consumption and consumption of fried foods, Western-style junkfood, fruit and vegetables (data not shown).\nRelationship between being obese and ownership of household appliances\n*adjusted for age, income and education
", "In men, the OR for being obese decreased steadily and significantly with increasing weighted total weekly sessions of exercise-related PA, such that those reporting 18 or more sessions had an OR of obesity of 0.69 (0.63-0.75) compared to those with 0-3 sessions (p(trend) < 0.0001) and this relationship was observed particularly for moderate and strenuous PA (Table 2). There was no apparent relationship between being obese and moderate and strenuous PA in women and the relationship between strenuous activity and obesity differed significantly between men and women (p(interaction) < 0.0001). However, in women an inverse relationship with being obese was observed mainly for mild PA and walking (Table 2).\nFor both sexes, the risk of being obese was consistently lower with increasing frequency of housework/gardening, with 33% (26-39%) and 33% (21-43%) lower adjusted ORs in men and women, respectively, in those reporting these activities daily versus seldom or never (Table 3). The lower risk of obesity with increasing housework or gardening was independent of the level of exercise-related PA, in that the OR did not change materially (i.e. changed by <10%) following additional adjustment for exercise-related PA (see below) and a similar relationship was observed within separate categories of exercise-related PA (Figure 1). The inverse relationship between housework/gardening and obesity was still present following additional adjustment for screen-time and exercise-related PA.
Compared to people who did housework or gardening seldom or never, the ORs (95% CI) for being obese in men were: 0.85 (0.76-0.94) for people who did housework or gardening 1-3 times per month; 0.79 (0.71-0.87) for 1-2 times per week; 0.80 (0.72-0.90) for 3-4 times per week; and 0.73 (0.66-0.80) for housework/gardening on most days, adjusting for age, income, education, screen-time and weighted weekly sessions of exercise-related PA. For women, the ORs for the same categories were: 0.95 (0.79-1.15); 0.76 (0.64-0.90); 0.75 (0.63-0.90); and 0.71 (0.60-0.84), respectively.\nRelationship between being obese and gardening/housework, leisure-related computer or television use (\"screen-time\") and sitting time\n*adjusted for age, income and education\nnumbers do not always sum to total due to missing values\nOdds ratios (OR) for being obese in relation to weighted weekly sessions of exercise-related physical activity (ERPA), hours of daily screen-time and frequency of housework/gardening.", "Increasing leisure-related screen-time was associated with significantly and substantially increasing risk of being obese in both men and women, with 85% and 116% increases in risk, respectively, for 8 or more hours of daily screen-time versus <2 hours (Table 3, p(trend) < 0.0001). 
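The percent changes quoted alongside odds ratios follow from the identity percent change = (OR − 1) × 100. A minimal helper (illustrative only; the example ORs correspond to the percentage figures quoted in the text):

```python
def or_to_percent_change(odds_ratio: float) -> float:
    """Percent change in odds implied by an OR (negative means lower risk)."""
    return (odds_ratio - 1.0) * 100.0

# ORs of 1.85 and 2.16 correspond to the 85% and 116% increases quoted;
# an OR of 0.67 corresponds to a 33% reduction.
for or_value in (1.85, 2.16, 0.67):
    print(f"OR {or_value} -> {or_to_percent_change(or_value):+.0f}%")
```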
Overall, sitting-time was not significantly related to the OR of being obese (p(trend) = 0.32), although a significant trend was observed in women (Table 3).\nThe positive relationship between screen-time and being obese was still present following additional adjustment for housework/gardening and exercise-related PA; the ORs (95% CI) for obesity in men were: 1.22 (1.14-1.30); 1.38 (1.27-1.50); 1.58 (1.36-1.83); and 1.80 (1.53-2.13) and for women were: 1.15 (1.05-1.27); 1.50 (1.35-1.67); 1.62 (1.36-1.92) and 2.13 (1.78-2.55), for people with: 2.0-2.9 hours; 3.0-3.9 hours; 4-7.9 hours; and ≥8 hours of daily screen-time versus 0-1.9 hours, respectively, adjusted for age, income, education, housework/gardening and exercise-related PA. Additional adjustment for consumption of fried foods, soft drink and junkfood and smoking and alcohol consumption did not materially alter the OR (data not shown).\nFigure 1 shows the cohort divided into four groups according to their level of exercise-related PA (ERPA in the figure). The ORs for being obese were then presented within each group according to the number of hours of total daily screen-time, with separate lines according to the frequency of housework/gardening. This figure shows increasing risk of being obese with increasing screen-time within each exercise-related PA group and within each housework/gardening group. It also shows that the finding of lower risks of being obese with increasing frequency of housework/gardening persists, even when screen-time and exercise-related PA were accounted for. 
When the relationships between being obese and exercise-related PA, screen-time and housework/gardening were modelled together, no significant interactions were observed (likelihood ratio χ²₃₉ = 38.54, p = 0.49), indicating that they were each independently associated with obesity.\nThe sex, income and education-adjusted OR of obesity per two-hour increase in daily screen-time is shown separately according to a variety of factors, including according to total exercise-related PA and according to housework and gardening, in Figure 2. There was an 18% (15-21%) increase in the risk of being obese with every two additional hours of daily screen-time overall and a significant elevation in the risk of being obese with increasing screen-time was seen in all of the population sub-groups examined (Figure 2). There was a significantly greater increase in the risk of being obese with increasing screen-time in unmarried compared to married individuals (p(heterogeneity) < 0.0001). The relationship between being obese and screen-time was attenuated significantly in older cohort members (p(heterogeneity) = 0.02) and in those with higher incomes (p(heterogeneity) = 0.02). No significant variation in the relationship between screen-time and being obese was seen according to the other factors examined, including: sex; urban/rural residence history; education; smoking status; alcohol, fruit, vegetable, junkfood and fried food intake; disability; level of exercise-related PA; and frequency of housework/gardening.\nOdds ratios (OR) for being obese per 2 hour increase in daily screen-time, in different population sub-groups, adjusted for age, sex, income and education, where appropriate.", "The risk of being obese was significantly higher in men and women from households with a refrigerator, microwave oven or washing machine and in men from households with a water heater (Table 4).
The risk of being obese increased significantly with the increasing number of such appliances within a household. These results were not altered materially following additional adjustment for housework/gardening frequency, smoking, alcohol consumption and consumption of fried foods, Western-style junkfood, fruit and vegetables (data not shown).\nRelationship between being obese and ownership of household appliances\n*adjusted for age, income and education", "In this cohort of Thai men and women, the risk of being obese is consistently higher in those with greater time spent in leisure-related television watching and computer games and inversely associated with time spent doing housework or gardening. Inverse associations between obesity and total weekly sessions of exercise-related PA are observed in men, with a significantly weaker association seen in women. Exercise-related PA, screen-time and housework/gardening each have independent associations with obesity. The magnitude of the association with obesity relating to these risk factors is substantial. Individuals reporting daily housework/gardening have a 33% lower risk of being obese compared to those reporting these activities seldom or never and there is an 18% increase in the risk of obesity with every two hours of additional daily screen-time.\nThe findings reported here show an inverse relationship between exercise-related PA and being obese that is stronger in men than in women and may be somewhat weaker than that observed in Western populations. The inverse relationship between obesity and exercise, usually leisure-related PA, is well established in Western countries [5,6,21]. Although a reduced risk of obesity with increasing PA has been demonstrated in certain Asian populations, including those in China [23] and Korea [24], the specific relationship of leisure-related PA to obesity is less clear, and may be of lesser magnitude. The reason for this is not known.
Potential explanations include: the lack of data relevant to Asia; the possibility that the proportion of total energy expenditure attributable to leisure-related PA is lower in the Asian context [15]; differing types and intensities of leisure-related PA compared to the West; and differences in measurement error.\nWe were unable to locate any previous studies in adults examining the relationship between being obese and television and computer use in Asia. Studies in Western populations consistently show increases in obesity with increasing time spent in sedentary activities, particularly screen-time [7,8,25-27]. The direct relationship between sedentary behaviours and obesity is observed in both cross-sectional [7,25,26] and prospective studies [8,15,28]. Studies have varied in the way they have measured and categorised screen-time and other sedentary behaviours, as well as obesity-related outcomes, so it is difficult to summarise quantitatively the magnitude of the risk involved. However, the 18% increase in obesity risk per 2 hours of additional daily screen-time observed here is consistent with the 23% increase observed in US nurses [8] and older Australian adults [9].\nThe one previous publication we were able to locate examining the relationship between BMI and domestic activity in the Asian context demonstrated a significantly lower BMI in men with increasing time spent in domestic activities and a non-significant relationship in women, in China [29]. Studies in Western populations have generally not found a significant relationship between BMI and domestic activity [12,13,30], even heavy domestic activity, although one study in older US adults found house cleaning, but not gardening, to be associated with decreased BMI on multivariate analysis [14] and another found decreased all-cause mortality with increasing domestic PA [30].
The study presented here is the largest to date investigating the issue and shows a decreasing risk of being obese with increasing frequency of housework and gardening, independent of exercise-related PA and screen-time.\nAlthough the lack of a positive finding in the Western context may reflect measurement error, the play of chance, small sample sizes or other factors, it is also possible that domestic PA in Asian countries differs from that in Western countries, for example, due to use of labour saving devices or differing practices. Increasing use of labour saving devices is part of the transition accompanying industrialisation and is associated with reduced energy expenditure in domestic tasks [31]. Decreasing domestic physical activity over time has been noted in one Chinese study [29]. We found household ownership of domestic appliances to be significantly associated with increasing risk of being obese, with increasing risks of being obese accompanying increasing numbers of appliances within the household. However, the lack of specificity in the relationship of the different household appliances to obesity and the apparently greater effect in men compared to women suggests that this may well not be a causal relationship; it may instead reflect a broader difference in socioeconomic status and lifestyle between households with and without appliances.\nStrengths of the current study include its large size and inclusion of adults from a wide range of social and economic backgrounds. Although the cohort is somewhat younger and more urbanised than the Thai general population, it represents well the geographic regions of Thailand and exhibits substantial heterogeneity in the distribution of other factors [17]. For example, 35% of males and 47% of females had low incomes (<7000 Baht per month or $5.50 US per day). 
Participants in the Thai Cohort Study in 2005 were very similar to the STOU student body in that year for sex ratio, age distribution, geographic region of residence, income, education and course of study [32]. Much of the health-risk transition underway in middle income countries is mediated by education [33,34] and the cohort is, by definition, ahead of national education trends. It is therefore likely to provide useful early insights into the effects and mediators of the health-risk transition in middle-income countries. Previous relevant studies from the cohort include examination of the broader health-related correlates of obesity [22] and the relationship between gender, socioeconomic status and obesity [35].\nThe proportion of the cohort classified as overweight but not obese is similar to the 18% found in the third Thai National Health Examination Survey (2004), while obesity is much lower among STOU women (10% compared to 36% in the National Health Examination Survey) but similar among men in the two studies (23%) [2]. It should be noted that the Thai Cohort Study, like the vast majority of cohort studies, is not designed to be representative of the general population, but is meant to provide sufficient heterogeneity of exposure to allow reliable estimates of relative risk based on internal comparisons [36]. The \"healthy cohort effect\" and the 44% response rate for this study mean that the estimates of relative risk shown here are likely to be conservative, since community members with more extreme behaviours and health conditions may be less likely to be attending an open university or to participate. However, it is important to note that ORs comparing groups within the cohort remain valid and can be generalised more broadly [36,37].
Furthermore, the major comparisons in this paper are between obese and non-obese individuals, rather than obese and \"healthy weight\" individuals, which is also likely to lead to more conservative estimates of association.\nThe limitations of the study should also be borne in mind. The measures used for tobacco smoking and alcohol consumption were brief and the physical activity measure used has, to our knowledge, only been validated in Western populations. BMI was based on self-reported height and weight, which have been shown to provide a valid measure of body size in this population, with correlation coefficients for BMI based on self-reported versus measured height and weight of 0.91 for men and 0.95 for women [38]. However, BMI based on self-reported measures was underestimated by an average of 0.77 kg/m2 for men and 0.62 kg/m2 for women [38]. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25; weight-related disease in Asian populations occurs least when BMI is about 22 or less [39], and significantly increases with BMI ≥23 [40]. The excellent correlation between measured and self-reported values means that self-reported values are generally reliable for ranking participants according to BMI in epidemiological studies, as was done here. The general underestimation of BMI means that absolute values of BMI from self-reported measures are less reliable.\nSelf-reported leisure-related television and computer use has good test-retest reliability and validity, although respondents have a tendency to under-report the number of hours involved [41]. We were not able to locate any studies validating these measures in Asian populations. Time spent in habitual, incidental physical activity is difficult to measure [42] and domestic activities, screen-time and overall physical activity are all likely to be reported with differing degrees of measurement error.
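Given the reported mean underestimation of self-reported BMI (0.77 kg/m² in men, 0.62 kg/m² in women [38]), a crude population-level correction simply adds the sex-specific offset back. A hedged sketch (the offsets are cohort means, so this adjusts averages, not individual values):

```python
# Mean underestimation of BMI from self-reported height and weight,
# as reported for this cohort [38] (kg/m^2).
MEAN_BMI_UNDERESTIMATE = {"men": 0.77, "women": 0.62}

def mean_corrected_bmi(self_reported_bmi: float, sex: str) -> float:
    """Add back the cohort-mean underestimation for the given sex."""
    return self_reported_bmi + MEAN_BMI_UNDERESTIMATE[sex]

print(round(mean_corrected_bmi(24.5, "men"), 2))    # 25.27
print(round(mean_corrected_bmi(24.5, "women"), 2))  # 25.12
```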
Although both domestic activities and screen-time remain predictive of obesity within categories of physical activity, it is possible that this is the result of greater measurement error in ascertaining overall physical activity than in ascertaining domestic and screen-related activities [25]. However, the lack of strong correlation between screen-time and overall physical activity (r = -0.016) goes against this argument, as does the previous observation that television viewing remains a significant predictor of obesity in women, following adjustment for pedometer measured physical activity [42].\nObesity and overweight are the result of sustained positive energy balance, whereby energy intake exceeds energy expenditure. Although dietary factors are important, there is mounting evidence that insufficient energy expenditure is likely to be a key factor underlying the global obesity epidemic [10]. The main determinant of individual energy expenditure is the basal metabolic rate, which typically accounts for 70% of all kilojoules burned [15]. A further 10% of energy expenditure comes from the thermic effect of food and the remaining 20% comes from PA [15]. PA is often conceptualised as comprised of purposeful and non-purposeful physical activity; the latter is also termed \"incidental\" PA. A recent study from the US found that over half of population-level energy expenditure from PA was from sedentary and low-intensity tasks, 16% was attributed to occupational activity above and beyond sitting at work, 16% was attributable to domestic activities and yard work of at least moderate intensity and less than 5% was attributable to leisure-time PA [43]. This evidence is consistent with the suggestion that differences in incidental physical activity are responsible for the greatest variations in energy expenditure between individuals and populations [10]. 
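The cited partition of total energy expenditure (roughly 70% basal metabolic rate, 10% thermic effect of food, 20% physical activity [15]) can be illustrated with a toy calculation; the 10 000 kJ/day total is an assumed round figure, not a cohort value:

```python
# Toy partition of total daily energy expenditure (TDEE) using the
# approximate shares cited in the text [15]; the TDEE value is assumed.
SHARES = {
    "basal metabolic rate": 0.70,
    "thermic effect of food": 0.10,
    "physical activity": 0.20,
}

def partition_tdee(total_kj_per_day: float) -> dict:
    """Split an assumed TDEE into the three components cited."""
    return {name: total_kj_per_day * share for name, share in SHARES.items()}

for name, kj in partition_tdee(10_000).items():
    print(f"{name}: {kj:.0f} kJ/day")
```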
Moreover, recent evidence indicates that one of the most potent mechanisms determining cardiovascular risk factors, including obesity and metabolic disorders, is the amount of time spent in high volume daily intermittent low-intensity postural or ambulatory activities, which account for as much of 90% of energy expended in physical activity [11].\nSedentary behaviours generally involve sitting or lying down and are characterised by low energy expenditure (metabolic equivalent intensity <2) [41]. A substantial amount of time spent in sedentary activities is likely to contribute to obesity through reduced overall energy expenditure, mainly resulting from their impact on incidental physical activity, since it may co-exist with relatively high levels of exercise-related physical activity. Screen-time, particularly television watching, is also associated with other health behaviours, such as eating fatty foods. However, the finding of increased obesity among those watching greater amounts of television persisted in this dataset after adjustment for intake of fatty food and in other studies following adjustment for total energy intake [8] and foods eaten while watching television [27], so this is unlikely to explain much of its effects.\nIn lower- and middle-income countries, including Thailand, industrialisation is generally accompanied by increasing urbanisation, a more sedentary lifestyle, with increasing car and computer use and a higher fat diet dominated by more refined foods [16]. It is also characterised by a shift in work patterns for a substantial proportion of the population, from high energy expenditure activities such as farming, mining and forestry to less energy-demanding jobs in the service sector [16]. All of these changes are likely to increase population obesity. 
There are a number of specific barriers to increasing physical activity in many Asian countries, including environmental factors such as heat, inadequate urban infrastructure, pollution and other hazards. Furthermore, chronic malnutrition has been common in many Asian countries, leading to stunting in significant portions of the population and rendering them vulnerable to obesity as food availability improves.\nThe importance of obesity, the metabolic syndrome and diabetes in Thailand has been highlighted extensively [44-46]. In Thailand, and in many other countries, social factors are key upstream determinants of the major influences on obesity. For example, domestic duties are often divided along gender lines and many wealthier households have servants, particularly to do the heavier work. In this cohort, higher socioeconomic status was accompanied by increasing risk of being obese in men and decreasing risk of being obese in women [35]. This pattern is believed to represent an intermediate stage in the health-risk transition between less-developed countries such as China, which demonstrate high socioeconomic status to be associated with increased obesity in both men and women [47], and Western populations where high socioeconomic status is associated with reduced obesity in both men and women [2,35].\nThe study was able to investigate simultaneously a number of activity-related measures and the large numbers allowed quantification of the association of these factors with obesity within a range of population subgroups. However, the analyses presented here are cross-sectional so it is not possible to directly attribute causality to the relationships observed or to exclude reverse causality. Reverse causality occurs when an exposure varies because of the specific condition under investigation. In this case, reverse causality would mean that obesity might result in reduced exercise-related PA, increased sedentary behaviour and decreased domestic PA. 
There are a priori reasons why it is likely that certain elements of the PA-BMI relationship are causal i.e. that reduced energy expenditure due to reduced PA results in increased BMI. However, it is also possible that people with high BMI may change their level of PA. Intuitively, this might apply more to exercise-related PA than to screen-time or domestic activities; people with a high BMI may do more exercise-related PA in order to lose weight or may reduce their exercise-related PA, due to the extra exertion required because of their weight or obesity related health issues (e.g. joint problems). In Thai society, women in particular are under pressure to be thin and the increased walking among women of higher BMI may reflect this. Going against a large role for reverse causality is the fact that increasing inactivity has been shown to result in increased obesity in longitudinal data [8,48] and experimental studies show that increasing BMI by overfeeding of lean individuals does not result in increased sedentary behaviour [49]. This issue is not resolved entirely by using longitudinal data, since the major risk factor for incident obesity is having a high BMI at baseline [49]. We propose that the relationship between sedentary behaviour and obesity is likely to be complex, with a causal relationship between inactivity and obesity predominating. There is likely to be some contribution of obesity leading to inactivity [48], or indeed a \"spiral\" relationship, whereby inactivity leads to obesity, which further exacerbates inactivity, leading to further increases in obesity [50].", "In common with many middle to low income countries, the prevalence of overweight and obesity in Thailand is lower than that seen in many Western nations, but is increasing rapidly. Avoiding the transition to the obesity patterns seen in the West is a key priority. 
The data presented here suggest that habitual, high volume, low intensity PA is likely to be important for maintaining a healthy weight and are in keeping with other data that show that increasing exercise-related leisure-time PA alone is unlikely to be sufficient to prevent population obesity [15]. Leisure-related television and computer use were strongly related to the risk of being obese. Research focusing on habitual activities and sedentary behaviours is relatively new. Effective interventions to reduce sedentary time and increase incidental activity are being developed; innovative interventions applicable to the Asian context are needed urgently.", "Thai Cohort Study baseline questionnaire (English). An English language translation of a questionnaire administered to students of Sukhothai Thammathirat Open University in 2005.\nClick here for file\nThai Cohort Study baseline questionnaire (Thai). The Thai language original questionnaire administered to students of Sukhothai Thammathirat Open University in 2005.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[ "Obesity", "Thailand", "physical activity", "inactivity", "domestic activity", "sedentary behaviours" ]
Background: The prevalence of obesity is rising rapidly in most Asian countries, with increases of 46% in Japan and over 400% in China observed from the 1980s to early 2000s [1]. In Thailand, the prevalence of obesity increased by around 19% from 1997 to 2004 alone [2]. There have been accompanying increases in morbidity related to conditions such as diabetes and cardiovascular disease in Asian countries [3,4]. It is well established in Western populations that increasing purposeful or leisure-time physical activity (PA) is associated with reduced rates of obesity [5,6]. Recent evidence, also from Western countries, suggests that sedentary activities, such as watching television or using a computer, are associated with increasing obesity, independent of purposeful PA [7-9]. The role of incidental PA and overall energy expenditure, in influencing obesity has been highlighted [10,11]. The interplay between these factors and their combined effects on obesity are not well understood and information relevant to Asian populations is particularly scarce. Furthermore, the relationship between domestic activities and obesity is unclear [12-14]. This is important because physical activity related to patterns of daily activity differs between Asia and Western countries [15] and because many Asian countries are experiencing rapid health and lifestyle transitions [16]. This paper examines in detail the relationships between obesity, exercise-related PA, domestic activities and sedentary behaviours in Thailand, with particular emphasis on the interaction between these factors. Methods: Study population The Sukhothai Thammathirat Open University (STOU) Cohort Study is designed to provide evidence regarding the transition in risk factor profiles, health outcomes and other factors accompanying development and is described in detail elsewhere [17]. 
In brief, from April to November 2005, enrolled STOU students across Thailand who had completed at least one semester were mailed a 20-page health questionnaire and asked to join the study by completing the questionnaire, providing signed consent for follow-up, and returning these in a reply-paid envelope. A total of 87 134 men and women aged 15-87 years (median 29 years) joined the cohort. Data All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (See Additional files 1 and 2 for questionnaires). Where possible, questionnaire items that had been standardised and validated were used. Self-reported weight and height were used to calculate participants' BMI, as their weight in kilograms, divided by the square of their height in metres.
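As a worked illustration (hypothetical values, not cohort data), the BMI computation just described is:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / (height_m ** 2)

# Hypothetical respondent: 70 kg, 1.75 m tall.
print(round(bmi(70, 1.75), 1))  # 22.9
```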
Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25, respectively, in accordance with International Obesity Taskforce recommendations [18] and studies in other Asian populations [19]. Information on exercise-related PA was obtained through a question asking: "During a typical week (7-day period), how many times on average do you do the following kinds of exercise?", with responses requested for: "Strenuous exercise (heart beats rapidly) for more than 20 minutes, e.g. heavy lifting, digging, aerobics or fast bicycling, running, soccer, trakraw"; "Moderate exercise (not exhausting but breathe harder than normal) for more than 20 minutes, e.g. carrying light loads, cycling at a regular pace"; "Mild exercise (minimal effort) for more than 20 minutes, e.g. yoga, Tai-Chi, bowling" and; "Walking for at least 10 minutes e.g. at work, at home, exercise". This question is a sessions-based measure of physical activity, similar to the sessions component of the International Physical Activity Questionnaire and the Active Australia Survey [20]; these session measures have been shown to provide a reliable index of sufficiency of physical activity in non-Thai populations [21]. It incorporates the three major intensities of activity (strenuous, moderate and walking), included in these measures, as well as an additional "mild" category that was created specifically for this study to cover common types of activity in Thailand. The responses to this question were used to derive a weighted measure of overall metabolically-adjusted exercise-related PA, calculated as "2 × strenuous + moderate + mild + walking" exercise sessions, in keeping with previous calculations of this quotient [20]. The frequency of reported housework and gardening was used as a measure of incidental PA and was classified into 5 groups according to the response to the question "How often do you do household cleaning or gardening work?" with options ranging from seldom or never, to most days. 
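The weighted quotient described above ("2 × strenuous + moderate + mild + walking") can be sketched directly; the session counts in the example are hypothetical:

```python
def weighted_pa_sessions(strenuous: int, moderate: int,
                         mild: int, walking: int) -> int:
    """Weighted weekly exercise-related PA sessions, per the quotient
    used in the text: 2 x strenuous + moderate + mild + walking [20]."""
    return 2 * strenuous + moderate + mild + walking

# Hypothetical respondent: 2 strenuous, 1 moderate, 0 mild, 5 walking sessions.
print(weighted_pa_sessions(2, 1, 0, 5))  # 10
```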
Total daily leisure-related screen-time and sitting time were classified according to the participant's response to the question "How many hours per day do you usually spend: Watching TV or playing computer games? Sitting for any purpose (e.g. reading, resting, working thinking)?". Sitting time could also include screen-time, as participants were not specifically asked to exclude screen-time from this measure. The availability of domestic appliances was classified according to the response to the question "Which of the following does your home have now?", with options including microwave oven, refrigerator, water heater and washing machine. The questions on housework/gardening, screen-time, sitting time and availability of domestic appliances were devised specially for the Thai Cohort Study. Education attainment was classified as: secondary school graduation or less; post-secondary school certificate or diploma; and tertiary graduate. Personal monthly income was in Thai Baht in four categories (≤7,000; 7,001-10,000; 10,001-20,000; >20,000). Respondents recorded the frequency of eating deep fried food and soft drinks and Western-style fast foods such as pizza (known as "junkfood") on a five-point Likert scale ranging from never or less than once a month, to once or more a day; this was categorised as consumption <3 times and ≥ 3 times per week for fried food and seldom (never or less than once per month) and regularly (≥ once per month) for "junkfood". Fruit and vegetable intakes were noted as serves eaten per day and categorised as <2 and ≥2 serves per day. They were asked if they have ever smoked, when they started and when they quit and were categorised as current smokers or not current smokers, with similar questions and categories for alcohol consumption. Analysis was restricted to the 95% of individuals aged between 20 and 50 years, with BMIs between 11 and 50. 
Individuals were excluded from the analyses if they were missing data on age or sex (n = 2), height or weight (n = 1030) or physical activity or inactivity (n = 6863), leaving 74,981 participants. All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (See Additional files 1 and 2 for questionnaires). Where possible, questionnaire items that had been standardised and validated were used. Self-reported weight and height were used to calculate participants' BMI, as their weight in kilograms, divided by the square of their height in metres. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25, respectively, in accordance with International Obesity Taskforce recommendations [18] and studies in other Asian populations [19]. Information on exercise-related PA was obtained through a question asking: "During a typical week (7-day period), how many times on average do you do the following kinds of exercise?", with responses requested for: "Strenuous exercise (heart beats rapidly) for more than 20 minutes, e.g. heavy lifting, digging, aerobics or fast bicycling, running, soccer, trakraw"; "Moderate exercise (not exhausting but breathe harder than normal) for more than 20 minutes, e.g. carrying light loads, cycling at a regular pace"; "Mild exercise (minimal effort) for more than 20 minutes, e.g. yoga, Tai-Chi, bowling" and; "Walking for at least 10 minutes e.g. at work, at home, exercise". 
This question is a sessions-based measure of physical activity, similar to the sessions component of the International Physical Activity Questionnaire and the Active Australia Survey [20]; these session measures have been shown to provide a reliable index of sufficiency of physical activity in non-Thai populations [21]. It incorporates the three major intensities of activity (strenuous, moderate and walking), included in these measures, as well as an additional "mild" category that was created specifically for this study to cover common types of activity in Thailand. The responses to this question were used to derive a weighted measure of overall metabolically-adjusted exercise-related PA, calculated as "2 × strenuous + moderate + mild + walking" exercise sessions, in keeping with previous calculations of this quotient [20]. The frequency of reported housework and gardening was used as a measure of incidental PA and was classified into 5 groups according to the response to the question "How often do you do household cleaning or gardening work?" with options ranging from seldom or never, to most days. Total daily leisure-related screen-time and sitting time were classified according to the participant's response to the question "How many hours per day do you usually spend: Watching TV or playing computer games? Sitting for any purpose (e.g. reading, resting, working thinking)?". Sitting time could also include screen-time, as participants were not specifically asked to exclude screen-time from this measure. The availability of domestic appliances was classified according to the response to the question "Which of the following does your home have now?", with options including microwave oven, refrigerator, water heater and washing machine. The questions on housework/gardening, screen-time, sitting time and availability of domestic appliances were devised specially for the Thai Cohort Study. 
Education attainment was classified as: secondary school graduation or less; post-secondary school certificate or diploma; and tertiary graduate. Personal monthly income was in Thai Baht in four categories (≤7,000; 7,001-10,000; 10,001-20,000; >20,000). Respondents recorded the frequency of eating deep fried food and soft drinks and Western-style fast foods such as pizza (known as "junkfood") on a five-point Likert scale ranging from never or less than once a month, to once or more a day; this was categorised as consumption <3 times and ≥ 3 times per week for fried food and seldom (never or less than once per month) and regularly (≥ once per month) for "junkfood". Fruit and vegetable intakes were noted as serves eaten per day and categorised as <2 and ≥2 serves per day. They were asked if they have ever smoked, when they started and when they quit and were categorised as current smokers or not current smokers, with similar questions and categories for alcohol consumption. Analysis was restricted to the 95% of individuals aged between 20 and 50 years, with BMIs between 11 and 50. Individuals were excluded from the analyses if they were missing data on age or sex (n = 2), height or weight (n = 1030) or physical activity or inactivity (n = 6863), leaving 74,981 participants. Statistical methods The relationships between a range of personal characteristics and exercise-related PA, housework/gardening and leisure-related screen-time were examined, as well as the correlation between the individual measures of physical activity and inactivity. Variables were categorised into the groups listed in the various tables. The proportion of the study population classified as obese according to exercise-related PA, housework, leisure-related screen-time and sitting time was examined. 
Prevalence odds ratios (OR) and 95% CIs for obesity according to PA, housework, screen-time and sitting time were estimated using unconditional logistic regression; crude and adjusted odds ratios were computed. ORs were presented separately for men and women and adjusted for age (as a continuous variable), income and educational attainment, with exploration of the effect of additional adjustment for factors such as marital status, smoking, alcohol consumption and urban/rural residence. We evaluated the significance of interaction terms using a likelihood ratio test, comparing the model with and without the interaction terms. We examined how much of any association of a specific PA or sedentary behaviour with obesity was attributable to differences in total physical activity level by modelling simultaneously the three PA variables and their two-way and three-way interactions. We also examined how much of the association of certain sedentary behaviours could be attributed to the effect of other sedentary behaviours and to consumption of fried foods and soft drinks and Western-style junkfood, using mutual adjustment. All analyses were carried out in STATA version 9.2. All statistical tests were two-sided, using a significance level of p < 0.05. Due to the large sample size, conclusions were based on both significance and the effect size. The relationships between a range of personal characteristics and exercise-related PA, housework/gardening and leisure-related screen-time were examined, as well as the correlation between the individual measures of physical activity and inactivity. Variables were categorised into the groups listed in the various tables. The proportion of the study population classified as obese according to exercise-related PA, housework, leisure-related screen-time and sitting time was examined. 
Prevalence odds ratios (OR) and 95% CIs for obesity according to PA, housework, screen-time and sitting time were estimated using unconditional logistic regression; crude and adjusted odds ratios were computed. ORs were presented separately for men and women and adjusted for age (as a continuous variable), income and educational attainment, with exploration of the effect of additional adjustment for factors such as marital status, smoking, alcohol consumption and urban/rural residence. We evaluated the significance of interaction terms using a likelihood ratio test, comparing the model with and without the interaction terms. We examined how much of any association of a specific PA or sedentary behaviour with obesity was attributable to differences in total physical activity level by modelling simultaneously the three PA variables and their two-way and three-way interactions. We also examined how much of the association of certain sedentary behaviours could be attributed to the effect of other sedentary behaviours and to consumption of fried foods and soft drinks and Western-style junkfood, using mutual adjustment. All analyses were carried out in STATA version 9.2. All statistical tests were two-sided, using a significance level of p < 0.05. Due to the large sample size, conclusions were based on both significance and the effect size. Ethical approval Ethics approval was obtained from Sukhothai Thammathirat Open University Research and Development Institute (protocol 0522/10) and the Australian National University Human Research Ethics Committee (protocol 2004344). Informed written consent was obtained from all participants. Ethics approval was obtained from Sukhothai Thammathirat Open University Research and Development Institute (protocol 0522/10) and the Australian National University Human Research Ethics Committee (protocol 2004344). Informed written consent was obtained from all participants. 
Study population

The Sukhothai Thammathirat Open University (STOU) Cohort Study is designed to provide evidence regarding the transition in risk factor profiles, health outcomes and other factors accompanying development and is described in detail elsewhere [17]. In brief, from April to November 2005, enrolled STOU students across Thailand who had completed at least one semester were mailed a 20-page health questionnaire and asked to join the study by completing the questionnaire, providing signed consent for follow-up, and returning these in a reply-paid envelope. A total of 87,134 men and women aged 15-87 years (median 29 years) joined the cohort.

Data

All of the variables used in this study were derived from cross-sectional self-reported data from the Thai Cohort Study questionnaire [17]. The questionnaire requested information on: socio-demographic factors; ethnicity; past and present residence and domestic environment; income; work-related factors; height; weight; sensory impairment; mental health; medical history; general health; use of health services; social networks; social capital; diet; physical activity; sedentary behaviours; tobacco and alcohol consumption; use of seat belts and motorcycle helmets; drink-driving; and family structure and health (see Additional files 1 and 2 for questionnaires). Where possible, questionnaire items that had been standardised and validated were used. Self-reported weight and height were used to calculate participants' BMI, as weight in kilograms divided by the square of height in metres. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25, respectively, in accordance with International Obesity Taskforce recommendations [18] and studies in other Asian populations [19].
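As a concrete illustration of the BMI derivation and the Asian-specific cut-points just described, a minimal sketch (function names are ours, not from the study):

```python
# Minimal sketch of the BMI derivation and the Asian-specific cut-points
# described above (underweight <18.5, healthy 18.5-22.9, overweight
# 23.0-24.9, obese >=25.0). Function names are illustrative only.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def bmi_category(b: float) -> str:
    """Classify a BMI using the cut-points adopted in the Thai Cohort Study."""
    if b < 18.5:
        return "underweight"
    if b < 23.0:
        return "healthy"
    if b < 25.0:
        return "overweight"
    return "obese"
```

For example, a 70 kg participant who is 1.75 m tall has BMI ≈ 22.9 and falls in the healthy-weight category.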
Information on exercise-related PA was obtained through a question asking: "During a typical week (7-day period), how many times on average do you do the following kinds of exercise?", with responses requested for: "Strenuous exercise (heart beats rapidly) for more than 20 minutes, e.g. heavy lifting, digging, aerobics or fast bicycling, running, soccer, trakraw"; "Moderate exercise (not exhausting but breathe harder than normal) for more than 20 minutes, e.g. carrying light loads, cycling at a regular pace"; "Mild exercise (minimal effort) for more than 20 minutes, e.g. yoga, Tai-Chi, bowling"; and "Walking for at least 10 minutes e.g. at work, at home, exercise". This question is a sessions-based measure of physical activity, similar to the sessions component of the International Physical Activity Questionnaire and the Active Australia Survey [20]; these session measures have been shown to provide a reliable index of sufficiency of physical activity in non-Thai populations [21]. It incorporates the three major intensities of activity (strenuous, moderate and walking) included in these measures, as well as an additional "mild" category that was created specifically for this study to cover common types of activity in Thailand. The responses to this question were used to derive a weighted measure of overall metabolically-adjusted exercise-related PA, calculated as "2 × strenuous + moderate + mild + walking" exercise sessions, in keeping with previous calculations of this quotient [20]. The frequency of reported housework and gardening was used as a measure of incidental PA and was classified into 5 groups according to the response to the question "How often do you do household cleaning or gardening work?", with options ranging from seldom or never, to most days.
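The weighted exercise-related PA quotient described above is a simple linear combination of the weekly session counts; sketched below (names are illustrative):

```python
# Sketch of the weighted, metabolically-adjusted exercise-related PA
# score described above: 2 x strenuous + moderate + mild + walking
# weekly sessions. Names are illustrative, not from the study's code.

def weighted_pa_sessions(strenuous: int, moderate: int,
                         mild: int, walking: int) -> int:
    """Weekly weighted exercise sessions; strenuous sessions count double."""
    return 2 * strenuous + moderate + mild + walking
```

For example, 2 strenuous, 3 moderate, 1 mild and 5 walking sessions per week give a weighted total of 13 sessions.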
Total daily leisure-related screen-time and sitting time were classified according to the participant's response to the question "How many hours per day do you usually spend: Watching TV or playing computer games? Sitting for any purpose (e.g. reading, resting, working, thinking)?". Sitting time could also include screen-time, as participants were not specifically asked to exclude screen-time from this measure. The availability of domestic appliances was classified according to the response to the question "Which of the following does your home have now?", with options including microwave oven, refrigerator, water heater and washing machine. The questions on housework/gardening, screen-time, sitting time and availability of domestic appliances were devised specifically for the Thai Cohort Study. Educational attainment was classified as: secondary school graduation or less; post-secondary school certificate or diploma; and tertiary graduate. Personal monthly income was recorded in Thai Baht in four categories (≤7,000; 7,001-10,000; 10,001-20,000; >20,000). Respondents recorded the frequency of eating deep-fried food, and of consuming soft drinks and Western-style fast foods such as pizza (together termed "junkfood"), on a five-point Likert scale ranging from never or less than once a month to once or more a day; this was categorised as consumption <3 times and ≥3 times per week for fried food, and as seldom (never or less than once per month) versus regularly (≥ once per month) for "junkfood". Fruit and vegetable intakes were noted as serves eaten per day and categorised as <2 and ≥2 serves per day. Participants were asked if they had ever smoked, when they started and when they quit, and were categorised as current smokers or not current smokers, with similar questions and categories for alcohol consumption. Analysis was restricted to the 95% of individuals aged between 20 and 50 years, with BMIs between 11 and 50.
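The dietary recodes above collapse the five-point frequency scale into binary categories; an illustrative sketch, in which the response labels are our own placeholders rather than the questionnaire's exact wording:

```python
# Sketch of collapsing the five-point frequency responses into the
# binary dietary categories described above. The response labels are
# hypothetical placeholders; the questionnaire's exact wording differs.

FRIED_FREQUENT = {"3-6 times per week", "once or more a day"}
JUNKFOOD_SELDOM = {"never or less than once a month"}

def fried_food_category(response: str) -> str:
    """Fried food: consumption <3 vs >=3 times per week."""
    return ">=3 times/week" if response in FRIED_FREQUENT else "<3 times/week"

def junkfood_category(response: str) -> str:
    """Junkfood: seldom (< once/month) vs regularly (>= once/month)."""
    return "seldom" if response in JUNKFOOD_SELDOM else "regularly"
```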
Individuals were excluded from the analyses if they were missing data on age or sex (n = 2), height or weight (n = 1030) or physical activity or inactivity (n = 6863), leaving 74,981 participants.

Statistical methods

The relationships between a range of personal characteristics and exercise-related PA, housework/gardening and leisure-related screen-time were examined, as well as the correlation between the individual measures of physical activity and inactivity. Variables were categorised into the groups listed in the various tables. The proportion of the study population classified as obese according to exercise-related PA, housework, leisure-related screen-time and sitting time was examined. Prevalence odds ratios (OR) and 95% CIs for obesity according to PA, housework, screen-time and sitting time were estimated using unconditional logistic regression; crude and adjusted odds ratios were computed. ORs were presented separately for men and women and adjusted for age (as a continuous variable), income and educational attainment, with exploration of the effect of additional adjustment for factors such as marital status, smoking, alcohol consumption and urban/rural residence. We evaluated the significance of interaction terms using a likelihood ratio test, comparing the model with and without the interaction terms. We examined how much of any association of a specific PA or sedentary behaviour with obesity was attributable to differences in total physical activity level by modelling simultaneously the three PA variables and their two-way and three-way interactions. We also examined how much of the association of certain sedentary behaviours could be attributed to the effect of other sedentary behaviours and to consumption of fried foods and soft drinks and Western-style junkfood, using mutual adjustment. All analyses were carried out in STATA version 9.2. All statistical tests were two-sided, using a significance level of p < 0.05.
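The adjusted ORs require the full logistic model (fitted in STATA here), but for a single binary exposure the crude OR reduces to the cross-product ratio of a 2×2 table. A stdlib-only sketch with a Woolf-type (log-based) 95% CI; the CI method is our assumption for illustration:

```python
import math

# Sketch: crude odds ratio from a 2x2 table with a Woolf-type
# (log-based) 95% CI. The paper's adjusted ORs come from unconditional
# logistic regression in STATA 9.2; this illustrates only the crude case.

def crude_or(a: int, b: int, c: int, d: int):
    """a/b = exposed cases/non-cases, c/d = unexposed cases/non-cases."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi
```

For example, `crude_or(10, 90, 10, 190)` gives an OR of about 2.11 with a wide CI, reflecting the small cell counts.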
Due to the large sample size, conclusions were based on both significance and the effect size.

Ethical approval

Ethics approval was obtained from Sukhothai Thammathirat Open University Research and Development Institute (protocol 0522/10) and the Australian National University Human Research Ethics Committee (protocol 2004344). Informed written consent was obtained from all participants.

Results

Of 74,981 participants with appropriate data, 41,351 (55.2%, 95% CI 54.8-55.5%) were classified as being of healthy weight (BMI 18.5-22.9), 10,733 (14.4%, 14.1-14.6%) were underweight (BMI < 18.5), 11,241 (15.0%, 14.7-15.2%) were overweight but not obese (BMI 23.0-24.9) and 11,616 (15.6%, 15.2-15.7%) were obese (BMI ≥ 25.0). Men were far more likely to be overweight (21.7%, 21.3-22.1%) or obese (22.4%, 22.0-22.9%) than women (9.5% and 9.9%, respectively), while women were more likely to be underweight (21.3%, 20.9-21.7%) than men (5.9%, 5.6-6.1%). Compared to other members of the study cohort, obesity prevalence was higher in older participants and urban dwellers, and in those with higher consumption of fried food (data not shown) [22]. Patterns of exercise-related PA varied between men and women, with 12.5% (12.2-12.9%) of men reporting 0-3 sessions and 26.3% (25.8-26.8%) reporting ≥18 sessions of exercise-related PA per week, compared to 22.2% (21.8-22.6%) and 12.1% (11.8-12.4%), respectively, for women. The mean number of sessions of exercise-related PA per week was 11.6 [sd 12.1] overall; 13.9 [sd 13.5] for men and 9.7 [sd 10.6] for women. A higher level of exercise-related PA was associated with having less than a tertiary education, being of lower income and eating more fruit and vegetables, but was not strongly related to other factors (Table 1). The pattern of PA making up the total weekly sessions also differed between the sexes, with women much less likely than men to report strenuous or moderate PA (Table 2).
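The prevalences above are quoted with 95% CIs; assuming a simple normal approximation (the paper does not state its CI method), they can be reproduced from the counts:

```python
import math

# Sketch: prevalence with a normal-approximation 95% CI, expressed as
# percentages. The CI method is an assumption; the paper does not
# specify how its intervals were computed.

def prevalence_pct(k: int, n: int, z: float = 1.96):
    """Return (prevalence, lower, upper) in percent for k events out of n."""
    p = k / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return 100 * p, 100 * (p - half_width), 100 * (p + half_width)
```

With the healthy-weight counts above (41,351 of 74,981) this gives roughly 55.1% (54.8-55.5%), consistent to rounding with the reported 55.2% (54.8-55.5%).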
Table 1. Characteristics of study population according to total physical activity, housework/gardening and daily screen-time. (a) Measured as top decile from 3 items assessing physical limitations in the past 4 weeks (e.g. how much bodily pain did you have in the past 4 weeks?). (b) Only 1% and 0.6%, respectively, of females are current smokers and regular drinkers.

Table 2. Relationship between being obese and measures of exercise-related physical activity (PA). *Adjusted for age, income and education.

Overall, 49.4% (48.9-49.9%) of women and 34.4% (33.8-34.8%) of men reported doing household cleaning or gardening on most days of the week, while 3.7% (3.5-3.9%) of women and 8.8% (8.5-9.1%) of men reported that they did these seldom or never. Housework/gardening was more common among those who were married, not tertiary educated, of lower income and with greater fruit and vegetable intake than other cohort members (Table 1). Leisure-related screen-time did not vary markedly between men and women: 17.8% (17.4-18.2%) of women and 22.2% (21.8-22.7%) of men reported less than two hours of daily screen-time, while 3.4% (3.2-3.6%) of women and 2.8% (2.6-2.9%) of men reported 8 hours or more. Average daily leisure-related screen-time was 2.9 hours [sd 1.9]; it was 3.0 hours [sd 1.9] in women and 2.8 hours [sd 1.8] in men. Higher levels of screen-time were more common among cohort members who were younger, unmarried, urban residents and of lower income, and who ate fried food daily and consumed soft drinks or Western-style junkfood once a month or more often (Table 1). Women tended to have greater levels of sitting time than men, with 46.6% (46.0-46.9%) of women and 36.8% (36.2-37.3%) of men reporting 8 or more hours of daily sitting time. Average daily sitting time was 6.6 hours [sd 3.8] overall; 6.8 hours [sd 3.9] in women and 6.2 hours [sd 1.8] in men.
The number of hours of daily screen-time was poorly but significantly inversely correlated with the number of weekly sessions of exercise-related PA (r = -0.016; 95% CI: -0.024 to -0.009) and with doing household cleaning or gardening (r = -0.022; -0.029 to -0.014), but was more strongly and positively related to the number of hours sitting per day (r = 0.16; 0.15 to 0.16). The number of weekly sessions of exercise-related PA was positively correlated with doing household cleaning or gardening (r = 0.15; 0.14 to 0.16). The correlations between sitting time and the number of weekly sessions of exercise-related PA and cleaning/gardening were -0.054 (-0.061 to -0.047) and -0.041 (-0.049 to -0.034), respectively.

Obesity and exercise-related physical activity, housework, and gardening

In men, the OR for being obese decreased steadily and significantly with increasing weighted total weekly sessions of exercise-related PA, such that those reporting 18 or more sessions had an OR of obesity of 0.69 (0.63-0.75) compared to those with 0-3 sessions (p(trend) < 0.0001); this relationship was observed particularly for moderate and strenuous PA (Table 2). There was no apparent relationship between being obese and moderate and strenuous PA in women, and the relationship between strenuous activity and obesity differed significantly between men and women (p(interaction) < 0.0001). However, in women an inverse relationship with being obese was observed mainly for mild PA and walking (Table 2). For both sexes, the risk of being obese was consistently lower with increasing frequency of housework/gardening, with 33% (26-39%) and 33% (21-43%) lower adjusted ORs in men and women, respectively, in those reporting these activities daily versus seldom or never (Table 3). The lower risk of obesity with increasing housework or gardening was independent of the level of exercise-related PA, in that the OR did not change materially (i.e. changed by <10%) following additional adjustment for exercise-related PA (see below), and a similar relationship was observed within separate categories of exercise-related PA (Figure 1). The inverse relationship between housework/gardening and obesity was still present following additional adjustment for screen-time and exercise-related PA. Compared to people who did housework or gardening seldom or never, the ORs (95% CI) for being obese in men were: 0.85 (0.76-0.94) for housework or gardening 1-3 times per month; 0.79 (0.71-0.87) for 1-2 times per week; 0.80 (0.72-0.90) for 3-4 times per week; and 0.73 (0.66-0.80) for housework/gardening on most days, adjusting for age, income, education, screen-time and weighted weekly sessions of exercise-related PA. For women, the ORs for the same categories were: 0.95 (0.79-1.15); 0.76 (0.64-0.90); 0.75 (0.63-0.90); and 0.71 (0.60-0.84), respectively.

Table 3. Relationship between being obese and gardening/housework, leisure-related computer or television use ("screen-time") and sitting time. *Adjusted for age, income and education; numbers do not always sum to total due to missing values.

Figure 1. Odds ratios (OR) for being obese in relation to weighted weekly sessions of exercise-related physical activity (ERPA), hours of daily screen-time and frequency of housework/gardening.
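The correlation coefficients reported above carry 95% CIs; one standard way to obtain such intervals for a Pearson r (an assumption on our part, since the paper does not state its method) is the Fisher z transform:

```python
import math

# Sketch: 95% CI for a Pearson correlation via the Fisher z transform.
# This is an assumed method; the paper does not state how its
# correlation CIs were computed.

def pearson_r_ci(r: float, n: int, z: float = 1.96):
    """Return (lower, upper) of an approximate 95% CI for Pearson r."""
    zr = math.atanh(r)           # Fisher z transform of r
    se = 1 / math.sqrt(n - 3)    # approximate SE on the z scale
    return math.tanh(zr - z * se), math.tanh(zr + z * se)
```

For r = 0.16 with n = 74,981 this gives roughly (0.153, 0.167), in line with the very narrow intervals reported above for this large cohort.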
Obesity and leisure-related screen-time

Increasing leisure-related screen-time was associated with significantly and substantially increasing risk of being obese in both men and women, with 85% and 116% increases in risk, respectively, for 8 or more hours of daily screen-time versus <2 hours (Table 3, p(trend) < 0.0001). Overall, sitting time was not significantly related to the OR of being obese (p(trend) = 0.32), although a significant trend was observed in women (Table 3). The positive relationship between screen-time and being obese was still present following additional adjustment for housework/gardening and exercise-related PA; the ORs (95% CI) for obesity in men were: 1.22 (1.14-1.30); 1.38 (1.27-1.50); 1.58 (1.36-1.83); and 1.80 (1.53-2.13), and for women were: 1.15 (1.05-1.27); 1.50 (1.35-1.67); 1.62 (1.36-1.92); and 2.13 (1.78-2.55), for people with 2.0-2.9, 3.0-3.9, 4.0-7.9 and ≥8 hours of daily screen-time versus 0-1.9 hours, respectively, adjusted for age, income, education, housework/gardening and exercise-related PA. Additional adjustment for consumption of fried foods, soft drink and junkfood, and for smoking and alcohol consumption, did not materially alter the ORs (data not shown). Figure 1 shows the cohort divided into four groups according to their level of exercise-related PA (ERPA in the figure). The ORs for being obese are presented within each group according to the number of hours of total daily screen-time, with separate lines according to the frequency of housework/gardening. This figure shows increasing risk of being obese with increasing screen-time within each exercise-related PA group and within each housework/gardening group. It also shows that the lower risk of being obese with increasing frequency of housework/gardening persists even when screen-time and exercise-related PA are accounted for.
When the relationships between being obese and exercise-related PA, screen-time and housework/gardening were modelled together, no significant interactions were observed (likelihood ratio χ²(39) = 38.54, p = 0.49), indicating that they were each independently associated with obesity. The sex-, income- and education-adjusted OR of obesity per two-hour increase in daily screen-time is shown separately according to a variety of factors, including total exercise-related PA and housework and gardening, in Figure 2. There was an 18% (15-21%) increase in the risk of being obese with every two additional hours of daily screen-time overall, and a significant elevation in the risk of being obese with increasing screen-time was seen in all of the population sub-groups examined (Figure 2). There was a significantly greater increase in the risk of being obese with increasing screen-time in unmarried compared to married individuals (p(heterogeneity) < 0.0001). The relationship between being obese and screen-time was attenuated significantly in older cohort members (p(heterogeneity) = 0.02) and in those with higher incomes (p(heterogeneity) = 0.02). No significant variation in the relationship between screen-time and being obese was seen according to the other factors examined, including: sex; urban/rural residence history; education; smoking status; alcohol, fruit, vegetable, junkfood and fried food intake; disability; level of exercise-related PA; and frequency of housework/gardening.

Figure 2. Odds ratios (OR) for being obese per 2-hour increase in daily screen-time, in different population sub-groups, adjusted for age, sex, income and education, where appropriate.
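The interaction test just described is a likelihood ratio test comparing nested logistic models. A stdlib-only sketch, using the Wilson-Hilferty normal approximation for the chi-square tail (exact tail probabilities would normally come from a statistical package; the log-likelihood values below are illustrative):

```python
import math

# Sketch: likelihood ratio test for nested models, as used to assess
# the interaction terms above. The chi-square upper tail uses the
# Wilson-Hilferty approximation so only the standard library is needed.

def chi2_sf(x: float, df: int) -> float:
    """Approximate P(X > x) for a chi-square via Wilson-Hilferty."""
    mean = 1 - 2 / (9 * df)
    sd = math.sqrt(2 / (9 * df))
    z = ((x / df) ** (1 / 3) - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

def lr_test(loglik_full: float, loglik_reduced: float, df_diff: int):
    """LRT statistic (twice the log-likelihood difference) and p-value."""
    stat = 2 * (loglik_full - loglik_reduced)
    return stat, chi2_sf(stat, df_diff)
```

Plugging in the reported statistic, `chi2_sf(38.54, 39)` ≈ 0.49, reproducing the non-significant interaction result above.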
Obesity and domestic appliances
The risk of being obese was significantly higher in men and women from households with a refrigerator, microwave oven or washing machine and in men from households with a water heater (Table 4). The risk of being obese increased significantly with the increasing number of such appliances within a household. These results were not altered materially following additional adjustment for housework/gardening frequency, smoking, alcohol consumption and consumption of fried foods, Western-style junkfood, fruit and vegetables (data not shown).
Relationship between being obese and ownership of household appliances *adjusted for age, income and education
Obesity and exercise-related physical activity, housework, and gardening
In men, the OR for being obese decreased steadily and significantly with increasing weighted total weekly sessions of exercise-related PA, such that those reporting 18 or more sessions had an OR of obesity of 0.69 (0.63-0.75) compared to those with 0-3 sessions (p(trend) < 0.0001); this relationship was observed particularly for moderate and strenuous PA (Table 2). There was no apparent relationship between being obese and moderate and strenuous PA in women, and the relationship between strenuous activity and obesity differed significantly between men and women (p(interaction) < 0.0001). However, in women an inverse relationship with being obese was observed mainly for mild PA and walking (Table 2). For both sexes, the risk of being obese was consistently lower with increasing frequency of housework/gardening, with 33% (26-39%) and 33% (21-43%) lower adjusted ORs in men and women, respectively, in those reporting these activities daily versus seldom or never (Table 3). The lower risk of obesity with increasing housework or gardening was independent of the level of exercise-related PA, in that the OR did not change materially (i.e.
changed by <10%) following additional adjustment for exercise-related PA (see below), and a similar relationship was observed within separate categories of exercise-related PA (Figure 1). The inverse relationship between housework/gardening and obesity was still present following additional adjustment for screen-time and exercise-related PA. Compared to people who did housework or gardening seldom or never, the ORs (95% CI) for being obese in men were: 0.85 (0.76-0.94) for people who did housework or gardening 1-3 times per month; 0.79 (0.71-0.87) for 1-2 times per week; 0.80 (0.72-0.90) for 3-4 times per week; and 0.73 (0.66-0.80) for housework/gardening on most days, adjusting for age, income, education, screen-time and weighted weekly sessions of exercise-related PA. For women, the ORs for the same categories were: 0.95 (0.79-1.15); 0.76 (0.64-0.90); 0.75 (0.63-0.90); and 0.71 (0.60-0.84), respectively.
Relationship between being obese and gardening/housework, leisure-related computer or television use ("screen-time") and sitting time *adjusted for age, income and education; numbers do not always sum to total due to missing values
Odds ratios (OR) for being obese in relation to weighted weekly sessions of exercise-related physical activity (ERPA), hours of daily screen-time and frequency of housework/gardening.
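The results alternate between odds ratios and "percent lower/higher risk" phrasing; the conversion between the two is simple arithmetic, and applies equally to the confidence limits. A minimal helper, using OR values reported above:

```python
def or_to_percent(odds_ratio):
    """Express an odds ratio as a percent change in the odds
    of being obese relative to the reference group."""
    return (odds_ratio - 1.0) * 100.0

# Fully adjusted ORs quoted in the text:
print(f"{or_to_percent(0.73):+.0f}%")  # daily housework/gardening vs never, men: 27% lower odds
print(f"{or_to_percent(1.18):+.0f}%")  # per 2 extra hours of daily screen-time: 18% higher odds
```
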
Discussion
In this cohort of Thai men and women, the risk of being obese is consistently higher in those with greater time spent in leisure-related television watching and computer games, and inversely associated with time spent doing housework or gardening. Inverse associations between obesity and total weekly sessions of exercise-related PA are observed in men, with a significantly weaker association seen in women. Exercise-related PA, screen-time and housework/gardening each have independent associations with obesity. The magnitude of the association with obesity relating to these risk factors is substantial: individuals reporting daily housework/gardening have a 33% lower risk of being obese compared to those reporting these activities seldom or never, and there is an 18% increase in the risk of obesity with every two hours of additional daily screen-time. The findings reported here show an inverse relationship between exercise-related PA and being obese that is stronger in men than in women and may be somewhat weaker than that observed in Western populations. The inverse relationship between obesity and exercise, usually leisure-related PA, is well established in Western countries [5,6,21]. Although a reduced risk of obesity with increasing PA has been demonstrated in certain Asian populations, including those in China [23] and Korea [24], the specific relationship of leisure-related PA to obesity is less clear, and may be of lesser magnitude. The reason for this is not known. Potential explanations include: the lack of data relevant to Asia; the possibility that the proportion of total energy expenditure attributable to leisure-related PA is lower in the Asian context [15]; differing types and intensities of leisure-related PA compared to the West; and differences in measurement error.
We were unable to locate any previous studies in adults of the relationship between being obese and television and computer use in Asia. Studies in Western populations consistently show increases in obesity with increasing time spent in sedentary activities, particularly screen-time [7,8,25-27]. The direct relationship between sedentary behaviours and obesity is observed in both cross-sectional [7,25,26] and prospective studies [8,15,28]. Studies have varied in the way they have measured and categorised screen-time and other sedentary behaviours, as well as obesity-related outcomes, so it is difficult to summarise quantitatively the magnitude of the risk involved. However, the 18% increase in obesity risk per 2 hours of additional daily screen-time observed here is consistent with the 23% increase observed in US nurses [8] and older Australian adults [9]. The one previous publication we were able to locate examining the relationship between BMI and domestic activity in the Asian context, from China, demonstrated a significantly lower BMI in men with increasing time spent in domestic activities and a non-significant relationship in women [29]. Studies in Western populations have generally not found a significant relationship between BMI and domestic activity [12,13,30], even heavy domestic activity, although one study in older US adults found house cleaning, but not gardening, to be associated with decreased BMI on multivariate analysis [14] and another found decreased all-cause mortality with increasing domestic PA [30]. The study presented here is the largest to date investigating the issue and shows a decreasing risk of being obese with increasing frequency of housework and gardening, independent of exercise-related PA and screen-time.
Although the lack of a positive finding in the Western context may reflect measurement error, the play of chance, small sample sizes or other factors, it is also possible that domestic PA in Asian countries differs from that in Western countries, for example, due to use of labour-saving devices or differing practices. Increasing use of labour-saving devices is part of the transition accompanying industrialisation and is associated with reduced energy expenditure in domestic tasks [31]. Decreasing domestic physical activity over time has been noted in one Chinese study [29]. We found household ownership of domestic appliances to be significantly associated with increasing risk of being obese, with increasing risks of being obese accompanying increasing numbers of appliances within the household. However, the lack of specificity in the relationship of the different household appliances to obesity and the apparently greater effect in men compared to women suggests that this may well not be a causal relationship; it may instead reflect a broader difference in socioeconomic status and lifestyle between households with and without appliances. Strengths of the current study include its large size and inclusion of adults from a wide range of social and economic backgrounds. Although the cohort is somewhat younger and more urbanised than the Thai general population, it represents well the geographic regions of Thailand and exhibits substantial heterogeneity in the distribution of other factors [17]. For example, 35% of males and 47% of females had low incomes (<7000 Baht per month or $5.50 US per day). Participants in the Thai Cohort Study in 2005 were very similar to the STOU student body in that year for sex ratio, age distribution, geographic region of residence, income, education and course of study [32].
Much of the health-risk transition underway in middle-income countries is mediated by education [33,34] and the cohort is, by definition, ahead of national education trends. It is therefore likely to provide useful early insights into the effects and mediators of the health-risk transition in middle-income countries. Previous relevant studies from the cohort include examination of the broader health-related correlates of obesity [22] and the relationship between gender, socioeconomic status and obesity [35]. The proportion of the cohort classified as overweight but not obese is similar to the 18% found in the third Thai National Health Examination Survey (2004), while obesity is much lower among STOU women (10% compared to 36% in the National Health Examination Survey) but similar among men in the two studies (23%) [2]. It should be noted that the Thai Cohort Study, like the vast majority of cohort studies, is not designed to be representative of the general population, but is meant to provide sufficient heterogeneity of exposure to allow reliable estimates of relative risk based on internal comparisons [36]. The "healthy cohort effect" and the 44% response rate for this study mean that the estimates of relative risk shown here are likely to be conservative, since community members with more extreme behaviours and health conditions may be less likely to be attending an open university or to participate. However, it is important to note that ORs comparing groups within the cohort remain valid and can be generalised more broadly [36,37]. Furthermore, the major comparisons in this paper are between obese and non-obese individuals, rather than obese and "healthy weight" individuals, which is also likely to lead to more conservative estimates of association. The limitations of the study should also be borne in mind.
The measures used for tobacco smoking and alcohol consumption were brief and the physical activity measure used has, to our knowledge, only been validated in Western populations. BMI was based on self-reported height and weight, which have been shown to provide a valid measure of body size in this population, with correlation coefficients for BMI based on self-reported versus measured height and weight of 0.91 for men and 0.95 for women [38]. However, BMI based on self-reported measures was underestimated by an average of 0.77 kg/m2 for men and 0.62 kg/m2 for women [38]. Cut-points delineating overweight and obesity were set at BMIs ≥23 and ≥25; weight-related disease in Asian populations occurs least when BMI is about 22 or less [39], and significantly increases with BMI ≥23 [40]. The excellent correlation between measured and self-reported values means that self-reported values are generally reliable for ranking participants according to BMI in epidemiological studies, as was done here. The general underestimation of BMI means that absolute values of BMI from self-reported measures are less reliable. Self-reported leisure-related television and computer use has good test-retest reliability and validity, although respondents have a tendency to under-report the number of hours involved [41]. We were not able to locate any studies validating these measures in Asian populations. Time spent in habitual, incidental physical activity is difficult to measure [42] and domestic activities, screen-time and overall physical activity are all likely to be reported with differing degrees of measurement error. Although both domestic activities and screen-time remain predictive of obesity within categories of physical activity, it is possible that this is the result of greater measurement error in ascertaining overall physical activity than in ascertaining domestic and screen-related activities [25].
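The BMI classification used here is simple arithmetic; a minimal sketch using the Asian cut-points and the mean self-report underestimation quoted above (the participant's measurements are hypothetical):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index in kg/m2."""
    return weight_kg / height_m ** 2

def classify_asian(b):
    """Asian cut-points as used in this study: overweight >=23, obese >=25."""
    if b >= 25:
        return "obese"
    if b >= 23:
        return "overweight"
    return "not overweight"

# Hypothetical participant: self-reported 72 kg and 1.70 m
reported = bmi(72, 1.70)
print(round(reported, 1), classify_asian(reported))  # 24.9 overweight
# Self-reported BMI underestimated measured BMI by ~0.77 kg/m2 in men [38],
# which can shift a borderline participant across the obesity cut-point
print(classify_asian(reported + 0.77))  # obese
```

This borderline case illustrates why the underestimation matters little for ranking participants but makes absolute prevalence estimates from self-report conservative.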
However, the lack of strong correlation between screen-time and overall physical activity (r = -0.016) goes against this argument, as does the previous observation that television viewing remains a significant predictor of obesity in women, following adjustment for pedometer-measured physical activity [42]. Obesity and overweight are the result of sustained positive energy balance, whereby energy intake exceeds energy expenditure. Although dietary factors are important, there is mounting evidence that insufficient energy expenditure is likely to be a key factor underlying the global obesity epidemic [10]. The main determinant of individual energy expenditure is the basal metabolic rate, which typically accounts for 70% of all kilojoules burned [15]. A further 10% of energy expenditure comes from the thermic effect of food and the remaining 20% comes from PA [15]. PA is often conceptualised as comprising purposeful and non-purposeful physical activity; the latter is also termed "incidental" PA. A recent study from the US found that over half of population-level energy expenditure from PA was from sedentary and low-intensity tasks, 16% was attributed to occupational activity above and beyond sitting at work, 16% was attributable to domestic activities and yard work of at least moderate intensity and less than 5% was attributable to leisure-time PA [43]. This evidence is consistent with the suggestion that differences in incidental physical activity are responsible for the greatest variations in energy expenditure between individuals and populations [10]. Moreover, recent evidence indicates that one of the most potent mechanisms determining cardiovascular risk factors, including obesity and metabolic disorders, is the amount of time spent in high volume daily intermittent low-intensity postural or ambulatory activities, which account for as much as 90% of energy expended in physical activity [11].
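The expenditure breakdown cited above can be made concrete; a sketch assuming an illustrative total daily expenditure of 10,000 kJ (the percentages are from the text; the total is an assumption):

```python
# Illustrative total daily energy expenditure; 10,000 kJ is an assumed figure
total_kj = 10_000

basal = 0.70 * total_kj      # basal metabolic rate, ~70% [15]
thermic = 0.10 * total_kj    # thermic effect of food, ~10% [15]
activity = 0.20 * total_kj   # all physical activity, ~20% [15]

# Within the PA share, as much as ~90% may come from high-volume,
# low-intensity postural and ambulatory activity [11]
incidental = 0.90 * activity
purposeful = activity - incidental

print(round(basal), round(thermic), round(incidental), round(purposeful))
```

On these figures, purposeful exercise accounts for only a few percent of total expenditure, which is why shifts in incidental activity can dominate energy balance.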
Sedentary behaviours generally involve sitting or lying down and are characterised by low energy expenditure (metabolic equivalent intensity <2) [41]. A substantial amount of time spent in sedentary activities is likely to contribute to obesity through reduced overall energy expenditure, mainly through its impact on incidental physical activity, since sedentary time may co-exist with relatively high levels of exercise-related physical activity. Screen-time, particularly television watching, is also associated with other health behaviours, such as eating fatty foods. However, the finding of increased obesity among those watching greater amounts of television persisted in this dataset after adjustment for intake of fatty food, and in other studies following adjustment for total energy intake [8] and foods eaten while watching television [27], so this is unlikely to explain much of its effects. In lower- and middle-income countries, including Thailand, industrialisation is generally accompanied by increasing urbanisation, a more sedentary lifestyle, with increasing car and computer use, and a higher fat diet dominated by more refined foods [16]. It is also characterised by a shift in work patterns for a substantial proportion of the population, from high energy expenditure activities such as farming, mining and forestry to less energy-demanding jobs in the service sector [16]. All of these changes are likely to increase population obesity. There are a number of specific barriers to increasing physical activity in many Asian countries, including environmental factors such as heat, inadequate urban infrastructure, pollution and other hazards. Furthermore, chronic malnutrition has been common in many Asian countries, leading to stunting in significant portions of the population and rendering them vulnerable to obesity as food availability improves. The importance of obesity, the metabolic syndrome and diabetes in Thailand has been highlighted extensively [44-46].
In Thailand, and in many other countries, social factors are key upstream determinants of the major influences on obesity. For example, domestic duties are often divided along gender lines and many wealthier households have servants, particularly to do the heavier work. In this cohort, higher socioeconomic status was accompanied by increasing risk of being obese in men and decreasing risk of being obese in women [35]. This pattern is believed to represent an intermediate stage in the health-risk transition between less-developed countries such as China, which demonstrate high socioeconomic status to be associated with increased obesity in both men and women [47], and Western populations where high socioeconomic status is associated with reduced obesity in both men and women [2,35]. The study was able to investigate simultaneously a number of activity-related measures and the large numbers allowed quantification of the association of these factors with obesity within a range of population subgroups. However, the analyses presented here are cross-sectional so it is not possible to directly attribute causality to the relationships observed or to exclude reverse causality. Reverse causality occurs when an exposure varies because of the specific condition under investigation. In this case, reverse causality would mean that obesity might result in reduced exercise-related PA, increased sedentary behaviour and decreased domestic PA. There are a priori reasons why it is likely that certain elements of the PA-BMI relationship are causal i.e. that reduced energy expenditure due to reduced PA results in increased BMI. However, it is also possible that people with high BMI may change their level of PA. 
Intuitively, this might apply more to exercise-related PA than to screen-time or domestic activities; people with a high BMI may do more exercise-related PA in order to lose weight or may reduce their exercise-related PA, due to the extra exertion required because of their weight or obesity-related health issues (e.g. joint problems). In Thai society, women in particular are under pressure to be thin and the increased walking among women of higher BMI may reflect this. Going against a large role for reverse causality is the fact that increasing inactivity has been shown to result in increased obesity in longitudinal data [8,48] and experimental studies show that increasing BMI by overfeeding of lean individuals does not result in increased sedentary behaviour [49]. This issue is not resolved entirely by using longitudinal data, since the major risk factor for incident obesity is having a high BMI at baseline [49]. We propose that the relationship between sedentary behaviour and obesity is likely to be complex, with a causal relationship between inactivity and obesity predominating. There is likely to be some contribution of obesity leading to inactivity [48], or indeed a "spiral" relationship, whereby inactivity leads to obesity, which further exacerbates inactivity, leading to further increases in obesity [50].
Conclusions
In common with many middle- to low-income countries, the prevalence of overweight and obesity in Thailand is lower than that seen in many Western nations, but is increasing rapidly. Avoiding the transition to the obesity patterns seen in the West is a key priority. The data presented here suggest that habitual, high-volume, low-intensity PA is likely to be important for maintaining a healthy weight, and are in keeping with other data showing that increasing exercise-related leisure-time PA alone is unlikely to be sufficient to prevent population obesity [15].
Leisure-related television and computer use were strongly related to the risk of being obese. Research focusing on habitual activities and sedentary behaviours is relatively new. Effective interventions to reduce sedentary time and increase incidental activity are being developed; innovative interventions applicable to the Asian context are needed urgently.
Supplementary Material
Thai Cohort Study baseline questionnaire (English). An English language translation of a questionnaire administered to students of Sukhothai Thammathirat Open University in 2005.
Thai Cohort Study baseline questionnaire (Thai). The Thai language original questionnaire administered to students of Sukhothai Thammathirat Open University in 2005.
Background: Patterns of physical activity (PA), domestic activity and sedentary behaviours are changing rapidly in Asia. Little is known about their relationship with obesity in this context. This study investigates in detail the relationship between obesity, physical activity, domestic activity and sedentary behaviours in a Thai population. Methods: 74,981 adult students aged 20-50 from all regions of Thailand attending the Sukhothai Thammathirat Open University in 2005-2006 completed a self-administered questionnaire, including providing appropriate self-reported data on height, weight and PA. We conducted cross-sectional analyses of the relationship between obesity, defined according to Asian criteria (Body Mass Index (BMI) ≥25), and measures of physical activity and sedentary behaviours (exercise-related PA; leisure-related computer use and television watching ("screen-time"); housework and gardening; and sitting-time) adjusted for age, sex, income and education and compared according to a range of personal characteristics. Results: Overall, 15.6% of participants were obese, with a substantially greater prevalence in men (22.4%) than women (9.9%). Inverse associations between being obese and total weekly sessions of exercise-related PA were observed in men, with a significantly weaker association seen in women (p(interaction) < 0.0001). Increasing obesity with increasing screen-time was seen in all population groups examined; there was an overall 18% (15-21%) increase in obesity with every two hours of additional daily screen-time. There were 33% (26-39%) and 33% (21-43%) reductions in the adjusted risk of being obese in men and women, respectively, reporting housework/gardening daily versus seldom or never. Exercise-related PA, screen-time and housework/gardening each had independent associations with obesity. 
Conclusions: Domestic activities and sedentary behaviours are important in relation to obesity in Thailand, independent of exercise-related physical activity. In this setting, programs to prevent and treat obesity through increasing general physical activity need to consider overall energy expenditure and address a wide range of low-intensity high-volume activities in order to be effective.
Background
The prevalence of obesity is rising rapidly in most Asian countries, with increases of 46% in Japan and over 400% in China observed from the 1980s to early 2000s [1]. In Thailand, the prevalence of obesity increased by around 19% from 1997 to 2004 alone [2]. There have been accompanying increases in morbidity related to conditions such as diabetes and cardiovascular disease in Asian countries [3,4]. It is well established in Western populations that increasing purposeful or leisure-time physical activity (PA) is associated with reduced rates of obesity [5,6]. Recent evidence, also from Western countries, suggests that sedentary activities, such as watching television or using a computer, are associated with increasing obesity, independent of purposeful PA [7-9]. The role of incidental PA and overall energy expenditure in influencing obesity has been highlighted [10,11]. The interplay between these factors and their combined effects on obesity are not well understood, and information relevant to Asian populations is particularly scarce. Furthermore, the relationship between domestic activities and obesity is unclear [12-14]. This is important because physical activity related to patterns of daily activity differs between Asia and Western countries [15] and because many Asian countries are experiencing rapid health and lifestyle transitions [16]. This paper examines in detail the relationships between obesity, exercise-related PA, domestic activities and sedentary behaviours in Thailand, with particular emphasis on the interaction between these factors.
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/11/762/prepub
Background: Patterns of physical activity (PA), domestic activity and sedentary behaviours are changing rapidly in Asia. Little is known about their relationship with obesity in this context. This study investigates in detail the relationship between obesity, physical activity, domestic activity and sedentary behaviours in a Thai population. Methods: 74,981 adult students aged 20-50 from all regions of Thailand attending the Sukhothai Thammathirat Open University in 2005-2006 completed a self-administered questionnaire, including providing appropriate self-reported data on height, weight and PA. We conducted cross-sectional analyses of the relationship between obesity, defined according to Asian criteria (Body Mass Index (BMI) ≥25), and measures of physical activity and sedentary behaviours (exercise-related PA; leisure-related computer use and television watching ("screen-time"); housework and gardening; and sitting-time) adjusted for age, sex, income and education and compared according to a range of personal characteristics. Results: Overall, 15.6% of participants were obese, with a substantially greater prevalence in men (22.4%) than women (9.9%). Inverse associations between being obese and total weekly sessions of exercise-related PA were observed in men, with a significantly weaker association seen in women (p(interaction) < 0.0001). Increasing obesity with increasing screen-time was seen in all population groups examined; there was an overall 18% (15-21%) increase in obesity with every two hours of additional daily screen-time. There were 33% (26-39%) and 33% (21-43%) reductions in the adjusted risk of being obese in men and women, respectively, reporting housework/gardening daily versus seldom or never. Exercise-related PA, screen-time and housework/gardening each had independent associations with obesity. 
Conclusions: Domestic activities and sedentary behaviours are important in relation to obesity in Thailand, independent of exercise-related physical activity. In this setting, programs to prevent and treat obesity through increasing general physical activity need to consider overall energy expenditure and address a wide range of low-intensity high-volume activities in order to be effective.
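The abstract above classifies obesity by the Asian BMI cutoff (BMI ≥ 25) and reports an 18% (15-21%) increase in obesity with every two hours of additional daily screen-time. A minimal sketch of that arithmetic follows; the function names are ours, and extrapolating the 18%-per-2-hours figure multiplicatively is our illustrative assumption, not a claim made by the study:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def is_obese_asian(weight_kg: float, height_m: float) -> bool:
    """Obesity per the Asian criterion used in the study: BMI >= 25."""
    return bmi(weight_kg, height_m) >= 25

def screen_time_odds_multiplier(extra_hours_per_day: float,
                                or_per_2h: float = 1.18) -> float:
    """Relative odds of obesity for additional daily screen-time, assuming
    the reported 18%-per-2-hours effect compounds multiplicatively
    (an illustrative assumption)."""
    return or_per_2h ** (extra_hours_per_day / 2)

print(round(bmi(75, 1.70), 1))                    # 26.0
print(is_obese_asian(75, 1.70))                   # True: meets the Asian cutoff
print(round(screen_time_odds_multiplier(4), 2))   # 1.39, i.e. 1.18**2
```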
13,010
424
[ 282, 120, 979, 327, 40, 3740, 528, 732, 113, 2901, 159 ]
13
[ "time", "related", "pa", "screen", "exercise", "screen time", "obesity", "obese", "exercise related", "gardening" ]
[ "sedentary behaviours thailand", "obesity leisure related", "domestic activities obesity", "physical activity asian", "thailand prevalence obesity" ]
[CONTENT] Obesity | Thailand | physical activity | inactivity | domestic activity | sedentary behaviours [SUMMARY]
[CONTENT] Activities of Daily Living | Adult | Body Mass Index | Cohort Studies | Cross-Sectional Studies | Exercise | Female | Humans | Male | Middle Aged | Obesity | Odds Ratio | Prevalence | Risk Factors | Sedentary Behavior | Sex Factors | Surveys and Questionnaires | Thailand | Time Factors | Young Adult [SUMMARY]
[CONTENT] sedentary behaviours thailand | obesity leisure related | domestic activities obesity | physical activity asian | thailand prevalence obesity [SUMMARY]
[CONTENT] time | related | pa | screen | exercise | screen time | obesity | obese | exercise related | gardening [SUMMARY]
[CONTENT] countries | obesity | asian countries | asian | prevalence obesity | activities | pa | purposeful | western countries | domestic activities [SUMMARY]
[CONTENT] 20 | time | question | exercise | activity | questionnaire | study | health | physical | screen [SUMMARY]
[CONTENT] interventions | habitual | low | seen | obesity | related | sedentary | data | increasing | volume low intensity [SUMMARY]
[CONTENT] time | related | pa | screen | screen time | obese | obesity | exercise | housework | gardening [SUMMARY]
[CONTENT] Asia ||| ||| Thai [SUMMARY]
[CONTENT] 74,981 | 20-50 | Thailand | the Sukhothai Thammathirat Open University | 2005-2006 ||| Asian | BMI [SUMMARY]
[CONTENT] Thailand ||| [SUMMARY]
[CONTENT] Asia ||| ||| Thai ||| 74,981 | 20-50 | Thailand | the Sukhothai Thammathirat Open University | 2005-2006 ||| Asian | BMI ||| 15.6% | obese | 22.4% | 9.9% ||| weekly ||| 18% | 15-21% | every two hours ||| 33% | 26 | 39% | 33% | 21-43% | obese ||| ||| Thailand ||| [SUMMARY]
Lung Deposition and Inspiratory Flow Rate in Patients with Chronic Obstructive Pulmonary Disease Using Different Inhalation Devices: A Systematic Literature Review and Expert Opinion.
33907390
Our aim was to describe: 1) lung deposition and inspiratory flow rate; 2) main characteristics of inhaler devices in chronic obstructive pulmonary disease (COPD).
BACKGROUND
A systematic literature review (SLR) was conducted to analyze the features and results of inhaler devices in COPD patients. These devices included pressurized metered-dose inhalers (pMDIs), dry powder inhalers (DPIs), and a soft mist inhaler (SMI). Inclusion and exclusion criteria were established, as well as search strategies (Medline, Embase, and the Cochrane Library up to April 2019). In vitro and in vivo studies were included. Two reviewers selected articles, collected and analyzed data independently. Narrative searches complemented the SLR. We discussed the results of the reviews in a nominal group meeting and agreed on various general principles and recommendations.
METHODS
The SLR included 71 articles, some of low-moderate quality, and there was great variability in populations and outcomes. Lung deposition rates varied across devices: 8%-53% for pMDIs, 7%-69% for DPIs, and 39%-67% for the SMI. The aerosol exit velocity is high with pMDIs (more than 3 m/s) and much slower with the SMI (0.72-0.84 m/s). In general, pMDIs produce large-sized particles (1.22-8 μm), DPIs produce medium-sized particles (1.8-4.8 µm), and with the SMI 60% of the particles reach an aerodynamic diameter <5 μm. All inhalation devices reach central and peripheral lung regions, but the SMI's distribution pattern might be better than that of pMDIs. DPIs' intrinsic resistance is higher than that of pMDIs and the SMI, which are relatively similar and low. Depending on the DPI, the minimum inspiratory flow rate required was 30 L/min; pMDIs and the SMI did not require a high inspiratory flow rate.
RESULTS
Lung deposition and inspiratory flow rate are key factors when selecting an inhalation device in COPD patients.
CONCLUSION
[ "Administration, Inhalation", "Bronchodilator Agents", "Dry Powder Inhalers", "Equipment Design", "Expert Testimony", "Humans", "Lung", "Metered Dose Inhalers", "Pulmonary Disease, Chronic Obstructive" ]
8064620
Introduction
Chronic obstructive pulmonary disease (COPD) is characterized by a persistent airflow limitation that is usually progressive, according to guidelines from the Global Initiative for Chronic Obstructive Lung Disease (GOLD).1 In recent years, the prevalence of COPD has increased dramatically, growing by 44.2% from 1990 to 2015.2 The impact on patients, society, and health systems is correspondingly huge. More than 3 million people die of COPD each year, accounting for 6% of all deaths worldwide.3 In 2010, the cost of COPD in the USA was projected to be approximately US $50 billion.4 One of the primary treatment modalities for COPD is medication delivered via inhalation devices. Currently, in clinical practice, a variety of devices are available for the treatment of these patients, including pressurized metered-dose inhalers (pMDIs), which are used with or without a valved holding chamber or spacer, as well as dry powder inhalers (DPIs) and the soft mist inhaler (SMI). Inhaler devices vary in several ways, including how the inhaler dispenses the drug, whether the aerosol is passively or actively generated (using propellant, mechanical energy, or compressed air), and the drug’s formulation (solution, dry powder, or mist). The selection of an inhalation device is a key point in COPD because it affects patient adherence, the drug’s effectiveness, and long-term outcomes.5 A range of studies have assessed which factors and characteristics should be considered when selecting the most appropriate device.6–8 Interestingly, according to many expert opinions, the most important factors in achieving optimal disease outcomes are the generation of high lung deposition and correct dispensation at low inspiratory flow rates.9 Other relevant factors include inhalation technique, potential difficulties with the device, and patient preferences.
On the other hand, data regarding lung deposition and inspiratory flow rates across inhalation devices in COPD patients are usually described and evaluated as absolute, static numbers. However, a theoretical framework and pathophysiological and clinical evidence all suggest that both are influenced by several factors that relate to the patients and their COPD, all of which can change over time.6,10–17 Therefore, analyzing lung deposition and inspiratory flow rates in COPD patients who use inhalation devices requires a more careful, holistic, and dynamic approach. Considering all the aspects described above, we performed a systematic literature review (SLR) and a narrative review to assess lung deposition and inspiratory flow rates, as well as data related to these inhalation devices in COPD patients. Using this information, we propose related conclusions and recommendations that can contribute to the selection of inhalation devices. We are confident that this information will be very useful for health professionals who are involved in the care of patients with COPD.
General Conclusions and Recommendations
The choice of inhalation devices for COPD patients depends on a combination of factors, but lung deposition and inspiratory flow rate are key aspects of this selection process. When selecting an inhalation device, all health professionals who are involved in the care of patients with COPD must consider the basis of lung deposition and inspiratory flow rate, among other aspects. The clinician can then select the most adequate inhalation device, depending on the patient, their COPD, and the inhalation device’s characteristics, which will ultimately achieve the maximum lung deposition and distribution.
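As a toy illustration of how the flow-rate findings above could be operationalized, the sketch below screens device classes against a patient's measured peak inspiratory flow. The thresholds come from the review's figures (most DPIs need at least 30 L/min, some far more; pMDIs and the SMI work at low flows), but the function and its simplified cutoffs are our own construction, not a clinical rule:

```python
# Illustrative only: which device classes remain candidates given a
# patient's measured peak inspiratory flow (L/min)?
DPI_MIN_FLOW = 30  # L/min, the lower bound reported for DPIs in the review

def candidate_devices(peak_inspiratory_flow: float) -> list[str]:
    """Return inhaler classes plausibly usable at the given flow.
    pMDIs and the SMI are largely flow-independent per the review;
    DPIs require at least ~30 L/min (specific DPIs may need more)."""
    devices = ["pMDI", "SMI"]
    if peak_inspiratory_flow >= DPI_MIN_FLOW:
        devices.append("DPI")
    return devices

print(candidate_devices(25))  # ['pMDI', 'SMI']
print(candidate_devices(60))  # ['pMDI', 'SMI', 'DPI']
```

In practice, as the paragraph above stresses, device choice also depends on the patient, their COPD, and the device's other characteristics, so a single threshold check is only a starting point.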
[ "Methods", "Experts’ Selection", "Systematic Literature Review", "Narrative Review", "Nominal Group Meeting", "Results", "Lung Deposition", "Inspiratory Flow Rate", "General Conclusions and Recommendations" ]
[ "This project consisted of an SLR, a narrative review, and an expert opinion based on a nominal group meeting. A nominal group meeting is a structured method for brainstorming that encourages contributions from everyone and facilitates quick agreement on the relative importance of issues, problems, or solutions.
Experts’ Selection. We first established a group of 10 pneumologists (two of us were project coordinators). We are all specialized in COPD, with demonstrated clinical experience (a minimum of 8 years, ≥5 publications, and membership of the Sociedad Española de Neumología y Cirugía Torácica (SEPAR)), and we are located in different parts of Spain. We then defined the project’s objectives, established the protocol of the SLR, and decided that it would be complemented by a narrative review.
Systematic Literature Review. The SLR was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Its objective was to analyze lung deposition, inspiratory flow, and other characteristics of different inhaler devices (pMDIs, DPIs, and the SMI) in both COPD patients and healthy subjects. Studies were identified using sensitive search strategies in the main medical databases; an expert librarian checked the search strategies (Tables 1–7 of the Supplementary material). Disease- and inhaler device-related terms were used as search keywords, employing a controlled vocabulary, specific MeSH headings, and additional keywords. The following bibliographic databases were screened: Medline (PubMed) and Embase from 1961 to April 2019, and the Cochrane Library up to April 2019. Retrieved references were managed in Endnote X5 (Thomson Reuters). Finally, a manual search was performed by reviewing the references of the included studies and other information provided by the authors. Studies were included if they met the following pre-established criteria: patients had to be diagnosed with COPD, aged 18 or older, and treated with an inhaler device, and studies had to report outcomes related to lung deposition and inspiratory flow, including the rate of lung deposition, the particles’ mass median aerodynamic diameter (MMAD, in μm), the aerosol exit velocity (AEV, in m/s), the lung distribution pattern, the inspiratory flow rate (in L/min), or the device’s intrinsic resistance. Other variables, such as safety, were also considered. Only SLRs, meta-analyses, randomized controlled trials (RCTs), observational studies, and in vitro studies in English, French, or Spanish were included; animal studies were excluded. The screening of studies, data collection (including the evidence tables), and analysis were performed independently by two reviewers, with discrepancies resolved by consensus with a third reviewer. The 2011 levels of evidence from the Oxford Center for Evidence-Based Medicine (OCEBM)18 were used to grade the quality of the studies.
Narrative Review. To supplement the SLR, additional searches were performed specifically to explore the basis of lung deposition and inspiratory flow, including their determinants and the effect of COPD on these aspects. Apart from the results of the SLR, we performed different searches in Medline using PubMed’s Clinical Queries tool and small search strategies using MeSH and text–word terms (Table 8 of the Supplementary material).
Nominal Group Meeting. The results of the SLR and narrative searches were presented and discussed in a guided nominal group meeting, in which we agreed on a series of general conclusions and clinical recommendations.", "The SLR retrieved 3064 articles, of which 979 were duplicates. A total of 120 articles were reviewed in detail, as well as a further 20 articles retrieved through the manual search. Eventually, 75 articles were excluded (Table 9 of the Supplementary material), most of them for lack of relevant data, and 71 were included, of which 24 were in vitro studies.16,17,19–40 Some of the included articles were of low–moderate quality (owing to study design and poor description of methodology, especially in articles published before the 1990s), and there was great variability in study designs, populations, outcomes, and measures. Besides the in vitro studies, the included articles comprised one SLR41 and several RCTs and cross-sectional studies.
The studies analyzed more than 1600 COPD patients, most of whom were men, with ages ranging from 27 to 89 years and forced expiratory volume in 1 second from 25% to 80%. Many of these studies assessed one type of inhalation device, but others compared pMDIs and DPIs,19,20,30,33,37,40–46 pMDIs and SMI,47–49 or DPIs and SMI.17,27,28 One study also evaluated the three inhalation devices.17 The narrative searches found almost 1000 articles.
Here, we summarize the main results of the SLR and narrative review, according to the project’s objectives (lung deposition, inspiratory flow rate, and data regarding these aspects for different inhaler devices). We also present the general conclusions and recommendations. Tables 1–3 show the main characteristics of the inhalation devices.

Table 1. Main Characteristics of Pressurized Metered-Dose Inhalers
Formulation: drug suspended or dissolved in propellant (with surfactant and cosolvent)
Metering system: metering valve and reservoir
Propellant: HFA or CFC
Dose counter: sometimes
Priming: variable priming requirements
Temperature dependence: low
Humidity dependence: low
Actuator orifice: the design and size of the actuator significantly influence the performance of pMDIs
Lung deposition: 8%–53%
MMAD: 1.22–8 μm
Aerosol exit velocity: high (more than 3 m/s)
Lung distribution: central and peripheral regions
Intrinsic resistance: low
Inspiratory flow rate: ~20 L/min
Advantages: compact and portable, consistent dosing, and rapid delivery
Disadvantages: not breath-actuated, require coordination
Abbreviations: pMDI, pressurized metered-dose inhaler; HFA, hydrofluoroalkane; CFC, chlorofluorocarbon; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.

Table 2. Main Characteristics of Dry Powder Inhalers
Formulation: drug/lactose blend, drug alone, drug/excipient particles
Metering system: capsules, blisters, multi-dose blister packs, reservoirs
Propellant: no
Dose counter: yes
Priming: variable priming requirements
Temperature dependence: yes
Humidity dependence: yes
Actuator orifice: does not apply
Lung deposition: ~20%
MMAD: 1.8–4.8 µm
Aerosol exit velocity: depends on inspiratory flow rate
Lung distribution: central and peripheral regions
Intrinsic resistance: low/medium/high
Inspiratory flow rate: minimum of 30 L/min to >100 L/min
Advantages: compact and portable; some are multi-dose devices; do not require coordination of inhalation with activation or hand strength
Disadvantages: require a minimum inspiratory flow; patients with cognitive or debilitating conditions might not generate sufficiently high inspiratory flows; most are moisture-sensitive
Abbreviations: DPI, dry powder inhaler; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.

Table 3. Main Characteristics of the Soft Mist Inhaler
Formulation: aqueous solution or suspension
Metering system: reservoirs
Propellant: no
Dose counter: yes
Priming: actuate the inhaler toward the ground until an aerosol cloud is visible, then repeat the process three more times
Temperature dependence: no
Humidity dependence: no
Actuator orifice: –
Lung deposition: 39.2%–67%
MMAD: ~3.7 μm
Aerosol exit velocity: 0.72–0.84 m/s
Lung distribution: central and peripheral regions
Intrinsic resistance: low/none
Inspiratory flow rate: independent
Advantages: portable and compact; multi-dose device; reusable; compared with dry powder inhalers, a considerably smaller dose of a combination bronchodilator results in the same level of efficacy and safety
Disadvantages: needs to be primed if not in use for over 21 days
Abbreviations: SMI, soft mist inhaler; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.

Lung Deposition
Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.)
and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown that during COPD exacerbations patients present decreased lung function and respiratory muscle strength, which eventually influence lung deposition.53 Other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), the formulation (eg, particle charge, lipophilicity, hygroscopicity, etc.), the inhaled particle (eg, MMAD and its effect on lung distribution), and the inhalation pattern (eg, inspiratory flow rate, volume, breath–hold time, etc.).52,54
With regard to lung deposition (relative to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56
Lung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposition of 39%.67 As noted above, different factors probably contribute to this variability in rates.
The included studies that analyzed DPIs showed that the lung deposition rate is low, at around 20%,68 and is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler®
9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 As with all inhaler devices, other factors probably influence the lung deposition rate.81 Although some studies have compared lung deposition between pMDIs and DPIs, their results are contradictory.19,33
Respimat® (SMI) has largely exhibited high lung deposition rates, ranging from 39.2% to 67%,27,38,48,49,74,82–84 at different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, the SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86
We also evaluated the AEV. Inhalation devices with a high AEV tend to have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high velocity of more than 3 m/s.87 The AEV of the SMI is much slower, at 0.72–0.84 m/s, and the aerosol cloud lasts longer.88–90
It has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles.
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMADs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, the SMI generates a cloud containing an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles delivered by the SMI have an MMAD <5 μm.85 The reported rate with pMDIs and DPIs (indirect comparison) is not as high.27,28,74,94
Another relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that the lung distribution pattern might be better than with pMDIs, with a higher distribution in the bronchial tree and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate, and central lung deposition of 5.0%–9.4%, 4.8%–11.3%, and 4.5%–10.4%, respectively, and a peripheral zone/central zone ratio of 1.01–1.16 with Respimat®, vs 3.8%, 4.9%, 5.6%, and 1.36 with pMDIs.49 Comparative data between pMDIs and DPIs are conflicting.33,46
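To make the reported deposition ranges concrete, the short sketch below estimates how much of a hypothetical 100 µg emitted dose would reach the airways for each device class. The fractions are illustrative midpoints of the ranges cited above, not measured values for any specific product.

```python
# Toy lung-dose estimate from the deposition ranges reported above.
# All fractions are illustrative midpoints, not measured data.
emitted_dose_ug = 100.0  # hypothetical emitted dose

deposition_fraction = {
    "pMDI": 0.30,            # reported range ~8%-53%
    "pMDI + spacer": 0.40,   # reported range ~11%-68%
    "DPI (typical)": 0.20,   # around 20% on average
    "SMI (Respimat)": 0.53,  # reported range ~39.2%-67%
}

for device, frac in deposition_fraction.items():
    lung_dose = emitted_dose_ug * frac
    print(f"{device}: ~{lung_dose:.0f} ug estimated to reach the airways")
```

The spread across devices illustrates why the emitted (label) dose alone is a poor proxy for the dose actually delivered to the lung.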
Inspiratory Flow Rate
The other main focus of this project was the inspiratory flow rate. First, it is important to consider the factors associated with the inspiratory flow rate (Table 4). 
Similar to lung deposition, some of these factors relate to the patient's and the COPD's characteristics, while others relate to the inhaler device, such as its intrinsic resistance.6,10,13–17,43,45,97,98
Table 4. Main Factors Associated with Inspiratory Flow Rate
Patient-related: inspiratory capacity; inspiratory effort; comorbidities; inhalation technique.
COPD-related: severity; hyperinflation; exacerbations; respiratory muscle alterations.
Inhalation device-related: internal resistance; disaggregation of the powdered drug dose (DPIs).
Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.
Overall, two main driving forces can affect the performance of DPIs: the inspiratory flow generated by the patient and the turbulence produced inside the device, the latter of which depends solely on the original technical characteristics of the device, including its intrinsic resistance. These two parameters affect the disaggregation of the drug dose, the diameter of the particles to be inhaled, the lung distribution of the dose, and, eventually, the efficacy of the delivered drug. Essentially, the higher the intrinsic resistance of the device, the lower the inspiratory flow the patient needs to generate.
In general, although variable, DPIs' intrinsic resistance is higher than that of pMDIs or the SMI, which are relatively similar and low. 
Therefore, pMDIs and the SMI do not require the patient to generate a high inspiratory flow (or inspiratory effort).
According to the results of the SLR, pMDIs require low inspiratory flow rates of around 20 L/min to achieve adequate lung deposition.10,17,43,45,57,82,99–101 There were no major differences between one propellant and another.57 To generate the correct inspiratory airflow and lung deposition with this type of inhalation device, it is recommended that patients start the breath from their functional residual capacity, then activate the device and begin inhaling at an inspiratory flow rate below 60 L/min. At the end of inspiration, patients should hold their breath for around 10 seconds.100 Consequently, patients need a correct inhalation technique and good coordination. The K-haler® is triggered by an inspiratory flow rate of approximately 30 L/min.67
Inhaler devices are often classified as low- (30 L/min or below), medium- (~30–60 L/min), and high-resistance (>60 L/min) devices.10,17 DPIs with low intrinsic resistance include the Aerolizer®, Spinhaler®, and Breezhaler®; DPIs with medium resistance include the Accuhaler®/Diskhaler®, Genuair®/Novolizer®, and NEXThaler®; DPIs with medium/high resistance include the Turbuhaler®; and DPIs with high resistance include the Easyhaler®, Handihaler®, and Twisthaler®. The estimated inspiratory flow rates required thus vary across devices, from a minimum of 30 L/min to more than 100 L/min.6,26,32,43,45,101–108
Based on the information presented above, when using a high-resistance DPI, the disaggregation and micro-dispersion of the powdered drug are relatively independent of the patient's inspiratory effort, because the driving force depends on the intrinsic resistance of the DPI itself, which is able to produce the turbulence required for effective drug micro-dispersion. 
However, when a low-resistance device is used, the only force that can generate turbulence is the patient's inspiratory airflow, which must therefore be high.
Finally, the studies showed that the SMI uses mechanical energy (from a spring) to generate a fine, slow-moving mist from an aqueous solution, independently of the patient's inspiratory effort. Therefore, the required inspiratory flow rate and/or effort are less relevant than with DPIs.83,88,89,109 Moreover, the inhalation maneuver with the SMI is closer to physiological inhalation. One study observed that drug delivery to the lungs with the SMI was more efficient than with pMDIs, even with poor inhalation technique.82
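The inverse relation between device resistance and achievable flow can be sketched with the orifice model commonly used in the inhaler literature, √ΔP = R × Q. The resistance values below are illustrative round numbers in kPa^0.5·min/L, not specifications of any particular device.

```python
import math

def achievable_flow(delta_p_kpa: float, resistance: float) -> float:
    """Flow (L/min) from the orifice model sqrt(deltaP) = R * Q,
    with resistance R in kPa^0.5 * min / L."""
    return math.sqrt(delta_p_kpa) / resistance

# Illustrative resistance values (assumptions, not device specifications):
for label, r in [("low resistance", 0.02),
                 ("medium resistance", 0.03),
                 ("high resistance", 0.045)]:
    q = achievable_flow(4.0, r)  # 4 kPa: a commonly used reference pressure drop
    print(f"{label} (R = {r}): ~{q:.0f} L/min")
```

Note how, at the same inspiratory effort (pressure drop), the patient achieves a lower flow through a higher-resistance device; in such devices the internal turbulence compensates for the lower flow during powder disaggregation.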
General Conclusions and Recommendations
The experts discussed the results of the reviews and, based on the evidence, formulated a series of general conclusions and recommendations, outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged: lung deposition and the required inspiratory flow rate. Both are highly influenced by patient, COPD, and inhaler device characteristics. 
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.
Table 5. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease
1. The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device.
2. COPD is a progressive disease with specific pathophysiological features that impact patients' lung deposition and inspiratory flow rate.
3. In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors.
4. During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate.
5. A homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors.
6. COPD treatment requires inhalation devices capable of delivering particles with an MMAD between 0.5 and 5 µm to achieve high lung deposition.
7. The patient's ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) is decisive to achieve an adequate inspiratory flow rate and lung deposition.
8. Inhalation maneuvers similar to physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition.
9. Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition.
10. The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI.
11. The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lower the lung deposition.
12. The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higher.
Abbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.
Table 6. Experts' Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease
It is strongly recommended to:
1. Consider COPD pathophysiological aspects as well as patients' clinical status and disease severity/evolution when selecting an inhalation device.
2. Take into account the specific characteristics of each inhalation device.
3. Assess patients' ability to perform a correct inhalation maneuver and the specific requirements of each inhalation device.
4. Evaluate patients' inspiratory flow rate or inspiratory capacity before selecting an inhalation device.
5. Take into account patients' history of exacerbations or other events that may affect their ability to perform adequate inhalation.
6. Regularly review patients' inhalation maneuver and check whether the inhalation device meets their needs.
7. Use an active inhalation device, such as a pMDI or SMI, in patients with reduced inspiratory capacity.
8. Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties.
9. Use inhalation devices that generate low oropharyngeal and high lung deposition.
10. Check patients' inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhaler.
Abbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.
Thus, it is strongly recommended that, in addition to the standard variables for COPD, the inspiratory flow rate and the patient's inspiratory capacity are evaluated on a regular basis, and that the selection of an inhaler device is based on the COPD patient's features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches the patient's needs.
Finally, we considered it important to systematically review the patient's inhalation maneuver (see Tables 3 and 6).110 This should be checked during every visit, so that errors can be resolved and inhalers can be checked and even changed, where necessary. 
The same way, before considering a change in the patient’s treatment, possible errors with the inhalation maneuver should be evaluated.\nThe experts discussed the results of the reviews, and, based on the evidence, they formulated a series of general conclusions and recommendations that are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged at this point: lung deposition and the required inspiratory flow rate. Both of these factors are highly influenced by patient, COPD, and inhaler device characteristics. Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.Table 5General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease#Conclusion1The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device2COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate3In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors4During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate5An homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors6COPD treatment requires inhalation devices capable of delivering particles with a MMAD comprised between 0.5 and 5 µm to achieve high lung deposition7The patients’ ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) 
is decisive to achieve an adequate inspiratory flow rate and lung deposition8Inhalation maneuvers that are similar to physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition9Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition10The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI11The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lesser the lung deposition12The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higherAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.\nTable 6Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease#It is Strongly Recommended to …1Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device2Take into account the specific characteristics of each inhalation device3Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements for each inhalation device4Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device5Take into account patients’ history of exacerbations or other events that may affect their ability to perform adequate inhalation6Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs7Use an active inhalation 
device, such as pMDI or SMI, in patients with reduced inspiratory capacity8Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties9Use inhalation devices that generate a low oropharyngeal and high lung deposition10Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhalerAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.\n\nGeneral Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease\nAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.\nExperts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease\nAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.\nThus, it is strongly recommended that, in addition to the standard variables for COPD, inspiratory flow rate and the patient’s inspiratory capacity are evaluated (on a regular basis), and the selection of an inhaler device should be based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches that patient’s needs.\nFinally, we considered it important to systematically review the patient’s inhalation maneuver,110 see Tables 3 and 6. This should be checked during every visit, so that errors can be resolved, and inhalers can be checked and even changed, where necessary. 
The same way, before considering a change in the patient’s treatment, possible errors with the inhalation maneuver should be evaluated.", "Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown during COPD exacerbations patients present decreased lung function and respiratory muscle strength that eventually influence on lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), inhaled particle (eg, MMDA, its effect on lung distribution, etc.), and inhalation pattern (eg, the inspiration flow rate, volume, breath–hold time, etc.).52,54\nWith regards to the lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56\nLung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposit of 39%.67 As exposed before, different factors might be contributing to these rate variability.\nThe studies included that analyzed DPIs have shown that the lung deposition rate is low, at around 20%,68 which is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not 
observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 Similarly to all inhaler devices, other factors are probably influencing the lung deposition rate.81 Although some studies have compared lung deposition in pMDIs and DPIs, their results are contradictory.19,33\nRespimat® (SMI) has largely exhibited high lung deposition rates that range from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86\nWe also evaluated the AEV. Inhalation devices with a high AEV might have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high rate of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.84–0.72 m/s, and the aerosol cloud lasts longer.88–90\nIt has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles. 
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMDAs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, SMI generates a cloud that contains an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles reach a MMAD <5 μm with SMI.85 The reported rate with pMDIs and DPIs (indirect comparison) is not that high.27,28,74,94\nAnother relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that lung distribution pattern might be better than pMDIs, with a higher distribution in bronchial trees and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate and central lung deposition, and peripheral zone/central zone ratio of 5.0%–9.4%, 4.8%–11.3%, 4.5%–10.4%, 1.01–1.16 with Respimat® vs 3.8%, 4.9%, 5.6%, 1.36 with pMDIs, respectively.49 Comparative data between pMDIs and DPIs are conflicting.33,46", "The other main focus of this project was the inspiratory flow rate. First, it is important to consider the factors associated with inspiratory flow rate (Table 4). 
Similar to lung deposition, some of these factors relate to the patient’s and COPD’s characteristics, while other factors relate to the inhaler device, such as the intrinsic resistance.6,10,13–17,43,45,97,98Table 4Main Factors Associated to Inspiratory Flow RatePatient-related Inspiratory capacity Inspiratory effort Comorbidities Inhalation techniqueCOPD-related Severity Hyperinflation Exacerbations Respiratory muscle alterationsInhalation device-related Internal resistance Disaggregation of the powdered drug dose (DPIs)Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.\n\nMain Factors Associated to Inspiratory Flow Rate\nAbbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.\nOverall, two main driving forces can affect the performance of DPIs: the inspiratory flow generated by the patient and the turbulence produced inside the device, the latter of which solely depends on the original technical characteristics of the device, including the intrinsic resistance. These two parameters affect the disaggregation of the drug dose, the diameter of the particles to inhale, the lung distribution of the dose, and eventually, the efficacy of the delivered drug. Essentially, a higher intrinsic resistance results in the patient needing to generate a higher inspiratory flow.\nIn general, although variable, DPIs’ intrinsic resistance is higher than that of pMDIs or SMI, which are relatively similar and low. 
Therefore, pMDIs and SMI do not require the patient to generate a high inspiratory flow (and inspiratory effort).\nAccording to the results of the SLR, pMDIs require low inspiratory flow rates of around 20 L/min to achieve adequate lung deposition.10,17,43,45,57,82,99–101 There were no major differences between the use of one propellant and another.57 In order to generate the correct inspiratory airflow and lung deposition with this type of inhalation device, it is recommended that patients start the maneuver from their functional residual capacity, then actuate the inhalation device and begin inhaling at an inspiratory flow rate below 60 L/min. At the end of inspiration, patients should hold their breath for around 10 seconds.100 Consequently, patients need a correct inhalation technique and coordination. The K-haler® is triggered by an inspiratory flow rate of approximately 30 L/min.67\nInhaler devices are often classified as low- (30 L/min or below), medium- (~30–60 L/min), and high-resistance (>60 L/min) devices.10,17 DPIs with low intrinsic resistance include Aerolizer®, Spinhaler®, and Breezhaler®; DPIs with medium resistance include Accuhaler®/Diskhaler®, Genuair®/Novolizer®, and NEXThaler®; DPIs with medium/high resistance include Turbuhaler®; and DPIs with high resistance include Easyhaler®, Handihaler®, and Twisthaler®. The estimated inspiratory flow rates required thus vary across devices, from a minimum of 30 L/min to more than 100 L/min.6,26,32,43,45,101–108\nBased on the information presented above, when using a high-resistance DPI, the disaggregation and micro-dispersion of the powdered drug are relatively independent of the patient’s inspiratory effort because the driving force depends on the intrinsic resistance of the DPI itself, which is able to produce the turbulence required for effective drug micro-dispersion. 
However, when a low-resistance device is used, the only force that can generate turbulence is the patient’s inspiratory airflow, which should be high.\nFinally, the studies showed that the SMI inhalation device uses mechanical energy (from a spring) to generate a fine, slow-moving mist from an aqueous solution, which is independent of the patient’s inspiratory effort. Therefore, the required inspiratory flow rate and/or effort are less relevant than with DPIs.83,88,89,109 Moreover, the inhalation maneuver with SMI is more similar to physiological inhalation. One study observed that drug delivery to the lungs with SMI was more efficient than with pMDIs, even with poor inhalation technique.82", "The experts discussed the results of the reviews, and, based on the evidence, they formulated a series of general conclusions and recommendations that are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged at this point: lung deposition and the required inspiratory flow rate. Both of these factors are highly influenced by patient, COPD, and inhaler device characteristics. 
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.\nTable 5. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease\n1. The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device.\n2. COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate.\n3. In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors.\n4. During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate.\n5. A homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors.\n6. COPD treatment requires inhalation devices capable of delivering particles with an MMAD between 0.5 and 5 µm to achieve high lung deposition.\n7. The patient’s ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) is decisive to achieve an adequate inspiratory flow rate and lung deposition.\n8. Inhalation maneuvers that are similar to physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition.\n9. Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition.\n10. The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI.\n11. The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lower the lung deposition.\n12. The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higher.\nAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.\nTable 6. Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease. It is strongly recommended to:\n1. Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device.\n2. Take into account the specific characteristics of each inhalation device.\n3. Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements for each inhalation device.\n4. Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device.\n5. Take into account patients’ history of exacerbations or other events that may affect their ability to perform adequate inhalation.\n6. Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs.\n7. Use an active inhalation device, such as a pMDI or SMI, in patients with reduced inspiratory capacity.\n8. Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties.\n9. Use inhalation devices that generate low oropharyngeal and high lung deposition.\n10. Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhaler.\nAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.\nThus, it is strongly recommended that, in addition to the standard variables for COPD, the inspiratory flow rate and the patient’s inspiratory capacity are evaluated on a regular basis, and that the selection of an inhaler device is based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches the patient’s needs.\nFinally, we considered it important to systematically review the patient’s inhalation maneuver110 (see Tables 3 and 6). This should be checked during every visit, so that errors can be resolved and inhalers can be checked and even changed, where necessary. Likewise, before considering a change in the patient’s treatment, possible errors with the inhalation maneuver should be evaluated." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Experts’ Selection", "Systematic Literature Review", "Narrative Review", "Nominal Group Meeting", "Results", "Lung Deposition", "Inspiratory Flow Rate", "General Conclusions and Recommendations" ]
[ "Chronic obstructive pulmonary disease (COPD) is characterized by a persistent airflow limitation that is usually progressive, according to guidelines from the Global Initiative for Chronic Obstructive Lung Disease (GOLD).1 In recent years, the prevalence of COPD has dramatically increased, growing by 44.2% from 1990 to 2015.2 The impact on patients, society, and health systems is correspondingly huge. More than 3 million people die of COPD each year, accounting for 6% of all deaths worldwide.3 In 2010, the cost of COPD in the USA was projected to be approximately US $50 billion.4\nOne of the primary treatment modalities for COPD is medication delivered via inhalation devices. Currently, in clinical practice, a variety of devices are available for the treatment of these patients, including pressurized metered-dose inhalers (pMDIs), which are used with or without a valved holding chamber or spacer, as well as dry powder inhalers (DPIs) and the soft mist inhaler (SMI). 
Inhaler devices vary in several ways, including how the inhaler dispenses the drug, whether the aerosol is generated passively or actively (using propellant, mechanical, or compressed-air energy), and the drug’s formulation (solution, dry powder, or mist).\nThe selection of an inhalation device is a key point in COPD because it impacts patient adherence, the drug’s effectiveness, and long-term outcomes.5 A range of studies have assessed which factors/characteristics should be considered when selecting the most appropriate device.6–8 Interestingly, according to many expert opinions, the most important factors involved in achieving optimal disease outcomes are the generation of high lung deposition and correct dispensation with low inspiratory flow rates.9 Other relevant factors include inhalation technique, potential difficulties with the device, and patient preferences.\nOn the other hand, data regarding lung deposition and inspiratory flow rates across inhalation devices in COPD patients are usually described and evaluated as absolute, static numbers. However, a theoretical framework and pathophysiological and clinical evidence all suggest that both are influenced by several factors that relate to the patients and their COPD, all of which can change over time.6,10–17 Therefore, analyzing lung deposition and inspiratory flow rates in COPD patients who use inhalation devices requires a more careful, holistic, and dynamic approach.\nConsidering all the aspects described above, we performed a systematic literature review (SLR) and a narrative review to assess lung deposition and inspiratory flow rates, as well as data related to these inhalation devices in COPD patients. Using this information, we propose related conclusions and recommendations that can contribute to the selection of inhalation devices. 
We are confident that this information will be very useful for health professionals who are involved in the care of patients with COPD.", "This project consisted of an SLR, a narrative review, and an expert opinion based on a nominal group meeting. A nominal group meeting is a structured method for brainstorming that encourages contributions from everyone and facilitates quick agreement on the relative importance of issues, problems, or solutions.\nExperts’ Selection We first established a group of 10 pneumologists (two of us were project coordinators). We are all specialized in COPD with demonstrated clinical experience (a minimum of 8 years and ≥5 publications) and are members of the Sociedad Española de Neumología y Cirugía Torácica (SEPAR). In addition, we are located in different parts of Spain. Then, we defined the project’s objectives, established the protocol of the SLR, and decided that this would be complemented by a narrative review.\nSystematic Literature Review The SLR was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The objective of the SLR was to analyze lung deposition, inspiratory flow, and other characteristics of different inhaler devices (pMDIs, DPIs, and SMI) in both COPD patients and healthy subjects. Studies were identified using sensitive search strategies in the main medical databases. For this purpose, an expert librarian checked the search strategies (Tables 1–7 of the Supplementary material). 
Disease- and inhaler device-related terms were used as search keywords, which employed a controlled vocabulary, specific MeSH headings, and additional keywords. The following bibliographic databases were screened up to April 2019: Medline (PubMed) and Embase from 1961 to April 2019, and the Cochrane Library up to April 2019. Retrieved references were managed in Endnote X5 (Thomson Reuters). Finally, a manual search was performed by reviewing the references of the included studies and all the publications, as well as other information provided by the authors. Retrieved studies were included if they met the following pre-established criteria: Patients had to be diagnosed with COPD, aged 18 or older, and treated with an inhaler device, and studies had to include outcomes related to lung deposition and inspiratory flow, including the rate of lung deposition, the particles’ mass median aerodynamic diameter (MMAD) expressed as µm (micrometer), the aerosol exit velocity (AEV) in meter per second (m/s), the lung distribution pattern, the inspiratory flow rate expressed as liter per minute (L/min), or the device’s intrinsic resistance. Other variables, such as safety, were also considered. Only SLRs, meta-analyses, randomized controlled trials (RCTs), observational studies, and in vitro studies in English, French, or Spanish were included. Animal studies were excluded. The screening of studies, data collection (including the evidence tables), and analysis were independently performed by two reviewers. In the case of a discrepancy between the reviewers, a consensus was reached by including a third reviewer. The 2011 levels of evidence from the Oxford Center for Evidence-Based Medicine (OCEBM)18 were used to grade the quality of the studies.\nNarrative Review To supplement the SLR, additional searches were performed specifically to explore the basis of lung deposition and inspiratory flow, including their determinants and the effect of COPD on these aspects. For this purpose, apart from the results of the SLR, we performed different searches in Medline using PubMed’s Clinical Queries tool and small search strategies using MeSH and text–word terms (Table 8 of the Supplementary material).\nNominal Group Meeting The results of the SLR and narrative searches were presented and discussed in a guided nominal group meeting. In this meeting, we agreed on a series of general conclusions and clinical recommendations.", "We first established a group of 10 pneumologists (two of us were project coordinators). We are all specialized in COPD with demonstrated clinical experience (a minimum of 8 years and ≥5 publications) and are members of the Sociedad Española de Neumología y Cirugía Torácica (SEPAR). In addition, we are located in different parts of Spain. 
Then, we defined the project’s objectives, established the protocol of the SLR, and decided that this would be complemented by a narrative review.", "The SLR was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The objective of the SLR was to analyze lung deposition, inspiratory flow, and other characteristics of different inhaler devices (pMDIs, DPIs, and SMI) in both COPD patients and healthy subjects. Studies were identified using sensitive search strategies in the main medical databases. For this purpose, an expert librarian checked the search strategies (Tables 1–7 of the Supplementary material). Disease- and inhaler device-related terms were used as search keywords, which employed a controlled vocabulary, specific MeSH headings, and additional keywords. The following bibliographic databases were screened up to April 2019: Medline (PubMed) and Embase from 1961 to April 2019, and the Cochrane Library up to April 2019. Retrieved references were managed in Endnote X5 (Thomson Reuters). Finally, a manual search was performed by reviewing the references of the included studies and all the publications, as well as other information provided by the authors. Retrieved studies were included if they met the following pre-established criteria: Patients had to be diagnosed with COPD, aged 18 or older, and treated with an inhaler device, and studies had to include outcomes related to lung deposition and inspiratory flow, including the rate of lung deposition, the particles’ mass median aerodynamic diameter (MMAD) expressed as µm (micrometer), the aerosol exit velocity (AEV) in meter per second (m/s), the lung distribution pattern, the inspiratory flow rate expressed as liter per minute (L/min), or the device’s intrinsic resistance. Other variables, such as safety, were also considered. 
Only SLRs, meta-analyses, randomized controlled trials (RCTs), observational studies, and in vitro studies in English, French, or Spanish were included. Animal studies were excluded. The screening of studies, data collection (including the evidence tables), and analysis were independently performed by two reviewers. In the case of a discrepancy between the reviewers, a consensus was reached by including a third reviewer. The 2011 levels of evidence from the Oxford Center for Evidence-Based Medicine (OCEBM)18 were used to grade the quality of the studies.", "To supplement the SLR, additional searches were performed specifically to explore the basis of lung deposition and inspiratory flow, including their determinants and the effect of COPD on these aspects. For this purpose, apart from the results of the SLR, we performed different searches in Medline using PubMed’s Clinical Queries tool and small search strategies using MeSH and text–word terms (Table 8 of the Supplementary material).", "The results of the SLR and narrative searches were presented and discussed in a guided nominal group meeting. In this meeting, we agreed on a series of general conclusions and clinical recommendations.", "The SLR retrieved 3064 articles, of which 979 were duplicates. A total of 120 articles were reviewed in detail, as well as a further 20 articles that were retrieved using the manual search. Eventually, 75 articles were excluded (Table 9 of the Supplementary material), most of them due to lack of relevant data, and 71 were included, of which 24 were in vitro studies. Some of the included articles were of low–moderate quality (due to the study design and poor description of the methodology, especially for articles published before the 1990s). We found great variability regarding study designs, populations, outcomes, and measures. There were 24 in vitro studies,16,17,19–40 and the rest of the articles comprised one SLR41 and several RCTs and cross-sectional studies. 
The studies analyzed more than 1600 COPD patients, most of whom were men, with age ranges from 27 to 89 years, and with forced expiratory volume in 1 second from 25% to 80%. Many of these studies assessed one type of inhalation device, but others compared pMDIs and DPIs,19,20,30,33,37,40–46 pMDIs and SMI,47–49 or DPIs and SMI.17,27,28 One study also evaluated the three inhalation devices.17 The narrative searches found almost 1000 articles.\nHere, we summarize the main results of the SLR and narrative review, according to the project’s objectives (lung deposition, inspiratory flow rate, and data regarding these aspects for different inhaler devices). We also present the general conclusions and recommendations. Tables 1–3 show the main characteristics of the inhalation devices.\nTable 1. Main Characteristics of Pressurized Metered-Dose Inhalers\nFormulation: drug suspended or dissolved in propellant (with surfactant and cosolvent)\nMetering system: metering valve and reservoir\nPropellant: HFA or CFC\nDose counter: sometimes\nPriming: variable priming requirements\nTemperature dependence: low\nHumidity dependence: low\nActuator orifice: the design and size of the actuator significantly influence the performance of pMDIs\nLung deposition: 8%–53%\nMMAD: 1.22–8 μm\nAerosol exit velocity: high (more than 3 m/s)\nLung distribution: central and peripheral regions\nIntrinsic resistance: low\nInspiratory flow rate: ~20 L/min\nAdvantages: compact and portable, consistent dosing, and rapid delivery\nDisadvantages: not breath-actuated, requires coordination\nAbbreviations: pMDI, pressurized metered-dose inhaler; HFA, hydrofluoroalkane; CFC, chlorofluorocarbon; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.\nTable 2. Main Characteristics of Dry Powder Inhalers\nFormulation: drug/lactose blend, drug alone, or drug/excipient particles\nMetering system: capsules, blisters, multi-dose blister packs, reservoirs\nPropellant: no\nDose counter: yes\nPriming: variable priming requirements\nTemperature dependence: yes\nHumidity dependence: yes\nActuator orifice: does not apply\nLung deposition: ~20%\nMMAD: 1.8–4.8 µm\nAerosol exit velocity: depends on inspiratory flow rate\nLung distribution: central and peripheral regions\nIntrinsic resistance: low/medium/high\nInspiratory flow rate: minimum of 30 L/min to >100 L/min\nAdvantages: compact and portable; some are multi-dose devices; do not require coordination of inhalation with activation or hand strength\nDisadvantages: require a minimum inspiratory flow; patients with cognitive or debilitating conditions might not generate sufficiently high inspiratory flows; most are moisture-sensitive\nAbbreviations: DPI, dry powder inhaler; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.\nTable 3. Main Characteristics of the Soft Mist Inhaler\nFormulation: aqueous solution or suspension\nMetering system: reservoirs\nPropellant: no\nDose counter: yes\nPriming: actuate the inhaler toward the ground until an aerosol cloud is visible, then repeat the process three more times\nTemperature dependence: no\nHumidity dependence: no\nActuator orifice: –\nLung deposition: 39.2%–67%\nMMAD: ~3.7 μm\nAerosol exit velocity: 0.72–0.84 m/s\nLung distribution: central and peripheral regions\nIntrinsic resistance: low/none\nInspiratory flow rate: independent\nAdvantages: portable and compact; multi-dose device; reusable; compared with dry powder inhalers, a considerably smaller dose of a combination bronchodilator results in the same level of efficacy and safety\nDisadvantages: needs to be primed if not in use for over 21 days\nAbbreviations: SMI, soft mist inhaler; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.\nLung Deposition Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) 
and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown that during COPD exacerbations patients present decreased lung function and respiratory muscle strength, which eventually influence lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), inhaled particle (eg, MMAD, its effect on lung distribution, etc.), and inhalation pattern (eg, the inspiration flow rate, volume, breath–hold time, etc.).52,54\nWith regard to lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56\nLung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K-haler® has a reported lung deposition of 39%.67 As noted above, different factors might contribute to this variability in rates.\nThe included studies that analyzed DPIs have shown that the lung deposition rate is low, at around 20%,68 which is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 
9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 As with all inhaler devices, other factors probably influence the lung deposition rate.81 Although some studies have compared lung deposition in pMDIs and DPIs, their results are contradictory.19,33\nRespimat® (SMI) has largely exhibited high lung deposition rates that range from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86\nWe also evaluated the AEV. Inhalation devices with a high AEV might have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high velocity of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.72–0.84 m/s, and the aerosol cloud lasts longer.88–90\nIt has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles. 
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMADs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, SMI generates a cloud that contains an aerosol with an MMAD of around 3.7 μm.74 It is estimated that 60% of the particles delivered by SMI have an aerodynamic diameter <5 μm.85 The rates reported for pMDIs and DPIs (indirect comparison) are lower.27,28,74,94\nAnother relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that its lung distribution pattern might be better than that of pMDIs, with a higher distribution in the bronchial tree and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate, and central lung deposition, and peripheral zone/central zone ratios of 5.0%–9.4%, 4.8%–11.3%, 4.5%–10.4%, and 1.01–1.16 with Respimat® vs 3.8%, 4.9%, 5.6%, and 1.36 with pMDIs, respectively.49 Comparative data between pMDIs and DPIs are conflicting.33,46\nDifferent factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) 
and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown during COPD exacerbations patients present decreased lung function and respiratory muscle strength that eventually influence on lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), inhaled particle (eg, MMDA, its effect on lung distribution, etc.), and inhalation pattern (eg, the inspiration flow rate, volume, breath–hold time, etc.).52,54\nWith regards to the lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56\nLung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposit of 39%.67 As exposed before, different factors might be contributing to these rate variability.\nThe studies included that analyzed DPIs have shown that the lung deposition rate is low, at around 20%,68 which is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 
9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 Similarly to all inhaler devices, other factors are probably influencing the lung deposition rate.81 Although some studies have compared lung deposition in pMDIs and DPIs, their results are contradictory.19,33\nRespimat® (SMI) has largely exhibited high lung deposition rates that range from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86\nWe also evaluated the AEV. Inhalation devices with a high AEV might have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high rate of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.84–0.72 m/s, and the aerosol cloud lasts longer.88–90\nIt has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles. 
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMDAs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, SMI generates a cloud that contains an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles reach a MMAD <5 μm with SMI.85 The reported rate with pMDIs and DPIs (indirect comparison) is not that high.27,28,74,94\nAnother relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that lung distribution pattern might be better than pMDIs, with a higher distribution in bronchial trees and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate and central lung deposition, and peripheral zone/central zone ratio of 5.0%–9.4%, 4.8%–11.3%, 4.5%–10.4%, 1.01–1.16 with Respimat® vs 3.8%, 4.9%, 5.6%, 1.36 with pMDIs, respectively.49 Comparative data between pMDIs and DPIs are conflicting.33,46\nInspiratory Flow Rate The other main focus of this project was the inspiratory flow rate. First, it is important to consider the factors associated with inspiratory flow rate (Table 4). 
Similar to lung deposition, some of these factors relate to the patient’s and COPD’s characteristics, while other factors relate to the inhaler device, such as the intrinsic resistance.6,10,13–17,43,45,97,98

Table 4. Main Factors Associated with Inspiratory Flow Rate
Patient-related: inspiratory capacity, inspiratory effort, comorbidities, inhalation technique.
COPD-related: severity, hyperinflation, exacerbations, respiratory muscle alterations.
Inhalation device-related: internal resistance, disaggregation of the powdered drug dose (DPIs).
Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.

Overall, two main driving forces affect the performance of DPIs: the inspiratory flow generated by the patient and the turbulence produced inside the device, the latter of which depends solely on the original technical characteristics of the device, including its intrinsic resistance. These two parameters affect the disaggregation of the drug dose, the diameter of the particles to be inhaled, the lung distribution of the dose, and, eventually, the efficacy of the delivered drug. Essentially, a higher intrinsic resistance requires the patient to generate a greater inspiratory effort to achieve a given flow, although the flow rate needed for effective drug dispersion is lower.

In general, although variable, DPIs’ intrinsic resistance is higher than that of pMDIs or SMI, which are relatively similar and low.
Therefore, pMDIs and SMI do not require the patient to generate a high inspiratory flow (or inspiratory effort).

According to the results of the SLR, pMDIs require low inspiratory flow rates of around 20 L/min to achieve adequate lung deposition.10,17,43,45,57,82,99–101 There were no major differences between one propellant and another.57 To generate the correct inspiratory airflow and lung deposition with this type of inhalation device, it is recommended that patients start breathing from their functional residual capacity, then activate the device and begin inhaling at a flow rate below 60 L/min. At the end of inspiration, patients should hold their breath for around 10 seconds.100 Consequently, patients need a correct inhalation technique and good coordination. The K-haler® is triggered by an inspiratory flow rate of approximately 30 L/min.67

Inhaler devices are often classified as low- (30 L/min or below), medium- (~30–60 L/min), and high-resistance (>60 L/min) devices.10,17 DPIs with low intrinsic resistance include Aerolizer®, Spinhaler®, and Breezhaler®; DPIs with medium resistance include Accuhaler®/Diskhaler®, Genuair®/Novolizer®, and NEXThaler®; DPIs with medium/high resistance include Turbuhaler®; and DPIs with high resistance include Easyhaler®, Handihaler®, and Twisthaler®. The estimated inspiratory flow rates required thus vary across devices, from a minimum of 30 L/min to more than 100 L/min.6,26,32,43,45,101–108

Based on the information presented above, when using a high-resistance DPI, the disaggregation and micro-dispersion of the powdered drug are relatively independent of the patient’s inspiratory effort because the driving force depends on the intrinsic resistance of the DPI itself, which is able to produce the turbulence required for effective drug micro-dispersion.
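The resistance–flow trade-off can be made concrete with the orifice-like model commonly used for inhalers, in which √ΔP = R × Q, so the flow achieved at a given inspiratory pressure drop falls as device resistance rises. The resistance values in the sketch below are illustrative assumptions for low-, medium-, and high-resistance devices, not published device data; 4 kPa is a commonly used compendial reference pressure drop.

```python
from math import sqrt

def achievable_flow_l_min(pressure_drop_kpa: float, resistance: float) -> float:
    """Flow (L/min) through an inhaler modelled as orifice-like:
    sqrt(pressure drop) = resistance * flow, i.e. Q = sqrt(dP) / R,
    with R expressed in kPa^0.5 * min / L."""
    return sqrt(pressure_drop_kpa) / resistance

# Illustrative (assumed) resistance values, evaluated at a 4 kPa
# pressure drop -- not measurements of any specific device:
for label, r in [("low-resistance DPI", 0.017),
                 ("medium-resistance DPI", 0.027),
                 ("high-resistance DPI", 0.045)]:
    print(f"{label} (R = {r}): ~{achievable_flow_l_min(4.0, r):.0f} L/min at 4 kPa")
```

Note that the same inspiratory effort produces a much lower flow through a high-resistance device; this is why such devices can disperse the powder at modest flow rates, whereas low-resistance devices depend on the patient generating a high flow.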
However, when a low-resistance device is used, the only force that can generate turbulence is the patient’s inspiratory airflow, which should be high.

Finally, the studies showed that the SMI inhalation device uses mechanical energy (from a spring) to generate a fine, slow-moving mist from an aqueous solution, independently of the patient’s inspiratory effort. Therefore, the required inspiratory flow rate and/or effort are less relevant than with DPIs.83,88,89,109 Moreover, the inhalation maneuver with SMI is more similar to physiological inhalation. One study observed that drug delivery to the lungs with SMI was more efficient than with pMDIs, even with poor inhalation technique.82

General Conclusions and Recommendations
The experts discussed the results of the reviews and, based on the evidence, formulated a series of general conclusions and recommendations that are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged: lung deposition and the required inspiratory flow rate. Both factors are highly influenced by patient, COPD, and inhaler device characteristics.
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.

Table 5. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease
1. The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device.
2. COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate.
3. In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors.
4. During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate.
5. A homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors.
6. COPD treatment requires inhalation devices capable of delivering particles with an MMAD between 0.5 and 5 µm to achieve high lung deposition.
7. The patients’ ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) is decisive to achieve an adequate inspiratory flow rate and lung deposition.
8. Inhalation maneuvers that are similar to physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition.
9. Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition.
10. The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI.
11. The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lower the lung deposition.
12. The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higher.
Abbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.

Table 6. Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease
It is strongly recommended to:
1. Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device.
2. Take into account the specific characteristics of each inhalation device.
3. Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements of each inhalation device.
4. Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device.
5. Take into account patients’ history of exacerbations or other events that may affect their ability to perform adequate inhalation.
6. Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs.
7. Use an active inhalation device, such as a pMDI or SMI, in patients with reduced inspiratory capacity.
8. Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties.
9. Use inhalation devices that generate a low oropharyngeal and high lung deposition.
10. Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhaler.
Abbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.

Thus, it is strongly recommended that, in addition to the standard variables for COPD, the inspiratory flow rate and the patient’s inspiratory capacity be evaluated on a regular basis, and that the selection of an inhaler device be based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches the patient’s needs.

Finally, we considered it important to systematically review the patient’s inhalation maneuver110 (see Tables 3 and 6). This should be checked during every visit, so that errors can be resolved and inhalers can be checked and even changed, where necessary.
The same way, before considering a change in the patient’s treatment, possible errors with the inhalation maneuver should be evaluated.\nThe experts discussed the results of the reviews, and, based on the evidence, they formulated a series of general conclusions and recommendations that are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged at this point: lung deposition and the required inspiratory flow rate. Both of these factors are highly influenced by patient, COPD, and inhaler device characteristics. Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.Table 5General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease#Conclusion1The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device2COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate3In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors4During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate5An homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors6COPD treatment requires inhalation devices capable of delivering particles with a MMAD comprised between 0.5 and 5 µm to achieve high lung deposition7The patients’ ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) 
is decisive to achieve an adequate inspiratory flow rate and lung deposition8Inhalation maneuvers that are similar to physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition9Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition10The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI11The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lesser the lung deposition12The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higherAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.\nTable 6Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease#It is Strongly Recommended to …1Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device2Take into account the specific characteristics of each inhalation device3Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements for each inhalation device4Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device5Take into account patients’ history of exacerbations or other events that may affect their ability to perform adequate inhalation6Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs7Use an active inhalation 
device, such as pMDI or SMI, in patients with reduced inspiratory capacity8Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties9Use inhalation devices that generate a low oropharyngeal and high lung deposition10Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhalerAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.\n\nGeneral Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease\nAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.\nExperts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease\nAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.\nThus, it is strongly recommended that, in addition to the standard variables for COPD, inspiratory flow rate and the patient’s inspiratory capacity are evaluated (on a regular basis), and the selection of an inhaler device should be based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches that patient’s needs.\nFinally, we considered it important to systematically review the patient’s inhalation maneuver,110 see Tables 3 and 6. This should be checked during every visit, so that errors can be resolved, and inhalers can be checked and even changed, where necessary. 
The same way, before considering a change in the patient’s treatment, possible errors with the inhalation maneuver should be evaluated.", "Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown during COPD exacerbations patients present decreased lung function and respiratory muscle strength that eventually influence on lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), inhaled particle (eg, MMDA, its effect on lung distribution, etc.), and inhalation pattern (eg, the inspiration flow rate, volume, breath–hold time, etc.).52,54\nWith regards to the lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56\nLung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposit of 39%.67 As exposed before, different factors might be contributing to these rate variability.\nThe studies included that analyzed DPIs have shown that the lung deposition rate is low, at around 20%,68 which is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not 
observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 Similarly to all inhaler devices, other factors are probably influencing the lung deposition rate.81 Although some studies have compared lung deposition in pMDIs and DPIs, their results are contradictory.19,33\nRespimat® (SMI) has largely exhibited high lung deposition rates that range from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86\nWe also evaluated the AEV. Inhalation devices with a high AEV might have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high rate of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.84–0.72 m/s, and the aerosol cloud lasts longer.88–90\nIt has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles. 
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMDAs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, SMI generates a cloud that contains an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles reach a MMAD <5 μm with SMI.85 The reported rate with pMDIs and DPIs (indirect comparison) is not that high.27,28,74,94\nAnother relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that lung distribution pattern might be better than pMDIs, with a higher distribution in bronchial trees and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate and central lung deposition, and peripheral zone/central zone ratio of 5.0%–9.4%, 4.8%–11.3%, 4.5%–10.4%, 1.01–1.16 with Respimat® vs 3.8%, 4.9%, 5.6%, 1.36 with pMDIs, respectively.49 Comparative data between pMDIs and DPIs are conflicting.33,46", "The other main focus of this project was the inspiratory flow rate. First, it is important to consider the factors associated with inspiratory flow rate (Table 4). 
Similar to lung deposition, some of these factors relate to the patient’s and COPD’s characteristics, while other factors relate to the inhaler device, such as the intrinsic resistance.6,10,13–17,43,45,97,98Table 4Main Factors Associated to Inspiratory Flow RatePatient-related Inspiratory capacity Inspiratory effort Comorbidities Inhalation techniqueCOPD-related Severity Hyperinflation Exacerbations Respiratory muscle alterationsInhalation device-related Internal resistance Disaggregation of the powdered drug dose (DPIs)Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.\n\nMain Factors Associated to Inspiratory Flow Rate\nAbbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.\nOverall, two main driving forces can affect the performance of DPIs: the inspiratory flow generated by the patient and the turbulence produced inside the device, the latter of which solely depends on the original technical characteristics of the device, including the intrinsic resistance. These two parameters affect the disaggregation of the drug dose, the diameter of the particles to inhale, the lung distribution of the dose, and eventually, the efficacy of the delivered drug. Essentially, a higher intrinsic resistance results in the patient needing to generate a higher inspiratory flow.\nIn general, although variable, DPIs’ intrinsic resistance is higher than that of pMDIs or SMI, which are relatively similar and low. 
Therefore, pMDIs and SMI do not require the patient to generate a high inspiratory flow (and inspiratory effort).\nAccording to the results of the SLR, pMDIs require low inspiratory flow rates of around 20 L/min (59, 70, 132) to achieve an adequate lung deposition.10,17,43,45,57,82,99–101 There were no major differences between the use of one propellant and another.57 In order to generate the correct inspiratory airflow and lung deposition with this type of inhalation device, it is recommended the patients start breathing from their functional residual capacity, then they should activate the inhalation device and start inhalation using an inspiratory flow rate that is below 60 L/min. Then, at the end of inspiration, patients should hold their breath for around 10 seconds.100 Consequently, patients need a correct inhalation technique and coordination. The K-haler® is triggered by an inspiratory flow rate of approximately 30 L/min.67\nInhaler devices are many times classified as low- (30 L/min or below), medium– (~30–60 L/min), and high-resistance (>60 L/min) devices.10,17 DPIs with low intrinsic resistance include Aerolizer®, Spinhaler®, and Breezhaler®; DPIs with medium resistance include Accuhaler®/Diskhaler®, Genuair®/Novolizer®, and NEXThaler®; DPIs with medium/high resistance include Turbuhaler®; and DPIs with high resistance include Easyhaler®, Handihaler®, and Twisthaler®. The estimated inspiratory flow rates required thus vary across devices, from a minimum of 30 L/min to more than 100 L/min.6,26,32,43,45,101–108\nBased on the information presented above, when using a high-resistance DPI, the disaggregation and micro-dispersion of the powdered drug are relatively independent of the patient’s inspiratory effort because the driving force depends on the intrinsic resistance of the DPI itself, which is able to produce the turbulence required for effective drug micro-dispersion. 
However, when a low-resistance device is used, the only force that can generate turbulence is the patient’s inspiratory airflow, which should be high.\nFinally, the studies showed that the SMI inhalation device uses mechanical energy (from a spring) to generate a fine, slow–moving mist from an aqueous solution, which is independent of the patient’s inspiratory effort. Therefore, the required inspiratory flow rate and/or effort are less relevant than with DPIs.83,88,89,109 Moreover, the inhalation maneuver with SMI is more similar to physiological inhalation. One study observed that drug delivery to the lungs with SMI was more efficient than with pMDIs, even with poor inhalation technique.82", "The experts discussed the results of the reviews, and, based on the evidence, they formulated a series of general conclusions and recommendations that are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged at this point: lung deposition and the required inspiratory flow rate. Both of these factors are highly influenced by patient, COPD, and inhaler device characteristics. 
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.\nTable 5. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease\n1. The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device.\n2. COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate.\n3. In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors.\n4. During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate.\n5. A homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors.\n6. COPD treatment requires inhalation devices capable of delivering particles with an MMAD between 0.5 and 5 µm to achieve high lung deposition.\n7. The patient’s ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) is decisive to achieve an adequate inspiratory flow rate and lung deposition.\n8. Inhalation maneuvers that are similar to the physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition.\n9. Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition.\n10. The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI.\n11. The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lesser the lung deposition.\n12. The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when a correct maneuver is performed, oropharyngeal deposition is lower and lung deposition is higher.\nAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.\nTable 6. Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease. It is strongly recommended to:\n1. Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device.\n2. Take into account the specific characteristics of each inhalation device.\n3. Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements of each inhalation device.\n4. Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device.\n5. Take into account patients’ history of exacerbations or other events that may affect their ability to perform an adequate inhalation.\n6. Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs.\n7. Use an active inhalation device, such as a pMDI or the SMI, in patients with reduced inspiratory capacity.\n8. Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties.\n9. Use inhalation devices that generate a low oropharyngeal and a high lung deposition.\n10. Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhaler.\nAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.\nThus, it is strongly recommended that, in addition to the standard variables for COPD, the inspiratory flow rate and the patient’s inspiratory capacity are evaluated on a regular basis, and that the selection of an inhaler device is based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches the patient’s needs.\nFinally, we considered it important to systematically review the patient’s inhalation maneuver110 (see Tables 3 and 6). This should be checked during every visit, so that errors can be resolved and inhalers can be checked and even changed, where necessary. Likewise, before considering a change in the patient’s treatment, possible errors in the inhalation maneuver should be evaluated." ]
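The experts' recommendations 7–8 above lend themselves to a simple rule-of-thumb sketch. The following Python snippet is purely illustrative: the 30 L/min cut-off, the function name, and the device labels are assumptions made for demonstration, not thresholds stated by the authors, who recommend an individualized assessment at every visit.

```python
# Illustrative encoding of experts' recommendations 7-8: patients with reduced
# inspiratory capacity get an "active" device (pMDI or SMI), and patients who
# also have coordination difficulties get a valved holding chamber.
# NOTE: the 30 L/min threshold below is an assumption for illustration only.

def suggest_devices(peak_inspiratory_flow: float, coordination_ok: bool) -> list[str]:
    if peak_inspiratory_flow < 30:            # reduced inspiratory capacity
        devices = ["SMI", "pMDI"]             # active, flow-independent devices
        if not coordination_ok:
            # fragile patients with coordination difficulties (recommendation 8)
            devices = [d + " + valved holding chamber" for d in devices]
        return devices
    # otherwise any device type can be considered, subject to technique review
    return ["DPI", "SMI", "pMDI"]

print(suggest_devices(25, coordination_ok=False))
# ['SMI + valved holding chamber', 'pMDI + valved holding chamber']
```

In practice this decision also depends on exacerbation history, comorbidities, and regular re-checks of the inhalation maneuver, which no fixed threshold captures.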
[ "intro", null, null, null, null, null, null, null, null, null ]
[ "COPD", "lung deposition", "inspiratory flow", "inhalation devices", "systematic literature review" ]
Introduction: Chronic obstructive pulmonary disease (COPD) is characterized by a persistent airflow limitation that is usually progressive, according to guidelines from the Global Initiative for Chronic Obstructive Lung Disease (GOLD).1 In recent years, the prevalence of COPD has dramatically increased, growing by 44.2% from 1990 to 2015.2 The impact on patients, society, and health systems is correspondingly huge. More than 3 million people die of COPD worldwide each year, accounting for 6% of all deaths worldwide.3 In 2010, the cost of COPD in the USA was projected to be approximately US $50 billion.4 One of the primary treatment modalities for COPD is medications that are delivered via inhalation devices. Currently, in clinical practice, a variety of devices are available for the treatment of these patients, including pressurized metered-dose inhalers (pMDIs), which are used with or without a valved holding chamber or spacer, as well as dry powder inhalers (DPIs) and the soft mist inhaler (SMI). Inhaler devices vary in several ways, including how the inhaler dispenses the drug, whether the treatment is passively or actively generated (using propellant, mechanical, or compressed air), and the drug’s formulation (solution, dry powder, or mist). The selection of an inhalation device is a key point in COPD because it impacts patient adherence, the drug’s effectiveness, and long–term outcomes.5 A range of studies have assessed which factors/characteristics should be considered when selecting the most appropriate device.6–8 Interestingly, according to many expert opinions, the most important factors involved in achieving optimal disease outcomes are the generation of high lung deposition and correct dispensation with low inspiratory flow rates.9 Other relevant factors include inhalation technique, potential difficulties with the device, and patient preferences. 
On the other hand, data regarding lung deposition and inspiratory flow rates across inhalation devices in COPD patients are usually described and evaluated as absolute, static numbers. However, the theoretical framework and the pathophysiological and clinical evidence all suggest that both are influenced by several factors that relate to the patients and their COPD, all of which can change over time.6,10–17 Therefore, analyzing lung deposition and inspiratory flow rates in COPD patients who use inhalation devices requires a more careful, holistic, and dynamic approach. Considering all the aspects described above, we performed a systematic literature review (SLR) and a narrative review to assess lung deposition and inspiratory flow rates, as well as related data on these inhalation devices in COPD patients. Using this information, we propose conclusions and recommendations that can contribute to the selection of inhalation devices. We are confident that this information will be very useful for health professionals who are involved in the care of patients with COPD. Methods: This project consisted of an SLR, a narrative review, and an expert opinion based on a nominal group meeting. A nominal group meeting is a structured method for brainstorming that encourages contributions from everyone and facilitates quick agreement on the relative importance of issues, problems, or solutions. Experts’ Selection: We first established a group of 10 pneumologists (two of us were project coordinators). We are all specialized in COPD, with demonstrated clinical experience (a minimum of 8 years and ≥5 publications), and are members of the Sociedad Española de Neumología y Cirugía Torácica (SEPAR). Besides, we are located in different parts of Spain. Then, we defined the project’s objectives, established the protocol of the SLR, and decided that it would be complemented by a narrative review. Systematic Literature Review: The SLR was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The objective of the SLR was to analyze lung deposition, inspiratory flow, and other characteristics of different inhaler devices (pMDIs, DPIs, and SMI) in both COPD patients and healthy subjects. Studies were identified using sensitive search strategies in the main medical databases. For this purpose, an expert librarian checked the search strategies (Tables 1–7 of the Supplementary material). Disease- and inhaler device-related terms were used as search keywords, employing a controlled vocabulary, specific MeSH headings, and additional keywords. The following bibliographic databases were screened up to April 2019: Medline (PubMed) and Embase from 1961 to April 2019, and the Cochrane Library up to April 2019. Retrieved references were managed in Endnote X5 (Thomson Reuters). Finally, a manual search was performed by reviewing the references of the included studies and all the publications, as well as other information provided by the authors. Retrieved studies were included if they met the following pre-established criteria: patients had to be diagnosed with COPD, aged 18 or older, and treated with an inhaler device, and studies had to include outcomes related to lung deposition and inspiratory flow, including the rate of lung deposition, the particles’ mass median aerodynamic diameter (MMAD) expressed in µm (micrometers), the aerosol exit velocity (AEV) in meters per second (m/s), the lung distribution pattern, the inspiratory flow rate expressed in liters per minute (L/min), or the device’s intrinsic resistance. Other variables, such as safety, were also considered. Only SLRs, meta-analyses, randomized controlled trials (RCTs), observational studies, and in vitro studies in English, French, or Spanish were included. Animal studies were excluded. The screening of studies, data collection (including the evidence tables), and analysis were independently performed by two reviewers. In the case of a discrepancy between the reviewers, a consensus was reached by including a third reviewer. The 2011 levels of evidence from the Oxford Center for Evidence-Based Medicine (OCEBM)18 were used to grade the quality of the studies. Narrative Review: To supplement the SLR, additional searches were performed specifically to explore the basis of lung deposition and inspiratory flow, including their determinants and the effect of COPD on these aspects. For this purpose, apart from the results of the SLR, we performed different searches in Medline using PubMed’s Clinical Queries tool and small search strategies using MeSH and text–word terms (Table 8 of the Supplementary material). Nominal Group Meeting: The results of the SLR and narrative searches were presented and discussed in a guided nominal group meeting. In this meeting, we agreed on a series of general conclusions and clinical recommendations. Results: The SLR retrieved 3064 articles, of which 979 were duplicates. A total of 120 articles were reviewed in detail, as well as a further 20 articles that were retrieved using the manual search. Eventually, 75 articles were excluded (Table 9 of the Supplementary material), most of them due to a lack of relevant data, and 71 were included, of which 24 were in vitro studies. Some of the included articles were of low–moderate quality (due to the study design and poor description of the methodology, especially for articles published before the 1990s). We found great variability regarding study designs, populations, outcomes, and measures. There were 24 in vitro studies,16,17,19–40 and the rest of the articles comprised one SLR41 and several RCTs and cross-sectional studies. The studies analyzed more than 1600 COPD patients, most of whom were men, with ages ranging from 27 to 89 years and with a forced expiratory volume in 1 second from 25% to 80%. Many of these studies assessed one type of inhalation device, but others compared pMDIs and DPIs,19,20,30,33,37,40–46 pMDIs and SMI,47–49 or DPIs and SMI.17,27,28 One study also evaluated the three inhalation devices.17 The narrative searches found almost 1000 articles.
Here, we summarize the main results of the SLR and narrative review, according to the project’s objectives (lung deposition, inspiratory flow rate, and data regarding these aspects for different inhaler devices). We also present the general conclusions and recommendations. Tables 1–3 show the main characteristics of the inhalation devices.\nTable 1. Main Characteristics of Pressurized Metered-Dose Inhalers\nFormulation: drug suspended or dissolved in propellant (with surfactant and cosolvent)\nMetering system: metering valve and reservoir\nPropellant: HFA or CFC\nDose counter: sometimes\nPriming: variable priming requirements\nTemperature dependence: low\nHumidity dependence: low\nActuator orifice: the design and size of the actuator significantly influence the performance of pMDIs\nLung deposition: 8%–53%\nMMAD: 1.22–8 μm\nAerosol exit velocity: high (more than 3 m/s)\nLung distribution: central and peripheral regions\nIntrinsic resistance: low\nInspiratory flow rate: ~20 L/min\nAdvantages: compact and portable, consistent dosing, and rapid delivery\nDisadvantages: not breath-actuated; requires coordination\nAbbreviations: pMDI, pressurized metered-dose inhaler; HFA, hydrofluoroalkane; CFC, chlorofluorocarbon; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.\nTable 2. Main Characteristics of Dry Powder Inhalers\nFormulation: drug/lactose blend, drug alone, or drug/excipient particles\nMetering system: capsules, blisters, multi-dose blister packs, reservoirs\nPropellant: no\nDose counter: yes\nPriming: variable priming requirements\nTemperature dependence: yes\nHumidity dependence: yes\nActuator orifice: does not apply\nLung deposition: ~20%\nMMAD: 1.8–4.8 µm\nAerosol exit velocity: depends on inspiratory flow rate\nLung distribution: central and peripheral regions\nIntrinsic resistance: low/medium/high\nInspiratory flow rate: minimum of 30 L/min to >100 L/min\nAdvantages: compact and portable; some are multi-dose devices; do not require coordination of inhalation with activation or hand strength\nDisadvantages: require a minimum inspiratory flow; patients with cognitive or debilitating conditions might not generate sufficiently high inspiratory flows; most are moisture-sensitive\nAbbreviations: DPI, dry powder inhaler; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.\nTable 3. Main Characteristics of the Soft Mist Inhaler\nFormulation: aqueous solution or suspension\nMetering system: reservoirs\nPropellant: no\nDose counter: yes\nPriming: actuate the inhaler toward the ground until an aerosol cloud is visible and then repeat the process three more times\nTemperature dependence: no\nHumidity dependence: no\nActuator orifice: –\nLung deposition: 39.2%–67%\nMMAD: ~3.7 μm\nAerosol exit velocity: 0.72–0.84 m/s\nLung distribution: central and peripheral regions\nIntrinsic resistance: low/none\nInspiratory flow rate: independent\nAdvantages: portable and compact; multi-dose device; reusable; compared with dry powder inhalers, a considerably smaller dose of a combination bronchodilator results in the same level of efficacy and safety\nDisadvantages: needs to be primed if not in use for over 21 days\nAbbreviations: SMI, soft mist inhaler; MMAD, mass median aerodynamic diameter; m/s, meter per second; μm, micrometer; L/min, liter per minute.
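As a quick cross-check of the tables, the reported MMAD ranges can be intersected with the 0.5–5 µm window that the text identifies as optimal for lung deposition. This is a minimal sketch; the helper name and the device dictionary are illustrative, and the ranges are the figures quoted in Tables 1–3.

```python
# Intersect a device's reported MMAD range with the 0.5-5 µm window that the
# text cites as the target for high lung deposition (conclusion 6).
RESPIRABLE_UM = (0.5, 5.0)

def respirable_overlap(mmad_lo: float, mmad_hi: float, window=RESPIRABLE_UM):
    """Return the sub-range of [mmad_lo, mmad_hi] inside the window, or None."""
    lo = max(mmad_lo, window[0])
    hi = min(mmad_hi, window[1])
    return (lo, hi) if lo <= hi else None

# Reported MMAD ranges (µm) from Tables 1-3:
devices = {
    "pMDI (conventional)": (1.22, 8.0),
    "DPI": (1.8, 4.8),
    "SMI": (3.7, 3.7),   # fine-particle fraction of ~3.7 µm
}

overlaps = {name: respirable_overlap(*rng) for name, rng in devices.items()}
# e.g. the conventional pMDI range is truncated at 5 µm: (1.22, 5.0)
```

The upper tail of the conventional pMDI range (5–8 µm) falls outside the window, which is consistent with the oropharyngeal deposition discussed in the text.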
Lung Deposition Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown that, during COPD exacerbations, patients present decreased lung function and respiratory muscle strength, which eventually influence lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), the formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), the inhaled particle (eg, the MMAD, its effect on lung distribution, etc.), and the inhalation pattern (eg, the inspiratory flow rate, volume, breath–hold time, etc.).52,54 With regard to lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56 Lung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposition of 39%.67 As noted above, different factors probably contribute to this variability in rates.\nThe included studies that analyzed DPIs showed that the lung deposition rate is low, at around 20%,68 and is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not observed when patients performed the inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 As with all inhaler devices, other factors probably influence the lung deposition rate.81 Although some studies have compared lung deposition between pMDIs and DPIs, their results are contradictory.19,33 Respimat® (SMI) has largely exhibited high lung deposition rates, ranging from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, the SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86 We also evaluated the AEV. Inhalation devices with a high AEV tend to have a short spray duration, and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high velocity of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.72–0.84 m/s, and the aerosol cloud lasts longer.88–90 It has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant proportion of extrafine particles.
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMDAs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, SMI generates a cloud that contains an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles reach a MMAD <5 μm with SMI.85 The reported rate with pMDIs and DPIs (indirect comparison) is not that high.27,28,74,94 Another relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that lung distribution pattern might be better than pMDIs, with a higher distribution in bronchial trees and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate and central lung deposition, and peripheral zone/central zone ratio of 5.0%–9.4%, 4.8%–11.3%, 4.5%–10.4%, 1.01–1.16 with Respimat® vs 3.8%, 4.9%, 5.6%, 1.36 with pMDIs, respectively.49 Comparative data between pMDIs and DPIs are conflicting.33,46 Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) 
and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown during COPD exacerbations patients present decreased lung function and respiratory muscle strength that eventually influence on lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), inhaled particle (eg, MMDA, its effect on lung distribution, etc.), and inhalation pattern (eg, the inspiration flow rate, volume, breath–hold time, etc.).52,54 With regards to the lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56 Lung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposit of 39%.67 As exposed before, different factors might be contributing to these rate variability. 
The studies included that analyzed DPIs have shown that the lung deposition rate is low, at around 20%,68 which is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 Similarly to all inhaler devices, other factors are probably influencing the lung deposition rate.81 Although some studies have compared lung deposition in pMDIs and DPIs, their results are contradictory.19,33 Respimat® (SMI) has largely exhibited high lung deposition rates that range from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86 We also evaluated the AEV. Inhalation devices with a high AEV might have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high rate of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.84–0.72 m/s, and the aerosol cloud lasts longer.88–90 It has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles. 
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMADs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 The SMI, in turn, generates a cloud containing an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles delivered by the SMI have an MMAD <5 μm;85 the corresponding rates reported for pMDIs and DPIs (indirect comparison) are lower.27,28,74,94 Another relevant outcome when using inhalation devices is the lung distribution pattern (across the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that its lung distribution pattern might be better than that of pMDIs, with a higher distribution in the bronchial tree and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate, and central lung deposition of 5.0%–9.4%, 4.8%–11.3%, and 4.5%–10.4%, with a peripheral zone/central zone ratio of 1.01–1.16, with Respimat®, vs 3.8%, 4.9%, and 5.6%, with a ratio of 1.36, with pMDIs.49 Comparative data between pMDIs and DPIs are conflicting.33,46
Inspiratory Flow Rate
The other main focus of this project was the inspiratory flow rate. First, it is important to consider the factors associated with the inspiratory flow rate (Table 4).
Similar to lung deposition, some of these factors relate to the patient’s and the COPD’s characteristics, while others relate to the inhaler device, such as its intrinsic resistance.6,10,13–17,43,45,97,98
Table 4. Main Factors Associated with Inspiratory Flow Rate
Patient-related: inspiratory capacity; inspiratory effort; comorbidities; inhalation technique.
COPD-related: severity; hyperinflation; exacerbations; respiratory muscle alterations.
Inhalation device-related: internal resistance; disaggregation of the powdered drug dose (DPIs).
Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers.
Overall, two main driving forces affect the performance of DPIs: the inspiratory flow generated by the patient and the turbulence produced inside the device, the latter of which depends solely on the original technical characteristics of the device, including its intrinsic resistance. These two parameters affect the disaggregation of the drug dose, the diameter of the particles to be inhaled, the lung distribution of the dose, and, eventually, the efficacy of the delivered drug. Essentially, the higher the intrinsic resistance, the greater the inspiratory effort the patient must make to generate a given inspiratory flow. In general, although variable, the intrinsic resistance of DPIs is higher than that of pMDIs or the SMI, which are relatively similar and low. Therefore, pMDIs and the SMI do not require the patient to generate a high inspiratory flow (or inspiratory effort).
According to the results of the SLR, pMDIs require low inspiratory flow rates, of around 20 L/min, to achieve adequate lung deposition.10,17,43,45,57,82,99–101 There were no major differences between propellants.57 To generate the correct inspiratory airflow and lung deposition with this type of device, it is recommended that patients start the breath from their functional residual capacity, then activate the device and begin inhalation at an inspiratory flow rate below 60 L/min; at the end of inspiration, patients should hold their breath for around 10 seconds.100 Consequently, patients need a correct inhalation technique and coordination. The K-haler® is triggered by an inspiratory flow rate of approximately 30 L/min.67 Inhaler devices are often classified as low- (30 L/min or below), medium- (~30–60 L/min), and high-resistance (>60 L/min) devices.10,17 DPIs with low intrinsic resistance include the Aerolizer®, Spinhaler®, and Breezhaler®; DPIs with medium resistance include the Accuhaler®/Diskhaler®, Genuair®/Novolizer®, and NEXThaler®; the Turbuhaler® has medium/high resistance; and DPIs with high resistance include the Easyhaler®, Handihaler®, and Twisthaler®. The estimated inspiratory flow rates required thus vary across devices, from a minimum of 30 L/min to more than 100 L/min.6,26,32,43,45,101–108 Accordingly, when using a high-resistance DPI, the disaggregation and micro-dispersion of the powdered drug are relatively independent of the patient’s inspiratory effort, because the driving force depends on the intrinsic resistance of the DPI itself, which is able to produce the turbulence required for effective drug micro-dispersion. However, when a low-resistance device is used, the only force that can generate turbulence is the patient’s inspiratory airflow, which must therefore be high.
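The interplay between intrinsic resistance and the flow a patient must generate is often characterized in the aerosol literature by the empirical relation √ΔP = R × Q (pressure drop ΔP across the device, inspiratory flow Q, intrinsic resistance R). This review does not state the relation explicitly, and the resistance values below are illustrative assumptions rather than measurements of any named device:

```python
import math

# Empirical device characterization widely used in the aerosol literature
# (not derived in this review): sqrt(dP) = R * Q, where dP is the pressure
# drop across the inhaler (kPa), Q the inspiratory flow (L/min), and R the
# intrinsic resistance (kPa**0.5 * min * L**-1).
def flow_for_pressure_drop(resistance, pressure_drop_kpa=4.0):
    """Inspiratory flow (L/min) needed to develop a given pressure drop."""
    return math.sqrt(pressure_drop_kpa) / resistance

# Illustrative resistance values (assumptions, not measured values for any
# named device): higher resistance -> working pressure reached at lower flow.
for label, r in [("low resistance", 0.02),
                 ("medium resistance", 0.03),
                 ("high resistance", 0.05)]:
    print(f"{label} (R={r}): ~{flow_for_pressure_drop(r):.0f} L/min for 4 kPa")
```

In this sketch, at a fixed pressure drop the low-resistance device needs about 2.5 times the flow of the high-resistance one, which matches the review’s point that low-resistance DPIs rely on a high patient-generated airflow for powder disaggregation.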
Finally, the studies showed that the SMI uses mechanical energy (from a spring) to generate a fine, slow-moving mist from an aqueous solution, independently of the patient’s inspiratory effort. Therefore, the required inspiratory flow rate and/or effort are less relevant than with DPIs.83,88,89,109 Moreover, the inhalation maneuver with the SMI is more similar to physiological inhalation. One study observed that drug delivery to the lungs with the SMI was more efficient than with pMDIs, even with a poor inhalation technique.82
General Conclusions and Recommendations
The experts discussed the results of the reviews and, based on the evidence, formulated a series of general conclusions and recommendations, which are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all the factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged: lung deposition and the required inspiratory flow rate. Both are strongly influenced by patient, COPD, and inhaler device characteristics.
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.
Table 5. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease
1. The lung deposition profile and the required inspiratory flow rate are key factors to be considered when selecting an inhalation device.
2. COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate.
3. In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors.
4. During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate.
5. A homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors.
6. COPD treatment requires inhalation devices capable of delivering particles with an MMAD between 0.5 and 5 µm to achieve high lung deposition.
7. The patient’s ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) is decisive to achieve an adequate inspiratory flow rate and lung deposition.
8. Inhalation maneuvers similar to the physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition.
9. Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition.
10. The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI.
11. The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lower the lung deposition.
12. The SMI requires a low inspiratory flow rate; therefore, compared with other inhaler devices, when a correct maneuver is performed, oropharyngeal deposition is lower and lung deposition is higher.
Abbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.
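Conclusion 6’s particle-size window (MMAD 0.5–5 µm) can be expressed as a simple rule-of-thumb classifier. The cut-offs below are approximate and deliberately simplified; as the review stresses, real deposition also depends on flow, breath-hold time, and airway geometry:

```python
# Rule-of-thumb mapping from MMAD to the likeliest deposition site, built
# around the 0.5-5 um window cited in conclusion 6; the boundaries are
# approximate and ignore flow, breath-hold, and airway geometry.
def likely_deposition_site(mmad_um: float) -> str:
    if mmad_um > 5.0:
        return "oropharynx/large airways (inertial impaction)"
    if mmad_um >= 0.5:
        return "lung (bronchial tree and peripheral airways)"
    return "largely exhaled (minimal deposition)"

print(likely_deposition_site(8.0))   # coarse particle, upper end of pMDI MMADs
print(likely_deposition_site(3.7))   # fine-particle fraction reported for SMI
print(likely_deposition_site(0.3))   # ultrafine particle
```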
Table 6. Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease
It is strongly recommended to:
1. Consider COPD pathophysiological aspects as well as the patient’s clinical status and disease severity/evolution when selecting an inhalation device.
2. Take into account the specific characteristics of each inhalation device.
3. Assess the patient’s ability to perform a correct inhalation maneuver and the specific requirements of each inhalation device.
4. Evaluate the patient’s inspiratory flow rate or inspiratory capacity before selecting an inhalation device.
5. Take into account the patient’s history of exacerbations or other events that may affect their ability to perform an adequate inhalation.
6. Regularly review the patient’s inhalation maneuver and check whether the inhalation device meets their needs.
7. Use an active inhalation device, such as a pMDI or the SMI, in patients with reduced inspiratory capacity.
8. Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties.
9. Use inhalation devices that generate a low oropharyngeal and a high lung deposition.
10. Check the patient’s inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhaler.
Abbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.
Thus, it is strongly recommended that, in addition to the standard variables for COPD, the inspiratory flow rate and the patient’s inspiratory capacity be evaluated on a regular basis, and that the selection of an inhaler device be based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches the patient’s needs. Finally, we considered it important to systematically review the patient’s inhalation maneuver110 (see Tables 3 and 6). This should be checked during every visit, so that errors can be resolved and inhalers can be reviewed and even changed where necessary. In the same way, before considering a change in the patient’s treatment, possible errors in the inhalation maneuver should be evaluated.
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.Table 5General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease#Conclusion1The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device2COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate3In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors4During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate5An homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors6COPD treatment requires inhalation devices capable of delivering particles with a MMAD comprised between 0.5 and 5 µm to achieve high lung deposition7The patients’ ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) 
is decisive to achieve an adequate inspiratory flow rate and lung deposition8Inhalation maneuvers that are similar to physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition9Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition10The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI11The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lesser the lung deposition12The SMI requires a low inspiratory flow rate. Therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higherAbbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler. 
Table 6Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease#It is Strongly Recommended to …1Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device2Take into account the specific characteristics of each inhalation device3Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements for each inhalation device4Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device5Take into account patients’ history of exacerbations or other events that may affect their ability to perform adequate inhalation6Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs7Use an active inhalation device, such as pMDI or SMI, in patients with reduced inspiratory capacity8Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties9Use inhalation devices that generate a low oropharyngeal and high lung deposition10Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhalerAbbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease Abbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler. Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease Abbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler. 
Thus, it is strongly recommended that, in addition to the standard variables for COPD, inspiratory flow rate and the patient’s inspiratory capacity are evaluated (on a regular basis), and the selection of an inhaler device should be based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches that patient’s needs. Finally, we considered it important to systematically review the patient’s inhalation maneuver,110 see Tables 3 and 6. This should be checked during every visit, so that errors can be resolved, and inhalers can be checked and even changed, where necessary. The same way, before considering a change in the patient’s treatment, possible errors with the inhalation maneuver should be evaluated. Lung Deposition: Different factors have been associated with lung deposition, some of which relate to the patient’s features (eg, airway geometry, inspiratory capacity, inhalation technique, breath–hold time, etc.) 
and to COPD (eg, exacerbations or hyperinflation).10–12,50–52 In fact, it has been shown during COPD exacerbations patients present decreased lung function and respiratory muscle strength that eventually influence on lung deposition.53 However, other factors are connected to the inhaler device (eg, the aerosol-generating system, speed of the aerosol plume, intrinsic resistance, inhaled carrier gas, oral/nasal inhalation, etc.), formulation (eg, the particle charge, lipophilicity, hygroscopicity, etc.), inhaled particle (eg, MMDA, its effect on lung distribution, etc.), and inhalation pattern (eg, the inspiration flow rate, volume, breath–hold time, etc.).52,54 With regards to the lung deposition (in relation to the emitted dose) across inhaler devices, data from in vitro and in vivo studies have estimated that 10%–20% of the delivered dose reaches the airways.54–56 Lung deposition rates (from individual studies) ranging from 8% to 53% have been reported for pMDIs.49,54,56–59 However, this rate increased to 11%–68% with the addition of a valved holding chamber or spacer31,35,46,59–63 and to 50%–60% with press-and-breathe actuators.64 More specifically, when Modulite® was used, lung deposition could reach up to 31%–34%.65,66 The K–haler® has a reported lung deposit of 39%.67 As exposed before, different factors might be contributing to these rate variability. 
The studies included that analyzed DPIs have shown that the lung deposition rate is low, at around 20%,68 which is negatively influenced by a suboptimal inspiratory flow rate, humidity, and changes in temperature.69 Furthermore, clear differences in lung deposition were not observed when patients performed inhalation correctly.68 For the main DPI devices, the published lung deposition rates from individual studies (without direct comparisons) are as follows: Accuhaler® 7.6%,70 Aerolizer® 13%–20%,34,71 Breezhaler® 26.8%–39%,24,29 Easyhaler® 18.5%–31%,71,72 Genuair® 30.1%–51.1%,27,73,74 Handihaler® 9.8%–46.7%,19,24,71 Ingelheim inhaler® 16%–59%,75 NEXThaler® 39.4%–56%,11,76 Spinhaler® 11.5%,75 Turbuhaler® 14.2%–69.3%,21,42,77–79 and Twisthaler® 36%–37%.80 Similarly to all inhaler devices, other factors are probably influencing the lung deposition rate.81 Although some studies have compared lung deposition in pMDIs and DPIs, their results are contradictory.19,33 Respimat® (SMI) has largely exhibited high lung deposition rates that range from 39.2% to 67%,27,38,48,49,74,82–84 with different inspiratory flow rates (high and low) and irrespective of humidity.85 Compared with other devices, SMI showed higher lung deposition than pMDIs (including those with a chamber or spacer) or DPIs.27,48,74,83,86 We also evaluated the AEV. Inhalation devices with a high AEV might have a short spray duration and vice versa. With pMDIs, the aerosol exits through a nozzle at a very high rate of more than 3 m/s.87 However, the AEV of the SMI is much slower, at 0.84–0.72 m/s, and the aerosol cloud lasts longer.88–90 It has also been observed that the distribution of the deposition sites of inhaled particles is strongly dependent on their aerodynamic diameters.69 This SLR found that pMDIs generally produce at least medium-sized particles, with a significant rate of extrafine particles. 
The observed MMAD of conventional pMDIs varies from 1.22 to 8 μm,35,91,92 from 1.19 to 3.57 μm when a valved holding chamber or spacer is used,31,35,93 and from 0.72 to 2.0 μm with Modulite®.65,66 Regarding particle size data for DPIs, depending on the device and drug, MMDAs vary from 1.40 to 4.8 µm.11,19,21,24,27–29,36,37,74,76 Conversely, SMI generates a cloud that contains an aerosol with a fine particle fraction of around 3.7 μm.74 It is estimated that 60% of the particles reach a MMAD <5 μm with SMI.85 The reported rate with pMDIs and DPIs (indirect comparison) is not that high.27,28,74,94 Another relevant outcome when using inhalation devices is the lung distribution pattern (through the central and peripheral regions). All inhalation devices have been shown to reach both central and peripheral areas. SMI data suggest that lung distribution pattern might be better than pMDIs, with a higher distribution in bronchial trees and peripheral regions.11,28,49,60,65,66,73,74,82,95,96 More specifically, a comparative study found mean peripheral, intermediate and central lung deposition, and peripheral zone/central zone ratio of 5.0%–9.4%, 4.8%–11.3%, 4.5%–10.4%, 1.01–1.16 with Respimat® vs 3.8%, 4.9%, 5.6%, 1.36 with pMDIs, respectively.49 Comparative data between pMDIs and DPIs are conflicting.33,46 Inspiratory Flow Rate: The other main focus of this project was the inspiratory flow rate. First, it is important to consider the factors associated with inspiratory flow rate (Table 4). 
Similar to lung deposition, some of these factors relate to the patient’s and COPD’s characteristics, while other factors relate to the inhaler device, such as the intrinsic resistance.6,10,13–17,43,45,97,98Table 4Main Factors Associated to Inspiratory Flow RatePatient-related Inspiratory capacity Inspiratory effort Comorbidities Inhalation techniqueCOPD-related Severity Hyperinflation Exacerbations Respiratory muscle alterationsInhalation device-related Internal resistance Disaggregation of the powdered drug dose (DPIs)Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers. Main Factors Associated to Inspiratory Flow Rate Abbreviations: COPD, chronic obstructive pulmonary disease; DPIs, dry powder inhalers. Overall, two main driving forces can affect the performance of DPIs: the inspiratory flow generated by the patient and the turbulence produced inside the device, the latter of which solely depends on the original technical characteristics of the device, including the intrinsic resistance. These two parameters affect the disaggregation of the drug dose, the diameter of the particles to inhale, the lung distribution of the dose, and eventually, the efficacy of the delivered drug. Essentially, a higher intrinsic resistance results in the patient needing to generate a higher inspiratory flow. In general, although variable, DPIs’ intrinsic resistance is higher than that of pMDIs or SMI, which are relatively similar and low. Therefore, pMDIs and SMI do not require the patient to generate a high inspiratory flow (and inspiratory effort). 
According to the results of the SLR, pMDIs require low inspiratory flow rates of around 20 L/min (59, 70, 132) to achieve an adequate lung deposition.10,17,43,45,57,82,99–101 There were no major differences between the use of one propellant and another.57 In order to generate the correct inspiratory airflow and lung deposition with this type of inhalation device, it is recommended the patients start breathing from their functional residual capacity, then they should activate the inhalation device and start inhalation using an inspiratory flow rate that is below 60 L/min. Then, at the end of inspiration, patients should hold their breath for around 10 seconds.100 Consequently, patients need a correct inhalation technique and coordination. The K-haler® is triggered by an inspiratory flow rate of approximately 30 L/min.67 Inhaler devices are many times classified as low- (30 L/min or below), medium– (~30–60 L/min), and high-resistance (>60 L/min) devices.10,17 DPIs with low intrinsic resistance include Aerolizer®, Spinhaler®, and Breezhaler®; DPIs with medium resistance include Accuhaler®/Diskhaler®, Genuair®/Novolizer®, and NEXThaler®; DPIs with medium/high resistance include Turbuhaler®; and DPIs with high resistance include Easyhaler®, Handihaler®, and Twisthaler®. The estimated inspiratory flow rates required thus vary across devices, from a minimum of 30 L/min to more than 100 L/min.6,26,32,43,45,101–108 Based on the information presented above, when using a high-resistance DPI, the disaggregation and micro-dispersion of the powdered drug are relatively independent of the patient’s inspiratory effort because the driving force depends on the intrinsic resistance of the DPI itself, which is able to produce the turbulence required for effective drug micro-dispersion. However, when a low-resistance device is used, the only force that can generate turbulence is the patient’s inspiratory airflow, which should be high. 
Finally, the studies showed that the SMI inhalation device uses mechanical energy (from a spring) to generate a fine, slow–moving mist from an aqueous solution, which is independent of the patient’s inspiratory effort. Therefore, the required inspiratory flow rate and/or effort are less relevant than with DPIs.83,88,89,109 Moreover, the inhalation maneuver with SMI is more similar to physiological inhalation. One study observed that drug delivery to the lungs with SMI was more efficient than with pMDIs, even with poor inhalation technique.82 General Conclusions and Recommendations: The experts discussed the results of the reviews, and, based on the evidence, they formulated a series of general conclusions and recommendations that are outlined in Tables 5 and 6. In summary, health professionals involved in the management of COPD patients should be aware of all factors involved in adequate drug distribution when using inhalation devices. Two main objective factors emerged at this point: lung deposition and the required inspiratory flow rate. Both of these factors are highly influenced by patient, COPD, and inhaler device characteristics. 
Moreover, COPD is a heterogeneous and dynamic chronic disease, in which lung deposition and inspiratory flow rates vary across patients and also within the same patient.
Table 5. General Conclusions Regarding Lung Deposition and Inspiratory Flow Rate in Chronic Obstructive Pulmonary Disease
1. The lung deposition profile and required inspiratory flow rate are key factors to be considered when selecting an inhalation device.
2. COPD is a progressive disease with specific pathophysiological features that impact patients’ lung deposition and inspiratory flow rate.
3. In COPD patients, obstruction severity and especially hyperinflation are decisive pathophysiological factors.
4. During the course of COPD, some situations, notably exacerbations, impact the inspiratory flow rate.
5. A homogeneous drug distribution through the airways is essential, not only because of the COPD pathophysiology but also because of the different distribution of cholinergic and β2 receptors.
6. COPD treatment requires inhalation devices capable of delivering particles with an MMAD between 0.5 and 5 µm to achieve high lung deposition.
7. The patients’ ability to perform a correct inhalation maneuver (inspiratory effort, coordination, etc.) is decisive to achieve an adequate inspiratory flow rate and lung deposition.
8. Inhalation maneuvers that are similar to the physiological/standard inspiratory flow are more likely associated with reduced oropharyngeal deposition and therefore increased lung deposition.
9. Inhalation devices present different characteristics that define the required inspiratory flow rate and influence lung deposition.
10. The inspiratory flow rate required for drug dispersion with a given DPI is inversely proportional to the intrinsic resistance of the DPI.
11. The faster the exit speed of the drug delivered from the device (initial acceleration of the inhalation maneuver by the patient or directly by the device), the greater the risk of oropharyngeal deposition and the lower the lung deposition.
12. The SMI requires a low inspiratory flow rate; therefore, compared with other inhaler devices, when performing a correct maneuver, oropharyngeal deposition is lower and lung deposition is higher.
Abbreviations: COPD, chronic obstructive pulmonary disease; MMAD, mass median aerodynamic diameter; µm, micrometer; DPI, dry powder inhaler; SMI, soft mist inhaler.
Table 6. Experts’ Recommendations for the Selection of the Appropriate Inhalation Device in Chronic Obstructive Pulmonary Disease
It is strongly recommended to:
1. Consider COPD pathophysiological aspects as well as patients’ clinical status and disease severity/evolution when selecting an inhalation device.
2. Take into account the specific characteristics of each inhalation device.
3. Assess patients’ ability to perform a correct inhalation maneuver and the specific requirements for each inhalation device.
4. Evaluate patients’ inspiratory flow rate or inspiratory capacity before selecting an inhalation device.
5. Take into account patients’ history of exacerbations or other events that may affect their ability to perform an adequate inhalation.
6. Regularly review patients’ inhalation maneuver and check whether the inhalation device meets their needs.
7. Use an active inhalation device, such as a pMDI or SMI, in patients with reduced inspiratory capacity.
8. Consider using a valved holding chamber with SMI or pMDI devices in fragile patients with inspiratory and/or coordination difficulties.
9. Use inhalation devices that generate a low oropharyngeal and a high lung deposition.
10. Check patients’ inhalation maneuver during every visit and, where necessary, resolve errors or even change the inhaler.
Abbreviations: COPD, chronic obstructive pulmonary disease; pMDI, pressurized metered-dose inhaler; SMI, soft mist inhaler.
Thus, it is strongly recommended that, in addition to the standard variables for COPD, the inspiratory flow rate and the patient’s inspiratory capacity be evaluated on a regular basis, and that the selection of an inhaler device be based on the COPD patient’s features, needs, and clinical situation. This selection should consider the different characteristics of the devices to ensure physicians choose the device that best matches the patient’s needs. Finally, we considered it important to systematically review the patient’s inhalation maneuver110 (see Tables 3 and 6). This should be checked during every visit, so that errors can be resolved and inhalers can be checked and even changed, where necessary. In the same way, before considering a change in the patient’s treatment, possible errors in the inhalation maneuver should be evaluated.
Background: Our aim was to describe: 1) lung deposition and inspiratory flow rate; 2) main characteristics of inhaler devices in chronic obstructive pulmonary disease (COPD). Methods: A systematic literature review (SLR) was conducted to analyze the features and results of inhaler devices in COPD patients. These devices included pressurized metered-dose inhalers (pMDIs), dry powder inhalers (DPIs), and a soft mist inhaler (SMI). Inclusion and exclusion criteria were established, as well as search strategies (Medline, Embase, and the Cochrane Library up to April 2019). In vitro and in vivo studies were included. Two reviewers independently selected articles and collected and analyzed the data. Narrative searches complemented the SLR. We discussed the results of the reviews in a nominal group meeting and agreed on various general principles and recommendations. Results: The SLR included 71 articles, some of which were of low-moderate quality, and there was great variability regarding populations and outcomes. Lung deposition rates varied across devices: 8%-53% for pMDIs, 7%-69% for DPIs, and 39%-67% for the SMI. The aerosol exit velocity was high with pMDIs (more than 3 m/s), while it was much slower (0.72-0.84 m/s) with the SMI. In general, pMDIs produce large-sized particles (1.22-8 μm), DPIs produce medium-sized particles (1.8-4.8 µm), and 60% of the particles reach an aerodynamic diameter <5 μm with the SMI. All inhalation devices reach central and peripheral lung regions, but the SMI distribution pattern might be better compared with pMDIs. DPIs' intrinsic resistance is higher than that of pMDIs and SMI, which are relatively similar and low. Depending on the DPI, the minimum inspiratory flow rate required was 30 L/min. pMDIs and SMI did not require a high inspiratory flow rate. Conclusions: Lung deposition and inspiratory flow rate are key factors when selecting an inhalation device in COPD patients.
Introduction: Chronic obstructive pulmonary disease (COPD) is characterized by a persistent airflow limitation that is usually progressive, according to guidelines from the Global Initiative for Chronic Obstructive Lung Disease (GOLD).1 In recent years, the prevalence of COPD has dramatically increased, growing by 44.2% from 1990 to 2015.2 The impact on patients, society, and health systems is correspondingly huge. More than 3 million people die of COPD worldwide each year, accounting for 6% of all deaths worldwide.3 In 2010, the cost of COPD in the USA was projected to be approximately US $50 billion.4 One of the primary treatment modalities for COPD is medications that are delivered via inhalation devices. Currently, in clinical practice, a variety of devices are available for the treatment of these patients, including pressurized metered-dose inhalers (pMDIs), which are used with or without a valved holding chamber or spacer, as well as dry powder inhalers (DPIs) and the soft mist inhaler (SMI). Inhaler devices vary in several ways, including how the inhaler dispenses the drug, whether the treatment is passively or actively generated (using propellant, mechanical, or compressed air), and the drug’s formulation (solution, dry powder, or mist). The selection of an inhalation device is a key point in COPD because it impacts patient adherence, the drug’s effectiveness, and long–term outcomes.5 A range of studies have assessed which factors/characteristics should be considered when selecting the most appropriate device.6–8 Interestingly, according to many expert opinions, the most important factors involved in achieving optimal disease outcomes are the generation of high lung deposition and correct dispensation with low inspiratory flow rates.9 Other relevant factors include inhalation technique, potential difficulties with the device, and patient preferences. 
On the other hand, data regarding lung deposition and inspiratory flow rates across inhalation devices in COPD patients are usually described and evaluated as absolute, static numbers. However, a theoretical framework and pathophysiological and clinical evidence all suggest that both are influenced by several factors that relate to the patients and their COPD, all of which can change over time.6,10–17 Therefore, analyzing lung deposition and inspiratory flow rates in COPD patients who use inhalation devices requires a more careful, holistic, and dynamic approach. Considering all the aspects described above, we performed a systematic literature review (SLR) and a narrative review to assess lung deposition and inspiratory flow rates, as well as data related to these inhalation devices in COPD patients. Using this information, we propose related conclusions and recommendations that can contribute to the selection of inhalation devices. We are confident that this information will be very useful for health professionals who are involved in the care of patients with COPD. General Conclusions and Recommendations: The choice of inhalation devices for COPD patients depends on a combination of factors, but lung deposition and inspiratory flow rate are key aspects of this selection process. When selecting an inhalation device, all health professionals who are involved in the care of patients with COPD must consider the basis of lung deposition and inspiratory flow rate, among other aspects. The clinician can then select the most adequate inhalation device, depending on the patient, their COPD, and the inhalation device’s characteristics, which will ultimately achieve the maximum lung deposition and distribution.
[ "COPD", "lung deposition", "inspiratory flow", "inhalation devices", "systematic literature review" ]
[ "Administration, Inhalation", "Bronchodilator Agents", "Dry Powder Inhalers", "Equipment Design", "Expert Testimony", "Humans", "Lung", "Metered Dose Inhalers", "Pulmonary Disease, Chronic Obstructive" ]
Vitamin D Deficiency among Patients Visiting a Tertiary Care Hospital: A Descriptive Cross-sectional Study.
34506404
Vitamin D deficiency is a common condition prevalent among both developed and developing countries where it is seen mostly in females. It has been linked to various skeletal and non-skeletal diseases. This study was done to find out the prevalence of Vitamin D deficiency and clinical features of deficient patients attending the outpatient department of a tertiary care hospital.
INTRODUCTION
This descriptive cross-sectional study was done among the patients attending the outpatient department of a tertiary care hospital in Kathmandu, Nepal. The study was conducted from May 2019 to July 2019. The ethical approval was taken from the Institutional Review Committee (ref no. 310520113). Convenient sampling was done. The collected data was entered in Microsoft Excel and was analyzed in the Statistical Package for the Social Sciences (SPSS) version 26.
METHODS
Out of 481 participants, the prevalence of vitamin D deficiency was 335 (69.6%). Severe vitamin D deficiency was seen in 78 (16.2%) and insufficient vitamin D in 77 (16%) of the patients. The mean serum vitamin D concentration by gender was 22.38±17.07 ng/ml in males and 18.89±15.25 ng/ml in females. A total of 263 (54.6%) females and 72 (14.97%) males had vitamin D deficiency. The most common symptoms found in vitamin D deficient patients were fatigue 187 (55.8%), muscle cramps 131 (39.1%), generalized myalgia 125 (37.31%), and bone and joint pain 111 (33.13%).
RESULTS
Vitamin D deficiency was prevalent especially in females and elderly people. Fatigability was present in more than half of the vitamin D deficient patients.
CONCLUSIONS
[ "Aged", "Cross-Sectional Studies", "Female", "Humans", "Male", "Prevalence", "Tertiary Care Centers", "Vitamin D", "Vitamin D Deficiency" ]
7775021
INTRODUCTION
Vitamin D is a fat-soluble prohormone, which is involved in the regulation of physiological processes.1 It is found in two forms: ergocalciferol (Vitamin D2), found in plants and fungi, and cholecalciferol (Vitamin D3), from the sun.2 Vitamin D deficiency is defined as a “25-hydroxyvitamin D level of less than 20 ng per milliliter (50 nmol per liter)”.3,4 It has been estimated that around one billion people have Vitamin D deficiency or insufficiency.5 Vitamin D deficiency is highly prevalent in developed countries and in the regions of Asia, the Middle East, and India, mostly in women.6 According to several studies, 40-100% of the US and European population are deficient in Vitamin D.7,8 The factors causing Vitamin D deficiency could be changes in lifestyle based on socio-cultural practice, inadequate sun exposure, and consumption of food that is rarely fortified with Vitamin D.9 Its deficiency has been linked to different musculoskeletal and non-skeletal complications like congestive heart failure, peripheral vascular disease, hypertension, and diabetes mellitus.10 So, it is important to know the burden of Vitamin D deficiency for better patient management. The present study is done to find out the occurrence of vitamin D deficiency and the frequency of clinical features of vitamin D deficient patients.
METHODS
This descriptive cross-sectional study was done among patients attending the outpatient department (OPD) and general health checkup of Kathmandu Medical College and Teaching Hospital, Kathmandu, Nepal. The study was conducted from May 2019 to July 2019. The ethical approval was taken from the Institutional Review Committee of the Kathmandu Medical College and Teaching Hospital (ref no. 310520113). Patients visiting the OPDs were included in the study. Patients under the age of 15 years, those with chronic kidney disease, and those on medication that affects bone metabolism, like phenobarbital, anti-tubercular drugs, thiazides, antiretrovirals, glucocorticoids, and Vitamin D treatment for Vitamin D deficiency, were excluded from the study. Informed written consent was taken from the participants. Convenient sampling was done. The sample size was calculated by using the formula n = Z²p(1-p)/e², where n = required sample size, Z = 1.96 at 95% Confidence Interval, p = population proportion (50%), and e = margin of error (7%). As patients were enrolled using convenient sampling, we doubled the size and, considering a 20% non-respondent rate, a total sample of 481 patients was taken for the measurement of vitamin D. Data were collected in a semi-structured questionnaire regarding demographics; clinical features like fatigue, body ache, muscle cramps, joint pain, and low backache; and medical history of diabetes, hypertension, chronic kidney disease, and ischemic heart disease (IHD). Body mass index was used to define weight status. Serum Vitamin D3, which is 25-hydroxyvitamin D [25(OH)D], was estimated by a fully automated chemiluminescence immunoassay. Vitamin D deficiency was defined as 25(OH)D less than 20 ng/ml, Vitamin D insufficiency as 20-29 ng/ml, Vitamin D sufficiency as ≥30 ng/ml, and Vitamin D toxicity as more than 100 ng/ml.
Vitamin D levels less than 10 ng/ml were regarded as a severe deficiency.2,11 The data collected were entered in Microsoft Excel and analyzed in the IBM Statistical Package for the Social Sciences (SPSS) version 26. Demographic data and clinical variables were analyzed by descriptive analysis. Results are expressed as mean ± standard deviation for quantitative variables and as percentages for qualitative variables like symptoms.
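As a quick illustration of the sample-size arithmetic and the serum 25(OH)D cut-offs described above, a minimal sketch (the function names are mine, not from the study; the exact adjustment from the doubled size of 392 to the final 481 is not fully spelled out in the text):

```python
def cochran_sample_size(z=1.96, p=0.50, e=0.07):
    # n = Z^2 * p * (1 - p) / e^2 with the study's stated parameters
    return z ** 2 * p * (1 - p) / e ** 2

def classify_25ohd(level_ng_ml):
    # Cut-offs used in the study (ng/ml): severe <10, deficiency <20,
    # insufficiency 20-29, sufficiency >=30, toxicity >100
    if level_ng_ml > 100:
        return "toxicity"
    if level_ng_ml >= 30:
        return "sufficiency"
    if level_ng_ml >= 20:
        return "insufficiency"
    if level_ng_ml >= 10:
        return "deficiency"
    return "severe deficiency"

n = cochran_sample_size()   # 196.0 participants
doubled = 2 * n             # 392.0, before the 20% non-respondent adjustment
print(round(n), round(doubled), classify_25ohd(25))  # 196 392 insufficiency
```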
RESULTS
A total of 481 participants were included in the study. Overall, vitamin D deficiency was found in 335 (69.6%) (65.49-73.71 at 95% Confidence Interval). Insufficiency was seen in 77 (16%) patients. Among the participants, 69 (14.3%) had sufficient vitamin D levels (Table 1). The total mean age of the participants was 40.5±14.4 years (39.64±14.10 for females and 43.59±15.13 for males). The majority were females, with a female:male ratio of 3.3:1. The study included 111 (23.1%) males and 370 (76.1%) females. The total mean serum vitamin D was 19.69±13.68 ng/ml. The mean serum vitamin D concentration by gender was 22.38±17.07 ng/ml in males and 18.89±15.25 ng/ml in females. In the study, out of 481 participants, 78 (16.2%) had severe vitamin D deficiency. Among severely vitamin D deficient patients, 59 (75.6%) were symptomatic. The most common symptoms among them were fatigability 44 (56.4%), generalized myalgia 41 (52.6%), bone and joint pain 35 (44.9%), and muscle cramps 37 (47.4%). Among 257 mild vitamin D deficient patients, 169 (65.8%) had symptoms. The symptoms among them were fatigability 143 (55.6%), muscle cramps 94 (36.6%), generalized myalgia 84 (32.7%), and bone and joint pain 76 (29.6%). Overall, 227 (68.05%) of the vitamin D deficient patients (including mild and severe deficiency) were symptomatic. The symptoms among the vitamin D deficient were fatigability 187 (55.8%), muscle cramps 131 (39.1%), generalized myalgia 125 (37.31%), and bone and joint pain 111 (33.13%). The symptoms were predominantly seen in patients with severe deficiency. Some patients with severe deficiency also had a history of hair fall (Table 2). A total of 263 (54.6%) females and 72 (14.97%) males had vitamin D deficiency (Table 3).
Seventy-six (22.7%) of the patients with vitamin D deficiency were in the 66-75 age group, followed by the 36-45 age group with 68 (20.3%) patients, the 26-35 age group with 63 (18.8%) patients, and the 46-55 age group with 57 (17%) patients. The age group with the fewest patients with vitamin D deficiency was the 15-25 age group, with 9 (1.87%).
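The 95% confidence interval reported above for the 69.6% prevalence (65.49-73.71) is consistent with a normal-approximation (Wald) interval for 335/481; a hedged sketch (the helper name is mine, and small rounding differences from the published bounds are expected):

```python
import math

def wald_ci_percent(successes, n, z=1.96):
    # Normal-approximation (Wald) 95% CI for a proportion, in percent
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return 100 * (p - half_width), 100 * (p + half_width)

low, high = wald_ci_percent(335, 481)  # prevalence of deficiency: 335/481
print(f"{low:.2f}-{high:.2f}")  # ~65.54-73.76, close to the reported interval
```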
CONCLUSIONS
Vitamin D deficiency was prevalent especially in females and elderly people. Fatigability was present in more than half of the vitamin D deficient patients. Muscle cramps and generalized myalgia were present in more than one third and bone and joint pain in almost one third of the vitamin D deficient patients.
DISCUSSION
The results of this cross-sectional study done in a tertiary care hospital of Kathmandu, Nepal showed the prevalence of Vitamin D deficiency as 69.6%, of insufficiency as 16%, and sufficient Vitamin D in 14.3%. The prevalence was higher among older ages and females. A severe deficiency was seen in 16.2% of the studied population. The rates of Vitamin D deficiency found in this study are markedly higher than in many western countries like Germany, Austria, and the Netherlands, in North Europe (Denmark, Finland, Ireland, and Poland), Canada, and the United Kingdom, which have shown a prevalence of vitamin D deficiency from 10-55.5%.11,12 Mariam Omar et al. reported a deficiency of 76.1% and insufficiency of 15.2% among the population of Benghazi, a sunny second-largest city in the east of Libya.2 Our results share a similar vitamin D deficiency status with some parts of Africa, Asia, and the Middle East. The prevalence of vitamin D deficiency in Egypt was 77%, insufficiency was 15%, and 9% of the population had sufficient Vitamin D levels. In Qatar, 83-91% of the population is deficient in Vitamin D.2,13 Vitamin D deficiency is considered to be a public health problem worldwide. Female gender is one of the most important predictors of vitamin D deficiency.2 In this study, 23.1% of the participants were males and 76.1% were females. Out of the studied population, 54.7% of the females and 15% of the males had vitamin D deficiency. This finding of increased prevalence in females is comparable to other studies and can be due to a sedentary lifestyle and aggressive sun protection. The greater participation of females in the studies may be due to the greater willingness of females to use health services.2 Babita Ghai et al. reported 73% of the females to be vitamin D deficient.14 In contrast, Manoharan et al. studied the vitamin D status among people of Tamil Nadu and reported that 46% of the males and 37% of the females had a vitamin D deficiency.15 Vitamin D deficiency was seen in 22.7% of patients in the 66-75 age group, 20.3% in the 36-45 age group, and 17% in the 46-55 age group. Various studies report a similar observation by demonstrating lower vitamin D levels with increasing age and higher vitamin D deficiency in older age groups, mandating early investigation and thus helping to prevent falls and fractures.2,16 In the study, it was found that among the 335 vitamin D deficient patients, 228 (68.05%) of the participants had symptoms. The most common symptoms that led the participants to seek health services were fatigue, generalized myalgia, and bone pain. Fatigability was seen in 55.8%, muscle cramps in 39.1%, generalized myalgia in 37.31%, and bone and joint pain in 33.13% of the vitamin D deficient patients. The symptoms were predominantly seen in patients with severe deficiency. Lubna M et al., in their study, showed that the most common reason for requesting a vitamin D level included generalized myalgia and bone pain in 51% of the patients.17 Satyajeet Roy et al. showed the prevalence of low vitamin D in inpatients as 77.2%, with fatigue and fatigability normalized after correction of the vitamin D level.18 Our study has demonstrated a high prevalence of vitamin D deficiency among patients visiting our center. A larger multicentric or community-based study with a diverse sample population should be conducted in the future to find a more accurate prevalence. Similarly, other studies that further look into the association of gender, age, and other comorbidities with vitamin D levels in Nepalese are warranted.
[ "\nNepal\n", "\nprevalence\n", "\nvitamin D deficiency\n" ]
INTRODUCTION: Vitamin D is a fat-soluble prohormone involved in the regulation of physiological processes.1 It is found in two forms: ergocalciferol (Vitamin D2), found in plants and fungi, and cholecalciferol (Vitamin D3), synthesized in the skin on exposure to the sun.2 Vitamin D deficiency is defined as a "25-hydroxyvitamin D level of less than 20ng per milliliter (50nmol per liter)".3,4 It has been estimated that around one billion people have Vitamin D deficiency or insufficiency.5 Vitamin D deficiency shows a high prevalence in developed countries and in the regions of Asia, the Middle East, and India, mostly in women.6 According to several studies, 40-100% of the US and European populations are deficient in Vitamin D.7,8 The factors causing Vitamin D deficiency include lifestyle changes based on socio-cultural practices, inadequate sun exposure, and consumption of food that is rarely fortified with Vitamin D.9 Its deficiency has been linked to different musculoskeletal and non-skeletal complications such as congestive heart failure, peripheral vascular disease, hypertension, and diabetes mellitus.10 So, it is important to know the burden of Vitamin D deficiency for better patient management. The present study was done to find out the occurrence of vitamin D deficiency and the frequency of clinical features of vitamin D deficient patients. METHODS: This descriptive cross-sectional study was done among patients attending the outpatient department (OPD) and general health checkup of Kathmandu Medical College and Teaching Hospital, Kathmandu, Nepal. The study was conducted from May 2019 to July 2019. Ethical approval was taken from the Institutional Review Committee of the Kathmandu Medical College and Teaching Hospital (ref no. 310520113). Patients visiting the OPDs were included in the study. 
Patients under the age of 15 years, those with chronic kidney disease, and those on medication that affects bone metabolism, like phenobarbital, anti-tubercular drugs, thiazides, antiretrovirals, glucocorticoids, and Vitamin D treatment for Vitamin D deficiency, were excluded from the study. Informed written consent was taken from the participants. Convenience sampling was done. The sample size was calculated using the formula n = Z²p(1-p)/e², where n = required sample size, Z = 1.96 at 95% Confidence Interval, p = population proportion (50%), and e = margin of error (7%). As patients were enrolled using convenience sampling, we doubled the size and, considering a 20% non-response rate, a total sample of 481 patients was taken for measurement of vitamin D. Data were collected in a semi-structured questionnaire regarding demographics, clinical features like fatigue, body ache, muscle cramps, joint pain, and low backache, and medical history of diabetes, hypertension, chronic kidney disease, and ischemic heart disease (IHD). Body mass index was used to define weight status. Serum Vitamin D3, that is 25-hydroxyvitamin D [25(OH)D], was estimated by a fully-automated chemiluminescence immunoassay. Vitamin D deficiency was defined as 25(OH)D less than 20ng/ml, Vitamin D insufficiency as 20-29ng/ml, Vitamin D sufficiency as ≥30ng/ml, and Vitamin D toxicity as more than 100ng/ml. Vitamin D levels less than 10ng/ml were regarded as severe deficiency.2,11 The data collected were entered in Microsoft Excel and analyzed in the IBM Statistical Package for the Social Sciences (SPSS) version 26. Demographic data and clinical variables were analyzed by descriptive analysis. Results are expressed as mean ± standard deviation for quantitative variables and percentage for qualitative variables like symptoms. RESULTS: A total of 481 participants were included in the study. 
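The sample-size arithmetic in the Methods follows the standard single-proportion formula the text describes, n = Z²p(1-p)/e². A minimal sketch of that calculation (the helper name is ours, not the paper's):

```python
def required_sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.07) -> float:
    """Single-proportion sample size: n = Z^2 * p * (1 - p) / e^2."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

# Z = 1.96 (95% CI), p = 50%, e = 7%, as stated in the Methods.
n = required_sample_size()
print(round(n))  # -> 196
```

The study reports doubling this base figure and allowing for a 20% non-response rate to arrive at the 481 patients enrolled.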
Overall, vitamin D deficiency was found in 335 (69.6%) (65.49-73.71 at 95% Confidence Interval). Insufficiency was seen in 77 (16%) patients. Among the participants, 69 (14.3%) had sufficient vitamin D levels (Table 1). The total mean age of the participants was 40.5±14.4 years (39.64±14.10 for females and 43.59±15.13 for males). The majority were females, with a female:male ratio of 3.3:1. The study included 111 (23.1%) males and 370 (76.9%) females. The total mean serum vitamin D was 19.69±13.68ng/ml. The mean serum vitamin D concentration by gender was 22.38±17.07ng/ml in males and 18.89±15.25ng/ml in females. Out of the 481 participants, 78 (16.2%) had severe vitamin D deficiency. Among the severely vitamin D deficient patients, 59 (75.6%) were symptomatic. The most common symptoms among them were fatigability 44 (56.4%), generalized myalgia 41 (52.6%), bone and joint pain 35 (44.9%), and muscle cramps 37 (47.4%). Among the 257 mildly vitamin D deficient patients, 169 (65.8%) had symptoms. The symptoms among them were fatigability 143 (55.6%), muscle cramps 94 (36.6%), generalized myalgia 84 (32.7%), and bone and joint pain 76 (29.6%). Overall, 228 (68.05%) of the vitamin D deficient patients (comprising mild and severe vitamin D deficiency) were symptomatic. The symptoms among the vitamin D deficient patients were fatigability 187 (55.8%), muscle cramps 131 (39.1%), generalized myalgia 125 (37.31%), and bone and joint pain 111 (33.13%). The symptoms were predominantly seen in patients with severe deficiency. Some patients with severe deficiency also had a history of hair fall (Table 2). A total of 263 (54.6%) females and 72 (14.97%) males had vitamin D deficiency (Table 3). Seventy-six (22.7%) of the patients with vitamin D deficiency were in the 66-75 age group, followed by the 36-45 age group with 68 (20.3%) patients, the 26-35 age group with 63 (18.8%) patients, and the 46-55 age group with 57 (17%) patients. 
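The 95% confidence interval quoted for the 69.6% prevalence (65.49-73.71) is consistent with a normal-approximation (Wald) interval for a proportion; a minimal sketch, assuming that method (the function name is ours):

```python
import math

def wald_ci_percent(p_hat: float, n: int, z: float = 1.96):
    """Normal-approximation (Wald) confidence interval for a proportion, in percent."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return 100 * (p_hat - margin), 100 * (p_hat + margin)

low, high = wald_ci_percent(0.696, 481)  # 335/481, rounded to 69.6%
print(round(low, 2), round(high, 2))  # -> 65.49 73.71
```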
The age group with the least number of patients with vitamin D deficiency was the 15-25 age group, with 9 (1.87%). DISCUSSION: The results of this cross-sectional study done in a tertiary care hospital of Kathmandu, Nepal showed the prevalence of Vitamin D deficiency as 69.6%, of insufficiency as 16%, and of sufficient Vitamin D as 14.3%. The prevalence was higher among older ages and females. A severe deficiency was seen in 16.2% of the studied population. The rates of Vitamin D deficiency found in this study are markedly higher than in many western countries such as Germany, Austria, and the Netherlands, in Northern Europe (Denmark, Finland, Ireland, and Poland), Canada, and the United Kingdom, which have reported a prevalence of vitamin D deficiency of 10-55.5%.11,12 Mariam Omar et al. reported a deficiency of 76.1% and insufficiency of 15.2% among the population of Benghazi, a sunny city and the second-largest in eastern Libya.2 Our results share a similar vitamin D deficiency status with some parts of Africa, Asia, and the Middle East. The prevalence of vitamin D deficiency in Egypt was 77%, insufficiency was 15%, and 9% of the population had sufficient Vitamin D levels. In Qatar, 83-91% of the population is deficient in Vitamin D.2,13 Vitamin D deficiency is considered to be a public health problem worldwide. Female gender is one of the most important predictors of vitamin D deficiency.2 In this study, 23.1% of the participants were males and 76.9% were females. Out of the studied population, 54.7% of the females and 15% of the males had vitamin D deficiency. This finding of increased prevalence in females is comparable to other studies and may be due to a sedentary lifestyle and aggressive sun protection. The greater participation of females in the studies may be due to the greater willingness of females to use health services.2 Babita Ghai et al. reported 73% of females to be vitamin D deficient.14 In contrast, Manoharan et al. 
studied the vitamin D status among people of Tamil Nadu and reported that 46% of the males and 37% of the females had a vitamin D deficiency.15 Vitamin D deficiency was seen in 22.7% of patients in the 66-75 age group, 20.3% in the 36-45 age group, and 17% in the 46-55 age group. Various studies report a similar observation, demonstrating lower vitamin D levels with increasing age and higher vitamin D deficiency states in the older age group, mandating early investigation to help prevent falls and fractures.2,16 In the study, it was found that among the 335 vitamin D deficient patients, 228 (68.05%) had symptoms. The most common symptoms that led the participants to seek health services were fatigue, generalized myalgia, and bone pain. Fatigability was seen in 55.8%, muscle cramps in 39.1%, generalized myalgia in 37.31%, and bone and joint pain in 33.13% of the vitamin D deficient patients. The symptoms were predominantly seen in patients with severe deficiency. Lubna M et al, in their study, showed that the most common reason for requesting a vitamin D level included generalized myalgia and bone pain in 51% of the patients.17 Satyajeet Roy et al. showed the prevalence of low vitamin D as 77.2% in patients with fatigue, and fatigability normalized after correction of the vitamin D level.18 Our study has demonstrated a high prevalence of vitamin D deficiency among patients visiting our center. A larger multicentric or community-based study with a diverse sample population should be conducted in the future to find out a more accurate prevalence. Similarly, further studies looking into the association of gender, age, and other comorbidities with vitamin D levels in Nepalese people are warranted. CONCLUSIONS: Vitamin D deficiency was prevalent especially in females and elderly people. Fatigability was present in more than half of the vitamin D deficient patients. 
Muscle cramps and generalized myalgia were present in more than one third and bone and joint pain in almost one third of the vitamin D deficient patients.
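The 25(OH)D cut-offs defined in the Methods above (severe deficiency <10ng/ml, deficiency <20ng/ml, insufficiency 20-29ng/ml, sufficiency ≥30ng/ml, toxicity >100ng/ml) can be expressed as a small classifier; an illustrative sketch, not part of the study's analysis:

```python
def classify_25ohd(ng_ml: float) -> str:
    """Map a serum 25(OH)D value (ng/ml) to the study's categories."""
    if ng_ml > 100:
        return "toxicity"
    if ng_ml >= 30:
        return "sufficiency"
    if ng_ml >= 20:
        return "insufficiency"
    if ng_ml >= 10:
        return "deficiency"
    return "severe deficiency"

print(classify_25ohd(8))   # -> severe deficiency
print(classify_25ohd(25))  # -> insufficiency
```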
Background: Vitamin D deficiency is a common condition prevalent in both developed and developing countries, where it is seen mostly in females. It has been linked to various skeletal and non-skeletal diseases. This study was done to find out the prevalence of Vitamin D deficiency and the clinical features of deficient patients attending the outpatient department of a tertiary care hospital. Methods: This descriptive cross-sectional study was done among the patients attending the outpatient department of a tertiary care hospital in Kathmandu, Nepal. The study was conducted from May 2019 to July 2019. Ethical approval was taken from the Institutional Review Committee (ref no. 310520113). Convenience sampling was done. The collected data was entered in Microsoft Excel and analyzed in the Statistical Package for the Social Sciences (SPSS) version 26. Results: Out of 481 participants, vitamin D deficiency was found in 335 (69.6%). Severe vitamin D deficiency was seen in 78 (16.2%) and insufficient vitamin D in 77 (16%) of the patients. The mean serum vitamin D concentration by gender was 22.38±17.07 ng/ml in males and 18.89±15.25 ng/ml in females. A total of 263 (54.6%) females and 72 (14.97%) males had vitamin D deficiency. The most common symptoms found in vitamin D deficient patients were fatigue 187 (55.8%), muscle cramps 131 (39.1%), generalized myalgia 125 (37.31%), and bone and joint pain 111 (33.13%). Conclusions: Vitamin D deficiency was prevalent especially in females and elderly people. Fatigability was present in more than half of the vitamin D deficient patients.
INTRODUCTION: Vitamin D is a fat-soluble prohormone involved in the regulation of physiological processes.1 It is found in two forms: ergocalciferol (Vitamin D2), found in plants and fungi, and cholecalciferol (Vitamin D3), synthesized in the skin on exposure to the sun.2 Vitamin D deficiency is defined as a "25-hydroxyvitamin D level of less than 20ng per milliliter (50nmol per liter)".3,4 It has been estimated that around one billion people have Vitamin D deficiency or insufficiency.5 Vitamin D deficiency shows a high prevalence in developed countries and in the regions of Asia, the Middle East, and India, mostly in women.6 According to several studies, 40-100% of the US and European populations are deficient in Vitamin D.7,8 The factors causing Vitamin D deficiency include lifestyle changes based on socio-cultural practices, inadequate sun exposure, and consumption of food that is rarely fortified with Vitamin D.9 Its deficiency has been linked to different musculoskeletal and non-skeletal complications such as congestive heart failure, peripheral vascular disease, hypertension, and diabetes mellitus.10 So, it is important to know the burden of Vitamin D deficiency for better patient management. The present study was done to find out the occurrence of vitamin D deficiency and the frequency of clinical features of vitamin D deficient patients. CONCLUSIONS: Vitamin D deficiency was prevalent especially in females and elderly people. Fatigability was present in more than half of the vitamin D deficient patients. Muscle cramps and generalized myalgia were present in more than one third and bone and joint pain in almost one third of the vitamin D deficient patients.
Background: Vitamin D deficiency is a common condition prevalent in both developed and developing countries, where it is seen mostly in females. It has been linked to various skeletal and non-skeletal diseases. This study was done to find out the prevalence of Vitamin D deficiency and the clinical features of deficient patients attending the outpatient department of a tertiary care hospital. Methods: This descriptive cross-sectional study was done among the patients attending the outpatient department of a tertiary care hospital in Kathmandu, Nepal. The study was conducted from May 2019 to July 2019. Ethical approval was taken from the Institutional Review Committee (ref no. 310520113). Convenience sampling was done. The collected data was entered in Microsoft Excel and analyzed in the Statistical Package for the Social Sciences (SPSS) version 26. Results: Out of 481 participants, vitamin D deficiency was found in 335 (69.6%). Severe vitamin D deficiency was seen in 78 (16.2%) and insufficient vitamin D in 77 (16%) of the patients. The mean serum vitamin D concentration by gender was 22.38±17.07 ng/ml in males and 18.89±15.25 ng/ml in females. A total of 263 (54.6%) females and 72 (14.97%) males had vitamin D deficiency. The most common symptoms found in vitamin D deficient patients were fatigue 187 (55.8%), muscle cramps 131 (39.1%), generalized myalgia 125 (37.31%), and bone and joint pain 111 (33.13%). Conclusions: Vitamin D deficiency was prevalent especially in females and elderly people. Fatigability was present in more than half of the vitamin D deficient patients.
1,952
312
[]
5
[ "vitamin", "deficiency", "vitamin deficiency", "patients", "study", "females", "deficient", "vitamin deficient", "participants", "population" ]
[ "vitamin deficient fatigability", "deficient vitamin factors", "mild vitamin deficient", "prevalence vitamin deficiency", "vitamin deficiency prevalent" ]
[CONTENT] Nepal | prevalence | vitamin D deficiency [SUMMARY]
[CONTENT] Nepal | prevalence | vitamin D deficiency [SUMMARY]
[CONTENT] Nepal | prevalence | vitamin D deficiency [SUMMARY]
[CONTENT] Nepal | prevalence | vitamin D deficiency [SUMMARY]
[CONTENT] Nepal | prevalence | vitamin D deficiency [SUMMARY]
[CONTENT] Nepal | prevalence | vitamin D deficiency [SUMMARY]
[CONTENT] Aged | Cross-Sectional Studies | Female | Humans | Male | Prevalence | Tertiary Care Centers | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] Aged | Cross-Sectional Studies | Female | Humans | Male | Prevalence | Tertiary Care Centers | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] Aged | Cross-Sectional Studies | Female | Humans | Male | Prevalence | Tertiary Care Centers | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] Aged | Cross-Sectional Studies | Female | Humans | Male | Prevalence | Tertiary Care Centers | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] Aged | Cross-Sectional Studies | Female | Humans | Male | Prevalence | Tertiary Care Centers | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] Aged | Cross-Sectional Studies | Female | Humans | Male | Prevalence | Tertiary Care Centers | Vitamin D | Vitamin D Deficiency [SUMMARY]
[CONTENT] vitamin deficient fatigability | deficient vitamin factors | mild vitamin deficient | prevalence vitamin deficiency | vitamin deficiency prevalent [SUMMARY]
[CONTENT] vitamin deficient fatigability | deficient vitamin factors | mild vitamin deficient | prevalence vitamin deficiency | vitamin deficiency prevalent [SUMMARY]
[CONTENT] vitamin deficient fatigability | deficient vitamin factors | mild vitamin deficient | prevalence vitamin deficiency | vitamin deficiency prevalent [SUMMARY]
[CONTENT] vitamin deficient fatigability | deficient vitamin factors | mild vitamin deficient | prevalence vitamin deficiency | vitamin deficiency prevalent [SUMMARY]
[CONTENT] vitamin deficient fatigability | deficient vitamin factors | mild vitamin deficient | prevalence vitamin deficiency | vitamin deficiency prevalent [SUMMARY]
[CONTENT] vitamin deficient fatigability | deficient vitamin factors | mild vitamin deficient | prevalence vitamin deficiency | vitamin deficiency prevalent [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | study | females | deficient | vitamin deficient | participants | population [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | study | females | deficient | vitamin deficient | participants | population [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | study | females | deficient | vitamin deficient | participants | population [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | study | females | deficient | vitamin deficient | participants | population [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | study | females | deficient | vitamin deficient | participants | population [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | study | females | deficient | vitamin deficient | participants | population [SUMMARY]
[CONTENT] vitamin | vitamin deficiency | deficiency | sun | found | deficient | liter | linked different musculoskeletal non | linked different musculoskeletal | like congestive [SUMMARY]
[CONTENT] vitamin | ml | ml vitamin | size | sample | data | variables | medical | sample size | taken [SUMMARY]
[CONTENT] vitamin | patients | aged | participants | aged groups | groups | deficiency | symptoms | females | total [SUMMARY]
[CONTENT] present | vitamin | deficient | deficient patients | vitamin deficient patients | vitamin deficient | patients muscle cramps generalized | fatigability present half | elderly | muscle cramps generalized myalgia [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | deficient | females | vitamin deficient | study | vitamin deficient patients | deficient patients [SUMMARY]
[CONTENT] vitamin | deficiency | vitamin deficiency | patients | deficient | females | vitamin deficient | study | vitamin deficient patients | deficient patients [SUMMARY]
[CONTENT] Vitamin ||| ||| Vitamin D | tertiary [SUMMARY]
[CONTENT] tertiary | Kathmandu | Nepal ||| May 2019 to July 2019 ||| the Institutional Review Committee | ref no. ||| ||| Microsoft Excel | the Statistical Package | the Social Sciences | SPSS | 26 [SUMMARY]
[CONTENT] 481 | 335 | 69.6% ||| 16.2% | 77 | 16% ||| 22.38±17.07 ng/ml | 18.89±15.25 ng/ml ||| 263 | 54.6% | 72 | 14.97% ||| 187(55.8% | 131(39.1% | 125(37.31% | 111(33.13% [SUMMARY]
[CONTENT] Vitamin ||| more than half [SUMMARY]
[CONTENT] Vitamin ||| ||| Vitamin D | tertiary ||| tertiary | Kathmandu | Nepal ||| May 2019 to July 2019 ||| the Institutional Review Committee | ref no. ||| ||| Microsoft Excel | the Statistical Package | the Social Sciences | SPSS | 26 ||| 481 | 335 | 69.6% ||| 16.2% | 77 | 16% ||| 22.38±17.07 ng/ml | 18.89±15.25 ng/ml ||| 263 | 54.6% | 72 | 14.97% ||| 187(55.8% | 131(39.1% | 125(37.31% | 111(33.13% ||| Vitamin ||| more than half [SUMMARY]
[CONTENT] Vitamin ||| ||| Vitamin D | tertiary ||| tertiary | Kathmandu | Nepal ||| May 2019 to July 2019 ||| the Institutional Review Committee | ref no. ||| ||| Microsoft Excel | the Statistical Package | the Social Sciences | SPSS | 26 ||| 481 | 335 | 69.6% ||| 16.2% | 77 | 16% ||| 22.38±17.07 ng/ml | 18.89±15.25 ng/ml ||| 263 | 54.6% | 72 | 14.97% ||| 187(55.8% | 131(39.1% | 125(37.31% | 111(33.13% ||| Vitamin ||| more than half [SUMMARY]
Down-regulated IL36RN expression based on peripheral blood mononuclear cells and plasma of periodontitis patients and its clinical significance.
34272761
The role of the IL-36 receptor antagonist (IL36RN), a mutated gene of the IL-36 family, in the peripheral blood mononuclear cells (PBMC) and plasma of periodontitis patients remains undetermined.
BACKGROUND
Our study discovered IL36RN expression through GEO public databases and further validated it in the PBMC and plasma of periodontitis patients and healthy participants. A total of 194 participants, consisting of 97 cases of periodontitis and 97 healthy controls, were retrospectively evaluated, and the gene enrichment pathways and clinical significance of IL36RN expression, accompanied by three different cytokines, were explored. Furthermore, the clinical significance of IL36RN was evaluated in mild-to-severe periodontitis patients by the receiver operating characteristic (ROC) curve using the area under the curve (AUC).
MATERIALS AND METHODS
IL36RN expression was notably down-regulated in the PBMC and plasma of periodontitis patients. Further, a positive correlation of IL36RN expression was significantly observed between the PBMC and plasma of periodontitis patients, while IL36RN expression was negatively correlated to three different serum-based cytokines in periodontitis patients. Meanwhile, the ROC-AUCs reached a significantly high range of 0.80 to 0.87 with the PBMC of mild-to-severe and moderate-to-severe periodontitis patients, whereas the plasma of similar patients obtained a significant AUC range of 0.73 to 0.83.
RESULTS
IL36RN can be distinctly detected in the PBMC and plasma of periodontitis patients and can act as a down-regulated mutated gene that might play an effective role in causing periodontitis. IL36RN may be involved, along with other inflammatory cytokines, in the pathogenesis of periodontitis.
CONCLUSION
[ "Adult", "Biomarkers", "Case-Control Studies", "Cytokines", "Down-Regulation", "Female", "Humans", "Interleukins", "Leukocytes, Mononuclear", "Male", "Middle Aged", "Periodontitis", "Retrospective Studies" ]
8418502
INTRODUCTION
Periodontitis, common but largely preventable, is a chronic and multifactorial inflammatory disease that damages the supporting soft tissue and bone of the teeth.1, 2 Periodontitis may last for several months or years. The interaction between periodontal pathogens and host inflammatory and immune responses is involved in the pathogenesis of periodontitis.2 Certain diseases, such as Crohn's disease,3 asthma,4 rheumatoid arthritis,5 and diabetes mellitus,6 have been reported to increase the risk of periodontitis. Complicated immune responses in the body might play a critical role in the progression of tissue damage in periodontitis.7, 8 Interleukin‐36 (IL‐36), a member of the interleukin‐1 (IL‐1) superfamily, has subfamily members known as three agonists (IL‐36α, IL‐36β, and IL‐36γ) and two antagonists (the interleukin‐36 receptor antagonist (IL‐36Ra) and IL‐38).9 IL‐36Ra is an anti‐inflammatory mediator responsible for the tight regulation of IL‐36 signaling.9 The IL36RN gene encodes IL‐36Ra. Various inflammatory diseases, such as inflammatory skin disorders, Crohn's disease, and rheumatoid arthritis, have been increasingly connected with IL‐36 related cytokines.10, 11, 12 In periodontitis, IL‐1, IL‐6, IL‐17A, and tumor necrosis factor‐α (TNF‐α), known as pro‐inflammatory cytokines, mediate the body's immune responses to oral bacteria, specifically Porphyromonas gingivalis.13 Kübra et al. reported that active periodontal disease may cause downregulation of inflammasome regulators, which may increase the activity of IL‐1β in periodontal diseases including periodontitis.14 Alexandra et al. found that IL‐36γ could be a key inflammatory player in periodontitis and its associated alveolar bone resorption and could be a therapeutic target.2 Patrick R. et al. 
examined the serum, saliva, gingival crevicular fluid (GCF), and gingival biopsies of patients suffering from inflammatory periodontal disease and found elevated levels of IL‐35.15 In this context, several interleukins might act as mutated genes in periodontitis and may play an important role in its occurrence and progression. IL‐36 has been evaluated in diverse inflammatory diseases.16 However, the role of IL36RN, a mutated gene of the IL‐36 family, in the PBMC and plasma of periodontitis patients remains unknown. Our study aims to find IL36RN in the PBMC and plasma of periodontitis patients and its clinical significance.
null
null
RESULTS
Clinical characteristics: A total of 194 participants were enrolled to evaluate IL36RN expression through PBMC and plasma samples of periodontitis (n = 97) and healthy controls (n = 97). Further, the patients and healthy controls were characterized by clinical parameters such as age, gender, bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level. Among both groups, the parameters of bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level were statistically significant (Table 1: Clinical characteristics of the periodontitis patients and healthy controls).
Discovery of IL36RN expression in periodontitis: Bioinformatics tools were used to discover the IL36RN mutated gene between periodontitis patients and healthy controls of the GEO dataset (GSE23586) via heatmap, volcano, and box plot analyses (Figure 1A-C). The heatmap and volcano plot results demonstrated that the IL36RN mutated gene was significantly down-regulated, which was further validated by the GSE23586 dataset of periodontitis and healthy controls as per the box plot (Figure 1C). In the box plot, IL36RN expression was significantly lower in periodontitis patients compared to healthy controls (Figure 1C, p < 0.05). (Figure 1: Discovery of IL36RN in periodontitis patients and healthy controls using GEO database analysis. (A) Volcano map showing significant up- and down-regulated mRNA gene expression through log2-fold change and log10 p-values. (B) Heatmap clustering representing up- and down-regulated expressed genes between the 2 groups using fold change. (C) Comparison of IL36RN expression between the 2 groups. G1: Healthy control; G2: Periodontitis.) Furthermore, the GO database analysis represented the top-20 significant GO enriched pathways associated with the up- and down-regulated genes of the periodontitis and healthy control groups (Figure 2A,B, p < 0.05). Likewise, the top-20 significant KEGG pathways related to the same up- and down-regulated genes were observed through the KEGG pathway database (Figure 3A,B, p < 0.05). (Figure 2: The top-20 GO enrichment pathway analysis of up- and down-regulated genes, shown as bubble plots. Figure 3: The top-20 KEGG pathway analysis of up- and down-regulated genes, shown as bubble plots.)
Validation of IL36RN in PBMC and plasma of periodontitis: IL36RN expression in PBMC and plasma was validated by scatter plots in 97 periodontitis patients and 97 healthy controls. Herein, IL36RN expression was notably down-regulated in the PBMC and plasma of periodontitis patients compared to the healthy control group (Figure 4A,B, p < 0.05). Further, a positive correlation of IL36RN expression was significantly observed between the PBMC and plasma of periodontitis patients (Figure 4C, r = 0.409, p < 0.001). Therefore, the expression level of IL36RN in PBMC was directly correlated to that in the plasma of periodontitis patients. (Figure 4: Validation of IL36RN mRNA expression between healthy control (n = 97) and periodontitis (n = 97) groups. (A) PBMC, (B) Plasma, (C) Correlation-based scatter plot between PBMC and plasma.)
Three different cytokines' expression in PBMC of periodontitis: Serum-based IL-6, TNF-α, and IL-1β, three different cytokines, were demonstrated in the PBMC of periodontitis patients and healthy controls. Overall, these three cytokines were highly expressed in periodontitis patients compared to healthy controls (Figure 5A-C, p < 0.05). Meanwhile, IL36RN expression in PBMC was negatively correlated to these three serum-based cytokines in periodontitis patients (Figure 6A-C, p < 0.05). Thus, IL-6, TNF-α, and IL-1β showed inversely correlated, up-regulated expression, in contrast to IL36RN. (Figure 5: Expression of three different cytokines in serum of healthy control (n = 97) and periodontitis (n = 97) groups. (A) IL-6, (B) TNF-α, (C) IL-1β. Figure 6: Correlation between IL36RN expression and serum levels of three cytokines among periodontitis patients. (A) IL-6, (B) TNF-α, (C) IL-1β.)
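The correlation reported above between PBMC and plasma expression (r = 0.409) is a Pearson coefficient; a minimal sketch of the computation on hypothetical paired values (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired expression values, for illustration only:
pbmc = [1.0, 0.8, 0.6, 0.9, 0.4]
plasma = [0.9, 0.7, 0.5, 1.0, 0.3]
print(round(pearson_r(pbmc, plasma), 3))
```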
Expression of three different cytokines in serum of healthy control (n = 97) and periodontitis (n = 97) groups. (A) IL‐6, (B) TNF‐α, (C) IL‐1β Correlation between IL‐36RN expression and serum levels of three cytokines among periodontitis patients. (A) IL‐6, (B) TNF‐α, (C) IL‐1β Significance of IL36RN in mild, moderate, and severe periodontitis IL36RN expression and its significance were also assessed in the PBMC and plasma samples of mild, moderate, and severe periodontitis patients. Mild periodontitis showed significantly higher IL36RN expression in PBMC and plasma than moderate and severe periodontitis (Figure 7A,B, p < 0.05). Similarly, moderate periodontitis showed higher IL36RN expression than severe periodontitis but lower expression than mild periodontitis (Figure 7A,B, p < 0.05). Severe periodontitis showed the lowest IL36RN expression in both PBMC and plasma among the three groups (Figure 7A,B, p < 0.05). IL36RN mRNA expression in mild (n = 40), moderate (n = 32), and severe (n = 25) periodontitis patients. (A) PBMC, (B) Plasma ROC analysis of PBMC yielded a significantly high AUC of 0.875 (95% confidence interval (CI) = 0.7882‐0.9618, p < 0.05) for mild vs severe periodontitis, and an AUC of 0.805 (95% CI = 0.6847‐0.9253, p < 0.05) for moderate vs severe periodontitis (Figure 8A,B). In plasma, mild vs severe periodontitis yielded an AUC of 0.839 (95% CI = 0.7451‐0.9329, p < 0.05), while moderate vs severe periodontitis obtained a significant AUC of 0.731 (95% CI = 0.6014‐0.8611, p < 0.05) (Figure 8C,D). Thus, IL36RN in PBMC and plasma can accurately distinguish mild or moderate from severe periodontitis and may be of diagnostic value in periodontitis. Diagnostic value of IL36RN for distinguishing mild or moderate patients from severe periodontitis patients.
(A) PBMC of mild vs severe, (B) PBMC of moderate vs severe, (C) Plasma of mild vs severe, (D) Plasma of moderate vs severe
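The reported AUCs have a simple probabilistic reading: for a marker that falls with disease severity, the AUC equals the probability that a randomly chosen milder‐stage sample shows higher IL36RN expression than a randomly chosen severe sample (the Mann‐Whitney interpretation of ROC‐AUC). A minimal sketch of that computation, using hypothetical expression values (not the study's data):

```python
def roc_auc(pos_scores, neg_scores):
    # AUC = P(score from pos_scores > score from neg_scores),
    # ties counted as 0.5 (Mann-Whitney U / (n_pos * n_neg)).
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical relative IL36RN levels (illustration only, not study data):
mild = [1.8, 1.6, 1.2, 1.9]    # higher expression in milder disease
severe = [0.9, 1.3, 0.8, 1.0]  # lower expression in severe disease
auc = roc_auc(mild, severe)    # 15 of 16 pairs ranked correctly -> 0.9375
```

An AUC near 1 means the two severity groups are almost perfectly separable by the marker, which is the sense in which the 0.875 and 0.839 values above support IL36RN's diagnostic value.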
CONCLUSION
IL36RN is distinctively detectable in the PBMC and plasma of periodontitis patients and acts as a down‐regulated mutated gene that might play a role in causing periodontitis. IL36RN may interact with other inflammatory cytokines in the pathogenesis of periodontitis.
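The relative expression values compared throughout rest on the 2^−ΔΔCt quantification described in the RT‐qPCR methods, with U6 as the reference gene. A minimal sketch of that arithmetic, with hypothetical Ct values (illustration only, not the study's data):

```python
def fold_change_ddct(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-delta-delta-Ct method.
    delta-Ct normalizes the target gene to the reference gene (U6 here);
    delta-delta-Ct compares a sample against a calibrator (e.g. a healthy control)."""
    delta_sample = ct_target - ct_ref          # sample delta-Ct
    delta_calibrator = ct_target_cal - ct_ref_cal  # calibrator delta-Ct
    return 2.0 ** -(delta_sample - delta_calibrator)

# Hypothetical Ct values (not study data):
# periodontitis sample: IL36RN Ct 26.0, U6 Ct 18.0 -> delta-Ct 8.0
# healthy calibrator:   IL36RN Ct 24.0, U6 Ct 18.0 -> delta-Ct 6.0
fc = fold_change_ddct(26.0, 18.0, 24.0, 18.0)  # delta-delta-Ct = 2 -> fold change 0.25
```

A fold change below 1 (here 0.25) corresponds to down‐regulation relative to the calibrator, matching the direction of the IL36RN result reported above.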
[ "INTRODUCTION", "Patients", "Inclusion and exclusion criteria", "Inclusion", "Exclusion", "Periodontitis severity criteria", "RNA isolation", "RT‐qPCR", "Determination of cytokines", "Statistical and datasets analysis", "Clinical characteristics", "Discovery of IL36RN expression in periodontitis", "Validation of IL36RN in PBMC and plasma of periodontitis", "Three different cytokines expression in PBMC of periodontitis", "Significance of IL36RN in mild, moderate, and severe periodontitis" ]
[ "Periodontitis, common but largely preventable, is a chronic and multifactorial inflammatory disease that damages the supporting soft tissue and bone of the teeth.1, 2 Periodontitis may persist for several months or years. The interaction between periodontal pathogens and host inflammatory and immune responses is involved in the pathogenesis of periodontitis.2 Certain diseases, such as Crohn's disease,3 asthma,4 rheumatoid arthritis,5 and diabetes mellitus,6 have been reported to increase the risk of periodontitis. Complicated immune responses in the body may play a critical role in the progression of tissue damage in periodontitis.7, 8\n\nInterleukin‐36 (IL‐36), a member of the interleukin‐1 (IL‐1) superfamily, has subfamily members comprising three agonists (IL‐36α, IL‐36β, and IL‐36γ) and two antagonists (the interleukin‐36 receptor antagonist (IL‐36Ra) and IL‐38).9 IL‐36Ra is an anti‐inflammatory mediator responsible for the tight regulation of IL‐36 signaling.9 The IL36RN gene encodes IL‐36Ra. Various inflammatory diseases, such as inflammatory skin disorders, Crohn's disease, and rheumatoid arthritis, have been increasingly linked to IL‐36‐related cytokines.10, 11, 12\n\nIn periodontitis, the pro‐inflammatory cytokines IL‐1, IL‐6, IL‐17A, and tumor necrosis factor‐α (TNF‐α) drive host immune responses to oral bacteria, specifically Porphyromonas gingivalis.13 Kübra et al. reported that active periodontal disease may down‐regulate inflammasome regulators and thereby increase the activity of IL‐1β in periodontal disease, including periodontitis.14 Alexandra et al. found that IL‐36γ could be a key inflammatory player in periodontitis and its associated alveolar bone resorption and could be a therapeutic target.2 Patrick R. et al. 
examined the serum, saliva, gingival crevicular fluid (GCF), and gingival biopsies of patients with inflammatory periodontal disease and found elevated levels of IL‐35.15 In this context, several interleukins might act as mutated genes in periodontitis and may play an important role in its occurrence and progression.\nIL‐36 has been evaluated in diverse inflammatory diseases.16 However, the role of IL36RN, a mutated gene of IL‐36, in the peripheral blood mononuclear cells (PBMC) and plasma of periodontitis patients remains unknown. Our study aims to evaluate IL36RN in the PBMC and plasma of periodontitis patients and its clinical significance.", "A total of 194 cases of periodontitis and healthy control samples with PBMC and plasma were retrospectively collected at the Affiliated Hospital of Beihua University. The patients were recruited from August 2018 to January 2020. All patients signed informed consent during their hospital stay, and the study was authorized by the ethics committee of the Affiliated Hospital of Beihua University and was conducted following the Declaration of Helsinki guidelines (2018080375).", "Inclusion Periodontitis patients: Age ≥35 years old; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; absorption of alveolar bone ≥ degree Ⅰ. 
Healthy control: age >18 years old; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing, and probing depth <3 mm.\nExclusion Patients suffering from systemic diseases such as diabetes and immune dysfunction; patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients with local oral radiotherapy; smoking and excessive drinking; females in pregnancy or lactation.", "Periodontitis patients: Age ≥35 years old; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; absorption of alveolar bone ≥ degree Ⅰ. Healthy control: age >18 years old; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing, and probing depth <3 mm.", "Patients suffered from systemic diseases such as diabetes and immune dysfunction. 
Patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients with local oral radiotherapy; smoking and excessive drinking; females in pregnancy or lactation.", "Mild periodontitis: ≥2 interproximal sites with clinical attachment loss ≥3 mm, and ≥2 interproximal sites with probing depth ≥4 mm (not on the same tooth), or one site with probing depth ≥5 mm.\nModerate periodontitis: ≥2 interproximal sites with clinical attachment loss ≥4 mm (not on the same tooth), or ≥2 interproximal sites with probing depth ≥5 mm (not on the same tooth).\nSevere periodontitis: ≥2 interproximal sites with clinical attachment loss ≥6 mm (not on the same tooth) and ≥1 interproximal site with probing depth ≥5 mm.", "Total RNA was extracted with phenol‐chloroform after homogenization in guanidine isothiocyanate (Trizol RNA Preparation kit). RNA concentration was evaluated with a NanoDrop ND1000 spectrophotometer (NanoDrop Technologies Inc.).", "Total RNA in the PBMC and plasma samples was extracted using Trizol reagent according to the instructions. Thereafter, the total RNA was reverse‐transcribed into cDNA with a reverse transcription kit (Shanghai Sangon Biological Engineering Co., Ltd). The primers used were as follows: IL36RN F: 5′‐AGGCGCCAGAGGCACCATGGAC‐3′, R: 5′‐CATCCTGTGCGTTGGCTGCC‐3′; U6 F: 5′‐GAAGGTGAAGGTCGGAGTC‐3′, R: 5′‐GAAGATGGTGATGGGATTT‐3′. PCR reaction conditions were as follows: A: pre‐denaturation at 95℃ for 10 min; B: denaturation at 95℃ for 15 s, annealing at 60℃ for 15 s, and elongation at 72℃ for 20 s, for a total of 40 cycles; C: 72℃ for 15 min. The reaction was terminated at 4℃. 
Three replicates were set for each sample, and the 2^−ΔΔCt method was used for relative quantitative analysis of the data.", "The levels of IL‐6, TNF‐α, and IL‐1β were detected with the corresponding enzyme‐linked immunosorbent assay (ELISA) kits (ThermoFisher Scientific) according to the kit instructions.", "SPSS version 20.0 (SPSS Inc.) software was used for all statistical analyses, and data were expressed as mean ± SD. Bioinformatics tools were used to analyze the heatmap, volcano map, and Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment pathways from the GEO database. The t‐test was performed for comparing two groups, and one‐way ANOVA was used to compare multiple groups. Pearson correlation was used for correlation analysis. The clinical significance of IL36RN in PBMC and plasma was evaluated by the receiver operating characteristic (ROC) curve using the area under the curve (AUC). p < 0.05 was considered statistically significant.", "A total of 194 participants were enrolled to evaluate IL36RN expression in the PBMC and plasma samples of periodontitis patients (n = 97) and healthy controls (n = 97). The patients and healthy controls were further characterized by clinical parameters such as age, gender, bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level. Between the two groups, bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level differed significantly (Table 1).\nClinical characteristics of the periodontitis patients and healthy controls", "Bioinformatics tools were used to identify the IL36RN mutated gene between periodontitis patients and healthy controls in the GEO dataset (GSE23586) via heatmap, volcano, and box plot analyses (Figure 1A‐C). 
These heatmap and volcano plot results demonstrated that the IL36RN mutated gene was significantly down‐regulated, which was further validated in the GSE23586 dataset of periodontitis patients and healthy controls by box plot (Figure 1C). In the box plot, IL36RN expression was significantly lower in periodontitis patients than in healthy controls (Figure 1C, p < 0.05).\nDiscovery of IL36RN in periodontitis patients and healthy controls using GEO database analysis. (A) Volcano map showed significant up‐ and down‐regulated mRNA gene expression through log2‐fold change and log10 p‐values. (B) Heatmap clustering represented up‐ and down‐regulated expressed genes between the two groups using fold change. (C) The comparison of IL36RN expression between the two groups. G1: Healthy control; G2: Periodontitis\nFurthermore, GO enrichment analysis identified the top‐20 significantly enriched pathways associated with the up‐ and down‐regulated genes between the periodontitis and healthy control groups (Figure 2A,B, p < 0.05). Likewise, the top‐20 significant KEGG pathways related to the same up‐ and down‐regulated genes were identified through the KEGG pathway database (Figure 3A,B, p < 0.05).\nThe top‐20 GO enrichment pathway analysis of up‐ and down‐regulated genes. (A) Up‐regulated GO enrichment pathways using bubble plot. (B) Down‐regulated GO enrichment pathways using bubble plot\nThe top‐20 KEGG pathway analysis of up‐ and down‐regulated genes. (A) Up‐regulated KEGG enriched pathways using bubble plot. (B) Down‐regulated KEGG enriched pathways using bubble plot", "IL36RN expression in the PBMC and plasma of 97 periodontitis patients and 97 healthy controls was validated and shown by scatter plots. IL36RN expression was notably down‐regulated in the PBMC and plasma of periodontitis patients compared with the healthy control group (Figure 4A,B, p < 0.05). 
Further, a significant positive correlation of IL36RN expression was observed between the PBMC and plasma of periodontitis patients (Figure 4C; r = 0.409, p < 0.001); therefore, the IL36RN expression level in PBMC was directly correlated with that in plasma.\nValidation of IL‐36RN mRNA expression between healthy control (n = 97) and periodontitis (n = 97) groups. (A) PBMC, (B) Plasma, (C) Correlation‐based scatter plot between PBMC and plasma", "The expression of three cytokines, serum‐based IL‐6, TNF‐α, and IL‐1β, was examined in periodontitis patients and healthy controls. Overall, all three cytokines were more highly expressed in periodontitis patients than in healthy controls (Figure 5A‐C, p < 0.05). Meanwhile, IL36RN expression in PBMC was negatively correlated with the serum levels of these three cytokines in periodontitis patients (Figure 6A‐C, p < 0.05). Thus, IL‐6, TNF‐α, and IL‐1β were up‐regulated and inversely correlated with IL36RN.\nExpression of three different cytokines in serum of healthy control (n = 97) and periodontitis (n = 97) groups. (A) IL‐6, (B) TNF‐α, (C) IL‐1β\nCorrelation between IL‐36RN expression and serum levels of three cytokines among periodontitis patients. (A) IL‐6, (B) TNF‐α, (C) IL‐1β", "IL36RN expression and its significance were also assessed in the PBMC and plasma samples of mild, moderate, and severe periodontitis patients. Mild periodontitis showed significantly higher IL36RN expression in PBMC and plasma than moderate and severe periodontitis (Figure 7A,B, p < 0.05). Similarly, moderate periodontitis showed higher IL36RN expression than severe periodontitis but lower expression than mild periodontitis (Figure 7A,B, p < 0.05). 
Severe periodontitis showed the lowest IL36RN expression in both the PBMC and plasma among the three groups (Figure 7A,B, p < 0.05).\nIL36RN mRNA expression in mild (n = 40), moderate (n = 32), and severe (n = 25) periodontitis patients. (A) PBMC, (B) Plasma\nROC analysis of PBMC yielded a significantly high AUC of 0.875 (95% confidence interval (CI) = 0.7882‐0.9618, p < 0.05) for mild vs severe periodontitis, and an AUC of 0.805 (95% CI = 0.6847‐0.9253, p < 0.05) for moderate vs severe periodontitis (Figure 8A,B). In plasma, mild vs severe periodontitis yielded an AUC of 0.839 (95% CI = 0.7451‐0.9329, p < 0.05), while moderate vs severe periodontitis obtained a significant AUC of 0.731 (95% CI = 0.6014‐0.8611, p < 0.05) (Figure 8C,D). Thus, IL36RN in PBMC and plasma can accurately distinguish mild or moderate from severe periodontitis and may be of diagnostic value in periodontitis.\nDiagnostic value of IL36RN for distinguishing mild or moderate patients from severe periodontitis patients. (A) PBMC of mild vs severe, (B) PBMC of moderate vs severe, (C) Plasma of mild vs severe, (D) Plasma of moderate vs severe" ]
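The PBMC-plasma agreement reported in these sections (r = 0.409, p < 0.001) is a Pearson product-moment correlation. A minimal sketch of that statistic, with hypothetical paired values (illustration only, not the study's data):

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient:
    # covariance of the pairs divided by the product of the standard deviations.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired relative IL36RN levels (not study data):
pbmc = [1.0, 2.0, 3.0, 4.0]
plasma = [1.1, 1.9, 3.2, 3.8]
r = pearson_r(pbmc, plasma)  # close to 1 for this near-linear pairing
```

A moderate positive r, as in the study, indicates that patients with lower IL36RN in PBMC tend to show lower IL36RN in plasma as well, without the two being interchangeable measurements.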
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Patients", "Inclusion and exclusion criteria", "Inclusion", "Exclusion", "Periodontitis severity criteria", "RNA isolation", "RT‐qPCR", "Determination of cytokines", "Statistical and datasets analysis", "RESULTS", "Clinical characteristics", "Discovery of IL36RN expression in periodontitis", "Validation of IL36RN in PBMC and plasma of periodontitis", "Three different cytokines expression in PBMC of periodontitis", "Significance of IL36RN in mild, moderate, and severe periodontitis", "DISCUSSION", "CONCLUSION", "CONFLICTS OF INTERESTS" ]
[ "Periodontitis, common but largely preventable, is a chronic and multifactorial inflammatory disease that damages the supporting soft tissue and bone of the teeth.1, 2 Periodontitis may persist for several months or years. The interaction between periodontal pathogens and host inflammatory and immune responses is involved in the pathogenesis of periodontitis.2 Certain diseases, such as Crohn's disease,3 asthma,4 rheumatoid arthritis,5 and diabetes mellitus,6 have been reported to increase the risk of periodontitis. Complicated immune responses in the body may play a critical role in the progression of tissue damage in periodontitis.7, 8\n\nInterleukin‐36 (IL‐36), a member of the interleukin‐1 (IL‐1) superfamily, has subfamily members comprising three agonists (IL‐36α, IL‐36β, and IL‐36γ) and two antagonists (the interleukin‐36 receptor antagonist (IL‐36Ra) and IL‐38).9 IL‐36Ra is an anti‐inflammatory mediator responsible for the tight regulation of IL‐36 signaling.9 The IL36RN gene encodes IL‐36Ra. Various inflammatory diseases, such as inflammatory skin disorders, Crohn's disease, and rheumatoid arthritis, have been increasingly linked to IL‐36‐related cytokines.10, 11, 12\n\nIn periodontitis, the pro‐inflammatory cytokines IL‐1, IL‐6, IL‐17A, and tumor necrosis factor‐α (TNF‐α) drive host immune responses to oral bacteria, specifically Porphyromonas gingivalis.13 Kübra et al. reported that active periodontal disease may down‐regulate inflammasome regulators and thereby increase the activity of IL‐1β in periodontal disease, including periodontitis.14 Alexandra et al. found that IL‐36γ could be a key inflammatory player in periodontitis and its associated alveolar bone resorption and could be a therapeutic target.2 Patrick R. et al. 
examined the serum, saliva, gingival crevicular fluid (GCF), and gingival biopsies of patients with inflammatory periodontal disease and found elevated levels of IL‐35.15 In this context, several interleukins might act as mutated genes in periodontitis and may play an important role in its occurrence and progression.\nIL‐36 has been evaluated in diverse inflammatory diseases.16 However, the role of IL36RN, a mutated gene of IL‐36, in the peripheral blood mononuclear cells (PBMC) and plasma of periodontitis patients remains unknown. Our study aims to evaluate IL36RN in the PBMC and plasma of periodontitis patients and its clinical significance.", "Patients A total of 194 cases of periodontitis and healthy control samples with PBMC and plasma were retrospectively collected at the Affiliated Hospital of Beihua University. The patients were recruited from August 2018 to January 2020. All patients signed informed consent during their hospital stay, and the study was authorized by the ethics committee of the Affiliated Hospital of Beihua University and was conducted following the Declaration of Helsinki guidelines (2018080375).", "Inclusion and exclusion criteria Inclusion Periodontitis patients: Age ≥35 years old; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; absorption of alveolar bone ≥ degree Ⅰ. 
Healthy control: age >18 years old; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing, and probing depth <3 mm.\nExclusion Patients suffering from systemic diseases such as diabetes and immune dysfunction; patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients with local oral radiotherapy; smoking and excessive drinking; females in pregnancy or lactation.\nPeriodontitis severity criteria Mild periodontitis: ≥2 interproximal sites with clinical attachment loss ≥3 mm, and ≥2 interproximal sites with probing depth ≥4 mm (not on the same tooth), or one site with probing depth ≥5 mm.\nModerate periodontitis: ≥2 interproximal sites with clinical attachment loss ≥4 mm (not on the same tooth), or ≥2 interproximal sites with probing depth ≥5 mm (not on the same tooth).\nSevere periodontitis: ≥2 interproximal sites with clinical attachment loss ≥6 mm (not on the same tooth) and ≥1 interproximal site with probing depth ≥5 mm.\nRNA isolation Total RNA was extracted with phenol‐chloroform after homogenization in guanidine isothiocyanate (Trizol RNA Preparation kit). 
RNA concentration was evaluated with a NanoDrop ND1000 spectrophotometer (NanoDrop Technologies Inc.).\nRT‐qPCR Total RNA in the PBMC and plasma samples was extracted using Trizol reagent according to the instructions. Thereafter, the total RNA was reverse‐transcribed into cDNA with a reverse transcription kit (Shanghai Sangon Biological Engineering Co., Ltd). The primers used were as follows: IL36RN F: 5′‐AGGCGCCAGAGGCACCATGGAC‐3′, R: 5′‐CATCCTGTGCGTTGGCTGCC‐3′; U6 F: 5′‐GAAGGTGAAGGTCGGAGTC‐3′, R: 5′‐GAAGATGGTGATGGGATTT‐3′. PCR reaction conditions were as follows: A: pre‐denaturation at 95℃ for 10 min; B: denaturation at 95℃ for 15 s, annealing at 60℃ for 15 s, and elongation at 72℃ for 20 s, for a total of 40 cycles; C: 72℃ for 15 min. The reaction was terminated at 4℃. 
Three replicates were set for each sample, and the 2^−ΔΔCt method was used for relative quantitative analysis of the data.\nDetermination of cytokines The levels of IL‐6, TNF‐α, and IL‐1β were detected with the corresponding enzyme‐linked immunosorbent assay (ELISA) kits (ThermoFisher Scientific) according to the kit instructions.\nStatistical and datasets analysis SPSS version 20.0 (SPSS Inc.) software was used for all statistical analyses, and data were expressed as mean ± SD. Bioinformatics tools were used to analyze the heatmap, volcano map, and Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment pathways from the GEO database. The t‐test was performed for comparing two groups, and one‐way ANOVA was used to compare multiple groups. Pearson correlation was used for correlation analysis. The clinical significance of IL36RN in PBMC and plasma was evaluated by the receiver operating characteristic (ROC) curve using the area under the curve (AUC). 
p < 0.05 was considered statistically significant.", "A total of 194 cases of periodontitis and healthy control samples with PBMC and plasma were retrospectively collected at the Affiliated Hospital of Beihua University. The patients were recruited from August 2018 to January 2020. All patients signed informed consent during their hospital stay, and the study was authorized by the ethics committee of the Affiliated Hospital of Beihua University and was conducted following the Declaration of Helsinki guidelines (2018080375).", "Inclusion Periodontitis patients: Age ≥35 years old; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; absorption of alveolar bone ≥ degree Ⅰ. Healthy control: age >18 years old; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing, and probing depth <3 mm.\nExclusion Patients suffering from systemic diseases such as diabetes and immune dysfunction; patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients with local oral radiotherapy; smoking and excessive drinking; females in pregnancy or lactation. 
", "Periodontitis patients: Age ≥35 years old; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; absorption of alveolar bone ≥ degree Ⅰ. Healthy control: age >18 years old; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing, and probing depth <3 mm.", "Patients suffering from systemic diseases such as diabetes and immune dysfunction; patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients with local oral radiotherapy; smoking and excessive drinking; females in pregnancy or lactation.", "Mild periodontitis: ≥2 interproximal sites with clinical attachment loss ≥3 mm, and ≥2 interproximal sites with probing depth ≥4 mm (not on the same tooth), or one site with probing depth ≥5 mm.\nModerate periodontitis: ≥2 interproximal sites with clinical attachment loss ≥4 mm (not on the same tooth), or ≥2 interproximal sites with probing depth ≥5 mm (not on the same tooth).\nSevere periodontitis: ≥2 interproximal sites with clinical attachment loss ≥6 mm (not on the same tooth) and ≥1 interproximal site with probing depth ≥5 mm.", "Total RNA was extracted with phenol‐chloroform after homogenization in guanidine isothiocyanate (Trizol RNA Preparation kit). RNA concentration was evaluated with a NanoDrop ND1000 spectrophotometer (NanoDrop Technologies Inc.).", "Total RNA in the PBMC and plasma samples was extracted using Trizol reagent according to the instructions. Thereafter, the total RNA was reverse‐transcribed into cDNA with a reverse transcription kit (Shanghai Sangon Biological Engineering Co., Ltd). 
The primers used were as follows: IL36RN F: 5′‐AGGCGCCAGAGGCACCATGGAC‐3′, R: 5′‐CATCCTGTGCGTTGGCTGCC‐3′; U6 F: 5′‐GAAGGTGAAGGTCGGAGTC‐3′, R: 5′‐GAAGATGGTGATGGGATTT‐3′. PCR conditions were as follows: (A) pre‐denaturation at 95℃ for 10 min; (B) denaturation at 95℃ for 15 s, annealing at 60℃ for 15 s, and elongation at 72℃ for 20 s, for 40 cycles; (C) final extension at 72℃ for 15 min, with the reaction terminated at 4℃. Three replicates were run for each sample, and the 2^−ΔΔCt method was used for relative quantification.", "The levels of IL‐6, TNF‐α and IL‐1β were detected with the corresponding enzyme‐linked immunosorbent assay (ELISA) kits (ThermoFisher Scientific) according to the kit instructions.", "SPSS 20.0 (SPSS Inc.) was used for all statistical analyses, and data are expressed as mean ± SD. Bioinformatics tools were used to generate heatmaps, volcano plots, and Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses from the GEO database. The t‐test was used to compare two groups, and one‐way ANOVA to compare multiple groups. Pearson correlation was used for correlation analysis. The clinical significance of IL36RN in PBMC and plasma was evaluated by receiver operating characteristic (ROC) curves using the area under the curve (AUC). p < 0.05 was considered statistically significant.", "Clinical characteristics A total of 194 participants were enrolled to evaluate IL36RN expression in PBMC and plasma samples from periodontitis patients (n = 97) and healthy controls (n = 97). The patients and healthy controls were further characterized by clinical parameters such as age, gender, bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level. 
Among both groups, bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level differed significantly (Table 1).
Clinical characteristics of the periodontitis patients and healthy controls
Discovery of IL36RN expression in periodontitis Bioinformatics tools were used to identify the IL36RN mutated gene between periodontitis patients and healthy controls in the GEO dataset (GSE23586) via heatmap, volcano plot, and box plot analyses (Figure 1A‐C). The heatmap and volcano plot results demonstrated that the IL36RN mutated gene was significantly down‐regulated, which was further validated in the GSE23586 dataset by box plot: IL36RN expression was significantly lower in periodontitis patients than in healthy controls (Figure 1C, p < 0.05).
Discovery of IL36RN in periodontitis patients and healthy controls using GEO database analysis. (A) Volcano plot showing significantly up‐ and down‐regulated mRNA expression by log2 fold change and −log10 p‐value. (B) Heatmap clustering of up‐ and down‐regulated genes between the 2 groups by fold change. (C) Comparison of IL36RN expression between the 2 groups. G1: Healthy control G2: Periodontitis
Furthermore, GO analysis showed the top‐20 significant GO pathways enriched for the up‐ and down‐regulated genes between the periodontitis and healthy control groups (Figure 2A,B, p < 0.05). Likewise, the top‐20 significant KEGG pathways for the up‐ and down‐regulated genes were obtained from the KEGG pathway database (Figure 3A,B, p < 0.05).
The top‐20 GO enrichment pathway analysis of up‐ and down‐regulated genes. (A) Up‐regulated GO enrichment pathways (bubble plot). (B) Down‐regulated GO enrichment pathways (bubble plot)
The top‐20 KEGG pathway analysis of up‐ and down‐regulated genes. (A) Up‐regulated KEGG enriched pathways (bubble plot). (B) Down‐regulated KEGG enriched pathways (bubble plot)
Validation of IL36RN in PBMC and plasma of periodontitis IL36RN expression in PBMC and plasma was validated by scatter plots in 97 periodontitis patients and 97 healthy controls. IL36RN expression was notably down‐regulated in the PBMC and plasma of periodontitis patients compared to the healthy control group (Figure 4A,B, p < 0.05). Further, IL36RN expression in PBMC was significantly positively correlated with that in plasma of periodontitis patients (Figure 4C, r = 0.409, p < 0.001).
Validation of IL‐36RN mRNA expression between healthy control (n = 97) and periodontitis (n = 97) groups. (A) PBMC, (B) Plasma, (C) Correlation‐based scatter plot between PBMC and plasma
Three different cytokines expression in PBMC of periodontitis The serum levels of three cytokines, IL‐6, TNF‐α and IL‐1β, were examined in periodontitis patients and healthy controls. All three cytokines were expressed at significantly higher levels in periodontitis patients than in healthy controls (Figure 5A‐C, p < 0.05). Meanwhile, IL36RN expression in PBMC was negatively correlated with the serum levels of these three cytokines in periodontitis patients (Figure 6A‐C, p < 0.05). Thus, IL‐6, TNF‐α, and IL‐1β were up‐regulated and inversely correlated with IL36RN.
Expression of three different cytokines in serum of healthy control (n = 97) and periodontitis (n = 97) groups. (A) IL‐6, (B) TNF‐α, (C) IL‐1β
Correlation between IL‐36RN expression and serum levels of three cytokines among periodontitis patients. (A) IL‐6, (B) TNF‐α, (C) IL‐1β
Significance of IL36RN in mild, moderate, and severe periodontitis IL36RN expression was also examined in the PBMC and plasma of patients with mild, moderate, and severe periodontitis. Mild periodontitis showed significantly higher IL36RN expression in PBMC and plasma than moderate and severe periodontitis; moderate periodontitis showed higher expression than severe but lower than mild; and severe periodontitis showed the lowest IL36RN expression in both PBMC and plasma (Figure 7A,B, p < 0.05).
IL36RN mRNA expression in mild (n = 40), moderate (n = 32) and severe (n = 25) periodontitis patients. (A) PBMC, (B) Plasma
The ROC‐AUC reached 0.875 (95% confidence interval (CI) = 0.7882‐0.9618, p < 0.05) for mild vs severe periodontitis in PBMC, whereas moderate vs severe periodontitis in PBMC achieved an AUC of 0.805 (95% CI = 0.6847‐0.9253, p < 0.05) (Figure 8A,B). In plasma, mild vs severe periodontitis yielded an AUC of 0.839 (95% CI = 0.7451‐0.9329, p < 0.05), while moderate vs severe periodontitis obtained a significant AUC of 0.731 (95% CI = 0.6014‐0.8611, p < 0.05) (Figure 8C,D). 
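The AUC values reported above are equivalent to the Mann‑Whitney probability that a randomly chosen milder‑disease sample shows higher IL36RN expression than a randomly chosen severe sample. A minimal sketch of that computation (the expression values below are hypothetical illustrations, not the study's data):

```python
def roc_auc(severe_values, milder_values):
    """AUC for separating two groups by a single marker.

    Equals the probability that a randomly chosen sample from the
    milder group has a higher marker value than a randomly chosen
    sample from the severe group (ties count as 0.5); identical to
    the normalized Mann-Whitney U statistic.
    """
    wins = 0.0
    for m in milder_values:
        for s in severe_values:
            if m > s:
                wins += 1.0
            elif m == s:
                wins += 0.5
    return wins / (len(milder_values) * len(severe_values))

# Hypothetical relative IL36RN expression values (not the study's data):
mild = [1.2, 1.0, 0.9, 1.1]
severe = [0.4, 0.5, 0.9, 0.3]
auc = roc_auc(severe, mild)  # 15.5 / 16 = 0.96875
```

An AUC of 0.5 would indicate no separation between severity groups, while values near 1.0 indicate that IL36RN expression cleanly ranks milder cases above severe ones.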
Thus, IL36RN can accurately distinguish mild from severe and moderate from severe periodontitis in both PBMC and plasma, and may therefore have potential diagnostic value for periodontitis.
Diagnostic value of IL36RN for distinguishing mild or moderate patients from severe periodontitis patients. (A) PBMC of mild vs severe, (B) PBMC of moderate vs severe, (C) Plasma of mild vs severe, (D) Plasma of moderate vs severe", "Periodontitis is a multifactorial disease initiated by microorganisms of various colonies. Periodontitis is thought to arise from a complex interplay of environmental, genetic, and bacterial factors, in which host factors and bacteria play a crucial role.17, 18, 19, 20 Clinically, periodontitis is linked to bleeding on probing, increased probing depth and plaque index, bone loss, and reduction of clinical attachment level.18, 19 Periodontal bacteria activate the host immune response, which leads to the release of inflammatory cytokines and mediators in the periodontal tissues and induces periodontal breakdown.20 It was previously reported that cytokine networks play a crucial role in periodontitis pathogenesis, soft tissue destruction, and bone resorption.21 Moreover, elevated expression of pro‐inflammatory and regulatory cytokines such as IL‐1β, interferon (IFN)‐γ, IL‐1 receptor antagonist (RA), IL‐4, IL‐6, IL‐10, IL‐12, TNF‐α, and induced protein (IP)‐10 drives the inflammatory process of periodontitis.22 Thus, cytokines may be involved in the pathogenesis of periodontitis.
The present study discovered and validated the down‐regulated expression of IL36RN in the PBMC and plasma of periodontitis patients relative to healthy controls, consistent with previous studies.19, 23 Meanwhile, PBMC‐based IL36RN expression was positively correlated with plasma‐based IL36RN, whereas the three serum cytokines were inversely correlated with PBMC‐based IL36RN. The three serum cytokines (IL‐1β, IL‐6, TNF‐α) showed up‐regulated expression in periodontitis, consistent with previous studies.19, 23, 24 Moreover, previous research has reported that IL36RN mutation may promote generalized pustular psoriasis (GPP), a rare life‐threatening disease.25, 26, 27, 28, 29 Studies have also found that IL36RN mutations result in a misfolded IL‐36Ra protein due to the introduction of a premature stop codon, a frameshift mutation, or an amino acid substitution, and that the mutant IL‐36Ra protein is poorly expressed and less stable.25, 27, 28 Furthermore, in the current study the IL36RN mutated gene significantly distinguished mild vs severe and moderate vs severe periodontitis in PBMC and plasma samples, with AUCs ranging from 0.73 to 0.85. Taken together, IL36RN mutation may not only affect the pathogenesis of periodontitis but also accurately differentiate mild from severe and moderate from severe periodontitis.
In our study, resolution of periodontal inflammation was achieved by mechanical removal of dental plaque, which leads to the expected improvement of periodontal infections. For the past few decades, most researchers have reported that non‐surgical therapy may be a validated substitute for surgical therapy.30, 31 Therapeutic outcomes appear to depend mostly on probing depth. For periodontal pockets of 1‐3 mm, non‐surgical therapy led to 0.3 mm less clinical attachment loss than surgical therapy but 0.1 mm less reduction in probing depth. For pockets of 4‐6 mm, non‐surgical therapy led to 0.3 mm more clinical attachment gain but 0.3 mm less reduction in probing depth than surgical therapy. 
For pockets deeper than 6 mm, surgical therapy led to greater reductions in probing depth.32, 33 No attempt was made to compare the efficacy of different therapeutic approaches, because the current study aimed only to discover and evaluate the clinical significance of the IL36RN mutated gene.
However, the study has several limitations. First, the discovery of IL36RN expression relied on an online GEO dataset, which might introduce sample or selection bias. Second, IL36RN was validated in a relatively small cohort, so further studies should validate it in larger multicenter cohorts. Third, the three cytokines were evaluated and compared in serum samples and were not explored in PBMC or plasma samples. Therefore, further studies are needed to evaluate IL36RN alongside various cytokines, which may play a role in the occurrence and progression of periodontitis.", "IL36RN is distinctively detectable in the PBMC and plasma of periodontitis patients, acting as a down‐regulated mutated gene that might play an effective role in causing periodontitis. IL36RN, together with other inflammatory cytokines, may be involved in the pathogenesis of periodontitis.", "The authors declared that they have no potential conflicts of interest." ]
[ null, "materials-and-methods", null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions", "COI-statement" ]
[ "cytokines", "IL36RN", "PBMC", "periodontitis", "plasma" ]
INTRODUCTION: Periodontitis, common but largely preventable, is a chronic, multifactorial inflammatory disease that damages the supporting soft tissue and bone of the teeth.1, 2 Periodontitis may persist for several months or years. The interaction between periodontal pathogens and host inflammatory and immune responses is involved in the pathogenesis of periodontitis.2 Certain diseases, such as Crohn's disease,3 asthma,4 rheumatoid arthritis,5 and diabetes mellitus,6 have been reported to increase the risk of periodontitis. Complex immune responses in the body may play a critical role in the progression of tissue damage in periodontitis.7, 8 Interleukin‐36 (IL‐36), a member of the interleukin‐1 (IL‐1) superfamily, comprises three agonists (IL‐36α, IL‐36β, and IL‐36γ) and two antagonists (the interleukin‐36 receptor antagonist (IL‐36Ra) and IL‐38).9 IL‐36Ra is an anti‐inflammatory mediator responsible for the tight regulation of IL‐36 signaling.9 The IL36RN gene encodes IL‐36Ra. Various inflammatory diseases, such as inflammatory skin disorders, Crohn's disease, and rheumatoid arthritis, have been increasingly connected with IL‐36‐related cytokines.10, 11, 12 In periodontitis, the pro‐inflammatory cytokines IL‐1, IL‐6, IL‐17A, and tumor necrosis factor‐α (TNF‐α) drive host immune responses to oral bacteria, notably Porphyromonas gingivalis.13 Kübra et al. reported that active periodontal disease may down‐regulate inflammasome regulators, thereby increasing IL‐1β activity in periodontal disease including periodontitis.14 Alexandra et al. found that IL‐36γ could be a key inflammatory player in periodontitis and its associated alveolar bone resorption, and could be a therapeutic target.2 Patrick R. et al. examined the serum, saliva, gingival crevicular fluid (GCF), and gingival biopsies of patients with inflammatory periodontal disease and found elevated levels of IL‐35.15 In this context, several interleukins might act as mutated genes in periodontitis and may play an important role in its occurrence and progression. IL‐36 has been evaluated in diverse inflammatory diseases.16 However, the role of IL36RN, a mutated gene of IL‐36, in the peripheral blood mononuclear cells (PBMC) and plasma of periodontitis patients remains unknown. Our study aims to detect IL36RN in the PBMC and plasma of periodontitis patients and to evaluate its clinical significance. MATERIALS AND METHODS: Patients A total of 194 cases of periodontitis and healthy control samples with PBMC and plasma were retrospectively collected from the Affiliated Hospital of Beihua University. The patients were recruited from August 2018 to January 2020. All patients signed informed consent during their hospital stay, and the study was approved by the ethics committee of the Affiliated Hospital of Beihua University and conducted following the Declaration of Helsinki (2018080375). Inclusion and exclusion criteria Inclusion Periodontitis patients: age ≥35 years; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; alveolar bone resorption ≥ degree I. 
Healthy controls: age >18 years; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing; probing depth <3 mm. Exclusion Patients with systemic diseases such as diabetes or immune dysfunction; patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients who received local oral radiotherapy; smoking or excessive drinking; women in pregnancy or lactation. Periodontitis severity criteria Mild periodontitis: ≥2 interproximal sites with clinical attachment loss ≥3 mm, and ≥2 interproximal sites with probing depth ≥4 mm (not on the same tooth) or one site with probing depth ≥5 mm. Moderate periodontitis: ≥2 interproximal sites with clinical attachment loss ≥4 mm (not on the same tooth), or ≥2 interproximal sites with probing depth ≥5 mm (not on the same tooth). Severe periodontitis: ≥2 interproximal sites with clinical attachment loss ≥6 mm (not on the same tooth) and ≥1 interproximal site with probing depth ≥5 mm. RNA isolation Total RNA was extracted with phenol‐chloroform after homogenization in guanidine isothiocyanate (Trizol RNA Preparation kit). RNA concentration was evaluated with a NanoDrop ND1000 spectrophotometer (NanoDrop Technologies Inc.). RT‐qPCR Total RNA in the PBMC and plasma samples was extracted using Trizol reagent according to the instructions. Thereafter, the total RNA was reverse‐transcribed into cDNA with a reverse transcription kit (Shanghai Sangon Biological Engineering Co., Ltd). The primers used were as follows: IL36RN F: 5′‐AGGCGCCAGAGGCACCATGGAC‐3′, R: 5′‐CATCCTGTGCGTTGGCTGCC‐3′; U6 F: 5′‐GAAGGTGAAGGTCGGAGTC‐3′, R: 5′‐GAAGATGGTGATGGGATTT‐3′. PCR conditions: pre‐denaturation at 95℃ for 10 min; then 40 cycles of denaturation at 95℃ for 15 s, annealing at 60℃ for 15 s, and elongation at 72℃ for 20 s; final extension at 72℃ for 15 min, with the reaction terminated at 4℃. Three replicates were run for each sample, and the 2^−ΔΔCt method was used for relative quantification. Determination of cytokines The levels of IL‐6, TNF‐α and IL‐1β were detected with the corresponding enzyme‐linked immunosorbent assay (ELISA) kits (ThermoFisher Scientific) according to the kit instructions. 
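The 2^−ΔΔCt relative quantification used in the RT‑qPCR analysis can be sketched as follows (the Ct values shown are hypothetical illustrations, not measured data):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA quantification by the 2^-ddCt method.

    dCt normalizes the target gene (here IL36RN) to the reference gene
    (here U6) within one sample; ddCt then compares a patient sample
    against a control sample (or the control-group mean).
    """
    d_ct_sample = ct_target - ct_ref      # dCt of the patient sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt of the control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** -dd_ct

# Hypothetical mean Ct values over three replicates:
# patient IL36RN Ct 28.0, U6 Ct 20.0; control IL36RN Ct 26.0, U6 Ct 20.0
fold = relative_expression(28.0, 20.0, 26.0, 20.0)  # ddCt = 2.0 -> 0.25 (down-regulated)
```

A fold change below 1 corresponds to down‑regulation relative to the control group, as reported for IL36RN in periodontitis.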
Patients: A total of 194 cases of periodontitis and healthy control samples with PBMC and plasma were retrospectively collected from the Affiliated Hospital of Beihua University. The patients were recruited from August 2018 to January 2020.
All patients signed informed consent during their hospital stay; the study was approved by the ethics committee of the Affiliated Hospital of Beihua University (2018080375) and was conducted in accordance with the Declaration of Helsinki.
Inclusion: Periodontitis patients: age ≥35 years; residual teeth ≥20 (2 teeth in each quadrant); probing depth ≥6 mm; clinical attachment loss ≥5 mm; alveolar bone resorption ≥ degree Ⅰ. Healthy controls: age >18 years; no history of periodontal disease or missing teeth; no gingival swelling, spontaneous bleeding, or bleeding on probing; probing depth <3 mm.
Exclusion: patients with systemic diseases such as diabetes or immune dysfunction; patients who took anti‐inflammatory or anti‐tumor drugs within three months before treatment; patients who had received local oral radiotherapy; smokers and heavy drinkers; women who were pregnant or lactating.
Periodontitis severity criteria: Mild periodontitis: ≥2 interproximal sites with clinical attachment loss ≥3 mm, and ≥2 interproximal sites with probing depth ≥4 mm (not on the same tooth) or one site with probing depth ≥5 mm. Moderate periodontitis: ≥2 interproximal sites with clinical attachment loss ≥4 mm (not on the same tooth), or ≥2 interproximal sites with probing depth ≥5 mm (not on the same tooth). Severe periodontitis: ≥2 interproximal sites with clinical attachment loss ≥6 mm (not on the same tooth) and ≥1 interproximal site with probing depth ≥5 mm.
RNA isolation: Total RNA was extracted by guanidine isothiocyanate homogenization followed by phenol‐chloroform extraction (Trizol RNA Preparation kit). RNA concentration was evaluated with a NanoDrop ND‐1000 spectrophotometer (NanoDrop Technologies Inc.).
RT‐qPCR: Total RNA from the PBMC and plasma samples was extracted using Trizol reagent according to the manufacturer's instructions. The total RNA was then reverse‐transcribed into cDNA with a reverse transcription kit (Shanghai Sangon Biological Engineering Co., Ltd.). The primers were: IL36RN F: 5′‐AGGCGCCAGAGGCACCATGGAC‐3′, R: 5′‐CATCCTGTGCGTTGGCTGCC‐3′; U6 F: 5′‐GAAGGTGAAGGTCGGAGTC‐3′, R: 5′‐GAAGATGGTGATGGGATTT‐3′. PCR conditions: pre‐denaturation at 95℃ for 10 min; 40 cycles of denaturation at 95℃ for 15 s, annealing at 60℃ for 15 s, and elongation at 72℃ for 20 s; final extension at 72℃ for 15 min; hold at 4℃. Three replicates were run for each sample, and the 2^−ΔΔCt method was used for relative quantification.
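The 2^−ΔΔCt relative quantification can be illustrated with a short sketch. The Ct values below are hypothetical, for illustration only; U6 serves as the reference gene, as in the protocol above.

```python
# Relative quantification of IL36RN expression by the 2^(-ΔΔCt) method.
# All Ct values here are hypothetical and only illustrate the arithmetic.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCt: ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(sample) - ΔCt(control)."""
    delta_ct_sample = ct_target - ct_ref
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    ddct = delta_ct_sample - delta_ct_control
    return 2 ** (-ddct)

# Mean Ct of three replicates (hypothetical): IL36RN vs U6
fc = fold_change(ct_target=28.0, ct_ref=20.0,            # periodontitis sample
                 ct_target_ctrl=26.0, ct_ref_ctrl=20.0)  # healthy control
print(fc)  # 0.25 -> IL36RN expressed at a quarter of the control level
```

A fold change below 1 corresponds to down‐regulation relative to the control group, which is the direction reported for IL36RN in this study.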
Determination of cytokines: The levels of IL‐6, TNF‐α and IL‐1β were measured with the corresponding enzyme‐linked immunosorbent assay (ELISA) kits (ThermoFisher Scientific) according to the kit instructions.
Statistical and dataset analysis: SPSS 20.0 (SPSS Inc.) was used for all statistical analyses, and data are expressed as mean ± SD. Bioinformatics tools were used to generate the heatmap, volcano plot, and Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses from the GEO database. The t‐test was used to compare two groups, one‐way ANOVA to compare multiple groups, and Pearson correlation for correlation analysis. The clinical significance of IL36RN in PBMC and plasma was evaluated by receiver operating characteristic (ROC) curves using the area under the curve (AUC). p < 0.05 was considered statistically significant.
RESULTS: Clinical characteristics: A total of 194 participants were enrolled to evaluate IL36RN expression in PBMC and plasma samples from periodontitis patients (n = 97) and healthy controls (n = 97). The two groups were characterized by clinical parameters including age, gender, bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level. Bleeding on probing, oral hygiene index, pocket depth, and clinical attachment level differed significantly between the groups (Table 1). Clinical characteristics of the periodontitis patients and healthy controls
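The ROC‐AUC evaluation described above can be sketched in pure Python using the Mann‐Whitney relationship AUC = P(score of a positive case > score of a negative case), with ties counted as 1/2. The expression values below are hypothetical (the actual analysis used SPSS 20.0); since lower IL36RN expression indicates more severe disease, cases are scored by negated expression.

```python
# AUC via the Mann-Whitney U relationship: the probability that a randomly
# chosen positive (severe) case scores higher than a randomly chosen
# negative (mild) case, counting ties as 1/2. Values are hypothetical.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical IL36RN relative expression: severe cases tend to be lower
severe = [0.20, 0.25, 0.30, 0.40]
mild = [0.60, 0.80, 0.35, 0.90]

# Score = -expression, so that severe (positive) cases tend to score higher
print(auc([-x for x in severe], [-x for x in mild]))  # 0.9375
```

An AUC near 1 indicates near‐perfect separation of the groups, while 0.5 indicates no discriminative value; the study's reported AUCs of 0.73-0.87 fall in the "good" range by this scale.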
Discovery of IL36RN expression in periodontitis: Bioinformatics tools were used to identify the IL36RN mutated gene between periodontitis patients and healthy controls in the GEO dataset (GSE23586) via heatmap, volcano plot, and box plot analyses (Figure 1A‐C). The heatmap and volcano plot showed that the IL36RN mutated gene was significantly down‐regulated, which was further confirmed by the box plot of the GSE23586 periodontitis and healthy control samples: IL36RN expression was significantly lower in periodontitis patients than in healthy controls (Figure 1C, p < 0.05). Discovery of IL36RN in periodontitis patients and healthy controls using GEO database analysis. (A) Volcano plot showing significantly up‐ and down‐regulated mRNA expression by log2 fold change and log10 p‐value. (B) Heatmap clustering of up‐ and down‐regulated genes between the two groups by fold change. (C) Comparison of IL36RN expression between the two groups. G1: healthy control; G2: periodontitis. Furthermore, GO analysis identified the top 20 significantly enriched pathways associated with the up‐ and down‐regulated genes in the periodontitis and healthy control groups (Figure 2A,B, p < 0.05). Likewise, the top 20 significantly enriched KEGG pathways of the up‐ and down‐regulated genes were obtained from the KEGG pathway database (Figure 3A,B, p < 0.05). The top 20 GO enrichment pathways of up‐ and down‐regulated genes. (A) Up‐regulated GO enrichment pathways using bubble plot. (B) Down‐regulated GO enrichment pathways using bubble plot. The top 20 KEGG pathways of up‐ and down‐regulated genes. (A) Up‐regulated KEGG enriched pathways using bubble plot.
(B) Down‐regulated KEGG enriched pathways using bubble plot.
Validation of IL36RN in PBMC and plasma of periodontitis: IL36RN expression in PBMC and plasma was validated by scatter plots in 97 periodontitis patients and 97 healthy controls. IL36RN expression was notably down‐regulated in both PBMC and plasma of periodontitis patients compared with the healthy control group (Figure 4A,B, p < 0.05). Further, a significant positive correlation of IL36RN expression was observed between PBMC and plasma of periodontitis patients (Figure 4C, r = 0.409, p < 0.001); thus, the IL36RN expression level in PBMC was directly correlated with that in plasma. Validation of IL‐36RN mRNA expression between healthy control (n = 97) and periodontitis (n = 97) groups. (A) PBMC, (B) Plasma, (C) Correlation‐based scatter plot between PBMC and plasma.
Three different cytokines expression in PBMC of periodontitis: The serum levels of three cytokines, IL‐6, TNF‐α and IL‐1β, were measured in periodontitis patients and healthy controls. All three cytokines were expressed at significantly higher levels in periodontitis patients than in healthy controls (Figure 5A‐C, p < 0.05). Meanwhile, IL36RN expression in PBMC was negatively correlated with the serum levels of these three cytokines in periodontitis patients (Figure 6A‐C, p < 0.05). Thus, IL‐6, TNF‐α, and IL‐1β were up‐regulated and inversely correlated with IL36RN. Expression of three different cytokines in serum of healthy control (n = 97) and periodontitis (n = 97) groups. (A) IL‐6, (B) TNF‐α, (C) IL‐1β. Correlation between IL‐36RN expression and serum levels of three cytokines among periodontitis patients. (A) IL‐6, (B) TNF‐α, (C) IL‐1β.
Significance of IL36RN in mild, moderate, and severe periodontitis: IL36RN expression was also examined in PBMC and plasma samples across mild‐to‐severe periodontitis. Mild periodontitis showed significantly higher IL36RN expression in PBMC and plasma than moderate and severe periodontitis (Figure 7A,B, p < 0.05). Moderate periodontitis showed higher IL36RN expression than severe periodontitis but lower expression than mild periodontitis (Figure 7A,B, p < 0.05).
Severe periodontitis showed the lowest IL36RN expression in PBMC and plasma among the three groups (Figure 7A,B, p < 0.05). IL36RN mRNA expression in mild (n = 40), moderate (n = 32) and severe (n = 25) periodontitis patients. (A) PBMC, (B) Plasma. The ROC‐AUC reached a significantly high 0.875 (95% confidence interval (CI) = 0.7882‐0.9618, p < 0.05) for mild vs severe periodontitis in PBMC, whereas moderate vs severe periodontitis in PBMC achieved an AUC of 0.805 (95% CI = 0.6847‐0.9253, p < 0.05) (Figure 8A,B). In plasma, mild vs severe periodontitis yielded an AUC of 0.839 (95% CI = 0.7451‐0.9329, p < 0.05), while moderate vs severe periodontitis obtained a significant AUC of 0.731 (95% CI = 0.6014‐0.8611, p < 0.05) (Figure 8C,D). Thus, IL36RN can accurately distinguish mild or moderate from severe periodontitis in both PBMC and plasma, and may have potential diagnostic value for periodontitis. Diagnostic value of IL36RN for distinguishing mild or moderate patients from severe periodontitis patients. (A) PBMC of mild vs severe, (B) PBMC of moderate vs severe, (C) Plasma of mild vs severe, (D) Plasma of moderate vs severe.
DISCUSSION: Periodontitis is a multifactorial disease initiated by microorganisms forming diverse colonies.
Periodontitis is thought to arise from complex interactions among environmental, genetic, and bacterial factors, in which both host factors and bacteria play a crucial role.17, 18, 19, 20 Clinically, periodontitis is associated with bleeding on probing, increased probing depth and plaque index, bone loss, and reduced clinical attachment level.18, 19 Periodontal bacteria activate the host immune response, which leads to the release of inflammatory cytokines and mediators in periodontal tissues and induces periodontal breakdown.20 It was previously reported that substantial cytokine networks play a crucial role in the pathogenesis of periodontitis, destroying soft tissue and promoting bone resorption.21 Moreover, elevated expression of pro‐inflammatory and regulatory cytokines, such as IL‐1β, interferon (IFN)‐γ, IL‐1 receptor antagonist (RA), IL‐4, IL‐6, IL‐10, IL‐12, TNF‐α, and induced protein (IP)‐10, paves the way for the inflammatory process of periodontitis.22 Thus, cytokines may be involved in the pathogenesis of periodontitis. The present study discovered and validated the down‐regulated expression of IL36RN in periodontitis patients versus healthy controls in PBMC and plasma samples, in line with previous studies.19, 23 Meanwhile, PBMC‐based IL36RN expression was positively correlated with plasma‐based IL36RN, whereas the three serum‐based cytokines were inversely correlated with PBMC‐based IL36RN.
Here, the three serum‐based cytokines (IL‐1β, IL‐6, TNF‐α) showed up‐regulated expression in periodontitis, consistent with previous studies.19, 23, 24 Moreover, previous research has reported that IL36RN mutation may promote generalized pustular psoriasis (GPP), a rare life‐threatening disease.25, 26, 27, 28, 29 Studies have found that IL36RN mutations result in a misfolded IL‐36Ra protein, owing to the introduction of a premature stop codon, a frameshift mutation, or an amino acid substitution, and that the IL‐36Ra protein was poorly expressed and less stable.25, 27, 28 Furthermore, in the current study, the IL36RN mutated gene significantly distinguished mild vs severe and moderate vs severe periodontitis in PBMC and plasma samples, with AUCs ranging from 0.73 to 0.85. Taken together, IL36RN mutation may not only affect the pathogenesis of periodontitis but may also accurately differentiate mild from severe and moderate from severe periodontitis. In our study, resolution of periodontal inflammation was achieved by mechanical removal of dental plaque, which led to the expected improvement of periodontal infections. For a few decades, most researchers have reported that non‐invasive (non‐surgical) therapy may be a validated alternative to invasive (surgical) therapy.30, 31 Therapeutic outcomes appear to depend largely on probing depth. For periodontal pockets of 1‐3 mm, non‐surgical therapy resulted in 0.3 mm less clinical attachment loss than surgical therapy but a 0.1 mm smaller reduction in probing depth. For pockets of 4‐6 mm, non‐surgical therapy produced 0.3 mm more clinical attachment gain but a 0.3 mm smaller reduction in probing depth than surgical therapy.
For pockets deeper than 6 mm, surgical therapy led to greater reductions in probing depth.32, 33 No attempt was made to compare the efficacy of the different therapeutic approaches, because the current study aimed only to discover and evaluate the clinical significance of the IL36RN mutated gene. However, the study has a few limitations. First, the discovery of IL36RN expression was conducted with an online GEO‐based database, which might introduce sample or finding bias. Second, IL36RN was validated with relatively few samples; further studies should validate it in a larger multicenter cohort. Third, the three cytokines were evaluated and compared using serum samples and were not explored in PBMC‐ or plasma‐based samples. Therefore, further studies are needed to broadly detect and evaluate IL36RN together with various cytokines, which may play a role in the occurrence and progression of periodontitis. CONCLUSION: IL36RN is distinctively detectable in the PBMC and plasma of periodontitis patients and acts as a down‐regulated mutated gene that might play an effective role in causing periodontitis. IL36RN may be involved, together with other inflammatory cytokines, in the pathogenesis of periodontitis. CONFLICTS OF INTERESTS: The authors declared that they have no potential conflicts of interest.
Background: The role of the IL-36 receptor antagonist (IL36RN), a mutated gene of IL-36, in the peripheral blood mononuclear cells (PBMC) and plasma of periodontitis patients remains undetermined. Methods: Our study discovered IL36RN expression through GEO public databases and further validated it in the PBMC and plasma of periodontitis patients and healthy participants. A total of 194 participants, consisting of 97 periodontitis cases and 97 healthy controls, were retrospectively evaluated, and the gene enrichment pathways and clinical significance of IL36RN expression were explored alongside three different cytokines. Furthermore, the clinical significance of IL36RN was evaluated in mild-to-severe periodontitis patients by receiver operating characteristic (ROC) curves using the area under the curve (AUC). Results: IL36RN expression was notably down-regulated in the PBMC and plasma of periodontitis patients. A significant positive correlation of IL36RN expression was observed between the PBMC and plasma of periodontitis patients, while IL36RN expression was negatively correlated with the serum levels of the three cytokines. Meanwhile, the ROC-AUCs ranged from 0.80 to 0.87 in PBMC for mild-to-severe and moderate-to-severe periodontitis, whereas the corresponding plasma AUCs ranged from 0.73 to 0.83. Conclusions: IL36RN is distinctively detectable in the PBMC and plasma of periodontitis patients and acts as a down-regulated mutated gene that might play an effective role in causing periodontitis. IL36RN may be involved, together with other inflammatory cytokines, in the pathogenesis of periodontitis.
INTRODUCTION: Periodontitis, common but largely preventable, is a chronic, multifactorial inflammatory disease that damages the supporting soft tissue and bone of the teeth.1, 2 Periodontitis may persist for several months or years. The interaction between periodontal pathogens and host inflammatory and immune responses is involved in the pathogenesis of periodontitis.2 Certain diseases, such as Crohn's disease,3 asthma,4 rheumatoid arthritis,5 and diabetes mellitus,6 have been reported to increase the risk of periodontitis. Complex immune responses in the body might play a critical role in the progression of tissue damage in periodontitis.7, 8 Interleukin‐36 (IL‐36), a member of the interleukin‐1 (IL‐1) superfamily, comprises three agonists (IL‐36α, IL‐36β, and IL‐36γ) and two antagonists (the interleukin‐36 receptor antagonist (IL‐36Ra) and IL‐38).9 IL‐36Ra is an anti‐inflammatory mediator responsible for the tight regulation of IL‐36 signaling.9 The IL36RN gene encodes IL‐36Ra. Various inflammatory diseases, such as inflammatory skin disorders, Crohn's disease, and rheumatoid arthritis, have been increasingly connected with IL‐36‐related cytokines.10, 11, 12 In periodontitis, the pro‐inflammatory cytokines IL‐1, IL‐6, IL‐17A, and tumor necrosis factor‐α (TNF‐α) drive host immune responses to oral bacteria, specifically Porphyromonas gingivalis.13 Kübra et al. reported that active periodontal disease may down‐regulate inflammasome regulators, which may increase IL‐1β activity in periodontal disease, including periodontitis.14 Alexandra et al. found that IL‐36γ could be a key inflammatory player in periodontitis and its associated alveolar bone resorption, and could be a therapeutic target.2 Patrick R. et al.
checked the serum, saliva, gingival cervical fluid (GCF), and gingival biopsies of patients who suffer from inflammatory periodontal disease and found the presence of elevated levels of IL‐35.15 In this context, several interleukins might be acting as mutated genes in periodontitis disease and may be playing an important role in occurring and progression of periodontitis. IL‐36 has been evaluated in diverse inflammatory diseases.16 However, the role of IL‐36RN, a mutated gene expression of IL‐36 in periodontitis patients with peripheral blood mononuclear cells (PBMC) and plasma remains unknown. Our study aims to find IL36RN in PBMC and plasma of the periodontitis patients and its clinical significance. CONCLUSION: IL36RN can distinctively be detectable in periodontitis patients with PBMC and plasma, which can act as a down‐regulated mutated gene that might play an effective role in causing periodontitis. IL36RN may involve by other inflammatory cytokines in the pathogenesis of periodontitis.
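The diagnostic performance reported above is quantified with the area under the ROC curve. For a down-regulated marker such as IL36RN, lower expression should score higher for disease; the AUC then equals the probability that a randomly chosen patient out-scores a randomly chosen control (the Mann-Whitney formulation). A minimal sketch of that calculation, using hypothetical expression values rather than the study's data:

```python
def auc_from_scores(case_scores, control_scores):
    # AUC via the Mann-Whitney relationship: fraction of case/control
    # pairs in which the case score exceeds the control score (ties = 0.5).
    wins = 0.0
    for c in case_scores:
        for h in control_scores:
            if c > h:
                wins += 1
            elif c == h:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical IL36RN expression values (arbitrary units, not study data).
patients = [1.2, 0.8, 1.5, 0.9, 1.1]   # lower expression in periodontitis
controls = [2.0, 1.4, 2.3, 1.8, 1.6]

# Down-regulated marker: negate expression so lower values score as "disease".
auc = auc_from_scores([-x for x in patients], [-x for x in controls])
```

With these illustrative values the marker separates the groups well (AUC close to 1), mirroring the 0.80-0.87 range the study reports for PBMC.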
Keywords: cytokines; IL36RN; PBMC; periodontitis; plasma
MeSH terms: Adult; Biomarkers; Case-Control Studies; Cytokines; Down-Regulation; Female; Humans; Interleukins; Leukocytes, Mononuclear; Male; Middle Aged; Periodontitis; Retrospective Studies
The economic burden of prostate cancer in Eswatini.
35410213
Prostate cancer is the fifth leading cause of cancer mortality among men worldwide. However, there are limited data on the costs associated with prostate cancer in low- and middle-income countries, particularly in the sub-Saharan region. This study aims to estimate the cost of prostate cancer in Eswatini from a societal perspective.
BACKGROUND
This prevalence-based cost-of-illness study used diagnosis-specific data from national registries to estimate the costs associated with prostate cancer during 2018, employing both top-down and bottom-up costing approaches. Cost data included health care utilization, transport, sick-leave days and premature death.
METHODS
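In a bottom-up costing step of the kind described above, each resource quantity is multiplied by its unit cost and summed, and the sensitivity range is then taken at ±25% of the base estimate (the bounds used throughout the study's cost tables). A minimal sketch under those assumptions; the item names and quantities are illustrative, not the study's actual cost list:

```python
# Illustrative bottom-up costing: total = sum of (quantity x unit cost),
# with -25% / +25% lower and upper bounds for the sensitivity range.
items = {
    # item: (quantity, unit cost in USD) -- hypothetical values
    "psa_test": (90, 16),
    "biopsy":   (90, 147),
}

base = sum(n * unit for n, unit in items.values())
lower, upper = 0.75 * base, 1.25 * base
```

The same arithmetic, applied item by item over the full list of screening, diagnosis, treatment and other direct-cost entries, yields the base-case and bounded totals reported in the Results.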
The total annual cost of prostate cancer was $6.2 million (ranging between $4.7 million and $7.8 million, estimated with lower and upper bounds). The average costs per patient for radiotherapy, chemotherapy and other non-medical direct costs (transport and lodging) were the highest cost drivers, at $16,648, $7,498 and $5,959 respectively, whilst indirect costs, comprising productivity loss due to sick leave and premature mortality, were estimated at $58,320 and $113,760 respectively. The cost of managing prostate cancer increased with advancing disease and was highest for stages III and IV, at $1.1 million and $1.9 million respectively.
RESULTS
Prostate cancer is a public health concern in Eswatini, and it imposes a significant economic burden on society. These findings point to areas where policy makers can pursue cost containment for therapeutic procedures for prostate cancer, and to the need for strategies that increase efficiency in the health care system for greater value from health care services.
CONCLUSIONS
MeSH terms: Cost of Illness; Eswatini; Financial Stress; Health Care Costs; Humans; Male; Prostatic Neoplasms
9004055
Background
Among cancers, prostate cancer is the third most common after breast and lung cancer and the fifth leading cause of cancer mortality among men [1, 2]. The number of new cases increased from 1.1 million in 2012 to 1.3 million in 2018, accounting for about 7.1% of total cancer cases globally and 15% of cancers among men [2]. The causes of prostate cancer are attributable to genetic and environmental factors [2]. However, incidence and mortality rates vary substantially within and across regions. Notably, high-income countries (HICs) report higher incidence rates than low- and middle-income countries (LMICs) [2]. In contrast, mortality rates are higher in developing countries, particularly in sub-Saharan Africa [3]. The inequalities observed across regions in prostate cancer incidence and mortality are in part linked to the availability of effective screening and improved treatment modalities, which are directly linked to resource availability [3, 4]. In Eswatini, prostate cancer is ranked third among common cancers, accounting for 7.6% of the 1,074 total new cancer cases in 2018 [5]. Prostate cancer imposes a clinical and economic burden on patients and governments. Screening tests include prostate-specific antigen (PSA) and digital rectal examination (DRE) [6, 7]. A positive screening test result indicates further investigation [6]. Whilst PSA is the most frequent screening test, it has been argued that PSA could potentially cause harm by over-diagnosing low-risk cancers that would otherwise have remained without clinical consequences for a lifetime if left untreated [8]. In turn, this increases the costs of prostate cancer [9]. In Sweden, the annual costs associated with prostate cancer (screening, diagnosis and treatment) were estimated at €281 million [9]. In Ontario, the mean per-patient cost for prostate cancer-related medication was $1,211 [10]. In Iran, the total annual cost of prostate cancer was estimated at $2,900 million [11].
Other studies have estimated the economic burden of prostate cancer alongside other cancer types. A study of European countries ranked prostate cancer fourth among cancers in health care costs, after lung cancer (€18.8 billion), breast cancer (€15 billion) and colorectal cancer (€13.1 billion) [12]. Similarly, in Korea, prostate cancer was among the top four cancers contributing to the economic burden of disease [13]. There is limited evidence on the economic burden of prostate cancer from LMICs. Estimating the economic burden of disease provides insight into treatment modalities and associated costs. This study aims to investigate the societal cost of prostate cancer in Eswatini during 2018.
Results
Direct costs

In 2018, there were 90 prostate cancer cases, of which 89% were aged 60 years and above; the average age was 73 years. Table 2 shows unit average costs for treating prostate cancer cases, including other direct costs such as transportation and accommodation.

Table 2. Costs for screening, diagnosis and treatment of prostate cancer (average 2018 USD)

Screening
  Consultation fee: 41
  Prostate-specific antigen (PSA): 16
  Digital rectal examination (DRE): 32
Diagnosis
  Pathology: TRUS-guided biopsy: 147
  Radiology: computed tomography (CT scan) to rule out chest, abdomen and pelvis metastasis: 862
  Magnetic resonance imaging (MRI) scan: 1,034
  X-ray to rule out effusion: 28
  Bone scan in locally advanced prostate cancer: 607
  Ultrasound scan: 103
Treatment
  Watchful waiting (WW; PSA test every three months plus follow-up consultation fee): 58
  Surgery: radical prostatectomy: 5,726
  Surgery: orchiectomy: 5,726
  Radiotherapy (administered in 5 fractions over a 5-week period): 16,647
  Chemotherapy (brachytherapy or external beam radiation, administered in 5 cycles over a 5-week period): 7,498
  Androgen deprivation therapy (ADT; mostly Zoladex 0.8 mg intramuscular (IM) every 3 months): 1,268
  Symptom-relieving procedures: transurethral resection of the prostate or bladder (TURP/TURB): 5,872
Other direct costs
  Hospitalization costs (local; admission for symptom-management procedures including TURP and orchidectomy): 5,872
  Hospital admission in a step-down facility for late-stage treatment (in South Africa, for patients requiring close monitoring after radiotherapy or surgery outside Eswatini): 1,206
  Transport and lodging costs in South Africa (all patients who received treatment there): 5,959
  Follow-up care, year 1 after completing treatment (PSA testing, symptomology and clinical examination for metastatic cancer twice a year): 94
Total: 58,796

Cost distribution by disease stage is shown in Table 3. Following the Eswatini Standard Cancer Care Guidelines, we assumed that all confirmed cases underwent the similar screening, diagnosis and treatment pathway shown in Table 2 and the simplified referral pathway shown in Fig. 1. The average costs for the different pathways, including treatment interventions, differed by prostate cancer stage. Radical prostatectomy was more frequent in early stages of prostate cancer, whilst interventions such as chemotherapy were common in stages III and IV. Table 3 shows the prostate cancer cost distribution by stage.

Table 3. Costs for staging, management and treatment of prostate cancer stages I-IV ($)

Staging and treatment variables | Unit cost | I (T1) | II (T2) | III (T3) | IV (T4)
Consultation for assessment | 41 | 41 | 41 | 41 | 41
Screening and diagnosis
  Prostate-specific antigen (PSA) | 16 | 48 | 48 | 48 | 48
  Digital rectal examination (DRE) | - | - | - | - | -
  TRUS-guided biopsy | 147 | 147 | 147 | 147 | 147
  MRI scan | 1,034 | 1,034 | 1,034 | 1,034 | 1,034
  Chest x-ray | 28 | 27.6 | 27.6 | 27.6 | 27.6
  Bone scan | 607 | 607 | 607 | 607 | 607
  Ultrasound | 103 | 103 | 103 | 103 | 103
  CT scan, abdomen | 862 | 862 | 862 | 862 | 862
Treatment (prostate cancer prevalence in 2018 = 91 patients)
  Watchful waiting (WW; PSA every three months plus follow-up consultation fee) | 58 | 58 | 0 | 0 | 0
  Radical prostatectomy | 5,726 | 5,726 | 5,726 | 0 | 0
  Orchiectomy (surgical castration) | 5,726 | 5,726 | 5,726 | 5,726 | 5,726
  Radiotherapy | 16,648 | 16,648 | 16,648 | 16,648 | 16,648
  Chemotherapy | 7,498 | 0 | 0 | 7,498 | 7,498
  Symptom-relieving procedures (TURP/TURB) | 5,872 | 0 | 0 | 5,872 | 5,872
  Other supportive drugs: pain killers | 60 | 60 | 60 | 60 | 60
  Hormonal therapy (ADT), Zoladex 0.8 mg injectables | 1,268 | 1,268 | 1,268 | 1,268 | 1,268
Other costs
  Hospitalization costs (local) | 5,872 | 5,872 | 5,872 | 5,872 | 5,872
  Hospital admission in step-down facility for late-stage treatment | 1,206 | 1,206 | 1,206 | 1,206 | 1,206
  Transport and lodging cost (in RSA) | 5,959 | 5,959 | 5,959 | 5,959 | 5,959
  Follow-up care (year 1 after completing treatment) | 94 | 94 | 94 | 94 | 94
Total | 58,824 | 45,486 | 45,428 | 53,072 | 53,072

Radiation is not available in Eswatini and patients are referred to private hospitals in South Africa. On average, radiotherapy is administered over a 5-week period [25]. The estimated unit cost for radiotherapy was $16,648, whilst chemotherapy was $7,498. In addition to treatment costs, all patients referred for radiotherapy also incurred other direct costs, including transport, lodging and allowances for accompanying staff (nurse and driver), at a unit cost of $5,959 (Table 3).

Direct non-medical costs

Using an estimate from a previous study [22], the average transport cost per follow-up visit, including return, was $11 (inter-quartile range (IQR) $4-46). On average, post-treatment follow-up visits should occur every 6 months, resulting in four visits in a year including return. We assumed that all patients visited the hospital in the company of a relative. The total average transport cost including return was estimated at $5,029 (between $3,771 and $6,287, estimated with lower and upper bounds).

Indirect costs

Productivity loss due to sick leave as a result of patients seeking health care for prostate cancer was estimated at $58,320 (Table 4). Of the 90 patients diagnosed with prostate cancer, 13 men were within the labor-participating ages and were assumed to take an average sick leave of 54 days per person, excluding the short-term sick leave of 14 days usually covered by employers. A total of 31 men died of prostate cancer in 2018, of whom 4 were less than 60 years old. Costs due to premature mortality from prostate cancer were estimated at $113,760 (Table 5).

Table 4. Costs due to sick-leave days associated with prostate cancer

Number of sick-leave days | Number of patients alive in 2018 | Cost per workday ($) | Total productivity loss due to prostate cancer in 2018 ($), all patients
54 | 58 | 12 | 37,584

Table 5. Mortality costs for prostate cancer

Age group | Lost YPPLL (average per patient in age group) | Premature deaths before age 60 | Average annual income ($) | Mortality cost ($), 3% discount rate | Mortality cost ($), 5% discount rate
46-51 | 49 | 1 | 2,690 | 29,632 | 10,676
52-56 | 54 | 3 | 2,690 | 84,128 | 27,311
Totals | 103 | 4 | 2,690 | 113,760 | 37,987
YPPLL = 211; average annual gross income = $2,690

Total annual costs

The total annual cost of prostate cancer was estimated at $6.2 million (between $4.7 million and $7.8 million, estimated with lower and upper bounds) (Table 6). Forty-four percent (40) of the cases were diagnosed at stage IV, whilst only 11% (10) were diagnosed at stage I. Management of prostate cancer stages III and IV formed the greatest share of the costs, contributing about $1.2 million and $2.1 million respectively. The total costs of stages I and II were estimated at $0.5 million and $0.8 million. Transport and accommodation costs (incurred by those transferred to South Africa) were the highest of the other direct costs, contributing about $0.5 million. In 2018, there were 31 prostate cancer-related deaths, of which only 4 occurred within the labor-participating ages of Eswatini (18-60 years). The total years of potential productive life lost (YPPL) was 221 years. Indirect costs were estimated at $0.24 million, and a majority (96%, $0.2 million) was productivity loss from premature mortality (Table 6).

Table 6. Total annual cost estimation for prostate cancer (direct and indirect costs)

Parameter | Number | Average cost 2018 ($) | Base cost 2018 ($) | Lower, -25% ($) | Higher, +25% ($)
Direct costs (health care costs)
  Consultation fee | 90 | 41 | 3,690 | 2,768 | 4,613
Screening and diagnosis
  Prostate-specific antigen (PSA) | 90 | 16 | 1,448 | 1,086 | 1,810
  Digital rectal examination (DRE) | 8 | 0 | 0 | 0 | 0
  TRUS-guided biopsy | 90 | 147 | 13,213 | 9,910 | 16,516
  MRI scan | 90 | 1,034 | 93,060 | 69,795 | 116,325
  Chest x-ray | 90 | 28 | 2,484 | 1,863 | 3,105
  Bone scan | 90 | 607 | 54,630 | 40,973 | 68,288
  Ultrasound | 90 | 103 | 9,270 | 6,953 | 11,588
  CT scan, abdomen | 90 | 862 | 77,580 | 58,185 | 96,975
Treatment
  Stage I | 10 | 45,486 | 454,861 | 341,146 | 568,577
  Stage II | 17 | 45,428 | 772,278 | 579,209 | 965,348
  Stage III | 23 | 53,072 | 1,220,659 | 915,494 | 1,525,824
  Stage IV | 40 | 53,072 | 2,122,886 | 1,592,164 | 2,653,607
Other direct costs
  Hospitalization costs (local) | 90 | 210 | 18,900 | 14,175 | 23,625
  Hospital admission in step-down facility for late-stage treatment | 90 | 1,206 | 108,540 | 81,405 | 135,675
  Transport and lodging cost (in RSA) | 90 | 5,959 | 536,310 | 402,233 | 670,388
  Follow-up care (year 1 after completing treatment) | 60 | 2,662 | 159,720 | 119,790 | 199,650
Total direct | | 209,892 | 5,645,839 | 4,234,379 | 7,057,299
Direct non-medical costs
  Transport costs for follow-up visits, patient | 59 | 2,513 | 148,267 | 111,200 | 185,334
  Transport costs for follow-up visits, accompanying relative | 59 | 2,513 | 148,267 | 111,200 | 185,334
Total direct non-medical | | 5,026 | 296,534 | 222,401 | 370,668
Indirect costs
  Morbidity costs due to sick leave | 13 | 648 | 8,424 | 6,318 | 10,530
  Premature mortality costs | 4 | 57,675 | 230,700 | 173,025 | 288,375
Total indirect | | 58,323 | 239,124 | 179,343 | 298,905
Total | | 268,215 | 6,181,497 | 4,636,123 | 7,726,871
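The indirect-cost figures above follow standard human-capital arithmetic: morbidity cost is sick-leave days times patients times the daily wage, and premature-mortality cost is the discounted stream of income lost over the years of potential productive life lost. A minimal sketch of that arithmetic; the sick-leave inputs come from Table 4, while the years-lost and discount-rate values in the mortality call are hypothetical placeholders (the study's exact discounting procedure is not reproduced here):

```python
def sick_leave_cost(days, patients, wage_per_day):
    # Productivity loss from absence: days x patients x daily wage.
    return days * patients * wage_per_day

def discounted_mortality_cost(annual_income, years_lost, rate):
    # Present value of income lost to premature death, discounting
    # each future year's income at the given rate.
    return sum(annual_income / (1 + rate) ** t
               for t in range(1, years_lost + 1))

morbidity = sick_leave_cost(54, 58, 12)          # Table 4 inputs
mortality_pv = discounted_mortality_cost(2690,   # average annual income
                                         10,     # hypothetical years lost
                                         0.03)   # 3% discount rate
```

Discounting is why the mortality columns shrink as the rate rises from 3% to 5%: income lost further in the future is worth less in present-value terms.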
Conclusions
The findings of this study indicate that the costs attributable to prostate cancer are substantial and a public health concern. The findings are consistent with those of other countries, a majority of which are developed countries. The study demonstrated the interventions and their associated costs. Radiotherapy was the most expensive treatment intervention in Eswatini, whereas other studies cited surgery-related interventions as the major cost driver. This is a reasonable finding in the context of Eswatini, given that radiotherapy is not available locally and patients are referred to private hospitals outside the country. The findings point to areas where policy makers can pursue cost containment for therapeutic procedures for prostate cancer. The study also demonstrates that prostate cancer costs are likely to increase in the future, and there is a need to strengthen adherence to the Eswatini Standardized Cancer Care Guidelines to ensure that resources are invested in diagnosing the most at-risk groups.
[ "Background", "Study area", "Methods of costing", "Study population", "Management of prostate cancer in Eswatini", "Costs", "Direct medical costs", "Direct non-medical costs", "Indirect costs", "Morbidity costs", "Mortality costs", "Cancer mortality and years of potential productive life lost (YPLL)", "Estimation of annual costs", "Sensitivity analysis", "Directs costs", "Direct non-medical costs", "Indirect costs", "Total annual costs", "" ]
[ "Among cancers, prostate cancer is the third most common cancer after breast and lung cancer and the fifth leading cause of cancer mortality among men [1, 2]. The number of new cases increased from 1.1 million in 2012 to 1.3 million in 2018, accounting for about 7.1% of total cancer cases globally and 15% of cancers among men [2]. The causes of prostate cancer are attributable to genetic and environmental factors [2]. However, incidence and mortality rates vary substantially within and across regions. Notably, high-income countries (HICs) report higher incidence rates than low- and middle-income countries (LMICs) [2]. In contrast, mortality rates are higher in developing countries, particularly in sub-Saharan Africa [3]. The inequalities observed across regions in prostate cancer incidence and mortality are in part linked to the availability of effective screening and improved treatment modalities, which are directly linked to resource availability [3, 4]. In Eswatini, prostate cancer is ranked third among common cancers, accounting for 7.6% of the 1074 total new cases in 2018 [5].\nProstate cancer causes clinical and economic burden to patients and governments. Screening tests include prostate-specific antigen (PSA) and digital rectal examination (DRE) [6, 7]. A positive screening test result indicates the need for further investigation [6]. Whilst PSA is the most frequent screening test, it has been argued that PSA could potentially cause harm by overdiagnosing low-risk cancers that would otherwise have remained without clinical consequences for a lifetime if left untreated [8]. In turn, this increases the costs of prostate cancer [9]. In Sweden, the annual costs associated with prostate cancer (screening, diagnosis and treatment) were estimated at €281 million [9]. In Ontario, the mean per-patient cost for prostate cancer–related medication was $1211 [10]. In Iran, the total annual cost of prostate cancer was estimated at $2900 million [11]. 
Other studies estimated the economic burden of prostate cancer alongside other cancer types. A study focusing on European countries ranked prostate cancer fourth in cancer-related health care costs, after lung (€18.8 billion), breast (€15 billion) and colorectal cancer (€13.1 billion) [12]. Similarly, in Korea, prostate cancer was among the top four cancers contributing to the economic burden of disease [13].\nThere is limited evidence on the economic burden of prostate cancer from LMICs. Estimating the economic burden of disease provides insight into treatment modalities and associated costs. This study aims to investigate the societal cost of prostate cancer in Eswatini during 2018.", "Eswatini, formerly known as Swaziland, is a country in Southern Africa bordering South Africa and Mozambique, with an estimated population of 1.2 million [14]. The country's economy is tied to South Africa, and Eswatini's domestic currency (Lilangeni = SZL) is pegged at parity with the South African currency (Rand = ZAR), such that Eswatini cannot conduct its own monetary policy [15]. Eswatini's fiscal revenue largely depends on Southern African Customs Union (SACU) revenues and remittances flowing mainly from South Africa [16, 17]. SACU receipts account for about a third of Eswatini's total revenue and grants. However, over the past decades, SACU revenues have consistently declined, leaving Eswatini's economy constrained. The country records a high national poverty rate and income inequality that are not commensurate with its middle-income status. The national poverty rate is 58.9% at the international $1.90 poverty line, and the Gini index, a measure of inequality, is 49.3 [17]. Eswatini ranks near the bottom of the World Bank's Human Capital Index, with a score of 0.37 in 2020. Eswatini's health spending as a share of the total budget is estimated at 10.1%, and health spending per capita is estimated at $248 per annum [16]. 
Whilst Eswatini's health expenditure is comparatively higher than that of some other countries in the Southern African region, the country's health outcomes do not reflect its spending levels on health or its middle-income status. Health care service delivery is made up of public and private health care. Compared to the public system, the private health care system is better equipped in both infrastructure and human resources, however at higher health care costs. As such, private health care is accessed by less than 10% of the population, mainly those who own health insurance [18].\nDiagnostic and treatment capacity for conditions including cancer remains limited in the country, mostly in the public health system. Through a government-funded scheme, namely Phalala, Eswatini citizens are supported to access specialized health care services from neighboring countries, mainly South Africa.", "This is a Cost of Illness (CoI) study investigating the costs of prostate cancer from the societal perspective [19]. CoI studies estimate disease-specific costs [20]. A prevalence-based approach was used, employing both top-down and bottom-up costing [19, 21]. The cost estimation involved identification, quantification and valuation of resources used. The total costs for prostate cancer were calculated by multiplying the quantities of identified resources by their respective unit costs. All costs were presented in US$ adjusted for 2018 ($1 = SZL14.5).\nStudy population Data on prostate cancer prevalence and mortality in 2018 was obtained from the National Cancer Registry [14]. The National Cancer Control Unit is led by the Ministry of Health. 
To estimate direct non-medical costs and annual gross earnings, estimates were obtained from a previous study that collected data, using a direct non-medical costs patient questionnaire, from women diagnosed with breast cancer and receiving follow-up care at the Mbabane Government chemotherapy unit (outpatient) in 2018 [22].", "Data on prostate cancer prevalence and mortality in 2018 was obtained from the National Cancer Registry [14]. The National Cancer Control Unit is led by the Ministry of Health. To estimate direct non-medical costs and annual gross earnings, estimates were obtained from a previous study on women diagnosed with breast cancer and receiving follow-up care at the Mbabane Government chemotherapy unit (outpatient) in 2018 [22].", "In Eswatini, routine prostate cancer screening is only recommended for men above age 50, every two years [23]. The referral pathway, shown in Fig. 1, simplifies the treatment pathway, which begins with a man presenting with symptoms or eligible for screening at an outpatient clinic. The patient is referred to a urologist for screening tests including PSA and digital rectal examination (DRE) [6, 23]. These tests are not confirmatory; however, they indicate changes in the prostate. Abnormal findings on either of the tests warrant further evaluation of the patient and subsequent diagnostic testing. 
These include biopsy (transrectal/perineal ultrasound guided biopsy (TRUS)). Patients with no cancer but presenting with symptoms receive management of lower urinary tract symptoms (LUTS). If cancer is confirmed, further evaluation (metastasis screening) is conducted for staging purposes in order to inform the cancer management plan. The evaluation includes radiology tests (bone scan, CT scan and MRI pelvis). Staging is based on the tumor size (T), extent of lymph node involvement (N) and evidence of distant metastasis (M) [23, 24]. Depending on the risk score and prostate cancer stage, treatment includes watchful waiting (the cancer is monitored but not treated), surgery, radiation, chemotherapy and hormonal therapy (Androgen Deprivation Therapy) [23].\n\nFig. 1 Simplified diagnosis and treatment pathway of patients diagnosed with prostate cancer. PSA Prostate specific antigen, DRE Digital rectal examination, TURP transurethral resection of prostate, TURB transurethral resection of bladder, TRUS Transrectal ultrasound, LUTS Lower Urinary Tract Symptoms\nMost treatment modalities can be administered at various stages, however with different intent [6, 23]. Radical prostatectomy, radiation and hormonal therapy can be applied for localised high-risk prostate cancer (stage I and stage II), whilst for metastatic prostate cancer hormonal therapy is first line, in addition to radiotherapy and chemotherapy for palliation purposes. Radiation is not available in Eswatini, and patients are referred to private hospitals in South Africa. 
Other surgical interventions for relieving symptoms, such as transurethral resection of the prostate (TURP) or bladder (TURB), can be conducted locally.\nWe used expert opinion from the Mbabane Government Hospital Chemotherapy Unit and Mbabane Clinic (a private hospital), together with information from the Phalala Fund, to establish the patient referral pathway. The Phalala Fund is a government-funded scheme established to fund the provision of specialized health care services to people of Eswatini who cannot afford specialized care that is not available in the country [25]. The Eswatini Standardized Cancer Care Guidelines were used to establish screening, diagnosis and treatment variables. Costs were estimated based on market prices. Radiotherapy is currently not available in Eswatini; as such, patients who require radiation are managed in South Africa through the Phalala Fund. Chemotherapy is available locally through a government chemotherapy unit and a local private clinic. However, it was established that most patients were still receiving chemotherapy in South Africa.", "From a societal perspective, costs associated with prostate cancer were estimated to assess the economic burden of prostate cancer in Eswatini. Direct medical costs were divided into recurrent and capital costs [19]. Recurrent costs included personnel, travel, consumables including medical supplies, administration, utilities and overheads. Capital costs consisted mainly of equipment, buildings, vehicles and other items with a useful life of more than one year. Costs for prostate cancer were determined based on the data sources presented in Table 1. 
All costs were presented in US dollars using the 2018 average exchange rate (1 USD ($) = 14.5 SZL).

Table 1 Data variables and sources for costs regarding screening, management and treatment of prostate cancer

| Data | Data source | Price source |
|---|---|---|
| Estimated number of cases in 2018 = 90 | Swaziland National Cancer Unit, Eswatini prostate cancer cases in 2018 | N/A |
| Screening | | |
| Consultation fee | Mbabane Clinic | Private hospital |
| Prostate Specific Antigen (PSA) | Eswatini Health Laboratory Services | Private hospital |
| Digital rectal examination (DRE) | Interview with expert | Private hospital |
| Diagnosis | | |
| TRUS guided biopsy | Mbabane Clinic | Private hospital |
| Computed Tomography (CT scan) | Mbabane Clinic | Private hospital |
| MRI scan | Phalala Fund | Private hospital |
| X-ray | Mbabane Clinic | Private hospital |
| Bone scan | Mbabane Clinic | Private hospital |
| Intervention/Treatment | | |
| Watchful waiting (WW) | Interview with expert | Private hospital |
| Surgery | Mbabane Clinic | Private hospital |
| Radiotherapy | Phalala Fund, based on SA hospital fees | Private hospital |
| Chemotherapy | Phalala Fund, based on SA hospital fees | Private hospital |
| Androgen deprivation | Phalala Fund, based on SA hospital fees | Market price |
| Hospitalization (local) | Phalala Fund, based on SA hospital fees | Private hospital |
| Other direct costs | | |
| Transport and lodging costs in South Africa | Phalala Fund, based on SA hospital fees | Market price |
| Follow-up care (year 1 following completion of treatment): PSA testing, symptomology and clinical examination for metastatic cancer twice a year | Based on reported prevalence | Private hospital |

Direct medical costs Direct costs in this study include resource utilization for diagnosis, treatment (surgery, chemotherapy, radiotherapy and androgen deprivation therapy) and follow-up care. 
To estimate the direct costs, we estimated the average cost of each intervention from screening, staging and treatment and multiplied it by the number of corresponding patients who received the intervention. The number of men diagnosed with prostate cancer was obtained from the national cancer registry [26]. All diagnosed cases were assumed to have undergone a screening test using PSA. Screening and diagnosis costs were obtained from private hospital and market pricing. Treatment costs, mainly radiation, chemotherapy and androgen deprivation therapy, were obtained from the Phalala Fund based on South African private hospital fees. In Eswatini, a majority of the management costs are borne by the Eswatini Government through the Phalala Fund.\nAs per the standardized cancer care guidelines, we assumed that all the men with confirmed prostate cancer in 2018 underwent screening and diagnosis tests and treatment, and incurred other direct costs including transport and accommodation. Follow-up care costs were estimated for one year for those reported alive in 2018.\nDirect non-medical costs Transport costs, including return, were estimated based on required patient follow-up visits per the Eswatini Standardized Cancer Care Guidelines, which state that follow-up visits should be every six months for the first two years and annually for up to five years following surgery [23]. Transport cost was estimated based on data from a previous study on breast cancer women receiving follow-up care at the Mbabane Government Cancer Unit [22]. We assumed that all men who completed treatment in 2018 had follow-up visits as per the guidelines. This study estimated one-year follow-up costs.\nIndirect costs We estimated the monetary value of prostate cancer related productivity loss due to morbidity (patient sick leave days incurred as a result of seeking health care) and premature mortality.\nThe human capital method was used to estimate indirect costs related to productivity loss due to morbidity (sick leave as a result of seeking prostate cancer care) and premature mortality [20]. We used average annual gross earnings computed from our previous study on breast cancer women receiving follow-up care in the chemotherapy unit, Mbabane Government Hospital, Eswatini [22].\nMorbidity costs We estimated the number of sick leave days for men diagnosed with prostate cancer who are in the labor participation ages (18-60 years). Using findings from a previously published study [27], we assumed sick leave of an average of 54 days per person. The sick leave days included days for staging, treatment and follow-up care. Using findings from a previous study on breast cancer conducted in Eswatini [22], we assumed 20 working days per month and a full-time working day of 8 h, with an estimated cost per workday of $12, translating to $1.5 per work hour [22].\nMortality costs To estimate the cost of lost productivity due to premature death related to prostate cancer, years of potential productive life lost (YPPLL) were calculated by subtracting age at death from the local retirement age of 60 years [28]. Prostate cancer age-group-specific deaths were estimated assuming the labor participation ages of Eswatini (18-60 years). We used the full employment rate and annual average earnings obtained from a previous study. Average YPPLL was multiplied by average annual earnings. According to health economic recommendations, future costs were discounted at 3% and 5% [19, 29]. The number of prostate cancer related deaths was obtained from the Eswatini Cancer Registry. In 2018, there were 31 prostate cancer related deaths, of which 4 occurred within the labor participating ages of Eswatini (18-60 years) [28].", "Direct costs in this study include resource utilization for diagnosis, treatment (surgery, chemotherapy, radiotherapy and androgen deprivation therapy) and follow-up care. To estimate the direct costs, we estimated the average cost of each intervention from screening, staging and treatment and multiplied it by the number of corresponding patients who received the intervention. The number of men diagnosed with prostate cancer was obtained from the national cancer registry [26]. All diagnosed cases were assumed to have undergone a screening test using PSA. Screening and diagnosis costs were obtained from private hospital and market pricing. Treatment costs, mainly radiation, chemotherapy and androgen deprivation therapy, were obtained from the Phalala Fund based on South African private hospital fees. In Eswatini, a majority of the management costs are borne by the Eswatini Government through the Phalala Fund.\nAs per the standardized cancer care guidelines, we assumed that all the men with confirmed prostate cancer in 2018 underwent screening and diagnosis tests and treatment, and incurred other direct costs including transport and accommodation. Follow-up care costs were estimated for one year for those reported alive in 2018.", "Transport costs, including return, were estimated based on required patient follow-up visits per the Eswatini Standardized Cancer Care Guidelines, which state that follow-up visits should be every six months for the first two years and annually for up to five years following surgery [23]. Transport cost was estimated based on data from a previous study on breast cancer women receiving follow-up care at the Mbabane Government Cancer Unit [22]. We assumed that all men who completed treatment in 2018 had follow-up visits as per the guidelines. 
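The guideline follow-up schedule just described (visits every six months for the first two years, then annually for up to five years) implies the visit counts below. A minimal sketch; the function name is illustrative, and only the schedule itself is taken from the guidelines as quoted above:

```python
def followup_visits(year):
    """Follow-up visits due in a given year after treatment: every six months
    for the first two years, then annually for up to five years."""
    if year in (1, 2):
        return 2          # twice-yearly visits in years 1 and 2
    if year in (3, 4, 5):
        return 1          # annual visits in years 3 to 5
    return 0

visits_year1 = followup_visits(1)                          # the one year costed here
visits_5yr = sum(followup_visits(y) for y in range(1, 6))  # full guideline horizon
```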
This study estimated one-year follow-up costs.", "We estimated the monetary value of prostate cancer related productivity loss due to morbidity (patient sick leave days incurred as a result of seeking health care) and premature mortality.\nThe human capital method was used to estimate indirect costs related to productivity loss due to morbidity (sick leave as a result of seeking prostate cancer care) and premature mortality [20]. We used average annual gross earnings computed from our previous study on breast cancer women receiving follow-up care in the chemotherapy unit, Mbabane Government Hospital, Eswatini [22].\nMorbidity costs We estimated the number of sick leave days for men diagnosed with prostate cancer who are in the labor participation ages (18-60 years). Using findings from a previously published study [27], we assumed sick leave of an average of 54 days per person. The sick leave days included days for staging, treatment and follow-up care. Using findings from a previous study on breast cancer conducted in Eswatini [22], we assumed 20 working days per month and a full-time working day of 8 h, with an estimated cost per workday of $12, translating to $1.5 per work hour [22].\nMortality costs To estimate the cost of lost productivity due to premature death related to prostate cancer, years of potential productive life lost (YPPLL) were calculated by subtracting age at death from the local retirement age of 60 years [28]. Prostate cancer age-group-specific deaths were estimated assuming the labor participation ages of Eswatini (18-60 years). We used the full employment rate and annual average earnings obtained from a previous study. Average YPPLL was multiplied by average annual earnings. According to health economic recommendations, future costs were discounted at 3% and 5% [19, 29]. The number of prostate cancer related deaths was obtained from the Eswatini Cancer Registry. In 2018, there were 31 prostate cancer related deaths, of which 4 occurred within the labor participating ages of Eswatini (18-60 years) [28].", "We estimated the number of sick leave days for men diagnosed with prostate cancer who are in the labor participation ages (18-60 years). Using findings from a previously published study [27], we assumed sick leave of an average of 54 days per person. The sick leave days included days for staging, treatment and follow-up care. Using findings from a previous study on breast cancer conducted in Eswatini [22], we assumed 20 working days per month and a full-time working day of 8 h, with an estimated cost per workday of $12, translating to $1.5 per work hour [22].", "To estimate the cost of lost productivity due to premature death related to prostate cancer, years of potential productive life lost (YPPLL) were calculated by subtracting age at death from the local retirement age of 60 years [28]. Prostate cancer age-group-specific deaths were estimated assuming the labor participation ages of Eswatini (18-60 years). We used the full employment rate and annual average earnings obtained from a previous study. Average YPPLL was multiplied by average annual earnings. According to health economic recommendations, future costs were discounted at 3% and 5% [19, 29]. The number of prostate cancer related deaths was obtained from the Eswatini Cancer Registry. In 2018, there were 31 prostate cancer related deaths, of which 4 occurred within the labor participating ages of Eswatini (18-60 years) [28].", "The number of prostate cancer related deaths was obtained from the Eswatini Cancer Registry, from which the years of productive life lost were calculated. 
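The human capital calculations described above can be sketched as follows. The sick-leave parameters (54 days per person, $12 per workday) and the 3% discount rate are from the methods; the function names and the example ages and earnings are illustrative assumptions:

```python
def yppll(ages_at_death, retirement_age=60):
    """Years of potential productive life lost: retirement age minus age at
    death, summed over deaths within the labor-participating ages (18-60)."""
    return sum(retirement_age - age for age in ages_at_death
               if 18 <= age < retirement_age)

def morbidity_cost(n_patients, sick_days=54, cost_per_workday=12.0):
    """Sick-leave productivity loss: 54 days per patient at $12 per workday."""
    return n_patients * sick_days * cost_per_workday

def premature_mortality_cost(ages_at_death, annual_earnings,
                             discount=0.03, retirement_age=60):
    """Present value of earnings lost between age at death and retirement,
    discounted at 3% (5% was used as an alternative rate)."""
    total = 0.0
    for age in ages_at_death:
        years_lost = max(retirement_age - age, 0)
        total += sum(annual_earnings / (1 + discount) ** t
                     for t in range(1, years_lost + 1))
    return total

# With the 13 patients on sick leave, morbidity cost matches Table 6:
morbidity_cost(13)  # -> 8424.0
```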
In 2018, there were 31 prostate cancer related deaths, out of which 4 occurred within the labor participating ages of Eswatini (18-60 years) [28].", "We computed the aggregate total costs of screening, diagnosis and treatment of prostate cancer in 2018 as below:\n\n$$Cost\;of\;disease=\underbrace{\left(Direct\;medical\;costs+Direct\;non\text{-}medical\;costs\right)}_{Direct\;costs}+\underbrace{\left(Morbidity\;costs+Mortality\;costs\right)}_{Indirect\;costs}$$\nDirect costs = direct medical costs plus direct non-medical costs.\nIndirect costs = morbidity costs plus mortality costs (patient time lost as a result of the condition and costs associated with premature mortality).\nAll costs were reported in 2018 US dollars ($1 = SZL14.5).", "Sensitivity analysis was performed using ±25% to account for the cost of follow-up of prevalent cancer cases and for cases unrecorded by the facilities.", "In 2018, there were 90 prostate cancer cases, of which 89% were aged 60 years and above. The average age was 73 years. 
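The cost-of-disease aggregation described in the methods can be sketched and checked against the base-case component totals reported in Table 6 (function name illustrative):

```python
def cost_of_disease(direct_medical, direct_non_medical, morbidity, mortality):
    """Total annual cost: direct costs (medical + non-medical) plus indirect
    costs (morbidity + mortality), all in 2018 USD."""
    direct_costs = direct_medical + direct_non_medical
    indirect_costs = morbidity + mortality
    return direct_costs + indirect_costs

# Base-case components from Table 6 (2018 USD)
total = cost_of_disease(5_645_839, 296_534, 8_424, 230_700)  # -> 6181497
```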
Table 2 shows unit average costs for treating prostate cancer cases, including other direct costs such as transportation and accommodation.

Table 2 Costs for screening, diagnosis and treatment of prostate cancer

Parameter | Variables included in the cost | Average (2018) USD
Screening
  Consultation fee | | 41
  Prostate Specific Antigen (PSA) | | 16
  Digital rectal examination (DRE) | | 32
Diagnosis
  Pathology | TRUS guided biopsy | 147
  Radiology | Computed Tomography (CT scan), to rule out chest, abdomen and pelvis metastasis | 862
  | Magnetic resonance imaging scan (MRI) | 1,034
  | X-ray to rule out effusion | 28
  | Bone scan in locally advanced prostate cancer | 607
  | Ultrasound scan | 103
Treatment
  Watchful waiting (WW) | PSA test every three months plus follow-up consultation fee | 58
  Surgery | Radical prostatectomy | 5,726
  | Orchiectomy | 5,726
  Radiotherapy (brachytherapy or external beam radiation) | Administered in 5 fractions over a 5-week period | 16,647
  Chemotherapy | Administered in 5 cycles over a 5-week period | 7,498
  Androgen Deprivation Therapy (ADT) | Mostly Zoladex 0.8 mg intramuscular (IM) every 3 months | 1,268
  Symptom-relieving procedures | Transurethral resection of the prostate or bladder (TURP/TURB) | 5,872
Other direct costs
  Hospitalization costs (local) | Admitted for symptom management procedures including transurethral resection of the prostate (TURP) and orchidectomy | 5,872
  Hospital admission in step-down facility for late-stage treatment (in South Africa) | Patients who require close monitoring following radiotherapy or surgery in a hospital outside Eswatini | 1,206
  Transport and lodging cost in South Africa | All patients who received treatment in South Africa | 5,959
  Follow-up care (Year 1 following completion of treatment) | Follow-up consultation and PSA tests; follow-up is done using PSA testing, symptomology and clinical examination for metastatic cancer twice in a year | 94
Total | | 58,796

Cost distribution by disease stage is shown in 
Table 3. Following the Eswatini Standard Cancer Care Guidelines, we assumed that all confirmed cases underwent the screening, diagnosis and treatment pathway shown in Table 2 and the simplified referral pathway shown in Fig. 1. The average costs for the different pathways, including treatment interventions, differed with the prostate cancer stage. Radical prostatectomy was more frequent with early stages of prostate cancer, whilst interventions like chemotherapy were common with prostate cancer stages III and IV. Table 3 shows the prostate cancer cost distribution by stage.

Table 3 Costs for staging, management, and treatment of prostate cancer stages I-IV (prostate cancer prevalence in 2018 = 91 patients)

Staging and treatment variables | Unit cost ($) | I (T1) | II (T2) | III (T3) | IV (T4)
Consultation for assessment | 41 | 41 | 41 | 41 | 41
Screening and diagnosis
  Prostate Specific Antigen (PSA) and digital rectal examination (DRE) | 16 | 48 | 48 | 48 | 48
  TRUS guided biopsy | 147 | 147 | 147 | 147 | 147
  MRI scan | 1,034 | 1,034 | 1,034 | 1,034 | 1,034
  Chest x-ray | 28 | 27.6 | 27.6 | 27.6 | 27.6
  Bone scan | 607 | 607 | 607 | 607 | 607
  Ultrasound | 103 | 103 | 103 | 103 | 103
  CT scan abdomen | 862 | 862 | 862 | 862 | 862
Treatment
  Watchful waiting (WW): PSA test every three months plus follow-up consultation fee | 58 | 58 | 0 | 0 | 0
  Radical prostatectomy | 5,726 | 5,726 | 5,726 | 0 | 0
  Orchiectomy (surgical castration) | 5,726 | 5,726 | 5,726 | 5,726 | 5,726
  Radiotherapy | 16,648 | 16,648 | 16,648 | 16,648 | 16,648
  Chemotherapy | 7,498 | 0 | 0 | 7,498 | 7,498
  Symptom-relieving procedures (TURP/TURB) | 5,872 | 0 | 0 | 5,872 | 5,872
  Other supportive drugs: pain killers | 60 | 60 | 60 | 60 | 60
  Hormonal therapy (ADT), Zoladex 0.8 mg injectables | 1,268 | 1,268 | 1,268 | 1,268 | 1,268
Other costs
  Hospitalization costs (local) | 5,872 | 5,872 | 5,872 | 5,872 | 5,872
  Hospital admission in step-down facility for late-stage treatment | 1,206 | 1,206 | 1,206 | 1,206 | 1,206
  Transport and lodging cost (in RSA) | 5,959 | 5,959 | 5,959 | 5,959 | 5,959
  Follow-up care (Year 1 following completion of treatment) | 94 | 94 | 94 | 94 | 94
Total | 58,824 | 45,486 | 45,428 | 53,072 | 53,072

Radiation is not available in Eswatini and patients are referred to private hospitals in South Africa. On average, radiotherapy treatment is administered over a period of 5 weeks [25]. The estimated unit cost for radiotherapy was $16,648, whilst chemotherapy was $7,498. In addition to treatment costs, all patients referred for radiotherapy also incurred other direct costs including transport, lodging and an allowance for accompanying staff (nurse and driver) at a unit cost of $5,959, Table 3.
Direct non-medical costs Using an estimate from a previous study [22], the average transport cost per follow-up visit, including return, was $11 (inter-quartile range (IQR) $4-46). On average, post-treatment follow-up visits should be every 6 months, resulting in four visits in a year including return. We assumed that all patients visited the hospital in the company of a relative. 
The total average transport costs including return was estimated at $5,029 (between $3,771 and $6,287, estimated with lower and upper bounds).", "Productive loss due to sick leave as a result of patients seeking health care for prostate cancer was estimated at $58,320, Table 4. Out of the 90 patients diagnosed with prostate cancer, there were 13 men within the labor participating ages, who were assumed to take an average sick leave of 54 days per person, excluding short-term sick leave of 14 days that is usually covered by employers. A total of 31 men died of prostate cancer in 2018, of which 4 were less than 60 years. 
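The sick-leave (morbidity) arithmetic behind Tables 4 and 6 is a simple human-capital product of days, daily wage and patient counts. A small sketch, illustrative only, using the $12 workday and 54-day assumptions stated in the methods:

```python
# Morbidity cost via the human capital method:
# productivity loss = sick-leave days per patient x cost per workday x patients.

COST_PER_WORKDAY = 12  # USD, from a prior Eswatini breast cancer study [22]
SICK_LEAVE_DAYS = 54   # average days per patient [27]

def morbidity_cost(n_patients, days=SICK_LEAVE_DAYS, wage=COST_PER_WORKDAY):
    """Total productivity loss due to sick leave, in USD."""
    return n_patients * days * wage

per_patient = morbidity_cost(1)    # 54 days x $12 = $648 per patient
working_age = morbidity_cost(13)   # 13 men of working age, as in Table 6
all_alive = morbidity_cost(58)     # 58 patients alive in 2018, as in Table 4
```

With these inputs the per-patient loss is $648, which scales to the totals shown in the tables for 13 working-age men and for all 58 patients alive in 2018.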
Costs due to prostate cancer premature mortality were estimated at $113,760, Table 5.

Table 4 Costs due to sick leave days associated with prostate cancer

 | Numbers of sick leave days | Number of patients alive in 2018 | Cost per workday ($) | Total productivity loss due to prostate cancer in 2018 ($), all patients
Total patients | 54 | 58 | 12 | 37,584

Table 5 Mortality costs for prostate cancer (average annual gross income = $2,690)

Age group | Lost YPPLL (average YPPLL for 1 patient in each age group) | Number of premature deaths before age 60 | Average annual income ($) | Mortality cost ($), 3% discount rate, multiplied by the number of patients in the age group | Mortality cost ($), 5% discount rate, multiplied by the number of patients in the age group
46-51 | 49 | 1 | 2,690 | 29,632 | 10,676
52-56 | 54 | 3 | 2,690 | 84,128 | 27,311
Totals | 103 | 4 | 2,690 | 113,760 | 37,987
Total YPPLL: 211

", "The total annual costs for prostate cancer was estimated at $6.2 million (between $4.7 million and $7.8 million, estimated with lower and upper bounds), Table 6. Forty-four percent (40) of the cases were diagnosed with stage IV, whilst only 11% (10) were diagnosed with stage I. Management of prostate cancer stages III and IV formed the greatest share of the costs for prostate cancer, contributing about $1.2 and $2.1 million respectively. The total costs of stages I and II were estimated at $0.5 and $0.8 million. Transport and accommodation costs (costs incurred by those transferred to South Africa) were highest among other direct costs, contributing about $0.5 million. In 2018, there were 31 prostate cancer related deaths, of which only 4 occurred within the labor participating ages of Eswatini (18-60) years. The total years of productive life lost (YPLL) was 211 years. 
Indirect costs were estimated at $0.24 million, and a majority (96%, $0.2 million) were productive losses from premature mortality, Table 6.

Table 6 Total annual cost estimation for prostate cancer (direct and indirect costs), prevalence 2018

Parameter | Number | Average cost (2018) ($) | Base case cost (2018) ($) | Lower (-25%) ($) | Higher (+25%) ($)
Direct costs (health care costs)
  Consultation fee | 90 | 41 | 3,690 | 2,768 | 4,613
Screening and diagnosis
  Prostate Specific Antigen (PSA) | 90 | 16 | 1,448 | 1,086 | 1,810
  Digital rectal examination (DRE) | 8 | 0 | 0 | 0 | 0
  TRUS guided biopsy | 90 | 147 | 13,213 | 9,910 | 16,516
  MRI scan | 90 | 1,034 | 93,060 | 69,795 | 116,325
  Chest x-ray | 90 | 28 | 2,484 | 1,863 | 3,105
  Bone scan | 90 | 607 | 54,630 | 40,973 | 68,288
  Ultrasound | 90 | 103 | 9,270 | 6,953 | 11,588
  CT scan abdomen | 90 | 862 | 77,580 | 58,185 | 96,975
Treatment
  Stage I | 10 | 45,486 | 454,861 | 341,146 | 568,577
  Stage II | 17 | 45,428 | 772,278 | 579,209 | 965,348
  Stage III | 23 | 53,072 | 1,220,659 | 915,494 | 1,525,824
  Stage IV | 40 | 53,072 | 2,122,886 | 1,592,164 | 2,653,607
Other direct costs
  Hospitalization costs (local) | 90 | 210 | 18,900 | 14,175 | 23,625
  Hospital admission in step-down facility for late-stage treatment | 90 | 1,206 | 108,540 | 81,405 | 135,675
  Transport and lodging cost (in RSA) | 90 | 5,959 | 536,310 | 402,233 | 670,388
  Follow-up care (Year 1 following completion of treatment) | 60 | 2,662 | 159,720 | 119,790 | 199,650
Total direct | | 209,892 | 5,645,839 | 4,234,379 | 7,057,299
Direct non-medical costs
  Transport costs for follow-up visits, patient | 59 | 2,513 | 148,267 | 111,200 | 185,334
  Transport costs for follow-up visits, accompanying relative | 59 | 2,513 | 148,267 | 111,200 | 185,334
Total direct non-medical costs | | 5,026 | 296,534 | 222,401 | 370,668
Indirect costs
  Morbidity costs due to sick leave | 13 | 648 | 8,424 | 6,318 | 10,530
  Premature mortality costs | 4 | 57,675 | 230,700 | 173,025 | 288,375
Total indirect costs | | 58,323 | 239,124 | 179,343 | 298,905
Total | | 268,215 | 6,181,497 | 4,636,123 | 7,726,871
", "Additional file 1. Direct non-medical costs patient questionnaires.pdf" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and methods", "Study area", "Methods of costing", "Study population", "Management of prostate cancer in Eswatini", "Costs", "Direct medical costs", "Direct non-medical costs", "Indirect costs", "Morbidity costs", "Mortality costs", "Cancer mortality and years of potential productive life lost (YPLL)", "Estimation of annual costs", "Sensitivity analysis", "Results", "Directs costs", "Direct non-medical costs", "Indirect costs", "Total annual costs", "Discussion", "Conclusions", "Supplementary Information", "" ]
[ "Among cancers, prostate cancer is the third commonest cancer after breast and lung cancer and the fifth cause of cancer mortality among men [1, 2]. The number of new cases increased from 1.1 million in 2012 to 1.3 million in 2018, accounting for about 7.1% of total cancer cases globally and 15% of cancers among men [2]. The causes of prostate cancer are attributable to genetic and environmental factors [2]. However, incidence and mortality rates vary substantially within and across regions. Notably, high-income countries (HICs) report higher incidence rates compared to low- and middle-income countries (LMICs) [2]. In contrast, mortality rates are higher in developing countries, particularly in the sub-Saharan Africa region [3]. The inequalities observed across regions with respect to prostate cancer incidence and mortality are in part linked to the availability of effective screening and improved treatment modalities, which are directly linked to resource availability [3, 4]. In Eswatini, compared to other common cancers, prostate cancer is ranked third, accounting for 7.6% of the 1,074 total new cases in 2018 [5].
Prostate cancer causes clinical and economic burden to patients and governments. Screening tests include prostate-specific antigen (PSA) and digital rectal examination (DRE) [6, 7]. A positive screening test result indicates the need for further investigation [6]. Whilst PSA is the most frequent screening test, it has been argued that PSA could potentially cause harm by over-diagnosing low-risk cancers that would otherwise have remained without clinical consequences for a lifetime if left untreated [8]. In turn, this increases costs for prostate cancer [9]. In Sweden, the annual costs associated with prostate cancer (screening, diagnosis and treatment) were estimated at €281 million [9]. In Ontario, the mean per-patient cost for prostate cancer–related medication was $1211 [10]. In Iran, the total annual cost of prostate cancer was estimated at $2900 million [11]. 
Other studies estimated the economic burden of prostate cancer along with other cancer types. A study focusing on European countries ranked prostate cancer fourth among cancers in health care costs, after lung (€18.8 billion), breast (€15 billion) and colorectal cancer (€13.1 billion) [12]. Similarly, in Korea, prostate cancer was among the top four cancers contributing to the economic burden of disease [13].
There is limited evidence on the economic burden of prostate cancer from LMICs. Estimation of the economic burden of disease provides insight on treatment modalities and associated costs. This study aims to investigate the societal cost of prostate cancer in Eswatini during 2018.", "Study area Eswatini, formerly known as Swaziland, is a country in Southern Africa bordering South Africa and Mozambique, with an estimated population of 1.2 million [14]. The country’s economy is tied to South Africa, and Eswatini’s domestic currency (Lilangeni = SZL) is pegged at parity with the South African currency (Rand = ZAR), such that Eswatini cannot conduct its own monetary policy [15]. Eswatini’s fiscal revenue largely depends on Southern African Customs Union (SACU) revenues and remittances flowing mainly from South Africa [16, 17]. SACU receipts account for about a third of Eswatini’s total revenue and grants. However, over the past decades, SACU revenues have consistently declined, leaving Eswatini’s economy constrained. The country records a high national poverty rate and income inequality, which is not commensurate with its middle-income status. The national poverty rate is 58.9% at the international $1.90 poverty line, and the Gini index (a measure of inequality) is 49.3 [17]. Eswatini ranks near the bottom of the World Bank’s Human Capital Index, with a score of 0.37 in 2020. Eswatini’s health spending as a share of the total budget is estimated at 10.1%, and health spending per capita is estimated at $248 per annum [16]. 
Whilst Eswatini’s health expenditure is comparatively higher than that of some other countries in the Southern African region, the country’s health outcomes do not reflect its spending levels on health or its middle-income status. Health care service delivery is made up of public and private health care. Compared to the public system, the private health care system is better equipped in both infrastructure and human resources, however at high health care costs. As such, private health care is accessed by less than 10% of the population, mainly those who own health insurance [18].
Diagnostic and treatment capacity for conditions including cancer remains limited in the country, mostly in the public health system. Through a government funded scheme named Phalala, Eswatini citizens are supported to access specialized health care services from neighboring countries, mainly South Africa.", "This is a Cost of Illness (CoI) study investigating the costs of prostate cancer from the societal perspective [19]. CoI studies estimate disease-specific costs [20]. The prevalence-based approach was used, employing both top-down and bottom-up costing approaches [19, 21]. The cost estimation involved identification, quantification and valuation of resources used. The total cost for prostate cancer was calculated by multiplying identified resource quantities by the respective unit costs. All costs were presented in US$ adjusted for 2018 ($1 = SZL14.5).
Study population Data on prostate cancer prevalence and mortality in 2018 was obtained from the National cancer registry [14]. 
The National Cancer Control Unit is led by the Ministry of Health. To estimate direct non-medical costs and annual gross earnings, estimates were obtained from a previous study that collected data, using a direct non-medical costs patient questionnaire, from women diagnosed with breast cancer and receiving follow-up care at the Mbabane Government chemotherapy unit (outpatient) in 2018 [22].", "In Eswatini, routine prostate cancer screening is recommended only for men above age 50, every two years [23]. The referral pathway shown in Fig. 1 simplifies the treatment pathway, which begins with a man presenting with symptoms, or being eligible for screening, at an outpatient department. The patient is referred to a urologist for screening tests including PSA and digital rectal examination (DRE) [6, 23]. These tests are not confirmatory; however, they indicate changes in the prostate. 
Abnormal findings on either of the tests warrant further evaluation of the patient and a subsequent diagnostic test, namely biopsy (transrectal/perineal ultrasound guided biopsy (TRUS)). A patient with no cancer but presenting with symptoms would receive management of lower urinary tract symptoms (LUTS). If cancer is confirmed, further evaluation is conducted for cancer staging purposes in order to inform the cancer management plan (metastasis screening). The evaluation includes radiology tests (bone scan, CT scan and MRI pelvis). Staging is based on the tumor size (T), extent of lymph node involvement (N) and evidence of distant metastasis (M) [23, 24]. Depending on the risk score and prostate cancer stage, treatment includes watchful waiting (cancer is monitored but not treated), surgery, radiation, chemotherapy and hormonal therapy (Androgen Deprivation Therapy) [23].

Fig. 1 Simplified diagnosis and treatment pathway of patients diagnosed with prostate cancer. PSA Prostate specific antigen, DRE Digital rectal examination, TURP transurethral resection of prostate, TURB transurethral resection of bladder, TRUS Transrectal ultrasound, LUTS Lower Urinary Tract Symptoms

Most treatment modalities can be administered at various stages, however with different intent [6, 23]. Radical prostatectomy, radiation and hormonal therapy can be applied for localised high-risk prostate cancer (stage I and stage II), whilst for metastatic prostate cancer hormonal therapy is first line, with radiotherapy and chemotherapy added for palliation purposes. Radiation is not available in Eswatini and patients are referred to private hospitals in South Africa. 
Other surgical interventions for relieving symptoms, such as transurethral resection of the prostate (TURP) or bladder (TURB), can be conducted locally.
We used expert opinion from the Mbabane Government Hospital Chemotherapy Unit and Mbabane Clinic (a private hospital), and information from the Phalala Fund, to establish the patient referral pathway. The Phalala Fund is a government funded scheme established to fund the provision of specialized health care services to people of Eswatini who cannot afford specialized care that is not available in the country [25]. The Eswatini standardized cancer care guidelines were used to establish screening, diagnosis and treatment variables. Costs were estimated based on market prices. Radiotherapy is currently not available in Eswatini; as such, patients who require radiation are managed in South Africa through the Phalala Fund. Chemotherapy is available locally through a government chemotherapy unit and a local private clinic. However, it was established that most patients were still receiving chemotherapy from South Africa.", "From a societal perspective, costs associated with prostate cancer were estimated to assess the economic burden of prostate cancer in Eswatini. Direct medical costs were divided into recurrent and capital costs [19]. Recurrent costs included personnel, travel, consumables including medical supplies, administration, utilities and overheads. Capital costs consisted mainly of equipment, buildings, vehicles and anything that has a useful life of more than one year. Costs for prostate cancer were determined based on the data sources presented in Table 1. 
All costs were presented in US Dollars using the 2018 average exchange rate (1 USD ($) = 14.5 SZL).

Table 1 Data variables and sources for costs regarding screening, management and treatment of prostate cancer

Data | Data source | Price source
Estimated number of cases in 2018 = 90 | Swaziland National Cancer Unit, Eswatini prostate cancer cases in 2018 | N/A
Screening
  Consultation fee | Mbabane Clinic | Private hospital
  Prostate Specific Antigen (PSA) | Eswatini Health Laboratory Services | Private hospital
  Digital rectal examination (DRE) | Interview with expert | Private hospital
Diagnosis
  TRUS guided biopsy | Mbabane Clinic | Private hospital
  Computed Tomography (CT scan) | Mbabane Clinic | Private hospital
  MRI scan | Phalala Fund | Private hospital
  X-ray | Mbabane Clinic | Private hospital
  Bone scan | Mbabane Clinic | Private hospital
Intervention/Treatment
  Watchful waiting (WW) | Interview with expert | Private hospital
  Surgery | Mbabane Clinic | Private hospital
  Radiotherapy | Phalala Fund, based on SA hospitals' fees | Private hospital
  Chemotherapy | Phalala Fund, based on SA hospitals' fees | Private hospital
  Androgen deprivation | Phalala Fund, based on SA hospitals' fees | Market price
  Hospitalization (local) | Phalala Fund, based on SA hospitals' fees | Private hospital
Other direct costs
  Transport and lodging costs in South Africa | Phalala Fund, based on SA hospitals' fees | Market price
  Follow-up care (Year 1 following completion of treatment): PSA testing, symptomology and clinical examination for metastatic cancer twice in a year | Based on reported prevalence | Private hospital

Direct medical costs Direct costs in this study include resource utilization for diagnosis, treatment (surgery, chemotherapy, radiotherapy and androgen deprivation therapy) and follow-up care. 
To estimate the direct costs, we estimated the average cost of each intervention across screening, staging and treatment and multiplied it by the number of corresponding patients who received the intervention. The number of men diagnosed with prostate cancer was obtained from the national cancer registry [26]. All diagnosed cases were assumed to have undergone a screening test using PSA. Screening and diagnosis costs were obtained from private hospital and market pricing. Treatment costs, mainly radiation, chemotherapy and androgen deprivation therapy, were obtained from the Phalala Fund based on South African private hospitals' fees. In Eswatini, a majority of the management costs are borne by the Eswatini Government through the Phalala Fund.
As per the standardized cancer care guidelines, we assumed that all the men with confirmed prostate cancer in 2018 underwent screening and diagnosis tests and treatment, and incurred other direct costs including transport and accommodation. Follow-up care costs were estimated for one year for those reported alive in 2018.
Direct non-medical costs Transport costs, including return, were estimated based on required patient follow-up visits per the Eswatini Standardized Cancer Care Guidelines, which state that follow-up visits should be every six months for the first two years and annually for up to five years following surgery [23]. Transport cost was estimated based on data from a previous study on breast cancer women receiving follow-up care at the Mbabane Government Cancer Unit [22]. We assumed that all men who completed treatment in 2018 had follow-up visits as per the Eswatini Standardized Cancer Guidelines. 
This study estimated one-year follow-up costs.\nIndirect costs We estimated the monitory value of prostate cancer related productivity loss due to morbidity (patient sick leave days incurred as a result of seeking health care) and pre-mature mortality).\nThe human capital method was used to estimate indirect costs related to productivity loss due to morbidity (sick leave as a result of seeking prostate cancer care) and pre-mature mortality [20]. We used average annual gross earnings computed from our previous study on breast cancer women receiving follow-up care in the chemotherapy unit, Mbabane Government hospital in Eswatini [22].\nMorbidity costs We estimated the number of sick leave days for men diagnosed with prostate cancer who are in the labor participation ages (18-60 years). Using findings from a previously published study [27], we assumed sick leave for an average of 54 days per person. The sick leave days included days for staging, treatment and follow-up care. Using findings from a previous study on breast cancer conducted in Eswatini [22], we assumed 20 working days per month and a full-time working day of 8 h with estimated costs per workday ($12) translating ($1.5) per work hour [22].\nWe estimated the number of sick leave days for men diagnosed with prostate cancer who are in the labor participation ages (18-60 years). Using findings from a previously published study [27], we assumed sick leave for an average of 54 days per person. The sick leave days included days for staging, treatment and follow-up care. 
Using findings from a previous study on breast cancer conducted in Eswatini [22], we assumed 20 working days per month and a full-time working day of 8 hours, with an estimated cost of $12 per workday ($1.5 per work hour).

Mortality costs
To estimate the cost of lost productivity due to premature death related to prostate cancer, years of potential productive life lost (YPPLL) were calculated by subtracting age at death from the local retirement age of 60 years [28]. Age-group-specific prostate cancer deaths were estimated assuming the Eswatini labor participation ages (18-60 years). We used the full employment rate and average annual earnings obtained from a previous study. Average YPPLL was multiplied by average annual earnings. Following health economic recommendations, future costs were discounted at 3% and 5% [19, 29]. The number of prostate cancer-related deaths was obtained from the Eswatini Cancer Registry.
In 2018, there were 31 prostate cancer-related deaths, of which 4 occurred within the Eswatini labor participation ages (18-60 years) [28].
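The indirect-cost calculations described above can be sketched as follows. This is a minimal illustration under the stated assumptions (54 sick-leave days, $12 per workday, retirement at age 60, average annual gross earnings of $2,690); the function and variable names are ours, not from the study:

```python
SICK_LEAVE_DAYS = 54          # average sick-leave days per patient [27]
COST_PER_WORKDAY = 12.0       # USD per 8-hour workday [22]
RETIREMENT_AGE = 60
AVG_ANNUAL_EARNINGS = 2690.0  # USD, average annual gross income [22]

def morbidity_cost(n_working_age_patients: int) -> float:
    """Productivity loss from sick leave (human capital method)."""
    return n_working_age_patients * SICK_LEAVE_DAYS * COST_PER_WORKDAY

def mortality_cost(ages_at_death, discount_rate=0.03) -> float:
    """Present value of earnings lost to premature death: each year of
    potential productive life lost (YPPLL) before age 60 is valued at
    average annual earnings and discounted back to the year of death."""
    total = 0.0
    for age in ages_at_death:
        yppll = max(RETIREMENT_AGE - age, 0)
        total += sum(AVG_ANNUAL_EARNINGS / (1 + discount_rate) ** t
                     for t in range(1, yppll + 1))
    return total

print(morbidity_cost(13))  # 8424.0
```

Running the mortality function with discount rates of 0.03 and 0.05 mirrors the study's two discounting scenarios [19, 29].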
Direct costs in this study include resource utilization for diagnosis, treatment (surgery, chemotherapy, radiotherapy and androgen deprivation therapy) and follow-up care. To estimate the direct costs, we computed the average cost of each intervention, from screening through staging and treatment, and multiplied it by the number of patients who received that intervention. The number of men diagnosed with prostate cancer was obtained from the national cancer registry [26]. All diagnosed cases were assumed to have undergone a PSA screening test. Screening and diagnosis costs were obtained from private hospital and market pricing. Treatment costs, mainly radiation, chemotherapy and androgen deprivation therapy, were obtained from the Phalala Fund based on South African private hospital fees.
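The unit-cost-times-volume approach above can be sketched as follows (the item names and figures are illustrative values taken from Table 2; the code structure is ours):

```python
# Direct-cost aggregation: average unit cost of each intervention
# multiplied by the number of patients who received it.
unit_costs = {"PSA": 16, "TRUS biopsy": 147, "MRI": 1034}  # USD, Table 2
patients   = {"PSA": 90, "TRUS biopsy": 90, "MRI": 90}     # recipients of each

def total_direct_cost(unit_costs: dict, patients: dict) -> int:
    """Sum of unit cost x number of recipients over all interventions."""
    return sum(cost * patients[item] for item, cost in unit_costs.items())

print(total_direct_cost(unit_costs, patients))  # 107730
```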
We computed the aggregate total cost of screening, diagnosis and treatment of prostate cancer in 2018 as:

$$\text{Cost of disease}=\underbrace{\text{Direct medical costs}+\text{Direct non-medical costs}}_{\text{Direct costs}}+\underbrace{\text{Morbidity costs}+\text{Mortality costs}}_{\text{Indirect costs}}$$

Direct costs consist of direct medical costs and direct non-medical costs. Indirect costs consist of morbidity costs (patient time lost as a result of the condition) and mortality costs (costs associated with premature mortality). All costs were reported in 2018 US dollars ($1 = SZL 14.5).

Sensitivity analysis was performed using ±25% bounds to account for the cost of follow-up of prevalent cancer cases and for cases unrecorded by the facilities.

Direct costs
In 2018, there were 90 prostate cancer cases, of which 89% were aged 60 years and above. The average age was 73 years.
Table 2 shows unit average costs for treating prostate cancer cases, including other direct costs such as transportation and accommodation.

Table 2: Costs for screening, diagnosis and treatment of prostate cancer (average 2018 USD)
Screening
  Consultation fee: 41
  Prostate Specific Antigen (PSA): 16
  Digital rectal examination (DRE): 32
Diagnosis
  TRUS-guided biopsy (pathology): 147
  Computed tomography (CT) scan, to rule out chest, abdomen and pelvis metastasis: 862
  Magnetic resonance imaging (MRI) scan: 1,034
  X-ray, to rule out effusion: 28
  Bone scan, in locally advanced prostate cancer: 607
  Ultrasound scan: 103
Treatment
  Watchful waiting (WW; PSA test every three months plus follow-up consultation fee): 58
  Radical prostatectomy: 5,726
  Orchiectomy: 5,726
  Radiotherapy (administered in 5 fractions over a 5-week period): 16,647
  Chemotherapy (brachytherapy or external beam radiation; administered in 5 cycles over a 5-week period): 7,498
  Androgen deprivation therapy (ADT; mostly Zoladex 0.8 mg intramuscular every 3 months): 1,268
  Symptom-relieving procedures (transurethral resection of the prostate or bladder, TURP/TURB): 5,872
Other direct costs
  Hospitalization costs, local (admission for symptom-management procedures including TURP and orchidectomy): 5,872
  Hospital admission in a step-down facility for late-stage treatment in South Africa (patients requiring close monitoring after radiotherapy or surgery outside Eswatini): 1,206
  Transport and lodging cost in South Africa (all patients who received treatment in South Africa): 5,959
  Follow-up care, year 1 after completing treatment (follow-up consultation and PSA tests; PSA testing, symptomology and clinical examination twice a year): 94
Total: 58,796

Cost distribution by disease stage is shown in
Table 3. Following the Eswatini Standardized Cancer Care Guidelines, we assumed that all confirmed cases underwent the same screening, diagnosis and treatment pathway shown in Table 2 and the simplified referral pathway shown in Fig. 1. The average costs of the different pathways, including treatment interventions, differed by prostate cancer stage: radical prostatectomy was more frequent in early-stage disease, whilst interventions such as chemotherapy were common in stages III and IV. Table 3 shows the distribution of prostate cancer costs by stage.

Table 3: Costs for staging, management and treatment of prostate cancer, stages I-IV ($). Columns: unit cost | stage I (T1) | stage II (T2) | stage III (T3) | stage IV (T4).
  Consultation for assessment: 41 | 41 | 41 | 41 | 41
Screening and diagnosis
  Prostate Specific Antigen (PSA): 16 | 48 | 48 | 48 | 48 (stage columns combine PSA and DRE)
  Digital rectal examination (DRE): included in the PSA row
  TRUS-guided biopsy: 147 | 147 | 147 | 147 | 147
  MRI scan: 1,034 | 1,034 | 1,034 | 1,034 | 1,034
  Chest x-ray: 28 | 27.6 | 27.6 | 27.6 | 27.6
  Bone scan: 607 | 607 | 607 | 607 | 607
  Ultrasound: 103 | 103 | 103 | 103 | 103
  CT scan, abdomen: 862 | 862 | 862 | 862 | 862
Treatment (prostate cancer prevalence in 2018 = 91 patients)
  Watchful waiting (WW; PSA test every three months plus follow-up consultation fee): 58 | 58 | 0 | 0 | 0
  Radical prostatectomy: 5,726 | 5,726 | 5,726 | 0 | 0
  Orchiectomy (surgical castration): 5,726 | 5,726 | 5,726 | 5,726 | 5,726
  Radiotherapy: 16,648 | 16,648 | 16,648 | 16,648 | 16,648
  Chemotherapy: 7,498 | 0 | 0 | 7,498 | 7,498
  Symptom-relieving procedures (TURP/TURB): 5,872 | 0 | 0 | 5,872 | 5,872
  Other supportive drugs (painkillers): 60 | 60 | 60 | 60 | 60
  Hormonal therapy (ADT; Zoladex 0.8 mg injectable): 1,268 | 1,268 | 1,268 | 1,268 | 1,268
Other costs
  Hospitalization costs (local): 5,872 | 5,872 | 5,872 | 5,872 | 5,872
  Hospital admission in step-down facility for late-stage treatment: 1,206 | 1,206 | 1,206 | 1,206 | 1,206
  Transport and lodging cost (in RSA): 5,959 | 5,959 | 5,959 | 5,959 | 5,959
  Follow-up care (year 1 after completing treatment): 94 | 94 | 94 | 94 | 94
  Total: 58,824 | 45,486 | 45,428 | 53,072 | 53,072

Radiation is not available in Eswatini, and patients are referred to private hospitals in South Africa. On average, radiotherapy is administered over a 5-week period [25]. The estimated unit cost for radiotherapy was $16,648, whilst chemotherapy was $7,498. In addition to treatment costs, all patients referred for radiotherapy also incurred other direct costs, including transport, lodging and an allowance for accompanying staff (a nurse and a driver), at a unit cost of $5,959 (Table 3).

Direct non-medical costs
Using an estimate from a previous study [22], the average transport cost per follow-up visit, including the return trip, was $11 (interquartile range (IQR) $4-46). On average, post-treatment follow-up visits should occur every 6 months, resulting in four trips in a year including returns. We assumed that all patients visited the hospital in the company of a relative.
The total average transport cost, including return trips, was estimated at $5,029 (between $3,771 and $6,287, estimated with lower and upper bounds).
Indirect costs
Productivity loss due to sick leave taken by patients seeking health care for prostate cancer was estimated at $58,320 (Table 4). Of the 90 patients diagnosed with prostate cancer, 13 men were within the labor participation ages and were assumed to take an average sick leave of 54 days per person, excluding the short-term sick leave of 14 days that is usually covered by employers. A total of 31 men died of prostate cancer in 2018, of whom 4 were younger than 60 years.
Costs due to premature mortality from prostate cancer were estimated at $113,760 (Table 5).

Table 4: Costs due to sick-leave days associated with prostate cancer
  Number of sick-leave days per patient: 54
  Number of patients alive in 2018: 58
  Cost per workday: $12
  Total productivity loss due to prostate cancer in 2018, all patients: $37,584

Table 5: Mortality costs for prostate cancer. Columns: age group | average YPPLL per patient | number of premature deaths before age 60 | average annual income ($) | mortality cost ($) at 3% discount rate | mortality cost ($) at 5% discount rate.
  46-51: 49 | 1 | 2,690 | 29,632 | 10,676
  52-56: 54 | 3 | 2,690 | 84,128 | 27,311
  Totals: 103 | 4 | 2,690 | 113,760 | 37,987
  Total YPPLL: 211; average annual gross income: $2,690
Total annual costs
The total annual cost of prostate cancer was estimated at $6.2 million (between $4.7 million and $7.8 million, estimated with lower and upper bounds) (Table 6). Forty-four percent (40) of the cases were diagnosed at stage IV, whilst only 11% (10) were diagnosed at stage I. Management of prostate cancer stages III and IV formed the greatest share of the costs, contributing about $1.2 million and $2.1 million respectively. The total costs of stages I and II were estimated at $0.5 million and $0.8 million. Transport and accommodation costs (incurred by those transferred to South Africa) were the highest of the other direct costs, contributing about $0.5 million. In 2018, there were 31 prostate cancer-related deaths, of which only 4 occurred within the Eswatini labor participation ages (18-60 years). The total years of potential productive life lost (YPPLL) was 211.
Indirect costs were estimated at $0.24 million and a majority (96%, $0.2 million) were productive loss from premature mortality, Table 6.\n\nTable 6Total Annual costs estimation for Prostate cancer (direct and indirect costs)Prevalence 2018Cost per item ($)Base case cost ($)Range ($)\nParameter\n\nNumber\n\nAverage cost (2018)\n\nBase costs (2018)\n\n(Lower (-25%)\n\nHigher (+25)\nDirect costs (Health care costs) consultation fee9041369027684613\nScreening and diagnosis\n  Prostate Specific Antigen (PSA)9016144810861810  Digital rectal examination (DRE)80000  TRUS guided Biopsy9014713,213991016,516  MRI scan90103493,06069,795116,325  Chest x-ray9028248418633105  Bone scan9060754,63040,97368,288  Ultra sound901039270695311,588  CT scan abdomen9086277,58058,18596,975\nTreatment\n  Stage I1045,486454,861341,146568,577  Stage II1745,428772,278579,209965,348  Stage III2353,0721,220,659915,4941,525,824  Stage IV4053,0722,122,8861,592,1642,653,607\nOther direct costs\n  Hospitalization costs (local)9021018,90014,17523,625  Hospital admission in step down facility for late stage treatment901206108,54081,405135,675  Transport and lodging cost (in RSA)905959536,310402,233670,388  Follow-up care (Year 1 following completing treatment)602662159,720119,790199,650\nTotal direct\n209,8925,645,8394,234,3797,057,299\nDirect non-medical cost\n  Transport costs for follow-up visits ,patient592513148,267111,200185,334  Transport costs for follow-up visits, accompanying relative592513148,267111,200185,334\nTotal Direct non-medical costs\n5026296,534222,401370,668\nIndirect costs\n  Morbidity costs due to sick leave136488424631810,530  Premature mortality costs457,675230,700173,025288,375\nTotal indirect costs\n58,323239,124179,343298,905\nTotal\n\n268,215\n\n6,181,497\n\n4,636,123\n\n7,726,871\n\nTotal Annual costs estimation for Prostate cancer (direct and indirect costs)\nThe total annual costs for prostate cancer was estimated at $ 6.2 million (between $4.7 million and 7.8 million 
estimated with lower and upper bounds), Table 6. Fourth 4% (40) of the cases were diagnoses with stage IV whilst only 11% (10) were diagnosed with stages I. Management of prostate cancer stages III and IV formed the greatest share of the costs for prostate cancer contributing about $1.2 and 2.1 million respectively. The total costs of stages I and II was estimated at $0.5 and $0.8 million. Transport and accommodation costs (cost incurred by those transferred to South Africa) were highest under other direct costs contributing about $0.5million. In 2018, there were 31 prostate cancer related deaths with only 4 occurred within the labor participating ages of Eswatini (18-60) years. The total year of productive life lost (YPPL) was 221 years. Indirect costs were estimated at $0.24 million and a majority (96%, $0.2 million) were productive loss from premature mortality, Table 6.\n\nTable 6Total Annual costs estimation for Prostate cancer (direct and indirect costs)Prevalence 2018Cost per item ($)Base case cost ($)Range ($)\nParameter\n\nNumber\n\nAverage cost (2018)\n\nBase costs (2018)\n\n(Lower (-25%)\n\nHigher (+25)\nDirect costs (Health care costs) consultation fee9041369027684613\nScreening and diagnosis\n  Prostate Specific Antigen (PSA)9016144810861810  Digital rectal examination (DRE)80000  TRUS guided Biopsy9014713,213991016,516  MRI scan90103493,06069,795116,325  Chest x-ray9028248418633105  Bone scan9060754,63040,97368,288  Ultra sound901039270695311,588  CT scan abdomen9086277,58058,18596,975\nTreatment\n  Stage I1045,486454,861341,146568,577  Stage II1745,428772,278579,209965,348  Stage III2353,0721,220,659915,4941,525,824  Stage IV4053,0722,122,8861,592,1642,653,607\nOther direct costs\n  Hospitalization costs (local)9021018,90014,17523,625  Hospital admission in step down facility for late stage treatment901206108,54081,405135,675  Transport and lodging cost (in RSA)905959536,310402,233670,388  Follow-up care (Year 1 following completing 
treatment)602662159,720119,790199,650\nTotal direct\n209,8925,645,8394,234,3797,057,299\nDirect non-medical cost\n  Transport costs for follow-up visits, patient592513148,267111,200185,334  Transport costs for follow-up visits, accompanying relative592513148,267111,200185,334\nTotal Direct non-medical costs\n5026296,534222,401370,668\nIndirect costs\n  Morbidity costs due to sick leave136488424631810,530  Premature mortality costs457,675230,700173,025288,375\nTotal indirect costs\n58,323239,124179,343298,905\nTotal\n\n268,215\n\n6,181,497\n\n4,636,123\n\n7,726,871\n\nTotal Annual costs estimation for Prostate cancer (direct and indirect costs)", "In 2018, there were 90 prostate cancer cases, of which 89% were aged 60 years and above. The average age was 73 years. Table 2 shows unit average costs for treating prostate cancer cases including other direct costs such as transportation and accommodation.\n\nTable 2Costs for screening, diagnosis and treatment of prostate cancerParameterVariables included in the costAverage (2018) USD\nScreening\nConsultation fee41Prostate Specific Antigen (PSA)16Digital rectal examination (DRE)32\nDiagnosis\nPathologyTRUS guided Biopsy147RadiologyComputed Tomography (CT scan)- to rule out chest, abdomen and pelvis metastasis862Magnetic resonance imaging scan (MRI)1,034X-ray to rule out effusion28Bone scan in locally advanced prostate cancer607Ultrasound scan103\nTreatment\nWatchful Waiting (WW) costs include PSA test every three months and follow up consultation feePSA plus follow-up consultation fee58SurgeryRadical prostatectomy5,726Orchiectomy5,726RadiotherapyAdministered in fractions over a 5-week period16,647Chemotherapy (Brachytherapy or external beam radiation)Administered in 5 cycles over a 5-week period7,498Androgen Deprivation Therapy (ADT)Mostly Zoladex 0.8 mg intramuscular (IM) every 3 months1,268Symptoms relieving procedures (TURP/TURB)Transurethral resection of the prostate or bladder (TURP/TURB)5,872\nOther direct
costs\nHospitalization costs (local)Admitted for symptoms management procedures including transurethral resection of the prostate (TURP) and orchidectomy5,872Hospital admission in step down facility for late stage treatment (In South Africa)Patients who require close monitoring following radiotherapy or surgery in a hospital outside Eswatini1,206Transport and lodging cost in South AfricaAll patients who received treatment in South Africa5,959Follow-up care (Year 1 following completing treatment) Follow-up is done using PSA testing, symptomology and clinical examination for metastatic cancer twice a yearFollow up consultation, PSA tests.94\nTotal\n\n58,796\n\nCosts for screening, diagnosis and treatment of prostate cancer\nCost distribution by disease stage is shown in Table 3. Following the Eswatini Standard Cancer Care Guidelines, we assumed that all confirmed cases underwent the same screening, diagnosis and treatment pathway shown in Table 2, and the simplified referral pathway shown in Fig. 1. The average costs for the different pathways, including treatment interventions, differed by prostate cancer stage. Radical prostatectomy was more frequent in early stages of prostate cancer whilst interventions like chemotherapy were common in prostate cancer stages III and IV. Table 3 shows the prostate cancer cost distribution by stage.\n\nTable 3Costs for staging, management, and treatment of Prostate cancer stage I-IVStaging and treatment variablesUnit cost ($)I (T1)II (T2)III (T3)IV (T4)Consultation for assessment4141414141\nScreening and diagnosis\n  Prostate Specific Antigen (PSA)1648484848  Digital rectal examination (DRE)  TRUS guided Biopsy147147147147147  MRI scan1,0341,0341,0341,0341,034  Chest x-ray2827.627.627.627.6  Bone scan607607607607607  Ultrasound103103103103103  CT scan abdomen862862862862862\nTreatment (Prostate Cancer prevalence in 2018=91 patients)\n  Watchful waiting (WW).
Costs include PSA test every three months and follow up consultation fee5858000  Radical prostatectomy5,7265,7265,72600  Orchiectomy (surgical castration)5,7265,7265,7265,7265,726  Radiotherapy16,64816,64816,64816,64816,648  Chemotherapy7,498007,4987,498  Symptoms relieving procedures (TURP/TURB)5,872005,8725,872  Other supportive drugs: Pain killers6060606060  Hormonal therapy (ADT) Zoladex 0.8 mg injectables1,2681,2681,2681,2681,268\nOther costs\n  Hospitalization costs (local)5,8725,8725,8725,8725,872  Hospital admission in step down facility for late stage treatment1,2061,2061,2061,2061,206  Transport and lodging cost (in RSA)5,9595,9595,9595,9595,959  Follow-up care (Year 1 following completing treatment)9494949494\nTotal\n\n58,824\n\n45,486\n\n45,428\n\n53,072\n\n53,072\n\nCosts for staging, management, and treatment of Prostate cancer stage I-IV\nRadiation is not available in Eswatini and patients are referred to private hospitals in South Africa. On average, radiotherapy treatment is administered over a period of 5 weeks [25]. The estimated unit cost for radiotherapy was $16,648 whilst that for chemotherapy was $7,498. In addition to treatment costs, all patients referred for radiotherapy also incurred other direct costs including transport, lodging and allowances for accompanying staff (nurse and driver) at a unit cost of $5,959, Table 3.\nDirect non-medical costs\nUsing an estimate from a previous study [22], the average transport cost per follow-up visit including return was $11 (interquartile range (IQR) $4-46). On average, post-treatment follow-up visits should be every 6 months, resulting in four visits in a year including return. We assumed that all patients visited the hospital in the company of a relative.
The total average transport costs including return were estimated at $5,029 (between $3,771 and $6,287 estimated with lower and upper bounds).\nUsing an estimate from a previous study [22], the average transport cost per follow-up visit including return was $11 (interquartile range (IQR) $4-46). On average, post-treatment follow-up visits should be every 6 months, resulting in four visits in a year including return. We assumed that all patients visited the hospital in the company of a relative. The total average transport costs including return were estimated at $5,029 (between $3,771 and $6,287 estimated with lower and upper bounds).", "Using an estimate from a previous study [22], the average transport cost per follow-up visit including return was $11 (interquartile range (IQR) $4-46). On average, post-treatment follow-up visits should be every 6 months, resulting in four visits in a year including return. We assumed that all patients visited the hospital in the company of a relative. The total average transport costs including return were estimated at $5,029 (between $3,771 and $6,287 estimated with lower and upper bounds).", "Productivity loss due to sick leave as a result of patients seeking health care for prostate cancer was estimated at $58,320, Table 4. Out of the 90 patients diagnosed with prostate cancer, there were 13 men within the labor-participating ages, who were assumed to take an average sick leave of 54 days per person, excluding short-term sick leave of 14 days that is usually covered by employers. A total of 31 men died of prostate cancer in 2018, of which 4 were less than 60 years old.
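The sick-leave arithmetic described above can be sketched as follows. This is a minimal illustration (not the study's code) using the per-patient figures reported in the tables: 54 unpaid sick-leave days per patient at $12 per workday, for the 13 men of labor-participating age.

```python
# Sketch of the morbidity (sick leave) cost under the human capital approach.
# Figures are taken from the reported tables: 54 unpaid sick-leave days per
# patient (short-term leave covered by employers excluded), $12 per workday,
# and 13 patients within the labor-participating ages (18-60 years).

SICK_DAYS_PER_PATIENT = 54
COST_PER_WORKDAY_USD = 12
PATIENTS_IN_LABOR_AGES = 13

cost_per_patient = SICK_DAYS_PER_PATIENT * COST_PER_WORKDAY_USD   # 648
total_morbidity_cost = cost_per_patient * PATIENTS_IN_LABOR_AGES  # 8424

print(cost_per_patient, total_morbidity_cost)  # 648 8424
```

The per-patient product ($648) and the 13-patient total ($8,424) correspond to the morbidity cost line of Table 6.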
Costs due to prostate cancer premature mortality were estimated at $113,760, Table 5.\n\nTable 4Costs due to sick leave days associated with prostate cancerNumber of sick leave daysNumber of patients alive in 2018Cost per workday ($)Total productivity loss due to prostate cancer in 2018 ($) for all patientsTotal patient545812$37,584\nCosts due to sick leave days associated with prostate cancer\n\nTable 5Mortality for prostate cancerMortality cost for Prostate cancerAge groupsLost YPPLL (Average YPPLL for 1 patient in each age group)Number of premature deaths before age 60Average annual incomeMortality cost ($) with 3% Discount rate multiplied with the number of patients in this age groupMortality cost ($) with 5% Discount rate multiplied with the number of patients in each age group46-51491269029,63210,67652-56543269084,12827,311Totals10342690113,76037,987YPPLL211Average annual gross income = $2,690\nMortality for prostate cancer", "The total annual cost for prostate cancer was estimated at $6.2 million (between $4.7 million and $7.8 million estimated with lower and upper bounds), Table 6. Forty-four percent (40) of the cases were diagnosed with stage IV whilst only 11% (10) were diagnosed with stage I. Management of prostate cancer stages III and IV formed the greatest share of the costs for prostate cancer, contributing about $1.2 and $2.1 million respectively. The total costs of stages I and II were estimated at $0.5 and $0.8 million. Transport and accommodation costs (costs incurred by those transferred to South Africa) were highest under other direct costs, contributing about $0.5 million. In 2018, there were 31 prostate cancer related deaths, of which only 4 occurred within the labor-participating ages of Eswatini (18-60 years). The total years of potential productive life lost (YPPLL) were 221.
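Premature-mortality costs under the human capital approach discount the average annual gross income ($2,690) over each decedent's years of potential productive life lost, at the study's 3% (or 5%) rate. A minimal sketch, with a hypothetical per-death YPLL split (the article reports only age-group totals):

```python
def premature_mortality_cost(yplls, annual_income, discount_rate):
    """Present value of earnings lost to deaths before age 60 (human
    capital approach): each death contributes annual_income discounted
    over its years of potential productive life lost (YPLL)."""
    return sum(
        annual_income / (1 + discount_rate) ** t
        for ypll in yplls
        for t in range(1, ypll + 1)
    )

# Hypothetical illustration: two deaths losing 9 and 5 productive years,
# valued at the average annual gross income of $2,690 and a 3% rate.
cost_3pct = premature_mortality_cost([9, 5], 2690, 0.03)  # ~ $33,264
cost_5pct = premature_mortality_cost([9, 5], 2690, 0.05)  # lower, as expected
print(round(cost_3pct), round(cost_5pct))
```

A higher discount rate lowers the present value, which is why the table reports a smaller mortality cost at 5% than at 3%.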
Indirect costs were estimated at $0.24 million, the majority of which (96%, $0.2 million) was productivity loss from premature mortality, Table 6.\n\nTable 6Total Annual costs estimation for Prostate cancer (direct and indirect costs)Prevalence 2018Cost per item ($)Base case cost ($)Range ($)\nParameter\n\nNumber\n\nAverage cost (2018)\n\nBase costs (2018)\n\nLower (-25%)\n\nHigher (+25%)\nDirect costs (Health care costs) consultation fee9041369027684613\nScreening and diagnosis\n  Prostate Specific Antigen (PSA)9016144810861810  Digital rectal examination (DRE)80000  TRUS guided Biopsy9014713,213991016,516  MRI scan90103493,06069,795116,325  Chest x-ray9028248418633105  Bone scan9060754,63040,97368,288  Ultrasound901039270695311,588  CT scan abdomen9086277,58058,18596,975\nTreatment\n  Stage I1045,486454,861341,146568,577  Stage II1745,428772,278579,209965,348  Stage III2353,0721,220,659915,4941,525,824  Stage IV4053,0722,122,8861,592,1642,653,607\nOther direct costs\n  Hospitalization costs (local)9021018,90014,17523,625  Hospital admission in step down facility for late stage treatment901206108,54081,405135,675  Transport and lodging cost (in RSA)905959536,310402,233670,388  Follow-up care (Year 1 following completing treatment)602662159,720119,790199,650\nTotal direct\n209,8925,645,8394,234,3797,057,299\nDirect non-medical cost\n  Transport costs for follow-up visits, patient592513148,267111,200185,334  Transport costs for follow-up visits, accompanying relative592513148,267111,200185,334\nTotal Direct non-medical costs\n5026296,534222,401370,668\nIndirect costs\n  Morbidity costs due to sick leave136488424631810,530  Premature mortality costs457,675230,700173,025288,375\nTotal indirect costs\n58,323239,124179,343298,905\nTotal\n\n268,215\n\n6,181,497\n\n4,636,123\n\n7,726,871\n\nTotal Annual costs estimation for Prostate cancer (direct and indirect costs)", "The current study assessed the costs associated with prostate cancer in Eswatini, that is, screening,
diagnosis, treatment and follow-up care. The study considered direct costs including follow-up care costs within one year of diagnosis. To our knowledge this is the first study to estimate the economic burden of prostate cancer in Eswatini. The estimated annual prostate cancer burden was $6.1 million in 2018. About 89% of the patients were aged 60 years and above. Given the Eswatini Standardized Cancer Care and Guidelines [21], we assumed that all patients diagnosed in 2018 underwent the same screening and diagnostic procedures. Treatment costs varied by cancer stage, reflecting the utilization of treatment modalities per stage, hence the high costs observed in stages III ($1.2 million) and IV ($2.1 million) versus stages I and II at $0.5 and $0.8 million respectively. The findings indicate that managing advanced stages of the disease increases health care costs.\nThe study findings were in accordance with findings from other studies. A study assessing health care costs associated with prostate cancer in Canada reported increasing costs per stage I ($1,297), II ($3,289), III ($1,495), IV ($5,629) and V ($16,020) [30]. Similarly, a study conducted in Iran concluded that health care costs for metastatic stages were the highest compared to treatment costs for localized prostate cancer [11]. Other studies reached similar conclusions [31, 32]. Slightly different findings came from the United States of America, where high treatment costs were reported for initial diagnosis and the metastatic phase, with radical prostatectomy being the main cost driver [33]. Whilst in this study we found lower costs with early-stage cancer, both studies observed increasing costs with advanced cancer stages. Also, the differences could be partly explained by the proportion of men (20%) diagnosed with early stages of prostate cancer in our study.
A systematic review of registry-based studies assessing the economic burden of prostate cancer in Europe found that cost distribution across prostate cancer stages varied across countries [34]. This can be attributed to differences in prostate cancer detection and country-specific management practices [34]. The authors also acknowledged the differences in methodologies applied in the studies as a possible explanation for the varying outcomes observed.\nThere seems to be a lack of global consensus on prevention strategies, particularly the age of screening. The United States Preventive Services Task Force (USPSTF) recommends against routine screening for prostate cancer in men 70 years and older, particularly using prostate specific antigen screening [35]. The Eswatini Standardized Cancer Care and Guidelines also discourage routine prostate cancer screening, with an exception for men 50 years and above or symptomatic [23]. Other studies argue that increased screening leads to increased detection of low-grade cancers, resulting in patients with indolent tumors receiving aggressive treatment [36].\nIn LICs such as Eswatini, the challenge is likely to lie in a different direction than overdiagnosis and consequent overtreatment. Lack of screening and comprehensive treatment remains the greatest challenge for most LMICs and LICs. Eswatini is no different from other low- and middle-income countries, for whom late diagnosis coupled with limited treatment options remains a challenge. In Eswatini, in 2018, more than 80% of the patients were diagnosed with advanced cancer (stages III and IV), yet major treatments, including radiotherapy and androgen deprivation therapy (ADT), are not available in the country.
Accessing care outside the country comes with additional costs, mainly accommodation, transportation and meals for patients referred to South Africa.\nLack of specialized care and its high cost have been reported in other countries, particularly in Africa, where mortality from prostate cancer is the highest and cancer treatment guidelines are lacking [4, 37].\nThere is an urgent need to strengthen health system enablers [38]. These include investments in the establishment of local cancer treatment centers, optimizing health workforce competencies throughout the continuum of care, and ensuring availability of medical products and diagnostic technologies to facilitate local diagnosis, staging and management.\nDespite the evidence that prostate cancer is a major public health challenge, literature on its economic burden is limited, and severely so in low-income countries, particularly in the sub-Saharan region. Findings from a systematic review of prostate cancer cost studies indicated a need not only for harmonized methodologies but also to expand research in this field [39]. Similarly, another systematic literature review of registry-based studies reached a similar conclusion on the need for further research in cost-of-illness studies focusing on prostate cancer [40].\nIn this study we assessed indirect costs by estimating the costs associated with unpaid sick leave days and productivity loss due to premature mortality from prostate cancer. Of the total costs, indirect costs accounted for 4.2% ($0.24 million). Comparing these findings to previous cost analysis studies for prostate cancer, most studies did not assess indirect costs; however, a study from Sweden reported a low proportion of productivity loss associated with prostate cancer [9].
Compared with studies of other cancer types conducted in Eswatini [22, 41], the indirect costs in this study accounted for a smaller share of the total cost. This could partly be explained by the fact that most participants (89%) were above the labor-participating ages (18-60 years) and few deaths occurred below age 60 years. A similar pattern was observed in Sweden, where the findings were again attributed to the low number of prostate cancer cases and deaths among labor-participating groups [9].\nThe key strength of our study is that it is the first to estimate the costs associated with prostate cancer in Eswatini. The study considered both direct and indirect costs of prostate cancer. Our study has notable findings that have implications for health care system strengthening and resource allocation in Eswatini, and presents a description of resource utilization and associated health care costs in managing prostate cancer in Eswatini.\nAn important limitation is the absence of index costs in Eswatini. We considered private and market prices for the best possible price estimates.\nThe estimates presented were based on available data; however, they could be conservative for several reasons. First, due to limited data availability, we used information from the literature and interviews with experts for some treatment variables; as such, some information may be subject to context and preferences. Secondly, we only considered costs in the first year of diagnosis, yet costs for follow-up care can extend beyond five years [6, 42]. Lastly, we employed the human capital approach to estimate the costs related to productivity loss associated with prostate cancer. Whilst this is a commonly applied approach, it is criticized for excluding individuals above the labor-participation age group, although it is argued that some of those people are still involved in labor activities that generate meaningful income.
Another author argues that this has severe implications when valuing productivity loss for prostate cancer, given that a majority of patients are diagnosed after they have passed the retirement age.", "The findings of the study indicated that costs attributed to prostate cancer were substantial and a public health concern. The findings were consistent with those of studies from other countries, a majority of which were conducted in developed countries. The study demonstrated the interventions and associated costs. Radiotherapy was the most expensive treatment intervention in Eswatini, whereas other studies cited surgery-related interventions as the major cost driver. This is a reasonable finding in the context of Eswatini given that radiotherapy treatment is not available locally and patients are referred to private hospitals outside the country. The findings point to areas where policy makers can pursue cost containment regarding therapeutic procedures for prostate cancer. Also, the study findings demonstrate that prostate cancer costs are likely to increase in future, and there is a need to strengthen adherence to the Eswatini Standardized Cancer Care and Guidelines in order to ensure that resources are invested in diagnosing the most at-risk groups.", " \nAdditional file 1. Direct non-medical costs patient questionnaires.pdf", "\nAdditional file 1. Direct non-medical costs patient questionnaires.pdf" ]
[ null, "materials|methods", null, null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusion", "supplementary-material", null ]
[ "Prostate cancer", "Cost-of-illness", "Eswatini", "Premature mortality", "Prostate antigen test" ]
Background: Among cancers, prostate cancer is the third commonest cancer after breast and lung cancer and the fifth cause of cancer mortality among men [1, 2]. The number of new cases increased from 1.1 million in 2012 to 1.3 million in 2018, accounting for about 7.1% of total cancer cases globally and 15% of cancers among men [2]. The causes of prostate cancer are attributable to genetic and environmental factors [2]. However, incidence and mortality rates vary substantially within and across regions. Notably, high-income countries (HICs) report higher incidence rates than low- and middle-income countries (LMICs) [2]. In contrast, mortality rates are higher in developing countries, particularly in the sub-Saharan Africa region [3]. The inequalities observed across regions with respect to prostate cancer incidence and mortality are in part linked to the availability of effective screening and improved treatment modalities, which are directly linked to resource availability [3, 4]. In Eswatini, compared to other common cancers, prostate cancer is ranked third, accounting for 7.6% of the 1,074 total new cases in 2018 [5]. Prostate cancer imposes a clinical and economic burden on patients and governments. Screening tests include prostate-specific antigen (PSA) and digital rectal examination (DRE) [6, 7]. A positive screening test result indicates further investigation [6]. Whilst PSA is the most frequent screening test, it has been argued that PSA could potentially cause harm by overdiagnosing low-risk cancers that would otherwise have remained without clinical consequences for a lifetime if left untreated [8]. In turn, this increases costs for prostate cancer [9]. In Sweden, the annual costs associated with prostate cancer (screening, diagnosis and treatment) were estimated at €281 million [9]. In Ontario, the mean per-patient cost for prostate cancer-related medication was $1,211 [10]. In Iran, the total annual cost of prostate cancer was estimated at $2,900 million [11].
Other studies estimated the economic burden of prostate cancer along with other cancer types. A study focusing on European countries ranked prostate cancer fourth in cancer-related health care costs after lung (€18.8 billion), breast (€15 billion) and colorectal cancer (€13.1 billion) [12]. Similarly, in Korea, prostate cancer was among the top four cancers contributing to the economic burden of disease [13]. There is limited evidence on the economic burden of prostate cancer from LMICs. Estimation of the economic burden of disease provides insight into treatment modalities and associated costs. This study aims to investigate the societal cost of prostate cancer in Eswatini during 2018. Materials and methods: Study area Eswatini, formerly known as Swaziland, is a country in Southern Africa bordering South Africa and Mozambique, with an estimated population of 1.2 million [14]. The country's economy is tied to South Africa, and Eswatini's domestic currency (Lilangeni=SZL) is pegged at parity with the South African currency (Rand=ZAR) such that Eswatini cannot conduct its own monetary policy [15]. Eswatini's fiscal revenue largely depends on Southern African Customs Union (SACU) revenues and remittances flowing mainly from South Africa [16, 17]. SACU receipts account for about a third of Eswatini's total revenue and grants. However, over the past decades, SACU revenues have consistently declined, leaving Eswatini's economy constrained. The country records a high national poverty rate and income inequality that are not commensurate with its middle-income status. The national poverty rate is 58.9% at the international $1.90 poverty line and the Gini index, a measure of inequality, is 49.3 [17]. Eswatini ranks near the bottom of the World Bank's Human Capital Index, with a score of 0.37 in 2020. Eswatini's health spending as a share of the total budget is estimated at 10.1% and health spending per capita is estimated at $248 per annum [16].
Whilst Eswatini’s health expenditure is comparatively higher to some other countries in the Southern African region, the country’s health outcomes do not reflect its spending levels on health and its middle-income status. The health care service delivery is made up of public and private health care. Compared to the public, the private health care systems is better equipped both infrastructural and human resources however, at high health care costs. As such, private health care is accessed by less than 10% of the population, mainly those who owns health insurance [18]. Diagnostic and treatment capacity of conditions including cancer remains limited in the country mostly in the public health system. Through a government funded scheme namely Phalala, the Eswatini citizens are supported to access specialized health care services from neighboring countries mainly South Africa. Eswatini formerly known as Swaziland is a country in Southern African bordering South Africa and Mozambique with an estimated population of 1.2 million [14]. The country’s economy is tied to South Africa and Eswatini’s domestic currency (Lilangeni=SZL) is pegged at parity with South African currency (Rand=ZAR) such that Eswatini cannot conduct its own monetary policy [15]. Eswatini fiscal revenue largely depend on Southern African Customs Union (SACU) revenues and remittance flowing mainly from South Africa [16, 17]. SACU receipts account for about a third of Eswatini’s total revenue and grants. However, over the past decades, SACU revenues have consistently declined leaving Eswatini’s economy constrained. The country records high national level poverty rate and income inequality which does not commensurate with its middle-income status. The national poverty rate is 58.9% percent at the international $1.90 poverty line and Gini index- a measure of inequality is 49.3 [17]. Eswatini ranks near the bottom of the World Bank’s Human Capital Index, with a score of 0.37 in 2020. 
Eswatini's health spending as a share of the total budget is estimated at 10.1% and health spending per capita is estimated at $248 per annum [16]. Whilst Eswatini's health expenditure is comparatively higher than that of some other countries in the Southern African region, the country's health outcomes do not reflect its spending levels on health or its middle-income status. Health care service delivery is made up of public and private health care. Compared to the public system, the private health care system is better equipped in both infrastructure and human resources, however at high health care costs. As such, private health care is accessed by less than 10% of the population, mainly those who own health insurance [18]. Diagnostic and treatment capacity for conditions including cancer remains limited in the country, mostly in the public health system. Through a government-funded scheme, namely Phalala, Eswatini citizens are supported to access specialized health care services from neighboring countries, mainly South Africa. Study area: Eswatini, formerly known as Swaziland, is a country in Southern Africa bordering South Africa and Mozambique, with an estimated population of 1.2 million [14]. The country's economy is tied to South Africa, and Eswatini's domestic currency (Lilangeni=SZL) is pegged at parity with the South African currency (Rand=ZAR) such that Eswatini cannot conduct its own monetary policy [15]. Eswatini's fiscal revenue largely depends on Southern African Customs Union (SACU) revenues and remittances flowing mainly from South Africa [16, 17]. SACU receipts account for about a third of Eswatini's total revenue and grants. However, over the past decades, SACU revenues have consistently declined, leaving Eswatini's economy constrained. The country records a high national poverty rate and income inequality that are not commensurate with its middle-income status.
The national poverty rate is 58.9% at the international $1.90 poverty line and the Gini index, a measure of inequality, is 49.3 [17]. Eswatini ranks near the bottom of the World Bank's Human Capital Index, with a score of 0.37 in 2020. Eswatini's health spending as a share of the total budget is estimated at 10.1% and health spending per capita is estimated at $248 per annum [16]. Whilst Eswatini's health expenditure is comparatively higher than that of some other countries in the Southern African region, the country's health outcomes do not reflect its spending levels on health or its middle-income status. Health care service delivery is made up of public and private health care. Compared to the public system, the private health care system is better equipped in both infrastructure and human resources, however at high health care costs. As such, private health care is accessed by less than 10% of the population, mainly those who own health insurance [18]. Diagnostic and treatment capacity for conditions including cancer remains limited in the country, mostly in the public health system. Through a government-funded scheme, namely Phalala, Eswatini citizens are supported to access specialized health care services from neighboring countries, mainly South Africa. Methods of costing: This is a Cost of Illness (CoI) study investigating the costs of prostate cancer from the societal perspective [19]. CoI studies estimate disease-specific costs [20]. The prevalence-based approach was used, employing both top-down and bottom-up costing approaches [19, 21]. The cost estimation involved identification, quantification and valuation of resources used. The total costs for prostate cancer were calculated by multiplying identified resource quantities by the respective unit costs. All costs were presented in US$ adjusted for 2018 ($1= SZL14.5). Study population Data on prostate cancer prevalence and mortality in 2018 were obtained from the National Cancer Registry [14].
The National Cancer Control Unit is led by the Ministry of Health. To estimate direct non-medical costs and annual gross earnings, estimates were obtained from a previous study that collected data using a direct non-medical costs patient questionnaire from women diagnosed with breast cancer and receiving follow-up care at the Mbabane Government chemotherapy unit (outpatient) in 2018 [22]. Data on prostate cancer prevalence and mortality in 2018 were obtained from the National Cancer Registry [14]. The National Cancer Control Unit is led by the Ministry of Health. To estimate direct non-medical costs and annual gross earnings, estimates were obtained from a previous study that collected data using a direct non-medical costs patient questionnaire from women diagnosed with breast cancer and receiving follow-up care at the Mbabane Government chemotherapy unit (outpatient) in 2018 [22]. Study population: Data on prostate cancer prevalence and mortality in 2018 were obtained from the National Cancer Registry [14]. The National Cancer Control Unit is led by the Ministry of Health. To estimate direct non-medical costs and annual gross earnings, estimates were obtained from a previous study that collected data using a direct non-medical costs patient questionnaire from women diagnosed with breast cancer and receiving follow-up care at the Mbabane Government chemotherapy unit (outpatient) in 2018 [22]. Management of prostate cancer in Eswatini: In Eswatini, routine prostate cancer screening is only recommended for men above age 50, every two years [23]. The referral pathway shown in Fig. 1 simplifies the treatment pathway, which begins with a man presenting with symptoms or eligible for screening at an outpatient department. The patient is referred to a urologist for screening tests including PSA and digital rectal examination (DRE) [6, 23]. These tests are not confirmatory; however, they indicate changes in the prostate.
Abnormal findings on either test warrant further evaluation of the patient and a subsequent diagnostic test, namely biopsy (transrectal/perineal ultrasound guided biopsy (TRUS)). Patients without cancer but presenting with symptoms receive management of lower urinary tract symptoms (LUTS). If cancer is confirmed, further evaluation is conducted for staging purposes to inform the cancer management plan (metastasis screening). The evaluation includes radiology tests (bone scan, CT scan and MRI pelvis). Staging is based on tumor size (T), extent of lymph node involvement (N) and evidence of distant metastasis (M) [23, 24]. Depending on the risk score and prostate cancer stage, treatment includes watchful waiting (cancer is monitored but not treated), surgery, radiation, chemotherapy and hormonal therapy (Androgen Deprivation Therapy) [23]. Fig. 1 Simplified diagnosis and treatment pathway of patients diagnosed with prostate cancer. PSA Prostate specific antigen, DRE Digital rectal examination, TURP transurethral resection of prostate, TURB transurethral resection of bladder, TRUS Transrectal ultrasound, LUTS Lower Urinary Tract Symptoms. Most treatment modalities can be administered at various stages, though with different intent [6, 23]. Radical prostatectomy, radiation and hormonal therapy can be applied for localised high-risk prostate cancer (stage I and stage II), whilst for metastatic prostate cancer hormonal therapy is first line, with radiotherapy and chemotherapy added for palliation. Radiation is not available in Eswatini, and patients are referred to private hospitals in South Africa.
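As a hedged illustration of the stage-dependent treatment choice described above, the options can be sketched as a simple lookup. The groupings and option lists below summarize the text's pathway and are not an official protocol; the function name is mine:

```python
# Illustrative mapping of coarse prostate cancer stage to treatment options,
# summarizing the guideline description in the text (not an official protocol).
TREATMENT_BY_STAGE = {
    "localised (I-II)": [
        "watchful waiting",
        "radical prostatectomy",
        "radiotherapy",
        "hormonal therapy (ADT)",
    ],
    "metastatic (III-IV)": [
        "hormonal therapy (ADT)",          # first line
        "radiotherapy",                    # palliative intent
        "chemotherapy",                    # palliative intent
        "symptom relief (TURP/TURB)",
    ],
}

def options_for(stage: str) -> list[str]:
    """Return the treatment options listed for a coarse disease stage."""
    return TREATMENT_BY_STAGE[stage]
```

For metastatic disease the first entry reflects the text's statement that hormonal therapy is the first-line option.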
Other surgical interventions for relieving symptoms, such as transurethral resection of the prostate (TURP) or bladder (TURB), can be conducted locally. We used expert opinion from the Mbabane Government Hospital Chemotherapy Unit and the Mbabane Clinic (a private hospital), together with information from the Phalala Fund, to establish the patient referral pathway. The Phalala Fund is a government-funded scheme established to fund specialized health care services for people of Eswatini who cannot afford specialized care that is not available in country [25]. The Eswatini standardized cancer care guidelines were used to establish screening, diagnosis and treatment variables. Costs were estimated based on market prices. Radiotherapy is currently not available in Eswatini; patients who require radiation are managed in South Africa through the Phalala Fund. Chemotherapy is available locally through a government chemotherapy unit and a local private clinic; however, most patients were still receiving chemotherapy in South Africa. Costs: From a societal perspective, costs associated with prostate cancer were estimated to assess the economic burden of prostate cancer in Eswatini. Direct medical costs were divided into recurrent and capital costs [19]. Recurrent costs included personnel, travel, consumables (including medical supplies), administration, utilities and overheads. Capital costs consisted mainly of equipment, buildings, vehicles and other items with a useful life of more than one year. Costs for prostate cancer were determined based on the data sources presented in Table 1. All costs are presented in US dollars using the 2018 average exchange rate (1 USD = 14.5 SZL).
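The costing arithmetic described above (quantity × unit cost, converted at the 2018 exchange rate) can be sketched minimally as follows; function names are illustrative:

```python
SZL_PER_USD_2018 = 14.5  # 2018 average exchange rate stated in the text

def to_usd(amount_szl: float) -> float:
    """Convert Swazi Lilangeni (SZL) to 2018 US dollars."""
    return amount_szl / SZL_PER_USD_2018

def total_cost(resources: list[tuple[float, float]]) -> float:
    """Bottom-up costing: sum of (quantity x unit cost) over identified resources."""
    return sum(qty * unit_cost for qty, unit_cost in resources)
```

For example, 90 PSA tests at $16 plus 90 consultations at $41 would total 90 × $57 = $5,130.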
Table 1 Data variables and sources for costs of screening, management and treatment of prostate cancer

Data | Data source | Price source
Prostate cancer cases in 2018 (estimated n = 90) | Swaziland National Cancer Unit, Eswatini | N/A
Screening
  Consultation fee | Mbabane Clinic | Private hospital
  Prostate Specific Antigen (PSA) | Eswatini Health Laboratory Services | Private hospital
  Digital rectal examination (DRE) | Interview with expert | Private hospital
Diagnosis
  TRUS guided biopsy | Mbabane Clinic | Private hospital
  Computed Tomography (CT scan) | Mbabane Clinic | Private hospital
  MRI scan | Phalala Fund | Private hospital
  X-ray | Mbabane Clinic | Private hospital
  Bone scan | Mbabane Clinic | Private hospital
Intervention/Treatment
  Watchful waiting (WW) | Interview with expert | Private hospital
  Surgery | Mbabane Clinic | Private hospital
  Radiotherapy | Phalala Fund, based on SA hospital fees | Private hospital
  Chemotherapy | Phalala Fund, based on SA hospital fees | Private hospital
  Androgen deprivation | Phalala Fund, based on SA hospital fees | Market price
  Hospitalization (local) | Phalala Fund, based on SA hospital fees | Private hospital
Other direct costs
  Transport and lodging costs in South Africa | Phalala Fund, based on SA hospital fees | Market price
  Follow-up care (year 1 after completing treatment): PSA testing, symptomology and clinical examination for metastatic cancer twice a year | Based on reported prevalence | Private hospital

Direct medical costs: Direct costs in this study include resource utilization for diagnosis, treatment (surgery, chemotherapy, radiotherapy and androgen deprivation therapy) and follow-up care. To estimate the direct costs, we estimated the average cost of each intervention, from screening through staging and treatment, and multiplied it by the number of patients who received the intervention.
The number of men diagnosed with prostate cancer was obtained from the national cancer registry [26]. All diagnosed cases were assumed to have undergone screening with a PSA test. Screening and diagnosis costs were obtained from private hospital and market pricing. Treatment costs, mainly radiation, chemotherapy and androgen deprivation therapy, were obtained from the Phalala Fund based on South African private hospital fees. In Eswatini, a majority of the management costs are borne by the Eswatini Government through the Phalala Fund. As per the standardized cancer care guidelines, we assumed that all men with confirmed prostate cancer in 2018 underwent screening and diagnosis tests and treatment, and incurred other direct costs including transport and accommodation. Follow-up care costs were estimated for one year for those reported alive in 2018.
Direct non-medical costs: Transport costs, including return trips, were estimated based on the patient follow-up visits required by the Eswatini Standardized Cancer Care Guidelines, which state that follow-up visits should occur every six months for the first two years and annually for up to five years following surgery [23]. Transport cost was estimated from data in a previous study of women with breast cancer receiving follow-up care at the Mbabane Government Cancer Unit [22]. We assumed that all men who completed treatment in 2018 had follow-up visits as per the guidelines. This study estimated one-year follow-up costs. Indirect costs: We estimated the monetary value of prostate cancer related productivity loss due to morbidity (patient sick leave days incurred as a result of seeking health care) and premature mortality. The human capital method was used to estimate these indirect costs [20]. We used average annual gross earnings computed in our previous study of women with breast cancer receiving follow-up care at the chemotherapy unit, Mbabane Government Hospital, Eswatini [22].
Morbidity costs: We estimated the number of sick leave days for men diagnosed with prostate cancer who were of labor participation age (18-60 years). Using findings from a previously published study [27], we assumed an average sick leave of 54 days per person, covering staging, treatment and follow-up care. Based on findings from a previous breast cancer study conducted in Eswatini [22], we assumed 20 working days per month and a full-time working day of 8 hours, with an estimated cost of $12 per workday ($1.5 per work hour). Mortality costs: To estimate the cost of productivity lost to premature death from prostate cancer, years of potential productive life lost (YPPLL) were calculated by subtracting age at death from the local retirement age of 60 years [28]. Age-group-specific prostate cancer deaths were estimated assuming the labor participation ages of Eswatini (18-60 years). We used the full employment rate and annual average earnings obtained from a previous study. Average YPPLL was multiplied by average annual earnings. Following health economic recommendations, future costs were discounted at 3% and 5% [19, 29]. The number of prostate cancer related deaths was obtained from the Eswatini Cancer Registry. In 2018, there were 31 prostate cancer related deaths, of which 4 occurred within the labor participation ages of Eswatini (18-60 years) [28].
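The morbidity valuation above is a simple human-capital calculation. A minimal sketch, using the stated assumptions (54 sick days per patient, 8-hour days at $1.5/hour, i.e. $12/day); function and constant names are illustrative:

```python
WORK_HOURS_PER_DAY = 8
WAGE_PER_HOUR = 1.5                               # USD, from the cited breast cancer study
DAILY_WAGE = WORK_HOURS_PER_DAY * WAGE_PER_HOUR   # $12 per workday
SICK_DAYS_PER_PATIENT = 54                        # assumed average (staging, treatment, follow-up)

def morbidity_cost(n_patients: int,
                   sick_days: int = SICK_DAYS_PER_PATIENT,
                   daily_wage: float = DAILY_WAGE) -> float:
    """Human capital method: value of work days lost while seeking care."""
    return n_patients * sick_days * daily_wage
```

Each affected patient thus accounts for 54 × $12 = $648 of lost productivity.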
Cancer mortality and years of potential productive life lost (YPLL): The number of prostate cancer related deaths was obtained from the Eswatini Cancer Registry, from which the years of productive life lost were calculated. In 2018, there were 31 prostate cancer related deaths, of which 4 occurred within the labor participation ages of Eswatini (18-60 years) [28].
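The YPPLL calculation and discounting of future earnings can be sketched as follows. This is a simplified illustration: whether discounting starts in the first year after death is an assumption here, and the function names are mine:

```python
RETIREMENT_AGE = 60  # local retirement age used in the study

def yppll(age_at_death: int) -> int:
    """Years of potential productive life lost (floored at zero)."""
    return max(0, RETIREMENT_AGE - age_at_death)

def mortality_cost(age_at_death: int, annual_earnings: float,
                   discount_rate: float = 0.03) -> float:
    """Present value of future earnings lost to premature death
    (human capital method); 3% base-case discount rate, 5% alternative."""
    return sum(annual_earnings / (1.0 + discount_rate) ** t
               for t in range(1, yppll(age_at_death) + 1))
```

With a zero discount rate this reduces to YPPLL × annual earnings, the undiscounted product described in the text.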
Estimation of annual costs: We computed the aggregate total costs of screening, diagnosis and treatment of prostate cancer in 2018 as follows:

Cost of disease = (Direct medical costs + Direct non-medical costs) + (Morbidity costs + Mortality costs)

Direct costs consist of direct medical costs and direct non-medical costs. Indirect costs consist of morbidity costs and mortality costs (patient time lost as a result of the condition and costs associated with premature mortality). All costs are reported in 2018 US dollars ($1 = SZL 14.5). Sensitivity analysis: Sensitivity analysis was performed using ±25% on the cost of follow-up of prevalent cancer cases, to account for cases unrecorded by the facilities. Results: Direct costs: In 2018, there were 90 prostate cancer cases, of which 89% were aged 60 years and above. The average age was 73 years. Table 2 shows average unit costs for treating prostate cancer, including other direct costs such as transportation and accommodation.
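The aggregation formula and the ±25% sensitivity bounds above can be sketched as follows (illustrative function names):

```python
def cost_of_illness(direct_medical: float, direct_non_medical: float,
                    morbidity: float, mortality: float) -> float:
    """Societal cost of illness = direct costs + indirect costs."""
    direct = direct_medical + direct_non_medical
    indirect = morbidity + mortality
    return direct + indirect

def sensitivity_bounds(total: float, pct: float = 0.25) -> tuple[float, float]:
    """+/-25% one-way bounds, used to account for unrecorded cases."""
    return total * (1.0 - pct), total * (1.0 + pct)
```

A total of $100 would therefore carry sensitivity bounds of $75 and $125.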
Table 2 Costs for screening, diagnosis and treatment of prostate cancer

Parameter | Variables included in the cost | Average 2018 USD
Screening
  Consultation fee | | 41
  Prostate Specific Antigen (PSA) | | 16
  Digital rectal examination (DRE) | | 32
Diagnosis
  Pathology | TRUS guided biopsy | 147
  Radiology | Computed Tomography (CT scan) to rule out chest, abdomen and pelvis metastasis | 862
  | Magnetic resonance imaging (MRI) scan | 1,034
  | X-ray to rule out effusion | 28
  | Bone scan in locally advanced prostate cancer | 607
  | Ultrasound scan | 103
Treatment
  Watchful waiting (WW) | PSA test every three months plus follow-up consultation fee | 58
  Surgery | Radical prostatectomy | 5,726
  | Orchiectomy | 5,726
  Radiotherapy (brachytherapy or external beam radiation) | Administered over a 5-week period | 16,647
  Chemotherapy | Administered in 5 cycles over a 5-week period | 7,498
  Androgen Deprivation Therapy (ADT) | Mostly Zoladex 0.8 mg intramuscular (IM) every 3 months | 1,268
  Symptom-relieving procedures (TURP/TURB) | Transurethral resection of the prostate or bladder | 5,872
Other direct costs
  Hospitalization costs (local) | Admission for symptom management procedures including TURP and orchidectomy | 5,872
  Hospital admission in step-down facility for late-stage treatment (in South Africa) | Patients requiring close monitoring following radiotherapy or surgery outside Eswatini | 1,206
  Transport and lodging costs in South Africa | All patients who received treatment in South Africa | 5,959
  Follow-up care (year 1 after completing treatment) | Follow-up consultation and PSA tests; symptomology and clinical examination for metastatic cancer twice a year | 94
Total | | 58,796

Cost distribution by disease stage is shown in Table 3.
Following the Eswatini Standard Cancer Care Guidelines, we assumed that all confirmed cases underwent the same screening, diagnosis and treatment pathway shown in Table 2 and the simplified referral pathway shown in Fig. 1. The average costs for the different pathways, including treatment interventions, differed by prostate cancer stage. Radical prostatectomy was more frequent in early stages of prostate cancer, whilst interventions such as chemotherapy were common in stages III and IV. Table 3 shows the distribution of prostate cancer costs by stage.

Table 3 Costs for staging, management and treatment of prostate cancer, stages I-IV

Staging and treatment variables | Unit cost ($) | I (T1) | II (T2) | III (T3) | IV (T4)
Consultation for assessment | 41 | 41 | 41 | 41 | 41
Screening and diagnosis
  Prostate Specific Antigen (PSA) | 16 | 48 | 48 | 48 | 48
  Digital rectal examination (DRE) | | | | |
  TRUS guided biopsy | 147 | 147 | 147 | 147 | 147
  MRI scan | 1,034 | 1,034 | 1,034 | 1,034 | 1,034
  Chest x-ray | 28 | 27.6 | 27.6 | 27.6 | 27.6
  Bone scan | 607 | 607 | 607 | 607 | 607
  Ultrasound | 103 | 103 | 103 | 103 | 103
  CT scan abdomen | 862 | 862 | 862 | 862 | 862
Treatment (prostate cancer prevalence in 2018 = 91 patients)
  Watchful waiting (WW): PSA test every three months plus follow-up consultation fee | 58 | 58 | 0 | 0 | 0
  Radical prostatectomy | 5,726 | 5,726 | 5,726 | 0 | 0
  Orchiectomy (surgical castration) | 5,726 | 5,726 | 5,726 | 5,726 | 5,726
  Radiotherapy | 16,648 | 16,648 | 16,648 | 16,648 | 16,648
  Chemotherapy | 7,498 | 0 | 0 | 7,498 | 7,498
  Symptom-relieving procedures (TURP/TURB) | 5,872 | 0 | 0 | 5,872 | 5,872
  Other supportive drugs: pain killers | 60 | 60 | 60 | 60 | 60
  Hormonal therapy (ADT), Zoladex 0.8 mg injectables | 1,268 | 1,268 | 1,268 | 1,268 | 1,268
Other costs
  Hospitalization costs (local) | 5,872 | 5,872 | 5,872 | 5,872 | 5,872
  Hospital admission in step-down facility for late-stage treatment | 1,206 | 1,206 | 1,206 | 1,206 | 1,206
  Transport and lodging cost (in RSA) | 5,959 | 5,959 | 5,959 | 5,959 | 5,959
  Follow-up care (year 1 after completing treatment) | 94 | 94 | 94 | 94 | 94
Total | 58,824 | 45,486 | 45,428 | 53,072 | 53,072

Radiation is not available in Eswatini, and patients are referred to private hospitals in South Africa. On average, radiotherapy is administered over a 5-week period [25]. The estimated unit cost for radiotherapy was $16,648, whilst chemotherapy was $7,498. In addition to treatment costs, all patients referred for radiotherapy incurred other direct costs, including transport, lodging and allowances for accompanying staff (nurse and driver), at a unit cost of $5,959 (Table 3).

Direct non-medical costs: Using an estimate from a previous study [22], the average transport cost per follow-up visit, including return, was $11 (interquartile range (IQR) $4-46). On average, post-treatment follow-up visits should occur every 6 months, resulting in four trips a year including return. We assumed that all patients visited the hospital in the company of a relative. The total average transport cost, including return, was estimated at $5,029 (between $3,771 and $6,287, lower and upper bounds).
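The guideline follow-up schedule and the companion assumption used for transport costing can be sketched as follows; the $11 round-trip cost is the study's estimate, while the function names and the per-year framing are illustrative:

```python
def followup_visits(year_after_surgery: int) -> int:
    """Guideline schedule: a visit every 6 months for the first two years,
    annually for years 3-5, none thereafter."""
    if 1 <= year_after_surgery <= 2:
        return 2
    if 3 <= year_after_surgery <= 5:
        return 1
    return 0

def annual_transport_cost(year_after_surgery: int,
                          cost_per_round_trip: float = 11.0,
                          companions: int = 1) -> float:
    """Transport cost per patient-year; the study assumed each patient
    travelled in the company of one relative."""
    return (followup_visits(year_after_surgery)
            * cost_per_round_trip
            * (1 + companions))
```

In year 1 this yields 2 visits × $11 × 2 travellers = $44 per patient.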
Direct costs

In 2018, there were 90 prostate cancer cases, of which 89% were aged 60 years and above; the average age was 73 years. Table 2 shows the average unit costs for treating prostate cancer cases, including other direct costs such as transportation and accommodation.

Table 2. Costs for screening, diagnosis and treatment of prostate cancer (average 2018 USD)
Screening
  Consultation fee: 41
  Prostate Specific Antigen (PSA): 16
  Digital rectal examination (DRE): 32
Diagnosis
  Pathology: TRUS-guided biopsy: 147
  Radiology: computed tomography (CT scan) to rule out chest, abdomen and pelvis metastasis: 862
  Magnetic resonance imaging (MRI) scan: 1,034
  X-ray to rule out effusion: 28
  Bone scan in locally advanced prostate cancer: 607
  Ultrasound scan: 103
Treatment
  Watchful waiting (WW): PSA test every three months plus follow-up consultation fee: 58
  Surgery: radical prostatectomy: 5,726
  Orchiectomy: 5,726
  Radiotherapy: administered over a 5-week period: 16,647
  Chemotherapy (brachytherapy or external beam radiation): administered in 5 cycles over a 5-week period: 7,498
  Androgen deprivation therapy (ADT): mostly Zoladex 0.8 mg intramuscular (IM) every 3 months: 1,268
  Symptom-relieving procedures: transurethral resection of the prostate or bladder (TURP/TURB): 5,872
Other direct costs
  Hospitalization costs (local): admission for symptom-management procedures including TURP and orchidectomy: 5,872
  Hospital admission in a step-down facility for late-stage treatment (in South Africa): patients requiring close monitoring after radiotherapy or surgery outside Eswatini: 1,206
  Transport and lodging cost in South Africa: all patients who received treatment in South Africa: 5,959
  Follow-up care (year 1 after completing treatment): follow-up consultation and PSA tests (PSA testing, symptomology and clinical examination for metastatic cancer twice a year): 94
Total: 58,796

Cost distribution by disease stage is shown in Table 3. Following the Eswatini Standard Cancer Care Guidelines, we assumed that all confirmed cases underwent the same screening, diagnosis and treatment pathway shown in Table 2 and the simplified referral pathway shown in Fig. 1. The average cost of the pathway, including treatment interventions, differed by prostate cancer stage: radical prostatectomy was more frequent in early-stage disease, whereas interventions such as chemotherapy were common in stages III and IV.

Table 3. Costs for staging, management and treatment of prostate cancer, stages I-IV (USD)
Staging and treatment variable: Unit cost | I (T1) | II (T2) | III (T3) | IV (T4)
Consultation for assessment: 41 | 41 | 41 | 41 | 41
Screening and diagnosis
  Prostate Specific Antigen (PSA): 16 | 48 | 48 | 48 | 48
  Digital rectal examination (DRE): - | - | - | - | -
  TRUS-guided biopsy: 147 | 147 | 147 | 147 | 147
  MRI scan: 1,034 | 1,034 | 1,034 | 1,034 | 1,034
  Chest x-ray: 28 | 27.6 | 27.6 | 27.6 | 27.6
  Bone scan: 607 | 607 | 607 | 607 | 607
  Ultrasound: 103 | 103 | 103 | 103 | 103
  CT scan abdomen: 862 | 862 | 862 | 862 | 862
Treatment (prostate cancer prevalence in 2018 = 91 patients)
  Watchful waiting (WW; PSA test every three months plus follow-up consultation fee): 58 | 58 | 0 | 0 | 0
  Radical prostatectomy: 5,726 | 5,726 | 5,726 | 0 | 0
  Orchiectomy (surgical castration): 5,726 | 5,726 | 5,726 | 5,726 | 5,726
  Radiotherapy: 16,648 | 16,648 | 16,648 | 16,648 | 16,648
  Chemotherapy: 7,498 | 0 | 0 | 7,498 | 7,498
  Symptom-relieving procedures (TURP/TURB): 5,872 | 0 | 0 | 5,872 | 5,872
  Other supportive drugs (pain killers): 60 | 60 | 60 | 60 | 60
  Hormonal therapy (ADT), Zoladex 0.8 mg injectables: 1,268 | 1,268 | 1,268 | 1,268 | 1,268
Other costs
  Hospitalization costs (local): 5,872 | 5,872 | 5,872 | 5,872 | 5,872
  Hospital admission in step-down facility for late-stage treatment: 1,206 | 1,206 | 1,206 | 1,206 | 1,206
  Transport and lodging cost (in RSA): 5,959 | 5,959 | 5,959 | 5,959 | 5,959
  Follow-up care (year 1 after completing treatment): 94 | 94 | 94 | 94 | 94
Total: 58,824 | 45,486 | 45,428 | 53,072 | 53,072

Radiation therapy is not available in Eswatini, and patients are referred to private hospitals in South Africa. On average, radiotherapy is administered over a period of 5 weeks [25]. The estimated unit cost for radiotherapy was $16,648, whilst that for chemotherapy was $7,498. In addition to treatment costs, all patients referred for radiotherapy incurred other direct costs, including transport, lodging and allowance for accompanying staff (nurse and driver), at a unit cost of $5,959 (Table 3).

Direct non-medical costs

Using an estimate from a previous study [22], the average transport cost per follow-up visit, including the return trip, was $11 (interquartile range (IQR) $4-46). On average, post-treatment follow-up visits take place every 6 months, resulting in four trips per year including return journeys. We assumed that all patients visited the hospital in the company of a relative. The total average transport cost, including return trips, was estimated at $5,029 (between $3,771 and $6,287, estimated with lower and upper bounds).
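The per-patient transport figure implied by these assumptions ($11 per trip, visits every 6 months giving four trips per year including returns, and one accompanying relative) can be sketched as below. This is an illustrative decomposition, not the paper's own calculation; the cohort total of $5,029 comes from the study's survey data rather than from this multiplication, and the variable names are ours.

```python
# Illustrative sketch of the per-patient annual follow-up transport cost
# under the stated assumptions (not the paper's own calculation).
COST_PER_TRIP = 11   # USD, average transport cost per follow-up trip incl. return
TRIPS_PER_YEAR = 4   # visits every 6 months -> four trips a year incl. returns
TRAVELLERS = 2       # patient plus one accompanying relative

per_patient_annual = COST_PER_TRIP * TRIPS_PER_YEAR * TRAVELLERS
print(per_patient_annual)  # 88 USD per patient per year under these assumptions
```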
Indirect costs

Productivity loss due to sick leave taken by patients seeking health care for prostate cancer was estimated at $58,320 (Table 4). Of the 90 patients diagnosed with prostate cancer, 13 men were within the labor-participating ages; they were assumed to take an average of 54 sick-leave days per person, excluding the 14 days of short-term sick leave usually covered by employers. A total of 31 men died of prostate cancer in 2018, of whom 4 were younger than 60 years. Costs due to premature mortality from prostate cancer were estimated at $113,760 (Table 5).
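The morbidity cost reported in Table 4 is a simple product of sick-leave days, the number of patients, and the cost per workday. A minimal check using the values as reported in Table 4:

```python
# Morbidity (sick-leave) cost as reported in Table 4:
# sick-leave days x number of patients x cost per workday.
sick_leave_days = 54
patients = 58          # number of patients alive in 2018 (Table 4)
cost_per_workday = 12  # USD

productivity_loss = sick_leave_days * patients * cost_per_workday
print(productivity_loss)  # 37584, matching the Table 4 total
```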
Table 4. Costs due to sick-leave days associated with prostate cancer
Number of sick-leave days: 54
Number of patients alive in 2018: 58
Cost per workday ($): 12
Total productivity loss due to prostate cancer in 2018, all patients ($): 37,584

Table 5. Mortality costs for prostate cancer
Age group: Lost YPPLL (average per patient) | Premature deaths before age 60 | Average annual income ($) | Mortality cost ($, 3% discount rate, multiplied by patients in the age group) | Mortality cost ($, 5% discount rate, multiplied by patients in the age group)
46-51: 49 | 1 | 2,690 | 29,632 | 10,676
52-56: 54 | 3 | 2,690 | 84,128 | 27,311
Totals: 103 | 4 | 2,690 | 113,760 | 37,987
YPPLL: 211; average annual gross income = $2,690
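The premature-mortality costs in Table 5 follow the human capital approach: earnings lost over the years of potential productive life lost are discounted to present value at 3% and 5%. The exact inputs behind each table cell are not fully specified in the text, so the sketch below is illustrative of the method rather than a reproduction of the figures; the function name and the 11-year example are our assumptions.

```python
def mortality_cost(years_lost, annual_income, rate):
    """Present value of earnings lost over the years of potential
    productive life lost, discounted at the given annual rate
    (standard human capital approach)."""
    return sum(annual_income / (1 + rate) ** t for t in range(1, years_lost + 1))

income = 2690  # average annual gross income used in the study (USD)
pv_3 = mortality_cost(11, income, 0.03)  # e.g. a death 11 years before age 60
pv_5 = mortality_cost(11, income, 0.05)
# A higher discount rate always yields a lower present value,
# consistent with the 3% vs 5% columns in Table 5.
```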
Total annual costs

The total annual cost of prostate cancer was estimated at $6.2 million (between $4.7 million and $7.8 million, estimated with lower and upper bounds; Table 6). Forty-four percent (40) of the cases were diagnosed at stage IV, whilst only 11% (10) were diagnosed at stage I. Management of prostate cancer stages III and IV formed the greatest share of the costs, contributing about $1.2 million and $2.1 million respectively; the total costs of stages I and II were estimated at $0.5 million and $0.8 million. Transport and accommodation costs (incurred by patients referred to South Africa) were the highest of the other direct costs, contributing about $0.5 million. In 2018, there were 31 prostate cancer-related deaths, of which only 4 occurred within Eswatini's labor-participating ages (18-60 years). The total years of productive life lost (YPLL) was 221. Indirect costs were estimated at $0.24 million, and the majority (96%, $0.2 million) was productivity loss from premature mortality (Table 6).
Table 6. Total annual cost estimation for prostate cancer, direct and indirect costs ($)
Parameter: Number (prevalence 2018) | Average cost (2018) | Base cost (2018) | Lower (-25%) | Higher (+25%)
Direct costs (health care costs)
  Consultation fee: 90 | 41 | 3,690 | 2,768 | 4,613
Screening and diagnosis
  Prostate Specific Antigen (PSA): 90 | 16 | 1,448 | 1,086 | 1,810
  Digital rectal examination (DRE): 8 | 0 | 0 | 0 | 0
  TRUS-guided biopsy: 90 | 147 | 13,213 | 9,910 | 16,516
  MRI scan: 90 | 1,034 | 93,060 | 69,795 | 116,325
  Chest x-ray: 90 | 28 | 2,484 | 1,863 | 3,105
  Bone scan: 90 | 607 | 54,630 | 40,973 | 68,288
  Ultrasound: 90 | 103 | 9,270 | 6,953 | 11,588
  CT scan abdomen: 90 | 862 | 77,580 | 58,185 | 96,975
Treatment
  Stage I: 10 | 45,486 | 454,861 | 341,146 | 568,577
  Stage II: 17 | 45,428 | 772,278 | 579,209 | 965,348
  Stage III: 23 | 53,072 | 1,220,659 | 915,494 | 1,525,824
  Stage IV: 40 | 53,072 | 2,122,886 | 1,592,164 | 2,653,607
Other direct costs
  Hospitalization costs (local): 90 | 210 | 18,900 | 14,175 | 23,625
  Hospital admission in step-down facility for late-stage treatment: 90 | 1,206 | 108,540 | 81,405 | 135,675
  Transport and lodging cost (in RSA): 90 | 5,959 | 536,310 | 402,233 | 670,388
  Follow-up care (year 1 after completing treatment): 60 | 2,662 | 159,720 | 119,790 | 199,650
Total direct: | 209,892 | 5,645,839 | 4,234,379 | 7,057,299
Direct non-medical costs
  Transport costs for follow-up visits, patient: 59 | 2,513 | 148,267 | 111,200 | 185,334
  Transport costs for follow-up visits, accompanying relative: 59 | 2,513 | 148,267 | 111,200 | 185,334
Total direct non-medical costs: | 5,026 | 296,534 | 222,401 | 370,668
Indirect costs
  Morbidity costs due to sick leave: 13 | 648 | 8,424 | 6,318 | 10,530
  Premature mortality costs: 4 | 57,675 | 230,700 | 173,025 | 288,375
Total indirect costs: | 58,323 | 239,124 | 179,343 | 298,905
Total: | 268,215 | 6,181,497 | 4,636,123 | 7,726,871
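Table 6's bottom line can be cross-checked: the grand total is the sum of the three cost categories, and the Lower/Higher columns are the base costs scaled by -25% and +25%. A minimal check using the category totals from Table 6:

```python
# Cross-check of Table 6: grand total and +/-25% sensitivity bounds.
total_direct = 5_645_839
total_direct_non_medical = 296_534
total_indirect = 239_124

grand_total = total_direct + total_direct_non_medical + total_indirect
print(grand_total)  # 6181497, the base-case total in Table 6

# Lower and upper bounds scale the base cost by -25% / +25%.
lower = round(total_direct * 0.75)   # 4234379, as in Table 6
higher = round(total_direct * 1.25)  # 7057299, as in Table 6
```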
Discussion

The current study assessed the costs associated with prostate cancer in Eswatini, covering screening, diagnosis, treatment and follow-up care. The study considered direct costs, including follow-up care costs within one year of diagnosis.
To our knowledge, this is the first study to estimate the economic burden of prostate cancer in Eswatini. The estimated annual prostate cancer burden was $6.1 million in 2018, and about 89% of the patients were aged 60 years and above. Given the Eswatini Standardized Cancer Care and Guidelines [21], we assumed that all patients diagnosed in 2018 underwent the screening and diagnostic procedures. Treatment costs varied by cancer stage, reflecting the utilization of treatment modalities per stage; hence the high costs observed in stages III ($1.2 million) and IV ($2.1 million) versus stages I and II ($0.5 million and $0.8 million respectively). The findings indicate that managing advanced stages of the disease increases health care costs, in accordance with findings from other studies. A study assessing health care costs associated with prostate cancer in Canada reported costs per stage of I ($1,297), II ($3,289), III ($1,495), IV ($5,629) and V ($16,020) [30]. Similarly, a study conducted in Iran concluded that health care costs for metastatic stages were the highest compared with treatment costs for localized prostate cancer [11]; other studies reached similar conclusions [31, 32]. Slightly different findings came from the United States of America, where high treatment costs were reported for initial diagnosis and the metastatic phase, with radical prostatectomy being the main cost driver [33]. Whilst in this study we found lower costs with early-stage cancer, both studies observed increasing costs with advanced cancer stages; the differences could partly be explained by the proportion of men (20%) diagnosed with early stages of prostate cancer in our study. A systematic review of registry-based studies assessing the economic burden of prostate cancer in Europe found that cost distribution across prostate cancer stages varied across countries [34]. This can be attributed to differences in prostate cancer detection and country-specific management practice [34].
The authors also acknowledged differences in the methodologies applied in the studies as a possible explanation for the varying outcomes observed. There seems to be a lack of global consensus on prevention strategies, particularly the age of screening. The United States Preventive Services Task Force (USPSTF) recommends against routine screening for prostate cancer in men 70 years and older, particularly using prostate-specific antigen screening [35]. The Eswatini Standardized Cancer Care and Guidelines also discourage routine prostate cancer screening, with an exception for men 50 years and above or symptomatic men [23]. Other studies argue that increased screening leads to increased detection of low-grade cancers, resulting in patients with indolent tumors receiving aggressive treatment [36]. In LICs such as Eswatini, however, the challenge likely lies in the opposite direction from overdiagnosis and consequent overtreatment: lack of screening and comprehensive treatment remains the greatest challenge for most LMICs and LICs. Eswatini is no different from other low- and middle-income countries, for which late diagnosis coupled with limited treatment options remains a challenge. In Eswatini in 2018, more than 80% of patients were diagnosed with advanced cancer (stages III and IV), yet major treatments, including radiotherapy and androgen deprivation therapy (ADT), are not available in the country. Accessing care outside the country comes with additional costs, mainly accommodation, transportation and meals, for patients referred to South Africa. Lack of specialized care and costly care have been reported in other countries, particularly in Africa, where mortality from prostate cancer is highest and cancer treatment guidelines are lacking [4, 37]. There is an urgent need to strengthen health system enablers [38].
These include investments in establishing local cancer treatment centers, optimizing health workforce competencies throughout the continuum of care, and ensuring the availability of medical products and diagnostic technologies to facilitate local diagnosis, staging and management. Despite the evidence that prostate cancer is a major public health challenge, literature on its economic burden remains limited, and severely so in low-income countries, particularly in the sub-Saharan region. Findings from a systematic review of prostate cancer cost studies indicated a need not only for harmonized methodologies but also for expanded research in this field [39]. Similarly, another systematic literature review of registry-based studies reached the same conclusion on the need for further research in cost-of-illness studies focusing on prostate cancer [40]. In this study we assessed indirect costs by estimating the costs associated with unpaid sick-leave days and productivity loss due to premature mortality from prostate cancer. Of the total costs, indirect costs accounted for 4.2% ($0.24 million). Comparing these findings with previous cost-analysis studies for prostate cancer, most studies did not assess indirect costs; however, a study from Sweden reported a low proportion of productivity loss associated with prostate cancer [9]. Compared with studies of other cancer types conducted in Eswatini [22, 41], the indirect costs in this study accounted for a smaller share of the total cost. This could partly be explained by the fact that most participants (89%) were above the labor-participating ages (18-60 years) and few deaths occurred below age 60 years. A similar pattern was observed in Sweden, where the finding was attributed to the low number of prostate cancer cases and deaths among labor-participating groups [9].
The key strength of our study is that it is the first to estimate the costs associated with prostate cancer in Eswatini, considering both direct and indirect costs. Our findings have implications for health care system strengthening and resource allocation in Eswatini, and we present a description of resource utilization and associated health care costs in managing prostate cancer. An important limitation is the absence of index costs in Eswatini; we used private and market prices as the best available price estimates. The estimates presented were based on available data; however, they could be conservative for several reasons. First, due to limited data availability, we used information from the literature and interviews with experts for some treatment variables, and such information can be subject to context and preferences. Second, we only considered costs in the first year of diagnosis, yet costs for follow-up care can extend beyond five years [6, 42]. Lastly, we employed the human capital approach to estimate the costs related to productivity loss associated with prostate cancer. Whilst this is a commonly applied approach, it is criticized for excluding individuals above the labor-participation age group, even though some of them may still be involved in labor activities that provide meaningful income. It has also been argued that this exclusion has severe implications when valuing productivity loss for prostate cancer, given that a majority of patients are diagnosed after they have passed retirement age.

Conclusions

The findings of this study indicate that the costs attributed to prostate cancer are substantial and a public health concern. The findings were consistent with those of studies from other countries, a majority of which were conducted in developed countries. The study demonstrated the interventions used and their associated costs.
Radiotherapy was the most expensive treatment intervention in Eswatini, whereas other studies cited surgery-related interventions as the major cost driver. This is a reasonable finding in the context of Eswatini, given that radiotherapy is not available locally and patients are referred to private hospitals outside the country. The findings point to areas where policy makers can pursue cost containment for therapeutic procedures for prostate cancer. The study findings also suggest that prostate cancer costs are likely to increase in the future, and that adherence to the Eswatini Standardized Cancer Care and Guidelines needs strengthening to ensure that resources are invested in diagnosing the most at-risk groups.

Supplementary Information

Additional file 1. Direct non-medical costs patient questionnaires.pdf
Background: Prostate cancer is the fifth leading cause of cancer mortality among men worldwide. However, there are limited data on the costs associated with prostate cancer in low- and middle-income countries, particularly in the sub-Saharan region. From a societal perspective, this study aims to estimate the cost of prostate cancer in Eswatini. Methods: This prevalence-based cost-of-illness study used diagnosis-specific data from national registries to estimate costs associated with prostate cancer during 2018, employing both top-down and bottom-up costing approaches. Cost data included health care utilization, transport, sick-leave days and premature death. Results: The total annual cost of prostate cancer was $6.2 million (ranging between $4.7 million and $7.8 million, estimated with lower and upper bounds). Average costs per patient for radiotherapy, chemotherapy and other direct non-medical costs (transport and lodging) were the highest cost drivers, at $16,648, $7,498 and $5,959 respectively, whilst indirect costs from productivity loss due to sick leave and premature mortality were estimated at $58,320 and $113,760 respectively. The cost of managing prostate cancer increased with advanced disease, and costs were highest for prostate cancer stages III and IV, at $1.1 million and $1.9 million respectively. Conclusions: Prostate cancer is a public health concern in Eswatini, and it imposes a significant economic burden on society. These findings point to areas where policy makers can pursue cost containment for therapeutic procedures for prostate cancer, and to the need for strategies to increase efficiency in the health care system for greater value from health care services.
Background: Among cancers, prostate cancer is the third commonest cancer after breast and lung cancer and the fifth cause of cancer mortality among men [1, 2]. In 2018, the number of new cases increased from 1.1 million in 2012 to 1.3 million in 2018 accounting for about 7.1% of the total cancer cases globally and 15% among men [2]. The causes of prostate cancer is attributable to genetic and environmental factors [2]. However, the incidence and mortality rate vary substantially within and across regions. Notably, high-income countries (HICs) reports high incidence rate compared to low- and -middle income countries (LMICs) [2]. In contrast, mortality rate is higher in developing countries particularly in sub-Saharan Africa regions [3]. The inequalities observed across regions with respect to prostate cancer incidence and mortality are in part linked to availability of effective screening and improved treatment modalities which are directly linked to resources availability [3, 4]. In Eswatini, compared to other common cancers, prostate cancer is ranked third accounting for 7.6% of total new cases 1074 in 2018 [5]. Prostate cancer causes clinical and economic burden to patients and governments. Screening tests include prostate-specific antigen (PSA) and digital rectal examination (DRG) [6, 7]. A positive screening tests results indicate further investigation [6]. Whilst PSA is the frequent screening test, it has been argued that PSA could potentially cause harm by over diagnosing low risk cancers that otherwise would have remained without clinical consequences for life time if left untreated [8]. In turn, this increases costs for prostate cancer [9]. In Sweden, annual costs associated with prostate cancer (screening, diagnosis and treatment) was estimated at €281 million [9]. In Ontario, the mean per patient cost for prostate cancer–related medication was $1211 [10]. In Iran, the total annual cost of prostate cancer was estimated at $2900 million [11]. 
Other studies have estimated the economic burden of prostate cancer alongside other cancer types. A study of European countries ranked prostate cancer fourth in health care costs, after lung (€18.8 billion), breast (€15 billion) and colorectal cancer (€13.1 billion) [12]. Similarly, in Korea, prostate cancer was among the top four cancers contributing to the economic burden of disease [13]. There is limited evidence on the economic burden of prostate cancer from LMICs. Estimating the economic burden of disease provides insight into treatment modalities and their associated costs. This study aims to investigate the societal cost of prostate cancer in Eswatini during 2018. Conclusions: The findings of the study indicated that the costs attributed to prostate cancer were substantial and a public health concern. The findings were consistent with those of other countries, the majority of them developed countries. The study identified the interventions and their associated costs. Radiotherapy was the most expensive treatment intervention in Eswatini, whereas other studies cited surgery-related interventions as the major cost driver. This is a reasonable finding in the context of Eswatini, given that radiotherapy treatment is not available locally and patients are referred to private hospitals outside the country. The findings point to areas where policy makers can pursue cost containment in therapeutic procedures for prostate cancer. They also suggest that prostate cancer costs are likely to increase in the future, and that there is a need to strengthen adherence to the Eswatini Standardized Cancer Care and Guidelines in order to ensure that resources are invested in diagnosing the groups most at risk.
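The prevalence-based costing approach described above (direct medical, direct non-medical and indirect costs summed from a societal perspective) can be sketched in a few lines. All patient counts, unit costs and wage figures below are hypothetical placeholders for illustration, not the study's actual inputs:

```python
# Minimal sketch of a prevalence-based cost-of-illness aggregation.
# Every number here is a hypothetical placeholder, NOT from the study.
patients = {"stage_I": 40, "stage_II": 60, "stage_III": 50, "stage_IV": 45}
direct_cost_per_patient = {"stage_I": 2_000.0, "stage_II": 4_000.0,
                           "stage_III": 22_000.0, "stage_IV": 42_000.0}

# Direct costs: bottom-up, patient counts times per-patient cost by stage.
direct = sum(n * direct_cost_per_patient[s] for s, n in patients.items())

# Indirect costs via a human-capital approach: productive time lost to
# sick leave and premature death, valued at an (assumed) average wage.
avg_annual_wage = 6_480.0      # hypothetical
sick_leave_years = 9.0         # hypothetical years lost to sick leave
years_lost_to_death = 17.6     # hypothetical productive years lost
indirect = (sick_leave_years + years_lost_to_death) * avg_annual_wage

total_societal_cost = direct + indirect
```

The stage-wise structure mirrors the study's finding that costs rise with disease stage; the wage-based valuation of lost time is one common choice among several for indirect costs.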
Background: Prostate cancer is the fifth leading cause of cancer mortality among men worldwide. However, there are limited data on the costs associated with prostate cancer in low- and middle-income countries, particularly in the sub-Saharan region. From a societal perspective, this study aims to estimate the cost of prostate cancer in Eswatini. Methods: This prevalence-based cost-of-illness study used diagnosis-specific data from national registries to estimate costs associated with prostate cancer during 2018. The prevalence-based approach employed both top-down and bottom-up costing. Cost data included health care utilization, transport, sick leave days and premature death. Results: The total annual cost of prostate cancer was $6.2 million (ranging between $4.7 million and $7.8 million, estimated with lower and upper bounds). Average costs per patient for radiotherapy, chemotherapy and other non-medical direct costs (transport and lodging) were the highest cost drivers, at $16,648, $7,498 and $5,959 respectively, whilst indirect costs comprising productivity losses due to sick leave and premature mortality were estimated at $58,320 and $113,760 respectively. The cost of managing prostate cancer increased with advanced disease and was highest for stages III and IV, at $1.1 million and $1.9 million respectively. Conclusions: Prostate cancer is a public health concern in Eswatini, and it imposes a significant economic burden on society. These findings point to areas where policy makers can pursue cost containment in therapeutic procedures for prostate cancer, and to the need for strategies that increase efficiency in the health care system for greater value from health care services.
13,191
309
[ 516, 388, 294, 94, 600, 2342, 203, 115, 674, 123, 156, 56, 122, 30, 923, 99, 255, 445, 24 ]
24
[ "costs", "cancer", "prostate", "prostate cancer", "treatment", "eswatini", "estimated", "care", "cost", "follow" ]
[ "costs prostate cancer", "countries ranked prostate", "prostate cancer societal", "prostate cancer costs", "africa mortality prostate" ]
null
[CONTENT] Prostate cancer | Cost-of-illness | Eswatini | Premature mortality | Prostate antigen test [SUMMARY]
null
[CONTENT] Prostate cancer | Cost-of-illness | Eswatini | Premature mortality | Prostate antigen test [SUMMARY]
[CONTENT] Prostate cancer | Cost-of-illness | Eswatini | Premature mortality | Prostate antigen test [SUMMARY]
[CONTENT] Prostate cancer | Cost-of-illness | Eswatini | Premature mortality | Prostate antigen test [SUMMARY]
[CONTENT] Prostate cancer | Cost-of-illness | Eswatini | Premature mortality | Prostate antigen test [SUMMARY]
[CONTENT] Cost of Illness | Eswatini | Financial Stress | Health Care Costs | Humans | Male | Prostatic Neoplasms [SUMMARY]
null
[CONTENT] Cost of Illness | Eswatini | Financial Stress | Health Care Costs | Humans | Male | Prostatic Neoplasms [SUMMARY]
[CONTENT] Cost of Illness | Eswatini | Financial Stress | Health Care Costs | Humans | Male | Prostatic Neoplasms [SUMMARY]
[CONTENT] Cost of Illness | Eswatini | Financial Stress | Health Care Costs | Humans | Male | Prostatic Neoplasms [SUMMARY]
[CONTENT] Cost of Illness | Eswatini | Financial Stress | Health Care Costs | Humans | Male | Prostatic Neoplasms [SUMMARY]
[CONTENT] costs prostate cancer | countries ranked prostate | prostate cancer societal | prostate cancer costs | africa mortality prostate [SUMMARY]
null
[CONTENT] costs prostate cancer | countries ranked prostate | prostate cancer societal | prostate cancer costs | africa mortality prostate [SUMMARY]
[CONTENT] costs prostate cancer | countries ranked prostate | prostate cancer societal | prostate cancer costs | africa mortality prostate [SUMMARY]
[CONTENT] costs prostate cancer | countries ranked prostate | prostate cancer societal | prostate cancer costs | africa mortality prostate [SUMMARY]
[CONTENT] costs prostate cancer | countries ranked prostate | prostate cancer societal | prostate cancer costs | africa mortality prostate [SUMMARY]
[CONTENT] costs | cancer | prostate | prostate cancer | treatment | eswatini | estimated | care | cost | follow [SUMMARY]
null
[CONTENT] costs | cancer | prostate | prostate cancer | treatment | eswatini | estimated | care | cost | follow [SUMMARY]
[CONTENT] costs | cancer | prostate | prostate cancer | treatment | eswatini | estimated | care | cost | follow [SUMMARY]
[CONTENT] costs | cancer | prostate | prostate cancer | treatment | eswatini | estimated | care | cost | follow [SUMMARY]
[CONTENT] costs | cancer | prostate | prostate cancer | treatment | eswatini | estimated | care | cost | follow [SUMMARY]
[CONTENT] cancer | prostate | prostate cancer | economic burden | burden | cancers | economic | cause | incidence | cost prostate cancer [SUMMARY]
null
[CONTENT] costs | prostate | table | prostate cancer | stage | cancer | treatment | total | average | transport [SUMMARY]
[CONTENT] findings | countries | intervention | cancer | study | costs | radiotherapy | eswatini | prostate cancer | prostate [SUMMARY]
[CONTENT] cancer | prostate | costs | prostate cancer | eswatini | direct | follow | treatment | average | estimated [SUMMARY]
[CONTENT] cancer | prostate | costs | prostate cancer | eswatini | direct | follow | treatment | average | estimated [SUMMARY]
[CONTENT] fifth ||| ||| Eswatini [SUMMARY]
null
[CONTENT] annual | $6.2 million | between $ 4.7 million | 7.8 million ||| 16,648 | 7,498 | 5,959 | 58,320 | 113,760 ||| IV | 1.1million | 1.9million [SUMMARY]
[CONTENT] Eswatini ||| [SUMMARY]
[CONTENT] fifth ||| ||| Eswatini ||| 2018 ||| ||| days ||| ||| annual | $6.2 million | between $ 4.7 million | 7.8 million ||| 16,648 | 7,498 | 5,959 | 58,320 | 113,760 ||| IV | 1.1million | 1.9million ||| Eswatini ||| [SUMMARY]
[CONTENT] fifth ||| ||| Eswatini ||| 2018 ||| ||| days ||| ||| annual | $6.2 million | between $ 4.7 million | 7.8 million ||| 16,648 | 7,498 | 5,959 | 58,320 | 113,760 ||| IV | 1.1million | 1.9million ||| Eswatini ||| [SUMMARY]
Epidemiology of congenital heart disease in Brazil.
26107454
Congenital heart disease is an abnormality of cardiocirculatory structure or function present from birth, even if diagnosed later. It can result in death in utero, in childhood or in adulthood. It accounted for 6% of infant deaths in Brazil in 2007.
INTRODUCTION
Prevalence was calculated by applying coefficients as rates for estimating health problems. The study compares the literature with governmental registries. An estimate of 9 per 1,000 births was adopted, and subtype-specific prevalence rates were applied to the births of 2010. Estimates of births with congenital heart disease were compared with reports to the Ministry of Health and analyzed by descriptive methods, using rates and coefficients presented in tables.
METHODS
The estimated incidence in Brazil is 25,757 new cases/year, distributed as follows: North 2,758; Northeast 7,570; Southeast 10,112; South 3,329; and Midwest 1,987. In 2010, 1,377 cases of babies with congenital heart disease were reported to the Live Birth Information System of the Ministry of Health, representing 5.3% of the estimate for Brazil. In the same period, the most common subtypes were: ventricular septal defect (7,498); atrial septal defect (4,693); persistent ductus arteriosus (2,490); pulmonary stenosis (1,431); tetralogy of Fallot (973); coarctation of the aorta (973); transposition of the great arteries (887); and aortic stenosis (630). The prevalence of congenital heart disease for 2009 was 675,495 children and adolescents and 552,092 adults.
RESULTS
In Brazil, the prevalence of congenital heart disease is underreported, signaling the need for adjustments in the registration methodology.
CONCLUSION
[ "Adolescent", "Adult", "Age Distribution", "Brazil", "Child", "Child, Preschool", "Disease Notification", "Female", "Heart Defects, Congenital", "Humans", "Infant", "Infant, Newborn", "Male", "Prevalence", "Registries", "Young Adult" ]
4462968
INTRODUCTION
Congenital Heart Disease (CHD) is an abnormality of cardiocirculatory structure or function that is present from birth, even if diagnosed later[1]. It varies in severity, from communications between cavities that regress spontaneously to major malformations requiring several surgical or catheterization procedures. It can result in death in utero, in childhood or in adulthood[2]. Globally, 130 million children are born each year. Of these, four million die in the neonatal period, that is, in the first 30 days of life[3], and 7% of these fatalities are related to CHD[4]. In 2007, in Brazil, 6% of deaths in children under one year of age were due to CHD[5]. Among congenital malformations, cardiovascular abnormalities are the most common cause of infant mortality: 40% in a study in São Paulo, Brazil[6] and 26.6% to 48.1% in the US[7,8]. Developmental delays and cognitive deficits are associated with congenital heart disease in 20% to 30% of cases[9-11]. Hoffman & Kaplan reported prevalence rates varying from 4:1,000 to 50:1,000 births; the highest rates reflect the inclusion of low-severity lesions that resolve without medical intervention[12]. In Brazil, epidemiological studies of congenital heart disease with different designs report coefficients ranging from 5.494[13] to 7.17 per 1,000 births[14]. For a group with low birth weight, the prevalence rate ranged from 10.7 to 40.7:1,000 births[15]. Amorim, in Minas Gerais, Brazil, analyzing data on 29,770 newborns from 1990 to 2003, found a prevalence of congenital heart defects of 9.58:1,000 births, and of 87.72:1,000 among stillbirths[16]. Rivera, in Alagoas, Brazil, for the same age group, found a prevalence of 13.2:1,000 births[17]. A review and meta-analysis of the global prevalence of congenital heart disease included 114 studies, with a study population of 24,091,867 births. 
The prevalence rate, estimated at 9.1 per 1,000 births, has remained stable over the last 15 years. This corresponds to 1.35 million newborns with CHD each year[18]. In this study, prevalence is the magnitude of the event at a given time[19], and the coefficient of 9:1,000 births was adopted as the basis for calculating CHD in Brazil and in each federal unit. The scarcity of specific bibliography and reliable statistics on the Brazilian population with congenital heart disease forces an approximation to the international literature, in order to estimate the prevalence of CHD in Brazil and compare it with the official notifications, thereby serving as a basis for formulating public policies on more realistic data. Objective: The aim of this study was to estimate underreporting in the prevalence of congenital heart disease in Brazil and of its most frequent subtypes.
METHODS
For the test of underreporting, we adopted a congenital heart disease prevalence rate of 9:1,000 births, defined in a review and meta-analysis of births worldwide[18]. Population and birth registries for Brazil, its regions and federated units were obtained by consulting the 2010 Demographic Census, published in 2012 and made available on the DATASUS/Ministry of Health (MOH) website[20]. Secondary data on the age distribution of the Brazilian population in 2009, published in the 2010 Demographic Census[20], were divided into two groups: the first with children under 18 years of age, and the second formed by adults aged 18 and over. Prevalence rates of congenital heart disease by age group were defined from a study of the Canadian population: for those under 18, 11.89 CHD per 1,000 individuals, and 4.09 CHD per 1,000 adults[21]. The prevalence of congenital heart disease in 2010 recorded in SINASC (Live Birth Information System) was obtained by consulting the DATASUS/MOH website. The estimated prevalence of congenital heart disease in Brazil was generated by applying the prevalence rate of 9:1,000 births to the number of births. This estimate was compared with notifications from SINASC/MOH, revealing the percentage of the disease registered in Brazil and in the federal units. The estimated prevalence of the eight most frequent subtypes of CHD was obtained by applying to the number of births the prevalence rate per 1,000 births of each subtype, namely: ventricular septal defect (VSD) 2.62; atrial septal defect (ASD) 1.64; persistent ductus arteriosus (PDA) 0.87; pulmonary stenosis 0.5; tetralogy of Fallot (T4F) 0.34; aortic coarctation (AoCo) 0.34; transposition of the great arteries (TGA) 0.31; and aortic stenosis 0.22[18]. CHD prevalence rates among age groups were estimated by applying the respective relative prevalence rates to the population groups[21]. 
The data were evaluated by descriptive methods using rates and ratios represented in tables.
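The estimation procedure described in this Methods section is simple enough to reproduce. A minimal sketch, using the 2010 birth count and the per-1,000 rates quoted in the text, and assuming rounding to the nearest whole case:

```python
# Re-computation of the article's CHD estimates from its stated inputs.
BIRTHS_2010 = 2_861_868        # live births reported in Brazil, 2010
CHD_RATE_PER_1000 = 9          # overall prevalence adopted from [18]

def estimated_cases(births, rate_per_1000):
    """Apply a per-1,000-births prevalence rate; round to whole cases."""
    return round(births * rate_per_1000 / 1000)

total_chd = estimated_cases(BIRTHS_2010, CHD_RATE_PER_1000)  # -> 25757

# Subtype rates per 1,000 births, as listed in the Methods section [18].
subtype_rates = {"VSD": 2.62, "ASD": 1.64, "PDA": 0.87,
                 "Pulmonary stenosis": 0.5, "T4F": 0.34, "AoCo": 0.34,
                 "TGA": 0.31, "Aortic stenosis": 0.22}
subtype_cases = {name: estimated_cases(BIRTHS_2010, rate)
                 for name, rate in subtype_rates.items()}
```

Applied this way, the rates reproduce the Results figures exactly (e.g. VSD 7,498; ASD 4,693; PDA 2,490).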
RESULTS
In 2010, 2,861,868 births were reported in Brazil. Applying the congenital heart disease prevalence rate of 9:1,000 births yields an estimate of 25,757 new cases for the year under study. The occurrence of CHD by Brazilian region for the same year was: North 2,758; Northeast 7,570; Southeast 10,112; South 3,329; and Midwest 1,987 new cases (Table 1: distribution of the number of births, prevalence of Congenital Heart Disease (CHD), birth notifications with CHD - SINASC/Ministry of Health (MOH) - and notification percentage for Brazil, regions and federated units in 2010. Source: MoH/SVS/DASIS - Live Births Information System - SINASC - 2010). In the same period, 1,377 cases of births with CHD were reported to the Ministry of Health/SINASC, representing 5.3% of the estimate for Brazil. The distribution by federal unit is shown in Table 1. The prevalences for 2010 of the eight most frequent subtypes of CHD were: ventricular septal defect (VSD) 7,498; atrial septal defect (ASD) 4,693; persistent ductus arteriosus (PDA) 2,490; pulmonary stenosis 1,431; tetralogy of Fallot (T4F) 973; aortic coarctation (AoCo) 973; transposition of the great arteries (TGA) 887; and aortic stenosis 630 (Table 2: distribution of the prevalence of the eight CHD subtypes, per 1,000 births, in Brazil and in the regions for 2010. Source: MoH/SVS/DASIS - Live Births Information System - SINASC - 2010. CHD=congenital heart diseases; VSD=ventricular septal defect; ASD=atrial septal defect; PDA=persistent ductus arteriosus; Pulm Stenosis=pulmonary stenosis; T4F=tetralogy of Fallot; AoCo=aortic coarctation; TGA=transposition of the great arteries; Ao Stenosis=aortic stenosis). In 2009, for a Brazilian population of 191,795,000, divided into 56,809,000 under 18 and 134,986,000 adults, the estimated prevalence was 675,495 children and adolescents and 552,092 adults with congenital heart disease.
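The notification percentage and the age-group prevalences in these Results follow directly from the quoted inputs; the quick check below re-derives them (the small residual differences from the published age-group figures are presumably due to rounding of the population counts):

```python
# Re-deriving headline Results figures from the inputs quoted in the text.
estimated_new_cases = 25_757    # 9 per 1,000 applied to 2,861,868 births
reported_to_sinasc = 1_377      # notifications to SINASC/MOH, 2010

# Share of estimated cases actually notified -> 5.3%
notification_pct = round(100 * reported_to_sinasc / estimated_new_cases, 1)

# Age-group prevalence: Canadian rates [21] applied to the 2009 population.
children = round(56_809_000 * 11.89 / 1000)   # ~675,459 (article: 675,495)
adults = round(134_986_000 * 4.09 / 1000)     # ~552,093 (article: 552,092)
```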
CONCLUSION
In Brazil, the prevalence of congenital heart disease is underreported, signaling the need for adjustments in the registration methodology.
[ "Objective", "Study limitations" ]
[ "The aim of this study was to estimate underreporting in the prevalence of\ncongenital heart disease in Brazil and its most frequent subtypes.", "The calculation of the prevalence of congenital heart disease and their most\nfrequent subtypes was anchored in study review and meta-analysis that proposed\nto estimate the prevalence of this disease in the world[18]. One of the\nlimitations of this meta-analysis is not cover the entire world population, in\naddition to using only studies with summaries in English and make use of\ngovernment records available online, a fact that leads to underreporting error,\nas demonstrated in this study. In this study we found differences in prevalence\nrates between the continents, ranging from 1.9 to 9.3/1000 births in Africa and\nAsia, respectively. The author states that data from developing countries were\nscarce, and studies often did not include indigenous peoples and tribes.\nRecords to consider the entire population of births are needed to determine the\ntrue prevalence of congenital heart disease." ]
[ null, null ]
[ "INTRODUCTION", "Objective", "METHODS", "RESULTS", "DISCUSSION", "Study limitations", "CONCLUSION" ]
[ "Congenital Heart Disease (CHD) is an abnormality in the structure or\ncardiocirculatory function that occurs from birth, even if subsequently\ndiagnosed[1]. It varies in severity, occurring from communications\nbetween cavities that spontaneously regress up to major malformations that even\nrequire several procedures, surgical or catheterization. It can result in\nintrauterine, childhood or adulthood death[2].\nGlobally, 130 million children are born each year. Of these, four million die in the\nneonatal period, or that is, in the first 30 days of life[3] and 7% of the fatalities\nare related to CHD[4].\nIn 2007, in Brazil, 6% of deaths in children under one year of age were by\nCHD[5].\nAmong the congenital malformations, cardiovascular abnormalities are the most common\ncause of infant mortality, 40% under study in São\nPaulo/Brazil[6] and 26.6% and 48.1% in the US[7,8]. Delays in the development and cognitive deficits are\nassociated with a congenital heart disease of 20% to 30%[9-11].\nHoffman & Kaplan reported a variation in prevalence rate of 4:1000 to 50:1000\nbirths. The highest one is related to the occurrence of low severity injuries that\nare solved without medical intervention[12].\nIn Brazil, studies on the epidemiology of congenital heart disease with different\ncuts, express coefficients ranging from 5.494[13] to 7.17 per 1,000 births[14]. For a group with low\nbirth weight, the prevalence rate ranged from 10.7 to 40.7:1,000\nbirths[15].\nAmorim, in Minas Gerais, Brazil, from 1990 to 2003, analyzing data on 29,770\nnewborns, found prevalence rate of congenital heart defects corresponding to\n9.58:1,000 births and equal to 87.72:1,000 between stillbirths[16]. Rivera, in Alagoas,\nBrazil, for the same age group, found prevalence in 13.2:1,000\nbirths[17].\nStudy review and meta-analysis on the global prevalence of congenital heart disease\nincluded 114 studies, with a study population of 24,091,867 births. 
The prevalence\nrate estimated at 9.1 per 1,000 births remains stable over the last 15 years. This\ncorresponds to 1.35 million newborns with CHD each year[18].\nIn this study, prevalence is the greatness of the event at any given\ntime[19],\nadopting the coefficient of 9:1000 births as the basis for calculation of CHD in\nBrazil and in each federal unit.\nThe scarcity of specific bibliography and reliable statistics on the Brazilian\npopulation with congenital heart disease forces the approximation to the\ninternational literature, in order to estimate the prevalence of CHD in Brazil and\ncompare it to the official notifications, serving therefore as basis for formulation\nof public policies, based on more realistic data.\n Objective The aim of this study was to estimate underreporting in the prevalence of\ncongenital heart disease in Brazil and its most frequent subtypes.\nThe aim of this study was to estimate underreporting in the prevalence of\ncongenital heart disease in Brazil and its most frequent subtypes.", "The aim of this study was to estimate underreporting in the prevalence of\ncongenital heart disease in Brazil and its most frequent subtypes.", "We adopted for the test under report, the prevalence rate of congenital heart disease\nequal to 9:1000 births, defined in a review and meta-analysis study for births\nworldwide [18].\nPopulation and birth registries in Brazil, regions and federated units, were obtained\nby consulting the Demographic Census of 2010, published in 2012, and made available\non the DATASUS/Ministry of Health (MOH) website[20].\nSecondary data relating to the distribution by age group, the Brazilian population of\n2009, published by the Demographic Census of 2010[20], were divided into two groups: the first\nwith children under 18 years of age and the second formed by adults, aged 18 and\nmore.\nRates for the prevalence of congenital heart disease by age group, were defined in\nthe study on the population of Canada. 
For children under 18, 11.89 CHD per 1,000\nindividuals and 4.09 CHD per 1,000 adults[21].\nThe prevalence of congenital heart disease in 2010, recorded in SINASC (System of\nLive Birth Information), was obtained by consulting the DATASUS/MOH website.\nThe estimated prevalence of congenital heart disease in Brazil, was generated when\nthe prevalence rate equal to 9: 1000 births was applied to quantitative births.\nEstimated prevalence of congenital heart disease was compared with notifications from\nSINASC/MOH, revealing the disease registration percentage in Brazil and the federal\nunits.\nThe estimated prevalence of the eight most frequent subtypes of CHD was achieved by\napplying to the quantitative of birth the prevalence rates per 1,000 births of each\nsubtype, namely: Ventricular septal defect (VSD) 2.62; Atrial septal defect (ASD)\n1.64; Persistent ductus arteriosus (PDA) 0.87; Pulmonary stenosis 0.5; Tetralogy of\nFallot (T4F) 0.34; Aortic Coarctation (AoCo) 0.34; Transposition of the great\narteries (TGA) 0.31 and Aortic Stenosis, 0.22[18].\nThe CHD prevalence rates among age groups were estimated when applied to population\ngroups their respective relative rates of prevalence[21].\nThe data were evaluated by descriptive methods using rates and ratios represented in\ntables.", "In 2010, in Brazil, 2,861,868 births were reported. When applied the prevalence rate\nof congenital heart disease of 9:1,000 births, an estimate of 25,757 new cases for\nthe year under study was found. 
The occurrence of CHD in the Brazilian regions, for\nthe same year, was as follows: North 2758; Northeast 7,570; Southeast 10,112; South\n3329; and Midwest 1,987 new cases Table\n1.\nDistribution of the number of births, prevalence of Congenital Heart Disease\n(CHD), birth notification with (CHD) - SINASC/Ministry of Health (MOH) and\nnotification percentage for Brazil, regions and federated units in 2010.\nSource: MoH/SVS/DASIS - Live Births Information System - SINASC -\n2010.\nIn the same period, 1,377 cases of births with CHD were reported to the Ministry of\nHealth/SINASC, representing 5.3% of estimated for Brazil. The distribution by\nfederal unit is visualized in Table 1.\nIn this study, the prevalence for the year 2010, from the eight most frequent\nsubtypes of CHD were: Ventricular septal defect (VSD) 7498; Atrial septal defect\n(ASD) 4693; Persistent ductus arteriosus (PDA) 2490; Pulmonary stenosis 1,431;\nTetralogy of Fallot (T4F) 973; Aortic coarctation (CoA) 973; Transposition of the\ngreat arteries (TGA) 887 and Aortic Stenosis 630 Table 2.\nDistribution of the prevalence of the eight CHD subtypes in Brazil and in the\nregions for the year 2010.\nSource: MoH/SVS/DASIS - Live Births Information System - SINASC -\n2010.\nPrevalence of types of CHD per 1,000 births. 
CHD=congenital heart\ndeseases; VSD=ventricular septal defect; ASD=atrial septal defect;\nPDA=persistent ductus arteriosus; Pulm Stenosis=pulmonary stenosis;\nT4F=tetralogy of Fallot; AoCo=aortic coarctation; TGA=transposition of\nthe great arteries; Ao Stenosis=aortic stenosis\nIn 2009, to a Brazilian population of 191,795,000, divided into 56,809,000 under 18\nand 134 986 000 adults was estimated a prevalence of 675,495 children and\nadolescents and 552,092 adults with congenital heart disease.", "Pinto Jr et al.[22]\npropose, in a study on regionalization of Brazilian pediatric cardiovascular\nsurgery, the implementation of a support network for patients with congenital heart\ndisease, of fair range for all regions of Brazil. Therefore, it becomes important to\ndefine the prevalence and distribution of the disease and its subtypes for each\nterritory.\nThe lack of national studies forces the approximation of the numbers with specific\ninternational literature and thus, through estimates, it reveals a scenario that\nbest describes the reality.\nIt is common to find in the scientific literature that large differences in\nepidemiological calculations, which stems in part of the variation in sample\nselection in which the studies are performed[12,15,16]. 
Studies on the prevalence\nof CHD after the first year of life underestimate the occurrence of the disease in\nthe fetus and newborn, considering that 20% of children die in their first year of\nlife[23].\nAnother aspect that implies inaccuracies is related to the fact that 30% of CHD can\nnot be diagnosed in the first weeks of life[24].\nCHD prevalence rate equal to 9:1000 births, defined in the study review and\nmeta-analysis to births worldwide[18], was defined as basis for calculation of CHD\nestimates for this study.\nThus, estimates for Brazil point to 25,757 new cases of CHD/year, distributed by\nregions North 2,758; Northeast 7,570; Southeast 10,112; South 3329; and Midwest\n1,987.\nThe notifications published by DATASUS/MS have shown 1,377 births with CHD in 2010,\ncorresponding to 5.3% of the estimate of 9:1,000 births used for this study.\nTherefore, the public policies in this segment are supported by a reality of\nunderreporting. Thus, assumptions of the Unified Health System; as universality,\ncomprehensiveness and equity, principles of health-inducing public policies with\nquality[25],\nremain outside of care to this group of patients.\nKnowing that 25% of births with CHD require invasive treatment in the first year of\nlife[26], in\norder to meet the demand, it would be needed 6,439 procedures per year in Brazil. In\n2008, however, 1919 procedures were performed[5]. 
There is, therefore, for this age group,\ndeficit of procedures is about 70%.\nA quantitative explanation of congenital heart disease, its territorial distribution\nand classification by subtypes associated with the determination of risk for\ndisease, allows establishing planning for the health care of this population.\nThus, considering all the variables, it is possible to direct investments equally to\nthis group that, in 2009 amounted to 675,495 children and adolescents and 552,092\nadults with CHD, with expected annual growth rate of 1% to 5%, depending on age and\nthe distribution of[21,27] injuries.\nTransposition of the gap between standards, Ordinances 1169/GM and\n210/SAS-MS[28,29] and the assistance\nrequires political will and planning aimed at improving access to pediatric\ncardiovascular surgery centers, financing of Cardiology and Cardiovascular Surgery\nPediatric; database feeding; promote management for quality and establish continuing\neducation programs[5,6].\nFailure to observe these points, which are fundamental in the design and\nimplementation of health care policy for CHD patients, blames the Brazilian system\nof health care, public and supplementary, to leave outside the surgical treatment of\n62% of infants with congenital heart disease, reaching in some regions of Brazil to\n76 and 91%[5].\nGomes[30], in an\neditorial, \"the debt to the nation's health: the case of congenital heart disease,\"\ntranslates numbers into words and says:\n[...] became untenable the acceptance of the quality of health care of\nthe Brazilians. 
Felt by the population, but mainly affecting patients, their\nfamilies, doctors and all other professionals involved, and even more so when it\ninvolves children with congenital heart disease, excluded from the basic right of\nadequate treatment and the chance to be alive.\nFacing such problem requires the participation of civil society in the development of\nsocial policies and it is mandatory for the elaboration of issues in the social\narea, the intervention of agents who experience difficulties, either as carriers of\ndisease, either as family components, either as professionals[31].\n Study limitations The calculation of the prevalence of congenital heart disease and their most\nfrequent subtypes was anchored in study review and meta-analysis that proposed\nto estimate the prevalence of this disease in the world[18]. One of the\nlimitations of this meta-analysis is not cover the entire world population, in\naddition to using only studies with summaries in English and make use of\ngovernment records available online, a fact that leads to underreporting error,\nas demonstrated in this study. In this study we found differences in prevalence\nrates between the continents, ranging from 1.9 to 9.3/1000 births in Africa and\nAsia, respectively. The author states that data from developing countries were\nscarce, and studies often did not include indigenous peoples and tribes.\nRecords to consider the entire population of births are needed to determine the\ntrue prevalence of congenital heart disease.\nThe calculation of the prevalence of congenital heart disease and their most\nfrequent subtypes was anchored in study review and meta-analysis that proposed\nto estimate the prevalence of this disease in the world[18]. 
One of the\nlimitations of this meta-analysis is not cover the entire world population, in\naddition to using only studies with summaries in English and make use of\ngovernment records available online, a fact that leads to underreporting error,\nas demonstrated in this study. In this study we found differences in prevalence\nrates between the continents, ranging from 1.9 to 9.3/1000 births in Africa and\nAsia, respectively. The author states that data from developing countries were\nscarce, and studies often did not include indigenous peoples and tribes.\nRecords to consider the entire population of births are needed to determine the\ntrue prevalence of congenital heart disease.", "The calculation of the prevalence of congenital heart disease and their most\nfrequent subtypes was anchored in study review and meta-analysis that proposed\nto estimate the prevalence of this disease in the world[18]. One of the\nlimitations of this meta-analysis is not cover the entire world population, in\naddition to using only studies with summaries in English and make use of\ngovernment records available online, a fact that leads to underreporting error,\nas demonstrated in this study. In this study we found differences in prevalence\nrates between the continents, ranging from 1.9 to 9.3/1000 births in Africa and\nAsia, respectively. The author states that data from developing countries were\nscarce, and studies often did not include indigenous peoples and tribes.\nRecords to consider the entire population of births are needed to determine the\ntrue prevalence of congenital heart disease.", "In Brazil, there is underreporting in the prevalence of congenital heart disease,\nsignaling the need for adjustments in the methodology of registration." ]
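The discussion's estimate of the surgical-procedure deficit (25% of births with CHD require invasive treatment in the first year of life, versus 1,919 procedures performed in 2008) is a two-step calculation; a sketch with the figures quoted in the text:

```python
# Procedures needed vs. performed, per the discussion's figures.
estimated_new_cases = 25_757       # estimated CHD births/year in Brazil
needing_invasive_share = 0.25      # 25% need invasive treatment in year 1

procedures_needed = round(estimated_new_cases * needing_invasive_share)
procedures_performed = 1_919       # procedures performed in 2008

# Deficit: share of needed procedures not performed -> about 70%.
deficit = 1 - procedures_performed / procedures_needed
```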
[ "intro", null, "methods", "results", "discussion", null, "conclusions" ]
[ "Heart Defects", "Congenital", "Epidemiology", "Health Policy", "Brazil" ]
INTRODUCTION: Congenital Heart Disease (CHD) is an abnormality in the structure or cardiocirculatory function that occurs from birth, even if subsequently diagnosed[1]. It varies in severity, ranging from communications between cavities that regress spontaneously to major malformations that require several surgical or catheterization procedures. It can result in death in utero, in childhood or in adulthood[2]. Globally, 130 million children are born each year. Of these, four million die in the neonatal period, that is, in the first 30 days of life[3], and 7% of these fatalities are related to CHD[4]. In 2007, in Brazil, 6% of deaths in children under one year of age were due to CHD[5]. Among congenital malformations, cardiovascular abnormalities are the most common cause of infant mortality, accounting for 40% in a study in São Paulo/Brazil[6] and for 26.6% and 48.1% in the US[7,8]. Developmental delays and cognitive deficits are associated with congenital heart disease in 20% to 30% of cases[9-11]. Hoffman & Kaplan reported prevalence rates varying from 4:1,000 to 50:1,000 births; the highest rates are related to the occurrence of low-severity lesions that resolve without medical intervention[12]. In Brazil, studies on the epidemiology of congenital heart disease with different designs report coefficients ranging from 5.494[13] to 7.17 per 1,000 births[14]. For a group with low birth weight, the prevalence rate ranged from 10.7 to 40.7:1,000 births[15]. Amorim, in Minas Gerais, Brazil, from 1990 to 2003, analyzing data on 29,770 newborns, found a prevalence rate of congenital heart defects of 9.58:1,000 births, and of 87.72:1,000 among stillbirths[16]. Rivera, in Alagoas, Brazil, for the same age group, found a prevalence of 13.2:1,000 births[17]. A review and meta-analysis on the global prevalence of congenital heart disease included 114 studies, with a study population of 24,091,867 births. 
The estimated prevalence rate of 9.1 per 1,000 births has remained stable over the last 15 years, corresponding to 1.35 million newborns with CHD each year[18]. In this study, prevalence is the magnitude of the event at a given time[19], and the coefficient of 9:1,000 births was adopted as the basis for calculating CHD in Brazil and in each federal unit. The scarcity of specific bibliography and of reliable statistics on the Brazilian population with congenital heart disease forces an approximation to the international literature, in order to estimate the prevalence of CHD in Brazil and compare it with the official notifications, thereby serving as a basis for the formulation of public policies grounded in more realistic data. Objective The aim of this study was to estimate underreporting in the prevalence of congenital heart disease in Brazil and its most frequent subtypes. Objective: The aim of this study was to estimate underreporting in the prevalence of congenital heart disease in Brazil and its most frequent subtypes. METHODS: To test for underreporting, we adopted a prevalence rate of congenital heart disease of 9:1,000 births, defined in a review and meta-analysis of births worldwide[18]. Population and birth registries for Brazil, its regions and federated units were obtained by consulting the Demographic Census of 2010, published in 2012 and made available on the DATASUS/Ministry of Health (MOH) website[20]. Secondary data on the age distribution of the Brazilian population in 2009, published by the Demographic Census of 2010[20], were divided into two groups: the first comprising children under 18 years of age and the second comprising adults aged 18 years and over. Prevalence rates of congenital heart disease by age group were defined in a study on the population of Canada. 
For children under 18, these were 11.89 CHD per 1,000 individuals, and for adults, 4.09 CHD per 1,000[21]. The prevalence of congenital heart disease in 2010 recorded in SINASC (System of Live Birth Information) was obtained by consulting the DATASUS/MOH website. The estimated prevalence of congenital heart disease in Brazil was generated by applying the prevalence rate of 9:1,000 births to the number of births. The estimated prevalence of congenital heart disease was compared with the notifications from SINASC/MOH, revealing the disease registration percentage in Brazil and in the federal units. The estimated prevalence of the eight most frequent subtypes of CHD was obtained by applying to the number of births the prevalence rate per 1,000 births of each subtype, namely: Ventricular septal defect (VSD) 2.62; Atrial septal defect (ASD) 1.64; Persistent ductus arteriosus (PDA) 0.87; Pulmonary stenosis 0.5; Tetralogy of Fallot (T4F) 0.34; Aortic coarctation (AoCo) 0.34; Transposition of the great arteries (TGA) 0.31; and Aortic stenosis 0.22[18]. The CHD prevalence rates among age groups were estimated by applying to the population groups their respective relative prevalence rates[21]. The data were evaluated by descriptive methods using rates and ratios represented in tables. RESULTS: In 2010, 2,861,868 births were reported in Brazil. Applying the prevalence rate of congenital heart disease of 9:1,000 births yielded an estimate of 25,757 new cases for the year under study. The occurrence of CHD in the Brazilian regions for the same year was as follows: North 2,758; Northeast 7,570; Southeast 10,112; South 3,329; and Midwest 1,987 new cases Table 1. Distribution of the number of births, prevalence of Congenital Heart Disease (CHD), birth notification with CHD - SINASC/Ministry of Health (MOH) and notification percentage for Brazil, regions and federated units in 2010. Source: MoH/SVS/DASIS - Live Births Information System - SINASC - 2010. 
In the same period, 1,377 cases of births with CHD were reported to the Ministry of Health/SINASC, representing 5.3% of the estimate for Brazil. The distribution by federal unit is shown in Table 1. In this study, the prevalences for the year 2010 of the eight most frequent subtypes of CHD were: Ventricular septal defect (VSD) 7,498; Atrial septal defect (ASD) 4,693; Persistent ductus arteriosus (PDA) 2,490; Pulmonary stenosis 1,431; Tetralogy of Fallot (T4F) 973; Aortic coarctation (AoCo) 973; Transposition of the great arteries (TGA) 887; and Aortic stenosis 630 Table 2. Distribution of the prevalence of the eight CHD subtypes in Brazil and in the regions for the year 2010. Source: MoH/SVS/DASIS - Live Births Information System - SINASC - 2010. Prevalence of types of CHD per 1,000 births. CHD=congenital heart diseases; VSD=ventricular septal defect; ASD=atrial septal defect; PDA=persistent ductus arteriosus; Pulm Stenosis=pulmonary stenosis; T4F=tetralogy of Fallot; AoCo=aortic coarctation; TGA=transposition of the great arteries; Ao Stenosis=aortic stenosis In 2009, for a Brazilian population of 191,795,000, divided into 56,809,000 individuals under 18 and 134,986,000 adults, an estimated prevalence of 675,495 children and adolescents and 552,092 adults with congenital heart disease was found. DISCUSSION: Pinto Jr et al.[22] propose, in a study on the regionalization of Brazilian pediatric cardiovascular surgery, the implementation of a support network for patients with congenital heart disease, with fair coverage for all regions of Brazil. It therefore becomes important to define the prevalence and distribution of the disease and its subtypes for each territory. The lack of national studies forces an approximation of the numbers to the specific international literature and thus, through estimates, reveals the scenario that best describes the reality. 
Large differences in epidemiological estimates are common in the scientific literature, stemming in part from variation in the selection of the samples in which the studies are performed[12,15,16]. Studies on the prevalence of CHD after the first year of life underestimate the occurrence of the disease in fetuses and newborns, considering that 20% of affected children die in their first year of life[23]. Another source of inaccuracy is the fact that 30% of CHD cannot be diagnosed in the first weeks of life[24]. The CHD prevalence rate of 9:1,000 births, defined in the review and meta-analysis of births worldwide[18], was adopted as the basis for the CHD estimates in this study. Thus, the estimates for Brazil point to 25,757 new cases of CHD per year, distributed by region as follows: North 2,758; Northeast 7,570; Southeast 10,112; South 3,329; and Midwest 1,987. The notifications published by DATASUS/MS showed 1,377 births with CHD in 2010, corresponding to 5.3% of the estimate of 9:1,000 births used in this study. Therefore, public policies in this segment are supported by a reality of underreporting. Thus, the assumptions of the Unified Health System, such as universality, comprehensiveness and equity, principles of quality-inducing public health policies[25], remain absent from the care of this group of patients. Knowing that 25% of births with CHD require invasive treatment in the first year of life[26], 6,439 procedures per year would be needed in Brazil to meet the demand. In 2008, however, only 1,919 procedures were performed[5]. There is, therefore, a deficit of about 70% in procedures for this age group. A quantitative description of congenital heart disease, its territorial distribution and its classification by subtypes, associated with the determination of disease risk, allows planning for the health care of this population. 
Thus, considering all the variables, it is possible to direct investments equitably to this group, which in 2009 amounted to 675,495 children and adolescents and 552,092 adults with CHD, with an expected annual growth rate of 1% to 5%, depending on age and the distribution of lesions[21,27]. Bridging the gap between the standards, Ordinances 1169/GM and 210/SAS-MS[28,29], and the actual assistance requires political will and planning aimed at improving access to pediatric cardiovascular surgery centers; financing of Pediatric Cardiology and Cardiovascular Surgery; database feeding; promoting quality management; and establishing continuing education programs[5,6]. Failure to observe these points, which are fundamental to the design and implementation of a health care policy for CHD patients, condemns the Brazilian health care system, public and supplementary, to leave 62% of infants with congenital heart disease without surgical treatment, reaching 76% and 91% in some regions of Brazil[5]. Gomes[30], in the editorial "The debt to the nation's health: the case of congenital heart disease", translates numbers into words and says: [...] it became untenable to accept the quality of health care of Brazilians. Felt by the population, but mainly affecting patients, their families, doctors and all other professionals involved, and even more so when it involves children with congenital heart disease, excluded from the basic right of adequate treatment and the chance to be alive. Facing such a problem requires the participation of civil society in the development of social policies, and the elaboration of issues in the social area demands the intervention of the agents who experience the difficulties, whether as patients, as family members or as professionals[31]. 
Study limitations The calculation of the prevalence of congenital heart disease and its most frequent subtypes was anchored in a review and meta-analysis that estimated the prevalence of this disease worldwide[18]. One limitation of that meta-analysis is that it does not cover the entire world population; in addition, it used only studies with abstracts in English and relied on government records available online, which leads to underreporting error, as demonstrated in this study. The meta-analysis found differences in prevalence rates between continents, ranging from 1.9/1,000 births in Africa to 9.3/1,000 births in Asia. The author states that data from developing countries were scarce, and studies often did not include indigenous peoples and tribes. Records covering the entire population of births are needed to determine the true prevalence of congenital heart disease. 
Study limitations: The calculation of the prevalence of congenital heart disease and its most frequent subtypes was anchored in a review and meta-analysis that estimated the prevalence of this disease worldwide[18]. One limitation of that meta-analysis is that it does not cover the entire world population; in addition, it used only studies with abstracts in English and relied on government records available online, which leads to underreporting error, as demonstrated in this study. The meta-analysis found differences in prevalence rates between continents, ranging from 1.9/1,000 births in Africa to 9.3/1,000 births in Asia. The author states that data from developing countries were scarce, and studies often did not include indigenous peoples and tribes. Records covering the entire population of births are needed to determine the true prevalence of congenital heart disease. CONCLUSION: In Brazil, there is underreporting of the prevalence of congenital heart disease, signaling the need for adjustments in the registration methodology.
Background: Congenital heart disease is an abnormality in the structure or cardiocirculatory function, present from birth, even if diagnosed later. It can result in intrauterine death, or in death in childhood or adulthood. It accounted for 6% of infant deaths in Brazil in 2007. Methods: The prevalence calculations were performed by applying coefficients functioning as rates for the calculation of health problems. The study draws an approximation between the literature and the governmental registries. An estimate of 9:1,000 births and prevalence rates for subtypes were applied to the births of 2010. Estimates of births with congenital heart disease were compared with the reports to the Ministry of Health and were studied by descriptive methods using rates and coefficients represented in tables. Results: The incidence in Brazil is 25,757 new cases/year, distributed as follows: North 2,758; Northeast 7,570; Southeast 10,112; South 3,329; and Midwest 1,987. In 2010, 1,377 cases of babies with congenital heart disease were reported to the System of Live Birth Information of the Ministry of Health, representing 5.3% of the estimate for Brazil. In the same period, the most common subtypes were: ventricular septal defect (7,498); atrial septal defect (4,693); persistent ductus arteriosus (2,490); pulmonary stenosis (1,431); tetralogy of Fallot (973); coarctation of the aorta (973); transposition of the great arteries (887); and aortic stenosis (630). The prevalence of congenital heart disease for the year 2009 was 675,495 children and adolescents and 552,092 adults. Conclusions: In Brazil, there is underreporting of the prevalence of congenital heart disease, signaling the need for adjustments in the registration methodology.
INTRODUCTION: Congenital Heart Disease (CHD) is an abnormality in the structure or cardiocirculatory function that occurs from birth, even if subsequently diagnosed[1]. It varies in severity, ranging from communications between cavities that regress spontaneously to major malformations that require several surgical or catheterization procedures. It can result in death in utero, in childhood or in adulthood[2]. Globally, 130 million children are born each year. Of these, four million die in the neonatal period, that is, in the first 30 days of life[3], and 7% of these fatalities are related to CHD[4]. In 2007, in Brazil, 6% of deaths in children under one year of age were due to CHD[5]. Among congenital malformations, cardiovascular abnormalities are the most common cause of infant mortality, accounting for 40% in a study in São Paulo/Brazil[6] and for 26.6% and 48.1% in the US[7,8]. Developmental delays and cognitive deficits are associated with congenital heart disease in 20% to 30% of cases[9-11]. Hoffman & Kaplan reported prevalence rates varying from 4:1,000 to 50:1,000 births; the highest rates are related to the occurrence of low-severity lesions that resolve without medical intervention[12]. In Brazil, studies on the epidemiology of congenital heart disease with different designs report coefficients ranging from 5.494[13] to 7.17 per 1,000 births[14]. For a group with low birth weight, the prevalence rate ranged from 10.7 to 40.7:1,000 births[15]. Amorim, in Minas Gerais, Brazil, from 1990 to 2003, analyzing data on 29,770 newborns, found a prevalence rate of congenital heart defects of 9.58:1,000 births, and of 87.72:1,000 among stillbirths[16]. Rivera, in Alagoas, Brazil, for the same age group, found a prevalence of 13.2:1,000 births[17]. A review and meta-analysis on the global prevalence of congenital heart disease included 114 studies, with a study population of 24,091,867 births. 
The estimated prevalence rate of 9.1 per 1,000 births has remained stable over the last 15 years, corresponding to 1.35 million newborns with CHD each year[18]. In this study, prevalence is the magnitude of the event at a given time[19], and the coefficient of 9:1,000 births was adopted as the basis for calculating CHD in Brazil and in each federal unit. The scarcity of specific bibliography and of reliable statistics on the Brazilian population with congenital heart disease forces an approximation to the international literature, in order to estimate the prevalence of CHD in Brazil and compare it with the official notifications, thereby serving as a basis for the formulation of public policies grounded in more realistic data. Objective The aim of this study was to estimate underreporting in the prevalence of congenital heart disease in Brazil and its most frequent subtypes. CONCLUSION: In Brazil, there is underreporting of the prevalence of congenital heart disease, signaling the need for adjustments in the registration methodology.
Background: Congenital heart disease is an abnormality in the structure or cardiocirculatory function, present from birth, even if diagnosed later. It can result in intrauterine death, or in death in childhood or adulthood. It accounted for 6% of infant deaths in Brazil in 2007. Methods: The prevalence calculations were performed by applying coefficients functioning as rates for the calculation of health problems. The study draws an approximation between the literature and the governmental registries. An estimate of 9:1,000 births and prevalence rates for subtypes were applied to the births of 2010. Estimates of births with congenital heart disease were compared with the reports to the Ministry of Health and were studied by descriptive methods using rates and coefficients represented in tables. Results: The incidence in Brazil is 25,757 new cases/year, distributed as follows: North 2,758; Northeast 7,570; Southeast 10,112; South 3,329; and Midwest 1,987. In 2010, 1,377 cases of babies with congenital heart disease were reported to the System of Live Birth Information of the Ministry of Health, representing 5.3% of the estimate for Brazil. In the same period, the most common subtypes were: ventricular septal defect (7,498); atrial septal defect (4,693); persistent ductus arteriosus (2,490); pulmonary stenosis (1,431); tetralogy of Fallot (973); coarctation of the aorta (973); transposition of the great arteries (887); and aortic stenosis (630). The prevalence of congenital heart disease for the year 2009 was 675,495 children and adolescents and 552,092 adults. Conclusions: In Brazil, there is underreporting of the prevalence of congenital heart disease, signaling the need for adjustments in the registration methodology.
2,830
326
[ 25, 166 ]
7
[ "prevalence", "disease", "births", "congenital", "heart", "congenital heart", "heart disease", "congenital heart disease", "chd", "study" ]
[ "congenital malformations cardiovascular", "children congenital heart", "population congenital heart", "rate congenital heart", "epidemiology congenital heart" ]
[CONTENT] Heart Defects | Congenital | Epidemiology | Health Policy | Brazil [SUMMARY]
[CONTENT] Heart Defects | Congenital | Epidemiology | Health Policy | Brazil [SUMMARY]
[CONTENT] Heart Defects | Congenital | Epidemiology | Health Policy | Brazil [SUMMARY]
[CONTENT] Heart Defects | Congenital | Epidemiology | Health Policy | Brazil [SUMMARY]
[CONTENT] Heart Defects | Congenital | Epidemiology | Health Policy | Brazil [SUMMARY]
[CONTENT] Heart Defects | Congenital | Epidemiology | Health Policy | Brazil [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Brazil | Child | Child, Preschool | Disease Notification | Female | Heart Defects, Congenital | Humans | Infant | Infant, Newborn | Male | Prevalence | Registries | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Brazil | Child | Child, Preschool | Disease Notification | Female | Heart Defects, Congenital | Humans | Infant | Infant, Newborn | Male | Prevalence | Registries | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Brazil | Child | Child, Preschool | Disease Notification | Female | Heart Defects, Congenital | Humans | Infant | Infant, Newborn | Male | Prevalence | Registries | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Brazil | Child | Child, Preschool | Disease Notification | Female | Heart Defects, Congenital | Humans | Infant | Infant, Newborn | Male | Prevalence | Registries | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Brazil | Child | Child, Preschool | Disease Notification | Female | Heart Defects, Congenital | Humans | Infant | Infant, Newborn | Male | Prevalence | Registries | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Age Distribution | Brazil | Child | Child, Preschool | Disease Notification | Female | Heart Defects, Congenital | Humans | Infant | Infant, Newborn | Male | Prevalence | Registries | Young Adult [SUMMARY]
[CONTENT] congenital malformations cardiovascular | children congenital heart | population congenital heart | rate congenital heart | epidemiology congenital heart [SUMMARY]
[CONTENT] congenital malformations cardiovascular | children congenital heart | population congenital heart | rate congenital heart | epidemiology congenital heart [SUMMARY]
[CONTENT] congenital malformations cardiovascular | children congenital heart | population congenital heart | rate congenital heart | epidemiology congenital heart [SUMMARY]
[CONTENT] congenital malformations cardiovascular | children congenital heart | population congenital heart | rate congenital heart | epidemiology congenital heart [SUMMARY]
[CONTENT] congenital malformations cardiovascular | children congenital heart | population congenital heart | rate congenital heart | epidemiology congenital heart [SUMMARY]
[CONTENT] congenital malformations cardiovascular | children congenital heart | population congenital heart | rate congenital heart | epidemiology congenital heart [SUMMARY]
[CONTENT] prevalence | disease | births | congenital | heart | congenital heart | heart disease | congenital heart disease | chd | study [SUMMARY]
[CONTENT] prevalence | disease | births | congenital | heart | congenital heart | heart disease | congenital heart disease | chd | study [SUMMARY]
[CONTENT] prevalence | disease | births | congenital | heart | congenital heart | heart disease | congenital heart disease | chd | study [SUMMARY]
[CONTENT] prevalence | disease | births | congenital | heart | congenital heart | heart disease | congenital heart disease | chd | study [SUMMARY]
[CONTENT] prevalence | disease | births | congenital | heart | congenital heart | heart disease | congenital heart disease | chd | study [SUMMARY]
[CONTENT] prevalence | disease | births | congenital | heart | congenital heart | heart disease | congenital heart disease | chd | study [SUMMARY]
[CONTENT] births | brazil | prevalence | congenital | 000 | chd | congenital heart | heart | 000 births | million [SUMMARY]
[CONTENT] prevalence | rates | groups | estimated | age | 18 | births | disease | moh | estimated prevalence [SUMMARY]
[CONTENT] stenosis | chd | 2010 | births | defect | aortic | septal | septal defect | sinasc | 000 [SUMMARY]
[CONTENT] congenital heart disease signaling | disease signaling | brazil underreporting prevalence congenital | adjustments | adjustments methodology | methodology registration | methodology | disease signaling need adjustments | adjustments methodology registration | disease signaling need [SUMMARY]
[CONTENT] prevalence | disease | congenital | heart | congenital heart | congenital heart disease | heart disease | births | brazil | study [SUMMARY]
[CONTENT] prevalence | disease | congenital | heart | congenital heart | congenital heart disease | heart disease | births | brazil | study [SUMMARY]
[CONTENT] ||| ||| 6% | Brazil | 2007 [SUMMARY]
[CONTENT] ||| ||| 9 | 1000 | 2010 ||| the Ministry of Health [SUMMARY]
[CONTENT] Brazil | 25,757 | North 2,758 | Northeast | 7,570 | South 3,329 | Midwest | 1,987 ||| 2010 | System of Live Birth Information of Ministry of Health | 1,377 | 5.3% | Brazil ||| 7,498 | 4,693 | 2,490 | 1,431 | Fallot | 973 | 973 | 887 | 630 ||| the year of 2009 | 675,495 | 552,092 [SUMMARY]
[CONTENT] Brazil [SUMMARY]
[CONTENT] ||| ||| 6% | Brazil | 2007 ||| ||| ||| 9 | 1000 | 2010 ||| the Ministry of Health ||| Brazil | 25,757 | North 2,758 | Northeast | 7,570 | South 3,329 | Midwest | 1,987 ||| 2010 | System of Live Birth Information of Ministry of Health | 1,377 | 5.3% | Brazil ||| 7,498 | 4,693 | 2,490 | 1,431 | Fallot | 973 | 973 | 887 | 630 ||| the year of 2009 | 675,495 | 552,092 ||| Brazil [SUMMARY]
[CONTENT] ||| ||| 6% | Brazil | 2007 ||| ||| ||| 9 | 1000 | 2010 ||| the Ministry of Health ||| Brazil | 25,757 | North 2,758 | Northeast | 7,570 | South 3,329 | Midwest | 1,987 ||| 2010 | System of Live Birth Information of Ministry of Health | 1,377 | 5.3% | Brazil ||| 7,498 | 4,693 | 2,490 | 1,431 | Fallot | 973 | 973 | 887 | 630 ||| the year of 2009 | 675,495 | 552,092 ||| Brazil [SUMMARY]
Use of statins after liver transplantation is associated with improved survival: results of a nationwide study.
35979872
There is limited information on the effects of statins on the outcomes of liver transplantation (LT), regarding either their use by LT recipients or donors.
BACKGROUND
We included adult LT recipients with deceased donors in a nationwide prospective database study. Using a multistate modelling approach, we examined the effect of statins on the transition hazard between LT, biliary and vascular complications and death, allowing for recurring events. The observation time was 3 years.
METHODS
We included 998 (696 male, 70%, mean age 54.46 ± 11.14 years) LT recipients. 14% of donors and 19% of recipients were exposed to statins during the study period. During follow-up, 141 patients died; there were 40 re-LT and 363 complications, with 66 patients having two or more complications. Treatment with statins in the recipient was modelled as a concurrent covariate and associated with lower mortality after LT (HR = 0.35; 95% CI 0.12-0.98; p = 0.047), as well as a significant reduction of re-LT (p = 0.004). However, it was not associated with lower incidence of complications (HR = 1.25; 95% CI = 0.85-1.83; p = 0.266). Moreover, in patients developing complications, statin use was significantly associated with decreased mortality (HR = 0.10; 95% CI = 0.01-0.81; p = 0.030), and reduced recurrence of complications (HR = 0.43; 95% CI = 0.20-0.93; p = 0.032).
RESULTS
Statin use by LT recipients may confer a survival advantage. Statin administration should be encouraged in LT recipients when clinically indicated.
CONCLUSIONS
[ "Adult", "Aged", "Graft Survival", "Humans", "Hydroxymethylglutaryl-CoA Reductase Inhibitors", "Liver Transplantation", "Male", "Middle Aged", "Retrospective Studies", "Risk Factors", "Treatment Outcome" ]
9545989
INTRODUCTION
Liver transplantation (LT) is considered the ultimate curative option for end‐stage liver disease and for non‐resectable hepatocellular carcinoma (HCC). Although survival rates after LT have progressively improved over the years, the first year post‐LT remains the critical period, accounting for 46% of total deaths and 67% of re‐LTs. 1 Initial outcomes are mainly determined by surgical or peri‐operative problems leading to primary non‐function or delayed graft function and ultimately to re‐LT. In contrast, long‐term outcomes are mainly affected by de novo or recurrent malignant tumours and cardiovascular disease. 2 Statins are inhibitors of 3‐hydroxy‐3‐methyl‐glutaryl‐coenzyme A reductase (HMG‐CoA reductase) widely used in the treatment of dyslipidemia and prophylaxis of cardiovascular events. 2 , 3 It has long been recognised that part of the benefits attributed to statins in cardiovascular disease are due to their pleiotropic effects, which influence vascular remodelling and reverse endothelial dysfunction, among others. 4 These pleiotropic effects likely explain the beneficial effects of statins in other conditions, such as sepsis 5 and cancer. 6 , 7 Recent studies reported beneficial effects of statins on chronic liver diseases, both in pre‐clinical models 8 , 9 , 10 and in clinical studies. 11 , 12 , 13 These range from improvement of hepatic sinusoidal endothelial function, leading to a reduction in intrahepatic vascular tone and portal pressure, to decreased fibrogenesis that may translate into preventing disease progression and facilitating its regression. 14 Statins have also been shown to protect from lipopolysaccharide‐induced acute‐on‐chronic liver failure (ACLF) in cirrhotic rats 15 and to prevent liver function impairment after hypovolemic shock. 
16 , 17 Interestingly, in preclinical models statins protect against ischemia/reperfusion injury in young and aged animals 18 and prolong liver graft preservation both in normal and liver grafts with steatosis considered at high‐risk of ischemia/reperfusion injury. 19 Furthermore, epidemiological studies in large cohorts of patients with chronic liver disease suggest a protective effect of statins reducing the rate of progression to cirrhosis, liver decompensation, development of HCC and death. 20 , 21 Classical indications for statins, including the treatment and prevention of cardiovascular diseases, may be relevant for the long‐term outcome after LT. 22 Moreover, the effects of statins protecting from ischemia/reperfusion injury could be beneficial in the early phase after LT, by reducing the incidence and severity of biliary and vascular complications. Therefore, we hypothesize that statin use in LT recipients may favourably influence the transition to adverse outcomes, including re‐LT, severe and recurrent biliary‐vascular complications, and death.
METHODS
Study design
We conducted a cohort study in a nationwide database, including all adult patients who underwent LT from May 2008 to December 2019 registered in the Swiss Transplant Cohort Study (STCS). The STCS is a prospective open cohort study including data from transplanted patients starting in 2008 from the three Swiss liver transplant centres: Bern, Zurich and Geneva. Data from the liver donors were obtained from the Swiss Organ Allocation System (SOAS) records, with permission from Swisstransplant. The study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice. All the patients included in the present study had given their informed consent to be included in the STCS. In addition, the Bern Cantonal Ethics Committee (KEK‐ID 2020–02122) approved the present study, as well as the STCS (FUP 149).
Study population
We included all consecutive adult LT candidates who received a liver from a deceased donor and agreed to participate in the STCS. We collected data regarding liver disease aetiology, severity, comorbidities, use of statins and duration of exposure to these drugs. Regarding donor characteristics, we recorded demographic data, comorbidities, type of deceased donor, cause of death, biochemical characteristics and use of statins.
Study outcomes and definitions
We studied LT outcomes according to statin exposure in the first 3 years after LT. The primary outcome was post‐LT mortality, while re‐LT and/or development of biliary‐vascular complications were secondary outcomes. Cancer and major cardiovascular events were also considered secondary endpoints, assessed in an ancillary analysis aimed primarily at better controlling for possible confounding factors. Patients who dropped out were considered alive until last follow‐up and then censored. Re‐LT was defined as a new LT due to any type of graft loss. Biliary‐vascular complications included biliary leaks, stones and strictures, as well as vascular thrombosis, vascular stenosis and ischemic cholangiopathy. 23 Because not all vascular thrombotic complications necessarily lead to a biliary event, the two types of complications were considered separately. Systemic cardiovascular complications included occurrence of a major cardiovascular event (documented myocardial infarction or stroke), thromboembolic events (pulmonary embolism) and peripheral arterial ischemic disease. Concerning cancers other than HCC, we considered all solid and haematological cancers excluding non‐melanoma skin cancers. Statin exposure was assessed in both donors and recipients. For donors, statin use was assessed as a dichotomous variable at the time of donation, whereas for recipients, statin use was defined as concurrent use of statins at a given time t during follow‐up. In order to reduce immortal time bias, exposure was considered as person‐time between prescription and end of follow‐up. 24 LT was considered time 0. The observation time after LT was 3 years.
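Counting exposure as person‐time from prescription onward, rather than classifying recipients as "ever users" at baseline, is what avoids immortal time bias here. A minimal sketch of how one patient's follow‐up can be split into counting‐process intervals (function name, day units and the 3‐year horizon are illustrative, not taken from the study's code):

```python
# Split each patient's follow-up into (start, stop, exposed) intervals so that
# statin exposure only accrues from the prescription date onward (time 0 = LT).
HORIZON = 3 * 365  # administrative censoring at 3 years post-LT, in days

def split_exposure(follow_up_days, statin_start_day=None):
    """Return counting-process rows (start, stop, exposed) for one patient."""
    stop = min(follow_up_days, HORIZON)
    if statin_start_day is None or statin_start_day >= stop:
        return [(0, stop, 0)]              # never exposed within follow-up
    if statin_start_day <= 0:
        return [(0, stop, 1)]              # already on statins at LT
    return [(0, statin_start_day, 0),      # unexposed person-time
            (statin_start_day, stop, 1)]   # exposed person-time

# Example: statin started on day 200, followed for 4 years (capped at day 1095)
rows = split_exposure(follow_up_days=4 * 365, statin_start_day=200)
```

In a time‐dependent Cox model, each such row enters as a separate interval, so the hazard contribution of the unexposed period is never attributed to statin use.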
Statistical analysis
For descriptive aims, we report the general characteristics of the study cohort across the study period. Quantitative variables were expressed as mean and standard deviation or median and interquartile range (25%–75%), and categorical variables as percentages. As a supplementary analysis, we divided the recipients into statin non‐user and statin user groups through the study period and compared these using Welch's t‐test for continuous variables and Wilcoxon's rank‐sum test for rank data, respectively. Categorical variables were compared using the chi‐squared test, where appropriate. Because only three patients had missing covariate data, we adopted a complete‐cases analysis for all models.
We conducted the statistical analysis by adopting a multistate modelling approach in order to examine the effect of statin exposure (either as a recipient currently under statin treatment or receiving a donor liver exposed to statins) on the transition hazards between transplant, biliary‐vascular complications and death, while allowing for recurring events such as re‐transplant and recurrent biliary‐vascular complications (Figure 1). Death was considered a terminal outcome, while re‐LT and biliary‐vascular complications were considered the main surrogates of graft loss and dysfunction, respectively. Therefore, the multistate modelling approach considered three main states of transition for the time‐dependent model: death, re‐LT and biliary‐vascular complications. Additional transitions are possible between each state, including recurrent events. The only exception is death, which represents an absorbing state. Follow‐up for every patient started at the time of their first LT. The model was fitted using a Cox proportional hazards model (see further details in the Supplementary materials).
Graphical representation of the possible transitions. The numbers in the arrows indicate the observed transitions. The number of transitions from biliary‐vascular complications to re‐LT and death is given globally, including those coming from first or recurrent biliary‐vascular complications.
Administrative censoring was applied at 3 years because the primary interest of the study was the time immediately following LT. Effects might differ in long‐term follow‐up, since long‐term estimates would progressively be based on patients transplanted at an earlier date and might therefore not generalise well to current standards. Furthermore, we did not include HCC or other cancers in the model because of indications that statins could influence the risk of developing HCC 25 , 26 as well as a potential protective role of statins in other cancers. 27 , 28 Including these variables would therefore constitute post‐treatment conditioning and might bias the estimates. However, we additionally explored this possibility using a Cox regression model with HCC‐free survival and non‐HCC cancer‐free survival as outcomes. All tests were two‐sided. Significance was accepted at p < 0.05. R version 4.1.0 (R Core Team, 2021) was used for statistical calculations, and the package survival (Therneau, T. M. [2020]. A Package for Survival Analysis in R. https://CRAN.R‐project.org/package=survival) was used to estimate the model.
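The multistate structure can be made concrete by tallying the observed transitions between states, which is what the numbers in the arrows of Figure 1 report. A minimal sketch, using hypothetical event histories and illustrative state labels ("LT", "BV", "reLT", "death"):

```python
from collections import Counter

# Each patient history is the ordered list of states entered after the first LT.
# "death" is the absorbing state: no transitions leave it.
histories = [
    ["LT", "BV", "BV", "death"],  # recurrent complication, then death
    ["LT", "BV", "reLT"],         # complication leading to re-transplant
    ["LT", "death"],
    ["LT"],                       # censored alive without events
]

def count_transitions(histories):
    """Tally (from_state, to_state) pairs over all patient histories."""
    tally = Counter()
    for h in histories:
        for a, b in zip(h, h[1:]):  # consecutive pairs of states
            tally[(a, b)] += 1
    return tally

tally = count_transitions(histories)
```

Each transition type then gets its own stratum (and its own hazard ratio) in the Cox multistate model, which is why a covariate such as statin use can act differently on, say, transplant-to-death than on complication-to-death.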
RESULTS
General characteristics of the study population
Overall, 998 adult LT recipients were included in the study, with a total of 1038 grafts transplanted. Most recipients were male (696 patients, 70%), with a mean age of 55 ± 11 years. Hepatocellular carcinoma (HCC) was present in 452 (45%) of the patients. The median calculated MELD score at LT was 14 [IQR 8–24]. Demographic and clinical information regarding LT and comorbidities are summarised in Table 1. Seventy‐two (7%) LT recipients were on statin therapy at the time of LT. The total proportion of patients exposed to statins increased thereafter, up to 19%, with statin use amounting to 13.52/100 patient‐days. New‐onset cardiovascular complications in patients without previous cardiovascular events, new types of cardiovascular events, newly diagnosed T2DM and newly diagnosed arterial hypertension after LT occurred in 150 (15%), 257 (27%), 96 (10%) and 155 (16%) patients, respectively, suggesting an underuse of statins in this setting, as previously observed. 22 Thirty‐five (3.5%) patients had recurrence of HCC, whereas de novo/recurrent extrahepatic cancers occurred in 111 (11.1%).
General baseline pre‐transplant characteristics of the LT recipients and recipient characteristics stratified into statin non‐user and statin user groups throughout the study period. Note: Exposure and no exposure to statins relate to the whole follow‐up period of 3 years. Significances determined by chi‐squared or Fisher's exact test for categorical variables, Welch's t‐test for variables summarised by mean ± SD and the Wilcoxon test where median [IQR] is reported. Abbreviations: BMI, body mass index; CAD, coronary artery disease; CKD, chronic kidney disease; GI, gastrointestinal; HBV, hepatitis B virus; HCC, hepatocellular carcinoma; HCV, hepatitis C virus; HTN, arterial hypertension; LT, liver transplantation; MELD, Model for End‐Stage Liver Disease; NAFLD, non‐alcoholic fatty liver disease; NASH, non‐alcoholic steatohepatitis; PBC, primary biliary cholangitis; PSC, primary sclerosing cholangitis; T2DM, diabetes mellitus type 2. p ≤ 0.05; p ≤ 0.01.
Recipients on statins were older (59 ± 7 vs. 53 ± 12 years), more frequently male (84% vs. 66%), had higher BMI (27.8 ± 4.7 vs. 26.2 ± 5.1) and more frequently had cardiovascular disease (58% vs. 41%) (all p ≤ 0.001). Additionally, statin recipients were more frequently transplanted for alcohol‐related liver disease (44% vs. 32% in those not exposed to statins, p = 0.002), non‐alcoholic steatohepatitis (NASH) (14% vs. 9%, p = 0.04) and HCC (61% vs. 44%) (Table 1). The main characteristics of the study population 3 years after LT are reported in Table 2. Events observed in LT recipients within 3 years after transplant. Abbreviations: HCC, hepatocellular carcinoma; GI, gastrointestinal; LT, liver transplantation.
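The Table 1 comparisons of continuous baseline variables between statin users and non‐users use Welch's t‐test, which does not assume equal group variances. A minimal stdlib sketch of the statistic and its Welch–Satterthwaite degrees of freedom (the age values below are hypothetical, not study data):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)  # sample variances (n - 1 denominator)
    se2 = vx / nx + vy / ny            # squared standard error of the difference
    t = (mean(x) - mean(y)) / sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Example: ages of statin users vs non-users (hypothetical values)
t, df = welch_t([59, 61, 58, 60, 62], [52, 55, 50, 54, 53, 51])
```

The p‐value then comes from a t distribution with the (generally non‐integer) df above; unlike Student's t‐test, the pooled‐variance assumption is never invoked.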
Donors' general characteristics
A total of 1038 deceased donors were included in the study. Most donors were male (608, 57%), with a mean age of 56.18 ± 16.86 years. DCD donors accounted for 122 (12%) livers, alcohol use was reported in almost half of the donors (482, 46%) and 127 (12%) were obese (Table 3). One hundred and forty‐three donors (14%) were on statins at the time of donation. Donors receiving statins were older (68 vs. 54 years, p < 0.001) and had higher BMI (26.9 vs. 25.3, p < 0.001). Donors with T2DM were 3.39 times more likely to receive statins (p < 0.001). General characteristics of the donor population. Note: Donor statin use was only recorded in 592 cases. Abbreviations: BMI, body mass index; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.
Mortality, re‐transplant and vascular and biliary complications
During the follow‐up, 141 (14%) patients died. Median time to death from first LT was 241 days (IQR, 66–548 days).
Causes of death included infections (n = 42, 30%), liver failure (n = 34, 24%), terminal cancer (n = 27, 19%), haemorrhage (n = 15, 11%) and cardiovascular disease (n = 10, 7%). Among the patients who died, 16 (11%) were exposed to statins. Causes of death did not differ significantly according to statin exposure; however, none of the recipients on statins died from a cardiovascular event, compared with 10 cardiovascular deaths among patients who never used statins (Figure 3). There were 40 re‐LT in 38 patients (4%) and 363 biliary‐vascular complications in 287 patients (29%).
Multistate model approach
In the multistate model (Table 4), any patient can move from one state to another, with the only terminal state being death; the transitions observed in the present study are summarised in Figure 1. Recipient statin use was associated with lower mortality after LT (HR = 0.35; 95% CI = 0.12–0.99; p = 0.047) (Figure 2). Statin use was also associated with a lower hazard of re‐LT (p = 0.004), although the HR could not be reliably estimated because no statin users underwent re‐LT. Recipient statin use was not associated with the occurrence of biliary‐vascular complications (HR = 1.25; 95% CI = 0.85–1.83; p = 0.266). Regarding the transition from B‐V complications to other states, statin use was significantly associated with a reduced likelihood of death (HR = 0.10; 95% CI = 0.01–0.81; p = 0.031) and of recurrent biliary‐vascular complications (HR = 0.43; 95% CI = 0.20–0.93; p = 0.033) (Figure 2). There was no significant effect of statin use on the likelihood of re‐LT after biliary‐vascular complications (HR = 1.12; 95% CI = 0.32–3.90, p = 0.858).
Multistate model considering survival in the different transitions according to Cox regression analysis. Abbreviations: BMI, body mass index; B‐V, biliary‐vascular; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2. Cumulative transition hazards with significant effects of recipient statin use: transplant to transplant; transplant to death; biliary‐vascular complication to biliary‐vascular complication; and biliary‐vascular complication to death. Causes of death among recipients exposed or not exposed to statins.
Considering donor statin use, there was no effect on any of the six transitions. Donor statin use showed a trend towards an association with the occurrence of biliary‐vascular complications (HR = 0.47; 95% CI = 0.22–1.02), but this did not reach statistical significance (p = 0.056).
HCC recurrence and de novo/recurrence of cancer other than HCC
In order to better understand the impact of statins on survival, we built Cox models considering HCC recurrence and de novo/recurrence of non‐liver cancer, with death as a competing risk. While there was no association between HCC recurrence and statin use (Table S1), statin use was associated with a significant reduction in the risk of developing cancers other than HCC (HR = 0.48; 95% CI = 0.29–0.80; p = 0.005; Table S2).
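Treating death as a competing risk means that cancer incidence is summarised by a cumulative incidence function (an Aalen–Johansen‐type estimate) rather than by one minus the Kaplan–Meier curve, which would overstate risk. A minimal sketch for one event type, with hypothetical data (event code 1 = cancer, 2 = death without cancer, 0 = censored):

```python
def cumulative_incidence(times, events, cause=1):
    """Cumulative incidence of `cause` in the presence of competing risks.

    times: follow-up times; events: 0 = censored, 1 = cause, 2 = competing.
    Returns (time, CIF) pairs at each observed event time of `cause`.
    """
    data = sorted(zip(times, events))
    surv = 1.0                 # overall event-free survival just before t
    cif = 0.0
    at_risk = len(data)
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = d_comp = censored = 0
        while i < len(data) and data[i][0] == t:  # handle tied times
            ev = data[i][1]
            if ev == cause:
                d_cause += 1
            elif ev == 0:
                censored += 1
            else:
                d_comp += 1
            i += 1
        if d_cause:
            cif += surv * d_cause / at_risk  # mass of cause-specific events
            out.append((t, cif))
        surv *= 1 - (d_cause + d_comp) / at_risk
        at_risk -= d_cause + d_comp + censored
    return out

# Example with four hypothetical patients
res = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 0])
```

A competing death at time 2 lowers the event‐free survival that multiplies the later cancer hazard, which is the correction a naive Kaplan–Meier complement would miss.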
[ "INTRODUCTION", "Study design", "Study population", "Study outcomes and definitions", "Statistical analysis", "General characteristics of the study population", "Donors' general characteristics", "Mortality, re‐transplant and vascular and biliary complication", "Multistate model approach", "\nHCC recurrence and de novo/recurrence of cancer other than HCC\n", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION" ]
[ "Liver transplantation (LT) is considered the ultimate curative option for end‐stage liver disease and for non‐resectable hepatocellular carcinoma (HCC). Although survival rates after LT have progressively improved over the years, the first year post‐LT remains the critical period, summing 46% of the total deaths and 67% of re‐LT.\n1\n Initial outcomes are mainly determined by surgical or peri‐operative problems leading to primary non‐function or delayed graft function and ultimately to re‐LT. In contrast to this, long‐term outcomes are mainly affected by de novo or recurrent malignant tumours and cardiovascular disease.\n2\n\n\nStatins are inhibitors of 3‐hydroxy‐3‐methyl‐glutaryl‐coenzyme A reductase (HMG‐CoA reductase) widely used in the treatment of dyslipidemia and prophylaxis of cardiovascular events.\n2\n, \n3\n It has long been recognised that part of the benefits attributed to statins in cardiovascular disease are due to their pleiotropic effects that influence vascular remodelling and reverse endothelial dysfunction, among others.\n4\n These pleiotropic effects may likely explain the beneficial effects of statins in other conditions, such as sepsis\n5\n and cancer.\n6\n, \n7\n Recent studies reported beneficial effects of statins on chronic liver diseases, both in pre‐clinical models \n8\n, \n9\n, \n10\n and in clinical studies.\n11\n, \n12\n, \n13\n These range from improvement of hepatic sinusoidal endothelial function, leading to a reduction in intrahepatic vascular tone and portal pressure, to a decreased fibrogenesis that may translate into preventing disease progression and facilitating its regression.\n14\n Statins have also been shown to protect from lipopolysaccharide‐induced acute‐on‐chronic liver failure (ACLF) in cirrhotic rats\n15\n and to prevent liver function impairment after hypovolemic shock.\n16\n, \n17\n Interestingly, in preclinical models statins protect against ischemia/reperfusion injury in young and aged animals\n18\n and 
prolong liver graft preservation both in normal and liver grafts with steatosis considered at high‐risk of ischemia/reperfusion injury.\n19\n Furthermore, epidemiological studies in large cohorts of patients with chronic liver disease suggest a protective effect of statins reducing the rate of progression to cirrhosis, liver decompensation, development of HCC and death.\n20\n, \n21\n Classical indications for statins, including the treatment and prevention of cardiovascular diseases, may be relevant for the long‐term outcome after LT.\n22\n Moreover, the effects of statins protecting from ischemia/reperfusion injury could be beneficial in the early phase after LT, by reducing the incidence and severity of biliary and vascular complications. Therefore, we hypothesize that statin use in LT recipients may favourably influence the transition to adverse outcomes, including re‐LT, severe and recurrent biliary‐vascular complications, and death.", "We conducted a cohort study in a nationwide database, including all adult patients who underwent LT from May 2008 to December 2019 registered in the Swiss Transplant Cohort Study (STCS). The STCS is a prospective open cohort study including data from transplanted patients starting in 2008 from the three Swiss liver transplant centres: Bern, Zurich and Geneva. Data from the liver donors were obtained from the Swiss Organ Allocation System (SOAS) records, with permission from Swisstransplant.\nThe study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice. All the patients included in the present study had given their informed consent to be included in the STCS. In addition, the Bern Cantonal Ethics Committee (KEK‐ID 2020–02122) approved the present study, as well as the STCS (FUP 149).", "We included all consecutive adult LT candidates who received a liver from a deceased donor and agreed to participate in the STCS. 
We collected data regarding liver disease aetiology, severity, comorbidities, use of statins and duration of exposure to these drugs. Regarding donor characteristics, we recorded demographic data, comorbidities, type of deceased donor, cause of death, biochemical characteristics and use of statins.", "We studied LT outcome according to statin exposure in the first 3 years after LT. The primary outcome was post‐LT mortality, while re‐LT and/or development of biliary‐vascular complications were secondary outcomes. Yet, we consider cancer and major cardiovascular events as secondary endpoint as well, arising in an ancillary analysis aimed primarily to better control for possible confounding factors. Patients who dropped out were considered alive until last follow‐up and then censored.\nRe‐LT was defined as a new LT due to all types of graft loss. Biliary‐vascular complications included biliary leaks, stones and strictures and vascular thrombosis, vascular stenosis and ischemic cholangiopathy.\n23\n Because not all vascular thrombotic complications necessarily lead to a biliary event, the two types of complications were considered separately. Systemic cardiovascular complications included occurrence of a major cardiovascular event (documented myocardial infarction or stroke), thromboembolic event (pulmonary embolism) and peripheral arterial ischemic disease. Concerning cancers other than HCC, we considered all solid and haematological cancers excluding non‐melanoma skin cancers.\nStatin exposure was assessed in both donors and recipients. For donors, statin use was assessed as a dichotomous variable at the time of donation, whereas for recipients, statin use was defined as concurrent use of statins for a given time t during follow‐up. In order to reduce the immortal time bias, exposure was considered as person‐time between prescription and end of follow‐up.\n24\n LT was considered time 0. 
The observation time after LT was 3 years.", "For descriptive aims, we report the general characteristics of the study cohort across the study period. Quantitative and categorical variables were expressed as mean and standard deviation or median and interquartile range (25%–75%) and percentages, respectively. As a supplementary analysis, we divided the recipients into statin non‐user and statin user groups through the study period and compared these using Welch's t‐test for continuous variables and Wilcoxon's rank‐sum test for rank data, respectively. Categorical variables were compared using the chi‐squared test, where appropriate.\nBecause only three patients had missing covariate data, we adopted a complete‐cases analysis for all models. We conducted the statistical analysis by adopting a multistate modelling approach in order to examine the effect of statin exposure (either as recipient currently under statin treatment or receiving a donor liver exposed to statins) on the transition hazards between transplant, biliary‐vascular complications and death, while allowing for recurring events such as re‐transplant and recurrent biliary‐vascular complications (Figure 1). Death was considered a terminal outcome, while re‐LT and biliary‐vascular complications were considered the main surrogates of graft loss and graft dysfunction, respectively. Therefore, the multistate modelling approach considered three main states of transition for the time‐dependent model: death, re‐LT and biliary‐vascular complications. Additional transitions are possible between each state, including recurrent events. The only exception is death, which represents an absorbing state. Follow‐up for every patient started at the time of their first LT. The model was fitted using a Cox‐proportional hazards model (see more data in the Supplementary materials).\nGraphical representation of the possible transitions. The numbers in the arrows indicate the observed transitions.
The number of transitions from biliary‐vascular complications to re‐LT and death is given globally, including those coming from first or recurrent biliary‐vascular complications.\nAdministrative censoring was applied at 3 years because the primary interest of the study was the time immediately following LT. Effects might be different in the long‐term follow‐up since long‐term estimates would increasingly be based on patients transplanted at an earlier date, which might therefore not generalise well to current standards.\nFurthermore, we did not include HCC or other cancers in the model because of indications that statins could influence the risk of developing HCC\n25\n, \n26\n as well as a potential protective role of statins in other cancers.\n27\n, \n28\n Including these variables would therefore constitute post‐treatment conditioning and might bias the estimates. However, we additionally explored this possibility using a Cox regression model with HCC‐free survival and non‐HCC cancer‐free survival as outcomes.\nAll tests were two‐sided. Significance was accepted at p < 0.05. R version 4.1.0 (Development Core Team, 2021) was used for statistical calculations. The package survival (Therneau, T. M. [2020]. A Package for Survival Analysis in R. https://CRAN.R‐project.org/package=survival) was used to estimate the model.", "Overall, 998 adult LT recipients were included in the study, with a total of 1038 grafts transplanted. Most recipients were male (696 patients, 70%), with a mean age of 55 ± 11 years. Hepatocellular carcinoma (HCC) was present in 452 (45%) of the patients. The median calculated MELD score at LT was 14 [IQR 8–24]. Demographic and clinical information regarding LT and comorbidities are summarised in Table 1. Seventy‐two (7%) LT recipients were on statin therapy at the time of LT. The proportion of patients exposed to statins increased thereafter, up to 19%, with statin use accounting for 13.52/100 patient‐days of follow‐up.
New‐onset cardiovascular complications in patients without previous cardiovascular events, new types of cardiovascular events, and newly diagnosed T2DM and arterial hypertension after LT occurred in 150 (15%), 257 (27%), 96 (10%) and 155 (16%) patients, respectively, suggesting an underuse of statins in this setting, as previously observed.\n22\n Thirty‐five (3.5%) patients had recurrence of HCC, whereas de novo/recurrent extrahepatic cancers occurred in 111 (11.1%).\nGeneral baseline pre‐transplant characteristics of the LT recipients and recipients' characteristics stratified into statin non‐user and statin user groups throughout the study period\n\nNote: Exposure and no exposure to statins relate to the whole follow‐up period of 3 years. Significance was determined by chi‐squared or Fisher's exact test for categorical variables, Welch's t‐test for variables summarised by mean ± sd and the Wilcoxon test where median [IQR] is reported.\nAbbreviations: BMI, body mass index; CAD, coronary artery disease; CKD, chronic kidney disease; GI, gastrointestinal; HBV, hepatitis B virus; HCC, hepatocellular carcinoma; HCV, hepatitis C virus; HTN, arterial hypertension; LT, liver transplantation; MELD, Model for End‐Stage Liver Disease; NAFLD, non‐alcoholic fatty liver disease; NASH, non‐alcoholic steatohepatitis; PBC, primary biliary cholangitis; PSC, primary sclerosing cholangitis; T2DM, diabetes mellitus type 2.\n\np ≤ 0.05\n\np ≤ 0.01.\nRecipients on statins were older (59 ± 7 vs. 53 ± 12 years), more frequently male (84% vs. 66%), had higher BMI (27.8 ± 4.7 vs. 26.2 ± 5.1) and more frequently had cardiovascular disease (58% vs. 41%) (all p ≤ 0.001). Additionally, statin recipients were more frequently transplanted for alcohol‐related liver disease (44% vs. 32% in non‐exposed to statins, p = 0.002), non‐alcoholic steatohepatitis (NASH) (14% vs. 9%, p = 0.04) and HCC (61% vs.
44%) (Table 1).\nThe main characteristics of the study population 3 years after LT are reported in Table 2.\nEvents observed in LT recipients within 3 years after transplant\nAbbreviations: HCC, hepatocellular carcinoma; GI, gastrointestinal; LT, liver transplantation.", "A total of 1038 deceased donors were included in the study. Most donors were male (608, 57%), with a mean age of 56.18 ± 16.86 years. DCD accounted for 122 (12%) livers; alcohol use was reported in almost half of the donors (482, 46%) and 127 (12%) were obese (Table 3). One hundred forty‐three donors (14%) were on statins at the time of donation. Donors receiving statins were older (68 vs. 54 years, p < 0.001) and had higher BMI (26.9 vs. 25.3, p < 0.001). Donors with T2DM were 3.39 times more likely to receive statins (p < 0.001).\nGeneral characteristics of the donor population\n\nNote: Donor statin use was only recorded in 592 cases.\nAbbreviations: BMI, body mass index; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.", "During the follow‐up, 141 (14%) patients died. Median time to death from first LT was 241 days (IQR, 66–548 days). Causes of death included infections (n = 42, 30%), liver failure (n = 34, 24%), terminal cancer (n = 27, 19%), haemorrhage (n = 15, 11%) and cardiovascular disease (n = 10, 7%). Among patients who died, 16 (11%) were exposed to statins. No statistically significant differences in causes of death were found according to statin exposure; however, none of the recipients on statins died from a cardiovascular event, compared to 10 cardiovascular deaths in patients who never used statins (Figure 3). There were 40 re‐LTs in 38 patients (4%) and 363 biliary‐vascular complications in 287 patients (29%).", "In the multistate model (Table 4), any patient can migrate from one state to another, with death being the only terminal state; the transitions observed in the present study are summarised in Figure 1.
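The multistate structure just described can be sketched as a small transition map; the state names below are illustrative labels (not the paper's own coding), with death as the only absorbing state and recurrent re‐LT and biliary‐vascular complications allowed, which yields the six analysed transitions.

```python
# Sketch of the multistate structure: re-LT is modelled as a
# transplant -> transplant transition, biliary-vascular complications
# may recur, and death is the only absorbing state (no outgoing arrows).
# State labels are illustrative, not the paper's coding.

TRANSITIONS = {
    "transplant":      {"transplant", "bv_complication", "death"},
    "bv_complication": {"transplant", "bv_complication", "death"},
    "death":           set(),  # absorbing state
}

def valid_path(path):
    """Check that an observed sequence of states respects the model."""
    return all(b in TRANSITIONS[a] for a, b in zip(path, path[1:]))
```

Counting the arrows in `TRANSITIONS` gives the six possible transitions (two states with three outgoing arrows each), matching the six transitions on which donor statin use showed no effect.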
Recipient statin use was associated with lower mortality after LT (HR = 0.35; 95% CI = 0.12–0.99; p = 0.047) (Figure 2). Statin use was also associated with a lower hazard of re‐LT (p = 0.004), where the HR could not be reliably estimated because no statin users had re‐LT. Recipient statin use was not associated with the occurrence of biliary‐vascular complications (HR = 1.25; 95% CI = 0.85–1.83; p = 0.266). Regarding the transition from B‐V complications to other states, statin use was significantly associated with a reduced likelihood of death (HR = 0.10; 95% CI = 0.01–0.81; p = 0.031) and of recurrent biliary‐vascular complications (HR = 0.43; 95% CI = 0.20–0.93; p = 0.033) (Figure 2). There was no significant effect of statin use on the likelihood of re‐LT after biliary‐vascular complications (HR = 1.12; 95% CI = 0.32–3.90, p = 0.858).\nMultistate model considering survival in the different transitions according to Cox regression analysis\nAbbreviations: BMI, body mass index; B‐V, biliary‐vascular; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.\nCumulative transition hazards with significant effects of recipient statin use: Transplant to transplant; transplant to death; biliary‐vascular complication to biliary‐vascular complication; and biliary‐vascular complication to death.\nDifferent causes of death among recipients exposed or not exposed to statins.\nConsidering donor statin use, there was no effect on any of the six transitions. Donor statin use showed a trend towards an association with the occurrence of biliary‐vascular complications similar to that of recipient statin use (HR = 0.47; 95% CI = 0.22–1.02), but it did not reach statistical significance (p = 0.056).", "In order to better understand the impact of statins on survival, we built Cox models considering HCC recurrence and de novo/recurrence of non‐liver cancer, with death as a competing risk.
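The competing‐risk setup can be illustrated with a minimal nonparametric cumulative incidence (Aalen‐Johansen type) estimator on toy data. This is a stdlib‐only sketch for intuition, not the Cox regression actually used in the paper, and all data values below are made up.

```python
# Sketch: cumulative incidence of one event type (e.g. non-HCC cancer)
# with death as a competing risk. At each event time t, the increment is
# S(t-) * d_type / n_at_risk, where S is the Kaplan-Meier survival from
# all event types combined. Toy data only; not the study's analysis.

def cumulative_incidence(data, event_type):
    """data: list of (time, event) with event None if censored,
    else a type label. Returns the cumulative incidence of `event_type`
    at the last observed time."""
    data = sorted(data, key=lambda r: r[0])
    n_at_risk = len(data)
    surv = 1.0  # overall (all-cause) Kaplan-Meier survival just before t
    cif = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d_type = d_all = c = 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            ev = data[i][1]
            if ev is None:
                c += 1
            else:
                d_all += 1
                if ev == event_type:
                    d_type += 1
            i += 1
        cif += surv * d_type / n_at_risk
        surv *= 1 - d_all / n_at_risk
        n_at_risk -= d_all + c
    return cif
```

On a toy cohort of four subjects, the event‐specific incidences sum with the overall survival to 1, which is the property a naive "1 minus Kaplan‐Meier per cause" approach would violate.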
While there was no association between HCC recurrence and statin use (Table S1), statin use was associated with a significant reduction of the risk of developing cancers other than HCC (HR = 0.48; 95% CI = 0.29–0.80; p = 0.005; Table S2).", "\nChiara Becchetti: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); writing – original draft (lead); writing – review and editing (equal). Melisa Dirchwolf: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); writing – original draft (equal); writing – review and editing (equal). Jonas Schropp: Data curation (equal); formal analysis (equal); validation (equal). Giulia Magini: Data curation (equal); writing – review and editing (equal). Beat Mullhaupt: Data curation (equal); writing – review and editing (equal). Immer Franz: Data curation (equal); writing – review and editing (equal). Jean‐Francois Dufour: Data curation (equal); writing – review and editing (equal). Vanessa Banz: Funding acquisition (equal); writing – review and editing (equal). Annalisa Berzigotti: Conceptualization (equal); supervision (equal); writing – review and editing (equal). Jaime Bosch: Conceptualization (equal); funding acquisition (equal); methodology (equal); project administration (equal); supervision (equal); writing – review and editing (equal).", "C.B. and J.B. were supported by the Stiftung für Leberkrankheiten Bern. The study was supported by the Swiss Transplant Cohort Study (FUP 149)." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design", "Study population", "Study outcomes and definitions", "Statistical analysis", "RESULTS", "General characteristics of the study population", "Donors' general characteristics", "Mortality, re‐transplant and vascular and biliary complication", "Multistate model approach", "\nHCC recurrence and de novo/recurrence of cancer other than HCC\n", "DISCUSSION", "AUTHOR CONTRIBUTIONS", "FUNDING INFORMATION", "CONFLICT OF INTEREST", "Supporting information" ]
[ "Liver transplantation (LT) is considered the ultimate curative option for end‐stage liver disease and for non‐resectable hepatocellular carcinoma (HCC). Although survival rates after LT have progressively improved over the years, the first year post‐LT remains the critical period, accounting for 46% of total deaths and 67% of re‐LTs.\n1\n Initial outcomes are mainly determined by surgical or peri‐operative problems leading to primary non‐function or delayed graft function and ultimately to re‐LT. In contrast, long‐term outcomes are mainly affected by de novo or recurrent malignant tumours and cardiovascular disease.\n2\n\n\nStatins are inhibitors of 3‐hydroxy‐3‐methyl‐glutaryl‐coenzyme A reductase (HMG‐CoA reductase) widely used in the treatment of dyslipidemia and prophylaxis of cardiovascular events.\n2\n, \n3\n It has long been recognised that part of the benefits attributed to statins in cardiovascular disease are due to their pleiotropic effects that influence vascular remodelling and reverse endothelial dysfunction, among others.\n4\n These pleiotropic effects likely explain the beneficial effects of statins in other conditions, such as sepsis\n5\n and cancer.\n6\n, \n7\n Recent studies reported beneficial effects of statins on chronic liver diseases, both in pre‐clinical models \n8\n, \n9\n, \n10\n and in clinical studies.\n11\n, \n12\n, \n13\n These range from improvement of hepatic sinusoidal endothelial function, leading to a reduction in intrahepatic vascular tone and portal pressure, to decreased fibrogenesis that may translate into preventing disease progression and facilitating its regression.\n14\n Statins have also been shown to protect from lipopolysaccharide‐induced acute‐on‐chronic liver failure (ACLF) in cirrhotic rats\n15\n and to prevent liver function impairment after hypovolemic shock.\n16\n, \n17\n Interestingly, in preclinical models statins protect against ischemia/reperfusion injury in young and aged animals\n18\n and
prolong liver graft preservation both in normal grafts and in steatotic grafts considered at high risk of ischemia/reperfusion injury.\n19\n Furthermore, epidemiological studies in large cohorts of patients with chronic liver disease suggest a protective effect of statins, reducing the rate of progression to cirrhosis, liver decompensation, development of HCC and death.\n20\n, \n21\n Classical indications for statins, including the treatment and prevention of cardiovascular diseases, may be relevant for the long‐term outcome after LT.\n22\n Moreover, the effects of statins protecting from ischemia/reperfusion injury could be beneficial in the early phase after LT, by reducing the incidence and severity of biliary and vascular complications. Therefore, we hypothesise that statin use in LT recipients may favourably influence the transition to adverse outcomes, including re‐LT, severe and recurrent biliary‐vascular complications, and death.", " Study design We conducted a cohort study in a nationwide database, including all adult patients who underwent LT from May 2008 to December 2019 registered in the Swiss Transplant Cohort Study (STCS). The STCS is a prospective open cohort study including data from transplanted patients starting in 2008 from the three Swiss liver transplant centres: Bern, Zurich and Geneva. Data from the liver donors were obtained from the Swiss Organ Allocation System (SOAS) records, with permission from Swisstransplant.\nThe study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice. All the patients included in the present study had given their informed consent to be included in the STCS. In addition, the Bern Cantonal Ethics Committee (KEK‐ID 2020–02122) approved the present study, as well as the STCS (FUP 149).\n Study population We included all consecutive adult LT candidates who received a liver from a deceased donor and agreed to participate in the STCS. We collected data regarding liver disease aetiology, severity, comorbidities, use of statins and duration of exposure to these drugs. Regarding donor characteristics, we recorded demographic data, comorbidities, type of deceased donor, cause of death, biochemical characteristics and use of statins.\n Study outcomes and definitions We studied LT outcome according to statin exposure in the first 3 years after LT. The primary outcome was post‐LT mortality, while re‐LT and/or development of biliary‐vascular complications were secondary outcomes. We also considered cancer and major cardiovascular events as secondary endpoints, assessed in an ancillary analysis aimed primarily at better controlling for possible confounding factors.
Patients who dropped out were considered alive until last follow‐up and then censored.\nRe‐LT was defined as a new LT due to all types of graft loss. Biliary‐vascular complications included biliary leaks, stones and strictures, as well as vascular thrombosis, vascular stenosis and ischemic cholangiopathy.\n23\n Because not all vascular thrombotic complications necessarily lead to a biliary event, the two types of complications were considered separately. Systemic cardiovascular complications included occurrence of a major cardiovascular event (documented myocardial infarction or stroke), thromboembolic event (pulmonary embolism) and peripheral arterial ischemic disease. Concerning cancers other than HCC, we considered all solid and haematological cancers excluding non‐melanoma skin cancers.\nStatin exposure was assessed in both donors and recipients. For donors, statin use was assessed as a dichotomous variable at the time of donation, whereas for recipients, statin use was defined as concurrent use of statins for a given time t during follow‐up. In order to reduce the immortal time bias, exposure was considered as person‐time between prescription and end of follow‐up.\n24\n LT was considered time 0. The observation time after LT was 3 years.\n Statistical analysis For descriptive aims, we report the general characteristics of the study cohort across the study period. Quantitative and categorical variables were expressed as mean and standard deviation or median and interquartile range (25%–75%) and percentages, respectively. As a supplementary analysis, we divided the recipients into statin non‐user and statin user groups through the study period and compared these using Welch's t‐test for continuous variables and Wilcoxon's rank‐sum test for rank data, respectively. Categorical variables were compared using the chi‐squared test, where appropriate.\nBecause only three patients had missing covariate data, we adopted a complete‐cases analysis for all models.
We conducted the statistical analysis by adopting a multistate modelling approach in order to examine the effect of statin exposure (either as recipient currently under statin treatment or receiving a donor liver exposed to statins) on the transition hazards between transplant, biliary‐vascular complications and death, while allowing for recurring events such as re‐transplant and recurrent biliary‐vascular complications (Figure 1). Death was considered a terminal outcome, while re‐LT and biliary‐vascular complications were considered the main surrogates of graft loss and graft dysfunction, respectively. Therefore, the multistate modelling approach considered three main states of transition for the time‐dependent model: death, re‐LT and biliary‐vascular complications. Additional transitions are possible between each state, including recurrent events. The only exception is death, which represents an absorbing state. Follow‐up for every patient started at the time of their first LT. The model was fitted using a Cox‐proportional hazards model (see more data in the Supplementary materials).\nGraphical representation of the possible transitions. The numbers in the arrows indicate the observed transitions. The number of transitions from biliary‐vascular complications to re‐LT and death is given globally, including those coming from first or recurrent biliary‐vascular complications.\nAdministrative censoring was applied at 3 years because the primary interest of the study was the time immediately following LT.
Effects might be different in the long‐term follow‐up since long‐term estimates would increasingly be based on patients transplanted at an earlier date, which might therefore not generalise well to current standards.\nFurthermore, we did not include HCC or other cancers in the model because of indications that statins could influence the risk of developing HCC\n25\n, \n26\n as well as a potential protective role of statins in other cancers.\n27\n, \n28\n Including these variables would therefore constitute post‐treatment conditioning and might bias the estimates. However, we additionally explored this possibility using a Cox regression model with HCC‐free survival and non‐HCC cancer‐free survival as outcomes.\nAll tests were two‐sided. Significance was accepted at p < 0.05. R version 4.1.0 (Development Core Team, 2021) was used for statistical calculations. The package survival (Therneau, T. M. [2020]. A Package for Survival Analysis in R. https://CRAN.R‐project.org/package=survival) was used to estimate the model.", "We conducted a cohort study in a nationwide database, including all adult patients who underwent LT from May 2008 to December 2019 registered in the Swiss Transplant Cohort Study (STCS). The STCS is a prospective open cohort study including data from transplanted patients starting in 2008 from the three Swiss liver transplant centres: Bern, Zurich and Geneva. Data from the liver donors were obtained from the Swiss Organ Allocation System (SOAS) records, with permission from Swisstransplant.\nThe study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice. All the patients included in the present study had given their informed consent to be included in the STCS.
In addition, the Bern Cantonal Ethics Committee (KEK‐ID 2020–02122) approved the present study, as well as the STCS (FUP 149).", "We included all consecutive adult LT candidates who received a liver from a deceased donor and agreed to participate in the STCS. We collected data regarding liver disease aetiology, severity, comorbidities, use of statins and duration of exposure to these drugs. Regarding donor characteristics, we recorded demographic data, comorbidities, type of deceased donor, cause of death, biochemical characteristics and use of statins.", "We studied LT outcome according to statin exposure in the first 3 years after LT. The primary outcome was post‐LT mortality, while re‐LT and/or development of biliary‐vascular complications were secondary outcomes. We also considered cancer and major cardiovascular events as secondary endpoints, assessed in an ancillary analysis aimed primarily at better controlling for possible confounding factors. Patients who dropped out were considered alive until last follow‐up and then censored.\nRe‐LT was defined as a new LT due to all types of graft loss. Biliary‐vascular complications included biliary leaks, stones and strictures, as well as vascular thrombosis, vascular stenosis and ischemic cholangiopathy.\n23\n Because not all vascular thrombotic complications necessarily lead to a biliary event, the two types of complications were considered separately. Systemic cardiovascular complications included occurrence of a major cardiovascular event (documented myocardial infarction or stroke), thromboembolic event (pulmonary embolism) and peripheral arterial ischemic disease. Concerning cancers other than HCC, we considered all solid and haematological cancers excluding non‐melanoma skin cancers.\nStatin exposure was assessed in both donors and recipients.
For donors, statin use was assessed as a dichotomous variable at the time of donation, whereas for recipients, statin use was defined as concurrent use of statins for a given time t during follow‐up. In order to reduce the immortal time bias, exposure was considered as person‐time between prescription and end of follow‐up.\n24\n LT was considered time 0. The observation time after LT was 3 years.", "For descriptive aims, we report the general characteristics of the study cohort across the study period. Quantitative and categorical variables were expressed as mean and standard deviation or median and interquartile range (25%–75%) and percentages, respectively. As a supplementary analysis, we divided the recipients into statin non‐user and statin user groups through the study period and compared these using Welch's t‐test for continuous variables and Wilcoxon's rank‐sum test for rank data, respectively. Categorical variables were compared using the chi‐squared test, where appropriate.\nBecause only three patients had missing covariate data, we adopted a complete‐cases analysis for all models. We conducted the statistical analysis by adopting a multistate modelling approach in order to examine the effect of statin exposure (either as recipient currently under statin treatment or receiving a donor liver exposed to statins) on the transition hazards between transplant, biliary‐vascular complications and death, while allowing for recurring events such as re‐transplant and recurrent biliary‐vascular complications (Figure 1). Death was considered a terminal outcome, while re‐LT and biliary‐vascular complications were considered the main surrogates of graft loss and graft dysfunction, respectively. Therefore, the multistate modelling approach considered three main states of transition for the time‐dependent model: death, re‐LT and biliary‐vascular complications. Additional transitions are possible between each state, including recurrent events.
The only exception is death, which represents an absorbing state. Follow‐up for every patient started at the time of their first LT. The model was fitted using a Cox‐proportional hazards model (see more data in the Supplementary materials).\nGraphical representation of the possible transitions. The numbers in the arrows indicate the observed transitions. The number of transitions from biliary‐vascular complications to re‐LT and death is given globally, including those coming from first or recurrent biliary‐vascular complications.\nAdministrative censoring was applied at 3 years because the primary interest of the study was the time immediately following LT. Effects might be different in the long‐term follow‐up since long‐term estimates would increasingly be based on patients transplanted at an earlier date, which might therefore not generalise well to current standards.\nFurthermore, we did not include HCC or other cancers in the model because of indications that statins could influence the risk of developing HCC\n25\n, \n26\n as well as a potential protective role of statins in other cancers.\n27\n, \n28\n Including these variables would therefore constitute post‐treatment conditioning and might bias the estimates. However, we additionally explored this possibility using a Cox regression model with HCC‐free survival and non‐HCC cancer‐free survival as outcomes.\nAll tests were two‐sided. Significance was accepted at p < 0.05. R version 4.1.0 (Development Core Team, 2021) was used for statistical calculations. The package survival (Therneau, T. M. [2020]. A Package for Survival Analysis in R. https://CRAN.R‐project.org/package=survival) was used to estimate the model.", " General characteristics of the study population Overall, 998 adult LT recipients were included in the study, with a total of 1038 grafts transplanted. Most recipients were male (696 patients, 70%), with a mean age of 55 ± 11 years.
Hepatocellular carcinoma (HCC) was present in 452 (45%) of the patients. The calculated MELD score at LT was 14 [IQR 8–24]. Demographic and clinical information regarding LT and comorbidities are summarised in Table 1. Seventy‐two (7%) LT recipients were on statin therapy at the time of LT. The total number of patients exposed to statins increased thereafter up to 19%, and statin use accounted for 13.52/100 patient‐days. New‐onset cardiovascular complications in patients without previous cardiovascular events, new types of cardiovascular events, newly diagnosed T2DM and newly diagnosed arterial hypertension after LT occurred in 150 (15%), 257 (27%), 96 (10%) and 155 (16%) patients, respectively, suggesting an underuse of statins in this setting, as previously observed.22 Thirty‐five (3.5%) patients had recurrence of HCC, whereas de novo/recurrent extrahepatic cancers occurred in 111 (11.1%).

Table 1: General baseline pre‐transplant characteristics of the LT recipients, stratified into statin non‐user and statin user groups throughout the study period. Note: Exposure and no exposure to statins relate to the whole follow‐up period of 3 years. Significances determined by chi‐squared or Fisher's exact test for categorical variables, Welch's t‐test for variables summarised by mean ± SD, and the Wilcoxon test where median [IQR] is reported. Abbreviations: BMI, body mass index; CAD, coronary artery disease; CKD, chronic kidney disease; GI, gastrointestinal; HBV, hepatitis B virus; HCC, hepatocellular carcinoma; HCV, hepatitis C virus; HTN, arterial hypertension; LT, liver transplantation; MELD, Model for End‐Stage Liver Disease; NAFLD, non‐alcoholic fatty liver disease; NASH, non‐alcoholic steatohepatitis; PBC, primary biliary cholangitis; PSC, primary sclerosing cholangitis; T2DM, diabetes mellitus type 2. *p ≤ 0.05; **p ≤ 0.01.

Recipients on statins were older (59 ± 7 vs. 53 ± 12 years), more frequently male (84% vs. 66%), had higher BMI (27.8 ± 4.7 vs.
26.2 ± 5.1) and more frequently had cardiovascular disease (58% vs. 41%) (all p ≤ 0.001). Additionally, statin recipients were more frequently transplanted for alcohol‐related liver disease (44% vs. 32% in those not exposed to statins, p = 0.002), non‐alcoholic steatohepatitis (NASH) (14% vs. 9%, p = 0.04) and HCC (61% vs. 44%) (Table 1).

The main characteristics of the study population 3 years after LT are reported in Table 2.

Table 2: Events observed in LT recipients within 3 years after transplant. Abbreviations: GI, gastrointestinal; HCC, hepatocellular carcinoma; LT, liver transplantation.

Donors' general characteristics

A total of 1038 deceased donors were included in the study. Male was the prevalent sex (608, 57%), and mean age was 56.18 ± 16.86 years. DCD accounted for 122 (12%) livers, alcohol use was reported in almost half of the donors (482, 46%) and 127 (12%) were obese (Table 3). One hundred forty‐three donors (14%) were on statins at the time of donation. Donors receiving statins were older (68 vs. 54 years, p < 0.001) and had higher BMI (26.9 vs. 25.3, p < 0.001).
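The Welch comparison used for the recipient and donor group contrasts (Tables 1 and 3) can be written out explicitly. The study computed these tests in R; the stdlib sketch below is only illustrative, and the samples fed to it are invented.

```python
import math

# Hedged sketch of the Welch t-statistic with Welch-Satterthwaite
# degrees of freedom, as used for the unequal-variance group
# comparisons of continuous variables. Illustrative only.

def welch_t(x, y):
    """Return (t, df) for two independent samples with unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)  # sample variance of y
    se2 = vx / nx + vy / ny                        # squared standard error
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df
```

Unlike Student's t-test, the degrees of freedom here adapt to how unequal the two group variances and sizes are, which suits comparisons such as 59 ± 7 vs. 53 ± 12 years between groups of very different size.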
Donors with T2DM were 3.39 times more likely to receive statins (p < 0.001).

Table 3: General characteristics of the donor population. Note: Donor statin use was only recorded in 592 cases. Abbreviations: BMI, body mass index; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.

Mortality, re‐transplant and vascular and biliary complications

During the follow‐up, 141 (14%) patients died. Median time to death from first LT was 241 days (IQR, 66–548 days). Causes of death included infections (n = 42, 30%), liver failure (n = 34, 24%), terminal cancer (n = 27, 19%), haemorrhage (n = 15, 11%) and cardiovascular disease (n = 10, 7%). Among the patients who died, 16 (11%) had been exposed to statins. No statistically significant differences were shown between causes of death and statin exposure; however, none of the recipients on statins died from a cardiovascular event, compared to 10 cardiovascular deaths in patients who never used statins (Figure 3). There were 40 re‐LT in 38 patients (4%) and 363 biliary‐vascular complications in 287 patients (29%).

Multistate model approach

In the multistate model (Table 4), any patient can migrate from one state to another, the only terminal state being death; the migrations observed in the present study are summarised in Figure 1. Recipient statin use was associated with lower mortality after LT (HR = 0.35; 95% CI = 0.12–0.99; p = 0.047) (Figure 2). Statin use was also associated with a lower hazard of re‐LT (p = 0.004), although the HR could not be reliably estimated because no statin users underwent re‐LT. Recipient statin use was not associated with the occurrence of biliary‐vascular complications (HR = 1.25; 95% CI = 0.85–1.83; p = 0.266). Regarding the transition from biliary‐vascular complications to other states, statin use was significantly associated with a reduced likelihood of death (HR = 0.10; 95% CI = 0.01–0.81; p = 0.031) and of recurrent biliary‐vascular complications (HR = 0.43; 95% CI = 0.20–0.93; p = 0.033) (Figure 2).
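For intuition on what hazard ratios of this kind express, the simplest crude analogue is an incidence-rate ratio over person-time. The sketch below is a back-of-the-envelope illustration only: the counts in it are invented, and the study's HRs come from the adjusted multistate Cox model, not from this formula.

```python
import math

# Crude incidence-rate ratio with a Wald 95% confidence interval on
# the log scale: the simplest analogue of a hazard ratio under
# (roughly) constant hazards. Not the study's method; illustrative.

def rate_ratio(d_exposed, t_exposed, d_unexposed, t_unexposed):
    """Events d and person-time t per group -> (ratio, lower, upper)."""
    rr = (d_exposed / t_exposed) / (d_unexposed / t_unexposed)
    se = math.sqrt(1 / d_exposed + 1 / d_unexposed)  # SE of log rate ratio
    lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return rr, lo, hi
```

A ratio of 0.35 with a confidence interval excluding 1, as reported for the transplant-to-death transition, corresponds to roughly a two-thirds reduction in the instantaneous death rate among exposed person-time.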
There was no significant effect of statin use on the likelihood of re‐LT after biliary‐vascular complications (HR = 1.12; 95% CI = 0.32–3.90; p = 0.858).

Table 4: Multistate model considering survival in the different transitions according to Cox regression analysis. Abbreviations: BMI, body mass index; B‐V, biliary‐vascular; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.

Figure 2: Cumulative transition hazards with significant effects of recipient statin use: transplant to transplant; transplant to death; biliary‐vascular complication to biliary‐vascular complication; and biliary‐vascular complication to death.

Figure 3: Different causes of death among recipients exposed or not exposed to statins.

Considering donor statin use, there was no effect on any of the six transitions. Donor statin use showed a trend towards an association with the occurrence of biliary‐vascular complications similar to that of recipient statin use (HR = 0.47; 95% CI = 0.22–1.02), but it did not reach statistical significance (p = 0.056).

HCC recurrence and de novo/recurrence of cancer other than HCC

In order to better understand the impact of statins on survival, we built Cox models considering HCC recurrence and de novo/recurrence of non‐liver cancer, with death as a competing risk. While there was no association between HCC recurrence and statin use (Table S1), statin use was associated with a significant reduction in the risk of developing cancers other than HCC (HR = 0.48; 95% CI = 0.29–0.80; p = 0.005; Table S2).
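The competing-risks bookkeeping behind such an analysis can be illustrated with a cumulative incidence function, computed Aalen-Johansen style. This is a simplified stdlib sketch with invented event data and no censoring, for intuition only; the study itself fitted Cox models in R with death as a competing risk.

```python
# Simplified cumulative incidence function (CIF) for one event type
# in the presence of competing events, without censoring: at each
# event time, the cause-specific hazard is weighted by the overall
# event-free survival just before that time. Toy data only.

def cumulative_incidence(times, causes, cause, horizon):
    """Return CIF of `cause` at `horizon` for uncensored event data."""
    pairs = sorted(zip(times, causes))
    n_at_risk = len(pairs)
    surv, cif = 1.0, 0.0
    i = 0
    while i < len(pairs) and pairs[i][0] <= horizon:
        t = pairs[i][0]
        d_all = d_cause = 0
        while i < len(pairs) and pairs[i][0] == t:
            d_all += 1
            d_cause += pairs[i][1] == cause
            i += 1
        cif += surv * d_cause / n_at_risk   # weight hazard by prior survival
        surv *= 1 - d_all / n_at_risk       # update event-free survival
        n_at_risk -= d_all
    return cif
```

Treating death as a competing event this way prevents the naive Kaplan-Meier approach from overstating the probability of cancer recurrence in a cohort with substantial early mortality.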
The present study aimed at evaluating the effect of statin exposure of both recipients and donors on LT outcomes.
Specifically, we wanted to investigate whether statin exposure could influence adverse outcomes, including death, re‐LT, severe biliary‐vascular complications and recurrent biliary‐vascular complications.

The most significant result of our study is that statin use by LT recipients is associated with improved survival, both in patients with and without biliary‐vascular complications.

The study design was set up bearing in mind the main limitations affecting epidemiological studies, particularly in the LT setting. Therefore, we constructed a multistate model for the main analysis of the study, taking into account that each subject can transition to other state(s) in the course of the observation period and that this can happen several times (Figure 1). This approach ensures greater internal consistency than conventional approaches22 and weights the accumulated risk for each transition. Statin use in the recipients was associated with a lower risk of transition from LT to death and from biliary‐vascular complications to death. Recipients with concurrent statin use were never subject to re‐LT without a prior biliary‐vascular complication in our cohort, which corresponds to a significantly lower log‐likelihood (p = 0.004). In addition, recipient exposure to statins was associated with lower hazards of recurrent biliary‐vascular complications. This finding is particularly relevant considering that patients experiencing a first biliary‐vascular complication are more likely to develop a second one.

To our knowledge, this study is the first specifically designed to examine the pleiotropic protective effects of statins in LT. Recently, a study from the NailNASH Consortium analysed the outcomes of 938 patients receiving LT for NASH cirrhosis. It concluded that statin use after LT favourably impacted survival (HR = 0.38; 95% CI = 0.19–0.75; p = 0.005), which is in accordance with our study.
However, the protective effect of statins was not the main aim of that study, and its results relate to a population known for a higher risk of cardiovascular disease and a higher likelihood of statin use compared to other LT indications.25 A previous study by Patel et al.22 examined the impact of coronary artery disease (CAD) and dyslipidemia on clinical outcomes after LT and reported that, in this context, statin exposure was associated with improved survival. Our findings in the present cohort are in line with these previous data: among statin‐exposed patients who died after LT, none died from cardiovascular disease. Additionally, Patel et al. showed a considerable underuse of statins in LT recipients, even when clinically indicated, without observing side effects that could justify this underuse.22 Statin underuse is also observed in the general population: only around 25% of subjects with an indication for statins are actually on statin therapy. Globally, up to 12.6% of annual cardiovascular deaths could be avoided if all eligible patients were on statins.29 A similar gap can be estimated in our cohort, where approximately 19% of recipients were exposed to statins, while circa 30% would have had an indication for statins based on comorbidities. Under‐prescription of statins might partly reflect that attending physicians feel uncomfortable making therapeutic decisions in LT recipients.
On the other hand, the known interaction between cyclosporine and statins, with a potential increase in statin bioavailability, may have led to the use of low‐potency hydrophilic statins (not metabolised through cytochrome P‐450 3A4) or to under‐use and under‐dosing of statins.30 However, most LT recipients at present are under immunosuppressive regimens based on tacrolimus, whose interaction with statins is almost negligible.30 There is thus no contraindication to the use of statins post‐LT, although slowly increasing the dose and monitoring immunosuppression trough levels is advisable.

Even if post‐LT cancer outcomes were not the primary objective of our study,26, 27 our findings confirmed a strong association between statin exposure and a reduced incidence of development/recurrence of extrahepatic tumours. However, in contrast with other recent studies, our analysis failed to demonstrate a significant association with decreased HCC recurrence.26, 31 This might be partly due to the short post‐LT observation period (3 years) not capturing all HCC recurrences,32 as well as to the coding of statin use as a concurrent covariate, which might not be appropriate for long‐term outcomes.

Furthermore, even within a limited number of events, we observed a trend for a protective effect of donor statin exposure against post‐LT biliary‐vascular complications, analogous to that observed for recipient statin exposure.
From a mechanistic point of view, these data are in line with preclinical data in experimental models of ischemia/reperfusion injury and cold liver preservation,16, 17 and with the preliminary results of a recent randomised controlled trial.33 Interestingly, the biological effect of statins on vascular function and liver grafts is quite similar to that exerted by pulsatile flow perfusion,17, 34 which successfully minimises biliary‐vascular complications after LT.35

This study has some limitations, the main one being the applicability of the results of cohort studies to clinical practice, which usually require confirmation in ad hoc prospective studies. To minimise this limitation, we adopted a statistical strategy that is more process‐oriented than conventional regression analysis and that provides higher generalizability to real‐life scenarios. The total number of patients exposed to statins increased from 7% at the time of LT to 19% thereafter. It appears possible that clinicians were more confident prescribing statins after LT to stable patients than to patients suffering from surgical complications or relevant liver dysfunction. However, we minimised the selection bias inherent to cohort studies by including all adult patients receiving LT in the period of this nationwide study. Another potential limitation is that the small number of events early after liver transplantation and of donor statin users may have precluded observing an association between statin exposure and biliary‐vascular complications very early after liver transplantation that might have been shown in a larger study. Unfortunately, fully modelling the time‐varying nature of the different effects of donor and recipient statin exposure would have required a significantly larger number of events than were available in our study.

In summary, our results suggest that the use of statins in LT recipients confers a survival advantage.
Our data also confirm that statins are underused in LT recipients qualifying for statins, and further add to the body of evidence in favour of using statins in this population when there is a clinical indication. Considering that statins are cheap, safe and widely used, and that current strategies for reducing graft loss and improving survival after LT are costly and still limited, our data suggest that statins may represent a new, effective approach with relevant clinical impact in the post‐LT setting.

AUTHOR CONTRIBUTIONS: Chiara Becchetti: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); writing – original draft (lead); writing – review and editing (equal). Melisa Dirchwolf: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); writing – original draft (equal); writing – review and editing (equal). Jonas Schropp: Data curation (equal); formal analysis (equal); validation (equal). Giulia Magini: Data curation (equal); writing – review and editing (equal). Beat Mullhaupt: Data curation (equal); writing – review and editing (equal). Immer Franz: Data curation (equal); writing – review and editing (equal). Jean‐Francois Dufour: Data curation (equal); writing – review and editing (equal). Vanessa Banz: Funding acquisition (equal); writing – review and editing (equal). Annalisa Berzigotti: Conceptualization (equal); supervision (equal); writing – review and editing (equal). Jaime Bosch: Conceptualization (equal); funding acquisition (equal); methodology (equal); project administration (equal); supervision (equal); writing – review and editing (equal).

FUNDING: C.B. and J.B. were supported by the Stiftung für Leberkrankheiten Bern. The study was supported by the Swiss Transplant Cohort Study (FUP 149).

CONFLICT OF INTEREST: None.

SUPPORTING INFORMATION: Data S1.
Keywords: cardiovascular disease, dyslipidemia, solid organ transplantation, survival
INTRODUCTION: Liver transplantation (LT) is considered the ultimate curative option for end‐stage liver disease and for non‐resectable hepatocellular carcinoma (HCC). Although survival rates after LT have progressively improved over the years, the first year post‐LT remains the critical period, accounting for 46% of total deaths and 67% of re‐LT.1 Initial outcomes are mainly determined by surgical or peri‐operative problems leading to primary non‐function or delayed graft function and ultimately to re‐LT. In contrast, long‐term outcomes are mainly affected by de novo or recurrent malignant tumours and cardiovascular disease.2

Statins are inhibitors of 3‐hydroxy‐3‐methyl‐glutaryl‐coenzyme A reductase (HMG‐CoA reductase) widely used in the treatment of dyslipidemia and the prophylaxis of cardiovascular events.2, 3 It has long been recognised that part of the benefits attributed to statins in cardiovascular disease are due to their pleiotropic effects, which influence vascular remodelling and reverse endothelial dysfunction, among others.4 These pleiotropic effects may explain the beneficial effects of statins in other conditions, such as sepsis5 and cancer.6, 7 Recent studies reported beneficial effects of statins on chronic liver diseases, both in pre‐clinical models8, 9, 10 and in clinical studies.11, 12, 13 These range from improvement of hepatic sinusoidal endothelial function, leading to a reduction in intrahepatic vascular tone and portal pressure, to decreased fibrogenesis that may translate into preventing disease progression and facilitating its regression.14 Statins have also been shown to protect from lipopolysaccharide‐induced acute‐on‐chronic liver failure (ACLF) in cirrhotic rats15 and to prevent liver function impairment after hypovolemic shock.16, 17 Interestingly, in preclinical models statins protect against ischemia/reperfusion injury in young and aged animals18 and prolong liver graft preservation both in normal liver grafts and in grafts with steatosis, considered at high risk of ischemia/reperfusion injury.19 Furthermore, epidemiological studies in large cohorts of patients with chronic liver disease suggest a protective effect of statins, reducing the rate of progression to cirrhosis, liver decompensation, development of HCC and death.20, 21

Classical indications for statins, including the treatment and prevention of cardiovascular diseases, may be relevant for the long‐term outcome after LT.22 Moreover, the effects of statins protecting from ischemia/reperfusion injury could be beneficial in the early phase after LT, by reducing the incidence and severity of biliary and vascular complications. Therefore, we hypothesize that statin use in LT recipients may favourably influence the transition to adverse outcomes, including re‐LT, severe and recurrent biliary‐vascular complications, and death.

METHODS: Study design

We conducted a cohort study in a nationwide database, including all adult patients who underwent LT from May 2008 to December 2019 registered in the Swiss Transplant Cohort Study (STCS). The STCS is a prospective open cohort study including data from transplanted patients starting in 2008 from the three Swiss liver transplant centres: Bern, Zurich and Geneva. Data from the liver donors were obtained from the Swiss Organ Allocation System (SOAS) records, with permission from Swisstransplant. The study was conducted in accordance with the Declaration of Helsinki and Good Clinical Practice. All the patients included in the present study had given their informed consent to be included in the STCS. In addition, the Bern Cantonal Ethics Committee (KEK‐ID 2020–02122) approved the present study, as well as the STCS (FUP 149).
Study population

We included all consecutive adult LT candidates who received a liver from a deceased donor and agreed to participate in the STCS. We collected data regarding liver disease aetiology, severity, comorbidities, use of statins and duration of exposure to these drugs. Regarding donor characteristics, we recorded demographic data, comorbidities, type of deceased donor, cause of death, biochemical characteristics and use of statins.

Study outcomes and definitions

We studied LT outcome according to statin exposure in the first 3 years after LT. The primary outcome was post-LT mortality, while re-LT and/or development of biliary-vascular complications were secondary outcomes.
Cancer and major cardiovascular events were also considered secondary endpoints, assessed in an ancillary analysis aimed primarily at better controlling for possible confounding factors. Patients who dropped out were considered alive until last follow-up and then censored. Re-LT was defined as a new LT due to any type of graft loss. Biliary-vascular complications included biliary leaks, stones and strictures, and vascular thrombosis, vascular stenosis and ischemic cholangiopathy. 23 Because not all vascular thrombotic complications necessarily lead to a biliary event, the two types of complications were considered separately. Systemic cardiovascular complications included occurrence of a major cardiovascular event (documented myocardial infarction or stroke), thromboembolic event (pulmonary embolism) and peripheral arterial ischemic disease. Concerning cancers other than HCC, we considered all solid and haematological cancers, excluding non-melanoma skin cancers. Statin exposure was assessed in both donors and recipients. For donors, statin use was assessed as a dichotomous variable at the time of donation, whereas for recipients, statin use was defined as concurrent use of statins at a given time t during follow-up. To reduce immortal time bias, exposure was counted as person-time between prescription and end of follow-up. 24 LT was considered time 0. The observation time after LT was 3 years.
Statistical analysis

For descriptive aims, we report the general characteristics of the study cohort across the study period. Quantitative variables were expressed as mean and standard deviation or median and interquartile range (25%-75%), and categorical variables as percentages. As a supplementary analysis, we divided the recipients into statin non-user and statin user groups over the study period and compared them using Welch's t-test for continuous variables and Wilcoxon's rank-sum test for rank data, respectively. Categorical variables were compared using the chi-squared test, where appropriate. Because only three patients had missing covariate data, we adopted a complete-cases analysis for all models.
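The person-time handling of statin exposure described above (exposure counted only from prescription onward, so that pre-prescription survival is not credited to the statin group) can be sketched as a counting-process interval split. This is an illustrative sketch; the function name and interval layout are assumptions, not the study's actual code:

```python
def split_exposure(follow_up_days, statin_start_day):
    """Split one recipient's follow-up into (start, stop, exposed) intervals.

    Person-time before the first statin prescription is counted as
    unexposed, avoiding immortal time bias: early deaths are not
    wrongly attributed to the statin-exposed group.
    """
    if statin_start_day is None or statin_start_day >= follow_up_days:
        return [(0, follow_up_days, 0)]   # never exposed during follow-up
    if statin_start_day == 0:
        return [(0, follow_up_days, 1)]   # on statins from transplant (time 0)
    return [(0, statin_start_day, 0),     # unexposed person-time
            (statin_start_day, follow_up_days, 1)]  # exposed person-time

# Example: statins started on day 200 of the 3-year (1095-day) window
print(split_exposure(1095, 200))  # [(0, 200, 0), (200, 1095, 1)]
```

Each interval then enters the Cox model as a separate row, with exposure as a time-dependent covariate.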
We conducted the statistical analysis using a multistate modelling approach in order to examine the effect of statin exposure (either as a recipient currently under statin treatment or as a recipient of a donor liver exposed to statins) on the transition hazards between transplant, biliary-vascular complications and death, while allowing for recurring events such as re-transplant and recurrent biliary-vascular complications (Figure 1). Death was considered a terminal outcome, while re-LT and biliary-vascular complications were considered the main surrogates of graft loss and graft dysfunction, respectively. The multistate model therefore considered three main transition states for the time-dependent model: death, re-LT and biliary-vascular complications. Additional transitions are possible between each state, including recurrent events; the only exception is death, which is an absorbing ("sinking") state. Follow-up for every patient started at the time of their first LT. The model was fitted using a Cox proportional hazards model (see the Supplementary materials for details).

Figure 1: Graphical representation of the possible transitions. The numbers in the arrows indicate the observed transitions. The number of transitions from biliary-vascular complications to re-LT and death is given globally, including those coming from first or recurrent biliary-vascular complications.

Administrative censoring was applied at 3 years because the primary interest of the study was the time immediately following LT. Effects might differ over longer follow-up, since long-term estimates would increasingly be based on patients transplanted at earlier dates and might therefore not generalise well to current standards. Furthermore, we did not include HCC or other cancers in the model because of indications that statins could influence the risk of developing HCC 25, 26 as well as a potential protective role of statins in other cancers.
27, 28 Including these variables would therefore constitute post-treatment conditioning and might bias the estimates. However, we additionally explored this possibility using a Cox regression model with HCC-free survival and non-HCC cancer-free survival as outcomes. All tests were two-sided. Significance was accepted at p < 0.05. R version 4.1.0 (R Development Core Team, 2021) was used for statistical calculations. The survival package (Therneau, T. M. [2020]. A Package for Survival Analysis in R. https://CRAN.R-project.org/package=survival) was used to estimate the model.
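The transition structure of the multistate model can be sketched as a small state machine. The state labels below are hypothetical shorthand (the actual model was fitted with the R survival package, not this code); the structure follows the description above: recurrent biliary-vascular complications and re-LT are allowed, and death is the only absorbing state:

```python
# Allowed transitions in the multistate model: death is the only
# absorbing ("sinking") state; biliary-vascular complications and
# re-LT can recur.
ALLOWED = {
    "transplant":      {"bv_complication", "re_lt", "death"},
    "bv_complication": {"bv_complication", "re_lt", "death"},
    "re_lt":           {"bv_complication", "re_lt", "death"},
    "death":           set(),
}

def is_valid_path(path):
    """Check that an observed state sequence only uses allowed transitions."""
    return all(nxt in ALLOWED[cur] for cur, nxt in zip(path, path[1:]))

print(is_valid_path(["transplant", "bv_complication", "bv_complication", "death"]))  # True
print(is_valid_path(["transplant", "death", "re_lt"]))  # False: death is absorbing
```

Each observed transition contributes a row to the Cox model for the corresponding transition hazard.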
RESULTS

General characteristics of the study population

Overall, 998 adult LT recipients were included in the study, with a total of 1038 grafts transplanted. Most recipients were male (696 patients, 70%), with a mean age of 55 ± 11 years. Hepatocellular carcinoma (HCC) was present in 452 (45%) of the patients. The median calculated MELD score at LT was 14 [IQR 8-24]. Demographic and clinical information regarding LT and comorbidities is summarised in Table 1. Seventy-two (7%) LT recipients were on statin therapy at the time of LT. The proportion of patients exposed to statins increased thereafter up to 19%, with statin use amounting to 13.52 per 100 patient-days. New-onset cardiovascular complications in patients without previous cardiovascular events, new types of cardiovascular events, newly diagnosed T2DM and arterial hypertension after LT occurred in 150 (15%), 257 (27%), 96 (10%) and 155 (16%) patients, respectively, suggesting an underuse of statins in this setting, as previously observed. 22 Thirty-five (3.5%) patients had recurrence of HCC, whereas de novo/recurrent extrahepatic cancers occurred in 111 (11.1%).

Table 1: General baseline pre-transplant characteristics of the LT recipients, stratified into statin non-user and statin user groups throughout the study period. Note: Exposure and no exposure to statins relate to the whole follow-up period of 3 years.
Significance was determined by the chi-squared or Fisher's exact test for categorical variables, Welch's t-test for variables summarised as mean ± SD, and the Wilcoxon test where median [IQR] is reported. Abbreviations: BMI, body mass index; CAD, coronary artery disease; CKD, chronic kidney disease; GI, gastrointestinal; HBV, hepatitis B virus; HCC, hepatocellular carcinoma; HCV, hepatitis C virus; HTN, arterial hypertension; LT, liver transplantation; MELD, Model for End-Stage Liver Disease; NAFLD, non-alcoholic fatty liver disease; NASH, non-alcoholic steatohepatitis; PBC, primary biliary cholangitis; PSC, primary sclerosing cholangitis; T2DM, diabetes mellitus type 2. *p ≤ 0.05; **p ≤ 0.01.

Recipients on statins were older (59 ± 7 vs. 53 ± 12 years), more frequently male (84% vs. 66%), had higher BMI (27.8 ± 4.7 vs. 26.2 ± 5.1) and more frequently had cardiovascular disease (58% vs. 41%) (all p ≤ 0.001). Additionally, statin users were more frequently transplanted for alcohol-related liver disease (44% vs. 32% in those not exposed to statins, p = 0.002), non-alcoholic steatohepatitis (NASH) (14% vs. 9%, p = 0.04) and HCC (61% vs. 44%) (Table 1). The main characteristics of the study population 3 years after LT are reported in Table 2.

Table 2: Events observed in LT recipients within 3 years after transplant. Abbreviations: HCC, hepatocellular carcinoma; GI, gastrointestinal; LT, liver transplantation.
Donors' general characteristics

A total of 1038 deceased donors were included in the study. Most donors were male (608, 57%), with a mean age of 56.18 ± 16.86 years. DCD donors accounted for 122 (12%) livers, alcohol use was reported in almost half of the donors (482, 46%) and 127 (12%) were obese (Table 3). One hundred forty-three donors (14%) were on statins at the time of donation. Donors receiving statins were older (68 vs. 54 years, p < 0.001) and had higher BMI (26.9 vs. 25.3, p < 0.001). Donors with T2DM were 3.39 times more likely to receive statins (p < 0.001).

Table 3: General characteristics of the donor population. Note: Donor statin use was only recorded in 592 cases. Abbreviations: BMI, body mass index; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.

Mortality, re-transplant and vascular and biliary complications

During the follow-up, 141 (14%) patients died.
Median time to death from first LT was 241 days (IQR, 66-548 days). Causes of death included infections (n = 42, 30%), liver failure (n = 34, 24%), terminal cancer (n = 27, 19%), haemorrhage (n = 15, 11%) and cardiovascular disease (n = 10, 7%). Among patients who died, 16 (11%) had been exposed to statins. Causes of death did not differ significantly by statin exposure; however, no recipient on statins died of a cardiovascular event, compared with 10 cardiovascular deaths among patients who never used statins (Figure 3). There were 40 re-LT in 38 patients (4%) and 363 biliary-vascular complications in 287 patients (29%).

Multistate model approach

In the multistate model (Table 4), any patient can move from one state to another, with death being the only terminal state; the transitions observed in the present study are summarised in Figure 1. Recipient statin use was associated with lower mortality after LT (HR = 0.35; 95% CI = 0.12-0.99; p = 0.047) (Figure 2). Statin use was also associated with a lower hazard of re-LT (p = 0.004), although the HR could not be reliably estimated because no statin users underwent re-LT.
Recipient statin use was not associated with the occurrence of biliary-vascular complications (HR = 1.25; 95% CI = 0.85-1.83; p = 0.266). Regarding transitions from B-V complications to other states, statin use was significantly associated with a reduced likelihood of death (HR = 0.10; 95% CI = 0.01-0.81; p = 0.031) and of recurrent biliary-vascular complications (HR = 0.43; 95% CI = 0.20-0.93; p = 0.033) (Figure 2). There was no significant effect of statin use on the likelihood of re-LT after biliary-vascular complications (HR = 1.12; 95% CI = 0.32-3.90; p = 0.858).

Table 4: Multistate model considering survival in the different transitions according to Cox regression analysis. Abbreviations: BMI, body mass index; B-V, biliary-vascular; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2.

Figure 2: Cumulative transition hazards with significant effects of recipient statin use: transplant to transplant; transplant to death; biliary-vascular complication to biliary-vascular complication; and biliary-vascular complication to death.

Figure 3: Different causes of death among recipients exposed or not exposed to statins.

Considering donor statin use, there was no significant effect on any of the six transitions. Donor statin use showed a trend towards an association with the occurrence of biliary-vascular complications (HR = 0.47; 95% CI = 0.22-1.02), but this did not reach statistical significance (p = 0.056).
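As a reading aid for the hazard ratios above: a Cox HR and its 95% CI are obtained by exponentiating the log-hazard coefficient ± 1.96 standard errors. The sketch below back-solves an illustrative SE from the reported mortality interval (0.12-0.99); this SE is an assumption for demonstration only, not a value taken from the study data:

```python
import math

def hr_with_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox log-hazard coefficient and its SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative SE back-solved from the reported CI bounds 0.12 and 0.99
se = (math.log(0.99) - math.log(0.12)) / (2 * 1.96)
hr, lo, hi = hr_with_ci(math.log(0.35), se)
print(round(hr, 2), round(lo, 2), round(hi, 2))
```

Because the reported point estimate (0.35) is not exactly the geometric midpoint of the reported bounds, the recovered upper bound differs slightly from 0.99; the purpose here is only to show the relation between coefficient, SE and CI.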
HCC recurrence and de novo/recurrence of cancer other than HCC

In order to better understand the impact of statins on survival, we built Cox models considering HCC recurrence and de novo/recurrence of non-liver cancer, with death as a competing risk. While there was no association between HCC recurrence and statin use (Table S1), statin use was associated with a significant reduction in the risk of developing cancers other than HCC (HR = 0.48; 95% CI = 0.29-0.80; p = 0.005; Table S2).
Significances were determined by chi‐squared or Fisher's exact test for categorical variables, Welch's T‐test for variables summarised by mean ± sd, and the Wilcoxon test where median [IQR] is reported. Abbreviations: BMI, body mass index; CAD, coronary artery disease; CKD, chronic kidney disease; GI, gastrointestinal; HBV, hepatitis B virus; HCC, hepatocellular carcinoma; HCV, hepatitis C virus; HTN, arterial hypertension; LT, liver transplantation; MELD, Model for End‐Stage Liver Disease; NAFLD, non‐alcoholic fatty liver disease; NASH, non‐alcoholic steatohepatitis; PBC, primary biliary cholangitis; PSC, primary sclerosing cholangitis; T2DM, diabetes mellitus type 2. p ≤ 0.05; p ≤ 0.01. Recipients on statins were older (59 ± 7 vs. 53 ± 12 years old), more frequently male (84% vs. 66%), had higher BMI (27.8 ± 4.7 vs. 26.2 ± 5.1) and more frequently had cardiovascular disease (58% vs. 41%) (all p ≤ 0.001). Additionally, statin recipients were more frequently transplanted for alcohol‐related liver disease (44% vs. 32% in those not exposed to statins, p = 0.002), non‐alcoholic steatohepatitis (NASH) (14% vs. 9%, p = 0.04) and HCC (61% vs. 44%) (Table 1). The main characteristics of the study population 3 years after LT are reported in Table 2. Events observed in LT recipients within 3 years after transplant Abbreviations: HCC, hepatocellular carcinoma; GI, gastrointestinal; LT, liver transplantation. Donors' general characteristics: A total of 1038 deceased donors were included in the study. Most donors were male (608, 57%); mean age was 56.18 ± 16.86 years. DCD accounted for 122 (12%) livers, alcohol use was reported in almost half of the donors (482, 46%) and 127 (12%) were obese (Table 3). One hundred forty‐three donors (14%) were on statins at the time of donation. Donors receiving statins were older (68 vs. 54 years, p < 0.001) and had higher BMI (26.9 vs. 25.3, p < 0.001). Donors with T2DM were 3.39 times more likely to receive statins (p < 0.001). 
General characteristics of the donor population Note: Donor statin use was only recorded in 592 cases. Abbreviations: BMI, body mass index; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2. Mortality, re‐transplant and biliary‐vascular complications: During the follow‐up, 141 (14%) patients died. Median time to death from first LT was 241 days (IQR, 66–548 days). Causes of death included infections (n = 42, 30%), liver failure (n = 34, 24%), terminal cancer (n = 27, 19%), haemorrhage (n = 15, 11%) and cardiovascular disease (n = 10, 7%). Among patients who died, 16 (11%) had been exposed to statins. There were no statistically significant differences in causes of death according to statin exposure; however, none of the recipients on statins died from a cardiovascular event, compared with 10 cardiovascular deaths in patients who never used statins (Figure 3). There were 40 re‐LT in 38 patients (4%) and 363 biliary‐vascular complications in 287 patients (29%). Multistate model approach: In the multistate model (Table 4), any patient can migrate from one state to another, with the only terminal state being death; the migrations observed in the present study are summarised in Figure 1. Recipient statin use was associated with lower mortality after LT (HR = 0.35; 95% CI = 0.12–0.99; p = 0.047) (Figure 2). Statin use was also associated with a lower hazard of re‐LT (p = 0.004), where the HR could not be reliably estimated because no statin users had re‐LT. Recipient statin use was not associated with the occurrence of biliary‐vascular complications (HR = 1.25; 95% CI = 0.85–1.83; p = 0.266). Regarding the transition from B‐V complications to other states, statin use was significantly associated with a reduced likelihood of death (HR = 0.10; 95% CI = 0.01–0.81; p = 0.031) and of recurrent biliary‐vascular complications (HR = 0.43; 95% CI = 0.20–0.93; p = 0.033) (Figure 2). 
There was no significant effect of statin use on the likelihood of re‐LT after biliary‐vascular complications (HR = 1.12; 95% CI = 0.32–3.90, p = 0.858). Multistate model considering survival in the different transitions according to Cox regression analysis Abbreviations: BMI, body mass index; B‐V, biliary‐vascular; DCD, donor after circulatory death; LT, liver transplantation; T2DM, diabetes mellitus type 2. Cumulative transition hazards with significant effects of recipient statin use: transplant to transplant; transplant to death; biliary‐vascular complication to biliary‐vascular complication; and biliary‐vascular complication to death. Different causes of death among recipients exposed or not exposed to statins. Considering donor statin use, there was no significant effect on any of the six transitions. Donor statin use showed a trend towards a protective association with the occurrence of biliary‐vascular complications (HR = 0.47; 95% CI = 0.22–1.02), but this did not reach statistical significance (p = 0.056). HCC recurrence and de novo/recurrence of cancer other than HCC: In order to better understand the impact of statins on survival, we built Cox models considering HCC recurrence and de novo/recurrence of non‐liver cancer, with death as a competing risk. While there was no association between HCC recurrence and statin use (Table S1), statin use was associated with a significant reduction of the risk of developing cancers other than HCC (HR = 0.48; 95% CI = 0.29–0.80; p = 0.005; Table S2). DISCUSSION: The present study aimed at evaluating the effect of statin exposure in both recipients and donors on LT outcomes. Specifically, we wanted to investigate whether statin exposure could influence adverse outcomes, including death, re‐LT, severe biliary‐vascular complications, and recurrent biliary‐vascular complications. 
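As a rough illustration of the multistate structure analysed above (six possible transitions, with death as the only absorbing state), the permitted transitions can be encoded and used to sanity-check observed patient trajectories. This is only a sketch: the state names are invented labels, and the actual analysis fitted Cox regressions per transition rather than anything like this code.

```python
# Illustrative encoding (not the authors' code) of the six-transition
# multistate structure: patients start in the "transplant" state, may
# develop biliary-vascular (B-V) complications (possibly repeatedly),
# may be re-transplanted (returning to "transplant"), and death is the
# only terminal (absorbing) state.

ALLOWED_TRANSITIONS = {
    ("transplant", "transplant"),            # re-LT without prior B-V complication
    ("transplant", "bv_complication"),       # first B-V complication
    ("transplant", "death"),
    ("bv_complication", "bv_complication"),  # recurrent B-V complication
    ("bv_complication", "transplant"),       # re-LT after B-V complication
    ("bv_complication", "death"),
}

def validate_trajectory(states):
    """Check that an observed state sequence uses only allowed
    transitions and never leaves the absorbing 'death' state."""
    for prev, nxt in zip(states, states[1:]):
        if prev == "death":
            return False  # death is terminal, nothing may follow it
        if (prev, nxt) not in ALLOWED_TRANSITIONS:
            return False
    return True
```

For example, `validate_trajectory(["transplant", "bv_complication", "bv_complication", "death"])` describes a patient with a recurrent complication who then died, which is a valid path through the model.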
The most significant result of our study is that statin use by LT recipients is associated with improved survival, both in patients with and without biliary‐vascular complications. The study design was set up bearing in mind the main limitations affecting epidemiological studies, particularly in the LT setting. Therefore, we constructed a multistate model for the main analysis of the study, taking into account that each subject can transition to other state(s) in the course of the observation period and that this can happen several times (Figure 1). This approach ensures greater internal consistency than conventional approaches 22 and weights the accumulated risk for each transition. Statin use in the recipients was associated with a lower risk of transition from LT to death, and from biliary‐vascular complications to death. Recipients with concurrent statin use were never subject to re‐LT without a prior biliary‐vascular complication in our cohort, which equates to a significantly lower log‐likelihood (p = 0.004). In addition, recipient exposure to statins was associated with a lower hazard of recurrent biliary‐vascular complications. This finding is particularly relevant considering that patients experiencing a first biliary‐vascular complication are more likely to develop a second one. To our knowledge, this study is the first specifically designed to examine the pleiotropic protective effects of statins in LT. Recently, a study from the NailNASH Consortium analysed the outcome of 938 patients receiving LT for NASH cirrhosis. They concluded that statin use after LT favourably impacted survival (HR = 0.38; 95% CI = 0.19–0.75; p = 0.005), which is in accordance with our study. However, the protective effect of statins was not the main aim of that study, and the results relate to a population known for a higher risk of cardiovascular disease and likelihood of statin use compared with other LT indications. 25 A previous study by Patel et al. 
22 examined the impact of coronary artery disease (CAD) and dyslipidemia on clinical outcomes after LT, and reported that in this context, statin exposure was associated with improved survival. Our findings in the present cohort are in line with these previous data: among statin‐exposed patients who died after LT, none died from cardiovascular disease. Additionally, they showed a considerable underuse of statins in LT recipients, even when clinically indicated, without observing side effects that could justify this underuse. 22 Statin underuse is also observed in the general population: only around 25% of subjects with an indication for statins are actually on statin therapy. Globally, up to 12.6% of annual cardiovascular deaths could be avoided if all eligible patients were on statins. 29 This could be estimated also in our cohort, where approximately 19% of recipients were exposed to statins, while circa 30% would have had an indication for statins based on comorbidities. Under‐prescription of statins might partly reflect that attending physicians feel uncomfortable making therapeutic decisions in LT recipients. On the other hand, the known interaction between cyclosporine and statins, with a potential increase in statin bioavailability, may have led to the use of low‐potency hydrophilic statins (not metabolised through cytochrome P‐450 3A4) or to under‐use and under‐dosing of statins. 30 However, most LT recipients at present are under immunosuppressive regimens based on tacrolimus, whose interaction with statins is almost negligible. 30 There is thus no contraindication to the use of statins post‐LT, although slowly increasing the dose and monitoring immunosuppression trough levels is advisable. Even if post‐LT cancer outcomes were not the primary objective of our study, 26 , 27 our findings confirmed a strong association between statin exposure and a reduced incidence of development/recurrence of extrahepatic tumours. 
However, in contrast with other recent studies, our analysis failed to demonstrate a significant association with decreased HCC recurrence. 26 , 31 This might be partly due to the short post‐LT observation period (3 years), which may not capture all HCC recurrences, 32 as well as to the coding of statin use as a concurrent covariate, which might not be appropriate for long‐term outcomes. On the other hand, even with a limited number of events, we observed a trend for a protective effect of donor statin exposure against post‐LT biliary‐vascular complications, analogous to that observed for recipient statin exposure. From a mechanistic point of view, this finding is in line with preclinical data in experimental models of ischemia/reperfusion injury and cold liver preservation, 16 , 17 and with the preliminary results of a recent randomised controlled trial. 33 Interestingly, the biologic effect of statins on vascular function and liver grafts is quite similar to that exerted by pulsatile flow perfusion, 17 , 34 which successfully minimises biliary‐vascular complications after LT. 35 This study has some limitations, the main one being that the results of cohort studies usually require confirmation in ad hoc prospective studies before they can be applied to clinical practice. To minimise this limitation, we adopted a statistical strategy that is more process‐oriented than conventional regression analysis and that provides higher generalizability to real‐life scenarios. The total number of patients exposed to statins increased from 7% at the time of LT to 19% thereafter. It appears possible that clinicians were more confident prescribing statins after LT to stable patients than to patients suffering from surgical complications or relevant liver dysfunction. However, we minimised the selection bias inherent to cohort studies by including all adult patients receiving LT in the period of this nationwide study. 
Another potential limitation is that the small number of events early after liver transplantation, and of donor statin users, may have precluded detecting an association between statin exposure and biliary‐vascular complications very early after liver transplantation that might have been shown in a larger study. Unfortunately, fully modelling the time‐varying nature of the different effects of donor and recipient statin exposure would have required a significantly larger number of events than were available in our study. In summary, our results suggest that the use of statins in LT recipients confers a survival advantage. Our data also confirm that statins are underused in LT recipients qualifying for statins, and further add to the body of evidence in favour of using statins in this population when there is a clinical indication. Considering that statins are cheap, safe and widely used, and that current strategies for reducing graft loss and improving survival after LT are costly and still limited, our data suggest that statins may represent a new, effective approach with relevant clinical impact in the post‐LT setting. AUTHOR CONTRIBUTIONS: Chiara Becchetti: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); writing – original draft (lead); writing – review and editing (equal). Melisa Dirchwolf: Conceptualization (equal); data curation (equal); investigation (equal); methodology (equal); writing – original draft (equal); writing – review and editing (equal). Jonas Schropp: Data curation (equal); formal analysis (equal); validation (equal). Giulia Magini: Data curation (equal); writing – review and editing (equal). Beat Mullhaupt: Data curation (equal); writing – review and editing (equal). Immer Franz: Data curation (equal); writing – review and editing (equal). Jean‐Francois Dufour: Data curation (equal); writing – review and editing (equal). 
Vanessa Banz: Funding acquisition (equal); writing – review and editing (equal). Annalisa Berzigotti: Conceptualization (equal); supervision (equal); writing – review and editing (equal). Jaime Bosch: Conceptualization (equal); funding acquisition (equal); methodology (equal); project administration (equal); supervision (equal); writing – review and editing (equal). FUNDING INFORMATION: C.B. and J.B. were supported by the Stiftung für Leberkrankheiten Bern. The study was supported by the Swiss Transplant Cohort Study (FUP 149). CONFLICT OF INTEREST: None. Supporting information: Data S1 Supporting Information. Click here for additional data file.
Background: There is limited information on the effects of statins on the outcomes of liver transplantation (LT), regarding either their use by LT recipients or donors. Methods: We included adult LT recipients with deceased donors in a nationwide prospective database study. Using a multistate modelling approach, we examined the effect of statins on the transition hazard between LT, biliary and vascular complications and death, allowing for recurring events. The observation time was 3 years. Results: We included 998 (696 male, 70%, mean age 54.46 ± 11.14 years) LT recipients. 14% of donors and 19% of recipients were exposed to statins during the study period. During follow-up, 141 patients died; there were 40 re-LT and 363 complications, with 66 patients having two or more complications. Treatment with statins in the recipient was modelled as a concurrent covariate and associated with lower mortality after LT (HR = 0.35; 95% CI 0.12-0.98; p = 0.047), as well as a significant reduction of re-LT (p = 0.004). However, it was not associated with lower incidence of complications (HR = 1.25; 95% CI = 0.85-1.83; p = 0.266). Moreover, in patients developing complications, statin use was significantly associated with decreased mortality (HR = 0.10; 95% CI = 0.01-0.81; p = 0.030), and reduced recurrence of complications (HR = 0.43; 95% CI = 0.20-0.93; p = 0.032). Conclusions: Statin use by LT recipients may confer a survival advantage. Statin administration should be encouraged in LT recipients when clinically indicated.
null
null
9,770
351
[ 509, 151, 73, 276, 533, 590, 194, 179, 414, 95, 254, 27 ]
17
[ "lt", "statin", "statins", "vascular", "biliary", "study", "use", "complications", "biliary vascular", "patients" ]
[ "effect statins vascular", "cardiovascular disease statins", "transplantation donor statin", "statins survival", "statins chronic liver" ]
null
null
[CONTENT] cardiovascular disease | dyslipidemia | solid organ transplantation | survival [SUMMARY]
[CONTENT] cardiovascular disease | dyslipidemia | solid organ transplantation | survival [SUMMARY]
[CONTENT] cardiovascular disease | dyslipidemia | solid organ transplantation | survival [SUMMARY]
null
[CONTENT] cardiovascular disease | dyslipidemia | solid organ transplantation | survival [SUMMARY]
null
[CONTENT] Adult | Aged | Graft Survival | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Liver Transplantation | Male | Middle Aged | Retrospective Studies | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Graft Survival | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Liver Transplantation | Male | Middle Aged | Retrospective Studies | Risk Factors | Treatment Outcome [SUMMARY]
[CONTENT] Adult | Aged | Graft Survival | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Liver Transplantation | Male | Middle Aged | Retrospective Studies | Risk Factors | Treatment Outcome [SUMMARY]
null
[CONTENT] Adult | Aged | Graft Survival | Humans | Hydroxymethylglutaryl-CoA Reductase Inhibitors | Liver Transplantation | Male | Middle Aged | Retrospective Studies | Risk Factors | Treatment Outcome [SUMMARY]
null
[CONTENT] effect statins vascular | cardiovascular disease statins | transplantation donor statin | statins survival | statins chronic liver [SUMMARY]
[CONTENT] effect statins vascular | cardiovascular disease statins | transplantation donor statin | statins survival | statins chronic liver [SUMMARY]
[CONTENT] effect statins vascular | cardiovascular disease statins | transplantation donor statin | statins survival | statins chronic liver [SUMMARY]
null
[CONTENT] effect statins vascular | cardiovascular disease statins | transplantation donor statin | statins survival | statins chronic liver [SUMMARY]
null
[CONTENT] lt | statin | statins | vascular | biliary | study | use | complications | biliary vascular | patients [SUMMARY]
[CONTENT] lt | statin | statins | vascular | biliary | study | use | complications | biliary vascular | patients [SUMMARY]
[CONTENT] lt | statin | statins | vascular | biliary | study | use | complications | biliary vascular | patients [SUMMARY]
null
[CONTENT] lt | statin | statins | vascular | biliary | study | use | complications | biliary vascular | patients [SUMMARY]
null
[CONTENT] statins | liver | lt | function | effects | chronic liver | beneficial | disease | ischemia reperfusion | ischemia [SUMMARY]
[CONTENT] lt | vascular | study | considered | complications | biliary | time | biliary vascular | vascular complications | biliary vascular complications [SUMMARY]
[CONTENT] statin | lt | vs | use | statin use | statins | hr | biliary | death | vascular [SUMMARY]
null
[CONTENT] lt | statins | statin | vascular | study | use | biliary | data | biliary vascular | patients [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| LT ||| 3 years [SUMMARY]
[CONTENT] 998 | 696 | 70% | 54.46 ± | 11.14 years ||| ||| 14% | 19% ||| 141 | 40 | 363 | 66 | two ||| 0.35 | 95% | CI | 0.12-0.98 | 0.047 | 0.004 ||| 1.25 | 95% | CI | 0.85-1.83 | 0.266 ||| statin | 0.10 | 95% | CI | 0.01 | 0.030 | 0.43 | 95% | CI | 0.20-0.93 | 0.032 [SUMMARY]
null
[CONTENT] ||| ||| LT ||| 3 years ||| ||| 998 | 696 | 70% | 54.46 ± | 11.14 years ||| 14% | 19% ||| 141 | 40 | 363 | 66 | two ||| 0.35 | 95% | CI | 0.12-0.98 | 0.047 | 0.004 ||| 1.25 | 95% | CI | 0.85-1.83 | 0.266 ||| statin | 0.10 | 95% | CI | 0.01 | 0.030 | 0.43 | 95% | CI | 0.20-0.93 | 0.032 ||| Statin ||| Statin | LT [SUMMARY]
null
The utility of 18 F-FDG PET/CT for suspected recurrent breast cancer: impact and prognostic stratification.
25608599
The incremental value of 18FDG PET/CT in patients with breast cancer (BC) compared to conventional imaging (CI) in clinical practice is unclear. The aim of this study was to evaluate the management impact and prognostic value of 18 F-FDG PET/CT in this setting.
BACKGROUND
Sixty-three patients who were referred to our institution for suspicion of BC relapse were retrospectively enrolled. All patients had been evaluated with CI and underwent PET/CT. At a median follow-up of 61 months, serial clinical, imaging and pathologic results were obtained to validate diagnostic findings. Overall Survival (OS) was estimated using the Kaplan-Meier method and analyzed using Cox proportional hazards regression models.
METHODS
Forty-two patients had a confirmed relapse, with 37 (88%) positive on CI and 40 (95%) positive on PET/CT. When compared with CI, PET/CT had a higher negative predictive value (86% versus 54%) and positive predictive value (95% versus 70%). The management impact of PET/CT was high (change of treatment modality or intent) in 30 patients (48%) and medium (change in radiation treatment volume or dose fractionation) in 6 patients (9%). Thirty-nine patients (62%) died during follow-up. The PET/CT result was a highly significant predictor of OS (Hazard Ratio [95% Confidence Interval] = 4.7 [2.0-10.9] for PET positive versus PET negative for a systemic recurrence; p = 0.0003). In a Cox multivariate analysis including other prognostic factors, PET/CT findings predicted survival (p = 0.005). In contrast, restaging by CI was not a significant predictor of survival.
RESULTS
Our study supports the value of 18 F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Breast Neoplasms", "Female", "Fluorodeoxyglucose F18", "Humans", "Male", "Middle Aged", "Multimodal Imaging", "Neoplasm Recurrence, Local", "Positron-Emission Tomography", "Prognosis", "Radiopharmaceuticals", "Tomography, X-Ray Computed" ]
4331819
Background
Breast cancer (BC) is the most commonly diagnosed cancer in women, and is the leading cause of death by cancer for women in the western world. Depending on the initial extent of the disease, approximately 30% of patients diagnosed with BC are at risk of developing loco-regional recurrence or secondary tumor dissemination to distant organs [1]. Moreover, the survival of patients who develop an isolated loco-regional recurrence differs from patients who have distant relapse. As a consequence, determination of both the locations and extent of the recurrent disease is essential to guide therapeutic decisions and estimate prognosis. Traditionally, routine evaluation of suspected recurrent BC involves physical examination and a multi-modality Conventional Imaging (CI) approach which may include mammography, CT, MRI, and bone scintigraphy. However, this CI approach is often time-intensive and potential false-negative findings may delay appropriate therapy. Positron Emission Tomography/Computed Tomography (PET/CT) with 18 F-Fluorodeoxyglucose (18 F-FDG) is also often used in this indication, given that 18 F-FDG has affinity for both primary and secondary breast tumors, depending on size and aggressiveness [2-4]. Several authors have suggested that 18 F-FDG PET and PET/CT are more sensitive than CI for detection of recurrent BC [5-15] and can have a significant impact on the therapeutic management [5,7-9,12,16]. However, information concerning the utility of 18 F-FDG PET/CT for long-term prognostic stratification, when compared with CI, is limited. Thus, the objectives of our study were to: [1] assess the incremental diagnostic performance and the impact on therapeutic management of 18 F-FDG PET/CT in a group of patients with a history of BC who had already been restaged by CI for identification of suspected disease relapse; [2] compare the long-term prognostic stratification of CI alone and 18 F-FDG PET/CT.
Methods
Patients: A retrospective analysis was performed on consecutive patients with a history of BC and suspicion of recurrence who were referred for 18 F-FDG PET/CT at our institution from January 2002 to September 2008. BC was not a funded indication of 18 F-FDG PET/CT during this period in Australia; therefore, clinicians usually referred patients with high suspicion of recurrence for PET/CT. The inclusion criteria of the study were as follows: (a) a history of confirmed histologic diagnosis of primary BC treated as per local protocol; (b) CI performed no longer than 4 months prior to PET/CT and where the CI included at least a CT scan of the area of interest; (c) availability of follow-up data for a minimum of 6 months following PET/CT; (d) unequivocal determination of clinical status at the time of the last clinical follow-up. Sixty-three patients (62 women and one man; mean age = 57 years; range = 29-86 years) were included. The median time interval from initial diagnosis to 18 F-FDG PET/CT was 39 months (range 5-431 months). The median time interval between CT and PET/CT was 25 days (first-third quartile: 1-52 days). Indications for PET/CT were: equivocal or suspicious CI findings (n = 28); clinical suspicion of recurrence (n = 21); restaging after completion of therapy (n = 5); routine surveillance (n = 5); and increasing levels of tumor markers (n = 4). All patients provided permission to review medical records at the time of PET/CT imaging according to our institution’s investigational review board guidelines for informed consent (protocol number 09/78). 
PET/CT acquisition and processing: Whole-body PET was acquired sequentially using a dedicated PET/CT system (Discovery LS PET/ 4-slice helical CT or Discovery STE/ 8-slice helical CT, General Electric Medical Systems, Milwaukee, WI) combining a multidetector CT scanner with a dedicated, full-ring PET scanner with bismuth germanate crystals. Patients were instructed to fast except for glucose-free oral hydration for at least 6 hours before injection of 300-400 MBq of 18 F-FDG. PET was performed 60 min following 18 F-FDG injection. Blood glucose levels were measured before the injection of the tracer to ensure levels below 10 mmol/l. Transmission data used for attenuation correction were obtained from a low-dose non-diagnostic CT acquisition (140 kVp and 40-120 mA), without contrast enhancement. 
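The patient-preparation rules just described amount to a small set of numeric checks. The following sketch merely transcribes the stated criteria (6 h fasting, glucose below 10 mmol/L, 300-400 MBq of 18 F-FDG, 60 min uptake); the function name and structure are invented for illustration and are not part of the study's protocol software:

```python
# Minimal sketch of the stated 18F-FDG acquisition criteria
# (thresholds taken from the protocol description above;
# the helper itself is hypothetical).

def ready_to_scan(fasting_hours, glucose_mmol_per_l, dose_mbq, uptake_min):
    """Return True only if all stated acquisition criteria are met."""
    return (
        fasting_hours >= 6          # fasted at least 6 hours
        and glucose_mmol_per_l < 10.0   # blood glucose below 10 mmol/L
        and 300 <= dose_mbq <= 400      # injected activity in range
        and uptake_min >= 60            # emission imaging at 60 min or later
    )
```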
Attenuation-corrected PET images were reconstructed with an iterative reconstruction (ordered-subset expectation maximization algorithm). Orthogonal CT, PET, and fused PET/CT images were displayed simultaneously on a GE Xeleris Workstation. The PET data were also displayed in a rotating maximum-intensity projection. An experienced nuclear medicine physician generated a clinical report after reviewing PET images, low-dose CT images, fused PET/CT images, previous imaging results and clinical information. Standard uptake values were not routinely measured. Once issued, the PET/CT report was not reinterpreted in the light of subsequent clinical information. 
Image interpretation and classification: A total of 188 clinical, imaging and pathological procedures were performed (3 ± 1.4 per patient), including chest CT (n = 59), abdominopelvic CT (n = 44), whole-body bone scan (n = 30), clinical examination (n = 17), pathology (n = 9), abdominal ultrasound (US) (n = 7), MRI (n = 7), mammogram or breast US (n = 6), chest radiography (n = 5), other (n = 5). Written clinical reports of conventional images and PET/CT were reviewed and classified as (a) negative, if imaging tests were negative for disease; (b) equivocal, when abnormal findings were present on any imaging test but were not interpreted as suspicious for malignancy; (c) positive, if any result was clearly described as suspicious or consistent with malignancy. Negative and equivocal findings were combined as negative for the analysis. In cases where recurrence was reported on CI or PET/CT, the location of relapse was also determined, and classified as loco-regional (ipsilateral breast, ipsilateral axillary, internal mammary or supraclavicular node station) or systemic (contralateral node station or distant metastasis). The final diagnosis of disease recurrence and location of disease was confirmed by histologic examination in 13 patients (21%). For the remaining patients, evidence of progression within 6 months of clinical and/or imaging follow-up was considered to indicate a site of disease relapse, whereas no evidence of progression after at least 6 months of follow-up was considered to confirm absence of active disease at that site. 
Assessment of impact
Referring physicians were asked to record a pre-PET/CT management plan before PET/CT results on our routine clinical request form. The actual post-PET/CT management plan and treatment intent were determined from the medical record or by contacting the referring clinician.
The impact of PET/CT on management was considered “high” when the treatment intent or modality was changed (e.g. from palliative to curative treatment or from surgery to radiotherapy) [17]. The impact was considered “medium” when the method of treatment delivery was changed (e.g. radiation treatment volume and/or dose fractionation) [17]. When the PET/CT results did not indicate a need for change, the impact was considered to be “low”. PET/CT was considered to have had “no impact” when the management chosen conflicted with post-PET/CT disease extent on the basis of a synthesis of all available information.
Follow-up
After PET/CT, progress updates were obtained from the medical record, family physician, or treating oncologist. When relevant, details of the date and cause of death were obtained. The disease status at the time of death was recorded.
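The four-level impact scale described above can be sketched as a simple decision rule. A minimal illustration follows; the function name, boolean inputs and ordering of checks are our own illustrative assumptions, not the authors' data-collection instrument.

```python
def classify_impact(intent_or_modality_changed: bool,
                    delivery_changed: bool,
                    management_conflicts_with_findings: bool) -> str:
    """Map the management-change criteria to the four-level impact scale."""
    if management_conflicts_with_findings:
        return "no impact"  # chosen management conflicted with post-PET/CT disease extent
    if intent_or_modality_changed:
        return "high"       # e.g. palliative -> curative, or surgery -> radiotherapy
    if delivery_changed:
        return "medium"     # e.g. changed radiation treatment volume or dose fractionation
    return "low"            # PET/CT results did not indicate a need for change
```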
Statistical analysis
Estimates of OS at 2 and 5 years were computed using the Kaplan-Meier method, and a log-rank test was used to analyze the effect of CI and PET/CT results on OS. Two Cox regression analyses were performed to assess the impact of PET/CT and CI on OS, controlling for clinical variables and using a backward elimination process. Triple-negative status of the primary tumor was not included in the multivariate model because this information was available for only 38 patients. For proportions such as PPV, NPV, sensitivity and specificity, a Blyth-Still-Casella 95% confidence interval (CI) was calculated. Diagnostic performance results were compared using McNemar tests, Fisher exact tests and a likelihood ratio, which summarizes how many times more likely patients with the disease are to have a particular result than patients without the disease. Because patients were selected on the basis of CI, many patients with unequivocal systemic relapse on CI would likely not have been referred for PET/CT. Given this likely pre-test selection bias, positive and negative predictive values were considered more relevant comparators of diagnostic performance than sensitivity and specificity.
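The diagnostic indices compared in this study follow standard 2×2-table definitions, with the positive likelihood ratio defined as sensitivity/(1 − specificity). A minimal sketch is shown below; the exact Blyth-Still-Casella interval is not in the Python standard library, so a Wilson score interval is substituted as an approximation, and the false-positive/true-negative counts are hypothetical (only the 40/42 true-positive figure comes from the study).

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion
    (the paper used the exact Blyth-Still-Casella interval instead)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def diagnostic_indices(tp, fp, fn, tn):
    """Standard patient-level indices from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),  # as defined in the table footnote
    }

# 40 true positives among the 42 confirmed relapses (patient sensitivity 95%);
# fp and tn counts here are hypothetical, for illustration only.
idx = diagnostic_indices(tp=40, fp=1, fn=2, tn=20)
```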
Results
Patient characteristics at the time of initial diagnosis are summarized in Table 1.
Patient characteristics at the time of initial diagnosis
HER-2 = Human Epidermal Growth Factor Receptor-2.
Diagnostic performance
Relapse involving at least one site was confirmed in 42 of the 63 patients (67%). CI was positive for disease in 37 of these patients, yielding a patient sensitivity of 88%, whereas PET/CT was positive for disease in 40, corresponding to a patient sensitivity of 95%.
Table 2 shows a comparison of the extent of suspected relapse as assessed before and after PET/CT. Downstaging by PET/CT was confirmed to be correct in 12/14 patients (one patient had a suspicious bony lesion on bone scan that was not 18 F-FDG-avid but was confirmed to be malignant; one patient showed suspicious mediastinal lymph nodes on CT that were not 18 F-FDG-avid but were confirmed to be metastases of breast cancer by pathology), while upstaging by PET/CT was confirmed in 5/5 patients.
Comparison of extent of suspected relapse as assessed before and after PET/CT
LR = Loco-Regional; PET/CT = Positron Emission Tomography/Computerized Tomography.
On final diagnosis, 20 patients (32%) had a loco-regional recurrence and 36 (57%) had a systemic recurrence (14 patients had both loco-regional and systemic recurrence). CI was truly positive for loco-regional recurrence in 8/20 patients (40%) and systemic recurrence in 30/36 patients (83%). PET/CT was truly positive for loco-regional recurrence in 20/20 patients (100%) and systemic disease in 32/36 patients (89%). Among the 20 patients experiencing loco-regional recurrence, only 3 had local relapse only, for whom both PET/CT and CI were positive. PET/CT detected regional relapse not detected by CI in 12 patients (3 in axillary nodes only, 9 in extra-axillary nodes).
PET/CT had significantly higher positive predictive values when compared with CI for determination of loco-regional, systemic and global recurrence, and a higher negative predictive value for loco-regional and global recurrence (Table 3). 18 F-FDG PET/CT found incidental malignancies in 2 patients (one patient had a primary esophageal cancer and one patient had a gastrointestinal stromal tumor).
Diagnostic accuracy of conventional imaging (CI) and PET/CT in detecting relapse of disease in all patients, according to the histological type, and for locoregional and systemic recurrence
CI = Confidence Interval; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold. *likelihood ratio = sensitivity/(1 - specificity).
Impact on management
The PET/CT results had a high impact on management in 30 patients (48%), of whom the treatment intent was modified in 24 (including a change from invasive diagnosis to observation for 11 patients, on the basis of negative PET/CT results despite suspicious CI findings). For the 6 remaining patients, the treatment intent was not modified after PET/CT (palliative for 5 patients, curative for 1 patient), but the modalities of therapy were changed.
PET/CT had a medium impact on management in 6/63 patients (9%). Management changes in these patients primarily included changes in radiation treatment volume as a result of more extensive disease detected by PET/CT. All these patients had a palliative treatment intent, which was not modified by PET/CT.
PET/CT had a low impact (i.e. did not change the planned management) in 27/63 patients (43%), for whom the relapse extent was concordant with that found on CI (19 patients), the documentation of a different distribution of disease did not alter the planned treatment (2 patients), or both PET/CT and CI findings were negative (6 patients). In no case was the PET/CT result apparently ignored.
Prediction of survival by CI and PET/CT
Survival data were analyzed with a close-out date of September 4th, 2010. The median follow-up time was 5.1 years (range 0.8-8.6 years). All 63 patients entered into the study had a known status at the close-out date. All patients with negative PET/CT findings were followed for a minimum of 2 years after the scan (except one patient who died 2 months after the scan because of complications of an esophageal primary tumor discovered on PET/CT).
Thirty-eight patients (60%) were deceased, with a median survival of 3.4 years (95% CI 2.5 to 5.0 years) (Figure 1). The estimated 2-year OS was 66.4% (95% CI 53.2% to 76.6%) and the estimated 5-year OS was 37.3% (95% CI 24.3% to 50.3%).
Estimated overall survival (+/- 95% confidence interval) for all 63 patients.
On univariate analysis, PET/CT status (negative, positive LR or positive systemic) was strongly associated with survival (log-rank test: p = 0.0003 for the entire model; p = 0.0001 for the comparison of negative results and positive for systemic disease; p > 0.05 for other single comparisons) (Figure 2). Patients with systemic disease according to PET/CT had a 4.7-fold increase in the risk of death when compared with patients with negative PET/CT findings, while patients with only loco-regional recurrence had a 2-fold increase in the risk of death (Table 4). In contrast, CI status (negative, positive LR or positive systemic) did not significantly predict OS (log-rank test: p = 0.07 for the entire model; p > 0.05 for all single comparisons) (Figure 2). Moreover, patients with positive CI findings but negative PET/CT findings had similar estimated survival to patients in whom both tests were negative, whereas patients with negative CI findings but positive PET/CT results had comparable estimated survival to patients in whom both procedures were positive for recurrence (Figure 3).
Overall survival by staging technique. (A) Kaplan-Meier estimate of overall survival (OS) stratified by pre-PET/CT (Conventional imaging) status. (B) Kaplan-Meier estimate of OS stratified by post-PET/CT status. CI = Conventional Imaging; LR = loco-regional recurrence only; Syst = systemic recurrence.
Univariate predictors of overall survival
*The information is available for only 38 patients.
CI = Confidence Interval; ER = Estrogen Receptor; HER-2 = Human Epidermal Growth Factor Receptor-2; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography; PR = Progesterone Receptor. Significant p values in bold.
Kaplan-Meier estimate of overall survival (OS) stratified by combination of Conventional imaging (CI) and PET/CT results.
Initial stage of the disease at the time of diagnosis, histological type of cancer and triple-negative status (hormone receptor and HER-2 negativity) were also predictors of survival on univariate analysis (Table 4). Time between initial diagnosis and the restaging PET/CT, initial histological grade, estrogen receptor status, progesterone receptor status and overexpression of Human Epidermal Growth Factor Receptor-2 (HER-2) were not significant predictors of survival. On multivariate analysis, positive PET/CT findings (for loco-regional and/or systemic recurrence) remained an independent predictor of OS when adjusting for age, histological subtype and initial stage (model 1, Table 5). In contrast, positive CI findings did not independently predict OS (model 2, Table 5).
Multivariate predictors of overall survival
CI = Confidence Interval; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold.
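The OS figures above come from the Kaplan-Meier product-limit method named in the statistical analysis. A minimal pure-Python version of the estimator is sketched below on toy data (not the study's patient-level data, which are not reported).

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times  : follow-up time for each patient
    events : 1 if death was observed at that time, 0 if censored
    Returns a list of (time, survival probability) at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_at_t = 0
        # group all subjects sharing the same follow-up time
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # product-limit step
            curve.append((t, surv))
        at_risk -= n_at_t
    return curve
```

For example, five patients with times [1, 2, 3, 4, 5] and event indicators [1, 0, 1, 1, 0] yield survival steps at times 1, 3 and 4.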
Conclusion
Our findings support the value of 18 F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer. The prognostic stratification provided by this technique emphasizes the crucial role of 18 F-FDG PET/CT in optimizing treatment choices in this setting. Further multicenter studies are needed to confirm this role, in particular in patients with a high suspicion of relapse.
[ "Background", "Patients", "PET/CT acquisition and processing", "Image interpretation and classification", "Assessment of impact", "Follow-up", "Statistical analysis", "Diagnostic performance", "Impact on management", "Prediction of survival by CI and PET/CT", "Competing interests", "Authors’ contributions" ]
[ "Breast cancer (BC) is the most commonly diagnosed cancer in women, and is the leading cause of death by cancer for women in the western world. Depending on the initial extent of the disease, approximately 30% of patients diagnosed with BC are at risk of developing loco-regional recurrence or secondary tumor dissemination to distant organs\n[1]. Moreover, the survival of patients who develop an isolated loco-regional recurrence differs from patients who have distant relapse. As a consequence, determination of both the locations and extent of the recurrent disease is essential to guide therapeutic decisions and estimate prognosis.\nTraditionally, routine evaluation of suspected recurrent BC involves physical examination and a multi-modality Conventional Imaging (CI) approach which may include mammography, CT, MRI, and bone scintigraphy. However, this CI approach is often time-intensive and potential false-negative findings may delay appropriate therapy. Positron Emission Tomography/Computed Tomography (PET/CT) with 18 F-Fluorodeoxyglucose (18 F-FDG) is also often used in this indication, given that 18 F-FDG has affinity for both primary and secondary breast tumors, depending on size and aggressiveness\n[2-4]. Several authors have suggested that 18 F-FDG PET and PET/CT are more sensitive than CI for detection of recurrent BC\n[5-15] and can have a significant impact on the therapeutic management\n[5,7-9,12,16]. 
However, information concerning the utility of 18 F-FDG PET/CT for long-term prognostic stratification, when compared with CI, is limited.\nThus, the objectives of our study were to:\n[1] assess the incremental diagnostic performance and the impact on therapeutic management of 18 F-FDG PET/CT in a group of patients with a history of BC who had already been restaged by CI for identification of suspected disease relapse;\n[2] compare the long-term prognostic stratification of CI alone and 18 F-FDG PET/CT.", "A retrospective analysis was performed on consecutive patients with a history of BC and suspicion of recurrence who were referred for 18 F-FDG PET/CT at our institution from January 2002 to September 2008. BC was not a funded indication of 18 F-FDG PET/CT during this period in Australia; therefore, clinicians usually referred patients with high suspicion of recurrence for PET/CT.\nThe inclusion criteria of the study were as follows: (a) a history of confirmed histologic diagnosis of primary BC treated as per local protocol; (b) CI performed no longer than 4 months prior to PET/CT and where the CI included at least a CT scan of the area of interest; (c) availability of follow-up data for a minimum of 6 months following PET/CT; (d) unequivocal determination of clinical status at the time of the last clinical follow-up.\nSixty-three patients (62 women and one man; mean age = 57 years; range = 29-86 years) were included. The median time interval from initial diagnosis to 18 F-FDG PET/CT was 39 months (range 5-431 months). The median time interval between CT and PET/CT was 25 days (first-third quartile: 1-52 days). Indications for PET/CT were: equivocal or suspicious CI findings (n = 28); clinical suspicion of recurrence (n = 21); restaging after completion of therapy (n = 5); routine surveillance (n = 5); and increasing levels of tumor markers (n = 4). 
All patients provided permission to review medical records at the time of PET/CT imaging according to our institution’s investigational review board guidelines for informed consent (protocol number 09/78).", "Whole-body PET was acquired sequentially using a dedicated PET/CT system (Discovery LS PET/ 4-slice helical CT or Discovery STE/ 8-slice helical CT, General Electric Medical Systems, Milwaukee, WI) combining a multidetector CT scanner with a dedicated, full-ring PET scanner with bismuth germanate crystals. Patients were instructed to fast except for glucose-free oral hydration for at least 6 hours before injection of 300-400 MBq of 18 F-FDG. PET was performed 60 min following 18 F-FDG injection. Blood glucose levels were measured before the injection of the tracer to ensure levels below 10 mmol/l. Transmission data used for attenuation correction were obtained from a low-dose non diagnostic CT acquisition (140 kVp and 40-120 mA), without contrast enhancement. Attenuation corrected PET images were reconstructed with an iterative reconstruction (ordered-subset expectation maximization algorithm). Orthogonal CT, PET, and fused PET/CT images were displayed simultaneously on a GE Xeleris Workstation. The PET data were also displayed in a rotating maximum-intensity projection.\nAn experienced nuclear medicine physician generated a clinical report after reviewing PET images, low-dose CT images, fused PET/CT images, previous imaging results and clinical information. Standard uptake values were not routinely measured. 
Once issued, the PET/CT report was not reinterpreted in the light of subsequent clinical information.", "A total of 188 clinical, imaging and pathological procedures were performed (3 ± 1.4 per patients), including chest CT (n = 59), abdominopelvic CT (n = 44), whole-body bone scan (n = 30), clinical examination (n = 17), pathology (n = 9), abdominal ultrasound (US) (n = 7), MRI (n = 7), mammogram or breast US (n = 6), chest radiography (n = 5), other (n = 5).\nWritten clinical reports of conventional images and PET/CT were reviewed and classified as (a) negative if imaging tests were negative for disease; (b) equivocal, when abnormal findings were present on any imaging test but were not interpreted as suspicious for malignancy; (c) positive, if any result was clearly described as suspicious or consistent with malignancy. Negative and equivocal findings were combined as negative for the analysis.\nIn cases where recurrence was reported on CI or PET/CT, the location of relapse was also determined, and classified as loco-regional (ipsilateral breast, ipsilateral axillary, internal mammary or supraclavicular node station) or systemic (contralateral node station or distant metastasis). The final diagnosis of disease recurrence and location of disease was confirmed by histologic examination in 13 patients (21%). For the remaining patients, evidence of progression within 6 months of clinical and/or imaging follow-up was considered to indicate a site of disease relapse, whereas no evidence of progression after at least 6 months of follow-up was considered to confirm absence of active disease at that site.", "Referring physicians were asked to record a pre-PET/CT management plan before PET/CT results on our routine clinical request form. 
The actual post-PET/CT management plan and treatment intent were determined from the medical record or by contacting the referring clinician.\nThe impact of PET/CT on management was considered “high” when the treatment intent or modality was changed (e.g. from palliative to curative treatment or from surgery to radiotherapy)\n[17]. The impact was considered as “medium” when the method of treatment delivery was changed (e.g. radiation treatment volume and/or dose fractionation)\n[17]. When the PET/CT results did not indicate a need for change, the impact was considered to be “low”. PET/CT was considered to have had “no impact” when the management chosen conflicted with post-PET/CT disease extent on the basis of a synthesis of all available information.", "After PET/CT, progress updates were obtained from the medical record, family physician, or treating oncologist. When relevant, details of the date and cause of death were obtained. The disease status at the time of death was recorded.", "Estimates of OS at 2 and 5 years were computed using the Kaplan Meier method, a log-rank test was used to analyze the effect of CI and PET/CT results on OS. Two Cox regression analyses were performed to assess the impact of PET/CT and CI on OS controlling for clinical variables and using a backward elimination process. Triple negative status of the primary tumor was not included in the multivariate model because this information was available for only 38 patients. For percentages such as PPV, NPV, sensitivity and specificity, a Blyth-Still-Casella 95% confidence interval (CI) was calculated. Diagnosis performance results were compared using McNemar tests, Fisher exact tests and a likelihood ratio, which summarizes how many times more likely patients with the disease are to have that particular result than patients without the disease. 
According to the fact that patients were selected on the basis of CI, many patients with unequivocal systemic relapse on CI would likely not have been referred for PET/CT. Given the likely pre-test selection bias, positive and negative predictive values were considered as more relevant comparators of diagnostic performance when compared with sensitivity and specificity.", "Relapse involving at least one site was confirmed in 42 of the 63 patients (67%). CI was positive for disease in 37 of these patients, yielding a patient sensitivity of 88%, whereas PET/CT was positive for disease in 40, corresponding to a patient sensitivity of 95%.\nTable \n2 shows comparison of extent of suspected relapse as assessed before and after PET/CT. Downstaging by PET/CT was confirmed to be correct in 12/14 patients (one patient had a suspicious bony lesion on bone scan that was non 18 F-FDG-avid, but confirmed to be malignant; one patient showed suspicious mediastinal lymph nodes on CT that were non 18 F-FDG-avid but confirmed to be metastasis of breast cancer by pathology), while upstaging with PET/CT was confirmed in 5/5 patients.\nComparison of extent of suspected relapse as assessed before and after PET/CT\nLR = Loco-Regional; PET/CT = Positron Emission Tomography/Computerized Tomography.\nOn final diagnosis, 20 patients (32%) had a loco-regional recurrence and 36 (57%) had a systemic recurrence (14 patients had both loco-regional and systemic recurrence). CI was truly positive for loco-regional recurrence in 8/20 patients (40%) and systemic recurrence in 30/36 patients (83%). PET/CT was truly positive for loco-regional recurrence in 20/20 patients (100%) and systemic disease in 32/36 patients (89%). Among the 20 patients experiencing loco-regional recurrence, only 3 had local relapse only, for whom both PET/CT and CI were positive. PET/CT detected regional relapse non-detected by CI in 12 patients (3 in axillary nodes only, 9 in extra-axillary nodes). 
PET/CT had significantly higher positive predictive values when compared with CI for determination of loco-regional, systemic and global recurrence, and higher negative predictive value for loco-regional and global recurrence (Table \n3). 18 F-FDG PET/CT found incidental malignancies in 2 patients (one patient had a primary esophageal cancer and one patient had a gastrointestinal stromal tumor).\nDiagnostic accuracy of conventional imaging (CI) and PET/CT in detecting relapse of disease in all patients, according to the histological type, and for locoregional and systemic recurrence\nCI = Confidence Interval; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold.\n*likelihood ratio = sensitivity/ (1-specificity).", "The PET/CT results had a high impact on management in 30 patients (48%) of whom the treatment intent was modified in 24 patients (including from invasive diagnosis to observation for 11 patients, according to negative PE/CT results despite suspicious CI findings). For the 6 remaining patients, the treatment intent was not modified after PET/CT (palliative for 5 patients, curative for 1 patient), but the modalities of therapy were changed.\nPET/CT had a medium impact on management in 6/63 patients (9%). Management changes in these patients primarily included changes in radiation treatment volume as a result of more extensive disease detected by PET/CT. All these patients had a palliative treatment intent, which was not modified by PET/CT.\nPET/CT had a low impact (i.e. did not change the planned management) in 27/63 patients (43%) for whom the relapse extent was concordant with that found on CI (19 patients) or the documentation of a different distribution of disease did not alter the planned treatment (2 patients), or both PET/CT and CI findings were negative (6 patients). In no case was the PET/CT result apparently ignored.", "Survival data were analyzed with a close-out date of September, 4th 2010. 
The median follow-up time was 5.1 years (range 0.8-8.6 years). All 63 patients entered into the study had a known status at the close-out date. All patients with negative PET/CT findings were followed for a minimum of 2 years after the scan (except one patient who died 2 months after the scan because of complications of an esophageal primary tumor discovered on PET/CT).\nThirty-eight patients (60%) were deceased, with a median survival of 3.4 years (95% CI 2.5 to 5.0 years) (Figure \n1). The estimated 2-year OS was 66.4% (95% CI 53.2% to 76.6%) and the estimated 5-year OS was 37.3% (95% CI 24.3% to 50.3%).\nEstimated overall survival (+/- 95% confidence interval) for all 63 patients.\nOn univariate analysis, PET/CT status (negative, positive LR or positive systemic) was strongly associated with survival (log-rank test: p = 0.0003 for the entire model; p = 0.0001 for comparison of negative results and positive for systemic disease; p > 0.05 for other single comparisons) (Figure \n2). Patients with systemic disease according to PET/CT had a 4.7-fold increase in the risk of death when compared with patients with negative PET/CT findings, while patients with only loco-regional recurrence had a 2-fold increase in the risk of death (Table \n4). In contrast, CI status (negative, positive LR or positive systemic) did not significantly predict OS (log-rank test: p = 0.07 for the entire model; p > 0.05 for all single comparisons) (Figure \n2). Moreover, patients with positive CI findings but negative PET/CT findings had similar estimated survival to patients in whom both tests were negative, whereas patients with negative CI findings but positive PET/CT results had comparable estimated survival to patients in whom both procedures were positive for recurrence (Figure \n3).\nOverall survival by staging technique. (A) Kaplan-Meier estimate of overall survival (OS) stratified by pre-PET/CT (Conventional imaging) status. (B) Kaplan-Meier estimate of OS stratified by post-PET/CT status. 
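The 2- and 5-year OS figures above are Kaplan-Meier estimates. A minimal sketch of the product-limit estimator follows, using a toy cohort (not the study's 63 patients, whose individual follow-up times are not reported) and omitting the confidence intervals the paper also computes.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) estimate of the survival function.

    times:  follow-up durations (e.g. years)
    events: 1 if death observed at that time, 0 if censored
    Returns a list of (time, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, ev in data if tt == t and ev == 1)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            # S(t) = product over event times of (1 - d_i / n_i)
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties  # everyone observed at time t leaves the risk set
        i += ties
    return curve

# Toy cohort of 6 follow-up times (years); event = 0 marks censoring.
times = [1.0, 2.0, 2.0, 3.0, 4.0, 5.0]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```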
CI = Conventional Imaging; LR = loco-regional recurrence only; Syst = systemic recurrence.\nUnivariate predictors of overall survival\n*The information is available for only 38 patients.\nCI = Confidence Interval; ER = Estrogen Receptor; HER-2 = Human Epidermal Growth Factor Receptor-2; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography; PR = Progesterone Receptor. Significant p values in bold.\nKaplan-Meier estimate of overall survival (OS) stratified by combination of Conventional imaging (CI) and PET/CT results.\nInitial stage of the disease at the time of diagnosis, histological type of cancer and triple negative status (hormone receptor and HER-2 negativity) were also predictors of survival on the univariate analysis (Table \n4). Time between initial diagnosis and the restaging PET/CT, initial histological grade, estrogen receptor status, progesterone receptor status and overexpression of Human Epidermal Growth Factor Receptor-2 (HER-2) were not significant predictors of survival. On multivariate analysis, positive PET/CT findings (for loco-regional and/or systemic recurrence) remained an independent predictor of OS when adjusting for age, histological subtype and initial stage (model 1, Table \n5). In contrast, positive CI findings did not independently predict OS (model 2, Table \n5).\nMultivariate predictors of overall survival\nCI = Confidence Interval; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold.", "The authors declare that they have no competing interests.", "AC, SD, KM and RJH were involved in the concept and design of the study. AC, SD, and ED were involved in data collection, history review and follow-up. KM was the imaging lead on the study and with RJH verified classification of scan results. GT was the study statistician and contributed to analysis of the primary data. 
SD, MM and BC were the clinical leads for assessing the impact criteria and verifying classifications of PET/CT impact. All authors read and approved the final manuscript." ]
[ "Background", "Methods", "Patients", "PET/CT acquisition and processing", "Image interpretation and classification", "Assessment of impact", "Follow-up", "Statistical analysis", "Results", "Diagnostic performance", "Impact on management", "Prediction of survival by CI and PET/CT", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions" ]
[ "Breast cancer (BC) is the most commonly diagnosed cancer in women, and is the leading cause of death by cancer for women in the western world. Depending on the initial extent of the disease, approximately 30% of patients diagnosed with BC are at risk of developing loco-regional recurrence or secondary tumor dissemination to distant organs\n[1]. Moreover, the survival of patients who develop an isolated loco-regional recurrence differs from patients who have distant relapse. As a consequence, determination of both the locations and extent of the recurrent disease is essential to guide therapeutic decisions and estimate prognosis.\nTraditionally, routine evaluation of suspected recurrent BC involves physical examination and a multi-modality Conventional Imaging (CI) approach which may include mammography, CT, MRI, and bone scintigraphy. However, this CI approach is often time-intensive and potential false-negative findings may delay appropriate therapy. Positron Emission Tomography/Computed Tomography (PET/CT) with 18 F-Fluorodeoxyglucose (18 F-FDG) is also often used in this indication, given that 18 F-FDG has affinity for both primary and secondary breast tumors, depending on size and aggressiveness\n[2-4]. Several authors have suggested that 18 F-FDG PET and PET/CT are more sensitive than CI for detection of recurrent BC\n[5-15] and can have a significant impact on the therapeutic management\n[5,7-9,12,16]. 
However, information concerning the utility of 18 F-FDG PET/CT for long-term prognostic stratification, when compared with CI, is limited.\nThus, the objectives of our study were to:\n[1] assess the incremental diagnostic performance and the impact on therapeutic management of 18 F-FDG PET/CT in a group of patients with a history of BC who had already been restaged by CI for identification of suspected disease relapse;\n[2] compare the long-term prognostic stratification of CI alone and 18 F-FDG PET/CT.", " Patients A retrospective analysis was performed on consecutive patients with a history of BC and suspicion of recurrence who were referred for 18 F-FDG PET/CT at our institution from January 2002 to September 2008. BC was not a funded indication of 18 F-FDG PET/CT during this period in Australia; therefore, clinicians usually referred patients with high suspicion of recurrence for PET/CT.\nThe inclusion criteria of the study were as follows: (a) a history of confirmed histologic diagnosis of primary BC treated as per local protocol; (b) CI performed no longer than 4 months prior to PET/CT and where the CI included at least a CT scan of the area of interest; (c) availability of follow-up data for a minimum of 6 months following PET/CT; (d) unequivocal determination of clinical status at the time of the last clinical follow-up.\nSixty-three patients (62 women and one man; mean age = 57 years; range = 29-86 years) were included. The median time interval from initial diagnosis to 18 F-FDG PET/CT was 39 months (range 5-431 months). The median time interval between CT and PET/CT was 25 days (first-third quartile: 1-52 days). Indications for PET/CT were: equivocal or suspicious CI findings (n = 28); clinical suspicion of recurrence (n = 21); restaging after completion of therapy (n = 5); routine surveillance (n = 5); and increasing levels of tumor markers (n = 4). 
All patients provided permission to review medical records at the time of PET/CT imaging according to our institution’s investigational review board guidelines for informed consent (protocol number 09/78). 
PET/CT acquisition Whole-body PET was acquired sequentially using a dedicated PET/CT system (Discovery LS PET/ 4-slice helical CT or Discovery STE/ 8-slice helical CT, General Electric Medical Systems, Milwaukee, WI) combining a multidetector CT scanner with a dedicated, full-ring PET scanner with bismuth germanate crystals. Patients were instructed to fast except for glucose-free oral hydration for at least 6 hours before injection of 300-400 MBq of 18 F-FDG. PET was performed 60 min following 18 F-FDG injection. Blood glucose levels were measured before the injection of the tracer to ensure levels below 10 mmol/l. Transmission data used for attenuation correction were obtained from a low-dose non diagnostic CT acquisition (140 kVp and 40-120 mA), without contrast enhancement. Attenuation corrected PET images were reconstructed with an iterative reconstruction (ordered-subset expectation maximization algorithm). Orthogonal CT, PET, and fused PET/CT images were displayed simultaneously on a GE Xeleris Workstation. The PET data were also displayed in a rotating maximum-intensity projection.\nAn experienced nuclear medicine physician generated a clinical report after reviewing PET images, low-dose CT images, fused PET/CT images, previous imaging results and clinical information. Standard uptake values were not routinely measured. Once issued, the PET/CT report was not reinterpreted in the light of subsequent clinical information. 
Image interpretation and classification A total of 188 clinical, imaging and pathological procedures were performed (3 ± 1.4 per patient), including chest CT (n = 59), abdominopelvic CT (n = 44), whole-body bone scan (n = 30), clinical examination (n = 17), pathology (n = 9), abdominal ultrasound (US) (n = 7), MRI (n = 7), mammogram or breast US (n = 6), chest radiography (n = 5), other (n = 5).\nWritten clinical reports of conventional images and PET/CT were reviewed and classified as (a) negative if imaging tests were negative for disease; (b) equivocal, when abnormal findings were present on any imaging test but were not interpreted as suspicious for malignancy; (c) positive, if any result was clearly described as suspicious or consistent with malignancy. 
Negative and equivocal findings were combined as negative for the analysis.\nIn cases where recurrence was reported on CI or PET/CT, the location of relapse was also determined, and classified as loco-regional (ipsilateral breast, ipsilateral axillary, internal mammary or supraclavicular node station) or systemic (contralateral node station or distant metastasis). The final diagnosis of disease recurrence and location of disease was confirmed by histologic examination in 13 patients (21%). For the remaining patients, evidence of progression within 6 months of clinical and/or imaging follow-up was considered to indicate a site of disease relapse, whereas no evidence of progression after at least 6 months of follow-up was considered to confirm absence of active disease at that site. 
Assessment of impact Referring physicians were asked to record a pre-PET/CT management plan before PET/CT results on our routine clinical request form. The actual post-PET/CT management plan and treatment intent were determined from the medical record or by contacting the referring clinician.\nThe impact of PET/CT on management was considered “high” when the treatment intent or modality was changed (e.g. from palliative to curative treatment or from surgery to radiotherapy)\n[17]. The impact was considered “medium” when the method of treatment delivery was changed (e.g. radiation treatment volume and/or dose fractionation)\n[17]. When the PET/CT results did not indicate a need for change, the impact was considered to be “low”. PET/CT was considered to have had “no impact” when the management chosen conflicted with post-PET/CT disease extent on the basis of a synthesis of all available information. 
Follow-up After PET/CT, progress updates were obtained from the medical record, family physician, or treating oncologist. When relevant, details of the date and cause of death were obtained. The disease status at the time of death was recorded.\n Statistical analysis Estimates of OS at 2 and 5 years were computed using the Kaplan-Meier method, and a log-rank test was used to analyze the effect of CI and PET/CT results on OS. Two Cox regression analyses were performed to assess the impact of PET/CT and CI on OS, controlling for clinical variables and using a backward elimination process. Triple negative status of the primary tumor was not included in the multivariate model because this information was available for only 38 patients. For percentages such as PPV, NPV, sensitivity and specificity, a Blyth-Still-Casella 95% confidence interval (CI) was calculated. Diagnostic performance results were compared using McNemar tests, Fisher exact tests and a likelihood ratio, which summarizes how many times more likely patients with the disease are to have that particular result than patients without the disease. Because patients were selected on the basis of CI, many patients with unequivocal systemic relapse on CI would likely not have been referred for PET/CT. 
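The McNemar tests used here compare paired CI and PET/CT results on the same patients, using only the discordant pairs. A minimal sketch of the exact (binomial) version follows; the discordant counts are hypothetical, since the study's underlying 2x2 tables are not reproduced in the text.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test for paired binary test results.

    b = pairs where test A is positive and test B negative,
    c = pairs where test B is positive and test A negative.
    Under H0 the discordant pairs follow Binomial(b + c, 0.5);
    returns the two-sided p-value.
    """
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    # probability of a result at least as extreme in the smaller tail
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # double the tail, capped at 1

# Hypothetical example: 12 patients positive only on PET/CT,
# 2 patients positive only on CI.
p_value = mcnemar_exact(12, 2)
```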
Given the likely pre-test selection bias, positive and negative predictive values were considered as more relevant comparators of diagnostic performance when compared with sensitivity and specificity.\nEstimates of OS at 2 and 5 years were computed using the Kaplan Meier method, a log-rank test was used to analyze the effect of CI and PET/CT results on OS. Two Cox regression analyses were performed to assess the impact of PET/CT and CI on OS controlling for clinical variables and using a backward elimination process. Triple negative status of the primary tumor was not included in the multivariate model because this information was available for only 38 patients. For percentages such as PPV, NPV, sensitivity and specificity, a Blyth-Still-Casella 95% confidence interval (CI) was calculated. Diagnosis performance results were compared using McNemar tests, Fisher exact tests and a likelihood ratio, which summarizes how many times more likely patients with the disease are to have that particular result than patients without the disease. According to the fact that patients were selected on the basis of CI, many patients with unequivocal systemic relapse on CI would likely not have been referred for PET/CT. Given the likely pre-test selection bias, positive and negative predictive values were considered as more relevant comparators of diagnostic performance when compared with sensitivity and specificity.", "A retrospective analysis was performed on consecutive patients with a history of BC and suspicion of recurrence who were referred for 18 F-FDG PET/CT at our institution from January 2002 to September 2008. 
BC was not a funded indication of 18 F-FDG PET/CT during this period in Australia; therefore, clinicians usually referred patients with high suspicion of recurrence for PET/CT.\nThe inclusion criteria of the study were as follows: (a) a history of confirmed histologic diagnosis of primary BC treated as per local protocol; (b) CI performed no longer than 4 months prior to PET/CT and where the CI included at least a CT scan of the area of interest; (c) availability of follow-up data for a minimum of 6 months following PET/CT; (d) unequivocal determination of clinical status at the time of the last clinical follow-up.\nSixty-three patients (62 women and one man; mean age = 57 years; range = 29-86 years) were included. The median time interval from initial diagnosis to 18 F-FDG PET/CT was 39 months (range 5-431 months). The median time interval between CT and PET/CT was 25 days (first-third quartile: 1-52 days). Indications for PET/CT were: equivocal or suspicious CI findings (n = 28); clinical suspicion of recurrence (n = 21); restaging after completion of therapy (n = 5); routine surveillance (n = 5); and increasing levels of tumor markers (n = 4). All patients provided permission to review medical records at the time of PET/CT imaging according to our institution’s investigational review board guidelines for informed consent (protocol number 09/78).", "Whole-body PET was acquired sequentially using a dedicated PET/CT system (Discovery LS PET/ 4-slice helical CT or Discovery STE/ 8-slice helical CT, General Electric Medical Systems, Milwaukee, WI) combining a multidetector CT scanner with a dedicated, full-ring PET scanner with bismuth germanate crystals. Patients were instructed to fast except for glucose-free oral hydration for at least 6 hours before injection of 300-400 MBq of 18 F-FDG. PET was performed 60 min following 18 F-FDG injection. Blood glucose levels were measured before the injection of the tracer to ensure levels below 10 mmol/l. 
Transmission data used for attenuation correction were obtained from a low-dose non diagnostic CT acquisition (140 kVp and 40-120 mA), without contrast enhancement. Attenuation corrected PET images were reconstructed with an iterative reconstruction (ordered-subset expectation maximization algorithm). Orthogonal CT, PET, and fused PET/CT images were displayed simultaneously on a GE Xeleris Workstation. The PET data were also displayed in a rotating maximum-intensity projection.\nAn experienced nuclear medicine physician generated a clinical report after reviewing PET images, low-dose CT images, fused PET/CT images, previous imaging results and clinical information. Standard uptake values were not routinely measured. Once issued, the PET/CT report was not reinterpreted in the light of subsequent clinical information.", "A total of 188 clinical, imaging and pathological procedures were performed (3 ± 1.4 per patients), including chest CT (n = 59), abdominopelvic CT (n = 44), whole-body bone scan (n = 30), clinical examination (n = 17), pathology (n = 9), abdominal ultrasound (US) (n = 7), MRI (n = 7), mammogram or breast US (n = 6), chest radiography (n = 5), other (n = 5).\nWritten clinical reports of conventional images and PET/CT were reviewed and classified as (a) negative if imaging tests were negative for disease; (b) equivocal, when abnormal findings were present on any imaging test but were not interpreted as suspicious for malignancy; (c) positive, if any result was clearly described as suspicious or consistent with malignancy. Negative and equivocal findings were combined as negative for the analysis.\nIn cases where recurrence was reported on CI or PET/CT, the location of relapse was also determined, and classified as loco-regional (ipsilateral breast, ipsilateral axillary, internal mammary or supraclavicular node station) or systemic (contralateral node station or distant metastasis). 
The final diagnosis of disease recurrence and location of disease was confirmed by histologic examination in 13 patients (21%). For the remaining patients, evidence of progression within 6 months of clinical and/or imaging follow-up was considered to indicate a site of disease relapse, whereas no evidence of progression after at least 6 months of follow-up was considered to confirm absence of active disease at that site.", "Referring physicians were asked to record a pre-PET/CT management plan before PET/CT results on our routine clinical request form. The actual post-PET/CT management plan and treatment intent were determined from the medical record or by contacting the referring clinician.\nThe impact of PET/CT on management was considered “high” when the treatment intent or modality was changed (e.g. from palliative to curative treatment or from surgery to radiotherapy)\n[17]. The impact was considered as “medium” when the method of treatment delivery was changed (e.g. radiation treatment volume and/or dose fractionation)\n[17]. When the PET/CT results did not indicate a need for change, the impact was considered to be “low”. PET/CT was considered to have had “no impact” when the management chosen conflicted with post-PET/CT disease extent on the basis of a synthesis of all available information.", "After PET/CT, progress updates were obtained from the medical record, family physician, or treating oncologist. When relevant, details of the date and cause of death were obtained. The disease status at the time of death was recorded.", "Estimates of OS at 2 and 5 years were computed using the Kaplan Meier method, a log-rank test was used to analyze the effect of CI and PET/CT results on OS. Two Cox regression analyses were performed to assess the impact of PET/CT and CI on OS controlling for clinical variables and using a backward elimination process. 
Triple negative status of the primary tumor was not included in the multivariate model because this information was available for only 38 patients. For percentages such as PPV, NPV, sensitivity and specificity, a Blyth-Still-Casella 95% confidence interval (CI) was calculated. Diagnosis performance results were compared using McNemar tests, Fisher exact tests and a likelihood ratio, which summarizes how many times more likely patients with the disease are to have that particular result than patients without the disease. According to the fact that patients were selected on the basis of CI, many patients with unequivocal systemic relapse on CI would likely not have been referred for PET/CT. Given the likely pre-test selection bias, positive and negative predictive values were considered as more relevant comparators of diagnostic performance when compared with sensitivity and specificity.", "Patient characteristics at the time of initial diagnosis are summarized in Table \n1.\nPatient characteristics at the time of initial diagnosis\nHER-2 = Human Epidermal Growth Factor Receptor-2.\n Diagnostic performance Relapse involving at least one site was confirmed in 42 of the 63 patients (67%). CI was positive for disease in 37 of these patients, yielding a patient sensitivity of 88%, whereas PET/CT was positive for disease in 40, corresponding to a patient sensitivity of 95%.\nTable \n2 shows comparison of extent of suspected relapse as assessed before and after PET/CT. 
Downstaging by PET/CT was confirmed to be correct in 12/14 patients (one patient had a suspicious bony lesion on bone scan that was non 18 F-FDG-avid, but confirmed to be malignant; one patient showed suspicious mediastinal lymph nodes on CT that were non 18 F-FDG-avid but confirmed to be metastasis of breast cancer by pathology), while upstaging with PET/CT was confirmed in 5/5 patients.\nComparison of extent of suspected relapse as assessed before and after PET/CT\nLR = Loco-Regional; PET/CT = Positron Emission Tomography/Computerized Tomography.\nOn final diagnosis, 20 patients (32%) had a loco-regional recurrence and 36 (57%) had a systemic recurrence (14 patients had both loco-regional and systemic recurrence). CI was truly positive for loco-regional recurrence in 8/20 patients (40%) and systemic recurrence in 30/36 patients (83%). PET/CT was truly positive for loco-regional recurrence in 20/20 patients (100%) and systemic disease in 32/36 patients (89%). Among the 20 patients experiencing loco-regional recurrence, only 3 had local relapse only, for whom both PET/CT and CI were positive. PET/CT detected regional relapse non-detected by CI in 12 patients (3 in axillary nodes only, 9 in extra-axillary nodes). PET/CT had significantly higher positive predictive values when compared with CI for determination of loco-regional, systemic and global recurrence, and higher negative predictive value for loco-regional and global recurrence (Table \n3). 18 F-FDG PET/CT found incidental malignancies in 2 patients (one patient had a primary esophageal cancer and one patient had a gastrointestinal stromal tumor).\nDiagnostic accuracy of conventional imaging (CI) and PET/CT in detecting relapse of disease in all patients, according to the histological type, and for locoregional and systemic recurrence\nCI = Confidence Interval; PET/CT = Positron Emission Tomography/Computerized Tomography. 
Significant p values in bold.\n*likelihood ratio = sensitivity/ (1-specificity).\nRelapse involving at least one site was confirmed in 42 of the 63 patients (67%). CI was positive for disease in 37 of these patients, yielding a patient sensitivity of 88%, whereas PET/CT was positive for disease in 40, corresponding to a patient sensitivity of 95%.\nTable \n2 shows comparison of extent of suspected relapse as assessed before and after PET/CT. Downstaging by PET/CT was confirmed to be correct in 12/14 patients (one patient had a suspicious bony lesion on bone scan that was non 18 F-FDG-avid, but confirmed to be malignant; one patient showed suspicious mediastinal lymph nodes on CT that were non 18 F-FDG-avid but confirmed to be metastasis of breast cancer by pathology), while upstaging with PET/CT was confirmed in 5/5 patients.\nComparison of extent of suspected relapse as assessed before and after PET/CT\nLR = Loco-Regional; PET/CT = Positron Emission Tomography/Computerized Tomography.\nOn final diagnosis, 20 patients (32%) had a loco-regional recurrence and 36 (57%) had a systemic recurrence (14 patients had both loco-regional and systemic recurrence). CI was truly positive for loco-regional recurrence in 8/20 patients (40%) and systemic recurrence in 30/36 patients (83%). PET/CT was truly positive for loco-regional recurrence in 20/20 patients (100%) and systemic disease in 32/36 patients (89%). Among the 20 patients experiencing loco-regional recurrence, only 3 had local relapse only, for whom both PET/CT and CI were positive. PET/CT detected regional relapse non-detected by CI in 12 patients (3 in axillary nodes only, 9 in extra-axillary nodes). PET/CT had significantly higher positive predictive values when compared with CI for determination of loco-regional, systemic and global recurrence, and higher negative predictive value for loco-regional and global recurrence (Table \n3). 
18 F-FDG PET/CT found incidental malignancies in 2 patients (one patient had a primary esophageal cancer and one patient had a gastrointestinal stromal tumor).\nDiagnostic accuracy of conventional imaging (CI) and PET/CT in detecting relapse of disease in all patients, according to the histological type, and for locoregional and systemic recurrence\nCI = Confidence Interval; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold.\n*likelihood ratio = sensitivity/ (1-specificity).\n Impact on management The PET/CT results had a high impact on management in 30 patients (48%) of whom the treatment intent was modified in 24 patients (including from invasive diagnosis to observation for 11 patients, according to negative PE/CT results despite suspicious CI findings). For the 6 remaining patients, the treatment intent was not modified after PET/CT (palliative for 5 patients, curative for 1 patient), but the modalities of therapy were changed.\nPET/CT had a medium impact on management in 6/63 patients (9%). Management changes in these patients primarily included changes in radiation treatment volume as a result of more extensive disease detected by PET/CT. All these patients had a palliative treatment intent, which was not modified by PET/CT.\nPET/CT had a low impact (i.e. did not change the planned management) in 27/63 patients (43%) for whom the relapse extent was concordant with that found on CI (19 patients) or the documentation of a different distribution of disease did not alter the planned treatment (2 patients), or both PET/CT and CI findings were negative (6 patients). In no case was the PET/CT result apparently ignored.\nThe PET/CT results had a high impact on management in 30 patients (48%) of whom the treatment intent was modified in 24 patients (including from invasive diagnosis to observation for 11 patients, according to negative PE/CT results despite suspicious CI findings). 
Prediction of survival by CI and PET/CT

Survival data were analyzed with a close-out date of September 4, 2010. The median follow-up time was 5.1 years (range 0.8-8.6 years). All 63 patients entered into the study had a known status at the close-out date. All patients with negative PET/CT findings were followed for a minimum of 2 years after the scan, except one patient who died 2 months after the scan from complications of an esophageal primary tumor discovered on PET/CT.

Thirty-eight patients (60%) had died, with a median survival of 3.4 years (95% CI 2.5 to 5.0 years) (Figure 1). The estimated 2-year OS was 66.4% (95% CI 53.2% to 76.6%) and the estimated 5-year OS was 37.3% (95% CI 24.3% to 50.3%).

Estimated overall survival (+/- 95% confidence interval) for all 63 patients.

On univariate analysis, PET/CT status (negative, positive loco-regional or positive systemic) was strongly associated with survival (log-rank test: p = 0.0003 for the entire model; p = 0.0001 for the comparison of negative results with positivity for systemic disease; p > 0.05 for the other single comparisons) (Figure 2).
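The 2- and 5-year OS figures above come from Kaplan-Meier estimation. As a generic sketch of the product-limit estimator (not the study's actual analysis code), with toy data:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimates.

    times: follow-up durations (e.g. years); events: 1 = death, 0 = censored.
    Returns a list of (time, estimated survival) at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        tied = [e for (tt, e) in data if tt == t]
        deaths = sum(tied)
        if deaths:
            surv *= 1 - deaths / at_risk   # conditional survival at time t
            curve.append((t, surv))
        at_risk -= len(tied)
        i += len(tied)
    return curve

# Hypothetical follow-up data (not the study's patients):
curve = kaplan_meier([1.0, 2.0, 3.0, 4.0], [1, 1, 0, 1])
```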
Patients with systemic disease according to PET/CT had a 4.7-fold increase in the risk of death compared with patients with negative PET/CT findings, while patients with only loco-regional recurrence had a 2-fold increase in the risk of death (Table 4). In contrast, CI status (negative, positive loco-regional or positive systemic) did not significantly predict OS (log-rank test: p = 0.07 for the entire model; p > 0.05 for all single comparisons) (Figure 2). Moreover, patients with positive CI findings but negative PET/CT findings had an estimated survival similar to that of patients in whom both tests were negative, whereas patients with negative CI findings but positive PET/CT results had an estimated survival comparable to that of patients in whom both procedures were positive for recurrence (Figure 3).

Overall survival by staging technique. (A) Kaplan-Meier estimate of overall survival (OS) stratified by pre-PET/CT (conventional imaging) status. (B) Kaplan-Meier estimate of OS stratified by post-PET/CT status. CI = Conventional Imaging; LR = loco-regional recurrence only; Syst = systemic recurrence.

Univariate predictors of overall survival
*This information is available for only 38 patients.
CI = Confidence Interval; ER = Estrogen Receptor; HER-2 = Human Epidermal Growth Factor Receptor-2; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography; PR = Progesterone Receptor. Significant p values in bold.

Kaplan-Meier estimate of overall survival (OS) stratified by the combination of conventional imaging (CI) and PET/CT results.

Initial stage of the disease at diagnosis, histological type of cancer and triple-negative status (hormone receptor and HER-2 negativity) were also predictors of survival on univariate analysis (Table 4).
Time between initial diagnosis and the restaging PET/CT, initial histological grade, estrogen receptor status, progesterone receptor status and overexpression of Human Epidermal Growth Factor Receptor-2 (HER-2) were not significant predictors of survival. On multivariate analysis, positive PET/CT findings (for loco-regional and/or systemic recurrence) remained an independent predictor of OS when adjusting for age, histological subtype and initial stage (model 1, Table 5). In contrast, positive CI findings did not independently predict OS (model 2, Table 5).

Multivariate predictors of overall survival
CI = Confidence Interval; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold.
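As a side note on reading the hazard ratios above: under the proportional-hazards assumption used by Cox regression, a hazard ratio HR maps a baseline survival curve S0(t) to S0(t)**HR. The numbers below are purely illustrative, not the study's data:

```python
def survival_under_hr(baseline_survival, hazard_ratio):
    """Survival probability implied by a hazard ratio under the
    proportional-hazards assumption: S1(t) = S0(t) ** HR."""
    return baseline_survival ** hazard_ratio

# If a reference group had, say, 90% survival at some time point,
# a group with HR = 4.7 would be expected to have about 61% survival
# at that same time point (illustrative numbers only).
worse = survival_under_hr(0.90, 4.7)
```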
Discussion

For patients with possible recurrent breast cancer (BC), early detection and adequate localization of recurrent disease are essential for guiding optimal therapy and prognostication. Patients with isolated loco-regional recurrence can benefit from curative salvage therapy, whereas palliative treatment is generally indicated for patients with distant relapse.

Several studies have shown the relevance of 18F-FDG PET/CT in detecting distant metastasis in patients with clinical suspicion of recurrence [6,8,12,13,18-20] and in patients with documented loco-regional recurrence [5]. Our study confirmed that 18F-FDG PET/CT is an accurate technique for the detection of relapse when compared with CI alone. Of particular economic and clinical importance was the observation that 12 patients (19%) in this series who were suspected to have relapsed on conventional evaluation subsequently received no active treatment after a negative 18F-FDG PET/CT evaluation and demonstrated an excellent prognosis. In contrast, of the 8 patients with negative CI findings, 1 was found to have loco-regional relapse and 3 were found to have systemic recurrence on 18F-FDG PET/CT. In the current study, 18F-FDG PET/CT had an impact on therapeutic management in 57% of patients; in particular, the treatment intent was changed in 38% of patients. This result is consistent with other studies showing the important impact of 18F-FDG PET/CT on therapeutic management in patients with suspected recurrent BC [5,8,12,18,19].
Most of these previous studies reported that 18F-FDG PET/CT was highly accurate for detecting recurrent disease in patients with negative or inconclusive CI findings [5,18,19]. In contrast, in our study CI findings were consistent with relapse in the majority of patients (51/63), but 18F-FDG PET/CT downstaged 12 of them, providing a better negative predictive value than CI alone (Table 3). These findings suggest that 18F-FDG PET/CT is not only effective for early detection of relapse in patients with negative CI findings, but also yields a better characterization of CI findings in patients with a high suspicion of relapse.

As BC was not a funded indication for 18F-FDG PET/CT in Australia during the study period, referring clinicians were likely to use PET/CT in patients for whom there was clinical uncertainty with respect to appropriate management. Although this patient selection could have introduced biases into the evaluation of the impact of PET/CT findings on patient management, it also showed the value of 18F-FDG PET/CT in patients whose disease status could not be adequately determined using CI alone.

The more accurate prognostic information derived from PET/CT results, compared with CI findings, underpins the value of PET/CT in the management of patients with recurrent BC. Neither 18F-FDG PET/CT nor CT is an optimal modality for the detection of local recurrence when compared with mammography, ultrasound or MRI. However, detection of additional distant metastases in patients with documented loco-regional recurrence is essential in order to optimize management and stratify prognosis. Of note was the favourable survival of patients with isolated loco-regional recurrence according to PET/CT, compared with patients with systemic relapse.

In our study, the CT performed with PET was not of diagnostic quality; however, all patients included in this study had diagnostic CT before undergoing FDG PET/CT. While Dirisamer et al. showed in a retrospective study that the combination of FDG PET and contrast-enhanced (ce) CT could improve restaging of breast cancer when compared with ceCT or FDG PET alone [6], there is no evidence in the literature of the superiority of FDG PET/ceCT for restaging of breast cancer over FDG PET combined with non-diagnostic, low-dose CT. In our study, the outcome data validate our approach: when discordant, FDG PET/CT results were more often correct than conventional imaging (including diagnostic CT), and they stratified prognosis whereas conventional imaging did not (Figure 2). Had we given precedence to the conventional imaging findings (including diagnostic CT) when they were discordant with FDG PET/CT, accuracy would not have been enhanced, and the prognostic value of FDG PET/CT incorporating a low-dose, non-contrast CT would not have been superior to that of conventional imaging. These results have implications for the reporting of FDG PET combined with ceCT, which some facilities perform as a routine procedure, and suggest that significant clinical weight should be placed on the PET findings even when they are discordant with the ceCT appearances.

One limitation of our study was that the CI procedures were not standardized and were selected on the basis of clinical findings. However, this reflects routine clinical practice and does not detract from the results. Although comparison of a masked reading of PET/CT with a masked reading of CI techniques might be appropriate if PET/CT were to be proposed as a replacement for CI, the main purpose of this study was to evaluate the incremental diagnostic and prognostic value of PET/CT in routine practice. The results of this retrospective study would, however, justify a randomized trial in which patients with clinical risk or suspicion of relapse are stratified to have either CI or FDG PET/CT as the initial restaging procedure [21].
Finally, we did not compare PET/CT and CI findings with histopathological findings in most patients. For 50 patients (79%), the final disease status was determined clinically and/or with follow-up imaging. Nevertheless, since most of the patients were followed for a long period of time, the survival analysis provides the best validation of diagnostic accuracy.

Conclusions

Our findings support the value of 18F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer. The prognostic stratification provided by this technique emphasizes the crucial role of 18F-FDG PET/CT in optimizing treatment choices in this setting. Further multicenter studies are needed to confirm this role, in particular in patients with high suspicion of relapse.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AC, SD, KM and RJH were involved in the concept and design of the study. AC, SD and ED were involved in data collection, history review and follow-up. KM was the imaging lead on the study and, with RJH, verified the classification of scan results. GT was the study statistician and contributed to the analysis of the primary data. SD, MM and BC were the clinical leads for assessing the impact criteria and verifying classifications of PET/CT impact. All authors read and approved the final manuscript.
Keywords: Breast cancer; 18F-FDG PET/CT; Restaging; Prognosis
Background

Breast cancer (BC) is the most commonly diagnosed cancer in women and the leading cause of cancer death among women in the western world. Depending on the initial extent of disease, approximately 30% of patients diagnosed with BC are at risk of developing loco-regional recurrence or secondary tumor dissemination to distant organs [1]. Moreover, the survival of patients who develop an isolated loco-regional recurrence differs from that of patients who have distant relapse. Consequently, determining both the location and the extent of recurrent disease is essential to guide therapeutic decisions and estimate prognosis. Traditionally, routine evaluation of suspected recurrent BC involves physical examination and a multi-modality conventional imaging (CI) approach that may include mammography, CT, MRI and bone scintigraphy. However, this CI approach is often time-intensive, and potential false-negative findings may delay appropriate therapy. Positron Emission Tomography/Computed Tomography (PET/CT) with 18F-Fluorodeoxyglucose (18F-FDG) is also often used in this indication, given that 18F-FDG has affinity for both primary and secondary breast tumors, depending on size and aggressiveness [2-4]. Several authors have suggested that 18F-FDG PET and PET/CT are more sensitive than CI for the detection of recurrent BC [5-15] and can have a significant impact on therapeutic management [5,7-9,12,16]. However, information concerning the utility of 18F-FDG PET/CT for long-term prognostic stratification, compared with CI, is limited. Thus, the objectives of our study were to (1) assess the incremental diagnostic performance and the impact on therapeutic management of 18F-FDG PET/CT in a group of patients with a history of BC who had already been restaged by CI for identification of suspected disease relapse, and (2) compare the long-term prognostic stratification of CI alone and 18F-FDG PET/CT.
Methods

Patients

A retrospective analysis was performed on consecutive patients with a history of BC and suspicion of recurrence who were referred for 18F-FDG PET/CT at our institution from January 2002 to September 2008. BC was not a funded indication for 18F-FDG PET/CT in Australia during this period; therefore, clinicians usually referred patients with a high suspicion of recurrence for PET/CT. The inclusion criteria of the study were as follows: (a) a history of confirmed histologic diagnosis of primary BC treated as per local protocol; (b) CI performed no longer than 4 months prior to PET/CT, including at least a CT scan of the area of interest; (c) availability of follow-up data for a minimum of 6 months following PET/CT; and (d) unequivocal determination of clinical status at the time of the last clinical follow-up. Sixty-three patients (62 women and one man; mean age = 57 years; range = 29-86 years) were included. The median time interval from initial diagnosis to 18F-FDG PET/CT was 39 months (range 5-431 months). The median time interval between CT and PET/CT was 25 days (first-third quartile: 1-52 days). Indications for PET/CT were: equivocal or suspicious CI findings (n = 28); clinical suspicion of recurrence (n = 21); restaging after completion of therapy (n = 5); routine surveillance (n = 5); and increasing levels of tumor markers (n = 4). All patients provided permission to review medical records at the time of PET/CT imaging according to our institution's investigational review board guidelines for informed consent (protocol number 09/78).
PET/CT acquisition and processing

Whole-body PET was acquired sequentially using a dedicated PET/CT system (Discovery LS PET/4-slice helical CT or Discovery STE/8-slice helical CT; General Electric Medical Systems, Milwaukee, WI) combining a multidetector CT scanner with a dedicated, full-ring PET scanner with bismuth germanate crystals. Patients were instructed to fast, except for glucose-free oral hydration, for at least 6 hours before injection of 300-400 MBq of 18F-FDG. PET was performed 60 min after 18F-FDG injection. Blood glucose levels were measured before injection of the tracer to ensure levels below 10 mmol/l. Transmission data used for attenuation correction were obtained from a low-dose, non-diagnostic CT acquisition (140 kVp and 40-120 mA) without contrast enhancement.
Attenuation-corrected PET images were reconstructed with an iterative reconstruction (ordered-subset expectation maximization algorithm). Orthogonal CT, PET and fused PET/CT images were displayed simultaneously on a GE Xeleris workstation. The PET data were also displayed as a rotating maximum-intensity projection. An experienced nuclear medicine physician generated a clinical report after reviewing the PET images, low-dose CT images, fused PET/CT images, previous imaging results and clinical information. Standardized uptake values were not routinely measured. Once issued, the PET/CT report was not reinterpreted in the light of subsequent clinical information.
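As an illustrative aside (standard isotope physics, not from the paper): with the 60-minute uptake period described above, a meaningful fraction of the injected 18F activity decays before imaging, since 18F has a physical half-life of about 110 minutes:

```python
F18_HALF_LIFE_MIN = 109.8  # physical half-life of fluorine-18, in minutes

def activity_at_scan(injected_mbq, delay_min, half_life=F18_HALF_LIFE_MIN):
    """Radioactive decay: A(t) = A0 * 2 ** (-t / T_half)."""
    return injected_mbq * 2 ** (-delay_min / half_life)

# A mid-range 370 MBq injection imaged 60 min later retains roughly
# two thirds of the injected activity.
remaining = activity_at_scan(370, 60)
```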
Image interpretation and classification

A total of 188 clinical, imaging and pathological procedures were performed (3 ± 1.4 per patient), including chest CT (n = 59), abdominopelvic CT (n = 44), whole-body bone scan (n = 30), clinical examination (n = 17), pathology (n = 9), abdominal ultrasound (US) (n = 7), MRI (n = 7), mammogram or breast US (n = 6), chest radiography (n = 5) and other (n = 5). Written clinical reports of conventional images and PET/CT were reviewed and classified as (a) negative, if imaging tests were negative for disease; (b) equivocal, when abnormal findings were present on any imaging test but were not interpreted as suspicious for malignancy; or (c) positive, if any result was clearly described as suspicious for or consistent with malignancy. Negative and equivocal findings were combined as negative for the analysis. In cases where recurrence was reported on CI or PET/CT, the location of relapse was also determined and classified as loco-regional (ipsilateral breast, ipsilateral axillary, internal mammary or supraclavicular node station) or systemic (contralateral node station or distant metastasis). The final diagnosis of disease recurrence and location of disease was confirmed by histologic examination in 13 patients (21%). For the remaining patients, evidence of progression within 6 months of clinical and/or imaging follow-up was considered to indicate a site of disease relapse, whereas no evidence of progression after at least 6 months of follow-up was considered to confirm absence of active disease at that site.
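The report-classification rules above can be sketched as small helpers (hypothetical code mirroring the stated definitions; the site names are paraphrased labels, not a controlled vocabulary from the paper):

```python
# Loco-regional sites as defined in the study; anything else is systemic.
LOCO_REGIONAL_SITES = {
    "ipsilateral breast", "ipsilateral axillary node",
    "internal mammary node", "supraclavicular node",
}

def is_positive(report_category):
    """Collapse the three report categories to a binary call:
    negative and equivocal findings are combined as negative."""
    assert report_category in {"negative", "equivocal", "positive"}
    return report_category == "positive"

def relapse_location(site):
    """Classify a reported relapse site as loco-regional or systemic."""
    return "loco-regional" if site in LOCO_REGIONAL_SITES else "systemic"
```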
Assessment of impact

Referring physicians were asked to record a pre-PET/CT management plan, before the PET/CT results were available, on our routine clinical request form. The actual post-PET/CT management plan and treatment intent were determined from the medical record or by contacting the referring clinician. The impact of PET/CT on management was considered "high" when the treatment intent or modality was changed (e.g.
from palliative to curative treatment or from surgery to radiotherapy) [17]. The impact was considered “medium” when the method of treatment delivery was changed (e.g. radiation treatment volume and/or dose fractionation) [17]. When the PET/CT results did not indicate a need for change, the impact was considered to be “low”. PET/CT was considered to have had “no impact” when the management chosen conflicted with post-PET/CT disease extent on the basis of a synthesis of all available information. Follow-up After PET/CT, progress updates were obtained from the medical record, family physician, or treating oncologist. When relevant, details of the date and cause of death were obtained. The disease status at the time of death was recorded.
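The four-level impact scale described above is a decision rule over three yes/no judgments. A minimal sketch, with illustrative argument names (in the study these judgments were abstracted from clinical records, not computed):

```python
def impact_of_petct(intent_or_modality_changed: bool,
                    delivery_method_changed: bool,
                    management_conflicted_with_petct: bool) -> str:
    """Map the management-change criteria onto the four-level impact scale [17]."""
    if management_conflicted_with_petct:
        return "no impact"  # chosen management conflicted with post-PET/CT disease extent
    if intent_or_modality_changed:
        return "high"       # e.g. palliative -> curative, or surgery -> radiotherapy
    if delivery_method_changed:
        return "medium"     # e.g. radiation treatment volume and/or dose fractionation
    return "low"            # PET/CT results indicated no need for change
```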
Statistical analysis Estimates of OS at 2 and 5 years were computed using the Kaplan-Meier method, and a log-rank test was used to analyze the effect of CI and PET/CT results on OS. Two Cox regression analyses were performed to assess the impact of PET/CT and CI on OS, controlling for clinical variables and using a backward elimination process. Triple negative status of the primary tumor was not included in the multivariate model because this information was available for only 38 patients. For percentages such as PPV, NPV, sensitivity and specificity, a Blyth-Still-Casella 95% confidence interval (CI) was calculated. Diagnostic performance results were compared using McNemar tests, Fisher exact tests and a likelihood ratio, which summarizes how many times more likely patients with the disease are to have a particular result than patients without the disease. Because patients were selected on the basis of CI, many patients with unequivocal systemic relapse on CI would likely not have been referred for PET/CT. Given the likely pre-test selection bias, positive and negative predictive values were considered more relevant comparators of diagnostic performance than sensitivity and specificity.
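The diagnostic-performance quantities named above (sensitivity, specificity, PPV, NPV and the positive likelihood ratio) all follow from a 2 × 2 table. A minimal sketch with illustrative counts; the Blyth-Still-Casella interval used in the paper requires specialized software, so the common Wilson score interval stands in here as a labeled substitute:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion k/n (stand-in for
    the Blyth-Still-Casella interval used in the paper)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def diagnostics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV and the positive likelihood
    ratio LR+ = sensitivity / (1 - specificity) from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr+": sens / (1 - spec) if spec < 1 else float("inf"),
    }

# Illustrative counts only (not the study's full 2x2 table)
d = diagnostics(tp=40, fp=2, fn=2, tn=19)
lo, hi = wilson_ci(40, 42)
```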
Patients: A retrospective analysis was performed on consecutive patients with a history of BC and suspicion of recurrence who were referred for 18 F-FDG PET/CT at our institution from January 2002 to September 2008. BC was not a funded indication of 18 F-FDG PET/CT during this period in Australia; therefore, clinicians usually referred patients with high suspicion of recurrence for PET/CT. The inclusion criteria of the study were as follows: (a) a history of confirmed histologic diagnosis of primary BC treated as per local protocol; (b) CI performed no longer than 4 months prior to PET/CT and where the CI included at least a CT scan of the area of interest; (c) availability of follow-up data for a minimum of 6 months following PET/CT; (d) unequivocal determination of clinical status at the time of the last clinical follow-up. Sixty-three patients (62 women and one man; mean age = 57 years; range = 29-86 years) were included. The median time interval from initial diagnosis to 18 F-FDG PET/CT was 39 months (range 5-431 months). The median time interval between CT and PET/CT was 25 days (first-third quartile: 1-52 days). Indications for PET/CT were: equivocal or suspicious CI findings (n = 28); clinical suspicion of recurrence (n = 21); restaging after completion of therapy (n = 5); routine surveillance (n = 5); and increasing levels of tumor markers (n = 4).
All patients provided permission to review medical records at the time of PET/CT imaging according to our institution’s investigational review board guidelines for informed consent (protocol number 09/78). PET/CT acquisition and processing: Whole-body PET was acquired sequentially using a dedicated PET/CT system (Discovery LS PET/ 4-slice helical CT or Discovery STE/ 8-slice helical CT, General Electric Medical Systems, Milwaukee, WI) combining a multidetector CT scanner with a dedicated, full-ring PET scanner with bismuth germanate crystals. Patients were instructed to fast, except for glucose-free oral hydration, for at least 6 hours before injection of 300-400 MBq of 18 F-FDG. PET was performed 60 min following 18 F-FDG injection. Blood glucose levels were measured before the injection of the tracer to ensure levels below 10 mmol/l. Transmission data used for attenuation correction were obtained from a low-dose, non-diagnostic CT acquisition (140 kVp and 40-120 mA), without contrast enhancement. Attenuation-corrected PET images were reconstructed with an iterative reconstruction (ordered-subset expectation maximization algorithm). Orthogonal CT, PET, and fused PET/CT images were displayed simultaneously on a GE Xeleris Workstation. The PET data were also displayed in a rotating maximum-intensity projection. An experienced nuclear medicine physician generated a clinical report after reviewing PET images, low-dose CT images, fused PET/CT images, previous imaging results and clinical information. Standard uptake values were not routinely measured.
Results: Patient characteristics at the time of initial diagnosis are summarized in Table  1. Patient characteristics at the time of initial diagnosis HER-2 = Human Epidermal Growth Factor Receptor-2. Diagnostic performance Relapse involving at least one site was confirmed in 42 of the 63 patients (67%). CI was positive for disease in 37 of these patients, yielding a patient sensitivity of 88%, whereas PET/CT was positive for disease in 40, corresponding to a patient sensitivity of 95%. Table  2 shows a comparison of the extent of suspected relapse as assessed before and after PET/CT. Downstaging by PET/CT was confirmed to be correct in 12/14 patients (one patient had a suspicious bony lesion on bone scan that was not 18 F-FDG-avid but was confirmed to be malignant; one patient showed suspicious mediastinal lymph nodes on CT that were not 18 F-FDG-avid but were confirmed by pathology to be metastases of breast cancer), while upstaging with PET/CT was confirmed in 5/5 patients. Comparison of extent of suspected relapse as assessed before and after PET/CT LR = Loco-Regional; PET/CT = Positron Emission Tomography/Computerized Tomography. On final diagnosis, 20 patients (32%) had a loco-regional recurrence and 36 (57%) had a systemic recurrence (14 patients had both loco-regional and systemic recurrence). CI was truly positive for loco-regional recurrence in 8/20 patients (40%) and systemic recurrence in 30/36 patients (83%). PET/CT was truly positive for loco-regional recurrence in 20/20 patients (100%) and systemic disease in 32/36 patients (89%). Among the 20 patients experiencing loco-regional recurrence, only 3 had local relapse only, for whom both PET/CT and CI were positive. PET/CT detected regional relapse not detected by CI in 12 patients (3 in axillary nodes only, 9 in extra-axillary nodes).
PET/CT had significantly higher positive predictive values when compared with CI for determination of loco-regional, systemic and global recurrence, and higher negative predictive value for loco-regional and global recurrence (Table  3). 18 F-FDG PET/CT found incidental malignancies in 2 patients (one patient had a primary esophageal cancer and one patient had a gastrointestinal stromal tumor). Diagnostic accuracy of conventional imaging (CI) and PET/CT in detecting relapse of disease in all patients, according to the histological type, and for locoregional and systemic recurrence CI = Confidence Interval; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold. *likelihood ratio = sensitivity/ (1-specificity).
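As an arithmetic check, the patient-level detection rates quoted above can be reproduced from the stated counts:

```python
# Counts as reported in the text: relapse confirmed in 42 of 63 patients.
relapses = 42
ci_true_pos, pet_true_pos = 37, 40      # patients with relapse detected by each test
sens_ci = ci_true_pos / relapses        # ~0.88 (88%)
sens_pet = pet_true_pos / relapses      # ~0.95 (95%)

# Site-specific true-positive rates as reported
locoregional = {"CI": 8 / 20, "PET/CT": 20 / 20}   # 40% vs 100%
systemic = {"CI": 30 / 36, "PET/CT": 32 / 36}      # 83% vs 89%
```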
Impact on management The PET/CT results had a high impact on management in 30 patients (48%); in 24 of these, the treatment intent was modified (including a change from invasive diagnosis to observation in 11 patients, on the basis of negative PET/CT results despite suspicious CI findings). For the 6 remaining patients, the treatment intent was not modified after PET/CT (palliative for 5 patients, curative for 1 patient), but the modalities of therapy were changed. PET/CT had a medium impact on management in 6/63 patients (9%). Management changes in these patients primarily included changes in radiation treatment volume as a result of more extensive disease detected by PET/CT. All these patients had a palliative treatment intent, which was not modified by PET/CT. PET/CT had a low impact (i.e.
did not change the planned management) in 27/63 patients (43%), for whom the relapse extent was concordant with that found on CI (19 patients), the documentation of a different distribution of disease did not alter the planned treatment (2 patients), or both PET/CT and CI findings were negative (6 patients). In no case was the PET/CT result apparently ignored. Prediction of survival by CI and PET/CT Survival data were analyzed with a close-out date of September 4, 2010. The median follow-up time was 5.1 years (range 0.8-8.6 years). All 63 patients entered into the study had a known status at the close-out date. All patients with negative PET/CT findings were followed for a minimum of 2 years after the scan (except one patient who died 2 months after the scan because of complications of an esophageal primary tumor discovered on PET/CT).
Thirty-eight patients (60%) were deceased, with a median survival of 3.4 years (95% CI 2.5 to 5.0 years) (Figure  1). The estimated 2-year OS was 66.4% (95% CI 53.2% to 76.6%) and the estimated 5-year OS was 37.3% (95% CI 24.3% to 50.3%). Estimated overall survival (+/- 95% confidence interval) for all 63 patients. On univariate analysis, PET/CT status (negative, positive LR or positive systemic) was strongly associated with survival (log-rank test: p = 0.0003 for the entire model; p = 0.0001 for comparison of negative results and positive for systemic disease; p > 0.05 for other single comparisons) (Figure  2). Patients with systemic disease according to PET/CT had a 4.7-fold increase in the risk of death when compared with patients with negative PET/CT findings, while patients with only loco-regional recurrence had a 2-fold increase in the risk of death (Table  4). In contrast, CI status (negative, positive LR or positive systemic) did not significantly predict OS (log-rank test: p = 0.07 for the entire model; p > 0.05 for all single comparisons) (Figure  2). Moreover, patients with positive CI findings but negative PET/CT findings had similar estimated survival to patients in whom both tests were negative, whereas patients with negative CI findings but positive PET/CT results had comparable estimated survival to patients in whom both procedures were positive for recurrence (Figure  3). Overall survival by staging technique. (A) Kaplan-Meier estimate of overall survival (OS) stratified by pre-PET/CT (Conventional imaging) status. (B) Kaplan-Meier estimate of OS stratified by post-PET/CT status. CI = Conventional Imaging; LR = loco-regional recurrence only; Syst = systemic recurrence. Univariate predictors of overall survival *The information is available for only 38 patients.
CI = Confidence Interval; ER = Estrogen Receptor; HER-2 = Human Epidermal Growth Factor Receptor-2; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography; PR = Progesterone Receptor. Significant p values in bold. Kaplan-Meier estimate of overall survival (OS) stratified by combination of Conventional imaging (CI) and PET/CT results. Initial stage of the disease at the time of diagnosis, histological type of cancer and triple negative status (hormone receptor and HER-2 negativity) were also predictors of survival on the univariate analysis (Table  4). Time between initial diagnosis and the restaging PET/CT, initial histological grade, estrogen receptor status, progesterone receptor status and overexpression of Human Epidermal Growth Factor Receptor-2 (HER-2) were not significant predictors of survival. On multivariate analysis, positive PET/CT findings (for loco-regional and/or systemic recurrence) remained an independent predictor of OS when adjusting for age, histological subtype and initial stage (model 1, Table  5). In contrast, positive CI findings did not independently predict OS (model 2, Table  5). Multivariate predictors of overall survival CI = Confidence Interval; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold.
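The Kaplan-Meier product-limit estimator behind these survival curves can be sketched in plain Python; the example data below are synthetic, not the study's patient-level records:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of survival.
    times: follow-up time per patient; events: 1 = death, 0 = censored.
    Returns [(t, S(t))] at each distinct death time; subjects censored
    at t are conventionally still at risk for deaths at t."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

# Synthetic example: 6 patients, 0 meaning censored follow-up
km = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```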
Time between initial diagnosis and the restaging PET/CT, initial histological grade, estrogen receptor status, progesterone receptor status and overexpression of Human Epidermal Growth Factor Receptor-2 (HER-2) were not significant predictors of survival. On multivariate analysis, positive PET/CT findings (for loco-regional and/or systemic recurrence) remained an independent predictor of OS when adjusting for age, histological subtype and initial stage (model 1, Table  5). In contrast, positive CI findings did not independently predict OS (model 2, Table  5). Multivariate predictors of overall survival CI = Confidence Interval; HR = Hazard Ratio; PET/CT = Positron Emission Tomography/Computerized Tomography. Significant p values in bold. Discussion: For patients with possible recurrent breast cancer (BC), early detection and adequate localization of recurrent disease are essential for guiding optimal therapy and prognostication. Patients with isolated loco-regional recurrence are able to benefit from curative salvage therapy, whereas palliative treatment is generally indicated for patients with distant relapse. Several studies have shown the relevance of 18 F-FDG PET/CT in detecting distant metastasis in patients with clinical suspicion of recurrence [6,8,12,13,18-20], and in patients with documented loco-regional recurrence [5]. Our study confirmed that 18 F-FDG PET/CT is an accurate technique for the appropriate detection of relapse, when compared with CI alone. Of particular economic and clinical importance was the observation that 12 patients (19%) in this series who were suspected to have relapsed by conventional evaluation subsequently received no active treatment after a negative 18 F-FDG PET/CT evaluation and demonstrated an excellent prognosis. In contrast, of the 8 patients with negative CI findings, 1 patient was found to have locoregional relapse and 3 patients were found to have systemic recurrence on 18 F-FDG PET/CT. 
In the current study, 18 F-FDG PET/CT had an impact on therapeutic management in 57% of patients; in particular, the treatment intent was changed in 38% of patients. This result is consistent with other studies which showed the important impact of 18 F-FDG PET/CT on therapeutic management in patients with suspicion of recurrent BC [5,8,12,18,19]. Most of these previous studies reported that 18 F-FDG PET/CT was highly accurate for detecting recurrent disease in patients with negative or inconclusive CI findings [5,18,19]. In contrast, in our study, CI findings were consistent with relapse in the majority of patients (51/63) but 18 F-FDG PET/CT downstaged 12 of them, providing a better negative predictive value when compared to CI alone (Table  3). These findings suggested that 18 F-FDG PET/CT was not only effective for early detection of relapse in patients with negative CI findings, but also yielded a better characterization of CI findings in patients with a high suspicion of relapse. As BC was not a funded indication for 18 F-FDG PET/CT in Australia during the study period, referring clinicians were likely to use PET/CT in patients for whom there was clinical uncertainty with respect to the appropriate management. Although this patient selection could have introduced biases in the evaluation of the impact of PET/CT findings on patient management, it also showed the value of 18 F-FDG PET/CT in patients whose disease status could not be adequately determined using CI alone. The more accurate prognostic information derived from PET/CT results when compared with CI findings underpinned the value of PET/CT in the management of patients with recurrent BC. Neither 18F-FDG PET/CT nor CT is an optimal modality for detection of local recurrence when compared with mammography, ultrasound or MRI. However, detection of additional distant metastases in patients with documented loco-regional recurrence is essential in order to optimize management and stratify prognosis. 
Of note was the favourable survival of patients with isolated loco-regional recurrence according to PET/CT, when compared with patients with systemic relapse. In our study, CT performed with PET was not of diagnostic quality. However, all patients included in this study had diagnostic CT before undergoing FDG PET/CT. While Dirisamer et al. showed in a retrospective study that the association of FDG PET and contrast-enhanced (ce) CT could improve restaging of breast cancer when compared with ceCT or FDG PET alone [6], there is no evidence in the literature of the superiority of FDG PET/ceCT for restaging of breast cancer when compared with FDG PET associated with non-diagnostic, low-dose CT. In our study, the outcome data validate our approach in that FDG PET/CT results were more often correct than conventional imaging (including diagnostic CT) when discordant, and stratified prognosis whereas conventional imaging did not (Figure  2). Had we relied on the conventional imaging findings (including diagnostic CT) when they were discordant with FDG PET/CT, accuracy would not have been enhanced, and the prognostic value of FDG PET/CT incorporating a low-dose, non-contrast CT would not have been superior to that of conventional imaging. These results have implications for the reporting of FDG PET combined with ceCT, which is performed by some facilities as a routine procedure, and suggest that significant clinical weight should be placed on the PET findings even when discordant with the ceCT appearances. One of the limitations of our study was that the CI procedures were not standardized and were selected on the basis of clinical findings. However, this represented routine clinical practice and did not detract from the results. 
Although comparison of a masked reading of PET/CT with a masked reading of CI techniques might be appropriate if PET/CT were to be suggested as a replacement for CI, the main purpose of this study was to evaluate the incremental diagnostic and prognostic value of PET/CT in routine practice. The results of this retrospective study would, however, justify a randomized trial in which patients with clinical risk or suspicion of relapse would be stratified to have either CI or FDG PET/CT as the initial restaging procedure [21]. Finally, we did not compare PET/CT and CI findings with histopathological findings in most patients. For 50 patients (79%), the final disease status was determined clinically and/or with follow-up imaging. Nevertheless, since most of the patients were followed up for a long period of time, the survival analysis would be the best validation of diagnostic accuracy. Conclusion: Our findings support the value of 18 F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer. The prognostic stratification provided by this technique emphasizes the crucial role of 18 F-FDG PET/CT in optimizing treatment choices in this setting. Further multicentric studies are needed to confirm this role, in particular in patients with high suspicion of relapse. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: AC, SD, KM and RJH were involved in the concept and design of the study. AC, SD and ED were involved in data collection, history review and follow-up. KM was the imaging lead on the study and with RJH verified classification of scan results. GT was the study statistician and contributed to analysis of the primary data. SD, MM and BC were the clinical leads for assessing the impact criteria and verifying classifications of PET/CT impact. All authors read and approved the final manuscript.
Background: The incremental value of 18FDG PET/CT in patients with breast cancer (BC) compared to conventional imaging (CI) in clinical practice is unclear. The aim of this study was to evaluate the management impact and prognostic value of 18 F-FDG PET/CT in this setting. Methods: Sixty-three patients who were referred to our institution for suspicion of BC relapse were retrospectively enrolled. All patients had been evaluated with CI and underwent PET/CT. At a median follow-up of 61 months, serial clinical, imaging and pathologic results were obtained to validate diagnostic findings. Overall Survival (OS) was estimated using Kaplan-Meier methods and analyzed using Cox proportional hazards regression models. Results: Forty-two patients had a confirmed relapse with 37 (88%) positive on CI and 40 (95%) positive on PET/CT. When compared with CI, PET/CT had a higher negative predictive value (86% versus 54%) and positive predictive value (95% versus 70%). The management impact of PET/CT was high (change of treatment modality or intent) in 30 patients (48%) and medium (change in radiation treatment volume or dose fractionation) in 6 patients (9%). Thirty-nine patients (62%) died during follow-up. The PET/CT result was a highly significant predictor of OS (Hazard Ratio [95% Confidence Interval] = 4.7 [2.0-10.9] for PET positive versus PET negative for a systemic recurrence; p = 0.0003). In a Cox multivariate analysis including other prognostic factors, PET/CT findings predicted survival (p = 0.005). In contrast, restaging by CI was not a significant predictor of survival. Conclusions: Our study supports the value of 18 F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer.
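Predictive values like those compared in the abstract are simple functions of a 2×2 confusion table. A sketch with hypothetical per-cell counts (the tp/fp/tn/fn values below are invented for illustration, not extracted from the study):

```python
# Positive/negative predictive value from a 2x2 confusion table.
# The counts used in the example call are hypothetical.

def predictive_values(tp, fp, tn, fn):
    ppv = tp / (tp + fp)  # P(true relapse | test positive)
    npv = tn / (tn + fn)  # P(no relapse  | test negative)
    return ppv, npv

ppv, npv = predictive_values(tp=40, fp=2, tn=19, fn=2)
print(f"PPV={ppv:.0%}, NPV={npv:.0%}")  # PPV=95%, NPV=90%
```

Unlike sensitivity and specificity, PPV and NPV depend on how common relapse is in the tested cohort, which is why they shift between patient populations.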
Background: Breast cancer (BC) is the most commonly diagnosed cancer in women, and is the leading cause of death by cancer for women in the western world. Depending on the initial extent of the disease, approximately 30% of patients diagnosed with BC are at risk of developing loco-regional recurrence or secondary tumor dissemination to distant organs [1]. Moreover, the survival of patients who develop an isolated loco-regional recurrence differs from patients who have distant relapse. As a consequence, determination of both the locations and extent of the recurrent disease is essential to guide therapeutic decisions and estimate prognosis. Traditionally, routine evaluation of suspected recurrent BC involves physical examination and a multi-modality Conventional Imaging (CI) approach which may include mammography, CT, MRI, and bone scintigraphy. However, this CI approach is often time-intensive and potential false-negative findings may delay appropriate therapy. Positron Emission Tomography/Computed Tomography (PET/CT) with 18 F-Fluorodeoxyglucose (18 F-FDG) is also often used in this indication, given that 18 F-FDG has affinity for both primary and secondary breast tumors, depending on size and aggressiveness [2-4]. Several authors have suggested that 18 F-FDG PET and PET/CT are more sensitive than CI for detection of recurrent BC [5-15] and can have a significant impact on the therapeutic management [5,7-9,12,16]. However, information concerning the utility of 18 F-FDG PET/CT for long-term prognostic stratification, when compared with CI, is limited. Thus, the objectives of our study were to: [1] assess the incremental diagnostic performance and the impact on therapeutic management of 18 F-FDG PET/CT in a group of patients with a history of BC who had already been restaged by CI for identification of suspected disease relapse; [2] compare the long-term prognostic stratification of CI alone and 18 F-FDG PET/CT. 
Conclusion: Our findings support the value of 18 F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer. The prognostic stratification provided by this technique emphasizes the crucial role of 18 F-FDG PET/CT in optimizing treatment choices in this setting. Further multicentric studies are needed to confirm this role, in particular in patients with high suspicion of relapse.
Background: The incremental value of 18FDG PET/CT in patients with breast cancer (BC) compared to conventional imaging (CI) in clinical practice is unclear. The aim of this study was to evaluate the management impact and prognostic value of 18 F-FDG PET/CT in this setting. Methods: Sixty-three patients who were referred to our institution for suspicion of BC relapse were retrospectively enrolled. All patients had been evaluated with CI and underwent PET/CT. At a median follow-up of 61 months, serial clinical, imaging and pathologic results were obtained to validate diagnostic findings. Overall Survival (OS) was estimated using Kaplan-Meier methods and analyzed using Cox proportional hazards regression models. Results: Forty-two patients had a confirmed relapse with 37 (88%) positive on CI and 40 (95%) positive on PET/CT. When compared with CI, PET/CT had a higher negative predictive value (86% versus 54%) and positive predictive value (95% versus 70%). The management impact of PET/CT was high (change of treatment modality or intent) in 30 patients (48%) and medium (change in radiation treatment volume or dose fractionation) in 6 patients (9%). Thirty-nine patients (62%) died during follow-up. The PET/CT result was a highly significant predictor of OS (Hazard Ratio [95% Confidence Interval] = 4.7 [2.0-10.9] for PET positive versus PET negative for a systemic recurrence; p = 0.0003). In a Cox multivariate analysis including other prognostic factors, PET/CT findings predicted survival (p = 0.005). In contrast, restaging by CI was not a significant predictor of survival. Conclusions: Our study supports the value of 18 F-FDG PET/CT in providing incremental information that influences patient management and refines prognostic stratification in the setting of suspected recurrent breast cancer.
10,617
385
[ 394, 349, 273, 341, 181, 46, 222, 496, 236, 765, 10, 99 ]
16
[ "ct", "pet", "pet ct", "patients", "ci", "recurrence", "disease", "negative", "positive", "fdg" ]
[ "ct 18 fluorodeoxyglucose", "fdg pet ct", "recurrent bc 18f", "prognosis conventional imaging", "breast cancer prognostic" ]
[CONTENT] Breast cancer | 18 F-FDG PET/CT | Restaging | Prognosis [SUMMARY]
[CONTENT] Breast cancer | 18 F-FDG PET/CT | Restaging | Prognosis [SUMMARY]
[CONTENT] Breast cancer | 18 F-FDG PET/CT | Restaging | Prognosis [SUMMARY]
[CONTENT] Breast cancer | 18 F-FDG PET/CT | Restaging | Prognosis [SUMMARY]
[CONTENT] Breast cancer | 18 F-FDG PET/CT | Restaging | Prognosis [SUMMARY]
[CONTENT] Breast cancer | 18 F-FDG PET/CT | Restaging | Prognosis [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Breast Neoplasms | Female | Fluorodeoxyglucose F18 | Humans | Male | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Prognosis | Radiopharmaceuticals | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Breast Neoplasms | Female | Fluorodeoxyglucose F18 | Humans | Male | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Prognosis | Radiopharmaceuticals | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Breast Neoplasms | Female | Fluorodeoxyglucose F18 | Humans | Male | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Prognosis | Radiopharmaceuticals | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Breast Neoplasms | Female | Fluorodeoxyglucose F18 | Humans | Male | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Prognosis | Radiopharmaceuticals | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Breast Neoplasms | Female | Fluorodeoxyglucose F18 | Humans | Male | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Prognosis | Radiopharmaceuticals | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Breast Neoplasms | Female | Fluorodeoxyglucose F18 | Humans | Male | Middle Aged | Multimodal Imaging | Neoplasm Recurrence, Local | Positron-Emission Tomography | Prognosis | Radiopharmaceuticals | Tomography, X-Ray Computed [SUMMARY]
[CONTENT] ct 18 fluorodeoxyglucose | fdg pet ct | recurrent bc 18f | prognosis conventional imaging | breast cancer prognostic [SUMMARY]
[CONTENT] ct 18 fluorodeoxyglucose | fdg pet ct | recurrent bc 18f | prognosis conventional imaging | breast cancer prognostic [SUMMARY]
[CONTENT] ct 18 fluorodeoxyglucose | fdg pet ct | recurrent bc 18f | prognosis conventional imaging | breast cancer prognostic [SUMMARY]
[CONTENT] ct 18 fluorodeoxyglucose | fdg pet ct | recurrent bc 18f | prognosis conventional imaging | breast cancer prognostic [SUMMARY]
[CONTENT] ct 18 fluorodeoxyglucose | fdg pet ct | recurrent bc 18f | prognosis conventional imaging | breast cancer prognostic [SUMMARY]
[CONTENT] ct 18 fluorodeoxyglucose | fdg pet ct | recurrent bc 18f | prognosis conventional imaging | breast cancer prognostic [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | recurrence | disease | negative | positive | fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | recurrence | disease | negative | positive | fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | recurrence | disease | negative | positive | fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | recurrence | disease | negative | positive | fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | recurrence | disease | negative | positive | fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | recurrence | disease | negative | positive | fdg [SUMMARY]
[CONTENT] 18 | bc | fdg | 18 fdg | ci | therapeutic | recurrent | fdg pet | 18 fdg pet | ct [SUMMARY]
[CONTENT] ct | pet | pet ct | clinical | patients | considered | images | ci | months | disease [SUMMARY]
[CONTENT] patients | ct | pet ct | pet | ci | positive | survival | recurrence | systemic | regional [SUMMARY]
[CONTENT] setting | role | prognostic stratification | stratification | prognostic | 18 fdg pet ct | fdg pet ct | fdg pet | 18 fdg pet | 18 fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | fdg | recurrence | 18 | disease | 18 fdg [SUMMARY]
[CONTENT] ct | pet | pet ct | patients | ci | fdg | recurrence | 18 | disease | 18 fdg [SUMMARY]
[CONTENT] 18FDG | BC | CI ||| 18 | F-FDG PET/CT [SUMMARY]
[CONTENT] Sixty-three | BC ||| CI | PET/CT ||| 61 months ||| Kaplan Meier | Cox [SUMMARY]
[CONTENT] Forty-two | 37 | 88% | CI | 40 | 95% | PET/CT ||| CI | PET/CT | 86% | 54% | 95% | 70% ||| PET/CT | 30 | 48% | 6 | 9% ||| Thirty-nine | 62% ||| PET | Hazard Ratio | 95% ||| 4.7 ||| 2.0-10.9 | PET | PET | 0.0003 ||| Cox | PET/CT | 0.005 ||| CI [SUMMARY]
[CONTENT] 18 | F-FDG PET/CT [SUMMARY]
[CONTENT] 18FDG | BC | CI ||| 18 | F-FDG PET/CT ||| Sixty-three | BC ||| CI | PET/CT ||| 61 months ||| Kaplan Meier | Cox ||| Forty-two | 37 | 88% | CI | 40 | 95% | PET/CT ||| CI | PET/CT | 86% | 54% | 95% | 70% ||| PET/CT | 30 | 48% | 6 | 9% ||| Thirty-nine | 62% ||| PET | Hazard Ratio | 95% ||| 4.7 ||| 2.0-10.9 | PET | PET | 0.0003 ||| Cox | PET/CT | 0.005 ||| CI ||| 18 | F-FDG PET/CT [SUMMARY]
[CONTENT] 18FDG | BC | CI ||| 18 | F-FDG PET/CT ||| Sixty-three | BC ||| CI | PET/CT ||| 61 months ||| Kaplan Meier | Cox ||| Forty-two | 37 | 88% | CI | 40 | 95% | PET/CT ||| CI | PET/CT | 86% | 54% | 95% | 70% ||| PET/CT | 30 | 48% | 6 | 9% ||| Thirty-nine | 62% ||| PET | Hazard Ratio | 95% ||| 4.7 ||| 2.0-10.9 | PET | PET | 0.0003 ||| Cox | PET/CT | 0.005 ||| CI ||| 18 | F-FDG PET/CT [SUMMARY]
Association between suicide risk and traumatic brain injury in adults: a population based cohort study.
32015186
Traumatic brain injury (TBI) is a major cause of death and disability worldwide, and its treatment is potentially a heavy economic burden. Suicide is another global public health problem and the second leading cause of death in young adults. Patients with TBI are known to have higher than normal rates of non-fatal deliberate self-harm, suicide and all-cause mortality. The aim of this study was to explore the association between TBI and suicide risk in a Chinese cohort.
BACKGROUND
This study analysed data contained in the Taiwan National Health Insurance Research Database for 17 504 subjects with TBI and for 70 016 subjects without TBI matched for age and gender at a ratio of 1 to 4. Cox proportional hazard regression analysis was used to estimate subsequent suicide attempts in the TBI group. Probability of attempted suicide was determined by Kaplan-Meier method.
METHOD
The overall risk of suicide attempts was 2.23 times higher in the TBI group compared with the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively) after adjustment for covariates. Regardless of gender, age or comorbidity, the TBI group tended to have more suicide attempts, and the risk of attempted suicide increased with the severity of TBI. Depression and alcohol attributed disease also increased the risk of attempted suicide in the TBI group.
RESULTS
Suicide is preventable if risk factors are recognised. Hence, TBI patients require special attention to minimise their risk of attempted suicide.
CONCLUSION
[ "Adult", "Brain Injuries, Traumatic", "Cohort Studies", "Comorbidity", "Cost of Illness", "Female", "Humans", "Incidence", "Male", "Mortality", "Risk Assessment", "Risk Factors", "Severity of Illness Index", "Suicidal Ideation", "Suicide", "Taiwan", "Suicide Prevention" ]
7788485
Introduction
Traumatic brain injury (TBI), which is defined as damage to the skull or the brain and its frameworks through an external force,1 is a major cause of death and disability worldwide, and its treatment is potentially a heavy economic burden. The number of TBIs that occur annually worldwide is an estimated 1.5 to 2 million; the global prevalence is 12% to 16.7% in males and 8.5% in females, respectively. In Taiwan, up to 25% of the approximately 52 000 annual cases of TBI are fatal.2 The WHO projects that TBI will be the third leading cause of death or disability by 2020.3 Traumatic brain injury is most common in young adults and the elderly. Mild TBI is far more common than severe TBI. The severity of various consequences of TBI is known to increase proportionally to the severity of the TBI; typical consequences include physical disability, cognitive impairments and mood disorders. Suicide is a global public health problem. Approximately 800 000 people commit suicide every year, and suicide is the second leading cause of death in young adults. In Taiwan, the number of deaths by suicide in 2016 was 3765, and the standardised mortality rate was 12.3 persons per population of 100 000.4 According to statistics released by the Taiwan Ministry of Health and Welfare, suicide is the 12th most common cause of death. Suicide occurs throughout the life span and across all socioeconomic strata. Moreover, suicide has a potentially immense socioeconomic burden on individuals, families, communities and nations.5 The causes of suicide are multifactorial. Individuals with TBI have higher rates of non-fatal deliberate self-harm, suicide and all-cause mortality compared with the general population.6 7 Although recent studies suggest that TBI might be an important risk factor for suicide, studies of the association between TBI severity and suicide risk have reported conflicting results between countries and races. 
Simpson8 and Fonda et al 9 reported that TBI severity contributed no meaningful difference to the risk of suicide attempts in studies of an Australian population and of US veterans who served in either Iraq or Afghanistan, respectively; whereas Madsen et al 7 showed the risk of suicide was higher for TBI individuals with evidence of structural brain injury in Denmark. However, most studies investigating the relation of suicide and TBI have methodological shortcomings, such as small clinical samples (especially very small numbers of TBI cases with suicide) and reliance on self-reported data. Besides, little population-based data for this association are available. In Simpson’s survey, only a small number of patients (n=172) in Australia were included. In Fonda’s analysis, only US veterans were included. The majority of military veterans were young (mean age=28.7 years) males (84%). The definition of TBI severity was based on self-reported duration of consciousness loss, alteration of consciousness and post-traumatic amnesia, not on objective examinations. Besides, the moderate (6%) and severe (6%) TBI subsamples were too small, which may have biased the estimates. In Denmark, Madsen et al conducted a retrospective cohort study (1977 to 2014) and showed that a statistically significant increase in suicide was present in TBI individuals. However, their results were potentially biased because they did not include all mild TBI patients treated prior to 1995. Also, there were no available records on what treatment TBI patients received, making it difficult to determine the TBI severity. Additionally, this study was performed in a population of the Western world, not one of Eastern Asia. Generalising data retrieved from a study of a relatively homogeneous and wealthy Scandinavian population is not persuasive. Finally, large-scale population-based studies of the relationship between TBI severity and suicide risk in Asian cohorts are scarce. 
Therefore, this study used the Taiwan National Health Insurance Research Database (NHIRD) to clarify the association between TBI and subsequent occurrence of suicide, as well as the effect of TBI severity, in Chinese cohorts. In our analysis, TBI severity is based on objective codes or examinations, and the association is estimated after adjusting for all important suicide risk factors.
Methods
Database: Data retrieved from the NHIRD maintained by the Taiwan healthcare system was used in this population-based cohort study. The encrypted NHIRD contains medical data of more than 99% of the 23.74 million Taiwan residents. The Taiwan national health insurance programme allowed researchers to access this administrative database for patients. The NHIRD contains enrolment information for all patients and comprehensive data for their use of healthcare resources. In accordance with the Personal Electronic Data Protection Law of Taiwan, any information that can be used to identify beneficiaries and hospitals must be removed from the NHIRD. This cohort study analysed 1996 to 2010 data from the Longitudinal Health Insurance Database 2010, which is an NHIRD subset that collects data of 1 million randomly-sampled beneficiaries from the primary NHIRD. This study identified and classified diseases according to the diagnostic codes in the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). 
Study population: The study included 17 504 patients who were older than 20 years and had been diagnosed with TBI (ICD-9-CM code 800, 803 to 804 and 850 to 854; operation codes 01.23, 01.24, 01.25, 01.31, 01.39 and 02.01) during 1996 to 2010.10 The analysis was limited to patients who had received ≥1 diagnoses during inpatient care or ≥2 TBI diagnoses during ambulatory visits in order to ensure data accuracy. The date of the first clinical visit for TBI was defined as the index date. The TBI cohort excluded patients who had been diagnosed with any mental disorder (ICD-9-CM code 290 to 319) before the index date. The TBI cases were further classified as severe, moderate or mild. Severe TBI was defined as a TBI that required surgery in the course of inpatient treatment; moderate TBI was defined as a TBI that required hospitalisation but not surgery; mild TBI was defined as a TBI that did not require inpatient treatment.11 The ICD-9-CM, E-Codes for suicide attempts (950 to 959 and 980 to 989)12 were assigned by psychiatrists. Patients were excluded if they had an ICD-9 code for suicide attempt before the index date. Additionally, 70 016 patients without TBI were identified to enhance the power of statistical tests in stratified analysis. The non-TBI patients were randomly selected from the registry of beneficiaries. Patients were eligible for inclusion in the non-TBI group if they had not been diagnosed with TBI during 1996 to 2010 and had no history of suicide attempts before enrolment. The non-TBI group and the TBI group were matched at a 4:1 ratio for gender, age and index year (year of TBI diagnosis). Figure 1 shows the workflow of the study procedure. Flow diagram summarising the process of enrolment. LHID, Longitudinal Health Insurance Database; TBI, traumatic brain injury. 
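The 4:1 control selection described above (each TBI case paired with four non-TBI controls on gender, age and index year) can be sketched as an exact-match sampling step. The record layout and field names below are assumptions for illustration, not the NHIRD's actual schema:

```python
# 4:1 exact matching of controls to cases on (gender, age, index year).
# Field names ("sex", "age", "index_year") and all records are hypothetical.
import random
from collections import defaultdict

def match_controls(cases, pool, ratio=4, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_key = defaultdict(list)
    for ctl in pool:
        by_key[(ctl["sex"], ctl["age"], ctl["index_year"])].append(ctl)
    matched = {}
    for case in cases:
        key = (case["sex"], case["age"], case["index_year"])
        candidates = by_key[key]
        picks = rng.sample(candidates, ratio)
        for p in picks:            # sample without replacement across cases
            candidates.remove(p)
        matched[case["id"]] = picks
    return matched

cases = [{"id": 1, "sex": "M", "age": 45, "index_year": 2003}]
pool = [{"id": 100 + i, "sex": "M", "age": 45, "index_year": 2003}
        for i in range(10)]
m = match_controls(cases, pool)
assert len(m[1]) == 4
```

A production matcher would also need to handle strata with fewer than four eligible controls (for example by relaxing the age match to a band), which this sketch deliberately omits.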
Outcome and comorbidities: The outcome was any occurrence of suicide attempts during the follow-up period. Both cohorts were followed until 31 December 2010 or until the occurrence of suicide attempts. 
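Following each subject to the earliest of a suicide attempt or the close-out date yields that subject's person-time, and incidence per 1000 person-years follows directly. A sketch of the arithmetic with invented event counts and person-time (placeholders, not the study's data):

```python
# Incidence per 1000 person-years and the crude rate ratio.
# Event counts and person-time below are invented placeholders.

def rate_per_1000_py(events, person_years):
    return 1000.0 * events / person_years

tbi_rate = rate_per_1000_py(events=98, person_years=100_000)
ctl_rate = rate_per_1000_py(events=29, person_years=100_000)
print(round(tbi_rate, 2), round(ctl_rate, 2))  # 0.98 0.29
print(round(tbi_rate / ctl_rate, 2))           # 3.38
```

Note that a crude rate ratio like this is not the same thing as the covariate-adjusted hazard ratio a Cox model produces; adjustment for age, gender and comorbidity typically moves the estimate.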
A review of the psychiatry literature reveals that very few modifiable risk factors have been clearly defined.13 Therefore, most studies have focused on largely unmodifiable risk factors such as age and gender and on risk factors that are common physical comorbidities. Hypertension (ICD-9-CM codes 401 to 405), hyperlipidaemia (ICD-9-CM code 272), congestive heart failure (ICD-9-CM code 428), diabetes mellitus (ICD-9-CM code 250), coronary artery disease (ICD-9-CM code 410 to 414), liver cirrhosis (ICD-9-CM code 571), depression (ICD-9-CM code 296.2 to 296.3, 300.4 and 311), tobacco use disorder (ICD-9-CM code 350.1) and alcohol attributed disease (ICD-9-CM codes 291.0 to 9, 303, 305.0, 357.5, 425.5, 535.3, 571.0 to 3, 980.0 and V11.3)13 were potential confounders in this analysis because they are associated with unhealthy lifestyle behaviours linked to both TBI and mental illness.11 For each participant, urbanisation index and income-related insurance payment amounts were used as proxy measures of socioeconomic status at follow-up. Urbanisation index was categorised into three groups: high (metropolises), medium (small cities and suburban areas) and low (rural areas).14 Insurance premiums were categorised into three groups according to the monthly insurance payment by the enrollee: NT$0 to 20 000; NT$20 000 to 40 000 and more than NT$40 000.15 
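Flagging comorbidities from ICD-9-CM codes amounts to range checks against lists like those above. A minimal sketch covering only a few of the listed ranges, with a deliberately simplified rule that compares just the numeric major code (real ICD-9-CM handling also needs V- and E-code prefixes and decimal subranges):

```python
# Flag comorbidities by ICD-9-CM code range (small subset of the study's lists).
# Simplification: only the integer major code before the decimal is compared.

COMORBIDITY_RANGES = {
    "hypertension":            [(401, 405)],
    "hyperlipidaemia":         [(272, 272)],
    "diabetes":                [(250, 250)],
    "coronary_artery_disease": [(410, 414)],
}

def comorbidities(icd9_code):
    """Return the comorbidity labels whose range covers icd9_code."""
    major = int(float(icd9_code))  # '250.01' -> 250
    return [name for name, ranges in COMORBIDITY_RANGES.items()
            if any(lo <= major <= hi for lo, hi in ranges)]

print(comorbidities("250.01"))  # ['diabetes']
print(comorbidities("403.1"))   # ['hypertension']
```

In claims-data work this kind of lookup is usually applied per diagnosis field and then aggregated per patient into binary comorbidity indicators for the regression model.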
Statistical analyses We used the χ2 test to compare clinical characteristics and distributions of categorical demographics between the TBI and non-TBI cohorts. The Wilcoxon rank-sum test and Student's t-test were used to compare mean age and follow-up time (years) between the two cohorts, as appropriate. Kaplan–Meier analysis was used to estimate the cumulative incidence of suicide attempts, and the differences between the curves were compared by the two-tailed log-rank test.
For TBI patients, survival was calculated until an ambulatory visit for suicide attempt, hospitalisation for suicide attempt or the end of the study period (31 December 2010), whichever came first. Incidence rates of suicide attempts, expressed per 1000 person-years, were compared between the two cohorts. Cox proportional hazards regression models were used to estimate the HR and 95% CI for suicide attempts in individuals with TBI, provided the proportional hazards assumption was satisfied. Multivariable Cox models were adjusted for gender, age, income, urbanisation and relevant comorbidities. A two-tailed p value less than 0.05 was considered statistically significant. Statistical Analysis Software (SAS), V.9.4, was used for all data analyses (SAS Institute, Cary, North Carolina, USA).
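As a concrete illustration of the incidence measure used throughout this study, a rate per 1000 person-years is simply the event count divided by the accumulated follow-up time. The event counts below match the article (154 and 314 attempts); the person-time denominators are back-calculated assumptions chosen only so the sketch reproduces the reported 0.98 and 0.29 rates, not figures taken from the study tables.

```python
def incidence_per_1000_py(events, person_years):
    """Incidence rate expressed per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Assumed person-time (illustrative): ~157 000 person-years in the TBI
# cohort and ~1 083 000 in the non-TBI cohort.
rate_tbi = incidence_per_1000_py(154, 157_000)       # ~0.98 per 1000 PY
rate_non_tbi = incidence_per_1000_py(314, 1_083_000)  # ~0.29 per 1000 PY
crude_ratio = rate_tbi / rate_non_tbi                 # crude, unadjusted
```

Note that this crude rate ratio is larger than the adjusted HR of 2.23 reported later: the Cox model shrinks the association by adjusting for gender, age, income, urbanisation and comorbidities.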
Results
Baseline characteristics of the TBI and non-TBI cohorts Table 1 presents the baseline demographic characteristics and comorbidities in the two cohorts. In the TBI cohort, 43.35% of patients were female. Compared with the non-TBI cohort, the TBI cohort had significantly higher percentages of patients with hypertension (39.45 vs 24.22; p<0.001), diabetes mellitus (24.24 vs 14.49; p<0.001), hyperlipidaemia (33.13 vs 22.97; p<0.001), coronary artery disease (7.25 vs 2.84; p<0.001), congestive heart failure (7.60 vs 3.39; p<0.001), liver cirrhosis (34.52 vs 23.79; p<0.001), tobacco use disorder (1.01 vs 0.62; p<0.001), depression (10.1 vs 7.41; p<0.001) and alcohol attributed disease (7.86 vs 2.40; p<0.001). During a median observation time of 4.2 years (IQR=1.5 to 7.0), 0.88% (154) of the TBI patients had a suicide attempt. The incidence of suicide attempt in the TBI cohort was significantly (p<0.001) higher than that in the non-TBI cohort, in which 314 of the 70 016 age-matched and gender-matched controls (0.45%) attempted suicide during a median observation time of 8.0 years (IQR=4.9 to 11.1). The median duration of follow-up for suicide attempts was significantly shorter in the TBI group (4.2 years) than in the non-TBI group (8.0 years). Baseline characteristics of patients with and without traumatic brain injury
Incidence and risk of suicide attempts In table 2, the incidence and HR for suicide attempts are stratified by gender, age and comorbidity. During the follow-up period, the overall risk of suicide attempt was 2.23 times higher in the TBI group than in the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively) after adjustment for gender, age and related comorbidities (diabetes mellitus, hyperlipidaemia, hypertension, coronary artery disease, congestive heart failure, liver cirrhosis, depression, alcohol attributed disease and tobacco use disorder). Incidence and HR of suicide attempt by demographic characteristics, comorbidity and follow-up duration among patients with or without TBI *Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease). †p<0.001; ‡p=0.002. §Patients with any examined comorbidities, including hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease, were classified as the comorbidity group. ¶p=0.028; **p=0.035. Rate, incidence rate per 1000 person-years; TBI, traumatic brain injury.
Gender-specific analyses showed that, in the TBI group, the incidence of suicide attempts was higher in women than in men (1.05 vs 0.92 per 1000 person-years, respectively). Additionally, in comparison with the non-TBI group, the TBI group had a significantly higher risk of suicide attempts in both genders (adjusted HR=2.49, 95% CI 1.82 to 3.41 for women; adjusted HR=2.04, 95% CI 1.53 to 2.72 for men). In age-specific risk comparisons of the two cohorts, the TBI cohort consistently showed a significantly higher risk of attempted suicide in all age groups. Regardless of comorbidities, the risk of suicide attempt was higher in the TBI group than in the non-TBI group. Notably, the risk of attempted suicide was even higher in TBI patients who had comorbidities. Figure 2 compares the Kaplan–Meier curves for the cumulative incidence of suicide attempts between the TBI and non-TBI groups over the 15 year follow-up. The curves showed a significantly higher cumulative incidence of suicide attempts in the TBI group than in the non-TBI group (log-rank test p<0.001). Cumulative incidence of suicide attempts among patients with traumatic brain injury and the control cohort. TBI, traumatic brain injury.
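The cumulative incidence curves in figure 2 are one minus the Kaplan–Meier survival estimate. A minimal pure-Python sketch of that estimator, run here on toy data rather than the study cohort, shows how censored follow-up (no attempt by end of 2010) enters the calculation:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    `times` are follow-up durations; `events` are 1 for an observed
    event (here, a suicide attempt) and 0 for censoring.
    Returns a list of (time, survival_probability) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        n = at_risk
        d = 0
        # Handle ties: everyone with the same follow-up time leaves together.
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            at_risk -= 1
            i += 1
        if d:  # censoring alone shrinks the risk set but does not drop S(t)
            survival *= 1.0 - d / n
            curve.append((t, survival))
    return curve
```

Cumulative incidence at time t is then 1 − S(t), and the log-rank test reported for figure 2 compares two such curves across their shared event times.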
Risk factors for suicide attempts in TBI patients The Cox regression analysis revealed two risk factors for suicide attempts in the TBI group: depression (adjusted HR=5.73, 95% CI 4.14 to 7.93) and alcohol attributed disease (adjusted HR=3.29, 95% CI 2.30 to 4.72). However, the risk of attempted suicide in this group decreased as age increased (table 3). Cox regression model: significant predictors of suicide attempt after traumatic brain injury The adjusted HR and 95% CI were estimated by a stepwise Cox proportional hazards regression method. *Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease). Risk of suicide attempt and TBI severity Table 4 shows that all patients with TBI had a higher than normal risk of suicide attempts. Additionally, the risk of attempted suicide increased as the severity of TBI increased. Incidence and HRs for suicide attempt stratified by the severity of TBI Rate, incidence rate per 1000 person-years.
*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease). †p<0.001; ‡p=0.044. TBI, traumatic brain injury.
Conclusion
The worldwide consensus is that suicide is a preventable cause of death. The findings of our study suggest that depression, alcohol attributed disease and high severity of TBI may increase the risk of attempted suicide in patients with TBI; these factors may be important mechanisms through which TBI influences suicide attempts. Suicide attempts are strong predictors of death by suicide.33 Suicide prevention requires a cooperative effort by society, the family and the individual. The results of this study suggest that intensive interventions to identify individuals at high risk of suicidal behaviour could have a large and positive public health effect. Future research is therefore warranted to identify other mechanisms underlying these associations, including possible biological interactions between TBI and other predictors such as severity, to elucidate whether prompt intervention for TBI cases could reduce this risk. Patients of Chinese ancestry diagnosed with traumatic brain injury (TBI) have a high risk of subsequent suicide attempts. The risk of suicide attempts increases with the severity of TBI. The increased risk of suicide is smaller in TBI patients aged above 50 years; thus, older age may protect the TBI cohort from suicide attempts. Does the TBI severity correlate with suicide attempts in Chinese patients? Would increased exposure to traumatic brain injury earlier in life increase the risk of suicide attempts? Do older TBI patients have higher risks of suicide attempts? Both traumatic brain injury and suicide are worldwide health problems and burdens. Patients with TBI are known to have higher than normal mortality.
[ "Database", "Study population", "Outcome and comorbidities", "Statistical analyses", "Baseline characteristics of the TBI and non-TBI cohorts", "Incidence and risk of suicide attempts", "Risks factors for suicide attempts in TBI patients", "Risk of suicide attempt and TBI severity" ]
[ "Data retrieved from the NHIRD maintained by the Taiwan healthcare system was used in this population-based cohort study. The encrypted NHIRD contains medical data of more than 99% of the 23.74 million Taiwan residents. The Taiwan national health insurance programme allowed researchers to access this administrative database for patients. The NHIRD contains enrolment information for all patients and comprehensive data for their use of healthcare resources. In accordance with the Personal Electronic Data Protection Law of Taiwan, any information that can be used to identify beneficiaries and hospitals must be removed from the NHIRD. This cohort study analysed 1996 to 2010 data from the Longitudinal Health Insurance Database 2010, which is an NHIRD subset that collects data of 1 million randomly-sampled beneficiaries from the primary NHIRD. This study identified and classified diseases according to the diagnostic codes in the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM).", "The study included 17 504 patients who were older than 20 years and had been diagnosed with TBI (ICD-9-CM code 800, 803 to 804 and 850 to 854; operation codes 01.23, 01.24, 01.25, 01.31, 01.39 and 02.01) during 1996 to 2010.10 The analysis was limited to patients who had received ≥1 diagnoses during inpatient care or ≥2 TBI diagnoses during ambulatory visits in order to ensure data accuracy. The date of the first clinical visit for TBI was defined as the index date. The TBI cohort excluded patients who had been diagnosed with any mental disorder (ICD-9-CM code 290 to 319) before the index date. The TBI cases were further classified as severe, moderate or mild. 
Severe TBI was defined as a TBI that required surgery in the course of inpatient treatment; moderate TBI was defined as a TBI that required hospitalisation but not surgery; mild TBI was defined as a TBI that did not require inpatient treatment.11 The ICD-9-CM, E-Codes for suicide attempts (950 to 959 and 980 to 989)12 were assigned by psychiatrists. Patients were excluded if they had a ICD-9 code for suicide attempt before the index date. Additionally, 70 016 patients without TBI were identified to enhance the power of statistical tests in stratified analysis. The non-TBI patients were randomly selected from the registry of beneficiaries. Patients were eligible for inclusion in the non-TBI group if they had not been diagnosed with TBI during 1996 to 2010 and had no history of suicide attempts before enrolment. The non-TBI group and the TBI group were matched at a 4:1 ratio for gender, age and index year (year of TBI diagnosis). Figure 1 shows the workflow of the study procedure.\nFlow diagram summarising the process of enrolment. LHID, Longitudinal Health Insurance Database; TBI, traumaticbrain injury.", "The outcome was any occurrence of suicide attempts during the follow-up period. 
Both cohorts were followed until 31 December 2010 or until the occurrence of suicide attempts.\nA review of the psychiatry literature reveals that very few modifiable risk factors have been clearly defined.13 Therefore, most studies have focused on largely unmodifiable risk factors such as age and gender and on risk factors that are common physical comorbidities, such as hypertension (ICD-9-CM codes 401 to 405), hyperlipidaemia (ICD-9-CM code 272), congestive heart failure (ICD-9-CM code 428), diabetes mellitus (ICD-9-CM code 250), coronary artery disease (ICD-9-CM code 410 to 414), liver cirrhosis (ICD-9-CM code 571), depression (ICD-9-CM code 296.2 to 296.3, 300.4 and 311), tobacco use disorder (ICD-9-CM code 350.1) and alcohol attributed disease (ICD-9-CM codes 291.0 to 9, 303, 305.0, 357.5, 425.5, 535.3, 571.0 to 3, 980.0 and V11.3)13 were potential confounders in this analysis because they are unhealthy lifestyle behaviours associated with both TBI and mental illness.11\n\nFor each participant, urbanisation index and income-related insurance payment amounts were used as proxy measures of socioeconomic status at follow-up. Urbanisation index was categorised into three groups: high (metropolises), medium (small cities and suburban areas) and low (rural areas).14 Insurance premiums were categorised into three groups according to the monthly insurance payment by the enrollee: NT$0 to 20 000; NT$20 000 to 40 000 and more than NT$40 000.15\n", "We used X2 test to compare clinical characteristics and distributions of categorical demographics between the TBI and non-TBI cohorts. The Wilcoxon rank-sum test and Student’s t-test were used to compare mean age and follow-up time (y) between the two cohorts, as appropriate. The Kaplan–Meier analysis was used to estimate cumulative incidence of suicide attempts, and the differences between the curves were compared by two-tailed log-rank test. 
For TBI patients, survival was calculated until an ambulatory visit for suicide attempts, hospitalisation or the end of the study period (31 December 2010), whichever came first. Incidence rates of suicide attempts estimated in 1000 person-years were compared between the two cohorts. Cox proportional hazard regression models were used to investigate the HR and 95% CI for suicide attempts for individuals with TBI if the proportional hazards assumption was satisfied. Multivariable Cox models were adjusted for gender, age, income, urbanisation and relevant comorbidities. A two-tailed p value less than 0.05 was considered statistically significant. Statistical Analysis Software, V.9.4, was used for all data analyses (SAS Institute, Cary, North Carolina, USA).", "\nTable 1 presents the baseline demographic characteristics and comorbidities in the two cohorts. In the TBI cohort, 43.35% patients were female. Compared with the non-TBI cohort, the TBI cohort had significantly higher percentages of patients with hypertension (39.45 vs 24.22; p<0.001), diabetes mellitus (24.24 vs 14.49; p<0.001), hyperlipidaemia (33.13 vs 22.97; p<0.001), coronary artery disease (7.25 vs 2.84; p<0.001), congestive heart failure (7.60 vs 3.39; p<0.001), liver cirrhosis (34.52 vs 23.79; p<0.001), tobacco use disorder (1.01 vs 0.62; p<0.001), depression (10.1 vs 7.41; p<0.001) and alcohol attributed disease (7.86 vs 2.40; p<0.001). During a median observation time of 4.2 years, 0.88% (154) of the TBI patients had suicide attempt (IQR=1.5 to 7.0). The incidence of suicide attempt in the TBI cohort was significantly (p<0.001) higher than that in the non-TBI cohort (314 suicide attempts out of 70 016 age-matched and gender-matched controls (0.45%) during a median observation time of 8.0 years (IQR=4.9 to 11.1)). 
The median duration of follow-up for suicide attempts was significantly shorter in the TBI group (4.2 years) compared with the non-TBI group (8.0 years).\nBaseline characteristics of patients with and without traumatic brain injury", "In table 2, the incidence and HR for suicide attempts are stratified by gender, age and comorbidity. During the follow-up period, the overall risk of suicide attempt was 2.23 times higher in the TBI group compared with the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively) after adjustment for gender, age and related comorbidities (diabetes mellitus, hyperlipidaemia, hypertension, coronary artery disease, congestive heart failure, liver cirrhosis, depression, alcohol attributed disease and tobacco use disorder).\nIncidence and HR of suicide attempt by demographic characteristics, comorbidity and follow-up duration among patients with or without TBI\n*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease).\n†p<0.001;\n‡P=0.002.\n§Patients with any examined comorbidities, including hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease, were classified as the comorbidity group.\n¶p=0.028; **P=0.035.\nRate, incidence rate in per 1000 person-years; TBI, traumatic brain injury.\nGender-specific analyses showed that, in the TBI group, the incidence of suicide attempts was higher in women than in men (1.05 vs 0.92 per 1000 person-years, respectively). 
Additionally, in comparison with the non-TBI group, the TBI group had a significantly higher risk of suicide attempts in both genders (adjusted HR=2.49, 95% CI 1.82 to 3.41 for women; adjusted HR=2.04, 95% CI 1.53 to 2.72 for men).\nIn age-specific risk comparisons of the two cohorts, the TBI cohort consistently showed a significantly higher risk of attempted suicide in all age groups. Regardless of comorbidities, the risk of suicide attempt was higher in TBI group than non-TBI group. Notably, the risk of attempted suicide was even higher in TBI patients who had comorbidities.\n\nFigure 2 compares the Kaplan–Meier curves for the cumulative incidence of suicide attempts between the TBI and non-TBI groups at the 15 year follow-up. The Kaplan–Meier curves showed a significantly higher cumulative incidence of suicide attempts in the TBI group compared with the non-TBI group (log-rank test p<0.001).\nCumulative incidence of suicide attempts among patients with traumatic brain injury and the control cohort. TBI, traumatic brain injury.", "The Cox regression analysis revealed two risk factors for suicide attempts in the TBI group: depression (adjusted HR=5.73, 95% CI 4.14 to 7.93) and alcohol attributed disease (adjusted HR=3.29, 95% CI 2.30 to 4.72). However, the risk of attempted suicide in this group decreased as age increased (table 3).\nCox regression model: significant predictors of suicide attempt after traumatic brain injury\nThe adjusted HR and 95% CI were estimated by a stepwise Cox proportional hazards regression method.\n*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease).", "\nTable 4 shows that all patients with TBI had a higher than normal risk of suicide attempts. 
Additionally, the risk of attempted suicide increased as the severity of TBI increased.\nIncidence and HRs for suicide attempt stratified by the severity of TBI\nRate, incidence rate in per 1000 person-years.\n*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease).\n†p<0.001; ‡p=0.044.\nTBI, traumatic brain injury." ]
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Database", "Study population", "Outcome and comorbidities", "Statistical analyses", "Results", "Baseline characteristics of the TBI and non-TBI cohorts", "Incidence and risk of suicide attempts", "Risks factors for suicide attempts in TBI patients", "Risk of suicide attempt and TBI severity", "Discussion", "Conclusion" ]
[ "Traumatic brain injury (TBI), which is defined as damage to the skull or the brain and its frameworks through an external force,1 is a major cause of death and disability worldwide, and its treatment is potentially a heavy economic burden. The number of TBIs that occur annually worldwide is an estimated 1.5 to 2 million; the global prevalence is 12% to 16.7% in males and 8.5% in females, respectively. In Taiwan, up to 25% of the approximately 52 000 annual cases of TBI are fatal.2 The WHO projects that TBI will be the third leading cause of death or disability by 2020.3 Traumatic brain injury is most common in young adults and the elderly. Mild TBI is far more common than severe TBI. The severity of various consequences of TBI is known to increase proportionally to the severity of the TBI; typical consequences include physical disability, cognitive impairments and mood disorders.\nSuicide is a global public health problem. Approximately 800 000 people commit suicide every year, and suicide is the second leading cause of death in young adults. In Taiwan, the number of deaths by suicide in 2016 was 3765, and the standardised mortality rate was 12.3 persons per population of 100 000.4 According to statistics released by the Taiwan Ministry of Health and Welfare, suicide is the 12th most common cause of death. Suicide occurs throughout the life span and across all socioeconomic strata. Moreover, suicide has a potentially immense socioeconomic burden on individuals, families, communities and nations.5\n\nThe causes of suicide are multifactorial. Individuals with TBI have higher rates of non-fatal deliberate self-harm, suicide and all-cause mortality compared with the general population.6 7 Although recent studies suggest that TBI might be an important risk factor for suicide, studies of the association between TBI severity and suicide risk have reported conflicting results between countries and races. 
Simpson8 and Fonda et al\n9 reported TBI severity contributed no meaningful differences to the risk of suicide attempts in the study of Australian population or US veterans who served in either Iraq or Afghanistan, respectively; whereas Madsen et al\n7 showed the risk of suicide was higher for TBI individuals with evidence of structural brain injury in Denmark.\nHowever, most studies investigating the relation of suicide and TBI have some shortcomings in methodology, such as small clinical samples, especially very small number of TBI cases with suicide and self-reported data. Besides, little population-based data for this association are available. In the Simpson’s survey, they only included small number of patients (n=172) in Australia. In the Fonda’s analysis, only US veterans were included. The majority of military veterans were young (mean age=28.7 years) male (84%). The definition of TBI severity was based on self-reported duration of consciousness loss, alternation of consciousness and post-traumatic amnesia, not on objective examinations. Besides, moderate (6%) to severe (6%) TBI subsample cases were too small to bias the estimates.\nIn Denmark, Madsen et al conducted a retrospective cohort study (1977 to 2014) and showed that a statistically significant increased suicide was present in TBI individuals. However, their research results were potentially biassed because they did not include all mild TBI patients treated prior to 1995. Also, there were no available records on what treatment TBI patients received, making it difficult to determine the TBI severity. Additionally, this study was performed in population of the Western world, not those of Eastern Asia. Generalising data retrieved from a study of a relatively homogeneous and wealthy Scandinavian people is not persuasive. Finally, large-scale population-based studies for the relationship between TBI severity and suicide risk in Asian cohorts are scarce. 
Therefore, this study used the Taiwan National Health Insurance Research Database (NHIRD) to clarify the association between TBI and subsequent occurrence of suicide as well as the effect of TBI severity in Chinese cohorts. In our analysis, the TBI severity is based on objective codes or examinations and then correlation is estimated through adjusting for all important suicide risks.", " Database Data retrieved from the NHIRD maintained by the Taiwan healthcare system was used in this population-based cohort study. The encrypted NHIRD contains medical data of more than 99% of the 23.74 million Taiwan residents. The Taiwan national health insurance programme allowed researchers to access this administrative database for patients. The NHIRD contains enrolment information for all patients and comprehensive data for their use of healthcare resources. In accordance with the Personal Electronic Data Protection Law of Taiwan, any information that can be used to identify beneficiaries and hospitals must be removed from the NHIRD. This cohort study analysed 1996 to 2010 data from the Longitudinal Health Insurance Database 2010, which is an NHIRD subset that collects data of 1 million randomly-sampled beneficiaries from the primary NHIRD. This study identified and classified diseases according to the diagnostic codes in the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM).\n Study population The study included 17 504 patients who were older than 20 years and had been diagnosed with TBI (ICD-9-CM code 800, 803 to 804 and 850 to 854; operation codes 01.23, 01.24, 01.25, 01.31, 01.39 and 02.01) during 1996 to 2010.10 The analysis was limited to patients who had received ≥1 diagnoses during inpatient care or ≥2 TBI diagnoses during ambulatory visits in order to ensure data accuracy. The date of the first clinical visit for TBI was defined as the index date. The TBI cohort excluded patients who had been diagnosed with any mental disorder (ICD-9-CM code 290 to 319) before the index date. The TBI cases were further classified as severe, moderate or mild. Severe TBI was defined as a TBI that required surgery in the course of inpatient treatment; moderate TBI was defined as a TBI that required hospitalisation but not surgery; mild TBI was defined as a TBI that did not require inpatient treatment.11 The ICD-9-CM, E-Codes for suicide attempts (950 to 959 and 980 to 989)12 were assigned by psychiatrists. Patients were excluded if they had a ICD-9 code for suicide attempt before the index date. Additionally, 70 016 patients without TBI were identified to enhance the power of statistical tests in stratified analysis. The non-TBI patients were randomly selected from the registry of beneficiaries. 
Patients were eligible for inclusion in the non-TBI group if they had not been diagnosed with TBI during 1996 to 2010 and had no history of suicide attempts before enrolment. The non-TBI group and the TBI group were matched at a 4:1 ratio for gender, age and index year (year of TBI diagnosis). Figure 1 shows the workflow of the study procedure.\nFlow diagram summarising the process of enrolment. LHID, Longitudinal Health Insurance Database; TBI, traumaticbrain injury.\nThe study included 17 504 patients who were older than 20 years and had been diagnosed with TBI (ICD-9-CM code 800, 803 to 804 and 850 to 854; operation codes 01.23, 01.24, 01.25, 01.31, 01.39 and 02.01) during 1996 to 2010.10 The analysis was limited to patients who had received ≥1 diagnoses during inpatient care or ≥2 TBI diagnoses during ambulatory visits in order to ensure data accuracy. The date of the first clinical visit for TBI was defined as the index date. The TBI cohort excluded patients who had been diagnosed with any mental disorder (ICD-9-CM code 290 to 319) before the index date. The TBI cases were further classified as severe, moderate or mild. Severe TBI was defined as a TBI that required surgery in the course of inpatient treatment; moderate TBI was defined as a TBI that required hospitalisation but not surgery; mild TBI was defined as a TBI that did not require inpatient treatment.11 The ICD-9-CM, E-Codes for suicide attempts (950 to 959 and 980 to 989)12 were assigned by psychiatrists. Patients were excluded if they had a ICD-9 code for suicide attempt before the index date. Additionally, 70 016 patients without TBI were identified to enhance the power of statistical tests in stratified analysis. The non-TBI patients were randomly selected from the registry of beneficiaries. Patients were eligible for inclusion in the non-TBI group if they had not been diagnosed with TBI during 1996 to 2010 and had no history of suicide attempts before enrolment. 
The non-TBI group and the TBI group were matched at a 4:1 ratio for gender, age and index year (year of TBI diagnosis). Figure 1 shows the workflow of the study procedure.\nFlow diagram summarising the process of enrolment. LHID, Longitudinal Health Insurance Database; TBI, traumaticbrain injury.\n Outcome and comorbidities The outcome was any occurrence of suicide attempts during the follow-up period. Both cohorts were followed until 31 December 2010 or until the occurrence of suicide attempts.\nA review of the psychiatry literature reveals that very few modifiable risk factors have been clearly defined.13 Therefore, most studies have focused on largely unmodifiable risk factors such as age and gender and on risk factors that are common physical comorbidities, such as hypertension (ICD-9-CM codes 401 to 405), hyperlipidaemia (ICD-9-CM code 272), congestive heart failure (ICD-9-CM code 428), diabetes mellitus (ICD-9-CM code 250), coronary artery disease (ICD-9-CM code 410 to 414), liver cirrhosis (ICD-9-CM code 571), depression (ICD-9-CM code 296.2 to 296.3, 300.4 and 311), tobacco use disorder (ICD-9-CM code 350.1) and alcohol attributed disease (ICD-9-CM codes 291.0 to 9, 303, 305.0, 357.5, 425.5, 535.3, 571.0 to 3, 980.0 and V11.3)13 were potential confounders in this analysis because they are unhealthy lifestyle behaviours associated with both TBI and mental illness.11\n\nFor each participant, urbanisation index and income-related insurance payment amounts were used as proxy measures of socioeconomic status at follow-up. Urbanisation index was categorised into three groups: high (metropolises), medium (small cities and suburban areas) and low (rural areas).14 Insurance premiums were categorised into three groups according to the monthly insurance payment by the enrollee: NT$0 to 20 000; NT$20 000 to 40 000 and more than NT$40 000.15\n\nThe outcome was any occurrence of suicide attempts during the follow-up period. 
Both cohorts were followed until 31 December 2010 or until the occurrence of suicide attempts.\nA review of the psychiatry literature reveals that very few modifiable risk factors have been clearly defined.13 Therefore, most studies have focused on largely unmodifiable risk factors such as age and gender and on risk factors that are common physical comorbidities, such as hypertension (ICD-9-CM codes 401 to 405), hyperlipidaemia (ICD-9-CM code 272), congestive heart failure (ICD-9-CM code 428), diabetes mellitus (ICD-9-CM code 250), coronary artery disease (ICD-9-CM code 410 to 414), liver cirrhosis (ICD-9-CM code 571), depression (ICD-9-CM code 296.2 to 296.3, 300.4 and 311), tobacco use disorder (ICD-9-CM code 350.1) and alcohol attributed disease (ICD-9-CM codes 291.0 to 9, 303, 305.0, 357.5, 425.5, 535.3, 571.0 to 3, 980.0 and V11.3)13 were potential confounders in this analysis because they are unhealthy lifestyle behaviours associated with both TBI and mental illness.11\n\nFor each participant, urbanisation index and income-related insurance payment amounts were used as proxy measures of socioeconomic status at follow-up. Urbanisation index was categorised into three groups: high (metropolises), medium (small cities and suburban areas) and low (rural areas).14 Insurance premiums were categorised into three groups according to the monthly insurance payment by the enrollee: NT$0 to 20 000; NT$20 000 to 40 000 and more than NT$40 000.15\n\n Statistical analyses We used X2 test to compare clinical characteristics and distributions of categorical demographics between the TBI and non-TBI cohorts. The Wilcoxon rank-sum test and Student’s t-test were used to compare mean age and follow-up time (y) between the two cohorts, as appropriate. The Kaplan–Meier analysis was used to estimate cumulative incidence of suicide attempts, and the differences between the curves were compared by two-tailed log-rank test. 
For TBI patients, survival was calculated until an ambulatory visit for suicide attempts, hospitalisation or the end of the study period (31 December 2010), whichever came first. Incidence rates of suicide attempts estimated in 1000 person-years were compared between the two cohorts. Cox proportional hazard regression models were used to investigate the HR and 95% CI for suicide attempts for individuals with TBI if the proportional hazards assumption was satisfied. Multivariable Cox models were adjusted for gender, age, income, urbanisation and relevant comorbidities. A two-tailed p value less than 0.05 was considered statistically significant. Statistical Analysis Software, V.9.4, was used for all data analyses (SAS Institute, Cary, North Carolina, USA).\nWe used X2 test to compare clinical characteristics and distributions of categorical demographics between the TBI and non-TBI cohorts. The Wilcoxon rank-sum test and Student’s t-test were used to compare mean age and follow-up time (y) between the two cohorts, as appropriate. The Kaplan–Meier analysis was used to estimate cumulative incidence of suicide attempts, and the differences between the curves were compared by two-tailed log-rank test. For TBI patients, survival was calculated until an ambulatory visit for suicide attempts, hospitalisation or the end of the study period (31 December 2010), whichever came first. Incidence rates of suicide attempts estimated in 1000 person-years were compared between the two cohorts. Cox proportional hazard regression models were used to investigate the HR and 95% CI for suicide attempts for individuals with TBI if the proportional hazards assumption was satisfied. Multivariable Cox models were adjusted for gender, age, income, urbanisation and relevant comorbidities. A two-tailed p value less than 0.05 was considered statistically significant. 
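The severity classification rule and the person-time incidence measure used above are simple enough to sketch in code. The snippet below is an illustrative reimplementation only; the function names and the example figures are hypothetical and are not drawn from the study data.

```python
def classify_tbi_severity(hospitalised: bool, had_surgery: bool) -> str:
    """Severity rule described in the study: surgery during inpatient care
    -> severe; hospitalisation without surgery -> moderate; no inpatient
    treatment -> mild."""
    if had_surgery:
        return "severe"
    if hospitalised:
        return "moderate"
    return "mild"


def incidence_per_1000_person_years(events: int, person_years: float) -> float:
    """Crude incidence rate expressed per 1000 person-years of follow-up."""
    return 1000.0 * events / person_years


# Hypothetical example: 5 events observed over 2500 person-years of follow-up
rate = incidence_per_1000_person_years(5, 2500.0)  # 2.0 per 1000 person-years
```

Expressing rates per person-years rather than per person matters here because the two cohorts were followed for different median durations (4.2 vs 8.0 years), so raw counts are not directly comparable.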
Baseline characteristics of the TBI and non-TBI cohorts
Table 1 presents the baseline demographic characteristics and comorbidities of the two cohorts. In the TBI cohort, 43.35% of patients were female. Compared with the non-TBI cohort, the TBI cohort had significantly higher percentages of patients with hypertension (39.45 vs 24.22; p<0.001), diabetes mellitus (24.24 vs 14.49; p<0.001), hyperlipidaemia (33.13 vs 22.97; p<0.001), coronary artery disease (7.25 vs 2.84; p<0.001), congestive heart failure (7.60 vs 3.39; p<0.001), liver cirrhosis (34.52 vs 23.79; p<0.001), tobacco use disorder (1.01 vs 0.62; p<0.001), depression (10.1 vs 7.41; p<0.001) and alcohol attributed disease (7.86 vs 2.40; p<0.001). During a median observation time of 4.2 years (IQR=1.5 to 7.0), 0.88% (154) of the TBI patients had a suicide attempt. The incidence of suicide attempts in the TBI cohort was significantly (p<0.001) higher than in the non-TBI cohort (314 suicide attempts among the 70 016 age-matched and gender-matched controls (0.45%) during a median observation time of 8.0 years (IQR=4.9 to 11.1)).
The median duration of follow-up was significantly shorter in the TBI group (4.2 years) than in the non-TBI group (8.0 years).
Baseline characteristics of patients with and without traumatic brain injury

Incidence and risk of suicide attempts
In table 2, the incidence and HR of suicide attempts are stratified by gender, age and comorbidity.
During the follow-up period, the overall risk of suicide attempts was 2.23 times higher in the TBI group than in the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively) after adjustment for gender, age and related comorbidities (diabetes mellitus, hyperlipidaemia, hypertension, coronary artery disease, congestive heart failure, liver cirrhosis, depression, alcohol attributed disease and tobacco use disorder).
Incidence and HR of suicide attempt by demographic characteristics, comorbidity and follow-up duration among patients with or without TBI
*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease).
†p<0.001; ‡p=0.002.
§Patients with any of the examined comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease) were classified as the comorbidity group.
¶p=0.028; **p=0.035.
Rate, incidence rate per 1000 person-years; TBI, traumatic brain injury.
Gender-specific analyses showed that, in the TBI group, the incidence of suicide attempts was higher in women than in men (1.05 vs 0.92 per 1000 person-years, respectively). Additionally, compared with the non-TBI group, the TBI group had a significantly higher risk of suicide attempts in both genders (adjusted HR=2.49, 95% CI 1.82 to 3.41 for women; adjusted HR=2.04, 95% CI 1.53 to 2.72 for men).
In age-specific risk comparisons of the two cohorts, the TBI cohort consistently showed a significantly higher risk of attempted suicide in all age groups. Regardless of comorbidities, the risk of suicide attempts was higher in the TBI group than in the non-TBI group. Notably, the risk of attempted suicide was even higher in TBI patients who had comorbidities.
Figure 2 compares the Kaplan–Meier curves for the cumulative incidence of suicide attempts between the TBI and non-TBI groups over the 15-year follow-up. The curves showed a significantly higher cumulative incidence of suicide attempts in the TBI group than in the non-TBI group (log-rank test p<0.001).
Cumulative incidence of suicide attempts among patients with traumatic brain injury and the control cohort. TBI, traumatic brain injury.

Risk factors for suicide attempts in TBI patients
The Cox regression analysis revealed two risk factors for suicide attempts in the TBI group: depression (adjusted HR=5.73, 95% CI 4.14 to 7.93) and alcohol attributed disease (adjusted HR=3.29, 95% CI 2.30 to 4.72).
However, the risk of attempted suicide in this group decreased as age increased (table 3).
Cox regression model: significant predictors of suicide attempt after traumatic brain injury
The adjusted HRs and 95% CIs were estimated by a stepwise Cox proportional hazards regression method.
*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease).

Risk of suicide attempt and TBI severity
Table 4 shows that all patients with TBI had a higher than normal risk of suicide attempts. Additionally, the risk of attempted suicide increased with the severity of TBI.
Incidence and HRs for suicide attempt stratified by the severity of TBI
Rate, incidence rate per 1000 person-years.
*Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease).
†p<0.001; ‡p=0.044.
TBI, traumatic brain injury.

To our knowledge, this is the first large-scale population-based study to investigate the association between TBI and subsequent suicide risk in a Chinese population. The TBI group in this study had a 2.23-fold higher risk of suicide attempts compared with the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively). Regardless of age, gender or comorbidity, all TBI subgroups had a higher than normal risk of attempted suicide. Moreover, patients with severe TBI had a slightly greater risk of attempted suicide than those with mild TBI. In the TBI group, the strongest risk factors for attempted suicide were depression and alcohol attributed disease.
This study had several notable advantages. First, this population-based study analysed longitudinal data contained in an insurance claims database over a long observation period (15 years) and in a large population (1 000 000 Taiwan residents). The analyses were adjusted for time-varying factors that are well recognised as major risk factors for suicide. Therefore, recall bias was limited compared with analyses based on the self-reported data on suicide behaviour used in other surveys. Second, the Taiwan National Health Insurance programme covers over 99.5% of residents in Taiwan. Therefore, the observed association between TBI and suicide was representative of the actual population, and the possibility of selection bias was minimal.
However, this study has several limitations.
First, the insurance claims database used in this analysis did not include data for suicidal ideation or completed suicide. Therefore, the analysis could only consider attempted suicide. Second, this insurance claims database did not include individuals who did not seek medical care because their TBI or self-harm behaviours were mild. Therefore, potential bias in identification may have resulted in an underestimated incidence of attempted suicide. Third, this study did not consider chronic pain, which has a high incidence in TBI patients and has a significant association with depression.16 Moreover, our conclusions were based on analysis of secondary data from an insurance claims database. Unexpected biases from the effects of different numbers and severities of concomitant comorbidities were possible. Data were unavailable for potential confounding factors such as family history of suicide or child abuse or for environmental factors such as stressful life events or prolonged stress. Additionally, this study could only analyse diagnoses of alcohol attributed illness and tobacco use disorder rather than habitual use of alcohol or tobacco. Finally, this study did not consider social factors such as marital status and social support and did not consider patient characteristics such as Glasgow Coma Scale score, trauma mechanisms and duration of lost consciousness.\nPrevious studies have investigated the prevalence of suicidal ideation, attempted suicide and death by suicide in TBI patients in the community.17 A systematic review found robust evidence of the relationship between TBI and elevated risk of suicide.18 A TBI can cause dysfunction of the frontal lobes and frontal-subcortical circuits, resulting in aggression, poor decision-making and impulsivity.
Impulsivity and aggression also have a strong association with suicidal behaviour.19 This study revealed a markedly higher cumulative incidence of suicide attempts in the TBI cohort compared with the non-TBI cohort, and the TBI group had a 2.23-fold higher risk of attempted suicide compared with the non-TBI group in the Taiwanese population. These results are consistent with a longitudinal follow-up study by Lauren et al,\n20 who reported that TBI is a major risk factor for suicidal behaviour. Moreover, data collected using standardised self-reported measures of post-traumatic stress disorder, depression, suicidal thoughts and behaviours in military personnel also indicate that suicide risk is higher in subjects with a history of multiple TBIs compared with those with only one or no TBI.21 22\n\nIn the general population, risk factors for attempted suicide include gender, age and history of previous suicide attempts.23 A notable gender difference is that the rate of attempted suicide is higher in females whereas the rate of completed suicides is higher in males.24 In the current study, both male and female patients in the TBI group had a higher risk of attempted suicide compared with their counterparts in the non-TBI group. Specifically, males in the TBI group had a 2.04-fold higher risk of attempted suicide; females in the TBI group had a 2.49-fold higher risk of attempted suicide. In an analysis of a Denmark cohort, Teasdale et al\n6 similarly reported that females with TBI had a higher suicide mortality rate compared with males with TBI.
Hartkopp et al\n25 speculated that females have more difficulty bearing the consequences of injury because physical attractiveness plays a more important role in self-image in women than in men.\nThe psychiatric disorder most commonly (50% to 60%) implicated in death by suicide is depression.26 Anxiety, chronic pain and drug abuse are other independent risk factors for attempted suicide.27 Psychotherapeutic interventions implemented early (within the first year) after injury may reduce the progression of major depression in TBI.28 In our study, depression and alcohol attributed disease were significantly associated with suicide after TBI (adjusted HR 5.73, 95% CI 4.14 to 7.93, p<0.001; adjusted HR 3.29, 95% CI 2.30 to 4.72, p<0.001, respectively).\nIn the elderly, falls are the most common cause of TBI. Elderly patients with TBI also have significantly worse outcomes for mortality and functional impairment.29 Age is a major factor in functional decline after TBI.30 A previous population-based study revealed that the incidence of suicide was lower in TBI patients younger than 21 years or older than 60 years.6\nTable 3 shows that, after stratification by age, the risk of suicide attempt in elderly patients was significantly lower in the current study (adjusted HR 0.83; 95% CI 0.74 to 0.94; p=0.002). A possible explanation for the age difference is that elderly patients have less cognitive and physical capability to carry out a suicide after injury.\nThe association between TBI severity and suicide has also attracted research interest.
The relative risk of attempted suicide is three to four times higher in patients with severe TBI compared with the general population.17 Clinical evidence also indicates that both severe and mild TBI are associated with increased suicidal tendencies.31 Brenner et al\n32 reported that, compared with those without TBI, the risk of death by suicide was 1.34-fold higher in military veterans with a history of severe TBI, while veterans with a history of relatively mild TBI, that is, fracture or contusion, had a 1.98 times higher risk of attempted suicide. Pain may have contributed to the increased risk of suicide in the mild TBI group. In comparison, stratification by TBI severity in our study revealed that patients with mild, moderate and severe TBI had 2.22-fold, 2.23-fold and 2.32-fold higher risks of attempted suicide, respectively, compared with patients without TBI (table 4).", "The worldwide consensus is that suicide is a preventable cause of death. The findings of our study suggest that depression, alcohol attributed disease and high severity of TBI may increase the risk of attempted suicide in patients with TBI. These factors serve as vital mechanisms through which TBI influences suicide attempts. Suicide attempts are strong indicators of death by suicide.33 Suicide prevention requires a cooperative effort by society, the family and the individual. The results of this study suggested that intensive interventions for identifying individuals with a high risk of suicidal behaviour can have a large and positive public health effect.
Therefore, future research is warranted to identify other mechanisms of this association, including possible biological interactions between TBI and other predictors such as severity, to elucidate whether prompt intervention in TBI cases could reduce this risk.\nPatients of Chinese ancestry initially diagnosed with traumatic brain injury (TBI) have a high risk of developing suicide attempts.\nThe risk of suicide attempts increases with the severity of TBI.\nThe increased risk of suicide is less in TBI patients aged above 50 years; thus, older age may protect the TBI cohort from suicide attempts.\nDoes TBI severity correlate with suicide attempts in Chinese patients?\nWould increased exposure to traumatic brain injury earlier in life increase the risk of suicide attempts?\nDo older TBI patients have higher risks of suicide attempts?\nBoth traumatic brain injury and suicide are worldwide health problems and burdens.\nPatients with TBI are known to have higher than normal mortality." ]
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, "discussion", "conclusions" ]
[ "traumatic brain injury", "suicide", "cohort study" ]
Introduction: Traumatic brain injury (TBI), which is defined as damage to the skull or the brain and its frameworks by an external force,1 is a major cause of death and disability worldwide, and its treatment is potentially a heavy economic burden. The number of TBIs that occur annually worldwide is an estimated 1.5 to 2 million; the global prevalence is 12% to 16.7% in males and 8.5% in females. In Taiwan, up to 25% of the approximately 52 000 annual cases of TBI are fatal.2 The WHO projects that TBI will be the third leading cause of death or disability by 2020.3 Traumatic brain injury is most common in young adults and the elderly. Mild TBI is far more common than severe TBI. The severity of the various consequences of TBI is known to increase proportionally to the severity of the TBI; typical consequences include physical disability, cognitive impairments and mood disorders. Suicide is a global public health problem. Approximately 800 000 people commit suicide every year, and suicide is the second leading cause of death in young adults. In Taiwan, the number of deaths by suicide in 2016 was 3765, and the standardised mortality rate was 12.3 per 100 000 population.4 According to statistics released by the Taiwan Ministry of Health and Welfare, suicide is the 12th most common cause of death. Suicide occurs throughout the life span and across all socioeconomic strata. Moreover, suicide imposes a potentially immense socioeconomic burden on individuals, families, communities and nations.5 The causes of suicide are multifactorial. Individuals with TBI have higher rates of non-fatal deliberate self-harm, suicide and all-cause mortality compared with the general population.6 7 Although recent studies suggest that TBI might be an important risk factor for suicide, studies of the association between TBI severity and suicide risk have reported conflicting results between countries and races.
Simpson8 and Fonda et al9 reported that TBI severity contributed no meaningful difference to the risk of suicide attempts in studies of an Australian population and of US veterans who served in either Iraq or Afghanistan, respectively; whereas Madsen et al7 showed that the risk of suicide was higher for TBI individuals with evidence of structural brain injury in Denmark. However, most studies investigating the relationship between suicide and TBI have methodological shortcomings, such as small clinical samples (especially very small numbers of TBI cases with suicide) and reliance on self-reported data. Besides, little population-based data for this association are available. Simpson's survey included only a small number of patients (n=172) in Australia. Fonda's analysis included only US veterans, the majority of whom were young (mean age=28.7 years) and male (84%). The definition of TBI severity was based on self-reported duration of consciousness loss, alteration of consciousness and post-traumatic amnesia, not on objective examinations. Moreover, the moderate (6%) and severe (6%) TBI subsamples were too small, which may have biased the estimates. In Denmark, Madsen et al conducted a retrospective cohort study (1977 to 2014) and showed that a statistically significant increase in suicide was present in TBI individuals. However, their results were potentially biased because they did not include all mild TBI patients treated prior to 1995. Also, there were no available records on what treatment TBI patients received, making it difficult to determine TBI severity. Additionally, this study was performed in populations of the Western world, not those of East Asia. Generalising from a study of a relatively homogeneous and wealthy Scandinavian population is therefore questionable. Finally, large-scale population-based studies of the relationship between TBI severity and suicide risk in Asian cohorts are scarce.
Therefore, this study used the Taiwan National Health Insurance Research Database (NHIRD) to clarify the association between TBI and the subsequent occurrence of suicide, as well as the effect of TBI severity, in Chinese cohorts. In our analysis, TBI severity was based on objective codes and examinations, and the correlation was estimated after adjusting for all important suicide risk factors. Methods: Database Data retrieved from the NHIRD maintained by the Taiwan healthcare system was used in this population-based cohort study. The encrypted NHIRD contains medical data of more than 99% of the 23.74 million Taiwan residents. The Taiwan national health insurance programme allows researchers to access this administrative database. The NHIRD contains enrolment information for all patients and comprehensive data for their use of healthcare resources. In accordance with the Personal Electronic Data Protection Law of Taiwan, any information that can be used to identify beneficiaries and hospitals must be removed from the NHIRD. This cohort study analysed 1996 to 2010 data from the Longitudinal Health Insurance Database 2010, which is an NHIRD subset that collects data of 1 million randomly sampled beneficiaries from the primary NHIRD. This study identified and classified diseases according to the diagnostic codes in the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). Study population The study included 17 504 patients who were older than 20 years and had been diagnosed with TBI (ICD-9-CM codes 800, 803 to 804 and 850 to 854; operation codes 01.23, 01.24, 01.25, 01.31, 01.39 and 02.01) during 1996 to 2010.10 The analysis was limited to patients who had received ≥1 TBI diagnosis during inpatient care or ≥2 TBI diagnoses during ambulatory visits in order to ensure data accuracy. The date of the first clinical visit for TBI was defined as the index date. The TBI cohort excluded patients who had been diagnosed with any mental disorder (ICD-9-CM codes 290 to 319) before the index date. The TBI cases were further classified as severe, moderate or mild. Severe TBI was defined as a TBI that required surgery in the course of inpatient treatment; moderate TBI was defined as a TBI that required hospitalisation but not surgery; mild TBI was defined as a TBI that did not require inpatient treatment.11 The ICD-9-CM E-Codes for suicide attempts (950 to 959 and 980 to 989)12 were assigned by psychiatrists. Patients were excluded if they had an ICD-9 code for suicide attempt before the index date. Additionally, 70 016 patients without TBI were identified to enhance the power of statistical tests in stratified analysis. The non-TBI patients were randomly selected from the registry of beneficiaries. Patients were eligible for inclusion in the non-TBI group if they had not been diagnosed with TBI during 1996 to 2010 and had no history of suicide attempts before enrolment. The non-TBI group and the TBI group were matched at a 4:1 ratio for gender, age and index year (year of TBI diagnosis). Figure 1 shows the workflow of the study procedure. Flow diagram summarising the process of enrolment. LHID, Longitudinal Health Insurance Database; TBI, traumatic brain injury. Outcome and comorbidities The outcome was any occurrence of suicide attempts during the follow-up period. Both cohorts were followed until 31 December 2010 or until the occurrence of suicide attempts. A review of the psychiatry literature reveals that very few modifiable risk factors have been clearly defined.13 Therefore, most studies have focused on largely unmodifiable risk factors such as age and gender and on common physical comorbidities. Hypertension (ICD-9-CM codes 401 to 405), hyperlipidaemia (ICD-9-CM code 272), congestive heart failure (ICD-9-CM code 428), diabetes mellitus (ICD-9-CM code 250), coronary artery disease (ICD-9-CM codes 410 to 414), liver cirrhosis (ICD-9-CM code 571), depression (ICD-9-CM codes 296.2 to 296.3, 300.4 and 311), tobacco use disorder (ICD-9-CM code 350.1) and alcohol attributed disease (ICD-9-CM codes 291.0 to 9, 303, 305.0, 357.5, 425.5, 535.3, 571.0 to 3, 980.0 and V11.3)13 were treated as potential confounders in this analysis because they are associated with unhealthy lifestyle behaviours linked to both TBI and mental illness.11 For each participant, urbanisation index and income-related insurance payment amounts were used as proxy measures of socioeconomic status at follow-up. Urbanisation index was categorised into three groups: high (metropolises), medium (small cities and suburban areas) and low (rural areas).14 Insurance premiums were categorised into three groups according to the monthly insurance payment by the enrollee: NT$0 to 20 000, NT$20 000 to 40 000 and more than NT$40 000.15 Statistical analyses We used the χ2 test to compare clinical characteristics and distributions of categorical demographics between the TBI and non-TBI cohorts. The Wilcoxon rank-sum test and Student's t-test were used to compare mean age and follow-up time (years) between the two cohorts, as appropriate. The Kaplan–Meier analysis was used to estimate the cumulative incidence of suicide attempts, and the differences between the curves were compared by two-tailed log-rank test. For TBI patients, survival was calculated until an ambulatory visit for suicide attempts, hospitalisation or the end of the study period (31 December 2010), whichever came first. Incidence rates of suicide attempts estimated per 1000 person-years were compared between the two cohorts. Cox proportional hazards regression models were used to investigate the HR and 95% CI for suicide attempts for individuals with TBI if the proportional hazards assumption was satisfied. Multivariable Cox models were adjusted for gender, age, income, urbanisation and relevant comorbidities. A two-tailed p value less than 0.05 was considered statistically significant. Statistical Analysis Software, V.9.4, was used for all data analyses (SAS Institute, Cary, North Carolina, USA).
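The Kaplan–Meier curves described above are product-limit estimates. As an illustration only (the study itself used SAS V.9.4, and the durations below are invented, not the study's data), the estimator can be sketched in plain Python:

```python
# Product-limit (Kaplan-Meier) sketch: at each event time, multiply the
# running survival probability by (1 - deaths / number at risk).
# Censored subjects leave the risk set without changing the curve.
def kaplan_meier(durations, events):
    """Return a list of (time, survival probability) at each event time."""
    n = len(durations)
    order = sorted(range(n), key=lambda i: durations[i])
    at_risk, surv, curve, i = n, 1.0, [], 0
    while i < n:
        t = durations[order[i]]
        deaths = censored = 0
        # group all subjects whose follow-up ends at the same time t
        while i < n and durations[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

# toy cohort: follow-up in years; 1 = suicide attempt observed, 0 = censored
print(kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0]))
```

The cumulative incidence plotted in a figure such as figure 2 is then 1 - S(t), and the log-rank test compares observed versus expected events between the two curves.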
Results: Baseline characteristics of the TBI and non-TBI cohorts Table 1 presents the baseline demographic characteristics and comorbidities in the two cohorts. In the TBI cohort, 43.35% of patients were female. Compared with the non-TBI cohort, the TBI cohort had significantly higher percentages of patients with hypertension (39.45 vs 24.22; p<0.001), diabetes mellitus (24.24 vs 14.49; p<0.001), hyperlipidaemia (33.13 vs 22.97; p<0.001), coronary artery disease (7.25 vs 2.84; p<0.001), congestive heart failure (7.60 vs 3.39; p<0.001), liver cirrhosis (34.52 vs 23.79; p<0.001), tobacco use disorder (1.01 vs 0.62; p<0.001), depression (10.1 vs 7.41; p<0.001) and alcohol attributed disease (7.86 vs 2.40; p<0.001). During a median observation time of 4.2 years (IQR=1.5 to 7.0), 0.88% (154) of the TBI patients had a suicide attempt. The incidence of suicide attempt in the TBI cohort was significantly (p<0.001) higher than that in the non-TBI cohort (314 suicide attempts out of 70 016 age-matched and gender-matched controls (0.45%) during a median observation time of 8.0 years (IQR=4.9 to 11.1)).
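The incidence rates reported in this study are simple person-time rates, events divided by accumulated person-years and scaled to 1000. A minimal sketch, using hypothetical round counts chosen only so the rates match the study's reported magnitudes (0.98 and 0.29 per 1000 person-years):

```python
# Incidence rate per 1000 person-years = events / person-time * 1000.
# The event and person-year counts below are hypothetical, chosen only to
# reproduce the magnitudes of the rates reported in the paper.
def rate_per_1000_py(events, person_years):
    return 1000.0 * events / person_years

tbi_rate = rate_per_1000_py(49, 50_000)        # 0.98 per 1000 person-years
control_rate = rate_per_1000_py(145, 500_000)  # 0.29 per 1000 person-years
print(tbi_rate, control_rate, round(tbi_rate / control_rate, 2))
```

With these numbers the crude rate ratio is about 3.4, larger than the adjusted HR of 2.23 reported in this study, which is consistent with adjustment for age, gender and comorbidity attenuating the raw contrast.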
The median duration of follow-up for suicide attempts was significantly shorter in the TBI group (4.2 years) compared with the non-TBI group (8.0 years). Baseline characteristics of patients with and without traumatic brain injury Table 1 presents the baseline demographic characteristics and comorbidities in the two cohorts. In the TBI cohort, 43.35% patients were female. Compared with the non-TBI cohort, the TBI cohort had significantly higher percentages of patients with hypertension (39.45 vs 24.22; p<0.001), diabetes mellitus (24.24 vs 14.49; p<0.001), hyperlipidaemia (33.13 vs 22.97; p<0.001), coronary artery disease (7.25 vs 2.84; p<0.001), congestive heart failure (7.60 vs 3.39; p<0.001), liver cirrhosis (34.52 vs 23.79; p<0.001), tobacco use disorder (1.01 vs 0.62; p<0.001), depression (10.1 vs 7.41; p<0.001) and alcohol attributed disease (7.86 vs 2.40; p<0.001). During a median observation time of 4.2 years, 0.88% (154) of the TBI patients had suicide attempt (IQR=1.5 to 7.0). The incidence of suicide attempt in the TBI cohort was significantly (p<0.001) higher than that in the non-TBI cohort (314 suicide attempts out of 70 016 age-matched and gender-matched controls (0.45%) during a median observation time of 8.0 years (IQR=4.9 to 11.1)). The median duration of follow-up for suicide attempts was significantly shorter in the TBI group (4.2 years) compared with the non-TBI group (8.0 years). Baseline characteristics of patients with and without traumatic brain injury Incidence and risk of suicide attempts In table 2, the incidence and HR for suicide attempts are stratified by gender, age and comorbidity. 
During the follow-up period, the overall risk of suicide attempt was 2.23 times higher in the TBI group compared with the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively) after adjustment for gender, age and related comorbidities (diabetes mellitus, hyperlipidaemia, hypertension, coronary artery disease, congestive heart failure, liver cirrhosis, depression, alcohol attributed disease and tobacco use disorder). Incidence and HR of suicide attempt by demographic characteristics, comorbidity and follow-up duration among patients with or without TBI *Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease). †p<0.001; ‡P=0.002. §Patients with any examined comorbidities, including hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease, were classified as the comorbidity group. ¶p=0.028; **P=0.035. Rate, incidence rate in per 1000 person-years; TBI, traumatic brain injury. Gender-specific analyses showed that, in the TBI group, the incidence of suicide attempts was higher in women than in men (1.05 vs 0.92 per 1000 person-years, respectively). Additionally, in comparison with the non-TBI group, the TBI group had a significantly higher risk of suicide attempts in both genders (adjusted HR=2.49, 95% CI 1.82 to 3.41 for women; adjusted HR=2.04, 95% CI 1.53 to 2.72 for men). In age-specific risk comparisons of the two cohorts, the TBI cohort consistently showed a significantly higher risk of attempted suicide in all age groups. Regardless of comorbidities, the risk of suicide attempt was higher in TBI group than non-TBI group. Notably, the risk of attempted suicide was even higher in TBI patients who had comorbidities. 
Figure 2 compares the Kaplan–Meier curves for the cumulative incidence of suicide attempts between the TBI and non-TBI groups over the 15 year follow-up. The curves showed a significantly higher cumulative incidence of suicide attempts in the TBI group than in the non-TBI group (log-rank test p<0.001). Cumulative incidence of suicide attempts among patients with traumatic brain injury and the control cohort. TBI, traumatic brain injury. 
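The cumulative incidence plotted in figure 2 is one minus the Kaplan–Meier survival estimate. A self-contained, illustrative sketch of the product-limit estimator on toy data (the study used standard statistical software; none of this code is from the paper):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Product-limit survival estimates S(t) at each distinct event time.

    times: follow-up durations; events: 1 = suicide attempt observed,
    0 = censored at that time. Returns {event_time: S(event_time)}.
    """
    attempts = Counter(t for t, e in zip(times, events) if e == 1)
    surv, s = {}, 1.0
    for t in sorted(attempts):
        at_risk = sum(1 for u in times if u >= t)  # still under follow-up
        s *= 1.0 - attempts[t] / at_risk
        surv[t] = s
    return surv

# Toy cohort of three subjects: events at t=1 and t=2, censoring at t=3.
km = kaplan_meier([1, 2, 3], [1, 1, 0])
print({t: round(s, 3) for t, s in km.items()})  # {1: 0.667, 2: 0.333}
```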
Risk factors for suicide attempts in TBI patients: The Cox regression analysis revealed two risk factors for suicide attempts in the TBI group: depression (adjusted HR=5.73, 95% CI 4.14 to 7.93) and alcohol attributed disease (adjusted HR=3.29, 95% CI 2.30 to 4.72). However, the risk of attempted suicide in this group decreased with increasing age (table 3). Cox regression model: significant predictors of suicide attempt after traumatic brain injury. The adjusted HRs and 95% CIs were estimated by a stepwise Cox proportional hazards regression method. *Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease). 
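Each adjusted HR from the Cox model is the exponential of a fitted regression coefficient, so a coefficient of log(5.73) ≈ 1.75 for depression corresponds to the reported HR of 5.73. A small illustrative sketch (coefficients back-calculated from the reported HRs, not taken from the paper):

```python
import math

def hazard_ratio(beta: float) -> float:
    """Hazard ratio implied by a Cox regression coefficient."""
    return math.exp(beta)

# Back-calculated coefficients for the two reported predictors.
beta_depression = math.log(5.73)  # ~1.746
beta_alcohol = math.log(3.29)     # ~1.191
print(round(hazard_ratio(beta_depression), 2),
      round(hazard_ratio(beta_alcohol), 2))  # 5.73 3.29
```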
Risk of suicide attempt and TBI severity: Table 4 shows that all patients with TBI had a higher than normal risk of suicide attempts and that the risk of attempted suicide increased with the severity of TBI. Incidence and HRs for suicide attempt stratified by the severity of TBI. Rate: incidence rate per 1000 person-years. *Model adjusted for age, gender, income, urbanisation level and relevant comorbidities (hypertension, diabetes mellitus, hyperlipidaemia, coronary artery disease, congestive heart failure, liver cirrhosis, depression, tobacco use disorder and alcohol attributed disease). †p<0.001; ‡p=0.044. TBI, traumatic brain injury. 
Discussion: To our knowledge, this is the first large-scale population-based study to investigate the association between TBI and subsequent suicide risk in a Chinese population. The TBI group in this study had a 2.23-fold higher risk of suicide attempts compared with the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively). Regardless of age, gender or comorbidity, all TBI subgroups had a higher than normal risk of attempted suicide. Moreover, patients with severe TBI had a slightly greater risk of attempted suicide than those with mild TBI. In the TBI group, the strongest risk factors for attempted suicide were depression and alcohol attributed disease. This study had several notable advantages. First, this population-based study analysed longitudinal data contained in an insurance claims database over a long observation period (15 years) and in a large population (1 000 000 Taiwan residents). The analyses were adjusted for time-varying factors that are well recognised as major risk factors for suicide. Therefore, recall bias was limited compared with analyses based on the self-reported data for suicidal behaviour used in other surveys. Second, the Taiwan National Health Insurance programme covers over 99.5% of residents in Taiwan. Therefore, the observed association between TBI and suicide was representative of the actual population, and the possibility of selection bias was minimal. However, this study has several limitations. First, the insurance claims database used in this analysis did not include data on suicidal ideation or completed suicide. Therefore, the analysis could only consider attempted suicide. Second, this insurance claims database did not include individuals who did not seek medical care because their TBI or self-harm behaviours were mild. Therefore, potential bias in case identification may have resulted in an underestimated incidence of attempted suicide. 
Third, this study did not consider chronic pain, which has a high incidence in TBI patients and a significant association with depression.16 Moreover, our conclusions were based on analysis of secondary data from an insurance claims database, so unexpected biases arising from differing numbers and severities of concomitant comorbidities were possible. Data were unavailable for potential confounding factors such as family history of suicide or child abuse, and for environmental factors such as stressful life events or prolonged stress. Additionally, this study could only analyse diagnoses of alcohol attributed illness and tobacco use disorder rather than habitual use of alcohol or tobacco. Finally, this study did not consider social factors such as marital status and social support, nor patient characteristics such as Glasgow Coma Scale score, trauma mechanism and duration of lost consciousness. Previous studies have investigated the prevalence of suicidal ideation, attempted suicide and death by suicide in TBI patients in the community.17 A systematic review found robust evidence of a relationship between TBI and elevated suicide risk.18 A TBI can cause dysfunction of the frontal lobes and frontal-subcortical circuits, resulting in aggression, poor decision-making and impulsivity; impulsivity and aggression, in turn, have a strong association with suicidal behaviour.19 This study revealed a markedly higher cumulative incidence of suicide attempts in the TBI cohort compared with the non-TBI cohort, and the TBI group had a 2.23-fold higher risk of attempted suicide compared with the non-TBI group in the Taiwan population. These results are consistent with a longitudinal follow-up study by Lauren et al,20 who reported that TBI is a major risk factor for suicidal behaviour. 
Moreover, data collected using standardised self-reported measures of post-traumatic stress disorder, depression, and suicidal thoughts and behaviours in military personnel also indicate that suicide risk is higher in subjects with a history of multiple TBIs than in those with only one or no TBI.21 22 In the general population, risk factors for attempted suicide include gender, age and a history of previous suicide attempts.23 A notable gender difference is that the rate of attempted suicide is higher in females, whereas the rate of completed suicide is higher in males.24 In the current study, both male and female patients in the TBI group had a higher risk of attempted suicide compared with their counterparts in the non-TBI group. Specifically, males in the TBI group had a 2.04-fold higher risk of attempted suicide, and females in the TBI group had a 2.49-fold higher risk. Teasdale et al,6 in an analysis of a Danish cohort, similarly reported that females with TBI had a higher suicide mortality rate than males with TBI. Hartkopp et al25 speculated that females have more difficulty bearing the consequences of injury because physical attractiveness plays a more important role in self-image in women than in men. The psychiatric disorder most commonly (50% to 60%) implicated in death by suicide is depression.26 Anxiety, chronic pain and drug abuse are other independent risk factors for attempted suicide.27 Psychotherapeutic interventions implemented early (within the first year) after injury may reduce the progression of major depression in TBI.28 In our study, depression and alcohol attributed disease were significantly associated with attempted suicide after TBI (adjusted HR 5.73, 95% CI 4.14 to 7.93, p<0.001; adjusted HR 3.29, 95% CI 2.30 to 4.72, p<0.001, respectively). In the elderly, the most common cause of TBI is falls. 
Elderly patients with TBI also have significantly worse outcomes for mortality and functional impairment.29 Age is a major factor in functional decline after TBI.30 A previous population-based study revealed that the incidence of suicide was lower in TBI patients younger than 21 years or older than 60 years.6 Table 3 shows that, after stratification by age, the risk of suicide attempt in elderly patients was significantly lower in the current study (adjusted HR 0.83; 95% CI 0.74 to 0.94; p=0.002). A possible explanation for this age difference is that elderly patients have less cognitive and physical capacity to carry out a suicide attempt after injury. The association between TBI severity and suicide has also attracted interest. The relative risk of attempted suicide is three to four times higher in patients with severe TBI than in the general population.17 Clinical evidence also indicates that both severe and mild TBI are associated with increased suicidal tendencies.31 Brenner et al32 reported that, compared with those without TBI, death by suicide was 1.34-fold higher in military veterans with a history of severe TBI, whereas veterans with a history of relatively mild TBI (ie, fracture or contusion) had a 1.98 times higher risk of attempted suicide. Pain may have contributed to the increased suicide risk in the mild TBI group. In comparison, our study revealed that patients with mild, moderate and severe TBI had 2.22-fold, 2.23-fold and 2.32-fold higher risks of attempted suicide, respectively, compared with patients without TBI (table 4). Conclusion: The worldwide consensus is that suicide is a preventable cause of death. The findings of our study suggest that depression, alcohol attributed disease and high TBI severity may increase the risk of attempted suicide in patients with TBI. These factors serve as vital mechanisms through which TBI influences suicide attempts. 
Suicide attempts are strong indicators of death by suicide.33 Suicide prevention requires a cooperative effort by society, the family and the individual. The results of this study suggest that intensive interventions for identifying individuals at high risk of suicidal behaviour could have a large and positive public health effect. Therefore, future research is warranted to identify other mechanisms of this association, including possible biological interactions between TBI and other predictors such as severity, to elucidate whether prompt intervention in TBI cases could reduce this risk. Patients of Chinese ancestry initially diagnosed with traumatic brain injury (TBI) have a high risk of subsequent suicide attempts. The risk of suicide attempts increases with the severity of TBI. The increased risk of suicide is smaller in TBI patients aged above 50 years; thus, older age may protect the TBI cohort from suicide attempts. Does TBI severity correlate with suicide attempts in Chinese patients? Would increased exposure to traumatic brain injury earlier in life increase the risk of suicide attempts? Do older TBI patients have higher risks of suicide attempts? Both traumatic brain injury and suicide are worldwide health problems and burdens. Patients with TBI are known to have higher than normal mortality.
Background: Traumatic brain injury (TBI) is a major cause of death and disability worldwide, and its treatment is potentially a heavy economic burden. Suicide is another global public health problem and the second leading cause of death in young adults. Patients with TBI are known to have higher than normal rates of non-fatal deliberate self-harm, suicide and all-cause mortality. The aim of this study was to explore the association between TBI and suicide risk in a Chinese cohort. Methods: This study analysed data contained in the Taiwan National Health Insurance Research Database for 17 504 subjects with TBI and 70 016 subjects without TBI, matched for age and gender at a ratio of 1 to 4. Cox proportional hazards regression analysis was used to estimate subsequent suicide attempts in the TBI group. The probability of attempted suicide was determined by the Kaplan-Meier method. Results: The overall risk of suicide attempts was 2.23 times higher in the TBI group than in the non-TBI group (0.98 vs 0.29 per 1000 person-years, respectively) after adjustment for covariates. Regardless of gender, age or comorbidity, the TBI group tended to have more suicide attempts, and the risk of attempted suicide increased with the severity of TBI. Depression and alcohol attributed disease also increased the risk of attempted suicide in the TBI group. Conclusions: Suicide is preventable if risk factors are recognised. Hence, TBI patients require special attention to minimise their risk of attempted suicide.
Introduction: Traumatic brain injury (TBI), defined as damage to the skull or to the brain and its associated structures caused by an external force,1 is a major cause of death and disability worldwide, and its treatment is potentially a heavy economic burden. An estimated 1.5 to 2 million TBIs occur annually worldwide; the global prevalence is 12% to 16.7% in males and 8.5% in females. In Taiwan, up to 25% of the approximately 52 000 annual cases of TBI are fatal.2 The WHO projects that TBI will be the third leading cause of death or disability by 2020.3 Traumatic brain injury is most common in young adults and the elderly. Mild TBI is far more common than severe TBI. The severity of the various consequences of TBI is known to increase in proportion to the severity of the TBI; typical consequences include physical disability, cognitive impairment and mood disorders. Suicide is a global public health problem. Approximately 800 000 people die by suicide every year, and suicide is the second leading cause of death in young adults. In Taiwan, the number of deaths by suicide in 2016 was 3765, and the standardised mortality rate was 12.3 per 100 000 population.4 According to statistics released by the Taiwan Ministry of Health and Welfare, suicide is the 12th most common cause of death. Suicide occurs throughout the life span and across all socioeconomic strata. Moreover, suicide imposes a potentially immense socioeconomic burden on individuals, families, communities and nations.5 The causes of suicide are multifactorial. Individuals with TBI have higher rates of non-fatal deliberate self-harm, suicide and all-cause mortality compared with the general population.6 7 Although recent studies suggest that TBI might be an important risk factor for suicide, studies of the association between TBI severity and suicide risk have reported conflicting results between countries and races. 
Simpson8 and Fonda et al9 reported that TBI severity contributed no meaningful difference to the risk of suicide attempts in studies of an Australian population and of US veterans who served in Iraq or Afghanistan, respectively, whereas Madsen et al7 showed that the risk of suicide was higher for TBI individuals with evidence of structural brain injury in Denmark. However, most studies investigating the relation between suicide and TBI have methodological shortcomings, such as small clinical samples (especially very small numbers of TBI cases with suicide) and reliance on self-reported data, and little population-based evidence on this association is available. Simpson's survey included only a small number of patients (n=172) in Australia. Fonda's analysis included only US veterans, the majority of whom were young (mean age=28.7 years) and male (84%); the definition of TBI severity was based on self-reported duration of loss of consciousness, alteration of consciousness and post-traumatic amnesia rather than on objective examinations, and the moderate (6%) and severe (6%) TBI subsamples were too small, which may have biased the estimates. In Denmark, Madsen et al conducted a retrospective cohort study (1977 to 2014) and showed a statistically significant increase in suicide among TBI individuals. However, their results were potentially biased because they did not include all mild TBI patients treated prior to 1995. Also, no records were available on what treatment TBI patients received, making it difficult to determine TBI severity. Additionally, that study was performed in a population of the Western world, not of Eastern Asia, and generalising from a study of a relatively homogeneous and wealthy Scandinavian people is not persuasive. Finally, large-scale population-based studies of the relationship between TBI severity and suicide risk in Asian cohorts are scarce. 
Therefore, this study used the Taiwan National Health Insurance Research Database (NHIRD) to clarify the association between TBI and the subsequent occurrence of suicide, as well as the effect of TBI severity, in a Chinese cohort. In our analysis, TBI severity was based on objective codes and examinations, and the correlation was estimated after adjusting for all important suicide risk factors. 
8,661
285
[ 170, 350, 302, 224, 265, 489, 145, 117 ]
13
[ "tbi", "suicide", "patients", "risk", "attempts", "suicide attempts", "group", "tbi group", "higher", "disease" ]
[ "suicide tbi shortcomings", "tbi traumatic brain", "tbi cohort suicide", "suicide patients tbi", "risk suicide tbi" ]
[CONTENT] traumatic brain injury | suicide | cohort study [SUMMARY]
[CONTENT] traumatic brain injury | suicide | cohort study [SUMMARY]
[CONTENT] traumatic brain injury | suicide | cohort study [SUMMARY]
[CONTENT] traumatic brain injury | suicide | cohort study [SUMMARY]
[CONTENT] traumatic brain injury | suicide | cohort study [SUMMARY]
[CONTENT] traumatic brain injury | suicide | cohort study [SUMMARY]
[CONTENT] Adult | Brain Injuries, Traumatic | Cohort Studies | Comorbidity | Cost of Illness | Female | Humans | Incidence | Male | Mortality | Risk Assessment | Risk Factors | Severity of Illness Index | Suicidal Ideation | Suicide | Taiwan | Suicide Prevention [SUMMARY]
[CONTENT] Adult | Brain Injuries, Traumatic | Cohort Studies | Comorbidity | Cost of Illness | Female | Humans | Incidence | Male | Mortality | Risk Assessment | Risk Factors | Severity of Illness Index | Suicidal Ideation | Suicide | Taiwan | Suicide Prevention [SUMMARY]
[CONTENT] Adult | Brain Injuries, Traumatic | Cohort Studies | Comorbidity | Cost of Illness | Female | Humans | Incidence | Male | Mortality | Risk Assessment | Risk Factors | Severity of Illness Index | Suicidal Ideation | Suicide | Taiwan | Suicide Prevention [SUMMARY]
[CONTENT] Adult | Brain Injuries, Traumatic | Cohort Studies | Comorbidity | Cost of Illness | Female | Humans | Incidence | Male | Mortality | Risk Assessment | Risk Factors | Severity of Illness Index | Suicidal Ideation | Suicide | Taiwan | Suicide Prevention [SUMMARY]
[CONTENT] Adult | Brain Injuries, Traumatic | Cohort Studies | Comorbidity | Cost of Illness | Female | Humans | Incidence | Male | Mortality | Risk Assessment | Risk Factors | Severity of Illness Index | Suicidal Ideation | Suicide | Taiwan | Suicide Prevention [SUMMARY]
[CONTENT] Adult | Brain Injuries, Traumatic | Cohort Studies | Comorbidity | Cost of Illness | Female | Humans | Incidence | Male | Mortality | Risk Assessment | Risk Factors | Severity of Illness Index | Suicidal Ideation | Suicide | Taiwan | Suicide Prevention [SUMMARY]
[CONTENT] suicide tbi shortcomings | tbi traumatic brain | tbi cohort suicide | suicide patients tbi | risk suicide tbi [SUMMARY]
[CONTENT] tbi | suicide | patients | risk | attempts | suicide attempts | group | tbi group | higher | disease [SUMMARY]
[CONTENT] tbi | suicide | severity | tbi severity | population | number | cause | cause death | reported | small [SUMMARY]
[CONTENT] icd | icd cm | cm | tbi | code | icd cm code | cm code | data | nhird | index [SUMMARY]
[CONTENT] tbi | suicide | 001 | vs | group | tbi group | disease | incidence | higher | risk [SUMMARY]
[CONTENT] suicide | tbi | risk | attempts | suicide attempts | severity | patients | high | increase risk | high risk [SUMMARY]
[CONTENT] tbi | suicide | risk | patients | group | icd | suicide attempts | attempts | 001 | icd cm [SUMMARY]
[CONTENT] TBI ||| second ||| TBI ||| TBI | Chinese [SUMMARY]
[CONTENT] the Taiwan National Health Insurance Research Database | 17 504 | TBI | 70 | TBI | 1 to 4 ||| TBI ||| Kaplan-Meier [SUMMARY]
[CONTENT] 2.23 | TBI | non-TBI | 0.98 | 0.29 | 1000 ||| TBI | TBI ||| TBI [SUMMARY]
[CONTENT] ||| TBI [SUMMARY]
[CONTENT] TBI ||| second ||| TBI ||| TBI | Chinese ||| the Taiwan National Health Insurance Research Database | 17 504 | TBI | 70 | TBI | 1 to 4 ||| TBI ||| Kaplan-Meier ||| ||| 2.23 | TBI | non-TBI | 0.98 | 0.29 | 1000 ||| TBI | TBI ||| TBI ||| ||| TBI [SUMMARY]
Establishing the Domains of a Hospital Disaster Preparedness Evaluation Tool: A Systematic Review.
36052843
Recent disasters emphasize the need for disaster risk mitigation in the health sector. A lack of standardized tools to assess hospital disaster preparedness hinders the improvement of emergency/disaster preparedness in hospitals. Research evaluating hospital disaster preparedness tools is very limited.
INTRODUCTION
A systematic review was performed using three databases, namely Ovid Medline, Embase, and CINAHL, as well as available grey literature sourced via Google, relevant websites, and the reference lists of selected articles. Studies on hospital disaster preparedness published worldwide from 2011-2020 and written in English were selected by two independent reviewers. The global distribution of studies was analyzed according to the World Health Organization's (WHO) six geographical regions, and also according to the four categories of the United Nations Human Development Index (UNHDI). The preparedness themes were identified and categorized according to the 4S conceptual framework: space, stuff, staff, and systems.
METHOD
From a total of 1,568 articles, 53 met the inclusion criteria and were selected for data extraction and synthesis. Few published studies had used a study instrument to assess hospital disaster preparedness. The Eastern Mediterranean region recorded the highest number of such publications. The countries with a low UNHDI had fewer publications. Developing countries focused more on preparedness for natural disasters and less on chemical, biological, radiological, and nuclear (CBRN) preparedness. Infrastructure, logistics, capacity building, and communication were the priority themes under the space, stuff, staff, and systems domains of the 4S framework, respectively. The majority of studies had neglected some crucial aspects of hospital disaster preparedness, such as transport, back-up power, morgue facilities and dead body handling, vaccination, rewards/incentives, and volunteers.
RESULT
Important preparedness themes were identified under each domain of the 4S framework. The neglected aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this review can be used for planning a comprehensive disaster preparedness tool.
CONCLUSION
[ "Civil Defense", "Communication", "Disaster Planning", "Disasters", "Hospitals", "Humans" ]
9470528
Introduction
Every year, millions of people across the world are affected by floods, landslides, cyclones, hurricanes, tornadoes, tsunamis, volcanic eruptions, earthquakes, wildfires, or human-made disasters. In the past ten years, 83% of all disasters triggered by natural hazards were caused by extreme weather and climate-related events. 1 The ongoing global pandemic of coronavirus disease 2019 (COVID-19) has caused a health and economic crisis, emphasizing to the world how important disaster preparedness and disaster resilience are. 2 In addition to the COVID-19 pandemic, multiple climate-related disasters are occurring at the same time. 1 For example, more than 100 other disasters occurred around the world, affecting over 50 million people, during the first six months after COVID-19 was declared a pandemic by the World Health Organization (WHO; Geneva, Switzerland) in March 2020. 3 Asia has suffered the highest number of disaster events. In total, there were 3,068 disasters in Asia from 2000 through 2019. China reported the highest number of disaster events (577 events), followed by India (321 events), the Philippines (304 events), and Indonesia (278 events). 4 Recent disaster events emphasize the need for disaster risk reduction in the health sector, including health services in developed countries. For example, during the 2011 Japan earthquake and tsunami, 80% of the hospitals in the Fukushima, Miyagi, and Iwate prefectures of Japan were destroyed or severely damaged, and many local public health personnel were also affected by the disaster, resulting in the complete paralysis or severe compromise of health services. 5,6 Disasters can cripple health facilities, leading to partial or total collapse of health services, especially in developing countries. 7 For example, after the Algerian earthquake in 2003, 50% of the health facilities in the impacted area were damaged and no longer operational. 
7 A further example occurred when an earthquake struck South Asia in October 2005 and caused the complete destruction of almost 50% of health facilities in the affected areas of Afghanistan, India, and Northern Pakistan, ranging from sophisticated hospitals to rural clinics, overwhelming the existing Emergency Medical Services. 7 Currently, most South Asian countries, including Sri Lanka and India, are in a state of crisis resulting from COVID-19, with overcrowded hospitals, low oxygen supplies, and overwhelmed capacity. 8,9 Sri Lanka, a developing nation and small island in the Indian Ocean, is frequently battered by natural disasters. The most devastating disaster it has ever experienced was the tsunami of 2004, which killed over 30,000 people and internally displaced almost half a million people. The country's health system was severely affected: 44 health institutions were completely damaged and 48 were partially damaged. In addition, 35 health care workers (HCWs) lost their lives, and many more were affected by injuries or by psychological trauma due to the loss of family members or property. 10 Monsoon floods and landslides also affect several health facilities across the country annually. They have sometimes even led to the full or partial evacuation of affected hospitals, as experienced during the floods of 2016 and 2017, due to infrastructure damage or the functional collapse of services. These events have imposed huge recovery costs on the government. 11,12 Sri Lanka has also experienced several man-made disasters resulting in mass-casualty incidents. A 26-year war came to an end in 2009 after more than 64,000 deaths, hundreds of thousands of injuries, and the displacement of more than 800,000 persons. 13 The Easter Sunday bombing attack on April 21, 2019 was a recent human-made disaster that killed 250 people and resulted in more than 500 casualties. 
14 These mass-casualty incidents caused an acute surge of patients to nearby hospitals, interrupting normal hospital operations and overwhelming hospital capacity due to ill-preparedness, poor coordination, and limited resources. 15 Notwithstanding Sri Lanka's vulnerability to disasters, no standard hospital disaster preparedness evaluation tool is currently in use in the country. Such a tool could be used to inform potential improvements to hospital-level disaster preparedness. Therefore, with the goal of establishing a tool appropriate for Sri Lanka, this study aimed to determine the existence and distribution of hospital preparedness tools across the world, and to identify the important components of those study instruments.
null
null
Results
The search resulted in a total of 1,568 articles, including 1,563 from databases and five from grey literature. After removing the duplicates, there were 1,070 articles. Based on the inclusion criteria, only 53 articles were selected for data extraction and synthesis. Figure 1 illustrates the PRISMA flow diagram. Figure 1.PRISMA 2009 Flow Diagram. PRISMA 2009 Flow Diagram. Table 1 summarizes the basic information of the selected articles. All these studies have assessed the preparedness of either the facilities or the HCWs. Altogether, these studies assessed the preparedness of approximately 5,100 HCWs across the world, including different categories of acute care providers such as physicians, doctors, nurses, paramedics, and health care assistants. These studies have also assessed approximately 1,930 different levels of hospitals (government, rural, military, tertiary, and district), health care facilities, and emergency departments (Table 1). Table 1.Basic Information of the Selected ArticlesNumberReferenceYear of PublicationCountry of OriginDisaster TypeSample SizeStudy Type1 77 2011UKAll Hazards41 HCWs(33 Nurses, 8 Health Care Assistants from two MICUs)Interventional Study2 29 2011ChinaPublic Health Emergencies45 HospitalsCross-Sectional Study3 34 2011IranEarthquake114 Health Managers of Hospitals, Health Networks, and HealthCentersDescriptive Cross-Sectional (Quantitative) Study4 41 2011South AfricaPreparedness for 2010 FIFA World CupNine HospitalsCross-SectionalStudy5 42 2011ThailandInfluenza Pandemic179 Health CentersCross-SectionalStudy6 47 2011CanadaMass Emergency Events34 Emergency DepartmentsCross-Sectional Study7 27 2012AustraliaExternal Disaster140 HCWs(Knowledge/ Perception) in Public Teaching HospitalCross-Sectional Study8 53 2012IranAll Hazards102 Emergency Nurses in Tabriz’s Educational HospitalsDescriptive Cross-Sectional Study9 30 2013CambodiaInfluenza Pandemic262 Health Facilities,185 Government Hospitals, 77 District Health 
OfficesCross-Sectional Study10 76 2013AustraliaAll HazardsN/AScoping Review11 63 2013IranAll Hazards24 HospitalsCross-Sectional Study12 64 2013IranAll Hazards15 HospitalsDescriptive Cross-Sectional Study13 28 2014IranNatural DisastersNine HospitalsCross-Sectional Descriptive Study14 32 2014ChinaAll Hazards50 Tertiary HospitalsCross-Sectional Study15 71 2014CanadaExtreme Weather EventSix Health Care FacilitiesMixed Methods Study16 70 2014ChinaAll HazardsN/AModified Delphi Study17 35 2014Europe and AsiaEpidemic Infectious Diseases238 Hospitals(236 European, 2 Western Asian)Descriptive Cross-SectionalStudy18 38 2014ChinaAll Hazards41 HospitalsDescriptive Cross-Sectional Study19 56 2014Saudi ArabiaAll Hazards6 HospitalsCross-Sectional Study20 62 2014ChinaAll Hazards41 HospitalsCross-Sectional Study21 75 2015USACBRNE Preparedness59 Health Care ProvidersRetrospective Observational Survey22 37 2015EnglandEbola Virus112 HospitalsCross-Sectional Study23 43 2015IranNatural Disasters200 HCWs in a Single HospitalCross-SectionalStudy24 44 2015IrelandInfluenza Pandemic46 HospitalsCross-SectionalStudy25 66 2015Yemen2011 Yemeni Revolution11 HospitalsComparative Study26 72 2015ChinaBioterrorism110 Military HospitalsMixed Method Study27 46 2015New ZealandMass Emergency Events911 Acute Care Providers (Doctors, Nurses, Paramedics)Cross-Sectional Study28 25 2016IndiaEbola VirusNine Countries (Bangladesh, Bhutan, Indonesia, Maldives, Myanmar, Nepal, Sri Lanka, Thailand, Timor-Leste)Cross-Sectional Study29 26 2016IranAll Hazards97 HCWs from Various Departments of Military HospitalCross-Sectional Study30 69 201610 Countries:Belgium, France,Italy, Romania, Sweden, UK, Iran, Israel, USA, AustraliaCBRN Emergencies18 ExpertsDelphi Method31 31 2016FinlandChemical Mass-Casualty Situations26 EMSCross-Sectional Study32 36 2016ChinaEbola Virus266 Medical Professionals from 236 HospitalsMixed Method Study33 40 2016ThailandFlood24 HospitalsDescriptive Cross-Sectional Study34 45 2016Saudi ArabiaAll 
Hazards17 HospitalsCross-Sectional Study35 67 2016USAChemical Hazard112 Hospitals in 200599 Hospitals in 2012Longitudinal Study36 74 2016IranAll Hazards15 StudiesSystematic Review37 68 2016USAAll Hazards137 VAMCsQuantitative Study38 33 2017USAAll Hazards80 HospitalsDescriptive/ Analytical Cross-Sectional Study39 39 2017Sri LankaFlood31 Government Health Care FacilitiesDescriptive Cross-Sectional, Mixed Methods Study40 51 2017IranAll Hazards6 HospitalsDescriptive Cross-Sectional Study41 61 2017Hong KongAll Hazards107 Doctors/Nurses from Hong Kong College of Emergency MedicineCross-Sectional Study42 49 2018IranAll Hazards18 HospitalsCross-Sectional Study43 52 2018SwitzerlandAll Hazards83 HospitalsCross-SectionalStudy44 54 2018TanzaniaAll Hazards25 Regional HospitalsDescriptive Cross-Sectional Study45 73 2018IranAll Hazards26 StudiesSystematic Review and Meta-Analysis46 55 2018YemenAll Hazards10 HospitalsCross-Sectional Study47 58 2018CroatiaMass Casualty Incidents80 PhysiciansCross-Sectional Study48 51 2019PakistanAll Hazards18 HospitalsCross-Sectional Study49 57 2019Sri LankaAll Hazards60 Doctors/NursesDescriptive Cross-Sectional Study50 65 2019IranAll Hazards8 HospitalsDescriptive Cross-Sectional Study51 50 2020IndiaCOVID-1958 DoctorsDescriptive Cross-Sectional Study52 59 2020USACOVID-1932 HospitalsCross-Sectional Study53 60 2020Saudi ArabiaAll Hazards315 Clinical StaffCross-Sectional StudyAbbreviations: HCW, Health Care Worker; EMS, Emergency Medical Services; VAMC, Veterans Affairs Medical Center; CBRN, Chemical, Biological, Radio, Nuclear Disasters. Basic Information of the Selected Articles Abbreviations: HCW, Health Care Worker; EMS, Emergency Medical Services; VAMC, Veterans Affairs Medical Center; CBRN, Chemical, Biological, Radio, Nuclear Disasters. Table 2 illustrates the number of publications by hazard type. One-half of the studies (27) covered all hazards, and the rest of the studies focused on a specific type of hazard. 
Among them, there were biological hazards like Ebola, influenza, and COVID-19; natural disasters like earthquake, flood, or extreme weather events; and man-made disasters like chemical-only, chemical, biological, radiological, and nuclear (CBRN), or mass-casualty incidents. Table 2.Number of Publications by Hazard TypeType of HazardNumber of Publications (%)ReferenceAll Hazards27 (51%) 26,32,33,36,38,45,48,49,51–57,60–65,68,70,73,74,76,77 Mass Casualty/Mass Emergency5 (9%) 41,46,47,58,66 Ebola3 (6%) 25,35,37 Influenza3 (6%) 30,42,44 CBRN2 (4%) 69,75 Natural Disasters2 (4%) 28,43 Chemical Hazards2 (4%) 31,67 Flood2 (4%) 39,40 COVID-192 (4%) 50,59 External Disasters1 (2%) 27 Public Health Emergencies1 (2%) 29 Extreme Weather Events1 (2%) 71 Earthquake1 (2%) 34 Bioterrorism1 (2%) 72 Abbreviation: CBRN, Chemical, Biological, Radiological, and Nuclear. Number of Publications by Hazard Type Abbreviation: CBRN, Chemical, Biological, Radiological, and Nuclear. These studies have used different methodologies; however, the majority (41) were cross-sectional studies. 25–65 The next most common were longitudinal studies, 66–68 followed by Delphi, 69,70 mixed method, 71,72 and systematic reviews. 73,74 In addition, there was one retrospective, one observational, 75 one scoping review, 76 and one interventional 77 study (Table 1). Figure 2 demonstrates the number of publications by year. There were six publications on hospital disaster preparedness in 2011. The analysis of publication incidence by year revealed an overall rise in publication rate from 2012-2016. A decline in the publication rate was observed thereafter until 2020. Figure 2.Number of Articles by Year of Publication. Number of Articles by Year of Publication. Altogether, these studies were conducted in 24 different countries around the world. Iran has published the highest number of studies (twelve), followed by China (six), USA (five), and Saudi Arabia (three). 
All the other countries have published one or two studies (Table 3). Table 3.Number of Publications by Country (Including the WHO Region and Reference)WHO RegionCountryTotal No. Publications (%)Hazard Type (No. Publications in Each Rype)ReferenceSouth-East AsiaNumber of Publications: 6 (11%)Sri Lanka2 (4%)AH (1), Flood (1) 39,57 India2 (4%)Ebola (1), COVID (1) 25,50 Thailand2 (4%)Flood (1), Influ (1) 40,42 Western-PacificNumber of Publications:12 (23%)China6 (11%)AH (4), BT (1), PHE (1) 29,32,36,62,70,72 Taiwan1 (2%)AH (1) 38 Hong Kong1 (2%)AH (1) 61 Cambodia1 (2%)Influ (1) 30 Australia2 (4%)AH (1), Ext.dis (1) 27,76 New Zealand1 (2%)MC (1) 46 Eastern- MediterraneanNumber of Publications:18 (34%)Pakistan1 (2%)AH (1) 48 Iran12 (23%)AH (9), ND (2), EQ (1) 26,28,34,43,49,51,53,63–65,73,74 Yemen2 (4%)AH (1), MC (1) 55,66 Saudi Arabia3 (6%)AH (3) 45,56,60 AmericasNumber of Publications: 7 (13%)USA5 (9%)AH (2), CBRNE (1), Chem (1), COVID (1) 33,59,67,68,75 Canada2 (4%)Ex. Weat (1), MC (1) 47,71 EuropeanNumber of Publications:8 (15%)UK2 (4%)AH (1), Ebola (1) 37,77 Italy1 (2%)CBRN (1) 69 Finland1 (2%)Chem (1) 31 Netherland1 (2%)Ebola (1) 35 Ireland1 (2%)Influ (1) 44 Switzerland1 (2%)AH (1) 52 Croatia1 (2%)MC (1) 58 AfricanNumber of Publications: 2 (4%)South Africa1 (2%)MC (1) 41 Tanzania1 (2%)AH (1) 54 Abbreviations: WHO, World Health Organization; AH, All hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather events; Influ, Influenza; Chem, Chemical events; CBRN, Chemical, Biological, Radiological, and Nuclear. Number of Publications by Country (Including the WHO Region and Reference) Abbreviations: WHO, World Health Organization; AH, All hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather events; Influ, Influenza; Chem, Chemical events; CBRN, Chemical, Biological, Radiological, and Nuclear. 
Figure 3 demonstrates the number of publications by WHO region and Table 3 illustrates the number of publications by country (including the WHO region, reference, and disaster type). The Eastern Mediterranean region recorded the greatest number of publications (18), with Iran responsible for two-thirds of the publications in the region. The Western-Pacific region had the second largest number of publications (12), with China recording the highest number in that region. The European and Americas regions had a similar number of publications (eight and seven, respectively), and the UK and USA were the most represented countries in those regions. The South-East Asian region had six publications, with all three included countries (Sri Lanka, India, and Thailand) contributing two publications each. The African region recorded the lowest number of publications (two). The Americas and European regions focused on CBRN emergencies, whereas the other regions focused on natural disasters (Table 3). Figure 3.Number of Publications by WHO Region. Abbreviation: WHO, World Health Organization. Figure 4 illustrates the number of publications by UNHDI. The countries with Very High HD and High HD published the majority of studies, 22 and 24 publications, respectively. Conversely, the countries with Medium HD and Low HD published few studies, four and three publications, respectively. Table 4 illustrates the number of publications by UNHDI, including the country and hazard type. Notably, there were five publications concerned with man-made disasters such as chemical or CBRN incidents among the developed countries, while only one study focused on such disasters (bioterrorism) among the developing countries. It was clear that developing countries focus more on natural disasters. 
Figure 4.Number of Publications by UNHDI.Abbreviations: UNHDI, United Nations Human Development Index; HD, Human Development. Number of Publications by UNHDI. Abbreviations: UNHDI, United Nations Human Development Index; HD, Human Development. Table 4.Number of Publications by UNHDI Including the Country and Hazard TypeHDICountryHazard Type (No. of Publications in Each Type)Very High HD(Developed Countries Except Saudi Arabia)Number of Publications:22 (41%)IrelandInflu (1)SwitzerlandAH (1)Hong KongAH (1)AustraliaAH (1), Ext.dis (1)NetherlandEbola (1)FinlandChem (1)UKAH (1), Ebola (1)New ZealandMC (1)CanadaEx. Weat (1), MC (1)USAAH (2), CBRNE (1), Chem (1), COVID (1)ItalyCBRN (1)Saudi ArabiaAH (3)CroatiaMC (1)High HD(Developing Countries)Number of Publications:24 (45%)IranAH (9), ND (2), EQ (1)Sri LankaAH (1), Flood (1)ThailandFlood (1), Influ (1)ChinaAH (4), BT (1), PHE (1)TaiwanAH (1)South AfricaMC (1)Medium HD(Developing Countries)Number of Publications: 4 (8%)IndiaEbola (2), COVID (1)CambodiaInflu (1)PakistanAH (1)Low HD (Developing Countries)Number of Publications: 3 (6%)YemenAH (1), MC (1)TanzaniaAH (1)Abbreviations: UNHDI, United Nations Human Development Index; HD, Human Development; AH, All hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather events; Influ, Influenza; Chem, Chemical events; CBRN, Chemical, Biological, Radiological, and Nuclear. Number of Publications by UNHDI Including the Country and Hazard Type Abbreviations: UNHDI, United Nations Human Development Index; HD, Human Development; AH, All hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather events; Influ, Influenza; Chem, Chemical events; CBRN, Chemical, Biological, Radiological, and Nuclear. Table 5 summarizes the different themes/components identified in those study instruments according to the 4S domains. 
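As a quick consistency check, the regional percentage shares reported in Table 3 can be recomputed from the raw publication counts (a minimal Python sketch; the counts are copied directly from the table):

```python
# Publication counts by WHO region, copied from Table 3 of this review.
region_counts = {
    "Eastern Mediterranean": 18,
    "Western-Pacific": 12,
    "European": 8,
    "Americas": 7,
    "South-East Asia": 6,
    "African": 2,
}

total = sum(region_counts.values())  # should equal the 53 included studies
shares = {region: round(100 * n / total) for region, n in region_counts.items()}

print(total)
print(shares)
```

The recomputed shares (34%, 23%, 15%, 13%, 11%, and 4%) match the percentages quoted in Table 3, and the counts sum to the 53 included studies.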
Table 5.Analysis of Different Themes/Components of Study Instruments According to 4S DomainsDomainsIndicatorsNumber of Articles (%)ReferenceSpaceInfrastructure20 (38%) 32,33,35,37,39,40,43,47,50,51,56,57,62,63,65,68,70,71,74,76 Isolation Facilities, Decontamination Facilities16 (30%) 33,35,44,45,47,50,52,53,59,60,62,67,70,72,74,76 ICC, ICU, Theatre, Laboratory10 (19%) 25,27,33,35,37,51,57,63,70,74 Morgue Facilities7 (13%) 33,45,54,56,57,60,74 Accessibility/Access Routes5 (9%) 39,42,57,67,68 StuffLogistics31(58%) 25,26,28,30,31,34–40,44,45,47,49,51,54,55,57,58,60,63,65–67,70,72–74,76 PPE26 (49%) 25,32,33,35–39,41,42,44,45,47,50,52,57,59,65,67,69,71,73–77 Medicines, Medical Equipment, Medical Gases, and Other Supplies (Food, Water, Fuel Reserves)26 (49%) 29–32,36–39,41,43,47,48,51,54,56,57,60,62–65,70,71,74–76 Back-Up Communication Devices18 (34%) 27,28,32,36,38,42,49,51,54,57,62,63,67,70,71,74,76,77 Back-Up Power12 (23%) 32,38,43,51,57,62,63,70,71,73,74,76 Stockpiling12 (23%) 32,36,38,44,50,54,57,62,63,70,74,76 Vehicles, Transport Equipment11 (21%) 26,31,43,49,57,63,65,68,70,74,76 StaffTraining/Education/Capacity Building41 (77%) 25,26,28–30,32–41,43,44,46–48,50,51,54–57,59–68,70,72–74,76 Drills/Simulation Exercises34 (64%) 25,29,32,33,35–48,52,54,55,57,58,60–62,65–68,70,72,74,76 Knowledge and Skills13 (25%) 31,38,39,50,52,53,57,58,62,64,65,75,77 Psychosocial Support Staff/Victims13 (25%) 32,38,50,57,58,60–62,69,71,74,76,77 Staff Well-Being, Roster Arrangement, Food, Water, Accommodation, Transport, Domestic Support10 (19%) 33,38,45,49,50,57,62,70,74,76 Vaccination7 (13%) 29,44,48,62,68,70,76 Rewards/Incentives6 (11%) 25,32,44,45,70,76 Volunteers6 (11%) 32,38,40,57,60,76 SystemsInformation Management/Communication System42 (79%) 25–29,32–34,36–39,41–43,45,49,51–57,59–71,73–77 ICS41 (77%) 25–29,32–34,36,38,40–47,49,51,52,54–57,59–66,68,71–77 Disaster Plans36 (68%) 25,27,29,32–34,36–40,42,44,45,47–52,54–60,62,64,66,67,70–72,75,76 Safety/Security System (Including 
Evacuation, Crowd Control, Transportation)32 (60%) 26,28,32,33,35,38,39,41,43,45,49–52,54–57,60,62–64,66,68–74,76,77 Triage27 (51%) 25,32,33,35,38,40,45,46,49,50,53–55,58–64,66,68,70,73,74,76,77 Cooperation and Coordination with Other Health/Non-Health Sector Facilities, and the Public (Including MOUs/Contracts/Agreements)25 (47%) 25,26,29,32–35,37–39,41,45,47,48,50,52,56,57,60,62,63,70,71,74,76 Surge Capacity25 (47%) 25,29,31–33,40,41,44,45,49,50,54–57,60,62,63,66,68,70,71,73,74,76 CES20 (38%) 26,28,29,32,34,40,49,54,56,57,60,62,63,66,68,70,73–76 SOP/Protocols/Guidelines19 (36%) 25,29,34,35,37,38,42,47,48,57–60,66,68–71,74 PDR18 (34%) 32,38,40,48,49,54–56,60,62,63,66,68,70,71,73,74,76 Isolation, Decontamination, and Quarantine15 (28%) 25,31,33,38,41,47,52,53,60,68–70,74,76,77 Surveillance, Early Warning, Outbreak Management System13 (25%) 25,34,35,42,48,60,62,63,67,70–72,76 Waste Management8 (15%) 37,43,47,60,63,68,71,74 Dead Body Handling3 (6%) 33,57,77 Abbreviations: ICC, Incident Command Centre; ICU, Intensive Care Unit; ICS, Incident Command System; SOP, Standard Operating Procedure; MOU, Memorandum of Understanding; PPE, Personal Protective Equipment; CES, Continuity of Essential Services; PDR, Post-Disaster Recovery. Analysis of Different Themes/Components of Study Instruments According to 4S Domains Abbreviations: ICC, Incident Command Centre; ICU, Intensive Care Unit; ICS, Incident Command System; SOP, Standard Operating Procedure; MOU, Memorandum of Understanding; PPE, Personal Protective Equipment; CES, Continuity of Essential Services; PDR, Post-Disaster Recovery. For the space domain: infrastructure and isolation/decontamination facilities were considered more frequently, with 20 and 16 publications, respectively, while morgue facilities and accessibility/access routes were considered less frequently, in only seven and five studies, respectively. 
For the stuff domain: logistics, personal protective equipment (PPE), and medicines/medical equipment/medical gases/other supplies (food, water, fuel reserves) were considered in the majority of studies (31, 26, and 26, respectively), while back-up power, stockpiling, and transport themes were included in a smaller number of studies (12, 12, and 11, respectively). For the staff domain: training/education/capacity building and drills/simulation exercises were included in the majority of studies (41 and 34, respectively), while vaccination, rewards/incentives, and volunteer themes were given the least priority, with seven, six, and six studies, respectively. For the systems domain: information/communication, Incident Command System (ICS), disaster plans, and safety/security themes were considered in most of the studies, while waste management and the handling of dead bodies were given the least priority, considered in only eight and three studies, respectively. Overall, isolation/decontamination facilities, the Incident Command Centre (ICC), intensive care units (ICU), and laboratories were considered frequently under the space domain, while access routes and morgue space were given less priority. For the stuff domain, PPE, medicines, and medical equipment were frequently considered within the broad category of logistics, while back-up power, stockpiling, and transport-related themes were considered less frequently. Capacity-building-related themes were frequently considered under the staff domain, while psychological well-being-related themes were given less priority. Further, communication, ICS, and disaster plans were considered frequently under the systems domain, while waste management and the handling of dead bodies were given less priority in the majority of the study tools.
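The theme tallies in Table 5 boil down to a frequency count over (domain, theme) tags assigned to each study. A minimal sketch of that tallying step (the study records and tags below are hypothetical, for illustration only; they are not the review's data):

```python
from collections import Counter

# Hypothetical study records tagged with the 4S (domain, theme) pairs
# they cover -- illustrative only, not the review's actual data.
studies = [
    {"id": 1, "themes": [("staff", "training"), ("systems", "communication")]},
    {"id": 2, "themes": [("staff", "training"), ("stuff", "PPE")]},
    {"id": 3, "themes": [("systems", "communication"), ("space", "infrastructure")]},
]

# Count how many studies considered each (domain, theme) pair.
theme_counts = Counter(tag for s in studies for tag in s["themes"])

for (domain, theme), n in theme_counts.most_common():
    print(f"{domain}/{theme}: {n} of {len(studies)} studies")
```

Dividing each count by the number of included studies gives the percentages reported alongside each theme in Table 5.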
Conclusion
Few published studies used a toolkit, checklist, or questionnaire to assess hospital disaster preparedness across the world during the decade of 2011-2020. The countries with Low HD had fewer publications, and developing countries generally had less focus on CBRN preparedness. The majority of the past studies neglected some crucial aspects of hospital disaster preparedness. Important preparedness themes were identified under each domain of the 4S framework, and these aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this systematic review can be used for planning a comprehensive disaster preparedness tool.
[ "Aim", "Methodology", "Selection of Articles", "Inclusion Criteria", "Exclusion Criteria", "Data Synthesis and Analysis" ]
[ "The aim of this research was to determine the main components included in hospital disaster preparedness evaluation instruments.", "A systematic review was performed across three journal databases: Ovid Medline (US National Library of Medicine, National Institutes of Health; Bethesda, Maryland USA); Embase (Elsevier; Amsterdam, Netherlands); and CINAHL (EBSCO Information Services; Ipswich, Massachusetts USA) using the appropriate specifications for each database. The PRISMA Systematic Review Guidelines were used for the review, and the PRISMA 2020 checklist is shown in Appendix 1 (available online only). The search of the databases was conducted on November 23, 2020. The details of the search strategy are shown in Appendix 2 (available online only). Additional documents (grey literature) were sourced by a related search of Google (Google Inc.; Mountain View, California USA), relevant websites, and also from the reference lists of selected articles. Keywords used were: hospitals, health facilities, health services, and health personnel. These were combined with: disaster planning, disaster preparedness, and the terms: checklist, surveys, questionnaire, or toolkit.\nSelection of Articles Initially, two independent reviewers (NM and GO) screened titles and abstracts of the retrieved articles for eligibility. From the abstracts selected by both reviewers, full texts were retrieved and considered for eligibility. If discrepancies occurred between reviewers, the reasons were identified and a final decision was made by the main author (NM).\nInclusion Criteria Articles published across the world from 2011 through 2020, written in English language, which were conducted on the disaster preparedness of hospitals, health facilities, or health personnel using toolkits, checklists, or questionnaire surveys were selected for this study. The “safe hospitals” concept became popular after 2010 when the WHO introduced a guidebook on safe hospitals in emergencies and disasters. Therefore, the search was started from 2011, as relevant hospital disaster preparedness assessments were most likely to be published after this time.\nExclusion Criteria The published studies in a language other than English, not available in full text, and not related to health sector disaster preparedness were excluded.\nData Synthesis and Analysis Data extraction and analysis were based on the Joanna Briggs Institute (JBI; Adelaide, Australia) manual for evidence synthesis.\n16\n In order to identify the basic information, the selected studies were analyzed in the following categories: reference; year of publication; country of origin (where the study was published or conducted); type of hazard; sample size; and the study design.\nThe global distribution of studies was analyzed according to the WHO’s six geographical regions, namely South-East Asia, the Western-Pacific, the Eastern Mediterranean, the Americas, Europe, and Africa regions.\n17,18\n The distribution of studies was also analyzed according to the four categories of the United Nations Human Development Index (UNHDI).\n19\n The UNHDI comprises three dimensions: health, education, and standard of living. The health dimension is measured by life expectancy at birth, the education dimension by mean years of schooling for adult persons aged 25 years and above and expected years of schooling for children at school entering age, and the standard of living dimension is assessed by gross national income per capita. Based on the cut-off points of the HDI, the UN has identified four categories of human development (HD); Low HD, Medium HD, High HD, and Very High HD.\n19\n\n\nThe global literature classifies a hospital’s disaster preparedness and response in terms of the “4S’s” – space, stuff, staff, and systems.\n20–24\n The space domain includes: the physical space needed for patient care and workspace (infrastructure and their access routes). The stuff domain includes logistics, equipment, and supplies. The staff domain includes human resources. And the system domain includes all the plans, procedures, and protocols needed for preparedness management.\n20–24\n Similarly, preparedness themes of each selected study were identified according to this 4S conceptual framework.
Based on the cut-off points of the HDI, the UN has identified four categories of human development (HD); Low HD, Medium HD, High HD, and Very High HD.\n19\n\n\nThe global literature classifies a hospital’s disaster preparedness and response in terms of the “4S’s” – space, stuff, staff, and systems.\n20–24\n The space domain includes: the physical space needed for patient care and workspace (infrastructure and their access routes). The stuff domain includes logistics, equipment, and supplies. The staff domain includes human resources. And the system domain includes all the plans, procedures, and protocols needed for preparedness management.\n20–24\n Similarly, preparedness themes of each selected study were identified according to this 4S conceptual framework.", "Initially, two independent reviewers (NM and GO) screened titles and abstracts of the retrieved articles for eligibility. From the abstracts selected by both reviewers, full texts were retrieved and considered for eligibility. If discrepancies occurred between reviewers, the reasons were identified and a final decision was made by the main author (NM).", "Articles published across the world from 2011 through 2020, written in English language, which were conducted on the disaster preparedness of hospitals, health facilities, or health personnel using toolkits, checklists, or questionnaire surveys were selected for this study. The “safe hospitals” concept became popular after 2010 when the WHO introduced a guidebook on safe hospitals in emergencies and disasters. 
Therefore, the search was started from 2011, as relevant hospital disaster preparedness assessments were most likely to be published after this time.", "The published studies in a language other than English, not available in full text, and not related to health sector disaster preparedness were excluded.", "Data extraction and analysis were based on the Joanna Briggs Institute (JBI; Adelaide, Australia) manual for evidence synthesis.\n16\n In order to identify the basic information, the selected studies were analyzed in the following categories: reference; year of publication; country of origin (where the study was published or conducted); type of hazard; sample size; and the study design.\nThe global distribution of studies was analyzed according to the WHO’s six geographical regions, namely South-East Asia, the Western-Pacific, the Eastern Mediterranean, the Americas, Europe, and Africa regions.\n17,18\n The distribution of studies was also analyzed according to the four categories of the United Nations Human Development Index (UNHDI).\n19\n The UNHDI comprises three dimensions: health, education, and standard of living. The health dimension is measured by life expectancy at birth, the education dimension by mean years of schooling for adult persons aged 25 years and above and expected years of schooling for children at school entering age, and the standard of living dimension is assessed by gross national income per capita. Based on the cut-off points of the HDI, the UN has identified four categories of human development (HD); Low HD, Medium HD, High HD, and Very High HD.\n19\n\n\nThe global literature classifies a hospital’s disaster preparedness and response in terms of the “4S’s” – space, stuff, staff, and systems.\n20–24\n The space domain includes: the physical space needed for patient care and workspace (infrastructure and their access routes). The stuff domain includes logistics, equipment, and supplies. The staff domain includes human resources. 
And the system domain includes all the plans, procedures, and protocols needed for preparedness management.\n20–24\n Similarly, preparedness themes of each selected study were identified according to this 4S conceptual framework." ]
[ "other", "other", "other", "other", "other", "other" ]
[ "Introduction", "Aim", "Methodology", "Selection of Articles", "Inclusion Criteria", "Exclusion Criteria", "Data Synthesis and Analysis", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "Every year, millions of people across the world are being affected by floods, landslides, cyclones, hurricanes, tornados, tsunamis, volcanic eruptions, earthquakes, wildfires, or human-made disasters. In the past ten years, 83% of all disasters triggered by natural hazards were caused by extreme weather and climate-related events.\n1\n The on-going global pandemic of coronavirus disease 2019 (COVID-19) has caused a health and economic crisis emphasizing to the world how important disaster preparedness and disaster resilience are.\n2\n\n\nIn addition to the COVID-19 pandemic, multiple climate-related disasters are also happening at the same time.\n1\n For example, more than 100 other disasters occurred around the world affecting over 50 million people during the first six months after COVID-19 was declared a pandemic by the World Health Organization (WHO; Geneva, Switzerland) in March 2020.\n3\n\n\nAsia has suffered the highest number of disaster events. In total, there were 3,068 disasters occurring in Asia from 2000 through 2019. China reported the highest number of disaster events (577 events), followed by India (321 events), the Philippines (304 events), and Indonesia (278 events).\n4\n\n\nRecent disaster events emphasize the need for disaster risk reduction in the health sector as well as health services in developed countries. 
For example, in 2011 during the Japan earthquake and tsunami, 80% of the hospitals in the Fukushima, Miyagi, and Iwate prefectures of Japan were destroyed or severely damaged, and many local public health personnel were also affected by the disaster, resulting in complete paralysis or severe compromise of the health services.\n5,6\n\n\nDisasters can cripple health facilities, leading to partial or total collapse of health services, especially in developing countries.\n7\n For example, after the Algerian earthquake in 2003, 50% of the health facilities in the impacted area were damaged and were no longer operational.\n7\n A further example occurred when an earthquake struck South Asia in October 2005 and caused the complete destruction of almost 50% of health facilities in the affected areas of Afghanistan, India, and Northern Pakistan, ranging from sophisticated hospitals to rural clinics, overwhelming the existing Emergency Medical Services.\n7\n Currently, most South Asian countries, including Sri Lanka and India, are in a state of crisis resulting from COVID-19, with overcrowded hospitals, low oxygen supplies, and overwhelmed capacity.\n8,9\n\n\nSri Lanka, a developing nation and small island in the Indian Ocean, is frequently battered by natural disasters. The most devastating disaster it has ever experienced was the tsunami of 2004, which killed over 30,000 people and internally displaced almost half a million people. The health systems of the country were severely affected: 44 health institutions were completely damaged and 48 were partially damaged. In addition, 35 health care workers (HCWs) lost their lives, and a large number of health workers were affected by injuries or psychological trauma due to the loss of their family members or property.\n10\n Monsoon floods and landslides also affect several health facilities across the country annually. Sometimes, they have even led to the full or partial evacuation of affected hospitals, as experienced, for example, during the floods of 2016 and 2017, due to infrastructure damage or the functional collapse of services. These instances have resulted in huge economic impacts on the government for recovery-related needs.\n11,12\n\n\nSri Lanka has also experienced several man-made disasters resulting in mass-casualty incidents. A 26-year war came to an end in 2009 after more than 64,000 deaths, hundreds of thousands of injuries, and the displacement of more than 800,000 persons.\n13\n The Easter Sunday bombing attack on April 21, 2019 was a recent human-made disaster which killed 250 people and resulted in more than 500 casualties.\n14\n These mass-casualty incidents caused an acute surge of patients to nearby hospitals, interrupting normal hospital operations and overwhelming hospital capacity due to ill-preparedness, poor coordination, and limited resources.\n15\n\n\nNotwithstanding the vulnerability of Sri Lanka to disasters, there is currently no standard hospital disaster preparedness evaluation tool in use in Sri Lanka. Such a tool could be used to inform potential improvements to hospital-level disaster preparedness. Therefore, with the goal of establishing a tool appropriate for Sri Lanka, this study aimed to determine the existence and distribution of hospital preparedness tools across the world, and also to identify the important components of those study instruments.", "The aim of this research was to determine the main components included in hospital disaster preparedness evaluation instruments.", "A systematic review was performed across three journal databases: Ovid Medline (US National Library of Medicine, National Institutes of Health; Bethesda, Maryland USA); Embase (Elsevier; Amsterdam, Netherlands); and CINAHL (EBSCO Information Services; Ipswich, Massachusetts USA) using the appropriate specifications for each database. 
The PRISMA Systematic Review Guidelines were used for the review, and the PRISMA 2020 checklist is shown in Appendix 1 (available online only). The search of the databases was conducted on November 23, 2020. The details of the search strategy are shown in Appendix 2 (available online only). Additional documents (grey literature) were sourced by a related search of Google (Google Inc.; Mountain View, California USA), relevant websites, and also from the reference lists of selected articles. Keywords used were: hospitals, health facilities, health services, and health personnel. These were combined with: disaster planning, disaster preparedness, and the terms: checklist, surveys, questionnaire, or toolkit.\nSelection of Articles Initially, two independent reviewers (NM and GO) screened titles and abstracts of the retrieved articles for eligibility. From the abstracts selected by both reviewers, full texts were retrieved and considered for eligibility. If discrepancies occurred between reviewers, the reasons were identified and a final decision was made by the main author (NM).\nInclusion Criteria Articles published across the world from 2011 through 2020, written in English language, which were conducted on the disaster preparedness of hospitals, health facilities, or health personnel using toolkits, checklists, or questionnaire surveys were selected for this study. The “safe hospitals” concept became popular after 2010 when the WHO introduced a guidebook on safe hospitals in emergencies and disasters. Therefore, the search was started from 2011, as relevant hospital disaster preparedness assessments were most likely to be published after this time.\nExclusion Criteria The published studies in a language other than English, not available in full text, and not related to health sector disaster preparedness were excluded.\nData Synthesis and Analysis Data extraction and analysis were based on the Joanna Briggs Institute (JBI; Adelaide, Australia) manual for evidence synthesis.\n16\n In order to identify the basic information, the selected studies were analyzed in the following categories: reference; year of publication; country of origin (where the study was published or conducted); type of hazard; sample size; and the study design.\nThe global distribution of studies was analyzed according to the WHO’s six geographical regions, namely South-East Asia, the Western-Pacific, the Eastern Mediterranean, the Americas, Europe, and Africa regions.\n17,18\n The distribution of studies was also analyzed according to the four categories of the United Nations Human Development Index (UNHDI).\n19\n The UNHDI comprises three dimensions: health, education, and standard of living. The health dimension is measured by life expectancy at birth, the education dimension by mean years of schooling for adult persons aged 25 years and above and expected years of schooling for children at school entering age, and the standard of living dimension is assessed by gross national income per capita. Based on the cut-off points of the HDI, the UN has identified four categories of human development (HD); Low HD, Medium HD, High HD, and Very High HD.\n19\n\n\nThe global literature classifies a hospital’s disaster preparedness and response in terms of the “4S’s” – space, stuff, staff, and systems.\n20–24\n The space domain includes: the physical space needed for patient care and workspace (infrastructure and their access routes). The stuff domain includes logistics, equipment, and supplies. The staff domain includes human resources. And the system domain includes all the plans, procedures, and protocols needed for preparedness management.\n20–24\n Similarly, preparedness themes of each selected study were identified according to this 4S conceptual framework.", "Initially, two independent reviewers (NM and GO) screened titles and abstracts of the retrieved articles for eligibility. From the abstracts selected by both reviewers, full texts were retrieved and considered for eligibility. If discrepancies occurred between reviewers, the reasons were identified and a final decision was made by the main author (NM).", "Articles published across the world from 2011 through 2020, written in English language, which were conducted on the disaster preparedness of hospitals, health facilities, or health personnel using toolkits, checklists, or questionnaire surveys were selected for this study. The “safe hospitals” concept became popular after 2010 when the WHO introduced a guidebook on safe hospitals in emergencies and disasters. Therefore, the search was started from 2011, as relevant hospital disaster preparedness assessments were most likely to be published after this time.", "The published studies in a language other than English, not available in full text, and not related to health sector disaster preparedness were excluded.", "Data extraction and analysis were based on the Joanna Briggs Institute (JBI; Adelaide, Australia) manual for evidence synthesis.\n16\n In order to identify the basic information, the selected studies were analyzed in the following categories: reference; year of publication; country of origin (where the study was published or conducted); type of hazard; sample size; and the study design.\nThe global distribution of studies was analyzed according to the WHO’s six geographical regions, namely South-East Asia, the Western-Pacific, the Eastern Mediterranean, the Americas, Europe, and Africa regions.\n17,18\n The distribution of studies was also analyzed according to the four categories of the United Nations Human Development Index (UNHDI).\n19\n The UNHDI comprises three dimensions: health, education, and standard of living. The health dimension is measured by life expectancy at birth, the education dimension by mean years of schooling for adult persons aged 25 years and above and expected years of schooling for children at school entering age, and the standard of living dimension is assessed by gross national income per capita. Based on the cut-off points of the HDI, the UN has identified four categories of human development (HD); Low HD, Medium HD, High HD, and Very High HD.\n19\n\n\nThe global literature classifies a hospital’s disaster preparedness and response in terms of the “4S’s” – space, stuff, staff, and systems.\n20–24\n The space domain includes: the physical space needed for patient care and workspace (infrastructure and their access routes). The stuff domain includes logistics, equipment, and supplies. The staff domain includes human resources. 
And the system domain includes all the plans, procedures, and protocols needed for preparedness management.\n20–24\n Similarly, preparedness themes of each selected study were identified according to this 4S conceptual framework.", "The search resulted in a total of 1,568 articles, including 1,563 from databases and five from grey literature. After removing the duplicates, there were 1,070 articles. Based on the inclusion criteria, only 53 articles were selected for data extraction and synthesis. Figure 1 illustrates the PRISMA flow diagram.\n\nFigure 1. PRISMA 2009 Flow Diagram.\nTable 1 summarizes the basic information of the selected articles. All these studies have assessed the preparedness of either the facilities or the HCWs. Altogether, these studies assessed the preparedness of approximately 5,100 HCWs across the world, including different categories of acute care providers such as physicians, doctors, nurses, paramedics, and health care assistants. These studies have also assessed approximately 1,930 different levels of hospitals (government, rural, military, tertiary, and district), health care facilities, and emergency departments (Table 1).\n\nTable 1. Basic Information of the Selected Articles (Number; Reference; Year of Publication; Country of Origin; Disaster Type; Sample Size; Study Type):\n1; 77; 2011; UK; All Hazards; 41 HCWs (33 Nurses, 8 Health Care Assistants from two MICUs); Interventional Study\n2; 29; 2011; China; Public Health Emergencies; 45 Hospitals; Cross-Sectional Study\n3; 34; 2011; Iran; Earthquake; 114 Health Managers of Hospitals, Health Networks, and Health Centers; Descriptive Cross-Sectional (Quantitative) Study\n4; 41; 2011; South Africa; Preparedness for 2010 FIFA World Cup; Nine Hospitals; Cross-Sectional Study\n5; 42; 2011; Thailand; Influenza Pandemic; 179 Health Centers; Cross-Sectional Study\n6; 47; 2011; Canada; Mass Emergency Events; 34 Emergency Departments; Cross-Sectional Study\n7; 27; 2012; Australia; External Disaster; 140 HCWs (Knowledge/Perception) in Public Teaching Hospital; Cross-Sectional Study\n8; 53; 2012; Iran; All Hazards; 102 Emergency Nurses in Tabriz’s Educational Hospitals; Descriptive Cross-Sectional Study\n9; 30; 2013; Cambodia; Influenza Pandemic; 262 Health Facilities, 185 Government Hospitals, 77 District Health Offices; Cross-Sectional Study\n10; 76; 2013; Australia; All Hazards; N/A; Scoping Review\n11; 63; 2013; Iran; All Hazards; 24 Hospitals; Cross-Sectional Study\n12; 64; 2013; Iran; All Hazards; 15 Hospitals; Descriptive Cross-Sectional Study\n13; 28; 2014; Iran; Natural Disasters; Nine Hospitals; Cross-Sectional Descriptive Study\n14; 32; 2014; China; All Hazards; 50 Tertiary Hospitals; Cross-Sectional Study\n15; 71; 2014; Canada; Extreme Weather Event; Six Health Care Facilities; Mixed Methods Study\n16; 70; 2014; China; All Hazards; N/A; Modified Delphi Study\n17; 35; 2014; Europe and Asia; Epidemic Infectious Diseases; 238 Hospitals (236 European, 2 Western Asian); Descriptive Cross-Sectional Study\n18; 38; 2014; China; All Hazards; 41 Hospitals; Descriptive Cross-Sectional Study\n19; 56; 2014; Saudi Arabia; All Hazards; 6 Hospitals; Cross-Sectional Study\n20; 62; 2014; China; All Hazards; 41 Hospitals; Cross-Sectional Study\n21; 75; 2015; USA; CBRNE Preparedness; 59 Health Care Providers; Retrospective Observational Survey\n22; 37; 2015; England; Ebola Virus; 112 Hospitals; Cross-Sectional Study\n23; 43; 2015; Iran; Natural Disasters; 200 HCWs in a Single Hospital; Cross-Sectional Study\n24; 44; 2015; Ireland; Influenza Pandemic; 46 Hospitals; Cross-Sectional Study\n25; 66; 2015; Yemen; 2011 Yemeni Revolution; 11 Hospitals; Comparative Study\n26; 72; 2015; China; Bioterrorism; 110 Military Hospitals; Mixed Method Study\n27; 46; 2015; New Zealand; Mass Emergency Events; 911 Acute Care Providers (Doctors, Nurses, Paramedics); Cross-Sectional Study\n28; 25; 2016; India; Ebola Virus; Nine Countries (Bangladesh, Bhutan, Indonesia, Maldives, Myanmar, Nepal, Sri Lanka, Thailand, Timor-Leste); Cross-Sectional Study\n29; 26; 2016; Iran; All Hazards; 97 HCWs from Various Departments of Military Hospital; Cross-Sectional Study\n30; 69; 2016; 10 Countries: Belgium, France, Italy, Romania, Sweden, UK, Iran, Israel, USA, Australia; CBRN Emergencies; 18 Experts; Delphi Method\n31; 31; 2016; Finland; Chemical Mass-Casualty Situations; 26 EMS; Cross-Sectional Study\n32; 36; 2016; China; Ebola Virus; 266 Medical Professionals from 236 Hospitals; Mixed Method Study\n33; 40; 2016; Thailand; Flood; 24 Hospitals; Descriptive Cross-Sectional Study\n34; 45; 2016; Saudi Arabia; All Hazards; 17 Hospitals; Cross-Sectional Study\n35; 67; 2016; USA; Chemical Hazard; 112 Hospitals in 2005, 99 Hospitals in 2012; Longitudinal Study\n36; 74; 2016; Iran; All Hazards; 15 Studies; Systematic Review\n37; 68; 2016; USA; All Hazards; 137 VAMCs; Quantitative Study\n38; 33; 2017; USA; All Hazards; 80 Hospitals; Descriptive/Analytical Cross-Sectional Study\n39; 39; 2017; Sri Lanka; Flood; 31 Government Health Care Facilities; Descriptive Cross-Sectional, Mixed Methods Study\n40; 51; 2017; Iran; All Hazards; 6 Hospitals; Descriptive Cross-Sectional Study\n41; 61; 2017; Hong Kong; All Hazards; 107 Doctors/Nurses from Hong Kong College of Emergency Medicine; Cross-Sectional Study\n42; 49; 2018; Iran; All Hazards; 18 Hospitals; Cross-Sectional Study\n43; 52; 2018; Switzerland; All Hazards; 83 Hospitals; Cross-Sectional Study\n44; 54; 2018; Tanzania; All Hazards; 25 Regional Hospitals; Descriptive Cross-Sectional Study\n45; 73; 2018; Iran; All Hazards; 26 Studies; Systematic Review and Meta-Analysis\n46; 55; 2018; Yemen; All Hazards; 10 Hospitals; Cross-Sectional Study\n47; 58; 2018; Croatia; Mass Casualty Incidents; 80 Physicians; Cross-Sectional Study\n48; 51; 2019; Pakistan; All Hazards; 18 Hospitals; Cross-Sectional Study\n49; 57; 2019; Sri Lanka; All Hazards; 60 Doctors/Nurses; Descriptive Cross-Sectional Study\n50; 65; 2019; Iran; All Hazards; 8 Hospitals; Descriptive Cross-Sectional Study\n51; 50; 2020; India; COVID-19; 58 Doctors; Descriptive Cross-Sectional Study\n52; 59; 2020; USA; COVID-19; 32 Hospitals; Cross-Sectional Study\n53; 60; 2020; Saudi Arabia; All Hazards; 315 Clinical Staff; Cross-Sectional Study\nAbbreviations: HCW, Health Care Worker; EMS, Emergency Medical Services; VAMC, Veterans Affairs Medical Center; CBRN, Chemical, Biological, Radio, Nuclear Disasters.\nTable 2 illustrates the number of publications by hazard type. One-half of the studies (27) covered all hazards, and the rest of the studies focused on a specific type of hazard. Among them, there were biological hazards like Ebola, influenza, and COVID-19; natural disasters like earthquake, flood, or extreme weather events; and man-made disasters like chemical-only, chemical, biological, radiological, and nuclear (CBRN), or mass-casualty incidents.\n\nTable 2. Number of Publications by Hazard Type (Type of Hazard; Number of Publications (%); Reference):\nAll Hazards; 27 (51%); 26,32,33,36,38,45,48,49,51–57,60–65,68,70,73,74,76,77\nMass Casualty/Mass Emergency; 5 (9%); 41,46,47,58,66\nEbola; 3 (6%); 25,35,37\nInfluenza; 3 (6%); 30,42,44\nCBRN; 2 (4%); 69,75\nNatural Disasters; 2 (4%); 28,43\nChemical Hazards; 2 (4%); 31,67\nFlood; 2 (4%); 39,40\nCOVID-19; 2 (4%); 50,59\nExternal Disasters; 1 (2%); 27\nPublic Health Emergencies; 1 (2%); 29\nExtreme Weather Events; 1 (2%); 71\nEarthquake; 1 (2%); 34\nBioterrorism; 1 (2%); 72\nAbbreviation: CBRN, Chemical, Biological, Radiological, and Nuclear.\nThese studies have used different methodologies; however, the majority (41) were cross-sectional studies.\n25–65\n The next most common were longitudinal studies,\n66–68\n followed by Delphi,\n69,70\n mixed method,\n71,72\n and systematic reviews.\n73,74\n In addition, there was one retrospective observational,\n75\n one scoping review,\n76\n and one interventional\n77\n study (Table 
1).\nFigure 2 demonstrates the number of publications by year. There were six publications on hospital disaster preparedness in 2011. The analysis of publications by year revealed an overall rise in the publication rate from 2012-2016, followed by a decline until 2020.\n\nFigure 2. Number of Articles by Year of Publication.\nAltogether, these studies were conducted in 24 different countries around the world. Iran published the highest number of studies (twelve), followed by China (six), the USA (five), and Saudi Arabia (three). All the other countries published one or two studies (Table 3).\n\nTable 3. Number of Publications by Country (Including the WHO Region and Reference)\nWHO Region | Country | Total No. Publications (%) | Hazard Type (No. Publications in Each Type) | Reference\nSouth-East Asia (6 publications, 11%) | Sri Lanka | 2 (4%) | AH (1), Flood (1) | 39,57\nSouth-East Asia | India | 2 (4%) | Ebola (1), COVID (1) | 25,50\nSouth-East Asia | Thailand | 2 (4%) | Flood (1), Influ (1) | 40,42\nWestern-Pacific (12 publications, 23%) | China | 6 (11%) | AH (4), BT (1), PHE (1) | 29,32,36,62,70,72\nWestern-Pacific | Taiwan | 1 (2%) | AH (1) | 38\nWestern-Pacific | Hong Kong | 1 (2%) | AH (1) | 61\nWestern-Pacific | Cambodia | 1 (2%) | Influ (1) | 30\nWestern-Pacific | Australia | 2 (4%) | AH (1), Ext.dis (1) | 27,76\nWestern-Pacific | New Zealand | 1 (2%) | MC (1) | 46\nEastern Mediterranean (18 publications, 34%) | Pakistan | 1 (2%) | AH (1) | 48\nEastern Mediterranean | Iran | 12 (23%) | AH (9), ND (2), EQ (1) | 26,28,34,43,49,51,53,63–65,73,74\nEastern Mediterranean | Yemen | 2 (4%) | AH (1), MC (1) | 55,66\nEastern Mediterranean | Saudi Arabia | 3 (6%) | AH (3) | 45,56,60\nAmericas (7 publications, 13%) | USA | 5 (9%) | AH (2), CBRNE (1), Chem (1), COVID (1) | 33,59,67,68,75\nAmericas | Canada | 2 (4%) | Ex. Weat (1), MC (1) | 47,71\nEuropean (8 publications, 15%) | UK | 2 (4%) | AH (1), Ebola (1) | 37,77\nEuropean | Italy | 1 (2%) | CBRN (1) | 69\nEuropean | Finland | 1 (2%) | Chem (1) | 31\nEuropean | Netherlands | 1 (2%) | Ebola (1) | 35\nEuropean | Ireland | 1 (2%) | Influ (1) | 44\nEuropean | Switzerland | 1 (2%) | AH (1) | 52\nEuropean | Croatia | 1 (2%) | MC (1) | 58\nAfrican (2 publications, 4%) | South Africa | 1 (2%) | MC (1) | 41\nAfrican | Tanzania | 1 (2%) | AH (1) | 54\nAbbreviations: WHO, World Health Organization; AH, All hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather events; Influ, Influenza; Chem, Chemical events; CBRN, Chemical, Biological, Radiological, and Nuclear.\nFigure 3 demonstrates the number of publications by WHO region, and Table 3 illustrates the number of publications by country (including the WHO region, reference, and disaster type). The Eastern Mediterranean region recorded the greatest number of publications (18), with Iran responsible for two-thirds of the publications in the region. The Western-Pacific region had the second largest number of publications (12), with China recording the highest number in that region. The European and Americas regions had a similar number of publications (eight and seven, respectively), and the UK and USA were the most represented countries in those regions. The South-East Asian region had six publications, where all three included countries (Sri Lanka, India, and Thailand) had two publications each. The African region recorded the lowest number of publications (two). 
The Americas and European regions had a focus on CBRN emergencies, whereas other regions focused on natural disasters (Table 3).\n\nFigure 3. Number of Publications by WHO Region.\nAbbreviation: WHO, World Health Organization.\nFigure 4 illustrates the number of publications by UNHDI. The countries with Very High HD and High HD published the majority of studies, with 22 and 24 publications, respectively. Conversely, the countries with Medium HD and Low HD published few studies, with four and three publications, respectively. Table 4 illustrates the number of publications by UNHDI, including the country and hazard type. Notably, there were five publications concerned with man-made disasters such as chemical or CBRN incidents among the developed countries, while only one study focused on such disasters (bio-terrorism: BT) among the developing countries. It was clear that developing countries focus more on natural disasters.\n\nFigure 4. Number of Publications by UNHDI.\nAbbreviations: UNHDI, United Nations Human Development Index; HD, Human Development.\n\nTable 4. Number of Publications by UNHDI Including the Country and Hazard Type\nHDI | Country | Hazard Type (No. of Publications in Each Type)\nVery High HD (Developed Countries Except Saudi Arabia; 22 publications, 41%) | Ireland | Influ (1)\nVery High HD | Switzerland | AH (1)\nVery High HD | Hong Kong | AH (1)\nVery High HD | Australia | AH (1), Ext.dis (1)\nVery High HD | Netherlands | Ebola (1)\nVery High HD | Finland | Chem (1)\nVery High HD | UK | AH (1), Ebola (1)\nVery High HD | New Zealand | MC (1)\nVery High HD | Canada | Ex. Weat (1), MC (1)\nVery High HD | USA | AH (2), CBRNE (1), Chem (1), COVID (1)\nVery High HD | Italy | CBRN (1)\nVery High HD | Saudi Arabia | AH (3)\nVery High HD | Croatia | MC (1)\nHigh HD (Developing Countries; 24 publications, 45%) | Iran | AH (9), ND (2), EQ (1)\nHigh HD | Sri Lanka | AH (1), Flood (1)\nHigh HD | Thailand | Flood (1), Influ (1)\nHigh HD | China | AH (4), BT (1), PHE (1)\nHigh HD | Taiwan | AH (1)\nHigh HD | South Africa | MC (1)\nMedium HD (Developing Countries; 4 publications, 8%) | India | Ebola (1), COVID (1)\nMedium HD | Cambodia | Influ (1)\nMedium HD | Pakistan | AH (1)\nLow HD (Developing Countries; 3 publications, 6%) | Yemen | AH (1), MC (1)\nLow HD | Tanzania | AH (1)\nAbbreviations: UNHDI, United Nations Human Development Index; HD, Human Development; AH, All hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather events; Influ, Influenza; Chem, Chemical events; CBRN, Chemical, Biological, Radiological, and Nuclear.\nTable 5 summarizes the different themes/components identified in those study instruments according to the 4S domains.\n\nTable 5. Analysis of Different Themes/Components of Study Instruments According to 4S Domains\nDomains | Indicators | Number of Articles (%) | Reference\nSpace | Infrastructure | 20 (38%) | 32,33,35,37,39,40,43,47,50,51,56,57,62,63,65,68,70,71,74,76\nSpace | Isolation Facilities, Decontamination Facilities | 16 (30%) | 33,35,44,45,47,50,52,53,59,60,62,67,70,72,74,76\nSpace | ICC, ICU, Theatre, Laboratory | 10 (19%) | 25,27,33,35,37,51,57,63,70,74\nSpace | Morgue Facilities | 7 (13%) | 33,45,54,56,57,60,74\nSpace | Accessibility/Access Routes | 5 (9%) | 39,42,57,67,68\nStuff | Logistics | 31 (58%) | 25,26,28,30,31,34–40,44,45,47,49,51,54,55,57,58,60,63,65–67,70,72–74,76\nStuff | PPE | 26 (49%) | 25,32,33,35–39,41,42,44,45,47,50,52,57,59,65,67,69,71,73–77\nStuff | Medicines, Medical Equipment, Medical Gases, and Other Supplies (Food, Water, Fuel Reserves) | 26 (49%) | 29–32,36–39,41,43,47,48,51,54,56,57,60,62–65,70,71,74–76\nStuff | Back-Up Communication Devices | 18 (34%) | 27,28,32,36,38,42,49,51,54,57,62,63,67,70,71,74,76,77\nStuff | Back-Up Power | 12 (23%) | 32,38,43,51,57,62,63,70,71,73,74,76\nStuff | Stockpiling | 12 (23%) | 32,36,38,44,50,54,57,62,63,70,74,76\nStuff | Vehicles, Transport Equipment | 11 (21%) | 26,31,43,49,57,63,65,68,70,74,76\nStaff | Training/Education/Capacity Building | 41 (77%) | 25,26,28–30,32–41,43,44,46–48,50,51,54–57,59–68,70,72–74,76\nStaff | Drills/Simulation Exercises | 34 (64%) | 25,29,32,33,35–48,52,54,55,57,58,60–62,65–68,70,72,74,76\nStaff | Knowledge and Skills | 13 (25%) | 31,38,39,50,52,53,57,58,62,64,65,75,77\nStaff | Psychosocial Support Staff/Victims | 13 (25%) | 32,38,50,57,58,60–62,69,71,74,76,77\nStaff | Staff Well-Being, Roster Arrangement, Food, Water, Accommodation, Transport, Domestic Support | 10 (19%) | 33,38,45,49,50,57,62,70,74,76\nStaff | Vaccination | 7 (13%) | 29,44,48,62,68,70,76\nStaff | Rewards/Incentives | 6 (11%) | 25,32,44,45,70,76\nStaff | Volunteers | 6 (11%) | 32,38,40,57,60,76\nSystems | Information Management/Communication System | 42 (79%) | 25–29,32–34,36–39,41–43,45,49,51–57,59–71,73–77\nSystems | ICS | 41 (77%) | 25–29,32–34,36,38,40–47,49,51,52,54–57,59–66,68,71–77\nSystems | Disaster Plans | 36 (68%) | 25,27,29,32–34,36–40,42,44,45,47–52,54–60,62,64,66,67,70–72,75,76\nSystems | Safety/Security System (Including Evacuation, Crowd Control, Transportation) | 32 (60%) | 26,28,32,33,35,38,39,41,43,45,49–52,54–57,60,62–64,66,68–74,76,77\nSystems | Triage | 27 (51%) | 25,32,33,35,38,40,45,46,49,50,53–55,58–64,66,68,70,73,74,76,77\nSystems | Cooperation and Coordination with Other Health/Non-Health Sector Facilities, and the Public (Including MOUs/Contracts/Agreements) | 25 (47%) | 25,26,29,32–35,37–39,41,45,47,48,50,52,56,57,60,62,63,70,71,74,76\nSystems | Surge Capacity | 25 (47%) | 25,29,31–33,40,41,44,45,49,50,54–57,60,62,63,66,68,70,71,73,74,76\nSystems | CES | 20 (38%) | 26,28,29,32,34,40,49,54,56,57,60,62,63,66,68,70,73–76\nSystems | SOP/Protocols/Guidelines | 19 (36%) | 25,29,34,35,37,38,42,47,48,57–60,66,68–71,74\nSystems | PDR | 18 (34%) | 32,38,40,48,49,54–56,60,62,63,66,68,70,71,73,74,76\nSystems | Isolation, Decontamination, and Quarantine | 15 (28%) | 25,31,33,38,41,47,52,53,60,68–70,74,76,77\nSystems | Surveillance, Early Warning, Outbreak Management System | 13 (25%) | 25,34,35,42,48,60,62,63,67,70–72,76\nSystems | Waste Management | 8 (15%) | 37,43,47,60,63,68,71,74\nSystems | Dead Body Handling | 3 (6%) | 33,57,77\nAbbreviations: ICC, Incident Command Centre; ICU, Intensive Care Unit; ICS, Incident Command System; SOP, Standard Operating Procedure; MOU, Memorandum of Understanding; PPE, Personal Protective Equipment; CES, Continuity of Essential Services; PDR, Post-Disaster Recovery.\nFor the space domain: infrastructure and isolation/decontamination facilities were considered more frequently, with 20 and 16 publications, respectively, while morgue facilities and accessibility/access routes were considered less frequently, in only seven and five studies, respectively.\nFor the stuff domain: logistics, personal protective equipment (PPE), and medicines/medical equipment/medical gases/other supplies (food, water, fuel reserves) were considered in the majority of studies (31, 26, and 26, respectively), while back-up power, stockpiling, and transport themes were included in a smaller number of studies (12, 12, and 11, respectively).\nFor the staff domain: training/education/capacity building and drills/simulation exercises were included in the majority of studies (41 and 34, respectively), while vaccination, rewards/incentives, and volunteer themes were given the least priority, with seven, six, and six studies, respectively.\nFor the systems domain: information/communication, Incident Command System (ICS), disaster plans, and safety/security themes were considered in most of the studies, while waste management and the handling of dead bodies were given the least priority, considered in only eight and three studies, respectively.\nOverall, isolation/decontamination facilities, the Incident Command Centre (ICC), intensive care units (ICU), and laboratories were considered frequently under the space domain, while access routes and morgue space were given less priority. For the stuff domain, PPE, medicines, and medical equipment were frequently considered within the broad category of logistics, while back-up power, stockpiling, and transport-related themes were considered less frequently. Capacity-building-related themes were frequently considered under the staff domain, while psychological well-being-related themes were given less priority. 
Further, communication, ICS, and disaster plans were considered frequently under the systems domain, while waste management and the handling of dead bodies were given less priority in the majority of the study tools.", "This is the first study to review publications assessing hospital-level disaster preparedness across the world using the 4S framework. Over the decade, the annual rate of publications varied considerably, with an overall increase up to 2016 and, surprisingly, a reduction thereafter. Also, the developing countries with “Medium” and “Low” HDI featured less prominently in terms of publications on hospital disaster preparedness. However, the developing countries with “High” HD contributed a similar number of publications to the developed countries.\nSurprisingly, this study found that the number of publications declined during the second half of the decade despite the increase in disaster events globally. In contrast, a past study on public health emergency preparedness from 1997-2008 reported a 33% growth in publications per year.\n78\n Interestingly, Iran, a country highly vulnerable to both natural and human-made disasters,\n1\n has published the majority of studies on hospital disaster preparedness. Consequently, the Eastern Mediterranean region recorded the highest number of publications during the decade.\nOne of the important findings of the study was the significant interest in chemical, CBRN, and bio-terrorism hazards in the Americas and European regions. All of these publications originated from developed countries such as the USA, Italy, and Finland. 
Similarly, a study conducted by Mohsen et al found that the majority of research on biological events was from the USA, China, and Canada.\n79\n This trend may reflect the adoption of assumed international best practice for disaster preparedness by the European Union and other developed nations.\n80\n However, no country is immune from CBRN threats; therefore, countries with more advanced disaster preparedness monitoring would have the capacity to partner with developing countries in implementing adequate preparedness measures for CBRN emergencies. The main focus of the South-East Asian region was on natural disasters, such as floods, and on infectious disease outbreaks; none of the countries in the region published a study on man-made disasters during the decade. The Easter-Sunday terrorist attack that occurred in 2019 in Sri Lanka highlighted the importance of preparedness for CBRN events.\n81\n\n\nThe global literature reports that countries with a Low HDI have lower research investment and are, therefore, less dominant in the research and development area.\n82\n Conversely, countries with a High HDI dominate research publications. This study further emphasized that countries with a Low HDI have fewer publications despite their high vulnerability to disasters.\nA comprehensive plan should address every possible disaster scenario with contingency plans. Adequate preparedness across all 4S domains is, therefore, equally important. 
However, surprisingly, this study found that access routes, transport, morgue facilities, handling of dead bodies, back-up power, stockpiling, vaccination, rewards/incentives for staff, volunteers, and waste management themes were given less priority in most of the studies.\nRegarding access routes, the WHO emphasized that, in order to ensure the safety of lives, hospitals and health facilities must remain safe, accessible, and functioning at maximum capacity during emergencies or disasters.\n83\n The WHO has therefore identified a safe site and accessibility as important aspects of hospital disaster preparedness, and recommends that hospitals be located near good roads with an adequate means of transportation. To ensure readiness in transport preparedness, the WHO recommends having adequate transport equipment, equipped ambulances, and other vehicles. Transport facilities are thus a crucial aspect of stuff preparedness, essential for transporting casualties from the field to hospital, moving patients to other referral hospitals, and evacuating patients in an emergency or disaster situation.\n83\n However, this review identified that transport was neglected in the majority of the published studies.\nThis study identified that back-up power has also been neglected under stuff preparedness in the majority of the studies. Electric power is a critical lifeline of a hospital. A survey conducted in Japan found that 65% of disaster-base hospitals considered electricity to be the paramount lifeline for the functioning of their hospital.\n84\n All the medical devices, diagnostic equipment, communication devices, lighting, heating and cooling systems, elevators, and IT-based patient information systems become useless when there is a power failure. 
Therefore, back-up generators or a reliable alternative power source should be an essential part of stuff preparedness.\nThe study identified that most of the selected studies ignored stockpiling under stuff preparedness. The WHO emphasizes that hospitals maintain a stockpile of at least one week's worth of emergency medicines and supplies when preparing for disasters.\n83\n The COVID-19 pandemic has proven that stockpiling is a cornerstone of a holistic approach to disaster preparedness.\n85\n One of the biggest reasons countries failed in their initial response to the pandemic was the lack of the necessary PPE and emergency equipment to deal with it. Therefore, in addition to having a national stockpile, it is important for each hospital to have its own stockpile of critical medicines, vaccines, emergency equipment, and supplies.\n85\n Periodic reviews and dynamic use of stockpiles are also necessary to ensure the effective use of stored equipment and other items before their expiry.\nThe study also found that morgue facilities and dead body handling were neglected themes under the space and systems preparedness, respectively. However, the COVID-19 pandemic has highlighted the importance of ensuring adequate morgue capacity in hospitals. For example, in India, during the peak of the pandemic, hospital morgues and crematoriums were overwhelmed and bodies piled up due to inadequate morgue facilities.\n86\n Therefore, in order to ensure proper identification and handling of dead bodies, adequate morgue capacity, temporary morgue spaces, cold storage facilities, and adequately trained staff are crucial.\n57\n\n\nThe study also identified that vaccination, rewards/incentives, and volunteer themes were neglected by the majority of studies under staff preparedness. The COVID-19 pandemic highlighted how important vaccination, volunteers, and rewards were in improving the psychological well-being of staff. 
Martinese et al examined the measures motivating hospital workers to report for duty during a crisis situation. They identified preventative measures for self and family, followed by alternative accommodation and financial incentives, as high-priority incentives.\n87\n Another study reported that access to PPE and vaccines, childcare arrangements, volunteers’ networks, adequate training, and protection from disaster-related legal sanctions are some of the major incentives to motivate staff during disasters.\n88\n Therefore, these aspects should be considered in emergency planning, as they play a key role in motivating staff to work in disaster situations.\nThe study found that waste management was given the least priority under systems preparedness. Waste management is an essential part of disaster preparedness, especially given that clinical waste must be handled carefully as it contains hazardous materials such as infectious, toxic, and radioactive substances.\n89\n The WHO recently reported that tons of extra medical waste from the COVID-19 response have put enormous strain on health care waste management systems around the world. The WHO therefore emphasizes the dire need to improve waste management practices in order to minimize the human and environmental impacts. Improper waste management could result in secondary disasters, so contingency plans are necessary for managing hospital waste as well as the waste generated in CBRN events.\nAs highlighted in the above discussion, all of these less prioritized areas have a significant impact in different disaster scenarios. Therefore, a comprehensive disaster plan based on an all-hazards approach should address these neglected aspects in the same way as the frequently prioritized areas.", "This study selected only published articles on hospital-based disaster preparedness. 
However, there may be some publications related to hospital disaster preparedness within community or public health preparedness studies that were not indexed in the databases searched. In addition, this study selected only articles written in the English language from 2011-2020.", "Few published studies used a toolkit, checklist, or questionnaire to assess hospital disaster preparedness across the world during the decade of 2011-2020. The countries with Low HD have fewer publications, and developing countries generally have less focus on CBRN preparedness. The majority of past studies neglected some crucial aspects of hospital disaster preparedness. Important preparedness themes were identified under each domain of the 4S framework, and these aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this systematic review can be used for planning a comprehensive disaster preparedness tool." ]
[ "intro", "other", "other", "other", "other", "other", "other", "results", "discussion", "other", "conclusions" ]
[ "disaster preparedness", "hospital", "questionnaire", "survey", "toolkit" ]
Introduction: Every year, millions of people across the world are being affected by floods, landslides, cyclones, hurricanes, tornados, tsunamis, volcanic eruptions, earthquakes, wildfires, or human-made disasters. In the past ten years, 83% of all disasters triggered by natural hazards were caused by extreme weather and climate-related events. 1 The on-going global pandemic of coronavirus disease 2019 (COVID-19) has caused a health and economic crisis emphasizing to the world how important disaster preparedness and disaster resilience are. 2 In addition to the COVID-19 pandemic, multiple climate-related disasters are also happening at the same time. 1 For example, more than 100 other disasters occurred around the world affecting over 50 million people during the first six months after COVID-19 was declared a pandemic by the World Health Organization (WHO; Geneva, Switzerland) in March 2020. 3 Asia has suffered the highest number of disaster events. In total, there were 3,068 disasters occurring in Asia from 2000 through 2019. China reported the highest number of disaster events (577 events), followed by India (321 events), the Philippines (304 events), and Indonesia (278 events). 4 Recent disaster events emphasize the need for disaster risk reduction in the health sector as well as health services in developed countries. For example, in 2011 during the Japan earthquake and tsunami, 80% of the hospitals in Fukushima, Miyagi, and Iwate prefectures of Japan were destroyed or severely damaged, and many local public health personnel were also affected by the disaster, resulting in the entire paralysis or severe compromise of the health services. 5,6 Disasters can cripple health facilities, leading to partial or total collapse of health services, especially in developing countries. 7 For example, after the Algerian earthquake in 2003, 50% of the health facilities in the impacted area were damaged and were no longer operational. 
7 A further example occurred when an earthquake struck in South Asia in October 2005 and caused the complete destruction of almost 50% of health facilities in the affected areas in Afghanistan, India, and Northern Pakistan, ranging from sophisticated hospitals to rural clinics, overwhelming the existing Emergency Medical Services. 7 Currently, most South Asian countries, including Sri Lanka and India, are in a state of crisis resulting from COVID-19, with overcrowded hospitals, low oxygen supplies, and overwhelmed capacity. 8,9 Sri Lanka, a developing nation and small island in the Indian Ocean, is frequently battered by natural disasters. The most devastating disaster it had ever experienced was the tsunami of 2004, which killed over 30,000 people and internally displaced almost one-half a million people. The health systems of the country were severely affected, completely damaging 44 health institutions and partially damaging 48 health institutions. In addition, 35 health care workers (HCWs) lost their lives and a large number of health workers were affected by injuries or psychological trauma due to the loss of their family members or properties. 10 Monsoon floods and landslides also affect several health facilities across the country annually. Sometimes, they have even led to the full or partial evacuation of affected hospitals, as experienced, for example, during the floods of 2016 and 2017, due to infrastructure damage or the functional collapse of services. These instances have resulted in huge economic impacts on the government for recovery-related needs. 11,12 Sri Lanka has also experienced several man-made disasters resulting in mass-casualty incidents. A 26-year war came to an end in 2009 after more than 64,000 deaths, hundreds of thousands of injuries, and the displacement of more than 800,000 persons. 13 The Easter-Sunday bombing attack on April 21, 2019 was a recent human-made disaster which killed 250 people and resulted in more than 500 casualties. 
14 These mass-casualty incidents caused an acute surge of patients to nearby hospitals, interrupting normal hospital operations and overwhelmed hospital capacity due to ill-preparedness, poor coordination, and limited resources. 15 Notwithstanding the vulnerability of Sri Lanka to disasters, there is no standard hospital disaster preparedness evaluation tool used in Sri Lanka at the moment. Such a tool could be used to inform potential improvements to hospital-level disaster preparedness. Therefore, with the goal of establishing a tool appropriate for Sri Lanka, this study aimed to determine the existence and distribution of hospital preparedness tools across the world, and also to identify the important components of those study instruments. Aim: The aim of this research was to determine the main components included in hospital disaster preparedness evaluation instruments. Methodology: A systematic review was performed across three journal databases: Ovid Medline (US National Library of Medicine, National Institutes of Health; Bethesda, Maryland USA); Embase (Elsevier; Amsterdam, Netherlands); and CINAHL (EBSCO Information Services; Ipswich, Massachusetts USA) using the appropriate specifications for each database. The PRISMA Systematic Review Guidelines were used for the review, and the PRISMA 2020 checklist is shown in Appendix 1 (available online only). The search of the databases was conducted on November 23, 2020. The details of the search strategy are shown in Appendix 2 (available online only). Additional documents (grey literature) were sourced by a related search of Google (Google Inc.; Mountain View, California USA), relevant websites, and also from the reference lists of selected articles. Keywords used were: hospitals, health facilities, health services, and health personnel. These were combined with: disaster planning, disaster preparedness, and the terms: checklist, surveys, questionnaire, or toolkit. 
Selection of Articles Initially, two independent reviewers (NM and GO) screened titles and abstracts of the retrieved articles for eligibility. From the abstracts selected by both reviewers, full texts were retrieved and considered for eligibility. If discrepancies occurred between reviewers, the reasons were identified and a final decision was made by the main author (NM). Inclusion Criteria Articles published across the world from 2011 through 2020, written in English language, which were conducted on the disaster preparedness of hospitals, health facilities, or health personnel using toolkits, checklists, or questionnaire surveys were selected for this study. The “safe hospitals” concept became popular after 2010 when the WHO introduced a guidebook on safe hospitals in emergencies and disasters. Therefore, the search was started from 2011, as relevant hospital disaster preparedness assessments were most likely to be published after this time. 
Selection of Articles: Initially, two independent reviewers (NM and GO) screened the titles and abstracts of the retrieved articles for eligibility. Full texts of the abstracts selected by both reviewers were then retrieved and assessed for eligibility. Where discrepancies occurred between reviewers, the reasons were identified and a final decision was made by the main author (NM).

Inclusion Criteria: Articles published across the world from 2011 through 2020, written in the English language, which assessed the disaster preparedness of hospitals, health facilities, or health personnel using toolkits, checklists, or questionnaire surveys were selected for this study. The "safe hospitals" concept became popular after 2010, when the WHO introduced a guidebook on safe hospitals in emergencies and disasters. The search therefore started from 2011, as relevant hospital disaster preparedness assessments were most likely to be published after this time.

Exclusion Criteria: Studies published in a language other than English, not available in full text, or not related to health sector disaster preparedness were excluded.

Data Synthesis and Analysis: Data extraction and analysis were based on the Joanna Briggs Institute (JBI; Adelaide, Australia) manual for evidence synthesis. 16 To identify basic information, the selected studies were analyzed in the following categories: reference; year of publication; country of origin (where the study was published or conducted); type of hazard; sample size; and study design. The global distribution of studies was analyzed according to the WHO's six geographical regions, namely South-East Asia, the Western-Pacific, the Eastern Mediterranean, the Americas, Europe, and Africa regions. 17,18 The distribution of studies was also analyzed according to the four categories of the United Nations Human Development Index (UNHDI). 19 The UNHDI comprises three dimensions: health, education, and standard of living. The health dimension is measured by life expectancy at birth; the education dimension by mean years of schooling for adults aged 25 years and above and expected years of schooling for children at school-entering age; and the standard of living dimension by gross national income per capita. Based on the cut-off points of the HDI, the UN has identified four categories of human development (HD): Low HD, Medium HD, High HD, and Very High HD. 19 The global literature classifies a hospital's disaster preparedness and response in terms of the "4S's" – space, stuff, staff, and systems. 20–24 The space domain includes the physical space needed for patient care and workspace (infrastructure and access routes). The stuff domain includes logistics, equipment, and supplies. The staff domain includes human resources. The systems domain includes all the plans, procedures, and protocols needed for preparedness management. 20–24 The preparedness themes of each selected study were identified according to this 4S conceptual framework.

Results: The search resulted in a total of 1,568 articles, including 1,563 from databases and five from grey literature. After removing duplicates, 1,070 articles remained. Based on the inclusion criteria, 53 articles were selected for data extraction and synthesis. Figure 1 illustrates the PRISMA flow diagram.

Figure 1. PRISMA 2009 Flow Diagram.

Table 1 summarizes the basic information of the selected articles. All of these studies assessed the preparedness of either facilities or HCWs.
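The screening flow shown in Figure 1 can be reproduced arithmetically from the counts reported in the text (a sketch only; the split of exclusions across individual screening stages is not reported, so it is left as a single aggregate):

```python
# PRISMA-style screening tally, using only counts reported in the text.
database_hits = 1563
grey_literature = 5
total_retrieved = database_hits + grey_literature   # records identified
after_dedup = 1070                                  # records after duplicate removal
included = 53                                       # studies in the final synthesis

duplicates_removed = total_retrieved - after_dedup
excluded_during_screening = after_dedup - included  # aggregate over screening stages

print(f"Identified: {total_retrieved}")
print(f"Duplicates removed: {duplicates_removed}")
print(f"Excluded during screening: {excluded_during_screening}")
print(f"Included: {included}")
```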
Altogether, these studies assessed the preparedness of approximately 5,100 HCWs across the world, including different categories of acute care providers such as physicians, doctors, nurses, paramedics, and health care assistants. These studies also assessed approximately 1,930 hospitals of different levels (government, rural, military, tertiary, and district), health care facilities, and emergency departments (Table 1).

Table 1. Basic Information of the Selected Articles

No. | Ref. | Year | Country of Origin | Disaster Type | Sample Size | Study Type
1 | 77 | 2011 | UK | All Hazards | 41 HCWs (33 Nurses, 8 Health Care Assistants from two MICUs) | Interventional Study
2 | 29 | 2011 | China | Public Health Emergencies | 45 Hospitals | Cross-Sectional Study
3 | 34 | 2011 | Iran | Earthquake | 114 Health Managers of Hospitals, Health Networks, and Health Centers | Descriptive Cross-Sectional (Quantitative) Study
4 | 41 | 2011 | South Africa | Preparedness for 2010 FIFA World Cup | Nine Hospitals | Cross-Sectional Study
5 | 42 | 2011 | Thailand | Influenza Pandemic | 179 Health Centers | Cross-Sectional Study
6 | 47 | 2011 | Canada | Mass Emergency Events | 34 Emergency Departments | Cross-Sectional Study
7 | 27 | 2012 | Australia | External Disaster | 140 HCWs (Knowledge/Perception) in Public Teaching Hospital | Cross-Sectional Study
8 | 53 | 2012 | Iran | All Hazards | 102 Emergency Nurses in Tabriz's Educational Hospitals | Descriptive Cross-Sectional Study
9 | 30 | 2013 | Cambodia | Influenza Pandemic | 262 Health Facilities, 185 Government Hospitals, 77 District Health Offices | Cross-Sectional Study
10 | 76 | 2013 | Australia | All Hazards | N/A | Scoping Review
11 | 63 | 2013 | Iran | All Hazards | 24 Hospitals | Cross-Sectional Study
12 | 64 | 2013 | Iran | All Hazards | 15 Hospitals | Descriptive Cross-Sectional Study
13 | 28 | 2014 | Iran | Natural Disasters | Nine Hospitals | Cross-Sectional Descriptive Study
14 | 32 | 2014 | China | All Hazards | 50 Tertiary Hospitals | Cross-Sectional Study
15 | 71 | 2014 | Canada | Extreme Weather Event | Six Health Care Facilities | Mixed Methods Study
16 | 70 | 2014 | China | All Hazards | N/A | Modified Delphi Study
17 | 35 | 2014 | Europe and Asia | Epidemic Infectious Diseases | 238 Hospitals (236 European, 2 Western Asian) | Descriptive Cross-Sectional Study
18 | 38 | 2014 | China | All Hazards | 41 Hospitals | Descriptive Cross-Sectional Study
19 | 56 | 2014 | Saudi Arabia | All Hazards | 6 Hospitals | Cross-Sectional Study
20 | 62 | 2014 | China | All Hazards | 41 Hospitals | Cross-Sectional Study
21 | 75 | 2015 | USA | CBRNE Preparedness | 59 Health Care Providers | Retrospective Observational Survey
22 | 37 | 2015 | England | Ebola Virus | 112 Hospitals | Cross-Sectional Study
23 | 43 | 2015 | Iran | Natural Disasters | 200 HCWs in a Single Hospital | Cross-Sectional Study
24 | 44 | 2015 | Ireland | Influenza Pandemic | 46 Hospitals | Cross-Sectional Study
25 | 66 | 2015 | Yemen | 2011 Yemeni Revolution | 11 Hospitals | Comparative Study
26 | 72 | 2015 | China | Bioterrorism | 110 Military Hospitals | Mixed Method Study
27 | 46 | 2015 | New Zealand | Mass Emergency Events | 911 Acute Care Providers (Doctors, Nurses, Paramedics) | Cross-Sectional Study
28 | 25 | 2016 | India | Ebola Virus | Nine Countries (Bangladesh, Bhutan, Indonesia, Maldives, Myanmar, Nepal, Sri Lanka, Thailand, Timor-Leste) | Cross-Sectional Study
29 | 26 | 2016 | Iran | All Hazards | 97 HCWs from Various Departments of Military Hospital | Cross-Sectional Study
30 | 69 | 2016 | 10 Countries: Belgium, France, Italy, Romania, Sweden, UK, Iran, Israel, USA, Australia | CBRN Emergencies | 18 Experts | Delphi Method
31 | 31 | 2016 | Finland | Chemical Mass-Casualty Situations | 26 EMS | Cross-Sectional Study
32 | 36 | 2016 | China | Ebola Virus | 266 Medical Professionals from 236 Hospitals | Mixed Method Study
33 | 40 | 2016 | Thailand | Flood | 24 Hospitals | Descriptive Cross-Sectional Study
34 | 45 | 2016 | Saudi Arabia | All Hazards | 17 Hospitals | Cross-Sectional Study
35 | 67 | 2016 | USA | Chemical Hazard | 112 Hospitals in 2005, 99 Hospitals in 2012 | Longitudinal Study
36 | 74 | 2016 | Iran | All Hazards | 15 Studies | Systematic Review
37 | 68 | 2016 | USA | All Hazards | 137 VAMCs | Quantitative Study
38 | 33 | 2017 | USA | All Hazards | 80 Hospitals | Descriptive/Analytical Cross-Sectional Study
39 | 39 | 2017 | Sri Lanka | Flood | 31 Government Health Care Facilities | Descriptive Cross-Sectional, Mixed Methods Study
40 | 51 | 2017 | Iran | All Hazards | 6 Hospitals | Descriptive Cross-Sectional Study
41 | 61 | 2017 | Hong Kong | All Hazards | 107 Doctors/Nurses from Hong Kong College of Emergency Medicine | Cross-Sectional Study
42 | 49 | 2018 | Iran | All Hazards | 18 Hospitals | Cross-Sectional Study
43 | 52 | 2018 | Switzerland | All Hazards | 83 Hospitals | Cross-Sectional Study
44 | 54 | 2018 | Tanzania | All Hazards | 25 Regional Hospitals | Descriptive Cross-Sectional Study
45 | 73 | 2018 | Iran | All Hazards | 26 Studies | Systematic Review and Meta-Analysis
46 | 55 | 2018 | Yemen | All Hazards | 10 Hospitals | Cross-Sectional Study
47 | 58 | 2018 | Croatia | Mass Casualty Incidents | 80 Physicians | Cross-Sectional Study
48 | 51 | 2019 | Pakistan | All Hazards | 18 Hospitals | Cross-Sectional Study
49 | 57 | 2019 | Sri Lanka | All Hazards | 60 Doctors/Nurses | Descriptive Cross-Sectional Study
50 | 65 | 2019 | Iran | All Hazards | 8 Hospitals | Descriptive Cross-Sectional Study
51 | 50 | 2020 | India | COVID-19 | 58 Doctors | Descriptive Cross-Sectional Study
52 | 59 | 2020 | USA | COVID-19 | 32 Hospitals | Cross-Sectional Study
53 | 60 | 2020 | Saudi Arabia | All Hazards | 315 Clinical Staff | Cross-Sectional Study

Abbreviations: HCW, Health Care Worker; EMS, Emergency Medical Services; VAMC, Veterans Affairs Medical Center; CBRN, Chemical, Biological, Radiological, and Nuclear.

Table 2 illustrates the number of publications by hazard type. One-half of the studies (27) covered all hazards, and the rest focused on a specific type of hazard. Among them were biological hazards such as Ebola, influenza, and COVID-19; natural disasters such as earthquake, flood, or extreme weather events; and man-made disasters such as chemical-only, chemical, biological, radiological, and nuclear (CBRN), or mass-casualty incidents.
Table 2. Number of Publications by Hazard Type

Type of Hazard | Number of Publications (%) | Reference
All Hazards | 27 (51%) | 26,32,33,36,38,45,48,49,51–57,60–65,68,70,73,74,76,77
Mass Casualty/Mass Emergency | 5 (9%) | 41,46,47,58,66
Ebola | 3 (6%) | 25,35,37
Influenza | 3 (6%) | 30,42,44
CBRN | 2 (4%) | 69,75
Natural Disasters | 2 (4%) | 28,43
Chemical Hazards | 2 (4%) | 31,67
Flood | 2 (4%) | 39,40
COVID-19 | 2 (4%) | 50,59
External Disasters | 1 (2%) | 27
Public Health Emergencies | 1 (2%) | 29
Extreme Weather Events | 1 (2%) | 71
Earthquake | 1 (2%) | 34
Bioterrorism | 1 (2%) | 72

Abbreviation: CBRN, Chemical, Biological, Radiological, and Nuclear.

These studies used different methodologies; however, the majority (41) were cross-sectional studies. 25–65 The next most common were longitudinal studies, 66–68 followed by Delphi, 69,70 mixed methods, 71,72 and systematic reviews. 73,74 In addition, there was one retrospective observational study, 75 one scoping review, 76 and one interventional study 77 (Table 1).

Figure 2 demonstrates the number of publications by year. There were six publications on hospital disaster preparedness in 2011. The analysis of publication incidence by year revealed an overall rise in the publication rate from 2012-2016, with a decline thereafter until 2020.

Figure 2. Number of Articles by Year of Publication.

Altogether, these studies were conducted in 24 different countries around the world. Iran published the highest number of studies (twelve), followed by China (six), the USA (five), and Saudi Arabia (three). All the other countries published one or two studies (Table 3).

Table 3. Number of Publications by Country (Including the WHO Region and Reference)

WHO Region | Country | No. Publications (%) | Hazard Type (No. Publications in Each Type) | Reference
South-East Asia (6; 11%) | Sri Lanka | 2 (4%) | AH (1), Flood (1) | 39,57
South-East Asia (6; 11%) | India | 2 (4%) | Ebola (1), COVID (1) | 25,50
South-East Asia (6; 11%) | Thailand | 2 (4%) | Flood (1), Influ (1) | 40,42
Western-Pacific (12; 23%) | China | 6 (11%) | AH (4), BT (1), PHE (1) | 29,32,36,62,70,72
Western-Pacific (12; 23%) | Taiwan | 1 (2%) | AH (1) | 38
Western-Pacific (12; 23%) | Hong Kong | 1 (2%) | AH (1) | 61
Western-Pacific (12; 23%) | Cambodia | 1 (2%) | Influ (1) | 30
Western-Pacific (12; 23%) | Australia | 2 (4%) | AH (1), Ext.dis (1) | 27,76
Western-Pacific (12; 23%) | New Zealand | 1 (2%) | MC (1) | 46
Eastern Mediterranean (18; 34%) | Pakistan | 1 (2%) | AH (1) | 48
Eastern Mediterranean (18; 34%) | Iran | 12 (23%) | AH (9), ND (2), EQ (1) | 26,28,34,43,49,51,53,63–65,73,74
Eastern Mediterranean (18; 34%) | Yemen | 2 (4%) | AH (1), MC (1) | 55,66
Eastern Mediterranean (18; 34%) | Saudi Arabia | 3 (6%) | AH (3) | 45,56,60
Americas (7; 13%) | USA | 5 (9%) | AH (2), CBRNE (1), Chem (1), COVID (1) | 33,59,67,68,75
Americas (7; 13%) | Canada | 2 (4%) | Ex. Weat (1), MC (1) | 47,71
European (8; 15%) | UK | 2 (4%) | AH (1), Ebola (1) | 37,77
European (8; 15%) | Italy | 1 (2%) | CBRN (1) | 69
European (8; 15%) | Finland | 1 (2%) | Chem (1) | 31
European (8; 15%) | Netherlands | 1 (2%) | Ebola (1) | 35
European (8; 15%) | Ireland | 1 (2%) | Influ (1) | 44
European (8; 15%) | Switzerland | 1 (2%) | AH (1) | 52
European (8; 15%) | Croatia | 1 (2%) | MC (1) | 58
African (2; 4%) | South Africa | 1 (2%) | MC (1) | 41
African (2; 4%) | Tanzania | 1 (2%) | AH (1) | 54

Abbreviations: WHO, World Health Organization; AH, All Hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather Events; Influ, Influenza; Chem, Chemical Events; CBRN, Chemical, Biological, Radiological, and Nuclear.

Figure 3 demonstrates the number of publications by WHO region and Table 3 illustrates the number of publications by country (including the WHO region, reference, and disaster type).
The Eastern Mediterranean region recorded the greatest number of publications (18), with Iran responsible for two-thirds of the publications in the region. The Western-Pacific region had the second largest number of publications (12), with China recording the highest number in that region. The European and Americas regions had similar numbers of publications (eight and seven, respectively), with the UK and USA the most represented countries in those regions. The South-East Asian region had six publications, where all three included countries (Sri Lanka, India, and Thailand) had two publications each. The African region recorded the lowest number of publications (two). The Americas and European regions focused on CBRN emergencies, whereas other regions focused on natural disasters (Table 3).

Figure 3. Number of Publications by WHO Region. Abbreviation: WHO, World Health Organization.

Figure 4 illustrates the number of publications by UNHDI. The countries with Very High HD and High HD published the majority of studies (23 and 24 publications, respectively). Conversely, the countries with Medium HD and Low HD published few studies (four and three publications, respectively). Table 4 illustrates the number of publications by UNHDI, including the country and hazard type. Significantly, there were five publications concerned with man-made disasters such as chemical or CBRN incidents among the developed countries, while only one study focused on such disasters (bioterrorism) among the developing countries. It was clear that developing countries had more focus on natural disasters.

Figure 4. Number of Publications by UNHDI. Abbreviations: UNHDI, United Nations Human Development Index; HD, Human Development.
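The UNHDI grouping used here can be sketched as a simple threshold classifier. The cut-off values below are the standard UNDP ones and are an assumption on my part; the article itself only names the four categories:

```python
def hd_category(hdi: float) -> str:
    """Map an HDI value (0-1) to the UN's four human development
    categories, using the standard UNDP cut-offs (assumed here)."""
    if hdi >= 0.800:
        return "Very High HD"
    elif hdi >= 0.700:
        return "High HD"
    elif hdi >= 0.550:
        return "Medium HD"
    return "Low HD"

# Illustrative values only (not taken from the article):
print(hd_category(0.93))  # a Very High HD country
print(hd_category(0.47))  # a Low HD country
```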
Table 4. Number of Publications by UNHDI Including the Country and Hazard Type

HDI Category | Country | Hazard Type (No. of Publications in Each Type)
Very High HD, developed countries except Saudi Arabia (22; 41%) | Ireland | Influ (1)
Very High HD (22; 41%) | Switzerland | AH (1)
Very High HD (22; 41%) | Hong Kong | AH (1)
Very High HD (22; 41%) | Australia | AH (1), Ext.dis (1)
Very High HD (22; 41%) | Netherlands | Ebola (1)
Very High HD (22; 41%) | Finland | Chem (1)
Very High HD (22; 41%) | UK | AH (1), Ebola (1)
Very High HD (22; 41%) | New Zealand | MC (1)
Very High HD (22; 41%) | Canada | Ex. Weat (1), MC (1)
Very High HD (22; 41%) | USA | AH (2), CBRNE (1), Chem (1), COVID (1)
Very High HD (22; 41%) | Italy | CBRN (1)
Very High HD (22; 41%) | Saudi Arabia | AH (3)
Very High HD (22; 41%) | Croatia | MC (1)
High HD, developing countries (24; 45%) | Iran | AH (9), ND (2), EQ (1)
High HD (24; 45%) | Sri Lanka | AH (1), Flood (1)
High HD (24; 45%) | Thailand | Flood (1), Influ (1)
High HD (24; 45%) | China | AH (4), BT (1), PHE (1)
High HD (24; 45%) | Taiwan | AH (1)
High HD (24; 45%) | South Africa | MC (1)
Medium HD, developing countries (4; 8%) | India | Ebola (2), COVID (1)
Medium HD (4; 8%) | Cambodia | Influ (1)
Medium HD (4; 8%) | Pakistan | AH (1)
Low HD, developing countries (3; 6%) | Yemen | AH (1), MC (1)
Low HD (3; 6%) | Tanzania | AH (1)

Abbreviations: UNHDI, United Nations Human Development Index; HD, Human Development; AH, All Hazards; BT, Bioterrorism; PHE, Public Health Emergencies; MC, Mass Casualty; Ex. Weat, Extreme Weather Events; Influ, Influenza; Chem, Chemical Events; CBRN, Chemical, Biological, Radiological, and Nuclear.

Table 5 summarizes the different themes/components identified in the study instruments according to the 4S domains.
Table 5. Analysis of Different Themes/Components of Study Instruments According to 4S Domains

Domain | Indicator | Number of Articles (%) | Reference
Space | Infrastructure | 20 (38%) | 32,33,35,37,39,40,43,47,50,51,56,57,62,63,65,68,70,71,74,76
Space | Isolation Facilities, Decontamination Facilities | 16 (30%) | 33,35,44,45,47,50,52,53,59,60,62,67,70,72,74,76
Space | ICC, ICU, Theatre, Laboratory | 10 (19%) | 25,27,33,35,37,51,57,63,70,74
Space | Morgue Facilities | 7 (13%) | 33,45,54,56,57,60,74
Space | Accessibility/Access Routes | 5 (9%) | 39,42,57,67,68
Stuff | Logistics | 31 (58%) | 25,26,28,30,31,34–40,44,45,47,49,51,54,55,57,58,60,63,65–67,70,72–74,76
Stuff | PPE | 26 (49%) | 25,32,33,35–39,41,42,44,45,47,50,52,57,59,65,67,69,71,73–77
Stuff | Medicines, Medical Equipment, Medical Gases, and Other Supplies (Food, Water, Fuel Reserves) | 26 (49%) | 29–32,36–39,41,43,47,48,51,54,56,57,60,62–65,70,71,74–76
Stuff | Back-Up Communication Devices | 18 (34%) | 27,28,32,36,38,42,49,51,54,57,62,63,67,70,71,74,76,77
Stuff | Back-Up Power | 12 (23%) | 32,38,43,51,57,62,63,70,71,73,74,76
Stuff | Stockpiling | 12 (23%) | 32,36,38,44,50,54,57,62,63,70,74,76
Stuff | Vehicles, Transport Equipment | 11 (21%) | 26,31,43,49,57,63,65,68,70,74,76
Staff | Training/Education/Capacity Building | 41 (77%) | 25,26,28–30,32–41,43,44,46–48,50,51,54–57,59–68,70,72–74,76
Staff | Drills/Simulation Exercises | 34 (64%) | 25,29,32,33,35–48,52,54,55,57,58,60–62,65–68,70,72,74,76
Staff | Knowledge and Skills | 13 (25%) | 31,38,39,50,52,53,57,58,62,64,65,75,77
Staff | Psychosocial Support Staff/Victims | 13 (25%) | 32,38,50,57,58,60–62,69,71,74,76,77
Staff | Staff Well-Being, Roster Arrangement, Food, Water, Accommodation, Transport, Domestic Support | 10 (19%) | 33,38,45,49,50,57,62,70,74,76
Staff | Vaccination | 7 (13%) | 29,44,48,62,68,70,76
Staff | Rewards/Incentives | 6 (11%) | 25,32,44,45,70,76
Staff | Volunteers | 6 (11%) | 32,38,40,57,60,76
Systems | Information Management/Communication System | 42 (79%) | 25–29,32–34,36–39,41–43,45,49,51–57,59–71,73–77
Systems | ICS | 41 (77%) | 25–29,32–34,36,38,40–47,49,51,52,54–57,59–66,68,71–77
Systems | Disaster Plans | 36 (68%) | 25,27,29,32–34,36–40,42,44,45,47–52,54–60,62,64,66,67,70–72,75,76
Systems | Safety/Security System (Including Evacuation, Crowd Control, Transportation) | 32 (60%) | 26,28,32,33,35,38,39,41,43,45,49–52,54–57,60,62–64,66,68–74,76,77
Systems | Triage | 27 (51%) | 25,32,33,35,38,40,45,46,49,50,53–55,58–64,66,68,70,73,74,76,77
Systems | Cooperation and Coordination with Other Health/Non-Health Sector Facilities, and the Public (Including MOUs/Contracts/Agreements) | 25 (47%) | 25,26,29,32–35,37–39,41,45,47,48,50,52,56,57,60,62,63,70,71,74,76
Systems | Surge Capacity | 25 (47%) | 25,29,31–33,40,41,44,45,49,50,54–57,60,62,63,66,68,70,71,73,74,76
Systems | CES | 20 (38%) | 26,28,29,32,34,40,49,54,56,57,60,62,63,66,68,70,73–76
Systems | SOP/Protocols/Guidelines | 19 (36%) | 25,29,34,35,37,38,42,47,48,57–60,66,68–71,74
Systems | PDR | 18 (34%) | 32,38,40,48,49,54–56,60,62,63,66,68,70,71,73,74,76
Systems | Isolation, Decontamination, and Quarantine | 15 (28%) | 25,31,33,38,41,47,52,53,60,68–70,74,76,77
Systems | Surveillance, Early Warning, Outbreak Management System | 13 (25%) | 25,34,35,42,48,60,62,63,67,70–72,76
Systems | Waste Management | 8 (15%) | 37,43,47,60,63,68,71,74
Systems | Dead Body Handling | 3 (6%) | 33,57,77

Abbreviations: ICC, Incident Command Centre; ICU, Intensive Care Unit; ICS, Incident Command System; SOP, Standard Operating Procedure; MOU, Memorandum of Understanding; PPE, Personal Protective Equipment; CES, Continuity of Essential Services; PDR, Post-Disaster Recovery.

For the space domain: infrastructure and isolation/decontamination facilities were considered most frequently (20 and 16 publications, respectively), while morgue facilities and accessibility/access routes were considered least frequently (only seven and five studies, respectively).
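The percentage column in Table 5 is simply each indicator's article count over the 53 included studies. A minimal sketch of that tally, using a few of the counts reported in Table 5 and the article's own 4S domain grouping:

```python
# Theme counts taken from Table 5; 53 studies in the final synthesis.
TOTAL_STUDIES = 53

themes = {
    "Space": {"Infrastructure": 20, "Morgue Facilities": 7},
    "Stuff": {"Logistics": 31, "Back-Up Power": 12},
    "Staff": {"Training/Education/Capacity Building": 41, "Volunteers": 6},
    "Systems": {"Information Management/Communication System": 42,
                "Dead Body Handling": 3},
}

for domain, indicators in themes.items():
    for indicator, n in indicators.items():
        pct = round(n / TOTAL_STUDIES * 100)  # matches the Table 5 percentages
        print(f"{domain} / {indicator}: {n} ({pct}%)")
```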
For the stuff domain: logistics, personal protective equipment (PPE), and medicines/medical equipment/medical gases/other supplies (food, water, fuel reserves) were considered in the majority of studies (31, 26, and 26, respectively), while back-up power, stockpiling, and transport themes were included in a smaller number of studies (12, 12, and 11, respectively). For the staff domain: training/education/capacity building and drills/simulation exercises were included in the majority of studies (41 and 34 studies, respectively), while vaccination, rewards/incentives, and volunteer themes were given the least priority (seven, six, and six studies, respectively). For the systems domain: information/communication, Incident Command System (ICS), disaster plans, and safety/security themes were considered in most of the studies, while waste management and the handling of dead bodies were given the least priority, considered in only eight and three studies, respectively. Overall, isolation/decontamination facilities, the Incident Command Centre (ICC), intensive care units (ICU), and laboratories were frequently considered under the space domain, while access routes and morgue space were given less priority. For the stuff domain, PPE, medicines, and medical equipment were frequently considered within the broad category of logistics, while back-up power, stockpiling, and transport-related themes were considered less frequently. Capacity-building-related themes were frequently considered under the staff domain, while psychological well-being-related themes were given less priority. Further, communication, ICS, and disaster plans were considered frequently under the systems domain, while waste management and the handling of dead bodies were given less priority in the majority of the study tools.

Discussion: This is the first study to review publications assessing hospital-level disaster preparedness across the world using the 4S framework.
Over the decade, the annual rate of publications varied considerably, with an overall increase up to 2016 and, surprisingly, a reduction thereafter. The developing countries with Medium and Low HDI also featured less in terms of publications on hospital disaster preparedness, whereas the developing countries with High HD contributed a similar number of publications to developed countries. Surprisingly, this study found that the number of publications declined during the second half of the decade despite the increase in disaster events globally. In contrast, a past study on public health emergency preparedness from 1997-2008 reported a 33% growth of publications per year. 78 Interestingly, Iran, a country highly vulnerable to both natural and human-made disasters, 1 published the majority of studies on hospital disaster preparedness; accordingly, the Eastern Mediterranean region recorded the highest number of publications during the decade. One of the important findings of the study was the significant interest in chemical, CBRN, and bioterrorism hazards in the Americas and European regions. All of these publications originated from developed countries such as the USA, Italy, and Finland. Similarly, a study conducted by Mohsen, et al found that the majority of research on biological events was from the USA, China, and Canada. 79 This trend may be due to assumed international best practice for disaster preparedness being adopted by the European Union and other developed nations. 80 However, no country is immune from CBRN threats, and therefore, countries with more advanced disaster preparedness monitoring would have the capacity to partner with developing countries in implementing adequate preparedness measures for CBRN emergencies.
The main focus of the South-East Asian region was on natural disasters, such as floods, and infectious disease outbreaks; none of the countries in the region published a study on man-made disasters during the decade. The Easter Sunday terrorist attack that occurred in 2019 in Sri Lanka highlighted the importance of preparedness for CBRN events. 81 The global literature reports that countries with a Low HDI have lower research investment and are, therefore, less dominant in research and development. 82 Conversely, countries with a High HDI dominate research publications. This study has further emphasized that countries with a Low HDI have fewer publications despite their high vulnerability to disasters. A comprehensive plan should address every possible disaster scenario with contingency plans. Adequate preparedness across all 4S domains is, therefore, equally important. Surprisingly, however, this study found that access routes, transport, morgue facilities, handling of dead bodies, back-up power, stockpiling, vaccination, rewards/incentives for staff, volunteers, and waste management were given less priority in most of the studies. Regarding access routes, the WHO emphasized that in order to ensure the safety of lives, hospitals and health facilities must remain safe, accessible, and functioning at maximum capacity during emergencies or disasters. 83 The WHO has therefore identified a safe site and accessibility as important aspects of hospital disaster preparedness, and recommended that hospitals be located near good roads with an adequate means of transportation. To ensure readiness in transport preparedness, the WHO recommends having adequate transport equipment, equipped ambulances, and other vehicles.
Transport facilities are therefore one of the crucial aspects of stuff preparedness, essential for transporting casualties from the field to hospital, moving patients to other referral hospitals, and evacuating patients in an emergency or disaster situation. 83 However, this review identified that transport was neglected in the majority of the published studies. This study identified that back-up power has also been neglected under stuff preparedness in the majority of the studies. Electric power is a critical lifeline of a hospital. A survey conducted in Japan found that 65% of disaster-base hospitals considered electricity to be the paramount lifeline for the functioning of their hospital. 84 Medical devices, diagnostic equipment, communication devices, lighting, heating and cooling systems, elevators, and IT-based patient information systems all become useless when there is a power failure. Therefore, back-up generators or a reliable alternative power source should be an essential part of stuff preparedness. The study identified that most of the selected studies ignored stockpiling under stuff preparedness. The WHO emphasizes that hospitals should stockpile at least one week of adequate emergency medicines and supplies when preparing for disasters. 83 The COVID-19 pandemic has proven that stockpiling is a cornerstone of a holistic approach to disaster preparedness. 85 One of the biggest reasons countries failed in their initial response to the pandemic was the lack of necessary PPE and emergency equipment to deal with it. Therefore, in addition to a national stockpile, it is important to have an individual hospital stockpile of critical medicines, vaccines, emergency equipment, and supplies. 85 Periodic reviews and dynamic use of stockpiles are also necessary to ensure the effective use of stored equipment and other items before their expiry.
The study also found that morgue facilities and dead body handling were neglected themes under space and systems preparedness, respectively. However, the COVID-19 pandemic has highlighted the importance of ensuring adequate morgue capacities in hospitals. For example, in India, during the peak of the pandemic, hospital morgues and crematoriums were overwhelmed and bodies piled up due to inadequate morgue facilities. 86 Therefore, in order to ensure proper identification and handling of dead bodies, adequate morgue capacity, temporary morgue spaces, cold storage facilities, and adequately trained staff are crucial. 57 The study also identified that vaccination, rewards/incentives, and volunteer themes were neglected by the majority of studies under staff preparedness. The COVID-19 pandemic highlighted how important vaccination, volunteers, and rewards were in improving the psychological well-being of staff. Martinese, et al examined the measures motivating hospital workers to report for duty during a crisis situation; they identified preventative measures for self and family, followed by alternative accommodation and financial incentives, as high-priority incentives. 87 Another study reported that access to PPE and vaccines, childcare arrangements, volunteer networks, adequate training, and protection from disaster-related legal sanctions are some of the major incentives to motivate staff during disasters. 88 Therefore, these aspects should be considered in emergency planning, as they play a key role in motivating staff to work in disaster situations. The study found that waste management was given the least priority under systems preparedness. Waste management is an essential part of disaster preparedness, especially given that clinical waste must be handled carefully as it contains hazardous materials such as infectious, toxic, and radioactive substances.
89 The WHO recently reported that tons of extra medical waste from the COVID-19 response put enormous strain on health care waste management systems around the world. The WHO therefore emphasizes the dire need to improve waste management practices in order to minimize human and environmental impacts. Improper waste management could result in secondary disasters; contingency plans are therefore necessary for managing hospital waste as well as the waste generated in CBRN events. As highlighted above, all of these less-prioritized areas have a significant impact in different disaster scenarios. Therefore, a comprehensive disaster plan based on an all-hazards approach should address these neglected aspects in the same way as the frequently prioritized areas.

Limitations: This study selected only published articles on hospital-based disaster preparedness. There may be publications related to hospital disaster preparedness within community or public health preparedness studies that were not captured by the databases searched. In addition, this study selected only articles written in the English language from 2011-2020.

Conclusion: Few published studies used a toolkit, checklist, or questionnaire to assess hospital disaster preparedness across the world during the decade of 2011-2020. The countries with Low HD have a smaller number of publications, and the developing countries generally have less focus on CBRN preparedness. The majority of past studies neglected some crucial aspects of hospital disaster preparedness. Important preparedness themes were identified under each domain of the 4S framework, and these aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this systematic review can be used for planning a comprehensive disaster preparedness tool.
Background: Recent disasters emphasize the need for disaster risk mitigation in the health sector. A lack of standardized tools to assess hospital disaster preparedness hinders the improvement of emergency/disaster preparedness in hospitals. There is very limited research on evaluation of hospital disaster preparedness tools. Methods: A systematic review was performed using three databases, namely Ovid Medline, Embase, and CINAHL, as well as available grey literature sourced by Google, relevant websites, and also from the reference lists of selected articles. The studies published on hospital disaster preparedness across the world from 2011-2020, written in the English language, were selected by two independent reviewers. The global distribution of studies was analyzed according to the World Health Organization's (WHO) six geographical regions, and also according to the four categories of the United Nations Human Development Index (UNHDI). The preparedness themes were identified and categorized according to the 4S conceptual framework: space, stuff, staff, and systems. Results: From a total of 1,568 articles, 53 met inclusion criteria and were selected for data extraction and synthesis. Few published studies had used a study instrument to assess hospital disaster preparedness. The Eastern Mediterranean region recorded the highest number of such publications. The countries with a low UNHDI were found to have a smaller number of publications. Developing countries had more focus on preparedness for natural disasters and less focus on chemical, biological, radiological, and nuclear (CBRN) preparedness. Infrastructure, logistics, capacity building, and communication were the priority themes under the space, stuff, staff, and system domains of the 4S framework, respectively. 
The majority of studies had neglected some crucial aspects of hospital disaster preparedness, such as transport, back-up power, morgue facilities and dead body handling, vaccination, rewards/incentive, and volunteers. Conclusions: Important preparedness themes were identified under each domain of the 4S framework. The neglected aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this review can be used for planning a comprehensive disaster preparedness tool.
Introduction: Every year, millions of people across the world are affected by floods, landslides, cyclones, hurricanes, tornados, tsunamis, volcanic eruptions, earthquakes, wildfires, or human-made disasters. In the past ten years, 83% of all disasters triggered by natural hazards were caused by extreme weather and climate-related events. 1 The ongoing global pandemic of coronavirus disease 2019 (COVID-19) has caused a health and economic crisis, underscoring for the world how important disaster preparedness and disaster resilience are. 2 In addition to the COVID-19 pandemic, multiple climate-related disasters are happening at the same time. 1 For example, more than 100 other disasters occurred around the world, affecting over 50 million people, during the first six months after COVID-19 was declared a pandemic by the World Health Organization (WHO; Geneva, Switzerland) in March 2020. 3 Asia has suffered the highest number of disaster events: in total, 3,068 disasters occurred in Asia from 2000 through 2019. China reported the highest number of disaster events (577 events), followed by India (321 events), the Philippines (304 events), and Indonesia (278 events). 4 Recent disaster events emphasize the need for disaster risk reduction in the health sector, even for health services in developed countries. For example, during the 2011 Japan earthquake and tsunami, 80% of the hospitals in the Fukushima, Miyagi, and Iwate prefectures of Japan were destroyed or severely damaged, and many local public health personnel were also affected by the disaster, resulting in the complete paralysis or severe compromise of health services. 5,6 Disasters can cripple health facilities, leading to partial or total collapse of health services, especially in developing countries. 7 For example, after the Algerian earthquake in 2003, 50% of the health facilities in the impacted area were damaged and no longer operational. 
7 A further example occurred when an earthquake struck South Asia in October 2005 and caused the complete destruction of almost 50% of health facilities in the affected areas of Afghanistan, India, and Northern Pakistan, ranging from sophisticated hospitals to rural clinics, overwhelming the existing Emergency Medical Services. 7 Currently, most South Asian countries, including Sri Lanka and India, are in a state of crisis resulting from COVID-19, with overcrowded hospitals, low oxygen supplies, and overwhelmed capacity. 8,9 Sri Lanka, a developing nation and small island in the Indian Ocean, is frequently battered by natural disasters. The most devastating disaster it has ever experienced was the tsunami of 2004, which killed over 30,000 people and internally displaced almost half a million people. The country's health system was severely affected: 44 health institutions were completely damaged and 48 partially damaged. In addition, 35 health care workers (HCWs) lost their lives, and a large number of health workers were affected by injuries or psychological trauma due to the loss of their family members or property. 10 Monsoon floods and landslides also affect several health facilities across the country annually. Sometimes they have even led to the full or partial evacuation of affected hospitals, as experienced, for example, during the floods of 2016 and 2017, due to infrastructure damage or the functional collapse of services. These instances have resulted in huge economic impacts on the government for recovery-related needs. 11,12 Sri Lanka has also experienced several man-made disasters resulting in mass-casualty incidents. A 26-year war came to an end in 2009 after more than 64,000 deaths, hundreds of thousands of injuries, and the displacement of more than 800,000 persons. 13 The Easter Sunday bombing attack on April 21, 2019 was a recent human-made disaster which killed 250 people and resulted in more than 500 casualties. 
14 These mass-casualty incidents caused an acute surge of patients to nearby hospitals, interrupting normal hospital operations and overwhelming hospital capacity due to ill-preparedness, poor coordination, and limited resources. 15 Notwithstanding the vulnerability of Sri Lanka to disasters, there is currently no standard hospital disaster preparedness evaluation tool used in Sri Lanka. Such a tool could be used to inform potential improvements to hospital-level disaster preparedness. Therefore, with the goal of establishing a tool appropriate for Sri Lanka, this study aimed to determine the existence and distribution of hospital preparedness tools across the world, and also to identify the important components of those study instruments. Conclusion: Few published studies used a toolkit, checklist, or questionnaire to assess hospital disaster preparedness across the world during the decade of 2011-2020. The countries with a low UNHDI have a smaller number of publications, and developing countries generally have less focus on CBRN preparedness. The majority of past studies have neglected some crucial aspects of hospital disaster preparedness. Important preparedness themes were identified under each domain of the 4S framework, and these aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this systematic review can be used for planning a comprehensive disaster preparedness tool.
Background: Recent disasters emphasize the need for disaster risk mitigation in the health sector. A lack of standardized tools to assess hospital disaster preparedness hinders the improvement of emergency/disaster preparedness in hospitals. There is very limited research on evaluation of hospital disaster preparedness tools. Methods: A systematic review was performed using three databases, namely Ovid Medline, Embase, and CINAHL, as well as available grey literature sourced by Google, relevant websites, and also from the reference lists of selected articles. The studies published on hospital disaster preparedness across the world from 2011-2020, written in the English language, were selected by two independent reviewers. The global distribution of studies was analyzed according to the World Health Organization's (WHO) six geographical regions, and also according to the four categories of the United Nations Human Development Index (UNHDI). The preparedness themes were identified and categorized according to the 4S conceptual framework: space, stuff, staff, and systems. Results: From a total of 1,568 articles, 53 met inclusion criteria and were selected for data extraction and synthesis. Few published studies had used a study instrument to assess hospital disaster preparedness. The Eastern Mediterranean region recorded the highest number of such publications. The countries with a low UNHDI were found to have a smaller number of publications. Developing countries had more focus on preparedness for natural disasters and less focus on chemical, biological, radiological, and nuclear (CBRN) preparedness. Infrastructure, logistics, capacity building, and communication were the priority themes under the space, stuff, staff, and system domains of the 4S framework, respectively. 
The majority of studies had neglected some crucial aspects of hospital disaster preparedness, such as transport, back-up power, morgue facilities and dead body handling, vaccination, rewards/incentive, and volunteers. Conclusions: Important preparedness themes were identified under each domain of the 4S framework. The neglected aspects should be properly addressed in order to ensure adequate preparedness of hospitals. The results of this review can be used for planning a comprehensive disaster preparedness tool.
7,983
397
[ 19, 1309, 62, 94, 27, 365 ]
11
[ "health", "preparedness", "disaster", "publications", "studies", "number", "study", "sectional", "countries", "disaster preparedness" ]
[ "example 100 disasters", "increase disaster events", "countries advanced disaster", "19 natural disasters", "disasters occurring asia" ]
null
[CONTENT] disaster preparedness | hospital | questionnaire | survey | toolkit [SUMMARY]
null
[CONTENT] disaster preparedness | hospital | questionnaire | survey | toolkit [SUMMARY]
[CONTENT] disaster preparedness | hospital | questionnaire | survey | toolkit [SUMMARY]
[CONTENT] disaster preparedness | hospital | questionnaire | survey | toolkit [SUMMARY]
[CONTENT] disaster preparedness | hospital | questionnaire | survey | toolkit [SUMMARY]
[CONTENT] Civil Defense | Communication | Disaster Planning | Disasters | Hospitals | Humans [SUMMARY]
null
[CONTENT] Civil Defense | Communication | Disaster Planning | Disasters | Hospitals | Humans [SUMMARY]
[CONTENT] Civil Defense | Communication | Disaster Planning | Disasters | Hospitals | Humans [SUMMARY]
[CONTENT] Civil Defense | Communication | Disaster Planning | Disasters | Hospitals | Humans [SUMMARY]
[CONTENT] Civil Defense | Communication | Disaster Planning | Disasters | Hospitals | Humans [SUMMARY]
[CONTENT] example 100 disasters | increase disaster events | countries advanced disaster | 19 natural disasters | disasters occurring asia [SUMMARY]
null
[CONTENT] example 100 disasters | increase disaster events | countries advanced disaster | 19 natural disasters | disasters occurring asia [SUMMARY]
[CONTENT] example 100 disasters | increase disaster events | countries advanced disaster | 19 natural disasters | disasters occurring asia [SUMMARY]
[CONTENT] example 100 disasters | increase disaster events | countries advanced disaster | 19 natural disasters | disasters occurring asia [SUMMARY]
[CONTENT] example 100 disasters | increase disaster events | countries advanced disaster | 19 natural disasters | disasters occurring asia [SUMMARY]
[CONTENT] health | preparedness | disaster | publications | studies | number | study | sectional | countries | disaster preparedness [SUMMARY]
null
[CONTENT] health | preparedness | disaster | publications | studies | number | study | sectional | countries | disaster preparedness [SUMMARY]
[CONTENT] health | preparedness | disaster | publications | studies | number | study | sectional | countries | disaster preparedness [SUMMARY]
[CONTENT] health | preparedness | disaster | publications | studies | number | study | sectional | countries | disaster preparedness [SUMMARY]
[CONTENT] health | preparedness | disaster | publications | studies | number | study | sectional | countries | disaster preparedness [SUMMARY]
[CONTENT] health | events | affected | disasters | people | lanka | sri | sri lanka | disaster | example [SUMMARY]
null
[CONTENT] sectional | publications | 76 | 74 | 70 | 32 | 57 | 60 | number | 68 [SUMMARY]
[CONTENT] preparedness | aspects | countries | disaster preparedness | disaster | studies | countries generally focus cbrn | studies neglected | assess hospital disaster | assess hospital disaster preparedness [SUMMARY]
[CONTENT] preparedness | disaster | health | disaster preparedness | studies | hospital | study | published | hospitals | publications [SUMMARY]
[CONTENT] preparedness | disaster | health | disaster preparedness | studies | hospital | study | published | hospitals | publications [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 1,568 | 53 ||| ||| Eastern Mediterranean ||| UNHDI ||| ||| 4S ||| morgue facilities [SUMMARY]
[CONTENT] 4S ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| three | Ovid Medline | Embase | Google ||| 2011-2020 | English | two ||| the World Health Organization's | six | four | the United Nations Human Development Index | UNHDI ||| 4S ||| 1,568 | 53 ||| ||| Eastern Mediterranean ||| UNHDI ||| ||| 4S ||| morgue facilities ||| 4S ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| three | Ovid Medline | Embase | Google ||| 2011-2020 | English | two ||| the World Health Organization's | six | four | the United Nations Human Development Index | UNHDI ||| 4S ||| 1,568 | 53 ||| ||| Eastern Mediterranean ||| UNHDI ||| ||| 4S ||| morgue facilities ||| 4S ||| ||| [SUMMARY]
Tobacco Use and Its Association with Mental Morbidity and Health Compromising Behaviours in Adolescents in Indonesia.
33507676
Limited evidence has been established on associations between tobacco use and mental morbidity and health compromising behaviours. The study aimed to investigate the associations between tobacco use, mental problems, and health risk behaviour among adolescents attending school in Indonesia.
BACKGROUND
Nationally representative data were studied from 11,124 adolescents that took part in the cross-sectional "Indonesia Global School-Based Student Health Survey (GSHS) in 2015".
METHODS
The prevalence of current tobacco use was 12.8%. In adjusted logistic regression analysis, compared to non-current or never tobacco users, current tobacco use was associated with eight of eight mental problem indicators (lonely, anxiety, no close friend, suicidal ideation, suicide plan, suicide attempt and current alcohol use), two of four dietary risk behaviours (soft drink and fast food consumption) and seven of ten other health risk behaviours (in a physical fight, bullied, injury, ever sex, school truancy, and two sub-optimal hand hygiene behaviours).
RESULTS
Compared to nontobacco users, current tobacco users had significantly higher mental problem indicators and health risk behaviours. Multiple comorbidity with tobacco use should be targeted in interventions.
CONCLUSION
[ "Adolescent", "Adolescent Behavior", "Cross-Sectional Studies", "Female", "Humans", "Indonesia", "Male", "Mental Health", "Prevalence", "Risk-Taking", "Suicidal Ideation", "Tobacco Use" ]
8184187
Introduction
Tobacco causes the death of 8 million persons per year (WHO, 2019). The majority of the world’s smokers (80%) reside in developing countries (WHO, 2019). Most users of tobacco initiate this habit when they are young, during adolescence (Aldrich et al., 2014). More than one in ten (13.6%) adolescents in developing countries were current tobacco users (Xi et al., 2016). Smoking causes various diseases, including “cancer, heart disease, stroke, lung diseases, diabetes, and chronic obstructive pulmonary disease (COPD)” (CDC, 2018). Fewer studies have linked tobacco use with mental morbidity and health compromising behaviours. Some studies found a probable association between tobacco use and mental distress in young people (Chaiton et al., 2010; Lee et al., 2018; Peltzer and Pengpid, 2017; Saravanan and Heidhy, 2014), including suicidal ideation and suicide attempts (Han et al., 2009; Järvelaid, 2004; Lee et al., 2018; Tomori et al., 2001). Several studies found a relationship between tobacco use and alcohol use (Fujita and Maki, 2018; Tomori et al., 2001; Wang et al., 2017), and illicit drug use (Zammit et al., 2018; Tomori et al., 2001). A number of investigations identified that, compared to non-smokers, smokers were more likely to engage in poor dietary behaviours, such as inadequate fruit and vegetable consumption (Wang et al., 2017; Lee and Yi, 2016), fast food consumption (Hrubá et al., 2010; Larson et al., 2007; Wang et al., 2017), a higher frequency of eating out (Fujita and Maki, 2018), high sugar foods (Lee and Yi, 2016), soft drink intake (Larsen et al., 2007; Wang et al., 2017), and high sodium or salty snack consumption (Hrubá et al., 2010); they were also less likely to consume milk and dairy products (Lee and Yi, 2016; Wang et al., 2017) and more likely to skip breakfast (Cohen et al., 2003; Wang et al., 2017). In addition, tobacco use increased the odds of exercise (Lee and Yi, 2016) and school truancy (Tomori et al., 2001). 
Studies are needed on the association between tobacco use, psychiatric morbidity, and health risk behaviour among adolescents in developing countries, such as in Indonesia, in which smoking is on the rise. The study aimed to assess the associations between tobacco use, mental problems, and health risk behaviour among adolescents in Indonesia.
null
null
Results
Descriptive Statistics The study sample consisted of 11,124 in-school adolescents (overall response rate 94%; mean age 14.0 years, Standard Deviation 1.6) in Indonesia; 51.1% were girls, 43.3% experienced sometimes or mostly or always hunger in the past month, 77.6% had exposure to secondary smoke in the past week, and 53.8% had parents who used tobacco. In addition, 39.1% of the students had mostly or always support by their peers, and 22.4% scored high on parental support. The prevalence of current tobacco use was 12.8%. Current tobacco use was significantly higher in males, older adolescents, those who experienced more frequent hunger, those exposed to secondary smoke, those with parental tobacco use, and those with low peer and parental support (see Table 1). Associations between tobacco use and mental problems and health compromising behaviour In adjusted logistic regression analysis, compared to non-current or never tobacco users, current tobacco use was associated with eight of eight mental problem indicators [lonely (Adjusted Odds Ratio-AOR: 1.87, 95% Confidence Interval-CI: 1.37-2.54), anxiety (AOR: 1.95, 95% CI: 1.36-2.83), no close friend (AOR: 1.59, 95% CI: 1.08-2.36), suicidal ideation (AOR: 2.39, 95% CI: 1.58-3.62), suicide plan (AOR: 2.02, 95% CI: 1.53-2.68), suicide attempt (AOR: 6.96, 95% CI: 4.43-10.93), and current alcohol use (AOR: 14.97, 95% CI: 8.42-26.63)], two of four dietary risk behaviours [soft drink intake (AOR: 1.54, 95% CI: 1.31-1.83) and fast food consumption (AOR: 1.37, 95% CI: 1.14-1.65)], and seven of ten other health risk behaviours [in a physical fight (AOR: 2.85, 95% CI: 2.39-3.38), bullied (AOR: 1.87, 95% CI: 1.55-2.26), injury (AOR: 2.04, 95% CI: 1.68-2.47), school truancy (AOR: 2.45, 95% CI: 2.01-2.98), ever sex (AOR: 2.63, 95% CI: 2.05-3.38), not always washing hands after toilet (AOR: 1.29, 95% CI: 1.10-1.51), and not always washing hands before eating (AOR: 1.32, 95% CI: 1.09-1.60)] (see Table). 
Sample Characteristics of School-Going Adolescents in Indonesia, 2015 (N=11124) 1Based on Chi-square tests Associations between Tobacco Use and Health Outcomes UOR, Unadjusted Odds Ratio; AOR, Adjusted Odds Ratio; 1Adjusted for age, sex, experience of hunger, secondary smoke, parental tobacco use, peer support, and parental support
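The adjusted odds ratios above come from a multivariable logistic regression. As a rough illustration of that computation (not the authors' STATA analysis, and using simulated rather than GSHS survey data), the following sketch fits a logistic model by Newton-Raphson and derives an adjusted odds ratio with a Wald 95% confidence interval for a binary exposure, adjusted for one confounder:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson.

    Returns the coefficient vector and its covariance matrix
    (inverse Fisher information), from which Wald confidence
    intervals can be built.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30, 30)))
        W = p * (1.0 - p)                    # variance weights
        H = X.T @ (X * W[:, None])           # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta, np.linalg.inv(H)

# Simulated data (hypothetical, NOT the survey data): outcome =
# current alcohol use, exposure = current tobacco use, one
# confounder (age), echoing the structure of the adjusted model.
rng = np.random.default_rng(42)
n = 5000
tobacco = rng.binomial(1, 0.128, n)          # ~12.8% prevalence, as reported
age = rng.integers(12, 18, n).astype(float)
logit_p = -4.0 + 2.5 * tobacco + 0.1 * (age - 14.0)
alcohol = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

X = np.column_stack([np.ones(n), tobacco, age])
beta, cov = fit_logistic(X, alcohol)

aor = np.exp(beta[1])                        # adjusted odds ratio for tobacco
se = np.sqrt(cov[1, 1])
ci_low = np.exp(beta[1] - 1.96 * se)
ci_high = np.exp(beta[1] + 1.96 * se)
print(f"AOR: {aor:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")
```

In a real analysis, each additional confounder (sex, hunger, secondary smoke, parental tobacco use, peer and parental support) would simply be another column of X, and survey weights would further adjust the estimates.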
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussion" ]
[ "Tobacco causes the death of 8 million persons per year (WHO, 2019). The majority of the world’s smokers (80%) reside in developing countries (WHO, 2019). Most users of tobacco initiate this habit when they are young during adolescence (Aldrich et al., 2014). More than one in ten (13.6%) adolescents in developing countries were current tobacco users (Xi et al., 2016). Smoking causes various diseases, including “cancer, heart disease, stroke, lung diseases, diabetes, and chronic obstructive pulmonary disease (COPD)” (CDC, 2018). Fewer studies have linked tobacco use with mental morbidity and health compromising behaviours. \nSome studies found a probable association between tobacco use and mental distress in young people (Chaiton et al., 2010; Lee et al., 2018; Peltzer and Pengpid, 2017; Saravanan and Heidhy, 2014), including suicidal ideation and suicide attempts (Han et al., 2009;\nJärvelaid, 2004; Lee et al., 2018; Tomori et al., 2001). Several studies found a relationship between tobacco use and alcohol use (Fujita and Maki, 2018; Tomori et al., 2001; Wang et al. 2017), and illicit drug use (Zammit et al., 2018; Tomori et al., 2001). \nA number of investigations identified that compared to non-smokers, smokers were more likely to engage in poor dietary behaviour, such as inadequate fruit and vegetable consumption (Wang et al., 2017; Lee and Yi, 2016), fast food consumption (Hrubá et al., 2010; Larson et al., 2007; Wang et al., 2017), higher frequency of eating out (Fujita and Maki, 2018), high sugar foods (Lee and Yi, 2016), soft drink intake (Larsen et al., 2007; Wang et al., 2017), high sodium or salty snacks consumption (Hrubá et al., 2010), were less likely to eat milk and dairy products (Lee and Yi, 2016; Wang et al., 2017), and skipped breakfast (Cohen et al., 2003; Wang et al., 2017). In addition, tobacco use increased the odds of exercise (Lee and Yi, 2016) and school truancy (Tomori et al., 2001). 
\nStudies are needed on the association between tobacco use, psychiatric morbidity, and health risk behaviour among adolescents in developing countries, such as in Indonesia, in which smoking is on the rise. The study aimed to assess the associations between tobacco use, mental problems, and health risk behaviour among adolescents in Indonesia.", "\nSample and procedures\n\nCross-sectional nationally representative data from the 2015 GSHS in Indonesia were analysed (WHO, 2015a); sampling details were described previously (Pengpid and Peltzer, 2019). “The Ethics Commission for Health Research and Development approved the study and informed consent was obtained from the participating schools, parents, and students.”(WHO, 2015a).\n\nMeasures \n\n\nOutcome variables\n\n\nMental problem indicators\n\nLoneliness: “mostly or always lonely during the past 12 months.” Anxiety: ”mostly or always been so worried about something that could not sleep at night in the past 12 months.”\nNo close friends: “having no close friends.” Suicidal behaviour: suicidal ideation, suicide plan, and suicide attempt in the past 12 months. Current alcohol use: “≥1 days, at least one drink containing alcohol in the past 30 days.” (WHO, 2015b).\n\nDietary risk variables\n\nSoft drink consumption: drinking ≥1 time per day “carbonated soft drinks, such as Coca-Cola, Sprite, Fanta, or Big Bola (excluding diet soft drinks).” Inadequate fruit intake: “<twice/day during the past 30 days.” Inadequate vegetable intake: “<3 times/day during the past 30 days.” Fast food consumption: “≥2 days/week eating food from a fast food restaurant.” \nInadequate physical activity: “<7 days at least 60 minutes of moderate to vigorous-intensity physical activity” (WHO, 2015b, 2017).\n\nOther health risk behaviours\n\nSedentary behaviour: “≥3 hours/day sitting when you are not in school or doing homework” (Guthold et al., 2010). In a physical fight: any in the past 12 months. Being bullied: any day in the past 30 days. 
Serious injury: any during the past 12 months. School truancy: any day during the past 30 days. Ever sex: ever having had sexual intercourse. Inadequate tooth brushing: “<2 times/day cleaning or brushing one’s teeth.” Inadequate hand hygiene: “not always washing hands after toilet, before eating and with soap.” (WHO, 2015b).\nExposure variable\nTobacco use: “During the past 30 days, on how many days did you smoke cigarettes/use any tobacco products other than cigarettes, such as sirih, piper betel, cerutu, or cigars?” Responses ranged from “1=0 days to 7=All 30 days (coded 1=0 and 2–7=1)”\n\nConfounding variables\n\nAge, sex, and hunger (in the past month). Secondary smoke: any day in the past week.\nParental tobacco use: any parent or guardian. Peer support: “exposure to kind and helpful students in school in the past month” (never, rarely, or sometimes=low, and most of the time or always=high). Parental support: “mostly or always parental supervision, parental connectedness, parental bonding, and parental respect for privacy in the past 30 days.” (WHO, 2015b) (0=low, 2=moderate, and 3-4=high support).\n\nData analysis\n\nFrequency, mean and standard deviations were calculated to describe the sample and its indicators. To test for differences in proportions, Pearson Chi-square tests were used. Logistic regression was used to determine the associations between tobacco use and mental problems and health compromising behaviours. Health outcomes found to have a significant association between tobacco use and health indicators were subsequently included in the multivariable logistic regression model, which was adjusted for relevant confounders (age group, sex, experience of hunger, secondary smoke, parental tobacco use, peer support, and parental support). Missing data were excluded from the calculations. P<0.05 was accepted as significant. 
“STATA 15.00 (StatCorp LP, College Station, TX)” was used for all statistical procedures, taking the complex study design into account.", "\nDescriptive Statistics\n\nThe study sample consisted of 11,124 in-school adolescents (overall response rate 94%) (mean age 14.0 years, with a Standard Deviation of 1.6) in Indonesia, 51.1% were girls, 43.3% experienced sometimes or mostly or always hunger in the past month, 77.6% had exposure to secondary smoke in the past week, and 53.8% had parents who used tobacco. In addition, 39.1% of the students had mostly or always support by their peers, and 22.4% scored high on parental support. The prevalence of current tobacco use was 12.8%. Current tobacco use was significantly higher in males, older adolescents, those who experienced more frequent hunger, those that were exposed to secondary smoke, parental tobacco use, had low peer, and parental support (see Table 1).\n\nAssociations between tobacco use and mental problems and health compromising behaviour\n\nIn adjusted logistic regression analysis, compared to non-current or never tobacco users, current tobacco use was associated with eight of eight mental problem indicators [lonely: Adjusted Odds Ratio-AOR: 1.87, 95% Confidence Interval-CI: 1.37-2.54), anxiety (AOR: 1.95, 95% CI: 1.36-2.83), no close friend (AOR: 1.59, 95% CI: 1.08-2.36), suicidal ideation (AOR: 2.39, 95% CI: 1.58-3.62), suicide plan (AOR: 2.02, 95% CI: 1.53-2.68), suicide attempt (AOR: 6.96, 95% CI: 4.43-10.93) and current alcohol use (AOR: 14.97, 95% CI: 8.42-26.63)], two of four dietary risk behaviours [soft drink intake (AOR: 1.54, 95% CI: 1.31-1.83) and fast food consumption (AOR: 1.37, 95% CI: 1.14-1.65)] and seven of ten other health risk behaviours [in a physical fight (AOR: 2.85, 95% CI: 2.39-3.38), bullied (AOR: 1.87, 95% CI: 1.55-2.26), injury (AOR: 2.04, 95% CI: 1.68-2.47), school truancy (AOR: 2.45, 95% CI: 2.01-2.98), ever sex (AOR: 2.63, 95% CI: 2.05-3.38), not always washing hands after toilet 
(AOR: 1.29, 95% CI: 1.10-1.51) and not always washing hands before eating (AOR: 1.32, 95% CI: 1.09-1.60)] (see Table).\nSample Characteristics of School-Going Adolescents in Indonesia, 2015 (N=11124)\n\n1Based on Chi-square tests\nAssociations between Tobacco Use and Health Outcomes\nUOR, Unadjusted Odds Ratio; AOR, Adjusted Odds Ratio; 1Adjusted for age, sex, experience of hunger, secondary smoke, parental tobacco use, peer support, and parental support", "In this large nationally representative study of school-going adolescents in Indonesia, compared to non-current or never tobacco users, current tobacco users had significantly poorer mental health (lonely, anxiety, no close friends, suicidal ideation, suicide plan, suicide attempts and current alcohol use) and increased odds for several health compromising behaviours (soft drink intake, fast food consumption, in a physical fight, bullied, injury, school truancy, ever sex, and sub-optimal hand hygiene behaviours) than non-current or never tobacco users in adjusted analysis. These results are generally in line with previous findings (Chaiton et al., 2009; Fujita and Maki, 2018; Halperin et al., 2010; Han et al., 2009; Hrubá et al., 2010; Larson et al., 2007; Lee et al., 2018; Peltzer and Pengpid, 2017; Saravanan and Heidhy, 2014; Tomori et al., 2001; Wang et al., 2017). Alcohol use is known to be a comorbidity of tobacco use (Konkolÿ et al., 2016; Peltzer and Pengpid, 2018), and this study showed a very high association between tobacco and alcohol use. It is possible that tobacco users that are likely to deny the harms of tobacco use are also likely to deny the risks of other health risk behaviours (Zammit et al., 2018). Tobacco use may be utilized by students to cope with psychosocial problems, as a form of “self-medication” (Mathew et al., 2017; Tomori et al., 2001). 
The finding may be supported by the study results of very high comorbidity of tobacco use with alcohol use.\nSome studies (Wang et al., 2017; Lee and Yi, 2016) found an association between tobacco use and physical activity and inadequate fruit and vegetable intake, but we identified no significant associations in this study. Study findings support the importance of understanding the various health risk behaviours tobacco users are more likely to engage in for the development of school mental and physical health promotion (Pengpid and Peltzer, 2020). This study appears to confirm that tobacco use among adolescents is linked to the development of mental and physical health risk factors (Tomori et al., 2001). Programmes should not only target tobacco use cessation but also promote a variety of healthy behaviours, in an integrated school health promotion programme.\n\nStudy limitation\n\nDue to the cross-sectional design of the study, we are unable to determine the direction of the relationship between tobacco use and health indicators. We also cannot generalize the findings to all adolescents in Indonesia, since only school-going participants were included in the study. Out-of-school adolescents may have a different pattern of tobacco use associations. Due to the self-report of the data, it is possible that some of the health and mental health indicators were underreported.\nIn conclusion, in this large cross-sectional national study in Indonesia, compared to non-current or never tobacco users, current tobacco users had significantly poorer mental health (lonely, anxiety, no close friends, suicidal ideation, suicide plan, suicide attempts and current alcohol use) and increased odds for several health compromising behaviours (soft drink intake, fast food consumption, in a physical fight, bullied, injury, ever sex, school truancy, and sub-optimal hand hygiene behaviours) than non-current or never tobacco users in adjusted analysis. 
Multiple comorbidities with tobacco use should be targeted in interventions." ]
[ "intro", "materials|methods", "results", "discussion" ]
[ "Tobacco use", "mental health", "health behaviour", "adolescents", "Indonesia" ]
Introduction: Tobacco causes the death of 8 million persons per year (WHO, 2019). The majority of the world’s smokers (80%) reside in developing countries (WHO, 2019). Most users of tobacco initiate this habit when they are young during adolescence (Aldrich et al., 2014). More than one in ten (13.6%) adolescents in developing countries were current tobacco users (Xi et al., 2016). Smoking causes various diseases, including “cancer, heart disease, stroke, lung diseases, diabetes, and chronic obstructive pulmonary disease (COPD)” (CDC, 2018). Fewer studies have linked tobacco use with mental morbidity and health compromising behaviours. Some studies found a probable association between tobacco use and mental distress in young people (Chaiton et al., 2010; Lee et al., 2018; Peltzer and Pengpid, 2017; Saravanan and Heidhy, 2014), including suicidal ideation and suicide attempts (Han et al., 2009; Järvelaid, 2004; Lee et al., 2018; Tomori et al., 2001). Several studies found a relationship between tobacco use and alcohol use (Fujita and Maki, 2018; Tomori et al., 2001; Wang et al. 2017), and illicit drug use (Zammit et al., 2018; Tomori et al., 2001). A number of investigations identified that compared to non-smokers, smokers engaged more likely in poor dietary behaviour, such as inadequate fruit and vegetable consumption (Wang et al., 2017; Lee and Yi, 2016), fast food consumption (Hrubá et al., 2010; Larson et al., 2007; Wang et al., 2017), higher frequency of eating out (Fujita and Maki, 2018), high sugar foods (Lee and Yi, 2016), soft drink intake (Larsen et al., 2007; Wang et al., 2017), high sodium or salty snacks consumption (Hrubá et al., 2010), were less likely to eat milk and dairy products (Lee and Yi, 2016; Wang et al., 2017), and skipped breakfast (Cohen et al., 2003; Wang et al., 2017). In addition, tobacco use increased the odds of exercise (Lee and Yi, 2016) and school truancy (Tomori et al., 2001). 
Studies are needed on the association between tobacco use, psychiatric morbidity, and health risk behaviour among adolescents in developing countries, such as in Indonesia, in which smoking is on the rise. The study aimed to assess the associations between tobacco use, mental problems, and health risk behaviour among adolescents in Indonesia. Materials and Methods: Sample and procedures Cross-sectional nationally representative data from the 2015 GSHS in Indonesia were analysed (WHO, 2015a); sampling details were described previously (Pengpid and Peltzer, 2019). “The Ethics Commission for Health Research and Development approved the study and informed consent was obtained from the participating schools, parents, and students.”(WHO, 2015a). Measures Outcome variables Mental problem indicators Loneliness: “mostly or always lonely during the past 12 months.” Anxiety: ”mostly or always been so worried about something that could not sleep at night in the past 12 months.” No close friends: “having no close friends.” Suicidal behaviour: suicidal ideation, suicide plan, and suicide attempt in the past 12 months. Current alcohol use: “≥1 days, at least one drink containing alcohol in the past 30 days.” (WHO, 2015b). Dietary risk variables Soft drink consumption: drinking ≥1 time per day “carbonated soft drinks, such as Coca-Cola, Sprite, Fanta, or Big Bola (excluding diet soft drinks).” Inadequate fruit intake: “<twice/day during the past 30 days.” Inadequate vegetable intake: “<3 times/day during the past 30 days.” Fast food consumption: “≥2 days/week eating food from a fast food restaurant.” Inadequate physical activity: “<7 days at least 60 minutes of moderate to vigorous-intensity physical activity” (WHO, 2015b, 2017). Other health risk behaviours Sedentary behaviour: “≥3 hours/day sitting when you are not in school or doing homework” (Guthold et al., 2010). In a physical fight: any in the past 12 months. Being bullied: any day in the past 30 days. 
Serious injury: any during the past 12 months. School truancy: any day during the past 30 days. Ever sex: ever having had sexual intercourse. Inadequate tooth brushing: “<2 times/day cleaning or brushing one’s teeth.” Inadequate hand hygiene: “not always washing hands after toilet, before eating and with soap.” (WHO, 2015b). Exposure variable Tobacco use: “During the past 30 days, on how many days did you smoke cigarettes/use any tobacco products other than cigarettes, such as sirih, piper betel, cerutu, or cigars?” Responses ranged from “1=0 days to 7=All 30 days (coded 1=0 and 2–7=1)” Confounding variables Age, sex, and hunger (in the past month). Secondary smoke: any day in the past week. Parental tobacco use: any parent or guardian. Peer support: “exposure to kind and helpful students in school in the past month” (never, rarely, or sometimes=low, and most of the time or always=high). Parental support: “mostly or always parental supervision, parental connectedness, parental bonding, and parental respect for privacy in the past 30 days.” (WHO, 2015b) (0=low, 2=moderate, and 3-4=high support). Data analysis Frequency, mean and standard deviations were calculated to describe the sample and its indicators. To test for differences in proportions, Pearson Chi-square tests were used. Logistic regression was used to determine the associations between tobacco use and mental problems and health compromising behaviours. Health outcomes found to have a significant association between tobacco use and health indicators were subsequently included in the multivariable logistic regression model, which was adjusted for relevant confounders (age group, sex, experience of hunger, secondary smoke, parental tobacco use, peer support, and parental support). Missing data were excluded from the calculations. P<0.5 was accepted as significant. “STATA 15.00 (StatCorp LP, College Station, TX)” was used for all statistical procedures, taking the complex study design into account. 
Results: Descriptive Statistics The study sample consisted of 11,124 in-school adolescents (overall response rate 94%) (mean age 14.0 years, with a Standard Deviation of 1.6) in Indonesia, 51.1% were girls, 43.3% experienced sometimes or mostly or always hunger in the past month, 77.6% had exposure to secondary smoke in the past week, and 53.8% had parents who used tobacco. In addition, 39.1% of the students had mostly or always support by their peers, and 22.4% scored high on parental support. The prevalence of current tobacco use was 12.8%. Current tobacco use was significantly higher in males, older adolescents, those who experienced more frequent hunger, those that were exposed to secondary smoke, parental tobacco use, had low peer, and parental support (see Table 1). Associations between tobacco use and mental problems and health compromising behaviour In adjusted logistic regression analysis, compared to non-current or never tobacco users, current tobacco use was associated with eight of eight mental problem indicators [lonely: Adjusted Odds Ratio-AOR: 1.87, 95% Confidence Interval-CI: 1.37-2.54), anxiety (AOR: 1.95, 95% CI: 1.36-2.83), no close friend (AOR: 1.59, 95% CI: 1.08-2.36), suicidal ideation (AOR: 2.39, 95% CI: 1.58-3.62), suicide plan (AOR: 2.02, 95% CI: 1.53-2.68), suicide attempt (AOR: 6.96, 95% CI: 4.43-10.93) and current alcohol use (AOR: 14.97, 95% CI: 8.42-26.63)], two of four dietary risk behaviours [soft drink intake (AOR: 1.54, 95% CI: 1.31-1.83) and fast food consumption (AOR: 1.37, 95% CI: 1.14-1.65)] and seven of ten other health risk behaviours [in a physical fight (AOR: 2.85, 95% CI: 2.39-3.38), bullied (AOR: 1.87, 95% CI: 1.55-2.26), injury (AOR: 2.04, 95% CI: 1.68-2.47), school truancy (AOR: 2.45, 95% CI: 2.01-2.98), ever sex (AOR: 2.63, 95% CI: 2.05-3.38), not always washing hands after toilet (AOR: 1.29, 95% CI: 1.10-1.51) and not always washing hands before eating (AOR: 1.32, 95% CI: 1.09-1.60)] (see Table). 
Sample Characteristics of School-Going Adolescents in Indonesia, 2015 (N=11124) 1Based on Chi-square tests Associations between Tobacco Use and Health Outcomes UOR, Unadjusted Odds Ratio; AOR, Adjusted Odds Ratio; 1Adjusted for age, sex, experience of hunger, secondary smoke, parental tobacco use, peer support, and parental support Discussion: In this large nationally representative study of school-going adolescents in Indonesia, compared to non-current or never tobacco users, current tobacco users had significantly poorer mental health (lonely, anxiety, no close friends, suicidal ideation, suicide plan, suicide attempts and current alcohol use) and increased odds for several health compromising behaviours (soft drink intake, fast food consumption, in a physical fight, bullied, injury, school truancy, ever sex, and sub-optimal hand hygiene behaviours) than non-current or never tobacco users in adjusted analysis. These results are generally in line with previous findings (Chaiton et al., 2009; Fujita and Maki, 2018; Halperin et al., 2010; Han et al., 2009; Hrubá et al., 2010; Larson et al., 2007; Lee et al., 2018; Peltzer and Pengpid, 2017; Saravanan and Heidhy, 2014; Tomori et al., 2001; Wang et al., 2017). Alcohol use is known to be a comorbidity of tobacco use (Konkolÿ et al., 2016; Peltzer and Pengpid, 2018), and this study showed a very high association between tobacco and alcohol use. It is possible that tobacco users that are likely to deny the harms of tobacco use are also likely to deny the risks of other health risk behaviours (Zammit et al., 2018). Tobacco use may be utilized by students to cope with psychosocial problems, as a form of “self-medication” (Mathew et al., 2017; Tomori et al., 2001). The finding may be supported by the study results of very high comorbidity of tobacco use with alcohol use. 
Some studies (Wang et al., 2017; Lee and Yi, 2016) found an association between tobacco use and physical activity and inadequate fruit and vegetable intake, but we identified no significant associations in this study. Study findings support the importance of understanding the various health risk behaviours tobacco users are more likely to engage in for the development of school mental and physical health promotion (Pengpid and Peltzer, 2020). This study appears to confirm that tobacco use among adolescents is linked to the development of mental and physical health risk factors (Tomori et al., 2001). Programmes should not only target tobacco use cessation but also promote a variety of healthy behaviours, in an integrated school health promotion programme. Study limitation Due to the cross-sectional design of the study, we are unable to determine the direction of the relationship between tobacco use and health indicators. We can also not generalize the findings to adolescent in Indonesia, since only school-going participants were included in the study. Out-off school adolescents may have a different pattern of tobacco use associations. Due to the self-report of the data, it is possible that some of the health and mental health indicators were underreported. In conclusion, in this large cross-sectional national study in Indonesia, compared to non-current or never tobacco users, current tobacco users had significantly poorer mental health (lonely, anxiety, no close friends, suicidal ideation, suicide plan, suicide attempts and current alcohol use) and increased odds for several health compromising behaviours (soft drink intake, fast food consumption, in a physical fight, bullied, injury, ever sex, school truancy, and sub-optimal hand hygiene behaviours) than non-current or never tobacco users in adjusted analysis. Multiple comorbidities with tobacco use should be targeted in interventions.
Background: Limited evidence has been established on associations between tobacco use and mental morbidity and health compromising behaviours. The study aimed to investigate the associations between tobacco use, mental problems, and health risk behaviour among adolescents attending school in Indonesia. Methods: Nationally representative data were studied from 11,124 adolescents that took part in the cross-sectional "Indonesia Global School-Based Student Health Survey (GSHS) in 2015". Results: The prevalence of current tobacco use was 12.8%. In adjusted logistic regression analysis, compared to non-current or never tobacco users, current tobacco use was associated with eight of eight mental problem indicators (lonely, anxiety, no close friend, suicidal ideation, suicide plan, suicide attempt and current alcohol use), two of four dietary risk behaviours (soft drink and fast food consumption) and seven of ten other health risk behaviours (in a physical fight, bullied, injury, ever sex, school truancy, and two sub-optimal hand hygiene behaviours). Conclusions: Compared to nontobacco users, current tobacco users had significantly higher mental problem indicators and health risk behaviours. Multiple comorbidity with tobacco use should be targeted in interventions.
null
null
2,541
229
[]
4
[ "tobacco", "use", "tobacco use", "health", "95", "past", "aor", "ci", "95 ci", "current" ]
[ "tobacco causes death", "risk behaviours tobacco", "tobacco use significantly", "tobacco use psychiatric", "tobacco use adolescents" ]
null
null
null
[CONTENT] Tobacco use | mental health | health behaviour | adolescents | Indonesia [SUMMARY]
null
[CONTENT] Tobacco use | mental health | health behaviour | adolescents | Indonesia [SUMMARY]
null
[CONTENT] Tobacco use | mental health | health behaviour | adolescents | Indonesia [SUMMARY]
null
[CONTENT] Adolescent | Adolescent Behavior | Cross-Sectional Studies | Female | Humans | Indonesia | Male | Mental Health | Prevalence | Risk-Taking | Suicidal Ideation | Tobacco Use [SUMMARY]
null
[CONTENT] Adolescent | Adolescent Behavior | Cross-Sectional Studies | Female | Humans | Indonesia | Male | Mental Health | Prevalence | Risk-Taking | Suicidal Ideation | Tobacco Use [SUMMARY]
null
[CONTENT] Adolescent | Adolescent Behavior | Cross-Sectional Studies | Female | Humans | Indonesia | Male | Mental Health | Prevalence | Risk-Taking | Suicidal Ideation | Tobacco Use [SUMMARY]
null
[CONTENT] tobacco causes death | risk behaviours tobacco | tobacco use significantly | tobacco use psychiatric | tobacco use adolescents [SUMMARY]
null
[CONTENT] tobacco causes death | risk behaviours tobacco | tobacco use significantly | tobacco use psychiatric | tobacco use adolescents [SUMMARY]
null
[CONTENT] tobacco causes death | risk behaviours tobacco | tobacco use significantly | tobacco use psychiatric | tobacco use adolescents [SUMMARY]
null
[CONTENT] tobacco | use | tobacco use | health | 95 | past | aor | ci | 95 ci | current [SUMMARY]
null
[CONTENT] tobacco | use | tobacco use | health | 95 | past | aor | ci | 95 ci | current [SUMMARY]
null
[CONTENT] tobacco | use | tobacco use | health | 95 | past | aor | ci | 95 ci | current [SUMMARY]
null
[CONTENT] wang 2017 | wang | 2018 | lee | tobacco | 2017 | use | 2016 | lee yi 2016 | tomori 2001 [SUMMARY]
null
[CONTENT] 95 | aor | ci | 95 ci | tobacco | use | parental | tobacco use | support | 39 [SUMMARY]
null
[CONTENT] tobacco | use | 95 | aor | tobacco use | ci | 95 ci | past | health | days [SUMMARY]
null
[CONTENT] ||| Indonesia [SUMMARY]
null
[CONTENT] 12.8% ||| eight | eight | two | four | seven | ten | two [SUMMARY]
null
[CONTENT] ||| Indonesia ||| 11,124 | Indonesia Global School-Based Student Health Survey (GSHS | 2015 ||| ||| 12.8% ||| eight | eight | two | four | seven | ten | two ||| ||| [SUMMARY]
null
Performance of ivisen IA-1400, a new point-of-care device with an internal centrifuge system, for the measurement of cardiac troponin I levels.
33729609
We present the analytical performance of the ivisen IA-1400, a new point-of-care device that features a characteristic built-in centrifuge system, to measure blood cardiac troponin I (cTnI) levels.
BACKGROUND
Whole blood and plasma samples obtained from patients who visited Korea University Guro Hospital were used to analyze measurement range, cross-reactivity, interference, and sensitivity and specificity. We performed a correlation analysis of the ivisen IA-1400 versus the Access AccuTnI+3 immunoassay using the UniCel™ DxI 800 platform and the PATHFAST™ hs-cTnI assay.
METHODS
Within-run precisions were classified as low, 9.8%; middle, 10.2%; and high, 8.5%. The limit of blank was 3.1 ng/L for plasma samples and 4.3 ng/L for whole blood samples. The limit of detection was 8.4 ng/L for plasma samples and 10.0 ng/L for whole blood samples, respectively. The limit of quantitation at a coefficient of variation of 20% and 10% was 19.5 ng/L and 45.5 ng/L for plasma samples, respectively. The comparative evaluation between the two other assays and ivisen IA-1400 showed excellent correlation, with Spearman's correlation coefficients (R) of 0.992 and 0.985. The sensitivity and specificity of ivisen IA-1400 using the optimum cut-off value of 235 ug/L were 94.6% and 98.2%, respectively.
RESULTS
The ivisen IA-1400 showed acceptable and promising performance in cTnI measurements using whole blood and plasma samples, with limited information in the clinical performance. The flexibility for sample selection using the internal centrifugation system is the main advantage of this point-of-care device.
CONCLUSION
[ "Centrifugation", "Confidence Intervals", "Cross Reactions", "Humans", "Limit of Detection", "Myocardium", "Point-of-Care Systems", "Sensitivity and Specificity", "Troponin I" ]
8128291
INTRODUCTION
The rapid and accurate diagnosis of cardiovascular diseases is essential for initiating appropriate and timely medical treatment, especially for life‐threatening emergencies, such as acute myocardial infarction (AMI). The World Health Organization incorporated the serial testing of cardiac biomarkers in the diagnostic criteria for AMI in 1986, along with a history of chest pain and changes on electrocardiograms 1 A cardiac biomarker is a biochemical compound used to detect cardiac diseases such as AMI and myocardial injury. 2 These compounds should be sensitive and specific to cardiac tissue, provide results with a short turnaround time (TAT), and be cost‐effective. 3 After the first report of the use of a biochemical marker for myocardial injury in 1954, 4 numerous new diagnostic marker proteins that aid the assessment of cardiac diseases and prediction of cardiovascular risk have been identified. The European Society of Cardiology/American College of Cardiology recommends the use of cardiac biomarkers for the diagnosis of myocardial injury, preferably cardiac troponin (cTn [I or T]). When the membranes of cardiac muscle cells are damaged, cTns are released into the circulation. Both cTnI and cTnT can be measured using commercially available analytical platforms. 5 From the development of early monoclonal antibody‐based diagnostic immunoassays to recent high‐sensitivity cTnI and cTnT (hs‐cTnI and hs‐cTnT) assays, the limit of detection (LoD) has been lowered, although the values are highly variable among various hs‐cTnI assays, ranging from 0.009 ng/L to 2.5 ng/L. 6 To provide earlier treatment for AMI, there remains a need for a more rapid and efficient measurement of cTns. One strategy is the development and use of point‐of‐care (POC) testing platforms. 7 The cTns measurements obtained using central laboratory equipment are more sensitive than POC measurements. 
5 However, POC devices can deliver rapid results, within 30 minutes near the bedside, whereas cTn assays performed in the central laboratory usually take longer because of sample transport, handling, and pretreatment. Fast diagnosis from rapid TAT results can reduce the length of hospital stay and overall hospital costs. 8 , 9 These devices employ more user‐friendly systems that can be operated by less‐skilled personnel. Therefore, POC devices can be beneficial in situations where there are no skilled medical technologists or 24‐hour laboratory operations are impossible. These advantages of POC assays have led to the development of cTn assays with high analytical quality and a shorter TAT. Herein, we present the analytical performance of a newly developed POC device, the ivisen IA‐1400 (i‐SENS, Seoul, South Korea), in terms of imprecision, linearity, cross‐reactivity with interferences, sensitivity, and specificity, with a correlation analysis for comparison to preexisting devices. As the ivisen IA‐1400 can process both whole blood and plasma samples and we aimed to evaluate the correlation between the two sample types as well.
null
null
RESULTS
Imprecision Mean, SD, and CV values were obtained using three different QC materials (level 1, low; level 2, middle; and level 3, high) and three lots (Table 1). Within‐run precisions combining the results of the three lots were as follows: low, 9.5%; middle, 10.2%; and high, 8.5%. Between‐lot precisions were as follows: low, 6.0%; middle, 4.6%; and high, 4.8%. A total reproducibility was 10.1%, 12.2%, and 9.9% at the low, middle, and high levels, respectively. Imprecision study of ivisen IA‐1400 using LiquichekTM Cardiac Markers Plus Control LT (Bio‐rad) of level 1, 2, and 3 Abbreviation: CV, Coefficient of variation. Mean, SD, and CV values were obtained using three different QC materials (level 1, low; level 2, middle; and level 3, high) and three lots (Table 1). Within‐run precisions combining the results of the three lots were as follows: low, 9.5%; middle, 10.2%; and high, 8.5%. Between‐lot precisions were as follows: low, 6.0%; middle, 4.6%; and high, 4.8%. A total reproducibility was 10.1%, 12.2%, and 9.9% at the low, middle, and high levels, respectively. Imprecision study of ivisen IA‐1400 using LiquichekTM Cardiac Markers Plus Control LT (Bio‐rad) of level 1, 2, and 3 Abbreviation: CV, Coefficient of variation. Analytical sensitivity: LoB, LoD, and LoQ Both plasma and whole blood samples were used for the LoB, LoD, and LoQ analyses (Table 2). The LoB for lots 1 and 2 were 3.1 and 2.4 ng/L for plasma samples, and 4.3 and 2.8 ng/L for whole blood samples, respectively. The LoD for lots 1 and 2 were 8.4 and 7.1 ng/L for plasma samples, and 10.0 and 6.4 ng/L for whole blood samples, respectively. The LoQ values were calculated at CVs of 20% and 10%. The LoQ at CVs of 20% and 10% were 14.4 and 45.5 ng/L for plasma samples and 28.6 and 57.2 ng/L for whole blood samples, respectively, in lot 1. In lot 2, the LoQ at CVs of 20% and 10% were 19.5 and 39.1 ng/L for plasma samples and 15.6 and 31.1 ng/L for whole blood samples, respectively. 
The greater of two values were reported as the LoB, LoD, and LoQ for the ivisen IA‐1400. No statistically significant differences in the LoQ were observed between the plasma and whole blood samples (P > 0.05). The LoB, LoD, and LoQ profiles using plasma samples of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 are summarized in Table 3. 19 , 20 , 21 The data, including the 99th percentile upper reference limit (URL) for the AccuTnI+3 and PATHFAST were provided by their respective manufacturers. Summary of LoB, LoD, and LoQ in measurement of cTnI using ivisen IA‐1400 by sample type and lot. Bold letters indicate the reported value for overall study Abbreviations: CV, coefficient of variationLoB, limit of blank; LoD, limit of detection; LoQ, limit of quantitation. Profiles of LoB, LoD, and LoQ at CVs of 20% and 10% of Access AccuTnI+3, PATHFASTTM hs‐cTnI assay, and ivisen IA‐1400 using plasma samples Abbreviation: NA, not available. Both plasma and whole blood samples were used for the LoB, LoD, and LoQ analyses (Table 2). The LoB for lots 1 and 2 were 3.1 and 2.4 ng/L for plasma samples, and 4.3 and 2.8 ng/L for whole blood samples, respectively. The LoD for lots 1 and 2 were 8.4 and 7.1 ng/L for plasma samples, and 10.0 and 6.4 ng/L for whole blood samples, respectively. The LoQ values were calculated at CVs of 20% and 10%. The LoQ at CVs of 20% and 10% were 14.4 and 45.5 ng/L for plasma samples and 28.6 and 57.2 ng/L for whole blood samples, respectively, in lot 1. In lot 2, the LoQ at CVs of 20% and 10% were 19.5 and 39.1 ng/L for plasma samples and 15.6 and 31.1 ng/L for whole blood samples, respectively. The greater of two values were reported as the LoB, LoD, and LoQ for the ivisen IA‐1400. No statistically significant differences in the LoQ were observed between the plasma and whole blood samples (P > 0.05). The LoB, LoD, and LoQ profiles using plasma samples of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 are summarized in Table 3. 
19 , 20 , 21 The data, including the 99th percentile upper reference limit (URL) for the AccuTnI+3 and PATHFAST were provided by their respective manufacturers. Summary of LoB, LoD, and LoQ in measurement of cTnI using ivisen IA‐1400 by sample type and lot. Bold letters indicate the reported value for overall study Abbreviations: CV, coefficient of variationLoB, limit of blank; LoD, limit of detection; LoQ, limit of quantitation. Profiles of LoB, LoD, and LoQ at CVs of 20% and 10% of Access AccuTnI+3, PATHFASTTM hs‐cTnI assay, and ivisen IA‐1400 using plasma samples Abbreviation: NA, not available. Linearity and hook effect According to the regression analysis of 10 pooled replicates, linearity was confirmed for the range of 10 ng/L (LoD value) to 10,000 ng/L. No hook effect was observed since there was no decrease in the measured value at high concentrations up to 36,700 ng/L. According to the regression analysis of 10 pooled replicates, linearity was confirmed for the range of 10 ng/L (LoD value) to 10,000 ng/L. No hook effect was observed since there was no decrease in the measured value at high concentrations up to 36,700 ng/L. Cross‐reactivity and interference The calculated bias (%) at low and high concentrations of the eight cross‐reacting materials was less than 1%, suggesting no significant cross‐reactivity. None of the tested interfering substances showed bias greater than ±20%, except for the high level of rheumatoid factor (500 IU/mL). The level of the rheumatoid factor that met the bias of less than ±20% was 125 IU/mL at high concentrations and 250 IU/mL at low concentrations. Immunoassays using mouse antibodies are prone to interference from heterophilic antibodies. For the evaluation of HAMA, multiple types of HAMA materials from different manufacturers were used and the bias was less than ±20%. However, in patients with a history of exposure to mice or immunotherapy, the results should be interpreted cautiously. 
The calculated bias (%) at low and high concentrations of the eight cross‐reacting materials was less than 1%, suggesting no significant cross‐reactivity. None of the tested interfering substances showed bias greater than ±20%, except for the high level of rheumatoid factor (500 IU/mL). The level of the rheumatoid factor that met the bias of less than ±20% was 125 IU/mL at high concentrations and 250 IU/mL at low concentrations. Immunoassays using mouse antibodies are prone to interference from heterophilic antibodies. For the evaluation of HAMA, multiple types of HAMA materials from different manufacturers were used and the bias was less than ±20%. However, in patients with a history of exposure to mice or immunotherapy, the results should be interpreted cautiously. Method and sample type comparison The comparative evaluation of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 using 111 plasma samples showed excellent correlation, with Spearman's correlation coefficient (R) values of 0.992 and 0.985, respectively (Figure 2, Table 4). The results of the correlation analysis between whole blood and plasma samples using the ivisen IA‐1400 are shown in Figure 3A. The correlation between the whole blood and plasma samples was excellent (R = 0.988), and no significant bias (−2.6%, P > 0.05) was observed in the Bland‐Altman plot (Figure 3B). The comparative evaluation between Access AccuTnI+3 assay (A), PATHFASTTM hs‐cTnI assay (B) and ivisen IA‐1400 Correlation coefficients (R) with 95% CI for cTnI measurement for methods and sample type Access AccuTnI+3 vs. ivisen IA−1400 PATHFASTTM hs‐cTnI assay vs. ivisen IA−1400 Sample type (whole blood vs. plasma) Abbreviation: CI, confidence interval. 
The results of correlation analysis between whole blood and plasma samples using ivisen IA‐1400 (A) Passing‐Bablock regression, (B) Bland‐Altman plot The comparative evaluation of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 using 111 plasma samples showed excellent correlation, with Spearman's correlation coefficient (R) values of 0.992 and 0.985, respectively (Figure 2, Table 4). The results of the correlation analysis between whole blood and plasma samples using the ivisen IA‐1400 are shown in Figure 3A. The correlation between the whole blood and plasma samples was excellent (R = 0.988), and no significant bias (−2.6%, P > 0.05) was observed in the Bland‐Altman plot (Figure 3B). The comparative evaluation between Access AccuTnI+3 assay (A), PATHFASTTM hs‐cTnI assay (B) and ivisen IA‐1400 Correlation coefficients (R) with 95% CI for cTnI measurement for methods and sample type Access AccuTnI+3 vs. ivisen IA−1400 PATHFASTTM hs‐cTnI assay vs. ivisen IA−1400 Sample type (whole blood vs. plasma) Abbreviation: CI, confidence interval. The results of correlation analysis between whole blood and plasma samples using ivisen IA‐1400 (A) Passing‐Bablock regression, (B) Bland‐Altman plot Optimal cut‐off values for sensitivity and specificity The optimal cut‐off value derived from the ROC analysis of the ivisen IA‐1400 was established as 235 ng/L. In the two groups previously classified by diagnosis (74 AMI and 341 non‐AMI cases), the sensitivity and specificity of ivisen IA‐1400 at the value of 235 ng/L were 94.6% and 98.2%, respectively. The ROC curve showed an area under the curve of 0.998, confirming excellent discrimination. The optimal cut‐off value derived from the ROC analysis of the ivisen IA‐1400 was established as 235 ng/L. In the two groups previously classified by diagnosis (74 AMI and 341 non‐AMI cases), the sensitivity and specificity of ivisen IA‐1400 at the value of 235 ng/L were 94.6% and 98.2%, respectively. 
The ROC curve showed an area under the curve of 0.998, confirming excellent discrimination.
null
null
[ "INTRODUCTION", "ivisen IA‐1400", "Samples and study protocols", "Imprecision study", "Estimation of LoB, LoD, and LoQ for analytical sensitivity", "AMR and hook effect", "Cross‐reactivity and interference", "Method and sample type comparison", "Optimal cut‐off value for sensitivity and specificity", "Statistical analysis", "Patient and public involvement", "Imprecision", "Analytical sensitivity: LoB, LoD, and LoQ", "Linearity and hook effect", "Cross‐reactivity and interference", "Method and sample type comparison", "Optimal cut‐off values for sensitivity and specificity" ]
[ "The rapid and accurate diagnosis of cardiovascular diseases is essential for initiating appropriate and timely medical treatment, especially for life‐threatening emergencies, such as acute myocardial infarction (AMI). The World Health Organization incorporated the serial testing of cardiac biomarkers in the diagnostic criteria for AMI in 1986, along with a history of chest pain and changes on electrocardiograms\n1\n\n\nA cardiac biomarker is a biochemical compound used to detect cardiac diseases such as AMI and myocardial injury.\n2\n These compounds should be sensitive and specific to cardiac tissue, provide results with a short turnaround time (TAT), and be cost‐effective.\n3\n After the first report of the use of a biochemical marker for myocardial injury in 1954,\n4\n numerous new diagnostic marker proteins that aid the assessment of cardiac diseases and prediction of cardiovascular risk have been identified. The European Society of Cardiology/American College of Cardiology recommends the use of cardiac biomarkers for the diagnosis of myocardial injury, preferably cardiac troponin (cTn [I or T]). When the membranes of cardiac muscle cells are damaged, cTns are released into the circulation. Both cTnI and cTnT can be measured using commercially available analytical platforms.\n5\n From the development of early monoclonal antibody‐based diagnostic immunoassays to recent high‐sensitivity cTnI and cTnT (hs‐cTnI and hs‐cTnT) assays, the limit of detection (LoD) has been lowered, although the values are highly variable among various hs‐cTnI assays, ranging from 0.009 ng/L to 2.5 ng/L.\n6\n\n\nTo provide earlier treatment for AMI, there remains a need for a more rapid and efficient measurement of cTns. 
One strategy is the development and use of point‐of‐care (POC) testing platforms.\n7\n The cTn measurements obtained using central laboratory equipment are more sensitive than POC measurements.\n5\n However, POC devices can deliver rapid results, within 30 minutes near the bedside, whereas cTn assays performed in the central laboratory usually take longer because of sample transport, handling, and pretreatment. Fast diagnosis from rapid TAT results can reduce the length of hospital stay and overall hospital costs.\n8\n, \n9\n These devices employ more user‐friendly systems that can be operated by less‐skilled personnel. Therefore, POC devices can be beneficial in situations where skilled medical technologists are unavailable or 24‐hour laboratory operation is impossible. These advantages of POC assays have led to the development of cTn assays with high analytical quality and a shorter TAT.\nHerein, we present the analytical performance of a newly developed POC device, the ivisen IA‐1400 (i‐SENS, Seoul, South Korea), in terms of imprecision, linearity, cross‐reactivity with interferences, sensitivity, and specificity, with a correlation analysis for comparison to preexisting devices. As the ivisen IA‐1400 can process both whole blood and plasma samples, we aimed to evaluate the correlation between the two sample types as well.", "The ivisen IA‐1400 system is a compact, fully automated, bench‐top immunoassay analyzer with a modular configuration that offers random access ability to perform one to four tests simultaneously. The system rotates the cartridge at 4,000 rpm, allowing the direct measurement of whole blood samples without sample pretreatment required for plasma separation, and consequently providing rapid quantitative measurements of cardiac biomarkers in less than 17 min.
To utilize whole blood samples in preexisting POC immunoassay devices, cTnI values can be obtained after software correction by using each sample's externally measured hematocrit value.\n10\n, \n11\n In contrast, the ivisen IA‐1400 system itself has a built‐in internal centrifuge, as shown in Figure 1, and does not need to measure hematocrit separately. Therefore, the measurement process is simple and has a shorter TAT than other POC devices, even when whole blood samples are used.\nInternal centrifuge in the ivisen IA‐1400 system that rotates the cartridge at 4000 rpm in 2 min. This image is provided by i‐SENS, Inc. with permission", "We enrolled 872 leftover ethylenediaminetetraacetic acid (EDTA)‐whole blood samples from patients who were requested to undergo both complete blood count and cTnT tests at Korea University Guro Hospital from January 2019 to December 2019. For the cTnT measurement, the hs‐cTnT assay (Elecsys Troponin T hs STAT, Roche Diagnostics, Mannheim, Germany) was performed on the cobas 8000 e602 analyzer (Roche Diagnostics) in our laboratory. Whole blood samples were stored at −80°C and thawed before evaluation. Before the evaluation, samples with evidence of severe hemolysis, coagulation, contamination, or insufficient quantity were excluded.\nPlasma samples obtained from the EDTA‐whole blood samples were used to verify the analytical measurement range (AMR), interference with cross‐reactivity testing, and performance evaluation for the sensitivity, specificity, and correlation analysis. Whole blood and plasma samples were used to determine the limit of blank (LoB), LoD, and limit of quantitation (LoQ), with a comparative correlation analysis performed by sample type (whole blood vs. plasma). All analytical procedures were conducted in accordance with the Clinical and Laboratory Standards Institute (CLSI) guidelines. This study was approved by the institutional review board of Korea University Guro Hospital (IRB no.
2019GR0018).", "The imprecision study of the ivisen IA‐1400 was performed in accordance with the CLSI guidelines EP05‐A2.\n12\n The quality control (QC) materials, LiquichekTM Cardiac Markers Plus Control LT (Bio‐rad Laboratories, Headquarters, Hercules, CA, USA) were used to determine imprecision. Three different levels of QC materials were measured twice a day with a minimum interval of 2 hours between measurements, in duplicate, for 20 consecutive days. This process was performed on three different lots at each QC level to define the between‐lot precision. To achieve reproducibility, three different QC levels were tested in triplicate twice a day for 5 consecutive days in two separate laboratories. Results including mean, standard deviation (SD), and coefficients of variation (CVs) were calculated for each of the three different lots and QC levels.", "The LoB, LoD, and LoQ were calculated based on the CLSI guidelines EP17‐A2.\n13\n All measurements were made using two different cartridges, one for the plasma samples and another for the whole blood samples. The greater values of LoB, LoD, and LoQ are reported for the overall study as recommended in CLSI guidelines EP17‐A2. The LoB is the highest apparent analyte concentration when replicates of a blank sample containing no analyte are tested and defined by the relation LoB=meanblank +1.645(SDblank).\n14\n Using cTnI‐free plasma and whole blood sample as a blank, the LoB was calculated from values obtained in 60 repeated measurements of two lots. The highest LoB value was used to obtain the LoD. To calculate the LoD and LoQ, samples showing high cTnI concentrations were pooled and serially diluted to reach their target concentrations. A total of seven pooled samples with different concentrations were prepared. LoD, the lowest analyte concentration determined by using both measured LoB and test replicates of a low concentration of analyte, is defined by the formula LoD = LoB + 1.645(SD low concentration sample). 
For the LoD estimation, the process was repeated for 5 days using five pooled samples, each measured three times at the estimated LoD concentration. The LoQ is the lowest concentration at which predefined goals for bias and imprecision are met. To calculate the LoQ, two pooled samples with estimated concentrations that resulted in CVs of 10% and 20% were measured three times for 5 days.", "The AMR or linearity of the method was determined according to the CLSI guidelines EP06‐A.\n15\n Using SeraconTM cTnI‐free human plasma as a negative‐reference material and human cardiac troponin I‐T‐C complex (Hytest Ltd., Turku, Finland) as a positive‐reference material, 10 pools were prepared by serial dilution. Six of the high‐concentration pools exceeding the upper measurable range (>10,000 ng/L) were used to examine the high‐dose hook effect. All high‐dose hook samples were measured in triplicate on the ivisen IA‐1400 cTnI cartridge.", "A total of 24 interfering substances (acetaminophen, acetylsalicylic acid, allopurinol, ampicillin, ascorbic acid, atenolol, caffeine, captopril, digoxin, dopamine, erythromycin, furosemide, methyldopa, nifedipine, phenytoin, theophylline, verapamil, bilirubin‐conjugated, bilirubin‐free, hemoglobin, human anti‐mouse antibodies [HAMA], rheumatoid factor, and triglycerides) and eight cross‐reacting materials (actin protein, creatine kinase myocardial band, human skeletal muscle troponin I, C, and T, myoglobin, myosin, and tropomyosin) were tested for interference and cross‐reactivity in four pools with negative, low, medium, and high cTnI concentrations according to the CLSI EP07‐02 guidelines.\n16\n All pooled samples with the various materials were measured in triplicate and the average value was used to calculate bias.", "The results obtained from the ivisen IA‐1400 were compared with those of two preexisting hs‐cTnI assays using 111 samples: the Access AccuTnI+3 (AccuTnI+3) immunoassay using the UniCelTM DxI 800 platform (Beckman Coulter Inc.,
Fullerton, CA, USA), and the PATHFASTTM hs‐cTnI assay (Mitsubishi Medience, Tokyo, Japan), using plasma samples in accordance with CLSI guidelines EP09‐A3.\n17\n The epitope peptides of AccuTnI+3 used for the cTnI measurement were the same as those used in the ivisen IA‐1400. A sample type comparison study of the whole blood and plasma samples was conducted on 39 paired samples using the mean value of the duplicates. A Passing‐Bablok regression analysis was performed to define the relationship and agreement between devices in the comparison analysis. A Bland‐Altman analysis was also performed to evaluate significant differences between the sample types.", "The diagnostic performance of the ivisen IA‐1400 for predicting AMI was briefly evaluated. Measurements of cTnI were performed using residual samples from 415 patients with suspected AMI who visited Korea University Guro Hospital. The recruited samples were further classified as “non‐AMI sample” (341 samples) or “AMI sample” (74 samples), based on the clinicians’ diagnosis according to the universal guidelines for diagnosing AMI.\n18\n The optimal cut‐off value, which maximizes sensitivity and specificity, was obtained from the receiver‐operating characteristic (ROC) curve analysis.", "The statistical analysis was performed using Analyse‐it Software (Analyse‐it Software, Leeds, UK) and Microsoft Excel version 2016 (Microsoft Corporation, Redmond, WA, USA).", "No patients or members of the public were involved in the design of this study.", "Mean, SD, and CV values were obtained using three different QC materials (level 1, low; level 2, middle; and level 3, high) and three lots (Table 1). Within‐run precision combining the results of the three lots was as follows: low, 9.5%; middle, 10.2%; and high, 8.5%. Between‐lot precision was as follows: low, 6.0%; middle, 4.6%; and high, 4.8%.
Total reproducibility was 10.1%, 12.2%, and 9.9% at the low, middle, and high levels, respectively.\nImprecision study of the ivisen IA‐1400 using LiquichekTM Cardiac Markers Plus Control LT (Bio‐Rad) at levels 1, 2, and 3\nAbbreviation: CV, coefficient of variation.", "Both plasma and whole blood samples were used for the LoB, LoD, and LoQ analyses (Table 2). The LoB values for lots 1 and 2 were 3.1 and 2.4 ng/L for plasma samples, and 4.3 and 2.8 ng/L for whole blood samples, respectively. The LoD values for lots 1 and 2 were 8.4 and 7.1 ng/L for plasma samples, and 10.0 and 6.4 ng/L for whole blood samples, respectively. The LoQ values were calculated at CVs of 20% and 10%. In lot 1, the LoQ at CVs of 20% and 10% were 14.4 and 45.5 ng/L for plasma samples and 28.6 and 57.2 ng/L for whole blood samples, respectively. In lot 2, the LoQ at CVs of 20% and 10% were 19.5 and 39.1 ng/L for plasma samples and 15.6 and 31.1 ng/L for whole blood samples, respectively. The greater of the two values was reported as the LoB, LoD, and LoQ for the ivisen IA‐1400. No statistically significant differences in the LoQ were observed between the plasma and whole blood samples (P > 0.05). The LoB, LoD, and LoQ profiles using plasma samples for the AccuTnI+3, PATHFAST, and ivisen IA‐1400 are summarized in Table 3.\n19\n, \n20\n, \n21\n The data, including the 99th percentile upper reference limit (URL), for the AccuTnI+3 and PATHFAST were provided by their respective manufacturers.\nSummary of LoB, LoD, and LoQ in measurement of cTnI using ivisen IA‐1400 by sample type and lot.
Bold letters indicate the reported value for the overall study\nAbbreviations: CV, coefficient of variation; LoB, limit of blank; LoD, limit of detection; LoQ, limit of quantitation.\nProfiles of LoB, LoD, and LoQ at CVs of 20% and 10% for Access AccuTnI+3, PATHFASTTM hs‐cTnI assay, and ivisen IA‐1400 using plasma samples\nAbbreviation: NA, not available.", "According to the regression analysis of 10 pooled replicates, linearity was confirmed over the range of 10 ng/L (the LoD value) to 10,000 ng/L. No hook effect was observed, since there was no decrease in the measured value at high concentrations up to 36,700 ng/L.", "The calculated bias (%) at low and high concentrations of the eight cross‐reacting materials was less than 1%, suggesting no significant cross‐reactivity. None of the tested interfering substances showed bias greater than ±20%, except for the high level of rheumatoid factor (500 IU/mL). The rheumatoid factor level that met the bias criterion of less than ±20% was 125 IU/mL at high cTnI concentrations and 250 IU/mL at low concentrations. Immunoassays using mouse antibodies are prone to interference from heterophilic antibodies. For the evaluation of HAMA, multiple types of HAMA materials from different manufacturers were used, and the bias was less than ±20%. However, in patients with a history of exposure to mice or immunotherapy, the results should be interpreted cautiously.", "The comparative evaluation of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 using 111 plasma samples showed excellent correlation, with Spearman's correlation coefficient (R) values of 0.992 and 0.985, respectively (Figure 2, Table 4). The results of the correlation analysis between whole blood and plasma samples using the ivisen IA‐1400 are shown in Figure 3A.
The correlation between the whole blood and plasma samples was excellent (R = 0.988), and no significant bias (−2.6%, P > 0.05) was observed in the Bland‐Altman plot (Figure 3B).\nThe comparative evaluation between Access AccuTnI+3 assay (A), PATHFASTTM hs‐cTnI assay (B), and ivisen IA‐1400\nCorrelation coefficients (R) with 95% CI for cTnI measurement by method and sample type: Access AccuTnI+3 vs. ivisen IA‐1400; PATHFASTTM hs‐cTnI assay vs. ivisen IA‐1400; sample type (whole blood vs. plasma)\nAbbreviation: CI, confidence interval.\nThe results of correlation analysis between whole blood and plasma samples using ivisen IA‐1400 (A) Passing‐Bablok regression, (B) Bland‐Altman plot", "The optimal cut‐off value derived from the ROC analysis of the ivisen IA‐1400 was established as 235 ng/L. In the two groups previously classified by diagnosis (74 AMI and 341 non‐AMI cases), the sensitivity and specificity of the ivisen IA‐1400 at the value of 235 ng/L were 94.6% and 98.2%, respectively. The ROC curve showed an area under the curve of 0.998, confirming excellent discrimination." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "ivisen IA‐1400", "Samples and study protocols", "Imprecision study", "Estimation of LoB, LoD, and LoQ for analytical sensitivity", "AMR and hook effect", "Cross‐reactivity and interference", "Method and sample type comparison", "Optimal cut‐off value for sensitivity and specificity", "Statistical analysis", "Patient and public involvement", "RESULTS", "Imprecision", "Analytical sensitivity: LoB, LoD, and LoQ", "Linearity and hook effect", "Cross‐reactivity and interference", "Method and sample type comparison", "Optimal cut‐off values for sensitivity and specificity", "DISCUSSION", "CONFLICT OF INTERESTS" ]
[ "The rapid and accurate diagnosis of cardiovascular diseases is essential for initiating appropriate and timely medical treatment, especially for life‐threatening emergencies, such as acute myocardial infarction (AMI). The World Health Organization incorporated the serial testing of cardiac biomarkers in the diagnostic criteria for AMI in 1986, along with a history of chest pain and changes on electrocardiograms\n1\n\n\nA cardiac biomarker is a biochemical compound used to detect cardiac diseases such as AMI and myocardial injury.\n2\n These compounds should be sensitive and specific to cardiac tissue, provide results with a short turnaround time (TAT), and be cost‐effective.\n3\n After the first report of the use of a biochemical marker for myocardial injury in 1954,\n4\n numerous new diagnostic marker proteins that aid the assessment of cardiac diseases and prediction of cardiovascular risk have been identified. The European Society of Cardiology/American College of Cardiology recommends the use of cardiac biomarkers for the diagnosis of myocardial injury, preferably cardiac troponin (cTn [I or T]). When the membranes of cardiac muscle cells are damaged, cTns are released into the circulation. Both cTnI and cTnT can be measured using commercially available analytical platforms.\n5\n From the development of early monoclonal antibody‐based diagnostic immunoassays to recent high‐sensitivity cTnI and cTnT (hs‐cTnI and hs‐cTnT) assays, the limit of detection (LoD) has been lowered, although the values are highly variable among various hs‐cTnI assays, ranging from 0.009 ng/L to 2.5 ng/L.\n6\n\n\nTo provide earlier treatment for AMI, there remains a need for a more rapid and efficient measurement of cTns. 
One strategy is the development and use of point‐of‐care (POC) testing platforms.\n7\n The cTn measurements obtained using central laboratory equipment are more sensitive than POC measurements.\n5\n However, POC devices can deliver rapid results, within 30 minutes near the bedside, whereas cTn assays performed in the central laboratory usually take longer because of sample transport, handling, and pretreatment. Fast diagnosis from rapid TAT results can reduce the length of hospital stay and overall hospital costs.\n8\n, \n9\n These devices employ more user‐friendly systems that can be operated by less‐skilled personnel. Therefore, POC devices can be beneficial in situations where skilled medical technologists are unavailable or 24‐hour laboratory operation is impossible. These advantages of POC assays have led to the development of cTn assays with high analytical quality and a shorter TAT.\nHerein, we present the analytical performance of a newly developed POC device, the ivisen IA‐1400 (i‐SENS, Seoul, South Korea), in terms of imprecision, linearity, cross‐reactivity with interferences, sensitivity, and specificity, with a correlation analysis for comparison to preexisting devices. As the ivisen IA‐1400 can process both whole blood and plasma samples, we aimed to evaluate the correlation between the two sample types as well.", "ivisen IA‐1400 The ivisen IA‐1400 system is a compact, fully automated, bench‐top immunoassay analyzer with a modular configuration that offers random access ability to perform one to four tests simultaneously. The system rotates the cartridge at 4,000 rpm, allowing the direct measurement of whole blood samples without sample pretreatment required for plasma separation, and consequently providing rapid quantitative measurements of cardiac biomarkers in less than 17 min.
To utilize whole blood samples in preexisting POC immunoassay devices, cTnI values can be obtained after software correction by using each sample's externally measured hematocrit value.\n10\n, \n11\n In contrast, the ivisen IA‐1400 system itself has a built‐in internal centrifuge as shown in Figure 1 and does not need to measure hematocrit separately. Therefore, the measurement process is simple and has a shorter TAT than other POC devices, even when whole blood samples are used.\nInternal centrifuge in ivisen‐IA 1400 system that rotates the cartridge at 4000 rpm in 2 min. This image is provided from i‐SEN, Inc. with permission\nThe ivisen IA‐1400 system is a compact, fully automated, bench‐top immunoassay analyzer with a modular configuration that offers random access ability to perform one to four tests simultaneously. The system rotates the cartridge at 4,000 rpm, allowing the direct measurement of whole blood samples without sample pretreatment required for plasma separation, and consequently providing rapid quantitative measurements of cardiac biomarkers in less than 17 min. To utilize whole blood samples in preexisting POC immunoassay devices, cTnI values can be obtained after software correction by using each sample's externally measured hematocrit value.\n10\n, \n11\n In contrast, the ivisen IA‐1400 system itself has a built‐in internal centrifuge as shown in Figure 1 and does not need to measure hematocrit separately. Therefore, the measurement process is simple and has a shorter TAT than other POC devices, even when whole blood samples are used.\nInternal centrifuge in ivisen‐IA 1400 system that rotates the cartridge at 4000 rpm in 2 min. This image is provided from i‐SEN, Inc. 
with permission\nSamples and study protocols We enrolled 872 leftover ethylenediaminetetraacetic acid (EDTA)‐whole blood samples from patients who were requested to undergo both complete blood count and cTnT tests in Korea Guro University Hospital from January 2019 to December 2019. For the cTnT measurement, the hs‐cTnT assay (Elecsys Troponin T hs STAT, Roche Diagnostics, Mannheim, Germany) was performed on the cobas 8000 e602 analyzer (Roche Diagnostics) in our laboratory. Whole blood samples were stored at −80°C, and thawed before evaluation. Before the evaluation, samples with evidence of severe hemolysis, coagulation, contamination, or insufficient quantity were excluded.\nPlasma samples obtained from the EDTA‐whole blood samples were used to verify the analytical measurement range (AMR), interference with cross‐reactivity testing, and performance evaluation for the sensitivity, specificity, and correlation analysis. Whole blood and plasma samples were used to determine the limit of blank (LoB), LoD, and limit of quantitation (LoQ), with a comparative correlation analysis performed by sample type (whole blood vs. plasma). All analytical procedures were conducted in accordance with the Clinical and Laboratory Standards Institute (CLSI) guidelines. This study was approved by the institutional review board of Korea University Guro Hospital (IRB no. 2019GR0018).\nWe enrolled 872 leftover ethylenediaminetetraacetic acid (EDTA)‐whole blood samples from patients who were requested to undergo both complete blood count and cTnT tests in Korea Guro University Hospital from January 2019 to December 2019. For the cTnT measurement, the hs‐cTnT assay (Elecsys Troponin T hs STAT, Roche Diagnostics, Mannheim, Germany) was performed on the cobas 8000 e602 analyzer (Roche Diagnostics) in our laboratory. Whole blood samples were stored at −80°C, and thawed before evaluation. 
Before the evaluation, samples with evidence of severe hemolysis, coagulation, contamination, or insufficient quantity were excluded.\nPlasma samples obtained from the EDTA‐whole blood samples were used to verify the analytical measurement range (AMR), interference with cross‐reactivity testing, and performance evaluation for the sensitivity, specificity, and correlation analysis. Whole blood and plasma samples were used to determine the limit of blank (LoB), LoD, and limit of quantitation (LoQ), with a comparative correlation analysis performed by sample type (whole blood vs. plasma). All analytical procedures were conducted in accordance with the Clinical and Laboratory Standards Institute (CLSI) guidelines. This study was approved by the institutional review board of Korea University Guro Hospital (IRB no. 2019GR0018).\nImprecision study The imprecision study of the ivisen IA‐1400 was performed in accordance with the CLSI guidelines EP05‐A2.\n12\n The quality control (QC) materials, LiquichekTM Cardiac Markers Plus Control LT (Bio‐rad Laboratories, Headquarters, Hercules, CA, USA) were used to determine imprecision. Three different levels of QC materials were measured twice a day with a minimum interval of 2 hours between measurements, in duplicate, for 20 consecutive days. This process was performed on three different lots at each QC level to define the between‐lot precision. To achieve reproducibility, three different QC levels were tested in triplicate twice a day for 5 consecutive days in two separate laboratories. Results including mean, standard deviation (SD), and coefficients of variation (CVs) were calculated for each of the three different lots and QC levels.\nThe imprecision study of the ivisen IA‐1400 was performed in accordance with the CLSI guidelines EP05‐A2.\n12\n The quality control (QC) materials, LiquichekTM Cardiac Markers Plus Control LT (Bio‐rad Laboratories, Headquarters, Hercules, CA, USA) were used to determine imprecision. 
Three different levels of QC materials were measured twice a day with a minimum interval of 2 hours between measurements, in duplicate, for 20 consecutive days. This process was performed on three different lots at each QC level to define the between‐lot precision. To achieve reproducibility, three different QC levels were tested in triplicate twice a day for 5 consecutive days in two separate laboratories. Results including mean, standard deviation (SD), and coefficients of variation (CVs) were calculated for each of the three different lots and QC levels.\nEstimation of LoB, LoD, and LoQ for analytical sensitivity The LoB, LoD, and LoQ were calculated based on the CLSI guidelines EP17‐A2.\n13\n All measurements were made using two different cartridges, one for the plasma samples and another for the whole blood samples. The greater values of LoB, LoD, and LoQ are reported for the overall study as recommended in CLSI guidelines EP17‐A2. The LoB is the highest apparent analyte concentration when replicates of a blank sample containing no analyte are tested and defined by the relation LoB=meanblank +1.645(SDblank).\n14\n Using cTnI‐free plasma and whole blood sample as a blank, the LoB was calculated from values obtained in 60 repeated measurements of two lots. The highest LoB value was used to obtain the LoD. To calculate the LoD and LoQ, samples showing high cTnI concentrations were pooled and serially diluted to reach their target concentrations. A total of seven pooled samples with different concentrations were prepared. LoD, the lowest analyte concentration determined by using both measured LoB and test replicates of a low concentration of analyte, is defined by the formula LoD = LoB + 1.645(SD low concentration sample). For the LoD estimation, the process was repeated for 5 days in five pooled samples and measured three times each at the estimated LoD concentration. The LoQ is the lowest concentration at which predefined goals for bias and imprecision are met. 
To calculate the LoQ, two pooled samples with estimated concentrations that resulted in CVs of 10% and 20% were measured three times for 5 days.\nThe LoB, LoD, and LoQ were calculated based on the CLSI guidelines EP17‐A2.\n13\n All measurements were made using two different cartridges, one for the plasma samples and another for the whole blood samples. The greater values of LoB, LoD, and LoQ are reported for the overall study as recommended in CLSI guidelines EP17‐A2. The LoB is the highest apparent analyte concentration when replicates of a blank sample containing no analyte are tested and defined by the relation LoB=meanblank +1.645(SDblank).\n14\n Using cTnI‐free plasma and whole blood sample as a blank, the LoB was calculated from values obtained in 60 repeated measurements of two lots. The highest LoB value was used to obtain the LoD. To calculate the LoD and LoQ, samples showing high cTnI concentrations were pooled and serially diluted to reach their target concentrations. A total of seven pooled samples with different concentrations were prepared. LoD, the lowest analyte concentration determined by using both measured LoB and test replicates of a low concentration of analyte, is defined by the formula LoD = LoB + 1.645(SD low concentration sample). For the LoD estimation, the process was repeated for 5 days in five pooled samples and measured three times each at the estimated LoD concentration. The LoQ is the lowest concentration at which predefined goals for bias and imprecision are met. 
To calculate the LoQ, two pooled samples with estimated concentrations that resulted in CVs of 10% and 20% were measured three times for 5 days.\nAMR and hook effect The AMR or linearity of the method was determined according to the CLSI guidelines EP06‐A.\n15\n Using SeraconTM cTnI‐free human plasma as a negative‐reference material and human cardiac troponin I‐T‐C complex (Hytest Ltd., Turku, Finland) as a positive‐reference material, 10 pools were prepared by serial dilution. Six of the high‐concentration pools exceeding the upper measurable range (>10,000 ng/L) were used to examine the high‐dose hook effect. All high‐dose hook samples were measured in triplicate on the ivisen IA‐1400 cTnI cartridge.\nThe AMR or linearity of the method was determined according to the CLSI guidelines EP06‐A.\n15\n Using SeraconTM cTnI‐free human plasma as a negative‐reference material and human cardiac troponin I‐T‐C complex (Hytest Ltd., Turku, Finland) as a positive‐reference material, 10 pools were prepared by serial dilution. Six of the high‐concentration pools exceeding the upper measurable range (>10,000 ng/L) were used to examine the high‐dose hook effect. 
All high‐dose hook samples were measured in triplicate on the ivisen IA‐1400 cTnI cartridge.\nCross‐reactivity and interference A total of 24 interfering substances (acetaminophen, acetylsalicylic acid, allopurinol, ampicillin, ascorbic acid, atenolol, caffeine, captopril, digoxin, dopamine, erythromycin, furosemide, methyldopa, niphedipine, phenytoin, theophylline, verapamil, bilirubin‐conjugated, bilirubin‐free, hemoglobin, human anti‐mouse antibodies [HAMA], rheumatoid factor, and triglycerides) and eight cross‐reacting materials (actin protein, creatine kinase myocardial band, human skeletal muscle troponin I, C, and T, myoglobin, myosin, and tropomyosin) were tested for interference and cross‐reactivity for four pools with negative, low, medium, and high cTnI concentrations according to the CLSI EP07‐02 guidelines.\n16\n All pooled samples with various materials were measured in triplicate and the average value was obtained to calculate bias.\nA total of 24 interfering substances (acetaminophen, acetylsalicylic acid, allopurinol, ampicillin, ascorbic acid, atenolol, caffeine, captopril, digoxin, dopamine, erythromycin, furosemide, methyldopa, niphedipine, phenytoin, theophylline, verapamil, bilirubin‐conjugated, bilirubin‐free, hemoglobin, human anti‐mouse antibodies [HAMA], rheumatoid factor, and triglycerides) and eight cross‐reacting materials (actin protein, creatine kinase myocardial band, human skeletal muscle troponin I, C, and T, myoglobin, myosin, and tropomyosin) were tested for interference and cross‐reactivity for four pools with negative, low, medium, and high cTnI concentrations according to the CLSI EP07‐02 guidelines.\n16\n All pooled samples with various materials were measured in triplicate and the average value was obtained to calculate bias.\nMethod and sample type comparison The results obtained from the ivisen IA‐1400 were compared with those of two preexisting hs‐cTnI assays using 111 samples: Access AccuTnI+3 (AccuTnI+3) immunoassay 
using UniCelTM DxI 800 platform (Beckman Coulter Inc., Fullerton, CA, USA), and PATHFASTTM hs‐cTnI assay (Mitsubishi Medience, Tokyo, Japan) using plasma samples in accordance with CLSI guidelines EP09‐A3.\n17\n The epitope peptides of AccuTnI+3 used for the cTnI measurement were the same as those used in ivisen IA‐1400. A sample type comparison study of the whole blood and plasma samples was conducted of 39 paired samples using the mean value of the duplicates. A Passing‐Bablok regression analysis was performed to define the relationship and agreement between devices in the comparison analysis. A Bland‐Altman analysis was also performed of the comparative evaluation of the significant differences between the sample types.\nThe results obtained from the ivisen IA‐1400 were compared with those of two preexisting hs‐cTnI assays using 111 samples: Access AccuTnI+3 (AccuTnI+3) immunoassay using UniCelTM DxI 800 platform (Beckman Coulter Inc., Fullerton, CA, USA), and PATHFASTTM hs‐cTnI assay (Mitsubishi Medience, Tokyo, Japan) using plasma samples in accordance with CLSI guidelines EP09‐A3.\n17\n The epitope peptides of AccuTnI+3 used for the cTnI measurement were the same as those used in ivisen IA‐1400. A sample type comparison study of the whole blood and plasma samples was conducted of 39 paired samples using the mean value of the duplicates. A Passing‐Bablok regression analysis was performed to define the relationship and agreement between devices in the comparison analysis. A Bland‐Altman analysis was also performed of the comparative evaluation of the significant differences between the sample types.\nOptimal cut‐off value for sensitivity and specificity The diagnostic performance of the ivisen IA‐1400 for predicting AMI was briefly evaluated. Measurements of cTnI were performed using residual samples from 415 patients with suspected AMI who visited Korea University Guro Hospital. 
The recruited samples were further classified as “non‐AMI sample” (341 samples) or “AMI sample” (74 samples), based on the clinicians’ diagnoses according to the universal guidelines for diagnosing AMI [18]. The optimal cut‐off value, which maximizes sensitivity and specificity, was obtained from the best cut‐off in the receiver‐operating characteristic (ROC) curve analysis.

Statistical analysis

Statistical analysis was performed using Analyse‐it Software (Analyse‐it Software, Leeds, UK) and Microsoft Excel version 2016 (Microsoft Corporation, Redmond, WA, USA).

Patient and public involvement

No patients or members of the public were involved in the design of this study.

The ivisen IA‐1400 system is a compact, fully automated, bench‐top immunoassay analyzer with a modular configuration that offers random access ability to perform one to four tests simultaneously.
The system rotates the cartridge at 4,000 rpm, allowing the direct measurement of whole blood samples without the sample pretreatment required for plasma separation, and consequently provides rapid quantitative measurements of cardiac biomarkers in less than 17 min. To use whole blood samples in preexisting POC immunoassay devices, cTnI values can be obtained after software correction using each sample's externally measured hematocrit value [10, 11]. In contrast, the ivisen IA‐1400 system has a built‐in internal centrifuge, as shown in Figure 1, and does not need hematocrit to be measured separately. Therefore, the measurement process is simple and has a shorter TAT than other POC devices, even when whole blood samples are used.

Figure 1. Internal centrifuge in the ivisen IA‐1400 system, which rotates the cartridge at 4,000 rpm for 2 min. Image provided by i‐SENS, Inc. with permission.

We enrolled 872 leftover ethylenediaminetetraacetic acid (EDTA)‐whole blood samples from patients requested to undergo both complete blood count and cTnT tests at Korea University Guro Hospital from January 2019 to December 2019. For the cTnT measurement, the hs‐cTnT assay (Elecsys Troponin T hs STAT, Roche Diagnostics, Mannheim, Germany) was performed on the cobas 8000 e602 analyzer (Roche Diagnostics) in our laboratory. Whole blood samples were stored at −80°C and thawed before evaluation. Before the evaluation, samples with evidence of severe hemolysis, coagulation, contamination, or insufficient quantity were excluded.

Plasma samples obtained from the EDTA‐whole blood samples were used to verify the analytical measurement range (AMR), interference with cross‐reactivity testing, and performance for sensitivity, specificity, and correlation analysis. Whole blood and plasma samples were used to determine the limit of blank (LoB), LoD, and limit of quantitation (LoQ), with a comparative correlation analysis performed by sample type (whole blood vs.
plasma). All analytical procedures were conducted in accordance with Clinical and Laboratory Standards Institute (CLSI) guidelines. This study was approved by the institutional review board of Korea University Guro Hospital (IRB no. 2019GR0018).

The imprecision study of the ivisen IA‐1400 was performed in accordance with CLSI guideline EP05‐A2 [12]. The quality control (QC) material, LiquichekTM Cardiac Markers Plus Control LT (Bio‐Rad Laboratories, Hercules, CA, USA), was used to determine imprecision. Three levels of QC material were measured twice a day, in duplicate, with a minimum interval of 2 hours between measurements, for 20 consecutive days. This process was performed on three different lots at each QC level to define between‐lot precision. To assess reproducibility, the three QC levels were tested in triplicate twice a day for 5 consecutive days in two separate laboratories. Results, including mean, standard deviation (SD), and coefficients of variation (CVs), were calculated for each of the three lots and QC levels.

The LoB, LoD, and LoQ were calculated based on CLSI guideline EP17‐A2 [13]. All measurements were made using two different cartridges, one for plasma samples and another for whole blood samples. The greater values of LoB, LoD, and LoQ are reported for the overall study, as recommended in CLSI guideline EP17‐A2. The LoB is the highest apparent analyte concentration expected when replicates of a blank sample containing no analyte are tested, defined by the relation LoB = mean_blank + 1.645 × SD_blank [14]. Using cTnI‐free plasma and whole blood samples as blanks, the LoB was calculated from values obtained in 60 repeated measurements of two lots. The higher LoB value was used to obtain the LoD. To calculate the LoD and LoQ, samples with high cTnI concentrations were pooled and serially diluted to reach their target concentrations.
A total of seven pooled samples with different concentrations were prepared. The LoD, the lowest analyte concentration determined using both the measured LoB and test replicates of a low‐concentration analyte sample, is defined by the formula LoD = LoB + 1.645 × SD_low‐concentration‐sample. For the LoD estimation, the process was repeated for 5 days in five pooled samples, each measured three times at the estimated LoD concentration. The LoQ is the lowest concentration at which predefined goals for bias and imprecision are met. To calculate the LoQ, two pooled samples with estimated concentrations yielding CVs of 10% and 20% were measured three times for 5 days.

The AMR, or linearity, of the method was determined according to CLSI guideline EP06‐A [15]. Using SeraconTM cTnI‐free human plasma as the negative‐reference material and human cardiac troponin I‐T‐C complex (Hytest Ltd., Turku, Finland) as the positive‐reference material, 10 pools were prepared by serial dilution. Six high‐concentration pools exceeding the upper measurable range (>10,000 ng/L) were used to examine the high‐dose hook effect.
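The parametric LoB and LoD formulas above (from CLSI EP17) reduce to simple arithmetic on replicate measurements. The following sketch illustrates them with hypothetical replicate values, not the study data:

```python
from statistics import mean, stdev

Z = 1.645  # one-sided 95th percentile multiplier used by CLSI EP17

def lob(blank_results):
    # LoB = mean(blank) + 1.645 * SD(blank)
    return mean(blank_results) + Z * stdev(blank_results)

def lod(lob_value, low_sample_results):
    # LoD = LoB + 1.645 * SD(low-concentration sample)
    return lob_value + Z * stdev(low_sample_results)

# Hypothetical replicates (ng/L): three blank and three low-level measurements
blanks = [1.0, 2.0, 3.0]   # mean 2.0, SD 1.0
lows = [9.0, 10.0, 11.0]   # SD 1.0
lob_v = lob(blanks)        # 2.0 + 1.645 = 3.645 ng/L
lod_v = lod(lob_v, lows)   # 3.645 + 1.645 = 5.29 ng/L
```

In the study itself, each estimate was built from 60 blank replicates per lot and repeated low-level measurements over 5 days; the formulas are unchanged, only the replicate counts differ.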
Imprecision

Mean, SD, and CV values were obtained using three QC materials (level 1, low; level 2, middle; and level 3, high) and three lots (Table 1). Within‐run precision, combining the results of the three lots, was as follows: low, 9.5%; middle, 10.2%; and high, 8.5%. Between‐lot precision was as follows: low, 6.0%; middle, 4.6%; and high, 4.8%. Total reproducibility was 10.1%, 12.2%, and 9.9% at the low, middle, and high levels, respectively.

Table 1. Imprecision study of the ivisen IA‐1400 using LiquichekTM Cardiac Markers Plus Control LT (Bio‐Rad), levels 1, 2, and 3. Abbreviation: CV, coefficient of variation.
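Each precision figure above is a coefficient of variation; a minimal sketch of the computation, using hypothetical QC replicates (the full EP05 design additionally partitions variance into within-run, between-run, and between-lot components, which this sketch does not do):

```python
from statistics import mean, stdev

def cv_percent(results):
    # CV% = 100 * SD / mean
    return 100.0 * stdev(results) / mean(results)

# Hypothetical QC replicates (ng/L) at one level: mean 100, SD 5 -> CV 5%
qc = [95.0, 100.0, 105.0]
```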
Analytical sensitivity: LoB, LoD, and LoQ

Both plasma and whole blood samples were used for the LoB, LoD, and LoQ analyses (Table 2). The LoB values for lots 1 and 2 were 3.1 and 2.4 ng/L for plasma samples, and 4.3 and 2.8 ng/L for whole blood samples, respectively. The LoD values for lots 1 and 2 were 8.4 and 7.1 ng/L for plasma samples, and 10.0 and 6.4 ng/L for whole blood samples, respectively. The LoQ values were calculated at CVs of 20% and 10%. In lot 1, the LoQ at CVs of 20% and 10% were 14.4 and 45.5 ng/L for plasma samples and 28.6 and 57.2 ng/L for whole blood samples, respectively. In lot 2, the LoQ at CVs of 20% and 10% were 19.5 and 39.1 ng/L for plasma samples and 15.6 and 31.1 ng/L for whole blood samples, respectively. The greater of the two values was reported as the LoB, LoD, and LoQ for the ivisen IA‐1400. No statistically significant differences in LoQ were observed between the plasma and whole blood samples (P > 0.05). The LoB, LoD, and LoQ profiles using plasma samples for the AccuTnI+3, PATHFAST, and ivisen IA‐1400 are summarized in Table 3 [19, 20, 21]. The data, including the 99th percentile upper reference limit (URL), for the AccuTnI+3 and PATHFAST were provided by their respective manufacturers.

Table 2. Summary of LoB, LoD, and LoQ in the measurement of cTnI using the ivisen IA‐1400, by sample type and lot.
Bold type indicates the value reported for the overall study. Abbreviations: CV, coefficient of variation; LoB, limit of blank; LoD, limit of detection; LoQ, limit of quantitation.

Table 3. Profiles of LoB, LoD, and LoQ at CVs of 20% and 10% for the Access AccuTnI+3, PATHFASTTM hs‐cTnI assay, and ivisen IA‐1400 using plasma samples. Abbreviation: NA, not available.
Linearity and hook effect

According to the regression analysis of 10 pooled replicates, linearity was confirmed over the range of 10 ng/L (the LoD value) to 10,000 ng/L. No hook effect was observed, as there was no decrease in the measured value at high concentrations up to 36,700 ng/L.

Cross‐reactivity and interference

The calculated bias (%) at low and high concentrations of the eight cross‐reacting materials was less than 1%, suggesting no significant cross‐reactivity. None of the tested interfering substances showed a bias greater than ±20%, except for a high level of rheumatoid factor (500 IU/mL). The rheumatoid factor level that met the bias criterion of less than ±20% was 125 IU/mL at high cTnI concentrations and 250 IU/mL at low concentrations. Immunoassays using mouse antibodies are prone to interference from heterophilic antibodies. For the evaluation of HAMA, multiple types of HAMA materials from different manufacturers were used, and the bias was less than ±20%. However, in patients with a history of exposure to mice or immunotherapy, results should be interpreted cautiously.
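The interference bias reported above compares the mean of triplicate measurements of a spiked pool against its unspiked value. A minimal sketch with hypothetical numbers (the ±20% acceptance limit is the criterion used in the study):

```python
from statistics import mean

def percent_bias(spiked_replicates, baseline):
    # Bias (%) of the mean of triplicate spiked-pool results vs. the unspiked baseline
    return 100.0 * (mean(spiked_replicates) - baseline) / baseline

# Hypothetical triplicate of a pool spiked with an interferent (ng/L), baseline 100 ng/L
spiked = [108.0, 110.0, 112.0]  # mean 110 -> +10% bias, within the ±20% limit
```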
Method and sample type comparison

The comparative evaluation of the AccuTnI+3 and PATHFAST against the ivisen IA‐1400 using 111 plasma samples showed excellent correlation, with Spearman's correlation coefficients (R) of 0.992 and 0.985, respectively (Figure 2, Table 4). The results of the correlation analysis between whole blood and plasma samples using the ivisen IA‐1400 are shown in Figure 3A. The correlation between the whole blood and plasma samples was excellent (R = 0.988), and no significant bias (−2.6%, P > 0.05) was observed in the Bland‐Altman plot (Figure 3B).

Figure 2. Comparative evaluation between the Access AccuTnI+3 assay (A), the PATHFASTTM hs‐cTnI assay (B), and the ivisen IA‐1400.

Table 4. Correlation coefficients (R) with 95% CI for cTnI measurement, by method (Access AccuTnI+3 vs. ivisen IA‐1400; PATHFASTTM hs‐cTnI assay vs. ivisen IA‐1400) and by sample type (whole blood vs. plasma). Abbreviation: CI, confidence interval.

Figure 3. Correlation analysis between whole blood and plasma samples using the ivisen IA‐1400: (A) Passing‐Bablok regression, (B) Bland‐Altman plot.
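The two comparison statistics used here can be sketched in a few lines. This is a simplified illustration with hypothetical paired results, not the study data: the full Passing‐Bablok procedure additionally applies an offset correction for tied and negative pairwise slopes, which is omitted, and Bland‐Altman limits of agreement are taken as bias ± 1.96 SD of the differences.

```python
from itertools import combinations
from statistics import median, mean, stdev

def passing_bablok(x, y):
    # Simplified Passing-Bablok: slope = median of all pairwise slopes,
    # intercept = median of (y - slope*x).
    pairs = list(zip(x, y))
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(pairs, 2)
              if x2 != x1]
    slope = median(slopes)
    intercept = median([yi - slope * xi for xi, yi in pairs])
    return slope, intercept

def bland_altman(x, y):
    # Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD).
    diffs = [yi - xi for xi, yi in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired cTnI results (ng/L) from a reference and a new method
ref = [5.0, 12.0, 40.0, 150.0, 600.0, 2400.0]
new = [2 * v + 1 for v in ref]  # exactly linear, so slope = 2, intercept = 1

slope, intercept = passing_bablok(ref, new)
bias, loa_low, loa_high = bland_altman(ref, new)
```

On real paired data the Passing‐Bablok slope and intercept quantify proportional and constant bias, and the Bland‐Altman bias and limits answer the question reported in Figure 3B.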
Optimal cut‐off values for sensitivity and specificity

The optimal cut‐off value derived from the ROC analysis of the ivisen IA‐1400 was established as 235 ng/L. In the two groups previously classified by diagnosis (74 AMI and 341 non‐AMI cases), the sensitivity and specificity of the ivisen IA‐1400 at 235 ng/L were 94.6% and 98.2%, respectively. The ROC curve showed an area under the curve of 0.998, confirming excellent discrimination.
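A cut‐off that "maximizes sensitivity and specificity" is commonly chosen by maximizing Youden's J (sensitivity + specificity − 1) over the candidate thresholds on the ROC curve; whether the study used exactly this index is not stated, so the sketch below is one standard choice, shown on hypothetical, perfectly separated data:

```python
def youden_cutoff(values, labels):
    # Choose the threshold maximizing Youden's J = sensitivity + specificity - 1.
    # labels: 1 = AMI, 0 = non-AMI; a result >= threshold is called positive.
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        tp = sum(1 for v, l in zip(values, labels) if l == 1 and v >= t)
        tn = sum(1 for v, l in zip(values, labels) if l == 0 and v < t)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical cTnI results (ng/L): five non-AMI and three AMI patients
vals = [5, 8, 12, 20, 30, 250, 400, 900]
labs = [0, 0, 0, 0, 0, 1, 1, 1]
cut, j = youden_cutoff(vals, labs)  # perfectly separated data: J = 1 at 250 ng/L
```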
Discussion

POC platforms have advantages in the promptness of delivering information for patient care and decision‐making, especially in medical emergencies. However, the significance of these advantages must be weighed carefully against the lower sensitivity of POC tests compared with central laboratory assays, especially in the early stages after symptom onset.
According to a study conducted in an emergency department (ED), sensitivity was significantly lower in patients sampled <3 hours after symptom onset than in those sampled >3 hours after onset [5]. Another study reported discrepancies between POC and laboratory results, with six false negatives and three false positives among 189 samples tested at the point of care [22]. As the cut‐off value was adjusted upward, the number of discrepancies decreased (to only two false positives) while the correlation increased. The authors concluded that POC testing should be considered to provide faster cTn results, and that its continued use could help increase ED throughput and decrease wait times and lengths of stay. According to a study of the effect of POC testing on actual cost, POC testing decreased the referral rate in patients without acute coronary syndrome and achieved tangible cost reductions [23]. Therefore, to fully leverage POC testing, an appropriate performance evaluation process is essential to ensure an acceptable quality of the information it provides.

This study evaluated the analytical performance of the ivisen IA‐1400 for cTnI measurement. The built‐in centrifuge is the most characteristic feature of this new POC device, as it enables the convenient use of whole blood samples without a pretreatment process. However, the difference in sample type may affect cTnI levels. In the sample type correlation study, the plasma and whole blood samples from the same patients demonstrated excellent correlation without significant difference, as shown in the Bland‐Altman plot.
Therefore, the use of the ivisen IA‐1400 can reduce TAT by eliminating pretreatment time and increasing flexibility in the choice of sample type.

Of the two types of hs‐cTn assays (hs‐cTnI and hs‐cTnT), hs‐cTnT assays are supplied by only one manufacturer (Roche Diagnostics) because of patent restrictions on the antibodies selected for the assay [18]. The choice between measuring hs‐cTnI and hs‐cTnT appears to be based on each laboratory's situation (e.g., who its primary supplier is) rather than on clinical considerations [21]. In our laboratory at Korea University Guro Hospital, the hs‐cTnT assay has been used since 2011. The ivisen IA‐1400 system showed acceptable sensitivity and specificity in the two populations (AMI vs. non‐AMI) diagnosed using the hs‐cTnT assay. In the precision study, acceptable repeatability, reproducibility, and between‐lot precision (within ±10%) were achieved across three lots. The LoD values for the plasma and whole blood samples were below 10 ng/L, with no significant interference or cross‐reactivity. The results of the whole blood and plasma samples were significantly correlated, confirming the stable performance of the internal centrifuge, the distinctive feature of the ivisen IA‐1400. The correlation study against the AccuTnI+3 and PATHFAST revealed very strong correlations (R = 0.992 and 0.985, respectively). However, the LoQ values at CVs of 20% and 10% were higher than those of the other two assays using plasma samples, especially compared with the PATHFAST (at a CV of 20%: ivisen IA‐1400, 19.5 ng/L vs. PATHFAST, 4.0 ng/L; at a CV of 10%: 45.5 ng/L vs. 30.4 ng/L, respectively).
The PATHFAST fully met the analytical criteria for hs‐cTn assays, surpassing a CV of <10% at the 99th percentile URL, with a CV of 5.1% at 29 ng/L [24]. As the ivisen IA‐1400 was developed as a contemporary device, it is not appropriate to hold it to the same standard as the PATHFAST hs‐cTnI assay.

Some limitations of this study should be mentioned. First, for a qualified determination of the 99th percentile URL of a cTn assay, an adequate reference population of at least 300 healthy individuals of appropriate age, ethnicity, and sex is required [25]. As this study used leftover whole blood samples from patients with various diseases other than AMI and other underlying conditions requiring cTn testing, the 99th percentile URL could not be obtained. This value is crucial for evaluating the overall performance of cTn testing devices and for determining whether a device can be classified as a contemporary or hs‐cTn assay. Although the sensitivity and specificity calculated from the optimal cut‐off value were acceptable, the information gained from the clinical performance data is of limited value. A follow‐up study in a population of healthy individuals will be necessary to determine whether the ivisen IA‐1400 achieves the ideal recommendation of a CV <10% at the 99th percentile URL, or a CV <20% as acceptable for POC testing in clinical use. Second, although the comparison with the other two devices showed satisfactory results, there were relatively few samples in the lower concentration range, which is important in the clinical decision‐making process. As mentioned above, further studies using more data at low cTnI concentrations are needed to assure analytical performance in the lower range. Another point to note is that mildly discrepant CV values were observed between the precision and LoQ analyses.
Since QC material, not patient samples, was used in the precision analysis, the better CV values can be attributed to the difference in matrix.

In conclusion, the ivisen IA‐1400, a newly developed contemporary POC device for cTnI, showed acceptable and promising performance in cTnI measurement using whole blood and plasma samples with respect to speed, imprecision, analytical sensitivity and specificity, and correlation. If the CV at the 99th percentile URL proves acceptable for clinical use as POC testing, the characteristic internal centrifuge system can be conveniently used in various situations where whole blood samples are used alongside plasma samples.

Conflicts of interest: None declared.
Keywords: cardiac troponin; cardiac troponin I; i‐SENS; ivisen IA‐1400; point‐of‐care.
INTRODUCTION: The rapid and accurate diagnosis of cardiovascular diseases is essential for initiating appropriate and timely medical treatment, especially for life‐threatening emergencies such as acute myocardial infarction (AMI). The World Health Organization incorporated the serial testing of cardiac biomarkers into the diagnostic criteria for AMI in 1986, along with a history of chest pain and electrocardiogram changes [1]. A cardiac biomarker is a biochemical compound used to detect cardiac diseases such as AMI and myocardial injury [2]. These compounds should be sensitive and specific to cardiac tissue, provide results with a short turnaround time (TAT), and be cost‐effective [3]. Since the first report of a biochemical marker of myocardial injury in 1954 [4], numerous new diagnostic marker proteins that aid the assessment of cardiac diseases and the prediction of cardiovascular risk have been identified. The European Society of Cardiology/American College of Cardiology recommends the use of cardiac biomarkers, preferably cardiac troponin (cTn [I or T]), for the diagnosis of myocardial injury. When the membranes of cardiac muscle cells are damaged, cTns are released into the circulation. Both cTnI and cTnT can be measured using commercially available analytical platforms [5]. From the development of early monoclonal antibody‐based diagnostic immunoassays to recent high‐sensitivity cTnI and cTnT (hs‐cTnI and hs‐cTnT) assays, the limit of detection (LoD) has been progressively lowered, although values remain highly variable among hs‐cTnI assays, ranging from 0.009 ng/L to 2.5 ng/L [6]. To provide earlier treatment for AMI, a more rapid and efficient measurement of cTns is still needed. One strategy is the development and use of point‐of‐care (POC) testing platforms [7]. The cTn measurements obtained using central laboratory equipment are more sensitive than POC measurements [5].
5 However, POC devices can deliver rapid results within 30 minutes near the bedside, whereas cTn assays performed in the central laboratory usually take longer because of sample transport, handling, and pretreatment. A fast diagnosis enabled by a rapid TAT can reduce the length of hospital stay and overall hospital costs. 8 , 9 POC devices also employ more user‐friendly systems that can be operated by less‐skilled personnel. Therefore, POC devices can be beneficial where skilled medical technologists are unavailable or 24‐hour laboratory operation is impossible. These advantages have driven the development of POC cTn assays with high analytical quality and a shorter TAT. Herein, we present the analytical performance of a newly developed POC device, the ivisen IA‐1400 (i‐SENS, Seoul, South Korea), in terms of imprecision, linearity, cross‐reactivity and interference, sensitivity, and specificity, with a correlation analysis against preexisting devices. As the ivisen IA‐1400 can process both whole blood and plasma samples, we also aimed to evaluate the correlation between the two sample types. MATERIALS AND METHODS: ivisen IA‐1400 The ivisen IA‐1400 system is a compact, fully automated, bench‐top immunoassay analyzer with a modular configuration that offers random access and performs one to four tests simultaneously. The system rotates the cartridge at 4,000 rpm, allowing direct measurement of whole blood samples without the pretreatment required for plasma separation, and consequently provides rapid quantitative measurements of cardiac biomarkers in less than 17 min. To utilize whole blood samples in preexisting POC immunoassay devices, cTnI values can be obtained after software correction using each sample's externally measured hematocrit value. 10 , 11 In contrast, the ivisen IA‐1400 has a built‐in internal centrifuge, as shown in Figure 1, and does not require a separate hematocrit measurement. 
Therefore, the measurement process is simple and has a shorter TAT than other POC devices, even when whole blood samples are used. Internal centrifuge in the ivisen IA‐1400 system that rotates the cartridge at 4,000 rpm in 2 min. This image is provided by i‐SENS, Inc. with permission. Samples and study protocols We enrolled 872 leftover ethylenediaminetetraacetic acid (EDTA)‐whole blood samples from patients for whom both a complete blood count and a cTnT test were requested at Korea University Guro Hospital from January 2019 to December 2019. For the cTnT measurement, the hs‐cTnT assay (Elecsys Troponin T hs STAT, Roche Diagnostics, Mannheim, Germany) was performed on the cobas 8000 e602 analyzer (Roche Diagnostics) in our laboratory. Whole blood samples were stored at −80°C and thawed before evaluation. 
Before the evaluation, samples with evidence of severe hemolysis, coagulation, contamination, or insufficient quantity were excluded. Plasma samples obtained from the EDTA‐whole blood samples were used to verify the analytical measurement range (AMR), to test interference and cross‐reactivity, and to evaluate sensitivity, specificity, and correlation. Whole blood and plasma samples were used to determine the limit of blank (LoB), LoD, and limit of quantitation (LoQ), with a comparative correlation analysis performed by sample type (whole blood vs. plasma). All analytical procedures were conducted in accordance with the Clinical and Laboratory Standards Institute (CLSI) guidelines. This study was approved by the institutional review board of Korea University Guro Hospital (IRB no. 2019GR0018). 
Imprecision study The imprecision study of the ivisen IA‐1400 was performed in accordance with the CLSI guidelines EP05‐A2. 12 The quality control (QC) materials, LiquichekTM Cardiac Markers Plus Control LT (Bio‐Rad Laboratories, Hercules, CA, USA), were used to determine imprecision. Three different levels of QC materials were measured twice a day, with a minimum interval of 2 hours between measurements, in duplicate, for 20 consecutive days. This process was performed on three different lots at each QC level to define the between‐lot precision. To assess reproducibility, the three QC levels were tested in triplicate twice a day for 5 consecutive days in two separate laboratories. Results, including the mean, standard deviation (SD), and coefficient of variation (CV), were calculated for each of the three lots and QC levels. 
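The summary statistics described above reduce to a short calculation. The sketch below uses illustrative replicate values, not the study's data; note that the full EP05‐A2 analysis additionally partitions variance into within‐run and between‐day components via ANOVA, which is omitted here.

```python
from statistics import mean, stdev

def imprecision(measurements):
    """Summarize replicate QC measurements: mean, SD, and CV (%).

    `measurements` is a flat list of cTnI results (ng/L) for one QC level.
    """
    m = mean(measurements)
    sd = stdev(measurements)  # sample SD (n - 1 denominator)
    cv = 100.0 * sd / m
    return m, sd, cv

# Hypothetical low-level QC replicates (ng/L)
low_qc = [98.0, 105.0, 101.0, 96.0, 110.0, 99.0, 103.0, 100.0]
m, sd, cv = imprecision(low_qc)
```

The CV is what the acceptance criteria and the LoQ definition below are stated in terms of.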
Estimation of LoB, LoD, and LoQ for analytical sensitivity The LoB, LoD, and LoQ were calculated based on the CLSI guidelines EP17‐A2. 13 All measurements were made using two different cartridges, one for the plasma samples and another for the whole blood samples. The greater values of LoB, LoD, and LoQ are reported for the overall study, as recommended in the CLSI guidelines EP17‐A2. The LoB is the highest apparent analyte concentration expected when replicates of a blank sample containing no analyte are tested, defined as LoB = mean(blank) + 1.645 × SD(blank). 14 Using cTnI‐free plasma and whole blood samples as blanks, the LoB was calculated from 60 repeated measurements of two lots. The highest LoB value was used to obtain the LoD. To calculate the LoD and LoQ, samples with high cTnI concentrations were pooled and serially diluted to reach their target concentrations. A total of seven pooled samples with different concentrations were prepared. The LoD, the lowest analyte concentration determined from both the measured LoB and replicate tests of a low‐concentration sample, is defined as LoD = LoB + 1.645 × SD(low‐concentration sample). For the LoD estimation, five pooled samples at the estimated LoD concentration were each measured three times, and the process was repeated for 5 days. The LoQ is the lowest concentration at which predefined goals for bias and imprecision are met. To calculate the LoQ, two pooled samples with estimated concentrations expected to yield CVs of 10% and 20% were measured three times a day for 5 days. 
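The LoB and LoD formulas above can be applied directly. A minimal sketch with hypothetical blank and low‐concentration replicates (not the study's measurements):

```python
from statistics import mean, stdev

def lob(blank_values):
    # LoB = mean(blank) + 1.645 * SD(blank)
    return mean(blank_values) + 1.645 * stdev(blank_values)

def lod(lob_value, low_sample_values):
    # LoD = LoB + 1.645 * SD(low-concentration sample)
    return lob_value + 1.645 * stdev(low_sample_values)

# Hypothetical replicate results (ng/L)
blanks = [0.0, 1.2, 0.8, 1.5, 0.5, 1.0]
lows = [7.0, 8.5, 6.8, 7.9, 7.3, 8.1]
b = lob(blanks)
d = lod(b, lows)
```

In practice, as described above, each quantity is estimated per lot and per sample type, and the greater value is reported.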
AMR and hook effect The AMR or linearity of the method was determined according to the CLSI guidelines EP06‐A. 
15 Using SeraconTM cTnI‐free human plasma as a negative‐reference material and human cardiac troponin I‐T‐C complex (Hytest Ltd., Turku, Finland) as a positive‐reference material, 10 pools were prepared by serial dilution. Six high‐concentration pools exceeding the upper measurable range (>10,000 ng/L) were used to examine the high‐dose hook effect. All high‐dose hook samples were measured in triplicate on the ivisen IA‐1400 cTnI cartridge. Cross‐reactivity and interference A total of 24 interfering substances (acetaminophen, acetylsalicylic acid, allopurinol, ampicillin, ascorbic acid, atenolol, caffeine, captopril, digoxin, dopamine, erythromycin, furosemide, methyldopa, nifedipine, phenytoin, theophylline, verapamil, conjugated bilirubin, free bilirubin, hemoglobin, human anti‐mouse antibodies [HAMA], rheumatoid factor, and triglycerides) and eight cross‐reacting materials (actin, creatine kinase myocardial band, human skeletal muscle troponin I, C, and T, myoglobin, myosin, and tropomyosin) were tested for interference and cross‐reactivity in four pools with negative, low, medium, and high cTnI concentrations, according to the CLSI EP07‐02 guidelines. 16 All pooled samples with the various materials were measured in triplicate, and the average value was used to calculate bias. 
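The bias calculation behind the interference and cross‐reactivity testing can be sketched as below. The triplicate values are hypothetical, and the ±20% acceptance criterion is the one applied in the Results:

```python
from statistics import mean

def percent_bias(test_replicates, control_replicates):
    """Bias (%) of a spiked (interferent-added) pool vs. its matched control pool."""
    t, c = mean(test_replicates), mean(control_replicates)
    return 100.0 * (t - c) / c

# Hypothetical triplicates (ng/L): low-cTnI pool with and without an interferent added
bias = percent_bias([52.0, 49.0, 50.5], [50.0, 51.0, 49.6])
acceptable = abs(bias) <= 20.0  # ±20% criterion
```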
Method and sample type comparison The results obtained from the ivisen IA‐1400 were compared with those of two preexisting hs‐cTnI assays using 111 plasma samples: the Access AccuTnI+3 (AccuTnI+3) immunoassay on the UniCelTM DxI 800 platform (Beckman Coulter Inc., Fullerton, CA, USA) and the PATHFASTTM hs‐cTnI assay (Mitsubishi Medience, Tokyo, Japan), in accordance with CLSI guidelines EP09‐A3. 17 The epitope peptides used by AccuTnI+3 for the cTnI measurement are the same as those used in the ivisen IA‐1400. A sample type comparison of whole blood and plasma was conducted on 39 paired samples using the mean value of duplicates. A Passing‐Bablok regression analysis was performed to define the relationship and agreement between devices in the comparison analysis. A Bland‐Altman analysis was also performed to evaluate significant differences between the sample types. 
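The Bland‐Altman part of the comparison reduces to the mean of the paired differences (the bias) and its 95% limits of agreement. A minimal sketch with hypothetical paired values (the study's own analysis used percentage differences and dedicated software):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman agreement between paired measurements.

    Returns the mean difference (bias) and the 95% limits of agreement,
    bias +/- 1.96 * SD of the differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired cTnI results (ng/L): whole blood vs. plasma
wb = [12.0, 48.0, 150.0, 300.0, 22.0, 75.0]
plasma = [13.0, 46.0, 155.0, 295.0, 21.0, 78.0]
bias, (lo, hi) = bland_altman(wb, plasma)
```

A bias near zero with narrow limits of agreement supports the interchangeability of the two sample types.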
Optimal cut‐off value for sensitivity and specificity The diagnostic performance of the ivisen IA‐1400 for predicting AMI was briefly evaluated. Measurements of cTnI were performed using residual samples from 415 patients with suspected AMI who visited Korea University Guro Hospital. The recruited samples were classified as "non‐AMI" (341 samples) or "AMI" (74 samples) based on the clinicians' diagnosis according to the universal guidelines for diagnosing AMI. 18 The optimal cut‐off value, which maximizes the sensitivity and specificity, was obtained from the receiver‐operating characteristic (ROC) curve analysis. 
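Choosing the cut‐off that maximizes sensitivity and specificity is commonly done with Youden's J statistic (J = sensitivity + specificity − 1). The sketch below is a simple stand‐in for that ROC‐based selection, using illustrative values rather than the study's data:

```python
def best_cutoff(values, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    `labels` are 1 for AMI and 0 for non-AMI; a result >= cutoff is
    called positive.
    """
    best = (None, -1.0)
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= c)
        fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < c)
        tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < c)
        fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best[1]:
            best = (c, j)
    return best

# Hypothetical cTnI values (ng/L) and diagnoses (1 = AMI, 0 = non-AMI)
vals = [20, 50, 120, 240, 400, 800, 30, 150, 260, 900]
labels = [0, 0, 0, 1, 1, 1, 0, 0, 1, 1]
cutoff, j = best_cutoff(vals, labels)
```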
Statistical analysis The statistical analysis was performed using Analyse‐it Software (Analyse‐it Software, Leeds, UK) and Microsoft Excel version 2016 (Microsoft Corporation, Redmond, WA, USA). Patient and public involvement No patients or members of the public were involved in the design of this study. 
RESULTS: Imprecision Mean, SD, and CV values were obtained using three different QC materials (level 1, low; level 2, middle; and level 3, high) and three lots (Table 1). Within‐run precisions combining the results of the three lots were as follows: low, 9.5%; middle, 10.2%; and high, 8.5%. Between‐lot precisions were as follows: low, 6.0%; middle, 4.6%; and high, 4.8%. The total reproducibility was 10.1%, 12.2%, and 9.9% at the low, middle, and high levels, respectively. Imprecision study of ivisen IA‐1400 using LiquichekTM Cardiac Markers Plus Control LT (Bio‐Rad) of levels 1, 2, and 3 Abbreviation: CV, coefficient of variation. 
Analytical sensitivity: LoB, LoD, and LoQ Both plasma and whole blood samples were used for the LoB, LoD, and LoQ analyses (Table 2). The LoB for lots 1 and 2 was 3.1 and 2.4 ng/L for plasma samples and 4.3 and 2.8 ng/L for whole blood samples, respectively. The LoD for lots 1 and 2 was 8.4 and 7.1 ng/L for plasma samples and 10.0 and 6.4 ng/L for whole blood samples, respectively. The LoQ values were calculated at CVs of 20% and 10%. In lot 1, the LoQ at CVs of 20% and 10% was 14.4 and 45.5 ng/L for plasma samples and 28.6 and 57.2 ng/L for whole blood samples, respectively. In lot 2, the LoQ at CVs of 20% and 10% was 19.5 and 39.1 ng/L for plasma samples and 15.6 and 31.1 ng/L for whole blood samples, respectively. The greater of the two values was reported as the LoB, LoD, and LoQ for the ivisen IA‐1400. No statistically significant differences in the LoQ were observed between the plasma and whole blood samples (P > 0.05). The LoB, LoD, and LoQ profiles of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 using plasma samples are summarized in Table 3. 19 , 20 , 21 The data, including the 99th percentile upper reference limit (URL), for the AccuTnI+3 and PATHFAST were provided by their respective manufacturers. Summary of LoB, LoD, and LoQ in measurement of cTnI using ivisen IA‐1400 by sample type and lot. 
Bold letters indicate the reported value for the overall study. Abbreviations: CV, coefficient of variation; LoB, limit of blank; LoD, limit of detection; LoQ, limit of quantitation. Profiles of LoB, LoD, and LoQ at CVs of 20% and 10% of Access AccuTnI+3, PATHFASTTM hs‐cTnI assay, and ivisen IA‐1400 using plasma samples Abbreviation: NA, not available. 
Linearity and hook effect According to the regression analysis of 10 pooled replicates, linearity was confirmed over the range of 10 ng/L (the LoD value) to 10,000 ng/L. No hook effect was observed, as there was no decrease in the measured value at concentrations up to 36,700 ng/L. Cross‐reactivity and interference The calculated bias (%) at low and high concentrations of the eight cross‐reacting materials was less than 1%, suggesting no significant cross‐reactivity. None of the tested interfering substances showed a bias greater than ±20%, except for the high level of rheumatoid factor (500 IU/mL). The level of rheumatoid factor that met the bias criterion of less than ±20% was 125 IU/mL at high cTnI concentrations and 250 IU/mL at low concentrations. Immunoassays using mouse antibodies are prone to interference from heterophilic antibodies. For the evaluation of HAMA, multiple types of HAMA materials from different manufacturers were used, and the bias was less than ±20%. However, in patients with a history of exposure to mice or immunotherapy, the results should be interpreted cautiously. 
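The dilution‐series linearity check reported above amounts to regressing measured against expected concentrations and verifying that the slope is close to 1 with a small intercept. A sketch with hypothetical pool values (the study's own regression model is not reproduced here):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a dilution series:
    measured (y) vs. expected (x) concentration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical dilution pools (ng/L): expected vs. mean measured
expected = [10, 100, 500, 1000, 5000, 10000]
measured = [11, 98, 510, 990, 5050, 9900]
slope, intercept = linear_fit(expected, measured)
```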
Method and sample type comparison The comparative evaluation of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 using 111 plasma samples showed excellent correlation, with Spearman's correlation coefficient (R) values of 0.992 and 0.985, respectively (Figure 2, Table 4). The results of the correlation analysis between whole blood and plasma samples using the ivisen IA‐1400 are shown in Figure 3A. The correlation between the whole blood and plasma samples was excellent (R = 0.988), and no significant bias (−2.6%, P > 0.05) was observed in the Bland‐Altman plot (Figure 3B). The comparative evaluation between the Access AccuTnI+3 assay (A), PATHFAST™ hs‐cTnI assay (B), and ivisen IA‐1400. Correlation coefficients (R) with 95% CI for cTnI measurement by method and sample type: Access AccuTnI+3 vs. ivisen IA−1400; PATHFAST™ hs‐cTnI assay vs. ivisen IA−1400; sample type (whole blood vs. plasma). Abbreviation: CI, confidence interval. The results of correlation analysis between whole blood and plasma samples using ivisen IA‐1400: (A) Passing‐Bablok regression, (B) Bland‐Altman plot.
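The comparison statistics used here (Spearman rank correlation and the Bland-Altman mean difference) can be sketched in a few self-contained lines; the paired values below are illustrative only and the simple `bias` returned is in measurement units, not the percent bias the study reports.

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def spearman(x, y):
    """Spearman's R is Pearson correlation computed on the ranks."""
    return pearson(ranks(x), ranks(y))

def bland_altman_bias(x, y):
    """Bland-Altman bias: mean paired difference between two methods."""
    return sum(b - a for a, b in zip(x, y)) / len(x)

# Illustrative paired cTnI results in ng/L (not the study data)
whole_blood = [5.0, 11.4, 47.0, 225.0, 995.0]
plasma = [5.2, 12.0, 48.5, 230.0, 1010.0]
r = spearman(plasma, whole_blood)
bias = bland_altman_bias(whole_blood, plasma)
```

Spearman's R is preferred over Pearson's here because cTnI values span several orders of magnitude and are far from normally distributed.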
Optimal cut‐off values for sensitivity and specificity The optimal cut‐off value derived from the ROC analysis of the ivisen IA‐1400 was established as 235 ng/L. In the two groups previously classified by diagnosis (74 AMI and 341 non‐AMI cases), the sensitivity and specificity of the ivisen IA‐1400 at 235 ng/L were 94.6% and 98.2%, respectively. The ROC curve showed an area under the curve of 0.998, confirming excellent discrimination. Imprecision: Mean, SD, and CV values were obtained using three different QC materials (level 1, low; level 2, middle; and level 3, high) and three lots (Table 1). Within‐run precision, combining the results of the three lots, was as follows: low, 9.5%; middle, 10.2%; and high, 8.5%. Between‐lot precision was as follows: low, 6.0%; middle, 4.6%; and high, 4.8%. Total reproducibility was 10.1%, 12.2%, and 9.9% at the low, middle, and high levels, respectively. Imprecision study of ivisen IA‐1400 using Liquichek™ Cardiac Markers Plus Control LT (Bio‐Rad), levels 1, 2, and 3. Abbreviation: CV, coefficient of variation.
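The sensitivity, specificity, and AUC reported at the 235 ng/L cut-off follow from standard definitions; a minimal sketch is below, with the AUC computed via the Mann-Whitney statistic (probability that a random diseased value exceeds a random non-diseased value). The group values are illustrative, not the 74 AMI / 341 non-AMI study data.

```python
def sens_spec(pos, neg, cutoff):
    """Sensitivity and specificity when values above the cutoff call disease."""
    sens = sum(v > cutoff for v in pos) / len(pos)
    spec = sum(v <= cutoff for v in neg) / len(neg)
    return sens, spec

def auc(pos, neg):
    """AUC as the Mann-Whitney statistic: P(pos > neg), ties count half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical cTnI values in ng/L (not the study cohort)
ami = [260.0, 310.0, 150.0, 400.0]
non_ami = [12.0, 30.0, 240.0, 8.0, 55.0]
sens, spec = sens_spec(ami, non_ami, 235.0)
area = auc(ami, non_ami)
```

In a full ROC analysis the "optimal" cut-off is typically the one maximizing the Youden index (sensitivity + specificity − 1) over all candidate thresholds.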
Analytical sensitivity: LoB, LoD, and LoQ: Both plasma and whole blood samples were used for the LoB, LoD, and LoQ analyses (Table 2). The LoB for lots 1 and 2 were 3.1 and 2.4 ng/L for plasma samples, and 4.3 and 2.8 ng/L for whole blood samples, respectively. The LoD for lots 1 and 2 were 8.4 and 7.1 ng/L for plasma samples, and 10.0 and 6.4 ng/L for whole blood samples, respectively. The LoQ values were calculated at CVs of 20% and 10%. The LoQ at CVs of 20% and 10% were 14.4 and 45.5 ng/L for plasma samples and 28.6 and 57.2 ng/L for whole blood samples, respectively, in lot 1. In lot 2, the LoQ at CVs of 20% and 10% were 19.5 and 39.1 ng/L for plasma samples and 15.6 and 31.1 ng/L for whole blood samples, respectively. The greater of the two values was reported as the LoB, LoD, and LoQ for the ivisen IA‐1400. No statistically significant differences in the LoQ were observed between the plasma and whole blood samples (P > 0.05). The LoB, LoD, and LoQ profiles using plasma samples of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 are summarized in Table 3. 19, 20, 21 The data, including the 99th percentile upper reference limit (URL), for the AccuTnI+3 and PATHFAST were provided by their respective manufacturers. Summary of LoB, LoD, and LoQ in measurement of cTnI using ivisen IA‐1400 by sample type and lot. Bold letters indicate the value reported for the overall study. Abbreviations: CV, coefficient of variation; LoB, limit of blank; LoD, limit of detection; LoQ, limit of quantitation. Profiles of LoB, LoD, and LoQ at CVs of 20% and 10% of Access AccuTnI+3, PATHFAST™ hs‐cTnI assay, and ivisen IA‐1400 using plasma samples. Abbreviation: NA, not available. Linearity and hook effect: According to the regression analysis of 10 pooled replicates, linearity was confirmed for the range of 10 ng/L (the LoD value) to 10,000 ng/L. No hook effect was observed, as there was no decrease in the measured value at high concentrations up to 36,700 ng/L.
Cross‐reactivity and interference: The calculated bias (%) at low and high concentrations of the eight cross‐reacting materials was less than 1%, suggesting no significant cross‐reactivity. None of the tested interfering substances showed a bias greater than ±20%, except the high level of rheumatoid factor (500 IU/mL). The highest rheumatoid factor level that met the ±20% bias criterion was 125 IU/mL at high concentrations and 250 IU/mL at low concentrations. Immunoassays using mouse antibodies are prone to interference from heterophilic antibodies. For the evaluation of HAMA, multiple types of HAMA materials from different manufacturers were used, and the bias was less than ±20%. However, in patients with a history of exposure to mice or immunotherapy, the results should be interpreted cautiously. Method and sample type comparison: The comparative evaluation of the AccuTnI+3, PATHFAST, and ivisen IA‐1400 using 111 plasma samples showed excellent correlation, with Spearman's correlation coefficient (R) values of 0.992 and 0.985, respectively (Figure 2, Table 4). The results of the correlation analysis between whole blood and plasma samples using the ivisen IA‐1400 are shown in Figure 3A. The correlation between the whole blood and plasma samples was excellent (R = 0.988), and no significant bias (−2.6%, P > 0.05) was observed in the Bland‐Altman plot (Figure 3B). The comparative evaluation between the Access AccuTnI+3 assay (A), PATHFAST™ hs‐cTnI assay (B), and ivisen IA‐1400. Correlation coefficients (R) with 95% CI for cTnI measurement by method and sample type: Access AccuTnI+3 vs. ivisen IA−1400; PATHFAST™ hs‐cTnI assay vs. ivisen IA−1400; sample type (whole blood vs. plasma). Abbreviation: CI, confidence interval.
The results of correlation analysis between whole blood and plasma samples using ivisen IA‐1400: (A) Passing‐Bablok regression, (B) Bland‐Altman plot. Optimal cut‐off values for sensitivity and specificity: The optimal cut‐off value derived from the ROC analysis of the ivisen IA‐1400 was established as 235 ng/L. In the two groups previously classified by diagnosis (74 AMI and 341 non‐AMI cases), the sensitivity and specificity of the ivisen IA‐1400 at 235 ng/L were 94.6% and 98.2%, respectively. The ROC curve showed an area under the curve of 0.998, confirming excellent discrimination. DISCUSSION: POC platforms have advantages in terms of promptly delivering information for patient care and decision‐making, especially in medical emergencies. However, the significance of these advantages needs to be carefully evaluated because POC tests are less sensitive than central laboratory assays, especially in the early stages after symptom onset. According to a study conducted in an emergency department (ED), sensitivity was significantly lower in patients sampled <3 hours after symptom onset than in those sampled >3 hours after symptom onset. 5 Another study indicated some discrepancies between the results of POC testing and those of laboratories, revealing six false negative and three false positive results in a total of 189 samples in POC testing. 22 As the cut‐off value was adjusted higher, the number of discrepancies decreased (only two false positives), while the correlation increased. The authors of this study described that POC testing should be considered to provide faster cTn results and that the continued use of POC testing could help increase ED throughput and decrease wait times and lengths of stay.
According to a research article that analyzed the effect of POC testing on actual cost, POC testing decreased the referral rate in patients without acute coronary syndrome and also achieved tangible reductions in costs. 23 Therefore, to fully leverage POC testing, an appropriate performance evaluation process is essential to ensure acceptable quality of the information provided by POC testing. This study evaluated the analytical performance of the ivisen IA‐1400 for cTnI measurement. The built‐in centrifuge is the most characteristic aspect of this new POC device, as it enables the convenient use of whole blood samples without a pretreatment process. However, the difference in sample type may affect cTnI levels. In the sample type correlation study, the results of the plasma and whole blood samples from the same patient demonstrated excellent correlation without significant difference, as shown in the Bland‐Altman plot. Therefore, the use of the ivisen IA‐1400 can reduce TAT by eliminating pretreatment time loss and increasing the flexibility in the choice of sample type. Among the two types of hs‐cTn assays (hs‐cTnI and hs‐cTnT), hs‐cTnT assays are supplied by only one manufacturer (Roche Diagnostics) because of patent restrictions on the antibodies selected for the assay. 18 The choice of whether to measure hs‐cTnT or hs‐cTnI appears to be a judgment based on the situation of each laboratory (e.g., on who the laboratory's primary supplier is) rather than a clinical decision. 21 In our laboratory at Korea University Guro Hospital, the hs‐cTnT assay has been used since 2011. The ivisen IA‐1400 system showed acceptable sensitivity and specificity in the two populations (AMI vs. non‐AMI) diagnosed using the hs‐cTnT assay. In the precision study, acceptable repeatability, reproducibility, and between‐lot precision results (±10%) were achieved using three lots.
The LoD values for the plasma and whole blood samples were below 10 ng/L, with no significant interference or cross‐reactivity. The results of the whole blood and plasma samples were significantly correlated, confirming the stable performance of the internal centrifuge, a special feature of the ivisen IA‐1400. The correlation study with the AccuTnI+3 and PATHFAST revealed a very strong correlation (R = 0.992 and 0.985, respectively). However, the LoQ values at CVs of 20% and 10% were higher than those of the other two assays using plasma samples, especially when compared with the PATHFAST (at a CV of 20%: ivisen IA‐1400, 19.5 ng/L vs. PATHFAST, 4.0 ng/L; at a CV of 10%: 45.5 ng/L vs. 30.4 ng/L). The PATHFAST demonstrated complete fulfillment of the analytical criteria for hs‐cTn assays, achieving a CV of <10% at the 99th URL and a CV of 5.1% at 29 ng/L. 24 As the ivisen IA‐1400 was developed as a contemporary device, it is not appropriate to judge its values by the same standard as the PATHFAST hs‐cTnI assay. This study has some limitations. First, for the qualified determination of the 99th percentile URL for a cTn assay, an adequate reference population of at least 300 healthy individuals with an appropriate age, ethnicity, and sex distribution is required. 25 As this study used leftover whole blood samples from patients with various diseases other than AMI and other underlying medical conditions requiring cTn testing, the 99th percentile URL could not be obtained. This value is crucial in evaluating the overall performance of cTn testing devices and determining whether a device can be classified as a contemporary or hs‐cTn assay. Although the sensitivity and specificity calculated from the optimum cut‐off value were acceptable, the information gained from the clinical performance data is of limited value.
A follow‐up study with a population of healthy individuals will be necessary to determine whether the ivisen IA‐1400 achieves the ideal recommendation of a CV <10% at the 99th percentile URL, or a CV <20% as acceptable for POC testing in clinical use. Second, although the comparison with the other two devices showed satisfactory results, there were relatively few samples in the lower range, which is important in the clinical decision‐making process. As mentioned above, further studies need to be carried out using more data at low cTnI concentrations to confirm the analytical performance in the lower range. Another point to note is that mildly discrepant CV values were observed between the precision and LoQ analyses. Since QC material, not patient samples, was used in the precision analysis, the better CV values can be attributed to the difference in matrix. In conclusion, the newly developed contemporary POC device for cTnI, the ivisen IA‐1400, showed acceptable and promising performance in cTnI measurements using whole blood and plasma samples, with respect to speed, imprecision, analytical sensitivity and specificity, and correlation. If the CV at the 99th percentile URL proves acceptable for clinical use as POC testing, the characteristic internal centrifuge system can be conveniently used in various situations where whole blood samples are used along with plasma samples. CONFLICT OF INTERESTS: None declared.
Background: We present the analytical performance of the ivisen IA-1400, a new point-of-care device that features a characteristic built-in centrifuge system, to measure blood cardiac troponin I (cTnI) levels. Methods: Whole blood and plasma samples obtained from patients who visited Korea University Guro Hospital were used to analyze measurement range, cross-reactivity, interference, and sensitivity and specificity. We performed a correlation analysis of the ivisen IA-1400 versus the Access AccuTnI+3 immunoassay using the UniCel™ DxI 800 platform and the PATHFAST™ hs-cTnI assay. Results: Within-run precision was 9.8% at the low, 10.2% at the middle, and 8.5% at the high level. The limit of blank was 3.1 ng/L for plasma samples and 4.3 ng/L for whole blood samples. The limit of detection was 8.4 ng/L for plasma samples and 10.0 ng/L for whole blood samples. The limit of quantitation at coefficients of variation of 20% and 10% was 19.5 ng/L and 45.5 ng/L for plasma samples, respectively. The comparative evaluation between the two other assays and the ivisen IA-1400 showed excellent correlation, with Spearman's correlation coefficients (R) of 0.992 and 0.985. The sensitivity and specificity of the ivisen IA-1400 using the optimum cut-off value of 235 ng/L were 94.6% and 98.2%, respectively. Conclusions: The ivisen IA-1400 showed acceptable and promising performance in cTnI measurements using whole blood and plasma samples, with limited clinical performance information. The flexibility of sample selection afforded by the internal centrifugation system is the main advantage of this point-of-care device.
null
null
9,260
327
[ 545, 192, 232, 154, 284, 96, 143, 152, 108, 32, 16, 149, 386, 56, 148, 210, 79 ]
21
[ "samples", "ia", "ia 1400", "1400", "ivisen", "ivisen ia 1400", "ivisen ia", "blood", "plasma", "ctni" ]
[ "biochemical marker myocardial", "biomarkers diagnosis myocardial", "testing cardiac biomarkers", "use cardiac biomarkers", "cardiac biomarker biochemical" ]
null
null
null
[CONTENT] cardiac troponin | cardiac troponin I | i‐SENS | ivisen IA‐1400 | point‐of‐care [SUMMARY]
null
[CONTENT] cardiac troponin | cardiac troponin I | i‐SENS | ivisen IA‐1400 | point‐of‐care [SUMMARY]
null
[CONTENT] cardiac troponin | cardiac troponin I | i‐SENS | ivisen IA‐1400 | point‐of‐care [SUMMARY]
null
[CONTENT] Centrifugation | Confidence Intervals | Cross Reactions | Humans | Limit of Detection | Myocardium | Point-of-Care Systems | Sensitivity and Specificity | Troponin I [SUMMARY]
null
[CONTENT] Centrifugation | Confidence Intervals | Cross Reactions | Humans | Limit of Detection | Myocardium | Point-of-Care Systems | Sensitivity and Specificity | Troponin I [SUMMARY]
null
[CONTENT] Centrifugation | Confidence Intervals | Cross Reactions | Humans | Limit of Detection | Myocardium | Point-of-Care Systems | Sensitivity and Specificity | Troponin I [SUMMARY]
null
[CONTENT] biochemical marker myocardial | biomarkers diagnosis myocardial | testing cardiac biomarkers | use cardiac biomarkers | cardiac biomarker biochemical [SUMMARY]
null
[CONTENT] biochemical marker myocardial | biomarkers diagnosis myocardial | testing cardiac biomarkers | use cardiac biomarkers | cardiac biomarker biochemical [SUMMARY]
null
[CONTENT] biochemical marker myocardial | biomarkers diagnosis myocardial | testing cardiac biomarkers | use cardiac biomarkers | cardiac biomarker biochemical [SUMMARY]
null
[CONTENT] samples | ia | ia 1400 | 1400 | ivisen | ivisen ia 1400 | ivisen ia | blood | plasma | ctni [SUMMARY]
null
[CONTENT] samples | ia | ia 1400 | 1400 | ivisen | ivisen ia 1400 | ivisen ia | blood | plasma | ctni [SUMMARY]
null
[CONTENT] samples | ia | ia 1400 | 1400 | ivisen | ivisen ia 1400 | ivisen ia | blood | plasma | ctni [SUMMARY]
null
[CONTENT] cardiac | poc | assays | myocardial | rapid | injury | ctns | development | myocardial injury | devices [SUMMARY]
null
[CONTENT] ng | samples | loq | plasma | ivisen | ia 1400 | ia | 1400 | ivisen ia 1400 | ivisen ia [SUMMARY]
null
[CONTENT] samples | declared | blood | ivisen ia | 1400 | ivisen ia 1400 | ia 1400 | ivisen | ia | plasma [SUMMARY]
null
[CONTENT] [SUMMARY]
null
[CONTENT] 9.8% | 10.2% | 8.5% ||| 3.1 ng/L | 4.3 ng/L ||| 8.4 ng/L | 10.0 ng/L ||| 20% and | 10% | 19.5  | ng/L | 45.5 ng ||| two other assays | Spearman | 0.992 | 0.985 ||| 235 | 94.6% | 98.2% [SUMMARY]
null
[CONTENT] ||| Korea University Guro Hospital ||| Access | UniCel | DxI ||| 9.8% | 10.2% | 8.5% ||| 3.1 ng/L | 4.3 ng/L ||| 8.4 ng/L | 10.0 ng/L ||| 20% and | 10% | 19.5  | ng/L | 45.5 ng ||| two other assays | Spearman | 0.992 | 0.985 ||| 235 | 94.6% | 98.2% ||| ||| [SUMMARY]
null
Serum Human Epididymis Protein-4 (HE4) - A Novel Approach to Differentiate Malignant From Benign Breast Tumors.
34452565
The lack of sensitivity and specificity of existing diagnostic markers like Carbohydrate Antigen 15-3 (CA15-3) and Carcinoembryonic Antigen (CEA) in breast cancer stimulates the search for new biomarkers to improve diagnostic sensitivity, especially in differentiating benign and malignant breast tumors. Expression of Human epididymal protein 4 (HE4) has been demonstrated in ductal carcinoma of the breast tissue. So we aimed to evaluate serum HE4 levels as a diagnostic marker in breast cancer patients and to comparatively assess serum HE4, CEA, and CA15-3 in both benign and malignant breast tumor patients.
BACKGROUND
A total of 90 female subjects were included in the study. We selected 30 breast cancer cases (Malignant group) and 30 benign breast lump cases (Benign group) based on histopathology reports. The other 30 were age-matched, apparently healthy controls (Control group). HE4, CEA, and CA15-3 were analysed in serum samples of all subjects by the electrochemiluminescence immunoassay method.
METHODS
A significant difference in the median (IQR) of HE4 (pmol/l) was identified among malignant, benign and control groups {62.4(52.6-73.7) vs 49.3(39.8-57.4) vs 52.3(50.6-63.3) P=0.0009} respectively. The cutoff value for prediction of breast cancer was determined at >54.5 pmol/l for HE4, with a sensitivity of 73.3%, specificity of 65.3%, whereas cutoff value of CA 15-3 was >21.24 (U/ml) with a sensitivity of 56.7%, specificity of 74.5%. For CEA at cutoff value >0.99 (ng/ml) the sensitivity and specificity were 96.7 % and 62.7% respectively. AUC for HE4, CA15-3 and CEA were 0.725, 0.644 and 0.857 respectively.
RESULTS
Our study demonstrated that serum levels of HE4 were significantly higher in the malignant group compared to the benign and control groups. There was no significant difference in HE4 levels between the benign and control groups. These results indicate that HE4 appears to be a useful and highly specific biomarker for breast cancer, which can differentiate between malignant and benign tumors.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Biomarkers, Tumor", "Breast Neoplasms", "Case-Control Studies", "Cross-Sectional Studies", "Diagnosis, Differential", "Female", "Follow-Up Studies", "Humans", "India", "Middle Aged", "Neoplasms", "Prognosis", "ROC Curve", "WAP Four-Disulfide Core Domain Protein 2", "Young Adult" ]
8629466
Introduction
Breast cancer is the most common cancer in women worldwide, representing nearly a quarter of all cancers, with an estimated 1.67 million new cancer cases diagnosed in 2012. Women from less developed regions have slightly more cases than those from more developed regions (Ferlay et al., 2015). Breast cancer has ranked as the number one cancer among Indian women, with an age-adjusted rate as high as 25.8 per 100,000 women and mortality of 12.7 per 100,000 women (Malvia et al., 2017). Changes in lifestyle have increased the risk-factor profile. Despite the rising incidence of breast cancer, significant improvement of survival in recent years is due to the progress of research at molecular and biological levels of breast cancer. However, it is essential to identify reliable prognostic factors to guide decision making during the treatment of breast cancer. There is a lack of specific and sensitive serum biomarkers for detection and disease progression. Along with tumor size, grade, lymph node status, and invasion, molecular markers including hormone receptor status and human epidermal growth factor receptor 2 (HER2) expression (Galgano et al., 2006) have an important role in screening, early diagnosis of recurrence, and treatment of many malignancies (Hellström et al., 2003; Bingle et al., 2006). In carcinoma of the breast, the most commonly used serum tumor marker is cancer antigen 15-3 (CA 15-3); however, its sensitivity and specificity are inadequate (Kamei et al., 2010; O’Neal et al., 2013). The lack of sensitivity and specificity of the present diagnostic parameters has led to the search for newer biomarkers to add to the present panel to identify the disease earlier and halt its progression. Human epididymal protein 4 (HE4) is a secretory protein initially identified in epithelial cells of the human epididymis (Donepudi et al., 2014). 
Expression of HE4 has been demonstrated in numerous types of normal human tissues, particularly in the epithelium of the respiratory and genitourinary tracts of men and women, and increased HE4 expression has been demonstrated in a range of malignant neoplasms, particularly those of gynecological, pulmonary, and gastrointestinal origin (Galgano et al., 2006; Geng et al., 2015; Ideo et al., 2015). It has been recently reported that HE4 is also expressed in ductal carcinoma of the breast tissue (Geng et al., 2015); however, its serum expression levels and their diagnostic and prognostic potential in breast cancer remain to be elucidated. HE4 is closely associated with lymph node metastases. These findings suggest that HE4 is a possible predictive marker of lymph node metastasis and has a critical role in its recurrence (Hellström et al., 2003; Bingle et al., 2006). So we aimed to estimate the levels of HE4 in established breast cancer, benign breast lump cases and healthy controls and to evaluate the clinical eligibility of HE4 as a potential tumor marker.
null
null
Results
Out of the total 90 women included in this study, 63 were premenopausal (30 in benign, 13 in malignant and 20 in the control group) and 27 were postmenopausal (17 in malignant and 10 in control groups). In the benign group, the most represented histological type was Fibroadenoma (77.67%), followed by benign phyllodes tumour (13.33%) and Intra-ductal papilloma (10%). In the malignant group, the most common was invasive ductal carcinoma (IDC) (66.67%) followed by IDC with extensive ductal carcinoma in situ (DCIS) (13.33%). The median age of cancer patients was 54 years (31 to 72 years). The median age in the benign group was 34 years (18 to 45 years). The median age in malignant cases was significantly higher than in the benign group (Table 1). The most common stage at which breast cancer patients presented was Stage IIIB. A significant difference was noted in HE-4 (pmol/L) levels [Median(IQR)] of controls 52.3(50.6- 63.3), malignant 62.4(52.6- 73.7) and benign 49.34(39.8-57.4) groups with a p value of <0.0009 (Table 1). Median levels revealed a significant difference in HE4 levels of the malignant group when compared with controls (p=0.0465) and with the benign group (p=0.0004). But there was no significant difference in HE4 between the benign and control groups. Similarly, significant differences in CEA levels (ng/ml) were also noted between malignant [2.067(1.5 – 3.6)] and control [0.21(0.21 – 0.23)] groups (p<0.0001), malignant and benign [1.37(0.8 -1.8)] groups (p=0.0012), and benign and control groups (p<0.0001). CA15-3, however, did not show any significant difference among the three groups (Table 1). To assess the diagnostic performance of the various biomarkers, ROC analysis was done. HE4 had a sensitivity of 73.3% and specificity of 65.3% with an AUC of 0.725 at a cutoff of >54.5 pmol/L for diagnosis of breast cancer. CEA had a sensitivity of 96.7% and specificity of 62.7% with an AUC of 0.857 at a cut-off of 0.99 ng/ml. 
CA15-3 showed a sensitivity of 56.7% and specificity of 74.5% with an AUC of 0.644 at a cut-off level of 21.24 U/ml (Table 2 and Figure 1). The combination of HE4, CEA and CA15-3 showed a sensitivity and specificity of 100% and 30.6%, respectively, with an AUC of 0.653 (Figure 2). Serum HE4 showed no significant correlation with either CEA (r=0.056, p=0.63) or CA15-3 (r=0.036, p=0.75). Biomarkers in Control and Breast Tumor Groups. HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3; *p <0.05 is significant; “a”, significant difference in HE4 levels between malignant and control groups; “b”, significant difference in HE4 levels between malignant and benign groups; “c”, significant difference in CEA levels between benign and control groups; “d”, significant difference in CEA levels between malignant and control groups; “e”, significant difference in CEA levels between malignant and benign groups (Dunn’s post-hoc multiple comparison analysis). Diagnostic Efficacy of the Biomarkers in Breast Tumors. HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3; PPV, positive predictive value; NPV, negative predictive value; AUC, area under curve. Comparison of AUC of HE4, CA 15-3 and CEA in Breast Cancer. AUC, area under curve; HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3. Combined AUC of HE4, CA 15-3 and CEA in Breast Cancer Patients. AUC, area under curve; HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3
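The across-group comparison in this study uses the Kruskal-Wallis test with Dunn's post hoc method. The H statistic at the heart of the test can be sketched as follows; the tiny groups are illustrative only, no tie correction is applied, and in practice `scipy.stats.kruskal` provides the same statistic with a p-value.

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H: how far each group's mean rank sits from the
    grand mean rank (N+1)/2. Compare against a chi-square distribution
    with len(groups)-1 degrees of freedom for the p-value."""
    n_total = sum(len(g) for g in groups)
    pooled = [v for g in groups for v in g]
    r = ranks(pooled)
    h, start = 0.0, 0
    for g in groups:
        mean_rank = sum(r[start:start + len(g)]) / len(g)
        h += len(g) * (mean_rank - (n_total + 1) / 2) ** 2
        start += len(g)
    return 12 / (n_total * (n_total + 1)) * h

# Three fully separated illustrative groups (not the study's HE4 data)
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

A significant H only says the groups differ somewhere; the pairwise malignant-vs-benign and malignant-vs-control p-values then come from Dunn's post hoc comparisons on the same ranks.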
null
null
[ "Author Contribution Statement" ]
[ "Study conception and design: Dr.KSS Sai Baba, Dr. Noorjahan; data collection: Dr. M.A.Rehman, Dr. Pradeep and Dr.Maira; analysis and interpretation of results:, Dr.Pradeep, Dr.Maira Dr.Shantveer and Dr.GSN Raju; draft manuscript preparation: Dr.KSS Sai Baba, Dr. Noorjahan, Dr. M.A.Rehman. All authors reviewed the results and approved the final version of the manuscript." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Discussion", "Author Contribution Statement" ]
[ "Breast cancer is the most common cancer in women worldwide representing nearly a quarter of all cancers with an estimated 1.67 million new cancer cases diagnosed in 2012. Women from less developed regions have slightly more number of cases compared to more developed regions (Ferlay et al., 2015). Breast cancer has ranked number one cancer among Indian women with age adjusted rate as high as 25.8 per 100,000 women and mortality 12.7 per 100,000 women (Malvia et al., 2017). Changes in lifestyle increased risk-factor profile. Despite the rising incidence of breast cancer, significant improvement of survival in recent years is due to the progress of research at molecular and biological levels of breast cancer. However, it is essential to identify reliable prognostic factors to guide decision making during the treatment of breast cancer. There is lack of specific and sensitive serum biomarkers for detection and disease progression. Along with tumor size, grade, lymph node status, invasion, molecular markers including hormone receptor status and human epidermal growth factor receptor 2 (HER2) expression (Galgano et al., 2006), have an important role in screening, early diagnosis of recurrence, and treatment of many malignancies (Hellström et al., 2003; Bingle et al., 2006).\nIn carcinoma of the breast, the most commonly used serum tumor marker is cancer antigen 15-3 (CA 15-3); however, its sensitivity and specificity are inadequate (Kamei et al., 2010; O’Neal et al., 2013). The lack of sensitivity and specificity of the present diagnostic parameters have led to the search for newer biomarkers to add to the present panel to identify the disease earlier and halt its progression.\nHuman epididymal protein 4 (HE4) is a secretory protein initially identified in epithelial cells of the human epididymis (Donepudi et al., 2014). 
Expression of HE4 has been demonstrated in numerous types of normal human tissues, particularly in the epithelium of the respiratory and genitourinary tracts of men and women, and increased HE4 expression has been demonstrated in a range of malignant neoplasms, particularly those of gynecological, pulmonary, and gastrointestinal origin (Galgano et al., 2006; Geng et al., 2015; Ideo et al., 2015). It has been recently reported that HE4 is also expressed in ductal carcinoma of the breast tissue (Geng et al., 2015); however, its serum expression levels and their diagnostic and prognostic potential in breast cancer remain to be elucidated. HE4 is closely associated with lymph node metastases. These findings suggest that HE4 is a possible predictive marker of lymph node metastasis and has a critical role in its recurrence (Hellström et al., 2003; Bingle et al., 2006).\nSo we aimed to estimate the levels of HE4 in established breast cancer, benign breast lump cases and healthy controls and to evaluate the clinical eligibility of HE4 as a potential tumor marker.", "A cross-sectional case-control study was conducted in departments of Biochemistry, Pathology and Surgical Oncology of a tertiary care hospital at Hyderabad, India from June to December 2019. Based on the alpha error at 0.05 and power of 0.8, standard deviation of groups 1 and 2 as 26.19 and 2.19 respectively, difference of means as 14.89, and ratio of sample sizes in group 1 to 2 as 3.27 with p value <0.05 from a previous study (Gunduz et al., 2016), the calculated sample size was 27 cases and 09 controls (MedCalc Software, 2019). Female patients with a newly diagnosed breast lump were recruited and grouped into malignant (n=30) and benign (n=30) groups after confirmation with biopsy. Thirty age-matched controls were selected from healthy women volunteers. Patients with a previous history of breast or other cancers including that of endometrium, ovary etc were not included in the study. 
The study was approved by the Institutional Ethical Committee (EC/NIMS/1990/2017). Informed consent was obtained from all the participants. Samples from the cases were collected preoperatively. All the breast lump cases had undergone chest X-ray, bilateral mammography, FNAC and core needle biopsy. Serum CA15-3, CEA and HE4 levels were measured on a Roche Cobas e411 by electro-chemiluminescence immunoassay (diagnostics.roche.com). \n\nStatistical methods\n\nStatistical analysis was performed using MedCalc® Statistical Software version 19.6.1 (MedCalc Software Ltd, 2019). Normality of distribution was assessed by the Shapiro-Wilk test. The Kruskal-Wallis test with the post hoc Dunn’s multiple comparison method was used to determine the statistical significance across the three groups (Malignant, Benign and Control). Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic utility of HE4, CA15-3 and CEA as estimated by the area under the curve (AUC), sensitivity, specificity, positive predictive value and negative predictive value. P < 0.05 was considered statistically significant.", "Out of the 90 women included in this study, 63 were premenopausal (30 in the benign, 13 in the malignant and 20 in the control group) and 27 were postmenopausal (17 in the malignant and 10 in the control group). In the benign group, the most represented histological type was fibroadenoma (76.67%), followed by benign phyllodes tumour (13.33%) and intraductal papilloma (10%). In the malignant group, the most common was invasive ductal carcinoma (IDC) (66.67%), followed by IDC with extensive ductal carcinoma in situ (DCIS) (13.33%). \nMedian age of cancer patients was 54 years (31 to 72 years). Median age in the benign group was 34 years (18 to 45 years). The median age in malignant cases was significantly higher than in the benign group (Table 1). The most common stage at which breast cancer patients presented was Stage IIIB. 
\nA significant difference was noted in HE-4 (pmol/L) levels [median (IQR)] among the control 52.3 (50.6-63.3), malignant 62.4 (52.6-73.7) and benign 49.34 (39.8-57.4) groups, with a p value of 0.0009 (Table 1). Comparison of median levels revealed a significant difference in HE4 levels of the malignant group when compared with controls (p=0.0465) and with the benign group (p=0.0004). However, there was no significant difference in HE4 between the benign and control groups.\nSimilarly, significant differences in CEA levels (ng/ml) were also noted between the malignant [2.067 (1.5-3.6)] and control [0.21 (0.21-0.23)] groups (p<0.0001), the malignant and benign [1.37 (0.8-1.8)] groups (p=0.0012), and the benign and control groups (p<0.0001), whereas CA15-3 did not show any significant difference among the three groups (Table 1). \nTo assess the diagnostic performance of the various biomarkers, ROC analysis was done. HE4 had a sensitivity of 73.3% and specificity of 65.3% with an AUC of 0.725 at a cutoff of >54.5 pmol/L for the diagnosis of breast cancer. CEA had a sensitivity of 96.7% and specificity of 62.7% with an AUC of 0.857 at a cut-off of 0.99 ng/ml. CA15-3 showed a sensitivity of 56.7% and specificity of 74.5% with an AUC of 0.644 at a cut-off level of 21.24 U/ml (Table 2 and Figure 1). \nThe combination of HE4, CEA and CA15-3 showed a sensitivity and specificity of 100% and 30.6% respectively, with an AUC of 0.653 (Figure 2). 
Serum HE4 did not show any significant correlation with either CEA (r=0.056, p=0.63) or CA15-3 (r=0.036, p=0.75).\nBiomarkers in Control and Breast Tumor Groups\nHE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3; *p <0.05 is significant; “a”, significant difference in HE4 levels between malignant and control groups; “b”, significant difference in HE4 levels between malignant and benign groups; “c”, significant difference in CEA levels between benign and control groups; “d”, significant difference in CEA levels between malignant and control groups; “e”, significant difference in CEA levels between malignant and benign groups (Dunn’s post-hoc multiple comparison analysis). \nDiagnostic Efficacy of the Biomarkers in Breast Tumors\nHE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3; PPV, Positive predictive value; NPV, negative predictive value; AUC, Area under curve.\nComparison of AUC of HE4, CA 15-3 and CEA in Breast Cancer. AUC, Area under curve; HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3\nCombined AUC of HE4, CA 15-3 and CEA in Breast Cancer Patients. AUC, Area under curve; HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3", "Breast cancer is a vast group of diseases with varied clinical presentations and pathological characteristics. The treatment course, recurrence and prognosis are affected by the biological features of the tumor and the stage at diagnosis. 
The aim of novel biomarkers for breast cancer diagnosis is to improve the accuracy of detection and to assess the severity of the malignancy at the earliest possible stage.\nSerum markers like BR27.29 (CA 27.29), CA15-3, mucin-like carcinoma-associated antigen, CA 549, and CEA, with limited sensitivity and specificity, have been investigated; however, none of these markers have reached the standards required for clinical practice (Bingle et al., 2006).\nSerum HE4, also known as whey acidic protein four-disulfide core domain protein 2 (WFDC2), encoded by the WFDC2 gene, has been introduced for the routine diagnostics of ovarian cancer. HE4 was first described in normal tissues such as the epithelium of the epididymis and the bronchial epithelium in the proximal respiratory tract (Bingle et al., 2002).\nLimited data are available describing HE4 as a diagnostic and prognostic marker in breast cancer, especially in the Indian population. The prospective use of HE4 as a tumor marker, particularly in cancers of gynecological, pulmonary, and gastrointestinal origin, has been established by a number of studies (Hellström et al., 2003; Geng et al., 2015). The serological detection of HE4 has been shown to have increased sensitivity and specificity in the detection of ovarian cancer compared with CA 125, which is the current gold standard serum biomarker for ovarian carcinoma (Ferraro et al., 2013; Zhen et al., 2014).\nWhile all this previous literature indicates the importance of HE4 in ovarian cancer, the purpose of this study was to evaluate the clinical utility of HE4 as a potential tumor marker in breast cancer patients.\nOur observations have shown a significant difference in serum HE4 levels among the malignant, benign and control groups. Post-hoc analysis revealed a significant difference when the malignant group was compared to the benign group (p=0.0004) and controls (p=0.0465). No difference was seen between benign cases and controls. However, no difference was found in CA15-3 levels between any of the groups. 
Even though CEA levels were significantly different among all the groups, with a better AUC, CEA is known to increase in many cancers, which makes it a general cancer marker not specific to any particular cancer. Thus our findings are interesting in that, of the three markers studied, CA15-3 did not show any significant increase in the malignant group and CEA showed an increase in both the malignant and benign groups, whereas HE4 increased only in the malignant group but not in the benign or control groups.\nThere appeared to be no significant correlation between HE4 and either CA 15-3 (r=0.036; p=0.75) or CEA (r=0.056; p=0.63) in our study population. The cutoff value of HE4 levels for predicting breast cancer was >54.5 pmol/l, with a sensitivity of 73.3%, specificity of 65.3%, positive predictive value of 56.4%, negative predictive value of 80% and an AUC of 0.725. These findings indicate that HE4 may be used as a predictive marker for breast carcinoma. Galgano et al., (2006) reported the mRNA and protein expression of HE4 in normal and malignant tissues. In addition, Kamei et al., (2010) found that the increased expression of HE4 in breast cancer tissues correlated with lymph node invasion and was a possible predictive factor of breast cancer recurrence.\nOur observations indicate that HE4 is a significant biomarker associated with malignant breast cancer. To the best of our knowledge, the serum levels of HE4 in breast cancer patients, and their diagnostic and prognostic potential, have not been investigated in general, and in the Indian population specifically. In the current study, the serum levels of HE4 in patients diagnosed with breast cancer were assessed prior to any form of treatment and compared with those in healthy individuals and benign breast lump cases. The serum levels of HE4 were significantly increased in patients with breast malignancy compared with those in benign cases and healthy controls. 
The sensitivity and specificity of serum HE4 were reasonable in distinguishing breast cancer patients from benign cases and healthy controls. These findings indicate that HE4 may be used as a predictive marker for breast carcinoma.\nIn the present study, differences in serum HE4 levels based on menopausal status, stage of cancer and hormone receptor status were not statistically significant, similar to the findings of Kamei et al., (2010) and Gunduz et al., (2016). Multivariate analysis did not show any significant positive correlation of HE4 serum levels with histological grade and clinical stage in breast cancer patients.\nIn conclusion, the significant elevation of HE4, an ovarian cancer marker, in malignant breast tumor patients and its non-elevation in benign breast tumor patients make it an interesting and important biomarker in the evaluation of breast tumors, in addition to the current markers. Further exploration of this marker in other cancers will help to bring out its specificity more clearly.", "Study conception and design: Dr. KSS Sai Baba, Dr. Noorjahan; data collection: Dr. M.A. Rehman, Dr. Pradeep and Dr. Maira; analysis and interpretation of results: Dr. Pradeep, Dr. Maira, Dr. Shantveer and Dr. GSN Raju; draft manuscript preparation: Dr. KSS Sai Baba, Dr. Noorjahan, Dr. M.A. Rehman. All authors reviewed the results and approved the final version of the manuscript." ]
[ "intro", "materials|methods", "results", "discussion", null ]
[ "HE4", "CA 15-3", "CEA", "breast tumors", "breast cancer" ]
Introduction: Breast cancer is the most common cancer in women worldwide, representing nearly a quarter of all cancers, with an estimated 1.67 million new cancer cases diagnosed in 2012. Women from less developed regions have a slightly higher number of cases compared to those from more developed regions (Ferlay et al., 2015). Breast cancer has ranked as the number one cancer among Indian women, with an age-adjusted rate as high as 25.8 per 100,000 women and a mortality of 12.7 per 100,000 women (Malvia et al., 2017). Changes in lifestyle have increased the risk-factor profile. Despite the rising incidence of breast cancer, the significant improvement in survival in recent years is due to the progress of research at the molecular and biological levels of breast cancer. However, it is essential to identify reliable prognostic factors to guide decision making during the treatment of breast cancer. There is a lack of specific and sensitive serum biomarkers for detection and disease progression. Along with tumor size, grade, lymph node status, and invasion, molecular markers including hormone receptor status and human epidermal growth factor receptor 2 (HER2) expression (Galgano et al., 2006) have an important role in screening, early diagnosis of recurrence, and treatment of many malignancies (Hellström et al., 2003; Bingle et al., 2006). In carcinoma of the breast, the most commonly used serum tumor marker is cancer antigen 15-3 (CA 15-3); however, its sensitivity and specificity are inadequate (Kamei et al., 2010; O’Neal et al., 2013). The lack of sensitivity and specificity of the present diagnostic parameters has led to the search for newer biomarkers to add to the present panel, to identify the disease earlier and halt its progression. Human epididymal protein 4 (HE4) is a secretory protein initially identified in epithelial cells of the human epididymis (Donepudi et al., 2014). 
Expression of HE4 has been demonstrated in numerous types of normal human tissues, particularly in the epithelium of the respiratory and genitourinary tracts of men and women, and increased HE4 expression has been demonstrated in a range of malignant neoplasms, particularly those of gynecological, pulmonary, and gastrointestinal origin (Galgano et al., 2006; Geng et al., 2015; Ideo et al., 2015). It has recently been reported that HE4 is also expressed in ductal carcinoma of the breast tissue (Geng et al., 2015); however, its serum expression levels and their diagnostic and prognostic potential in breast cancer remain to be elucidated. HE4 is closely associated with lymph node metastases. These findings suggest that HE4 is a possible predictive marker of lymph node metastasis and has a critical role in its recurrence (Hellström et al., 2003; Bingle et al., 2006). We therefore aimed to estimate the levels of HE4 in established breast cancer, benign breast lump cases and healthy controls and to evaluate the clinical utility of HE4 as a potential tumor marker. Materials and Methods: A cross-sectional case-control study was conducted in the departments of Biochemistry, Pathology and Surgical Oncology of a tertiary care hospital at Hyderabad, India, from June to December 2019. Based on an alpha error of 0.05, a power of 0.8, standard deviations of groups 1 and 2 of 26.19 and 2.19 respectively, a difference of means of 14.89, a ratio of sample sizes in group 1 to group 2 of 3.27, and a p value <0.05 from a previous study (Gunduz et al., 2016), the calculated sample size was 27 cases and 09 controls (MedCalc Software, 2019). Female patients with a newly diagnosed breast lump were recruited and grouped into malignant (n=30) and benign (n=30) groups after confirmation with biopsy. Thirty age-matched controls were selected from healthy women volunteers. Patients with a previous history of breast or other cancers, including those of the endometrium, ovary, etc., were not included in the study. 
The study was approved by the Institutional Ethical Committee (EC/NIMS/1990/2017). Informed consent was obtained from all the participants. Samples from the cases were collected preoperatively. All the breast lump cases had undergone chest X-ray, bilateral mammography, FNAC and core needle biopsy. Serum CA15-3, CEA and HE4 levels were measured on a Roche Cobas e411 by electro-chemiluminescence immunoassay (diagnostics.roche.com). Statistical methods: Statistical analysis was performed using MedCalc® Statistical Software version 19.6.1 (MedCalc Software Ltd, 2019). Normality of distribution was assessed by the Shapiro-Wilk test. The Kruskal-Wallis test with the post hoc Dunn’s multiple comparison method was used to determine the statistical significance across the three groups (Malignant, Benign and Control). Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic utility of HE4, CA15-3 and CEA as estimated by the area under the curve (AUC), sensitivity, specificity, positive predictive value and negative predictive value. P < 0.05 was considered statistically significant. Results: Out of the 90 women included in this study, 63 were premenopausal (30 in the benign, 13 in the malignant and 20 in the control group) and 27 were postmenopausal (17 in the malignant and 10 in the control group). In the benign group, the most represented histological type was fibroadenoma (76.67%), followed by benign phyllodes tumour (13.33%) and intraductal papilloma (10%). In the malignant group, the most common was invasive ductal carcinoma (IDC) (66.67%), followed by IDC with extensive ductal carcinoma in situ (DCIS) (13.33%). Median age of cancer patients was 54 years (31 to 72 years). Median age in the benign group was 34 years (18 to 45 years). The median age in malignant cases was significantly higher than in the benign group (Table 1). The most common stage at which breast cancer patients presented was Stage IIIB. 
A significant difference was noted in HE-4 (pmol/L) levels [median (IQR)] among the control 52.3 (50.6-63.3), malignant 62.4 (52.6-73.7) and benign 49.34 (39.8-57.4) groups, with a p value of 0.0009 (Table 1). Comparison of median levels revealed a significant difference in HE4 levels of the malignant group when compared with controls (p=0.0465) and with the benign group (p=0.0004). However, there was no significant difference in HE4 between the benign and control groups. Similarly, significant differences in CEA levels (ng/ml) were also noted between the malignant [2.067 (1.5-3.6)] and control [0.21 (0.21-0.23)] groups (p<0.0001), the malignant and benign [1.37 (0.8-1.8)] groups (p=0.0012), and the benign and control groups (p<0.0001), whereas CA15-3 did not show any significant difference among the three groups (Table 1). To assess the diagnostic performance of the various biomarkers, ROC analysis was done. HE4 had a sensitivity of 73.3% and specificity of 65.3% with an AUC of 0.725 at a cutoff of >54.5 pmol/L for the diagnosis of breast cancer. CEA had a sensitivity of 96.7% and specificity of 62.7% with an AUC of 0.857 at a cut-off of 0.99 ng/ml. CA15-3 showed a sensitivity of 56.7% and specificity of 74.5% with an AUC of 0.644 at a cut-off level of 21.24 U/ml (Table 2 and Figure 1). The combination of HE4, CEA and CA15-3 showed a sensitivity and specificity of 100% and 30.6% respectively, with an AUC of 0.653 (Figure 2). Serum HE4 did not show any significant correlation with either CEA (r=0.056, p=0.63) or CA15-3 (r=0.036, p=0.75). 
Biomarkers in Control and Breast Tumor Groups HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3; *p <0.05 is significant; “a”, significant difference in HE4 levels between malignant and control groups; “b”, significant difference in HE4 levels between malignant and benign groups; “c”, significant difference in CEA levels between benign and control groups; “d”, significant difference in CEA levels between malignant and control groups; “e”, significant difference in CEA levels between malignant and benign groups (Dunn’s post-hoc multiple comparison analysis). Diagnostic Efficacy of the Biomarkers in Breast Tumors HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3; PPV, Positive predictive value; NPV, negative predictive value; AUC, Area under curve. Comparison of AUC of HE4, CA 15-3 and CEA in Breast Cancer. AUC, Area under curve; HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3 Combined AUC of HE4, CA 15-3 and CEA in Breast Cancer Patients. AUC, Area under curve; HE-4, Human Epididymis protein-4; CEA, Carcinoembryonic antigen; CA 15-3, carbohydrate antigen 15-3 Discussion: Breast cancer is a vast group of diseases with varied clinical presentations and pathological characteristics. The treatment course, recurrence and prognosis are affected by the biological features of the tumor and the stage at diagnosis. The aim of novel biomarkers for breast cancer diagnosis is to improve the accuracy of detection and to assess the severity of the malignancy at the earliest possible stage. Serum markers like BR27.29 (CA 27.29), CA15-3, mucin-like carcinoma-associated antigen, CA 549, and CEA, with limited sensitivity and specificity, have been investigated; however, none of these markers have reached the standards required for clinical practice (Bingle et al., 2006). 
Serum HE4, also known as whey acidic protein four-disulfide core domain protein 2 (WFDC2), encoded by the WFDC2 gene, has been introduced for the routine diagnostics of ovarian cancer. HE4 was first described in normal tissues such as the epithelium of the epididymis and the bronchial epithelium in the proximal respiratory tract (Bingle et al., 2002). Limited data are available describing HE4 as a diagnostic and prognostic marker in breast cancer, especially in the Indian population. The prospective use of HE4 as a tumor marker, particularly in cancers of gynecological, pulmonary, and gastrointestinal origin, has been established by a number of studies (Hellström et al., 2003; Geng et al., 2015). The serological detection of HE4 has been shown to have increased sensitivity and specificity in the detection of ovarian cancer compared with CA 125, which is the current gold standard serum biomarker for ovarian carcinoma (Ferraro et al., 2013; Zhen et al., 2014). While all this previous literature indicates the importance of HE4 in ovarian cancer, the purpose of this study was to evaluate the clinical utility of HE4 as a potential tumor marker in breast cancer patients. Our observations have shown a significant difference in serum HE4 levels among the malignant, benign and control groups. Post-hoc analysis revealed a significant difference when the malignant group was compared to the benign group (p=0.0004) and controls (p=0.0465). No difference was seen between benign cases and controls. However, no difference was found in CA15-3 levels between any of the groups. Even though CEA levels were significantly different among all the groups, with a better AUC, CEA is known to increase in many cancers, which makes it a general cancer marker not specific to any particular cancer. 
Thus our findings are interesting in that, of the three markers studied, CA15-3 did not show any significant increase in the malignant group and CEA showed an increase in both the malignant and benign groups, whereas HE4 increased only in the malignant group but not in the benign or control groups. There appeared to be no significant correlation between HE4 and either CA 15-3 (r=0.036; p=0.75) or CEA (r=0.056; p=0.63) in our study population. The cutoff value of HE4 levels for predicting breast cancer was >54.5 pmol/l, with a sensitivity of 73.3%, specificity of 65.3%, positive predictive value of 56.4%, negative predictive value of 80% and an AUC of 0.725. These findings indicate that HE4 may be used as a predictive marker for breast carcinoma. Galgano et al., (2006) reported the mRNA and protein expression of HE4 in normal and malignant tissues. In addition, Kamei et al., (2010) found that the increased expression of HE4 in breast cancer tissues correlated with lymph node invasion and was a possible predictive factor of breast cancer recurrence. Our observations indicate that HE4 is a significant biomarker associated with malignant breast cancer. To the best of our knowledge, the serum levels of HE4 in breast cancer patients, and their diagnostic and prognostic potential, have not been investigated in general, and in the Indian population specifically. In the current study, the serum levels of HE4 in patients diagnosed with breast cancer were assessed prior to any form of treatment and compared with those in healthy individuals and benign breast lump cases. The serum levels of HE4 were significantly increased in patients with breast malignancy compared with those in benign cases and healthy controls. The sensitivity and specificity of serum HE4 were reasonable in distinguishing breast cancer patients from benign cases and healthy controls. These findings indicate that HE4 may be used as a predictive marker for breast carcinoma. 
In the present study, differences in serum HE4 levels based on menopausal status, stage of cancer and hormone receptor status were not statistically significant, similar to the findings of Kamei et al., (2010) and Gunduz et al., (2016). Multivariate analysis did not show any significant positive correlation of HE4 serum levels with histological grade and clinical stage in breast cancer patients. In conclusion, the significant elevation of HE4, an ovarian cancer marker, in malignant breast tumor patients and its non-elevation in benign breast tumor patients make it an interesting and important biomarker in the evaluation of breast tumors, in addition to the current markers. Further exploration of this marker in other cancers will help to bring out its specificity more clearly. Author Contribution Statement: Study conception and design: Dr. KSS Sai Baba, Dr. Noorjahan; data collection: Dr. M.A. Rehman, Dr. Pradeep and Dr. Maira; analysis and interpretation of results: Dr. Pradeep, Dr. Maira, Dr. Shantveer and Dr. GSN Raju; draft manuscript preparation: Dr. KSS Sai Baba, Dr. Noorjahan, Dr. M.A. Rehman. All authors reviewed the results and approved the final version of the manuscript.
Background: The lack of sensitivity and specificity of existing diagnostic markers like carbohydrate antigen 15-3 (CA15-3) and carcinoembryonic antigen (CEA) in breast cancer stimulates the search for new biomarkers to improve diagnostic sensitivity, especially in differentiating benign and malignant breast tumors. Expression of human epididymal protein 4 (HE4) has been demonstrated in ductal carcinoma of the breast tissue. We therefore evaluated serum HE4 levels as a diagnostic marker in breast cancer patients and comparatively assessed serum HE4, CEA and CA15-3 in both benign and malignant breast tumor patients. Methods: A total of 90 female subjects were included in the study. We selected 30 breast cancer cases (Malignant group) and 30 benign breast lump cases (Benign group) based on histopathology reports. The other 30 were age-matched, apparently healthy controls (Control group). HE4, CEA and CA15-3 were analysed in serum samples of all subjects by the electrochemiluminescence immunoassay method. Results: A significant difference in the median (IQR) of HE4 (pmol/l) was identified among the malignant, benign and control groups {62.4 (52.6-73.7) vs 49.3 (39.8-57.4) vs 52.3 (50.6-63.3), P=0.0009} respectively. The cutoff value for prediction of breast cancer was determined at >54.5 pmol/l for HE4, with a sensitivity of 73.3% and specificity of 65.3%, whereas the cutoff value of CA 15-3 was >21.24 U/ml, with a sensitivity of 56.7% and specificity of 74.5%. For CEA, at a cutoff value of >0.99 ng/ml, the sensitivity and specificity were 96.7% and 62.7% respectively. AUCs for HE4, CA15-3 and CEA were 0.725, 0.644 and 0.857 respectively. Conclusions: Our study demonstrated that serum levels of HE4 were significantly higher in the malignant group compared to the benign and control groups. There was no significant difference in HE4 levels between the benign and control groups. 
These results indicate that HE4 appears to be a useful and highly specific biomarker for breast cancer that can differentiate between malignant and benign tumors.
null
null
2,718
392
[ 73 ]
5
[ "he4", "breast", "cancer", "benign", "breast cancer", "malignant", "significant", "levels", "cea", "groups" ]
[ "novel biomarkers breast", "potential breast cancer", "factor breast cancer", "breast cancer significant", "predicting breast cancer" ]
null
null
null
[CONTENT] HE4 | CA 15-3 | CEA | breast tumors | breast cancer [SUMMARY]
null
[CONTENT] HE4 | CA 15-3 | CEA | breast tumors | breast cancer [SUMMARY]
null
[CONTENT] HE4 | CA 15-3 | CEA | breast tumors | breast cancer [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Biomarkers, Tumor | Breast Neoplasms | Case-Control Studies | Cross-Sectional Studies | Diagnosis, Differential | Female | Follow-Up Studies | Humans | India | Middle Aged | Neoplasms | Prognosis | ROC Curve | WAP Four-Disulfide Core Domain Protein 2 | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Biomarkers, Tumor | Breast Neoplasms | Case-Control Studies | Cross-Sectional Studies | Diagnosis, Differential | Female | Follow-Up Studies | Humans | India | Middle Aged | Neoplasms | Prognosis | ROC Curve | WAP Four-Disulfide Core Domain Protein 2 | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Biomarkers, Tumor | Breast Neoplasms | Case-Control Studies | Cross-Sectional Studies | Diagnosis, Differential | Female | Follow-Up Studies | Humans | India | Middle Aged | Neoplasms | Prognosis | ROC Curve | WAP Four-Disulfide Core Domain Protein 2 | Young Adult [SUMMARY]
null
[CONTENT] novel biomarkers breast | potential breast cancer | factor breast cancer | breast cancer significant | predicting breast cancer [SUMMARY]
null
[CONTENT] novel biomarkers breast | potential breast cancer | factor breast cancer | breast cancer significant | predicting breast cancer [SUMMARY]
null
[CONTENT] novel biomarkers breast | potential breast cancer | factor breast cancer | breast cancer significant | predicting breast cancer [SUMMARY]
null
[CONTENT] he4 | breast | cancer | benign | breast cancer | malignant | significant | levels | cea | groups [SUMMARY]
null
[CONTENT] he4 | breast | cancer | benign | breast cancer | malignant | significant | levels | cea | groups [SUMMARY]
null
[CONTENT] he4 | breast | cancer | benign | breast cancer | malignant | significant | levels | cea | groups [SUMMARY]
null
[CONTENT] cancer | breast | breast cancer | he4 | women | 2015 | expression | 2006 | human | marker [SUMMARY]
null
[CONTENT] cea | groups | benign | significant difference | significant | malignant | 15 | antigen 15 | difference | auc [SUMMARY]
null
[CONTENT] dr | cancer | he4 | breast | breast cancer | benign | malignant | cea | levels | groups [SUMMARY]
null
[CONTENT] Carcinoembryonic | CEA ||| 4 ||| CEA | 3 [SUMMARY]
null
[CONTENT] IQR | 62.4(52.6-73.7 | 49.3(39.8 | 52.3(50.6 ||| 54.5 | 73.3% | 65.3% | CA 15-3 | 21.24 | 56.7% | 74.5% ||| CEA | 0.99 | 96.7 % | 62.7% ||| AUC | 3 | CEA | 0.725 | 0.644 | 0.857 [SUMMARY]
null
[CONTENT] Carcinoembryonic | CEA ||| 4 ||| CEA | 3 ||| ||| 30 | 30 ||| 30 | Control group ||| CEA | 3 ||| IQR | 62.4(52.6-73.7 | 49.3(39.8 | 52.3(50.6 ||| 54.5 | 73.3% | 65.3% | CA 15-3 | 21.24 | 56.7% | 74.5% ||| CEA | 0.99 | 96.7 % | 62.7% ||| AUC | 3 | CEA | 0.725 | 0.644 | 0.857 ||| ||| ||| [SUMMARY]
null
Impact of Clostridium difficile infection among pneumonia and urinary tract infection hospitalizations: an analysis of the Nationwide Inpatient Sample.
26126606
Clostridium difficile infection (CDI) remains one of the major hospital acquired infections in the nation, often attributable to increased antibiotic use. Little research, however, exists on the prevalence and impact of CDI on patient and hospital outcomes among populations requiring such treatment. As such, the goal of this study was to examine the prevalence, risk factors, and impact of CDI among pneumonia and urinary tract infection (UTI) hospitalizations.
BACKGROUND
The Nationwide Inpatient Sample (2009-2011), reflecting a 20% stratified sample of community hospitals in the United States, was used. A total of 593,038 pneumonia and 255,770 UTI discharges were included. Survey-weighted multivariable regression analyses were conducted to assess the predictors and impact of CDI among pneumonia and UTI discharges.
METHODS
A significantly higher prevalence of CDI was present among men with UTI (13.3 per 1,000) as compared to women (11.3 per 1,000). CDI was associated with higher in-hospital mortality among discharges for pneumonia (adjusted odds ratio [aOR] for men = 3.2, women aOR = 2.8) and UTI (aOR for men = 4.1, women aOR = 3.4). Length of stay among pneumonia and UTI discharges was also doubled in the presence of CDI. In addition, CDI increased the total charges by at least 75% and 55% among pneumonia and UTI discharges, respectively. Patient and hospital characteristics associated with CDI included being 65 years or older, a Charlson-Deyo comorbidity index of 2 or more, Medicare as the primary payer, and discharge from urban hospitals, among both pneumonia and UTI discharges.
RESULTS
CDI occurs frequently in hospitalizations among those discharged from hospital for pneumonia and UTI, and is associated with increased in-hospital mortality and health resource utilization. Interventions to mitigate the burden of CDI in these high-risk populations are urgently needed.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Clostridioides difficile", "Clostridium Infections", "Coinfection", "Comorbidity", "Cross Infection", "Databases, Factual", "Enterocolitis, Pseudomembranous", "Female", "Hospital Mortality", "Hospitalization", "Hospitals, Community", "Humans", "Inpatients", "Male", "Middle Aged", "Odds Ratio", "Pneumonia", "Prevalence", "Regression Analysis", "Risk Factors", "United States", "Urinary Tract Infections", "Young Adult" ]
4487835
Background
Clostridium difficile infection (CDI) is a leading cause of hospital-acquired infection (HAI) [1-3]. In many areas of the United States, CDI has surpassed methicillin-resistant Staphylococcus aureus as the most common type of HAI [1], with approximately 333,000 initial and 145,000 recurrent hospital-onset cases in the nation [4]. Certain patient populations have a disproportionately higher risk for CDI due to host factors, frequent antibiotic use, or both. These include older adults, patients using proton pump inhibitors [5-7] or antibiotics [8,9], those with inflammatory bowel disease [10-12] or end-stage renal failure, and recipients of solid organ transplants [13,14]. In a study addressing the burden of CDI among patients with inflammatory bowel disease, Ananthakrishnan et al. [12] demonstrated four times higher mortality and a three-day longer hospital stay in the presence of CDI. Similarly, among solid organ transplant patients, presence of CDI significantly increased in-hospital mortality, length of stay, and charges, in addition to organ complications [14]. Despite this recognized burden of CDI, limited research exists on the prevalence and impact of the infection among the most common conditions that require antimicrobial treatment, with no study to date evaluating such impact among pneumonia or urinary tract infection (UTI) patients. Some recent empirical evidence has noted the co-occurrence of pneumonia and UTI with CDI in the United States [15], putatively due to the use of antimicrobial treatment, though none has evaluated the impact of such co-occurrences on patient and hospital outcomes. Misdiagnosis of pneumonia and inappropriate use of antimicrobial therapy has also been associated with a CDI outbreak [16]. Given the burden of CDI nationally and its increasing prevalence attributed at least in part to antibiotic use, understanding the impact of CDI among patients with pneumonia or UTI would be valuable for devising potential preventive strategies.
We undertook analyses of an existing large dataset from a nationally representative survey to assess [1] the prevalence and factors associated with CDI among pneumonia and UTI and [2] the impact of CDI on in-hospital mortality and health resource utilization (length of stay [LOS] and total charges).
null
null
Results
A total of 593,038 pneumonia and 255,770 UTI discharges were included in this study. Of them, 6,427 cases of secondary CDI among pneumonia patients and 3,037 secondary CDI cases among UTI patients were noted, representing 10.8 secondary CDI cases per 1,000 pneumonia discharges and 11.1 secondary CDI cases per 1,000 UTI discharges. Gender-specific analyses found a total of 2,996 and 1,000 cases of secondary CDI among men with pneumonia and UTI, respectively. Among women, 3,431 cases of secondary CDI were noted among those hospitalized for pneumonia and an additional 2,037 cases among those with primary UTI. Table 1 summarizes the prevalence of secondary CDI and patient and hospital characteristics among primary pneumonia and UTI discharges, by gender. While rates of CDI among those with pneumonia did not differ between genders, a significant difference was noted for UTI patients, with men reporting 13.3 cases of secondary CDI per 1,000 compared to 11.3 cases per 1,000 for women (P < 0.001). As further noted in Table 1, several patient and hospital characteristics differed significantly between men and women, and as a result all such variables were included in final model building for regression analyses.

Table 1. Prevalence of CDI, patient and hospital characteristics among primary pneumonia and UTI discharges, NIS 2009-2011

| | Pneumonia, men | Pneumonia, women | P value | UTI, men | UTI, women | P value |
|---|---|---|---|---|---|---|
| n | 279,072 | 313,966 | | 75,600 | 180,170 | |
| N | 462,171 | 519,849 | | 125,338 | 298,156 | |
| Prevalence of secondary CDI, cases per 1,000 | 10.7 | 10.9 | 0.52 | 13.3 | 11.3 | <0.001 |
| Age, % | | | <0.001 | | | <0.001 |
| 18-34 years old | 5.7 | 5.0 | | 3.2 | 3.8 | |
| 35-49 years old | 10.6 | 10.4 | | 6.3 | 5.4 | |
| 50-64 years old | 22.4 | 21.2 | | 15.7 | 11.8 | |
| 65 years old or more | 61.3 | 63.3 | | 74.9 | 78.9 | |
| Race/ethnicity, % | | | <0.001 | | | <0.001 |
| White | 75.4 | 75.5 | | 71.8 | 74.9 | |
| Black | 11.5 | 12.0 | | 14.9 | 12.1 | |
| Hispanic | 7.8 | 7.5 | | 8.7 | 8.3 | |
| Asian or Pacific Islander | 2.0 | 1.8 | | 1.5 | 1.7 | |
| Native American | 0.8 | 0.8 | | 0.6 | 0.6 | |
| Other | 2.6 | 2.4 | | 2.6 | 2.3 | |
| Charlson-Deyo Index, % | | | <0.001 | | | <0.001 |
| 0 | 20.8 | 21.3 | | 28.0 | 33.0 | |
| 1 | 26.7 | 31.8 | | 23.5 | 27.3 | |
| 2 or more | 52.5 | 46.8 | | 48.5 | 39.8 | |
| Neighborhood income, % | | | <0.001 | | | 0.009 |
| $1 - $38,999 | 32.0 | 32.9 | | 30.1 | 30.4 | |
| $39,000 - $47,999 | 27.2 | 27.0 | | 24.9 | 25.4 | |
| $48,000 - $62,999 | 23.0 | 22.8 | | 24.0 | 23.9 | |
| $63,000 or more | 17.7 | 17.3 | | 21.0 | 20.3 | |
| Payer type, % | | | <0.001 | | | <0.001 |
| Private including HMO | 19.1 | 17.9 | | 12.2 | 10.4 | |
| Medicare | 64.5 | 66.7 | | 77.2 | 79.6 | |
| Medicaid | 8.4 | 9.8 | | 6.8 | 6.7 | |
| Self-pay | 4.8 | 3.6 | | 1.8 | 2.1 | |
| No charge | 0.5 | 0.4 | | 0.2 | 0.2 | |
| Other | 2.6 | 1.7 | | 1.7 | 1.0 | |
| Bed size, % | | | <0.001 | | | 0.06 |
| Small | 19.0 | 19.7 | | 16.4 | 16.8 | |
| Medium | 24.9 | 25.2 | | 24.8 | 25.3 | |
| Large | 56.1 | 55.2 | | 58.8 | 57.9 | |
| Hospital control, % | | | 0.001 | | | <0.001 |
| Private, investor-owned | 14.4 | 14.7 | | 15.3 | 16.2 | |
| Private, non-profit | 71.0 | 71.3 | | 70.8 | 71.3 | |
| Government, nonfederal | 14.6 | 14.0 | | 13.9 | 12.6 | |
| Setting, % | | | <0.001 | | | <0.001 |
| Rural | 22.1 | 22.7 | | 16.6 | 17.8 | |
| Urban non-teaching | 44.0 | 44.8 | | 44.9 | 46.3 | |
| Urban teaching | 33.9 | 32.5 | | 38.4 | 35.8 | |
| Region, % | | | <0.001 | | | <0.001 |
| Northeast | 18.8 | 18.0 | | 22.1 | 19.9 | |
| Midwest | 25.3 | 25.6 | | 22.5 | 22.7 | |
| South | 38.9 | 40.6 | | 40.4 | 42.7 | |
| West | 16.9 | 15.9 | | 15.1 | 14.7 | |
| Year, % | | | 0.24 | | | 0.005 |
| 2009 | 34.1 | 34.1 | | 32.5 | 32.0 | |
| 2010 | 32.3 | 32.1 | | 34.0 | 33.3 | |
| 2011 | 33.5 | 33.8 | | 33.5 | 34.7 | |

CDI = Clostridium difficile infection, UTI = urinary tract infection, n = total sample size, N = weighted average annual population estimate

Table 2 displays the factors significantly associated with secondary CDI among primary pneumonia and UTI discharges. Among hospitalizations for pneumonia, increased odds of CDI were associated with being 65 years or older (adjusted odds ratio [aOR] for men = 1.7; aOR for women = 1.9), having Medicare as the primary payer (aOR for men and women = 1.3), and increasing Charlson-Deyo index (aOR men = 1.5; aOR women = 1.8).
CDI was also significantly associated with high income (aOR = 1.3) and Medicaid (aOR = 1.4) among men hospitalized for pneumonia. Furthermore, for both men and women, hospitalization at urban non-teaching facilities was associated with approximately 60% increased odds of CDI, while nearly double the odds were noted for those hospitalized at urban teaching centers. On the other hand, lower odds were noted among men admitted to government nonfederal hospitals, as compared to investor-owned facilities (aOR = 0.7).

Table 2. Determinants of CDI among pneumonia and UTI discharges, NIS 2009-2011; values are aOR (95% CI)

| | Pneumonia, men | Pneumonia, women | UTI, men | UTI, women |
|---|---|---|---|---|
| Age (ref. 18-34 years old) | | | | |
| 35-49 years old | 0.98 (0.74, 1.31) | 0.84 (0.61, 1.14) | 0.82 (0.49, 1.38) | 1.49 (0.96, 2.33) |
| 50-64 years old | 1.27 (0.97, 1.65) | 1.34 (1.00, 1.78) | 0.84 (0.53, 1.33) | 1.92 (1.19, 3.11) |
| 65 years old or more | 1.70 (1.28, 2.25)* | 1.85 (1.40, 2.45)* | 0.94 (0.61, 1.44) | 2.41 (1.47, 3.95)* |
| Race/ethnicity (ref. White) | | | | |
| Black | 0.92 (0.79, 1.07) | 1.02 (0.89, 1.16) | 1.08 (0.87, 1.34) | 1.01 (0.86, 1.18) |
| Hispanic | 1.02 (0.86, 1.21) | 0.84 (0.71, 1.00) | 0.85 (0.63, 1.15) | 0.88 (0.73, 1.08) |
| Asian/Pacific Islander | 1.26 (0.97, 1.62) | 0.91 (0.69, 1.20) | 0.96 (0.55, 1.67) | 0.88 (0.60, 1.28) |
| Native American | 1.50 (0.86, 2.62) | 0.84 (0.50, 1.43) | 0.57 (0.14, 2.31) | 0.66 (0.28, 1.56) |
| Other | 1.14 (0.87, 1.50) | 1.14 (0.87, 1.47) | 1.24 (0.82, 1.87) | 0.84 (0.58, 1.21) |
| Charlson-Deyo Index (ref. 0) | | | | |
| 1 | 0.98 (0.86, 1.13) | 1.16 (1.02, 1.32) | 0.88 (0.72, 1.09) | 1.18 (1.03, 1.36) |
| 2 or more | 1.54 (1.36, 1.75)* | 1.81 (1.59, 2.05)* | 1.31 (1.10, 1.55)* | 1.72 (1.53, 1.94)* |
| Neighborhood income (ref. $1 - $38,999) | | | | |
| $39,000 - $47,999 | 1.08 (0.96, 1.23) | 1.07 (0.95, 1.21) | 1.09 (0.86, 1.37) | 0.89 (0.77, 1.03) |
| $48,000 - $62,999 | 1.12 (0.98, 1.27) | 1.10 (0.97, 1.25) | 1.29 (1.04, 1.59) | 1.17 (1.01, 1.36) |
| $63,000 or more | 1.30 (1.13, 1.49)* | 1.23 (1.07, 1.43) | 1.52 (1.21, 1.90)* | 1.17 (0.98, 1.39) |
| Payer type (ref. Private including HMO) | | | | |
| Medicare | 1.32 (1.15, 1.51)* | 1.26 (1.09, 1.45)* | 1.21 (0.95, 1.55) | 0.90 (0.74, 1.10) |
| Medicaid | 1.38 (1.14, 1.68)* | 1.19 (0.98, 1.45) | 1.04 (0.72, 1.50) | 1.00 (0.78, 1.28) |
| Self-pay | 0.69 (0.49, 0.96) | 0.87 (0.61, 1.23) | 0.25 (0.08, 0.78) | 0.53 (0.30, 0.94) |
| No charge | 1.28 (0.66, 2.50) | 0.54 (0.17, 1.72) | 0.74 (0.13, 4.42) | 0.38 (0.05, 2.75) |
| Other | 1.10 (0.81, 1.49) | 1.08 (0.70, 1.67) | 0.86 (0.44, 1.67) | 0.51 (0.26, 1.03) |
| Bed size (ref. Small) | | | | |
| Medium | 0.92 (0.78, 1.09) | 1.03 (0.88, 1.20) | 1.02 (0.80, 1.30) | 0.88 (0.74, 1.04) |
| Large | 1.09 (0.94, 1.27) | 1.17 (1.01, 1.35) | 1.19 (0.96, 1.48) | 0.97 (0.83, 1.14) |
| Hospital control (ref. Private, investor-owned) | | | | |
| Private non-profit | 0.79 (0.68, 0.92) | 0.86 (0.74, 1.00) | 1.01 (0.79, 1.28) | 1.15 (0.96, 1.37) |
| Government nonfederal | 0.68 (0.55, 0.84)* | 0.76 (0.63, 0.93) | 0.88 (0.63, 1.23) | 1.06 (0.84, 1.35) |
| Setting (ref. Rural) | | | | |
| Urban non-teaching | 1.63 (1.36, 1.96)* | 1.66 (1.40, 1.98)* | 1.63 (1.20, 2.20) | 1.42 (1.16, 1.73)* |
| Urban teaching | 2.05 (1.71, 2.47)* | 1.92 (1.60, 2.30)* | 1.92 (1.42, 2.61)* | 1.74 (1.42, 2.14)* |
| Region (ref. Northeast) | | | | |
| Midwest | 0.94 (0.79, 1.11) | 0.87 (0.73, 1.04) | 0.99 (0.79, 1.23) | 1.04 (0.86, 1.25) |
| South | 0.79 (0.68, 0.91) | 0.78 (0.66, 0.92) | 0.82 (0.66, 1.01) | 0.81 (0.68, 0.96) |
| West | 0.83 (0.70, 0.97) | 0.82 (0.69, 0.98) | 0.78 (0.61, 1.01) | 0.87 (0.72, 1.05) |
| Year (ref. 2009) | | | | |
| 2010 | 1.10 (0.97, 1.24) | 0.96 (0.85, 1.08) | 1.11 (0.93, 1.32) | 1.03 (0.90, 1.19) |
| 2011 | 1.06 (0.94, 1.20) | 0.95 (0.84, 1.06) | 1.06 (0.89, 1.26) | 1.01 (0.88, 1.16) |

CDI = Clostridium difficile infection, UTI = urinary tract infection. *Bonferroni-adjusted P < 0.0017

Similar trends were noted for UTI discharges. Being 65 years or older (aOR = 2.4, for women only), the highest income category (aOR = 1.5, for men only), increasing comorbidities (aOR men = 1.3; aOR women = 1.7), urban teaching status (aOR men = 1.9; aOR women = 1.7), and urban non-teaching status (aOR = 1.4, for women only) were also significantly associated with increased likelihood of secondary CDI.
Results of chi-square analyses demonstrated that in-hospital mortality was higher among pneumonia patients diagnosed with CDI, as compared to those without, for both men (13% vs. 4%, P < 0.0001) and women (11% vs. 4%, P < 0.0001). A similar effect of CDI among UTI discharges was noted with regard to in-hospital mortality among men (4% vs. 1%, P < 0.0001) and women (3% vs. 1%, P < 0.0001). Table 3 shows the results of unadjusted and adjusted regression analyses evaluating the impact of CDI on in-hospital mortality and resource utilization among pneumonia and UTI cases. After adjusting for control variables, in-hospital mortality was approximately three times higher for pneumonia cases with CDI as compared to those without. Among men hospitalized for UTI, CDI increased the likelihood of in-hospital mortality four-fold; similarly, among women with UTI, CDI was associated with three times the odds of dying in the hospital.

Table 3. Regression analyses of in-hospital mortality and health resource utilization upon secondary CDI among patients with pneumonia or urinary tract infection (a)

| | Mortality OR (95% CI), unadjusted | Mortality OR (95% CI), adjusted (b) | LOS IRR (95% CI), unadjusted | LOS IRR (95% CI), adjusted (b) | Charges % change (95% CI), unadjusted | Charges % change (95% CI), adjusted (b) |
|---|---|---|---|---|---|---|
| Primary pneumonia with secondary CDI (c) | | | | | | |
| Women | 3.42 (3.03, 3.86)* | 2.84 (2.51, 3.21)* | 2.40 (2.30, 2.50)* | 2.29 (2.19, 2.39)* | 86.14 (80.36, 91.92)* | 74.9 (70.66, 79.20)* |
| Men | 3.70 (3.28, 4.17)* | 3.15 (2.79, 3.55)* | 2.54 (2.43, 2.66)* | 2.40 (2.30, 2.52)* | 93.65 (87.93, 99.38)* | 80.00 (75.56, 84.48)* |
| Primary UTI with secondary CDI (d) | | | | | | |
| Women | 3.69 (2.82, 4.84)* | 3.39 (2.58, 4.44)* | 2.19 (2.06, 2.32)* | 2.11 (1.99, 2.24)* | 62.93 (57.08, 68.78)* | 56.92 (52.41, 61.44)* |
| Men | 4.05 (2.89, 5.67)* | 4.13 (2.95, 5.78)* | 2.14 (1.99, 2.29)* | 2.13 (1.99, 2.29)* | 67.37 (59.23, 75.50)* | 59.34 (52.30, 66.38)* |

*Bonferroni-adjusted P < 0.0017. CDI = Clostridium difficile infection, UTI = urinary tract infection, OR = odds ratio, CI = confidence interval, IRR = incidence rate ratio. (a) Binary logistic regression for in-hospital mortality, negative binomial regression for length of stay, linear regression for total charges; all procedures were survey weighted. (b) Model adjusted for patient characteristics, hospital characteristics, and year. (c) Reference = primary pneumonia without secondary CDI. (d) Reference = primary UTI without secondary CDI.

Wilcoxon rank sum tests demonstrated that median LOS was significantly longer in pneumonia cases with CDI as compared to those without, for both men and women (9 days vs. 4 days, P < 0.0001). Similar trends were noted for UTI with or without CDI (7 days vs. 3 days, P < 0.0001). Adjusted results of negative binomial regression analyses showed that CDI was associated with an approximately doubled LOS among both pneumonia and UTI discharges (Table 3). Among men with pneumonia, higher median total charges ($104,131 vs. $41,157, P < 0.0001) were noted in the presence of CDI, with a similar trend reported among women ($96,446 vs. $40,700, P < 0.0001). Median total charges were also substantially higher among UTI cases with CDI for both men ($63,842 vs. $34,182, P < 0.0001) and women ($61,577 vs. $33,063, P < 0.0001). Results from multiple linear regression analyses showed that secondary CDI was significantly associated with an increased percent change in total charges for both pneumonia and UTI cases.
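The unadjusted mortality odds ratios reported in Table 3 are, in essence, the standard 2×2-table calculation applied to weighted counts. A stdlib-only sketch of that calculation, using hypothetical cell counts chosen to echo the reported ~13% vs. ~4% mortality among men with pneumonia (these are NOT the actual survey-weighted NIS counts):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 table:
    a = deaths with CDI, b = survivors with CDI,
    c = deaths without CDI, d = survivors without CDI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts mirroring ~13% vs. ~4% in-hospital mortality:
or_, lo, hi = odds_ratio(390, 2610, 11200, 268800)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With these illustrative counts the point estimate lands near the unadjusted OR of 3.70 reported for men with pneumonia; the fully adjusted estimates in the paper additionally control for patient and hospital covariates.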
For example, presence of secondary CDI increased total charges by 80% and 75% among men and women with pneumonia, respectively. Similarly, CDI was associated with a 59% and 57% increase in total charges among men and women with UTI, respectively (Table 3). After conducting a sensitivity analysis among ages 65 and older, a similar trend persisted for in-hospital mortality, LOS, and total charges (Table 4). Cumulatively, presence of secondary CDI among pneumonia or UTI discharges was substantially associated with increased in-hospital mortality and health resource utilization.

Table 4. Regression analyses of in-hospital mortality and health resource utilization upon secondary CDI among patients ages 65+ with pneumonia or urinary tract infection (a)

| | Mortality OR (95% CI) (b) | LOS IRR (95% CI) (b) | Charges % change (95% CI) (b) |
|---|---|---|---|
| Primary pneumonia with secondary CDI (c) | | | |
| Women | 2.67 (2.34, 3.05)* | 2.18 (2.08, 2.29)* | 71.04 (66.40, 75.73)* |
| Men | 3.06 (2.69, 3.48)* | 2.25 (2.14, 2.35)* | 74.52 (69.76, 79.27)* |
| Primary UTI with secondary CDI (d) | | | |
| Women | 3.59 (2.73, 4.71)* | 2.06 (1.94, 2.18)* | 54.99 (50.20, 59.78)* |
| Men | 4.35 (3.07, 6.16)* | 2.07 (1.93, 2.23)* | 57.13 (49.17, 65.08)* |

*Bonferroni-adjusted P < 0.0017. CDI = Clostridium difficile infection, UTI = urinary tract infection, OR = odds ratio, CI = confidence interval, IRR = incidence rate ratio. (a) Binary logistic regression for in-hospital mortality, negative binomial regression for length of stay, linear regression for total charges; all procedures were survey weighted. (b) Model adjusted for patient characteristics, hospital characteristics, and year. (c) Reference = primary pneumonia without secondary CDI. (d) Reference = primary UTI without secondary CDI.
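The percent-change figures for total charges in Tables 3 and 4 come from a linear model fit on log-transformed charges; the back-transformation is (e^β − 1) × 100. A minimal sketch of that conversion (the coefficient shown is illustrative, derived here from the reported 74.9% figure rather than taken from the paper's model output):

```python
import math

def pct_change_from_log_coef(beta):
    """Percent change in charges implied by a coefficient from a
    regression on natural-log-transformed total charges."""
    return (math.exp(beta) - 1.0) * 100.0

def log_coef_from_pct_change(pct):
    """Inverse mapping: log-scale coefficient from a reported % change."""
    return math.log(1.0 + pct / 100.0)

beta = log_coef_from_pct_change(74.9)   # ~0.559 on the log scale
print(round(pct_change_from_log_coef(beta), 1))  # round-trips to 74.9
```

The same transformation explains why a CDI coefficient of roughly 0.45-0.59 on log(charges) maps to the 57-80% increases reported across the four patient groups.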
Conclusion
In our study, using the largest inpatient database in the United States, we demonstrated that men hospitalized for UTI have a significantly higher prevalence of secondary CDI than women. CDI was also associated with increased in-hospital mortality for both pneumonia and UTI patients, as well as increased LOS and total charges, further highlighting the negative impact of CDI and the imperative need for preventive measures.
[ "Data source", "Data collection and study definitions", "Statistical analyses" ]
[ "Data were extracted from the Nationwide Inpatient Sample (NIS), 2009–2011. NIS, considered the largest publicly available all-payer inpatient database in the United States, includes data from all states that participate in the Healthcare Cost and Utilization Project (HCUP), sponsored by the Agency for Healthcare Research and Quality (AHRQ). An approximate annual sample of 8 million hospitalizations from 1,000 hospitals, reflecting a 20% stratified sample of community hospitals in the nation, is included in NIS. NIS excludes short-term rehabilitation hospitals (starting 1998 data), long-term non-acute care hospitals, psychiatric hospitals, and alcoholism/chemical dependency treatment facilities. All hospitals in NIS are stratified based on five hospital characteristics: ownership/control, bed size, teaching status, urban/rural location, and geographic location. NIS data have been available yearly since 1988, and further details of the dataset are available elsewhere [17].", "Our study sample included hospital primary discharges with pneumonia or UTI in adults over the age of 17 years. The International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) was used to identify UTI (599.0) and CDI (008.45). Pneumonia was assessed using a Clinical Classification Software (CCS) code of 112, representing cases not caused by tuberculosis or sexually transmitted disease (as defined by NIS). Similar coding strategies have been utilized in previous literature for CDI [11,13,14], pneumonia [18,19], and UTI [20,21]. NIS provides a primary discharge code in addition to codes for secondary diagnoses. In this study, patients with a secondary discharge code for CDI were identified as CDI patients.\nWe further included both patient and hospital characteristics, to both control for and evaluate whether such characteristics negatively impact CDI patients. 
For example, previous research has demonstrated both patient socioeconomic factors and hospital characteristics, to be positively associated with length of stay and mortality in other patient populations [11,13,14,22,23], and thus such variables were further accounted for in our study.\nPatient characteristics included were: age (18–34 years, 35–49 years, 50–64 years, 65 years or more), gender (men, women), race/ethnicity (White, Black, Hispanic, Asian or Pacific Islander, Native American, other), primary payer type (Private including HMO, Medicare, Medicaid, Self-pay, no charge, other), neighborhood income defined as median household income quartiles by patient ZIP code ($1-$38999, $39000-$47999, $48000-$62999, $63000 or more) and Charlson-Deyo Index (0, 1, 2 or more). The Deyo modification of the Charlson comorbidity index was used, which creates a score representing co-morbidities for each discharge utilizing the ICD-9-CM coding algorithms. The 17-item index is a validated measure of comorbidity for administrative data [24-26].\nHospital characteristics included in the study were: bed size tertile categories (small, medium, large), ownership/control (private investor-own, private non-profit, government nonfederal), setting/teaching status (rural, urban non-teaching, urban teaching), and geographic location (Northeast, Midwest, South, West). In-hospital mortality was defined as those who died during hospitalization versus those who did not. LOS and total charges were used from NIS-provided variables, which were edited by AHRQ to ensure uniformity between states. 
Total charges were adjusted quarterly for inflation using the Gross Domestic Product (GDP) deflator available through the United States Department of Commerce, Bureau of Economic Analysis with 2009 USD as the reference year [27].", "SAS 9.4 (SAS Institute, Inc., Cary, NC) was used for all statistical analyses except for negative binomial regression, for which we used the STATA 12 package (Stata Corp LP, College Station, TX). Given that existing data suggests potential gender differences in CDI [28-30] all statistical analyses were stratified by gender. Due to the large number of variables and in turn multiple testing, a family-wise correction using the Bonferroni adjustment was conducted to reduce type I error rate. As a result, P < 0.0017 was set as the level of significance.\nTo assess CDI prevalence (for each patient population), in addition to patient and hospital characteristic differences between each gender among pneumonia and UTI cases, chi-square tests using design-based F values were used. The prevalence of secondary CDI was noted as cases per 1,000 discharges for pneumonia and UTI groups, by gender. Next, independent survey-weighted multivariable logistic regression analyses were performed to identify patient and hospital characteristics associated with prevalence of secondary CDI in both primary pneumonia and UTI discharges.\nIn order to identify the impact of secondary CDI on in-hospital mortality among patients hospitalized for primary pneumonia or UTI, chi-square tests were conducted followed by survey-weighted logistic regression analyses. To assess the impact of secondary CDI on resource utilization, Wilcoxon rank sum test was used, followed by survey-weighted negative binomial regression and survey-weighted linear regression for LOS and total charges, respectively. For all adjusted models in each regression analyses, control variables of survey year, patient, and hospital characteristics were included. 
Since the distribution of total charges was non-normal and skewed to the right, this variable was natural log transformed for linear regression analyses. In addition, given that descriptive analyses demonstrated a significantly higher percent of our population as 65 or older and the elderly are more likely to have negative health impacts [31-33], a sensitivity analysis was performed in the aforementioned adjusted models among patients aged 65 and older. Model building for all analyses included assessment of assumptions and relevant interaction terms (sociodemographic characteristics with hospital characteristics), with significance established at P < .05. The study was submitted to the University of Wisconsin-Madison Institutional Review Board and was considered exempt from review." ]
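Two of the arithmetic steps described in the statistical analyses are easy to make concrete: the Bonferroni-adjusted threshold (the paper's P < 0.0017 is consistent with a family-wise alpha of 0.05 spread over 29 comparisons, an assumed family size) and the prevalence expressed as cases per 1,000 discharges. A minimal stdlib sketch, not the authors' code:

```python
# Bonferroni-adjusted significance threshold: alpha divided by the
# number of tests in the family (29 is an assumption consistent with
# the reported P < 0.0017 cutoff).
alpha, n_tests = 0.05, 29
threshold = alpha / n_tests
print(round(threshold, 4))  # 0.0017

# Prevalence per 1,000 discharges, as reported in the Results
# (6,427 secondary CDI cases among 593,038 pneumonia discharges):
per_1000 = 6427 / 593038 * 1000
print(round(per_1000, 1))  # 10.8
```

The same per-1,000 arithmetic reproduces the 11.1 figure for UTI (3,037 / 255,770 × 1,000).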
[ null, null, null ]
[ "Background", "Methods", "Data source", "Data collection and study definitions", "Statistical analyses", "Results", "Discussion", "Conclusion" ]
[ "Clostridium difficile infection (CDI) is a leading cause of hospital-acquired infection (HAI) [1-3]. In many areas of the United States, CDI has surpassed methicillin-resistant Staphylococcus aureus as the most common type of HAI [1] with approximately 333,000 initial and 145,000 recurrent hospital-onset cases in the nation [4]. Certain patient populations have a disproportionately higher risk for CDI due to either host factors, frequent antibiotic use or both. These include older adults, patients using proton pump inhibitors [5-7] or antibiotics [8,9], those with inflammatory bowel disease [10-12], end-stage renal failure, or recipients of solid organ transplants [13,14].\nIn a study addressing the burden of CDI among patients with inflammatory bowel disease, Ananthakrishnan et al. [12] demonstrated four times higher mortality and three days longer hospital stay with presence of CDI. Similarly, among solid organ transplant patients, presence of CDI significantly increased the in-hospital mortality, length of stay, and charges, in addition to organ complications [14]. Despite such recognized burden of CDI, limited research exists on the prevalence and impact of infection among most common conditions that require antimicrobial treatment, with no study to date evaluating such impact among pneumonia or urinary tract infection (UTI) patients. Some recent empirical evidence has noted the co-occurrence of pneumonia and UTI with CDI in the United States [15] putatively due to the use of antimicrobial treatment; though none have evaluated the impact of such co-occurrences on patient and hospital outcomes. Misdiagnosis of pneumonia and inappropriate use of antimicrobial therapy was associated with a CDI outbreak [16]. Given the burden of CDI nationally and increasing prevalence attributed at least in part to antibiotic use, understanding the impact of CDI among patients with pneumonia or UTI would be valuable to devise potential preventive strategies. 
We undertook analyses of an existing large dataset from a nationally representative survey to assess [1] the prevalence and factors associated with CDI among pneumonia and UTI and [2] the impact of CDI on in-hospital mortality and health resource utilization (length of stay [LOS] and total charges).", " Data source Data were extracted from the Nationwide Inpatient Sample (NIS), 2009–2011. NIS, considered the largest publicly available all-payer inpatient database in the United States, includes data from all states that participate in the Healthcare Cost and Utilization Project (HCUP), sponsored by the Agency for Healthcare Research and Quality (AHRQ). An approximate annual sample of 8 million hospitalizations from 1,000 hospitals, reflecting a 20% stratified sample of community hospitals in the nation, is included in NIS. NIS excludes short-term rehabilitation hospitals (starting 1998 data), long-term non-acute care hospitals, psychiatric hospitals, and alcoholism/chemical dependency treatment facilities. All hospitals in NIS are stratified based on five hospital characteristics: ownership/control, bed size, teaching status, urban/rural location, and geographic location. NIS data have been available yearly since 1988, and further details of the dataset are available elsewhere [17].\n Data collection and study definitions Our study sample included hospital primary discharges with pneumonia or UTI in adults over the age of 17 years. The International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) was used to identify UTI (599.0) and CDI (008.45). Pneumonia was assessed using a Clinical Classification Software (CCS) code of 112, representing cases not caused by tuberculosis or sexually transmitted disease (as defined by NIS). Similar coding strategies have been utilized in previous literature for CDI [11,13,14], pneumonia [18,19], and UTI [20,21]. NIS provides a primary discharge code in addition to codes for secondary diagnoses. In this study, patients with a secondary discharge code for CDI were identified as CDI patients.\nWe further included both patient and hospital characteristics, to both control for and evaluate whether such characteristics negatively impact CDI patients. 
For example, previous research has demonstrated both patient socioeconomic factors and hospital characteristics, to be positively associated with length of stay and mortality in other patient populations [11,13,14,22,23], and thus such variables were further accounted for in our study.\nPatient characteristics included were: age (18–34 years, 35–49 years, 50–64 years, 65 years or more), gender (men, women), race/ethnicity (White, Black, Hispanic, Asian or Pacific Islander, Native American, other), primary payer type (Private including HMO, Medicare, Medicaid, Self-pay, no charge, other), neighborhood income defined as median household income quartiles by patient ZIP code ($1-$38999, $39000-$47999, $48000-$62999, $63000 or more) and Charlson-Deyo Index (0, 1, 2 or more). The Deyo modification of the Charlson comorbidity index was used, which creates a score representing co-morbidities for each discharge utilizing the ICD-9-CM coding algorithms. The 17-item index is a validated measure of comorbidity for administrative data [24-26].\nHospital characteristics included in the study were: bed size tertile categories (small, medium, large), ownership/control (private investor-own, private non-profit, government nonfederal), setting/teaching status (rural, urban non-teaching, urban teaching), and geographic location (Northeast, Midwest, South, West). In-hospital mortality was defined as those who died during hospitalization versus those who did not. LOS and total charges were used from NIS-provided variables, which were edited by AHRQ to ensure uniformity between states. Total charges were adjusted quarterly for inflation using the Gross Domestic Product (GDP) deflator available through the United States Department of Commerce, Bureau of Economic Analysis with 2009 USD as the reference year [27].\nOur study sample included hospital primary discharges with pneumonia or UTI in adults over the age of 17 years. 
The International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) was used to identify UTI (599.0) and CDI (008.45). Pneumonia was assessed using Clinical Classification Software (CCS) code of 112, representing cases not caused by tuberculosis or sexually transmitted disease (as defined by NIS). Similar coding strategies have been utilized in previous literature for CDI [11,13,14], pneumonia [18,19], and UTI [20,21]. NIS provides a primary discharge code in addition to additional codes of secondary diagnoses. In this study, patients with secondary discharge code for CDI were identified as CDI patients.\nWe further included both patient and hospital characteristics, to both control for and evaluate if such characteristics negatively impact CDI patients. For example, previous research has demonstrated both patient socioeconomic factors and hospital characteristics, to be positively associated with length of stay and mortality in other patient populations [11,13,14,22,23], and thus such variables were further accounted for in our study.\nPatient characteristics included were: age (18–34 years, 35–49 years, 50–64 years, 65 years or more), gender (men, women), race/ethnicity (White, Black, Hispanic, Asian or Pacific Islander, Native American, other), primary payer type (Private including HMO, Medicare, Medicaid, Self-pay, no charge, other), neighborhood income defined as median household income quartiles by patient ZIP code ($1-$38999, $39000-$47999, $48000-$62999, $63000 or more) and Charlson-Deyo Index (0, 1, 2 or more). The Deyo modification of the Charlson comorbidity index was used, which creates a score representing co-morbidities for each discharge utilizing the ICD-9-CM coding algorithms. 
The 17-item index is a validated measure of comorbidity for administrative data [24-26].

Hospital characteristics included in the study were: bed size tertile (small, medium, large), ownership/control (private investor-owned, private non-profit, government nonfederal), setting/teaching status (rural, urban non-teaching, urban teaching), and geographic region (Northeast, Midwest, South, West). In-hospital mortality was defined as death during hospitalization. Length of stay (LOS) and total charges were taken from NIS-provided variables, which are edited by AHRQ to ensure uniformity between states. Total charges were adjusted quarterly for inflation using the Gross Domestic Product (GDP) deflator available through the United States Department of Commerce, Bureau of Economic Analysis, with 2009 USD as the reference year [27].

Statistical analyses

SAS 9.4 (SAS Institute, Inc., Cary, NC) was used for all statistical analyses except negative binomial regression, for which we used STATA 12 (Stata Corp LP, College Station, TX). Given that existing data suggest potential gender differences in CDI [28-30], all statistical analyses were stratified by gender. Because of the large number of variables and the resulting multiple testing, a family-wise Bonferroni correction was applied to reduce the type I error rate; P < 0.0017 was therefore set as the level of significance.

To assess CDI prevalence for each patient population, and differences in patient and hospital characteristics between genders among pneumonia and UTI cases, chi-square tests using design-based F values were used. The prevalence of secondary CDI was reported as cases per 1,000 discharges for pneumonia and UTI groups, by gender.
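As a quick check, the significance threshold and the per-1,000 prevalence measure above reduce to simple arithmetic. The number of comparisons m is not stated in the text; m = 30 below is an assumption that happens to reproduce the reported cutoff of 0.0017:

```python
def bonferroni_alpha(alpha=0.05, m=30):
    # Family-wise Bonferroni correction: divide the nominal alpha by the
    # number of tests. m = 30 is an assumption, not stated in the text.
    return alpha / m

def prevalence_per_1000(cases, discharges):
    # Secondary CDI prevalence expressed as cases per 1,000 discharges.
    return 1000 * cases / discharges

print(round(bonferroni_alpha(), 4))                 # 0.0017
print(round(prevalence_per_1000(6427, 593038), 1))  # 10.8, the pneumonia figure in the Results
```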
Next, independent survey-weighted multivariable logistic regression analyses were performed to identify patient and hospital characteristics associated with the prevalence of secondary CDI in primary pneumonia and UTI discharges.

To identify the impact of secondary CDI on in-hospital mortality among patients hospitalized for primary pneumonia or UTI, chi-square tests were conducted, followed by survey-weighted logistic regression analyses. To assess the impact of secondary CDI on resource utilization, the Wilcoxon rank sum test was used, followed by survey-weighted negative binomial regression for LOS and survey-weighted linear regression for total charges. All adjusted models included survey year, patient characteristics, and hospital characteristics as control variables. Because the distribution of total charges was non-normal and right-skewed, this variable was natural log transformed for the linear regression analyses. In addition, given that descriptive analyses showed a significantly higher percentage of our population to be 65 or older, and that the elderly are more likely to experience negative health outcomes [31-33], a sensitivity analysis of the adjusted models was performed among patients aged 65 and older. Model building for all analyses included assessment of assumptions and relevant interaction terms (sociodemographic characteristics with hospital characteristics), with significance established at P < .05. The study was submitted to the University of Wisconsin-Madison Institutional Review Board and was considered exempt from review.
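For reference, the model outputs described above come back on transformed scales: a coefficient b from the linear model on ln(charges) converts to a percent change of 100·(e^b − 1), and a negative binomial coefficient exponentiates to the incidence rate ratio (IRR) for LOS. A small illustration with made-up coefficients:

```python
import math

def pct_change_from_log_coef(b):
    # Linear regression on ln(total charges): a coefficient b on the CDI
    # indicator converts to a percent change of 100 * (exp(b) - 1).
    return 100 * (math.exp(b) - 1)

def irr_from_coef(b):
    # Negative binomial regression for LOS: exp(b) is the incidence rate ratio.
    return math.exp(b)

# Illustrative values only; these are not the fitted coefficients.
print(round(pct_change_from_log_coef(0.59), 1))  # roughly 80% higher charges
print(round(irr_from_coef(0.88), 2))             # IRR of roughly 2.4
```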
Data source

Data was extracted from the Nationwide Inpatient Sample (NIS), 2009–2011. NIS, considered the largest publicly available all-payer inpatient database in the United States, includes data from all states that participate in the Healthcare Cost and Utilization Project (HCUP), sponsored by the Agency for Healthcare Research and Quality (AHRQ). An annual approximate sample of 8 million hospitalizations from 1,000 hospitals, reflecting a 20% stratified sample of community hospitals in the nation, is included in NIS. NIS excludes short-term rehabilitation hospitals (starting with the 1998 data), long-term non-acute care hospitals, psychiatric hospitals, and alcoholism/chemical dependency treatment facilities. All hospitals in NIS are stratified on five hospital characteristics: ownership/control, bed size, teaching status, urban/rural location, and geographic location. NIS data are available yearly starting from 1988, and further details of the dataset are available elsewhere [17].
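Mechanically, the inflation adjustment of total charges described in the Methods (quarterly GDP deflator, 2009 USD reference year) is a rescaling of each charge by the ratio of the reference-period deflator to the charge-period deflator. The index values below are placeholders, not the actual BEA series:

```python
# Placeholder GDP deflator index by (year, quarter); the real values come from
# the Bureau of Economic Analysis. Rebasing to 2009 USD divides by the charge
# quarter's deflator and multiplies by the 2009 reference level.
DEFLATOR = {(2009, 1): 100.0, (2010, 1): 101.2, (2011, 1): 103.3}
REFERENCE = DEFLATOR[(2009, 1)]

def to_2009_usd(charge, year, quarter):
    # Express a nominal charge in constant 2009 dollars.
    return charge * REFERENCE / DEFLATOR[(year, quarter)]

print(round(to_2009_usd(41157.0, 2011, 1), 2))
```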
Results

A total of 593,038 pneumonia and 255,770 UTI discharges were included in this study. Among them, 6,427 cases of secondary CDI were noted among pneumonia patients and 3,037 among UTI patients, representing 10.8 secondary CDI cases per 1,000 pneumonia discharges and 11.1 per 1,000 UTI discharges.

Gender-specific analyses found a total of 2,996 and 1,000 cases of secondary CDI among men with pneumonia and UTI, respectively. Among women, 3,431 cases of secondary CDI were noted among those hospitalized for pneumonia and an additional 2,037 cases among those with primary UTI. Table 1 summarizes the prevalence of secondary CDI and the patient and hospital characteristics of primary pneumonia and UTI discharges, by gender. While rates of CDI among those with pneumonia did not differ by gender, a significant difference was noted for UTI patients, with men reporting 13.3 cases of secondary CDI per 1,000 compared with 11.3 per 1,000 for women (P < 0.001).
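The gender comparison above can be sanity-checked with an ordinary Pearson chi-square on the unweighted counts from Table 1; the published analysis used design-based F values with survey weighting, so this unweighted statistic is only indicative:

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    # without the survey weighting used in the published analysis.
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Secondary CDI among UTI discharges: men 1,000/75,600 vs. women 2,037/180,170
stat = chi_square_2x2(1000, 75600 - 1000, 2037, 180170 - 2037)
print(round(stat, 1))  # well above the 1-df critical value of 10.83 for P = 0.001
```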
As further noted in Table 1, several patient and hospital characteristics differed significantly between men and women, and as a result all such variables were included in final model building for the regression analyses.

Table 1. Prevalence of CDI, patient and hospital characteristics among primary pneumonia and UTI discharges, NIS 2009-2011

                                      Pneumonia                        UTI
                                 Men      Women   P value        Men      Women   P value
n                              279,072  313,966               75,600   180,170
N                              462,171  519,849              125,338   298,156
Secondary CDI, cases/1,000        10.7     10.9     0.52        13.3      11.3    <0.001
Patient characteristics
  Age, %                                          <0.001                          <0.001
    18-34 years                    5.7      5.0                  3.2       3.8
    35-49 years                   10.6     10.4                  6.3       5.4
    50-64 years                   22.4     21.2                 15.7      11.8
    65 years or more              61.3     63.3                 74.9      78.9
  Race/ethnicity, %                               <0.001                          <0.001
    White                         75.4     75.5                 71.8      74.9
    Black                         11.5     12.0                 14.9      12.1
    Hispanic                       7.8      7.5                  8.7       8.3
    Asian or Pacific Islander      2.0      1.8                  1.5       1.7
    Native American                0.8      0.8                  0.6       0.6
    Other                          2.6      2.4                  2.6       2.3
  Charlson-Deyo Index, %                          <0.001                          <0.001
    0                             20.8     21.3                 28.0      33.0
    1                             26.7     31.8                 23.5      27.3
    2 or more                     52.5     46.8                 48.5      39.8
  Neighborhood income, %                          <0.001                           0.009
    $1-$38,999                    32.0     32.9                 30.1      30.4
    $39,000-$47,999               27.2     27.0                 24.9      25.4
    $48,000-$62,999               23.0     22.8                 24.0      23.9
    $63,000 or more               17.7     17.3                 21.0      20.3
  Payer type, %                                   <0.001                          <0.001
    Private including HMO         19.1     17.9                 12.2      10.4
    Medicare                      64.5     66.7                 77.2      79.6
    Medicaid                       8.4      9.8                  6.8       6.7
    Self-pay                       4.8      3.6                  1.8       2.1
    No charge                      0.5      0.4                  0.2       0.2
    Other                          2.6      1.7                  1.7       1.0
Hospital characteristics
  Bed size, %                                     <0.001                           0.06
    Small                         19.0     19.7                 16.4      16.8
    Medium                        24.9     25.2                 24.8      25.3
    Large                         56.1     55.2                 58.8      57.9
  Hospital control, %                              0.001                          <0.001
    Private, investor-owned       14.4     14.7                 15.3      16.2
    Private, non-profit           71.0     71.3                 70.8      71.3
    Government, nonfederal        14.6     14.0                 13.9      12.6
  Setting, %                                      <0.001                          <0.001
    Rural                         22.1     22.7                 16.6      17.8
    Urban non-teaching            44.0     44.8                 44.9      46.3
    Urban teaching                33.9     32.5                 38.4      35.8
  Region, %                                       <0.001                          <0.001
    Northeast                     18.8     18.0                 22.1      19.9
    Midwest                       25.3     25.6                 22.5      22.7
    South                         38.9     40.6                 40.4      42.7
    West                          16.9     15.9                 15.1      14.7
  Year, %                                           0.24                           0.005
    2009                          34.1     34.1                 32.5      32.0
    2010                          32.3     32.1                 34.0      33.3
    2011                          33.5     33.8                 33.5      34.7

CDI = Clostridium difficile infection, UTI = Urinary tract infection, n = total sample size, N = weighted average annual population estimate

Table 2 displays the factors significantly associated with secondary CDI among primary pneumonia and UTI discharges. Among hospitalizations for pneumonia, increased odds of CDI were associated with being 65 years or older (adjusted odds ratio [aOR] for men = 1.7; for women = 1.9), having Medicare as the primary payer (aOR for men and women = 1.3), and increasing Charlson-Deyo index (aOR men = 1.5; women = 1.8). CDI was also significantly associated with high income (aOR = 1.3) and Medicaid (aOR = 1.4) among men hospitalized for pneumonia. Furthermore, for both men and women, hospitalization at urban non-teaching facilities was associated with approximately 60% increased odds of CDI, while nearly double the odds were noted for urban teaching centers.
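As a reading aid for Table 2, each adjusted odds ratio is the exponentiated coefficient of the corresponding term in the survey-weighted logistic model (the coefficient below is illustrative, not a fitted value):

```python
import math

def aor(beta):
    # A logistic-regression coefficient beta maps to an adjusted odds ratio
    # of exp(beta); aOR > 1 means higher odds of secondary CDI.
    return math.exp(beta)

# beta around 0.72 corresponds to the roughly doubled odds reported for urban
# teaching hospitals (illustrative value only).
print(round(aor(0.72), 2))
```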
On the other hand, lower odds were noted among men admitted to government nonfederal hospitals, as compared with investor-owned facilities (aOR = 0.7).

Table 2. Determinants of CDI among pneumonia and UTI discharges, NIS 2009-2011: adjusted odds ratios (95% CI)

                                  Pneumonia                                       UTI
                             Men                 Women                 Men                 Women
Age
  18-34 years (ref.)
  35-49 years          0.98 (0.74, 1.31)   0.84 (0.61, 1.14)   0.82 (0.49, 1.38)   1.49 (0.96, 2.33)
  50-64 years          1.27 (0.97, 1.65)   1.34 (1.00, 1.78)   0.84 (0.53, 1.33)   1.92 (1.19, 3.11)
  65 years or more     1.70 (1.28, 2.25)*  1.85 (1.40, 2.45)*  0.94 (0.61, 1.44)   2.41 (1.47, 3.95)*
Race/ethnicity
  White (ref.)
  Black                0.92 (0.79, 1.07)   1.02 (0.89, 1.16)   1.08 (0.87, 1.34)   1.01 (0.86, 1.18)
  Hispanic             1.02 (0.86, 1.21)   0.84 (0.71, 1.00)   0.85 (0.63, 1.15)   0.88 (0.73, 1.08)
  Asian/Pacific Isl.   1.26 (0.97, 1.62)   0.91 (0.69, 1.20)   0.96 (0.55, 1.67)   0.88 (0.60, 1.28)
  Native American      1.50 (0.86, 2.62)   0.84 (0.50, 1.43)   0.57 (0.14, 2.31)   0.66 (0.28, 1.56)
  Other                1.14 (0.87, 1.50)   1.14 (0.87, 1.47)   1.24 (0.82, 1.87)   0.84 (0.58, 1.21)
Charlson-Deyo Index
  0 (ref.)
  1                    0.98 (0.86, 1.13)   1.16 (1.02, 1.32)   0.88 (0.72, 1.09)   1.18 (1.03, 1.36)
  2 or more            1.54 (1.36, 1.75)*  1.81 (1.59, 2.05)*  1.31 (1.10, 1.55)*  1.72 (1.53, 1.94)*
Neighborhood income
  $1-$38,999 (ref.)
  $39,000-$47,999      1.08 (0.96, 1.23)   1.07 (0.95, 1.21)   1.09 (0.86, 1.37)   0.89 (0.77, 1.03)
  $48,000-$62,999      1.12 (0.98, 1.27)   1.10 (0.97, 1.25)   1.29 (1.04, 1.59)   1.17 (1.01, 1.36)
  $63,000 or more      1.30 (1.13, 1.49)*  1.23 (1.07, 1.43)   1.52 (1.21, 1.90)*  1.17 (0.98, 1.39)
Payer type
  Private incl. HMO (ref.)
  Medicare             1.32 (1.15, 1.51)*  1.26 (1.09, 1.45)*  1.21 (0.95, 1.55)   0.90 (0.74, 1.10)
  Medicaid             1.38 (1.14, 1.68)*  1.19 (0.98, 1.45)   1.04 (0.72, 1.50)   1.00 (0.78, 1.28)
  Self-pay             0.69 (0.49, 0.96)   0.87 (0.61, 1.23)   0.25 (0.08, 0.78)   0.53 (0.30, 0.94)
  No charge            1.28 (0.66, 2.50)   0.54 (0.17, 1.72)   0.74 (0.13, 4.42)   0.38 (0.05, 2.75)
  Other                1.10 (0.81, 1.49)   1.08 (0.70, 1.67)   0.86 (0.44, 1.67)   0.51 (0.26, 1.03)
Bed size
  Small (ref.)
  Medium               0.92 (0.78, 1.09)   1.03 (0.88, 1.20)   1.02 (0.80, 1.30)   0.88 (0.74, 1.04)
  Large                1.09 (0.94, 1.27)   1.17 (1.01, 1.35)   1.19 (0.96, 1.48)   0.97 (0.83, 1.14)
Hospital control
  Private investor-owned (ref.)
  Private non-profit   0.79 (0.68, 0.92)   0.86 (0.74, 1.00)   1.01 (0.79, 1.28)   1.15 (0.96, 1.37)
  Gov. nonfederal      0.68 (0.55, 0.84)*  0.76 (0.63, 0.93)   0.88 (0.63, 1.23)   1.06 (0.84, 1.35)
Setting
  Rural (ref.)
  Urban non-teaching   1.63 (1.36, 1.96)*  1.66 (1.40, 1.98)*  1.63 (1.20, 2.20)   1.42 (1.16, 1.73)*
  Urban teaching       2.05 (1.71, 2.47)*  1.92 (1.60, 2.30)*  1.92 (1.42, 2.61)*  1.74 (1.42, 2.14)*
Region
  Northeast (ref.)
  Midwest              0.94 (0.79, 1.11)   0.87 (0.73, 1.04)   0.99 (0.79, 1.23)   1.04 (0.86, 1.25)
  South                0.79 (0.68, 0.91)   0.78 (0.66, 0.92)   0.82 (0.66, 1.01)   0.81 (0.68, 0.96)
  West                 0.83 (0.70, 0.97)   0.82 (0.69, 0.98)   0.78 (0.61, 1.01)   0.87 (0.72, 1.05)
Year
  2009 (ref.)
  2010                 1.10 (0.97, 1.24)   0.96 (0.85, 1.08)   1.11 (0.93, 1.32)   1.03 (0.90, 1.19)
  2011                 1.06 (0.94, 1.20)   0.95 (0.84, 1.06)   1.06 (0.89, 1.26)   1.01 (0.88, 1.16)

CDI = Clostridium difficile infection, UTI = Urinary tract infection. *Bonferroni adjusted P < 0.0017

Similar trends were noted for UTI discharges. Being 65 years or older (aOR = 2.4, significant for women only), the highest income category (aOR = 1.5 for men only), increasing comorbidities (aOR men = 1.3; women = 1.7), urban teaching status (aOR men = 1.9; women = 1.7), and urban non-teaching status (aOR = 1.4 for women only) were also significantly associated with an increased likelihood of secondary CDI.

Results of chi-square analyses demonstrated that in-hospital mortality was higher among pneumonia patients diagnosed with CDI than among those without, for both men (13% vs. 4%, P < 0.0001) and women (11% vs. 4%, P < 0.0001). A similar effect of CDI on in-hospital mortality was noted among UTI discharges for men (4% vs. 1%, P < 0.0001) and women (3% vs. 1%, P < 0.0001).

Table 3 shows the results of regression analyses (unadjusted and adjusted) evaluating the impact of CDI on in-hospital mortality and resource utilization among pneumonia and UTI cases. After adjusting for control variables, in-hospital mortality was approximately three times higher for pneumonia cases with CDI than for those without. Having CDI among men hospitalized for UTI increased the likelihood of in-hospital mortality four-fold; among women with UTI, CDI was associated with three times the odds of dying in the hospital.

Table 3. Regression analyses of in-hospital mortality and health resource utilization upon secondary CDI among patients with pneumonia or urinary tract infection(a)

                 In-hospital mortality, OR (95% CI)       Length of stay, IRR (95% CI)            Total charge, % change (95% CI)
                 Unadjusted          Adjusted(b)          Unadjusted          Adjusted(b)         Unadjusted             Adjusted(b)
Primary pneumonia with secondary CDI(c)
  Women          3.42 (3.03, 3.86)*  2.84 (2.51, 3.21)*   2.40 (2.30, 2.50)*  2.29 (2.19, 2.39)*  86.14 (80.36, 91.92)*  74.9 (70.66, 79.20)*
  Men            3.70 (3.28, 4.17)*  3.15 (2.79, 3.55)*   2.54 (2.43, 2.66)*  2.40 (2.30, 2.52)*  93.65 (87.93, 99.38)*  80.00 (75.56, 84.48)*
Primary UTI with secondary CDI(d)
  Women          3.69 (2.82, 4.84)*  3.39 (2.58, 4.44)*   2.19 (2.06, 2.32)*  2.11 (1.99, 2.24)*  62.93 (57.08, 68.78)*  56.92 (52.41, 61.44)*
  Men            4.05 (2.89, 5.67)*  4.13 (2.95, 5.78)*   2.14 (1.99, 2.29)*  2.13 (1.99, 2.29)*  67.37 (59.23, 75.50)*  59.34 (52.30, 66.38)*

*Bonferroni adjusted P < 0.0017. CDI = Clostridium difficile infection, UTI = Urinary tract infection, OR = odds ratio, CI = confidence interval, IRR = incidence rate ratio
(a) Binary logistic regression for in-hospital mortality, negative binomial regression for length of stay, linear regression for total charges; all procedures were survey weighted
(b) Model adjusted for patient characteristics, hospital characteristics, and year
(c) Reference = primary pneumonia without secondary CDI
(d) Reference = primary UTI without secondary CDI

Wilcoxon rank sum tests demonstrated that median LOS was significantly longer in pneumonia cases with CDI than in those without, for both men and women (9 days vs. 4 days, P < 0.0001). Similar trends were noted for UTI with versus without CDI (7 days vs. 3 days, P < 0.0001). Adjusted negative binomial regression showed that CDI was associated with LOS more than twice as long for both pneumonia and UTI discharges (Table 3).

Among men with pneumonia, higher median total charges ($104,131 vs. $41,157, P < 0.0001) were noted in the presence of CDI, with a similar trend among women ($96,446 vs. $40,700, P < 0.0001). Median total charges were also substantially higher for UTI cases with CDI, for both men ($63,842 vs. $34,182, P < 0.0001) and women ($61,577 vs. $33,063, P < 0.0001). Results from the linear regression analyses showed that secondary CDI was significantly associated with an increased percent change in total charges for both pneumonia and UTI cases.
For example, the presence of secondary CDI increased total charges by 80% and 75% among men and women with pneumonia, respectively. Similarly, CDI was associated with 59% and 57% increases in total charges among men and women with UTI, respectively (Table 3). In a sensitivity analysis among patients ages 65 and older, a similar trend persisted for in-hospital mortality, LOS, and total charges (Table 4). Cumulatively, the presence of secondary CDI among pneumonia or UTI discharges was substantially associated with increased in-hospital mortality and health resource utilization.

Table 4. Regression analyses of in-hospital mortality and health resource utilization upon secondary CDI among patients ages 65+ with pneumonia or urinary tract infection(a)

                           In-hospital mortality,    Length of stay,       Total charge,
                           OR (95% CI)(b)            IRR (95% CI)(b)       % change (95% CI)(b)
Primary pneumonia with secondary CDI(c)
  Women                    2.67 (2.34, 3.05)*        2.18 (2.08, 2.29)*    71.04 (66.40, 75.73)*
  Men                      3.06 (2.69, 3.48)*        2.25 (2.14, 2.35)*    74.52 (69.76, 79.27)*
Primary UTI with secondary CDI(d)
  Women                    3.59 (2.73, 4.71)*        2.06 (1.94, 2.18)*    54.99 (50.20, 59.78)*
  Men                      4.35 (3.07, 6.16)*        2.07 (1.93, 2.23)*    57.13 (49.17, 65.08)*

*Bonferroni adjusted P < 0.0017. CDI = Clostridium difficile infection, UTI = Urinary tract infection, OR = odds ratio, CI = confidence interval, IRR = incidence rate ratio
(a) Binary logistic regression for in-hospital mortality, negative binomial regression for length of stay, linear regression for total charges; all procedures were survey weighted
(b) Model adjusted for patient characteristics, hospital characteristics, and year
(c) Reference = primary pneumonia without secondary CDI
(d) Reference = primary UTI without secondary CDI

Discussion

Our study found that CDI is highly prevalent in patients with pneumonia or UTI and is associated with significantly increased in-hospital mortality, LOS, and total charges. To our knowledge, this study is the first of its kind to examine secondary CDI using a nationally representative dataset in patients with pneumonia or UTI, both frequent conditions that often require hospitalization and antimicrobial therapy.

In our analyses, the prevalence of CDI is considerably higher than that reported among general hospitalized patients, but lower than the prevalence noted among those with conditions that may uniquely predispose patients to CDI, such as ulcerative colitis [11,12]. We also found a gender difference in CDI prevalence, with higher rates among men with UTI than among women. Such gender differences may be attributable to differences in antibiotic prescribing practices, as the duration of treatment for men with UTI is generally longer [34-37].
Future prospective studies should corroborate these findings and explore underlying mechanisms.

A major finding in our study was that LOS was doubled for both men and women with secondary CDI compared with those without. Moreover, increases in total charges of at least 75% and 55% were noted among pneumonia and UTI patients with CDI, respectively. In-hospital mortality was also substantially higher among patients with secondary CDI than among those without. Our findings on the negative impact of CDI on patient and hospital outcomes are in line with results among other patient populations with CDI, such as those with inflammatory bowel disease, solid organ transplant, and end-stage renal failure [11-14], reflecting the substantial burden of CDI in vulnerable patients.

We identified a number of hospital and patient characteristics associated with CDI in patients with primary pneumonia or UTI. Consistent with our results, previous studies have reported a greater risk of CDI with increasing age [29,38]. A study assessing the C. difficile colitis case fatality rate also noted higher rates among Medicare and Medicaid patients [30], demonstrating the burden of CDI among patients with such insurance status, similar to the trends in our findings. We hypothesize that because Medicare patients are elderly, they may be at greater risk of negative outcomes. In addition, Medicaid has traditionally served low-income populations, so limited resources, budget constraints, and coverage restrictions could contribute to the negative rates. Not surprisingly, and in keeping with other studies, we found that an increasing number of comorbidities, as reflected by the Charlson-Deyo index, was associated with a greater risk of CDI [30].

We also observed higher odds of secondary CDI among urban hospitals.
Possible reasons include greater complexity of illness or higher antibiotic prescribing rates among urban hospitals [39], though some researchers have illustrated the opposite [40] or non-significant differences between urban and rural settings [41]. Future studies examining this issue with additional ways to account for case mix are needed. Our findings of worse outcomes in this large swath of hospitalized patients suggest opportunities for both infection prevention and antimicrobial stewardship in this population.

Our findings have major implications for clinicians and healthcare institutions and add to the body of literature on the prevalence and impact of CDI. The substantial negative impact of secondary CDI on in-hospital mortality and health resource utilization emphasizes that those hospitalized for pneumonia or UTI represent high-risk populations for CDI and should be included in CDI preventive efforts. Opportunities exist for optimizing antimicrobial therapy to reduce this burden, such as severity-based treatment, de-escalation as appropriate, and adherence to treatment guidelines for the choice and duration of anti-infectives [42,43]. Future studies examining interventions to reduce the risk of, and mitigate the consequences of, CDI in patients with pneumonia and UTI are urgently needed.

Our results should be interpreted in the context of study limitations. First, because NIS does not provide patient-identifiable information, ICD-9-CM codes could not be cross-validated with laboratory results; previous research, however, has shown the validity of such codes for CDI detection [44,45]. Second, the unit of observation in NIS is the discharge, not the individual patient; therefore, this study cannot assess the impact of initial versus recurrent CDI.
Third, while patient-level characteristics such as age, gender, race/ethnicity, payer type, and neighborhood income based on ZIP code are included in the NIS dataset, other social or medical determinants, such as health literacy, education, dietary factors, and treatments, including outpatient procedures, could not be evaluated.

Conclusion

In our study, using the largest inpatient database in the United States, we demonstrated that men hospitalized for UTI have a significantly higher prevalence of secondary CDI than women. CDI was also associated with increased in-hospital mortality for both pneumonia and UTI patients, as well as increased LOS and total charges, further highlighting the negative impact of CDI and the imperative need for preventive measures.
Keywords: Clostridium difficile; urinary tract infection; pneumonia; Nationwide Inpatient Sample; hospital-acquired infection; nosocomial infection
Background: Clostridium difficile infection (CDI) is a leading cause of hospital-acquired infection (HAI) [1-3]. In many areas of the United States, CDI has surpassed methicillin-resistant Staphylococcus aureus as the most common type of HAI [1] with approximately 333,000 initial and 145,000 recurrent hospital-onset cases in the nation [4]. Certain patient populations have a disproportionately higher risk for CDI due to either host factors, frequent antibiotic use or both. These include older adults, patients using proton pump inhibitors [5-7] or antibiotics [8,9], those with inflammatory bowel disease [10-12], end-stage renal failure, or recipients of solid organ transplants [13,14]. In a study addressing the burden of CDI among patients with inflammatory bowel disease, Ananthakrishnan et al. [12] demonstrated four times higher mortality and three days longer hospital stay with presence of CDI. Similarly, among solid organ transplant patients, presence of CDI significantly increased the in-hospital mortality, length of stay, and charges, in addition to organ complications [14]. Despite such recognized burden of CDI, limited research exists on the prevalence and impact of infection among most common conditions that require antimicrobial treatment, with no study to date evaluating such impact among pneumonia or urinary tract infection (UTI) patients. Some recent empirical evidence has noted the co-occurrence of pneumonia and UTI with CDI in the United States [15] putatively due to the use of antimicrobial treatment; though none have evaluated the impact of such co-occurrences on patient and hospital outcomes. Misdiagnosis of pneumonia and inappropriate use of antimicrobial therapy was associated with a CDI outbreak [16]. Given the burden of CDI nationally and increasing prevalence attributed at least in part to antibiotic use, understanding the impact of CDI among patients with pneumonia or UTI would be valuable to devise potential preventive strategies. 
We undertook analyses of an existing large, nationally representative dataset to assess (1) the prevalence of and factors associated with CDI among pneumonia and UTI discharges and (2) the impact of CDI on in-hospital mortality and health resource utilization (length of stay [LOS] and total charges). Methods: Data source Data were extracted from the Nationwide Inpatient Sample (NIS), 2009–2011. The NIS, the largest publicly available all-payer inpatient database in the United States, includes data from all states participating in the Healthcare Cost and Utilization Project (HCUP), sponsored by the Agency for Healthcare Research and Quality (AHRQ). An approximate annual sample of 8 million hospitalizations from 1,000 hospitals, reflecting a 20% stratified sample of community hospitals in the nation, is included in the NIS. The NIS excludes short-term rehabilitation hospitals (starting with the 1998 data), long-term non-acute care hospitals, psychiatric hospitals, and alcoholism/chemical dependency treatment facilities. All hospitals in the NIS are stratified on five hospital characteristics: ownership/control, bed size, teaching status, urban/rural location, and geographic location. NIS data are available yearly starting in 1988, and further details of the dataset are available elsewhere [17].
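Because the NIS is a 20% stratified sample, national estimates are produced by applying discharge-level survey weights rather than counting sampled records directly; the NIS discharge weight variable is DISCWT, roughly 5 per discharge for a 20% sample. A minimal sketch of this weighting, using invented toy records (not actual NIS data):

```python
# Toy illustration of weighting a 20% stratified sample up to national
# estimates, as done with NIS discharge weights (DISCWT ~= 5).
# These records are invented for illustration only.
sample = [
    {"secondary_cdi": 1, "weight": 5},
    {"secondary_cdi": 0, "weight": 5},
    {"secondary_cdi": 0, "weight": 5},
    {"secondary_cdi": 1, "weight": 5},
]

# A national estimate is the sum of weights, not the record count.
national_discharges = sum(r["weight"] for r in sample)
national_cdi_cases = sum(r["weight"] for r in sample if r["secondary_cdi"])

print(national_discharges)  # 20
print(national_cdi_cases)   # 10
```

This is why the tables below report both n (sampled discharges) and N (weighted population estimates).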
Data collection and study definitions Our study sample included primary hospital discharges for pneumonia or UTI in adults over the age of 17 years. The International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) was used to identify UTI (599.0) and CDI (008.45). Pneumonia was identified using Clinical Classification Software (CCS) code 112, representing cases not caused by tuberculosis or sexually transmitted disease (as defined by the NIS). Similar coding strategies have been used in previous literature for CDI [11,13,14], pneumonia [18,19], and UTI [20,21]. The NIS provides a primary discharge code in addition to codes for secondary diagnoses; patients with a secondary discharge code for CDI were identified as CDI patients. We further included both patient and hospital characteristics, both to control for them and to evaluate whether they negatively impact CDI patients. Previous research has demonstrated that patient socioeconomic factors and hospital characteristics are positively associated with length of stay and mortality in other patient populations [11,13,14,22,23], and such variables were therefore accounted for in our study.
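The case-identification logic above reduces to a simple check over diagnosis code fields: the primary code defines the cohort (e.g., UTI 599.0) and any secondary code of 008.45 flags CDI. A minimal sketch, noting that administrative files such as the NIS commonly store ICD-9-CM codes without the decimal point (field names here are invented for illustration):

```python
# Sketch of cohort/CDI flagging from ICD-9-CM discharge codes.
# Codes are commonly stored without the decimal point, so UTI 599.0
# appears as "5990" and CDI 008.45 as "00845" (our assumption about
# storage format; field names are illustrative).
UTI_CODE = "5990"
CDI_CODE = "00845"

def is_uti_with_cdi(primary_dx, secondary_dx_list):
    """True if the primary diagnosis is UTI and any secondary
    diagnosis carries the C. difficile code."""
    return primary_dx == UTI_CODE and CDI_CODE in secondary_dx_list

print(is_uti_with_cdi("5990", ["00845", "25000"]))  # True
print(is_uti_with_cdi("5990", ["25000"]))           # False
```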
Patient characteristics included age (18–34, 35–49, 50–64, 65 or more years), gender (men, women), race/ethnicity (White, Black, Hispanic, Asian or Pacific Islander, Native American, other), primary payer type (private including HMO, Medicare, Medicaid, self-pay, no charge, other), neighborhood income defined as median household income quartiles by patient ZIP code ($1–$38,999; $39,000–$47,999; $48,000–$62,999; $63,000 or more), and Charlson-Deyo index (0, 1, 2 or more). The Deyo modification of the Charlson comorbidity index creates a score representing comorbidities for each discharge using ICD-9-CM coding algorithms; the 17-item index is a validated measure of comorbidity for administrative data [24-26]. Hospital characteristics included bed size tertile (small, medium, large), ownership/control (private investor-owned, private non-profit, government non-federal), setting/teaching status (rural, urban non-teaching, urban teaching), and geographic region (Northeast, Midwest, South, West). In-hospital mortality was defined as death during hospitalization. LOS and total charges were taken from NIS-provided variables, which are edited by AHRQ to ensure uniformity between states. Total charges were adjusted quarterly for inflation using the Gross Domestic Product (GDP) deflator available through the United States Department of Commerce, Bureau of Economic Analysis, with 2009 USD as the reference year [27].
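The inflation adjustment above amounts to rescaling each nominal charge by the ratio of the base-year (2009) deflator to the deflator of the discharge quarter. A minimal sketch; the index value used here is hypothetical, not actual BEA data:

```python
# Sketch of deflating nominal charges to 2009 USD with a price index.
# Index values are hypothetical (2009 = 100), not actual BEA figures.
def to_2009_usd(nominal_charge, quarter_deflator, base=100.0):
    """Rescale a nominal charge to base-year (2009) dollars."""
    return nominal_charge * base / quarter_deflator

# A $50,000 charge in a quarter whose deflator is 104 relative to 2009:
print(round(to_2009_usd(50000, 104.0), 2))  # 48076.92
```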
Statistical analyses SAS 9.4 (SAS Institute, Inc., Cary, NC) was used for all statistical analyses except negative binomial regression, for which the STATA 12 package (Stata Corp LP, College Station, TX) was used. Because existing data suggest potential gender differences in CDI [28-30], all statistical analyses were stratified by gender. Given the large number of variables and the resulting multiple testing, a family-wise Bonferroni correction was applied to reduce the type I error rate; accordingly, P < 0.0017 was set as the level of significance. To assess CDI prevalence in each patient population, as well as patient and hospital characteristic differences between genders among pneumonia and UTI cases, chi-square tests using design-based F values were used. The prevalence of secondary CDI was expressed as cases per 1,000 discharges for the pneumonia and UTI groups, by gender. Next, independent survey-weighted multivariable logistic regression analyses were performed to identify patient and hospital characteristics associated with secondary CDI among primary pneumonia and UTI discharges. To identify the impact of secondary CDI on in-hospital mortality among patients hospitalized for primary pneumonia or UTI, chi-square tests were conducted, followed by survey-weighted logistic regression. To assess the impact of secondary CDI on resource utilization, Wilcoxon rank sum tests were used, followed by survey-weighted negative binomial regression for LOS and survey-weighted linear regression for total charges.
All adjusted regression models included survey year and patient and hospital characteristics as control variables. Because the distribution of total charges was non-normal and right-skewed, this variable was natural-log transformed for the linear regression analyses. In addition, because descriptive analyses showed that a significantly higher proportion of our population was 65 or older, and the elderly are more likely to experience negative health impacts [31-33], a sensitivity analysis restricted to patients aged 65 and older was performed using the aforementioned adjusted models. Model building for all analyses included assessment of assumptions and relevant interaction terms (sociodemographic characteristics with hospital characteristics), with significance established at P < .05. The study was submitted to the University of Wisconsin-Madison Institutional Review Board and was considered exempt from review.
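Two of the quantitative choices in the analysis plan can be made concrete. The Bonferroni threshold of 0.0017 is consistent with dividing alpha = 0.05 across roughly 29 comparisons (the exact count is our assumption; it is not stated in the text), and a coefficient b from the log-transformed charge model converts to a percent change as (e^b - 1) * 100. A minimal sketch:

```python
import math

# Bonferroni family-wise correction: alpha divided by the number of
# tests. 0.05 / 29 ~= 0.0017, matching the paper's threshold (the
# number of comparisons is our assumption, not stated in the text).
def bonferroni_alpha(alpha, n_tests):
    return alpha / n_tests

# With ln(total charges) as the outcome, a CDI coefficient b implies
# a percent change in charges of (e**b - 1) * 100.
def pct_change_from_log_coef(b):
    return (math.exp(b) - 1) * 100

print(round(bonferroni_alpha(0.05, 29), 4))       # 0.0017
print(round(pct_change_from_log_coef(0.559), 1))  # 74.9
```

The coefficient 0.559 is back-calculated from the roughly 75% charge increase later reported for women with pneumonia and is illustrative only.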
Results: A total of 593,038 pneumonia and 255,770 UTI discharges were included in this study. Among these, 6,427 cases of secondary CDI were noted among pneumonia patients and 3,037 among UTI patients, representing 10.8 secondary CDI cases per 1,000 pneumonia discharges and 11.1 per 1,000 UTI discharges. Gender-specific analyses found 2,996 and 1,000 cases of secondary CDI among men with pneumonia and UTI, respectively. Among women, 3,431 cases of secondary CDI were noted among those hospitalized for pneumonia and 2,037 among those with primary UTI. Table 1 summarizes the prevalence of secondary CDI and patient and hospital characteristics among primary pneumonia and UTI discharges, by gender. While CDI rates among those with pneumonia did not differ by gender, a significant difference was noted for UTI patients, with men having 13.3 cases of secondary CDI per 1,000 compared to 11.3 per 1,000 for women (P < 0.001).
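The per-1,000 rates above are simple proportions scaled by 1,000; the pneumonia figure reproduces directly from the counts given (the reported UTI rate of 11.1 does not reproduce from the unweighted counts and may reflect survey-weighted estimates). A quick check:

```python
# Secondary-CDI prevalence expressed as cases per 1,000 discharges.
def per_1000(cases, discharges):
    return cases / discharges * 1000

# Pneumonia: 6,427 secondary CDI cases among 593,038 discharges.
print(round(per_1000(6427, 593038), 1))  # 10.8
```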
As further noted in Table 1, several patient and hospital characteristics differed significantly between men and women, and all such variables were therefore included in final model building for the regression analyses.

Table 1. Prevalence of CDI, patient and hospital characteristics among primary pneumonia and UTI discharges, NIS 2009-2011

|                                | Pneumonia, men | Pneumonia, women | P value | UTI, men | UTI, women | P value |
| n                              | 279,072 | 313,966 |        | 75,600  | 180,170 |        |
| N                              | 462,171 | 519,849 |        | 125,338 | 298,156 |        |
| Secondary CDI, cases per 1,000 | 10.7    | 10.9    | 0.52   | 13.3    | 11.3    | <0.001 |
Patient characteristics
| Age, %: 18-34 years            | 5.7     | 5.0     |        | 3.2     | 3.8     |        |
|   35-49 years                  | 10.6    | 10.4    | <0.001 | 6.3     | 5.4     | <0.001 |
|   50-64 years                  | 22.4    | 21.2    |        | 15.7    | 11.8    |        |
|   65 years or more             | 61.3    | 63.3    |        | 74.9    | 78.9    |        |
| Race/ethnicity, %: White       | 75.4    | 75.5    |        | 71.8    | 74.9    |        |
|   Black                        | 11.5    | 12.0    |        | 14.9    | 12.1    |        |
|   Hispanic                     | 7.8     | 7.5     | <0.001 | 8.7     | 8.3     | <0.001 |
|   Asian or Pacific Islander    | 2.0     | 1.8     |        | 1.5     | 1.7     |        |
|   Native American              | 0.8     | 0.8     |        | 0.6     | 0.6     |        |
|   Other                        | 2.6     | 2.4     |        | 2.6     | 2.3     |        |
| Charlson-Deyo index, %: 0      | 20.8    | 21.3    |        | 28.0    | 33.0    |        |
|   1                            | 26.7    | 31.8    | <0.001 | 23.5    | 27.3    | <0.001 |
|   2 or more                    | 52.5    | 46.8    |        | 48.5    | 39.8    |        |
| Neighborhood income, %: $1-$38,999 | 32.0 | 32.9   |        | 30.1    | 30.4    |        |
|   $39,000-$47,999              | 27.2    | 27.0    | <0.001 | 24.9    | 25.4    | 0.009  |
|   $48,000-$62,999              | 23.0    | 22.8    |        | 24.0    | 23.9    |        |
|   $63,000 or more              | 17.7    | 17.3    |        | 21.0    | 20.3    |        |
| Payer type, %: Private incl. HMO | 19.1  | 17.9    |        | 12.2    | 10.4    |        |
|   Medicare                     | 64.5    | 66.7    |        | 77.2    | 79.6    |        |
|   Medicaid                     | 8.4     | 9.8     | <0.001 | 6.8     | 6.7     | <0.001 |
|   Self-pay                     | 4.8     | 3.6     |        | 1.8     | 2.1     |        |
|   No charge                    | 0.5     | 0.4     |        | 0.2     | 0.2     |        |
|   Other                        | 2.6     | 1.7     |        | 1.7     | 1.0     |        |
Hospital characteristics
| Bed size, %: Small             | 19.0    | 19.7    |        | 16.4    | 16.8    |        |
|   Medium                       | 24.9    | 25.2    | <0.001 | 24.8    | 25.3    | 0.06   |
|   Large                        | 56.1    | 55.2    |        | 58.8    | 57.9    |        |
| Hospital control, %: Private, investor-owned | 14.4 | 14.7 |  | 15.3    | 16.2    |        |
|   Private, non-profit          | 71.0    | 71.3    | 0.001  | 70.8    | 71.3    | <0.001 |
|   Government, nonfederal       | 14.6    | 14.0    |        | 13.9    | 12.6    |        |
| Setting, %: Rural              | 22.1    | 22.7    |        | 16.6    | 17.8    |        |
|   Urban non-teaching           | 44.0    | 44.8    | <0.001 | 44.9    | 46.3    | <0.001 |
|   Urban teaching               | 33.9    | 32.5    |        | 38.4    | 35.8    |        |
| Region, %: Northeast           | 18.8    | 18.0    |        | 22.1    | 19.9    |        |
|   Midwest                      | 25.3    | 25.6    | <0.001 | 22.5    | 22.7    | <0.001 |
|   South                        | 38.9    | 40.6    |        | 40.4    | 42.7    |        |
|   West                         | 16.9    | 15.9    |        | 15.1    | 14.7    |        |
| Year, %: 2009                  | 34.1    | 34.1    |        | 32.5    | 32.0    |        |
|   2010                         | 32.3    | 32.1    | 0.24   | 34.0    | 33.3    | 0.005  |
|   2011                         | 33.5    | 33.8    |        | 33.5    | 34.7    |        |

CDI = Clostridium difficile infection, UTI = urinary tract infection, n = total sample size, N = weighted average annual population estimate.

Table 2 displays the factors significantly associated with secondary CDI among primary pneumonia and UTI discharges. Among hospitalizations for pneumonia, increased odds of CDI were associated with being 65 years or older (adjusted odds ratio [aOR] for men = 1.7; aOR for women = 1.9), having Medicare as the primary payer (aOR for men and women = 1.3), and increasing Charlson-Deyo index (aOR men = 1.5; aOR women = 1.8). CDI was also significantly associated with high income (aOR = 1.3) and Medicaid (aOR = 1.4) among men hospitalized for pneumonia. Furthermore, for both men and women, hospitalization at urban non-teaching facilities was associated with approximately 60% increased odds of CDI, while nearly double the odds were noted at urban teaching centers.
On the other hand, lower odds were noted among men admitted to government nonfederal hospitals compared with investor-owned facilities (aOR = 0.7).

Table 2. Determinants of CDI among pneumonia and UTI discharges, NIS 2009-2011; adjusted OR (95% CI)

|                          | Pneumonia, men     | Pneumonia, women   | UTI, men           | UTI, women         |
Patient characteristics
| Age: 18-34 years (ref.)  | -                  | -                  | -                  | -                  |
|   35-49 years            | 0.98 (0.74, 1.31)  | 0.84 (0.61, 1.14)  | 0.82 (0.49, 1.38)  | 1.49 (0.96, 2.33)  |
|   50-64 years            | 1.27 (0.97, 1.65)  | 1.34 (1.00, 1.78)  | 0.84 (0.53, 1.33)  | 1.92 (1.19, 3.11)  |
|   65 years or more       | 1.70 (1.28, 2.25)* | 1.85 (1.40, 2.45)* | 0.94 (0.61, 1.44)  | 2.41 (1.47, 3.95)* |
| Race/ethnicity: White (ref.) | -              | -                  | -                  | -                  |
|   Black                  | 0.92 (0.79, 1.07)  | 1.02 (0.89, 1.16)  | 1.08 (0.87, 1.34)  | 1.01 (0.86, 1.18)  |
|   Hispanic               | 1.02 (0.86, 1.21)  | 0.84 (0.71, 1.00)  | 0.85 (0.63, 1.15)  | 0.88 (0.73, 1.08)  |
|   Asian/Pacific Islander | 1.26 (0.97, 1.62)  | 0.91 (0.69, 1.20)  | 0.96 (0.55, 1.67)  | 0.88 (0.60, 1.28)  |
|   Native American        | 1.50 (0.86, 2.62)  | 0.84 (0.50, 1.43)  | 0.57 (0.14, 2.31)  | 0.66 (0.28, 1.56)  |
|   Other                  | 1.14 (0.87, 1.50)  | 1.14 (0.87, 1.47)  | 1.24 (0.82, 1.87)  | 0.84 (0.58, 1.21)  |
| Charlson-Deyo index: 0 (ref.) | -             | -                  | -                  | -                  |
|   1                      | 0.98 (0.86, 1.13)  | 1.16 (1.02, 1.32)  | 0.88 (0.72, 1.09)  | 1.18 (1.03, 1.36)  |
|   2 or more              | 1.54 (1.36, 1.75)* | 1.81 (1.59, 2.05)* | 1.31 (1.10, 1.55)* | 1.72 (1.53, 1.94)* |
| Neighborhood income: $1-$38,999 (ref.) | -    | -                  | -                  | -                  |
|   $39,000-$47,999        | 1.08 (0.96, 1.23)  | 1.07 (0.95, 1.21)  | 1.09 (0.86, 1.37)  | 0.89 (0.77, 1.03)  |
|   $48,000-$62,999        | 1.12 (0.98, 1.27)  | 1.10 (0.97, 1.25)  | 1.29 (1.04, 1.59)  | 1.17 (1.01, 1.36)  |
|   $63,000 or more        | 1.30 (1.13, 1.49)* | 1.23 (1.07, 1.43)  | 1.52 (1.21, 1.90)* | 1.17 (0.98, 1.39)  |
| Payer type: Private incl. HMO (ref.) | -      | -                  | -                  | -                  |
|   Medicare               | 1.32 (1.15, 1.51)* | 1.26 (1.09, 1.45)* | 1.21 (0.95, 1.55)  | 0.90 (0.74, 1.10)  |
|   Medicaid               | 1.38 (1.14, 1.68)* | 1.19 (0.98, 1.45)  | 1.04 (0.72, 1.50)  | 1.00 (0.78, 1.28)  |
|   Self-pay               | 0.69 (0.49, 0.96)  | 0.87 (0.61, 1.23)  | 0.25 (0.08, 0.78)  | 0.53 (0.30, 0.94)  |
|   No charge              | 1.28 (0.66, 2.50)  | 0.54 (0.17, 1.72)  | 0.74 (0.13, 4.42)  | 0.38 (0.05, 2.75)  |
|   Other                  | 1.10 (0.81, 1.49)  | 1.08 (0.70, 1.67)  | 0.86 (0.44, 1.67)  | 0.51 (0.26, 1.03)  |
Hospital characteristics
| Bed size: Small (ref.)   | -                  | -                  | -                  | -                  |
|   Medium                 | 0.92 (0.78, 1.09)  | 1.03 (0.88, 1.20)  | 1.02 (0.80, 1.30)  | 0.88 (0.74, 1.04)  |
|   Large                  | 1.09 (0.94, 1.27)  | 1.17 (1.01, 1.35)  | 1.19 (0.96, 1.48)  | 0.97 (0.83, 1.14)  |
| Control: Private investor-owned (ref.) | -    | -                  | -                  | -                  |
|   Private non-profit     | 0.79 (0.68, 0.92)  | 0.86 (0.74, 1.00)  | 1.01 (0.79, 1.28)  | 1.15 (0.96, 1.37)  |
|   Government nonfederal  | 0.68 (0.55, 0.84)* | 0.76 (0.63, 0.93)  | 0.88 (0.63, 1.23)  | 1.06 (0.84, 1.35)  |
| Setting: Rural (ref.)    | -                  | -                  | -                  | -                  |
|   Urban non-teaching     | 1.63 (1.36, 1.96)* | 1.66 (1.40, 1.98)* | 1.63 (1.20, 2.20)  | 1.42 (1.16, 1.73)* |
|   Urban teaching         | 2.05 (1.71, 2.47)* | 1.92 (1.60, 2.30)* | 1.92 (1.42, 2.61)* | 1.74 (1.42, 2.14)* |
| Region: Northeast (ref.) | -                  | -                  | -                  | -                  |
|   Midwest                | 0.94 (0.79, 1.11)  | 0.87 (0.73, 1.04)  | 0.99 (0.79, 1.23)  | 1.04 (0.86, 1.25)  |
|   South                  | 0.79 (0.68, 0.91)  | 0.78 (0.66, 0.92)  | 0.82 (0.66, 1.01)  | 0.81 (0.68, 0.96)  |
|   West                   | 0.83 (0.70, 0.97)  | 0.82 (0.69, 0.98)  | 0.78 (0.61, 1.01)  | 0.87 (0.72, 1.05)  |
| Year: 2009 (ref.)        | -                  | -                  | -                  | -                  |
|   2010                   | 1.10 (0.97, 1.24)  | 0.96 (0.85, 1.08)  | 1.11 (0.93, 1.32)  | 1.03 (0.90, 1.19)  |
|   2011                   | 1.06 (0.94, 1.20)  | 0.95 (0.84, 1.06)  | 1.06 (0.89, 1.26)  | 1.01 (0.88, 1.16)  |

CDI = Clostridium difficile infection, UTI = urinary tract infection. *Bonferroni adjusted P < 0.0017.

Similar trends were noted for UTI discharges. Being 65 years or older (aOR = 2.4, men only), the highest income category (aOR = 1.5, men only), increasing comorbidities (aOR men = 1.3; aOR women = 1.7), urban teaching status (aOR men = 1.9; aOR women = 1.7), and urban non-teaching status (aOR = 1.4, women only) were significantly associated with increased likelihood of secondary CDI. Chi-square analyses demonstrated that in-hospital mortality was higher among pneumonia patients diagnosed with CDI than among those without, for both men (13% vs. 4%, P < 0.0001) and women (11% vs. 4%, P < 0.0001). A similar effect of CDI on in-hospital mortality was noted among UTI discharges for men (4% vs. 1%, P < 0.0001) and women (3% vs. 1%, P < 0.0001).
Table 3 shows the results of unadjusted and adjusted regression analyses evaluating the impact of CDI on in-hospital mortality and resource utilization among pneumonia and UTI cases. After adjusting for control variables, in-hospital mortality was approximately three times higher for pneumonia cases with CDI than for those without. CDI among men hospitalized for UTI increased the likelihood of in-hospital mortality four-fold; among women with UTI, CDI was associated with three times the odds of dying in the hospital.

Table 3. Regression analyses of in-hospital mortality and health resource utilization upon secondary CDI among patients with pneumonia or urinary tract infection (a)

|       | Mortality OR, unadjusted | Mortality OR, adjusted (b) | LOS IRR, unadjusted | LOS IRR, adjusted (b) | Charge % change, unadjusted | Charge % change, adjusted (b) |
Primary pneumonia with secondary CDI (c)
| Women | 3.42 (3.03, 3.86)* | 2.84 (2.51, 3.21)* | 2.40 (2.30, 2.50)* | 2.29 (2.19, 2.39)* | 86.14 (80.36, 91.92)* | 74.93 (70.66, 79.20)* |
| Men   | 3.70 (3.28, 4.17)* | 3.15 (2.79, 3.55)* | 2.54 (2.43, 2.66)* | 2.40 (2.30, 2.52)* | 93.65 (87.93, 99.38)* | 80.00 (75.56, 84.48)* |
Primary UTI with secondary CDI (d)
| Women | 3.69 (2.82, 4.84)* | 3.39 (2.58, 4.44)* | 2.19 (2.06, 2.32)* | 2.11 (1.99, 2.24)* | 62.93 (57.08, 68.78)* | 56.92 (52.41, 61.44)* |
| Men   | 4.05 (2.89, 5.67)* | 4.13 (2.95, 5.78)* | 2.14 (1.99, 2.29)* | 2.13 (1.99, 2.29)* | 67.37 (59.23, 75.50)* | 59.34 (52.30, 66.38)* |

*Bonferroni adjusted P < 0.0017. CDI = Clostridium difficile infection, UTI = urinary tract infection, OR = odds ratio, CI = confidence interval, IRR = incidence rate ratio; all values OR/IRR/% change (95% CI). (a) Binary logistic regression for in-hospital mortality, negative binomial regression for length of stay, linear regression for total charges; all procedures were survey weighted. (b) Model adjusted for patient characteristics, hospital characteristics, and year. (c) Reference = primary pneumonia without secondary CDI. (d) Reference = primary UTI without secondary CDI.

Wilcoxon rank sum tests demonstrated that median LOS was significantly longer in pneumonia cases with CDI than in those without, for both men and women (9 days vs. 4 days, P < 0.0001); similar trends were noted for UTI with versus without CDI (7 days vs. 3 days, P < 0.0001). Adjusted negative binomial regression showed that CDI among pneumonia and UTI discharges was associated with roughly twice the expected LOS (adjusted IRRs 2.11-2.40; Table 3). Among men with pneumonia, median total charges were higher in the presence of CDI ($104,131 vs. $41,157, P < 0.0001), with a similar trend among women ($96,446 vs. $40,700, P < 0.0001). Median total charges were also substantially higher for UTI cases with CDI for both men ($63,842 vs. $34,182, P < 0.0001) and women ($61,577 vs. $33,063, P < 0.0001). Multiple linear regression showed that secondary CDI was significantly associated with an increased percent change in total charges for both pneumonia and UTI cases.
For example, presence of secondary CDI increased total charges by 80% and 75% among men and women with pneumonia, respectively. Similarly, CDI was associated with 59% and 57% increase in total charges among men and women with UTI, respectively (Table 3). After conducting a sensitivity analysis among ages 65 and older, a similar trend persisted for in-hospital mortality, LOS, and total charges (Table 4). Cumulatively, presence of secondary CDI among pneumonia or UTI discharges was substantially associated with increased in-hospital mortality and health resource utilization.Table 4 Regression analyses of in-hospital mortality and health resource utilization upon secondary CDI among patients ages 65+ with pneumonia or urinary tract infection a In-hospital mortalityHealth resource utilizationLength of stayTotal chargeOR (95% CI)b IRR (95% CI)b % change (95% CI)b Primary pneumonia with secondary CDIc Women2.67 (2.34, 3.05)*2.18 (2.08, 2.29)*71.04 (66.4, 75.73)*Men3.06 (2.69, 3.48)*2.25 (2.14, 2.35)*74.52 (69.76, 79.27)*Primary UTI with secondary CDId Women3.59 (2.73, 4.71)*2.06 (1.94, 2.18)*54.99 (50.20, 59.78)*Men4.35 (3.07, 6.16)*2.07 (1.93, 2.23)*57.13 (49.17, 65.08)**Bonferroni adjusted P < 0.0017CDI = Clostridium difficile infection, UTI = Urinary tract infection, OR = odds ratio, CI = confidence interval, IRR = incidence rate ratio aBinary logistic regression for in-hospital mortality, negative binomial regression for length of stay, linear regression for total charges. 
Discussion: Our study found that CDI is highly prevalent in patients with pneumonia or UTI and is associated with significantly increased in-hospital mortality, LOS, and total charges. To our knowledge, this study is the first of its kind to examine secondary CDI using a nationally representative dataset in patients with pneumonia or UTI, both frequent conditions that often require hospitalization and antimicrobial therapy. In our analyses, the prevalence of CDI is considerably higher than that reported among general hospitalized patients, but lower than the prevalence noted among those with conditions that may uniquely predispose patients to CDI, such as ulcerative colitis [11,12]. We also found a gender difference in CDI prevalence, with higher rates noted among men with UTI as compared to women. Such gender differences may be attributable to differences in antibiotic prescribing practices, as the duration of treatment for men with UTI is generally longer [34-37]. Future prospective studies should corroborate these findings and explore the underlying mechanisms.
A major finding in our study was that LOS was doubled for both men and women with secondary CDI as compared to those without. Moreover, increases in total charges of at least 75% and 55% were noted among pneumonia and UTI patients with CDI, respectively. In-hospital mortality was also substantially higher among patients with secondary CDI than among those without. Our findings on the negative impact of CDI on patient and hospital outcomes are in line with results noted among other patient populations with CDI, such as those with inflammatory bowel disease, solid organ transplant, and end-stage renal failure [11-14], reflecting the substantial burden of CDI in vulnerable patients. We identified a number of hospital and patient characteristics associated with CDI in patients with primary pneumonia or UTI. Consistent with our results, previous studies have reported a greater risk of CDI with increasing age [29,38]. A study assessing the C. difficile colitis case fatality rate also noted higher rates among those with Medicare and Medicaid [30], demonstrating the burden of CDI among patients with such insurance status, similar to the trends in our findings. We hypothesize that because Medicare patients are predominantly elderly, they may be at greater risk of negative outcomes. In addition, Medicaid has traditionally served low-income populations, so limited resources, budget constraints, and coverage restrictions could contribute to these negative rates. Not surprisingly, and in keeping with other studies, we found that an increasing number of comorbidities, as reflected by the Charlson-Deyo index, was associated with a greater risk for CDI [30]. We also found higher odds of secondary CDI among urban hospitals.
Possible reasons include greater complexity of illness or higher antibiotic prescribing rates among urban hospitals [39], though some researchers have reported the opposite [40] or no significant differences between urban and rural settings [41]. Future studies examining this issue with additional ways to account for case mix are needed. Our findings of worse outcomes in this large swath of hospitalized patients suggest opportunities for both infection prevention and stewardship in this population. Our findings have major implications for clinicians and healthcare institutions and add to the body of literature on the prevalence and impact of CDI. The substantial negative impact of secondary CDI on in-hospital mortality and health resource utilization emphasizes that those hospitalized for pneumonia or UTI represent high-risk patient populations for CDI and should be included in CDI preventive efforts. Opportunities for optimizing antimicrobial therapy to reduce this burden exist, such as severity-based treatment, de-escalation as appropriate, and adherence to treatment guidelines for the choice and duration of anti-infectives [42,43]. Future studies examining interventions for reducing the risk of, and mitigating the consequences of, CDI in patients with pneumonia and UTI are urgently needed. Our results should be interpreted in the context of study limitations. First, because NIS does not provide patient-identifiable information, ICD-9-CM codes could not be cross-validated with laboratory results. Previous research, however, has shown the validity and use of such codes for CDI detection [44,45]. Second, the unit of observation in NIS is discharges and not individual patients; therefore, the results of this study cannot assess the impact of initial versus recurrent CDI.
Third, while patient-level characteristics, such as age, gender, race/ethnicity, payer type, and neighborhood income based on zip codes, are included in the NIS dataset, other social or medical determinants, such as health literacy, education, dietary factors, and treatments, including outpatient procedures, could not be evaluated. Conclusion: In our study, using the largest inpatient dataset in the United States, we demonstrated that men have a significantly higher prevalence of CDI as compared to women. CDI was also associated with increased in-hospital mortality for both pneumonia and UTI patients, as well as increased LOS and total charges, further highlighting the negative impact of CDI and the imperative need for preventive measures.
Background: Clostridium difficile infection (CDI) remains one of the major hospital-acquired infections in the nation, often attributable to increased antibiotic use. Little research, however, exists on the prevalence and impact of CDI on patient and hospital outcomes among populations requiring such treatment. As such, the goal of this study was to examine the prevalence, risk factors, and impact of CDI among pneumonia and urinary tract infection (UTI) hospitalizations. Methods: The Nationwide Inpatient Sample (2009-2011), reflecting a 20% stratified sample of community hospitals in the United States, was used. A total of 593,038 pneumonia and 255,770 UTI discharges were included. Survey-weighted multivariable regression analyses were conducted to assess the predictors and impact of CDI among pneumonia and UTI discharges. Results: A significantly higher prevalence of CDI was present among men with UTI (13.3 per 1,000) as compared to women (11.3 per 1,000). CDI was associated with higher in-hospital mortality among discharges for pneumonia (adjusted odds ratio [aOR] for men = 3.2, women aOR = 2.8) and UTI (aOR for men = 4.1, women aOR = 3.4). Length of stay among pneumonia and UTI discharges was also doubled in the presence of CDI. In addition, CDI increased total charges by at least 75% and 55% among pneumonia and UTI discharges, respectively. Patient and hospital characteristics associated with CDI included being 65 years or older, a Charlson-Deyo comorbidity index of 2 or more, Medicare as the primary payer, and discharge from urban hospitals, among both pneumonia and UTI discharges. Conclusions: CDI occurs frequently among those hospitalized for pneumonia and UTI, and is associated with increased in-hospital mortality and health resource utilization. Interventions to mitigate the burden of CDI in these high-risk populations are urgently needed.
Background: Clostridium difficile infection (CDI) is a leading cause of hospital-acquired infection (HAI) [1-3]. In many areas of the United States, CDI has surpassed methicillin-resistant Staphylococcus aureus as the most common type of HAI [1], with approximately 333,000 initial and 145,000 recurrent hospital-onset cases in the nation [4]. Certain patient populations have a disproportionately higher risk for CDI due to host factors, frequent antibiotic use, or both. These include older adults, patients using proton pump inhibitors [5-7] or antibiotics [8,9], those with inflammatory bowel disease [10-12] or end-stage renal failure, and recipients of solid organ transplants [13,14]. In a study addressing the burden of CDI among patients with inflammatory bowel disease, Ananthakrishnan et al. [12] demonstrated four times higher mortality and a three-day longer hospital stay in the presence of CDI. Similarly, among solid organ transplant patients, the presence of CDI significantly increased in-hospital mortality, length of stay, and charges, in addition to organ complications [14]. Despite this recognized burden of CDI, limited research exists on the prevalence and impact of infection among the most common conditions that require antimicrobial treatment, with no study to date evaluating such impact among pneumonia or urinary tract infection (UTI) patients. Some recent empirical evidence has noted the co-occurrence of pneumonia and UTI with CDI in the United States [15], putatively due to the use of antimicrobial treatment, though none has evaluated the impact of such co-occurrences on patient and hospital outcomes. Misdiagnosis of pneumonia and inappropriate use of antimicrobial therapy were associated with a CDI outbreak [16]. Given the burden of CDI nationally and its increasing prevalence attributed at least in part to antibiotic use, understanding the impact of CDI among patients with pneumonia or UTI would be valuable for devising potential preventive strategies.
We undertook analyses of an existing large dataset from a nationally representative survey to assess (1) the prevalence of and factors associated with CDI among pneumonia and UTI discharges, and (2) the impact of CDI on in-hospital mortality and health resource utilization (length of stay [LOS] and total charges).
7,572
358
[ 176, 524, 443 ]
8
[ "cdi", "hospital", "uti", "pneumonia", "secondary", "characteristics", "patient", "patients", "secondary cdi", "regression" ]
[ "hospitalization antimicrobial therapy", "antibiotics inflammatory bowel", "higher antibiotic prescribing", "factors frequent antibiotic", "infection hospital mortalityhealth" ]
null
[CONTENT] Clostridium difficile | Urinary tract infection | Pneumonia | Nationwide inpatient sample | Hospital acquired infection | Nosocomial infection [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Clostridioides difficile | Clostridium Infections | Coinfection | Comorbidity | Cross Infection | Databases, Factual | Enterocolitis, Pseudomembranous | Female | Hospital Mortality | Hospitalization | Hospitals, Community | Humans | Inpatients | Male | Middle Aged | Odds Ratio | Pneumonia | Prevalence | Regression Analysis | Risk Factors | United States | Urinary Tract Infections | Young Adult [SUMMARY]
null
[CONTENT] hospitalization antimicrobial therapy | antibiotics inflammatory bowel | higher antibiotic prescribing | factors frequent antibiotic | infection hospital mortalityhealth [SUMMARY]
null
[CONTENT] cdi | hospital | uti | pneumonia | secondary | characteristics | patient | patients | secondary cdi | regression [SUMMARY]
null
[CONTENT] cdi | use | infection | burden | antimicrobial | burden cdi | organ | hospital | patients | pneumonia [SUMMARY]
null
[CONTENT] cdi | secondary | uti | secondary cdi | aor | pneumonia | regression | infection | ci | hospital [SUMMARY]
[CONTENT] cdi | increased | increased hospital mortality pneumonia | increased los total charges | data united | data united states | data united states demonstrated | pneumonia uti patients increased | total charges highlighting | total charges highlighting negative [SUMMARY]
[CONTENT] cdi | hospital | uti | pneumonia | nis | patients | regression | patient | secondary | characteristics [SUMMARY]
[CONTENT] cdi | hospital | uti | pneumonia | nis | patients | regression | patient | secondary | characteristics [SUMMARY]
[CONTENT] CDI ||| CDI ||| CDI | UTI [SUMMARY]
null
[CONTENT] CDI | UTI | 13.3 | 1,000 | 11.3 | 1,000 ||| CDI | 3.2 ||| 2.8 | UTI | 4.1 ||| 3.4 ||| UTI | CDI ||| CDI | at least 75% and 55% | UTI ||| CDI | 65 years | Charlson Deyo | 2 | Medicare | UTI [SUMMARY]
[CONTENT] CDI | UTI ||| CDI [SUMMARY]
[CONTENT] CDI ||| CDI ||| CDI | UTI ||| The Nationwide Inpatient Sample | 2009-2011 | 20% | the United States ||| 593,038 | 255,770 | UTI ||| CDI | UTI ||| CDI | UTI | 13.3 | 1,000 | 11.3 | 1,000 ||| CDI | 3.2 ||| 2.8 | UTI | 4.1 ||| 3.4 ||| UTI | CDI ||| CDI | at least 75% and 55% | UTI ||| CDI | 65 years | Charlson Deyo | 2 | Medicare | UTI ||| CDI | UTI ||| CDI [SUMMARY]
Prevalence of Premenstrual Dysphoric Disorder among Female Students of a Medical College in Nepal: A Descriptive Cross-sectional Study.
35199667
Premenstrual dysphoric disorder is a severe form of premenstrual syndrome that impairs quality of life and carries an increased risk of suicide attempts. Hormonal changes may underlie these symptoms. The present study was conducted to find out the prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal.
INTRODUCTION
This descriptive cross-sectional study was conducted among 266 healthy young females in a medical college of Nepal from 21st June 2021 to 31st August 2021, with approval from the Institutional Review Committee (51/2021). Convenience sampling was done. A self-rated Premenstrual Symptoms Screening Tool questionnaire was used to evaluate premenstrual dysphoric disorder. The Premenstrual Symptoms Screening Tool reflects and translates categorical Diagnostic and Statistical Manual of Mental Disorders-IV criteria into a rating scale with degrees of severity. Data were analyzed using the Statistical Package for Social Sciences version 16. Point estimate at 95% confidence interval was calculated along with frequency and proportion for the binary data.
METHODS
Out of 266 female students, we found that the prevalence of premenstrual dysphoric disorder was 10 (3.8%) (1.50-6.10 at 95% Confidence Interval).
RESULTS
The prevalence of premenstrual dysphoric disorder in our study was found to be higher when compared to other similar studies.
CONCLUSIONS
[ "Cross-Sectional Studies", "Female", "Humans", "Nepal", "Premenstrual Dysphoric Disorder", "Prevalence", "Quality of Life", "Students" ]
9157674
INTRODUCTION
Menstruation is a natural, cyclic and physiological process,1 but most women feel emotional and physical discomfort a few days prior to its onset. When these symptoms affect daily activities, the condition is known as premenstrual syndrome (PMS).2 PMS occurs during the late luteal phase and abates within a few days following the menses in >90% of menstruating women.3,4 The exact cause of PMS is not known.5,6 Premenstrual dysphoric disorder (PMDD) is a severe form of PMS that recurs for at least two menstrual cycles,6 affecting 3-9% of menstruating women.7 Previous Nepalese studies have found the prevalence of PMDD to be 37%8 and 2.1%9 among medical students, respectively. PMDD merits study as it affects students' social, academic, or work performance and emotional wellbeing, and carries a risk of depression and suicide.9,10 This study aims to find out the prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal.
METHODS
A descriptive cross-sectional study was performed in two hundred sixty-six healthy young females, aged between 17 and 30 years, from 21st June 2021 to 31st August 2021, in the Department of Physiology, Kathmandu University School of Medical Sciences, with approval from the Institutional Review Committee of Kathmandu University School of Medical Sciences/Dhulikhel Hospital (IRC-KUSMS 51/2021). Students who were married, had irregular cycles for the past six months, had a history of intake of any hormonal medication, or had a major gynecological, psychological, or medical problem were excluded from the study. Convenience sampling was done. The sample size was calculated using the formula given below:

n = Z² × (p × q) / e²
  = (1.96)² × 0.5 × (1-0.5) / (0.05)²
  = 384

Where,
n = required sample size
Z = 1.96 at 95% Confidence Interval (CI)
p = prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal, taken as 50% for maximum sample size
e = margin of error, 5% in this study
Total female students from MBBS, BDS, B.Sc. Nursing, BNS and BPT (N) = 620

Adjusted sample size = n / [1 + {(n-1)/N}]
  = 384 / [1 + {(384-1)/620}]
  = 238.50

Thus, the minimum sample size required was calculated as 239. By adding 10% as a non-response rate, the minimum sample size was 263, and a sample size of 266 was taken. The prevalence of premenstrual dysphoric disorder was assessed using the Premenstrual Symptoms Screening Tool (PSST).7,9,11,12 The PSST reflects and 'translates' categorical DSM-IV criteria into a rating scale with degrees of severity. The PSST measures the severity of premenstrual symptoms, supports the diagnosis of PMDD, and is less time-consuming and more practical. Consent was obtained after explaining the risks and benefits of the study. The questionnaires were self-administered in a classroom/hostel with ample seating space to ensure privacy.
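The sample-size arithmetic above (Cochran's formula followed by a finite-population correction) can be checked directly; note that 384 / [1 + 383/620] evaluates to about 237.4 rather than the reported 238.50, a small rounding discrepancy that does not materially change the final sample. A sketch:

```python
import math

def cochran_n(z: float, p: float, e: float) -> float:
    """Cochran's sample-size formula for an infinite population."""
    return z**2 * p * (1 - p) / e**2

def fpc_adjust(n: float, N: int) -> float:
    """Finite population correction: n / (1 + (n - 1) / N)."""
    return n / (1 + (n - 1) / N)

n0 = cochran_n(1.96, 0.5, 0.05)           # 384.16, rounded to 384 in the paper
adj = fpc_adjust(384, 620)                # ~237.4 (paper reports 238.50)
final = math.ceil(math.ceil(adj) * 1.10)  # +10% non-response rate
print(round(n0, 2), round(adj, 1), final)
```

The paper's 239 → 263 figures are close to this; the ±1 differences come from rounding at intermediate steps.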
The questionnaire was in English, and all participants were educated in English medium and were able to understand the terms used in the questionnaire. The participants were allowed to ask about any doubts or to clarify any terms; however, none asked for any clarification. Data collection was done in a single sitting using a self-administered questionnaire which consisted of the sociodemographic profile of the students, details of their menstrual cycle, and the Premenstrual Symptoms Screening Tool. The PSST consists of 19 items: 14 premenstrual symptoms and 5 functional items, in line with DSM-IV criteria. Participants were asked to rate the extent to which they experience each symptom during the late luteal phase (stopping within a few days of bleeding) and the extent to which the symptoms interfere with each functional domain. Items are rated as "not at all," "mild," "moderate," or "severe." The items of the PSST are divided into three categories to identify PMDD: (i) "core PMS" symptoms, (ii) "other PMS" symptoms, and (iii) "functional" items. The diagnostic criteria for PMDD, using the PSST, are: (a) at least 1 of 4 "core PMS" symptoms rated severe, (b) at least 4 additional of the 14 PMS symptoms rated either moderate or severe, and (c) at least 1 of 5 "functional" items rated severe. The diagnostic criteria for severe PMS are similar to those for PMDD, but less stringent: (a) at least 1 of 4 "core PMS" symptoms rated either moderate or severe, (b) at least 4 additional PMS symptoms rated either moderate or severe, and (c) at least 1 of 5 "functional" items rated either moderate or severe. The purpose of the "severe PMS" classification was to identify females who experience "clinically significant" PMS but do not meet the criteria for PMDD. The remaining subjects were considered as having no or mild PMS. Data were analyzed using IBM Statistical Package for the Social Sciences version 16 software.
The data were analyzed using descriptive statistics and have been presented as means, standard deviations, frequencies, and percentages. Point estimate at 95% confidence interval was calculated along with frequency and proportion for the binary data.
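The PSST decision rules described above are mechanical enough to state as a short classifier. This is a sketch under stated assumptions: ratings are coded 0-3 ("not at all" to "severe"), the first 4 of the 14 symptom items are the "core PMS" symptoms, and "at least 4 additional" is read as at least 5 qualifying items overall (the qualifying core item plus four more):

```python
SEVERE, MODERATE = 3, 2

def classify_psst(symptoms: list[int], functional: list[int]) -> str:
    """Classify one respondent per the PSST rules.

    symptoms: 14 ratings (items 1-4 are the 'core PMS' symptoms);
    functional: 5 ratings; each item rated 0 (not at all) to 3 (severe).
    """
    assert len(symptoms) == 14 and len(functional) == 5
    core = symptoms[:4]
    moderate_or_worse = sum(s >= MODERATE for s in symptoms)
    # PMDD: >=1 core symptom severe, >=4 additional symptoms at least
    # moderate, and >=1 functional item severe.
    if (any(s == SEVERE for s in core)
            and moderate_or_worse >= 5
            and any(f == SEVERE for f in functional)):
        return "PMDD"
    # Severe PMS: same structure with moderate-or-severe thresholds.
    if (any(s >= MODERATE for s in core)
            and moderate_or_worse >= 5
            and any(f >= MODERATE for f in functional)):
        return "severe PMS"
    return "no or mild PMS"

# A respondent with one severe core symptom, four moderate other symptoms,
# and severe interference with work meets the PMDD criteria:
ratings = [3, 0, 0, 0, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0]
print(classify_psst(ratings, [3, 0, 0, 0, 0]))  # PMDD
```

Lowering every rating in that example from severe to moderate would yield "severe PMS" instead, mirroring the less stringent criteria.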
RESULTS
Out of 266 female medical students, we found that the prevalence of PMDD was 10 (3.8%) (1.50-6.10 at 95% Confidence Interval). The 266 students who participated in our study were between 17 and 30 years of age. The mean age of the participants was 21.72±1.99 years, with a mean BMI of 21.76±3.05 kg/m² (Table 1; PMDD = premenstrual dysphoric disorder). The most commonly reported symptom was anger/irritability, 10 (100%), followed by depressed mood, lack of concentration, and feeling overwhelmed or out of control (Table 2). Among participants with premenstrual dysphoric disorder, all respondents narrated that their symptoms interfered severely with work efficiency or productivity, followed by home responsibility (Table 3).
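The reported interval is consistent with a normal-approximation (Wald) confidence interval for a proportion: the exact proportion 10/266 gives (1.5, 6.0), and the published 1.50-6.10 is reproduced when the rounded 3.8% point estimate is used. A quick check:

```python
import math

def wald_ci_pct(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation CI for a proportion, returned in percent."""
    half = z * math.sqrt(p * (1 - p) / n)
    return (round((p - half) * 100, 1), round((p + half) * 100, 1))

print(wald_ci_pct(10 / 266, 266))  # (1.5, 6.0) with the exact proportion
print(wald_ci_pct(0.038, 266))     # (1.5, 6.1) with the rounded 3.8%
```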
CONCLUSIONS
The prevalence of premenstrual dysphoric disorder in our study was found to be higher when compared to other similar studies. PMS and PMDD are important health problems among university students. The premenstrual symptoms interfere with work efficiency or productivity, social life, and relationships with colleagues during that particular period of the menstrual cycle.
[ "Menstruation is a natural, cyclic and physiological process,1 but most awomen feel emotional and physical discomfort a few days prior to its onset. When these symptoms affect daily activities, it is known as premenstrual syndrome (PMS).2 PMS occurs during late luteal phase and abating within a few days following the menses in > 90% of menstruating women.3,4 The exact cause of PMS is not known.5,6 Premenstrual dysphoric disorder (PMDD) is a severe form of PMS and recurs for at least two menstrual cycles,6 affecting 3-9% menstruating women.7\nPrevious Nepalese studies have found prevalence of PMDD as 37%8 and 2.1 %9in the medical students respectively. It should be studied as it affects student's social, academic or work performance and emotional wellbeing and carries risk of depression and suicide.9,10\nThe study aims to find out the prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal.", "A descriptive cross-sectional study was performed in two hundred sixty-six healthy young females, aged between 17 to 30 years, from 21st June 2021 to 31st August 2021 , in the Department of Physiology, Kathmandu University School of Medical Sciences with approval from the Institutional Review Committee of Kathmandu University School of Medical Sciences/Dhulikhel Hospital (IRC-KUSMS 51/2021). Students who were married, had irregular cycles for past six months, had a history of intake of any hormonal medication, or a major gynecological, psychological or medical problem were excluded from the study. 
Convenience sampling was done.\nThe sample size was calculated using the formula as given below:\nn = Z2 × (p × q) / e2\n  = (1.96)2 × 0.5 × (1-0.5) / (0.05)2\n  = 384\nWhere,\nn= required sample size\nZ= 1.96 at 95% Confidence Interval (CI)\np= prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal taken as 50% for maximum sample size\ne= margin of error, 5% in this study\nTotal female students from MBBS, BDS, B. Sc. Nursing, BNS and BPT (N)= 620\nAdjusted sample size= n/[1+{(n-1)/N]}]\n  = 384 / [1 + {(384-1)/620}]\n  = 238.50\nThus, the minimum number of the sample size required was calculated as 239. By adding 10% as a non-response rate, the minimum sample size was 263 and a sample size of 266 was taken.\nThe prevalence of premenstrual dysphoric disorders was assessed using Premenstrual Symptom Screening Tool (PSST).7,9,11,12 The PSST reflects and 'translates' categorical DSM-IV criteria into a rating scale with degrees of severity. PSST measures the severity of premenstrual symptoms, and the diagnosis of PMDD, and is less time consuming and more practical.\nConsent was obtained after explaining the risk and benefits of the study. The questionnaires were selfadministered in a classroom/Hostel with ample seating space to ensure privacy. The questionnaire was in English and all participants were educated in English medium and were able to understand the terms used in the questionnaire. The participants were allowed to ask any doubts or to clarify any terms, etc. However, none asked for any clarification. Data collection was done in a single sitting using a self-administered questionnaire which consisted of sociodemographic profile of the students, details of their menstrual cycle and the Premenstrual Symptoms Screening Tool. The PSST consists of 19 items, 14 premenstrual symptoms, and 5 functional items, in line with DSM-IV criteria. 
Participants were asked to rate the extent to which they experience each symptom during the late luteal phase and stopping within a few days of bleeding and the extent to which the symptoms interfere with each functional domain. Items are rated as \"not at all,\" \"mild,\" \"moderate,\" or \"severe.\"\nThe items of the PSST are divided into three categories to identify PMDD: (i) \"core PMS\" symptoms, (ii) \"other PMS\" symptoms, and (iii) \"functional\" items. The diagnostic criteria for PMDD, using the PSST, are: (a) at least 1 of 4 \"core PMS\" symptoms rated severe, (b) at least 4 additional of 1 to 14 PMS symptoms rated either moderate or severe, and (c) at least 1of 5 \"functional\" items rated severe. The diagnostic criteria for severe PMS are similar with PMDD, but less stringent: (a) at least 1 of 4 \"core PMS\" symptoms rated either moderate or severe, (b) at least 4 additional PMS symptoms rated either moderate or severe, and (c) at least 1 of 5 \"functional\" items rated either moderate or severe. The purpose of the \"severe PMS\" classification was to identify females who experience \"clinically significant\" PMS but do not meet criteria for PMDD. The rest of the subjects were considered as no or mild PMS.\nData were analyzed using IBM Statistical Package for the Social Sciences version 16 software. The data were analyzed using descriptive statistics and have been presented as means, standard deviations, frequencies, and percentages. Point estimate at 95% confidence interval was calculated along with frequency and proportion for the binary data.", "Out of 266 female medical students, we found that the prevalence of PMDD was 10 (3.8%) (1.50-6.10 at 95% Confidence Interval). Two hundred sixty-six students who participated in our study were between 17 and 30 years of age. 
The mean age of the participants was 21.72±1.99 years, with mean BMI 21.76±3.05 kg/m2 (Table 1).
PMDD: premenstrual dysphoric disorder.
The most commonly reported symptom was anger/irritability, 10 (100%), followed by depressed mood, lack of concentration, and feeling overwhelmed or out of control (Table 2).
Among participants with premenstrual dysphoric disorder, all respondents reported that their symptoms interfered severely with work efficiency or productivity, followed by home responsibilities (Table 3).", "Our study was conducted among young females of Kathmandu University School of Medical Sciences, attending courses of different streams. In the present study, 3.8% fulfilled the criteria for PMDD. This finding is comparable with a study conducted among college students of Bhavnagar, which found a PMDD prevalence of 3.7%.13 A study done in a teaching hospital in Kathmandu, Nepal found that 2.1% of respondents had PMDD.9 In contrast to our study, another study conducted in Palpa, Nepal found that 39.2% of students had PMDD, a much higher rate than ours.8 A study conducted in Japan among high school athletes found that 1.3% of participants had PMDD and 8.9% had moderate to severe PMS.14 However, Steiner, et al. reported the prevalence of PMDD as 8.3%.12 Thakur, et al. demonstrated that the prevalence of PMDD screened by the PSST was 5.04%, while the prevalence by the daily record of severity of problems form (DRSP) was 4.43%.7 Currently, the prevalence of PMDD ranges from 3-9% in women of reproductive age.7,15 The wide variation in the incidence of PMDD is due to geographical and cultural variation, the type of study population, and the different diagnostic criteria and methodologies used to diagnose PMDD.16-18
Overall, 80.5% of respondents reported dysmenorrhea, and 90% of participants with PMDD had dysmenorrhea. This figure is consistent with the study conducted by Aryal, et al., which
reported that all participants with PMDD (100%) had dysmenorrhea.8 A significant relationship exists between dysmenorrhea and PMS/PMDD, suggesting that dysmenorrhea may provoke PMDD.8,9
In this study the most common symptom reported was anger/irritability (100%), followed by depression, lack of concentration, and feeling overwhelmed or out of control, all three reported by 90% of the participants. This finding is similar to previous studies conducted by several authors.9,19,20 A positive correlation exists between the severity of PMDD and the severity of depression as well as anxiety.15 In contradiction to our result, another study found the most common symptoms reported among the PMDD group were fatigue/lack of energy (100%), followed by physical symptoms, irritability/anger and decreased interest in home activities.7 The exact cause and pathophysiology of PMS/PMDD are not known. Investigators have hypothesized that fluctuations in sex steroids, an altered GABA neurotransmitter system with changed functional sensitivity of the GABA receptor, and decreased serotonin activity might be involved in the pathogenesis of PMDD. GABA and serotonin are involved in regulating mood, behavior, and cognitive functions. GABA is the main inhibitory neurotransmitter in the mammalian brain and is crucial for the regulation of anxiety, alertness, and seizures.21 In this study we found that impairment was present in all areas of functioning, most frequently in work efficiency or productivity (100%), followed by home responsibilities (90%).
A limitation of the present study was the use of a retrospective, self-reported research method, which could lead to recall bias. The high prevalence of PMDD among young adults warrants further large-scale studies to evaluate the impact of PMDD on academic performance and quality of life, and effective interventions for alleviating the symptoms.", "The prevalence of Premenstrual Dysphoric Disorder in our study was found to be higher when compared to other similar studies.
PMS and PMDD are important health problems among university students. The premenstrual symptoms interfere with work efficacy or productivity, social life and relationships with colleagues during that particular period of the menstrual cycle." ]
[ "intro", "methods", "results", "discussion", "conclusions" ]
[ "female", "premenstrual dysphoric syndrome", "premenstrual syndrome" ]
INTRODUCTION: Menstruation is a natural, cyclic and physiological process,1 but most women feel emotional and physical discomfort a few days prior to its onset. When these symptoms affect daily activities, the condition is known as premenstrual syndrome (PMS).2 PMS occurs during the late luteal phase and abates within a few days following the menses in >90% of menstruating women.3,4 The exact cause of PMS is not known.5,6 Premenstrual dysphoric disorder (PMDD) is a severe form of PMS that recurs for at least two menstrual cycles,6 affecting 3-9% of menstruating women.7 Previous Nepalese studies have found the prevalence of PMDD in medical students to be 37%8 and 2.1%,9 respectively. It should be studied because it affects students' social, academic or work performance and emotional wellbeing, and carries a risk of depression and suicide.9,10 This study aims to find out the prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal. METHODS: A descriptive cross-sectional study was performed in two hundred sixty-six healthy young females, aged between 17 and 30 years, from 21st June 2021 to 31st August 2021, in the Department of Physiology, Kathmandu University School of Medical Sciences, with approval from the Institutional Review Committee of Kathmandu University School of Medical Sciences/Dhulikhel Hospital (IRC-KUSMS 51/2021). Students who were married, had irregular cycles for the past six months, had a history of intake of any hormonal medication, or had a major gynecological, psychological or medical problem were excluded from the study. Convenience sampling was done.
The sample size was calculated using the formula given below: n = Z² × (p × q) / e² = (1.96)² × 0.5 × (1-0.5) / (0.05)² = 384, where n = required sample size, Z = 1.96 at 95% Confidence Interval (CI), p = prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal, taken as 50% for maximum sample size, and e = margin of error, 5% in this study. Total female students from MBBS, BDS, B.Sc. Nursing, BNS and BPT (N) = 620. Adjusted sample size = n / [1 + {(n-1)/N}] = 384 / [1 + {(384-1)/620}] = 238.50. Thus, the minimum sample size required was calculated as 239. By adding 10% as a non-response rate, the minimum sample size was 263, and a sample size of 266 was taken. The prevalence of premenstrual dysphoric disorder was assessed using the Premenstrual Symptom Screening Tool (PSST).7,9,11,12 The PSST reflects and 'translates' categorical DSM-IV criteria into a rating scale with degrees of severity. The PSST measures the severity of premenstrual symptoms, supports the diagnosis of PMDD, and is less time consuming and more practical. Consent was obtained after explaining the risks and benefits of the study. The questionnaires were self-administered in a classroom/hostel with ample seating space to ensure privacy. The questionnaire was in English; all participants were educated in English medium and were able to understand the terms used in the questionnaire. The participants were allowed to ask about any doubts or to clarify any terms; however, none asked for clarification. Data collection was done in a single sitting using a self-administered questionnaire which consisted of the sociodemographic profile of the students, details of their menstrual cycle and the Premenstrual Symptoms Screening Tool. The PSST consists of 19 items: 14 premenstrual symptoms and 5 functional items, in line with DSM-IV criteria.
Participants were asked to rate the extent to which they experienced each symptom during the late luteal phase (stopping within a few days of bleeding) and the extent to which the symptoms interfered with each functional domain. Items are rated as "not at all," "mild," "moderate," or "severe." The items of the PSST are divided into three categories to identify PMDD: (i) "core PMS" symptoms, (ii) "other PMS" symptoms, and (iii) "functional" items. The diagnostic criteria for PMDD, using the PSST, are: (a) at least 1 of 4 "core PMS" symptoms rated severe, (b) at least 4 additional of the 14 PMS symptoms rated either moderate or severe, and (c) at least 1 of 5 "functional" items rated severe. The diagnostic criteria for severe PMS are similar to those for PMDD, but less stringent: (a) at least 1 of 4 "core PMS" symptoms rated either moderate or severe, (b) at least 4 additional PMS symptoms rated either moderate or severe, and (c) at least 1 of 5 "functional" items rated either moderate or severe. The purpose of the "severe PMS" classification was to identify females who experience "clinically significant" PMS but do not meet criteria for PMDD. The remaining subjects were classified as having no or mild PMS. Data were analyzed using IBM Statistical Package for the Social Sciences version 16 software. The data were analyzed using descriptive statistics and have been presented as means, standard deviations, frequencies, and percentages. Point estimate at 95% confidence interval was calculated along with frequency and proportion for the binary data. RESULTS: Out of 266 female medical students, we found that the prevalence of PMDD was 10 (3.8%) (1.50-6.10 at 95% Confidence Interval). The 266 students who participated in our study were between 17 and 30 years of age. The mean age of the participants was 21.72±1.99 years, with mean BMI 21.76±3.05 kg/m2 (Table 1).
PMDD: premenstrual dysphoric disorder. The most commonly reported symptom was anger/irritability, 10 (100%), followed by depressed mood, lack of concentration, and feeling overwhelmed or out of control (Table 2). Among participants with premenstrual dysphoric disorder, all respondents reported that their symptoms interfered severely with work efficiency or productivity, followed by home responsibilities (Table 3). DISCUSSION: Our study was conducted among young females of Kathmandu University School of Medical Sciences, attending courses of different streams. In the present study, 3.8% fulfilled the criteria for PMDD. This finding is comparable with a study conducted among college students of Bhavnagar, which found a PMDD prevalence of 3.7%.13 A study done in a teaching hospital in Kathmandu, Nepal found that 2.1% of respondents had PMDD.9 In contrast to our study, another study conducted in Palpa, Nepal found that 39.2% of students had PMDD, a much higher rate than ours.8 A study conducted in Japan among high school athletes found that 1.3% of participants had PMDD and 8.9% had moderate to severe PMS.14 However, Steiner, et al. reported the prevalence of PMDD as 8.3%.12 Thakur, et al. demonstrated that the prevalence of PMDD screened by the PSST was 5.04%, while the prevalence by the daily record of severity of problems form (DRSP) was 4.43%.7 Currently, the prevalence of PMDD ranges from 3-9% in women of reproductive age.7,15 The wide variation in the incidence of PMDD is due to geographical and cultural variation, the type of study population, and the different diagnostic criteria and methodologies used to diagnose PMDD.16-18 Overall, 80.5% of respondents reported dysmenorrhea, and 90% of participants with PMDD had dysmenorrhea. This figure is consistent with the study conducted by Aryal, et al., which
reported that all participants with PMDD (100%) had dysmenorrhea.8 A significant relationship exists between dysmenorrhea and PMS/PMDD, suggesting that dysmenorrhea may provoke PMDD.8,9 In this study the most common symptom reported was anger/irritability (100%), followed by depression, lack of concentration, and feeling overwhelmed or out of control, all three reported by 90% of the participants. This finding is similar to previous studies conducted by several authors.9,19,20 A positive correlation exists between the severity of PMDD and the severity of depression as well as anxiety.15 In contradiction to our result, another study found the most common symptoms reported among the PMDD group were fatigue/lack of energy (100%), followed by physical symptoms, irritability/anger and decreased interest in home activities.7 The exact cause and pathophysiology of PMS/PMDD are not known. Investigators have hypothesized that fluctuations in sex steroids, an altered GABA neurotransmitter system with changed functional sensitivity of the GABA receptor, and decreased serotonin activity might be involved in the pathogenesis of PMDD. GABA and serotonin are involved in regulating mood, behavior, and cognitive functions. GABA is the main inhibitory neurotransmitter in the mammalian brain and is crucial for the regulation of anxiety, alertness, and seizures.21 In this study we found that impairment was present in all areas of functioning, most frequently in work efficiency or productivity (100%), followed by home responsibilities (90%). A limitation of the present study was the use of a retrospective, self-reported research method, which could lead to recall bias. The high prevalence of PMDD among young adults warrants further large-scale studies to evaluate the impact of PMDD on academic performance and quality of life, and effective interventions for alleviating the symptoms. CONCLUSIONS: The prevalence of Premenstrual Dysphoric Disorder in our study was found to be higher when compared to other similar studies.
PMS and PMDD are important health problems among university students. The premenstrual symptoms interfere with work efficacy or productivity, social life and relationships with colleagues during that particular period of the menstrual cycle.
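The 95% confidence interval reported in the Results (3.8%; 1.50-6.10) can be reproduced with a normal-approximation (Wald) sketch, assuming the prevalence is rounded to 3.8% before the interval is computed (the paper does not state its exact method):

```python
import math

# Normal-approximation (Wald) 95% CI for a proportion. The rounding of
# the prevalence to 3.8% before computing the interval is an assumption
# that reproduces the paper's reported bounds.
def prevalence_ci(cases, n, z=1.96):
    p = round(cases / n, 3)              # 10/266 -> 0.038
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(10, 266)
```

With these inputs the interval evaluates to roughly 1.5% to 6.1%, matching the reported 1.50-6.10.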
Background: Premenstrual dysphoric disorder is a severe form of premenstrual syndrome that impairs quality of life and carries an increased risk of suicidal attempts. Hormonal changes may underlie these symptoms. The present study was conducted to find out the prevalence of premenstrual dysphoric disorder among female students of a medical college in Nepal. Methods: This descriptive cross-sectional study was conducted among 266 healthy young females in a medical college of Nepal from 21st June 2021 to 31st August 2021 with approval from the Institutional Review Committee 51/2021. Convenience sampling was done. Self-rated questionnaire of premenstrual symptoms screening tool was used to evaluate premenstrual dysphoric disorder. The Premenstrual Symptoms Screening Tool reflects and translates categorical Diagnostic and Statistical Manual of Mental Disorders-IV criteria into a rating scale with degrees of severity. Data were analyzed using the Statistical Package for Social Sciences version 16. Point estimate at 95% confidence interval was calculated along with frequency and proportion for the binary data. Results: Out of 266 female students, we found that the prevalence of premenstrual dysphoric disorder was 10 (3.8%) (1.50-6.10 at 95% Confidence Interval). Conclusions: The prevalence of premenstrual dysphoric disorder in our study was found to be higher when compared to other similar studies.
1,775
241
[]
5
[ "pmdd", "study", "pms", "symptoms", "premenstrual", "prevalence", "students", "severe", "participants", "prevalence pmdd" ]
[ "prevalence premenstrual dysphoric", "dysmenorrhea pms", "disorders assessed premenstrual", "pmdd premenstrual", "dysmenorrhea pms pmdd" ]
[CONTENT] female | premenstrual dysphoric syndrome | premenstrual syndrome [SUMMARY]
[CONTENT] Cross-Sectional Studies | Female | Humans | Nepal | Premenstrual Dysphoric Disorder | Prevalence | Quality of Life | Students [SUMMARY]
[CONTENT] prevalence premenstrual dysphoric | dysmenorrhea pms | disorders assessed premenstrual | pmdd premenstrual | dysmenorrhea pms pmdd [SUMMARY]
[CONTENT] pmdd | study | pms | symptoms | premenstrual | prevalence | students | severe | participants | prevalence pmdd [SUMMARY]
[CONTENT] pms | known premenstrual | emotional | menstruating women | menstruating | premenstrual | known | women | days | medical [SUMMARY]
[CONTENT] sample | sample size | items | rated | size | severe | pms symptoms | pms | psst | symptoms [SUMMARY]
[CONTENT] table | 10 | mean | 21 | followed | years | participants | premenstrual dysphoric | dysphoric disorder | dysphoric [SUMMARY]
[CONTENT] premenstrual | similar studies pms | university students premenstrual | compared similar studies pms | compared similar studies | compared similar | compared | productivity social | productivity social life | productivity social life relationship [SUMMARY]
[CONTENT] pmdd | pms | premenstrual | study | symptoms | students | prevalence | premenstrual dysphoric | dysphoric | premenstrual dysphoric disorder [SUMMARY]
[CONTENT] ||| Hormonal ||| Nepal [SUMMARY]
[CONTENT] 266 | Nepal | 21st June 2021 to 31st August 2021 | the Institutional Review Committee ||| ||| ||| The Premenstrual Symptoms Screening Tool | Diagnostic ||| the Statistical Package for Social Sciences | 16 ||| Point | 95% [SUMMARY]
[CONTENT] 266 | 10 | 3.8% | 1.50-6.10 | 95% [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| Hormonal ||| Nepal ||| 266 | Nepal | 21st June 2021 to 31st August 2021 | the Institutional Review Committee ||| ||| ||| The Premenstrual Symptoms Screening Tool | Diagnostic ||| the Statistical Package for Social Sciences | 16 ||| Point | 95% ||| 266 | 10 | 3.8% | 1.50-6.10 | 95% ||| [SUMMARY]
Agonist-Evoked Increases in Intra-Platelet Zinc Couple to Functional Responses.
30597507
 Zinc (Zn2+) is an essential trace element that regulates intracellular processes in multiple cell types. While the role of Zn2+ as a platelet agonist is known, its secondary messenger activity in platelets has not been demonstrated.
BACKGROUND
 Changes in [Zn2+]i were quantified in Fluozin-3 (Fz-3)-loaded washed, human platelets using fluorometry. Increases in [Zn2+]i were modelled using Zn2+-specific chelators and ionophores. The influence of [Zn2+]i on platelet function was assessed using platelet aggregometry, flow cytometry and Western blotting.
METHODS
 Increases of intra-platelet Fluozin-3 (Fz-3) fluorescence occurred in response to stimulation by cross-linked collagen-related peptide (CRP-XL) or U46619, consistent with a rise of [Zn2+]i. Fluorescence increases were blocked by Zn2+ chelators and modulators of the platelet redox state, and were distinct from agonist-evoked [Ca2+]i signals. Stimulation of platelets with the Zn2+ ionophores clioquinol (Cq) or pyrithione (Py) caused sustained increases of [Zn2+]i, resulting in myosin light chain phosphorylation and cytoskeletal re-arrangements which were sensitive to cytochalasin-D treatment. Cq stimulation resulted in integrin αIIbβ3 activation and release of dense, but not α, granules. Furthermore, Zn2+ ionophores induced externalization of phosphatidylserine.
RESULTS
 These data suggest that agonist-evoked fluctuations in intra-platelet Zn2+ couple to functional responses, in a manner that is consistent with a role as a secondary messenger. Increased intra-platelet Zn2+ regulates signalling processes, including shape change, αIIbβ3 up-regulation and dense granule release, in a redox-sensitive manner.
CONCLUSION
[ "Blood Platelets", "Calcium", "Cations", "Chelating Agents", "Cross-Linking Reagents", "Cytosol", "Humans", "Ionophores", "Microscopy, Confocal", "Oxidation-Reduction", "Phosphatidylserines", "Phosphorylation", "Platelet Activation", "Platelet Aggregation", "Platelet Glycoprotein GPIIb-IIIa Complex", "Polycyclic Compounds", "Signal Transduction", "Zinc" ]
6327715
Introduction
Zinc (Zn 2+ ) is an essential trace element, serving as a co-factor for 10 to 15% of proteins encoded within the human genome. 1 It is acknowledged as an extracellular signalling molecule in glycinergic and GABAergic neurones, and is released into the synaptic cleft following excitation. 2 3 Zn 2+ is concentrated in atherosclerotic plaques and released from damaged epithelial cells, and is released from platelets along with their α-granule cargo following collagen stimulation. 4 Therefore, increased concentrations of unbound or labile (free) Zn 2+ are likely to be present at areas of haemostasis, and may be much higher in the microenvironment of a growing thrombus. Zn 2+ plays a role in haemostasis by contributing to wound healing, 5 and regulating coagulation, for example, as a co-factor for factor XII. 6 Labile Zn 2+ acts as a platelet agonist, being able to induce tyrosine phosphorylation, integrin α IIb β 3 activation and aggregation at high concentrations, while potentiating platelet responses to other agonists at lower concentrations. 7 Zn 2+ is directly linked to platelet function in vivo , as dietary Zn 2+ deficiency of humans or rodents manifests with a bleeding phenotype that reverses with Zn 2+ supplementation. Labile, protein-bound and membrane-bound, Zn 2+ pools are found within multiple cell types, including immune cells and neurones. These pools are inter-changeable, allowing increases in the bioavailability of Zn 2+ to Zn 2+ -sensitive proteins following signalling-dependent processes. In this manner, Zn 2+ is acknowledged to behave as a secondary messenger. 8 In nucleated cells, Zn 2+ is released from intracellular granules into the cytosol via Zn 2+ transporters, or from Zn 2+ -binding proteins such as metallothioneins, following engagement of extracellular receptors. 
For example, a role for Zn 2+ as a secondary messenger has been shown in mast cells, where engagement of the Fcε receptor I results in rapid increases in intracellular Zn 2+ ([Zn 2+ ] i ). This ‘zinc wave’ modulates transcription of cytokines and affects tyrosine phosphatase activity. 8 Zn 2+ also acts as a secondary messenger in monocytes, where stimulation with lipopolysaccharide results in increases in [Zn 2+ ] i , suggestive of a role in transmembrane signalling. 9 Agonist-evoked changes of [Zn 2+ ] i modulate signalling proteins (e.g. protein kinase C [PKC], calmodulin-dependent protein kinase II [CamKII] and interleukin receptor-associated kinase) in a similar manner to calcium (Ca 2+ )-dependent processes. 4 8 10 While the role of Zn 2+ as a secondary messenger in nucleated cells has gathered support in recent years, agonist-dependent regulation of [Zn 2+ ] i in platelets during thrombosis has yet to be demonstrated. Here, we utilize Zn 2+ -specific fluorophores, chelators and ionophores to investigate the role of [Zn 2+ ] i fluctuations in platelet behaviour. We show that agonist-evoked elevation of [Zn 2+ ] i regulates platelet shape change, dense granule release and phosphatidylserine (PS) exposure. These findings indicate a role for Zn 2+ as a secondary messenger, which may have implications for the understanding of platelet signalling pathways involved in thrombosis during adverse cardiovascular events.
Experimental Procedures
Materials : Fluozin-3-AM (Fz-3, Zn 2+ indicator) and Fluo-4-AM (Ca 2+ indicator) were from Invitrogen (Paisley, United Kingdom). Z-VAD (pan-caspase inhibitor) was from R&D Systems (Abingdon, United Kingdom). Primary anti-vasodilator-stimulated phosphoprotein (VASP) (Ser157) and anti-myosin light chain (MLC) (Ser19) antibodies were from Cambridge Bioscience (Cambridge, United Kingdom), and fluorescently conjugated PAC-1, CD62P and CD63 antibodies were from BD Biosciences (Oxford, United Kingdom). Cross-linked collagen-related peptide (CRP-XL; glycoprotein VI [GpVI] agonist) was from Professor Richard Farndale (Cambridge, United Kingdom), U46619 (thromboxane [TP]α receptor agonist) was from Tocris (Bristol, United Kingdom), thrombin (protease-activated receptor [PAR] agonist) was from Sigma Aldrich (Poole, United Kingdom) and cytochalasin-D (Cyt-D, actin polymerization inhibitor) was from AbCam (Cambridge, United Kingdom). Clioquinol (Cq, Zn 2+ ionophore, C 9 H 5 ClINO, K d Zn: 10 −7 M, K d Ca: 10 −4.9 M), pyrithione (Py, Zn 2+ ionophore, C 10 H 8 N 2 O 2 S 2 , K d Zn: 10 −7 M, K d Ca: 10 −4.9 M), A23187 (Ca 2+ ionophore, C 29 H 37 N 3 O 6 ), N,N,N′,N′-tetrakis(2-pyridylmethyl)ethylenediamine (TPEN, Zn 2+ chelator, K d Zn: 2.6 × 10 −16 M, K d Ca: 4.4 × 10 −5 M 11 12 13 14 ), dimethyl-bis-(aminophenoxy)ethane-tetraacetic acid (DM-BAPTA)-AM (C 34 H 40 N 2 O 18 , K d Zn: 7.9 × 10 −9 M, K d Ca: 110 × 10 −9 M 11 12 13 14 ) and the membrane-permeant anti-oxidizing proteins polyethylene glycol-superoxide dismutase (PEG-SOD) and PEG-catalase (CAT) were from Sigma Aldrich. Unless stated otherwise, all other reagents were from Sigma Aldrich. Preparation of washed human platelets : This study was approved by the Research Ethics Committee at Anglia Ruskin University and informed consent was obtained in accordance with the Declaration of Helsinki. Blood was donated by healthy human volunteers, free from medication for 2 weeks.
Blood was collected into 11 mM sodium citrate and washed platelets were prepared as described previously. 7 Unless otherwise stated, to isolate the mechanisms of Zn 2+ fluctuations from other cation-specific effects, experiments were performed in the absence of extracellular Ca 2+ . Cation mobilisation studies : For studies of [Zn 2+ ] i or [Ca 2+ ] i mobilization, platelet-rich plasma was loaded with Fz-3 (2 µM, 30 minutes, 37°C) or Fluo-4 (2 µM, 30 minutes, 37°C). Fz-3 is responsive to Zn 2+ in the nM range and is not significantly affected by Ca 2+ . 15 Platelets were collected by centrifugation (350 ×  g , 15 minutes), re-suspended in Ca 2+ -free Tyrode's buffer (in mM: 140 NaCl, 5 KCl, 10 HEPES, 5 glucose, 0.42 NaH 2 PO 4 , 12 NaHCO 3 , pH 7.4) and rested at 37°C for 30 minutes prior to use. Fluorescence was monitored using a Fluoroskan Ascent fluorometer (ThermoScientific, United Kingdom) with 488 nm and 538 nm excitation and emission filters, respectively. Washed Fz-3- or Fluo-4-loaded platelet suspensions were treated with ionophores or chelators to calibrate R max or R min values ( Supplementary Fig. S1 , available in the online version). Results are expressed as an increase of background-corrected fluorescence at each time point relative to baseline: ( F  −  F background )/( F 0  −  F background ). Optical aggregometry : Aggregometry was performed with washed platelet suspensions under stirring conditions at 37°C in an AggRam light transmission aggregometer (Helena Biosciences, Gateshead, United Kingdom). 7 Aggregation traces were acquired using proprietary software (Helena Biosciences) and analysed within GraphPad Prism (Version 6.03). Confocal microscopy : Images of platelets adhering to fibrinogen-coated coverslips (100 µM) were acquired using a LSM510/Axiovert laser scanning confocal microscope with a 60× oil NA1.45 objective (Zeiss, United Kingdom).
Surface coverage of DIOC 6 -stained platelets was quantified using ImageJ (v1.45, National Institutes of Health, Bethesda, Maryland, United States). Western blotting : Western blotting was performed as described previously. 7 Briefly, polyvinylidene difluoride membranes were incubated with MLC (1:400) or VASP (Ser157, 1:400) primary antibodies, and horseradish peroxidase-conjugated secondary antibodies (1:7,500). Flow cytometry : Washed platelet suspensions were incubated with fluorescently conjugated antibodies targeting markers of platelet activation: PAC-1 (α IIb β 3 activation), CD62P (α granule release) and CD63 (dense granule release). Antibody binding following agonist or ionophore stimulation was assessed using an Accuri C6 flow cytometer (BD Biosciences). Data analysis : Maximum and minimum aggregation and F / F 0 values were calculated using Microsoft Excel. Western blots were analysed using ImageJ. Data were analysed in GraphPad Prism by two-way analysis of variance followed by Tukey's post hoc test. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05.
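The background-corrected normalization described under "Cation mobilization studies" can be sketched in code. This is an illustrative reconstruction only; the function and variable names are ours, not taken from the authors' analysis scripts:

```python
def normalized_fluorescence(trace, f0, f_background):
    """Express each time point as background-corrected fluorescence
    relative to baseline: (F - F_background) / (F0 - F_background)."""
    denom = f0 - f_background
    if denom <= 0:
        raise ValueError("baseline must exceed background fluorescence")
    return [(f - f_background) / denom for f in trace]

# Hypothetical Fz-3 trace in arbitrary fluorescence units; the first
# point is the baseline itself, so its normalized value is 1.0
trace = [500.0, 520.0, 700.0, 900.0, 1000.0]
f_norm = normalized_fluorescence(trace, f0=500.0, f_background=100.0)
```

On this synthetic trace, a doubling of background-corrected fluorescence (e.g. the 900 AU point) corresponds to F/F0 = 2.0, matching how the agonist-evoked plateaus are reported in the Results.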
Results
[Zn 2+ ] i fluctuations coordinate receptor stimulation with signalling responses in nucleated cells. 8 To investigate whether intra-platelet Zn 2+ fluctuates during activation, agonist-evoked changes of [Zn 2+ ] i were monitored in washed platelet suspensions loaded with the Zn 2+ -specific fluorophore Fz-3. Stimulation with the conventional platelet agonists CRP-XL and U46619 induced rapid, dose-dependent increases of fluorescence peaking after approximately 2 minutes, consistent with increases in [Zn 2+ ] i . At 6 minutes, 1 µg/mL CRP-XL or 10 µM U46619 stimulation increased F / F 0 to 2.0 ± 0.1 and 1.2 ± 0.1 AU, respectively (compared with 0.9 ± 0.2 AU for the vehicle control, p  < 0.05, Fig. 1A , B ). Conversely, thrombin stimulation did not elevate Fz-3 fluorescence ( Fig. 1C ). These data indicate that platelet activation via GpVI and TP, but not via PARs, leads to signalling responses that elevate [Zn 2+ ] i , in a similar manner to agonist-evoked increases in [Ca 2+ ] i . Inclusion of 2 mM CaCl 2 in the extracellular medium did not significantly affect agonist-evoked responses ( Supplementary Fig. S2 , available in the online version). Agonist-dependent platelet activation via GpVI or TP, but not PARs, elevates [Zn 2+ ] i . Fz-3-labelled washed human platelets were stimulated with CRP-XL ( A ), U46619 ( B ) or thrombin ( C ) and [Zn 2+ ] i fluctuations were monitored over 6 minutes using fluorometry. ( A ) Fz-3 responses to ○ 1 µg/mL, □ 0.3 µg/mL, ▵ 0.1 µg/mL, ⋄ 0.03 µg/mL CRP-XL or • vehicle (DMSO). ( B ) Fz-3 responses to ○ 10 µM, □ 3 µM, ▵ 1 µM, ⋄ 0.3 µM U46619 or • vehicle (DMSO). ( C ) Fz-3 responses to ○ 1 U/mL, □ 0.3 U/mL, ▵ 0.1 U/mL, ⋄ 0.03 U/mL thrombin or • vehicle (DMSO). Data are mean ± standard error of the mean (SEM) from at least 8 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Experiments were performed to confirm the specificity of fluorescence fluctuations for Zn 2+ . 
Platelets were pre-treated with the intracellular Zn 2+ -specific chelator TPEN (50 µM) prior to stimulation with 1 µg/mL CRP-XL. Fz-3 responses were reduced to 1.1 ± 0.1 AU, compared with 2.0 ± 0.1 AU for CRP-XL stimulation alone ( p  < 0.05, Fig. 2A ). Interestingly, treatment with DM-BAPTA (10 µM), a non-specific cation chelator, led to a similar reduction (to 1.0 ± 0.1 AU, p  < 0.05). Abrogation of Fz-3 fluorescence was also observed following stimulation with U46619 (10 µM), where TPEN or DM-BAPTA treatment reduced F / F 0 plateau levels from 1.2 ± 0.1 to 0.8 ± 0.1 AU and 1.0 ± 0.1 AU, respectively ( p  < 0.05, Fig. 2B ). Further experiments were performed to investigate the influence of cation chelation on [Ca 2+ ] i fluctuations using Fluo-4-loaded platelets. As previously demonstrated, CRP-XL- and U46619-induced Ca 2+ signals were absent following BAPTA treatment ( F / F 0 signals were reduced from 1.6 ± 0.2 to 0.8 ± 0.1 AU, and from 1.4 ± 0.1 to 0.9 ± 0.0 AU, for CRP-XL and U46619 stimulation, respectively, p  < 0.05, Fig. 2D , E ). However, Fluo-4 fluorescence was not significantly affected by TPEN treatment (1.5 ± 0.2 and 1.2 ± 0.1 AU for CRP-XL and U46619, respectively, ns), indicating that TPEN does not chelate [Ca 2+ ] i and that Fz-3 signals are attributable to [Zn 2+ ] i with no influence from other cations. Furthermore, these data demonstrate that fluctuations in [Zn 2+ ] i do not affect agonist-evoked Ca 2+ signals. Agonist-dependent intracellular zinc ([Zn 2+ ] i ) fluctuations are sensitive to the platelet redox state. Platelets were loaded with Fz-3 ( A , B , C ) or Fluo-4 ( D , E , F ) and stimulated with CRP-XL (1 µg/mL, ○, A , D ), U46619 (10 µM, ○, B , E ) or H 2 O 2 (10 µM, ○, C , F ), during which changes in fluorescence were monitored. Where indicated, platelets were pre-treated with TPEN (▿, 50 µM), DM-BAPTA (⋄, 10 µM), PEG-SOD (□, 30 U/mL), PEG-CAT (▵, 300 U/mL) or vehicle (DMSO, •). 
Data are mean ± standard error of the mean (SEM) from at least 5 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Agonist-evoked [Zn 2+ ] i increases may result from release of membrane-bound intracellular stores or from liberation from metal-binding proteins (e.g. metallothioneins) in response to redox-mediated modifications of thiol groups. 16 To investigate the nature of the Zn 2+ source, platelets were treated with the membrane-permeant anti-oxidizing proteins PEG-SOD and PEG-CAT, 17 and CRP-XL-evoked [Zn 2+ ] i fluctuations were monitored. PEG-SOD and PEG-CAT both abolished CRP-XL-induced increases of Fz-3 fluorescence, indicating redox-dependent modulation of Zn 2+ release (PEG-SOD and PEG-CAT reduced F / F 0 plateaus following 1 µg/mL CRP-XL treatment from 2.0 ± 0.1 to 1.2 ± 0.1 AU and 1.3 ± 0.1 AU, respectively, p  < 0.05, Fig. 2A ). This is consistent with published data showing a greater capacity for GpVI to influence redox signalling than other receptors. 18 Similarly, PEG-SOD and PEG-CAT abolished U46619-induced [Zn 2+ ] i responses (to 1.0 ± 0.0 and 1.1 ± 0.0 AU, respectively, following 10 µM U46619 stimulation, p  < 0.05, Fig. 2B ). PEG-SOD and PEG-CAT did not affect CRP-XL- or U46619-mediated Fluo-4 fluorescence, suggesting that [Zn 2+ ] i , but not [Ca 2+ ] i , signals are regulated by redox-sensitive processes. Further experiments were performed to resolve the relationship between the platelet redox state and [Zn 2+ ] i fluctuations. Treatment with H 2 O 2 mimics increases in platelet reactive oxygen species (ROS). 19 H 2 O 2 increased both [Ca 2+ ] i and [Zn 2+ ] i ( F / F 0 plateaus were 1.8 ± 0.3 AU following H 2 O 2 [10 µM] stimulation of Fz-3-loaded platelets, compared with 0.9 ± 0.1 AU for vehicle-treated platelets, while H 2 O 2 stimulation increased Fluo-4 fluorescence from 0.9 ± 0.1 to 1.4 ± 0.1 AU, p  < 0.05, Fig. 2C , F ). 
H 2 O 2 -mediated [Zn 2+ ] i increases were abrogated by PEG-SOD or PEG-CAT, while [Ca 2+ ] i was unaffected ( Fig. 2E , F ). These data support a role for the platelet redox state in regulating [Zn 2+ ] i fluctuations. Having demonstrated that intra-platelet Zn 2+ rises in response to agonist stimulation, we further examined the influence of [Zn 2+ ] i on platelet responses. We hypothesized that liberation of Zn 2+ from intracellular stores (such as platelet α-granules 20 ) using specific ionophores would result in increased [Zn 2+ ] i , in a similar manner to A23187-evoked Ca 2+ responses. 21 The Zn 2+ ionophores Cq and Py have previously been used to model [Zn 2+ ] i increases in nucleated cells. 22 23 24 We utilized these reagents to model agonist-evoked [Zn 2+ ] i increases in washed platelet suspensions. Stimulation with Cq or Py produced large elevations of [Zn 2+ ] i , with F / F 0 plateaus of 7.9 ± 0.5 and 3.3 ± 0.3 AU, respectively ( p  < 0.05, Fig. 3A , B ). The extent of the [Zn 2+ ] i increase was greater than that observed following CRP-XL stimulation, suggesting that liberation from stores is not the principal means by which [Zn 2+ ] i increases following agonist stimulation. Zn 2+ ionophore-dependent Fz-3 fluorescence increases were sensitive to pre-treatment with TPEN or BAPTA, consistent with a role for Cq or Py in increasing [Zn 2+ ] i ( Fig. 3A , B ). However, [Zn 2+ ] i signals were not influenced by PEG-SOD or PEG-CAT, demonstrating that ionophore-induced [Zn 2+ ] i release is not redox sensitive. Cq or Py stimulation did not affect Fluo-4 fluorescence ( Fig. 3D , E ), indicating that Zn 2+ ionophores have a negligible affinity for Ca 2+ . A23187 increased Fluo-4 fluorescence (from 0.9 ± 0.1 to 5.8 ± 0.9 AU after 6 minutes, p  < 0.05, Fig. 3F ), but had no effect on Fz-3 fluorescence ( Fig. 3C ), demonstrating that Fz-3 fluorescence is not affected by changes in [Ca 2+ ] i . 
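The chelator specificity invoked in these experiments follows directly from the dissociation constants listed in Materials. A quick back-of-envelope comparison (values copied from the Materials list; code structure is ours) shows why TPEN discriminates Zn 2+ from Ca 2+ while DM-BAPTA does not:

```python
# Dissociation constants (M) as given in the Materials section
KD = {
    "TPEN":     {"Zn": 2.6e-16, "Ca": 4.4e-5},
    "DM-BAPTA": {"Zn": 7.9e-9,  "Ca": 110e-9},
}

def zn_over_ca_selectivity(chelator):
    """Ratio Kd(Ca)/Kd(Zn): how much more tightly the chelator
    binds Zn2+ than Ca2+ (larger = more Zn2+-selective)."""
    kd = KD[chelator]
    return kd["Ca"] / kd["Zn"]

# TPEN binds Zn2+ roughly 10^11-fold more tightly than Ca2+, so it
# strips labile Zn2+ while leaving Ca2+ signals intact; DM-BAPTA's
# ~14-fold preference means it chelates both cations.
```

This arithmetic is consistent with the observation that TPEN abolishes Fz-3 but not Fluo-4 responses, whereas DM-BAPTA suppresses both.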
In a similar manner to agonist-dependent Ca 2+ signalling, A23187-dependent [Ca 2+ ] i increases were abrogated by BAPTA but were unaffected by TPEN. Thus, Fluo-4 fluorescence is not influenced by Zn 2+ . Treatment of platelets with the Zn 2+ ionophores clioquinol (Cq) or pyrithione (Py) elevates [Zn 2+ ] i , but not [Ca 2+ ] i . Washed platelet suspensions were loaded with Fz-3 ( A , B , C ) or Fluo-4 ( D , E , F ) and stimulated with Cq (○, 300 µM, A , D ), Py (○, 300 µM, B , E ) or A23187 (○, C , F ). Where indicated, platelets were pre-treated with TPEN (50 µM, ▿), DM-BAPTA (10 µM, ⋄), PEG-SOD (30 U/mL, □), PEG-CAT (300 U/mL, ▵) or vehicle (DMSO, •). Data are mean ± standard error of the mean (SEM) from at least 6 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Our data confirm that platelet [Zn 2+ ] i increases can be modelled using the Zn 2+ ionophores Cq and Py. Next, we examined the influence of increases in [Zn 2+ ] i on platelet aggregation. High concentrations of Cq (300 µM) resulted in an initial decrease in light transmission, followed by a substantial increase, consistent with shape change and aggregation. Platelet aggregates were present upon visual inspection of test cuvettes at the end of each experiment (not shown). The extent of Cq-induced aggregation (300 µM, 27.8 ± 5.0%) was lower than that for A23187 (300 µM, 70.2 ± 8.6%, p  < 0.05, Fig. 4A , B ). Treatment with lower concentrations of Cq (30 µM) resulted in shape change only, with no progression to aggregation. Py stimulation did not cause aggregation but did result in shape change ( Fig. 4A – C ). Responses to Py were biphasic, with intermediate concentrations (10–30 µM) resulting in shape change, and higher concentrations having no effect. Stimulation of platelets with Zn 2+ ionophores leads to shape change. 
( A ) Washed platelet suspensions were stimulated with different concentrations of clioquinol (Cq), pyrithione (Py) or A23187, during which changes in light transmission were monitored using optical aggregometry. Initial downward deflections indicate a reduction in light transmission consistent with shape change. Subsequent upward deflections indicate increases in light transmission, consistent with platelet aggregation. The maximum ( B ) and minimum ( C ) extent of aggregation were calculated for each ionophore (▪ Cq, ▵ Py, ○ A23187). Data are mean ± standard error of the mean (SEM) from at least 5 experiments. The degree of shape change was quantified by calculating the lowest light transmission during ionophore-induced aggregation (denoted minimum aggregation, %). Shape change following Cq or A23187 treatment was comparable (minimum aggregation for 30 µM Cq or Py was –13.3 ± 2.9 and –27.5 ± 2.2%, respectively, compared with –15.1 ± 2.7% for 30 µM A23187, ns, Fig. 4C ). These data are consistent with a role for [Zn 2+ ] i in regulating cytoskeletal changes in a similar manner to [Ca 2+ ] i -induced shape change. To confirm that the changes in light transmission were a biological, rather than a chemical, phenomenon, we took a pharmacological approach by pre-treating platelets with the actin polymerization inhibitor Cyt-D prior to ionophore stimulation. Cyt-D abrogated Cq-, Py- and A23187-induced shape change, consistent with a genuine biological effect. The minimum aggregation for Cyt-D-treated and untreated platelets was –5.7 ± 2.1 and –16.7 ± 1.9%, respectively, following Cq stimulation, –9.1 ± 1.9 and –33.2 ± 2.4%, respectively, following Py stimulation, and –3.7 ± 1.4 and –13.0 ± 1.8%, respectively, following A23187 stimulation (30 µM, p  < 0.05, Fig. 5A , B ). 
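The "maximum" and "minimum aggregation" metrics used here can be read directly off a baseline-relative light-transmission trace. A minimal sketch with a hypothetical trace (values are illustrative, not measured data; negative deflection = shape change, positive = aggregation):

```python
def aggregation_extrema(trace):
    """Return (minimum, maximum) aggregation (%) from a light-transmission
    trace expressed relative to baseline (0%). The minimum captures the
    downward deflection of shape change; the maximum captures the extent
    of aggregation."""
    return min(trace), max(trace)

# Hypothetical Cq response: a shape-change dip followed by aggregation
trace = [0.0, -5.0, -13.0, -8.0, 10.0, 25.0, 28.0]
min_agg, max_agg = aggregation_extrema(trace)
```

A shape-change-only response (e.g. low-dose Py) would show a negative minimum with a maximum near 0%, which is how the two behaviours are distinguished in Fig. 4B and C.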
Pre-treatment of platelets with TPEN abrogated Cq- or Py-induced shape change but had no effect on A23187 treatment (minimum aggregation following TPEN treatment was –4.9 ± 1.2, –11.1 ± 2.3 and –17.9 ± 2.6% for Cq, Py and A23187, respectively, p  < 0.05, Fig. 5A , B ). These data are consistent with a role for [Zn 2+ ] i in regulating cytoskeletal re-arrangements. The resistance of A23187-induced shape change to TPEN treatment suggests that the contribution of Ca 2+ signals to cytoskeletal re-arrangement occurs independently of Zn 2+ signals, and could indicate different mechanisms for Zn 2+ - and Ca 2+ -induced shape change. Ionophore-induced shape change is sensitive to pre-treatment with cytochalasin-D (Cyt-D) or TPEN. ( A ) Representative aggregometry traces showing clioquinol (Cq)-, pyrithione (Py)- or A23187-induced (30 µM) shape change following pre-treatment with TPEN (50 µM) or Cyt-D (10 µM). ( B ) Quantitation of minimum aggregation following treatment of platelets pre-treated with TPEN (▪ 25 µM), Cyt-D (▪ 10 µM) or vehicle (□ DMSO) prior to stimulation with Cq, Py or A23187 (30 µM). Data are mean ± standard error of the mean (SEM) of at least 6 experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. [Zn 2+ ] i -dependent cytoskeletal changes were further investigated by visualizing platelet spreading on fibrinogen. TPEN-treated platelets were able to adhere to fibrinogen but did not spread, with no visible lamellipodia or filopodia ( Fig. 6A ). Mean platelet surface coverage after 10 minutes was 12.8 ± 1.5 µm, compared with 22.7 ± 1.6 µm for untreated platelets ( Fig. 6B ). Regulation of Cq-induced shape change was investigated by assaying VASP and MLC, which alter their phosphorylation status during cytoskeletal re-arrangements. 25 26 Cq- or Py-induced shape change was accompanied by increased phosphorylation of MLC (Ser19), confirming a role for [Zn 2+ ] i in the signalling process leading to cytoskeletal changes. 
In contrast to PGE 1 treatment, ionophore treatment did not induce VASP phosphorylation, indicating that Zn 2+ does not influence the activity of cyclic nucleotide-dependent kinases such as protein kinase A (PKA) or protein kinase G (PKG). 27 [Zn 2+ ] i regulates platelet shape change and phosphorylation of cytoskeletal regulators. Washed platelet suspensions were incubated on fibrinogen-coated coverslips following pre-treatment with 50 µM TPEN or vehicle control (DMSO). ( A ) Representative images of platelet spreading. ( B ) Quantification of the surface coverage by adherent platelets (○ DMSO, • 50 µM TPEN, n  = 3). ( C ) Representative Western blot showing increased MLC phosphorylation following stimulation of platelets for 2 minutes with vehicle (DMSO), thrombin (1 U/mL), A23187 (100 µM), clioquinol (Cq) (300 µM) or pyrithione (Py) (300 µM). ( D ) Representative Western blot showing VASP phosphorylation following stimulation of platelets for 2 minutes with vehicle (DMSO), prostaglandin E 1 (PGE 1 ) (1 U/mL), A23187 (100 µM), Cq (300 µM) or Py (300 µM). VASP phosphorylation was unaffected by Zn 2+ ionophore treatment. Blots are representative of three experiments. Data are means ± standard error of the mean (SEM) from at least 5 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. These data indicate that increases in [Zn 2+ ] i initiate platelet activation events, such as shape change and aggregation. To better understand the extent to which changes in [Zn 2+ ] i regulate platelet activation, the influence of Cq treatment on conventional markers of platelet activation was investigated. In a similar manner to thrombin and A23187, Cq or Py stimulation (300 µM) substantially increased platelet PAC-1 binding (59.7 ± 5.5, 64.5 ± 5.8, 47.3 ± 4.1 and 37.8 ± 5.0%, respectively, p  < 0.05, Fig. 7A ), consistent with earlier observations correlating Cq stimulation with aggregation ( Fig. 
4 ), and supportive of a role for [Zn 2+ ] i in α IIb β 3 activation. Cq or Py increased CD63, but not CD62P, externalization (55.9 ± 7.8 and 5.7 ± 2.8%, respectively, following Cq stimulation, and 50.2 ± 2.6 and 6.9 ± 2.2% following Py stimulation, Fig. 7A ), indicating that increases in [Zn 2+ ] i initiate dense, but not α, granule secretion. This differed from both thrombin (CD62P: 62.9 ± 5.5%, CD63: 48.8 ± 3.0%) and A23187 (CD62P: 31.1 ± 5.7%, CD63: 55.1 ± 5.0%), which regulate both α and dense granule release. Increasing platelet [Zn 2+ ] i using Zn 2+ ionophores increases platelet activation markers. ( A ) Washed platelet suspensions were stimulated with thrombin (Thr, 1 U/mL), clioquinol (Cq) (300 µM), pyrithione (Py) (300 µM) or A23187 (100 µM), and changes of PAC-1 (white), CD62P (grey) and CD63 (black) binding were obtained after 60 minutes. ( B ) Washed platelet suspensions were stimulated with CRP-XL (1 µg/mL), U46619 (10 µM) or thrombin (1 U/mL) following pre-treatment with TPEN (50 µM), and changes of PAC-1 (white), CD62P (grey) and CD63 (black) binding were obtained after 60 minutes. ( C ) Washed platelet suspensions were treated with Ca 2+ or Zn 2+ ionophores, or conventional platelet agonists, prior to analysis of annexin-V binding by flow cytometry. □ Clioquinol (300 µM), ▵ pyrithione (300 µM), ○ A23187 (300 µM), • CRP (1 µg/mL), ▪ thrombin (1 U/mL), ▪ vehicle (DMSO). ( D ) Platelet suspensions were pre-treated with the caspase inhibitor Z-VAD (▵, 1 µM), the Zn 2+ chelator TPEN (▪, 25 µM) or vehicle (□) prior to stimulation with clioquinol (300 µM). ○ Unstimulated platelets. Changes in the percentage of platelets binding annexin-V were recorded. Washed platelet suspensions were pre-treated with Z-VAD (1 µM) or TPEN (50 µM) prior to stimulation with the conventional agonists CRP-XL (1 µg/mL, E ), U46619 (10 µM, F ) or thrombin (1 U/mL, G ). Changes in annexin-V binding were monitored using flow cytometry. 
○ Vehicle, □ Z-VAD (1 µM), ▵ TPEN (50 µM), ▿ DMSO (no agonist). Data are means ± standard error of the mean (SEM) of at least 3 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Further experiments were performed to assess the influence of [Zn 2+ ] i on agonist-evoked changes in platelet activatory markers. TPEN reduced increases of PAC-1 or CD63 binding in response to CRP-XL (1 µg/mL, from 55.4 ± 4.9 to 29.0 ± 1.5% for PAC-1 binding, and from 46.4 ± 4.0 to 24.2 ± 2.5% for CD63 binding, p  < 0.05), U46619 (10 µM, from 36.2 ± 2.8 to 16.5 ± 1.2% for PAC-1 binding, and from 21.9 ± 3.6 to 10.7 ± 1.3% for CD63 binding, p  < 0.05) or thrombin (1 U/mL, from 64.6 ± 5.2 to 32.1 ± 3.6% for PAC-1 binding, and from 46.8 ± 3.8 to 17.6 ± 2.3% for CD63 binding, p  < 0.05), but had no effect on agonist-evoked CD62P increases ( Fig. 7B ). This provides further support for a role of [Zn 2+ ] i in differentially regulating platelet granule secretion. Extracellular Zn 2+ signalling and agonist-induced changes in [Zn 2+ ] i have both been linked to apoptosis and related responses in nucleated cells. 28 29 30 31 However, the role of Zn 2+ in PS exposure during platelet activation has yet to be studied. To investigate the influence of [Zn 2+ ] i on PS exposure, platelets were treated with ionophores and annexin-V binding was quantified in real time. Increasing platelet [Zn 2+ ] i with Cq (300 µM) resulted in a concurrent increase in annexin-V binding. PS exposure evolved more slowly with Zn 2+ ionophore treatment than with A23187, but reached similar plateau levels (90.0 ± 0.9 and 88.6 ± 2.7% for Cq and A23187, respectively, Fig. 7C ), indicating that most platelets in the population were annexin-V positive. This differed from responses to the conventional agonists thrombin and CRP-XL, which induced PS exposure in a sub-set of platelets (35.0 ± 6.2 and 34.4 ± 6.2%, respectively). 
Cq-induced annexin-V binding was sensitive to TPEN (6.6 ± 6.3% positive platelets at 60 minutes), confirming a role for Zn 2+ . Furthermore, pre-treatment with the caspase inhibitor Z-VAD attenuated Cq-induced PS exposure (53.6 ± 4.7% at 60 minutes, p  < 0.05, Fig. 7D ). The influence of Zn 2+ on agonist-evoked annexin-V binding was also investigated. Consistent with the findings of Cohen et al, 32 we observed a reduction in agonist-evoked PS exposure in the presence of Z-VAD (1 µM) (from 34.4 ± 2.9 to 23.1 ± 2.0% following stimulation with 1 µg/mL CRP-XL, from 24.4 ± 1.8 to 15.2 ± 2.0% following stimulation with 10 µM U46619, and from 32.5 ± 4.8 to 21.2 ± 2.4% following stimulation with 1 U/mL thrombin, Fig. 7E – G , p  < 0.05). Similar reductions of annexin-V binding in TPEN-treated platelets were observed following stimulation with CRP-XL (26.3 ± 0.9%, p  < 0.05) or U46619 (21.4 ± 2.7%, p  < 0.05). However, TPEN did not affect thrombin-mediated annexin-V binding (28.3 ± 4.6%, ns). These data are consistent with a role for Zn 2+ in agonist-evoked PS exposure.
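The "percentage of annexin-V-positive platelets" reported throughout is, in effect, a threshold gate applied to per-event fluorescence. A hedged sketch of that computation (synthetic intensities and gate value are ours, not the authors' cytometer settings):

```python
def percent_positive(intensities, gate):
    """Percentage of flow-cytometry events whose fluorescence exceeds the
    gate, as used for annexin-V, PAC-1, CD62P or CD63 positivity."""
    if not intensities:
        raise ValueError("no events recorded")
    positive = sum(1 for x in intensities if x > gate)
    return 100.0 * positive / len(intensities)

# Synthetic example: in practice the gate would be set from an
# unstained or vehicle-treated control population
events = [50, 80, 120, 400, 900, 1500, 60, 2000]
gate = 300
pct = percent_positive(events, gate)
```

Tracking this percentage at successive time points gives the real-time annexin-V binding curves shown in Fig. 7C to G.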
[ "\nZinc (Zn\n2+\n) is an essential trace element, serving as a co-factor for 10 to 15% of proteins encoded within the human genome.\n1\nIt is acknowledged as an extracellular signalling molecule in glycinergic and GABAergic neurones, and is released into the synaptic cleft following excitation.\n2\n3\nZn\n2+\nis concentrated in atherosclerotic plaques and released from damaged epithelial cells, and is released from platelets along with their α-granule cargo following collagen stimulation.\n4\nTherefore, increased concentrations of unbound or labile (free) Zn\n2+\nare likely to be present at areas of haemostasis, and may be much higher in the microenvironment of a growing thrombus. Zn\n2+\nplays a role in haemostasis by contributing to wound healing,\n5\nand regulating coagulation, for example, as a co-factor for factor XII.\n6\nLabile Zn\n2+\nacts as a platelet agonist, being able to induce tyrosine phosphorylation, integrin α\nIIb\nβ\n3\nactivation and aggregation at high concentrations, while potentiating platelet responses to other agonists at lower concentrations.\n7\nZn\n2+\nis directly linked to platelet function\nin vivo\n, as dietary Zn\n2+\ndeficiency of humans or rodents manifests with a bleeding phenotype that reverses with Zn\n2+\nsupplementation.\n\n\nLabile, protein-bound and membrane-bound, Zn\n2+\npools are found within multiple cell types, including immune cells and neurones. These pools are inter-changeable, allowing increases in the bioavailability of Zn\n2+\nto Zn\n2+\n-sensitive proteins following signalling-dependent processes. In this manner, Zn\n2+\nis acknowledged to behave as a secondary messenger.\n8\nIn nucleated cells, Zn\n2+\nis released from intracellular granules into the cytosol via Zn\n2+\ntransporters, or from Zn\n2+\n-binding proteins such as metallothioneins, following engagement of extracellular receptors. 
For example, a role for Zn\n2+\nas a secondary messenger has been shown in mast cells, where engagement of the F\nC\nε receptor I results in rapid increases in intracellular Zn\n2+\n(Zn\n2+\n]\ni\n). This ‘zinc wave’ modulates transcription of cytokines, and affects tyrosine phosphatase activity.\n8\nZn\n2+\nalso acts as a secondary messenger in monocytes, where stimulation with lipopolysaccharide results in increases in [Zn\n2+\n]\ni\n, suggestive of a role in transmembrane signalling.\n9\nAgonist-evoked changes of [Zn\n2+\n]\ni\nmodulate signalling proteins (i.e. protein kinase C [PKC], calmodulin-dependent protein kinase II [CamKII] and interleukin receptor-associated kinase) in a similar manner to calcium (Ca\n2+\n)-dependent processes.\n4\n8\n10\nWhile the role of Zn\n2+\nas a secondary messenger in nucleated cells has gathered support in recent years, agonist-dependent regulation of [Zn\n2+\n]\ni\nin platelets during thrombosis has yet to be demonstrated.\n\n\nHere, we utilize Zn\n2+\n-specific fluorophores, chelators and ionophores to investigate the role of [Zn\n2+\n]\ni\nfluctuations in platelet behaviour. We show that agonist-evoked elevation of [Zn\n2+\n]\ni\nregulates platelet shape change, dense granule release and phosphatidylserine (PS) exposure. These findings indicate a role for Zn\n2+\nas a secondary messenger, which may have implications for the understanding of platelet signalling pathways involved in thrombosis during adverse cardiovascular events.\n", "\nMaterials\n: Fluozin-3-\nam\n(Fz-3, Zn\n2+\nindicator) and Fluo-4-\nam\n(Ca\n2+\nindicator) were from Invitrogen (Paisley, United Kingdom). Z-VAD (pan-caspase inhibitor) was from R&D Systems (Abingdon, United Kingdom). 
Primary anti-vasodilator-stimulated phosphoprotein (VASP) (Ser157) and anti-myosin light chain (MLC) (Ser19) antibodies were from Cambridge Bioscience (Cambridge, United Kingdom), and fluorescently conjugated procaspase-activating compound 1 (PAC-1), CD62P and CD63 antibodies were from BD Biosciences (Oxford, United Kingdom). Cross-linked collagen-related peptide (CRP-XL; glycoprotein VI [GpVI] agonist) was from Professor Richard Farndale (Cambridge, United Kingdom), U46619 (thromboxane [TP]α receptor agonist) was from Tocris (Bristol, United Kingdom), thrombin (protease-activated receptor [PAR] agonist) was from Sigma Aldrich (Poole, United Kingdom) and cytochalasin-D (Cyt-D, actin polymerization inhibitor) was from AbCam (Cambridge, United Kingdom). Clioquinol (Cq, Zn\n2+\nionophore, C\n9\nH\n5\nClINO, K\nd\nZn: 10\n−7\nM, K\nd\nCa: 10\n−4.9\nM), pyrithione (Py, Zn\n2+\nionophore, C\n10\nH\n8\nN\n2\nO\n2\nS\n2\n, K\nd\nZn: 10\n−7\nM, K\nd\nCa: 10\n−4.9\nM), A23187 (Ca\n2+\nionophore, C\n29\nH\n37\nN\n3\nO\n6\n), N,N,N′,N′-Tetrakis(2-pyridylmethyl)ethylenediamine (TPEN, Zn\n2+\nchelator, K\nd\nZn: 2.6 × 10\n−16\nM, K\nd\nCa: 4.4 × 10\n−5\nM,\n11\n12\n13\n14\n), dimethyl-bis-(aminophenoxy)ethane-tetraacetic acid (DM-BAPTA)-AM (C\n34\nH\n40\nN\n2\nO\n18\n, K\nd\nZn: 7.9 × 10\n−9\nM, K\nd\nCa: 110 × 10\n−9\nM,\n11\n12\n13\n14\n) and membrane permeant anti-oxidizing proteins, polyethylene glycol-superoxide dismutase (PEG-SOD) and PEG-catalase (CAT) were from Sigma Aldrich. Unless stated, all other reagents were from Sigma Aldrich.\n\n\nPreparation of washed human platelets\n: This study was approved by the Research Ethics Committee at Anglia Ruskin University and informed consent was obtained in accordance with the Declaration of Helsinki. Blood was donated by healthy human volunteers, free from medication for 2 weeks. 
Blood was collected into 11 mM sodium citrate and washed platelets were prepared as described previously.\n7\nUnless otherwise stated, to isolate the mechanisms of Zn\n2+\nfluctuations from other cation-specific effects, experiments were performed in the absence of extracellular Ca\n2+\n.\n\n\nCation mobilisation studies\n: For studies of [Zn\n2+\n]\ni\nor [Ca\n2+\n]\ni\nmobilization, platelet-rich plasma was loaded with Fz-3 (2 µM, 30 minutes, 37°C), or Fluo-4 (2 µM, 30 minutes, 37°C). Fz-3 is responsive to Zn\n2+\nin the nM range and is not significantly affected by Ca\n2+\n.\n15\nPlatelets were collected by centrifugation (350 × \ng\n, 15 minutes), re-suspended in Ca\n2+\n-free Tyrode's buffer (in mM: 140 NaCl, 5 KCl, 10 HEPES, 5 glucose, 0.42 NaH\n2\nPO\n4\n, 12 NaHCO\n3\n, pH 7.4) and rested at 37°C for 30 minutes prior to use. Fluorescence was monitored using a Fluoroskan Ascent fluorometer (ThermoScientific, United Kingdom) using 488 nm and 538 nm excitation and emission filters, respectively. Washed Fz-3 or Fluo-4 loaded platelet suspensions were treated with ionophores or chelators to calibrate\nR\nmax\nor\nR\nmin\nvalues (\nSupplementary Fig. S1\n, available in the online version). Results are expressed as an increase of background-corrected fluorescence at each time point relative to baseline: (\nF\n-\nF\nbackground\n)/\nF\n0\n-\nF\nbackground\n).\n\n\nOptical aggregometry\n: Aggregometry was performed with washed platelet suspensions under stirring conditions at 37°C in an AggRam light transmission aggregometer (Helena Biosciences, Gateshead, United Kingdom).\n7\nAggregation traces were acquired using a proprietary software (Helena Biosciences) and analysed within GraphPad Prism (Version 6.03).\n\n\nConfocal microscopy\n: Images of platelets adhering to coated fibrinogen coverslips (100 µM) were acquired using a LSM510/Axiovert laser scanning confocal microscope with 60× oil NA1.45 objective (Zeiss, United Kingdom). 
Surface coverage of DIOC\n6\n-stained platelets was quantified using ImageJ (v1.45, National Institutes of Health, Bethesda, Maryland, United States).\n\n\nWestern blotting\n: Western blotting was performed as described previously.\n7\nBriefly, polyvinylidene difluoride membranes were incubated with MLC (1:400) or VASP (Ser157, 1:400) primary antibodies, and horseradish peroxidase-conjugated secondary antibodies (1:7,500).\n\n\nFlow cytometry\n: Washed platelet suspensions were incubated with fluorescently conjugated antibodies targeting markers of platelet activation: PAC-1 (α\nIIb\nβ\n3\nactivation), CD62P (α granule release) and CD63 (dense granule release). Antibody binding following agonist or ionophore stimulation was assessed using an Accuri C6 flow cytometer (BD Biosciences).\n\n\nData analysis\n: Maximum and minimum aggregation and\nF\n/\nF\n0\nvalues were calculated using Microsoft Excel. Western blots were analysed using ImageJ. Data were analysed in GraphPad Prism by two-way analysis of variance followed by Tukey's post hoc test. Significance is denoted as ***\np\n < 0.001, **\np\n < 0.01 or *\np\n < 0.05.\n", "\n[Zn\n2+\n]\ni\nfluctuations coordinate receptor stimulation with signalling responses in nucleated cells.\n8\nTo investigate whether intra-platelet Zn\n2+\nfluctuates during activation, agonist-evoked changes of [Zn\n2+\n]\ni\nwere monitored in washed platelet suspensions, loaded with the Zn\n2+\n-specific fluorophore, Fz-3. Stimulation with conventional platelet agonists CRP-XL and U46619 induced rapid, dose-dependent increases of fluorescence peaking after approximately 2 minutes, consistent with increases in [Zn\n2+\n]\ni\n. At 6 minutes, 1 µg/mL CRP-XL or 10 µM U46619 stimulation increased\nF\n/\nF\n0\nto 2.0 ± 0.1 and 1.2 ± 0.1 AU, respectively (compared with 0.9 ± 0.2 AU for the vehicle control,\np\n < 0.05,\nFig. 1A\n,\nB\n). Conversely, thrombin stimulation did not elevate Fz-3 fluorescence (\nFig. 1C\n). 
These data indicate that platelet activation via GpVI and TP, but not via PARs, leads to signalling responses that result in the elevation of [Zn\n2+\n]\ni\n, in a similar manner to agonist-evoked increases in [Ca\n2+\n]\ni\n. Inclusion of 2 mM CaCl\n2\nin the extracellular medium did not significantly affect agonist-evoked responses (\nSupplementary Fig. S2\n, available in the online version).\n\n\nAgonist-dependent platelet activation via GpVI or TP, but not PARs elevates [Zn\n2+\n]\ni\n. Fz-3-labelled washed human platelets were stimulated by CRP-XL (\nA\n), U46619 (\nB\n) or thrombin (\nC\n) and [Zn\n2+\n]\ni\nfluctuations were monitored over 6 minutes using fluorometry. (\nA\n) Fz-3 responses to ○ 1 µg/mL, □ 0.3 µg/mL, ▵ 0.1 µg/mL, ⋄ 0.03 µg/mL CRP-XL or • vehicle (DMSO). (\nB\n) Fz-3 responses to ○ 10 µM, □ 3 µM, ▵ 1 µM, ⋄ 0.3 µM U46619 or • vehicle (DMSO). (\nC\n) Fz-3 responses to, ○ 1 U/mL, □ 0.3 U/mL, ▵ 0.1 U/mL, ⋄ 0.03 U/mL thrombin or • vehicle (DMSO). Data are mean ± standard error of the mean (SEM) from at least 8 independent experiments. Significance is denoted as ***\np\n < 0.001, **\np\n < 0.01 or *\np\n < 0.05.\n\n\nExperiments were performed to confirm the specificity of fluorescence fluctuations for Zn\n2+\n. Platelets were pre-treated with the intracellular Zn\n2+\n-specific chelator TPEN (50 µM) prior to stimulation with 1 µg/mL CRP-XL. Fz-3 responses were reduced to 1.1 ± 0.1 AU, compared with of 2.0 ± 0.1 AU for CRP-XL stimulation alone (\np\n < 0.05,\nFig. 2A\n). Interestingly, treatment with DM-BAPTA (10 µM), a non-specific cation chelator, led to a similar reduction (to 1.0 ± 0.1 AU,\np\n < 0.05). Abrogation of Fz-3 fluorescence was also observed following stimulation with U46619 (10 µg/mL), where TPEN or DM-BAPTA treatment reduced\nF\n/\nF\n0\nplateau levels from 1.2 ± 0.1 to 0.8 ± 0.1 AU and 1.0 ± 0.1 AU, respectively (\np\n < 0.05,\nFig. 2B\n). 
Further experiments were performed to investigate the influence of cation chelation on [Ca2+]i fluctuations using Fluo-4-loaded platelets. As previously demonstrated, CRP-XL- and U46619-induced Ca2+ signals were absent following BAPTA treatment (F/F0 signals were reduced from 1.6 ± 0.2 to 0.8 ± 0.1 AU, and from 1.4 ± 0.1 to 0.9 ± 0.0 AU, for CRP-XL and U46619 stimulation, respectively, p < 0.05, Fig. 2D, E). However, Fluo-4 fluorescence was not significantly affected by TPEN treatment (1.5 ± 0.2 and 1.2 ± 0.1 AU for CRP-XL and U46619, respectively, ns), indicating that TPEN does not chelate [Ca2+]i, and that Fz-3 signals are attributable to [Zn2+]i with no influence from other cations. Furthermore, these data demonstrate that fluctuations in [Zn2+]i do not affect agonist-evoked Ca2+ signals.

Agonist-dependent intracellular zinc ([Zn2+]i) fluctuations are sensitive to the platelet redox state. Platelets were loaded with Fz-3 (A, B, C) or Fluo-4 (D, E, F) and stimulated with CRP-XL (1 µg/mL, ○, A, D), U46619 (10 µM, ○, B, E) or H2O2 (10 µM, ○, C, F), during which changes in fluorescence were monitored. Where indicated, platelets were pre-treated with TPEN (▿, 50 µM), DM-BAPTA (⋄, 10 µM), PEG-SOD (□, 30 U/mL), PEG-CAT (▵, 300 U/mL) or vehicle (DMSO, •). Data are mean ± standard error of the mean (SEM) from at least 5 independent experiments. Significance is denoted as ***p < 0.001, **p < 0.01 or *p < 0.05.

Agonist-evoked [Zn2+]i increases may result from release of membrane-bound intracellular stores or from liberation from metal-binding proteins (e.g. metallothioneins) in response to redox-mediated modifications to thiol groups.[16] To investigate the nature of the Zn2+ source, platelets were treated with the membrane-permeant anti-oxidizing proteins PEG-SOD and PEG-CAT,[17] and CRP-XL-evoked [Zn2+]i fluctuations were monitored.
PEG-SOD and PEG-CAT both abolished CRP-XL-induced increases of Fz-3 fluorescence, indicating redox-dependent modulation of Zn2+ release (PEG-SOD and PEG-CAT reduced F/F0 plateaus following 1 µg/mL CRP-XL treatment from 2.0 ± 0.1 to 1.2 ± 0.1 AU and 1.3 ± 0.1 AU, respectively, p < 0.05, Fig. 2A). This is consistent with published data showing a greater capacity for GpVI to influence redox signalling than other receptors.[18] Similarly, PEG-SOD and PEG-CAT abolished U46619-induced [Zn2+]i responses (to 1.0 ± 0.0 and 1.1 ± 0.0 AU, respectively, following 10 µM U46619 stimulation, p < 0.05, Fig. 2B). PEG-SOD and PEG-CAT did not affect CRP-XL- or U46619-mediated Fluo-4 fluorescence, suggesting that [Zn2+]i, but not [Ca2+]i, signals are regulated by redox-sensitive processes.

Further experiments were performed to resolve the relationship between the platelet redox state and [Zn2+]i fluctuations. Treatment with H2O2 mimics increases in platelet reactive oxygen species (ROS).[19] H2O2 increased both [Ca2+]i and [Zn2+]i (F/F0 plateaus were 1.8 ± 0.3 AU following H2O2 (10 µM) stimulation of Fz-3-loaded platelets, compared with 0.9 ± 0.1 AU for vehicle-treated platelets, while H2O2 stimulation increased Fluo-4 fluorescence from 0.9 ± 0.1 to 1.4 ± 0.1 AU, p < 0.05, Fig. 2C, F). H2O2-mediated [Zn2+]i increases were abrogated by PEG-SOD or PEG-CAT, while [Ca2+]i was unaffected (Fig. 2E, F). These data support a role for the platelet redox state in regulating [Zn2+]i fluctuations.

Having demonstrated that intra-platelet Zn2+ rises in response to agonist stimulation, we further examined the influence of [Zn2+]i on platelet responses.
We hypothesized that liberation of Zn2+ from intracellular stores (such as platelet α-granules[20]) using specific ionophores would result in increased [Zn2+]i, in a similar manner to A23187-evoked Ca2+ responses.[21] The Zn2+ ionophores Cq and Py have previously been used to model [Zn2+]i increases in nucleated cells.[22, 23, 24] We utilized these reagents to model agonist-evoked [Zn2+]i increases in washed platelet suspensions. Stimulation with Cq or Py produced large elevations of [Zn2+]i, with F/F0 plateaus of 7.9 ± 0.5 and 3.3 ± 0.3 AU, respectively (p < 0.05, Fig. 3A, B). The extent of the [Zn2+]i increase was greater than that observed following CRP-XL stimulation, suggesting that liberation from stores is not the principal means by which [Zn2+]i increases following agonist stimulation. Zn2+ ionophore-dependent Fz-3 fluorescence increases were sensitive to pre-treatment with TPEN or BAPTA, consistent with a role for Cq or Py in increasing [Zn2+]i (Fig. 3A, B). However, [Zn2+]i signals were not influenced by PEG-SOD or PEG-CAT, demonstrating that ionophore-induced [Zn2+]i release is not redox sensitive. Cq or Py stimulation did not affect Fluo-4 fluorescence (Fig. 3D, E), indicating that Zn2+ ionophores have a negligible affinity for Ca2+. A23187 increased Fluo-4 fluorescence (from 0.9 ± 0.1 to 5.8 ± 0.9 AU after 6 minutes, p < 0.05, Fig. 3F), but had no effect on Fz-3 fluorescence (Fig. 3C), demonstrating that Fz-3 fluorescence is not affected by changes in [Ca2+]i. In a similar manner to agonist-dependent Ca2+ signalling, A23187-dependent [Ca2+]i increases were abrogated by BAPTA, but were unaffected by TPEN. Thus, Fluo-4 fluorescence is not influenced by Zn2+.

Treatment of platelets with the Zn2+ ionophores clioquinol (Cq) or pyrithione (Py) elevates [Zn2+]i, but not [Ca2+]i.
Washed platelet suspensions were loaded with Fz-3 (A, B, C) or Fluo-4 (D, E, F) and stimulated with Cq (○, 300 µM, A, D), Py (○, 300 µM, B, E) or A23187 (○, C, F). Where indicated, platelets were pre-treated with TPEN (50 µM, ▿), DM-BAPTA (10 µM, ⋄), PEG-SOD (30 U/mL, □), PEG-CAT (300 U/mL, ▵) or vehicle (DMSO, •). Data are mean ± standard error of the mean (SEM) from at least 6 independent experiments. Significance is denoted as ***p < 0.001, **p < 0.01 or *p < 0.05.

Our data confirm that platelet [Zn2+]i increases can be modelled using the Zn2+ ionophores Cq and Py. Next, we examined the influence of increases in [Zn2+]i on platelet aggregation. High concentrations of Cq (300 µM) resulted in an initial decrease in light transmission, followed by a substantial increase, consistent with shape change and aggregation. Platelet aggregates were present upon visual inspection of test cuvettes at the end of each experiment (not shown). The extent of Cq-induced aggregation (300 µM, 27.8 ± 5.0%) was lower than that for A23187 (300 µM, 70.2 ± 8.6%, p < 0.05, Fig. 4A, B). Treatment with lower concentrations of Cq (30 µM) resulted in shape change only, with no progression to aggregation. Py stimulation did not cause aggregation but did result in shape change (Fig. 4A–C). Responses to Py were biphasic, with intermediate concentrations (10–30 µM) resulting in shape change, and higher concentrations having no effect.

Stimulation of platelets with Zn2+ ionophores leads to shape change. (A) Washed platelet suspensions were stimulated with different concentrations of clioquinol (Cq), pyrithione (Py) or A23187, during which changes in light transmission were monitored using optical aggregometry. Initial downward deflections indicate a reduction in light transmission consistent with shape change.
Subsequent upward deflections indicate increases in light transmission, consistent with platelet aggregation. The maximum (B) and minimum (C) extent of aggregation were calculated for each ionophore (▪ Cq, ▵ Py, ○ A23187). Data are mean ± standard error of the mean (SEM) from at least 5 experiments.

The degree of shape change was quantified by calculating the lowest light transmission during ionophore-induced aggregation (denoted minimum aggregation, %). Shape change following Cq or A23187 treatment was comparable (minimum aggregation for 30 µM Cq or Py was −13.3 ± 2.9 and −27.5 ± 2.2%, respectively, compared with −15.1 ± 2.7% for 30 µM A23187, ns, Fig. 4C). These data are consistent with a role for [Zn2+]i in regulating cytoskeletal changes in a similar manner to [Ca2+]i-induced shape change.

To confirm that the changes in light transmission were a biological, rather than chemical, phenomenon, we took a pharmacological approach, pre-treating platelets with the actin polymerization inhibitor Cyt-D prior to ionophore stimulation. Cyt-D abrogated Cq-, Py- and A23187-induced shape change, consistent with a genuine biological effect. Minimum aggregation for Cyt-D-treated and untreated platelets was −5.7 ± 2.1 and −16.7 ± 1.9%, respectively, following Cq stimulation, −9.1 ± 1.9 and −33.2 ± 2.4%, respectively, following Py stimulation, and −3.7 ± 1.4 and −13.0 ± 1.8%, respectively, following A23187 stimulation (30 µM, p < 0.05, Fig. 5A, B). Pre-treatment of platelets with TPEN abrogated Cq- or Py-induced shape change but had no effect on A23187 treatment (minimum aggregation following TPEN treatment was −4.9 ± 1.2, −11.1 ± 2.3 and −17.9 ± 2.6% for Cq, Py and A23187, respectively, p < 0.05, Fig. 5A, B). These data are consistent with a role for [Zn2+]i in regulating cytoskeletal re-arrangements.
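The maximum and minimum aggregation values described above (calculated in Microsoft Excel in the original analysis) amount to taking the largest and smallest baseline-relative changes in light transmission over a trace. A hedged NumPy sketch of that calculation; the function name and the toy trace are illustrative, not the authors' data or code:

```python
import numpy as np

def aggregation_extremes(transmission, baseline=None):
    """Return (max_aggregation, min_aggregation) in % from a light-
    transmission trace.  Downward deflections (negative values) reflect
    shape change; upward deflections reflect aggregation.
    """
    t = np.asarray(transmission, dtype=float)
    if baseline is None:
        baseline = t[0]          # pre-stimulation transmission
    delta = t - baseline         # change relative to baseline, in %
    return float(delta.max()), float(delta.min())

# Illustrative trace: brief dip (shape change) then rise (aggregation)
trace = [0.0, -5.0, -13.3, -8.0, 10.0, 27.8]
mx, mn = aggregation_extremes(trace)
print(mx, mn)  # 27.8 -13.3
```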
The resistance of A23187-induced shape change to TPEN treatment suggests that the contribution of Ca2+ signals to cytoskeletal re-arrangement occurs independently of Zn2+ signals, and could indicate different mechanisms for Zn2+- and Ca2+-induced shape change.

Ionophore-induced shape change is sensitive to pre-treatment with Cyt-D or TPEN. (A) Representative aggregometry traces showing clioquinol (Cq)-, pyrithione (Py)- or A23187-induced (30 µM) shape change following pre-treatment with TPEN (50 µM) or Cyt-D (10 µM). (B) Quantitation of minimum aggregation for platelets pre-treated with TPEN (▪ 25 µM), Cyt-D (▪ 10 µM) or vehicle (□ DMSO) prior to stimulation with Cq, Py or A23187 (30 µM). Data are mean ± standard error of the mean (SEM) of at least 6 experiments. Significance is denoted as ***p < 0.001, **p < 0.01 or *p < 0.05.

[Zn2+]i-dependent cytoskeletal changes were further investigated by visualizing platelet spreading on fibrinogen. TPEN-treated platelets were able to adhere to fibrinogen, but did not spread, with no visible lamellipodia or filopodia (Fig. 6A). Mean platelet surface coverage after 10 minutes was 12.8 ± 1.5 µm, compared with 22.7 ± 1.6 µm for untreated platelets (Fig. 6B). Regulation of Cq-induced shape change was investigated by assaying VASP and MLC, which alter phosphorylation status during cytoskeletal re-arrangements.[25, 26] Cq- or Py-induced shape change was accompanied by increased phosphorylation of MLC (Ser19), confirming a role for [Zn2+]i in the signalling process leading to cytoskeletal changes.
Unlike PGE1 treatment, VASP did not undergo phosphorylation in response to ionophore treatment, indicating that Zn2+ does not influence the activity of cyclic nucleotide-dependent kinases such as protein kinase A (PKA) or protein kinase G (PKG).[27]

[Zn2+]i regulates platelet shape change and phosphorylation of cytoskeletal regulators. Washed platelet suspensions were incubated on fibrinogen-coated coverslips following pre-treatment with 50 µM TPEN or vehicle control (DMSO). (A) Representative images of platelet spreading. (B) Quantification of the surface coverage by adherent platelets (○ DMSO, • 50 µM TPEN, n = 3). (C) Representative Western blot showing increased MLC phosphorylation following stimulation of platelets for 2 minutes with vehicle (DMSO), thrombin (1 U/mL), A23187 (100 µM), clioquinol (Cq) (300 µM) and pyrithione (Py) (300 µM). (D) Representative Western blot showing VASP phosphorylation following stimulation of platelets for 2 minutes with vehicle (DMSO), prostaglandin E1 (PGE1) (1 U/mL), A23187 (100 µM), Cq (300 µM) and Py (300 µM). VASP phosphorylation was unaffected by Zn2+ ionophore treatment. Blots are representative of three experiments. Data are means ± standard error of the mean (SEM) from at least 5 independent experiments. Significance is denoted as ***p < 0.001, **p < 0.01 or *p < 0.05.

These data indicate that increases in [Zn2+]i initiate platelet activation events, such as shape change and aggregation. To better understand the extent to which changes in [Zn2+]i regulate platelet activation, the influence of Cq treatment on conventional markers of platelet activation was investigated. In a similar manner to thrombin and A23187, Cq or Py stimulation (300 µM) substantially increased platelet PAC-1 binding (59.7 ± 5.5, 64.5 ± 5.8, 47.3 ± 4.1 and 37.8 ± 5.0%, respectively, p < 0.05, Fig.
7A), consistent with earlier observations correlating Cq stimulation with aggregation (Fig. 4), and supportive of a role for [Zn2+]i in αIIbβ3 activation. Cq or Py increased CD63, but not CD62P, externalization (55.9 ± 7.8 and 5.7 ± 2.8%, respectively, following Cq stimulation, and 50.2 ± 2.6 and 6.9 ± 2.2% following Py stimulation, Fig. 7A), indicating that increases in [Zn2+]i initiate dense, but not α, granule secretion. This differed from both thrombin (CD62P: 62.9 ± 5.5%, CD63: 48.8 ± 3.0%) and A23187 (CD62P: 31.1 ± 5.7%, CD63: 55.1 ± 5.0%), which also regulate α and dense granule release.

Increasing platelet [Zn2+]i using Zn2+ ionophores increases platelet activation markers. (A) Washed platelet suspensions were stimulated by thrombin (Thr, 1 U/mL), clioquinol (Cq) (300 µM), pyrithione (Py) (300 µM) or A23187 (100 µM), and changes of PAC-1 (white), CD62P (grey) and CD63 (black) binding were obtained after 60 minutes. (B) Washed platelet suspensions were stimulated by CRP-XL (1 µg/mL), U46619 (10 µM) or thrombin (1 U/mL), following pre-treatment with TPEN (50 µM), and changes of PAC-1 (white), CD62P (grey) and CD63 (black) binding were obtained after 60 minutes. (C) Washed platelet suspensions were treated with Ca2+ or Zn2+ ionophores, or conventional platelet agonists, prior to analysis of annexin-V binding by flow cytometry. □ Clioquinol (300 µM), ▵ pyrithione (300 µM), ○ A23187 (300 µM), • CRP-XL (1 µg/mL), ▪ thrombin (1 U/mL), ▪ vehicle (DMSO). (D) Platelet suspensions were pre-treated with the caspase inhibitor Z-VAD (▵, 1 µM), the Zn2+ chelator TPEN (▪, 25 µM) or vehicle (□) prior to stimulation with clioquinol (300 µM). ○ Unstimulated platelets. Changes in the percentage of platelets binding annexin-V were recorded.
Washed platelet suspensions were pre-treated with Z-VAD (1 µM) or TPEN (50 µM) prior to stimulation with the conventional agonists CRP-XL (1 µg/mL, E), U46619 (10 µM, F) or thrombin (1 U/mL, G). Changes in annexin-V binding were monitored using flow cytometry. ○ Vehicle, □ Z-VAD (1 µM), ▵ TPEN (50 µM), ▿ DMSO (no agonist). Data are means ± standard error of the mean (SEM) of at least 3 independent experiments. Significance is denoted as ***p < 0.001, **p < 0.01 or *p < 0.05.

Further experiments were performed to assess the influence of [Zn2+]i on agonist-evoked changes in platelet activatory markers. TPEN reduced increases of PAC-1 or CD63 binding in response to CRP-XL (1 µg/mL; from 55.4 ± 4.9 to 29.0 ± 1.5% for PAC-1 binding, and from 46.4 ± 4.0 to 24.2 ± 2.5% for CD63 binding, p < 0.05), U46619 (10 µM; from 36.2 ± 2.8 to 16.5 ± 1.2% for PAC-1 binding, and from 21.9 ± 3.6 to 10.7 ± 1.3% for CD63 binding, p < 0.05) or thrombin (1 U/mL; from 64.6 ± 5.2 to 32.1 ± 3.6% for PAC-1 binding, and from 46.8 ± 3.8 to 17.6 ± 2.3% for CD63 binding, p < 0.05), but had no effect on agonist-evoked CD62P increases (Fig. 7B). This provides further support for a role of [Zn2+]i in differentially regulating platelet granule secretion.

Extracellular Zn2+ signalling and agonist-induced changes in [Zn2+]i have both been linked to apoptosis and related responses in nucleated cells.[28, 29, 30, 31] However, the role of Zn2+ in PS exposure during platelet activation has yet to be studied. To investigate the influence of [Zn2+]i on PS exposure, platelets were treated with ionophores, and annexin-V binding was quantified in real time. Increasing platelet [Zn2+]i with Cq (300 µM) resulted in a concurrent increase in annexin-V binding.
PS exposure evolved more slowly with Zn2+ ionophore treatment than with A23187, but reached similar plateau levels (90.0 ± 0.9 and 88.6 ± 2.7% for Cq and A23187, respectively, Fig. 7C), indicating that most platelets in the population were annexin-V positive. This differed from responses to the conventional agonists thrombin and CRP-XL, which induced PS exposure in a sub-set of platelets (35.0 ± 6.2 and 34.4 ± 6.2%, respectively). Cq-induced annexin-V binding was sensitive to TPEN (6.6 ± 6.3% positive platelets at 60 minutes), confirming a role for Zn2+. Furthermore, pre-treatment with the caspase inhibitor Z-VAD abrogated Cq-induced PS exposure (53.6 ± 4.7% at 60 minutes, p < 0.05, Fig. 7D). The influence of Zn2+ on agonist-evoked annexin-V binding was also investigated. Consistent with the findings of Cohen et al,[32] we observed a reduction in agonist-evoked PS exposure in the presence of Z-VAD (1 µM) (from 34.4 ± 2.9 to 23.1 ± 2.0% following stimulation with 1 µg/mL CRP-XL, from 24.4 ± 1.8 to 15.2 ± 2.0% following stimulation with 10 µM U46619, and from 32.5 ± 4.8 to 21.2 ± 2.4% following stimulation with 1 U/mL thrombin, Fig. 7E–G, p < 0.05). Similar reductions of annexin-V binding in TPEN-treated platelets were observed following stimulation by CRP-XL (26.3 ± 0.9%, p < 0.05) or U46619 (21.4 ± 2.7%, p < 0.05). However, TPEN did not affect thrombin-mediated annexin-V binding (28.3 ± 4.6%, ns). These data are consistent with a role for Zn2+ in agonist-evoked PS exposure.

Discussion: The role of Zn2+ as a secondary signalling molecule has received little research interest, possibly owing to its relatively low resting cytosolic levels (pM, compared with nM concentrations of Ca2+). Zn2+ is present in granules of nucleated cells, and in platelet α granules.
It also associates with thiol-containing proteins such as metallothioneins, which are also present in platelets.[33] The transition between protein- or membrane-bound Zn2+ and labile Zn2+ in the cell cytosol has been demonstrated in multiple cell systems, and increases in labile [Zn2+]i have been correlated with phenotypic changes. Here, we show for the first time that agonist-evoked stimulation of platelets in vitro results in increases of [Zn2+]i. While requiring further confirmation, such behaviour is consistent with a role of Zn2+ as a secondary messenger. Zn2+ fluctuations were apparent in the presence of extracellular CaCl2, supporting a physiological role for this effect. We confirm the nature of the fluorescent signal using the high-affinity Zn2+ chelator TPEN. TPEN was also used to probe the role of Zn2+ in functional responses to agonist stimulation. Owing to its affinity for Zn2+, use of TPEN here is not only likely to abrogate agonist-evoked increases in [Zn2+]i, but could also strip metalloproteins of Zn2+ co-factors.[34] Thus, conclusions drawn from the use of TPEN may not only reflect abrogation of agonist-evoked [Zn2+]i increases. [Zn2+]i increases were observed in platelets following stimulation via GpVI and TP, but not via PAR, indicating that different signalling pathways link to [Zn2+]i release. Signalling via GpVI differs from that of the TP or PAR G-protein-coupled receptors, in that it results in tyrosine phosphorylation of platelet proteins (such as Syk and LAT), leading to activation of PI3K and PLCγ2. Conversely, PAR and TP signal through G-protein-dependent routes to activate Rho-GEF and PLCβ. It is likely that [Zn2+]i increases are regulated by signalling proteins that are not shared by the GpVI and thrombin pathways.
However, the different outcomes following PAR- and TP-dependent signalling are harder to reconcile, as both receptors couple to similar signalling pathways that involve Gα12/13 and Gαq.

We show that the platelet redox state affects [Zn2+]i fluctuations in a similar manner to nucleated cells.[35, 36] CRP-XL- and U46619-evoked elevations of [Zn2+]i were sensitive to antioxidants, and could be enhanced by H2O2. Zn2+ binding to thiols (e.g. metallothioneins) is redox-sensitive, and changes of redox state lead to release of Zn2+ into the labile pool in nucleated cells.[37] Given that modulation of the platelet redox state led to a rapid and sustained rise of [Zn2+]i, it is possible that platelet Zn2+-binding proteins represent a store for these cations. Interestingly, Ca2+ signalling was unaffected by redox changes, suggesting that these ions are differentially regulated. Indeed, the predominant Ca2+ store is the dense tubular system, which performs a similar role to the endoplasmic reticulum in nucleated cells. It is therefore likely that intra-platelet Zn2+ is stored by Zn2+-binding proteins and becomes liberated upon agonist stimulation. However, we did not observe increases of [Zn2+]i following thrombin stimulation, which has been shown to induce similar levels of ROS activation as collagen stimulation.[18, 38] One possible explanation could be that the larger Ca2+ signal generated by thrombin negatively regulates Zn2+ release.

We examined the influence of [Zn2+]i on activatory processes using the membrane-permeable Zn2+-specific ionophores Py and Cq, which have been widely used to model increases in [Zn2+]i. Stimulation with either ionophore resulted in increases in [Zn2+]i, with a greater signal obtained with Cq. Neither ionophore produced increases in Fluo-4 fluorescence, indicating a negligible affinity for [Ca2+]i.
Conversely, stimulation with the Ca2+ ionophore A23187 produced rapid increases in [Ca2+]i, but did not affect [Zn2+]i. Investigation of cation responses in cells depends heavily on the specificity of reagents for their cognate ions. By showing that A23187 initiates a Ca2+ response which is not detected by Fz-3, we demonstrate that Fz-3 fluorescence increases are directly attributable to changes in [Zn2+]i, and are not influenced by [Ca2+]i. This is further supported by our observation that TPEN does not affect Fluo-4 fluorescence, which also provides evidence that agonist-evoked Ca2+ signalling does not depend on [Zn2+]i signals. This observation raises questions about the relative roles of Ca2+ and Zn2+ in platelet activation, as both target similar proteins, including PKC, calmodulin and CamKII.[4] Unlike agonist stimulation, ionophore-induced [Zn2+]i increases were not sensitive to anti-oxidant treatment. Furthermore, the extent of [Zn2+]i elevation following ionophore stimulation was greater than that observed for agonists, indicating that ionophores liberate Zn2+ from stores that are not accessible to agonist-evoked signalling mechanisms. Such stores could include α granules, which are known to contain Zn2+.[20] Our use of ionophores here to model [Zn2+]i increases, while providing information on Zn2+-dependent mechanisms, is therefore unlikely to fully represent the physiological situation.

Cytoskeletal re-arrangements are primary steps in platelet activation. Zn2+ ionophore stimulation resulted in a demonstrable shape change, which was abrogated following Cyt-D treatment, verifying it as a biological, rather than chemical, response. Furthermore, platelet spreading on fibrinogen was abrogated following [Zn2+]i chelation.
While not correlating [Zn2+]i fluctuations with shape change, these data provide support for a role of Zn2+ in activation-dependent cytoskeletal re-arrangements. Zn2+ is an important regulator of the cytoskeleton in nucleated cells.[39, 40] Zn2+ regulates tubulin polymerization leading to nuclear transport of transcription factors in neuronal cells,[41] and has been shown to regulate the actin cytoskeleton, focal adhesion dynamics and cell migration in PC-3 and HeLa cells,[35] where Zn2+ chelation suppresses filopodia formation and results in the loss of stress fibres. Conversely, treatment with Py increased filopodia formation, suppressed stress fibres and decreased the number and size of focal adhesions.[35] Thus, Zn2+ is likely to play similarly important roles in platelet cytoskeletal re-arrangements. We show that raising [Zn2+]i results in increases in MLC phosphorylation. MLCK is canonically activated via Ca2+-mediated activation of calmodulin.[42] As other calmodulin-dependent kinases have been shown to be modulated by Zn2+, it is possible that Zn2+ is able to substitute for Ca2+, initiating MLCK activation.[43] The absence of VASP phosphorylation indicates that increases in [Zn2+]i do not influence the activity of cyclic nucleotide-dependent kinases such as PKG or PKA.

Ionophore-induced elevation of [Zn2+]i increased PAC-1 binding, supporting our aggregometry data (Fig. 4) and a role for Zn2+ in regulating αIIbβ3 activity (Fig. 6). Interestingly, [Zn2+]i increases resulted in the externalization of CD63, but not CD62P, supporting a role for Zn2+ in regulating dense, but not α, granule release. Further experiments using TPEN in conjunction with conventional platelet agonists provide support for a role for [Zn2+]i in αIIbβ3 activation and dense granule secretion, but not α granule secretion (Fig. 7B).
Distinct signalling pathways contribute to the differential release of α and dense granules, and while the exact mechanism is poorly understood, our work provides evidence for a role for Zn2+ in these processes.[44, 45] While these studies show that Zn2+ fluctuations correlate with platelet behaviour, it should be noted that the physiological relevance of the ionophore-evoked [Zn2+]i rises is unclear, and further work will be required to establish the significance of Zn2+-dependent secondary signalling in vivo. Upon stimulation with conventional agonists, a sub-set of platelets adopt pro-coagulant phenotypes, elevating [Ca2+]i and externalizing PS. Extracellular Zn2+ signalling, agonist-induced changes in [Zn2+]i and Zn2+ ionophore treatment have all been linked to apoptosis and related responses in nucleated cells.[30, 31, 46, 47, 48, 49, 50] Here, we demonstrate that ionophore- or agonist-evoked increases in platelet [Zn2+]i result in PS exposure, consistent with the development of a pro-coagulant phenotype. Interestingly, while CRP-XL- and U46619-evoked PS exposure was sensitive to Zn2+ chelation, thrombin stimulation was not. This provides further support for a role of Zn2+ following GpVI and TPα signalling, but not via PARs. Unlike conventional agonists, Cq stimulation resulted in PS exposure in a majority of platelets. This may indicate that agonist-evoked Zn2+ signals are stimulated in only a sub-set of platelets, which then proceed to become pro-coagulant. As previously shown (Fig. 3), Cq stimulation did not induce increases in [Ca2+]i, so Cq-dependent PS exposure is independent of [Ca2+]i.
Platelet PS exposure has been attributed to both caspase 3-dependent and -independent mechanisms.[51, 52] Cq-dependent PS exposure was partially abrogated by Z-VAD pre-treatment, suggesting a partial role for caspase activity in this process.

In conclusion, this study provides the first evidence for agonist-evoked increases of [Zn2+]i in platelets. While requiring further confirmation, such behaviour is consistent with a role of Zn2+ as a secondary messenger. Increases in [Zn2+]i are sensitive to the redox state, indicative of a role for redox in agonist-evoked Zn2+ signalling. Modelling increases of [Zn2+]i using Zn2+-specific ionophores reveals a functional role for [Zn2+]i in platelet activatory changes. [Zn2+]i signalling contributes to key activation-related platelet responses, including shape change, αIIbβ3 activation and granule release. The mechanism by which Zn2+ affects these processes is currently unknown, but could be attributable to changes in the activity of Zn2+-binding enzymes. These data indicate a hitherto unknown role for labile [Zn2+]i during platelet activation, which has implications for our understanding of signalling responses in platelets. While this work does not address the physiological relevance of this process, a better understanding of Zn2+ signalling may be of significance to the role of platelets in thrombotic disorders such as heart attack and stroke.

Furthermore, as they are readily available primary cells, platelets could be used as a model to better understand Zn2+ signalling in other mammalian cells.
Keywords: platelets, zinc, platelet activation, signal transduction, secretory vesicles, granule release
Introduction: Zinc (Zn2+) is an essential trace element, serving as a co-factor for 10 to 15% of proteins encoded within the human genome.[1] It is acknowledged as an extracellular signalling molecule in glycinergic and GABAergic neurones, and is released into the synaptic cleft following excitation.[2, 3] Zn2+ is concentrated in atherosclerotic plaques and released from damaged epithelial cells, and is released from platelets along with their α-granule cargo following collagen stimulation.[4] Therefore, increased concentrations of unbound or labile (free) Zn2+ are likely to be present at areas of haemostasis, and may be much higher in the microenvironment of a growing thrombus. Zn2+ plays a role in haemostasis by contributing to wound healing[5] and regulating coagulation, for example, as a co-factor for factor XII.[6] Labile Zn2+ acts as a platelet agonist, being able to induce tyrosine phosphorylation, integrin αIIbβ3 activation and aggregation at high concentrations, while potentiating platelet responses to other agonists at lower concentrations.[7] Zn2+ is directly linked to platelet function in vivo, as dietary Zn2+ deficiency in humans or rodents manifests with a bleeding phenotype that reverses with Zn2+ supplementation. Labile, protein-bound and membrane-bound Zn2+ pools are found within multiple cell types, including immune cells and neurones. These pools are inter-changeable, allowing increases in the bioavailability of Zn2+ to Zn2+-sensitive proteins following signalling-dependent processes. In this manner, Zn2+ is acknowledged to behave as a secondary messenger.[8] In nucleated cells, Zn2+ is released from intracellular granules into the cytosol via Zn2+ transporters, or from Zn2+-binding proteins such as metallothioneins, following engagement of extracellular receptors.
For example, a role for Zn2+ as a secondary messenger has been shown in mast cells, where engagement of the FcεRI receptor results in rapid increases in intracellular Zn2+ ([Zn2+]i). This 'zinc wave' modulates transcription of cytokines, and affects tyrosine phosphatase activity.[8] Zn2+ also acts as a secondary messenger in monocytes, where stimulation with lipopolysaccharide results in increases in [Zn2+]i, suggestive of a role in transmembrane signalling.[9] Agonist-evoked changes of [Zn2+]i modulate signalling proteins (i.e. protein kinase C [PKC], calmodulin-dependent protein kinase II [CamKII] and interleukin receptor-associated kinase) in a similar manner to calcium (Ca2+)-dependent processes.[4, 8, 10] While the role of Zn2+ as a secondary messenger in nucleated cells has gathered support in recent years, agonist-dependent regulation of [Zn2+]i in platelets during thrombosis has yet to be demonstrated. Here, we utilize Zn2+-specific fluorophores, chelators and ionophores to investigate the role of [Zn2+]i fluctuations in platelet behaviour. We show that agonist-evoked elevation of [Zn2+]i regulates platelet shape change, dense granule release and phosphatidylserine (PS) exposure. These findings indicate a role for Zn2+ as a secondary messenger, which may have implications for the understanding of platelet signalling pathways involved in thrombosis during adverse cardiovascular events.

Experimental Procedures: Materials: Fluozin-3-AM (Fz-3, Zn2+ indicator) and Fluo-4-AM (Ca2+ indicator) were from Invitrogen (Paisley, United Kingdom). Z-VAD (pan-caspase inhibitor) was from R&D Systems (Abingdon, United Kingdom).
Primary anti-vasodilator-stimulated phosphoprotein (VASP) (Ser157) and anti-myosin light chain (MLC) (Ser19) antibodies were from Cambridge Bioscience (Cambridge, United Kingdom), and fluorescently conjugated PAC-1 (directed against activated integrin α IIb β 3 ), CD62P and CD63 antibodies were from BD Biosciences (Oxford, United Kingdom). Cross-linked collagen-related peptide (CRP-XL; glycoprotein VI [GpVI] agonist) was from Professor Richard Farndale (Cambridge, United Kingdom), U46619 (thromboxane [TP]α receptor agonist) was from Tocris (Bristol, United Kingdom), thrombin (protease-activated receptor [PAR] agonist) was from Sigma Aldrich (Poole, United Kingdom) and cytochalasin-D (Cyt-D, actin polymerization inhibitor) was from AbCam (Cambridge, United Kingdom). Clioquinol (Cq, Zn 2+ ionophore, C 9 H 5 ClINO, K d Zn: 10 −7 M, K d Ca: 10 −4.9 M), pyrithione (Py, Zn 2+ ionophore, C 10 H 8 N 2 O 2 S 2 , K d Zn: 10 −7 M, K d Ca: 10 −4.9 M), A23187 (Ca 2+ ionophore, C 29 H 37 N 3 O 6 ), N,N,N′,N′-Tetrakis(2-pyridylmethyl)ethylenediamine (TPEN, Zn 2+ chelator, K d Zn: 2.6 × 10 −16 M, K d Ca: 4.4 × 10 −5 M, 11 12 13 14 ), dimethyl-bis-(aminophenoxy)ethane-tetraacetic acid (DM-BAPTA)-AM (C 34 H 40 N 2 O 18 , K d Zn: 7.9 × 10 −9 M, K d Ca: 110 × 10 −9 M, 11 12 13 14 ) and membrane-permeant anti-oxidizing proteins, polyethylene glycol-superoxide dismutase (PEG-SOD) and PEG-catalase (PEG-CAT), were from Sigma Aldrich. Unless stated, all other reagents were from Sigma Aldrich. Preparation of washed human platelets : This study was approved by the Research Ethics Committee at Anglia Ruskin University and informed consent was obtained in accordance with the Declaration of Helsinki. Blood was donated by healthy human volunteers, free from medication for 2 weeks. Blood was collected into 11 mM sodium citrate and washed platelets were prepared as described previously. 
7 Unless otherwise stated, to isolate the mechanisms of Zn 2+ fluctuations from other cation-specific effects, experiments were performed in the absence of extracellular Ca 2+ . Cation mobilisation studies : For studies of [Zn 2+ ] i or [Ca 2+ ] i mobilization, platelet-rich plasma was loaded with Fz-3 (2 µM, 30 minutes, 37°C), or Fluo-4 (2 µM, 30 minutes, 37°C). Fz-3 is responsive to Zn 2+ in the nM range and is not significantly affected by Ca 2+ . 15 Platelets were collected by centrifugation (350 ×  g , 15 minutes), re-suspended in Ca 2+ -free Tyrode's buffer (in mM: 140 NaCl, 5 KCl, 10 HEPES, 5 glucose, 0.42 NaH 2 PO 4 , 12 NaHCO 3 , pH 7.4) and rested at 37°C for 30 minutes prior to use. Fluorescence was monitored using a Fluoroskan Ascent fluorometer (ThermoScientific, United Kingdom) using 488 nm and 538 nm excitation and emission filters, respectively. Washed Fz-3- or Fluo-4-loaded platelet suspensions were treated with ionophores or chelators to calibrate R max or R min values ( Supplementary Fig. S1 , available in the online version). Results are expressed as an increase of background-corrected fluorescence at each time point relative to baseline: ( F  −  F background )/( F 0  −  F background ). Optical aggregometry : Aggregometry was performed with washed platelet suspensions under stirring conditions at 37°C in an AggRam light transmission aggregometer (Helena Biosciences, Gateshead, United Kingdom). 7 Aggregation traces were acquired using proprietary software (Helena Biosciences) and analysed within GraphPad Prism (Version 6.03). Confocal microscopy : Images of platelets adhering to fibrinogen-coated coverslips (100 µM) were acquired using a LSM510/Axiovert laser scanning confocal microscope with 60× oil NA1.45 objective (Zeiss, United Kingdom). Surface coverage of DIOC 6 -stained platelets was quantified using ImageJ (v1.45, National Institutes of Health, Bethesda, Maryland, United States). 
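The background-corrected normalization used for the fluorescence traces, ( F − F background )/( F 0 − F background ), reduces to simple arithmetic. A minimal sketch in Python; the trace values are invented for illustration and are not data from this study:

```python
def normalized_response(trace, f0, f_background):
    """Background-corrected fluorescence, (F - F_bg) / (F0 - F_bg),
    expressed relative to the pre-stimulation baseline F0."""
    return [(f - f_background) / (f0 - f_background) for f in trace]

# Hypothetical Fz-3 readings (arbitrary units): baseline F0, a constant
# background signal, and a post-agonist rise in fluorescence over time.
f_background = 10.0
f0 = 60.0
trace = [60.0, 75.0, 110.0, 110.0]

print(normalized_response(trace, f0, f_background))
# → [1.0, 1.3, 2.0, 2.0]; the first point is 1.0 by construction,
# and values > 1 correspond to a rise in [Zn2+]i over baseline
```

A plateau value of 2.0 in this sketch corresponds to the F/F0 of ~2.0 AU reported for 1 µg/mL CRP-XL.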
Western blotting : Western blotting was performed as described previously. 7 Briefly, polyvinylidene difluoride membranes were incubated with MLC (1:400) or VASP (Ser157, 1:400) primary antibodies, and horseradish peroxidase-conjugated secondary antibodies (1:7,500). Flow cytometry : Washed platelet suspensions were incubated with fluorescently conjugated antibodies targeting markers of platelet activation: PAC-1 (α IIb β 3 activation), CD62P (α granule release) and CD63 (dense granule release). Antibody binding following agonist or ionophore stimulation was assessed using an Accuri C6 flow cytometer (BD Biosciences). Data analysis : Maximum and minimum aggregation and F / F 0 values were calculated using Microsoft Excel. Western blots were analysed using ImageJ. Data were analysed in GraphPad Prism by two-way analysis of variance followed by Tukey's post hoc test. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Results: [Zn 2+ ] i fluctuations coordinate receptor stimulation with signalling responses in nucleated cells. 8 To investigate whether intra-platelet Zn 2+ fluctuates during activation, agonist-evoked changes of [Zn 2+ ] i were monitored in washed platelet suspensions, loaded with the Zn 2+ -specific fluorophore, Fz-3. Stimulation with conventional platelet agonists CRP-XL and U46619 induced rapid, dose-dependent increases of fluorescence peaking after approximately 2 minutes, consistent with increases in [Zn 2+ ] i . At 6 minutes, 1 µg/mL CRP-XL or 10 µM U46619 stimulation increased F / F 0 to 2.0 ± 0.1 and 1.2 ± 0.1 AU, respectively (compared with 0.9 ± 0.2 AU for the vehicle control, p  < 0.05, Fig. 1A , B ). Conversely, thrombin stimulation did not elevate Fz-3 fluorescence ( Fig. 1C ). These data indicate that platelet activation via GpVI and TP, but not via PARs, leads to signalling responses that result in the elevation of [Zn 2+ ] i , in a similar manner to agonist-evoked increases in [Ca 2+ ] i . 
Inclusion of 2 mM CaCl 2 in the extracellular medium did not significantly affect agonist-evoked responses ( Supplementary Fig. S2 , available in the online version). Agonist-dependent platelet activation via GpVI or TP, but not PARs, elevates [Zn 2+ ] i . Fz-3-labelled washed human platelets were stimulated by CRP-XL ( A ), U46619 ( B ) or thrombin ( C ) and [Zn 2+ ] i fluctuations were monitored over 6 minutes using fluorometry. ( A ) Fz-3 responses to ○ 1 µg/mL, □ 0.3 µg/mL, ▵ 0.1 µg/mL, ⋄ 0.03 µg/mL CRP-XL or • vehicle (DMSO). ( B ) Fz-3 responses to ○ 10 µM, □ 3 µM, ▵ 1 µM, ⋄ 0.3 µM U46619 or • vehicle (DMSO). ( C ) Fz-3 responses to ○ 1 U/mL, □ 0.3 U/mL, ▵ 0.1 U/mL, ⋄ 0.03 U/mL thrombin or • vehicle (DMSO). Data are mean ± standard error of the mean (SEM) from at least 8 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Experiments were performed to confirm the specificity of fluorescence fluctuations for Zn 2+ . Platelets were pre-treated with the intracellular Zn 2+ -specific chelator TPEN (50 µM) prior to stimulation with 1 µg/mL CRP-XL. Fz-3 responses were reduced to 1.1 ± 0.1 AU, compared with 2.0 ± 0.1 AU for CRP-XL stimulation alone ( p  < 0.05, Fig. 2A ). Interestingly, treatment with DM-BAPTA (10 µM), a non-specific cation chelator, led to a similar reduction (to 1.0 ± 0.1 AU, p  < 0.05). Abrogation of Fz-3 fluorescence was also observed following stimulation with U46619 (10 µM), where TPEN or DM-BAPTA treatment reduced F / F 0 plateau levels from 1.2 ± 0.1 to 0.8 ± 0.1 AU and 1.0 ± 0.1 AU, respectively ( p  < 0.05, Fig. 2B ). Further experiments were performed to investigate the influence of cation chelation on [Ca 2+ ] i fluctuations using Fluo-4-loaded platelets. 
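The dissociation constants quoted in the Materials section make the chelator-selectivity argument quantitative. A small sketch comparing the Zn 2+ /Ca 2+ preference of TPEN and DM-BAPTA from those K d values (illustrative only):

```python
# Dissociation constants (M) as quoted in the Materials section.
KD = {
    "TPEN":     {"Zn": 2.6e-16, "Ca": 4.4e-5},
    "DM-BAPTA": {"Zn": 7.9e-9,  "Ca": 110e-9},
}

def zn_over_ca_selectivity(chelator):
    """Ratio Kd(Ca)/Kd(Zn): how much more tightly Zn2+ is bound than Ca2+."""
    kd = KD[chelator]
    return kd["Ca"] / kd["Zn"]

for name in KD:
    print(f"{name}: {zn_over_ca_selectivity(name):.1e}-fold preference for Zn2+")
# TPEN prefers Zn2+ by roughly 10^11-fold, making it effectively
# Zn2+-specific at the 50 µM used here, whereas DM-BAPTA's ~14-fold
# preference means it chelates both cations — consistent with TPEN
# abolishing Fz-3 but not Fluo-4 signals, while DM-BAPTA reduces both.
```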
As previously demonstrated, CRP-XL- and U46619-induced Ca 2+ signals were absent following BAPTA treatment ( F / F 0 signals were reduced from 1.6 ± 0.2 to 0.8 ± 0.1 AU, and from 1.4 ± 0.1 to 0.9 ± 0.0 AU, for CRP-XL and U46619 stimulation, respectively, p  < 0.05, Fig. 2D , E ). However, Fluo-4 fluorescence was not significantly affected by TPEN treatment (1.5 ± 0.2 and 1.2 ± 0.1 AU for CRP-XL and U46619, respectively, ns), indicating that TPEN does not chelate [Ca 2+ ] i , and that Fz-3 signals are attributable to [Zn 2+ ] i with no influence from other cations. Furthermore, these data demonstrate that fluctuations in [Zn 2+ ] i do not affect agonist-evoked Ca 2+ signals. Agonist-dependent intracellular zinc ([Zn 2+ ] i ) fluctuations are sensitive to the platelet redox state. Platelets were loaded with Fz-3 ( A , B , C ), or Fluo-4 ( D , E , F ) and stimulated with CRP-XL (1 µg/mL, ○, A , D ), U46619 (10 µM, ○, B , E ) or H 2 O 2 (10 µM, ○, C , F ), during which changes in fluorescence were monitored. Where indicated, platelets were pre-treated with TPEN (▿, 50 µM), DM-BAPTA (⋄, 10 µM), PEG-SOD (□, 30 U/mL), PEG-CAT (▵, 300 U/mL) or vehicle (DMSO, •). Data are mean ± standard error of the mean (SEM) from at least 5 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Agonist-evoked [Zn 2+ ] i increases may result from release of membrane-bound intracellular stores or by liberation from metal-binding proteins (e.g. metallothioneins) in response to redox-mediated modifications to thiol groups. 16 To investigate the nature of the Zn 2+ source, platelets were treated with membrane-permeant anti-oxidizing proteins PEG-SOD and PEG-CAT, 17 and CRP-XL-evoked [Zn 2+ ] i fluctuations were monitored. 
PEG-SOD and PEG-CAT both abolished CRP-XL-induced increases of Fz-3 fluorescence, indicating redox-dependent modulation of Zn 2+ release (PEG-SOD and PEG-CAT reduced F / F 0 plateaus following 1 µg/mL CRP-XL treatment from 2.0 ± 0.1 to 1.2 ± 0.1 AU and 1.3 ± 0.1 AU, respectively, p  < 0.05, Fig. 2A ). This is consistent with published data showing a greater capacity for GpVI to influence redox signalling than other receptors. 18 Similarly, PEG-SOD and PEG-CAT abolished U46619-induced [Zn 2+ ] i responses (to 1.0 ± 0.0 and 1.1 ± 0.0 AU, respectively, following 10 µM U46619 stimulation, p  < 0.05, Fig. 2B ). PEG-SOD and PEG-CAT did not affect CRP-XL- or U46619-mediated Fluo-4 fluorescence, suggesting that [Zn 2+ ] i but not [Ca 2+ ] i signals are regulated by redox-sensitive processes. Further experiments were performed to resolve the relationship between the platelet redox state and [Zn 2+ ] i fluctuations. Treatment with H 2 O 2 mimics increases in platelet reactive oxygen species (ROS). 19 H 2 O 2 increased both [Ca 2+ ] i and [Zn 2+ ] i ( F / F 0 plateaus were 1.8 ± 0.3 AU following H 2 O 2 [10 µM] stimulation of Fz3-loaded platelets, compared with 0.9 ± 0.1 AU for vehicle-treated platelets, while H 2 O 2 stimulation increased Fluo-4 fluorescence from 0.9 ± 0.1 to 1.4 ± 0.1 AU, p  < 0.05, Fig. 2C , F ). H 2 O 2 -mediated [Zn 2+ ] i increases were abrogated with PEG-SOD or PEG-CAT, while [Ca 2+ ] i was unaffected ( Fig. 2C , F ). These data support a role for the platelet redox state in regulating [Zn 2+ ] i fluctuations. Having demonstrated that intra-platelet Zn 2+ rises in response to agonist stimulation, we further examined the influence of [Zn 2+ ] i on platelet responses. We hypothesized that liberation of Zn 2+ from intracellular stores (such as platelet α-granules 20 ) using specific ionophores would result in increased [Zn 2+ ] i , in a similar manner to A23187-evoked Ca 2+ responses. 
21 Zn 2+ ionophores Cq and Py have previously been used to model [Zn 2+ ] i increases in nucleated cells. 22 23 24 We utilized these reagents to model agonist-evoked [Zn 2+ ] i increases in washed platelet suspensions. Stimulation with Cq or Py produced large elevations of [Zn 2+ ] i , with F / F 0 plateaus of 7.9 ± 0.5 and 3.3 ± 0.3 AU, respectively ( p  < 0.05, Fig. 3A , B ). The extent of [Zn 2+ ] i increase was greater than that observed following CRP-XL stimulation, suggesting that liberation from stores is not the principal means by which [Zn 2+ ] i increases following agonist stimulation. Zn 2+ ionophore-dependent Fz-3 fluorescence increases were sensitive to pre-treatment with TPEN or BAPTA, consistent with a role for Cq or Py increasing [Zn 2+ ] i ( Fig. 3A , B ). However, [Zn 2+ ] i signals were not influenced by PEG-SOD or PEG-CAT, demonstrating that ionophore-induced [Zn 2+ ] i release is not redox sensitive. Cq or Py stimulation did not affect Fluo-4 fluorescence ( Fig. 3D , E ), indicating that Zn 2+ ionophores have a negligible affinity for Ca 2+ . A23187 increased Fluo-4 fluorescence (from 0.9 ± 0.1 to 5.8 ± 0.9 AU after 6 minutes, p  < 0.05, Fig. 3F ), but had no effect on Fz-3 fluorescence ( Fig. 3C ), demonstrating that Fz-3 fluorescence is not affected by changes in [Ca 2+ ] i . In a similar manner to agonist-dependent Ca 2+ signalling, A23187-dependent [Ca 2+ ] i increases were abrogated by BAPTA, but were unaffected by TPEN. Thus, Fluo-4 fluorescence is not influenced by Zn 2+ . Treatment of platelets with Zn 2+ ionophores clioquinol (Cq) or pyrithione (Py) elevates [Zn 2+ ] i , but not [Ca 2+ ] i . Washed platelet suspensions were loaded with Fz-3 ( A , B , C ), or Fluo-4 ( D , E , F ) and stimulated with Cq (○, 300 µM, A , D ), Py (○, 300 µM, B , E ) or A23187 (○, C , F ). Where indicated, platelets were pre-treated with TPEN (50 µM, ▿), DM-BAPTA (10 µM, ⋄), PEG-SOD (30 U/mL, □), PEG-CAT (300 U/mL, ▵), or vehicle (DMSO, •). 
Data are mean ± standard error of the mean (SEM) from at least 6 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Our data confirm that platelet [Zn 2+ ] i increases can be modelled using the Zn 2+ ionophores Cq and Py. Next, we examined the influence of increases in [Zn 2+ ] i on platelet aggregation. High concentrations of Cq (300 µM) resulted in an initial decrease in light transmission, followed by a substantial increase, consistent with shape change and aggregation. Platelet aggregates were present following visual inspection of test cuvettes at the end of each experiment (not shown). The extent of Cq-induced aggregation (300 µM, 27.8 ± 5.0%) was lower than that for A23187 (300 µM, 70.2 ± 8.6%, p  < 0.05, Fig. 4A , B ). Treatment with lower concentrations of Cq (30 µM) resulted in shape change only, with no progression to aggregation. Py stimulation did not cause aggregation but did result in shape change ( Fig. 4A – C ). Responses to Py were biphasic, with intermediate concentrations (10–30 µM) resulting in shape change, and higher concentrations having no effect. Stimulation of platelets with Zn 2+ ionophores leads to shape change. ( A ) Washed platelet suspensions were stimulated with different concentrations of clioquinol (Cq), pyrithione (Py) or A23187 during which changes in light transmission were monitored using optical aggregometry. Initial downward deflections indicate a reduction in light transmission that are consistent with shape change. Subsequent upward deflections indicate increases in light transmission, consistent with platelet aggregation. The maximum ( B ) and minimum ( C ) extent of aggregation were calculated for each ionophore (▪ Cq, ▵ Py, ○ A23187). Data are mean ± standard error of the mean (SEM) from at least 5 experiments. The degree of shape change was quantified by calculating the lowest light transmission during ionophore-induced aggregation (denoted minimum aggregation, %). 
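The minimum- and maximum-aggregation metrics described here can be extracted from a light-transmission trace with a few lines of code. A sketch in Python; the trace values are hypothetical and chosen only to mimic the biphasic Cq response (initial downward deflection, then aggregation):

```python
def aggregation_extrema(trace_percent):
    """Return (minimum aggregation, maximum aggregation) from a trace of
    aggregation values (%) relative to the pre-stimulation baseline (0%).
    The minimum (most negative deflection) reflects shape change; the
    maximum reflects the peak extent of aggregation."""
    return min(trace_percent), max(trace_percent)

# Hypothetical 300 µM Cq response: shape change, then aggregation.
trace = [0.0, -5.0, -13.0, -8.0, 5.0, 18.0, 27.8]
shape_change, aggregation = aggregation_extrema(trace)
print(f"minimum aggregation: {shape_change}%, maximum aggregation: {aggregation}%")
# → minimum aggregation: -13.0%, maximum aggregation: 27.8%
```

A shape-change-only response (e.g. 30 µM Cq or Py) would show a negative minimum with a maximum remaining near baseline.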
Shape change following Cq or Py treatment was comparable with that following A23187 (minimum aggregation for 30 µM Cq or Py was –13.3 ± 2.9 and –27.5 ± 2.2%, respectively, compared with –15.1 ± 2.7% for 30 µM A23187, ns, Fig. 4C ). These data are consistent with a role for [Zn 2+ ] i in regulating cytoskeletal changes in a similar manner to [Ca 2+ ] i -induced shape change. To confirm that the changes in light transmission were a biological, rather than chemical phenomenon, we took a pharmacological approach by pre-treating platelets with the actin polymerization inhibitor Cyt-D prior to ionophore stimulation. Cyt-D abrogated Cq-, Py- and A23187-induced shape change, consistent with a genuine biological effect. The minimum aggregation for Cyt-D treated and untreated platelets were –5.7 ± 2.1 and –16.7 ± 1.9%, respectively, following Cq stimulation, –9.1 ± 1.9 and –33.2 ± 2.4%, respectively, following Py stimulation, and –3.7 ± 1.4 and –13.0 ± 1.8%, respectively, following A23187 stimulation (30 µM, p  < 0.05, Fig. 5A , B ). Pre-treatment of platelets with TPEN abrogated Cq- or Py-induced shape change but had no effect on A23187 treatment (minimum aggregation following TPEN treatment was –4.9 ± 1.2, –11.1 ± 2.3 and –17.9 ± 2.6% for Cq, Py and A23187, respectively, p  < 0.05, Fig. 5A , B ). These data are consistent with a role for [Zn 2+ ] i in regulating cytoskeletal re-arrangements. The resistance of A23187-induced shape change to TPEN treatment suggests that the contribution of Ca 2+ signals to cytoskeletal re-arrangement occurs independently of Zn 2+ signals, and could indicate different mechanisms for Zn 2+ - and Ca 2+ -induced shape change. Ionophore-induced shape change is sensitive to pre-treatment with Cyt-D or TPEN. ( A ) Representative aggregometry traces showing clioquinol (Cq)-, pyrithione (Py)- or A23187-induced (30 µM) shape change following pre-treatment with TPEN (50 µM) or Cyt-D (10 µM). 
( B ) Quantitation of minimum aggregation following treatment of platelets pre-treated with TPEN (▪ 25 µM), Cyt-D (▪ 10 µM) or vehicle (□ DMSO) prior to stimulation with Cq, Py or A23187 (30 µM). Data are mean ± standard error of the mean (SEM) of at least 6 experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. [Zn 2+ ] i -dependent cytoskeletal changes were further investigated by visualizing platelet spreading on fibrinogen. TPEN-treated platelets were able to adhere to fibrinogen, but did not spread, with no visible lamellipodia or filopodia ( Fig. 6A ). Mean platelet surface coverage after 10 minutes was 12.8 ± 1.5 µm 2 , compared with 22.7 ± 1.6 µm 2 for untreated platelets ( Fig. 6B ). Regulation of Cq-induced shape change was investigated by assaying VASP and MLC, which alter phosphorylation status during cytoskeletal re-arrangements. 25 26 Cq- or Py-induced shape change was accompanied by increased phosphorylation of Ser19 of MLC, confirming a role for [Zn 2+ ] i in the signalling process leading to cytoskeletal changes. Unlike PGE 1 treatment, VASP did not undergo phosphorylation in response to ionophore treatment, indicating that Zn 2+ does not influence activity of cyclic nucleotide-dependent kinases such as protein kinase A (PKA) or protein kinase G (PKG). 27 [Zn 2+ ] i regulates platelet shape change, and phosphorylation of cytoskeletal regulators. Washed platelet suspensions were incubated on fibrinogen-coated coverslips following pre-treatment with 50 µM TPEN or vehicle control (DMSO). ( A ) Representative images of platelet spreading. ( B ) Quantification of the surface coverage by adherent platelets (○ DMSO, • 50 µM TPEN, n  = 3). ( C ) Representative Western blot showing increased MLC phosphorylation following stimulation of platelets for 2 minutes with vehicle (DMSO), thrombin (1 U/mL), A23187 (100 µM), clioquinol (Cq) (300 µM) and pyrithione (Py) (300 µM). 
( D ) Representative Western blot showing VASP phosphorylation following stimulation of platelets for 2 minutes with vehicle (DMSO), prostaglandin E 1 (PGE 1 ) (1 U/mL), A23187 (100 µM), Cq (300 µM) and Py (300 µM). VASP phosphorylation was unaffected by Zn 2+ ionophore treatment. Blots are representative of three experiments. Data are means ± standard error of the mean (SEM), from at least 5 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. These data indicate that increases in [Zn 2+ ] i initiate platelet activation events, such as shape change and aggregation. To better understand the extent to which changes in [Zn 2+ ] i regulate platelet activation, the influence of Cq treatment on conventional markers of platelet activation was investigated. In a similar manner to thrombin and A23187, Cq or Py stimulation (300 µM) substantially increased platelet PAC-1 binding (59.7 ± 5.5, 64.5 ± 5.8, 47.3 ± 4.1 and 37.8 ± 5.0%, respectively, p  < 0.05, Fig. 7A ), consistent with earlier observations correlating Cq stimulation with aggregation ( Fig. 4 ), and supportive of a role for [Zn 2+ ] i in α IIb β 3 activation. Cq or Py increased CD63, but not CD62P externalization (55.9 ± 7.8 and 5.7 ± 2.8%, respectively, following Cq stimulation, and 50.2 ± 2.6 and 6.9 ± 2.2% following Py stimulation, Fig. 7A ) indicating that increases in [Zn 2+ ] i initiate dense, but not α granule, secretion. This differed from both thrombin (CD62P: 62.9 ± 5.5%, CD63: 48.8 ± 3.0%) and A23187 (CD62P: 31.1 ± 5.7%, CD63: 55.1 ± 5.0%), both of which induced α as well as dense granule release. Increasing platelet [Zn 2+ ] i using Zn 2+ ionophores increases platelet activation markers. ( A ) Washed platelet suspensions were stimulated by thrombin (Thr, 1 U/mL), clioquinol (Cq) (300 µM), pyrithione (Py) (300 µM) or A23187 (100 µM) and changes of PAC-1 (white), CD62P (grey) and CD63 (black) binding were obtained after 60 minutes. 
( B ) Washed platelet suspensions were stimulated by CRP-XL (1 µg/mL), U46619 (10 µM) or thrombin (1 U/mL), following pre-treatment with TPEN (50 µM), and changes of PAC-1 (white), CD62P (grey) and CD63 (black) binding were obtained after 60 minutes. ( C ) Washed platelet suspensions were treated with Ca 2+ or Zn 2+ ionophores, or conventional platelet agonists, prior to analysis of annexin-V binding by flow cytometry. □ Clioquinol (300 µM), ▵ pyrithione (300 µM), ○ A23187 (300 µM), • CRP (1 µg/mL), ▪ thrombin, (1 U/mL), ▪ vehicle (DMSO). ( D ) Platelet suspensions were pre-treated with the caspase inhibitor Z-VAD (▵, 1 µM), the Zn 2+ chelator, TPEN (▪, 25 µM) or vehicle (□) prior to stimulation with clioquinol (300 µM). ○ Unstimulated platelets. Changes in the percentage of platelets binding to annexin-V were recorded. Washed platelet suspensions were pre-treated with Z-VAD (1 µM), or TPEN (50 µM) prior to stimulation with conventional agonists, CRP-XL (1 µg/mL, E ), U46619 (10 µM, F ) or thrombin (1 U/mL, G ). Changes in annexin-V binding were monitored using flow cytometry. ○ Vehicle, □ Z-VAD (1 µM), ▵ TPEN (50 µM), ▿ DMSO (no agonist). Data are means ± standard error of the mean (SEM) of at least 3 independent experiments. Significance is denoted as *** p  < 0.001, ** p  < 0.01 or * p  < 0.05. Further experiments were performed to assess the influence of [Zn 2+ ] i on agonist-evoked changes in platelet activatory markers. TPEN reduced increases of PAC-1, or CD63 binding in response to CRP-XL (1 µg/mL, from 55.4 ± 4.9 to 29.0 ± 1.5% for PAC-1 binding, and from 46.4 ± 4.0 to 24.2 ± 2.5% for CD63 binding, p  < 0.05), U46619 (10 µM, from 36.2 ± 2.8 to 16.5 ± 1.2% for PAC-1 binding, and from 21.9 ± 3.6 to 10.7 ± 1.3% for CD63 binding, p  < 0.05) or thrombin (1 U/mL, from 64.6 ± 5.2 to 32.1 ± 3.6% for PAC-1 binding, and from 46.8 ± 3.8 to 17.6 ± 2.3% for CD63 binding, p  < 0.05), but had no effect on agonist-evoked CD62P increases ( Fig. 7B ). 
This provides further support for a role of [Zn 2+ ] i in differentially regulating platelet granule secretion. Extracellular Zn 2+ signalling and agonist-induced changes in [Zn 2+ ] i have both been linked to apoptosis and related responses in nucleated cells. 28 29 30 31 However, the role of Zn 2+ in PS exposure during platelet activation has yet to be studied. To investigate the influence of [Zn 2+ ] i on PS exposure, platelets were treated with ionophores, and annexin-V binding was quantified in real time. Increasing platelet [Zn 2+ ] i with Cq (300 µM) resulted in a concurrent increase in annexin-V binding. PS exposure evolved more slowly with Zn 2+ ionophore treatment than A23187, but reached similar plateau levels (90.0 ± 0.9 and 88.6 ± 2.7% for Cq and A23187, respectively, Fig. 7C ), indicating that most platelets in the population were annexin-V positive. This differed from responses to the conventional agonists, thrombin and CRP-XL, which induced PS exposure in a sub-set of platelets (35.0 ± 6.2 and 34.4 ± 6.2%, respectively). Cq-induced annexin-V binding was sensitive to TPEN (6.6 ± 6.3% positive platelets at 60 minutes) confirming a role for Zn 2+ . Furthermore, pre-treatment with the caspase inhibitor, Z-VAD, abrogated Cq-induced PS exposure (53.6 ± 4.7% at 60 minutes, p  < 0.05, Fig. 7D ). The influence of Zn 2+ on agonist-evoked annexin-V binding was also investigated. Consistent with the findings of Cohen et al, 32 we observed a reduction in agonist-evoked PS exposure in the presence of Z-VAD (1 µM) (from 34.4 ± 2.9 to 23.1 ± 2.0% following stimulation with 1 µg/mL CRP-XL, from 24.4 ± 1.8 to 15.2 ± 2.0% following stimulation with 10 µM U46619 and from 32.5 ± 4.8 to 21.2 ± 2.4% following stimulation with 1 U/mL thrombin, Fig. 7E – G , p  < 0.05). Similar reductions of annexin-V binding in TPEN-treated platelets were observed following stimulation by CRP-XL (26.3 ± 0.9%, p  < 0.05), or U46619 (21.4 ± 2.7%, p  < 0.05). 
However, TPEN did not affect thrombin-mediated annexin-V binding (28.3 ± 4.6%, ns). These data are consistent with a role for Zn 2+ in agonist-evoked PS exposure. Discussion: The role of Zn 2+ as a secondary signalling molecule has received little research interest, possibly owing to its relatively low resting cytosolic levels (pM, compared with nM concentrations of Ca 2+ ). Zn 2+ is present in granules of nucleated cells, and in platelet α granules. It also associates with thiol-containing proteins such as metallothioneins, which are also present in platelets. 33 The transition between protein- or membrane-bound Zn 2+ and labile Zn 2+ in the cell cytosol has been demonstrated in multiple cell systems, and increases in labile [Zn 2+ ] i have been correlated with phenotypic changes. Here, we show for the first time that agonist-evoked stimulation of platelets in vitro results in increases of [Zn 2+ ] i . While requiring further confirmation, such behaviour is consistent with a role of Zn 2+ as a secondary messenger. Zn 2+ fluctuations were apparent in the presence of extracellular CaCl 2 , supporting a physiological role for this effect. We confirm the nature of the fluorescent signal using the high affinity Zn 2+ chelator TPEN. TPEN was also used to probe the role of Zn 2+ in functional responses to agonist stimulation. Owing to its affinity for Zn 2+ , use of TPEN here is not only likely to abrogate agonist-evoked increases in [Zn 2+ ] i , but could also strip metalloproteins of Zn 2+ co-factors. 34 Thus, conclusions drawn from the use of TPEN may not only reflect abrogation of agonist-evoked [Zn 2+ ] i increases. [Zn 2+ ] i increases were observed in platelets following stimulation via GpVI and TP, but not via PAR, indicating that different signalling pathways link to [Zn 2+ ] i release. 
Signalling via GpVI differs from that of TP or PAR G-protein-coupled receptors, in that it results in tyrosine phosphorylation of platelet proteins (such as Syk and LAT), leading to activation of PI3K and PLCγ2. Conversely, PAR and TP signal through G-protein-dependent routes to activate Rho-GEF and PLCβ. It is likely that [Zn 2+ ] i increases are regulated by signalling proteins that are not shared by GpVI and thrombin pathways. However, the different outcomes following PAR and TP-dependent signalling are harder to reconcile, as both receptors couple to similar signalling pathways that involve Gα 12/13 and Gα q . We show that the platelet redox state affects [Zn 2+ ] i fluctuations in a similar manner to nucleated cells. 35 36 CRP-XL- and U46619-evoked elevations of [Zn 2+ ] i were sensitive to antioxidants, and could be enhanced by H 2 O 2 . Zn 2+ binding to thiols (e.g. metallothioneins) is redox-sensitive and changes of redox state lead to release of Zn 2+ into the labile pool in nucleated cells. 37 Given that modulation of the platelet redox state led to a rapid and sustained rise of [Zn 2+ ] i , it is possible that platelet Zn 2+ -binding proteins represent a store for these cations. Interestingly, Ca 2+ signalling was unaffected by redox changes, suggesting that these ions are differentially regulated. Indeed, the predominant Ca 2+ store is the dense tubular system, which performs a similar role to the endoplasmic reticulum in nucleated cells. It is therefore likely that intra-platelet Zn 2+ is stored by Zn 2+ -binding proteins and becomes liberated upon agonist stimulation. However, we did not observe increases of [Zn 2+ ] i following thrombin stimulation, which has been shown to induce similar levels of ROS activation as collagen activation. 18 38 One possible explanation could be that the larger Ca 2+ signal generated by thrombin negatively regulates Zn 2+ release. 
We examined the influence of [Zn 2+ ] i on activatory processes using membrane permeable Zn 2+ -specific ionophores, Py and Cq, which have been widely used to model increases in [Zn 2+ ] i . Stimulation with either ionophore resulted in increases in [Zn 2+ ] i , with a greater signal obtained with Cq. Neither ionophore produced increases in Fluo-4 fluorescence, indicating a negligible affinity for [Ca 2+ ] i . Conversely, stimulation with the Ca 2+ ionophore A23187 produced rapid increases in [Ca 2+ ] i , but did not affect [Zn 2+ ] i . Investigation of cation responses in cells depends heavily on the specificity of reagents for their cognate ions. By showing that A23187 initiates a Ca 2+ response which is not detected by Fz-3, we demonstrate that Fz-3 fluorescence increases are directly attributable to changes in [Zn 2+ ] i , and are not influenced by [Ca 2+ ] i . This is further supported by our observation that TPEN does not affect Fluo-4 fluorescence, which also provides evidence that agonist-evoked Ca 2+ signalling does not depend on [Zn 2+ ] i signals. This observation raises questions about the relative roles of Ca 2+ and Zn 2+ in platelet activation, as both target similar proteins, including PKC, calmodulin and CamKII. 4 Unlike agonist stimulation, ionophore-induced [Zn 2+ ] i increases were not sensitive to anti-oxidant treatment. Furthermore, the extent of the [Zn 2+ ] i increase following ionophore stimulation was greater than that observed for agonists, indicating that ionophores liberate Zn 2+ from stores that are not accessible to agonist-evoked signalling mechanisms. Such stores could include α granules, which are known to contain Zn 2+ . 20 Our use of ionophores here to model [Zn 2+ ] i increases, while providing information on Zn 2+ -dependent mechanisms, is therefore unlikely to fully represent the physiological situation. Cytoskeletal re-arrangements are primary steps in platelet activation. 
Zn 2+ ionophore stimulation resulted in a demonstrable shape change, which was abrogated following Cyt-D treatment, verifying it as a biological, rather than chemical, response. Furthermore, platelet spreading on fibrinogen was abrogated following [Zn 2+ ] i chelation. While not correlating [Zn 2+ ] i fluctuations with shape change, these data provide support for a role of Zn 2+ in activation-dependent cytoskeletal re-arrangements. Zn 2+ is an important regulator of the cytoskeleton in nucleated cells. 39 40 Zn 2+ regulates tubulin polymerization leading to nuclear transport of transcription factors in neuronal cells, 41 and has been shown to regulate the actin cytoskeleton, focal adhesion dynamics and cell migration in PC-3 and HeLa cells, 35 where Zn 2+ chelation suppresses filopodia formation and results in the loss of stress fibres. Conversely, treatment with Py increased filopodia formation, suppressed stress fibres and decreased the number and size of focal adhesions. 35 Thus, Zn 2+ is likely to play similar important roles in platelet cytoskeletal re-arrangements. We show that raising [Zn 2+ ] i results in increases in MLC phosphorylation. MLCK is canonically activated via Ca 2+ -mediated activation of calmodulin. 42 As other calmodulin-dependent kinases have been shown to be modulated by Zn 2+ , it is possible that Zn 2+ is able to substitute for Ca 2+ , initiating MLCK activation. 43 Absence of phosphorylation of VASP indicates that increases in [Zn 2+ ] i do not influence the activity of cyclic nucleotide-dependent kinases such as PKG or PKA. Ionophore-induced elevation of [Zn 2+ ] i increased PAC-1 binding, supporting our aggregometry data ( Fig. 4 ), and supportive of a role for Zn 2+ in regulating α IIb β 3 activity ( Fig. 6 ). Interestingly, [Zn 2+ ] i increases resulted in the externalization of CD63, but not CD62P, supporting a role for Zn 2+ in regulating dense, but not α, granule release. 
Further experiments using TPEN in conjunction with conventional platelet agonists provide support for a role for [Zn 2+ ] i in α IIb β 3 activation and dense granule secretion, but not α granule secretion ( Fig. 7B ). Distinct signalling pathways contribute to the differential release of α and dense granules, and while the exact mechanism is poorly understood, our work provides evidence for a role for Zn 2+ in these processes. 44 45 While these studies show that Zn 2+ fluctuations correlate with platelet behaviour, it should be noted that the physiological relevance of the ionophore-evoked [Zn 2+ ] i rises is unclear and that further work will be required to establish the significance of Zn 2+ -dependent secondary signalling in vivo . Upon stimulation with conventional agonists, a sub-set of platelets adopts a pro-coagulant phenotype, elevating [Ca 2+ ] i and externalizing PS. Extracellular Zn 2+ signalling, agonist-induced changes in [Zn 2+ ] i and Zn 2+ ionophore treatment have all been linked to apoptosis and related responses in nucleated cells. 30 31 46 47 48 49 50 Here, we demonstrate that ionophore- or agonist-evoked increases in platelet [Zn 2+ ] i result in PS exposure, consistent with the development of a pro-coagulant phenotype. Interestingly, while CRP-XL- and U46619-evoked PS exposure was sensitive to Zn 2+ chelation, thrombin stimulation was not. This provides further support for a role of Zn 2+ downstream of GpVI and TPα signalling, but not of PARs. Unlike conventional agonists, Cq stimulation resulted in PS exposure in a majority of platelets. This may indicate that agonist-evoked Zn 2+ signals are stimulated in only a sub-set of platelets, which then proceed to become pro-coagulant. As previously shown ( Fig. 3 ), Cq stimulation did not induce increases in [Ca 2+ ] i , so Cq-dependent PS exposure is independent of [Ca 2+ ] i . Platelet PS exposure has been attributed to both caspase 3-dependent and -independent mechanisms.
51 52 Cq-dependent PS exposure is partially abrogated by Z-VAD pre-treatment, suggesting a partial role for caspase activity in this process. In conclusion, this study provides the first evidence for agonist-evoked increases of [Zn 2+ ] i in platelets. While requiring further confirmation, such behaviour is consistent with a role for Zn 2+ as a secondary messenger. Increases in [Zn 2+ ] i are sensitive to the redox state, indicative of a role for redox in agonist-evoked Zn 2+ signalling. Modelling increases of [Zn 2+ ] i using Zn 2+ -specific ionophores reveals a functional role for [Zn 2+ ] i in platelet activatory changes. [Zn 2+ ] i signalling contributes to key activation-related platelet responses, including shape change, α IIb β 3 activation and granule release. The mechanism by which Zn 2+ affects these processes is currently unknown, but could be attributable to changes in the activity of Zn 2+ -binding enzymes. These data indicate a hitherto unknown role for labile [Zn 2+ ] i during platelet activation, which has implications for our understanding of signalling responses in platelets. While this work does not address the physiological relevance of this process, a better understanding of Zn 2+ signalling may be of significance to the role of platelets in thrombotic disorders such as heart attack and stroke. Furthermore, as they are readily available primary cells, platelets could be used as a model to better understand Zn 2+ signalling in other mammalian cells.
Background: Zinc (Zn2+) is an essential trace element that regulates intracellular processes in multiple cell types. While the role of Zn2+ as a platelet agonist is known, its secondary messenger activity in platelets has not been demonstrated. Methods: Changes in [Zn2+]i were quantified in Fluozin-3 (Fz-3)-loaded washed human platelets using fluorometry. Increases in [Zn2+]i were modelled using Zn2+-specific chelators and ionophores. The influence of [Zn2+]i on platelet function was assessed using platelet aggregometry, flow cytometry and Western blotting. Results: Increases of intra-platelet Fluozin-3 (Fz-3) fluorescence occurred in response to stimulation by cross-linked collagen-related peptide (CRP-XL) or U46619, consistent with a rise of [Zn2+]i. Fluorescence increases were blocked by Zn2+ chelators and modulators of the platelet redox state, and were distinct from agonist-evoked [Ca2+]i signals. Stimulation of platelets with the Zn2+ ionophores clioquinol (Cq) or pyrithione (Py) caused sustained increases of [Zn2+]i, resulting in myosin light chain phosphorylation and cytoskeletal re-arrangements which were sensitive to cytochalasin-D treatment. Cq stimulation resulted in integrin αIIbβ3 activation and release of dense, but not α, granules. Furthermore, Zn2+ ionophores induced externalization of phosphatidylserine. Conclusions: These data suggest that agonist-evoked fluctuations in intra-platelet Zn2+ couple to functional responses, in a manner consistent with a role as a secondary messenger. Increased intra-platelet Zn2+ regulates signalling processes, including shape change, αIIbβ3 up-regulation and dense granule release, in a redox-sensitive manner.
null
null
10,174
311
[]
4
[ "zn", "platelet", "µm", "stimulation", "increases", "platelets", "cq", "agonist", "following", "fig" ]
[ "increases zn platelet", "zn platelet responses", "zn platelet activatory", "extracellular zn signalling", "regulation zn platelets" ]
null
null
[CONTENT] platelets | zinc | platelet activation | signal transduction | secretory vesicles | granule release [SUMMARY]
[CONTENT] platelets | zinc | platelet activation | signal transduction | secretory vesicles | granule release [SUMMARY]
[CONTENT] platelets | zinc | platelet activation | signal transduction | secretory vesicles | granule release [SUMMARY]
null
[CONTENT] platelets | zinc | platelet activation | signal transduction | secretory vesicles | granule release [SUMMARY]
null
[CONTENT] Blood Platelets | Calcium | Cations | Chelating Agents | Cross-Linking Reagents | Cytosol | Humans | Ionophores | Microscopy, Confocal | Oxidation-Reduction | Phosphatidylserines | Phosphorylation | Platelet Activation | Platelet Aggregation | Platelet Glycoprotein GPIIb-IIIa Complex | Polycyclic Compounds | Signal Transduction | Zinc [SUMMARY]
[CONTENT] Blood Platelets | Calcium | Cations | Chelating Agents | Cross-Linking Reagents | Cytosol | Humans | Ionophores | Microscopy, Confocal | Oxidation-Reduction | Phosphatidylserines | Phosphorylation | Platelet Activation | Platelet Aggregation | Platelet Glycoprotein GPIIb-IIIa Complex | Polycyclic Compounds | Signal Transduction | Zinc [SUMMARY]
[CONTENT] Blood Platelets | Calcium | Cations | Chelating Agents | Cross-Linking Reagents | Cytosol | Humans | Ionophores | Microscopy, Confocal | Oxidation-Reduction | Phosphatidylserines | Phosphorylation | Platelet Activation | Platelet Aggregation | Platelet Glycoprotein GPIIb-IIIa Complex | Polycyclic Compounds | Signal Transduction | Zinc [SUMMARY]
null
[CONTENT] Blood Platelets | Calcium | Cations | Chelating Agents | Cross-Linking Reagents | Cytosol | Humans | Ionophores | Microscopy, Confocal | Oxidation-Reduction | Phosphatidylserines | Phosphorylation | Platelet Activation | Platelet Aggregation | Platelet Glycoprotein GPIIb-IIIa Complex | Polycyclic Compounds | Signal Transduction | Zinc [SUMMARY]
null
[CONTENT] increases zn platelet | zn platelet responses | zn platelet activatory | extracellular zn signalling | regulation zn platelets [SUMMARY]
[CONTENT] increases zn platelet | zn platelet responses | zn platelet activatory | extracellular zn signalling | regulation zn platelets [SUMMARY]
[CONTENT] increases zn platelet | zn platelet responses | zn platelet activatory | extracellular zn signalling | regulation zn platelets [SUMMARY]
null
[CONTENT] increases zn platelet | zn platelet responses | zn platelet activatory | extracellular zn signalling | regulation zn platelets [SUMMARY]
null
[CONTENT] zn | platelet | µm | stimulation | increases | platelets | cq | agonist | following | fig [SUMMARY]
[CONTENT] zn | platelet | µm | stimulation | increases | platelets | cq | agonist | following | fig [SUMMARY]
[CONTENT] zn | platelet | µm | stimulation | increases | platelets | cq | agonist | following | fig [SUMMARY]
null
[CONTENT] zn | platelet | µm | stimulation | increases | platelets | cq | agonist | following | fig [SUMMARY]
null
[CONTENT] zn | released | messenger | secondary messenger | role | cells | signalling | secondary | platelet | factor [SUMMARY]
[CONTENT] united | kingdom | united kingdom | 10 | zn | antibodies | zn 10 | biosciences | cambridge | washed [SUMMARY]
[CONTENT] µm | zn | ml | cq | 05 | platelet | treatment | 300 | fig | stimulation [SUMMARY]
null
[CONTENT] zn | platelet | µm | increases | role | signalling | stimulation | united | agonist | platelets [SUMMARY]
null
[CONTENT] Zinc ||| [SUMMARY]
[CONTENT] ||| ||| Western [SUMMARY]
[CONTENT] CRP-XL ||| Zn2+ ||| Cq ||| Cq | αIIbβ3 ||| [SUMMARY]
null
[CONTENT] Zinc ||| ||| ||| ||| Western ||| CRP-XL ||| Zn2+ ||| Cq ||| Cq | αIIbβ3 ||| ||| ||| αIIbβ3 up [SUMMARY]
null
High-resolution ultrasound of rotator cuff and biceps reflection pulley in non-elite junior tennis players: anatomical study.
25034864
Tennis is believed to be potentially harmful for the shoulder; therefore, the purpose of this study was to evaluate the anatomy of the rotator cuff and the coraco-humeral ligament (CHL) in asymptomatic non-elite junior tennis players with high-resolution ultrasound (US).
BACKGROUND
From August 2009 to September 2010, n = 90 asymptomatic non-elite junior tennis players (mean age ± standard deviation: 15 ± 3 years) and a control group of age- and sex-matched subjects were included. Shoulder assessment with a customized standardized protocol was performed. Body mass index, dominant arm, years of practice, weekly hours of training, racket weight, grip (Eastern, Western and semi-Western) and type of strings were recorded.
METHODS
Abnormalities were found at ultrasound in 14/90 (15%) players. Two players had supraspinatus tendinosis, two had subacromial impingement and ten had subacromial bursitis. CHL thickness was comparable in the dominant and non-dominant arms (11.3 ± 4.4 mm vs. 13 ± 4.2 mm, p > 0.05). Multivariate analysis demonstrated no association between CHL thickness and the variables evaluated. In the control group, abnormalities were found at ultrasound in 6/60 (10%) subjects (subacromial bursitis). No statistically significant difference between players and the control group was found (p = 0.71).
RESULTS
In asymptomatic non-elite junior tennis players, only minor shoulder abnormalities were found.
CONCLUSION
[ "Adolescent", "Asymptomatic Diseases", "Athletes", "Athletic Injuries", "Bursitis", "Case-Control Studies", "Child", "Cross-Sectional Studies", "Female", "Hand Strength", "Humans", "Ligaments", "Male", "Rotator Cuff", "Shoulder Impingement Syndrome", "Tendinopathy", "Tennis", "Ultrasonography" ]
4109776
Background
Tennis is practiced by a wide range of people throughout the world and is the most popular of all racket sports. For the last 10 years tennis practice has grown significantly for recreational and competitive purposes. Frequently, tennis practice begins in childhood and may continue into late adulthood. In spite of the positive effects that tennis practice has shown on physical and mental fitness, some authors believe that tennis may expose the shoulder to different kinds of injuries [1,2]. In non-elite players, efforts spent to develop a more effective and aggressive game using tactics and techniques similar or equal to those of elite players are not always supported by adequate physical training and technical development [1-3]. Shoulder injuries are believed to be extremely common among elite tennis players, and they are related not only to rotator cuff tendinopathy but also to the long head of the biceps and to the reflection pulley [4-11]; however, biceps tendinopathy and shoulder dislocations are relatively rare in young tennis players. It is known that asymptomatic tennis players may have rotator cuff tendon lesions and reduced subacromial space [6-10] and that asymptomatic shoulder abnormalities may be found in the majority of adults [4]. The rotator cuff interval is the anatomical space where the coracohumeral ligament keeps the long head of the biceps in the appropriate position in the glenohumeral joint. Moreover, when the coracohumeral ligament is intact, the long head of the biceps does not undergo medial subluxation or dislocation out of the bicipital groove [12]. In case of small anterior supraspinatus tears or shoulder impingement, which may occur in tennis players, the coracohumeral ligament may be thickened and torn. In such cases, the biceps may dislocate over the intact subscapularis and the ruptured lateral part of the coracohumeral ligament can be demonstrated with US [1-14].
For these reasons the integrity of the coracohumeral ligament may be considered a marker of the integrity of the rotator interval as a whole and a potential indicator of the technical skills of a tennis player. Indeed, if the technical movements are appropriate, no injuries are expected to occur at the shoulder [15]. Non-elite junior tennis players are supposed to have good technical skills; however, during the technical learning process some adjustments occur [16]. These adjustments may be responsible for shoulder soft-tissue injuries [17]. To the best of our knowledge, there have been no peer-reviewed ultrasound studies on non-elite junior tennis players covering the entire anatomy of the shoulder. It is not known whether shoulder abnormalities involving the rotator cuff tendons and coracohumeral ligament are detectable on high-resolution ultrasound in asymptomatic junior tennis players. Therefore, the purpose of our study was to assess whether shoulder anatomical abnormalities are present on a complete shoulder ultrasound examination in asymptomatic non-elite junior tennis players.
Methods
The study was conducted in accordance with the Helsinki Declaration of 1975 and subsequent updates [5]. All enrolled participants provided written consent from a parent or legal guardian. In addition, written consent was obtained from patients or a parent/guardian regarding publication of the patient images. Owing to the nature of the study, no formal ethical approval was deemed necessary. Tennis players were invited to take part in the study during their practice sessions at their club. To be included in the study, each athlete was required to be a member of the Italian Tennis Federation. To be considered “non-elite”, the player had to have an Italian ranking below 2.8 and not be involved in National Representatives [18]. The age had to be below 18 years. For each athlete, body mass index, dominant hand (the side on which they held the racquet for the forehand), the number of years playing tennis, the number of hours of training per week, the type of backhand stroke (with one or two hands), racket weight, grip (Eastern, Western and semi-Western), and string stiffness were registered. From August 2009 to September 2010, both shoulders of n = 90 asymptomatic non-elite junior tennis players (mean age ± standard deviation: 15 ± 3 years) were evaluated bilaterally by means of high-resolution ultrasound using a 17–5 MHz broadband linear array transducer (iU22, Philips Medical Systems, the Netherlands). Ultrasonographic evaluation included complete shoulder assessment with the standardized protocol suggested by the European Society of Musculoskeletal Radiology [9] and coracohumeral thickness measurement (Figure 1) as described in the literature [19]. All players included in the study had no history of trauma or treatment involving either shoulder. No player had a history of systemic inflammatory disease. Figure 1: Example of coracohumeral thickness measurement. Note the tendon of the long head of the biceps below the calipers.
A control group of 60 subjects comprised 33 boys and 27 girls (mean age ± standard deviation: 15 ± 3 years). None of them were involved in overhead recreational or sporting activities, and none had a history of trauma or treatment involving the shoulder. None of the controls had a history of systemic inflammatory disease. An accurate physical examination was performed before the high-resolution ultrasound examination of the shoulder. US scans were performed by two musculoskeletal sonographers (each with more than 5 years of scanning experience): both static images and cine clips were recorded. Recording of static images and cine clips has previously been used to analyze US evaluations [20]. The sonographer who performed the scan was blinded to the subject’s dominant side. Both shoulders were scanned. The protocol includes evaluation of the rotator cuff tendons, the tendon of the long head of the biceps brachii muscle in the long and short axes, the subacromial-subdeltoid bursa, the acromioclavicular joint, and the posterior recess. Dynamic assessment for subacromial impingement and for subluxation and dislocation of the long head of the biceps brachii was also performed. US static images and cine clips were retrospectively reviewed by three musculoskeletal radiologists (3, 4 and 2 years of experience, respectively). Only definitive sonographic abnormalities agreed on by the three musculoskeletal radiologists in consensus were included in the study. Ultrasound diagnoses of pathologic findings were based on established criteria and the technical guidelines of the European Society of Musculoskeletal Radiology [3,9]. To increase specificity and eliminate false-positive diagnoses, questionable findings were excluded from analysis, as suggested by other studies [4]. Concerning the grip, we registered the four basic single-handed grips used to hit the forehand: Continental, Eastern, Semi-western and Full Western.
For each grip, the player places the base knuckle of the index finger and the heel pad of the palm on the grip bevel of the racquet. Different grips are defined by the location of the base knuckle of the index finger on the eight faces of the racket grip (Figure 2). Grip types were defined according to the International Tennis Federation definitions [1,21] and checked for accuracy by two tennis instructors in consensus, who observed the players holding the racket at rest and during the game. On the left side of Figure 2 are represented the 8 facets of the butt cap and the reference points (base knuckle of the index finger and heel pad) on the hand used to identify the different grips. On the right side the Eastern and Western grips are illustrated: note that the hand of the player is in the same position while the inclination of the racket changes. The other grips are described in the text. The blue hexagons are positioned in the critical areas (base knuckle and heel pad).

Continental grip: the base knuckle is placed on face number 2 and the heel pad between faces 1 and 2. This grip was once the universal grip used to hit almost all strokes: forehands, backhands, special shots, volleys and the serve. It originated on the soft, low-bouncing clay courts of Europe. Nowadays it is usually employed only for serves and volleys.

Eastern grip: the base knuckle is on face 3 and the heel pad between faces 2 and 3. This grip arose on the medium-bouncing courts of the Eastern United States and represents the classic forehand grip. It is appropriate for different styles of play, comfortable for beginners, and adaptable to all surfaces. Its advantages are that it is easy for beginners to learn, it makes it easy to generate power, it is ideal for waist-high balls, and it allows a variety of topspin, under-spin and flat drives. Its disadvantage is that it is difficult to hit very high balls powerfully.

Semi-western grip: the base knuckle and the heel pad are on face 4. This grip guarantees strength and control on the forehand; moreover, beginners feel comfortable since the palm of the hand supports the racquet, providing additional stability at contact. Powerful topspin forehands are the strokes facilitated by this grip. Its advantage is that high balls are easy to hit; however, low balls and back-spin are difficult, and grip changes are necessary to hit volleys and overheads.

Western grip: both the base knuckle and the heel pad are located on face 5. This grip originated on the high-bouncing cement courts of the Western United States. Its drawback is that it closes the racquet face too soon before contact. It is an excellent grip for high balls and topspin but is awkward for low balls and under-spin. It is widely accepted in the popular media that this grip is the most dangerous for the wrist and that a strong wrist and perfect timing are essential to avoid wrist injuries.

Statistical analysis: the statistical analysis included descriptive statistics, and coracohumeral thickness was compared between the left and right sides (dominant and non-dominant arms). Fisher’s test was used to compare the presence of lesions between the players and the control group. Associations between the qualitative variables were evaluated using multivariate analysis. A significance level of 0.05 was adopted. The SPSS software package (release 13.0 for Windows, SPSS) was used. A post hoc power analysis was performed to ensure that the sample size was sufficient to support a meaningful statement. An α error level (confidence level) of 5% and a β error level giving a statistical power (1 − β) of 80% were used and considered acceptable for medical purposes. A sample size of 40 provided confidence within the required ranges.
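The Results section reports abnormalities in 14/90 players versus 6/60 controls, the comparison to which Fisher's test was applied. As an illustrative sketch only (the study used SPSS, and the exact test variant is an assumption), a two-sided Fisher's exact test on that 2×2 table can be computed with standard-library Python:

```python
from math import comb


def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # Small tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))


# Counts reported in the Results: 14/90 players vs. 6/60 controls
p = fisher_exact_two_sided(14, 90 - 14, 6, 60 - 6)
print(f"two-sided p = {p:.3f}")
```

With these counts the test is non-significant at the 0.05 level, consistent with the paper's conclusion that players and controls did not differ.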
Results
Players’ characteristics relative to sex, dominant arm, years of practice, hours of training per week, grip, racket weight, type of backhand and body mass index are reported in Table 1. Abnormalities were found at ultrasound in 14/90 (15%) players. The majority of the lesions were located in the dominant arm (n = 10), whereas only a few were in the non-dominant arm (n = 4). No tendon tears were detected. Two players had supraspinatus tendinosis. Subacromial bursitis was present in 10 players (Figure 3). Two players had subacromial impingement. No rotator-cuff muscular atrophy was found. Coracohumeral thickness was comparable in the dominant and non-dominant arms of the players (11.3 ± 4.4 mm in the dominant arm versus 13 ± 4.2 mm in the non-dominant arm, p > 0.05). Multivariate analysis demonstrated no association between coracohumeral thickness or the lesions detected at sonography and body mass index, years of practice, weekly hours of training, racket weight, strings or dominant arm. In the control group, abnormalities were found at ultrasound in 6/60 (10%) subjects (subacromial bursitis). No statistically significant difference between players and the control group was found (two-tailed p value = 0.71). Coracohumeral thickness was not statistically different in the two groups (p > 0.05). Table 1: Players’ characteristics. Data are expressed as mean ± standard deviation. E, Eastern; SW, semi-Western; W, Western. Figure 3: Subacromial-subdeltoid bursitis in a 15-year-old tennis player (arrow).
Discussion and conclusions
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/15/241/prepub
[ "Background", "Continental grip", "Eastern grip", "Semi-western grip", "Western grip", "Statistical analysis", "Discussion and conclusions" ]
[ "Tennis is practiced by a wide range of people throughout the world and is the most popular of all racket sports. For the last 10 years tennis practice has grown significantly for recreational and competition purposes. Frequently tennis practice begins in childhood and may continue into late adulthood. In spite of the positive effects that tennis practice has shown on physical and mental fitness, some Authors believe that tennis may expose the shoulder to different kind of injuries [1,2]. In non-elite players, efforts spent to develop a more effective and aggressive play using tactics and techniques similar or equal to the elite players are not always supported by an adequate physical training and technical development [1-3]. Shoulder injuries are believed extremely common among elite tennis players and they are not only related to rotator cuff tendinopathy, but also to the long head of the biceps and to the reflection pulley [4-11]; however, the biceps tendinopathy and shoulder dislocations are relatively rare in the young tennis players. It is known that a-symptomatic tennis players may have rotator cuff tendon lesions and reduced sub-acromial space [6-10] and that asymptomatic shoulder abnormalities may be found in the majority of the adults [4]. The rotator cuff interval is the anatomical space where the coracohumaral ligament keeps the long head of the biceps in the appropriate position into the glenohumeral joint. Moreover, when the coracohumeral ligament is intact, the longhead of the biceps does not undergo medial subluxation or dislocation out of the bicipital groove [12]. In case of small anterior supraspinatus tears or shoulder impingement, which may happen in tennis players, the coracohumeral ligament may be thickened and torn. In such cases, the biceps may dislocate over the intact subscapularis and the ruptured lateral part of the coracohumeral ligament can be demonstrated with US [1-14]. 
For this reasons the integrity of the coracohumeral ligament may be considered as a marker of the integrity of the rotator interval as a whole and as a potential indicator of the technical skills of a tennis players. Indeed, if the technical movements are appropriate, no injuries are expected to occur at the shoulder [15]. Non-elite junior tennis players are supposed to have good technical skills, however in technical learning process some adjustments occur [16]. These adjustments may be responsible of shoulder soft-tissues injuries [17]. To the best of our knowledge, there have been no peer-reviewed ultrasound studies on non-elite junior tennis players including the entire anatomy of the shoulder. It is not known if shoulder abnormalities including rotator cuff tendons and coracohumeral ligament are detectable on high-resolution ultrasound in a-symptomatic junior tennis players.\nTherefore, the purpose of our study is to assess if shoulder anatomical abnormalities are present on a complete shoulder ultrasound examination in a-symptomatic non-elite junior tennis players.", "In the Continental grip the base knuckle is placed on the face number 2 and the heel pad between 1 and 2. This grip was once the universal grip used to hit almost all strokes: forehands, backhands, special shots, volleys and the serve. It originated on the soft, low bouncing clay courts of Europe. Nowadays it is usually employed only for serves and volleys.", "In the Eastern grip the base knuckle is on face 3 and the heel pad between 2 and 3. This grip arose on the medium-bouncing courts in the Eastern United States. It represents the classic forehand grip. The eastern grip is appropriate for different styles of play, comfortable for beginners, and adaptable for all surfaces. The advantages of the eastern grip are that it is easy for beginners to learn, it is easy to generate power, it is ideal for waist high balls, and you can hit a variety of topspin, under-spin and flat drive. 
The disadvantage is that it is difficult to hit very high balls powerfully.", "The Semi-western forehand grip has the base knuckle and the heel pad on face 4. This grip guarantees strength and control on the forehand; moreover, beginners feel comfortable because the palm of the hand supports the racquet, providing additional stability at contact. Powerful topspin forehands are the strokes facilitated by this grip. The advantage of this grip is that high balls are easy to hit; however, low balls and back-spin are difficult, and grip changes are necessary to hit volleys and overheads.", "In the Western grip both the base knuckle and the heel pad are located on face 5. This grip originated on the high-bouncing cement courts of the Western United States. The drawback of this grip is that it closes the racquet face too soon before contact. It is an excellent grip for high balls and topspin but is awkward for low balls and under-spin. It is widely accepted in the popular media that this grip is the most dangerous for the wrist and that a strong wrist and perfect timing are essential to avoid wrist injuries.", "Statistical analysis included descriptive statistics, and coracohumeral ligament thickness was compared between the left and right sides (dominant and non-dominant arms). Fisher’s exact test was used to compare the presence of lesions between the players and the control group. Associations between the qualitative variables were evaluated using multivariate analysis. A significance level of 0.05 was adopted. The SPSS software package (release 13.0 for Windows, SPSS) was used. A post hoc power analysis was performed to verify that the sample size was sufficient to make a meaningful statement: an α error level of 5% and a statistical power (1–β) of 80% were used and considered acceptable for medical purposes. 
A sample size of 40 enabled confidence within the required ranges.", "The main result of this study is that in asymptomatic non-elite junior tennis players only minor shoulder abnormalities are detectable using high-resolution ultrasound.\nThese abnormalities are not different from those detected in the age- and sex-matched control group. In a previous study using high-resolution ultrasound, shoulder abnormalities were found in 96% of asymptomatic subjects [4]. However, in that study the age range was 40–70 years; it is therefore possible that the alterations found were related to normal ageing rather than to the subjects’ daily activity. In our study the players were young, and only 15% of them showed a shoulder abnormality. Overall, the shoulder abnormalities detected were mild: no partial- or full-thickness tears were found. Sub-acromial bursitis was the most frequent finding, but no player had fluid in the other recesses or bursae around the shoulder. The absence of rotator cuff muscular atrophy is sufficient to exclude any subclinical irritative or compressive neuropathy. A study of volleyball players showed that the prevalence of infraspinatus muscle atrophy in professional asymptomatic players is 30% [8]. This finding from volleyball was not confirmed in our series of young tennis players. We acknowledge that it may be questionable to compare volleyball to tennis, but both sports are characterized by several overhead strokes that may damage the shoulder. Moreover, the medical literature lacks similar studies on young non-elite tennis players with which to compare our work. In our series, the majority of the lesions were located in the dominant arm and only a few in the non-dominant arm. This observation, although based on small numbers, is not surprising, and it may confirm that in this physical activity the dominant arm is more stressed than the non-dominant arm. Concerning the grip adopted by the players, we did not register any association between lesions and grips. This observation contrasts with the finding that, in non-professional tennis players, different racket grips are related to the anatomical site of the lesion at the wrist: the Eastern grip with radial-side injuries and the Western or semi-Western grip with ulnar-side injuries [1]. On the basis of this consideration, we can hypothesize that the way of hitting the ball with different grips does not influence the biomechanical chain of the stroke at the level of the shoulder. Body mass index and string stiffness were also considered because they may negatively influence the incidence and pattern of injury [1].\nOur study has limitations: the first is its cross-sectional rather than prospective design. However, for the purpose of the study, we believe that the design may be considered adequate. Moreover, we did not evaluate an adult population of players; it is therefore difficult to make comparisons with the existing literature and with “veteran” players who have received similar loads on the shoulder over many years and hours. In conclusion, the main finding of our study is that the shoulder soft tissues of non-elite junior tennis players are similar to those of age- and sex-matched controls." ]
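The bevel-based grip definitions in the sections above (base-knuckle face 2 = Continental, 3 = Eastern, 4 = Semi-western, 5 = Western) can be summarized as a simple lookup. The sketch below is an illustrative encoding, not part of the study; the function name and the "unclassified" fallback for in-between faces are assumptions.

```python
# Illustrative encoding of the forehand-grip definitions described above.
# The grip is identified by the racket-grip face (1-8) under the base
# knuckle of the index finger; hybrid positions fall back to "unclassified".
GRIP_BY_KNUCKLE_FACE = {
    2: "Continental",   # heel pad between faces 1 and 2
    3: "Eastern",       # heel pad between faces 2 and 3
    4: "Semi-western",  # knuckle and heel pad both on face 4
    5: "Western",       # knuckle and heel pad both on face 5
}

def classify_grip(knuckle_face: int) -> str:
    """Return the forehand grip name for a given base-knuckle face (1-8)."""
    return GRIP_BY_KNUCKLE_FACE.get(knuckle_face, "unclassified")
```

A real grip assessment, as in the Methods, is done by instructors observing the player; this mapping only mirrors the paper's face-numbering convention.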
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Continental grip", "Eastern grip", "Semi-western grip", "Western grip", "Statistical analysis", "Results", "Discussion and conclusions" ]
[ "Tennis is practiced by a wide range of people throughout the world and is the most popular of all racket sports. Over the last 10 years tennis practice has grown significantly for recreational and competitive purposes. Tennis practice frequently begins in childhood and may continue into late adulthood. In spite of the positive effects that tennis practice has shown on physical and mental fitness, some authors believe that tennis may expose the shoulder to different kinds of injuries [1,2]. In non-elite players, the efforts spent to develop a more effective and aggressive game using tactics and techniques similar or equal to those of elite players are not always supported by adequate physical training and technical development [1-3]. Shoulder injuries are believed to be extremely common among elite tennis players, and they are related not only to rotator cuff tendinopathy but also to the long head of the biceps and to the reflection pulley [4-11]; however, biceps tendinopathy and shoulder dislocations are relatively rare in young tennis players. It is known that asymptomatic tennis players may have rotator cuff tendon lesions and a reduced sub-acromial space [6-10] and that asymptomatic shoulder abnormalities may be found in the majority of adults [4]. The rotator cuff interval is the anatomical space where the coracohumeral ligament keeps the long head of the biceps in the appropriate position within the glenohumeral joint. Moreover, when the coracohumeral ligament is intact, the long head of the biceps does not undergo medial subluxation or dislocation out of the bicipital groove [12]. In cases of small anterior supraspinatus tears or shoulder impingement, which may occur in tennis players, the coracohumeral ligament may be thickened and torn. In such cases, the biceps may dislocate over the intact subscapularis, and the ruptured lateral part of the coracohumeral ligament can be demonstrated with US [1-14]. 
For these reasons, the integrity of the coracohumeral ligament may be considered a marker of the integrity of the rotator interval as a whole and a potential indicator of the technical skills of a tennis player. Indeed, if the technical movements are appropriate, no injuries are expected to occur at the shoulder [15]. Non-elite junior tennis players are supposed to have good technical skills; however, some adjustments occur during the technical learning process [16]. These adjustments may be responsible for shoulder soft-tissue injuries [17]. To the best of our knowledge, there have been no peer-reviewed ultrasound studies on non-elite junior tennis players covering the entire anatomy of the shoulder. It is not known whether shoulder abnormalities, including those of the rotator cuff tendons and the coracohumeral ligament, are detectable on high-resolution ultrasound in asymptomatic junior tennis players.\nTherefore, the purpose of our study is to assess whether shoulder anatomical abnormalities are present on a complete shoulder ultrasound examination in asymptomatic non-elite junior tennis players.", "The study was conducted in accordance with the Helsinki Declaration of 1975 and its subsequent updates [5]. All enrolled participants provided written consent from a parent or legal guardian. In addition, written consent was obtained from the patients or their parents/guardians regarding publication of the patient images. Owing to the nature of the study, no formal ethical approval was deemed necessary. Tennis players were invited to take part in the study during their practice sessions at their club. To be included in the study, each athlete was required to be a member of the Italian Tennis Federation. To be considered “non-elite”, a player had to have an Italian ranking below 2.8 and not be involved in National Representatives [18]. 
The age had to be below 18 years.\nFor each athlete, the body mass index, dominant hand (the side on which the racquet was held for the forehand), number of years playing tennis, number of hours of training per week, type of backhand stroke (with one or two hands), racket weight, grip (Eastern, Western and semi-Western), and string stiffness were registered.\nFrom August 2009 to September 2010, both shoulders of 90 asymptomatic non-elite junior tennis players (mean age ± standard deviation: 15 ± 3 years) were evaluated bilaterally by means of high-resolution ultrasound using a 17–5 MHz broadband linear array transducer (iU22, Philips Medical Systems, the Netherlands). Ultrasonographic evaluation included a complete shoulder assessment with the standardized protocol suggested by the European Society of Musculoskeletal Radiology [9] and measurement of coracohumeral ligament thickness (Figure 1) as described in the literature [19]. All players included in the study had no history of trauma or treatment involving either shoulder. No player had a history of systemic inflammatory disease.\nExample of coracohumeral thickness measurement. Note the tendon of the long head of the biceps below the calipers.\nThe control group of 60 subjects comprised 33 boys and 27 girls (mean age ± standard deviation: 15 ± 3 years). None of them were involved in overhead recreational or sporting activities, and none had a history of trauma or treatment involving the shoulder or of systemic inflammatory disease. An accurate physical examination was performed before the high-resolution ultrasound examination of the shoulder. US scans were performed by two musculoskeletal sonographers (each with more than 5 years of scanning experience): both static images and cine clips were recorded. Recording of static images and cine clips has previously been used to analyze US evaluations [20]. The sonographer who performed the scan was blinded to the subject’s dominant side. 
Both shoulders were scanned. The protocol included evaluation of the rotator cuff tendons, the tendon of the long head of the biceps brachii muscle in the long and short axes, the subacromial-subdeltoid bursa, the acromioclavicular joint, and the posterior recess. Dynamic assessment for subacromial impingement and for subluxation or dislocation of the long head of the biceps brachii was also performed. US static images and cine clips were retrospectively reviewed by three musculoskeletal radiologists (with 3, 4 and 2 years of experience, respectively). Only definitive sonographic abnormalities agreed on by the three musculoskeletal radiologists in consensus were included in the study. Ultrasound diagnoses of pathologic findings were based on established criteria and on the technical guidelines of the European Society of Musculoskeletal Radiology [3,9]. To increase specificity and eliminate false-positive diagnoses, questionable findings were excluded from the analysis, as suggested by other studies [4].\nConcerning the grip, we registered the four basic single-handed grips used to hit the forehand: Continental, Eastern, Semi-western and Full Western. For each grip, the player places the base knuckle of the index finger and the heel pad of the palm on the grip bevel of the racquet. The different grips are defined by the location of the base knuckle of the index finger on the eight faces of the racket grip (Figure 2). Grip types were defined according to the International Tennis Federation definitions [1,21] and checked for accuracy by two tennis instructors in consensus, who observed the players holding the racket at rest and during the game.\nOn the left side are represented the 8 facets of the butt cap and the reference points (base knuckle of the index finger and heel pad) on the hand used to identify the different grips. 
On the right side the Eastern and Western grips are illustrated: note that the hand of the player is in the same position while the inclination of the racket changes. Other grips are described in the text. The blue hexagons are positioned on the critical areas (base knuckle and heel pad).\n Continental grip In the Continental grip the base knuckle is placed on face 2 and the heel pad between faces 1 and 2. This grip was once the universal grip used to hit almost all strokes: forehands, backhands, special shots, volleys and the serve. It originated on the soft, low-bouncing clay courts of Europe. Nowadays it is usually employed only for serves and volleys.\n Eastern grip In the Eastern grip the base knuckle is on face 3 and the heel pad between faces 2 and 3. This grip arose on the medium-bouncing courts of the Eastern United States and represents the classic forehand grip. The Eastern grip is appropriate for different styles of play, comfortable for beginners, and adaptable to all surfaces. Its advantages are that it is easy for beginners to learn, it is easy to generate power, it is ideal for waist-high balls, and a variety of topspin, under-spin and flat drives can be hit. The disadvantage is that it is difficult to hit very high balls powerfully.\n Semi-western grip The Semi-western forehand grip has the base knuckle and the heel pad on face 4. This grip guarantees strength and control on the forehand; moreover, beginners feel comfortable because the palm of the hand supports the racquet, providing additional stability at contact. Powerful topspin forehands are the strokes facilitated by this grip. The advantage of this grip is that high balls are easy to hit; however, low balls and back-spin are difficult, and grip changes are necessary to hit volleys and overheads.\n Western grip In the Western grip both the base knuckle and the heel pad are located on face 5. This grip originated on the high-bouncing cement courts of the Western United States. The drawback of this grip is that it closes the racquet face too soon before contact. It is an excellent grip for high balls and topspin but is awkward for low balls and under-spin. It is widely accepted in the popular media that this grip is the most dangerous for the wrist and that a strong wrist and perfect timing are essential to avoid wrist injuries.\n Statistical analysis Statistical analysis included descriptive statistics, and coracohumeral ligament thickness was compared between the left and right sides (dominant and non-dominant arms). Fisher’s exact test was used to compare the presence of lesions between the players and the control group. Associations between the qualitative variables were evaluated using multivariate analysis. A significance level of 0.05 was adopted. The SPSS software package (release 13.0 for Windows, SPSS) was used. A post hoc power analysis was performed to verify that the sample size was sufficient to make a meaningful statement: an α error level of 5% and a statistical power (1–β) of 80% were used and considered acceptable for medical purposes. A sample size of 40 enabled confidence within the required ranges.", "In the Continental grip the base knuckle is placed on face 2 and the heel pad between faces 1 and 2. This grip was once the universal grip used to hit almost all strokes: forehands, backhands, special shots, volleys and the serve. It originated on the soft, low-bouncing clay courts of Europe. Nowadays it is usually employed only for serves and volleys.", "In the Eastern grip the base knuckle is on face 3 and the heel pad between faces 2 and 3. This grip arose on the medium-bouncing courts of the Eastern United States and represents the classic forehand grip. The Eastern grip is appropriate for different styles of play, comfortable for beginners, and adaptable to all surfaces. Its advantages are that it is easy for beginners to learn, it is easy to generate power, it is ideal for waist-high balls, and a variety of topspin, under-spin and flat drives can be hit. The disadvantage is that it is difficult to hit very high balls powerfully.", "The Semi-western forehand grip has the base knuckle and the heel pad on face 4. This grip guarantees strength and control on the forehand; moreover, beginners feel comfortable because the palm of the hand supports the racquet, providing additional stability at contact. Powerful topspin forehands are the strokes facilitated by this grip. The advantage of this grip is that high balls are easy to hit; however, low balls and back-spin are difficult, and grip changes are necessary to hit volleys and overheads.", "In the Western grip both the base knuckle and the heel pad are located on face 5. This grip originated on the high-bouncing cement courts of the Western United States. The drawback of this grip is that it closes the racquet face too soon before contact. 
It is an excellent grip for high balls and topspin but is awkward for low balls and under-spin. It is widely accepted in the popular media that this grip is the most dangerous for the wrist and that a strong wrist and perfect timing are essential to avoid wrist injuries.", "Statistical analysis included descriptive statistics, and coracohumeral ligament thickness was compared between the left and right sides (dominant and non-dominant arms). Fisher’s exact test was used to compare the presence of lesions between the players and the control group. Associations between the qualitative variables were evaluated using multivariate analysis. A significance level of 0.05 was adopted. The SPSS software package (release 13.0 for Windows, SPSS) was used. A post hoc power analysis was performed to verify that the sample size was sufficient to make a meaningful statement: an α error level of 5% and a statistical power (1–β) of 80% were used and considered acceptable for medical purposes. A sample size of 40 enabled confidence within the required ranges.", "Players’ characteristics relative to sex, dominant arm, years of practice, hours of training per week, grip, racket weight, type of backhand and body mass index are reported in Table 1. Abnormalities were found at ultrasound in 14/90 (15%) players. The majority of the lesions were located in the dominant arm (n = 10), whereas only a few (n = 4) were in the non-dominant arm. No tendon tears were detected. Two players had supraspinatus tendinosis. Sub-acromial bursitis was present in 10 players (Figure 3). Two players had subacromial impingement. No rotator cuff muscular atrophy was found. Coracohumeral ligament thickness was comparable in the dominant and non-dominant arms of the players (11.3 ± 4.4 mm in the dominant arm versus 13 ± 4.2 mm in the non-dominant arm, p > 0.05). Multivariate analysis demonstrated no association between coracohumeral thickness or lesions detected at sonography and body mass index, years of practice, weekly hours of training, racket weight, strings or dominant arm. 
In the control group, abnormalities were found at ultrasound in 6/60 (10%) subjects (sub-acromial bursitis). No statistically significant difference between players and the control group was found (two-tailed p value = 0.71). Coracohumeral thickness was not statistically different between the two groups (p > 0.05).\nPlayers’ characteristics.\nData are expressed as mean ± standard deviation. E, Eastern; SW, semi-Western; W, Western.\nSubacromial-subdeltoid bursitis in a 15-year-old tennis player (arrow).", "The main result of this study is that in asymptomatic non-elite junior tennis players only minor shoulder abnormalities are detectable using high-resolution ultrasound.\nThese abnormalities are not different from those detected in the age- and sex-matched control group. In a previous study using high-resolution ultrasound, shoulder abnormalities were found in 96% of asymptomatic subjects [4]. However, in that study the age range was 40–70 years; it is therefore possible that the alterations found were related to normal ageing rather than to the subjects’ daily activity. In our study the players were young, and only 15% of them showed a shoulder abnormality. Overall, the shoulder abnormalities detected were mild: no partial- or full-thickness tears were found. Sub-acromial bursitis was the most frequent finding, but no player had fluid in the other recesses or bursae around the shoulder. The absence of rotator cuff muscular atrophy is sufficient to exclude any subclinical irritative or compressive neuropathy. A study of volleyball players showed that the prevalence of infraspinatus muscle atrophy in professional asymptomatic players is 30% [8]. This finding from volleyball was not confirmed in our series of young tennis players. We acknowledge that it may be questionable to compare volleyball to tennis, but both sports are characterized by several overhead strokes that may damage the shoulder. Moreover, the medical literature lacks similar studies on young non-elite tennis players with which to compare our work. In our series, the majority of the lesions were located in the dominant arm and only a few in the non-dominant arm. This observation, although based on small numbers, is not surprising, and it may confirm that in this physical activity the dominant arm is more stressed than the non-dominant arm. Concerning the grip adopted by the players, we did not register any association between lesions and grips. This observation contrasts with the finding that, in non-professional tennis players, different racket grips are related to the anatomical site of the lesion at the wrist: the Eastern grip with radial-side injuries and the Western or semi-Western grip with ulnar-side injuries [1]. On the basis of this consideration, we can hypothesize that the way of hitting the ball with different grips does not influence the biomechanical chain of the stroke at the level of the shoulder. Body mass index and string stiffness were also considered because they may negatively influence the incidence and pattern of injury [1].\nOur study has limitations: the first is its cross-sectional rather than prospective design. However, for the purpose of the study, we believe that the design may be considered adequate. Moreover, we did not evaluate an adult population of players; it is therefore difficult to make comparisons with the existing literature and with “veteran” players who have received similar loads on the shoulder over many years and hours. In conclusion, the main finding of our study is that the shoulder soft tissues of non-elite junior tennis players are similar to those of age- and sex-matched controls." ]
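The group comparison reported in the results (abnormalities in 14/90 players versus 6/60 controls, compared with Fisher's test) can be checked from the published counts alone. The sketch below is an illustrative standard-library re-implementation of a two-sided Fisher's exact test, not the authors' SPSS analysis; note that different packages use different two-sided conventions, so it serves as a sanity check of non-significance rather than a reproduction of the exact p value (0.71) reported in the article.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # P(X = x) for X ~ Hypergeometric(n, row1, col1)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Small tolerance guards against floating-point ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Abnormalities: 14 of 90 players vs 6 of 60 controls.
p = fisher_exact_two_sided(14, 76, 6, 54)
```

With these counts the test is non-significant at the 0.05 level, consistent with the paper's conclusion that players and controls do not differ.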
[ null, "methods", null, null, null, null, null, "results", null ]
[ "Shoulder", "Ultrasound", "Tennis", "Biceps", "Bursitis" ]
Background: Tennis is practiced by a wide range of people throughout the world and is the most popular of all racket sports. Over the last 10 years tennis practice has grown significantly for recreational and competitive purposes. Tennis practice frequently begins in childhood and may continue into late adulthood. In spite of the positive effects that tennis practice has shown on physical and mental fitness, some authors believe that tennis may expose the shoulder to different kinds of injuries [1,2]. In non-elite players, the efforts spent to develop a more effective and aggressive game using tactics and techniques similar or equal to those of elite players are not always supported by adequate physical training and technical development [1-3]. Shoulder injuries are believed to be extremely common among elite tennis players, and they are related not only to rotator cuff tendinopathy but also to the long head of the biceps and to the reflection pulley [4-11]; however, biceps tendinopathy and shoulder dislocations are relatively rare in young tennis players. It is known that asymptomatic tennis players may have rotator cuff tendon lesions and a reduced sub-acromial space [6-10] and that asymptomatic shoulder abnormalities may be found in the majority of adults [4]. The rotator cuff interval is the anatomical space where the coracohumeral ligament keeps the long head of the biceps in the appropriate position within the glenohumeral joint. Moreover, when the coracohumeral ligament is intact, the long head of the biceps does not undergo medial subluxation or dislocation out of the bicipital groove [12]. In cases of small anterior supraspinatus tears or shoulder impingement, which may occur in tennis players, the coracohumeral ligament may be thickened and torn. In such cases, the biceps may dislocate over the intact subscapularis, and the ruptured lateral part of the coracohumeral ligament can be demonstrated with US [1-14]. 
For these reasons, the integrity of the coracohumeral ligament may be considered a marker of the integrity of the rotator interval as a whole and a potential indicator of the technical skills of a tennis player. Indeed, if the technical movements are appropriate, no injuries are expected to occur at the shoulder [15]. Non-elite junior tennis players are supposed to have good technical skills; however, some adjustments occur during the technical learning process [16]. These adjustments may be responsible for shoulder soft-tissue injuries [17]. To the best of our knowledge, there have been no peer-reviewed ultrasound studies on non-elite junior tennis players covering the entire anatomy of the shoulder. It is not known whether shoulder abnormalities involving the rotator cuff tendons and the coracohumeral ligament are detectable on high-resolution ultrasound in asymptomatic junior tennis players. Therefore, the purpose of our study is to assess whether shoulder anatomical abnormalities are present on a complete shoulder ultrasound examination in asymptomatic non-elite junior tennis players. Methods: The study was conducted in accordance with the Helsinki Declaration of 1975 and subsequent updates [5]. All enrolled participants provided written consent from a parent or legal guardian. In addition, written consent was obtained from patients or parents/guardians regarding publication of the patient images. Due to the nature of the study, no formal ethical approval was deemed necessary. Tennis players were invited to take part in the study during their practice sessions at their club. To be included in the study, each athlete was required to be a member of the Italian Tennis Federation. To be considered “non-elite”, the player had to have an Italian ranking below 2.8 and not be involved in national representative teams [18]. The age had to be below 18 years. 
For each athlete, body mass index, dominant hand (the side on which they held the racquet for the forehand), the number of years playing tennis, the number of hours of training per week, the type of backhand stroke (one- or two-handed), racket weight, grip (Eastern, Western and semi-Western), and string stiffness were registered. From August 2009 to September 2010, the shoulders of 90 asymptomatic non-elite junior tennis players (mean age ± standard deviation: 15 ± 3 years) were evaluated bilaterally by means of high-resolution ultrasound using a 17–5 MHz broadband linear array transducer (iU22, Philips Medical Systems, the Netherlands). The ultrasonographic evaluation included a complete shoulder assessment with a standardized protocol suggested by the European Society of Musculoskeletal Radiology [9] and coracohumeral ligament thickness measurement (Figure 1) as described in the literature [19]. All players included in the study had no history of trauma or treatment involving either shoulder. No player had a history of systemic inflammatory disease. Example of coracohumeral thickness measurement. Note the tendon of the long head of the biceps below the calipers. A control group of 60 subjects consisted of 33 boys and 27 girls (mean age ± standard deviation: 15 ± 3 years). None of them were involved in overhead recreational or sporting activities, and none had a history of trauma or treatment involving the shoulder or of systemic inflammatory disease. An accurate physical examination was performed before the high-resolution ultrasound examination of the shoulder. US scans were performed by two musculoskeletal sonographers (each with more than 5 years of scanning experience); both static images and cine clips were recorded. Recording of static images and cine clips has previously been used to analyze US evaluations [20]. The sonographer who performed the scan was blinded to the subject’s dominant side. Both shoulders were scanned. 
This protocol includes evaluation of the rotator cuff tendons, the tendon of the long head of the biceps brachii muscle in the long and short axes, the subacromial-subdeltoid bursa, the acromioclavicular joint, and the posterior recess. Dynamic assessment for subacromial impingement and for subluxation and dislocation of the long head of the biceps brachii was also performed. US static images and cine clips were retrospectively reviewed by three musculoskeletal radiologists (with 3, 4 and 2 years of experience, respectively). Only definitive sonographic abnormalities agreed on by the three musculoskeletal radiologists in consensus were included in the study. The ultrasound diagnoses of pathologic findings were based on established criteria and on the technical guidelines of the European Society of Musculoskeletal Radiology [3,9]. To increase specificity and eliminate false-positive diagnoses, questionable findings were excluded from analysis, as suggested by other studies [4]. Concerning the grip, we registered the four basic single-handed grips used to hit the forehand: Continental, Eastern, Semi-western and Full Western. For each grip, the player places the base knuckle of the index finger and the heel pad of the palm on the grip bevel of the racquet. Different grips are defined on the basis of the location of the base knuckle of the index finger on the eight faces of the racket grip (Figure 2). Grip types were defined according to the International Tennis Federation definitions [1,21] and checked for accuracy by two tennis instructors in consensus who observed the players holding the racket at rest and during the game. On the left side, the 8 facets of the butt cap and the reference points (base knuckle of the index finger and heel pad) on the hand used to identify the different grips are represented. On the right side, the Eastern and Western grips are illustrated: note that the hand of the player is in the same position while the inclination of the racket changes. 
Other grips are described in the text. The blue hexagons are positioned in critical areas (base knuckle and heel pad). Continental grip In the Continental grip the base knuckle is placed on face number 2 and the heel pad between faces 1 and 2. This grip was once the universal grip used to hit almost all strokes: forehands, backhands, special shots, volleys and the serve. It originated on the soft, low-bouncing clay courts of Europe. Nowadays it is usually employed only for serves and volleys. Eastern grip In the Eastern grip the base knuckle is on face 3 and the heel pad between faces 2 and 3. This grip arose on the medium-bouncing courts of the Eastern United States and represents the classic forehand grip. The Eastern grip is appropriate for different styles of play, comfortable for beginners, and adaptable to all surfaces. Its advantages are that it is easy for beginners to learn, it is easy to generate power, it is ideal for waist-high balls, and a variety of topspin, under-spin and flat drives can be hit. The disadvantage is that it is difficult to hit very high balls powerfully. Semi-western grip The Semi-western forehand grip has the base knuckle and the heel pad on face 4. This grip guarantees strength and control on the forehand; moreover, beginners feel comfortable because the palm of the hand supports the racquet, providing additional stability at contact. Powerful topspin forehands are the strokes facilitated by this grip. Its advantage is that high balls are easy to hit; however, low balls and back-spin are difficult, and grip changes are necessary to hit volleys and overheads. Western grip In the Western grip both the base knuckle and the heel pad are located on face 5. This grip originated on the high-bouncing cement courts of the Western United States. Its drawback is that it closes the racquet face too soon before contact. It is an excellent grip for high balls and topspin but is awkward for low balls and under-spin. It is widely accepted in the popular media that this grip is the most dangerous for the wrist and that a strong wrist and perfect timing are essential to avoid wrist injuries. 
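The bevel-based grip definitions above amount to a simple lookup from the face under the base knuckle of the index finger to the grip name. A minimal sketch (the dictionary layout and function name are ours; face numbers follow the scheme described in the text):

```python
# Map from the bevel ("face") of the racket handle under the base knuckle
# of the index finger to the basic single-handed forehand grip, following
# the classification described in the text (Figure 2).
GRIP_BY_KNUCKLE_BEVEL = {
    2: "Continental",   # heel pad between faces 1 and 2
    3: "Eastern",       # heel pad between faces 2 and 3
    4: "Semi-Western",  # heel pad also on face 4
    5: "Western",       # heel pad also on face 5
}

def classify_grip(knuckle_bevel: int) -> str:
    """Return the grip name for the face under the index-finger base knuckle."""
    try:
        return GRIP_BY_KNUCKLE_BEVEL[knuckle_bevel]
    except KeyError:
        raise ValueError(
            f"face {knuckle_bevel} does not match a basic forehand grip"
        ) from None

print(classify_grip(3))  # Eastern
```

In the study this classification was made visually by two instructors in consensus; the lookup only formalizes the face-to-grip correspondence they applied.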
Statistical analysis Statistical analysis included descriptive statistics, and coracohumeral ligament thickness was compared between the left and right sides (dominant and non-dominant arms). Fisher’s exact test was used to compare the presence of lesions in the players and in the control group. The presence of associations between the qualitative variables was evaluated using multivariate analysis. A significance level of 0.05 was adopted. The SPSS software package (release 13.0 for Windows, SPSS) was used. A post hoc power analysis was performed to ensure that the sample size was sufficient to make a meaningful statement. An α error level (confidence level) of 5% and a β error level corresponding to a statistical power (1–β) of 80% were used and considered acceptable for medical purposes. A sample size of 40 enabled confidence within the required ranges. 
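The players-versus-controls lesion comparison can be sketched with a standard-library implementation of the two-sided Fisher's exact test; the counts (14/90 players, 6/60 controls) come from the Results, while the function itself is our illustrative re-check, not the authors' SPSS procedure:

```python
# Two-sided Fisher's exact test for a 2x2 table, standard library only.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """P-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one (the usual
    two-sided convention).
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Probability of x "lesion" subjects in the first row.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # Small tolerance guards against floating-point ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Lesions: 14 of 90 players vs. 6 of 60 controls.
p = fisher_exact_two_sided(14, 76, 6, 54)
print(f"two-sided p = {p:.2f}")  # well above the 0.05 significance level
```

Consistent with the article's conclusion, the difference between 15% and 10% in groups of this size is far from significant.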
Results: Players’ characteristics relative to sex, dominant arm, years of practice, hours of training per week, grip, racket weight, type of backhand and body mass index are reported in Table 1. Abnormalities were found at ultrasound in 14/90 (15%) players. The majority of the lesions were located in the dominant arm (n = 10), whereas only a few were in the non-dominant arm (n = 4). No tendon tears were detected. Two players had supraspinatus tendinosis. Sub-acromial bursitis was present in 10 players (Figure 3). Two players had subacromial impingement. No rotator-cuff muscular atrophy was found. Coracohumeral ligament thickness was comparable in the dominant and non-dominant arms of the players (11.3 ± 4.4 mm in the dominant arm versus 13 ± 4.2 mm in the non-dominant arm, p > 0.05). Multivariate analysis demonstrated no association between coracohumeral ligament thickness or the lesions detected at sonography and body mass index, years of practice, weekly hours of training, racket weight, strings or dominant arm. In the control group, abnormalities were found at ultrasound in 6/60 (10%) subjects (sub-acromial bursitis). 
No statistically significant differences between players and the control group were found (two-tailed p value = 0.71). Coracohumeral ligament thickness was not statistically different in the two groups (p > 0.05). Players’ characteristics. Data are expressed as mean ± standard deviation. E, Eastern; SW, semi-Western; W, Western. Subacromial-subdeltoid bursitis in a 15-year-old tennis player (arrow). Discussion and conclusions: The main result of this study shows that in asymptomatic non-elite junior tennis players only minor shoulder abnormalities are detectable using high-resolution ultrasound. These abnormalities do not differ from those detected in the age- and sex-matched control group. In a previous study using high-resolution ultrasound, shoulder abnormalities were found in 96% of asymptomatic subjects [4]. However, in that study the age range was 40–70 years; therefore, it is possible that the alterations found were related to normal ageing rather than to the daily activity of the subjects. In our study the players were young, and only 15% of them showed a shoulder abnormality. Overall, the shoulder abnormalities detected were mild: no partial- or full-thickness tears were found. Sub-acromial bursitis was the most frequent finding, but no player had fluid in the other recesses or bursae around the shoulder. The absence of rotator cuff muscular atrophy is sufficient to exclude any subclinical irritative or compressive neuropathy. In a paper on volleyball players, the prevalence of infraspinatus muscle atrophy in professional asymptomatic players was shown to be 30% [8]. This finding in volleyball was not confirmed in our series of young tennis players. We acknowledge that it may be questionable to compare volleyball to tennis, but both sports are characterized by several overhead strokes that may damage the shoulder. Moreover, the medical literature lacks similar studies on young non-elite tennis players with which our work could be compared. 
In our series, the majority of the lesions were located in the dominant arm and few were located in the non-dominant arm. This observation, although based on small numbers, is not surprising and may confirm that, in this physical activity, the dominant arm is more stressed than the non-dominant arm. Concerning the grip adopted by the players, we did not register any association between lesions and grips. This observation contrasts with the fact that, in nonprofessional tennis players, different racket grips are related to the anatomical site of lesions at the wrist: the Eastern grip with radial-side injuries and the Western or semi-Western grips with ulnar-side injuries [1]. On the basis of this consideration, we can hypothesize that the way the ball is hit with different grips does not influence the biomechanical chain of the stroke at the level of the shoulder. Body mass index and string stiffness were also considered because they may negatively influence the incidence and pattern of injury [1]. Our study has limitations: the first is its cross-sectional rather than prospective design. However, for the purpose of the study, we believe that the design may be considered sufficient. Moreover, we did not evaluate an adult population of players; therefore, it is difficult to make comparisons with the existing literature and with “veteran” players who have received similar loads on the shoulder over many years and hours. In conclusion, the main finding of our study is that the shoulder soft tissues of non-elite junior tennis players are similar to those of age- and sex-matched controls.
Background: Tennis is believed to be potentially harmful for the shoulder; therefore, the purpose of this study was to evaluate the anatomy of the rotator cuff and the coracohumeral ligament (CHL) in asymptomatic non-elite junior tennis players with high-resolution ultrasound (US). Methods: From August 2009 to September 2010, 90 asymptomatic non-elite junior tennis players (mean age ± standard deviation: 15 ± 3 years) and a control group of age- and sex-matched subjects were included. Shoulder assessment with a customized standardized protocol was performed. Body mass index, dominant arm, years of practice, weekly hours of training, racket weight, grip (Eastern, Western and semi-Western), and type of strings were recorded. Results: Abnormalities were found at ultrasound in 14/90 (15%) players. Two players had supraspinatus tendinosis, two had subacromial impingement and ten had subacromial bursitis. CHL thickness was comparable in the dominant and non-dominant arms (11.3 ± 4.4 mm vs. 13 ± 4.2 mm, p > 0.05). Multivariate analysis demonstrated no association between CHL thickness and the variables evaluated. In the control group, abnormalities were found at ultrasound in 6/60 (10%) subjects (sub-acromial bursitis). No statistically significant differences between players and the control group were found (p = 0.71). Conclusions: In asymptomatic non-elite junior tennis players, only minor shoulder abnormalities were found.
Discussion and conclusions: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/15/241/prepub
4,702
287
[ 547, 73, 124, 96, 408, 150, 596 ]
9
[ "grip", "players", "level", "dominant", "analysis", "tennis", "shoulder", "western", "confidence", "non" ]
[ "tennis players rotator", "skills tennis players", "known symptomatic tennis", "symptomatic junior tennis", "tennis expose shoulder" ]
[CONTENT] Shoulder | Ultrasound | Tennis | Biceps | Bursitis [SUMMARY]
[CONTENT] Adolescent | Asymptomatic Diseases | Athletes | Athletic Injuries | Bursitis | Case-Control Studies | Child | Cross-Sectional Studies | Female | Hand Strength | Humans | Ligaments | Male | Rotator Cuff | Shoulder Impingement Syndrome | Tendinopathy | Tennis | Ultrasonography [SUMMARY]
[CONTENT] tennis players rotator | skills tennis players | known symptomatic tennis | symptomatic junior tennis | tennis expose shoulder [SUMMARY]
[CONTENT] grip | players | level | dominant | analysis | tennis | shoulder | western | confidence | non [SUMMARY]
[CONTENT] tennis | shoulder | tennis players | ligament | players | coracohumeral ligament | elite | biceps | technical | rotator [SUMMARY]
[CONTENT] grip | level | analysis | confidence | eastern | balls | western | statistical | hit | knuckle [SUMMARY]
[CONTENT] dominant | players | dominant arm | arm | found | bursitis | 10 | abnormalities found ultrasound | found ultrasound | statistically [SUMMARY]
[CONTENT] shoulder | players | study | tennis | tennis players | non | abnormalities | similar | shoulder abnormalities | young [SUMMARY]
[CONTENT] grip | players | level | dominant | shoulder | analysis | tennis | confidence | hit | balls [SUMMARY]
[CONTENT] CHL | US [SUMMARY]
[CONTENT] August 2009 to September 2010 | 90 | 15 ± 3 ||| ||| years | weekly hours | Eastern | Western [SUMMARY]
[CONTENT] 14/90 | 15% ||| Two | two | ten ||| CHL | 11.3 | 4.4 mm | 13 | 4.2 | 0.05 ||| CHL ||| 6/60 | 10% ||| 0.71 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] CHL | US ||| August 2009 to September 2010 | 90 | 15 ± 3 ||| ||| years | weekly hours | Eastern | Western ||| ||| 14/90 | 15% ||| Two | two | ten ||| CHL | 11.3 | 4.4 mm | 13 | 4.2 | 0.05 ||| CHL ||| 6/60 | 10% ||| 0.71 ||| [SUMMARY]
On the proportional hazards model for occupational and environmental case-control analyses.
23414396
Case-control studies are generally designed to investigate the effect of exposures on the risk of a disease. Detailed information on past exposures is collected at the time of study. However, only the cumulated value of the exposure at the index date is usually used in logistic regression. A weighted Cox (WC) model has been proposed to estimate the effects of time-dependent exposures. The weights depend on the age conditional probabilities to develop the disease in the source population. While the WC model provided more accurate estimates of the effect of time-dependent covariates than standard logistic regression, the robust sandwich variance estimates were lower than the empirical variance, resulting in a low coverage probability of confidence intervals. The objectives of the present study were to investigate through simulations a new variance estimator and to compare the estimates from the WC model and standard logistic regression for estimating the effects of correlated temporal aspects of exposure with detailed information on exposure history.
BACKGROUND
We proposed a new variance estimator using a superpopulation approach, and compared its accuracy to the robust sandwich variance estimator. The full exposure histories of source populations were generated and case-control studies were simulated within each source population. Different models with selected time-dependent aspects of exposure such as intensity, duration, and time since cessation were considered. The performances of the WC model using the two variance estimators were compared to standard logistic regression. The results of the different models were finally compared for estimating the effects of correlated aspects of occupational exposure to asbestos on the risk of mesothelioma, using population-based case-control data.
METHOD
The superpopulation variance estimator provided better estimates than the robust sandwich variance estimator and the WC model provided accurate estimates of the effects of correlated aspects of temporal patterns of exposure.
RESULTS
The WC model with the superpopulation variance estimator provides an alternative analytical approach for estimating the effects of time-varying exposures with detailed history exposure information in case-control studies, especially if many subjects have time-varying exposure intensity over lifetime, and if only one control is available for each case.
CONCLUSION
[ "Analysis of Variance", "Asbestos", "Case-Control Studies", "Confidence Intervals", "Environmental Exposure", "Humans", "Logistic Models", "Mesothelioma", "Occupational Exposure", "Proportional Hazards Models", "Risk Assessment" ]
3598441
Background
Population-based case-control studies are widely used in epidemiology to investigate the association between environmental or occupational exposures over lifetime and the risk of cancer or other chronic diseases. Many of the exposures of interest are protracted, and a huge amount of information is often collected retrospectively for each subject about his/her potential past exposure over lifetime. For example, for occupational exposures, the whole occupational history is usually investigated for each subject, and different methods exist to estimate the average dose of exposure at each past job [1-3]. However, only the cumulated estimated dose of exposure at the index age (age at diagnosis for cases and at interview for controls) is usually used in standard logistic regression analyses. Such an approach does not use the (retrospective) dynamic information available on the exposure at different ages during lifetime. A time-dependent weighted Cox (WC) model has recently been proposed to incorporate this dynamic information on exposure, in order to estimate more accurately the effect of time-dependent exposures in population-based case-control studies [4]. The WC model consists of using age as the time axis and weighting cases and controls according to their case-control status and the age conditional probabilities of developing the disease in the source population. The weights proposed in the WC model are therefore time-dependent and are estimated from data on the source population. A simulation study showed that the WC model improved the accuracy of the regression parameter estimates for time-dependent exposure variables as compared with standard logistic regression with fixed-in-time covariates [4]. However, the average robust sandwich variance estimates based on dfbetas residuals [5] were systematically lower than the empirical variance of the parameter estimates, which resulted in too-narrow confidence intervals (CI) and low coverage probabilities [4].
There is an extensive statistical literature on the weighted analyses of cohort sampling designs (see among many others [6-10]). A population-based case-control study can be seen as a nested case-control study within the source population of cases and controls, and can therefore fit in this general cohort sampling design framework. Population-based case-control studies can also be seen as a survey with complex selection probabilities [11-14] and this is the general framework that we use in this paper. Specifically, we consider the superpopulation approach developed by Lin [13] who proposed a variance estimator that accounts for the extra variation due to sampling the finite survey population from an infinite superpopulation. As a result, the Lin variance estimator accounts for the random variation from one survey sample to another and from one survey population to another, as opposed to the robust sandwich variance estimator that accounts only for the random variation from one survey sample to another. In the context of population-based case-control study, the case-control sample could be considered as the survey sample, the source population as the finite survey population, and the population under study as the infinite superpopulation. The asymptotic properties of the Lin variance estimator have been investigated and a small simulation study has been conducted to investigate these properties in finite samples [13]. The results indicated that the superpopulation variance estimates were closer to the true variance than the robust sandwich variance estimates. However, the simulation study considered only fixed-in-time covariates and simple selection probabilities that did not reflect the more complex sampling scheme of population-based case-control studies. 
It is therefore unclear how the superpopulation variance estimator would perform for the estimation of the effects of time-dependent covariates using the specific estimated time-dependent weights proposed in the WC model [4]. In addition, for further applications to population-based case-control data, it would be important to clarify the performance of the WC model, as compared with standard logistic regression analyses, for estimating the effects of several correlated temporal patterns of protracted exposures. Indeed, the effects of temporal patterns of exposures such as intensity, duration, age at first exposure, and time since last exposure are often of great interest from an epidemiological point of view [15], but they need to be carefully adjusted for each other to avoid residual confounding [16]. Such adjustment induces correlation between covariates and it is important to investigate how it affects the proposed estimators. The first objective of the present study is to investigate through extensive simulations the accuracy of the Lin variance estimator for estimating the effects of time-varying covariates in case-control data, using the weights proposed in the WC model [4]. The second objective is to compare the estimates from the WC model and standard logistic regression for estimating the effects of selected correlated temporal aspects of exposure with detailed information on exposure history. The next section introduces the WC model and the robust and Lin’s variance estimators. The different approaches are then compared through simulations and using data from a large population-based case-control study on occupational exposure to asbestos and pleural mesothelioma (PM).
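The age-dependent weighting scheme described above can be sketched in code. The following is a minimal Python sketch, not the authors' R implementation; the function name and the toy inputs are illustrative, and it assumes the WC weights take the form made explicit in Equation (2) of the methods: controls are weighted by (1 − π(t))/π(t) times the case/control ratio among subjects still in the risk set, while cases keep weight 1.

```python
def wc_weight(is_case, pi_t, n_cases_ge_t, n_controls_ge_t):
    """Illustrative time-dependent weight of the weighted Cox (WC) model.

    is_case          -- True for a case diagnosed at age t or at a later age
    pi_t             -- pi(t): probability of developing the disease at age t
                        or later in the source population
    n_cases_ge_t     -- cases in the study diagnosed at age t or later
    n_controls_ge_t  -- controls in the study selected at age t or later
    """
    if is_case:
        # all eligible cases are usually included, so their weight is 1
        return 1.0
    # controls are up-weighted to represent the disease-free source population
    return (1.0 - pi_t) / pi_t * (n_cases_ge_t / n_controls_ge_t)

# toy example: a control at an age where pi(t) = 0.01,
# with 50 cases and 100 controls remaining at that age
w = wc_weight(False, 0.01, 50, 100)   # (0.99 / 0.01) * (50 / 100) = 49.5
```

In practice these weights would be recomputed at every event age and passed to a Cox routine that accepts time-dependent weights, as the paper does with `coxph` in R.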
null
null
Results
Table 5 shows the estimated effects of the selected quantitative asbestos exposure variables on the risk of PM, using the four analytical approaches (WC1, WC2, CLR, and ULR) and Models 1–3. The estimated effects are shown in terms of exp(β̂), i.e. estimated hazard ratios for WC1 and WC2 and estimated odds ratios for ULR and CLR. These estimated effects were calculated for an increase of about one standard deviation of the exposure variable, i.e. 1 fiber/ml for asbestos exposure intensity, 14 years for duration, 8 years for age at first exposure, and 14 years for time since last exposure.

Table 5. Estimated effect of occupational asbestos exposure in males ever exposed (1041 cases and 1425 controls), using the WC models and logistic regression and assuming linear effects of quantitative exposure variables. Results from the French case-control study on mesothelioma, 1987–2006. (a) All the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at diagnosis/interview in CLR and ULR. Intensity was measured by the mean index of exposure (MIE). (b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; both WC1 and WC2 used age as the time axis and included birth year as a quantitative covariate. ULR, unconditional logistic regression including age at diagnosis/interview and birth year as quantitative covariates; CLR, conditional logistic regression stratified on birth year group (5 years) and including age at diagnosis/interview as a quantitative covariate. (c) Hazard ratio estimates for WC1 and WC2 (same value for both) and odds ratio estimates for CLR and ULR, adjusted for age and birth year, with corresponding 95% confidence intervals (CI).

As expected, the associations between all asbestos exposure variables and PM were significant with each of the four analytical approaches (Table 5).
Specifically, increasing intensity or duration significantly increased the risk of PM, whether or not the model was adjusted for age at first exposure or time since last exposure. Because the relative variation in the estimated effects of duration between Model 3 and Model 1 was higher than between Model 2 and Model 1, time since last exposure (in Model 3) seems to be a more important confounder than age at first exposure (in Model 2) in the relation between duration and PM. Estimates from Model 2 suggest that the later a subject is first occupationally exposed to asbestos, the smaller his risk of PM. All the estimated effects of time since cessation indicate that the risk continues to increase after the cessation of exposure, as in many other studies [15,26,27]. The 95% CI from WC1 and WC2 were almost identical (Table 5), suggesting that the robust variance estimates from WC1 were very close to the superpopulation variance estimates from WC2. This is likely due to the fact that the disease (PM) was very rare, as shown in Table 3, as opposed to our simulation study, where the overall event rates were about 10% and 2%. The strongest contrasts between the estimates from the WC models and ULR or CLR were for the effect of exposure intensity. Indeed, the estimated effect of intensity was systematically weaker with the WC models than with ULR or CLR, with even non-overlapping 95% CI. Note that, as for Scenario A in our simulation study, CLR provided the strongest estimates for the strong effect of intensity. By contrast, for the effects of duration, age at initiation, and time since last exposure, the strongest estimates were provided by the WC models, but the discrepancies with ULR and CLR were weaker than for intensity. There are different potential explanations for the discrepancies between the results from the Cox (WC1 and WC2) and logistic (CLR and ULR) models. First, the adjustment for age was largely different in the two series of models.
While age was the time axis in the Cox models, and was therefore adequately adjusted for in both WC1 and WC2, it was included as a continuous covariate in both logistic models. This assumed that its effect was linear on the logit scale, which is actually not true [15]. Thus there may be some residual confounding by age in both CLR and ULR. Second, because controls of the case-control study on PM were selected from members of the general French population at calendar times that could differ from the period of case recruitment, the case-control odds ratio estimate from ULR and CLR may estimate a different quantity than the hazard ratio estimate from the Cox model. Indeed, the hazard function in the Cox models provides a dynamic description of how the instantaneous risk of getting PM varies with age. The exponential of a regression parameter can be interpreted as a hazard ratio, which is equivalent to the rate ratio that would be obtained from a cohort design. If the controls of the case-control study on PM had been randomly selected from the members of the population who were at risk at each age at which a case occurs (as in our simulation study), then the estimated odds ratio obtained from ULR and CLR could also be interpreted as the rate ratio that would be obtained from a cohort design. However, this was not the way controls were selected in the case-control study on PM, and it is therefore difficult to directly compare the odds ratio estimates obtained from ULR and CLR with the hazard ratio estimates obtained from WC1 and WC2.
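The near-identity of the WC1 and WC2 intervals noted above follows from how the superpopulation variance is built: Lin's estimator simply adds the naïve (inverse-information) variance to the robust sandwich variance, so when the naïve term is small relative to the robust term, as with a very rare disease, the two sets of standard errors nearly coincide. A minimal numeric sketch in Python (toy variance values, not figures from the study):

```python
import math

# toy one-parameter example: robust sandwich variance and naive
# (inverse observed information) variance for a log hazard ratio
v_robust = 0.0400   # V1: robust sandwich variance estimate
v_naive = 0.0004    # I^{-1}: naive variance, small when events are rare

# Lin's superpopulation variance (Equation (5)): V2 = V1 + I^{-1}
v_super = v_robust + v_naive

se1, se2 = math.sqrt(v_robust), math.sqrt(v_super)
# relative widening of the standard error (and hence of the 95% CI)
# when switching from WC1 to WC2
widening = se2 / se1 - 1.0   # about 0.5% here, hence nearly identical CIs
```

With a common disease, the naive term is no longer negligible and WC2 yields visibly wider intervals, which is exactly the coverage correction the simulations were designed to probe.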
Conclusion
We believe that the WC model using the superpopulation variance estimator may provide a potential alternative analytical method for case-control analyses with detailed information on the history of the exposure of interest, especially if a large proportion of subjects have time-varying exposure intensity over lifetime, and if only one control is available for each case.
[ "Background", "The regression model and the variance estimators", "The WC model", "The variance estimators", "Simulations", "Overview of the simulation design", "Analytical methods used to analyze the simulated data", "Statistical criteria used to compare the performance of the different estimators", "Simulation results", "Application to occupational exposure to asbestos and pleural mesothelioma", "Data source", "Analytical methods used to analyze the case-control data on pleural mesothelioma", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Population-based case-control studies are widely used in epidemiology to investigate the association between environmental or occupational exposures over lifetime and the risk of cancer or other chronic diseases. Many of the exposures of interest are protracted and a huge amount of information is often retrospectively collected for each subject about his/her potential past exposure over lifetime. For example, for occupational exposures, the whole occupational history is usually investigated for each subject, and different methods exist to estimate the average dose of exposure at each past job [1-3]. However, only the cumulated estimated dose of exposure at the index age (age at diagnosis for cases and at interview for controls) is usually used in standard logistic regression analyses. Such approach does not use the (retrospective) dynamic information available on the exposure at different ages during lifetime.\nA time-dependent weighted Cox (WC) model has recently been proposed to incorporate this dynamic information on exposure, in order to more accurately estimate the effect of time-dependent exposures in population-based case-control studies [4]. The WC model consists in using age as the time axis and weighting cases and controls according to their case-control status and the age conditional probabilities of developing the disease in the source population. The weights proposed in the WC model are therefore time-dependent and estimated from data of the source population. A simulation study showed that the WC model improved the accuracy of the regression parameters estimates of time-dependent exposure variables as compared with standard logistic regression with fixed-in-time covariates [4]. 
However, the average robust sandwich variance estimates based on dfbetas residuals [5] were systematically lower than the empirical variance of the parameter estimates, which resulted in too narrow confidence intervals (CI) and low coverage probabilities [4].\nThere is an extensive statistical literature on the weighted analyses of cohort sampling designs (see among many others [6-10]). A population-based case-control study can be seen as a nested case-control study within the source population of cases and controls, and can therefore fit in this general cohort sampling design framework. Population-based case-control studies can also be seen as a survey with complex selection probabilities [11-14] and this is the general framework that we use in this paper. Specifically, we consider the superpopulation approach developed by Lin [13] who proposed a variance estimator that accounts for the extra variation due to sampling the finite survey population from an infinite superpopulation. As a result, the Lin variance estimator accounts for the random variation from one survey sample to another and from one survey population to another, as opposed to the robust sandwich variance estimator that accounts only for the random variation from one survey sample to another. In the context of population-based case-control study, the case-control sample could be considered as the survey sample, the source population as the finite survey population, and the population under study as the infinite superpopulation.\nThe asymptotic properties of the Lin variance estimator have been investigated and a small simulation study has been conducted to investigate these properties in finite samples [13]. The results indicated that the superpopulation variance estimates were closer to the true variance than the robust sandwich variance estimates. 
However, the simulation study considered only fixed-in-time covariates and simple selection probabilities that did not reflect the more complex sampling scheme of population-based case-control studies. It is therefore unclear how the superpopulation variance estimator would perform for the estimation of the effects of time-dependent covariates using the specific estimated time-dependent weights proposed in the WC model [4]. In addition, for further applications to population-based case-control data, it would be important to clarify the performance of the WC model, as compared with standard logistic regression analyses, for estimating the effects of several correlated temporal patterns of protracted exposures. Indeed, the effects of temporal patterns of exposures such as intensity, duration, age at first exposure, and time since last exposure are often of great interest from an epidemiological point of view [15], but they need to be carefully adjusted for each other to avoid residual confounding [16]. Such adjustment induces correlation between covariates and it is important to investigate how it affects the proposed estimators.\nThe first objective of the present study is to investigate through extensive simulations the accuracy of the Lin variance estimator for estimating the effects of time-varying covariates in case-control data, using the weights proposed in the WC model [4]. The second objective is to compare the estimates from the WC model and standard logistic regression for estimating the effects of selected correlated temporal aspects of exposure with detailed information on exposure history. The next section introduces the WC model and the robust and Lin’s variance estimators. 
The different approaches are then compared through simulations and using data from a large population-based case-control study on occupational exposure to asbestos and pleural mesothelioma (PM).", " The WC model The Cox proportional hazards model specifies the hazard function as\n\n\n\n\nλ\n\n\nt\n|\nx\n\nt\n\n\n\n=\n\nλ\n0\n\n\nt\n\nexp\n\n\nx\n\n\nt\n\n′\n\nβ\n\n\n,\n\n\n\n\nwhere λ0 is the baseline hazard, x(t) is the vector of observed covariate values at time t and β is the vector of unknown regression parameters. In the context of a population-based survey with complex sampling design [5], the estimator of β is the solution of the pseudo-maximum likelihood equation\n\n\n(1)\n\n\nU\n\nβ\n\n=\n\n\n∑\n\ni\n=\n1\n\nn\n\n\n\nω\ni\n\n\nδ\ni\n\n\n\n\nx\ni\n\n\n\nt\ni\n\n\n−\n\n\n\n\nS\n^\n\n\n1\n\n\n\n\n\nt\ni\n\n,\n\nβ\n^\n\n\n\n\n\n\n\nS\n^\n\n\n0\n\n\n\n\n\nt\ni\n\n,\n\nβ\n^\n\n\n\n\n\n\n\n=\n0\n\n\n,\n\n\n\n\nwhere n is the sample size, ωi is the sampling weight for subject i, δi = 1 if subject i is the case diagnosed at age ti and 0 otherwise, and\n\n\n\n\n\n\nS\n^\n\n\n0\n\n\n\n\nt\n,\n\nβ\n^\n\n\n\n=\n\n\n∑\n\nj\n=\n1\n\nn\n\n\n\nω\nj\n\n\nY\nj\n\n\nt\n\nexp\n\n\n\nx\nj\n\n\n\nt\n\n′\n\n\nβ\n^\n\n\n\n\n\n,\n\n\n\n\n\n\n\n\n\n\nS\n^\n\n\n1\n\n\n\n\nt\n,\n\nβ\n^\n\n\n\n=\n\n\n∑\n\nj\n=\n1\n\nn\n\n\n\nω\nj\n\n\nY\nj\n\n\nt\n\n\nx\nj\n\n\nt\n\nexp\n\n\n\nx\nj\n\n\n\nt\n\n′\n\n\nβ\n^\n\n\n\n\n\n,\n\n\n\n\nwith Yj(t) = 1 if the subject j is at risk at time t (i.e. tj ≥ t), 0 otherwise.\nIn the WC model proposed for case-control data [4], t is age and the sampling weight ω of each subject depends on age and on his case-control status. 
Specifically, the weight for each subject i at age t is given by\n\n\n(2)\n\n\n\nω\ni\n\n\nt\n\n=\n{\n\n\n\n\n\n\n1\n−\nπ\n\nt\n\n\n\nπ\n\nt\n\n\n\n×\n\n\n\nn\ncases\n\n\nt\n\n\n\n\nn\ncontrols\n\n\nt\n\n\n\n\n\n\n\nif\n\nsubject\n\ni\n\nis\n\na\n\ncontrol\n\nselected\n\nat\n\nage\n\nt\n\nor\n\nat\n\na\n\nlater\n\nage\n\n\n\n\n\n1\n\n\n\nif\n\nsubject\n\ni\n\nis\n\na\n\ncase\n\ndiagnosed\n\nat\n\nage\n\nt\n\nor\n\nat\n\na\n\nlater\n\nage\n,\n\n\n\n\n\n\n\n\nwhere π(t) is the probability to develop the disease at age t or at a later age in the source population, ncases(t) is the number of cases diagnosed at age t or at a later age in the case-control study, and ncontrols(t) is the number of controls selected at age t or at a later age in the case-control study as well. If the WC model is used to analyze data from a nested case-control study, the age conditional probabilities π(t) in Equation (2) can directly be estimated from the full enumerated cohort. Left-truncation at age at entry into the cohort should be performed to account for delayed entry [17]. If the WC model is used to analyze population-based case-control data, π(t) can be estimated from health statistics on the population under study, as shown in our application on PM in the section following simulations. The weights equal 1 for cases because all the eligible cases of the source population (or in the cohort) are usually included in the case-control study. 
If the sampling probabilities of cases do not equal 1, then weights in Equation (2) should be adjusted accordingly.\nThe weights defined in Equation (2) can be implemented in any statistical software that handles time-dependent weights in the Cox model, such as the coxph function in R or the SAS PROC PHREG function.\nThe Cox proportional hazards model specifies the hazard function as\n\n\n\n\nλ\n\n\nt\n|\nx\n\nt\n\n\n\n=\n\nλ\n0\n\n\nt\n\nexp\n\n\nx\n\n\nt\n\n′\n\nβ\n\n\n,\n\n\n\n\nwhere λ0 is the baseline hazard, x(t) is the vector of observed covariate values at time t and β is the vector of unknown regression parameters. In the context of a population-based survey with complex sampling design [5], the estimator of β is the solution of the pseudo-maximum likelihood equation\n\n\n(1)\n\n\nU\n\nβ\n\n=\n\n\n∑\n\ni\n=\n1\n\nn\n\n\n\nω\ni\n\n\nδ\ni\n\n\n\n\nx\ni\n\n\n\nt\ni\n\n\n−\n\n\n\n\nS\n^\n\n\n1\n\n\n\n\n\nt\ni\n\n,\n\nβ\n^\n\n\n\n\n\n\n\nS\n^\n\n\n0\n\n\n\n\n\nt\ni\n\n,\n\nβ\n^\n\n\n\n\n\n\n\n=\n0\n\n\n,\n\n\n\n\nwhere n is the sample size, ωi is the sampling weight for subject i, δi = 1 if subject i is the case diagnosed at age ti and 0 otherwise, and\n\n\n\n\n\n\nS\n^\n\n\n0\n\n\n\n\nt\n,\n\nβ\n^\n\n\n\n=\n\n\n∑\n\nj\n=\n1\n\nn\n\n\n\nω\nj\n\n\nY\nj\n\n\nt\n\nexp\n\n\n\nx\nj\n\n\n\nt\n\n′\n\n\nβ\n^\n\n\n\n\n\n,\n\n\n\n\n\n\n\n\n\n\nS\n^\n\n\n1\n\n\n\n\nt\n,\n\nβ\n^\n\n\n\n=\n\n\n∑\n\nj\n=\n1\n\nn\n\n\n\nω\nj\n\n\nY\nj\n\n\nt\n\n\nx\nj\n\n\nt\n\nexp\n\n\n\nx\nj\n\n\n\nt\n\n′\n\n\nβ\n^\n\n\n\n\n\n,\n\n\n\n\nwith Yj(t) = 1 if the subject j is at risk at time t (i.e. tj ≥ t), 0 otherwise.\nIn the WC model proposed for case-control data [4], t is age and the sampling weight ω of each subject depends on age and on his case-control status. 
Specifically, the weight for each subject i at age t is given by\n\n\n(2)\n\n\n\nω\ni\n\n\nt\n\n=\n{\n\n\n\n\n\n\n1\n−\nπ\n\nt\n\n\n\nπ\n\nt\n\n\n\n×\n\n\n\nn\ncases\n\n\nt\n\n\n\n\nn\ncontrols\n\n\nt\n\n\n\n\n\n\n\nif\n\nsubject\n\ni\n\nis\n\na\n\ncontrol\n\nselected\n\nat\n\nage\n\nt\n\nor\n\nat\n\na\n\nlater\n\nage\n\n\n\n\n\n1\n\n\n\nif\n\nsubject\n\ni\n\nis\n\na\n\ncase\n\ndiagnosed\n\nat\n\nage\n\nt\n\nor\n\nat\n\na\n\nlater\n\nage\n,\n\n\n\n\n\n\n\n\nwhere π(t) is the probability to develop the disease at age t or at a later age in the source population, ncases(t) is the number of cases diagnosed at age t or at a later age in the case-control study, and ncontrols(t) is the number of controls selected at age t or at a later age in the case-control study as well. If the WC model is used to analyze data from a nested case-control study, the age conditional probabilities π(t) in Equation (2) can directly be estimated from the full enumerated cohort. Left-truncation at age at entry into the cohort should be performed to account for delayed entry [17]. If the WC model is used to analyze population-based case-control data, π(t) can be estimated from health statistics on the population under study, as shown in our application on PM in the section following simulations. The weights equal 1 for cases because all the eligible cases of the source population (or in the cohort) are usually included in the case-control study. 
If the sampling probabilities of cases do not equal 1, then weights in Equation (2) should be adjusted accordingly.\nThe weights defined in Equation (2) can be implemented in any statistical software that handles time-dependent weights in the Cox model, such as the coxph function in R or the SAS PROC PHREG function.\n The variance estimators The robust sandwich variance estimator for β^ in Equation (1) as proposed by Binder [5] for finite population-based surveys is given by\n\n\n(3)\n\n\n\n\nV\n^\n\n1\n\n\n\nβ\n^\n\n\n=\n\nI\n\n−\n1\n\n\n\n\nβ\n^\n\n\n\n\n\n∑\n\ni\n=\n1\n\nn\n\n\n\n\n\n\nω\ni\n\n\n\nu\n^\n\ni\n\n\n\nβ\n^\n\n\n\n\n\n⊗\n2\n\n\n\n\n\n\nI\n\n−\n1\n\n\n\n\nβ\n^\n\n\n\n\n\n\nwhere Iβ^ is the observed information matrix obtained by evaluation of this expression ∂U^β∂β|β=β^, a⊗ 2 = aa′, and\n\n\n(4)\n\n\n\n\n\n\nu\n^\n\ni\n\n\n\nβ\n^\n\n\n=\n\nδ\ni\n\n\n\n\nx\ni\n\n\n\nt\ni\n\n\n−\n\n\n\n\nS\n^\n\n\n1\n\n\n\n\n\nt\ni\n\n,\n\nβ\n^\n\n\n\n\n\n\nS\n\n0\n\n\n\n\n\nt\ni\n\n,\n\nβ\n^\n\n\n\n\n\n\n\n\n\n\n\n\n−\n\n\n∑\n\nj\n=\n1\n\nn\n\n\n\nδ\nj\n\n\nω\nj\n\n\n\n\nY\ni\n\n\n\nt\nj\n\n\nexp\n\n\n\nx\ni\n\n\n\n\nt\nj\n\n\n′\n\n\nβ\n^\n\n\n\n\n\n\n\nS\n^\n\n\n0\n\n\n\n\n\nt\nj\n\n,\n\nβ\n^\n\n\n\n\n\n\n\n\n\n\n\n\nx\n\n\n\n\nx\ni\n\n\n\nt\nj\n\n\n−\n\n\n\n\nS\n^\n\n\n1\n\n\n\n\n\nt\nj\n\n,\n\nβ\n^\n\n\n\n\n\n\n\nS\n^\n\n\n0\n\n\n\n\n\nt\nj\n\n,\n\nβ\n^\n\n\n\n\n\n\n\n.\n\n\n\n\n\n\nThe robust variance estimator in Equation (3) can be rewritten as V^1β^=D’D where D is the dfbetas residuals [18] vector from the Cox model including the weights ω that can depend on time as those defined in Equation (2), as suggested in Barlow [19]. As indicated by Therneau and Li [20] and by Barlow et al. 
The Cox proportional hazards model specifies the hazard function as

\lambda(t \mid x(t)) = \lambda_0(t) \exp\left( x(t)'\beta \right),

where \lambda_0 is the baseline hazard, x(t) is the vector of observed covariate values at time t, and \beta is the vector of unknown regression parameters.
In the context of a population-based survey with a complex sampling design [5], the estimator of \beta is the solution of the pseudo-maximum likelihood equation

U(\beta) = \sum_{i=1}^{n} \omega_i \delta_i \left[ x_i(t_i) - \frac{\hat{S}^{(1)}(t_i, \hat{\beta})}{\hat{S}^{(0)}(t_i, \hat{\beta})} \right] = 0,    (1)

where n is the sample size, \omega_i is the sampling weight for subject i, \delta_i = 1 if subject i is the case diagnosed at age t_i and 0 otherwise, and

\hat{S}^{(0)}(t, \hat{\beta}) = \sum_{j=1}^{n} \omega_j Y_j(t) \exp\left( x_j(t)'\hat{\beta} \right),

\hat{S}^{(1)}(t, \hat{\beta}) = \sum_{j=1}^{n} \omega_j Y_j(t) x_j(t) \exp\left( x_j(t)'\hat{\beta} \right),

with Y_j(t) = 1 if subject j is at risk at time t (i.e. t_j ≥ t), and 0 otherwise.

In the WC model proposed for case-control data [4], t is age, and the sampling weight \omega of each subject depends on age and on case-control status. Specifically, the weight for subject i at age t is given by

\omega_i(t) = \begin{cases} \dfrac{1 - \pi(t)}{\pi(t)} \times \dfrac{n_{\mathrm{cases}}(t)}{n_{\mathrm{controls}}(t)} & \text{if subject } i \text{ is a control selected at age } t \text{ or at a later age} \\ 1 & \text{if subject } i \text{ is a case diagnosed at age } t \text{ or at a later age}, \end{cases}    (2)

where \pi(t) is the probability of developing the disease at age t or at a later age in the source population, n_{cases}(t) is the number of cases diagnosed at age t or at a later age in the case-control study, and n_{controls}(t) is the number of controls selected at age t or at a later age in the case-control study.
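As a minimal sketch of Equation (2), the weight of a single subject at age t can be computed from \pi(t) and the numbers of cases and controls still in the study at that age (the function name `wc_weight` is ours, for illustration only):

```python
def wc_weight(is_case: bool, pi_t: float, n_cases_t: int, n_controls_t: int) -> float:
    """Weight of one subject at age t under the WC model (Equation (2)).

    pi_t: probability of developing the disease at age t or later in the
    source population; n_cases_t / n_controls_t: numbers of cases diagnosed /
    controls selected at age t or later in the case-control study.
    """
    if is_case:
        # Cases keep weight 1 (all eligible cases are usually included).
        return 1.0
    return (1.0 - pi_t) / pi_t * (n_cases_t / n_controls_t)

# A control at an age where pi(t) = 0.02, with 50 cases and 100 controls
# remaining, gets weight (0.98 / 0.02) * (50 / 100), i.e. about 24.5.
print(wc_weight(False, 0.02, 50, 100))
print(wc_weight(True, 0.02, 50, 100))   # 1.0
```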
If the WC model is used to analyze data from a nested case-control study, the age-conditional probabilities \pi(t) in Equation (2) can be estimated directly from the fully enumerated cohort. Left-truncation at age at entry into the cohort should be performed to account for delayed entry [17]. If the WC model is used to analyze population-based case-control data, \pi(t) can be estimated from health statistics on the population under study, as shown in our application on PM in the section following the simulations. The weights equal 1 for cases because all the eligible cases of the source population (or of the cohort) are usually included in the case-control study. If the sampling probabilities of cases do not equal 1, the weights in Equation (2) should be adjusted accordingly.

The weights defined in Equation (2) can be implemented in any statistical software that handles time-dependent weights in the Cox model, such as the coxph function in R or PROC PHREG in SAS.

The robust sandwich variance estimator for \hat{\beta} in Equation (1), as proposed by Binder [5] for finite population-based surveys, is given by

\hat{V}_1(\hat{\beta}) = I^{-1}(\hat{\beta}) \left[ \sum_{i=1}^{n} \left\{ \omega_i \hat{u}_i(\hat{\beta}) \right\}^{\otimes 2} \right] I^{-1}(\hat{\beta}),    (3)

where I(\hat{\beta}) is the observed information matrix, obtained by evaluating \partial U(\beta) / \partial \beta at \beta = \hat{\beta}, a^{\otimes 2} = aa',
and

\hat{u}_i(\hat{\beta}) = \delta_i \left[ x_i(t_i) - \frac{\hat{S}^{(1)}(t_i, \hat{\beta})}{\hat{S}^{(0)}(t_i, \hat{\beta})} \right] - \sum_{j=1}^{n} \delta_j \omega_j \frac{Y_i(t_j) \exp\left( x_i(t_j)'\hat{\beta} \right)}{\hat{S}^{(0)}(t_j, \hat{\beta})} \left[ x_i(t_j) - \frac{\hat{S}^{(1)}(t_j, \hat{\beta})}{\hat{S}^{(0)}(t_j, \hat{\beta})} \right].    (4)

The robust variance estimator in Equation (3) can be rewritten as \hat{V}_1(\hat{\beta}) = D'D, where D is the matrix of dfbetas residuals [18] from the Cox model including the weights \omega, which may depend on time as those defined in Equation (2), as suggested by Barlow [19]. As indicated by Therneau and Li [20] and by Barlow et al. [21], the robust sandwich variance estimate from Equation (3) can be obtained directly in R with the commands

M1 <- coxph(Surv(start, stop, event) ~ x + cluster(id), weights = weight)
V1 <- M1$var

with the vector of weights derived from Equation (2) for the WC model.

The robust variance estimator \hat{V}_1(\hat{\beta}) accounts for the variability due to sampling the case-control sample from the source population. To account for the extra variability due to sampling the source population from the (infinite) superpopulation, we propose to use the Lin variance estimator [13], which amounts to adding the naïve variance estimator to the robust variance estimator \hat{V}_1(\hat{\beta}).
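The identity \hat{V}_1 = D'D can be illustrated with a toy numeric sketch (the dfbetas matrix below is made up, not derived from the paper's data):

```python
import numpy as np

# Toy sketch: D is the n x p matrix of weighted dfbetas residuals from the
# fitted Cox model (rows = subjects, columns = covariates). The robust
# sandwich variance of Equation (3) then collapses to the cross-product D'D.
rng = np.random.default_rng(0)
n, p = 200, 2
D = rng.normal(scale=0.05, size=(n, p))

V1 = D.T @ D                      # robust sandwich variance estimate
se_robust = np.sqrt(np.diag(V1))  # robust standard errors

# V1 is symmetric with non-negative diagonal by construction.
print(np.allclose(V1, V1.T))      # True
```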
The Lin superpopulation variance estimator is thus given by

\hat{V}_2(\hat{\beta}) = \hat{V}_1(\hat{\beta}) + I^{-1}(\hat{\beta}).    (5)

In R, the superpopulation variance estimate from Equation (5) can simply be obtained with the command

V2 <- V1 + M1$naive.var

Throughout this paper, the WC model using the robust variance estimator \hat{V}_1(\hat{\beta}) in Equation (3) is denoted WC1, while the WC model using Lin's superpopulation variance estimator \hat{V}_2(\hat{\beta}) in Equation (5) is denoted WC2. While the WC1 and WC2 models give identical estimated exposure effects, they yield different standard errors and thus different CIs.

Overview of the simulation design

The main objective of the simulation study was to evaluate the performance of Lin's superpopulation variance estimator \hat{V}_2(\hat{\beta}) in Equation (5), with the time-dependent weights defined in Equation (2), for estimating the effects of time-varying exposures in case-control studies. In particular, we compared the coverage probability of the 95% CI resulting from the WC2 model with those from the WC1 model and from standard logistic regression. We were specifically interested in the effects of exposure intensity, duration, age at first exposure, and time since last exposure. These inter-related aspects of exposure are of interest in many epidemiological applications but raise statistical issues because of their correlation and time-dependency.

We generated 1000 source populations of 1000 or 5000 individuals each and, within each source population, we simulated a case-control study. The age at event for each subject in each source population was generated from a standard Cox model with time-dependent covariates, using a permutation algorithm described elsewhere and assuming a Weibull marginal distribution of age at event [4,22,23]. Three Cox models of epidemiological interest were simulated. Model 1 included intensity and duration of exposure only.
Model 2 included age at first exposure in addition to intensity and duration. Model 3 was similar to Model 2 but used time since last exposure instead of age at first exposure.

The distributions of the exposure variables were chosen to be close to the observed distributions of the occupational asbestos exposure variables in our case-control data on PM [15], described in the application section. Specifically, the ages at first and at last exposure were generated for all subjects from lognormal distributions. The exposure intensity at each age was generated from a linear function of age. Parameters for the random intercept and slope were chosen such that either 85% of subjects had a constant intensity over lifetime, 6% a highly increasing, 6% a moderately decreasing, and 3% a moderately increasing intensity (Scenario A); or 50% a highly increasing and 50% a moderately decreasing intensity (Scenario B). Scenario A reflects our real case-control data on occupational exposure to asbestos. The exposure intensity at each age was represented in all our models by a variable equal to the cumulated intensity at that age divided by the total duration of exposure at that age. This exposure intensity variable is equivalent to the mean index of exposure (MIE) variable introduced in the application section. The exposure intensity, as well as duration and time since last exposure, were time-dependent in all our true Models 1–3.

The true effects \beta of each exposure variable in Models 1–3 were fixed to values ranging from weak to strong: 0.41 to 1.39 for intensity, 0.01 to 0.05 for duration, −0.01 to −0.11 for age at first exposure, and 0.01 to 0.04 for time since last exposure. These values correspond to hazard ratios of 1.5 to 4.0 for a one standard deviation (1.0 fiber/ml) increase in exposure intensity, 1.2 to 2.0 for a one standard deviation (14 years) increase in duration of exposure, 0.9 to 0.4 for a one standard deviation (8 years) increase in age at first exposure, and 1.2 to 1.8 for a one standard deviation (14 years) increase in time since cessation of exposure.

Censoring for age at event in the source population was generated independently from a uniform distribution such that the event rate was about 10% in each source population of 1000 subjects and 2% in each source population of 5000 subjects. Each subject of the source population who had the event of interest was selected as a case in the case-control dataset. The event rates in the source population thus implied about 100 cases in each case-control dataset. For each case, 1, 2, or 4 controls were randomly selected with replacement among subjects at risk at the case's event age, corresponding to 1:1, 1:2, or 1:4 individual matching on age. On average, each case-control dataset was therefore made of about 100 cases and 100, 200, or 400 controls.
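The control-selection step (drawing controls with replacement from the subjects still at risk at each case's event age) can be sketched as follows; this is a simplified illustration, not the paper's simulation code, and `sample_controls` is a name of our own:

```python
import random

def sample_controls(cases, ages, k, seed=0):
    """For each case (index, event_age), draw k controls with replacement
    among subjects whose age at event/censoring is >= the case's event age."""
    rng = random.Random(seed)
    matched = {}
    for case_id, event_age in cases:
        # Risk set: everyone still event-free at the case's event age.
        risk_set = [j for j, a in enumerate(ages)
                    if a >= event_age and j != case_id]
        matched[case_id] = [rng.choice(risk_set) for _ in range(k)]
    return matched

# Toy population: ages at event or censoring for 6 subjects;
# subject 0 is a case at age 50, subject 3 a case at age 60 (1:2 matching).
ages = [50, 55, 58, 60, 65, 70]
print(sample_controls([(0, 50), (3, 60)], ages, k=2))
```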
Each case-control sample was analyzed using four regression models (the WC1 and WC2 models and two standard logistic regression models) that were correctly specified in terms of the exposure variables included. In the WC1 and WC2 models, the exposure variables were time-dependent, and the probability \pi(t) was the proportion of subjects in the source population who had an event at age t or at a later age among those at risk at age t. We assumed that all subjects of the source population were followed up since birth, implying that age at event did not have to be left-truncated in WC1 and WC2. For comparison purposes, conditional logistic regression (CLR) was used as the standard analytical method for individually matched case-control studies. Unconditional logistic regression (ULR), including age as a continuous covariate in addition to the exposure variables, was also used as a standard alternative analytical approach. For both ULR and CLR, the time-dependent covariates were fixed at their observed value at the age at event for cases or at selection for controls. Because controls were selected among subjects at risk at the ages at which each case occurred, the exponentials of all the regression parameter estimates can be interpreted as source population rate ratio estimates [24].

Statistical criteria used to compare the performance of the different estimators

For each of the four regression models WC1, WC2, CLR, and ULR, we calculated the relative bias of the regression parameter estimator \hat{\beta} associated with each exposure variable, as compared with the true effect \beta of that exposure variable,

\frac{1}{1000} \sum_{i=1}^{1000} \frac{\hat{\beta}_i - \beta}{\beta},

where \hat{\beta}_i is the parameter estimate of the model based on the ith case-control dataset (i = 1, …, 1000).
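The relative bias just defined, and the root mean squared error used alongside it, can be computed directly from the vector of replicate estimates; the sketch below uses made-up estimates, not the simulation output:

```python
import numpy as np

def relative_bias(estimates, beta_true):
    """Mean of (beta_hat_i - beta) / beta over the replications."""
    estimates = np.asarray(estimates, dtype=float)
    return np.mean((estimates - beta_true) / beta_true)

def rmse(estimates, beta_true):
    """sqrt((mean(beta_hat) - beta)^2 + empirical variance of beta_hat)."""
    estimates = np.asarray(estimates, dtype=float)
    return np.sqrt((estimates.mean() - beta_true) ** 2 + estimates.var())

beta = 0.41                               # e.g. a weak intensity effect
est = np.array([0.40, 0.45, 0.38, 0.43])  # hypothetical replicate estimates
print(relative_bias(est, beta), rmse(est, beta))
```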
To evaluate whether the relative bias was partly due to a bias generated in the source population data, we also derived the relative bias as compared with the estimated effect \hat{\beta}_{Cox} of the well-specified time-dependent Cox model using the full source population data,

\frac{1}{1000} \sum_{i=1}^{1000} \frac{\hat{\beta}_i - \hat{\beta}_{\mathrm{Cox},i}}{\hat{\beta}_{\mathrm{Cox},i}}.

We also derived the root mean squared error (RMSE),

\sqrt{ \left( \bar{\hat{\beta}} - \beta \right)^2 + \mathrm{var}(\hat{\beta}) },

where \bar{\hat{\beta}} is the mean of the 1000 parameter estimates \hat{\beta}_i. The empirical relative efficiency of each regression parameter estimator was computed as the ratio of the empirical variance of the Cox model using the full source population data, var(\hat{\beta}_{Cox}), to the empirical variance of the parameter estimates, var(\hat{\beta}). The average of the 1000 standard errors s(\hat{\beta}) (ASE) was compared to the empirical standard deviation of the 1000 \hat{\beta} estimates (SDE). We also calculated the coverage probability as the proportion of samples for which the 95% CI of \beta, \hat{\beta} ± 1.96 × s(\hat{\beta}), included the true value \beta.

Simulation results

Table 1 shows the results of the four analytical methods (WC1, WC2, CLR, ULR) for strong effects of exposure intensity and duration in Model 1. Table 2 shows the results for strong effects of (i) intensity, duration, and age at first exposure in Model 2, and (ii) intensity, duration, and time since cessation in Model 3. The results tended to be similar for weaker effects.

Table 1. Simulation results for Model 1 for 1:1, 1:2, or 1:4 matched case-control data including about 100 cases arising from populations of 1000 or 5000 subjects, based on 1000 replications.
(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).
(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.
(c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full source population data. Each of these two biases was the same for WC1 and WC2, since these models used the same regression parameter estimator \hat{\beta}.
(d) Relative efficiency as compared to the Cox model estimated on the full source population. This quantity was the same for WC1 and WC2, since these models used the same regression parameter estimator \hat{\beta}.
(e) RMSE, root mean squared error (same for WC1 and WC2, which used the same regression parameter estimator \hat{\beta}); ASE, average of the 1000 standard errors s(\hat{\beta}); SDE, empirical standard deviation of the 1000 \hat{\beta} estimates; cov. rate, coverage rate of the 95% confidence interval of \hat{\beta}.

Table 2. Simulation results for Models 2 and 3 for 1:1 matched case-control data including about 100 cases arising from a population of 1000 subjects, based on 1000 replications. Footnotes (a)–(e) are as for Table 1.

As suggested by the ratio ASE/SDE, the superpopulation variance estimator (WC2) tended to give estimates closer to the true variance than the robust variance estimator (WC1), which systematically under-estimated the true variance. Although the superpopulation variance estimator tended to overestimate the true variance for the effect of exposure intensity when the population was made of only 1000 subjects (Tables 1 and 2), the coverage rates from WC2 were systematically much closer to the nominal level of 95% than those from the WC1 model. For each scenario of intensity pattern (Scenario A or B), the ratio ASE/SDE and the coverage rate for the effects of intensity and duration were similar in Models 2–3 as compared with Model 1 (Table 2 versus Table 1), suggesting that additional adjustment for correlated covariates does not affect the performance of the different variance estimators.

While the relative biases from all analytical models (WC, ULR, and CLR) tended to be low and of the same magnitude in all scenarios, the relative efficiency as compared to the Cox model estimated on the full source population, as well as the accuracy in terms of RMSE, tended to differ. Indeed, in all scenarios with a 1:1 case:control ratio within a source population of 1000 subjects, the regression coefficient estimator from the WC models was much more efficient, and thus also more accurate, than those from CLR and ULR (Tables 1 and 2). As expected, the relative efficiency of all models estimated using 100 cases and 100 controls, as compared to the Cox model estimated on the full source population, decreased when the population size increased. For example, the relative efficiency of the WC for intensity with pattern B decreased from 0.59 to 0.20 when the population size increased from 1000 to 5000 subjects (Table 1).
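The ASE/SDE ratio and the coverage rate discussed above can likewise be computed from the replicate estimates and their standard errors; the values below are hypothetical, for illustration only:

```python
import numpy as np

def coverage_rate(estimates, ses, beta_true):
    """Share of replications whose 95% CI covers the true beta."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    lo = estimates - 1.96 * ses
    hi = estimates + 1.96 * ses
    return np.mean((lo <= beta_true) & (beta_true <= hi))

def ase_sde_ratio(estimates, ses):
    """Average standard error over the empirical SD of the estimates."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    return ses.mean() / estimates.std(ddof=1)

rng = np.random.default_rng(1)
est = rng.normal(0.41, 0.1, size=1000)  # replicate estimates around beta
ses = np.full(1000, 0.1)                # standard errors matching the true SD
print(coverage_rate(est, ses, 0.41))    # close to 0.95
print(ase_sde_ratio(est, ses))          # close to 1
```

With well-calibrated standard errors the ratio sits near 1 and the coverage near 95%; an under-estimating variance (as with WC1) pushes both below their targets.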
As expected as well, increasing the number of controls from 100 to 200 or 400 for a given population size (5000 in Table 1) strongly increased the relative efficiency of ULR and CLR but only moderately increased the relative efficiency of the WC models. For example, the relative efficiency for intensity with pattern B increased from 0.10 to 0.36 for CLR while only from 0.20 to 0.37 for the WC model (Table 1). Because the WC model used controls at different ages for which they were selected in the 1:1 case-control scenario, using additional controls in the 1:2 or 1:4 case:control ratio scenarios added relatively less information in this model than in ULR and CLR. As a result, ULR and CLR became more accurate in terms of RMSE than the WC models when four controls were selected for each case.\nInterestingly, CLR did not perform better in terms of both bias and RMSE than ULR, despite individual matching of cases and controls. ULR was actually systematically more efficient than CLR. This result may be consistent with our previous results where we found that CLR might have difficulty in separating the effects of correlated time-dependent variables [23]. Indeed, the correlation between each pair of the four exposure variables (intensity, duration, age at first exposure and time since last exposure) as well as with age at the index date, ranged between −0.679 and +0.453. The correlation also affected both the WC and ULR parameter estimators as suggested by the slightly higher RMSE in Models 2 and 3 (Table 2) as compared with Model 1 (Table 1) for the effects of intensity and duration, but it affected them less than the CLR estimator.\nTable 1 shows the results of the four analytical methods (WC1, WC2, CLR, ULR) for strong effects of exposure intensity and duration in Model 1. Table 2 shows the results for strong effects of i) intensity, duration, and age at first exposure in Model 2, and ii) intensity, duration, and time since cessation in Model 3. 
The results tended to be similar for weaker effects.

Simulation results for Model 1 for 1:1, 1:2, or 1:4 matched case-control data including about 100 cases arising from populations of 1000 or 5000 subjects, based on 1000 replications:
(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).
(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.
(c) Relative bias as compared with the true effect, and as compared with the estimated effect of the Cox model using the full population source data. Each of these two biases was the same for WC1 and WC2, since both models used the same regression parameter estimator β^.
(d) Relative efficiency as compared with the Cox model estimated on the full population source; again the same for WC1 and WC2, which used the same estimator β^.
(e) RMSE, root mean squared error (same for WC1 and WC2); ASE, average of the 1000 standard errors s_β^; SDE, empirical standard deviation of the 1000 β^ estimates; cov. rate, coverage rate of the 95% confidence interval of β^.

Simulation results for Models 2 and 3 for 1:1 matched case-control data including about 100 cases arising from a population of 1000 subjects, based on 1000 replications: footnotes (a)–(e) as defined for Table 1.

As suggested by the ratio ASE/SDE, the superpopulation variance estimator (WC2) tended to give estimates closer to the true variance than the robust variance estimator (WC1), which systematically under-estimated the true variance.
Although the superpopulation variance estimator tended to overestimate the true variance for the effect of exposure intensity when the population comprised only 1000 subjects (Tables 1 and 2), the coverage rates from WC2 were systematically much closer to the nominal level of 95% than those from WC1. For each intensity-pattern scenario (A or B), the ratio ASE/SDE and the coverage rates for the effects of intensity and duration were similar in Models 2–3 and in Model 1 (Table 2 versus Table 1), suggesting that additional adjustment for correlated covariates does not affect the performance of the different variance estimators.

While the relative biases from all analytical models (WC, ULR, and CLR) tended to be low and of the same magnitude in all scenarios, the relative efficiency as compared with the Cox model estimated on the full population source, as well as the accuracy in terms of RMSE, differed. Indeed, in all scenarios with a 1:1 case:control ratio within a source population of 1000 subjects, the regression coefficient estimator from the WC models was much more efficient, and thus also more accurate, than those from CLR and ULR (Tables 1 and 2). As expected, the relative efficiency of all models estimated using 100 cases and 100 controls, as compared with the Cox model estimated on the full population source, decreased as the population size increased. For example, the relative efficiency of the WC model for intensity with pattern B decreased from 0.59 to 0.20 when the population size increased from 1000 to 5000 subjects (Table 1). Also as expected, increasing the number of controls from 100 to 200 or 400 for a given population size (5000 in Table 1) strongly increased the relative efficiency of ULR and CLR but only moderately increased that of the WC models.
For example, the relative efficiency for intensity with pattern B increased from 0.10 to 0.36 for CLR, but only from 0.20 to 0.37 for the WC model (Table 1). Because the WC model already used controls at ages other than the ages at which they were selected in the 1:1 case-control scenario, the additional controls in the 1:2 or 1:4 case:control ratio scenarios added relatively less information to this model than to ULR and CLR. As a result, ULR and CLR became more accurate in terms of RMSE than the WC models when four controls were selected for each case.

Interestingly, CLR did not perform better than ULR in terms of either bias or RMSE, despite the individual matching of cases and controls; ULR was actually systematically more efficient than CLR. This result is consistent with our previous finding that CLR may have difficulty separating the effects of correlated time-dependent variables [23]. Indeed, the pairwise correlations between the four exposure variables (intensity, duration, age at first exposure, and time since last exposure), and between each of them and age at the index date, ranged from −0.679 to +0.453. The correlation also affected the WC and ULR parameter estimators, as suggested by the slightly higher RMSE for the effects of intensity and duration in Models 2 and 3 (Table 2) compared with Model 1 (Table 1), but it affected them less than the CLR estimator.

Application to occupational exposure to asbestos and pleural mesothelioma

Mesothelioma is a rare tumor mostly located in the pleura and usually caused by exposure to asbestos. The role of the different temporal patterns of occupational exposure to this substance has yet to be explored using appropriate statistical methods that account for individual changes in exposure intensity over time [15].
It is therefore of interest to apply the proposed estimators to estimate the mutually adjusted effects of exposure intensity, duration, age at first exposure, and time since last exposure, and to compare the results with those from standard logistic regression analyses that do not dynamically account for within-subject changes in exposure intensity over time.

Data source

The data came from a large French population-based case-control study described in Lacourt et al. [15]. Cases were selected from a French case-control study conducted in 1987–1993 and from the French National Mesothelioma Surveillance Program in 1998–2006. Population controls were frequency matched to cases by sex and year of birth within 5-year groups. Occupational asbestos exposure was evaluated for each subject with a job-exposure matrix (JEM), which allowed us to derive the mean index of exposure (MIE) used in the regression models to represent intensity of exposure, as in Lacourt et al. [15].
The MIE at age t was given by

MIE(t) = [ Σ_{l=1}^{L} d_l · p_l · ( f_sl · i_sl + f_al · i_al ) ] / Σ_{l=1}^{L} d_l

where L is the total number of jobs exposed to asbestos up to age t; d_l is the duration (in years) of job l; p_l the probability of asbestos exposure for job l; f_sl and i_sl the frequency and intensity of asbestos exposure due to the specific tasks of job l, respectively; and f_al and i_al the frequency and intensity of asbestos exposure due to environmental contamination of the workplace of job l, respectively. For each job, the probability was derived from the percentage of workers exposed in the considered job code, the frequency from the percentage of work time, and the intensity from the concentration of asbestos fibers in the air, expressed as fibers per milliliter (f/ml). See Lacourt et al. [15] for more details. A subject was considered ever exposed to asbestos if he had at least one job with a probability p_l different from zero.

Because our objective was to accurately investigate the effects of the quantitative time-related aspects of occupational exposure, all analyses were restricted to subjects ever exposed to asbestos (68.9% of males and 20.9% of females). In addition, because the sample size for females was too small to ensure adequate statistical power and accurate estimates in separate multiple regression analyses of this group [15], the analyses were restricted to males ever exposed to asbestos, i.e. 1041 male cases and 1425 male controls. The distribution of age and of the asbestos exposure characteristics at the time of diagnosis (cases) or interview (controls) is shown in Table 3.
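As a concrete illustration, the MIE formula above can be sketched as follows. The job records and field names (d, p, f_s, i_s, f_a, i_a) are illustrative assumptions, not the actual layout of the JEM data.

```python
# Hedged sketch of MIE(t): a duration-weighted mean over the asbestos-exposed
# jobs held up to age t, as defined in the text.

def mie(jobs):
    """MIE = sum_l d_l * p_l * (f_sl*i_sl + f_al*i_al) / sum_l d_l."""
    exposed = [j for j in jobs if j["p"] > 0]  # only jobs with nonzero exposure probability
    if not exposed:
        return 0.0  # never-exposed subject
    num = sum(j["d"] * j["p"] * (j["f_s"] * j["i_s"] + j["f_a"] * j["i_a"])
              for j in exposed)
    den = sum(j["d"] for j in exposed)
    return num / den

# Two hypothetical jobs held before age t:
jobs_to_t = [
    {"d": 10, "p": 0.8, "f_s": 0.5, "i_s": 1.2,  "f_a": 0.2, "i_a": 0.1},
    {"d": 5,  "p": 0.3, "f_s": 0.1, "i_s": 0.4,  "f_a": 0.1, "i_a": 0.05},
]
print(round(mie(jobs_to_t), 4))  # -> 0.3352
```

Evaluating MIE at successive ages t (each time restricting the job list to jobs held up to t) yields the time-dependent intensity variable used in the WC models.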
The distribution of the patterns of intensity over lifetime was similar to the one described in Scenario A of the simulation, with 85% of subjects having an almost constant asbestos exposure intensity over lifetime.

Mean and standard deviation of age and asbestos exposure variables at the time of diagnosis/interview for ever exposed males. Results from the French case-control study on mesothelioma, 1987–2006. (a) Measured by the mean index of exposure (MIE).

Analytical methods used to analyze the case-control data on pleural mesothelioma

To derive the weights proposed in the WC models (Equation 2), we first estimated the age-conditional probabilities π(t) of developing PM in the French male general population. These probabilities were derived from published estimated sex- and age-specific incidence rates of PM per 100 000 person-years in France in 2005 [25]. We assumed that these estimated incidence rates applied to our source population and that they were appropriate over the whole life of our subjects. The results for males are shown in Table 4.
As in the simulation study, standard errors for the WC model were then derived using the two variance estimators V^1(β^) and V^2(β^), resulting in the WC1 and WC2 models, respectively.

Estimated male age-conditional probabilities used in the weights of the WC models to analyze the French case-control study on mesothelioma:
(a) p(t) are estimated male age-specific incidence rates of pleural mesothelioma per 100 000 person-years in France in 2005 [25].
(b) π(t) are estimated male age-conditional probabilities of developing pleural mesothelioma within the residual lifetime after age t, calculated as π(t) = 1 − ∏_{l≥t} (1 − p(l)).

For comparison purposes, the data were further analyzed with ULR, the standard method for analyzing frequency matched case-control data, as well as with CLR. Age was the time axis in the WC1 and WC2 models, and a continuous covariate in ULR and CLR. We did not apply left-truncation in the WC1 and WC2 models, thus assuming that all subjects of the source population were passively followed up for PM since birth. The matching factor, birth year, was a quantitative covariate in WC1, WC2, and ULR, and the stratification variable (in 5-year groups) in CLR. Using each of the four approaches (WC1, WC2, CLR, and ULR), we estimated the effects of intensity and duration of occupational asbestos exposure, age at first exposure, and time since last exposure, using the same combinations of quantitative exposure variables as in Models 1–3 of the simulation study. The effects of all these variables were therefore assumed to be linear. Although our recent results suggested that these effects are not linear on the logit of PM [15], we used quantitative variables to facilitate the comparison of the estimates from the four analytical approaches. The resulting estimates should therefore be used only for methodological comparison and not as substantive epidemiological results.
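The age-conditional probability π(t) from footnote (b) can be sketched as below. The incidence rates used here are made-up illustrative numbers, not the published 2005 French rates.

```python
# Hedged sketch of pi(t) = 1 - prod_{l >= t} (1 - p(l)), where p(l) is the
# age-specific incidence rate at age l expressed per person
# (i.e. the rate per 100 000 divided by 100 000).

def pi(t, rates_per_100k, max_age=99):
    surv = 1.0  # probability of remaining disease-free from age t onward
    for age in range(t, max_age + 1):
        p = rates_per_100k.get(age, 0.0) / 100_000
        surv *= 1.0 - p
    return 1.0 - surv

# Illustrative flat rate of 2.5 per 100 000 person-years from age 50 to 99:
rates = {age: 2.5 for age in range(50, 100)}
print(pi(60, rates))  # roughly 0.001 (40 years at risk x 2.5e-5)
```

π(t) decreases with t, since fewer at-risk years remain; these probabilities enter the time-dependent weights of the WC models (Equation 2).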
As in the simulation study, all the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at the age at diagnosis or interview in ULR and CLR.

For each of the four regression models (WC1, WC2, CLR, and ULR), we calculated the relative bias of the regression parameter estimator β^ associated with each exposure variable, as compared with the true effect β of that variable:

(1/1000) Σ_{i=1}^{1000} (β^_i − β) / β,

where β^_i is the parameter estimate of the model based on the i-th case-control dataset (i = 1, …, 1000). To assess whether part of the relative bias originated in the population source data themselves, we also derived the relative bias as compared with the estimated effect β^_Cox of the well-specified time-dependent Cox model using the full population source data:

(1/1000) Σ_{i=1}^{1000} (β^_i − β^_Cox,i) / β^_Cox,i.

We also derived the root mean squared error, RMSE = √[ (β̄^ − β)² + var(β^) ], where β̄^ is the mean of the 1000 parameter estimates β^_i. The empirical relative efficiency of each regression parameter estimator was computed as the ratio of the empirical variance of the Cox model using the full source population data, var(β^_Cox), to the empirical variance of the parameter estimates, var(β^).
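The bias, RMSE, and relative-efficiency criteria defined above can be sketched as follows. The 1000 "estimates" here are simulated stand-ins drawn from normal distributions, not the paper's actual simulation output.

```python
# Hedged sketch of the simulation performance criteria:
# relative bias, RMSE, and relative efficiency vs. the full-population Cox model.
import random
import statistics

random.seed(1)
beta_true = 0.5
# Stand-ins for the 1000 replicate estimates (case-control model vs. full-data Cox):
beta_hat = [random.gauss(beta_true, 0.20) for _ in range(1000)]
beta_cox = [random.gauss(beta_true, 0.10) for _ in range(1000)]

# (1/1000) * sum_i (beta_hat_i - beta) / beta
rel_bias = statistics.fmean((b - beta_true) / beta_true for b in beta_hat)

# RMSE = sqrt( (mean(beta_hat) - beta)^2 + var(beta_hat) )
rmse = ((statistics.fmean(beta_hat) - beta_true) ** 2
        + statistics.pvariance(beta_hat)) ** 0.5

# relative efficiency = var(beta_cox) / var(beta_hat)  (<1: Cox more efficient)
rel_eff = statistics.pvariance(beta_cox) / statistics.pvariance(beta_hat)

print(rel_bias, rmse, rel_eff)
```

With these stand-in distributions the relative efficiency comes out near (0.10/0.20)² = 0.25, mirroring how the paper's efficiency ratios compare estimator variances.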
The average of the 1000 standard errors s_β^ (ASE) was compared to the empirical standard deviation of the 1000 β^ estimates (SDE). We also calculated the coverage probability as the proportion of samples for which the 95% CI of β, β^ ± 1.96 × s_β^, included the true value β.
Abbreviations

ASE: Average standard errors; CI: Confidence interval; CLR: Conditional logistic regression; JEM: Job-exposure matrix; MIE: Mean index of exposure; PM: Pleural mesothelioma; RMSE: Root mean squared error; SDE: Standard deviation of the estimates; ULR: Unconditional logistic regression; WC: Weighted Cox model.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

HG drafted the manuscript, programmed and ran the simulation study, analyzed the case-control data on mesothelioma, and contributed to the interpretation of all the results. AL provided the case-control data on mesothelioma and revised the manuscript. KL drafted and revised the manuscript, designed the simulation study, and supervised HG at all stages. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2288/13/18/prepub
[ "Background", "The regression model and the variance estimators", "The WC model", "The variance estimators", "Simulations", "Overview of the simulation design", "Analytical methods used to analyze the simulated data", "Statistical criteria used to compare the performance of the different estimators", "Simulation results", "Application to occupational exposure to asbestos and pleural mesothelioma", "Data source", "Analytical methods used to analyze the case-control data on pleural mesothelioma", "Results", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Population-based case-control studies are widely used in epidemiology to investigate the association between environmental or occupational exposures over lifetime and the risk of cancer or other chronic diseases. Many of the exposures of interest are protracted, and a large amount of information is often retrospectively collected for each subject about his/her potential past exposure over lifetime. For example, for occupational exposures, the whole occupational history is usually investigated for each subject, and different methods exist to estimate the average dose of exposure at each past job [1-3]. However, only the cumulated estimated dose of exposure at the index age (age at diagnosis for cases and at interview for controls) is usually used in standard logistic regression analyses. Such an approach does not use the (retrospective) dynamic information available on the exposure at different ages during lifetime.\nA time-dependent weighted Cox (WC) model has recently been proposed to incorporate this dynamic information on exposure, in order to more accurately estimate the effect of time-dependent exposures in population-based case-control studies [4]. The WC model consists of using age as the time axis and weighting cases and controls according to their case-control status and the age-conditional probabilities of developing the disease in the source population. The weights proposed in the WC model are therefore time-dependent and estimated from data on the source population. A simulation study showed that the WC model improved the accuracy of the regression parameter estimates of time-dependent exposure variables as compared with standard logistic regression with fixed-in-time covariates [4]. 
However, the average robust sandwich variance estimates based on dfbetas residuals [5] were systematically lower than the empirical variance of the parameter estimates, which resulted in overly narrow confidence intervals (CIs) and low coverage probabilities [4].\nThere is an extensive statistical literature on weighted analyses of cohort sampling designs (see, among many others, [6-10]). A population-based case-control study can be seen as a nested case-control study within the source population of cases and controls, and can therefore fit in this general cohort sampling design framework. Population-based case-control studies can also be seen as surveys with complex selection probabilities [11-14], and this is the general framework that we use in this paper. Specifically, we consider the superpopulation approach developed by Lin [13], who proposed a variance estimator that accounts for the extra variation due to sampling the finite survey population from an infinite superpopulation. As a result, the Lin variance estimator accounts for the random variation from one survey sample to another and from one survey population to another, as opposed to the robust sandwich variance estimator, which accounts only for the random variation from one survey sample to another. In the context of a population-based case-control study, the case-control sample can be considered as the survey sample, the source population as the finite survey population, and the population under study as the infinite superpopulation.\nThe asymptotic properties of the Lin variance estimator have been investigated, and a small simulation study has been conducted to investigate these properties in finite samples [13]. The results indicated that the superpopulation variance estimates were closer to the true variance than the robust sandwich variance estimates. 
However, the simulation study considered only fixed-in-time covariates and simple selection probabilities that did not reflect the more complex sampling scheme of population-based case-control studies. It is therefore unclear how the superpopulation variance estimator would perform for the estimation of the effects of time-dependent covariates using the specific estimated time-dependent weights proposed in the WC model [4]. In addition, for further applications to population-based case-control data, it would be important to clarify the performance of the WC model, as compared with standard logistic regression analyses, for estimating the effects of several correlated temporal patterns of protracted exposures. Indeed, the effects of temporal patterns of exposures such as intensity, duration, age at first exposure, and time since last exposure are often of great interest from an epidemiological point of view [15], but they need to be carefully adjusted for each other to avoid residual confounding [16]. Such adjustment induces correlation between covariates and it is important to investigate how it affects the proposed estimators.\nThe first objective of the present study is to investigate through extensive simulations the accuracy of the Lin variance estimator for estimating the effects of time-varying covariates in case-control data, using the weights proposed in the WC model [4]. The second objective is to compare the estimates from the WC model and standard logistic regression for estimating the effects of selected correlated temporal aspects of exposure with detailed information on exposure history. The next section introduces the WC model and the robust and Lin’s variance estimators. 
The different approaches are then compared through simulations and using data from a large population-based case-control study on occupational exposure to asbestos and pleural mesothelioma (PM).", " The WC model The Cox proportional hazards model specifies the hazard function as\n\n\lambda(t \mid x(t)) = \lambda_0(t) \exp\{x(t)'\beta\},\n\nwhere \lambda_0 is the baseline hazard, x(t) is the vector of observed covariate values at time t, and \beta is the vector of unknown regression parameters. In the context of a population-based survey with complex sampling design [5], the estimator of \beta is the solution of the pseudo-maximum likelihood equation\n\n(1) U(\beta) = \sum_{i=1}^{n} \omega_i \delta_i \left\{ x_i(t_i) - \frac{\hat{S}^{(1)}(t_i, \hat{\beta})}{\hat{S}^{(0)}(t_i, \hat{\beta})} \right\} = 0,\n\nwhere n is the sample size, \omega_i is the sampling weight for subject i, \delta_i = 1 if subject i is the case diagnosed at age t_i and 0 otherwise, and\n\n\hat{S}^{(0)}(t, \hat{\beta}) = \sum_{j=1}^{n} \omega_j Y_j(t) \exp\{x_j(t)'\hat{\beta}\},\n\hat{S}^{(1)}(t, \hat{\beta}) = \sum_{j=1}^{n} \omega_j Y_j(t) x_j(t) \exp\{x_j(t)'\hat{\beta}\},\n\nwith Y_j(t) = 1 if subject j is at risk at time t (i.e. t_j \geq t), and 0 otherwise.\nIn the WC model proposed for case-control data [4], t is age and the sampling weight \omega of each subject depends on age and on his case-control status. 
Specifically, the weight for each subject i at age t is given by\n\n(2) \omega_i(t) = \begin{cases} \dfrac{1 - \pi(t)}{\pi(t)} \times \dfrac{n_{\mathrm{cases}}(t)}{n_{\mathrm{controls}}(t)} & \text{if subject } i \text{ is a control selected at age } t \text{ or at a later age} \\ 1 & \text{if subject } i \text{ is a case diagnosed at age } t \text{ or at a later age,} \end{cases}\n\nwhere \pi(t) is the probability of developing the disease at age t or at a later age in the source population, n_{\mathrm{cases}}(t) is the number of cases diagnosed at age t or at a later age in the case-control study, and n_{\mathrm{controls}}(t) is the number of controls selected at age t or at a later age in the case-control study as well. If the WC model is used to analyze data from a nested case-control study, the age-conditional probabilities \pi(t) in Equation (2) can be estimated directly from the fully enumerated cohort. Left-truncation at the age at entry into the cohort should be performed to account for delayed entry [17]. If the WC model is used to analyze population-based case-control data, \pi(t) can be estimated from health statistics on the population under study, as shown in our application on PM in the section following the simulations. The weights equal 1 for cases because all the eligible cases of the source population (or of the cohort) are usually included in the case-control study. 
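As a small numeric sketch of Equation (2) (the probabilities and counts below are hypothetical, not data from the paper):

```python
# Sketch of the time-dependent weight in Equation (2) of the WC model.
# pi_t: probability of developing the disease at age t or later in the source population;
# n_cases_t / n_controls_t: cases diagnosed / controls selected at age t or later.

def wc_weight(is_case, pi_t, n_cases_t, n_controls_t):
    if is_case:
        return 1.0  # all eligible cases of the source population are assumed included
    return (1.0 - pi_t) / pi_t * n_cases_t / n_controls_t

# A control at an age where pi(t) = 0.01, with 50 cases and 100 controls remaining:
w = wc_weight(False, 0.01, 50, 100)  # (0.99 / 0.01) * (50 / 100) = 49.5
```

The rare-disease factor (1 − π(t))/π(t) dominates the weight, so controls stand in for the many disease-free subjects of the source population.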
If the sampling probabilities of cases do not equal 1, then the weights in Equation (2) should be adjusted accordingly.\nThe weights defined in Equation (2) can be implemented in any statistical software that handles time-dependent weights in the Cox model, such as the coxph function in R or SAS PROC PHREG.\n The variance estimators The robust sandwich variance estimator for \hat{\beta} in Equation (1), as proposed by Binder [5] for finite population-based surveys, is given by\n\n(3) \hat{V}_1(\hat{\beta}) = I^{-1}(\hat{\beta}) \left[ \sum_{i=1}^{n} \{\omega_i \hat{u}_i(\hat{\beta})\}^{\otimes 2} \right] I^{-1}(\hat{\beta}),\n\nwhere I(\hat{\beta}) is the observed information matrix obtained by evaluating \partial \hat{U}(\beta)/\partial \beta at \beta = \hat{\beta}, a^{\otimes 2} = aa', and\n\n(4) \hat{u}_i(\hat{\beta}) = \delta_i \left\{ x_i(t_i) - \frac{\hat{S}^{(1)}(t_i, \hat{\beta})}{\hat{S}^{(0)}(t_i, \hat{\beta})} \right\} - \sum_{j=1}^{n} \delta_j \omega_j \frac{Y_i(t_j) \exp\{x_i(t_j)'\hat{\beta}\}}{\hat{S}^{(0)}(t_j, \hat{\beta})} \times \left\{ x_i(t_j) - \frac{\hat{S}^{(1)}(t_j, \hat{\beta})}{\hat{S}^{(0)}(t_j, \hat{\beta})} \right\}.\n\nThe robust variance estimator in Equation (3) can be rewritten as \hat{V}_1(\hat{\beta}) = D'D, where D is the vector of dfbetas residuals [18] from the Cox model including the weights \omega, which may depend on time as those defined in Equation (2), as suggested by Barlow [19]. As indicated by Therneau and Li [20] and by Barlow et al. 
[21], the robust sandwich variance estimate from Equation (3) can be obtained directly in R using the commands\nM1 <- coxph(Surv(start, stop, event) ~ x + cluster(id), weights = weight)\nV1 <- M1$var\nwith the vector of weights derived from Equation (2) for the WC model.\nThe robust variance estimator \hat{V}_1(\hat{\beta}) accounts for the variability due to sampling the case-control sample from the source population. To account for the extra variability due to sampling the source population from the (infinite) superpopulation, we propose to use the Lin variance estimator [13], which turns out to consist of adding the naive variance estimator to the robust variance estimator \hat{V}_1(\hat{\beta}). The Lin superpopulation variance estimator is thus given by\n\n(5) \hat{V}_2(\hat{\beta}) = \hat{V}_1(\hat{\beta}) + I^{-1}(\hat{\beta}).\n\nIn R, the superpopulation variance estimate from Equation (5) can simply be obtained using the command\nV2 <- V1 + M1$naive.var\nThroughout this paper, the WC model using the robust variance estimator \hat{V}_1(\hat{\beta}) in Equation (3) will be denoted by WC1, while the WC model using Lin's superpopulation variance estimator \hat{V}_2(\hat{\beta}) in Equation (5) will be denoted by WC2. 
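Numerically, Equation (5) just adds the model-based (naive) covariance matrix to the robust one, so WC2 standard errors are always at least as large as WC1's. A small sketch with made-up 2×2 matrices standing in for R's M1$var and M1$naive.var (Python used purely for illustration):

```python
from math import sqrt

# Made-up covariance matrices, not values from the paper:
V1 = [[0.040, 0.005],
      [0.005, 0.090]]    # robust (sandwich) covariance, Equation (3)
naive = [[0.010, 0.001],
         [0.001, 0.020]]  # naive covariance, i.e. the inverse observed information

# Lin's superpopulation covariance, Equation (5): V2 = V1 + I^{-1}(beta-hat)
V2 = [[V1[r][c] + naive[r][c] for c in range(2)] for r in range(2)]

se_wc1 = [sqrt(V1[k][k]) for k in range(2)]  # WC1 standard errors
se_wc2 = [sqrt(V2[k][k]) for k in range(2)]  # WC2 standard errors, element-wise larger
```

Since the naive matrix has non-negative diagonal entries, the WC2 confidence intervals are systematically wider than the WC1 intervals for the same point estimates.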
While the WC1 and WC2 models give identical estimated exposure effects, they yield different standard errors and thus different CIs.", " Overview of the simulation design The main objective of the simulation study was to evaluate the performance of Lin's superpopulation variance estimator \hat{V}_2(\hat{\beta}) in Equation (5) with the time-dependent weights defined in Equation (2), for the estimation of the effects of time-varying exposures in case-control studies. In particular, we compared the coverage probability of the 95% CI resulting from the WC2 model with those of the WC1 model and standard logistic regression. We were specifically interested in the effects of exposure intensity, duration, age at first exposure, and time since last exposure. These inter-related aspects of exposure are of interest in many epidemiological applications but raise analytical issues because of correlation and time-dependency.\nWe generated 1000 source populations of 1000 or 5000 individuals each, and within each source population, we simulated a case-control study. The age at event for each subject in each source population was generated from a standard Cox model with time-dependent covariates, using a permutation algorithm described elsewhere and assuming a Weibull marginal distribution of age at event [4,22,23]. Three Cox models of interest from an epidemiological point of view were simulated. Model 1 included intensity and duration of exposure only. 
Model 2 included age at first exposure in addition to intensity and duration. Model 3 was similar to Model 2 but used time since last exposure instead of age at first exposure.\nThe distributions of the exposure variables were chosen to be close to the observed distributions of occupational asbestos exposure variables in our case-control data on PM [15], described in the application section. Specifically, the ages at first and at last exposure were generated for all subjects from lognormal distributions. The exposure intensity at each age was generated from a linear function of age. Parameters for the random intercept and slope were chosen such that either 85% of subjects had a constant intensity, 6% a highly increasing, 6% a moderately decreasing, and 3% a moderately increasing intensity over lifetime (Scenario A); or 50% a highly increasing and 50% a moderately decreasing intensity over lifetime (Scenario B). Scenario A reflects our real case-control data on occupational exposure to asbestos. The exposure intensity at each age was represented in all our models by a variable equal to the cumulated intensity at that age divided by the total duration of exposure at that age. This exposure intensity variable is equivalent to the mean index of exposure (MIE) variable introduced in the application section. The exposure intensity, as well as duration and time since last exposure, were time-dependent in all our true Models 1–3. The true effects \beta of each exposure variable in Models 1–3 were fixed to values that ranged from weak to strong: 0.41 to 1.39 for intensity, 0.01 to 0.05 for duration, −0.01 to −0.11 for age at first exposure, and 0.01 to 0.04 for time since last exposure. These \beta values correspond to hazard ratios of 1.5 to 4.0 for one standard deviation (i.e. 1.0 fiber/ml) increase in exposure intensity, hazard ratios of 1.2 to 2.0 for one standard deviation increase (i.e. 
14 years) in duration of exposure, hazard ratios of 0.9 to 0.4 for one standard deviation (i.e. 8 years) increase in age at first exposure, and hazard ratios of 1.2 to 1.8 for one standard deviation (i.e. 14 years) increase in time since cessation of exposure.\nCensoring for age at event in the source population was independently generated from a uniform distribution such that the event rate was about 10% in each source population of 1000 subjects, and 2% in each source population of 5000 subjects. Each subject of the source population who had the event of interest was selected as a case in the case-control dataset. The event rates in the source population thus implied that we had about 100 cases in each case-control dataset. For each case, 1, 2, or 4 controls were randomly selected with replacement among subjects at risk at the case's event age, corresponding to 1:1, 1:2, or 1:4 individual matching on age, respectively. On average, each case-control dataset therefore comprised about 100 cases and 100, 200, or 400 controls.", "The main objective of the simulation study was to evaluate the performance of Lin's superpopulation variance estimator \hat{V}_2(\hat{\beta}) in Equation (5) with the time-dependent weights defined in Equation (2), for the estimation of the effects of time-varying exposures in case-control studies. In particular, we compared the coverage probability of the 95% CI resulting from the WC2 model, as compared to the WC1 model and standard logistic regression. 
We were specifically interested in the effects of exposure intensity, duration, age at first exposure, and time since last exposure. These inter-related aspects of exposure are of interest in many epidemiological applications but raise analytical issues because of their correlation and time-dependency.
We generated 1000 source populations of 1000 or 5000 individuals each and, within each source population, simulated a case-control study.
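The control selection described above, sampling with replacement among subjects still at risk at each case's event age (incidence-density sampling), can be sketched as follows; function and variable names are ours, for illustration only:

```python
import random

def sample_controls(event_age, subjects, n_controls, rng):
    """Pick n_controls controls, with replacement, among subjects still
    at risk at the case's event age. `subjects` maps subject id to the
    subject's age at event or censoring."""
    at_risk = [sid for sid, end_age in subjects.items() if end_age >= event_age]
    return [rng.choice(at_risk) for _ in range(n_controls)]

rng = random.Random(0)
subjects = {1: 52.0, 2: 61.5, 3: 43.2, 4: 70.0}
controls = sample_controls(50.0, subjects, 2, rng)  # 1:2 matching on age
print(controls)  # each control is a subject at risk at age 50 (id 1, 2, or 4)
```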
Each case-control sample was analyzed using four regression models (the WC1 and WC2 models and two standard logistic regression models) that were correctly specified in terms of the exposure variables included. In the WC1 and WC2 models, the exposure variables were time-dependent, and the probability π(t) was the proportion of subjects in the source population who had an event at age t or later among those at risk at age t. We assumed that all subjects of the source population were followed up from birth, so that age at event did not have to be left-truncated in WC1 and WC2. For comparison purposes, conditional logistic regression (CLR) was used as the standard analytical method for individually matched case-control studies. Unconditional logistic regression (ULR), including age as a continuous covariate in addition to the exposure variables, was also used as a standard alternative analytical approach. For both ULR and CLR, the time-dependent covariates were fixed at their observed value at the age at event for cases or at selection for controls. Because controls were selected among subjects at risk at the age at which each case occurred, the exponentials of the regression parameter estimates can all be interpreted as estimates of rate ratios in the source population [24].
 Statistical criteria used to compare the performance of the different estimators 
For each of the four regression models WC1, WC2, CLR, and ULR, we calculated the relative bias of the regression parameter estimator $\hat{\beta}$ associated with each exposure variable, as compared with the true effect β of that exposure variable, $\frac{1}{1000}\sum_{i=1}^{1000}\frac{\hat{\beta}_i-\beta}{\beta}$, where $\hat{\beta}_i$ is the parameter estimate of the model fitted to the ith case-control dataset (i = 1, …, 1000).
To check whether the relative bias was partly due to bias already present in the source population data, we also computed the relative bias with respect to the estimated effect $\hat{\beta}_{\mathrm{Cox}}$ of the correctly specified time-dependent Cox model fitted to the full source population data, $\frac{1}{1000}\sum_{i=1}^{1000}\frac{\hat{\beta}_i-\hat{\beta}_{\mathrm{Cox},i}}{\hat{\beta}_{\mathrm{Cox},i}}$. We also computed the root mean squared error (RMSE), $\sqrt{(\bar{\hat{\beta}}-\beta)^2+\operatorname{var}(\hat{\beta})}$, where $\bar{\hat{\beta}}$ is the mean of the 1000 parameter estimates $\hat{\beta}_i$. The empirical relative efficiency of each regression parameter estimator was computed as the ratio of the empirical variance of the Cox model estimates based on the full source population data, $\operatorname{var}(\hat{\beta}_{\mathrm{Cox}})$, to the empirical variance of the parameter estimates, $\operatorname{var}(\hat{\beta})$. The average of the 1000 standard errors $s(\hat{\beta})$ (ASE) was compared to the empirical standard deviation of the 1000 $\hat{\beta}$ estimates (SDE). We also calculated the coverage probability as the proportion of samples for which the 95% CI of β, $\hat{\beta}\pm 1.96\times s(\hat{\beta})$, included the true value β.
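The performance criteria just described can be computed with a short routine like the following (a sketch with made-up inputs; the symbols match those in the text):

```python
import math
import statistics

def performance(beta_hats, ses, beta_true):
    """Relative bias, RMSE, ASE/SDE ratio, and 95% CI coverage over replications."""
    n = len(beta_hats)
    rel_bias = sum((b - beta_true) / beta_true for b in beta_hats) / n
    mean_b = statistics.mean(beta_hats)
    var_b = statistics.variance(beta_hats)   # empirical variance of estimates
    rmse = math.sqrt((mean_b - beta_true) ** 2 + var_b)
    ase = statistics.mean(ses)               # average estimated standard error
    sde = statistics.stdev(beta_hats)        # empirical SD of the estimates
    coverage = sum(abs(b - beta_true) <= 1.96 * s
                   for b, s in zip(beta_hats, ses)) / n
    return {"rel_bias": rel_bias, "rmse": rmse,
            "ase_over_sde": ase / sde, "coverage": coverage}
```

With 1000 replications, `beta_hats` and `ses` would hold the 1000 estimates and their standard errors from one analytical method, and an ASE/SDE ratio near 1 with coverage near 0.95 would indicate a well-calibrated variance estimator.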
 Simulation results 
Table 1 shows the results of the four analytical methods (WC1, WC2, CLR, ULR) for strong effects of exposure intensity and duration in Model 1. Table 2 shows the results for strong effects of (i) intensity, duration, and age at first exposure in Model 2, and (ii) intensity, duration, and time since cessation in Model 3. The results tended to be similar for weaker effects.
Simulation results for Model 1 for 1:1, 1:2, or 1:4 matched case-control data including about 100 cases arising from populations of 1000 or 5000 subjects, based on 1000 replications
(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).
(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression matched on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.
(c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full source population data. Each of these two biases was the same for WC1 and WC2 since these models used the same regression parameter estimator $\hat{\beta}$.
(d) Relative efficiency as compared to the Cox model estimated on the full source population.
This quantity was the same for WC1 and WC2 since these models used the same regression parameter estimator $\hat{\beta}$.
(e) RMSE, root mean squared error (the same for WC1 and WC2, which used the same regression parameter estimator $\hat{\beta}$); ASE, average of the 1000 standard errors $s(\hat{\beta})$; SDE, empirical standard deviation of the 1000 $\hat{\beta}$ estimates; cov. rate, coverage rate of the 95% confidence interval of $\hat{\beta}$.
Simulation results for Models 2 and 3 for 1:1 matched case-control data including about 100 cases arising from a population of 1000 subjects, based on 1000 replications
(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).
(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression matched on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.
(c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full source population data. Each of these two biases was the same for WC1 and WC2 since these models used the same regression parameter estimator $\hat{\beta}$.
(d) Relative efficiency as compared to the Cox model estimated on the full source population. This quantity was the same for WC1 and WC2 since these models used the same regression parameter estimator $\hat{\beta}$.
(e) RMSE, root mean squared error (the same for WC1 and WC2); ASE, average of the 1000 standard errors $s(\hat{\beta})$; SDE, empirical standard deviation of the 1000 $\hat{\beta}$ estimates; cov. rate, coverage rate of the 95% confidence interval of $\hat{\beta}$.
As suggested by the ratio ASE/SDE, the superpopulation variance estimator (WC2) tended to give estimates closer to the true variance than the robust variance estimator (WC1), which systematically underestimated the true variance. Although the superpopulation variance estimator tended to overestimate the true variance for the effect of exposure intensity when the population comprised only 1000 subjects (Tables 1 and 2), the coverage rates from WC2 were systematically much closer to the nominal 95% level than those from WC1. For each intensity-pattern scenario (A or B), the ratio ASE/SDE and the coverage rate for the effects of intensity and duration were similar in Models 2–3 and in Model 1 (Table 2 versus Table 1), suggesting that additional adjustment for correlated covariates does not affect the performance of the different variance estimators.
While the relative biases from all analytical models (WC, ULR, and CLR) tended to be low and of the same magnitude in all scenarios, the relative efficiency compared to the Cox model estimated on the full source population, as well as the accuracy in terms of RMSE, tended to differ. Indeed, in all scenarios with a 1:1 case:control ratio within a source population of 1000 subjects, the regression coefficient estimator from the WC models was much more efficient, and thus also more accurate, than those from CLR and ULR (Tables 1 and 2). As expected, the relative efficiency of all models estimated using 100 cases and 100 controls, compared to the Cox model estimated on the full source population, decreased when the population size increased. For example, the relative efficiency of the WC for intensity with pattern B decreased from 0.59 to 0.20 when the population size increased from 1000 to 5000 subjects (Table 1).
Also as expected, increasing the number of controls from 100 to 200 or 400 for a given population size (5000 in Table 1) strongly increased the relative efficiency of ULR and CLR but only moderately increased that of the WC models. For example, the relative efficiency for intensity with pattern B increased from 0.10 to 0.36 for CLR but only from 0.20 to 0.37 for the WC model (Table 1). Because the WC model already used controls at ages other than the one at which they were selected in the 1:1 scenario, the additional controls in the 1:2 or 1:4 case:control ratio scenarios added relatively less information to this model than to ULR and CLR. As a result, ULR and CLR became more accurate in terms of RMSE than the WC models when four controls were selected for each case.
Interestingly, CLR did not perform better than ULR in terms of either bias or RMSE, despite the individual matching of cases and controls. ULR was actually systematically more efficient than CLR. This result is consistent with our previous finding that CLR may have difficulty separating the effects of correlated time-dependent variables [23]. Indeed, the correlations between each pair of the four exposure variables (intensity, duration, age at first exposure, and time since last exposure), as well as with age at the index date, ranged between −0.679 and +0.453. The correlation also affected the WC and ULR parameter estimators, as suggested by the slightly higher RMSE in Models 2 and 3 (Table 2) than in Model 1 (Table 1) for the effects of intensity and duration, but it affected them less than the CLR estimator.
 Application to occupational exposure to asbestos and pleural mesothelioma 
Mesothelioma is a rare tumor mostly located in the pleura and usually caused by exposure to asbestos. The role of the different temporal patterns of occupational exposure to this substance has still to be explored using appropriate statistical methods accounting for individual changes over time in the exposure intensity [15].
It is therefore of interest to apply the proposed estimators to estimate the mutually adjusted effects of exposure intensity, duration, age at first exposure, and time since last exposure, and to compare the results with those from standard logistic regression analyses, which do not dynamically account for within-subject changes in exposure intensity over time.
 Data source 
The data came from a large French population-based case-control study described in Lacourt et al. [15]. Cases were selected from a French case-control study conducted in 1987–1993 and from the French National Mesothelioma Surveillance Program in 1998–2006. Population controls were frequency matched to cases by sex and year of birth within 5-year groups. Occupational asbestos exposure was evaluated for each subject with a job-exposure matrix (JEM), which allowed us to derive the mean index of exposure (MIE) used in the regression models to represent intensity of exposure, as in Lacourt et al. [15].
The MIE at age t was given by

$$\mathrm{MIE}(t) = \frac{\sum_{l=1}^{L} d_l \times p_l \times \left( f_{sl} \times i_{sl} + f_{al} \times i_{al} \right)}{\sum_{l=1}^{L} d_l}$$

where L is the total number of jobs exposed to asbestos up to age t; $d_l$ the duration (in years) of job l; $p_l$ the probability of asbestos exposure for job l; $f_{sl}$ and $i_{sl}$ the frequency and intensity of asbestos exposure due to the specific tasks of job l, respectively; and $f_{al}$ and $i_{al}$ the frequency and intensity of asbestos exposure due to contamination of the work environment in job l, respectively. For each job, the probability was derived from the percentage of workers exposed in the considered job code, the frequency from the percentage of work time, and the intensity from the concentration of asbestos fibers in the air, expressed as fibers per milliliter (f/ml). See Lacourt et al. [15] for more details. A subject ever exposed to asbestos was one who had at least one job with a probability $p_l$ different from zero.
Because our objective was to accurately investigate the effects of the quantitative time-related aspects of occupational exposure, all analyses were restricted to subjects ever exposed to asbestos (68.9% of males and 20.9% of females). In addition, because the sample size for females was too small to ensure adequate statistical power and accurate estimates in separate multiple regression analyses of this group [15], the analyses were restricted to males ever exposed to asbestos, i.e. 1041 male cases and 1425 male controls. The distributions of age and of the asbestos exposure characteristics at the time of diagnosis for cases and of interview for controls are shown in Table 3.
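A minimal sketch of the MIE computation defined above, using one hypothetical job record (the function name and the numbers are ours, for illustration only):

```python
def mean_index_of_exposure(jobs):
    """MIE over the asbestos-exposed jobs held up to a given age.
    Each job is (d, p, f_s, i_s, f_a, i_a): duration in years, probability
    of exposure, and frequency/intensity of exposure for specific tasks
    and for work-environment contamination, respectively."""
    numerator = sum(d * p * (f_s * i_s + f_a * i_a)
                    for d, p, f_s, i_s, f_a, i_a in jobs)
    duration = sum(d for d, *_ in jobs)
    return numerator / duration if duration > 0 else 0.0

# One hypothetical job: 10 years, certain exposure, specific tasks during
# half the work time at 2.0 f/ml, ambient contamination during a quarter
# of the work time at 1.0 f/ml.
print(mean_index_of_exposure([(10, 1.0, 0.5, 2.0, 0.25, 1.0)]))  # 1.25
```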
The distribution of the patterns of intensity over lifetime was similar to that described in Scenario A of the simulation, with 85% of subjects having an almost constant asbestos exposure intensity over lifetime.
Mean and standard deviation of age and asbestos exposure variables at the time of diagnosis/interview for ever exposed males
Results from the French case-control study on mesothelioma, 1987–2006.
(a) Measured by the mean index of exposure (MIE).
 Analytical methods used to analyze the case-control data on pleural mesothelioma 
To derive the weights proposed for the WC models (Equation 2), we first estimated the age-conditional probabilities π(t) of developing PM in the French male general population. These probabilities were derived from published sex- and age-specific incidence rates of PM per 100,000 person-years in France in 2005 [25]. We assumed that these estimated incidence rates applied to our source population and were appropriate over the whole life of our subjects. The results are shown in Table 4 for males.
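The conversion from age-specific incidence rates p(t) to age-conditional probabilities π(t) = 1 − ∏_{l≥t}(1 − p(l)) can be sketched as follows; the rates below are made-up placeholders, not the published French 2005 estimates:

```python
def age_conditional_probabilities(rates_per_100k):
    """pi(t) = 1 - prod_{l >= t} (1 - p(l)), computed in one backward pass.
    rates_per_100k[t] is the incidence rate at age t per 100,000 person-years."""
    p = [r / 100_000 for r in rates_per_100k]
    pi = [0.0] * len(p)
    surv = 1.0  # running product of (1 - p(l)) for l >= t
    for t in range(len(p) - 1, -1, -1):
        surv *= 1.0 - p[t]
        pi[t] = 1.0 - surv
    return pi

# Placeholder rates for ages 0..4 (per 100,000 person-years).
pi = age_conditional_probabilities([0.0, 0.5, 1.0, 2.0, 3.0])
# pi is non-increasing in age: developing the disease at age t or later
# is at least as likely as developing it at age t+1 or later.
assert all(pi[t] >= pi[t + 1] for t in range(len(pi) - 1))
```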
As in the simulation study, standard errors for the WC model were then derived using the two variance estimators $\hat{V}_1(\hat{\beta})$ and $\hat{V}_2(\hat{\beta})$, resulting in the WC1 and WC2 models, respectively.
Estimated male age-conditional probabilities used in the weights of the WC models to analyze the French case-control study on mesothelioma
(a) p(t) are estimated male age-specific incidence rates of pleural mesothelioma per 100,000 person-years in France in 2005 [25].
(b) π(t) are estimated male age-conditional probabilities of developing pleural mesothelioma within the residual lifetime after age t, calculated as $\pi(t)=1-\prod_{l\ge t}\left(1-p(l)\right)$.
For comparison purposes, the data were further analyzed with ULR, which is the standard method for analyzing frequency matched case-control data, as well as with CLR. Age was the time axis in the WC1 and WC2 models, and a continuous covariate in ULR and CLR. We did not apply left-truncation in the WC1 and WC2 models, thus assuming that all subjects of the source population were passively followed up for PM from birth. The matching factor, birth year, was a quantitative covariate in WC1, WC2, and ULR, and the stratification variable (in 5-year groups) in CLR. Using each of the four approaches (WC1, WC2, CLR, and ULR), we estimated the effects of intensity and duration of occupational asbestos exposure, age at first exposure, and time since last exposure, using the same combinations of quantitative exposure variables as in Models 1–3 of the simulation study. The effects of all these variables were therefore assumed to be linear. Although our recent results suggested that these effects are not linear on the logit of PM [15], we used quantitative variables to facilitate the comparison of estimates across the four analytical approaches. The resulting estimates should therefore be used only for methodological comparison purposes and not as substantive epidemiological results.
As in the simulation study, all the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at the age at diagnosis or interview for ULR and CLR.", "For each of the four regression models WC1, WC2, CLR, and ULR, we calculated the relative bias of the regression parameter estimator β̂ associated with each exposure variable, as compared with the true effect β of that exposure variable, (1/1000) ∑_{i=1}^{1000} (β̂_i − β)/β, where β̂_i is the parameter estimate of the model based on the ith case-control dataset (i = 1, …, 1000). To evaluate whether the relative bias was partly due to a bias already present in the source population data, we also derived the relative bias as compared with the estimated effect β̂_Cox of the well-specified time-dependent Cox model using the full source population data, (1/1000) ∑_{i=1}^{1000} (β̂_i − β̂_Cox,i)/β̂_Cox,i. We also derived the root mean squared error (RMSE), √[(mean(β̂) − β)² + var(β̂)], where mean(β̂) is the mean of the 1000 parameter estimates β̂_i. The empirical relative efficiency of each regression parameter estimator was computed as the ratio of the empirical variance of the Cox model using the full source population data, var(β̂_Cox), to the empirical variance of the parameter estimates, var(β̂).
The average of the 1000 standard errors s(β̂) (ASE) was compared to the empirical standard deviation of the 1000 β̂ estimates (SDE). We also calculated the coverage probability as the proportion of samples for which the 95% CI of β, β̂ ± 1.96 × s(β̂), included the true value β.", "Table 1 shows the results of the four analytical methods (WC1, WC2, CLR, ULR) for strong effects of exposure intensity and duration in Model 1. Table 2 shows the results for strong effects of i) intensity, duration, and age at first exposure in Model 2, and ii) intensity, duration, and time since cessation in Model 3. The results tended to be similar for weaker effects.\nSimulation results for Model 1 for 1:1, 1:2, or 1:4 matched case-control data including about 100 cases arising from populations of 1000 or 5000 subjects, based on 1000 replications\n(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).\n(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.\n(c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full source population data. Each of these two biases was the same for WC1 and WC2 since these models used the same regression parameter estimator β̂.\n(d) Relative efficiency as compared to the Cox model estimated on the full source population.
This quantity was the same for WC1 and WC2 since these models used the same regression parameter estimator β̂.\n(e) RMSE, root mean squared error (the same for WC1 and WC2, which used the same regression parameter estimator β̂); ASE, average of the 1000 standard errors s(β̂); SDE, empirical standard deviation of the 1000 β̂ estimates; cov. rate, coverage rate of the 95% confidence interval of β̂.\nSimulation results for Models 2 and 3 for 1:1 matched case-control data including about 100 cases arising from a population of 1000 subjects, based on 1000 replications\n(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).\n(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.\n(c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full source population data. Each of these two biases was the same for WC1 and WC2 since these models used the same regression parameter estimator β̂.\n(d) Relative efficiency as compared to the Cox model estimated on the full source population. This quantity was the same for WC1 and WC2 since these models used the same regression parameter estimator β̂.\n(e) RMSE, root mean squared error (the same for WC1 and WC2, which used the same regression parameter estimator β̂); ASE, average of the 1000 standard errors s(β̂); SDE, empirical standard deviation of the 1000 β̂ estimates; cov.
rate, coverage rate of the 95% confidence interval of β̂.\nAs suggested by the ratio ASE/SDE, the superpopulation variance estimator (WC2) tended to give estimates closer to the true variance than the robust variance estimator (WC1), which systematically under-estimated the true variance. Although the superpopulation variance estimator tended to overestimate the true variance for the effect of exposure intensity when the population comprised only 1000 subjects (Tables 1 and 2), the coverage rates from WC2 were systematically much closer to the nominal level of 95% than those from the WC1 model. For each scenario of intensity pattern (Scenario A or B), the ratio ASE/SDE and the coverage rate for the effects of intensity and duration were similar in Models 2–3 and in Model 1 (Table 2 versus Table 1), suggesting that additional adjustment for correlated covariates does not affect the performance of the different variance estimators.\nWhile the relative biases from all analytical models (WC, ULR, and CLR) tended to be low and of the same magnitude in all scenarios, the relative efficiency as compared to the Cox model estimated on the full source population, as well as the accuracy in terms of RMSE, tended to differ. Indeed, in all scenarios with a 1:1 case:control ratio within a source population of 1000 subjects, the regression coefficient estimator from the WC models was much more efficient, and thus also more accurate, than that from CLR and ULR (Tables 1 and 2). As expected, the relative efficiency of all models estimated using 100 cases and 100 controls, as compared to the Cox model estimated on the full source population, decreased when the population size increased. For example, the relative efficiency of the WC model for intensity with pattern B decreased from 0.59 to 0.20 when the population size increased from 1000 to 5000 subjects (Table 1).
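The evaluation criteria reported in Tables 1 and 2 (relative bias, RMSE, relative efficiency, ASE, SDE, and coverage rate) can be computed from the replicate estimates as in this minimal Python sketch; the function name, argument names, and the toy two-replicate example are ours for illustration, not results from the paper:

```python
from statistics import mean, stdev

def simulation_metrics(beta_hats, ses, beta_true, var_cox):
    """Summarize replicate estimates beta_hats (with model-based standard
    errors ses) against the true effect beta_true and the empirical variance
    var_cox of the Cox estimator fitted on the full source population."""
    m = mean(beta_hats)
    sde = stdev(beta_hats)              # empirical SD of the estimates (SDE)
    var_hat = sde ** 2
    return {
        "rel_bias": mean((b - beta_true) / beta_true for b in beta_hats),
        "rmse": ((m - beta_true) ** 2 + var_hat) ** 0.5,
        "rel_eff": var_cox / var_hat,   # relative efficiency vs. full-data Cox
        "ase": mean(ses),               # average model-based standard error (ASE)
        "sde": sde,
        "coverage": mean(               # share of 95% CIs covering beta_true
            1.0 if b - 1.96 * s <= beta_true <= b + 1.96 * s else 0.0
            for b, s in zip(beta_hats, ses)
        ),
    }

# toy illustration with two replicates (real use: 1000 replicates per scenario)
metrics = simulation_metrics([0.9, 1.1], [0.1, 0.1], beta_true=1.0, var_cox=0.01)
```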
As expected as well, increasing the number of controls from 100 to 200 or 400 for a given population size (5000 in Table 1) strongly increased the relative efficiency of ULR and CLR, but only moderately increased that of the WC models. For example, the relative efficiency for intensity with pattern B increased from 0.10 to 0.36 for CLR, but only from 0.20 to 0.37 for the WC model (Table 1). Because the WC model already used the controls at ages other than the ages at which they were selected in the 1:1 case-control scenario, using additional controls in the 1:2 or 1:4 case:control ratio scenarios added relatively less information to this model than to ULR and CLR. As a result, ULR and CLR became more accurate in terms of RMSE than the WC models when four controls were selected for each case.\nInterestingly, despite the individual matching of cases and controls, CLR did not perform better than ULR in terms of either bias or RMSE. ULR was actually systematically more efficient than CLR. This result may be consistent with our previous findings that CLR might have difficulty separating the effects of correlated time-dependent variables [23]. Indeed, the correlations between each pair of the four exposure variables (intensity, duration, age at first exposure, and time since last exposure), as well as with age at the index date, ranged between −0.679 and +0.453. The correlation also affected the WC and ULR parameter estimators, as suggested by the slightly higher RMSE in Models 2 and 3 (Table 2) than in Model 1 (Table 1) for the effects of intensity and duration, but it affected them less than the CLR estimator.", "Mesothelioma is a rare tumor, mostly located in the pleura and usually caused by exposure to asbestos. The role of the different temporal patterns of occupational exposure to this substance still has to be explored using appropriate statistical methods that account for individual changes in exposure intensity over time [15].
It is therefore of interest to apply the proposed estimators to estimate the mutually adjusted effects of exposure intensity, duration, age at first exposure, and time since last exposure, and to compare the results with those from standard logistic regression analyses that do not dynamically account for within-subject changes in exposure intensity over time.", "The data came from a large French population-based case-control study described in Lacourt et al. [15]. Cases were selected from a French case-control study conducted in 1987–1993 and from the French National Mesothelioma Surveillance Program in 1998–2006. Population controls were frequency matched to cases by sex and year of birth within 5-year groups. Occupational asbestos exposure was evaluated for each subject with a job-exposure matrix (JEM), which allowed us to derive the mean index of exposure (MIE) that was used in the regression models to represent intensity of exposure, as in Lacourt et al. [15]. The MIE at age t was given by\n\nMIE(t) = [ ∑_{l=1}^{L} d_l × p_l × (f_sl × i_sl + f_al × i_al) ] / ∑_{l=1}^{L} d_l\n\nwhere L is the total number of jobs exposed to asbestos up to age t; d_l the duration (in years) of job l; p_l the probability of asbestos exposure for job l; f_sl and i_sl the frequency and intensity, respectively, of asbestos exposure due to the specific tasks of job l; and f_al and i_al the frequency and intensity, respectively, of asbestos exposure due to contamination of the work environment in job l. For each job, the probability was derived from the percentage of workers exposed in the considered job code, the frequency from the percentage of work time, and the intensity from the concentration of asbestos fibers in the air, expressed in fibers per milliliter (f/ml). See Lacourt et al. [15] for more details.
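The MIE above is a duration-weighted average over the asbestos-exposed jobs held up to age t. A minimal Python sketch (the Job record, its field names, and the toy job history are ours for illustration, not values from the JEM):

```python
from dataclasses import dataclass

@dataclass
class Job:
    d: float    # d_l: duration of the job, in years
    p: float    # p_l: probability of asbestos exposure
    f_s: float  # f_sl: frequency of exposure due to specific tasks
    i_s: float  # i_sl: intensity (fibers/ml) of exposure due to specific tasks
    f_a: float  # f_al: frequency of exposure due to workplace contamination
    i_a: float  # i_al: intensity (fibers/ml) of workplace contamination

def mie(jobs):
    """MIE(t) = sum_l d_l * p_l * (f_sl*i_sl + f_al*i_al) / sum_l d_l,
    over the L asbestos-exposed jobs held up to age t."""
    total_duration = sum(j.d for j in jobs)
    if total_duration == 0.0:
        return 0.0
    weighted = sum(j.d * j.p * (j.f_s * j.i_s + j.f_a * j.i_a) for j in jobs)
    return weighted / total_duration

# illustrative history: two 10-year jobs with different probabilities/frequencies
history = [Job(d=10, p=1.0, f_s=0.5, i_s=2.0, f_a=0.0, i_a=0.0),
           Job(d=10, p=0.5, f_s=1.0, i_s=2.0, f_a=0.0, i_a=0.0)]
```

Evaluating `mie` on the job history truncated at successive ages yields the time-dependent intensity variable used in the WC models.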
A subject ever exposed to asbestos was defined as one who had at least one job with a probability p_l different from zero.\nBecause our objective was to accurately investigate the effects of the quantitative time-related aspects of occupational exposure, all our analyses were restricted to subjects ever exposed to asbestos (68.9% of males and 20.9% of females). In addition, because the sample size for females was too small to ensure adequate statistical power and accurate estimates in separate multiple regression analyses of this group [15], the analyses were restricted to males ever exposed to asbestos, i.e. 1041 male cases and 1425 male controls. The distribution of age and the asbestos exposure characteristics at the time of diagnosis for cases and interview for controls are shown in Table 3. The distribution of the patterns of intensity over lifetime was similar to the one described in Scenario A of the simulation study, with 85% of subjects having an almost constant asbestos exposure intensity over lifetime.\nMean and standard deviation of age and asbestos exposure variables at the time of diagnosis/interview for ever exposed males\nResults from the French case-control study on mesothelioma, 1987–2006.\n(a) Measured by the mean index of exposure (MIE).", "To derive the weights proposed in the WC models (Equation 2), we first estimated the age-conditional probabilities π(t) of developing PM in the French male general population. These estimated probabilities were derived from published estimated sex- and age-specific incidence rates of PM per 100 000 person-years in France in 2005 [25]. We assumed that these estimated incidence rates applied to our source population and that they were appropriate during the whole life of our subjects. The results are shown in Table 4 for males.
", "Table 5 shows the estimated effects of the selected quantitative asbestos exposure variables on the risk of PM, using the four analytical approaches (WC1, WC2, CLR, and ULR) and Models 1–3. The estimated effects are shown in terms of exp(β̂), i.e. estimated hazard ratios for WC1 and WC2 and estimated odds ratios for ULR and CLR. These estimated effects were calculated for an increase of about one standard deviation of each exposure variable, i.e. 1 fiber/ml for asbestos exposure intensity, 14 years for duration, 8 years for age at first exposure, and 14 years for time since last exposure.\nEstimated effect of occupational asbestos exposure in ever exposed males (1041 cases and 1425 controls), using the WC models and logistic regression and assuming linear effects of quantitative exposure variables\nResults from the French case-control study on mesothelioma, 1987–2006.\n(a) All the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at diagnosis/interview in CLR and ULR.
Intensity was measured by the mean index of exposure (MIE).\n(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; both WC1 and WC2 used age as the time axis and included birth year as a quantitative covariate; ULR, unconditional logistic regression including age at diagnosis/interview and birth year as quantitative covariates; CLR, conditional logistic regression stratified on birth year group (5 years) and including age at diagnosis/interview as a quantitative covariate.\n(c) Hazard ratio estimates for WC1 and WC2 (same value for both) and odds ratio estimates for CLR and ULR, adjusted for age and birth year, and the corresponding 95% confidence interval (CI).\nAs expected, the associations between all asbestos exposure variables and PM were significant with each of the four analytical approaches (Table 5). Specifically, increasing intensity or duration significantly increased the risk of PM, whether or not adjusted for age at first exposure or time since last exposure. Because the relative variation in the estimated effect of duration between Model 3 and Model 1 was greater than between Model 2 and Model 1, time since last exposure (in Model 3) seems to be a more important confounder than age at first exposure (in Model 2) in the relation between duration and PM. Estimates from Model 2 suggest that the later a subject is first occupationally exposed to asbestos, the lower his risk of PM. All the estimated effects of time since cessation indicate that the risk continues to increase after cessation of exposure, as in many other studies [15,26,27].\nThe 95% CIs from WC1 and WC2 were almost identical (Table 5), suggesting that the robust variance estimates from WC1 were very close to the superpopulation variance estimates from WC2.
This is likely due to the fact that the disease (PM) was very rare, as shown in Table 3, as opposed to our simulation study, where the overall event rates were about 10% and 2%.\nThe strongest contrasts between the estimates from the WC models and ULR or CLR were for the effect of exposure intensity. Indeed, the estimated effect of intensity was systematically weaker with the WC models than with ULR or CLR, with even non-overlapping 95% CIs. Note that, as for Scenario A in our simulation study, CLR provided the strongest estimates for the strong effect of intensity. By contrast, for the effects of duration, age at first exposure, and time since last exposure, the strongest estimates were provided by the WC models, but the discrepancies with ULR and CLR were weaker than for intensity.\nThere are different potential explanations for the discrepancies between the results of the Cox (WC1 and WC2) and logistic (CLR and ULR) models. First, the adjustment for age was largely different in the two series of models. While age was the time axis in the Cox models, and was therefore adequately adjusted for in both WC1 and WC2, it was included as a continuous covariate in both logistic models. This assumed that its effect was linear on the logit, which is actually not true [15]. Thus there may be some residual confounding by age in both CLR and ULR. Second, because controls of the case-control study on PM were selected from members of the general French population at calendar times that could differ from the period of case recruitment, the case-control odds ratio estimate from ULR and CLR may estimate a different quantity than the hazard ratio estimate from the Cox model. Indeed, the hazard function in the Cox models provides a dynamic description of how the instantaneous risk of getting PM varies over age. The exponential of a regression parameter can be interpreted as a hazard ratio, which is equivalent to the rate ratio that would be obtained from a cohort design.
If the controls of the case-control study on PM had been randomly selected from the members of the population who were at risk at each age at which a case occurred (as in our simulation study), then the estimated odds ratios obtained from ULR and CLR could also be interpreted as rate ratios from a cohort design. However, this was not the way controls were selected in the case-control study on PM, and it is therefore difficult to directly compare odds ratio estimates obtained from ULR and CLR with hazard ratio estimates obtained from WC1 and WC2.", "Our simulation results suggest that the superpopulation variance estimator [13] provides adequate coverage probabilities of the CI when using the time-dependent weights proposed in the WC model to estimate the effects of time-varying exposures in case-control studies. Indeed, our simulations showed much better coverage probabilities for the CIs based on the superpopulation estimator than for those based on the robust variance estimator. However, our application to PM suggests that the two variance estimators give similar 95% CIs when the disease is very rare. This is consistent with the results of Lin [13], who showed that the use of the finite-population variance estimator (i.e. the robust variance) results in reasonable coverage probabilities if the inclusion probabilities are low, but poor coverage probabilities if they are high. It should be noted that both the robust and superpopulation variance estimators are easy to implement in most statistical software.\nOur simulation results also confirmed that the WC model is an alternative method for estimating the effects of time-varying exposure variables in case-control studies.
In particular, when compared to standard logistic regression, which did not dynamically account for the different values of covariates over lifetime, the WC model tended to provide more accurate estimates of the effects of variables for which an important percentage of subjects had time-varying values over lifetime, such as intensity. However, the superiority of the WC model did not persist when more than one control was selected from the risk set. Our results also suggest that the estimates from the WC model are not more affected by correlations between the time-dependent covariates included in the model than logistic regression with fixed-in-time covariates. Note that the modelling of the exposure in the WC model could be further improved by incorporating more complex functions of the exposure trajectory over time that have recently been proposed [28-30].\nApplying the WC model requires estimating the age-conditional probabilities in the source population for population-based case-control studies, or in the full cohort for nested case-control studies. In our application to population-based case-control data on PM, these probabilities were estimated from health statistics on the general French male population. Yet our analyses were restricted to ever exposed males only, who have a much higher probability of developing PM than the general French male population. Further studies are needed to investigate the impact of biased estimates of the age-conditional probabilities on the WC estimates. Accounting for uncertainty in the weight estimates could further improve the variance estimator [31]. In addition, controls in our case-control dataset on PM were frequency matched to cases on birth year. To account for this stratification variable in the design, we included it as a covariate in the WC models.
However, it would be interesting to consider accounting for this frequency matching variable in the weights of the WC models [12], and to investigate the performance of the resulting estimators through simulation of frequency-matched case-control data. This is all the more important given that frequency matching is widely used in population-based case-control studies. It should also be mentioned that, depending on the control selection strategy, hazard ratio estimates from the WC model may not measure the same quantity as odds ratio estimates from logistic regression. While the hazard ratio from the WC model estimates a rate ratio, the odds ratio may estimate another quantity depending on the control selection strategy [24].\nThe WC model with time-dependent variables also requires information on the values of the covariates at each event time, i.e. at each age of diagnosis of the cases. Such information may be missing, and different approaches could be considered to impute these values. However, further studies are needed to assess the impact of measurement errors in the time-dependent covariate values. Indeed, mismodeling the covariates has already been shown to induce bias in the dfbeta-based sandwich variance estimator of the unweighted Cox model for nested case-control analysis [32]. A variance estimator based on Schoenfeld residuals provided better variance estimates under severe model misspecification [32]. It may be of interest to further investigate such an estimator for misspecified time-dependent covariates in the WC model. Some further joint modelling of the WC model and the time-dependent covariate process could also be investigated as an alternative, especially for internal time-dependent exposure variables [33].
However, in most case-control studies on occupational exposures, the occupational history is sufficiently well investigated to allow the elaboration of quite accurate time-dependent covariates, as in our application on asbestos and PM.", "We believe that the WC model using the superpopulation variance estimator may provide a useful alternative analytical method for case-control analyses with detailed information on the history of the exposure of interest, especially if a large proportion of the subjects have a time-varying exposure intensity over lifetime and only one control is available for each case.", "ASE: Average standard errors; CI: Confidence interval; CLR: Conditional logistic regression; JEM: Job-exposure matrix; MIE: Mean index of exposure; PM: Pleural mesothelioma; RMSE: Root mean squared error; SDE: Standard deviation of the estimates; ULR: Unconditional logistic regression; WC: Weighted Cox model.", "The authors declare that they have no competing interests.", "HG drafted the manuscript, programmed and ran the simulation study, analyzed the case-control data on mesothelioma, and contributed to the interpretation of all the results. AL provided the case-control data on mesothelioma and revised the manuscript. KL drafted and revised the manuscript, designed the simulation study, and supervised HG at all stages. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/13/18/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, "results", "discussion", "conclusions", null, null, null, null ]
[ "Case-control study", "Cox model", "Logistic regression", "Time-dependent variables", "Variance estimator", "Occupational exposures", "Environmental exposures", "Superpopulation" ]
Background: Population-based case-control studies are widely used in epidemiology to investigate the association between environmental or occupational exposures over lifetime and the risk of cancer or other chronic diseases. Many of the exposures of interest are protracted, and a huge amount of information is often retrospectively collected for each subject about his/her potential past exposure over lifetime. For example, for occupational exposures, the whole occupational history is usually investigated for each subject, and different methods exist to estimate the average dose of exposure at each past job [1-3]. However, only the cumulative estimated dose of exposure at the index age (age at diagnosis for cases and at interview for controls) is usually used in standard logistic regression analyses. Such an approach does not use the (retrospective) dynamic information available on the exposure at different ages during lifetime. A time-dependent weighted Cox (WC) model has recently been proposed to incorporate this dynamic information on exposure, in order to more accurately estimate the effect of time-dependent exposures in population-based case-control studies [4]. The WC model consists of using age as the time axis and weighting cases and controls according to their case-control status and the age-conditional probabilities of developing the disease in the source population. The weights proposed in the WC model are therefore time-dependent and estimated from data on the source population. A simulation study showed that the WC model improved the accuracy of the regression parameter estimates of time-dependent exposure variables as compared with standard logistic regression with fixed-in-time covariates [4]. However, the average robust sandwich variance estimates based on dfbeta residuals [5] were systematically lower than the empirical variance of the parameter estimates, which resulted in too narrow confidence intervals (CI) and low coverage probabilities [4].
There is an extensive statistical literature on the weighted analyses of cohort sampling designs (see among many others [6-10]). A population-based case-control study can be seen as a nested case-control study within the source population of cases and controls, and can therefore fit in this general cohort sampling design framework. Population-based case-control studies can also be seen as a survey with complex selection probabilities [11-14] and this is the general framework that we use in this paper. Specifically, we consider the superpopulation approach developed by Lin [13] who proposed a variance estimator that accounts for the extra variation due to sampling the finite survey population from an infinite superpopulation. As a result, the Lin variance estimator accounts for the random variation from one survey sample to another and from one survey population to another, as opposed to the robust sandwich variance estimator that accounts only for the random variation from one survey sample to another. In the context of population-based case-control study, the case-control sample could be considered as the survey sample, the source population as the finite survey population, and the population under study as the infinite superpopulation. The asymptotic properties of the Lin variance estimator have been investigated and a small simulation study has been conducted to investigate these properties in finite samples [13]. The results indicated that the superpopulation variance estimates were closer to the true variance than the robust sandwich variance estimates. However, the simulation study considered only fixed-in-time covariates and simple selection probabilities that did not reflect the more complex sampling scheme of population-based case-control studies. 
It is therefore unclear how the superpopulation variance estimator would perform for the estimation of the effects of time-dependent covariates using the specific estimated time-dependent weights proposed in the WC model [4]. In addition, for further applications to population-based case-control data, it would be important to clarify the performance of the WC model, as compared with standard logistic regression analyses, for estimating the effects of several correlated temporal patterns of protracted exposures. Indeed, the effects of temporal patterns of exposures such as intensity, duration, age at first exposure, and time since last exposure are often of great interest from an epidemiological point of view [15], but they need to be carefully adjusted for each other to avoid residual confounding [16]. Such adjustment induces correlation between covariates and it is important to investigate how it affects the proposed estimators. The first objective of the present study is to investigate through extensive simulations the accuracy of the Lin variance estimator for estimating the effects of time-varying covariates in case-control data, using the weights proposed in the WC model [4]. The second objective is to compare the estimates from the WC model and standard logistic regression for estimating the effects of selected correlated temporal aspects of exposure with detailed information on exposure history. The next section introduces the WC model and the robust and Lin’s variance estimators. The different approaches are then compared through simulations and using data from a large population-based case-control study on occupational exposure to asbestos and pleural mesothelioma (PM). 
The regression model and the variance estimators: The WC model

The Cox proportional hazards model specifies the hazard function as $\lambda(t \mid x(t)) = \lambda_0(t) \exp(x(t)'\beta)$, where $\lambda_0$ is the baseline hazard, $x(t)$ is the vector of observed covariate values at time $t$, and $\beta$ is the vector of unknown regression parameters. In the context of a population-based survey with complex sampling design [5], the estimator of $\beta$ is the solution of the pseudo-maximum likelihood equation

(1) $U(\beta) = \sum_{i=1}^{n} \omega_i \delta_i \left[ x_i(t_i) - \frac{\hat{S}^{(1)}(t_i, \hat\beta)}{\hat{S}^{(0)}(t_i, \hat\beta)} \right] = 0,$

where $n$ is the sample size, $\omega_i$ is the sampling weight for subject $i$, $\delta_i = 1$ if subject $i$ is a case diagnosed at age $t_i$ and 0 otherwise, and

$\hat{S}^{(0)}(t, \hat\beta) = \sum_{j=1}^{n} \omega_j Y_j(t) \exp(x_j(t)'\hat\beta), \qquad \hat{S}^{(1)}(t, \hat\beta) = \sum_{j=1}^{n} \omega_j Y_j(t) x_j(t) \exp(x_j(t)'\hat\beta),$

with $Y_j(t) = 1$ if subject $j$ is at risk at time $t$ (i.e. $t_j \geq t$), and 0 otherwise. In the WC model proposed for case-control data [4], $t$ is age and the sampling weight $\omega$ of each subject depends on age and on the subject's case-control status. Specifically, the weight for subject $i$ at age $t$ is given by

(2) $\omega_i(t) = \begin{cases} \dfrac{1 - \pi(t)}{\pi(t)} \times \dfrac{n_{\mathrm{cases}}(t)}{n_{\mathrm{controls}}(t)} & \text{if subject } i \text{ is a control selected at age } t \text{ or at a later age,} \\ 1 & \text{if subject } i \text{ is a case diagnosed at age } t \text{ or at a later age,} \end{cases}$

where $\pi(t)$ is the probability of developing the disease at age $t$ or at a later age in the source population, $n_{\mathrm{cases}}(t)$ is the number of cases diagnosed at age $t$ or at a later age in the case-control study, and $n_{\mathrm{controls}}(t)$ is the number of controls selected at age $t$ or at a later age in the case-control study. If the WC model is used to analyze data from a nested case-control study, the age-conditional probabilities $\pi(t)$ in Equation (2) can be estimated directly from the fully enumerated cohort. Left-truncation at age at entry into the cohort should be performed to account for delayed entry [17].
If the WC model is used to analyze population-based case-control data, $\pi(t)$ can be estimated from health statistics on the population under study, as shown in our application on PM in the section following the simulations. The weights equal 1 for cases because all the eligible cases of the source population (or of the cohort) are usually included in the case-control study. If the sampling probabilities of the cases do not equal 1, the weights in Equation (2) should be adjusted accordingly. The weights defined in Equation (2) can be implemented in any statistical software that handles time-dependent weights in the Cox model, such as the coxph function in R or SAS PROC PHREG.
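The weight computation in Equation (2) can be sketched independently of any survival package. The following is a minimal illustration in Python (rather than R, which the paper uses); the helper `wc_weight` and the constant $\pi(t)$ are hypothetical, not code from the paper:

```python
import numpy as np

def wc_weight(t, is_case, pi, case_ages, control_ages):
    """Weight from Equation (2) for one subject at age t.

    pi           -- function giving the probability of developing the disease
                    at age t or later in the source population
    case_ages    -- ages at diagnosis of the cases in the case-control study
    control_ages -- ages at selection of the controls
    (illustrative helper, not the authors' implementation)
    """
    if is_case:
        return 1.0  # all eligible cases are included (sampling fraction 1)
    n_cases = np.sum(case_ages >= t)        # cases diagnosed at age t or later
    n_controls = np.sum(control_ages >= t)  # controls selected at age t or later
    return (1.0 - pi(t)) / pi(t) * n_cases / n_controls

# toy example with a constant conditional probability pi(t) = 0.10
case_ages = np.array([50.0, 60.0, 70.0])
control_ages = np.array([52.0, 55.0, 65.0, 72.0])
w = wc_weight(58.0, False, lambda t: 0.10, case_ages, control_ages)
# (1 - 0.1)/0.1 * (2/2) = 9.0
```

In practice the same weight would be recomputed at every age at which the subject contributes to a risk set, which is what makes the weights time-dependent.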
The variance estimators

The robust sandwich variance estimator for $\hat\beta$ in Equation (1), as proposed by Binder [5] for finite-population surveys, is given by

(3) $\hat{V}_1(\hat\beta) = I^{-1}(\hat\beta) \left[ \sum_{i=1}^{n} \omega_i \, \hat{u}_i(\hat\beta)^{\otimes 2} \right] I^{-1}(\hat\beta),$

where $I(\hat\beta)$ is the observed information matrix obtained by evaluating $-\partial U(\beta)/\partial \beta$ at $\beta = \hat\beta$, $a^{\otimes 2} = aa'$, and

(4) $\hat{u}_i(\hat\beta) = \delta_i \left[ x_i(t_i) - \frac{\hat{S}^{(1)}(t_i, \hat\beta)}{\hat{S}^{(0)}(t_i, \hat\beta)} \right] - \sum_{j=1}^{n} \delta_j \omega_j \frac{Y_i(t_j) \exp(x_i(t_j)'\hat\beta)}{\hat{S}^{(0)}(t_j, \hat\beta)} \times \left[ x_i(t_j) - \frac{\hat{S}^{(1)}(t_j, \hat\beta)}{\hat{S}^{(0)}(t_j, \hat\beta)} \right].$

The robust variance estimator in Equation (3) can be rewritten as $\hat{V}_1(\hat\beta) = D'D$, where $D$ is the matrix of dfbetas residuals [18] from the Cox model including the weights $\omega$, which may depend on time as in Equation (2), as suggested by Barlow [19]. As indicated by Therneau and Li [20] and by Barlow et al. [21], the robust sandwich variance estimate from Equation (3) can be obtained directly in R with the commands

M1 <- coxph(Surv(start, stop, event) ~ x + cluster(id), weights = weight)
V1 <- M1$var

with the vector of weights derived from Equation (2) for the WC model. The robust variance estimator $\hat{V}_1(\hat\beta)$ accounts for the variability due to sampling the case-control sample from the source population. To account for the extra variability due to sampling the source population from the (infinite) superpopulation, we propose to use the Lin variance estimator [13], which turns out to consist in adding the naive variance estimator to the robust variance estimator $\hat{V}_1(\hat\beta)$. The Lin superpopulation variance estimator is thus given by

(5) $\hat{V}_2(\hat\beta) = \hat{V}_1(\hat\beta) + I^{-1}(\hat\beta).$

In R, the superpopulation variance estimate from Equation (5) can simply be obtained with the command

V2 <- V1 + M1$naive.var

Throughout this paper, the WC model using the robust variance estimator $\hat{V}_1(\hat\beta)$ in Equation (3) is denoted WC1, while the WC model using Lin's superpopulation variance estimator $\hat{V}_2(\hat\beta)$ in Equation (5) is denoted WC2.
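The relationship between the two estimators is purely additive once a model has been fitted. A minimal numpy sketch in Python (with made-up inputs standing in for the dfbetas matrix $D$ and the naive variance $I^{-1}(\hat\beta)$ that a real fit would produce):

```python
import numpy as np

# Hypothetical inputs: D is the n x p matrix of dfbetas residuals from the
# weighted Cox fit, and `naive` is the p x p naive variance I^{-1}(beta-hat).
rng = np.random.default_rng(0)
n, p = 200, 2
D = rng.normal(scale=0.05, size=(n, p))
naive = np.array([[0.010, 0.002],
                  [0.002, 0.008]])

V1 = D.T @ D          # robust sandwich variance, Equation (3)
V2 = V1 + naive       # Lin superpopulation variance, Equation (5)

se_robust = np.sqrt(np.diag(V1))
se_super = np.sqrt(np.diag(V2))
# the superpopulation standard errors are never smaller than the robust ones
```

This makes concrete why WC2 always yields wider confidence intervals than WC1 for the same point estimates: a positive-definite term is added to the robust variance.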
While the WC1 and WC2 models give identical estimated exposure effects, they yield different standard errors and thus different CIs.
Simulations: Overview of the simulation design

The main objective of the simulation study was to evaluate the performance of Lin's superpopulation variance estimator $\hat{V}_2(\hat\beta)$ in Equation (5) with the time-dependent weights defined in Equation (2), for the estimation of the effects of time-varying exposures in case-control studies. In particular, we compared the coverage probability of the 95% CI resulting from the WC2 model with those of the WC1 model and standard logistic regression. We were specifically interested in the effects of exposure intensity, duration, age at first exposure, and time since last exposure. These inter-related aspects of exposure are of interest in many epidemiological applications but raise statistical issues because of their correlation and time-dependency. We generated 1000 source populations of 1000 or 5000 individuals each and, within each source population, simulated a case-control study. The age at event for each subject in each source population was generated from a standard Cox model with time-dependent covariates, using a permutation algorithm described elsewhere and assuming a Weibull marginal distribution of age at event [4,22,23]. Three Cox models of epidemiological interest were simulated. Model 1 included intensity and duration of exposure only. Model 2 included age at first exposure in addition to intensity and duration. Model 3 was similar to Model 2 but used time since last exposure instead of age at first exposure. The distributions of the exposure variables were chosen to be close to the observed distributions of the occupational asbestos exposure variables in our case-control data on PM [15], described in the application section. Specifically, the ages at first and at last exposure were generated for all subjects from lognormal distributions.
The exposure intensity at each age was generated from a linear function of age. Parameters for the random intercept and slope were chosen such that either 85% of subjects had a constant intensity, 6% a highly increasing, 6% a moderately decreasing, and 3% a moderately increasing intensity over lifetime (Scenario A); or 50% had a highly increasing and 50% a moderately decreasing intensity over lifetime (Scenario B). Scenario A reflects our real case-control data on occupational exposure to asbestos. The exposure intensity at each age was represented in all our models by a variable equal to the cumulated intensity at that age divided by the total duration of exposure at that age. This exposure intensity variable is equivalent to the mean index of exposure (MIE) variable introduced in the application section. Exposure intensity, as well as duration and time since last exposure, were time-dependent in all our true Models 1-3. The true effects $\beta$ of each exposure variable in Models 1-3 were fixed to values ranging from weak to strong: 0.41 to 1.39 for intensity, 0.01 to 0.05 for duration, -0.01 to -0.11 for age at first exposure, and 0.01 to 0.04 for time since last exposure. These values of $\beta$ correspond to hazard ratios of 1.5 to 4.0 for a one-standard-deviation (1.0 fiber/ml) increase in exposure intensity, 1.2 to 2.0 for a one-standard-deviation (14 years) increase in duration of exposure, 0.9 to 0.4 for a one-standard-deviation (8 years) increase in age at first exposure, and 1.2 to 1.8 for a one-standard-deviation (14 years) increase in time since cessation of exposure. Censoring for age at event in the source population was independently generated from a uniform distribution such that the event rate was about 10% in each source population of 1000 subjects and 2% in each source population of 5000 subjects.
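The correspondence between the simulated $\beta$ values and the reported hazard ratios follows from $\mathrm{HR} = \exp(\beta \times \mathrm{SD})$. A quick illustrative check of the strongest effects above:

```python
import math

# Hazard ratio for a one-standard-deviation increase: HR = exp(beta * sd)
def hr_per_sd(beta, sd):
    return math.exp(beta * sd)

# the paper's strongest effects (sd values as stated in the text)
hr_intensity = hr_per_sd(1.39, 1.0)    # ~4.0 per 1.0 fiber/ml
hr_duration = hr_per_sd(0.05, 14)      # ~2.0 per 14 years of exposure
hr_age_first = hr_per_sd(-0.11, 8)     # ~0.4 per 8 additional years at first exposure
hr_time_since = hr_per_sd(0.04, 14)    # ~1.8 per 14 years since cessation
```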
Each subject of the source population who had the event of interest was selected as a case in the case-control dataset. The event rates in the source population thus implied that we had about 100 cases in each case-control dataset. For each case, 1, 2, or 4 controls were randomly selected with replacement among subjects at risk at the case's event age, corresponding to 1:1, 1:2, or 1:4 individual matching on age, respectively. On average, each case-control dataset was therefore made up of about 100 cases and 100, 200, or 400 controls.
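The control-selection step described above is risk-set sampling. The following Python fragment is an illustrative sketch (not the authors' simulation code) that draws m controls with replacement among the subjects still at risk at each case's event age:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical source population: age at event or censoring, event indicator.
ages = rng.uniform(40, 80, size=1000)
event = rng.random(1000) < 0.10          # ~10% of subjects become cases

case_idx = np.where(event)[0]
m = 2                                     # 1:2 individual matching on age
matched_sets = []
for i in case_idx:
    # risk set: everyone whose event/censoring age is >= the case's event age
    at_risk = np.where(ages >= ages[i])[0]
    controls = rng.choice(at_risk, size=m, replace=True)  # with replacement
    matched_sets.append((i, controls))
# each matched set: one case plus m age-matched controls
```

Sampling with replacement means the same subject can serve as a control for several cases, which is consistent with the description of the simulation design.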
Analytical methods used to analyze the simulated data: Each case-control sample was analyzed using four regression models (the WC1 and WC2 models and two standard logistic regression models), all correctly specified in terms of the exposure variables included.
In the WC1 and WC2 models, the exposure variables were time-dependent, and the probability $\pi(t)$ was the proportion of subjects in the source population who had an event at age $t$ or at a later age among those at risk at age $t$. We assumed that all subjects of the source population were followed up from birth, so that age at event did not have to be left-truncated in WC1 and WC2. For comparison purposes, conditional logistic regression (CLR) was used as the standard analytical method for individually matched case-control studies. Unconditional logistic regression (ULR), including age as a continuous covariate in addition to the exposure variables, was also used as a standard alternative analytical approach. For both ULR and CLR, the time-dependent covariates were fixed at their observed values at the age at event for cases or at the age at selection for controls. Because controls were selected among subjects at risk at the age at which each case occurred, the exponentials of all the regression parameter estimates can be interpreted as estimates of source population rate ratios [24].

Statistical criteria used to compare the performance of the different estimators

For each of the four regression models WC1, WC2, CLR, and ULR, we calculated the relative bias of the regression parameter estimator $\hat\beta$ associated with each exposure variable, as compared with the true effect $\beta$ of that exposure variable,

$\frac{1}{1000} \sum_{i=1}^{1000} \frac{\hat\beta_i - \beta}{\beta},$

where $\hat\beta_i$ is the parameter estimate of the model based on the $i$th case-control dataset ($i = 1, \ldots, 1000$). To evaluate whether the relative bias was partly due to a bias generated in the source population data, we also derived the relative bias as compared with the estimated effect $\hat\beta_{\mathrm{Cox}}$ of the correctly specified time-dependent Cox model using the full source population data,

$\frac{1}{1000} \sum_{i=1}^{1000} \frac{\hat\beta_i - \hat\beta_{\mathrm{Cox},i}}{\hat\beta_{\mathrm{Cox},i}}.$

We also derived the root mean squared error (RMSE), $\sqrt{(\bar{\hat\beta} - \beta)^2 + \mathrm{var}(\hat\beta)}$, where $\bar{\hat\beta}$ is the mean of the 1000 parameter estimates $\hat\beta_i$.
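The per-estimator summary criteria (relative bias, RMSE, and 95% CI coverage) can be computed directly from the 1000 replicate estimates. A minimal Python sketch with a hypothetical `summarize` helper (not the authors' code):

```python
import numpy as np

def summarize(beta_hats, ses, beta_true):
    """Relative bias, RMSE, and 95% CI coverage over simulation replicates.

    beta_hats -- array of parameter estimates, one per replicate
    ses       -- array of estimated standard errors, one per replicate
    beta_true -- the true effect used to generate the data
    """
    beta_hats = np.asarray(beta_hats, dtype=float)
    ses = np.asarray(ses, dtype=float)
    rel_bias = np.mean((beta_hats - beta_true) / beta_true)
    # RMSE = sqrt(bias^2 + empirical variance)
    rmse = np.sqrt((beta_hats.mean() - beta_true) ** 2 + beta_hats.var())
    lo = beta_hats - 1.96 * ses
    hi = beta_hats + 1.96 * ses
    coverage = np.mean((lo <= beta_true) & (beta_true <= hi))
    return rel_bias, rmse, coverage

# toy example with three replicates
rb, rm, cov = summarize([1.0, 1.2, 0.8], [0.2, 0.2, 0.2], 1.0)
```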
The empirical relative efficiency of each regression parameter estimator was computed as the ratio of the empirical variance of the Cox model estimator using the full source population data, $\mathrm{var}(\hat\beta_{\mathrm{Cox}})$, to the empirical variance of the parameter estimates, $\mathrm{var}(\hat\beta)$. The average of the 1000 standard errors $s(\hat\beta)$ (ASE) was compared to the empirical standard deviation of the 1000 $\hat\beta$ estimates (SDE). We also calculated the coverage probability as the proportion of samples for which the 95% CI of $\beta$, $\hat\beta \pm 1.96 \times s(\hat\beta)$, included the true value $\beta$.

Simulation results: Table 1 shows the results of the four analytical methods (WC1, WC2, CLR, ULR) for strong effects of exposure intensity and duration in Model 1.
Table 2 shows the results for strong effects of (i) intensity, duration, and age at first exposure in Model 2, and (ii) intensity, duration, and time since cessation in Model 3. The results tended to be similar for weaker effects.

Simulation results for Model 1 for 1:1, 1:2, or 1:4 matched case-control data including about 100 cases arising from populations of 1000 or 5000 subjects, based on 1000 replications. (a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B). (b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression matched on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate. (c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full source population data; each of these two biases was the same for WC1 and WC2, since these models use the same regression parameter estimator $\hat\beta$. (d) Relative efficiency as compared to the Cox model estimated on the full source population; this quantity was the same for WC1 and WC2. (e) RMSE, root mean squared error (same for WC1 and WC2); ASE, average of the 1000 standard errors $s(\hat\beta)$; SDE, empirical standard deviation of the 1000 $\hat\beta$ estimates; cov. rate, coverage rate of the 95% confidence interval of $\hat\beta$.
Table 2. Simulation results for Models 2 and 3 for 1:1 matched case-control data including about 100 cases arising from a population of 1000 subjects, based on 1000 replications.
(a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing for 3% (Scenario A); or highly increasing for 50% and moderately decreasing for 50% (Scenario B).
(b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate.
(c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full population source data. Each of these two biases was the same for WC1 and WC2, since these models used the same regression parameter estimator β̂.
(d) Relative efficiency as compared to the Cox model estimated on the full population source. This quantity was the same for WC1 and WC2, since these models used the same regression parameter estimator β̂.
(e) RMSE, root mean squared error (the same for WC1 and WC2, which used the same regression parameter estimator β̂); ASE, average of the 1000 standard errors s(β̂); SDE, empirical standard deviation of the 1000 β̂ estimates; cov. rate, coverage rate of the 95% confidence interval of β̂.

As suggested by the ratio ASE/SDE, the superpopulation variance estimator (WC2) tended to give estimates closer to the true variance than the robust variance estimator (WC1), which systematically under-estimated the true variance. Although the superpopulation variance estimator tended to overestimate the true variance for the effect of exposure intensity when the population comprised only 1000 subjects (Tables 1 and 2), the coverage rates from WC2 were systematically much closer to the nominal level of 95% than those from the WC1 model.
For each scenario of intensity pattern (Scenario A or B), the ratio ASE/SDE and the coverage rate for the effects of intensity and duration were similar in Models 2–3 and in Model 1 (Table 2 versus Table 1), suggesting that additional adjustment for correlated covariates does not affect the performance of the different variance estimators. While the relative biases from all analytical models (WC, ULR, and CLR) tended to be low and of the same magnitude in all scenarios, the relative efficiency as compared with the Cox model estimated on the full population source, as well as the accuracy in terms of RMSE, tended to differ. Indeed, in all scenarios with a 1:1 case:control ratio within a source population of 1000 subjects, the regression coefficient estimator from the WC models was much more efficient, and thus also more accurate, than those from CLR and ULR (Tables 1 and 2). As expected, the relative efficiency of all models estimated using 100 cases and 100 controls, as compared with the Cox model estimated on the full population source, decreased when the population size increased. For example, the relative efficiency of the WC models for intensity with pattern B decreased from 0.59 to 0.20 when the population size increased from 1000 to 5000 subjects (Table 1). As expected as well, increasing the number of controls from 100 to 200 or 400 for a given population size (5000 in Table 1) strongly increased the relative efficiency of ULR and CLR but only moderately increased that of the WC models. For example, the relative efficiency for intensity with pattern B increased from 0.10 to 0.36 for CLR but only from 0.20 to 0.37 for the WC model (Table 1). Because the WC model already used the controls selected in the 1:1 case-control scenario at ages other than the one at which they were sampled, the additional controls in the 1:2 or 1:4 case:control ratio scenarios contributed relatively less information to this model than to ULR and CLR.
As a result, ULR and CLR became more accurate in terms of RMSE than the WC models when four controls were selected for each case. Interestingly, CLR did not perform better than ULR in terms of either bias or RMSE, despite the individual matching of cases and controls. ULR was actually systematically more efficient than CLR. This result may be consistent with our previous findings that CLR might have difficulty in separating the effects of correlated time-dependent variables [23]. Indeed, the correlations between each pair of the four exposure variables (intensity, duration, age at first exposure, and time since last exposure), as well as with age at the index date, ranged between −0.679 and +0.453. The correlation also affected the WC and ULR parameter estimators, as suggested by the slightly higher RMSE in Models 2 and 3 (Table 2) than in Model 1 (Table 1) for the effects of intensity and duration, but it affected them less than the CLR estimator.
Application to occupational exposure to asbestos and pleural mesothelioma

Mesothelioma is a rare tumor mostly located in the pleura and usually caused by exposure to asbestos. The role of the different temporal patterns of occupational exposure to this substance still has to be explored using appropriate statistical methods accounting for individual changes over time in exposure intensity [15]. It is therefore of interest to apply the proposed estimators to estimate the mutually adjusted effects of exposure intensity, duration, age at first exposure, and time since last exposure, and to compare the results with those from standard logistic regression analyses that do not dynamically account for within-subject changes in exposure intensity over time.

Data source

The data came from a large French population-based case-control study described in Lacourt et al. [15]. Cases were selected from a French case-control study conducted in 1987–1993 and from the French National Mesothelioma Surveillance Program in 1998–2006.
Population controls were frequency matched to cases by sex and year of birth within 5-year groups. Occupational asbestos exposure was evaluated for each subject with a job-exposure matrix (JEM), which allowed us to derive the mean index of exposure (MIE) that was used in the regression models to represent intensity of exposure, as in Lacourt et al. [15]. The MIE at age t was given by

MIE(t) = [ Σ_{l=1}^{L} d_l × p_l × (f_sl × i_sl + f_al × i_al) ] / Σ_{l=1}^{L} d_l,

where L is the total number of jobs exposed to asbestos up to age t; d_l is the duration (in years) of job l; p_l the probability of asbestos exposure for job l; f_sl and i_sl the frequency and intensity of asbestos exposure due to the specific tasks of job l; and f_al and i_al the frequency and intensity of asbestos exposure due to environmental contamination at the workplace of job l. For each job, the probability was derived from the percentage of workers exposed in the considered job code, the frequency from the percentage of work time, and the intensity from the concentration of asbestos fibers in the air, expressed in fibers per milliliter (f/ml). See Lacourt et al. [15] for more details. A subject ever exposed to asbestos was defined as a subject who had at least one job with a probability p_l different from zero. Because our objective was to accurately investigate the effects of the quantitative time-related aspects of occupational exposure, all our analyses were restricted to subjects ever exposed to asbestos (68.9% of males and 20.9% of females). In addition, because the sample size for females was too small to ensure adequate statistical power and accurate estimates in separate multiple regression analyses of this group [15], the analyses were restricted to males ever exposed to asbestos, i.e. to 1041 male cases and 1425 male controls. The distribution of age and of the asbestos exposure characteristics at the time of diagnosis for cases and of interview for controls is shown in Table 3.
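As a concrete reading of the MIE formula, here is a minimal sketch of the duration-weighted mean over the jobs held up to age t. The field names are illustrative, not those of the actual JEM.

```python
# Sketch of the mean index of exposure (MIE): a duration-weighted mean of
# per-job exposure scores. Each job contributes its probability of exposure
# times the sum of a task-specific and an ambient (environmental) component.
def mean_index_of_exposure(jobs):
    """jobs: list of dicts with illustrative keys
    d (duration, years), p (probability of exposure),
    fs/is_ (task frequency and intensity),
    fa/ia (ambient frequency and intensity)."""
    total_duration = sum(j["d"] for j in jobs)
    if total_duration == 0:
        return 0.0  # no exposed job history up to age t
    weighted = sum(
        j["d"] * j["p"] * (j["fs"] * j["is_"] + j["fa"] * j["ia"])
        for j in jobs
    )
    return weighted / total_duration
```

A job with p = 0 (never exposed) adds duration to the denominator but nothing to the numerator, which is why restricting to ever-exposed subjects (at least one job with p_l ≠ 0) matters for interpreting the index.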
The distribution of the patterns of intensity over lifetime was similar to the one described in Scenario A of the simulation, with 85% of subjects having an almost constant asbestos exposure intensity over lifetime.

Table 3. Mean and standard deviation of age and asbestos exposure variables at the time of diagnosis/interview for ever-exposed males. Results from the French case-control study on mesothelioma, 1987–2006.
(a) Measured by the mean index of exposure (MIE).
Analytical methods used to analyze the case-control data on pleural mesothelioma

To derive the weights proposed in the WC models (Equation 2), we first estimated the age-conditional probabilities π(t) of developing PM in the French male general population. These estimated probabilities were derived from published estimated sex- and age-specific incidence rates of PM per 100 000 person-years in France in 2005 [25]. We assumed that these estimated incidence rates applied to our source population and that they were appropriate over the whole lifetime of our subjects. The results are shown in Table 4 for males. As in the simulation study, standard errors for the WC model were then derived using the two variance estimators V̂₁(β̂) and V̂₂(β̂), resulting in the WC1 and WC2 models, respectively.
Table 4. Estimated male age-conditional probabilities used in the weights of the WC models to analyze the French case-control study on mesothelioma.
(a) p(t) are estimated male age-specific incidence rates of pleural mesothelioma per 100 000 person-years in France in 2005 [25].
(b) π(t) are estimated male age-conditional probabilities of developing pleural mesothelioma within the residual lifetime after age t, calculated as π(t) = 1 − ∏_{l≥t} (1 − p(l)).

For comparison purposes, the data were further analyzed with ULR, which is the standard method for analyzing frequency matched case-control data, as well as with CLR. Age was the time axis in the WC1 and WC2 models, and a continuous covariate in ULR and CLR. We did not perform left-truncation in the WC1 and WC2 models, thus assuming that all subjects of the source population were passively followed up for PM since birth. The matching factor, birth year, was a quantitative covariate in WC1, WC2, and ULR, and the stratification variable (in 5-year groups) in CLR. Using each of the four approaches (WC1, WC2, CLR, and ULR), we estimated the effects of intensity and duration of occupational asbestos exposure, age at first exposure, and time since last exposure, using the same combinations of quantitative exposure variables as in Models 1–3 of the simulation study. All the effects of these variables were therefore assumed to be linear. Although our recent results suggested that these effects were not linear on the logit of PM [15], we used quantitative variables in order to facilitate the comparison of the estimates from the four different analytical approaches. The resulting estimates should therefore be used only for methodological comparison purposes and not as substantive epidemiological results. As in the simulation study, all the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at the age at diagnosis or interview for ULR and CLR.
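The conversion from age-specific incidence rates p(t) to the age-conditional probabilities π(t) = 1 − ∏_{l≥t}(1 − p(l)) can be sketched as follows. Treating each annual rate per 100 000 person-years as an approximate annual probability, and closing the product at age 100, are our simplifying assumptions for illustration.

```python
# Sketch of the age-conditional probability pi(t) of developing the disease
# within the residual lifetime after age t, from age-specific annual
# incidence rates. `rates_per_100k` maps age -> rate per 100 000 person-years;
# ages absent from the mapping are assumed to have a zero rate.
def age_conditional_probability(rates_per_100k, t, max_age=100):
    surv = 1.0  # probability of remaining disease-free from age t onward
    for age in range(t, max_age + 1):
        p = rates_per_100k.get(age, 0.0) / 100_000  # rate -> annual probability
        surv *= 1.0 - p
    return 1.0 - surv
```

Since π(t) multiplies survival factors over all ages l ≥ t, it is non-increasing in t: the older a control is at a given age, the smaller the residual-lifetime probability used in the weight.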
To derive the weights proposed in the WC models (Equation 2), we first estimated the age-conditional probabilities π(t) of developing PM in the French male general population. These estimated probabilities were derived from published estimated sex- and age-specific incidence rates of PM per 100000 person-years in France in 2005 [25]. We assumed that these estimated incidence rates applied to our source population and that they were appropriate during the whole life of our subjects. The results are shown in Table 4 for males. As in the simulation study, standard errors for the WC model were then derived using the two variance estimators V^1β^ and V^2β^, resulting in the WC1 and WC2 models, respectively. Estimated male age-conditional probabilities used in the weights of the WC models to analyze the French case-control study of on mesothelioma (a) p(t) are estimated male age-specific incidence rates of pleural mesothelioma per 100 000 person-years in France in 2005 [25]. (b) π(t) are estimated male age-conditional probabilities of developing pleural mesothelioma within residual lifetime after age t, calculated as πt=1−∏l≥t1−pl. For comparison purpose, the data were further analyzed with ULR which is the standard method to analyze frequency matched case-control data, as well as with CLR. Age was the time axis in WC1 and WC2 models, and a continuous covariate in ULR and CLR. We did not perform left-truncation in WC1 and WC2 models thus assuming that all subjects of the population source were passively followed-up for PM since birth. The matching factor, birth year, was a quantitative covariate in WC1, WC2, and ULR, and was the stratification variable (in 5 years groups) in CLR. 
Using each of the four approaches (WC1, WC2, CLR and, ULR), we estimated the effects of intensity and duration of occupational asbestos exposure, the age at first exposure, and time since last exposure, using the same combination of quantitative exposure variables as in Models 1–3 of the simulation study. All the effects of these variables were therefore assumed to be linear. Despite our recent results that suggested that these effects were not linear on the logit of PM [15], we used quantitative variables in order to facilitate the comparison of the estimates from the four different analytical approaches. The resulting estimates should therefore be used only for methodological comparison purpose and not as substantive epidemiological results. As in the simulation study, all the exposure variables were time-dependent in WC1 and WC2 models, and fixed at their value at the age at diagnosis or interview for ULR and CLR. Statistical criteria used to compare the performance of the different estimators: For each of the four regression models WC1, WC2, CLR, and ULR, we calculated the relative bias of the regression parameter estimator β^ associated with each exposure variable, as compared with the true effect β of that exposure variable, 11000∑i=11000β^i−ββ, where β^i is the parameter estimate of the model based on the ith case-control dataset (i = 1, …, 1 000). To evaluate whether the relative bias was not partly due to a bias generated in the population source data, we also derived the relative bias as compared with the estimated effect β^Cox of the well specified time-dependent Cox model using the full population source data, 11000∑i=11000β^i−β^Cox,iβ^Cox,i. We also derived the root mean squared error (RMSE) β^¯−β2+varβ^, where β^¯ is the mean of the 1 000 parameter estimates β^i. 
The empirical relative efficiency of each regression parameter estimator was computed as the ratio of the empirical variance of the Cox model using the full source population data, varβ^Cox, to the empirical variance of the parameter estimates varβ^. The average of the 1000 standard errors sβ^ (ASE) was compared to the empirical standard deviation of the 1000 β^ estimates (SDE). We also calculated the coverage probability as the proportion of samples for which the 95% CI of β,β^±1.96×sβ^, included the true value β. Simulation results: Table 1 shows the results of the four analytical methods (WC1, WC2, CLR, ULR) for strong effects of exposure intensity and duration in Model 1. Table 2 shows the results for strong effects of i) intensity, duration, and age at first exposure in Model 2, and ii) intensity, duration, and time since cessation in Model 3. The results tended to be similar for weaker effects. Simulation results for Model 1 for 1:1, 1:2, or 1:4 matched case-control data including about 100 cases arising from populations of 1000 or 5000 subjects, based on 1000 replications (a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing intensity for 3% (Scenario A); or, was highly increasing for 50% and moderately decreasing for 50% (Scenario B). (b) WC1, weighted Cox models with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate. (c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full population source data. Each of these two bias was the same for WC1 and WC2 since these models used the same regression parameter estimator β^. (d) Relative efficiency as compared to the Cox model estimated on the full population source. 
This quantity was the same for WC1 and WC2 since these models used the same regression parameter estimator β^. (e) RMSE, root mean squared error (same for WC1 and WC2 which used the same regression parameter estimator β^); ASE, average of the 1000 standard errors sβ^; SDE, empirical standard deviation of the 1000 β^ estimates; cov. rate, coverage rate of the 95% confidence interval of β^. Simulation results for Models 2 and 3 for 1:1 matched case-control data including about 100 cases arising from a population of 1000 subjects, based on 1000 replications (a) Exposure intensity was either constant over lifetime for 85% of the subjects, highly increasing for 6%, moderately decreasing for 6%, and moderately increasing intensity for 3% (Scenario A); or, was highly increasing for 50% and moderately decreasing for 50% (Scenario B). (b) WC1, weighted Cox models with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; CLR, conditional logistic regression on age; ULR, unconditional logistic regression adjusted for age as a continuous covariate. (c) Relative bias as compared to the true effect and as compared to the estimated effect of the Cox model using the full population source data. Each of these two bias was the same for WC1 and WC2 since these models used the same regression parameter estimator β^. (d) Relative efficiency as compared to the Cox model estimated on the full population source. This quantity was the same for WC1 and WC2 since these models used the same regression parameter estimator β^. (e) RMSE, root mean squared error (same for WC1 and WC2 which used the same regression parameter estimator β^); ASE, average of the 1000 standard errors sβ^; SDE, empirical standard deviation of the 1000 β^ estimates; cov. rate, coverage rate of the 95% confidence interval of β^. 
As suggested by the ratio ASE/SDE, the superpopulation variance estimator (WC2) tended to give estimates that were closer to the true variance than the robust variance estimator (WC1) that systematically under-estimated the true variance. Despite the superpopulation variance estimator tended to overestimate the true variance for the effect of exposure intensity when the population was made of 1000 subjects only (Tables 1 and 2), the coverage rates from WC2 were systematically much closer to the nominal level of 95% than those from the WC1 model. For each scenario of intensity pattern (Scenario A or B), the ratio ASE/SDE and the coverage rate for the effects of intensity and duration were similar in Models 2–3 as compared with Model 1 (Table 2 versus Table 1), suggesting that additional adjustment for correlated covariates does not affect the performance of the different variance estimators. While the relative biases from all analytical models (WC, ULR and CLR) tended to be low and of the same magnitude in all scenarios, the relative efficiency as compared to the Cox model estimated on the full population source, as well as the accuracy in terms of RMSE, tended to be different. Indeed, in all scenarios with 1:1 case:control ratio within population source of 1000 subjects, the regression coefficient estimator from the WC models was much more efficient and thus also more accurate than that from CLR and ULR (Tables 1 and 2). As expected, the relative efficiency from all models estimated using 100 cases and 100 controls, as compared to the Cox model estimated on the full population source, decreased when the population size increases. For example, the relative efficiency of the WC for intensity with pattern B decreased from 0.59 to 0.20 when the population size increased from 1000 to 5000 subjects (Table 1). 
As expected as well, increasing the number of controls from 100 to 200 or 400 for a given population size (5000 in Table 1) strongly increased the relative efficiency of ULR and CLR but only moderately increased the relative efficiency of the WC models. For example, the relative efficiency for intensity with pattern B increased from 0.10 to 0.36 for CLR while only from 0.20 to 0.37 for the WC model (Table 1). Because the WC model used controls at different ages for which they were selected in the 1:1 case-control scenario, using additional controls in the 1:2 or 1:4 case:control ratio scenarios added relatively less information in this model than in ULR and CLR. As a result, ULR and CLR became more accurate in terms of RMSE than the WC models when four controls were selected for each case. Interestingly, CLR did not perform better in terms of both bias and RMSE than ULR, despite individual matching of cases and controls. ULR was actually systematically more efficient than CLR. This result may be consistent with our previous results where we found that CLR might have difficulty in separating the effects of correlated time-dependent variables [23]. Indeed, the correlation between each pair of the four exposure variables (intensity, duration, age at first exposure and time since last exposure) as well as with age at the index date, ranged between −0.679 and +0.453. The correlation also affected both the WC and ULR parameter estimators as suggested by the slightly higher RMSE in Models 2 and 3 (Table 2) as compared with Model 1 (Table 1) for the effects of intensity and duration, but it affected them less than the CLR estimator. Application to occupational exposure to asbestos and pleural mesothelioma: Mesothelioma is a rare tumor mostly located in the pleura and usually caused by exposure to asbestos. 
The role of the different temporal patterns of occupational exposure to this substance still needs to be explored using appropriate statistical methods that account for individual changes in exposure intensity over time [15]. It is therefore of interest to apply the proposed estimators to estimate the mutually adjusted effects of exposure intensity, duration, age at first exposure, and time since last exposure, and to compare the results with those from standard logistic regression analyses that do not dynamically account for within-subject changes in exposure intensity over time. Data source: The data came from a large French population-based case-control study described in Lacourt et al. [15]. Cases were selected from a French case-control study conducted in 1987–1993 and from the French National Mesothelioma Surveillance Program in 1998–2006. Population controls were frequency matched to cases by sex and year of birth within 5-year groups. Occupational asbestos exposure was evaluated for each subject with a job-exposure matrix (JEM), which allowed us to derive the mean index of exposure (MIE) that was used in the regression models to represent intensity of exposure, as in Lacourt et al. [15]. The MIE at age t was given by MIE(t) = [ Σ_{l=1..L} d_l × p_l × (f_sl × i_sl + f_al × i_al) ] / Σ_{l=1..L} d_l, where L is the total number of jobs exposed to asbestos up to age t; d_l the duration (in years) of job l; p_l the probability of asbestos exposure for job l; f_sl and i_sl the frequency and intensity of asbestos exposure due to the specific tasks of job l, respectively; and f_al and i_al the frequency and intensity of asbestos exposure due to environmental contamination of the workplace in job l, respectively. For each job, the probability was derived from the percentage of workers exposed in the considered job code, the frequency from the percentage of work time, and the intensity from the concentration of asbestos fibers in the air, expressed as fibers per milliliter (f/ml). See Lacourt et al.
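As a purely illustrative sketch of the MIE formula above, the following code computes the duration-weighted index from a list of jobs. The `Job` fields mirror d_l, p_l, f_sl, i_sl, f_al and i_al; the class and function names are our own and are not taken from the JEM used in the study.

```python
from dataclasses import dataclass

@dataclass
class Job:
    duration: float   # d_l: years of exposure in job l up to age t
    prob: float       # p_l: probability of asbestos exposure in job l
    f_task: float     # f_sl: frequency of task-specific exposure
    i_task: float     # i_sl: intensity of task-specific exposure (f/ml)
    f_env: float      # f_al: frequency of environmental exposure
    i_env: float      # i_al: intensity of environmental exposure (f/ml)

def mean_index_of_exposure(jobs):
    """Duration-weighted mean index of exposure (MIE) at a given age,
    following the grouping of terms in the text's formula."""
    total_duration = sum(j.duration for j in jobs)
    if total_duration == 0:
        return 0.0  # subject with no exposed job up to age t
    weighted = sum(
        j.duration * j.prob * (j.f_task * j.i_task + j.f_env * j.i_env)
        for j in jobs
    )
    return weighted / total_duration
```

Evaluating this function at each age t over a subject's job history yields the time-dependent intensity variable used in the WC models.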
[15] for more details. A subject ever exposed to asbestos was defined as one who had at least one job with a probability p_l different from zero. Because our objective was to accurately investigate the effects of the quantitative time-related aspects of occupational exposure, all our analyses were restricted to subjects ever exposed to asbestos (68.9% of males and 20.9% of females). In addition, because the sample size for females was too small to ensure adequate statistical power and accurate estimates in separate multiple regression analyses of this group [15], the analyses were restricted to males ever exposed to asbestos, i.e. to 1041 male cases and 1425 male controls. The distribution of age and of the asbestos exposure characteristics at the time of diagnosis for cases and of interview for controls is shown in Table 3. The distribution of the patterns of intensity over lifetime was similar to the one described in Scenario A of the simulation, with 85% of subjects having an almost constant asbestos exposure intensity over lifetime. Mean and standard deviation of age and asbestos exposure variables at the time of diagnosis/interview for ever exposed males. Results from the French case-control study on mesothelioma, 1987–2006. (a) Measured by the mean index of exposure (MIE). Analytical methods used to analyze the case-control data on pleural mesothelioma: To derive the weights proposed in the WC models (Equation 2), we first estimated the age-conditional probabilities π(t) of developing PM in the French male general population. These estimated probabilities were derived from published estimated sex- and age-specific incidence rates of PM per 100 000 person-years in France in 2005 [25]. We assumed that these estimated incidence rates applied to our source population and that they were appropriate over the whole lifetime of our subjects. The results are shown in Table 4 for males.
As in the simulation study, standard errors for the WC model were then derived using the two variance estimators V̂₁(β̂) and V̂₂(β̂), resulting in the WC1 and WC2 models, respectively. Estimated male age-conditional probabilities used in the weights of the WC models to analyze the French case-control study on mesothelioma. (a) p(t) are the estimated male age-specific incidence rates of pleural mesothelioma per 100 000 person-years in France in 2005 [25]. (b) π(t) are the estimated male age-conditional probabilities of developing pleural mesothelioma within the residual lifetime after age t, calculated as π(t) = 1 − ∏_{l ≥ t} (1 − p(l)). For comparison purposes, the data were further analyzed with ULR, which is the standard method for analyzing frequency matched case-control data, as well as with CLR. Age was the time axis in the WC1 and WC2 models, and a continuous covariate in ULR and CLR. We did not perform left-truncation in the WC1 and WC2 models, thus assuming that all subjects of the source population were passively followed up for PM since birth. The matching factor, birth year, was a quantitative covariate in WC1, WC2, and ULR, and was the stratification variable (in 5-year groups) in CLR. Using each of the four approaches (WC1, WC2, CLR, and ULR), we estimated the effects of intensity and duration of occupational asbestos exposure, age at first exposure, and time since last exposure, using the same combinations of quantitative exposure variables as in Models 1–3 of the simulation study. All the effects of these variables were therefore assumed to be linear. Although our recent results suggested that these effects are not linear on the logit of PM [15], we used quantitative variables in order to facilitate the comparison of the estimates from the four different analytical approaches. The resulting estimates should therefore be used only for methodological comparison purposes and not as substantive epidemiological results.
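The conversion from age-specific incidence rates p(t) to age-conditional probabilities π(t) = 1 − ∏_{l ≥ t} (1 − p(l)) can be sketched as below. This is illustrative code, not the study's implementation; it assumes one rate per single year of age, whereas published rates are often reported for age groups and would need to be expanded first.

```python
def age_conditional_probabilities(rates_per_100k):
    """Convert age-specific incidence rates (per 100 000 person-years,
    one entry per year of age starting at age 0) into age-conditional
    probabilities pi(t) = 1 - prod_{l >= t} (1 - p(l)), i.e. the
    probability of developing the disease in the residual lifetime
    after age t."""
    p = [r / 100_000 for r in rates_per_100k]  # annual probabilities by age
    pi = [0.0] * len(p)
    survival = 1.0
    # Accumulate the product from the oldest age downwards, so that a
    # single backward pass yields pi(t) for every age t.
    for t in range(len(p) - 1, -1, -1):
        survival *= (1.0 - p[t])
        pi[t] = 1.0 - survival
    return pi
```

The resulting π(t) values play the role of Table 4's entries when building the time-dependent weights.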
As in the simulation study, all the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at the age at diagnosis or interview for ULR and CLR. Results: Table 5 shows the estimated effects of the selected quantitative asbestos exposure variables on the risk of PM, using the four analytical approaches (WC1, WC2, CLR, and ULR) and Models 1–3. The estimated effects are shown in terms of exp(β̂), i.e. estimated hazard ratios for WC1 and WC2 and estimated odds ratios for ULR and CLR. These estimated effects were calculated for an increase of about one standard deviation of the exposure variable, i.e. 1 fiber/ml for asbestos exposure intensity, 14 years for duration, 8 years for age at first exposure, and 14 years for time since last exposure. Estimated effect of occupational asbestos exposure in ever exposed males (1041 cases and 1425 controls), using the WC models and logistic regression, and assuming linear effects of quantitative exposure variables. Results from the French case-control study on mesothelioma, 1987–2006. (a) All the exposure variables were time-dependent in the WC1 and WC2 models, and fixed at their value at diagnosis/interview in CLR and ULR. Intensity was measured by the mean index of exposure (MIE). (b) WC1, weighted Cox model with robust sandwich variance; WC2, weighted Cox model with superpopulation variance; both WC1 and WC2 used age as the time axis and included birth year as a quantitative covariate; ULR, unconditional logistic regression including age at diagnosis/interview and birth year as quantitative covariates; CLR, conditional logistic regression stratified on birth year group (5 years) and including age at diagnosis/interview as a quantitative covariate. (c) Hazard ratio estimates for WC1 and WC2 (same value for WC1 and WC2) and odds ratio estimates for CLR and ULR, adjusted for age and birth year, with corresponding 95% confidence intervals (CI).
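Rescaling a per-unit log-hazard (or log-odds) coefficient to a one-standard-deviation increment, as done for Table 5, amounts to exponentiating β̂ × Δ. A minimal sketch follows; the helper name is ours and a normal-approximation CI is assumed.

```python
import math

def effect_for_increment(beta, delta, se=None, z=1.96):
    """Rescale a coefficient beta, estimated per unit of the exposure,
    to an increment delta (e.g. one standard deviation of the exposure),
    returning exp(beta * delta). If a standard error is supplied, the
    corresponding normal-approximation 95% CI is returned as well."""
    point = math.exp(beta * delta)
    if se is None:
        return point
    lower = math.exp((beta - z * se) * delta)
    upper = math.exp((beta + z * se) * delta)
    return point, (lower, upper)
```

For example, a duration coefficient estimated per year would be rescaled with delta = 14 to match the 14-year increment used in the table.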
As expected, the associations between all asbestos exposure variables and PM were significant with each of the four analytical approaches (Table 5). Specifically, increasing intensity or duration significantly increased the risk of PM, whether or not adjusted for either age at first exposure or time since last exposure. Because the relative variation in the estimated effects of duration between Model 3 and Model 1 was higher than between Model 2 and Model 1, time since last exposure (in Model 3) seems to be a more important confounder than age at first exposure (in Model 2) in the relation between duration and PM. Estimates from Model 2 suggest that the later a subject is first occupationally exposed to asbestos, the smaller his risk of PM. All the estimated effects of time since cessation indicate that the risk continues to increase after the cessation of exposure, as in many other studies [15,26,27]. The 95% CI from WC1 and WC2 were almost identical (Table 5), suggesting that the robust variance estimates from WC1 were very close to the superpopulation variance estimates from WC2. This is likely due to the fact that the disease (PM) was very rare, as shown in Table 3, as opposed to our simulation study where the overall event rates were about 10% and 2%. The strongest contrasts between the estimates from the WC models and ULR or CLR were for the effect of exposure intensity. Indeed, the estimated effect of intensity was systematically weaker with the WC models than with ULR or CLR, with even non-overlapping 95% CI. Note that, as for Scenario A in our simulation study, CLR provided the strongest estimates for the strong effect of intensity. By contrast, for the effects of duration, age at initiation, and time since last exposure, the strongest estimates were provided by the WC models, but the discrepancies with ULR and CLR were smaller than for intensity.
There are different potential explanations for the discrepancies between the results from the Cox (WC1 and WC2) and logistic (CLR and ULR) models. First, the adjustment for age was largely different in the two series of models. While age was the time axis in the Cox models, and was therefore adequately adjusted for in both WC1 and WC2, it was included as a continuous covariate in both logistic models. This assumed that its effect was linear on the logit, which is actually not true [15]. Thus there may be some residual confounding by age in both CLR and ULR. Second, because controls of the case-control study on PM were selected from members of the general French population at calendar times that could differ from the period of cases' recruitment, the case-control odds ratio estimate from ULR and CLR may estimate a different quantity than the hazard ratio estimate from the Cox model. Indeed, the hazard function in the Cox models provides a dynamic description of how the instantaneous risk of getting PM varies with age. The exponential of the regression parameter can be interpreted as a hazard ratio, which is equivalent to the rate ratio that would be obtained from a cohort design. If the controls of the case-control study on PM had been randomly selected from the members of the population who were at risk at each age at which a case occurred (as in our simulation study), then the estimated odds ratio obtained from ULR and CLR could also be interpreted as a rate ratio that would be obtained from a cohort design. However, this was not the way controls were selected in the case-control study on PM, and it is therefore difficult to directly compare odds ratio estimates obtained from ULR and CLR with hazard ratio estimates obtained from WC1 and WC2.
Discussion: Our simulation results suggest that the superpopulation variance estimator [13] provides adequate coverage probabilities of the CI when using the time-dependent weights proposed in the WC model to estimate the effect of time-varying exposures in case-control studies. Indeed, our simulation results show much better coverage probabilities of the CI resulting from the superpopulation estimator than from the robust variance estimator. However, our application to PM suggests that the two variance estimators give similar 95% CI when the disease is very rare. This is consistent with the results of Lin [13], who showed that the use of the finite-population variance estimator (i.e. robust variance) results in reasonable coverage probabilities if the inclusion probabilities are low, but poor coverage probabilities if the inclusion probabilities are high. It should be noted that both the robust and superpopulation variance estimators are easy to implement in most statistical software packages. Our simulation results also confirmed that the WC model is an alternative method for estimating the effects of time-varying exposure variables in case-control studies. In particular, when compared to standard logistic regression that did not dynamically account for the different values of covariates over lifetime, the WC model tended to provide more accurate estimates of the effects of variables for which a substantial percentage of subjects had time-varying values over lifetime, such as intensity. However, the superiority of the WC model did not persist when more than one control was selected from the risk set. Our results also suggest that the estimates from the WC model are no more affected by correlations between the time-dependent covariates included in the model than logistic regression with fixed-in-time covariates.
Note that the modelling of the exposure in the WC model could be further improved by incorporating more complex functions of the trajectory of the exposure over time that have recently been proposed [28-30]. The application of the WC model requires estimating the age-conditional probabilities in the source population for population-based case-control studies, or in the full cohort for nested case-control studies. In our application to population-based case-control data on PM, these probabilities were estimated from health statistics on the general French male population. Yet, our analyses were restricted to ever exposed males, who have a much higher probability of developing PM than the general French male population. Further studies are needed to investigate the impact of biased estimates of the age-conditional probabilities on the WC estimates. Accounting for uncertainty in the weight estimates could further improve the variance estimator [31]. In addition, controls in our case-control data set on PM were frequency matched to cases on birth year. To account for this stratification variable in the design, we included it as a covariate in the WC models. However, it would be interesting to consider accounting for this frequency matching variable in the weights of the WC models [12], and to investigate the performance of the resulting estimators through simulation of frequency matched case-control data. This would be all the more important given that frequency matching is widely used in population-based case-control studies. It should also be mentioned that, depending on the control selection strategy, hazard ratio estimates from the WC model may not measure the same quantity as odds ratio estimates from logistic regression. While the hazard ratio from the WC model estimates a rate ratio, the odds ratio may estimate another quantity depending on the control selection strategy [24].
The WC model with time-dependent variables also requires information on the values of the covariates at each event time, i.e. at each age of diagnosis of the cases. Such information may be missing, and different approaches could be considered to impute these values. However, further studies are needed to assess the impact of measurement errors in the time-dependent covariate values. Indeed, mismodelling the covariates has already been shown to induce bias in the dfbeta-based sandwich variance estimator of the unweighted Cox model for nested case-control analysis [32]. A variance estimator based on Schoenfeld residuals provided better variance estimates under severe model misspecification [32]. It may be of interest to further investigate such an estimator for misspecified time-dependent covariates in the WC model. Some further joint modelling of the WC model and the time-dependent covariate process could also be investigated as an alternative, especially for internal time-dependent exposure variables [33]. However, in most case-control studies on occupational exposures, the occupational history is sufficiently well investigated to allow the elaboration of quite accurate time-dependent covariates, as in our application on asbestos and PM. Conclusion: We believe that the WC model using the superpopulation variance estimator may provide a potential alternative analytical method for case-control analyses with detailed information on the history of the exposure of interest, especially if a large part of the subjects has a time-varying exposure intensity over lifetime, and if only one control is available for each case. Abbreviations: ASE: Average standard error; CI: Confidence interval; CLR: Conditional logistic regression; JEM: Job-exposure matrix; MIE: Mean index of exposure; PM: Pleural mesothelioma; RMSE: Root mean squared error; SDE: Standard deviation of the estimates; ULR: Unconditional logistic regression; WC: Weighted Cox model.
Competing interests: The authors declare that they have no competing interests. Authors’ contributions: HG has drafted the manuscript, programmed and run the simulation study, analyzed the case-control data on mesothelioma, and has contributed to the interpretation of all the results. AL has provided the case-control data on mesothelioma and has revised the manuscript. KL has drafted and revised the manuscript, has designed the simulation study, and supervised HG in all stages. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/13/18/prepub
Background: Case-control studies are generally designed to investigate the effect of exposures on the risk of a disease. Detailed information on past exposures is collected at the time of the study. However, only the cumulated value of the exposure at the index date is usually used in logistic regression. A weighted Cox (WC) model has been proposed to estimate the effects of time-dependent exposures. The weights depend on the age-conditional probabilities of developing the disease in the source population. While the WC model provided more accurate estimates of the effect of time-dependent covariates than standard logistic regression, the robust sandwich variance estimates were lower than the empirical variance, resulting in low coverage probabilities of the confidence intervals. The objectives of the present study were to investigate through simulations a new variance estimator, and to compare the estimates from the WC model and standard logistic regression for estimating the effects of correlated temporal aspects of exposure with detailed information on exposure history. Methods: We proposed a new variance estimator using a superpopulation approach, and compared its accuracy to that of the robust sandwich variance estimator. The full exposure histories of source populations were generated, and case-control studies were simulated within each source population. Different models with selected time-dependent aspects of exposure, such as intensity, duration, and time since cessation, were considered. The performance of the WC model with each of the two variance estimators was compared to standard logistic regression. The different models were finally compared for estimating the effects of correlated aspects of occupational exposure to asbestos on the risk of mesothelioma, using population-based case-control data.
Results: The superpopulation variance estimator provided better estimates than the robust sandwich variance estimator, and the WC model provided accurate estimates of the effects of correlated aspects of temporal patterns of exposure. Conclusions: The WC model with the superpopulation variance estimator provides an alternative analytical approach for estimating the effects of time-varying exposures with detailed exposure history information in case-control studies, especially if many subjects have a time-varying exposure intensity over lifetime, and if only one control is available for each case.
Background: Population-based case-control studies are widely used in epidemiology to investigate the association between environmental or occupational exposures over lifetime and the risk of cancer or other chronic diseases. Many of the exposures of interest are protracted, and a huge amount of information is often retrospectively collected for each subject about his/her potential past exposure over lifetime. For example, for occupational exposures, the whole occupational history is usually investigated for each subject, and different methods exist to estimate the average dose of exposure at each past job [1-3]. However, only the cumulated estimated dose of exposure at the index age (age at diagnosis for cases and at interview for controls) is usually used in standard logistic regression analyses. Such an approach does not use the (retrospective) dynamic information available on the exposure at different ages during lifetime. A time-dependent weighted Cox (WC) model has recently been proposed to incorporate this dynamic information on exposure, in order to more accurately estimate the effect of time-dependent exposures in population-based case-control studies [4]. The WC model consists of using age as the time axis and weighting cases and controls according to their case-control status and the age-conditional probabilities of developing the disease in the source population. The weights proposed in the WC model are therefore time-dependent and estimated from data of the source population. A simulation study showed that the WC model improved the accuracy of the regression parameter estimates of time-dependent exposure variables as compared with standard logistic regression with fixed-in-time covariates [4]. However, the average robust sandwich variance estimates based on dfbeta residuals [5] were systematically lower than the empirical variance of the parameter estimates, which resulted in too narrow confidence intervals (CI) and low coverage probabilities [4].
There is an extensive statistical literature on the weighted analyses of cohort sampling designs (see, among many others, [6-10]). A population-based case-control study can be seen as a nested case-control study within the source population of cases and controls, and can therefore fit in this general cohort sampling design framework. Population-based case-control studies can also be seen as a survey with complex selection probabilities [11-14], and this is the general framework that we use in this paper. Specifically, we consider the superpopulation approach developed by Lin [13], who proposed a variance estimator that accounts for the extra variation due to sampling the finite survey population from an infinite superpopulation. As a result, the Lin variance estimator accounts for the random variation from one survey sample to another and from one survey population to another, as opposed to the robust sandwich variance estimator, which accounts only for the random variation from one survey sample to another. In the context of a population-based case-control study, the case-control sample can be considered as the survey sample, the source population as the finite survey population, and the population under study as the infinite superpopulation. The asymptotic properties of the Lin variance estimator have been investigated, and a small simulation study has been conducted to investigate these properties in finite samples [13]. The results indicated that the superpopulation variance estimates were closer to the true variance than the robust sandwich variance estimates. However, that simulation study considered only fixed-in-time covariates and simple selection probabilities that did not reflect the more complex sampling scheme of population-based case-control studies.
It is therefore unclear how the superpopulation variance estimator would perform for the estimation of the effects of time-dependent covariates using the specific estimated time-dependent weights proposed in the WC model [4]. In addition, for further applications to population-based case-control data, it would be important to clarify the performance of the WC model, as compared with standard logistic regression analyses, for estimating the effects of several correlated temporal patterns of protracted exposures. Indeed, the effects of temporal patterns of exposures such as intensity, duration, age at first exposure, and time since last exposure are often of great interest from an epidemiological point of view [15], but they need to be carefully adjusted for each other to avoid residual confounding [16]. Such adjustment induces correlation between covariates and it is important to investigate how it affects the proposed estimators. The first objective of the present study is to investigate through extensive simulations the accuracy of the Lin variance estimator for estimating the effects of time-varying covariates in case-control data, using the weights proposed in the WC model [4]. The second objective is to compare the estimates from the WC model and standard logistic regression for estimating the effects of selected correlated temporal aspects of exposure with detailed information on exposure history. The next section introduces the WC model and the robust and Lin’s variance estimators. The different approaches are then compared through simulations and using data from a large population-based case-control study on occupational exposure to asbestos and pleural mesothelioma (PM). 
Targets for parathyroid hormone in secondary hyperparathyroidism: is a "one-size-fits-all" approach appropriate? A prospective incident cohort study.
25123022
Recommendations for secondary hyperparathyroidism (SHPT) consider that a "one-size-fits-all" target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnostic and treatment challenges. One hypothesis that could explain these difficulties is that the dialysis population with SHPT is not homogeneous.
BACKGROUND
EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥ 3 months) with a new SHPT diagnosis, i.e. parathyroid hormone (PTH) ≥ 500 ng/L for the first time, or initiation of cinacalcet, or parathyroidectomy. Multiple correspondence analysis and ascendant hierarchical clustering on clinico-biological variables (symptoms, PTH, plasma phosphorus and alkaline phosphatase) and SHPT treatment (cinacalcet, vitamin D, calcium, or calcium-free phosphate binder) were performed to identify distinct phenotypes.
METHODS
305 patients (261 with incident PTH ≥ 500 ng/L; 44 with cinacalcet initiation) were included. Their mean age was 67 ± 15 years; 60% were men, 92% were on hemodialysis and 8% on peritoneal dialysis. Four subgroups of SHPT patients were identified: 1/ an "intermediate" phenotype with hyperphosphatemia without hypocalcemia (n = 113); 2/ younger patients with severe comorbidities, hyperphosphatemia and hypocalcemia despite multiple SHPT medical treatments, suggesting poor adherence (n = 73); 3/ elderly patients with few cardiovascular comorbidities, controlled phospho-calcium balance, higher PTH, and few treatments (n = 75); 4/ patients who initiated cinacalcet (n = 43). The quality criterion of the model was 14, above the cut-off of 2, suggesting a relevant classification.
RESULTS
In real life, dialysis patients with newly diagnosed SHPT constitute a very heterogeneous population. A "one-size-fits-all" target approach is probably not appropriate. Therapeutic management needs to be adjusted to the 4 different phenotypes.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Cinacalcet", "Cohort Studies", "Drug Delivery Systems", "Female", "Follow-Up Studies", "Humans", "Hyperparathyroidism, Secondary", "Male", "Middle Aged", "Naphthalenes", "Parathyroid Hormone", "Prospective Studies" ]
4236624
Background
In the 70’s, secondary hyperparathyroidism (SHPT) was described as a severe bone disease occurring in young end-stage renal disease (ESRD) patients with a significant duration of dialysis. When parathyroid hormone (PTH) was very high, up to 1000 ng/L, and associated with hypercalcemia, the only treatment was subtotal parathyroidectomy [1,2]. In the 90’s, access to kidney transplantation for the young and dialysis for the old led to a rapid ageing of the dialysis population [3]. During the first decade of the millennium, a new paradigm emerged. First, SHPT was considered not only a bone disease, but also a vascular disease [4]. Second, SHPT turned out to be a biological rather than a clinical disease at the bedside [5]. A “one-size-fits-all” approach was recommended: PTH under 300 ng/L from 2003 to 2009 according to K-DOQI [6]. Due to variability in PTH measurement, the target was modified in 2009: “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay” [7,8]. Third, cinacalcet tends to be seen by clinicians as the most appropriate treatment for SHPT, given its mechanism of action, when conventional therapy is not effective enough [6,9]. But the randomized controlled EVOLVE study, published in 2012, failed to demonstrate the efficacy of cinacalcet in reducing the risk of death or major cardiovascular events [10]. Today, the exact importance of PTH is still debated [11]. Large observational cross-sectional studies of SHPT have recently been published [12-15]. An incidence/prevalence bias may have hampered a precise description of SHPT phenotypes [16]. In order to capture the phenotypes of SHPT at the bedside, we meticulously enrolled all dialysis patients of the REIN registry - Region of Lorraine with a newly marked PTH elevation in a prospective observational study from December 2009 to May 2012. At inclusion, we delivered a validated questionnaire to measure clinical symptoms.
With an original statistical analysis, we demonstrated that high PTH levels corresponded to 4 very different phenotype profiles, suggesting that a “one-size-fits-all” target approach for SHPT is not appropriate.
Methods
The pharmacoepidemiological EPHEYL (Étude PHarmacoÉpidémiologique de l’hYperparathyroïdie secondaire en Lorraine) study is an open-cohort, prospective, observational study of incident (i.e. newly diagnosed) SHPT, with a 2-year follow-up, set in the 12 dialysis units (public or private) located in the French region of Lorraine. Inclusion criteria Adult patients included in EPHEYL were on dialysis (hemodialysis or peritoneal dialysis) for at least 3 months and met, for the first time, one of the following criteria: 1) PTH ≥ 500 ng/L, 2) initiation of cinacalcet, 3) parathyroidectomy for severe SHPT. The PTH cut-off value of 500 ng/L was chosen at the time of the 2003 K-DOQI guidelines [6]. Indeed, when we initiated the study, the updated KDIGO recommendations were not yet in effect, and PTH levels between 150 and 300 ng/L were advocated [6,8]. The indication for parathyroidectomy or the use of a calcimimetic was retained when the PTH level was ≥ 500 ng/L, hence the choice of this threshold in our study. From 1st December, 2009 to 31st May, 2012, all patients who were on dialysis for at least 3 months were identified through the REIN registry - Region of Lorraine [17]. The occurrence of one of the three inclusion criteria was prospectively followed up in all these patients. Patients were included in the study at the time of PTH dosing, initiation of cinacalcet, or parathyroidectomy. Physicians were encouraged to adhere to the KDIGO™ guidelines updated in 2009 [8].
Data collection The following socio-demographic and clinical data were retrieved from the REIN registry: age, gender, body mass index (BMI), type of dialysis, dialysis vintage, primary etiology of nephropathy, comorbidities (smoking, diabetes, cardiovascular diseases, hypertension, respiratory diseases and cancer), and being on the renal transplant waiting list [17]. The vast majority of patients were Caucasian. BMI was described as a continuous quantitative variable and obesity (BMI >30 kg/m2) as a binary variable. Primary etiology of nephropathy was classified into diabetic nephropathy, vascular nephropathy, glomerulonephritis, pyelonephritis, hereditary nephropathy, and other/unknown. Cardiovascular diseases comprised history of heart failure, cardiac heart disease, acute coronary syndrome, arrhythmia, peripheral arterial disease, and stroke. Respiratory diseases encompassed history of chronic respiratory insufficiency, asthma, and obstructive pulmonary disease. Hypertension was considered present if diastolic and/or systolic blood pressure was greater than 80 and 130 mm Hg, respectively, or under current antihypertensive therapy. SHPT symptoms experienced by patients were assessed using: 1) the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire.
This self-administered questionnaire, validated in dialysis patients with SHPT, assessed the severity of 14 SHPT symptoms (Table 1) experienced by patients using a visual analog scale (VAS) ranging from 0 (not experiencing the symptom at all) to 100 (experiencing the most extreme aspect of the symptom) [18-20]. This questionnaire was administered at the time of inclusion. PAS scores were analyzed as quantitative variables or as the proportion of patients with at least one symptom scoring more than 0; 2) the collection of clinical signs reported in medical records, such as osteoarticular pain, myasthenia, bone fractures, paresthesia, pruritus, tetany, and calciphylaxis. A patient was considered symptomatic if at least one symptom had a PAS score greater than 0 or was reported in the medical records. Specific symptoms assessed by the parathyroidectomy assessment of symptoms (PAS) questionnaire, a self-administered disease-specific outcome tool, in patients with secondary hyperparathyroidism (SHPT) The following biological parameters were collected at the time of inclusion: PTH, calcemia, phosphorus, vitamin D, alkaline phosphatase (ALP), albumin, hemoglobin and measured ionized calcium. KDIGO™ guidelines recommend maintaining PTH at 2- to 9-fold above the upper normal limit [8]. PTH was therefore described both as a binary variable (in or out of target) and as a quantitative variable (multiple of the upper normal limit). According to KDIGO™ guidelines, calcemia was classified as hypo-, normo- or hypercalcemia using 2.1 and 2.6 mmol/L as cut-off values, and phosphatemia as hypo-, normo- or hyperphosphatemia using 0.8 and 1.45 mmol/L as cut-off values. ALP was analyzed in 2 stages using the median as cut-off value, albumin in 2 stages using 25 g/L as cut-off value, and hemoglobin in 3 stages using 10 and 12 g/dL as cut-off values.
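The banding rules above can be sketched as small helpers. This is an illustrative sketch, not the study's code: the function names and the example values are ours; only the cut-offs (2.1/2.6 mmol/L for calcemia, 0.8/1.45 mmol/L for phosphatemia, and the 2- to 9-fold KDIGO PTH target) come from the text.

```python
def classify_calcemia(ca_mmol_l):
    """Band calcemia with the study's cut-offs: 2.1 and 2.6 mmol/L."""
    if ca_mmol_l < 2.1:
        return "hypocalcemia"
    if ca_mmol_l > 2.6:
        return "hypercalcemia"
    return "normocalcemia"

def classify_phosphatemia(p_mmol_l):
    """Band phosphatemia with the study's cut-offs: 0.8 and 1.45 mmol/L."""
    if p_mmol_l < 0.8:
        return "hypophosphatemia"
    if p_mmol_l > 1.45:
        return "hyperphosphatemia"
    return "normophosphatemia"

def pth_in_kdigo_target(pth_ng_l, upper_normal_limit_ng_l):
    """KDIGO target: PTH within 2 to 9 times the assay's upper normal limit."""
    ratio = pth_ng_l / upper_normal_limit_ng_l
    return 2 <= ratio <= 9

print(classify_calcemia(2.0))        # hypocalcemia
print(classify_phosphatemia(1.8))    # hyperphosphatemia
print(pth_in_kdigo_target(560, 65))  # True (560/65 ≈ 8.6, within 2-9x)
```

Expressing PTH as a multiple of the assay's upper normal limit, as done here, is what allows a single in/out-of-target flag despite the ten different assay standards reported below.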
Four technologies were used for PTH dosing: chemiluminometric technology (48%), electrochemiluminometric method (23%), immuno-enzymology (11%), immunochemiluminometric technology (16%), and unknown (2%); each kit was provided by several laboratories, which had different standards. All drugs acting on phospho-calcic metabolism were collected and classified into 4 groups: vitamin D and analogs, calcium supplementation, calcium-free phosphorus binders, and cinacalcet. A standardized form was used to collect data from medical records. A Steering Committee consisting of an epidemiologist (CLA) and a nephrologist (LF) reviewed all forms and medical records when collected biological data were outside international standards.
Ethics statement This study was conducted in compliance with French regulations concerning pharmaco-epidemiological studies [21]. Approvals from the French data protection agency (CNIL: n° 904163) and from the Advisory Committee on information processing research in the field of health located in the region of Lorraine (CCTIRS: n° 0428) were obtained through the national REIN registry. An information sheet was displayed in all dialysis units, and each patient was given an individual written information sheet at the initiation of dialysis.
Statistical analyses Patient characteristics were described as proportions for categorical variables and as means and standard deviations (SD) for continuous variables, except PAS scores, which were described as medians. Multivariate analyses using multiple correspondence analysis (MCA) and ascendant hierarchical clustering on the clinical, biological and therapeutic characteristics of SHPT were performed to identify subgroups of patients [22]. All these parameters were binary variables. MCA was applied to determine the major axes that best summarize the data [22]. This method gives a set of coordinates for the categories of variables, and thus reveals the relationships between the individuals and the different categories. Each principal component was interpreted in terms of the contribution of each category to the variance of the axis. The contribution of a variable was statistically significant when its mean was greater than 1/p (p = number of categories of variables). A graphical evaluation was built using the major components in a series of two-dimensional graphs. An ascendant hierarchical clustering was then used to determine the number of subgroups on the basis of the coordinates of the main components retained by MCA. The 4 clustered subgroups were numbered according to the order of selection for the classification. The validity of this method was measured by the “cubic clustering criterion”, with a cut-off of 2. After subgroups were selected, χ2 tests were performed to compare and highlight the parameters defining each distinct patient profile. The statistical analyses were performed using SAS software, version 9.2 (SAS Institute, North Carolina, US).
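The two-stage pipeline described above (MCA on binary indicators, then ascendant hierarchical clustering on the retained component coordinates) can be sketched as follows. This is a minimal illustration with synthetic data, not the study's SAS code: the variable names, the number of patients, and the choice of 3 retained axes are our assumptions; MCA is computed as a correspondence analysis of the complete disjunctive (indicator) matrix, and Ward linkage stands in for the ascendant hierarchical clustering.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical binary clinico-biological variables for 120 patients
# (e.g. hyperphosphatemia, hypocalcemia, high ALP, cinacalcet, vitamin D, calcium)
n, p = 120, 6
X = rng.integers(0, 2, size=(n, p))

# Complete disjunctive (indicator) matrix: one 0/1 column per category
Z = np.hstack([np.column_stack([X[:, j] == 0, X[:, j] == 1]).astype(float)
               for j in range(p)])

# MCA = correspondence analysis of the indicator matrix
P = Z / Z.sum()
r = P.sum(axis=1)                                    # row masses
c = P.sum(axis=0)                                    # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sig, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of the patients on the leading axes
k = 3
row_coords = (U[:, :k] * sig[:k]) / np.sqrt(r)[:, None]

# Ascendant hierarchical clustering (Ward) on the MCA coordinates,
# cut to 4 subgroups as in the study
tree = linkage(row_coords, method="ward")
labels = fcluster(tree, t=4, criterion="maxclust")
print(sorted(set(labels)))  # the four subgroup labels
```

The cubic clustering criterion used to validate the number of clusters is specific to SAS PROC CLUSTER and is not reproduced here; χ² comparisons between the resulting subgroups would follow as a separate step.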
Results
Patients A total of 2137 patients who had been on dialysis for at least 3 months, with PTH < 500 ng/L, were retrieved from the REIN registry – Region of Lorraine between 1st December, 2009 and 31st May, 2012. Among them, 305 patients were included in the EPHEYL study: 86% with an incident PTH ≥ 500 ng/L (n = 261), 14% with an initiation of cinacalcet (n = 44), and 0% with a first-line parathyroidectomy (Figure 1). Disposition of patients. aParathyroid hormone. There was no statistically significant difference in socio-demographic and clinical characteristics between the two groups defined by the inclusion criteria (Table 2). Regarding PTH levels, 10 different values for the upper normal limit were obtained, ranging from 38.8 to 638 ng/L (median value: 560 ng/L). Despite these high values, PTH remained in the KDIGO™ target range in 64% of patients. The distribution of PTH according to multiples of the upper normal limit revealed that 60% of patients maintained PTH up to 8-fold above the upper normal limit (Figure 2). Socio-demographic, clinical and biological characteristics of the EPHEYL population according to inclusion criteria aParathyroid hormone; bResults are presented as mean ± SD; cBody mass index; dParathyroidectomy Assessment of Symptoms; eMedian value; fThe 2009 updated KDIGOTM (Kidney Disease: Improving Global Outcomes) guidelines recommend a target range for serum PTH of 2–9 times the upper normal limit [8]. Distribution of parathyroid hormone (PTH) according to multiples of the upper normal limit for the assays in the EPHEYL study. Among the 44 patients treated with cinacalcet, 36 had PTH within the KDIGO™ target range before initiating the treatment. At 3 months after inclusion, 19 patients (43%) had discontinued treatment with cinacalcet.
Cluster analysis Ascendant hierarchical clustering identified four subgroups of patients according to their SHPT profiles, as shown in Figure 3. The “cubic clustering criterion” of the model was 14, higher than the cut-off of 2, validating the classification. Identification of four distinct subgroups of dialysis patients with secondary hyperparathyroidism (SHPT) using multiple correspondence analyses.
The horizontal axis defined the presence or absence of calcium supplementation, the presence or absence of treatment with cinacalcet, and serum PTH below or above 500 ng/L. The vertical axis defined normophosphatemia, hyperphosphatemia, the absence or presence of phosphorus binders, a high or low level of alkaline phosphatase, the presence or absence of vitamin D supplementation, and the presence or absence of calcium supplementation. Each patient is identified by a number and a color according to the following code: black for group 1 (“intermediate”), green for group 2 (younger with severe cardiovascular comorbidities), blue for group 3 (elderly patients with few cardiovascular comorbidities), pink for group 4 (“cinacalcet prescription”). The four clustered subgroups of patients were named according to their main characteristics, with regard to variables used (Table 3) or not used (Table 4) for clustering patients: Characteristics of dialysis subgroups identified at the time of secondary hyperparathyroidism (SHPT) diagnosis: variables used to cluster dialysis patients at the time of SHPT diagnosis aGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bParathyroidectomy Assessment of Symptoms questionnaire; cParathyroid hormone; dResults are presented as mean ± SD; p corresponds to ANOVA. Characteristics of dialysis subgroups identified at the time of secondary hyperparathyroidism (SHPT) diagnosis: variables not used to cluster dialysis patients at the time of SHPT diagnosis aGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular co-morbidities, Group 4: patients who initiated cinacalcet; bResults are presented as mean ± SD; cBody mass index; p corresponds to ANOVA.
– “Intermediate” patients (Group 1, 37%): patients with hyperphosphatemia without hypocalcemia, sharing similar characteristics with the next group of patients (Group 2) but better controlled – Younger patients with severe cardiovascular comorbidities (Group 2, 24%): most often obese or diabetic patients, with a shorter dialysis vintage, mainly with hyperphosphatemia and hypocalcemia despite multiple medical treatments, suggesting poor adherence – Elderly patients with few cardiovascular comorbidities (Group 3, 25%): rarely obese, with a longer dialysis vintage, mainly with normophosphatemia and normocalcemia despite few patients receiving SHPT treatment, with a health status appearing, at first, much better than that of group 2 – Patients who initiated cinacalcet (Group 4, 14%): 42 of the 44 patients who initiated cinacalcet, plus one other patient, were classified into a clearly distinct subgroup (Figure 3). Two patients treated with cinacalcet were classified into other groups.
Conclusion
In conclusion, four significantly distinct profiles of dialysis patients with a recent severe SHPT diagnosis were identified on the basis of routinely available clinical, biological and therapeutic data. Our well-characterized incident cohort, coupled with an original methodological approach, provides a contemporary picture of daily clinical practice. Our study reinforces that the benefit-risk balance of cinacalcet is not positive in patients with low PTH, and raises the question of whether cinacalcet should be contraindicated in such patients. A “one-size-fits-all” PTH target approach is probably not appropriate. Therapeutic management needs to be adjusted to the four different phenotypes.
[ "Background", "Inclusion criteria", "Data collection", "Ethics statement", "Statistical analyses", "Patients", "Cluster analysis", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "In the 70’s, secondary hyperparathyroidism (SHPT) was described as a severe bone disease occurring in young end-stage renal disease (ESRD) patients with significant duration of dialysis. When parathyroid hormone (PTH) was very high, up to 1000 ng/L, associated with hypercalcemia, the only treatment was subtotal parathyroïdectomy [1,2]. In the 90’s, access to kidney transplantation for the young and dialysis for the old led to a rapid ageing of dialyzed population [3]. During the first decade of the millennium, a new paradigm emerged. First, SHPT was considered not only as a bone disease, but also as a vascular disease [4]. Second, SHPT turned out to be a biological rather than a clinical disease at bedside [5]. A “one-size-fits-all” approach was recommended: PTH under 300ng/L from 2003 to 2009 according to K-DOQI [6]. Due to variability in PTH measurement, the target was modified in 2009: “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay” [7,8]. Third, cinacalcet tends to be seen by the clinicians as the most appropriate solution for the treatment of SHPT due to its mechanism of action, when conventional therapy is not effective enough [6,9]. But the randomized controlled EVOLVE study, published in 2012, failed to demonstrate efficacy of cinacalcet to reduce the risk of death or major cardiovascular events [10]. Today, the exact importance of PTH is still debated [11].\nLarge observational cross-sectional studies about SHPT have recently been published [12-15]. An incidence/prevalence bias may have hampered a precise description of SHPT phenotypes [16]. In order to capture the phenotypes of SHPT at bedside, we meticulously enrolled all dialysis patients of the REIN registry - Region of Lorraine with newly marked PTH elevation in a prospective observational study from December 2009 to May 2012. At inclusion, we delivered a validated questionnaire to measure clinical symptoms. 
With an original statistical analysis, we demonstrated that high PTH levels corresponded to 4 very different phenotype profiles, suggesting that a “one-size-fits-all” target approach for SHPT was not appropriate.", "Adult patients included in EPHEYL were on dialysis (hemodialysis or peritoneal dialysis) for at least 3 months with one of the following criteria: for the first time 1) PTH ≥ 500 ng/L, 2) initiation of cinacalcet, 3) parathyroidectomy if severe SHPT. The PTH cut-off value of 500 ng/L was chosen based on the 2003 K-DOQI guidelines [6]. Indeed, when we initiated the study, the updated KDIGO recommendations were not yet in effect, and PTH levels between 150 and 300 ng/L were advocated [6,8]. The indication for parathyroidectomy or the use of a calcimimetic was retained when PTH level was ≥ 500 ng/L, hence the choice of this threshold in our study.\nFrom 1st December, 2009 to 31st May, 2012, all patients who were on dialysis for at least 3 months were identified through the REIN registry - Region of Lorraine [17]. The occurrence of one out of the three inclusion criteria was prospectively followed up in all these patients.\nPatients were included in the study at the time of PTH measurement, initiation of cinacalcet, or parathyroidectomy.\nPhysicians were encouraged to adhere to the KDIGO™ guidelines updated in 2009 [8].", "The following socio-demographic and clinical data were retrieved from the REIN registry: age, gender, body mass index (BMI), type of dialysis, dialysis vintage, primary etiology of nephropathy, comorbidities (smoking, diabetes, cardiovascular diseases, hypertension, respiratory diseases and cancer), and being on the renal transplant waiting list [17]. The vast majority of patients were Caucasian.\nBMI was described as a continuous quantitative variable and obesity (BMI >30 kg/m2) as a binary variable. 
Primary etiology of nephropathy was classified into diabetic nephropathy, vascular nephropathy, glomerulonephritis, pyelonephritis, hereditary nephropathy, and other/unknown. Cardiovascular diseases comprised history of heart failure, cardiac heart disease, acute coronary syndrome, arrhythmia, peripheral arterial disease, and stroke. Respiratory diseases encompassed history of chronic respiratory insufficiency, asthma, and obstructive pulmonary disease. Hypertension was considered present if: diastolic and/or systolic blood pressure greater than 80 and 130 mm Hg, respectively, or a current antihypertensive therapy.\nSHPT symptoms experienced by patients were assessed by using: 1) the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire. This self-administered questionnaire validated in dialysis patients with SHPT assessed the severity of 14 SHPT symptoms (Table 1) experienced by patients using a visual analog scale (VAS) which ranged from 0 (not experiencing any symptom) to 100 (experiencing the most extreme aspect of the symptom) [18-20]. This questionnaire was administered at the time of inclusion. PAS scores were analyzed as quantitative variables or proportion of patients with at least one symptom scoring more than 0; 2) the collection of clinical signs reported in medical records such as osteoarticular pain, myasthenia, bone fractures, paresthesia, pruritus, tetany, and calciphylaxis. A patient was symptomatic if at least one symptom had a PAS score greater than 0 or was reported in his medical records.\nSpecific symptoms assessed by the parathyroidectomy assessment of symptoms (PAS) questionnaire, a self-administered disease-specific outcome tool, in patients with secondary hyperparathyroidism (SHPT)\nThe following biological parameters were collected at the time of inclusion: PTH, calcemia, phosphorus, vitamin D, alkaline phosphatase (ALP), albumin, hemoglobin and measured ionized calcium. 
KDIGO™ guidelines have recommended maintaining PTH at 2- to 9-fold the upper normal limit [8]. PTH was therefore described as a binary variable (in or out of target) and as a quantitative variable (multiple of the upper normal limit). According to KDIGO™ guidelines, calcemia was classified as hypo-, normo- or hypercalcemia using 2.1 and 2.6 mmol/L as cut-off values; phosphatemia as hypo-, normo- or hyperphosphatemia using 0.8 and 1.45 mmol/L as cut-off values. ALP was analyzed according to 2 stages using medians as cut-off values, albuminemia according to 2 stages using 25 g/L as cut-off value, and hemoglobin according to 3 stages using 10 and 12 g/dL as cut-off values.\nFour technologies were used for PTH measurement: chemiluminometric technology (48%), electrochemiluminometric method (23%), immuno-enzymology (11%), immunochemiluminometric technology (16%), and unknown (2%); each kit was provided by several laboratories which had different standards.\nAll drugs acting on phospho-calcic metabolism were collected and classified into 4 groups: vitamin D and analogs, calcium supplementation, calcium-free phosphorus binders, and cinacalcet.\nA standardized form was used to collect data from medical records. A Steering Committee consisting of an epidemiologist (CLA) and a nephrologist (LF) reviewed all forms and medical records when collected biological data were out of international standards.", "This study was conducted in compliance with French regulations concerning pharmaco-epidemiological studies [21]. Approvals from the French data protection agency (CNIL: n° 904163) and from the Advisory Committee on information processing research in the field of health located in the region of Lorraine (CCTIRS: n° 0428) were obtained through the national REIN registry. 
An information sheet was displayed in all dialysis units, and each patient was given an individual written information sheet at the initiation of dialysis.", "Patient characteristics were described as proportions for categorical variables, and means and standard deviations (SD) for continuous variables, except PAS scores, which were described as medians.\nMultivariate analyses using multiple correspondence analysis (MCA) and ascendant hierarchical clustering on clinical, biological and therapeutic characteristics of SHPT were performed to identify subgroups of patients [22]. All these parameters were binary variables.\nMCA was applied to determine the major axes that best summarize the data [22]. This method gives a set of coordinates of the categories of variables, and thus reveals the relationships between the individuals and the different categories. Each principal component was interpreted in terms of the contribution of each category to the variance of the axis. The contribution of a variable was statistically significant when its mean was greater than 1/p (p = number of categories of variables). Graphical evaluation was performed using the major components in a series of two-dimensional graphs.\nThen an ascendant hierarchical clustering was used to determine the number of subgroups on the basis of the coordinates of the main components retained by MCA. The 4 clustered subgroups were numbered according to the order in which they were selected during classification. The validity of this method was measured by the “cubic clustering criterion” with a cut-off of 2. 
After subgroups were selected, χ2 tests were performed to compare and highlight parameters defining each distinct profile of patients.\nThe statistical analyses were performed using SAS software, version 9.2 (SAS Institute, North Carolina, US).", "A total of 2137 patients who were on dialysis for at least 3 months, with PTH < 500 ng/L, were retrieved from the REIN registry – Region of Lorraine between 1st December, 2009 and 31st May, 2012. Among them, 305 patients were included in the EPHEYL study: 86% with an incident PTH ≥500 ng/L (n = 261), 14% with an initiation of cinacalcet (n = 44), and 0% with a first-line parathyroidectomy (Figure 1).\nDisposition of patients. aParathyroid hormone.\nThere was no statistically significant difference in socio-demographic and clinical characteristics between the two groups according to inclusion criteria (Table 2). Regarding PTH levels, 10 different values for the upper normal limit were obtained across assays; PTH values ranged from 38.8 to 638 ng/L (median value: 560 ng/L). Despite these high values, PTH remained in the KDIGO™ target range in 64% of patients. The distribution of PTH according to multiples of the upper normal limit revealed that 60% of patients had PTH no more than 8-fold above the upper normal limit (Figure 2).\nSocio-demographic, clinical and biological characteristics for the EPHEYL population according to inclusion criteria\naParathyroid hormone; bResults are presented as mean ± SD; cBody mass index; dParathyroidectomy Assessment of Symptoms; eMedian value; fThe 2009 updated KDIGO™ (Kidney Disease: Improving Global Outcomes) guidelines have recommended a target range for serum PTH of 2–9 times the upper normal limit [8].\nDistribution of parathyroid hormone (PTH) according to multiples of the upper normal limit for the assays in the EPHEYL study.\nAmong the 44 patients treated with cinacalcet, 36 patients had PTH within the KDIGO™ target range before initiating the treatment. 
Within 3 months of inclusion, 19 patients (43%) had discontinued treatment with cinacalcet.", "Ascendant hierarchical clustering identified four subgroups of patients according to their SHPT profiles, as shown in Figure 3. The “cubic clustering criterion” of the model was 14, higher than the cut-off of 2, validating the classification.\nIdentification of four distinct subgroups of dialysis patients with secondary hyperparathyroidism (SHPT) using multiple correspondence analyses. The horizontal axis defined the presence or absence of calcium supplementation, the presence or absence of treatment with cinacalcet, and serum PTH below or above 500 ng/L. The vertical axis defined normophosphatemia, hyperphosphatemia, the absence or presence of phosphorus binders, high or low levels of alkaline phosphatase, the presence or absence of vitamin D supplementation, and the presence or absence of calcium supplementation. Each patient is identified by a number and a color according to the following code: black for group 1 (“intermediate”), green for group 2 (younger with severe cardiovascular comorbidities), blue for group 3 (elderly patients with few cardiovascular comorbidities), pink for group 4 (“cinacalcet prescription”).\nThe four clustered subgroups of patients were named according to their main characteristics regarding variables used (Table 3) or not used for clustering patients (Table 4):\nCharacteristics of dialysis subgroups identified at time of secondary hyperparathyroidism (SHPT) diagnosis: variables used to cluster dialysis patients at time of SHPT diagnosis\naGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bParathyroidectomy Assessment of Symptoms questionnaire; cParathyroid hormone; dResults are presented as mean ± SD; p corresponds to ANOVA.\nCharacteristics of dialysis subgroups identified at time of secondary 
hyperparathyroidism (SHPT) diagnosis: variables not used to cluster dialysis patients at time of SHPT diagnosis\naGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bResults are presented as mean ± SD; cBody mass index; p corresponds to ANOVA.\n– “Intermediate” patients (Group 1, 37%): patients with hyperphosphatemia without hypocalcemia, sharing similar characteristics with the next group of patients (Group 2) but better controlled\n– Younger patients with severe cardiovascular comorbidities (Group 2, 24%): most often obese or diabetic patients, with shorter dialysis vintage, mainly with hyperphosphatemia and hypocalcemia despite multiple medical treatments, suggesting poor adherence\n– Elderly patients with few cardiovascular comorbidities (Group 3, 25%): rarely obese, with longer dialysis vintage, mainly with normophosphatemia and normocalcemia despite few patients receiving SHPT treatment, with health status appearing to be, at first, much better than that of group 2\n– Patients who initiated cinacalcet (Group 4, 14%): 42 of the 44 patients who initiated cinacalcet, plus one other patient, were classified into a clearly distinct subgroup (Figure 3). Two patients treated with cinacalcet were classified into other groups.", "The authors declare that they have no competing interests.", "EL conceived the study, participated in its design, analysis and interpretation of data, and drafted the manuscript. CA was involved in the design of the study, acquisition, analysis and interpretation of data, and was involved in the statistical analysis. MLE participated in the design of the study and performed the statistical analysis. MK was involved in the design of the study and interpretation of the data. SB was accountable for all aspects of the analysis of data. 
LB was involved in the design of the study and interpretation of the data. LF participated in the general supervision of the work, helped to draft the manuscript, and revised it critically for important intellectual content. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2369/15/132/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Inclusion criteria", "Data collection", "Ethics statement", "Statistical analyses", "Results", "Patients", "Cluster analysis", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "In the 70’s, secondary hyperparathyroidism (SHPT) was described as a severe bone disease occurring in young end-stage renal disease (ESRD) patients with significant duration of dialysis. When parathyroid hormone (PTH) was very high, up to 1000 ng/L, associated with hypercalcemia, the only treatment was subtotal parathyroidectomy [1,2]. In the 90’s, access to kidney transplantation for the young and dialysis for the old led to a rapid ageing of the dialyzed population [3]. During the first decade of the millennium, a new paradigm emerged. First, SHPT was considered not only as a bone disease, but also as a vascular disease [4]. Second, SHPT turned out to be a biological rather than a clinical disease at bedside [5]. A “one-size-fits-all” approach was recommended: PTH under 300 ng/L from 2003 to 2009 according to K-DOQI [6]. Due to variability in PTH measurement, the target was modified in 2009: “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay” [7,8]. Third, cinacalcet tends to be seen by clinicians as the most appropriate solution for the treatment of SHPT due to its mechanism of action, when conventional therapy is not effective enough [6,9]. But the randomized controlled EVOLVE study, published in 2012, failed to demonstrate efficacy of cinacalcet in reducing the risk of death or major cardiovascular events [10]. Today, the exact importance of PTH is still debated [11].\nLarge observational cross-sectional studies about SHPT have recently been published [12-15]. An incidence/prevalence bias may have hampered a precise description of SHPT phenotypes [16]. In order to capture the phenotypes of SHPT at bedside, we meticulously enrolled all dialysis patients of the REIN registry - Region of Lorraine with newly marked PTH elevation in a prospective observational study from December 2009 to May 2012. At inclusion, we delivered a validated questionnaire to measure clinical symptoms. 
With an original statistical analysis, we demonstrated that high PTH levels corresponded to 4 very different phenotype profiles, suggesting that a “one-size-fits-all” target approach for SHPT was not appropriate.", "The pharmacoepidemiological EPHEYL (Étude PHarmacoÉpidémiologique de l’hYperparathyroïdie secondaire en Lorraine) study is a 2-year, open-cohort, prospective, observational study on incident SHPT, i.e. newly diagnosed, with a 2-year follow-up, set in the 12 dialysis units located in the French region of Lorraine (public or private).\n Inclusion criteria Adult patients included in EPHEYL were on dialysis (hemodialysis or peritoneal dialysis) for at least 3 months with one of the following criteria: for the first time 1) PTH ≥ 500 ng/L, 2) initiation of cinacalcet, 3) parathyroidectomy if severe SHPT. The PTH cut-off value of 500 ng/L was chosen based on the 2003 K-DOQI guidelines [6]. Indeed, when we initiated the study, the updated KDIGO recommendations were not yet in effect, and PTH levels between 150 and 300 ng/L were advocated [6,8]. The indication for parathyroidectomy or the use of a calcimimetic was retained when PTH level was ≥ 500 ng/L, hence the choice of this threshold in our study.\nFrom 1st December, 2009 to 31st May, 2012, all patients who were on dialysis for at least 3 months were identified through the REIN registry - Region of Lorraine [17]. The occurrence of one out of the three inclusion criteria was prospectively followed up in all these patients.\nPatients were included in the study at the time of PTH measurement, initiation of cinacalcet, or parathyroidectomy.\nPhysicians were encouraged to adhere to the KDIGO™ guidelines updated in 2009 [8].\nAdult patients included in EPHEYL were on dialysis (hemodialysis or peritoneal dialysis) for at least 3 months with one of the following criteria: for the first time 1) PTH ≥ 500 ng/L, 2) initiation of cinacalcet, 3) parathyroidectomy if severe SHPT. 
The PTH cut-off value of 500 ng/L was chosen based on the 2003 K-DOQI guidelines [6]. Indeed, when we initiated the study, the updated KDIGO recommendations were not yet in effect, and PTH levels between 150 and 300 ng/L were advocated [6,8]. The indication for parathyroidectomy or the use of a calcimimetic was retained when PTH level was ≥ 500 ng/L, hence the choice of this threshold in our study.\nFrom 1st December, 2009 to 31st May, 2012, all patients who were on dialysis for at least 3 months were identified through the REIN registry - Region of Lorraine [17]. The occurrence of one out of the three inclusion criteria was prospectively followed up in all these patients.\nPatients were included in the study at the time of PTH measurement, initiation of cinacalcet, or parathyroidectomy.\nPhysicians were encouraged to adhere to the KDIGO™ guidelines updated in 2009 [8].\n Data collection The following socio-demographic and clinical data were retrieved from the REIN registry: age, gender, body mass index (BMI), type of dialysis, dialysis vintage, primary etiology of nephropathy, comorbidities (smoking, diabetes, cardiovascular diseases, hypertension, respiratory diseases and cancer), and being on the renal transplant waiting list [17]. The vast majority of patients were Caucasian.\nBMI was described as a continuous quantitative variable and obesity (BMI >30 kg/m2) as a binary variable. Primary etiology of nephropathy was classified into diabetic nephropathy, vascular nephropathy, glomerulonephritis, pyelonephritis, hereditary nephropathy, and other/unknown. Cardiovascular diseases comprised history of heart failure, cardiac heart disease, acute coronary syndrome, arrhythmia, peripheral arterial disease, and stroke. Respiratory diseases encompassed history of chronic respiratory insufficiency, asthma, and obstructive pulmonary disease. 
Hypertension was considered present if: diastolic and/or systolic blood pressure greater than 80 and 130 mm Hg, respectively, or a current antihypertensive therapy.\nSHPT symptoms experienced by patients were assessed by using: 1) the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire. This self-administered questionnaire validated in dialysis patients with SHPT assessed the severity of 14 SHPT symptoms (Table 1) experienced by patients using a visual analog scale (VAS) which ranged from 0 (not experiencing any symptom) to 100 (experiencing the most extreme aspect of the symptom) [18-20]. This questionnaire was administered at the time of inclusion. PAS scores were analyzed as quantitative variables or proportion of patients with at least one symptom scoring more than 0; 2) the collection of clinical signs reported in medical records such as osteoarticular pain, myasthenia, bone fractures, paresthesia, pruritus, tetany, and calciphylaxis. A patient was symptomatic if at least one symptom had a PAS score greater than 0 or was reported in his medical records.\nSpecific symptoms assessed by the parathyroidectomy assessment of symptoms (PAS) questionnaire, a self-administered disease-specific outcome tool, in patients with secondary hyperparathyroidism (SHPT)\nThe following biological parameters were collected at the time of inclusion: PTH, calcemia, phosphorus, vitamin D, alkaline phosphatase (ALP), albumin, hemoglobin and measured ionized calcium. KDIGO™ guidelines have recommended maintaining PTH at 2- to 9-fold the upper normal limit [8]. PTH was therefore described as a binary variable (in or out of target) and as a quantitative variable (multiple of the upper normal limit). According to KDIGO™ guidelines, calcemia was classified as hypo-, normo- or hypercalcemia using 2.1 and 2.6 mmol/L as cut-off values; phosphatemia as hypo-, normo- or hyperphosphatemia using 0.8 and 1.45 mmol/L as cut-off values. 
ALP was analyzed according to 2 stages using medians as cut-off values, albuminemia according to 2 stages using 25 g/L as cut-off value, and hemoglobin according to 3 stages using 10 and 12 g/dL as cut-off values.\nFour technologies were used for PTH measurement: chemiluminometric technology (48%), electrochemiluminometric method (23%), immuno-enzymology (11%), immunochemiluminometric technology (16%), and unknown (2%); each kit was provided by several laboratories which had different standards.\nAll drugs acting on phospho-calcic metabolism were collected and classified into 4 groups: vitamin D and analogs, calcium supplementation, calcium-free phosphorus binders, and cinacalcet.\nA standardized form was used to collect data from medical records. A Steering Committee consisting of an epidemiologist (CLA) and a nephrologist (LF) reviewed all forms and medical records when collected biological data were out of international standards.\nThe following socio-demographic and clinical data were retrieved from the REIN registry: age, gender, body mass index (BMI), type of dialysis, dialysis vintage, primary etiology of nephropathy, comorbidities (smoking, diabetes, cardiovascular diseases, hypertension, respiratory diseases and cancer), and being on the renal transplant waiting list [17]. The vast majority of patients were Caucasian.\nBMI was described as a continuous quantitative variable and obesity (BMI >30 kg/m2) as a binary variable. Primary etiology of nephropathy was classified into diabetic nephropathy, vascular nephropathy, glomerulonephritis, pyelonephritis, hereditary nephropathy, and other/unknown. Cardiovascular diseases comprised history of heart failure, cardiac heart disease, acute coronary syndrome, arrhythmia, peripheral arterial disease, and stroke. Respiratory diseases encompassed history of chronic respiratory insufficiency, asthma, and obstructive pulmonary disease. 
Hypertension was considered present if: diastolic and/or systolic blood pressure greater than 80 and 130 mm Hg, respectively, or a current antihypertensive therapy.\nSHPT symptoms experienced by patients were assessed by using: 1) the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire. This self-administered questionnaire validated in dialysis patients with SHPT assessed the severity of 14 SHPT symptoms (Table 1) experienced by patients using a visual analog scale (VAS) which ranged from 0 (not experiencing any symptom) to 100 (experiencing the most extreme aspect of the symptom) [18-20]. This questionnaire was administered at the time of inclusion. PAS scores were analyzed as quantitative variables or proportion of patients with at least one symptom scoring more than 0; 2) the collection of clinical signs reported in medical records such as osteoarticular pain, myasthenia, bone fractures, paresthesia, pruritus, tetany, and calciphylaxis. A patient was symptomatic if at least one symptom had a PAS score greater than 0 or was reported in his medical records.\nSpecific symptoms assessed by the parathyroidectomy assessment of symptoms (PAS) questionnaire, a self-administered disease-specific outcome tool, in patients with secondary hyperparathyroidism (SHPT)\nThe following biological parameters were collected at the time of inclusion: PTH, calcemia, phosphorus, vitamin D, alkaline phosphatase (ALP), albumin, hemoglobin and measured ionized calcium. KDIGO™ guidelines have recommended maintaining PTH at 2- to 9-fold the upper normal limit [8]. PTH was therefore described as a binary variable (in or out of target) and as a quantitative variable (multiple of the upper normal limit). According to KDIGO™ guidelines, calcemia was classified as hypo-, normo- or hypercalcemia using 2.1 and 2.6 mmol/L as cut-off values; phosphatemia as hypo-, normo- or hyperphosphatemia using 0.8 and 1.45 mmol/L as cut-off values. 
ALP was analyzed according to 2 stages using medians as cut-off values, albuminemia according to 2 stages using 25 g/L as cut-off value, and hemoglobin according to 3 stages using 10 and 12 g/dL as cut-off values.\nFour technologies were used for PTH measurement: chemiluminometric technology (48%), electrochemiluminometric method (23%), immuno-enzymology (11%), immunochemiluminometric technology (16%), and unknown (2%); each kit was provided by several laboratories which had different standards.\nAll drugs acting on phospho-calcic metabolism were collected and classified into 4 groups: vitamin D and analogs, calcium supplementation, calcium-free phosphorus binders, and cinacalcet.\nA standardized form was used to collect data from medical records. A Steering Committee consisting of an epidemiologist (CLA) and a nephrologist (LF) reviewed all forms and medical records when collected biological data were out of international standards.\n Ethics statement This study was conducted in compliance with French regulations concerning pharmaco-epidemiological studies [21]. Approvals from the French data protection agency (CNIL: n° 904163) and from the Advisory Committee on information processing research in the field of health located in the region of Lorraine (CCTIRS: n° 0428) were obtained through the national REIN registry. An information sheet was displayed in all dialysis units, and each patient was given an individual written information sheet at the initiation of dialysis.\nThis study was conducted in compliance with French regulations concerning pharmaco-epidemiological studies [21]. Approvals from the French data protection agency (CNIL: n° 904163) and from the Advisory Committee on information processing research in the field of health located in the region of Lorraine (CCTIRS: n° 0428) were obtained through the national REIN registry. 
An information sheet was displayed in all dialysis units, and each patient was given an individual written information sheet at the initiation of dialysis.\n Statistical analyses Patient characteristics were described as proportions for categorical variables, and means and standard deviations (SD) for continuous variables, except PAS scores described as medians.\nMultivariate analyses using multiple correspondence analysis (MCA) and ascendant hierarchical clustering on clinical, biological and therapeutic characteristics of SHPT were performed to identify subgroups of patients [22]. All these parameters were binary variables.\nMCA was applied to determine the major axes summarizing more clearly data [22]. This method gives a set of coordinates of the categories of variables, and thus reveals the relationships between the individuals and the different categories. Each principal component was interpreted in terms of amount of contribution for each category to variance of axis. The contribution of a variable was statistically significant when its mean was greater than 1/p, (p = number of categories of variables). Graphical evaluation was built using the major components in a series of two-dimensional graphs.\nThen an ascendant hierarchical clustering was used to determine the number of subgroups on the basis of coordinates of the main components retained by MCA. The 4 clustered subgroups were numbered according to the order of the selection for the classification. The validity of this method was measured by the “cubic clustering criterion” with a cut-off of 2. 
After subgroups were selected, χ2 tests were performed to compare and highlight parameters defining each distinct profile of patients.\nThe statistical analyses were performed using SAS software, version 9.2 (SAS Institute, North Carolina, US).\nPatient characteristics were described as proportions for categorical variables, and means and standard deviations (SD) for continuous variables, except PAS scores described as medians.\nMultivariate analyses using multiple correspondence analysis (MCA) and ascendant hierarchical clustering on clinical, biological and therapeutic characteristics of SHPT were performed to identify subgroups of patients [22]. All these parameters were binary variables.\nMCA was applied to determine the major axes summarizing more clearly data [22]. This method gives a set of coordinates of the categories of variables, and thus reveals the relationships between the individuals and the different categories. Each principal component was interpreted in terms of amount of contribution for each category to variance of axis. The contribution of a variable was statistically significant when its mean was greater than 1/p, (p = number of categories of variables). Graphical evaluation was built using the major components in a series of two-dimensional graphs.\nThen an ascendant hierarchical clustering was used to determine the number of subgroups on the basis of coordinates of the main components retained by MCA. The 4 clustered subgroups were numbered according to the order of the selection for the classification. The validity of this method was measured by the “cubic clustering criterion” with a cut-off of 2. 
After subgroups were selected, χ2 tests were performed to compare and highlight parameters defining each distinct profile of patients.\nThe statistical analyses were performed using SAS software, version 9.2 (SAS Institute, North Carolina, US).", "Adult patients included in EPHEYL were on dialysis (hemodialysis or peritoneal dialysis) for at least 3 months with one of the following criteria: for the first time 1) PTH ≥ 500 ng/L, 2) initiation of cinacalcet, 3) parathyroidectomy if severe SHPT. The PTH cut-off value of 500 ng/L was chosen based on the 2003 K-DOQI guidelines [6]. Indeed, when we initiated the study, the updated KDIGO recommendations were not yet in effect, and PTH levels between 150 and 300 ng/L were advocated [6,8]. The indication for parathyroidectomy or the use of a calcimimetic was retained when PTH level was ≥ 500 ng/L, hence the choice of this threshold in our study.\nFrom 1st December, 2009 to 31st May, 2012, all patients who were on dialysis for at least 3 months were identified through the REIN registry - Region of Lorraine [17]. The occurrence of one out of the three inclusion criteria was prospectively followed up in all these patients.\nPatients were included in the study at the time of PTH measurement, initiation of cinacalcet, or parathyroidectomy.\nPhysicians were encouraged to adhere to the KDIGO™ guidelines updated in 2009 [8].", "The following socio-demographic and clinical data were retrieved from the REIN registry: age, gender, body mass index (BMI), type of dialysis, dialysis vintage, primary etiology of nephropathy, comorbidities (smoking, diabetes, cardiovascular diseases, hypertension, respiratory diseases and cancer), and being on the renal transplant waiting list [17]. The vast majority of patients were Caucasian.\nBMI was described as a continuous quantitative variable and obesity (BMI >30 kg/m2) as a binary variable. 
Primary etiology of nephropathy was classified into diabetic nephropathy, vascular nephropathy, glomerulonephritis, pyelonephritis, hereditary nephropathy, and other/unknown. Cardiovascular diseases comprised a history of heart failure, cardiac heart disease, acute coronary syndrome, arrhythmia, peripheral arterial disease, and stroke. Respiratory diseases encompassed a history of chronic respiratory insufficiency, asthma, and obstructive pulmonary disease. Hypertension was considered present if diastolic blood pressure was greater than 80 mm Hg, systolic blood pressure was greater than 130 mm Hg, or the patient was on antihypertensive therapy.\nSHPT symptoms experienced by patients were assessed using: 1) the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire. This self-administered questionnaire, validated in dialysis patients with SHPT, assessed the severity of 14 SHPT symptoms (Table 1) using a visual analog scale (VAS) ranging from 0 (not experiencing the symptom at all) to 100 (experiencing the most extreme aspect of the symptom) [18-20]. It was administered at the time of inclusion. PAS scores were analyzed as quantitative variables or as the proportion of patients with at least one symptom scoring more than 0; 2) the collection of clinical signs reported in medical records, such as osteoarticular pain, myasthenia, bone fractures, paresthesia, pruritus, tetany, and calciphylaxis. A patient was considered symptomatic if at least one symptom had a PAS score greater than 0 or was reported in the medical records.\nSpecific symptoms assessed by the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire, a self-administered disease-specific outcome tool, in patients with secondary hyperparathyroidism (SHPT)\nThe following biological parameters were collected at the time of inclusion: PTH, calcemia, phosphorus, vitamin D, alkaline phosphatase (ALP), albumin, hemoglobin and measured ionized calcium. 
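The binary definitions above (symptomatic status and hypertension) amount to simple predicates. The sketch below is an illustrative reading of the stated criteria; function and argument names are hypothetical, not from the study protocol.

```python
def is_symptomatic(pas_scores, record_symptoms):
    """Symptomatic if any of the 14 PAS items scores > 0 on the 0-100 VAS,
    or if any SHPT sign (e.g. pruritus, tetany) appears in the medical record."""
    return any(score > 0 for score in pas_scores) or bool(record_symptoms)

def has_hypertension(systolic_mmhg, diastolic_mmhg, on_antihypertensives):
    """Hypertensive if diastolic > 80 mm Hg, systolic > 130 mm Hg,
    or the patient is on current antihypertensive therapy."""
    return diastolic_mmhg > 80 or systolic_mmhg > 130 or on_antihypertensives
```

For example, a patient with all 14 PAS items at 0 but pruritus noted in the chart would still count as symptomatic.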
The KDIGO™ guidelines recommend maintaining PTH at approximately 2 to 9 times the upper normal limit [8]. PTH was therefore described both as a binary variable (in or out of target) and as a quantitative variable (multiple of the upper normal limit). According to the KDIGO™ guidelines, calcemia was classified as hypo-, normo- or hypercalcemia using 2.1 and 2.6 mmol/L as cut-off values, and phosphatemia as hypo-, normo- or hyperphosphatemia using 0.8 and 1.45 mmol/L as cut-off values. ALP was analyzed in 2 stages using the median as the cut-off value, albumin in 2 stages using 25 g/L as the cut-off value, and hemoglobin in 3 stages using 10 and 12 g/dL as cut-off values.\nFour assay technologies were used for PTH measurement: chemiluminometric (48%), electrochemiluminometric (23%), immuno-enzymatic (11%), and immunochemiluminometric (16%); the method was unknown in 2% of cases. Each kit was provided by several laboratories with different standards.\nAll drugs acting on phospho-calcic metabolism were collected and classified into 4 groups: vitamin D and analogs, calcium supplementation, calcium-free phosphorus binders, and cinacalcet.\nA standardized form was used to collect data from medical records. A Steering Committee consisting of an epidemiologist (CLA) and a nephrologist (LF) reviewed all forms and medical records whenever collected biological data were outside international standards.", "This study was conducted in compliance with French regulations concerning pharmacoepidemiological studies [21]. Approvals from the French data protection agency (CNIL: n° 904163) and from the Advisory Committee on information processing in health research for the region of Lorraine (CCTIRS: n° 0428) were obtained through the national REIN registry. 
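The biochemical staging rules described above translate directly into threshold functions. The sketch below assumes that values exactly on a cut-off fall into the middle ("normo") band, which the text does not specify; all names are illustrative.

```python
def classify_calcemia(ca_mmol_l):
    """Hypo-/normo-/hypercalcemia using the stated cut-offs 2.1 and 2.6 mmol/L."""
    if ca_mmol_l < 2.1:
        return "hypocalcemia"
    return "hypercalcemia" if ca_mmol_l > 2.6 else "normocalcemia"

def classify_phosphatemia(p_mmol_l):
    """Hypo-/normo-/hyperphosphatemia using cut-offs 0.8 and 1.45 mmol/L."""
    if p_mmol_l < 0.8:
        return "hypophosphatemia"
    return "hyperphosphatemia" if p_mmol_l > 1.45 else "normophosphatemia"

def classify_hemoglobin(hb_g_dl):
    """Three stages using cut-offs 10 and 12 g/dL."""
    if hb_g_dl < 10:
        return "low"
    return "high" if hb_g_dl > 12 else "intermediate"

def pth_in_kdigo_target(pth_ng_l, upper_normal_limit_ng_l):
    """KDIGO target: roughly 2 to 9 times the assay's upper normal limit."""
    return 2 * upper_normal_limit_ng_l <= pth_ng_l <= 9 * upper_normal_limit_ng_l
```

Note that the PTH target depends on the assay's own upper normal limit, which is why the study also expressed PTH as a multiple of that limit.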
An information sheet was displayed in all dialysis units, and each patient was given an individual written information sheet at the initiation of dialysis.", " Patients A total of 2137 patients who had been on dialysis for at least 3 months, with PTH < 500 ng/L, were retrieved from the REIN registry – Region of Lorraine between 1st December, 2009 and 31st May, 2012. Among them, 305 patients were included in the EPHEYL study: 86% with an incident PTH ≥ 500 ng/L (n = 261), 14% with an initiation of cinacalcet (n = 44), and 0% with a first-line parathyroidectomy (Figure 1).\nDisposition of patients. aParathyroid hormone.\nThere was no statistically significant difference in socio-demographic and clinical characteristics between the two groups defined by the inclusion criteria (Table 2). Regarding PTH levels, 10 different values for the upper normal limit were obtained, ranging from 38.8 to 638 ng/L (median value: 560 ng/L). Despite these high values, PTH remained within the KDIGO™ target range in 64% of patients. The distribution of PTH according to multiples of the upper normal limit revealed that 60% of patients had PTH up to 8-fold above the upper normal limit (Figure 2).\nSocio-demographic, clinical and biological characteristics of the EPHEYL population according to inclusion criteria\naParathyroid hormone; bResults are presented as mean ± SD; cBody mass index; dParathyroidectomy Assessment of Symptoms; eMedian value; fThe 2009 updated KDIGO™ (Kidney Disease: Improving Global Outcomes) guidelines recommend a target range for serum PTH of 2–9 times the upper normal limit [8].\nDistribution of parathyroid hormone (PTH) according to multiples of the upper normal limit for the assays in the EPHEYL study.\nAmong the 44 patients treated with cinacalcet, 36 had PTH within the KDIGO™ target range before initiating the treatment. 
At 3 months of inclusion, 19 patients (43%) had discontinued treatment with cinacalcet.\n Cluster analysis Ascendant hierarchical clustering identified four subgroups of patients according to their SHPT profiles, as shown in Figure 3. The “cubic clustering criterion” of the model was 14, higher than the cut-off of 2, validating the classification.\nIdentification of four distinct subgroups of dialysis patients with secondary hyperparathyroidism (SHPT) using multiple correspondence analysis. The horizontal axis was defined by the presence or absence of calcium supplementation, the presence or absence of treatment with cinacalcet, and serum PTH below or above 500 ng/L. The vertical axis was defined by normo- or hyperphosphatemia, the absence or presence of phosphorus binders, high or low alkaline phosphatase levels, the presence or absence of vitamin D supplementation, and the presence or absence of calcium supplementation. Each patient is identified by a number and a color according to the following code: black for group 1 (“intermediate”), green for group 2 (younger with severe cardiovascular comorbidities), blue for group 3 (elderly patients with few cardiovascular comorbidities), pink for group 4 (“cinacalcet prescription”).\nThe four clustered subgroups of patients were named according to their main characteristics with regard to the variables used (Table 3) or not used (Table 4) for clustering:\nCharacteristics of dialysis subgroups identified at the time of secondary hyperparathyroidism (SHPT) diagnosis: variables used to cluster dialysis patients at the time of SHPT diagnosis\naGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bParathyroidectomy Assessment of Symptoms questionnaire; cParathyroid hormone; dResults are presented as mean ± SD; p corresponds to ANOVA.\nCharacteristics of dialysis subgroups identified at the time of secondary hyperparathyroidism (SHPT) diagnosis: variables not used to cluster dialysis patients at the time of SHPT diagnosis\naGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bResults are presented as mean ± SD; cBody mass index; p corresponds to ANOVA.\n– “Intermediate” patients (Group 1, 37%): patients with hyperphosphatemia without hypocalcemia, sharing similar characteristics with the next group (Group 2) but better controlled\n– Younger patients with severe cardiovascular comorbidities (Group 2, 24%): most often obese or diabetic, with shorter dialysis vintage, mainly with hyperphosphatemia and hypocalcemia despite multiple medical treatments, suggesting poor adherence\n– Elderly patients with few cardiovascular comorbidities (Group 3, 25%): rarely obese, with longer dialysis vintage, mainly with normophosphatemia and normocalcemia despite few patients receiving SHPT treatment, with a health status appearing, at first, much better than that of group 2\n– Patients who initiated cinacalcet (Group 4, 14%): 42 of the 44 patients who initiated cinacalcet, plus one other patient, were classified into a clearly distinct subgroup (Figure 3). Two patients treated with cinacalcet were classified into other groups.", "EPHEYL is a well-characterized cohort of patients with an incident diagnosis of severe SHPT, defined not only on the basis of initiation of cinacalcet but also on a cut-off value for PTH [6]. An incident population helps to describe diseases accurately, avoiding the bias of mixing incident and prevalent cases [16]. We used an appropriate methodology, MCA and ascendant hierarchical clustering, to identify homogeneous subgroups of cases with high statistical validity [22]. Our four clustered subgroups consisted of homogeneous patients with similar medical histories, similar prior therapies, and probably similar characteristics concerning mineral bone disease and cardiovascular comorbidities.\nSHPT symptoms are difficult to assess due to their lack of specificity. The self-administered questionnaire developed by Pasieka et al. has been used in several studies on primary and secondary hyperparathyroidism to quantify symptom severity using median values [18-20]. In the EPHEYL study, one out of two patients suffered from at least one symptom. However, the most frequent symptoms (thirst, weakness, fatigue, and joint pain) were not specific. As the questionnaire was developed in the context of parathyroidectomy, its validity at an early stage of SHPT is questionable.\nThe PTH cut-off value of 500 ng/L was chosen at the time of the 2003 K-DOQI guidelines [6]. It enabled us to focus on SHPT patients without adynamic bone disease [8,23]. Furthermore, no patient had hypercalcemia, suggesting that there was no tertiary or autonomized SHPT. 
This result is consistent with the incident nature of our cohort, as tertiary SHPT was found in previous studies that included prevalent SHPT patients [6,24]. Nevertheless, we know that PTH is subject to many simultaneous sources of variability [7,11]. Our study points out the obstacles to using PTH to precisely diagnose SHPT. The distribution of PTH at a cut-off value of 500 ng/L, according to the new recommendation of “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay”, was wide (Figure 2). Jean et al. have suggested that PTH should be replaced with specific biochemical markers of bone, such as bone ALP and beta cross-laps, for SHPT follow-up [24]. These measurements, however, are too costly to be recommended in routine clinical practice [8]. Finally, in the context of rather vague recommendations, clinicians should be aware that a binary approach to SHPT diagnosis, i.e. absence/presence, is not adequate. There is definitely a grey zone for diagnosis whose limits are not easily defined. We would recommend an observation period before intervening aggressively.\nIn this grey zone, our study identified four statistically distinct subgroups of patients. Our description of each group reflected a clinical reality, and was therefore clinically appropriate. Notably, at the bedside, these distinct phenotypes should be distinguished by the physician rather than by biological cut-offs. This pleads for patient-doctor contact. A recent publication demonstrated a positive association between patient-doctor contact and outcomes [25]. Last but not least, our study reinforces the recent publication by Levin recommending that the heterogeneity of chronic kidney disease populations be acknowledged and that study populations be appropriately characterized [26].\nThe group of “elderly patients with few cardiovascular comorbidities”, mostly with normocalcemia and normophosphatemia, had PTH levels which, at first, might alarm clinicians. 
On the other hand, normal serum phosphorus could not be explained by malnutrition: despite their old age, nutritional markers (such as albumin and phosphatemia) were not statistically different from those in the other groups. PTH seemed to be associated with good clinical condition and a low prevalence of comorbidities. These results are consistent with those of previous studies showing that, particularly in the elderly, PTH is inversely correlated with comorbidity scores [12,27]. At the time of SHPT diagnosis, this subgroup of patients presumably did not require intensive therapeutic management.\nThe group of “younger patients with severe cardiovascular comorbidities”, by contrast, consisted mostly of patients (66%) with PTH within targets [8]. However, the mean PTH was similar to that of the previous group. These patients seemed most likely to have bone and cardiovascular complications, as in previous cohorts, possibly linked to hyperphosphatemia, with a high proportion of cardiovascular disease, diabetes and obesity [4,13]. As a result, they require intensive management. Of note, most of them had hyperphosphatemia and hypocalcemia despite multiple treatments. This phenotype is clearly characterized by very low adherence to first-line strategies for SHPT, e.g. calcium, vitamin D and phosphorus binders. For this treatment category, a recent meta-analysis pointed out poor adherence in a majority of patients [28].\nThe group of “intermediate” patients showed characteristics intermediate between those of the two previous groups. Most of them had hyperphosphatemia without hypocalcemia and the highest PTH. They seemed more likely to be at high risk for cardiovascular events due to uncontrolled hyperphosphatemia; as a result, they should be monitored cautiously [21].\nFinally, the group of patients “who initiated cinacalcet” was very different from the others (Figure 3). 
Most of these patients had hypocalcemia and hyperphosphatemia, and had already been given multiple conventional treatments before cinacalcet initiation. At the time of cinacalcet initiation, an overwhelming majority of patients (86%) had PTH within the KDIGO™ targets applicable during the study period. Our study confirms that cinacalcet was prescribed for broadened indications in real life, and highlights that the benefit-risk balance of cinacalcet is not favorable in patients with low PTH. This “cinacalcet user” phenotype, including such patients, should not exist. Before such therapeutic agents were marketed, indications for surgical parathyroidectomy were limited to symptomatic SHPT patients with very high PTH (> 1000 ng/L). Initially presented as an alternative to parathyroidectomy, cinacalcet is now prescribed in pre-symptomatic patients with PTH > 300 ng/L, perhaps because of the previous K-DOQI recommendations.\nIts prescription is based on studies suggesting that cinacalcet may have a protective effect on cardiovascular outcomes and reduce the risk of fractures [29]. Recently, the prospective EVOLVE study failed to demonstrate a role of cinacalcet in reducing cardiovascular mortality and events [10]. These negative results can be explained, in particular, by the fact that 23% of patients in the treated group and 11% in the placebo group received the commercially available formulation of cinacalcet, which skewed the results. The EPHEYL study showed that the use of cinacalcet was likely to be physician-dependent rather than driven by patient characteristics, and that the decision-making behind its prescription was insufficiently detailed. Thus, in the group of patients who initiated cinacalcet, 43% were no longer taking the treatment 3 months later. 
Considering that low PTH levels characterize adynamic bone disease and/or a high prevalence of comorbidities, we would currently recommend prescribing cinacalcet only after a rigorous assessment of its benefits and risks [23,27]. A strength of our study was the identification, in a real-life setting, of a subgroup of incident SHPT patients treated with cinacalcet despite low PTH. Whether cinacalcet should be contraindicated in patients with low PTH is a question of great interest.\nOur study has some limitations. First, its observational nature, which may seem like a drawback, is actually a strength: a pharmacoepidemiological study such as EPHEYL reflects routine clinical practice, with various therapeutic strategies that are not always consistent with previous randomized trials [10]. Second, we only considered data at inclusion, which makes sense given the incident nature of the disease. The 2-year follow-up of the EPHEYL cohort should reinforce the relevance of the classification. Third, most patients were included with a PTH cut-off value of 500 ng/L, whereas the latest KDIGO™ guidelines recommend maintaining PTH within a range of approximately 2 to 9 times the upper normal limit rather than at absolute values. The KDIGO™ recommendations had not yet been implemented when the study began; as a result, data on PTH are detailed in this report. Fourth, the methodology identified distinct but not completely disjoint profiles. It is therefore possible that some patients in distinct subgroups have partially similar characteristics at the junctions. Notwithstanding, the differences between groups are relevant. 
Fifth, clinical symptoms assessed by the PAS questionnaire need to be interpreted with caution, given the difficulty of collecting data from a self-administered questionnaire.", "In conclusion, four significantly distinct profiles of dialysis patients with a recent diagnosis of severe SHPT were identified on the basis of routinely available clinical, biological and therapeutic data. Our well-characterized incident cohort, coupled with an original methodological approach, provides a contemporary picture of daily clinical practice. Our study reinforces the view that the benefit-risk balance of cinacalcet is not positive in patients with low PTH, and raises the question of whether cinacalcet should be contraindicated in such patients. A “one-size-fits-all” PTH target approach is probably not appropriate. Therapeutic management needs to be adjusted to the four different phenotypes.", "The authors declare that they have no competing interests.", "EL conceived the study, participated in its design, analysis and interpretation of data, and drafted the manuscript. CA was involved in the design of the study, acquisition, analysis and interpretation of data, and was involved in the statistical analysis. MLE participated in the design of the study and performed the statistical analysis. MK was involved in the design of the study and interpretation of the data. SB was accountable for all aspects of the analysis of data. LB was involved in the design of the study and interpretation of the data. LF participated in the general supervision of the work, helped to draft the manuscript, and revised it critically for important intellectual content. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2369/15/132/prepub\n" ]
[ "Dialysis", "Secondary hyperparathyroidism", "Cinacalcet", "Pharmacoepidemiological study" ]
Background: In the 1970s, secondary hyperparathyroidism (SHPT) was described as a severe bone disease occurring in young end-stage renal disease (ESRD) patients with a significant duration of dialysis. When parathyroid hormone (PTH) was very high, up to 1000 ng/L, and associated with hypercalcemia, the only treatment was subtotal parathyroidectomy [1,2]. In the 1990s, access to kidney transplantation for the young and to dialysis for the old led to rapid ageing of the dialysis population [3]. During the first decade of this millennium, a new paradigm emerged. First, SHPT came to be considered not only a bone disease but also a vascular disease [4]. Second, SHPT turned out to be a biological rather than a clinical disease at the bedside [5]. A “one-size-fits-all” approach was recommended: PTH under 300 ng/L from 2003 to 2009 according to K-DOQI [6]. Due to variability in PTH measurement, the target was modified in 2009: “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay” [7,8]. Third, cinacalcet tends to be seen by clinicians as the most appropriate treatment for SHPT, owing to its mechanism of action, when conventional therapy is not effective enough [6,9]. But the randomized controlled EVOLVE study, published in 2012, failed to demonstrate the efficacy of cinacalcet in reducing the risk of death or major cardiovascular events [10]. Today, the exact importance of PTH is still debated [11]. Large observational cross-sectional studies of SHPT have recently been published [12-15]. An incidence/prevalence bias may have hampered a precise description of SHPT phenotypes [16]. In order to capture the phenotypes of SHPT at the bedside, we meticulously enrolled all dialysis patients of the REIN registry - Region of Lorraine with newly marked PTH elevation in a prospective observational study from December 2009 to May 2012. At inclusion, we administered a validated questionnaire to measure clinical symptoms. 
With an original statistical analysis, we demonstrated that high PTH levels corresponded to 4 very different phenotype profiles, suggesting that a “one-size-fits-all” target approach to SHPT is not appropriate. Methods: The pharmacoepidemiological EPHEYL (Étude PHarmacoÉpidémiologique de l’hYperparathyroïdie secondaire en Lorraine) study is an open-cohort, prospective, observational study of incident (i.e., newly diagnosed) SHPT, with a 2-year follow-up, set in the 12 dialysis units (public or private) located in the French region of Lorraine. Inclusion criteria Adult patients included in EPHEYL were on dialysis (hemodialysis or peritoneal dialysis) for at least 3 months and met, for the first time, one of the following criteria: 1) PTH ≥ 500 ng/L; 2) initiation of cinacalcet; 3) parathyroidectomy for severe SHPT. The PTH cut-off value of 500 ng/L was chosen on the basis of the 2003 K-DOQI guidelines [6]. Indeed, when we initiated the study, the updated KDIGO recommendations were not yet in effect, and PTH levels between 150 and 300 ng/L were advocated [6,8]. Parathyroidectomy or the use of a calcimimetic was indicated when the PTH level was ≥ 500 ng/L, hence the choice of this threshold in our study. From 1 December 2009 to 31 May 2012, all patients who had been on dialysis for at least 3 months were identified through the REIN registry - Region of Lorraine [17]. The occurrence of one of the three inclusion criteria was prospectively monitored in all these patients. Patients were included in the study at the time of PTH measurement, initiation of cinacalcet, or parathyroidectomy. Physicians were encouraged to adhere to the KDIGO™ guidelines updated in 2009 [8].
Data collection The following socio-demographic and clinical data were retrieved from the REIN registry: age, gender, body mass index (BMI), type of dialysis, dialysis vintage, primary etiology of nephropathy, comorbidities (smoking, diabetes, cardiovascular diseases, hypertension, respiratory diseases, and cancer), and registration on the renal transplant waiting list [17]. The vast majority of patients were Caucasian. BMI was described as a continuous quantitative variable and obesity (BMI > 30 kg/m2) as a binary variable. Primary etiology of nephropathy was classified as diabetic nephropathy, vascular nephropathy, glomerulonephritis, pyelonephritis, hereditary nephropathy, or other/unknown. Cardiovascular diseases comprised history of heart failure, cardiac heart disease, acute coronary syndrome, arrhythmia, peripheral arterial disease, and stroke. Respiratory diseases encompassed history of chronic respiratory insufficiency, asthma, and obstructive pulmonary disease.
Hypertension was considered present if diastolic and/or systolic blood pressure was greater than 80 and 130 mm Hg, respectively, or if the patient was on antihypertensive therapy. SHPT symptoms experienced by patients were assessed using: 1) the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire, a self-administered questionnaire validated in dialysis patients with SHPT, which assessed the severity of 14 SHPT symptoms (Table 1) on a visual analog scale (VAS) ranging from 0 (not experiencing the symptom) to 100 (experiencing the most extreme aspect of the symptom) [18-20]; the questionnaire was administered at inclusion, and PAS scores were analyzed as quantitative variables or as the proportion of patients with at least one symptom scoring more than 0; 2) the collection of clinical signs reported in medical records, such as osteoarticular pain, myasthenia, bone fractures, paresthesia, pruritus, tetany, and calciphylaxis. A patient was considered symptomatic if at least one symptom had a PAS score greater than 0 or was reported in the medical records. Specific symptoms assessed by the Parathyroidectomy Assessment of Symptoms (PAS) questionnaire, a self-administered disease-specific outcome tool, in patients with secondary hyperparathyroidism (SHPT). The following biological parameters were collected at inclusion: PTH, calcemia, phosphorus, vitamin D, alkaline phosphatase (ALP), albumin, hemoglobin, and measured ionized calcium. The KDIGO™ guidelines recommend maintaining PTH within 2 to 9 times the upper normal limit [8]. PTH was therefore described both as a binary variable (in or out of target) and as a quantitative variable (multiple of the upper normal limit). According to the KDIGO™ guidelines, calcemia was classified as hypo-, normo-, or hypercalcemia using 2.1 and 2.6 mmol/L as cut-off values, and phosphatemia as hypo-, normo-, or hyperphosphatemia using 0.8 and 1.45 mmol/L as cut-off values.
ALP was analyzed in 2 stages using the median as cut-off value, albumin in 2 stages using 25 g/L as cut-off value, and hemoglobin in 3 stages using 10 and 12 g/dL as cut-off values. Four technologies were used for PTH assays: chemiluminometric (48%), electrochemiluminometric (23%), immuno-enzymatic (11%), immunochemiluminometric (16%), and unknown (2%); each kit was provided by one of several laboratories, which had different standards. All drugs acting on phosphocalcic metabolism were collected and classified into 4 groups: vitamin D and analogs, calcium supplementation, calcium-free phosphorus binders, and cinacalcet. A standardized form was used to collect data from medical records. A Steering Committee consisting of an epidemiologist (CLA) and a nephrologist (LF) reviewed all forms and medical records whenever collected biological data were outside international standards.
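The staging rules above can be sketched as simple threshold functions. This is a minimal illustration, not the study's SAS code; the handling of values falling exactly on a cut-off is an assumption, since the text does not specify boundary inclusivity.

```python
# Illustrative staging of laboratory values with the cut-offs stated above.
# Boundary handling (values exactly at a cut-off counted as "normal"/middle
# stage) is an assumption not specified in the study.

def classify_calcemia(ca_mmol_l: float) -> str:
    """Calcemia staging with 2.1 and 2.6 mmol/L cut-offs."""
    if ca_mmol_l < 2.1:
        return "hypocalcemia"
    return "hypercalcemia" if ca_mmol_l > 2.6 else "normocalcemia"

def classify_phosphatemia(p_mmol_l: float) -> str:
    """Phosphatemia staging with 0.8 and 1.45 mmol/L cut-offs."""
    if p_mmol_l < 0.8:
        return "hypophosphatemia"
    return "hyperphosphatemia" if p_mmol_l > 1.45 else "normophosphatemia"

def classify_hemoglobin(hb_g_dl: float) -> str:
    """Hemoglobin in 3 stages with 10 and 12 g/dL cut-offs."""
    if hb_g_dl < 10:
        return "<10"
    return ">12" if hb_g_dl > 12 else "10-12"
```

For example, a patient with calcemia 2.0 mmol/L and phosphatemia 1.6 mmol/L would be staged as hypocalcemic and hyperphosphatemic, the combination that characterizes Group 2 in the Results.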
Ethics statement This study was conducted in compliance with French regulations concerning pharmaco-epidemiological studies [21]. Approvals from the French data protection agency (CNIL: n° 904163) and from the Advisory Committee on information processing research in the field of health located in the region of Lorraine (CCTIRS: n° 0428) were obtained through the national REIN registry. An information sheet was displayed in all dialysis units, and each patient was given an individual written information sheet at the initiation of dialysis.
Statistical analyses Patient characteristics were described as proportions for categorical variables and as means and standard deviations (SD) for continuous variables, except for PAS scores, which were described as medians. Multivariate analyses using multiple correspondence analysis (MCA) and ascendant hierarchical clustering on the clinical, biological, and therapeutic characteristics of SHPT were performed to identify subgroups of patients [22]. All these parameters were binary variables. MCA was applied to determine the major axes that best summarize the data [22]. This method yields a set of coordinates for the categories of the variables, and thus reveals the relationships between individuals and the different categories. Each principal component was interpreted in terms of the contribution of each category to the variance of the axis. The contribution of a variable was considered significant when it was greater than 1/p (p = number of variable categories). A graphical evaluation was built from the major components in a series of two-dimensional plots. An ascendant hierarchical clustering was then used to determine the number of subgroups on the basis of the coordinates of the main components retained by the MCA. The 4 clustered subgroups were numbered according to the order of their selection in the classification. The validity of this method was assessed by the cubic clustering criterion, with a cut-off of 2. After the subgroups were selected, χ2 tests were performed to compare and highlight the parameters defining each distinct patient profile. The statistical analyses were performed using SAS software, version 9.2 (SAS Institute, North Carolina, US).
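The MCA-then-clustering pipeline can be sketched as follows. This is an illustrative re-implementation in Python/NumPy under stated assumptions (the study used SAS 9.2; the toy data, the number of retained axes, and MCA computed as correspondence analysis of the indicator matrix are all assumptions), not the authors' code.

```python
# Sketch: MCA on binary variables followed by ascendant hierarchical
# clustering cut into 4 subgroups, as described in the Methods.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 5))  # toy data: 40 patients x 5 binary variables

# Complete disjunctive (indicator) table: one column per category (yes/no).
Z = np.hstack([X, 1 - X]).astype(float)
Z = Z[:, Z.sum(axis=0) > 0]           # drop never-observed categories, if any

# MCA as correspondence analysis of the indicator matrix.
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)   # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = (U * s) / np.sqrt(r)[:, None]  # principal coordinates of patients

# Ascendant (agglomerative) hierarchical clustering on the leading MCA axes,
# cut into 4 subgroups as in the study. Ward linkage and 3 retained axes
# are assumptions for the sketch.
labels = fcluster(linkage(row_coords[:, :3], method="ward"),
                  t=4, criterion="maxclust")
```

In the study, cluster validity was then checked with SAS's cubic clustering criterion (cut-off 2) and the subgroups compared with χ2 tests; those steps have no direct SciPy equivalent and are omitted here.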
Results: Patients A total of 2137 patients who had been on dialysis for at least 3 months with PTH < 500 ng/L were retrieved from the REIN registry - Region of Lorraine between 1 December 2009 and 31 May 2012. Among them, 305 patients were included in the EPHEYL study: 86% with an incident PTH ≥ 500 ng/L (n = 261), 14% with an initiation of cinacalcet (n = 44), and 0% with a first-line parathyroidectomy (Figure 1). Disposition of patients. aParathyroid hormone. There was no statistically significant difference in socio-demographic and clinical characteristics between the two groups defined by the inclusion criteria (Table 2). Regarding PTH levels, 10 different values for the upper normal limit were obtained, ranging from 38.8 to 638 ng/L (median value: 560 ng/L). Despite these high values, PTH remained within the KDIGO™ target range in 64% of patients. The distribution of PTH according to multiples of the upper normal limit revealed that 60% of patients maintained PTH up to 8-fold above the upper normal limit (Figure 2).
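Because each assay has its own upper normal limit (UNL), the in-target proportion reported above depends on expressing each patient's PTH as a multiple of the UNL of the assay used, then checking it against the KDIGO 2-9× range. A hedged sketch with purely illustrative values (the patient data below are invented, not from the study):

```python
# Illustrative computation of PTH as a multiple of the assay-specific upper
# normal limit (UNL) and of the share of patients within the KDIGO 2009
# target (2 to 9 times the UNL). Values are made up for the example.

patients = [  # (PTH in ng/L, assay UNL in ng/L)
    (520, 65), (700, 65), (610, 88), (150, 72),
]

multiples = [pth / unl for pth, unl in patients]
in_target = [2 <= m <= 9 for m in multiples]
share_in_target = sum(in_target) / len(in_target)
```

Note how two patients with the same absolute PTH can fall on different sides of the target depending on which assay's UNL applies, which is why the study describes PTH both ways (absolute cut-off and multiple of the UNL).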
Socio-demographic, clinical and biological characteristics of the EPHEYL population according to inclusion criteria. aParathyroid hormone; bResults are presented as mean ± SD; cBody mass index; dParathyroidectomy Assessment of Symptoms; eMedian value; fThe 2009 updated KDIGO™ (Kidney Disease: Improving Global Outcomes) guidelines recommend a target range for serum PTH of 2-9 times the upper normal limit [8]. Distribution of parathyroid hormone (PTH) according to multiples of the upper normal limit for the assays in the EPHEYL study. Among the 44 patients treated with cinacalcet, 36 had PTH within the KDIGO™ target range before initiating treatment. At 3 months after inclusion, 19 patients (43%) had discontinued cinacalcet. Cluster analysis Ascendant hierarchical clustering identified four subgroups of patients according to their SHPT profiles, as shown in Figure 3. The cubic clustering criterion of the model was 14, above the cut-off of 2, validating the classification. Identification of four distinct subgroups of dialysis patients with secondary hyperparathyroidism (SHPT) using multiple correspondence analysis. The horizontal axis was defined by the presence or absence of calcium supplementation, the presence or absence of treatment with cinacalcet, and serum PTH below or above 500 ng/L. The vertical axis was defined by normophosphatemia or hyperphosphatemia, the absence or presence of phosphorus binders, high or low alkaline phosphatase levels, the presence or absence of vitamin D supplementation, and the presence or absence of calcium supplementation. Each patient is identified by a number and a color according to the following code: black for group 1 (“intermediate”), green for group 2 (younger with severe cardiovascular comorbidities), blue for group 3 (elderly patients with few cardiovascular comorbidities), pink for group 4 (“cinacalcet prescription”).
The four clustered subgroups of patients were named according to their main characteristics, considering both variables used (Table 3) and variables not used (Table 4) for clustering: Characteristics of dialysis subgroups identified at the time of secondary hyperparathyroidism (SHPT) diagnosis: variables used to cluster dialysis patients. aGroup 1: “intermediate” patients; Group 2: younger patients with severe comorbidities; Group 3: elderly patients with few cardiovascular comorbidities; Group 4: patients who initiated cinacalcet; bParathyroidectomy Assessment of Symptoms questionnaire; cParathyroid hormone; dResults are presented as mean ± SD; p corresponds to ANOVA. Characteristics of dialysis subgroups identified at the time of SHPT diagnosis: variables not used to cluster dialysis patients. aGroup 1: “intermediate” patients; Group 2: younger patients with severe comorbidities; Group 3: elderly patients with few cardiovascular comorbidities; Group 4: patients who initiated cinacalcet; bResults are presented as mean ± SD; cBody mass index; p corresponds to ANOVA.
– “Intermediate” patients (Group 1, 37%): patients with hyperphosphatemia without hypocalcemia, sharing similar characteristics with the next group (Group 2) but better controlled.
– Younger patients with severe cardiovascular comorbidities (Group 2, 24%): most often obese or diabetic, with a shorter dialysis vintage, mainly with hyperphosphatemia and hypocalcemia despite multiple medical treatments, suggesting poor adherence.
– Elderly patients with few cardiovascular comorbidities (Group 3, 25%): rarely obese, with a longer dialysis vintage, mainly with normophosphatemia and normocalcemia despite few patients receiving SHPT treatment, with a health status appearing, at first, much better than that of Group 2.
– Patients who initiated cinacalcet (Group 4, 14%): 42 of the 44 patients who initiated cinacalcet, plus one other patient, were classified into a clearly distinct subgroup (Figure 3); the remaining two patients treated with cinacalcet were classified into other groups.
Each patient is identified by a number and a color according to the following code: black for group 1 (“intermediate”), green for group 2 (younger with severe cardiovascular comorbidities), blue for group 3 (elderly patients with few cardiovascular comorbidities), pink for group 4 (“cinacalcet prescription”). The four clustered subgroups of patients were named according to their main characteristics regarding variables used (Table 3) or not used for clustering patients (Table 4): Characteristics of dialysis subgroups identified at time of secondary hyperparathyroidism (SHPT) diagnosis: variables used to cluster dialysis patients at time of SHPT diagnosis aGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bParathyroidectomy Assessment of Symptoms questionnaire; cParathyroid hormone; dResults are presented as mean ± SD; p corresponds to ANOVA. Characteristics of dialysis subgroups identified at time of secondary hyperparathyroidism (SHPT) diagnosis: variables not used to cluster dialysis patients at time of SHPT diagnosis aGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular co-morbidities, Group 4: patients who initiated cinacalcet; bResults are presented as mean ± SD; cBody mass index; p corresponds to ANOVA. 
– “Intermediate” patients (Group 1, 37%): patients with hyperphosphatemia without hypocalcemia, sharing similar characteristics with the next group of patients (Group 2) but better controlled – Younger patients with severe cardiovascular comorbidities (Group 2, 24%): most often obese or diabetic patients, with shorter dialysis vintage, mainly with hyperphosphatemia and hypocalcemia despite multiple medical treatments, suggesting poor adherence – Elderly patients with a few cardiovascular comorbidities (Group 3, 25%): rarely obese, with longer dialysis vintage, mainly with normophosphatemia and normocalcemia despite few patients with SHPT treatment, with health status appearing to be, at first, much better than the one in group 2 – Patients who initiated cinacalcet (Group 4, 14%): 42 out of the 44 patients who initiated cinacalcet and another patient were classified into a clearly distinct subgroup (Figure 3). Two patients treated with cinacalcet were classified into other groups. Patients: A total of 2137 patients who were on dialysis for at least 3 months, with PTH < 500 ng/L were retrieved from the REIN registry – Region of Lorraine between 1st December, 2009 and 31st May, 2012. Among them, 305 patients were included in the EPHEYL study: 86% with an incident PTH ≥500 ng/L (n = 261), 14% with an initiation of cinacalcet (n = 44), and 0% with a first-line parathyroidectomy (Figure 1). Disposition of patients. aParathyroid hormone. There was no statistically significant difference in socio-demographic and clinical characteristics between both groups according to inclusion criteria (Table 2). Regarding PTH levels, 10 different values for the upper normal limit were obtained, and ranged from 38.8 to 638 ng/L (median value: 560 ng/L). 
Despite these high values, PTH remained in the KDIGO™ target range in 64% of patients. The distribution of PTH according to multiples of the upper normal limit revealed that 60% of patients maintained PTH up to 8-fold above the upper normal limit (Figure 2). Socio-demographic, clinical and biological characteristics for the EPHEYL population according to inclusion criteria aParathyroid hormone; bResults are presented as mean ± SD; cBody mass index; dParathyroidectomy Assessment of Symptoms; eMedian value; fThe 2009 updated KDIGO™ (Kidney Disease: Improving Global Outcomes) guidelines have recommended a target range for serum PTH of 2–9 times the upper normal limit [8]. Distribution of parathyroid hormone (PTH) according to multiples of the upper normal limit for the assays in the EPHEYL study. Among the 44 patients treated with cinacalcet, 36 patients had PTH within the KDIGO™ target range before initiating the treatment. At 3 months after inclusion, 19 patients (43%) had discontinued treatment with cinacalcet. Cluster analysis: Ascendant hierarchical clustering identified four subgroups of patients according to their SHPT profiles, as shown in Figure 3. The “cubic clustering criterion” of the model was 14, higher than the cut-off of 2, validating the classification. Identification of four distinct subgroups of dialysis patients with secondary hyperparathyroidism (SHPT) using multiple correspondence analyses. The horizontal axis defined the presence or absence of calcium supplementation, the presence or absence of treatment with cinacalcet, and serum PTH below or above 500 ng/L. The vertical axis defined normophosphatemia or hyperphosphatemia, the absence or presence of phosphorus binders, high or low levels of alkaline phosphatases, the presence or absence of vitamin D supplementation, and the presence or absence of calcium supplementation. 
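The clustering approach described here (multiple correspondence analysis of binary clinical indicators, followed by ascendant hierarchical clustering) can be sketched in Python. This is a minimal illustration on synthetic one-hot-encoded data, not the study's actual implementation; the variable layout, Ward linkage, and four-cluster cut are assumptions for the sketch.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def mca_coordinates(indicator, n_components=2):
    """Row coordinates from a simplified multiple correspondence analysis.

    `indicator` is an (n_patients x n_categories) 0/1 matrix built by
    one-hot encoding each categorical variable (e.g. hyperphosphatemia
    yes/no, cinacalcet yes/no, PTH above/below 500 ng/L).
    """
    Z = indicator / indicator.sum()          # correspondence matrix
    r = Z.sum(axis=1, keepdims=True)         # row masses
    c = Z.sum(axis=0, keepdims=True)         # column masses
    # standardized residuals from the independence model
    S = (Z - r @ c) / np.sqrt(r @ c)
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    # principal row coordinates on the leading axes
    return (U[:, :n_components] * s[:n_components]) / np.sqrt(r)

rng = np.random.default_rng(0)
# synthetic binary profiles: 40 "patients", 3 variables -> 6 indicator columns
raw = rng.integers(0, 2, size=(40, 3))
indicator = np.column_stack(
    [np.column_stack([v, 1 - v]) for v in raw.T]
).astype(float)

coords = mca_coordinates(indicator)
# ascendant hierarchical clustering on the MCA coordinates, cut at 4 groups
labels = fcluster(linkage(coords, method="ward"), t=4, criterion="maxclust")
print(len(labels), "patients in", len(set(labels)), "subgroups")
```

Each patient ends up with a subgroup label, mirroring how the study assigned each of the 305 patients to one of four phenotypes.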
Each patient is identified by a number and a color according to the following code: black for group 1 (“intermediate”), green for group 2 (younger with severe cardiovascular comorbidities), blue for group 3 (elderly patients with few cardiovascular comorbidities), pink for group 4 (“cinacalcet prescription”). The four clustered subgroups of patients were named according to their main characteristics regarding variables used (Table 3) or not used for clustering patients (Table 4): Characteristics of dialysis subgroups identified at time of secondary hyperparathyroidism (SHPT) diagnosis: variables used to cluster dialysis patients at time of SHPT diagnosis aGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular comorbidities, Group 4: patients who initiated cinacalcet; bParathyroidectomy Assessment of Symptoms questionnaire; cParathyroid hormone; dResults are presented as mean ± SD; p corresponds to ANOVA. Characteristics of dialysis subgroups identified at time of secondary hyperparathyroidism (SHPT) diagnosis: variables not used to cluster dialysis patients at time of SHPT diagnosis aGroup 1: “intermediate” patients, Group 2: younger patients with severe comorbidities, Group 3: elderly patients with few cardiovascular co-morbidities, Group 4: patients who initiated cinacalcet; bResults are presented as mean ± SD; cBody mass index; p corresponds to ANOVA. 
– “Intermediate” patients (Group 1, 37%): patients with hyperphosphatemia without hypocalcemia, sharing similar characteristics with the next group of patients (Group 2) but better controlled – Younger patients with severe cardiovascular comorbidities (Group 2, 24%): most often obese or diabetic patients, with shorter dialysis vintage, mainly with hyperphosphatemia and hypocalcemia despite multiple medical treatments, suggesting poor adherence – Elderly patients with few cardiovascular comorbidities (Group 3, 25%): rarely obese, with longer dialysis vintage, mainly with normophosphatemia and normocalcemia despite few patients with SHPT treatment, with a health status appearing, at first, much better than that in group 2 – Patients who initiated cinacalcet (Group 4, 14%): 42 out of the 44 patients who initiated cinacalcet, plus one other patient, were classified into a clearly distinct subgroup (Figure 3). The remaining two patients treated with cinacalcet were classified into other groups. Discussion: EPHEYL is a well-characterized cohort of patients with an incident severe SHPT diagnosis, defined not only on the basis of cinacalcet initiation but also on a PTH cut-off value [6]. An incident population helps to accurately describe diseases, avoiding bias related to the incidence/prevalence mix [16]. We used an appropriate methodology, MCA and ascendant hierarchical clustering, to identify homogeneous subgroups of cases with a high level of statistical validity [22]. Our four clustered subgroups consisted of homogeneous patients with the same medical history, the same prior therapy, and probably similar characteristics concerning mineral bone disease and cardiovascular comorbidities. SHPT symptoms are difficult to assess due to their lack of specificity. The self-administered questionnaire developed by Pasieka et al. was used in several studies on primary and secondary hyperparathyroidism to quantify the severity of symptoms using median values [18-20]. 
In the EPHEYL study, one out of two patients suffered from at least one symptom. However, the most frequent symptoms (thirst, weakness, fatigue, and joint pain) were not specific. As the questionnaire was developed in the context of parathyroidectomy, its validity is questionable at an early stage of SHPT. The PTH cut-off value of 500 ng/L was chosen at the time of the 2003 K-DOQI guidelines [6]. It enabled us to focus on SHPT patients without adynamic bone disease [8,23]. Furthermore, no patient had hypercalcemia, suggesting that there was no tertiary or autonomized SHPT. This result is consistent with the incident nature of our cohort, as tertiary SHPT was found in previous studies including prevalent SHPT patients [6,24]. Nevertheless, we know that PTH is subject to many simultaneous types of variability [7,11]. Our study points out obstacles to the use of PTH to precisely diagnose SHPT. The distribution of PTH at a cut-off value of 500 ng/L, assessed against the new recommendation of “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay”, was wide (Figure 2). Jean et al. have suggested that PTH should be replaced with specific biochemical markers of bone, such as bone ALP and beta cross-laps, for the follow-up of SHPT [24]. These measurements, however, are too costly to be recommended in routine clinical practice [8]. Finally, in the context of rather vague recommendations, clinicians should be aware that a binary approach to SHPT diagnosis, i.e. absence/presence, is not adequate. There is definitely a grey zone for diagnosis, whose limits are not easily defined. We would recommend an observation period before acting strongly. In this grey zone, our study identified four statistically distinct subgroups of patients. Our description of each group reflected a clinical reality, and was therefore clinically appropriate. Of note, at the bedside, these distinct phenotypes should be distinguished by the doctor rather than by biological cut-offs. 
This argues for patient-doctor contact. A recent publication has demonstrated a positive association between patient-doctor contact and outcomes [25]. Last but not least, our study reinforces the recent publication by Levin that recommended acknowledging the heterogeneity of chronic kidney disease populations and appropriately characterizing populations for studies [26]. The group of “elderly patients with few cardiovascular comorbidities”, in the majority with normocalcemia and normophosphatemia, had a PTH level which, at first, could impress clinicians. On the other hand, normal serum phosphorus could not be explained by malnutrition; despite their old age, nutritional markers (such as albumin and phosphatemia) were not statistically different from those in the other groups. PTH seemed to be associated with a good clinical condition and a low prevalence of comorbidities. These results are consistent with those from previous studies showing that, particularly in the elderly, PTH is inversely correlated with comorbidity scores [12,27]. At the time of SHPT diagnosis, this subgroup of patients presumably did not require intensive therapeutic management. The group of “younger patients with severe cardiovascular comorbidities”, by contrast, consisted of a majority of patients (66%) with PTH within targets [8]. However, the mean PTH was similar to that found in the previous group. They seemed most likely to have bone and cardiovascular complications, as in previous cohorts, possibly linked to hyperphosphatemia, with a high proportion of cardiovascular diseases, diabetes and obesity [4,13]. As a result, they should require intensive therapeutic management. Of note, most of them had hyperphosphatemia and hypocalcemia despite multiple treatments. It is obvious that this phenotype is characterized by very low adherence to first-line strategies for SHPT, e.g. calcium, vitamin D and phosphorus binders. 
For this treatment category, a recent meta-analysis has pointed out poor adherence in a majority of patients [28]. The group of “intermediate patients” seemed to show characteristics intermediate between those of the two previous groups. Most of them had hyperphosphatemia without hypocalcemia, and the highest PTH. They seemed more likely to be at high risk of cardiovascular events due to uncontrolled hyperphosphatemia; as a result, they should be cautiously monitored [21]. Finally, the group of patients “who initiated cinacalcet” was very different from the others (Figure 3). Most patients had hypocalcemia and hyperphosphatemia. Patients had already been given multiple conventional treatments before cinacalcet initiation. At the time of cinacalcet initiation, an overwhelming majority of patients (86%) had PTH within the KDIGO™ targets applicable during the study period. Our study confirmed that cinacalcet was prescribed for broadened indications in real life, and highlighted that the benefit-risk balance of cinacalcet was not favorable in patients with low PTH. Such a “cinacalcet user” phenotype, including patients with PTH within targets, should not exist. Before such therapeutic agents were marketed, indications for surgical parathyroidectomy were limited to symptomatic SHPT patients with very high PTH (>1000 ng/L). Initially presented as an alternative to parathyroidectomy, cinacalcet is now prescribed in pre-symptomatic patients with PTH > 300 ng/L, perhaps due to previous K-DOQI recommendations. Its prescription is based on studies suggesting that cinacalcet may have a protective effect on cardiovascular disease outcomes and reduce the risk of fractures [29]. Recently, the prospective EVOLVE study failed to demonstrate a role of cinacalcet in reducing cardiovascular mortality and events [10]. 
These negative results can be explained, in particular, by the fact that 23% of patients in the treated group and 11% in the placebo group received the commercial formulation of cinacalcet, which skewed the results. The EPHEYL study has shown that the use of cinacalcet was likely to be physician-dependent rather than driven by patient characteristics, and that the decision-making behind its prescription was insufficiently detailed. Thus, in the group of patients who initiated cinacalcet, 43% were no longer taking the treatment 3 months later. Considering that low PTH levels characterize adynamic bone disease and/or a high prevalence of comorbidities, we would currently recommend prescribing cinacalcet only after a rigorous assessment of its benefit-risk balance [23,27]. The strength of our study was to identify a subgroup of incident SHPT patients treated with cinacalcet despite low PTH in a real-life setting. The question of whether cinacalcet should be contraindicated in patients with low PTH is of great interest. Our study has some limitations. First, its observational nature, which may seem like a drawback, is actually a strength: a pharmacoepidemiological study such as EPHEYL reflects routine clinical practices, with various therapeutic strategies that are not always consistent with previous randomized trials [10]. Second, we have taken into account only data at inclusion, which makes sense given the incident nature of the disease. The 2-year follow-up of the EPHEYL cohort should reinforce the relevance of the classification. Third, most patients were included with a PTH cut-off value of 500 ng/L, while the latest KDIGO™ guidelines have recommended maintaining PTH within ranges, at approximately 2 to 9 times the upper normal limit, rather than at absolute values. The KDIGO™ recommendations had not been implemented when we wrote our article. As a result, data on PTH have been detailed in this report. 
Fourth, the methodology allowed identification of distinct, but not completely disjoint, profiles. It is therefore possible that some patients in distinct subgroups have partially similar characteristics at the junctions. Notwithstanding, the differences between groups are relevant. Fifth, clinical symptoms assessed by the PAS questionnaire need to be interpreted with caution, due to the difficulty of collecting data from a self-administered questionnaire. Conclusion: In conclusion, four significantly distinct profiles of dialysis patients with a recent severe SHPT diagnosis were identified on the basis of clinical, biological and therapeutic data routinely available. Our well-characterized incident cohort, coupled with an original methodological approach, highlights a contemporary picture of daily clinical practice. Our study reinforces the view that the benefit-risk balance of cinacalcet is not positive in patients with low PTH, and raises the question of whether cinacalcet should be contraindicated in such patients. A “one-size-fits-all” PTH target approach is probably not appropriate. Therapeutic management needs to be adjusted to the four different phenotypes. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: EL conceived the study, participated in its design, analysis and interpretation of data, and drafted the manuscript. CA was involved in the design of the study, acquisition, analysis and interpretation of data, and was involved in the statistical analysis. MLE participated in the design of the study and performed the statistical analysis. MK was involved in the design of the study and interpretation of the data. SB was accountable for all aspects of the analysis of data. LB was involved in the design of the study and interpretation of the data. LF participated in the general supervision of the work, helped to draft the manuscript, and revised it critically for important intellectual content. 
All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2369/15/132/prepub
Background: Recommendations for secondary hyperparathyroidism (SHPT) consider that a "one-size-fits-all" target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnosis and treatment challenges. One hypothesis that could explain these difficulties is that the dialysis population with SHPT is not homogeneous. Methods: EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥ 3 months) with newly diagnosed SHPT, i.e. parathyroid hormone (PTH) ≥ 500 ng/L for the first time, initiation of cinacalcet, or parathyroidectomy. Multiple correspondence analysis and ascendant hierarchical clustering on clinico-biological features (symptoms, PTH, plasma phosphorus and alkaline phosphatase) and SHPT treatment (cinacalcet, vitamin D, calcium, or phosphate binder, calcium-based or calcium-free) were performed to identify distinct phenotypes. Results: 305 patients (261 with incident PTH ≥ 500 ng/L; 44 with cinacalcet initiation) were included. Their mean age was 67 ± 15 years; 60% were men, 92% on hemodialysis and 8% on peritoneal dialysis. Four subgroups of SHPT patients were identified: 1/ an "intermediate" phenotype with hyperphosphatemia without hypocalcemia (n = 113); 2/ younger patients with severe comorbidities, hyperphosphatemia and hypocalcemia despite multiple SHPT medical treatments, suggesting poor adherence (n = 73); 3/ elderly patients with few cardiovascular comorbidities, controlled phospho-calcium balance, higher PTH, and few treatments (n = 75); 4/ patients who initiated cinacalcet (n = 43). The quality criterion of the model was 14 (cut-off: >2), suggesting a relevant classification. Conclusions: In real life, dialysis patients with newly diagnosed SHPT constitute a very heterogeneous population. A "one-size-fits-all" target approach is probably not appropriate. Therapeutic management needs to be adjusted to the 4 different phenotypes.
Background: In the 1970s, secondary hyperparathyroidism (SHPT) was described as a severe bone disease occurring in young end-stage renal disease (ESRD) patients with a significant duration of dialysis. When parathyroid hormone (PTH) was very high, up to 1000 ng/L, and associated with hypercalcemia, the only treatment was subtotal parathyroidectomy [1,2]. In the 1990s, access to kidney transplantation for the young and dialysis for the old led to a rapid ageing of the dialysis population [3]. During the first decade of the millennium, a new paradigm emerged. First, SHPT was considered not only a bone disease, but also a vascular disease [4]. Second, SHPT turned out to be a biological rather than a clinical disease at the bedside [5]. A “one-size-fits-all” approach was recommended: PTH under 300 ng/L from 2003 to 2009 according to K-DOQI [6]. Due to variability in PTH measurement, the target was modified in 2009: “maintaining PTH levels in the range of approximately two to nine times the upper normal limit for the assay” [7,8]. Third, cinacalcet tends to be seen by clinicians as the most appropriate solution for the treatment of SHPT, due to its mechanism of action, when conventional therapy is not effective enough [6,9]. But the randomized controlled EVOLVE study, published in 2012, failed to demonstrate the efficacy of cinacalcet in reducing the risk of death or major cardiovascular events [10]. Today, the exact importance of PTH is still debated [11]. Large observational cross-sectional studies on SHPT have recently been published [12-15]. An incidence/prevalence bias may have hampered a precise description of SHPT phenotypes [16]. In order to capture the phenotypes of SHPT at the bedside, we meticulously enrolled all dialysis patients of the REIN registry - Region of Lorraine with a newly marked PTH elevation in a prospective observational study from December 2009 to May 2012. At inclusion, we delivered a validated questionnaire to measure clinical symptoms. 
With an original statistical analysis, we demonstrated that high PTH levels matched 4 very different phenotype profiles, suggesting that a “one-size-fits-all” target approach for SHPT was not appropriate. Conclusion: In conclusion, four significantly distinct profiles of dialysis patients with a recent severe SHPT diagnosis were identified on the basis of clinical, biological and therapeutic data routinely available. Our well-characterized incident cohort, coupled with an original methodological approach, highlights a contemporary picture of daily clinical practice. Our study reinforces the view that the benefit-risk balance of cinacalcet is not positive in patients with low PTH, and raises the question of whether cinacalcet should be contraindicated in such patients. A “one-size-fits-all” PTH target approach is probably not appropriate. Therapeutic management needs to be adjusted to the four different phenotypes.
Background: Recommendations for secondary hyperparathyroidism (SHPT) consider that a "one-size-fits-all" target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnosis and treatment challenges. One hypothesis that could explain these difficulties is that the dialysis population with SHPT is not homogeneous. Methods: EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥ 3 months) with newly diagnosed SHPT, i.e. parathyroid hormone (PTH) ≥ 500 ng/L for the first time, initiation of cinacalcet, or parathyroidectomy. Multiple correspondence analysis and ascendant hierarchical clustering on clinico-biological features (symptoms, PTH, plasma phosphorus and alkaline phosphatase) and SHPT treatment (cinacalcet, vitamin D, calcium, or phosphate binder, calcium-based or calcium-free) were performed to identify distinct phenotypes. Results: 305 patients (261 with incident PTH ≥ 500 ng/L; 44 with cinacalcet initiation) were included. Their mean age was 67 ± 15 years; 60% were men, 92% on hemodialysis and 8% on peritoneal dialysis. Four subgroups of SHPT patients were identified: 1/ an "intermediate" phenotype with hyperphosphatemia without hypocalcemia (n = 113); 2/ younger patients with severe comorbidities, hyperphosphatemia and hypocalcemia despite multiple SHPT medical treatments, suggesting poor adherence (n = 73); 3/ elderly patients with few cardiovascular comorbidities, controlled phospho-calcium balance, higher PTH, and few treatments (n = 75); 4/ patients who initiated cinacalcet (n = 43). The quality criterion of the model was 14 (cut-off: >2), suggesting a relevant classification. Conclusions: In real life, dialysis patients with newly diagnosed SHPT constitute a very heterogeneous population. A "one-size-fits-all" target approach is probably not appropriate. Therapeutic management needs to be adjusted to the 4 different phenotypes.
9,371
368
[ 436, 239, 714, 93, 288, 367, 587, 10, 137, 16 ]
14
[ "patients", "pth", "shpt", "cinacalcet", "group", "dialysis", "study", "according", "cardiovascular", "subgroups" ]
[ "secondary hyperparathyroidism shpt", "70 secondary hyperparathyroidism", "hyperparathyroidism quantify", "dialysis parathyroid", "dialysis parathyroid hormone" ]
[CONTENT] Dialysis | Secondary hyperparathyroidism | Cinacalcet | Pharmacoepidemiological study [SUMMARY]
[CONTENT] Dialysis | Secondary hyperparathyroidism | Cinacalcet | Pharmacoepidemiological study [SUMMARY]
[CONTENT] Dialysis | Secondary hyperparathyroidism | Cinacalcet | Pharmacoepidemiological study [SUMMARY]
[CONTENT] Dialysis | Secondary hyperparathyroidism | Cinacalcet | Pharmacoepidemiological study [SUMMARY]
[CONTENT] Dialysis | Secondary hyperparathyroidism | Cinacalcet | Pharmacoepidemiological study [SUMMARY]
[CONTENT] Dialysis | Secondary hyperparathyroidism | Cinacalcet | Pharmacoepidemiological study [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cinacalcet | Cohort Studies | Drug Delivery Systems | Female | Follow-Up Studies | Humans | Hyperparathyroidism, Secondary | Male | Middle Aged | Naphthalenes | Parathyroid Hormone | Prospective Studies [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cinacalcet | Cohort Studies | Drug Delivery Systems | Female | Follow-Up Studies | Humans | Hyperparathyroidism, Secondary | Male | Middle Aged | Naphthalenes | Parathyroid Hormone | Prospective Studies [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cinacalcet | Cohort Studies | Drug Delivery Systems | Female | Follow-Up Studies | Humans | Hyperparathyroidism, Secondary | Male | Middle Aged | Naphthalenes | Parathyroid Hormone | Prospective Studies [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cinacalcet | Cohort Studies | Drug Delivery Systems | Female | Follow-Up Studies | Humans | Hyperparathyroidism, Secondary | Male | Middle Aged | Naphthalenes | Parathyroid Hormone | Prospective Studies [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cinacalcet | Cohort Studies | Drug Delivery Systems | Female | Follow-Up Studies | Humans | Hyperparathyroidism, Secondary | Male | Middle Aged | Naphthalenes | Parathyroid Hormone | Prospective Studies [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Cinacalcet | Cohort Studies | Drug Delivery Systems | Female | Follow-Up Studies | Humans | Hyperparathyroidism, Secondary | Male | Middle Aged | Naphthalenes | Parathyroid Hormone | Prospective Studies [SUMMARY]
[CONTENT] secondary hyperparathyroidism shpt | 70 secondary hyperparathyroidism | hyperparathyroidism quantify | dialysis parathyroid | dialysis parathyroid hormone [SUMMARY]
[CONTENT] secondary hyperparathyroidism shpt | 70 secondary hyperparathyroidism | hyperparathyroidism quantify | dialysis parathyroid | dialysis parathyroid hormone [SUMMARY]
[CONTENT] secondary hyperparathyroidism shpt | 70 secondary hyperparathyroidism | hyperparathyroidism quantify | dialysis parathyroid | dialysis parathyroid hormone [SUMMARY]
[CONTENT] secondary hyperparathyroidism shpt | 70 secondary hyperparathyroidism | hyperparathyroidism quantify | dialysis parathyroid | dialysis parathyroid hormone [SUMMARY]
[CONTENT] secondary hyperparathyroidism shpt | 70 secondary hyperparathyroidism | hyperparathyroidism quantify | dialysis parathyroid | dialysis parathyroid hormone [SUMMARY]
[CONTENT] secondary hyperparathyroidism shpt | 70 secondary hyperparathyroidism | hyperparathyroidism quantify | dialysis parathyroid | dialysis parathyroid hormone [SUMMARY]
[CONTENT] patients | pth | shpt | cinacalcet | group | dialysis | study | according | cardiovascular | subgroups [SUMMARY]
[CONTENT] patients | pth | shpt | cinacalcet | group | dialysis | study | according | cardiovascular | subgroups [SUMMARY]
[CONTENT] patients | pth | shpt | cinacalcet | group | dialysis | study | according | cardiovascular | subgroups [SUMMARY]
[CONTENT] patients | pth | shpt | cinacalcet | group | dialysis | study | according | cardiovascular | subgroups [SUMMARY]
[CONTENT] patients | pth | shpt | cinacalcet | group | dialysis | study | according | cardiovascular | subgroups [SUMMARY]
[CONTENT] patients | pth | shpt | cinacalcet | group | dialysis | study | according | cardiovascular | subgroups [SUMMARY]
[CONTENT] shpt | pth | disease | published | young | 2009 | bedside | size | size fits | fits [SUMMARY]
[CONTENT] patients | nephropathy | pth | dialysis | variable | variables | cut | pas | records | cut values [SUMMARY]
[CONTENT] patients | group | pth | cinacalcet | comorbidities | comorbidities group | presence | absence | cardiovascular comorbidities | dialysis [SUMMARY]
[CONTENT] approach | therapeutic | patients | clinical | appropriate therapeutic | shpt diagnosis identified basis | therapeutic data routinely | therapeutic data routinely available | incident cohort coupled original | incident cohort coupled [SUMMARY]
[CONTENT] patients | pth | group | shpt | cinacalcet | dialysis | study | data | subgroups | according [SUMMARY]
[CONTENT] patients | pth | group | shpt | cinacalcet | dialysis | study | data | subgroups | according [SUMMARY]
[CONTENT] one ||| SHPT ||| One | SHPT [SUMMARY]
[CONTENT] EPHEYL | ≥ | 3 months | PTH | 500 ||| ng/L | first ||| PTH | SHPT [SUMMARY]
[CONTENT] 305 | 261 | PTH ≥ | ng/L | 44 ||| 67 | 15 years | 60% | 92% | 8% ||| Four | SHPT | 1/ | 113 | 2/ | SHPT | 73 | 3/ | PTH | 75 | 4/ | 43 ||| 14 | 2 [SUMMARY]
[CONTENT] SHPT ||| one ||| 4 [SUMMARY]
[CONTENT] one ||| SHPT ||| One | SHPT ||| EPHEYL | ≥ | 3 months | PTH | ≥ ||| 500 ||| ng/L | first ||| PTH | SHPT ||| 305 | 261 | PTH ≥ | ng/L | 44 ||| 67 | 15 years | 60% | 92% | 8% ||| Four | SHPT | 1/ | 113 | 2/ | SHPT | 73 | 3/ | PTH | 75 | 4/ | 43 ||| 14 | 2 ||| SHPT ||| one ||| 4 [SUMMARY]
[CONTENT] one ||| SHPT ||| One | SHPT ||| EPHEYL | ≥ | 3 months | PTH | ≥ ||| 500 ||| ng/L | first ||| PTH | SHPT ||| 305 | 261 | PTH ≥ | ng/L | 44 ||| 67 | 15 years | 60% | 92% | 8% ||| Four | SHPT | 1/ | 113 | 2/ | SHPT | 73 | 3/ | PTH | 75 | 4/ | 43 ||| 14 | 2 ||| SHPT ||| one ||| 4 [SUMMARY]
Uveal Metastasis Based on Patient Sex in 2214 Tumors of 1111 Patients. A Comparison of Female Versus Male Clinical Features and Outcomes.
31373911
Lacking in previous studies on uveal metastasis is a robust statistical comparison of patient demographics, tumor features, and overall survival based on patient sex.
BACKGROUND
This is a retrospective analysis. All patients were evaluated on the Ocular Oncology Service, Wills Eye Hospital, PA between January 1, 1974 and June 1, 2017.
METHOD
A total of 2214 uveal metastases were diagnosed in 1310 eyes of 1111 consecutive patients. A comparison (female versus male) revealed differences across several demographic and clinical features including, among others, mean age at metastasis diagnosis (58 vs 63 years, P < 0.001), bilateral disease (21% vs 11%, P < 0.001), and mean number of metastases per eye (1.8 vs 1.6 tumors per eye, P = 0.04). There were differences in overall mean survival (20 vs 13 months, P = 0.03) and 5-year survival (Kaplan-Meier estimate) (31% vs 21%, P < 0.001).
RESULTS
There are demographic, clinical, and survival differences when patients with uveal metastases are compared by sex. Understanding these differences can aid the clinician in better anticipating patient outcomes.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Child", "Female", "Follow-Up Studies", "Humans", "Male", "Microscopy, Acoustic", "Middle Aged", "Neoplasm Metastasis", "Prognosis", "Retrospective Studies", "Sex Distribution", "Sex Factors", "Survival Rate", "United States", "Uvea", "Uveal Neoplasms", "Young Adult" ]
6727921
null
null
METHODS
Patients with uveal metastasis from the Wills Eye Hospital, Philadelphia, PA, who were evaluated between January 1, 1974, and June 1, 2017, were included. Patients with lymphoproliferative disorders, such as lymphoma, leukemia, and multiple myeloma, were excluded. This analysis adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Wills Eye Hospital. Data included patient demographics, primary cancer site, uveal metastasis clinical features, tumor management, and survival outcome. Demographic data included patient age at the time of ocular diagnosis, sex, and race. Primary cancer information included site of primary malignancy (breast, lung, kidney, gastrointestinal tract, skin, prostate, thyroid, pancreas, others, unknown), date of primary cancer diagnosis, and interval between primary cancer diagnosis and uveal metastasis. Follow-up interval and overall survival data were collected. Clinical features included patient symptoms, involved eye (right, left), laterality (unilateral, bilateral), visual acuity, and intraocular pressure. The total number of metastases per eye, anatomic location of the uveal metastases (iris, ciliary body, choroid), tumor basal dimension (millimeter), thickness (millimeter), and color (yellow, orange, brown, other) were recorded. All tumors were counted; however, if multiple metastatic tumors were present in a single eye, detailed data were recorded for only the largest tumor per uveal tissue (iris, ciliary body, choroid). For iris metastases, presence of hyphema was recorded. For choroidal metastasis, distance to the foveola and optic disc (millimeter), presence of subretinal fluid, and ultrasonographic acoustic quality (dense, hollow) were recorded. All data were tabulated on Microsoft Excel 2016 and measures of central tendencies (mean, median, range) were obtained using built-in functions. 
Independent 2-sample t test was used to assess statistical significance between continuous data whereas Fisher exact test and chi-square test were used for categorical data. Five-year Kaplan-Meier (KM) survival analysis was performed by grouping censored and death data into half-month intervals. Log-rank test was used to assess statistical significance among KM data and hazard ratios with 95% confidence intervals were calculated. A P value <0.05 was considered statistically significant for all tests.
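The Kaplan-Meier product-limit estimator behind the 5-year survival analysis can be sketched in a few lines of plain Python. This is a generic illustration with made-up follow-up times, not the study's data; ties at a given time are handled by counting deaths before removing censored patients from the risk set, which is the usual convention.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates.

    times  : follow-up duration (e.g. months)
    events : 1 = death observed, 0 = censored
    Returns a list of (time, survival probability), one entry per death time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = n_at_t = 0
        # gather all observations tied at time t
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            n_at_t += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk   # product-limit step
            curve.append((t, surv))
        at_risk -= n_at_t                  # drop deaths and censored alike
    return curve

# toy data: 5 patients, deaths at months 1, 2, 3; censoring at months 2, 4
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
for t, s in curve:
    print(f"month {t}: S(t) = {s:.2f}")   # 0.80, 0.60, 0.30
```

Reading off S(t) at 60 months from such a curve gives the 5-year estimates reported in Table 4.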
RESULTS
There were a total of 2214 uveal metastases in 1310 eyes of 1111 patients. Demographics and clinical features based on patient sex are listed in Table 1. There were 715 females (64%) and 396 males (36%). A comparison (female vs male) revealed significant differences in mean age at ocular diagnosis (58 vs 63 years, P < 0.001), bilateral involvement (21% vs 11%, P < 0.001), and mean visual acuity (20/80 vs 20/150, P < 0.001). Of the 1111 patients, most were white (88%), with no difference in race distribution between sexes. Demographics and Clinical Features of Patients The primary cancer site by sex is listed in Table 2. By comparison, uveal metastasis from breast cancer was more common in females (58% vs 1%, P < 0.001), whereas metastasis from lung (19% vs 40%, P < 0.001), kidney (2% vs 9%, P < 0.001), gastrointestinal tract (1% vs 8%, P < 0.001), cutaneous melanoma (1% vs 5%, P < 0.001), and prostate cancer (0% vs 6%, P < 0.001) was more common in males. Unknown primary cancer site was more common in males (11% vs 21%, P < 0.001). The primary cancers classified as other are listed as a footnote in Table 2. Primary Cancer Site Clinical features of uveal metastasis by sex are listed in Table 3. By comparison, there was no difference between sexes in the distribution of iris metastasis (7% vs 6%), ciliary body metastasis (2% vs 2%), and choroid metastasis (91% vs 92%). The mean number of metastatic tumors per eye was significantly higher in females (1.8 vs 1.6, P = 0.04). Regarding choroidal metastasis, there were significant differences in mean tumor base (9.1 vs 10.3 mm, P < 0.001), mean tumor thickness (2.8 vs 3.9 mm, P < 0.001), yellow color (88% vs 81%, P = 0.002), brown color (3% vs 7%, P = 0.001), and presence of subretinal fluid (68% vs 79%, P < 0.001). Metastasis Features KM survival estimates at 1, 2, 3, 4, and 5 years are listed in Table 4. 
Considering all uveal metastases by sex, there were differences in KM survival at 5 years (31% vs 21%, P < 0.001) and mean survival (19.8 vs 12.6 months, P = 0.03) (Fig. 1). Regarding specific primary cancer sites per sex, a significant difference in KM survival at 5 years was found with primary lung cancer (24% vs 9%, P = 0.04). In the KM analysis, 489 females were censored, compared with 272 males, during the 5-year interval.

Kaplan–Meier Survival Analysis and Mean Survival

Kaplan-Meier survival analysis for patients with uveal metastasis based on sex. CI indicates confidence interval; HR, hazard ratio for death (female/male).

Multiple post-hoc analyses were performed. When breast cancer was removed from the female cohort, a comparison of female versus male revealed mean age (60 vs 63 years, P = 0.007), bilateral metastases (15% vs 11%, P = 0.142), and mean number of tumors per eye (1.7 vs 1.6, P = 0.34). Within the female cohort, a comparison of tumor diameter and thickness between breast cancer and all other primary tumor types revealed a statistically significant difference in tumor thickness (2.4 vs 3.4 mm, P < 0.001), but no difference in tumor diameter (9.3 vs 8.9 mm, P = 0.28). For patients with primary lung cancer, tumor diameter was smaller in females compared with males (9.2 vs 11.0 mm, P = 0.005), whereas tumor thickness was similar (3.4 vs 3.7 mm, P = 0.25). When comparing breast cancer versus lung cancer in all patients, subretinal fluid was more common in lung cancer metastasis (67% vs 78%, P < 0.001), whereas bilateral disease was more common in breast cancer metastasis (26% vs 14%, P < 0.001).
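The log-rank comparison and hazard ratio behind the survival differences above can be sketched in pure Python. The function and the toy survival data are illustrative assumptions, not the study's analysis code; the hazard ratio uses the simple (O_a/E_a)/(O_b/E_b) estimate:

```python
def log_rank(times_a, events_a, times_b, events_b):
    """Two-group log-rank test for survival curves.

    Returns (chi_square_statistic, hazard_ratio), where the hazard ratio
    is the observed/expected ratio estimate (O_a/E_a)/(O_b/E_b).
    """
    # Pool all distinct observed death times across both groups
    death_times = sorted({t for t, e in zip(times_a, events_a) if e} |
                         {t for t, e in zip(times_b, events_b) if e})
    obs_a = exp_a = var = 0.0
    for t in death_times:
        n_a = sum(1 for x in times_a if x >= t)   # at risk in group A
        n_b = sum(1 for x in times_b if x >= t)   # at risk in group B
        d_a = sum(1 for x, e in zip(times_a, events_a) if e and x == t)
        d_b = sum(1 for x, e in zip(times_b, events_b) if e and x == t)
        n, d = n_a + n_b, d_a + d_b
        obs_a += d_a
        exp_a += d * n_a / n                      # expected deaths in A
        if n > 1:
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    obs_b = sum(events_b)
    exp_b = obs_a + obs_b - exp_a
    chi2 = (obs_a - exp_a) ** 2 / var             # 1 degree of freedom
    hr = (obs_a / exp_a) / (obs_b / exp_b)
    return chi2, hr

# Toy data: group A survives longer than group B (1 = death, 0 = censored)
times_a, events_a = [6, 12, 18, 24, 30], [1, 1, 0, 1, 0]
times_b, events_b = [2, 4, 6, 8, 10], [1, 1, 1, 1, 1]
print(log_rank(times_a, events_a, times_b, events_b))
```

A chi-square value above 3.841 (1 df) corresponds to P < 0.05, and a hazard ratio below 1 indicates lower hazard of death in group A, mirroring the female/male HR convention in Figure 1.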
null
null
[]
[]
[]
[ "METHODS", "RESULTS", "DISCUSSION" ]
[ "Patients with uveal metastasis from the Wills Eye Hospital, Philadelphia, PA, who were evaluated between January 1, 1974, and June 1, 2017, were included. Patients with lymphoproliferative disorders, such as lymphoma, leukemia, and multiple myeloma, were excluded. This analysis adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Wills Eye Hospital.\nData included patient demographics, primary cancer site, uveal metastasis clinical features, tumor management, and survival outcome. Demographic data included patient age at the time of ocular diagnosis, sex, and race. Primary cancer information included site of primary malignancy (breast, lung, kidney, gastrointestinal tract, skin, prostate, thyroid, pancreas, others, unknown), date of primary cancer diagnosis, and interval between primary cancer diagnosis and uveal metastasis. Follow-up interval and overall survival data were collected.\nClinical features included patient symptoms, involved eye (right, left), laterality (unilateral, bilateral), visual acuity, and intraocular pressure. The total number of metastases per eye, anatomic location of the uveal metastases (iris, ciliary body, choroid), tumor basal dimension (millimeter), thickness (millimeter), and color (yellow, orange, brown, other) were recorded. All tumors were counted; however, if multiple metastatic tumors were present in a single eye, detailed data were recorded for only the largest tumor per uveal tissue (iris, ciliary body, choroid). For iris metastases, presence of hyphema was recorded. For choroidal metastasis, distance to the foveola and optic disc (millimeter), presence of subretinal fluid, and ultrasonographic acoustic quality (dense, hollow) were recorded.\nAll data were tabulated on Microsoft Excel 2016 and measures of central tendencies (mean, median, range) were obtained using built-in functions. 
Independent 2-sample t test was used to assess statistical significance between continuous data whereas Fisher exact test and chi-square test were used for categorical data. Five-year Kaplan-Meier (KM) survival analysis was performed by grouping censored and death data into half-month intervals. Log-rank test was used to assess statistical significance among KM data and hazard ratios with 95% confidence intervals were calculated. A P value <0.05 was considered statistically significant for all tests.", "There were a total of 2214 uveal metastases in 1310 eyes of 1111 patients. Demographics and clinical features based on patient sex are listed in Table 1. There were 715 (64%) females and 396 males (36%). A comparison (female vs male) revealed significant difference in mean age at ocular diagnosis (58 vs 63 years, P < 0.001), bilateral involvement (21% vs 11%, P < 0.001), and mean visual acuity (20/80 vs 20/150, P < 0.001). Of the 1111 patients, most were white (88%) with no difference in race distribution between sexes.\nDemographics and Clinical Features of Patients\nThe primary cancer site by sex is listed in Table 2. By comparison, uveal metastasis from breast cancer was more common in females (58% vs 1%, P < 0.001), whereas metastasis from lung (19% vs 40%, P < 0.001), kidney (2% vs 9%, P < 0.001), gastrointestinal tract (1% vs 8%, P < 0.001), cutaneous melanoma (1% vs 5%, P < 0.001), and prostate cancer (0% vs 6%, P < 0.001) was more common in males. Unknown primary cancer site was more common in males (11% vs 21%, P < 0.001). The primary cancers classified as other are listed as a footnote in Table 2.\nPrimary Cancer Site\nClinical features of uveal metastasis by sex are listed in Table 3. By comparison, there was no difference per sex in distribution of iris metastasis (7% vs 6%), ciliary body metastasis (2% vs 2%), and choroid metastasis (91% vs 92%). There was significantly more mean number of metastatic tumors per eye in females (1.8 vs 1.6, P = 0.04). 
Regarding choroidal metastasis, there were significant differences in mean tumor base (9.1 vs 10.3 mm, P < 0.001), mean tumor thickness (2.8 vs 3.9 mm, P < 0.001), yellow color (88% vs 81%, P = 0.002), brown color (3% vs 7%, P = 0.001), and presence of subretinal fluid (68% vs 79%, P < 0.001).\nMetastasis Features\nKM survival estimates at 1, 2, 3, 4, and 5 years are listed in Table 4. Considering all uveal metastasis by sex, there were differences in KM survival at 5 years (31% vs 21%, P < 0.001) and mean survival (19.8 vs. 12.6 months, P = 0.03) (Fig. 1). Regarding specific primary cancer sites per sex, a significant difference in KM survival at 5 years was found with primary lung cancer (24% vs 9%, P = 0.04). In the KM analysis, 489 females were censored, compared with 272 males, during the 5-year interval.\nKaplan–Meier Survival Analysis and Mean Survival\nKaplan-Meier survival analysis for patients with uveal metastasis based on sex. CI indicates confidence interval; HR, hazard ratio for death (female/male).\nMultiple post-hoc analyses were performed. When breast cancer was removed from the female cohort, a comparison of female versus male revealed mean age (60 vs 63 years, P = 0.007), bilateral metastases (15% vs 11%, P = 0.142), and mean number of tumors per eye (1.7 vs 1.6, P = 0.34). When looking only at the female cohort, a comparison of tumor diameter and thickness between breast cancer primary site versus all other primary tumor types revealed a statistically significant difference in tumor thickness (2.4 vs 3.4 mm, P < 0.001), but no difference in tumor diameter (9.3 vs 8.9, P = 0.28). For patients with primary lung cancer, tumor diameter was smaller in females compared with that in males (9.2 vs 11.0, P = 0.005) whereas tumor thickness was similar (3.4 vs 3.7, P = 0.25). 
When comparing breast cancer versus lung cancer in all patients, subretinal fluid was more common in lung cancer metastasis (67% vs 78%, P < 0.001), whereas bilateral disease was more common in breast cancer metastasis (26% vs 14%, P < 0.001).", "Our findings in 1111 patients support what has been reported in 2 other large series on uveal metastasis by Shields et al2 (420 patients) and Konstantinidis et al3 (96 patients), in that the most common primary cancer to metastasize to the uvea was breast cancer in females and lung cancer in males. We found additional important details in uveal metastatic disease per sex. We note that the mean age at diagnosis of uveal metastasis in females (58 years) was significantly younger than in males (63 years) (P < 0.001). This age difference was likely because of the predominance of breast cancer in females (58% of uveal metastasis in females) and lung cancer in males (40% of uveal metastasis in males), given that the mean age at breast cancer diagnosis in the United States is 61 years, somewhat lower than the mean age at lung cancer diagnosis of 70 years.1 Shields et al2 reported an average age of 58 years at time of uveal metastasis for all patients, most likely driven by the large cohort of females with breast cancer metastasis (47% of patients) in that study. In a series of 264 cases of uveal metastasis12 from breast cancer specifically, the average age at diagnosis was 56 years (median 57, range 23–84), slightly younger than the female cohort in our study. In a series of 194 patients with uveal metastasis13 from lung cancer specifically, the mean age at diagnosis was 62 years (55% male), slightly younger than the average age of men in our study. When patients with primary breast cancer were removed from the female cohort in our study, the mean age at uveal metastasis diagnosis in females increased from 58 to 60 years, but continued to demonstrate significant difference when compared with males (63 years) (P = 0.007). 
This suggests that other factors, apart from the predominance of breast cancer in females and the related younger age of onset, might be responsible for the age difference between females and males with uveal metastasis.\nBy comparison (female vs male), we found a relatively lower number of metastasis from lung (19% vs 40%), kidney (2% vs 9%), gastrointestinal (1% vs 8%), and cutaneous melanoma (1% vs 5%) malignancies in females. According to the US Centers for Disease Control and Prevention,1 the 2014 incidence rates (per 100,000) of these primary cancers in females versus males revealed lung (50.8 vs 68.1), kidney (11.3 vs 22.0), colorectal (33.7 vs 44.0), and cutaneous melanoma (16.9 vs 27.6).1 The lower incidence rates of these cancers in females correlate with the fewer related uveal metastasis. The lower percentage of brown choroidal metastases in females was likely because of the lower percentage of cutaneous melanoma metastasis in females.\nPrevious reports have revealed the bilateral and multifocal nature of uveal metastasis from breast cancer compared with other primary cancer types.2,3,12,13 The high rate of uveal metastasis from breast cancer in females likely contributed to the increased proportion of bilateral uveal metastasis and increased number of tumors per eye in females in this study. In other reports, bilateral uveal metastasis from breast cancer has ranged between 18% and 33% of patients, whereas bilateral uveal metastasis owing to lung cancer was less common, ranging from 14% to 20%.2,3,11,12 Our data showed that 26% of patients with uveal metastasis from breast cancer demonstrated bilateral involvement compared with 14% of patients with metastasis from lung cancer (P < 0.001). When we removed breast cancer patients from the female cohort, the percentage of bilateral uveal metastasis in females was reduced to 15%, still higher than in males (11%), but without significance (P = 0.14). 
Similarly, when breast cancer was removed from the female cohort, the mean number of tumors per eye in females decreased from 1.8 to 1.7 and failed to demonstrate significance when compared with a mean of 1.6 tumors per eye in males (P = 0.34). These analyses support previous suggestions regarding the particular bilateral and multifocal nature of uveal metastasis from breast cancer.2,3,12,13\nFemales demonstrated smaller choroidal metastases in base (9.1 vs 10.3 mm, P < 0.001) and thickness (2.8 vs 3.9 mm, P < 0.001) compared with males. A previous report documented a tendency toward thinner tumors in metastatic breast cancer (mean 2 mm thickness), compared to lung (3 mm), gastrointestinal (4 mm), kidney (4 mm), prostate (3 mm), and unknown (3 mm).2 In the current analysis, we compared females with choroidal metastasis from breast cancer to females with metastasis from all other primary sites and found no significant difference in tumor basal diameter (9.3 vs 8.9, P = 0.28), but noted that tumor thickness was significantly less in those females with breast metastasis (vs others) (2.4 vs 3.4 mm, P < 0.001). The overall difference in choroidal metastasis thickness (thinner in females) was possibly because of the flatter nature of choroidal metastases from breast cancer and the high proportion of breast cancer in the female cohort. When comparing females and males with choroidal metastasis from lung cancer, there was no significant difference in tumor thickness (3.4 vs 3.7, P = 0.25); however, tumor basal diameter was significantly smaller in females (9.2 vs 11.0, P = 0.005). The overall difference in choroidal metastasis basal diameter (larger in males) is possibly explained by the high proportion of lung cancer in the male cohort.\nThe smaller tumor size in females (base and thickness) could be responsible for the fewer number of tumors associated with subretinal fluid in females (68% vs 78%, P < 0.001). 
However, when comparing all patients with breast cancer metastases to all patients with lung cancer metastases, subretinal fluid was far more common with lung cancer metastases (67% in breast vs 78% in lung, P < 0.001). This is possibly explained by the larger tumor base of metastasis from lung cancer or the possibility that choroidal metastases from different primary sites display different exudative profiles. Other sex-related differences, such as better mean visual acuity in females, could be multifactorial with smaller tumor size, less frequent subretinal fluid, and younger age contributing. There was no sex difference in the precise distances of choroidal metastasis from the visually vital optic disc and foveola.\nRelatively few studies have addressed overall survival for patients with uveal metastasis.2–5,11–13 Freedman and Folk4 reported that patients with metastasis to the eye and orbit from breast cancer (55 patients) lived longer (mean survival time = 22 months) than those with metastasis owing to lung cancer (16 patients) (mean survival time = 6 months). A recent comprehensive analysis from our department on prognosis of uveal metastasis based on primary cancer site (1111 patients) revealed 1-year and 5-year KM survival estimates of 57% and 24% for all primary cancer sites. In that analysis, patients with uveal metastasis from breast cancer versus lung cancer had 5-year KM survival estimates of 25% versus 13%, with a statistically significant difference. 
In a report specifically regarding breast cancer metastasis to the uvea, 1-year and 5-year KM survival estimates of 65% and 24%, respectively, were reported.12 With specific investigation of lung cancer with uveal metastasis, survival at 1-year was 45% and there was no information on 5-year survival.13 In this analysis of 1111 patients with uveal metastasis based on sex, we note that overall 1-year and 5-year KM survival for females (66% and 31%, respectively) and males (47% and 21%, respectively) resemble those listed above, specifically for breast and lung cancer.12,13 It is likely that the predominance of uveal metastasis from breast cancer in females and the predominance of lung cancer in males are responsible for the overall survival differences per sex; however, other factors, like patient age, tumor size, and malignant potential of each specific tumor type, could be contributory. In particular, when focusing on survival after uveal metastasis from lung cancer by sex, 5-year survival in females (24%) was significantly longer than males (9%). This could be because of younger age of females with lung metastasis compared with males (61 years vs 64 years, P = 0.04), tumor size, and other factors.\nThis is a large series encompassing a broad time period and the survival data reported herein are possibly limited by the broad grouping of survival outcomes. Given that survival for some patients with metastatic cancer has increased in recent years, a study addressing survival trends of patients with uveal metastasis through the decades could be of interest.17\nIn conclusion, over a 43-year span at a major ocular oncology center, uveal metastasis was more common in females (64%) than males (36%). Females most commonly presented with metastasis from breast cancer (58%), whereas males most commonly demonstrated underlying lung cancer (40%). 
Females presented at a younger age, with better visual acuity, more bilateral metastases, smaller tumor size, and less frequent association with subretinal fluid, compared with males. After detection and management of uveal metastasis, overall survival was significantly better in females compared with males. Understanding sex differences in uveal metastasis can provide better understanding in patient care." ]
[ "methods", "results", "discussion" ]
[ "cancer", "choroid", "metastasis", "sex", "uvea" ]
METHODS: Patients with uveal metastasis from the Wills Eye Hospital, Philadelphia, PA, who were evaluated between January 1, 1974, and June 1, 2017, were included. Patients with lymphoproliferative disorders, such as lymphoma, leukemia, and multiple myeloma, were excluded. This analysis adhered to the tenets of the Declaration of Helsinki and was approved by the Institutional Review Board of Wills Eye Hospital. Data included patient demographics, primary cancer site, uveal metastasis clinical features, tumor management, and survival outcome. Demographic data included patient age at the time of ocular diagnosis, sex, and race. Primary cancer information included site of primary malignancy (breast, lung, kidney, gastrointestinal tract, skin, prostate, thyroid, pancreas, others, unknown), date of primary cancer diagnosis, and interval between primary cancer diagnosis and uveal metastasis. Follow-up interval and overall survival data were collected. Clinical features included patient symptoms, involved eye (right, left), laterality (unilateral, bilateral), visual acuity, and intraocular pressure. The total number of metastases per eye, anatomic location of the uveal metastases (iris, ciliary body, choroid), tumor basal dimension (millimeter), thickness (millimeter), and color (yellow, orange, brown, other) were recorded. All tumors were counted; however, if multiple metastatic tumors were present in a single eye, detailed data were recorded for only the largest tumor per uveal tissue (iris, ciliary body, choroid). For iris metastases, presence of hyphema was recorded. For choroidal metastasis, distance to the foveola and optic disc (millimeter), presence of subretinal fluid, and ultrasonographic acoustic quality (dense, hollow) were recorded. All data were tabulated on Microsoft Excel 2016 and measures of central tendencies (mean, median, range) were obtained using built-in functions. 
Independent 2-sample t test was used to assess statistical significance between continuous data whereas Fisher exact test and chi-square test were used for categorical data. Five-year Kaplan-Meier (KM) survival analysis was performed by grouping censored and death data into half-month intervals. Log-rank test was used to assess statistical significance among KM data and hazard ratios with 95% confidence intervals were calculated. A P value <0.05 was considered statistically significant for all tests. RESULTS: There were a total of 2214 uveal metastases in 1310 eyes of 1111 patients. Demographics and clinical features based on patient sex are listed in Table 1. There were 715 (64%) females and 396 males (36%). A comparison (female vs male) revealed significant difference in mean age at ocular diagnosis (58 vs 63 years, P < 0.001), bilateral involvement (21% vs 11%, P < 0.001), and mean visual acuity (20/80 vs 20/150, P < 0.001). Of the 1111 patients, most were white (88%) with no difference in race distribution between sexes. Demographics and Clinical Features of Patients The primary cancer site by sex is listed in Table 2. By comparison, uveal metastasis from breast cancer was more common in females (58% vs 1%, P < 0.001), whereas metastasis from lung (19% vs 40%, P < 0.001), kidney (2% vs 9%, P < 0.001), gastrointestinal tract (1% vs 8%, P < 0.001), cutaneous melanoma (1% vs 5%, P < 0.001), and prostate cancer (0% vs 6%, P < 0.001) was more common in males. Unknown primary cancer site was more common in males (11% vs 21%, P < 0.001). The primary cancers classified as other are listed as a footnote in Table 2. Primary Cancer Site Clinical features of uveal metastasis by sex are listed in Table 3. By comparison, there was no difference per sex in distribution of iris metastasis (7% vs 6%), ciliary body metastasis (2% vs 2%), and choroid metastasis (91% vs 92%). There was significantly more mean number of metastatic tumors per eye in females (1.8 vs 1.6, P = 0.04). 
Regarding choroidal metastasis, there were significant differences in mean tumor base (9.1 vs 10.3 mm, P < 0.001), mean tumor thickness (2.8 vs 3.9 mm, P < 0.001), yellow color (88% vs 81%, P = 0.002), brown color (3% vs 7%, P = 0.001), and presence of subretinal fluid (68% vs 79%, P < 0.001). Metastasis Features KM survival estimates at 1, 2, 3, 4, and 5 years are listed in Table 4. Considering all uveal metastasis by sex, there were differences in KM survival at 5 years (31% vs 21%, P < 0.001) and mean survival (19.8 vs. 12.6 months, P = 0.03) (Fig. 1). Regarding specific primary cancer sites per sex, a significant difference in KM survival at 5 years was found with primary lung cancer (24% vs 9%, P = 0.04). In the KM analysis, 489 females were censored, compared with 272 males, during the 5-year interval. Kaplan–Meier Survival Analysis and Mean Survival Kaplan-Meier survival analysis for patients with uveal metastasis based on sex. CI indicates confidence interval; HR, hazard ratio for death (female/male). Multiple post-hoc analyses were performed. When breast cancer was removed from the female cohort, a comparison of female versus male revealed mean age (60 vs 63 years, P = 0.007), bilateral metastases (15% vs 11%, P = 0.142), and mean number of tumors per eye (1.7 vs 1.6, P = 0.34). When looking only at the female cohort, a comparison of tumor diameter and thickness between breast cancer primary site versus all other primary tumor types revealed a statistically significant difference in tumor thickness (2.4 vs 3.4 mm, P < 0.001), but no difference in tumor diameter (9.3 vs 8.9, P = 0.28). For patients with primary lung cancer, tumor diameter was smaller in females compared with that in males (9.2 vs 11.0, P = 0.005) whereas tumor thickness was similar (3.4 vs 3.7, P = 0.25). 
When comparing breast cancer versus lung cancer in all patients, subretinal fluid was more common in lung cancer metastasis (67% vs 78%, P < 0.001), whereas bilateral disease was more common in breast cancer metastasis (26% vs 14%, P < 0.001). DISCUSSION: Our findings in 1111 patients support what has been reported in 2 other large series on uveal metastasis by Shields et al2 (420 patients) and Konstantinidis et al3 (96 patients), in that the most common primary cancer to metastasize to the uvea was breast cancer in females and lung cancer in males. We found additional important details in uveal metastatic disease per sex. We note that the mean age at diagnosis of uveal metastasis in females (58 years) was significantly younger than in males (63 years) (P < 0.001). This age difference was likely because of the predominance of breast cancer in females (58% of uveal metastasis in females) and lung cancer in males (40% of uveal metastasis in males), given that the mean age at breast cancer diagnosis in the United States is 61 years, somewhat lower than the mean age at lung cancer diagnosis of 70 years.1 Shields et al2 reported an average age of 58 years at time of uveal metastasis for all patients, most likely driven by the large cohort of females with breast cancer metastasis (47% of patients) in that study. In a series of 264 cases of uveal metastasis12 from breast cancer specifically, the average age at diagnosis was 56 years (median 57, range 23–84), slightly younger than the female cohort in our study. In a series of 194 patients with uveal metastasis13 from lung cancer specifically, the mean age at diagnosis was 62 years (55% male), slightly younger than the average age of men in our study. When patients with primary breast cancer were removed from the female cohort in our study, the mean age at uveal metastasis diagnosis in females increased from 58 to 60 years, but continued to demonstrate significant difference when compared with males (63 years) (P = 0.007). 
This suggests that other factors, apart from the predominance of breast cancer in females and the related younger age of onset, might be responsible for the age difference between females and males with uveal metastasis. By comparison (female vs male), we found a relatively lower number of metastasis from lung (19% vs 40%), kidney (2% vs 9%), gastrointestinal (1% vs 8%), and cutaneous melanoma (1% vs 5%) malignancies in females. According to the US Centers for Disease Control and Prevention,1 the 2014 incidence rates (per 100,000) of these primary cancers in females versus males revealed lung (50.8 vs 68.1), kidney (11.3 vs 22.0), colorectal (33.7 vs 44.0), and cutaneous melanoma (16.9 vs 27.6).1 The lower incidence rates of these cancers in females correlate with the fewer related uveal metastasis. The lower percentage of brown choroidal metastases in females was likely because of the lower percentage of cutaneous melanoma metastasis in females. Previous reports have revealed the bilateral and multifocal nature of uveal metastasis from breast cancer compared with other primary cancer types.2,3,12,13 The high rate of uveal metastasis from breast cancer in females likely contributed to the increased proportion of bilateral uveal metastasis and increased number of tumors per eye in females in this study. In other reports, bilateral uveal metastasis from breast cancer has ranged between 18% and 33% of patients, whereas bilateral uveal metastasis owing to lung cancer was less common, ranging from 14% to 20%.2,3,11,12 Our data showed that 26% of patients with uveal metastasis from breast cancer demonstrated bilateral involvement compared with 14% of patients with metastasis from lung cancer (P < 0.001). When we removed breast cancer patients from the female cohort, the percentage of bilateral uveal metastasis in females was reduced to 15%, still higher than in males (11%), but without significance (P = 0.14). 
Similarly, when breast cancer was removed from the female cohort, the mean number of tumors per eye in females decreased from 1.8 to 1.7 and failed to demonstrate significance when compared with a mean of 1.6 tumors per eye in males (P = 0.34). These analyses support previous suggestions regarding the particular bilateral and multifocal nature of uveal metastasis from breast cancer.2,3,12,13 Females demonstrated smaller choroidal metastases in base (9.1 vs 10.3 mm, P < 0.001) and thickness (2.8 vs 3.9 mm, P < 0.001) compared with males. A previous report documented a tendency toward thinner tumors in metastatic breast cancer (mean 2 mm thickness), compared to lung (3 mm), gastrointestinal (4 mm), kidney (4 mm), prostate (3 mm), and unknown (3 mm).2 In the current analysis, we compared females with choroidal metastasis from breast cancer to females with metastasis from all other primary sites and found no significant difference in tumor basal diameter (9.3 vs 8.9, P = 0.28), but noted that tumor thickness was significantly less in those females with breast metastasis (vs others) (2.4 vs 3.4 mm, P < 0.001). The overall difference in choroidal metastasis thickness (thinner in females) was possibly because of the flatter nature of choroidal metastases from breast cancer and the high proportion of breast cancer in the female cohort. When comparing females and males with choroidal metastasis from lung cancer, there was no significant difference in tumor thickness (3.4 vs 3.7, P = 0.25); however, tumor basal diameter was significantly smaller in females (9.2 vs 11.0, P = 0.005). The overall difference in choroidal metastasis basal diameter (larger in males) is possibly explained by the high proportion of lung cancer in the male cohort. The smaller tumor size in females (base and thickness) could be responsible for the fewer number of tumors associated with subretinal fluid in females (68% vs 78%, P < 0.001). 
However, when comparing all patients with breast cancer metastases to all patients with lung cancer metastases, subretinal fluid was far more common with lung cancer metastases (67% in breast vs 78% in lung, P < 0.001). This is possibly explained by the larger tumor base of metastasis from lung cancer or the possibility that choroidal metastases from different primary sites display different exudative profiles. Other sex-related differences, such as better mean visual acuity in females, could be multifactorial with smaller tumor size, less frequent subretinal fluid, and younger age contributing. There was no sex difference in the precise distances of choroidal metastasis from the visually vital optic disc and foveola. Relatively few studies have addressed overall survival for patients with uveal metastasis.2–5,11–13 Freedman and Folk4 reported that patients with metastasis to the eye and orbit from breast cancer (55 patients) lived longer (mean survival time = 22 months) than those with metastasis owing to lung cancer (16 patients) (mean survival time = 6 months). A recent comprehensive analysis from our department on prognosis of uveal metastasis based on primary cancer site (1111 patients) revealed 1-year and 5-year KM survival estimates of 57% and 24% for all primary cancer sites. In that analysis, patients with uveal metastasis from breast cancer versus lung cancer had 5-year KM survival estimates of 25% versus 13%, with a statistically significant difference. 
In a report specifically regarding breast cancer metastasis to the uvea, 1-year and 5-year KM survival estimates of 65% and 24%, respectively, were reported.12 With specific investigation of lung cancer with uveal metastasis, survival at 1-year was 45% and there was no information on 5-year survival.13 In this analysis of 1111 patients with uveal metastasis based on sex, we note that overall 1-year and 5-year KM survival for females (66% and 31%, respectively) and males (47% and 21%, respectively) resemble those listed above, specifically for breast and lung cancer.12,13 It is likely that the predominance of uveal metastasis from breast cancer in females and the predominance of lung cancer in males are responsible for the overall survival differences per sex; however, other factors, like patient age, tumor size, and malignant potential of each specific tumor type, could be contributory. In particular, when focusing on survival after uveal metastasis from lung cancer by sex, 5-year survival in females (24%) was significantly longer than males (9%). This could be because of younger age of females with lung metastasis compared with males (61 years vs 64 years, P = 0.04), tumor size, and other factors. This is a large series encompassing a broad time period and the survival data reported herein are possibly limited by the broad grouping of survival outcomes. Given that survival for some patients with metastatic cancer has increased in recent years, a study addressing survival trends of patients with uveal metastasis through the decades could be of interest.17 In conclusion, over a 43-year span at a major ocular oncology center, uveal metastasis was more common in females (64%) than males (36%). Females most commonly presented with metastasis from breast cancer (58%), whereas males most commonly demonstrated underlying lung cancer (40%). 
Females presented at a younger age, with better visual acuity, more bilateral metastases, smaller tumor size, and less frequent association with subretinal fluid, compared with males. After detection and management of uveal metastasis, overall survival was significantly better in females than in males. Understanding sex differences in uveal metastasis can inform and improve patient care.
Background: Lacking in previous studies on uveal metastasis is a robust statistical comparison of patient demographics, tumor features, and overall survival based on patient sex. Methods: This is a retrospective analysis. All patients were evaluated on the Ocular Oncology Service, Wills Eye Hospital, PA between January 1, 1974 and June 1, 2017. Results: A total of 2214 uveal metastases were diagnosed in 1310 eyes of 1111 consecutive patients. A comparison (female versus male) revealed differences across several demographic and clinical features including, among others, mean age at metastasis diagnosis (58 vs 63 years, P < 0.001), bilateral disease (21% vs 11%, P < 0.001), and mean number of metastases per eye (1.8 vs 1.6 tumors per eye, P = 0.04). There were differences in overall mean survival (20 vs 13 months, P = 0.03) and 5-year survival (Kaplan-Meier estimate) (31% vs 21%, P < 0.001). Conclusions: There are demographic, clinical, and survival differences when patients with uveal metastases are compared by sex. Understanding these differences can aid the clinician in better anticipating patient outcomes.
null
null
3,147
234
[]
3
[ "cancer", "metastasis", "vs", "females", "uveal", "uveal metastasis", "breast", "patients", "lung", "breast cancer" ]
[ "patients uveal metastasis13", "prognosis uveal metastasis", "uveal metastasis common", "cancer uveal metastasis", "uveal metastasis wills" ]
null
null
null
null
[CONTENT] cancer | choroid | metastasis | sex | uvea [SUMMARY]
null
null
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | Child | Female | Follow-Up Studies | Humans | Male | Microscopy, Acoustic | Middle Aged | Neoplasm Metastasis | Prognosis | Retrospective Studies | Sex Distribution | Sex Factors | Survival Rate | United States | Uvea | Uveal Neoplasms | Young Adult [SUMMARY]
null
null
null
[CONTENT] patients uveal metastasis13 | prognosis uveal metastasis | uveal metastasis common | cancer uveal metastasis | uveal metastasis wills [SUMMARY]
null
null
null
[CONTENT] cancer | metastasis | vs | females | uveal | uveal metastasis | breast | patients | lung | breast cancer [SUMMARY]
null
null
null
[CONTENT] data | included | test | recorded | millimeter | included patient | eye | primary | uveal | metastasis [SUMMARY]
[CONTENT] vs | 001 | cancer | metastasis | vs 001 | primary | mean | table | tumor | difference [SUMMARY]
null
[CONTENT] vs | cancer | metastasis | females | uveal | 001 | uveal metastasis | breast cancer | primary | breast [SUMMARY]
null
null
[CONTENT] ||| the Ocular Oncology Service | Wills Eye Hospital | PA | between January 1, 1974 and June 1, 2017 [SUMMARY]
[CONTENT] 2214 | 1310 | 1111 ||| 58 | 63 years | P < 0.001 | 21% | 11% | P < 0.001 | 1.8 | 1.6 | 0.04 ||| 20 | 13 months | 0.03 | 5-year | Kaplan-Meier | 31% | 21% | P < 0.001 [SUMMARY]
null
[CONTENT] ||| ||| the Ocular Oncology Service | Wills Eye Hospital | PA | between January 1, 1974 and June 1, 2017 ||| 2214 | 1310 | 1111 ||| 58 | 63 years | P < 0.001 | 21% | 11% | P < 0.001 | 1.8 | 1.6 | 0.04 ||| 20 | 13 months | 0.03 | 5-year | Kaplan-Meier | 31% | 21% | P < 0.001 ||| ||| [SUMMARY]
null
Treatment of diabetic foot during the COVID-19 pandemic: A systematic review.
36107573
In the context of the novel coronavirus disease 2019 (COVID-19) pandemic, people have had to stay at home more and make fewer trips to the hospital. Furthermore, hospitals give priority to the treatment of COVID-19 patients. These factors are not conducive to the treatment of diabetic foot and may even increase the risk of amputation. Therefore, how to better treat patients with diabetic foot during the COVID-19 epidemic, prevent further aggravation of the disease, and reduce the risk of amputation has become an urgent problem for doctors worldwide.
BACKGROUND
The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from the database establishment to October 2021. All studies on treatment of diabetic foot in the COVID-19 pandemic were included in our study.
METHODS
A total of 6 studies were included in this review. In the 6 protocols for treating patients with diabetic foot, the researchers classified patients according to the condition of their diabetic foot. Patients with mild diabetic foot conditions received treatment at home, with doctors guiding wound dressing changes and medication through telemedicine. Patients with severe diabetic foot conditions were admitted to the hospital for treatment. Patients were screened for COVID-19 before hospitalization; those with confirmed or suspected COVID-19 were treated in isolation, and those without COVID-19 were treated in a general ward.
RESULTS
Through this systematic review, we propose a new protocol for the treatment of patients with diabetic foot in the context of the COVID-19 pandemic, providing a reference for clinicians treating diabetic foot during the epidemic. However, the global applicability of the proposed protocol requires further clinical testing.
CONCLUSION
[ "Amputation, Surgical", "COVID-19", "Diabetes Mellitus", "Diabetic Foot", "Humans", "Pandemics", "Telemedicine" ]
9439626
1. Introduction
The novel coronavirus disease 2019 (COVID-19) epidemic, which began in 2019, is not over yet.[1] So far, >85 million people worldwide have been infected with COVID-19 and >1.8 million have died from it.[2] In the context of the COVID-19 global pandemic, the treatment of diabetic foot may be wrongly classified as unnecessary. But without regular diabetic foot care or surgical intervention, patients with diabetic foot risk rapid wound infection, which can lead to amputation and death.[3] Studies have shown that people with diabetes are more susceptible to COVID-19 and have a higher risk of death.[4] Diabetic foot ulcer is a common complication in diabetic patients, and up to one-third of diabetic patients will develop diabetic foot ulcer symptoms.[5] Severe diabetic foot ulcers can lead to amputation, disability, and even death. The conventional treatment of diabetic foot consists mainly of controlling blood glucose under the supervision of doctors and changing dressings on ulcer wounds. Patients with severe diabetic foot ulcers need to be hospitalized.[6] However, the COVID-19 epidemic has changed the treatment model of diabetic foot, bringing great challenges to its treatment. In the context of the COVID-19 pandemic, people have had to stay at home more and make fewer trips to the hospital. Furthermore, hospitals give priority to the treatment of COVID-19 patients.[7] These factors are not conducive to the treatment of diabetic foot and may even increase the risk of amputation. Therefore, how to better treat patients with diabetic foot during the COVID-19 epidemic, prevent further aggravation of the disease, and reduce the risk of amputation has become an urgent problem for doctors worldwide. 
In order to solve this problem, we searched the relevant literature and systematically reviewed the measures taken by doctors in various countries to treat diabetic foot in the context of the COVID-19 pandemic, hoping to find effective methods to treat diabetic foot during the epidemic and to provide a reference for clinicians.
2. Methods
2.1. Search strategy The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from the database establishment to October 2021. Keywords retrieved were “Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2),” “COVID-19,” “Diabetes Mellitus,” “Diabetic Foot,” and “Diabetic Foot Ulcer.” This search had no language restrictions or research type restrictions. 2.2. Inclusion and exclusion criteria 2.2.1. Inclusion criteria. Studies on the treatment of diabetic foot in the COVID-19 pandemic. The subjects were patients with diabetic foot, not animals or cells. 
Specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. 2.2.2. Exclusion criteria. Duplicate publications. No specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. The subjects were animals or cells. 2.3. Data extraction and data synthesis Information was extracted from the included articles by 2 researchers, including the first authors’ name, time of publication, authors’ countries, and the specific measures to treat diabetic foot in the context of the COVID-19 pandemic. The findings of the included articles were pooled and are presented in the results section in the form of narration and tables.
3. Results
By searching the databases, a total of 65 related articles were retrieved. Two researchers screened the studies by reading abstracts and full texts, and a total of 6 studies were included.[8–13] The selection process is shown in Figure 1. The characteristics of the 6 included articles are shown in Table 1. Summary table of included studies. COVID-19 = novel coronavirus disease 2019; DFU = diabetic foot ulcer. PRISMA flow diagram. PRISMA = preferred reporting items for systematic reviews and meta-analyses. In the Osman Kelahmetoglu protocol,[8] doctors first graded the patients’ wound levels and degree of infection. Different treatment measures were taken according to patients’ conditions. Once a patient had a fever, regardless of the degree of wound infection, COVID-19 screening was prioritized, and isolation treatment was conducted if infection was confirmed. Diabetic foot wounds without infection or with mild infection could be treated at home, with telemedicine follow-up once a week. Patients with moderate infection of diabetic foot wounds could be treated in the outpatient clinic, with telemedicine follow-up once a week. Patients with diabetic foot wounds with serious signs of infection needed to be hospitalized. Treatment for diabetic foot ulcer infection consisted of intravenous antibiotics, fluid replacement, correction of electrolyte imbalance, and glucose control. Moreover, the authors recommended online counseling for patients to reduce the number of hospital visits, and the use of thorax computerized tomography for preoperative screening in all diabetic foot ulcer patients with severe signs of infection. In the Marco Meloni protocol,[9] the doctor first assessed the patient’s condition. The fast-track-pathway classification method was used to grade the diabetic foot. Critical diabetic foot patients with severe complications (wet gangrene, abscess, fever, signs of sepsis, and acute critical limb ischemia) required emergency hospitalization. 
Patients with diabetic foot with complex ulcers needed to be evaluated on an outpatient basis, patients with severe ulcers needed to be hospitalized for treatment, and patients not hospitalized could be followed up by telemedicine. Diabetic foot patients with complex ulcers and 3 or more complications were reevaluated on an outpatient basis only if the ulcers worsened. Diabetic foot patients with complex ulcers and 2 or fewer complications were reevaluated according to individual circumstances. Telemedicine follow-up was performed for diabetic foot patients with less severe ulcer infection. In the Ibrahim Jaly protocol,[10] doctors used online resources to educate patients. They reinforced glycemic control through diet, exercise, and correct medication, and developed patient understanding of diabetic foot and the risks of complications. Diabetic foot patients were classified, evaluated, and referred via telemedicine consultation. In the Fenghua Tao protocol,[11] patients with diabetic foot were first screened for COVID-19. Diabetic foot patients not infected with COVID-19 followed normal treatment procedures; conservative treatment, interventional treatment, debridement, local decompression, and amputation were adopted according to the severity of the diabetic foot. Patients with diabetic foot who were suspected or confirmed to have COVID-19 were quarantined first. Patients with asymptomatic or mild COVID-19 could be treated for diabetic foot under isolation conditions. If the symptoms of COVID-19 were severe, COVID-19 treatment was prioritized to ensure patient safety. If a patient had suspected or confirmed COVID-19 and required surgery, the operation had to proceed under strict protective conditions. In the Avica Atri protocol,[12] medical staff needed to give adequate care to patients with diabetic foot to minimize the number of visits to the hospital. Most patients with diabetic foot received treatment and care at home. 
Patients with diabetic foot with severe infection may develop sepsis or require surgical intervention and need to go to the hospital for treatment. Home care, telemedicine, and the establishment of clinics outside the hospital can fully meet the needs of patients with diabetic foot. In the Lee C. Rogers protocol,[13] a triage system was established that classified patients into 4 grades: stable, guarded, serious, and critical. Patients in the first 2 grades could receive telemedicine care at home, while those in the latter 2 grades would receive outpatient treatment or even hospitalization. The model of inpatient care needs to change, with doctors treating patients more on an outpatient basis and caring for patients in their homes. Physicians need to use telemedicine to guide patients, such as Medicare Telehealth Visits, virtual check-ins, e-Visits, and Remote Patient Monitoring. Doctors can schedule more home health visits, during which they can change wound dressings and administer antibiotics in patients’ homes. The authors’ goal was to ensure patient safety and reduce the burden on healthcare systems during the COVID-19 pandemic.
5. Conclusion
Conceptualization: Jingui Yan, Yiqing Xiao, Yanjin Wang. Data curation: Jingui Yan, Yiqing Xiao, Rui Cao, Yipeng Su, Dan Wu, Yanjin Wang. Formal analysis: Yipeng Su, Dan Wu. Investigation: Dan Wu. Methodology: Jingui Yan, Yiqing Xiao, Rui Cao, Yipeng Su. Project administration: Jingui Yan, Yanjin Wang. Software: Rui Cao, Yipeng Su, Dan Wu. Supervision: Jingui Yan, Yanjin Wang. Visualization: Jingui Yan, Yanjin Wang. Writing – original draft: Jingui Yan, Yiqing Xiao, Rui Cao, Yipeng Su, Dan Wu, Yanjin Wang. Writing – review & editing: Jingui Yan, Yiqing Xiao, Rui Cao, Yanjin Wang.
[ "2.1. Search strategy", "2.2. Inclusion and exclusion criteria", "2.2.1. Inclusion criteria.", "2.2.2. Exclusion criteria.", "2.3. Data extraction and data synthesis", "5. Conclusion" ]
[ "The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from the database establishment to October 2021. Keywords retrieved were “Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2),” “COVID-19,” “Diabetes Mellitus,” “Diabetic Foot,” and “Diabetic Foot Ulcer.” This search has no language restrictions or research type restrictions.", "2.2.1. Inclusion criteria. Studies on the treatment of diabetic foot in the COVID-19 pandemic. The subjects were patients with diabetic foot, not animals or cells. Specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies.\n2.2.2. Exclusion criteria. Duplicate publications. No specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. The subjects were animals or cells.", "Studies on the treatment of diabetic foot in the COVID-19 pandemic. The subjects were patients with diabetic foot, not animals or cells. Specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies.", "Duplicate publications. No specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. The subjects were animals or cells.", "Information was extracted from the included articles by 2 researchers, including the first authors’ name, time of publication, authors’ countries, and the specific measures to treat diabetic foot in the context of the COVID-19 pandemic. 
The findings of articles included in our study will be pooled and presented in the results section in the form of narration and tables.", "Through this systematic review, we proposed a new protocol for the treatment of patients with diabetic foot in the context of the COVID-19 pandemic. It provided reference for the treatment of diabetic foot in the context of COVID-19 epidemic. However, the global applicability of the treatment protocol for diabetic foot in the context of COVID-19 epidemic proposed in this study needs further clinical testing." ]
[ null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Search strategy", "2.2. Inclusion and exclusion criteria", "2.2.1. Inclusion criteria.", "2.2.2. Exclusion criteria.", "2.3. Data extraction and data synthesis", "3. Results", "4. Discussion", "5. Conclusion" ]
[ "The novel coronavirus disease 2019 (COVID-19) epidemic, which began in 2019, is not over yet.[1] So far, >85 million people worldwide have been infected with COVID-19 and >1.8 million have died from COVID-19.[2] In the context of the COVID-19 global pandemic, the treatment of diabetic foot may be wrongly classified as unnecessary. But without regular diabetic foot care or surgical intervention, patients with diabetic foot risk rapid wound infection, which can lead to amputation and death.[3] Study has showed that people with diabetes are more susceptible to COVID-19 and have a higher risk of death.[4]\nDiabetic foot ulcer is a more common complication in diabetic patients, and up to one-third of diabetic patients will develop diabetic foot ulcer symptoms.[5] Severe diabetic foot ulcers can lead to amputation, disability and even death. The conventional treatment of diabetic foot is mainly to control blood glucose under the supervision of doctors and change dressing on ulcer wounds. Patients with severe diabetic foot ulcer need to be hospitalized.[6] However, the COVID-19 epidemic has changed the treatment model of diabetic foot, bringing great challenges to the treatment of diabetic foot.\nIn the context of the COVID-19 pandemic, people have had to stay at home more and make fewer trips to the hospital. Furthermore, hospitals give priority to the treatment of COVID-19 patients.[7] These factors are not conducive to the treatment of diabetic foot, and even increase the risk of amputation. Therefore, how to better treat patients with diabetic foot during the COVID-19 epidemic, prevent further aggravation of the disease and reduce the risk of amputation in patients with diabetic foot has become an urgent problem for doctors around the world. 
In order to solve this problem, we searched relevant literature and systematically reviewed the measures taken by doctors in various countries to treat diabetic foot in the context of COVID-19 pandemic, hoping to find effective methods to treat diabetic foot in the context of COVID-19 epidemic and provide reference for clinicians in the treatment of diabetic foot.", "2.1. Search strategy The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from the database establishment to October 2021. Keywords retrieved were “Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2),” “COVID-19,” “Diabetes Mellitus,” “Diabetic Foot,” and “Diabetic Foot Ulcer.” This search has no language restrictions or research type restrictions.\n2.2. Inclusion and exclusion criteria 2.2.1. Inclusion criteria. Studies on the treatment of diabetic foot in the COVID-19 pandemic. The subjects were patients with diabetic foot, not animals or cells. Specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies.\n2.2.2. Exclusion criteria. Duplicate publications. No specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. The subjects were animals or cells.\n2.3. Data extraction and data synthesis Information was extracted from the included articles by 2 researchers, including the first authors’ name, time of publication, authors’ countries, and the specific measures to treat diabetic foot in the context of the COVID-19 pandemic. The findings of articles included in our study will be pooled and presented in the results section in the form of narration and tables.", "The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from the database establishment to October 2021. 
Keywords retrieved were “Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2),” “COVID-19,” “Diabetes Mellitus,” “Diabetic Foot,” and “Diabetic Foot Ulcer.” This search has no language restrictions or research type restrictions.", "2.2.1. Inclusion criteria. Studies on the treatment of diabetic foot in the COVID-19 pandemic. The subjects were patients with diabetic foot, not animals or cells. Specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies.\n2.2.2. Exclusion criteria. Duplicate publications. No specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. The subjects were animals or cells.", "Studies on the treatment of diabetic foot in the COVID-19 pandemic. The subjects were patients with diabetic foot, not animals or cells. Specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies.", "Duplicate publications. No specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the studies. The subjects were animals or cells.", "Information was extracted from the included articles by 2 researchers, including the first authors’ name, time of publication, authors’ countries, and the specific measures to treat diabetic foot in the context of the COVID-19 pandemic. 
The findings of articles included in our study will be pooled and presented in the results section in the form of narration and tables.", "By searching the database, a total of 65 related articles were retrieved. Two researchers screened the study by reading abstracts and full texts, and a total of 6 literatures were included.[8–13] The selection process was shown in Figure 1. The characteristics of the 6 included articles are shown in Table 1.\nSummary table of included studies.\nCOVID-19 = novel coronavirus disease 2019; DFU = diabetic foot ulcer.\nPRISMA flow diagram. PRISMA = preferred reporting items for systematic reviews and meta-analyses.\nIn Osman Kelahmetoglu protocol,[8] doctors first graded the patients’ wound levels and the degree of infection. Different treatment measures were taken according to patients’ different conditions. Once a patient had a fever, regardless of the degree of wound infection, COVID-19 screening should be prioritized, and isolation treatment should be conducted if confirmed. Diabetic foot wounds without infection or mild infection can be treated at home, and telemedicine follow-up was conducted once a week. Patients with moderate infection of diabetic foot wounds can be treated in the outpatient clinic, and telemedicine follow-up was conducted once a week. Patients with diabetic foot wounds with serious signs of infection need to be hospitalized. Treatment for diabetic foot ulcer infection was intravenous antibiotics, fluid replacement, correction of electrolytes imbalance, glucose control, etc. Moreover, the authors recommended online counseling for patients to reduce the number of hospital visits and the use of thorax computerized tomography for preoperative screening in all diabetic foot ulcer patients with severe signs of infection.\nIn Marco Meloni protocol,[9] the doctor first assessed the patients’ condition. The fast-track-pathway classification method was used to grade diabetic foot. 
Critical diabetic foot patients with severe complications (wet gangrene, abscess, fever, signs of sepsis, and acute critical limb ischemia) require emergency hospitalization. Patients with diabetic foot with complex ulcers need to be evaluated on an outpatient basis, patients with severe ulcers need to be hospitalized for treatment, and patients without hospitalization can be followed up by telemedicine. Diabetic foot patients with complex ulcers with 3 or more complications were reevaluated on an outpatient basis only if the ulcers were aggravated. Diabetic foot patients with complex ulcers with 2 or less complications should be reevaluated according to individual circumstance. Telemedicine follow-up was performed for diabetic foot patients with less severe ulcer infection.\nIn Ibrahim Jaly protocol,[10] doctors used online resources to educate patients. They reminded glycemic control through diet, exercise and correct medication. They developed patient understanding of diabetic foot and risks of complications. Diabetic foot patients were classified and condition evaluated and referred by telemedicine consultation.\nIn Fenghua Tao protocol,[11] patients with diabetic foot were first screened for COVID-19. Diabetic foot patients who are not infected with COVID-19 follow normal treatment procedures, conservative treatment, interventional treatment, debridement, local decompression and amputation were adopted according to the severity of diabetic foot. Patients with diabetic feet who are suspected or confirmed to have COVID-19 should be quarantined first. Patients with asymptomatic COVID-19 or mild symptoms of COVID-19 can be treated for diabetic foot in isolation conditions. If the symptoms of COVID-19 are severe, COVID-19 treatment should be prioritized to ensure the safety of patients. 
If a patient has suspected or confirmed COVID-19 and requires surgery, the operation must proceed under strict protective conditions.\nIn the Avica Atri protocol,[12] medical staff need to give adequate care to patients with diabetic foot to minimize the number of visits to the hospital. Most patients with diabetic foot receive treatment and care at home. Patients with diabetic foot with severe infection may develop sepsis or require surgical intervention and need to go to the hospital for treatment. Home care, telemedicine, and the establishment of clinics outside the hospital can fully meet the needs of patients with diabetic foot.\nIn the Lee C. Rogers protocol,[13] a triage system was established to classify patients into 4 grades: stable, guarded, serious, and critical. Patients in the first 2 grades could receive telemedicine care at home, while those in the latter 2 grades would receive outpatient treatment or even hospitalization. The model of inpatient care needs to change: doctors need to treat patients more on an outpatient basis and care for patients in their homes. Physicians need to use telemedicine to guide patients, such as Medicare Telehealth Visits, Virtual check-ins, e-Visits, and Remote Patient Monitoring. Doctors can schedule more home health visits, changing wound dressings and administering antibiotics in patients’ homes. The authors’ goal was to ensure patient safety and reduce the burden on healthcare systems during the COVID-19 pandemic.", "Foot ulcers are the most common complication of diabetes, with higher morbidity and mortality than many cancers.
Refractory diabetic foot ulcers are a leading cause of hospitalization, amputation, disability, and death in patients with diabetes.[14] The global pandemic of COVID-19 poses significant challenges to the management of people with diabetes, especially those with severe foot ulcers.[15] The conventional treatment schedules for diabetic foot are no longer suitable in the context of the COVID-19 epidemic.[16] We therefore conducted this study, systematically reviewing relevant studies to find new treatment schedules that better serve diabetic foot patients during the COVID-19 pandemic.\nAfter a systematic review of the 6 articles included in this study, we formulated a new protocol for treating patients with diabetic foot in the context of the global COVID-19 pandemic. The severity of diabetic foot ulcer was assessed and classified into general (no wound or small wound, no infection, and stable condition), severe (complex and refractory infected wound), and critical (wet gangrene, abscess, fever, signs of sepsis, and acute critical limb ischemia). Diabetic foot patients in general condition can receive treatment at home, with doctors guiding wound dressing changes and medication through telemedicine. Patients with severe diabetic foot need to go to the hospital outpatient clinic for debridement after COVID-19 screening. Patients with severe diabetic foot and diagnosed or suspected COVID-19 infection need debridement under isolation and continued quarantine after treatment. Severe diabetic foot patients not infected with COVID-19 were sent home after outpatient debridement, where they were monitored by telemedicine and instructed by doctors on the next steps. Critical diabetic foot patients were hospitalized after being screened for COVID-19.
Critical diabetic foot patients with diagnosed or suspected COVID-19 infection need to be hospitalized in isolation. Patients with critical diabetic foot not infected with COVID-19 are hospitalized in general wards. Critical diabetic foot patients were treated with conservative treatment, interventional treatment, debridement, local decompression, amputation, and other measures according to their condition during hospitalization. In the context of the COVID-19 epidemic, the primary step in the treatment of diabetic foot ulcers is to assess their severity. Diabetic foot ulcers can arise from a variety of causes, including peripheral artery disease, infection, and neuropathy. Most classification systems focus only on the local pathology of the diabetic foot ulcer (DFU) and fail to adequately assess all important parameters associated with ulcer healing. Whether treatment is delivered by telemedicine or face to face, doctors should use the same evaluation system for diabetic foot ulcers. Based on current and previous studies, we recommend the Perfusion, Extent, Depth, Infection and Sensation (PEDIS) classification system. The PEDIS classification system was developed by the International Working Group on the Diabetic Foot (IWGDF), in which all DFUs are classified according to 5 categories: perfusion, extent/size, depth/tissue loss, infection, and sensation. A study[17] found that the PEDIS classification system has a sensitivity of 93% and a specificity of 82%. The PEDIS classification system is therefore a more objective and accurate way to assess a DFU and predict the clinical outcome.\nA total of 6 relevant articles were included in this systematic review, from India, China, the United Kingdom, the United States, Italy, and Turkey. The article types included introductions and letters, and there were no relevant data for meta-analysis.
Based on the authors’ protocols for treating diabetic foot during the COVID-19 epidemic in these 6 articles, we obtained a more widely applicable treatment protocol for diabetic foot. It provides a reference for the treatment of diabetic foot in the context of the COVID-19 epidemic. However, since the experience of doctors in Oceania and South America was not available for reference, the global applicability of the treatment protocol proposed in this study needs further clinical testing.", "Through this systematic review, we proposed a new protocol for the treatment of patients with diabetic foot in the context of the COVID-19 pandemic. It provides a reference for the treatment of diabetic foot during the COVID-19 epidemic. However, the global applicability of the protocol proposed in this study needs further clinical testing." ]
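The severity-based pathway derived above (general → home care with telemedicine; severe → outpatient debridement after COVID-19 screening; critical → hospitalization, in isolation if COVID-19 is confirmed or suspected) can be sketched as a simple decision function. This is a minimal illustration only, with hypothetical names; it is not part of the source protocols and not a clinical decision tool:

```python
# Sketch of the triage pathway proposed in this review (hypothetical code,
# for illustration only).

def triage(severity: str, covid_status: str) -> str:
    """Map DFU severity ('general', 'severe', 'critical') and COVID-19
    screening result ('negative', 'suspected', 'confirmed') to a care setting."""
    if severity == "general":
        # No wound or small wound, no infection, stable condition.
        return "home care + telemedicine follow-up"
    if severity == "severe":
        # Complex, refractory infected wound: outpatient debridement.
        if covid_status in ("suspected", "confirmed"):
            return "outpatient debridement in isolation, then quarantine"
        return "outpatient debridement, then home telemedicine monitoring"
    if severity == "critical":
        # Wet gangrene, abscess, fever, sepsis signs, acute critical limb ischemia.
        if covid_status in ("suspected", "confirmed"):
            return "hospitalization in isolation ward"
        return "hospitalization in general ward"
    raise ValueError(f"unknown severity: {severity}")
```

For example, `triage("critical", "suspected")` returns the isolation-ward pathway, mirroring the review's rule that critical patients with suspected COVID-19 are hospitalized in isolation.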
[ "intro", "methods", null, null, null, null, null, "results", "discussion", null ]
[ "COVID-19", "diabetes mellitus", "diabetic foot", "diabetic foot ulcer", "SARS-CoV-2" ]
1. Introduction: The novel coronavirus disease 2019 (COVID-19) epidemic, which began in 2019, is not over yet.[1] So far, >85 million people worldwide have been infected with COVID-19 and >1.8 million have died from it.[2] In the context of the COVID-19 global pandemic, the treatment of diabetic foot may be wrongly classified as unnecessary. But without regular diabetic foot care or surgical intervention, patients with diabetic foot risk rapid wound infection, which can lead to amputation and death.[3] Studies have shown that people with diabetes are more susceptible to COVID-19 and have a higher risk of death.[4] Diabetic foot ulcer is a common complication in diabetic patients, and up to one-third of diabetic patients will develop diabetic foot ulcer symptoms.[5] Severe diabetic foot ulcers can lead to amputation, disability, and even death. The conventional treatment of diabetic foot mainly consists of controlling blood glucose under the supervision of doctors and changing dressings on ulcer wounds. Patients with severe diabetic foot ulcers need to be hospitalized.[6] However, the COVID-19 epidemic has changed the treatment model of diabetic foot, bringing great challenges to its treatment. In the context of the COVID-19 pandemic, people have had to stay at home more and make fewer trips to the hospital. Furthermore, hospitals give priority to the treatment of COVID-19 patients.[7] These factors are not conducive to the treatment of diabetic foot and even increase the risk of amputation. Therefore, how to better treat patients with diabetic foot during the COVID-19 epidemic, prevent further aggravation of the disease, and reduce the risk of amputation has become an urgent problem for doctors around the world.
In order to solve this problem, we searched relevant literature and systematically reviewed the measures taken by doctors in various countries to treat diabetic foot in the context of the COVID-19 pandemic, hoping to find effective methods to treat diabetic foot during the COVID-19 epidemic and provide a reference for clinicians in the treatment of diabetic foot. 2. Methods: 2.1. Search strategy: The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from database establishment to October 2021. Keywords retrieved were “Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2),” “COVID-19,” “Diabetes Mellitus,” “Diabetic Foot,” and “Diabetic Foot Ulcer.” The search had no language or study-type restrictions. 2.2. Inclusion and exclusion criteria: 2.2.1. Inclusion criteria: studies on the treatment of diabetic foot in the COVID-19 pandemic; subjects were patients with diabetic foot, not animals or cells; specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described in the study. 2.2.2. Exclusion criteria: duplicate publications; no specific measures to treat diabetic foot in the context of the COVID-19 pandemic are described; subjects were animals or cells. 2.3.
Data extraction and data synthesis: Information was extracted from the included articles by 2 researchers, including the first author’s name, time of publication, authors’ countries, and the specific measures to treat diabetic foot in the context of the COVID-19 pandemic. The findings of the included articles were pooled and presented in the results section in the form of narration and tables. 3. Results: By searching the databases, a total of 65 related articles were retrieved. Two researchers screened the studies by reading abstracts and full texts, and a total of 6 articles were included.[8–13] The selection process is shown in Figure 1. The characteristics of the 6 included articles are shown in Table 1. Summary table of included studies. COVID-19 = novel coronavirus disease 2019; DFU = diabetic foot ulcer. PRISMA flow diagram. PRISMA = preferred reporting items for systematic reviews and meta-analyses. In the Osman Kelahmetoglu protocol,[8] doctors first graded the patients’ wound levels and the degree of infection. Different treatment measures were taken according to each patient’s condition. Once a patient had a fever, regardless of the degree of wound infection, COVID-19 screening was prioritized, and isolation treatment was conducted if infection was confirmed. Diabetic foot wounds with no or mild infection could be treated at home, with telemedicine follow-up once a week. Patients with moderately infected diabetic foot wounds could be treated in the outpatient clinic, with telemedicine follow-up once a week. Patients with diabetic foot wounds showing serious signs of infection needed to be hospitalized. Treatment for diabetic foot ulcer infection included intravenous antibiotics, fluid replacement, correction of electrolyte imbalance, glucose control, etc.
Moreover, the authors recommended online counseling for patients to reduce the number of hospital visits, and the use of thorax computerized tomography for preoperative screening in all diabetic foot ulcer patients with severe signs of infection. In the Marco Meloni protocol,[9] doctors first assessed the patient’s condition. The fast-track-pathway classification method was used to grade diabetic foot. Critical diabetic foot patients with severe complications (wet gangrene, abscess, fever, signs of sepsis, and acute critical limb ischemia) require emergency hospitalization. Patients with diabetic foot with complex ulcers need to be evaluated on an outpatient basis, patients with severe ulcers need to be hospitalized for treatment, and patients not hospitalized can be followed up by telemedicine. Diabetic foot patients with complex ulcers and 3 or more complications were reevaluated on an outpatient basis only if the ulcers worsened. Diabetic foot patients with complex ulcers and 2 or fewer complications were reevaluated according to individual circumstances. Telemedicine follow-up was performed for diabetic foot patients with less severe ulcer infection. In the Ibrahim Jaly protocol,[10] doctors used online resources to educate patients. They emphasized glycemic control through diet, exercise, and correct medication, and fostered patients’ understanding of diabetic foot and the risks of complications. Diabetic foot patients were classified, their conditions evaluated, and referrals made via telemedicine consultation. In the Fenghua Tao protocol,[11] patients with diabetic foot were first screened for COVID-19. Diabetic foot patients not infected with COVID-19 followed normal treatment procedures: conservative treatment, interventional treatment, debridement, local decompression, and amputation were adopted according to the severity of the diabetic foot. Patients with diabetic foot who were suspected or confirmed to have COVID-19 were quarantined first.
Patients with asymptomatic or mild COVID-19 could be treated for diabetic foot under isolation conditions. If the symptoms of COVID-19 were severe, COVID-19 treatment was prioritized to ensure patient safety. If a patient has suspected or confirmed COVID-19 and requires surgery, the operation must proceed under strict protective conditions. In the Avica Atri protocol,[12] medical staff need to give adequate care to patients with diabetic foot to minimize the number of visits to the hospital. Most patients with diabetic foot receive treatment and care at home. Patients with diabetic foot with severe infection may develop sepsis or require surgical intervention and need to go to the hospital for treatment. Home care, telemedicine, and the establishment of clinics outside the hospital can fully meet the needs of patients with diabetic foot. In the Lee C. Rogers protocol,[13] a triage system was established to classify patients into 4 grades: stable, guarded, serious, and critical. Patients in the first 2 grades could receive telemedicine care at home, while those in the latter 2 grades would receive outpatient treatment or even hospitalization. The model of inpatient care needs to change: doctors need to treat patients more on an outpatient basis and care for patients in their homes. Physicians need to use telemedicine to guide patients, such as Medicare Telehealth Visits, Virtual check-ins, e-Visits, and Remote Patient Monitoring. Doctors can schedule more home health visits, changing wound dressings and administering antibiotics in patients’ homes. The authors’ goal was to ensure patient safety and reduce the burden on healthcare systems during the COVID-19 pandemic. 4. Discussion: Foot ulcers are the most common complication of diabetes, with higher morbidity and mortality than many cancers.
Refractory diabetic foot ulcers are a leading cause of hospitalization, amputation, disability, and death in patients with diabetes.[14] The global pandemic of COVID-19 poses significant challenges to the management of people with diabetes, especially those with severe foot ulcers.[15] The conventional treatment schedules for diabetic foot are no longer suitable in the context of the COVID-19 epidemic.[16] We therefore conducted this study, systematically reviewing relevant studies to find new treatment schedules that better serve diabetic foot patients during the COVID-19 pandemic. After a systematic review of the 6 articles included in this study, we formulated a new protocol for treating patients with diabetic foot in the context of the global COVID-19 pandemic. The severity of diabetic foot ulcer was assessed and classified into general (no wound or small wound, no infection, and stable condition), severe (complex and refractory infected wound), and critical (wet gangrene, abscess, fever, signs of sepsis, and acute critical limb ischemia). Diabetic foot patients in general condition can receive treatment at home, with doctors guiding wound dressing changes and medication through telemedicine. Patients with severe diabetic foot need to go to the hospital outpatient clinic for debridement after COVID-19 screening. Patients with severe diabetic foot and diagnosed or suspected COVID-19 infection need debridement under isolation and continued quarantine after treatment. Severe diabetic foot patients not infected with COVID-19 were sent home after outpatient debridement, where they were monitored by telemedicine and instructed by doctors on the next steps. Critical diabetic foot patients were hospitalized after being screened for COVID-19.
Critical diabetic foot patients with diagnosed or suspected COVID-19 infection need to be hospitalized in isolation. Patients with critical diabetic foot not infected with COVID-19 are hospitalized in general wards. Critical diabetic foot patients were treated with conservative treatment, interventional treatment, debridement, local decompression, amputation, and other measures according to their condition during hospitalization. In the context of the COVID-19 epidemic, the primary step in the treatment of diabetic foot ulcers is to assess their severity. Diabetic foot ulcers can arise from a variety of causes, including peripheral artery disease, infection, and neuropathy. Most classification systems focus only on the local pathology of the diabetic foot ulcer (DFU) and fail to adequately assess all important parameters associated with ulcer healing. Whether treatment is delivered by telemedicine or face to face, doctors should use the same evaluation system for diabetic foot ulcers. Based on current and previous studies, we recommend the Perfusion, Extent, Depth, Infection and Sensation (PEDIS) classification system. The PEDIS classification system was developed by the International Working Group on the Diabetic Foot (IWGDF), in which all DFUs are classified according to 5 categories: perfusion, extent/size, depth/tissue loss, infection, and sensation. A study[17] found that the PEDIS classification system has a sensitivity of 93% and a specificity of 82%. The PEDIS classification system is therefore a more objective and accurate way to assess a DFU and predict the clinical outcome. A total of 6 relevant articles were included in this systematic review, from India, China, the United Kingdom, the United States, Italy, and Turkey. The article types included introductions and letters, and there were no relevant data for meta-analysis.
Based on the authors’ protocols for treating diabetic foot during the COVID-19 epidemic in these 6 articles, we obtained a more widely applicable treatment protocol for diabetic foot. It provides a reference for the treatment of diabetic foot in the context of the COVID-19 epidemic. However, since the experience of doctors in Oceania and South America was not available for reference, the global applicability of the treatment protocol proposed in this study needs further clinical testing. 5. Conclusion: Through this systematic review, we proposed a new protocol for the treatment of patients with diabetic foot in the context of the COVID-19 pandemic. It provides a reference for the treatment of diabetic foot during the COVID-19 epidemic. However, the global applicability of the protocol proposed in this study needs further clinical testing.
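The PEDIS assessment recommended in the discussion above can be represented as a simple record over its 5 categories. The grade ranges below are illustrative assumptions based on the IWGDF description, not values taken from this review; the helper method is hypothetical:

```python
from dataclasses import dataclass

# Illustrative PEDIS record (hypothetical code). Grade ranges are assumptions
# for this sketch; consult the IWGDF definitions for the authoritative scales.
@dataclass
class PedisAssessment:
    perfusion: int     # assumed scale, e.g. 1 (no PAD) .. 3 (critical ischemia)
    extent_cm2: float  # ulcer area in cm^2
    depth: int         # assumed scale, e.g. 1 (superficial) .. 3 (bone/joint)
    infection: int     # assumed scale, e.g. 1 (none) .. 4 (severe/systemic)
    sensation: int     # assumed scale, e.g. 1 (intact) .. 2 (lost)

    def severe_infection(self) -> bool:
        # In the review's pathway, severe infection triggers hospitalization.
        return self.infection >= 4
```

A record such as `PedisAssessment(perfusion=1, extent_cm2=2.5, depth=1, infection=4, sensation=2)` would flag `severe_infection()`, matching the rule that wounds with severe signs of infection are hospitalized.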
Background: In the context of the novel coronavirus disease 2019 (COVID-19) pandemic, people have had to stay at home more and make fewer trips to the hospital. Furthermore, hospitals give priority to the treatment of COVID-19 patients. These factors are not conducive to the treatment of diabetic foot and even increase the risk of amputation. Therefore, how to better treat patients with diabetic foot during the COVID-19 epidemic, prevent further aggravation of the disease, and reduce the risk of amputation has become an urgent problem for doctors around the world. Methods: The researchers searched PubMed, the Cochrane Library, and the Embase database. The retrieval time was set from database establishment to October 2021. All studies on the treatment of diabetic foot in the COVID-19 pandemic were included. Results: A total of 6 studies were included. In the 6 protocols for treating patients with diabetic foot, the researchers classified patients according to the condition of their diabetic foot. Diabetic foot patients in general condition received treatment at home, with doctors guiding wound dressing changes and medication through telemedicine. Patients with severe diabetic foot were admitted to hospital for treatment. Patients were screened for COVID-19 before hospitalization: those with confirmed or suspected COVID-19 were treated in isolation, and those not infected with COVID-19 were treated in a general ward. Conclusions: Through this systematic review, we proposed a new protocol for the treatment of patients with diabetic foot in the context of the COVID-19 pandemic. It provides a reference for the treatment of diabetic foot during the COVID-19 epidemic. However, the global applicability of the protocol proposed in this study needs further clinical testing.
Author contributions: Conceptualization: Jingui Yan, Yiqing Xiao, Yanjin Wang. Data curation: Jingui Yan, Yiqing Xiao, Rui Cao, Yipeng Su, Dan Wu, Yanjin Wang. Formal analysis: Yipeng Su, Dan Wu. Investigation: Dan Wu. Methodology: Jingui Yan, Yiqing Xiao, Rui Cao, Yipeng Su. Project administration: Jingui Yan, Yanjin Wang. Software: Rui Cao, Yipeng Su, Dan Wu. Supervision: Jingui Yan, Yanjin Wang. Visualization: Jingui Yan, Yanjin Wang. Writing – original draft: Jingui Yan, Yiqing Xiao, Rui Cao, Yipeng Su, Dan Wu, Yanjin Wang. Writing – review & editing: Jingui Yan, Yiqing Xiao, Rui Cao, Yanjin Wang.
3,151
341
[ 75, 161, 44, 30, 67, 69 ]
10
[ "diabetic", "foot", "diabetic foot", "covid 19", "19", "covid", "patients", "treatment", "context", "context covid" ]
[ "treating diabetic foot", "diabetic feet infected", "diabetic foot covid", "diabetic foot wounds", "foot ulcers diabetic" ]
[CONTENT] COVID-19 | diabetes mellitus | diabetic foot | diabetic foot ulcer | SARS-CoV-2 [SUMMARY]
[CONTENT] Amputation, Surgical | COVID-19 | Diabetes Mellitus | Diabetic Foot | Humans | Pandemics | Telemedicine [SUMMARY]
[CONTENT] treating diabetic foot | diabetic feet infected | diabetic foot covid | diabetic foot wounds | foot ulcers diabetic [SUMMARY]
[CONTENT] diabetic | foot | diabetic foot | covid 19 | 19 | covid | patients | treatment | context | context covid [SUMMARY]
[CONTENT] diabetic | diabetic foot | foot | 19 | covid | covid 19 | risk | patients | treatment | 19 epidemic [SUMMARY]
[CONTENT] diabetic | diabetic foot | foot | studies | specific measures treat | specific measures treat diabetic | measures treat diabetic foot | measures treat diabetic | specific | measures treat [SUMMARY]
[CONTENT] patients | diabetic | diabetic foot | foot | infection | telemedicine | foot patients | diabetic foot patients | treatment | need [SUMMARY]
[CONTENT] proposed | covid 19 epidemic | epidemic | context covid 19 epidemic | 19 epidemic | protocol | treatment | diabetic foot context | diabetic foot context covid | foot context [SUMMARY]
[CONTENT] diabetic | foot | diabetic foot | 19 | covid | covid 19 | patients | treatment | context | context covid 19 [SUMMARY]
[CONTENT] 2019 | COVID-19 ||| COVID-19 ||| ||| COVID-19 [SUMMARY]
[CONTENT] PubMed | the Cochrane Library | Embase ||| October 2021 ||| COVID-19 [SUMMARY]
[CONTENT] 6 ||| 6 ||| ||| ||| COVID-19 | COVID-19 | COVID-19 [SUMMARY]
[CONTENT] COVID-19 ||| COVID-19 ||| COVID-19 [SUMMARY]
[CONTENT] 2019 | COVID-19 ||| COVID-19 ||| ||| COVID-19 ||| PubMed | the Cochrane Library | Embase ||| October 2021 ||| COVID-19 ||| ||| 6 ||| 6 ||| ||| ||| COVID-19 | COVID-19 | COVID-19 ||| COVID-19 ||| COVID-19 ||| COVID-19 [SUMMARY]
Waning Effectiveness of One-dose Universal Varicella Vaccination in Korea, 2011-2018: a Propensity Score Matched National Population Cohort.
34519184
Despite high coverage (~98%) of universal varicella vaccination (UVV) in the Republic of Korea since 2005, no clear reduction in the incidence rate of varicella has been observed. This study aimed to evaluate the vaccine effectiveness (VE) of one-dose UVV by timeline and severity of the disease.
BACKGROUND
All children born in Korea in 2011 were included in this retrospective cohort study, which analyzed insurance claims data from 2011-2018 and varicella vaccination records in the immunization registry. Adjusted hazard ratios from Cox proportional hazards models were used to estimate VE, with propensity score matching by month of birth, sex, healthcare utilization rate, and region.
METHODS
Of the total 421,070 newborns in the 2011 birth cohort, 13,360 were matched for age, sex, healthcare utilization rate, and region by the propensity score matching method. A total of 55,940 (13.29%) children were diagnosed with varicella, with an incidence rate of 24.2 per 1,000 person-years: 13.4% of vaccinated children and 10.4% of unvaccinated children. The VE of one-dose UVV against any varicella was 86.1% (95% confidence interval [CI], 81.4-89.5) during the first year after vaccination and 49.9% (95% CI, 43.3-55.7) over the 6-year follow-up period since vaccination, corresponding to a 7.2% annual decrease in VE. The overall VE for severe varicella was 66.3%. The VE of two-dose compared to one-dose vaccination was 73.4% (95% CI, 72.2-74.6).
RESULTS
We found lower long-term VE of one-dose vaccination and waning of effectiveness over time. Longer follow-up of vaccinated children, as well as appropriately designed studies, is needed to establish the optimal strategy for preventing varicella in Korea.
CONCLUSION
[ "Birth Cohort", "Chickenpox", "Chickenpox Vaccine", "Female", "Follow-Up Studies", "Humans", "Incidence", "Infant", "Male", "Propensity Score", "Republic of Korea", "Retrospective Studies", "Severity of Illness Index", "Vaccination", "Vaccine Efficacy" ]
8438188
INTRODUCTION
Varicella is a highly contagious acute infectious disease caused by the varicella-zoster virus.12 Varicella frequently attacks children between 4 to 6 years old, causing various complications.3 Introduction of varicella vaccination into the national immunization program reduced varicella burden of disease in many countries. Previously, vaccine effectiveness (VE) of varicella vaccines showed a wide range of results, and methods of the estimation varied by study due to lack of follow-up data.45678910 Few previous studies revealed VE through a follow-up cohort considering medical utilization, which can be heterogeneous, resulting in the difference in vaccination rate.11 Also, VE on complicated varicella infection may differ from the overall varicella infection since subclinical manifestations of varicella infection occasionally do not require medical attention.12 Varicella vaccine was first introduced in 198813 and it was included in the National Immunization Program (NIP) in Korea since 2005.14 The universal varicella vaccination (UVV) with one-dose is recommended to all children between 12 to 15 months.1516 Although the vaccine coverage rates are generally over 97% in Korea, the national notifiable diseases surveillance system reported that the incidence of varicella continues to increase,17 adding substantial disease burden to the public health.18 Therefore, it is important to assess the VE of UVV with one-dose in Korea and address the public health implication with regard to vaccination strategy. In this study, we aimed to evaluate the VE in a retrospective cohort using the insurance claims data and the immunization registry information from 2011–2018 by adjusting for major characteristics of the vaccinated and unvaccinated population in a country where UVV coverage is estimated higher than 97% nationwide.
METHODS
Study population and data sources All newborns in Korea from January 1, 2011 to December 31, 2011 were included in the study population. The annual birth cohort of the corresponding year was 470,000 newborns. Demographic characteristics of children such as date of birth, sex, and address were obtained from the National Health Insurance System (NHIS), which is a single insurer that covers over 97% of the Korean population.1419 All medical claims are reviewed by the Health Insurance Review and Assessment Service (HIRA), using admissions and diagnoses data submitted to the NHIS. We used the data from the National Immunization Registry, which contains the vaccination status and detailed information such as the vaccination dates and vaccination counts of all children (0–12 years old) receiving vaccines in Korea, obtained from the Korea Centers for Disease Control and Prevention Agency (KDCA). The vaccination status of the cohort was then linked to the medical claims data of HIRA with de-identified personal information. We followed the International Committee of Medical Journal Editors (ICMJE) recommendations throughout the study.20
Study design and inclusion criteria All children born in 2011 were eligible for the study population. Among them, children who did not receive varicella vaccination between 12 to 15 months old, received more than three-doses in total or two-doses within a 4-week window, and immunocompromised children (Supplementary Table 1) were excluded from the analysis (Fig. 1). The included study population were followed-up until the endpoint of the cohort (December 31, 2018). Based on the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), a primary outcome was selected as the diagnosis of varicella (ICD-10: B01). The secondary outcome was diagnosis of severe varicella, which was defined as including people who experienced one or more of the following: 1) admitted to hospital due to varicella, 2) prescribed for acyclovir to treat varicella, or 3) had complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, other varicella complications (ICD-10: B010, B011, B012, B0180, and B0188, respectively). The breakthrough infections, which were defined as the occurrence of varicella more than 42 days after the vaccination, were included for the evaluation of the incidence.21
Statistical analysis Frequencies of varicella and severe varicella were measured by vaccination status over time. Crude incidence density and incidence rate ratio of varicella and severe varicella were calculated. In the setting of high vaccination coverage rate (> 95%), we chose to apply propensity score matching (PSM) to compare the VE of the vaccines in order to minimize the risk of selection bias.22 The propensity score consisted of: month of birth, sex, region of address by SIDO, and frequency of out-patient visits per year. The mean propensity scores were 0.97 (standard deviation [SD], 0.01) in the vaccinated group and 0.98 (SD, 0.02) in the unvaccinated group. All unvaccinated cases were 1:1 matched to a randomly sampled vaccinated group with the nearest neighbor match method.2324 VE was calculated as (1-hazard ratio)*100, which was estimated by the Cox proportional hazard models.25 We checked the linearity and proportionality by the log-cumulative hazard plots and Schoenfeld residuals in each model.26 In addition, we estimated cumulative VE by year during a 6-year follow-up to estimate the long-term protective effect of the vaccine. The population who received the two-dose vaccination between 4 to 6 years old were compared with the population who received only primary vaccination by the same methods. All statistical methods were performed by SAS 9.4 software (SAS Institute Inc., Cary, NC, USA).
Ethics statement The institutional review board (IRB) of the Seoul National University Hospital approved this study under the exemption of IRB review (IRB No. 1809-064-971). Informed consent was waived since we used secondary de-identified data from the KDCA and HIRA as a part of public health research.
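The two core steps of the analysis, 1:1 nearest-neighbor propensity-score matching and converting a hazard ratio into VE via VE = (1 - hazard ratio) * 100, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' SAS code; the scores and the hazard ratio below are hypothetical example values (a hazard ratio of 0.139 corresponds roughly to the reported first-year VE of 86.1%).

```python
# Minimal sketch (not the study's SAS implementation) of greedy 1:1
# nearest-neighbor propensity matching and the VE formula used in the study.

def nearest_neighbor_match(unvaccinated, vaccinated):
    """Greedily pair each unvaccinated propensity score with the closest
    still-unused vaccinated score (1:1 nearest-neighbor matching)."""
    pool = list(vaccinated)
    pairs = []
    for score in unvaccinated:
        best = min(pool, key=lambda s: abs(s - score))
        pool.remove(best)  # each vaccinated subject is matched at most once
        pairs.append((score, best))
    return pairs

def vaccine_effectiveness(hazard_ratio):
    """VE (%) = (1 - hazard ratio) * 100, as defined in the study."""
    return (1 - hazard_ratio) * 100

# Hypothetical propensity scores for two unvaccinated and three vaccinated children:
pairs = nearest_neighbor_match([0.98, 0.96], [0.95, 0.975, 0.99])
ve = vaccine_effectiveness(0.139)  # HR of 0.139 gives VE of about 86.1%
```

In the study itself the hazard ratio comes from a Cox proportional hazards model fitted on the matched cohort; the function above only applies the final conversion.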
RESULTS
Total vaccination rate and incidence of varicella Among 421,070 births in 2011, 97.5% (410,393) received at least one-dose of varicella vaccines. Of the total, 13.3% (55,940) had diagnoses of varicella during the follow-up period. An average follow-up period was 5.5 years per person, resulting in a varicella incidence of 24.2 per 1,000 person-year in total. Severe varicella occurred in 0.98% (total n = 4,132; 3,449 complicated varicella, 700 hospitalizations, and 240 acyclovir prescriptions). Among the vaccinated population, 26.7% (108,797) received second-dose vaccinations at 4 to 6 years of age.
Demographics of the study population Table 1 shows the demographic characteristics of the studied cohort by vaccination status. After adjusting for age, sex, and region, the mean number of out-patient visits for the unvaccinated group was significantly lower (7.69 per year; SD, 11.90) than for the vaccinated group (26.39 per year; SD, 12.78; Table 1). After propensity score matching to estimate the VE of varicella, demographics of the unvaccinated and vaccinated groups were matched (Supplementary Table 2). SD = standard deviation, OR = odds ratio, CI = confidence interval. aOR, 1.33 (95% CI, 1.25–1.42), P < 0.001; bOR, 0.77 (95% CI, 0.65–0.92), P = 0.004.
Incidence of varicella and severe varicella During the studied period, the vaccinated group showed a higher frequency of varicella diagnosis compared to the unvaccinated group (13.4% vs. 10.4%, Table 1), whereas severe varicella was more frequent in the unvaccinated group than in the vaccinated group (1.3% vs. 0.97%, Table 1). The incidence density of breakthrough varicella was 23.3 (95% confidence interval [CI], 23.1–23.4) per 1,000 person-year. The incidence of varicella in the unvaccinated group was 13.7 (95% CI, 13.0–14.5) per 1,000 person-year. The mean age at infection was 2.0 (SD, 1.62) years old in the unvaccinated group and 4.3 (SD, 1.85) years old in the vaccinated group.
VE and its waning immunity Table 2 describes the overall VE after matching the propensity score by the month of birth, sex, out-patient visits per year, and region. During the first year from vaccination, the overall VE was 86.1%, but it decreased to 62.6% at four years and further to 49.9% at six years from vaccination. The VE for severe varicella was 80.4% in the first vaccinated year and decreased to 66.3% after six years following vaccination (Supplementary Table 2). The VE trend over time is shown in Fig. 2. The propensity score matched vaccinated group was compared to the unvaccinated group. The month of birth, sex, region of address, and frequency of out-patient visits per year were used for the propensity score calculation. aVaccine effectiveness is estimated via (1-hazard ratio)*100 (%); bSevere varicella was defined as including people who experienced one or more of the following: 1) admitted to hospital due to varicella, 2) prescribed for acyclovir to treat varicella, or 3) had complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, other varicella complications (10th revision of the International Statistical Classification of Diseases and Related Health Problems: B010, B011, B012, B0180, and B0188, respectively).
VE of two-dose varicella vaccination Among the vaccinated population, 2,233 (2.11%) of the two-dose vaccinated group and 23,192 (8.42%) of the one-dose vaccinated group were diagnosed with varicella. The VE of two-dose vaccination compared to the one-dose-only vaccination was 73.4% (95% CI, 72.2–74.6), after being adjusted for month of birth, sex, region, and healthcare utilization. Adjusted two-dose VE for severe varicella was 61.4% (54.0–67.6).
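The reported waning, VE of 86.1% in the first year falling to 49.9% by year six, implies the roughly 7.2 percentage-point annual decline stated in the abstract. A small sketch makes the arithmetic explicit; the helper function name is hypothetical and only averages the decline over the five elapsed years between year 1 and year 6.

```python
# Illustrative check (hypothetical helper, not from the study): the average
# annual decline in VE implied by the reported first-year and six-year values.

def annual_ve_decline(ve_first_year, ve_final_year, years_elapsed):
    """Average percentage-point drop in VE per elapsed year."""
    return (ve_first_year - ve_final_year) / years_elapsed

# 86.1% in year 1 falling to 49.9% by year 6 spans 5 elapsed years:
decline = annual_ve_decline(86.1, 49.9, 5)  # about 7.2 points per year
```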
null
null
[ "Study population and data sources", "Study design and inclusion criteria", "Statistical analysis", "Ethics statement", "Total vaccination rate and incidence of varicella", "Demographics of the study population", "Incidence of varicella and severe varicella", "VE and its waning immunity", "VE of two-dose varicella vaccination" ]
[ "All newborns in Korea from January 1, 2011 to December 31, 2011 were included in the study population. The annual birth cohort of the corresponding year was 470,000 newborns. Demographic characteristics of children such as date of birth, sex, and address were obtained from the National Health Insurance System (NHIS), which is a single insurer that covers over 97% of the Korean population.1419 All medical claims are reviewed by the Health Insurance Review and Assessment Service (HIRA), using admissions and diagnoses data submitted to the NHIS.\nWe used the data from the National Immunization Registry, which contains the vaccination status and detailed information such as the vaccination dates and vaccination counts of all children (0–12 years old) receiving vaccines in Korea, obtained from the Korea Centers for Disease Control and Prevention Agency (KDCA). The vaccination status of the cohort was then linked to the medical claims data of HIRA with de-identified personal information. We followed the International Committee of Medical Journal Editors (ICMJE) recommendations throughout the study.20", "All children born in 2011 were eligible for the study population. Among them, children who did not receive varicella vaccination between 12 to 15 months old, received more than three-doses in total or two-doses within a 4-week window, and immunocompromised children (Supplementary Table 1) were excluded from the analysis (Fig. 1).\nThe included study population were followed-up until the endpoint of the cohort (December 31, 2018). Based on the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), a primary outcome was selected as the diagnosis of varicella (ICD-10: B01). 
The secondary outcome was diagnosis of severe varicella, which was defined as including people who experienced one or more of the following: 1) admitted to hospital due to varicella, 2) prescribed for acyclovir to treat varicella, or 3) had complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, other varicella complications (ICD-10: B010, B011, B012, B0180, and B0188, respectively).\nThe breakthrough infections, which were defined as the occurrence of varicella more than 42 days after the vaccination, were included for the evaluation of the incidence.21", "Frequencies of varicella and severe varicella were measured by vaccination status over time. Crude incidence density and incidence rate ratio of varicella and severe varicella were calculated. In the setting of high vaccination coverage rate (> 95%), we chose to apply propensity score matching (PSM) to compare the VE of the vaccines in order to minimize the risk of selection bias.22 The propensity score consisted of: month of birth, sex, region of address by SIDO, and frequency of out-patient visits per year. The mean propensity score were 0.97 (standard deviation [SD], 0.01) in the vaccinated group and 0.98 (SD, 0.02) in the unvaccinated group. All unvaccinated cases were 1:1 matched for randomly sampled vaccinated group with the nearest neighbor match method.2324 VE was calculated as (1-hazard ratio)*100, which was estimated by the Cox proportional hazard models.25 We checked the linearity and proportionality by the log-cumulative hazard plots and Shoenfeld residuals in each model.26 In addition, we estimated cumulative VE by year during a 6-year follow-up to estimate the long-term protective effect of the vaccine. The population who received the two-dose vaccination between 4 to 6 years old were compared with the population who received only primary vaccination by the same methods. 
All statistical methods were performed by SAS 9.4 software (SAS Institute Inc., Cary, NC, USA).", "The institutional review board (IRB) of the Seoul National University Hospital approved this study to conduct under the exemption of IRB review (IRB No. 1809-064-971). Informed consent was waived since we used secondary de-identified data from the KDCA and HIRA as a part of public health research.", "Among 421,070 births in 2011, 97.5% (410,393) received at least one-dose of varicella vaccines. Of the total, 13.3% (55,940) had diagnoses of varicella during the follow-up period. An average follow-up period was 5.5 years per person, resulting in a varicella incidence of 24.2 per 1,000 person-year in total. Severe varicella occurred in 0.98% (total n = 4,132; 3,449 complicated varicella, 700 hospitalizations, and 240 acyclovir prescriptions). Among the vaccinated population, 26.7% (108,797) received second-dose vaccinations at 4 to 6 years of age.", "Table 1 shows the demographic characteristics of the studied cohort by vaccination status. After adjusting for age, sex, and region, the mean of out-patient visit for the unvaccinated group was significantly lower (7.69 per year; SD, 11.90) than the vaccinated group (26.39 per year; SD, 12.78; Table 1). After propensity score matching to estimate the VE of varicella, demographics of unvaccinated and vaccinated group were matched (Supplementary Table 2).\nSD = standard deviation, OR = odds ratio, CI = confidence interval.\naOR, 1.33 (95% CI, 1.25–1.42), P < 0.001; bOR, 0.77 (95% CI, 0.65–0.92), P = 0.004.", "During the studied period, vaccinated group showed higher frequency of varicella diagnosis compared to unvaccinated group (13.4% vs. 10.4%, Table 1), whereas severe varicella was more frequent in the unvaccinated group than in the vaccinated group (1.3% vs. 0.97%, Table 1). The incidence density of breakthrough varicella was 23.3 (95% confidence interval [CI], 23.1–23.4) per 1,000 person-year. 
The incidence of varicella in the unvaccinated group was 13.7 (95% CI, 13.0–14.5) per 1,000 person-year. The mean age at infection was 2.0 (SD, 1.62) years old in the unvaccinated group and 4.3 (SD, 1.85) years old in the vaccinated group.", "Table 2 describes the overall VE after matching the propensity score by the month of birth, sex, out-patient visits per year, and region. During the first year from vaccination, the overall VE was 86.1%, but it decreased to 62.6% at four years and further to 49.9% at six years from vaccination. The VE for severe varicella was 80.4% in the first vaccinated year and decreased to 66.3% after six years following vaccination (Supplementary Table 2). The VE trend over time is shown in Fig. 2.\nPropensity score matched vaccinated group was compared to the unvaccinated group. The month of birth, sex, regions of address, and frequency of out-patient visits per year was used for the propensity score calculation.\naVaccine effectiveness is estimated via (1-hazard ratio)*100 (%); bSevere varicella was defined as including people who experienced one or more of the following: 1) admitted to hospital due to varicella, 2) prescribed for acyclovir to treat varicella, or 3) had complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, other varicella complications (10th revision of the International Statistical Classification of Diseases and Related Health Problems: B010, B011, B012, B0180, and B0188, respectively).", "Among the vaccinated population, 2,233 (2.11%) of the two-dose vaccinated group and 23,192 (8.42%) of the one-dose vaccinated group were diagnosed with varicella. The VE of two-dose vaccination compared to the one-dose-only vaccination was 73.4% (95% CI, 72.2–74.6), after being adjusted for month of birth, sex, region, and healthcare utilization. Adjusted two-dose VE for severe varicella was 61.4% (54.0–67.6)." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study population and data sources", "Study design and inclusion criteria", "Statistical analysis", "Ethics statement", "RESULTS", "Total vaccination rate and incidence of varicella", "Demographics of the study population", "Incidence of varicella and severe varicella", "VE and its waning immunity", "VE of two-dose varicella vaccination", "DISCUSSION" ]
[ "Varicella is a highly contagious acute infectious disease caused by the varicella-zoster virus.12 Varicella frequently attacks children between 4 to 6 years old, causing various complications.3 Introduction of varicella vaccination into the national immunization program reduced varicella burden of disease in many countries.\nPreviously, vaccine effectiveness (VE) of varicella vaccines showed a wide range of results, and methods of the estimation varied by study due to lack of follow-up data.45678910 Few previous studies revealed VE through a follow-up cohort considering medical utilization, which can be heterogeneous, resulting in the difference in vaccination rate.11 Also, VE on complicated varicella infection may differ from the overall varicella infection since subclinical manifestations of varicella infection occasionally do not require medical attention.12\nVaricella vaccine was first introduced in 198813 and it was included in the National Immunization Program (NIP) in Korea since 2005.14 The universal varicella vaccination (UVV) with one-dose is recommended to all children between 12 to 15 months.1516 Although the vaccine coverage rates are generally over 97% in Korea, the national notifiable diseases surveillance system reported that the incidence of varicella continues to increase,17 adding substantial disease burden to the public health.18\nTherefore, it is important to assess the VE of UVV with one-dose in Korea and address the public health implication with regard to vaccination strategy. In this study, we aimed to evaluate the VE in a retrospective cohort using the insurance claims data and the immunization registry information from 2011–2018 by adjusting for major characteristics of the vaccinated and unvaccinated population in a country where UVV coverage is estimated higher than 97% nationwide.", "Study population and data sources All newborns in Korea from January 1, 2011 to December 31, 2011 were included in the study population.
The annual birth cohort of the corresponding year was 470,000 newborns. Demographic characteristics of children, such as date of birth, sex, and address, were obtained from the National Health Insurance System (NHIS), which is a single insurer that covers over 97% of the Korean population.1419 All medical claims are reviewed by the Health Insurance Review and Assessment Service (HIRA), using admissions and diagnoses data submitted to the NHIS.\nWe used data from the National Immunization Registry, which contains the vaccination status and detailed information, such as the vaccination dates and vaccination counts, of all children (0–12 years old) receiving vaccines in Korea, obtained from the Korea Disease Control and Prevention Agency (KDCA). The vaccination status of the cohort was then linked to the medical claims data of HIRA with de-identified personal information. We followed the International Committee of Medical Journal Editors (ICMJE) recommendations throughout the study.20\nStudy design and inclusion criteria All children born in 2011 were eligible for the study population. Among them, children who did not receive varicella vaccination between 12 and 15 months of age, who received more than three doses in total or two doses within a 4-week window, and immunocompromised children (Supplementary Table 1) were excluded from the analysis (Fig. 1).\nThe included study population was followed up until the endpoint of the cohort (December 31, 2018). Based on the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), the primary outcome was the diagnosis of varicella (ICD-10: B01). The secondary outcome was a diagnosis of severe varicella, defined as experiencing one or more of the following: 1) admission to hospital due to varicella, 2) prescription of acyclovir to treat varicella, or 3) complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, or other varicella complications (ICD-10: B010, B011, B012, B0180, and B0188, respectively).\nBreakthrough infections, defined as the occurrence of varicella more than 42 days after vaccination, were included in the evaluation of the incidence.21\nStatistical analysis Frequencies of varicella and severe varicella were measured by vaccination status over time. Crude incidence density and incidence rate ratios of varicella and severe varicella were calculated. In the setting of a high vaccination coverage rate (> 95%), we chose to apply propensity score matching (PSM) to compare the VE of the vaccines in order to minimize the risk of selection bias.22 The propensity score consisted of: month of birth, sex, region of address by SIDO, and frequency of out-patient visits per year. The mean propensity scores were 0.97 (standard deviation [SD], 0.01) in the vaccinated group and 0.98 (SD, 0.02) in the unvaccinated group. All unvaccinated cases were 1:1 matched to a randomly sampled vaccinated group with the nearest neighbor matching method.2324 VE was calculated as (1-hazard ratio)*100, which was estimated by Cox proportional hazard models.25 We checked linearity and proportionality by log-cumulative hazard plots and Schoenfeld residuals in each model.26 In addition, we estimated cumulative VE by year during a 6-year follow-up to estimate the long-term protective effect of the vaccine. The population who received the two-dose vaccination between 4 and 6 years of age were compared with the population who received only the primary vaccination by the same methods. All statistical analyses were performed with SAS 9.4 software (SAS Institute Inc., Cary, NC, USA).\nEthics statement The institutional review board (IRB) of the Seoul National University Hospital approved this study under an exemption from IRB review (IRB No. 1809-064-971). Informed consent was waived since we used secondary de-identified data from the KDCA and HIRA as a part of public health research.", "All newborns in Korea from January 1, 2011 to December 31, 2011 were included in the study population. The annual birth cohort of the corresponding year was 470,000 newborns. Demographic characteristics of children, such as date of birth, sex, and address, were obtained from the National Health Insurance System (NHIS), which is a single insurer that covers over 97% of the Korean population.1419 All medical claims are reviewed by the Health Insurance Review and Assessment Service (HIRA), using admissions and diagnoses data submitted to the NHIS.\nWe used data from the National Immunization Registry, which contains the vaccination status and detailed information, such as the vaccination dates and vaccination counts, of all children (0–12 years old) receiving vaccines in Korea, obtained from the Korea Disease Control and Prevention Agency (KDCA). The vaccination status of the cohort was then linked to the medical claims data of HIRA with de-identified personal information. We followed the International Committee of Medical Journal Editors (ICMJE) recommendations throughout the study.20", "All children born in 2011 were eligible for the study population. Among them, children who did not receive varicella vaccination between 12 and 15 months of age, who received more than three doses in total or two doses within a 4-week window, and immunocompromised children (Supplementary Table 1) were excluded from the analysis (Fig. 1).\nThe included study population was followed up until the endpoint of the cohort (December 31, 2018). 
Based on the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), the primary outcome was the diagnosis of varicella (ICD-10: B01). The secondary outcome was a diagnosis of severe varicella, defined as experiencing one or more of the following: 1) admission to hospital due to varicella, 2) prescription of acyclovir to treat varicella, or 3) complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, or other varicella complications (ICD-10: B010, B011, B012, B0180, and B0188, respectively).\nBreakthrough infections, defined as the occurrence of varicella more than 42 days after vaccination, were included in the evaluation of the incidence.21", "Frequencies of varicella and severe varicella were measured by vaccination status over time. Crude incidence density and incidence rate ratios of varicella and severe varicella were calculated. In the setting of a high vaccination coverage rate (> 95%), we chose to apply propensity score matching (PSM) to compare the VE of the vaccines in order to minimize the risk of selection bias.22 The propensity score consisted of: month of birth, sex, region of address by SIDO, and frequency of out-patient visits per year. The mean propensity scores were 0.97 (standard deviation [SD], 0.01) in the vaccinated group and 0.98 (SD, 0.02) in the unvaccinated group. All unvaccinated cases were 1:1 matched to a randomly sampled vaccinated group with the nearest neighbor matching method.2324 VE was calculated as (1-hazard ratio)*100, which was estimated by Cox proportional hazard models.25 We checked linearity and proportionality by log-cumulative hazard plots and Schoenfeld residuals in each model.26 In addition, we estimated cumulative VE by year during a 6-year follow-up to estimate the long-term protective effect of the vaccine. 
The population who received the two-dose vaccination between 4 and 6 years of age were compared with the population who received only the primary vaccination by the same methods. All statistical analyses were performed with SAS 9.4 software (SAS Institute Inc., Cary, NC, USA).", "The institutional review board (IRB) of the Seoul National University Hospital approved this study under an exemption from IRB review (IRB No. 1809-064-971). Informed consent was waived since we used secondary de-identified data from the KDCA and HIRA as a part of public health research.", "Total vaccination rate and incidence of varicella Among 421,070 births in 2011, 97.5% (410,393) received at least one dose of varicella vaccine. Of the total, 13.3% (55,940) had diagnoses of varicella during the follow-up period. The average follow-up period was 5.5 years per person, resulting in a varicella incidence of 24.2 per 1,000 person-year in total. Severe varicella occurred in 0.98% (total n = 4,132; 3,449 complicated varicella, 700 hospitalizations, and 240 acyclovir prescriptions). Among the vaccinated population, 26.7% (108,797) received second-dose vaccinations at 4 to 6 years of age.\nDemographics of the study population Table 1 shows the demographic characteristics of the studied cohort by vaccination status. 
After adjusting for age, sex, and region, the mean number of out-patient visits in the unvaccinated group was significantly lower (7.69 per year; SD, 11.90) than in the vaccinated group (26.39 per year; SD, 12.78; Table 1). After propensity score matching to estimate the VE of varicella vaccination, the demographics of the unvaccinated and vaccinated groups were matched (Supplementary Table 2).\nSD = standard deviation, OR = odds ratio, CI = confidence interval.\naOR, 1.33 (95% CI, 1.25–1.42), P < 0.001; bOR, 0.77 (95% CI, 0.65–0.92), P = 0.004.\nIncidence of varicella and severe varicella During the studied period, the vaccinated group showed a higher frequency of varicella diagnosis than the unvaccinated group (13.4% vs. 10.4%, Table 1), whereas severe varicella was more frequent in the unvaccinated group than in the vaccinated group (1.3% vs. 0.97%, Table 1). The incidence density of breakthrough varicella was 23.3 (95% confidence interval [CI], 23.1–23.4) per 1,000 person-year. The incidence of varicella in the unvaccinated group was 13.7 (95% CI, 13.0–14.5) per 1,000 person-year. The mean age at infection was 2.0 (SD, 1.62) years old in the unvaccinated group and 4.3 (SD, 1.85) years old in the vaccinated group.\nVE and its waning immunity Table 2 describes the overall VE after matching the propensity score by month of birth, sex, out-patient visits per year, and region. During the first year from vaccination, the overall VE was 86.1%, but it decreased to 62.6% at four years and further to 49.9% at six years from vaccination. The VE for severe varicella was 80.4% in the first vaccinated year and decreased to 66.3% after six years following vaccination (Supplementary Table 2). The VE trend over time is shown in Fig. 2.\nThe propensity score matched vaccinated group was compared to the unvaccinated group. Month of birth, sex, region of address, and frequency of out-patient visits per year were used for the propensity score calculation.\naVaccine effectiveness is estimated via (1-hazard ratio)*100 (%); bSevere varicella was defined as including people who experienced one or more of the following: 1) admitted to hospital due to varicella, 2) prescribed for acyclovir to treat varicella, or 3) had complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, other varicella complications (10th revision of the International Statistical Classification of Diseases and Related Health Problems: B010, B011, B012, B0180, and B0188, respectively).\nVE of two-dose varicella vaccination Among the vaccinated population, 2,233 (2.11%) of the two-dose vaccinated group and 23,192 (8.42%) of the one-dose vaccinated group were diagnosed with varicella. The VE of two-dose vaccination compared to one-dose-only vaccination was 73.4% (95% CI, 72.2–74.6), after adjusting for month of birth, sex, region, and healthcare utilization. The adjusted two-dose VE for severe varicella was 61.4% (54.0–67.6).", "Among 421,070 births in 2011, 97.5% (410,393) received at least one dose of varicella vaccine. Of the total, 13.3% (55,940) had diagnoses of varicella during the follow-up period. The average follow-up period was 5.5 years per person, resulting in a varicella incidence of 24.2 per 1,000 person-year in total. Severe varicella occurred in 0.98% (total n = 4,132; 3,449 complicated varicella, 700 hospitalizations, and 240 acyclovir prescriptions). Among the vaccinated population, 26.7% (108,797) received second-dose vaccinations at 4 to 6 years of age.", "Table 1 shows the demographic characteristics of the studied cohort by vaccination status. After adjusting for age, sex, and region, the mean number of out-patient visits in the unvaccinated group was significantly lower (7.69 per year; SD, 11.90) than in the vaccinated group (26.39 per year; SD, 12.78; Table 1). After propensity score matching to estimate the VE of varicella vaccination, the demographics of the unvaccinated and vaccinated groups were matched (Supplementary Table 2).\nSD = standard deviation, OR = odds ratio, CI = confidence interval.\naOR, 1.33 (95% CI, 1.25–1.42), P < 0.001; bOR, 0.77 (95% CI, 0.65–0.92), P = 0.004.", "During the studied period, the vaccinated group showed a higher frequency of varicella diagnosis than the unvaccinated group (13.4% vs. 10.4%, Table 1), whereas severe varicella was more frequent in the unvaccinated group than in the vaccinated group (1.3% vs. 0.97%, Table 1). The incidence density of breakthrough varicella was 23.3 (95% confidence interval [CI], 23.1–23.4) per 1,000 person-year. The incidence of varicella in the unvaccinated group was 13.7 (95% CI, 13.0–14.5) per 1,000 person-year. 
The mean age at infection was 2.0 (SD, 1.62) years old in the unvaccinated group and 4.3 (SD, 1.85) years old in the vaccinated group.", "Table 2 describes the overall VE after matching the propensity score by month of birth, sex, out-patient visits per year, and region. During the first year from vaccination, the overall VE was 86.1%, but it decreased to 62.6% at four years and further to 49.9% at six years from vaccination. The VE for severe varicella was 80.4% in the first vaccinated year and decreased to 66.3% after six years following vaccination (Supplementary Table 2). The VE trend over time is shown in Fig. 2.\nThe propensity score matched vaccinated group was compared to the unvaccinated group. Month of birth, sex, region of address, and frequency of out-patient visits per year were used for the propensity score calculation.\naVaccine effectiveness is estimated via (1-hazard ratio)*100 (%); bSevere varicella was defined as including people who experienced one or more of the following: 1) admitted to hospital due to varicella, 2) prescribed for acyclovir to treat varicella, or 3) had complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, other varicella complications (10th revision of the International Statistical Classification of Diseases and Related Health Problems: B010, B011, B012, B0180, and B0188, respectively).", "Among the vaccinated population, 2,233 (2.11%) of the two-dose vaccinated group and 23,192 (8.42%) of the one-dose vaccinated group were diagnosed with varicella. The VE of two-dose vaccination compared to one-dose-only vaccination was 73.4% (95% CI, 72.2–74.6), after adjusting for month of birth, sex, region, and healthcare utilization. The adjusted two-dose VE for severe varicella was 61.4% (54.0–67.6).", "In this study, we found a substantial incidence density of breakthrough varicella of 23.3 per 1,000 person-year (95% CI, 23.1–23.4). 
In a pooled analysis of breakthrough infection, the incidence rate of breakthrough varicella was 8.5 cases per 1,000 person-year (95% CI, 5.3–13.7) for one-dose vaccination and 2.2 cases per 1,000 person-year (95% CI, 0.5–9.3) for two-dose vaccination.27 By estimating the incidence density, we attempted to present more informative results on varicella by comparing the frequency of varicella in the two groups, which was 13.4% in the vaccinated group and 10.4% in the unvaccinated group.\nBreakthrough varicella is likely associated with the time elapsed since vaccination, as demonstrated in a previous study that showed higher risk at 5 and 6 years of age among vaccinated children (P < 0.001).28 This is in line with our finding that varicella VE decreased over time from 86.1% to 49.9% during the six-year observation period in Korea. The serial decrease in VE for both outcomes, any varicella and severe varicella, indicates possible waning immunity of the vaccine.29 Concerns about vaccine failure due to waning immunity were raised by previous studies of specific vaccine types or population cohorts.30313233 In this study, considering that the mean age at varicella infection was older in the vaccinated population, the waning immunity may result from secondary vaccine failure.3435 Moreover, the possibility of waning immunity could originate not only from population outbreaks but also from the vaccine itself.3637 Therefore, a thorough analysis of the overall process of vaccine development, production, transportation, and storage is strongly required to resolve this issue. Cumulative VE over time since vaccination is important because the target age should match the main age of varicella infection to control outbreaks. In addition, cumulative VE can help predict the loss of protection and determine the optimal timing of second-dose vaccination. In studies conducted in the United States38 and Germany,39 VE declined over time, and for this reason, discussion of the two-dose strategy began.\nWe also found higher VE of second-dose vaccination, in line with reports from the U.S., Europe, and Asia.3338394041 Earlier studies showed a decline in incidence with the introduction of a two-dose program4042 or a decreased occurrence of outbreaks,1643 but the actual effectiveness compared to the single-dose vaccine group was rarely studied. In this study, the effectiveness of secondary vaccination was estimated consistently, enabling comparison with the primary-only vaccinated group. Also, we were able to suggest the ideal timing of second-dose varicella vaccination since the estimated decrease of VE by year was presented. Our data suggest the optimal timing of the second dose needs to be further investigated, given the rapid decline of VE between three and four years after the first-dose vaccination.4445 Considering the cost-effectiveness of a two-dose vaccination strategy in the Korean context,46 the recommended timing of second-dose vaccination should be determined according to this epidemiologic finding.\nHowever, one important question is whether there are differences in waning immunity by varicella vaccine type. Our results suggest that waning immunity may soon prompt discussion about a change to a two-dose policy in Korea. Before doing this, VE should be measured by vaccine type. Four different varicella vaccine types were available to the 2011 birth cohort. PSM analysis could not be applied for the analysis of VE by vaccine type because each vaccine type must be matched separately for PSM analysis. After matching birth month, sex, geographic region, and the number of healthcare visits, VE by vaccine type was analyzed in the retrospective cohort. 
In this retrospective cohort analysis, we found that one specific vaccine type showed a significantly higher incidence rate of breakthrough varicella (27.09/1,000 person-year) than the remaining three vaccine types (16.95–18.57/1,000 person-year). Among the four vaccine types, the vaccine containing the MAV strain showed the highest incidence, while the three remaining vaccine types containing the Oka strain had lower rates,4748 indicating a potential difference in VE between strains.\nThere are some limitations to this study. First, the cohort consisted of newborns in 2011, so we cannot exclude possible cohort effects of vaccination trends, since all participants received varicella vaccination during a similar period. Although the long-term follow-up and the estimation from secondary vaccination, which had a wide range of vaccination dates (2015 to 2017), can indirectly reflect the vaccination trends in Korea, further cohort studies with longer enrollment periods are needed for confirmation. Second, since the secondary vaccination was evaluated from the immunization registry data of the NHIS, the effectiveness of secondary vaccination might have been underestimated due to missing data on the secondary vaccination rate. Fortunately, the direction of the effectiveness would not change given the possibility of differential misclassification.49 To minimize the risk of bias, we used the PSM method to adjust for sociodemographic factors that could affect vaccination status.1150 Third, because varicella often has a relatively mild clinical manifestation, not all people with varicella visit medical institutions.1251 Because of this, it is possible that varicella records of unvaccinated persons may have been omitted from the claims data, which was adjusted for in our analyses. However, since we defined the presence of immunosuppression using diagnostic codes, it was challenging to exclude children taking immunosuppressant drugs from our research scheme.52\nDespite these limitations, our study extends previous findings reported from other continents with different settings, in the US and Europe, that varicella VE wanes over time and that two-dose vaccination provides additional protection compared to one-dose vaccination. The large cohort comprising more than 470,000 children gives greater power to the estimated VE at a national level. In countries where vaccination rates are maintained above 95%, the unvaccinated children may not be a representative population. Therefore, it is imperative to take this into account when evaluating the effectiveness of a vaccine with a retrospective cohort design in countries with an extremely high vaccination rate. Through this methodologic approach, we were able to calculate VE in a population with high vaccine coverage in a comprehensive way.\nIn conclusion, we found lower long-term VE of one-dose UVV and waning of effectiveness over time. Longer follow-up of the vaccinated children as well as appropriately designed studies are needed to establish the optimal strategy for containing varicella in Korea." ]
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Chickenpox Vaccine", "Immunity, Heterologous", "Varicella Zoster Virus Infection", "Cohort Studies", "Vaccine" ]
INTRODUCTION: Varicella is a highly contagious acute infectious disease caused by the varicella-zoster virus.12 Varicella frequently affects children between 4 and 6 years of age, causing various complications.3 Introduction of varicella vaccination into the national immunization program reduced the varicella burden of disease in many countries. Previously, vaccine effectiveness (VE) of varicella vaccines showed a wide range of results, and methods of estimation varied by study due to a lack of follow-up data.45678910 Few previous studies revealed VE through a follow-up cohort considering medical utilization, which can be heterogeneous, resulting in differences in vaccination rates.11 Also, VE against complicated varicella infection may differ from that against overall varicella infection, since subclinical manifestations of varicella infection occasionally do not require medical attention.12 Varicella vaccine was first introduced in 198813 and has been included in the National Immunization Program (NIP) in Korea since 2005.14 Universal varicella vaccination (UVV) with one dose is recommended for all children between 12 and 15 months of age.1516 Although vaccine coverage rates are generally over 97% in Korea, the national notifiable diseases surveillance system reported that the incidence of varicella continues to increase,17 adding substantial disease burden to public health.18 Therefore, it is important to assess the VE of one-dose UVV in Korea and address the public health implications with regard to vaccination strategy. In this study, we aimed to evaluate the VE in a retrospective cohort using insurance claims data and immunization registry information from 2011–2018, adjusting for major characteristics of the vaccinated and unvaccinated populations in a country where UVV coverage is estimated to be higher than 97% nationwide. 
METHODS: Study population and data sources
All newborns in Korea from January 1, 2011 to December 31, 2011 were included in the study population. The annual birth cohort of the corresponding year was 470,000 newborns. Demographic characteristics of children, such as date of birth, sex, and address, were obtained from the National Health Insurance System (NHIS), a single insurer that covers over 97% of the Korean population.1419 All medical claims are reviewed by the Health Insurance Review and Assessment Service (HIRA), using admissions and diagnoses data submitted to the NHIS. We used data from the National Immunization Registry, which contains the vaccination status and detailed information, such as vaccination dates and counts, of all children (0–12 years old) receiving vaccines in Korea, obtained from the Korea Disease Control and Prevention Agency (KDCA). The vaccination status of the cohort was then linked to the medical claims data of HIRA with de-identified personal information. We followed the International Committee of Medical Journal Editors (ICMJE) recommendations throughout the study.20
Study design and inclusion criteria
All children born in 2011 were eligible for the study population. Among them, children who did not receive varicella vaccination between 12 and 15 months of age, children who received more than three doses in total or two doses within a 4-week window, and immunocompromised children (Supplementary Table 1) were excluded from the analysis (Fig. 1). The included study population was followed up until the endpoint of the cohort (December 31, 2018). Based on the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10), the primary outcome was a diagnosis of varicella (ICD-10: B01). The secondary outcome was a diagnosis of severe varicella, defined as experiencing one or more of the following: 1) admission to hospital due to varicella, 2) prescription of acyclovir to treat varicella, or 3) complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, or other varicella complications (ICD-10: B010, B011, B012, B0180, and B0188, respectively). Breakthrough infections, defined as the occurrence of varicella more than 42 days after vaccination, were included in the evaluation of incidence.21
Statistical analysis
Frequencies of varicella and severe varicella were measured by vaccination status over time. Crude incidence density and the incidence rate ratio of varicella and severe varicella were calculated. In the setting of a high vaccination coverage rate (> 95%), we applied propensity score matching (PSM) to compare the VE of the vaccines in order to minimize the risk of selection bias.22 The propensity score consisted of month of birth, sex, region of address by SIDO, and frequency of out-patient visits per year. The mean propensity score was 0.97 (standard deviation [SD], 0.01) in the vaccinated group and 0.98 (SD, 0.02) in the unvaccinated group. All unvaccinated cases were 1:1 matched to a randomly sampled vaccinated group with the nearest neighbor matching method.2324 VE was calculated as (1 − hazard ratio) × 100, estimated by Cox proportional hazard models.25 We checked linearity and proportionality with log-cumulative hazard plots and Schoenfeld residuals in each model.26 In addition, we estimated cumulative VE by year during a 6-year follow-up to assess the long-term protective effect of the vaccine. The population who received two-dose vaccination between 4 and 6 years of age was compared with the population who received only the primary vaccination by the same methods. All statistical analyses were performed with SAS 9.4 software (SAS Institute Inc., Cary, NC, USA).
Ethics statement
The institutional review board (IRB) of Seoul National University Hospital approved this study under exemption from IRB review (IRB No. 1809-064-971). Informed consent was waived since we used secondary de-identified data from the KDCA and HIRA as part of public health research.
RESULTS: Total vaccination rate and incidence of varicella
Among 421,070 births in 2011, 97.5% (410,393) received at least one dose of varicella vaccine. Of the total, 13.3% (55,940) had diagnoses of varicella during the follow-up period. The average follow-up period was 5.5 years per person, resulting in a varicella incidence of 24.2 per 1,000 person-years in total. Severe varicella occurred in 0.98% (total n = 4,132; 3,449 complicated varicella, 700 hospitalizations, and 240 acyclovir prescriptions). Among the vaccinated population, 26.7% (108,797) received second-dose vaccinations at 4 to 6 years of age.
Demographics of the study population
Table 1 shows the demographic characteristics of the studied cohort by vaccination status. After adjusting for age, sex, and region, the mean number of out-patient visits in the unvaccinated group was significantly lower (7.69 per year; SD, 11.90) than in the vaccinated group (26.39 per year; SD, 12.78; Table 1). After propensity score matching to estimate the VE of varicella, the demographics of the unvaccinated and vaccinated groups were matched (Supplementary Table 2). SD = standard deviation, OR = odds ratio, CI = confidence interval. aOR, 1.33 (95% CI, 1.25–1.42), P < 0.001; bOR, 0.77 (95% CI, 0.65–0.92), P = 0.004.
Incidence of varicella and severe varicella
During the studied period, the vaccinated group showed a higher frequency of varicella diagnosis than the unvaccinated group (13.4% vs. 10.4%, Table 1), whereas severe varicella was more frequent in the unvaccinated group than in the vaccinated group (1.3% vs. 0.97%, Table 1). The incidence density of breakthrough varicella was 23.3 (95% confidence interval [CI], 23.1–23.4) per 1,000 person-years. The incidence of varicella in the unvaccinated group was 13.7 (95% CI, 13.0–14.5) per 1,000 person-years. The mean age at infection was 2.0 (SD, 1.62) years in the unvaccinated group and 4.3 (SD, 1.85) years in the vaccinated group.
VE and its waning immunity
Table 2 describes the overall VE after matching the propensity score by month of birth, sex, out-patient visits per year, and region. During the first year after vaccination, the overall VE was 86.1%, but it decreased to 62.6% at four years and to 49.9% at six years after vaccination. The VE for severe varicella was 80.4% in the first year after vaccination and decreased to 66.3% after six years (Supplementary Table 2). The VE trend over time is shown in Fig. 2. The propensity score matched vaccinated group was compared to the unvaccinated group; month of birth, sex, region of address, and frequency of out-patient visits per year were used for the propensity score calculation. aVaccine effectiveness is estimated via (1 − hazard ratio) × 100 (%); bSevere varicella was defined as experiencing one or more of the following: 1) admission to hospital due to varicella, 2) prescription of acyclovir to treat varicella, or 3) complications due to varicella: varicella meningitis, varicella encephalomyelitis, varicella pneumonia, varicella keratitis, or other varicella complications (ICD-10: B010, B011, B012, B0180, and B0188, respectively).
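The VE figures above follow directly from the Cox model hazard ratios via the formula given in the Methods, VE = (1 − hazard ratio) × 100. A minimal sketch of that conversion; the hazard ratio value shown is illustrative, chosen to reproduce the reported first-year VE of 86.1%:

```python
def vaccine_effectiveness(hazard_ratio: float) -> float:
    """Convert a Cox-model hazard ratio into vaccine effectiveness (%).

    VE = (1 - HR) * 100, as used in this study.
    """
    return (1.0 - hazard_ratio) * 100.0

# Illustrative: a hazard ratio of 0.139 corresponds to a VE of 86.1%,
# matching the reported first-year VE against any varicella.
print(round(vaccine_effectiveness(0.139), 1))  # 86.1
```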
VE of two-dose varicella vaccination
Among the vaccinated population, 2,233 (2.11%) of the two-dose vaccinated group and 23,192 (8.42%) of the one-dose vaccinated group were diagnosed with varicella. The VE of two-dose vaccination compared to one-dose-only vaccination was 73.4% (95% CI, 72.2–74.6), after adjusting for month of birth, sex, region, and healthcare utilization. The adjusted two-dose VE for severe varicella was 61.4% (95% CI, 54.0–67.6).
DISCUSSION: In this study, we found a substantial incidence density of breakthrough varicella of 23.3 per 1,000 person-years (95% CI, 23.1–23.4). In a pooled analysis of breakthrough infection, the incidence rate of breakthrough varicella was 8.5 cases per 1,000 person-years (95% CI, 5.3–13.7) for one-dose vaccination and 2.2 cases per 1,000 person-years (95% CI, 0.5–9.3) for two-dose vaccination.27 By estimating the incidence density, we attempted to present more informative results on varicella, comparing the frequency of varicella in the two groups: 13.4% in the vaccinated group and 10.4% in the unvaccinated group.
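The crude incidence densities compared here can be reproduced from the reported counts. A rough sketch, assuming person-time is approximated as cohort size × mean follow-up (421,070 children × 5.5 years, as reported in the Results), which recovers the overall incidence of 24.2 per 1,000 person-years:

```python
def incidence_per_1000_py(cases: int, person_years: float) -> float:
    """Crude incidence density per 1,000 person-years."""
    return cases / person_years * 1000.0

# Approximate total person-time: 421,070 children followed 5.5 years on average.
person_years = 421_070 * 5.5
print(round(incidence_per_1000_py(55_940, person_years), 1))  # 24.2
```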
Breakthrough varicella is likely associated with time elapsed since vaccination, as demonstrated in a previous study that showed higher risk at 5 and 6 years of age among vaccinated children (P < 0.001).28 This is in line with our finding that varicella VE decreased over time from 86.1% to 49.9% during the six-year observation period in Korea. The serial decrease in VE for both outcomes, any varicella and severe varicella, indicates possible waning immunity of the vaccine.29 Concerns about vaccine failure due to waning immunity were raised by previous studies of specific vaccine types or population cohorts.30313233 In this study, considering that the mean age at varicella infection was older in the vaccinated population, the waning immunity may result from secondary vaccine failure.3435 Moreover, the possibility of waning immunity could originate not only from population outbreaks but also from the vaccine itself.3637 Therefore, a thorough analysis of the overall process of vaccine development, production, transportation, and storage is required to resolve this issue. Cumulative VE over time since vaccination is important because the target age should match the main age of varicella infection to control outbreaks. In addition, cumulative VE can help predict the loss of protection and determine the optimal timing of second-dose vaccination. In studies conducted in the United States38 and Germany,39 VE declined over time, and for this reason, discussion of the two-dose strategy began. We also found higher VE with second-dose vaccination, in line with reports from the U.S., Europe, and Asia.3338394041 Earlier studies showed a decline in incidence with the introduction of a two-dose program4042 or a decreased occurrence of outbreaks,1643 but the actual effectiveness compared to the single-dose vaccine group was rarely studied.
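The waning from 86.1% to 49.9% can be summarized as an average annual decline. A small sketch, assuming a simple linear average over the five one-year intervals between the year-1 and year-6 estimates, which matches the 7.2% annual decrease reported in the abstract:

```python
def mean_annual_ve_decline(ve_start: float, ve_end: float, intervals: int) -> float:
    """Average yearly drop in VE (percentage points) over `intervals` years."""
    return (ve_start - ve_end) / intervals

# VE fell from 86.1% (year 1) to 49.9% (year 6): five one-year intervals.
print(round(mean_annual_ve_decline(86.1, 49.9, 5), 1))  # 7.2
```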
In this study, the effectiveness of secondary vaccination was estimated consistently, enabling comparison with the primary-only vaccinated group. We were also able to suggest the ideal timing of second-dose varicella vaccination, since the estimated decrease of VE by year was presented. Our data suggest that the optimal timing of the second dose needs further investigation, given the rapid decline of VE between three and four years after the first dose.4445 Considering the cost-effectiveness of a two-dose vaccination strategy in the Korean context,46 the recommended timing of second-dose vaccination should be set according to this epidemiologic finding. However, one important question is whether waning immunity differs by varicella vaccine type. Our results on waning immunity may soon facilitate discussion about a change to a two-dose policy in Korea. Before doing so, VE should be measured by vaccine type. Four different varicella vaccine types were available to the 2011 birth cohort. PSM analysis cannot be applied to the analysis of VE by vaccine type because each vaccine type must be matched separately. After matching birth month, sex, geographic region, and the number of healthcare visits, VE by vaccine type was analyzed in the retrospective cohort. In that analysis, we found that one specific vaccine type showed a significantly higher incidence rate of breakthrough varicella (27.09/1,000 person-years) than the remaining three vaccine types (16.95–18.57/1,000 person-years). Among the four vaccine types, the vaccine containing the MAV strain showed the highest incidence, while the three remaining vaccine types containing the Oka strain had lower rates,4748 indicating a potential difference in VE between strains. There are some limitations to this study.
First, the cohort consisted of newborns in 2011, so we cannot exclude possible cohort effects of vaccination trends, since all participants received varicella vaccination during a similar period. Although the long-term follow-up and the estimation from secondary vaccination, which had a wide range of vaccination dates (2015 to 2017), can indirectly reflect vaccination trends in Korea, further cohort studies with longer enrollment periods are needed for confirmation. Second, since secondary vaccination was evaluated using the immunization registry data of the NHIS, the effectiveness of secondary vaccination might have been underestimated due to missing data on the secondary vaccination rate. Fortunately, the direction of the effect would not change given the possibility of differential misclassification.49 To minimize the risk of bias, we used the PSM method to adjust for sociodemographic factors that could affect vaccination status.1150 Third, because varicella often presents with relatively mild clinical manifestations, not all people with varicella visit medical institutions.1251 Because of this, varicella records of unvaccinated persons may have been omitted from the claims data, which was adjusted for in our analyses. In addition, since we defined the presence of immunosuppression using diagnostic codes, it was challenging to exclude children taking immunosuppressant drugs from our research scheme.52 Despite these limitations, our study extends previous findings reported from other continents with different settings, in the US and Europe, that varicella VE wanes over time and that two-dose vaccination provides additional protection compared to one-dose vaccination. The large cohort comprising more than 470,000 children gives greater power to the estimated VE at a national level.
In countries where vaccination rates are maintained above 95%, the unvaccinated group may not be a representative population. Therefore, it is imperative to take this into account when evaluating the effectiveness of a vaccine with a retrospective cohort design in countries with an extremely high vaccination rate. Through this methodologic approach, we were able to calculate VE comprehensively in a population with high vaccine coverage. In conclusion, we found lower long-term VE of one-dose UVV and waning of effectiveness over time. Longer follow-up of vaccinated children, as well as appropriately designed studies, is needed to establish the optimal strategy for containing varicella in Korea.
Background: Despite high coverage (~98%) of universal varicella vaccination (UVV) in the Republic of Korea since 2005, the reduction in the incidence rate of varicella is not obvious. This study aimed to evaluate the vaccine effectiveness (VE) of one-dose UVV by timeline and severity of disease. Methods: All children born in Korea in 2011 were included in this retrospective cohort study, which analyzed insurance claims data from 2011–2018 and varicella vaccination records in the immunization registry. Adjusted hazard ratios from Cox proportional hazard models were used to estimate VE through propensity score matching by month of birth, sex, healthcare utilization rate, and region. Results: Of the total 421,070 newborns in the 2011 birth cohort, 13,360 were matched for age, sex, healthcare utilization rate, and region by the propensity score matching method. A total of 55,940 (13.29%) children were diagnosed with varicella, with an incidence rate of 24.2 per 1,000 person-years; 13.4% of vaccinated children and 10.4% of unvaccinated children. The VE of one-dose UVV against any varicella was 86.1% (95% confidence interval [CI], 81.4–89.5) during the first year after vaccination and 49.9% (95% CI, 43.3–55.7) during the 6-year follow-up period since vaccination, resulting in a 7.2% annual decrease of VE. The overall VE for severe varicella was 66.3%. The VE of two-dose compared to one-dose vaccination was 73.4% (95% CI, 72.2–74.6). Conclusions: We found lower long-term VE with one-dose vaccination and waning of effectiveness over time. Longer follow-up of vaccinated children, as well as appropriately designed studies, is needed to establish the optimal strategy for preventing varicella in Korea.
null
null
6,092
352
[ 194, 237, 263, 60, 116, 136, 134, 247, 93 ]
13
[ "varicella", "vaccination", "group", "ve", "year", "vaccinated", "dose", "population", "vaccinated group", "unvaccinated" ]
[ "receive varicella vaccination", "immunity varicella vaccine", "varicella vaccines total", "different varicella vaccine", "varicella vaccination estimated" ]
null
null
[CONTENT] Chickenpox Vaccine | Immunity, Heterologous | Varicella Zoster Virus Infection | Cohort Studies | Vaccine [SUMMARY]
[CONTENT] Chickenpox Vaccine | Immunity, Heterologous | Varicella Zoster Virus Infection | Cohort Studies | Vaccine [SUMMARY]
[CONTENT] Chickenpox Vaccine | Immunity, Heterologous | Varicella Zoster Virus Infection | Cohort Studies | Vaccine [SUMMARY]
null
[CONTENT] Chickenpox Vaccine | Immunity, Heterologous | Varicella Zoster Virus Infection | Cohort Studies | Vaccine [SUMMARY]
null
[CONTENT] Birth Cohort | Chickenpox | Chickenpox Vaccine | Female | Follow-Up Studies | Humans | Incidence | Infant | Male | Propensity Score | Republic of Korea | Retrospective Studies | Severity of Illness Index | Vaccination | Vaccine Efficacy [SUMMARY]
[CONTENT] Birth Cohort | Chickenpox | Chickenpox Vaccine | Female | Follow-Up Studies | Humans | Incidence | Infant | Male | Propensity Score | Republic of Korea | Retrospective Studies | Severity of Illness Index | Vaccination | Vaccine Efficacy [SUMMARY]
[CONTENT] Birth Cohort | Chickenpox | Chickenpox Vaccine | Female | Follow-Up Studies | Humans | Incidence | Infant | Male | Propensity Score | Republic of Korea | Retrospective Studies | Severity of Illness Index | Vaccination | Vaccine Efficacy [SUMMARY]
null
[CONTENT] Birth Cohort | Chickenpox | Chickenpox Vaccine | Female | Follow-Up Studies | Humans | Incidence | Infant | Male | Propensity Score | Republic of Korea | Retrospective Studies | Severity of Illness Index | Vaccination | Vaccine Efficacy [SUMMARY]
null
[CONTENT] receive varicella vaccination | immunity varicella vaccine | varicella vaccines total | different varicella vaccine | varicella vaccination estimated [SUMMARY]
[CONTENT] receive varicella vaccination | immunity varicella vaccine | varicella vaccines total | different varicella vaccine | varicella vaccination estimated [SUMMARY]
[CONTENT] receive varicella vaccination | immunity varicella vaccine | varicella vaccines total | different varicella vaccine | varicella vaccination estimated [SUMMARY]
null
[CONTENT] receive varicella vaccination | immunity varicella vaccine | varicella vaccines total | different varicella vaccine | varicella vaccination estimated [SUMMARY]
null
[CONTENT] varicella | vaccination | group | ve | year | vaccinated | dose | population | vaccinated group | unvaccinated [SUMMARY]
[CONTENT] varicella | vaccination | group | ve | year | vaccinated | dose | population | vaccinated group | unvaccinated [SUMMARY]
[CONTENT] varicella | vaccination | group | ve | year | vaccinated | dose | population | vaccinated group | unvaccinated [SUMMARY]
null
[CONTENT] varicella | vaccination | group | ve | year | vaccinated | dose | population | vaccinated group | unvaccinated [SUMMARY]
null
[CONTENT] varicella | varicella infection | uvv | ve | disease | infection | immunization | korea | vaccine | burden [SUMMARY]
[CONTENT] varicella | vaccination | study | children | population | data | irb | icd | icd 10 | study population [SUMMARY]
[CONTENT] varicella | group | vaccinated | year | table | dose | vaccinated group | ve | ci | unvaccinated [SUMMARY]
null
[CONTENT] varicella | vaccination | group | ve | dose | vaccinated | year | vaccinated group | unvaccinated | table [SUMMARY]
null
[CONTENT] UVV | the Republic of Korea | 2005 ||| VE | one | UVV [SUMMARY]
[CONTENT] Korea | 2011 | 2011-2018 ||| Cox | VE | the month [SUMMARY]
[CONTENT] 421,070 | 2011 | 13,360 ||| 55,940 | 13.29% | 24.2 | 1000 | 13.4% | 10.4% ||| VE | one | UVV | 86.1% | 95% ||| CI] | 81.4 | the first year | 49.9% | 95% | CI | 43.3 | 6-year | 7.2% | annual | VE ||| VE | 66.3% ||| VE | two | one | 73.4% | 95% | CI | 72.2 [SUMMARY]
null
[CONTENT] UVV | the Republic of Korea | 2005 ||| VE | one | UVV ||| Korea | 2011 | 2011-2018 ||| Cox | VE | the month ||| ||| 421,070 | 2011 | 13,360 ||| 55,940 | 13.29% | 24.2 | 1000 | 13.4% | 10.4% ||| VE | one | UVV | 86.1% | 95% ||| CI] | 81.4 | the first year | 49.9% | 95% | CI | 43.3 | 6-year | 7.2% | annual | VE ||| VE | 66.3% ||| VE | two | one | 73.4% | 95% | CI | 72.2 ||| VE | one ||| Korea [SUMMARY]
null
Psychometric properties of the Persian version of the Emotion Regulation Questionnaire.
34043902
Gross's Emotion Regulation Questionnaire is one of the most widely-used and valid questionnaires for assessing emotion regulation strategies. The validity and reliability of the Persian version have not been determined and data on its psychometric properties are not available to Iranian mental health researchers. The purpose of this study was to determine the psychometric properties of the Emotion Regulation Questionnaire in Iranian students.
INTRODUCTION
In this cross-sectional study, 348 students (170 males and 178 females) were selected from Shahid Beheshti University of Medical Science and Tehran University of Medical Science. The following statistical procedures were conducted: correlation coefficients, factor analysis, Cronbach's alpha, and independent t tests.
METHODOLOGY
The results showed that men use suppression more than women (T = -2.62, p = 0.009). Cronbach's alpha coefficients were 0.76 for the cognitive reappraisal sub-scale and 0.72 for the suppression sub-scale (excluding question 9). Six questions related to the cognitive reappraisal factor explained 30.97% of emotion regulation variance, and 3 questions related to the suppression factor explained 22.59% of emotion regulation variance. Overall, these factors explained 53.5% of emotion regulation variance. There were significant correlations between suppression and difficulties in emotion regulation, trait anxiety, and affective control. Furthermore, there was a significant correlation between cognitive reappraisal and the Five-Facet Mindfulness Questionnaire.
RESULTS
The results indicate that the Persian version of the ERQ is a reliable and valid instrument that can be helpful for development of further important studies of emotional regulation.
CONCLUSION
[ "Cross-Sectional Studies", "Emotional Regulation", "Female", "Humans", "Iran", "Male", "Psychometrics", "Reproducibility of Results", "Surveys and Questionnaires" ]
8317545
Introduction
Emotion is an individual’s overall, intense, and brief response to an unexpected event, accompanied by pleasant or unpleasant emotional states. Emotion has always been of interest to mental health researchers, for various reasons, including evolutionary function,1 social-communication,2 decision-making,3 and the important role it plays in mental health.4 In recent decades, there have been many advances in the field of emotion regulation, including scientific theories and studies. Hence, we have achieved a better understanding of the pathway of growth, neurology, genetic and environmental effects, and its relation to cognition.5 One of the most important issues in mental health is emotion regulation. Emotion regulation relates to a process in which individuals experience and express their emotions. According to Gross, the process of emotion regulation is further examined through cognitive reappraisal and expressive suppression, i.e. emotion regulation strategies that are activated at the beginning of an event or before it, and those that are activated after an event or an emotion. Gross believes that emotion regulation strategies do not represent the person’s positive or negative character, but rather are based on specific situations in the person’s life.6 Health professionals believe that problems with emotion regulation play a major role in maintenance and increase of mental disorders and maladaptive behaviors.7 Emotion regulation strategies play an essential role in mental health and psychiatric disorders such as depression,8 anxiety,9 borderline personality disorder10,11 and anorexia nervosa.12 Saxcena et al. found that difficulties in emotion regulation and use of dysfunctional emotion regulation strategies are factors that have a negative impact on mental health.13-15 In general, in most psychiatric disorders, there is at least one symptom that reflects impairment of emotions.16 Various instruments have been developed to measure the emotions. 
One of the most widely used instruments is Gross’s Emotion Regulation Questionnaire (ERQ). The ERQ is based on a theory-based approach and an emotion regulation model and has two sub-scales, cognitive reappraisal and expressive suppression. Cognitive reappraisal indicates that an individual makes an effort to change how he or she thinks about a situation in order to change its emotional impact and reappraise the initial perception, whereas expressive suppression is defined as a response-focused strategy.17 All items are answered on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with higher scores representing higher usage of that strategy. Gross & John, reported that the ERQ has a two-factor structure which means “reappraisal and suppression are two independent regulatory strategies that different individuals use to varying degrees.” Cronbach’s alphas were 0.79 for cognitive reappraisal and 0.73 for expressive suppression.17 Other studies have also shown that ERQ has good validity and reliability. For example in a study by Eldeleklioğlu & Eroğlu, Cronbach’s alphas were 0.78 for cognitive reappraisal and 0.73 for expressive suppression.18 Furthermore, in a study conducted by Enebrink et al., Cronbach’s alpha coefficients were 0.81 for cognitive reappraisal and 0.73 for expressive suppression.19 Similarly, in Balzarotti et al., Cronbach’s alpha coefficients were 0.84 for cognitive reappraisal and 0.72 for expressive suppression.20 However, no studies have determined the validity and reliability of the Persian version of the ERQ and no psychometric properties have been available to Iranian mental health researchers so far. The purpose of the present study was to evaluate the internal consistency and factor structure of an Iranian adaptation of the ERQ.
Methodology
Participants were selected from among the undergraduate, postgraduate, and PhD students at Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences. Determining an appropriate sample size for structural equation modeling is a key element of factor analysis. Klein believes that 10 or 20 samples are necessary for each variable in exploratory factor analysis, but a sample size of at least 200 can be defended.21 The sample comprised 348 students (170 males and 178 females) chosen from the aforementioned universities using a convenience sampling method. The inclusion criterion was Persian as a native language, and the exclusion criteria were having a severe psychiatric disorder and unwillingness to participate in the research. The ERQ was separately translated into Persian by two PhD students in clinical psychology, after which a professor of clinical psychology rectified discrepancies between the translations. Next, two English-language experts were asked to translate the items back into the original language. The back-translated texts were compared with the original text and any problems, including the structures of translated sentences, were investigated. The scale was then administered to a sample of 20 participants, and problems such as ambiguity and incomprehensibility of a few Persian sentences were rectified. After these steps, the questionnaire was finalized for use. The study was designed according to the Declaration of Helsinki and approved by the Ethics Committee at Shahid Beheshti University of Medical Sciences (IR.SBMU.SM.REC.1394.181), and all participants gave their consent to take part in the study, signing the consent form. Instruments Five-Facet Mindfulness Questionnaire (FFMQ) The FFMQ is used to measure the subjective view of one’s mindfulness.
The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). FFMQ scores are obtained by summing up the scores of the individual items. FFMQ scores range from 8 to 40, with higher scores representing more mindfulness. The FFMQ has adequate internal consistency and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judgmental inner experience.18 The reliability and validity of this test in Iranian samples was desirable (alpha ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and mindfulness sub-scales, while negative correlations were observed between mindfulness sub-scales and all the symptoms of the SCL-25.
State-Trait Anxiety Inventory (STAI) The STAI was designed to measure anxiety in the form of state and trait. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25
Affective Control Scale (ACS) The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempts to control emotional experiences.26 The instrument’s sub-scales cover four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94 and 0.78, respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27
Difficulty in Regulation of Emotion Scale (DRES) The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, of which six coincided with those of the original version and two were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28
Results
Twenty-four participants were excluded due to missing information needed for the final analysis. Descriptive analysis of the data collected on the participants is shown in Table 1, and the means and standard deviations for the other questionnaires are listed in Table 2.

Table 1 Descriptive data on participants
Faculty          n (%)        Age, mean (SD)   Sex
Medicine         156 (44.8)   19.5 (1.84)      F: 81, M: 75
Dentistry        84 (24.1)    20.07 (2.55)     F: 43, M: 41
Pharmacy         14 (4)       23.07 (5.66)     F: 8, M: 6
Paramedicine     86 (24.7)    23.38 (5.25)     F: 42, M: 44
Basic Sciences   8 (2.3)      31.50 (4.98)     F: 4, M: 4
Total            348          23.50 (4.05)     F: 178, M: 170
F = female; M = male; SD = standard deviation.

Table 2 Mean, standard deviation, and minimum and maximum scores of questionnaires
Measures             FFMQ   DRES   STAI   ACS
Mean                 98.3   37.8   41.6   73.3
Standard deviation   12.6   18.7   9.6    17
Minimum              68     0      20     25
Maximum              143    88     74     114
ACS = Affective Control Scale; DRES = Difficulties in Regulation of Emotion Scale; FFMQ = Five-Facet Mindfulness Questionnaire; STAI = State-Trait Anxiety Inventory.

The independent t test for the ERQ subscales showed that men used suppression more than women, and this difference was statistically significant (T = -2.62, p = 0.009). Additionally, women used cognitive reappraisal more than men, but this difference was not statistically significant (T = 1.31, p = 0.759). The KMO index (0.734, above the 0.5 threshold) and Bartlett’s test of sphericity (df = 36, p = 0.001) indicated the sample was adequate for factor analysis. We used the FFMQ, ACS, DRES, and STAI trait scales to evaluate the divergent and convergent validity of the ERQ.
Pearson correlation coefficients were calculated, and the results are presented in Table 4. Cohen determined that a correlation coefficient of 0.10 represents a weak or small association, 0.30 a moderate correlation, and 0.50 or larger a strong or large correlation.31

Table 4 Correlations for the Emotion Regulation Questionnaire subscales and other scales
Measures                 FFMQ     DRES     STAI     ACS      Suppression   Reappraisal
FFMQ                     -
DRES                     -0.68*   -
TRAIT                    -0.54*   0.56*    -
ACS                      -0.52*   0.57*    0.54*    -
Expressive suppression   -0.28*   0.25*    0.21*    0.09     -
Cognitive reappraisal    0.11†    -0.11†   -0.24*   -0.06    -0.14*        -
ACS = Affective Control Scale; DRES = Difficulties in Regulation of Emotion Scale; FFMQ = Five-Facet Mindfulness Questionnaire; STAI = State-Trait Anxiety Inventory. * p < 0.01; † p < 0.05.

As shown in Table 4, expressive suppression correlated negatively with the FFMQ and positively with the DRES and trait anxiety. Cognitive reappraisal showed a positive and significant relationship with the FFMQ, and negative and significant correlations with the DRES and trait anxiety (its correlation with the ACS was negative but not significant), indicating adequate divergent validity of this sub-scale. As shown in Table 5, Cronbach’s alpha coefficients were 0.76 for cognitive reappraisal and 0.72 for expressive suppression (after elimination of item 9). Both values were greater than 0.7, indicating the questionnaire is reliable. Factor analysis results showed that 6 of the 10 ERQ questions loaded onto the cognitive reappraisal factor (30.97% of variance) and 3 questions (2, 4, and 6) loaded onto expressive suppression (22.59% of variance).
Question 9 was omitted because it had high factor loadings on both the cognitive reappraisal and expressive suppression factors.

Table 5 Factor loadings, eigenvalues, and variances of the Emotion Regulation Questionnaire subscales
1. I control my emotions by changing the way I think about the situation I’m in. (Reappraisal: 0.65)
2. When I want to feel less negative emotion, I change the way I’m thinking about the situation. (Suppression: 0.78)
3. When I want to feel more positive emotion, I change the way I’m thinking about the situation. (Reappraisal: 0.72)
4. When I want to feel more positive emotion (such as joy or amusement), I change what I’m thinking about. (Suppression: 0.81)
5. When I want to feel less negative emotion (such as sadness or anger), I change what I’m thinking about. (Reappraisal: 0.69)
6. When I’m faced with a stressful situation, I make myself think about it in a way that helps me stay calm. (Suppression: 0.78)
7. I control my emotions by not expressing them. (Reappraisal: 0.66)
8. When I am feeling negative emotions, I make sure not to express them. (Reappraisal: 0.60)
10. When I am feeling positive emotions, I am careful not to express them. (Reappraisal: 0.74)
Cronbach’s alpha coefficients: Reappraisal 0.76; Suppression 0.72
Eigenvalues: Reappraisal 2.78; Suppression 2.03
Factor variances (%): Reappraisal 30.97; Suppression 22.59
Total variance (%): 53.56
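The Cronbach's alpha coefficients reported above follow the standard formula α = k/(k−1) × (1 − Σ item variances / variance of the total score). A minimal sketch on made-up 7-point Likert responses (hypothetical toy data, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    items = np.asarray(items, dtype=float)     # shape (n_respondents, k_items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance per item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy responses of 5 respondents to 3 items (hypothetical):
toy = [[7, 6, 7], [4, 5, 4], [2, 2, 3], [6, 5, 6], [3, 3, 2]]
alpha = cronbach_alpha(toy)  # ≈ 0.96 for this toy data
```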
Conclusion
The results of confirmatory factor analysis showed that the ERQ had good psychometric properties and that its Cronbach’s alpha coefficients were adequate. Convergent and divergent validity were observed between the STAI (trait), DRES, FFMQ, and ACS questionnaires and the ERQ sub-scales. Therefore, the Persian version of the ERQ is a reliable and valid instrument with good consistency that should be useful for the development of further studies on emotion regulation.

Table 3 Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity
Kaiser-Meyer-Olkin measure of sampling adequacy: 0.734
Bartlett’s test of sphericity: df = 36, Sig = 0.001
df = degrees of freedom.
[ "Instruments", "Five-Facet Mindfulness Questionnaire (FFMQ)", "State-Trait Anxiety Inventory (STAI)", "Affective Control Scale (ACS)", "Difficulty in Regulation of Emotion Scale (DRES)", "Statistical analyses", "Kaiser-Meyer-Olkin (KMO)", "Bartlett’s test", "Confirmatory factor analysis", "Limitations" ]
[ "Five-Facet Mindfulness Questionnaire (FFMQ) The FFMQ is used to measure the subjective view of one’s mindfulness. The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). FFMQ scores are obtained by summing up the scores of the individual items. FFMQ scores range from 8 to 40, with higher scores representing more mindfulness.\nThe FFMQ has adequate internal consistency and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judgmental inner experience.18 The reliability and validity of this test in Iranian samples was desirable (alpha ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and mindfulness sub-scales, while negative correlations were observed between mindfulness sub-scales and all the symptoms of the SCL-25.\nState-Trait Anxiety Inventory (STAI) The STAI was designed to measure anxiety in the form of state and trait. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25\nAffective Control Scale (ACS) The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempt to control emotional experiences.26 The instrument’s sub-scales include four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items number 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94, 0.78 respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27\nDifficulty in Regulation of Emotion Scale (DRES) The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, from which six factors coincided with those of the original version and two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28", "The FFMQ is used to measure the subjective view of one’s mindfulness. The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). FFMQ scores are obtained by summing up the scores of the individual items. FFMQ scores range from 8 to 40, with higher scores representing more mindfulness.\nThe FFMQ has adequate internal consistency and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judgmental inner experience.18 The reliability and validity of this test in Iranian samples was desirable (alpha ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and mindfulness sub-scales, while negative correlations were observed between mindfulness sub-scales and all the symptoms of the SCL-25.", "The STAI was designed to measure anxiety in the form of state and trait. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25", "The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempt to control emotional experiences.26 The instrument’s sub-scales include four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items number 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94, 0.78 respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27", "The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, from which six factors coincided with those of the original version and two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28", "Kaiser-Meyer-Olkin (KMO) The KMO index is a sampling coefficient index that indicates the proportion of variance among the variables that might be caused by underlying factors. This index ranges from 0 to 1. When the value approaches 1, the sampling of the data is adequate for performing factor analysis, otherwise (usually if KMO is less than 0.5) the factor analysis probably falls short of validity.29\nBartlett’s test Another method for determining the suitability of data is Bartlett’s test. This test examines the hypothesis that the observed correlation matrix belongs to a group with nonrelated variables. For a factor model to be useful and meaningful, the variables need to be correlated together. Small significance level values (< 0.05) indicate that a factor analysis appears to be appropriate for the data tested. If the significance level is less than 0.05, the factor analysis can coordinate with the data, since the assumption of correlation matrix unity is rejected.30\nConfirmatory factor analysis Confirmatory factor analysis (CFA) is a statistical method used to investigate the factor structure of a set of observed variables. CFA let the researcher test the hypothesis that a relationship between observed variables and their underlying latent constructs exists. In confirmatory research (also known as hypothesis testing), the researcher has a good specific idea about the relationships between the variables under investigation and the researcher attempts to find whether a theory, which is specified as a hypothesis, is supported by data.29", "The KMO index is a sampling coefficient index that indicates the proportion of variance among the variables that might be caused by underlying factors. This index ranges from 0 to 1. When the value approaches 1, the sampling of the data is adequate for performing factor analysis, otherwise (usually if KMO is less than 0.5) the factor analysis probably falls short of validity.29", "Another method for determining the suitability of data is Bartlett’s test. This test examines the hypothesis that the observed correlation matrix belongs to a group with nonrelated variables. For a factor model to be useful and meaningful, the variables need to be correlated together. Small significance level values (< 0.05) indicate that a factor analysis appears to be appropriate for the data tested. If the significance level is less than 0.05, the factor analysis can coordinate with the data, since the assumption of correlation matrix unity is rejected.30", "Confirmatory factor analysis (CFA) is a statistical method used to investigate the factor structure of a set of observed variables. CFA let the researcher test the hypothesis that a relationship between observed variables and their underlying latent constructs exists. In confirmatory research (also known as hypothesis testing), the researcher has a good specific idea about the relationships between the variables under investigation and the researcher attempts to find whether a theory, which is specified as a hypothesis, is supported by data.29", "The findings of the convenience sampling method cannot be generalized to other populations. Moreover, since the participants were selected from undergraduate, postgraduate, and PhD students at Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences, the findings cannot be generalized to the general population including children and adults." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methodology", "Instruments", "Five-Facet Mindfulness Questionnaire (FFMQ)", "State-Trait Anxiety Inventory (STAI)", "Affective Control Scale (ACS)", "Difficulty in Regulation of Emotion Scale (DRES)", "Statistical analyses", "Kaiser-Meyer-Olkin (KMO)", "Bartlett’s test", "Confirmatory factor analysis", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "Emotion is an individual’s overall, intense, and brief response to an unexpected event, accompanied by pleasant or unpleasant emotional states. Emotion has always been of interest to mental health researchers, for various reasons, including evolutionary function,1 social-communication,2 decision-making,3 and the important role it plays in mental health.4 In recent decades, there have been many advances in the field of emotion regulation, including scientific theories and studies. Hence, we have achieved a better understanding of the pathway of growth, neurology, genetic and environmental effects, and its relation to cognition.5 One of the most important issues in mental health is emotion regulation. Emotion regulation relates to a process in which individuals experience and express their emotions. According to Gross, the process of emotion regulation is further examined through cognitive reappraisal and expressive suppression, i.e. emotion regulation strategies that are activated at the beginning of an event or before it, and those that are activated after an event or an emotion. Gross believes that emotion regulation strategies do not represent the person’s positive or negative character, but rather are based on specific situations in the person’s life.6\nHealth professionals believe that problems with emotion regulation play a major role in maintenance and increase of mental disorders and maladaptive behaviors.7 Emotion regulation strategies play an essential role in mental health and psychiatric disorders such as depression,8 anxiety,9 borderline personality disorder10,11 and anorexia nervosa.12 Saxcena et al. found that difficulties in emotion regulation and use of dysfunctional emotion regulation strategies are factors that have a negative impact on mental health.13-15 In general, in most psychiatric disorders, there is at least one symptom that reflects impairment of emotions.16\nVarious instruments have been developed to measure the emotions. 
One of the most widely used instruments is Gross’s Emotion Regulation Questionnaire (ERQ). The ERQ is based on a theory-based approach and an emotion regulation model and has two sub-scales, cognitive reappraisal and expressive suppression. Cognitive reappraisal indicates that an individual makes an effort to change how he or she thinks about a situation in order to change its emotional impact and reappraise the initial perception, whereas expressive suppression is defined as a response-focused strategy.17 All items are answered on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with higher scores representing higher usage of that strategy.\nGross & John, reported that the ERQ has a two-factor structure which means “reappraisal and suppression are two independent regulatory strategies that different individuals use to varying degrees.” Cronbach’s alphas were 0.79 for cognitive reappraisal and 0.73 for expressive suppression.17 Other studies have also shown that ERQ has good validity and reliability. For example in a study by Eldeleklioğlu & Eroğlu, Cronbach’s alphas were 0.78 for cognitive reappraisal and 0.73 for expressive suppression.18 Furthermore, in a study conducted by Enebrink et al., Cronbach’s alpha coefficients were 0.81 for cognitive reappraisal and 0.73 for expressive suppression.19 Similarly, in Balzarotti et al., Cronbach’s alpha coefficients were 0.84 for cognitive reappraisal and 0.72 for expressive suppression.20\nHowever, no studies have determined the validity and reliability of the Persian version of the ERQ and no psychometric properties have been available to Iranian mental health researchers so far. The purpose of the present study was to evaluate the internal consistency and factor structure of an Iranian adaptation of the ERQ.", "Participants were selected from among the undergraduate, postgraduate, and PhD students at Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences. 
Determining an appropriate sample size for structural equation modeling is a seminal element of factor analysis. Klein believes that 10 or 20 samples are necessary for each variable in exploratory factor analysis, but a sample size of at least 200 can be defended.21 The sample selected comprised 348 students (170 males and 178 females) who were chosen from the aforementioned universities. These participants were selected using a convenience sampling method. The Inclusion criterion was Persian as a native language and exclusion criteria were “having a severe psychiatric disorder and unwillingness to participate in research.”\nThe ERQ was separately translated into Persian by two PhD students in clinical psychology, and afterwards a PhD professor in clinical psychology rectified discrepancies in the translations. In the next step, two English-language experts were asked to translate them back into the original language. The translated texts were compared with the original text and any problems were investigated, including the structures of translated sentences. In the next step, the scale was administered to a sample of 20 participants and problems such as ambiguity and incomprehensibility of a few Persian sentences were rectified. After taking these steps, the questionnaire was finally utilized. The study was designed according to the Declaration of Helsinki and approved by the Ethics Committee at Shahid Beheshti University of Medical Sciences (IR.SBMU.SM.REC.1394.181) and all participants gave their consent to take part in the study, signing the consent form.\nInstruments Five-Facet Mindfulness Questionnaire (FFMQ) The FFMQ is used to measure the subjective view of one’s mindfulness. The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). 
It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). FFMQ scores are obtained by summing up the scores of the individual items. FFMQ scores range from 8 to 40, with higher scores representing more mindfulness.\nThe FFMQ has adequate internal consistency and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judgmental inner experience.18 The reliability and validity of this test in Iranian samples was desirable (alpha ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and mindfulness sub-scales, while negative correlations were observed between mindfulness sub-scales and all the symptoms of the SCL-25.\nState-Trait Anxiety Inventory (STAI) The STAI was designed to measure anxiety in the form of state and trait. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25\nAffective Control Scale (ACS) The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempt to control emotional experiences.26 The instrument’s sub-scales include four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items number 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94, 0.78 respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27\nDifficulty in Regulation of Emotion Scale (DRES) The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, from which six factors coincided with those of the original version and two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28", "Five-Facet Mindfulness Questionnaire (FFMQ) The FFMQ is used to measure the subjective view of one’s mindfulness. The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). FFMQ scores are obtained by summing up the scores of the individual items. FFMQ scores range from 8 to 40, with higher scores representing more mindfulness.\nThe FFMQ has adequate internal consistency and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judgmental inner experience.18 The reliability and validity of this test in Iranian samples was desirable (alpha ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and mindfulness sub-scales, while negative correlations were observed between mindfulness sub-scales and all the symptoms of the SCL-25.\nState-Trait Anxiety Inventory (STAI) The STAI was designed to measure anxiety in the form of state and trait. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25\nAffective Control Scale (ACS) The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempt to control emotional experiences.26 The instrument’s sub-scales include four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items number 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94, 0.78 respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. 
In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27\nThe ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempt to control emotional experiences.26 The instrument’s sub-scales include four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items number 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94, 0.78 respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27\nDifficulty in Regulation of Emotion Scale (DRES) The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, from which six factors coincided with those of the original version and two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28\nThe DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). 
Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, from which six factors coincided with those of the original version and two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28", "The FFMQ is used to measure the subjective view of one’s mindfulness. The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). FFMQ scores are obtained by summing up the scores of the individual items. FFMQ scores range from 8 to 40, with higher scores representing more mindfulness.\nThe FFMQ has adequate internal consistency and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judgmental inner experience.18 The reliability and validity of this test in Iranian samples was desirable (alpha ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and mindfulness sub-scales, while negative correlations were observed between mindfulness sub-scales and all the symptoms of the SCL-25.", "The STAI was designed to measure anxiety in the form of state and trait. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. 
The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25", "The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempt to control emotional experiences.26 The instrument’s sub-scales include four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting. The responses for items number 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be inverted. Its internal and test-retest reliabilities were found to be 0.94, 0.78 respectively. Internal and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27", "The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation. The results of exploratory factor analysis in the Iranian sample revealed eight factors, from which six factors coincided with those of the original version and two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaire.28", "Kaiser-Meyer-Olkin (KMO) The KMO index is a sampling coefficient index that indicates the proportion of variance among the variables that might be caused by underlying factors. This index ranges from 0 to 1. 
When the value approaches 1, the sampling of the data is adequate for performing factor analysis; otherwise (usually when KMO is less than 0.5), the factor analysis probably falls short of validity.29

Bartlett’s test Another method for determining the suitability of the data is Bartlett’s test of sphericity. This test examines the hypothesis that the observed correlation matrix comes from a population of unrelated variables. For a factor model to be useful and meaningful, the variables need to be correlated with each other. A small significance level (< 0.05) indicates that factor analysis is appropriate for the data, since the assumption that the correlation matrix is an identity matrix is rejected.30

Confirmatory factor analysis Confirmatory factor analysis (CFA) is a statistical method used to investigate the factor structure of a set of observed variables. CFA lets the researcher test the hypothesis that a relationship exists between the observed variables and their underlying latent constructs. In confirmatory research (also known as hypothesis testing), the researcher has a specific idea about the relationships between the variables under investigation and attempts to find out whether a theory, specified as a hypothesis, is supported by the data.29
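Both adequacy checks above can be computed directly from a correlation matrix. The sketch below is illustrative only (numpy/scipy, with function names of our own choosing); it is not the software the authors used:

```python
import numpy as np
from scipy.stats import chi2

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy from a correlation matrix R."""
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations: p_ij = -inv_ij / sqrt(inv_ii * inv_jj)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(R.shape[0], dtype=bool)   # off-diagonal mask
    r2 = np.sum(R[off] ** 2)
    p2 = np.sum(partial[off] ** 2)
    return r2 / (r2 + p2)                   # near 1: sampling adequate; < 0.5: inadequate

def bartlett_sphericity(R, n):
    """Bartlett's test that R is an identity matrix; n is the sample size."""
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, df, chi2.sf(stat, df)      # small p: factor analysis appropriate
```

For strongly intercorrelated variables the KMO rises toward 1 and Bartlett’s p-value becomes very small, matching the decision rules described in the text.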
Twenty-four participants were excluded due to missing information needed for the final analysis. Descriptive data on the participants are shown in Table 1, and the means and standard deviations for the questionnaires are listed in Table 2.

Table 1. Descriptive data on participants
Faculty | n (%) | Age, mean (SD) | Sex
Medicine | 156 (44.8) | 19.5 (1.84) | F: 81, M: 75
Dentistry | 84 (24.1) | 20.07 (2.55) | F: 43, M: 41
Pharmacy | 14 (4) | 23.07 (5.66) | F: 8, M: 6
Paramedicine | 86 (24.7) | 23.38 (5.25) | F: 42, M: 44
Basic Sciences | 8 (2.3) | 31.50 (4.98) | F: 4, M: 4
Total | 348 | 23.50 (4.05) | F: 178, M: 170
F = female; M = male; SD = standard deviation.

Table 2. Mean, standard deviation, and minimum and maximum scores of questionnaires
Measure | FFMQ | DRES | STAI | ACS
Mean | 98.3 | 37.8 | 41.6 | 73.3
Standard deviation | 12.6 | 18.7 | 9.6 | 17
Minimum | 68 | 0 | 20 | 25
Maximum | 143 | 88 | 74 | 114
ACS = Affective Control Scale; DRES = Difficulties in Regulation of Emotion Scale; FFMQ = Five-Facet Mindfulness Questionnaire; STAI = State-Trait Anxiety Inventory.

The independent t test for the ERQ subscales showed that men used suppression more than women, and this difference was statistically significant (t = -2.62, p = 0.009). Women used cognitive reappraisal more than men, but this difference was not statistically significant (t = 1.31, p = 0.759).

The KMO index and Bartlett’s test of sphericity showed the sample was adequate for factor analysis (approximate chi-square = 0.734; df = 36; sig = 0.001; the KMO value was greater than 0.5), and the significance level was less than 0.05. We used the FFMQ, ACS, DRES, and TRAIT scales to evaluate the divergent and convergent validity of the ERQ. Pearson correlation coefficients were calculated and are presented in Table 4. Cohen suggested that a correlation coefficient of 0.10 represents a weak or small association, 0.30 a moderate correlation, and 0.50 or larger a strong or large correlation.31

Table 4. Correlations for the Emotion Regulation Questionnaire subscales and other scales
Measure | FFMQ | DRES | TRAIT | ACS | Suppression | Reappraisal
FFMQ | - | | | | |
DRES | -0.68* | - | | | |
TRAIT | -0.54* | 0.56* | - | | |
ACS | -0.52* | 0.57* | 0.54* | - | |
Expressive suppression | -0.28* | 0.25* | 0.21* | 0.09 | - |
Cognitive reappraisal | 0.11† | -0.11† | -0.24* | -0.06 | -0.14* | -
ACS = Affective Control Scale; DRES = Difficulties in Regulation of Emotion Scale; FFMQ = Five-Facet Mindfulness Questionnaire; STAI = State-Trait Anxiety Inventory. * p < 0.01; † p < 0.05.

As shown in Table 4, expressive suppression correlated positively with the DRES and TRAIT scores and negatively with the FFMQ. Additionally, cognitive reappraisal showed a positive and significant correlation with the FFMQ and negative and significant correlations with the DRES and TRAIT (its correlation with the ACS was negative but not significant), indicating adequate divergent validity of this sub-scale.

As shown in Table 5, Cronbach’s alpha coefficients were 0.76 for cognitive reappraisal and 0.72 for expressive suppression (after elimination of item 9). Both values are greater than 0.7, indicating that the questionnaire is reliable. Factor analysis showed that 6 of the 10 ERQ items loaded onto the cognitive reappraisal factor (30.97% of variance) and 3 items (2, 4, and 6) loaded onto the expressive suppression factor (22.59% of variance). Item 9 was omitted because it had high factor loadings on both factors.

Table 5. Factor loadings, eigenvalues, and variances of the Emotion Regulation Questionnaire subscales
Item | Reappraisal | Suppression
1. I control my emotions by changing the way I think about the situation I’m in. | 0.65 |
2. When I want to feel less negative emotion, I change the way I’m thinking about the situation. | | 0.78
3. When I want to feel more positive emotion, I change the way I’m thinking about the situation. | 0.72 |
4. When I want to feel more positive emotion (such as joy or amusement), I change what I’m thinking about. | | 0.81
5. When I want to feel less negative emotion (such as sadness or anger), I change what I’m thinking about. | 0.69 |
6. When I’m faced with a stressful situation, I make myself think about it in a way that helps me stay calm. | | 0.78
7. I control my emotions by not expressing them. | 0.66 |
8. When I am feeling negative emotions, I make sure not to express them. | 0.60 |
10. When I am feeling positive emotions, I am careful not to express them. | 0.74 |
Cronbach’s alpha | 0.76 | 0.72
Eigenvalue | 2.78 | 2.03
Factor variance (%) | 30.97 | 22.59
Total variance (%): 53.56

The results of the present study are in line with previous studies. Men used suppression more than women did, and this difference was statistically significant; women used cognitive reappraisal more than men, but this difference was not statistically significant. Significant correlations were observed between the TRAIT, DRES, and FFMQ questionnaires and the cognitive reappraisal and expressive suppression ERQ subscales. The Cronbach’s alpha coefficients for cognitive reappraisal and expressive suppression were adequate and indicated good internal consistency, and factor analysis showed that 6 of the 10 ERQ items loaded onto the cognitive reappraisal factor and 3 items loaded onto expressive suppression.

The sex difference in suppression is consistent with the findings of Gross et al.,17 Balzarotti et al.,20 Enebrink et al.,19 and Wiltink et al.,32 whereas research conducted by Mehri and Kazarian did not find a difference between men and women in the use of suppression.33 This discrepancy could be explained by differences in research samples and in the cultures of different countries. In the studies by Gross et al., Wiltink et al., and Enebrink et al., the research samples comprised university students, whereas the research samples in the studies by Balzarotti et al.
and Eldeleklioğlu & Eroğlu included members of the general community.18,20

The present study showed that the correlations between the suppression items and the DRES and STAI were significantly positive, which demonstrates the convergent validity of the suppression sub-scale (Table 4). Moreover, the negative correlation of the suppression items with the FFMQ indicated appropriate divergent validity. A significant positive relationship between cognitive reappraisal and the FFMQ indicated adequate convergent validity (Table 4), and the negative correlations between cognitive reappraisal and the ACS, DRES, and STAI showed that the divergent validity of the cognitive reappraisal sub-scale was adequate (Table 4). These findings are in line with results published by Wiltink et al. and by Abler & Kessler: Wiltink et al. reported a significant negative relationship between cognitive reappraisal and anxiety,32 and Abler & Kessler found a significant relationship between suppression and anxiety.34 To interpret these results, it can be noted that mindfulness is described as non-judgmental, moment-to-moment attention to current experience. Mindfulness has psychological effects such as decreasing psychological symptoms and emotional dysfunction and improving behavioral regulation. In addition, mindfulness increases people’s ability to tolerate negative emotions and prepares them for well-adjusted behavior in different situations. The present study also indicated a significant correlation between expressive suppression and cognitive reappraisal.
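The validity analysis described above rests on Pearson correlations interpreted against Cohen’s 0.10/0.30/0.50 benchmarks. A minimal sketch of that workflow; the data here are simulated stand-ins (not the study’s data), and the helper name is our own:

```python
import numpy as np
from scipy.stats import pearsonr

def cohen_size(r):
    """Cohen's rough benchmarks for the magnitude of a correlation."""
    r = abs(r)
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "moderate"
    return "small"

# Simulated scores standing in for, e.g., suppression and STAI trait totals.
rng = np.random.default_rng(42)
x = rng.normal(size=348)
y = 0.3 * x + rng.normal(size=348)   # constructed to correlate positively
r, p = pearsonr(x, y)                # r > 0 with a small p-value expected here
```

With n = 348, even a modest true correlation is detected as significant, which is why several coefficients around 0.2 in Table 4 carry p < 0.01.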
These findings are not in line with the study by Ioannidis and Siegling, which demonstrated a weak correlation between cognitive reappraisal and expressive suppression and found that the two scales were independent of each other.35 To address this discrepancy, the authors consider cultural differences between societies to be an influential factor.

The results of this study showed that Cronbach’s alpha coefficients for cognitive reappraisal and expressive suppression (with the elimination of item 9) were 0.76 and 0.72, respectively. Since both values are greater than 0.7, they attest to the questionnaire’s reliability. This finding is in line with Gross & John’s study, in which Cronbach’s alphas for cognitive reappraisal and suppression were 0.79 and 0.73, respectively.17 In Eldeleklioğlu & Eroğlu’s study, Cronbach’s alphas were 0.78 for cognitive reappraisal and 0.73 for expressive suppression.18 In a study conducted by Enebrink et al., Cronbach’s alphas for cognitive reappraisal and expressive suppression were 0.81 and 0.73, respectively,19 and Balzarotti et al. reported Cronbach’s alphas of 0.84 and 0.73, respectively.20

The factor analysis in this study showed that 6 of the 10 ERQ items loaded onto cognitive reappraisal and 3 items (items 2, 4, and 6) loaded onto expressive suppression (Table 5). Item 9 was excluded because of its high factor loadings on both the cognitive reappraisal and expressive suppression factors. The six items relevant to the cognitive reappraisal factor explained 30.97 percent of the variance in emotion regulation and the three items pertaining to the suppression factor explained 22.59 percent. Together, these two factors explain 53.56 percent of the total variance in emotion regulation (Table 5).
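As a quick arithmetic check on these percentages: with 9 retained items, each factor’s share of total variance is (approximately) its eigenvalue divided by 9. A minimal sketch, assuming that standard convention; small rounding differences against the Table 5 values (30.97 and 22.59) are expected:

```python
import numpy as np

# Eigenvalues reported in Table 5 for the two retained factors,
# extracted from the 9 remaining ERQ items (item 9 dropped).
eigenvalues = np.array([2.78, 2.03])
pct = 100 * eigenvalues / 9          # percent of total variance per factor
total = pct.sum()                    # close to the reported 53.56 percent
```

This reproduces the reported figures to within rounding, which is also why 22.59 (not 22.95) is the suppression variance consistent with the table’s total.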
These results are in line with the findings of Enebrink et al., who reported a correlation between the suppression and cognitive reappraisal sub-scales.19 Enebrink et al. showed that modification indices (MI) suggested the model would achieve a better fit by keeping items 4 and 9 linked to cognitive reappraisal, even though both items are obvious examples of suppression. Furthermore, the MI suggested a path between cognitive reappraisal and item 9 of the ERQ, along with two paths from expressive suppression to items 8 and 10. Moreover, the results of the study by Wiltink et al. showed that item 9 had substantial loading onto cognitive reappraisal as well.32", "The findings of the convenience sampling method cannot be generalized to other populations. Moreover, since the participants were selected from undergraduate, postgraduate, and PhD students at Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences, the findings cannot be generalized to the general population, including children and adults.", "The results of confirmatory factor analysis showed that the ERQ had good psychometric properties and that its Cronbach’s alpha coefficients were adequate. Convergent and divergent validity were observed between the TRAIT, DRES, and FFMQ questionnaires and the ERQ sub-scales. Therefore, the Persian version of the ERQ is a reliable and valid instrument that should be useful for further important studies on emotion regulation.

Table 3. Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity
Bartlett’s test of sphericity | Approx. chi-square | 0.734
 | df | 36
 | Sig | 0.001
df = degrees of freedom." ]
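The alpha coefficients quoted throughout follow the standard Cronbach formula (ratio of summed item variances to total-score variance). A minimal, self-contained sketch, illustrative rather than the authors’ own code:

```python
import numpy as np

def cronbach_alpha(items):
    """Standard Cronbach's alpha; items is an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return k / (k - 1) * (1 - item_var / total_var)
```

Values above 0.7, as for both ERQ sub-scales here, are conventionally taken to indicate acceptable internal consistency.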
[ "intro", "methods", null, null, null, null, null, null, null, null, null, "results", "discussion", null, "conclusions" ]
[ "Factor analysis", "Gross’s emotion regulation", "reliability", "validity" ]
Introduction: Emotion is an individual’s overall, intense, and brief response to an unexpected event, accompanied by pleasant or unpleasant emotional states. Emotion has always been of interest to mental health researchers for various reasons, including its evolutionary function,1 social communication,2 decision-making,3 and the important role it plays in mental health.4 In recent decades there have been many advances in the field of emotion regulation, both in scientific theory and in empirical study; hence, we have achieved a better understanding of its developmental pathways, neurology, genetic and environmental influences, and its relation to cognition.5 One of the most important issues in mental health is emotion regulation, which relates to the process by which individuals experience and express their emotions. According to Gross, the process of emotion regulation is examined through cognitive reappraisal and expressive suppression, i.e., emotion regulation strategies that are activated at the beginning of an event or before it, and those that are activated after an event or an emotion. Gross believes that emotion regulation strategies do not represent a positive or negative character trait, but rather depend on specific situations in the person’s life.6 Health professionals believe that problems with emotion regulation play a major role in the maintenance and escalation of mental disorders and maladaptive behaviors.7 Emotion regulation strategies play an essential role in mental health and in psychiatric disorders such as depression,8 anxiety,9 borderline personality disorder,10,11 and anorexia nervosa.12 Saxena et al. found that difficulties in emotion regulation and the use of dysfunctional emotion regulation strategies have a negative impact on mental health.13-15 In general, in most psychiatric disorders there is at least one symptom that reflects impairment of emotions.16 Various instruments have been developed to measure emotions.
One of the most widely used instruments is Gross’s Emotion Regulation Questionnaire (ERQ). The ERQ is based on a theory-driven approach and an emotion regulation model, and has two sub-scales: cognitive reappraisal and expressive suppression. Cognitive reappraisal indicates that an individual makes an effort to change how he or she thinks about a situation in order to change its emotional impact and reappraise the initial perception, whereas expressive suppression is defined as a response-focused strategy.17 All items are answered on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with higher scores representing greater use of that strategy. Gross & John reported that the ERQ has a two-factor structure, meaning that “reappraisal and suppression are two independent regulatory strategies that different individuals use to varying degrees.” Cronbach’s alphas were 0.79 for cognitive reappraisal and 0.73 for expressive suppression.17 Other studies have also shown that the ERQ has good validity and reliability. For example, in a study by Eldeleklioğlu & Eroğlu, Cronbach’s alphas were 0.78 for cognitive reappraisal and 0.73 for expressive suppression.18 Furthermore, in a study conducted by Enebrink et al., Cronbach’s alpha coefficients were 0.81 for cognitive reappraisal and 0.73 for expressive suppression.19 Similarly, in Balzarotti et al., Cronbach’s alpha coefficients were 0.84 for cognitive reappraisal and 0.72 for expressive suppression.20 However, no studies have determined the validity and reliability of the Persian version of the ERQ, and no psychometric properties have been available to Iranian mental health researchers so far. The purpose of the present study was to evaluate the internal consistency and factor structure of an Iranian adaptation of the ERQ.
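Scoring the two ERQ sub-scales is a simple item average. A minimal sketch, assuming the standard Gross & John item keying for the 10-item English version (the Persian adaptation examined in this study ultimately retained 9 items, so its keying differs):

```python
import numpy as np

# Standard Gross & John keying for the 10-item ERQ (1-based item numbers);
# responses are on a 1-7 Likert scale, higher = more use of the strategy.
REAPPRAISAL_ITEMS = [1, 3, 5, 7, 8, 10]
SUPPRESSION_ITEMS = [2, 4, 6, 9]

def score_erq(responses):
    """responses: (n, 10) array of 1-7 answers.
    Returns (reappraisal, suppression) mean scores per respondent."""
    r = np.asarray(responses, dtype=float)
    reap = r[:, [i - 1 for i in REAPPRAISAL_ITEMS]].mean(axis=1)
    supp = r[:, [i - 1 for i in SUPPRESSION_ITEMS]].mean(axis=1)
    return reap, supp
```

Sub-scale means (rather than sums) keep both scores on the original 1-7 response metric, which eases comparison across the two sub-scales of unequal length.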
Methodology: Participants were selected from among the undergraduate, postgraduate, and PhD students at Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences. Determining an appropriate sample size for structural equation modeling is a seminal element of factor analysis. Kline holds that 10 to 20 cases are necessary for each variable in exploratory factor analysis, but that a sample size of at least 200 can be defended.21 The sample comprised 348 students (170 males and 178 females) chosen from the aforementioned universities using a convenience sampling method. The inclusion criterion was Persian as a native language; the exclusion criteria were having a severe psychiatric disorder and unwillingness to participate in the research. The ERQ was separately translated into Persian by two PhD students in clinical psychology, after which a professor of clinical psychology resolved discrepancies between the translations. In the next step, two English-language experts were asked to translate the items back into the original language. The back-translated texts were compared with the original text and any problems, including the structures of the translated sentences, were investigated. The scale was then administered to a sample of 20 participants, and problems such as ambiguity and incomprehensibility of a few Persian sentences were rectified. After these steps, the questionnaire was finalized for use. The study was designed according to the Declaration of Helsinki and approved by the Ethics Committee at Shahid Beheshti University of Medical Sciences (IR.SBMU.SM.REC.1394.181), and all participants gave their consent to take part in the study by signing the consent form. Instruments Five-Facet Mindfulness Questionnaire (FFMQ) The FFMQ is used to measure the subjective view of one’s mindfulness.
The FFMQ evaluates five facets of mindfulness: observing, describing, acting with awareness, non-reactivity (to inner experience), and non-judging (of inner experience). It was developed by Baer et al.22 The 39 items on the FFMQ are rated on a 5-point Likert scale ranging from 1 (never) to 5 (always true). Scores are obtained by summing the individual items; sub-scale scores range from 8 to 40, with higher scores representing greater mindfulness. The FFMQ has adequate internal consistency, and the alpha coefficients of its sub-scales have been reported as follows: 0.91 for describing, 0.83 for observing, 0.87 for acting with awareness, 0.75 for non-reactivity, and 0.87 for non-judging of inner experience.18 The reliability and validity of this test in Iranian samples were desirable (alphas ranged from 0.55 to 0.83). There were positive and significant correlations between the five personality factors and the five dimensions of mindfulness, with the exception of neuroticism.23 Furthermore, positive correlations were observed between psychological well-being and the mindfulness sub-scales, while negative correlations were observed between the mindfulness sub-scales and all the symptoms of the SCL-25.
State-Trait Anxiety Inventory (STAI): The STAI was designed to measure anxiety in both state and trait forms. In this study, only the trait anxiety part was used, which has 20 items and scores ranging from 20 to 80. The Cronbach’s alpha for trait anxiety is equal to 0.9.24 In Gholami Booreng’s study, the Cronbach’s alpha coefficient was also reported to be 0.9.25 Affective Control Scale (ACS): The ACS measures people’s control over their emotions and includes 42 questions with four sub-scales that measure fears of emotion and attempts to control emotional experiences.26 The instrument’s sub-scales cover four fears: fear of anxiety, of depression, of anger, and of positive emotion. The ACS is a self-report questionnaire rated on a 7-point Likert scale. It assesses attention control and attention shifting.
The responses for items 4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, and 38 should be reverse-scored. Its internal consistency and test-retest reliability were found to be 0.94 and 0.78, respectively. Internal consistency and test-retest reliability indices for the fear of anger, fear of depression, fear of anxiety, and fear of positive emotion sub-scales were estimated as follows: 0.72 and 0.73; 0.91 and 0.76; 0.77 and 0.89; and 0.64 and 0.84. In Iran, a study by Dehesh reported an overall Cronbach’s alpha of 0.84, and Cronbach’s alphas of 0.53, 0.60, 0.76, and 0.64 for the sub-scales fear of anger, fear of emotion, fear of depression, and fear of anxiety.27 Difficulty in Regulation of Emotion Scale (DRES): The DRES consists of 36 items.10 DRES items are rated on a 5-point Likert scale ranging from 1 (almost never) to 5 (almost always). Higher scores indicate greater difficulty with emotion regulation.
The results of exploratory factor analysis in the Iranian sample revealed eight factors, of which six coincided with those of the original version; the two other factors were excluded. Furthermore, there was an internal correlation with Beck’s depression and anxiety questionnaires.28
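The ACS reverse-scoring rule described above can be sketched as a small helper (an illustrative sketch only; the function name is ours, and on a 7-point scale a reverse-keyed response r becomes 8 − r):

```python
# ACS items that are reverse-keyed (1-based numbering, per the scale description)
REVERSED_ITEMS = {4, 9, 12, 16, 17, 18, 21, 22, 27, 30, 31, 38}

def score_acs(responses):
    """responses: dict mapping item number (1-42) to a 1-7 Likert rating."""
    total = 0
    for item, rating in responses.items():
        # invert reverse-keyed items: 1 <-> 7, 2 <-> 6, 3 <-> 5, etc.
        total += (8 - rating) if item in REVERSED_ITEMS else rating
    return total
```

For example, a rating of 7 on reverse-keyed item 4 contributes 1 to the total, while a rating of 7 on item 1 contributes 7.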
Statistical analyses: Kaiser-Meyer-Olkin (KMO): The KMO index is a measure of sampling adequacy that indicates the proportion of variance among the variables that might be caused by underlying factors.
The index ranges from 0 to 1. Values approaching 1 indicate that the data are adequate for factor analysis; otherwise (usually if the KMO is less than 0.5), the factor analysis probably falls short of validity.29 Bartlett’s test: Another method for determining the suitability of the data is Bartlett’s test of sphericity. This test examines the hypothesis that the observed correlation matrix is an identity matrix, i.e., that the variables are unrelated. For a factor model to be useful and meaningful, the variables need to be correlated. A small significance value (< 0.05) rejects this assumption and indicates that factor analysis is appropriate for the data.30 Confirmatory factor analysis: Confirmatory factor analysis (CFA) is a statistical method used to investigate the factor structure of a set of observed variables.
CFA lets the researcher test the hypothesis that a relationship exists between the observed variables and their underlying latent constructs. In confirmatory research (also known as hypothesis testing), the researcher has a specific idea about the relationships between the variables under investigation and attempts to determine whether a theory, specified as a hypothesis, is supported by the data.29
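The two data-suitability checks described above (KMO and Bartlett’s test) can be sketched in code as a minimal illustration of the standard formulas; this is not the software used in this study, and the function names are ours:

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: H0 is that the correlation matrix is an
    identity matrix (variables unrelated). Returns (chi-square statistic, df);
    the statistic is compared against a chi-square critical value."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return stat, df

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (0 to 1; values below
    0.5 suggest the data are unsuitable for factor analysis)."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # anti-image (partial) correlations derived from the inverse correlation matrix
    partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    off_diag = ~np.eye(corr.shape[0], dtype=bool)
    r2 = (corr[off_diag] ** 2).sum()
    q2 = (partial[off_diag] ** 2).sum()
    return r2 / (r2 + q2)
```

On data with a strong common factor, the KMO ratio approaches 1 because the squared bivariate correlations dominate the squared partial correlations.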
Results: Twenty-four participants were excluded due to missing information needed for the final analysis. Descriptive analysis of the data collected on the participants is shown in Table 1, and the means and standard deviations for the other questionnaires are listed in Table 2.

Table 1 Descriptive data on participants

Faculty        | n (%)      | Age, mean (SD) | Sex
Medicine       | 156 (44.8) | 19.5 (1.84)    | F: 81, M: 75
Dentistry      | 84 (24.1)  | 20.07 (2.55)   | F: 43, M: 41
Pharmacy       | 14 (4)     | 23.07 (5.66)   | F: 8, M: 6
Paramedicine   | 86 (24.7)  | 23.38 (5.25)   | F: 42, M: 44
Basic Sciences | 8 (2.3)    | 31.50 (4.98)   | F: 4, M: 4
Total          | 348        | 23.50 (4.05)   | F: 178, M: 170

F = female; M = male; SD = standard deviation.

Table 2 Mean, standard deviation, and minimum and maximum scores of questionnaires

Measures           | FFMQ | DRES | STAI | ACS
Mean               | 98.3 | 37.8 | 41.6 | 73.3
Standard deviation | 12.6 | 18.7 | 9.6  | 17
Minimum            | 68   | 0    | 20   | 25
Maximum            | 143  | 88   | 74   | 114

ACS = Affective Control Scale; DRES = Difficulties in Regulation of Emotion Scale; FFMQ = Five Facet Mindfulness Questionnaire; STAI = State-Trait Anxiety Inventory.
The independent t test for the ERQ subscales showed that men used suppression more than women, and this difference was statistically significant (t = -2.62, p = 0.009). Additionally, women used cognitive reappraisal more than men, but this difference was not statistically significant (t = 1.31, p = 0.759). The KMO index (0.734, above the 0.5 threshold) and Bartlett’s test of sphericity (df = 36, p = 0.001) showed that the sample was adequate for factor analysis. We used the FFMQ, ACS, DRES, and STAI trait scales to evaluate the divergent and convergent validity of the ERQ. Pearson correlation coefficients were calculated and the results are presented in Table 4. Cohen determined that a correlation coefficient of 0.10 represents a weak or small association, a coefficient of 0.30 a moderate correlation, and a coefficient of 0.50 or larger a strong or large correlation.31

Table 4 Correlations for the Emotion Regulation Questionnaire subscales and other scales

Measures               | FFMQ   | DRES   | STAI   | ACS   | Suppression | Reappraisal
FFMQ                   | -      |        |        |       |             |
DRES                   | -0.68* | -      |        |       |             |
TRAIT                  | -0.54* | 0.56*  | -      |       |             |
ACS                    | -0.52* | 0.57*  | 0.54*  | -     |             |
Expressive suppression | -0.28* | 0.25*  | 0.21*  | 0.09  | -           |
Cognitive reappraisal  | 0.11†  | -0.11† | -0.24* | -0.06 | -0.14*      | -

ACS = Affective Control Scale; DRES = Difficulties in Regulation of Emotion Scale; FFMQ = Five-Facet Mindfulness Questionnaire; STAI = State-Trait Anxiety Inventory. * p < 0.01; † p < 0.05.
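Cohen’s benchmarks cited above can be expressed as a small helper (an illustrative sketch; the function name is ours):

```python
def cohen_strength(r):
    """Classify the magnitude of a correlation coefficient per Cohen's benchmarks:
    |r| >= 0.50 large, >= 0.30 moderate, >= 0.10 small."""
    magnitude = abs(r)
    if magnitude >= 0.50:
        return "large"
    if magnitude >= 0.30:
        return "moderate"
    if magnitude >= 0.10:
        return "small"
    return "negligible"
```

For example, the FFMQ-DRES correlation of -0.68 in Table 4 counts as large, while the suppression-DRES correlation of 0.25 is small.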
As shown in Table 4, the correlation between expressive suppression and the FFMQ is negative, whereas its correlations with the DRES and TRAIT are positive. Additionally, there was a positive and significant relationship between cognitive reappraisal and the FFMQ, and negative and significant correlations with the DRES and TRAIT, indicating adequate divergent validity of this sub-scale. As shown in Table 5, Cronbach’s alpha coefficients were 0.76 for cognitive reappraisal and 0.72 for expressive suppression (after elimination of item 9). Both values were greater than 0.7, indicating that the questionnaire is reliable. Factor analysis results showed that 6 of the 10 ERQ questions loaded onto the cognitive reappraisal factor (30.97% of variance) and 3 questions (2, 4, and 6) loaded onto expressive suppression (22.59% of variance). Question 9 was omitted because it had high factor loadings on both the cognitive reappraisal and expressive suppression factors.
Table 5. Factor loadings, eigenvalues, and variances of the Emotion Regulation Questionnaire subscales
Items (questions) | Reappraisal | Suppression
1. I control my emotions by changing the way I think about the situation I’m in. | 0.65 |
2. When I want to feel less negative emotion, I change the way I’m thinking about the situation. | | 0.78
3. When I want to feel more positive emotion, I change the way I’m thinking about the situation. | 0.72 |
4. When I want to feel more positive emotion (such as joy or amusement), I change what I’m thinking about. | | 0.81
5. When I want to feel less negative emotion (such as sadness or anger), I change what I’m thinking about. | 0.69 |
6. When I’m faced with a stressful situation, I make myself think about it in a way that helps me stay calm. | | 0.78
7. I control my emotions by not expressing them. | 0.66 |
8. When I am feeling negative emotions, I make sure not to express them. | 0.60 |
10. When I am feeling positive emotions, I am careful not to express them. | 0.74 |
Cronbach’s alpha coefficients | 0.76 | 0.72
Eigenvalues | 2.78 | 2.03
Factor variances (%) | 30.97 | 22.59
Total variance (%) | 53.56
Discussion: The results of the present study are in line with previous studies. The present study showed that men used suppression more than women did. Significant correlations were observed between the TRAIT, DRES, and FFMQ questionnaires and the cognitive reappraisal and expressive suppression ERQ subscales. The Cronbach’s alpha coefficients for cognitive reappraisal and expressive suppression were adequate and indicated good internal consistency. Factor analysis showed that 6 of the 10 ERQ items loaded onto the cognitive reappraisal factor and 3 items loaded onto expressive suppression. The results of the present study showed that men used suppression more than women, and this difference was statistically significant. Furthermore, women used cognitive reappraisal more than men, but this difference was not statistically significant. This finding is consistent with the findings of Gross et al.,17 Balzarotti et al.,20 Enebrink et al.,19 and Wiltink et al.,32 whereas research conducted by Mehri and Kazarian did not find a difference between men and women in use of suppression.33 This discrepancy may be explained by differences in variables such as the research samples and the cultures of different countries. In the studies by Gross et al., Wiltink et al., and Enebrink et al., the research samples comprised university students, whereas the samples in the studies by Balzarotti et al. and Eldeleklioğlu & Eroğlu included members of the general community.18,20 The results of the present study indicated that the correlations between the suppression items and the DRES and STAI were significantly positive, which demonstrates the convergent validity of the suppression sub-scale (Table 4). Moreover, the negative correlation of the suppression items with the FFMQ indicated appropriate divergent validity.
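The Cronbach's alpha coefficients reported above follow the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ are the item variances, and σ²ₜ is the variance of the total score. A minimal sketch with hypothetical Likert-scale responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.
    Uses sample variances (ddof=1)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert responses (5 respondents x 3 items):
scores = np.array([
    [7, 6, 7],
    [5, 5, 6],
    [3, 4, 3],
    [6, 5, 6],
    [2, 3, 2],
])
print(round(cronbach_alpha(scores), 2))  # high alpha: items move together
```

Because the three hypothetical items co-vary strongly, alpha here is well above the 0.7 threshold the study uses as its reliability criterion.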
A significant positive relationship between cognitive reappraisal and the FFMQ indicated adequate convergent validity (Table 4). The significant negative correlations between cognitive reappraisal and the DRES and STAI indicated that the divergent validity of the cognitive reappraisal sub-scale was adequate (Table 4). The findings of the present research are in line with results published by Wiltink et al. and by Abler & Kessler. Wiltink et al. reported a significant negative relationship between cognitive reappraisal and anxiety,32 and Abler & Kessler found a significant relationship between suppression and anxiety.34 These findings may be explained by the nature of mindfulness, which is described as non-judgmental, moment-to-moment attention to current experience. Mindfulness has psychological effects such as decreases in psychological symptoms and emotional dysfunction and improvement of behavioral regulation. In addition, mindfulness increases people’s ability to tolerate negative emotions and prepares them for well-adjusted behaviors in different situations. The present study indicated a significant negative correlation between expressive suppression and cognitive reappraisal. These findings are not in line with the study by Ioannidis and Siegling, which demonstrated a weak correlation between cognitive reappraisal and expressive suppression and found that these two scales were independent of each other.35 To address this discrepancy, the authors consider that cultural differences between societies are influential factors. The results of this study showed that Cronbach’s alpha coefficients for cognitive reappraisal and expressive suppression (with the elimination of item 9) were 0.76 and 0.72, respectively. Since both values are greater than 0.7, they attest to the questionnaire’s reliability.
This finding is in line with Gross & John’s study, in which Cronbach’s alphas for cognitive reappraisal and suppression were 0.79 and 0.73, respectively. In Eldeleklioğlu & Eroğlu’s study, Cronbach’s alphas were 0.78 for cognitive reappraisal and 0.73 for expressive suppression.18 In a study conducted by Enebrink et al., Cronbach’s alphas for cognitive reappraisal and expressive suppression were 0.81 and 0.73, respectively,19 and Balzarotti et al. reported Cronbach’s alphas for cognitive reappraisal and expressive suppression of 0.84 and 0.73, respectively.20 The factor analysis in this study showed that 6 out of 10 ERQ items loaded onto cognitive reappraisal and 3 items (items 2, 4, and 6) loaded onto expressive suppression (Table 5). Item 9 was excluded because of its high factor loadings onto both the cognitive reappraisal and expressive suppression factors. The six items relevant to the cognitive reappraisal factor explained 30.97 percent of emotion regulation variance, and the 3 items pertaining to the suppression factor explained 22.59 percent. Together, these two factors explain 53.56 percent of total variance in emotion regulation (Table 5). These results are in line with the findings of Enebrink et al., who reported that there was a correlation between the suppression and cognitive reappraisal sub-scales.19 Enebrink et al. showed that modification indices (MI) suggested that the model would achieve a better fit by maintaining items 4 and 9 as linked to cognitive reappraisal, even though both of these items are obvious examples of suppression. Furthermore, MI suggested a path between cognitive reappraisal and item 9 on the ERQ along with two paths from expressive suppression to items 8 and 10. Moreover, the results of the study by Wiltink et al. showed that item 9 had substantial loading onto cognitive reappraisal as well.32 Limitations: Because a convenience sampling method was used, the findings cannot be generalized to other populations.
Moreover, since the participants were selected from undergraduate, postgraduate, and PhD students at Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences, the findings cannot be generalized to the general population, including children and adults. Conclusion: The results of confirmatory factor analysis showed that the ERQ had good psychometric properties and that its Cronbach’s alpha coefficient was adequate. Convergent and divergent validity were observed between the TRAIT, DRES, FFMQ, and ACS questionnaires and the ERQ sub-scales. Therefore, the Persian version of the ERQ is a reliable and valid instrument with good internal consistency that should be useful for further studies on emotion regulation.
Table 3. Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity
Kaiser-Meyer-Olkin measure of sampling adequacy: 0.734
Bartlett’s test of sphericity: df = 36; Sig = 0.001
df = degrees of freedom.
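The factorability checks summarized in Table 3 can in principle be reproduced from raw item scores. The sketch below implements Bartlett's test of sphericity on simulated data, since the study's raw data are not available; libraries such as factor_analyzer also provide ready-made KMO and Bartlett utilities. The simulated items and seed are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity: tests whether the correlation
    matrix differs from the identity (i.e., whether the data are
    factorable). Returns (chi2, df, p_value)."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi2, df)
    return chi2, df, p_value

# Simulated scores: 4 items sharing one latent factor, 200 respondents.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
data = latent + 0.5 * rng.normal(size=(200, 4))
chi2, df, p = bartlett_sphericity(data)
print(f"chi2={chi2:.1f}, df={df:.0f}, p={p:.4g}")
```

With strongly correlated items the test is highly significant, mirroring the study's conclusion (p = 0.001) that the data were suitable for factor analysis.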
Background: Gross's Emotion Regulation Questionnaire is one of the most widely-used and valid questionnaires for assessing emotion regulation strategies. The validity and reliability of the Persian version have not been determined and data on its psychometric properties are not available to Iranian mental health researchers. The purpose of this study was to determine the psychometric properties of the Emotion Regulation Questionnaire in Iranian students. Methods: In this cross-sectional study, 348 students (170 males and 178 females) were selected from Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences. The following statistical procedures were conducted: correlation coefficients, factor analysis, Cronbach's alpha, and independent t tests. Results: The results showed that men use suppression more than women (T = -2.62, p = 0.009). Cronbach's alpha coefficients were 0.76 for the cognitive reappraisal sub-scale and 0.72 for the suppression sub-scale (excluding question 9). Six questions related to the cognitive reappraisal factor explained 30.97% of emotion regulation variance, and 3 questions related to the suppression factor explained 22.59% of emotion regulation variance. Overall, these factors explained 53.5% of emotion regulation variance. There were significant correlations between suppression and difficulties in emotion regulation, trait anxiety, and affective control. Furthermore, there was a significant correlation between cognitive reappraisal and the Five-Facet Mindfulness Questionnaire. Conclusions: The results indicate that the Persian version of the ERQ is a reliable and valid instrument that can be helpful for development of further important studies of emotional regulation.
Introduction: Emotion is an individual’s overall, intense, and brief response to an unexpected event, accompanied by pleasant or unpleasant emotional states. Emotion has always been of interest to mental health researchers, for various reasons, including evolutionary function,1 social communication,2 decision-making,3 and the important role it plays in mental health.4 In recent decades, there have been many advances in the field of emotion regulation, including scientific theories and studies. Hence, we have achieved a better understanding of its developmental pathway, neurobiology, genetic and environmental influences, and its relation to cognition.5 One of the most important issues in mental health is emotion regulation. Emotion regulation refers to the process by which individuals experience and express their emotions. According to Gross, the process of emotion regulation is further examined through cognitive reappraisal and expressive suppression, i.e. emotion regulation strategies that are activated at the beginning of an event or before it, and those that are activated after an event or an emotion. Gross believes that emotion regulation strategies do not represent the person’s positive or negative character, but rather are based on specific situations in the person’s life.6 Health professionals believe that problems with emotion regulation play a major role in the maintenance and escalation of mental disorders and maladaptive behaviors.7 Emotion regulation strategies play an essential role in mental health and psychiatric disorders such as depression,8 anxiety,9 borderline personality disorder10,11 and anorexia nervosa.12 Saxena et al. found that difficulties in emotion regulation and use of dysfunctional emotion regulation strategies are factors that have a negative impact on mental health.13-15 In general, in most psychiatric disorders, there is at least one symptom that reflects impairment of emotions.16 Various instruments have been developed to measure emotions.
One of the most widely used instruments is Gross’s Emotion Regulation Questionnaire (ERQ). The ERQ is based on a theory-based approach and an emotion regulation model and has two sub-scales, cognitive reappraisal and expressive suppression. Cognitive reappraisal indicates that an individual makes an effort to change how he or she thinks about a situation in order to change its emotional impact and reappraise the initial perception, whereas expressive suppression is defined as a response-focused strategy.17 All items are answered on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree), with higher scores representing higher usage of that strategy. Gross & John reported that the ERQ has a two-factor structure, which means “reappraisal and suppression are two independent regulatory strategies that different individuals use to varying degrees.” Cronbach’s alphas were 0.79 for cognitive reappraisal and 0.73 for expressive suppression.17 Other studies have also shown that the ERQ has good validity and reliability. For example, in a study by Eldeleklioğlu & Eroğlu, Cronbach’s alphas were 0.78 for cognitive reappraisal and 0.73 for expressive suppression.18 Furthermore, in a study conducted by Enebrink et al., Cronbach’s alpha coefficients were 0.81 for cognitive reappraisal and 0.73 for expressive suppression.19 Similarly, in Balzarotti et al., Cronbach’s alpha coefficients were 0.84 for cognitive reappraisal and 0.72 for expressive suppression.20 However, no studies have determined the validity and reliability of the Persian version of the ERQ, and no data on its psychometric properties have been available to Iranian mental health researchers so far. The purpose of the present study was to evaluate the internal consistency and factor structure of an Iranian adaptation of the ERQ. Conclusion: The results of confirmatory factor analysis showed that the ERQ had good psychometric properties and that its Cronbach’s alpha coefficient was adequate.
Convergent and divergent validity were observed between the TRAIT, DRES, FFMQ, and ACS questionnaires and the ERQ sub-scales. Therefore, the Persian version of the ERQ is a reliable and valid instrument with good internal consistency that should be useful for further studies on emotion regulation.
Table 3. Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity
Kaiser-Meyer-Olkin measure of sampling adequacy: 0.734
Bartlett’s test of sphericity: df = 36; Sig = 0.001
df = degrees of freedom.
Background: Gross's Emotion Regulation Questionnaire is one of the most widely-used and valid questionnaires for assessing emotion regulation strategies. The validity and reliability of the Persian version have not been determined and data on its psychometric properties are not available to Iranian mental health researchers. The purpose of this study was to determine the psychometric properties of the Emotion Regulation Questionnaire in Iranian students. Methods: In this cross-sectional study, 348 students (170 males and 178 females) were selected from Shahid Beheshti University of Medical Sciences and Tehran University of Medical Sciences. The following statistical procedures were conducted: correlation coefficients, factor analysis, Cronbach's alpha, and independent t tests. Results: The results showed that men use suppression more than women (T = -2.62, p = 0.009). Cronbach's alpha coefficients were 0.76 for the cognitive reappraisal sub-scale and 0.72 for the suppression sub-scale (excluding question 9). Six questions related to the cognitive reappraisal factor explained 30.97% of emotion regulation variance, and 3 questions related to the suppression factor explained 22.59% of emotion regulation variance. Overall, these factors explained 53.5% of emotion regulation variance. There were significant correlations between suppression and difficulties in emotion regulation, trait anxiety, and affective control. Furthermore, there was a significant correlation between cognitive reappraisal and the Five-Facet Mindfulness Questionnaire. Conclusions: The results indicate that the Persian version of the ERQ is a reliable and valid instrument that can be helpful for development of further important studies of emotional regulation.
8,571
296
[ 1286, 242, 67, 229, 86, 533, 69, 98, 90, 59 ]
15
[ "emotion", "fear", "anxiety", "ffmq", "items", "sub", "scales", "mindfulness", "sub scales", "factor" ]
[ "emotional regulation", "health emotion regulation", "problems emotion regulation", "emotion regulation results", "emotion regulation emotion" ]
[CONTENT] Factor analysis | Gross’s emotion regulation | reliability | validity [SUMMARY]
[CONTENT] Factor analysis | Gross’s emotion regulation | reliability | validity [SUMMARY]
[CONTENT] Factor analysis | Gross’s emotion regulation | reliability | validity [SUMMARY]
[CONTENT] Factor analysis | Gross’s emotion regulation | reliability | validity [SUMMARY]
[CONTENT] Factor analysis | Gross’s emotion regulation | reliability | validity [SUMMARY]
[CONTENT] Factor analysis | Gross’s emotion regulation | reliability | validity [SUMMARY]
[CONTENT] Cross-Sectional Studies | Emotional Regulation | Female | Humans | Iran | Male | Psychometrics | Reproducibility of Results | Surveys and Questionnaires [SUMMARY]
[CONTENT] Cross-Sectional Studies | Emotional Regulation | Female | Humans | Iran | Male | Psychometrics | Reproducibility of Results | Surveys and Questionnaires [SUMMARY]
[CONTENT] Cross-Sectional Studies | Emotional Regulation | Female | Humans | Iran | Male | Psychometrics | Reproducibility of Results | Surveys and Questionnaires [SUMMARY]
[CONTENT] Cross-Sectional Studies | Emotional Regulation | Female | Humans | Iran | Male | Psychometrics | Reproducibility of Results | Surveys and Questionnaires [SUMMARY]
[CONTENT] Cross-Sectional Studies | Emotional Regulation | Female | Humans | Iran | Male | Psychometrics | Reproducibility of Results | Surveys and Questionnaires [SUMMARY]
[CONTENT] Cross-Sectional Studies | Emotional Regulation | Female | Humans | Iran | Male | Psychometrics | Reproducibility of Results | Surveys and Questionnaires [SUMMARY]
[CONTENT] emotional regulation | health emotion regulation | problems emotion regulation | emotion regulation results | emotion regulation emotion [SUMMARY]
[CONTENT] emotional regulation | health emotion regulation | problems emotion regulation | emotion regulation results | emotion regulation emotion [SUMMARY]
[CONTENT] emotional regulation | health emotion regulation | problems emotion regulation | emotion regulation results | emotion regulation emotion [SUMMARY]
[CONTENT] emotional regulation | health emotion regulation | problems emotion regulation | emotion regulation results | emotion regulation emotion [SUMMARY]
[CONTENT] emotional regulation | health emotion regulation | problems emotion regulation | emotion regulation results | emotion regulation emotion [SUMMARY]
[CONTENT] emotional regulation | health emotion regulation | problems emotion regulation | emotion regulation results | emotion regulation emotion [SUMMARY]
[CONTENT] emotion | fear | anxiety | ffmq | items | sub | scales | mindfulness | sub scales | factor [SUMMARY]
[CONTENT] emotion | fear | anxiety | ffmq | items | sub | scales | mindfulness | sub scales | factor [SUMMARY]
[CONTENT] emotion | fear | anxiety | ffmq | items | sub | scales | mindfulness | sub scales | factor [SUMMARY]
[CONTENT] emotion | fear | anxiety | ffmq | items | sub | scales | mindfulness | sub scales | factor [SUMMARY]
[CONTENT] emotion | fear | anxiety | ffmq | items | sub | scales | mindfulness | sub scales | factor [SUMMARY]
[CONTENT] emotion | fear | anxiety | ffmq | items | sub | scales | mindfulness | sub scales | factor [SUMMARY]
[CONTENT] emotion | emotion regulation | health | mental | regulation | reappraisal | suppression | mental health | expressive suppression | cognitive reappraisal [SUMMARY]
[CONTENT] fear | mindfulness | ffmq | sub scales | anxiety | sub | scales | scores | emotion | items [SUMMARY]
[CONTENT] table | suppression | reappraisal | dres | ffmq | acs | emotion | standard | trait | cognitive reappraisal [SUMMARY]
[CONTENT] df | measure sampling adequacy bartlett | sampling adequacy | olkin measure sampling | adequacy bartlett test sphericity | adequacy bartlett test | adequacy bartlett | adequacy | meyer olkin measure sampling | meyer olkin measure [SUMMARY]
[CONTENT] fear | emotion | anxiety | factor | ffmq | mindfulness | suppression | items | sub scales | reappraisal [SUMMARY]
[CONTENT] fear | emotion | anxiety | factor | ffmq | mindfulness | suppression | items | sub scales | reappraisal [SUMMARY]
[CONTENT] Gross's Emotion Regulation Questionnaire | one ||| Persian | Iranian ||| the Emotion Regulation Questionnaire | Iranian [SUMMARY]
[CONTENT] 348 | 170 | 178 | Shahid Beheshti University of Medical Science | Tehran University of Medical Science ||| Cronbach [SUMMARY]
[CONTENT] 0.009 ||| 0.76 | 0.72 | 9 ||| Six | 30.97% | 3 | 22.59% ||| 53.5% ||| ||| Five [SUMMARY]
[CONTENT] Persian | ERQ [SUMMARY]
[CONTENT] one ||| Persian | Iranian ||| the Emotion Regulation Questionnaire | Iranian ||| 348 | 170 | 178 | Shahid Beheshti University of Medical Science | Tehran University of Medical Science ||| Cronbach ||| ||| 0.009 ||| 0.76 | 0.72 | 9 ||| Six | 30.97% | 3 | 22.59% ||| 53.5% ||| ||| Five ||| Persian | ERQ [SUMMARY]
[CONTENT] one ||| Persian | Iranian ||| the Emotion Regulation Questionnaire | Iranian ||| 348 | 170 | 178 | Shahid Beheshti University of Medical Science | Tehran University of Medical Science ||| Cronbach ||| ||| 0.009 ||| 0.76 | 0.72 | 9 ||| Six | 30.97% | 3 | 22.59% ||| 53.5% ||| ||| Five ||| Persian | ERQ [SUMMARY]
Circulating tumor DNA dynamics analysis in a xenograft mouse model with esophageal squamous cell carcinoma.
34887633
It remains unclear which factors, such as tumor volume and tumor invasion, influence circulating tumor DNA (ctDNA), and the origin of ctDNA in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions.
BACKGROUND
Tumor xenotransplants were established by inoculating BALB/c-nu/nu mice with the TE11 cell line. Groups of mice were injected with xenografts at two or four sites and sacrificed at the appropriate time point after xenotransplantation for ctDNA analysis. Analysis of ctDNA was performed by droplet digital PCR, using the human telomerase reverse transcriptase (hTERT) gene.
METHODS
Mice given two-site xenografts were sacrificed for ctDNA at week 4 and week 8. No hTERT was detected at week 4, but it was detected at week 8. However, in four-site xenograft mice, hTERT was detected both at week 4 and week 6. These experiments revealed that both tumor invasion and tumor volume were associated with the detection of ctDNA. In resection experiments, hTERT was detected at resection, but had decreased by 6 h, and was no longer detected 1 and 3 d after resection.
RESULTS
We clarified the origin and dynamics of ctDNA, showing that tumor volume is an important factor. We also found that when the tumor was completely resected, ctDNA was absent after one or more days.
CONCLUSION
[ "Animals", "Biomarkers, Tumor", "Circulating Tumor DNA", "Esophageal Neoplasms", "Esophageal Squamous Cell Carcinoma", "Heterografts", "Mice", "Transplantation, Heterologous" ]
8613646
INTRODUCTION
Liquid biopsy, a molecular biological diagnostic method for blood and body fluids, has progressed dramatically in recent years. Circulating tumor DNA (ctDNA), one of the targets of liquid biopsy, is expected to be a useful method for screening and detection of cancer, monitoring therapy, prediction of prognosis, and personalized medicine[1-3]. Therefore, in addition to direct biopsy, which is the basis of conventional cancer diagnosis, a hybrid method, which includes non-invasive liquid biopsy, is becoming the mainstream. Cell-free DNA (cfDNA), which includes ctDNA, is derived from apoptotic or necrotic cells[4,5]. Theoretically, it could be applied regardless of the stage. However, reports of its usefulness for early stages of cancer are controversial. Bettegowda et al[6] revealed that the rate of ctDNA detection is generally high in advanced stages of cancer, but ctDNA levels are generally lower in early stages of cancer. On the other hand, some reports indicated that ctDNA was useful for detecting early-stage cancers[6-9]. It remains unclear which factors, such as tumor volume and tumor invasion, influence ctDNA, and the origin of ctDNA in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions. In this study, we used a xenograft mouse model to assess the origin of ctDNA, clarify the dynamics of ctDNA levels, assess ctDNA levels after treatment, and to determine whether tumor volume and invasion are related to ctDNA levels.
MATERIALS AND METHODS
Cell Line
The human esophageal squamous cell carcinoma cell line TE11 was used because we established an experimental system for TE11 previously[10] and used it to show that liquid biopsy is useful in esophageal cancer cells as well as other gastrointestinal cancers. Cells were grown in RPMI 1640 (Thermo Fisher Scientific, Tokyo, Japan) containing 10% fetal bovine serum and 1% penicillin/streptomycin (Sigma-Aldrich, Tokyo, Japan) at 37.0 °C in a 5% CO2 atmosphere. Appropriate passages were made such that confluency did not exceed 70% prior to xenotransplantation. A Countess Automated Cell Counter (Thermo Fisher Scientific, Tokyo, Japan) was used to count cells, and 0.2% Trypan blue dye was used to exclude dead cells.
Xenograft mouse model
Xenograft mouse experimental protocols were approved by the Ethical Committee of Okayama University (OKU-2019276). Six-week-old female nude mice (BALB/c-nu/nu) (Charles River Laboratories, Japan) were used. Mice were raised in the animal facility of Okayama University and given food and water. The physical conditions of the mice, including the presence or absence of body movement or the availability of food and drink, were monitored daily. Mice were euthanized with isoflurane if they stopped moving or eating.
Tumor xenotransplants were established in mice by inoculation in the shoulders or flanks with 1 × 106 TE11 cells suspended in 50 μL medium plus 50 μL Matrigel (Corning Product No. 356234). Inoculation was performed at two sites (i.e., both shoulders; two-site xenograft mouse group, 28 mice) or at four sites (i.e., both shoulders and both flanks; four-site xenograft mouse group, 28 mice) in order to determine the effect of tumor volume as well as the degree of invasion (Figure 1). Figure 1. Xenograft mouse model with TE11 cells. A: In the xenograft experiment, groups of 12 mice each were given two-site xenografts or four-site xenografts; B: In the resection experiment, groups of 16 mice each were given two- or four-site xenografts. All tumors were resected at week 7 after xenotransplantation in two-site xenograft mice, or at week 5 in four-site xenograft mice. Tumor formation was confirmed in all xenograft mice, although the changes in size varied. Differences in tumor volume were evaluated over time. Two-site and four-site xenograft mouse groups were sacrificed for ctDNA analysis at the appropriate time points after xenotransplantation. To minimize the effects of differences in tumor size, four mice were used for each ctDNA time point analysis. A sample size calculation using power analysis determined that 24 mice were needed in the xenograft experiments and 32 mice in the resection experiments.
Xenograft experiments
Twelve mice received two-site xenografts, and 12 received four-site xenografts. Tumor size was measured every week after xenotransplantation, and ctDNA was evaluated at two time points: 4 wk and 8 wk after xenotransplantation (Figure 1).
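The sample size calculation mentioned above rests on a power analysis for a two-group comparison. A hedged sketch using statsmodels is shown below; the effect size, alpha, and power are illustrative assumptions, since the paper does not report the inputs it used.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative inputs only: the paper does not report its assumed
# effect size (Cohen's d), alpha, or power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.2, alpha=0.05,
                                   power=0.8, alternative='two-sided')
print(f"mice per group: {n_per_group:.1f}")
```

With a large assumed effect size (d = 1.2, plausible for animal experiments), roughly a dozen animals per group suffice, which is on the order of the 12- and 16-mouse groups used here.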
Resection experiments
Sixteen mice received two-site xenografts, and 16 mice received four-site xenografts. All tumors were resected at week 7 after xenotransplantation in the two-site xenograft group or at week 5 in the four-site xenograft group. cfDNA and ctDNA were evaluated 6 h, 1 d, and 3 d after resection, or simultaneously with resection in the controls (Figure 1).
Blood and tumor tissue sample collection
For ctDNA analysis, whole blood was collected in BD Vacutainer tubes (Becton, Dickinson and Company, Franklin Lakes, NJ) and processed within 1 h after collection. The samples were centrifuged at 3000 × g at 4 °C to separate plasma from peripheral blood cells, and stored at -80 °C. DNA was extracted from 1000 μL of blood, and the final solution was 25 μL of DNA. Plasma ctDNA was extracted (25 μL) with the QIAamp Circulating Nucleic Acid Kit (Qiagen, Valencia, CA), according to the manufacturer’s instructions. At sacrifice, tumors were collected and divided into two fragments. One tumor fragment was snap-frozen in liquid nitrogen and used for preparation of genomic DNA. The other fragment was formalin-fixed and paraffin-embedded for histopathological diagnosis, morphological evaluation after hematoxylin/eosin staining, and immunohistochemistry. Four slides were made from the largest-diameter section, where it was easy to obtain information on invasion.
Telomerase reverse transcriptase assay
The wild-type telomerase reverse transcriptase (TERT) gene was analyzed by a mouse TERT (mTERT) assay (Thermo Fisher Scientific, Tokyo, Japan) or human TERT (hTERT) assay (Bio-Rad Laboratories, Hercules, CA, United States of America) to take advantage of the differences between the mTERT and hTERT genes. Verification experiments were performed using a droplet digital PCR system (QX200; Bio-Rad Laboratories, Hercules, CA, United States of America).
Droplet digital polymerase chain reaction and data analysis
To evaluate ctDNA, hTERT was detected via droplet digital polymerase chain reaction (PCR) according to the following protocol.
DNA eluent (5 μL) from plasma was combined with Droplet PCR Supermix (10 μL; Bio-Rad Laboratories, Hercules, CA, United States of America), primer/probe mixture (1 μL), 5M Betaine (2 μL), 80 mmol/L EDTA (0.25 μL), CviQl enzyme (0.25 μL), and sterile DNase- and RNase-free water (3.5 μL). The mixture (22 μL) was added to Droplet Generation Oil (70 μL; Bio-Rad Laboratories, Hercules, CA, United States of America) to produce droplets. Thermal cycling of the emulsion was as follows: an initial denaturation at 95 °C for 10 min, followed by 50 cycles of 96 °C for 30 s and 62 °C for 1 min. After a final enzyme deactivation step of 98 °C for 10 min, the reaction mixtures were analyzed using a droplet reader (Bio-Rad Laboratories, Hercules, CA, United States of America). For quantification, the fluorescence signal was acquired with QuantaSoft software (Bio-Rad Laboratories, Hercules, CA, United States of America). We set the threshold fluorescence intensity at 7500 (mTERT) or 2000 (hTERT), according to positive and negative controls in this study, i.e., plasma and tissue of healthy human, control mouse, or TE11 cell line. To evaluate ctDNA, hTERT was detected via droplet digital polymerase chain reaction (PCR) according to the following protocol. DNA eluent (5 μL) from plasma was combined with Droplet PCR Supermix (10 μL; Bio-Rad Laboratories, Hercules, CA, United States of America), primer/probe mixture (1 μL), 5M Betaine (2 μL), 80 mmol/L EDTA (0.25 μL), CviQl enzyme (0.25 μL), and sterile DNase- and RNase-free water (3.5 μL). The mixture (22 μL) was added to Droplet Generation Oil (70 μL; Bio-Rad Laboratories, Hercules, CA, United States of America) to produce droplets. Thermal cycling of the emulsion was as follows: an initial denaturation at 95 °C for 10 min, followed by 50 cycles of 96 °C for 30 s and 62 °C for 1 min. 
After a final enzyme deactivation step of 98 °C for 10 min, the reaction mixtures were analyzed using a droplet reader (Bio-Rad Laboratories, Hercules, CA, United States of America). For quantification, the fluorescence signal was acquired with QuantaSoft software (Bio-Rad Laboratories, Hercules, CA, United States of America). We set the threshold fluorescence intensity at 7500 (mTERT) or 2000 (hTERT), according to positive and negative controls in this study, i.e., plasma and tissue of healthy human, control mouse, or TE11 cell line. Statistical analysis We used JMP version 14.0 (SAS Institute, Cary, NC, United States of America) for statistical analysis and set the threshold of significance at P < 0.05. Continuous data were analyzed using the non-parametric Wilcoxon test, and categorical data were analyzed using a Chi-squared test. We used JMP version 14.0 (SAS Institute, Cary, NC, United States of America) for statistical analysis and set the threshold of significance at P < 0.05. Continuous data were analyzed using the non-parametric Wilcoxon test, and categorical data were analyzed using a Chi-squared test.
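Droplet counts from a protocol like this are typically converted to absolute target concentrations by Poisson correction, since a positive droplet may contain more than one template copy. The following is a minimal sketch of that standard ddPCR calculation, as general background rather than code from this study; the 0.85 nL droplet volume is the commonly cited nominal value for the QX200 and is an assumption here, and the droplet counts are illustrative.

```python
import math

def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
    """Estimate target copies per uL of reaction from droplet counts.

    Standard ddPCR Poisson correction (general background, not taken
    from this study): lambda = -ln(fraction of negative droplets),
    copies/uL = lambda / droplet volume expressed in uL.
    """
    if n_positive >= n_total:
        raise ValueError("at least one negative droplet is required")
    frac_negative = (n_total - n_positive) / n_total
    lam = -math.log(frac_negative)           # mean template copies per droplet
    return lam / (droplet_volume_nl / 1000)  # copies per uL of reaction

# e.g., 155 positive droplets out of 15000 accepted droplets (illustrative)
print(round(ddpcr_copies_per_ul(155, 15000), 1))  # → 12.2
```

At low positive fractions the correction barely differs from naive counting, but it matters as droplets approach saturation.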
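The categorical comparisons in this study (e.g., ctDNA detection rates between groups) were run in JMP. For illustration only, a 2 × 2 chi-squared test of detection rates can be computed by hand; the counts below (4/4 vs 0/4 detections) are hypothetical example data, not the study's raw numbers.

```python
import math

def chi2_2x2(a, b, c, d):
    """Chi-squared test for a 2x2 contingency table (no continuity
    correction). For 1 degree of freedom, P = erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: ctDNA detected in 4/4 mice in one group vs 0/4 in
# the other (rows: group, columns: detected / not detected)
chi2, p = chi2_2x2(4, 0, 0, 4)
print(round(chi2, 2), round(p, 4))  # → 8.0 0.0047
```

With such extreme counts the difference is significant at the study's P < 0.05 threshold; a continuity correction or Fisher's exact test would be more conservative for samples this small.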
CONCLUSION
We thank all staff in the animal facility of Okayama University, Shinya Ohashi (MD, PhD; Department of Therapeutic Oncology, Kyoto University) and Hiroshi Nakagawa (MD, PhD; Department of Medicine, Columbia University).
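The Results fit post-resection positive-droplet counts to an exponential decay, y = 155·e^(-0.368x) with x in hours, and report a ctDNA half-life of 1.8-3.2 h. The lower end of that range follows directly from the decay constant; a quick check (the code itself is illustrative, only the 0.368/h constant comes from the study's fit):

```python
import math

# Decay constant k from the study's exponential fit of positive droplet
# counts vs hours after resection: y = 155 * exp(-0.368 * x)
k = 0.368                    # per hour (from the fitted curve)
half_life = math.log(2) / k  # t_1/2 = ln(2) / k
print(round(half_life, 2))   # → 1.88 (hours), within the reported 1.8-3.2 h
```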
[ "INTRODUCTION", "Cell Line", "Xenograft mouse model", "Xenograft experiments", "Resection experiments", "Blood and tumor tissue sample collection", "Telomerase reverse transcriptase assay", "Droplet digital polymerase chain reaction and data analysis", "Statistical analysis", "RESULTS", "Verification experiments", "Xenograft experiments ", "Resection experiments", "DISCUSSION", "CONCLUSION" ]
[ "Liquid biopsy, a molecular biological diagnostic method for blood and body fluids, has progressed dramatically in recent years. Circulating tumor DNA (ctDNA), one of the targets of liquid biopsy, is expected to be a useful method for screening and detection of cancer, monitoring therapy, prediction of prognosis, and personalized medicine[1-3]. Therefore, in addition to direct biopsy, which is the basis of conventional cancer diagnosis, a hybrid method, which includes non-invasive liquid biopsy, is becoming the mainstream.\nCell-free DNA (cfDNA), which includes ctDNA, is derived from apoptotic or necrotic cells[4,5]. Theoretically, it could be applied regardless of the stage. However, reports of its usefulness for early stages of cancer are controversial. Bettegowda et al[6] revealed that the rate of ctDNA detection is generally high in advanced stages of cancer, but ctDNA levels are generally lower in early stages of cancer. On the other hand, some reports indicated that ctDNA was useful for detecting early-stage cancers[6-9]. It remains unclear which factors, such as tumor volume and tumor invasion, influence ctDNA, and the origin of ctDNA in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions.\nIn this study, we used a xenograft mouse model to assess the origin of ctDNA, clarify the dynamics of ctDNA levels, assess ctDNA levels after treatment, and to determine whether tumor volume and invasion are related to ctDNA levels.", "The human esophageal squamous cell carcinoma cell line TE11 was used because we established an experimental system for TE11 previously[10] and used it to show that liquid biopsy is useful in esophageal cancer cells as well as other gastrointestinal cancers. Cells were grown in RPMI 1640 (Thermo Fisher Scientific, Tokyo, Japan) containing 10% fetal bovine serum and 1% penicillin/streptomycin (Sigma-Aldrich, Tokyo, Japan) at 37.0 °C in a 5% CO2 atmosphere. 
Appropriate passages were made such that confluency did not exceed 70% prior to xenotransplantation. A Countess Automated Cell Counter (Thermo Fisher Scientific, Tokyo, Japan) was used to count cells, and 0.2% Trypan blue dye was used to exclude dead cells. ", "Xenograft mouse experimental protocols were approved by the Ethical Committee of Okayama University (OKU-2019276). Six-week-old female nude mice (BALB/c-nu/nu) (Charles River Laboratories, Japan) were used. Mice were raised in the animal facility of Okayama University and given food and water. The physical conditions of the mice, including the presence or absence of body movement or the availability of food and drink, were monitored daily. Mice were euthanized with isoflurane if they stopped moving or eating.\nTumor xenotransplants were established in mice by inoculation in the shoulders or flanks with 1 × 10^6 TE11 cells suspended in 50 μL medium plus 50 μL Matrigel (Corning Product No. 356234). Inoculation was performed at two sites (i.e., both shoulders, two-site xenograft mouse group, 28 mice) or at four sites (i.e., both shoulders and both flanks, four-site xenograft mouse group, 28 mice) in order to determine the effect of tumor volume as well as the degree of invasion (Figure 1). \n\nXenograft mouse model with TE11 cell. A: In the xenograft experiment, groups of 12 mice each were given two-site xenografts or four-site xenografts; B: In the resection experiment, groups of 16 mice each were given two- or four-site xenografts. All tumors were resected at week 7 after xenotransplantation in two-site xenograft mice, or at week 5 in four-site xenograft mice.\nTumor formation was confirmed in all xenograft mice, although the changes in size varied. Differences in tumor volume were evaluated over time. Two-site and four-site xenograft mouse groups were sacrificed for ctDNA analysis at the appropriate time point after xenotransplantation. 
To minimize the effects of differences in tumor size, four mice were used for each ctDNA time point analysis. \nA sample size calculation using power analysis determined that 24 mice were needed in xenograft experiments and 32 mice were needed in resection experiments.", "Twelve mice received two-site xenografts, and 12 received four-site xenografts. Tumor size was measured every week after xenotransplantation, and ctDNA was evaluated at two time points: 4 wk and 8 wk after xenotransplantation (Figure 1).", "Sixteen mice received two-site xenografts, and 16 mice received four-site xenografts. All tumors were resected at week 7 after xenotransplantation in the two-site xenograft group or at week 5 in the four-site xenograft group. cfDNA and ctDNA were evaluated 6 h, 1 d, and 3 d after resection, or simultaneously with resection in the controls (Figure 1). ", "For ctDNA analysis, whole blood was collected in BD Vacutainer tubes (Becton, Dickinson and Company, Franklin Lakes, NJ), and processed within 1 h after collection. The samples were centrifuged at 3000 × g at 4 °C to separate plasma from peripheral blood cells, and stored at -80 °C. DNA was extracted from 1000 μL of blood, yielding a final solution of 25 μL of DNA. Plasma ctDNA (25 μL) was extracted with the QIAamp Circulating Nucleic Acid Kit (Qiagen, Valencia, CA), according to the manufacturer's instructions. At sacrifice, tumors were collected and divided into two fragments. One tumor fragment was snap-frozen in liquid nitrogen and used for preparation of genomic DNA. The other fragment was formalin-fixed and paraffin-embedded for histopathological diagnosis, morphological evaluation after hematoxylin/eosin staining, and immunohistochemistry. Four slides were made from the largest diameter section, where it was easy to obtain information on invasion.", "The wild-type telomerase reverse transcriptase (TERT) gene was analyzed by a mouse TERT (mTERT) assay (Thermo Fisher Scientific, Tokyo, Japan) or a human TERT (hTERT) assay (Bio-Rad Laboratories, Hercules, CA, United States of America) to take advantage of the sequence differences between the mTERT and hTERT genes. Verification experiments using droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America) were performed.", "To evaluate ctDNA, hTERT was detected via droplet digital polymerase chain reaction (PCR) according to the following protocol. DNA eluent (5 μL) from plasma was combined with Droplet PCR Supermix (10 μL; Bio-Rad Laboratories, Hercules, CA, United States of America), primer/probe mixture (1 μL), 5 M betaine (2 μL), 80 mmol/L EDTA (0.25 μL), CviQI enzyme (0.25 μL), and sterile DNase- and RNase-free water (3.5 μL). The mixture (22 μL) was added to Droplet Generation Oil (70 μL; Bio-Rad Laboratories, Hercules, CA, United States of America) to produce droplets. Thermal cycling of the emulsion was as follows: an initial denaturation at 95 °C for 10 min, followed by 50 cycles of 96 °C for 30 s and 62 °C for 1 min. After a final enzyme deactivation step of 98 °C for 10 min, the reaction mixtures were analyzed using a droplet reader (Bio-Rad Laboratories, Hercules, CA, United States of America). For quantification, the fluorescence signal was acquired with QuantaSoft software (Bio-Rad Laboratories, Hercules, CA, United States of America). We set the threshold fluorescence intensity at 7500 (mTERT) or 2000 (hTERT), according to positive and negative controls in this study, i.e., plasma and tissue of healthy human, control mouse, or the TE11 cell line. ", "We used JMP version 14.0 (SAS Institute, Cary, NC, United States of America) for statistical analysis and set the threshold of significance at P < 0.05. 
Continuous data were analyzed using the non-parametric Wilcoxon test, and categorical data were analyzed using a Chi-squared test. ", "Verification experiments In verification experiments using droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America), we confirmed that the mTERT gene was detected in tissue and plasma of control mice, but not in TE11 genomic DNA, whereas the hTERT gene was detected in TE11 genomic DNA, but not in the tissue or plasma of control mice (Figure 2).\n\nTelomerase reverse transcriptase assay by droplet digital polymerase chain reaction for mouse plasma, liver tissue, TE11 cell and water. The presence of the mouse telomerase reverse transcriptase (mTERT) and human TERT (hTERT) forms of the wild-type TERT was analyzed by droplet digital polymerase chain reaction. A: The assay correctly detected mTERT in mouse plasma and liver tissue; B: hTERT was detected in the TE11 cell line. Neither mTERT nor hTERT was detected in water. \nXenograft experiments Xenograft experiments were designed to reveal the origin of ctDNA and factors contributing to ctDNA increase. Average tumor sizes measured in the two-site xenograft group 1, 2, 3, 4, 5, 6, 7, and 8 wk after xenotransplantation were 1.8, 3.2, 4.6, 6.0, 6.8, 8.0, 8.5, and 12.5 mm, respectively. Two-site xenograft mice were sacrificed 4 or 8 wk after xenotransplantation to evaluate ctDNA. No hTERT was detected at week 4, but hTERT was detected at week 8 (Figure 3). These results indicated that ctDNA was associated with tumor growth.\n\nThe dynamics of circulating tumor DNA in xenograft experiments. A: Two-site and four-site xenograft mice were sacrificed for circulating tumor DNA (ctDNA) at week 4. Human telomerase reverse transcriptase (hTERT) was detected only in four-site xenograft mice, not in two-site xenograft mice; B: In both two-site xenograft mice sacrificed for ctDNA at week 8 and four-site xenograft mice sacrificed at week 6, hTERT was detected. \nIn four-site xenograft mice, the average tumor sizes at weeks 1, 2, 3, 4, 5, and 6 after xenotransplantation were 1.8, 4.0, 5.9, 7.1, 8.9, and 10.2 mm. The 8 wk evaluation planned for this group was revised to occur at week 6, because the tumor in one mouse had grown rapidly to cause thoracic invasion, and it was unlikely to survive to week 8. Four-site xenograft mice were sacrificed for ctDNA at week 4 and week 6. hTERT was detected both at week 4 and at week 6 in this group (Figure 3). These results indicated that ctDNA was associated with tumor growth, as in the two-site xenograft mice. There were no other unexpected adverse events.\nHistopathology of tumors at week 4 showed no invasion in either the two-site or four-site xenograft group, while tumors showed invasion into muscle both at week 8 in the two-site xenograft mice (P = 0.02) and at week 6 in the four-site xenograft mice (Figure 4; P = 0.03). These results indicated that ctDNA was associated with tumor invasion.\n\nHistopathology of xenograft mouse with TE11. A: Histopathology showed absence of invasion in tumors at week 4 in mice with two-site or four-site xenografts; B: Muscle invasions were observed in tumors at week 8 in two-site xenograft mice, and at week 6 in four-site xenograft mice. \nThe rates of tumor size increase were similar between the two-site xenograft group and the four-site group. Interestingly, the two groups showed similar tumor diameters (P = 0.25) and invasion at week 4 (Figures 3 and 4), but a clear difference in the ctDNA detection rate (Figure 3; P = 0.02). These findings showed that not only invasion but also tumor volume might be related to the rate of ctDNA detection.\nResection experiments Resection experiments were designed to clarify the response of ctDNA to tumor resection. Tumors in the two-site and four-site xenograft groups were resected when the diameter exceeded 10 mm. cfDNA and ctDNA were examined at sacrifice. In these resection experiments, two mice were excluded from the evaluation: one mouse with rapid tumor growth and a tendency toward paraplegia before resection, and another mouse with high invasion that died after tumor resection and before evaluation. \nIn two-site xenograft mice, tumor resection was performed at week 7. The average tumor size in the control group was 10.3 mm at the time of resection, and the average tumor sizes in the groups evaluated 6 h, 1 d, and 3 d after resection were 10.1, 10.3, and 10.2 mm, respectively (P = 0.98). We detected hTERT at resection (control), but hTERT had decreased by 6 h, and was undetectable 1 d or 3 d after resection (Figure 5). The control cfDNA concentration was 1.1 μg/mL at the time of resection, and was 1.2, 1.3, and 1.4 μg/mL measured 6 h, 1 d, and 3 d after resection. Pathological autopsy confirmed the absence of macroscopic residual tumor at each evaluation in this experiment. Using data for the number of positive droplets measured 0 and 6 h after tumor resection in the two-site xenograft resection experiment, the half-life of ctDNA may be calculated from y = 155e^(-0.368x). In our study, the half-life of ctDNA was estimated to be 1.8–3.2 h (Figure 6).\n\nThe dynamics of circulating tumor DNA in resection experiments. A: Tumor resection was performed when the tumor diameter in xenograft mice exceeded 10 mm, at week 7 in two-site xenograft mice, or at week 5 in four-site xenograft mice. Human telomerase reverse transcriptase (hTERT) circulating tumor DNA (ctDNA) was detected at resection (control), had decreased by 6 h, and was undetectable 1 d and 3 d after resection; B: On the other hand, in four-site xenograft mice, hTERT (ctDNA) was detected at resection (control), 6 h, 1 d, and 3 d after resection. cfDNA: Cell-free DNA. \n\nThe half-life of circulating tumor DNA in resection experiments. To estimate the half-life of circulating tumor DNA in two-site xenograft mice in the resection experiment, the number of positive droplets vs time after resection was fit to an exponential curve, y = 155e^(-0.368x). \nIn four-site xenograft mice, tumor resection was performed at week 5. The average tumor size in the control group was 9.7 mm at the time of resection, while average tumor sizes in the groups evaluated 6 h, 1 d, and 3 d after resection were 11.4, 10.6, and 10.2 mm, respectively (P = 0.34). In this experiment, hTERT was detected in all groups (Figure 5). The control cfDNA concentration was 1.3 μg/mL at resection and 1.2, 1.5, and 1.7 μg/mL measured 6 h, 1 d, and 3 d, respectively, after resection. Here, pathological autopsy revealed the presence of macroscopic residual tumor at each resection evaluation, with tumor invasion and intrathoracic metastasis in all mice. 
This experiment revealed that residual ctDNA was associated with incomplete resection and metastasis.", "In verification experiments using a droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America), we confirmed that the mTERT gene was detected in tissue and plasma of control mice, but not in TE11 genomic DNA, whereas the hTERT gene was detected in TE11 genomic DNA, but not in the tissue or plasma of control mice (Figure 2).\n\nTelomerase reverse transcriptase assay by droplet digital polymerase chain reaction for mouse plasma, liver tissue, TE11 cell and water. The presence of mouse telomerase reverse transcriptase (mTERT) and human TERT (hTERT) forms of the wild type TERT was analyzed by droplet digital polymerase chain reaction. A: The assay correctly detected mTERT in mouse plasma and liver tissue; B: hTERT was detected in the TE11 cell line. Neither mTERT nor hTERT was detected in water. ", "Xenograft experiments were designed to reveal the origin of ctDNA and factors contributing to ctDNA increase. Average tumor sizes measured in the two-site xenograft group 1, 2, 3, 4, 5, 6, 7, and 8 wk after xenotransplantation were 1.8, 3.2, 4.6, 6.0, 6.8, 8.0, 8.5, and 12.5 mm, respectively. Two-site xenograft mice were sacrificed 4 or 8 wk after xenotransplantation to evaluate ctDNA. No hTERT was detected at week 4, but hTERT was detected at week 8 (Figure 3). These results indicated that ctDNA was associated with tumor growth.\n\nThe dynamics of circulating tumor DNA in xenograft experiments. A: Two-site and four-site xenograft mice were sacrificed for circulating tumor DNA (ctDNA) at week 4. Human telomerase reverse transcriptase (hTERT) was detected only in four-site xenograft mice, not in two-site xenograft mice; B: In both two-site xenograft mice sacrificed for ctDNA at week 8 and four-site xenograft mice sacrificed at week 6, hTERT was detected. 
\nIn four-site xenograft mice, the average tumor sizes at weeks 1, 2, 3, 4, 5, and 6 after xenotransplantation were 1.8, 4.0, 5.9, 7.1, 8.9, and 10.2 mm. The 8 wk evaluation planned for this group was revised to occur at week 6, because the tumor in one mouse had grown rapidly to cause thoracic invasion, and it was unlikely to survive to week 8. Four-site xenograft mice were sacrificed for ctDNA at week 4 and week 6. hTERT was detected both at week 4 and at week 6 in this group (Figure 3). These results indicated that ctDNA was associated with tumor growth, as in the two-site xenograft mice. There were no other unexpected adverse events.\nHistopathology of tumors at week 4 showed no invasion in either the two-site or four-site xenograft group, while tumors showed invasion into muscle both at week 8 in the two-site xenograft mice (P = 0.02) and at week 6 in the four-site xenograft mice (Figure 4; P = 0.03). These results indicated that ctDNA was associated with tumor invasion.\n\nHistopathology of xenograft mouse with TE11. A: Histopathology showed absence of invasion in tumors at week 4 in mice with two-site or four-site xenografts; B: Muscle invasions were observed in tumors at week 8 in two-site xenograft mice, and at week 6 in four-site xenograft mice. \nThe rates of tumor size increase were similar between the two-site xenograft group and the four-site group. Interestingly, the two groups showed similar tumor diameters (P = 0.25) and invasion at week 4 (Figures 3 and 4), but a clear difference in the ctDNA detection rate (Figure 3; P = 0.02). These findings showed that not only invasion but also tumor volume might be related to the rate of ctDNA detection.", "Resection experiments were designed to clarify the response of ctDNA to tumor resection. Tumors in the two-site and four-site xenograft groups were resected when the diameter exceeded 10 mm. cfDNA and ctDNA were examined at sacrifice. In these resection experiments, two mice were excluded from the evaluation: one mouse with rapid tumor growth and a tendency toward paraplegia before resection, and another mouse with high invasion that died after tumor resection and before evaluation. \nIn two-site xenograft mice, tumor resection was performed at week 7. The average tumor size in the control group was 10.3 mm at the time of resection, and the average tumor sizes in the groups evaluated 6 h, 1 d, and 3 d after resection were 10.1, 10.3, and 10.2 mm, respectively (P = 0.98). We detected hTERT at resection (control), but hTERT had decreased by 6 h, and was undetectable 1 d or 3 d after resection (Figure 5). The control cfDNA concentration was 1.1 μg/mL at the time of resection, and was 1.2, 1.3, and 1.4 μg/mL measured 6 h, 1 d, and 3 d after resection. Pathological autopsy confirmed the absence of macroscopic residual tumor at each evaluation in this experiment. Using data for the number of positive droplets measured 0 and 6 h after tumor resection in the two-site xenograft resection experiment, the half-life of ctDNA may be calculated from y = 155e^(-0.368x). In our study, the half-life of ctDNA was estimated to be 1.8–3.2 h (Figure 6).\n\nThe dynamics of circulating tumor DNA in resection experiments. A: Tumor resection was performed when the tumor diameter in xenograft mice exceeded 10 mm, at week 7 in two-site xenograft mice, or at week 5 in four-site xenograft mice. Human telomerase reverse transcriptase (hTERT) circulating tumor DNA (ctDNA) was detected at resection (control), had decreased by 6 h, and was undetectable 1 d and 3 d after resection; B: On the other hand, in four-site xenograft mice, hTERT (ctDNA) was detected at resection (control), 6 h, 1 d, and 3 d after resection. cfDNA: Cell-free DNA. \n\nThe half-life of circulating tumor DNA in resection experiments. To estimate the half-life of circulating tumor DNA in two-site xenograft mice in the resection experiment, the number of positive droplets vs time after resection was fit to an exponential curve, y = 155e^(-0.368x). \nIn four-site xenograft mice, tumor resection was performed at week 5. The average tumor size in the control group was 9.7 mm at the time of resection, while average tumor sizes in the groups evaluated 6 h, 1 d, and 3 d after resection were 11.4, 10.6, and 10.2 mm, respectively (P = 0.34). In this experiment, hTERT was detected in all groups (Figure 5). The control cfDNA concentration was 1.3 μg/mL at resection and 1.2, 1.5, and 1.7 μg/mL measured 6 h, 1 d, and 3 d, respectively, after resection. Here, pathological autopsy revealed the presence of macroscopic residual tumor at each resection evaluation, with tumor invasion and intrathoracic metastasis in all mice. This experiment revealed that residual ctDNA was associated with incomplete resection and metastasis.", "Because the TERT gene sequence differs between human and mouse, we were able to determine the origin and dynamics of ctDNA in a xenograft mouse model in which human-derived esophageal cancer cells were injected into the epidermis of mice. This model allowed assessment of ctDNA, which has traditionally been difficult to assess in the human body due to tumor heterogeneity and the influence of other cells. In our experiment, tumor volume was involved in increases or decreases in ctDNA. In addition, if ctDNA is present over 1 d after resection, the presence of residual tumor is suspected. \nAlthough studies of liquid biopsy using xenograft mouse models have mainly reported on circulating tumor cells[11], we focused on ctDNA in this study. This model seems to be an ideal approach because clinical samples contain a variety of cellular information and are subject to limitations such as ethical issues. 
Our report is also extremely valuable in providing direct evidence of the origin of plasma ctDNA, which we assessed in the xenograft mouse model by assaying mTERT and hTERT. Based on this confirmation of ctDNA origin, we examined other factors affecting ctDNA dynamics. In our xenograft experiments, the average tumor sizes 4 wk after two-site and four-site xenografting were very similar (5.6 mm and 6.5 mm), and histology showed similar degrees of tumor invasion (Figure 4). However, ctDNA was detected in four-site xenograft mice but not in two-site xenograft mice. These findings revealed that tumor volume may influence ctDNA detection. In both groups, increasing ctDNA with tumor progression was confirmed, at week 8 (two-site) and week 6 (four-site). The amount and detection rate of ctDNA correlated with tumor progression in a previous clinical study[6], and our results may support that finding. Although detailed studies on the association between tumor volume or invasion and ctDNA have not been conducted, ctDNA is presumably detectable even in early cancer once the tumor reaches a certain volume. \nThe presence of ctDNA after surgical resection is observed in clinical samples from cancer patients, and evaluation during the perioperative period is useful for prediction of prognosis[12-14]. Detection of ctDNA after surgery suggests residual disease[15]. However, these clinical studies may inevitably detect circulating DNA from sources other than tumor cells, and there have been no reports indicating when liquid biopsy should be performed. In this regard, our resection experiments demonstrated reduced hTERT at 6 h and its absence 1 to 3 d after resection, indicating that ctDNA evaluation 1 d after resection might be useful to detect residual tumor in clinical cases. 
These experiments also revealed that tumor volume was involved in the increase or decrease of ctDNA, and that post-resection evaluation requires an interval of one day or more after resection.\nThe half-life of ctDNA was reported as approximately 2 h in one study[16], but another study found the half-life to be 16 min[17]. The metabolism and excretion of cfDNA are affected by liver and kidney function[18], and ctDNA levels might be regulated by the same mechanisms. In our study, we estimated the half-life of ctDNA to be 1.8–3.2 h, based on ctDNA levels measured 0 and 6 h after resection (Figure 6), which is similar to data from previous reports. Assuming a half-life of 3 h, ctDNA will decline by a factor of 2^8 (approximately 256-fold) after 1 d, so postoperative assessment of ctDNA should be performed after at least 1 d.\ncfDNA is derived from apoptotic or necrotic cells[19,20], and its increase is considered to be caused by surgical manipulation, or perhaps by cytokines or cell proliferation in response to invasive therapy. Our results are consistent with these reports: ctDNA decreased after complete resection, while cfDNA increased after resection. \nCarcinoembryonic antigen (CEA) and squamous cell carcinoma antigen (SCC-Ag) are biomarkers for esophageal cancer. However, the usefulness of these biomarkers in the early diagnosis of esophageal cancer has not been established. Currently, upper endoscopy is the most useful examination for detecting early-stage esophageal cancer. However, since this examination is invasive, the development of non-invasive methods such as liquid biopsy is eagerly awaited. The combination of this method with conventional methods may lead to the next generation of diagnostics.\nOur study had the following limitations. First, the artificial implantation of tumor cells under the skin in the xenograft model differs from the physiology of actual tumor development. 
Second, individual mice exhibit differences in tumor growth rates; therefore, our comparative analyses used the average values for four animals per group. Third, regarding residual tumor, although pathological autopsies were performed on all mice, complete certainty with respect to residual disease is impossible. Fourth, the TE11 cell line alone is not necessarily sufficient; other cell lines should be examined as well. Fifth, comparison with conventional biomarkers such as CEA and SCC-Ag remains to be performed.", "We clarified the origin and dynamics of ctDNA in a xenograft mouse model. We showed that tumor volume was an important factor in ctDNA detection, and that if the tumor volume is sufficiently large, ctDNA can be detected even in early-stage or superficial cancers. We also found that, after complete tumor resection, ctDNA disappeared within 1 d unless residual tumor remained. These findings may indicate future clinical uses of liquid biopsy." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Cell Line", "Xenograft mouse model", "Xenograft experiments", "Resection experiments", "Blood and tumor tissue sample collection", "Telomerase reverse transcriptase assay", "Droplet digital polymerase chain reaction and data analysis", "Statistical analysis", "RESULTS", "Verification experiments", "Xenograft experiments ", "Resection experiments", "DISCUSSION", "CONCLUSION" ]
[ "Liquid biopsy, a molecular biological diagnostic method for blood and body fluids, has progressed dramatically in recent years. Circulating tumor DNA (ctDNA), one of the targets of liquid biopsy, is expected to be a useful method for screening and detection of cancer, monitoring therapy, prediction of prognosis, and personalized medicine[1-3]. Therefore, in addition to direct biopsy, which is the basis of conventional cancer diagnosis, a hybrid method, which includes non-invasive liquid biopsy, is becoming the mainstream.\nCell-free DNA (cfDNA), which includes ctDNA, is derived from apoptotic or necrotic cells[4,5]. Theoretically, it could be applied regardless of the stage. However, reports of its usefulness for early stages of cancer are controversial. Bettegowda et al[6] revealed that the rate of ctDNA detection is generally high in advanced stages of cancer, but ctDNA levels are generally lower in early stages of cancer. On the other hand, some reports indicated that ctDNA was useful for detecting early-stage cancers[6-9]. It remains unclear which factors, such as tumor volume and tumor invasion, influence ctDNA, and the origin of ctDNA in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions.\nIn this study, we used a xenograft mouse model to assess the origin of ctDNA, clarify the dynamics of ctDNA levels, assess ctDNA levels after treatment, and to determine whether tumor volume and invasion are related to ctDNA levels.", "Cell Line The human esophageal squamous cell carcinoma cell line TE11 was used because we established an experimental system for TE11 previously[10] and used it to show that liquid biopsy is useful in esophageal cancer cells as well as other gastrointestinal cancers. Cells were grown in RPMI 1640 (Thermo Fisher Scientific, Tokyo, Japan) containing 10% fetal bovine serum and 1% penicillin/streptomycin (Sigma-Aldrich, Tokyo, Japan) at 37.0 °C in a 5% CO2 atmosphere. 
Appropriate passages were made such that confluency did not exceed 70% prior to xenotransplantation. A Countess Automated Cell Counter (Thermo Fisher Scientific, Tokyo, Japan) was used to count cells, and 0.2% Trypan blue dye was used to exclude dead cells. \nXenograft mouse model Xenograft mouse experimental protocols were approved by the Ethical Committee of Okayama University (OKU-2019276). Six-week-old female nude mice (BALB/c-nu/nu) (Charles River Laboratories, Japan) were used. Mice were raised in the animal facility of Okayama University and given food and water. The physical conditions of the mice, including the presence or absence of body movement or the availability of food and drink, were monitored daily. Mice were euthanized with isoflurane if they stopped moving or eating.\nTumor xenotransplants were established in mice by inoculation in the shoulders or flanks with 1 × 10^6 TE11 cells suspended in 50 μL medium plus 50 μL Matrigel (Corning Product No. 356234). 
Inoculation was performed at two sites (i.e., both shoulders; two-site xenograft mouse group, 28 mice) or at four sites (i.e., both shoulders and both flanks; four-site xenograft mouse group, 28 mice) in order to determine the effect of tumor volume as well as the degree of invasion (Figure 1). \n\nXenograft mouse model with TE11 cell. A: In the xenograft experiment, groups of 12 mice each were given two-site xenografts or four-site xenografts; B: In the resection experiment, groups of 16 mice each were given two- or four-site xenografts. All tumors were resected at week 7 after xenotransplantation in two-site xenograft mice, or at week 5 in four-site xenograft mice.\nTumor formation was confirmed in all xenograft mice, although the changes in size varied. Differences in tumor volume were evaluated over time. Two-site and four-site xenograft mouse groups were sacrificed for ctDNA analysis at the appropriate time points after xenotransplantation. To minimize the effects of differences in tumor size, four mice were used for each ctDNA time point analysis. \nA sample size calculation using power analysis determined that 24 mice were needed in the xenograft experiments and 32 mice in the resection experiments.\nXenograft experiments Twelve mice received two-site xenografts, and 12 received four-site xenografts. Tumor size was measured every week after xenotransplantation, and ctDNA was evaluated at two time points: 4 wk and 8 wk after xenotransplantation (Figure 1).\nResection experiments Sixteen mice received two-site xenografts, and 16 mice received four-site xenografts. All tumors were resected at week 7 after xenotransplantation in the two-site xenograft group or at week 5 in the four-site xenograft group. 
cfDNA and ctDNA were evaluated 6 h, 1 d, and 3 d after resection, or simultaneously with resection in the controls (Figure 1). \nBlood and tumor tissue sample collection For ctDNA analysis, whole blood was collected in BD Vacutainer tubes (Becton, Dickinson and Company, Franklin Lakes, NJ) and processed within 1 h after collection. The samples were centrifuged at 3000 × g at 4 °C to separate plasma from peripheral blood cells, and stored at -80 °C. DNA was extracted from 1000 μL of blood, yielding a final solution of 25 μL of DNA. Plasma ctDNA was extracted (25 μL) with the QIAamp Circulating Nucleic Acid Kit (Qiagen, Valencia, CA), according to the manufacturer’s instructions. At sacrifice, tumors were collected and divided into two fragments. One tumor fragment was snap-frozen in liquid nitrogen and used for preparation of genomic DNA. The other fragment was formalin-fixed and paraffin-embedded for histopathological diagnosis, morphological evaluation after hematoxylin/eosin staining, and immunohistochemistry. Four slides were made from the largest-diameter section, where it was easy to obtain information on invasion.\nTelomerase reverse transcriptase assay The wild-type telomerase reverse transcriptase (TERT) gene was analyzed by a mouse TERT (mTERT) assay (Thermo Fisher Scientific, Tokyo, Japan) or a human TERT (hTERT) assay (Bio-Rad Laboratories, Hercules, CA, United States of America) to take advantage of the differences between the mTERT and hTERT genes. Verification experiments were performed using droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America).\nDroplet digital polymerase chain reaction and data analysis To evaluate ctDNA, hTERT was detected via droplet digital polymerase chain reaction (PCR) according to the following protocol. 
DNA eluent (5 μL) from plasma was combined with Droplet PCR Supermix (10 μL; Bio-Rad Laboratories, Hercules, CA, United States of America), primer/probe mixture (1 μL), 5 M betaine (2 μL), 80 mmol/L EDTA (0.25 μL), CviQI enzyme (0.25 μL), and sterile DNase- and RNase-free water (3.5 μL). The mixture (22 μL) was added to Droplet Generation Oil (70 μL; Bio-Rad Laboratories, Hercules, CA, United States of America) to produce droplets. Thermal cycling of the emulsion was as follows: an initial denaturation at 95 °C for 10 min, followed by 50 cycles of 96 °C for 30 s and 62 °C for 1 min. After a final enzyme deactivation step of 98 °C for 10 min, the reaction mixtures were analyzed using a droplet reader (Bio-Rad Laboratories, Hercules, CA, United States of America). For quantification, the fluorescence signal was acquired with QuantaSoft software (Bio-Rad Laboratories, Hercules, CA, United States of America). We set the threshold fluorescence intensity at 7500 (mTERT) or 2000 (hTERT), according to the positive and negative controls in this study, i.e., plasma and tissue of a healthy human, a control mouse, or the TE11 cell line. \nStatistical analysis We used JMP version 14.0 (SAS Institute, Cary, NC, United States of America) for statistical analysis and set the threshold of significance at P < 0.05. Continuous data were analyzed using the non-parametric Wilcoxon test, and categorical data were analyzed using a Chi-squared test. 
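In ddPCR, each droplet is scored positive or negative against the fluorescence threshold (7500 for mTERT, 2000 for hTERT above), and target concentration is then recovered from the positive fraction by Poisson correction, because a positive droplet can contain more than one template copy. A minimal sketch of that correction (the 0.85 nL droplet volume is the nominal QX200 value, and the droplet counts are hypothetical, not taken from this study):

```python
import math

def copies_per_droplet(n_positive, n_total):
    """Poisson-corrected mean template copies per droplet.
    A droplet is negative only if it holds zero copies: P(0) = exp(-lam),
    so lam = -ln(negative fraction)."""
    negative_fraction = 1.0 - n_positive / n_total
    return -math.log(negative_fraction)

def copies_per_ul(n_positive, n_total, droplet_nl=0.85):
    """Concentration (copies per uL of reaction), assuming ~0.85 nL droplets
    (nominal QX200 droplet volume; an assumption, not stated in the paper)."""
    return copies_per_droplet(n_positive, n_total) / (droplet_nl * 1e-3)

# Hypothetical run: 155 positive droplets out of 15000 accepted droplets
lam = copies_per_droplet(155, 15000)  # ~0.0104 copies per droplet
conc = copies_per_ul(155, 15000)      # ~12.2 copies per uL of reaction
```

At the low positive fractions typical of plasma ctDNA, the correction barely changes the raw ratio, but it becomes substantial as the positive fraction approaches saturation.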
", "Verification experiments In verification experiments using a droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America), we confirmed that the mTERT gene was detected in tissue and plasma of control mice, but not in TE11 genomic DNA, whereas the hTERT gene was detected in TE11 genomic DNA, but not in the tissue or plasma of control mice (Figure 2).\n\nTelomerase reverse transcriptase assay by droplet digital polymerase chain reaction for mouse plasma, liver tissue, TE11 cell and water. The presence of mouse telomerase reverse transcriptase (mTERT) and human TERT (hTERT) forms of the wild type TERT was analyzed by droplet digital polymerase chain reaction. A: The assay correctly detected mTERT in mouse plasma and liver tissue; B: hTERT was detected in the TE11 cell line. Neither mTERT nor hTERT was detected in water. \nXenograft experiments Xenograft experiments were designed to reveal the origin of ctDNA and factors contributing to ctDNA increase. 
Average tumor sizes measured in the two-site xenograft group 1, 2, 3, 4, 5, 6, 7, and 8 wk after xenotransplantation were 1.8, 3.2, 4.6, 6.0, 6.8, 8.0, 8.5, and 12.5 mm, respectively. Two-site xenograft mice were sacrificed 4 or 8 wk after xenotransplantation to evaluate ctDNA. No hTERT was detected at week 4, but hTERT was detected at week 8 (Figure 3). These results indicated that ctDNA was associated with tumor growth.\n\nThe dynamics of circulating tumor DNA in xenograft experiments. A: Two-site and four-site xenograft mice were sacrificed for circulating tumor DNA (ctDNA) at week 4. Human telomerase reverse transcriptase (hTERT) was detected only in four-site xenograft mice, not in two-site xenograft mice; B: In both two-site xenograft mice sacrificed for ctDNA at week 8 and four-site xenograft mice sacrificed at week 6, hTERT was detected. \nIn four-site xenograft mice, the average tumor sizes at week 1, 2, 3, 4, 5, and 6 after xenotransplantation were 1.8, 4.0, 5.9, 7.1, 8.9, and 10.2 mm. The 8 wk evaluation planned for this group was revised to occur at week 6, because the tumor in one mouse had grown rapidly to cause thoracic invasion, and it was unlikely to survive to week 8. Four-site xenograft mice were sacrificed for ctDNA at week 4 and week 6. hTERT was detected both at week 4 and at week 6 in this group (Figure 3). These results indicated that ctDNA was associated with tumor growth as well as those of two-site xenograft mice. There were no other unexpected adverse events.\nHistopathology of tumors at week 4 showed no invasion in either the two-site or four-site xenograft group, while tumors showed invasion into muscle both at week 8 in the two-site xenograft mice (P = 0.02) and at week 6 in the four-site xenograft mice (Figure 4; P = 0.03). These results indicated that ctDNA was associated with tumor invasion.\n\nHistopathology of xenograft mouse with TE11. 
A: Histopathology showed absence of invasion in tumors at week 4 in mice with two-site or four-site xenografts; B: Muscle invasions were observed in tumors at week 8 in two-site xenograft mice, and at week 6 in four-site xenograft mice. \nThe rates of tumor size increase were similar between the two-site xenograft group and the four-site group. Interestingly, the two groups showed similar tumor diameters (P = 0.25) and invasion at week 4 (Figures 3 and 4), but a clear difference in the ctDNA detection rate (Figure 3; P = 0.02). These findings showed that not only invasion but also tumor volume might be related to the rate of ctDNA detection.\nXenograft experiments were designed to reveal the origin of ctDNA and factors contributing to ctDNA increase. Average tumor sizes measured in the two-site xenograft group 1, 2, 3, 4, 5, 6, 7, and 8 wk after xenotransplantation were 1.8, 3.2, 4.6, 6.0, 6.8, 8.0, 8.5, and 12.5 mm, respectively. Two-site xenograft mice were sacrificed 4 or 8 wk after xenotransplantation to evaluate ctDNA. No hTERT was detected at week 4, but hTERT was detected at week 8 (Figure 3). These results indicated that ctDNA was associated with tumor growth.\n\nThe dynamics of circulating tumor DNA in xenograft experiments. A: Two-site and four-site xenograft mice were sacrificed for circulating tumor DNA (ctDNA) at week 4. Human telomerase reverse transcriptase (hTERT) was detected only in four-site xenograft mice, not in two-site xenograft mice; B: In both two-site xenograft mice sacrificed for ctDNA at week 8 and four-site xenograft mice sacrificed at week 6, hTERT was detected. \nIn four-site xenograft mice, the average tumor sizes at week 1, 2, 3, 4, 5, and 6 after xenotransplantation were 1.8, 4.0, 5.9, 7.1, 8.9, and 10.2 mm. The 8 wk evaluation planned for this group was revised to occur at week 6, because the tumor in one mouse had grown rapidly to cause thoracic invasion, and it was unlikely to survive to week 8. 
Four-site xenograft mice were sacrificed for ctDNA at week 4 and week 6. hTERT was detected both at week 4 and at week 6 in this group (Figure 3). These results indicated that ctDNA was associated with tumor growth as well as those of two-site xenograft mice. There were no other unexpected adverse events.\nHistopathology of tumors at week 4 showed no invasion in either the two-site or four-site xenograft group, while tumors showed invasion into muscle both at week 8 in the two-site xenograft mice (P = 0.02) and at week 6 in the four-site xenograft mice (Figure 4; P = 0.03). These results indicated that ctDNA was associated with tumor invasion.\n\nHistopathology of xenograft mouse with TE11. A: Histopathology showed absence of invasion in tumors at week 4 in mice with two-site or four-site xenografts; B: Muscle invasions were observed in tumors at week 8 in two-site xenograft mice, and at week 6 in four-site xenograft mice. \nThe rates of tumor size increase were similar between the two-site xenograft group and the four-site group. Interestingly, the two groups showed similar tumor diameters (P = 0.25) and invasion at week 4 (Figures 3 and 4), but a clear difference in the ctDNA detection rate (Figure 3; P = 0.02). These findings showed that not only invasion but also tumor volume might be related to the rate of ctDNA detection.\nResection experiments Resection experiments were designed to clarify responses of ctDNA to tumor resection. Tumors in the two-site and four-site xenograft groups were resected when the diameter exceeded 10 mm. cfDNA and ctDNA were examined at sacrifice. In these resection experiments, two mice were excluded from the evaluation: one mouse with rapid tumor growth and a tendency toward paraplegia before resection, and another mouse with high invasion who died after tumor resection and before evaluation. \nIn two-site xenograft mice, tumor resection was performed at week 7. 
The average tumor size in the control group was 10.3 mm at the time of resection, and the average tumor sizes at resection in the groups evaluated 6 h, 1 d, and 3 d after resection were 10.1, 10.3, and 10.2 mm, respectively (P = 0.98). We detected hTERT at resection (control), but hTERT had decreased by 6 h and was undetectable 1 d and 3 d after resection (Figure 5). The control cfDNA concentration was 1.1 μg/mL at the time of resection, and was 1.2, 1.3, and 1.4 μg/mL at 6 h, 1 d, and 3 d after resection, respectively. Pathological autopsy confirmed the absence of macroscopic residual tumor at each evaluation in this experiment. Using the numbers of positive droplets measured 0 and 6 h after tumor resection in the two-site xenograft resection experiment, ctDNA clearance can be described by the fitted curve y = 155e^(-0.368x). From this fit, the half-life of ctDNA was estimated to be 1.8–3.2 h (Figure 6).

The dynamics of circulating tumor DNA in resection experiments. A: Tumor resection was performed when the tumor diameter in xenograft mice exceeded 10 mm, at week 7 in two-site xenograft mice or at week 5 in four-site xenograft mice. Human telomerase reverse transcriptase (hTERT) circulating tumor DNA (ctDNA) was detected at resection (control), had decreased by 6 h, and was undetectable 1 d and 3 d after resection; B: In contrast, in four-site xenograft mice, hTERT (ctDNA) was detected at resection (control) and at 6 h, 1 d, and 3 d after resection. cfDNA: Cell-free DNA.

The half-life of circulating tumor DNA in resection experiments. To estimate the half-life of circulating tumor DNA in two-site xenograft mice in the resection experiment, the number of positive droplets vs time after resection was fit to an exponential curve, y = 155e^(-0.368x).

In four-site xenograft mice, tumor resection was performed at week 5.
The average tumor size in the control group was 9.7 mm at the time of resection, while the average tumor sizes at resection in the groups evaluated 6 h, 1 d, and 3 d after resection were 11.4, 10.6, and 10.2 mm, respectively (P = 0.34). In this experiment, hTERT was detected in all groups (Figure 5). The control cfDNA concentration was 1.3 μg/mL at resection and 1.2, 1.5, and 1.7 μg/mL at 6 h, 1 d, and 3 d after resection, respectively. Here, pathological autopsy revealed the presence of macroscopic residual tumor at each resection evaluation, with tumor invasion and intrathoracic metastasis in all mice. This experiment revealed that residual ctDNA was associated with incomplete resection and metastasis.
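The half-life arithmetic above can be made explicit. Assuming first-order (exponential) clearance, the decay constant k = 0.368 per hour from the fitted curve y = 155e^(-0.368x) implies a half-life of ln 2 / k ≈ 1.9 h, at the lower end of the reported 1.8–3.2 h estimate. A minimal Python sketch (the 6 h droplet count of 17 below is hypothetical, chosen only to lie on the fitted curve):

```python
import math

def decay_constant(n0, n1, dt_hours):
    # First-order clearance: n1 = n0 * exp(-k * dt)  =>  k = ln(n0 / n1) / dt
    return math.log(n0 / n1) / dt_hours

def half_life(k):
    # For exponential decay, t_1/2 = ln(2) / k
    return math.log(2) / k

# Decay constant taken from the reported fit y = 155 * exp(-0.368 * x)
print(round(half_life(0.368), 2))  # 1.88 (hours)

# Hypothetical illustration: 155 positive droplets at resection, ~17 by 6 h
k = decay_constant(155, 17, 6.0)
print(round(half_life(k), 2))  # 1.88 (hours)

# With a 3 h half-life, 1 d (24 h) spans 8 half-lives: a ~2^8-fold decline
print(2 ** (24 / 3))  # 256.0
```

This is why, as the discussion notes, a one-day interval after complete resection is enough for ctDNA to fall far below detection.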
In verification experiments using droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America), we confirmed that the mTERT gene was detected in tissue and plasma of control mice but not in TE11 genomic DNA, whereas the hTERT gene was detected in TE11 genomic DNA but not in the tissue or plasma of control mice (Figure 2).

Telomerase reverse transcriptase assay by droplet digital polymerase chain reaction for mouse plasma, liver tissue, TE11 cells, and water. The presence of the mouse telomerase reverse transcriptase (mTERT) and human TERT (hTERT) forms of the wild-type TERT gene was analyzed by droplet digital polymerase chain reaction. A: The assay correctly detected mTERT in mouse plasma and liver tissue; B: hTERT was detected in the TE11 cell line. Neither mTERT nor hTERT was detected in water.
Because the TERT gene sequence differs between human and mouse, we were able to determine the origin and dynamics of ctDNA in a xenograft mouse model in which human-derived esophageal cancer cells were injected into the epidermis of mice. This model allowed assessment of ctDNA, which has traditionally been difficult to assess in the human body owing to tumor heterogeneity and the influence of other cells. In our experiment, tumor volume was involved in increases and decreases in ctDNA. In addition, if ctDNA remains detectable more than 1 d after resection, residual tumor should be suspected.

Although studies of liquid biopsy using xenograft mouse models have mainly examined circulating tumor cells[11], we focused on ctDNA in this study. This model is well suited to such questions, because clinical samples contain information from a variety of cells and are subject to limitations such as ethical constraints.
Our report also provides direct evidence of the origin of plasma ctDNA, which we assessed in the xenograft mouse model by assaying mTERT and hTERT. Based on this confirmation of ctDNA origin, we examined other factors affecting ctDNA dynamics. In our xenograft experiments, the average tumor sizes 4 wk after two-site and four-site xenografts were very similar (5.6 mm and 6.5 mm), and histology showed similar degrees of tumor invasion (Figure 4). However, ctDNA was detected in four-site xenograft mice but not in two-site xenograft mice. These findings revealed that tumor volume may influence ctDNA detection. In both groups, increasing ctDNA with tumor progression was confirmed, at week 8 and week 6, respectively. The amount and detection rate of ctDNA correlated with tumor progression in a previous clinical study[6], and our results support that finding. Although detailed studies on the association between tumor volume or invasion and ctDNA have not been conducted, ctDNA is expected to be detectable even in early cancer once the tumor reaches a certain volume.

The presence of ctDNA after surgical resection is observed in clinical samples from cancer patients, and evaluation during the perioperative period is useful for prediction of prognosis[12-14]. Detection of ctDNA after surgery suggests residual disease[15]. However, such clinical studies may inevitably detect circulating DNA from sources other than tumor cells, and there have been no reports indicating when liquid biopsy should be performed. On this point, our resection experiments demonstrated reduced hTERT at 6 h and its absence 1 to 3 d after resection, indicating that ctDNA evaluation 1 d after resection might be useful to detect residual tumor in clinical cases.
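The week-4 comparison above (similar average diameters of 5.6 mm and 6.5 mm, yet ctDNA detected only in the four-site group) becomes more intuitive when restated as total tumor burden. A rough sketch, approximating each tumor as a sphere of the reported average diameter (an illustrative assumption, not a measurement from the study):

```python
import math

def sphere_volume(diameter_mm):
    # V = (pi / 6) * d^3 for a sphere of diameter d
    return math.pi * diameter_mm ** 3 / 6

# Average week-4 diameters reported in the discussion
two_site_burden = 2 * sphere_volume(5.6)   # 2 tumors of 5.6 mm
four_site_burden = 4 * sphere_volume(6.5)  # 4 tumors of 6.5 mm

print(round(two_site_burden), round(four_site_burden))  # 184 575 (mm^3)
print(round(four_site_burden / two_site_burden, 1))     # 3.1
```

Despite similar per-tumor diameters, the four-site group carries roughly three times the total tumor volume, consistent with volume, rather than diameter or invasion alone, driving ctDNA detectability.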
These experiments also revealed that tumor volume was involved in the increase or decrease of ctDNA and that post-resection evaluation requires an interval of at least one day after resection.

The half-life of ctDNA was reported as approximately 2 h in one study[16], but another study found a half-life of 16 min[17]. The metabolism and excretion of cfDNA are affected by liver and kidney function[18], and ctDNA levels might be regulated by the same mechanisms. In our study, we estimated the half-life of ctDNA to be 1.8–3.2 h, based on ctDNA levels measured 0 and 6 h after resection (Figure 6), which is similar to data from previous reports. Assuming a half-life of 3 h, ctDNA declines by a factor of 2^8 (approximately 256-fold) after 1 d, so postoperative ctDNA should be assessed 1 d or later after resection.

cfDNA is derived from apoptotic or necrotic cells[19,20], and its increase is considered to be caused by surgical manipulation, or perhaps by cytokines or cell proliferation in response to invasive therapy. Our results are consistent with these reports: ctDNA decreased after complete resection, while cfDNA increased after resection.

Carcinoembryonic antigen (CEA) and squamous cell carcinoma antigen (SCC-Ag) are biomarkers for esophageal cancer. However, the usefulness of these biomarkers in the early diagnosis of esophageal cancer has not been established. Currently, upper endoscopy is the most useful examination for detecting early-stage esophageal cancer. However, because this examination is invasive, the development of non-invasive methods such as liquid biopsy is eagerly awaited. Combining liquid biopsy with conventional methods may lead to the next generation of diagnosis.

Our study had the following limitations. First, the artificial implantation of tumor under the skin in the xenograft model differs from the physiology of actual tumor development.
Second, individual mice exhibit differences in tumor growth rates; therefore, our comparative analyses used the average values for four animals per group. Third, regarding residual tumor, although pathological autopsies were performed on all mice, complete certainty with respect to residual disease is impossible. Fourth, the TE11 cell line alone is not necessarily sufficient; other cell lines should also be examined. Fifth, comparison with conventional biomarkers such as CEA and SCC-Ag remains to be performed.

We clarified the origin and dynamics of ctDNA in the xenograft mouse model. We showed that tumor volume was an important factor in ctDNA detection and that, if the tumor volume is sufficiently large, ctDNA can be detected even in early-stage or superficial cancers. We also found that, after complete tumor resection, ctDNA disappeared by 1 d, whereas it persisted when residual tumor remained. These findings may indicate future clinical uses of liquid biopsy.
Keywords: Liquid biopsy; Circulating tumor DNA; Xenograft; Esophageal squamous cell carcinoma; Dynamics of circulating tumor DNA
INTRODUCTION: Liquid biopsy, a molecular biological diagnostic method for blood and body fluids, has progressed dramatically in recent years. Circulating tumor DNA (ctDNA), one of the targets of liquid biopsy, is expected to be useful for cancer screening and detection, therapy monitoring, prediction of prognosis, and personalized medicine[1-3]. Therefore, in addition to direct biopsy, which is the basis of conventional cancer diagnosis, hybrid approaches that include non-invasive liquid biopsy are becoming mainstream. Cell-free DNA (cfDNA), which includes ctDNA, is derived from apoptotic or necrotic cells[4,5]. Theoretically, liquid biopsy could be applied regardless of stage; however, reports of its usefulness in early stages of cancer are controversial. Bettegowda et al[6] revealed that the rate of ctDNA detection is generally high in advanced stages of cancer, but ctDNA levels are generally lower in early stages. On the other hand, some reports indicated that ctDNA was useful for detecting early-stage cancers[6-9]. It remains unclear which factors, such as tumor volume and tumor invasion, influence ctDNA, and the origin of the ctDNA detected in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions. In this study, we used a xenograft mouse model to assess the origin of ctDNA, clarify the dynamics of ctDNA levels, assess ctDNA levels after treatment, and determine whether tumor volume and invasion are related to ctDNA levels.

MATERIALS AND METHODS: Cell Line: The human esophageal squamous cell carcinoma cell line TE11 was used because we had previously established an experimental system for TE11[10] and used it to show that liquid biopsy is useful in esophageal cancer cells as well as other gastrointestinal cancers.
Cells were grown in RPMI 1640 (Thermo Fisher Scientific, Tokyo, Japan) containing 10% fetal bovine serum and 1% penicillin/streptomycin (Sigma-Aldrich, Tokyo, Japan) at 37.0 °C in a 5% CO2 atmosphere. Appropriate passages were made such that confluency did not exceed 70% prior to xenotransplantation. A Countess Automated Cell Counter (Thermo Fisher Scientific, Tokyo, Japan) was used to count cells, and 0.2% Trypan blue dye was used to exclude dead cells.

Xenograft mouse model: Xenograft mouse experimental protocols were approved by the Ethical Committee of Okayama University (OKU-2019276). Six-week-old female nude mice (BALB/c-nu/nu) (Charles River Laboratories, Japan) were used. Mice were raised in the animal facility of Okayama University and given food and water. The physical conditions of the mice, including the presence or absence of body movement and the availability of food and drink, were monitored daily. Mice were euthanized with isoflurane if they stopped moving or eating. Tumor xenotransplants were established by inoculation in the shoulders or flanks with 1 × 10^6 TE11 cells suspended in 50 μL medium plus 50 μL Matrigel (Corning Product No. 356234).
Inoculation was performed at two sites (i.e., both shoulders; two-site xenograft group, 28 mice) or at four sites (i.e., both shoulders and both flanks; four-site xenograft group, 28 mice) in order to determine the effects of tumor volume and degree of invasion (Figure 1).

Xenograft mouse model with TE11 cells. A: In the xenograft experiment, groups of 12 mice each were given two-site or four-site xenografts; B: In the resection experiment, groups of 16 mice each were given two- or four-site xenografts. All tumors were resected at week 7 after xenotransplantation in two-site xenograft mice, or at week 5 in four-site xenograft mice.

Tumor formation was confirmed in all xenograft mice, although the changes in size varied. Differences in tumor volume were evaluated over time. Two-site and four-site xenograft mouse groups were sacrificed for ctDNA analysis at the appropriate time points after xenotransplantation. To minimize the effects of differences in tumor size, four mice were used for each ctDNA time-point analysis. A sample size calculation using power analysis determined that 24 mice were needed in the xenograft experiments and 32 mice in the resection experiments.
Xenograft experiments: Twelve mice received two-site xenografts, and 12 received four-site xenografts. Tumor size was measured every week after xenotransplantation, and ctDNA was evaluated at two time points: 4 wk and 8 wk after xenotransplantation (Figure 1).

Resection experiments: Sixteen mice received two-site xenografts, and 16 mice received four-site xenografts. All tumors were resected at week 7 after xenotransplantation in the two-site xenograft group or at week 5 in the four-site xenograft group.
cfDNA and ctDNA were evaluated 6 h, 1 d, and 3 d after resection, or simultaneously with resection in the controls (Figure 1).

Blood and tumor tissue sample collection: For ctDNA analysis, whole blood was collected in BD Vacutainer tubes (Becton, Dickinson and Company, Franklin Lakes, NJ) and processed within 1 h after collection. The samples were centrifuged at 3000 × g at 4 °C to separate plasma from peripheral blood cells, and stored at -80 °C. DNA was extracted from 1000 μL of blood, and the final solution was 25 μL of DNA. Plasma ctDNA was extracted (25 μL) with the QIAamp Circulating Nucleic Acid Kit (Qiagen, Valencia, Calif), according to the manufacturer’s instructions. At sacrifice, tumors were collected and divided into two fragments. One tumor fragment was snap-frozen in liquid nitrogen and used for preparation of genomic DNA. The other fragment was formalin-fixed and paraffin-embedded for histopathological diagnosis, morphological evaluation after hematoxylin/eosin staining, and immunohistochemistry. Four slides were made from the largest diameter section, where it was easy to obtain information on invasion.
Telomerase reverse transcriptase assay: The wild-type telomerase reverse transcriptase (TERT) gene was analyzed with a mouse TERT (mTERT) assay (Thermo Fisher Scientific, Tokyo, Japan) or a human TERT (hTERT) assay (Bio-Rad Laboratories, Hercules, CA, United States of America) to take advantage of the sequence differences between the mTERT and hTERT genes. Verification experiments were performed using droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America).

Droplet digital polymerase chain reaction and data analysis: To evaluate ctDNA, hTERT was detected via droplet digital polymerase chain reaction (PCR) according to the following protocol.
DNA eluent (5 μL) from plasma was combined with Droplet PCR Supermix (10 μL; Bio-Rad Laboratories, Hercules, CA, United States of America), primer/probe mixture (1 μL), 5 M betaine (2 μL), 80 mmol/L EDTA (0.25 μL), CviQI enzyme (0.25 μL), and sterile DNase- and RNase-free water (3.5 μL). The mixture (22 μL) was added to Droplet Generation Oil (70 μL; Bio-Rad Laboratories, Hercules, CA, United States of America) to produce droplets. Thermal cycling of the emulsion was as follows: an initial denaturation at 95 °C for 10 min, followed by 50 cycles of 96 °C for 30 s and 62 °C for 1 min. After a final enzyme deactivation step at 98 °C for 10 min, the reaction mixtures were analyzed using a droplet reader (Bio-Rad Laboratories, Hercules, CA, United States of America). For quantification, the fluorescence signal was acquired with QuantaSoft software (Bio-Rad Laboratories, Hercules, CA, United States of America). We set the threshold fluorescence intensity at 7500 (mTERT) or 2000 (hTERT), according to positive and negative controls in this study, i.e., plasma and tissue of healthy humans, control mice, or the TE11 cell line.
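As a simple bookkeeping check on the protocol above, the listed per-reaction volumes should sum to the stated 22 μL before the mixture is added to the 70 μL of droplet generation oil. A short sketch (the component labels are informal paraphrases of the reagent names):

```python
# Per-reaction volumes (uL) as listed in the ddPCR protocol
mix = {
    "DNA eluent": 5.0,
    "Droplet PCR Supermix": 10.0,
    "primer/probe mixture": 1.0,
    "5 M betaine": 2.0,
    "80 mmol/L EDTA": 0.25,
    "CviQI enzyme": 0.25,
    "DNase/RNase-free water": 3.5,
}

total = sum(mix.values())
print(total)  # 22.0, matching the stated reaction volume
assert total == 22.0
```

Such a check is useful when scaling the master mix to a full plate of reactions, since a transcription error in any one component shows up immediately in the total.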
Statistical analysis: We used JMP version 14.0 (SAS Institute, Cary, NC, United States of America) for statistical analysis and set the threshold of significance at P < 0.05. Continuous data were analyzed using the non-parametric Wilcoxon test, and categorical data were analyzed using a Chi-squared test.
Six-week-old female nude mice (BALB/c-nu/nu) (Charles River Laboratories, Japan) were used. Mice were raised in the animal facility of Okayama University and given food and water. The physical condition of the mice, including the presence or absence of body movement and the availability of food and drink, was monitored daily. Mice were euthanized with isoflurane if they stopped moving or eating. Tumor xenotransplants were established in mice by inoculation in the shoulders or flanks with 1 × 10^6 TE11 cells suspended in 50 μL medium plus 50 μL Matrigel (Corning Product No. 356234). Inoculation was performed at two sites (i.e., both shoulders; two-site xenograft mouse group, 28 mice) or at four sites (i.e., both shoulders and both flanks; four-site xenograft mouse group, 28 mice) in order to determine the effect of tumor volume as well as the degree of invasion (Figure 1). Xenograft mouse model with TE11 cells. A: In the xenograft experiment, groups of 12 mice each were given two-site xenografts or four-site xenografts; B: In the resection experiment, groups of 16 mice each were given two- or four-site xenografts. All tumors were resected at week 7 after xenotransplantation in two-site xenograft mice, or at week 5 in four-site xenograft mice. Tumor formation was confirmed in all xenograft mice, although the changes in size varied. Differences in tumor volume were evaluated over time. Two-site and four-site xenograft mouse groups were sacrificed for ctDNA analysis at the appropriate time point after xenotransplantation. To minimize the effects of differences in tumor size, four mice were used for each ctDNA time point analysis. A sample size calculation using power analysis determined that 24 mice were needed in the xenograft experiments and 32 mice in the resection experiments. Xenograft experiments: Twelve mice received two-site xenografts, and 12 received four-site xenografts.
Tumor size was measured every week after xenotransplantation, and ctDNA was evaluated at two time points: 4 wk and 8 wk after xenotransplantation (Figure 1). Resection experiments: Sixteen mice received two-site xenografts, and 16 mice received four-site xenografts. All tumors were resected at week 7 after xenotransplantation in the two-site xenograft group or at week 5 in the four-site xenograft group. cfDNA and ctDNA were evaluated 6 h, 1 d, and 3 d after resection, or simultaneously with resection in the controls (Figure 1). Blood and tumor tissue sample collection: For ctDNA analysis, whole blood was collected in BD Vacutainer tubes (Becton, Dickinson and Company, Franklin Lakes, NJ), and processed within 1 h after collection. The samples were centrifuged at 3000 × g at 4 °C to separate plasma from peripheral blood cells, and stored at -80 °C. Circulating DNA was extracted from 1000 μL of plasma with the QIAamp Circulating Nucleic Acid Kit (Qiagen, Valencia, CA), according to the manufacturer’s instructions, and eluted in a final volume of 25 μL. At sacrifice, tumors were collected and divided into two fragments. One tumor fragment was snap-frozen in liquid nitrogen and used for preparation of genomic DNA. The other fragment was formalin-fixed and paraffin-embedded for histopathological diagnosis, morphological evaluation after hematoxylin/eosin staining, and immunohistochemistry. Four slides were made from the largest diameter section, where information on invasion was easiest to obtain. Telomerase reverse transcriptase assay: The wild-type telomerase reverse transcriptase (TERT) gene was analyzed by a mouse TERT (mTERT) assay (Thermo Fisher Scientific, Tokyo, Japan) or human TERT (hTERT) assay (Bio-Rad Laboratories, Hercules, CA, United States of America) to take advantage of the sequence differences between the mTERT and hTERT genes.
Verification experiments were performed using a droplet digital PCR system (QX200; Bio-Rad Laboratories, Hercules, CA, United States of America). Droplet digital polymerase chain reaction and data analysis: To evaluate ctDNA, hTERT was detected via droplet digital polymerase chain reaction (PCR) according to the following protocol. DNA eluent (5 μL) from plasma was combined with Droplet PCR Supermix (10 μL; Bio-Rad Laboratories, Hercules, CA, United States of America), primer/probe mixture (1 μL), 5M Betaine (2 μL), 80 mmol/L EDTA (0.25 μL), CviQl enzyme (0.25 μL), and sterile DNase- and RNase-free water (3.5 μL). The mixture (22 μL) was added to Droplet Generation Oil (70 μL; Bio-Rad Laboratories, Hercules, CA, United States of America) to produce droplets. Thermal cycling of the emulsion was as follows: an initial denaturation at 95 °C for 10 min, followed by 50 cycles of 96 °C for 30 s and 62 °C for 1 min. After a final enzyme deactivation step of 98 °C for 10 min, the reaction mixtures were analyzed using a droplet reader (Bio-Rad Laboratories, Hercules, CA, United States of America). For quantification, the fluorescence signal was acquired with QuantaSoft software (Bio-Rad Laboratories, Hercules, CA, United States of America). We set the threshold fluorescence intensity at 7500 (mTERT) or 2000 (hTERT), according to the positive and negative controls in this study, i.e., plasma and tissue of healthy human, control mouse, or the TE11 cell line. Statistical analysis: We used JMP version 14.0 (SAS Institute, Cary, NC, United States of America) for statistical analysis and set the threshold of significance at P < 0.05. Continuous data were analyzed using the non-parametric Wilcoxon test, and categorical data were analyzed using a Chi-squared test.
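The threshold step above classifies each droplet as positive or negative; absolute quantification then follows from the fraction of positive droplets via Poisson correction, because a single droplet may contain more than one template copy. QuantaSoft performs this internally; purely as an illustrative sketch (not the authors' code), assuming the QX200's nominal droplet volume of roughly 0.85 nL (an assumption, not a value reported in this study):

```python
from math import log

def copies_per_ul(positive, total, droplet_vol_nl=0.85):
    """Poisson-corrected target concentration in copies per uL of reaction.

    positive/total is the fraction of positive droplets; droplet_vol_nl is
    the assumed nominal droplet volume, not a value from this study.
    """
    p = positive / total
    copies_per_droplet = -log(1 - p)                     # Poisson mean occupancy
    return copies_per_droplet / (droplet_vol_nl * 1e-3)  # nL -> uL

# e.g. 155 positive droplets among 15000 accepted droplets (hypothetical counts):
concentration = copies_per_ul(155, 15000)  # ~12 copies/uL
```

At low positive fractions the correction is small (-ln(1 - p) is close to p), but near droplet saturation it becomes essential.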
RESULTS: Verification experiments: In verification experiments using a droplet digital PCR (QX200 system; Bio-Rad Laboratories, Hercules, CA, United States of America), we confirmed that the mTERT gene was detected in tissue and plasma of control mice, but not in TE11 genomic DNA, whereas the hTERT gene was detected in TE11 genomic DNA, but not in the tissue or plasma of control mice (Figure 2). Telomerase reverse transcriptase assay by droplet digital polymerase chain reaction for mouse plasma, liver tissue, TE11 cell and water. The presence of mouse telomerase reverse transcriptase (mTERT) and human TERT (hTERT) forms of the wild type TERT was analyzed by droplet digital polymerase chain reaction. A: The assay correctly detected mTERT in mouse plasma and liver tissue; B: hTERT was detected in the TE11 cell line. Neither mTERT nor hTERT was detected in water. Xenograft experiments: Xenograft experiments were designed to reveal the origin of ctDNA and factors contributing to ctDNA increase.
Average tumor sizes measured in the two-site xenograft group at 1, 2, 3, 4, 5, 6, 7, and 8 wk after xenotransplantation were 1.8, 3.2, 4.6, 6.0, 6.8, 8.0, 8.5, and 12.5 mm, respectively. Two-site xenograft mice were sacrificed 4 or 8 wk after xenotransplantation to evaluate ctDNA. No hTERT was detected at week 4, but hTERT was detected at week 8 (Figure 3). These results indicated that ctDNA was associated with tumor growth. The dynamics of circulating tumor DNA in xenograft experiments. A: Two-site and four-site xenograft mice were sacrificed for circulating tumor DNA (ctDNA) at week 4. Human telomerase reverse transcriptase (hTERT) was detected only in four-site xenograft mice, not in two-site xenograft mice; B: In both two-site xenograft mice sacrificed for ctDNA at week 8 and four-site xenograft mice sacrificed at week 6, hTERT was detected. In four-site xenograft mice, the average tumor sizes at weeks 1, 2, 3, 4, 5, and 6 after xenotransplantation were 1.8, 4.0, 5.9, 7.1, 8.9, and 10.2 mm, respectively. The 8 wk evaluation planned for this group was revised to occur at week 6, because the tumor in one mouse had grown rapidly to cause thoracic invasion, and it was unlikely to survive to week 8. Four-site xenograft mice were sacrificed for ctDNA at week 4 and week 6. hTERT was detected both at week 4 and at week 6 in this group (Figure 3). These results indicated that ctDNA was associated with tumor growth, as in the two-site xenograft mice. There were no other unexpected adverse events. Histopathology of tumors at week 4 showed no invasion in either the two-site or four-site xenograft group, while tumors showed invasion into muscle both at week 8 in the two-site xenograft mice (P = 0.02) and at week 6 in the four-site xenograft mice (Figure 4; P = 0.03). These results indicated that ctDNA was associated with tumor invasion. Histopathology of xenograft mouse with TE11.
A: Histopathology showed absence of invasion in tumors at week 4 in mice with two-site or four-site xenografts; B: Muscle invasions were observed in tumors at week 8 in two-site xenograft mice, and at week 6 in four-site xenograft mice. The rates of tumor size increase were similar between the two-site xenograft group and the four-site group. Interestingly, the two groups showed similar tumor diameters (P = 0.25) and invasion at week 4 (Figures 3 and 4), but a clear difference in the ctDNA detection rate (Figure 3; P = 0.02). These findings showed that not only invasion but also tumor volume might be related to the rate of ctDNA detection.
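The detection-rate comparisons above are categorical and were analyzed in JMP, as described in the methods. Purely as an illustrative stdlib sketch of a Chi-squared test on such a 2 × 2 detection table (counts below are hypothetical, patterned on four mice per group; with groups this small, Fisher's exact test is often preferred, and this function is not the authors' analysis):

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Chi-squared test with Yates continuity correction for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p-value).

    The chi-square(1) survival function equals erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    stat = n * max(abs(a * d - b * c) - n / 2, 0) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )
    return stat, erfc(sqrt(stat / 2))

# Detected vs not detected, four-site vs two-site group (hypothetical 4/4 vs 0/4):
stat, p = chi2_2x2(4, 0, 0, 4)  # stat = 4.5, p ~ 0.034
```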
Resection experiments: Resection experiments were designed to clarify responses of ctDNA to tumor resection. Tumors in the two-site and four-site xenograft groups were resected when the diameter exceeded 10 mm. cfDNA and ctDNA were examined at sacrifice. In these resection experiments, two mice were excluded from the evaluation: one mouse with rapid tumor growth and a tendency toward paraplegia before resection, and another mouse with high invasion that died after tumor resection and before evaluation. In two-site xenograft mice, tumor resection was performed at week 7.
The average tumor size in the control group was 10.3 mm at the time of resection, and the average tumor sizes at resection in the groups evaluated 6 h, 1 d, and 3 d later were 10.1, 10.3, and 10.2 mm, respectively (P = 0.98). We detected hTERT at resection (control), but hTERT had decreased by 6 h and was undetectable 1 d and 3 d after resection (Figure 5). The control cfDNA concentration was 1.1 μg/mL at the time of resection, and was 1.2, 1.3, and 1.4 μg/mL measured 6 h, 1 d, and 3 d after resection, respectively. Pathological autopsy confirmed the absence of macroscopic residual tumor at each evaluation in this experiment. Using the numbers of positive droplets measured 0 and 6 h after tumor resection in the two-site xenograft resection experiment, the half-life of ctDNA may be calculated from y = 155e^(-0.368x). In our study, the half-life of ctDNA was estimated to be 1.8–3.2 h (Figure 6). The dynamics of circulating tumor DNA in resection experiments. A: Tumor resection was performed when the tumor diameter in xenograft mice exceeded 10 mm, at week 7 in two-site xenograft mice or at week 5 in four-site xenograft mice. Human telomerase reverse transcriptase (hTERT) circulating tumor DNA (ctDNA) was detected at resection (control), had decreased by 6 h, and was undetectable 1 d and 3 d after resection; B: In contrast, in four-site xenograft mice, hTERT (ctDNA) was detected at resection (control) and 6 h, 1 d, and 3 d after resection. cfDNA: Cell-free DNA. The half-life of circulating tumor DNA in resection experiments. To estimate the half-life of circulating tumor DNA in two-site xenograft mice in the resection experiment, the number of positive droplets vs time after resection was fit to an exponential curve, y = 155e^(-0.368x). In four-site xenograft mice, tumor resection was performed at week 5.
The average tumor size in the control group was 9.7 mm at the time of resection, while the average tumor sizes at resection in the groups evaluated 6 h, 1 d, and 3 d later were 11.4, 10.6, and 10.2 mm, respectively (P = 0.34). In this experiment, hTERT was detected in all groups (Figure 5). The control cfDNA concentration was 1.3 μg/mL at resection and 1.2, 1.5, and 1.7 μg/mL measured 6 h, 1 d, and 3 d after resection, respectively. Here, pathological autopsy revealed the presence of macroscopic residual tumor at each resection evaluation, with tumor invasion and intrathoracic metastasis in all mice. This experiment revealed that residual ctDNA was associated with incomplete resection and metastasis.
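The half-life estimate above follows directly from the fitted curve y = 155e^(-0.368x) (x in hours): for first-order decay, the half-life is ln 2 / k. A minimal sketch of the two-point estimate, assuming only the fit reported in the text:

```python
from math import exp, log

def decay_rate(y0, y1, dt_h):
    """First-order decay constant (per hour) from counts y0 and y1 taken dt_h hours apart."""
    return log(y0 / y1) / dt_h

def half_life(k):
    """Half-life implied by decay constant k."""
    return log(2) / k

# The reported fit y = 155*exp(-0.368*x) implies ~17 positive droplets at 6 h:
y6 = 155 * exp(-0.368 * 6)
k = decay_rate(155, y6, 6)  # recovers 0.368 per hour
t_half = half_life(k)       # ~1.9 h, at the low end of the reported 1.8-3.2 h range
```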
DISCUSSION: Because the TERT gene sequence differs between human and mouse, we were able to determine the origin and dynamics of ctDNA in a xenograft mouse model in which human-derived esophageal cancer cells were injected into the epidermis of mice. This model allowed assessment of ctDNA, which has traditionally been difficult to assess in the human body due to tumor heterogeneity and the influence of other cells. In our experiments, tumor volume was involved in increases and decreases in ctDNA. In addition, if ctDNA is present more than 1 d after resection, residual tumor should be suspected. Although studies of liquid biopsy using xenograft mouse models have mainly examined circulating tumor cells[11], we focused on ctDNA in this study. This model seems ideal because clinical samples contain DNA from a variety of cells and are subject to limitations such as ethical issues.
Our report is also valuable in providing direct evidence of the origin of plasma ctDNA, which we assessed in the xenograft mouse model by assaying mTERT and hTERT. Based on this ctDNA confirmation, other factors affecting ctDNA dynamics were examined. In our xenograft experiments, the average tumor sizes 4 wk after two-site and four-site xenografts were very similar (5.6 mm and 6.5 mm), and histology showed similar degrees of tumor invasion (Figure 4). However, ctDNA was detected in four-site xenograft mice but not in two-site xenograft mice. These findings revealed that tumor volume may influence ctDNA detection. In both groups, increasing ctDNA with tumor progression was confirmed at week 8 and week 6, respectively. The amount and detection rate of ctDNA correlated with tumor progression in a previous clinical study[6], and our results may support that finding. Although detailed studies on the association between tumor volume or invasion and ctDNA have not been conducted, ctDNA is assumed to be detectable in early cancer once the tumor reaches a certain volume. The presence of ctDNA after surgical resection is observed in clinical samples from cancer patients, and evaluation during the perioperative period is useful for prediction of prognosis[12-14]. Detection of ctDNA after surgery suggests some residual disease[15]. However, these clinical studies may inevitably detect circulating DNA from sources other than tumor cells, and there have been no reports to indicate when liquid biopsy should be used. Regarding this point, our resection experiments demonstrated reduced hTERT at 6 h and its absence 1 to 3 d after resection, indicating that ctDNA evaluation 1 d after resection might be useful to detect residual tumor in clinical cases. These experiments also revealed that tumor volume was involved in the increase or decrease of ctDNA and that evaluation after tumor resection requires an interval of one day or more.
The half-life of ctDNA was reported as approximately 2 h in one study[16], but another study found the half-life to be 16 min[17]. The metabolism and excretion of cfDNA are affected by liver and kidney function[18], and ctDNA levels might be regulated by the same mechanisms. In our study, we estimated the half-life of ctDNA to be 1.8–3.2 h, based on ctDNA levels measured 0 and 6 h after resection (Figure 6), which is similar to data from previous reports. Assuming a half-life of 3 h, ctDNA will decline by a factor of 2^8 (approximately 256-fold) after 1 d, so postoperative ctDNA should be assessed 1 d or more after resection. cfDNA is derived from apoptotic or necrotic cells[19,20], and its increase is considered to be caused by surgical manipulation, or perhaps by cytokines or cell proliferation in response to invasive therapy. Our results are consistent with these reports, indicating that ctDNA decreased after complete resection, while cfDNA increased after resection. Carcinoembryonic antigen (CEA) and squamous cell carcinoma antigen (SCC-Ag) are biomarkers for esophageal cancer. However, the usefulness of these biomarkers in the early diagnosis of esophageal cancer has not been established. Currently, upper endoscopy is the most useful examination for detecting early-stage esophageal cancer. However, because this examination is invasive, the development of non-invasive methods such as liquid biopsy is eagerly awaited. Combining liquid biopsy with conventional methods may lead to the next generation of diagnostics.
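The timing argument in the discussion (a 3 h half-life gives eight half-lives per day) can be checked with a one-line projection; the 3 h figure is the discussion's assumption, not a new measurement:

```python
def remaining_fraction(half_life_h, elapsed_h):
    """Fraction of ctDNA remaining after elapsed_h hours of first-order decay."""
    return 0.5 ** (elapsed_h / half_life_h)

# 24 h at a 3 h half-life is 8 half-lives, a 2**8 = 256-fold decline,
# supporting assessment of postoperative ctDNA at 1 d or later.
fold_decline = 1 / remaining_fraction(3, 24)
```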
Third, regarding residual tumor, although pathological autopsies were performed on all mice, complete certainty with respect to residual disease is impossible. Fourth, the TE11 cell line alone is not necessarily sufficient; other cell lines should also be examined. Fifth, comparison with conventional biomarkers such as CEA and SCC-Ag remains to be shown. CONCLUSION: We clarified the origin and dynamics of ctDNA in the xenograft mouse model. We showed that tumor volume was an important factor in ctDNA detection and that, if the tumor volume is sufficiently large, ctDNA can be detected even in early-stage or superficial cancers. We also found that ctDNA disappeared by 1 d after complete tumor resection unless residual tumor remained. These findings may indicate future clinical uses of liquid biopsy.
Background: It remains unclear which factors, such as tumor volume and tumor invasion, influence circulating tumor DNA (ctDNA), and the origin of ctDNA in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions. Methods: Tumor xenotransplants were established by inoculating BALB/c-nu/nu mice with the TE11 cell line. Groups of mice were injected with xenografts at two or four sites and sacrificed at the appropriate time point after xenotransplantation for ctDNA analysis. Analysis of ctDNA was performed by droplet digital PCR, using the human telomerase reverse transcriptase (hTERT) gene. Results: Mice given two-site xenografts were sacrificed for ctDNA at week 4 and week 8. No hTERT was detected at week 4, but it was detected at week 8. However, in four-site xenograft mice, hTERT was detected both at week 4 and week 6. These experiments revealed that both tumor invasion and tumor volume were associated with the detection of ctDNA. In resection experiments, hTERT was detected at resection, but had decreased by 6 h, and was no longer detected 1 and 3 d after resection. Conclusions: We clarified the origin and dynamics of ctDNA, showing that tumor volume is an important factor. We also found that when the tumor was completely resected, ctDNA was absent after one or more days.
INTRODUCTION: Liquid biopsy, a molecular biological diagnostic method for blood and body fluids, has progressed dramatically in recent years. Circulating tumor DNA (ctDNA), one of the targets of liquid biopsy, is expected to be a useful method for screening and detection of cancer, monitoring therapy, prediction of prognosis, and personalized medicine[1-3]. Therefore, in addition to direct biopsy, which is the basis of conventional cancer diagnosis, a hybrid method, which includes non-invasive liquid biopsy, is becoming the mainstream. Cell-free DNA (cfDNA), which includes ctDNA, is derived from apoptotic or necrotic cells[4,5]. Theoretically, it could be applied regardless of the stage. However, reports of its usefulness for early stages of cancer are controversial. Bettegowda et al[6] revealed that the rate of ctDNA detection is generally high in advanced stages of cancer, but ctDNA levels are generally lower in early stages of cancer. On the other hand, some reports indicated that ctDNA was useful for detecting early-stage cancers[6-9]. It remains unclear which factors, such as tumor volume and tumor invasion, influence ctDNA, and the origin of ctDNA in liquid biopsy is always problematic. To use liquid biopsies clinically, it will be very important to address these questions. In this study, we used a xenograft mouse model to assess the origin of ctDNA, clarify the dynamics of ctDNA levels, assess ctDNA levels after treatment, and to determine whether tumor volume and invasion are related to ctDNA levels. ACKNOWLEDGMENTS: We thank all staff in the animal facility of Okayama University, Shinya Ohashi (MD, PhD; Department of Therapeutic Oncology, Kyoto University) and Hiroshi Nakagawa (MD, PhD; Department of Medicine, Columbia University).
9,373
269
[ 289, 138, 378, 46, 74, 184, 93, 285, 56, 2790, 164, 578, 647, 926, 83 ]
16
[ "tumor", "site", "mice", "xenograft", "resection", "ctdna", "site xenograft", "week", "xenograft mice", "site xenograft mice" ]
[ "cfdna ctdna evaluated", "liquid biopsy useful", "liquid biopsy molecular", "stages cancer ctdna", "detect circulating dna" ]
[CONTENT] Liquid biopsy | Circulating tumor DNA | Xenograft | Esophageal squamous cell carcinoma | Dynamics of circulating tumor DNA [SUMMARY]
[CONTENT] Animals | Biomarkers, Tumor | Circulating Tumor DNA | Esophageal Neoplasms | Esophageal Squamous Cell Carcinoma | Heterografts | Mice | Transplantation, Heterologous [SUMMARY]
[CONTENT] cfdna ctdna evaluated | liquid biopsy useful | liquid biopsy molecular | stages cancer ctdna | detect circulating dna [SUMMARY]
[CONTENT] tumor | site | mice | xenograft | resection | ctdna | site xenograft | week | xenograft mice | site xenograft mice [SUMMARY]
[CONTENT] ctdna | cancer | ctdna levels | levels | biopsy | liquid | stages | stages cancer | method | liquid biopsy [SUMMARY]
[CONTENT] μl | mice | site | xenograft | analysis | america | states | states america | laboratories | united [SUMMARY]
[CONTENT] tumor | ctdna | volume | tumor volume | disappeared residual tumor | stage superficial | stage superficial cancers | stage superficial cancers found | remained findings indicate future | remained findings indicate [SUMMARY]
[CONTENT] site | mice | tumor | xenograft | ctdna | resection | site xenograft | week | μl | xenograft mice [SUMMARY]
[CONTENT] BALB/c-nu ||| Groups | two or | four ||| PCR [SUMMARY]
[CONTENT] ||| one [SUMMARY]
[CONTENT] ||| ||| BALB/c-nu ||| Groups | two or | four ||| PCR ||| ||| two | week 4 and week 8 ||| week 4 | week 8 ||| four | week 4 and week 6 ||| ||| 6 | 1 ||| ||| one [SUMMARY]
The link between texting and motor vehicle collision frequency in the orthopaedic trauma population.
23416747
This study evaluates whether texting frequency while driving and/or texting frequency in general is associated with an increased risk of incurring a motor vehicle collision (MVC) resulting in orthopaedic trauma injuries.
BACKGROUND
All patients who presented to the Vanderbilt University Medical Center Orthopaedic Trauma Clinic were administered a questionnaire to determine background information, mean phone use, texting frequency, texting frequency while driving, and whether or not the injury was the result of an MVC in which the patient was driving.
METHODS
237 questionnaires were collected. 60 were excluded due to incomplete data, leaving 57 questionnaires in the MVC group and 120 from patients with non-MVC injuries. Patients who sent more than 30 texts per week ("heavy texters") were 2.22 times more likely to be involved in an MVC than those who texted less frequently. 84% of respondents claimed to never text while driving. Dividing the sample into subsets on the basis of age (25 years of age or below considered "young adult," and above 25 years of age considered "adult"), young adult, heavy texters were 6.76 times more likely to be involved in an MVC than adult non-heavy texters (p = 0.000). Similarly, young adult, non-heavy texters were 6.65 (p = 0.005) times more likely to be involved in an MVC, and adult, heavy texters were 1.72 (p = 0.186) times more likely to be involved in an MVC.
RESULTS
Patients injured in an MVC sent more text messages per week than non-MVC patients. Additionally, controlling for age demonstrated that young age and heavy general texting frequency combined had the highest increase in MVC risk, with the former being the variable of greatest effect.
CONCLUSIONS
[ "Accidents, Traffic", "Adult", "Age Factors", "Automobile Driving", "Dangerous Behavior", "Female", "Fractures, Bone", "Humans", "Logistic Models", "Male", "Motor Vehicles", "Risk Factors", "Statistics as Topic", "Surveys and Questionnaires", "Tennessee", "Text Messaging", "Trauma Centers" ]
3683420
Introduction
With the advent of new technologies such as smaller, more mobile devices and phones capable of accessing email and the internet, cell phone use has skyrocketed in the past few decades. The ubiquity of smart phones and text messaging provides a significant source of driver distraction and inattentiveness, and many studies have attempted to quantitatively prove this hypothesis. A retrospective epidemiology study conducted in Quebec, Canada, in 2003 aimed to identify a link between cell phone use and motor vehicle collisions (MVC). After receiving over 36,000 questionnaires, the study concluded that cell phone users had a higher risk of incurring an MVC compared to non-users, with a dose-response relationship between the frequency of cell phone use and MVC risk.1 In addition, studies conducted by the Virginia Tech Transportation Institute (VTTI) suggested that text messaging, in particular, was associated with the highest risk of all cell phone-related tasks.2 Specifically, VTTI’s research demonstrated that text messaging while driving made a crash or near crash experience 23 times more likely than when driving without a phone, and that text messaging caused drivers to take their eyes off the road for an average of 4.6 seconds over a 6 second interval, which was the longest duration of time for any cell phone-related activity.2 Most recently, in a 2011 report on distracted driving, the Governors Highway Safety Association (GHSA) used surveys to conclude that about one eighth of drivers admitted to texting while driving, with younger drivers reporting texting while driving more frequently than older drivers.3 This latter result is important as teen drivers have the highest crash rate per mile driven of any age group, with crash rates declining with each year of increasing age but not reaching the lowest levels until after age 30.4 Thus, texting while driving seems to be reasonably prevalent and significantly dangerous; however, the lack of uniform prohibition in the US 
is disconcerting. Currently only thirty states, including Tennessee, have legislation banning texting while driving.5 Most studies, including the aforementioned one conducted by the VTTI, have demonstrated that cell phone use impairs driving performance by increasing driver inattentiveness, thereby significantly increasing the risk of MVC.2,6 Although this association appears intuitive, there are currently no studies linking texting while driving to actual trauma caused by MVCs. The majority of relevant research has been conducted under controlled simulated environments, and studies that evaluate actual risk of injury with texting behavior are rare. Moreover, while simulation studies allow the manipulation of independent variables in a randomized, controlled setting, there is disagreement about the applicability of their conclusions to actual driving. The aforementioned Quebec study1 identified a correlation between actual MVCs and cell phone use, but it simply looked at general phone use and did not address texting behavior or phone use while driving. Greater information concerning the connection between texting and actual MVC is needed. Some of the referenced retrospective studies did identify an association between cell phone use and MVCs, but various limitations prevented them from establishing a causal relationship. For example, the epidemiology study conducted in Canada was performed using cell phone bills, which did not provide information regarding cell phone use while driving, and did not include data regarding texting behavior. In addition, the realistic VTTI study found no statistical association between talking on the phone and MVC risk. A recent study conducted by the Highway Loss Data Institute determined that laws banning texting while driving have not decreased crash risk and in some cases the crash risk has paradoxically increased.7 In addition, Goodwin et al.
have studied the long-term effects of North Carolina’s 2006 law banning all cell phone use by drivers younger than 18. 8 Two years after the law took effect, cell phone use by teenage drivers in North Carolina was not significantly different than that of teenage drivers in South Carolina, where a cell phone restriction does not exist. A majority of teenagers interviewed were aware of the law but believed it was not enforced. 8 Furthermore, Braitman and McCartt used telephone surveys to conclude that while laws banning hand-held phone use seemed to discourage some drivers from using a phone while driving, laws banning texting in particular while driving had little effect on the reported frequency of texting while driving in any age group.9 These results and the limitations of the epidemiology studies have raised questions regarding the actual impact of cell phone use and/or texting on MVC risk. Accordingly, there is a need for more quantitative data supporting the association between cell phone use, texting, and MVCs. In addition, studies are needed to determine the association between cell phone use/texting while driving and resultant trauma sustained in MVCs. These studies would be critical in highlighting the healthcare costs that result from this dangerous behavior, and could help motivate legislation and community action that would save lives and healthcare dollars in the future. Accordingly, this study was designed to evaluate whether or not the frequency of texting while driving and/or general texting frequency is associated with an increased risk of incurring a motor vehicle collision.
Methods
From October 2010 to March 2011, questionnaires were distributed to all trauma patients who presented to the Orthopedic Trauma Clinic at Vanderbilt University Medical Center in Nashville, TN. The questionnaire consisted of 13 brief questions divided into four categories – basic demographics, trauma details/medical information, automobile involvement in trauma, and cell phone usage. The motor vehicle portion consisted of three questions with the main goal of determining if the trauma was a result of an MVC and if the patient was driving at the time of the collision. The model of the vehicle was also collected in order to be able to differentiate between automobile and motorcycle collisions. The cell phone component contained three specific questions regarding phone use: how many hours per week do patients use their phone, how many texts per week do they send, and how many texts per week do they send while driving. For statistical purposes, hours spent talking on the phone was classified as “general phone use” and does not include texting. Questionnaires were distributed to all new patients in the Orthopedic Trauma Clinic by the clinic nurses during their scheduled clinic visit. Patients who were unable to comprehend the questionnaire due to a language barrier or illiteracy were excluded from the study. Each questionnaire consisted of a brief disclaimer explaining the purpose of the study and ensuring patient confidentiality; patient participation was optional. Aside from the patient’s birth date, no other identifiable information was collected, and data was stored in a secured de-identified ¬dataset to further protect patient confidentiality. The study was approved by the Institutional Review Board of the Vanderbilt University School of Medicine. Collected questionnaires that were not complete were excluded from data analysis. The questionnaires that fit the inclusion criteria were grouped into two categories based on the specifics of the trauma. 
Patients who were involved in an MVC and were driving the vehicle at the time of the collision were assigned to Group A. All other patients were assigned to Group B. 237 questionnaires were collected. 60 questionnaires were excluded due to incomplete information such as demographic information, mechanism of injury, automobile information, and phone usage information. Out of the remaining 177 questionnaires, 57 fit the eligibility criteria to be assigned to Group A, while 120 were assigned to Group B. The average age of patients in Group A (MVC) was 38.0, and the age range was from 18 to 76. The average age of patients in Group B (non-MVC) was 44.4, and the age range was from 18 to 77. Two logistic models were utilized, comparing the likelihood of being involved in an MVC for those that declared that they had sent more than 30 texts per week in general and those patients that had sent more than 30 texts while driving per week ("heavy texters") to those who had sent less ("non-heavy texters"). The decision to use 30 texts as the cut-off for "heavy texters" was made to ensure sufficient separation of those individuals who very rarely or never engaged in texting behaviors from those who text more often. If a higher cut-off (such as 100 texts sent per week) had been chosen, the patients that only sent a few or no texts in a week would have been grouped with the majority of respondents, preventing a comparison of individuals with disparate texting behaviors. In addition, the logistic models were utilized to make the same comparison after separating the questionnaires by age – 25 years of age or younger ("young adult") and 26 years of age or older ("adult"). This divided the sample into four subsets – young adult non-heavy texters, young adult heavy texters, adult non-heavy texters, and adult heavy texters. Finally, statistical analysis was performed using STATA 10.
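The odds ratios these logistic models report can be illustrated with a minimal sketch of the underlying 2×2 computation, including a Woolf (log-method) 95% confidence interval. The cell counts below are hypothetical, chosen only so the resulting ratio lands near the reported 2.22; the paper does not publish the underlying table.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio with a Woolf 95% CI for a 2x2 table laid out as:
                 MVC   non-MVC
    heavy          a        b
    non-heavy      c        d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not reported in the article):
# 30 heavy texters with MVC, 40 without; 27 non-heavy with MVC, 80 without.
or_, lo, hi = odds_ratio(30, 40, 27, 80)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A wide interval here corresponds to the caveat raised in the Results: small cell counts inflate the standard error of log(OR), so a "significant" ratio can still be imprecisely estimated.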
Results
Texting was found to be the cell phone activity associated with the greatest probability of being involved in a motor vehicle collision. Logistic regression indicated no significant association between heavy general phone use (deemed to be greater than four hours of talking on the phone per week) and the probability of being involved in a motor vehicle collision when compared to low or no phone use (p = 0.694) (Table 1). Patients who sent more than 30 texts per week were 2.22 times more likely to have presented to the Vanderbilt Orthopedic Trauma Clinic after being involved in an MVC in which they were driving (p = 0.015) compared with those who texted less frequently (Table 2). The frequencies of texting for MVC and non-MVC groups are displayed in Table 3. By contrast, the vast majority of patients (84%) claimed to never text while driving (Table 4). There were only two patients who reported themselves as sending more than 30 texts while driving per week (Table 4), and as such, an odds ratio for heavy texting while driving could not be calculated. Despite the lack of data, a linear regression showed that texting in general and texting while driving were indeed associated (p = 0.000), though only 12.0% of the variance in texting while driving could be explained by texting in general (R2 = 0.1201). A subgroup analysis was conducted controlling for age, dividing the sample into four subsets – young adult non-heavy texters, young adult heavy texters, adult non-heavy texters, and adult heavy texters. Being a young adult alone increased the likelihood of incurring an MVC by 5.64 (p = 0.001), and heavy texting alone increased the same likelihood by 2.22 (p = 0.015) (Table 2). The subsets were compared against adult non-heavy texters, because this was the group with the lowest incidence of MVC.
Young adult heavy texters were 6.76 times more likely to be involved in an MVC than adult non-heavy texters (p = 0.000), while young adult non-heavy texters and adult heavy texters were 6.65 (p = 0.005) and 1.72 (p = 0.186) times as likely, respectively (Table 5). However, it is important to note that large confidence intervals indicate greater levels of variance and therefore decreased accuracy and reliability of these odds ratios despite the presence of a significant association. In addition, an analysis of variance (ANOVA) demonstrated that a greater proportion of the elevated risk of incurring an MVC was due to age than to frequency of texting. Attempts to explain this effect were made by repeating the analysis with different parameters. Treating age as a continuous variable showed that every additional year of age resulted in a 2.3% decreased risk of incurring an MVC (p = 0.066). Changing our comparison from "heavy texters" versus "non-heavy texters" to "texters" (those that declared that they had sent more than one text per week) versus "non-texters" (those that declared that they had sent no texts) showed that texting in general and texting while driving increased the likelihood of incurring an MVC by 1.45 (p = 0.259) and 1.61 (p = 0.216), respectively.
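The continuous-age result can be restated as a logistic-regression coefficient. A brief sketch, assuming (this reading is ours, not stated in the paper) that the reported 2.3% per-year decrease corresponds to a per-year odds ratio of 0.977, i.e. a coefficient beta = ln(0.977); per-year odds ratios compound multiplicatively across an age gap:

```python
import math

# Per-year log-odds change implied by a 2.3% per-year decrease in odds
beta = math.log(1 - 0.023)           # ~ -0.0233

# Odds ratio implied across a 20-year age difference (0.977 ** 20)
or_20_years = math.exp(20 * beta)
print(round(or_20_years, 3))
```

So under this reading, a driver 20 years older would have roughly 0.63 times the odds of presenting after an MVC, all else equal, which is consistent in direction with the large young-adult odds ratios reported above.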
Conclusion
Patients who sustained orthopaedic injuries as a result of a motor vehicle collision were younger and sent more text messages per week than those patients who were older and/or injured by a cause other than a motor vehicle collision. Both factors (young age and high frequency of texting) combined demonstrated the greatest increase in risk of being involved in an MVC. Texting is a particularly hazardous form of driver inattention that increases the likelihood of being involved in a motor vehicle collision, even compared to other forms of phone use; awareness campaigns and legislation regarding the issue should therefore target young drivers as the population group most at risk.
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion" ]
[ "With the advent of new technologies such as smaller, more mobile devices and phones capable of accessing email and the internet, cell phone use has skyrocketed in the past few decades. The ubiquity of smart phones and text messaging provides a significant source of driver distraction and inattentiveness, and many studies have attempted to quantitatively prove this hypothesis. A retrospective epidemiology study conducted in Quebec, Canada, in 2003 aimed to identify a link between cell phone use and motor vehicle collisions (MVC). After receiving over 36,000 questionnaires, the study concluded that cell phone users had a higher risk of incurring an MVC compared to non-users, with a dose-response relationship between the frequency of cell phone use and MVC risk.1 In addition, studies conducted by the Virginia Tech Transportation Institute (VTTI) suggested that text messaging, in particular, was associated with the highest risk of all cell phone-related tasks.2 Specifically, VTTI’s research demonstrated that text messaging while driving made a crash or near crash experience 23 times more likely than when driving without a phone, and that text messaging caused drivers to take their eyes off the road for an average of 4.6 seconds over a 6 second interval, which was the longest duration of time for any cell phone-related activity.2 Most recently, in a 2011 report on distracted driving, the Governors Highway Safety Association (GHSA) used surveys to conclude that about one eighth of drivers admitted to texting while driving, with younger drivers reporting texting while driving more frequently than older drivers.3 This latter result is important as teen drivers have the highest crash rate per mile driven of any age group, with crash rates declining with each year of increasing age but not reaching the lowest levels until after age 30.4\nThus, texting while driving seems to be reasonably prevalent and significantly dangerous; however, the lack of uniform prohibition in 
the US is disconcerting. Currently only thirty states, including Tennessee, have legislation banning texting while driving.5 Most studies, including the aforementioned one conducted by the VTTI, have demonstrated that cell phone use impairs driving performance by increasing driver inattentiveness, thereby significantly increasing the risk of MVC.2,6 Although this association appears intuitive, there are currently no studies linking texting while driving to actual trauma caused by MVCs. The majority of relevant research has been conducted under controlled simulated environments, and studies that evaluate actual risk of injury with texting behavior are rare. Moreover, while simulation studies allow the manipulation of independent variables in a randomized, controlled setting, there is disagreement about the applicability of their conclusions to actual driving. The aforementioned Quebec study 1identified a correlation between actual MVCs and cell phone use, but it simply looked at general phone use and did not address texting behavior or phone use while driving. Greater information concerning the connection between texting and actual MVC is needed. \nSome of the referenced retrospective studies did identify an association between cell phone use and MVCs, but various limitations prevented them from establishing a causal relationship. For example, the epidemiology study conducted in Canada was performed using cell phone bills, which did not provide information regarding cell phone use while driving, and did not include data regarding texting behavior. In addition, the realistic VTTI study found no statistical association between talking on the phone and MVC risk. A recent study conducted by the Highway Loss Data Institute determined that laws banning texting while driving have not decreased crash risk and in some cases the crash risk has paradoxically increased. 7 In addition, Goodwin et al. 
have studied the long-term effects of North Carolina’s 2006 law banning all cell phone use by drivers younger than 18.8 Two years after the law took effect, cell phone use by teenage drivers in North Carolina was not significantly different than that of teenage drivers in South Carolina, where a cell phone restriction does not exist. A majority of teenagers interviewed were aware of the law but believed it was not enforced.8 Furthermore, Braitman and McCartt used telephone surveys to conclude that while laws banning hand-held phone use seemed to discourage some drivers from using a phone while driving, laws banning texting in particular while driving had little effect on the reported frequency of texting while driving in any age group.9\nThese results and the limitations of the epidemiology studies have raised questions regarding the actual impact of cell phone use and/or texting on MVC risk. Accordingly, there is a need for more quantitative data supporting the association between cell phone use, texting, and MVCs. In addition, studies are needed to determine the association between cell phone use/texting while driving and resultant trauma sustained in MVCs. These studies would be critical in highlighting the healthcare costs that result from this dangerous behavior, and could help motivate legislation and community action that would save lives and healthcare dollars in the future. Accordingly, this study was designed to evaluate whether or not the frequency of texting while driving and/or general texting frequency is associated with an increased risk of incurring a motor vehicle collision.", "From October 2010 to March 2011, questionnaires were distributed to all trauma patients who presented to the Orthopedic Trauma Clinic at Vanderbilt University Medical Center in Nashville, TN. 
The questionnaire consisted of 13 brief questions divided into four categories – basic demographics, trauma details/medical information, automobile involvement in trauma, and cell phone usage. The motor vehicle portion consisted of three questions with the main goal of determining if the trauma was a result of an MVC and if the patient was driving at the time of the collision. The model of the vehicle was also collected in order to be able to differentiate between automobile and motorcycle collisions. The cell phone component contained three specific questions regarding phone use: how many hours per week do patients use their phone, how many texts per week do they send, and how many texts per week do they send while driving. For statistical purposes, hours spent talking on the phone was classified as “general phone use” and does not include texting.\nQuestionnaires were distributed to all new patients in the Orthopedic Trauma Clinic by the clinic nurses during their scheduled clinic visit. Patients who were unable to comprehend the questionnaire due to a language barrier or illiteracy were excluded from the study. Each questionnaire included a brief disclaimer explaining the purpose of the study and ensuring patient confidentiality; patient participation was optional. Aside from the patient’s birth date, no other identifiable information was collected, and data was stored in a secured de-identified dataset to further protect patient confidentiality. The study was approved by the Institutional Review Board of the Vanderbilt University School of Medicine. Collected questionnaires that were not complete were excluded from data analysis. The questionnaires that fit the inclusion criteria were grouped into two categories based on the specifics of the trauma. Patients who were involved in an MVC and were driving the vehicle at the time of the collision were assigned to Group A. All other patients were assigned to Group B. In total, 237 questionnaires were collected. 
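The Group A/B assignment described above amounts to a two-condition classification rule. A minimal sketch (function and argument names are illustrative, not taken from the study's materials):

```python
def assign_group(involved_in_mvc: bool, was_driving: bool) -> str:
    """Group A: the patient was driving a vehicle involved in an MVC.
    Group B: all other trauma patients (the study's grouping rule)."""
    return "A" if (involved_in_mvc and was_driving) else "B"
```

Note that under this rule a passenger injured in an MVC falls into Group B, matching the study's driver-only definition of Group A.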
Sixty questionnaires were excluded due to incomplete information such as demographic information, mechanism of injury, automobile information, and phone usage information. Out of the remaining 177 questionnaires, 57 fit the eligibility criteria to be assigned to Group A, while 120 were assigned to Group B. The average age of patients in Group A (MVC) was 38.0, and the age range was from 18 to 76. The average age of patients in Group B (non-MVC) was 44.4, and the age range was from 18 to 77.\nTwo logistic models were utilized, comparing the likelihood of being involved in an MVC for those who declared that they had sent more than 30 texts per week in general and those patients who had sent more than 30 texts while driving per week (“heavy texters”) to those who had sent less (“non-heavy texters”). The decision to use 30 texts as the cut-off for “heavy texters” was made to ensure sufficient separation of those individuals who very rarely or never engaged in texting behaviors from those who text more often. If a higher cut-off (such as 100 texts sent per week) had been chosen, the patients who only sent a few or no texts in a week would have been grouped with the majority of respondents, preventing a comparison of individuals with disparate texting behaviors. In addition, the logistic models were utilized to make the same comparison after separating the questionnaires by age – 25 years of age or younger (“young adult”) and 26 years of age or older (“adult”). This divided the sample into four subsets – young adult non-heavy texters, young adult heavy texters, adult non-heavy texters, and adult heavy texters. Finally, statistical analysis was performed using STATA 10.", "Texting was found to be the cell phone activity associated with the greatest probability of being involved in a motor vehicle collision. 
In fact, logistic regression indicated no significant association between heavy general phone use (defined as more than four hours of talking on the phone per week) and the probability of being involved in a motor vehicle collision when compared to low or no phone use (p = 0.694) (Table 1). \nPatients who sent more than 30 texts per week were 2.22 times more likely to have presented to the Vanderbilt Orthopedic Trauma Clinic after being involved in an MVC in which they were driving (p = 0.015) compared with those who texted less frequently (Table 2). The frequencies of texting for MVC and non-MVC groups are displayed in Table 3. By contrast, the vast majority of patients (84%) claimed to never text while driving (Table 4). There were only two patients who reported themselves as sending more than 30 texts while driving per week (Table 4), and as such, an odds ratio for heavy texting while driving could not be calculated. Despite the lack of data, a linear regression showed that texting in general and texting while driving were indeed associated (p < 0.001), though only 12.0% of the variance in texting while driving could be explained by texting in general (R2 = 0.1201). \nA subgroup analysis was conducted controlling for age, dividing the sample into four subsets – young adult non-heavy texters, young adult heavy texters, adult non-heavy texters, and adult heavy texters. Being a young adult alone increased the likelihood of incurring an MVC by a factor of 5.64 (p = 0.001), and heavy texting alone increased the same likelihood by a factor of 2.22 (p = 0.015) (Table 2). The subsets were compared against adult non-heavy texters, because this was the group with the lowest incidence of MVC. 
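The odds ratios in this section come from logistic models with a single binary predictor, which for a 2×2 exposure-by-outcome table reduce to the classical cross-product estimate. A minimal sketch of that computation (the cell counts below are hypothetical, chosen only so the cross-product happens to equal the reported 2.22; the study's actual counts are not reproduced here):

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Cross-product odds ratio for a 2x2 table:
       exposed:   a cases, b controls
       unexposed: c cases, d controls."""
    return (a * d) / (b * c)

def wald_ci_95(a: int, b: int, c: int, d: int) -> tuple:
    """95% Wald confidence interval for the odds ratio, computed on the log scale."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

# Hypothetical counts (NOT the study's data): heavy texters 30 MVC / 40 non-MVC,
# non-heavy texters 27 MVC / 80 non-MVC -> OR = (30 * 80) / (40 * 27) = 2.22
or_hat = odds_ratio(30, 40, 27, 80)
low, high = wald_ci_95(30, 40, 27, 80)
```

Because the confidence interval is built on the log scale from the reciprocal cell counts, small cells (such as the two heavy texting-while-driving respondents) blow up the standard error, which is why that odds ratio could not be estimated.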
Young adult heavy texters were 6.76 times more likely to be involved in an MVC than adult non-heavy texters (p < 0.001), while young adult non-heavy texters and adult heavy texters were 6.65 (p = 0.005) and 1.72 (p = 0.186) times as likely, respectively (Table 5). However, it is important to note that wide confidence intervals indicate substantial variance and therefore reduced precision of these odds ratios despite the presence of a significant association.\nMoreover, an analysis of variance (ANOVA) demonstrated that a greater proportion of the elevated risk of incurring an MVC was due to age than to frequency of texting. Attempts to explain this effect were made by repeating the analysis with different parameters. Treating age as a continuous variable showed that every additional year of age resulted in a 2.3% decreased risk of incurring an MVC (p = 0.066). Changing our comparison from “heavy texters” versus “non-heavy texters” to “texters” (those who declared that they had sent more than one text per week) versus “non-texters” (those who declared that they had sent no texts) showed that texting in general and texting while driving increased the likelihood of incurring an MVC by factors of 1.45 (p = 0.259) and 1.61 (p = 0.216), respectively. ", "The aim of this study was to investigate whether or not the frequency of texting while driving and/or general texting frequency was associated with an increased risk of incurring an MVC. A number of associations became clear from the data. First, as the VTTI research demonstrated, texting was found to be the cell phone activity associated with the greatest probability of being involved in a motor vehicle collision.2 In fact, our results indicated no association between heavy general phone use and increased probability of being involved in an MVC when compared to low or no phone use. 
Conversely, patients who were heavy texters (sent more than 30 texts per week) were 2.22 times more likely to be involved in an MVC than those who texted less frequently. Accordingly, the specific act of manually manipulating a cell phone (as in receiving or sending a text message) may be a particularly significant source of driver inattention contributing to the increased incidence of motor vehicle collisions. These results have been supported in previous studies including the VTTI study, as well as an 18-month-long simulator study from the University of Utah, which showed an eight times greater motor vehicle collision risk when texting than when not texting among college students.10\nOur results do not allow us to conclude that heavy texting specifically while driving is associated with an increased risk of being involved in an MVC. It is noteworthy that a mere two patients reported themselves as sending more than 30 texts per week while driving. In fact, the vast majority of patients (84.36%) claimed to never text and drive; however, patients are likely hesitant to admit to texting while driving, as Tennessee is one of the states in which texting while driving is legally prohibited. Various studies have shown texting frequencies, both in general and while driving, to be increasing in past years independent of texting bans or legislation, particularly among young drivers.11,12 In addition, a linear regression showed that texting in general and texting while driving were associated with statistical significance. Consequently, despite a low R-squared, we may assume some correlation between general texting frequency and frequency of texting while driving among our sample, lending clinical significance to the aforementioned association between general texting frequency and likelihood of being involved in an MVC. 
\nWe must also recognize that age has a considerable demonstrated impact on the likelihood of incurring an MVC.13 Accordingly, it is prudent to divide our sample into subsets and analyze the effect of both of these variables. The distinction between “young adult” and “adult” drivers was set at 25, because this is the age at which car insurance rates usually drop, indicating that crash rates go down at this age. Young adult heavy texters were most at risk for incurring an MVC, with a 6.76 times increase in probability as compared to adult non-heavy texters. Since age and heavy texting alone do not increase the likelihood of incurring an MVC to such an extent, we can conclude that both age and a high frequency of texting are correlated with the likelihood of incurring an MVC and that these variables are not independent of each other. However, a two-way ANOVA demonstrated that age was a more significant contributing factor to the elevated risk of incurring an MVC than frequency of texting. Treating age as a continuous variable (instead of “young adult” versus “adult”) did not produce useful information, as the risk of incurring an MVC decreased negligibly with increasing age, and car insurance companies almost always treat age as a discrete variable when determining insurance rates. In addition, changing the comparison from “heavy texters” versus “non-heavy texters” to “texters” versus “non-texters” in an effort to explain this contributory effect produced statistically insignificant results. Thus, despite this study’s initial aim of determining whether or not an association exists between texting while driving and/or general texting frequency and an increased risk of incurring an MVC, it appears that the more statistically significant association exists between age and MVC risk. \nThere are several limitations of the study. 
First, the study’s analysis and conclusions could be strengthened by increasing the sample size, as 237 questionnaires may be considered too few to determine accurate MVC probabilities. Second, the determination of our cut-offs for statistical analysis also has certain implications. For example, we chose to use 30 texts per week as the cut-off to define “heavy texters”, which may be considered lower than the definition of “heavy texters” used by others. While our subjectively-selected cut-off facilitated a separation between those who rarely engaged in texting behaviors and those who texted more frequently, one must exercise caution when applying our results to situations involving different cut-offs. Third, there may be recall or memory bias, as we have no way of determining the accuracy of patients’ recollection of their average phone use and texting habits. There may also be some response bias, as patients may presume they are being tested to determine adherence to Tennessee’s laws against texting while driving, thereby prompting more patients to answer that they do not text and drive at all. Fourth, we were unable to separate the effects of age and cell phone use on MVC risk. Most notably, while this study identified an association between general texting frequency and MVCs, the study did not identify a conclusive link between texting while driving and MVCs in this sample population. Over 84% of the patients claimed to never text while driving and there was no way to determine if patients were texting at the time of their accident. This study and other previously conducted retrospective studies identify the need for a better tool to quantitatively measure texting while driving habits, specifically in the time immediately preceding an MVC. For example, a structured interview could be a better way to overcome the response bias that may prevent patients from truthfully answering questions. 
Assurance that results will be kept confidential can be more powerful when given in person than in a written disclaimer at the end of a survey tool. Regardless, until researchers are able to develop methods to overcome this barrier, it will be difficult to make a claim of causality between cell phone use and MVCs. This lack of established causation is perhaps one of the reasons why more comprehensive legislation against texting while driving does not exist today.\nThis study is one of the first to examine the association between texting behavior and increased MVC frequency using actual hospital patient data. This association is crucial to determining the effect of texting while driving on the health care costs incurred by physicians and hospitals. A conclusively-demonstrated association could be the first step to invoking national legislation regarding texting while driving. The US Department of Transportation issued regulatory guidance in January 2010 prohibiting text messaging by commercial motor vehicle drivers,14 but studies on the link between trauma and texting from other large healthcare institutions are needed to transform regulatory guidance into legislative restriction. Furthermore, the Decide to Drive national public service campaign highlighting the dangers of texting while driving, led jointly by the American Academy of Orthopaedic Surgeons (AAOS) and the Orthopaedic Trauma Association (OTA), would benefit from the results of this study and similar ones. Legislation alone may not be sufficient to curb the dangerous behavior of texting while driving. As evidenced by the GHSA report,3 laws banning hand-held cell phone use while driving tend to cause an initial sharp decrease in cell phone use followed by a gradual increase. 
Regardless of the method adopted to limit texting while driving, it is most important to target young novice drivers who are already at higher crash risk (as demonstrated by the results of this study) and thereby more likely to suffer serious consequences from distracting behaviors like texting while driving.", "Patients who sustained orthopaedic injuries as a result of a motor vehicle collision were younger and sent more text messages per week than patients who were injured by a cause other than a motor vehicle collision. Both factors combined (young age and high frequency of texting) demonstrated the greatest increase in risk of being involved in an MVC. Texting is a particularly hazardous form of driver inattention that increases the likelihood of being involved in a motor vehicle collision, even compared to other forms of phone use; awareness campaigns and legislation regarding the issue should therefore target young drivers as the population group most at risk." ]
[ null, "methods", "results", "discussion", "conclusion" ]
[ "Texting", "Driving", "Trauma", "Orthopaedic injury", "Inattention", "Motor vehicle collision" ]
Introduction: With the advent of new technologies such as smaller, more mobile devices and phones capable of accessing email and the internet, cell phone use has skyrocketed in the past few decades. The ubiquity of smart phones and text messaging provides a significant source of driver distraction and inattentiveness, and many studies have attempted to quantitatively prove this hypothesis. A retrospective epidemiology study conducted in Quebec, Canada, in 2003 aimed to identify a link between cell phone use and motor vehicle collisions (MVC). After receiving over 36,000 questionnaires, the study concluded that cell phone users had a higher risk of incurring an MVC compared to non-users, with a dose-response relationship between the frequency of cell phone use and MVC risk.1 In addition, studies conducted by the Virginia Tech Transportation Institute (VTTI) suggested that text messaging, in particular, was associated with the highest risk of all cell phone-related tasks.2 Specifically, VTTI’s research demonstrated that text messaging while driving made a crash or near crash experience 23 times more likely than when driving without a phone, and that text messaging caused drivers to take their eyes off the road for an average of 4.6 seconds over a 6 second interval, which was the longest duration of time for any cell phone-related activity.2 Most recently, in a 2011 report on distracted driving, the Governors Highway Safety Association (GHSA) used surveys to conclude that about one eighth of drivers admitted to texting while driving, with younger drivers reporting texting while driving more frequently than older drivers.3 This latter result is important as teen drivers have the highest crash rate per mile driven of any age group, with crash rates declining with each year of increasing age but not reaching the lowest levels until after age 30.4 Thus, texting while driving seems to be reasonably prevalent and significantly dangerous; however, the lack of uniform 
prohibition in the US is disconcerting. Currently only thirty states, including Tennessee, have legislation banning texting while driving.5 Most studies, including the aforementioned one conducted by the VTTI, have demonstrated that cell phone use impairs driving performance by increasing driver inattentiveness, thereby significantly increasing the risk of MVC.2,6 Although this association appears intuitive, there are currently no studies linking texting while driving to actual trauma caused by MVCs. The majority of relevant research has been conducted under controlled simulated environments, and studies that evaluate actual risk of injury with texting behavior are rare. Moreover, while simulation studies allow the manipulation of independent variables in a randomized, controlled setting, there is disagreement about the applicability of their conclusions to actual driving. The aforementioned Quebec study 1identified a correlation between actual MVCs and cell phone use, but it simply looked at general phone use and did not address texting behavior or phone use while driving. Greater information concerning the connection between texting and actual MVC is needed. Some of the referenced retrospective studies did identify an association between cell phone use and MVCs, but various limitations prevented them from establishing a causal relationship. For example, the epidemiology study conducted in Canada was performed using cell phone bills, which did not provide information regarding cell phone use while driving, and did not include data regarding texting behavior. In addition, the realistic VTTI study found no statistical association between talking on the phone and MVC risk. A recent study conducted by the Highway Loss Data Institute determined that laws banning texting while driving have not decreased crash risk and in some cases the crash risk has paradoxically increased. 7 In addition, Goodwin et al. 
have studied the long-term effects of North Carolina’s 2006 law banning all cell phone use by drivers younger than 18. 8 Two years after the law took effect, cell phone use by teenage drivers in North Carolina was not significantly different than that of teenage drivers in South Carolina, where a cell phone restriction does not exist. A majority of teenagers interviewed were aware of the law but believed it was not enforced. 8 Furthermore, Braitman and McCartt used telephone surveys to conclude that while laws banning hand-held phone use seemed to discourage some drivers from using a phone while driving, laws banning texting in particular while driving had little effect on the reported frequency of texting while driving in any age group.9 These results and the limitations of the epidemiology studies have raised questions regarding the actual impact of cell phone use and/or texting on MVC risk. Accordingly, there is a need for more quantitative data supporting the association between cell phone use, texting, and MVCs. In addition, studies are needed to determine the association between cell phone use/texting while driving and resultant trauma sustained in MVCs. These studies would be critical in highlighting the healthcare costs that result from this dangerous behavior, and could help motivate legislation and community action that would save lives and healthcare dollars in the future. Accordingly, this study was designed to evaluate whether or not the frequency of texting while driving and/or general texting frequency is associated with an increased risk of incurring a motor vehicle collision. Methods: From October 2010 to March 2011, questionnaires were distributed to all trauma patients who presented to the Orthopedic Trauma Clinic at Vanderbilt University Medical Center in Nashville, TN. 
The questionnaire consisted of 13 brief questions divided into four categories – basic demographics, trauma details/medical information, automobile involvement in trauma, and cell phone usage. The motor vehicle portion consisted of three questions with the main goal of determining if the trauma was a result of an MVC and if the patient was driving at the time of the collision. The model of the vehicle was also collected in order to be able to differentiate between automobile and motorcycle collisions. The cell phone component contained three specific questions regarding phone use: how many hours per week do patients use their phone, how many texts per week do they send, and how many texts per week do they send while driving. For statistical purposes, hours spent talking on the phone was classified as “general phone use” and does not include texting. Questionnaires were distributed to all new patients in the Orthopedic Trauma Clinic by the clinic nurses during their scheduled clinic visit. Patients who were unable to comprehend the questionnaire due to a language barrier or illiteracy were excluded from the study. Each questionnaire consisted of a brief disclaimer explaining the purpose of the study and ensuring patient confidentiality; patient participation was optional. Aside from the patient’s birth date, no other identifiable information was collected, and data was stored in a secured de-identified ¬dataset to further protect patient confidentiality. The study was approved by the Institutional Review Board of the Vanderbilt University School of Medicine. Collected questionnaires that were not complete were excluded from data analysis. The questionnaires that fit the inclusion criteria were grouped into two categories based on the specifics of the trauma. Patients who were involved in a MVC and were driving the vehicle at the time of the collision were assigned to Group A. All other patients were assigned to Group B. 237 questionnaires were collected. 
60 questionnaires were excluded due to incomplete information such as demographic information, mechanism of injury, automobile information, and phone usage information. Out of the remaining 177 questionnaires, 57 fit the eligibility criteria to be assigned to Group A, while 120 were assigned to Group B. The average age of patients in Group A (MVC) was 38.0, and the age range was from 18 to 76. The average age of patients in Group B (non-MVC) was 44.4, and the age range was from 18 to 77. Two logistic models were utilized, comparing the likelihood of being involved in a MVC for those that declared that they had sent more than 30 texts per week in general and those patients that had sent more than 30 texts while driving per week (“heavy texters”) to those who had sent less (“non-heavy texters”). The decision to use 30 texts as the cut-off for “heavy texters” was made to ensure sufficient separation of those individuals who very rarely or never engaged in texting behaviors from those who text more often. If a higher cut-off (such as 100 texts sent per week) had been chosen, the patients that only sent a few or no texts in a week would have been grouped with the majority of respondents, preventing a comparison of individuals with disparate texting behaviors. In addition, the logistic models were utilized to make the same comparison after separating the questionnaires by age – 25 years of age or younger (“young adult”) and 26 years of age or older (“adult”). This divided the sample into four subsets – young adult non-heavy texters, young adult heavy texters, adult non-heavy texters, and adult heavy texters. Finally, statistical analysis was performed using STATA 10. Results: Texting was found to be the cell phone activity associated with the greatest probability of being involved in a motor vehicle collision. 
In fact, the results indicated no association between heavy general phone use (deemed to be greater than four hours of talking on the phone per week) and the probability of being involved in a motor vehicle collision when compared to low or no phone use (p = 0.694) (Table 1). Logistic regression found no significant association between the risk of incurring an MVC and heavy general phone (greater than 4 hours per week) (p=.694). Patients who sent more than 30 texts per week were 2.22 times more likely to have presented to the Vanderbilt Orthopedic Trauma Clinic after being involved in an MVC in which they were driving (p = 0.015) compared with those who texted less frequently (Table 2). The frequencies of texting for MVC and non-MVC groups are displayed in Table 3. By contrast, the vast majority of patients (84%) claimed to never text while driving (Table 4). There were only two patients who reported themselves as sending more than 30 texts while driving per week (Table 4), and as such, an odds ratio for heavy texting while driving could not be calculated. Despite the lack of data, a linear regression showed that texting in general and texting while driving were indeed associated (p=0.000), though only 12.0% of the variance in texting in driving could be explained by texting in general (R2 = 0.1201). A subgroup analysis was conducted controlling for age, dividing the sample into four subsets – young adult non-heavy texters, young adult heavy texters, adult non-heavy texters, and adult heavy texters. Being a young adult alone increased the likelihood of incurring an MVC by 5.64 (p = 0.001), and heavy texting alone increased the same likelihood by 2.22 (p = 0.015) (Table 2). The subsets were compared against adult non-heavy texters, because this was the group with the lowest incidence of MVC. 
Young adult heavy texters were 6.76 times more likely to be involved in an MVC than adult non-heavy texters (p = 0.000), while young adult non-heavy texters and adult heavy texters were 6.65 (p = 0.005) and 1.72 (p = 0.186) times as likely, respectively (Table 5). However, it is important to note that large confidence intervals indicate greater levels of variance and therefore decreased accuracy and reliability of these odd ratios despite the presence of a significant association. However, an analysis of variance (ANOVA) demonstrated that a greater proportion of the elevated risk of incurring an MVC was due to age than frequency of texting. Attempts to explain this effect were made by repeating the analysis with different parameters. Treating age as a continuous variable showed that every additional year of age resulted in a 2.3% decreased risk of incurring an MVC (p = 0.066). Changing our comparison from “heavy texters” versus “non-heavy texters” to “texters” (those that declared that they had sent more than one text per week) versus “non-texters” (those that declared that they had sent no texts) showed that texting in general and texting while driving increased the likelihood of incurring an MVC by 1.45 (p = 0.259) and 1.61 (p = 0.216), respectively. Discussion: The aim of this study was to investigate whether or not the frequency of texting while driving and/or general texting frequency was associated with an increased risk of incurring an MVC. A number of associations became clear from the data. First, as the VTTI research demonstrated, texting was found to be the cell phone activity associated with the greatest probability of being involved in a motor vehicle collision.2 In fact, our results indicated no association between heavy general phone use and increased probability of being involved in an MVC when compared to low or no phone use. 
Conversely, patients who were heavy texters (sent more than 30 texts per week) were 2.22 times more likely to be involved in an MVC than those who texted less frequently. Accordingly, the specific act of manually manipulating a cell phone (as in receiving or sending a text message) may be a particularly significant source of driver inattention contributing to the increased incidence of motor vehicle collisions. These results have been supported in previous studies including the VTTI study, as well as an 18-month-long simulator study from the University of Utah, which showed an eight times greater motor vehicle collision risk when texting than when not texting among college students. 10 Our results do not allow us to conclude that heavy texting specifically while driving is associated with an increased risk of being involved in a MVC. It is noteworthy that a mere two patients reported themselves as sending more than 30 texts per week while driving. In fact, the vast majority of patients (84.36%) claimed to never text and drive; however, patients are likely hesitant to admit to texting while driving, as Tennessee is one of the states in which texting while driving is legally prohibited. Various studies have shown texting frequencies, both in general and while driving, to be increasing in past years independent of texting bans or legislation, particularly among young drivers.11,12In addition, a linear regression showed that texting in general and texting while driving were associated with statistical significance. Consequently, despite a low R-squared, we may assume some correlation between general texting frequency and frequency of texting while driving among our sample, lending clinical significance to the aforementioned association between general texting frequency and likelihood of being involved in an MVC. 
We must also recognize that age has a considerable demonstrated impact on the likelihood of incurring an MVC.13 Accordingly, it is prudent to divide our sample into subsets and analyze the effect of both of these variables. The distinction between “young adult” and “adult” drivers was set at 25 years, because this is the age at which car insurance rates usually drop, indicating that crash rates decline at this age. Young adult heavy texters were most at risk for incurring an MVC, with a 6.76 times increase in probability as compared to adult non-heavy texters. Since neither age nor heavy texting alone increases the likelihood of incurring an MVC to such an extent, we can conclude that both young age and a high frequency of texting are correlated with the likelihood of incurring an MVC and that these variables are not independent of each other. However, a two-way ANOVA demonstrated that age was a more significant contributing factor to the elevated risk of incurring an MVC than frequency of texting. Treating age as a continuous variable (instead of “young adult” versus “adult”) did not produce useful information, as the risk of incurring an MVC decreased only negligibly with increasing age, and car insurance companies almost always treat age as a discrete variable when determining insurance rates. In addition, changing the comparison from “heavy texters” versus “non-heavy texters” to “texters” versus “non-texters” in an effort to explain this contributory effect produced statistically insignificant results. Thus, despite this study’s initial aim of determining whether an association exists between texting while driving and/or general texting frequency and an increased risk of incurring an MVC, it appears the more statistically significant association exists between age and MVC risk. There are several limitations of the study. 
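As an arithmetic check on the continuous-age model, a 2.3% decrease in risk per additional year corresponds to a per-year odds ratio of about 0.977, i.e. a logistic regression coefficient of ln(0.977) ≈ −0.023; the effect becomes substantial only when compounded over many years, which is consistent with treating age in discrete bands. A short illustration (plain arithmetic, not a re-analysis of the study data):

```python
import math

# Reported effect: odds of an MVC fall 2.3% per additional year of age.
per_year_or = 1 - 0.023            # per-year odds ratio, 0.977
beta = math.log(per_year_or)       # equivalent logistic coefficient

# Compounded over the ten-year gap between a 20- and a 30-year-old:
decade_or = per_year_or ** 10      # roughly 21% lower odds overall

print(f"beta = {beta:.4f}, ten-year OR = {decade_or:.2f}")
```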
First, the study’s analysis and conclusions could be strengthened by increasing the sample size, as 237 questionnaires may be considered too few to determine accurate MVC probabilities. Second, the determination of our cut-offs for statistical analysis has certain implications. For example, we chose to use 30 texts per week as the cut-off to define “heavy texters”, which may be considered lower than the definition of “heavy texters” used by others. While our subjectively selected cut-off facilitated a separation between those who rarely engaged in texting behaviors and those who texted more frequently, one must exercise caution when applying our results to situations involving different cut-offs. Third, there may be recall or memory bias, as we have no way of determining the accuracy of patients’ recollection of their average phone use and texting habits. There may also be some response bias, as patients may presume they are being tested to determine adherence to Tennessee’s laws against texting while driving, pushing more patients to answer that they do not text and drive at all. Fourth, we were unable to separate the effects of age and cell phone use on MVC risk. Most notably, while this study identified an association between general texting frequency and MVCs, it did not identify a conclusive link between texting while driving and MVCs in this sample population. Over 84% of the patients claimed to never text while driving, and there was no way to determine whether patients were texting at the time of their accident. This study and other previously conducted retrospective studies identify the need for a better tool to quantitatively measure texting-while-driving habits, specifically in the time immediately preceding an MVC. For example, a structured interview could be a better way to overcome the response bias that may prevent patients from truthfully answering questions. 
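The sample-size limitation can be quantified with the standard normal-approximation formula for comparing two proportions. The MVC rates below are hypothetical placeholders (the study does not report group-level rates in this form); the point is only that detecting even a moderate difference at 80% power requires on the order of a hundred patients per group, more than the subgroups here contained once the sample was split by age and texting level:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group to detect a difference
    between proportions p1 and p2 (two-sided alpha = 0.05,
    power = 0.80, normal approximation)."""
    p_bar = (p1 + p2) / 2
    term = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return math.ceil(term ** 2 / (p1 - p2) ** 2)

# Hypothetical: 45% MVC rate among heavy texters vs 25% otherwise.
print(n_per_group(0.45, 0.25))
```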
Assurance that results will be kept confidential can be more powerful when given in person as opposed to a written disclaimer at the end of a survey tool. Regardless, until researchers are able to develop methods to overcome this barrier, it will be difficult to make a claim of causality between cell phone use and MVCs. This lack of established causation is perhaps one of the reasons why more comprehensive legislation against texting while driving does not exist today. This study is one of the first to examine the association between texting behavior and increased MVC frequency using actual hospital patient data. This association is crucial to determining the effect of texting while driving on the health care costs incurred by physicians and hospitals. A conclusively demonstrated association could be the first step toward invoking national legislation regarding texting while driving. The US Department of Transportation issued regulatory guidance in January 2010 prohibiting text messaging by commercial motor vehicle drivers,14 but studies on the link between trauma and texting from other large healthcare institutions are needed to transform regulatory guidance into legislative restriction. Furthermore, the Decide to Drive national public service campaign highlighting the dangers of texting while driving, led jointly by the American Academy of Orthopaedic Surgeons (AAOS) and the Orthopaedic Trauma Association (OTA), would benefit from the results of this study and similar ones. Legislation alone may not be sufficient to curb the dangerous behavior of texting while driving. As evidenced by the GHSA report,3 laws banning hand-held cell phone use while driving tend to cause an initial sharp decrease in cell phone use followed by a gradual increase. 
Regardless of the method adopted to limit texting while driving, it is most important to target young novice drivers, who are already at higher crash risk (as demonstrated by the results of this study) and thereby more likely to suffer serious consequences from distracting behaviors like texting while driving. Conclusion: Patients who sustained orthopaedic injuries as a result of a motor vehicle collision were younger and sent more text messages per week than those patients who were older and/or injured by a cause other than a motor vehicle collision. Both factors combined (young age and high frequency of texting) demonstrated the greatest increase in risk of being involved in an MVC. Texting is a particularly hazardous form of driver inattention that increases the likelihood of being involved in a motor vehicle collision, even compared to other forms of phone use; awareness campaigns and legislation regarding the issue should therefore target young drivers as the population group most at risk.
Background: This study evaluates whether texting frequency while driving and/or texting frequency in general are associated with an increased risk of incurring a motor vehicle collision (MVC) resulting in orthopaedic trauma injuries. Methods: All patients who presented to the Vanderbilt University Medical Center Orthopaedic Trauma Clinic were administered a questionnaire to determine background information, mean phone use, texting frequency, texting frequency while driving, and whether or not the injury was the result of an MVC in which the patient was driving. Results: 237 questionnaires were collected. 60 were excluded due to incomplete data, leaving 57 questionnaires in the MVC group and 120 from patients with non-MVC injuries. Patients who sent more than 30 texts per week (“heavy texters”) were 2.22 times more likely to be involved in an MVC than those who texted less frequently. 84% of respondents claimed to never text while driving. Dividing the sample into subsets on the basis of age (25 years of age or below considered “young adult,” and above 25 years of age considered “adult”), young adult heavy texters were 6.76 times more likely to be involved in an MVC than adult non-heavy texters (p < 0.001). Similarly, young adult non-heavy texters were 6.65 (p = 0.005) times more likely to be involved in an MVC, and adult heavy texters were 1.72 (p = 0.186) times more likely to be involved in an MVC. Conclusions: Patients injured in an MVC sent more text messages per week than non-MVC patients. Additionally, controlling for age demonstrated that young age and heavy general texting frequency combined showed the highest increase in MVC risk, with the former being the variable of greatest effect.
Introduction: With the advent of new technologies such as smaller, more mobile devices and phones capable of accessing email and the internet, cell phone use has skyrocketed in the past few decades. The ubiquity of smart phones and text messaging provides a significant source of driver distraction and inattentiveness, and many studies have attempted to quantitatively prove this hypothesis. A retrospective epidemiology study conducted in Quebec, Canada, in 2003 aimed to identify a link between cell phone use and motor vehicle collisions (MVC). After receiving over 36,000 questionnaires, the study concluded that cell phone users had a higher risk of incurring an MVC compared to non-users, with a dose-response relationship between the frequency of cell phone use and MVC risk.1 In addition, studies conducted by the Virginia Tech Transportation Institute (VTTI) suggested that text messaging, in particular, was associated with the highest risk of all cell phone-related tasks.2 Specifically, VTTI’s research demonstrated that text messaging while driving made a crash or near crash experience 23 times more likely than when driving without a phone, and that text messaging caused drivers to take their eyes off the road for an average of 4.6 seconds over a 6 second interval, which was the longest duration of time for any cell phone-related activity.2 Most recently, in a 2011 report on distracted driving, the Governors Highway Safety Association (GHSA) used surveys to conclude that about one eighth of drivers admitted to texting while driving, with younger drivers reporting texting while driving more frequently than older drivers.3 This latter result is important as teen drivers have the highest crash rate per mile driven of any age group, with crash rates declining with each year of increasing age but not reaching the lowest levels until after age 30.4 Thus, texting while driving seems to be reasonably prevalent and significantly dangerous; however, the lack of uniform 
prohibition in the US is disconcerting. Currently only thirty states, including Tennessee, have legislation banning texting while driving.5 Most studies, including the aforementioned one conducted by the VTTI, have demonstrated that cell phone use impairs driving performance by increasing driver inattentiveness, thereby significantly increasing the risk of an MVC.2,6 Although this association appears intuitive, there are currently no studies linking texting while driving to actual trauma caused by MVCs. The majority of relevant research has been conducted in controlled simulated environments, and studies that evaluate the actual risk of injury associated with texting behavior are rare. Moreover, while simulation studies allow the manipulation of independent variables in a randomized, controlled setting, there is disagreement about the applicability of their conclusions to actual driving. The aforementioned Quebec study1 identified a correlation between actual MVCs and cell phone use, but it looked only at general phone use and did not address texting behavior or phone use while driving. Greater information concerning the connection between texting and actual MVCs is needed. Some of the referenced retrospective studies did identify an association between cell phone use and MVCs, but various limitations prevented them from establishing a causal relationship. For example, the epidemiology study conducted in Canada was performed using cell phone bills, which did not provide information regarding cell phone use while driving, and did not include data regarding texting behavior. In addition, the realistic VTTI study found no statistical association between talking on the phone and MVC risk. A recent study conducted by the Highway Loss Data Institute determined that laws banning texting while driving have not decreased crash risk and in some cases the crash risk has paradoxically increased.7 In addition, Goodwin et al. 
have studied the long-term effects of North Carolina’s 2006 law banning all cell phone use by drivers younger than 18.8 Two years after the law took effect, cell phone use by teenage drivers in North Carolina was not significantly different from that of teenage drivers in South Carolina, where a cell phone restriction does not exist. A majority of teenagers interviewed were aware of the law but believed it was not enforced.8 Furthermore, Braitman and McCartt used telephone surveys to conclude that while laws banning hand-held phone use seemed to discourage some drivers from using a phone while driving, laws banning texting in particular while driving had little effect on the reported frequency of texting while driving in any age group.9 These results and the limitations of the epidemiology studies have raised questions regarding the actual impact of cell phone use and/or texting on MVC risk. Accordingly, there is a need for more quantitative data supporting the association between cell phone use, texting, and MVCs. In addition, studies are needed to determine the association between cell phone use/texting while driving and resultant trauma sustained in MVCs. These studies would be critical in highlighting the healthcare costs that result from this dangerous behavior, and could help motivate legislation and community action that would save lives and healthcare dollars in the future. To that end, this study was designed to evaluate whether the frequency of texting while driving and/or general texting frequency is associated with an increased risk of incurring a motor vehicle collision.
Keywords: Texting, Driving, Trauma, Orthopaedic injury, Inattention, Motor vehicle collision
MeSH terms: Accidents, Traffic; Adult; Age Factors; Automobile Driving; Dangerous Behavior; Female; Fractures, Bone; Humans; Logistic Models; Male; Motor Vehicles; Risk Factors; Statistics as Topic; Surveys and Questionnaires; Tennessee; Text Messaging; Trauma Centers
Lower cyclooxygenase-2 expression is associated with recurrence of solitary non-muscle invasive bladder carcinoma.
PMID: 23126361
Background: A new modality is necessary to prevent recurrence of superficial bladder cancer after complete transurethral resection because of the high recurrence rate even with current prophylaxis protocols. Methods: In order to analyze the predictive value of cyclooxygenase-2 (COX-2) expression and tumor infiltrating lymphocytes (TILs) in recurrence of this disease, tumor specimens from 127 patients with solitary papillary non-muscle invasive bladder cancer (NMIBC), 78 with recurrent disease and 49 without recurrence during a follow up of at least 5 years, were retrieved for tissue microarray construction and immunohistochemical analysis. COX-2 expression was scored according to Allred's scoring protocol, while the presence of TILs was categorized as absent (no) or present (yes) on whole tissue sections. Results: COX-2 immunoreactivity was present in 70 (71%) of cases, weak in 16% and strong in 55%, while 29 (29%) tumors were negative. TILs were present in 64 (58%) NMIBC, while 44 cases (41%) did not reveal mononuclear infiltration in the tumoral stroma. Statistical analysis demonstrated a higher proportion of patients with recurrence in the group with COX-2 score 0, and a lower proportion in the group with score 2 (p=0.0001, p=0.0101, respectively). In addition, a higher proportion of recurrent patients in the group with no TILs, and a lower proportion in the group with TILs, were found (p=0.009, p=0.009, respectively). Univariate and multivariate analysis revealed overexpression of COX-2 and presence of TILs as negative predictors. Conclusions: Patients with lower COX-2 expression and absence of TILs in NMIBC need to be followed up more vigorously and probably selected for adjuvant therapy.
MeSH terms: Adult; Aged; Aged, 80 and over; Biomarkers, Tumor; Carcinoma, Papillary; Chi-Square Distribution; Cyclooxygenase 2; Down-Regulation; Female; Humans; Immunohistochemistry; Logistic Models; Lymphocytes, Tumor-Infiltrating; Male; Middle Aged; Multivariate Analysis; Neoplasm Invasiveness; Neoplasm Recurrence, Local; Odds Ratio; Risk Factors; Time Factors; Tissue Array Analysis; Treatment Outcome; Urinary Bladder Neoplasms
PMCID: 3527228
Background
Urinary bladder cancer (UBC) accounts for approximately 2% of all malignant diseases, with a male-to-female ratio of about 4:1. The incidence of UBC increases with age [1]. Mortality from transitional cell carcinoma (TCC) of the urinary bladder increases significantly with the progression of superficial to invasive disease. Approximately 75-85% of patients present with non-muscle invasive bladder carcinoma (stages pTa, pT1, pTis). Despite sharing the same category, this is a very heterogeneous group of tumors with various biological outcomes. The main clinical feature of UBC is a high percentage of recurrence [2]. Carcinoma of the urinary bladder is the only malignant neoplasm for which immunotherapy is often included as part of standard care. Intravesical instillations of Bacille Calmette–Guerin (BCG) have been demonstrated to reduce the recurrence rate and the risk of progression to muscle-invasive disease in patients with carcinoma in situ (pTis), as well as non-muscle-invasive urothelial carcinomas [3,4]. BCG immunotherapy results in 70% to 75% and 50% to 60% complete response rates for carcinoma in situ and small residual tumors, respectively [5]. Unfortunately, a significant percentage of patients will fail initial BCG therapy. These patients would benefit much more if they were oriented early to other therapeutic approaches. In addition, another 30% to 50% of BCG responders will develop recurrent tumors within 5 years [6,7]. In fact, 70% of patients treated with transurethral resection (TUR) experience a relapse of the underlying disease, and 15-25% of patients will progress over time to muscle-invasive cancer [2]. Although some prognostic variables have been shown to predict recurrence and can be used to identify patients who require adjuvant therapy after TUR, additional reliable markers for disease progression and recurrence are needed [8-10]. 
Standard prognostic factors like histological grading are limited in predicting possible recurrence of the disease, so understanding molecular processes that could reflect individual biological potential and clinical behavior is important. For that purpose, various biomarkers have been investigated, since they have potential in decoding unique biological features, in identifying patients with high risk of progression after local treatment, and in more reliable prognosis and treatment of UBC [11]. In the present study, cyclooxygenase-2 (COX-2) was investigated. Together with COX-1, it is one of the key prostaglandin G/H synthase enzymes in the synthesis of prostaglandins from arachidonic acid. COX-2 is not expressed in most tissue under normal conditions, but its expression is rapidly induced by growth factors or agents that cause tissue irritation or inflammation [12]. COX-2 is expressed in many solid as well as hematological malignancies [13], where prostaglandins have been reported to increase proliferation, enhance angiogenesis, promote invasion, and inhibit apoptosis and differentiation, thus participating in carcinogenesis through various mechanisms [13,14]. There is evidence that COX-2 expression and activity is also important in the development of UBC [15,16]. To determine whether COX-2 overexpression could be causally related to recurrence of superficial UBC, the present study was designed with a pathologically homogeneous group of solitary papillary non-muscle invasive bladder cancers (NMIBC), grouped according to development of recurrent disease. More specifically, the aim was to assess and compare COX-2 expression between patients with and without recurrence of disease and, moreover, to analyze the association between COX-2 expression and tumor infiltrating lymphocytes (TILs). We hypothesize that such an investigation can be used to predict recurrence and help identify patients who require adjuvant treatment.
Results
Clinicopathological characteristics and immunohistochemical findings in NMIBC
Clinicopathological characteristics of the 127 patients with solitary, papillary NMIBC enrolled in the present study are summarized in Table 1. All clinical details, along with adequate tumor samples at the time of diagnosis and in the follow up period, were available for all patients. None of the patients was lost to follow up. The median age at diagnosis for the patient cohort was 73 years (range 41–87), with a male to female ratio of 3.3:1, and a median follow up period of 37 months (range 6–155). Tumors divided according to their size (≤3 or >3 cm), according to their pathology into papillary urothelial neoplasms of low malignant potential (PUNLMP) and low grade papillary urothelial carcinoma (LGPUC), and according to their stage (Ta and T1) were nearly equally distributed.
Table 1. Clinical features of patients with non-muscle invasive bladder carcinoma along with pathological and immunohistochemical characteristics of the resected tumors. Note: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.
Immunohistochemical analysis revealed COX-2 expression that was predominantly granular and localized to the cytoplasm of tumor cells. Nuclear staining was not observed. In a few cases, immunoreactivity was detected in endothelial cells within the tumor. Low intensity staining was also observed in smooth muscle tissue in some sections. There was noticeable heterogeneity in the intensity and percentage of COX-2 positive cells within the tumor tissues. Positive COX-2 immunoreactivity scored according to Allred was present in 70 (71%) tumors, while 29 (29%) were negative. Among positive tumors, 16 (16%) showed weak and 54 (55%) strong immunostaining. Representative cases of tumors with moderate and strong COX-2 staining are presented in Figure 2. 
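For context, the published Allred method sums a proportion score (0–5, from the fraction of positively stained tumor cells) and an intensity score (0–3) into a 0–8 total; how that total was collapsed into the negative/weak/strong (0/1/2) categories reported here is not specified in the text. The helper below sketches only the standard total score, with cut-offs per the published Allred protocol:

```python
def allred_total(fraction_positive, intensity):
    """Allred total score = proportion score (0-5) + intensity (0-3).
    fraction_positive: fraction of stained tumor cells in [0, 1];
    intensity: 0 = none, 1 = weak, 2 = moderate, 3 = strong."""
    if fraction_positive == 0:
        proportion = 0
    elif fraction_positive < 0.01:
        proportion = 1
    elif fraction_positive <= 0.10:
        proportion = 2
    elif fraction_positive <= 1 / 3:
        proportion = 3
    elif fraction_positive <= 2 / 3:
        proportion = 4
    else:
        proportion = 5
    return proportion + intensity

# e.g. 80% of cells staining strongly: proportion 5 + intensity 3 = 8
print(allred_total(0.80, 3))
```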
Figure 2. Immunohistochemical staining for COX-2 on tissue microarrays of urothelial tumors (A, magnification x20). On higher magnification, representative cases demonstrate tumor cells with strong (B), moderate (C) and weak (D) staining (magnification x100).
Tumor infiltrating lymphocytes, as shown in Figure 1, were present in the majority (64; 59%) of cases, while 44 cases (41%) did not reveal mononuclear infiltration in the tumoral stroma. 
COX-2 expression and TIL in correlation with recurrence of disease

The main objectives of the present study were to examine the possible association of COX-2 expression and TILs with recurrence of disease, as well as their mutual relationship, which revealed no significant association (p=0.892). The clinicopathological parameters included in this analysis are presented in Table 2. Among the examined parameters, pathology and disease stage differed between recurrent and non-recurrent disease. In particular, PUNLMP was more frequent (61%) in the non-recurrent group, whereas LGPUC was more frequent (62%) in the recurrent group (p=0.0258). By disease stage, more patients with recurrence were in stage T1 (63%), while most patients without recurrent disease were in stage Ta (61%) (p=0.018).

Clinicopathologic characteristics, COX-2 expression and TIL in comparison to recurrence of non-muscle invasive bladder carcinoma. Note: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.
Moreover, statistical analysis demonstrated a difference in tumor COX-2 expression between recurrent and non-recurrent patients. In the score-1 group no significant difference was found between recurrent and non-recurrent patients (p=0.2196), possibly owing to the small sample size. In the score-0 and score-2 groups, however, there was a significant difference in the proportion of patients with and without recurrence (p=0.0001 and p=0.0101, respectively). These results suggest that a lower score predicts recurrence, since the proportion of recurrent patients was higher with score 0 and lower with score 2. A significant difference between recurrent and non-recurrent patients was also found in both the TIL-absent and TIL-present groups (p=0.009). More precisely, the proportion of recurrent patients was higher in the group without TILs and lower in the group with TILs; the absence of TILs can therefore be expected to predict recurrence.
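The group comparisons above rely on chi-square tests of proportions (see Statistical analysis). A minimal pure-Python Pearson chi-square for a 2x2 table is sketched below; the counts in the usage line are hypothetical, since the actual contingency tables are given in Table 2 rather than in the text:

```python
from math import erfc, sqrt

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-square test (df = 1, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, p_value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, the chi-square survival function is erfc(sqrt(x / 2)).
    return stat, erfc(sqrt(stat / 2.0))

# Hypothetical counts (NOT the study's data): rows = TILs absent / present,
# columns = recurrent / non-recurrent.
stat, p = chi2_2x2(35, 9, 43, 40)
```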
Univariate and multivariate analysis of prediction of recurrence

Univariate and multivariate analyses of the influence of COX-2 expression, TILs, tumor size and disease stage on prediction of recurrence are shown in Table 3.

Predictors of recurrence: univariate and multivariate analyses. Note: OR = odds ratio; CI = confidence interval; AUC = area under the curve; TILs = tumor-infiltrating lymphocytes.

The odds ratio for each predictor was computed by logistic regression.
Tumor size was not a significant predictor in either the univariate or the multivariate analysis. Disease stage was a significant positive predictor in both, meaning that the odds of recurrence in stage T1 cases were 2.39 times higher than in stage Ta cases. TILs and COX-2 were negative predictors in both analyses. A one-point increase in the COX-2 score was associated with 0.36 times the risk; equivalently, the odds of remaining recurrence-free were 1/0.36 = 2.77 times higher. TILs were likewise a significant negative predictor of recurrence, with an OR of 0.31 in the univariate and 0.23 in the multivariate analysis. The AUC was used as a classification parameter to assess the quality of the predictive model; at 0.875, it indicates that the model has very high discriminating power.
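The logistic-regression outputs above can be made concrete in code. The sketch below shows the odds-ratio arithmetic quoted in the text (OR 0.36 per one-point COX-2 score increase, hence 1/0.36 ≈ 2.77 odds of remaining recurrence-free) and a rank-based AUC computation of the kind summarized by the reported 0.875; the score/label vectors in any usage are hypothetical examples, not study data:

```python
from math import log

# Odds-ratio interpretation: for a logistic-regression coefficient beta,
# OR = exp(beta). The OR of 0.36 per one-point COX-2 increase (from the text)
# marks a negative predictor; the odds of staying recurrence-free are 1/OR.
or_cox2 = 0.36
beta_cox2 = log(or_cox2)        # negative coefficient
protective_odds = 1 / or_cox2   # ~2.77, as quoted in the text

def auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive case is ranked above a randomly chosen negative case
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect separation of recurrent from non-recurrent cases.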
Conclusion
Although further studies are necessary to fully elucidate the mechanisms involved, the present study showed that higher COX-2 expression and the presence of TILs were negative predictors of recurrence in NMIBC. Conversely, patients with low values of both parameters should probably be considered for adjuvant therapy.
[ "Background", "Clinicopathological data", "Tissue microarray construction (TMA)", "Immunohistochemistry and scoring", "Statistical analysis", "Clinicopathological characteristics and immunohistochemical findings in NMIBC", "COX-2 expression and TIL in correlation with recurrence of disease", "Univariate and multivariate analysis of prediction of recurrence", "Competing interests", "Authors’ contributions" ]
[ "Urinary bladder cancer (UBC) accounts, on average, for 2% of all malignant diseases, with a male-to-female ratio of about 4:1. The incidence of UBC increases with age [1]. Mortality from transitional cell carcinoma (TCC) of the urinary bladder increases significantly with progression from superficial to invasive disease. Approximately 75-85% of patients present with non-muscle invasive bladder carcinoma (stages pTa, pT1, pTis). Despite falling into the same category, these tumors form a very heterogeneous group with diverse biological outcomes. The main clinical feature of UBC is a high recurrence rate [2]. Carcinoma of the urinary bladder is the only malignant neoplasm for which immunotherapy is often included as part of standard care. Intravesical instillations of Bacille Calmette–Guerin (BCG) have been demonstrated to reduce the recurrence rate and the risk of progression to muscle-invasive disease in patients with carcinoma in situ (pTis), as well as non-muscle-invasive urothelial carcinomas [3,4]. BCG immunotherapy results in complete response rates of 70% to 75% for carcinoma in situ and 50% to 60% for small residual tumors [5]. Unfortunately, a significant percentage of patients fail initial BCG therapy; these patients would benefit considerably from early referral to other therapeutic approaches. In addition, another 30% to 50% of BCG responders will develop recurrent tumors within 5 years [6,7].\nIn fact, 70% of patients treated with transurethral resection (TUR) experience a relapse of the underlying disease and 15-25% of patients will progress over time to muscle-invasive cancer [2]. 
Although some prognostic variables have been shown to predict recurrence and can be used to identify patients who require adjuvant therapy after TUR, additional reliable markers for disease progression and recurrence are needed [8-10].\nStandard prognostic factors such as histological grading have limited power to predict recurrence, so an understanding of the molecular processes that reflect individual biological potential and clinical behavior is important. For that purpose, various biomarkers have been investigated for their potential to decode unique biological features, identify patients at high risk of progression after local treatment, and enable more reliable prognosis and treatment of UBC [11]. The present study investigated cyclooxygenase-2 (COX-2), which, together with COX-1, is one of the prostaglandin G/H synthases, key enzymes in the synthesis of prostaglandins from arachidonic acid. COX-2 is not expressed in most tissues under normal conditions, but its expression is rapidly induced by growth factors or agents that cause tissue irritation or inflammation [12]. COX-2 is expressed in many solid as well as hematological malignancies [13], where prostaglandins have been reported to increase proliferation, enhance angiogenesis, promote invasion, and inhibit apoptosis and differentiation, thus participating in carcinogenesis through various mechanisms [13,14]. There is evidence that COX-2 expression and activity is also important in the development of UBC [15,16].\nTo determine whether COX-2 overexpression could be related to recurrence of superficial UBC, the present study was designed around a pathologically homogeneous group of solitary papillary non-muscle invasive bladder cancers (NMIBC), with patients stratified by development of recurrent disease. 
More specifically, the aim was to assess and compare the COX-2 expression between the patients with and without recurrence of disease and, moreover, to analyze association between COX-2 expression and tumor infiltrating lymphocytes (TILs). We hypothesize that such investigation can be used to predict recurrence and help identify patients who require adjuvant treatment.", "The tumor specimens analyzed in this study were obtained from a total of 127 patients with solitary papillary NMIBC treated with initial transurethral resection (TUR), as a standard procedure at the Department of Urology, Rijeka University Hospital Center in Rijeka, between 1996 and 2006. None of the patients received adjuvant chemotherapy or immunotherapy or any other medical intervention after the initial TUR. All patients with multiple tumors and patients with a solid or flat aspect of the tumor were excluded from this study. The study was approved by the University of Rijeka Ethics Committee.\nAll the sections were reviewed to confirm the original diagnosis and were staged according to the 2002 American Joint Committee on Cancer guidelines [17] and graded according to the 2004 World Health Organization classification system [18] by an expert urologic pathologist. All the tumors were classified as papillary urothelial neoplasm of low malignant potential (PUNLMP) or low-grade papillary urothelial carcinoma (LGPUC). Clinicopathologic data were obtained from patient medical records and from the files kept at the Department of Pathology, Rijeka University School of Medicine. Based on disease recurrence, patients were divided in two groups: patients who developed recurrent disease during the first five post-operative years (N=78) and patients without recurrent disease during a follow-up of minimum 5 years (N=49). The follow-up of patients was subsequently scheduled at control cystoscopy every 3 months for the first two years after TUR, and after that biannually. 
Recurrence was defined as a written description of a recurrent tumor at any control cystoscopy 6 months after the operation, localized away from the primary tumor bed and the area of the initial resection. By this procedure protocol we wanted to exclude all possible residual tumor masses.\nHematoxylin and eosin stained tumor sections were used in order to evaluate the presence of inflammatory cells and to mark the areas with the most pronounced inflammatory infiltrate for construction of tissue microarrays. Positive inflammatory cells were semi quantitatively graded and presence of TILs was categorized as absent (no) or present (yes) as shown in Figure 1.\n\nUrothelial tumors (hemalaun-eosin) with strong (A) and moderate (B) mononuclear inflammatory infiltrates in lamina propria (*) predominantly composed of lymphocytes.", "Paraffin blocks were available for all 127 cases, and TMAs were constructed by using a manual tissue arrayer (Alphelys, Plaisir, France). Three tissue cores, each 1 mm in diameter, were placed into a recipient paraffin block. Normal liver tissue was used for orientation. Cores were spaced at intervals of 0.5 mm in the x- and y-axes. One section from each TMA block was stained with hematoxylin and eosin to confirm the presence of tumor tissue. Serial sections were cut from TMA blocks for immunohistochemical staining. 
Three to four μm thick sections were placed on adhesive glass slides (Capillary Gap Microscope Slides, 75 μm, Code S2024, DakoCytomation, Glostrup, Denmark), left to dry in an oven at 55°C overnight, deparaffinized and rehydrated.", "Tumor samples were processed for immunohistological analysis in a Dako Autostainer Plus (DakoCytomation Colorado Inc, Fort Collins, CO, USA) according to the manufacturer’s protocol using the Envision peroxidase procedure (ChemMate TM Envision HRP detection kit K5007, DakoCytomation, Glostrup, Denmark).\nThe sections were incubated with the primary antibody, an anti-COX-2 mouse monoclonal antibody (clone CX-294, code M3617, DAKO, Denmark), at a dilution of 1:100. The immune reaction was amplified using the appropriate secondary antibody and the Streptavidin–Biotin–Peroxidase HRP complex (code K5001, DAKO, Denmark). Sections were then developed using 3,3′-diaminobenzidine tetrahydrochloride chromogen (DAB, code K5001, DAKO, Denmark), under microscope control. The sections were counterstained with Mayer’s Hematoxylin. Quality control with external and internal negative and positive controls was performed to monitor the accuracy of tissue processing, staining procedures and reagent effectiveness. Primary antibody specificity was assessed using the corresponding negative controls.\nCOX-2 immunohistochemical expression was quantified and scored by combining a proportion score and an intensity score according to Allred’s scoring protocol [19]. The assigned proportion score represented the estimated proportion of positive-staining cells (0, none; 1, <1/100; 2, 1/100 to 1/10; 3, 1/10 to 1/3; 4, 1/3 to 2/3; and 5, >2/3). The intensity score assigned represented the average intensity of positive cells (0, none; 1, weak; 2, intermediate; and 3, strong). The proportion and intensity scores were combined to obtain a total score, which ranged from 0 to 8. 
A total score of 0–2 was assessed as negative (0), 3–5 as weak positive (1) and 6–8 as strong positive (2). All slides were scored by an investigator who was blinded to patient clinical information.\nFor the purpose of further statistical analysis, the mean values of immunohistochemical staining of all three tissue microarrays were used.", "Statistical analysis was performed using MedCalc for Windows, version 12.3.0.0 (MedCalc Software, Mariakerke, Belgium). Classical descriptive methods were used, as well as the chi-square test for the comparison of proportions. The Mann–Whitney test was used to compare medians. Logistic regression was used to compute odds ratios for predictors of recurrence in univariate and multivariate analyses. The AUC was used to evaluate the quality of the predictive model. The significance level in all tests was 0.05.", "Clinicopathological characteristics of the 127 patients with solitary, papillary NMIBC enrolled in the present study are summarized in Table 1. All clinical details, along with adequate tumor samples at the time of diagnosis and during the follow-up period, were available for all patients. None of the patients was lost to follow-up. The median age at diagnosis for the patient cohort was 73 years (range 41–87), with a male-to-female ratio of 3.3:1, and a median follow-up period of 37 months (range 6–155). 
Tumors were nearly equally distributed when divided by size (≤3 vs >3 cm), by pathology (papillary urothelial neoplasm of low malignant potential, PUNLMP, vs low-grade papillary urothelial carcinoma, LGPUC), and by stage (Ta vs T1).\n\nClinical features of patients with non-muscle invasive bladder carcinoma along with pathological and immunohistochemical characteristics of the resected tumors\nNote: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.\nImmunohistochemical analysis revealed predominantly granular COX-2 expression localized to the cytoplasm of tumor cells. Nuclear staining was not observed. In a few cases immunoreactivity was detected in endothelial cells within the tumor. Low-intensity staining was also observed in smooth muscle tissue in some sections. There was noticeable heterogeneity in the intensity and percentage of COX-2-positive cells within the tumor tissues. Positive COX-2 immunoreactivity, scored according to Allred, was present in 70 (71%) tumors, while 29 (29%) were negative. Among positive tumors, 16 (16%) showed weak and 54 (55%) strong immunostaining. Representative cases of tumors with moderate and strong COX-2 staining are presented in Figure 2.\n\nImmunohistochemical staining for COX-2 on tissue microarrays of urothelial tumors (A, magnification x20). 
At higher magnification, representative cases demonstrate tumor cells with strong (B), moderate (C) and weak (D) staining (magnification x100).\nTumor-infiltrating lymphocytes, as shown in Figure 1, were present in the majority of cases (64; 59%), while 44 cases (41%) did not reveal mononuclear infiltration in the tumoral stroma.", "The main objectives of the present study were to examine the possible association of COX-2 expression and TILs with recurrence of disease, as well as their mutual relationship, which revealed no significant association (p=0.892).\nThe clinicopathological parameters included in this analysis are presented in Table 2. Among the examined parameters, pathology and disease stage differed between recurrent and non-recurrent disease. In particular, PUNLMP was more frequent (61%) in the non-recurrent group, whereas LGPUC was more frequent (62%) in the recurrent group (p=0.0258). By disease stage, more patients with recurrence were in stage T1 (63%), while most patients without recurrent disease were in stage Ta (61%) (p=0.018).\n\nClinicopathologic characteristics, COX-2 expression and TIL in comparison to recurrence of non-muscle invasive bladder carcinoma\nNote: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.\nMoreover, statistical analysis demonstrated a difference in tumor COX-2 expression between recurrent and non-recurrent patients. In the score-1 group no significant difference was found between recurrent and non-recurrent patients (p=0.2196), possibly owing to the small sample size. In the score-0 and score-2 groups there was a significant difference in the proportion of patients with and without recurrence (p=0.0001 and p=0.0101, respectively). 
These results suggest that a lower score predicts recurrence, since the proportion of recurrent patients was higher with score 0 and lower with score 2.\nA significant difference between recurrent and non-recurrent patients was also found in both the TIL-absent and TIL-present groups (p=0.009). More precisely, the proportion of recurrent patients was higher in the group without TILs and lower in the group with TILs; the absence of TILs can therefore be expected to predict recurrence.", "Univariate and multivariate analyses of the influence of COX-2 expression, TILs, tumor size and disease stage on prediction of recurrence are shown in Table 3.\n\nPredictors of recurrence: univariate and multivariate analyses\nNote: OR = odds ratio; CI = confidence interval; AUC = area under the curve; TILs = tumor-infiltrating lymphocytes.\nThe odds ratio for each predictor was computed by logistic regression. Tumor size was not a significant predictor in either the univariate or the multivariate analysis. Disease stage was a significant positive predictor in both, meaning that the odds of recurrence in stage T1 cases were 2.39 times higher than in stage Ta cases. TILs and COX-2 were negative predictors in both analyses. A one-point increase in the COX-2 score was associated with 0.36 times the risk; equivalently, the odds of remaining recurrence-free were 1/0.36 = 2.77 times higher. TILs were likewise a significant negative predictor of recurrence, with an OR of 0.31 in the univariate and 0.23 in the multivariate analysis. The AUC was used as a classification parameter to assess the quality of the predictive model. 
In our case we have the AUC 0.875 which means that our model has very high discriminating power.", "The authors declare that they have no competing interests.", "TT participated in study design, conceived the study and participated in coordination. KK added clinical data. SŠ actively performed part of immunohistochemical analysis. EB conceived the study and participated in coordination and performed statistical analysis. ŽF participated in study design and coordination. NJ advised and led the whole group as coordinator. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Materials and methods", "Clinicopathological data", "Tissue microarray construction (TMA)", "Immunohistochemistry and scoring", "Statistical analysis", "Results", "Clinicopathological characteristics and immunohistochemical findings in NMIBC", "COX-2 expression and TIL in correlation with recurrence of disease", "Univariate and multivariate analysis of prediction of recurrence", "Discussion", "Conclusion", "Competing interests", "Authors’ contributions" ]
[ "Urinary bladder cancer (UBC), on the average, includes 2% of all the malignant diseases with male-to-female ratio being about 4:1. The incidence of UBC increases with age [1]. The mortality from transitional cell carcinoma (TCC) of the urinary bladder increases significantly with the progression of superficial to invasive disease. Approximately 75-85% of patients present with a non-muscle invasive bladder carcinoma (stages pTa, pT1, pTis). Despite the same category this is a very heterogeneous group of tumors with various biological outcomes. The main clinical feature of UBC is a high percentage of recurrence [2]. Carcinoma of the urinary bladder is the only malignant neoplasm for which immunotherapy is often included as part of standard care. Intravesical instillations of Bacille Calmette–Guerin (BCG) has been demonstrated to reduce the recurrence rate and the risk of progression to muscle-invasive disease in patients with carcinoma in situ (pTis), as well as non-muscle-invasive urothelial carcinomas [3,4]. BCG immunotherapy results in 70% to 75% and 50% to 60% complete response rates for carcinoma in situ and small residual tumors, respectively [5]. Unfortunately, a significant percentage of patients will fail initial BCG therapy. These patients would have much more benefit if they were oriented early to other therapeutic approaches. In addition, another 30% to 50% of BCG responders will develop recurrent tumors within 5 years [6,7].\nIn fact, 70% of patients treated with transurethral resection (TUR) experience a relapse of the underlying disease and 15-25% of patients will progress over time to muscle-invasive cancer [2]. 
Although some prognostic variables have been shown to predict recurrence and can be used to identify patients who require adjuvant therapy after TUR, additional reliable markers for disease progression and recurrence are needed [8-10].\nStandard prognostic factors like histological grading are limited in predicting possible recurrence of the disease. So, understanding of molecular processes that could reflect individual biological potential and clinical behavior is important. For that purpose, various biomarkers have been investigated since they have potential in decoding unique biological features in identifying patients with high risk of progression after the local treatment, as well as in more reliable prognosis and treatment of UBC [11]. In the present study the cyclooxygenase-2 (COX-2) was investigated. Beside prostaglandin G/H synthases and COX-1 it is one of the key enzymes in the synthesis of prostaglandins from arachidonic acid. COX-2 is not expressed in most tissue under normal conditions, but expression is rapidly induced by growth factors or agents that cause tissue irritation or inflammation [12]. COX-2 is expressed in many solid as well as hematological malignancies [13] where prostaglandins have been reported to increase proliferation, enhance angiogenesis, promote invasion, and inhibit apoptosis and differentiation, thus participating in carcinogenesis through various mechanisms [13,14]. There is evidence that COX-2 expression and activity is also important in the development of UBC [15,16].\nTo determine if COX-2 overexpression could be causally related to recurrence of superficial UBC the present study was designed with pathologically homogenous group of solitary papillary non-muscle invasive bladder cancer (NMIBC) presented in patients according to development of the recurrent disease. 
More specifically, the aim was to assess and compare the COX-2 expression between the patients with and without recurrence of disease and, moreover, to analyze association between COX-2 expression and tumor infiltrating lymphocytes (TILs). We hypothesize that such investigation can be used to predict recurrence and help identify patients who require adjuvant treatment.", " Clinicopathological data The tumor specimens analyzed in this study were obtained from a total of 127 patients with solitary papillary NMIBC treated with initial transurethral resection (TUR), as a standard procedure at the Department of Urology, Rijeka University Hospital Center in Rijeka, between 1996 and 2006. None of the patients received adjuvant chemotherapy or immunotherapy or any other medical intervention after the initial TUR. All patients with multiple tumors and patients with a solid or flat aspect of the tumor were excluded from this study. The study was approved by the University of Rijeka Ethics Committee.\nAll the sections were reviewed to confirm the original diagnosis and were staged according to the 2002 American Joint Committee on Cancer guidelines [17] and graded according to the 2004 World Health Organization classification system [18] by an expert urologic pathologist. All the tumors were classified as papillary urothelial neoplasm of low malignant potential (PUNLMP) or low-grade papillary urothelial carcinoma (LGPUC). Clinicopathologic data were obtained from patient medical records and from the files kept at the Department of Pathology, Rijeka University School of Medicine. Based on disease recurrence, patients were divided in two groups: patients who developed recurrent disease during the first five post-operative years (N=78) and patients without recurrent disease during a follow-up of minimum 5 years (N=49). The follow-up of patients was subsequently scheduled at control cystoscopy every 3 months for the first two years after TUR, and after that biannually. 
Recurrence was defined as a written description of a recurrent tumor at any control cystoscopy 6 months after the operation, localized away from the primary tumor bed and the area of the initial resection. By this procedure protocol we wanted to exclude all possible residual tumor masses.\nHematoxylin and eosin stained tumor sections were used in order to evaluate the presence of inflammatory cells and to mark the areas with the most pronounced inflammatory infiltrate for construction of tissue microarrays. Positive inflammatory cells were semi quantitatively graded and presence of TILs was categorized as absent (no) or present (yes) as shown in Figure 1.\n\nUrothelial tumors (hemalaun-eosin) with strong (A) and moderate (B) mononuclear inflammatory infiltrates in lamina propria (*) predominantly composed of lymphocytes.\nThe tumor specimens analyzed in this study were obtained from a total of 127 patients with solitary papillary NMIBC treated with initial transurethral resection (TUR), as a standard procedure at the Department of Urology, Rijeka University Hospital Center in Rijeka, between 1996 and 2006. None of the patients received adjuvant chemotherapy or immunotherapy or any other medical intervention after the initial TUR. All patients with multiple tumors and patients with a solid or flat aspect of the tumor were excluded from this study. The study was approved by the University of Rijeka Ethics Committee.\nAll the sections were reviewed to confirm the original diagnosis and were staged according to the 2002 American Joint Committee on Cancer guidelines [17] and graded according to the 2004 World Health Organization classification system [18] by an expert urologic pathologist. All the tumors were classified as papillary urothelial neoplasm of low malignant potential (PUNLMP) or low-grade papillary urothelial carcinoma (LGPUC). 
Tissue microarray construction (TMA)
Paraffin blocks were available for all 127 cases, and TMAs were constructed using a manual tissue arrayer (Alphelys, Plaisir, France). Three tissue cores, each 1 mm in diameter, were placed into a recipient paraffin block. Normal liver tissue was used for orientation. Cores were spaced at intervals of 0.5 mm in the x- and y-axes. One section from each TMA block was stained with hematoxylin and eosin to confirm the presence of tumor tissue. Serial sections were cut from the TMA blocks for immunohistochemical staining.
Three to four μm thick sections were placed on adhesive glass slides (Capillary Gap Microscope Slides, 75 μm, Code S2024, DakoCytomation, Glostrup, Denmark), dried in an oven at 55°C overnight, deparaffinized and rehydrated.
Immunohistochemistry and scoring
Tumor samples were processed for immunohistochemical analysis in a Dako Autostainer Plus (DakoCytomation Colorado Inc, Fort Collins, CO, USA) according to the manufacturer’s protocol, using the EnVision peroxidase procedure (ChemMate EnVision HRP detection kit K5007, DakoCytomation, Glostrup, Denmark).
The sections were incubated with the primary antibody, an anti-COX-2 mouse monoclonal antibody (clone CX-294, code M3617, DAKO, Denmark), at a dilution of 1:100. The immune reaction was amplified using the appropriate secondary antibody and the streptavidin–biotin–peroxidase HRP complex (code K5001, DAKO, Denmark). Sections were then developed with 3,3′-diaminobenzidine tetrahydrochloride chromogen (DAB, code K5001, DAKO, Denmark) under microscope control. The sections were counterstained with Mayer’s Hematoxylin.
Quality control with external and internal negative and positive controls was performed to monitor the accuracy of tissue processing, the staining procedures and the effectiveness of reagents. Primary antibody specificity was assessed by the negative controls.
COX-2 immunohistochemical expression was quantified by combining a proportion score and an intensity score according to the Allred scoring protocol [19]. The proportion score represented the estimated proportion of positive-staining cells (0, none; 1, <1/100; 2, 1/100 to 1/10; 3, 1/10 to 1/3; 4, 1/3 to 2/3; 5, >2/3). The intensity score represented the average intensity of positive cells (0, none; 1, weak; 2, intermediate; 3, strong). The proportion and intensity scores were summed to obtain a total score ranging from 0 to 8. A total score of 0–2 was assessed as negative (0), 3–5 as weakly positive (1) and 6–8 as strongly positive (2). All slides were scored by an investigator blinded to patient clinical information.
For further statistical analysis, the mean values of immunohistochemical staining of all three tissue microarray cores were used.
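As an illustration, the Allred combination described above (proportion score 0–5 plus intensity score 0–3, with the total binned into three categories) can be expressed as a short helper function. This is a sketch for clarity only; the function name and validation are not from the study:

```python
def allred_category(proportion_score: int, intensity_score: int) -> int:
    """Combine Allred proportion (0-5) and intensity (0-3) scores and bin
    the total (0-8) into the three categories used in the study:
    0 = negative (total 0-2), 1 = weak positive (3-5), 2 = strong positive (6-8).
    """
    if not (0 <= proportion_score <= 5 and 0 <= intensity_score <= 3):
        raise ValueError("score out of Allred range")
    total = proportion_score + intensity_score
    if total <= 2:
        return 0  # negative
    if total <= 5:
        return 1  # weak positive
    return 2      # strong positive
```

For example, a tumor with over two-thirds of cells staining (proportion 5) at strong intensity (3) reaches the maximum total of 8 and falls into the strong-positive category.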
Statistical analysis
Statistical analysis was performed using MedCalc for Windows, version 12.3.0.0 (MedCalc Software, Mariakerke, Belgium). Classical descriptive methods were used, together with the Chi-square test for the comparison of proportions and the Mann–Whitney test for the comparison of medians. Logistic regression was used to compute odds ratios for predictors of recurrence in both univariate and multivariate models. The AUC value was used to evaluate the quality of the predictive model. The significance level in all tests was 0.05.
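The chi-square comparisons of proportions reported in this study were run in MedCalc; for illustration, the Pearson statistic for a 2x2 table (e.g. recurrence status vs. TIL status) can be computed directly. The counts below are hypothetical placeholders, not data from the study:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        raise ValueError("a marginal total is zero")
    return n * (a * d - b * c) ** 2 / denom

# Hypothetical counts: rows = recurrence yes/no, columns = TILs absent/present.
chi2 = chi_square_2x2(10, 20, 30, 40)
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-value, which is what the software reports.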
Clinicopathological characteristics and immunohistochemical findings in NMIBC
The clinicopathological characteristics of the 127 patients with solitary, papillary NMIBC enrolled in the present study are summarized in Table 1. Complete clinical details and adequate tumor samples, both at the time of diagnosis and during follow-up, were available for all patients; none was lost to follow-up. The median age at diagnosis was 73 years (range 41–87), the male to female ratio was 3.3:1, and the median follow-up period was 37 months (range 6–155).
The tumors were nearly equally distributed by size (≤3 vs. >3 cm), by pathology (papillary urothelial neoplasm of low malignant potential, PUNLMP, vs. low-grade papillary urothelial carcinoma, LGPUC) and by stage (Ta vs. T1).
Table 1. Clinical features of patients with non-muscle invasive bladder carcinoma along with pathological and immunohistochemical characteristics of the resected tumors. Note: †PUNLMP, papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.
Immunohistochemical analysis revealed COX-2 expression that was predominantly granular and localized to the cytoplasm of tumor cells. Nuclear staining was not observed. In a few cases immunoreactivity was detected in endothelial cells within the tumor, and low-intensity staining was also observed in smooth muscle tissue in some sections. There was noticeable heterogeneity in the intensity and percentage of COX-2-positive cells within the tumor tissues. Positive COX-2 immunoreactivity scored according to Allred was present in 70 tumors (71%), while 29 (29%) were negative. Among the positive tumors, 16 (16%) showed weak and 54 (55%) strong immunostaining. Representative cases of tumors with moderate and strong COX-2 staining are presented in Figure 2.
Figure 2. Immunohistochemical staining for COX-2 on tissue microarrays of urothelial tumors (A, magnification x20). At higher magnification, representative cases demonstrate tumor cells with strong (B), moderate (C) and weak (D) staining (magnification x100).
Tumor-infiltrating lymphocytes (Figure 1) were present in the majority of cases (64; 59%), while 44 cases (41%) did not show mononuclear infiltration in the tumoral stroma.
COX-2 expression and TIL in correlation with recurrence of disease
The main objectives of the present study were to examine the possible associations of COX-2 expression and of TILs with recurrence of disease; the mutual relationship between COX-2 expression and TILs revealed no significant association (p=0.892).
The clinicopathological parameters included in this analysis are presented in Table 2. Among the examined parameters, pathology and disease stage differed between recurrent and non-recurrent disease. In particular, PUNLMP was more frequent (61%) in the non-recurrent group, whereas LGPUC was more frequent (62%) in the recurrent group (p=0.0258). With regard to disease stage, more patients with recurrence were in stage T1 (63%), while patients without recurrent disease were more often in stage Ta (61%) (p=0.018).
Table 2. Clinicopathologic characteristics, COX-2 expression and TIL in comparison to recurrence of non-muscle invasive bladder carcinoma. Note: †PUNLMP, papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.
Moreover, statistical analysis demonstrated a difference in tumor COX-2 expression between the recurrent and non-recurrent groups. Possibly owing to the small sample size, no significant difference between recurrent and non-recurrent patients was found in the group with score 1 (p=0.2196).
In the groups with score 0 and score 2 there was a significant difference in the proportion of patients with and without recurrence (p=0.0001 and p=0.0101, respectively). From these results a lower score can be expected to predict recurrence, since there was a higher proportion of recurrent patients with score 0 and a lower proportion with score 2.
A significant difference between recurrent and non-recurrent patients was also found for both absent and present TILs (p=0.009). More precisely, there was a higher proportion of recurrent patients in the group without TILs and a lower proportion in the group with TILs. We can therefore expect the absence of TILs to be a predictor of recurrence.
Univariate and multivariate analysis of prediction of recurrence
Univariate and multivariate analyses of the influence of COX-2 expression, TILs, tumor size and disease stage on the prediction of recurrence are shown in Table 3.
Table 3. Predictors of recurrence: univariate and multivariate analyses. Note: OR, odds ratio; CI, confidence interval; AUC, area under the curve; TILs, tumor-infiltrating lymphocytes.
The odds ratio for each predictor was computed by logistic regression. Tumor size was not a significant predictor in either the univariate or the multivariate analysis. Disease stage was a significant positive predictor in both analyses: the odds of recurrence in stage T1 cases were 2.39 times higher than in stage Ta cases. In both analyses, TILs and COX-2 were negative predictors. A one-point increase in the COX-2 score corresponded to a 0.36 times lower risk of recurrence; equivalently, the odds of not developing a recurrence were 1/0.36 = 2.77 times higher. TILs were also a significant negative predictor of recurrence, with an OR of 0.31 in the univariate and 0.23 in the multivariate analysis. The AUC, used as a classification parameter describing the quality of the predictive model, was 0.875, indicating that the model has very high discriminating power.
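The odds-ratio arithmetic behind these statements follows directly from the logistic model: an OR below 1 marks a protective factor, and its reciprocal expresses the odds of remaining recurrence-free. A minimal sketch using the ORs quoted in the text (the helper names are illustrative, not code from the study):

```python
import math

# Odds ratios quoted in the text for the recurrence model.
OR_COX2 = 0.36   # per one-point increase in COX-2 score (protective)
OR_TILS = 0.23   # TILs present vs. absent, multivariate (protective)
OR_STAGE = 2.39  # stage T1 vs. Ta (risk factor)

def odds_against_recurrence(odds_ratio: float) -> float:
    """For a protective factor (OR < 1), the reciprocal expresses how much
    higher the odds of NOT recurring are."""
    return 1.0 / odds_ratio

def log_odds(odds_ratio: float) -> float:
    """Underlying logistic regression coefficient (log odds ratio):
    negative for protective factors, positive for risk factors."""
    return math.log(odds_ratio)
```

For COX-2, 1/0.36 ≈ 2.78, matching the text's figure of 2.77 up to rounding; the log-odds view makes explicit why stage enters the model with a positive coefficient while COX-2 and TILs enter with negative ones.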
There is considerable evidence implicating COX-2 activity in the development of urothelial bladder cancer (UBC). Yet the results obtained in the present study demonstrate that a low COX-2 score, rather than a high one, could predict recurrence of NMIBC. These results raise many questions related to the methodology used in different studies, as well as to the complexity of understanding the roles of different biological markers in tumorigenesis.
In the present study we used TMAs instead of whole tissue sections for immunohistochemical analysis. Gudjonsson et al.
in their study analyzed COX-2 among other markers on TMA and found that the assessed proteins had no predictive value for recurrence of Ta bladder cancer (BC) [20]. The authors themselves raised concerns regarding the methodology and the generalization of results obtained with TMA in immunohistochemical analysis. Even so, we believe that conclusion should be weighed against the fact that a relatively small number of patients (N=52) was analyzed, that TMA construction was done with at least three 0.6 mm punch cores, and that the specimens were heterogeneous, including single and multifocal tumors. To minimize these problems, the present study was conducted on a larger number of patients (N=127), with only a single tumor at the time of diagnosis, and with specimens from which at least three 1 mm punches could be selected for TMA construction. Nevertheless, the obtained results indicate that differences in methodology, as well as in the scoring systems used for protein expression, could explain the different results and conclusions reached in different papers.

Previous studies have mostly been conducted on heterogeneous groups of patients with superficial and invasive UBC. One of the first papers in which the expression of COX-1 and COX-2 was analyzed in human invasive TCC of the urinary bladder was by Mohammed and coworkers [21]. COX-2 was not expressed in normal urinary bladder samples but was detected in invasive TCC, noninvasive TCC, and carcinoma in situ. The authors concluded that COX-2 may play a role in BC and supported further study of COX-2 inhibitors as potential antitumor agents in human BC.
Several subsequent studies showed a significant increase of COX-2 expression with advancing tumor grade and T stage [22,23], and an association not only with disease progression but also with BC-specific survival [24].

In contrast to this role of COX-2 in tumor progression, there are also studies in which COX-2 expression was not associated with primary tumor stage, lymph node status, histological grade, or overall and disease-free survival [25]. Wulfing et al. likewise found no association of COX-2 expression with TNM stage, histological grade, or overall or disease-free survival, but a significant relation to the histological subtype (transitional vs. squamous cell carcinoma) was present [26]. In the subgroup of chemotherapy patients, strong COX-2 expression correlated significantly with worse overall survival. The authors concluded that further experimental and clinical studies were needed to elucidate whether COX-2 inhibition can serve as an additive therapy to chemotherapy of BC.

Several studies have analyzed COX-2 expression only in NMIBC. In the CIS group, COX-2 expression was significantly associated with disease recurrence and progression, but not with BC-related survival (as in our analysis, data not shown), while in stage T1 TCC, COX-2 expression was not associated with clinical or pathological parameters or with clinical outcome [27]. Kim and coworkers selected only T1G3 TCC patients who had undergone complete TUR [28]. In that setting, COX-2 expression was statistically significant in predicting both recurrence and disease progression, while patients' age and tumor shape and multiplicity were not. Thus, according to the authors' conclusion, patients with COX-2-positive superficial BC may need to be followed up more vigorously. In the paper of Okajima et al.
with only 5 and 6 samples of superficial BC with and without recurrence after TUR, respectively, COX-2 protein was found more often in the cases with recurrence than in those without [29]. Even though the number of cases examined was small, as the authors stated, this result supports their hypothesis that COX-2 contributes to superficial BC recurrence, and thus that selective COX-2 inhibitors could be candidate chemopreventive agents against recurrence.

Our results differ from those mentioned above, because loss of COX-2 was a predictive marker of tumor recurrence. We believe this can be explained by the very complex, possibly even divergent, role of this enzyme during tumorigenesis. There is no doubt that COX-2 expression is sequentially up-regulated from normal tissue through chronic cystitis to malignant changes [30]. There is also evidence that COX-2 is involved in angiogenesis in BC, as described in a paper in which COX-2 promoted vessel proliferation in the tumor zone of pTa/pT1 NMIBC [31]. Moreover, COX-2 is probably up-regulated during tumor progression, as mentioned above, but we believe its role in the initial stage is very complex. We hypothesize that, in the initial tumor stage, COX-2 is probably associated with the host immunological/inflammatory response and, as such, could be a marker of a sufficient anti-tumor response, while loss of COX-2 expression may be an indication for selecting patients for additional immunotherapy. This supposition is also supported by our finding that absence of TILs was a predictor of recurrence. In our study we could not find an association between COX-2 and TILs, although there is a study by Himly and coworkers in which an association between COX-2 expression and the T lymphocyte subsets CD4+ and CD8+ was confirmed [23].
However, our previous work confirmed that the proportions of CD4+ cells and Granzyme B+ TILs were significantly higher in the non-recurrent group of patients [32].

Conclusions: Although further studies are necessary to fully elucidate the mechanism involved, the present study showed that higher COX-2 expression and the presence of TILs were negative predictors of recurrence in NMIBC. Patients with low values of both analyzed parameters should probably be selected for adjuvant therapy.

Competing interests: The authors declare that they have no competing interests.

Authors' contributions: TT participated in study design, conceived the study and participated in coordination. KK added clinical data. SŠ actively performed part of the immunohistochemical analysis. EB conceived the study, participated in coordination and performed the statistical analysis. ŽF participated in study design and coordination. NJ advised and led the whole group as coordinator. All authors read and approved the final manuscript.
Keywords: Non-muscle invasive bladder cancer; Recurrence; Cyclooxygenase-2; Tumor infiltrating lymphocytes
Background: Urinary bladder cancer (UBC) accounts, on average, for 2% of all malignant diseases, with a male-to-female ratio of about 4:1, and its incidence increases with age [1]. Mortality from transitional cell carcinoma (TCC) of the urinary bladder increases significantly with the progression of superficial to invasive disease. Approximately 75-85% of patients present with a non-muscle invasive bladder carcinoma (stages pTa, pT1, pTis). Despite belonging to the same category, this is a very heterogeneous group of tumors with various biological outcomes. The main clinical feature of UBC is a high percentage of recurrence [2]. Carcinoma of the urinary bladder is the only malignant neoplasm for which immunotherapy is often included as part of standard care. Intravesical instillations of Bacille Calmette–Guerin (BCG) have been demonstrated to reduce the recurrence rate and the risk of progression to muscle-invasive disease in patients with carcinoma in situ (pTis), as well as non-muscle-invasive urothelial carcinomas [3,4]. BCG immunotherapy results in complete response rates of 70% to 75% for carcinoma in situ and 50% to 60% for small residual tumors [5]. Unfortunately, a significant percentage of patients fail initial BCG therapy; these patients would benefit considerably if they were directed early to other therapeutic approaches. In addition, another 30% to 50% of BCG responders will develop recurrent tumors within 5 years [6,7]. In fact, 70% of patients treated with transurethral resection (TUR) experience a relapse of the underlying disease, and 15-25% of patients will progress over time to muscle-invasive cancer [2]. Although some prognostic variables have been shown to predict recurrence and can be used to identify patients who require adjuvant therapy after TUR, additional reliable markers for disease progression and recurrence are needed [8-10].
Standard prognostic factors such as histological grade are limited in predicting possible recurrence of the disease, so understanding the molecular processes that could reflect individual biological potential and clinical behavior is important. For that purpose, various biomarkers have been investigated, since they have potential for decoding unique biological features, identifying patients at high risk of progression after local treatment, and enabling more reliable prognosis and treatment of UBC [11]. In the present study, cyclooxygenase-2 (COX-2) was investigated. Together with COX-1, it is one of the prostaglandin G/H synthases, key enzymes in the synthesis of prostaglandins from arachidonic acid. COX-2 is not expressed in most tissues under normal conditions, but its expression is rapidly induced by growth factors or agents that cause tissue irritation or inflammation [12]. COX-2 is expressed in many solid as well as hematological malignancies [13], where prostaglandins have been reported to increase proliferation, enhance angiogenesis, promote invasion, and inhibit apoptosis and differentiation, thus participating in carcinogenesis through various mechanisms [13,14]. There is evidence that COX-2 expression and activity are also important in the development of UBC [15,16]. To determine whether COX-2 overexpression could be causally related to recurrence of superficial UBC, the present study was designed on a pathologically homogeneous group of solitary papillary non-muscle invasive bladder cancers (NMIBC), with patients grouped according to the development of recurrent disease. More specifically, the aim was to assess and compare COX-2 expression between patients with and without recurrence of disease and, moreover, to analyze the association between COX-2 expression and tumor-infiltrating lymphocytes (TILs). We hypothesize that such an investigation can be used to predict recurrence and help identify patients who require adjuvant treatment.
Materials and methods: Clinicopathological data: The tumor specimens analyzed in this study were obtained from a total of 127 patients with solitary papillary NMIBC treated with initial transurethral resection (TUR), as a standard procedure, at the Department of Urology, Rijeka University Hospital Center, Rijeka, between 1996 and 2006. None of the patients received adjuvant chemotherapy, immunotherapy or any other medical intervention after the initial TUR. All patients with multiple tumors, and all patients with a solid or flat aspect of the tumor, were excluded from this study. The study was approved by the University of Rijeka Ethics Committee. All sections were reviewed to confirm the original diagnosis and were staged according to the 2002 American Joint Committee on Cancer guidelines [17] and graded according to the 2004 World Health Organization classification system [18] by an expert urologic pathologist. All tumors were classified as papillary urothelial neoplasm of low malignant potential (PUNLMP) or low-grade papillary urothelial carcinoma (LGPUC). Clinicopathologic data were obtained from patient medical records and from the files kept at the Department of Pathology, Rijeka University School of Medicine. Based on disease recurrence, patients were divided into two groups: patients who developed recurrent disease during the first five post-operative years (N=78) and patients without recurrent disease during a follow-up of at least 5 years (N=49). Follow-up was scheduled as control cystoscopy every 3 months for the first two years after TUR, and biannually thereafter. Recurrence was defined as a documented recurrent tumor at any control cystoscopy from 6 months after the operation onward, localized away from the primary tumor bed and the area of the initial resection. With this protocol we aimed to exclude all possible residual tumor masses.
Hematoxylin and eosin stained tumor sections were used to evaluate the presence of inflammatory cells and to mark the areas with the most pronounced inflammatory infiltrate for construction of tissue microarrays. Inflammatory cells were graded semiquantitatively and the presence of TILs was categorized as absent (no) or present (yes), as shown in Figure 1. Urothelial tumors (hemalaun-eosin) with strong (A) and moderate (B) mononuclear inflammatory infiltrates in the lamina propria (*), predominantly composed of lymphocytes.

Tissue microarray construction (TMA): Paraffin blocks were available for all 127 cases, and TMAs were constructed using a manual tissue arrayer (Alphelys, Plaisir, France). Three tissue cores, each 1 mm in diameter, were placed into a recipient paraffin block. Normal liver tissue was used for orientation. Cores were spaced at intervals of 0.5 mm in the x- and y-axes. One section from each TMA block was stained with hematoxylin and eosin to confirm the presence of tumor tissue. Serial sections were cut from the TMA blocks for immunohistochemical staining. Three to four μm thick sections were placed on adhesive glass slides (Capillary Gap Microscope Slides, 75 μm, Code S2024, DakoCytomation, Glostrup, Denmark), left to dry in an oven at 55°C overnight, deparaffinized and rehydrated.

Immunohistochemistry and scoring: Tumor samples were processed for immunohistochemical analysis in a Dako Autostainer Plus (DakoCytomation Colorado Inc, Fort Collins, CO, USA) according to the manufacturer's protocol using the Envision peroxidase procedure (ChemMate Envision HRP detection kit K5007, DakoCytomation, Glostrup, Denmark). The sections were incubated with the primary antibody, an anti-COX-2 mouse monoclonal antibody (clone CX-294, code M3617, DAKO, Denmark), at a dilution of 1:100. The immune reaction was amplified using the appropriate secondary antibody and the Streptavidin–Biotin–Peroxidase HRP complex (code K5001, DAKO, Denmark). Sections were then developed with 3,3′-diaminobenzidine tetrahydrochloride chromogen (DAB, code K5001, DAKO, Denmark) under microscope control and counterstained with Mayer's hematoxylin. Quality control with external and internal negative and positive controls was used to monitor the accuracy of tissue processing, staining procedures and reagent effectiveness; primary antibody specificity was assessed by the negative controls. COX-2 immunohistochemical expression was quantified by combining a proportion score and an intensity score according to Allred's scoring protocol [19]. The proportion score represented the estimated proportion of positive-staining cells (0, none; 1, <1/100; 2, 1/100 to 1/10; 3, 1/10 to 1/3; 4, 1/3 to 2/3; and 5, >2/3). The intensity score represented the average intensity of positive cells (0, none; 1, weak; 2, intermediate; and 3, strong). The proportion and intensity scores were combined to obtain a total score ranging from 0 to 8. A total score of 0–2 was assessed as negative (0), 3–5 as weak positive (1) and 6–8 as strong positive (2). All slides were scored by an investigator blinded to patient clinical information. For further statistical analysis, the mean values of the immunohistochemical staining of all three tissue microarray cores were used.

Statistical analysis: Statistical analysis was performed using MedCalc for Windows, version 12.3.0.0 (MedCalc Software, Mariakerke, Belgium). Classical descriptive methods were used, along with the Chi-square test for the comparison of proportions and the Mann–Whitney test for the comparison of medians. Logistic regression was used to compute the odds ratios for predictors of recurrence in univariate and multivariate analyses. The AUC was used to evaluate the quality of the predictive model. The significance level in all tests was 0.05.
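The Allred-based scoring described above (proportion 0–5 plus intensity 0–3, collapsed into three categories) is a small deterministic mapping. A minimal sketch; the function names are our own, not from the study:

```python
def allred_total(proportion_score, intensity_score):
    """Total Allred score: proportion (0-5) plus intensity (0-3), range 0-8."""
    assert 0 <= proportion_score <= 5 and 0 <= intensity_score <= 3
    return proportion_score + intensity_score


def cox2_category(total_score):
    """Map the total score to the study's categories:
    0 = negative (total 0-2), 1 = weak positive (3-5), 2 = strong positive (6-8)."""
    if total_score <= 2:
        return 0
    return 1 if total_score <= 5 else 2


# A tumor with >2/3 positive cells (proportion 5) at strong intensity (3)
# has a total score of 8 and falls into the strong-positive category.
print(cox2_category(allred_total(5, 3)))  # -> 2
```

Note that a total score of 1 or 2 cannot occur under Allred's rules (any staining gives a proportion score of at least 1 and an intensity score of at least 1), which is why the negative category spans 0–2.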
Results: Clinicopathological characteristics and immunohistochemical findings in NMIBC. The clinicopathological characteristics of the 127 patients with solitary, papillary NMIBC enrolled in the present study are summarized in Table 1. All clinical details, along with adequate tumor samples at the time of diagnosis and in the follow-up period, were available for all patients, and none was lost to follow-up. The median age at diagnosis was 73 years (range 41–87), with a male-to-female ratio of 3.3:1 and a median follow-up of 37 months (range 6–155). Tumors divided according to size (≤3 or >3 cm), pathology (papillary urothelial neoplasm of low malignant potential, PUNLMP, versus low-grade papillary urothelial carcinoma, LGPUC) and stage (Ta versus T1) were nearly equally distributed.

Clinical features of patients with non-muscle invasive bladder carcinoma along with pathological and immunohistochemical characteristics of the resected tumors. Note: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.

Immunohistochemical analysis revealed COX-2 expression that was predominantly granular and localized to the cytoplasm of tumor cells; nuclear staining was not observed. In a few cases immunoreactivity was detected in endothelial cells within the tumor, and low-intensity staining was also observed in smooth muscle tissue in some sections. There was noticeable heterogeneity in the intensity and percentage of COX-2-positive cells within the tumor tissues. Positive COX-2 immunoreactivity scored according to Allred was present in 70 tumors (71%), while 29 tumors (29%) were negative. Among the positive tumors, 16 (16%) showed weak and 54 (55%) strong immunostaining. Representative cases of tumors with moderate and strong COX-2 staining are presented in Figure 2.
Immunohistochemical staining for COX-2 on tissue microarrays of urothelial tumors (A, magnification ×20). At higher magnification, representative cases demonstrate tumor cells with strong (B), moderate (C) and weak (D) staining (magnification ×100). Tumor-infiltrating lymphocytes, as shown in Figure 1, were present in the majority of cases (64; 59%), while 44 cases (41%) did not reveal mononuclear infiltration in the tumoral stroma.

COX-2 expression and TIL in correlation with recurrence of disease: The main objectives of the present study were to examine possible associations of COX-2 expression and TILs with recurrence of disease, as well as their mutual relationship, which revealed no significant association (p=0.892). The clinicopathological parameters included in this analysis are presented in Table 2. Among the examined parameters, pathology and disease stage differed between recurrent and non-recurrent disease. In particular, PUNLMP was more frequent (61%) in non-recurrent disease, whereas LGPUC was more frequently present (62%) in the recurrent group (p=0.0258 for each). According to disease stage, recurrence was more frequent in stage T1 (63%), whereas patients without recurrent disease were mostly in stage Ta (61%) (p=0.018).

Clinicopathologic characteristics, COX-2 expression and TIL in comparison to recurrence of non-muscle invasive bladder carcinoma. Note: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes.
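The proportion comparisons above were made with the Chi-square test. For a 2×2 table the Pearson statistic has a closed form, so the calculation can be sketched in a few lines of plain Python; the counts in the example are hypothetical, not taken from Table 2:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], using the closed form
    n*(a*d - b*c)^2 / ((a+b)*(c+d)*(a+c)*(b+d))."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator


# Hypothetical counts of recurrent/non-recurrent patients in two groups.
# With 1 degree of freedom, a statistic above 3.84 corresponds to p < 0.05.
statistic = chi_square_2x2(30, 18, 19, 30)
print(statistic > 3.84)  # -> True
```

The statistic is invariant under swapping the rows (or columns) of the table, which matches the symmetry of the p-values reported for the paired group comparisons in the text.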
COX-2 expression and TILs in correlation with disease recurrence: The main objectives of the present study were to examine the possible association of COX-2 expression and TILs with disease recurrence, as well as their mutual relationship, which showed no significant association (p=0.892). The clinicopathological parameters included in this analysis are presented in Table 2. Among the examined parameters, pathology and disease stage differed between recurrent and non-recurrent disease: PUNLMP was more frequent (61%) in the non-recurrent group, whereas LGPUC was more frequent (62%) in the recurrent group (p=0.0258 for both). By disease stage, more patients with recurrence were in stage T1 (63%), while patients without recurrence were more often in stage Ta (61%) (p=0.018). Table 2. Clinicopathologic characteristics, COX-2 expression and TILs in comparison to recurrence of non-muscle invasive bladder carcinoma. Note: †PUNLMP indicates papillary urothelial neoplasm of low malignant potential; ‡LGPUC, low-grade papillary urothelial carcinoma; *TILs, tumor-infiltrating lymphocytes. 
Statistical analysis also demonstrated a difference in tumor COX-2 expression between recurrent and non-recurrent patients. In the score 1 group no significant difference was found (p=0.2196), possibly owing to the small sample size, whereas in the score 0 and score 2 groups the proportions of patients with and without recurrence differed significantly (p=0.0001 and p=0.0101, respectively). Since recurrent patients were over-represented at score 0 and under-represented at score 2, a lower COX-2 score appears to predict recurrence. A significant difference between recurrent and non-recurrent patients was also found for both TIL groups (p=0.009 for both): recurrent patients were over-represented in the group without TILs and under-represented in the group with TILs. The absence of TILs can therefore be expected to predict recurrence. 
Univariate and multivariate analysis of prediction of recurrence: Univariate and multivariate analyses of the influence of COX-2 expression, TILs, tumor size and disease stage on the prediction of recurrence are shown in Table 3. Table 3. Predictors of recurrence: univariate and multivariate analyses. Note: OR = odds ratio; CI = confidence interval; AUC = area under the curve; TILs = tumor-infiltrating lymphocytes. The odds ratio for each predictor was computed by logistic regression. 
Tumor size was not a significant predictor in either the univariate or the multivariate analysis. Disease stage was a significant positive predictor in both, meaning that the odds of recurrence for stage T1 were 2.39 times higher than for stage Ta. TILs and COX-2 were negative predictors in both analyses: a one-step increase in the COX-2 score corresponded to a 0.36-fold risk of recurrence, i.e. the odds of not recurring were 1/0.36 = 2.77 times higher. TILs were likewise a significant negative predictor of recurrence, with an OR of 0.31 in the univariate and 0.23 in the multivariate analysis. The AUC was used as a measure of the quality of the predictive model; our model reached an AUC of 0.875, indicating high discriminating power. 
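The odds-ratio arithmetic in this paragraph can be checked directly: for a protective predictor with OR < 1, the odds of not recurring scale by 1/OR. A short sketch, using the OR values as reported in the text:

```python
# Interpreting the reported odds ratios (values taken from the text).
or_stage = 2.39  # stage T1 vs Ta: odds of recurrence 2.39x higher
or_cox2 = 0.36   # one-step increase in COX-2 score (protective, OR < 1)
or_tils = 0.23   # TILs present vs absent, multivariate (protective)

# A protective OR < 1 reads naturally as the inverse odds of NOT recurring:
protective_effect_cox2 = 1 / or_cox2
print(f"odds of no recurrence per COX-2 score step: {protective_effect_cox2:.2f}")
# -> 2.78 (the text rounds this to 2.77)
```

The same inversion applies to TILs: 1/0.23 ≈ 4.35, i.e. the presence of TILs roughly quadruples the odds of remaining recurrence-free under the multivariate model.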
Discussion: There is substantial evidence implicating COX-2 activity in the development of UBC. Yet the results obtained in the present study demonstrated that a low, rather than a high, COX-2 score could predict the recurrence of NMIBC. These results raise many questions about the methodology used in various studies, as well as about the complexity of the roles that different biological markers play in tumorigenesis. In the present study we used TMA rather than whole tissue sections for immunohistochemical analysis. Gudjonsson et al. analyzed COX-2 among other markers on TMA and found that the assessed proteins had no predictive value for recurrence of Ta bladder cancer (BC) [20]. The authors raised concerns about the methodology and the generalizability of results obtained with TMA in immunohistochemical analysis. We believe, however, that their conclusion rests on a relatively small number of patients (N=52); their TMA construction used at least three 0.6 mm punch cores, and their specimens were heterogeneous, including single and multifocal tumors. To minimize these problems, the present study was conducted on a larger number of patients (N=127), with only a single tumor at the time of diagnosis, and on specimens from which at least three 1 mm punches could be selected for TMA construction. Nevertheless, the obtained results indicate that differences in methodology, as well as diverse scoring systems for protein expression, may explain the divergent results and conclusions reported across papers. 
Previous studies have mostly been conducted on heterogeneous groups of patients with superficial and invasive UBC. One of the first papers analyzing COX-1 and COX-2 expression in human invasive TCC of the urinary bladder was by Mohammed and coworkers [21]: COX-2 was not expressed in normal urinary bladder samples but was detected in invasive TCC, noninvasive TCC and carcinoma in situ. The authors concluded that COX-2 may play a role in BC and supported further study of COX-2 inhibitors as potential antitumor agents in human BC. Several subsequent studies showed a significant increase of COX-2 expression with advancing tumor grade and T stage [22,23], and associations not only with disease progression but also with BC-specific survival [24]. In contrast, there are also studies in which COX-2 expression was not associated with primary tumor stage, lymph node status, histological grade, or overall and disease-free survival [25]. Wulfing et al. likewise found no association of COX-2 expression with TNM stage, histological grade, or overall or disease-free survival, but did find a significant relation to histological subtype (transitional vs. squamous cell carcinoma) [26]. In their subgroup of chemotherapy patients, strong COX-2 expression correlated significantly with worse overall survival, and the authors concluded that further experimental and clinical studies were needed to clarify whether COX-2 inhibition can serve as an adjunct to chemotherapy of BC. Several studies have analyzed COX-2 expression in NMIBC only. In a CIS group, COX-2 expression was significantly associated with disease recurrence and progression, but not with BC-related survival (as in our analysis, data not shown), while in stage T1 TCC, COX-2 expression was not associated with clinical or pathological parameters or with clinical outcome [27]. 
Kim and coworkers studied only T1G3 TCC patients who had undergone complete TUR [28]; in that setting, COX-2 expression was statistically significant in predicting both recurrence and disease progression, whereas patients' age and tumor shape and multiplicity were not. According to the authors, patients with COX-2-positive superficial BC may therefore need to be followed up more vigorously. Okajima et al., with only 5 and 6 samples of superficial BC with and without recurrence after TUR, respectively, found more COX-2 protein in the cases with recurrence [29]. Although the number of cases examined was small, as the authors acknowledged, this result supports their hypothesis that COX-2 contributes to superficial BC recurrence, so selective COX-2 inhibitors could be candidate chemopreventive agents against recurrence. Our results differ from those above in that loss of COX-2 was a predictive marker of tumor recurrence. We believe this can be explained by the very complex, and possibly context-dependent, role of this enzyme during tumorigenesis. There is no doubt that COX-2 expression is sequentially up-regulated from normal tissue through chronic cystitis to malignant change [30]. There is also evidence that COX-2 is involved in angiogenesis in BC: COX-2 promoted vessel proliferation in the tumor zone of pTa/pT1 NMIBC [31]. COX-2 is probably up-regulated during tumor progression, as mentioned above, but we believe its role in the initial stage is complex. We hypothesize that, in the initial tumor stage, COX-2 is associated with the host immunological/inflammatory response and, as such, could be a marker of a sufficient anti-tumor response, while loss of COX-2 expression may indicate patients who should be selected for additional immunotherapy. 
This supposition is also supported by our finding that the absence of TILs was a predictor of recurrence. In our study we could not find an association between COX-2 and TILs, although Himly and coworkers confirmed an association between COX-2 expression and the CD4+ and CD8+ T lymphocyte subsets [23]. Our previous work, however, confirmed that the proportions of CD4+ cells and Granzyme B+ TILs were significantly higher in the non-recurrent group of patients [32]. Conclusion: Although further studies are necessary to fully elucidate the mechanisms involved, the present study showed that higher COX-2 expression and the presence of TILs were negative predictors of recurrence in NMIBC. Patients with low values of both analyzed parameters should probably be selected for adjuvant therapy. Competing interests: The authors declare that they have no competing interests. Authors' contributions: TT participated in study design, conceived the study and participated in coordination. KK added clinical data. SŠ performed part of the immunohistochemical analysis. EB conceived the study, participated in coordination and performed the statistical analysis. ŽF participated in study design and coordination. NJ advised and led the whole group as coordinator. All authors read and approved the final manuscript.
Background: A new modality is needed to prevent recurrence of superficial bladder cancer after complete transurethral resection, because the recurrence rate remains high even with current prophylaxis protocols. Methods: To analyze the predictive value of cyclooxygenase-2 (COX-2) expression and tumor-infiltrating lymphocytes (TILs) for recurrence of this disease, tumor specimens from 127 patients with solitary papillary non-muscle invasive bladder cancer (NMIBC), 78 with recurrent disease and 49 without recurrence during a follow-up of at least 5 years, were retrieved for tissue microarray construction and immunohistochemical analysis. COX-2 expression was scored according to Allred's scoring protocol, while the presence of TILs was categorized as absent (no) or present (yes) on whole tissue sections. Results: COX-2 immunoreactivity was present in 70 tumors (71%), weak in 16% and strong in 55% of cases, while 29 tumors (29%) were negative. TILs were present in 64 (58%) NMIBC, while 44 cases (41%) showed no mononuclear infiltration in the tumoral stroma. Statistical analysis demonstrated a higher proportion of patients with recurrence in the group with COX-2 score 0 and a lower proportion in the group with score 2 (p=0.0001 and p=0.0101, respectively). In addition, a higher proportion of recurrent patients was found in the group without TILs and a lower proportion in the group with TILs (p=0.009 for both). Univariate and multivariate analyses revealed overexpression of COX-2 and presence of TILs as negative predictors of recurrence. Conclusions: Patients with lower COX-2 expression and absence of TILs in NMIBC need to be followed up more vigorously and probably selected for adjuvant therapy.
Background: Urinary bladder cancer (UBC) accounts on average for about 2% of all malignant diseases, with a male-to-female ratio of about 4:1, and its incidence increases with age [1]. Mortality from transitional cell carcinoma (TCC) of the urinary bladder increases significantly with progression from superficial to invasive disease. Approximately 75-85% of patients present with non-muscle invasive bladder carcinoma (stages pTa, pT1, pTis). Despite sharing this category, these tumors form a very heterogeneous group with various biological outcomes. The main clinical feature of UBC is a high recurrence rate [2]. Carcinoma of the urinary bladder is the only malignant neoplasm for which immunotherapy is routinely included as part of standard care: intravesical instillation of Bacille Calmette-Guerin (BCG) has been demonstrated to reduce the recurrence rate and the risk of progression to muscle-invasive disease in patients with carcinoma in situ (pTis) as well as non-muscle-invasive urothelial carcinomas [3,4]. BCG immunotherapy yields complete response rates of 70-75% for carcinoma in situ and 50-60% for small residual tumors [5]. Unfortunately, a significant percentage of patients fail initial BCG therapy; these patients would benefit greatly from being oriented early toward other therapeutic approaches. In addition, another 30-50% of BCG responders will develop recurrent tumors within 5 years [6,7]. In fact, 70% of patients treated with transurethral resection (TUR) experience a relapse of the underlying disease, and 15-25% progress over time to muscle-invasive cancer [2]. Although some prognostic variables have been shown to predict recurrence and can be used to identify patients who require adjuvant therapy after TUR, additional reliable markers of disease progression and recurrence are needed [8-10]. 
Standard prognostic factors such as histological grade are of limited value in predicting recurrence, so understanding the molecular processes that reflect individual biological potential and clinical behavior is important. For that purpose, various biomarkers have been investigated for their potential to decode unique biological features, to identify patients at high risk of progression after local treatment, and to improve the prognosis and treatment of UBC [11]. In the present study, cyclooxygenase-2 (COX-2) was investigated. Together with COX-1, it is one of the prostaglandin G/H synthases, key enzymes in the synthesis of prostaglandins from arachidonic acid. COX-2 is not expressed in most tissues under normal conditions, but its expression is rapidly induced by growth factors or by agents that cause tissue irritation or inflammation [12]. COX-2 is expressed in many solid as well as hematological malignancies [13], where prostaglandins have been reported to increase proliferation, enhance angiogenesis, promote invasion, and inhibit apoptosis and differentiation, thereby participating in carcinogenesis through various mechanisms [13,14]. There is evidence that COX-2 expression and activity are also important in the development of UBC [15,16]. To determine whether COX-2 overexpression could be causally related to recurrence of superficial UBC, the present study was designed around a pathologically homogeneous group of solitary papillary non-muscle invasive bladder cancers (NMIBC), stratified by development of recurrent disease. More specifically, the aim was to assess and compare COX-2 expression between patients with and without recurrence and, moreover, to analyze the association between COX-2 expression and tumor-infiltrating lymphocytes (TILs). We hypothesize that such an investigation can help predict recurrence and identify patients who require adjuvant treatment. 
Keywords: Non-muscle invasive bladder cancer; Recurrence; Cyclooxygenase-2; Tumor infiltrating lymphocytes
MeSH terms: Adult; Aged; Aged, 80 and over; Biomarkers, Tumor; Carcinoma, Papillary; Chi-Square Distribution; Cyclooxygenase 2; Down-Regulation; Female; Humans; Immunohistochemistry; Logistic Models; Lymphocytes, Tumor-Infiltrating; Male; Middle Aged; Multivariate Analysis; Neoplasm Invasiveness; Neoplasm Recurrence, Local; Odds Ratio; Risk Factors; Time Factors; Tissue Array Analysis; Treatment Outcome; Urinary Bladder Neoplasms
Estimation of seroprevalence of HIV, hepatitis B and C virus and syphilis among blood donors in the hospital of Aïoun, Mauritania.
29515736
To estimate the seroprevalence of HIV, hepatitis B, hepatitis C and syphilis among blood donors at the Aïoun hospital.
INTRODUCTION
This is a retrospective study from 1 January 2010 to 31 December 2015.
METHODS
Over the five-year study period, 1,123 donors were collected. Of these, 182 were seropositive for at least one marker, an overall prevalence of 16.2%, with a male predominance (male/female sex ratio of 5.2). The average age of donors was 32.7 ± 10 years (range 17-73 years). The most represented age group was 21-30 years (40.5%). The seroprevalences found were 1.2% for HIV, 11.8% for HBV, 0.2% for HCV and 3% for syphilis. Co-infection was found in 0.7% of donors: 0.5% HBV/syphilis and 0.2% HBV/HIV.
RESULTS
The transmission of infectious agents related to transfusion represents the greatest threat to transfusion safety of the recipient. Therefore, a rigorous selection and screening of blood donors are highly recommended to ensure blood safety for the recipient.
CONCLUSION
[ "Adolescent", "Adult", "Blood Donors", "Coinfection", "Female", "HIV Infections", "Hepatitis B", "Hepatitis C", "Humans", "Male", "Mauritania", "Middle Aged", "Prevalence", "Retrospective Studies", "Seroepidemiologic Studies", "Syphilis", "Young Adult" ]
5837177
Introduction
Blood transfusion is a medical therapeutic act [1–3]. However, despite its benefits, every transfused patient is at risk for transfusion-transmissible infections, mainly HIV, hepatitis B (HBV), hepatitis C (HCV) and Treponema pallidum (T. pallidum) [2, 3]. The morbidity and mortality resulting from transfusion have serious consequences, not only for the recipients themselves but also for their families, their communities and society in general [3, 4]. Studies conducted in sub-Saharan Africa show a high prevalence of these infections [3–12]. In Mauritania, prevalence studies among blood donors conducted in Nouakchott in 1999, 2000 and 2009 showed HCV seroprevalences of 1.1% and 2.7% [11, 13] and HBV seroprevalences of 15.3% and 20.3% [11, 14]. This study aimed to update the seroprevalence data for 4 serological markers (HIV, HBV, HCV and anti-Treponema pallidum antibodies) tested in blood donors at the Aïoun hospital, in accordance with the national strategy for patient safety.
Methods
This is a retrospective descriptive study of blood donors at the regional hospital of Aïoun, Hodh El Gharbi (Mauritania), over a period of 5 years from 1 January 2010 to 31 December 2015. This hospital is the reference center of the Hodh El Gharbi region and serves an urban and rural population. Aïoun el Atrouss (62,984 inhabitants) is the administrative capital of the wilaya of Hodh El Gharbi (288,772 inhabitants). The wilaya is located 800 km south-east of Nouakchott (the capital) and has the region's only referral hospital for medical and/or surgical care, offering the public treatments in dentistry, general medicine, surgery, obstetrics and ophthalmology. Donors were either volunteers or relatives or friends of patients, apparently healthy, weighing 50 kg or more, with a hemoglobin > 12.5 g/dl. Before every donation, donors were screened through a questionnaire, a complete physical examination, serologic screening for the major transfusion-transmissible infections, and ABO grouping. Donor confidentiality was respected, as required by the anonymity of the donation; no information revealing donor identity was collected in this study. The parameters studied were sex, age, and serology for HIV, HBV, HCV and syphilis. HBV HBsAg detection was performed using an immunochromatographic test, the Determine™ HBsAg Test (Alere Medical Co. Ltd, Japan). Anti-HIV and anti-HCV antibodies were detected with the Determine™ HIV-1/2 test (Alere Medical Co. Ltd, Japan) and the SignalMT HCV Serum/Plasma Dipstrip rapid test for hepatitis C (Alere Healthcare, South Africa), respectively.
Syphilis screening was performed with a Rapid Plasma Reagin test (syphilis RPR test, Human Gesellschaft für Biochemica und Diagnostica mbH, Germany); positive samples were then confirmed by the TPHA (Treponema Pallidum Hemagglutination Assay) and the Venereal Disease Research Laboratory (VDRL) test. Data entry and analysis were performed using Epi Info version 6.4. The Chi-square test was used for comparisons of quantitative variables; a p value < 0.05 was taken as the significance threshold.
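The original analysis was run in Epi Info 6.4. As a hedged illustration of the chi-square comparison it describes, a minimal stdlib-only Python sketch could look like the following; the 2x2 counts are hypothetical, since the per-age-group tables are not reproduced here.

```python
# Pearson chi-square test of independence for a 2x2 table (stdlib only).
# Rows: age <= 30 vs > 30; columns: marker-positive vs marker-negative.
# NOTE: the counts below are HYPOTHETICAL illustration values, not the
# study's actual per-age-group data (only p-values were reported).

def chi_square_2x2(a, b, c, d):
    """Return the Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = 0.0
    for obs, row_tot, col_tot in [
        (a, a + b, a + c),
        (b, a + b, b + d),
        (c, c + d, a + c),
        (d, c + d, b + d),
    ]:
        expected = row_tot * col_tot / n  # expected count under independence
        stat += (obs - expected) ** 2 / expected
    return stat

CRITICAL_05_DF1 = 3.841  # chi-square critical value at p = 0.05, df = 1

stat = chi_square_2x2(80, 498, 53, 492)  # hypothetical 2x2 counts
print(round(stat, 3), stat > CRITICAL_05_DF1)
```

With these hypothetical counts the statistic exceeds the df = 1 critical value, which is how a p < 0.05 result such as the study's p = 0.009 for HBV carriage by age group would be declared significant.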
Results
Over the 5-year study period, 1,123 donors were collected. Males predominated, with a male/female sex ratio of 5.2. The average age of donors was 32.7 ± 10 years (range 17-73 years). Donors aged up to 20 years and those aged 21 to 30 years represented 11% and 40.5% of donors, respectively. The 31 to 40 age group accounted for 26.8% and the 41 to 50 age group for 14.7%. Donors aged 51 and older accounted for 7%. Considering all the markers, 182 donors were seropositive, an overall prevalence of 16.2%. The prevalences of HIV, HBV, HCV and syphilis were 1.2%, 11.8%, 0.2% and 3%, respectively. Co-infection was found in 0.7% of cases: 0.5% HBV/syphilis dual infection and 0.2% HBV/HIV dual infection. A statistically significant difference was observed between HBV carriage and the most affected age group (p = 0.009) and between syphilis and age groups (p = 0.02) (Table 1: comparing age groups of infected and non-infected blood donors).
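As a minimal arithmetic check of the overall prevalence figure reported above (182 seropositive donors out of 1,123):

```python
# Overall seroprevalence reported in the study: 182 of 1,123 donors.
donors = 1123
seropositive = 182
overall = 100 * seropositive / donors
print(f"{overall:.1f}%")  # 16.2%
```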
Conclusion
Despite enormous progress in transfusion safety, blood transfusion is a medical therapeutic act that exposes the recipient to a risk of contamination by blood-transmissible infectious agents. Therefore, to ensure good blood safety for the recipient, it is necessary to focus on rigorous selection and retention of donors on the one hand, and on the use of standard screening methods minimizing the window period on the other. Furthermore, studies of the residual risk, measuring the likelihood of transmission of various infectious agents through blood products, are entirely justified, especially for HBV, which is a real public health problem in our context, with prevalence approaching 20% in different groups (surveys conducted among different groups between 2007 and 2009). What is known about this topic: Blood transfusion is a medical therapeutic act; each transfused patient is at risk for transfusion-transmissible infections if the blood is not secured; the morbidity and mortality resulting from transfusion have serious consequences for patients, communities and society in general. What this study adds: To our knowledge, this study is the first in the country to study these 4 markers at the same time; it estimates the seroprevalence of infectious markers in blood donors; it helps strengthen transfusion safety in recipients, since donors are intra-family in most cases.
[ "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Blood transfusion is a medical therapeutic act;\nEach transfused patient is at risk for transfusion-transmissible infections if the blood is not secured;\nThe morbidity and mortality resulting from transfusion have serious consequences for patients, communities and society in general.", "To our knowledge, this study is the first in the country to study these 4 markers at the same time;\nIt estimates the seroprevalence of infectious markers in blood donors;\nIt helps strengthen transfusion safety in recipients, since donors are intra-family in most cases.", "The authors declare no competing interests." ]
[ null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "Blood transfusion is a medical therapeutic act [1–3]. However, despite its benefits, every transfused patient is at risk for transfusion-transmissible infections, mainly HIV, hepatitis B (HBV), hepatitis C (HCV) and Treponema pallidum (T. pallidum) [2, 3]. The morbidity and mortality resulting from transfusion have serious consequences, not only for the recipients themselves but also for their families, their communities and society in general [3, 4]. Studies conducted in sub-Saharan Africa show a high prevalence of these infections [3–12]. In Mauritania, prevalence studies among blood donors conducted in Nouakchott in 1999, 2000 and 2009 showed HCV seroprevalences of 1.1% and 2.7% [11, 13] and HBV seroprevalences of 15.3% and 20.3% [11, 14]. This study aimed to update the seroprevalence data for 4 serological markers (HIV, HBV, HCV and anti-Treponema pallidum antibodies) tested in blood donors at the Aïoun hospital, in accordance with the national strategy for patient safety.", "This is a retrospective descriptive study of blood donors at the regional hospital of Aïoun, Hodh El Gharbi (Mauritania), over a period of 5 years from 1 January 2010 to 31 December 2015. This hospital is the reference center of the Hodh El Gharbi region and serves an urban and rural population. Aïoun el Atrouss (62,984 inhabitants) is the administrative capital of the wilaya of Hodh El Gharbi (288,772 inhabitants). The wilaya is located 800 km south-east of Nouakchott (the capital) and has the region's only referral hospital for medical and/or surgical care, offering the public treatments in dentistry, general medicine, surgery, obstetrics and ophthalmology. Donors were either volunteers or relatives or friends of patients, apparently healthy, weighing 50 kg or more, with a hemoglobin > 12.5 g/dl. Before every donation, donors were screened through a questionnaire, a complete physical examination, serologic screening for the major transfusion-transmissible infections, and ABO grouping. Donor confidentiality was respected, as required by the anonymity of the donation; no information revealing donor identity was collected in this study. The parameters studied were sex, age, and serology for HIV, HBV, HCV and syphilis. HBV HBsAg detection was performed using an immunochromatographic test, the Determine™ HBsAg Test (Alere Medical Co. Ltd, Japan). Anti-HIV and anti-HCV antibodies were detected with the Determine™ HIV-1/2 test (Alere Medical Co. Ltd, Japan) and the SignalMT HCV Serum/Plasma Dipstrip rapid test for hepatitis C (Alere Healthcare, South Africa), respectively. Syphilis screening was performed with a Rapid Plasma Reagin test (syphilis RPR test, Human Gesellschaft für Biochemica und Diagnostica mbH, Germany); positive samples were then confirmed by the TPHA (Treponema Pallidum Hemagglutination Assay) and the Venereal Disease Research Laboratory (VDRL) test. Data entry and analysis were performed using Epi Info version 6.4. The Chi-square test was used for comparisons of quantitative variables; a p value < 0.05 was taken as the significance threshold.", "Over the 5-year study period, 1,123 donors were collected. Males predominated, with a male/female sex ratio of 5.2. The average age of donors was 32.7 ± 10 years (range 17-73 years). Donors aged up to 20 years and those aged 21 to 30 years represented 11% and 40.5% of donors, respectively. The 31 to 40 age group accounted for 26.8% and the 41 to 50 age group for 14.7%. Donors aged 51 and older accounted for 7%. Considering all the markers, 182 donors were seropositive, an overall prevalence of 16.2%. The prevalences of HIV, HBV, HCV and syphilis were 1.2%, 11.8%, 0.2% and 3%, respectively. Co-infection was found in 0.7% of cases: 0.5% HBV/syphilis dual infection and 0.2% HBV/HIV dual infection. A statistically significant difference was observed between HBV carriage and the most affected age group (p = 0.009) and between syphilis and age groups (p = 0.02) (Table 1: comparing age groups of infected and non-infected blood donors).", "The findings of this study give a general idea of the prevalence of infectious markers in a rural hospital with very limited means, and the results can only be interpreted within these limits. However, they highlight a greater representation of men, with a male/female sex ratio of 5.1. This male predominance may be explained by socio-cultural factors making men the ideal candidates for blood donation. Gynecologic and obstetric physiological factors such as menstrual cycles, pregnancy and breastfeeding may also reinforce this trend; these factors may indeed discourage many women from donating blood [15]. This male predominance has also been reported by other African authors in Nigeria, Mali, Niger, Ivory Coast and Cameroon [4–6, 16, 17]. The average age of our donors was similar to that reported in other African studies in Mali, Nigeria and Cameroon [4, 5, 17]. In our study, the overall prevalence of the biomarkers studied in blood donors was 16.2%. This percentage is lower than those reported by other African authors in Burkina Faso, Nigeria, Niger, Cameroon and Tanzania [3, 4, 6, 7, 9]. The most represented age group was 21 to 30 years. These results are similar to those of other African studies [3, 8, 9, 18, 19]. The HBV seroprevalence reported in this study was 11.8%. This proportion was lower than previously reported in studies conducted in Nouakchott in 1999 and 2012 [11, 14]. It was also lower than those reported in the African sub-region, including Mali [20], Senegal [21], Burkina Faso [9], Niger [6], Ivory Coast [16] and Nigeria [4], but higher than those reported in Morocco [22], Ethiopia [8], Tanzania [7], the Democratic Republic of Congo (DRC) [10] and Cameroon [3, 17]. For HCV, the prevalence was 0.2%. This figure was lower than previously found in studies conducted in 1999 and 2007 in Nouakchott [11, 13], as well as those reported in studies from African countries [3, 4, 6–9, 17, 20, 22–26]. In our study, HIV seroprevalence was 1.2%. This figure is higher than the estimated national prevalence of 0.4% [27] and higher than those reported in Morocco and Algeria [28, 29]. In contrast, it remains lower than those reported in Mali, Burkina Faso, Niger, Ivory Coast, Nigeria, Ethiopia, Tanzania, the DRC and Cameroon [3, 4, 6–10, 16, 17, 20]. The prevalence of syphilis was 3%. This figure is lower than those found in other studies in Africa, including Burkina Faso, Tanzania, the DRC and Cameroon [3, 8–10, 17] but higher than those reported in Mali, Niger, Nigeria and Ethiopia [4, 6, 8, 20]. As regards co-infections, the HBV/HIV and HBV/syphilis associations were observed in 0.2% and 0.5% of cases, respectively. As indicated by other studies, these associations could be due to the fact that these infections share similar transmission routes, mainly blood and high-risk sexual behavior [4, 8, 30]. Other studies have shown an association between HIV and syphilis, probably because of their similar sexual mode of transmission and especially because the mucocutaneous lesions caused by syphilis are a gateway to HIV infection [8, 31]. In our study, the seroprevalences of HIV, HBV and syphilis were the highest across the different age groups in comparison with the other marker studied (HCV). There were statistically significant differences between seropositivity for HBV and syphilis and sex and age. These results may indicate certain higher-risk behaviors in men, such as multiple sexual partnerships, but could also be linked to the lower representation of women in blood donation. Differences from other studies can be attributed to differences in methodology: other authors worked on different categories of donors, whereas our study concerned only family donors. Furthermore, regarding the reagents used for serology, some authors used confirmatory tests in addition to the tests used in this study.", "Despite enormous progress in transfusion safety, blood transfusion is a medical therapeutic act that exposes the recipient to a risk of contamination by blood-transmissible infectious agents. Therefore, to ensure good blood safety for the recipient, it is necessary to focus on rigorous selection and retention of donors on the one hand, and on the use of standard screening methods minimizing the window period on the other. Furthermore, studies of the residual risk, measuring the likelihood of transmission of various infectious agents through blood products, are entirely justified, especially for HBV, which is a real public health problem in our context, with prevalence approaching 20% in different groups (surveys conducted among different groups between 2007 and 2009). What is known about this topic: Blood transfusion is a medical therapeutic act; each transfused patient is at risk for transfusion-transmissible infections if the blood is not secured; the morbidity and mortality resulting from transfusion have serious consequences for patients, communities and society in general. What this study adds: To our knowledge, this study is the first in the country to study these 4 markers at the same time; it estimates the seroprevalence of infectious markers in blood donors; it helps strengthen transfusion safety in recipients, since donors are intra-family in most cases.", "Blood transfusion is a medical therapeutic act;\nEach transfused patient is at risk for transfusion-transmissible infections if the blood is not secured;\nThe morbidity and mortality resulting from transfusion have serious consequences for patients, communities and society in general.", "To our knowledge, this study is the first in the country to study these 4 markers at the same time;\nIt estimates the seroprevalence of infectious markers in blood donors;\nIt helps strengthen transfusion safety in recipients, since donors are intra-family in most cases.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusions", null, null, null ]
[ "Seroprevalence", "HIV", "hepatitis B", "hepatitis C", "syphilis", "blood donors", "Aïoun, Mauritania" ]
Introduction: Blood transfusion is a medical therapeutic act [1–3]. However, despite its benefits, every transfused patient is at risk for transfusion-transmissible infections, mainly HIV, hepatitis B (HBV), hepatitis C (HCV) and Treponema pallidum (T. pallidum) [2, 3]. The morbidity and mortality resulting from transfusion have serious consequences, not only for the recipients themselves but also for their families, their communities and society in general [3, 4]. Studies conducted in sub-Saharan Africa show a high prevalence of these infections [3–12]. In Mauritania, prevalence studies among blood donors conducted in Nouakchott in 1999, 2000 and 2009 showed HCV seroprevalences of 1.1% and 2.7% [11, 13] and HBV seroprevalences of 15.3% and 20.3% [11, 14]. This study aimed to update the seroprevalence data for 4 serological markers (HIV, HBV, HCV and anti-Treponema pallidum antibodies) tested in blood donors at the Aïoun hospital, in accordance with the national strategy for patient safety. Methods: This is a retrospective descriptive study of blood donors at the regional hospital of Aïoun, Hodh El Gharbi (Mauritania), over a period of 5 years from 1 January 2010 to 31 December 2015. This hospital is the reference center of the Hodh El Gharbi region and serves an urban and rural population. Aïoun el Atrouss (62,984 inhabitants) is the administrative capital of the wilaya of Hodh El Gharbi (288,772 inhabitants). The wilaya is located 800 km south-east of Nouakchott (the capital) and has the region's only referral hospital for medical and/or surgical care, offering the public treatments in dentistry, general medicine, surgery, obstetrics and ophthalmology. Donors were either volunteers or relatives or friends of patients, apparently healthy, weighing 50 kg or more, with a hemoglobin > 12.5 g/dl.
Before every donation, donors were screened through a questionnaire, a complete physical examination, serologic screening for the major transfusion-transmissible infections, and ABO grouping. Donor confidentiality was respected, as required by the anonymity of the donation; no information revealing donor identity was collected in this study. The parameters studied were sex, age, and serology for HIV, HBV, HCV and syphilis. HBV HBsAg detection was performed using an immunochromatographic test, the Determine™ HBsAg Test (Alere Medical Co. Ltd, Japan). Anti-HIV and anti-HCV antibodies were detected with the Determine™ HIV-1/2 test (Alere Medical Co. Ltd, Japan) and the SignalMT HCV Serum/Plasma Dipstrip rapid test for hepatitis C (Alere Healthcare, South Africa), respectively. Syphilis screening was performed with a Rapid Plasma Reagin test (syphilis RPR test, Human Gesellschaft für Biochemica und Diagnostica mbH, Germany); positive samples were then confirmed by the TPHA (Treponema Pallidum Hemagglutination Assay) and the Venereal Disease Research Laboratory (VDRL) test. Data entry and analysis were performed using Epi Info version 6.4. The Chi-square test was used for comparisons of quantitative variables; a p value < 0.05 was taken as the significance threshold. Results: Over the 5-year study period, 1,123 donors were collected. Males predominated, with a male/female sex ratio of 5.2. The average age of donors was 32.7 ± 10 years (range 17-73 years). Donors aged up to 20 years and those aged 21 to 30 years represented 11% and 40.5% of donors, respectively. The 31 to 40 age group accounted for 26.8% and the 41 to 50 age group for 14.7%. Donors aged 51 and older accounted for 7%. Considering all the markers, 182 donors were seropositive, an overall prevalence of 16.2%. The prevalences of HIV, HBV, HCV and syphilis were 1.2%, 11.8%, 0.2% and 3%, respectively.
Co-infection was found in 0.7% of cases: 0.5% HBV/syphilis dual infection and 0.2% HBV/HIV dual infection. A statistically significant difference was observed between HBV carriage and the most affected age group (p = 0.009) and between syphilis and age groups (p = 0.02) (Table 1: comparing age groups of infected and non-infected blood donors). Discussion: The findings of this study give a general idea of the prevalence of infectious markers in a rural hospital with very limited means, and the results can only be interpreted within these limits. However, they highlight a greater representation of men, with a male/female sex ratio of 5.1. This male predominance may be explained by socio-cultural factors making men the ideal candidates for blood donation. Gynecologic and obstetric physiological factors such as menstrual cycles, pregnancy and breastfeeding may also reinforce this trend; these factors may indeed discourage many women from donating blood [15]. This male predominance has also been reported by other African authors in Nigeria, Mali, Niger, Ivory Coast and Cameroon [4–6, 16, 17]. The average age of our donors was similar to that reported in other African studies in Mali, Nigeria and Cameroon [4, 5, 17]. In our study, the overall prevalence of the biomarkers studied in blood donors was 16.2%. This percentage is lower than those reported by other African authors in Burkina Faso, Nigeria, Niger, Cameroon and Tanzania [3, 4, 6, 7, 9]. The most represented age group was 21 to 30 years. These results are similar to those of other African studies [3, 8, 9, 18, 19]. The HBV seroprevalence reported in this study was 11.8%. This proportion was lower than previously reported in studies conducted in Nouakchott in 1999 and 2012 [11, 14].
It was also lower than those reported in the African sub-region, including Mali [20], Senegal [21], Burkina Faso [9], Niger [6], Ivory Coast [16] and Nigeria [4], but higher than those reported in Morocco [22], Ethiopia [8], Tanzania [7], the Democratic Republic of Congo (DRC) [10] and Cameroon [3, 17]. For HCV, the prevalence was 0.2%. This figure was lower than previously found in studies conducted in 1999 and 2007 in Nouakchott [11, 13], as well as those reported in studies from African countries [3, 4, 6–9, 17, 20, 22–26]. In our study, HIV seroprevalence was 1.2%. This figure is higher than the estimated national prevalence of 0.4% [27] and higher than those reported in Morocco and Algeria [28, 29]. In contrast, it remains lower than those reported in Mali, Burkina Faso, Niger, Ivory Coast, Nigeria, Ethiopia, Tanzania, the DRC and Cameroon [3, 4, 6–10, 16, 17, 20]. The prevalence of syphilis was 3%. This figure is lower than those found in other studies in Africa, including Burkina Faso, Tanzania, the DRC and Cameroon [3, 8–10, 17] but higher than those reported in Mali, Niger, Nigeria and Ethiopia [4, 6, 8, 20]. As regards co-infections, the HBV/HIV and HBV/syphilis associations were observed in 0.2% and 0.5% of cases, respectively. As indicated by other studies, these associations could be due to the fact that these infections share similar transmission routes, mainly blood and high-risk sexual behavior [4, 8, 30]. Other studies have shown an association between HIV and syphilis, probably because of their similar sexual mode of transmission and especially because the mucocutaneous lesions caused by syphilis are a gateway to HIV infection [8, 31]. In our study, the seroprevalences of HIV, HBV and syphilis were the highest across the different age groups in comparison with the other marker studied (HCV). There were statistically significant differences between seropositivity for HBV and syphilis and sex and age.
These results may indicate certain higher-risk behaviors in men, such as multiple sexual partnerships, but could also be linked to the lower representation of women in blood donation. Differences from other studies can be attributed to differences in methodology: other authors worked on different categories of donors, whereas our study concerned only family donors. Furthermore, regarding the reagents used for serology, some authors used confirmatory tests in addition to the tests used in this study. Conclusion: Despite enormous progress in transfusion safety, blood transfusion is a medical therapeutic act that exposes the recipient to a risk of contamination by blood-transmissible infectious agents. Therefore, to ensure good blood safety for the recipient, it is necessary to focus on rigorous selection and retention of donors on the one hand, and on the use of standard screening methods minimizing the window period on the other. Furthermore, studies of the residual risk, measuring the likelihood of transmission of various infectious agents through blood products, are entirely justified, especially for HBV, which is a real public health problem in our context, with prevalence approaching 20% in different groups (surveys conducted among different groups between 2007 and 2009).
What is known about this topic: Blood transfusion is a medical therapeutic act; each transfused patient is at risk for transfusion-transmissible infections if the blood is not secured; the morbidity and mortality resulting from transfusion have serious consequences for patients, communities and society in general. What this study adds: To our knowledge, this study is the first in the country to study these 4 markers at the same time; it estimates the seroprevalence of infectious markers in blood donors; it helps strengthen transfusion safety in recipients, since donors are intra-family in most cases. Competing interests: The authors declare no competing interests.
Background: To estimate the seroprevalence of HIV, hepatitis B, hepatitis C and syphilis among blood donors at the Aïoun hospital. Methods: This is a retrospective study from 1 January 2010 to 31 December 2015. Results: Over the five-year study period, 1,123 donors were collected. Of these, 182 were seropositive for at least one marker, an overall prevalence of 16.2%, with a male predominance (male/female sex ratio of 5.2). The average age of donors was 32.7 ± 10 years (range 17-73 years). The most represented age group was 21-30 years (40.5%). The seroprevalences found were 1.2% for HIV, 11.8% for HBV, 0.2% for HCV and 3% for syphilis. Co-infection was found in 0.7% of donors: 0.5% HBV/syphilis and 0.2% HBV/HIV. Conclusions: The transmission of transfusion-related infectious agents represents the greatest threat to transfusion safety for the recipient. Therefore, rigorous selection and screening of blood donors are highly recommended to ensure blood safety for the recipient.
Introduction: Blood transfusion is a medical therapeutic act [1–3]. However, despite the benefits, each patient is transfused at risk for transfusion-transmissible infections, mainly HIV, hepatitis B (HBV), hepatitis C (HCV) and Trepanoma pallidum (T. pallidum) [2, 3]. The morbidity and mortality resulting from transfusion have serious consequences, not only for the beneficiaries themselves but also for their families, their communities and society in general [3, 4]. Studies conducted in sub-Saharan Africa show that there is a high prevalence of these infections [3–12]. In Mauritania, studies of prevalence among blood donors held in Nouakchott in 1999, 2000 and 2009 showed respective HCV seroprevalence of 1.1% and 2.7% [11, 13] HBV and 15.3% and 20.3% [11, 14]. This study has aimed to update the seroprevalence data of 4 serological markers (HIV, HBV, HCV, anti-Ag-Trepanoma pallidum) tested in blood donors from the hospital Aïoun, in accordance with national strategy for patient safety. Conclusion: Despite enormous progress in the framework of transfusion safety, blood transfusion is a medical therapeutic act which exposes the recipient to a risk of contamination by infectious agents transmissible through blood. Therefore, to enhance good blood safety for the recipient, it is necessary to focus on a rigorous selection and retention of donors on the one hand and the use of screening méthods standards minimizing the window period. Furthermore, studies on the residual risk to measure the likelihood of transmission of various infectious agents by blood products, are entirely justified, especially for HBV, which is a real public health problem in our context, with prevalence approaching 20% in different groups (surveys conducted among different groups between 2007 and 2009). 
What is known about this topic: Blood transfusion is a medical therapeutic act; each transfused patient is at risk of transfusion-transmissible infections if the blood is not secured; the morbidity and mortality resulting from transfusion have serious consequences for patients, communities and society in general. What this study adds: To our knowledge, this study is the first in the country to examine these 4 markers at the same time; it estimates the seroprevalence of infectious markers in blood donors; it helps strengthen transfusion safety in recipients, since in most cases only intra-family donors are available.
Background: To estimate the seroprevalence of HIV, hepatitis B, hepatitis C and syphilis among blood donors at the Aïoun hospital. Methods: This is a retrospective study covering 1 January 2010 to 31 December 2015. Results: Over the five-year study period, 1,123 donors were recorded. Of these, 182 tested positive for at least one marker, an overall prevalence of 16.2%, with a male predominance (male-to-female sex ratio of 5.2). The mean age of donors was 32.7 ± 10 years (range 17-73 years). The most represented age group was 21-30 years (40.5%). The seroprevalences found were 1.2% for HIV, 11.8% for HBV, 0.2% for HCV and 3% for syphilis. Co-infections were found in 0.7% of donors, comprising 0.5% HBV/syphilis and 0.2% HBV/HIV dual infections. Conclusions: The transmission of transfusion-related infectious agents represents the greatest threat to the transfusion safety of the recipient. Therefore, rigorous selection and screening of blood donors are highly recommended to ensure blood safety for the recipient.
2,160
215
[ 47, 52, 7 ]
8
[ "blood", "donors", "study", "transfusion", "hbv", "syphilis", "hiv", "age", "markers", "prevalence" ]
[ "general blood transfusion", "safety blood transfusion", "transmissible infections blood", "prevalence blood donors", "infected blood donors" ]
[CONTENT] Seroprevalence | HIV | hepatitis B | hepatitis C | syphilis | blood donors | Aïoun, Mauritania [SUMMARY]
[CONTENT] Seroprevalence | HIV | hepatitis B | hepatitis C | syphilis | blood donors | Aïoun, Mauritania [SUMMARY]
[CONTENT] Seroprevalence | HIV | hepatitis B | hepatitis C | syphilis | blood donors | Aïoun, Mauritania [SUMMARY]
[CONTENT] Seroprevalence | HIV | hepatitis B | hepatitis C | syphilis | blood donors | Aïoun, Mauritania [SUMMARY]
[CONTENT] Seroprevalence | HIV | hepatitis B | hepatitis C | syphilis | blood donors | Aïoun, Mauritania [SUMMARY]
[CONTENT] Seroprevalence | HIV | hepatitis B | hepatitis C | syphilis | blood donors | Aïoun, Mauritania [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | Coinfection | Female | HIV Infections | Hepatitis B | Hepatitis C | Humans | Male | Mauritania | Middle Aged | Prevalence | Retrospective Studies | Seroepidemiologic Studies | Syphilis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | Coinfection | Female | HIV Infections | Hepatitis B | Hepatitis C | Humans | Male | Mauritania | Middle Aged | Prevalence | Retrospective Studies | Seroepidemiologic Studies | Syphilis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | Coinfection | Female | HIV Infections | Hepatitis B | Hepatitis C | Humans | Male | Mauritania | Middle Aged | Prevalence | Retrospective Studies | Seroepidemiologic Studies | Syphilis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | Coinfection | Female | HIV Infections | Hepatitis B | Hepatitis C | Humans | Male | Mauritania | Middle Aged | Prevalence | Retrospective Studies | Seroepidemiologic Studies | Syphilis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | Coinfection | Female | HIV Infections | Hepatitis B | Hepatitis C | Humans | Male | Mauritania | Middle Aged | Prevalence | Retrospective Studies | Seroepidemiologic Studies | Syphilis | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Blood Donors | Coinfection | Female | HIV Infections | Hepatitis B | Hepatitis C | Humans | Male | Mauritania | Middle Aged | Prevalence | Retrospective Studies | Seroepidemiologic Studies | Syphilis | Young Adult [SUMMARY]
[CONTENT] general blood transfusion | safety blood transfusion | transmissible infections blood | prevalence blood donors | infected blood donors [SUMMARY]
[CONTENT] general blood transfusion | safety blood transfusion | transmissible infections blood | prevalence blood donors | infected blood donors [SUMMARY]
[CONTENT] general blood transfusion | safety blood transfusion | transmissible infections blood | prevalence blood donors | infected blood donors [SUMMARY]
[CONTENT] general blood transfusion | safety blood transfusion | transmissible infections blood | prevalence blood donors | infected blood donors [SUMMARY]
[CONTENT] general blood transfusion | safety blood transfusion | transmissible infections blood | prevalence blood donors | infected blood donors [SUMMARY]
[CONTENT] general blood transfusion | safety blood transfusion | transmissible infections blood | prevalence blood donors | infected blood donors [SUMMARY]
[CONTENT] blood | donors | study | transfusion | hbv | syphilis | hiv | age | markers | prevalence [SUMMARY]
[CONTENT] blood | donors | study | transfusion | hbv | syphilis | hiv | age | markers | prevalence [SUMMARY]
[CONTENT] blood | donors | study | transfusion | hbv | syphilis | hiv | age | markers | prevalence [SUMMARY]
[CONTENT] blood | donors | study | transfusion | hbv | syphilis | hiv | age | markers | prevalence [SUMMARY]
[CONTENT] blood | donors | study | transfusion | hbv | syphilis | hiv | age | markers | prevalence [SUMMARY]
[CONTENT] blood | donors | study | transfusion | hbv | syphilis | hiv | age | markers | prevalence [SUMMARY]
[CONTENT] pallidum | trepanoma | hepatitis | trepanoma pallidum | hcv | hbv | transfusion | studies | 11 | patient [SUMMARY]
[CONTENT] test | el | gharbi | el gharbi | hodh | hodh el | hodh el gharbi | hospital | syphilis | inhabitants [SUMMARY]
[CONTENT] age | years | age group | group | donors | infection | hbv | syphilis | 40 | infected [SUMMARY]
[CONTENT] transfusion | blood | safety | infectious | risk | transfusion safety | study | donors | markers | medical therapeutic [SUMMARY]
[CONTENT] transfusion | donors | blood | study | markers | age | declare competing interests | authors declare | authors declare competing interests | competing interests [SUMMARY]
[CONTENT] transfusion | donors | blood | study | markers | age | declare competing interests | authors declare | authors declare competing interests | competing interests [SUMMARY]
[CONTENT] Aïoun [SUMMARY]
[CONTENT] 1 January 2010 to 31 December 2015 [SUMMARY]
[CONTENT] five-year | 1,123 ||| 182 | 16.2% | Man/Woman | 5.2 ||| 32.7 | 10 years | 17-73 years ||| 21-30 years | 40.5% ||| 1.2% | 11.8% | HBV | 0.2% | 3% | syphilis ||| 0.7% | 0.5% | 0.2% | HBV/HIV [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] Aïoun ||| 1 January 2010 to 31 December 2015 ||| five-year | 1,123 ||| 182 | 16.2% | Man/Woman | 5.2 ||| 32.7 | 10 years | 17-73 years ||| 21-30 years | 40.5% ||| 1.2% | 11.8% | HBV | 0.2% | 3% | syphilis ||| 0.7% | 0.5% | 0.2% | HBV/HIV ||| ||| [SUMMARY]
[CONTENT] Aïoun ||| 1 January 2010 to 31 December 2015 ||| five-year | 1,123 ||| 182 | 16.2% | Man/Woman | 5.2 ||| 32.7 | 10 years | 17-73 years ||| 21-30 years | 40.5% ||| 1.2% | 11.8% | HBV | 0.2% | 3% | syphilis ||| 0.7% | 0.5% | 0.2% | HBV/HIV ||| ||| [SUMMARY]
Borna disease virus (BDV) infection in psychiatric patients and healthy controls in Iran.
25186971
Borna disease virus (BDV) is an evolutionary old RNA virus, which infects brain and blood cells of humans, their primate ancestors, and other mammals. Human infection has been correlated to mood disorders and schizophrenia, but the impact of BDV on mental-health still remains controversial due to poor methodological and cross-national comparability.
BACKGROUND
This first report from the Middle East aimed to determine BDV infection prevalence in Iranian acute psychiatric disorder patients and healthy controls through circulating immune complexes (CIC), antibodies (Ab) and antigen (pAg) in blood plasma using a standardized triple enzyme immune assay (EIA). Samples of 314 subjects (114 psychiatric cases, 69 blood donors, and 131 healthy controls) were assayed and data analyzed quantitatively and qualitatively.
METHOD
CICs revealed a BDV prevalence of one third (29.5%) in healthy Iranian controls (27.5% controls; 33.3% blood donors). In psychiatric patients CIC prevalence was higher than in controls (40.4%) and significantly correlating with bipolar patients exhibiting overt clinical symptoms (p = 0.005, OR = 1.65). CIC values were significantly elevated in bipolar (p = 0.001) and major depressive disorder (p = 0.029) patients as compared to controls, and in females compared to males (p = 0.031).
RESULTS
This study supports a similarly high prevalence of subclinical human BDV infections in Iran as reported for central Europe, and provides again an indication for the correlation of BDV infection and mood disorders. Further studies should address the morbidity risk for healthy carriers and those with elevated CIC levels, along with gender disparities.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Bipolar Disorder", "Blood Donors", "Borna Disease", "Borna disease virus", "Case-Control Studies", "Depressive Disorder, Major", "Female", "Humans", "Iran", "Male", "Middle Aged", "Prevalence", "Young Adult" ]
4167498
Background
Borna disease virus (BDV) holds unique features in terms of its cell biology, molecular properties, preference for old brain areas, broad host spectrum [1], and unusual biological age, dating back to more than 40 million years [2, 3]. The outstanding molecular biology of the virus, and its single-stranded RNA genome leading to the classification [4] of an own family, Bornaviridae (order Mononegavirales), has been comprehensively reviewed [5]. BDV had first been recognized as an often deadly pathogen of horses and sheep [1, 6] with a wide spectrum in other domestic and farm animals. However, BDV's non-cytolytic properties, low replication while over-expressing two major proteins, and evidence of modulating neurotransmitter networks [7] pointed to a long-term adaptation toward moderate pathogenicity and persistency [1]. Human infection and its putative link to mental disorders, first suggested after detection of antibodies [8], became a key issue inspiring research groups around the globe. After nucleic acid and antigen could be demonstrated in white blood cells of psychiatrically diseased patients [9], such a link was further strengthened by the finding of specific RNA sequences in post mortem brains of psychiatric patients [10] and limbic structures from old people [11]. The impact of human infection was significantly supported by the isolation and sequence characterization of human viruses from psychiatric patients' blood cells and brain [12–14], and the recent correlation of neurological symptoms in humans with BDV infection [15]. The latest discovery of functional endogenous virus gene pieces integrated in the human and primate ancestor germ lines strongly argued in favor of a long-term co-evolution of virus and hosts [2, 3, 16]. However, any role of BDV in human mental health remained controversial, despite predominantly supportive reports [17–24].
This is mainly due to great variation in prevalence results, largely caused by methodological disparities arising from different antibody and/or RNA techniques, which also affect cross-national comparability. In contrast, BDV-specific circulating immune complexes, the most prevalent infection markers [25], have been shown to be superior to antibody or RNA detection. Pilot prevalence studies demonstrated that the BDV-CIC enzyme immune assay (EIA) is an easy-to-perform and robust test format, suitable for conducting comparable surveys in the general population of different countries, as well as longitudinal follow-up studies of patients in clinical cohorts [26–31]. Circulating immune complexes are the result of periods of antigenemia with over-expression of N- and P-proteins and antibody induction in the host, reflecting recent and current virus activity. Evidence for a contribution of BDV infection to disease symptoms has recently been reviewed [32]. This is the first report from the Middle East addressing the prevalence of BDV in the human population of Iran; the virus has previously been reported in Iranian horses through antibody studies [33]. Here we explore the prevalence of BDV markers among Iranian mentally diseased patients, healthy controls, and blood donors.
Method
Individual subjects: Three hundred and fourteen Iranian subjects, including 114 psychiatric patients, 131 sex- and age-matched healthy controls, and 69 blood donors, were included in this study. The association between BDV infection markers in blood plasma and five DSM-IV-categorized psychiatric diseases, as well as the gender and age of the individuals, was analyzed. Basic data are given in Table 1. The 114 acute psychiatric patients had been admitted to local departments of psychiatry in Tehran. All patients met the Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) criteria on the basis of interviews and medical records. They could be divided into five main groups with different DSM-IV codes: 64 bipolar disorder (BD), 12 major depressive disorder (MDD), 18 schizophrenia, 15 schizoaffective, and 5 obsessive compulsive disorder (OCD) patients (Table 2). Additionally, 69 blood donors and 131 sex- and age-matched, mentally healthy subjects (based on the supervision of the psychiatrists) were included and regarded as controls. All individuals were negative for hepatitis B and C viruses, as well as HIV. The study was approved by the Ethics Committee of the Neuroscience Research Center at Shahid Beheshti University of Medical Sciences, and all patients, or an authorized representative, gave written informed consent for participation.
Blood samples of all individuals were collected prior to any medical treatment, and plasma or sera were kept at -20°C.

Table 1: Basic data on the population

Group             N    Female/Male  Mean age ± SE   Min-Max
Controls          131  83/48        41.08 ± 1.009   18-69
Blood donors      69   6/63         29.93 ± 1.296   19-58
Mental patients   114  52/62        37.42 ± 1.103   17-62
  BD*             64   32/32        36.20 ± 1.477   17-62
  MDD**           12   7/5          43.42 ± 3.450   21-57
  Schizophrenia   18   3/15         34.56 ± 2.689   20-53
  Schizoaffective 15   5/10         38.33 ± 2.863   22-57
  OCD***          5    5/0          46.20 ± 3.967   37-56
Summary           314  141/173      37.30 ± 0.688   17-69

*Bipolar disorder. **Major depressive disorder. ***Obsessive compulsive disorder.

Table 2: DSM-IV codes, numbers and symptoms of psychiatric patients

BD (N = 64):
296.02 (2)   Single manic episode, moderate
296.03 (4)   Single manic episode, severe without psychotic features
296.04 (6)   Single manic episode, severe with psychotic features
296.42 (1)   Most recent episode manic, moderate
296.43 (5)   Most recent episode manic, severe without psychotic features
296.44 (20)  Most recent episode manic, severe with psychotic features
296.52 (2)   Most recent episode depressed, moderate
296.53 (6)   Most recent episode depressed, severe without psychotic features
296.54 (2)   Most recent episode depressed, severe with psychotic features
296.62 (2)   Most recent episode mixed, moderate
296.63 (11)  Most recent episode mixed, severe without psychotic features
296.64 (3)   Most recent episode mixed, severe with psychotic features

MDD (N = 12):
296.22 (2)   Recurrent, moderate
296.23 (1)   Recurrent, severe without psychotic features
296.24 (2)   Recurrent, severe with psychotic features
296.32 (3)   Single episode, moderate
296.33 (2)   Single episode, severe without psychotic features
296.34 (2)   Single episode, severe with psychotic features

Schizophrenia (N = 18):
295.01 (3)   Disorganized type
295.03 (9)   Paranoid type
295.09 (6)   Undifferentiated type

Schizoaffective (N = 15):
295.07 (15)  Schizoaffective disorder

OCD (N = 5):
300.03 (5)   Obsessive compulsive disorder
Enzyme immune assays (EIAs): The BDV infection markers, circulating immune complexes (CICs), virus antigens (N- and P-protein, N/P-complexes; abbreviated Ag), and antibodies (Ab) were assayed using the triple enzyme immune assay (EIA) system, as described [25]. According to the double-sandwich format, two BDV-specific monoclonal antibodies (mAbs), anti-N mAb (W1) and anti-P mAb (Kfu2), were used to bind any BDV N- and P-protein or N/P heterodimers in plasma, either as circulating antigen bound to virus-specific host antibodies (CIC-EIA) or as free antigen (pAg-EIA). CICs were visualized through alkaline phosphatase (AP)-coupled anti-human IgG and substrate, whereas the Ag-EIA requires a BDV-specific detecting antibody (rabbit hyper-immune serum) followed by AP-coupled anti-rabbit IgG and substrate. The specificity and sensitivity of the BDV mAbs have been further characterized [34]. In particular, epitope mapping has revealed that both mAbs bind to powerful conformational epitopes on either protein, formed through 5 binding sites in the case of the anti-N mAb (W1) and 3 binding sites in the case of the anti-P mAb (Kfu2). None of the W1 binding sites overlap with P-protein binding domains on the N-protein, confirming that the commonly occurring N/P heterodimers are recognized by W1 as well. In addition, none of the W1 and Kfu2 binding sites overlap with functionally important sites on the N- and P-proteins, such as the NLS (nuclear localization signal) and NES (nuclear export signal). The extraordinarily high binding capacities of these mAbs for native N and P proteins have been determined through affinity-chromatography methods using N and P protein from the brain of a horse with Borna disease, resulting in dissociation constants (KD) of 2.31 × 10^-9 for W1 and 3.33 × 10^-9 for Kfu2.
As for other antigen assays, recombinant proteins were used to determine sensitivity and to further confirm specificity. A detection limit of 1.5-3 ng/ml of purified recombinant N-protein (rN) has been determined for the W1 mAb. Diluting rN in CIC-, Ag- and Ab-EIA-negative serum did not make any difference, confirming specificity. Additionally, N-protein could be demonstrated in the immune precipitate (IP, using W1) of a strongly antigen-positive patient's plasma by western blot, whereas the IP of an antigen-negative plasma showed nothing but the heavy and light chains of the mAb [34]. Furthermore, using recombinant P-protein in either its non-phosphorylated or phosphorylated form revealed that mAb Kfu2 detects only the activated, phosphorylated form. For the antibody assay we followed the exact protocol given earlier [25]. All three assays use the basic coating of antibody-stabilized monoclonal antibodies (W1 and Kfu2) as a standard immune module [34]. Following the primary experimental setting, a standardized cut-off value was specified as the mean of negative values plus 2 standard deviations, regularly reaching an extinction of ≤ 0.1, which separates negative from positive scores. The initial dilutions of the samples were 1:20, 1:2, and 1:100 to allow the same cut-off value for testing CICs, free Ag, and Ab, respectively [25]. Results were visualized through alkaline phosphatase-conjugated antibodies and a colorimetric substrate, absorbance was measured in a multichannel photometer (405 nm), and values were imported into statistical software [25]. One third of the sample collection was retested and gave essentially the same results.

Statistical analysis: All data of the patients and controls were submitted to parametric and non-parametric statistical analyses. Comparison of the groups was carried out using independent t-tests, ANOVA and chi-square tests. The prevalence of BDV infection markers was calculated based on the cut-off value of 0.1. Subjects were classified according to clinical diagnosis, gender and age as independent variables, based on the CIC data measured. The detailed evaluation of CIC tests was based on standardized scoring of the OD values: >0.100-0.300 = +, >0.300-0.600 = ++, >0.600-1.000 = +++, and >1.000 = ++++ [25]. Prevalence and odds ratios (OR) were calculated. Chi-square tests were used to estimate statistical differences between the groups. Furthermore, binary logistic regression was applied to estimate the individual influence of three basic variables, namely age, gender and clinical diagnosis, on CIC titers.
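The cut-off and CIC scoring rules described in the methods (cut-off = mean of negative ODs + 2 SD, in practice ≤ 0.100, and the +/++/+++/++++ bands for positive extinctions) can be sketched as follows. This is a minimal illustration; the function names are ours, not the authors'.

```python
# Hedged sketch of the EIA decision rules described above.
# cutoff(): mean of negative-control ODs plus 2 standard deviations.
# cic_score(): maps an OD (extinction) value to the paper's scoring bands.
import statistics

def cutoff(negative_ods: list[float]) -> float:
    """Cut-off = mean of negative values + 2 SD (typically <= 0.100)."""
    return statistics.mean(negative_ods) + 2 * statistics.stdev(negative_ods)

def cic_score(od: float, cut: float = 0.100) -> str:
    """Standardized CIC scoring: >0.100-0.300 = +, >0.300-0.600 = ++,
    >0.600-1.000 = +++, >1.000 = ++++ (values at or below cut are negative)."""
    if od <= cut:
        return "negative"
    if od <= 0.300:
        return "+"
    if od <= 0.600:
        return "++"
    if od <= 1.000:
        return "+++"
    return "++++"

print(cic_score(0.45))  # → ++
```

In practice the same 0.1 cut-off is applied to all three assays (CIC, free Ag, Ab) via the 1:20, 1:2 and 1:100 initial dilutions.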
Results
Population characteristics: As shown in Table 1, efforts were made to include gender- and age-matched control subjects, but comparability could finally not be achieved. The large disparity in both the female-to-male ratio and the age of blood donors compared to patients considerably accounted for this limitation (gender: chi-square = 7.758, p = 0.005; age: by t-test, p = 0.015). As shown in Table 2, the majority of bipolar patients (BD) were either manic (59.4%) or in a mixed episode (25%), whereas only 15.6% had experienced a recent depression. Of all patients, only 19.3% (10 BD and 12 MDD patients out of 114) presented with a recent depressive episode.

Circulating immune complexes: Based on CICs, we found a mean prevalence of subclinical infection of 29.5% in the healthy Iranian controls, with a slightly higher prevalence in blood donors (33.3%) compared to the healthy subject cohort (27.5%), for whom any mental illness had been excluded. Gender and age had no significant influence on CIC prevalence, but psychiatric patients differed significantly from the control group (p = 0.036), presenting with a mean CIC prevalence of 40.4%. In particular, the patients with bipolar disorder differed significantly with reference to CIC prevalence, OR and OR estimate (OR Est.) (p = 0.014).
It is noteworthy that the CIC prevalence found in patients with mood disorders (BD, MDD, and schizoaffective disorder; N = 91) was double that of schizophrenia patients (44% vs. 22%), a difference that was also statistically significant (p = 0.026). The statistical evaluations are given in Table 3.

Table 3. CIC results against three predictors: sex, age and diagnosis

Predictor          N     Pos./Neg. (%)      OR      OR Est.   CI (95%)
Diagnoses:
  Controls         131   36/95  (27.5%)     Ref     Ref       Ref
  Blood donors     69    23/46  (33.3%)     1.21    1.029     0.483-2.193
  Patients         114   46/68  (40.4%)     1.47    1.088     1.042-3.405
  BD               64    29/35  (45.3%)     1.65    2.035     1.072-3.863
  MDD              12    6/6    (50.0%)     1.82    2.750     0.820-9.222
  Schizophrenia    18    4/14   (22.2%)     0.81    0.632     0.186-2.149
  Schizoaffective  15    5/10   (33.3%)     1.21    1.254     0.392-4.014
  OCD              5     2/3    (40.0%)     1.46    2.225     0.343-14.437
Sex:
  Male             173   57/116 (32.9%)     Ref     Ref       Ref
  Female           141   48/93  (34.0%)     1.03    0.961     0.549-1.682
Age group:
  18-25            78    32/46  (41.0%)     1.948   2.962     0.850-10.325
  26-35            66    24/42  (36.4%)     1.727   2.422     0.691-8.894
  36-45            80    23/57  (28.8%)     1.365   1.597     0.469-5.440
  46-55            71    22/49  (31.0%)     1.471   1.830     0.534-6.278
  56-65            19    4/15   (21.1%)     Ref     Ref       Ref

Free antibody and antigen

Based on the cut-off value of 0.1 [25], valid for all tests of the triple-EIA system to differentiate negative from positive results, free Abs were measured in 7.8% of the bipolar (BD) and 16.7% of the schizophrenia patients, whereas the controls presented with 5.3%. Free Ag was present in 5.6% of the schizophrenia patients (1 out of 18), vs. 1% in controls and blood donors combined (2 out of 200). The other patient groups were negative in both tests (for details see Table 4). The dynamic balance between CIC formation, antigens, and antibodies accounts for their relative amounts simultaneously present in a sample.
The cross-sectional design of the study provides an infection profile valid only at a given time point, thereby limiting the explanatory power of triple-EIA results.

Table 4. Prevalence of free antibodies and antigen

                   Free antibody                     Free antigen (N- & P-protein)
Group              Pos./Neg.  Prev. %   CI           Pos./Neg.  Prev. %   CI
Controls           7/124      5.3%      1.5-9%       1/130      0.7%      0.7-2.3%
Patients           8/106      7.0%      2.3-11.7%    1/113      0.9%      0-2.6%
  BD               5/59       7.8%      1.2-14.4%    0/64       0.0%      -
  MDD              0/12       0.0%      -            0/12       0.0%      -
  Schizophrenia    3/15       16.7%     0-33.9%      1/17       5.6%      0-16%
  Schizoaffective  0/15       0.0%      -            0/15       0.0%      -
  OCD              0/5        0.0%      -            0/5        0.0%      -
Blood donors       1/68       1.4%      0-4.2%       1/68       1.4%      0-4.2%
Total              16/298     5.1%      2.7-7.5%     3/311      1%        0-2%

Additional data analysis

As shown in Table 5, dividing all data into negative and positive groups according to the cut-off values and performing only non-parametric analyses left much of the data unusable for statistical inference. Instead, we used quantitative CIC data from the EIA reading (after subtraction of the OD values for blanks) for parametric statistical analyses. CIC levels were noticeably increased in both the bipolar disorder (mean OD 0.147) and the major depressive disorder (0.163) groups, reaching statistical significance compared with control subjects. The 95% CIs of the CIC values are illustrated in Figure 1. CIC levels within the total population tended to be elevated among females compared to males (p = 0.089). Therefore, the influence of sex on CIC extinction values was also analyzed in these patient groups (Figure 2).
A significant increase in CIC levels was recognized in female patients when compared to males (p = 0.031).

Table 5. Distribution of categorized CIC results (neg., +, ++, +++) in subgroups

Subgroup           Neg N (%)     +            ++           +++         Total
Controls           95 (72.5%)    31 (23.7%)   5 (3.8%)     0           131
Blood donors       46 (66.7%)    21 (30.4%)   2 (2.9%)     0           69
Patients           68 (59.6%)    36 (31.5%)   8 (7.0%)     2 (1.7%)    114
  BD               35 (54.7%)    22 (34.4%)   6 (9.4%)     1 (1.6%)    64
  MDD              6 (50.0%)     5 (41.7%)    0            1 (8.3%)    12
  Schizophrenia    14 (77.8%)    2 (11.1%)    2 (11.1%)    0           18
  Schizoaffective  10 (66.7%)    5 (33.3%)    0            0           15
  OCD              3 (60.0%)     2 (40.0%)    0            0           5
Sex:
  Male             116 (67.1%)   50 (28.9%)   7 (4.0%)     0           173
  Female           93 (66.0%)    38 (27.0%)   8 (5.7%)     2 (1.4%)    141
Age group:
  18-25 ys         46 (59.0%)    26 (33.3%)   6 (7.7%)     0           78
  26-35 ys         42 (63.6%)    20 (30.3%)   4 (6.1%)     0           66
  36-45 ys         57 (71.3%)    20 (25.0%)   2 (2.5%)     1 (1.3%)    80
  46-55 ys         49 (69.0%)    20 (28.2%)   2 (2.8%)     0           71
  56-65 ys         15 (78.9%)    2 (10.5%)    1 (5.3%)     1 (5.3%)    19

OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600.

Figure 1. Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of the 95% CI: Controls 0.080-0.095, Schizophrenia 0.068-0.121, Schizoaffective 0.056-0.080, Bipolar 0.125-0.167, MDD 0.096-0.224, OCD 0.047-0.134, Blood donors 0.083-0.105 (based on Table 3). *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to a 1:20 dilution of plasma in the CIC-ELISA.

Figure 2. Differences between female and male samples in control and patient groups based on CIC absorbance (p = 0.031). 131 control samples (83 female, 48 male) and 114 patient samples (52 female, 62 male) were compared by Student's t test.
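The prevalence confidence intervals and crude odds ratios reported in this section follow standard formulas. The sketch below (Python; not the authors' code) shows a Wald 95% interval for a proportion, which reproduces the Table 4 intervals, and a crude odds ratio with a log-scale (Woolf) CI. Note that the crude OR for BD vs. controls (about 2.19) differs from the adjusted "OR Est." values in Table 3, which come from the logistic regression described in the methods.

```python
import math

def wald_ci(pos, n, z=1.96):
    # Wald 95% CI for a prevalence: p +/- z * sqrt(p * (1 - p) / n)
    p = pos / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), p + half

def crude_odds_ratio(a, b, c, d, z=1.96):
    # 2x2 table: a/b = positives/negatives in one group,
    # c/d = positives/negatives in the reference group.
    # The CI is computed on the log-odds scale (Woolf method).
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Free-antibody prevalence in controls (Table 4): 7 positive, 124 negative (n = 131)
p, lo, hi = wald_ci(7, 131)   # ~5.3%, CI ~1.5-9.2%, matching the reported "5.3%, 1.5-9%"

# Crude OR for CIC positivity, BD (29 pos / 35 neg) vs. controls (36 pos / 95 neg)
or_, olo, ohi = crude_odds_ratio(29, 35, 36, 95)   # ~2.19 (cf. adjusted estimate 2.035 in Table 3)
```

The small discrepancies between such crude values and the published ones reflect the covariate adjustment (age, gender, diagnosis) applied in the article's binary logistic regression.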
null
null
[ "Background", "Individual subjects", "Enzyme immune assays (EIAs)", "Statistical analysis", "Population characteristics", "Circulating immune complexes", "Free antibody and antigen", "Additional data analysis" ]
[ "Borna disease virus (BDV) holds unique features in terms of its cell biology, molecular properties, preference to old brain areas, broad host spectrum [1], and unusual biological age, dating back to more than 40 million years [2, 3]. The outstanding molecular biology of the virus, and its single\nstranded RNA genome leading to the classification [4] of an own family, Bornaviridae (order Mononegavirales), has been comprehensively reviewed [5]. BDV had first been recognized as an often deadly pathogen of horses and sheep [1, 6] with a wide spectrum in other domestic and farm animals. However, BDV’s non-cytolytic properties, low replication while over-expressing two major proteins, and evidence of modulating neurotransmitter networks [7], pointed to a long-term adaption toward moderate pathogenicity and persistency [1].\nHuman infection and its putative link to mental disorders, first suggested after detection of antibodies [8], became a key issue inspiring research groups around the globe. After nucleic acid and antigen could be demonstrated in white blood cells of psychiatrically diseased patients [9], such a link was further strengthened by the finding of specific RNA sequences in post mortem brains of psychiatric patients [10] and limbic structures from old people [11].\nThe impact of human infection was significantly supported by the isolation and sequence characterization of human viruses from psychiatric patients’ blood cells and brain [12–14], and the recent correlation of neurological symptoms in humans with BDV infection [15]. The latest discovery of functional endogenous virus gene pieces integrated in the human and primate ancestor germ lines strongly argued in favor of a long-term co-evolution of virus and hosts [2, 3, 16]. However, a role of BDV, whatsoever, in human mental-health remained controversial, despite of predominantly supportive reports [17–24]. 
This is mainly due to a great variation in prevalence results largely caused by methodological disparities, due to different antibody and/or RNA techniques, affecting as well cross-national comparability. In contrast, BDV-specific circulating immune complexes, the most prevalent infection markers [25], have shown to be superior to antibody- or RNA- detection. Pilot prevalence studies could demonstrate that the BDV-CIC enzyme immune assay (EIA) is an easy to perform and robust test format, suitable to conducting comparable surveys in the general population of different countries, as well as longitudinal follow-up studies of patients in clinical cohorts [26–31]. Circulating immune complexes are the result of periods of antigenemia over-expressing N- and P-proteins, and antibody induction in the host, reflecting recent and current virus activity. Evidence for a contribution of BDV infection to disease symptoms has recently been reviewed [32].\nThis is the first report from the Middle East, addressing the prevalence of BDV in the human population in Iran. The virus in horses has previously been reported by antibody studies [33]. Here we explore the prevalence of BDV markers among Iranian mentally diseased patients, healthy controls, and blood donors.", "Three hundred and fourteen Iranian subjects, including 114 psychiatric patients, 131 sex and age matched healthy controls, and 69 blood donors were included in this study. The association between BDV infection markers in blood plasma and five DSM IV- categorized psychiatric diseases, as well as gender and age of the individuals were analyzed.\nBasic data are given in Table 1. One hundred and fourteen acute psychiatric patients, who had been admitted to local departments of psychiatry in Tehran, were included. All patients met the Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) -criteria on the basis of interviews and medical records. 
They could be divided into five main groups and different DSM IV codes, including 64 bipolar disorder (BD)-, 12 major depressive disorder (MDD)-, 18 schizophrenia-, 15 schizoaffective- and 5 obsessive compulsive disorder (OCD) patients (Table 2). Additionally, 69 blood donors and 131 sex- and age matched, mentally healthy subjects (based on the supervision of the psychiatrists) were included and regarded as controls. All individuals were negative for Hepatitis B- and C-viruses, as well as HIV. The study was approved by the Ethic Committee of the Neuroscience Research Center at Shahid Beheshti University of Medical Sciences, and all patients -or an authorized representative- gave their written informed consent for participation. Blood samples of all individuals were collected prior to any medical treatment and plasma or sera were kept at -20°C.Table 1\nBasic data on the population\nGroupsNFemale/MaleMean age + SEMin-Max\nControls\n13183/4841.08 + 1.00918-69\nBlood donors\n696/6329.93 + 1.29619-58\nMental patients\n11452/6237.42 + 1.10317-62BD*6432/3236.20 + 1.47717-62MDD**127/543.42 + 3.45021-57Schizophrenia183/1534.56 + 2.68920-53Schizoaffective155/1038.33 + 2.86322-57OCD***55/046.20 + 3.96737-56\nSummary\n314141/17337.30 + 0.68817-69*Bipolar disorder.**Major depressive disorder.***Obsessive compulsive disorder.Table 2\nDSM IV codes, numbers and symptoms of psychiatric patients\nCode (N)Symptoms\nBD (N=64)\n296.02 (2)Single manic episode, moderate296.03 (4)Single manic episode, severe without psychotic features296.04 (6)Single manic episode, severe with psychotic features296.42 (1)Most recent episode manic, moderate296.43 (5)Most recent episode manic, severe without psychotic features296.44 (20)Most recent episode manic, severe with psychotic features296.52 (2)Most recent episode depressed, moderate296.53 (6)Most recent episode depressed, severe without psychotic features296.54 (2)Most recent episode depressed, severe with psychotic features296.62 (2)Most recent 
episode mixed, moderate296.63 (11)Most recent episode mixed, severe without psychotic features296.64 (3)Most recent episode mixed, severe with psychotic features\nMDD (12)\n296.22 (2)Recurrent, moderate296.23 (1)Recurrent, severe without psychotic features296.24 (2)Recurrent, severe with psychotic features296.32 (3)Single episode, moderate296.33 (2)Single episode, severe without psychotic features296.34 (2)Single episode, severe with psychotic features\nSchizophrenia (18)\n295.01 (3)Disorganized type295.03 (9)Paranoid type295.09 (6)Undifferentiated type\nSchizoaffective (15)\n295.07 (15)Schizoaffective disorder\nOCD (5)\n300.03 (5)Obsessive compulsive disorder\n\nBasic data on the population\n\n*Bipolar disorder.\n**Major depressive disorder.\n***Obsessive compulsive disorder.\n\nDSM IV codes, numbers and symptoms of psychiatric patients\n", "The BDV infection markers, circulating immune complexes (CICs), virus antigens (N- and P- protein, N/P-complexes; abbreviated Ag), and antibodies (Ab), were assayed using the triple enzyme immune assay (EIA) system, as described [25]. According to the double- sandwich format, two BDV-specific monoclonal antibodies (mAbs), anti-N mAb (W1) and anti-P mAb (Kfu2), were used to bind any BDV-N- and P-protein or N/P -heterodimers in plasma, either circulating antigen bound to virus- specific host antibodies (CIC- EIA) or free antigen (pAg-EIA). CICs were visualized through alkaline phosphatase (AP)-coupled anti-human IgG and substrate, whereas the Ag-EIA needs a BDV-specific detecting antibody (rabbit hyper-immune serum) followed by AP phosphatase coupled anti-rabbit IgG and substrate. The specificity and sensitivity of the BDV mAbs have been further characterized [34]. In particular, epitope mapping has revealed that both these mAbs are binding to powerful conformational epitopes on either protein, which are formed through 5 binding sites in case of the anti-N mAb (W1) and 3 binding sites in case of the anti-P mAb (Kfu2). 
None of the W1 binding sites are overlapping with P-protein binding domains on the N-protein, confirming that commonly occurring N/P heterodimers are recognized by W1, as well. In addition, none of either W1- and Kfu2- binding sites are overlapping with functionally important sites on N and P-protein, like NLS (nuclear localization signal) and NES (nuclear export signal). The extraordinarily high binding capacities of these mAbs to native N and P proteins have been determined through affinity-chromatography methods using N and P protein from the brain of a horse with Borna disease, resulting in dissociation constants (KD) of 2.31 × 10-9 for W1 and 3.33 × 10-9 for Kfu2. Like for other antigen assays, recombinant proteins have been used to determine sensitivity and further confirm specificity. The detection limit of 1.5-3 ng/ml of purified recombinant N-protein (rN) has been determined for W1 mAb. Diluting of rN in CIC-, Ag- and Ab- EIA-negative serum did not make any difference, confirming specificity. Additionally, N-protein could be demonstrated in the immune precipitate (IP using W1) of a strongly antigen positive patient’s plasma by western blot, whereas the IP of an antigen-negative plasma showed nothing but the heavy and light chain of the mAb [34]. Furthermore, using recombinant P-protein, either the non-phosphorylated or phosphorylated form, revealed that mAb Kfu2 only detects the activated phosphorylated form. Regarding the antibody assay we followed the exact protocol given earlier [25]. All three assays use the basic coating of antibody-stabilized monoclonal antibodies (W1 and Kfu2) as a standard immune module [34].\nAccording to the primary experimental setting, a standardized cut off value has been specified as a mean of negative values plus 2 standard deviations, regularly reaching an extinction of < = 0.1 which separates negative and positive scores. 
The initial dilutions of the samples were 1:20, 1:2, and 1:100 to allow the same cut off value for testing CICs, free Ag, and Ab, respectively [25]. Results were visualized through alkaline phosphatase conjugated antibodies and a colorimetric substrate, absorbance measured in a multichannel photometer (405 nm), and values imported to statistical software [25].\nRepetition of one third of the sample collection was performed and essentially gave the same results.", "All data of the patients and controls were submitted to parametric and non-parametric statistical analyses. A comparison of the groups was carried out using independent T-Tests, ANOVA and Chi square tests. The prevalence of BDV infection markers was calculated as based on the cut off value of 0.1. Subjects were classified according to clinical diagnostic, gender and age as independent variables, as based on the CIC data measured.\nThe detailed evaluation of CIC tests were based on standardized scoring of the OD-values of >0.100- 0.300 to be +, >0.300- 0.600 to be ++, > 0.600 - 1.000 to be +++, and > 1.000 to be ++++ [25]. Prevalence and odds ratios (OR) were calculated. Chi square tests were used for an estimation of statistical differences between the groups. Furthermore, binary logistic regression for an estimation of an individual influence of three basic variables, namely age, gender and clinical diagnosis, on CIC titers was applied.", "As shown in Table 1, efforts have been made to include gender and age matched control subjects, but comparability could finally not be achieved. The large disparity in both the female-to-male ratios and age of blood donors compared to patients considerably accounted for this limitation (gender: chi square = 7.758, p = 0.005; age: by t test, p = 0.015). As shown in Table 2, the majority of bipolar patients (BD) were either manic (59.4%) or in a mixed episode (25%), whereas only 15.6% experienced a recent depression. 
Of all patients, only 19.3% (10 BMD, 12 MDD patients out of 114) presented with a recent depressive episode.", "Based on CICs we found a mean prevalence of subclinical infection of 29.5% in the healthy Iranian controls, displaying a slightly higher prevalence in blood donors (33.3%) as compared to the healthy subject cohort (27.5%) for whom any mental illness has been excluded.\nGender and age had no significant influence on CIC prevalence, but psychiatric patients showed significant differences compared to the control group (p = 0.036), presenting with a mean CIC prevalence of 40.4%. Particularly, the patients with bipolar disorder were statistically significantly different with reference to CIC prevalence, OR and OR estimate (OR Est.) (p = 0.014). It is noteworthy that the CIC prevalence found in patients with mood disorders (BD, MDD, and schizoaffective disorders; N = 91) was doubling that of schizophrenia patients (44% vs. 22%), a difference which turned out to be statistically significant (p = 0.026), as well. The statistical evaluations are given in Table 3.Table 3\nCIC results against three predictors: sex, age and diagnosis\nPredictorsNPos./Neg. 
(p %)OROR Est.CI (95%)\nDiagnoses:\nControls13136/95 (27.5%)RefRefRefBlood donors6923/46 (33.3%)1.211.0290.483-2.193Patients11446/68 (40.4%)1.471.0881.042-3.405BD6429/35 (45.3%)1.652.0351.072-3.863MDD126/6 (50.0%)1.822.7500.820-9.222Schizophrenia184/14 (22.2%)0.810.6320.186-2.149Schizoaffective155/10 (33.3%)1.211.2540.392-4.014OCD52/3 (40%)1.462.2250.343-14.437\nSex:\nMale17357/116 (32.9%)RefRefRefFemale14148/93 (34.0%)1.030.9610.549-1.682\nAge group:\n18-257832/46 (41.0%)1.9482.9620.850-10.32526-356624/42 (36.4%)1.7272.4220.691-8.89436-458023/57 (28.8%)1.3651.5970.469-5.44046-557122/49 (31.0%)1.4711.8300.534-6.27856-65194/15 (21.1%)RefRefRef\n\nCIC results against three predictors: sex, age and diagnosis\n", "Based on the cut-off value of 0.1 [25] valid for all tests of the triple-EIA system to differentiate the negative from positive results, free Abs were measured in 7.8% and 16.7% of the bipolar (BMD) and schizophrenia patients, respectively, whereas the controls presented with 5.3%. Free Ag was present in 5.6% of the schizophrenic patients (1 out of 18), vs. 1 % in the controls (2 out of 200). Other patient groups were negative in both tests (for details see Table 4). The dynamic balance between CIC formation, antigens, and antibodies accounts for their relative amounts simultaneously present in a sample. 
The cross-sectional design of the study provides an infection profile only valid at a given time point, thereby limiting the explanatory power of triple-EIA results.Table 4\nPrevalence of free antibodies and antigen\nFree antibodyFree antigen (N-& P-protein)GroupsPos./Neg.Prevalence %CIPos./Neg.Prevalence %CI\nControls\n7/124(5.3%)1.5-9%1/130(0.7%)0.7-2.3%\nPatients\n8/106(7.0%)2.3-11.7%1/113(0.9%)0-2.6%BD5/597.8%1.2-14.4%0/640.0%-MDD0/120.0%-0/120.0%-Schizophrenia3/1516.7%0-33.9%1/175.6%0-16%Schizoaffective0/150.0%-0/150.0%-OCD0/50.0%-0/50.0%-\nBlood donors\n1/68(1.4%)0-4.2%1/68(1.4%)0-4.2%\nTotal\n16/298(5.1%)2.7-7.5%3/311(1%)0-2%\n\nPrevalence of free antibodies and antigen\n", "According to the cut off values, as shown in Table 5, dividing all data into a negative and positive group and performing only non-parametric analyses resulted in many data unavailable for statistical inference. Instead, we used quantitative CIC data from the EIA-reading (after subtraction of the OD values for blanks) for parametric statistical analyses.\nA noticeable increase in CIC levels of both, the bipolar disorder (0.147) and the major depressive disorder (0.163) groups became obvious, being statistically significant when compared to control subjects. The values for 95% CI of CICs are illustrated in Figure 1. The CIC levels within the total population tend to be elevated among females when compared to males (p = 0.089). Therefore, the influence of sex on CIC extinction values was also analyzed in these patient groups (Figure 2). 
A significant increase in CIC levels in female patients was recognized when compared to males (p = 0.031).Table 5\nDistribution of categorized CIC results (neg., +, ++, +++) in subgroups\nSubgroupsNeg N (%)++++++Total\nControls\n95 (72.5%)31 (23.7%)5 (3.8%)0131\nBlood donors\n46 (66.7%)21 (30.4%)2 (2.9%)069Case68 (59.6%)36 (31.5%)8 (7.0%)2 (1.7%)114BD35 (54.7%)22 (34.4%)6 (9.4%)1 (1.60%)64MDD6 (50.0%)5 (41.7%)01 (8.30%)12Schizophrenia14 (77.8%)2 (11.1%)2 (11.1%)018Schizoaffective10 (66.7%)5 (33.3%)0015OCD3 (60.0%)2 (40.0%)005\nSex\nMale116 (67.1%)50 (28.9%)7 (4.0%)0173Female93 (66.0%)38 (27.0%)8 (5.7%)2 (1.4%)141\nAge groups\n18-25 ys46 (59.0%)26 (33.3%)6 (7.7%)07826-35 ys42 (63.6%)20 (30.3%)4 (6.1%)06636-45 ys57 (71.3%)20 (25.0%)2 (2.5%)1 (1.3%)8046-55 ys49 (69.0%)20 (28.2%)2 (2.8%)07156-65 ys15 (78.9%)2 (10.5%)1 (5.3%)1 (5.3%)78OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600.Figure 1\nMean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of 95% CI in groups including Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105 based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to 1:20 dilution of plasma in the CIC-ELISA.Figure 2\nStatistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 
131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were calculated by a parametric t-student test.\n\nDistribution of categorized CIC results (neg., +, ++, +++) in subgroups\n\nOD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600.\n\nMean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of 95% CI in groups including Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105 based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to 1:20 dilution of plasma in the CIC-ELISA.\n\nStatistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were calculated by a parametric t-student test." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Method", "Individual subjects", "Enzyme immune assays (EIAs)", "Statistical analysis", "Results", "Population characteristics", "Circulating immune complexes", "Free antibody and antigen", "Additional data analysis", "Discussion" ]
[ "Borna disease virus (BDV) holds unique features in terms of its cell biology, molecular properties, preference to old brain areas, broad host spectrum [1], and unusual biological age, dating back to more than 40 million years [2, 3]. The outstanding molecular biology of the virus, and its single\nstranded RNA genome leading to the classification [4] of an own family, Bornaviridae (order Mononegavirales), has been comprehensively reviewed [5]. BDV had first been recognized as an often deadly pathogen of horses and sheep [1, 6] with a wide spectrum in other domestic and farm animals. However, BDV’s non-cytolytic properties, low replication while over-expressing two major proteins, and evidence of modulating neurotransmitter networks [7], pointed to a long-term adaption toward moderate pathogenicity and persistency [1].\nHuman infection and its putative link to mental disorders, first suggested after detection of antibodies [8], became a key issue inspiring research groups around the globe. After nucleic acid and antigen could be demonstrated in white blood cells of psychiatrically diseased patients [9], such a link was further strengthened by the finding of specific RNA sequences in post mortem brains of psychiatric patients [10] and limbic structures from old people [11].\nThe impact of human infection was significantly supported by the isolation and sequence characterization of human viruses from psychiatric patients’ blood cells and brain [12–14], and the recent correlation of neurological symptoms in humans with BDV infection [15]. The latest discovery of functional endogenous virus gene pieces integrated in the human and primate ancestor germ lines strongly argued in favor of a long-term co-evolution of virus and hosts [2, 3, 16]. However, a role of BDV, whatsoever, in human mental-health remained controversial, despite of predominantly supportive reports [17–24]. 
This is mainly due to a great variation in prevalence results largely caused by methodological disparities, due to different antibody and/or RNA techniques, affecting as well cross-national comparability. In contrast, BDV-specific circulating immune complexes, the most prevalent infection markers [25], have shown to be superior to antibody- or RNA- detection. Pilot prevalence studies could demonstrate that the BDV-CIC enzyme immune assay (EIA) is an easy to perform and robust test format, suitable to conducting comparable surveys in the general population of different countries, as well as longitudinal follow-up studies of patients in clinical cohorts [26–31]. Circulating immune complexes are the result of periods of antigenemia over-expressing N- and P-proteins, and antibody induction in the host, reflecting recent and current virus activity. Evidence for a contribution of BDV infection to disease symptoms has recently been reviewed [32].\nThis is the first report from the Middle East, addressing the prevalence of BDV in the human population in Iran. The virus in horses has previously been reported by antibody studies [33]. Here we explore the prevalence of BDV markers among Iranian mentally diseased patients, healthy controls, and blood donors.", " Individual subjects Three hundred and fourteen Iranian subjects, including 114 psychiatric patients, 131 sex and age matched healthy controls, and 69 blood donors were included in this study. The association between BDV infection markers in blood plasma and five DSM IV- categorized psychiatric diseases, as well as gender and age of the individuals were analyzed.\nBasic data are given in Table 1. One hundred and fourteen acute psychiatric patients, who had been admitted to local departments of psychiatry in Tehran, were included. All patients met the Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) -criteria on the basis of interviews and medical records. 
They could be divided into five main groups with different DSM-IV codes: 64 bipolar disorder (BD), 12 major depressive disorder (MDD), 18 schizophrenia, 15 schizoaffective, and 5 obsessive compulsive disorder (OCD) patients (Table 2). Additionally, 69 blood donors and 131 sex- and age-matched, mentally healthy subjects (as assessed by the supervising psychiatrists) were included and regarded as controls. All individuals were negative for hepatitis B and C viruses as well as HIV. The study was approved by the Ethics Committee of the Neuroscience Research Center at Shahid Beheshti University of Medical Sciences, and all patients, or an authorized representative, gave written informed consent for participation. Blood samples of all individuals were collected prior to any medical treatment, and plasma or sera were kept at -20°C.

Table 1. Basic data on the population

Group              N     Female/Male   Mean age ± SE    Min-Max
Controls           131    83/48        41.08 ± 1.009    18-69
Blood donors        69     6/63        29.93 ± 1.296    19-58
Mental patients    114    52/62        37.42 ± 1.103    17-62
  BD*               64    32/32        36.20 ± 1.477    17-62
  MDD**             12     7/5         43.42 ± 3.450    21-57
  Schizophrenia     18     3/15        34.56 ± 2.689    20-53
  Schizoaffective   15     5/10        38.33 ± 2.863    22-57
  OCD***             5     5/0         46.20 ± 3.967    37-56
Summary            314   141/173      37.30 ± 0.688    17-69

*Bipolar disorder. **Major depressive disorder. ***Obsessive compulsive disorder.

Table 2. DSM-IV codes, numbers and symptoms of psychiatric patients

BD (N=64)
  296.02 (2)    Single manic episode, moderate
  296.03 (4)    Single manic episode, severe without psychotic features
  296.04 (6)    Single manic episode, severe with psychotic features
  296.42 (1)    Most recent episode manic, moderate
  296.43 (5)    Most recent episode manic, severe without psychotic features
  296.44 (20)   Most recent episode manic, severe with psychotic features
  296.52 (2)    Most recent episode depressed, moderate
  296.53 (6)    Most recent episode depressed, severe without psychotic features
  296.54 (2)    Most recent episode depressed, severe with psychotic features
  296.62 (2)    Most recent episode mixed, moderate
  296.63 (11)   Most recent episode mixed, severe without psychotic features
  296.64 (3)    Most recent episode mixed, severe with psychotic features
MDD (N=12)
  296.22 (2)    Recurrent, moderate
  296.23 (1)    Recurrent, severe without psychotic features
  296.24 (2)    Recurrent, severe with psychotic features
  296.32 (3)    Single episode, moderate
  296.33 (2)    Single episode, severe without psychotic features
  296.34 (2)    Single episode, severe with psychotic features
Schizophrenia (N=18)
  295.01 (3)    Disorganized type
  295.03 (9)    Paranoid type
  295.09 (6)    Undifferentiated type
Schizoaffective (N=15)
  295.07 (15)   Schizoaffective disorder
OCD (N=5)
  300.03 (5)    Obsessive compulsive disorder

Enzyme immune assays (EIAs)

The BDV infection markers, namely circulating immune complexes (CICs), virus antigens (N- and P-protein and N/P complexes; abbreviated Ag), and antibodies (Ab), were assayed using the triple enzyme immune assay (EIA) system, as described [25]. In the double-sandwich format, two BDV-specific monoclonal antibodies (mAbs), anti-N mAb (W1) and anti-P mAb (Kfu2), are used to bind any BDV N- and P-protein or N/P heterodimers in plasma, either as circulating antigen bound to virus-specific host antibodies (CIC-EIA) or as free antigen (pAg-EIA). CICs are visualized through alkaline phosphatase (AP)-coupled anti-human IgG and substrate, whereas the Ag-EIA requires a BDV-specific detecting antibody (rabbit hyper-immune serum) followed by AP-coupled anti-rabbit IgG and substrate. The specificity and sensitivity of the BDV mAbs have been characterized in detail [34]. In particular, epitope mapping has revealed that both mAbs bind to strong conformational epitopes on their respective proteins, formed by 5 binding sites in the case of the anti-N mAb (W1) and 3 binding sites in the case of the anti-P mAb (Kfu2). None of the W1 binding sites overlap with P-protein binding domains on the N-protein, confirming that the commonly occurring N/P heterodimers are recognized by W1 as well. In addition, none of the W1 and Kfu2 binding sites overlap with functionally important sites on the N- and P-proteins, such as the NLS (nuclear localization signal) and NES (nuclear export signal).
The extraordinarily high binding capacities of these mAbs for native N- and P-proteins were determined by affinity chromatography using N- and P-protein from the brain of a horse with Borna disease, yielding dissociation constants (KD) of 2.31 × 10⁻⁹ for W1 and 3.33 × 10⁻⁹ for Kfu2. As for other antigen assays, recombinant proteins were used to determine sensitivity and further confirm specificity. A detection limit of 1.5-3 ng/ml of purified recombinant N-protein (rN) was determined for the W1 mAb. Diluting rN in CIC-, Ag-, and Ab-EIA-negative serum made no difference, confirming specificity. Additionally, N-protein could be demonstrated by western blot in the immune precipitate (IP, using W1) of a strongly antigen-positive patient's plasma, whereas the IP of an antigen-negative plasma showed nothing but the heavy and light chains of the mAb [34]. Furthermore, assays with recombinant P-protein, in either the non-phosphorylated or the phosphorylated form, revealed that mAb Kfu2 detects only the activated, phosphorylated form. For the antibody assay, we followed the exact protocol given earlier [25]. All three assays use the same basic coating of antibody-stabilized monoclonal antibodies (W1 and Kfu2) as a standard immune module [34].
Following the primary experimental setting, a standardized cut-off value was specified as the mean of negative values plus 2 standard deviations, regularly reaching an extinction of <= 0.1, which separates negative from positive scores. The initial sample dilutions were 1:20, 1:2, and 1:100, allowing the same cut-off value for testing CICs, free Ag, and Ab, respectively [25].
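The cut-off rule described above is simple arithmetic and can be sketched in a few lines. This is only an illustration: the OD readings, the function names, and the classification helper are invented for this sketch; the fixed 0.1 extinction value is the standardized cut-off the authors report.

```python
from statistics import mean, stdev

# Hypothetical OD readings (405 nm) from known-negative reference plasma;
# none of these numbers come from the study.
negative_ods = [0.031, 0.052, 0.044, 0.038, 0.060, 0.047]

# Cut-off as described: mean of the negative values plus 2 standard deviations.
computed_cutoff = mean(negative_ods) + 2 * stdev(negative_ods)

# The study standardizes this to a fixed extinction of 0.1 across all three
# assays (CIC at 1:20, free antigen at 1:2, antibody at 1:100 dilution).
CUTOFF = 0.1

def classify(od, cutoff=CUTOFF):
    """Label a single OD reading as positive or negative against the cut-off."""
    return "positive" if od > cutoff else "negative"
```

With plausible low-noise negatives, the computed mean-plus-2-SD value lands below the standardized 0.1, which is consistent with the paper's remark that the rule "regularly" reaches an extinction of at most 0.1.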
Results were visualized through alkaline phosphatase-conjugated antibodies and a colorimetric substrate; absorbance was measured in a multichannel photometer (405 nm), and the values were imported into statistical software [25].
Retesting of one third of the sample collection gave essentially the same results.

Statistical analysis

All data of the patients and controls were submitted to parametric and non-parametric statistical analyses. Comparisons between groups were carried out using independent t-tests, ANOVA, and chi-square tests. The prevalence of BDV infection markers was calculated based on the cut-off value of 0.1. Subjects were classified according to clinical diagnosis, gender, and age as independent variables, based on the measured CIC data.
The detailed evaluation of CIC tests was based on standardized scoring of the OD values: >0.100-0.300 as +, >0.300-0.600 as ++, >0.600-1.000 as +++, and >1.000 as ++++ [25]. Prevalence and odds ratios (OR) were calculated. Chi-square tests were used to estimate statistical differences between the groups. Furthermore, binary logistic regression was applied to estimate the individual influence of three basic variables, namely age, gender, and clinical diagnosis, on CIC titers.
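The semi-quantitative OD-scoring bands used in the analysis can be expressed as a small classification function. A minimal sketch assuming nothing beyond the published band boundaries; the function name and the example readings are invented for illustration.

```python
def score_cic(od):
    """Semi-quantitative score for a CIC OD reading (1:20 plasma dilution).

    Bands as published: <= 0.100 negative, >0.100-0.300 '+',
    >0.300-0.600 '++', >0.600-1.000 '+++', >1.000 '++++'.
    """
    if od <= 0.100:
        return "neg"
    if od <= 0.300:
        return "+"
    if od <= 0.600:
        return "++"
    if od <= 1.000:
        return "+++"
    return "++++"

# Hypothetical readings, one per band.
readings = [0.08, 0.15, 0.45, 0.72, 1.20]
scores = [score_cic(od) for od in readings]
```

Note that the band edges are inclusive on the upper side, so a reading of exactly 0.300 still scores "+" while 0.301 scores "++".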
Population characteristics

As shown in Table 1, efforts were made to include gender- and age-matched control subjects, but full comparability could not be achieved. The large disparity in both the female-to-male ratio and the age of blood donors compared to patients largely accounted for this limitation (gender: chi-square = 7.758, p = 0.005; age: t-test, p = 0.015). As shown in Table 2, the majority of bipolar (BD) patients were either manic (59.4%) or in a mixed episode (25%), whereas only 15.6% had experienced a recent depression. Of all patients, only 19.3% (10 BD and 12 MDD patients out of 114) presented with a recent depressive episode.

Circulating immune complexes

Based on CICs, we found a mean prevalence of subclinical infection of 29.5% in the healthy Iranian controls, with a slightly higher prevalence in blood donors (33.3%) than in the healthy subject cohort (27.5%), for whom any mental illness had been excluded.
Gender and age had no significant influence on CIC prevalence, but psychiatric patients differed significantly from the control group (p = 0.036), presenting with a mean CIC prevalence of 40.4%. In particular, the patients with bipolar disorder differed significantly with respect to CIC prevalence, OR, and OR estimate (OR Est.) (p = 0.014). It is noteworthy that the CIC prevalence in patients with mood disorders (BD, MDD, and schizoaffective disorder; N = 91) was double that of schizophrenia patients (44% vs. 22%), a difference which also proved statistically significant (p = 0.026). The statistical evaluations are given in Table 3.

Table 3. CIC results against three predictors: sex, age and diagnosis

Predictor          N     Pos./Neg. (p %)    OR      OR Est.   CI (95%)
Diagnosis:
  Controls         131   36/95 (27.5%)      Ref     Ref       Ref
  Blood donors      69   23/46 (33.3%)      1.21    1.029     0.483-2.193
  Patients         114   46/68 (40.4%)      1.47    1.088     1.042-3.405
  BD                64   29/35 (45.3%)      1.65    2.035     1.072-3.863
  MDD               12    6/6 (50.0%)       1.82    2.750     0.820-9.222
  Schizophrenia     18    4/14 (22.2%)      0.81    0.632     0.186-2.149
  Schizoaffective   15    5/10 (33.3%)      1.21    1.254     0.392-4.014
  OCD                5    2/3 (40%)         1.46    2.225     0.343-14.437
Sex:
  Male             173   57/116 (32.9%)     Ref     Ref       Ref
  Female           141   48/93 (34.0%)      1.03    0.961     0.549-1.682
Age group:
  18-25             78   32/46 (41.0%)      1.948   2.962     0.850-10.325
  26-35             66   24/42 (36.4%)      1.727   2.422     0.691-8.894
  36-45             80   23/57 (28.8%)      1.365   1.597     0.469-5.440
  46-55             71   22/49 (31.0%)      1.471   1.830     0.534-6.278
  56-65             19    4/15 (21.1%)      Ref     Ref       Ref
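To make the table's derivation concrete, prevalence and a crude odds ratio can be recomputed from the positive/negative counts. A sketch only: the function names are invented, and the table's "OR Est." values come from binary logistic regression, so the unadjusted 2x2 odds ratio below is not expected to reproduce them exactly.

```python
def prevalence(pos, neg):
    # Fraction of CIC-positive subjects in a group.
    return pos / (pos + neg)

def crude_odds_ratio(pos, neg, ref_pos, ref_neg):
    # Unadjusted 2x2 odds ratio of a group against a reference group.
    return (pos * ref_neg) / (neg * ref_pos)

# CIC-positive/negative counts taken from Table 3.
controls = (36, 95)   # 27.5 % positive
bipolar = (29, 35)    # 45.3 % positive

bd_prevalence = prevalence(*bipolar)              # about 0.453
ctrl_prevalence = prevalence(*controls)           # about 0.275
bd_crude_or = crude_odds_ratio(*bipolar, *controls)
```

With these counts the crude odds ratio for bipolar disorder vs. controls comes out near 2.19, close to the regression-based estimate of 2.035 reported in the table.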
Free antibody and antigen

Based on the cut-off value of 0.1 [25], valid for all tests of the triple-EIA system to differentiate negative from positive results, free Abs were measured in 7.8% of the bipolar and 16.7% of the schizophrenia patients, whereas the controls presented with 5.3%. Free Ag was present in 5.6% of the schizophrenia patients (1 out of 18), vs. 1% in the controls (2 out of 200). The other patient groups were negative in both tests (for details see Table 4). The dynamic balance between CIC formation, antigens, and antibodies accounts for their relative amounts simultaneously present in a sample.
The cross-sectional design of the study provides an infection profile valid only at a given time point, thereby limiting the explanatory power of the triple-EIA results.

Table 4. Prevalence of free antibodies and antigen

                    Free antibody                        Free antigen (N- & P-protein)
Group               Pos./Neg.   Prevalence   CI          Pos./Neg.   Prevalence   CI
Controls            7/124       5.3%         1.5-9%      1/130       0.7%         0.7-2.3%
Patients            8/106       7.0%         2.3-11.7%   1/113       0.9%         0-2.6%
  BD                5/59        7.8%         1.2-14.4%   0/64        0.0%         -
  MDD               0/12        0.0%         -           0/12        0.0%         -
  Schizophrenia     3/15        16.7%        0-33.9%     1/17        5.6%         0-16%
  Schizoaffective   0/15        0.0%         -           0/15        0.0%         -
  OCD               0/5         0.0%         -           0/5         0.0%         -
Blood donors        1/68        1.4%         0-4.2%      1/68        1.4%         0-4.2%
Total               16/298      5.1%         2.7-7.5%    3/311       1%           0-2%

Additional data analysis

Dividing all data into a negative and a positive group according to the cut-off values, as shown in Table 5, and performing only non-parametric analyses left many data unavailable for statistical inference. Instead, we used the quantitative CIC data from the EIA reading (after subtraction of the OD values of the blanks) for parametric statistical analyses.
A noticeable increase in the CIC levels of both the bipolar disorder (0.147) and the major depressive disorder (0.163) groups became obvious, reaching statistical significance when compared to control subjects. The 95% CIs of the CIC values are illustrated in Figure 1. Within the total population, CIC levels tended to be elevated among females compared to males (p = 0.089). Therefore, the influence of sex on CIC extinction values was also analyzed in these patient groups (Figure 2).
A significant increase in CIC levels in female patients was recognized compared to males (p = 0.031).

Table 5. Distribution of categorized CIC results (neg., +, ++, +++) in subgroups

Subgroup            Neg N (%)      +            ++           +++          Total
Controls            95 (72.5%)     31 (23.7%)   5 (3.8%)     0            131
Blood donors        46 (66.7%)     21 (30.4%)   2 (2.9%)     0            69
Cases               68 (59.6%)     36 (31.5%)   8 (7.0%)     2 (1.7%)     114
  BD                35 (54.7%)     22 (34.4%)   6 (9.4%)     1 (1.6%)     64
  MDD               6 (50.0%)      5 (41.7%)    0            1 (8.3%)     12
  Schizophrenia     14 (77.8%)     2 (11.1%)    2 (11.1%)    0            18
  Schizoaffective   10 (66.7%)     5 (33.3%)    0            0            15
  OCD               3 (60.0%)      2 (40.0%)    0            0            5
Sex
  Male              116 (67.1%)    50 (28.9%)   7 (4.0%)     0            173
  Female            93 (66.0%)     38 (27.0%)   8 (5.7%)     2 (1.4%)     141
Age groups
  18-25 ys          46 (59.0%)     26 (33.3%)   6 (7.7%)     0            78
  26-35 ys          42 (63.6%)     20 (30.3%)   4 (6.1%)     0            66
  36-45 ys          57 (71.3%)     20 (25.0%)   2 (2.5%)     1 (1.3%)     80
  46-55 ys          49 (69.0%)     20 (28.2%)   2 (2.8%)     0            71
  56-65 ys          15 (78.9%)     2 (10.5%)    1 (5.3%)     1 (5.3%)     19

OD absorbance is scored as (+): OD > 0.100 - 0.300, (++): OD > 0.300 - 0.600, and (+++): OD > 0.600.

Figure 1. Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of the 95% CI per group: Control 0.080-0.095, Schizophrenia 0.068-0.121, Schizoaffective 0.056-0.080, Bipolar 0.125-0.167, MDD 0.096-0.224, OCD 0.047-0.134, Donor 0.083-0.105 (based on Table 3). *Significant compared to controls (p = 0.001, ANOVA). **Significant compared to controls (p = 0.029, ANOVA). Extinction values refer to a 1:20 dilution of plasma in the CIC-ELISA.

Figure 2. Statistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female, 48 male) and 114 patient samples (52 female, 62 male) were compared by a parametric Student's t-test.
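The OD band scoring behind Table 5 (the 0.1 cut-off separating negative from positive, with bands at 0.3 and 0.6, applied after blank subtraction) can be sketched as follows; the function name and the explicit blank argument are our own framing of the procedure described in the Methods:

```python
def score_cic(od_sample: float, od_blank: float = 0.0) -> str:
    """Categorize a blank-corrected CIC EIA extinction into neg/+/++/+++ bands."""
    od = od_sample - od_blank          # subtract the blank reading first
    if od <= 0.100:                    # at or below the 0.1 cut-off: negative
        return "neg"
    if od <= 0.300:                    # (+):  > 0.100 - 0.300
        return "+"
    if od <= 0.600:                    # (++): > 0.300 - 0.600
        return "++"
    return "+++"                       # (+++): > 0.600
```

A reading of, say, 0.25 above its blank would be scored (+), i.e. a low-level positive.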
", "As shown in Table 1, efforts were made to include gender- and age-matched control subjects, but comparability could ultimately not be achieved. The large disparity in both the female-to-male ratios and the age of blood donors compared to patients largely accounted for this limitation (gender: chi-square = 7.758, p = 0.005; age: by t-test, p = 0.015). As shown in Table 2, the majority of bipolar patients (BD) were either manic (59.4%) or in a mixed episode (25%), whereas only 15.6% experienced a recent depression.
Of all patients, only 19.3% (10 BD and 12 MDD patients out of 114) presented with a recent depressive episode.", "Based on CICs, we found a mean prevalence of subclinical infection of 29.5% in the healthy Iranian controls, with a slightly higher prevalence in blood donors (33.3%) than in the healthy subject cohort (27.5%), in whom any mental illness had been excluded.
Gender and age had no significant influence on CIC prevalence, but psychiatric patients showed significant differences compared to the control group (p = 0.036), presenting with a mean CIC prevalence of 40.4%. In particular, the patients with bipolar disorder differed statistically significantly with reference to CIC prevalence, OR, and OR estimate (OR Est.) (p = 0.014). It is noteworthy that the CIC prevalence found in patients with mood disorders (BD, MDD, and schizoaffective disorders; N = 91) was double that of schizophrenia patients (44% vs. 22%), a difference which also turned out to be statistically significant (p = 0.026). The statistical evaluations are given in Table 3.

Table 3. CIC results against three predictors: sex, age and diagnosis

Predictor           N     Pos./Neg. (%)      OR      OR Est.   CI (95%)
Diagnoses:
  Controls          131   36/95 (27.5%)      Ref     Ref       Ref
  Blood donors      69    23/46 (33.3%)      1.21    1.029     0.483-2.193
  Patients          114   46/68 (40.4%)      1.47    1.088     1.042-3.405
  BD                64    29/35 (45.3%)      1.65    2.035     1.072-3.863
  MDD               12    6/6 (50.0%)        1.82    2.750     0.820-9.222
  Schizophrenia     18    4/14 (22.2%)       0.81    0.632     0.186-2.149
  Schizoaffective   15    5/10 (33.3%)       1.21    1.254     0.392-4.014
  OCD               5     2/3 (40%)          1.46    2.225     0.343-14.437
Sex:
  Male              173   57/116 (32.9%)     Ref     Ref       Ref
  Female            141   48/93 (34.0%)      1.03    0.961     0.549-1.682
Age group:
  18-25             78    32/46 (41.0%)      1.948   2.962     0.850-10.325
  26-35             66    24/42 (36.4%)      1.727   2.422     0.691-8.894
  36-45             80    23/57 (28.8%)      1.365   1.597     0.469-5.440
  46-55             71    22/49 (31.0%)      1.471   1.830     0.534-6.278
  56-65             19    4/15 (21.1%)       Ref     Ref       Ref

", "Based on the cut-off value of 0.1 [25], valid for all tests of the triple-EIA system, to differentiate negative from positive results, free Abs were measured in 7.8% and 16.7% of the bipolar (BD) and schizophrenia patients, respectively, whereas the controls presented with 5.3%. Free Ag was present in 5.6% of the schizophrenic patients (1 out of 18) vs. 1% in the controls (2 out of 200). The other patient groups were negative in both tests (for details see Table 4). The dynamic balance between CIC formation, antigens, and antibodies accounts for their relative amounts simultaneously present in a sample.
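The odds ratios in Table 3 compare each group's Pos./Neg. split against a reference group. A crude (unadjusted) odds ratio with a log-scale 95% CI can be sketched as below; the published OR Est. values come from the authors' statistical model, so a crude calculation like this one will not reproduce them exactly:

```python
import math

def crude_odds_ratio(pos: int, neg: int, ref_pos: int, ref_neg: int, z: float = 1.96):
    """Crude OR of a group vs. its reference, with a Wald CI on the log-odds scale."""
    odds_ratio = (pos / neg) / (ref_pos / ref_neg)
    # Standard error of log(OR) from the four cell counts
    se_log = math.sqrt(1 / pos + 1 / neg + 1 / ref_pos + 1 / ref_neg)
    lo = math.exp(math.log(odds_ratio) - z * se_log)
    hi = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lo, hi
```

For example, the BD group (29/35) against controls (36/95) yields a crude OR of about 2.19, close to the model-based estimate of 2.035 in Table 3.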
The cross-sectional design of the study provides an infection profile that is valid only at a given time point, thereby limiting the explanatory power of triple-EIA results (Table 4).", "This is the first study in Iranian people showing a fairly high prevalence of Bornavirus infection in healthy individuals, including blood donors. The results match data reported from Central Europe of about 30%, based on the same infection marker (CIC). The study also supports previous findings that this neurotropic virus infection is more prevalent in psychiatric patients than in healthy donors. Our results also follow the trend of infection patterns in other countries, such as Europe, the Americas and Asia, which are based on specific antibody and nucleic acid detection [9, 10, 15, 21, 35–42], despite largely differing prevalence data.
Compared with studies measuring BDV-released antigens or antigen-antibody complexes such as CICs [25, 27, 29, 30], our data showed much better agreement.
Studies questioning and reporting the absence of BDV in both normal and psychiatrically diseased people remain inconclusive as long as no other cohorts have been investigated and no other methods have been applied. Among those are the studies of Na et al. [42] and Hornig et al. [43]. The latter group even disregarded its own earlier positive study with contradictory results from the same country [10]. On the other hand, the existence of a human BDV strain has recently been independently proven by an in vitro study in brain cells: only the human virus, but not the animal-derived laboratory strain of BDV, was able to reduce proliferation and enhance apoptosis [44].
Our study used an established triple EIA which had been successfully applied to monitor point and longitudinal prevalence of BDV infection markers in patients [25, 26]. In our hands, these EIAs were easy to handle and provided robust and reproducible measurements. It is unfortunate that general acceptance is still pending. In this study, consecutive sampling of admitted patients was not possible. Although the data refer only to cross-sectional sample analysis, BDV markers were significantly more prevalent in Iranian patients with mental diseases than in control subjects. These findings are similar to data reported from Germany [25, 26], Italy [27], Australia [29], the CSSR [30], China (Xia Liu, Peng Xie, pers. communication), and Lithuania (Violeta Mockeliūnienė, Robertas Bunevicius, pers. communication), where the same test system had been applied.
The presence of CICs with or without antibodies indicates a chronic infection; the presence of Ag, with or without CICs at the same time, indicates a currently active infection.
The finding of free anti-BDV antibody alone (no antigen, no CICs) is thought to indicate previous exposure to the agent, but not a currently active infection [34]. As shown in earlier reports, CICs represent the major viral marker, explaining the transient disappearance of antibodies and antigens in blood plasma between activated and dormant phases of virus infection, and thereby also providing a clue to the true number of silently infected carriers in a healthy cohort or population [25, 26, 34].
Iranian psychiatric patients show a clearly elevated CIC sero-prevalence (40.4%) compared to healthy controls (27.5%). It is of special interest that 33.3% of samples from blood donations were silent virus carriers, a finding confirming Australian [29] and German pilot reports [17, 26], and thus in clear contrast to an earlier report [45]. Transfusion issues relating to BDV infection still await further clarification [46].
BD, MDD and OCD patients presented with infection rates of 45.3%, 50.0% and 40.0%, respectively. However, significance was only reached in BD patients. This might be due to the small sample sizes, but in the parametric data analysis comparing OD absorbance values (extinction), the high CIC levels in sera from BD and MDD patients were also significant.
In contrast to other reports [47, 48], we found a relatively high sero-prevalence of free Ab and Ag in schizophrenic patients (16.7% and 5.6%, respectively), which is consistent with the relatively low CIC sero-prevalence among those individuals (22.2%, see Table 3). Apart from schizophrenic patients, only BD patients showed free antibodies in their sera (7.8%).
This implies that BDV antibodies are usually bound in immune complexes and therefore become transiently absent from the blood stream.
It is of considerable interest that the CIC sero-prevalence correlated inversely with age (linear regression using age as continuous data, R = -0.116, p = 0.042), meaning that the young patients had the highest CIC values, although the age limit included only adults 18 years and older. This leaves the question whether younger people are more prone to BDV infection or whether their immune response is more prominent. It supports a recent finding that young children (from 4-6 months to 3 years of age) had even higher infection rates, although this pilot study warrants further investigation [28, 34]. In addition, it has to be further examined whether and to what extent vertical transmission of BDV in the pregnant horse [49], mouse [50] and human [28] contributes to higher infection rates at a young age. In this regard, the high prevalence of BDV in the normal population, the lifelong persistence of the virus in infected subjects (patients or healthy people), and the so far undisclosed function of endogenized BDV genome stretches [2, 3, 16] might reflect further risk factors warranting urgent future investigation.
Interestingly, significant differences between female and male patients could be measured for the first time (Figure 2, middle; p = 0.031), showing a prevalence of CICs of 42.3% in females and 38.7% in males. In line with these findings, two female patients, belonging to the BD and MDD groups, had high CIC titers with levels above 0.6 (+++) (Table 5). The sero-prevalence among healthy controls, however, reached only 25.3% in females and 31.3% in males.
Such sex-specific differences in the titers and prevalence of antibody responses to foreign antigens, infectious agents, or even autoantigens are known from the literature [51–55].
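The age trend reported above rests on the linear association between age (as a continuous variable) and CIC extinction. A minimal sketch of the Pearson correlation coefficient underlying such a linear regression, in pure Python with illustrative data only:

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # unscaled covariance
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))           # unscaled SD of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))           # unscaled SD of y
    return cov / (sx * sy)
```

A negative r, as reported here (R = -0.116), indicates that CIC extinction tends to fall as age rises.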
Females usually exhibit a stronger humoral immune response, as is especially well known after vaccination and infection with microbial agents. In fact, estrogens exert stimulatory effects on B cell proliferation and serum IgG levels, whereas testosterone may suppress B cell function [56, 57].
In conclusion, Iranian people seem to fit into the pattern of BDV infection reported worldwide so far [5]. Moreover, the study benefits from using prevalent infection markers and a highly specific and effective test system [26, 34]. The study confirms evidence for a high infection prevalence, similar to Central Europe, in one third of healthy Iranian subjects, contrasting with elevated levels in patients with mood disorders. In view of the millions of people worldwide suffering from depression and the huge related health care costs [58], this study points again to integrating BDV infection surveillance into psychiatric research [26] rather than continuing to underplay its impact." ]
[ null, "methods", null, null, null, "results", null, null, null, null, "discussion" ]
[ "Borna disease virus", "Circulating immune complexes", "Psychiatric disorders", "Iranian patients/controls" ]
Background: Borna disease virus (BDV) holds unique features in terms of its cell biology, molecular properties, preference for old brain areas, broad host spectrum [1], and unusual biological age, dating back more than 40 million years [2, 3]. The outstanding molecular biology of the virus and its single-stranded RNA genome, which led to its classification [4] into a family of its own, Bornaviridae (order Mononegavirales), has been comprehensively reviewed [5]. BDV was first recognized as an often deadly pathogen of horses and sheep [1, 6], with a wide host spectrum among other domestic and farm animals. However, BDV's non-cytolytic properties, low replication while over-expressing two major proteins, and evidence of modulating neurotransmitter networks [7] pointed to a long-term adaptation toward moderate pathogenicity and persistence [1]. Human infection and its putative link to mental disorders, first suggested after the detection of antibodies [8], became a key issue inspiring research groups around the globe. After nucleic acid and antigen had been demonstrated in white blood cells of psychiatrically diseased patients [9], such a link was further strengthened by the finding of specific RNA sequences in post mortem brains of psychiatric patients [10] and in limbic structures from old people [11]. The impact of human infection was significantly supported by the isolation and sequence characterization of human viruses from psychiatric patients' blood cells and brain [12–14], and by the recent correlation of neurological symptoms in humans with BDV infection [15]. The latest discovery of functional endogenous virus gene pieces integrated into the human and primate ancestor germ lines strongly argued in favor of a long-term co-evolution of virus and hosts [2, 3, 16]. However, any role of BDV in human mental health remained controversial, despite predominantly supportive reports [17–24].
This is mainly due to great variation in prevalence results, largely caused by methodological disparities between different antibody and/or RNA techniques, which also affect cross-national comparability. In contrast, BDV-specific circulating immune complexes, the most prevalent infection markers [25], have been shown to be superior to antibody or RNA detection. Pilot prevalence studies have demonstrated that the BDV-CIC enzyme immune assay (EIA) is an easy-to-perform and robust test format, suitable for conducting comparable surveys in the general population of different countries, as well as longitudinal follow-up studies of patients in clinical cohorts [26–31]. Circulating immune complexes are the result of periods of antigenemia over-expressing N- and P-proteins and of antibody induction in the host, reflecting recent and current virus activity. Evidence for a contribution of BDV infection to disease symptoms has recently been reviewed [32]. This is the first report from the Middle East addressing the prevalence of BDV in the human population in Iran. The virus in horses has previously been reported by antibody studies [33]. Here we explore the prevalence of BDV markers among Iranian mentally diseased patients, healthy controls, and blood donors.

Method: Individual subjects

Three hundred and fourteen Iranian subjects, including 114 psychiatric patients, 131 sex- and age-matched healthy controls, and 69 blood donors, were included in this study. The association between BDV infection markers in blood plasma and five DSM-IV-categorized psychiatric diseases, as well as the gender and age of the individuals, was analyzed. Basic data are given in Table 1. One hundred and fourteen acute psychiatric patients, who had been admitted to local departments of psychiatry in Tehran, were included. All patients met the Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) criteria on the basis of interviews and medical records.
They could be divided into five main groups with different DSM-IV codes: 64 bipolar disorder (BD), 12 major depressive disorder (MDD), 18 schizophrenia, 15 schizoaffective, and 5 obsessive compulsive disorder (OCD) patients (Table 2). Additionally, 69 blood donors and 131 sex- and age-matched, mentally healthy subjects (based on the supervision of the psychiatrists) were included and regarded as controls. All individuals were negative for Hepatitis B and C viruses, as well as HIV. The study was approved by the Ethics Committee of the Neuroscience Research Center at Shahid Beheshti University of Medical Sciences, and all patients, or an authorized representative, gave their written informed consent for participation. Blood samples of all individuals were collected prior to any medical treatment, and plasma or sera were kept at -20°C.

Table 1. Basic data on the population

Group              N     Female/Male   Mean age ± SE    Min-Max
Controls           131   83/48         41.08 ± 1.009    18-69
Blood donors       69    6/63          29.93 ± 1.296    19-58
Mental patients    114   52/62         37.42 ± 1.103    17-62
  BD*              64    32/32         36.20 ± 1.477    17-62
  MDD**            12    7/5           43.42 ± 3.450    21-57
  Schizophrenia    18    3/15          34.56 ± 2.689    20-53
  Schizoaffective  15    5/10          38.33 ± 2.863    22-57
  OCD***           5     5/0           46.20 ± 3.967    37-56
Summary            314   141/173       37.30 ± 0.688    17-69

*Bipolar disorder. **Major depressive disorder. ***Obsessive compulsive disorder.

Table 2. DSM-IV codes, numbers and symptoms of psychiatric patients

BD (N = 64)
  296.02 (2)   Single manic episode, moderate
  296.03 (4)   Single manic episode, severe without psychotic features
  296.04 (6)   Single manic episode, severe with psychotic features
  296.42 (1)   Most recent episode manic, moderate
  296.43 (5)   Most recent episode manic, severe without psychotic features
  296.44 (20)  Most recent episode manic, severe with psychotic features
  296.52 (2)   Most recent episode depressed, moderate
  296.53 (6)   Most recent episode depressed, severe without psychotic features
  296.54 (2)   Most recent episode depressed, severe with psychotic features
  296.62 (2)   Most recent episode mixed, moderate
  296.63 (11)  Most recent episode mixed, severe without psychotic features
  296.64 (3)   Most recent episode mixed, severe with psychotic features
MDD (N = 12)
  296.22 (2)   Recurrent, moderate
  296.23 (1)   Recurrent, severe without psychotic features
  296.24 (2)   Recurrent, severe with psychotic features
  296.32 (3)   Single episode, moderate
  296.33 (2)   Single episode, severe without psychotic features
  296.34 (2)   Single episode, severe with psychotic features
Schizophrenia (N = 18)
  295.01 (3)   Disorganized type
  295.03 (9)   Paranoid type
  295.09 (6)   Undifferentiated type
Schizoaffective (N = 15)
  295.07 (15)  Schizoaffective disorder
OCD (N = 5)
  300.03 (5)   Obsessive compulsive disorder

Enzyme immune assays (EIAs)

The BDV infection markers, circulating immune complexes (CICs), virus antigens (N- and P-protein, N/P complexes; abbreviated Ag), and antibodies (Ab), were assayed using the triple enzyme immune assay (EIA) system, as described [25]. According to the double-sandwich format, two BDV-specific monoclonal antibodies (mAbs), anti-N mAb (W1) and anti-P mAb (Kfu2), were used to bind any BDV N- and P-protein or N/P heterodimers in plasma, either as circulating antigen bound to virus-specific host antibodies (CIC-EIA) or as free antigen (pAg-EIA). CICs were visualized through alkaline phosphatase (AP)-coupled anti-human IgG and substrate, whereas the Ag-EIA requires a BDV-specific detecting antibody (rabbit hyper-immune serum) followed by AP-coupled anti-rabbit IgG and substrate. The specificity and sensitivity of the BDV mAbs have been characterized in detail [34]. In particular, epitope mapping has revealed that both mAbs bind to powerful conformational epitopes on their respective proteins, formed by 5 binding sites in the case of the anti-N mAb (W1) and 3 binding sites in the case of the anti-P mAb (Kfu2). None of the W1 binding sites overlap with P-protein binding domains on the N-protein, confirming that the commonly occurring N/P heterodimers are recognized by W1 as well. In addition, none of the W1 and Kfu2 binding sites overlap with functionally important sites on the N- and P-protein, such as the NLS (nuclear localization signal) and NES (nuclear export signal).
The extraordinarily high binding capacities of these mAbs to native N and P proteins have been determined by affinity-chromatography methods using N and P protein from the brain of a horse with Borna disease, resulting in dissociation constants (KD) of 2.31 × 10-9 for W1 and 3.33 × 10-9 for Kfu2. As for other antigen assays, recombinant proteins have been used to determine sensitivity and further confirm specificity. A detection limit of 1.5-3 ng/ml of purified recombinant N-protein (rN) has been determined for the W1 mAb. Dilution of rN in CIC-, Ag- and Ab-EIA-negative serum did not make any difference, confirming specificity. Additionally, N-protein could be demonstrated by western blot in the immune precipitate (IP using W1) of a strongly antigen-positive patient's plasma, whereas the IP of an antigen-negative plasma showed nothing but the heavy and light chains of the mAb [34]. Furthermore, using recombinant P-protein in either its non-phosphorylated or phosphorylated form revealed that mAb Kfu2 detects only the activated, phosphorylated form. For the antibody assay we followed the exact protocol given earlier [25]. All three assays use the same basic coating of antibody-stabilized monoclonal antibodies (W1 and Kfu2) as a standard immune module [34]. According to the primary experimental setting, a standardized cut-off value was specified as the mean of negative values plus 2 standard deviations, regularly reaching an extinction of ≤ 0.1, which separates negative from positive scores. The initial dilutions of the samples were 1:20, 1:2, and 1:100, allowing the same cut-off value to be used for testing CICs, free Ag, and Ab, respectively [25]. Results were visualized through alkaline phosphatase-conjugated antibodies and a colorimetric substrate; absorbance was measured in a multichannel photometer (405 nm), and values were imported into statistical software [25]. Repetition of one third of the sample collection was performed and gave essentially the same results.
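The cut-off definition above (mean of negative-control extinctions plus 2 standard deviations) can be sketched as follows; the function name and example values are illustrative, not taken from the study, and the sample (n-1) standard deviation is an assumption, since the variant used is not stated:

```python
import statistics

def eia_cutoff(negative_ods: list[float]) -> float:
    """Cut-off = mean of negative-control extinctions + 2 standard deviations."""
    # statistics.stdev is the sample SD (n-1 denominator); assumed here
    return statistics.mean(negative_ods) + 2 * statistics.stdev(negative_ods)
```

Readings above the cut-off are scored positive; in this study the cut-off regularly came out at about 0.1.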
Statistical analysis

All data from patients and controls were submitted to parametric and non-parametric statistical analyses. Group comparisons were carried out using independent t-tests, ANOVA, and chi-square tests. The prevalence of BDV infection markers was calculated based on the cut-off value of 0.1.
Subjects were classified by clinical diagnosis, gender, and age as independent variables, based on the measured CIC data. The detailed evaluation of the CIC tests used standardized scoring of the OD values: >0.100-0.300 = +, >0.300-0.600 = ++, >0.600-1.000 = +++, and >1.000 = ++++ [25]. Prevalence and odds ratios (OR) were calculated. Chi-square tests were used to estimate statistical differences between the groups. Furthermore, binary logistic regression was applied to estimate the individual influence of three basic variables, namely age, gender, and clinical diagnosis, on CIC titers.

Individual subjects

Three hundred and fourteen Iranian subjects were included in this study: 114 psychiatric patients, 131 sex- and age-matched healthy controls, and 69 blood donors. The association between BDV infection markers in blood plasma and five DSM-IV-categorized psychiatric diseases, as well as the gender and age of the individuals, was analyzed.
Basic data are given in Table 1. One hundred and fourteen acute psychiatric patients who had been admitted to local departments of psychiatry in Tehran were included. All patients met the criteria of the Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) on the basis of interviews and medical records. They fell into five main groups with different DSM-IV codes: 64 bipolar disorder (BD), 12 major depressive disorder (MDD), 18 schizophrenia, 15 schizoaffective disorder, and 5 obsessive compulsive disorder (OCD) patients (Table 2). Additionally, 69 blood donors and 131 sex- and age-matched, mentally healthy subjects (under the supervision of the psychiatrists) were included and regarded as controls. All individuals were negative for hepatitis B and C viruses as well as HIV. The study was approved by the Ethics Committee of the Neuroscience Research Center at Shahid Beheshti University of Medical Sciences, and all patients, or an authorized representative, gave written informed consent for participation.
Blood samples of all individuals were collected prior to any medical treatment, and plasma or sera were kept at -20°C.

Table 1. Basic data on the population

Groups            N    Female/Male  Mean age ± SE   Min-Max
Controls          131  83/48        41.08 ± 1.009   18-69
Blood donors      69   6/63         29.93 ± 1.296   19-58
Mental patients   114  52/62        37.42 ± 1.103   17-62
  BD*             64   32/32        36.20 ± 1.477   17-62
  MDD**           12   7/5          43.42 ± 3.450   21-57
  Schizophrenia   18   3/15         34.56 ± 2.689   20-53
  Schizoaffective 15   5/10         38.33 ± 2.863   22-57
  OCD***          5    5/0          46.20 ± 3.967   37-56
Summary           314  141/173      37.30 ± 0.688   17-69

*Bipolar disorder. **Major depressive disorder. ***Obsessive compulsive disorder.

Table 2. DSM-IV codes, numbers and symptoms of psychiatric patients

Code (N)     Symptoms
BD (N=64)
296.02 (2)   Single manic episode, moderate
296.03 (4)   Single manic episode, severe without psychotic features
296.04 (6)   Single manic episode, severe with psychotic features
296.42 (1)   Most recent episode manic, moderate
296.43 (5)   Most recent episode manic, severe without psychotic features
296.44 (20)  Most recent episode manic, severe with psychotic features
296.52 (2)   Most recent episode depressed, moderate
296.53 (6)   Most recent episode depressed, severe without psychotic features
296.54 (2)   Most recent episode depressed, severe with psychotic features
296.62 (2)   Most recent episode mixed, moderate
296.63 (11)  Most recent episode mixed, severe without psychotic features
296.64 (3)   Most recent episode mixed, severe with psychotic features
MDD (N=12)
296.22 (2)   Recurrent, moderate
296.23 (1)   Recurrent, severe without psychotic features
296.24 (2)   Recurrent, severe with psychotic features
296.32 (3)   Single episode, moderate
296.33 (2)   Single episode, severe without psychotic features
296.34 (2)   Single episode, severe with psychotic features
Schizophrenia (N=18)
295.01 (3)   Disorganized type
295.03 (9)   Paranoid type
295.09 (6)   Undifferentiated type
Schizoaffective (N=15)
295.07 (15)  Schizoaffective disorder
OCD (N=5)
300.03 (5)   Obsessive compulsive disorder

Results

Population characteristics

As shown in Table 1, efforts were made to include gender- and age-matched control subjects, but comparability could ultimately not be achieved. The large disparity in both the female-to-male ratio and the age of blood donors compared to patients largely accounted for this limitation (gender: chi-square = 7.758, p = 0.005; age: by t-test, p = 0.015). As shown in Table 2, the majority of bipolar (BD) patients were either manic (59.4%) or in a mixed episode (25%), whereas only 15.6% had experienced a recent depression. Of all patients, only 19.3% (10 BD and 12 MDD patients out of 114) presented with a recent depressive episode.
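The quoted gender statistic can be checked directly from the Table 1 counts. A minimal Pearson chi-square (no continuity correction) on the female/male counts of patients (52/62) versus healthy controls (83/48) reproduces the reported 7.758 (p ≈ 0.005 at 1 degree of freedom); this is a verification sketch in pure Python, not the authors' code.

```python
def pearson_chi2(table):
    """Pearson chi-square statistic for a 2D contingency table
    (no continuity correction)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Female/male counts: patients (52/62) vs. healthy controls (83/48), Table 1
chi2 = pearson_chi2([[52, 62], [83, 48]])
print(round(chi2, 3))  # 7.758, matching the reported value
```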
Circulating immune complexes

Based on CICs, we found a mean prevalence of subclinical infection of 29.5% in the healthy Iranian controls, with a slightly higher prevalence in blood donors (33.3%) than in the healthy subject cohort (27.5%), for whom any mental illness had been excluded. Gender and age had no significant influence on CIC prevalence, but psychiatric patients differed significantly from the control group (p = 0.036), presenting with a mean CIC prevalence of 40.4%. In particular, the patients with bipolar disorder differed significantly with respect to CIC prevalence, OR, and OR estimate (OR Est.) (p = 0.014). Notably, the CIC prevalence found in patients with mood disorders (BD, MDD, and schizoaffective disorder; N = 91) was double that of schizophrenia patients (44% vs. 22%), a difference that was also statistically significant (p = 0.026). The statistical evaluations are given in Table 3.

Table 3. CIC results against three predictors: sex, age and diagnosis

Predictors       N    Pos./Neg. (p %)   OR     OR Est.  CI (95%)
Diagnoses:
Controls         131  36/95 (27.5%)     Ref    Ref      Ref
Blood donors     69   23/46 (33.3%)     1.21   1.029    0.483-2.193
Patients         114  46/68 (40.4%)     1.47   1.088    1.042-3.405
  BD             64   29/35 (45.3%)     1.65   2.035    1.072-3.863
  MDD            12   6/6 (50.0%)       1.82   2.750    0.820-9.222
  Schizophrenia  18   4/14 (22.2%)      0.81   0.632    0.186-2.149
  Schizoaffective 15  5/10 (33.3%)      1.21   1.254    0.392-4.014
  OCD            5    2/3 (40%)         1.46   2.225    0.343-14.437
Sex:
Male             173  57/116 (32.9%)    Ref    Ref      Ref
Female           141  48/93 (34.0%)     1.03   0.961    0.549-1.682
Age group:
18-25            78   32/46 (41.0%)     1.948  2.962    0.850-10.325
26-35            66   24/42 (36.4%)     1.727  2.422    0.691-8.894
36-45            80   23/57 (28.8%)     1.365  1.597    0.469-5.440
46-55            71   22/49 (31.0%)     1.471  1.830    0.534-6.278
56-65            19   4/15 (21.1%)      Ref    Ref      Ref

Free antibody and antigen

Based on the cut-off value of 0.1 [25], valid for all tests of the triple-EIA system to differentiate negative from positive results, free Abs were measured in 7.8% and 16.7% of the bipolar (BD) and schizophrenia patients, respectively, whereas the controls presented with 5.3%. Free Ag was present in 5.6% of the schizophrenic patients (1 out of 18), vs. 1% in the controls (2 out of 200). The other patient groups were negative in both tests (for details see Table 4). The dynamic balance between CIC formation, antigens, and antibodies accounts for their relative amounts simultaneously present in a sample.
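The Table 3 figures for bipolar disorder versus controls can be checked from the raw counts. The "OR" column matches a simple prevalence ratio, while the reported "OR Est." of 2.035 presumably comes from the covariate-adjusted logistic regression and therefore differs from the crude odds ratio computed here; this is a verification sketch, not the authors' analysis.

```python
def prevalence_ratio(pos_a, n_a, pos_b, n_b):
    """Ratio of prevalences: exposed group vs. reference group."""
    return (pos_a / n_a) / (pos_b / n_b)

def crude_odds_ratio(pos_a, neg_a, pos_b, neg_b):
    """Unadjusted odds ratio from a 2 x 2 table."""
    return (pos_a * neg_b) / (neg_a * pos_b)

# BD patients: 29 CIC-positive of 64; controls: 36 positive of 131 (Table 3)
pr = prevalence_ratio(29, 64, 36, 131)
or_ = crude_odds_ratio(29, 35, 36, 95)
print(round(pr, 2))   # 1.65 -> the "OR" column of Table 3
print(round(or_, 2))  # crude OR; the adjusted "OR Est." reported is 2.035
```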
The cross-sectional design of the study provides an infection profile valid only at a given time point, thereby limiting the explanatory power of the triple-EIA results.

Table 4. Prevalence of free antibodies and antigen

                  Free antibody                      Free antigen (N- & P-protein)
Groups            Pos./Neg.  Prevalence  CI          Pos./Neg.  Prevalence  CI
Controls          7/124      5.3%        1.5-9%      1/130      0.7%        0.7-2.3%
Patients          8/106      7.0%        2.3-11.7%   1/113      0.9%        0-2.6%
  BD              5/59       7.8%        1.2-14.4%   0/64       0.0%        -
  MDD             0/12       0.0%        -           0/12       0.0%        -
  Schizophrenia   3/15       16.7%       0-33.9%     1/17       5.6%        0-16%
  Schizoaffective 0/15       0.0%        -           0/15       0.0%        -
  OCD             0/5        0.0%        -           0/5        0.0%        -
Blood donors      1/68       1.4%        0-4.2%      1/68       1.4%        0-4.2%
Total             16/298     5.1%        2.7-7.5%    3/311      1%          0-2%
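The confidence intervals in Table 4 appear consistent with simple Wald (normal-approximation) intervals for a binomial proportion, truncated at zero; for example, free antibodies in BD patients (5 positive of 64) reproduce the listed 1.2-14.4%. A sketch under that assumption:

```python
import math

def wald_ci(pos, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, in percent,
    truncated at 0."""
    p = pos / n
    half = z * math.sqrt(p * (1 - p) / n)
    return 100 * max(p - half, 0.0), 100 * (p + half)

low, high = wald_ci(5, 64)  # free antibody in BD: 5 positive of 64
print(round(low, 1), round(high, 1))  # 1.2 14.4, matching Table 4
```

The same formula also reproduces the schizophrenia free-antibody interval (3 of 18 positive, 0-33.9%), where the lower bound is truncated at zero.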
Additional data analysis

According to the cut-off values, as shown in Table 5, dividing all data into negative and positive groups and performing only non-parametric analyses left many data unavailable for statistical inference. Instead, we used the quantitative CIC data from the EIA readings (after subtraction of the OD values for blanks) for parametric statistical analyses. A noticeable increase in CIC levels in both the bipolar disorder (0.147) and the major depressive disorder (0.163) groups became obvious and was statistically significant compared to control subjects. The 95% CIs of the CIC values are illustrated in Figure 1. CIC levels within the total population tended to be elevated in females compared to males (p = 0.089). Therefore, the influence of sex on CIC extinction values was also analyzed within the patient groups (Figure 2).
A significant increase in CIC levels in female patients was recognized when compared to males (p = 0.031).Table 5 Distribution of categorized CIC results (neg., +, ++, +++) in subgroups SubgroupsNeg N (%)++++++Total Controls 95 (72.5%)31 (23.7%)5 (3.8%)0131 Blood donors 46 (66.7%)21 (30.4%)2 (2.9%)069Case68 (59.6%)36 (31.5%)8 (7.0%)2 (1.7%)114BD35 (54.7%)22 (34.4%)6 (9.4%)1 (1.60%)64MDD6 (50.0%)5 (41.7%)01 (8.30%)12Schizophrenia14 (77.8%)2 (11.1%)2 (11.1%)018Schizoaffective10 (66.7%)5 (33.3%)0015OCD3 (60.0%)2 (40.0%)005 Sex Male116 (67.1%)50 (28.9%)7 (4.0%)0173Female93 (66.0%)38 (27.0%)8 (5.7%)2 (1.4%)141 Age groups 18-25 ys46 (59.0%)26 (33.3%)6 (7.7%)07826-35 ys42 (63.6%)20 (30.3%)4 (6.1%)06636-45 ys57 (71.3%)20 (25.0%)2 (2.5%)1 (1.3%)8046-55 ys49 (69.0%)20 (28.2%)2 (2.8%)07156-65 ys15 (78.9%)2 (10.5%)1 (5.3%)1 (5.3%)78OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600.Figure 1 Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of 95% CI in groups including Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105 based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to 1:20 dilution of plasma in the CIC-ELISA.Figure 2 Statistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were calculated by a parametric t-student test. Distribution of categorized CIC results (neg., +, ++, +++) in subgroups OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600. 
Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of 95% CI in groups including Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105 based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to 1:20 dilution of plasma in the CIC-ELISA. Statistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were calculated by a parametric t-student test. According to the cut off values, as shown in Table 5, dividing all data into a negative and positive group and performing only non-parametric analyses resulted in many data unavailable for statistical inference. Instead, we used quantitative CIC data from the EIA-reading (after subtraction of the OD values for blanks) for parametric statistical analyses. A noticeable increase in CIC levels of both, the bipolar disorder (0.147) and the major depressive disorder (0.163) groups became obvious, being statistically significant when compared to control subjects. The values for 95% CI of CICs are illustrated in Figure 1. The CIC levels within the total population tend to be elevated among females when compared to males (p = 0.089). Therefore, the influence of sex on CIC extinction values was also analyzed in these patient groups (Figure 2). 
A significant increase in CIC levels in female patients was recognized when compared to males (p = 0.031).Table 5 Distribution of categorized CIC results (neg., +, ++, +++) in subgroups SubgroupsNeg N (%)++++++Total Controls 95 (72.5%)31 (23.7%)5 (3.8%)0131 Blood donors 46 (66.7%)21 (30.4%)2 (2.9%)069Case68 (59.6%)36 (31.5%)8 (7.0%)2 (1.7%)114BD35 (54.7%)22 (34.4%)6 (9.4%)1 (1.60%)64MDD6 (50.0%)5 (41.7%)01 (8.30%)12Schizophrenia14 (77.8%)2 (11.1%)2 (11.1%)018Schizoaffective10 (66.7%)5 (33.3%)0015OCD3 (60.0%)2 (40.0%)005 Sex Male116 (67.1%)50 (28.9%)7 (4.0%)0173Female93 (66.0%)38 (27.0%)8 (5.7%)2 (1.4%)141 Age groups 18-25 ys46 (59.0%)26 (33.3%)6 (7.7%)07826-35 ys42 (63.6%)20 (30.3%)4 (6.1%)06636-45 ys57 (71.3%)20 (25.0%)2 (2.5%)1 (1.3%)8046-55 ys49 (69.0%)20 (28.2%)2 (2.8%)07156-65 ys15 (78.9%)2 (10.5%)1 (5.3%)1 (5.3%)78OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600.Figure 1 Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of 95% CI in groups including Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105 based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to 1:20 dilution of plasma in the CIC-ELISA.Figure 2 Statistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were calculated by a parametric t-student test. Distribution of categorized CIC results (neg., +, ++, +++) in subgroups OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600. 
Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of 95% CI in groups including Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105 based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to 1:20 dilution of plasma in the CIC-ELISA. Statistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were calculated by a parametric t-student test. Population characteristics: As shown in Table 1, efforts have been made to include gender and age matched control subjects, but comparability could finally not be achieved. The large disparity in both the female-to-male ratios and age of blood donors compared to patients considerably accounted for this limitation (gender: chi square = 7.758, p = 005; age: by t test, p = 0.015). As shown in Table 2, the majority of bipolar patients (BD) were either manic (59.4%) or in a mixed episode (25%), whereas only 15.6% experienced a recent depression. Of all patients, only 19.3% (10 BMD, 12 MDD patients out of 114) presented with a recent depressive episode. Circulating immune complexes: Based on CICs we found a mean prevalence of subclinical infection of 29.5% in the healthy Iranian controls, displaying a slightly higher prevalence in blood donors (33.3%) as compared to the healthy subject cohort (27.5%) for whom any mental illness has been excluded. Gender and age had no significant influence on CIC prevalence, but psychiatric patients showed significant differences compared to the control group (p = 0.036), presenting with a mean CIC prevalence of 40.4%. 
Particularly, the patients with bipolar disorder were statistically significantly different with reference to CIC prevalence, OR and OR estimate (OR Est.) (p = 0.014). It is noteworthy that the CIC prevalence found in patients with mood disorders (BD, MDD, and schizoaffective disorders; N = 91) was doubling that of schizophrenia patients (44% vs. 22%), a difference which turned out to be statistically significant (p = 0.026), as well. The statistical evaluations are given in Table 3.Table 3 CIC results against three predictors: sex, age and diagnosis PredictorsNPos./Neg. (p %)OROR Est.CI (95%) Diagnoses: Controls13136/95 (27.5%)RefRefRefBlood donors6923/46 (33.3%)1.211.0290.483-2.193Patients11446/68 (40.4%)1.471.0881.042-3.405BD6429/35 (45.3%)1.652.0351.072-3.863MDD126/6 (50.0%)1.822.7500.820-9.222Schizophrenia184/14 (22.2%)0.810.6320.186-2.149Schizoaffective155/10 (33.3%)1.211.2540.392-4.014OCD52/3 (40%)1.462.2250.343-14.437 Sex: Male17357/116 (32.9%)RefRefRefFemale14148/93 (34.0%)1.030.9610.549-1.682 Age group: 18-257832/46 (41.0%)1.9482.9620.850-10.32526-356624/42 (36.4%)1.7272.4220.691-8.89436-458023/57 (28.8%)1.3651.5970.469-5.44046-557122/49 (31.0%)1.4711.8300.534-6.27856-65194/15 (21.1%)RefRefRef CIC results against three predictors: sex, age and diagnosis Free antibody and antigen: Based on the cut-off value of 0.1 [25] valid for all tests of the triple-EIA system to differentiate the negative from positive results, free Abs were measured in 7.8% and 16.7% of the bipolar (BMD) and schizophrenia patients, respectively, whereas the controls presented with 5.3%. Free Ag was present in 5.6% of the schizophrenic patients (1 out of 18), vs. 1 % in the controls (2 out of 200). Other patient groups were negative in both tests (for details see Table 4). The dynamic balance between CIC formation, antigens, and antibodies accounts for their relative amounts simultaneously present in a sample. 
The cross-sectional design of the study provides an infection profile only valid at a given time point, thereby limiting the explanatory power of triple-EIA results.Table 4 Prevalence of free antibodies and antigen Free antibodyFree antigen (N-& P-protein)GroupsPos./Neg.Prevalence %CIPos./Neg.Prevalence %CI Controls 7/124(5.3%)1.5-9%1/130(0.7%)0.7-2.3% Patients 8/106(7.0%)2.3-11.7%1/113(0.9%)0-2.6%BD5/597.8%1.2-14.4%0/640.0%-MDD0/120.0%-0/120.0%-Schizophrenia3/1516.7%0-33.9%1/175.6%0-16%Schizoaffective0/150.0%-0/150.0%-OCD0/50.0%-0/50.0%- Blood donors 1/68(1.4%)0-4.2%1/68(1.4%)0-4.2% Total 16/298(5.1%)2.7-7.5%3/311(1%)0-2% Prevalence of free antibodies and antigen Additional data analysis: According to the cut off values, as shown in Table 5, dividing all data into a negative and positive group and performing only non-parametric analyses resulted in many data unavailable for statistical inference. Instead, we used quantitative CIC data from the EIA-reading (after subtraction of the OD values for blanks) for parametric statistical analyses. A noticeable increase in CIC levels of both, the bipolar disorder (0.147) and the major depressive disorder (0.163) groups became obvious, being statistically significant when compared to control subjects. The values for 95% CI of CICs are illustrated in Figure 1. The CIC levels within the total population tend to be elevated among females when compared to males (p = 0.089). Therefore, the influence of sex on CIC extinction values was also analyzed in these patient groups (Figure 2). 
A significant increase in CIC levels in female patients was recognized when compared to males (p = 0.031).

Table 5  Distribution of categorized CIC results (neg., +, ++, +++) in subgroups

Subgroup          Neg. N (%)     +            ++           +++          Total
Controls          95 (72.5%)     31 (23.7%)   5 (3.8%)     0            131
Blood donors      46 (66.7%)     21 (30.4%)   2 (2.9%)     0            69
Cases             68 (59.6%)     36 (31.5%)   8 (7.0%)     2 (1.7%)     114
  BD              35 (54.7%)     22 (34.4%)   6 (9.4%)     1 (1.6%)     64
  MDD             6 (50.0%)      5 (41.7%)    0            1 (8.3%)     12
  Schizophrenia   14 (77.8%)     2 (11.1%)    2 (11.1%)    0            18
  Schizoaffective 10 (66.7%)     5 (33.3%)    0            0            15
  OCD             3 (60.0%)      2 (40.0%)    0            0            5
Sex:
  Male            116 (67.1%)    50 (28.9%)   7 (4.0%)     0            173
  Female          93 (66.0%)     38 (27.0%)   8 (5.7%)     2 (1.4%)     141
Age groups:
  18-25 ys        46 (59.0%)     26 (33.3%)   6 (7.7%)     0            78
  26-35 ys        42 (63.6%)     20 (30.3%)   4 (6.1%)     0            66
  36-45 ys        57 (71.3%)     20 (25.0%)   2 (2.5%)     1 (1.3%)     80
  46-55 ys        49 (69.0%)     20 (28.2%)   2 (2.8%)     0            71
  56-65 ys        15 (78.9%)     2 (10.5%)    1 (5.3%)     1 (5.3%)     19

OD absorbance is valued as (+): OD absorbance > 0.100 - 0.300, (++): OD absorbance > 0.300 - 0.600 and (+++): OD absorbance > 0.600.

Figure 1  Mean and 95% confidence intervals for CIC extinction in the investigated groups. Lower and upper limits of the 95% CI: Control: 0.080-0.095, Schizophrenia: 0.068-0.121, Schizoaffective: 0.056-0.080, Bipolar: 0.125-0.167, MDD: 0.096-0.224, OCD: 0.047-0.134 and Donor: 0.083-0.105, based on Table 3. *Significant when compared to controls (p = 0.001, ANOVA). **Significant when compared to controls (p = 0.029, ANOVA). Extinction values refer to a 1:20 dilution of plasma in the CIC-ELISA.

Figure 2  Statistical differences between female and male samples in control and patient groups based on 95% CIC absorbance (p = 0.031). 131 control samples (83 female and 48 male) and 114 patient samples (52 female and 62 male) were compared using a parametric Student's t-test.
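The female-versus-male comparison relies on a parametric Student's t-test on the quantitative OD (extinction) values. The sketch below illustrates the statistic only; the sample values are hypothetical, since the study's raw extinction data are not reproduced here.

```python
import math

def two_sample_t(a, b):
    """Pooled-variance (Student's) two-sample t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical OD extinction values (1:20 plasma dilution), NOT the study's raw data
female = [0.14, 0.18, 0.11, 0.20, 0.16, 0.13]
male = [0.09, 0.12, 0.08, 0.11, 0.10, 0.07]
t, df = two_sample_t(female, male)
```

The resulting t is compared against the two-sided critical value for the given degrees of freedom (about 2.23 for 10 df at the 5% level) to decide significance.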
Discussion: This is the first study in an Iranian population showing a fairly high prevalence of Bornavirus infection in healthy individuals, including blood donors. The results match data reported from Central Europe of about 30%, based on the same infection marker (CIC). The study also supports previous findings that this neurotropic virus infection is more prevalent in psychiatric patients than in healthy donors. Our results follow the trends of infection patterns in other countries, such as Europe, the Americas and Asia, which are based on specific antibody and nucleic acid detection [9, 10, 15, 21, 35–42], despite largely differing prevalence data. Our data showed much better agreement with studies measuring released BDV antigens or antigen-antibody complexes, such as CICs [25, 27, 29, 30]. Studies questioning or reporting the absence of BDV in both normal and psychiatrically diseased people remain inconclusive as long as no other cohorts have been investigated and no other methods have been applied. Among those are the studies of Na et al. [42] and Hornig et al. [43]. The latter group even disregarded its own earlier positive study with contradictory results from the same country [10].
On the other hand, the existence of a human BDV strain has recently been independently proven by an in vitro study in brain cells: only the human virus, but not the animal-derived laboratory strain of BDV, was able to reduce proliferation and enhance apoptosis [44]. Our study used an established triple EIA which had been successfully applied to monitor point and longitudinal prevalence of BDV infection markers in patients [25, 26]. In our hands, these EIAs were found to be easy to handle and to provide robust and reproducible measurements. It is unfortunate that general acceptance is still pending. In this study, consecutive sampling of admitted patients was not possible. Although the data only refer to cross-sectional sample analysis, BDV markers were significantly more prevalent in Iranian patients with mental diseases than in control subjects. These findings were similar to data reported from Germany [25, 26], Italy [27], Australia [29], the CSSR [30], China (Xia Liu, Peng Xie, pers. communication), and Lithuania (Violeta Mockeliūnienė, Robertas Bunevicius, pers. communication), where the same test system had been applied. The presence of CICs, with or without antibodies, indicates a chronic infection; the presence of Ag, with or without CICs at the same time, indicates a currently active infection. The finding of free anti-BDV antibody alone (no antigen, no CICs) is thought to indicate previous exposure to the agent, but not a currently active infection [34]. As shown in earlier reports, CICs represent the major viral marker, explaining the transient disappearance of antibodies and antigens in blood plasma between activated and dormant phases of virus infection, and thereby also providing a clue to the true number of silently infected carriers in a healthy cohort or population [25, 26, 34]. Iranian psychiatric patients show a clearly elevated CIC sero-prevalence (40.4%) compared to healthy controls (27.5%).
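The marker-pattern interpretation just described can be captured in a small decision function (an illustrative simplification of the rules stated in the text, not code from the paper):

```python
def interpret_markers(cic, ab, ag):
    """Qualitative reading of a triple-EIA marker pattern, following the rules
    summarized in the text: Ag (with or without CICs) = currently active
    infection; CICs (with or without Ab) = chronic infection; free Ab alone =
    previous exposure without current activity."""
    if ag:
        return "currently active infection"
    if cic:
        return "chronic infection"
    if ab:
        return "previous exposure, no current active infection"
    return "no infection marker detected"
```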
It is of special interest that 33.3% of blood donation samples came from silent virus carriers, a finding confirming Australian [29] and German pilot reports [17, 26] and standing in clear contrast to an earlier report [45]. Transfusion issues relating to BDV infection still await further clarification [46]. BD, MDD and OCD patients presented with infection rates of 45.3%, 50.0% and 40.0%, respectively. However, significance was reached only in BD patients. This might be due to the small sample sizes, but in the parametric data analysis comparing OD absorbance (extinction) values, the high CIC levels in sera from BD and MDD patients were also significant. In contrast to other reports [47, 48], we found a relatively high sero-prevalence of free Ab and Ag in schizophrenia patients (16.7% and 5.6%, respectively), which is consistent with the relatively low CIC sero-prevalence among those individuals (22.2%, see Table 3). Apart from schizophrenia patients, only BD patients showed free antibodies in their sera (7.8%). This implies that BDV antibodies are usually bound in immune complexes and therefore become transiently absent from the blood stream. It is of considerable interest that CIC sero-prevalence correlated inversely with age (linear regression using age as continuous data, R = -0.116, p = 0.042), meaning that the youngest patients had the highest CIC values, even though the cohort included only adults 18 years and older. This leaves open the question of whether younger people are more prone to BDV infection or whether their immune response is more prominent.
This supports a recent finding that young children (from 4-6 months to 3 years of age) had even much higher infection rates, although this pilot study warrants further investigation [28, 34]. In addition, it has to be further examined whether, and to what extent, vertical transmission of BDV in the pregnant horse [49], mouse [50] and human [28] contributes to higher infection rates at a young age. In this regard, the high prevalence of BDV in the normal population, lifelong persistence of the virus in infected subjects (patients or healthy people), and the so far undisclosed function of endogenized BDV genome stretches [2, 3, 16] might represent further risk factors warranting urgent future investigation. Interestingly, significant differences between female and male patients could be measured for the first time (Figure 2, middle; p = 0.031), with a CIC prevalence of 42.3% in females and 38.7% in males. In line with these findings, two female patients, belonging to the BD and MDD groups, had high CIC titers with levels above 0.6 (+++) (Table 5). The sero-prevalence among healthy controls, however, reached only 25.3% in females and 31.3% in males. Such sex-specific differences in the titers and prevalence of antibody responses to foreign antigens, infectious agents, or even autoantigens are known from the literature [51–55]. Females usually exhibit a stronger humoral immune response, as is especially evident after vaccination and infection with microbial agents. In fact, estrogens exert stimulatory effects on B cell proliferation and serum IgG levels, whereas testosterone may suppress B cell function [56, 57]. In conclusion, Iranian people seem to fit the pattern of BDV infections reported worldwide so far [5]. Moreover, the study benefits from using prevalent infection markers and a highly specific and effective test system [26, 34].
The study confirms evidence of a high infection prevalence, similar to Central Europe, in one third of healthy Iranian subjects, contrasting with elevated levels in patients with mood disorders. In view of the millions of people worldwide suffering from depression and the huge related health care costs [58], this study again argues for integrating BDV infection surveillance into psychiatric research [26] rather than continuing to underplay its impact.
Background: Borna disease virus (BDV) is an evolutionarily old RNA virus, which infects brain and blood cells of humans, their primate ancestors, and other mammals. Human infection has been correlated with mood disorders and schizophrenia, but the impact of BDV on mental health still remains controversial due to poor methodological and cross-national comparability. Methods: This first report from the Middle East aimed to determine BDV infection prevalence in Iranian acute psychiatric disorder patients and healthy controls through circulating immune complexes (CIC), antibodies (Ab) and antigen (pAg) in blood plasma using a standardized triple enzyme immune assay (EIA). Samples of 314 subjects (114 psychiatric cases, 69 blood donors, and 131 healthy controls) were assayed and data analyzed quantitatively and qualitatively. Results: CICs revealed a BDV prevalence of one third (29.5%) in healthy Iranian controls (27.5% controls; 33.3% blood donors). In psychiatric patients CIC prevalence was higher than in controls (40.4%), significantly correlating with bipolar patients exhibiting overt clinical symptoms (p = 0.005, OR = 1.65). CIC values were significantly elevated in bipolar (p = 0.001) and major depressive disorder (p = 0.029) patients as compared to controls, and in females compared to males (p = 0.031). Conclusions: This study supports a similarly high prevalence of subclinical human BDV infections in Iran as reported for central Europe, and again provides an indication of a correlation between BDV infection and mood disorders. Further studies should address the morbidity risk for healthy carriers and those with elevated CIC levels, along with gender disparities.
null
null
10,979
315
[ 595, 599, 707, 191, 142, 312, 238, 784 ]
11
[ "patients", "cic", "prevalence", "episode", "age", "bdv", "table", "disorder", "groups", "controls" ]
[ "bdv genome", "bdv released antigens", "bdv protein", "bdv antibodies usually", "borna disease virus" ]
null
null
[CONTENT] Borna disease virus | Circulating immune complexes | Psychiatric disorders | Iranian patients/controls [SUMMARY]
[CONTENT] Borna disease virus | Circulating immune complexes | Psychiatric disorders | Iranian patients/controls [SUMMARY]
[CONTENT] Borna disease virus | Circulating immune complexes | Psychiatric disorders | Iranian patients/controls [SUMMARY]
null
[CONTENT] Borna disease virus | Circulating immune complexes | Psychiatric disorders | Iranian patients/controls [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Bipolar Disorder | Blood Donors | Borna Disease | Borna disease virus | Case-Control Studies | Depressive Disorder, Major | Female | Humans | Iran | Male | Middle Aged | Prevalence | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Bipolar Disorder | Blood Donors | Borna Disease | Borna disease virus | Case-Control Studies | Depressive Disorder, Major | Female | Humans | Iran | Male | Middle Aged | Prevalence | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Bipolar Disorder | Blood Donors | Borna Disease | Borna disease virus | Case-Control Studies | Depressive Disorder, Major | Female | Humans | Iran | Male | Middle Aged | Prevalence | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Bipolar Disorder | Blood Donors | Borna Disease | Borna disease virus | Case-Control Studies | Depressive Disorder, Major | Female | Humans | Iran | Male | Middle Aged | Prevalence | Young Adult [SUMMARY]
null
[CONTENT] bdv genome | bdv released antigens | bdv protein | bdv antibodies usually | borna disease virus [SUMMARY]
[CONTENT] bdv genome | bdv released antigens | bdv protein | bdv antibodies usually | borna disease virus [SUMMARY]
[CONTENT] bdv genome | bdv released antigens | bdv protein | bdv antibodies usually | borna disease virus [SUMMARY]
null
[CONTENT] bdv genome | bdv released antigens | bdv protein | bdv antibodies usually | borna disease virus [SUMMARY]
null
[CONTENT] patients | cic | prevalence | episode | age | bdv | table | disorder | groups | controls [SUMMARY]
[CONTENT] patients | cic | prevalence | episode | age | bdv | table | disorder | groups | controls [SUMMARY]
[CONTENT] patients | cic | prevalence | episode | age | bdv | table | disorder | groups | controls [SUMMARY]
null
[CONTENT] patients | cic | prevalence | episode | age | bdv | table | disorder | groups | controls [SUMMARY]
null
[CONTENT] bdv | virus | human | rna | studies | antibody | infection | prevalence | horses | term [SUMMARY]
[CONTENT] severe psychotic | psychotic | severe | episode | severe psychotic features296 | psychotic features296 | features296 | recent episode | w1 | protein [SUMMARY]
[CONTENT] cic | 95 | absorbance | compared | significant | od absorbance | control | female | prevalence | od [SUMMARY]
null
[CONTENT] patients | cic | prevalence | episode | bdv | age | psychotic | severe psychotic | severe | table [SUMMARY]
null
[CONTENT] Borna | BDV | RNA ||| BDV [SUMMARY]
[CONTENT] first | the Middle East | BDV | Iranian | CIC | pAg | EIA ||| 314 | 114 | 69 | 131 [SUMMARY]
[CONTENT] BDV | one third | 29.5% | Iranian | 27.5% | 33.3% ||| CIC | 40.4% | 0.005 | 1.65 ||| CIC | 0.001 | 0.029 ||| 0.031 [SUMMARY]
null
[CONTENT] BDV | RNA ||| BDV ||| first | the Middle East | BDV | Iranian | CIC | pAg | EIA ||| 314 | 114 | 69 | 131 ||| BDV | one third | 29.5% | Iranian | 27.5% | 33.3% ||| CIC | 40.4% | 0.005 | 1.65 ||| CIC | 0.001 | 0.029 ||| 0.031 ||| BDV | Iran | Europe | BDV ||| CIC [SUMMARY]
null
Frequency of HIV status disclosure, associated factors and outcomes among HIV positive pregnant women at Mbarara Regional Referral Hospital, southwestern Uganda.
31312312
Positive HIV results disclosure plays a significant role in the successful prevention and care of HIV infected patients. It provides significant social and health benefits to the individual and the community. Non-disclosure is one of the contextual factors driving the HIV epidemic in Uganda. Study objectives: to determine the frequency of HIV disclosure, associated factors and disclosure outcomes among HIV positive pregnant women at Mbarara Hospital, southwestern Uganda.
INTRODUCTION
A cross-sectional study using quantitative and qualitative methods among a group of HIV positive pregnant women attending antenatal clinic was done and consecutive sampling conducted.
METHODS
The total participant recruitment was 103, of which 88 (85.4%) had disclosed their serostatus, with 57% disclosing to their partners. About 80% had disclosed within less than 2 months of testing HIV positive. Reasons for disclosure included their partners having disclosed to them (27.3%), caring partners (27.3%) and encouragement by health workers (25.0%). Following disclosure, 74% were comforted and 6.8% were verbally abused. Reasons for non-disclosure were fear of abandonment (33.3%), fear of being beaten (33.3%) and loss of financial and emotional support (13.3%). The factors associated with disclosure were age 26-35 years (OR 3.9, 95% CI 1.03-15.16), primary education (OR 3.53, 95% CI 1.10-11.307) and urban dwelling (OR 4.22, 95% CI 1.27-14.01).
RESULTS
Participants disclosed mainly to their partners and were generally comforted, and many of them were encouraged by health workers. There is a need to optimize the merits of disclosure to enable increased participation in treatment and support programs.
CONCLUSION
[ "Adolescent", "Adult", "Cross-Sectional Studies", "Disclosure", "Female", "HIV Infections", "Humans", "Pregnancy", "Pregnancy Complications, Infectious", "Prenatal Care", "Sexual Partners", "Uganda", "Young Adult" ]
6620078
Introduction
HIV/AIDS remains a major public health problem and more effort is needed to ensure successful treatment and prevention programs. Disclosure is an important component in the uptake of prevention of mother-to-child transmission (PMTCT) services [1]. Women are counseled to share their own HIV test result with their partner and become responsible for encouraging their partner to undertake HIV testing. The dialogue on sexual activity or HIV/AIDS within a couple is often difficult, especially when women discover that they are HIV-infected [2]. Among postpartum mothers, disclosure is associated with adherence to safer infant feeding practices (exclusive breast feeding or exclusive replacement feeding), creates awareness about HIV risk for untested sexual partners, supports risk reduction, promotes safer sexual behavior, increases retention in eMTCT programs, and leads to better clinical outcomes such as CD4+ count increases and higher rates of retention in treatment programs [3-12]. Studies also show that disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy [13]. Despite these potential benefits, studies indicate the frequency of non-disclosure remains relatively high. About 25% of HIV infected patients do not disclose their HIV serostatus to their partners [14]. The proportion not disclosing is even larger (60%) among HIV positive pregnant women attending antenatal care [15]. Disclosure is a complex process that requires delicate handling to prevent the occurrence of unwanted effects [16]. HIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution [17, 18].
Therefore, the purpose of this study was to determine the frequency of disclosure, the factors associated with it, and the potential barriers and facilitators to disclosure of HIV serostatus by pregnant women attending the antenatal clinic at Mbarara Regional Referral Hospital (MRRH), southwestern Uganda.
Methods
Study site: The study was conducted at the Mbarara Regional Referral Hospital antenatal care clinic (ANC), among HIV positive pregnant women attending the clinic. The antenatal clinic sees 700 women per month; at least 90% of women in Uganda attend at least one antenatal visit during their pregnancy. The antenatal HIV prevalence in Uganda is 6.4%, while the prevalence in the Mbarara Hospital maternity ward is about 12% [19]. Mbarara Hospital is both a teaching hospital for Mbarara University Medical School and a regional referral hospital located within Mbarara Municipality, Mbarara district, in the Western region of Uganda, about 270 km from the capital city Kampala. The facility provides care for a diverse ethnic group of patients from the region, including some parts of the Democratic Republic of Congo, Rwanda and the northern part of Tanzania. Study design: The study was a cross-sectional study using both quantitative and qualitative methods. The quantitative method involved interviewer-administered pre-tested questionnaires, while the qualitative method involved two focus group discussions (FGDs), i.e. eight HIV positive women who had disclosed their serostatus and eight who had not. Sample size and sampling procedure: The sample size calculated was 103 women, using the formula by Kish and Leslie (1965), based on the assumption that 20% of women attending the antenatal clinic would not disclose their HIV results to anyone. Using an error margin of 7.5%, an estimated 103 women were sufficient to answer the study objectives. The HIV positive pregnant women attending the MRRH ANC were consecutively recruited until the desired number of 103 was achieved, while every 5th respondent during the quantitative survey was also requested to participate in a focus group discussion until the required number of eight participants per group was reached.
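The Kish & Leslie formula for a single proportion is n = z²·p(1-p)/d². A sketch with the inputs stated in the text (p = 20%, d = 7.5%) and z = 1.96 assumed (the z value is not stated in the paper):

```python
import math

def kish_leslie_n(p, d, z=1.96):
    """Kish & Leslie (1965) sample size for estimating a single proportion p
    with absolute precision d: n = z^2 * p * (1 - p) / d^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Stated inputs: expected non-disclosure p = 20%, error margin d = 7.5%
n = kish_leslie_n(0.20, 0.075)
```

With these nominal inputs the formula gives n = 110, close to the 103 reported; the small difference presumably reflects the authors' exact rounding or slightly different parameters.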
Data collection and instruments: The HIV positive mothers were received at the registration desk with the rest of the pregnant women. They were identified using the HIV status codes on their ANC charts. The clinician handed them over to a research assistant in a separate room away from the routine antenatal clinic activities. The research assistant provided the mothers with information about the study and sought their consent to participate. Quantitative data were collected using interviewer-administered, pre-coded, pre-tested questionnaires capturing socio-demographic variables such as age, marital status, residence, employment, education level, nature of domicile, religion, parity and tribe, as well as disclosure status (disclosed or not disclosed), to whom and when disclosure was made, and the outcomes of disclosure. The questionnaire was pre-tested and its translation double-verified. The questionnaire was piloted, and necessary adjustments were made. The data were cleaned. During the focus group discussions, the interviews were recorded using a voice recorder. Data entry and analysis: Quantitative data were entered into the EPI-INFO program and analyzed using the Statistical Package for the Social Sciences (SPSS version 12). Categorical data were summarized into frequencies or proportions. The socio-demographic characteristics of women who disclosed their HIV serostatus were analyzed. The primary outcome for the quantitative analysis was disclosure. The association between social, economic and demographic categorical variables and disclosure status was assessed using binary logistic regression analysis, and an association was considered significant if the p-value was less than 0.05. The qualitative data were transcribed verbatim, categories were created with supporting evidence from the responses, and coding was done using thematic content analysis.
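The binary logistic regression mentioned above can be sketched with a single binary predictor: the fitted slope exponentiates to the odds ratio. The counts below are hypothetical, not the study's data; the fit uses plain gradient ascent on the mean log-likelihood.

```python
import math

def fit_logistic(xs, ys, lr=0.5, iters=20000):
    """Single-predictor logistic regression fitted by gradient ascent on the
    mean log-likelihood; returns (intercept b0, slope b1)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted probability
            g0 += (y - p)
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical counts: urban (x=1) 30 disclosed / 10 not, rural (x=0) 20 / 25
# -> crude OR = (30*25)/(10*20) = 3.75
xs = [1] * 40 + [0] * 45
ys = [1] * 30 + [0] * 10 + [1] * 20 + [0] * 25
b0, b1 = fit_logistic(xs, ys)
```

For this one-predictor layout exp(b1) recovers the crude 2x2 odds ratio; with several predictors the same model yields the adjusted ORs reported in studies like this one.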
Ethical consideration: The work was presented to the department of obstetrics and Gynecology at Mbarara University and ethical approval sought from the faculty research ethical committee. Individual consent was obtained from the participants enrolled. Only participants who voluntarily agreed to participate in the study received an interview.
Results
The majority of the participants were between the ages of 18 and 35 years (94.2%), Christians (89.3%), had primary education (56.3%), were married either monogamously or polygamously (94.1%), were multiparous (61.2%), lived in a nuclear family setting (71.8%), were unemployed (59.2%) and had a monthly income of between 50,000 and 100,000 Uganda shillings (42.7%). Among the respondents who were below 18 years, 83.3% had disclosed their serostatus. Disclosure among those with no formal education was 100% (Table 1). Persons disclosed to included partners (57%), parents (25%), friends (9%), relatives (6%) and siblings (3%). Out of the 103 respondents, 88 (85.4%) had disclosed to at least one person. About seventy-nine percent (79%) had disclosed within less than 2 months of testing positive, while 9.1% had disclosed after 6 or more months of having tested positive (Table 2). One of the respondents in the focus group discussions reported having disclosed within seven days, as evidenced by her response: "I told my husband on the second day following my testing positive because he had disclosed to me his serostatus and was openly taking his HIV drugs," said a 30-year-old mother of three. Most women who disclosed their serostatus were encouraged by health care workers, had partners who were caring, and had partners who had disclosed to them (Table 3). This was further supported by information gathered from the focus group discussions, where participants reportedly disclosed because their partners had disclosed to them and some had been encouraged to do so by the health workers, as said by a 32-year-old mother of four children and a 22-year-old primipara, respectively: "I told my husband on the second day following my testing positive because he had disclosed to me his serostatus and was openly taking his HIV drugs".
"The health worker always reminded and encouraged me whenever we met and I got the boldness to tell my husband."
The socio-demographic characteristics in relation to disclosure
Frequency and timing of disclosure among women attending antenatal clinic at Mbarara Regional Referral Hospital
Factors that motivated disclosure among pregnant women who disclosed their HIV sero-status, Mbarara Hospital, Uganda (n = 88)
Reasons for non-disclosure: These included fear of abandonment (32%), fear of being beaten (32%), loss of financial support (12%), stigmatization (12%) and loss of emotional support (6.7%); others thought that disclosure was not necessary (6.7%). The information from the questionnaires was supported by the focus group discussions, where fear of death, divorce, being beaten, job denial and ignorance of the importance of disclosure were reported as some of the barriers to disclosure. Some of the responses included the following: "Knowing that he easily gets upset by small things and begins fighting, if I tell him about my status will he not beat me to death? His brother beat his wife seriously when he found that she was positive, and he is now in prison," said a 29-year-old primary school teacher. "Can't I live with my disease without bothering people by telling them of my issues? I feel comfortable that way," reported a 40-year-old prisons warder and mother of 6 children. "I am looking for a job right now and if prospective employers get to know that I am positive, they may deny me a job. I will reveal my status when I have a job," reported a 34-year-old mother of 3 children. Post-disclosure experiences: The majority of the women who disclosed were comforted (73.9%). However, negative outcomes included accusation of infidelity (24.9%), verbal abuse (6.8%), being beaten (5.7%) and, for a few, being chased out of their homes by their husbands and the relatives of their husbands (2.3%).
The information from the focus group discussions lent further credence to that gathered from the questionnaires, with respondents reporting increased support and comforting, as said by a 25-year-old mother of 3: "When I told my mother that I was HIV positive, she was so sad but later comforted me and promised to give me all the support I needed." She had also told her partner: "My partner pledged his support and continued love till death do us part. He has always reminded me to take my drugs and goes with me to hospital during my clinic days." On the negative side, some mothers reported financial loss, being beaten, denial of conjugal rights, divorce and stigma as some of the outcomes of having disclosed their serostatus. Some of their responses included the following: "When I told my partner, he beat me that night and locked me in the house for two days, though he came back to his senses and stopped harassing me," reported an 18-year-old primipara. Factors associated with disclosure: The factors associated with disclosure were age between 26 and 35 years: this age group had a 3.9-fold increase in the odds of disclosure compared to those in the 18-25 year age category (OR 3.9, 95% CI 1.03-15.16). Primary education was associated with a 3.5-fold increase in the odds of disclosure compared to those with post-primary education (OR 3.53, 95% CI 1.10-11.307). Urban dwelling was associated with an over 4-fold increase in the odds of disclosure compared to rural dwelling (OR 4.22, 95% CI 1.27-14.01) (Table 4). Factors associated with disclosure of HIV among pregnant women attending antenatal clinic at Mbarara Regional Referral Hospital
Conclusion
There is a heightened need to emphasize the importance of disclosure to enable increased participation in treatment and support programs, and to find ways of minimizing the negative consequences and optimizing the positive outcomes of disclosure of HIV status. What is known about this topic: Disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy; HIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution; disclosure is a complex process that requires delicate handling to prevent the occurrence of unwanted effects. What this study adds: The disclosure proportions among HIV-infected pregnant women in Mbarara Hospital; the factors associated with disclosure among HIV-infected pregnant women in Mbarara Hospital; the reasons for non-disclosure among HIV-infected pregnant women at Mbarara Hospital.
[ "What is known about this topic", "What this study adds" ]
[ "That disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy;\nHIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution;\nDisclosure is a complex process that requires delicate handling to prevent occurrence of unwanted effects.", "The disclosure proportions among HIV-infected pregnant women in Mbarara Hospital;\nThe factors associated with disclosure among HIV-infected pregnant women in Mbarara Hospital;\nThe reasons for non-disclosure among HIV-infected pregnant women at Mbarara Hospital." ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "What is known about this topic", "What this study adds", "Competing interests" ]
[ "HIV/AIDS remains a major public health problem and more effort is needed to ensure successful treatment and prevention programs. Disclosure is an important component in uptake of prevention of mother-to-child transmission (PMTCT) services [1]. Women are counseled to share with their partner their own HIV test result and they become responsible for encouraging their partner to undertake HIV testing. The dialogue on sexual activity or HIV/AIDS within a couple is often difficult, especially when women discover that they are HIV-infected [2]. Among postpartum mothers, disclosure is associated with adherence to safer infant feeding practices, exclusive breast feeding, exclusive replacement feeding, creates awareness about HIV risk for the untested sexual partners, supports risk reduction, promotes safer sexual behavior, increased retention in eMTCT programs, leads to better clinical outcomes such as CD4+ count increases and higher rates of retention in treatment programs [3-12]. Studies also show that disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy [13]. Despite these potential benefits, studies indicate the frequency of non-disclosure remains relatively high. About 25% of HIV infected patients do not disclose their HIV serostatus to their partners [14]. The proportion not disclosing is even larger (60%) among HIV positive pregnant women attending antenatal care [15]. Disclosure is a complex process that requires delicate handling to prevent occurrence of unwanted effects [16]. HIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution [17, 18]. 
Therefore, the purpose of this study was to determine the frequency of disclosure, the factors associated with disclosure, and the potential barriers and facilitators to disclosure of HIV serostatus by pregnant women attending the antenatal clinic at Mbarara Regional Referral Hospital (MRRH), South Western Uganda.", "Study site: The study was conducted among HIV positive pregnant women attending the Antenatal Care clinic (ANC) at Mbarara Regional Referral Hospital. The antenatal clinic sees about 700 women per month; at least 90% of women in Uganda attend at least one antenatal visit during their pregnancy. The antenatal HIV prevalence in Uganda is 6.4%, while the prevalence in the Mbarara Hospital maternity ward is about 12% [19]. Mbarara Hospital is both a teaching hospital for Mbarara University Medical School and a Regional Referral Hospital located within Mbarara Municipality, Mbarara district, in the Western region of Uganda, about 270 km from the capital city Kampala. The facility provides care for a diverse ethnic group of patients from the region, including some parts of the Democratic Republic of Congo, Rwanda and the Northern part of Tanzania.\nStudy design: This was a cross sectional study using both quantitative and qualitative methods. The quantitative method involved interviewer-administered pre-tested questionnaires, while the qualitative method involved two focus group discussions (FGDs): one with eight HIV positive women who had disclosed their serostatus and one with eight HIV positive women who had not disclosed.\nSample size and sampling procedure: The sample size of 103 women was calculated using the Kish formula (Kish, 1965), based on the assumption that 20% of women attending the antenatal clinic would not disclose their HIV results to anyone. Using an error margin of 7.5%, an estimated 103 women were sufficient to answer the study objectives. 
The HIV positive pregnant women attending the MRRH ANC were consecutively recruited until the desired number of 103 was achieved, while every 5th respondent during the quantitative survey was also requested to participate in a focus group discussion until the required number of eight participants per group was reached.\nData collection and instruments: The HIV positive mothers were received at the registration desk with the rest of the pregnant women. They were identified using the HIV status codes on their ANC charts. The clinician handed them over to a research assistant in a separate room away from the routine antenatal clinic activities. The research assistant provided the mothers with information about the study and sought their consent to participate. Quantitative data were collected using interviewer-administered, pre-coded, pre-tested questionnaires to determine: the socio-demographic variables such as age, marital status, residence, employment, education level, nature of domicile, religion, parity and tribe; and the disclosure status (disclosed or not disclosed), to whom and when disclosure was made, and the outcomes of disclosure. The questionnaire was pre-tested and its translation double-checked. The questionnaire was piloted, and necessary adjustments were made. The data were cleaned. The focus group discussions were recorded using a voice recorder.\nData entry and analysis: Quantitative data were entered into the EPI-INFO program and analyzed using the Statistical Package for the Social Sciences (SPSS version 12). Categorical data were summarized into frequencies or proportions. The socio-demographic characteristics of women who disclosed their HIV serostatus were analyzed. The primary outcome for the quantitative analysis was disclosure. 
The association between social, economic and demographic categorical variables and disclosure status was assessed using binary logistic regression analysis, and an association was considered significant if the p-value was less than 0.05. The qualitative data were transcribed verbatim, categories were created with supporting evidence from the responses, and the data were coded using thematic content analysis.\nEthical consideration: The work was presented to the Department of Obstetrics and Gynecology at Mbarara University and ethical approval was sought from the faculty research ethics committee. Individual consent was obtained from the participants enrolled. Only participants who voluntarily agreed to participate in the study were interviewed.", "The majority of the participants were between the ages of 18 and 35 years (94.2%), Christian (89.3%), had primary education (56.3%), were married either monogamously or polygamously (94.1%), were multiparous (61.2%), lived in a nuclear family setting (71.8%), were unemployed (59.2%) and had a monthly income of between 50,000 and 100,000 Uganda shillings (42.7%). Among the respondents who were below 18 years, 83.3% had disclosed their serostatus. Disclosure among those with no formal education was 100% (Table 1). Persons disclosed to included partners (57%), parents (25%), friends (9%), relatives (6%) and siblings (3%). Out of the 103 respondents, 88 (85.4%) had disclosed to at least one person. About seventy-nine percent (79%) had disclosed within less than 2 months of testing positive, while 9.1% had disclosed after 6 or more months of having tested positive (Table 2). One of the respondents in the focus group discussions reported having disclosed within seven days: \"I told my husband on the second day following my testing positive because he had disclosed to me his serostatus and was openly taking his HIV drugs,\" said a 30-year-old mother of three. 
Most women who disclosed their serostatus were encouraged by health care workers, had partners who were caring, or had partners who had disclosed to them (Table 3). This was further supported by information gathered from the focus group discussions, where participants reportedly disclosed because their partners had disclosed to them and some had been encouraged to do so by the health workers, as said by a 32-year-old mother of four children and a 22-year-old primipara, respectively: \"I told my husband on the second day following my testing positive because he had disclosed to me his serostatus and was openly taking his HIV drugs\". \"The health worker always reminded and encouraged me whenever we met and I got the boldness to tell my husband\".\nThe socio-demographic characteristics in relation to disclosure\nFrequency and timing of disclosure among women attending antenatal clinic at Mbarara Regional Referral Hospital\nFactors that motivated disclosure among pregnant women who disclosed their HIV sero-status, Mbarara Hospital, Uganda (n = 88)\nReasons for non-disclosure: these included fear of abandonment (32%), fear of being beaten (32%), loss of financial support (12%), stigmatization (12%) and loss of emotional support (6.7%), while others thought that disclosure was not necessary (6.7%). This information from the questionnaires was supported by the focus group discussions, where fear of death, divorce, being beaten, job denial and ignorance of the importance of disclosure were reported as some of the barriers to disclosure. Some of the responses included the following: \"Knowing that he easily gets upset by small things and begins fighting, if I tell him about my status will he not beat me to death? His brother beat his wife seriously when he found that she was positive, and he is now in prison,\" said a 29-year-old primary school teacher. \"Can't I live with my disease without bothering people by telling them of my issues? 
I feel comfortable that way,\" reported a 40-year-old prisons warder and mother of 6 children. \"I am looking for a job right now and if probable employers get to know that I am positive, they may deny me a job. I will reveal my status when I have a job,\" reported a 34-year-old mother of 3 children.\nPost-disclosure experiences: The majority of the women who disclosed were comforted (73.9%). However, negative outcomes included accusation of infidelity (24.9%); others were verbally abused (6.8%), some were beaten (5.7%) and a few were chased out of their homes by their husbands and the relatives of their husbands (2.3%). The information from the focus group discussions lent further credence to that gathered from the questionnaires, with respondents reporting increased support and comforting, as said by a 25-year-old mother of 3: \"When I told my mother that I was HIV positive, she was so sad but later comforted me and promised to give me all the support I needed\". She had also told her partner: \"My partner pledged his support and continued love till death do us part. He has always reminded me to take my drugs and goes with me to hospital during my clinic days\". On the negative side, some mothers reported financial loss, being beaten, denial of conjugal rights, divorce and stigma as some of the outcomes of having disclosed their serostatus. Some of their responses included the following: \"When I told my partner, he beat me that night and locked me in the house for two days though he came back to his senses and stopped harassing me,\" reported an 18-year-old primipara.\nFactors associated with disclosure: One factor associated with disclosure was age between 26 and 35 years; this age group had a 3.9-fold increase in the odds of disclosure compared to those in the 18-25 year age category (OR 3.9, 95% CI 1.03-15.16). Primary education was associated with a 3.5-fold increase in the odds of disclosure (OR 3.53, 95% CI 1.10-11.31) compared to post-primary education. 
Urban dwelling was associated with an over 4-fold increase in the odds of disclosure compared to rural dwelling (OR 4.22, 95% CI 1.27-14.01) (Table 4).\nFactors associated with disclosure of HIV among pregnant women attending antenatal clinic at Mbarara Regional Referral Hospital", "About 85% of the participants had disclosed their HIV serostatus to at least one person and, of these, 57% had disclosed to their partners. Almost 80% had disclosed within less than 2 months of testing HIV positive. Women disclosed their serostatus because their partners had disclosed to them (27.3%), their partners were caring (27.3%) and health workers had encouraged them to disclose (25.0%). Following disclosure, the majority were comforted (73.9%) while others (6.8%) were verbally abused. Those who did not disclose feared abandonment (33.3%), being beaten (33.3%) and loss of financial and emotional support (13.3%). The factors associated with disclosure included age group 26-35 years, primary education level and urban residence. A study in Dar es Salaam, Tanzania, interviewing HIV positive women in an ANC clinic about disclosure to their partners found that 69% had disclosed to their partners [20]. The overall HIV status disclosure to sexual partners in a study in Ethiopia was 57.4%, and that study showed a significant association between disclosure and knowing the HIV status of the sexual partner [21]. The rate of disclosure to partners in our study was 57%. These rates of disclosure to partners are almost similar, probably because the settings were almost the same and the populations studied were from low resource settings with similar socio-economic and demographic characteristics.\nNegative disclosure outcomes and reasons for non-disclosure: In our study, women who disclosed were accused of infidelity; others were verbally abused, beaten, and a few were chased out of their homes by their husbands and the relatives of their husbands. 
The women feared abandonment, being beaten, loss of financial support, stigmatization and loss of emotional support, and others did not know the importance of disclosure. The three commonest reasons for non-disclosure in other studies are fear regarding spread of the information, stigmatization and deterioration in the relationship with the spouse [22]. This could be because this group of women considered the disclosure process too difficult and risky to undertake and engaged in avoidant behaviors to hide their HIV status. Women who do not disclose their HIV status to their sexual partners sometimes do not practice safer sex, especially condom use, and it is possible that this group of women may be more likely to have re-infection [23]. The negative outcomes may lead women to choose not to share their HIV test results with their friends, family and sexual partners. This, in turn, leads to lost opportunities for the prevention of new infections and for the ability of these women to access appropriate treatment, care and support services where they are available.\nPositive disclosure outcomes: Most women who disclosed their HIV serostatus in our study were comforted and were able to participate in HIV treatment programs. Disclosure of HIV status expands the awareness of HIV risk to untested partners, which can lead to greater uptake of voluntary HIV testing and counseling and changes in HIV risk behaviors. In addition, disclosure of HIV status to sexual partners enables couples to make informed reproductive health choices that may ultimately lower the number of unintended pregnancies among HIV-positive women [9]. Among women who disclose their HIV serostatus to their families, friends and sex partners, the incidence of regret is minimal, and disclosure improves relationship satisfaction and security [24]. 
Disclosure is necessary to initiate discussions about HIV/AIDS; this raises each partner's awareness of the risk of infection and may ultimately lead to behavior change that reduces risk. Disclosure can be an important starting point for HIV positive women to begin discussing the use of contraception with their partners and to reduce the number of unintended pregnancies among HIV infected women. Disclosure helps women's uptake of PMTCT programs and their participation in treatment and support programs. To benefit from interventions that can reduce HIV perinatal transmission, women who are HIV infected must be willing to accept and adhere to PMTCT prophylaxis. Optimal uptake of and adherence to PMTCT programs is difficult for women whose partners are either unaware of or not supportive of their participation. It is well documented in Africa that women often lack the power to make independent decisions about their own health care. It is therefore difficult for HIV infected women to seek social and medical support from care and treatment programs for themselves and their infants without first disclosing their HIV serostatus to their partners [25].\nFactors associated with disclosure: In our study, the reasons given by the women for disclosure were the partner having disclosed his status first, the partner being caring, encouragement by the health worker, and wanting to practice safer sex. The factors associated with disclosure included being an urban dweller and being older than 25 years. This may be because of easier access to information and treatment opportunities compared to people living in rural areas. It could also be because those who are older than 25 years are more likely to have spent a longer time in relationships and thus built trust over time, resulting in a higher chance of having disclosed compared to younger women. 
The older women are more likely to have been pregnant more times than those younger than 26 years, and this could have exposed them to more information about disclosure, making them more likely to disclose. Disclosure is associated with being married, increased condom use, knowledge of the partner's HIV serostatus, late disease stage, staying together with the partner, discussion about HIV testing before going for testing, having secondary education, age of more than 25 years and attending more antenatal care visits [26, 27]. Other factors associated with disclosure include having the communication skills to disclose, having initiated anti-retroviral therapy, receiving ongoing counselling, having ever seen an HIV infected person publicly disclose their HIV status, being married, knowing the importance of HIV serostatus disclosure and being employed [28, 29]. The strength of our study is that it employed mixed methods to capture multiple aspects of disclosure among a vulnerable group of women. 
The weakness of our study is its small sample size.", "There is a heightened need to emphasize the importance of disclosure to enable increased participation in treatment and support programs, and to find ways of minimizing the negative consequences and optimizing the positive outcomes of disclosure of HIV status.\n What is known about this topic That disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy;\nHIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution;\nDisclosure is a complex process that requires delicate handling to prevent occurrence of unwanted effects.\n What this study adds The disclosure proportions among HIV-infected pregnant women in Mbarara Hospital;\nThe factors associated with disclosure among HIV-infected pregnant women in Mbarara Hospital;\nThe reasons for non-disclosure among HIV-infected pregnant women at Mbarara Hospital.", "That disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy;\nHIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution;\nDisclosure is a complex process that requires delicate handling to prevent occurrence of unwanted effects.", "The disclosure proportions among HIV-infected pregnant women in Mbarara Hospital;\nThe factors associated with disclosure among HIV-infected pregnant women in Mbarara Hospital;\nThe reasons for non-disclosure among HIV-infected pregnant women at Mbarara Hospital.", "The authors declare no competing interests." ]
[ "intro", "methods", "results", "discussion", "conclusion", null, null, "COI-statement" ]
[ "Disclosure", "HIV/AIDS", "Mbarara University", "Mbarara hospital", "Uganda", "factors associated" ]
Introduction: HIV/AIDS remains a major public health problem and more effort is needed to ensure successful treatment and prevention programs. Disclosure is an important component in the uptake of prevention of mother-to-child transmission (PMTCT) services [1]. Women are counseled to share their own HIV test result with their partner and to encourage their partner to undertake HIV testing. Dialogue about sexual activity or HIV/AIDS within a couple is often difficult, especially when women discover that they are HIV-infected [2]. Among postpartum mothers, disclosure is associated with adherence to safer infant feeding practices (exclusive breastfeeding or exclusive replacement feeding), creates awareness about HIV risk for untested sexual partners, supports risk reduction, promotes safer sexual behavior, increases retention in eMTCT programs, and leads to better clinical outcomes such as CD4+ count increases and higher rates of retention in treatment programs [3-12]. Studies also show that disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy [13]. Despite these potential benefits, studies indicate that the frequency of non-disclosure remains relatively high. About 25% of HIV-infected patients do not disclose their HIV serostatus to their partners [14]. The proportion of non-disclosure among HIV positive pregnant women attending antenatal care is even larger, with only about 60% disclosing their serostatus to their partners [15]. Disclosure is a complex process that requires delicate handling to prevent the occurrence of unwanted effects [16]. HIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution [17, 18]. 
Therefore, the purpose of this study was to determine the frequency of disclosure, the factors associated with disclosure, and the potential barriers and facilitators to disclosure of HIV serostatus by pregnant women attending the antenatal clinic at Mbarara Regional Referral Hospital (MRRH), South Western Uganda. Methods: Study site: The study was conducted among HIV positive pregnant women attending the Antenatal Care clinic (ANC) at Mbarara Regional Referral Hospital. The antenatal clinic sees about 700 women per month; at least 90% of women in Uganda attend at least one antenatal visit during their pregnancy. The antenatal HIV prevalence in Uganda is 6.4%, while the prevalence in the Mbarara Hospital maternity ward is about 12% [19]. Mbarara Hospital is both a teaching hospital for Mbarara University Medical School and a Regional Referral Hospital located within Mbarara Municipality, Mbarara district, in the Western region of Uganda, about 270 km from the capital city Kampala. The facility provides care for a diverse ethnic group of patients from the region, including some parts of the Democratic Republic of Congo, Rwanda and the Northern part of Tanzania. Study design: This was a cross sectional study using both quantitative and qualitative methods. The quantitative method involved interviewer-administered pre-tested questionnaires, while the qualitative method involved two focus group discussions (FGDs): one with eight HIV positive women who had disclosed their serostatus and one with eight HIV positive women who had not disclosed. Sample size and sampling procedure: The sample size of 103 women was calculated using the Kish formula (Kish, 1965), based on the assumption that 20% of women attending the antenatal clinic would not disclose their HIV results to anyone. Using an error margin of 7.5%, an estimated 103 women were sufficient to answer the study objectives. 
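The Kish single-proportion sample-size calculation referenced above, n = z²·p·(1−p)/d², can be sketched as follows. This is a minimal illustration: the z-value of 1.96 (95% confidence) is an assumption, since the paper does not state the confidence level or any correction applied, which may explain the small difference from the reported 103.

```python
import math

def kish_sample_size(p: float, d: float, z: float = 1.96) -> int:
    """Single-proportion sample size: n = z^2 * p * (1 - p) / d^2.

    p: anticipated proportion (here, 20% assumed non-disclosure)
    d: absolute margin of error (here, 7.5%)
    z: standard-normal quantile for the confidence level (1.96 ~ 95%)
    """
    n = (z ** 2) * p * (1 - p) / (d ** 2)
    return math.ceil(n)  # round up to a whole participant

# Study assumptions: 20% non-disclosure, 7.5% error margin
print(kish_sample_size(p=0.20, d=0.075))  # -> 110
```

With z = 1.96 the formula gives about 110, close to the reported 103; a slightly different z or rounding convention would close the gap.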
The HIV positive pregnant women attending the MRRH ANC were consecutively recruited until the desired number of 103 was achieved, while every 5th respondent during the quantitative survey was also requested to participate in a focus group discussion until the required number of eight participants per group was reached. Data collection and instruments: The HIV positive mothers were received at the registration desk with the rest of the pregnant women. They were identified using the HIV status codes on their ANC charts. The clinician handed them over to a research assistant in a separate room away from the routine antenatal clinic activities. The research assistant provided the mothers with information about the study and sought their consent to participate. Quantitative data were collected using interviewer-administered, pre-coded, pre-tested questionnaires to determine: the socio-demographic variables such as age, marital status, residence, employment, education level, nature of domicile, religion, parity and tribe; and the disclosure status (disclosed or not disclosed), to whom and when disclosure was made, and the outcomes of disclosure. The questionnaire was pre-tested and its translation double-checked. The questionnaire was piloted, and necessary adjustments were made. The data were cleaned. The focus group discussions were recorded using a voice recorder. Data entry and analysis: Quantitative data were entered into the EPI-INFO program and analyzed using the Statistical Package for the Social Sciences (SPSS version 12). Categorical data were summarized into frequencies or proportions. The socio-demographic characteristics of women who disclosed their HIV serostatus were analyzed. The primary outcome for the quantitative analysis was disclosure. 
The association between social, economic and demographic categorical variables and disclosure status was assessed using binary logistic regression analysis, and an association was considered significant if the p-value was less than 0.05. The qualitative data were transcribed verbatim, categories were created with supporting evidence from the responses, and the data were coded using thematic content analysis. Ethical consideration: The work was presented to the Department of Obstetrics and Gynecology at Mbarara University and ethical approval was sought from the faculty research ethics committee. Individual consent was obtained from the participants enrolled. Only participants who voluntarily agreed to participate in the study were interviewed. Results: The majority of the participants were between the ages of 18 and 35 years (94.2%), Christian (89.3%), had primary education (56.3%), were married either monogamously or polygamously (94.1%), were multiparous (61.2%), lived in a nuclear family setting (71.8%), were unemployed (59.2%) and had a monthly income of between 50,000 and 100,000 Uganda shillings (42.7%). Among the respondents who were below 18 years, 83.3% had disclosed their serostatus. Disclosure among those with no formal education was 100% (Table 1). Persons disclosed to included partners (57%), parents (25%), friends (9%), relatives (6%) and siblings (3%). Out of the 103 respondents, 88 (85.4%) had disclosed to at least one person. About seventy-nine percent (79%) had disclosed within less than 2 months of testing positive, while 9.1% had disclosed after 6 or more months of having tested positive (Table 2). One of the respondents in the focus group discussions reported having disclosed within seven days: "I told my husband on the second day following my testing positive because he had disclosed to me his serostatus and was openly taking his HIV drugs," said a 30-year-old mother of three. 
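The binary logistic regression analysis described above reports associations as odds ratios with 95% confidence intervals. For a single binary exposure, the same quantities can be computed directly from a 2×2 table using the Woolf (log-based) interval; a minimal sketch (the counts below are hypothetical, for illustration only, and are not the study's data):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Woolf (log-based) 95% CI for a 2x2 table:

                    disclosed   not disclosed
    exposed             a             b
    unexposed           c             d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: urban vs rural residence against disclosure status.
or_, lower, upper = odds_ratio_ci(a=55, b=8, c=33, d=7)
print(f"OR = {or_:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

A confidence interval that excludes 1 corresponds to the p < 0.05 significance criterion used in the analysis.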
Most women who disclosed their serostatus were encouraged by health care workers, had partners who were caring, or had partners who had disclosed to them (Table 3). This was further supported by information gathered from the focus group discussions, where participants reportedly disclosed because their partners had disclosed to them and some had been encouraged to do so by the health workers, as said by a 32-year-old mother of four children and a 22-year-old primipara, respectively: "I told my husband on the second day following my testing positive because he had disclosed to me his serostatus and was openly taking his HIV drugs". "The health worker always reminded and encouraged me whenever we met and I got the boldness to tell my husband". The socio-demographic characteristics in relation to disclosure Frequency and timing of disclosure among women attending antenatal clinic at Mbarara Regional Referral Hospital Factors that motivated disclosure among pregnant women who disclosed their HIV sero-status, Mbarara Hospital, Uganda (n = 88) Reasons for non-disclosure: these included fear of abandonment (32%), fear of being beaten (32%), loss of financial support (12%), stigmatization (12%) and loss of emotional support (6.7%), while others thought that disclosure was not necessary (6.7%). This information from the questionnaires was supported by the focus group discussions, where fear of death, divorce, being beaten, job denial and ignorance of the importance of disclosure were reported as some of the barriers to disclosure. Some of the responses included the following: "Knowing that he easily gets upset by small things and begins fighting, if I tell him about my status will he not beat me to death? His brother beat his wife seriously when he found that she was positive, and he is now in prison," said a 29-year-old primary school teacher. "Can't I live with my disease without bothering people by telling them of my issues? 
I feel comfortable that way," reported a 40-year-old prisons warder and mother of 6 children. "I am looking for a job right now and if probable employers get to know that I am positive, they may deny me a job. I will reveal my status when I have a job," reported a 34-year-old mother of 3 children. Post-disclosure experiences: The majority of the women who disclosed were comforted (73.9%). However, negative outcomes included accusation of infidelity (24.9%); others were verbally abused (6.8%), some were beaten (5.7%) and a few were chased out of their homes by their husbands and the relatives of their husbands (2.3%). The information from the focus group discussions lent further credence to that gathered from the questionnaires, with respondents reporting increased support and comforting, as said by a 25-year-old mother of 3: "When I told my mother that I was HIV positive, she was so sad but later comforted me and promised to give me all the support I needed". She had also told her partner: "My partner pledged his support and continued love till death do us part. He has always reminded me to take my drugs and goes with me to hospital during my clinic days". On the negative side, some mothers reported financial loss, being beaten, denial of conjugal rights, divorce and stigma as some of the outcomes of having disclosed their serostatus. Some of their responses included the following: "When I told my partner, he beat me that night and locked me in the house for two days though he came back to his senses and stopped harassing me," reported an 18-year-old primipara. Factors associated with disclosure: One factor associated with disclosure was age between 26 and 35 years; this age group had a 3.9-fold increase in the odds of disclosure compared to those in the 18-25 year age category (OR 3.9, 95% CI 1.03-15.16). Primary education was associated with a 3.5-fold increase in the odds of disclosure (OR 3.53, 95% CI 1.10-11.31) compared to post-primary education. 
Urban dwelling was associated with an over 4-fold increase in the odds of disclosure compared to rural dwelling (OR 4.22, 95% CI 1.27-14.01) (Table 4). Factors associated with disclosure of HIV among pregnant women attending antenatal clinic at Mbarara Regional Referral Hospital Discussion: About 85% of the participants had disclosed their HIV serostatus to at least one person and, of these, 57% had disclosed to their partners. Almost 80% had disclosed within less than 2 months of testing HIV positive. Women disclosed their serostatus because their partners had disclosed to them (27.3%), their partners were caring (27.3%) and health workers had encouraged them to disclose (25.0%). Following disclosure, the majority were comforted (73.9%) while others (6.8%) were verbally abused. Those who did not disclose feared abandonment (33.3%), being beaten (33.3%) and loss of financial and emotional support (13.3%). The factors associated with disclosure included age group 26-35 years, primary education level and urban residence. A study in Dar es Salaam, Tanzania, interviewing HIV positive women in an ANC clinic about disclosure to their partners found that 69% had disclosed to their partners [20]. The overall HIV status disclosure to sexual partners in a study in Ethiopia was 57.4%, and that study showed a significant association between disclosure and knowing the HIV status of the sexual partner [21]. The rate of disclosure to partners in our study was 57%. These rates of disclosure to partners are almost similar, probably because the settings were almost the same and the populations studied were from low resource settings with similar socio-economic and demographic characteristics. Negative disclosure outcomes and reasons for non-disclosure: In our study, women who disclosed were accused of infidelity; others were verbally abused, beaten, and a few were chased out of their homes by their husbands and the relatives of their husbands. 
The women feared abandonment, being beaten, loss of financial support, stigmatization and loss of emotional support, and some did not know the importance of disclosure. The three commonest reasons for non-disclosure in other studies are fear regarding spread of the information, stigmatization and deterioration in the relationship with the spouse [22]. This could be because this group of women considered the disclosure process too difficult and risky to undertake and engaged in avoidant behaviors to hide their HIV status. Women who do not disclose their HIV status to their sexual partners sometimes do not practice safer sex, especially condom use, and it is possible that this group of women may be more likely to experience re-infection [23]. The negative outcomes may lead women to choose not to share their HIV test results with their friends, family and sexual partners. This, in turn, leads to lost opportunities for the prevention of new infections and for these women to access appropriate treatment, care and support services where they are available. Positive disclosure outcomes: Most women who disclosed their HIV serostatus in our study were comforted and are now able to participate in HIV treatment programs. Disclosure of HIV status expands the awareness of HIV risk to untested partners, which can lead to greater uptake of voluntary HIV testing and counseling and changes in HIV risk behaviors. In addition, disclosure of HIV status to sexual partners enables couples to make informed reproductive health choices that may ultimately lower the number of unintended pregnancies among HIV-positive women [9]. Among women who disclose their HIV serostatus to their families, friends and sex partners, the incidence of regret is minimal, and disclosure improves relationship satisfaction and security [24].
Disclosure is necessary to initiate discussions about HIV/AIDS; this raises each partner's awareness of the risk of infection and may ultimately lead to behavior change for risk reduction. Disclosure can be an important starting point for HIV positive women to begin discussing the use of contraception with their partners and can reduce the number of unintended pregnancies among HIV infected women. Disclosure also helps women's uptake of PMTCT programs and their participation in treatment and support programs. To benefit from interventions that can reduce perinatal HIV transmission, women who are HIV infected must be willing to accept and adhere to PMTCT prophylaxis. Optimal uptake of and adherence to PMTCT programs is difficult for women whose partners are either unaware of or not supportive of their participation. It is well documented in Africa that women often lack the power to make independent decisions about their own health care. It is therefore difficult for HIV infected women to seek social and medical support from care and treatment programs for themselves and their infants without first disclosing their HIV serostatus to their partners [25]. Factors associated with disclosure: In our study, the reasons given by the women for disclosure were the partner having disclosed their status first, the partner being caring, encouragement by the health worker and wanting to practice safer sex. The factors associated with disclosure included being an urban dweller and being older than 25 years. Urban dwellers may have easier access to information and treatment opportunities than people living in rural areas. Those older than 25 years are also more likely to have spent a longer time in relationships and thus built trust over time, resulting in a higher chance of having disclosed compared to the younger women.
The older women are also more likely to have been pregnant more times than those younger than 26 years, which could have exposed them to more information about disclosure and made them more likely to disclose. Disclosure is associated with being married, increased condom use, knowledge of the partner's HIV serostatus, late disease stage, staying together with the partner, discussion about HIV testing before going for testing, having secondary education, age of more than 25 years and attending more antenatal care visits [26, 27]. Other factors associated with disclosure include having the communication skills to disclose, having initiated anti-retroviral therapy, receiving ongoing counselling, having ever seen an HIV infected person publicly disclose their HIV status, being married, knowing the importance of HIV serostatus disclosure and being employed [28, 29]. The strength of our study is that it employed mixed methods to explore multiple aspects of disclosure among a vulnerable group of women. The weakness of our study is its small sample size. Conclusion: There is a heightened need to emphasize the importance of disclosure to enable increased participation in treatment and support programs, and to find ways of minimizing the negative consequences and optimizing the positive outcomes of disclosure of HIV status. What is known about this topic: Disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy; HIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution; Disclosure is a complex process that requires delicate handling to prevent occurrence of unwanted effects.
What this study adds: The disclosure proportions among HIV-infected pregnant women in Mbarara Hospital; The factors associated with disclosure among HIV-infected pregnant women in Mbarara Hospital; The reasons for non-disclosure among HIV-infected pregnant women at Mbarara Hospital. Competing interests: The authors declare no competing interests.
Background: Positive HIV results disclosure plays a significant role in the successful prevention and care of HIV infected patients. It provides significant social and health benefits to the individual and the community. Non-disclosure is one of the contextual factors driving the HIV epidemic in Uganda. Study objectives: to determine the frequency of HIV disclosure, associated factors and disclosure outcomes among HIV positive pregnant women at Mbarara Hospital, southwestern Uganda. Methods: A cross-sectional study using quantitative and qualitative methods was done among a group of HIV positive pregnant women attending antenatal clinic, with consecutive sampling. Results: A total of 103 participants were recruited, of whom 88 (85.4%) had disclosed their serostatus, with 57% disclosing to their partners. About 80% had disclosed within less than 2 months of testing HIV positive. Reasons for disclosure included their partners having disclosed to them (27.3%), caring partners (27.3%) and encouragement by health workers (25.0%). Following disclosure, 74% were comforted and 6.8% were verbally abused. Reasons for non-disclosure were fear of abandonment (33.3%), being beaten (33.3%) and loss of financial and emotional support (13.3%). The factors associated with disclosure were age 26-35 years (OR 3.9, 95% CI 1.03-15.16), primary education (OR 3.53, 95% CI 1.10-11.307) and urban dwelling (OR 4.22, 95% CI 1.27-14.01). Conclusions: Participants disclosed mainly to their partners, were mostly comforted, and many were encouraged by health workers. There is a need to optimize the merits of disclosure to enable increased participation in treatment and support programs.
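The odds ratios and 95% confidence intervals reported in this study (e.g., OR 4.22, 95% CI 1.27-14.01 for urban dwelling) come from the standard 2x2-table calculation with a normal approximation on the log odds ratio. A minimal sketch, using hypothetical cell counts purely for illustration (the study's actual counts are in its Table 4):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table (Woolf method):
         a = exposed & disclosed,   b = exposed & not disclosed,
         c = unexposed & disclosed, d = unexposed & not disclosed.
    The CI is exp(log(OR) +/- z * SE), with SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts for illustration only (not the study's raw data):
# 30 urban women disclosed and 5 did not; 40 rural women disclosed and 28 did not.
print(odds_ratio_ci(30, 5, 40, 28))
```

With these made-up counts the function returns an OR of 4.2 with a CI of roughly 1.45 to 12.2, the same shape of estimate as the urban-dwelling result reported above.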
Introduction: HIV/AIDS remains a major public health problem and more effort is needed to ensure successful treatment and prevention programs. Disclosure is an important component in uptake of prevention of mother-to-child transmission (PMTCT) services [1]. Women are counseled to share their own HIV test result with their partner and become responsible for encouraging the partner to undertake HIV testing. Dialogue about sexual activity or HIV/AIDS within a couple is often difficult, especially when women discover that they are HIV-infected [2]. Among postpartum mothers, disclosure is associated with adherence to safer infant feeding practices (exclusive breast feeding or exclusive replacement feeding), creates awareness of HIV risk for untested sexual partners, supports risk reduction, promotes safer sexual behavior, increases retention in eMTCT programs and leads to better clinical outcomes such as CD4+ count increases and higher rates of retention in treatment programs [3-12]. Studies also show that disclosure is a potential strategy for dealing with stigma among patients receiving antiretroviral therapy [13]. Despite these potential benefits, studies indicate the frequency of non-disclosure remains relatively high: about 25% of HIV infected patients do not disclose their HIV serostatus to their partners [14], and among HIV positive pregnant women attending antenatal care, only about 60% disclose to their partners [15]. Disclosure is a complex process that requires delicate handling to prevent occurrence of unwanted effects [16]. HIV status disclosure can be a period of heightened risk for partner stigma, abuse and financial withdrawal and thus should be handled with caution [17, 18].
Therefore, the purpose of this study was to determine the frequency, factors associated with disclosure and the potential barriers and facilitators to disclosure of HIV serostatus by pregnant women attending antenatal clinic at Mbarara Regional Referral Hospital (MRRH), South Western Uganda.
Evodiamine protects against airway remodelling and inflammation in asthmatic rats by modulating the HMGB1/NF-κB/TLR-4 signalling pathway.
PMID: 33577738
Context: Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), possesses strong anti-inflammatory, immunomodulatory, and antibacterial properties.
Materials and methods: Thirty-two Sprague-Dawley (SD) rats were used; asthma was induced by injecting them intraperitoneally with a mixture of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg) and then exposing them to a 2% OA aerosol for 1 week. The animals were divided into four groups: control, asthma, and evodiamine-treated at 40 and 80 mg/kg p.o. Serum levels of inflammatory cytokines, interferon gamma (IFN-γ), and immunoglobulin E (IgE), and the infiltration of inflammatory cells in the bronchoalveolar lavage fluid (BALF), were determined. The thickness of the smooth muscle layer and airway wall in the intact small bronchioles of asthmatic rats was examined as well.
Results: Cytokine levels in the serum and BALF were lower in the evodiamine-treated groups than in the asthma group. Evodiamine treatment reduced IgE levels, restored IFN-γ levels, and reduced the inflammatory cell infiltrate in the lung tissue of asthmatic rats. The thickness of the smooth muscle layer and airway wall of intact small bronchioles was less in the evodiamine-treated groups than in the asthma group. Lower levels of TLR-4, MyD88, NF-κB, and HMGB1 mRNA were measured in the lung tissue of the evodiamine-treated groups than in the asthma group.
Discussion and conclusion: Evodiamine protects against asthma by reducing airway inflammation and remodelling in the lung tissue through downregulation of the HMGB1/NF-κB/TLR-4 pathway.
MeSH terms: Airway Remodeling; Animals; Anti-Inflammatory Agents; Asthma; Bronchoalveolar Lavage Fluid; Cytokines; Disease Models, Animal; Dose-Response Relationship, Drug; Evodia; HMGB1 Protein; Inflammation; NF-kappa B; Quinazolines; Rats; Rats, Sprague-Dawley; Signal Transduction; Toll-Like Receptor 4
PMCID: 7889089
Introduction
Asthma is a respiratory disorder diagnosed by pathophysiological features such as hyper-responsiveness, obstruction, remodelling, and inflammation of the airway. Some 339 million people suffer from asthma worldwide (Dharmage et al. 2019). Among the physiological pathways involved in the development of asthma is an imbalance in Th1/Th2 responses (Durrant and Metzger 2010). Th2 activation enhances the production of interleukins, leading to the development of several pathological conditions, including allergies and asthma. Allergic asthma occurs in response to enhanced production of immunoglobulin E (IgE) by B cells following an increased release of cytokines (Galli and Tsai 2012). NF-κB, a potent inflammatory mediator, enhances the production of cytokines in several disorders, including asthma (Liu et al. 2017). Phosphorylation of IκB activates the NF-κB p65 subunit, which is responsible for increased production of cytokines and promotion of the inflammatory cascade (Christian et al. 2016). In addition, HMGB1, an immune system protein involved in the regulation of cell survival and death, has been implicated in sepsis, lung injury, and asthma. HMGB1 binds to toll-like receptors (TLRs), which stimulates the release of cytokines through inflammatory cells (Qu et al. 2019). HMGB1 triggers downstream signals through MyD88-dependent or non-MyD88-dependent signalling by activating TLR-4, which enhances the secretion of cytokines via the NF-κB pathway (Azam et al. 2019). Cytokines and activation of the inflammatory cascade contribute to both the allergic reaction and the remodelling of lung tissue. Airway remodelling in asthmatic patients is induced by increased infiltration of T lymphocytes (LYM) and eosinophils (EOS) (Fehrenbach et al. 2017). Persistent hyper-responsiveness of the airway together with remodelling causes irreversible obstruction, which in turn depresses respiratory function.
The medication currently available to treat asthma offers only symptomatic relief and has several limitations; thus, new approaches to the management of asthma are needed. Several reports have suggested that asthma can be managed by targeting the remodelling and inflammation of the airway (Bergeron et al. 2010). There is increasing interest in molecules of natural origin as therapeutic agents. Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), is used in traditional Chinese medicine to treat cardiovascular disorders, infection, inflammation, and obesity (Liao et al. 2011). It also exhibits anticancer activity by regulating the TGF-β1 and NF-κB pathways and thereby controls the growth of several types of cancer cells (Jiang and Hu 2009). Anti-inflammatory activity, including innate immunity against bacterial infection, and antiulcer activity have also been demonstrated for evodiamine via its regulation of the inflammatory cascade and inflammasomes (Li et al. 2019; Yu et al. 2013). Thus, the present study evaluated the protective effects of evodiamine against asthma.
Results
Evodiamine reduces the cytokine level in serum and BALF: Cytokine levels in the BALF and serum of evodiamine-treated asthmatic rats are shown in Figure 1(A,B). Serum IL-4, IL-9, and IL-13 levels were significantly (p < 0.01) enhanced in the asthma group compared with the control group, and were significantly (p < 0.01) reduced in the evodiamine-treated group compared with the asthma group (Figure 1(A)). Levels of IL-4, IL-13, and IL-17 in BALF were likewise increased in the asthma group relative to the normal group; treatment of the asthmatic rats with evodiamine prevented this increase relative to the asthma group (Figure 1(B)).
Evodiamine reduces the thickness of smooth muscle and the airway wall: Isolated intact small bronchioles of evodiamine-treated and untreated asthmatic rats are shown in Figure 2. The thickness of the airway wall and smooth muscle layer was greater in the lung tissue of the asthma group than in the control group. Treatment with evodiamine significantly (p < 0.01) reduced the thickness of both the airway wall and the smooth muscle layer in the lung tissue of asthmatic rats. Figure 2: Evodiamine reduces the thickness of airway smooth muscle and the airway wall in intact small bronchioles of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Evodiamine attenuates biochemical parameters: Levels of mediators that modulate the inflammatory reaction, including IFN-γ and IgE, were measured in lung tissue homogenates. In the lung tissue of asthmatic rats, IFN-γ was significantly (p < 0.01) decreased and IgE increased relative to the normal control group. Evodiamine treatment increased the level of IFN-γ and significantly reduced the level of IgE in the lung tissue compared to the asthma group (Figure 3). Figure 3: Effects of evodiamine on IFN-γ and IgE levels in lung tissue homogenates of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Evodiamine attenuates the infiltration of inflammatory cells: We evaluated the infiltration of inflammatory cells by determining the levels of white blood cells (WBC), EOS, and LYM in the BALF of the four groups of rats (Figure 4). WBC, EOS, and LYM counts were higher in the BALF of the asthma group than in the normal group, whereas they were significantly (p < 0.01) decreased in the BALF of evodiamine-treated asthmatic rats. Figure 4: Effects of evodiamine on the infiltration of inflammatory cells in the BALF of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Evodiamine attenuates expression of NF-κB and HMGB1 protein: Expression of NF-κB protein in the lung tissue of the rats is shown in Figure 5(A,B). Expression was higher in the asthma group than in the normal group, but it was reduced following treatment with evodiamine. Figure 5: Immunohistochemical analyses of the expression of NF-κB protein in the lung tissue of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Evodiamine attenuates mRNA expression of TLR-4, MyD88, NF-κB, and HMGB1: TLR-4, MyD88, NF-κB, and HMGB1 mRNA expression was measured in rat lung tissue (Figure 6). Levels of all four mRNAs were significantly enhanced in the lung tissue of the asthma group compared to the normal group, but were reduced in the lung tissue of the evodiamine-treated group compared to the untreated asthmatic group. Figure 6: Expression of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in the lung tissue of evodiamine-treated asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Evodiamine attenuates histopathological changes in lung tissue: Histopathological changes in the lung tissue of rats treated or not treated with evodiamine are shown in Figure 7(A,B). Inflammation was estimated from histopathological scores determined on H&E-stained sections (Figure 7(A)). The histopathological score was higher in the asthma group than in the normal group, but the increase was reversed by treatment with evodiamine. The number of goblet cells in lung tissue was estimated by PAS staining (Figure 7(B)). We found a higher PAS score in the asthma group than in the normal group, but a dose-dependent reduction in PAS score in the evodiamine-treated groups versus the asthma group. Figure 7: Histopathological changes in the lung tissue of evodiamine-treated asthmatic rats. (A) H&E staining of lung tissue and histopathological score. (B) PAS staining of lung tissue and PAS score. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Effects of evodiamine on the TLR-4 protein: Given the in vivo and in vitro findings, we performed a molecular docking study using BLAST and homology modelling followed by ligand and protein preparation. Molecular docking simulation of the ligand evodiamine was performed with the AutoDock Kollman and Gasteiger charge functions for the binding protein and the ligand. The docking results showed a high binding affinity of evodiamine for the in vivo confirmed TLR-4 protein, a result supported by the high binding energies (Table 1: Docking scores for the TLR-4 protein with the ligand molecule evodiamine). The 3D structure of the protein revealed the area of ligand binding in the TLR-4 protein (Figure 8: In silico molecular docking shows the interaction of the TLR-4 protein and evodiamine; the solid area in the protein structures represents the area of interaction).
Conclusion
This report suggests that treatment with evodiamine reduces inflammation and airway remodelling in the lung tissue of asthmatic rats by downregulating the HMGB1/NF-κB/TLR-4 pathway. These data indicate that evodiamine could be explored clinically for the management of asthma.
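The 2^−ΔΔCt relative-quantification step named in the qRT-PCR methods can be illustrated with a short, self-contained sketch; the Ct values below are hypothetical placeholders, not data from this study:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method.

    The reference gene plays the role GAPDH plays in the study;
    all Ct values here are illustrative only.
    """
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical Ct values: target 24.0 vs reference 18.0 in treated,
# target 22.0 vs reference 18.0 in control -> ddCt = 2 -> fold change 0.25
print(fold_change(24.0, 18.0, 22.0, 18.0))  # 0.25
```

A fold change below 1 indicates lower expression in the treated group, which is the direction reported for TLR-4, MyD88, NF-κB, and HMGB1 after evodiamine treatment.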
[ "Animals", "Chemicals", "Experimental design and treatment protocol", "Measurement of cytokines in the serum", "Preparation of BALF and determination of biochemical parameters", "Measurement of the thickness of smooth muscle and the airway wall", "Histopathological analyses", "Immunohistochemical analyses", "qRT-PCR", "Homology model of TLR-4", "Preparation of proteins and ligand and molecular docking", "Statistical analyses", "Evodiamine reduces the cytokine level in serum and BALF", "Evodiamine reduces the thickness of smooth muscle and the airway wall", "Evodiamine attenuates biochemical parameters", "Evodiamine attenuates the infiltration of inflammatory cells", "Evodiamine attenuates expression of NF-κB and HMGB1 protein", "Evodiamine attenuates mRNA expression of TLR-4, MyD88, NF-κB, and HMGB1", "Evodiamine attenuates histopathological changes in lung tissue", "Effects of evodiamine on the TLR-4 protein" ]
[ "Sprague-Dawley rats weighing 180–225 g and housed under controlled conditions (temperature of 24 ± 3 °C and 60 ± 5% humidity) with a 12 h light/dark cycle were used in this study. All animal experiments were approved by the Institutional Animal Ethical Committee of Wuxi People’s Hospital Affiliated with Nanjing Medical University, China (IAEC/WPH-NMU/2019/25).", "Aluminium hydroxide [Al(OH)3], ovalbumin (OA), and evodiamine were procured from Sigma Aldrich Ltd., USA. ELISA kits were purchased from ThermoFisher Scientific Ltd., China, and anti-NF-κB antibodies were purchased from Abcam Ltd., USA. Total RNA Kit was purchased from Omega Bio-Tek Inc., USA.\nExperimental design and treatment protocol We induced asthma by sensitising rats with an intraperitoneal injection of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg). Fifteen days later, the rats were kept in a 0.8 m3 chamber, where they were exposed to 2% OA aerosol via an airflow of 8 L/min for 20 min/day for eight consecutive days. Rats were separated into four groups: a normal control group; an asthma group sensitised with OA but treated with normal saline solution; and two evodiamine groups, treated with evodiamine at 40 or 80 mg/kg p.o. (Shen et al. 2019). Evodiamine doses were prepared by dissolving it in DMSO (Figure 1).\nInfluence of evodiamine on cytokine levels in the serum (A) and BALF (B) of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.\nMeasurement of cytokines in the serum Rats were anaesthetized at the end of the treatment protocol and blood was drawn. Then they were euthanized by intraperitoneal injection of sodium pentobarbital (100 mg/kg). Serum was separated, and serum levels of the cytokines interleukin (IL)-4, IL-9, and IL-13 were measured with ELISA kits per the manufacturer’s instructions.\nPreparation of BALF and determination of biochemical parameters We ligated the right lung of each rat to determine histological changes and collected BALF from the left lung after cannulating the trachea. Bronchoalveolar lavage fluid (BALF) was centrifuged at 4 °C at 2500 rpm for 10 min. The pellet was resuspended in 100 µL PBS and used to determine relative and total leukocyte counts with a haemocytometer. Interferon gamma (IFN-γ), IgE, IL-4, IL-13, and IL-17 levels were measured in the supernatant with ELISA kits.\nMeasurement of the thickness of smooth muscle and the airway wall We examined isolated intact small bronchioles by light microscopy to measure the thickness of the airway wall and smooth muscle layer. Image-Pro Plus version 6.0 was used to determine the internal smooth muscle area (Wam1), external smooth muscle area (Wam2), bronchial luminal area (Wat1), total bronchial wall area (Wat2), and basement membrane perimeter (Pbm) in the bronchioles. The thicknesses of the smooth muscle layer (Wat) and airway wall (Wan) were calculated as follows:\nWan = (Wat1 − Wat2)/Pbm\nWat = (Wam1 − Wam2)/Pbm\nHistopathological analyses Lung tissue was isolated from each animal, fixed in paraformaldehyde (4%), embedded in paraffin, and sectioned at a thickness of 5 µm. The sections were stained with H&E, and their histopathology was analysed with optical microscopy. The amount of inflammatory injury to the lung tissue was determined on a scale from 0 to 4: 0: normal, 1: few cells, 2: inflammatory cells that form a ring one layer deep, 3: a 2- to 4-cell layer of inflammatory cells, 4: a ring of inflammatory cells >4 cells deep.\nWe analysed the airway epithelium for goblet cells by staining it with periodic acid-Schiff (PAS) stain. Goblet cell hyperplasia was scored based on the percentage of PAS-positive cells: 0: no goblet cells, 1: <25% goblet cells, 2: 25–50%, 3: 51–75%, 4: >75%.\nImmunohistochemical analyses Lung tissue was incubated overnight at 4 °C with anti-NF-κB antibodies and then with secondary antibody for 60 min at 37 °C. Image-Pro Plus version 6.0 was used to quantify the number of positive cells.\nqRT-PCR The relative expression of TLR-4, MyD88, NF-κB, HMGB1, and GAPDH mRNA was estimated with SYBR green-based qRT-PCR. Total RNA was extracted with TRIzol reagent and then subjected to TaqMan MicroRNA assays. cDNA was synthesised from 2 µg total RNA (20 µL) with Moloney murine leukaemia virus reverse transcriptase. An ABI Prism 7500 system (Applied Biosystems, Foster City, CA, USA) was used with a SYBR green/fluorescein qPCR Master Mix kit (Thermo Fisher Scientific) under the following conditions: 50 °C for 2 min; 95 °C for 10 min; then 40 cycles of 95 °C for 30 s and 60 °C for 30 s. Relative gene expression was calculated for each gene with the 2^−ΔΔCt method.\nHomology model of TLR-4 We obtained the TLR-4 protein sequence from the NCBI database and used it to prepare a homology model using the SWISS-Model server. UniProt was used to analyse the sequence of the TLR-4 protein. The target sequences were matched against the primary amino acid sequence in a BLAST search. The best-quality templates were selected to build the homology model.\nPreparation of proteins and ligand and molecular docking The 2D structure of evodiamine was obtained from the PubChem database and converted into a .pdb file with Open Babel. AutoDock Vina MGL tools were used to prepare the ligand for the docking study by removing the water molecules, adding hydrogens, and modulating the charges according to the Kollman approach. A PDBQT file was thus obtained. We performed a molecular docking simulation of the ligand evodiamine using the AutoDock Kollman and Gasteiger functions for both the ligand and the target protein. The grid map was created with AutoGrid 4. The grid box was prepared, and we defined the area of protein structure to be mapped by providing the coordinates. The grid box dimensions (x, y, and z coordinates) for TLR-4 were 111.168929, −8.862438, and −4.180014. The Lamarckian genetic algorithm was used for energy minimisation and optimisation in the docking simulation process.\nStatistical analyses All data are expressed as means ± standard deviations (n = 8). The statistical analyses consisted of one-way analyses of variance (ANOVAs). Post hoc comparisons of the means were performed with Dunnett’s post hoc test in GraphPad Prism (ver. 6.1; San Diego, CA, USA). p < 0.05 was considered to indicate statistical significance.", "We induced asthma by sensitising rats with an intraperitoneal injection of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg). Fifteen days later, the rats were kept in a 0.8 m3 chamber, where they were exposed to 2% OA aerosol via an airflow of 8 L/min for 20 min/day for eight consecutive days. Rats were separated into four groups: a normal control group; an asthma group sensitised with OA but treated with normal saline solution; and two evodiamine groups, treated with evodiamine at 40 or 80 mg/kg p.o. (Shen et al. 2019). Evodiamine doses were prepared by dissolving it in DMSO (Figure 1).\nInfluence of evodiamine on cytokine levels in the serum (A) and BALF (B) of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.", "Rats were anaesthetized at the end of the treatment protocol and blood was drawn. Then they were euthanized by intraperitoneal injection of sodium pentobarbital (100 mg/kg). Serum was separated, and serum levels of the cytokines interleukin (IL)-4, IL-9, and IL-13 were measured with ELISA kits per the manufacturer’s instructions.", "We ligated the right lung of each rat to determine histological changes and collected BALF from the left lung after cannulating the trachea. Bronchoalveolar lavage fluid (BALF) was centrifuged at 4 °C at 2500 rpm for 10 min. 
The pellet was resuspended in 100 µL PBS and used to determine relative and total leukocyte counts with a haemocytometer. Interferon gamma (IFN-γ), IgE, IL-4, IL-13, and IL-17 levels were measured in the supernatant with ELISA kits.", "We examined isolated intact small bronchioles by light microscopy to measure the thickness of the airway wall and smooth muscle layer. Image-Pro Plus version 6.0 was used to determine the internal smooth muscle area (Wam1), external smooth muscle area (Wam2), bronchial luminal area (Wat1), total bronchial wall area (Wat2), and basement membrane perimeter (Pbm) in the bronchioles. The thicknesses of the smooth muscle layer (Wat) and airway wall (Wan) were calculated as follows:\nWan = (Wat1 − Wat2)/Pbm\nWat = (Wam1 − Wam2)/Pbm\n", "Lung tissue was isolated from each animal, fixed in paraformaldehyde (4%), embedded in paraffin, and sectioned at a thickness of 5 µm. The sections were stained with H&E, and their histopathology was analysed with optical microscopy. The amount of inflammatory injury to the lung tissue was determined on a scale from 0 to 4: 0: normal, 1: few cells, 2: inflammatory cells that form a ring one layer deep, 3: a 2- to 4-cell layer of inflammatory cells, 4: a ring of inflammatory cells >4 cells deep.\nWe analysed the airway epithelium for goblet cells by staining it with periodic acid-Schiff (PAS) stain. Goblet cell hyperplasia was scored based on the percentage of PAS-positive cells: 0: no goblet cells, 1: <25% goblet cells, 2: 25–50%, 3: 51–75%, 4: >75%.", "Lung tissue was incubated overnight at 4 °C with anti-NF-κB antibodies and then with secondary antibody for 60 min at 37 °C. Image-Pro Plus version 6.0 was used to quantify the number of positive cells.", "The relative expression of TLR-4, MyD88, NF-κB, HMGB1, and GAPDH mRNA was estimated with SYBR green-based qRT-PCR. Total RNA was extracted with TRIzol reagent and then subjected to TaqMan MicroRNA assays. cDNA was synthesised from 2 µg total RNA (20 µL) with Moloney murine leukaemia virus reverse transcriptase. An ABI Prism 7500 system (Applied Biosystems, Foster City, CA, USA) was used with a SYBR green/fluorescein qPCR Master Mix kit (Thermo Fisher Scientific) under the following conditions: 50 °C for 2 min; 95 °C for 10 min; then 40 cycles of 95 °C for 30 s and 60 °C for 30 s. Relative gene expression was calculated for each gene with the 2^−ΔΔCt method.", "We obtained the TLR-4 protein sequence from the NCBI database and used it to prepare a homology model using the SWISS-Model server. UniProt was used to analyse the sequence of the TLR-4 protein. The target sequences were matched against the primary amino acid sequence in a BLAST search. The best-quality templates were selected to build the homology model.", "The 2D structure of evodiamine was obtained from the PubChem database and converted into a .pdb file with Open Babel. AutoDock Vina MGL tools were used to prepare the ligand for the docking study by removing the water molecules, adding hydrogens, and modulating the charges according to the Kollman approach. A PDBQT file was thus obtained. We performed a molecular docking simulation of the ligand evodiamine using the AutoDock Kollman and Gasteiger functions for both the ligand and the target protein. The grid map was created with AutoGrid 4. The grid box was prepared, and we defined the area of protein structure to be mapped by providing the coordinates. The grid box dimensions (x, y, and z coordinates) for TLR-4 were 111.168929, −8.862438, and −4.180014. The Lamarckian genetic algorithm was used for energy minimisation and optimisation in the docking simulation process.", "All data are expressed as means ± standard deviations (n = 8). The statistical analyses consisted of one-way analyses of variance (ANOVAs). Post hoc comparisons of the means were performed with Dunnett’s post hoc test in GraphPad Prism (ver. 6.1; San Diego, CA, USA). p < 0.05 was considered to indicate statistical significance.", "Cytokine levels in the BALF and serum of evodiamine-treated asthmatic rats are shown in Figure 1(A,B). Serum IL-4, IL-9, and IL-13 levels were significantly (p < 0.01) enhanced in the asthma group compared with the control group. However, serum levels of the cytokines IL-4, IL-9, and IL-13 were reduced significantly (p < 0.01) in the evodiamine-treated group compared with the asthma group (Figure 1(A)). Moreover, levels of IL-4, IL-13, and IL-17 in BALF were increased in the asthma group relative to the normal group. Treatment of the asthmatic rats with evodiamine prevented this increase, reducing IL-4, IL-13, and IL-17 levels in BALF relative to the asthma group (Figure 1(B)).", "Isolated intact small bronchioles of evodiamine-treated and untreated asthmatic rats are shown in Figure 2. The thickness of the airway wall and smooth muscle layer was greater in the lung tissue of the asthma group than in the control group. However, treatment with evodiamine significantly (p < 0.01) reduced the thickness of both in the lung tissue of asthmatic rats.\nEvodiamine reduces the thickness of airway smooth muscle and the airway wall in intact small bronchioles of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.", "Levels of mediators that modulate the inflammatory reaction, including IFN-γ and IgE, were measured in lung tissue homogenates. In the lung tissue of asthmatic rats, IFN-γ levels decreased significantly (p < 0.01) and IgE levels increased compared with the normal control group. Evodiamine treatment increased the level of IFN-γ and significantly reduced the level of IgE in the lung tissue compared to the asthma group (Figure 3).\nEffects of evodiamine on IFN-γ and IgE levels in lung tissue homogenates of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.", "We evaluated the infiltration of inflammatory cells by determining the levels of white blood cells (WBC), eosinophils (EOS), and lymphocytes (LYM) in the BALF of the four groups of rats (Figure 4). WBC, EOS, and LYM counts were higher in the BALF of the asthma group than in that of the normal group, whereas they decreased significantly (p < 0.01) in the BALF of evodiamine-treated asthmatic rats.\nEffects of evodiamine on the infiltration of inflammatory cells in the BALF of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.", "Expression of NF-κB protein in the lung tissue of the rats is shown in Figure 5(A,B). Expression was higher in the asthma group than in the normal group, but it was reduced following treatment with evodiamine.\nImmunohistochemical analyses of the expression of NF-κB protein in the lung tissue of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.", "TLR-4, MyD88, NF-κB, and HMGB1 mRNA expression was measured in rat lung tissue (Figure 6). Levels of all four mRNAs were significantly enhanced in the lung tissue of the asthma group compared to the normal group but were reduced in the lung tissue of the evodiamine-treated group compared to the untreated asthmatic group.\nExpression of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in the lung tissue of evodiamine-treated asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.", "Histopathological changes in the lung tissue of rats treated or not treated with evodiamine are shown in Figure 7(A,B). Inflammation was assessed using histopathological scores determined from H&E-stained sections (Figure 7(A)). The histopathological score was higher in the asthma group than in the normal group, but the increase was reversed by treatment with evodiamine.\nHistopathological changes in the lung tissue of evodiamine-treated asthmatic rats. (A) H&E staining of lung tissue and histopathological score. (B) PAS staining of lung tissue and PAS score. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.\nThe number of goblet cells in lung tissue was estimated by PAS staining (Figure 7(B)). We found a higher PAS score in the asthma group than in the normal group but a dose-dependent reduction in PAS score in the evodiamine-treated group versus the asthma group.", "Given the in vivo and in vitro findings, we performed a molecular docking study using BLAST and homology modelling followed by ligand and protein preparation. Molecular docking simulation of the ligand evodiamine was done with the AutoDock Kollman and Gasteiger functions for both the ligand and the binding protein. The docking results showed a high binding affinity of evodiamine for the TLR-4 protein implicated in vivo, a result supported by the high binding energies (Table 1). The 3D structure of the protein revealed the area of ligand binding in the TLR-4 protein (Figure 8).\nIn silico molecular docking shows the interaction of the TLR-4 protein and evodiamine. The solid area in the protein structures represents the area of interaction.\nDocking scores for the TLR-4 protein with the ligand evodiamine." ]
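The statistical design in the methods above (means ± SD, one-way ANOVA, then a post hoc test against a reference group) can be sketched without any statistics package. The groups below are synthetic placeholders; in practice the F statistic would be compared against an F distribution, and Dunnett's test would be applied for the control comparisons, as the paper does in GraphPad Prism:

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups.

    F = (between-group SS / (k - 1)) / (within-group SS / (N - k)),
    where k is the number of groups and N the total sample size.
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group size times squared mean offset
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Three synthetic groups of measurements (not data from the study)
print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```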
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
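The goblet-cell scoring rule given in the histopathology methods (0: no goblet cells; 1: <25%; 2: 25–50%; 3: 51–75%; 4: >75% PAS-positive cells) maps directly onto a small function; the function name is my own, and the thresholds follow the text:

```python
def pas_score(percent_pas_positive):
    """Map the percentage of PAS-positive cells to the 0-4 hyperplasia score."""
    if percent_pas_positive == 0:
        return 0
    if percent_pas_positive < 25:
        return 1
    if percent_pas_positive <= 50:
        return 2
    if percent_pas_positive <= 75:
        return 3
    return 4

print([pas_score(p) for p in (0, 10, 40, 60, 90)])  # [0, 1, 2, 3, 4]
```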
[ "Introduction", "Materials and methods", "Animals", "Chemicals", "Experimental design and treatment protocol", "Measurement of cytokines in the serum", "Preparation of BALF and determination of biochemical parameters", "Measurement of the thickness of smooth muscle and the airway wall", "Histopathological analyses", "Immunohistochemical analyses", "qRT-PCR", "Homology model of TLR-4", "Preparation of proteins and ligand and molecular docking", "Statistical analyses", "Results", "Evodiamine reduces the cytokine level in serum and BALF", "Evodiamine reduces the thickness of smooth muscle and the airway wall", "Evodiamine attenuates biochemical parameters", "Evodiamine attenuates the infiltration of inflammatory cells", "Evodiamine attenuates expression of NF-κB and HMGB1 protein", "Evodiamine attenuates mRNA expression of TLR-4, MyD88, NF-κB, and HMGB1", "Evodiamine attenuates histopathological changes in lung tissue", "Effects of evodiamine on the TLR-4 protein", "Discussion", "Conclusion" ]
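The airway morphometry formulas in the methods, Wan = (Wat1 − Wat2)/Pbm and Wat = (Wam1 − Wam2)/Pbm, are both area-difference-per-perimeter calculations; a minimal sketch, with hypothetical area and perimeter values (the mapping of which area is "outer" follows the text's definitions):

```python
def thickness(outer_area, inner_area, basement_perimeter):
    """Normalised thickness: (outer area - inner area) / basement membrane perimeter."""
    return (outer_area - inner_area) / basement_perimeter

# Hypothetical morphometry in arbitrary units
wan = thickness(500.0, 300.0, 100.0)  # airway wall thickness from wall areas
wat = thickness(260.0, 200.0, 100.0)  # smooth muscle thickness from muscle areas
print(wan, wat)  # 2.0 0.6
```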
[ "Asthma is a respiratory disorder diagnosed by pathophysiological symptoms such as hyper-responsiveness, obstruction, remodelling, and inflammation of the airway. Some 339 million people suffer from asthma worldwide (Dharmage et al. 2019). Among the physiological pathways involved in the development of asthma is an imbalance in Th1/Th2 (Durrant and Metzger 2010). Th2 activation enhances the production of interleukins, leading to the development of several pathological conditions, including allergies and asthma. Allergic asthma occurs in response to enhanced production of immunoglobulin E (IgE) by B cells following an increased release of cytokines (Galli and Tsai 2012). NF-κB, a potent inflammatory mediator, enhances the production of cytokines in several disorders, including asthma (Liu et al. 2017). Phosphorylation of IκB activates the NF-κB p65 subunit, which is responsible for increased production of cytokines and promotion of the inflammatory cascade (Christian et al. 2016). In addition, HMGB1, an immune system protein involved in the regulation of cell survival and death, has been implicated in sepsis, lung injury, and asthma. HMGB1 binds to toll-like receptors (TLRs), which stimulates the release of cytokines through inflammatory cells (Qu et al. 2019). MyD88- or non-MyD88-dependent signalling is used by HMGB1 to trigger downstream signals by activating TLR-4, which enhances the secretion of cytokines via the NF-κB pathway (Azam et al. 2019).\nCytokines and activation of the inflammatory cascade contribute to both the allergic reaction and the remodelling of lung tissue. Airway remodelling in asthmatic patients is induced by increased infiltration of T lymphocytes (LYM) and eosinophils (EOS; Fehrenbach et al. 2017). The persistent hyper-responsiveness of the airway together with remodelling causes irreversible obstruction, which in turn depresses respiratory function. 
The medication currently available to treat asthma offers only symptomatic relief and has several limitations. Thus, new approaches to the management of asthma are needed. Several reports have suggested that asthma can be managed by targeting the remodelling and inflammation of the airway (Bergeron et al. 2010).\nThere is increasing interest in molecules of natural origin as therapeutic agents. Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), is a traditional medicine in China for treating cardiovascular disorders, infection, inflammation, and obesity (Liao et al. 2011). It also exhibits anticancer activity by regulating the TGF-β1 and NF-κB pathways and thereby controls the growth of several types of cancer cells (Jiang and Hu 2009). Anti-inflammatory activity, including innate immunity against bacterial infection, and antiulcer activity have also been demonstrated for evodiamine via its regulation of the inflammatory cascade and inflammasomes (Li et al. 2019; Yu et al. 2013). Thus, the present study evaluated the protective effects of evodiamine against asthma.", "Animals Sprague-Dawley rats weighing 180–225 g and housed under controlled conditions (temperature of 24 ± 3 °C and 60 ± 5% humidity) with a 12 h light/dark cycle were used in this study. All animal experiments were approved by the Institutional Animal Ethical Committee of Wuxi People’s Hospital Affiliated with Nanjing Medical University, China (IAEC/WPH-NMU/2019/25).\nSprague-Dawley rats weighing 180–225 g and housed under controlled conditions (temperature of 24 ± 3 °C and 60 ± 5% humidity) with a 12 h light/dark cycle were used in this study. 
All animal experiments were approved by the Institutional Animal Ethical Committee of Wuxi People’s Hospital Affiliated with Nanjing Medical University, China (IAEC/WPH-NMU/2019/25).", "Sprague-Dawley rats weighing 180–225 g and housed under controlled conditions (temperature of 24 ± 3 °C and 60 ± 5% humidity) with a 12 h light/dark cycle were used in this study. All animal experiments were approved by the Institutional Animal Ethical Committee of Wuxi People’s Hospital Affiliated with Nanjing Medical University, China (IAEC/WPH-NMU/2019/25).", "Aluminium hydroxide [Al(OH)3], ovalbumin (OA) and evodiamine were procured from Sigma Aldrich Ltd., USA. ELISA kits were purchased from ThermoFisher Scientific Ltd., China and anti-NF-κB antibodies were purchased from Abcam Ltd., USA. Total RNA Kit was purchased from Omega Bio-Tek Inc., USA.\nExperimental design and treatment protocol We induced asthma by sensitising rats with an intraperitoneal injection of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg). Fifteen days later, the rats were kept in an 0.8 m3 chamber, where they were exposed to 2% OA aerosol via an airflow of 8 L/min for 20 min/day for eight consecutive days. Rats were separated into four groups: a normal control group; an asthma group sensitised with OA but treated with normal saline solution; and two evodiamine groups, treated with 40 and 80 mg/kg group p.o (Shen et al. 2019). Evodiamine dose preparation was prepared by dissolving it in DMSO solution (Figure 1).\nInfluence of evodiamine on cytokine levels in the serum (A) and BALF (B) of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.\nWe induced asthma by sensitising rats with an intraperitoneal injection of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg). Fifteen days later, the rats were kept in an 0.8 m3 chamber, where they were exposed to 2% OA aerosol via an airflow of 8 L/min for 20 min/day for eight consecutive days. 
Measurement of cytokines in the serum
At the end of the treatment protocol, rats were anaesthetized and blood was drawn; they were then euthanized by intraperitoneal injection of sodium pentobarbital (100 mg/kg). Serum was separated, and serum levels of the cytokines interleukin (IL)-4, IL-9, and IL-13 were measured with ELISA kits per the manufacturer's instructions.

Preparation of BALF and determination of biochemical parameters
The right lung of each rat was ligated for histological examination, and bronchoalveolar lavage fluid (BALF) was collected from the left lung after cannulating the trachea. BALF was centrifuged at 2500 rpm for 10 min at 4 °C. The pellet was resuspended in 100 µL PBS and used to determine relative and total leukocyte counts with a haemocytometer. Interferon gamma (IFN-γ), IgE, IL-4, IL-13, and IL-17 levels were measured in the supernatant with ELISA kits.

Measurement of the thickness of smooth muscle and the airway wall
Isolated intact small bronchioles were examined by light microscopy to measure the thickness of the airway wall and smooth muscle layer. Image-Pro Plus version 6.0 was used to determine the internal smooth muscle area (Wam1), external smooth muscle area (Wam2), bronchial luminal area (Wat1), total bronchial wall area (Wat2), and basement membrane perimeter (Pbm) in the bronchioles. The thicknesses of the airway wall (Wan) and smooth muscle layer (Wat) were calculated as follows:

Wan = (Wat1 − Wat2)/Pbm
Wat = (Wam1 − Wam2)/Pbm
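As a minimal sketch, the two thickness formulas above can be computed directly from the measured areas and the basement membrane perimeter. The numeric values below are illustrative only, not data from the study.

```python
def airway_wall_thickness(wat1, wat2, pbm):
    """Wan = (Wat1 - Wat2) / Pbm, as defined above."""
    return (wat1 - wat2) / pbm

def smooth_muscle_thickness(wam1, wam2, pbm):
    """Wat = (Wam1 - Wam2) / Pbm, as defined above."""
    return (wam1 - wam2) / pbm

# Illustrative values (areas in µm², perimeter in µm), not study data:
print(airway_wall_thickness(1500.0, 900.0, 120.0))    # 5.0
print(smooth_muscle_thickness(1100.0, 860.0, 120.0))  # 2.0
```

Normalising both thicknesses to the basement membrane perimeter, as done here, makes bronchioles of different calibres comparable.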
Histopathological analyses
Lung tissue was isolated from each animal, fixed in 4% paraformaldehyde, embedded in paraffin, and sectioned at a thickness of 5 µm. The sections were stained with H&E, and their histopathology was analysed by optical microscopy. Inflammatory injury to the lung tissue was scored on a scale from 0 to 4: 0, normal; 1, few cells; 2, inflammatory cells forming a ring one layer deep; 3, a two- to four-cell-deep layer of inflammatory cells; 4, a ring of inflammatory cells more than four cells deep.
The airway epithelium was analysed for goblet cells by staining with periodic acid-Schiff (PAS) stain. Goblet cell hyperplasia was scored based on the percentage of PAS-positive cells: 0, no goblet cells; 1, <25% goblet cells; 2, 25–50%; 3, 51–75%; 4, >75%.

Immunohistochemical analyses
Lung tissue was incubated overnight at 4 °C with anti-NF-κB antibodies and then with secondary antibody for 60 min at 37 °C. Image-Pro Plus version 6.0 was used to quantify the number of positive cells.

qRT-PCR
The relative expression of TLR-4, MyD88, NF-κB, HMGB1, and GAPDH mRNA was estimated with SYBR green-based qRT-PCR. Total RNA was extracted with TRIzol reagent and then subjected to TaqMan MicroRNA assays. cDNA was synthesised from 2 µg total RNA (20 µL) with Moloney murine leukaemia virus reverse transcriptase.
An ABI Prism 7500 system (Applied Biosystems, Foster City, CA, USA) was used with a SYBR green/fluorescein qPCR Master Mix kit (Thermo Fisher Scientific) under the following conditions: 50 °C for 2 min; 95 °C for 10 min; then 40 cycles of 95 °C for 30 s and 60 °C for 30 s. Relative expression of each gene was calculated with the 2^-ΔΔCt method.

Homology model of TLR-4
The TLR-4 protein sequence was obtained from the NCBI database and used to prepare a homology model with the SWISS-MODEL server. UniProt was used to analyse the sequence of the TLR-4 protein. The target sequences were matched against the primary amino acid sequence in a BLAST search, and the best-quality templates were selected to build the homology model.
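The 2^-ΔΔCt relative-expression calculation used in the qRT-PCR analysis above can be sketched as follows. The Ct values are illustrative, not data from the study; GAPDH serves as the reference gene, as in the panel of genes listed above.

```python
def fold_change(ct_target, ct_gapdh, ct_target_control, ct_gapdh_control):
    """Relative expression by the 2^-ΔΔCt method (Livak method)."""
    delta_ct_sample = ct_target - ct_gapdh                    # ΔCt, treated/asthma sample
    delta_ct_control = ct_target_control - ct_gapdh_control   # ΔCt, normal control
    delta_delta_ct = delta_ct_sample - delta_ct_control       # ΔΔCt
    return 2 ** (-delta_delta_ct)

# Illustrative: the target crosses threshold two cycles earlier (relative to
# GAPDH) in the sample than in the control, i.e. 4-fold upregulation.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

Note that the method assumes roughly equal amplification efficiencies for the target and reference genes.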
Preparation of proteins and ligand and molecular docking
The 2D structure of evodiamine was obtained from the PubChem database and converted into a .pdb file with Open Babel. MGL Tools for AutoDock were used to prepare the structures for the docking study by removing water molecules, adding hydrogens, and assigning charges according to the Kollman approach, yielding a PDBQT file. A molecular docking simulation of the ligand evodiamine was performed using Kollman and Gasteiger charge functions for the ligand and the target protein. The grid map was created with AutoGrid 4; the grid box was prepared by providing coordinates defining the area of the protein structure to be mapped, with grid box coordinates (x, y, z) for TLR-4 of 111.168929, −8.862438, and −4.180014. The Lamarckian genetic algorithm was used for energy minimisation and optimisation during the docking simulation.
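For illustration, a docking search box centred on the coordinates reported above can be expressed as a configuration file. The study itself used AutoGrid 4 with the Lamarckian genetic algorithm; the sketch below instead emits an AutoDock Vina-style config as an analogue, and the box sizes and file names are assumptions, not values from the paper.

```python
def vina_config(center, size=(30.0, 30.0, 30.0),
                receptor="tlr4_homology_model.pdbqt",  # assumed file name
                ligand="evodiamine.pdbqt"):            # assumed file name
    """Render an AutoDock Vina-style config string for a given search box."""
    axes = ("x", "y", "z")
    lines = [f"receptor = {receptor}", f"ligand = {ligand}"]
    lines += [f"center_{a} = {c}" for a, c in zip(axes, center)]
    lines += [f"size_{a} = {s}" for a, s in zip(axes, size)]
    lines.append("exhaustiveness = 8")  # Vina default-ish search effort
    return "\n".join(lines)

# Grid coordinates for TLR-4 as reported in the text:
print(vina_config((111.168929, -8.862438, -4.180014)))
```

A 30 Å cube is a common default for a single binding site; in practice the box would be sized to enclose the pocket identified on the homology model.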
Statistical analyses
All data are expressed as means ± standard deviations (n = 8). Statistical analyses consisted of one-way analysis of variance (ANOVA), with post hoc comparisons of the means performed using Dunnett's test in GraphPad Prism (ver. 6.1; San Diego, CA, USA). p < 0.05 was considered to indicate statistical significance.
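The one-way ANOVA F statistic underlying these group comparisons can be sketched in plain Python. The group values below are synthetic; Dunnett's post hoc test relies on tabulated critical values and was run in Prism, so only the ANOVA step is shown.

```python
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Sum of squares between groups (each group weighted by its size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Sum of squares within groups (deviations from each group's own mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Synthetic example with two small groups:
print(one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]))  # 1.5
```

The resulting F value is compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain the p value.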
Evodiamine reduces the cytokine level in serum and BALF
Cytokine levels in the serum (A) and BALF (B) of evodiamine-treated asthmatic rats are shown in Figure 1. Serum IL-4, IL-9, and IL-13 levels were significantly (p < 0.01) higher in the asthma group than in the control group, and were significantly (p < 0.01) reduced in the evodiamine-treated groups compared with the asthma group (Figure 1(A)). Levels of IL-4, IL-13, and IL-17 in BALF were likewise increased in the asthma group relative to the normal group, and treatment with evodiamine significantly reduced these BALF cytokine levels relative to the asthma group (Figure 1(B)).

Evodiamine reduces the thickness of smooth muscle and the airway wall
Isolated intact small bronchioles of evodiamine-treated and untreated asthmatic rats are shown in Figure 2. The thickness of the airway wall and smooth muscle layer was greater in the lung tissue of the asthma group than in the control group.
However, treatment with evodiamine significantly (p < 0.01) reduced the thickness of both the airway wall and the smooth muscle layer in the lung tissue of asthmatic rats.

Figure 2. Evodiamine reduces the thickness of airway smooth muscle and the airway wall in intact small bronchioles of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.

Evodiamine attenuates biochemical parameters
Levels of mediators that modulate the inflammatory reaction, including IFN-γ and IgE, were measured in lung tissue homogenates. In the lung tissue of asthmatic rats, IFN-γ was significantly (p < 0.01) decreased and IgE was increased relative to the normal control group. Evodiamine treatment increased IFN-γ and significantly reduced IgE in the lung tissue compared with the asthma group (Figure 3).

Figure 3. Effects of evodiamine on IFN-γ and IgE levels in lung tissue homogenates of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.

Evodiamine attenuates the infiltration of inflammatory cells
The infiltration of inflammatory cells was evaluated by determining white blood cell (WBC), eosinophil (EOS), and lymphocyte (LYM) counts in the BALF of the four groups of rats (Figure 4). WBC, EOS, and LYM counts were higher in the BALF of the asthma group than in the normal group, and decreased significantly (p < 0.01) in the BALF of evodiamine-treated asthmatic rats.

Figure 4. Effects of evodiamine on the infiltration of inflammatory cells in the BALF of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.

Evodiamine attenuates expression of NF-κB and HMGB1 protein
Expression of NF-κB protein in the lung tissue of the rats is shown in Figure 5(A,B). Expression was higher in the asthma group than in the normal group, but it was reduced following treatment with evodiamine.

Figure 5. Immunohistochemical analyses of the expression of NF-κB protein in the lung tissue of asthmatic rats.
Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.

Evodiamine attenuates mRNA expression of TLR-4, MyD88, NF-κB, and HMGB1
TLR-4, MyD88, NF-κB, and HMGB1 mRNA expression was measured in rat lung tissue (Figure 6). Levels of all four mRNAs were significantly enhanced in the lung tissue of the asthma group compared with the normal group but were reduced in the lung tissue of the evodiamine-treated group compared with the untreated asthmatic group.

Figure 6. Expression of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in the lung tissue of evodiamine-treated asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.

Evodiamine attenuates histopathological changes in lung tissue
Histopathological changes in the lung tissue of evodiamine-treated and untreated rats are shown in Figure 7(A,B). Inflammation was estimated from histopathological scores of H&E-stained sections (Figure 7(A)). The histopathological score was higher in the asthma group than in the normal group, and this increase was reversed by treatment with evodiamine.
The number of goblet cells in lung tissue was estimated by PAS staining (Figure 7(B)). The PAS score was higher in the asthma group than in the normal group, with a dose-dependent reduction in the evodiamine-treated groups versus the asthma group.

Figure 7. Histopathological changes in the lung tissue of evodiamine-treated asthmatic rats. (A) H&E staining of lung tissue and histopathological score. (B) PAS staining of lung tissue and PAS score. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.

Effects of evodiamine on the TLR-4 protein
Given the in vivo and in vitro findings, a molecular docking study was performed using BLAST searching and homology modelling followed by ligand and protein preparation.
Molecular docking simulation of the ligand evodiamine was carried out with the AutoDock Kollman and Gasteiger charge functions for both the ligand and the binding protein. The docking results showed a high binding affinity of evodiamine for the TLR-4 protein implicated by the in vivo findings, supported by favourable binding energies (Table 1). The 3D structure of the protein revealed the area of ligand binding in the TLR-4 protein (Figure 8).

Figure 8. In silico molecular docking shows the interaction of the TLR-4 protein and evodiamine. The solid area in the protein structures represents the area of interaction.

Table 1. Docking scores for the TLR-4 protein with the ligand evodiamine.

Asthma is a chronic disorder characterised by hyper-responsiveness of the bronchial tree due to inflammatory responses in the lung. Several pathophysiological pathways are associated with the development of asthma, including those that mediate the release of cytokines involved in the inflammatory process, in particular IL-4, IL-9, IL-13, and IL-17. Although leukotriene inhibitors have shown potential in the management of asthma, our study demonstrates that treatment with evodiamine ameliorates the enhanced levels of cytokines in the serum and BALF of asthmatic rats.
The infiltration of inflammatory cells in the lung is another feature of asthma (National Asthma Education and Prevention Program, Third Expert Panel on the Diagnosis and Management of Asthma 2007).
These inflammatory cells are responsible for the stimulated release of inflammatory cytokines and their subsequent effects on the smooth muscle of the respiratory tract (Moldoveanu et al. 2009). IgE levels are also enhanced in asthmatic patients and are responsible for inducing allergic reactions through the release of histamine (Yamauchi and Ogasawara 2019), a tissue amine responsible for the increased sensitivity of the bronchial tree and the contraction of its smooth muscle layer (Bonaldi et al. 2003). In our rat model of asthma, evodiamine ameliorated the altered level of IgE and reduced infiltration of inflammatory cells in the lung tissue of asthmatic rats. In addition, evodiamine reduced the thickness of both the airway wall and the smooth muscle layer in treated rats compared to untreated asthmatic rats.\nReports suggest that inflammation enhances the permeability of the nuclear membrane, allowing the nuclear HMGB1 protein to reach the cytoplasm (Sprague and Khalil 2009). Leukotrienes are secreted by macrophages and mononuclear cells, and their release is stimulated by HMGB1 (Baek et al. 2018). The knockdown of HMGB1 reduces the severity of asthma by reducing airway smooth muscle thickness, collagen deposition, mucus secretion, and inflammation in the airway (Hou et al. 2015). The TLR-4 protein, the receptor for HMGB1, is involved in the immune cell response and in inflammation, and its overexpression in the lung enhances the thickness of the airway wall by activating elastase and NF-κB (Wang et al. 2020). The MyD88-dependent pathway is used by TLR-4 to activate NF-κB and further contributes to the synthesis of inflammatory cytokines (Liu et al. 2017). 
Our study suggests a mechanism by which evodiamine hinders the development of asthma in rats by altering expression of TLR-4, NF-κB, MyD88, and HMGB1, thus limiting airway remodelling in the rat lung.", "This report suggests that treatment with evodiamine reduces inflammation and airway remodelling in the lung tissue of asthmatic rats by downregulating the HMGB1/NF-κB/TLR-4 pathway. These data suggest that evodiamine could be explored clinically for the management of asthma." ]
[ "intro", "materials", null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion", "conclusions" ]
[ "Asthma", "bronchioles", "cytokine", "lung" ]
Introduction: Asthma is a respiratory disorder diagnosed by pathophysiological symptoms such as hyper-responsiveness, obstruction, remodelling, and inflammation of the airway. Some 339 million people suffer from asthma worldwide (Dharmage et al. 2019). Among the physiological pathways involved in the development of asthma is an imbalance in Th1/Th2 (Durrant and Metzger 2010). Th2 activation enhances the production of interleukins, leading to the development of several pathological conditions, including allergies and asthma. Allergic asthma occurs in response to enhanced production of immunoglobulin E (IgE) by B cells following an increased release of cytokines (Galli and Tsai 2012). NF-κB, a potent inflammatory mediator, enhances the production of cytokines in several disorders, including asthma (Liu et al. 2017). Phosphorylation of IκB activates the NF-κB p65 subunit, which is responsible for increased production of cytokines and promotion of the inflammatory cascade (Christian et al. 2016). In addition, HMGB1, an immune system protein involved in the regulation of cell survival and death, has been implicated in sepsis, lung injury, and asthma. HMGB1 binds to toll-like receptors (TLRs), which stimulates the release of cytokines through inflammatory cells (Qu et al. 2019). MyD88- or non-MyD88-dependent signalling is used by HMGB1 to trigger downstream signals by activating TLR-4, which enhances the secretion of cytokines via the NF-κB pathway (Azam et al. 2019). Cytokines and activation of the inflammatory cascade contribute to both the allergic reaction and the remodelling of lung tissue. Airway remodelling in asthmatic patients is induced by increased infiltration of T lymphocytes (LYM) and eosinophils (EOS; Fehrenbach et al. 2017). The persistent hyper-responsiveness of the airway together with remodelling causes irreversible obstruction, which in turn depresses respiratory function. 
The medication currently available to treat asthma offers only symptomatic relief and has several limitations. Thus, new approaches to the management of asthma are needed. Several reports have suggested that asthma can be managed by targeting the remodelling and inflammation of the airway (Bergeron et al. 2010). There is increasing interest in molecules of natural origin as therapeutic agents. Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), is a traditional medicine in China for treating cardiovascular disorders, infection, inflammation, and obesity (Liao et al. 2011). It also exhibits anticancer activity by regulating the TGF-β1 and NF-κB pathways and thereby controls the growth of several types of cancer cells (Jiang and Hu 2009). Anti-inflammatory activity, including innate immunity against bacterial infection, and antiulcer activity have also been demonstrated for evodiamine via its regulation of the inflammatory cascade and inflammasomes (Li et al. 2019; Yu et al. 2013). Thus, the present study evaluated the protective effects of evodiamine against asthma. Materials and methods: Animals Sprague-Dawley rats weighing 180–225 g and housed under controlled conditions (temperature of 24 ± 3 °C and 60 ± 5% humidity) with a 12 h light/dark cycle were used in this study. All animal experiments were approved by the Institutional Animal Ethical Committee of Wuxi People’s Hospital Affiliated with Nanjing Medical University, China (IAEC/WPH-NMU/2019/25). 
Chemicals: Aluminium hydroxide [Al(OH)3], ovalbumin (OA) and evodiamine were procured from Sigma Aldrich Ltd., USA. ELISA kits were purchased from ThermoFisher Scientific Ltd., China and anti-NF-κB antibodies were purchased from Abcam Ltd., USA. Total RNA Kit was purchased from Omega Bio-Tek Inc., USA. Experimental design and treatment protocol We induced asthma by sensitising rats with an intraperitoneal injection of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg). Fifteen days later, the rats were kept in a 0.8 m³ chamber, where they were exposed to 2% OA aerosol via an airflow of 8 L/min for 20 min/day for eight consecutive days. Rats were separated into four groups: a normal control group; an asthma group sensitised with OA but treated with normal saline solution; and two evodiamine groups, treated with 40 and 80 mg/kg p.o., respectively (Shen et al. 2019). Evodiamine doses were prepared by dissolving it in DMSO (Figure 1). Influence of evodiamine on cytokine levels in the serum (A) and BALF (B) of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Measurement of cytokines in the serum Rats were anaesthetized at the end of the treatment protocol and blood was drawn. They were then euthanized by intraperitoneal injection of sodium pentobarbital (100 mg/kg). Serum was separated, and serum levels of the cytokines interleukin (IL)-4, IL-9, and IL-13 were measured with ELISA kits per the manufacturer’s instructions. Preparation of BALF and determination of biochemical parameters We ligated the right lung of each rat to determine histological changes and collected bronchoalveolar lavage fluid (BALF) from the left lung after cannulating the trachea. BALF was centrifuged at 4 °C at 2500 rpm for 10 min. The pellet was resuspended in 100 µL PBS and used to determine relative and total leukocyte counts with a haemocytometer. Interferon gamma (IFN-γ), IgE, IL-4, IL-13, and IL-17 levels were measured in the supernatant with ELISA kits. Measurement of the thickness of smooth muscle and the airway wall We examined isolated intact small bronchioles by light microscopy to measure the thickness of the airway wall and smooth muscle layer. Image-Pro Plus version 6.0 was used to determine the internal smooth muscle area (Wam1), external smooth muscle area (Wam2), bronchial luminal area (Wat1), total bronchial wall area (Wat2), and basement membrane perimeter (Pbm) in the bronchioles. The thicknesses of the airway wall (Wan) and smooth muscle layer (Wat) were calculated as follows: Wan = (Wat1 − Wat2)/Pbm; Wat = (Wam1 − Wam2)/Pbm. Histopathological analyses Lung tissue was isolated from each animal, fixed in paraformaldehyde (4%), embedded in paraffin, and sectioned at a thickness of 5 µm. The sections were stained with H&E, and their histopathology was analysed with optical microscopy. The amount of inflammatory injury to the lung tissue was scored on a scale from 0 to 4: 0: normal, 1: few cells, 2: inflammatory cells that form a ring one layer deep, 3: a 2- to 4-cell layer of inflammatory cells, 4: a ring of inflammatory cells >4 cells deep. We analysed the airway epithelium for goblet cells by staining it with periodic acid-Schiff (PAS) stain. 
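The two thickness indices defined in the airway-measurement paragraph above are plain area-difference-over-perimeter ratios. A minimal sketch of that arithmetic (function names and numeric values are hypothetical illustrations; in the study the Wam, Wat, and Pbm measurements came from Image-Pro Plus):

```python
# Thickness indices as defined in the text: an area difference divided by the
# basement membrane perimeter (Pbm). All example values are hypothetical.
def airway_wall_thickness(wat1: float, wat2: float, pbm: float) -> float:
    """Wan = (Wat1 - Wat2) / Pbm."""
    return (wat1 - wat2) / pbm

def smooth_muscle_thickness(wam1: float, wam2: float, pbm: float) -> float:
    """Wat = (Wam1 - Wam2) / Pbm."""
    return (wam1 - wam2) / pbm

# Hypothetical measurements (areas in µm², perimeter in µm):
wan = airway_wall_thickness(wat1=5200.0, wat2=3900.0, pbm=260.0)    # 1300 / 260 = 5.0
wat = smooth_muscle_thickness(wam1=3100.0, wam2=2450.0, pbm=260.0)  # 650 / 260 = 2.5
```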
Goblet cell hyperplasia was scored based on the percentage of PAS-positive cells: 0: no goblet cells, 1: <25% goblet cells, 2: 25–50%, 3: 51–75%, 4: >75%. Immunohistochemical analyses Lung tissue was incubated overnight at 4 °C with anti-NF-κB antibodies and then with secondary antibody for 60 min at 37 °C. Image-Pro Plus version 6.0 was used to quantify the number of positive cells. qRT-PCR The relative expression of TLR-4, MyD88, NF-κB, HMGB1, and GAPDH mRNA was estimated with SYBR green-based qRT-PCR. Total RNA was extracted with TRIzol reagent and then subjected to TaqMan MicroRNA assays. cDNA was synthesised from 2 µg total RNA (20 µL) with Moloney murine leukaemia virus reverse transcriptase. 
An ABI Prism 7500 system (Applied Biosystems, Foster City, CA, USA) was used with a SYBR green/fluorescein qPCR Master Mix kit (Thermo Fisher Scientific) under the following conditions: 50 °C for 2 min; 95 °C for 10 min; followed by 40 cycles at 95 °C for 30 s and 60 °C for 30 s. Relative expression was calculated for each gene with the 2−ΔΔCt method. Homology model of TLR-4 We obtained the TLR-4 protein sequence from the NCBI database and used it to prepare a homology model with the SWISS-Model server. UniProt was used to analyse the sequence of the TLR-4 protein. The target sequences were matched against the primary amino acid sequence in a BLAST search. The best quality templates were selected to build the homology model. 
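The 2−ΔΔCt calculation named in the qRT-PCR section reduces to two subtractions and one exponentiation. A minimal sketch with hypothetical Ct values (not study data), using GAPDH as the reference gene as in the study:

```python
def fold_change(ct_target, ct_ref, ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ΔΔCt) method: normalise the target gene
    to the reference gene (here GAPDH), then to the control group."""
    delta_ct = ct_target - ct_ref                          # ΔCt, treated sample
    delta_ct_control = ct_target_control - ct_ref_control  # ΔCt, control sample
    delta_delta_ct = delta_ct - delta_ct_control           # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier (relative to
# GAPDH) than in the control group, i.e. ~4-fold higher expression.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```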
Preparation of proteins and ligand and molecular docking The 2D structure of evodiamine was obtained from the PubChem database and converted into a .pdb file with Open Babel. AutoDock Vina MGL tools were used to prepare the ligand for the docking study by removing the water molecules, adding hydrogens, and modulating the charges according to the Kollman approach. A PDBQT file was thus obtained. We performed a molecular docking simulation of the ligand evodiamine using the AutoDock Kollman and Gasteiger functions for both the ligand and the target protein. The grid map was created with AutoGrid 4. The grid box was prepared, and we defined the area of the protein structure to be mapped by providing the coordinates. The grid box centre (x, y, and z coordinates) for TLR-4 was 111.168929, −8.862438, and −4.180014. The Lamarckian genetic algorithm was used for energy minimisation and optimisation in the docking simulation process. Statistical analyses All data are expressed as means ± standard deviations (n = 8). The statistical analyses consisted of one-way analyses of variance (ANOVAs). 
Post hoc comparisons of the means were performed with Dunnett’s post hoc test in GraphPad Prism (ver. 6.1; San Diego, CA, USA). p < 0.05 was considered to indicate statistical significance. Results: Evodiamine reduces the cytokine level in serum and BALF Cytokine levels in the BALF and serum of evodiamine-treated asthmatic rats are shown in Figure 1(A,B). Serum IL-4, IL-9, and IL-13 levels were significantly (p < 0.01) enhanced in the asthma group compared with the control group. However, serum levels of the cytokines IL-4, IL-9, and IL-13 were reduced significantly (p < 0.01) in the evodiamine-treated group compared with the asthma group (Figure 1(A)). Moreover, levels of IL-4, IL-13, and IL-17 in BALF were increased in the asthma group relative to the normal group. Treatment of the asthmatic rats with evodiamine prevented this increase, reducing IL-4, IL-13, and IL-17 levels in BALF relative to the asthma group (Figure 1(B)). 
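The analysis described in the statistical methods (one-way ANOVA, then a post hoc comparison against a reference group) can be sketched from first principles. The group values below are synthetic placeholders, not the study's measurements, and the study itself used GraphPad Prism:

```python
# One-way ANOVA computed from first principles, mirroring the design of four
# groups with n = 4 each. All data below are synthetic placeholders.
groups = {
    "normal": [1.0, 1.1, 0.9, 1.0],
    "asthma": [3.0, 3.1, 2.9, 3.0],
    "evodiamine 40 mg/kg": [2.0, 2.1, 1.9, 2.0],
    "evodiamine 80 mg/kg": [1.4, 1.5, 1.3, 1.4],
}

def one_way_anova_f(samples):
    """F statistic: between-group mean square over within-group mean square."""
    k = len(samples)                        # number of groups
    n_total = sum(len(g) for g in samples)  # total observations
    grand_mean = sum(sum(g) for g in samples) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in samples)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in samples)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

f_stat = one_way_anova_f(list(groups.values()))
# A large F (far above the alpha = 0.05 critical value for df = 3, 12) is then
# followed by Dunnett's post hoc test of each treated group against a single
# reference group, as in the study.
```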
Evodiamine reduces the thickness of smooth muscle and the airway wall Isolated intact small bronchioles of evodiamine-treated and untreated asthmatic rats are shown in Figure 2. The thickness of the airway wall and smooth muscle layer was greater in the lung tissue of the asthma group than in that of the control group. However, treatment with evodiamine significantly (p < 0.01) reduced the thickness of both the airway wall and the smooth muscle layer in the lung tissue of asthmatic rats. Evodiamine reduces the thickness of airway smooth muscle and the airway wall in intact small bronchioles of asthmatic rats. Values are means ± SD (n = 8); @@ p < 0.01 compared to the normal group; ** p < 0.01 compared to the asthma group. Evodiamine attenuates biochemical parameters Levels of mediators that modulate the inflammatory reaction, including IFN-γ and IgE, were measured in lung tissue homogenates. In the lung tissue of asthmatic rats, IFN-γ was significantly (p < 0.01) decreased and IgE was increased compared to the normal control group. Evodiamine treatment increased the level of IFN-γ and significantly reduced the level of IgE in the lung tissue compared to the asthma group (Figure 3). Effects of evodiamine on IFN-γ and IgE levels in lung tissue homogenates of asthmatic rats. 
Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates the infiltration of inflammatory cells We evaluated the infiltration of inflammatory cells by determining the level of white blood cells (WBC), EOS, and LYM in the BALF of the four groups of rats (Figure 4). WBC, EOS, and LYM counts were higher in the BALF of the asthma group than the normal group of rats, whereas they decreased significantly (p < 0.01) in the BALF of evodiamine-treated asthmatic rats. Effects of evodiamine on the infiltration of inflammatory cells in the BALF of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. 
Evodiamine attenuates expression of NF-κB and HMGB1 protein Expression of NF-κB protein in the lung tissue of the rats is shown in Figure 5(A,B). Expression was higher in the asthma group than in the normal group, but it was reduced following treatment with evodiamine. Immunohistochemical analyses of the expression of NF-κB protein in the lung tissue of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates mRNA expression of TLR-4, MyD88, NF-κB, and HMGB1 TLR-4, MyD88, NF-κB, and HMGB1 mRNA expression was measured in rat lung tissue (Figure 6). Levels of all four mRNAs were significantly enhanced in the lung tissue of the asthma group compared to the normal group but were reduced in the lung tissue of the evodiamine-treated group compared to the untreated asthmatic group. Expression of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in the lung tissue of evodiamine-treated asthmatic rats. 
Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates histopathological changes in lung tissue Histopathological changes in the lung tissue of rats treated or not treated with evodiamine are shown in Figure 7(A,B). Inflammation was scored from H&E-stained sections (Figure 7(A)). The histopathological score was higher in the asthma group than in the normal group, but the increase was reversed by treatment with evodiamine. Histopathological changes in the lung tissue of evodiamine-treated asthmatic rats. (A) H&E staining of lung tissue and histopathological score. (B) PAS staining of lung tissue and PAS score. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. The number of goblet cells in lung tissue was estimated by PAS staining (Figure 7(B)). We found a higher PAS score in the asthma group than in the normal group but a dose-dependent reduction in PAS score in the evodiamine-treated group versus the asthma group. Effects of evodiamine on the TLR-4 protein Given the in vivo findings, we performed a molecular docking study using BLAST and homology modelling followed by ligand and protein preparation. Molecular docking simulation of the ligand evodiamine was done with the AutoDock Kollman and Gasteiger functions for both the ligand and the binding protein. The docking results showed the high binding affinity of evodiamine for the in vivo confirmed TLR-4 protein, a result supported by the favourable binding energies (Table 1). The 3D structure of the protein revealed the area of ligand binding in the TLR-4 protein (Figure 8). In silico molecular docking shows the interaction of the TLR-4 protein and evodiamine. The solid area in the protein structures represents the area of interaction. Docking scores for the TLR-4 protein with the ligand molecule evodiamine. 
Evodiamine reduces the cytokine level in serum and BALF: Cytokine levels in the BALF and serum of evodiamine-treated asthmatic rats are shown in Figure 1(A,B). Serum IL-4, IL-9, and IL-13 levels were significantly (p < 0.01) enhanced in the asthma group compared with the control group. However, serum levels of the cytokines IL-4, IL-9, and IL-13 were reduced significantly (p < 0.01) in the evodiamine-treated group compared with the asthma group (Figure 1(A)). Moreover, levels of IL-4, IL-13, and IL-17 in BALF were increased in the asthma group relative to the normal group. Treatment of the asthmatic rats with evodiamine prevented this increase, reducing IL-4, IL-13, and IL-17 levels in BALF relative to the asthma group (Figure 1(B)). Evodiamine reduces the thickness of smooth muscle and the airway wall: Isolated intact small bronchioles of evodiamine-treated and untreated asthmatic rats are shown in Figure 2. The thickness of the airway wall and smooth muscle layer was greater in the lung tissue of the asthma group than in the control group. However, treatment with evodiamine significantly (p < 0.01) reduced the thickness of both the airway wall and the smooth muscle layer in the lung tissue of asthmatic rats. Evodiamine reduces the thickness of airway smooth muscle and the airway wall in intact small bronchioles of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates biochemical parameters: Levels of mediators that modulate the inflammatory reaction, including IFN-γ and IgE, were measured in lung tissue homogenates. In the lung tissue of asthmatic rats, IFN-γ levels decreased significantly (p < 0.01) and IgE levels increased compared with the normal control group. Evodiamine treatment increased the level of IFN-γ and significantly reduced the level of IgE in the lung tissue compared to the asthma group of rats (Figure 3).
Effects of evodiamine on IFN-γ and IgE levels in lung tissue homogenates of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates the infiltration of inflammatory cells: We evaluated the infiltration of inflammatory cells by determining the level of white blood cells (WBC), EOS, and LYM in the BALF of the four groups of rats (Figure 4). WBC, EOS, and LYM counts were higher in the BALF of the asthma group than in the normal group of rats, whereas they decreased significantly (p < 0.01) in the BALF of evodiamine-treated asthmatic rats. Effects of evodiamine on the infiltration of inflammatory cells in the BALF of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates expression of NF-κB and HMGB1 protein: Expression of NF-κB protein in the lung tissue of the rats is shown in Figure 5(A,B). Expression was higher in the asthma group than in the normal group, but it was reduced following treatment with evodiamine. Immunohistochemical analyses of the expression of NF-κB protein in the lung tissue of asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. Evodiamine attenuates mRNA expression of TLR-4, MyD88, NF-κB, and HMGB1: TLR-4, MyD88, NF-κB, and HMGB1 mRNA expression was measured in rat lung tissue (Figure 6). Levels of all four mRNAs were significantly enhanced in the lung tissue of the asthma group compared to the normal group but were reduced in the lung tissue of the evodiamine-treated group compared to the untreated asthmatic group. Expression of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in the lung tissue of evodiamine-treated asthmatic rats. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group.
Evodiamine attenuates histopathological changes in lung tissue: Histopathological changes in the lung tissue of rats treated with evodiamine or not treated are shown in Figure 7(A,B). Inflammation scores were estimated based on histopathological scores determined from H&E-stained sections (Figure 7(A)). The histopathological score was higher in the asthma group than in the normal group, but the increase was reversed by treatment with evodiamine. Histopathological changes in the lung tissue of evodiamine-treated asthmatic rats. (A) H&E staining of lung tissue and histopathological score. (B) PAS staining of lung tissue and PAS score. Values are means ± SD (n = 8); @@p < 0.01 compared to the normal group; **p < 0.01 compared to the asthma group. The number of goblet cells in lung tissue was estimated by PAS staining (Figure 7(B)). We found a higher PAS score in the asthma group than in the normal group but a dose-dependent reduction in PAS score in the evodiamine-treated group versus the asthma group. Effects of evodiamine on the TLR-4 protein: Given the in vivo and in vitro findings, we performed a molecular docking study using BLAST and homology modelling followed by ligand and protein preparation. Molecular docking simulation of the ligand evodiamine was done with the AutoDock Kollman and Gasteiger functions for both the ligand and the binding protein. The docking results showed the high binding affinity of evodiamine with the in vivo confirmed TLR-4 protein, a result supported by the high binding energies (Table 1). The 3D structure of the protein revealed the area of ligand binding in the TLR-4 protein (Figure 8). In silico molecular docking shows the interaction of the TLR-4 protein and evodiamine. The solid area in the protein structures represents the area of interaction. Docking scores for TLR-4 protein with ligand molecule evodiamine.
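Throughout these results, group contrasts (asthma vs. normal, n = 8 per group) are reported as means ± SD with a p < 0.01 threshold. As a hedged illustration only, with simulated placeholder numbers rather than the study's measurements (the paper does not state which test statistic was used), such a comparison reduces to a two-sample test of the kind sketched below:

```python
# Hedged sketch (not the study's data): comparing a hypothetical marker score
# between control and asthma groups (n = 8 each) using Welch's t statistic,
# reported as mean ± SD to match the paper's convention.
import math
import statistics

normal = [1.02, 0.95, 1.10, 0.88, 1.05, 0.97, 1.12, 0.91]  # hypothetical control scores
asthma = [1.95, 2.10, 1.88, 2.25, 2.02, 1.97, 2.18, 2.05]  # hypothetical asthma scores

def mean_sd(xs):
    """Return (mean, sample standard deviation) of a group."""
    return statistics.mean(xs), statistics.stdev(xs)

def welch_t(a, b):
    """Welch's t statistic; does not assume equal group variances."""
    ma, sa = mean_sd(a)
    mb, sb = mean_sd(b)
    se = math.sqrt(sa ** 2 / len(a) + sb ** 2 / len(b))
    return (ma - mb) / se

for name, grp in [("normal", normal), ("asthma", asthma)]:
    m, sd = mean_sd(grp)
    print(f"{name}: {m:.2f} ± {sd:.2f}")

t = welch_t(asthma, normal)
print(f"Welch t = {t:.1f}")  # well above the roughly 3 critical value for p < 0.01
```

Welch's test is just one reasonable choice here; with four groups (control, asthma, two evodiamine doses) an ANOVA with post hoc comparisons would be an equally plausible reading of the reported statistics.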
Discussion: Asthma is a chronic disorder characterised by hyper-responsiveness of the bronchial tree due to inflammatory responses in the lung. Several pathophysiological pathways are associated with the development of asthma, including those that mediate the release of cytokines involved in the inflammatory process, in particular IL-4, IL-9, IL-13, and IL-17. Although leukotriene inhibitors have shown potential in the management of asthma, our study demonstrates that treatment with evodiamine ameliorates enhanced levels of cytokines in the serum and BALF of asthmatic rats. The infiltration of inflammatory cells in the lung is another feature of asthma (National Asthma Education and Prevention Program, Third Expert Panel on the Diagnosis and Management of Asthma 2007). These inflammatory cells are responsible for the stimulated release of inflammatory cytokines and their subsequent effects on the smooth muscle of the respiratory tract (Moldoveanu et al. 2009). IgE levels are also enhanced in asthmatic patients and are responsible for inducing allergic reaction by releasing histamine (Yamauchi and Ogasawara 2019), a tissue amine responsible for the increased sensitivity of the bronchial tree and the contraction of its smooth muscle layer (Bonaldi et al. 2003). In our rat model of asthma, evodiamine ameliorated the altered level of IgE and reduced infiltration of inflammatory cells in the lung tissue of asthmatic rats. In addition, evodiamine reduced the thickness of both the airway wall and the smooth muscle layer in treated rats compared to untreated asthmatic rats. Reports suggest that inflammation enhances the permeability of the nuclear membrane, allowing the nuclear HMGB1 protein to reach the cytoplasm (Sprague and Khalil 2009). Leukotrienes are secreted by macrophages and mononuclear cells, and their release is stimulated by HMGB1 (Baek et al. 2018). 
The knockdown of HMGB1 reduces the severity of asthma by reducing airway smooth muscle thickness, collagen deposition, mucus secretion, and inflammation in the airway (Hou et al. 2015). The TLR-4 protein, the receptor for HMGB1, is involved in the immune cell response and in inflammation, and its overexpression in the lung enhances the thickness of the airway wall by activating elastase and NF-κB (Wang et al. 2020). The MyD88-dependent pathway is used by TLR-4 to activate NF-κB and further contributes to the synthesis of inflammatory cytokines (Liu et al. 2017). Our study suggests a mechanism by which evodiamine hinders the development of asthma in rats by altering the expression of TLR-4, NF-κB, MyD88, and HMGB1, thus limiting airway remodelling in the rat lung. Conclusion: This report suggests that treatment with evodiamine reduces inflammation and airway remodelling in the lung tissue of asthmatic rats by downregulating the HMGB1/NF-κB/TLR-4 pathway. The data of this study suggest that evodiamine could be explored clinically for the management of asthma.
Background: Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), possesses strong anti-inflammatory, immunomodulatory, and antibacterial properties. Methods: Thirty-two Sprague-Dawley (SD) rats were used; asthma was induced by intraperitoneal injection of a mixture of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg), followed by exposure to a 2% OA aerosol for 1 week. The animals were divided into four groups: control, asthma, and evodiamine-treated (40 and 80 mg/kg p.o.) groups. Serum levels of inflammatory cytokines, interferon gamma (IFN-γ), and immunoglobulin E (IgE) and the infiltration of inflammatory cells in the bronchoalveolar lavage fluid (BALF) of the animals were determined. The thickness of the smooth muscle layer and airway wall in the intact small bronchioles of asthmatic rats was examined as well. Results: Cytokine levels in the serum and BALF were lower in the evodiamine-treated group than in the asthma group. Evodiamine treatment reduced IgE levels, increased IFN-γ levels, and reduced the inflammatory cell infiltrate in the lung tissue of asthmatic rats. The thickness of the smooth muscle layer and airway wall of intact small bronchioles was less in the evodiamine-treated group than in the asthma group. Lower levels of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in lung tissue were measured in the evodiamine-treated group than in the asthma group. Conclusions: Evodiamine treatment protects against asthma, reducing airway inflammation and remodelling in the lung tissue by downregulating the HMGB1/NF-κB/TLR-4 pathway.
Introduction: Asthma is a respiratory disorder diagnosed by pathophysiological symptoms such as hyper-responsiveness, obstruction, remodelling, and inflammation of the airway. Some 339 million people suffer from asthma worldwide (Dharmage et al. 2019). Among the physiological pathways involved in the development of asthma is an imbalance in Th1/Th2 (Durrant and Metzger 2010). Th2 activation enhances the production of interleukins, leading to the development of several pathological conditions, including allergies and asthma. Allergic asthma occurs in response to enhanced production of immunoglobulin E (IgE) by B cells following an increased release of cytokines (Galli and Tsai 2012). NF-κB, a potent inflammatory mediator, enhances the production of cytokines in several disorders, including asthma (Liu et al. 2017). Phosphorylation of IκB activates the NF-κB p65 subunit, which is responsible for increased production of cytokines and promotion of the inflammatory cascade (Christian et al. 2016). In addition, HMGB1, an immune system protein involved in the regulation of cell survival and death, has been implicated in sepsis, lung injury, and asthma. HMGB1 binds to toll-like receptors (TLRs), which stimulates the release of cytokines through inflammatory cells (Qu et al. 2019). MyD88- or non-MyD88-dependent signalling is used by HMGB1 to trigger downstream signals by activating TLR-4, which enhances the secretion of cytokines via the NF-κB pathway (Azam et al. 2019). Cytokines and activation of the inflammatory cascade contribute to both the allergic reaction and the remodelling of lung tissue. Airway remodelling in asthmatic patients is induced by increased infiltration of T lymphocytes (LYM) and eosinophils (EOS; Fehrenbach et al. 2017). The persistent hyper-responsiveness of the airway together with remodelling cause irreversible obstruction, which in turn depresses respiratory function. 
The medication currently available to treat asthma offers only symptomatic relief and has several limitations. Thus, new approaches to the management of asthma are needed. Several reports have suggested that asthma can be managed by targeting the remodelling and inflammation of the airway (Bergeron et al. 2010). There is increasing interest in molecules of natural origin as therapeutic agents. Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), is used in traditional Chinese medicine for treating cardiovascular disorders, infection, inflammation, and obesity (Liao et al. 2011). It also exhibits anticancer activity by regulating the TGF-β1 and NF-κB pathways and thereby controls the growth of several types of cancer cells (Jiang and Hu 2009). Anti-inflammatory activity, including innate immunity against bacterial infection, and antiulcer activity have also been demonstrated for evodiamine via its regulation of the inflammatory cascade and inflammasomes (Li et al. 2019; Yu et al. 2013). Thus, the present study evaluated the protective effects of evodiamine against asthma. Conclusion: This report suggests that treatment with evodiamine reduces inflammation and airway remodelling in the lung tissue of asthmatic rats by downregulating the HMGB1/NF-κB/TLR-4 pathway. The data of this study suggest that evodiamine could be explored clinically for the management of asthma.
Background: Evodiamine, which is isolated from Evodia rutaecarpa (Rutaceae), possesses strong anti-inflammatory, immunomodulatory, and antibacterial properties. Methods: Thirty-two Sprague-Dawley (SD) rats were used; asthma was induced by intraperitoneal injection of a mixture of Al(OH)3 (100 mg) and ovalbumin (OA; 1 mg/kg), followed by exposure to a 2% OA aerosol for 1 week. The animals were divided into four groups: control, asthma, and evodiamine-treated (40 and 80 mg/kg p.o.) groups. Serum levels of inflammatory cytokines, interferon gamma (IFN-γ), and immunoglobulin E (IgE) and the infiltration of inflammatory cells in the bronchoalveolar lavage fluid (BALF) of the animals were determined. The thickness of the smooth muscle layer and airway wall in the intact small bronchioles of asthmatic rats was examined as well. Results: Cytokine levels in the serum and BALF were lower in the evodiamine-treated group than in the asthma group. Evodiamine treatment reduced IgE levels, increased IFN-γ levels, and reduced the inflammatory cell infiltrate in the lung tissue of asthmatic rats. The thickness of the smooth muscle layer and airway wall of intact small bronchioles was less in the evodiamine-treated group than in the asthma group. Lower levels of TLR-4, MyD88, NF-κB, and HMGB1 mRNA in lung tissue were measured in the evodiamine-treated group than in the asthma group. Conclusions: Evodiamine treatment protects against asthma, reducing airway inflammation and remodelling in the lung tissue by downregulating the HMGB1/NF-κB/TLR-4 pathway.
8,627
320
[ 79, 2455, 199, 62, 94, 115, 178, 48, 169, 64, 159, 75, 147, 136, 141, 136, 101, 127, 199, 147 ]
25
[ "group", "evodiamine", "asthma", "rats", "lung", "tissue", "lung tissue", "asthma group", "compared", "il" ]
[ "κb hmgb1 mrna", "reduces severity asthma", "induced asthma sensitising", "cells balf asthmatic", "asthma imbalance th1" ]
null
[CONTENT] Asthma | bronchioles | cytokine | lung [SUMMARY]
null
[CONTENT] Asthma | bronchioles | cytokine | lung [SUMMARY]
[CONTENT] Asthma | bronchioles | cytokine | lung [SUMMARY]
[CONTENT] Asthma | bronchioles | cytokine | lung [SUMMARY]
[CONTENT] Asthma | bronchioles | cytokine | lung [SUMMARY]
[CONTENT] Airway Remodeling | Animals | Anti-Inflammatory Agents | Asthma | Bronchoalveolar Lavage Fluid | Cytokines | Disease Models, Animal | Dose-Response Relationship, Drug | Evodia | HMGB1 Protein | Inflammation | NF-kappa B | Quinazolines | Rats | Rats, Sprague-Dawley | Signal Transduction | Toll-Like Receptor 4 [SUMMARY]
null
[CONTENT] Airway Remodeling | Animals | Anti-Inflammatory Agents | Asthma | Bronchoalveolar Lavage Fluid | Cytokines | Disease Models, Animal | Dose-Response Relationship, Drug | Evodia | HMGB1 Protein | Inflammation | NF-kappa B | Quinazolines | Rats | Rats, Sprague-Dawley | Signal Transduction | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] Airway Remodeling | Animals | Anti-Inflammatory Agents | Asthma | Bronchoalveolar Lavage Fluid | Cytokines | Disease Models, Animal | Dose-Response Relationship, Drug | Evodia | HMGB1 Protein | Inflammation | NF-kappa B | Quinazolines | Rats | Rats, Sprague-Dawley | Signal Transduction | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] Airway Remodeling | Animals | Anti-Inflammatory Agents | Asthma | Bronchoalveolar Lavage Fluid | Cytokines | Disease Models, Animal | Dose-Response Relationship, Drug | Evodia | HMGB1 Protein | Inflammation | NF-kappa B | Quinazolines | Rats | Rats, Sprague-Dawley | Signal Transduction | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] Airway Remodeling | Animals | Anti-Inflammatory Agents | Asthma | Bronchoalveolar Lavage Fluid | Cytokines | Disease Models, Animal | Dose-Response Relationship, Drug | Evodia | HMGB1 Protein | Inflammation | NF-kappa B | Quinazolines | Rats | Rats, Sprague-Dawley | Signal Transduction | Toll-Like Receptor 4 [SUMMARY]
[CONTENT] κb hmgb1 mrna | reduces severity asthma | induced asthma sensitising | cells balf asthmatic | asthma imbalance th1 [SUMMARY]
null
[CONTENT] κb hmgb1 mrna | reduces severity asthma | induced asthma sensitising | cells balf asthmatic | asthma imbalance th1 [SUMMARY]
[CONTENT] κb hmgb1 mrna | reduces severity asthma | induced asthma sensitising | cells balf asthmatic | asthma imbalance th1 [SUMMARY]
[CONTENT] κb hmgb1 mrna | reduces severity asthma | induced asthma sensitising | cells balf asthmatic | asthma imbalance th1 [SUMMARY]
[CONTENT] κb hmgb1 mrna | reduces severity asthma | induced asthma sensitising | cells balf asthmatic | asthma imbalance th1 [SUMMARY]
[CONTENT] group | evodiamine | asthma | rats | lung | tissue | lung tissue | asthma group | compared | il [SUMMARY]
null
[CONTENT] group | evodiamine | asthma | rats | lung | tissue | lung tissue | asthma group | compared | il [SUMMARY]
[CONTENT] group | evodiamine | asthma | rats | lung | tissue | lung tissue | asthma group | compared | il [SUMMARY]
[CONTENT] group | evodiamine | asthma | rats | lung | tissue | lung tissue | asthma group | compared | il [SUMMARY]
[CONTENT] group | evodiamine | asthma | rats | lung | tissue | lung tissue | asthma group | compared | il [SUMMARY]
[CONTENT] asthma | remodelling | production | cytokines | inflammatory | cascade | inflammatory cascade | activity | enhances | 2019 [SUMMARY]
null
[CONTENT] group | evodiamine | asthma group | 01 | compared | tissue | lung tissue | asthma | lung | il [SUMMARY]
[CONTENT] evodiamine reduces inflammation | tlr pathway data study | study reveal evodiamine explored | study reveal evodiamine | study reveal | pathway data study | tlr pathway | tlr pathway data | pathway data study reveal | reduces inflammation airway [SUMMARY]
[CONTENT] group | asthma | il | evodiamine | lung | rats | tissue | lung tissue | asthma group | 01 [SUMMARY]
[CONTENT] group | asthma | il | evodiamine | lung | rats | tissue | lung tissue | asthma group | 01 [SUMMARY]
[CONTENT] Evodia | Rutaceae [SUMMARY]
null
[CONTENT] BALF ||| IFN ||| ||| TLR-4, MyD88 | NF-κB | HMGB1 [SUMMARY]
[CONTENT] HMGB1/NF-κB/TLR-4 [SUMMARY]
[CONTENT] Evodia | Rutaceae ||| Thirty-two Sprague-Dawley | 100 mg | OA | 1 mg/kg | 2% | OA | 1 week ||| four | 40 ||| IFN ||| ||| BALF ||| IFN ||| ||| TLR-4, MyD88 | NF-κB | HMGB1 ||| HMGB1/NF-κB/TLR-4 [SUMMARY]
[CONTENT] Evodia | Rutaceae ||| Thirty-two Sprague-Dawley | 100 mg | OA | 1 mg/kg | 2% | OA | 1 week ||| four | 40 ||| IFN ||| ||| BALF ||| IFN ||| ||| TLR-4, MyD88 | NF-κB | HMGB1 ||| HMGB1/NF-κB/TLR-4 [SUMMARY]
Nutritional Behaviors of Polish Adolescents: Results of the Wise Nutrition-Healthy Generation Project.
31337092
Recognition of the dominant dietary behaviors with respect to gender and specific age groups can be helpful in the development of targeted and effective nutritional education. The purpose of the study was to analyze the prevalence of the selected eating behaviors (favorable: Consuming breakfasts, fruit, vegetables, milk and milk beverages, whole grain bread and fish; adverse: Regular consumption of sweets, sugared soft drinks and fast-foods) among Polish adolescents.
BACKGROUND
Data on the nutritional behaviors were collected using a questionnaire. Body mass status was assessed based on weight and height measurements.
METHODS
14,044 students aged 13-19 years old from 207 schools participated in the study. Significant differences were found in the nutritional behaviors depending on age, gender and nutritional status. Favorable nutritional behaviors corresponded with each other, the same relationship was observed for adverse behaviors. The frequency of the majority of healthy eating behaviors decreased with age, whereas the incidence of adverse dietary behaviors increased with age. Underweight adolescents more often consumed sugared soft drinks, sweets and fast food compared to their peers with normal and excessive body mass.
RESULTS
A significant proportion of adolescents showed unhealthy nutritional behaviors. Showing changes in the incidence of nutritional behaviors depending on age, gender and body weight status, we provide data that can inform the development of dietary interventions tailored to promote specific food groups among adolescents on different stages of development to improve their diet quality.
CONCLUSIONS
[ "Adolescent", "Body Weight", "Diet", "Diet, Healthy", "Feeding Behavior", "Female", "Food Preferences", "Health Behavior", "Health Promotion", "Humans", "Male", "Nutritional Status", "Poland", "Young Adult" ]
6682866
1. Introduction
The health of children and adolescents is dependent upon food intake that provides sufficient energy and nutrients to promote optimal physical, cognitive and social growth and development [1,2,3]. However, in practice, the implementation of proper nutrition recommendations in these population groups is extremely difficult due to the existing barriers, e.g., availability of healthy food, inadequate nutritional knowledge of caregivers and children and personal food preferences [4,5,6,7]. A large body of literature indicates the low overall diet quality in children and adolescents, both in terms of the amounts (deficits or excesses) of food/nutrients, and the selection of food groups/food products. One in four Polish female adolescents aged 17–18 years did not eat breakfast regularly, and nearly half of them consumed fish only one time per month [8]. Almost 35% of schoolchildren and adolescents aged 9–13 years from rural parts of Poland regularly ate sweets, and 46% failed to consume vegetables and fruit at least once a day [9]. These inadequacies in the assortment and quantities of food products result in an incorrect supply of energy and nutrients. The average European adolescents’ diet is too high in saturated fatty acids and sodium, whereas too low in monounsaturated fatty acids, vitamin D, folate and iodine [10]. In Poland a significant increasing trend in calcium intake in teenagers aged 11–15 years was noted in the last 20 years, but the observed values are still lower than the recommendations [11]. In the US nearly 40% of total energy consumed by two- to 18-year-olds came in the form of empty calories (including 365 kcal from added sugars) [12]. Poor quality of the diet in early life may impair growth and development rate, and also increases the risk of some diet-related diseases (e.g., obesity, type 2 diabetes mellitus, cardiovascular disease and osteoporosis) in the future [3,13]. 
Although correct nutrition is important throughout the life span, it is possible to distinguish particularly critical periods, i.e., the first 2–3 years [3] and the period of puberty [14,15]. Dramatic physical growth and development during puberty significantly increase requirements for energy, protein, and other nutrients compared to late childhood. Biological changes related to puberty might significantly affect psychosocial development. Rapid changes in body size, shape and composition in girls might lead to poor body image experience and development of eating disorders [16]. At this age, girls may adopt nutritional behaviors leading to weight loss, e.g., alternative diets promoted in the media. Nevertheless, a delay in biological development might lower self-esteem and increase the risk of eating disorders among male teenagers [17]. As young teens are highly influenced by a peer group, the desire to conform may also affect nutritional behaviors and food intake. Moreover, food choices can be used by adolescents as a way to express their independence from families and parents. At this age, young people may prefer to eat fast-food meals in a peer group instead of meals at home with their families. During middle adolescence (15–17 years) the importance of peer groups rises even further, and their influence on individual food choices peaks. Finally, in the late stage of adolescence (18–21 years) the influence of peer groups decreases, whereas an ability to comprehend how current health behaviors may affect the long-term health status significantly increases [18], which in turn can enhance the effectiveness of nutritional education. Although nutritional knowledge does not always translate into proper nutritional behavior [19], some data indicate the association between nutritional knowledge and the diet quality among adolescents [20]. Joulaei et al. 
(2018) observed that an increase in functional nutrition literacy was associated with lower sugar intake and better energy balance among boys and higher dairy intake among girls. Therefore, recognition of the dominant dietary behaviors with respect to gender and specific age groups can be helpful in the development of targeted and effective nutritional education. In Poland, there are many studies on nutritional behaviors of adolescents [8,9,11], but they are limited by small numbers of participants and non-representative sampling. The only study involving a large, representative group of Polish adolescents is the health behavior in school-aged children (HBSC) [21], conducted for over 30 years, now in more than 40 countries, including Poland. The HBSC study does not allow us to assess nutritional behaviors of older adolescents, because it covers only the group of 11-, 13- and 15-year-old boys and girls. No Polish study has covered the full age range of adolescence at the same time and with a uniform methodology. Therefore, the purpose of the present study was to analyze the frequency of occurrence of the behaviors important in terms of overall diet quality amongst Polish adolescents. The frequency of occurrence of nutritional behaviors was analyzed in the age categories with regard to gender and taking into account the criteria of the weight status.
null
null
3. Results
The total sample group consisted of 14,044 students, including 7553 girls and 6491 boys. The detailed characteristics of the group in terms of age distribution, sex and the body mass index are presented in Table 1. Data on examined dietary behaviors are presented in Table 2. All data are expressed as number values and in percentages. 3.1. Characteristics of the Study Group The characteristics of the study population in terms of age distribution and the body mass index (BMI) in the whole group and separately for girls and boys are presented in Table 1. The predominant group was students aged 17 (followed by 18 and 16 olds in girls, and 16 and 18 olds in boys), while the smallest groups were students aged 13 and 19 between both sex groups. There were significant differences in the average BMI between girls and boys in the total group and in the case of all age categories except the 13 year olds. 3.2. Characteristics of Nutritional Behaviors Figure 1 presents the relationship between the examined nutritional behaviors in the whole group. Based on the correspondence analysis, it is possible to indicate the connections between the analyzed nutritional behaviors. Beneficial nutritional behaviors such as consuming breakfast, fruit, vegetables, whole-grain bread, milk or milk beverages and fish were linked together. 
In contrast, unfavorable eating behaviors such as skipping breakfast, low consumption of milk products, fruits, vegetables, fish and whole-grain bread were related. Behaviors such as fast food, sweets and sugared soft drinks consumption were linked together and corresponded more to the adverse nutritional behaviors. The frequencies of examined nutritional behaviors in the total group, and for girls and boys separately, are presented in Table 2. Breakfast was regularly consumed by seven out of ten 13-year-olds but only by half of the 19-year-olds. There was a statistically significant (but small) effect of age in the total group, and separately for girls and boys. Boys were more likely to eat breakfast in comparison to girls, and differences were particularly noticeable in the younger age groups. The frequency of eating at least one serving of fruit per day also decreased with age. A statistically significant (but small) effect was noted in the total group, and separately for girls and boys. Girls were more likely to include fresh fruits in the daily diet in comparison to boys. There was a significant effect of age on the consumption of vegetables in the total group. Half of the 13-year-olds consumed at least two servings of vegetables a day, but the frequency of consumption decreased to 43% for the group of 19-year-olds. Similarly, the influence of age was observed in the groups of girls and boys. Girls consumed at least two servings of vegetables daily more often than boys in all age groups, except for 18-year-olds. The consumption of milk or milk beverages decreased with age. Significant age effects were observed throughout the total group and among girls and boys. In all age groups fewer girls drank milk and fermented milk beverages compared to boys. With age, the proportion of teenagers consuming whole grain bread in the everyday diet decreased. However, age effects were not observed for girls or boys separately. 
Considering gender, in all age groups a greater percentage of girls consumed whole-wheat bread in their usual diet. No significant effect of age was observed on fish consumption, either in the total group or among girls and boys. However, significant gender effects were observed: a greater percentage of boys consumed fish at least once a week in all age groups compared to girls. An effect of age was observed with regard to drinking sugared soft drinks a few times a week, both for the whole group and for each gender. The proportion of adolescents drinking sugared soft drinks increased with age. A higher percentage of boys consumed sugared soft drinks compared to girls in all age categories. Less than half of the students declared consuming sweets more than once a day. No age effects were observed for the whole group or for the girls; a small effect of age was found only among boys. On the other hand, a relationship with gender was observed: in all age groups, a higher percentage of girls declared this behavior compared to boys. There was a significant relationship between fast-food consumption and age. The percentage of adolescents consuming fast food more than twice a week increased with age, and an analogous relationship was observed for girls and boys. In all age groups, a higher number of boys declared such nutritional behaviors compared to girls. The data on the prevalence of the examined nutritional behaviors depending on nutritional status (underweight, normal body mass, overweight and obesity) are presented in Table 3. Analyzing the prevalence of the selected nutritional behaviors in the whole group of adolescents depending on body weight status, significant relationships were observed for all eating behaviors except consuming vegetables. Regular consumption of breakfast was more often declared by adolescents with normal body weight and underweight, in the total group and for both girls and boys.
The percentage of subjects consuming at least one portion of fruit was smallest in the underweight group and largest among the obese adolescents. Consumption of milk and milk beverages was declared by the highest percentage of overweight adolescents and by the lowest percentage of underweight individuals. At the same time, no relationship was observed between this nutritional behavior and nutritional status separately for girls and boys. The frequency of regular consumption of whole-grain bread increased with the body weight category in the whole group and among girls. A relationship between fish consumption and nutritional status was observed only in the whole group. As in the case of bread, the frequency of declared fish consumption increased with the BMI category. For the last three nutritional behaviors (drinking sweet beverages, eating sweets and eating fast food), the incidence decreased with the BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relation to body weight status was observed).
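Per the Methods, the age, gender and weight-status comparisons reported above rely on Pearson's chi-square test, with Cramér's V as the effect-size measure. A minimal, self-contained sketch of that computation for a 2×2 table; the counts below are hypothetical, not the study's data:

```python
import math

def chi_square_2x2(table):
    """Pearson's chi-square (no continuity correction) plus Cramér's V for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    # For 1 degree of freedom, the chi-square tail probability reduces to erfc(sqrt(chi2/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    cramers_v = math.sqrt(chi2 / n)  # min(rows, cols) - 1 == 1 for a 2x2 table
    return chi2, p, cramers_v

# Hypothetical counts: rows = eats breakfast regularly (yes/no), columns = girls/boys
chi2, p, cramers_v = chi_square_2x2([[4200, 4100], [3353, 2391]])
print(f"chi2 = {chi2:.1f}, p = {p:.2g}, Cramér's V = {cramers_v:.3f}")
```

Note how a highly significant p-value can coexist with a small Cramér's V in a sample of this size, matching the paper's repeated "statistically significant (but small) effect" phrasing.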
5. Conclusions
By analyzing the differences in nutritional behaviors between age and gender groups, we provide data that can inform the development of dietary interventions tailored to the needs of adolescents at different stages of development and aimed at improving the quality of their diet. We observed significant differences in the frequencies of the analyzed eating behaviors depending on gender as well as on age. Furthermore, we have shown that the incidence of undesirable eating behaviors is higher among underweight adolescents compared to their peers with an excessive body mass. Information on the most frequent nutritional errors at every stage of adolescence can be used to determine the type of educational messages given when counseling this challenging group; e.g., educational activities regarding regular breakfast consumption should be intensified in older age groups, as the percentage of young people who eat breakfast decreases with age. On the other hand, education on the adverse effects of consuming sweets, sugared soft drinks and fast food should be directed not only to adolescents with excessive body weight, but mainly to those who are underweight, as consumption of these products is more frequent in this group. Moreover, regardless of age and sex, both favorable and adverse nutritional behaviors corresponded with each other. The present findings can be used both for the development of educational programs and for educational activities carried out by teachers at the school level.
[ "2. Materials and Methods", "2.1. General Information", "2.3. Anthropometric Measurements", "2.4. Analysis of Nutritional Behaviors", "2.5. Statistical Analysis", "3.1. Characteristics of the Study Group", "3.2. Characteristics of Nutritional Behaviors", "Strengths and Limitations" ]
[ " 2.1. General Information The presented study is a part of the research and education program Wise nutrition—healthy generation granted by The Coca-Cola Foundation, and addressed to the secondary and upper secondary school youth, their parents and teachers. The main objective of the program was to educate the secondary and upper secondary school students regarding the importance of healthy nutrition and physical activity in the prevention of the diet-related diseases. The research part of the project included assessing the selected dietary behaviors and parameters related to physical activity of the students and performing anthropometric measurements to assess their nutritional status. Those with diagnosed abnormal body mass were invited to the dietary counseling program (two individual meetings with a dietician). The diagram presenting the overall activities within the project is provided in the supplementary materials (Figure S1). Participation in the project was voluntary and totally free of charge for all participants (schools, students and parents). All educational and research activities were carried out in schools participating in the program by trained dieticians. After receiving patronages from the government educational institutions and local authorities, written invitations were sent to all secondary and upper secondary schools in Poland. Nearly 14,000 educational institutions listed in the electronic register of schools of the Minister for National Education were invited to participate. Finally, 2058 schools attended by nearly 450,000 students joined the project in 2013–2015.\nThis paper focused on the results concerning nutritional behaviors of students (Figure S1). The program was carried out following the standards required by the Helsinki Declaration, and the protocol was approved by the Scientific Committee of the Polish Society of Dietetics. School directors provided written informed consent to participate in the study. 
Parents were provided with a detailed fact sheet describing the program and had to give written informed consent if they wanted their child to participate. All students over 16 years of age were asked to give their written informed consent to participate in the study.\n 2.2. Study Participants To examine the selected nutritional behaviors and nutritional status of Polish teenagers, participants were recruited from schools participating in the project. To ensure a representative selection of students, these schools were randomly selected using the stratified sampling method from all of the 2058 enrolled institutions. The sampling was stratified by province and location (large, medium, small city and countryside), as well as the type of school (secondary and upper secondary). Secondary schools (called “gimnazjum” in Poland) are compulsory for all adolescents aged 13–16 years, and are located close to the students’ place of residence. Upper secondary schools (high schools, technical schools and basic vocational schools) include, depending on the type, youth from 16 to 20 years of age. As in the case of secondary schools, students typically live with their families and commute to school. Within the selected schools, as the next step, students were randomly selected from the class registry. Exclusion criteria included: Diagnosed disease that required the use of a special diet, pregnancy or lactation in girls or lack of written consent. All the personal data of participants were fully anonymized. The schools, and consequently, students came from all over Poland, therefore the research was of a nationwide character.
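The two-stage selection described above (schools grouped by province, location and school type, then drawn at random within each stratum) can be sketched as follows. The school records, stratum labels and the ~10% fraction drawn per stratum are illustrative assumptions, not the study's actual sampling frame:

```python
# Sketch of stratified random sampling of schools; all records are hypothetical.
import random
from collections import defaultdict

def stratified_sample(schools, key, fraction, seed=0):
    """Randomly sample `fraction` of schools within each stratum defined by `key`."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for school in schools:
        strata[key(school)].append(school)
    sampled = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sampled.extend(rng.sample(members, k))  # without replacement
    return sampled

# Hypothetical sampling frame of 2058 schools
schools = [
    {"id": i,
     "province": f"prov{i % 16}",
     "location": ["large", "medium", "small", "countryside"][i % 4],
     "type": ["secondary", "upper_secondary"][i % 2]}
    for i in range(2058)
]

sampled = stratified_sample(
    schools,
    key=lambda s: (s["province"], s["location"], s["type"]),
    fraction=0.10,  # roughly the study's ratio of 207 enrolled schools out of 2058
)
print(len(sampled))
```

Sampling within each (province, location, type) stratum keeps the drawn schools proportionally representative of the frame, which is the rationale the text gives for stratification.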
In total, 207 schools of the 2058 institutions were enrolled (~10%), and finally 14,044 students participated in the study, including 7553 (53.8%) girls and 6491 (46.2%) boys. The age categories for the studied group were adopted in accordance with the HBSC methodology [21].\n 2.3.
Anthropometric Measurements The assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All the measurements were carried out with the equipment provided by The Polish Society of Dietetics: Digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dieticians conducting the measurements were specially trained and followed the same procedures according to Anthropometry Procedures Manual by National Health and Nutrition Examination Survey (NHANES) [22] to minimize bias. The school was obliged to provide a room suitable for the measurements.\nWeight of the individuals was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes. From the final result 0.5 kg was subtracted (predicted weight of the basic clothes).\nFor height measurements individuals stood on a flat surface in an upright position with their back against the wall, and the heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22]. The height measurement was conducted twice to the nearest 0.1 cm, and the mean value was recorded.\nBased on the body height and weight data, body mass index (BMI) value was calculated. BMI was calculated as body weight in kilograms divided by the square of height in meters. Depending on the age of the subjects different criteria for assessing the body weight status were used. For individuals aged 13–18 years old, calculated BMI value was plotted on gender BMI centile charts for age (with an accuracy of one month) [23]. 
The percentile value was read from percentile grids and the body mass status was assessed according to the International Obesity Task Force (IOTF) criteria (underweight <5 percentile, normal weight 5–85 percentile, overweight >85 and ≤95 percentile, obese >95 percentile) [24]. For students above the age of 18 years old, the standard World Health Organization (WHO) body mass index criteria were applied: Underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2 and obesity ≥30 kg/m2 [25].\n 2.4. Analysis of Nutritional Behaviors Data on the selected nutritional behaviors were collected prior to the anthropometric measurements and dietary counseling. The paper questionnaire containing questions about the selected nutritional practices, physical activity and self-esteem satisfaction (data not included in this article) was administered to individuals by a dietitian. This provided the opportunity to clarify possible doubts or ask additional questions. After its completion the questionnaire was collected by the dietician. Due to the large sample group and direct methods of data acquisition, it was decided that the questionnaire had to be short and contain questions about the critical determinants of teenagers' diet quality.
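The BMI computation and the WHO adult classification described in Section 2.3 can be sketched as below. Only the adult cutoffs quoted in the text are implemented; 13–18-year-olds would instead require the IOTF age- and sex-specific centile charts, which are not reproduced here. The example student is hypothetical:

```python
# A minimal sketch, assuming only the WHO adult cutoffs cited in Section 2.3.
def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = body weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def who_adult_category(bmi_value: float) -> str:
    """Classify an adult BMI value per the WHO cutoffs (<18.5 / 18.5-24.9 / 25-29.9 / >=30)."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

value = bmi(58.0, 1.70)  # hypothetical 19-year-old student
print(round(value, 1), who_adult_category(value))  # → 20.1 normal
```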
Taking into account the health behavior in school-aged children (HBSC) questionnaire (developed for 11-, 13- and 15-year-olds) concerning nutritional behaviors [26], and available data on the nutritional characteristics of the Polish youth population [21], nine questions were finally formulated with the possibility of answering “yes” or “no”. The first six questions concerned favorable aspects of nutritional behaviors, while the last three questions referred to adverse nutritional practices. Healthy nutritional behaviors included: (1) Regular consumption of breakfast before leaving for school, (2) daily consumption of at least one serving of fresh fruit and (3) daily consumption of at least two servings of vegetables (recommended diet quality indicators adapted from the HBSC questionnaire [26]). Additionally, taking into account the importance for the overall diet quality and the low consumption in the Polish population [21,27], three extra questions were added: (4) Daily consumption of milk and/or fermented milk beverages (as the main source of calcium in the diet), (5) daily consumption of whole grains (as the main source of complex carbohydrates and dietary fiber) and (6) consumption of fish at least once a week (as the main source of docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), vitamin D and iodine in the diet). On the other hand, negative dietary determinants (unfavorable nutritional practices increasing the share of free sugars, saturated fat and trans fatty acids in the diet) were considered as: Drinking sugared soft drinks (soda and other carbonated soft drinks) several times during the week, eating sweets more than once a day (adapted from HBSC), and consuming fast food more than twice a week. Prior to the main study, a pilot study (n = 50) was conducted to examine whether the questions were understandable to the respondents.
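One plausible way to encode the nine yes/no items just described (six favorable, three unfavorable) is as simple per-respondent counts; the item keys and the sample answers below are hypothetical illustrations, not the questionnaire's wording:

```python
# Hedged sketch: scoring a respondent's nine binary questionnaire items.
FAVORABLE = ["breakfast", "fruit", "vegetables", "milk", "wholegrain", "fish"]
UNFAVORABLE = ["soft_drinks", "sweets", "fast_food"]

def behavior_counts(answers: dict) -> tuple:
    """Return (favorable_count, unfavorable_count) from yes/no answers."""
    fav = sum(answers[item] == "yes" for item in FAVORABLE)
    unfav = sum(answers[item] == "yes" for item in UNFAVORABLE)
    return fav, unfav

student = {"breakfast": "yes", "fruit": "yes", "vegetables": "no",
           "milk": "yes", "wholegrain": "no", "fish": "yes",
           "soft_drinks": "yes", "sweets": "no", "fast_food": "no"}
print(behavior_counts(student))  # → (4, 1)
```

Counts like these could feed directly into the behavior-by-group contingency tables analyzed in Section 2.5.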
The questionnaire was validated: Repeatability was verified by determining the correlation coefficient between results obtained twice in the same group (n = 50, age 13–19 years old); correlation coefficients for individual questions were on average 0.76 (95% CI = 0.71–0.83) and ranged from 0.18 to 0.96.\n 2.5. Statistical Analysis Statistical data processing was performed using Statistica version 13.1 (Copyright©StatSoft, Inc, 1984–2014, Cracow, Poland). Data were analyzed in the total group, according to age, gender and body weight status. Statistical significance for nominal (categorical) variables was determined using the Pearson’s chi-square test. Additionally, the contingency coefficient Cramér’s V was used to indicate the strength of association between categorical variables.
Quantitative data were tested for normality of distribution; in the absence of normality, the Mann–Whitney test was used for comparisons of independent groups. Correspondence analysis was used to study the relationships between dietary behaviors. Differences were considered significant at p ≤ 0.05.", "The presented study is a part of the research and education program Wise nutrition—healthy generation granted by The Coca-Cola Foundation, and addressed to the secondary and upper secondary school youth, their parents and teachers. The main objective of the program was to educate the secondary and upper secondary school students regarding the importance of healthy nutrition and physical activity in the prevention of the diet-related diseases. The research part of the project included assessing the selected dietary behaviors and parameters related to physical activity of the students and performing anthropometric measurements to assess their nutritional status. Those with diagnosed abnormal body mass were invited to the dietary counseling program (two individual meetings with a dietician). The diagram presenting the overall activities within the project is provided in the supplementary materials (Figure S1).
Participation in the project was voluntary and totally free of charge for all participants (schools, students and parents). All educational and research activities were carried out in schools participating in the program by trained dieticians. After receiving patronages from the government educational institutions and local authorities, written invitations were sent to all secondary and upper secondary schools in Poland. Nearly 14,000 educational institutions listed in the electronic register of schools of the Minister for National Education were invited to participate. Finally, 2058 schools attended by nearly 450,000 students joined the project in 2013–2015.\nThis paper focused on the results concerning nutritional behaviors of students (Figure S1). The program was carried out following the standards required by the Helsinki Declaration, and the protocol was approved by the Scientific Committee of the Polish Society of Dietetics. School directors provided written informed consent to participate in the study. Parents were provided with a detailed fact sheet describing the program and had to give written informed consent if they wanted their child to participate. All students over 16 years of age were asked to give their written informed consent to participate in the study.", "The assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All the measurements were carried out with the equipment provided by The Polish Society of Dietetics: Digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dieticians conducting the measurements were specially trained and followed the same procedures according to Anthropometry Procedures Manual by National Health and Nutrition Examination Survey (NHANES) [22] to minimize bias. 
The school was obliged to provide a room suitable for the measurements.
Weight was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes. From the final result 0.5 kg was subtracted (the predicted weight of the basic clothes).
For height measurements, individuals stood on a flat surface in an upright position with their back against the wall, and the heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22]. The height measurement was conducted twice to the nearest 0.1 cm, and the mean value was recorded.
Based on the body height and weight data, the body mass index (BMI) was calculated as body weight in kilograms divided by the square of height in meters. Depending on the age of the subjects, different criteria for assessing body weight status were used. For individuals aged 13–18 years old, the calculated BMI value was plotted on gender-specific BMI-for-age centile charts (with an accuracy of one month) [23]. The percentile value was read from the percentile grids, and body mass status was assessed according to the International Obesity Task Force (IOTF) criteria (underweight <5th percentile, normal weight 5th–85th percentile, overweight >85th and ≤95th percentile, obese >95th percentile) [24]. For students above the age of 18 years, the standard World Health Organization (WHO) BMI criteria were applied: underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2 and obesity ≥30 kg/m2 [25].
2.4. Analysis of Nutritional Behaviors
Data on the selected nutritional behaviors were collected prior to the anthropometric measurements and dietary counseling.
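For readers who want to reproduce the classification, the BMI formula and the WHO adult cutoffs described in the anthropometric subsection can be sketched in a few lines of Python (an illustrative sketch; the function names are ours, and the IOTF classification for 13–18-year-olds additionally requires the gender- and age-specific centile reference data [23], which are not reproduced here):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight in kilograms divided by squared height in meters."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2

def who_adult_category(bmi_value: float) -> str:
    """WHO body-weight status for subjects above 18 years of age [25]."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal weight"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

# Example: 70.0 kg (after subtracting 0.5 kg for clothing) and 175.0 cm
value = bmi(70.0, 175.0)               # ~22.9 kg/m2
category = who_adult_category(value)   # "normal weight"
```

Note that the adult cutoffs map directly onto the half-open intervals in the text (18.5–24.9, 25–29.9, ≥30 kg/m2); only the under-18 assessment needs external percentile grids.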
The paper questionnaire, containing questions about the selected nutritional practices, physical activity and self-esteem satisfaction (data not included in this article), was administered to the participants by a dietitian. This provided the opportunity to clarify possible doubts or ask additional questions. After its completion the questionnaire was collected by the dietician. Due to the large sample group and the direct method of data acquisition, it was decided that the questionnaire had to be short and contain questions about the critical determinants of teenagers' diet quality. Taking into account the health behavior in school-aged children (HBSC) questionnaire (developed for 11, 13 and 15 year olds) concerning nutritional behaviors [26], and available data on the nutritional characteristics of the Polish youth population [21], nine questions were finally formulated with the possibility of answering “yes” or “no”. The first six questions concerned favorable aspects of nutritional behaviors, while the last three questions referred to adverse nutritional practices. Healthy nutritional behaviors included: (1) Regular consumption of breakfast before leaving for school, (2) daily consumption of at least one serving of fresh fruit and (3) daily consumption of at least two servings of vegetables (recommended diet quality indicators adapted from the HBSC questionnaire [26]). Additionally, taking into account their importance for overall diet quality and their low consumption in the Polish population [21,27], three extra questions were added: (4) Daily consumption of milk and/or fermented milk beverages (as the main source of calcium in the diet), (5) daily consumption of whole grains (as the main source of complex carbohydrates and dietary fiber) and (6) consumption of fish at least once a week (as the main source of docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), vitamin D and iodine in the diet).
On the other hand, negative dietary determinants (unfavorable nutritional practices increasing the share of free sugars, saturated fat and trans fatty acids in the diet) were considered to be: Drinking sugared soft drinks (soda and other carbonated soft drinks) several times during the week, eating sweets more than once a day (adapted from HBSC) and consuming fast food more than twice a week. Prior to the main study, a pilot study (n = 50) was conducted to examine whether the questions were understandable to the respondents. The questionnaire was validated: Repeatability was verified by determining the correlation coefficient between the results obtained twice in the same group (n = 50, age 13–19 years old); correlation coefficients for individual questions were on average 0.76 (95% CI = 0.71–0.83) and ranged from 0.18 to 0.96.
2.5. Statistical Analysis
Statistical data processing was performed using Statistica version 13.1 (Copyright © StatSoft, Inc, 1984–2014, Cracow, Poland). Data were analyzed in the total group and according to age, gender and body weight status. Statistical significance for nominal (categorical) variables was determined using Pearson’s chi-square test. Additionally, the contingency coefficient Cramér’s V was used to indicate the strength of association between categorical variables. Quantitative data were tested for normality of distribution; in the absence of normality, the Mann–Whitney test was used for comparisons of independent groups. Correspondence analysis was used to study the relationships between dietary behaviors. Differences were considered significant at p ≤ 0.05.
3. Results
3.1. Characteristics of the Study Group
The characteristics of the study population in terms of age distribution and body mass index (BMI), in the whole group and separately for girls and boys, are presented in Table 1. The predominant group was students aged 17 (followed by 18- and 16-year-olds among girls, and 16- and 18-year-olds among boys), while the smallest groups were students aged 13 and 19 in both sex groups.
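As a side note on the statistical methods: the combination of Pearson’s chi-square test with Cramér’s V as an effect size can be sketched as follows (a generic illustration using SciPy rather than the Statistica software named in the text; the contingency table below is hypothetical, not taken from the study):

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi_square_with_cramers_v(table: np.ndarray) -> tuple[float, float]:
    """Pearson chi-square p-value and Cramér's V for a contingency table."""
    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    min_dim = min(table.shape) - 1
    # Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
    v = float(np.sqrt(chi2 / (n * min_dim)))
    return p, v

# Hypothetical 2 x 2 table: breakfast eaten (rows: boys, girls; cols: yes, no)
table = np.array([[700, 300],
                  [620, 380]])
p, v = chi_square_with_cramers_v(table)
# A small Cramér's V despite a significant p-value mirrors the
# "statistically significant (but small)" effects reported below.
```

With very large samples such as this study’s (n = 14,044), even trivial differences reach p ≤ 0.05, which is why an effect-size measure like Cramér’s V is reported alongside the test.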
There were significant differences in average BMI between girls and boys in the total group and in all age categories except the 13-year-olds.
3.2. Characteristics of Nutritional Behaviors
Figure 1 presents the relationships between the examined nutritional behaviors in the whole group. Based on the correspondence analysis, it is possible to indicate connections between the analyzed nutritional behaviors. Beneficial nutritional behaviors such as consuming breakfast, fruit, vegetables, whole-grain bread, milk or milk beverages and fish were linked together. Conversely, unfavorable eating behaviors such as skipping breakfast and low consumption of milk products, fruits, vegetables, fish and whole-grain bread were related. Behaviors such as fast food, sweets and sugared soft drink consumption were linked together and corresponded more to the adverse nutritional behaviors.
The frequencies of the examined nutritional behaviors in the total group, and for girls and boys separately, are presented in Table 2.
Breakfast was regularly consumed by seven out of ten 13-year-olds but only by half of the 19-year-olds. There was a statistically significant (but small) effect of age in the total group, and separately for girls and boys. Boys were more likely to eat breakfast in comparison to girls, and the differences were particularly noticeable in the younger age groups. The frequency of eating at least one serving of fruit per day also decreased with age. A statistically significant (but small) effect was noted in the total group, and separately for girls and boys. Girls were more likely than boys to include fresh fruit in the daily diet. There was a significant effect of age on the consumption of vegetables in the total group. Half of the 13-year-olds consumed at least two servings of vegetables a day, but the frequency of consumption decreased to 43% in the group of 19-year-olds. Similar effects of age were observed in the groups of girls and boys.
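The correspondence analysis underlying Figure 1 can be sketched with a plain singular value decomposition of the standardized residual matrix (a generic NumPy implementation, not the Statistica routine used in the study; the example table is hypothetical):

```python
import numpy as np

def correspondence_analysis(counts: np.ndarray):
    """Principal row and column coordinates (first two axes) of a contingency table."""
    P = counts / counts.sum()               # correspondence matrix
    r = P.sum(axis=1)                       # row masses
    c = P.sum(axis=0)                       # column masses
    # standardized residuals, whose SVD yields the principal axes
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]   # principal row coordinates
    cols = (Vt.T * sv) / np.sqrt(c)[:, None]
    return rows[:, :2], cols[:, :2]

# Hypothetical counts of three behaviors (rows) across three age groups (columns)
counts = np.array([[80, 60, 40],
                   [30, 50, 70],
                   [50, 55, 45]])
row_pts, col_pts = correspondence_analysis(counts)
# Categories plotted close together on the resulting map are associated,
# as in the clusters of favorable vs. unfavorable behaviors in Figure 1.
```

Proximity on the first two axes is what licenses statements such as "behaviors were linked together" in the description of Figure 1.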
Girls consumed at least two servings of vegetables daily more often than boys in all age groups except the 18-year-olds. The consumption of milk or milk beverages decreased with age. Significant age effects were observed in the total group and among both girls and boys. In all age groups, fewer girls than boys drank milk and fermented milk beverages. With age, the proportion of teenagers consuming whole-grain bread in the everyday diet decreased; however, age effects were not observed for girls or boys separately. Considering gender, in all age groups a greater percentage of girls consumed whole-wheat bread in their usual diet. No significant effect of age on fish consumption was observed, either in the total group or among girls and boys. However, significant gender effects were observed: A greater percentage of boys than girls consumed fish at least once a week in all age groups. An effect of age on drinking sugared soft drinks a few times a week was observed for the whole group and for both genders. The proportion of adolescents drinking sugared soft drinks increased with age, and a higher percentage of boys than girls consumed them in all age categories. Less than half of the students declared consuming sweets more than once a day. No age effects were observed for the whole group or for girls; a small effect of age was found only among boys. On the other hand, a relationship with gender was observed: In all age groups, a higher percentage of girls than boys declared such behavior. There was a significant relationship between fast-food consumption and age. The percentage of adolescents consuming fast food more than twice a week increased with age, and an analogous relationship was observed for girls and boys.
In all age groups, a higher proportion of boys than girls declared such nutritional behaviors.
The data on the prevalence of the examined nutritional behaviors depending on nutritional status (underweight, normal body mass, overweight and obesity) are presented in Table 3.
Analyzing the prevalence of the selected nutritional behaviors in the whole group of adolescents depending on body weight status, significant relationships were observed for all eating behaviors except consuming vegetables. Regular consumption of breakfast was more often declared by adolescents with normal body weight and underweight, in the total group and for both girls and boys. The percentage of subjects consuming at least one portion of fruit was the smallest in the underweight group and the largest among the obese adolescents. Consumption of milk and milk beverages was declared by the highest percentage of overweight adolescents and by the smallest percentage of underweight individuals. At the same time, no relationship was observed between this nutritional behavior and nutritional status separately for girls and boys. The frequency of regular consumption of whole-grain bread increased with the body weight category in the whole group and among girls. A relationship between fish consumption and nutritional status was observed only in the whole group. As in the case of bread, the frequency of declared fish consumption increased with the BMI category. In the case of the last three nutritional behaviors, drinking sweet beverages, eating sweets and eating fast food, the incidence of these behaviors decreased with the BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relationship with body weight status was observed).
Strengths and Limitations
The strength of the study is the sample size. To our knowledge, there is no research on such a scale covering all age categories over a large geographic area.
With such a large sample, the method of obtaining data is also an advantage. All questionnaires were filled in with a trained dietician, who could resolve the respondents’ doubts on an ongoing basis. Moreover, all anthropometric data were obtained through measurements conducted by a dietician, which ensured reliable results and minimized bias.
Respondents for our study were recruited from schools participating in the project, which can be a certain limitation. However, the number of schools allowed a random selection of the sample taking into account different types of institutions and their geographic locations. The small number of questions, with very limited answer options, may also be a limitation. However, the questions were developed on the basis of large, international studies on the nutritional behaviors of school-aged children [21,26], and include the most important healthy and unhealthy behaviors concerning nutrition. Additionally, the questionnaire was validated before the main study.
1. Introduction
The health of children and adolescents depends upon food intake that provides sufficient energy and nutrients to promote optimal physical, cognitive and social growth and development [1,2,3]. However, in practice, the implementation of proper nutrition recommendations in these population groups is extremely difficult due to existing barriers, e.g., the availability of healthy food, inadequate nutritional knowledge of caregivers and children and personal food preferences [4,5,6,7]. A great body of the literature indicates low overall diet quality in children and adolescents, both in terms of the amounts (deficits or excesses) of food/nutrients and the selection of food groups/food products. One in four Polish female adolescents aged 17–18 years did not eat breakfast regularly, and nearly half of them consumed fish only once per month [8]. Almost 35% of schoolchildren and adolescents aged 9–13 years from rural parts of Poland regularly ate sweets, and 46% failed to consume vegetables and fruit at least once a day [9]. These inadequacies in the assortment and quantities of food products result in an incorrect supply of energy and nutrients. The average European adolescent’s diet is too high in saturated fatty acids and sodium, and too low in monounsaturated fatty acids, vitamin D, folate and iodine [10]. In Poland, a significant increasing trend in calcium intake among teenagers aged 11–15 years was noted in the last 20 years, but the observed values are still lower than the recommendations [11]. In the US, nearly 40% of the total energy consumed by two- to 18-year-olds came in the form of empty calories (including 365 kcal from added sugars) [12].
Poor quality of the diet in early life may impair growth and the rate of development, and also increases the risk of some diet-related diseases (e.g., obesity, type 2 diabetes mellitus, cardiovascular disease and osteoporosis) in the future [3,13].
Although correct nutrition is important throughout the life span, it is possible to distinguish particularly critical periods, i.e., the first 2–3 years [3] and the period of puberty [14,15]. Dramatic physical growth and development during puberty significantly increase requirements for energy, protein and other nutrients compared to late childhood. Biological changes related to puberty might significantly affect psychosocial development. Rapid changes in body size, shape and composition in girls might lead to poor body image and the development of eating disorders [16]. At this age, girls may adopt nutritional behaviors leading to weight loss, e.g., alternative diets promoted in the media. Likewise, a delay in biological development might lower self-esteem and increase the risk of eating disorders among male teenagers [17]. As young teens are highly influenced by their peer group, the desire to conform may also affect nutritional behaviors and food intake. Moreover, food choices can be used by adolescents as a way to express their independence from families and parents. At this age, young people may prefer to eat fast-food meals in a peer group instead of meals at home with their families. During middle adolescence (15–17 years) the importance of peer groups rises even further, and their influence on individual food choices peaks.
Finally, in the late stage of adolescence (18–21 years) the influence of peer groups decreases, whereas the ability to comprehend how current health behaviors may affect long-term health status significantly increases [18], which in turn can enhance the effectiveness of nutritional education.
Although nutritional knowledge does not always translate into proper nutritional behavior [19], some data indicate an association between nutritional knowledge and diet quality among adolescents [20]. Joulaei et al. (2018) observed that an increase in functional nutrition literacy was associated with lower sugar intake and better energy balance among boys, and higher dairy intake among girls. Therefore, recognition of the dominant dietary behaviors with respect to gender and specific age groups can be helpful in the development of targeted and effective nutritional education.
In Poland, there are many studies on the nutritional behaviors of adolescents [8,9,11], but their limitations are the small numbers of participants and the lack of representativeness in their selection. The only study involving a large, representative group of Polish adolescents is the health behavior in school-aged children (HBSC) study [21], conducted for over 30 years, now in more than 40 countries, including Poland. The HBSC study does not allow us to assess the nutritional behaviors of older adolescents, because it covers only 11-, 13- and 15-year-old boys and girls. In Poland there is no research covering the wide age range of respondents with all periods of adolescence at the same time and with the same methodology. Therefore, the purpose of the present study was to analyze the frequency of occurrence of behaviors important for overall diet quality among Polish adolescents. The frequency of occurrence of nutritional behaviors was analyzed in age categories with regard to gender and taking into account body weight status criteria.
2.2. Study Participants
To examine the selected nutritional behaviors and the nutritional status of Polish teenagers, participants were recruited from schools participating in the project. To ensure a representative selection of students, these schools were randomly selected using the stratified sampling method from all of the 2058 enrolled institutions. The sampling was stratified by province and location (large, medium or small city and countryside), as well as the type of school (secondary and upper secondary). Secondary schools (called “gimnazjum” in Poland) are compulsory for all adolescents aged 13–16 years, and are located close to the students’ place of residence. Upper secondary schools (high schools, technical schools and basic vocational schools) include, depending on the type, youth from 16 to 20 years of age. As in the case of secondary schools, students typically live with their families and commute to school. Within the selected schools, as the next step, students were randomly selected from the class registry. Exclusion criteria included: A diagnosed disease that required the use of a special diet, pregnancy or lactation in girls, or lack of written consent. All personal data of participants were fully anonymized. The schools, and consequently the students, came from all over Poland; therefore, the research was of a nationwide character.
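The stratified school sampling described above can be illustrated with a short sketch (the dictionary keys, the register contents and the 10% fraction are illustrative of the design, not the authors' actual code or data):

```python
import random
from collections import defaultdict

def stratified_sample(schools, fraction=0.10, seed=2013):
    """Draw roughly `fraction` of schools within each stratum defined by
    province, location and school type (at least one school per stratum)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for school in schools:
        strata[(school["province"], school["location"], school["type"])].append(school)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))
        sample.extend(rng.sample(group, k))
    return sample

# Hypothetical register: 2 provinces x 2 school types, 10 schools each
schools = [{"province": p, "location": "city", "type": t, "id": f"{p}-{t}-{i}"}
           for p in ("mazowieckie", "malopolskie")
           for t in ("secondary", "upper secondary")
           for i in range(10)]
selected = stratified_sample(schools)  # 4 strata at ~10% -> one school each
```

Sampling within strata, rather than from the pooled register, is what guarantees that every province, settlement size and school type is represented in proportion.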
In total, 207 of the 2058 institutions were enrolled (~10%), and finally 14,044 students participated in the study, including 7553 (53.8%) girls and 6491 (46.2%) boys. The age categories for the studied group were adopted in accordance with the HBSC methodology [21].
Anthropometric Measurements The assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All the measurements were carried out with the equipment provided by The Polish Society of Dietetics: Digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dieticians conducting the measurements were specially trained and followed the same procedures according to Anthropometry Procedures Manual by National Health and Nutrition Examination Survey (NHANES) [22] to minimize bias. The school was obliged to provide a room suitable for the measurements.\nWeight of the individuals was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes. From the final result 0.5 kg was subtracted (predicted weight of the basic clothes).\nFor height measurements individuals stood on a flat surface in an upright position with their back against the wall, and the heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22]. The height measurement was conducted twice to the nearest 0.1 cm, and the mean value was recorded.\nBased on the body height and weight data, body mass index (BMI) value was calculated. BMI was calculated as body weight in kilograms divided by the square of height in meters. Depending on the age of the subjects different criteria for assessing the body weight status were used. For individuals aged 13–18 years old, calculated BMI value was plotted on gender BMI centile charts for age (with an accuracy of one month) [23]. 
The percentile value was read from percentile grids and the body mass status was assessed according to the International Obesity Task Force (IOTF) criteria (underweight <5 percentile, normal weight 5–85 percentile, overweight >85 and ≤95 percentile, obese >95 percentile) [24]. For students above the age of 18 years old, the standard World Health Organization (WHO) body mass index criteria were applied: Underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2 and obesity ≥30 kg/m2 [25].\nThe assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All the measurements were carried out with the equipment provided by The Polish Society of Dietetics: Digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dieticians conducting the measurements were specially trained and followed the same procedures according to Anthropometry Procedures Manual by National Health and Nutrition Examination Survey (NHANES) [22] to minimize bias. The school was obliged to provide a room suitable for the measurements.\nWeight of the individuals was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes. From the final result 0.5 kg was subtracted (predicted weight of the basic clothes).\nFor height measurements individuals stood on a flat surface in an upright position with their back against the wall, and the heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22]. 
The height measurement was conducted twice to the nearest 0.1 cm, and the mean value was recorded.\nBased on the body height and weight data, body mass index (BMI) value was calculated. BMI was calculated as body weight in kilograms divided by the square of height in meters. Depending on the age of the subjects different criteria for assessing the body weight status were used. For individuals aged 13–18 years old, calculated BMI value was plotted on gender BMI centile charts for age (with an accuracy of one month) [23]. The percentile value was read from percentile grids and the body mass status was assessed according to the International Obesity Task Force (IOTF) criteria (underweight <5 percentile, normal weight 5–85 percentile, overweight >85 and ≤95 percentile, obese >95 percentile) [24]. For students above the age of 18 years old, the standard World Health Organization (WHO) body mass index criteria were applied: Underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2 and obesity ≥30 kg/m2 [25].\n 2.4. Analysis of Nutritional Behaviors Data on the selected nutritional behaviors were collected prior to the anthropometric measurements and dietary counseling. The paper questionnaire containing questions about the selected nutritional practices, physical activity and self-esteem satisfaction (data not included in this article) was carried out in individuals by a dietitian. This provided the opportunity to clarify possible doubts or ask additional questions. After its completion the questionnaire was collected by a dietician. Due to the large sample group and direct methods of data acquisition, it was decided that the questionnaire has to be short, and must contain questions about the critical determinants of teenagers diet quality. 
Taking into account the health behavior in school-aged children (HBSC) questionnaire (developed for 11, 13 and 15 year olds) concerning nutritional behaviors [26], and available data on nutritional characteristics of the Polish youth population [21], nine questions were finally formulated with the possibility of answering “yes” or “no”. The first six questions concerned favorable aspects of the nutritional behaviors, while the last three questions referred to the adverse nutritional practices. Healthy nutritional behaviors included: (1) Regular consumption of breakfast before leaving for school, (2) daily consumption of at least one serving of fresh fruit and (3) daily consumption of at least two servings of vegetables (recommended diet quality indicators adapted from HBSC questionnaire [26]. Additionally, taking into account the importance for the overall diet quality and the low consumption in the Polish population [21,27], the three extra questions were added: (4) Daily consumption of milk and/or milk fermented beverages (as the main source of calcium in the diet), (5) daily consumption of whole grains (as the main source of complex carbohydrates and dietary fiber) and (6) consumption of fish at least once week (as the main source of docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), vitamin D and iodine in the diet). On the other hand, negative dietary determinants (unfavorable nutritional practices increasing the share of free sugars, saturated fat and trans fatty acids in the diet) were considered as: Drinking sugared soft drinks (soda and other carbonated soft drinks) several times during the week, eating sweets more than once a day (adapted from HBSC), and consuming fast food more than twice a week. Prior to the main study, a pilot study (n = 50) was conducted to examine whether the questions were understandable to the respondents. 
The questionnaire was validated: repeatability was verified by determining the correlation coefficient between the results obtained twice in the same group (n = 50, age 13–19 years); the correlation coefficients for individual questions averaged 0.76 (95% CI = 0.71–0.83) and ranged from 0.18 to 0.96.
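The test-retest repeatability check described above can be sketched as follows: answers to one yes/no item from the two administrations are coded 0/1 and their Pearson correlation is computed (for binary data this equals the phi coefficient). The responses below are invented for illustration:

```python
# Test-retest repeatability sketch: Pearson correlation of 0/1-coded
# answers to the same yes/no item given twice by the same respondents.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented answers from eight respondents, two administrations:
first_round  = [1, 0, 1, 1, 0, 1, 0, 1]
second_round = [1, 0, 1, 0, 0, 1, 0, 1]
print(round(pearson(first_round, second_round), 2))  # → 0.77
```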
2.5. Statistical Analysis

Statistical data processing was performed using Statistica version 13.1 (StatSoft, Inc., Cracow, Poland). Data were analyzed in the total group and according to age, gender and body weight status. Statistical significance for nominal (categorical) variables was determined using Pearson's chi-square test. Additionally, the contingency coefficient Cramér's V was used to indicate the strength of association between categorical variables.
Quantitative data were tested for normality of distribution; where normality was absent, the Mann–Whitney test was used for comparisons of independent groups. Correspondence analysis was used to study the relationships between dietary behaviors. Differences were considered significant at p ≤ 0.05.

The presented study is part of the research and education program Wise nutrition—healthy generation, granted by The Coca-Cola Foundation and addressed to secondary and upper secondary school youth, their parents and teachers. The main objective of the program was to educate secondary and upper secondary school students about the importance of healthy nutrition and physical activity in the prevention of diet-related diseases. The research part of the project included assessing selected dietary behaviors and parameters related to the students' physical activity, and performing anthropometric measurements to assess their nutritional status. Those with diagnosed abnormal body mass were invited to the dietary counseling program (two individual meetings with a dietitian). A diagram of the overall activities within the project is provided in the supplementary materials (Figure S1).
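The two association measures named in the statistical-analysis section, Pearson's chi-square and Cramér's V, can be illustrated in pure Python. This is a sketch of the formulas only (no p-value lookup), not the Statistica procedures used in the study, and the example counts are invented:

```python
# Pearson's chi-square statistic for a contingency table (list of rows)
# and Cramér's V effect size derived from it. Illustrative only.
import math

def chi_square(table):
    """Sum over cells of (observed - expected)^2 / expected."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

def cramers_v(table):
    """Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0]))
    return math.sqrt(chi_square(table) / (n * (k - 1)))

# Invented 2x2 counts, e.g. behavior declared (yes/no) by gender:
table = [[320, 180],
         [290, 210]]
print(round(chi_square(table), 2), round(cramers_v(table), 3))
```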
Participation in the project was voluntary and completely free of charge for all participants (schools, students and parents). All educational and research activities were carried out in the participating schools by trained dietitians. After receiving patronage from the government educational institutions and local authorities, written invitations were sent to all secondary and upper secondary schools in Poland. Nearly 14,000 educational institutions listed in the electronic register of schools of the Minister for National Education were invited to participate. Finally, 2058 schools attended by nearly 450,000 students joined the project in 2013–2015.

This paper focuses on the results concerning the nutritional behaviors of students (Figure S1). The program was carried out following the standards required by the Helsinki Declaration, and the protocol was approved by the Scientific Committee of the Polish Society of Dietetics. School directors provided written informed consent to participate in the study. Parents were provided with a detailed fact sheet describing the program and had to give written informed consent for their child to participate. All students over 16 years of age were asked to give their written informed consent to participate in the study.

To examine the selected nutritional behaviors and nutritional status of Polish teenagers, participants were recruited from schools participating in the project. To ensure a representative selection of students, schools were randomly selected using stratified sampling from all of the 2058 enrolled institutions. The sampling was stratified by province, location (large, medium or small city, or countryside) and type of school (secondary and upper secondary). Secondary schools (called "gimnazjum" in Poland) are compulsory for all adolescents aged 13–16 years and are located close to the students' place of residence.
Upper secondary schools (high schools, technical schools and basic vocational schools) include, depending on the type, youth from 16 to 20 years of age. As with secondary schools, students typically live with their families and commute to school. Within the selected schools, students were then randomly selected from the class registry. Exclusion criteria were: a diagnosed disease requiring a special diet, pregnancy or lactation in girls, or lack of written consent. All personal data of participants were fully anonymized. The schools, and consequently the students, came from all over Poland, so the research was of a nationwide character. In total, 207 of the 2058 enrolled schools took part (~10%), and 14,044 students ultimately participated in the study, including 7553 (53.8%) girls and 6491 (46.2%) boys. The age categories for the studied group were adopted in accordance with the HBSC methodology [21].

The assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All measurements were carried out with equipment provided by The Polish Society of Dietetics: digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dietitians conducting the measurements were specially trained and followed the same procedures, according to the Anthropometry Procedures Manual of the National Health and Nutrition Examination Survey (NHANES) [22], to minimize bias. Each school was obliged to provide a room suitable for the measurements.

Weight was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes.
From the final result, 0.5 kg was subtracted (the predicted weight of the basic clothes).

For height measurements, individuals stood on a flat surface in an upright position with their back against the wall and the heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22].
3. Results

The total sample group consisted of 14,044 students, including 7553 girls and 6491 boys. The detailed characteristics of the group in terms of age distribution, sex and body mass index are presented in Table 1. Data on the examined dietary behaviors are presented in Table 2. All data are expressed as numbers and percentages.

3.1. Characteristics of the Study Group

The characteristics of the study population in terms of age distribution and body mass index (BMI), for the whole group and separately for girls and boys, are presented in Table 1. The largest group was students aged 17 (followed by 18- and 16-year-olds among girls, and 16- and 18-year-olds among boys), while the smallest groups in both sexes were students aged 13 and 19.
There were significant differences in average BMI between girls and boys in the total group and in all age categories except the 13-year-olds.

3.2. Characteristics of Nutritional Behaviors

Figure 1 presents the relationships between the examined nutritional behaviors in the whole group. Based on the correspondence analysis, connections between the analyzed nutritional behaviors can be identified. Beneficial behaviors such as consuming breakfast, fruit, vegetables, whole-grain bread, milk or milk beverages and fish were linked together. Conversely, unfavorable behaviors such as skipping breakfast and low consumption of milk products, fruit, vegetables, fish and whole-grain bread were related. Consumption of fast food, sweets and sugared soft drinks clustered together and corresponded more closely to the adverse nutritional behaviors.

The frequencies of the examined nutritional behaviors in the total group, and for girls and boys separately, are presented in Table 2.

Breakfast was regularly consumed by seven out of ten 13-year-olds but only by half of the 19-year-olds. There was a statistically significant (but small) effect of age in the total group and separately for girls and boys. Boys were more likely to eat breakfast than girls, and the differences were particularly noticeable in the younger age groups.
The frequency of eating at least one serving of fruit per day also decreased with age, with a statistically significant (but small) effect in the total group and separately for girls and boys. Girls were more likely than boys to include fresh fruit in their daily diet. There was a significant effect of age on vegetable consumption in the total group: half of the 13-year-olds consumed at least two servings of vegetables a day, but the frequency decreased to 43% among the 19-year-olds. A similar influence of age was observed among girls and among boys. Girls consumed at least two servings of vegetables daily more often than boys in all age groups except the 18-year-olds. The consumption of milk or milk beverages decreased with age, with significant age effects in the total group and among girls and boys. In all age groups, fewer girls than boys drank milk and fermented milk beverages. The proportion of teenagers consuming whole-grain bread in their everyday diet also decreased with age, although age effects were not observed for girls or boys separately. Regarding gender, in all age groups a greater percentage of girls consumed whole-grain bread in their usual diet. No significant effect of age was observed on fish consumption, in the total group or in girls and boys; however, significant gender effects were observed: in all age groups, a greater percentage of boys than girls consumed fish at least once a week. An effect of age was observed for drinking sugared soft drinks a few times a week, for the whole group and for both genders: the proportion of adolescents drinking sugared soft drinks increased with age. A higher percentage of boys than girls consumed sugared soft drinks in all age categories. Less than half of the students declared consuming sweets more than once a day.
No age effects were observed for the whole group or for girls; a small effect of age was found only among boys. A relationship with gender was, however, observed: in all age groups, a higher percentage of girls than boys declared this behavior. There was a significant relationship between fast-food consumption and age: the percentage of adolescents consuming fast food more than twice a week increased with age, with an analogous relationship in girls and boys. In all age groups, more boys than girls declared this behavior.

Data on the prevalence of the examined nutritional behaviors depending on nutritional status (underweight, normal body mass, overweight and obesity) are presented in Table 3.

Analyzing the prevalence of the selected nutritional behaviors in the whole group of adolescents by body weight status, significant relationships were observed for all eating behaviors except vegetable consumption. Regular breakfast consumption was declared more often by adolescents with normal body weight and underweight, in the total group and among both girls and boys. The percentage of subjects consuming at least one portion of fruit was smallest in the underweight group and largest among the obese adolescents. Consumption of milk and milk beverages was declared by the highest percentage of overweight adolescents and the lowest percentage of underweight individuals; at the same time, no relationship was observed between this behavior and nutritional status separately for girls and boys. The frequency of regular whole-grain bread consumption increased with body weight category in the whole group and among girls. A relationship between fish consumption and nutritional status was observed only in the whole group; as with bread, the frequency of declared fish consumption increased with BMI category.
For the last three nutritional behaviors (drinking sweet beverages, eating sweets and eating fast food), the incidence decreased with BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relationship with body weight status was observed).
As in the case of bread, the frequency of declared fish consumption increased with the BMI category. For the last three nutritional behaviors (drinking sweet beverages, eating sweets and eating fast foods), the incidence decreased with the BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relation with body weight status was observed).", "Since adolescence is a time of tremendous biological, psychosocial and cognitive changes, nutrition interventions need to be tailored not only to the developmental stage, but also to the nutritional needs of individuals [18]. Based on dietary recommendations, the nutritional behaviors crucial for the overall diet quality of children and adolescents can be determined. The “key” determinants of a healthy diet include eating breakfast, regular consumption of vegetables, fruits, dairy products, whole grain products and fish, as well as avoiding sugared soft drinks, sweets and fast foods (empty calories) [26,28,29]. Literature data indicate the prevalence of selected nutritional behaviors, as well as typical nutritional errors, in children and adolescents at different stages of development [8,30,31,32]. Hiza et al. [33] and Banfield et al. [33] reported a poorer diet quality in adolescents compared to younger children. Among US students, a decrease in fruit and vegetable consumption and an increase in fast food intake have been reported from childhood and young adolescence to older adolescence [34]. Nevertheless, Lipsky et al. [29] observed a modest improvement in diet quality between 16.5 and 20.5 years of age, reflected, among others, in more frequent breakfast consumption.\nBased on the correspondence analysis, it can be noticed that regardless of age or sex, beneficial (or adverse) nutritional behaviors cluster together. 
Thus, individuals who, for example, do not consume breakfast more often show other adverse nutritional behaviors (low consumption of fruit, vegetables, fish and whole grain bread). A typical breakfast in Poland includes bread or cereals, dairy and/or meat products, as well as vegetables and/or fruit; thus, omitting breakfast may reduce the supply of these products in the overall diet. Our results suggest that if one irregularity is found in a teenager’s diet, it can be assumed that the overall diet quality is low. Interestingly, consuming (or not consuming) sweets, sugared soft drinks and fast food cluster together, but did not correspond to the other determinants of diet quality. This may suggest that such products are consumed together as a meal (e.g., a meal typical for fast-food restaurants). It may also suggest the need for educational activities aimed at these products, regardless of general education about healthy nutrition.\nIn our study, the frequency of regular breakfast consumption decreased with age, both among boys and girls. In addition, girls declared eating breakfast significantly less often than boys. Our observations are consistent with data from the HBSC study [26], where older children and girls were less likely to eat breakfast every weekday. However, more Polish 13- and 15-year-olds declared this beneficial nutritional behavior compared to the average among their European peers [26]. Interestingly, we noted a significant relationship between the regularity of consuming breakfast and body mass status. Regular breakfast consumption was declared by the highest percentage of students with normal body mass (61%) and the lowest percentage of students with obesity (54%). It could be hypothesized that skipping breakfast is a strategy adolescents use to reduce weight; however, this hypothesis requires additional research. Fayet-Moore et al. 
[35] observed a lower prevalence of overweight among breakfast consumers compared to skippers (n = 4487, 2–16 years). Moreover, individuals who ate breakfast had significantly higher intakes of calcium and folate, and significantly lower intake of total fat, than breakfast skippers, which indicates the important role of breakfast not only in maintaining a healthy body weight but also in the quality of the diet. Our results indicate a strong need to increase educational activities promoting regular breakfast consumption, especially among older girls and students with abnormal body mass status.\nRegular fruit and vegetable consumption is linked to many positive health outcomes [36]. The WHO recommends at least 400 g of fruit and vegetables daily; however, studies in 10 European countries indicate that the majority of teenagers fail to meet the recommendations [37]. Only 37% of 13-year-olds and 33% of 15-year-olds reported eating fruit at least once a day, whereas vegetables were consumed every day or more than once a day by 35% of the 13-year-olds and 33% of the 15-year-olds, respectively (average from 38 countries and regions) [26]. We observed a decrease in daily fruit and vegetable consumption with age in the total group and for both genders; in the total group, the percentage of teenagers reporting daily fruit consumption decreased from 67% in 13-year-olds to 49% in 19-year-olds. In the case of vegetables, we did not observe a relationship with body weight status, but the frequency of daily fruit consumption was related to nutritional state. Regular consumption of fruit was declared most often by obese teenagers and least frequently by underweight adolescents. Fruit, in contrast to vegetables, has a higher energy value, which, with high consumption, may increase the energy value of the diet. 
In the case of vegetables and fruit there is still substantial room for improvement in all subgroups; however, education should emphasize the differences in caloric value between fruit and vegetables, especially promoting the latter.\nDairy products, especially milk and milk beverages, contribute to a healthy diet by providing energy, protein, and nutrients such as calcium, magnesium and vitamins B1, B2 and B12 [38]. Regular consumption of at least two servings of dairy products by adolescents resulted in significant weight loss and a reduction in body fat [39,40]. However, data from the HELENA study showed that European adolescents eat less than two-thirds of the recommended amount of milk (and milk products) [37], which is reflected in low calcium intake, especially in the oldest group of girls [10]. We also observed a decrease with age in the percentage of students declaring daily consumption of milk and milk beverages. The trend was particularly pronounced among girls: From 56% among 13-year-olds to 43% among 19-year-olds. We also observed a relationship between milk consumption and nutritional status in the total group; regular daily milk consumption was declared most often by individuals with normal body weight. Based on our findings, nutritional education promoting milk products should be especially targeted at older girls.\nAs in the case of vegetables and fruits, consumption of whole grain products is associated with a lower risk of many diet-related diseases, e.g., cardiovascular disease and stroke, hypertension, type 2 diabetes mellitus, obesity and some types of cancer, as well as with improved insulin sensitivity [41]. Papanikolaou et al. [42] reported better diet quality and nutrient intakes in US children and adolescents consuming grain food products compared to those consuming no grains. In our study, less than half of the students consumed whole grain bread every day, and this percentage decreased with age. 
Interestingly, the frequency of whole grain bread consumption was highest among adolescents with excessive body mass, especially girls. This may suggest that although consumption of whole grain bread improves the quality of a diet, it may also increase the overall caloric value of the diet.\nRegular intake of fish, particularly fatty fish, has positive health outcomes, especially in the long term: It reduces the risk of CHD mortality and ischaemic stroke [43]. Fish consumption in adolescents has been associated with better school achievements and performance in cognitive tests [44]. Handeland et al. [45] observed a small beneficial effect of fatty fish consumption on processing speed in tests of attention conducted in 426 students aged 14–15 years. In our study, only half of the students consumed fish at least once a week, and no age effect was observed. However, boys declared fish consumption more often than girls. Additionally, a significant relation was noted between fish consumption and body weight status: The percentage of fish consumers increased with body mass status (45% in underweight and 53% in obese individuals). However, considering the beneficial role of fish and their low intake, nutritional education should be carried out in all subgroups of adolescents, regardless of age, sex or weight status.\nSugared soft drinks, sweets and fast foods are sources of empty calories that contribute a substantial share of the total energy intake in children and adolescents [46]. Intake of soft (sweetened) drinks among adolescents is higher than in other age groups (nearly 20% of 13- and 15-year-olds reported regular daily consumption), and it is associated with a greater risk of weight gain, obesity and chronic diseases, and directly affects dental health by providing excessive amounts of sugars [26]. 
In our study, consumption of sugared soft drinks increased with age category in the total group and for both boys and girls, but at the same time was lowest among obese individuals compared to other weight groups (except for boys). Sweetened beverages provide a high amount of energy in liquid form, which increases the simple-carbohydrate content of the diet and influences the intake of other nutrients [12,47]. Interestingly, similar relationships were also observed for fast food consumption. Sweets consumption was significantly higher among girls and underweight students, but no effect of age was noted. The HBSC data also highlighted gender differences in daily sweets intake (27% of 13-year-old girls compared to 23% of boys of the same age). Taking into account the prevalence of these adverse behaviors, nutritional education should be directed at all adolescents, but with particular focus on older age groups.\n Strengths and Limitations The strength of the study is the sample size. To our knowledge, there is no research on such a scale covering all age categories over a large geographic area. With such a large sample, an additional advantage is the way the data were obtained: All questionnaires were filled in by a trained dietician, who could explain the respondents’ doubts on an ongoing basis. Moreover, all anthropometric data were obtained through measurements conducted by a dietician, which ensured reliable results and minimized bias.\nRespondents for our study were recruited from schools participating in the project, which can be a certain limitation. However, the number of schools allowed a random selection of the sample, taking into account different types of institutions and their geographic location. The small number of questions, with very limited answer options, in the questionnaire may also be a certain limitation. 
However, the questions were developed on the basis of large, international studies on the nutritional behaviors of school-aged children [21,26], and include the most important healthy and unhealthy behaviors concerning nutrition. Additionally, the questionnaire was validated before the main study.", "By analyzing the differences in nutritional behaviors between age and gender groups, we provide data that can inform the development of dietary interventions tailored to the needs of adolescents at different stages of development and to improve the quality of their diet. We observed significant differences in the frequencies of the analyzed eating behaviors depending on gender as well as on age. Furthermore, we have shown that the incidence of undesirable eating behaviors is higher among underweight adolescents compared to their peers with excessive body mass. Information on the most frequent nutritional errors at every stage of adolescence can be used to determine the type of educational messages given when counseling this challenging group, e.g., education activities regarding regular breakfast consumption should be intensified in older age groups, as the percentage of young people who eat breakfast decreases with age. 
On the other hand, education on the adverse effects of consumption of sweets, sugared soft drinks and fast food should be directed not only to adolescents with excessive body weight, but mainly to those underweight, as the consumption of these products is more frequent in this group. Moreover, regardless of age and sex, both favorable and adverse nutritional behaviors corresponded with each other. The present findings can be used both for the development of educational programs and for educational activities carried out by teachers at the school level." ]
[ "intro", null, null, "subjects", null, null, null, "results", null, null, "discussion", null, "conclusions" ]
[ "nutrition", "nutritional behavior", "diet quality", "adolescents" ]
1. Introduction: The health of children and adolescents depends upon food intake that provides sufficient energy and nutrients to promote optimal physical, cognitive and social growth and development [1,2,3]. However, in practice, the implementation of proper nutrition recommendations in these population groups is extremely difficult due to existing barriers, e.g., the availability of healthy food, inadequate nutritional knowledge of caregivers and children and personal food preferences [4,5,6,7]. A great body of literature indicates low overall diet quality in children and adolescents, both in terms of the amounts (deficits or excesses) of food/nutrients and the selection of food groups/food products. One in four Polish 17–18-year-old female adolescents did not eat breakfast regularly, and nearly half of them consumed fish only once a month [8]. Almost 35% of schoolchildren and adolescents aged 9–13 years from rural parts of Poland regularly ate sweets, and 46% failed to consume vegetables and fruit at least once a day [9]. These inadequacies in the assortment and quantities of food products result in an incorrect supply of energy and nutrients. The average European adolescent’s diet is too high in saturated fatty acids and sodium, and too low in monounsaturated fatty acids, vitamin D, folate and iodine [10]. In Poland, a significant increasing trend in calcium intake in teenagers aged 11–15 years was noted over the last 20 years, but the observed values are still lower than the recommendations [11]. In the US, nearly 40% of the total energy consumed by two- to 18-year-olds came in the form of empty calories (including 365 kcal from added sugars) [12]. Poor diet quality in early life may impair growth and development, and also increases the risk of some diet-related diseases (e.g., obesity, type 2 diabetes mellitus, cardiovascular disease and osteoporosis) in the future [3,13]. 
Although correct nutrition is important throughout the life span, it is possible to distinguish particularly critical periods, i.e., the first 2–3 years of life [3] and the period of puberty [14,15]. Dramatic physical growth and development during puberty significantly increase requirements for energy, protein and other nutrients compared to late childhood. Biological changes related to puberty may significantly affect psychosocial development. Rapid changes in body size, shape and composition in girls may lead to poor body image and the development of eating disorders [16]. At this age, girls may adopt nutritional behaviors aimed at weight loss, e.g., alternative diets promoted in the media. Similarly, a delay in biological development may lower self-esteem and increase the risk of eating disorders among male teenagers [17]. As young teens are highly influenced by a peer group, the desire to conform may also affect nutritional behaviors and food intake. Moreover, food choices can be used by adolescents as a way to express their independence from families and parents. At this age, young people may prefer to eat fast-food meals in a peer group instead of meals at home with their families. During middle adolescence (15–17 years), the importance of peer groups rises even further, and their influence on individual food choices peaks. Finally, in the late stage of adolescence (18–21 years), the influence of peer groups decreases, whereas the ability to comprehend how current health behaviors may affect long-term health status significantly increases [18], which in turn can enhance the effectiveness of nutritional education. Although nutritional knowledge does not always translate into proper nutritional behavior [19], some data indicate an association between nutritional knowledge and diet quality among adolescents [20]. Joulaei et al. 
(2018) observed that an increase in functional nutrition literacy was associated with lower sugar intake and better energy balance among boys and higher dairy intake among girls. Therefore, recognition of the dominant dietary behaviors with respect to gender and specific age groups can be helpful in the development of targeted and effective nutritional education. In Poland, there are many studies on nutritional behaviors of adolescents [8,9,11], but their limitation is the small number of participants and the lack of representativeness in their selection. The only study involving a large, representative group of Polish adolescents is the health behavior in school-aged children (HBSC) [21], conducted for over 30 years, now in more than 40 countries, including Poland. The HBSC study does not allow us to assess nutritional behaviors of older adolescents, because it covers only the group of 11-, 13- and 15-year-old boys and girls. In Poland there is no research including the wide age range of respondents with all periods of adolescence at the same time and with the same methodology. Therefore, the purpose of the present study was to analyze the frequency of occurrence of the behaviors important in terms of overall diet quality amongst Polish adolescents. The frequency of occurrence of nutritional behaviors was analyzed in the age categories with regard to gender and taking into account the criteria of the weight status. 2. Materials and Methods: 2.1. General Information The presented study is a part of the research and education program Wise nutrition—healthy generation granted by The Coca-Cola Foundation, and addressed to the secondary and upper secondary school youth, their parents and teachers. The main objective of the program was to educate the secondary and upper secondary school students regarding the importance of healthy nutrition and physical activity in the prevention of the diet-related diseases. 
The research part of the project included assessing the selected dietary behaviors and parameters related to physical activity of the students and performing anthropometric measurements to assess their nutritional status. Those with diagnosed abnormal body mass were invited to the dietary counseling program (two individual meetings with a dietician). The diagram presenting the overall activities within the project is provided in the supplementary materials (Figure S1). Participation in the project was voluntary and totally free of charge for all participants (schools, students and parents). All educational and research activities were carried out in schools participating in the program by trained dieticians. After receiving patronages from the government educational institutions and local authorities, written invitations were sent to all secondary and upper secondary schools in Poland. Nearly 14,000 educational institutions listed in the electronic register of schools of the Minister for National Education were invited to participate. Finally, 2058 schools attended by nearly 450,000 students joined the project in 2013–2015. This paper focused on the results concerning nutritional behaviors of students (Figure S1). The program was carried out following the standards required by the Helsinki Declaration, and the protocol was approved by the Scientific Committee of the Polish Society of Dietetics. School directors provided written informed consent to participate in the study. Parents were provided with a detailed fact sheet describing the program and had to give written informed consent if they wanted their child to participate. All students over 16 years of age were asked to give their written informed consent to participate in the study. 
2.2. 
Study Participants To examine the selected nutritional behaviors and nutritional status of Polish teenagers, participants were recruited from schools participating in the project. To ensure a representative selection of students, these schools were randomly selected using the stratified sampling method from all of the 2058 enrolled institutions. The sampling was stratified by province and location (large, medium, small city and countryside), as well as the type of school (secondary and upper secondary). Secondary schools (called “gimnazjum” in Poland) are compulsory for all adolescents aged 13–16 years, and are located close to the students’ place of residence. Upper secondary schools (high schools, technical schools and basic vocational schools) include, depending on the type, youth from 16 to 20 years of age. As in the case of secondary schools, students typically live with their families and commute to school. Within the selected schools, as the next step, students were randomly selected from the class registry. Exclusion criteria included: Diagnosed disease that required the use of a special diet, pregnancy or lactation in girls or lack of written consent. All the personal data of participants were fully anonymized. The schools, and consequently, students came from all over Poland, therefore the research was of a nationwide character. In total, 207 schools of the 2058 institutions were enrolled (~10%), and finally 14,044 students participated in the study, including 7553 (53.8%) girls and 6491 (46.2%) boys. The age categories for the studied group were adopted in accordance with the HBSC methodology [21]. 
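The school selection described above can be sketched in a few lines: schools are grouped into strata by (province, location, school type) and roughly 10% are drawn at random from each stratum. This is a hedged illustration, not the authors' sampling code; the province names and stratum sizes are invented.

```python
# Hedged sketch of stratified random sampling of schools (illustrative data).
import random
from collections import defaultdict

def stratified_sample(schools, fraction, seed=0):
    """Draw `fraction` of schools from each (province, location, type) stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for school in schools:
        strata[(school["province"], school["location"], school["type"])].append(school)
    sample = []
    for members in strata.values():
        k = max(1, round(fraction * len(members)))  # at least one school per stratum
        sample.extend(rng.sample(members, k))
    return sample

# Invented register: 2 provinces x 2 locations x 2 school types, 20 schools each.
schools = []
for province in ["mazowieckie", "slaskie"]:
    for location in ["large city", "countryside"]:
        for school_type in ["secondary", "upper secondary"]:
            for i in range(20):
                schools.append({"province": province, "location": location,
                                "type": school_type, "id": len(schools)})

picked = stratified_sample(schools, 0.10)
print(len(schools), len(picked))  # -> 160 16
```

Drawing within each stratum (rather than from the pooled register) guarantees every province, location and school type is represented in the final sample.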
2.3. Anthropometric Measurements The assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All the measurements were carried out with the equipment provided by The Polish Society of Dietetics: Digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dieticians conducting the measurements were specially trained and followed the same procedures according to Anthropometry Procedures Manual by National Health and Nutrition Examination Survey (NHANES) [22] to minimize bias. 
The school was obliged to provide a room suitable for the measurements. Weight of the individuals was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes. From the final result 0.5 kg was subtracted (predicted weight of the basic clothes). For height measurements individuals stood on a flat surface in an upright position with their back against the wall, and the heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22]. The height measurement was conducted twice to the nearest 0.1 cm, and the mean value was recorded. Based on the body height and weight data, body mass index (BMI) value was calculated. BMI was calculated as body weight in kilograms divided by the square of height in meters. Depending on the age of the subjects different criteria for assessing the body weight status were used. For individuals aged 13–18 years old, calculated BMI value was plotted on gender BMI centile charts for age (with an accuracy of one month) [23]. The percentile value was read from percentile grids and the body mass status was assessed according to the International Obesity Task Force (IOTF) criteria (underweight <5 percentile, normal weight 5–85 percentile, overweight >85 and ≤95 percentile, obese >95 percentile) [24]. For students above the age of 18 years old, the standard World Health Organization (WHO) body mass index criteria were applied: Underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2 and obesity ≥30 kg/m2 [25]. 
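The two-tier classification described above can be sketched as follows. The adult (>18 years) WHO cut-offs are taken directly from the text; the 13–18-year branch requires the age- and sex-specific BMI percentile charts [23], which are stubbed here as a caller-supplied `percentile_lookup` function, a hypothetical interface rather than the authors' implementation.

```python
# Hedged sketch of the weight-status classification (WHO cut-offs from the text;
# the percentile lookup for minors is a stub the caller must supply).

def classify_weight_status(weight_kg, height_m, age_years, sex=None,
                           percentile_lookup=None):
    bmi = weight_kg / height_m ** 2
    if age_years > 18:
        # Standard WHO adult BMI criteria [25].
        if bmi < 18.5:
            return "underweight"
        if bmi < 25:
            return "normal"
        if bmi < 30:
            return "overweight"
        return "obese"
    # Ages 13-18: IOTF criteria on age- and sex-specific BMI percentiles [23,24].
    p = percentile_lookup(bmi, age_years, sex)
    if p < 5:
        return "underweight"
    if p <= 85:
        return "normal"
    if p <= 95:
        return "overweight"
    return "obese"

print(classify_weight_status(85, 1.75, 19))  # BMI ~27.8 -> overweight
```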
For students above the age of 18 years old, the standard World Health Organization (WHO) body mass index criteria were applied: Underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2 and obesity ≥30 kg/m2 [25]. 2.4. Analysis of Nutritional Behaviors Data on the selected nutritional behaviors were collected prior to the anthropometric measurements and dietary counseling. The paper questionnaire containing questions about the selected nutritional practices, physical activity and self-esteem satisfaction (data not included in this article) was carried out in individuals by a dietitian. This provided the opportunity to clarify possible doubts or ask additional questions. After its completion the questionnaire was collected by a dietician. Due to the large sample group and direct methods of data acquisition, it was decided that the questionnaire has to be short, and must contain questions about the critical determinants of teenagers diet quality. Taking into account the health behavior in school-aged children (HBSC) questionnaire (developed for 11, 13 and 15 year olds) concerning nutritional behaviors [26], and available data on nutritional characteristics of the Polish youth population [21], nine questions were finally formulated with the possibility of answering “yes” or “no”. The first six questions concerned favorable aspects of the nutritional behaviors, while the last three questions referred to the adverse nutritional practices. Healthy nutritional behaviors included: (1) Regular consumption of breakfast before leaving for school, (2) daily consumption of at least one serving of fresh fruit and (3) daily consumption of at least two servings of vegetables (recommended diet quality indicators adapted from HBSC questionnaire [26]. 
Additionally, taking into account the importance for the overall diet quality and the low consumption in the Polish population [21,27], the three extra questions were added: (4) Daily consumption of milk and/or milk fermented beverages (as the main source of calcium in the diet), (5) daily consumption of whole grains (as the main source of complex carbohydrates and dietary fiber) and (6) consumption of fish at least once week (as the main source of docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), vitamin D and iodine in the diet). On the other hand, negative dietary determinants (unfavorable nutritional practices increasing the share of free sugars, saturated fat and trans fatty acids in the diet) were considered as: Drinking sugared soft drinks (soda and other carbonated soft drinks) several times during the week, eating sweets more than once a day (adapted from HBSC), and consuming fast food more than twice a week. Prior to the main study, a pilot study (n = 50) was conducted to examine whether the questions were understandable to the respondents. The questionnaire was validated: Repeatability was verified by determining the correlation coefficient between the results obtained in the same group (n = 50, age 13–19 years old) twice; correlation coefficients for individual questions were on average 0.76 (95% CI = 0.71–0.83) and ranged from 0.18 to 0.96. Data on the selected nutritional behaviors were collected prior to the anthropometric measurements and dietary counseling. The paper questionnaire containing questions about the selected nutritional practices, physical activity and self-esteem satisfaction (data not included in this article) was carried out in individuals by a dietitian. This provided the opportunity to clarify possible doubts or ask additional questions. After its completion the questionnaire was collected by a dietician. 
Due to the large sample group and direct methods of data acquisition, it was decided that the questionnaire has to be short, and must contain questions about the critical determinants of teenagers diet quality. Taking into account the health behavior in school-aged children (HBSC) questionnaire (developed for 11, 13 and 15 year olds) concerning nutritional behaviors [26], and available data on nutritional characteristics of the Polish youth population [21], nine questions were finally formulated with the possibility of answering “yes” or “no”. The first six questions concerned favorable aspects of the nutritional behaviors, while the last three questions referred to the adverse nutritional practices. Healthy nutritional behaviors included: (1) Regular consumption of breakfast before leaving for school, (2) daily consumption of at least one serving of fresh fruit and (3) daily consumption of at least two servings of vegetables (recommended diet quality indicators adapted from HBSC questionnaire [26]. Additionally, taking into account the importance for the overall diet quality and the low consumption in the Polish population [21,27], the three extra questions were added: (4) Daily consumption of milk and/or milk fermented beverages (as the main source of calcium in the diet), (5) daily consumption of whole grains (as the main source of complex carbohydrates and dietary fiber) and (6) consumption of fish at least once week (as the main source of docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), vitamin D and iodine in the diet). On the other hand, negative dietary determinants (unfavorable nutritional practices increasing the share of free sugars, saturated fat and trans fatty acids in the diet) were considered as: Drinking sugared soft drinks (soda and other carbonated soft drinks) several times during the week, eating sweets more than once a day (adapted from HBSC), and consuming fast food more than twice a week. 
Prior to the main study, a pilot study (n = 50) was conducted to examine whether the questions were understandable to the respondents. The questionnaire was validated: Repeatability was verified by determining the correlation coefficient between the results obtained in the same group (n = 50, age 13–19 years old) twice; correlation coefficients for individual questions were on average 0.76 (95% CI = 0.71–0.83) and ranged from 0.18 to 0.96. 2.5. Statistical Analysis Statistical data processing was performed using Statistica version 13.1 (Copyright©StatSoft, Inc, 1984–2014, Cracow, Poland). Data were analyzed in the total group, according to age, gender and body weight status. Statistical significances for nominal (categorical) variables were determined using the Pearson’s chi-square test. Additionally, contingency coefficient Cramér’s V was used to indicate the strength of association between categorical variables. Quantitative data was tested for normality of distribution; in the case of its absence the Mann–Whitney test was used for comparisons of independent groups. The correspondence analysis was used to study the relationship between dietary behaviors. The differences were considered significant at p ≤ 0.05. Statistical data processing was performed using Statistica version 13.1 (Copyright©StatSoft, Inc, 1984–2014, Cracow, Poland). Data were analyzed in the total group, according to age, gender and body weight status. Statistical significances for nominal (categorical) variables were determined using the Pearson’s chi-square test. Additionally, contingency coefficient Cramér’s V was used to indicate the strength of association between categorical variables. Quantitative data was tested for normality of distribution; in the case of its absence the Mann–Whitney test was used for comparisons of independent groups. The correspondence analysis was used to study the relationship between dietary behaviors. The differences were considered significant at p ≤ 0.05. 2.1. 
2.1. General Information

The presented study is part of the research and education program "Wise nutrition—healthy generation", funded by The Coca-Cola Foundation and addressed to secondary and upper secondary school youth, their parents and teachers. The main objective of the program was to educate secondary and upper secondary school students about the importance of healthy nutrition and physical activity in the prevention of diet-related diseases. The research part of the project included assessing selected dietary behaviors and parameters related to the students' physical activity, and performing anthropometric measurements to assess their nutritional status. Those with diagnosed abnormal body mass were invited to a dietary counseling program (two individual meetings with a dietician). A diagram of the overall activities within the project is provided in the supplementary materials (Figure S1). Participation in the project was voluntary and entirely free of charge for all participants (schools, students and parents). All educational and research activities were carried out in the participating schools by trained dieticians. After receiving patronages from government educational institutions and local authorities, written invitations were sent to all secondary and upper secondary schools in Poland. Nearly 14,000 educational institutions listed in the electronic register of schools of the Minister for National Education were invited to participate. Finally, 2058 schools, attended by nearly 450,000 students, joined the project in 2013–2015. This paper focuses on the results concerning the nutritional behaviors of students (Figure S1). The program was carried out following the standards required by the Helsinki Declaration, and the protocol was approved by the Scientific Committee of the Polish Society of Dietetics. School directors provided written informed consent to participate in the study.
Parents were provided with a detailed fact sheet describing the program and had to give written informed consent for their child to participate. All students over 16 years of age were also asked to give written informed consent to participate in the study.

2.2. Study Participants

To examine the selected nutritional behaviors and the nutritional status of Polish teenagers, participants were recruited from schools taking part in the project. To ensure a representative selection of students, schools were randomly selected from all of the 2058 enrolled institutions using the stratified sampling method. The sampling was stratified by province, location (large, medium or small city, or countryside) and type of school (secondary or upper secondary). Secondary schools (called "gimnazjum" in Poland) are compulsory for all adolescents aged 13–16 years and are located close to the students' place of residence. Upper secondary schools (high schools, technical schools and basic vocational schools) include, depending on the type, youth from 16 to 20 years of age. As in the case of secondary schools, students typically live with their families and commute to school. Within the selected schools, students were then randomly selected from the class registry. Exclusion criteria were: a diagnosed disease requiring a special diet, pregnancy or lactation in girls, or lack of written consent. All personal data of the participants were fully anonymized. The schools, and consequently the students, came from all over Poland, so the research was of a nationwide character. In total, 207 of the 2058 enrolled schools (~10%) took part, and 14,044 students participated in the study, including 7553 (53.8%) girls and 6491 (46.2%) boys. The age categories for the studied group were adopted in accordance with the HBSC methodology [21].
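The two-stage selection described above (stratified sampling of schools, then random selection of students within each school) can be sketched in a few lines. The sketch below is illustrative only: the record fields, the invented register and the exact per-stratum sampling rule are assumptions for this example, not the authors' procedure.

```python
import random
from collections import defaultdict

def stratified_sample(schools, fraction=0.10, seed=2013):
    """Sample roughly `fraction` of the schools within each stratum
    (province x locality x school type), at least one per stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for school in schools:
        key = (school["province"], school["locality"], school["type"])
        strata[key].append(school)
    selected = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        selected.extend(rng.sample(members, k))
    return selected

# Invented register: 40 urban secondary schools and 60 rural upper secondary ones.
register = (
    [{"id": i, "province": "Mazowieckie", "locality": "large city",
      "type": "secondary"} for i in range(40)]
    + [{"id": 40 + i, "province": "Mazowieckie", "locality": "countryside",
        "type": "upper secondary"} for i in range(60)]
)
selected = stratified_sample(register)
```

With a 10% fraction this draws 4 schools from the first stratum and 6 from the second, preserving the strata proportions in the sample.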
2.3. Anthropometric Measurements

The assessment of the body weight status of the examined individuals was based on anthropometric measurements (body weight and height) conducted by a trained dietitian. All measurements were carried out with equipment provided by the Polish Society of Dietetics: digital floor scales (TANITA HD-380 BK, Tanita Corporation, Tokyo, Japan) and a steel measuring tape (0–200 cm). All dieticians conducting the measurements were specially trained and followed the same procedures, according to the Anthropometry Procedures Manual of the National Health and Nutrition Examination Survey (NHANES) [22], to minimize bias. The school was obliged to provide a room suitable for the measurements. Weight was measured twice to the nearest 0.1 kg, and the mean value was recorded. Measurements were conducted on individuals dressed in basic clothes (e.g., underwear, trousers/skirt and t-shirt) and without shoes; 0.5 kg (the predicted weight of the basic clothes) was subtracted from the final result. For height measurements, individuals stood on a flat surface in an upright position with their back against the wall, heels together and toes apart (without shoes and socks). They were asked to stand as tall as possible with the head in the Frankfort horizontal plane [22]. Height was measured twice to the nearest 0.1 cm, and the mean value was recorded. Based on the body height and weight data, the body mass index (BMI) was calculated as body weight in kilograms divided by the square of height in meters. Depending on the age of the subjects, different criteria for assessing body weight status were used. For individuals aged 13–18 years, the calculated BMI value was plotted on gender-specific BMI-for-age centile charts (with an accuracy of one month) [23].
The percentile value was read from the percentile grids and body mass status was assessed according to the International Obesity Task Force (IOTF) criteria: underweight <5th percentile, normal weight 5th–85th percentile, overweight >85th and ≤95th percentile, obese >95th percentile [24]. For students above 18 years of age, the standard World Health Organization (WHO) body mass index criteria were applied: underweight for BMI <18.5 kg/m2, normal body weight for BMI between 18.5 and 24.9 kg/m2, overweight between 25 and 29.9 kg/m2, and obesity ≥30 kg/m2 [25].

2.4. Analysis of Nutritional Behaviors

Data on the selected nutritional behaviors were collected prior to the anthropometric measurements and dietary counseling. A paper questionnaire containing questions about selected nutritional practices, physical activity and self-esteem satisfaction (data not included in this article) was administered individually by a dietitian, which provided the opportunity to clarify possible doubts or ask additional questions. After completion, the questionnaire was collected by the dietician. Due to the large sample group and the direct method of data acquisition, it was decided that the questionnaire had to be short and must contain questions about the critical determinants of teenagers' diet quality. Taking into account the Health Behaviour in School-aged Children (HBSC) questionnaire (developed for 11-, 13- and 15-year-olds) concerning nutritional behaviors [26], and the available data on the nutritional characteristics of the Polish youth population [21], nine questions were finally formulated, each answered "yes" or "no". The first six questions concerned favorable nutritional behaviors, while the last three referred to adverse nutritional practices.
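The measurement arithmetic (mean of two readings, clothing correction, BMI) and the WHO adult classification can be expressed compactly. The following is a minimal sketch; the function names are ours, and only the adult WHO branch is shown, since the IOTF classification for 13–18-year-olds requires age- and sex-specific centile reference charts.

```python
def measured_weight(w1, w2, clothing_kg=0.5):
    """Mean of two weight readings (kg), minus the predicted
    weight of the basic clothes, as in the study protocol."""
    return round((w1 + w2) / 2 - clothing_kg, 1)

def bmi(weight_kg, height_cm):
    """Body mass index: weight (kg) divided by squared height (m)."""
    h = height_cm / 100
    return weight_kg / (h * h)

def who_adult_category(bmi_value):
    """WHO criteria for subjects over 18 years of age [25]."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    return "obese"

w = measured_weight(63.2, 63.4)            # -> 62.8
category = who_adult_category(bmi(w, 170.0))
```

A 62.8 kg, 170 cm subject has a BMI of about 21.7 kg/m2 and falls in the "normal weight" category under the WHO cut-offs above.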
Healthy nutritional behaviors included: (1) regular consumption of breakfast before leaving for school, (2) daily consumption of at least one serving of fresh fruit and (3) daily consumption of at least two servings of vegetables (recommended diet quality indicators adapted from the HBSC questionnaire [26]). Additionally, taking into account their importance for overall diet quality and their low consumption in the Polish population [21,27], three extra questions were added: (4) daily consumption of milk and/or fermented milk beverages (the main source of calcium in the diet), (5) daily consumption of whole grains (the main source of complex carbohydrates and dietary fiber) and (6) consumption of fish at least once a week (the main source of docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), vitamin D and iodine in the diet). Negative dietary determinants (unfavorable nutritional practices increasing the share of free sugars, saturated fat and trans fatty acids in the diet) were: drinking sugared soft drinks (soda and other carbonated soft drinks) several times a week, eating sweets more than once a day (adapted from HBSC), and consuming fast food more than twice a week. Prior to the main study, a pilot study (n = 50) was conducted to examine whether the questions were understandable to the respondents. The questionnaire was validated for repeatability by determining the correlation coefficient between results obtained twice in the same group (n = 50, age 13–19 years); correlation coefficients for individual questions averaged 0.76 (95% CI = 0.71–0.83) and ranged from 0.18 to 0.96.

2.5. Statistical Analysis

Statistical data processing was performed using Statistica version 13.1 (Copyright © StatSoft, Inc., 1984–2014, Cracow, Poland). Data were analyzed in the total group and according to age, gender and body weight status.
Statistical significance for nominal (categorical) variables was determined using Pearson's chi-square test. Additionally, the contingency coefficient Cramér's V was used to indicate the strength of association between categorical variables. Quantitative data were tested for normality of distribution; where normality was absent, the Mann–Whitney test was used for comparisons of independent groups. Correspondence analysis was used to study the relationships between dietary behaviors. Differences were considered significant at p ≤ 0.05.

3. Results

The total sample group consisted of 14,044 students, including 7553 girls and 6491 boys. The detailed characteristics of the group in terms of age distribution, sex and body mass index are presented in Table 1. Data on the examined dietary behaviors are presented in Table 2. All data are expressed as numbers and percentages.

3.1. Characteristics of the Study Group

The characteristics of the study population in terms of age distribution and body mass index (BMI), for the whole group and separately for girls and boys, are presented in Table 1. The predominant group was students aged 17 (followed by 18- and 16-year-olds among girls, and 16- and 18-year-olds among boys), while the smallest groups in both sexes were students aged 13 and 19. There were significant differences in average BMI between girls and boys in the total group and in all age categories except the 13-year-olds.
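As a concrete illustration of the categorical tests described in the statistical analysis subsection, a Pearson chi-square test with Cramér's V as effect size can be sketched as follows. The contingency counts are invented, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Pearson's chi-square test and Cramér's V for an
    r x c contingency table of counts."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1  # min(rows, cols) - 1
    return float(np.sqrt(chi2 / (n * k))), p

# Invented gender-by-behavior table: counts of "yes" / "no" answers.
table = [[520, 480],   # girls
         [610, 390]]   # boys
v, p = cramers_v(table)
```

For this table the association is statistically significant (p < 0.001) yet weak (V ≈ 0.09), which mirrors the pattern of "significant but small" effects reported in the Results below.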
3.2. Characteristics of Nutritional Behaviors

Figure 1 presents the relationships between the examined nutritional behaviors in the whole group. Based on the correspondence analysis, connections between the analyzed nutritional behaviors can be indicated. Beneficial nutritional behaviors, such as consuming breakfast, fruit, vegetables, whole-grain bread, milk or milk beverages and fish, were linked together. Conversely, unfavorable eating behaviors, such as skipping breakfast and low consumption of milk products, fruits, vegetables, fish and whole-grain bread, were related to each other. Consumption of fast food, sweets and sugared soft drinks clustered together and corresponded more closely to the adverse nutritional behaviors. The frequencies of the examined nutritional behaviors in the total group, and for girls and boys separately, are presented in Table 2. Breakfast was regularly consumed by seven out of ten 13-year-olds but only by half of the 19-year-olds. There was a statistically significant (but small) effect of age in the total group, and separately for girls and boys. Boys were more likely to eat breakfast than girls, and the differences were particularly noticeable in the younger age groups. The frequency of eating at least one serving of fruit per day also decreased with age, with a statistically significant (but small) effect in the total group and separately for girls and boys. Girls were more likely than boys to include fresh fruit in their daily diet. There was a significant effect of age on the consumption of vegetables in the total group. Half of the 13-year-olds consumed at least two servings of vegetables a day, but this frequency decreased to 43% among the 19-year-olds. A similar influence of age was observed among girls and among boys. Girls consumed at least two servings of vegetables daily more often than boys in all age groups, except for the 18-year-olds.
The consumption of milk or milk beverages decreased with age; significant age effects were observed in the total group and among both girls and boys. In all age groups, fewer girls than boys drank milk and fermented milk beverages. The proportion of teenagers consuming whole-grain bread in their everyday diet also decreased with age; however, age effects were not observed for girls or boys separately. Considering gender, in all age groups a greater percentage of girls consumed whole-grain bread in their usual diet. No significant effect of age on fish consumption was observed, either in the total group or among girls and boys. However, significant gender effects were observed: a greater percentage of boys than girls consumed fish at least once a week in all age groups. An effect of age was observed with regard to drinking sugared soft drinks a few times a week, for the whole group and for both genders: the proportion of adolescents drinking sugared soft drinks increased with age. A higher percentage of boys than girls consumed sugared soft drinks in all age categories. Less than half of the students declared consuming sweets more than once a day. No age effects were observed for the whole group or for girls; a small effect of age was found only among boys. On the other hand, a relationship with gender was observed: in all age groups, a higher percentage of girls than boys declared this behavior. There was a significant relationship between fast-food consumption and age. The percentage of adolescents consuming fast food more than twice a week increased with age, and an analogous relationship was observed for girls and boys. In all age groups, a higher proportion of boys than girls declared this behavior. The data on the prevalence of the examined nutritional behaviors depending on nutritional status (underweight, normal body mass, overweight and obesity) are presented in Table 3.
Analyzing the prevalence of the selected nutritional behaviors depending on body weight status, significant relationships were observed in the whole group for all eating behaviors except vegetable consumption. Regular consumption of breakfast was more often declared by adolescents with normal body weight or underweight, in the total group and for both girls and boys. The percentage of subjects consuming at least one portion of fruit was smallest in the underweight group and largest among the obese adolescents. Consumption of milk and milk beverages was declared by the highest percentage of overweight adolescents and the smallest percentage of underweight individuals; at the same time, no relationship between this behavior and nutritional status was observed separately for girls or boys. The frequency of regular consumption of whole-grain bread increased with the body weight category in the whole group and among girls. A relationship between fish consumption and nutritional status was observed only in the whole group; as with bread, the frequency of declared fish consumption increased with the BMI category. For the last three nutritional behaviors (drinking sweet beverages, eating sweets and eating fast food), the incidence decreased with the BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relationship with body weight status was observed).
In opposite, unfavorable eating behaviors such as skipping breakfast, low consumption of milk products, fruits, vegetables, fish and whole-grain bread were related. Behaviors such as fast food, sweets and sugared soft drinks consumption were linked together and corresponded more to the adverse nutritional behaviors. The frequencies of examined nutritional behaviors in the total group, and for girls and boys separately are presented in Table 2. Breakfast was regularly consumed by seven out of 10–13 year olds but only by half of 19 year olds. There was a statistically significant (but small) effect of age in the total group, and separately for girls and boys. Boys were more likely to eat breakfast in comparison to girls, and differences were particularly noticeable in the younger age groups. The frequency of eating at least one serving of fruit per day also decreased with age. A statistically significant (but small) effect in the total group, and separately for girls, and boys was noted. Girls were more likely to include fresh fruits to the daily diet in comparison to boys. There was a significant effect of age on the consumption of vegetables in the total group. Half of the 13-year-olds consumed at least two servings of vegetables a day, but the frequency of consumption decreased to 43% for the group of 19-year-olds. Similarly, the influence of age was observed in the group of girls, and boys. Girls consumed at least two servings of vegetables daily more often than boys in all age groups, except for 18-year-olds. The consumption of milk or milk beverages decreased with age. Significant age effects were observed throughout the total group, and among girls, and boys. In all age groups fewer girls drank milk and fermented milk beverages compared to boys. With age, the proportion of teenagers consuming whole grain bread in everyday diet decreased. However, age effects were not observed for girls, and neither for boys, separately. 
Considering gender, in all age groups, the greater percentage of girls consumed whole wheat bread in their usual diet. No significant effect of age was observed on fish consumption, neither in the total group, nor in girls, and boys. However, significant gender effects were observed: A greater percentage of boys consumed fish at least one a week in all age groups compared to girls. The effect of age was observed in regards of drinking sugared soft drinks a few times a week, for the whole group, and for both genders. The proportion of adolescents drinking sugared soft drinks increased with age. A higher percentage of boys consumed sugared soft drinks compared to girls in all age categories. Less than half of the students declared consuming sweets more than once a day. No age effects were observed, neither for the whole group, nor for the girls. A small effect of age was found only among boys. On the other hand, a relationship with gender was observed: In all age groups, a higher percentage of girls declared such behavior compared to boys. There was a significant relationship between fast-food consumption and age. The percentage of adolescents consuming fast food more than twice a week increased with age, and an analogous relationship was observed for girls and boys. In all age groups, a higher number of boys declared such nutritional behaviors comparing to girls. The data on the prevalence of examined nutritional behaviors depending on the nutritional status (underweight, normal body mass, overweight and obesity) are presented in Table 3. Analyzing the prevalence of selected nutritional behaviors in the whole group of adolescents depending on the body weight status, significant relationships were observed for all eating behaviors except for consuming vegetables. Regular consumption of breakfast was more often declared by adolescents with normal body weight and underweight in total group and both for girls and boys. 
The percentage of subjects consuming at least one portion of fruit was the smallest in the underweight group, and the largest among the obese adolescents. Consumption of milk and milk beverages was declared by a higher percentage by overweight adolescents, whereas in the smallest percentage by underweight individuals. At the same time, no relationship was observed between this nutritional behavior and the nutritional status separately for girls and boys. The frequency of regular consumption of whole-grain bread increased with the category of body weight in the whole group and in the case of girls. The relationship between fish consumption and the nutritional status was observed only in the whole group. As in the case of bread, the frequency of declared fish consumption increased with the BMI category. In the case of the last three nutritional behaviors: Drinking sweet beverages, eating sweets and fast foods, the incidence of these behaviors decreased with the BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relation was observed with the body weight status). 3.1. Characteristics of the Study Group: The characteristics of the study population in terms of age distribution and the body mass index (BMI) in the whole group and separately for girls and boys are presented in Table 1. The predominant group was students aged 17 (followed by 18 and 16 olds in girls, and 16 and 18 olds in boys), while the smallest groups were students aged 13 and 19 between both sex groups. There were significant differences in the average BMI between girls and boys in the total group and in the case of all age categories except the 13 year olds. 3.2. Characteristics of Nutritional Behaviors: Figure 1 presents the relationship between the examined nutritional behaviors in the whole group. Based on the correspondence analysis, it is possible to indicate the connections between the analyzed nutritional behaviors. 
Beneficial nutritional behaviors such as consuming breakfast, fruit, vegetables, whole-grain bread, milk or milk beverages and fish were linked together. In opposite, unfavorable eating behaviors such as skipping breakfast, low consumption of milk products, fruits, vegetables, fish and whole-grain bread were related. Behaviors such as fast food, sweets and sugared soft drinks consumption were linked together and corresponded more to the adverse nutritional behaviors. The frequencies of examined nutritional behaviors in the total group, and for girls and boys separately are presented in Table 2. Breakfast was regularly consumed by seven out of 10–13 year olds but only by half of 19 year olds. There was a statistically significant (but small) effect of age in the total group, and separately for girls and boys. Boys were more likely to eat breakfast in comparison to girls, and differences were particularly noticeable in the younger age groups. The frequency of eating at least one serving of fruit per day also decreased with age. A statistically significant (but small) effect in the total group, and separately for girls, and boys was noted. Girls were more likely to include fresh fruits to the daily diet in comparison to boys. There was a significant effect of age on the consumption of vegetables in the total group. Half of the 13-year-olds consumed at least two servings of vegetables a day, but the frequency of consumption decreased to 43% for the group of 19-year-olds. Similarly, the influence of age was observed in the group of girls, and boys. Girls consumed at least two servings of vegetables daily more often than boys in all age groups, except for 18-year-olds. The consumption of milk or milk beverages decreased with age. Significant age effects were observed throughout the total group, and among girls, and boys. In all age groups fewer girls drank milk and fermented milk beverages compared to boys. 
With age, the proportion of teenagers consuming whole-grain bread in their everyday diet decreased. However, age effects were not observed separately for girls or boys. Considering gender, in all age groups a greater percentage of girls consumed whole-grain bread in their usual diet. No significant effect of age was observed on fish consumption, either in the total group or among girls and boys. However, significant gender effects were observed: A greater percentage of boys consumed fish at least once a week in all age groups compared to girls. An effect of age was observed with regard to drinking sugared soft drinks a few times a week, for the whole group and for both genders. The proportion of adolescents drinking sugared soft drinks increased with age. A higher percentage of boys consumed sugared soft drinks compared to girls in all age categories. Less than half of the students declared consuming sweets more than once a day. No age effects were observed, either for the whole group or for the girls; a small effect of age was found only among boys. On the other hand, a relationship with gender was observed: In all age groups, a higher percentage of girls declared such behavior compared to boys. There was a significant relationship between fast-food consumption and age. The percentage of adolescents consuming fast food more than twice a week increased with age, and an analogous relationship was observed for girls and boys. In all age groups, a higher percentage of boys declared such nutritional behaviors compared to girls. The data on the prevalence of the examined nutritional behaviors depending on nutritional status (underweight, normal body mass, overweight and obesity) are presented in Table 3. Analyzing the prevalence of the selected nutritional behaviors in the whole group of adolescents depending on body weight status, significant relationships were observed for all eating behaviors except for consuming vegetables. 
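Associations like these, between a categorical eating behavior and a BMI category, are conventionally assessed with a chi-square test of independence on the contingency table of counts. The paper does not name its statistical test, so the following minimal Python sketch is an illustration of that conventional approach, not the authors' method; the counts in the example are hypothetical.

```python
def chi_square_independence(table):
    """Chi-square statistic for a contingency table given as a list of
    rows of observed counts (e.g. behavior present/absent x BMI category)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 example: behavior (yes/no) x weight status (two groups).
stat = chi_square_independence([[30, 10], [10, 30]])
```

The statistic is then compared against a chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom to obtain a p-value.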
Regular consumption of breakfast was more often declared by adolescents with normal body weight and by underweight adolescents, in the total group and both for girls and boys. The percentage of subjects consuming at least one portion of fruit was the smallest in the underweight group and the largest among the obese adolescents. Consumption of milk and milk beverages was declared by the highest percentage of overweight adolescents and by the smallest percentage of underweight individuals. At the same time, no relationship was observed between this nutritional behavior and nutritional status when girls and boys were analyzed separately. The frequency of regular consumption of whole-grain bread increased with the body weight category in the whole group and among girls. A relationship between fish consumption and nutritional status was observed only in the whole group. As with bread, the frequency of declared fish consumption increased with the BMI category. For the last three nutritional behaviors (drinking sweet beverages, eating sweets and eating fast food), the incidence decreased with the BMI category, both in the whole group and among girls and boys (with the exception of drinking sweet drinks among boys, where no relationship with body weight status was observed). 4. Discussion: Since adolescence is a time of tremendous biological, psychosocial and cognitive changes, nutrition interventions need to be tailored not only to the developmental stage, but also to the nutritional needs of individuals [18]. Based on dietary recommendations, the nutritional behaviors crucial for the overall diet quality of children and adolescents can be determined. The “key” determinants of a healthy diet include eating breakfast, regular consumption of vegetables, fruit, dairy products, whole-grain products and fish, as well as avoiding sugared soft drinks, sweets and fast food (empty calories) [26,28,29]. 
Literature data indicate the prevalence of selected nutritional behaviors, as well as typical nutritional errors, in children and adolescents at different stages of development [8,30,31,32]. Hiza et al. [33] and Banfield et al. [33] reported poorer diet quality in adolescents compared to younger children. Among US students, a decrease in fruit and vegetable consumption and an increase in fast food intake have been reported from childhood and young adolescence to older adolescence [34]. Nevertheless, Lipsky et al. [29] observed a modest improvement in diet quality between 16.5 and 20.5 years of age, reflected, among other things, in more frequent breakfast consumption. Based on the correspondence analysis, it can be noticed that, regardless of age or sex, beneficial (or adverse) nutritional behaviors cluster together. Thus, individuals who, for example, do not consume breakfast more often show other adverse nutritional behaviors (low consumption of fruit, vegetables, fish and whole-grain bread). A typical breakfast in Poland includes bread or cereals, dairy and/or meat products, as well as vegetables and/or fruit. Thus, omitting breakfast may reduce the supply of these products in the overall diet. Our results suggest that if one irregularity is found in a teenager’s diet, it can be assumed that the overall diet quality is low. Interestingly, consuming (or not consuming) sweets, sugared soft drinks and fast food clustered together, but did not correspond to the other determinants of diet quality. This may suggest that such products are consumed together as a meal (e.g., a meal typical of fast-food restaurants). It may also suggest the need for educational activities aimed specifically at these products, independent of general education about healthy nutrition. In our study, the frequency of regular breakfast consumption decreased with age, both among boys and girls. In addition, girls declared eating breakfast significantly less often than boys. 
Our observations are consistent with data from the HBSC study [26], where older children and girls were less likely to eat breakfast every weekday. However, more Polish 13- and 15-year-olds declared this beneficial nutritional behavior compared to the average among their European peers [26]. Interestingly, we noted a significant relationship between the regularity of consuming breakfast and body mass status. Regular breakfast consumption was declared by the highest percentage of students with normal body mass (61%) and the lowest percentage of those with obesity (54%). It could be hypothesized that skipping breakfast is a strategy adolescents use to reduce weight; however, this hypothesis requires additional research. Fayet-Moore et al. [35] observed a lower prevalence of overweight among breakfast consumers compared to skippers (n = 4487, 2–16 years). Moreover, individuals who ate breakfast had significantly higher intakes of calcium and folate, and significantly lower intake of total fat, than breakfast skippers, which indicates the important role of breakfast not only in maintaining a healthy body weight but also in the quality of the diet. Our results indicate a strong need to intensify educational activities promoting regular breakfast consumption, especially among older girls and students with abnormal body mass status. Regular fruit and vegetable consumption is linked to many positive health outcomes [36]. The WHO recommends at least 400 g of fruit and vegetables daily; however, studies in 10 European countries indicate that the majority of teenagers fail to meet the recommendations [37]. Only 37% of 13-year-olds and 33% of 15-year-olds reported eating fruit at least once a day, whereas vegetables were consumed every day or more than once a day by 35% of 13-year-olds and 33% of 15-year-olds, respectively (average from 38 countries and regions) [26]. 
We observed a decrease in daily fruit and vegetable consumption with age in the total group and for both genders; in the total group, the percentage of teenagers reporting daily fruit consumption decreased from 67% among 13-year-olds to 49% among 19-year-olds. In the case of vegetables, we did not observe a relationship with body weight status, but the frequency of daily fruit consumption was related to nutritional status. Regular consumption of fruit was most often declared by obese teenagers, and least frequently by underweight adolescents. Fruit, in contrast to vegetables, has a higher energy value and, when consumed in large amounts, may increase the energy value of the diet. For both vegetables and fruit there is still substantial room for improvement in all subgroups; however, education should emphasize the differences in caloric value between fruit and vegetables, especially promoting the latter. Dairy products, especially milk and milk beverages, contribute to a healthy diet by providing energy, protein and nutrients such as calcium, magnesium and vitamins B1, B2 and B12 [38]. Regular consumption of at least two servings of dairy products by adolescents resulted in significant weight loss and a reduction in body fat [39,40]. However, data from the HELENA study indicated that European adolescents eat less than two-thirds of the recommended amount of milk (and milk products) [37], which was reflected in low calcium intake, especially in the oldest group of girls [10]. We also observed a decrease with age in the percentage of students declaring daily consumption of milk and milk beverages. The trend was particularly pronounced among girls: From 56% among 13-year-olds to 43% among 19-year-olds. We also observed a relationship between milk consumption and nutritional status in the total group; regular daily milk consumption was most often declared by individuals with normal body weight. 
Based on our findings, nutritional education promoting milk products should be especially targeted at older girls. As with vegetables and fruit, consumption of whole-grain products is associated with a lower risk of many diet-related diseases, e.g., cardiovascular disease and stroke, hypertension, insulin resistance, diabetes mellitus type 2, obesity and some types of cancer [41]. Papanikolaou et al. [42] reported better diet quality and nutrient intake in US children and adolescents consuming grain food products compared to those consuming no grains. In our study, less than half of the students consumed whole-grain bread every day, and this percentage decreased with age. Interestingly, the frequency of whole-grain bread consumption was highest among adolescents with excessive body mass, especially among girls. This may suggest that although consumption of whole-grain bread improves the quality of the diet, it may also contribute to increasing the overall caloric value of the diet. Regular intake of fish, particularly fatty fish, has positive health outcomes, especially in the long term: It reduces the risk of CHD mortality and ischaemic stroke [43]. Fish consumption in adolescents has been associated with better school achievements and performance in cognitive tests [44]. Handeland et al. [45] observed a small beneficial effect of fatty fish consumption on processing speed in tests of attention conducted among 426 students aged 14–15 years. In our study, only half of the students consumed fish at least once a week, and no age effect was observed. However, boys declared fish consumption more often than girls. Additionally, a significant relationship was noted between fish consumption and body weight status: The percentage of fish consumers increased with body mass status (45% among underweight and 53% among obese individuals). 
However, considering the beneficial role of fish and their low intake, nutritional education should be carried out in all subgroups of adolescents, regardless of age, sex or weight status. Sugared soft drinks, sweets and fast food are sources of empty calories that contribute a substantial share of the total energy intake in children and adolescents [46]. Intake of sweetened soft drinks among adolescents is higher than in other age groups (nearly 20% of 13- and 15-year-olds reported regular daily consumption), and it is associated with a greater risk of weight gain, obesity and chronic diseases, and directly affects dental health by providing excessive amounts of sugars [26]. In our study, consumption of sugared soft drinks increased with age category in the total group and for both boys and girls, but at the same time was lowest among obese individuals compared to the other weight groups (except for boys). Sweetened beverages provide a large amount of energy in liquid form, which increases the simple-carbohydrate content of the diet and influences the intake of other nutrients [12,47]. Interestingly, similar relationships were observed for fast food consumption. Sweets consumption was significantly higher among girls and underweight students, but no effect of age was noted. The HBSC data also highlighted gender differences in daily sweets intake (27% of 13-year-old girls compared to 23% of boys of the same age). Taking into account the prevalence of these adverse behaviors, nutritional education should be directed at all adolescents, but with particular focus on older age groups. Strengths and Limitations: The strength of the study is its sample size. To our knowledge, there is no research on such a scale covering all age categories over a large geographic area. With such a large sample, the way of obtaining the data is also an advantage. 
All questionnaires were filled in by a trained dietician who could clarify respondents’ doubts on an ongoing basis. Moreover, all anthropometric data were obtained through measurements also conducted by a dietician, which ensured reliable results and minimized bias. Respondents were recruited from schools participating in the project, which can be a certain limitation. However, the number of schools allowed a random selection of the sample, taking into account different types of institutions and their geographic locations. The small number of questions, with very limited answer options, may also be a certain limitation. However, the questions were developed on the basis of large, international studies on the nutritional behaviors of school-aged children [21,26], and include the most important healthy and unhealthy behaviors concerning nutrition. Additionally, the questionnaire was validated before the main study. 
5. Conclusions: By analyzing the differences in nutritional behaviors between age and gender groups, we provide data that can inform the development of dietary interventions tailored to the needs of adolescents at different stages of development and to improve the quality of their diet. We observed significant differences in the frequencies of the analyzed eating behaviors depending on gender as well as on age. 
Furthermore, we have shown that the incidence of undesirable eating behaviors is higher among underweight adolescents than among their peers with excessive body mass. Information on the most frequent nutritional errors at every stage of adolescence can be used to determine the type of educational messages given when counseling this challenging group; e.g., educational activities regarding regular breakfast consumption should be intensified in older age groups, as the percentage of young people who eat breakfast decreases with age. On the other hand, education on the adverse effects of consuming sweets, sugared soft drinks and fast food should be directed not only at adolescents with excessive body weight, but mainly at those who are underweight, as consumption of these products is more frequent in this group. Moreover, regardless of age and sex, both favorable and adverse nutritional behaviors corresponded with each other. The present findings can be used both for the development of educational programs and for educational activities carried out by teachers at the school level.
Background: Recognition of the dominant dietary behaviors with respect to gender and specific age groups can be helpful in the development of targeted and effective nutritional education. The purpose of the study was to analyze the prevalence of selected eating behaviors (favorable: consuming breakfast, fruit, vegetables, milk and milk beverages, whole-grain bread and fish; adverse: regular consumption of sweets, sugared soft drinks and fast food) among Polish adolescents. Methods: Data on nutritional behaviors were collected using a questionnaire. Body mass status was assessed based on weight and height measurements. Results: 14,044 students aged 13–19 years from 207 schools participated in the study. Significant differences were found in nutritional behaviors depending on age, gender and nutritional status. Favorable nutritional behaviors corresponded with each other, and the same relationship was observed for adverse behaviors. The frequency of the majority of healthy eating behaviors decreased with age, whereas the incidence of adverse dietary behaviors increased with age. Underweight adolescents more often consumed sugared soft drinks, sweets and fast food compared to their peers with normal and excessive body mass. Conclusions: A significant proportion of adolescents showed unhealthy nutritional behaviors. By showing changes in the incidence of nutritional behaviors depending on age, gender and body weight status, we provide data that can inform the development of dietary interventions tailored to promote specific food groups among adolescents at different stages of development to improve their diet quality.
1. Introduction: The health of children and adolescents depends upon food intake that provides sufficient energy and nutrients to promote optimal physical, cognitive and social growth and development [1,2,3]. However, in practice, the implementation of proper nutrition recommendations in these population groups is extremely difficult due to existing barriers, e.g., the availability of healthy food, inadequate nutritional knowledge of caregivers and children, and personal food preferences [4,5,6,7]. A great body of the literature indicates low overall diet quality in children and adolescents, both in terms of the amounts (deficits or excesses) of food/nutrients and the selection of food groups/products. One in four Polish female adolescents aged 17–18 years did not eat breakfast regularly, and nearly half of them consumed fish only once a month [8]. Almost 35% of schoolchildren and adolescents aged 9–13 years from rural parts of Poland regularly ate sweets, and 46% failed to consume vegetables and fruit at least once a day [9]. These inadequacies in the assortment and quantities of food products result in an incorrect supply of energy and nutrients. The average European adolescent’s diet is too high in saturated fatty acids and sodium, and too low in monounsaturated fatty acids, vitamin D, folate and iodine [10]. In Poland, a significant increasing trend in calcium intake among teenagers aged 11–15 years was noted over the last 20 years, but the observed values are still lower than the recommendations [11]. In the US, nearly 40% of the total energy consumed by two- to 18-year-olds came in the form of empty calories (including 365 kcal from added sugars) [12]. Poor diet quality in early life may impair growth and development, and also increases the risk of some diet-related diseases (e.g., obesity, type 2 diabetes mellitus, cardiovascular disease and osteoporosis) in the future [3,13]. 
Although correct nutrition is important throughout the life span, it is possible to distinguish particularly critical periods, i.e., the first 2–3 years of life [3] and the period of puberty [14,15]. Dramatic physical growth and development during puberty significantly increase requirements for energy, protein and other nutrients compared to late childhood. Biological changes related to puberty might significantly affect psychosocial development. Rapid changes in body size, shape and composition in girls might lead to poor body image and the development of eating disorders [16]. At this age, girls may adopt nutritional behaviors aimed at weight loss, e.g., alternative diets promoted in the media. In boys, on the other hand, a delay in biological development might lower self-esteem and increase the risk of eating disorders [17]. As young teens are highly influenced by a peer group, the desire to conform may also affect nutritional behaviors and food intake. Moreover, food choices can be used by adolescents as a way to express their independence from families and parents. At this age, young people may prefer to eat fast-food meals in a peer group instead of meals at home with their families. During middle adolescence (15–17 years), the importance of peer groups rises even further, and their influence on individual food choices peaks. Finally, in the late stage of adolescence (18–21 years), the influence of peer groups decreases, whereas the ability to comprehend how current health behaviors may affect long-term health status significantly increases [18], which in turn can enhance the effectiveness of nutritional education. Although nutritional knowledge does not always translate into proper nutritional behavior [19], some data indicate an association between nutritional knowledge and diet quality among adolescents [20]. Joulaei et al. 
(2018) observed that an increase in functional nutrition literacy was associated with lower sugar intake and better energy balance among boys, and with higher dairy intake among girls. Therefore, recognition of the dominant dietary behaviors with respect to gender and specific age groups can be helpful in the development of targeted and effective nutritional education. In Poland, there are many studies on the nutritional behaviors of adolescents [8,9,11], but they are limited by small numbers of participants and non-representative sampling. The only study involving a large, representative group of Polish adolescents is the Health Behaviour in School-aged Children (HBSC) study [21], conducted for over 30 years, now in more than 40 countries, including Poland. The HBSC study does not allow us to assess the nutritional behaviors of older adolescents, because it covers only 11-, 13- and 15-year-old boys and girls. In Poland there is no research covering the full age range of adolescence at the same time and with the same methodology. Therefore, the purpose of the present study was to analyze the frequency of occurrence of the behaviors important for overall diet quality among Polish adolescents. The frequency of occurrence of nutritional behaviors was analyzed by age category with regard to gender and taking into account weight status criteria.
12,553
276
[ 3587, 358, 461, 528, 130, 106, 995, 212 ]
13
[ "age", "nutritional", "girls", "behaviors", "consumption", "group", "boys", "body", "nutritional behaviors", "students" ]
[ "determinants teenagers diet", "adolescents dependent food", "diet quality children", "european adolescents eat", "nutritional behaviors adolescents" ]
null
[CONTENT] nutrition | nutritional behavior | diet quality | adolescents [SUMMARY]
null
[CONTENT] nutrition | nutritional behavior | diet quality | adolescents [SUMMARY]
[CONTENT] nutrition | nutritional behavior | diet quality | adolescents [SUMMARY]
[CONTENT] nutrition | nutritional behavior | diet quality | adolescents [SUMMARY]
[CONTENT] nutrition | nutritional behavior | diet quality | adolescents [SUMMARY]
[CONTENT] Adolescent | Body Weight | Diet | Diet, Healthy | Feeding Behavior | Female | Food Preferences | Health Behavior | Health Promotion | Humans | Male | Nutritional Status | Poland | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Body Weight | Diet | Diet, Healthy | Feeding Behavior | Female | Food Preferences | Health Behavior | Health Promotion | Humans | Male | Nutritional Status | Poland | Young Adult [SUMMARY]
[CONTENT] Adolescent | Body Weight | Diet | Diet, Healthy | Feeding Behavior | Female | Food Preferences | Health Behavior | Health Promotion | Humans | Male | Nutritional Status | Poland | Young Adult [SUMMARY]
[CONTENT] Adolescent | Body Weight | Diet | Diet, Healthy | Feeding Behavior | Female | Food Preferences | Health Behavior | Health Promotion | Humans | Male | Nutritional Status | Poland | Young Adult [SUMMARY]
[CONTENT] Adolescent | Body Weight | Diet | Diet, Healthy | Feeding Behavior | Female | Food Preferences | Health Behavior | Health Promotion | Humans | Male | Nutritional Status | Poland | Young Adult [SUMMARY]
[CONTENT] determinants teenagers diet | adolescents dependent food | diet quality children | european adolescents eat | nutritional behaviors adolescents [SUMMARY]
null
[CONTENT] determinants teenagers diet | adolescents dependent food | diet quality children | european adolescents eat | nutritional behaviors adolescents [SUMMARY]
[CONTENT] determinants teenagers diet | adolescents dependent food | diet quality children | european adolescents eat | nutritional behaviors adolescents [SUMMARY]
[CONTENT] determinants teenagers diet | adolescents dependent food | diet quality children | european adolescents eat | nutritional behaviors adolescents [SUMMARY]
[CONTENT] determinants teenagers diet | adolescents dependent food | diet quality children | european adolescents eat | nutritional behaviors adolescents [SUMMARY]
[CONTENT] age | nutritional | girls | behaviors | consumption | group | boys | body | nutritional behaviors | students [SUMMARY]
null
[CONTENT] age | nutritional | girls | behaviors | consumption | group | boys | body | nutritional behaviors | students [SUMMARY]
[CONTENT] age | nutritional | girls | behaviors | consumption | group | boys | body | nutritional behaviors | students [SUMMARY]
[CONTENT] age | nutritional | girls | behaviors | consumption | group | boys | body | nutritional behaviors | students [SUMMARY]
[CONTENT] age | nutritional | girls | behaviors | consumption | group | boys | body | nutritional behaviors | students [SUMMARY]
[CONTENT] adolescents | food | development | nutritional | energy | intake | years | peer | nutrients | behaviors [SUMMARY]
null
[CONTENT] girls | boys | girls boys | group | age | observed | consumption | group girls | nutritional | milk [SUMMARY]
[CONTENT] development | adolescents | educational | frequent | excessive body | excessive | consumption | age | stage | activities [SUMMARY]
[CONTENT] girls | consumption | nutritional | age | schools | boys | group | behaviors | students | body [SUMMARY]
[CONTENT] girls | consumption | nutritional | age | schools | boys | group | behaviors | students | body [SUMMARY]
[CONTENT] Recognition ||| Polish [SUMMARY]
null
[CONTENT] 14,044 | 13-19 years old | 207 ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| Polish ||| ||| ||| 14,044 | 13-19 years old | 207 ||| ||| ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| Polish ||| ||| ||| 14,044 | 13-19 years old | 207 ||| ||| ||| ||| ||| ||| [SUMMARY]
Guideline based knowledge and practice of physicians in the management of COPD in a low- to middle-income country.
35023608
Chronic obstructive pulmonary disease (COPD) is the third leading cause of death worldwide, with 80% of these deaths occurring in low- to middle-income countries (LMICs). In Nepal, one such LMIC, COPD is a highly prevalent and significant public health issue that is often underdiagnosed. Good knowledge and practice among medical physicians in diagnosing and treating COPD can help reduce the disease burden.
BACKGROUND
A cross-sectional descriptive study using a structured questionnaire was conducted among medical physicians working in the Bagmati and Gandaki provinces of Nepal. Out of the total scores, physicians' knowledge and practice were graded according to Bloom's original cut-off points: good (≥80%), moderate (60%-79%) and poor (<60%).
DESIGN
A total of 152 medical physicians participated in this study. Out of a possible total score of 20, the mean knowledge score was 17.8 ± 2.4, and out of a possible total score of eight, the mean practice score was 5.3 ± 1.3. The correlation between total knowledge and practice scores was r = 0.18 (p value <0.02). The most frequently selected factors hindering the appropriate management of COPD were lack of patient follow-up and lack of professional training in COPD. Other factors included patients' unwillingness to discuss a smoking quit plan, lack of a screening tool, unavailability of spirometry and physicians' unawareness of medicines available to treat COPD.
RESULT
Despite physicians having good knowledge of COPD, practice in COPD management falls below guideline recommendations. There is a significant but very low positive correlation between total knowledge and practice scores. Proper COPD training for physicians, disease awareness among patients, and ready availability of diagnostic equipment and medication can help improve physicians' practice and the appropriate management of COPD patients.
CONCLUSION
[ "Cross-Sectional Studies", "Guideline Adherence", "Humans", "Physicians", "Practice Patterns, Physicians'", "Pulmonary Disease, Chronic Obstructive" ]
9060126
INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is defined as a common, preventable and treatable disease characterised by persistent respiratory symptoms and airflow limitation due to airway or alveolar abnormalities usually caused by significant exposure to noxious particles or gases. 1 It is the 4th leading cause of death globally and was projected to be the third leading cause by 2020. 2 There is significant variation in COPD prevalence worldwide, with 10%–95% underdiagnosis and 5%–60% overdiagnosis due to differences in the definition of diagnosis used and the unavailability of spirometry, especially in rural areas of low- and middle-income countries (LMICs). 3 According to the National Burden of Disease 2017 report in Nepal, non-communicable disease (NCD) accounts for significant deaths, and 1 in 10 deaths (10% of total deaths) is attributed to COPD. 4 Smoking is one of the most critical risk factors for COPD. 5 , 6 Other risk factors include a family history of COPD, second-hand smoking, exposure to cooking or home heating fuels, and occupational dusts and chemicals. 7 COPD should be considered in any patient with dyspnoea, chronic cough or sputum production, and a history of recurrent lower respiratory tract infections, with or without a history of exposure to harmful particles or gases. 8 The US National Heart, Lung, and Blood Institute and the World Health Organization established the Global Initiative for Chronic Obstructive Lung Disease (GOLD) to prepare a standard guideline for COPD management. The primary purpose of this project was to create and disseminate guidelines that would help prevent COPD and establish a standard of care for treating patients with COPD based on the most current medical evidence. As per GOLD, spirometry is needed to diagnose COPD, with a post-bronchodilator value of FEV1/FVC < 0.70 confirming the presence of persistent airflow limitation.
9 Despite much advancement in COPD treatment, studies suggest either underdiagnosis or overdiagnosis of the disease. One study indicates that 80% of COPD cases confirmed by spirometry had been underdiagnosed. 10 In Nepal, a resource-poor country, COPD is a highly prevalent and significant public health issue that is often underdiagnosed. 11 This is related to various factors, such as the lack of spirometry facilities, lack of awareness of COPD risk factors such as exposure to biomass fuel, low levels of patient education, and lack of disease awareness among healthcare providers. 12 Proper knowledge of the disease and the availability of adequate resources are required to implement the guideline and help doctors diagnose COPD and provide best practice for COPD patients. Good practice based on proper guidelines can help diagnose cases early and decrease COPD-related morbidity and mortality. No study is available in Nepal examining medical physicians' knowledge of COPD, their practice, and barriers to guideline-based management. Thus, we aim to study knowledge of COPD among medical physicians from the Gandaki and Bagmati provinces of Nepal as per the GOLD guidelines, their current practice, and the factors influencing the proper management of COPD patients. Our findings can help the concerned authorities plan and effectively mobilise medical physicians to decrease the morbidity and mortality secondary to COPD.
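The GOLD diagnostic criterion described above (post-bronchodilator FEV1/FVC < 0.70 confirming persistent airflow limitation) can be sketched as a trivial check. This is an illustrative sketch only; the function name and the example volumes are invented for illustration and are not from the study:

```python
def meets_gold_airflow_criterion(fev1_l: float, fvc_l: float) -> bool:
    """Illustrative check of the GOLD spirometry criterion:
    a post-bronchodilator FEV1/FVC ratio below 0.70 indicates
    persistent airflow limitation (both volumes in litres)."""
    if fvc_l <= 0:
        raise ValueError("FVC must be positive")
    return fev1_l / fvc_l < 0.70

# Hypothetical example: FEV1 = 1.8 L, FVC = 3.0 L gives a ratio
# of 0.60, so the airflow-limitation criterion is met.
```

Note that spirometry supplies the measured values; the criterion itself is just this ratio threshold, which only 21.0% of surveyed physicians identified correctly.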
null
null
RESULTS
A total of 152 medical physicians participated in the study. The baseline demographic and work-related characteristics are shown in Table 1. Of the total participants, 73.0% were male. Most of the participants, 93.4%, belonged to the 20–30 age group. An almost equal number of participants came from Bagmati province and Gandaki province. Furthermore, 44.0% of physicians worked in primary level health facilities, followed by 23.6% in tertiary level health facilities, 18.4% in secondary health facilities and 13.8% in private practice. Additionally, 44.0% of the participants had an employment duration of less than a year, 46.0% had worked for 1–2 years, and 9.6% for more than two years. Regarding caseload, 43.7% of the participating physicians see on average 0–5 COPD cases per week, 24.3% see 6–10 cases per week, and 30.9% see more than 10 cases per week. It was observed that 28.9% of physicians had a spirometry facility available within their health facility, and 40.1% had pulmonary rehabilitation, immunisation or both services available at their workplace. Lastly, only 11.1% of physicians had received CME/training on COPD and its management during their practice. Baseline demographic and work-related characteristics. Abbreviations: CME, continuing medical education; COPD, chronic obstructive pulmonary disease.
Knowledge analysis: Out of a total score of twenty, the mean score for overall COPD knowledge was 17.8 ± 2.4. Using Bloom's original cut-off points for knowledge grading, 45.4% of physicians had good knowledge, 47.3% moderate, and 7.2% poor knowledge regarding COPD. Table 2 shows the mean score for overall knowledge and for each domain of knowledge tested. Of the physicians, 90.7% correctly chose that the burden of COPD is increasing globally. Similarly, 65.7% said the burden of COPD in Nepal is within the top five causes of morbidity. For disease definition, most of the participants, 88.8%, correctly defined COPD as a chronic, preventable and treatable condition. Only 32.9% of the participants correctly identified all six risk factors for COPD: smoking, exposure to biomass fuel, exposure to outdoor pollution, occupational air pollution, genetic factors such as alpha-1 antitrypsin deficiency, and recurrent chest infections. Smoking was the most selected risk factor (98.0%), whereas recurrent chest infection was the least selected (48.6%). Participants' overall knowledge score and score on each domain of study. Abbreviation: COPD, chronic obstructive pulmonary disease. Regarding the gold standard test for COPD diagnosis, 86.1% of the participants rightly chose spirometry; however, only 21.0% were able to identify the correct spirometry cut-off value for diagnosis, that is, post-bronchodilator FEV1/FVC < 0.7. For the use of inhaled corticosteroids (ICSs), 86.8% of physicians knew that a history of two or more episodes of acute exacerbation in a year is an indication for their use, and 6.5% knew that an eosinophil count >300 cells/μl is. The majority of the participants, 61.8%, correctly identified that oxygen supplementation and smoking cessation help decrease mortality in COPD patients. Regarding domiciliary oxygen therapy, 74.3% of physicians rightly cited arterial hypoxemia with paO2 < 55, spO2 < 88% as an indication for its use. Of the participants, 41.4% chose all four given symptoms of acute exacerbation, that is, increasing shortness of breath, increasing cough, increasing mucus production and mucus colour change. The most selected symptoms were increasing shortness of breath (92.7%) and increasing cough (89.4%).
Practice analysis: For practice, out of a possible eight, the mean score of the participants was 5.30 ± 1.30. Using Bloom's original cut-off points for practice grading, 30 (19.73%) of the respondents were categorised as having good practice, 55.9% moderate, and 24.4% poor practice in diagnosing and managing COPD. Table 3 shows the overall responses of the participants to the different practice questions. Of the respondents, 75.6% think of COPD as a differential when a patient presents to the clinic with cough and difficulty breathing. Similarly, 80.2% of the physicians said they enquired about smoking history in every patient aged greater than 20 years. Frequency distribution of participants' practice in COPD. Abbreviation: COPD, chronic obstructive pulmonary disease. For initial management of suspected COPD, 58.5% said they would perform an X-ray and blood test, give a trial of bronchodilator and send the patient home, whereas 36.8% of the participants chose to perform a pulmonary function test (PFT) or refer the patient to a spirometry facility. Furthermore, 63.8% said that failure to improve on medical therapy was the main reason for referring COPD patients for spirometry. As the first-line drug for acute relief of symptoms, 59.8% of physicians chose a short-acting muscarinic antagonist (SAMA) alone or in combination with a short-acting beta-2 agonist (SABA) to relieve shortness of breath, whereas 23.6% chose ICS alone or in combination. Similarly, 87.5% used either amoxicillin or azithromycin as a first-line antibiotic for AECOPD. During follow-up, 92.7% of the participants asked about medication adherence in COPD patients; however, only 37.5% said they would counsel them about a COPD action plan at home during an acute exacerbation.
Correlation analysis: A Pearson correlation showed a statistically significant, very low positive correlation between total knowledge score and practice score, with r = 0.18 and p value <0.02.
Factors affecting difficulties in patients quitting smoking: Table 4 shows the reasons preventing physicians from helping COPD patients quit smoking. The most common factors were lack of follow-up (65.7%) and patients not wanting to discuss a quit plan (65.1%). Others included lack of access to pharmacological measures such as nicotine replacement therapy (NRT) (37.5%) and physicians not being aware of any quit plan (8.5%). Factors affecting difficulty in patients quitting smoking.
Confidence level in diagnosis and factors preventing a proper diagnosis of COPD: Of the doctors, 74.9% were either confident or extremely confident about correctly diagnosing COPD cases. Table 5 shows the factors preventing a proper diagnosis of COPD. Most physicians said that lack of patient follow-up (71.7%), lack of a screening device for COPD (65.7%), and lack of professional training in the diagnosis of COPD (61.1%) were the main reasons. Other reasons included lack of spirometry in nearby referral centres (58.5%) and patients' poor financial status (53.2%). Factors preventing the proper diagnosis of chronic obstructive pulmonary disease (COPD) patients.
Confidence levels in pharmacological treatment and factors affecting appropriate pharmacological therapy for COPD patients: Almost 83.6% of the physicians said that they were either undertreating, unsure or overtreating COPD patients. Table 6 shows the factors influencing the provision of appropriate treatment to COPD patients. The most chosen factors were lack of professional training in COPD disease management, poor follow-up and high medication cost. Factors affecting appropriate pharmacological therapy to chronic obstructive pulmonary disease (COPD) patients.
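The grading used throughout the results (Bloom's original cut-off points: good ≥80%, moderate 60%–79%, poor <60% of the maximum score) can be sketched as a small function. This is a minimal illustrative sketch; the function name and example scores are invented, not taken from the study's data:

```python
def bloom_grade(score: float, max_score: float) -> str:
    """Grade a raw score using Bloom's original cut-off points:
    good >= 80%, moderate 60-79%, poor < 60% of the maximum."""
    if max_score <= 0:
        raise ValueError("max_score must be positive")
    pct = 100.0 * score / max_score
    if pct >= 80:
        return "good"
    if pct >= 60:
        return "moderate"
    return "poor"

# Knowledge was scored out of 20 and practice out of 8, so for
# example a knowledge score of 17 (85%) grades as "good", while
# a practice score of 5 (62.5%) grades as "moderate".
```

The mean practice score of 5.30/8 (66%) falling in the moderate band, against a mean knowledge score of 17.8/20 (89%) in the good band, is consistent with the reported knowledge-practice gap.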
null
null
[ "INTRODUCTION", "Study design", "Study settings", "Sample selection", "Sample size", "Tools", "Data collection and analysis", "Knowledge analysis", "Practice analysis", "Correlation analysis", "Factors affecting difficulties in patients quitting smoking", "Confidence level in diagnosis and factors preventing a proper diagnosis of COPD", "Confidence levels in pharmacological treatment and factors affected in providing appropriate pharmacological therapy to COPD patients", "FUNDING INFORMATION", "ETHICS STATEMENT", "AUTHOR CONTRIBUTIONS" ]
[ "Chronic obstructive pulmonary disease (COPD) is defined as a common, preventable and treatable disease characterised by persistent respiratory symptoms and airflow limitation due to airway or alveolar abnormalities usually caused by significant exposure to noxious particles or gases.\n1\n It is the 4th leading cause of death globally and is projected to be the third leading cause by 2020.\n2\n There is a significant variation in COPD prevalence worldwide, with 10%–95% underdiagnosis and 5%–60% overdiagnosis due to differences in the definition of diagnosis used and the unavailability of spirometry, especially in rural areas of low and middle‐income countries (LMICs).\n3\n According to the National Burden of Disease 2017 report in Nepal, non‐communicable disease (NCD) accounts for significant deaths and 1 in 10 deaths (10% of total deaths) is attributed to COPD.\n4\n\n\nSmoking is one of the most critical risk factors for COPD.\n5\n, \n6\n Other risk factors include a family history of COPD, second‐hand smoking, exposure to cooking or home heating fuels, and occupational specks of dust/chemicals.\n7\n COPD should be considered in any patient with dyspnoea, chronic cough or sputum production, and a history of recurrent lower respiratory tract infections with or without a history of exposure to harmful particles or gases.\n8\n The US National Heart, Lung, and Blood Institute and the World Health Organization established the Global Initiative for Chronic Obstructive Lung Disease (GOLD) to prepare a standard guideline for COPD management. The primary purpose of this project was to create and disseminate guidelines that would help prevent COPD and establish a standard of care for treating patients with COPD based on the most current medical evidence. 
As per GOLD, spirometry is needed to diagnose COPD with the post‐bronchodilator value of FEV1/FVC < 0.70, confirming the presence of persistent airflow limitation.\n9\n Despite much advancement in COPD treatment, studies suggest either underdiagnosis or overdiagnosis of the disease. A study indicates that 80% of COPD cases confirmed by spirometry were underdiagnosed.\n10\n Nepal being one of the resource‐poor countries, it is evident that COPD is a highly prevalent and significant public health issue that is often underdiagnosed.\n11\n It is related to various factors like lack of spirometry facility, lack of awareness of various risk factors of COPD, such as exposure to biomass fuel, a low level of education in patients, and lack of disease awareness among healthcare providers.\n12\n Proper knowledge of the disease and availability of adequate resources is required to implement the guideline and help doctors diagnose and provide best practices for COPD patients. Good practice based on proper guidelines can help diagnose the cases early and decrease COPD‐related morbidity and mortality.\nNo study is available in Nepal to look into the knowledge level of a medical physician on COPD, their practice, and barriers to guideline‐based management. Thus, we aim to study the knowledge among medical physicians from Gandaki and Bagmati province of Nepal on COPD as per GOLD guidelines, their current practice, and study factors influencing the proper management of COPD patients. Our findings can help the concerned authority plan and effectively mobilise medical physicians to decrease the morbidity and mortality secondary to COPD.", "We carried out a descriptive cross‐sectional study among medical physicians working in all levels of health care delivery in the Bagmati and Gandaki provinces of Nepal.", "Nepal is divided into seven provinces; we did the study in two provinces, that is, Bagmati and Gandaki. 
These were selected based on the highest number of COPD diagnosed in the last 5 years.\n13\n We categorised health facilities into primary health facilities, secondary health facilities, tertiary health facilities and private practice. Primary level health facilities included a health post, primary health centre and community health centre. Secondary health facilities include hospitals with inpatient wards, that is, district hospitals, community hospitals and private hospitals. Tertiary Hospitals are referral centres with specialist services, that is, zonal hospitals, regional hospitals, province hospitals, central hospitals, and medical colleges. Private practice includes private clinics, polyclinics with no inpatient care.", "Our inclusion population included medical physicians. Medical physicians are medical officers who have completed their primary medical education, Bachelor of Medicine, and Bachelor of Surgery (MBBS) and registered in the Nepal Medical Council (NMC) with no specialist training. They are often the first point of contact for seeing COPD patients in primary, secondary and tertiary health facilities and even in private practice. Due to the vast geographical diversity in certain areas of Nepal, they are the only available doctors.", "The minimum sample size required was 152 and was calculated using expected proportions of 10%, an accepted error of 5%, and a nonresponse rate of 10% at a 95% confidence interval (C.I).", "The knowledge questionnaire was designed based on the COPD GOLD 2020 guideline; however, we did literature reviews to identify background variables and intermediate variables of knowledge and practice of COPD management among physicians. The questionnaire was further discussed with the content expert from Tribhuvan University Teaching Hospital (TUTH), G. P. Koirala National Centre for Respiratory Disease, ensuring face and content validity. 
After pretesting it on 20 medical physicians not included in the study, we prepared the final questionnaire. It consists of demographic and work‐related characteristics; age, gender, medical qualification, employment duration, province, number of COPD cases seen per week, availability of spirometry, availability of pulmonary rehabilitation and vaccination, and participation in COPD continuing medical education (CME)/training participation. We used single and multiple‐choice questions to test the knowledge level on five domains. They are COPD epidemiology and disease definition, risk factors, COPD diagnosis, treatment and knowledge on acute exacerbations with 20 total points. Each correct response based on GOLD guidelines was given one score, and the incorrect answer was allocated zero.\nTo test practice, participants were asked questions on when to suspect COPD as a differential, do they do smoking screening, what is the first‐line management done for suspected COPD cases, when do they refer COPD patients for spirometry, what antibiotics are used as first‐line antibiotics for acute exacerbation of COPD, do they ask for drug adherence, what are the first‐line bronchodilators used for acute relief of symptoms and do they explain COPD action plan. Each appropriate response was given one score out of 8. The last section included the confidence level of the participants in COPD diagnosis and treatment, followed by factors causing difficulties in patients to quit smoking, proper diagnosis, and providing appropriate pharmacological therapy to COPD patients.", "We noted the health facilities available in Gandaki and Bagmati province from the data of the Government of Nepal, Ministry of Health and Population, Health Emergency and Disaster Management Unit, Health Emergency Operation Centre in coordination with provincial health report. From that list of health institutes, medical physicians were contacted for participation through telephone, social media, and email. 
Once they agreed, we sent an online self‐administered structured questionnaire in the Google form. Data were collected from May 10th 2021, to June 10th 2021. Ethical approval for this study was obtained from Nepal Health Research Council (NHRC).\nWe conducted a descriptive analysis with percentage, mean, median, and proportion. Knowledge and practice total scores were graded according to Blooms original cut‐off points; good (≥ 80%), moderate (60%–79%) and poor (<60%).\n14\n Data were analysed with IBM SPSS statistics version 26 and Stata version 13. A correlation test was done to see the association between knowledge scores and practice scores. p value <0.05 were considered significant.", "Out of a total score of twenty, the mean score for overall COPD knowledge was 17.8 ± 2.4. Using Bloom's original cut‐off point for knowledge grading, 45.4% of physicians have good knowledge, 47.3% have moderate, and 7.2% of the medical physicians have poor knowledge regarding COPD. (Table 2) Shows the average mean score of overall knowledge and each domain of knowledge tested. 90.7% of the physicians correctly chose that the burden of COPD is increasing globally. Similarly, 65.7% said the burden of COPD in Nepal is within the top five causes of morbidity. For disease definition, most of the participants, 88.8%, correctly defined COPD as a chronic preventable and treatable condition. Only 32.9% of the participants correctly identified all six risk factors for COPD: smoking, exposure to biomass fuel, exposure to outdoor pollution, occupational air pollutions, genetic factors like alpha 1 anti‐trypsin deficiency and recurrent chest infections. 
Smoking was the most selected risk factor by 98.0%, whereas recurrent chest infection was the least selected by 48.6%.\nParticipants overall knowledge score and on each domain of study\nAbbreviation: COPD, chronic obstructive pulmonary disease.\nRegarding the gold standard test for COPD diagnosis, 86.1% of the participants chose spirometry rightly; however, only 21.0% of the participants were able to identify the correct spirometry cut‐off value for diagnosis, that is, post‐bronchodilator FEV1/FVC < 0.7. For the use of inhaled corticosteroids (ICSs), 86.8% of physicians knew the history of two or more episodes of acute exacerbation in a year, and 6.5% of the physicians knew eosinophils count >300 cells/μl an indication for its use. The majority of the participants, 61.8%, correctly identified that oxygen supplementation and smoking cessation help decrease mortality in COPD patients. Regarding the use of domiciliary oxygen therapy, 74.3% of physicians rightly said arterial hypoxemia with paO2 < 55, spO2 < 88% as an indication for its use. Of the participants, 41.4% chose all four given symptoms of acute exacerbation, that is, increasing shortness of breath, increasing cough, increasing mucus production and mucus colour change. The most selected symptoms were increasing shortness of breath, 92.7%, and increasing cough, 89.4%.", "For practice, out of possible eight, the mean score of the participants was 5.30 ± 1.30. Using Bloom's original cut‐off point for practice grading, 30 (19.73%) of the respondents were categorised as having a good practice, 55.9% at the moderate level, and 24.4 with poor practice in diagnosing and managing COPD. (Table 3) shows the overall response of the participants in different practice questions. Of respondents, 75.6% think of COPD as a differential when a patient with cough and difficulty breathing presents to the clinic. 
Similarly, 80.2% of the physicians said they enquired about smoking history in every patient age greater than 20 years.\nFrequency distribution of participants practice in COPD\nAbbreviation: COPD, chronic obstructive pulmonary disease.\nFor initial management of suspected COPD, 58.5% said they would perform an X‐ray, blood test and give a trial of bronchodilator and send them home, whereas 36.8% of the participants chose to perform pulmonary function test (PFT) or refer patients to spirometry facility. Furthermore, 63.8% said that not improving to medical therapy is the main reason for referring COPD patients to spirometry. The first‐line drug of choice to treat acute relief of symptoms, 59.8% of physicians, chose to use short‐acting muscarinic antagonist (SAMA) alone or in combination with short‐acting beta 2 agonists (SABA) to relieve shortness of breath whereas, 23.6% of the physician choose to use ICS alone or in combination. Similarly, 87.5% used either amoxicillin or azithromycin as a first‐line antibiotic for AECOPD. During follow‐up, 92.7% of the participants asked about medical adherence in COPD patients; however, only 37.5% said they would counsel them about the COPD action plan at home during an acute exacerbation.", "We did Pearson correlation, and there was a statistically significant very low positive correlation between total knowledge score and practice score with r = 0.18 and p value <0.02.", "Table 4 shows the reasons that are preventing the physician from helping COPD patients quit smoking. The most common factors were lack of follow‐up 65.7%, and patients not wanting to discuss a quit plan 65.1%. Others included lack of access to pharmacological measures like nicotine replacement therapy (NRT) 37.5% and physicians not being aware of any quit plan 8.5%.\nFactors affecting difficulty in patients quit smoking", "Of the doctors, 74.9% were either confident or extremely confident about correctly diagnosing the COPD cases. 
Table 5 shows the factors preventing a proper diagnosis of COPD. Most physicians said that lack of patient follow‐up (71.7%), lack of a screening device for COPD (65.7%) and lack of professional training in the diagnosis of COPD (61.1%) are the main reasons. Other reasons included lack of spirometry nearby in the referral centres (58.5%) and poor patient financial status (53.2%).\nFactors preventing the proper diagnosis of chronic obstructive pulmonary disease (COPD) patients", "Overall, 83.6% of the physicians said that they were either undertreating, unsure about, or overtreating COPD patients. Table 6 shows the factors influencing the provision of appropriate treatment to COPD patients. The most chosen factors were lack of professional training in COPD disease management, poor follow‐up and high medication cost.\nFactors affecting appropriate pharmacological therapy to chronic obstructive pulmonary disease (COPD) patients", "No funding was received for this research.", "Ethical approval for this study was provided by the Government of Nepal, Nepal Health Research Council (Proposal ID: 636‐2020). Informed consent to participate and publication was obtained from all individual participants included in the study.", "Suraj Ghimire, Anish Lamichhane, Anita Basnet, Samikshya Pandey and Ram Kumar Shrestha designed the study and questionnaire. Nahakul Poudel, Bushan Shrestha, Santosh Pathak and Gaurav Mahato did data collection and entry. Suraj Ghimire and Anita Basnet were involved in data analysis and manuscript writing." ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study design", "Study settings", "Sample selection", "Sample size", "Tools", "Data collection and analysis", "RESULTS", "Knowledge analysis", "Practice analysis", "Correlation analysis", "Factors affecting difficulties in patients quitting smoking", "Confidence level in diagnosis and factors preventing a proper diagnosis of COPD", "Confidence levels in pharmacological treatment and factors affected in providing appropriate pharmacological therapy to COPD patients", "DISCUSSION", "CONFLICT OF INTEREST", "FUNDING INFORMATION", "ETHICS STATEMENT", "AUTHOR CONTRIBUTIONS" ]
[ "Chronic obstructive pulmonary disease (COPD) is defined as a common, preventable and treatable disease characterised by persistent respiratory symptoms and airflow limitation due to airway or alveolar abnormalities usually caused by significant exposure to noxious particles or gases.\n1\n It is the 4th leading cause of death globally and is projected to be the third leading cause by 2020.\n2\n There is significant variation in COPD prevalence worldwide, with 10%–95% underdiagnosis and 5%–60% overdiagnosis due to differences in the definition of diagnosis used and the unavailability of spirometry, especially in rural areas of low and middle‐income countries (LMICs).\n3\n According to the National Burden of Disease 2017 report in Nepal, non‐communicable diseases (NCDs) account for a significant proportion of deaths, and 1 in 10 deaths (10% of total deaths) is attributed to COPD.\n4\n\n\nSmoking is one of the most critical risk factors for COPD.\n5\n, \n6\n Other risk factors include a family history of COPD, second‐hand smoking, exposure to cooking or home heating fuels, and occupational dusts/chemicals.\n7\n COPD should be considered in any patient with dyspnoea, chronic cough or sputum production, and a history of recurrent lower respiratory tract infections with or without a history of exposure to harmful particles or gases.\n8\n The US National Heart, Lung, and Blood Institute and the World Health Organization established the Global Initiative for Chronic Obstructive Lung Disease (GOLD) to prepare a standard guideline for COPD management. The primary purpose of this project was to create and disseminate guidelines that would help prevent COPD and establish a standard of care for treating patients with COPD based on the most current medical evidence. 
As per GOLD, spirometry is needed to diagnose COPD, with a post‐bronchodilator FEV1/FVC < 0.70 confirming the presence of persistent airflow limitation.\n9\n Despite much advancement in COPD treatment, studies suggest either underdiagnosis or overdiagnosis of the disease. One study indicates that 80% of COPD cases confirmed by spirometry were underdiagnosed.\n10\n As Nepal is a resource‐poor country, COPD is a highly prevalent and significant public health issue that is often underdiagnosed.\n11\n This is related to various factors, such as lack of spirometry facilities, lack of awareness of risk factors of COPD such as exposure to biomass fuel, a low level of education in patients, and lack of disease awareness among healthcare providers.\n12\n Proper knowledge of the disease and availability of adequate resources are required to implement the guideline and help doctors diagnose and provide best practices for COPD patients. Good practice based on proper guidelines can help diagnose cases early and decrease COPD‐related morbidity and mortality.\nNo study in Nepal has examined the knowledge level of medical physicians on COPD, their practice, and barriers to guideline‐based management. Thus, we aimed to study the knowledge of medical physicians from the Gandaki and Bagmati provinces of Nepal on COPD as per the GOLD guidelines, their current practice, and the factors influencing the proper management of COPD patients. 
Our findings can help the concerned authority plan and effectively mobilise medical physicians to decrease the morbidity and mortality secondary to COPD.", " Study design We carried out a descriptive cross‐sectional study among medical physicians working at all levels of health care delivery in the Bagmati and Gandaki provinces of Nepal.\n Study settings Nepal is divided into seven provinces; we did the study in two provinces, that is, Bagmati and Gandaki. These were selected based on the highest number of COPD cases diagnosed in the last 5 years.\n13\n We categorised health facilities into primary health facilities, secondary health facilities, tertiary health facilities and private practice. Primary level health facilities included health posts, primary health centres and community health centres. Secondary health facilities include hospitals with inpatient wards, that is, district hospitals, community hospitals and private hospitals. Tertiary hospitals are referral centres with specialist services, that is, zonal hospitals, regional hospitals, province hospitals, central hospitals, and medical colleges. Private practice includes private clinics and polyclinics with no inpatient care.\n Sample selection Our inclusion population comprised medical physicians. Medical physicians are medical officers who have completed their primary medical education, Bachelor of Medicine and Bachelor of Surgery (MBBS), and are registered with the Nepal Medical Council (NMC) with no specialist training. They are often the first point of contact for COPD patients in primary, secondary and tertiary health facilities and even in private practice. Due to the vast geographical diversity in certain areas of Nepal, they are the only available doctors.\n Sample size The minimum sample size required was 152 and was calculated using an expected proportion of 10%, an accepted error of 5%, and a nonresponse rate of 10% at a 95% confidence interval (CI).\n Tools The knowledge questionnaire was designed based on the COPD GOLD 2020 guideline; however, we did literature reviews to identify background variables and intermediate variables of knowledge and practice of COPD management among physicians. 
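The minimum sample size of 152 quoted in the Sample size section is consistent with the standard Cochran formula for a single proportion. The authors do not state the exact formula or rounding convention they used, so the sketch below is an assumption that happens to reproduce the reported figure:

```python
def cochran_sample_size(p: float, e: float, z: float = 1.96) -> float:
    """Base sample size for estimating a proportion p with absolute
    margin of error e at the confidence level implied by z (1.96 ~ 95% CI)."""
    return (z ** 2) * p * (1 - p) / (e ** 2)

base = cochran_sample_size(p=0.10, e=0.05)  # expected proportion 10%, error 5%
n = round(base * 1.10)                      # inflate by the 10% nonresponse rate
print(round(base, 1), n)                    # 138.3 152
```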
The questionnaire was further discussed with the content expert from Tribhuvan University Teaching Hospital (TUTH), G. P. Koirala National Centre for Respiratory Disease, ensuring face and content validity. After pretesting it on 20 medical physicians not included in the study, we prepared the final questionnaire. It consists of demographic and work‐related characteristics; age, gender, medical qualification, employment duration, province, number of COPD cases seen per week, availability of spirometry, availability of pulmonary rehabilitation and vaccination, and participation in COPD continuing medical education (CME)/training participation. We used single and multiple‐choice questions to test the knowledge level on five domains. They are COPD epidemiology and disease definition, risk factors, COPD diagnosis, treatment and knowledge on acute exacerbations with 20 total points. Each correct response based on GOLD guidelines was given one score, and the incorrect answer was allocated zero.\nTo test practice, participants were asked questions on when to suspect COPD as a differential, do they do smoking screening, what is the first‐line management done for suspected COPD cases, when do they refer COPD patients for spirometry, what antibiotics are used as first‐line antibiotics for acute exacerbation of COPD, do they ask for drug adherence, what are the first‐line bronchodilators used for acute relief of symptoms and do they explain COPD action plan. Each appropriate response was given one score out of 8. 
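The knowledge (out of 20) and practice (out of 8) totals described above are graded with Bloom's original cut‐off points (good ≥80%, moderate 60%–79%, poor <60%). A minimal illustrative helper, not part of the study's analysis code:

```python
def bloom_grade(score: float, max_score: float) -> str:
    """Grade a total score using Bloom's original cut-off points as applied
    in this study: good >= 80%, moderate 60%-79%, poor < 60%."""
    pct = 100 * score / max_score
    if pct >= 80:
        return "good"
    if pct >= 60:
        return "moderate"
    return "poor"

print(bloom_grade(17.8, 20))  # mean knowledge score -> "good"
print(bloom_grade(5.30, 8))   # mean practice score -> "moderate"
```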
The last section included the confidence level of the participants in COPD diagnosis and treatment, followed by the factors causing difficulty in helping patients quit smoking, in making a proper diagnosis, and in providing appropriate pharmacological therapy to COPD patients.\n Data collection and analysis We noted the health facilities available in Gandaki and Bagmati provinces from the data of the Government of Nepal, Ministry of Health and Population, Health Emergency and Disaster Management Unit, Health Emergency Operation Centre, in coordination with provincial health reports. From that list of health institutes, medical physicians were contacted for participation through telephone, social media and email. Once they agreed, we sent an online self‐administered structured questionnaire in a Google form. Data were collected from May 10th 2021 to June 10th 2021. Ethical approval for this study was obtained from the Nepal Health Research Council (NHRC).\nWe conducted a descriptive analysis with percentages, means, medians, and proportions. Knowledge and practice total scores were graded according to Bloom's original cut‐off points: good (≥80%), moderate (60%–79%) and poor (<60%).\n14\n Data were analysed with IBM SPSS Statistics version 26 and Stata version 13. A correlation test was done to see the association between knowledge scores and practice scores. A p value <0.05 was considered significant.", "A total of 152 medical physicians participated in the study. The baseline demographic and work‐related characteristics are shown in Table 1. Of the total participants, 73.0% were male. Most of the participants, 93.4%, belonged to the 20–30 age group. For province, an almost equal number of participants were from Bagmati province and Gandaki province. Furthermore, 44.0% of physicians worked in primary level health facilities, followed by 23.6% from tertiary level health facilities, 18.4% from secondary health facilities and 13.8% from private practice. Additionally, 44.0% of the participants had an employment duration of less than a year, 46.0% had worked for 1–2 years, and 9.6% had worked for more than two years. Regarding the number of cases seen, 43.7% of the participating physicians see on average 0–5 COPD cases per week, 24.3% see 6–10 cases per week, and 30.9% see more than 10 cases in a week. It was observed that 28.9% of physicians had a spirometry facility available within their health facility, and 40.1% had either pulmonary rehabilitation, immunisation or both services available in their workplace. Lastly, only 11.1% of physicians had received CME/training on COPD and its management during their practice.\nBaseline demographic and work‐related characteristics\nAbbreviations: CME, continuing medical education; COPD, chronic obstructive pulmonary disease.\n Knowledge analysis Out of a total score of twenty, the mean score for overall COPD knowledge was 17.8 ± 2.4. Using Bloom's original cut‐off point for knowledge grading, 45.4% of physicians had good knowledge, 47.3% moderate knowledge, and 7.2% poor knowledge regarding COPD. Table 2 shows the average mean score for overall knowledge and for each domain of knowledge tested. Of the physicians, 90.7% correctly chose that the burden of COPD is increasing globally. 
Similarly, 65.7% said the burden of COPD in Nepal is within the top five causes of morbidity. For disease definition, most of the participants, 88.8%, correctly defined COPD as a chronic preventable and treatable condition. Only 32.9% of the participants correctly identified all six risk factors for COPD: smoking, exposure to biomass fuel, exposure to outdoor pollution, occupational air pollutions, genetic factors like alpha 1 anti‐trypsin deficiency and recurrent chest infections. Smoking was the most selected risk factor by 98.0%, whereas recurrent chest infection was the least selected by 48.6%.\nParticipants overall knowledge score and on each domain of study\nAbbreviation: COPD, chronic obstructive pulmonary disease.\nRegarding the gold standard test for COPD diagnosis, 86.1% of the participants chose spirometry rightly; however, only 21.0% of the participants were able to identify the correct spirometry cut‐off value for diagnosis, that is, post‐bronchodilator FEV1/FVC < 0.7. For the use of inhaled corticosteroids (ICSs), 86.8% of physicians knew the history of two or more episodes of acute exacerbation in a year, and 6.5% of the physicians knew eosinophils count >300 cells/μl an indication for its use. The majority of the participants, 61.8%, correctly identified that oxygen supplementation and smoking cessation help decrease mortality in COPD patients. Regarding the use of domiciliary oxygen therapy, 74.3% of physicians rightly said arterial hypoxemia with paO2 < 55, spO2 < 88% as an indication for its use. Of the participants, 41.4% chose all four given symptoms of acute exacerbation, that is, increasing shortness of breath, increasing cough, increasing mucus production and mucus colour change. The most selected symptoms were increasing shortness of breath, 92.7%, and increasing cough, 89.4%.\nOut of a total score of twenty, the mean score for overall COPD knowledge was 17.8 ± 2.4. 
Using Bloom's original cut‐off point for knowledge grading, 45.4% of physicians have good knowledge, 47.3% have moderate, and 7.2% of the medical physicians have poor knowledge regarding COPD. (Table 2) Shows the average mean score of overall knowledge and each domain of knowledge tested. 90.7% of the physicians correctly chose that the burden of COPD is increasing globally. Similarly, 65.7% said the burden of COPD in Nepal is within the top five causes of morbidity. For disease definition, most of the participants, 88.8%, correctly defined COPD as a chronic preventable and treatable condition. Only 32.9% of the participants correctly identified all six risk factors for COPD: smoking, exposure to biomass fuel, exposure to outdoor pollution, occupational air pollutions, genetic factors like alpha 1 anti‐trypsin deficiency and recurrent chest infections. Smoking was the most selected risk factor by 98.0%, whereas recurrent chest infection was the least selected by 48.6%.\nParticipants overall knowledge score and on each domain of study\nAbbreviation: COPD, chronic obstructive pulmonary disease.\nRegarding the gold standard test for COPD diagnosis, 86.1% of the participants chose spirometry rightly; however, only 21.0% of the participants were able to identify the correct spirometry cut‐off value for diagnosis, that is, post‐bronchodilator FEV1/FVC < 0.7. For the use of inhaled corticosteroids (ICSs), 86.8% of physicians knew the history of two or more episodes of acute exacerbation in a year, and 6.5% of the physicians knew eosinophils count >300 cells/μl an indication for its use. The majority of the participants, 61.8%, correctly identified that oxygen supplementation and smoking cessation help decrease mortality in COPD patients. Regarding the use of domiciliary oxygen therapy, 74.3% of physicians rightly said arterial hypoxemia with paO2 < 55, spO2 < 88% as an indication for its use. 
Of the participants, 41.4% chose all four given symptoms of acute exacerbation, that is, increasing shortness of breath, increasing cough, increasing mucus production and mucus colour change. The most selected symptoms were increasing shortness of breath, 92.7%, and increasing cough, 89.4%.\n Practice analysis For practice, out of possible eight, the mean score of the participants was 5.30 ± 1.30. Using Bloom's original cut‐off point for practice grading, 30 (19.73%) of the respondents were categorised as having a good practice, 55.9% at the moderate level, and 24.4 with poor practice in diagnosing and managing COPD. (Table 3) shows the overall response of the participants in different practice questions. Of respondents, 75.6% think of COPD as a differential when a patient with cough and difficulty breathing presents to the clinic. Similarly, 80.2% of the physicians said they enquired about smoking history in every patient age greater than 20 years.\nFrequency distribution of participants practice in COPD\nAbbreviation: COPD, chronic obstructive pulmonary disease.\nFor initial management of suspected COPD, 58.5% said they would perform an X‐ray, blood test and give a trial of bronchodilator and send them home, whereas 36.8% of the participants chose to perform pulmonary function test (PFT) or refer patients to spirometry facility. Furthermore, 63.8% said that not improving to medical therapy is the main reason for referring COPD patients to spirometry. The first‐line drug of choice to treat acute relief of symptoms, 59.8% of physicians, chose to use short‐acting muscarinic antagonist (SAMA) alone or in combination with short‐acting beta 2 agonists (SABA) to relieve shortness of breath whereas, 23.6% of the physician choose to use ICS alone or in combination. Similarly, 87.5% used either amoxicillin or azithromycin as a first‐line antibiotic for AECOPD. 
During follow‐up, 92.7% of the participants asked about medical adherence in COPD patients; however, only 37.5% said they would counsel them about the COPD action plan at home during an acute exacerbation.\nFor practice, out of possible eight, the mean score of the participants was 5.30 ± 1.30. Using Bloom's original cut‐off point for practice grading, 30 (19.73%) of the respondents were categorised as having a good practice, 55.9% at the moderate level, and 24.4 with poor practice in diagnosing and managing COPD. (Table 3) shows the overall response of the participants in different practice questions. Of respondents, 75.6% think of COPD as a differential when a patient with cough and difficulty breathing presents to the clinic. Similarly, 80.2% of the physicians said they enquired about smoking history in every patient age greater than 20 years.\nFrequency distribution of participants practice in COPD\nAbbreviation: COPD, chronic obstructive pulmonary disease.\nFor initial management of suspected COPD, 58.5% said they would perform an X‐ray, blood test and give a trial of bronchodilator and send them home, whereas 36.8% of the participants chose to perform pulmonary function test (PFT) or refer patients to spirometry facility. Furthermore, 63.8% said that not improving to medical therapy is the main reason for referring COPD patients to spirometry. The first‐line drug of choice to treat acute relief of symptoms, 59.8% of physicians, chose to use short‐acting muscarinic antagonist (SAMA) alone or in combination with short‐acting beta 2 agonists (SABA) to relieve shortness of breath whereas, 23.6% of the physician choose to use ICS alone or in combination. Similarly, 87.5% used either amoxicillin or azithromycin as a first‐line antibiotic for AECOPD. 
During follow‐up, 92.7% of the participants asked about medical adherence in COPD patients; however, only 37.5% said they would counsel them about the COPD action plan at home during an acute exacerbation.\n Correlation analysis We did Pearson correlation, and there was a statically significant very low positive correlation between total knowledge score and practice score with r = 0.18 and p value <0.02.\nWe did Pearson correlation, and there was a statically significant very low positive correlation between total knowledge score and practice score with r = 0.18 and p value <0.02.\n Factors affecting difficulties in patients quitting smoking Table 4 shows the reasons that are preventing the physician from helping COPD patients quit smoking. The most common factors were lack of follow‐up 65.7%, and patients not wanting to discuss a quit plan 65.1%. Others included lack of access to pharmacological measures like nicotine replacement therapy (NRT) 37.5% and physicians not being aware of any quit plan 8.5%.\nFactors affecting difficulty in patients quit smoking\nTable 4 shows the reasons that are preventing the physician from helping COPD patients quit smoking. The most common factors were lack of follow‐up 65.7%, and patients not wanting to discuss a quit plan 65.1%. Others included lack of access to pharmacological measures like nicotine replacement therapy (NRT) 37.5% and physicians not being aware of any quit plan 8.5%.\nFactors affecting difficulty in patients quit smoking\n Confidence level in diagnosis and factors preventing a proper diagnosis of COPD Of the doctors, 74.9% were either confident or extremely confident about correctly diagnosing the COPD cases. Table 5 shows the factors preventing a proper diagnosis of COPD. Most physicians said that lack of patient follow‐up 71.7%, lack of screening device for COPD 65.7%, and lack of professional training in the diagnosis of COPD 61.1% are the main reasons. 
Other reasons included lack of spirometry nearby in the referral centres 58.5% and poor patients' financial status 53.2%.\nFactors preventing the proper diagnosis of chronic obstructive pulmonary disease (COPD) patients\nOf the doctors, 74.9% were either confident or extremely confident about correctly diagnosing the COPD cases. Table 5 shows the factors preventing a proper diagnosis of COPD. Most physicians said that lack of patient follow‐up 71.7%, lack of screening device for COPD 65.7%, and lack of professional training in the diagnosis of COPD 61.1% are the main reasons. Other reasons included lack of spirometry nearby in the referral centres 58.5% and poor patients' financial status 53.2%.\nFactors preventing the proper diagnosis of chronic obstructive pulmonary disease (COPD) patients\n Confidence levels in pharmacological treatment and factors affected in providing appropriate pharmacological therapy to COPD patients Almost 83.6% of the physicians said that they were either undertreating, unsure or overtreating COPD patients. Table 6 shows the factors influencing in providing appropriate treatment to COPD patients. The most chosen factors were lack of professional training in COPD disease management, poor follow‐up and high medication cost.\nFactors affecting appropriate pharmacological therapy to chronic obstructive pulmonary disease (COPD) patients\nAlmost 83.6% of the physicians said that they were either undertreating, unsure or overtreating COPD patients. Table 6 shows the factors influencing in providing appropriate treatment to COPD patients. The most chosen factors were lack of professional training in COPD disease management, poor follow‐up and high medication cost.\nFactors affecting appropriate pharmacological therapy to chronic obstructive pulmonary disease (COPD) patients", "Out of a total score of twenty, the mean score for overall COPD knowledge was 17.8 ± 2.4. 
Using Bloom's original cut‐off points for knowledge grading, 45.4% of the physicians had good knowledge, 47.3% moderate, and 7.2% poor knowledge regarding COPD. Table 2 shows the mean score for overall knowledge and for each domain of knowledge tested. Of the physicians, 90.7% correctly chose that the burden of COPD is increasing globally. Similarly, 65.7% said the burden of COPD in Nepal is within the top five causes of morbidity. For disease definition, most of the participants (88.8%) correctly defined COPD as a chronic, preventable and treatable condition. Only 32.9% of the participants correctly identified all six risk factors for COPD: smoking, exposure to biomass fuel, exposure to outdoor pollution, occupational air pollution, genetic factors such as alpha‐1 antitrypsin deficiency, and recurrent chest infections. Smoking was the most selected risk factor (98.0%), whereas recurrent chest infection was the least selected (48.6%).

Participants' overall knowledge score and score on each domain of study

Abbreviation: COPD, chronic obstructive pulmonary disease.

Regarding the gold standard test for COPD diagnosis, 86.1% of the participants correctly chose spirometry; however, only 21.0% were able to identify the correct spirometry cut‐off value for diagnosis, that is, post‐bronchodilator FEV1/FVC < 0.7. For the use of inhaled corticosteroids (ICSs), 86.8% of physicians knew that a history of two or more episodes of acute exacerbation in a year is an indication for their use, and 6.5% knew that an eosinophil count >300 cells/μl is another. The majority of the participants (61.8%) correctly identified that oxygen supplementation and smoking cessation help decrease mortality in COPD patients. Regarding the use of domiciliary oxygen therapy, 74.3% of physicians correctly identified arterial hypoxaemia (PaO2 < 55 mm Hg, SpO2 < 88%) as an indication for its use.
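As an aside, the GOLD diagnostic criterion tested above (persistent airflow limitation when the post‐bronchodilator FEV1/FVC ratio is below 0.70) can be expressed as a small check. This is an illustrative sketch only; the numeric values below are invented examples, not study data.

```python
def persistent_airflow_limitation(fev1_l: float, fvc_l: float) -> bool:
    """Return True when post-bronchodilator FEV1/FVC < 0.70 (GOLD cut-off)."""
    if fvc_l <= 0:
        raise ValueError("FVC must be positive")
    return fev1_l / fvc_l < 0.70

# Invented example values, in litres:
print(persistent_airflow_limitation(1.8, 3.0))  # ratio 0.60 -> True
print(persistent_airflow_limitation(3.2, 4.0))  # ratio 0.80 -> False
```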
Of the participants, 41.4% chose all four given symptoms of acute exacerbation, that is, increasing shortness of breath, increasing cough, increasing mucus production and change in mucus colour. The most selected symptoms were increasing shortness of breath (92.7%) and increasing cough (89.4%).

Practice analysis: For practice, out of a possible eight, the mean score of the participants was 5.30 ± 1.30. Using Bloom's original cut‐off points for practice grading, 30 (19.7%) of the respondents were categorised as having good practice, 55.9% moderate, and 24.4% poor practice in diagnosing and managing COPD. Table 3 shows the overall responses of the participants to the different practice questions. Of the respondents, 75.6% think of COPD as a differential when a patient with cough and difficulty breathing presents to the clinic. Similarly, 80.2% of the physicians said they enquired about smoking history in every patient aged over 20 years.

Frequency distribution of participants' practice in COPD

Abbreviation: COPD, chronic obstructive pulmonary disease.

For initial management of suspected COPD, 58.5% said they would perform an X‐ray and blood tests, give a trial of bronchodilator and send the patient home, whereas 36.8% chose to perform a pulmonary function test (PFT) or refer the patient to a spirometry facility. Furthermore, 63.8% said that failure to improve on medical therapy is the main reason for referring COPD patients for spirometry. For acute relief of symptoms, 59.8% of physicians chose a short‐acting muscarinic antagonist (SAMA) alone or in combination with a short‐acting beta‐2 agonist (SABA) as the first‐line drug to relieve shortness of breath, whereas 23.6% chose ICS alone or in combination. Similarly, 87.5% used either amoxicillin or azithromycin as a first‐line antibiotic for acute exacerbation of COPD (AECOPD).
DISCUSSION: Our study showed that the overall knowledge of COPD based on GOLD guidelines among medical physicians in Nepal's Bagmati and Gandaki provinces is good.
Still, the practice level was not up to the knowledge they had. There was a significant, very low positive correlation between total knowledge and practice scores. This gap can be due to various patient‐related factors such as poor follow‐up, health professional‐related factors such as lack of training in COPD, and health institution factors such as unavailability of screening devices, spirometry and medications. All of these can act as barriers to properly diagnosing and managing COPD. These factors are in line with a report from the American Thoracic Society on challenges in implementing COPD guidelines in LMICs [15]. Thus, overcoming those barriers through proper training and supply of resources in LMICs like Nepal can help raise physicians' practice level in COPD to match their knowledge.

There are no previous studies on COPD knowledge among medical physicians from Nepal; however, the level of knowledge was higher than in a study from Saudi Arabia, which showed physicians had only a fair understanding of COPD based on GOLD guidelines [16]. In our study, physicians' knowledge was good even though many participants had not received CME or training on COPD. Most of the participants had been practising for less than 2 years, immediately after graduation from medical college, as required by the Government of Nepal. These early years of practice after graduation might have influenced their overall knowledge of COPD. The knowledge acquired in medical college can decline over time, so it is crucial to provide continuous training to medical physicians to enhance their expertise. Similarly, in the subdomains of knowledge, most physicians had good knowledge in each domain except diagnosis. More than two‐thirds of the medical physicians were aware of smoking and exposure to biomass fuel as risk factors for COPD.
These two are the most common risk factors for COPD prevalent in South‐East Asia [6]. Knowledge of these risk factors will help them select a high‐risk population. Similarly, they were aware of the indications for ICS use in COPD, despite different studies suggesting its inappropriate use by physicians to treat COPD [17, 18]. In addition, the two most selected interventions to reduce mortality in COPD patients were smoking cessation and domiciliary oxygen therapy.

For knowledge of diagnosis, many physicians were aware of spirometry as the gold standard test for COPD diagnosis, yet only a few could correctly choose its cut‐off value. This may be because many physicians were practising in settings with no spirometry, that is, primary and secondary health facilities. Lack of a spirometer, poor or absent teaching of spirometry interpretation in medical school, and lack of evidence‐based demonstration of its value might have resulted in less knowledge of the spirometry criterion [19]. Proper hands‐on training and exposure to spirometry can help them apply the cut‐off value for diagnosis.

We further studied the practice and confidence level of physicians in the diagnosis and treatment of COPD. Although they had good knowledge of COPD based on GOLD guidelines, their practice was not within guideline recommendations, especially in spirometry use. This finding was similar to a previous study from Nigeria showing that, despite an adequate understanding of the GOLD COPD guideline among physicians, adherence to the guideline recommendations was very poor [20]. Similarly, a study on primary care physicians' perceptions of the diagnosis and management of COPD in diverse regions of the world showed that management of COPD was well below guideline‐recommended levels in most of the areas investigated [21]. Only a few participants considered doing spirometry to diagnose COPD.
They would use a trial of bronchodilators instead of spirometry for suspected COPD cases, and the majority of the participants were confident in this diagnostic approach. This may be due to the unavailability of spirometry: most of the physicians in our study did not have spirometry or peak flow measurement available within their facility. Lack of spirometry facilities and no education in their use may influence how they diagnose COPD in practice and keep them from adhering to the guideline [3, 22]. Furthermore, given Nepal's huge geographical diversity, the financial cost of referral to a centre can be high compared with the treatment itself; therefore, physicians and patients may be reluctant to use it. Further study is needed to look at practice trends after spirometry facilities are made readily available to medical physicians.

However, in the current scenario for poor‐resource settings like Nepal, the recommendations from Hurst et al. can be adopted to diagnose COPD. As recommended, the concerned authorities and government should develop evidence‐based diagnostic tools, such as screening questionnaires, and mobilise locally available resources such as peak flow meters (PFMs) to diagnose COPD [15]. These methods can be cost‐effective and readily available in LMICs where there is limited access to spirometry. Training should still be given to medical physicians in the proper use of PFMs. Furthermore, the Government of Nepal has a Package of Essential Non‐Communicable Disease (PEN) interventions for early detection of chronic diseases in primary level health facilities [23]. Hence, based on the above recommendation, the PFM can be added to PEN to identify COPD patients early. These recommendations can help because many participants in our study thought of COPD as a differential when a patient with cough, shortness of breath and smoking history presented, but lacked a proper diagnostic methodology.
Thus, they can use the questionnaire and PFM in high‐risk patients to provisionally diagnose COPD and start proper management when referral for spirometry acts as a barrier to appropriately diagnosing and managing COPD patients.

Next, even though many of the participants mentioned the correct indication for adding ICS to therapy and said SAMA/SABA should be used for acute relief of symptoms, many were unsure about their treatment approach. This insecurity in treatment could be due to a lack of training or CME on COPD and less familiarity with the different medical treatments available. Furthermore, this low level of confidence in COPD treatment can decrease the quality of communication between patients and physicians. Decreased quality of conversation can lead to poor understanding by patients of the reasons for, timing of, and dose of a particular medication [24]. Apart from that, the cost of medicines and poor patient financial status can lead to poor compliance with medication. Thus, as recommended by Hurst et al., inclusion of COPD medications in the essential drug list and their continuous supply to primary and secondary level health facilities can help address those barriers [15]. And to engage patients in self‐management programmes for COPD, physicians should be given proper hands‐on training on the drugs available and their mode of use.

Another interesting finding was that although most physicians are aware of smoking as a risk factor and ask about it in daily practice, lack of patient follow‐up and patients' unwillingness to quit were responsible for the difficulties in helping patients quit smoking.
This finding was consistent with a multinational qualitative study on why physicians lack engagement with smoking cessation treatment in COPD patients, which highlighted unwillingness to quit and poor follow‐up of patients as the primary reasons [25]. Poor follow‐up can be due to socio‐economic, health system‐related, condition‐related, therapy‐related or patient‐related factors [26]. Community‐level awareness about COPD and its risk factors, and easy availability of treatments like NRT, can help patients come to health care attention and address smoking and COPD accordingly.

Finally, a COPD action plan at home is one of the critical steps in managing COPD, but only one‐third of the participating physicians were aware of it and discussed it with their patients. Studies have shown that COPD action plans help people with COPD recognise exacerbations and initiate appropriate treatment at home [27]. Early intervention during acute exacerbation improves morbidity and mortality [28]. Therefore, physicians should also be trained in preparing action plans for COPD patients.

Overall, early detection of airflow limitation and treatment help reduce the burden of COPD and improve patients' quality of life [29]. Physicians who participate in CME programmes on COPD diagnosis, staging and treatment are more likely than nonparticipants to deliver evidence‐based COPD management [30]. Hence, for LMICs like Nepal, concerned authorities should first give physicians regular training based on the availability of local resources. Second, proper diagnostic infrastructure should be in place to improve the early diagnosis of COPD cases. Third, COPD medication should be on the essential drug list and regularly supplied to primary and secondary level hospitals.
Apart from that, adequate disease awareness is also required among patients to increase their follow‐up and make them aware of the harmful effects of smoking.

CONFLICT OF INTEREST: The authors declare there is no conflict of interest.

FUNDING: No funding was received for this research.

ETHICS STATEMENT: Ethical approval for this study was provided by the Government of Nepal, Nepal Health Research Council (Proposal ID: 636‐2020). Informed consent to participate and for publication was obtained from all individual participants included in the study.

AUTHOR CONTRIBUTIONS: Suraj Ghimire, Anish Lamichhane, Anita Basnet, Samikshya Pandey and Ram Kumar Shrestha designed the study and questionnaire. Nahakul Poudel, Bushan Shrestha, Santosh Pathak and Gaurav Mahato did data collection and entry. Suraj Ghimire and Anita Basnet were involved in data analysis and manuscript writing.
Keywords: COPD, guideline, knowledge, Nepal, physicians, practice
INTRODUCTION: Chronic obstructive pulmonary disease (COPD) is defined as a common, preventable and treatable disease characterised by persistent respiratory symptoms and airflow limitation due to airway or alveolar abnormalities, usually caused by significant exposure to noxious particles or gases [1]. It is the fourth leading cause of death globally and was projected to become the third leading cause by 2020 [2]. There is significant variation in COPD prevalence worldwide, with 10%–95% underdiagnosis and 5%–60% overdiagnosis, owing to differences in the diagnostic definitions used and the unavailability of spirometry, especially in rural areas of low‐ and middle‐income countries (LMICs) [3]. According to the National Burden of Disease 2017 report in Nepal, non‐communicable diseases (NCDs) account for a significant share of deaths, and 1 in 10 deaths (10% of total deaths) is attributed to COPD [4]. Smoking is one of the most critical risk factors for COPD [5, 6]. Other risk factors include a family history of COPD, second‐hand smoke, exposure to cooking or home‐heating fuels, and occupational dusts/chemicals [7]. COPD should be considered in any patient with dyspnoea, chronic cough or sputum production, and a history of recurrent lower respiratory tract infections, with or without a history of exposure to harmful particles or gases [8]. The US National Heart, Lung, and Blood Institute and the World Health Organization established the Global Initiative for Chronic Obstructive Lung Disease (GOLD) to prepare a standard guideline for COPD management. The primary purpose of this project was to create and disseminate guidelines that would help prevent COPD and establish a standard of care for treating patients with COPD based on the most current medical evidence. As per GOLD, spirometry is needed to diagnose COPD, with a post‐bronchodilator value of FEV1/FVC < 0.70 confirming the presence of persistent airflow limitation [9].
Despite much advancement in COPD treatment, studies suggest either underdiagnosis or overdiagnosis of the disease. One study indicates that 80% of COPD cases confirmed by spirometry were underdiagnosed [10]. Nepal being a resource‐poor country, it is evident that COPD is a highly prevalent and significant public health issue that is often underdiagnosed [11]. This is related to various factors, such as lack of spirometry facilities, lack of awareness of risk factors for COPD (such as exposure to biomass fuel), a low level of education in patients, and lack of disease awareness among healthcare providers [12]. Proper knowledge of the disease and availability of adequate resources are required to implement the guideline and help doctors diagnose and provide best practice for COPD patients. Good practice based on proper guidelines can help diagnose cases early and decrease COPD‐related morbidity and mortality. No study in Nepal has examined medical physicians' knowledge of COPD, their practice, or the barriers to guideline‐based management. Thus, we aimed to study knowledge of COPD as per GOLD guidelines among medical physicians from the Gandaki and Bagmati provinces of Nepal, their current practice, and the factors influencing proper management of COPD patients. Our findings can help the concerned authorities plan and effectively mobilise medical physicians to decrease the morbidity and mortality secondary to COPD.

MATERIALS AND METHODS:

Study design: We carried out a descriptive cross‐sectional study among medical physicians working at all levels of health care delivery in the Bagmati and Gandaki provinces of Nepal.

Study settings: Nepal is divided into seven provinces; we did the study in two, Bagmati and Gandaki, selected based on the highest number of COPD cases diagnosed in the last 5 years [13]. We categorised health facilities into primary health facilities, secondary health facilities, tertiary health facilities and private practice. Primary level health facilities included health posts, primary health centres and community health centres. Secondary health facilities included hospitals with inpatient wards, that is, district hospitals, community hospitals and private hospitals. Tertiary hospitals are referral centres with specialist services, that is, zonal hospitals, regional hospitals, province hospitals, central hospitals and medical colleges. Private practice included private clinics and polyclinics with no inpatient care.

Sample selection: Our inclusion population was medical physicians. Medical physicians are medical officers who have completed their primary medical education, Bachelor of Medicine and Bachelor of Surgery (MBBS), and are registered with the Nepal Medical Council (NMC) with no specialist training.
They are often the first point of contact for COPD patients in primary, secondary and tertiary health facilities, and even in private practice. Owing to the vast geographical diversity in certain areas of Nepal, they are the only available doctors.

Sample size: The minimum sample size required was 152, calculated using an expected proportion of 10%, an accepted error of 5%, and a nonresponse rate of 10% at a 95% confidence interval (CI).

Tools: The knowledge questionnaire was designed based on the COPD GOLD 2020 guideline; however, we did literature reviews to identify background and intermediate variables of knowledge and practice of COPD management among physicians. The questionnaire was further discussed with a content expert from Tribhuvan University Teaching Hospital (TUTH), G. P. Koirala National Centre for Respiratory Disease, ensuring face and content validity. After pretesting it on 20 medical physicians not included in the study, we prepared the final questionnaire.
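For illustration, the sample size reported above is consistent with Cochran's formula for a single proportion, inflated for the expected nonresponse; the exact rounding used by the authors is our assumption.

```python
# Sketch of the sample-size calculation: n0 = Z^2 * p * (1 - p) / e^2,
# then inflated by the expected nonresponse rate. Rounding to the nearest
# integer is an assumption, not stated in the paper.

def cochran_sample_size(p: float, error: float, z: float = 1.96,
                        nonresponse: float = 0.10) -> int:
    """Cochran's formula with a nonresponse inflation factor."""
    n0 = (z ** 2) * p * (1 - p) / (error ** 2)  # ~138.3 for p=0.10, e=0.05
    return round(n0 * (1 + nonresponse))

print(cochran_sample_size(p=0.10, error=0.05))  # -> 152
```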
The questionnaire consists of demographic and work‐related characteristics: age, gender, medical qualification, employment duration, province, number of COPD cases seen per week, availability of spirometry, availability of pulmonary rehabilitation and vaccination, and participation in COPD continuing medical education (CME)/training. We used single‐ and multiple‐choice questions to test knowledge in five domains: COPD epidemiology and disease definition, risk factors, COPD diagnosis, treatment, and knowledge of acute exacerbations, for 20 total points. Each correct response based on GOLD guidelines was given one point, and incorrect answers were allocated zero. To test practice, participants were asked when they suspect COPD as a differential, whether they screen for smoking, what first‐line management they perform for suspected COPD cases, when they refer COPD patients for spirometry, which antibiotics they use first line for acute exacerbation of COPD, whether they ask about drug adherence, which first‐line bronchodilators they use for acute relief of symptoms, and whether they explain the COPD action plan. Each appropriate response was given one point, out of a possible eight. The last section covered the participants' confidence in COPD diagnosis and treatment, followed by the factors causing difficulty in helping patients quit smoking, in making a proper diagnosis, and in providing appropriate pharmacological therapy to COPD patients.

Data collection and analysis: We noted the health facilities available in Gandaki and Bagmati provinces from the data of the Government of Nepal, Ministry of Health and Population, Health Emergency and Disaster Management Unit, Health Emergency Operation Centre, in coordination with provincial health reports.
From that list of health institutes, medical physicians were contacted for participation through telephone, social media and email. Once they agreed, we sent an online self‐administered structured questionnaire as a Google Form. Data were collected from 10 May 2021 to 10 June 2021. Ethical approval for this study was obtained from the Nepal Health Research Council (NHRC). We conducted a descriptive analysis with percentages, means, medians and proportions. Knowledge and practice total scores were graded according to Bloom's original cut‐off points: good (≥80%), moderate (60%–79%) and poor (<60%) [14]. Data were analysed with IBM SPSS Statistics version 26 and Stata version 13. A correlation test was done to assess the association between knowledge and practice scores. A p value <0.05 was considered significant.
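The two analysis steps described above (Bloom's cut‐off grading and the Pearson correlation between knowledge and practice scores) can be sketched in plain code. This is a minimal illustration; the example inputs in the grading calls are the study's reported mean scores, while any score lists passed to the correlation function would be study data we do not have.

```python
def bloom_grade(score: float, max_score: float) -> str:
    """Grade a total score by Bloom's original cut-offs:
    good (>=80%), moderate (60%-79%), poor (<60%)."""
    pct = 100 * score / max_score
    if pct >= 80:
        return "good"
    if pct >= 60:
        return "moderate"
    return "poor"

def pearson_r(x: list[float], y: list[float]) -> float:
    """Sample Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Reported mean scores: knowledge 17.8/20, practice 5.30/8.
print(bloom_grade(17.8, 20))  # -> good (89%)
print(bloom_grade(5.3, 8))    # -> moderate (66%)
```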
Study design: We carried out a descriptive cross‐sectional study among medical physicians working in all levels of health care delivery in the Bagmati and Gandaki provinces of Nepal. Study settings: Nepal is divided into seven provinces; we did the study in two provinces, that is, Bagmati and Gandaki. These were selected based on the highest number of COPD diagnosed in the last 5 years. 13 We categorised health facilities into primary health facilities, secondary health facilities, tertiary health facilities and private practice. Primary level health facilities included a health post, primary health centre and community health centre. Secondary health facilities include hospitals with inpatient wards, that is, district hospitals, community hospitals and private hospitals. Tertiary Hospitals are referral centres with specialist services, that is, zonal hospitals, regional hospitals, province hospitals, central hospitals, and medical colleges. Private practice includes private clinics, polyclinics with no inpatient care. Sample selection: Our inclusion population included medical physicians. Medical physicians are medical officers who have completed their primary medical education, Bachelor of Medicine, and Bachelor of Surgery (MBBS) and registered in the Nepal Medical Council (NMC) with no specialist training. They are often the first point of contact for seeing COPD patients in primary, secondary and tertiary health facilities and even in private practice. Due to the vast geographical diversity in certain areas of Nepal, they are the only available doctors. Sample size: The minimum sample size required was 152 and was calculated using expected proportions of 10%, an accepted error of 5%, and a nonresponse rate of 10% at a 95% confidence interval (C.I). 
Tools: The knowledge questionnaire was designed based on the COPD GOLD 2020 guideline; however, we did literature reviews to identify background variables and intermediate variables of knowledge and practice of COPD management among physicians. The questionnaire was further discussed with the content expert from Tribhuvan University Teaching Hospital (TUTH), G. P. Koirala National Centre for Respiratory Disease, ensuring face and content validity. After pretesting it on 20 medical physicians not included in the study, we prepared the final questionnaire. It consists of demographic and work‐related characteristics; age, gender, medical qualification, employment duration, province, number of COPD cases seen per week, availability of spirometry, availability of pulmonary rehabilitation and vaccination, and participation in COPD continuing medical education (CME)/training participation. We used single and multiple‐choice questions to test the knowledge level on five domains. They are COPD epidemiology and disease definition, risk factors, COPD diagnosis, treatment and knowledge on acute exacerbations with 20 total points. Each correct response based on GOLD guidelines was given one score, and the incorrect answer was allocated zero. To test practice, participants were asked questions on when to suspect COPD as a differential, do they do smoking screening, what is the first‐line management done for suspected COPD cases, when do they refer COPD patients for spirometry, what antibiotics are used as first‐line antibiotics for acute exacerbation of COPD, do they ask for drug adherence, what are the first‐line bronchodilators used for acute relief of symptoms and do they explain COPD action plan. Each appropriate response was given one score out of 8. 
The last section covered the participants' confidence in COPD diagnosis and treatment, followed by factors causing difficulties in helping patients quit smoking, making a proper diagnosis, and providing appropriate pharmacological therapy to COPD patients. Data collection and analysis: We listed the health facilities in Gandaki and Bagmati provinces from the data of the Government of Nepal, Ministry of Health and Population, Health Emergency and Disaster Management Unit, Health Emergency Operation Centre, in coordination with provincial health reports. From that list of health institutes, medical physicians were contacted for participation through telephone, social media and email. Once they agreed, we sent an online self‐administered structured questionnaire as a Google form. Data were collected from 10 May 2021 to 10 June 2021. Ethical approval for this study was obtained from the Nepal Health Research Council (NHRC). We conducted a descriptive analysis with percentage, mean, median and proportion. Knowledge and practice total scores were graded according to Bloom's original cut‐off points: good (≥80%), moderate (60%–79%) and poor (<60%). 14 Data were analysed with IBM SPSS Statistics version 26 and Stata version 13. A correlation test was done to assess the association between knowledge scores and practice scores. A p value <0.05 was considered significant. RESULTS: A total of 152 medical physicians participated in the study. The baseline demographic and work‐related characteristics are shown in Table 1. Of the total participants, 73.0% were male. Most of the participants (93.4%) belonged to the 20–30 age group, and an almost equal number were from Bagmati and Gandaki provinces. Furthermore, 44.0% of physicians worked in primary level health facilities, followed by 23.6% in tertiary level health facilities, 18.4% in secondary health facilities and 13.8% in private practice. 
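The Bloom's cut-off grading used for knowledge and practice scores is a simple threshold rule; a minimal sketch (the function name is illustrative):

```python
def bloom_grade(score, max_score):
    """Grade a total score by Bloom's original cut-off points:
    good (>=80%), moderate (60%-79%), poor (<60%)."""
    pct = 100.0 * score / max_score
    if pct >= 80:
        return "good"
    if pct >= 60:
        return "moderate"
    return "poor"

print(bloom_grade(17.8, 20))  # "good" (89.0%) -- the mean knowledge score
print(bloom_grade(5.3, 8))    # "moderate" (66.25%) -- the mean practice score
```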
Additionally, 44.0% of the participants had an employment duration of less than a year, 46.0% had worked for 1–2 years, and 9.6% for more than two years. Regarding caseload, 43.7% of the participating physicians see on average 0–5 COPD cases per week, 24.3% see 6–10 cases, and 30.9% see more than 10 cases in a week. Spirometry was available within the health facility for 28.9% of physicians, and 40.1% had either pulmonary rehabilitation, immunisation or both services available in their workplace. Lastly, only 11.1% of physicians had received CME/training on COPD and its management during their practice. Baseline demographic and work‐related characteristics Abbreviations: CME, continuing medical education; COPD, chronic obstructive pulmonary disease. Knowledge analysis Out of a total score of twenty, the mean score for overall COPD knowledge was 17.8 ± 2.4. Using Bloom's original cut‐off point for knowledge grading, 45.4% of physicians had good knowledge, 47.3% moderate, and 7.2% poor knowledge regarding COPD. Table 2 shows the mean score for overall knowledge and for each domain of knowledge tested. Most physicians (90.7%) correctly chose that the burden of COPD is increasing globally. Similarly, 65.7% said the burden of COPD in Nepal is within the top five causes of morbidity. For disease definition, most of the participants (88.8%) correctly defined COPD as a chronic preventable and treatable condition. Only 32.9% of the participants correctly identified all six risk factors for COPD: smoking, exposure to biomass fuel, exposure to outdoor pollution, occupational air pollution, genetic factors like alpha‐1 antitrypsin deficiency and recurrent chest infections. Smoking was the most selected risk factor (98.0%), whereas recurrent chest infection was the least selected (48.6%). 
Participants' overall knowledge score and score on each domain of study Abbreviation: COPD, chronic obstructive pulmonary disease. Regarding the gold standard test for COPD diagnosis, 86.1% of the participants correctly chose spirometry; however, only 21.0% were able to identify the correct spirometry cut‐off value for diagnosis, that is, post‐bronchodilator FEV1/FVC < 0.7. For the use of inhaled corticosteroids (ICSs), 86.8% of physicians knew that a history of two or more episodes of acute exacerbation in a year is an indication for their use, and only 6.5% knew that an eosinophil count >300 cells/μl is an indication. The majority of the participants (61.8%) correctly identified that oxygen supplementation and smoking cessation help decrease mortality in COPD patients. Regarding domiciliary oxygen therapy, 74.3% of physicians correctly identified arterial hypoxaemia (PaO2 < 55 mm Hg, SpO2 < 88%) as an indication for its use. Of the participants, 41.4% chose all four given symptoms of acute exacerbation, that is, increasing shortness of breath, increasing cough, increasing mucus production and mucus colour change. The most selected symptoms were increasing shortness of breath (92.7%) and increasing cough (89.4%). 
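The spirometric cut-off tested in the questionnaire can be written as a one-line check (a sketch of the GOLD criterion; the function name and example volumes are illustrative):

```python
def airflow_limitation(post_bd_fev1, post_bd_fvc):
    """GOLD spirometric criterion for COPD: persistent airflow
    limitation when post-bronchodilator FEV1/FVC < 0.7 (volumes in litres)."""
    return post_bd_fev1 / post_bd_fvc < 0.7

print(airflow_limitation(1.8, 3.0))  # True  (ratio 0.60, consistent with COPD)
print(airflow_limitation(3.2, 4.0))  # False (ratio 0.80)
```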
Practice analysis For practice, out of a possible eight, the mean score of the participants was 5.30 ± 1.30. Using Bloom's original cut‐off point for practice grading, 30 (19.7%) of the respondents were categorised as having good practice, 55.9% moderate, and 24.4% poor practice in diagnosing and managing COPD. 
Table 3 shows the overall responses of the participants to the practice questions. Of respondents, 75.6% think of COPD as a differential when a patient presents to the clinic with cough and difficulty breathing. Similarly, 80.2% of the physicians said they enquire about smoking history in every patient aged over 20 years. Frequency distribution of participants' practice in COPD Abbreviation: COPD, chronic obstructive pulmonary disease. For initial management of suspected COPD, 58.5% said they would perform an X‐ray and blood tests, give a trial of bronchodilator and send the patient home, whereas 36.8% of the participants chose to perform a pulmonary function test (PFT) or refer the patient to a spirometry facility. Furthermore, 63.8% said that not improving on medical therapy is the main reason for referring COPD patients for spirometry. For acute relief of symptoms, 59.8% of physicians chose a short‐acting muscarinic antagonist (SAMA) alone or in combination with a short‐acting beta‐2 agonist (SABA) to relieve shortness of breath, whereas 23.6% chose an ICS alone or in combination. Similarly, 87.5% used either amoxicillin or azithromycin as a first‐line antibiotic for acute exacerbation of COPD (AECOPD). During follow‐up, 92.7% of the participants asked about medication adherence in COPD patients; however, only 37.5% said they would counsel patients about a COPD action plan at home during an acute exacerbation. 
Correlation analysis We performed a Pearson correlation, which showed a statistically significant, very low positive correlation between total knowledge score and practice score (r = 0.18, p < 0.02). Factors affecting difficulties in patients quitting smoking Table 4 shows the reasons preventing physicians from helping COPD patients quit smoking. The most common factors were lack of follow‐up (65.7%) and patients not wanting to discuss a quit plan (65.1%). 
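A Pearson correlation like the one reported can be computed directly from paired scores; a minimal sketch (the paired scores below are hypothetical, for illustration only, not study data):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient for paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical knowledge (out of 20) and practice (out of 8) scores:
knowledge = [18, 16, 19, 14, 17, 20, 15, 18]
practice = [6, 4, 6, 5, 5, 7, 4, 5]
print(round(pearson_r(knowledge, practice), 2))
```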
Others included lack of access to pharmacological measures like nicotine replacement therapy (NRT) (37.5%) and physicians not being aware of any quit plan (8.5%). Factors affecting difficulty in patients quitting smoking Confidence level in diagnosis and factors preventing a proper diagnosis of COPD Of the doctors, 74.9% were either confident or extremely confident about correctly diagnosing COPD cases. Table 5 shows the factors preventing a proper diagnosis of COPD. Most physicians cited lack of patient follow‐up (71.7%), lack of a screening device for COPD (65.7%) and lack of professional training in the diagnosis of COPD (61.1%) as the main reasons. Other reasons included lack of spirometry nearby in referral centres (58.5%) and poor patient financial status (53.2%). 
Factors preventing the proper diagnosis of chronic obstructive pulmonary disease (COPD) patients Confidence levels in pharmacological treatment and factors affecting the provision of appropriate pharmacological therapy to COPD patients Almost 83.6% of the physicians said that they were either undertreating, unsure or overtreating COPD patients. Table 6 shows the factors influencing the provision of appropriate treatment to COPD patients. The most chosen factors were lack of professional training in COPD disease management, poor follow‐up and high medication cost. Factors affecting appropriate pharmacological therapy to chronic obstructive pulmonary disease (COPD) patients 
DISCUSSION: Our study showed that the overall knowledge of COPD based on GOLD guidelines among medical physicians in Nepal's Bagmati and Gandaki provinces is good. However, their practice level did not match their knowledge, and there was only a significant, very low positive correlation between total knowledge and practice scores. 
These findings can be due to various patient‐related factors such as poor follow‐up, health professional‐related factors such as lack of training in COPD, and health institute factors such as the unavailability of screening devices, spirometry and medications. All of these can act as barriers to diagnosing and managing COPD properly. These factors were in line with a report from the American Thoracic Society on challenges in implementing COPD guidelines in LMICs. 15 Thus, overcoming those barriers through proper training and supply of resources in LMICs like Nepal can help raise physicians' practice level in COPD to match their knowledge. No previous studies of COPD knowledge among medical physicians in Nepal are available; however, the level of knowledge was higher than in a study from Saudi Arabia, which showed physicians had only a fair understanding of COPD based on GOLD guidelines. 16 In our study, physicians' knowledge was good even though many participants had not received CME or training on COPD. Most of the participants had been practising for less than 2 years, immediately after graduation from medical college as required by the Government of Nepal. These early years of practice after graduation might have influenced their overall knowledge of COPD. Knowledge acquired in medical college can decline over time, so it is crucial to provide continuous training to medical physicians to maintain their expertise. Similarly, in the subdomains of knowledge, most physicians had good knowledge in each domain except diagnosis. More than two‐thirds of the medical physicians were aware of smoking and exposure to biomass fuel as risk factors for COPD. These two are the most common risk factors for COPD prevalent in South‐East Asia. 6 Knowledge of these risk factors will help them identify a high‐risk population. 
Similarly, they were aware of the indications for ICS use in COPD, despite various studies suggesting its inappropriate use by physicians to treat COPD. 17 , 18 In addition, the two most selected interventions to reduce mortality in COPD patients were smoking cessation and domiciliary oxygen therapy. For knowledge of diagnosis, many physicians were aware that spirometry is the gold standard test for COPD diagnosis, yet only a few could correctly choose its cut‐off value. This may be because many physicians were practising in settings with no spirometry, that is, primary and secondary health facilities. Lack of spirometers, little or no teaching of spirometry interpretation in medical school, and lack of evidence‐based demonstration of its value might have resulted in limited knowledge of the spirometry cut‐off. 19 Proper hands‐on training and exposure to spirometry can help them apply the cut‐off value for diagnosis. We further studied the practice and confidence level of physicians in the diagnosis and treatment of COPD. Despite good knowledge of COPD based on GOLD guidelines, practice was not within guideline recommendations, especially regarding spirometry use. This finding was similar to a previous study from Nigeria, which showed that despite an adequate understanding of the GOLD COPD guideline among physicians, adherence to the guideline recommendations was very poor. 20 Similarly, a study on primary care physicians' perceptions of the diagnosis and management of COPD in diverse regions of the world showed that management of COPD was well below guideline‐recommended levels in most of the areas investigated. 21 Only a few participants considered doing spirometry to diagnose COPD. They would use a trial of bronchodilators instead of spirometry for suspected COPD cases, and the majority of the participants were confident in their diagnostic approach. This may be due to the unavailability of spirometry facilities. 
Most of the physicians in our study did not have spirometry or peak flow spirometry available within their facility. Lack of spirometry facilities and no education on their use may influence how they diagnose COPD in practice and keep them from following the guideline. 3 , 22 Furthermore, with Nepal's huge geographical diversity, the financial cost of being referred to a centre can be high compared with just the treatment; therefore, physicians and patients may be reluctant to pursue it. Further study is needed to examine practice trends after making spirometry facilities readily available to medical physicians. However, in the current scenario for poor‐resource settings like Nepal, the recommendations from Hurst et al. can be adopted to diagnose COPD. As recommended, the concerned authorities and government should develop evidence‐based diagnostic tools, such as screening questionnaires, and mobilise locally available resources like peak flow meters (PFMs) to diagnose COPD. 15 These methods can be cost‐effective and readily available in LMICs with limited access to spirometry. Training should still be given to medical physicians in the proper use of PFMs. Furthermore, the Government of Nepal has a Package of Essential Non‐Communicable Disease (PEN) interventions for early detection of chronic diseases in primary level health facilities. 23 Hence, based on the above recommendation, PFMs can be added to PEN to identify COPD patients early. These recommendations can help because many participants in our study thought of COPD as a differential when a patient presented with cough, shortness of breath and smoking history, but lacked a proper diagnostic methodology. Thus, they can use the questionnaire and PFM on high‐risk patients to provisionally diagnose COPD and start proper management when referral for spirometry acts as a barrier to appropriately diagnosing and managing COPD patients. 
Next, even though many of the participants mentioned the correct indication for adding ICS to therapy and said SAMA/SABA should be used for acute relief of symptoms, many were unsure about their treatment approach. This uncertainty could be due to a lack of training or CME on COPD and unfamiliarity with the different medical treatments available. Furthermore, this low level of confidence in COPD treatment can decrease the quality of communication between patients and physicians. Decreased quality of communication can lead to poor understanding by patients of the reasons for, timing of use, and dose of a given medication. 24 Apart from that, the cost of medicines and poor patient financial status can lead to poor compliance with medication. Thus, as recommended by Hurst et al., including COPD medication in the essential drug list and ensuring its continuous supply to primary and secondary level health facilities can help address those barriers. 15 And to engage patients in self‐management programmes for COPD, physicians should be given proper hands‐on training on the drugs available and their mode of use. Another interesting finding was that although most physicians are aware of smoking as a risk factor and ask about it in daily practice, lack of patient follow‐up and patients' unwillingness to quit were responsible for the difficulties in helping patients quit smoking. This finding was consistent with a multinational qualitative study on why physicians lack engagement with smoking cessation treatment in COPD patients, which highlighted unwillingness to quit and poor follow‐up as the primary reasons. 25 Poor follow‐up of patients can be due to socio‐economic, health system‐related, condition‐related, therapy‐related or patient‐related factors. 
26 Community‐level awareness of COPD and its risk factors, and easy availability of treatments like NRT, can help bring patients to health care attention and address smoking and COPD accordingly. Finally, a COPD action plan at home is one of the critical steps in managing COPD, but only one‐third of the participating physicians were aware of it and discussed it with their patients. Studies have shown that COPD action plans help people with COPD recognise exacerbations and initiate appropriate treatment at home. 27 Early intervention during acute exacerbation improves morbidity and mortality. 28 Therefore, physicians should also be trained in preparing action plans for COPD patients. Overall, early detection of airflow limitation and treatment helps reduce the burden of COPD and improve patients' quality of life. 29 Physicians who participate in CME programmes on COPD diagnosis, staging and treatment are more likely than nonparticipants to deliver evidence‐based COPD management. 30 Hence, for LMICs like Nepal, concerned authorities should first give physicians regular training based on the availability of local resources. Second, proper diagnostic infrastructure should be in place to improve early diagnosis of COPD cases. Third, COPD medication should be on the essential drug list and regularly supplied to primary and secondary level hospitals. Apart from that, adequate disease awareness is also required among patients to increase their follow‐up and make them aware of the harmful effects of smoking. 
AUTHOR CONTRIBUTIONS: Suraj Ghimire, Anish Lamichhane, Anita Basnet, Samikshya Pandey and Ram Kumar Shrestha designed the study and questionnaire. Nahakul Poudel, Bushan Shrestha, Santosh Pathak and Gaurav Mahato did data collection and entry. Suraj Ghimire and Anita Basnet were involved in data analysis and manuscript writing.
Background: Chronic obstructive pulmonary disease (COPD) is the third leading cause of death, with 80% of total deaths occurring in low- to middle-income countries (LMICs). Nepal is one such LMIC; COPD is highly prevalent there, a significant public health issue, and often underdiagnosed. Good knowledge and practice among medical physicians in diagnosing and treating COPD can help reduce the disease burden. Methods: A cross-sectional descriptive study using a structured questionnaire was conducted among medical physicians working in Bagmati and Gandaki provinces of Nepal. Out of total scores, physicians' knowledge and practice were graded according to Bloom's original cut-off points: good (≥80%), moderate (60%-79%) and poor (<60%). Results: A total of 152 medical physicians participated in this study. Out of a possible total score of 20, the mean knowledge score was 17.8 ± 2.4, and out of a possible total score of eight, the mean practice score was 5.3 ± 1.3. The correlation test between total knowledge and practice scores showed r = 0.18 and p value <0.02. The most selected factors hindering appropriate management of COPD were lack of patient follow-up and lack of professional training in COPD. Other factors included patient unwillingness to discuss a smoking quit plan, lack of a screening tool, unavailability of spirometry and physician unawareness of available medicines to treat COPD. Conclusions: Despite physicians having good knowledge of COPD, practice in COPD management is below guideline-recommended levels. There is a significant, very low positive correlation between total knowledge score and practice score. Proper COPD training for physicians, disease awareness among patients, and easy availability of diagnostic equipment and medication can help improve physicians' practice and appropriately manage COPD patients.
null
null
8,528
343
[ 610, 27, 141, 91, 41, 331, 202, 438, 334, 34, 78, 106, 70, 8, 41, 52 ]
20
[ "copd", "physicians", "patients", "health", "knowledge", "medical", "practice", "factors", "participants", "spirometry" ]
[ "copd epidemiology disease", "spirometry suspected copd", "copd smoking exposure", "copd patients spirometry", "copd smoking critical" ]
null
null
null
[CONTENT] COPD | guideline | knowledge | Nepal | physicians | practice [SUMMARY]
null
[CONTENT] COPD | guideline | knowledge | Nepal | physicians | practice [SUMMARY]
null
[CONTENT] COPD | guideline | knowledge | Nepal | physicians | practice [SUMMARY]
null
[CONTENT] Cross-Sectional Studies | Guideline Adherence | Humans | Physicians | Practice Patterns, Physicians' | Pulmonary Disease, Chronic Obstructive [SUMMARY]
null
[CONTENT] Cross-Sectional Studies | Guideline Adherence | Humans | Physicians | Practice Patterns, Physicians' | Pulmonary Disease, Chronic Obstructive [SUMMARY]
null
[CONTENT] Cross-Sectional Studies | Guideline Adherence | Humans | Physicians | Practice Patterns, Physicians' | Pulmonary Disease, Chronic Obstructive [SUMMARY]
null
[CONTENT] copd epidemiology disease | spirometry suspected copd | copd smoking exposure | copd patients spirometry | copd smoking critical [SUMMARY]
null
[CONTENT] copd epidemiology disease | spirometry suspected copd | copd smoking exposure | copd patients spirometry | copd smoking critical [SUMMARY]
null
[CONTENT] copd epidemiology disease | spirometry suspected copd | copd smoking exposure | copd patients spirometry | copd smoking critical [SUMMARY]
null
[CONTENT] copd | physicians | patients | health | knowledge | medical | practice | factors | participants | spirometry [SUMMARY]
null
[CONTENT] copd | physicians | patients | health | knowledge | medical | practice | factors | participants | spirometry [SUMMARY]
null
[CONTENT] copd | physicians | patients | health | knowledge | medical | practice | factors | participants | spirometry [SUMMARY]
null
[CONTENT] copd | disease | deaths | 10 | exposure | help | diagnose | factors | significant | spirometry [SUMMARY]
null
[CONTENT] copd | participants | patients | physicians | factors | increasing | said | knowledge | score | lack [SUMMARY]
null
[CONTENT] copd | health | patients | physicians | medical | knowledge | factors | practice | participants | hospitals [SUMMARY]
null
[CONTENT] third | 80% ||| one | COPD ||| [SUMMARY]
null
[CONTENT] 152 ||| 20 | 17.8 | 2.4 | eight | 5.3 | 1.3 ||| 0.18 | 0.02 ||| COPD | COPD ||| COPD [SUMMARY]
null
[CONTENT] third | 80% ||| one | COPD ||| ||| Bagmati | Gandaki | Nepal ||| Bloom | 60%-78% | 60% ||| 152 ||| 20 | 17.8 | 2.4 | eight | 5.3 | 1.3 ||| 0.18 | 0.02 ||| COPD | COPD ||| ||| COPD | COPD ||| ||| [SUMMARY]
null
Management of trichobezoar: About 6 cases.
35017380
Trichobezoar is an uncommon clinical entity in which ingested hair mass accumulates within the digestive tract. It is generally observed in children and young females with psychological disorders. It can either be found as an isolated mass in the stomach or may extend into the intestine. Untreated cases may lead to grave complications.
BACKGROUND
We retrospectively analyzed the clinical data of six patients treated for trichobezoar in the Monastir pediatric surgery department during a 16-year period between 2004 and 2019. Imaging (abdominal computed tomography and upper gastroduodenal opacification) and gastroduodenal endoscopy were the diagnostic tools.
MATERIAL AND METHODS
Our study involved 6 girls aged 4 to 12. Symptoms were epigastric pain associated with vomiting of recently ingested food in 3 cases and weight loss in one case. Physical examination found a hard epigastric mass in all cases. The trichobezoar was confined to the stomach in 4 cases. An extension into the jejunum was observed in 2 cases. Surgery was indicated in all patients. In two cases, the attempt at endoscopic extraction failed and the patients were then operated on. All patients had gastrotomy to extract the whole bezoar, even those with jejunal extension. Psychiatric follow-up was indicated in all cases. All six girls evolved well and did not present any recurrence.
RESULTS
Open surgery still plays a crucial role in trichobezoar management. After successful treatment, psychiatric consultation is imperative to prevent recurrence and improve long-term prognosis.
CONCLUSION
[ "Abdominal Pain", "Bezoars", "Child", "Child, Preschool", "Female", "Humans", "Jejunum", "Retrospective Studies", "Stomach" ]
8809465
INTRODUCTION
Trichobezoar consists of ingested hair accumulating in the gastric mucosa folds instead of being digested.[12] It is essentially observed in teenage girls who have behavioural disorders such as trichotillomania and trichophagia.[34] In most cases, the bezoar is confined to the stomach.[5] Rapunzel syndrome is a rare form of gastric trichobezoar that develops through bezoar extension from the stomach to the intestine.[6] The diagnosis is established either endoscopically or radiologically. In the current article, we report our experience with the management of trichobezoar; six patients were treated in our unit during the 16-year study period. The purpose of the present study is to discuss the diagnosis, imaging and therapy of trichobezoars.
null
null
null
null
CONCLUSION
The diagnosis of trichobezoar should be suspected in young girls with digestive symptoms associated with alopecia. While endoscopic extraction or a laparoscopic surgical approach may be useful, open surgery still plays a crucial role. After successful treatment, psychiatric consultation and treatment is imperative to prevent recurrence and improve long-term prognosis. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
[ "Declaration of patient consent", "Financial support and sponsorship" ]
[ "The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.", "Nil." ]
[ null, null ]
[ "INTRODUCTION", "PATIENTS AND METHODS", "RESULTS", "DISCUSSION", "CONCLUSION", "Declaration of patient consent", "Financial support and sponsorship", "Conflicts of interest" ]
[ "Trichobezoar consists of ingested hair accumulating in the gastric mucosa folds instead of being digested.[12] It is essentially observed in teenage girls that have behavioural disorders such as trichotillomania and trichophagia.[34] In most cases, the bezoar is confined to the stomach.[5] Rapunzel syndrome is a rare form of gastric trichobezoar that develops through bezoar extension from the stomach to the intestine.[6] The diagnosis is established either endoscopically or radiologically. In the current article, we report our experience with management of trichobezoar. Six sample patients were treated in our unit during the 16-year- study period.\nThe purpose of the present study is to discuss the diagnosis, imaging and therapy of trichobezoars.", "We retrospectively reviewed the clinical records of all patients treated for trichobezoar in Monastir paediatric surgery department during the 16-year period between January 2004 and May 2019. Epidemiological data, clinical symptoms, diagnostic findings, treatment and outcomes were analysed.", "There were six girls aged 4 to 12 years. They were hospitalised for epigastric pain associated with food vomiting in three cases and weight loss in two cases. Two girls had trichophagia and one trichotillomania. Physical examination revealed a hard epigastric mass in all patients with partial alopecia in three cases. The upper gastrointestinal opacification performed in two patients showed an aspect in favour of a gastric trichobezoar. The computed tomography (CT) was the main diagnostic modality. It underlined a gastric trichobezoar in five cases and an extension to the jejunum in two cases which defined Rapunzel syndrome. In addition, the upper digestive fibroscopy performed in two cases highlighted an intraluminal gastric mass made of hair. However, the attempt at endoscopic extraction failed because of the large size of the mass. Bezoar surgical extraction was performed by gastrotomy in all cases. 
The bezoar was successfully extracted in one piece including those with jejunal extension [Figures 1 and 2]. Intraoperative findings revealed no evidence for detached parts of bezoar distally within the intestine, so no additional enterotomy was done. The post-operative follow-ups were uneventful. Follow-up in child psychiatry was indicated. After recovery, all patients were referred to the psychiatry department and were diagnosed with trichotillomania and trichophagia in all cases. Only one patient had been previously treated for trichotillomania. A treatment plan comprising pharmacological and psychotherapeutic interventions was initiated. All children were successfully managed with disappearance of the alopecia in three cases and progressive improvement of trichotillomania and trichophagia in all cases. No case of recurrence was underlined in our series.\nIntraoperative view showing the extraction of the trichobezoar\nThe trichobezoar after being extracted: Rapunzel syndrome\nTable 1 summarises the trichobezoar presentation and management in our series.\nTrichobezoar clinical presentation and management\nAp=Abdominal pain, Am=Abdominal mass, CT=Computed tomography, UGIO=Upper gastrointestinal opacification", "Bezoars typically develop in the stomach and the small intestine. While gastric bezoars are more common, intestinal bezoars are more likely to be revealed by bowel obstruction.[7] They may remain asymptomatic or may present several digestive symptoms.[8] Patients can present with abdominal pain, vomiting and constipation.[9] Early diagnosis is essential since obstructive bezoars may cause serious problems, including gastrointestinal (GI) ulceration, visceral perforation, bleeding and pressure necrosis.[10]\nThe patients under study showed gastric trichobezoar and complained essentially about abdominal pain and vomiting. 
Furthermore, we did not have any case of bowel obstruction.\nTrichobezoar diagnosis is based on imaging and often upper GI endoscopy.[11] On CT, small intestinal bezoars classically appear as a well-defined intraluminal mass containing mottled gas. Intestinal loops are dilated proximally and collapsed distally.[12] Direct visualisation of the bezoar through upper GI endoscopy is the gold standard for imaging. It is used for both diagnostic and therapeutic purposes.[1314] Gastric bezoar management mostly focuses on the dissolution or elimination of the mass.[13] It can be achieved either medicinally, endoscopic, or surgically. With different rates of success and frequently multiple failed attempts, several studies have reported lavage and aspirate using large gastric tubes, hydrolytic solutions, or mechanical fragmentation with lithotripsy or electrosurgical knife.[15] Chemical dissolution is an economical and non-invasive procedure, using agents that destroy bezoars, such as Coca-Cola® and acetylcysteine.[1617] The Coca-Cola® action may be due to its low pH, mucolytic effect of its high sodium bicarbonate concentration and carbon dioxide bubbles that improve dissolution.[16] None of our series underwent chemical dissolution because of the large size of the masses. The patients under study could not get better without surgery; attempted endoscopic extraction failed. A simple longitudinal gastrotomy was performed to remove the gastric mass. In addition, we managed to extract the trichobezoar in the two cases of Rapunzel syndrome using the same gastric incision. The majority of cases in the literature have been managed with surgical removal of the hair mass by laparotomy. The small bowel can be explored to look for detached bezoars. Hence, trichobezoar extensions may be extracted and intestinal segments which show extensive ulcerations or gangrene may be resected.[18] The surgery can also be achieved by employing the hand-assisted laparoscopic technique. 
Endoscopy frequently fails to remove the trichobezoar, except if small in size, while successful extraction can be achieved by mechanical and laser hair fragmentation.[1920] The conventional open surgery is still the preferred treatment method due to the very high success rate, shorter operative time, less complications and possibility to explore the whole GI tract.[19]\nPsychiatric evaluation and treatment as well as regular follow-ups is imperative to prevent trichophagia and recurrence.[21]", "The diagnosis of trichobezoar should be suspected in young girls with digestive symptoms associated with alopecia. While endoscopic extraction or laparoscopic surgical approach may be useful, open surgery still plays a crucial role. After successful treatment, psychiatric consultation and treatment is imperative to prevent reoccurrence and improve long-term prognosis.\n Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.\nThe authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.\n Financial support and sponsorship Nil.\nNil.\n Conflicts of interest There are no conflicts of interest.\nThere are no conflicts of interest.", "The authors certify that they have obtained all appropriate patient consent forms. 
In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.", "Nil.", "There are no conflicts of interest." ]
[ "intro", "method", "result", "discussion", "conclusion", null, null, "COI-statement" ]
[ "Pediatric surgery", "rapunzel syndrome", "trichobezoar" ]
INTRODUCTION: Trichobezoar consists of ingested hair accumulating in the gastric mucosa folds instead of being digested.[12] It is essentially observed in teenage girls that have behavioural disorders such as trichotillomania and trichophagia.[34] In most cases, the bezoar is confined to the stomach.[5] Rapunzel syndrome is a rare form of gastric trichobezoar that develops through bezoar extension from the stomach to the intestine.[6] The diagnosis is established either endoscopically or radiologically. In the current article, we report our experience with management of trichobezoar. Six sample patients were treated in our unit during the 16-year- study period. The purpose of the present study is to discuss the diagnosis, imaging and therapy of trichobezoars. PATIENTS AND METHODS: We retrospectively reviewed the clinical records of all patients treated for trichobezoar in Monastir paediatric surgery department during the 16-year period between January 2004 and May 2019. Epidemiological data, clinical symptoms, diagnostic findings, treatment and outcomes were analysed. RESULTS: There were six girls aged 4 to 12 years. They were hospitalised for epigastric pain associated with food vomiting in three cases and weight loss in two cases. Two girls had trichophagia and one trichotillomania. Physical examination revealed a hard epigastric mass in all patients with partial alopecia in three cases. The upper gastrointestinal opacification performed in two patients showed an aspect in favour of a gastric trichobezoar. The computed tomography (CT) was the main diagnostic modality. It underlined a gastric trichobezoar in five cases and an extension to the jejunum in two cases which defined Rapunzel syndrome. In addition, the upper digestive fibroscopy performed in two cases highlighted an intraluminal gastric mass made of hair. However, the attempt at endoscopic extraction failed because of the large size of the mass. 
Bezoar surgical extraction was performed by gastrotomy in all cases. The bezoar was successfully extracted in one piece including those with jejunal extension [Figures 1 and 2]. Intraoperative findings revealed no evidence for detached parts of bezoar distally within the intestine, so no additional enterotomy was done. The post-operative follow-ups were uneventful. Follow-up in child psychiatry was indicated. After recovery, all patients were referred to the psychiatry department and were diagnosed with trichotillomania and trichophagia in all cases. Only one patient had been previously treated for trichotillomania. A treatment plan comprising pharmacological and psychotherapeutic interventions was initiated. All children were successfully managed with disappearance of the alopecia in three cases and progressive improvement of trichotillomania and trichophagia in all cases. No case of recurrence was underlined in our series. Intraoperative view showing the extraction of the trichobezoar The trichobezoar after being extracted: Rapunzel syndrome Table 1 summarises the trichobezoar presentation and management in our series. Trichobezoar clinical presentation and management Ap=Abdominal pain, Am=Abdominal mass, CT=Computed tomography, UGIO=Upper gastrointestinal opacification DISCUSSION: Bezoars typically develop in the stomach and the small intestine. While gastric bezoars are more common, intestinal bezoars are more likely to be revealed by bowel obstruction.[7] They may remain asymptomatic or may present several digestive symptoms.[8] Patients can present with abdominal pain, vomiting and constipation.[9] Early diagnosis is essential since obstructive bezoars may cause serious problems, including gastrointestinal (GI) ulceration, visceral perforation, bleeding and pressure necrosis.[10] The patients under study showed gastric trichobezoar and complained essentially about abdominal pain and vomiting. Furthermore, we did not have any case of bowel obstruction. 
Trichobezoar diagnosis is based on imaging and often upper GI endoscopy.[11] On CT, small intestinal bezoars classically appear as a well-defined intraluminal mass containing mottled gas. Intestinal loops are dilated proximally and collapsed distally.[12] Direct visualisation of the bezoar through upper GI endoscopy is the gold standard for imaging. It is used for both diagnostic and therapeutic purposes.[1314] Gastric bezoar management mostly focuses on the dissolution or elimination of the mass.[13] It can be achieved either medicinally, endoscopic, or surgically. With different rates of success and frequently multiple failed attempts, several studies have reported lavage and aspirate using large gastric tubes, hydrolytic solutions, or mechanical fragmentation with lithotripsy or electrosurgical knife.[15] Chemical dissolution is an economical and non-invasive procedure, using agents that destroy bezoars, such as Coca-Cola® and acetylcysteine.[1617] The Coca-Cola® action may be due to its low pH, mucolytic effect of its high sodium bicarbonate concentration and carbon dioxide bubbles that improve dissolution.[16] None of our series underwent chemical dissolution because of the large size of the masses. The patients under study could not get better without surgery; attempted endoscopic extraction failed. A simple longitudinal gastrotomy was performed to remove the gastric mass. In addition, we managed to extract the trichobezoar in the two cases of Rapunzel syndrome using the same gastric incision. The majority of cases in the literature have been managed with surgical removal of the hair mass by laparotomy. The small bowel can be explored to look for detached bezoars. Hence, trichobezoar extensions may be extracted and intestinal segments which show extensive ulcerations or gangrene may be resected.[18] The surgery can also be achieved by employing the hand-assisted laparoscopic technique. 
Endoscopy frequently fails to remove the trichobezoar, except if small in size, while successful extraction can be achieved by mechanical and laser hair fragmentation.[1920] The conventional open surgery is still the preferred treatment method due to the very high success rate, shorter operative time, less complications and possibility to explore the whole GI tract.[19] Psychiatric evaluation and treatment as well as regular follow-ups is imperative to prevent trichophagia and recurrence.[21] CONCLUSION: The diagnosis of trichobezoar should be suspected in young girls with digestive symptoms associated with alopecia. While endoscopic extraction or laparoscopic surgical approach may be useful, open surgery still plays a crucial role. After successful treatment, psychiatric consultation and treatment is imperative to prevent reoccurrence and improve long-term prognosis. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest. Declaration of patient consent: The authors certify that they have obtained all appropriate patient consent forms. 
In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Background: Trichobezoar is an uncommon clinical entity in which an ingested hair mass accumulates within the digestive tract. It is generally observed in children and young females with psychological disorders. It can either be found as an isolated mass in the stomach or may extend into the intestine. Untreated cases may lead to grave complications. Methods: We retrospectively analyzed the clinical data of six patients treated for trichobezoar in the Monastir pediatric surgery department during a 16-year period between 2004 and 2019. Imaging (abdominal computed tomography and upper gastroduodenal opacification) and gastroduodenal endoscopy were the diagnostic tools. Results: Our study involved 6 girls aged 4 to 12. Symptoms were epigastric pain associated with vomiting of recently ingested food in 3 cases and weight loss in one case. Physical examination found a hard epigastric mass in all cases. The trichobezoar was confined to the stomach in 4 cases. An extension into the jejunum was observed in 2 cases. Surgery was indicated in all patients. In two cases, the attempt at endoscopic extraction failed and the patients were then operated on. All patients had gastrotomy to extract the whole bezoar, even those with jejunal extension. Psychiatric follow-up was indicated in all cases. All six girls evolved well and did not present any recurrence. Conclusions: Open surgery still plays a crucial role in trichobezoar management. After successful treatment, psychiatric consultation is imperative to prevent recurrence and improve long-term prognosis.
INTRODUCTION: Trichobezoar consists of ingested hair accumulating in the gastric mucosa folds instead of being digested.[12] It is essentially observed in teenage girls that have behavioural disorders such as trichotillomania and trichophagia.[34] In most cases, the bezoar is confined to the stomach.[5] Rapunzel syndrome is a rare form of gastric trichobezoar that develops through bezoar extension from the stomach to the intestine.[6] The diagnosis is established either endoscopically or radiologically. In the current article, we report our experience with management of trichobezoar. Six sample patients were treated in our unit during the 16-year- study period. The purpose of the present study is to discuss the diagnosis, imaging and therapy of trichobezoars. CONCLUSION: The diagnosis of trichobezoar should be suspected in young girls with digestive symptoms associated with alopecia. While endoscopic extraction or laparoscopic surgical approach may be useful, open surgery still plays a crucial role. After successful treatment, psychiatric consultation and treatment is imperative to prevent reoccurrence and improve long-term prognosis. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. 
Financial support and sponsorship Nil. Nil. Conflicts of interest There are no conflicts of interest. There are no conflicts of interest.
Background: Trichobezoar is an uncommon clinical entity in which an ingested hair mass accumulates within the digestive tract. It is generally observed in children and young females with psychological disorders. It can either be found as an isolated mass in the stomach or may extend into the intestine. Untreated cases may lead to grave complications. Methods: We retrospectively analyzed the clinical data of six patients treated for trichobezoar in the Monastir pediatric surgery department during a 16-year period between 2004 and 2019. Imaging (abdominal computed tomography and upper gastroduodenal opacification) and gastroduodenal endoscopy were the diagnostic tools. Results: Our study involved 6 girls aged 4 to 12. Symptoms were epigastric pain associated with vomiting of recently ingested food in 3 cases and weight loss in one case. Physical examination found a hard epigastric mass in all cases. The trichobezoar was confined to the stomach in 4 cases. An extension into the jejunum was observed in 2 cases. Surgery was indicated in all patients. In two cases, the attempt at endoscopic extraction failed and the patients were then operated on. All patients had gastrotomy to extract the whole bezoar, even those with jejunal extension. Psychiatric follow-up was indicated in all cases. All six girls evolved well and did not present any recurrence. Conclusions: Open surgery still plays a crucial role in trichobezoar management. After successful treatment, psychiatric consultation is imperative to prevent recurrence and improve long-term prognosis.
1,401
272
[ 77, 2 ]
8
[ "trichobezoar", "cases", "patients", "gastric", "patient", "mass", "consent", "bezoars", "bezoar", "treatment" ]
[ "trichobezoar monastir paediatric", "trichobezoar diagnosis", "trichobezoar clinical presentation", "gastric trichobezoar computed", "gastric trichobezoar complained" ]
null
null
[CONTENT] Pediatric surgery | rapunzel syndrome | trichobezoar [SUMMARY]
null
null
[CONTENT] Pediatric surgery | rapunzel syndrome | trichobezoar [SUMMARY]
[CONTENT] Pediatric surgery | rapunzel syndrome | trichobezoar [SUMMARY]
[CONTENT] Pediatric surgery | rapunzel syndrome | trichobezoar [SUMMARY]
[CONTENT] Abdominal Pain | Bezoars | Child | Child, Preschool | Female | Humans | Jejunum | Retrospective Studies | Stomach [SUMMARY]
null
null
[CONTENT] Abdominal Pain | Bezoars | Child | Child, Preschool | Female | Humans | Jejunum | Retrospective Studies | Stomach [SUMMARY]
[CONTENT] Abdominal Pain | Bezoars | Child | Child, Preschool | Female | Humans | Jejunum | Retrospective Studies | Stomach [SUMMARY]
[CONTENT] Abdominal Pain | Bezoars | Child | Child, Preschool | Female | Humans | Jejunum | Retrospective Studies | Stomach [SUMMARY]
[CONTENT] trichobezoar monastir paediatric | trichobezoar diagnosis | trichobezoar clinical presentation | gastric trichobezoar computed | gastric trichobezoar complained [SUMMARY]
null
null
[CONTENT] trichobezoar monastir paediatric | trichobezoar diagnosis | trichobezoar clinical presentation | gastric trichobezoar computed | gastric trichobezoar complained [SUMMARY]
[CONTENT] trichobezoar monastir paediatric | trichobezoar diagnosis | trichobezoar clinical presentation | gastric trichobezoar computed | gastric trichobezoar complained [SUMMARY]
[CONTENT] trichobezoar monastir paediatric | trichobezoar diagnosis | trichobezoar clinical presentation | gastric trichobezoar computed | gastric trichobezoar complained [SUMMARY]
[CONTENT] trichobezoar | cases | patients | gastric | patient | mass | consent | bezoars | bezoar | treatment [SUMMARY]
null
null
[CONTENT] trichobezoar | cases | patients | gastric | patient | mass | consent | bezoars | bezoar | treatment [SUMMARY]
[CONTENT] trichobezoar | cases | patients | gastric | patient | mass | consent | bezoars | bezoar | treatment [SUMMARY]
[CONTENT] trichobezoar | cases | patients | gastric | patient | mass | consent | bezoars | bezoar | treatment [SUMMARY]
[CONTENT] trichobezoar | study | stomach | gastric | bezoar | diagnosis | extension stomach intestine diagnosis | rapunzel syndrome rare form | current article report experience | current article report [SUMMARY]
null
null
[CONTENT] consent | patient | interest | patient consent | conflicts interest | conflicts | conflicts interest conflicts interest | conflicts interest conflicts | interest conflicts | interest conflicts interest [SUMMARY]
[CONTENT] nil | interest | conflicts interest | conflicts | trichobezoar | consent | patient | cases | gastric | clinical [SUMMARY]
[CONTENT] nil | interest | conflicts interest | conflicts | trichobezoar | consent | patient | cases | gastric | clinical [SUMMARY]
[CONTENT] Trichobezoar ||| ||| ||| [SUMMARY]
null
null
[CONTENT] Trichobezoard ||| [SUMMARY]
[CONTENT] Trichobezoar ||| ||| ||| ||| six | Monastir | 16-year-period between 2004 and 2019 ||| ||| ||| 6 | 4 to 12 ||| 3 | one ||| ||| 4 ||| jejunum | 2 ||| ||| two ||| ||| ||| six ||| Trichobezoard ||| [SUMMARY]
[CONTENT] Trichobezoar ||| ||| ||| ||| six | Monastir | 16-year-period between 2004 and 2019 ||| ||| ||| 6 | 4 to 12 ||| 3 | one ||| ||| 4 ||| jejunum | 2 ||| ||| two ||| ||| ||| six ||| Trichobezoard ||| [SUMMARY]
Mesenchymal stem cells with rhBMP-2 inhibits the growth of canine osteosarcoma cells.
22356869
The bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth. They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors. Depending on their concentration gradient, the BMPs can attract various types of cells and act as chemotactic, mitogenic, or differentiation agents. BMPs can interfere with cell proliferation and the formation of cartilage and bone. In addition, BMPs can induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts. The aim of this study was to analyze the effects of treatment with rhBMP-2 on the proliferation of canine mesenchymal stem cells (cMSCs) and the tumor-suppressive properties of rhBMP-2 in canine osteosarcoma (OST) cells. Osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. After expansion, the cells were cultured in a 12-well Transwell system; cells were treated with bone marrow mesenchymal stem cells associated with rhBMP-2. Expression of intracytoplasmic and nuclear markers such as caspase-3, Bax, Bad, Bcl-2, Ki-67, p53, Oct3/4, Nanog, and Stro-1 was assessed by flow cytometry.
BACKGROUND
Canine bone marrow mesenchymal stem cells associated with rhBMP2 in canine osteosarcoma treatment: "in vitro" study.
STUDY DESIGN
We evaluated the regenerative potential of in vitro treatment with rhBMP-2 and found that both osteogenic induction and tumor regression occur in stem cells from canine bone marrow. rhBMP-2 inhibits the proliferation capacity of OST cells by mechanisms of apoptosis and tumor suppression mediated by p53.
RESULTS
We propose that rhBMP-2 has great therapeutic potential in bone marrow cells by serving as a tumor suppressor to increase p53 and the pro-apoptotic proteins Bad and Bax, as well as by increasing the activity of phosphorylated caspase 3.
CONCLUSION
[ "Animals", "Bone Marrow Cells", "Bone Morphogenetic Protein 2", "Cell Line, Tumor", "Cell Proliferation", "Coculture Techniques", "Dogs", "Gene Expression Regulation, Neoplastic", "Humans", "Mesenchymal Stem Cells", "Osteosarcoma", "Recombinant Proteins" ]
3307475
Background
Osteosarcoma is a primary bone cancer common in dogs. Frequently, osteosarcoma affects the limb bones of large-sized dogs over 15 kg at an average age of 7 years [1]. In 75% of cases, osteosarcoma affects either the appendicular skeleton [2] or the pelvic and thoracic limbs, and in the remaining 25%, it affects the axial skeleton or the flat bones [3,4]. Generally, males have a higher incidence of osteosarcoma than females [2], with the exception of the St. Bernard, Rottweiler, and Danish breeds, in which females are most affected [5,6]. Osteosarcoma cells induce platelet aggregation, which facilitates metastasis formation. Platelet aggregation and metastasis most commonly occur in the lung [7]. Platelet aggregation promotes the establishment of tumor cell aggregates, which could serve as a bridge between the tumor cells and the vascular surfaces [6]. A primary extraskeletal osteosarcoma has a metastatic rate that ranges from 60 to 85% in dogs and an average life expectancy after surgery of 26-90 days, which varies according to the location where the metastasis occurs [4]. Metastasis is the most common cause of death in dogs with osteosarcoma, and 90% of dogs either die or are euthanized due to complications associated with lung metastases. Therefore, chemotherapy is used to increase the long-term survival of dogs with osteosarcoma. To reduce the occurrence of metastasis, chemotherapy is often used in combination with surgery or radiotherapy. Specifically, either cisplatin or cisplatin and doxorubicin are chemotherapeutic agents used in dogs [8,9]. Numerous studies have aimed to develop antiangiogenic therapeutic strategies, which can be combined with other treatments [10]. The bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth. 
They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors [11,12]. Depending on their concentration gradient, the BMPs can attract various types of cells [13] and act as chemotactic, mitogenic, or differentiation agents [14]. BMPs can interfere with cell proliferation and the formation of cartilage and bone. Finally, BMPs can also induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts [15]. BMPs play important roles in cell differentiation, proliferation, morphogenesis, and apoptosis, and recent studies have shown that recombinant human BMP-2 (rhBMP-2) inhibits tumor formation [16-19]. However, the role of rhBMP-2 in canine osteosarcoma remains unknown. The osteoinductive capacity of rhBMP-2 has been widely studied in preclinical models and evaluated in the clinical setting [20]. Gene and cell therapy studies have shown that many bone defects can be treated by implantation of resorbable polymers with bone marrow cells transduced with an adenovirus expressing rhBMP-2 [21]. In addition, rhBMP-2 can be used as a substitute for bone grafts in spinal surgery, with results comparable to autogenous grafts [22]. Based on the studies cited above, the present work examines the proliferative responses of canine mesenchymal stem cells (cMSCs) and osteosarcoma (OST) cells treated with rhBMP-2, in order to evaluate their regenerative potential under this in vitro treatment.
Methods
Isolation of canine osteosarcoma (OST) cells The osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma performed in veterinary clinics and hospitals in São Paulo and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. The study was approved by the Ethics Committee on the Use of Animals, protocol number 1654/2009. After harvesting the tissue, the fragments were washed with saline solution containing antibiotics (10%), and the fatty and hemorrhagic tissues were removed from the tumors. Next, the tumors were divided into two samples. The first sample was cut into pieces of 0.01 cm2 using a scalpel and adhered to Petri dishes for 1 h in fetal bovine serum (FBS) in an incubator at 37°C and 5% CO2. The second sample was used for histopathological diagnosis. The cells were cultured in 25-cm2 flasks with DMEM-H (LGC) medium supplemented with 10% FBS (VITROCELL), 1% penicillin and streptomycin (GIBCO), and 1% pyruvic acid (GIBCO) at pH 7.4 and were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The cells were sub-cultured by trypsinization of the confluent monolayer cultures every 3 days.
Isolation of mesenchymal stem cells derived from canine fetuses (cMSCs) The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. The study was approved by the Ethics Committee on the Use of Animals, protocol number 931/2006. cMSCs were cultured in 25-cm2 flasks with ALPHA-MEM medium (VITROCELL) supplemented with 10% HyClone FBS, 1% penicillin and streptomycin (GIBCO), 1% nonessential amino acids (GIBCO), and 1% L-glutamine (GIBCO) at pH 7.4, and they were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate, and sub-cultures were obtained from the monolayer cells.
In vitro treatment of osteosarcoma with stem cells derived from canine bone marrow and rhBMP-2 After expansion, the OST cells and cMSCs were cultured in 12-well plates containing 10^5 cells per well. The groups comprised untreated OST cells (control group 1) and untreated cMSCs (control group 2). These cells were treated with different concentrations of rhBMP-2 (5, 10, and 20 nM) for 120 h. Cells were labeled with carboxyfluorescein diacetate succinimidyl ester (CFSE; Invitrogen, Carlsbad, CA, USA) by incubation for 15 min at room temperature with 1 mM CFSE at a density of 2 × 10^6 cells/ml in PBS, and the reaction was stopped by adding bovine serum to the PBS. At different times of culture, the cells were harvested and run directly on the FACSCalibur flow cytometer (Becton Dickinson) to measure the CFSE intensity in the cells and thereby monitor the number of times the cells had divided during culture. Then a total of 3 × 10^5 cells were cultured in 96 U-bottom well plates and kept in an incubator at 37°C and 5% CO2 for 5 days. After the incubation period, the proliferation of the OST cells and the cMSCs was measured to standardize the optimal concentration to be used in the in vitro regenerative therapy of canine osteosarcoma. For Transwell culture, the cMSCs treated with rhBMP-2 (20 nM) were seeded onto dry 6.5-mm diameter, 0.4-μm pore size polycarbonate Transwell filters at 10^5 cells per filter. OST cells treated with rhBMP-2 (5 nM) were added to the lower well at 10^5 cells per well. This treatment lasted for 120 h.
Expression of cell proliferation and cell death markers The cells obtained from the Transwell culture plates were trypsinized and inactivated with FBS, centrifuged at 1,500 rpm for 10 min, and the supernatant was discarded. The pellet was resuspended in 5 ml of PBS at a concentration of 10^6 cells/ml. To analyze the intracytoplasmic and nuclear markers, the cells were permeabilized with 5 μl of 0.1% Triton X-100 for 30 min before the addition of specific primary antibodies. The following markers were used to determine the cell death pathways: caspase-3, Bax, Bad, and Bcl-2. Antibodies against Ki-67 and p53 were used to determine the proliferation index. Mesenchymal stem cell differentiation and maturation were determined using the Oct 3/4, Stro-1, and Nanog antibodies. The samples were analyzed in a flow cytometer (FACSCalibur), and the expression of the markers was determined by comparison with an isotype control.
Statistical analysis All the values reported are mean ± SD. Statistical analyses were performed using GraphPad Prism Version 5 software, and significance was determined using either the nonparametric Mann-Whitney test for unpaired data or the two-tailed t-test. Differences were considered significant at p < 0.05. In all graphs, *, **, and *** indicate differences between groups at p < 0.05, 0.01, and 0.001, respectively.
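The CFSE-dilution readout and the significance conventions described in the methods can be sketched in plain Python. This is a hypothetical illustration only: the authors acquired data on a FACSCalibur and analyzed it in GraphPad Prism, not with code. The function names are invented, and for the Mann-Whitney test only the rank-sum U statistic is computed (the p-value lookup is left to a statistics package).

```python
from itertools import chain
from math import log2


def cfse_divisions(initial_intensity, measured_intensity):
    """CFSE is diluted roughly 2-fold at each cell division, so the number
    of divisions can be estimated as log2(initial / measured)."""
    return log2(initial_intensity / measured_intensity)


def mann_whitney_u(a, b):
    """Rank-sum U statistic for sample `a` versus `b`.

    All observations are ranked jointly (ties share the average rank);
    U is the rank sum of `a` minus its minimum possible rank sum.
    """
    combined = sorted(chain(a, b))
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    rank_sum_a = sum(ranks[x] for x in a)
    return rank_sum_a - len(a) * (len(a) + 1) / 2


def significance_stars(p):
    """Star convention used in the figures: *, **, *** for
    p < 0.05, 0.01, and 0.001, respectively."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"


# A cell whose CFSE signal dropped from 1000 to 125 has divided 3 times.
print(cfse_divisions(1000, 125))   # 3.0
```

In practice one would pass the U statistic (or the raw samples) to a library routine such as SciPy's `mannwhitneyu` to obtain the p-value before mapping it to stars.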
null
null
Conclusions
REGR, DA, and CVW collected the materials, established the cell lines, and carried out the experiments. REGR, DAM, and PF performed the cytometry analysis and wrote the manuscript. CEA and MAM reviewed the manuscript and the quality of the written English. All authors read and approved the final paper.
[ "Background", "Isolation of canine osteosarcoma (OST) cells", "Isolation of mesenchymal stem cells derived from canines fetuses (cMSCs)", "In vitro treatment of osteosarcoma with stem cells derived from canine bone marrow and rhBMP-2", "Expression of cell proliferation and cell death markers", "Statistical analysis", "Results", "Proliferative effects of rhBMP-2 in canine mesenchymal stem cells (cMSCs) and canine osteosarcoma (OST) cells", "Expression of cellular markers in canine osteosarcoma cells treated with cMSCs and rhBMP-2", "Flow cytometric analyses of markers of proliferation and cell death by apoptosis or necrosis", "Discussion", "Conclusions" ]
[ "Osteosarcoma is a primary bone cancer common in dogs. Frequently, osteosarcoma affects the limb bones of large-sized dogs over 15 kg at an average age of 7 years [1]. In 75% of cases, osteosarcoma affects either the appendicular skeleton [2] or the pelvic and thoracic limbs, and in the remaining 25%, it affects the axial skeleton or the flat bones [3,4]. Generally, males have a higher incidence of osteosarcoma than females [2], with the exception of the St. Bernard, Rottweiler, and Danish breeds, in which females are most affected [5,6]. Osteosarcoma cells induce platelet aggregation, which facilitates metastasis formation. Platelet aggregation and metastasis most commonly occur in the lung [7]. Platelet aggregation promotes the establishment of tumor cell aggregates, which could serve as a bridge between the tumor cells and the vascular surfaces [6].\nA primary extraskeletal osteosarcoma has a metastatic rate that ranges from 60 to 85% in dogs and an average life expectancy after surgery of 26-90 days, which varies according to the location where the metastasis occurs [4].\nMetastasis is the most common cause of death in dogs with osteosarcoma, and 90% of dogs either die or are euthanized due to complications associated with lung metastases. Therefore, chemotherapy is used to increase the long-term survival of dogs with osteosarcoma. To reduce the occurrence of metastasis, chemotherapy is often used in combination with surgery or radiotherapy. Specifically, either cisplatin or cisplatin and doxorubicin are chemotherapeutic agents used in dogs [8,9]. Numerous studies have aimed to develop antiangiogenic therapeutic strategies, which can be combined with other treatments [10].\nThe bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth. 
They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors [11,12]. Depending on their concentration gradient, the BMPs can attract various types of cells [13] and act as chemotactic, mitogenic, or differentiation agents [14]. BMPs can interfere with the proliferation of cells and the formation of cartilage and bone. Finally, BMPs can also induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts [15].\nBMPs play important roles in cell differentiation, proliferation, morphogenesis, and apoptosis, and recent studies have shown that recombinant human BMP-2 (rhBMP-2) inhibits tumor formation [16-19]. However, the role of rhBMP-2 in canine osteosarcoma remains unknown. The osteoinductive capacity of rhBMP-2 has been widely studied in preclinical models and evaluated in the clinical setting [20]. Gene and cell therapy studies have shown that many bone defects can be treated by implantation of resorbable polymers with bone marrow cells transduced with an adenovirus expressing rhBMP-2 [21]. In addition, rhBMP-2 can be used as a substitute for bone grafts in spinal surgery, with results comparable to autogenous grafts [22].\nBased on the studies cited above, the present work explores the proliferative effects of canine mesenchymal stem cells (cMSCs) and osteosarcoma (OST) cells treated with rhBMP-2 to evaluate their regenerative potential in the presence of the in vitro treatment.", "The osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma performed in veterinary clinics and hospitals in São Paulo and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. Approved by Ethic Committee in the use of animals, protocol number 1654/2009. 
After harvesting the tissue, the fragments were washed with saline solution containing antibiotics (10%), and the fatty and hemorrhagic tissues were removed from the tumors. Next, the tumors were divided into two samples. The first sample was cut into pieces of 0.01 cm2 using a scalpel and adhered to Petri dishes for 1 h in fetal bovine serum (FBS) in an incubator at 37°C and 5% CO2. The second sample was used for histopathological diagnosis. The cells were cultured in 25-cm2 flasks with DMEM-H (LGC) medium supplemented with 10% FBS (VITROCELL), 1% penicillin and streptomycin (GIBCO), and 1% pyruvic acid (GIBCO) at pH 7.4 and were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The cells were sub-cultured by trypsinization of the confluent monolayer cultures every 3 days.", "The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. Approved by the Ethics Committee on the Use of Animals, protocol number 931/2006. cMSCs were cultured in 25-cm2 flasks with ALPHA-MEM medium (VITROCELL) supplemented with 10% HyClone FBS, 1% penicillin and streptomycin (GIBCO), 1% nonessential amino acids (GIBCO), and 1% L-glutamine (GIBCO) at pH 7.4, and they were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. Sub-cultures were obtained from the monolayer cells.", "After expansion, the OST cells and cMSCs were cultured in 12-well plates containing 10^5 cells per well. The groups comprised untreated OST cells (control group 1) and untreated cMSCs (control group 2).\nThese cells were treated with different concentrations of rhBMP-2 (5, 10, and 20 nM) for 120 h. 
Cells were labeled with carboxyfluorescein diacetate succinimidyl diester (CFSE; Invitrogen, Carlsbad, CA, USA) by incubation for 15 min at room temperature with 1 mM CFSE at a density of 2 × 10 6 cells/ml in PBS and the reaction was stopped by adding bovine serum to the PBS. At different times of culture, the cells were harvested and directly run on the FACSCalibur flow cytometer (Becton Dickinson) to measure the intensity of CFSE in the cells in order to monitor the number of times that the cells had divided during culture.\nThen a total of 3 × 105 cells were cultured in 96 U-bottom well plates and were kept in an incubator at 37°C and 5% CO2 for 5 days. After the incubation period, the proliferation of the OST cells and the cMSCs was measured to standardize the optimal concentration to be used in the in vitro regenerative therapy of canine osteosarcoma.\nFor transwell culture the cMSCs treated with rhBMP2 (20 nM) were cultured onto dry 6.5 mm diameter, 0.4 μm pore size polycarbonate Transwell filters containing 105 cells per filter. OST cells treated with rhBMP2 (5 nM) was added to the lower well, containing 105 cells per well. This treatment lasted for 120 h.", "The cells obtained from the Transwell culture plates were trypsinized and inactivated with FBS, centrifuged at 1,500 rpm for 10 min, and the supernatant was discarded. The pellet was resuspended in 5 ml of PBS at a concentration of 106 cells/ml. To analyze the amounts of intracytoplasmic and nuclear markers, the cells were permeabilized with 5 μl of 0.1% Triton X-100 for 30 min before the addition of specific primary antibodies. The following markers were used to determine the cell death pathways: caspase-3, Bax, Bad, and Bcl-2. The antibodies for Ki-67 and p53 were used to determine the proliferation index. Mesenchymal stem cell differentiation and maturation was determined by using the Oct 3/4, Stro-1, and Nanog antibodies. 
The samples were analyzed in a flow cytometer (FACSCalibur), and the expression of the markers was determined by comparison with an isotype control.", "All the values reported are mean ± SD. Statistical analyses were performed using GraphPad Prism Version 5 software and significance was determined using either the nonparametric MannWhitney test for unpaired data or the two-tailed t-test. Difference was considered significant at p < 0.05. In all graphs, *, **, *** indicate difference between groups at p < 0.05, 0.01 and 0.001, respectively.", " Proliferative effects of rhBMP-2 in canine mesenchymal stem cells (cMSCs) and canine osteosarcoma (OST) cells cMSCs cells showed an enhanced cellular proliferation response in comparison to OST cells when treated with 20 nM of rhBMP-2 for 120 h. A significant proliferative response was observed in OST after treatment with 5 nM of rhBMP-2 (Figure 1).\nProliferation curves. Proliferation curves of bone marrow stem cells derived from canine fetuses (cMSCs) and canine osteosarcoma (OST) cells 120 h after rhBMP-2 treatment. The cells were stained with CSFE and analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01, * p < 0.05).\ncMSCs cells showed an enhanced cellular proliferation response in comparison to OST cells when treated with 20 nM of rhBMP-2 for 120 h. A significant proliferative response was observed in OST after treatment with 5 nM of rhBMP-2 (Figure 1).\nProliferation curves. Proliferation curves of bone marrow stem cells derived from canine fetuses (cMSCs) and canine osteosarcoma (OST) cells 120 h after rhBMP-2 treatment. The cells were stained with CSFE and analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01, * p < 0.05).\n Expression of cellular markers in canine osteosarcoma cells treated with cMSCs and rhBMP-2 Oct 3/4 and Nanog are markers of pluripotent embryonic stem cells. 
We analyzed the expression of these markers and found that treatment of cMSCs with rhBMP-2 (20 nM) decreased the expression of Oct 3/4 and Nanog in the bone marrow cells. Stro-1 and CD90 are surface markers of mesenchymal stem cells that specifically possess osteogenic potential. We observed an increase in the Stro-1 marker when the osteosarcoma cells were treated with rhBMP-2 (5 nM). This increase correlates with the finding that rhBMP-2 is expressed in some types of osteosarcoma, as observed in the OST cells (Figure 2A). A significant reduction in the expression of the pluripotent embryonic stem cell markers (Oct 3/4 and Nanog) and the mesenchymal stem cell markers (CD90 and Stro-1) was observed in the canine bone marrow cells treated with rhBMP-2, as shown in Figure 2B.\nPluripotent embryonic stem cell markers. Expression of pluripotent embryonic stem cell markers (Nanog and Oct 3/4) and markers of mesenchymal stem cells with osteogenic potential (Stro-1 and CD90) in OST cells (A) and cMSCs (B) from the control and the group treated with rhBMP-2 (5 nM and 20 nM, respectively) for 120 h. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nOct 3/4 and Nanog are markers of pluripotent embryonic stem cells. We analyzed the expression of these markers and found that treatment of cMSCs with rhBMP-2 (20 nM) decreased the expression of Oct 3/4 and Nanog in the bone marrow cells. Stro-1 and CD90 are surface markers of mesenchymal stem cells that specifically possess osteogenic potential. We observed an increase in the Stro-1 marker when the osteosarcoma cells were treated with rhBMP-2 (5 nM). This increase correlates with the finding that rhBMP-2 is expressed in some types of osteosarcoma, as observed in the OST cells (Figure 2A). 
A significant reduction in the expression of the pluripotent embryonic stem cell markers (Oct 3/4 and Nanog) and the mesenchymal stem cells markers (CD90 and Stro-1) was observed in the canine bone marrow cells treated with rhBMP-2, as shown in Figure 2B.\nPluripotent embryonic stem cell markers. Expression of pluripotent embryonic stem cell markers (Nanog and Oct 3/4) and markers of mesenchymal stem cells with osteogenic potential (Stro-1 and CD90) in OST cells (A) and cMSCs (B) from the control and the group treated with rhBMP-2 (5 nM e 20 nM respectively) for 120 h. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\n Flow cytometric analyses of markers of proliferation and cell death by apoptosis or necrosis The treatment of cMSCs with rhBMP-2 induced an increase in proliferation followed by a significant increase in the expression of the Ki-67 marker. We observed a decrease in the expression of p53, which is involved in the regulation of apoptosis and tumor suppression. Treatment with rhBMP-2 can activate p53 when DNA damage is present, and this occurs without the involvement of cMSCs in the tumorigenic processes or an increase in apoptosis via phosphorylated caspase-3. We also observed a significant increase in the expression of the anti-apoptotic protein Bcl-2 and a decrease in phosphorylated caspase-3 after treatment with rhBMP-2. There was a inhibition of cell proliferation in the OST cells treated with rhBMP-2, which promotes a significant increase in p53 expression. An increased induction of cell death mediated by pro-apoptotic proteins (Bax) was observed, resulting in suppression of proliferation and an increase of phosphorylated caspase-3 (Figure 3).\nPro and anti-apoptotic proteins and the proliferative capacity. Expression of pro- and anti-apoptotic proteins and the proliferative capacity in cMSCs (A) and OST cells (B) 120 h after treatment with rhBMP-2 (20 nM and 5 nM respectively) analyzed by flow cytometry. 
Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nThe treatment of osteosarcoma with cMSCs treated with rhBMP-2 induced a significant decrease in the expression of the p53 marker, which is involved in the regulation of apoptosis and tumor suppression. We observed a decrease in the expression of Ki-67, which is involved in cellular proliferation. The treatment showed a significant increase in apoptosis via phosphorylated caspase-3 and in the expression of the pro-apoptotic protein Bax, and a significant decrease in the expression of the anti-apoptotic protein Bcl-2 (Figures 4 and 5).\nPro-apoptotic effects of osteosarcoma treatment with cMSCs associated with rhBMP-2. Marker expression of osteosarcoma treated with cMSCs and with cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nComparison between different treatments. Marker expression of osteosarcoma treated with cMSCs, rhBMP-2, and cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nThe treatment of cMSCs with rhBMP-2 induced an increase in proliferation followed by a significant increase in the expression of the Ki-67 marker. We observed a decrease in the expression of p53, which is involved in the regulation of apoptosis and tumor suppression. Treatment with rhBMP-2 can activate p53 when DNA damage is present, and this occurs without the involvement of cMSCs in the tumorigenic processes or an increase in apoptosis via phosphorylated caspase-3. We also observed a significant increase in the expression of the anti-apoptotic protein Bcl-2 and a decrease in phosphorylated caspase-3 after treatment with rhBMP-2. 
There was an inhibition of cell proliferation in the OST cells treated with rhBMP-2, which promoted a significant increase in p53 expression. An increased induction of cell death mediated by pro-apoptotic proteins (Bax) was observed, resulting in suppression of proliferation and an increase in phosphorylated caspase-3 (Figure 3).\nPro- and anti-apoptotic proteins and the proliferative capacity. Expression of pro- and anti-apoptotic proteins and the proliferative capacity in cMSCs (A) and OST cells (B) 120 h after treatment with rhBMP-2 (20 nM and 5 nM, respectively) analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nThe treatment of osteosarcoma with cMSCs treated with rhBMP-2 induced a significant decrease in the expression of the p53 marker, which is involved in the regulation of apoptosis and tumor suppression. We observed a decrease in the expression of Ki-67, which is involved in cellular proliferation. The treatment showed a significant increase in apoptosis via phosphorylated caspase-3 and in the expression of the pro-apoptotic protein Bax, and a significant decrease in the expression of the anti-apoptotic protein Bcl-2 (Figures 4 and 5).\nPro-apoptotic effects of osteosarcoma treatment with cMSCs associated with rhBMP-2. Marker expression of osteosarcoma treated with cMSCs and with cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nComparison between different treatments. Marker expression of osteosarcoma treated with cMSCs, rhBMP-2, and cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).", "cMSCs showed an enhanced cellular proliferation response in comparison to OST cells when treated with 20 nM of rhBMP-2 for 120 h. 
A significant proliferative response was observed in OST cells after treatment with 5 nM of rhBMP-2 (Figure 1).\nProliferation curves. Proliferation curves of bone marrow stem cells derived from canine fetuses (cMSCs) and canine osteosarcoma (OST) cells 120 h after rhBMP-2 treatment. The cells were stained with CFSE and analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01, * p < 0.05).", "Oct 3/4 and Nanog are markers of pluripotent embryonic stem cells. We analyzed the expression of these markers and found that treatment of cMSCs with rhBMP-2 (20 nM) decreased the expression of Oct 3/4 and Nanog in the bone marrow cells. Stro-1 and CD90 are surface markers of mesenchymal stem cells that specifically possess osteogenic potential. We observed an increase in the Stro-1 marker when the osteosarcoma cells were treated with rhBMP-2 (5 nM). This increase correlates with the finding that rhBMP-2 is expressed in some types of osteosarcoma, as observed in the OST cells (Figure 2A). A significant reduction in the expression of the pluripotent embryonic stem cell markers (Oct 3/4 and Nanog) and the mesenchymal stem cell markers (CD90 and Stro-1) was observed in the canine bone marrow cells treated with rhBMP-2, as shown in Figure 2B.\nPluripotent embryonic stem cell markers. Expression of pluripotent embryonic stem cell markers (Nanog and Oct 3/4) and markers of mesenchymal stem cells with osteogenic potential (Stro-1 and CD90) in OST cells (A) and cMSCs (B) from the control and the group treated with rhBMP-2 (5 nM and 20 nM, respectively) for 120 h. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).", "The treatment of cMSCs with rhBMP-2 induced an increase in proliferation accompanied by a significant increase in the expression of the Ki-67 marker. We observed a decrease in the expression of p53, which is involved in the regulation of apoptosis and tumor suppression.
Treatment with rhBMP-2 can activate p53 when DNA damage is present, and this occurred without involvement of the cMSCs in tumorigenic processes or an increase in apoptosis via phosphorylated caspase-3. We also observed a significant increase in the expression of the anti-apoptotic protein Bcl-2 and a decrease in phosphorylated caspase-3 after treatment with rhBMP-2. There was an inhibition of cell proliferation in the OST cells treated with rhBMP-2, which promoted a significant increase in p53 expression. An increased induction of cell death mediated by the pro-apoptotic protein Bax was observed, resulting in suppression of proliferation and an increase in phosphorylated caspase-3 (Figure 3).\nPro- and anti-apoptotic proteins and proliferative capacity. Expression of pro- and anti-apoptotic proteins and the proliferative capacity in cMSCs (A) and OST cells (B) 120 h after treatment with rhBMP-2 (20 nM and 5 nM, respectively), analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nThe treatment of osteosarcoma cells with cMSCs treated with rhBMP-2 induced a significant decrease in the expression of the p53 marker, which is involved in the regulation of apoptosis and tumor suppression. We observed a decrease in the expression of Ki-67, which is involved in cellular proliferation. The treatment showed a significant increase in apoptosis via phosphorylated caspase-3 and in the expression of the pro-apoptotic protein Bax, and a significant decrease in the expression of the anti-apoptotic protein Bcl-2 (Figures 4 and 5).\nPro-apoptotic effects of osteosarcoma treatment with cMSCs associated with rhBMP-2. Marker expression in osteosarcoma cells treated with cMSCs and with cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nComparison between different treatments.
Marker expression in osteosarcoma cells treated with cMSCs, rhBMP-2, and cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).", "In the present study, we analyzed the effects of rhBMP-2 in mesenchymal stem cells for use in the regenerative therapy of canine osteosarcoma, utilizing OST cells as a model. The OST cells remained spindle-shaped during cell growth and at confluence.\nTreatment of OST cells with rhBMP-2 compromised the osteoblastic phenotype. Osteoprogenitors are either pre-determined or inducible, depending on the additional signals necessary to cause differentiation. This difference is important because it reflects the distinction between cell commitment, which is when cell fate is programmed, and cell differentiation, which is when cells are committed by permissive microenvironment signals [23].\nAlthough the exact function and interrelation of each type of BMP are not completely understood, evidence indicates that they are part of a series of complex factors that regulate cell differentiation, specifically maturation into chondroblasts and osteoblasts. The structural and functional evolutionary conservation of the genes encoding BMPs suggests that these genes have critical regulatory functions in the process of differentiation during development and neoplastic transformation. In human colorectal carcinoma, for example, rhBMP-2 acts as a tumor suppressor [24]. Similarly, rhBMP-2 shows anti-proliferative and pro-apoptotic potential in gastric, prostate, and ovarian cancer cells. In breast cancer cell lines, rhBMP-2 treatment decreases cell proliferation. The effects of BMPs vary with tumor progression.
In other words, in the early stages of carcinogenesis, the TGF-β superfamily acts by suppressing tumor growth, and at later stages, the superfamily actually promotes tumor progression [25].\nIn our findings, we observed that when mesenchymal stem cells were exposed to rhBMP-2, there was a significant reduction in the expression of the marker Nanog. We also noted a significant decrease in the expression of the pluripotency marker Oct 3/4 in OST cells treated with rhBMP-2. The treatment of OST cells with rhBMP-2 after Transwell culture inhibited the proliferative response (Ki-67 expression) and promoted an increase in cell death mediated by the pro-apoptotic proteins Bax and Bad, resulting in suppression of proliferation and an increase in the phosphorylation of caspase-3.\nThe treatment of bone marrow cells with rhBMP-2 stimulates the production of growth and differentiation factors. Thus, rhBMP-2 may be a relevant treatment for canine osteosarcoma cells. Eliseev et al. [26] showed that rhBMP-2 increases the expression of Bax via Runx2, thus increasing the sensitivity of osteosarcoma cells to apoptosis. In our experiments, we also observed an increase in Bax expression when canine osteosarcoma cells were treated with rhBMP-2. These results suggest that rhBMP-2 treatment increases the susceptibility of OST cells to death by apoptosis.\nKawamura et al. [17] observed that rhBMP-2 inhibits the growth of human multiple myeloma U266 cells by arresting the cells in the G1 phase of the cell cycle, leading to apoptosis. The combined treatment with rhBMP-2 induces cell cycle inhibitory proteins, such as p21 and p27, and induces other proteins associated with apoptosis, such as Bcl-xl, Bcl-2, Bax, and Bad. Thus, rhBMP-2 may be an important tool for the treatment of multiple myeloma due to its anti-tumor and bone regeneration effects. Kawamura et al.
[17] investigated the antiproliferative effect of rhBMP-2 in myeloma cells and found that BMPs inactivate the STAT3 protein, which is a signal transducer activated by IL-6. BMPs were also found to increase the expression of cell cycle inhibitors, leading to a cell replication blockage via pRb.\nBased on the studies of Hsu et al. [27], BMPs can function either as oncogenes or as tumor suppressors, depending on the stage of disease. The effects of BMPs are cell type-specific and may vary among different tumors. BMPs have also been reported as tumor suppressors that act on the cell cycle by inducing apoptosis of abnormal cells, such as tumor cells. Hardwick et al. [24] used colorectal cancer cell lines to evaluate the role of rhBMP-2. They observed that rhBMP-2 reduced cell growth and stimulated apoptosis, reflected in high levels of phosphorylated caspase-3. In our results, we also observed a decrease in cell growth when OST cells were treated with rhBMP-2, together with an increase in caspase-3 levels.\nTreatment with rhBMP-2 may be a new therapeutic option for canine osteosarcoma. We found that rhBMP-2 decreases the expression of the embryonic stem cell markers Nanog and Oct 3/4 under different treatment regimens; the expression of these markers is also associated with tumorigenesis in many types of cancer [28,29]. Because rhBMP-2 has the potential to inhibit the expression of markers such as Nanog and Oct 3/4, it may also exhibit anti-tumor effects in animal models in vivo. Oct 3/4 and Nanog are important factors in the regulation of self-renewal and the pluripotency of embryonic stem cells. Studies have shown a correlation of these factors with tumorigenesis [29]. Oct 3/4 is a marker for both adult stem cells and cancer stem cells. Inhibition of this factor can inhibit the expression of proteins associated with tumorigenesis.\nOsteogenesis is defined by a series of events, which starts with the commitment of mesenchymal cells to an osteogenic lineage.
Subsequently, these cells proliferate and demonstrate an upregulation of osteoblast-specific genes and mineralization. Multiple signaling pathways have been demonstrated to participate in the differentiation of an osteoblast progenitor to a committed osteoblast [30,31].\nAn association between the expression of STRO-1 and the presence of cells with osteogenic potential has been demonstrated in precursor cells of adult human bone marrow. The STRO-1+ population of human bone marrow cells is capable of osteogenic differentiation. The interpretation of STRO-1 expression is complicated by the fact that a considerable proportion of STRO-1+ cells are not of the osteogenic lineage, and the exact stage of osteogenic differentiation at which STRO-1 is expressed remains unclear, especially when working with cultured cell populations and with the coexpression of STRO-1 and a panel of antibodies recognizing cell surface determinants that may be regulated during osteogenic differentiation [32].\nBMP-2 alone does not induce osteogenesis in isolates of human bone marrow stromal cells, as measured by stimulation of alkaline phosphatase expression. However, BMP-2 does induce other markers associated with the differentiation of osteoblasts. This osteogenic capacity is seen in stromal cells isolated from mice, rats, rabbits, and humans; however, cell behavior and the efficacy of inducers vary in a species-dependent manner [33].\nBMP-2 stimulates surrounding tissues; however, more robust data are needed to demonstrate that BMP-2 also augments the osteogenic potential of implanted MSCs. In the present study, the effects of MSCs and rhBMP-2 were probably influenced by the Transwell culture model: cells occupy controlled environmental niches, and alterations in this microenvironment can dramatically modify their behavior and differentiation capacities.\nLangenfeld et al. [34] showed that cell culture conditions and the intra- and extra-cellular antagonist concentrations interfere with the biological activities of BMPs. Wang et al.
[23] observed that rhBMP-2 inhibits the tumorigenic potential of human osteosarcoma OS99-1 cells. The inhibition was due to a decrease in the expression of proteins associated with tumorigenesis and an increase in osteosarcoma cell differentiation in response to rhBMP-2. Thus, rhBMP-2 could be considered a novel tool for the treatment of human osteosarcoma. Our study clearly showed that mesenchymal stem cells derived from canine fetal bone marrow, combined with rhBMP-2, decrease the tumorigenic potential of canine osteosarcoma cells in vitro.", "The in vitro treatment of bone marrow cells with rhBMP-2 decreased their osteogenic potential. Thus, we suggest that the treatment conditions for both osteogenic induction and tumor regression are favorable when associated with stem cells derived from canine bone marrow. cMSCs treated with rhBMP-2 inhibit the proliferative capacity of OST cells through mechanisms of apoptosis and p53-mediated tumor suppression. Treatment of bone marrow cells with rhBMP-2 showed high therapeutic potential, as observed by the increase in the tumor suppressor protein p53 and the pro-apoptotic proteins Bad and Bax, and by the increased activity of phosphorylated caspase-3." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Isolation of canine osteosarcoma (OST) cells", "Isolation of mesenchymal stem cells derived from canine fetuses (cMSCs)", "In vitro treatment of osteosarcoma with stem cells derived from canine bone marrow and rhBMP-2", "Expression of cell proliferation and cell death markers", "Statistical analysis", "Results", "Proliferative effects of rhBMP-2 in canine mesenchymal stem cells (cMSCs) and canine osteosarcoma (OST) cells", "Expression of cellular markers in canine osteosarcoma cells treated with cMSCs and rhBMP-2", "Flow cytometric analyses of markers of proliferation and cell death by apoptosis or necrosis", "Discussion", "Conclusions" ]
[ "Osteosarcoma is a primary bone cancer common in dogs. Frequently, osteosarcoma affects the limb bones of large-sized dogs over 15 kg at an average age of 7 years [1]. In 75% of cases, osteosarcoma affects either the appendicular skeleton [2] or the pelvic and thoracic limbs, and in the remaining 25%, it affects the axial skeleton or the flat bones [3,4]. Generally, males have a higher incidence of osteosarcoma than females [2], with the exception of the St. Bernard, Rottweiler, and Danish breeds, in which females are most affected [5,6]. Osteosarcoma cells induce platelet aggregation, which facilitates metastasis formation. Platelet aggregation and metastasis most commonly occur in the lung [7]. Platelet aggregation promotes the establishment of tumor cell aggregates, which could serve as a bridge between the tumor cells and the vascular surfaces [6].\nA primary extraskeletal osteosarcoma has a metastatic rate that ranges from 60 to 85% in dogs and an average life expectancy after surgery of 26-90 days, which varies according to the location where the metastasis occurs [4].\nMetastasis is the most common cause of death in dogs with osteosarcoma, and 90% of dogs either die or are euthanized due to complications associated with lung metastases. Therefore, chemotherapy is used to increase the long-term survival of dogs with osteosarcoma. To reduce the occurrence of metastasis, chemotherapy is often used in combination with surgery or radiotherapy. Specifically, either cisplatin alone or cisplatin and doxorubicin are the chemotherapeutic agents used in dogs [8,9]. Numerous studies have aimed to develop antiangiogenic therapeutic strategies, which can be combined with other treatments [10].\nThe bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth.
They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors [11,12]. Depending on their concentration gradient, the BMPs can attract various types of cells [13] and act as chemotactic, mitogenic, or differentiation agents [14]. BMPs can interfere with the proliferation of cells and the formation of cartilage and bone. Finally, BMPs can also induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts [15].\nBMPs play important roles in cell differentiation, proliferation, morphogenesis, and apoptosis, and recent studies have shown that recombinant human BMP-2 (rhBMP-2) inhibits tumor formation [16-19]. However, the role of rhBMP-2 in canine osteosarcoma remains unknown. The osteoinductive capacity of rhBMP-2 has been widely studied in preclinical models and evaluated in the clinical setting [20]. Gene and cell therapy studies have shown that many bone defects can be treated by implantation of resorbable polymers with bone marrow cells transduced with an adenovirus expressing rhBMP-2 [21]. In addition, rhBMP-2 can be used as a substitute for bone grafts in spinal surgery, with results comparable to autogenous grafts [22].\nBased on the studies cited above, the present work explores the proliferative effects of rhBMP-2 on canine mesenchymal stem cells (cMSCs) and osteosarcoma (OST) cells to evaluate their regenerative potential under the in vitro treatment.", " Isolation of canine osteosarcoma (OST) cells The osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma performed in veterinary clinics and hospitals in São Paulo and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. Approved by the Ethics Committee on the use of animals, protocol number 1654/2009.
After harvesting the tissue, the fragments were washed with saline solution containing antibiotics (10%), and the fatty and hemorrhagic tissues were removed from the tumors. Next, the tumors were divided into two samples. The first sample was cut into pieces of 0.01 cm2 using a scalpel and adhered to Petri dishes for 1 h in fetal bovine serum (FBS) in an incubator at 37°C and 5% CO2. The second sample was used for histopathological diagnosis. The cells were cultured in 25-cm2 flasks with DMEM-H (LGC) media supplemented with 10% FBS (VITROCELL), 1% penicillin and streptomycin (GIBCO), and 1% pyruvic acid (GIBCO) at pH 7.4 and were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The sub-culture of cells was performed by trypsinization of the confluent monolayer cell cultures every 3 days.\n Isolation of mesenchymal stem cells derived from canine fetuses (cMSCs) The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. Approved by the Ethics Committee on the use of animals, protocol number 931/2006. cMSCs were cultured in 25 cm2 flasks with ALPHA-MEM media (VITROCELL) supplemented with 10% HyClone FBS, 1% penicillin and streptomycin (GIBCO), 1% nonessential amino acids (GIBCO), and 1% L-glutamine (GIBCO) at pH 7.4, and they were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The sub-culture of cells was performed from the monolayer cells.\n In vitro treatment of osteosarcoma with stem cells derived from canine bone marrow and rhBMP-2 After expansion, the OST and cMSC cells were cultured in 12-well plates containing 105 cells per well. The groups were composed of untreated OST cells (control group 1) and untreated cMSCs (control group 2).\nThese cells were treated with different concentrations of rhBMP-2 (5, 10, and 20 nM) for 120 h.
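The nanomolar working concentrations used here (5, 10, and 20 nM) imply a simple C1·V1 = C2·V2 dilution from a protein stock. A minimal sketch of that calculation, assuming a hypothetical stock concentration (the paper does not state one):

```python
def stock_volume_ul(target_nM: float, final_ml: float, stock_uM: float) -> float:
    """Microlitres of rhBMP-2 stock to add to reach target_nM in final_ml.

    C1*V1 = C2*V2; with C1 in uM, C2 in nM, and V2 in ml, V1 comes out in ul.
    stock_uM is a hypothetical value, not taken from the paper.
    """
    return target_nM * final_ml / stock_uM

# e.g. 20 nM in 10 ml of medium from a 10 uM stock requires 20 ul of stock
```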
Cells were labeled with carboxyfluorescein diacetate succinimidyl ester (CFSE; Invitrogen, Carlsbad, CA, USA) by incubation for 15 min at room temperature with 1 mM CFSE at a density of 2 × 106 cells/ml in PBS, and the reaction was stopped by adding bovine serum to the PBS. At different times of culture, the cells were harvested and run directly on the FACSCalibur flow cytometer (Becton Dickinson) to measure the intensity of CFSE in the cells in order to monitor the number of times that the cells had divided during culture.\nThen a total of 3 × 105 cells were cultured in 96 U-bottom well plates and were kept in an incubator at 37°C and 5% CO2 for 5 days. After the incubation period, the proliferation of the OST cells and the cMSCs was measured to standardize the optimal concentration to be used in the in vitro regenerative therapy of canine osteosarcoma.\nFor Transwell culture, the cMSCs treated with rhBMP-2 (20 nM) were cultured onto dry 6.5 mm diameter, 0.4 μm pore size polycarbonate Transwell filters containing 105 cells per filter. OST cells treated with rhBMP-2 (5 nM) were added to the lower well, containing 105 cells per well. This treatment lasted for 120 h.\n Expression of cell proliferation and cell death markers The cells obtained from the Transwell culture plates were trypsinized and inactivated with FBS, centrifuged at 1,500 rpm for 10 min, and the supernatant was discarded. The pellet was resuspended in 5 ml of PBS at a concentration of 106 cells/ml. To analyze the amounts of intracytoplasmic and nuclear markers, the cells were permeabilized with 5 μl of 0.1% Triton X-100 for 30 min before the addition of specific primary antibodies. The following markers were used to determine the cell death pathways: caspase-3, Bax, Bad, and Bcl-2. The antibodies for Ki-67 and p53 were used to determine the proliferation index. Mesenchymal stem cell differentiation and maturation were determined by using the Oct 3/4, Stro-1, and Nanog antibodies.
The samples were analyzed in a flow cytometer (FACSCalibur), and the expression of the markers was determined by comparison with an isotype control.\n Statistical analysis All the values reported are mean ± SD. Statistical analyses were performed using GraphPad Prism Version 5 software, and significance was determined using either the nonparametric Mann-Whitney test for unpaired data or the two-tailed t-test. A difference was considered significant at p < 0.05. In all graphs, *, **, and *** indicate differences between groups at p < 0.05, 0.01, and 0.001, respectively.", "The osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma performed in veterinary clinics and hospitals in São Paulo and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. Approved by the Ethics Committee on the use of animals, protocol number 1654/2009. After harvesting the tissue, the fragments were washed with saline solution containing antibiotics (10%), and the fatty and hemorrhagic tissues were removed from the tumors. Next, the tumors were divided into two samples. The first sample was cut into pieces of 0.01 cm2 using a scalpel and adhered to Petri dishes for 1 h in fetal bovine serum (FBS) in an incubator at 37°C and 5% CO2. The second sample was used for histopathological diagnosis. The cells were cultured in 25-cm2 flasks with DMEM-H (LGC) media supplemented with 10% FBS (VITROCELL), 1% penicillin and streptomycin (GIBCO), and 1% pyruvic acid (GIBCO) at pH 7.4 and were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The sub-culture of cells was performed by trypsinization of the confluent monolayer cell cultures every 3 days.", "The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. Approved by the Ethics Committee on the use of animals, protocol number 931/2006. cMSCs were cultured in 25 cm2 flasks with ALPHA-MEM media (VITROCELL) supplemented with 10% HyClone FBS, 1% penicillin and streptomycin (GIBCO), 1% nonessential amino acids (GIBCO), and 1% L-glutamine (GIBCO) at pH 7.4, and they were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate.
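The significance conventions described in the statistical analysis (Mann-Whitney for unpaired data or a two-tailed t-test, with *, **, and *** at p < 0.05, 0.01, and 0.001) can be reproduced outside GraphPad. A stdlib-only sketch, using the normal approximation for the Mann-Whitney p-value (GraphPad and SciPy compute the same U statistic; exact small-sample p-values would differ slightly):

```python
import math

def significance_stars(p: float) -> str:
    """Map a p-value to the star convention used in the figures."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

def mann_whitney_u(x, y):
    """Mann-Whitney U with a normal-approximation two-sided p-value.

    Ties receive average ranks; no continuity correction is applied.
    """
    pooled = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average 1-based rank over the tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(x), len(y)
    u = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U for the first sample
    mu = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p
```

In practice one would call `significance_stars(p)` on the p-value from each pairwise comparison to annotate the plots.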
The sub-culture of cells was performed from the monolayer cells.", "After expansion, the OST and cMSC cells were cultured in 12-well plates containing 105 cells per well. The groups were composed of untreated OST cells (control group 1) and untreated cMSCs (control group 2).\nThese cells were treated with different concentrations of rhBMP-2 (5, 10, and 20 nM) for 120 h. Cells were labeled with carboxyfluorescein diacetate succinimidyl ester (CFSE; Invitrogen, Carlsbad, CA, USA) by incubation for 15 min at room temperature with 1 mM CFSE at a density of 2 × 106 cells/ml in PBS, and the reaction was stopped by adding bovine serum to the PBS. At different times of culture, the cells were harvested and run directly on the FACSCalibur flow cytometer (Becton Dickinson) to measure the intensity of CFSE in the cells in order to monitor the number of times that the cells had divided during culture.\nThen a total of 3 × 105 cells were cultured in 96 U-bottom well plates and were kept in an incubator at 37°C and 5% CO2 for 5 days. After the incubation period, the proliferation of the OST cells and the cMSCs was measured to standardize the optimal concentration to be used in the in vitro regenerative therapy of canine osteosarcoma.\nFor Transwell culture, the cMSCs treated with rhBMP-2 (20 nM) were cultured onto dry 6.5 mm diameter, 0.4 μm pore size polycarbonate Transwell filters containing 105 cells per filter. OST cells treated with rhBMP-2 (5 nM) were added to the lower well, containing 105 cells per well. This treatment lasted for 120 h.", "The cells obtained from the Transwell culture plates were trypsinized and inactivated with FBS, centrifuged at 1,500 rpm for 10 min, and the supernatant was discarded. The pellet was resuspended in 5 ml of PBS at a concentration of 106 cells/ml. To analyze the amounts of intracytoplasmic and nuclear markers, the cells were permeabilized with 5 μl of 0.1% Triton X-100 for 30 min before the addition of specific primary antibodies.
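As an aside on the CFSE readout described in the treatment protocol: the dye is partitioned roughly equally between daughter cells, so fluorescence approximately halves at each division, and the division count can be estimated from the intensity ratio. A hypothetical helper illustrating this, not taken from the paper:

```python
import math

def cfse_divisions(parent_mfi: float, measured_mfi: float) -> float:
    """Estimate division count from CFSE mean fluorescence intensity (MFI).

    CFSE halves with each division, so divisions = log2(parent / measured).
    parent_mfi is the MFI of the undivided (day-0) population.
    """
    if parent_mfi <= 0 or measured_mfi <= 0:
        raise ValueError("MFI values must be positive")
    return math.log2(parent_mfi / measured_mfi)

# e.g. a population at 1/8 of the parent intensity has divided ~3 times
```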
The following markers were used to determine the cell death pathways: caspase-3, Bax, Bad, and Bcl-2. The antibodies against Ki-67 and p53 were used to determine the proliferation index. Mesenchymal stem cell differentiation and maturation were determined using the Oct 3/4, Stro-1, and Nanog antibodies. The samples were analyzed in a flow cytometer (FACSCalibur), and the expression of the markers was determined by comparison with an isotype control.", "All the values reported are mean ± SD. Statistical analyses were performed using GraphPad Prism Version 5 software, and significance was determined using either the nonparametric Mann-Whitney test for unpaired data or the two-tailed t-test. Differences were considered significant at p < 0.05. In all graphs, *, **, *** indicate differences between groups at p < 0.05, 0.01 and 0.001, respectively.", " Proliferative effects of rhBMP-2 in canine mesenchymal stem cells (cMSCs) and canine osteosarcoma (OST) cells cMSCs showed an enhanced cellular proliferation response in comparison to OST cells when treated with 20 nM of rhBMP-2 for 120 h. A significant proliferative response was observed in OST cells after treatment with 5 nM of rhBMP-2 (Figure 1).\nProliferation curves. Proliferation curves of bone marrow stem cells derived from canine fetuses (cMSCs) and canine osteosarcoma (OST) cells 120 h after rhBMP-2 treatment. The cells were stained with CFSE and analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01, * p < 0.05).\n Expression of cellular markers in canine osteosarcoma cells treated with cMSCs and rhBMP-2 Oct 3/4 and Nanog are markers of pluripotent embryonic stem cells. We analyzed the expression of these markers and found that treatment of cMSCs with rhBMP-2 (20 nM) decreased the expression of Oct 3/4 and Nanog in the bone marrow cells. Stro-1 and CD90 are surface markers of mesenchymal stem cells that specifically possess osteogenic potential. We observed an increase in the Stro-1 marker when the osteosarcoma cells were treated with rhBMP-2 (5 nM). This increase correlates with the finding that rhBMP-2 is expressed in some types of osteosarcoma, as observed in the OST cells (Figure 2A). A significant reduction in the expression of the pluripotent embryonic stem cell markers (Oct 3/4 and Nanog) and the mesenchymal stem cell markers (CD90 and Stro-1) was observed in the canine bone marrow cells treated with rhBMP-2, as shown in Figure 2B.\nPluripotent embryonic stem cell markers. Expression of pluripotent embryonic stem cell markers (Nanog and Oct 3/4) and markers of mesenchymal stem cells with osteogenic potential (Stro-1 and CD90) in OST cells (A) and cMSCs (B) from the control group and the group treated with rhBMP-2 (5 nM and 20 nM, respectively) for 120 h. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\n Flow cytometric analyses of markers of proliferation and cell death by apoptosis or necrosis The treatment of cMSCs with rhBMP-2 induced an increase in proliferation accompanied by a significant increase in the expression of the Ki-67 marker. We observed a decrease in the expression of p53, which is involved in the regulation of apoptosis and tumor suppression. Treatment with rhBMP-2 can activate p53 when DNA damage is present, and this occurred without involvement of the cMSCs in tumorigenic processes or an increase in apoptosis via phosphorylated caspase-3. We also observed a significant increase in the expression of the anti-apoptotic protein Bcl-2 and a decrease in phosphorylated caspase-3 after treatment with rhBMP-2. There was an inhibition of cell proliferation in the OST cells treated with rhBMP-2, which promoted a significant increase in p53 expression. An increased induction of cell death mediated by pro-apoptotic proteins (Bax) was observed, resulting in suppression of proliferation and an increase of phosphorylated caspase-3 (Figure 3).\nPro- and anti-apoptotic proteins and the proliferative capacity. 
Expression of pro- and anti-apoptotic proteins and the proliferative capacity in cMSCs (A) and OST cells (B) 120 h after treatment with rhBMP-2 (20 nM and 5 nM, respectively), analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nThe treatment of osteosarcoma cells with cMSCs treated with rhBMP-2 induced a significant decrease in the expression of the p53 marker, which is involved in the regulation of apoptosis and tumor suppression. We observed a decrease in the expression of Ki-67, which is involved in cellular proliferation. The treatment showed a significant increase in apoptosis via phosphorylated caspase-3 and in the expression of the pro-apoptotic protein Bax, and a significant decrease in the expression of the anti-apoptotic protein Bcl-2 (Figures 4 and 5).\nPro-apoptotic effects of osteosarcoma treatment with cMSCs associated with rhBMP-2. Marker expression in osteosarcoma cells treated with cMSCs and with cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).\nComparison between different treatments. Marker expression in osteosarcoma cells treated with cMSCs, rhBMP-2, and cMSCs treated with rhBMP-2 (5 nM), 120 h after treatment, analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).", "In the present study, we analyzed the effects of rhBMP-2 in mesenchymal stem cells for use in the regenerative therapy of canine osteosarcoma, utilizing OST cells as a model. The OST cells remained spindle-shaped during cell growth and the confluent stages.\nTreatment of OST cells with rhBMP-2 compromised the osteoblastic phenotype. Osteoprogenitors are either pre-determined or inducible, depending on the additional signals necessary to cause differentiation. This difference is important because it reflects the distinction between cell commitment, which is when cell fate is programmed, and cell differentiation, which is when cells are committed by permissive microenvironment signals [23].\nAlthough the exact function and interrelation of each type of BMP are not completely understood, evidence indicates that they are part of a series of complex factors that regulate cell differentiation, specifically maturation into chondroblasts and osteoblasts. 
The structural and functional evolutionary conservation of genes encoding BMPs suggests that these genes have critical regulatory functions in the process of differentiation during development and neoplastic transformation. In human colorectal carcinoma, for example, rhBMP-2 acts as a tumor suppressor [24]. Similarly, rhBMP-2 shows anti-proliferative and pro-apoptotic potential in gastric, prostate, and ovarian cancer cells. In breast cancer cell lines, rhBMP-2 treatment decreases cell proliferation. The effects of BMPs vary with tumor progression. In other words, in the early stages of carcinogenesis, the TGF-β superfamily acts by suppressing tumor growth, and at later stages, the superfamily actually promotes tumor progression [25].\nIn our findings, we observed that when mesenchymal stem cells were exposed to rhBMP-2, there was a significant reduction in the expression of the marker Nanog. We noted a significant decrease in expression of the pluripotency marker Oct 3/4 in OST cells treated with rhBMP-2. The treatment of OST cells with rhBMP-2 after Transwell culture inhibited the proliferative response (Ki-67 expression) and promoted an increase in cell death mediated by the pro-apoptotic proteins (Bax and Bad), resulting in suppression of proliferation and an increase in phosphorylated caspase-3.\nThe treatment of bone marrow cells with rhBMP-2 stimulates the production of growth and differentiation factors. Thus, rhBMP-2 may be a relevant therapy for canine osteosarcoma cells. Eliseev et al. [26] showed that rhBMP-2 increases the expression of Bax via Runx2, thus increasing the sensitivity of osteosarcoma cells to apoptosis. In our experiments, we also observed an increase in Bax expression when canine osteosarcoma cells were treated with rhBMP-2. These results suggest that rhBMP-2 treatment increases the susceptibility of OST cells to death by apoptosis.\nKawamura et al. 
[17] observed that rhBMP-2 inhibits the growth of human multiple myeloma U266 cells by arresting the cells in the G1 phase of the cell cycle, leading to apoptosis. The combined treatment with rhBMP-2 induces cell cycle inhibitory proteins, such as p21 and p27, and induces other proteins associated with apoptosis, such as Bcl-xL, Bcl-2, Bax, and Bad. Thus, rhBMP-2 may be an important tool for the treatment of multiple myeloma due to its anti-tumor and bone regeneration effects. Kawamura et al. [17] investigated the antiproliferative effect of rhBMP-2 in myeloma cells and found that BMPs inactivate the STAT3 protein, which is a signal transducer activated by IL-6. BMPs were also found to increase the expression of cell cycle inhibitors, leading to a cell replication blockage via pRb.\nBased on the studies of Hsu et al. [27], BMPs can function either as an oncogene or as a tumor suppressor, depending on the stage of disease. The effects of BMPs are cell type-specific and may vary among different tumors. BMPs are also reported as tumor suppressors that act on the cell cycle by inducing apoptosis of abnormal cells, such as tumor cells. Hardwick et al. [24] used colorectal cancer cell lines to evaluate the role of rhBMP-2. They observed that rhBMP-2 reduced cell growth and stimulated apoptosis due to high levels of phosphorylated caspase-3. In our results, we also observed a decrease in cell growth when OST cells were treated with rhBMP-2, and we observed an increase in caspase-3 levels.\nTreatment with rhBMP-2 may be a new therapeutic option for canine osteosarcoma. We found that rhBMP-2 decreases the expression of the embryonic stem cell markers Nanog and Oct 3/4 in different treatment regimens, markers that are also associated with tumorigenesis in many types of cancer [28,29]. Because rhBMP-2 has the potential to inhibit the expression of markers such as Nanog and Oct 3/4, it may also exhibit anti-tumor effects in animal models in vivo. 
Oct 3/4 and Nanog are important factors in the regulation of self-renewal and the pluripotency of embryonic stem cells. There are studies showing a correlation of these factors with tumorigenesis [29]. Oct 3/4 is a marker for both adult stem cells and cancer stem cells. Inhibition of this factor can inhibit the expression of proteins associated with tumorigenesis.\nOsteogenesis is defined by a series of events, which starts with a commitment to an osteogenic lineage by mesenchymal cells. Subsequently, these cells proliferate and demonstrate an upregulation of osteoblast-specific genes and mineralization. Multiple signaling pathways have been demonstrated to participate in the differentiation of an osteoblast progenitor to a committed osteoblast [30,31].\nAn association between the expression of STRO-1 and the presence of cells with osteogenic potential has been demonstrated in precursor adult human bone marrow. The STRO-1+ population of human bone marrow cells is capable of osteogenic differentiation. The interpretation of STRO-1 expression is complicated by the fact that a considerable proportion of STRO-1+ cells are not of the osteogenic lineage, and the exact stage of osteogenic differentiation at which STRO-1 is expressed remains unclear, especially when working with cultured cell populations and with the coexpression of STRO-1 and a panel of antibodies recognizing cell surface determinants that may be regulated during osteogenic differentiation [32].\nBMP-2 alone does not induce osteogenesis in isolates of human bone marrow stromal cells as measured by stimulation of alkaline phosphatase expression. However, BMP-2 does induce other markers associated with differentiation of osteoblasts. 
This osteogenic capacity is seen in stromal cells isolated from mice, rats, rabbits, and humans; however, cell behavior and the efficacy of inducers vary in a species-dependent manner [33].\nBMP-2 stimulates surrounding tissues; however, more robust data are needed to demonstrate that BMP-2 also augments the osteogenic potential of implanted MSCs. In the present study, the effects of the MSCs and rhBMP-2 were probably modulated by the Transwell co-culture model: controlled environmental niches and alterations in this microenvironment can dramatically modify cell behavior and differentiation capacities.\nLangenfeld et al. [34] showed that cell culture conditions and the intra- and extra-cellular antagonist concentrations interfere with the biological activities of BMPs. Wang et al. [23] observed that rhBMP-2 inhibits the tumorigenic potential of human osteosarcoma OS99-1 cells. The inhibition was due to a decrease in the expression of proteins associated with tumorigenesis and an increase in osteosarcoma cell differentiation in response to rhBMP-2. Thus, rhBMP-2 could be considered a novel tool for the treatment of human osteosarcoma. Our study clearly showed that the association of mesenchymal stem cells derived from canine fetal bone marrow, combined with rhBMP-2, decreases the tumorigenic potential of canine osteosarcoma cells in vitro.", "The in vitro treatment of bone marrow cells with rhBMP-2 decreased their osteogenic potential. Thus, we suggest that the treatment conditions for both osteogenic induction and for tumor regression are favorable when associated with stem cells derived from canine bone marrow. cMSCs treated with rhBMP-2 inhibit the proliferative capacity of OST cells through mechanisms of apoptosis and p53-mediated tumor suppression. 
Treatment of bone marrow cells with rhBMP-2 showed high therapeutic potential, observed as an increase in the tumor suppressor protein p53 and the pro-apoptotic proteins Bad and Bax, and as increased activity of phosphorylated caspase-3." ]
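The significance-testing convention described in the Methods (nonparametric Mann-Whitney test for unpaired data or a two-tailed t-test, with *, **, *** indicating p < 0.05, 0.01 and 0.001 in the graphs) can be sketched in code. This is a minimal illustrative re-implementation, not the authors' GraphPad Prism workflow: it uses the large-sample normal approximation for the Mann-Whitney U test without tie or continuity corrections, and the marker-expression values are hypothetical.

```python
import math

def significance_stars(p):
    """Map a p-value to the annotation used in the figures:
    *, **, *** for p < 0.05, 0.01 and 0.001, respectively."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

def mann_whitney_p(a, b):
    """Two-sided Mann-Whitney U p-value for two unpaired samples,
    via the large-sample normal approximation (no tie or continuity
    correction) -- illustrative only."""
    n1, n2 = len(a), len(b)
    # U statistic: number of (a, b) pairs where a exceeds b (ties count 0.5).
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return min(p, 1.0)

# Hypothetical marker-expression values (% positive cells) for two groups.
control = [12.1, 13.4, 11.8, 12.9, 13.0]
treated = [25.6, 27.2, 24.9, 26.3, 28.0]
p = mann_whitney_p(control, treated)
label = significance_stars(p)
```

In practice one would use scipy.stats.mannwhitneyu or ttest_ind rather than hand-rolling the approximation; the sketch only makes the thresholding behind the star annotations explicit.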
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[ "Osteosarcoma", "rhBMP-2", "Mesenchymal stem cell", "Canine" ]
Background: Osteosarcoma is a primary bone cancer common in dogs. Frequently, osteosarcoma affects the limb bones of large-sized dogs over 15 kg at an average age of 7 years [1]. In 75% of cases, osteosarcoma affects either the appendicular skeleton [2] or the pelvic and thoracic limbs, and in the remaining 25%, it affects the axial skeleton or the flat bones [3,4]. Generally, males have a higher incidence of osteosarcoma than females [2], with the exception of the St. Bernard, Rottweiler, and Danish breeds, in which females are most affected [5,6]. Osteosarcoma cells induce platelet aggregation, which facilitates metastasis formation. Platelet aggregation and metastasis most commonly occur in the lung [7]. Platelet aggregation promotes the establishment of tumor cell aggregates, which could serve as a bridge between the tumor cells and the vascular surfaces [6]. A primary extraskeletal osteosarcoma has a metastatic rate that ranges from 60 to 85% in dogs and an average life expectancy after surgery of 26-90 days, which varies according to the location where the metastasis occurs [4]. Metastasis is the most common cause of death in dogs with osteosarcoma, and 90% of dogs either die or are euthanized due to complications associated with lung metastases. Therefore, chemotherapy is used to increase the long-term survival of dogs with osteosarcoma. To reduce the occurrence of metastasis, chemotherapy is often used in combination with surgery or radiotherapy. Specifically, either cisplatin or cisplatin and doxorubicin are chemotherapeutic agents used in dogs [8,9]. Numerous studies have aimed to develop antiangiogenic therapeutic strategies, which can be combined with other treatments [10]. The bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth. 
They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors [11,12]. Depending on their concentration gradient, the BMPs can attract various types of cells [13] and act as chemotactic, mitogenic, or differentiation agents [14]. BMPs can interfere with the proliferation of cells and the formation of cartilage and bone. Finally, BMPs can also induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts [15]. BMPs play important roles in cell differentiation, proliferation, morphogenesis, and apoptosis, and recent studies have shown that recombinant human BMP-2 (rhBMP-2) inhibits tumor formation [16-19]. However, the role of rhBMP-2 in canine osteosarcoma remains unknown. The osteoinductive capacity of rhBMP-2 has been widely studied in preclinical models and evaluated in the clinical setting [20]. Gene and cell therapy studies have shown that many bone defects can be treated by implantation of resorbable polymers with bone marrow cells transduced with an adenovirus expressing rhBMP-2 [21]. In addition, rhBMP-2 can be used as a substitute for bone grafts in spinal surgery, with results comparable to autogenous grafts [22]. Based on the studies cited above, the present work explores the proliferative effects of canine mesenchymal stem cells (cMSCs) and osteosarcoma (OST) cells treated with rhBMP-2 to evaluate their regenerative potential under the in vitro treatment. Methods: Isolation of canine osteosarcoma (OST) cells The osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma performed in veterinary clinics and hospitals in São Paulo and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. The study was approved by the Ethics Committee on Animal Use (protocol number 1654/2009). 
After harvesting the tissue, the fragments were washed with saline solution containing antibiotics (10%), and the fatty and hemorrhagic tissues were removed from the tumors. Next, the tumors were divided in two samples. The first sample was cut into pieces of 0.01 cm2 using a scalpel and adhered to Petri dishes for 1 h in fetal bovine serum (FBS) in an incubator at 37°C and 5% CO2. The second sample was used for histopathological diagnosis. The cells were cultured in 25-cm2 flasks with DMEM-H (LGC) media supplemented with 10% of FBS (VITROCELL), 1% of penicillin and streptomycin (GIBCO), and 1% of pyruvic acid (GIBCO) at pH 7.4 and were kept in a incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The sub-culture of cells was performed by trypsinizaton of the confluent monolayer cell cultures every 3 days. The osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma performed in veterinary clinics and hospitals in São Paulo and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. Approved by Ethic Committee in the use of animals, protocol number 1654/2009. After harvesting the tissue, the fragments were washed with saline solution containing antibiotics (10%), and the fatty and hemorrhagic tissues were removed from the tumors. Next, the tumors were divided in two samples. The first sample was cut into pieces of 0.01 cm2 using a scalpel and adhered to Petri dishes for 1 h in fetal bovine serum (FBS) in an incubator at 37°C and 5% CO2. The second sample was used for histopathological diagnosis. The cells were cultured in 25-cm2 flasks with DMEM-H (LGC) media supplemented with 10% of FBS (VITROCELL), 1% of penicillin and streptomycin (GIBCO), and 1% of pyruvic acid (GIBCO) at pH 7.4 and were kept in a incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. 
The sub-culture of cells was performed by trypsinizaton of the confluent monolayer cell cultures every 3 days. Isolation of mesenchymal stem cells derived from canines fetuses (cMSCs) The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. Approved by Ethic Committee in the use of animals, protocol number 931/2006. cMSCs were cultured in 25 cm2 flasks with ALPHA-MEM media (VITROCELL) supplemented with 10% of HyClone FBS, 1% of penicillin and streptomycin (GIBCO), 1% of nonessential amino acids (GIBCO), and 1% L-glutamine (GIBCO) at pH 7.4, and they were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The sub-culture of cells was performed from the monolayer cells. The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. Approved by Ethic Committee in the use of animals, protocol number 931/2006. cMSCs were cultured in 25 cm2 flasks with ALPHA-MEM media (VITROCELL) supplemented with 10% of HyClone FBS, 1% of penicillin and streptomycin (GIBCO), 1% of nonessential amino acids (GIBCO), and 1% L-glutamine (GIBCO) at pH 7.4, and they were kept in an incubator at 37°C and 5% CO2. The cells grew in a monolayer attached to the surface of the culture plate. The sub-culture of cells was performed from the monolayer cells. In vitro treatment of osteosarcoma with stem cells derived from canine bone marrow and rhBMP-2 After expansion, the OST and cMSCs cells were cultured in a 12-well plates containing 105 cells per well. The groups was composed by OST cells untreated (control group 1) and cMSCs untreated (control group 2). These cells were treated with different concentrations of rhBMP-2 (5, 10, and 20 nM) for 120 h. 
Cells were labeled with carboxyfluorescein diacetate succinimidyl ester (CFSE; Invitrogen, Carlsbad, CA, USA) by incubation for 15 min at room temperature with 1 mM CFSE at a density of 2 × 10⁶ cells/ml in PBS, and the reaction was stopped by adding bovine serum to the PBS. At different times of culture, the cells were harvested and run directly on the FACSCalibur flow cytometer (Becton Dickinson) to measure the intensity of CFSE in the cells, in order to monitor the number of times the cells had divided during culture. A total of 3 × 10⁵ cells were then cultured in 96-well U-bottom plates and kept in an incubator at 37°C and 5% CO2 for 5 days. After the incubation period, the proliferation of the OST cells and the cMSCs was measured to standardize the optimal concentration to be used in the in vitro regenerative therapy of canine osteosarcoma. For Transwell culture, cMSCs treated with rhBMP-2 (20 nM) were cultured onto dry 6.5 mm diameter, 0.4 μm pore size polycarbonate Transwell filters containing 10⁵ cells per filter. OST cells treated with rhBMP-2 (5 nM) were added to the lower well, containing 10⁵ cells per well. This treatment lasted for 120 h.

Expression of cell proliferation and cell death markers

The cells obtained from the Transwell culture plates were trypsinized and inactivated with FBS, centrifuged at 1,500 rpm for 10 min, and the supernatant was discarded. The pellet was resuspended in 5 ml of PBS at a concentration of 10⁶ cells/ml. To analyze the amounts of intracytoplasmic and nuclear markers, the cells were permeabilized with 5 μl of 0.1% Triton X-100 for 30 min before the addition of specific primary antibodies. The following markers were used to determine the cell death pathways: caspase-3, Bax, Bad, and Bcl-2. Antibodies against Ki-67 and p53 were used to determine the proliferation index. Mesenchymal stem cell differentiation and maturation were determined using the Oct 3/4, Stro-1, and Nanog antibodies. The samples were analyzed in a flow cytometer (FACSCalibur), and the expression of the markers was determined by comparison with an isotype control.
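The isotype-control comparison described above can be sketched numerically. Gating marker positivity at a high percentile of the isotype control (here the 99th) is a common flow cytometry convention that we assume for illustration; the function and the simulated fluorescence values are ours, not from the study.

```python
import numpy as np

# Sketch: percent of marker-stained events above an isotype-control gate.
# The 99th-percentile gate is an assumed convention; intensities are simulated.

def percent_positive(marker_intensities, isotype_intensities, percentile=99.0):
    """Percent of marker-stained events brighter than the isotype-control gate."""
    gate = np.percentile(isotype_intensities, percentile)
    return 100.0 * np.mean(np.asarray(marker_intensities) > gate)

rng = np.random.default_rng(0)
isotype = rng.normal(100, 10, 10_000)   # background (isotype) fluorescence
marker = rng.normal(160, 25, 10_000)    # stained population, shifted brighter

print(f"{percent_positive(marker, isotype):.1f}% positive vs isotype")
```

By construction, the isotype control gated against itself reads about 1% positive, which is the point of the convention: it bounds the false-positive rate of the gate.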
Statistical analysis

All values reported are mean ± SD. Statistical analyses were performed using GraphPad Prism Version 5 software, and significance was determined using either the nonparametric Mann-Whitney test for unpaired data or the two-tailed t-test. Differences were considered significant at p < 0.05. In all graphs, *, **, and *** indicate differences between groups at p < 0.05, 0.01, and 0.001, respectively.
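The statistical tests named above (Mann-Whitney for unpaired data, two-tailed t-test) and the star convention can be sketched with SciPy. The sample values are illustrative, not data from the study.

```python
from scipy import stats

# Sketch of the tests described above, with the usual star convention.
# `control` and `treated` are made-up illustrative samples.

def significance_stars(p: float) -> str:
    """Map a p-value to the *, **, *** convention used in the figures."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"

control = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
treated = [2.1, 2.4, 1.9, 2.2, 2.0, 2.5]

u_stat, p_mw = stats.mannwhitneyu(control, treated, alternative="two-sided")
t_stat, p_t = stats.ttest_ind(control, treated)  # two-tailed by default

print(f"Mann-Whitney p = {p_mw:.4f} ({significance_stars(p_mw)})")
print(f"t-test p = {p_t:.3g} ({significance_stars(p_t)})")
```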
Results

Proliferative effects of rhBMP-2 in canine mesenchymal stem cells (cMSCs) and canine osteosarcoma (OST) cells

cMSCs showed an enhanced cellular proliferation response in comparison to OST cells when treated with 20 nM rhBMP-2 for 120 h. A significant proliferative response was observed in OST cells after treatment with 5 nM rhBMP-2 (Figure 1). Figure 1. Proliferation curves of bone marrow stem cells derived from canine fetuses (cMSCs) and canine osteosarcoma (OST) cells 120 h after rhBMP-2 treatment. The cells were stained with CFSE and analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01, * p < 0.05).
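The CFSE readout underlying these proliferation curves has a simple arithmetic core: CFSE fluorescence halves with each cell division, so the number of divisions can be estimated from the intensity ratio. The helper below is our own sketch with illustrative MFI values, not an analysis from the study.

```python
import math

# Sketch: CFSE intensity halves per division, so
# divisions = log2(undivided MFI / current MFI). Values are illustrative.

def divisions_from_cfse(mfi_undivided: float, mfi_now: float) -> float:
    """Estimated number of divisions from CFSE dilution."""
    return math.log2(mfi_undivided / mfi_now)

# A population at 1/8 of the undivided peak has divided ~3 times.
print(divisions_from_cfse(8000.0, 1000.0))  # 3.0
```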
Expression of cellular markers in canine osteosarcoma cells treated with cMSCs and rhBMP-2

Oct 3/4 and Nanog are markers of pluripotent embryonic stem cells. We analyzed the expression of these markers and found that treatment of cMSCs with rhBMP-2 (20 nM) decreased the expression of Oct 3/4 and Nanog in the bone marrow cells. Stro-1 and CD90 are surface markers of mesenchymal stem cells that specifically possess osteogenic potential. We observed an increase in the Stro-1 marker when the osteosarcoma cells were treated with rhBMP-2 (5 nM). This increase is consistent with the finding that rhBMP-2 is expressed in some types of osteosarcoma, as observed in the OST cells (Figure 2A). A significant reduction in the expression of the pluripotent embryonic stem cell markers (Oct 3/4 and Nanog) and the mesenchymal stem cell markers (CD90 and Stro-1) was observed in the canine bone marrow cells treated with rhBMP-2, as shown in Figure 2B. Figure 2. Expression of pluripotent embryonic stem cell markers (Nanog and Oct 3/4) and markers of mesenchymal stem cells with osteogenic potential (Stro-1 and CD90) in OST cells (A) and cMSCs (B) from the control group and the group treated with rhBMP-2 (5 nM and 20 nM, respectively) for 120 h. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).

Flow cytometric analyses of markers of proliferation and cell death by apoptosis or necrosis

The treatment of cMSCs with rhBMP-2 induced an increase in proliferation, accompanied by a significant increase in the expression of the Ki-67 marker. We observed a decrease in the expression of p53, which is involved in the regulation of apoptosis and tumor suppression. Treatment with rhBMP-2 can activate p53 when DNA damage is present, and this occurred without involvement of the cMSCs in tumorigenic processes or an increase in apoptosis via phosphorylated caspase-3. We also observed a significant increase in the expression of the anti-apoptotic protein Bcl-2 and a decrease in phosphorylated caspase-3 after treatment with rhBMP-2. There was an inhibition of cell proliferation in the OST cells treated with rhBMP-2, which promoted a significant increase in p53 expression. An increased induction of cell death mediated by the pro-apoptotic protein Bax was observed, resulting in suppression of proliferation and an increase of phosphorylated caspase-3 (Figure 3). Figure 3.
Expression of pro- and anti-apoptotic proteins and the proliferative capacity in cMSCs (A) and OST cells (B) 120 h after treatment with rhBMP-2 (20 nM and 5 nM, respectively), analyzed by flow cytometry. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01). The treatment of osteosarcoma with cMSCs treated with rhBMP-2 induced a significant decrease in the expression of the p53 marker, which is involved in the regulation of apoptosis and tumor suppression. We observed a decrease in the expression of Ki-67, which is involved in cellular proliferation. The treatment showed a significant increase in apoptosis via phosphorylated caspase-3 and in the expression of the pro-apoptotic protein Bax, and a significant decrease in the expression of the anti-apoptotic protein Bcl-2 (Figures 4 and 5). Figure 4. Pro-apoptotic effects of osteosarcoma treatment with cMSCs associated with rhBMP-2: marker expression after treatment of osteosarcoma with cMSCs and with cMSCs treated with rhBMP-2 (5 nM), analyzed by flow cytometry 120 h after treatment. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01). Figure 5. Comparison between different treatments: marker expression after treatment of osteosarcoma with cMSCs, rhBMP-2, and cMSCs treated with rhBMP-2 (5 nM), analyzed by flow cytometry 120 h after treatment. Statistical differences were obtained by analysis of variance (*** p < 0.001, ** p < 0.01).
Discussion

In the present study, we analyzed the effects of rhBMP-2 on mesenchymal stem cells for use in the regenerative therapy of canine osteosarcoma, utilizing OST cells as a model. The OST cells remained spindle-shaped during cell growth and at confluence. Treatment of OST cells with rhBMP-2 compromised the osteoblastic phenotype. Osteoprogenitors are either pre-determined or inducible, depending on the additional signals necessary to cause differentiation. This difference is important because it reflects the distinction between cell commitment, when cell fate is programmed, and cell differentiation, when committed cells respond to permissive microenvironmental signals [23]. Although the exact function and interrelation of each type of BMP are not completely understood, evidence indicates that they are part of a series of complex factors that regulate cell differentiation, specifically maturation into chondroblasts and osteoblasts.
The structural and functional evolutionary conservation of the genes encoding BMPs suggests that these genes have critical regulatory functions in the process of differentiation during development and neoplastic transformation. In human colorectal carcinoma, for example, rhBMP-2 acts as a tumor suppressor [24]. Similarly, rhBMP-2 shows anti-proliferative and pro-apoptotic potential in gastric, prostate, and ovarian cancer cells. In breast cancer cell lines, rhBMP-2 treatment decreases cell proliferation. The effects of the BMPs vary with tumor progression: in the early stages of carcinogenesis, the TGF-β superfamily acts by suppressing tumor growth, while at later stages it actually promotes tumor progression [25]. In our findings, we observed that when mesenchymal stem cells were exposed to rhBMP-2, there was a significant reduction in the expression of the marker Nanog. We also noted a significant decrease in the expression of the pluripotency marker Oct 3/4 in OST cells treated with rhBMP-2. The treatment of OST cells with rhBMP-2 after Transwell culture inhibited the proliferative response (Ki-67 expression) and promoted an increase in cell death mediated by the pro-apoptotic proteins Bax and Bad, resulting in suppression of proliferation and an increase in phosphorylation of caspase-3. The treatment of bone marrow cells with rhBMP-2 stimulates the production of growth and differentiation factors. Thus, rhBMP-2 may be a relevant treatment for canine osteosarcoma cells. Eliseev et al. [26] showed that rhBMP-2 increases the expression of Bax via Runx2, thus increasing the sensitivity of osteosarcoma cells to apoptosis. In our experiments, we also observed an increase in Bax expression when canine osteosarcoma cells were treated with rhBMP-2. These results suggest that rhBMP-2 treatment increases the susceptibility of OST cells to death by apoptosis. Kawamura et al.
[17] observed that rhBMP-2 inhibits the growth of human multiple myeloma U266 cells by arresting the cells in the G1 phase of the cell cycle, leading to apoptosis. Combined treatment with rhBMP-2 induces cell cycle inhibitory proteins, such as p21 and p27, and other proteins associated with apoptosis, such as Bcl-xL, Bcl-2, Bax, and Bad. Thus, rhBMP-2 may be an important tool for the treatment of multiple myeloma due to its anti-tumor and bone regeneration effects. Kawamura et al. [17] also investigated the antiproliferative effect of rhBMP-2 in myeloma cells and found that BMPs inactivate the STAT3 protein, a signal transducer activated by IL-6. BMPs were also found to increase the expression of cell cycle inhibitors, leading to a cell replication blockage via pRb. Based on the studies of Hsu et al. [27], BMPs can function either as an oncogene or as a tumor suppressor, depending on the stage of disease. The effects of BMPs are cell type-specific and may vary among different tumors. BMPs have also been reported as tumor suppressors that act on the cell cycle by inducing apoptosis of abnormal cells, such as tumor cells. Hardwick et al. [24] used colorectal cancer cell lines to evaluate the role of rhBMP-2 and observed that rhBMP-2 reduced cell growth and stimulated apoptosis, reflected in high levels of phosphorylated caspase-3. In our results, we also observed a decrease in cell growth when OST cells were treated with rhBMP-2, together with an increase in caspase-3 levels. Treatment with rhBMP-2 may therefore be a new therapeutic option for canine osteosarcoma. We found that rhBMP-2 decreases the expression of the embryonic stem cell markers Nanog and Oct 3/4 under different treatment regimens; the expression of these markers is also associated with tumorigenesis in many types of cancer [28,29]. Because rhBMP-2 has the potential to inhibit the expression of markers such as Nanog and Oct 3/4, it may also exhibit anti-tumor effects in animal models in vivo.
Oct 3/4 and Nanog are important factors in the regulation of self-renewal and the pluripotency of embryonic stem cells, and studies have shown a correlation of these factors with tumorigenesis [29]. Oct 3/4 is a marker of both adult stem cells and cancer stem cells, and inhibition of this factor can inhibit the expression of proteins associated with tumorigenesis. Osteogenesis is defined by a series of events that starts with the commitment of mesenchymal cells to an osteogenic lineage. Subsequently, these cells proliferate and demonstrate upregulation of osteoblast-specific genes and mineralization. Multiple signaling pathways have been shown to participate in the differentiation of an osteoblast progenitor into a committed osteoblast [30,31]. An association between the expression of STRO-1 and the presence of cells with osteogenic potential has been demonstrated in precursor adult human bone marrow, and the STRO-1+ population of human bone marrow cells is capable of osteogenic differentiation. Interpretation of STRO-1 expression is complicated by the fact that a considerable proportion of STRO-1+ cells are not of the osteogenic lineage; the exact stage of osteogenic differentiation at which STRO-1 is expressed remains unclear, especially when working with cultured cell populations and with co-expression of STRO-1 and a panel of antibodies recognizing cell surface determinants that may be regulated during osteogenic differentiation [32]. BMP-2 alone does not induce osteogenesis in isolates of human bone marrow stromal cells, as measured by stimulation of alkaline phosphatase expression; however, BMP-2 does induce other markers associated with the differentiation of osteoblasts. This osteogenic capacity is seen in stromal cells isolated from mice, rats, rabbits, and humans, although cell behavior and the efficacy of inducers vary in a species-dependent manner [33].
BMP-2 stimulates surrounding tissues; however, more robust data are needed to demonstrate that BMP-2 also augments the osteogenic potential of implanted MSCs. In the present study, the effects of MSCs and rhBMP-2 in the Transwell co-culture model were probably shaped by the controlled environmental niche, since alterations in this microenvironment can dramatically modify cell behavior and differentiation capacity. Langenfeld et al. [34] showed that cell culture conditions and intra- and extracellular antagonist concentrations interfere with the biological activities of BMPs. Wang et al. [23] observed that rhBMP-2 inhibits the tumorigenic potential of human osteosarcoma OS99-1 cells; the inhibition was due to a decrease in the expression of proteins associated with tumorigenesis and an increase in osteosarcoma cell differentiation in response to rhBMP-2. Thus, rhBMP-2 could be considered a novel tool for the treatment of human osteosarcoma. Our study clearly showed that mesenchymal stem cells derived from canine fetal bone marrow, combined with rhBMP-2, decrease the tumorigenic potential of canine osteosarcoma cells in vitro.

Conclusions

The in vitro treatment of bone marrow cells with rhBMP-2 decreased their osteogenic potential. Thus, we suggest that the treatment conditions are favorable both for osteogenic induction and for tumor regression when associated with stem cells derived from canine bone marrow. cMSCs treated with rhBMP-2 inhibit the proliferation capacity of OST cells through mechanisms of apoptosis and p53-mediated tumor suppression. Treatment of bone marrow cells with rhBMP-2 showed high therapeutic potential, as observed by the increase in the tumor suppressor protein p53 and the pro-apoptotic proteins Bad and Bax, and by the increased activity of phosphorylated caspase-3.
Abstract

Background: The bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth. They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors. Depending on their concentration gradient, the BMPs can attract various types of cells and act as chemotactic, mitogenic, or differentiation agents. BMPs can interfere with cell proliferation and the formation of cartilage and bone. In addition, BMPs can induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts. The aim of this study was to analyze the effects of treatment with rhBMP-2 on the proliferation of canine mesenchymal stem cells (cMSCs) and the tumor suppression properties of rhBMP-2 in canine osteosarcoma (OST) cells. Osteosarcoma cell lines were isolated from biopsies and excisions of animals with osteosarcoma and were characterized by the Laboratory of Biochemistry and Biophysics, Butantan Institute. The mesenchymal stem cells were derived from the bone marrow of canine fetuses (cMSCs) and belong to the University of São Paulo, College of Veterinary Medicine (FMVZ-USP) stem cell bank. After expansion, the cells were cultured in a 12-well Transwell system; cells were treated with bone marrow mesenchymal stem cells associated with rhBMP-2. Expression of intracytoplasmic and nuclear markers such as caspase-3, Bax, Bad, Bcl-2, Ki-67, p53, Oct 3/4, Nanog, and Stro-1 was assessed by flow cytometry. Methods: Canine bone marrow mesenchymal stem cells associated with rhBMP-2 in canine osteosarcoma treatment: "in vitro" study. Results: We evaluated the regenerative potential of in vitro treatment with rhBMP-2 and found that both osteogenic induction and tumor regression occur in stem cells from canine bone marrow. rhBMP-2 inhibits the proliferation capacity of OST cells by mechanisms of apoptosis and tumor suppression mediated by p53. Conclusions: We propose that rhBMP-2 has great therapeutic potential in bone marrow cells by serving as a tumor suppressor to increase p53 and the pro-apoptotic proteins Bad and Bax, as well as by increasing the activity of phosphorylated caspase-3.
Background: Osteosarcoma is a primary bone cancer common in dogs. Osteosarcoma frequently affects the limb bones of large-sized dogs over 15 kg, at an average age of 7 years [1]. In 75% of cases, osteosarcoma affects either the appendicular skeleton [2] or the pelvic and thoracic limbs, and in the remaining 25%, it affects the axial skeleton or the flat bones [3,4]. Generally, males have a higher incidence of osteosarcoma than females [2], with the exception of the St. Bernard, Rottweiler, and Danish breeds, in which females are most affected [5,6]. Osteosarcoma cells induce platelet aggregation, which facilitates metastasis formation. Platelet aggregation and metastasis most commonly occur in the lung [7]. Platelet aggregation promotes the establishment of tumor cell aggregates, which could serve as a bridge between the tumor cells and the vascular surfaces [6]. A primary extraskeletal osteosarcoma has a metastatic rate that ranges from 60 to 85% in dogs and an average life expectancy after surgery of 26-90 days, which varies according to the location where the metastasis occurs [4]. Metastasis is the most common cause of death in dogs with osteosarcoma, and 90% of dogs either die or are euthanized due to complications associated with lung metastases. Therefore, chemotherapy is used to increase the long-term survival of dogs with osteosarcoma. To reduce the occurrence of metastasis, chemotherapy is often used in combination with surgery or radiotherapy. Specifically, either cisplatin alone or cisplatin and doxorubicin are the chemotherapeutic agents used in dogs [8,9]. Numerous studies have aimed to develop antiangiogenic therapeutic strategies, which can be combined with other treatments [10]. The bone morphogenetic proteins (BMPs) belong to a unique group of proteins that includes the growth factor TGF-β. BMPs play important roles in cell differentiation, cell proliferation, and inhibition of cell growth.
They also participate in the maturation of several cell types, depending on the microenvironment and interactions with other regulatory factors [11,12]. Depending on their concentration gradient, the BMPs can attract various types of cells [13] and act as chemotactic, mitogenic, or differentiation agents [14]. BMPs can interfere with the proliferation of cells and the formation of cartilage and bone. Finally, BMPs can also induce the differentiation of mesenchymal progenitor cells into various cell types, including chondroblasts and osteoblasts [15]. BMPs play important roles in cell differentiation, proliferation, morphogenesis, and apoptosis, and recent studies have shown that recombinant human BMP-2 (rhBMP-2) inhibits tumor formation [16-19]. However, the role of rhBMP-2 in canine osteosarcoma remains unknown. The osteoinductive capacity of rhBMP-2 has been widely studied in preclinical models and evaluated in the clinical setting [20]. Gene and cell therapy studies have shown that many bone defects can be treated by implantation of resorbable polymers with bone marrow cells transduced with an adenovirus expressing rhBMP-2 [21]. In addition, rhBMP-2 can be used as a substitute for bone grafts in spinal surgery, with results comparable to autogenous grafts [22]. Based on the studies cited above, the present work explores the proliferative effects of rhBMP-2 on canine mesenchymal stem cells (cMSCs) and osteosarcoma (OST) cells to evaluate their regenerative potential during in vitro treatment. Authors' contributions: REGR, DA, CVW collected the materials, established the cell lines and carried out the experiment. REGR, DAM and PF performed the cytometry analysis and wrote the manuscript. CEA and MAM reviewed the manuscript and the quality of the written English. All authors read and approved the final paper.
[CONTENT] Osteosarcoma | rhBMP-2 | Mesenchymal stem cell | Canine [SUMMARY]
[CONTENT] Animals | Bone Marrow Cells | Bone Morphogenetic Protein 2 | Cell Line, Tumor | Cell Proliferation | Coculture Techniques | Dogs | Gene Expression Regulation, Neoplastic | Humans | Mesenchymal Stem Cells | Osteosarcoma | Recombinant Proteins [SUMMARY]
Characteristics and place of death in home care recipients in Germany - an analysis of nationwide health insurance claims data.
36203168
Most care-dependent people live at home, where they also would prefer to die. Unfortunately, this wish is often not fulfilled. This study aims to investigate place of death of home care recipients, taking characteristics and changes in care settings into account.
BACKGROUND
We retrospectively analysed a cohort of all home-care-receiving people insured with a German statutory health insurance fund who were at least 65 years old and who died between January 2016 and June 2019. In addition to care need, duration of care, age, sex, and disease, the care setting at death and the place of death were considered. We examined the characteristics by place of care, the proportion of in-hospital deaths by care setting, and characterised the deceased cohort stratified by their actual place of death.
METHODS
Of 46,207 care-dependent people initially receiving home care, 57.5% died within 3.5 years (n = 26,590; mean age: 86.8 years; 66.6% female). More than half of those moved to another care setting before death, with long-term nursing home care (32.3%) and short-term nursing home care (11.7%) being the most frequent transitions, while 48.1% were still cared for at home. Overall, 36.9% died in hospital, and in-hospital deaths were found most often in those still receiving home care (44.7%) and those cared for in semi-residential arrangements (43.9%) at the time of death. People who died in hospital were younger (mean age: 85.5 years) and had lower care dependency (low care need: 28.2%) than in all other analysed care settings.
RESULTS
In Germany, changes in care settings before death occur often. The proportion of in-hospital deaths is particularly high in the home setting and in semi-residential arrangements. These settings should be considered in interventions aiming to decrease the number of unwanted care transitions and hospitalisations at the end of life.
CONCLUSION
[ "Aged, 80 and over", "Female", "Germany", "Home Care Services", "Hospital Mortality", "Humans", "Insurance, Health", "Male", "Retrospective Studies", "Terminal Care" ]
9535886
Background
Due to demographic changes the world’s population is ageing and more and more people will die in old age, often affected by multiple chronic diseases and with complex care needs [1]. The official German care statistics from 2019 reported a further significant increase in care dependency, to a total of 4.1 million people [2, 3]. Four fifths of them were cared for at home. The home-care-receiving group was younger than nursing home residents, and the proportion of women (60%) was lower than in nursing homes [4]. Accordingly, end-of-life (EOL) care is important in this population and is increasingly being researched. Regardless of the care setting and the country, the majority of people wish to die at home [5–8], even if one should differentiate between ideal and actual preferred places of care as well as places of death, both of which can change over time [9]. A cross-national comparison of places of death in 21 countries among people aged over 65 years from 2013 showed a median of 54% of deaths in hospital and 18% in residential aged care facilities [10], both with large differences between studies as well as between countries. Some countries – such as Germany – do not routinely compile data from death registrations including information on care dependency, which makes it difficult to obtain representative data regarding place of death. First data are available on the distribution of places of death in Germany, showing an overall trend in places of death [11]. However, these data do not contain information on care dependency, and little is known about care transitions at the EOL in the group of older home-care-receiving people. Can the people who are cared for at home also die at home, as most would prefer? Therefore, the aim of this explorative study was to investigate the place of death of home care recipients in Germany, taking characteristics and changes in care settings into account.
Results
The entire cohort encompassed 46,207 care-dependent people who were cared for at home on January 1, 2016. The target population included 26,590 people who had died by June 2019 (57.5%).

Baseline characteristics

Table 1 shows the characteristics of all deceased persons and the comparison between males and females. Two thirds (66.6%) of the deceased cohort were female; more than 80% had a medium (53.3%) or high (29.6%) care need at death. The mean duration of care dependency at death was 3.7 years (SD: 2.8). About two thirds had a dementia diagnosis (64.1%) and one third a cancer diagnosis (36.1%). Most were cared for at home at the time of death (48.1%), followed by long-term nursing home care (32.3%) and short-term nursing home care (11.7%).

Table 1: Characteristics of deceased cohort (2016 to 2019)

| Characteristic | Total (n = 26,590) | Female (n = 17,701) | Male (n = 8,889) |
| Age at death in years, mean (SD) | 86.8 (7.4) | 87.5 (7.4) | 85.2 (7.2) |
| 65–74 | 1,714 (6.5%) | 980 (5.5%) | 734 (8.3%) |
| 75–84 | 7,671 (28.9%) | 4,601 (26.0%) | 3,070 (34.5%) |
| 85–94 | 13,517 (50.8%) | 9,164 (51.8%) | 4,353 (49.0%) |
| 95+ | 3,688 (13.9%) | 2,956 (16.7%) | 732 (8.2%) |
| Care need* at death: low (level 0/1; grade 1/2) | 4,558 (17.1%) | 3,174 (17.9%) | 1,384 (15.6%) |
| Care need* at death: medium (level 2; grade 3/4) | 14,173 (53.3%) | 9,540 (53.9%) | 4,633 (52.1%) |
| Care need* at death: high (level 3; grade 5) | 7,859 (29.6%) | 4,987 (28.2%) | 2,872 (32.3%) |
| Dementia diagnosis† | 17,049 (64.1%) | 11,461 (64.7%) | 5,588 (62.9%) |
| Cancer diagnosis† | 9,603 (36.1%) | 5,558 (31.4%) | 4,045 (45.5%) |
| Duration of care dependency at death in years, mean (SD) | 3.7 (2.8) | 3.9 (2.8) | 3.4 (2.7) |
| Setting at death: home | 12,795 (48.1%) | 8,131 (45.9%) | 4,664 (52.5%) |
| Setting at death: long-term care in nursing home | 8,599 (32.3%) | 6,109 (34.5%) | 2,490 (28.0%) |
| Setting at death: short-term care in nursing home | 3,103 (11.7%) | 2,016 (11.4%) | 1,087 (12.2%) |
| Setting at death: semi-residential arrangement | 1,156 (4.4%) | 709 (4.0%) | 447 (5.0%) |
| Setting at death: shared housing arrangement | 937 (3.5%) | 736 (4.2%) | 201 (2.3%) |

* The three German care levels were modified into 5 care grades at 1st January 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

Females had a higher mean age at death (87.5 years) than males (85.2 years). Men were more likely to have a cancer diagnosis and had a shorter period of care dependency than women. Only small differences were found regarding care needs and the prevalence of dementia. More females received long-term care in nursing homes (34.5% vs. 28.0% of males), whereas more males were cared for in the home setting (52.5% vs. 45.9% of females).

In-hospital death

Table 2 shows the prevalence of in-hospital deaths in total and stratified by care setting.

Table 2: Prevalence of in-hospital death by care setting at the time of death

| | Total (n = 26,590) | Home setting (n = 12,795) | Nursing home, long-term care (n = 8,599) | Nursing home, short-term care (n = 3,103) | Semi-residential arrangement (n = 1,156) | Shared housing arrangement (n = 937) |
| Male | 39.1% | 46.3% | 28.7% | 30.5% | 43.2% | 34.8% |
| Female | 35.8% | 43.8% | 26.8% | 32.8% | 44.3% | 23.2% |
| Age 65–74 | 47.9% | 55.2% | 29.3% | 42.0% | 56.4% | 45.2% |
| Age 75–84 | 42.6% | 51.5% | 30.9% | 35.8% | 44.8% | 31.2% |
| Age 85–94 | 35.2% | 42.3% | 27.3% | 31.1% | 44.3% | 22.0% |
| Age 95+ | 26.2% | 31.6% | 21.1% | 23.1% | 29.9% | 13.1% |
| Low care need* | 60.7% | 71.1% | 39.6% | 45.4% | 75.8% | 50.0% |
| Medium care need* | 38.4% | 47.2% | 29.2% | 32.5% | 53.5% | 32.6% |
| High care need* | 20.3% | 22.0% | 17.5% | 20.4% | 22.8% | 17.1% |
| Dementia† | 32.6% | 39.3% | 26.2% | 29.8% | 40.6% | 23.4% |
| Cancer† | 36.9% | 45.8% | 26.2% | 29.2% | 44.5% | 27.1% |
| Total | 36.9% | 44.7% | 27.3% | 32.0% | 43.9% | 25.7% |

* The three German care levels were modified into 5 care grades at 1st January 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

In total, 36.9% of all deceased died in hospital (n = 9,811). The proportion of in-hospital deaths was slightly higher in men (39.1%) than in women (35.8%) and decreased with higher age (47.9% among 65–74-year-olds versus 26.2% among persons aged 95 or older). Another trend can be seen regarding care need: more than 6 out of 10 persons with the lowest care need died in hospital, compared with one fifth of those with the highest care need. No differences were found with respect to dementia and cancer. The prevalence of in-hospital death also varied by care setting: whereas less than 28% of those receiving long-term nursing home care died in hospital, the proportion was highest in the home care setting (44.7%) and in semi-residential arrangements (43.9%). The sex difference was largest in shared housing arrangements (23.2% in-hospital deaths among females versus 34.8% among males). The tendencies regarding age and care need described above were found in all care settings. Among people with a dementia diagnosis, the proportion of in-hospital deaths varied widely by care setting, from lowest in shared housing arrangements (23.4%) to highest in semi-residential arrangements (40.6%).

Place of death

Looking at the actual place of death (Table 3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and long-term nursing home care (23.5%). Nearly 13% died either in short-term care (7.9%), in a shared housing arrangement (2.6%) or in a semi-residential arrangement (2.4%).

Table 3: Characteristics of deceased cohort by place of death (2016–2019)

| | Hospital (n = 9,811; 36.9%) | Home setting (n = 7,077; 26.6%) | Nursing home, long-term care (n = 6,247; 23.5%) | Nursing home, short-term care (n = 2,110; 7.9%) | Semi-residential arrangement (n = 649; 2.4%) | Shared housing arrangement (n = 696; 2.6%) |
| Male | 35.4% | 35.4% | 28.4% | 35.8% | 39.1% | 18.8% |
| Female | 64.6% | 64.6% | 71.6% | 64.2% | 60.9% | 81.2% |
| Age at death in years, mean (SD) | 85.5 (7.4) | 87.3 (7.7) | 88.0 (6.9) | 87.2 (7.2) | 86.4 (7.1) | 86.8 (7.0) |
| Age 65–74 | 8.4% | 6.4% | 4.1% | 5.3% | 5.2% | 4.9% |
| Age 75–84 | 33.3% | 26.3% | 24.8% | 26.8% | 35.1% | 29.7% |
| Age 85–94 | 48.5% | 51.1% | 53.6% | 52.6% | 47.0% | 54.9% |
| Age 95+ | 9.8% | 16.2% | 17.5% | 15.3% | 12.6% | 10.5% |
| Low care need* | 28.2% | 11.4% | 9.8% | 15.6% | 4.6% | 2.9% |
| Medium care need* | 55.5% | 45.4% | 60.6% | 55.4% | 41.6% | 42.2% |
| High care need* | 16.3% | 43.3% | 29.7% | 29.1% | 53.8% | 54.9% |
| Dementia† | 56.6% | 60.1% | 75.7% | 64.6% | 81.2% | 88.4% |
| Cancer† | 36.2% | 36.4% | 36.1% | 40.9% | 29.6% | 24.7% |

* The three German care levels were modified into 5 care grades at 1st January 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

Overall, 18.8% of those dying in shared housing arrangements were male, versus 39.1% in semi-residential arrangements. Those who died in hospital or in semi-residential arrangements were the youngest (mean age 85.5 and 86.4 years, respectively), whereas long-term nursing home care was the place of death with the oldest people (mean age 88.0 years). The prevalence of dementia varied widely between places of death, from highest in shared housing arrangements (88.4%) to lowest in the home setting (60.1%) and hospital (56.6%). The highest cancer prevalence was found in the short-term care setting (40.9%) and the lowest in shared housing arrangements (24.7%).
Conclusion
In a large cohort of persons initially cared for at home, more than half moved to another care setting before death, most often to long-term nursing home care. Overall, about 4 out of 10 persons died in hospital, with the highest proportions among those still receiving home care and those cared for in semi-residential arrangements. Thus, there are still many unwanted and potentially preventable care transitions at the EOL in Germany. Interventions are needed to improve EOL care in both professional and informal home care settings, including semi-residential arrangements. For example, advance care planning (ACP) interventions have already proved effective, as have interventions to support informal caregivers. Moreover, outpatient palliative care should be improved. This means, inter alia, extending ACP offers to the home setting in Germany and providing better access to outpatient hospice and palliative care services.
[ "Background", "Method", "Database, study population and outcome", "Statistical analysis", "Baseline characteristics", "In-hospital death", "Place of death", "Findings and comparison with the literature", "Actual place of death", "Hospital death by care setting", "Moving to other care settings before death", "Strengths and Limitations" ]
[ "Due to demographic changes the world’s population is ageing and more and more people will die in old age, often affected by multiple chronic diseases and with complex care needs [1]. The official German care statistics from 2019 reported a further significant increase of care dependency to a total of 4.1million people [2, 3]. Four fifths of them were cared for at home. The home care receiving group was younger than the nursing home residents and the proportion of women were with 60% lower than in nursing homes [4]. Accordingly, end-of-life (EOL) care is important in this population and is increasingly being researched. Regardless of the care-setting and the country, the majority of people wish to die at home [5–8], even if one should differentiate between ideal and actual preferred places of care as well as places of death and that both could change over time [9]. A cross-national comparison of places of death including 21 countries of people aged over 65 years from 2013 showed a median of 54% of death in hospital and 18% in residential aged care facilities [10], both with large differences between studies as well as between countries.\nSome countries – such as Germany – do not routinely compile data from death registrations including information on care dependency, which makes it difficult to obtain representative data regarding place of death. First data are available for the distribution of places of death in Germany, showing an overall trend in places of death [11]. However, these data do not contain information on care dependency and little is known about care transitions at the EOL in the group of older home care receiving people. 
Can the people who are cared for at home also die at home, as most would prefer?\nTherefore, aim of this explorative study was to investigate place of death of home care recipients in Germany, taking characteristics and changes in care settings into account.", "This retrospective study is part of the STudy on ADvance care PLANning (STADPLAN), funded by the German Federal Ministry of Education and Research (BMBF grant 01GL1707A-D). STADPLAN aims to evaluate the effect of an adapted advance care planning (ACP) program on patients’ activation regarding healthcare issues in care dependent community-dwelling older persons [12].\nDatabase, study population and outcome Anonymised data for this study were obtained from the DAK-Gesundheit, a large statutory health and long-term care insurance (LTCI) fund representing approximately 5.6million members (corresponding to 6.7% of the German population) [13]. The dataset included all insured persons being at least 65 years of age and who were cared for in their home setting on January 1, 2016 and contained different datasets that were merged via a unique identifier. For this study, all persons that died up to June 30, 2019 were included. Data on care dependency were obtained from the LTCI [14] providing data on services received with start and end dates and the person’s care need. At baseline all persons included received services in their own home setting (1). According to the received services during follow-up, we differentiate between four further care settings. We included shared housing arrangements (2) where a small group of people is living in private rooms, while sharing a common space, domestic support, and nursing care [15]. Semi-residential arrangements (3) provide a temporary care support during day or night in an institution [16]. 
Full residential care refers to either short-term stay (4, covered by the LTCI for a maximum of 58 days per year) or long-term stay (5) in a nursing home [17].\nUp to 2016, care-dependent persons were assigned to a level of care reflecting the time needed for daily help ranging between 1 and 3. These levels were modified into 5 care grades at January 1, 2017 reflecting a more comprehensive view on the person’s independence and competences considering physical, cognitive or psychological impairments [17]. Different claims data sets also contained information on demographics, date of death, outpatient care, and hospitalisations. Hospital data hold information on dates of admission and discharge, the respective diagnoses and diagnostic as well as therapeutic procedures. Outpatient data contained diagnoses including information on the level of diagnosis certainty (confirmed, suspected, ruled out and status post), treatments and procedures. All diagnoses were coded using the German modification of the international classification of diseases, 10th revision (ICD-10-GM).\nOur first outcome of interest was the care setting at time of death differentiated by the above mentioned 5 groups. The second variable of interest was the actual place of death, which in addition to those care settings further includes hospitals. We also assessed the proportion of persons that died in hospital, defined as being in hospital on the day of death. For providing baseline characteristics we assessed the mean age at death (categorised into four groups 65–74, 75–84, 85–94 and ≥ 95 years), sex and care need at death. We combined the old levels and new grades to 3 groups of care need: low (care level: 0/1, grade: 1/2), medium (care level: 2; grade: 3/4), high (care level: 3; grade: 5). Duration of care dependency was calculated as the time in years between the start of receiving care in the home setting (latest January 1, 2016) and the date of death. 
Furthermore, we assessed confirmed outpatient diagnoses of cancer (ICD-10-GM: C00-D48 [18]) and dementia (ICD-10-GM: F00, F01, F02.0, F02.3, F03, G30, G31.0, G31.1, G31.82, G31.9, R54 [19]) in the quarter of death and the three quarters before death.\nAnonymised data for this study were obtained from the DAK-Gesundheit, a large statutory health and long-term care insurance (LTCI) fund representing approximately 5.6million members (corresponding to 6.7% of the German population) [13]. The dataset included all insured persons being at least 65 years of age and who were cared for in their home setting on January 1, 2016 and contained different datasets that were merged via a unique identifier. For this study, all persons that died up to June 30, 2019 were included. Data on care dependency were obtained from the LTCI [14] providing data on services received with start and end dates and the person’s care need. At baseline all persons included received services in their own home setting (1). According to the received services during follow-up, we differentiate between four further care settings. We included shared housing arrangements (2) where a small group of people is living in private rooms, while sharing a common space, domestic support, and nursing care [15]. Semi-residential arrangements (3) provide a temporary care support during day or night in an institution [16]. Full residential care refers to either short-term stay (4, covered by the LTCI for a maximum of 58 days per year) or long-term stay (5) in a nursing home [17].\nUp to 2016, care-dependent persons were assigned to a level of care reflecting the time needed for daily help ranging between 1 and 3. These levels were modified into 5 care grades at January 1, 2017 reflecting a more comprehensive view on the person’s independence and competences considering physical, cognitive or psychological impairments [17]. 
Different claims data sets also contained information on demographics, date of death, outpatient care, and hospitalisations. Hospital data hold information on dates of admission and discharge, the respective diagnoses and diagnostic as well as therapeutic procedures. Outpatient data contained diagnoses including information on the level of diagnosis certainty (confirmed, suspected, ruled out and status post), treatments and procedures. All diagnoses were coded using the German modification of the international classification of diseases, 10th revision (ICD-10-GM).\nOur first outcome of interest was the care setting at time of death differentiated by the above mentioned 5 groups. The second variable of interest was the actual place of death, which in addition to those care settings further includes hospitals. We also assessed the proportion of persons that died in hospital, defined as being in hospital on the day of death. For providing baseline characteristics we assessed the mean age at death (categorised into four groups 65–74, 75–84, 85–94 and ≥ 95 years), sex and care need at death. We combined the old levels and new grades to 3 groups of care need: low (care level: 0/1, grade: 1/2), medium (care level: 2; grade: 3/4), high (care level: 3; grade: 5). Duration of care dependency was calculated as the time in years between the start of receiving care in the home setting (latest January 1, 2016) and the date of death. Furthermore, we assessed confirmed outpatient diagnoses of cancer (ICD-10-GM: C00-D48 [18]) and dementia (ICD-10-GM: F00, F01, F02.0, F02.3, F03, G30, G31.0, G31.1, G31.82, G31.9, R54 [19]) in the quarter of death and the three quarters before death.\nStatistical analysis Firstly, the study population was described by age, care need, duration of care and care setting at death as well as having a cancer or dementia diagnosis, respectively. These measures were calculated overall and stratified by sex. 
Secondly, we examined the proportion dying in hospital by their care setting. Finally, we characterised the deceased cohort stratified by their actual place of death (hospital, home setting, shared living arrangement, semi-residential arrangement, short-term care or long-term care in nursing home). Descriptive measures were computed.\nWe performed all analyses using the SAS programme for Windows, version 9.4 (SAS Institute Inc., Cary, NC, United States).\nFirstly, the study population was described by age, care need, duration of care and care setting at death as well as having a cancer or dementia diagnosis, respectively. These measures were calculated overall and stratified by sex. Secondly, we examined the proportion dying in hospital by their care setting. Finally, we characterised the deceased cohort stratified by their actual place of death (hospital, home setting, shared living arrangement, semi-residential arrangement, short-term care or long-term care in nursing home). Descriptive measures were computed.\nWe performed all analyses using the SAS programme for Windows, version 9.4 (SAS Institute Inc., Cary, NC, United States).", "Anonymised data for this study were obtained from the DAK-Gesundheit, a large statutory health and long-term care insurance (LTCI) fund representing approximately 5.6million members (corresponding to 6.7% of the German population) [13]. The dataset included all insured persons being at least 65 years of age and who were cared for in their home setting on January 1, 2016 and contained different datasets that were merged via a unique identifier. For this study, all persons that died up to June 30, 2019 were included. Data on care dependency were obtained from the LTCI [14] providing data on services received with start and end dates and the person’s care need. At baseline all persons included received services in their own home setting (1). 
According to the received services during follow-up, we differentiate between four further care settings. We included shared housing arrangements (2) where a small group of people is living in private rooms, while sharing a common space, domestic support, and nursing care [15]. Semi-residential arrangements (3) provide a temporary care support during day or night in an institution [16]. Full residential care refers to either short-term stay (4, covered by the LTCI for a maximum of 58 days per year) or long-term stay (5) in a nursing home [17].\nUp to 2016, care-dependent persons were assigned to a level of care reflecting the time needed for daily help ranging between 1 and 3. These levels were modified into 5 care grades at January 1, 2017 reflecting a more comprehensive view on the person’s independence and competences considering physical, cognitive or psychological impairments [17]. Different claims data sets also contained information on demographics, date of death, outpatient care, and hospitalisations. Hospital data hold information on dates of admission and discharge, the respective diagnoses and diagnostic as well as therapeutic procedures. Outpatient data contained diagnoses including information on the level of diagnosis certainty (confirmed, suspected, ruled out and status post), treatments and procedures. All diagnoses were coded using the German modification of the international classification of diseases, 10th revision (ICD-10-GM).\nOur first outcome of interest was the care setting at time of death differentiated by the above mentioned 5 groups. The second variable of interest was the actual place of death, which in addition to those care settings further includes hospitals. We also assessed the proportion of persons that died in hospital, defined as being in hospital on the day of death. 
For providing baseline characteristics we assessed the mean age at death (categorised into four groups 65–74, 75–84, 85–94 and ≥ 95 years), sex and care need at death. We combined the old levels and new grades to 3 groups of care need: low (care level: 0/1, grade: 1/2), medium (care level: 2; grade: 3/4), high (care level: 3; grade: 5). Duration of care dependency was calculated as the time in years between the start of receiving care in the home setting (latest January 1, 2016) and the date of death. Furthermore, we assessed confirmed outpatient diagnoses of cancer (ICD-10-GM: C00-D48 [18]) and dementia (ICD-10-GM: F00, F01, F02.0, F02.3, F03, G30, G31.0, G31.1, G31.82, G31.9, R54 [19]) in the quarter of death and the three quarters before death.", "Firstly, the study population was described by age, care need, duration of care and care setting at death as well as having a cancer or dementia diagnosis, respectively. These measures were calculated overall and stratified by sex. Secondly, we examined the proportion dying in hospital by their care setting. Finally, we characterised the deceased cohort stratified by their actual place of death (hospital, home setting, shared living arrangement, semi-residential arrangement, short-term care or long-term care in nursing home). Descriptive measures were computed.\nWe performed all analyses using the SAS programme for Windows, version 9.4 (SAS Institute Inc., Cary, NC, United States).", "Table1 shows the characteristics of all deceased persons plus the comparison between males and females. Two thirds (66.6%) of the deceased cohort were female, more than 80% of all had a medium care need (53.3%) or even a high care need (29.6%) at death. The mean duration of care dependency was 3.7 years (SD: 2.8) at time of death. About two thirds had a dementia diagnosis (64.1%) and one third had a diagnosis for cancer (36.1%). 
Most were cared for at home at the time of death (48.1%), followed by long-term nursing home care (32.3%) and short-term nursing home care (11.7%).

Table 1: Characteristics of deceased cohort (2016 to 2019)

| Characteristic | Total (n = 26,590) | Female (n = 17,701) | Male (n = 8,889) |
|---|---|---|---|
| Age at death, mean (SD) | 86.8 (7.4) | 87.5 (7.4) | 85.2 (7.2) |
| Age 65–74 years | 1,714 (6.5%) | 980 (5.5%) | 734 (8.3%) |
| Age 75–84 years | 7,671 (28.9%) | 4,601 (26.0%) | 3,070 (34.5%) |
| Age 85–94 years | 13,517 (50.8%) | 9,164 (51.8%) | 4,353 (49.0%) |
| Age 95+ years | 3,688 (13.9%) | 2,956 (16.7%) | 732 (8.2%) |
| Care need* low (level 0/1, grade 1/2) | 4,558 (17.1%) | 3,174 (17.9%) | 1,384 (15.6%) |
| Care need* medium (level 2, grade 3/4) | 14,173 (53.3%) | 9,540 (53.9%) | 4,633 (52.1%) |
| Care need* high (level 3, grade 5) | 7,859 (29.6%) | 4,987 (28.2%) | 2,872 (32.3%) |
| Dementia† | 17,049 (64.1%) | 11,461 (64.7%) | 5,588 (62.9%) |
| Cancer† | 9,603 (36.1%) | 5,558 (31.4%) | 4,045 (45.5%) |
| Duration of care dependency at death in years, mean (SD) | 3.7 (2.8) | 3.9 (2.8) | 3.4 (2.7) |
| Setting at death: home | 12,795 (48.1%) | 8,131 (45.9%) | 4,664 (52.5%) |
| Setting at death: long-term care in nursing home | 8,599 (32.3%) | 6,109 (34.5%) | 2,490 (28.0%) |
| Setting at death: short-term care in nursing home | 3,103 (11.7%) | 2,016 (11.4%) | 1,087 (12.2%) |
| Setting at death: semi-residential arrangement | 1,156 (4.4%) | 709 (4.0%) | 447 (5.0%) |
| Setting at death: shared housing arrangement | 937 (3.5%) | 736 (4.2%) | 201 (2.3%) |

* The three German care levels were modified into 5 care grades on January 1, 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

Females had a higher mean age at death (87.5 years) than males (85.2 years). Men were more likely to have a cancer diagnosis and had a shorter period of care dependency than women. Only small differences were found regarding care needs and the prevalence of dementia. More females received long-term care in nursing homes (34.5% vs. 28.0% of males), whereas more males were cared for in the home setting (52.5% vs. 45.9% of females).", "Table 2 shows the prevalence of in-hospital deaths in total and stratified by care setting.

Table 2: Prevalence of in-hospital death by care setting

| Characteristic | Total (n = 26,590) | Home setting (n = 12,795) | Nursing home, long-term care (n = 8,599) | Nursing home, short-term care (n = 3,103) | Semi-residential arrangement (n = 1,156) | Shared housing arrangement (n = 937) |
|---|---|---|---|---|---|---|
| Male | 39.1% | 46.3% | 28.7% | 30.5% | 43.2% | 34.8% |
| Female | 35.8% | 43.8% | 26.8% | 32.8% | 44.3% | 23.2% |
| Age 65–74 years | 47.9% | 55.2% | 29.3% | 42.0% | 56.4% | 45.2% |
| Age 75–84 years | 42.6% | 51.5% | 30.9% | 35.8% | 44.8% | 31.2% |
| Age 85–94 years | 35.2% | 42.3% | 27.3% | 31.1% | 44.3% | 22.0% |
| Age 95+ years | 26.2% | 31.6% | 21.1% | 23.1% | 29.9% | 13.1% |
| Care need* low (level 0/1, grade 1/2) | 60.7% | 71.1% | 39.6% | 45.4% | 75.8% | 50.0% |
| Care need* medium (level 2, grade 3/4) | 38.4% | 47.2% | 29.2% | 32.5% | 53.5% | 32.6% |
| Care need* high (level 3, grade 5) | 20.3% | 22.0% | 17.5% | 20.4% | 22.8% | 17.1% |
| Dementia† | 32.6% | 39.3% | 26.2% | 29.8% | 40.6% | 23.4% |
| Cancer† | 36.9% | 45.8% | 26.2% | 29.2% | 44.5% | 27.1% |
| Total | 36.9% | 44.7% | 27.3% | 32.0% | 43.9% | 25.7% |

* The three German care levels were modified into 5 care grades on January 1, 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

In total, 36.9% of all deceased died in hospital (n = 9,811). The proportion of in-hospital deaths was slightly higher in men (39.1%) than in women (35.8%). With higher age, the proportion of in-hospital deaths decreased (47.9% in those aged 65–74 years versus 26.2% in those aged 95 or older). Another trend can be seen regarding care need: more than 6 out of 10 persons with the lowest care need died in hospital, compared to one fifth in the group with the highest care need. Differences with respect to dementia and cancer were not found.
The prevalence of in-hospital death also varies by care setting. Whereas less than 28% of those cared for in the nursing home (long-term) died in hospital, this proportion was highest in the home care setting and the semi-residential arrangement setting, at 44% and 45%, respectively. The sex difference was largest in the shared housing arrangement setting (23.2% in-hospital deaths among women versus 34.8% among men). The above-mentioned tendencies regarding age and care need were found in all care settings. Among people with a dementia diagnosis, the proportion of in-hospital deaths varies widely by care setting: it is lowest in the shared housing arrangement setting (23.4%) and highest in the semi-residential arrangement setting (40.6%).", "When having a closer look at the actual place of death (Table3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and the nursing home (long-term) (23.5%). 
Nearly 13% died either in short-term care (7.9%), in a shared housing arrangement (2.6%) or in a semi-residential arrangement (2.4%).

Table 3: Characteristics of deceased cohort by place of death (2016–2019)

| Characteristic | Hospital (n = 9,811; 36.9%) | Home setting (n = 7,077; 26.6%) | Nursing home, long-term care (n = 6,247; 23.5%) | Nursing home, short-term care (n = 2,110; 7.9%) | Semi-residential arrangement (n = 649; 2.4%) | Shared housing arrangement (n = 696; 2.6%) |
|---|---|---|---|---|---|---|
| Male | 35.4% | 35.4% | 28.4% | 35.8% | 39.1% | 18.8% |
| Female | 64.6% | 64.6% | 71.6% | 64.2% | 60.9% | 81.2% |
| Age at death, mean (SD) | 85.5 (7.4) | 87.3 (7.7) | 88.0 (6.9) | 87.2 (7.2) | 86.4 (7.1) | 86.8 (7.0) |
| Age 65–74 years | 8.4% | 6.4% | 4.1% | 5.3% | 5.2% | 4.9% |
| Age 75–84 years | 33.3% | 26.3% | 24.8% | 26.8% | 35.1% | 29.7% |
| Age 85–94 years | 48.5% | 51.1% | 53.6% | 52.6% | 47.0% | 54.9% |
| Age 95+ years | 9.8% | 16.2% | 17.5% | 15.3% | 12.6% | 10.5% |
| Care need* low (level 0/1, grade 1/2) | 28.2% | 11.4% | 9.8% | 15.6% | 4.6% | 2.9% |
| Care need* medium (level 2, grade 3/4) | 55.5% | 45.4% | 60.6% | 55.4% | 41.6% | 42.2% |
| Care need* high (level 3, grade 5) | 16.3% | 43.3% | 29.7% | 29.1% | 53.8% | 54.9% |
| Dementia† | 56.6% | 60.1% | 75.7% | 64.6% | 81.2% | 88.4% |
| Cancer† | 36.2% | 36.4% | 36.1% | 40.9% | 29.6% | 24.7% |

* The three German care levels were modified into 5 care grades on January 1, 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

Overall, 18.8% of those dying in shared housing arrangements were male, versus 39.1% in semi-residential arrangements. Deceased persons in hospital and in semi-residential arrangements were the youngest (mean age 85.5 and 86.4 years), whereas the oldest persons died in nursing homes (long-term care; mean 88.0 years). The prevalence of dementia varies widely between places of death, from highest in shared housing arrangements (88.4%) to lowest in the home setting (60.1%) and hospital (56.6%). 
The highest cancer prevalence was found in the short-term care setting (40.9%) and the lowest in shared housing arrangements (24.7%) as place of death.", "In care-dependent people initially receiving home care, 57.5% died within 3.5 years. Overall, 36.9% died in hospital, and in-hospital deaths were found most often in those still receiving home care or care in semi-residential arrangements at the time of death. People who died in hospital were younger and had lower care dependency compared to all other analysed care settings. More than half of home care receiving people moved to another (care) setting before death (44.0% to long- or short-term care in a nursing home, 4.4% to semi-residential arrangements, and 3.5% to shared housing arrangements).", "Nearly 37% of all deaths took place in hospitals, which is the most common place of death in our care receiving cohort as well as in the total German population [20–22] and in those of most other countries [23–25].
After the hospital, the own home was the second most common place of death with 26.6%, which was also found by Dasch et al. (21.3%) for 2017, although their representative German cohort of all persons dying had a mean age of 77.6 years, almost 10 years younger than our care receiving cohort [22]. Moreover, Herbst et al. analysed two random samples of German death certificates from 2007 and 2017. They showed that while in 2007 home was also the second most frequent place of death (26.1%), it slid to third place in 2017 with 19.8% [20]. Taking the results of published studies together, the likelihood of dying at home has been decreasing in recent years [20, 22, 26]. A review from 1998 already investigated the relation between patient characteristics and home deaths [27]. It found that improved access to home care is likely to increase home deaths for older people [27]. 
Especially palliative home care and hospice care are associated with fewer hospitalisations and more home deaths [28]. However, there are several more potential factors influencing death at home, for example patients' functional status, their preferences, living with relatives, and extended family support [29].
The present cohort showed that 23.5% died in nursing homes, which is slightly higher than in the younger representative sample by Dasch et al. (20.4%) but lower than in the random sample of Herbst et al. with 27.1% [20, 30]. The older people are, the higher the probability of dying in a nursing home instead of a hospital [31]. In our study, too, nursing home residents receiving long-term care were the oldest group.
Shared housing arrangements were not considered in previous studies investigating place of death, even though this setting can be seen as an increasingly used, familiar care alternative to long-term nursing homes in Germany [15, 33, 34]. To the authors' knowledge, there are also no data on the frequency of transitions to short-term care before death or on the nursing home as place of death for short-term care recipients. Studies based on German death certificate data cannot include care information, because it is not routinely recorded in these documents. Our study shows that both settings are of relevance and should be included in future studies on EOL care. Quantification and a deeper understanding of all possible care transitions at the end of life are important to estimate the relevance of and trends in place of death from a public health perspective. Even though the hospice as place of death is still rare and unfortunately not covered in the present data, it has been shown to increase in Germany in recent years [22].", "As in the present cohort, the proportion of in-hospital deaths in older, care-receiving people seems to be smaller than in the general population [25]. 
The availability of formal versus informal care seems to influence hospital death rates. In the Netherlands, older people receiving informal care were more likely to die in hospital than people receiving formal home care or institutional care [35]. Another Dutch study showed that, for people who received only informal care in their last three months of life, the odds of dying in hospital were much higher than for those who received a combination of formal and informal home care [36]. In the present cohort, the proportion of in-hospital deaths was also highest in people receiving home care (44.7%), where the largest proportion of informal care can certainly be found. The proportion of in-hospital deaths was lower in our group of nursing home residents receiving long-term care. Although this proportion is in line with previous German analyses [37, 38], it is somewhat higher in international comparison [39]. Nevertheless, the proportion of in-hospital deaths among nursing-home residents varies markedly internationally, even within countries, with an overall median of 22.6% [39].
There are other possible factors influencing the risk of dying in hospital for elderly, care-receiving people, such as care level, age and sex. In our cohort, the younger the people and the lower the care level, the higher the proportion of in-hospital deaths. The same was found by previous studies [10, 39, 40]. This could possibly partly explain why men in our cohort were slightly more likely to die in hospital than women. However, there is increasing evidence of “real” sex-specific differences in burdensome interventions like transitions of care or invasive procedures at the EOL, and future studies should put more emphasis on sex-specific analyses [41].", "Our result regarding the frequent transition from home to other care settings before death indicates that home care cannot always be maintained until the EOL, although most patients wish it [5–8]. 
International studies have already shown that the frequency of care setting transitions among elderly people increases near death [42]. For example, in the Netherlands nearly half of home-living persons aged 55–85 years were transferred between care settings one or more times in the last 3 months of life, mostly from home to hospital [35]. Care setting transitions at the EOL are seen as increasingly problematic, also because of potential medication and care errors, disrupted care teams, and a loss of care information [20, 27, 31], even if these transitions have the potential to be a relief for family caregivers [23]. Looking at the German situation, it should also be mentioned that structures in outpatient palliative care have been introduced only within the last 15 years, and the growing number of general and specialist outpatient palliative care services (in German: AAPV and SAPV) has provided more possibilities for outpatient palliative care in recent years [43], which can strengthen the quality of care at home at the patient's EOL and thereby also prevent unwanted care transitions. However, care at home should not automatically be equated with the best care [9, 44], because institutionalised palliative care such as hospice care and in-hospital palliative care can improve the quality of dying and death [45, 46]. Overall, care decisions should always be weighed individually to enable appropriate and timely care setting transitions in accordance with individualised EOL care needs [47].
Different indicators have already been reported to be associated with a risk of care transitions. In the UK, people with severe cognitive impairment were the most likely group to move to other care settings [42]. Our results also show that people with dementia more often died in a care setting other than home. 
Another German study analysed predictors of nursing home admission in care-dependent people based on longitudinal secondary data and also identified dementia, cognitive impairment, cancer of the brain and higher age as risk factors, which is in line with our results [48].", "The strengths of this study are its real-world character and its large sample size, which allowed us to stratify the analyses by sex, age and other variables. Furthermore, we had valid information on care setting pathways and place of death. Just like the strengths, the limitations are owed to the nature of the administrative data from LTCI funds. The data were not captured for the purpose of scientific research, and further information that could influence placement and dying in different care settings (clinical data, socioeconomic status, marital status or family support, respectively) was not available. The same applies to further information related to the specific care recipient's institution, e.g. staffing ratios or the nursing home's ownership. For those living in semi-residential arrangements, we were not able to differentiate whether care recipients died at home or during day or night care. For shared housing arrangements, underreporting is possible, since in this case care providers might only invoice other benefits, for instance for nursing home care. Besides, our data did not contain palliative care units and hospices as places of death. However, their joint proportion of places of death in Germany is 11% [22]. Another limitation relates to the fact that data for this study were obtained from only one health insurance fund. Since the DAK-Gesundheit insures more women and a population with a generally poorer health status [49], our results cannot be extrapolated to the entire care receiving population in Germany. Nevertheless, with 5.6 million insured persons, the DAK-Gesundheit is one of Germany's largest health insurance funds [13]." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Method", "Database, study population and outcome", "Statistical analysis", "Results", "Baseline characteristics", "In-hospital death", "Place of death", "Discussion", "Findings and comparison with the literature", "Actual place of death", "Hospital death by care setting", "Moving to other care settings before death", "Strengths and Limitations", "Conclusion" ]
[ "Due to demographic changes the world’s population is ageing and more and more people will die in old age, often affected by multiple chronic diseases and with complex care needs [1]. The official German care statistics from 2019 reported a further significant increase in care dependency, to a total of 4.1 million people [2, 3]. Four fifths of them were cared for at home. The home care receiving group was younger than the nursing home residents, and its proportion of women (60%) was lower than in nursing homes [4]. Accordingly, end-of-life (EOL) care is important in this population and is increasingly being researched. Regardless of care setting and country, the majority of people wish to die at home [5–8], even though one should differentiate between ideal and actually preferred places of care as well as places of death, both of which can change over time [9]. A cross-national comparison of places of death from 2013, covering people aged over 65 years in 21 countries, showed a median of 54% of deaths occurring in hospital and 18% in residential aged care facilities [10], with large differences between studies as well as between countries.\nSome countries – such as Germany – do not routinely compile death registration data that include information on care dependency, which makes it difficult to obtain representative data on place of death. Initial data on the distribution of places of death in Germany are available, showing an overall trend [11]. However, these data do not contain information on care dependency, and little is known about care transitions at the EOL among older people receiving home care. 
Can people who are cared for at home also die at home, as most would prefer?\nTherefore, the aim of this explorative study was to investigate the place of death of home care recipients in Germany, taking characteristics and changes in care settings into account.", "This retrospective study is part of the STudy on ADvance care PLANning (STADPLAN), funded by the German Federal Ministry of Education and Research (BMBF grant 01GL1707A-D). STADPLAN aims to evaluate the effect of an adapted advance care planning (ACP) program on patients’ activation regarding healthcare issues in care-dependent community-dwelling older persons [12].\nDatabase, study population and outcome Anonymised data for this study were obtained from the DAK-Gesundheit, a large statutory health and long-term care insurance (LTCI) fund representing approximately 5.6 million members (corresponding to 6.7% of the German population) [13]. The dataset covered all insured persons aged at least 65 years who were cared for in their home setting on January 1, 2016, and consisted of different datasets that were merged via a unique identifier. For this study, all persons who died up to June 30, 2019 were included. Data on care dependency were obtained from the LTCI [14], providing information on services received, with start and end dates, and on the person’s care need. At baseline, all included persons received services in their own home setting (1). Based on the services received during follow-up, we differentiated between four further care settings. We included shared housing arrangements (2), where a small group of people live in private rooms while sharing a common space, domestic support, and nursing care [15]. Semi-residential arrangements (3) provide temporary care support during day or night in an institution [16]. 
Full residential care refers to either a short-term stay (4, covered by the LTCI for a maximum of 58 days per year) or a long-term stay (5) in a nursing home [17].\nUp to 2016, care-dependent persons were assigned to one of three levels of care reflecting the time needed for daily help. These levels were replaced by 5 care grades on January 1, 2017, reflecting a more comprehensive view of the person’s independence and competences, considering physical, cognitive or psychological impairments [17]. Different claims data sets also contained information on demographics, date of death, outpatient care, and hospitalisations. Hospital data held information on dates of admission and discharge, the respective diagnoses, and diagnostic as well as therapeutic procedures. Outpatient data contained diagnoses, including information on the level of diagnostic certainty (confirmed, suspected, ruled out and status post), treatments and procedures. All diagnoses were coded using the German modification of the International Classification of Diseases, 10th revision (ICD-10-GM).\nOur first outcome of interest was the care setting at time of death, differentiated by the 5 groups mentioned above. The second variable of interest was the actual place of death, which, in addition to those care settings, also includes hospitals. We also assessed the proportion of persons who died in hospital, defined as being in hospital on the day of death. To provide baseline characteristics, we assessed the mean age at death (categorised into four groups: 65–74, 75–84, 85–94 and ≥ 95 years), sex and care need at death. We combined the old levels and new grades into 3 groups of care need: low (care level: 0/1; grade: 1/2), medium (care level: 2; grade: 3/4), high (care level: 3; grade: 5). Duration of care dependency was calculated as the time in years between the start of receiving care in the home setting (January 1, 2016 at the latest) and the date of death. 
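For illustration only (the study itself used SAS, and the function names and exact date handling below are our own assumptions), the variable derivations described above (combining care levels/grades into three care-need groups, computing the duration of care dependency, and selecting the diagnosis lookback quarters) could be sketched in Python as:

```python
from datetime import date

# Pre-2017 care levels (0-3) and post-2017 care grades (1-5) mapped onto
# the three analysis groups used in the study.
LEVEL_TO_GROUP = {0: "low", 1: "low", 2: "medium", 3: "high"}
GRADE_TO_GROUP = {1: "low", 2: "low", 3: "medium", 4: "medium", 5: "high"}

def care_need_group(level=None, grade=None):
    """Combined care-need group; preferring the newer grade when both
    codings exist is an assumption here, not the study's stated rule."""
    return GRADE_TO_GROUP[grade] if grade is not None else LEVEL_TO_GROUP[level]

def care_duration_years(start_of_home_care: date, death: date) -> float:
    """Duration of care dependency in years between the start of home care
    (which is January 1, 2016 at the latest in this cohort) and death."""
    return (death - start_of_home_care).days / 365.25

def lookback_quarters(death: date, n_before: int = 3):
    """(year, quarter) of death plus the n_before preceding calendar
    quarters, i.e. the window used for the cancer/dementia diagnoses."""
    year, quarter = death.year, (death.month - 1) // 3 + 1
    window = []
    for _ in range(n_before + 1):
        window.append((year, quarter))
        year, quarter = (year, quarter - 1) if quarter > 1 else (year - 1, 4)
    return window
```

For a death on 14 February 2018, for instance, `lookback_quarters` yields Q1 2018 plus Q4, Q3 and Q2 of 2017.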
Furthermore, we assessed confirmed outpatient diagnoses of cancer (ICD-10-GM: C00-D48 [18]) and dementia (ICD-10-GM: F00, F01, F02.0, F02.3, F03, G30, G31.0, G31.1, G31.82, G31.9, R54 [19]) in the quarter of death and the three quarters before death.\nStatistical analysis Firstly, the study population was described by age, care need, duration of care and care setting at death, as well as by cancer and dementia diagnoses, respectively. These measures were calculated overall and stratified by sex. Secondly, we examined the proportion dying in hospital by care setting. Finally, we characterised the deceased cohort stratified by actual place of death (hospital, home setting, shared housing arrangement, semi-residential arrangement, short-term care or long-term care in nursing home). Descriptive measures were computed.\nWe performed all analyses using SAS for Windows, version 9.4 (SAS Institute Inc., Cary, NC, United States).", "Anonymised data for this study were obtained from the DAK-Gesundheit, a large statutory health and long-term care insurance (LTCI) fund representing approximately 5.6 million members (corresponding to 6.7% of the German population) [13]. The dataset covered all insured persons aged at least 65 years who were cared for in their home setting on January 1, 2016, and consisted of different datasets that were merged via a unique identifier. For this study, all persons who died up to June 30, 2019 were included. Data on care dependency were obtained from the LTCI [14], providing information on services received, with start and end dates, and on the person’s care need. At baseline, all included persons received services in their own home setting (1). 
Based on the services received during follow-up, we differentiated between four further care settings. We included shared housing arrangements (2), where a small group of people live in private rooms while sharing a common space, domestic support, and nursing care [15]. Semi-residential arrangements (3) provide temporary care support during day or night in an institution [16]. Full residential care refers to either a short-term stay (4, covered by the LTCI for a maximum of 58 days per year) or a long-term stay (5) in a nursing home [17].\nUp to 2016, care-dependent persons were assigned to one of three levels of care reflecting the time needed for daily help. These levels were replaced by 5 care grades on January 1, 2017, reflecting a more comprehensive view of the person’s independence and competences, considering physical, cognitive or psychological impairments [17]. Different claims data sets also contained information on demographics, date of death, outpatient care, and hospitalisations. Hospital data held information on dates of admission and discharge, the respective diagnoses, and diagnostic as well as therapeutic procedures. Outpatient data contained diagnoses, including information on the level of diagnostic certainty (confirmed, suspected, ruled out and status post), treatments and procedures. All diagnoses were coded using the German modification of the International Classification of Diseases, 10th revision (ICD-10-GM).\nOur first outcome of interest was the care setting at time of death, differentiated by the 5 groups mentioned above. The second variable of interest was the actual place of death, which, in addition to those care settings, also includes hospitals. We also assessed the proportion of persons who died in hospital, defined as being in hospital on the day of death. To provide baseline characteristics, we assessed the mean age at death (categorised into four groups: 65–74, 75–84, 85–94 and ≥ 95 years), sex and care need at death. We combined the old levels and new grades into 3 groups of care need: low (care level: 0/1; grade: 1/2), medium (care level: 2; grade: 3/4), high (care level: 3; grade: 5). Duration of care dependency was calculated as the time in years between the start of receiving care in the home setting (January 1, 2016 at the latest) and the date of death. Furthermore, we assessed confirmed outpatient diagnoses of cancer (ICD-10-GM: C00-D48 [18]) and dementia (ICD-10-GM: F00, F01, F02.0, F02.3, F03, G30, G31.0, G31.1, G31.82, G31.9, R54 [19]) in the quarter of death and the three quarters before death.", "Firstly, the study population was described by age, care need, duration of care and care setting at death, as well as by cancer and dementia diagnoses, respectively. These measures were calculated overall and stratified by sex. Secondly, we examined the proportion dying in hospital by care setting. Finally, we characterised the deceased cohort stratified by actual place of death (hospital, home setting, shared housing arrangement, semi-residential arrangement, short-term care or long-term care in nursing home). Descriptive measures were computed.\nWe performed all analyses using SAS for Windows, version 9.4 (SAS Institute Inc., Cary, NC, United States).", "The entire cohort encompassed 46,207 care-dependent people who were cared for at home on January 1, 2016. The target population comprised the 26,590 people (57.5%) who died by June 30, 2019.\nBaseline characteristics Table 1 shows the characteristics of all deceased persons and compares males and females. Two thirds (66.6%) of the deceased cohort were female; more than 80% had a medium (53.3%) or high (29.6%) care need at death. 
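The in-hospital death indicator defined in the Methods (being in hospital on the day of death) can be derived from episode-level hospital data; the record layout, function name and toy values below are hypothetical, not the study's SAS code:

```python
from datetime import date

def died_in_hospital(death: date, episodes) -> bool:
    """True if any hospital episode (admission, discharge) covers the day
    of death; an open episode (discharge is None) counts as ongoing."""
    return any(
        admission <= death and (discharge is None or death <= discharge)
        for admission, discharge in episodes
    )

# Toy cohort: (date of death, list of hospital episodes).
cohort = [
    (date(2017, 3, 10), [(date(2017, 3, 1), date(2017, 3, 10))]),  # in hospital on day of death
    (date(2018, 6, 2), [(date(2018, 5, 1), date(2018, 5, 20))]),   # discharged before death
    (date(2016, 11, 5), []),                                       # never hospitalised
]

n_in_hospital = sum(died_in_hospital(d, eps) for d, eps in cohort)
print(f"{n_in_hospital} of {len(cohort)} died in hospital")  # 1 of 3
```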
The mean duration of care dependency was 3.7 years (SD: 2.8) at time of death. About two thirds had a dementia diagnosis (64.1%) and one third a cancer diagnosis (36.1%). Most were cared for at home at time of death (48.1%), followed by long-term nursing home care (32.3%) and short-term nursing home care (11.7%).\n\nTable 1 Characteristics of deceased cohort (2016 to 2019)
Characteristic | Total (n = 26,590) | Female (n = 17,701) | Male (n = 8,889)
Age at death in years, mean (SD) | 86.8 (7.4) | 87.5 (7.4) | 85.2 (7.2)
  65–74 | 1,714 (6.5%) | 980 (5.5%) | 734 (8.3%)
  75–84 | 7,671 (28.9%) | 4,601 (26.0%) | 3,070 (34.5%)
  85–94 | 13,517 (50.8%) | 9,164 (51.8%) | 4,353 (49.0%)
  95+ | 3,688 (13.9%) | 2,956 (16.7%) | 732 (8.2%)
Care need* at death
  Low (care level: 0/1; grade: 1/2) | 4,558 (17.1%) | 3,174 (17.9%) | 1,384 (15.6%)
  Medium (care level: 2; grade: 3/4) | 14,173 (53.3%) | 9,540 (53.9%) | 4,633 (52.1%)
  High (care level: 3; grade: 5) | 7,859 (29.6%) | 4,987 (28.2%) | 2,872 (32.3%)
Diagnosis in the quarter of death or up to three quarters before death
  Dementia | 17,049 (64.1%) | 11,461 (64.7%) | 5,588 (62.9%)
  Cancer | 9,603 (36.1%) | 5,558 (31.4%) | 4,045 (45.5%)
Duration of care dependency at death in years, mean (SD) | 3.7 (2.8) | 3.9 (2.8) | 3.4 (2.7)
Setting at death
  Home | 12,795 (48.1%) | 8,131 (45.9%) | 4,664 (52.5%)
  Long-term care in nursing home | 8,599 (32.3%) | 6,109 (34.5%) | 2,490 (28.0%)
  Short-term care in nursing home | 3,103 (11.7%) | 2,016 (11.4%) | 1,087 (12.2%)
  Semi-residential arrangement | 1,156 (4.4%) | 709 (4.0%) | 447 (5.0%)
  Shared housing arrangement | 937 (3.5%) | 736 (4.2%) | 201 (2.3%)
* The three German care levels were modified into 5 care grades on 1 January 2017
\nFemales had a higher mean age at death (87.5 years) than males (85.2 years). Men were more likely to have a cancer diagnosis and had a shorter period of care dependency than women. Only small differences were found regarding care need and the prevalence of dementia. More females received long-term care in nursing homes (34.5% vs. males: 28.0%), whereas more males were cared for in the home setting (52.5% vs. females: 45.9%).\nIn-hospital death Table 2 shows the prevalence of in-hospital deaths in total and stratified by care setting.\n\nTable 2 Prevalence of in-hospital death by care setting
Group | Total (n = 26,590) | Home setting (n = 12,795) | Nursing home, long-term care (n = 8,599) | Nursing home, short-term care (n = 3,103) | Semi-residential arrangement (n = 1,156) | Shared housing arrangement (n = 937)
Sex
  Male | 39.1% | 46.3% | 28.7% | 30.5% | 43.2% | 34.8%
  Female | 35.8% | 43.8% | 26.8% | 32.8% | 44.3% | 23.2%
Age at death in years
  65–74 | 47.9% | 55.2% | 29.3% | 42.0% | 56.4% | 45.2%
  75–84 | 42.6% | 51.5% | 30.9% | 35.8% | 44.8% | 31.2%
  85–94 | 35.2% | 42.3% | 27.3% | 31.1% | 44.3% | 22.0%
  95+ | 26.2% | 31.6% | 21.1% | 23.1% | 29.9% | 13.1%
Care need* at death
  Low (level: 0/1; grade: 1/2) | 60.7% | 71.1% | 39.6% | 45.4% | 75.8% | 50.0%
  Medium (level: 2; grade: 3/4) | 38.4% | 47.2% | 29.2% | 32.5% | 53.5% | 32.6%
  High (level: 3; grade: 5) | 20.3% | 22.0% | 17.5% | 20.4% | 22.8% | 17.1%
Diagnosis in the quarter of death or up to three quarters before death
  Dementia | 32.6% | 39.3% | 26.2% | 29.8% | 40.6% | 23.4%
  Cancer | 36.9% | 45.8% | 26.2% | 29.2% | 44.5% | 27.1%
Total | 36.9% | 44.7% | 27.3% | 32.0% | 43.9% | 25.7%
* The three German care levels were modified into 5 care grades on 1 January 2017
\nIn total, 36.9% of all deceased died in hospital (n = 9,811). The proportion of in-hospital deaths was slightly higher in men (39.1%) than in women (35.8%). With higher age, the proportion of in-hospital deaths decreased (47.9% among those aged 65–74 years versus 26.2% among those aged 95 or older). Another trend can be seen regarding care need: more than 6 out of 10 persons with the lowest care need died in hospital, compared to one fifth in the group with the highest care need. Differences with respect to dementia and cancer were not found.\nThe prevalence of in-hospital death also varied by care setting. Whereas less than 28% of those cared for in long-term nursing home care died in hospital, this proportion was highest in the home care setting (44.7%) and the semi-residential arrangement setting (43.9%). The sex difference was largest in the shared housing arrangement setting (23.2% in-hospital deaths among females versus 34.8% among males). The above-mentioned tendencies regarding age and care need were found in all care settings. In people with a dementia diagnosis, the proportion of in-hospital deaths varies widely by care setting. 
It is lowest in the shared housing arrangement setting (23.4%) and the highest in the semi-residential arrangement setting (40.6%).\nPlace of death When having a closer look at the actual place of death (Table 3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and the nursing home (long-term) (23.5%). 
Nearly 13% died either in short-term nursing home care (7.9%), in a shared housing arrangement (2.6%) or in the semi-residential arrangement setting (2.4%).\n\nTable 3 Characteristics of deceased cohort by place of death (2016–2019)
Characteristic | Hospital (n = 9,811; 36.9%) | Home setting (n = 7,077; 26.6%) | Nursing home, long-term care (n = 6,247; 23.5%) | Nursing home, short-term care (n = 2,110; 7.9%) | Semi-residential arrangement (n = 649; 2.4%) | Shared housing arrangement (n = 696; 2.6%)
Sex
  Male | 35.4% | 35.4% | 28.4% | 35.8% | 39.1% | 18.8%
  Female | 64.6% | 64.6% | 71.6% | 64.2% | 60.9% | 81.2%
Age at death in years
  Mean (SD) | 85.5 (7.4) | 87.3 (7.7) | 88.0 (6.9) | 87.2 (7.2) | 86.4 (7.1) | 86.8 (7.0)
  65–74 | 8.4% | 6.4% | 4.1% | 5.3% | 5.2% | 4.9%
  75–84 | 33.3% | 26.3% | 24.8% | 26.8% | 35.1% | 29.7%
  85–94 | 48.5% | 51.1% | 53.6% | 52.6% | 47.0% | 54.9%
  95+ | 9.8% | 16.2% | 17.5% | 15.3% | 12.6% | 10.5%
Care need* at death
  Low (level: 0/1; grade: 1/2) | 28.2% | 11.4% | 9.8% | 15.6% | 4.6% | 2.9%
  Medium (level: 2; grade: 3/4) | 55.5% | 45.4% | 60.6% | 55.4% | 41.6% | 42.2%
  High (level: 3; grade: 5) | 16.3% | 43.3% | 29.7% | 29.1% | 53.8% | 54.9%
Diagnosis in the quarter of death or up to three quarters before death
  Dementia | 56.6% | 60.1% | 75.7% | 64.6% | 81.2% | 88.4%
  Cancer | 36.2% | 36.4% | 36.1% | 40.9% | 29.6% | 24.7%
* The three German care levels were modified into 5 care grades on 1 January 2017
\nOverall, 18.8% of those dying in shared housing arrangements were male, versus 39.1% in semi-residential arrangements. Deceased persons in hospital and in semi-residential arrangements were the youngest, with mean ages of 85.5 and 86.4 years, whereas the place of death with the oldest people was long-term nursing home care (88.0 years). The prevalence of dementia varied widely between places of death, from highest in shared housing arrangements (88.4%) to lowest in the home setting (60.1%) and hospital (56.6%). The highest cancer prevalence was found in the short-term care setting (40.9%), versus lowest in shared housing arrangements (24.7%) as place of death.", "Table 1 shows the characteristics of all deceased persons and compares males and females. Two thirds (66.6%) of the deceased cohort were female; more than 80% had a medium (53.3%) or high (29.6%) care need at death. The mean duration of care dependency was 3.7 years (SD: 2.8) at time of death. About two thirds had a dementia diagnosis (64.1%) and one third a cancer diagnosis (36.1%). Most were cared for at home at time of death (48.1%), followed by long-term nursing home care (32.3%) and short-term nursing home care (11.7%).\n\nTable 1 Characteristics of deceased cohort (2016 to 2019)
Characteristic | Total (n = 26,590) | Female (n = 17,701) | Male (n = 8,889)
Age at death in years, mean (SD) | 86.8 (7.4) | 87.5 (7.4) | 85.2 (7.2)
  65–74 | 1,714 (6.5%) | 980 (5.5%) | 734 (8.3%)
  75–84 | 7,671 (28.9%) | 4,601 (26.0%) | 3,070 (34.5%)
  85–94 | 13,517 (50.8%) | 9,164 (51.8%) | 4,353 (49.0%)
  95+ | 3,688 (13.9%) | 2,956 (16.7%) | 732 (8.2%)
Care need* at death
  Low (care level: 0/1; grade: 1/2) | 4,558 (17.1%) | 3,174 (17.9%) | 1,384 (15.6%)
  Medium (care level: 2; grade: 3/4) | 14,173 (53.3%) | 9,540 (53.9%) | 4,633 (52.1%)
  High (care level: 3; grade: 5) | 7,859 (29.6%) | 4,987 (28.2%) | 2,872 (32.3%)
Diagnosis in the quarter of death or up to three quarters before death
  Dementia | 17,049 (64.1%) | 11,461 (64.7%) | 5,588 (62.9%)
  Cancer | 9,603 (36.1%) | 5,558 (31.4%) | 4,045 (45.5%)
Duration of care dependency at death in years, mean (SD) | 3.7 (2.8) | 3.9 (2.8) | 3.4 (2.7)
Setting at death
  Home | 12,795 (48.1%) | 8,131 (45.9%) | 4,664 (52.5%)
  Long-term care in nursing home | 8,599 (32.3%) | 6,109 (34.5%) | 2,490 (28.0%)
  Short-term care in nursing home | 3,103 (11.7%) | 2,016 (11.4%) | 1,087 (12.2%)
  Semi-residential arrangement | 1,156 (4.4%) | 709 (4.0%) | 447 (5.0%)
  Shared housing arrangement | 937 (3.5%) | 736 (4.2%) | 201 (2.3%)
* The three German care levels were modified into 5 care grades on 1 January 2017
\nFemales had a higher mean age at death (87.5 years) than males (85.2 years). Men were more likely to have a cancer diagnosis and had a shorter period of care dependency than women. Only small differences were found regarding care need and the prevalence of dementia. More females received long-term care in nursing homes (34.5% vs. males: 28.0%), whereas more males were cared for in the home setting (52.5% vs. 
females: 45.9%).", "Table2 shows the prevalence of in-hospital deaths in total and stratified by care setting.\n\nTable 2Prevalence of in-hospital death by care settingTotal(n = 26,590)Care setting at time of deathHome setting(n = 12,795)Nursing home (long-term care)(n = 8,599)Nursing home (short-term care)(n = 3,103)Semi-residential arrangement (n = 1,156)Shared housing arrangement (n = 937)SexMale39.1%46.3%28.7%30.5%43.2%34.8%Female35.8%43.8%26.8%32.8%44.3%23.2%Age at death in years65–7447.9%55.2%29.3%42.0%56.4%45.2%75–8442.6%51.5%30.9%35.8%44.8%31.2%85–9435.2%42.3%27.3%31.1%44.3%22.0%95+26.2%31.6%21.1%23.1%29.9%13.1%Care need* at deathLow(level: 0/1, grade: 1/2)60.7%71.1%39.6%45.4%75.8%50.0%Medium(level: 2; grade: 3/4)38.4%47.2%29.2%32.5%53.5%32.6%High(level: 3; grade: 5)20.3%22.0%17.5%20.4%22.8%17.1%Account for diagnosis assessed in the quarter of death or up to three quarters before deathDementia32.6%39.3%26.2%29.8%40.6%23.4%Cancer36.9%45.8%26.2%29.2%44.5%27.1%Total36.9%44.7%27.3%32.0%43.9%25.7%* The three German care levels were modified into 5 care grades at 1st January 2017\n\nPrevalence of in-hospital death by care setting\nLow\n(level: 0/1, grade: 1/2)\nMedium\n(level: 2; grade: 3/4)\nHigh\n(level: 3; grade: 5)\n* The three German care levels were modified into 5 care grades at 1st January 2017\nIn total, 36.9% of all deceased died in hospital (n = 9,811). The proportion of in-hospital deaths is slightly higher in men (39.1%) than in women (35.8%). With higher age, the proportion of in-hospital deaths decreased (47.9% in the 65–74 years old versus 26.2% in the persons aged 95 or older). Another trend can be seen regarding the care need. More than 6 out of 10 persons with the lowest care need died in hospital compared to one fifth in the group with the highest care need. Differences with respect to dementia and cancer were not found.\nThe prevalence of in-hospital death also varies by care setting. 
Whereas less than 28% of those cared for in long-term nursing home care died in hospital, the proportion was highest in the home care setting (44.7%) and the semi-residential arrangement setting (43.9%). The sex difference was largest in shared housing arrangements (23.2% in-hospital deaths among females versus 34.8% among males). The tendencies regarding age and care need described above were found in all care settings. Among people with a dementia diagnosis, the proportion of in-hospital deaths varied widely by care setting, from lowest in shared housing arrangements (23.4%) to highest in semi-residential arrangements (40.6%).

Looking more closely at the actual place of death (Table 3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and long-term nursing home care (23.5%). Nearly 13% died either in short-term care (7.9%), a shared housing arrangement (2.6%), or a semi-residential arrangement (2.4%).

Table 3. Characteristics of deceased cohort by place of death (2016–2019)

| | Hospital (n = 9,811; 36.9%) | Home (n = 7,077; 26.6%) | Nursing home, long-term (n = 6,247; 23.5%) | Nursing home, short-term (n = 2,110; 7.9%) | Semi-residential (n = 649; 2.4%) | Shared housing (n = 696; 2.6%) |
|---|---|---|---|---|---|---|
| Male | 35.4% | 35.4% | 28.4% | 35.8% | 39.1% | 18.8% |
| Female | 64.6% | 64.6% | 71.6% | 64.2% | 60.9% | 81.2% |
| Age at death in years, mean (SD) | 85.5 (7.4) | 87.3 (7.7) | 88.0 (6.9) | 87.2 (7.2) | 86.4 (7.1) | 86.8 (7.0) |
| Age 65–74 | 8.4% | 6.4% | 4.1% | 5.3% | 5.2% | 4.9% |
| Age 75–84 | 33.3% | 26.3% | 24.8% | 26.8% | 35.1% | 29.7% |
| Age 85–94 | 48.5% | 51.1% | 53.6% | 52.6% | 47.0% | 54.9% |
| Age 95+ | 9.8% | 16.2% | 17.5% | 15.3% | 12.6% | 10.5% |
| Care need* low (level 0/1; grade 1/2) | 28.2% | 11.4% | 9.8% | 15.6% | 4.6% | 2.9% |
| Care need* medium (level 2; grade 3/4) | 55.5% | 45.4% | 60.6% | 55.4% | 41.6% | 42.2% |
| Care need* high (level 3; grade 5) | 16.3% | 43.3% | 29.7% | 29.1% | 53.8% | 54.9% |
| Dementia† | 56.6% | 60.1% | 75.7% | 64.6% | 81.2% | 88.4% |
| Cancer† | 36.2% | 36.4% | 36.1% | 40.9% | 29.6% | 24.7% |

* The three German care levels were modified into five care grades on 1 January 2017.
† Diagnosis assessed in the quarter of death or up to three quarters before death.

Overall, 18.8% of those dying in shared housing arrangements were male, versus 39.1% in semi-residential arrangements. Deceased persons in hospital and semi-residential arrangements were the youngest (mean age 85.5 and 86.4 years), whereas long-term nursing home residents were the oldest (88.0 years). The prevalence of dementia varied widely between places of death, from highest in shared housing arrangements (88.4%) to lowest in the home setting (60.1%) and hospital (56.6%). Cancer prevalence was highest in the short-term care setting (40.9%) and lowest in shared housing arrangements (24.7%).

Findings and comparison with the literature
In care-dependent people initially receiving home care, 57.5% died within 3.5 years. Overall, 36.9% died in hospital, and in-hospital deaths were most frequent among those still receiving home care or care in semi-residential arrangements at the time of death. People who died in hospital were younger and less care-dependent than those in all other analysed care settings. More than half of those initially receiving home care moved to another care setting before death (44.0% to long- or short-term nursing home care, 4.4% to semi-residential arrangements, and 3.5% to shared housing arrangements).

Actual place of death
Nearly 37% of all deaths took place in hospitals, the most common place of death in our care-receiving cohort, in the total German population [20–22], and in most other countries [23–25].
After the hospital, the own home was the second most common place of death (26.6%). Dasch et al. reported a similar figure (21.3%) for 2017, although their representative German cohort of all decedents had a mean age of 77.6 years, almost 10 years younger than our care-receiving cohort [22]. Moreover, Herbst et al. analysed two random samples of German death certificates from 2007 and 2017: while home was still the second most frequent place of death in 2007 (26.1%), it slid to third place in 2017 with 19.8% [20]. Taken together, published studies suggest that the likelihood of dying at home has been decreasing in recent years [20, 22, 26]. A review from 1998 already investigated the relation between patient characteristics and home deaths and found that improved access to home care is likely to increase home deaths among older people [27]. Palliative home care and hospice care in particular are associated with fewer hospitalisations and more home deaths [28]. Several further factors influence death at home, for example patients' functional status, their preferences, living with relatives, and extended family support [29].
In the present cohort, 23.5% died in nursing homes, slightly higher than in the younger representative sample of Dasch et al. (20.4%) but lower than in the random sample of Herbst et al. (27.1%) [20, 30]. The older the person, the higher the probability of dying in a nursing home rather than a hospital [31]; in our study, too, nursing home residents receiving long-term care were the oldest group.
Shared housing arrangements were not considered in previous studies investigating place of death, even though this setting is an increasingly used, familiar care alternative to long-term nursing homes in Germany [15, 33, 34]. To the author's knowledge, there are also no data on the frequency of transitions to short-term care before death, or on the nursing home as place of death for short-term care recipients. Studies based on German death certificates cannot include such care information, because it is not routinely recorded in these documents. Our study shows that both settings are relevant and should be included in future studies of EOL care. Quantifying and better understanding all possible care transitions at the end of life is important for estimating the relevance of, and trends in, place of death from a public health perspective. Although the hospice is still a rare place of death and unfortunately not covered in the present data, it has become more common as a place of death in Germany in recent years [22].

Hospital death by care setting
As in the present cohort, the proportion of in-hospital deaths among older, care-receiving people appears smaller than in the general population [25]. The availability of formal versus informal care seems to influence hospital death rates. In the Netherlands, older people receiving informal care were more likely to die in hospital than people receiving formal home care or institutional care [35]. Another Dutch study showed that, among people who received only informal care in their last three months of life, the odds of dying in hospital were much higher than among those who received a combination of formal and informal home care [36]. In the present cohort, the proportion of in-hospital deaths was likewise highest among people receiving home care (44.7%), where the largest share of informal care can certainly be found. The proportion of in-hospital deaths was lower in our group of nursing home residents receiving long-term care. Although this proportion is in line with previous German analyses [37, 38], it is somewhat higher in international comparison [39]. Nevertheless, the proportion of in-hospital deaths among nursing home residents varies markedly internationally, even within countries, with an overall median of 22.6% [39].
Other factors, such as care level, age, and sex, also influence the risk of dying in hospital for elderly, care-receiving people. In our cohort, the younger the person and the lower the care level, the higher the proportion of in-hospital deaths; the same was found in previous studies [10, 39, 40]. This could partly explain why men in our cohort were slightly more likely than women to die in hospital. However, there is increasing evidence of "real" sex-specific differences in burdensome interventions such as care transitions or invasive procedures at the EOL, and future studies should put more emphasis on sex-specific analyses [41].

Moving to other care settings before death
The frequent transition from home to other care settings before death indicates that home care cannot always be maintained until the EOL, although most patients wish so [5–8]. International studies have already shown that the frequency of care setting transitions among elderly people increases near death [42]. In the Netherlands, for example, nearly half of home-living persons aged 55–85 years were transferred between care settings one or more times in the last 3 months of life, mostly from home to hospital [35]. Care setting transitions at the EOL are seen as increasingly problematic, partly because of potential medication and care errors, disrupted care teams, and loss of care information [20, 27, 31], even though such transitions can also relieve family caregivers [23]. Regarding the German situation, it should be mentioned that structures for outpatient palliative care have been introduced only within the last 15 years, and the growing number of general and specialist outpatient palliative care services (in German: AAPV and SAPV) now provides more options for outpatient palliative care [43], which can strengthen the quality of care at home at the patient's EOL and thereby prevent unwanted care transitions. However, care at home should not automatically be equated with the best care [9, 44], because institutionalised palliative care such as hospice care and in-hospital palliative care can improve the quality of dying and death [45, 46]. Overall, care decisions should always be weighed individually to enable appropriate and timely care setting transitions in accordance with individualised EOL care needs [47].
Several indicators have been associated with a risk of care transitions. In the UK, people with severe cognitive impairment were the group most likely to move to other care settings [42]. Our results also show that people with dementia more often died in a care setting other than home. Another German study analysed predictors of nursing home admission among care-dependent people using longitudinal secondary data and likewise identified dementia, cognitive impairment, cancer of the brain, and higher age as risk factors, in line with our results [48].

Strengths and Limitations
The strengths of this study are its real-world character and its large sample size, which allowed us to stratify the analyses by sex, age, and other variables. Furthermore, we had valid information on care setting pathways and place of death. Like the strengths, the limitations stem from the nature of the administrative data from LTCI funds. The data were not collected for scientific research, and further information that could influence placement and dying in different care settings (clinical data, socioeconomic status, marital status, or family support) was not available. The same applies to information on the specific care recipient's institution, e.g. staffing ratios or the nursing home's ownership. For those living in semi-residential arrangements, we could not differentiate whether care recipients died at home or during day or night care. For shared housing arrangements, underreporting is possible, since care providers might invoice only other benefits, for instance for nursing home care. Moreover, our data did not contain palliative care units and hospices as places of death; however, their joint share of places of death in Germany is 11% [22]. Another limitation is that the data were obtained from only one health insurance fund. Since the DAK-Gesundheit insures more women and a population with a generally poorer health status [49], our results cannot be extrapolated to the entire care-receiving population in Germany. Nevertheless, with 5.6 million insured persons, the DAK-Gesundheit is one of Germany's largest health insurance funds [13].
More than half of home care receiving people moved to another (care-) setting before death (44.0% either to long- or short-term care in nursing home, 4.4% to semi-residential arrangements, plus 3.5% to shared housing arrangements).", "Nearly 37% of all deaths took place in hospitals, which is the most common place of death in our care receiving cohort as well as in the total German population [20–22] and in those of most other countries [23–25].\nAfter the hospital, the own home was found as the second most common place of death with 26.6%, which was also found by Dasch et al. (21.3%) for 2017 although their representative German cohort of all persons dying was in mean 77.6 years old, almost 10 years younger than our care receiving cohort [22]. Moreover, Herbst et al. analysed two random samples of German death certificates from 2007 and from 2017. They showed that while in 2007 home also was the second most frequent place of death (26.1%), it slid to third place in 2017 with 19.8% [20]. Taking results of published studies together, the likelihood to die at home has been decreasing since recent years [20, 22, 26]. In a review of 1998, the authors already investigated the relation between patient characteristics and home deaths [27]. They found out that improved access to home care is likely to increase home deaths for older people [27]. Especially palliative home care and hospice care are associated with fewer hospitalisations and more home deaths [28]. But there are several more potential factors influencing death at home, for example patients functional status, their preferences, living with relatives, and extended family support [29].\nThe present cohort showed that 23.5% died in nursing homes, which is a little higher than found in the younger-aged representative sample by Dasch et al. (20.4%) but lower than in the random sample of Herbst et al. with 27.1% [20, 30]. 
The older the people, the higher the probability to die in a nursing home instead of a hospital [31]. Also in our study nursing home residents receiving long-term care were the oldest group.\nShared housing arrangements were not considered in previous studies investigating place of death, even if this setting can be seen as an increasingly used, familiar care alternative for long-term nursing homes in Germany [15, 33, 34]. To the author’s knowledge there is also no data on the frequency of transitions to short-term care before death as well as nursing home as place of death for short-term care recipients. Studies based on German death certificate data cannot include care information, because they are not routinely covered in these documents. Our study shows that both settings are of relevance and should be included in future studies in EOL care. Quantification and deeper understanding of all possible care transitions at the end of life are important to estimate the relevance and trends for place of death from a public health perspective. Even if the hospice as place of death is still rare and unfortunately not covered in the present data, it has been shown to increase as place of death in Germany in recent years [22].", "As in the present cohort, the proportion of in-hospital death in older, care receiving people seems to be smaller compared to the general population [25]. The availability of formal versus informal care seems to influence hospital death rates. In the Netherlands older people receiving informal care were more likely to die in hospital than people receiving formal home care or institutional care [35]. Another Dutch study showed for Dutch people who only received informal care in their last three months of life that the odds of dying in a hospital was much higher compared to those who received a combination of formal and informal home care [36]. 
In the present cohort, the proportion of in-hospital death was also highest in people receiving home care (44.7%), where the largest proportion of informal care can certainly be found. The proportion of in-hospital deaths was lower in our group of nursing home residents receiving long-term care. Although this proportion goes in line with previous German analyses [37, 38], it is, however, internationally compared somewhat higher [39]. Nevertheless, the proportion of in-hospital deaths among nursing-home residents internationally varies markedly even within countries with an overall median of 22.6% [39].\nThere are other possible factors influencing the risk of dying in hospital for elderly, care receiving people like the care level, age and sex. In our cohort, the younger the people and the lower the care level, the higher was the proportion of in-hospital death. The same was found by previous studies [10, 39, 40]. This could possibly partly explain, why men in our cohort were a little more likely to die in hospital than women. However, there is increasing evidence of “real” sex-specific differences in burdensome interventions like transitions of care or invasive procedures during EOL and future studies should put more emphasis on sex-specific analyses [41].", "Our result regarding the frequent transition from home to other care settings before death indicates that home care cannot always be maintained until the EOL, although most patients wish so [5–8]. It was already shown in international studies that the frequency of care setting transitions of elderly people increases near to death [42]. For example, in the Netherlands nearly half of their 55–85 years old home-living persons was transferred between care settings one or more times in the last 3 months of life, mostly from home to hospital [35]. 
Care setting transitions at the EOL are seen as increasingly problematic, also because of potential medication and care errors, disrupting care teams, and a loss of care information [20, 27, 31], even if these transitions have the potential to be a relief for family caregivers [23]. Looking at the German situation, it also should be mentioned that structures in outpatient palliative care have been introduced just within the last 15 years and the growing number of general and specialist outpatient palliative care services (in German: AAPV and SAPV) provides more possibilities of outpatient palliative care since the last years [43], which can strengthen the quality of care at home at the patient’s EOL. Therewith unwanted care transitions can also be prevented. However, the care at home should not automatically be equated with the best care [9, 44] because institutionalised palliative care like hospice care and in-hospital palliative care can improve the quality of dying and death [45, 46]. Overall, care decisions always should be weighed individually to enable appropriate and timely care setting transitions in accordance with individualised EOL care needs [47].\nThere are different indicators already mentioned being associated with a risk of care transitions. In the UK, that people with severe cognitive impairment were the most likely group to move to other care settings [42]. Our results also show that the people with dementia more often died in another care setting than home. Another German study analysed predictors of admission to nursing home in care dependent people based on longitudinal secondary data and also found dementia, cognitive impairment, cancer of the brain and higher age as risk factors, which goes in line with our results [48].", "The strengths of this study are its real-world character, its large sample size which allowed us to stratify the analyses by sex, age and other variables. 
Furthermore, we had valid information on care setting pathways and place of death. Like the strengths, the limitations stem from the nature of the administrative data from LTCI funds. The data were not collected for the purpose of scientific research, and further information that could influence placement and death in different care settings (clinical data, socioeconomic status, marital status or family support) was not available. The same applies to information on the specific care institution, e.g. staffing ratios or the nursing home's ownership. For people living in semi-residential arrangements, we were not able to differentiate whether care recipients died at home or during day or night care. For shared housing arrangements, underreporting is possible, since care providers might only invoice other benefits, for instance for nursing home care. Moreover, our data did not include palliative care units and hospices as places of death; however, their joint share of places of death in Germany is 11% [22]. Another limitation relates to the fact that data for this study were obtained from only one health insurance fund. Since the DAK-Gesundheit insures more women and a population with a generally poorer health status [49], our results cannot be extrapolated to the entire care-receiving population in Germany. Nevertheless, with 5.6 million insured persons, the DAK-Gesundheit is one of Germany's largest health insurance funds [13].

Conclusion: In a large cohort of persons initially cared for at home, more than half moved to another care setting before death, most often to long-term nursing home care. Overall, about 4 in 10 persons died in hospital, with the highest proportions among those still receiving home care and those in semi-residential arrangements. Thus, there are still many unwanted and potentially preventable care transitions at the EOL in Germany.
Interventions are needed to improve EOL care in both professional and informal home care settings, including semi-residential arrangements. For example, ACP interventions have already proven effective, as have interventions to support informal caregivers. Moreover, outpatient palliative care should be improved. This means, inter alia, extending ACP offers to the home setting in Germany as well as providing better access to outpatient hospice and palliative care services.
Keywords: Home care recipients; German health insurance claims-data; Care settings; Place of death
Background: Due to demographic changes, the world's population is ageing, and more and more people will die in old age, often affected by multiple chronic diseases and with complex care needs [1]. The official German care statistics from 2019 reported a further significant increase in care dependency, to a total of 4.1 million people [2, 3]. Four fifths of them were cared for at home. The home care receiving group was younger than the nursing home residents, and the proportion of women (60%) was lower than in nursing homes [4]. Accordingly, end-of-life (EOL) care is important in this population and is increasingly being researched. Regardless of care setting and country, the majority of people wish to die at home [5–8], even if one should differentiate between ideal and actually preferred places of care and places of death, both of which can change over time [9]. A cross-national comparison of places of death from 2013, covering people aged over 65 years in 21 countries, showed a median of 54% of deaths in hospital and 18% in residential aged care facilities [10], with large differences between studies as well as between countries. Some countries – such as Germany – do not routinely compile data from death registrations that include information on care dependency, which makes it difficult to obtain representative data on place of death. First data are available on the distribution of places of death in Germany, showing an overall trend [11]. However, these data do not contain information on care dependency, and little is known about care transitions at the EOL among older people receiving home care. Can people who are cared for at home also die at home, as most would prefer? The aim of this explorative study was therefore to investigate the place of death of home care recipients in Germany, taking characteristics and changes in care settings into account.
Method: This retrospective study is part of the STudy on ADvance care PLANning (STADPLAN), funded by the German Federal Ministry of Education and Research (BMBF grant 01GL1707A-D). STADPLAN aims to evaluate the effect of an adapted advance care planning (ACP) program on patients' activation regarding healthcare issues in care-dependent, community-dwelling older persons [12].

Database, study population and outcome: Anonymised data for this study were obtained from the DAK-Gesundheit, a large statutory health and long-term care insurance (LTCI) fund representing approximately 5.6 million members (corresponding to 6.7% of the German population) [13]. The dataset included all insured persons aged at least 65 years who were cared for in their home setting on January 1, 2016, and consisted of different datasets merged via a unique identifier. For this study, all persons who died up to June 30, 2019 were included. Data on care dependency were obtained from the LTCI [14], providing information on services received, with start and end dates, and on the person's care need. At baseline, all persons included received services in their own home setting (1). According to the services received during follow-up, we differentiated between four further care settings: shared housing arrangements (2), where a small group of people live in private rooms while sharing a common space, domestic support and nursing care [15]; semi-residential arrangements (3), which provide temporary care support during the day or night in an institution [16]; and full residential care, referring to either a short-term stay (4, covered by the LTCI for a maximum of 58 days per year) or a long-term stay (5) in a nursing home [17]. Up to 2016, care-dependent persons were assigned to a level of care between 1 and 3, reflecting the time needed for daily help. These levels were converted into 5 care grades on January 1, 2017, reflecting a more comprehensive view of the person's independence and competences, considering physical, cognitive and psychological impairments [17]. Different claims datasets also contained information on demographics, date of death, outpatient care and hospitalisations. Hospital data held information on dates of admission and discharge, the respective diagnoses, and diagnostic as well as therapeutic procedures. Outpatient data contained diagnoses, including the level of diagnostic certainty (confirmed, suspected, ruled out and status post), treatments and procedures. All diagnoses were coded using the German modification of the International Classification of Diseases, 10th revision (ICD-10-GM). Our first outcome of interest was the care setting at time of death, differentiated by the 5 groups mentioned above. The second variable of interest was the actual place of death, which in addition to those care settings also includes hospitals. We also assessed the proportion of persons who died in hospital, defined as being in hospital on the day of death. As baseline characteristics, we assessed age at death (mean, and categorised into the four groups 65–74, 75–84, 85–94 and ≥ 95 years), sex and care need at death. We combined the old levels and new grades into 3 groups of care need: low (care level: 0/1; grade: 1/2), medium (care level: 2; grade: 3/4) and high (care level: 3; grade: 5). Duration of care dependency was calculated as the time in years between the start of receiving care in the home setting (at the latest January 1, 2016) and the date of death. Furthermore, we assessed confirmed outpatient diagnoses of cancer (ICD-10-GM: C00-D48 [18]) and dementia (ICD-10-GM: F00, F01, F02.0, F02.3, F03, G30, G31.0, G31.1, G31.82, G31.9, R54 [19]) in the quarter of death and the three quarters before death.
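As an illustration, the combination of the old care levels and the new care grades into the three care-need groups can be sketched as follows. This is a hypothetical sketch, not the study's actual analysis code (which was written in SAS); the function name and input encoding are invented.

```python
# Hypothetical sketch: collapsing the pre-2017 care levels (0-3) and the
# care grades valid from January 1, 2017 (1-5) into the three combined
# care-need groups used in the analysis. Names are illustrative only.

def care_need_group(scheme: str, value: int) -> str:
    """Return 'low', 'medium' or 'high' for a care level or care grade."""
    if scheme == "level":  # care levels used up to the end of 2016
        mapping = {0: "low", 1: "low", 2: "medium", 3: "high"}
    elif scheme == "grade":  # care grades used from January 1, 2017
        mapping = {1: "low", 2: "low", 3: "medium", 4: "medium", 5: "high"}
    else:
        raise ValueError(f"unknown scheme: {scheme!r}")
    return mapping[value]
```

Note that care level 0 is counted as "low" here, following the study's grouping (care level 0/1 and grade 1/2 together form the low care-need group).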
Statistical analysis: Firstly, the study population was described by age, care need, duration of care, care setting at death, and the presence of a cancer or dementia diagnosis. These measures were calculated overall and stratified by sex. Secondly, we examined the proportion dying in hospital by care setting. Finally, we characterised the deceased cohort stratified by actual place of death (hospital, home setting, shared housing arrangement, semi-residential arrangement, short-term care or long-term care in a nursing home). Descriptive measures were computed. We performed all analyses using SAS for Windows, version 9.4 (SAS Institute Inc., Cary, NC, United States).

Results: The entire cohort comprised 46,207 care-dependent people who were cared for at home on January 1, 2016. The target population included the 26,590 people (57.5%) who had died by June 30, 2019.

Baseline characteristics: Table 1 shows the characteristics of all deceased persons, including a comparison between males and females. Two thirds (66.6%) of the deceased cohort were female; more than 80% had a medium (53.3%) or high (29.6%) care need at death. The mean duration of care dependency at time of death was 3.7 years (SD: 2.8). About two thirds had a dementia diagnosis (64.1%) and one third a cancer diagnosis (36.1%). Most were cared for at home at time of death (48.1%), followed by long-term nursing home care (32.3%) and short-term nursing home care (11.7%).
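The central descriptive computation of the study – the share of in-hospital deaths, overall and stratified by care setting – can be sketched in a few lines. The study's analyses were run in SAS 9.4, so this pandas version with invented toy records and invented column names is purely illustrative.

```python
import pandas as pd

# Illustrative sketch only (the study used SAS 9.4). Toy records;
# 'setting_at_death' and 'died_in_hospital' are invented column names,
# the latter meaning "in hospital on the day of death".
deceased = pd.DataFrame({
    "setting_at_death": ["home", "home", "home", "nursing_home_long",
                         "nursing_home_long", "semi_residential"],
    "died_in_hospital": [True, True, False, False, True, True],
})

# Overall proportion of in-hospital deaths
# (mean of a boolean column = proportion True).
overall = deceased["died_in_hospital"].mean()

# The same proportion stratified by care setting, as in Table 2.
by_setting = deceased.groupby("setting_at_death")["died_in_hospital"].mean()
```

The same pattern extends to the other stratifications reported (sex, age group, care need, diagnosis) by grouping on the corresponding columns.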
Table 1: Characteristics of the deceased cohort (2016 to 2019)
(columns: Total (n = 26,590) | Female (n = 17,701) | Male (n = 8,889))
Age at death in years
  Mean (SD): 86.8 (7.4) | 87.5 (7.4) | 85.2 (7.2)
  65–74: 1,714 (6.5%) | 980 (5.5%) | 734 (8.3%)
  75–84: 7,671 (28.9%) | 4,601 (26.0%) | 3,070 (34.5%)
  85–94: 13,517 (50.8%) | 9,164 (51.8%) | 4,353 (49.0%)
  95+: 3,688 (13.9%) | 2,956 (16.7%) | 732 (8.2%)
Care need* at death
  Low (care level: 0/1; grade: 1/2): 4,558 (17.1%) | 3,174 (17.9%) | 1,384 (15.6%)
  Medium (care level: 2; grade: 3/4): 14,173 (53.3%) | 9,540 (53.9%) | 4,633 (52.1%)
  High (care level: 3; grade: 5): 7,859 (29.6%) | 4,987 (28.2%) | 2,872 (32.3%)
Diagnosis in the quarter of death or up to three quarters before death
  Dementia: 17,049 (64.1%) | 11,461 (64.7%) | 5,588 (62.9%)
  Cancer: 9,603 (36.1%) | 5,558 (31.4%) | 4,045 (45.5%)
Duration of care dependency at death in years
  Mean (SD): 3.7 (2.8) | 3.9 (2.8) | 3.4 (2.7)
Setting at death
  Home: 12,795 (48.1%) | 8,131 (45.9%) | 4,664 (52.5%)
  Long-term care in nursing home: 8,599 (32.3%) | 6,109 (34.5%) | 2,490 (28.0%)
  Short-term care in nursing home: 3,103 (11.7%) | 2,016 (11.4%) | 1,087 (12.2%)
  Semi-residential arrangement: 1,156 (4.4%) | 709 (4.0%) | 447 (5.0%)
  Shared housing arrangement: 937 (3.5%) | 736 (4.2%) | 201 (2.3%)
* The three German care levels were converted into 5 care grades on January 1, 2017.

Females had a higher mean age at death (87.5 years) than males (85.2 years).
Men were more likely to have a cancer diagnosis and had a shorter period of care dependency than women. Only small differences were found regarding care needs and the prevalence of dementia. More females received long-term care in nursing homes (34.5% vs. 28.0% of males), whereas more males were cared for in the home setting (52.5% vs. 45.9% of females).

In-hospital death: Table 2 shows the prevalence of in-hospital deaths in total and stratified by care setting.
Table 2: Prevalence of in-hospital death by care setting at time of death
(columns: Total (n = 26,590) | Home setting (n = 12,795) | Nursing home, long-term care (n = 8,599) | Nursing home, short-term care (n = 3,103) | Semi-residential arrangement (n = 1,156) | Shared housing arrangement (n = 937))
Sex
  Male: 39.1% | 46.3% | 28.7% | 30.5% | 43.2% | 34.8%
  Female: 35.8% | 43.8% | 26.8% | 32.8% | 44.3% | 23.2%
Age at death in years
  65–74: 47.9% | 55.2% | 29.3% | 42.0% | 56.4% | 45.2%
  75–84: 42.6% | 51.5% | 30.9% | 35.8% | 44.8% | 31.2%
  85–94: 35.2% | 42.3% | 27.3% | 31.1% | 44.3% | 22.0%
  95+: 26.2% | 31.6% | 21.1% | 23.1% | 29.9% | 13.1%
Care need* at death
  Low (level: 0/1; grade: 1/2): 60.7% | 71.1% | 39.6% | 45.4% | 75.8% | 50.0%
  Medium (level: 2; grade: 3/4): 38.4% | 47.2% | 29.2% | 32.5% | 53.5% | 32.6%
  High (level: 3; grade: 5): 20.3% | 22.0% | 17.5% | 20.4% | 22.8% | 17.1%
Diagnosis in the quarter of death or up to three quarters before death
  Dementia: 32.6% | 39.3% | 26.2% | 29.8% | 40.6% | 23.4%
  Cancer: 36.9% | 45.8% | 26.2% | 29.2% | 44.5% | 27.1%
Total: 36.9% | 44.7% | 27.3% | 32.0% | 43.9% | 25.7%
* The three German care levels were converted into 5 care grades on January 1, 2017.

In total, 36.9% of all deceased died in hospital (n = 9,811). The proportion of in-hospital deaths was slightly higher in men (39.1%) than in women (35.8%). With higher age, the proportion of in-hospital deaths decreased (47.9% among 65–74-year-olds versus 26.2% among persons aged 95 or older). Another trend can be seen regarding care need: more than 6 out of 10 persons with the lowest care need died in hospital, compared to one fifth in the group with the highest care need. No differences were found with respect to dementia and cancer. The prevalence of in-hospital death also varies by care setting.
Whereas less than 28% of those cared for in long-term nursing home care died in hospital, this proportion was highest in the home care setting (44.7%) and the semi-residential arrangement setting (43.9%). The sex difference was largest in the shared housing arrangement setting (23.2% in-hospital deaths among females versus 34.8% among males). The tendencies described above regarding age and care need were found in all care settings. Among people with a dementia diagnosis, the proportion of in-hospital deaths varies widely by care setting: it is lowest in the shared housing arrangement setting (23.4%) and highest in the semi-residential arrangement setting (40.6%).

Place of death: Looking more closely at the actual place of death (Table 3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and long-term nursing home care (23.5%). Nearly 13% died either in short-term care (7.9%), in a shared housing arrangement (2.6%) or in the semi-residential arrangement setting (2.4%).
Table 3: Characteristics of the deceased cohort by place of death (2016–2019)
(columns: Hospital (n = 9,811; 36.9%) | Home setting (n = 7,077; 26.6%) | Nursing home, long-term care (n = 6,247; 23.5%) | Nursing home, short-term care (n = 2,110; 7.9%) | Semi-residential arrangement (n = 649; 2.4%) | Shared housing arrangement (n = 696; 2.6%))
Sex
  Male: 35.4% | 35.4% | 28.4% | 35.8% | 39.1% | 18.8%
  Female: 64.6% | 64.6% | 71.6% | 64.2% | 60.9% | 81.2%
Age at death in years
  Mean (SD): 85.5 (7.4) | 87.3 (7.7) | 88.0 (6.9) | 87.2 (7.2) | 86.4 (7.1) | 86.8 (7.0)
  65–74: 8.4% | 6.4% | 4.1% | 5.3% | 5.2% | 4.9%
  75–84: 33.3% | 26.3% | 24.8% | 26.8% | 35.1% | 29.7%
  85–94: 48.5% | 51.1% | 53.6% | 52.6% | 47.0% | 54.9%
  95+: 9.8% | 16.2% | 17.5% | 15.3% | 12.6% | 10.5%
Care need* at death
  Low (level: 0/1; grade: 1/2): 28.2% | 11.4% | 9.8% | 15.6% | 4.6% | 2.9%
  Medium (level: 2; grade: 3/4): 55.5% | 45.4% | 60.6% | 55.4% | 41.6% | 42.2%
  High (level: 3; grade: 5): 16.3% | 43.3% | 29.7% | 29.1% | 53.8% | 54.9%
Diagnosis in the quarter of death or up to three quarters before death
  Dementia: 56.6% | 60.1% | 75.7% | 64.6% | 81.2% | 88.4%
  Cancer: 36.2% | 36.4% | 36.1% | 40.9% | 29.6% | 24.7%
* The three German care levels were converted into 5 care grades on January 1, 2017.

Overall, 18.8% of those dying in shared housing arrangements were male, versus 39.1% in semi-residential arrangements. People who died in hospital and in semi-residential arrangements were the youngest, with mean ages of 85.5 and 86.4 years, whereas the place of death with the oldest people was long-term nursing home care (88.0 years). The prevalence of dementia varied widely between places of death, from highest in shared housing arrangements (88.4%) to lowest in the home setting (60.1%) and hospital (56.6%). The highest cancer prevalence was found in the short-term care setting (40.9%) and the lowest in shared housing arrangements (24.7%).
When having a closer look at the actual place of death (Table3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and the nursing home (long-term) (23.5%). Nearly 13% either died in the short-term care (7.9%), in a shared housing arrangement (2.6%) or in the semi-residential arrangement setting (2.4%). Table 3Characteristics of deceased cohort by place of death (2016–2019)Hospital(n = 9,811; 36.9%)Home setting (n = 7,077;26.6%)Nursing home (long-term care) (n = 6,247; 23.5%)Nursing home (short-term care) (n = 2,110; 7.9%)Semi-residential arrangement (n = 649; 2.4%)Shared housing arrangement (n = 696;2.6%)SexMale35.4%35.4%28.4%35.8%39.1%18.8%Female64.6%64.6%71.6%64.2%60.9%81.2%Age at death in yearsMean (SD)85.5 (7.4)87.3 (7.7)88.0 (6.9)87.2 (7.2)86.4 (7.1)86.8 (7.0)65–748.4%6.4%4.1%5.3%5.2%4.9%75–8433.3%26.3%24.8%26.8%35.1%29.7%85–9448.5%51.1%53.6%52.6%47.0%54.9%95+9.8%16.2%17.5%15.3%12.6%10.5%Care need* at deathLow(level: 0/1, grade: 1/2)28.2%11.4%9.8%15.6%4.6%2.9%Medium(level: 2; grade: 3/4)55.5%45.4%60.6%55.4%41.6%42.2%High(level: 3; grade: 5)16.3%43.3%29.7%29.1%53.8%54.9%Account for diagnosis assessed in the quarter of death or up to three quarters before deathDementia56.6%60.1%75.7%64.6%81.2%88.4%Cancer36.2%36.4%36.1%40.9%29.6%24.7%* The three German care levels were modified into 5 care grades at 1st January 2017 Characteristics of deceased cohort by place of death (2016–2019) Low (level: 0/1, grade: 1/2) Medium (level: 2; grade: 3/4) High (level: 3; grade: 5) * The three German care levels were modified into 5 care grades at 1st January 2017 Overall, 18.8% of those dying in shared housing arrangements were male versus 39.1% in semi-residential arrangements. Deceased people in hospital and semi-residential arrangements were the youngest with 85.5 and 86.4 years in mean, whereas the place of death with the oldest people was the nursing home (long-term) (88.0 years). 
Baseline characteristics: Table 1 shows the characteristics of all deceased persons and the comparison between males and females. Two thirds (66.6%) of the deceased cohort were female, and more than 80% had a medium (53.3%) or high (29.6%) care need at death. The mean duration of care dependency at death was 3.7 years (SD: 2.8). About two thirds had a dementia diagnosis (64.1%) and one third a cancer diagnosis (36.1%). Most were cared for at home at the time of death (48.1%), followed by long-term nursing home care (32.3%) and short-term nursing home care (11.7%).

Table 1: Characteristics of deceased cohort (2016 to 2019)

| | Total (n = 26,590) | Female (n = 17,701) | Male (n = 8,889) |
|---|---|---|---|
| **Age at death in years, mean (SD)** | 86.8 (7.4) | 87.5 (7.4) | 85.2 (7.2) |
| 65–74 | 1,714 (6.5%) | 980 (5.5%) | 734 (8.3%) |
| 75–84 | 7,671 (28.9%) | 4,601 (26.0%) | 3,070 (34.5%) |
| 85–94 | 13,517 (50.8%) | 9,164 (51.8%) | 4,353 (49.0%) |
| 95+ | 3,688 (13.9%) | 2,956 (16.7%) | 732 (8.2%) |
| **Care need\* at death** | | | |
| Low (level: 0/1, grade: 1/2) | 4,558 (17.1%) | 3,174 (17.9%) | 1,384 (15.6%) |
| Medium (level: 2; grade: 3/4) | 14,173 (53.3%) | 9,540 (53.9%) | 4,633 (52.1%) |
| High (level: 3; grade: 5) | 7,859 (29.6%) | 4,987 (28.2%) | 2,872 (32.3%) |
| **Diagnosis assessed in the quarter of death or up to three quarters before death** | | | |
| Dementia | 17,049 (64.1%) | 11,461 (64.7%) | 5,588 (62.9%) |
| Cancer | 9,603 (36.1%) | 5,558 (31.4%) | 4,045 (45.5%) |
| **Duration of care dependency at death in years, mean (SD)** | 3.7 (2.8) | 3.9 (2.8) | 3.4 (2.7) |
| **Setting at death** | | | |
| Home | 12,795 (48.1%) | 8,131 (45.9%) | 4,664 (52.5%) |
| Long-term care in nursing home | 8,599 (32.3%) | 6,109 (34.5%) | 2,490 (28.0%) |
| Short-term care in nursing home | 3,103 (11.7%) | 2,016 (11.4%) | 1,087 (12.2%) |
| Semi-residential arrangement | 1,156 (4.4%) | 709 (4.0%) | 447 (5.0%) |
| Shared housing arrangement | 937 (3.5%) | 736 (4.2%) | 201 (2.3%) |

\* The three German care levels were converted into five care grades on 1 January 2017.

Females had a higher mean age at death (87.5 years) than males (85.2 years). Men were more likely to have a cancer diagnosis and had a shorter period of care dependency than women. Only small differences were found regarding care needs and the prevalence of dementia. More females received long-term care in nursing homes (34.5% vs. 28.0% of males), whereas more males were cared for at home (52.5% vs. 45.9% of females).

In-hospital death: Table 2 shows the prevalence of in-hospital deaths in total and stratified by care setting.
Table 2: Prevalence of in-hospital death by care setting

| | Total (n = 26,590) | Home setting (n = 12,795) | Nursing home, long-term care (n = 8,599) | Nursing home, short-term care (n = 3,103) | Semi-residential arrangement (n = 1,156) | Shared housing arrangement (n = 937) |
|---|---|---|---|---|---|---|
| **Sex** | | | | | | |
| Male | 39.1% | 46.3% | 28.7% | 30.5% | 43.2% | 34.8% |
| Female | 35.8% | 43.8% | 26.8% | 32.8% | 44.3% | 23.2% |
| **Age at death in years** | | | | | | |
| 65–74 | 47.9% | 55.2% | 29.3% | 42.0% | 56.4% | 45.2% |
| 75–84 | 42.6% | 51.5% | 30.9% | 35.8% | 44.8% | 31.2% |
| 85–94 | 35.2% | 42.3% | 27.3% | 31.1% | 44.3% | 22.0% |
| 95+ | 26.2% | 31.6% | 21.1% | 23.1% | 29.9% | 13.1% |
| **Care need\* at death** | | | | | | |
| Low (level: 0/1, grade: 1/2) | 60.7% | 71.1% | 39.6% | 45.4% | 75.8% | 50.0% |
| Medium (level: 2; grade: 3/4) | 38.4% | 47.2% | 29.2% | 32.5% | 53.5% | 32.6% |
| High (level: 3; grade: 5) | 20.3% | 22.0% | 17.5% | 20.4% | 22.8% | 17.1% |
| **Diagnosis assessed in the quarter of death or up to three quarters before death** | | | | | | |
| Dementia | 32.6% | 39.3% | 26.2% | 29.8% | 40.6% | 23.4% |
| Cancer | 36.9% | 45.8% | 26.2% | 29.2% | 44.5% | 27.1% |
| **Total** | 36.9% | 44.7% | 27.3% | 32.0% | 43.9% | 25.7% |

\* The three German care levels were converted into five care grades on 1 January 2017.

In total, 36.9% of all deceased persons died in hospital (n = 9,811). The proportion of in-hospital deaths was slightly higher in men (39.1%) than in women (35.8%) and decreased with higher age (47.9% among those aged 65–74 versus 26.2% among those aged 95 or older). A similar trend was seen for care need: more than six out of ten persons with the lowest care need died in hospital, compared to one fifth of the group with the highest care need. No differences were found with respect to dementia and cancer. The prevalence of in-hospital death also varied by care setting.
Whereas less than 28% of the deceased cared for in long-term nursing home care died in hospital, this proportion was highest in the home care setting (44.7%) and the semi-residential arrangement setting (43.9%). The sex difference was largest in the shared housing arrangement setting (23.2% in-hospital deaths among females versus 34.8% among males). The above-mentioned tendencies regarding age and care need were found in all care settings. Among people with a dementia diagnosis, the proportion of in-hospital deaths varied widely by care setting, from lowest in shared housing arrangements (23.4%) to highest in semi-residential arrangements (40.6%). Place of death: Looking at the actual place of death (Table 3), most persons died in hospital (36.9%), followed by the home setting (26.6%) and long-term nursing home care (23.5%). Nearly 13% died either in short-term care (7.9%), in a shared housing arrangement (2.6%), or in a semi-residential arrangement (2.4%).
Table 3: Characteristics of deceased cohort by place of death (2016–2019)

| | Hospital (n = 9,811; 36.9%) | Home setting (n = 7,077; 26.6%) | Nursing home, long-term care (n = 6,247; 23.5%) | Nursing home, short-term care (n = 2,110; 7.9%) | Semi-residential arrangement (n = 649; 2.4%) | Shared housing arrangement (n = 696; 2.6%) |
|---|---|---|---|---|---|---|
| **Sex** | | | | | | |
| Male | 35.4% | 35.4% | 28.4% | 35.8% | 39.1% | 18.8% |
| Female | 64.6% | 64.6% | 71.6% | 64.2% | 60.9% | 81.2% |
| **Age at death in years** | | | | | | |
| Mean (SD) | 85.5 (7.4) | 87.3 (7.7) | 88.0 (6.9) | 87.2 (7.2) | 86.4 (7.1) | 86.8 (7.0) |
| 65–74 | 8.4% | 6.4% | 4.1% | 5.3% | 5.2% | 4.9% |
| 75–84 | 33.3% | 26.3% | 24.8% | 26.8% | 35.1% | 29.7% |
| 85–94 | 48.5% | 51.1% | 53.6% | 52.6% | 47.0% | 54.9% |
| 95+ | 9.8% | 16.2% | 17.5% | 15.3% | 12.6% | 10.5% |
| **Care need\* at death** | | | | | | |
| Low (level: 0/1, grade: 1/2) | 28.2% | 11.4% | 9.8% | 15.6% | 4.6% | 2.9% |
| Medium (level: 2; grade: 3/4) | 55.5% | 45.4% | 60.6% | 55.4% | 41.6% | 42.2% |
| High (level: 3; grade: 5) | 16.3% | 43.3% | 29.7% | 29.1% | 53.8% | 54.9% |
| **Diagnosis assessed in the quarter of death or up to three quarters before death** | | | | | | |
| Dementia | 56.6% | 60.1% | 75.7% | 64.6% | 81.2% | 88.4% |
| Cancer | 36.2% | 36.4% | 36.1% | 40.9% | 29.6% | 24.7% |

\* The three German care levels were converted into five care grades on 1 January 2017.

Overall, 18.8% of those dying in shared housing arrangements were male, versus 39.1% in semi-residential arrangements. Persons who died in hospital or in semi-residential arrangements were the youngest (mean 85.5 and 86.4 years), whereas those who died in long-term nursing home care were the oldest (mean 88.0 years). The prevalence of dementia varied widely between places of death, from highest in shared housing arrangements (88.4%) to lowest in the home setting (60.1%) and hospital (56.6%). Cancer prevalence was highest in the short-term care setting (40.9%) and lowest in shared housing arrangements (24.7%).
Discussion:

Findings and comparison with the literature: In care-dependent people initially receiving home care, 57.5% died within 3.5 years. Overall, 36.9% died in hospital, and in-hospital deaths were most frequent among those still receiving home care or care in semi-residential arrangements at the time of death. People who died in hospital were younger and had lower care dependency than those in all other analysed care settings. More than half of the people initially receiving home care moved to another care setting before death (44.0% to long- or short-term nursing home care, 4.4% to semi-residential arrangements, and 3.5% to shared housing arrangements).

Actual place of death: Nearly 37% of all deaths took place in hospitals, which is the most common place of death in our care-receiving cohort as well as in the total German population [20–22] and in most other countries [23–25]. After the hospital, the own home was the second most common place of death with 26.6%, which was also found by Dasch et al. (21.3%) for 2017, although their representative German cohort of all persons dying had a mean age of 77.6 years, almost 10 years younger than our care-receiving cohort [22]. Moreover, Herbst et al. analysed two random samples of German death certificates from 2007 and 2017. They showed that while home was still the second most frequent place of death in 2007 (26.1%), it slid to third place in 2017 with 19.8% [20]. Taken together, published studies indicate that the likelihood of dying at home has been decreasing in recent years [20, 22, 26]. A review from 1998 already investigated the relation between patient characteristics and home deaths and found that improved access to home care is likely to increase home deaths for older people [27]. Palliative home care and hospice care in particular are associated with fewer hospitalisations and more home deaths [28]. There are, however, several further factors influencing death at home, for example patients' functional status, their preferences, living with relatives, and extended family support [29]. In the present cohort, 23.5% died in nursing homes, which is a little higher than in the younger representative sample of Dasch et al. (20.4%) but lower than in the random sample of Herbst et al. (27.1%) [20, 30]. The older the people, the higher the probability of dying in a nursing home instead of a hospital [31]; in our study, too, nursing home residents receiving long-term care were the oldest group. Shared housing arrangements were not considered in previous studies investigating place of death, even though this setting is an increasingly used, more family-like alternative to long-term nursing home care in Germany [15, 33, 34]. To the authors' knowledge, there is also no data on the frequency of transitions to short-term care before death or on the nursing home as place of death for short-term care recipients. Studies based on German death certificate data cannot include care information, because it is not routinely recorded in these documents. Our study shows that both settings are relevant and should be included in future studies on EOL care. Quantification and a deeper understanding of all possible care transitions at the end of life are important to estimate the relevance of, and trends in, place of death from a public health perspective. Although the hospice is still a rare place of death and unfortunately not covered in the present data, its share as a place of death in Germany has been shown to increase in recent years [22].

Hospital death by care setting: As in the present cohort, the proportion of in-hospital death in older, care-receiving people seems to be smaller than in the general population [25]. The availability of formal versus informal care seems to influence hospital death rates. In the Netherlands, older people receiving informal care were more likely to die in hospital than people receiving formal home care or institutional care [35].
Another Dutch study showed that, for people who received only informal care in their last three months of life, the odds of dying in hospital were much higher than for those who received a combination of formal and informal home care [36]. In the present cohort, the proportion of in-hospital death was also highest in people receiving home care (44.7%), where the largest proportion of informal care can certainly be found. The proportion of in-hospital deaths was lower in our group of nursing home residents receiving long-term care. Although this proportion is in line with previous German analyses [37, 38], it is somewhat higher by international comparison [39]. Nevertheless, the proportion of in-hospital deaths among nursing home residents varies markedly internationally, even within countries, with an overall median of 22.6% [39]. There are further factors influencing the risk of dying in hospital for elderly, care-receiving people, such as care level, age, and sex. In our cohort, the younger the people and the lower the care level, the higher the proportion of in-hospital death; the same was found by previous studies [10, 39, 40]. This could partly explain why men in our cohort were slightly more likely to die in hospital than women. However, there is increasing evidence of "real" sex-specific differences in burdensome interventions such as care transitions or invasive procedures at the EOL, and future studies should put more emphasis on sex-specific analyses [41].

Moving to other care settings before death: Our result regarding the frequent transition from home to other care settings before death indicates that home care cannot always be maintained until the EOL, although most patients wish for it [5–8]. International studies have already shown that the frequency of care setting transitions of elderly people increases close to death [42].
For example, in the Netherlands nearly half of home-living persons aged 55–85 were transferred between care settings one or more times in the last three months of life, mostly from home to hospital [35]. Care setting transitions at the EOL are seen as increasingly problematic, also because of potential medication and care errors, disrupted care teams, and a loss of care information [20, 27, 31], even if these transitions can be a relief for family caregivers [23]. Regarding the German situation, it should also be mentioned that structures for outpatient palliative care have been introduced only within the last 15 years; the growing number of general and specialist outpatient palliative care services (in German: AAPV and SAPV) has expanded the possibilities for outpatient palliative care [43], which can strengthen the quality of care at home at the patient's EOL and thereby prevent unwanted care transitions. However, care at home should not automatically be equated with the best care [9, 44], because institutionalised palliative care such as hospice care and in-hospital palliative care can improve the quality of dying and death [45, 46]. Overall, care decisions should always be weighed individually to enable appropriate and timely care setting transitions in accordance with individualised EOL care needs [47]. Several indicators have already been reported to be associated with a risk of care transitions. In the UK, people with severe cognitive impairment were the group most likely to move to other care settings [42]. Our results also show that people with dementia more often died in a care setting other than home. Another German study analysed predictors of nursing home admission in care-dependent people based on longitudinal secondary data and also found dementia, cognitive impairment, cancer of the brain, and higher age as risk factors, which is in line with our results [48].
Our result regarding the frequent transition from home to other care settings before death indicates that home care cannot always be maintained until the EOL, although most patients wish so [5–8]. It was already shown in international studies that the frequency of care setting transitions of elderly people increases near to death [42]. For example, in the Netherlands nearly half of their 55–85 years old home-living persons was transferred between care settings one or more times in the last 3 months of life, mostly from home to hospital [35]. Care setting transitions at the EOL are seen as increasingly problematic, also because of potential medication and care errors, disrupting care teams, and a loss of care information [20, 27, 31], even if these transitions have the potential to be a relief for family caregivers [23]. Looking at the German situation, it also should be mentioned that structures in outpatient palliative care have been introduced just within the last 15 years and the growing number of general and specialist outpatient palliative care services (in German: AAPV and SAPV) provides more possibilities of outpatient palliative care since the last years [43], which can strengthen the quality of care at home at the patient’s EOL. Therewith unwanted care transitions can also be prevented. However, the care at home should not automatically be equated with the best care [9, 44] because institutionalised palliative care like hospice care and in-hospital palliative care can improve the quality of dying and death [45, 46]. Overall, care decisions always should be weighed individually to enable appropriate and timely care setting transitions in accordance with individualised EOL care needs [47]. There are different indicators already mentioned being associated with a risk of care transitions. In the UK, that people with severe cognitive impairment were the most likely group to move to other care settings [42]. 
Our results also show that the people with dementia more often died in another care setting than home. Another German study analysed predictors of admission to nursing home in care dependent people based on longitudinal secondary data and also found dementia, cognitive impairment, cancer of the brain and higher age as risk factors, which goes in line with our results [48]. Strengths and Limitations The strengths of this study are its real-world character, its large sample size which allowed us to stratify the analyses by sex, age and other variables. Furthermore, we had valid information on care setting pathways and place of death. Just like the strengths, the limitations are based to the nature of the administrative data from LTCI funds. The data were not captured for the purpose of scientific research and further information that could influence the placement and dying in different care settings (clinical data, socioeconomic status, marital status or family support, respectively) were not available. The same applies to further information related to the specific care recipient’s institution, e.g. staffing ratios or the nursing home’s ownership. For the ones living in semi-residential arrangements we were not able to differentiate between the care-recipients died at home or during day or night care, respectively. For the shared housing-arrangements an underreporting is possible since in this case care providers might only invoice other benefits for instance for nursing home care. Besides, our data did not contain palliative care units and hospices as places of death. However, their joint proportion on places of death in Germany is 11% [22]. Another limitation relates to the fact that data for this study were only obtained from one health insurance fund. Since the DAK-Gesundheit insures more women and a population with a generally poorer health status [49], our results cannot be extrapolated to the entire care receiving population in Germany. 
Nevertheless, the DAK-Gesundheit is with 5.6million insured persons one of Germany’s largest health insurance funds [13]. The strengths of this study are its real-world character, its large sample size which allowed us to stratify the analyses by sex, age and other variables. Furthermore, we had valid information on care setting pathways and place of death. Just like the strengths, the limitations are based to the nature of the administrative data from LTCI funds. The data were not captured for the purpose of scientific research and further information that could influence the placement and dying in different care settings (clinical data, socioeconomic status, marital status or family support, respectively) were not available. The same applies to further information related to the specific care recipient’s institution, e.g. staffing ratios or the nursing home’s ownership. For the ones living in semi-residential arrangements we were not able to differentiate between the care-recipients died at home or during day or night care, respectively. For the shared housing-arrangements an underreporting is possible since in this case care providers might only invoice other benefits for instance for nursing home care. Besides, our data did not contain palliative care units and hospices as places of death. However, their joint proportion on places of death in Germany is 11% [22]. Another limitation relates to the fact that data for this study were only obtained from one health insurance fund. Since the DAK-Gesundheit insures more women and a population with a generally poorer health status [49], our results cannot be extrapolated to the entire care receiving population in Germany. Nevertheless, the DAK-Gesundheit is with 5.6million insured persons one of Germany’s largest health insurance funds [13]. Findings and comparison with the literature: In care-dependent people initially receiving home care, 57.5% died within 3.5 years. 
Overall, 36.9% died in hospital and in-hospital deaths were found most often in those still receiving home care as well as care in semi-residential arrangements at the time of death. People who died in hospital were younger and had lower care dependency compared to all other analysed care settings. More than half of home care receiving people moved to another (care-) setting before death (44.0% either to long- or short-term care in nursing home, 4.4% to semi-residential arrangements, plus 3.5% to shared housing arrangements). Actual place of death: Nearly 37% of all deaths took place in hospitals, which is the most common place of death in our care receiving cohort as well as in the total German population [20–22] and in those of most other countries [23–25]. After the hospital, the own home was found as the second most common place of death with 26.6%, which was also found by Dasch et al. (21.3%) for 2017 although their representative German cohort of all persons dying was in mean 77.6 years old, almost 10 years younger than our care receiving cohort [22]. Moreover, Herbst et al. analysed two random samples of German death certificates from 2007 and from 2017. They showed that while in 2007 home also was the second most frequent place of death (26.1%), it slid to third place in 2017 with 19.8% [20]. Taking results of published studies together, the likelihood to die at home has been decreasing since recent years [20, 22, 26]. In a review of 1998, the authors already investigated the relation between patient characteristics and home deaths [27]. They found out that improved access to home care is likely to increase home deaths for older people [27]. Especially palliative home care and hospice care are associated with fewer hospitalisations and more home deaths [28]. But there are several more potential factors influencing death at home, for example patients functional status, their preferences, living with relatives, and extended family support [29]. 
The present cohort showed that 23.5% died in nursing homes, which is a little higher than found in the younger-aged representative sample by Dasch et al. (20.4%) but lower than in the random sample of Herbst et al. with 27.1% [20, 30]. The older the people, the higher the probability to die in a nursing home instead of a hospital [31]. Also in our study nursing home residents receiving long-term care were the oldest group. Shared housing arrangements were not considered in previous studies investigating place of death, even if this setting can be seen as an increasingly used, familiar care alternative for long-term nursing homes in Germany [15, 33, 34]. To the author’s knowledge there is also no data on the frequency of transitions to short-term care before death as well as nursing home as place of death for short-term care recipients. Studies based on German death certificate data cannot include care information, because they are not routinely covered in these documents. Our study shows that both settings are of relevance and should be included in future studies in EOL care. Quantification and deeper understanding of all possible care transitions at the end of life are important to estimate the relevance and trends for place of death from a public health perspective. Even if the hospice as place of death is still rare and unfortunately not covered in the present data, it has been shown to increase as place of death in Germany in recent years [22]. Hospital death by care setting: As in the present cohort, the proportion of in-hospital death in older, care receiving people seems to be smaller compared to the general population [25]. The availability of formal versus informal care seems to influence hospital death rates. In the Netherlands older people receiving informal care were more likely to die in hospital than people receiving formal home care or institutional care [35]. 
Another Dutch study showed for Dutch people who only received informal care in their last three months of life that the odds of dying in a hospital was much higher compared to those who received a combination of formal and informal home care [36]. In the present cohort, the proportion of in-hospital death was also highest in people receiving home care (44.7%), where the largest proportion of informal care can certainly be found. The proportion of in-hospital deaths was lower in our group of nursing home residents receiving long-term care. Although this proportion goes in line with previous German analyses [37, 38], it is, however, internationally compared somewhat higher [39]. Nevertheless, the proportion of in-hospital deaths among nursing-home residents internationally varies markedly even within countries with an overall median of 22.6% [39]. There are other possible factors influencing the risk of dying in hospital for elderly, care receiving people like the care level, age and sex. In our cohort, the younger the people and the lower the care level, the higher was the proportion of in-hospital death. The same was found by previous studies [10, 39, 40]. This could possibly partly explain, why men in our cohort were a little more likely to die in hospital than women. However, there is increasing evidence of “real” sex-specific differences in burdensome interventions like transitions of care or invasive procedures during EOL and future studies should put more emphasis on sex-specific analyses [41]. Moving to other care settings before death: Our result regarding the frequent transition from home to other care settings before death indicates that home care cannot always be maintained until the EOL, although most patients wish so [5–8]. It was already shown in international studies that the frequency of care setting transitions of elderly people increases near to death [42]. 
For example, in the Netherlands nearly half of home-living persons aged 55–85 years were transferred between care settings one or more times in the last three months of life, mostly from home to hospital [35]. Care setting transitions at the EOL are seen as increasingly problematic, not least because of potential medication and care errors, the disruption of care teams, and a loss of care information [20, 27, 31], even if these transitions have the potential to be a relief for family caregivers [23]. Looking at the German situation, it should also be mentioned that structures for outpatient palliative care have been introduced only within the last 15 years, and the growing number of general and specialist outpatient palliative care services (in German: AAPV and SAPV) has provided more options for outpatient palliative care in recent years [43], which can strengthen the quality of care at home at the patient’s EOL. This may also prevent unwanted care transitions. However, care at home should not automatically be equated with the best care [9, 44], because institutionalised palliative care such as hospice care and in-hospital palliative care can improve the quality of dying and death [45, 46]. Overall, care decisions should always be weighed individually to enable appropriate and timely care setting transitions in accordance with individualised EOL care needs [47]. Several indicators have already been reported to be associated with a risk of care transitions. In the UK, people with severe cognitive impairment were the group most likely to move to other care settings [42]. Our results also show that people with dementia more often died in a care setting other than home. Another German study analysed predictors of nursing home admission in care-dependent people based on longitudinal secondary data and also found dementia, cognitive impairment, cancer of the brain and higher age to be risk factors, which is in line with our results [48]. 
Strengths and Limitations: The strengths of this study are its real-world character and its large sample size, which allowed us to stratify the analyses by sex, age and other variables. Furthermore, we had valid information on care setting pathways and place of death. Like the strengths, the limitations arise from the nature of the administrative data from LTCI funds. The data were not collected for the purpose of scientific research, and further information that could influence placement and dying in different care settings (clinical data, socioeconomic status, marital status and family support) was not available. The same applies to information on the specific care recipient’s institution, e.g. staffing ratios or the nursing home’s ownership. For those living in semi-residential arrangements, we were not able to differentiate whether care recipients died at home or during day or night care. For shared housing arrangements, underreporting is possible, since in this case care providers might only invoice other benefits, for instance for nursing home care. In addition, our data did not contain palliative care units and hospices as places of death. However, their joint share of places of death in Germany is 11% [22]. Another limitation relates to the fact that data for this study were obtained from only one health insurance fund. Since the DAK-Gesundheit insures more women and a population with a generally poorer health status [49], our results cannot be extrapolated to the entire care-receiving population in Germany. Nevertheless, with 5.6 million insured persons, the DAK-Gesundheit is one of Germany’s largest health insurance funds [13]. Conclusion: In a large cohort of persons who were initially cared for at home, more than half moved to another care setting before death, most often to long-term nursing homes. 
Overall, about 4 in 10 persons died in hospital, with the highest proportions in those still receiving home care as well as care in semi-residential arrangements. Thus, there are still many unwanted and potentially preventable care transitions at the EOL in Germany. Interventions are needed to improve EOL care in both professional and informal home care settings, including semi-residential arrangements. For example, advance care planning (ACP) interventions have already proved effective, as have interventions to support informal caregivers. Moreover, outpatient palliative care should be improved. This means, inter alia, extending the offer of ACP to the home setting in Germany as well as better access to outpatient hospice and palliative care services.
Background: Most care-dependent people live at home, where they would also prefer to die. Unfortunately, this wish is often not fulfilled. This study aims to investigate the place of death of home care recipients, taking characteristics and changes in care settings into account. Methods: We retrospectively analysed a cohort of all home-care-receiving people of a German statutory health insurance fund who were at least 65 years old and who died between January 2016 and June 2019. In addition to care need, duration of care, age, sex and disease, the care setting at death and the place of death were considered. We examined the characteristics by place of care, the proportion of dying in hospital by care setting, and characterised the deceased cohort stratified by their actual place of death. Results: Of 46,207 care-dependent people initially receiving home care, 57.5% died within 3.5 years (n = 26,590; mean age: 86.8; 66.6% female). More than half of those moved to another care setting before death, with long-term nursing home care (32.3%) and short-term nursing home care (11.7%) being the most frequent transitions, while 48.1% were still cared for at home. Overall, 36.9% died in hospital, and in-hospital deaths were found most often in those still receiving home care (44.7%) as well as care in semi-residential arrangements (43.9%) at the time of death. People who died in hospital were younger (mean age: 85.5 years) and had lower care dependency (low care need: 28.2%) than in all other analysed care settings. Conclusions: In Germany, changes in care settings before death occur often. The proportion of in-hospital death is particularly high in the home setting and in semi-residential arrangements. These settings should be considered in interventions aiming to decrease the number of unwanted care transitions and hospitalisations at the end of life.
Background: Due to demographic changes the world’s population is ageing, and more and more people will die in old age, often affected by multiple chronic diseases and with complex care needs [1]. The official German care statistics from 2019 reported a further significant increase in care dependency, to a total of 4.1 million people [2, 3]. Four fifths of them were cared for at home. The home-care-receiving group was younger than the nursing home residents, and the proportion of women (60%) was lower than in nursing homes [4]. Accordingly, end-of-life (EOL) care is important in this population and is increasingly being researched. Regardless of the care setting and the country, the majority of people wish to die at home [5–8], even though one should differentiate between ideal and actual preferred places of care as well as places of death, and both can change over time [9]. A cross-national comparison of places of death from 2013, including people aged over 65 years in 21 countries, showed a median of 54% of deaths in hospital and 18% in residential aged care facilities [10], both with large differences between studies as well as between countries. Some countries – such as Germany – do not routinely compile data from death registrations that include information on care dependency, which makes it difficult to obtain representative data regarding place of death. First data are available on the distribution of places of death in Germany, showing an overall trend in places of death [11]. However, these data do not contain information on care dependency, and little is known about care transitions at the EOL in the group of older home care recipients. Can the people who are cared for at home also die at home, as most would prefer? Therefore, the aim of this explorative study was to investigate the place of death of home care recipients in Germany, taking characteristics and changes in care settings into account. 
Conclusion: In a large cohort of persons who were initially cared for at home, more than half moved to another care setting before death, most often to long-term nursing homes. Overall, about 4 in 10 persons died in hospital, with the highest proportions in those still receiving home care as well as care in semi-residential arrangements. Thus, there are still many unwanted and potentially preventable care transitions at the EOL in Germany. Interventions are needed to improve EOL care in both professional and informal home care settings, including semi-residential arrangements. For example, advance care planning (ACP) interventions have already proved effective, as have interventions to support informal caregivers. Moreover, outpatient palliative care should be improved. This means, inter alia, extending the offer of ACP to the home setting in Germany as well as better access to outpatient hospice and palliative care services.
Background: Most care-dependent people live at home, where they would also prefer to die. Unfortunately, this wish is often not fulfilled. This study aims to investigate the place of death of home care recipients, taking characteristics and changes in care settings into account. Methods: We retrospectively analysed a cohort of all home-care-receiving people of a German statutory health insurance fund who were at least 65 years old and who died between January 2016 and June 2019. In addition to care need, duration of care, age, sex and disease, the care setting at death and the place of death were considered. We examined the characteristics by place of care, the proportion of dying in hospital by care setting, and characterised the deceased cohort stratified by their actual place of death. Results: Of 46,207 care-dependent people initially receiving home care, 57.5% died within 3.5 years (n = 26,590; mean age: 86.8; 66.6% female). More than half of those moved to another care setting before death, with long-term nursing home care (32.3%) and short-term nursing home care (11.7%) being the most frequent transitions, while 48.1% were still cared for at home. Overall, 36.9% died in hospital, and in-hospital deaths were found most often in those still receiving home care (44.7%) as well as care in semi-residential arrangements (43.9%) at the time of death. People who died in hospital were younger (mean age: 85.5 years) and had lower care dependency (low care need: 28.2%) than in all other analysed care settings. Conclusions: In Germany, changes in care settings before death occur often. The proportion of in-hospital death is particularly high in the home setting and in semi-residential arrangements. These settings should be considered in interventions aiming to decrease the number of unwanted care transitions and hospitalisations at the end of life.
13,092
378
[ 374, 1697, 676, 132, 550, 484, 434, 123, 586, 374, 430, 309 ]
15
[ "care", "death", "home", "hospital", "setting", "nursing", "term", "people", "level", "nursing home" ]
[ "risk dying hospital", "care places death", "german care statistics", "death older care", "hospitalisations home deaths" ]
null
[CONTENT] Home care recipients | German health insurance claims-data | Care settings | Place of death [SUMMARY]
null
[CONTENT] Home care recipients | German health insurance claims-data | Care settings | Place of death [SUMMARY]
[CONTENT] Home care recipients | German health insurance claims-data | Care settings | Place of death [SUMMARY]
[CONTENT] Home care recipients | German health insurance claims-data | Care settings | Place of death [SUMMARY]
[CONTENT] Home care recipients | German health insurance claims-data | Care settings | Place of death [SUMMARY]
[CONTENT] Aged, 80 and over | Female | Germany | Home Care Services | Hospital Mortality | Humans | Insurance, Health | Male | Retrospective Studies | Terminal Care [SUMMARY]
null
[CONTENT] Aged, 80 and over | Female | Germany | Home Care Services | Hospital Mortality | Humans | Insurance, Health | Male | Retrospective Studies | Terminal Care [SUMMARY]
[CONTENT] Aged, 80 and over | Female | Germany | Home Care Services | Hospital Mortality | Humans | Insurance, Health | Male | Retrospective Studies | Terminal Care [SUMMARY]
[CONTENT] Aged, 80 and over | Female | Germany | Home Care Services | Hospital Mortality | Humans | Insurance, Health | Male | Retrospective Studies | Terminal Care [SUMMARY]
[CONTENT] Aged, 80 and over | Female | Germany | Home Care Services | Hospital Mortality | Humans | Insurance, Health | Male | Retrospective Studies | Terminal Care [SUMMARY]
[CONTENT] risk dying hospital | care places death | german care statistics | death older care | hospitalisations home deaths [SUMMARY]
null
[CONTENT] risk dying hospital | care places death | german care statistics | death older care | hospitalisations home deaths [SUMMARY]
[CONTENT] risk dying hospital | care places death | german care statistics | death older care | hospitalisations home deaths [SUMMARY]
[CONTENT] risk dying hospital | care places death | german care statistics | death older care | hospitalisations home deaths [SUMMARY]
[CONTENT] risk dying hospital | care places death | german care statistics | death older care | hospitalisations home deaths [SUMMARY]
[CONTENT] care | death | home | hospital | setting | nursing | term | people | level | nursing home [SUMMARY]
null
[CONTENT] care | death | home | hospital | setting | nursing | term | people | level | nursing home [SUMMARY]
[CONTENT] care | death | home | hospital | setting | nursing | term | people | level | nursing home [SUMMARY]
[CONTENT] care | death | home | hospital | setting | nursing | term | people | level | nursing home [SUMMARY]
[CONTENT] care | death | home | hospital | setting | nursing | term | people | level | nursing home [SUMMARY]
[CONTENT] care | places | death | home | places death | people | data | die | countries | information care dependency [SUMMARY]
null
[CONTENT] care | usepackage | grade | level grade | level | 26 | death | setting | 29 | arrangement [SUMMARY]
[CONTENT] care | acp | informal | interventions | palliative care | home | palliative | outpatient | germany | eol [SUMMARY]
[CONTENT] care | home | death | hospital | setting | people | data | term | level | grade [SUMMARY]
[CONTENT] care | home | death | hospital | setting | people | data | term | level | grade [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] 46,207 | 57.5% | within 3.5 years | 26,590 | 86.8 | 66.6% ||| More than half | 32.3% | 11.7% | 48.1% ||| 36.9% | 44.7% | 43.9% ||| 85.5 years | 28.2% [SUMMARY]
[CONTENT] Germany ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| German | at least 65 years | between January 2016 and June 2019 ||| ||| ||| 46,207 | 57.5% | within 3.5 years | 26,590 | 86.8 | 66.6% ||| More than half | 32.3% | 11.7% | 48.1% ||| 36.9% | 44.7% | 43.9% ||| 85.5 years | 28.2% ||| Germany ||| ||| [SUMMARY]
[CONTENT] ||| ||| ||| German | at least 65 years | between January 2016 and June 2019 ||| ||| ||| 46,207 | 57.5% | within 3.5 years | 26,590 | 86.8 | 66.6% ||| More than half | 32.3% | 11.7% | 48.1% ||| 36.9% | 44.7% | 43.9% ||| 85.5 years | 28.2% ||| Germany ||| ||| [SUMMARY]
Modern psychometrics applied in rheumatology--a systematic review.
23114105
Although item response theory (IRT) appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, this study systematically reviewed the application of IRT to patient-reported and clinical outcome measures in rheumatology.
BACKGROUND
Literature searches in PubMed, Scopus and Web of Science resulted in 99 original English-language articles which used some form of IRT-based analysis of patient-reported or clinical outcome data in patients with a rheumatic condition. Both general study information and IRT-specific information were assessed.
METHODS
Most studies used Rasch modeling for developing or evaluating new or existing patient-reported outcomes in rheumatoid arthritis or osteoarthritis patients. Outcomes of principal interest were physical functioning and quality of life. Over the last decade, IRT has also been applied to clinical measures more frequently. IRT was mostly used for evaluating model fit, unidimensionality and differential item functioning, the distribution of items and persons along the underlying scale, and reliability. Less frequently used IRT applications were the evaluation of local independence, the threshold ordering of items, and the measurement precision along the scale.
RESULTS
IRT applications have markedly increased within rheumatology over the past decades. To date, IRT has primarily been applied to patient-reported outcomes; however, applications to clinical measures are gaining interest. Useful IRT applications not yet widely used within rheumatology include the cross-calibration of instrument scores and the development of computerized adaptive tests, which may reduce the measurement burden for both the patient and the clinician. Also, the measurement precision of outcome measures along the scale was only evaluated occasionally. Performed IRT analyses should be adequately explained, justified, and reported. A global consensus on uniform guidelines should be reached concerning the minimum set of assumptions that should be met and the best ways of testing these assumptions, in order to stimulate the quality appraisal of performed IRT analyses.
CONCLUSION
[ "Animals", "Humans", "Psychometrics", "Rheumatic Diseases", "Rheumatology" ]
3517453
Background
Since there is no gold standard for the assessment of disease severity and impact in most rheumatic conditions, it is common practice to administer multiple outcome measures to patients. Initially, the severity and impact of most rheumatic conditions were typically evaluated with clinical measures (CMs) [1,2], such as laboratory measures of inflammation like the erythrocyte sedimentation rate [3] and physician-based joint counts [4,5]. Since the 1980s, however, rheumatologists have increasingly started to use patient-reported outcomes (PROs) [1,2]. As a result, a wide variety of PROs are currently in use, varying from single-item visual analogue scales (e.g. pain or general health) to multiple-item scales like the health assessment questionnaire (HAQ) [6], which measures a patient’s functional status, and the 36-item short form health survey (SF-36), which measures eight dimensions of health-related quality of life [7]. Statistical methods are essential for the development and evaluation of all outcome measures. By far most health outcome measures have been developed using methods from classical test theory (CTT). In recent years, however, an increase in the use of statistical methods based on item response theory (IRT) can be observed in health status assessment [8-10]. Extensive and detailed descriptions of IRT can be found in the literature [11-14]. In short, IRT is a collection of probabilistic models describing the relation between a patient’s response to a categorical question/item and the underlying construct being measured by the scale [11,15]. IRT supplements CTT methods because it provides more detailed information at the item level and the person level. This enables a more thorough evaluation of an instrument’s psychometric characteristics [15], including its measurement range and measurement precision. 
The evaluation of the contribution of individual items facilitates the identification of the most relevant, precise, and efficient items for the assessment of the construct being measured by the instrument. This is very useful for the development of new instruments, but also for improving existing instruments and developing alternate or short form versions of existing instruments [16]. Additionally, IRT methods are particularly suitable for equating different instruments intended to measure the same construct [17] and for cross-cultural validation purposes [18]. Finally, IRT provides the basis for developing item banks and patient-tailored computerized adaptive tests (CATs) [9,19,20]. Although IRT appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. The Outcome Measures in Rheumatology (OMERACT) network recently initiated a special interest group aimed at promoting the use of IRT methods in rheumatology [21]. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, the aim of this study was to systematically review the application of IRT to clinical and patient-reported outcome measures within rheumatology.
Methods
Search strategy: Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*. Figure 1: Flowchart of the search process. Inclusion and exclusion criteria: Only original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on these data in order to achieve a defined study objective. To be included, studies had to present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded. No limitations were set for study design. Study identification and selection: The search strategy resulted in a total of 385 studies. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the abstract and title identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full text was examined. In total, 103 studies did not meet the inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. the study population was not clearly defined, or the study contained a rheumatic sample of less than 50% of the total sample which was not separately analysed), the statistical analyses (i.e. no IRT application), and the article type (i.e. non-original research). Figure 1 includes an overview of the exclusion reasons followed by the number of articles removed. Data extraction: First, two reviewers independently evaluated a random sample of 15 articles. Both general study information and IRT-specific information were extracted, using a purpose-made checklist (Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist item on “performed analyses” as referring to analyses using IRT-based methods only, whereas the other reviewer interpreted it more broadly, including classical test theory methods as well (the latter being the correct interpretation). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles. General study information: General information concerned the author(s), publication year, study population, the population’s country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning). 
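The chance-corrected agreement statistic used in the data extraction step can be computed directly. A minimal sketch of Cohen's kappa for two raters' nominal codes (function name and data are illustrative, not taken from the article):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected is the agreement expected by chance from each
    rater's marginal category frequencies. Undefined when p_expected = 1.
    """
    assert len(rater1) == len(rater2) and len(rater1) > 0
    n = len(rater1)
    # Observed proportion of items on which the raters agree.
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement under independence of the two raters.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)
```

Identical rating vectors yield kappa = 1.0, while raters who agree only as often as chance predicts yield values near 0 — which is why the 0.60–1.00 range reported above indicates moderate to high agreement.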
Purpose of analyses: The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments). Specific IRT analyses: Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model, which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in the case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model; the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model, and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24]. The applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11]. To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measures only a single construct [11,15,22,23]. Analyses for checking unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated, this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may point either to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship between the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23]. Other useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22]. Global IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].
Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23]. First, two reviewers independently evaluated a random sample of 15 articles. Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist ( Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist on “performed analyses” as performed analyses using IRT based methods only, whereas the other reviewer interpreted it more broadly including classical test theory methods as well (the latter being the correct method). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles. General study information General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. 
quality of life, pain, overall physical functioning).

Purpose of analyses
The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).

Specific IRT analyses
Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model, which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in the case of ordered categorical responses; the nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model; the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models such as the Mokken model [25,26] have been developed.
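The response functions implied by these dichotomous models can be written down directly: under the 2-PL model, the probability of a positive response to item i is exp(a_i(θ − b_i)) / (1 + exp(a_i(θ − b_i))), and fixing every discrimination a_i = 1 recovers the Rasch model. A minimal illustrative sketch (not tied to any software package used in the reviewed studies):

```python
import math

def irt_prob(theta: float, b: float, a: float = 1.0) -> float:
    """2-PL probability of a positive response; a=1 gives the Rasch model.

    theta: person location on the latent construct
    b:     item difficulty (location)
    a:     item discrimination (slope)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When a person's location equals the item difficulty, the probability
# of a positive response is exactly 0.5 under both models.
print(irt_prob(0.0, 0.0))  # 0.5

# A more discriminating item (a=2) separates persons more sharply
# around its difficulty than an equally located Rasch item (a=1).
print(irt_prob(1.0, 0.0, a=2.0) > irt_prob(1.0, 0.0, a=1.0))  # True
```

The Rasch model's equal-discrimination restriction is what gives it its characteristic property that the raw sum score is a sufficient statistic for the person location; the 2-PL model trades that property for flexibility.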
Differences in model assumptions should be taken into account when choosing a model, and the model choice should be motivated by considering whether the items discriminate equally and how many (ordered) response categories they have [15,22-24]. The applied IRT software and the corresponding item and person parameter estimation method(s) should also be reported, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].

To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measures only a single construct [11,15,22,23]. Analyses for checking unidimensionality can include different types of factor analysis of the items or the residuals; a more advanced method is to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated, the items may have more in common with each other than just the single underlying construct [11,15,22,23]. This may point either to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22], and it can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship between the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics.
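One concrete pairwise statistic for screening local independence is Yen's Q3: correlate the item residuals (observed score minus model-expected probability) across item pairs and flag pairs with conspicuously high correlations. A minimal sketch, assuming estimated person locations and Rasch item difficulties are already available (the data, parameter values, and any flagging threshold here are purely illustrative):

```python
import math

def q3_matrix(responses, thetas, difficulties):
    """Yen's Q3: pairwise correlations of Rasch residuals.

    responses:    list of per-person lists of 0/1 item scores
    thetas:       estimated person locations
    difficulties: estimated Rasch item difficulties
    """
    n_items = len(difficulties)
    # Residual = observed score minus Rasch-expected probability.
    resid = [[x - 1.0 / (1.0 + math.exp(-(t - b)))
              for x, b in zip(row, difficulties)]
             for row, t in zip(responses, thetas)]
    cols = list(zip(*resid))

    def corr(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        su = math.sqrt(sum((x - mu) ** 2 for x in u))
        sv = math.sqrt(sum((x - mv) ** 2 for x in v))
        return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

    return [[corr(cols[i], cols[j]) for j in range(n_items)]
            for i in range(n_items)]

# Items 0 and 1 are exact duplicates with identical difficulties, so
# their residuals correlate perfectly — a clear local-dependence signal.
data = [[1, 1, 0], [0, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0]]
q3 = q3_matrix(data, thetas=[0.5, -1.0, 1.5, -0.5, 0.0],
               difficulties=[0.0, 0.0, 1.0])
print(round(q3[0][1], 3))  # 1.0 — flags the redundant item pair
```

In practice a Q3 value well above the average off-diagonal correlation (a cut-off around 0.2 is one common rule of thumb, not a fixed standard) would prompt inspection of the item pair for overlapping content.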
More information about these assumptions, and suggestions about which aspects to report, can be found in the literature [11,15,22,23]. Other useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22]. Global IRT reliability is equivalent to Cronbach’s alpha, with the difference that the IRT score rather than the raw score is used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].
Results
General information of included studies
The initial database search yielded a total of 93 eligible studies. Six additional studies were identified by manual reference checks of the selected studies, resulting in a final selection of 99 studies (Additional file 2). Figure 2 shows that the prevalence of IRT analysis within rheumatology increased markedly over the past decades. This is consistent with conclusions from Hays et al. [19] and with findings from Belvedere and Morton [8], who examined the frequency of Rasch analyses in the development of mobility instruments.

Figure 2. Number of published articles reporting the application of IRT within rheumatology.

Table 1 presents an overview of the most prominent results. By far, most research was carried out with patients from the United States or the United Kingdom, but data from patients from the Netherlands and Canada were also regularly used. The vast majority of studies involved cross-sectional IRT analyses, although an increasing number of studies have performed longitudinal IRT analyses since the start of the 21st century, as represented by a rise of DIF testing over time.

Table 1. Overview of the most prominent results. PRO: patient-reported outcome (N=85), CM: clinical measure (N=14), RA: rheumatoid arthritis, OA: osteoarthritis, CAT: computerized adaptive test, IRT: item response theory, 2-PLM: 2-parameter logistic model, DIF: differential item functioning. * Note that some studies can be assigned to multiple subcategories; therefore, the sum of the percentages within a category exceeds 100%.

Study samples varied from as few as 18 persons in the study of Penta et al. [27] to as many as 16,519 persons in the study conducted by Wolfe et al. [28]. Most studies (92.9%) performed analyses on a population sample of at least 50 persons. In 85 of the 99 studies IRT analyses were applied to PROs; the remaining 14 studies applied IRT to CMs.
The vast majority of the studies applied IRT to data gathered from patients suffering from rheumatoid arthritis (RA) or osteoarthritis (OA). Outcome measures of overall physical functioning and quality of life were most frequently analysed. To a lesser extent, studies applied IRT to PRO measures of specific functioning [27,29-37], pain [35,38-43], psychological constructs [44-46], and work disability [47-51]. Studies also applied IRT to CMs such as measures of disease activity [52-54] and disease damage or radiographic severity [55-57].

Purpose of analyses
Most common main goals for both the PRO- and the CM-studies were the development or evaluation of new measures, the evaluation of existing measures, and the development or evaluation of alternate or short form versions of an existing measure. In addition, several studies aimed to cross-culturally validate a patient-reported or clinical measure. IRT was rarely applied for the development of item banks [17,58] or computerized adaptive tests [59,60].
Specific IRT analyses

IRT model and software
The vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification and rationale of the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM. A motivation of the model choice was provided in only 27.3% of the studies. Likewise, the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).

IRT assumptions
The assumption of unidimensionality was tested in approximately three quarters of the studies. Methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values were larger than a pre-specified cut-off point). No studies were found in which unidimensional IRT models were contrasted with multidimensional IRT models. A possible violation of the assumption of local independence was evaluated in only one of the CM studies, and in only 18.8% of the studies concerning a PRO. Evaluation of the studies also indicated that there was no clear agreement on how to evaluate this assumption, given the variety of methods used. The assumption of the appropriateness of the model was evaluated in approximately 91% of the studies. When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).

Additional IRT analyses
More than half of the studies used IRT to examine DIF. When applied, analyses varied from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7%), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%), to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%). Other commonly performed IRT analyses included analyses of the global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. the ordering of the response categories or item thresholds). In addition, a small number of PRO-studies reported IRT analyses regarding the measurement precision of the scale, whereas only one of the CM studies evaluated this.
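The DIF analyses tallied above can be screened with several procedures; one widely used and easily reproduced screen (named here as an illustration, not as the method of any particular reviewed study) is the Mantel-Haenszel procedure: persons are stratified by total score, and a common odds ratio of a positive item response for the reference versus focal group is pooled across strata, with a value near 1 suggesting no uniform DIF. A minimal sketch, assuming 2×2 counts per score stratum:

```python
def mantel_haenszel_or(strata):
    """Pooled Mantel-Haenszel odds ratio across score strata.

    strata: list of (a, b, c, d) tuples per stratum, where
      a = reference group, positive response   b = reference, negative
      c = focal group, positive response       d = focal, negative
    A pooled odds ratio near 1 suggests no uniform DIF on the item.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Symmetric counts in every stratum: both groups respond identically,
# so the pooled odds ratio is exactly 1 (no DIF signal).
no_dif = [(10, 10, 10, 10), (15, 5, 15, 5)]
print(mantel_haenszel_or(no_dif))  # 1.0

# The focal group endorses the item less often at every score stratum:
# the pooled odds ratio moves well above 1, flagging uniform DIF.
dif = [(15, 5, 5, 15), (18, 2, 10, 10)]
print(mantel_haenszel_or(dif) > 1.0)  # True
```

Stratifying on the total score is what makes this a DIF screen rather than a simple group comparison: it conditions on the level of the underlying construct before comparing the groups' response odds.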
Conclusions
A marked increase of IRT applications could be observed within rheumatology. IRT has primarily been applied to patient-reported outcomes, but it also appeared to be a useful technique for the evaluation of clinical measures. To date, IRT has mainly been used for the development of new static outcome measures and the evaluation of existing measures. In addition, alternate or short forms were created by evaluating the fit and performance of individual items. Useful IRT applications not yet widely used within rheumatology include the cross-calibration of instrument scores and the development of computerized adaptive tests, which may reduce the measurement burden for both the patient and the clinician. Also, the measurement precision of outcome measures along the scale has only been evaluated occasionally. The fact that IRT has not yet experienced the same level of standardization and consensus on methodology as CTT methods underscores the importance of adequately explaining, justifying, and reporting the IRT analyses performed. Global consensus on uniform guidelines should be reached regarding the minimum set of assumptions that must be met and the best ways of testing them, in order to stimulate the quality appraisal of performed and reported IRT analyses.
[ "Background", "Search strategy", "Inclusion and exclusion criteria", "Study identification and selection", "Data extraction", "General study information", "Purpose of analyses", "Specific IRT analyses", "General information of included studies", "Purpose of analyses", "Specific IRT analyses", "IRT model and software", "IRT assumptions", "Additional IRT analyses", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Since there is no gold standard for the assessment of disease severity and impact in most rheumatic conditions, it is common practice to administer multiple outcome measures to patients. Initially, the severity and impact of most rheumatic conditions were typically evaluated with clinical measures (CMs) [1,2] such as laboratory measures of inflammation like the erythrocyte sedimentation rate [3] and physician-based joint counts [4,5]. Since the 1980s, however, rheumatologists have increasingly used patient-reported outcomes (PROs) [1,2]. As a result, a wide variety of PROs are currently in use, varying from single item visual analogue scales (e.g. pain or general health) to multiple item scales like the health assessment questionnaire (HAQ) [6] which measures a patient’s functional status and the 36-item short form health survey (SF-36) which measures eight dimensions of health related quality of life [7].\nStatistical methods are essential for the development and evaluation of all outcome measures. By far, most health outcome measures have been developed using methods from classical test theory (CTT). In recent years, however, an increase in the use of statistical methods based on item response theory (IRT) can be observed in health status assessment [8-10]. Extensive and detailed descriptions of IRT can be found in the literature [11-14]. In short, IRT is a collection of probabilistic models, describing the relation between a patient’s response to a categorical question/item and the underlying construct being measured by the scale [11,15]. IRT supplements CTT methods, because it provides more detailed information on the item level and on the person level. This enables a more thorough evaluation of an instrument’s psychometric characteristics [15], including its measurement range and measurement precision. 
The evaluation of the contribution of individual items facilitates the identification of the most relevant, precise, and efficient items for the assessment of the construct being measured by the instrument. This is very useful for the development of new instruments, but also for improving existing instruments and developing alternate or short form versions of existing instruments [16]. Additionally, IRT methods are particularly suitable for equating different instruments intended to measure the same construct [17] and for cross-cultural validation purposes [18]. Finally, IRT provides the basis for developing item banks and patient-tailored computerized adaptive tests (CATs) [9,19,20].\nAlthough IRT appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. The Outcome Measures in Rheumatology (OMERACT) network recently initiated a special interest group aimed at promoting the use of IRT methods in rheumatology [21]. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, the aim of this study was to systematically review the application of IRT to clinical and patient-reported outcome measures within rheumatology.", "Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. 
Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*.\nFlowchart of the search process.", "Only original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on this data in order to achieve a defined study objective. To be included, studies should present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded. No limitations were set for study design.", "The search strategy resulted in a total of 385 studies. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the abstract and title identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full-text was examined. In total, 103 studies did not meet inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. the study population was not clearly defined or the study contained a rheumatic sample <50% of the total sample which was not separately analysed), the statistical analyses (i.e. no IRT application), and the article type (i.e. non-original research). Figure 1 includes an overview of the exclusion reasons followed by the number of articles removed.", "First, two reviewers independently evaluated a random sample of 15 articles. 
Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist ( Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist on “performed analyses” as performed analyses using IRT based methods only, whereas the other reviewer interpreted it more broadly including classical test theory methods as well (the latter being the correct method). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles.\n General study information General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning).\n Purpose of analyses The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).\n Specific IRT analyses Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24].\nThe applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].\nTo make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. 
The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23].\nOther useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct.\nDifferential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. 
Commonly examined types of DIF are DIF across gender and age [22].\nGlobal IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct.\nWith rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].", "General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning).", "The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).", "Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. 
Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24].\nThe applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].\nTo make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. 
More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23].\nOther useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct.\nDifferential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22].\nGlobal IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct.\nWith rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].", "The initial database search yielded a total of 93 eligible studies. Six additional studies were identified by manual reference checks of the selected studies. This resulted in a final selection of 99 studies ( Additional file 2). 
Figure 2 shows that the prevalence of IRT analysis within rheumatology increased markedly over the past decades. This is consistent with conclusions from Hays et al. [19], and with findings from Belvedere and Morton [8] who examined the frequency of Rasch analyses in the development of mobility instruments.\nNumber of published articles reporting the application of IRT within rheumatology.\nTable 1 presents an overview of the most prominent results. By far, most research was carried out with patients from the United States or the United Kingdom, but data from patients from the Netherlands and Canada were also regularly used. The vast majority of studies involved cross-sectional IRT analyses. Since the start of the 21st century, an increasing number of studies have also performed longitudinal IRT analyses, as reflected by a rise in DIF testing over time.\nOverview of the most prominent results\nPRO: patient-reported outcome (N=85), CM: clinical measure (N=14), RA: rheumatoid arthritis, OA: osteoarthritis, CAT: computerized adaptive test, IRT: item response theory, 2-PLM: 2 parameter logistic model, DIF: differential item functioning.\n* Note that some studies can be assigned to multiple subcategories, therefore, the sum of the percentages within a category exceeds 100%.\nStudy samples varied from as few as 18 persons in the study of Penta et al. [27] to as many as 16,519 persons in the study conducted by Wolfe et al. [28]. Most studies (92.9%) performed analyses on a population sample of at least 50 persons.\nIn 85 of the 99 studies IRT analyses were applied to PROs. The remaining 14 studies applied IRT to CMs. The vast majority of the studies applied IRT to data gathered from patients suffering from rheumatoid arthritis (RA) or osteoarthritis (OA).\nOutcome measures of overall physical functioning and quality of life were analysed most frequently. 
To a lesser extent, studies applied IRT to PRO measures of specific functioning [27,29-37], pain [35,38-43], psychological constructs [44-46], and work disability [47-51]. Studies also applied IRT to CMs such as measures of disease activity [52-54] and disease damage or radiographic severity [55-57].", "Most common main goals for both the PRO- and the CM-studies were the development or evaluation of new measures, the evaluation of existing measures, and the development or evaluation of alternate or short form versions of an existing measure. In addition, several studies aimed to cross-culturally validate a patient-reported or clinical measure. IRT was rarely applied for the development of item banks [17,58] or computerized adaptive tests [59,60].", " IRT model and software The vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification and rationale of the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM.\nA motivation of the model choice was only provided in 27.3% of the studies. Likewise, the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).\n IRT assumptions The assumption of unidimensionality was tested in approximately three quarters of the studies. Methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values were larger than a pre-specified cut-off point). No studies were found where unidimensional IRT models were contrasted with multidimensional IRT models.\nA possible violation of the assumption of local independence was evaluated in only one of the CM studies, and in only 18.8% of the studies concerning a PRO. Evaluation of the studies also indicated there was no clear agreement on how to evaluate this assumption, given the variety of methods used.\nThe assumption of the appropriateness of the model was evaluated by approximately 91% of the studies. When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).\n Additional IRT analyses More than half of the studies used IRT to examine DIF. When applied, analyses varied from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7%), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%), to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%).\nOther commonly performed IRT analyses included analyses of the global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. the ordering of the response categories or item thresholds). 
In addition, a small number of PRO-studies reported IRT analyses regarding the measurement precision of the scale, whereas only 1 of the CM studies evaluated this.", "The vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification and rationale of the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM.\nA motivation of the model choice was only provided in 27.3% of the studies. Likewise, the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).", "The assumption of unidimensionality was tested in approximately three quarters of the studies. Methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values were larger than a pre-specified cut-off point). No studies were found where unidimensional IRT models were contrasted with multidimensional IRT models.\nA possible violation of the assumption of local independence was evaluated in only one of the CM studies, and in only 18.8% of the studies concerning a PRO. Evaluation of the studies also indicated there was no clear agreement on how to evaluate this assumption, given the variety of methods used.\nThe assumption of the appropriateness of the model was evaluated by approximately 91% of the studies. When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).", "More than half of the studies used IRT to examine DIF. 
When applied, analyses varied from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7%), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%), to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%).\nOther commonly performed IRT analyses included analyses of the global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. the ordering of the response categories or item thresholds). In addition, a small number of PRO-studies reported IRT analyses regarding the measurement precision of the scale, whereas only 1 of the CM studies evaluated this.", "2-PL model: 2-Parameter Logistic model; CAT: Computerized Adaptive Test; CM: Clinical Measure; CTT: Classical Test Theory; DIF: Differential Item Functioning; HAQ: Health Assessment Questionnaire; IRT: Item Response Theory; OA: OsteoArthritis; PRO: Patient-Reported Outcome; RA: Rheumatoid Arthritis; SF-36: 36-item Short Form health survey.", "The authors declare that they have no competing interests.", "LS was responsible for the conceptualization of the manuscript. LS and PTK were responsible for the screening and identification of studies and the extraction of relevant data. PTK, ET, CG and MVDL supervised the whole study and the interpretation of the results. All authors critically evaluated the manuscript, contributed to its content, and approved the final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2474/13/216/prepub\n" ]
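The review above tallies how often studies tested for differential item functioning (DIF) across groups such as gender and age. As an illustrative sketch of the underlying idea only (not drawn from any of the reviewed studies), a minimal Mantel-Haenszel check for uniform DIF on one dichotomous item might look like this; the function name and the synthetic counts are assumptions made for the example, and numpy is assumed available:

```python
import numpy as np

def mantel_haenszel_dif(responses, group, strata):
    """Mantel-Haenszel common odds ratio for one dichotomous item.

    responses: 0/1 item scores; group: 0 = reference, 1 = focal;
    strata: matching variable (e.g. rest-score level) used to compare
    respondents of similar ability. A ratio near 1 suggests no uniform DIF.
    """
    num, den = 0.0, 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((responses == 1) & (group == 0) & m)  # reference, correct
        b = np.sum((responses == 0) & (group == 0) & m)  # reference, incorrect
        c = np.sum((responses == 1) & (group == 1) & m)  # focal, correct
        d = np.sum((responses == 0) & (group == 1) & m)  # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else np.nan

# Synthetic single-stratum example: reference endorses 80/100, focal 50/100,
# so the common odds ratio is (80*50)/(20*50) = 4, flagging the item.
group = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])
item = np.concatenate([np.ones(80), np.zeros(20), np.ones(50), np.zeros(50)]).astype(int)
stratum = np.zeros(200, dtype=int)
print(mantel_haenszel_dif(item, group, stratum))  # -> 4.0
```

In practice the strata would be formed from the rest score (total score excluding the studied item), and IRT-based DIF tests compare item parameters across groups instead; this classical screen is shown only because it makes the "same ability, different response probability" idea concrete.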
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
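The methods sections below note that local independence "can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items". A hedged sketch of that pairwise-residual idea (in the spirit of Yen's Q3 statistic) follows; for simplicity it assumes the true Rasch probabilities are known rather than estimated, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Rasch setup: known person abilities and item difficulties.
theta = rng.normal(0.0, 1.0, size=500)            # person parameters
beta = np.array([-0.5, 0.5])                      # two independent items
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
x = (rng.random(p.shape) < p).astype(float)       # simulated 0/1 responses

# Add a third item that simply copies item 1 -> a locally dependent pair.
x = np.column_stack([x, x[:, 0]])
p = np.column_stack([p, p[:, 0]])

resid = (x - p) / np.sqrt(p * (1.0 - p))          # standardized residuals
q3 = np.corrcoef(resid, rowvar=False)             # pairwise residual correlations

# The duplicated pair (items 0 and 2) shows a residual correlation of 1,
# while the genuinely independent pair stays near zero.
print(round(q3[0, 2], 2))  # -> 1.0
```

Flagging pairs whose residual correlation exceeds a cut-off (values around 0.2 are often used with Q3) is one concrete way the local-independence checks catalogued in this review can be operationalized.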
[ "Background", "Methods", "Search strategy", "Inclusion and exclusion criteria", "Study identification and selection", "Data extraction", "General study information", "Purpose of analyses", "Specific IRT analyses", "Results", "General information of included studies", "Purpose of analyses", "Specific IRT analyses", "IRT model and software", "IRT assumptions", "Additional IRT analyses", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Pre-publication history", "Supplementary Material" ]
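The section texts below repeatedly contrast the Rasch model (equal item discriminations) with the 2-parameter logistic model. As a small illustrative sketch of those standard formulas (not code from any reviewed study), the 2-PL item response function, with Rasch as the equal-discrimination special case:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """2-PL item response function: P(X=1 | theta) = 1 / (1 + exp(-a*(theta - b))).

    a = discrimination, b = difficulty. The Rasch model is the special case
    in which every item shares the same discrimination (here a = 1).
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# At theta == b, the endorsement probability is 0.5 regardless of a.
print(irf_2pl(0.0, a=1.0, b=0.0))   # -> 0.5 (Rasch-type item)
print(irf_2pl(0.0, a=2.5, b=0.0))   # -> 0.5 (highly discriminating item)

# Away from b, a larger discrimination steepens the curve.
print(round(irf_2pl(1.0, a=1.0, b=0.0), 3))  # -> 0.731
print(round(irf_2pl(1.0, a=2.5, b=0.0), 3))  # -> 0.924
```

The polytomous extensions named in the text (rating scale, partial credit, graded response models) generalize this same logistic curve to ordered response categories via per-category thresholds.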
[ "Since there is no gold standard for the assessment of disease severity and impact in most rheumatic conditions, it is common practice to administer multiple outcome measures to patients. Initially, the severity and impact of most rheumatic conditions were typically evaluated with clinical measures (CMs) [1,2] such as laboratory measures of inflammation like the erythrocyte sedimentation rate [3] and physician-based joint counts [4,5]. Since the 1980s, however, rheumatologists have increasingly used patient-reported outcomes (PROs) [1,2]. As a result, a wide variety of PROs are currently in use, varying from single item visual analogue scales (e.g. pain or general health) to multiple item scales like the health assessment questionnaire (HAQ) [6] which measures a patient’s functional status and the 36-item short form health survey (SF-36) which measures eight dimensions of health related quality of life [7].\nStatistical methods are essential for the development and evaluation of all outcome measures. By far, most health outcome measures have been developed using methods from classical test theory (CTT). In recent years, however, an increase in the use of statistical methods based on item response theory (IRT) can be observed in health status assessment [8-10]. Extensive and detailed descriptions of IRT can be found in the literature [11-14]. In short, IRT is a collection of probabilistic models, describing the relation between a patient’s response to a categorical question/item and the underlying construct being measured by the scale [11,15]. IRT supplements CTT methods, because it provides more detailed information on the item level and on the person level. This enables a more thorough evaluation of an instrument’s psychometric characteristics [15], including its measurement range and measurement precision. 
The evaluation of the contribution of individual items facilitates the identification of the most relevant, precise, and efficient items for the assessment of the construct being measured by the instrument. This is very useful for the development of new instruments, but also for improving existing instruments and developing alternate or short form versions of existing instruments [16]. Additionally, IRT methods are particularly suitable for equating different instruments intended to measure the same construct [17] and for cross-cultural validation purposes [18]. Finally, IRT provides the basis for developing item banks and patient-tailored computerized adaptive tests (CATs) [9,19,20].\nAlthough IRT appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. The Outcome Measures in Rheumatology (OMERACT) network recently initiated a special interest group aimed at promoting the use of IRT methods in rheumatology [21]. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, the aim of this study was to systematically review the application of IRT to clinical and patient-reported outcome measures within rheumatology.", " Search strategy Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. 
Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*.\nFlowchart of the search process.\nFigure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*.\nFlowchart of the search process.\n Inclusion and exclusion criteria Only original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on this data in order to achieve a defined study objective. To be included, studies should present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded. No limitations were set for study design.\nOnly original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on this data in order to achieve a defined study objective. To be included, studies should present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. 
In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded. No limitations were set for study design.\n Study identification and selection The search strategy resulted in a total of 385 studies. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the abstract and title identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full-text was examined. In total, 103 studies did not meet inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. the study population was not clearly defined or the study contained a rheumatic sample <50% of the total sample which was not separately analysed), the statistical analyses (i.e. no IRT application), and the article type (i.e. non-original research). Figure 1 includes an overview of the exclusion reasons followed by the number of articles removed.\nThe search strategy resulted in a total of 385 studies. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the abstract and title identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full-text was examined. In total, 103 studies did not meet inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. the study population was not clearly defined or the study contained a rheumatic sample <50% of the total sample which was not separately analysed), the statistical analyses (i.e. 
no IRT application), and the article type (i.e. non-original research). Figure 1 includes an overview of the exclusion reasons followed by the number of articles removed.\n Data extraction First, two reviewers independently evaluated a random sample of 15 articles. Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist ( Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist on “performed analyses” as performed analyses using IRT based methods only, whereas the other reviewer interpreted it more broadly including classical test theory methods as well (the latter being the correct method). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles.\n General study information General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning).\nGeneral information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. 
quality of life, pain, overall physical functioning).\n Purpose of analyses The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).\nThe purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).\n Specific IRT analyses Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. 
Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24].\nThe applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].\nTo make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. 
More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23].\nOther useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct.\nDifferential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22].\nGlobal IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct.\nWith rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].\nBefore a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. 
The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24].\nThe applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].\nTo make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. 
The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23].\nOther useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct.\nDifferential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22].\nGlobal IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. 
Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct.\nWith rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].\nFirst, two reviewers independently evaluated a random sample of 15 articles. Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist ( Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist on “performed analyses” as performed analyses using IRT based methods only, whereas the other reviewer interpreted it more broadly including classical test theory methods as well (the latter being the correct method). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles.\n General study information General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. 
quality of life, pain, overall physical functioning).\nGeneral information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning).\n Purpose of analyses The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).\nThe purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).\n Specific IRT analyses Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. 
Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24].\nThe applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].\nTo make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. 
More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23].\nOther useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct.\nDifferential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22].\nGlobal IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct.\nWith rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23].\nBefore a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. 
The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24].\nThe applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11].\nTo make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. 
The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23].\nOther useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct.\nDifferential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22].\nGlobal IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. 
Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct.

With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and whether the items function well for the included population sample or whether there is a mismatch between them [23].

Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*.

Flowchart of the search process.

Only original research articles written in English were included. Articles were considered original when they included original data and when analyses were performed on these data to achieve a defined study objective. To be included, studies had to present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. When less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded.
No limitations were set for study design.

The search strategy resulted in a total of 385 studies. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the abstract and title identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full text was examined. In total, 103 studies did not meet the inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. the study population was not clearly defined, or the rheumatic sample comprised less than 50% of the total sample and was not analysed separately), the statistical analyses (i.e. no IRT application), and the article type (i.e. non-original research). Figure 1 includes an overview of the exclusion reasons followed by the number of articles removed.

First, two reviewers independently evaluated a random sample of 15 articles. Both general study information and IRT-specific information were extracted, using a purpose-made checklist (Additional file 1) based on expert input and important issues mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist item on “performed analyses” as covering analyses using IRT-based methods only, whereas the other reviewer interpreted it more broadly, including classical test theory methods as well (the latter being the correct interpretation). Consensus about these differences was reached by discussion.
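Cohen’s kappa, as used above for inter-rater agreement, corrects the observed proportion of agreement for the agreement expected by chance. A minimal sketch with made-up inclusion decisions (the labels and counts below are hypothetical, not the actual screening data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal labels to the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal proportions
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1.0 - expected)  # undefined if expected == 1

# Hypothetical screening decisions for eight abstracts
reviewer_1 = ["in", "in", "out", "out", "in", "out", "in", "out"]
reviewer_2 = ["in", "in", "out", "out", "in", "out", "out", "out"]
kappa = cohens_kappa(reviewer_1, reviewer_2)  # 0.75
```

Kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance, which is why values of 0.60 to 1.00 are read as moderate to high.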
Next, one of these reviewers also evaluated the remaining 84 articles.

General study information

General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning).

Purpose of analyses

The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).

General information of included studies

The initial database search yielded a total of 93 eligible studies. Six additional studies were identified by manual reference checks of the selected studies. This resulted in a final selection of 99 studies (Additional file 2).
Figure 2 shows that the prevalence of IRT analysis within rheumatology increased markedly over the past decades. This is consistent with conclusions from Hays et al. [19], and with findings from Belvedere and Morton [8], who examined the frequency of Rasch analyses in the development of mobility instruments.

Number of published articles reporting the application of IRT within rheumatology.

Table 1 presents an overview of the most prominent results. By far, most research was carried out with patients from the United States or the United Kingdom, but data from patients from the Netherlands and Canada were also regularly used. The vast majority of studies involved cross-sectional IRT analyses. An increasing number of studies have also performed longitudinal IRT analyses since the start of the 21st century, as represented by a rise in DIF testing over time.

Overview of the most prominent results

PRO: patient-reported outcome (N=85), CM: clinical measure (N=14), RA: rheumatoid arthritis, OA: osteoarthritis, CAT: computerized adaptive test, IRT: item response theory, 2-PLM: 2-parameter logistic model, DIF: differential item functioning.

* Note that some studies can be assigned to multiple subcategories; therefore, the sum of the percentages within a category exceeds 100%.

Study samples varied from as few as 18 persons in the study of Penta et al. [27] to as many as 16,519 persons in the study conducted by Wolfe et al. [28]. Most studies (92.9%) performed analyses on a population sample of at least 50 persons.

In 85 of the 99 studies IRT analyses were applied to PROs. The remaining 14 studies applied IRT to CMs. The vast majority of the studies applied IRT to data gathered from patients suffering from rheumatoid arthritis (RA) or osteoarthritis (OA).

Outcome measures of overall physical functioning and quality of life were most frequently analysed.
To a lesser extent, studies applied IRT to PRO measures of specific functioning [27,29-37], pain [35,38-43], psychological constructs [44-46], and work disability [47-51]. Studies also applied IRT to CMs such as measures of disease activity [52-54] and disease damage or radiographic severity [55-57].\nThe initial database search yielded a total of 93 eligible studies. Six additional studies were identified by manual reference checks of the selected studies. This resulted in a final selection of 99 studies ( Additional file 2). Figure 2 shows that the prevalence of IRT analysis within rheumatology increased markedly over the past decades. This is consistent with conclusions from Hays et al. [19], and with findings from Belvedere and Morton [8] who examined the frequency of Rasch analyses in the development of mobility instruments.\nNumber of published articles reporting the application of IRT within rheumatology.\nTable 1 presents an overview of the most prominent results. By far, most research was carried out with patients from the United States or the United Kingdom, but data from patients from the Netherlands and Canada were also regularly used. The vast majority of studies involved cross-sectional IRT analyses. It could also be observed that an increasing number of studies perform longitudinal IRT analyses since the 21st century, as represented by a rise of DIF testing over time.\nOverview of the most prominent results\nPRO: patient-reported outcome (N=85), CM: clinical measure (N=14), RA: rheumatoid arthritis, OA: osteoarthritis, CAT: computerized adaptive test, IRT: item response theory, 2-PLM: 2 parameter logistic model, DIF: differential item functioning.\n* Note that some studies can be assigned to multiple subcategories, therefore, the sum of the percentages within a category exceeds 100%.\nStudy samples varied from as little as 18 persons in the study of Penta et al. [27] to as many as 16519 persons in the study conducted by Wolfe et al. [28]. 
Most studies (92.9%) performed analyses on a population sample of at least 50 persons.\nIn 85 of the 99 studies IRT analyses were applied to PROs. The remaining 14 studies applied IRT to CMs. The vast majority of the studies applied IRT to data gathered from patients suffering from rheumatoid arthritis (RA) or osteoarthritis (OA).\nOutcome measures of overall physical functioning and quality of life were most frequently being analysed. To a lesser extent, studies applied IRT to PRO measures of specific functioning [27,29-37], pain [35,38-43], psychological constructs [44-46], and work disability [47-51]. Studies also applied IRT to CMs such as measures of disease activity [52-54] and disease damage or radiographic severity [55-57].\n Purpose of analyses Most common main goals for both the PRO- and the CM-studies were the development or evaluation of new measures, the evaluation of existing measures, and the development or evaluation of alternate or short form versions of an existing measure. In addition, several studies aimed to cross-culturally validate a patient-reported or clinical measure. IRT was rarely applied for the development of item banks [17,58] or computerized adaptive tests [59,60].\nMost common main goals for both the PRO- and the CM-studies were the development or evaluation of new measures, the evaluation of existing measures, and the development or evaluation of alternate or short form versions of an existing measure. In addition, several studies aimed to cross-culturally validate a patient-reported or clinical measure. IRT was rarely applied for the development of item banks [17,58] or computerized adaptive tests [59,60].\n Specific IRT analyses IRT model and software The vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification and rationale of the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. 
Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM.\nA motivation of the model choice was only provided in 27.3% of the studies. Likewise, the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).\nThe vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification and rationale of the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM.\nA motivation of the model choice was only provided in 27.3% of the studies. Likewise, the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).\n IRT assumptions The assumption of unidimensionality was tested in approximately three quarters of the studies. Methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values were larger than a pre-specified cut-off point). No studies were found where unidimensional IRT models were contrasted with multidimensional IRT models.\nA possible violation of the assumption of local independence was evaluated in only one of the CM studies, and in only 18.8% of the studies concerning a PRO. Evaluation of the studies also indicated there was no clear agreement on how to evaluate this assumption, given the variety of methods used.\nThe assumption of the appropriateness of the model was evaluated by approximately 91% of the studies. 
When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).\nThe assumption of unidimensionality was tested in approximately three quarters of the studies. Methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values were larger than a pre-specified cut-off point). No studies were found where unidimensional IRT models were contrasted with multidimensional IRT models.\nA possible violation of the assumption of local independence was evaluated in only one of the CM studies, and in only 18.8% of the studies concerning a PRO. Evaluation of the studies also indicated there was no clear agreement on how to evaluate this assumption, given the variety of methods used.\nThe assumption of the appropriateness of the model was evaluated by approximately 91% of the studies. When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).\n Additional IRT analyses More than half of the studies used IRT to examine DIF. When applied, analyses varied from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%), to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%).\nOther commonly performed IRT analyses included analyses of the global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. 
the ordering of the response categories or item thresholds). In addition, a small number of PRO-studies reported IRT analyses regarding the measurement precision of the scale, whereas only 1 of the CM studies evaluated this.\n IRT model and software The vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification and rationale of the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM.\nA motivation of the model choice was only provided in 27.3% of the studies. Likewise, the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).\n IRT assumptions The assumption of unidimensionality was tested in approximately three quarters of the studies. Methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values were larger than a pre-specified cut-off point). No studies were found where unidimensional IRT models were contrasted with multidimensional IRT models.\nA possible violation of the assumption of local independence was evaluated in only one of the CM studies, and in only 18.8% of the studies concerning a PRO. Evaluation of the studies also indicated there was no clear agreement on how to evaluate this assumption, given the variety of methods used.\nThe assumption of the appropriateness of the model was evaluated by approximately 91% of the studies. When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).\n Additional IRT analyses More than half of the studies used IRT to examine DIF. When applied, analyses varied from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7%), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%), to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%).\nOther commonly performed IRT analyses included analyses of the global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. the ordering of the response categories or item thresholds). In addition, a small number of PRO-studies reported IRT analyses regarding the measurement precision of the scale, whereas only 1 of the CM studies evaluated this.
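For readers less familiar with the models tallied above, the following minimal sketch (illustrative only; function names are ours, not taken from any reviewed study) shows the dichotomous Rasch model and its two-parameter generalization, where fixing the discrimination a = 1 for every item recovers the Rasch model:

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability that a person with
    trait level theta endorses an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def two_pl_prob(theta, a, b):
    """Two-parameter logistic (2-PL) model: adds an item-specific
    discrimination a; with a = 1 for every item it reduces to the
    Rasch model (the equal-discrimination assumption)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

For example, a person located exactly at an item's difficulty (theta = b) has a 0.5 endorsement probability under both models, regardless of the discrimination.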
", "The initial database search yielded a total of 93 eligible studies. Six additional studies were identified by manual reference checks of the selected studies. This resulted in a final selection of 99 studies ( Additional file 2). Figure 2 shows that the prevalence of IRT analysis within rheumatology increased markedly over the past decades. This is consistent with conclusions from Hays et al. [19], and with findings from Belvedere and Morton [8] who examined the frequency of Rasch analyses in the development of mobility instruments.\nNumber of published articles reporting the application of IRT within rheumatology.\nTable 1 presents an overview of the most prominent results. By far, most research was carried out with patients from the United States or the United Kingdom, but data from patients from the Netherlands and Canada were also regularly used. The vast majority of studies involved cross-sectional IRT analyses. It could also be observed that an increasing number of studies have performed longitudinal IRT analyses since the start of the 21st century, as represented by a rise of DIF testing over time.\nOverview of the most prominent results\nPRO: patient-reported outcome (N=85), CM: clinical measure (N=14), RA: rheumatoid arthritis, OA: osteoarthritis, CAT: computerized adaptive test, IRT: item response theory, 2-PLM: 2 parameter logistic model, DIF: differential item functioning.\n* Note that some studies can be assigned to multiple subcategories, therefore, the sum of the percentages within a category exceeds 100%.\nStudy samples varied from as few as 18 persons in the study of Penta et al. [27] to as many as 16519 persons in the study conducted by Wolfe et al. [28]. Most studies (92.9%) performed analyses on a population sample of at least 50 persons.\nIn 85 of the 99 studies IRT analyses were applied to PROs. 
The remaining 14 studies applied IRT to CMs. The vast majority of the studies applied IRT to data gathered from patients suffering from rheumatoid arthritis (RA) or osteoarthritis (OA).\nOutcome measures of overall physical functioning and quality of life were analysed most frequently. To a lesser extent, studies applied IRT to PRO measures of specific functioning [27,29-37], pain [35,38-43], psychological constructs [44-46], and work disability [47-51]. Studies also applied IRT to CMs such as measures of disease activity [52-54] and disease damage or radiographic severity [55-57].", "Most common main goals for both the PRO- and the CM-studies were the development or evaluation of new measures, the evaluation of existing measures, and the development or evaluation of alternate or short form versions of an existing measure. In addition, several studies aimed to cross-culturally validate a patient-reported or clinical measure. IRT was rarely applied for the development of item banks [17,58] or computerized adaptive tests [59,60].", "IRT offers a powerful framework for the evaluation or development of existing and new outcome measures. This is the first study that systematically reviewed the extent to which IRT has been applied to measurements from rheumatology. Results showed a marked increase in IRT applications within the rheumatic field from the late eighties up to now. Even though most research focussed on PROs, IRT also appeared to be useful for application to CMs. Some opportunities for further IRT applications and improvements in the analyses and reporting of IRT studies were also pointed out.\nIRT can be applied for various purposes. First, IRT analysis is useful for the development and evaluation of new measures [22]. 
For instance, Helliwell et al. [32] developed a foot impact scale to assess foot status in RA patients. Rasch modeling was used to facilitate item reduction by selecting items which were free of DIF and fitted model expectations. Whereas CTT methods often discard items at the extremes of the measurement range because too few patients answer them affirmatively, IRT retains these items since they provide important information at the extremes of the measurement range [61].\nIRT is also suitable for the evaluation of existing (ordinal) outcome measures. For example, when evaluating an instrument’s response categories it can be determined whether they perform as intended or whether categories should be collapsed into fewer options or expanded into more options [22]. Furthermore, it can be evaluated whether the items in the outcome measure form a unidimensional scale as expected or whether item deletion is necessary [22].\nAnother favourable feature of IRT is that it is expressed at the item level instead of the test level as in CTT [11]. By evaluating the performance of individual items, alternate or short form versions of existing measures can be developed. For example, Wolfe et al. [62] developed an alternate version of the HAQ [6,63], known as the HAQ-II, specifically targeted at patients with a relatively high physical functioning.\nAnother commonly used feature of modeling at the item level is the robust assessment of DIF, as reflected in the high proportion of performed DIF analyses. Nevertheless, the full potential of modeling at the item level is not yet being used, given the low percentage of studies evaluating the items’ performance (i.e. measurement precision and local reliability) along the scale.\nWhen comparing the studies focusing on RA patients with those focusing on OA patients, the measurement intentions of the analysed instruments and the applied IRT models were highly comparable. 
However, a notable difference was found in the main goals of these studies. Where the RA studies pursued widely varying main goals, including the development of new instruments, the evaluation of existing instruments, the comparison of different instruments, and cross-cultural validation, the studies on OA patients generally focused on the evaluation of existing instruments only.\nThere are several IRT applications which have not yet been (frequently) used within rheumatology. One IRT application which appears to be still in its infancy within rheumatology, but which is likely to gain importance in the future, is the development of computerized adaptive tests (CATs) [2]. When testing by means of a CAT, every patient receives a test which is tailored (adapted) to his or her level on the underlying construct being measured. Consequently, each patient can be administered different sequences and numbers of items, drawn from a large item bank. By applying CATs, tests can be shortened without any loss of measurement precision, reducing measurement burden for both the patient and the rheumatologist [1,2,9-11,16].\nCross-calibration is another IRT application whose potential advantages have not yet been recognized within rheumatology. As opposed to CTT methods, the item responses are regressed on separate item and person parameters in IRT [11]. This means that the definition of item parameters is independent of the sample receiving the test and the definition of person parameters is independent of the test items given. This separation of parameters facilitates the cross-calibration of various outcome measures based on the same underlying construct [11,64], making their scores comparable with each other.\nAs discussed earlier, it is important to test the assumptions of unidimensionality, local independence, and model appropriateness when analysing data by means of IRT methods. 
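As a sketch of what such model-appropriateness checks compute, the widely reported Rasch outfit and infit statistics are the unweighted and information-weighted mean squares of the standardized response residuals, with values near 1 indicating adequate fit. This is an illustrative implementation under the dichotomous Rasch model, not code from any reviewed study:

```python
import math

def rasch_prob(theta, b):
    """Rasch probability of an affirmative response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_fit(responses, thetas, b):
    """Outfit and infit mean squares for one Rasch item.
    responses: 0/1 answers per person, thetas: their trait estimates,
    b: the item difficulty estimate."""
    outfit_sum, num, den = 0.0, 0.0, 0.0
    for x, theta in zip(responses, thetas):
        p = rasch_prob(theta, b)
        var = p * (1.0 - p)          # model variance (item information)
        z2 = (x - p) ** 2 / var      # squared standardized residual
        outfit_sum += z2             # outfit: plain mean of z2
        num += var * z2              # infit: variance-weighted mean of z2
        den += var
    return outfit_sum / len(responses), num / den
```

Person fit can be sketched analogously by summing residuals over the items answered by one person rather than over the persons answering one item.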
Items which violate one or more of these assumptions should be combined, rephrased, or deleted [22,23], since they complicate the interpretation of model outcomes. A promising observation was that the majority of the studies tested the assumption of unidimensionality and the appropriateness of the IRT model, although some studies did not report any fit statistics. Although comparisons between unidimensional and multidimensional IRT models provide a much more rigorous test of unidimensionality than factor analyses, such comparisons were not made. Analyses of model fit mainly involved overall fit statistics or item fit statistics, and to a lesser extent the evaluation of person fit. Person fit, however, is also important since deviant response patterns of patients may seriously affect the item fit. The removal of patients with such response patterns from the analysis may improve the scale’s internal construct validity significantly [22]. Most studies, however, did not check the assumption of local independence. The importance of local independence has only more recently been recognized and, consequently, only some of the most recent studies (from the year 2007 onwards) evaluated this assumption. Future studies should continue to pay attention to this assumption, since locally dependent items could cause parameter estimates to be biased, which may lead to wrong decisions concerning item selection when constructing a certain outcome measure [15].\nThe results also showed room for improvement in the reporting of the choices made and the rationale for specific decisions. For instance, the applied IRT model is often not specified and, if specified, the reasons behind the selected IRT model and used estimation methods are often not clearly motivated. This complicates the quality appraisal and replication of performed analyses.\nWhere Belvedere and de Morton [8] examined the application of Rasch analysis only, this study included the whole spectrum of IRT models. 
A notable finding of this review was that Rasch models dominate within rheumatology, and that two-parameter IRT models were applied in only a few studies. This may be due to the ease of use of the Rasch model and the ease with which its results can be interpreted. However, this advantage of Rasch modeling comes with the strict assumption that every item of the measure is equally discriminative. Whether this assumption is appropriate can be tested by comparing the Rasch model fit with the 2-parameter model fit. Since the studies of Pham et al. [65] and Siemons et al. [54] are the only studies in which such a comparison was made, this is a point of interest for future studies.\nAlthough IRT is becoming increasingly popular in health status assessment, IRT is quite complex to understand and is not yet a mainstream technique for most researchers and rheumatologists. To increase common understanding and to improve the interpretation of outcomes resulting from the performed IRT analyses, (bio)statisticians, rheumatologists, and researchers should closely collaborate. Clear guidelines on the quality appraisal of performed IRT analyses might increase the use and understanding of IRT in rheumatology even further. Currently, there are no clear guidelines available for rating the methodological quality of the performed IRT analyses. Although standardized tools like the COSMIN (COnsensus-based Standards for the selection of health status Measurement INstruments) checklist [66] can be used for evaluating the methodological quality of studies on measurement properties, this checklist only contains a few questions regarding IRT analyses and is, therefore, more suitable for analyzing the quality of performed classical test theory analyses. Even though the quality checklist used in this study was based on both expert input and important issues from the literature, it was not exhaustive and, consequently, it might have some limitations. 
For example, when the sample size was considered, only the absolute number was reported. It was not checked whether the authors also justified the sample size for the analyses they wanted to perform. The variation in sample size between studies might be due to the absence of clear guidelines regarding sample size requirements. It is argued that even the simplest Rasch analyses require a minimum sample of 50–100 persons [15,23]. However, many issues are involved in determining the right sample size for a certain study, including the model choice, the number of response categories, and the purpose of the study [15,23]. These issues should be carefully considered to determine the sample size which is minimally needed to achieve reliable model estimates. Consensus and clear guidelines on quality aspects concerning IRT analyses might guide the choice of an adequate sample size and might also stimulate the development of uniform guidelines for performing and reporting IRT studies, and the development of a checklist for evaluating the quality of the performed and reported IRT analyses.\nThe formulation of such guidelines will provide a strong foundation to future IRT studies. Tennant et al. already provided such guidelines for performing Rasch analyses [22]. However, given the large diversity of approaches, models, and software used in the field of IRT it is difficult to recommend a single set of guidelines for all types of studies, and an expansion or modification of their guidelines might be needed. In order to get sufficient support for these guidelines it is important to first attempt to reach a more global consensus about recommendations. This article could provide input for such attempts and the COSMIN checklist [66] can serve as an example of how such an international approach can lead to the development of a consensus-based checklist. 
Agreement should be reached on the minimum number of assumptions which should be met (e.g. unidimensionality, model fit, and DIF analysis) and the best ways of testing these assumptions. Additionally, this review showed that IRT methods are rarely being applied for the evaluation of an instrument’s local reliability and measurement precision along the scale of the underlying construct and the construction of item banks and CATs, all unique features of IRT. Therefore, it is recommended that more emphasis be placed on these features in the guidelines and in future studies.", "A marked increase of IRT applications could be observed within rheumatology. IRT has primarily been applied to patient-reported outcomes, but it also appeared to be a useful technique for the evaluation of clinical measures. To date, IRT has mainly been used for the development of new static outcome measures and the evaluation of existing measures. In addition, alternate or short forms were created by evaluating the fit and performance of individual items. Useful IRT applications which are not yet widely used within rheumatology include the cross-calibration of instrument scores and the development of computerized adaptive tests which may reduce the measurement burden for both the patient and the clinician. Also, the measurement precision of outcome measures along the scale has only been evaluated occasionally. The fact that IRT has not yet experienced the same level of standardization and consensus on methodology as CTT methods stresses the importance of adequately explaining, justifying, and reporting performed IRT analyses. 
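The adaptive testing and along-the-scale precision mentioned here both rest on the same quantity, the Fisher information (p(1-p) per item under the Rasch model): a CAT keeps administering the most informative remaining item at the current trait estimate, and the standard error of measurement at a trait level is 1/sqrt(total information). A minimal sketch under these assumptions (illustrative names and values, not code from the reviewed studies):

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item at trait level theta:
    p * (1 - p), maximal when the item difficulty b equals theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta_hat, difficulties, administered):
    """CAT step: pick the not-yet-administered item from the bank
    that is most informative at the current trait estimate."""
    remaining = [i for i in range(len(difficulties)) if i not in administered]
    return max(remaining, key=lambda i: rasch_info(theta_hat, difficulties[i]))

def sem(theta, difficulties):
    """Standard error of measurement of the whole scale at theta."""
    return 1.0 / math.sqrt(sum(rasch_info(theta, b) for b in difficulties))
```

For example, with a bank of difficulties [-2, 0, 2] and a current estimate of 0.1, the middle item is selected first; the SEM is lowest where item difficulties cluster around the patient's level, which is exactly the along-the-scale precision profile that few of the reviewed studies reported.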
A global consensus on uniform guidelines should be reached about the minimum number of assumptions which should be met and best ways of testing these assumptions, in order to stimulate the quality appraisal of performed and reported IRT analyses.", "2-PL model: 2-Parameter Logistic model; CAT: Computerized Adaptive Test; CM: Clinical Measure; CTT: Classical Test Theory; DIF: Differential Item Functioning; HAQ: Health Assessment Questionnaire; IRT: Item Response Theory; OA: OsteoArthritis; PRO: Patient-Reported Outcome; RA: Rheumatoid Arthritis; SF-36: 36-item Short Form health survey.", "The authors declare that they have no competing interests.", "LS was responsible for the conceptualization of the manuscript. LS and PTK were responsible for the screening and identification of studies and the extraction of relevant data. PTK, ET, CG and MVDL supervised the whole study and the interpretation of the results. All authors critically evaluated the manuscript, contributed to its content, and approved the final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2474/13/216/prepub\n", "Checklist. Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23].\nClick here for file\nList of included articles. Literature searches in PubMed, Scopus and Web of Science resulted in 99 original English-language articles which used some form of IRT-based analysis of patient reported or clinical outcome data in patients with a rheumatic condition.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", "conclusions", null, null, null, null, "supplementary-material" ]
[ "Clinical measures", "Item response theory", "Modern psychometrics", "Patient-reported outcomes", "Rheumatology" ]
Background: Since there is no gold standard for the assessment of disease severity and impact in most rheumatic conditions, it is common practice to administer multiple outcome measures to patients. Initially, the severity and impact of most rheumatic conditions was typically evaluated with clinical measures (CMs) [1,2] such as laboratory measures of inflammation like the erythrocyte sedimentation rate [3] and physician-based joint counts [4,5]. Since the eighties of the last century, however, rheumatologists have increasingly started to use patient-reported outcomes (PROs) [1,2]. As a result, a wide variety of PROs are currently in use, varying from single item visual analogue scales (e.g. pain or general health) to multiple item scales like the health assessment questionnaire (HAQ) [6] which measures a patient’s functional status and the 36-item short form health survey (SF-36) which measures eight dimensions of health related quality of life [7]. Statistical methods are essential for the development and evaluation of all outcome measures. By far, most health outcome measures have been developed using methods from classical test theory (CTT). In recent years, however, an increase in the use of statistical methods based on item response theory (IRT) can be observed in health status assessment [8-10]. Extensive and detailed descriptions of IRT can be found in the literature [11-14]. In short, IRT is a collection of probabilistic models, describing the relation between a patient’s response to a categorical question/item and the underlying construct being measured by the scale [11,15]. IRT supplements CTT methods, because it provides more detailed information on the item level and on the person level. This enables a more thorough evaluation of an instrument’s psychometric characteristics [15], including its measurement range and measurement precision. 
The evaluation of the contribution of individual items facilitates the identification of the most relevant, precise, and efficient items for the assessment of the construct being measured by the instrument. This is very useful for the development of new instruments, but also for improving existing instruments and developing alternate or short form versions of existing instruments [16]. Additionally, IRT methods are particularly suitable for equating different instruments intended to measure the same construct [17] and for cross-cultural validation purposes [18]. Finally, IRT provides the basis for developing item banks and patient-tailored computerized adaptive tests (CATs) [9,19,20]. Although IRT appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. The Outcome Measures in Rheumatology (OMERACT) network recently initiated a special interest group aimed at promoting the use of IRT methods in rheumatology [21]. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, the aim of this study was to systematically review the application of IRT to clinical and patient-reported outcome measures within rheumatology. Methods: Search strategy Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*. Flowchart of the search process. 
Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*. Flowchart of the search process. Inclusion and exclusion criteria Only original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on this data in order to achieve a defined study objective. To be included, studies should present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded. No limitations were set for study design. Only original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on this data in order to achieve a defined study objective. To be included, studies should present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. 
Study identification and selection

The search strategy yielded a total of 385 records. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the title and abstract identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full text was examined. In total, 103 studies did not meet the inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. the study population was not clearly defined, or the rheumatic sample comprised <50% of the total sample and was not analysed separately), the statistical analyses (i.e. no IRT application), and the article type (i.e. non-original research). Figure 1 gives an overview of the exclusion reasons together with the number of articles removed.

Data extraction

First, two reviewers independently evaluated a random sample of 15 articles.
Both general study information and IRT-specific information were extracted, using a purpose-made checklist (Additional file 1) based on expert input and on important issues mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement on the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most disagreements were caused by differing interpretations of some of the extracted variables. For instance, one reviewer interpreted the checklist item “performed analyses” as covering analyses using IRT-based methods only, whereas the other interpreted it more broadly as also including classical test theory methods (the latter being the correct interpretation). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles.

General study information

General information concerned the author(s), publication year, study population, the population’s country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (patient-reported outcome, PRO, or clinical measure, CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning).

Purpose of analyses

The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments).
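Cohen’s kappa, used above to quantify inter-rater agreement, corrects the observed proportion of agreement for the agreement expected by chance given each rater’s marginal frequencies. A minimal pure-Python sketch; the ten article codes and the yes/no coding are hypothetical, not data from the review:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (nominal categories)."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two reviewers to ten articles:
a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
b = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no", "yes", "no"]
print(round(cohens_kappa(a, b), 2))  # raw agreement 0.80, kappa ~0.52
```

Note how two disagreements out of ten (raw agreement 0.80) translate into a noticeably lower kappa once chance agreement is removed.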
Specific IRT analyses

Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are the most widely applied, the simplest being the Rasch model, which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items vary in their ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in the case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model; the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models such as the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model, and model choice should be motivated by considering the discrimination equality of the items and the number of (ordered) response categories [15,22-24]. The applied IRT software and the corresponding item and person parameter estimation method(s) should also be reported, since not all software packages report findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11]. To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23].
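The Rasch/2-PL contrast described above can be made concrete with item characteristic curves: both models give the probability of endorsing an item as a logistic function of ability theta minus difficulty b, and the 2-PL adds a discrimination parameter a (fixing a = 1 for every item recovers the Rasch case). A short sketch with hypothetical item parameters:

```python
import math

def icc_2pl(theta, b, a=1.0):
    """2-PL item characteristic curve: P(endorse | theta).
    Fixing a = 1.0 for every item reduces this to the Rasch model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two hypothetical items with equal difficulty (b = 0) but different
# discrimination: the steeper curve separates respondents more sharply.
for theta in (-2.0, 0.0, 2.0):
    p_rasch = icc_2pl(theta, b=0.0)           # Rasch (a = 1)
    p_2pl = icc_2pl(theta, b=0.0, a=2.5)      # 2-PL, higher discrimination
    print(f"theta={theta:+.0f}  Rasch={p_rasch:.2f}  2-PL={p_2pl:.2f}")
```

At theta = b both curves cross 0.5; away from the difficulty, the more discriminating item changes endorsement probability much faster, which is exactly what the a parameter encodes.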
The first assumption is unidimensionality, meaning that the set of test items measures only a single construct [11,15,22,23]. Analyses for checking unidimensionality include different types of factor analysis of the items or the residuals. A more advanced method is to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption is local independence of the items. When this assumption is violated, it may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may point either to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and to wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariances, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship between the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions, and suggestions about which aspects to report, can be found in the literature [11,15,22,23]. Other useful IRT applications include the evaluation of differential item functioning, reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22].
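One of the pairwise statistics for local independence alluded to above is Yen’s Q3, which correlates the model residuals (observed minus model-expected responses) of item pairs: a residual correlation far above the average for the scale flags a locally dependent pair. A rough sketch under a Rasch model, with hypothetical abilities, difficulties, and 0/1 responses:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sx = math.sqrt(sum((u - mx) ** 2 for u in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return cov / (sx * sy)

def rasch_p(theta, b):
    """Rasch model-expected probability of a correct/endorsed response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical abilities, two item difficulties, and observed 0/1 responses.
thetas = [-2.0, -1.5, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
obs1 = [0, 0, 0, 1, 1, 1, 1, 1]
obs2 = [0, 0, 0, 1, 1, 1, 1, 1]   # identical pattern: suspiciously dependent
res1 = [o - rasch_p(t, 0.0) for o, t in zip(obs1, thetas)]
res2 = [o - rasch_p(t, 0.5) for o, t in zip(obs2, thetas)]
q3 = pearson(res1, res2)   # a large positive value flags local dependence
print(round(q3, 2))
```

With the ability accounted for by the model, two truly independent items should leave roughly uncorrelated residuals; the identical response patterns here do not, which is the signature of, e.g., overlapping item content.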
Global IRT reliability is equivalent to Cronbach’s alpha, except that the IRT score rather than the raw score is used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. In contrast to CTT methods, IRT also provides information about local reliability [12] and, related to this, about the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to establish whether the items function well for the included population sample or whether there is a mismatch between them [23].
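The local measurement precision mentioned above is commonly expressed through the test information function: for the 2-PL model, the information at ability theta is the sum over items of a²·p·(1−p), and the local standard error of measurement is 1/√information, so precision is highest where item difficulties cluster. A sketch with hypothetical item parameters:

```python
import math

def info_2pl(theta, items):
    """Fisher information of a 2-PL scale at ability theta.
    items: list of (a, b) discrimination/difficulty pairs."""
    total = 0.0
    for a, b in items:
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        total += a * a * p * (1.0 - p)
    return total

# Hypothetical five-item scale with difficulties clustered near the average:
items = [(1.2, -1.0), (1.0, -0.5), (1.5, 0.0), (1.1, 0.5), (0.9, 1.0)]
for theta in (-3.0, 0.0, 3.0):
    info = info_2pl(theta, items)
    sem = 1.0 / math.sqrt(info)   # local standard error of measurement
    print(f"theta={theta:+.0f}  information={info:.2f}  SEM={sem:.2f}")
```

The output illustrates the typical pattern: information peaks near the item difficulties and falls off toward the extremes, which is exactly the item/person mismatch a targeting analysis would detect.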
In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24]. The applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11]. To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. 
The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23]. Other useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22]. Global IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23]. First, two reviewers independently evaluated a random sample of 15 articles. 
Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist ( Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist on “performed analyses” as performed analyses using IRT based methods only, whereas the other reviewer interpreted it more broadly including classical test theory methods as well (the latter being the correct method). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles. General study information General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning). General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning). Purpose of analyses The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments). The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. 
the development, evaluation, comparison, or cross-cultural validation of instruments). Specific IRT analyses Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24]. The applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11]. To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. 
The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23]. Other useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22]. 
Global IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23]. Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. 
In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24]. The applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11]. To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. 
The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23]. Other useful IRT applications include the evaluation of the presence of differential item functioning, the reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct being measured respond differently to an item [15,22]. Commonly examined types of DIF are DIF across gender and age [22]. Global IRT reliability is equivalent to Cronbach’s alpha, with the difference that not the raw score but the IRT score is being used in its calculation. Which specific global reliability statistics are presented usually depends on the software package used. Contrary to CTT methods, IRT also provides information about the local reliability [12] and, related to this, the instrument’s measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and to determine whether the items function well for the included population sample or whether there is a mismatch between them [23]. 
Search strategy: Figure 1 presents an overview of the various stages followed during the search process, starting with an extensive literature search in April 2012 to identify all eligible studies up to and including the year 2011. Electronic database searches of PubMed, Scopus, and Web of Science were carried out, using the terms 'Item response theor*' OR 'Item response model*' OR 'latent trait theor*' OR Rasch OR Mokken, in combination with Rheumat* OR Arthros* OR arthrit*. Flowchart of the search process. Inclusion and exclusion criteria: Only original research articles written in English were included. Articles were considered original when they included original data and when they performed analyses on this data in order to achieve a defined study objective. To be included, studies should present an application of IRT in a sample of which at least 50% had some kind of rheumatic disease. In cases where less than 50% of the study sample consisted of rheumatic patients (i.e. inflammatory rheumatism, arthrosis, soft tissue rheumatism), the study was only included when the rheumatic sample was analysed separately from the rest of the sample. Reviews, letters, editorials, opinion papers, abstracts, posters, and purely descriptive studies were excluded. No limitations were set for study design. Study identification and selection: The search strategy resulted in a total of 385 studies. After the removal of 189 duplicates, 196 unique articles were identified. Two reviewers independently screened all 196 studies for relevance based on the abstract and title identified from the initial search. If no evident inclusion or exclusion reasons were identified, the full-text was examined. In total, 103 studies did not meet inclusion criteria and were excluded. The main reasons for exclusion were: the study population (i.e. 
the study population was not clearly defined or the study contained a rheumatic sample <50% of the total sample which was not separately analysed), the statistical analyses (i.e. no IRT application), and the article type (i.e. non-original research). Figure 1 includes an overview of the exclusion reasons followed by the number of articles removed. Data extraction: First, two reviewers independently evaluated a random sample of 15 articles. Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist ( Additional file 1) based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. Inter-rater agreement of the evaluated variables was moderate to high, with Cohen’s kappa ranging from 0.60 to 1.00. Most of the disagreements were caused by differing interpretations of some of the extracted variables. For instance, one of the reviewers interpreted the checklist on “performed analyses” as performed analyses using IRT based methods only, whereas the other reviewer interpreted it more broadly including classical test theory methods as well (the latter being the correct method). Consensus about these differences was reached by discussion. Next, one of these reviewers also evaluated the remaining 84 articles. General study information General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning). General information concerned the author(s), publication year, study population, the populations’ country of origin, total number of participants (N), study design of the IRT analyses (i.e. 
cross-sectional or longitudinal), type of outcome measure (PRO or CM), and main measurement intention (e.g. quality of life, pain, overall physical functioning). Purpose of analyses The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments). The purpose of the analyses was determined by the main goal the author(s) of the article pursued (e.g. the development, evaluation, comparison, or cross-cultural validation of instruments). Specific IRT analyses Before a researcher can perform IRT analyses, an appropriate IRT model should be selected. Unidimensional models are most widely applied, the simplest being the Rasch model which assumes that the items are equally discriminating and vary only in their difficulty. The 2-parameter logistic model (2-PL model) extends the Rasch model by assuming that the items have a varying ability to discriminate among people with different levels of the underlying construct [11,15,19,23]. These models can be specified further for polytomous items. The rating scale model, graded response model, modified graded response model, partial credit model, and generalized partial credit model can be applied in case of ordered categorical responses. The nominal response model can be applied when response categories are not necessarily ordered [11,15,19,23,24]. The rating scale model and the partial credit model are generalizations of the Rasch model, the other models are generalizations of the 2-PL model. In addition to these unidimensional models, multidimensional models and specific non-parametric models like the Mokken model [25,26] have been developed. Differences in model assumptions should be taken into account when choosing a model and model choice should be motivated by taking the discrimination equality of the items and the number of (ordered) response categories into consideration [15,22-24]. 
The applied IRT software and the corresponding item and person parameter estimation method(s) should also be cited, since not all software packages report the findings in the same way [22] and because the use of different estimation methods may result in different parameter estimates [11]. To make IRT results interpretable and trustworthy, three principal assumptions should be evaluated when applying a unidimensional IRT model [15,23]. The first assumption concerns unidimensionality, meaning that the set of test items measure only a single construct [11,15,22,23]. Analyses for checking the unidimensionality can include different types of factor analysis of the items or the residuals. A more advanced method would be to compare a unidimensional IRT model with a multidimensional IRT model, for instance using a likelihood ratio test. The second (related) assumption concerns local independence of the items. When this assumption is violated this may indicate that the items have more in common with each other than just the single underlying construct [11,15,22,23]. This may either point to response dependency (e.g. overlapping items in the scale) or to multidimensionality of the scale [22]. It can lead to biased parameter estimates and wrong decisions about, for instance, item selection [15]. Local independence can be tested by a factor analysis of the residual covariations, or with more specific statistics targeted at responses to pairs of items [12]. The third assumption concerns the model’s appropriateness to reflect the true relationship among the underlying construct and the item responses [11,15,22,23]. This can be examined with both item and person fit statistics. More information about these assumptions and suggestions about which aspects to report can be found in the literature [11,15,22,23]. 
Other useful IRT applications include the evaluation of differential item functioning, reliability and measurement precision, the ordering of the response categories or item thresholds, and the hierarchical ordering and distribution of persons and items along the scale of the underlying construct. Differential item functioning (DIF, also called item bias) is present when patients with similar levels of the underlying construct respond differently to an item [15,22]; DIF across gender and age are the most commonly examined types [22]. Global IRT reliability is equivalent to Cronbach's alpha, with the difference that the IRT score rather than the raw score is used in its calculation. Which global reliability statistics are presented usually depends on the software package used. In contrast to CTT methods, IRT also provides information about local reliability [12] and, related to this, the instrument's measurement precision along the scale of the underlying construct. With rating scale analysis, the ordering of the response categories or item thresholds can be checked, enabling the evaluation of the appropriateness or redundancy of the response categories [15]. Likewise, the hierarchical ordering and/or distribution of persons and items along the scale can be analysed to determine the measurement range of the outcome measure and whether the items function well for the included population sample or whether there is a mismatch between them [23].
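The idea that measurement precision varies along the latent scale can be made concrete with the test information function of a 2-PL model: information is the sum of the item informations a_i² P_i(θ)(1 − P_i(θ)), and its inverse square root is the local standard error of measurement. The item parameters below are purely illustrative, not taken from any reviewed instrument.

```python
import numpy as np

# Hypothetical 2-PL item parameters (a = discrimination, b = difficulty).
a = np.array([1.0, 1.5, 0.8, 2.0, 1.2])
b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])

theta = np.linspace(-4, 4, 81)                 # grid over the latent scale

# 2-PL response probabilities for each (theta, item) pair
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

# Test information and the corresponding local standard error:
# precision varies along the scale, unlike a single CTT reliability value.
info = (a**2 * p * (1.0 - p)).sum(axis=1)
se = 1.0 / np.sqrt(info)
print(f"information peaks at theta = {theta[np.argmax(info)]:.1f}")
```

An instrument is most precise where its information curve peaks; a mismatch between that region and the distribution of the patient sample is exactly the kind of problem the hierarchical person-item analyses described above are meant to detect.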
Results

General information of included studies

The initial database search yielded a total of 93 eligible studies.
Six additional studies were identified by manual reference checks of the selected studies, resulting in a final selection of 99 studies (Additional file 2). Figure 2 shows that the prevalence of IRT analysis within rheumatology has increased markedly over the past decades. This is consistent with conclusions from Hays et al. [19] and with findings from Belvedere and Morton [8], who examined the frequency of Rasch analyses in the development of mobility instruments. (Figure 2: Number of published articles reporting the application of IRT within rheumatology.) Table 1 presents an overview of the most prominent results. By far most research was carried out with patients from the United States or the United Kingdom, but data from patients from the Netherlands and Canada were also regularly used. The vast majority of studies involved cross-sectional IRT analyses, although an increasing number of studies have performed longitudinal IRT analyses since the start of the 21st century, as reflected by a rise in DIF testing over time. (Table 1: Overview of the most prominent results. PRO: patient-reported outcome (N=85), CM: clinical measure (N=14), RA: rheumatoid arthritis, OA: osteoarthritis, CAT: computerized adaptive test, IRT: item response theory, 2-PLM: 2-parameter logistic model, DIF: differential item functioning. * Note that some studies can be assigned to multiple subcategories; therefore, the sum of the percentages within a category exceeds 100%.) Study samples varied from as few as 18 persons in the study of Penta et al. [27] to as many as 16,519 persons in the study by Wolfe et al. [28]. Most studies (92.9%) performed analyses on a sample of at least 50 persons. In 85 of the 99 studies, IRT analyses were applied to PROs; the remaining 14 studies applied IRT to CMs. The vast majority of studies applied IRT to data gathered from patients suffering from rheumatoid arthritis (RA) or osteoarthritis (OA).
Outcome measures of overall physical functioning and quality of life were analysed most frequently. To a lesser extent, studies applied IRT to PRO measures of specific functioning [27,29-37], pain [35,38-43], psychological constructs [44-46], and work disability [47-51]. Studies also applied IRT to CMs such as measures of disease activity [52-54] and disease damage or radiographic severity [55-57].
Purpose of analyses

The most common main goals for both the PRO and the CM studies were the development or evaluation of new measures, the evaluation of existing measures, and the development or evaluation of alternate or short-form versions of an existing measure. In addition, several studies aimed to cross-culturally validate a patient-reported or clinical measure. IRT was rarely applied for the development of item banks [17,58] or computerized adaptive tests [59,60].
Specific IRT analyses

IRT model and software

The vast majority of IRT applications within rheumatology involved Rasch analyses, although a clear specification of and rationale for the applied Rasch model was not always given. Few studies used a two-parameter IRT model or Mokken analysis. Most analyses were carried out with the software packages Bigsteps/Winsteps or RUMM. A motivation for the model choice was provided in only 27.3% of the studies, and the item and person parameter estimation methods were rarely specified (8.1% and 4.0% of the studies, respectively).

IRT assumptions

The assumption of unidimensionality was tested in approximately three quarters of the studies. The methods used for this purpose mainly concerned some type of factor analysis (confirmatory/exploratory factor analysis or principal component analysis) or the examination of specific IRT statistics (e.g. whether the overall model fit or the item fit values exceeded a pre-specified cut-off point). No studies were found in which unidimensional IRT models were contrasted with multidimensional IRT models. A possible violation of the assumption of local independence was evaluated in only one of the CM studies and in only 18.8% of the PRO studies. Evaluation of the studies also indicated that there was no clear agreement on how to evaluate this assumption, given the variety of methods used.
The assumption of the appropriateness of the model was evaluated in approximately 91% of the studies. When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%).

Additional IRT analyses

More than half of the studies used IRT to examine DIF. When applied, analyses ranged from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7%), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%) to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%).
Other commonly performed IRT analyses included analyses of global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. the ordering of the response categories or item thresholds). In addition, a small number of PRO studies reported IRT analyses regarding the measurement precision of the scale, whereas only one of the CM studies evaluated this.
When applied, roughly half of the cases evaluated overall fit (PRO: 51.9%, CM: 53.8%), almost all evaluated item fit (PRO: 97.4%, CM: 100.0%), but a much smaller percentage evaluated person fit statistics (PRO: 33.8%, CM: 30.8%). Additional IRT analyses: More than half of the studies used IRT to examine DIF. When applied, analyses varied from cross-sectional DIF across gender (PRO: 80.0%, CM: 66.7%), age (PRO: 76.0%, CM: 66.7%), disease duration (PRO: 36.0%, CM: 16.7), countries/cultures/ethnicity (PRO: 18.0%, CM: 16.7%), and disease type (PRO: 10.0%, CM: 16.7%), to longitudinal DIF analyses over time (PRO: 28.0%, CM: 33.3%). Other commonly performed IRT analyses included analyses of the global reliability, the hierarchical ordering and distribution of items and persons, and rating scale analyses (i.e. the ordering of the response categories or item thresholds). In addition, a small number of PRO-studies reported IRT analyses regarding the measurement precision of the scale, whereas only 1 of the CM studies evaluated this. Discussion: IRT offers a powerful framework for the evaluation or development of existing and new outcome measures. This is the first study that systematically reviewed the extent to which IRT has been applied to measurements from rheumatology. Results showed a marked increase in IRT applications within the rheumatic field from the late eighties up to now. Even though most research focussed on PROs, IRT also appeared to be useful for application to CMs. Some opportunities for further IRT applications and improvements in the analyses and reporting of IRT studies were also pointed out. IRT can be applied for various purposes. First, IRT analysis is useful for the development and evaluation of new measures [22]. For instance, Helliwell et al. [32] developed a foot impact scale to assess foot status in RA patients. Rasch modeling was used to facilitate item reduction by selecting items which were free of DIF and fitted model expectations. 
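For readers less familiar with the models compared throughout this review: the Rasch (one-parameter logistic) model gives the probability of an affirmative item response as a logistic function of the difference between a person's level on the construct (theta) and the item's difficulty (b); the two-parameter model adds an item discrimination (a). A minimal illustrative sketch, with values chosen by us and not taken from any of the reviewed studies:

```python
import math

def rasch_prob(theta, b):
    """Rasch (1-PL) probability of an affirmative response:
    P(X = 1 | theta) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def two_pl_prob(theta, b, a):
    """Two-parameter logistic model; a is the item discrimination.
    With a = 1 this reduces to the Rasch model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A person whose level equals the item difficulty has a 50% chance
# of an affirmative response under either model.
print(rasch_prob(0.0, 0.0))        # 0.5
print(two_pl_prob(0.0, 0.0, 2.0))  # 0.5

# A more discriminative item separates persons more sharply,
# which is the key assumption difference discussed later in this review.
print(rasch_prob(1.0, 0.0))        # ~0.73
print(two_pl_prob(1.0, 0.0, 2.0))  # ~0.88
```

The Rasch model's assumption that all items share a = 1 is what makes its results easy to interpret, at the cost of the strict equal-discrimination constraint noted in the discussion.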
Whereas CTT methods often discard items at the extremes of the measurement range because too few patients answer them affirmatively, IRT retains these items since they provide important information at the extremes of the measurement range [61]. IRT is also suitable for the evaluation of existing (ordinal) outcome measures. For example, when evaluating an instrument’s response categories it can be determined whether they perform as intended or whether categories should be collapsed into fewer options or expanded into more options [22]. Furthermore, it can be evaluated whether the items in the outcome measure form a unidimensional scale as expected or whether item deletion is necessary [22]. Another favourable feature of IRT is that it is expressed at the item level instead of the test level, as in CTT [11]. By evaluating the performance of individual items, alternate or short form versions of existing measures can be developed. For example, Wolfe et al. [62] developed an alternate version of the HAQ [6,63], known as the HAQ-II, specifically targeted at patients with relatively high physical functioning. Another commonly used feature of modeling at the item level is the robust assessment of DIF, as reflected in the high proportion of performed DIF analyses. Nevertheless, the full potential of modeling at the item level is not yet being used, given the low percentage of studies evaluating the items’ performance (i.e. measurement precision and local reliability) along the scale. When comparing the studies focusing on RA patients with those focusing on OA patients, the measurement intentions of the analysed instruments and the applied IRT models were highly comparable. However, a notable difference was found in the main goals of these studies.
Whereas the RA studies pursued widely varying main goals, including the development of new instruments, the evaluation of existing instruments, the comparison of different instruments, and cross-cultural validation, the studies on OA patients generally focused on the evaluation of existing instruments only. There are several IRT applications which have not yet been (frequently) used within rheumatology. One IRT application which appears to be still in its infancy within rheumatology, but which is likely to gain importance in the future, is the development of computerized adaptive tests (CATs) [2]. When testing by means of a CAT, every patient receives a test which is tailored (adapted) to his or her level on the underlying construct being measured. Consequently, each patient can be administered different sequences and numbers of items, drawn from a large item bank. By applying CATs, tests can be shortened without any loss of measurement precision, reducing the measurement burden for both the patient and the rheumatologist [1,2,9-11,16]. Cross-calibration is another IRT application whose potential advantages have not yet been recognized within rheumatology. As opposed to CTT methods, in IRT the item responses are regressed on separate item and person parameters [11]. This means that the definition of the item parameters is independent of the sample receiving the test, and the definition of the person parameters is independent of the test items given. This separation of parameters facilitates the cross-calibration of various outcome measures based on the same underlying construct [11,64], making their scores comparable with each other. As discussed earlier, it is important to test the assumptions of unidimensionality, local independence, and model appropriateness when analysing data by means of IRT methods. Items which violate one or more of these assumptions should be combined, rephrased, or deleted [22,23], since they complicate the interpretation of model outcomes.
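One common screen for local dependence in the IRT literature is Yen's Q3 statistic, which correlates the residuals of item pairs after the latent trait has been accounted for; clearly positive residual correlations flag locally dependent items. A minimal sketch with simulated Rasch data and assumed (known, not estimated) parameters; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed person abilities and item difficulties (illustrative values).
thetas = rng.normal(0.0, 1.0, size=500)
difficulties = np.array([-1.0, 0.0, 0.5, 1.0])

# Simulate dichotomous Rasch responses: P = logistic(theta - b).
probs = 1.0 / (1.0 + np.exp(-(thetas[:, None] - difficulties[None, :])))
responses = (rng.uniform(size=probs.shape) < probs).astype(float)

# Q3-style check: correlate item residuals (observed minus expected score).
residuals = responses - probs
q3 = np.corrcoef(residuals, rowvar=False)

# Off-diagonal values near zero are consistent with local independence;
# a common rule of thumb flags pairs with |Q3| clearly above ~0.2.
off_diag = q3[np.triu_indices_from(q3, k=1)]
print(np.round(off_diag, 3))
```

Because the data here are generated under the model, the off-diagonal residual correlations should hover near zero; with real data, a large value for a specific item pair (e.g. two items sharing a common wording) would suggest local dependence.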
A promising observation was that the majority of the studies tested the assumption of unidimensionality and the appropriateness of the IRT model, although some studies did not report any fit statistics. Although comparisons between unidimensional and multidimensional IRT models provide a much more rigorous test of unidimensionality than factor analyses, such comparisons were not made. Analyses of model fit mainly involved overall fit statistics or item fit statistics, and to a lesser extent the evaluation of person fit. Person fit, however, is also important since deviant response patterns of patients may seriously affect the item fit. The removal of patients with such response patterns from the analysis may improve the scale’s internal construct validity significantly [22]. Most studies, however, did not check the assumption of local independence. The importance of local independence has only more recently been recognized and, consequently, only some of the most recent studies (from the year 2007 onwards) evaluated this assumption. Future studies should continue to pay attention to this assumption, since locally dependent items can bias parameter estimates, which may lead to wrong decisions concerning item selection when constructing an outcome measure [15]. The results also showed room for improvement in the reporting of the choices made and the rationale for specific decisions. For instance, the applied IRT model is often not specified and, if specified, the reasons behind the selected IRT model and the estimation methods used are often not clearly motivated. This complicates the quality appraisal and replication of the performed analyses. Whereas Belvedere and de Morton [8] examined the application of Rasch analysis only, this study included the whole spectrum of IRT models. A notable finding of this review was that Rasch models dominate within rheumatology, and that two-parameter IRT models were applied in only a few studies.
This may be due to the ease of use of a Rasch model and the ease with which its results can be interpreted. However, this advantage of Rasch modeling comes with the strict assumption that every item of the measure is equally discriminative. Whether this assumption is appropriate can be tested by comparing the Rasch model fit with the 2-parameter model fit. Since the studies of Pham et al. [65] and Siemons et al. [54] are the only studies in which such a comparison was made, this is a point of interest for future studies. Although IRT is becoming increasingly popular in health status assessment, IRT is quite complex to understand and is not yet a mainstream technique for most researchers and rheumatologists. To increase common understanding and to improve the interpretation of outcomes resulting from the performed IRT analyses, (bio)statisticians, rheumatologists, and researchers should closely collaborate. Clear guidelines on the quality appraisal of performed IRT analyses might increase the use and understanding of IRT in rheumatology even further. Currently, there are no clear guidelines available for rating the methodological quality of the performed IRT analyses. Although standardized tools like the COSMIN (COnsensus-based Standards for the selection of health status Measurement INstruments) checklist [66] can be used for evaluating the methodological quality of studies on measurement properties, this checklist contains only a few questions regarding IRT analyses and is, therefore, more suitable for analyzing the quality of performed classical test theory analyses. Even though the quality checklist used in this study was based on both expert input and important issues from the literature, it was not exhaustive and, consequently, it might have some limitations. For example, when the sample size was considered, only the absolute number was reported. It was not checked whether the authors also justified the sample size for the analyses they wanted to perform.
The varying sample sizes of the analysed patient groups found between studies might be due to the absence of clear guidelines regarding sample size requirements. It is argued that even the simplest Rasch analyses require a minimum of 50–100 persons [15,23]. However, many issues are involved in determining the right sample size for a certain study, including the model choice, the number of response categories, and the purpose of the study [15,23]. These issues should be carefully considered to determine the sample size which is minimally needed to achieve reliable model estimates. Consensus and clear guidelines on quality aspects concerning IRT analyses might guide the choice of an adequate sample size, and might also stimulate the development of uniform guidelines for performing and reporting IRT studies, as well as a checklist for evaluating the quality of the performed and reported IRT analyses. The formulation of such guidelines would provide a strong foundation for future IRT studies. Tennant et al. already provided such guidelines for performing Rasch analyses [22]. However, given the large diversity of approaches, models, and software used in the field of IRT, it is difficult to recommend a single set of guidelines for all types of studies, and an expansion or modification of their guidelines might be needed. In order to get sufficient support for these guidelines it is important to first attempt to reach a more global consensus about recommendations. This article could provide input for such attempts, and the COSMIN checklist [66] can serve as an example of how such an international approach can lead to the development of a consensus-based checklist. Agreement should be reached on the minimum number of assumptions which should be met (e.g. unidimensionality, model fit, and DIF analysis) and the best ways of testing these assumptions.
Additionally, this review showed that IRT methods are rarely applied for the evaluation of an instrument’s local reliability and measurement precision along the scale of the underlying construct, or for the construction of item banks and CATs, all unique features of IRT. Therefore, it is recommended that more emphasis be placed on these features in the guidelines and in future studies. Conclusions: A marked increase of IRT applications could be observed within rheumatology. IRT has primarily been applied to patient-reported outcomes, but it also appeared to be a useful technique for the evaluation of clinical measures. To date, IRT has mainly been used for the development of new static outcome measures and the evaluation of existing measures. In addition, alternate or short forms were created by evaluating the fit and performance of individual items. Useful IRT applications which are not yet widely used within rheumatology include the cross-calibration of instrument scores and the development of computerized adaptive tests, which may reduce the measurement burden for both the patient and the clinician. Also, the measurement precision of outcome measures along the scale has only been evaluated occasionally. The fact that IRT has not yet experienced the same level of standardization and consensus on methodology as CTT methods stresses the importance of adequately explaining, justifying, and reporting performed IRT analyses. A global consensus on uniform guidelines should be reached about the minimum number of assumptions which should be met and the best ways of testing these assumptions, in order to stimulate the quality appraisal of performed and reported IRT analyses.
Abbreviations: 2-PL model: 2-Parameter Logistic model; CAT: Computerized Adaptive Test; CM: Clinical Measure; CTT: Classical Test Theory; DIF: Differential Item Functioning; HAQ: Health Assessment Questionnaire; IRT: Item Response Theory; OA: OsteoArthritis; PRO: Patient-Reported Outcome; RA: Rheumatoid Arthritis; SF-36: 36-item Short Form health survey. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: LS was responsible for the conceptualization of the manuscript. LS and PTK were responsible for the screening and identification of studies and the extraction of relevant data. PTK, ET, CG and MVDL supervised the whole study and the interpretation of the results. All authors critically evaluated the manuscript, contributed to its content, and approved the final version. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/13/216/prepub Supplementary Material: Checklist. Both general study information as well as IRT-specific information were extracted, using a purpose-made checklist based on both expert input and important issues as mentioned in Tennant and Conaghan [22], Reeve and Fayers [15], and Orlando [23]. List of included articles. Literature searches in PubMed, Scopus and Web of Science resulted in 99 original English-language articles which used some form of IRT-based analysis of patient reported or clinical outcome data in patients with a rheumatic condition.
Background: Although item response theory (IRT) appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, this study systematically reviewed the application of IRT to patient-reported and clinical outcome measures in rheumatology. Methods: Literature searches in PubMed, Scopus and Web of Science resulted in 99 original English-language articles which used some form of IRT-based analysis of patient-reported or clinical outcome data in patients with a rheumatic condition. Both general study information and IRT-specific information were assessed. Results: Most studies used Rasch modeling for developing or evaluating new or existing patient-reported outcomes in rheumatoid arthritis or osteoarthritis patients. Outcomes of principal interest were physical functioning and quality of life. Since the last decade, IRT has also been applied to clinical measures more frequently. IRT was mostly used for evaluating model fit, unidimensionality and differential item functioning, the distribution of items and persons along the underlying scale, and reliability. Less frequently used IRT applications were the evaluation of local independence, the threshold ordering of items, and the measurement precision along the scale. Conclusions: IRT applications have markedly increased within rheumatology over the past decades. To date, IRT has primarily been applied to patient-reported outcomes; however, applications to clinical measures are gaining interest.
Useful IRT applications not yet widely used within rheumatology include the cross-calibration of instrument scores and the development of computerized adaptive tests which may reduce the measurement burden for both the patient and the clinician. Also, the measurement precision of outcome measures along the scale was only evaluated occasionally. Performed IRT analyses should be adequately explained, justified, and reported. A global consensus about uniform guidelines should be reached concerning the minimum number of assumptions which should be met and best ways of testing these assumptions, in order to stimulate the quality appraisal of performed IRT analyses.
Background: Since there is no gold standard for the assessment of disease severity and impact in most rheumatic conditions, it is common practice to administer multiple outcome measures to patients. Initially, the severity and impact of most rheumatic conditions were typically evaluated with clinical measures (CMs) [1,2] such as laboratory measures of inflammation like the erythrocyte sedimentation rate [3] and physician-based joint counts [4,5]. Since the eighties of the last century, however, rheumatologists have increasingly started to use patient-reported outcomes (PROs) [1,2]. As a result, a wide variety of PROs are currently in use, varying from single-item visual analogue scales (e.g. pain or general health) to multiple-item scales like the health assessment questionnaire (HAQ) [6], which measures a patient’s functional status, and the 36-item short form health survey (SF-36), which measures eight dimensions of health-related quality of life [7]. Statistical methods are essential for the development and evaluation of all outcome measures. By far, most health outcome measures have been developed using methods from classical test theory (CTT). In recent years, however, an increase in the use of statistical methods based on item response theory (IRT) can be observed in health status assessment [8-10]. Extensive and detailed descriptions of IRT can be found in the literature [11-14]. In short, IRT is a collection of probabilistic models, describing the relation between a patient’s response to a categorical question/item and the underlying construct being measured by the scale [11,15]. IRT supplements CTT methods, because it provides more detailed information on the item level and on the person level. This enables a more thorough evaluation of an instrument’s psychometric characteristics [15], including its measurement range and measurement precision.
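The item-level measurement precision referred to here can be quantified with Fisher information: each dichotomous item contributes I(theta) = a^2 * P * (1 - P), and the conditional standard error of measurement is 1 / sqrt(total information). A minimal sketch with illustrative (made-up) item parameters:

```python
import math

def prob(theta, b, a=1.0):
    """2-PL response probability; a = 1 gives the Rasch model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, b, a=1.0):
    """Fisher information of a dichotomous item: a^2 * P * (1 - P)."""
    p = prob(theta, b, a)
    return a * a * p * (1.0 - p)

def conditional_sem(theta, items):
    """Standard error of measurement at theta for a set of (b, a) items."""
    total = sum(item_information(theta, b, a) for b, a in items)
    return 1.0 / math.sqrt(total)

# Illustrative item set clustered around average difficulty.
items = [(-0.5, 1.0), (0.0, 1.0), (0.5, 1.0)]

# Precision is best near the item difficulties and degrades at the
# extremes, which is why IRT can reveal poor measurement-range coverage.
print(round(conditional_sem(0.0, items), 2))  # 1.18
print(round(conditional_sem(3.0, items), 2))  # 2.64
```

This scale-dependent standard error is what distinguishes IRT's notion of precision from CTT's single reliability coefficient for the whole test.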
The evaluation of the contribution of individual items facilitates the identification of the most relevant, precise, and efficient items for the assessment of the construct being measured by the instrument. This is very useful for the development of new instruments, but also for improving existing instruments and developing alternate or short form versions of existing instruments [16]. Additionally, IRT methods are particularly suitable for equating different instruments intended to measure the same construct [17] and for cross-cultural validation purposes [18]. Finally, IRT provides the basis for developing item banks and patient-tailored computerized adaptive tests (CATs) [9,19,20]. Although IRT appears to be increasingly used within health care research in general, a comprehensive overview of the frequency and characteristics of IRT analyses within the rheumatic field is lacking. The Outcome Measures in Rheumatology (OMERACT) network recently initiated a special interest group aimed at promoting the use of IRT methods in rheumatology [21]. An overview of the use and application of IRT in rheumatology to date may give insight into future research directions and highlight new possibilities for the improvement of outcome assessment in rheumatic conditions. Therefore, the aim of this study was to systematically review the application of IRT to clinical and patient-reported outcome measures within rheumatology.
[CONTENT] Clinical measures | Item response theory | Modern psychometrics | Patient-reported outcomes | Rheumatology [SUMMARY]
[CONTENT] Animals | Humans | Psychometrics | Rheumatic Diseases | Rheumatology [SUMMARY]
[CONTENT] rheumatology parameter | health outcome measures | measurements rheumatology results | evaluated clinical measures | assessment rheumatic conditions [SUMMARY]
[CONTENT] irt | model | studies | analyses | item | items | pro | cm | 15 | response [SUMMARY]
[CONTENT] measures | health | assessment | irt | use | outcome measures | methods | conditions | rheumatic conditions | item [SUMMARY]
[CONTENT] model | 15 | items | 22 | 23 | irt | response | 15 22 | construct | 11 [SUMMARY]
[CONTENT] studies | pro | cm | irt | analyses | model | evaluated | fit | applied | assumption [SUMMARY]
[CONTENT] measures | irt | consensus | outcome measures | useful | rheumatology | assumptions | applications | patient | irt applications [SUMMARY]
[CONTENT] irt | studies | model | analyses | pro | item | cm | items | study | 15 [SUMMARY]
[CONTENT] irt | studies | model | analyses | pro | item | cm | items | study | 15 [SUMMARY]
[CONTENT] IRT | IRT ||| IRT ||| IRT [SUMMARY]
[CONTENT] PubMed | 99 | English | IRT ||| IRT [SUMMARY]
[CONTENT] Rasch ||| ||| the last decade | IRT ||| IRT ||| IRT [SUMMARY]
[CONTENT] IRT | the past decades ||| IRT ||| IRT ||| ||| IRT ||| IRT [SUMMARY]
[CONTENT] IRT | IRT ||| IRT ||| IRT ||| PubMed | 99 | English | IRT ||| IRT ||| Rasch ||| ||| the last decade | IRT ||| IRT ||| IRT ||| IRT | the past decades ||| IRT ||| IRT ||| ||| IRT ||| IRT [SUMMARY]
[CONTENT] IRT | IRT ||| IRT ||| IRT ||| PubMed | 99 | English | IRT ||| IRT ||| Rasch ||| ||| the last decade | IRT ||| IRT ||| IRT ||| IRT | the past decades ||| IRT ||| IRT ||| ||| IRT ||| IRT [SUMMARY]
Risk of Single and Multiple Injuries Due to Static Balance and Movement Quality in Physically Active Women.
36231497
Static balance is a reliable indicator of the state of the musculoskeletal and nervous systems and a basis for the development of movement stabilization. Disorders in this area may increase injury risk (IR). This study investigated musculoskeletal injury risk in relation to static balance and movement quality, considering single and multiple injury occurrences in physically active women.
BACKGROUND
The study sample comprised 88 women aged 21.48 ± 1.56 years. Injury data were obtained with a questionnaire, and the Deep Squat (DS), In-Line Lunge (IL), and Hurdle Step (HS) tests were conducted. Static balance was assessed with a stabilometric platform measuring the center-of-gravity area circle (AC) and path length (PL) with open (OE) and closed eyes (CE) while a standing position was maintained for 30 s.
METHODS
The logistic regression models revealed that general injury occurrence was predicted by AC-CE (OR = 0.70; p = 0.03), IL (OR = 0.49; p = 0.03), and the two-factor model AC-CE*IL (OR = 1.40; p < 0.01). Single injury was predicted by the same factors: AC-CE (OR = 0.49; p < 0.01), IL (OR = 0.36; p = 0.01), and AC-CE*IL (OR = 1.58; p < 0.01).
RESULTS
Static balance and movement stability predict musculoskeletal injury risk both alone and in a combined model. Further prospective studies are needed to verify the predictive value of the indicated factors. Using both quantitative and qualitative tests could be helpful in IR prediction.
CONCLUSION
[ "Exercise Test", "Female", "Humans", "Movement", "Multiple Trauma", "Postural Balance", "Prospective Studies" ]
9564762
1. Introduction
Static balance is one of the most credible indicators of the state of the musculoskeletal and nervous systems, reflecting the ability to maintain an upright posture and to keep the line of gravity within the limits of the base of support [1,2]. It expresses the harmonious work of the nervous and musculoskeletal systems. Balance disorders are mainly observed in the elderly and are related to involution, but young people can also be affected [3]. Static balance disorders worsen stabilization during movement, which causes additional energy expenditure, compensation, and limited mobility; movement therefore becomes less effective [4,5,6]. However, this association is poorly explained [7]. High-quality movement patterns require a high level of mobility, stability, and motor control [4,5,6]. A relationship between functional movement and postural stabilization has been shown [8]. A correct movement pattern is performed with the lowest energy expenditure, ensuring the effectiveness and precision of motor activity while remaining safe for the tissues [9,10]. Despite generally following the same principles of performance, individual movement acts differ between humans. The movement pattern is understood here as a unique way of realizing a specific movement act [11]. Both balance and movement pattern quality are considered injury risk factors [12,13,14,15]. It has been shown that athletes with lower stability are more likely to be injured; the cruciate ligaments are especially at risk [16]. This could be more evident in women, who are more likely to display knee valgus and are therefore more prone to injury [17]. Moreover, movement pattern quality is a valuable factor in injury risk: Chorba et al. [18] indicated that female athletes with poor movement patterns are more likely to be injured. However, less is known about the usefulness of single Functional Movement Screen (FMS) module tests in injury risk prediction, especially when combining their results with other measurements. Hartigan et al. [19] investigated in-line lunge (IL) test scores in relation to balance ability and sprint and jump performance; however, data relating IL results to injury are lacking. Deep squat (DS) screening could be considered an injury risk predictor [20]. The hurdle step (HS) is regarded as associated with physical performance [21], but results provided by Shimoura et al. [22] indicated both DS and HS as injury risk factors. The above observations lead to questions about the possibility of using balance measurements and stability-based movement pattern tests in injury prediction. However, the accessible data mainly focus on one-factor analyses, omitting interactions with other attributes and thus failing to capture the whole picture of the human body's movement ability. There is a need for analyses that consider more factors and examine the interactions between them. Therefore, this study aimed to investigate injury risk due to static balance and movement quality, considering single and multiple injury occurrences in physically active women. Specifically, we wanted to answer the following: (1) Which parameters differ between injured and uninjured subjects? (2) Which measured static balance parameters and movement pattern scores predict injury risk? (3) Is it possible to create a two-factor injury risk prediction model based on static balance and movement pattern quality? Moreover, (4) we deepened the analysis of injury with regard to single and multiple incidents. The obtained results allow a deeper look at the structure of dependencies between human movement abilities, enabling effective injury prediction and the targeting of adequate actions for injury prevention in physically active women.
null
null
3. Results
Table 1 presents the t-test comparison of static balance test results between the no-injury and injury groups. The injured group had statistically significantly worse area circle closed eyes scores than the no-injury group, and the difference in path length closed eyes scores approached statistical significance. Table 2 shows the Mann–Whitney U test comparison of movement quality test results between the no-injury and injury groups. The injured group had statistically significantly worse in-line lunge scores than the no-injury group; no other tests differed statistically. The analysis of differences was then extended to the no-injury group, single injury (one), and multiple injuries (more than one), so the Kruskal–Wallis test with the post hoc Dunn test was conducted (Table 3). The area circle closed eyes (H = 12.97; p = 0.015) differed between the no-injury group and both injured groups. The path length with closed eyes (H = 6.48; p = 0.0391) varied only between no injury and multiple injuries (Table 3). Regarding movement pattern quality, only the in-line lunge differed (H = 10.08; p = 0.0065) between the no-injury group and both injured groups (Table 3). Table 4 presents the logistic regression models for injury risk with regard to injury occurrence (general, single, multiple). For general and single injury, both the AC-CE and IL factors predicted injury risk. The two-factor model showed that injury risk increases with decreased static balance and worsening movement quality. However, when multiple injury occurrence was predicted, no credible model was obtained due to the lack of statistical significance.
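The reported odds ratios can be read as logistic regression coefficients (beta = ln OR). The sketch below (Python, illustrative only) reconstructs predicted injury probabilities from the two-factor general-injury model; the intercept and the predictor values are hypothetical placeholders, since only the ORs are reported.

```python
import math

# Coefficients reconstructed from the reported odds ratios (general injury model):
# OR(AC-CE) = 0.70, OR(IL) = 0.49, OR(AC-CE*IL) = 1.40  ->  beta = ln(OR)
B_AC_CE = math.log(0.70)
B_IL = math.log(0.49)
B_INT = math.log(1.40)
B0 = 0.0  # hypothetical intercept -- not reported in the paper

def injury_probability(ac_ce: float, il: float) -> float:
    """Predicted injury probability from the two-factor logistic model."""
    logit = B0 + B_AC_CE * ac_ce + B_IL * il + B_INT * ac_ce * il
    return 1.0 / (1.0 + math.exp(-logit))

# Example: compare a perfect in-line lunge score (3) with a compensated one (2)
# at the same (arbitrary) area-circle value.
p_good = injury_probability(ac_ce=2.0, il=3)
p_poor = injury_probability(ac_ce=2.0, il=2)
```

With ORs below 1 for the main effects, each unit increase in a predictor lowers the predicted odds, while the interaction term (OR > 1) works in the opposite direction when both predictors rise together.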
5. Conclusions
Static balance ability and movement quality (alone and jointly) predict injury risk in physically active women. The most reliable parameters appear to be the center-of-pressure area circle with closed eyes and the in-line lunge test, which indicate injury risk both alone and in a combined model. However, these parameters did not predict multiple injury occurrence. Using quantitative and qualitative measurements together to screen injury risk could be helpful in injury risk detection. However, the factors mentioned above should be used with caution; they need to be verified in prospective studies.
[ "2. Materials and Methods", "2.1. Ethics", "2.3. Settings", "2.4. Measurements", "2.5. Statistics" ]
[ "2.1. Ethics The study followed the ethical principles of the Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018).\n2.2. Participants Participants were volunteers recruited from female students in the faculty of physical education and sport. The inclusion criteria were: (1) no experience in professional sport; (2) no injury in the 28 days before the measurement or any other medical contraindication to physical activity; (3) a high level of physical activity based on IPAQ principles [23]—more than 1500 MET gained from 3 days of vigorous effort, or more than 3000 MET gained from 7 days of vigorous and/or moderate effort. The study initially involved 105 participants, but 17 did not complete it due to the following: injury in the 28 days before the survey or another medical contraindication to physical activity (n = 3), not completing all measurements (n = 4), and withdrawal from participation without giving any reason (n = 7); therefore, data from 88 subjects were included in the final analysis. The examined women were 21.48 ± 1.56 years old, with body height 1.68 ± 0.06 m, body weight 60.5 ± 9.00 kg, and BMI 21.48 ± 2.52 kg/m2; their sports experience was between 4 and 12 years, and their weekly training volume was 3–12 h. The physical activity level based on IPAQ results was 4535.36 ± 2897.91 MET. According to the IPAQ criteria [23], all participants were physically active; 40.90% of the women had been injured, 13.63% once and 27.27% more than once.\nAll participants were volunteers and were asked to sign a written consent form before participating in this study. All subjects were informed in detail about the aim, methodology, and participation conditions. They could withdraw from the research at any moment without providing any reason.\n2.3. Settings The research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which holds a Quality Management System Certificate PN-EN ISO 9001: 2009 (Certificate Reg. No. PW-48606-10E).\n2.4. Measurements Body height and mass were measured with an anthropometer, SECA model 764 (Seca GmbH & Co. KG, Hamburg, Germany). The body mass index was calculated from the obtained values.\nData on musculoskeletal injuries that occurred during physical activity were gathered with the Injury History Questionnaire (IHQ). This tool was validated with a Cronbach's alpha coefficient of 0.836, indicating high reliability [15]. The IHQ includes questions on the number of injuries by body part (head, neck, torso, upper and lower limbs). This study analyzed the total number of injuries. The survey was conducted in a supervised manner; the researcher was available to the respondents during the survey.\nThe International Physical Activity Questionnaire (IPAQ) [23] described the physical activity level. This questionnaire measures and assesses self-reported information about the average time spent on physical activity (minutes per week) and is used among young and middle-aged adults and nonprofessional sportspeople. The obtained data allow calculation of the number of Metabolic Equivalents of Task (MET). With this questionnaire, we also asked about the sports trained, the average number of training sessions, and the duration of a single session.\nStatic balance was measured with the ACCU SWAY stabilometric platform with Balance Cline software (Advanced Mechanical Technology, Inc. [AMTI], Newton, MA, USA). The participants stood on the platform without shoes, with the upper limbs lowered along the torso. They were asked not to move and to maintain a standing position for 30 s, first with open eyes and then, under the same conditions, with closed eyes. The parameters included in the analysis were the path length traveled by the center of gravity and the perimeter of the field determined by the center-of-gravity path during the measurement.\nThe Functional Movement Screen (FMS) is used in movement pattern screening. In this study, three movement tests requiring a leg stance, and thus proper stability and balance, were chosen: deep squat (DS), hurdle step (HS), and in-line lunge (IL). All movements were assessed on a scale of 0–3, where 0 means pain during the movement, 1 an inability to perform the movement, 2 movement with compensation, and 3 movement without compensation. HS and IL are unilateral tests; therefore, they are performed for both body sides, and the worse score is considered [6].\n2.5. Statistics Means, standard deviations, and confidence intervals were calculated for normally distributed data, and medians with standard errors for data lacking a normal distribution. Student's t-test was used to compare static balance results and the Mann–Whitney U test for movement quality tests. When injury occurrence was considered (no-injury, single-injury, and multiple-injury groups), the Kruskal–Wallis test with the post hoc Dunn test was conducted to determine differences in static balance and movement quality between those groups. Logistic regression models were built to assess injury risk due to general, single, and multiple injury occurrences based on the factors that differentiated the groups. In the second step of the analysis, two-factor logistic regression was conducted. Wald's statistics were provided for both. The reference group comprised subjects with no injury. The level of significance was set at p < 0.05. Statistica 13.0 (Statsoft Poland, Cracow, Poland) was used for the analysis.", "The study followed the ethical principles of the Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018).", "The research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which holds a Quality Management System Certificate PN-EN ISO 9001: 2009 (Certificate Reg. No. PW-48606-10E).", "Body height and mass were measured with an anthropometer, SECA model 764 (Seca GmbH & Co. KG, Hamburg, Germany). The body mass index was calculated from the obtained values.\nData on musculoskeletal injuries that occurred during physical activity were gathered with the Injury History Questionnaire (IHQ). This tool was validated with a Cronbach's alpha coefficient of 0.836, indicating high reliability [15]. The IHQ includes questions on the number of injuries by body part (head, neck, torso, upper and lower limbs). This study analyzed the total number of injuries. The survey was conducted in a supervised manner; the researcher was available to the respondents during the survey.\nThe International Physical Activity Questionnaire (IPAQ) [23] described the physical activity level. This questionnaire measures and assesses self-reported information about the average time spent on physical activity (minutes per week) and is used among young and middle-aged adults and nonprofessional sportspeople. The obtained data allow calculation of the number of Metabolic Equivalents of Task (MET). With this questionnaire, we also asked about the sports trained, the average number of training sessions, and the duration of a single session.\nStatic balance was measured with the ACCU SWAY stabilometric platform with Balance Cline software (Advanced Mechanical Technology, Inc. [AMTI], Newton, MA, USA). The participants stood on the platform without shoes, with the upper limbs lowered along the torso. They were asked not to move and to maintain a standing position for 30 s, first with open eyes and then, under the same conditions, with closed eyes. The parameters included in the analysis were the path length traveled by the center of gravity and the perimeter of the field determined by the center-of-gravity path during the measurement.\nThe Functional Movement Screen (FMS) is used in movement pattern screening. In this study, three movement tests requiring a leg stance, and thus proper stability and balance, were chosen: deep squat (DS), hurdle step (HS), and in-line lunge (IL). All movements were assessed on a scale of 0–3, where 0 means pain during the movement, 1 an inability to perform the movement, 2 movement with compensation, and 3 movement without compensation. HS and IL are unilateral tests; therefore, they are performed for both body sides, and the worse score is considered [6].", "Means, standard deviations, and confidence intervals were calculated for normally distributed data, and medians with standard errors for data lacking a normal distribution. Student's t-test was used to compare static balance results and the Mann–Whitney U test for movement quality tests. When injury occurrence was considered (no-injury, single-injury, and multiple-injury groups), the Kruskal–Wallis test with the post hoc Dunn test was conducted to determine differences in static balance and movement quality between those groups. Logistic regression models were built to assess injury risk due to general, single, and multiple injury occurrences based on the factors that differentiated the groups. In the second step of the analysis, two-factor logistic regression was conducted. Wald's statistics were provided for both. The reference group comprised subjects with no injury. The level of significance was set at p < 0.05. Statistica 13.0 (Statsoft Poland, Cracow, Poland) was used for the analysis." ]
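The Mann–Whitney U comparison applied to the ordinal 0–3 FMS scores can be sketched in pure Python. This is a minimal illustration with midrank handling for ties; the score lists are hypothetical, not the study's data.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two samples, using midranks for ties."""
    n_a, n_b = len(a), len(b)
    # Sort the pooled values, remembering which sample each came from.
    combined = sorted((v, idx) for idx, v in enumerate(a + b))
    ranks = [0.0] * (n_a + n_b)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over the block of tied values starting at i.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        midrank = (i + j) / 2 + 1.0  # average 1-based rank of the tied block
        for k in range(i, j + 1):
            ranks[combined[k][1]] = midrank
        i = j + 1
    r_a = sum(ranks[:n_a])           # rank sum of sample a
    u_a = r_a - n_a * (n_a + 1) / 2  # U statistic for sample a
    u_b = n_a * n_b - u_a
    return min(u_a, u_b)

# Hypothetical in-line lunge scores (0-3 FMS scale), not the study's data:
uninjured = [3, 3, 2, 3, 2, 3]
injured = [2, 2, 1, 2, 3, 1]
u = mann_whitney_u(uninjured, injured)  # -> 7.0 for these lists
```

In practice a statistics package (e.g. `scipy.stats.mannwhitneyu`) also supplies the p-value; the sketch only shows where the U statistic comes from.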
[ null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Ethics", "2.2. Participants", "2.3. Settings", "2.4. Measurements", "2.5. Statistics", "3. Results", "4. Discussion", "5. Conclusions" ]
[ "Static balance is one of the most credible indicators of the state of the musculoskeletal and nervous systems, reflecting the ability to maintain an upright posture and to keep the line of gravity within the limits of the base of support [1,2]. It expresses the harmonious work of the nervous and musculoskeletal systems. Balance disorders are mainly observed in the elderly and are related to involution, but young people can also be affected [3]. Static balance disorders worsen stabilization during movement, which causes additional energy expenditure, compensation, and limited mobility; movement therefore becomes less effective [4,5,6]. However, this association is poorly explained [7].\nHigh-quality movement patterns require a high level of mobility, stability, and motor control [4,5,6]. A relationship between functional movement and postural stabilization has been shown [8]. A correct movement pattern is performed with the lowest energy expenditure, ensuring the effectiveness and precision of motor activity while remaining safe for the tissues [9,10]. Despite generally following the same principles of performance, individual movement acts differ between humans. The movement pattern is understood here as a unique way of realizing a specific movement act [11].\nBoth balance and movement pattern quality are considered injury risk factors [12,13,14,15]. It has been shown that athletes with lower stability are more likely to be injured; the cruciate ligaments are especially at risk [16]. This could be more evident in women, who are more likely to display knee valgus and are therefore more prone to injury [17]. Moreover, movement pattern quality is a valuable factor in injury risk: Chorba et al. [18] indicated that female athletes with poor movement patterns are more likely to be injured.\nHowever, less is known about the usefulness of single Functional Movement Screen (FMS) module tests in injury risk prediction, especially when combining their results with other measurements. Hartigan et al. [19] investigated in-line lunge (IL) test scores in relation to balance ability and sprint and jump performance; however, data relating IL results to injury are lacking. Deep squat (DS) screening could be considered an injury risk predictor [20]. The hurdle step (HS) is regarded as associated with physical performance [21], but results provided by Shimoura et al. [22] indicated both DS and HS as injury risk factors.\nThe above observations lead to questions about the possibility of using balance measurements and stability-based movement pattern tests in injury prediction. However, the accessible data mainly focus on one-factor analyses, omitting interactions with other attributes and thus failing to capture the whole picture of the human body's movement ability. There is a need for analyses that consider more factors and examine the interactions between them. Therefore, this study aimed to investigate injury risk due to static balance and movement quality, considering single and multiple injury occurrences in physically active women. Specifically, we wanted to answer the following: (1) Which parameters differ between injured and uninjured subjects? (2) Which measured static balance parameters and movement pattern scores predict injury risk? (3) Is it possible to create a two-factor injury risk prediction model based on static balance and movement pattern quality? Moreover, (4) we deepened the analysis of injury with regard to single and multiple incidents. The obtained results allow a deeper look at the structure of dependencies between human movement abilities, enabling effective injury prediction and the targeting of adequate actions for injury prevention in physically active women.", "2.1. 
Ethics The relevant study followed the ethical Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018).\nThe relevant study followed the ethical Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018).\n2.2. Participants Participants were volunteers recruited from female students in the faculty of physical education and sport. The inclusion criteria were: (1) no experience in professional sport; (2) no injury 28 days before the measurement or any other medical contradiction for physical activity; (3) high level of physical activity based on IPAQ principles [23]—more than 1500 MET gained from 3 days of vigorous effort; or more than 3000 MET gained from 7 days of vigorous and/or moderate effort. Initially study involved 105 participants, but 17 did not complete the study due to the following: injury 28 days before the survey or other medical contradiction for physical activity (n = 3), not conducting all measurements (n = 4), and rejection from participation without no giving any reasons (n = 7); therefore, data of 88 subjects were included in the final analysis. The examined women were 21.48 ± 1.56 years, body height 1.68 ± 0.06 cm; body weight 60.5 ± 9.00 kg; BMI 21.48 ± 2.52 kg/m2; their sports experience was between 4 and 12 years, and weekly training volume was 3–12 h per week. The physical activity level based on IPAQ results was 4535.36 ± 2897.91 MET. According to the IPAQ criteria [23], all participants were physically active; 40.90% of women were injured, 13.63% once, and 27.27% had more than one injury.\nAll participants were volunteers and were asked to sign a written consent before participating in this study. All subjects were informed in detail about the aim, methodology, and participation conditions. 
They could withdraw from the research at any moment without providing any reason.\nParticipants were volunteers recruited from female students in the faculty of physical education and sport. The inclusion criteria were: (1) no experience in professional sport; (2) no injury 28 days before the measurement or any other medical contradiction for physical activity; (3) high level of physical activity based on IPAQ principles [23]—more than 1500 MET gained from 3 days of vigorous effort; or more than 3000 MET gained from 7 days of vigorous and/or moderate effort. Initially study involved 105 participants, but 17 did not complete the study due to the following: injury 28 days before the survey or other medical contradiction for physical activity (n = 3), not conducting all measurements (n = 4), and rejection from participation without no giving any reasons (n = 7); therefore, data of 88 subjects were included in the final analysis. The examined women were 21.48 ± 1.56 years, body height 1.68 ± 0.06 cm; body weight 60.5 ± 9.00 kg; BMI 21.48 ± 2.52 kg/m2; their sports experience was between 4 and 12 years, and weekly training volume was 3–12 h per week. The physical activity level based on IPAQ results was 4535.36 ± 2897.91 MET. According to the IPAQ criteria [23], all participants were physically active; 40.90% of women were injured, 13.63% once, and 27.27% had more than one injury.\nAll participants were volunteers and were asked to sign a written consent before participating in this study. All subjects were informed in detail about the aim, methodology, and participation conditions. They could withdraw from the research at any moment without providing any reason.\n2.3. Settings The research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which has a Quality Management System Certificate PN-EN ISO 9001: 2009 (Certificate Reg. No. 
PW-48606-10E).\nThe research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which has a Quality Management System Certificate PN-EN ISO 9001: 2009 (Certificate Reg. No. PW-48606-10E).\n2.4. Measurements The body height and mass were measured with anthropometer SECA model 764 (Seca GmbH & Co. KG, Hamburg, Germany). Based on obtained values, the body mass index was calculated.\nInjury data of musculoskeletal system injuries that occurred during physical activity were gained with the Injury History questionnaire (IHQ). This tool was validated with an alpha-Cronbach coefficient at level 0.836, indicating high reliability [15]. The IHQ includes questions concerning the number of injuries concerning the body part (head, neck, torso, upper and lower limbs). This study analyzed the total amount of injuries. The survey was conducted in a supervised manner. The researcher was available to the respondents during the survey.\nThe International Physical Activity Questionnaire (IPAQ) described the physical activity level. This questionnaire provides [23] measures and assesses the self-reported information about the average time spent on physical activity (minutes per week) and is used among young, middle-aged adults, and nonprofessional sports people. The obtained data allow calculating the number of Metabolic Equivalents of Task—MET. With this questionnaire, we also asked about trained sports, the average number of training, and the duration of a single session.\nStatic balance was measured with ACCU SWAY stabilometric platform—with Balance Cline software (Advanced Mechanical Technology, Inc. [AMTI], Newton, MA, USA). The participants stood on the platform without shoes, with the upper limbs lowered along the torso. Firstly, they were asked not to move and maintain a standing position for the 30 s with open eyes, and next, under the same condition but with closed eyes. 
The parameters included in the analysis were the path length traveled by the center of gravity and the perimeter of the field delimited by that path during the measurement.\nThe Functional Movement Screen (FMS) is used in movement pattern screening. In this study, three movement tests performed in leg stance and requiring proper stability and balance were chosen: deep squat (DS), hurdle step (HS), and in-line lunge (IL). All movements were assessed on a scale of 0–3, where 0 means pain during the movement, 1 indicates inability to perform the movement, 2 is movement with compensation, and 3 is movement without compensation. HS and IL are unilateral tests; therefore, they are performed on both body sides, and the worse score is considered [6].\n2.5. Statistics Means, standard deviations, and confidence intervals were calculated for normally distributed data, and medians with standard errors for data lacking normal distribution. Student's t-test was used to compare static balance results and the Mann–Whitney U test for movement quality tests. When injury occurrence was considered (no injury; single injury; multiple injury groups), the Kruskal–Wallis test with the Dunn post hoc test was conducted to determine static balance and movement quality differences among those groups. 
Logistic regression models were built to assess injury risk due to general, single, and multiple injury occurrences based on the factors differentiating the groups. Then, in the second step of the analysis, two-factor logistic regression was conducted. In both, Wald's statistics were provided. The reference group comprised subjects with no injury. The level of significance was set at p < 0.05. Statistica 13.0 (Statsoft Poland, Cracow, Poland) was used for analysis.", "The relevant study followed the ethical Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018).", "Participants were volunteers recruited from female students in the faculty of physical education and sport. 
The inclusion criteria were: (1) no experience in professional sport; (2) no injury in the 28 days before the measurement or any other medical contraindication for physical activity; (3) a high level of physical activity based on IPAQ principles [23]—more than 1500 MET gained from 3 days of vigorous effort, or more than 3000 MET gained from 7 days of vigorous and/or moderate effort. Initially, the study involved 105 participants, but 17 did not complete it for the following reasons: injury in the 28 days before the survey or another medical contraindication for physical activity (n = 3), not completing all measurements (n = 4), and withdrawal from participation without giving any reason (n = 7); therefore, data from 88 subjects were included in the final analysis. The examined women were aged 21.48 ± 1.56 years, with body height 1.68 ± 0.06 m, body weight 60.5 ± 9.00 kg, and BMI 21.48 ± 2.52 kg/m2; their sports experience was between 4 and 12 years, and their weekly training volume was 3–12 h per week. The physical activity level based on IPAQ results was 4535.36 ± 2897.91 MET. According to the IPAQ criteria [23], all participants were physically active; 40.90% of the women had been injured: 13.63% once and 27.27% more than once.\nAll participants were volunteers and were asked to sign a written consent form before participating in this study. All subjects were informed in detail about the aim, methodology, and participation conditions. They could withdraw from the research at any moment without providing any reason.", "The research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which has a Quality Management System Certificate PN-EN ISO 9001: 2009 (Certificate Reg. No. PW-48606-10E).", "Body height and mass were measured with a SECA model 764 anthropometer (Seca GmbH & Co. KG, Hamburg, Germany). 
The body mass index was calculated from the obtained values.\nData on musculoskeletal injuries sustained during physical activity were collected with the Injury History Questionnaire (IHQ). This tool was validated with a Cronbach's alpha coefficient of 0.836, indicating high reliability [15]. The IHQ includes questions on the number of injuries per body part (head, neck, torso, upper and lower limbs); this study analyzed the total number of injuries. The survey was conducted in a supervised manner, with the researcher available to the respondents throughout.\nThe International Physical Activity Questionnaire (IPAQ) [23] described the physical activity level. It assesses self-reported average time spent on physical activity (minutes per week) and is used among young and middle-aged adults and nonprofessional sportspeople. The obtained data allow calculating the number of Metabolic Equivalents of Task (MET). With this questionnaire, we also asked about the sports practiced, the average number of training sessions, and the duration of a single session.\nStatic balance was measured with the ACCU SWAY stabilometric platform and Balance Cline software (Advanced Mechanical Technology, Inc. [AMTI], Newton, MA, USA). The participants stood on the platform without shoes, with the upper limbs lowered along the torso. They were asked not to move and to maintain a standing position for 30 s, first with open eyes and then under the same conditions with closed eyes. The parameters included in the analysis were the path length traveled by the center of gravity and the perimeter of the field delimited by that path during the measurement.\nThe Functional Movement Screen (FMS) is used in movement pattern screening. 
In this study, three movement tests performed in leg stance and requiring proper stability and balance were chosen: deep squat (DS), hurdle step (HS), and in-line lunge (IL). All movements were assessed on a scale of 0–3, where 0 means pain during the movement, 1 indicates inability to perform the movement, 2 is movement with compensation, and 3 is movement without compensation. HS and IL are unilateral tests; therefore, they are performed on both body sides, and the worse score is considered [6].", "Means, standard deviations, and confidence intervals were calculated for normally distributed data, and medians with standard errors for data lacking normal distribution. Student's t-test was used to compare static balance results and the Mann–Whitney U test for movement quality tests. When injury occurrence was considered (no injury; single injury; multiple injury groups), the Kruskal–Wallis test with the Dunn post hoc test was conducted to determine static balance and movement quality differences among those groups. Logistic regression models were built to assess injury risk due to general, single, and multiple injury occurrences based on the factors differentiating the groups. Then, in the second step of the analysis, two-factor logistic regression was conducted. In both, Wald's statistics were provided. The reference group comprised subjects with no injury. The level of significance was set at p < 0.05. Statistica 13.0 (Statsoft Poland, Cracow, Poland) was used for analysis.", "Table 1 presents the t-test comparison of static balance test results between the no-injury and injury groups. The injured group had statistically significantly worse area circle closed eyes scores than the no-injury group. Moreover, the difference in path length closed eyes scores was close to statistical significance.\nTable 2 shows the Mann–Whitney U test comparison of movement quality test results between the no-injury and injury groups. 
The injured group had statistically significantly worse in-line lunge scores than the no-injury group. No other tests differed statistically.\nFurther, the differences were analyzed with respect to the no-injury, single-injury (one), and multiple-injury (more than one) groups, so the Kruskal–Wallis test with the post hoc Dunn test was conducted (Table 3). The area circle closed eyes (H = 12.97; p = 0.015) differed between the no-injury group and both injured groups. The path length closed eyes (H = 6.48; p = 0.0391) differed only between the no-injury and multiple-injury groups (Table 3). Regarding movement pattern quality, only the in-line lunge differed (H = 10.08; p = 0.0065) between the no-injury group and both injured groups (Table 3).\nTable 4 presents the logistic regression models for injury risk regarding injury occurrence (general, single, multiple). For general and single injury, both factors, AC-CE and IL, predicted injury risk. The two-factor model showed that injury risk increases with decreased static balance and worsening movement quality. However, when multiple injury occurrence was predicted, no model was credible due to the lack of statistical significance.", "The results indicate the possibility of predicting injury risk based on static balance and movement quality in physically active women. In these terms, measurement of the center of pressure area circle closed eyes (AC-CE) and in-line lunge (IL) screening seem helpful in injury risk detection. They also indicate the possibility of deepening the analysis of Functional Movement Screen results beyond the overall score to single module tests. Moreover, using both tests together seems more effective in injury risk prediction, although multiple incidents were not predicted reliably.\nStatic balance is considered an indicator of the proper functioning of the musculoskeletal and nervous systems [2]. 
It is also considered an injury risk factor due to the lower balance ability observed in injured subjects [24,25]. Oshima et al. [14], in a prospective study, indicated static balance as a significant factor in anterior cruciate ligament injury in collegiate athletes, showing that postural control function is one injury risk factor. A subsequent study also confirmed these results [16]. Moreover, Hrysomalis [12] showed that injury might occur due to poor balance, suggesting that correcting deficits in this ability is effective injury prevention. Dunsky et al. [1] also indicated that improving static and dynamic balance is an effective injury prevention method.\nThe above observations suggest that other factors could also play a role in injury mechanisms. One of them is movement quality screening [13,15]. The most reliable tool is the FMS, which is helpful in injury risk prediction in physically active groups [26]. Injury risk assessment is mainly based on the overall score, but some results also suggest the usefulness of single module tests [20,22]. In our study, the IL test predicted injury in the general approach and for single injury. However, Shimoura et al. [22], in a study on male basketball players, indicated the deep squat (DS) and hurdle step (HS) tests as injury predictors. The difference from our observations may be due to sex and sports group. Moreover, Bunn et al. [20] also pointed out that the deep squat as a single screening test could be helpful in injury prediction. Deep squats, hurdle steps, and in-line lunges are global movement patterns based on motor control, mobility, and stability [6]. Therefore, using them together with a static balance examination may provide deeper insight into balance disorders and the associated injury risk.\nOur results showed that two-factor models based on COP AC-CE and IL predict injury risk in the general approach and for single occurrence. 
Some results suggested that balance ability is strongly associated with the IL score, which may explain why both factors predict injury risk [27]. A study by de la Motte et al. [28] suggested that using movement pattern screening together with balance examination predicts injury more reliably than either alone. However, the results published by Lisman et al. [29] cannot be omitted, as they were contrary and indicate the need for further investigation. The different methodology must be emphasized, though: we used a static balance platform, whereas the results mentioned above were based on the Y-balance test, which measures stability from another perspective. Studies examining injury risk factors make a vital contribution to sports science, worth developing and exploring; research considering more than one factor is especially valuable [30].\nWe are aware of this study's limitations. Balance is a complex ability associated with the vestibular, visual, and proprioceptive systems, and evaluating their efficiency would provide more comprehensive and reliable results. Body posture (e.g., foot arch, pelvis position, spine shape) was not examined, although it might provide related data. Moreover, static balance underrepresents the postural control demands of daily life, so these results should be interpreted as only one part of balance ability. We investigated only women in a narrow age range, indicating a strong need for further studies examining similar associations in men and other age groups. The group was not homogeneous with respect to sport discipline. Furthermore, we did not verify the provided results in prospective terms. Therefore, there is also a strong need for such studies to describe the efficiency of the indicated parameters as injury risk predictors.", "The static balance ability and movement quality (alone and jointly) predict injury risk in physically active women. 
The most reliable parameters seem to be the center of pressure area circle closed eyes and the in-line lunge test, which indicate injury both alone and in one model. However, those parameters did not predict multiple injury occurrences. Using quantitative and qualitative measurements together to screen injury risk could be helpful in injury risk detection. However, the factors mentioned above should be used with caution; there is a need to verify them in prospective terms." ]
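The two-factor model described in the statistics section can be sketched as a logistic regression on a closed-eyes sway parameter (AC-CE) and the IL score. Below is a minimal stdlib sketch on synthetic, illustrative data; the variable names, value ranges, and fitted coefficients are assumptions for the example, not the study's data or results (the study used Statistica, not this code).

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=1000):
    """Two-factor logistic regression (intercept + AC-CE + IL) fitted by
    plain stochastic gradient ascent on the log-likelihood."""
    w = [0.0, 0.0, 0.0]  # intercept, AC-CE coefficient, IL coefficient
    for _ in range(epochs):
        for (x1, x2), y in zip(xs, ys):
            z = w[0] + w[1] * x1 + w[2] * x2
            p = 1.0 / (1.0 + math.exp(-z))
            err = y - p
            w[0] += lr * err
            w[1] += lr * err * x1
            w[2] += lr * err * x2
    return w

def predict(w, ac_ce, il):
    """Predicted injury probability for a given sway area (AC-CE) and IL score."""
    z = w[0] + w[1] * ac_ce + w[2] * il
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data only: larger closed-eyes sway area and a lower in-line
# lunge score mark the 'injured' class (1). These numbers are invented
# for the sketch.
random.seed(1)
data = [((random.uniform(2.0, 6.0), 3), 0) for _ in range(60)]   # uninjured
data += [((random.uniform(6.0, 12.0), 2), 1) for _ in range(60)]  # injured
xs, ys = zip(*data)
w = fit_logistic(xs, ys)
```

After fitting, the predicted risk rises with larger sway and lower IL score, mirroring the direction of the association reported in the article.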
[ "intro", null, null, "subjects", null, null, null, "results", "discussion", "conclusions" ]
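The stabilometric parameters named in the methods (the path traveled by the center of pressure and the area delimited by that path) can be computed from sampled platform coordinates. A stdlib sketch follows; the enclosing-circle definition of the sway area is one common convention assumed here, since the exact formula used by the AMTI software is not given in the text.

```python
import math

# Toy trace: a 1 m square, used as the example below (invented data).
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]

def cop_path_length(samples):
    """Total distance traveled by the center-of-pressure trace:
    the sum of Euclidean distances between consecutive (x, y) samples."""
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

def cop_circle_area(samples):
    """Area of the circle centered on the mean COP position that encloses
    the whole trace -- one common 'sway area' convention, assumed here."""
    mx = sum(x for x, _ in samples) / len(samples)
    my = sum(y for _, y in samples) / len(samples)
    radius = max(math.dist((mx, my), p) for p in samples)
    return math.pi * radius ** 2
```

For the toy square trace, the path length is simply the perimeter of the square, and the circle area scales with the spread of the trace around its mean position.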
[ "static balance", "stability", "injury risk", "movement", "women" ]
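The IPAQ inclusion criterion quoted in the methods (more than 1500 MET from 3 days of vigorous effort, or more than 3000 MET from 7 days of vigorous and/or moderate effort) can be expressed as a simple predicate. This is a sketch of the rule as stated in the article, not the full IPAQ scoring protocol.

```python
def meets_high_activity(vigorous_days, vigorous_met, total_met):
    """Check the 'high physical activity' inclusion criterion as
    paraphrased in the methods: >1500 MET gained from at least 3 days of
    vigorous effort, OR >3000 MET gained over 7 days of vigorous and/or
    moderate effort. Argument names are assumptions for this sketch."""
    return (vigorous_days >= 3 and vigorous_met > 1500) or total_met > 3000
```

Either branch alone qualifies a participant, so someone with no vigorous days but a large combined MET total still meets the criterion.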
1. Introduction: Static balance is one of the most credible indicators of the state of the musculoskeletal and nervous systems; it reflects the ability to maintain an upright posture and to keep the line of gravity within the limits of the base of support [1,2]. It expresses the harmonious work of the nervous and musculoskeletal systems. Balance disorders are mainly observed in the elderly and are related to involution, but young people can also be affected [3]. Static balance disorders worsen stabilization during movement, which causes additional energy expenditure, compensation, and limited mobility; therefore, movement becomes less effective [4,5,6]. However, this association is poorly explained [7]. High-quality movement patterns require a high level of mobility, stability, and motor control [4,5,6]. A relationship between functional movement and postural stabilization has been shown [8]. A correct movement pattern is performed with the lowest energy expenditure, ensuring the effectiveness and precision of motor activity with safety for tissues [9,10]. Despite generally the same principles of performance, individual movement acts differ among humans. The movement pattern is understood here as a unique way of realizing a specific movement act [11]. Both balance and movement pattern quality are considered injury risk factors [12,13,14,15]. It was shown that athletes with lower stability are more likely to be injured; the cruciate ligaments are especially at risk [16]. This could be more evident in women, who are more likely to have knee valgus, making injury occurrence more probable [17]. Moreover, movement pattern quality constitutes a valuable injury risk factor. Chorba et al. [18] indicated that female athletes with poor movement patterns are more likely to be injured. However, less is known about the usefulness of single Functional Movement Screen (FMS) module tests in injury risk prediction. 
This is especially true for combining its results with other measurements. Hartigan et al. [19] investigated in-line lunge (IL) test scores alongside balance ability and sprint and jump performance. However, there is a lack of data relating IL results to injury. Deep squat (DS) screening could be considered an injury risk predictor [20]. The hurdle step (HS) is regarded as associated with physical performance [21], but results provided by Shimoura et al. [22] indicated DS and HS as injury risk factors. The above observations lead to questions about the possibility of using balance measurements and stability-based movement pattern tests in injury prediction. However, the accessible data mainly focus on one-factor analyses, omitting interactions with other attributes and thus not presenting the whole picture of the human body's movement ability. There is a need for analyses that consider more factors and examine the interactions between them. Therefore, this study aimed to investigate injury risk due to static balance and movement quality, and single and multiple injury occurrence, in physically active women. Specifically, we wanted to answer the following: (1) Which parameters differ between injured and uninjured subjects? (2) Which measured static balance parameters and movement pattern scores predict injury risk? (3) Is it possible to create a two-factor injury risk prediction model based on static balance and movement pattern quality? Moreover, (4) we deepened the analysis for injury regarding single and multiple incidents. The obtained results allow a deeper look at the structure of dependencies between human movement abilities, enabling effective injury prediction and targeting of adequate injury prevention actions in physically active women. 2. Materials and Methods: 2.1. 
Ethics The relevant study followed the ethical Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018). 2.2. Participants Participants were volunteers recruited from female students in the faculty of physical education and sport. The inclusion criteria were: (1) no experience in professional sport; (2) no injury in the 28 days before the measurement or any other medical contraindication for physical activity; (3) a high level of physical activity based on IPAQ principles [23]—more than 1500 MET gained from 3 days of vigorous effort, or more than 3000 MET gained from 7 days of vigorous and/or moderate effort. Initially, the study involved 105 participants, but 17 did not complete it for the following reasons: injury in the 28 days before the survey or another medical contraindication for physical activity (n = 3), not completing all measurements (n = 4), and withdrawal from participation without giving any reason (n = 7); therefore, data from 88 subjects were included in the final analysis. The examined women were aged 21.48 ± 1.56 years, with body height 1.68 ± 0.06 m, body weight 60.5 ± 9.00 kg, and BMI 21.48 ± 2.52 kg/m2; their sports experience was between 4 and 12 years, and their weekly training volume was 3–12 h per week. The physical activity level based on IPAQ results was 4535.36 ± 2897.91 MET. According to the IPAQ criteria [23], all participants were physically active; 40.90% of the women had been injured: 13.63% once and 27.27% more than once. All participants were volunteers and were asked to sign a written consent form before participating in this study. All subjects were informed in detail about the aim, methodology, and participation conditions. 
They could withdraw from the research at any moment without providing any reason. 2.3. Settings The research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which has a Quality Management System Certificate PN-EN ISO 9001: 2009 (Certificate Reg. No. PW-48606-10E). 
2.4. Measurements Body height and mass were measured with a SECA model 764 anthropometer (Seca GmbH & Co. KG, Hamburg, Germany), and the body mass index was calculated from the obtained values. Data on musculoskeletal injuries sustained during physical activity were collected with the Injury History Questionnaire (IHQ). This tool was validated with a Cronbach's alpha coefficient of 0.836, indicating high reliability [15]. The IHQ includes questions on the number of injuries per body part (head, neck, torso, upper and lower limbs); this study analyzed the total number of injuries. The survey was conducted in a supervised manner, with the researcher available to the respondents throughout. The International Physical Activity Questionnaire (IPAQ) [23] described the physical activity level. It assesses self-reported average time spent on physical activity (minutes per week) and is used among young and middle-aged adults and nonprofessional sportspeople. The obtained data allow calculating the number of Metabolic Equivalents of Task (MET). With this questionnaire, we also asked about the sports practiced, the average number of training sessions, and the duration of a single session. Static balance was measured with the ACCU SWAY stabilometric platform and Balance Cline software (Advanced Mechanical Technology, Inc. [AMTI], Newton, MA, USA). The participants stood on the platform without shoes, with the upper limbs lowered along the torso. They were asked not to move and to maintain a standing position for 30 s, first with open eyes and then under the same conditions with closed eyes. 
The parameters included in the analysis were the path length traveled by the center of gravity and the perimeter of the field delimited by that path during the measurement. The Functional Movement Screen (FMS) is used in movement pattern screening. In this study, three movement tests performed in leg stance and requiring proper stability and balance were chosen: deep squat (DS), hurdle step (HS), and in-line lunge (IL). All movements were assessed on a scale of 0–3, where 0 means pain during the movement, 1 indicates inability to perform the movement, 2 is movement with compensation, and 3 is movement without compensation. HS and IL are unilateral tests; therefore, they are performed on both body sides, and the worse score is considered [6]. 2.5. Statistics Means, standard deviations, and confidence intervals were calculated for normally distributed data, and medians with standard errors for data lacking normal distribution. Student's t-test was used to compare static balance results and the Mann–Whitney U test for movement quality tests. When injury occurrence was considered (no injury; single injury; multiple injury groups), the Kruskal–Wallis test with the Dunn post hoc test was conducted to determine static balance and movement quality differences among those groups. 
Logistic regression models were built to assess injury risk due to general, single, and multiple injury occurrences based on the factors differentiating the groups. Then, in the second step of the analysis, two-factor logistic regression was conducted. In both, Wald's statistics were provided. The reference group comprised subjects with no injury. The level of significance was set at p < 0.05. Statistica 13.0 (Statsoft Poland, Cracow, Poland) was used for analysis. 2.1. Ethics: The relevant study followed the ethical Helsinki Declaration and was approved by The Senate Research Ethics Committee at the Wroclaw University of Health and Sport Sciences (ECUPE 16/2018). 2.2. Participants: Participants were volunteers recruited from female students in the faculty of physical education and sport. 
The inclusion criteria were: (1) no experience in professional sport; (2) no injury in the 28 days before the measurement or any other medical contraindication to physical activity; (3) a high level of physical activity based on IPAQ principles [23]: more than 1500 MET gained from 3 days of vigorous effort, or more than 3000 MET gained from 7 days of vigorous and/or moderate effort. Initially, the study involved 105 participants, but 17 did not complete it for the following reasons: injury in the 28 days before the survey or another medical contraindication to physical activity (n = 3), not completing all measurements (n = 4), and withdrawal from participation without giving a reason (n = 7); therefore, data from 88 subjects were included in the final analysis. The examined women were aged 21.48 ± 1.56 years, with body height 1.68 ± 0.06 m, body weight 60.5 ± 9.00 kg, and BMI 21.48 ± 2.52 kg/m2; their sports experience was between 4 and 12 years, and their weekly training volume was 3–12 h. Their physical activity level based on IPAQ results was 4535.36 ± 2897.91 MET. According to the IPAQ criteria [23], all participants were physically active; 40.90% of the women had been injured: 13.63% once and 27.27% more than once. All participants were volunteers and were asked to sign written consent before participating in this study. All subjects were informed in detail about the aim, methodology, and participation conditions. They could withdraw from the research at any moment without providing any reason. 2.3. Settings: The research was carried out in the Biokinetics Research Laboratory of the Academy of Physical Education, which has a Quality Management System Certificate PN-EN ISO 9001:2009 (Certificate Reg. No. PW-48606-10E). 2.4. Measurements: Body height and mass were measured with a SECA model 764 anthropometer (Seca GmbH & Co. KG, Hamburg, Germany). Based on the obtained values, the body mass index was calculated.
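The two derived quantities used in screening the participants, BMI and the IPAQ MET thresholds, can be sketched as below. This is a minimal illustration; the function names are ours, and the inputs are the mean values reported above (note that the study's reported mean BMI, 21.48, is the mean of individual BMIs, which differs slightly from the BMI of the mean height and weight).

```python
# Illustrative sketch of the anthropometric and IPAQ screening described
# above; function and variable names are our own, not from the study.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def meets_ipaq_criterion(met_vigorous_3d: float, met_total_7d: float) -> bool:
    """High activity per the inclusion criteria: >1500 MET from 3 days of
    vigorous effort, or >3000 MET from 7 days of vigorous and/or moderate
    effort over a week."""
    return met_vigorous_3d > 1500 or met_total_7d > 3000

print(round(bmi(60.5, 1.68), 2))      # BMI of the group's mean values -> 21.44
print(meets_ipaq_criterion(1600, 0))  # True
```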
Data on musculoskeletal injuries sustained during physical activity were collected with the Injury History Questionnaire (IHQ). This tool was validated with a Cronbach's alpha coefficient of 0.836, indicating high reliability [15]. The IHQ includes questions on the number of injuries by body part (head, neck, torso, upper and lower limbs). This study analyzed the total number of injuries. The survey was conducted in a supervised manner, with the researcher available to the respondents throughout. The physical activity level was described with the International Physical Activity Questionnaire (IPAQ). This questionnaire [23] measures self-reported average time spent on physical activity (minutes per week) and is used among young and middle-aged adults and nonprofessional athletes. The obtained data allow calculation of the number of Metabolic Equivalents of Task (MET). With this questionnaire, we also asked about the sports trained, the average number of training sessions, and the duration of a single session. Static balance was measured with an AccuSway stabilometric platform and Balance Clinic software (Advanced Mechanical Technology, Inc. [AMTI], Newton, MA, USA). The participants stood on the platform without shoes, with the upper limbs lowered along the torso. First, they were asked to stand still for 30 s with eyes open, and then under the same conditions with eyes closed. The parameters included in the analysis were the path length traveled by the center of gravity during the measurement and the area of the circle determined by that path. The Functional Movement Screen (FMS) is used for movement pattern screening. In this study, three movement tests requiring leg stance, and thus proper stability and balance, were chosen: deep squat (DS), hurdle step (HS), and in-line lunge (IL).
All movements were assessed on a 0–3 scale, where 0 indicates pain during the movement, 1 an inability to perform the movement, 2 performance with compensation, and 3 performance without compensation. HS and IL are unilateral tests; therefore, they are performed on both body sides, and the worse score is taken [6]. 2.5. Statistics: Means, standard deviations, and confidence intervals were calculated for normally distributed data, and medians with standard errors for data lacking normal distribution. Student's t-test was used to compare static balance results, and the Mann–Whitney U test to compare movement quality tests. When injury occurrence was considered (no injury; single injury; multiple injury group), the Kruskal–Wallis test with the Dunn post hoc test was conducted to determine the differences in static balance and movement quality between those groups. Logistic regression models were built to assess injury risk for general, single, and multiple injury occurrences based on the factors that differentiated the groups. In the second step of the analysis, two-factor logistic regression was conducted. In both, Wald statistics were reported. The reference group comprised subjects with no injury. The level of significance was set at p < 0.05. Statistica 13.0 (Statsoft Poland, Cracow, Poland) was used for the analysis. 3. Results: Table 1 presents the t-test comparison of static balance test results between the no-injury and injury groups. The injured group had statistically significantly worse area circle closed eyes scores than the no-injury group. Moreover, the difference in path length closed eyes scores was close to statistical significance. Table 2 shows the Mann–Whitney U test comparison of movement quality test results between the no-injury and injury groups. The injured group had statistically significantly worse in-line lunge scores than the no-injury group. No other tests differed statistically.
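The comparison pipeline described in the Statistics subsection can be sketched with SciPy. The arrays below are synthetic placeholders, not study data; the Dunn post hoc test is omitted because it is not in SciPy (third-party packages such as scikit-posthocs provide it).

```python
# Sketch of the group comparisons named in the Statistics subsection;
# all data here are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
no_injury = rng.normal(10.0, 2.0, 52)   # e.g. a balance score, no-injury group
injured   = rng.normal(12.0, 2.5, 36)   # injured group (single + multiple)

# Normally distributed static-balance scores: Student's t-test
t, p_t = stats.ttest_ind(no_injury, injured)

# Ordinal FMS scores (1-3): Mann-Whitney U test
fms_no  = rng.integers(1, 4, 52)
fms_inj = rng.integers(1, 4, 36)
u, p_u = stats.mannwhitneyu(fms_no, fms_inj)

# Three groups (no / single / multiple injury): Kruskal-Wallis H test
single   = injured[:12]
multiple = injured[12:]
h, p_h = stats.kruskal(no_injury, single, multiple)
print(p_t, p_u, p_h)
```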
Further, the differences were analyzed with respect to the no injury, single injury (one), and multiple injury (more than one) groups, so the Kruskal–Wallis test was conducted with the post hoc Dunn test (Table 3). The area circle closed eyes (H = 12.97; p = 0.015) differed between the no-injury group and both injured groups. Path length (H = 6.48; p = 0.0391) differed only between no injury and multiple injuries (Table 3). Regarding movement pattern quality, only the in-line lunge differed (H = 10.08; p = 0.0065) between the no-injury group and both injured groups (Table 3). Table 4 presents the logistic regression models for injury risk by injury occurrence (general, single, multiple). For general and single injury, both factors, AC-CE and IL, predicted injury risk. The two-factor model showed that injury risk increases with decreasing static balance and worsening movement quality. However, no credible model could be built for multiple injury occurrences due to the lack of statistical significance. 4. Discussion: The results indicate the possibility of predicting injury risk based on static balance and movement quality in physically active women. In these terms, measurement of the center of pressure area circle with closed eyes (AC-CE) and in-line lunge (IL) screening seem helpful in injury risk detection. The results also indicate the value of deepening the analysis of Functional Movement Screen results beyond the overall score to single module tests. Moreover, using both tests together seems more effective in injury risk prediction, although multiple incidents were not predicted reliably. Static balance is considered an indicator of the good functioning of the musculoskeletal and nervous systems [2]. It is also considered an injury risk factor, given the lower balance ability observed in injured subjects [24,25]. Oshima et al.
[14] observed in a prospective study that static balance was a significant factor in anterior cruciate ligament injury in collegiate athletes, showing that postural control function is one injury risk factor. These results were confirmed in a subsequent study [16]. Moreover, Hrysomallis [12] showed that injury may occur due to poor balance, suggesting that correcting deficits in this ability is effective injury prevention. Dunsky et al. [1] also indicated that improving static and dynamic balance is an effective injury prevention method. The above observations suggest that other factors could also play a role in injury mechanisms. One of them is movement quality screening [13,15]. The most reliable is the FMS, a tool helpful in injury risk prediction in physically active groups [26]. Injury risk assessment is mainly based on the overall score, but some results also suggest the usefulness of single module tests [20,22]. In our study, the IL test predicted injury in the general approach and for a single injury. However, Shimoura et al. [22], in a study on male basketball players, indicated the deep squat (DS) and hurdle step (HS) tests as injury predictors. The difference from our observation may be due to sex and sport group. Moreover, Bunn et al. [20] also pointed out that the deep squat as a single screening test could be helpful in injury prediction. Deep squats, hurdle steps, and in-line lunges are global movement patterns based on motor control, mobility, and stability [6]. Therefore, using them together with a static balance examination may provide deeper insight into balance disorders and the associated injury risk. Our results showed that two-factor models based on COP AC-CE and IL predict injury risk in the general approach and for single occurrences. Some results suggest that balance ability is strongly associated with the IL score, which may explain why both factors predict injury risk [27].
The results of a study by de la Motte et al. [28] suggested that using movement pattern screening together with balance examination is a more reliable way to predict injury than either alone. However, the contrasting results published by Lisman et al. [29] cannot be omitted and indicate the need for further investigation. It should be emphasized that the methodologies differ: we used a static balance platform, whereas the results mentioned above were based on the Y-balance test, which measures stability from another perspective. Studies examining injury risk factors make a vital contribution to sports science and are worth developing and exploring; studies considering more than one factor are especially valuable [30]. We are aware of this study's limitations. Balance is a complex ability associated with the vestibular, visual, and proprioceptive systems, and evaluating the efficiency of each would provide more comprehensive and reliable results. There was no examination of body posture (e.g., foot arch, pelvis position, spine shape), which could have provided additional relevant data. Moreover, static balance underrepresents the postural control demands of daily life, so these results need to be interpreted as only one part of balance ability. We investigated only women of a narrow age range, indicating a strong need for further studies examining similar associations in men and other age groups. The group was not homogeneous with respect to sport discipline. Moreover, we did not verify the results prospectively. Therefore, there is a strong need for prospective studies to establish the efficiency of the indicated parameters as injury risk predictors. 5. Conclusions: Static balance ability and movement quality (alone and jointly) predict injury risk in physically active women. The most reliable parameters seem to be the center of pressure area circle with closed eyes and the in-line lunge test, which indicate injury both alone and in one model.
However, those parameters did not predict multiple injury occurrences. Using quantitative and qualitative measurements together could be helpful in injury risk detection. However, the factors mentioned above should be used with caution, and there is a need to verify them in prospective terms.
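The logistic models reported above express risk as odds ratios (OR = exp(beta) per unit change in a predictor). A minimal Newton-Raphson fit on synthetic data illustrates how such an OR is obtained; all numbers and variable names here are illustrative, not study data or the authors' actual model.

```python
# Minimal Newton-Raphson logistic regression on synthetic data, showing how
# an odds ratio (OR = exp(beta)) for a single predictor is estimated.
import numpy as np

rng = np.random.default_rng(1)
n = 200
balance = rng.normal(0, 1, n)                  # e.g. a standardized balance score
logits = -0.5 + 0.8 * balance                  # true intercept and slope (made up)
y = rng.random(n) < 1 / (1 + np.exp(-logits))  # injury yes/no outcome

X = np.column_stack([np.ones(n), balance])     # design matrix with intercept
beta = np.zeros(2)
for _ in range(25):                            # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                            # IRLS weights
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratio = np.exp(beta[1])  # OR per 1-unit increase in the predictor
print(round(odds_ratio, 2))   # close to exp(0.8) up to sampling noise
```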
Background: Static balance is a reliable indicator of the musculoskeletal and nervous systems and a basis for the development of movement stabilization. Disorders in this area may increase injury risk (IR). This study investigated musculoskeletal injury risk due to static balance and movement quality, regarding single and multiple injury occurrences, in physically active women. Methods: The study sample was 88 women aged 21.48 ± 1.56 years. Injury data were obtained with a questionnaire, and Deep Squat (DS), In-line Lunge (IL), and Hurdle Step (HS) tests were conducted. Static balance was assessed with a stabilometric platform measuring the center of gravity area circle (AC) and path length (PL) with open (OE) and closed eyes (CE) while maintaining a standing position for 30 s. Results: Logistic regression models revealed that general injury occurrence was predicted by AC-CE (OR = 0.70; p = 0.03), IL (OR = 0.49; p = 0.03), and the two-factor model AC-CE*IL (OR = 1.40; p < 0.01). Single injury was predicted by the same factors: AC-CE (OR = 0.49; p < 0.01), IL (OR = 0.36; p = 0.01), and AC-CE*IL (OR = 1.58; p < 0.01). Conclusions: Static balance and movement stability predict musculoskeletal injury risk alone and in one model. Further study is needed to verify the efficiency of the indicated factors in prospective terms. Using both quantitative and qualitative tests could be helpful in IR prediction.
1. Introduction: Static balance is one of the most credible indicators of the state of the musculoskeletal and nervous systems, reflecting the ability to maintain an upright posture and to keep the line of gravity within the limits of the base of support [1,2]. It expresses the harmonious work of the nervous and musculoskeletal systems. Balance disorders are mainly observed in the elderly and are related to involution, but young people can also be affected [3]. Static balance disorders worsen stabilization during movement, which causes additional energy expenditure, compensation, and mobility limitation; therefore, movement becomes less effective [4,5,6]. However, this association is poorly explained [7]. High-quality movement patterns require a high level of mobility, stability, and motor control [4,5,6]. A relationship between functional movement and postural stabilization has been shown [8]. A correct movement pattern is performed with the lowest energy expenditure, ensuring the effectiveness and precision of motor activity and the safety of the tissues [9,10]. Despite generally the same principles of performance, individual movement acts differ between humans. The movement pattern is understood here as a unique way of realizing a specific movement act [11]. Both balance and movement pattern quality are considered injury risk factors [12,13,14,15]. It has been shown that athletes with lower stability are more likely to be injured; the cruciate ligaments are especially at risk [16]. This could be more evident in women, who are more likely to have knee valgus and therefore a higher probability of injury [17]. Moreover, movement pattern quality is a valuable factor in injury risk. Chorba et al. [18] indicated that female athletes with poor movement patterns are more likely to be injured. However, less is known about the usefulness of single Functional Movement Screen (FMS) module tests in injury risk prediction.
This is especially true for combining their results with other measurements. Hartigan et al. [19] investigated the relationship between in-line lunge (IL) scores and balance ability, sprint, and jump performance; however, there is a lack of data relating IL results to injury. Deep squat (DS) screening could be considered an injury risk predictor [20]. The Hurdle Step (HS) is regarded as associated with physical performance [21], but the results provided by Shimoura et al. [22] indicated DS and HS as injury risk factors. The above observations lead to questions about the possibility of using balance measurements and stability-based movement pattern tests in injury prediction. However, the accessible data mainly focus on single-factor analyses, omitting interactions with other attributes and thus not giving the whole picture of the human body's movement ability. There is a need for analyses that consider more factors and examine the interactions between them. Therefore, this study aimed to investigate injury risk due to static balance and movement quality, and single and multiple injury occurrence, in physically active women. Specifically, we wanted to answer the following: (1) Which parameters differ between injured and uninjured subjects? (2) Which measured static balance parameters and movement pattern scores predict injury risk? (3) Is it possible to create a two-factor injury risk prediction model based on static balance and movement pattern quality? Moreover, (4) we deepened the analysis of injury regarding single and multiple incidents. The obtained results allow a deeper look at the structure of dependencies between human movement abilities, enabling effective injury prediction and the targeting of adequate actions for injury prevention in physically active women.
5,192
317
[ 2110, 31, 42, 470, 187 ]
10
[ "injury", "movement", "balance", "physical", "risk", "injury risk", "results", "study", "test", "activity" ]
[ "maintain upright posture", "balance movement patterns", "movement postural stabilization", "ability maintain upright", "static balance disorders" ]
null
[CONTENT] static balance | stability | injury risk | movement | women [SUMMARY]
null
[CONTENT] static balance | stability | injury risk | movement | women [SUMMARY]
[CONTENT] static balance | stability | injury risk | movement | women [SUMMARY]
[CONTENT] static balance | stability | injury risk | movement | women [SUMMARY]
[CONTENT] static balance | stability | injury risk | movement | women [SUMMARY]
[CONTENT] Exercise Test | Female | Humans | Movement | Multiple Trauma | Postural Balance | Prospective Studies [SUMMARY]
null
[CONTENT] Exercise Test | Female | Humans | Movement | Multiple Trauma | Postural Balance | Prospective Studies [SUMMARY]
[CONTENT] Exercise Test | Female | Humans | Movement | Multiple Trauma | Postural Balance | Prospective Studies [SUMMARY]
[CONTENT] Exercise Test | Female | Humans | Movement | Multiple Trauma | Postural Balance | Prospective Studies [SUMMARY]
[CONTENT] Exercise Test | Female | Humans | Movement | Multiple Trauma | Postural Balance | Prospective Studies [SUMMARY]
[CONTENT] maintain upright posture | balance movement patterns | movement postural stabilization | ability maintain upright | static balance disorders [SUMMARY]
null
[CONTENT] maintain upright posture | balance movement patterns | movement postural stabilization | ability maintain upright | static balance disorders [SUMMARY]
[CONTENT] maintain upright posture | balance movement patterns | movement postural stabilization | ability maintain upright | static balance disorders [SUMMARY]
[CONTENT] maintain upright posture | balance movement patterns | movement postural stabilization | ability maintain upright | static balance disorders [SUMMARY]
[CONTENT] maintain upright posture | balance movement patterns | movement postural stabilization | ability maintain upright | static balance disorders [SUMMARY]
[CONTENT] injury | movement | balance | physical | risk | injury risk | results | study | test | activity [SUMMARY]
null
[CONTENT] injury | movement | balance | physical | risk | injury risk | results | study | test | activity [SUMMARY]
[CONTENT] injury | movement | balance | physical | risk | injury risk | results | study | test | activity [SUMMARY]
[CONTENT] injury | movement | balance | physical | risk | injury risk | results | study | test | activity [SUMMARY]
[CONTENT] injury | movement | balance | physical | risk | injury risk | results | study | test | activity [SUMMARY]
[CONTENT] movement | injury | injury risk | risk | balance | movement patterns | patterns | prediction | performance | likely [SUMMARY]
null
[CONTENT] injury | table | group | statistically | injury group | test | revealed | scores | injured | area [SUMMARY]
[CONTENT] injury | risk | injury risk | predict | parameters | parameters predict multiple injury | model parameters predict | parameters predict multiple | verify prospective terms | parameters predict [SUMMARY]
[CONTENT] injury | movement | balance | risk | injury risk | physical | test | physical activity | study | results [SUMMARY]
[CONTENT] injury | movement | balance | risk | injury risk | physical | test | physical activity | study | results [SUMMARY]
[CONTENT] ||| IR ||| [SUMMARY]
null
[CONTENT] AC-CE | 0.70 | 0.03 | IL | 0.49 | 0.03 | two | AC-CE*IL | 1.40 | p &lt | 0.01 ||| AC-CE | 0.49 | p &lt | 0.01 | IL | 0.36 | 0.01 | AC-CE*IL | 1.58 | p &lt | 0.01 [SUMMARY]
[CONTENT] one ||| ||| IR [SUMMARY]
[CONTENT] ||| IR ||| ||| 88 | 21.48 | 1.56 ||| Deep Squat | IL ||| AC | CE | 30 | AC-CE | 0.70 | 0.03 | IL | 0.49 | 0.03 | two | AC-CE*IL | 1.40 | p &lt | 0.01 ||| AC-CE | 0.49 | p &lt | 0.01 | IL | 0.36 | 0.01 | AC-CE*IL | 1.58 | p &lt | 0.01 ||| one ||| ||| IR [SUMMARY]
[CONTENT] ||| IR ||| ||| 88 | 21.48 | 1.56 ||| Deep Squat | IL ||| AC | CE | 30 | AC-CE | 0.70 | 0.03 | IL | 0.49 | 0.03 | two | AC-CE*IL | 1.40 | p &lt | 0.01 ||| AC-CE | 0.49 | p &lt | 0.01 | IL | 0.36 | 0.01 | AC-CE*IL | 1.58 | p &lt | 0.01 ||| one ||| ||| IR [SUMMARY]
A meta-analysis of the association between day-care attendance and childhood acute lymphoblastic leukaemia.
20110276
Childhood acute lymphoblastic leukaemia (ALL) may be the result of a rare response to common infection(s) acquired by personal contact with infected individuals. A meta-analysis was conducted to examine the relationship between day-care attendance and risk of childhood ALL, specifically to address whether early-life exposure to infection is protective against ALL.
BACKGROUND
Searches of the PubMed database and bibliographies of publications on childhood leukaemia and infections were conducted. Observational studies of any size or location and published in English resulted in the inclusion of 14 case-control studies.
METHODS
The combined odds ratio (OR) based on the random effects model indicated that day-care attendance is associated with a reduced risk of ALL [OR = 0.76, 95% confidence interval (CI): 0.67, 0.87]. In subgroup analyses evaluating the influence of timing of exposure, a similarly reduced effect was observed for both day-care attendance occurring early in life (≤2 years of age) (OR = 0.79, 95% CI: 0.65, 0.95) and day-care attendance with unspecified timing (anytime prior to diagnosis) (OR = 0.81, 95% CI: 0.70, 0.94). Similar findings were observed with seven studies in which common ALL were analysed separately. The reduced risk estimates persisted in sensitivity analyses that examined the sources of study heterogeneity.
RESULTS
This analysis provides strong support for an association between exposure to common infections in early childhood and a reduced risk of ALL. Implications of a 'hygiene'-related aetiology suggest that some form of prophylactic intervention in infancy may be possible.
CONCLUSIONS
[ "Case-Control Studies", "Child", "Child Day Care Centers", "Communicable Diseases", "Humans", "Precursor Cell Lymphoblastic Leukemia-Lymphoma", "Risk Assessment" ]
2878455
Introduction
Evidence is growing in support of a role for infections in the aetiology of childhood leukaemia, particularly for the most common subtype, acute lymphoblastic leukaemia (ALL).1–3 Two infection-related hypotheses have gained popularity and are currently supported by substantial, yet inconsistent, epidemiologic findings. Kinlen first proposed the ‘population mixing’ hypothesis in response to the observed childhood leukaemia clusters occurring in the early 1980s in Seascale and Thurso, two remote and isolated communities in the UK that experienced a rapid influx of professional workers.4 He proposed that childhood leukaemia may result from an abnormal immune response to specific, although unidentified, infections commonly seen with the influx of infected persons into an area previously populated with non-immune and susceptible individuals. This hypothesis suggests a mechanism that involves a direct pathological role of specific infectious agents, presumably viruses, in the development of childhood leukaemia and that an immunizing effect may be acquired through previous exposure. 
Supportive data include several subsequent studies conducted by Kinlen and others examining similar examples of population mixing including rural new towns, situations of wartime population change and other circumstances contributing to unusual patterns of personal contact.4–11 Currently, there is no molecular evidence implicating cell transformation by a specific virus.12 The ‘delayed infection’ hypothesis proposed by Greaves emphasizes the critical nature of the timing of exposure and is intended to apply mostly to common B-cell precursor ALL (c-ALL), which largely accounts for the observed peak incidence of ALL between 2 and 5 years of age in developed countries.13,14 He described a role for infections in the context of a ‘two-hit’ model of the natural history of c-ALL,15 where the first ‘hit’ or initiating genetic event occurs in utero during fetal haematopoiesis producing a clinically covert pre-leukemic clone. The transition to overt disease occurs, in a small fraction (∼1%) of pre-leukaemia carriers, after a sufficient postnatal secondary genetic event, which may be caused by a proliferative stress-induced effect of common infections on the developing immune system of the child.1,13 This adverse immune response to infections is thought to be the result of insufficient priming of the immune system usually influenced by a delay in exposure to common infectious agents during early childhood. 
With the assumption that improved socio-economic conditions may lead to delay in exposure to infections, the Greaves hypothesis provides one plausible explanation for the notably higher incidence rates of ALL with its characteristic peak age between 2 and 5 years observed only in more socio-economically developed countries.16,17 Although different in hypothesized mechanism, both the ‘population mixing’ and ‘delayed infection’ hypotheses propose childhood leukaemia to be caused by an abnormal immune response to infection(s) acquired by personal contacts, and are compatible with available evidence. In some populations, it is possible that both mechanisms may be operating. Several previous epidemiological studies have used day-care attendance as an indicator of the increased likelihood of early exposure to infections,18 since it is well documented that in developed countries exposures to common infections, particularly those affecting the respiratory and gastrointestinal tracts, occur more frequently in this type of setting.19 The immaturity of children’s immune systems in combination with the lack of appropriate hygienic behaviour is believed to promote the transmission of infectious agents in this social setting.19–21 In the current analysis, we took a meta-analytic approach to summarize the findings to date on the relationship between day-care attendance and risk of childhood ALL.
Methods
Identification of studies Literature searches were conducted in PubMed to identify original research and review articles related to childhood leukaemia and day-care attendance and/or social contacts published between January 1966 and October 2008. The searches were conducted using the term ‘childhood leukaemia’ in combination with other terms including ‘infection’, ‘child care’, ‘day care’ and ‘social contact’. In addition, the bibliographies of epidemiology publications on childhood leukaemia and infections were searched to identify studies that may not have been captured through the initial database search. This included the review published in 2004 by McNally and Eden on the infectious aetiology of childhood acute leukaemia (AL).2 Inclusion/exclusion criteria and definitions Among the studies identified, inclusion in the meta-analysis was limited to observational studies of case–control or cohort design of any size, geographic location and race/ethnicity of study participants. When more than one publication from an individual study was available, either the most recent publication or the publication that performed the analysis most applicable to evaluating the ‘delayed infection’ hypothesis was selected.
Studies needed to have reported a relative risk (RR) or odds ratio (OR) and confidence intervals (CIs), or original data by disease status from which a measure of effect could be calculated. The outcome of interest was defined as clinically diagnosed leukaemia in children between the ages of 0 and 19 years. In the very few studies that did not distinguish between specific leukaemia subtypes,22–26 it was assumed that ALL was the primary subtype since it accounts for the majority (∼80%) of leukaemia diagnoses in children.27 The exposure of interest generally referred to as ‘day-care attendance’, which, in addition to formal day care, may have included preschool, nursery school, play groups, mother–toddler groups and other early social contacts. A strict criterion for the meaning of ‘regular attendance’ was not defined a priori since it was assumed that this would vary between studies. Of the primary studies identified, four were excluded for various reasons, including study emphasis on evaluating leukaemia prognosis and outcome,28 an earlier analysis of data from a study for which a more complete and recent publication is available,29 and not reporting a risk estimate for day-care attendance.30,31 After the exclusions, a total of 14 studies, all case–control in design, were retained for the meta-analysis.22–24,32–42 Among the studies identified, inclusion in the meta-analysis was limited to observational studies of case–control or cohort design of any size, geographic location and race/ethnicity of study participants. When more than one publication from an individual study was available, either the most recent publication or the publication that performed the analysis most applicable to evaluating the ‘delayed infection’ hypothesis was selected. Studies needed to have reported a relative risk (RR) or odds ratio (OR) and confidence intervals (CIs), or original data by disease status from which a measure of effect could be calculated. 
Data extraction and statistical approach

For most studies,23,24,34–40 the ORs and 95% CIs for leukaemia, AL, ALL or c-ALL among those who attended day care compared with those who did not were extracted.
Among the few studies that did not provide this estimate, the OR for a similar measure was extracted, including those for no deficit in social contacts,42 regular contact outside the home,22 >36 months duration of day-care attendance,41 increasing index and family day-care measure,32 and social activity.33 In two instances, the reported OR was recalculated to reflect the risk associated with the highest level of day-care attendance and/or social activity measure compared with the lowest.41,42 Furthermore, several studies reported risk estimates for stratified analyses by specific subtype of leukaemia,22,32,33,35–38,40–42 age at diagnosis,34,35,38 specific age of day-care attendance,22,32–38,40 or race/ethnicity;37 multiple estimates were extracted from these studies for subgroup and sensitivity evaluations in the meta-analysis, including specific leukaemia subtypes (particularly ALL and c-ALL) and the timing of day-care attendance. In general, studies referred to the common precursor B-cell ALL subtype (CD10- and CD19-positive ALL) as c-ALL. Four studies defined c-ALL with an added criterion that specified an age range between 2 and 5 years.33,37,38,43 Risk estimates by specific diagnosis age groups were not extracted since only a few studies provided this information and the age cut-points varied. For the one study that stratified by race/ethnicity,37 two separate risk estimates were included in the meta-analysis since the reported estimates were based on independent populations.
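The OR recalculations mentioned above rest on a simple property of odds ratios: swapping the reference category inverts the point estimate, and inverts and swaps the CI bounds. A minimal sketch (illustrative only; not the authors' code):

```python
def invert_or(or_est, ci_low, ci_high):
    """Re-express an OR (with 95% CI) for the opposite reference category.

    The point estimate is inverted; the CI bounds are inverted and swapped
    so the lower bound remains below the upper bound.
    """
    return 1.0 / or_est, 1.0 / ci_high, 1.0 / ci_low

# Example: an OR of 1.1 (0.9, 1.3) for one reference category becomes
# roughly 0.91 (0.77, 1.11) when the reference category is swapped.
or_inv, lo, hi = invert_or(1.1, 0.9, 1.3)
```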
The between-study heterogeneity was assessed using the Q statistic, which tests the null hypothesis that the estimated effect is homogeneous across all studies.44 Acknowledging that the eligible studies were conducted independently and may represent only a random sample of the distribution of all possible effect sizes for this association, the random effects model was utilized, which incorporates an estimate of both between-study and within-study variation into the calculation of the summary effect measure.45 Compared with the fixed effects model,46 this method is more conservative and generally results in a wider CI. Finally, publication bias was evaluated visually using the funnel graph method, which displays the distribution of all included studies by their point estimates and standard errors.47 In addition, the Begg and Mazumdar adjusted rank correlation test was used to test for correlation between the effect estimates and their variances which, if present, provides an indication of publication bias.48 The association with c-ALL was evaluated with a meta-analysis of 7 of the 14 studies.32,33,35–38,42 If a study reported multiple ORs and 95% CIs by timing of day-care attendance, the risk estimate associated with the earliest timing (e.g. age ≤2 years) was used to be consistent with the ‘delayed infection’ hypothesis.23,32,34,37,38 The effect of the timing of exposure was evaluated in subgroup meta-analyses of studies reporting risk estimates for early day-care attendance (age ≤2 years)22,23,32–34,36–38,42 and studies reporting risk estimates for day-care attendance anytime before diagnosis.23,24,35,37–39,41 Finally, a series of sensitivity analyses were conducted to evaluate the sources of study heterogeneity, namely, the influences of potential selection bias and heterogeneity in disease classification and exposure definition.
The analyses were conducted using the statistical software, STATA Version 9.49
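The random-effects pooling described above can be sketched as follows. This is a hedged illustration of the standard DerSimonian–Laird estimator with Cochran's Q for heterogeneity, using three of the study estimates from Table 2 as example inputs; it is not the paper's STATA code.

```python
import math

def random_effects_pool(ors, ci_los, ci_his):
    """DerSimonian-Laird random-effects pooling of ORs with Cochran's Q."""
    # Work on the log scale; recover each study's SE from its 95% CI width.
    y = [math.log(o) for o in ors]
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
          for lo, hi in zip(ci_los, ci_his)]
    w = [1 / s**2 for s in se]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed)**2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(y) - 1
    # DL estimate of the between-study variance tau^2, truncated at zero.
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (s**2 + tau2) for s in se]           # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    pooled = math.exp(y_re)
    ci = (math.exp(y_re - 1.96 * se_re), math.exp(y_re + 1.96 * se_re))
    return pooled, ci, q, tau2

# Three study-level estimates (OR, lower, upper) taken from Table 2,
# purely to illustrate the mechanics on a small input.
pooled, ci, q, tau2 = random_effects_pool(
    [0.49, 0.99, 0.66], [0.31, 0.84, 0.56], [0.77, 1.17, 0.77])
```

The random-effects weights shrink toward equality as tau² grows, which is why this model is more conservative (wider CI) than the fixed-effects model when heterogeneity is present.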
Results
Table 1 presents selected characteristics of the 14 studies included in this meta-analysis. The studies, all case–control in design, were published between 1993 and 2008 and were conducted in many different geographic areas. Most studies achieved a population-based ascertainment of cases utilizing a national registry or a regional network of all major paediatric oncology centres. A population-based control selection strategy was most common, with the exception of three studies that selected hospital-based controls.23,24,39 Only 1 of the 14 studies utilized a records-based day-care assessment protocol,36 whereas the remaining studies relied on standardized questionnaires administered in person, by telephone or by mail. All studies accounted for major confounding factors such as age, sex, race and socio-economic status through a matched study design and/or statistical adjustment in the analysis. Of the 14 studies identified, 11 reported either a statistically significant reduced risk associated with day-care attendance and/or social contact measures23,33–37,39 or some evidence of a reduced risk.22,24,40,41

Table 1. Select characteristics of studies included in the meta-analyses of day-care attendance and childhood leukaemia

Petridou et al., 199323 (Greece, Attica and Crete)
- Cases: 125 leukaemia (Attica), 11 leukaemia (Crete); age 0–14 years; children’s hospitals of the University of Athens (1987–91) and the University of Crete (1990–92)
- Controls: 187 frequency-matched children who attended the outpatient clinic of the hospitals where the children with leukaemia were treated
- Data collection: telephone interviews with parents of children
- Select results: attendance at crèche (yes/no); at any time, leukaemia: 0.67 (0.41, 1.11); in infancy (for ≥3 months in the first 2 years of life), leukaemia: 0.28 (0.09, 0.88)
- Confounding addressed: frequency match: sex, age, hospital; adjustment: place of residence, social class

Roman et al., 199440 (UK)
- Cases: 38 ALL; age 0–4 years; diagnosed 1972–89; born in west Berkshire or north Hampshire, and residents there when diagnosed
- Controls: 112 individually matched children selected from hospital delivery registers
- Data collection: parents were interviewed
- Select results: preschool playgroup (yes/no) in the year before diagnosis (for ≥3 months), ALL: 0.6 (0.2, 1.8)
- Confounding addressed: individual match: sex, date of birth, mother’s age, area of residence at birth, time of diagnosis; adjustment: not specified

Petridou et al., 199724 (Greece)
- Cases: 153 leukaemia; age 0–14 years; diagnosed 1993–94; nationwide network of paediatric haematologists/oncologists
- Controls: 300 individually matched children hospitalized at the same time as the corresponding case for acute conditions
- Data collection: guardians of all subjects completed an interviewer-administered questionnaire
- Select results: day-care attendance (ever/never), leukaemia: 0.83 (0.51, 1.37)
- Confounding addressed: individual match: sex, age, geographic region; adjustment: maternal age at birth, maternal education, sibship size, birth order, persons per room

Schuz et al., 199942 (Germany)
- Cases: 1010 AL (686 c-ALL); diagnosed 1980–94; age 0–14 years; nationwide German Children’s Cancer Registry at the University of Mainz
- Controls: 2588 matched children randomly selected from complete files of local offices of registration of residents
- Data collection: structured questionnaire based on the US Children’s Cancer Group
- Select results: day-care attendance not directly assessed; deficit in social contacts (yes/no, age ≤18 months excluded), AL: 1.1 (0.9, 1.3), c-ALL: 1.0 (0.8, 1.2)
- Confounding addressed: analysis of AL — individual match: date of birth, sex, district; adjustment: SES; analysis of c-ALL — adjustment: sex, age, year of birth, study setting, SES, urbanization

Dockerty et al., 199922 (New Zealand)
- Cases: 97 ALL; diagnosed 1990–93; age 0–14 years; New Zealand Cancer Registry, public hospital admission/discharge computer system, and the Children’s Cancer Registry; nationwide
- Controls: 97 individually matched children randomly selected from the New Zealand born and resident childhood population using national birth records; 209 solid cancer cases
- Data collection: mothers interviewed in their homes using a questionnaire adapted from Patricia McKinney and Eve Roman in the UK
- Select results: regular contact with other children from outside home at <12 months (yes/no, age <15 months excluded), ALL: 0.65 (0.36, 1.17)
- Confounding addressed: individual match: age and sex; adjustment: sex, age, several others including social class

Infante-Rivard et al., 200034 (Canada)
- Cases: 491 ALL; diagnosed 1980–93; age 0–9 years; tertiary care centre similar to population-based ascertainment
- Controls: 491 individually matched children chosen from the most complete census of children for the study years
- Data collection: structured questionnaire administered to mothers by phone
- Select results: day-care attendance by age at entry; entry ≤2 years old vs no, ALL: 0.49 (0.31, 0.77); entry at >2 years old vs no, ALL: 0.67 (0.45, 1.01)
- Confounding addressed: individual match: age, sex, region of residence at diagnosis; adjustment: maternal age, maternal education

Neglia et al., 200038 (USA)
- Cases: 1744 ALL (633 c-ALL; excludes cases <1 year); diagnosed 1989–93; age 0–14 years; Children’s Cancer Group member institutions throughout the USA
- Controls: 1879 individually matched children randomly selected using the random digit dialing (RDD) methodology
- Data collection: structured interview
- Select results: day-care attendance (age <1 year excluded); yes vs no, ALL: 0.96 (0.82, 1.12), c-ALL: 0.96 (0.75, 1.24); day care before age 2 vs no, ALL: 0.99 (0.84, 1.17), c-ALL: 1.05 (0.80, 1.37)
- Confounding addressed: individual match: age, race, telephone area code, exchange, sex (T-cell leukaemia only); adjustment: maternal race, education, family income

Rosenbaum et al., 200041 (USA)
- Cases: 255 ALL; diagnosed 1980–91; age 0–14 years; four clinical centers in a 31-county study region; institutional tumour registries and department of paediatric haematology–oncology records
- Controls: 760 frequency-matched children randomly selected through the Live Birth Certificate Registry maintained by the New York State Department of Health
- Data collection: self-administered questionnaire mailed to the parents of subjects
- Select results: total duration of out-of-home care (duration vs >36 months), ALL — stay home: 1.32 (0.70, 2.52); 1–18 months: 1.74 (0.89, 3.42); 19–36 months: 1.32 (0.70, 2.52)
- Confounding addressed: frequency match: sex, age, race, birth year; adjustment: maternal age, maternal education, birth year, maternal employment, breastfeeding, birth order

Chan et al., 200232 (Hong Kong)
- Cases: 98 AL (66 c-ALL); diagnosed 1994–97; age 2–14 years; Hong Kong Pediatric Hematology and Oncology Study Group
- Controls: 228 children selected using RDD methodology
- Data collection: in-person interview using a structured questionnaire adapted from UKCCS and translated into Chinese
- Select results: index and family day-care attendance (3-category variable); first year of life, AL: 0.96 (0.70, 1.32); child peak: 0.63 (0.38, 1.07); c-ALL: 0.93 (0.63, 1.36)
- Confounding addressed: matching: none; adjustment: age, number of children in household at reference date

Perrillat et al., 200239 (France)
- Cases: 280 AL (240 ALL); diagnosed 1995–99; age 0–15 years; hospitals of Lille, Lyon, Nancy and Paris; cases needed to have resided in the hospital catchment area
- Controls: 288 frequency-matched children hospitalized in the same hospital as the cases, and residing in the catchment area of the hospital
- Data collection: in-person standardized questionnaire
- Select results: day-care attendance (age <2 years excluded); ever vs never, AL: 0.6 (0.4, 1.0); age started vs no day care — >12 months: 0.5 (0.3, 1.0); 7–12 months: 0.6 (0.2, 1.7); ≤6 months: 0.5 (0.3, 1.0)
- Confounding addressed: frequency match: age, sex, hospital, hospital catchment area, ethnic origin; adjustment: age, sex, hospital, ethnic origin, maternal education, parental professional category

Jourdan-Da Silva et al., 200435 (France)
- Cases: 473 AL (408 ALL, 304 c-ALL); diagnosed 1995–98; age 0–14 years; National Registry of Childhood Leukaemia and Lymphoma (NRCL)
- Controls: 567 frequency-matched children randomly selected using age, sex and region quotas from a sample of 30 000 phone numbers representative of the French population on area of residence and municipality size
- Data collection: standardized self-administered questionnaire on mothers
- Select results: day-care attendance (age <1 year excluded); ever vs never, ALL: 0.7 (0.6, 1.0), c-ALL: 0.8 (0.6, 1.0); started at age <3 months vs never, ALL: 0.6 (0.4, 0.8), c-ALL: 0.6 (0.4, 0.9)
- Confounding addressed: frequency match: age, sex, region; adjustment: age, sex, region

Gilham et al., 200533 (UK)
- Cases: 1286 ALL (791 c-ALL; excludes cases <2 years); diagnosed 1991–96; age 0–14 years; nationwide ascertainment through paediatric oncology units
- Controls: 6238 individually matched children randomly selected from primary care population registers
- Data collection: in-person interview with parents using a structured questionnaire
- Select results: social activity in the first year of life (age <2 years excluded); any vs no social activity, ALL: 0.66 (0.56, 0.77), c-ALL: 0.67 (0.55, 0.82); age started vs no day care, ALL — <3 months: 0.71 (0.60, 0.85); 3–5 months: 0.71 (0.56, 0.90); 6–11 months: 0.76 (0.63, 0.92)
- Confounding addressed: individual match: sex, month and year of birth, region of residence at diagnosis; adjustment: age at diagnosis, sex, region, maternal age, mother working at time of birth, deprivation

Ma et al., 200537 (USA)
- Cases: 294 ALL (145 c-ALL; excludes cases <1 year); diagnosed 1995–2002; age 0–14 years; population-based ascertainment from major paediatric clinical centres in Northern and Central California
- Controls: 376 individually matched children randomly selected from statewide birth certificate files maintained by the California Department of Health Services
- Data collection: personal interview with primary caretaker
- Select results: child-hours of exposure at day care (age <1 year excluded); ≥5000 child-hours (1st year) vs 0, ALL — Hispanic: 2.10 (0.70, 6.34), White: 0.42 (0.18, 0.99); c-ALL — Hispanic: 2.53 (0.60, 10.7), White: 0.33 (0.11, 1.01)
- Confounding addressed: individual match: date of birth, sex, mother’s race, Hispanic status; adjustment: annual household income, maternal education

Kamper-Jorgensen et al., 200836 (Denmark)
- Cases: 559 ALL (199 c-ALL); diagnosed 1989–2004; age 0–15 years; all cases of childhood leukaemia identified in a cohort of all children in Denmark
- Controls: 5590 individually matched children selected from population registers
- Data collection: records-based data from three population-based registries: the Nordic Society of Paediatric Haematology and Oncology, the Danish Civil Registration System and the Childcare database
- Select results: child-care attendance during first 2 years of life (yes/no), ALL: 0.68 (0.48, 0.95), c-ALL: 0.58 (0.36, 0.93)
- Confounding addressed: individual match: date of birth, sex, birth cohort; adjustment: several demographic characteristics were considered but none were major confounders

SES, socioeconomic status; RDD, random digit dialing; UKCCS, United Kingdom Childhood Cancer Study.
As shown in Table 2, the 14 studies included a total of 6108 cases and generated a combined OR estimate indicating that day-care attendance is associated with a reduced risk of childhood ALL (OR = 0.76, 95% CI: 0.67, 0.87). Figure 1 provides a visual portrayal of the relationship between day-care attendance and the risk of childhood ALL. Three large studies conducted in Germany,42 the USA38 and the UK33 appeared to carry a large proportion of the weight in the meta-analysis, at ∼13% each. The combined risk estimates excluding each of these studies individually remained similarly reduced, indicating that no one large study could completely explain the protective effect observed (data not shown). No remarkable evidence of publication bias was apparent from the funnel plot, since the data points for these 14 studies were, in general, randomly distributed around the combined OR estimate (plot not shown). This visual interpretation was confirmed by the large P-value from the rank correlation method (P = 0.553).

Table 2. Meta-analysis of studies examining the association between day-care attendance and risk of childhood ALL

| Study, year | Outcome, age in years | Day-care definition | Timing | Cases | OR | 95% CI | Wi (%)a |
| Petridou et al., 199323 | Leukaemia, 0–14 | Attendance at crèche: yes/no | Before age 2 years | 136 | 0.28 | 0.09, 0.88 | 1.2 |
| Roman et al., 199440 | ALL, 0–4 | Preschool playgroup: yes/no | Year before dx | 38 | 0.60 | 0.20, 1.80 | 1.3 |
| Petridou et al., 199724 | Leukaemia, 0–14 | Day care: ever/never | Birth to dx | 153 | 0.83 | 0.51, 1.37 | 5.0 |
| Schuz et al., 199942,b | AL, 1.5–14 | Deficit in social contacts: yes/no | Before age 2 years | 921 | 0.91 | 0.90, 1.30 | 12.7 |
| Dockerty et al., 199922 | ALL, 1.25–14 | Reg. contact outside home: yes/no | First year of life | 90 | 0.65 | 0.36, 1.17 | 3.8 |
| Infante-Rivard et al., 200034 | ALL, 0–9 | Day care: entry at ≤2 years/never | At or before age 2 years | 433 | 0.49 | 0.31, 0.77 | 5.6 |
| Neglia et al., 200038 | ALL, 1–14 | Day care before age 2 years: yes/no | Before age 2 years | 1744 | 0.99 | 0.84, 1.17 | 13.3 |
| Rosenbaum et al., 200041,b | ALL, 0–14 | Out-of-home care: >36 months/none | Birth to dx | 158 | 0.76 | 0.70, 2.52 | 3.3 |
| Chan et al., 200232 | AL, 2–14 | Index and family day care: 3-level | First year of life | 98 | 0.96 | 0.70, 1.32 | 8.5 |
| Perrillat et al., 200239 | AL, 2–15 | Day-care attendance: yes/no | Birth to dx | 246 | 0.60 | 0.40, 1.00 | 5.5 |
| Jourdan-Da Silva et al., 200435 | ALL, 1–14 | Day-care attendance: yes/no | Birth to dx | 387 | 0.70 | 0.60, 1.00 | 10.3 |
| Gilham et al., 200533 | ALL, 2–14 | Social activity: any/none | First year of life | 1272 | 0.66 | 0.56, 0.77 | 13.6 |
| Ma et al., 200537 — White | ALL, 1–14 | Day care first year of life: yes/no | First year of life | 136 | 0.77 | 0.43, 1.40 | 3.8 |
| Ma et al., 200537 — Hispanicc | ALL, 1–14 | Day-care attendance: yes/no | Birth to dx | 120 | 1.09 | 0.62, 1.90 | 4.1 |
| Kamper-Jorgensen et al., 200836 | ALL, 0–15 | Attendance to child care: yes/no | Before age 2 years | 176 | 0.68 | 0.48, 0.95 | 7.9 |
| Combined | | | | 6108 | 0.76 | 0.67, 0.87 | 100.0 |

P-value (heterogeneity) = 0.014.
aPercent weight assigned to each OR in the random effects model.
bSchuz et al.: changed reference to ‘Yes-deficit in social contacts’ by calculating the inverse of the OR provided for ‘No-deficit in social contacts’; Rosenbaum et al.: estimated the OR for ‘>36 months’ by calculating the inverse of the originally provided OR for ‘stayed home’.
cDay-care attendance censored on reference date was used due to the low number of Hispanic subjects attending day care during the first year of life.
dx, diagnosis; Wi, weight.

Figure 1. Forest plot displaying ORs and 95% CIs of studies examining the association between day-care attendance and risk of childhood ALL. The risk estimates are plotted with boxes and the area of each box is inversely proportional to the variance of the estimated effect. The horizontal lines represent the 95% CIs of the risk estimate for each study. The solid vertical line at 1.0 represents a risk estimate of no effect. The dashed vertical line represents the combined risk estimate (OR = 0.76), and the width of the diamond is the 95% CI for this risk estimate (0.67–0.87).

We attempted to maintain a reasonable balance between maximizing the inclusion of studies and minimizing sources of heterogeneity by relaxing the eligibility criteria to include estimates for broader leukaemia subtypes, other social contact measures and unspecified timing of exposure. The influence of possible sources of heterogeneity on the combined risk estimate was then evaluated.
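The rank-correlation check for publication bias reported above (P = 0.553) follows Begg and Mazumdar's idea of correlating the study effect estimates with their variances. A simplified, hypothetical sketch (the published test standardizes the effects and derives a P-value; here only the Kendall correlation itself is shown, on illustrative inputs):

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Kendall correlation: (concordant - discordant) pairs / total pairs.

    Tied pairs count as neither concordant nor discordant.
    """
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

# Illustrative inputs: log-ORs and variances for three hypothetical studies.
log_ors = [math.log(0.49), math.log(0.99), math.log(0.66)]
variances = [0.054, 0.007, 0.007]
tau = kendall_tau(log_ors, variances)
```

A tau near zero suggests no systematic relationship between effect size and study precision, i.e. no obvious small-study (publication) bias.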
In subgroup meta-analyses presented in Table 3 examining the influence of the timing of exposure, the combined OR for seven studies reporting estimates for day-care attendance or social contacts before diagnosis showed a reduced risk of childhood ALL (OR = 0.81, 95% CI: 0.70, 0.94). When the meta-analysis was limited to the nine studies that specifically evaluated day-care attendance at or before age 1 or 2 years, a similarly reduced risk of ALL (OR = 0.79, 95% CI: 0.65, 0.95) was observed.

Table 3. Subgroup meta-analyses of day-care attendance and risk of childhood ALL evaluating the influence of timing of day-care attendance. Each cell gives cases, OR (95% CI) and Wi (%)a.

| Study, year | Day care any time | Day care at age ≤2 |
| Petridou et al., 199323 | 136, 0.67 (0.41, 1.11), 7.7 | 136, 0.28 (0.09, 0.88), 2.3 |
| Roman et al., 199440 | — | — |
| Petridou et al., 199724 | 153, 0.83 (0.51, 1.37), 7.8 | — |
| Schuz et al., 199942,b | — | 921, 0.91 (0.90, 1.30), 15.7 |
| Dockerty et al., 199922 | — | 90, 0.65 (0.36, 1.17), 6.5 |
| Infante-Rivard et al., 200034 | — | 433, 0.49 (0.31, 0.77), 8.8 |
| Neglia et al., 200038 | 1744, 0.96 (0.82, 1.12), 38.0 | 1744, 0.99 (0.84, 1.17), 16.1 |
| Rosenbaum et al., 200041,b | 158, 0.76 (0.70, 2.52), 4.9 | — |
| Chan et al., 200232 | — | 98, 0.96 (0.70, 1.32), 12.0 |
| Perrillat et al., 200239 | 246, 0.60 (0.40, 1.00), 8.9 | — |
| Jourdan-Da Silva et al., 200435 | 387, 0.70 (0.60, 1.00), 22.1 | — |
| Gilham et al., 200533 | — | 1272, 0.66 (0.56, 0.77), 28.8 |
| Ma et al., 200537 — White | 136, 0.75 (0.38, 1.45), 4.5 | 136, 0.77 (0.43, 1.40), 2.1 |
| Ma et al., 200537 — Hispanic | 120, 1.09 (0.62, 1.90), 6.2 | 120, 1.92 (0.89, 4.13), 1.2 |
| Kamper-Jorgensen et al., 200836 | — | 176, 0.68 (0.48, 0.95), 6.3 |
| Combined | 3080, 0.81 (0.70, 0.94), 100.0 | 5126, 0.79 (0.65, 0.95), 100.0 |
| P-value (heterogeneity) | 0.277 | 0.001 |

aPercent weight assigned to each OR in the random effects model. Wi, weight.
bSchuz et al.: changed reference to ‘Yes-deficit in social contacts’ by calculating the inverse of the OR provided for ‘No-deficit in social contacts’; Rosenbaum et al.: estimated the OR for ‘>36 months’ by calculating the inverse of the originally provided OR for ‘stayed home’.

A series of sensitivity analyses were conducted on the meta-analysis of the 14 studies to examine the influence of individual study characteristics on the combined OR, namely, potential biases in the selection of controls, the categorization of leukaemia and the assessment of day-care attendance. Figure 2 presents a summary of these analyses showing that none of these factors was able to completely account for the reduced risk of ALL observed in the main analysis of the 14 studies. For example, in the evaluation of potential control selection bias, reduced risks were observed for the analyses excluding three studies that used hospital-based controls (OR = 0.78, 95% CI: 0.68, 0.90) and excluding two studies that used random digit dialing (RDD) to select controls (OR = 0.72, 95% CI: 0.63, 0.81). Similarly reduced combined ORs were observed when excluding studies that included infants (<1 year of age) in the study population (OR = 0.81, 95% CI: 0.70, 0.94), studies not specifically examining ALL (OR = 0.74, 95% CI: 0.63, 0.87), and studies that did not define the exposure strictly as attendance at a day care or a similar type of setting (OR = 0.74, 95% CI: 0.61, 0.88).

Figure 2. Plot showing results of sensitivity meta-analyses evaluating the influence of potential biases within individual studies on combined risk estimates. RDD, random digit dialing.
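The leave-one-out logic behind such sensitivity checks can be sketched as follows: re-pool the estimate with each study removed and confirm that the reduced risk persists. This hedged illustration uses simple inverse-variance (fixed-effect) pooling on five of the Table 2 estimates; it is not the exact analysis performed in the paper.

```python
import math

def fixed_effect_pool(ors, ci_los, ci_his):
    """Inverse-variance (fixed-effect) pooled OR from ORs and 95% CIs."""
    y = [math.log(o) for o in ors]
    # Weight = 1/SE^2, with SE recovered from the 95% CI width on log scale.
    w = [(2 * 1.96 / (math.log(hi) - math.log(lo))) ** 2
         for lo, hi in zip(ci_los, ci_his)]
    return math.exp(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))

# Five study-level estimates from Table 2, for illustration.
ors = [0.49, 0.99, 0.66, 0.70, 0.68]
los = [0.31, 0.84, 0.56, 0.60, 0.48]
his = [0.77, 1.17, 0.77, 1.00, 0.95]

# Re-pool with each study left out in turn.
leave_one_out = [
    fixed_effect_pool([o for j, o in enumerate(ors) if j != i],
                      [l for j, l in enumerate(los) if j != i],
                      [h for j, h in enumerate(his) if j != i])
    for i in range(len(ors))
]
```

If every leave-one-out pooled OR stays below 1, no single study is driving the apparent protective association.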
Table 4 presents the results of the meta-analyses evaluating the association between childhood c-ALL and day-care attendance. The analysis of c-ALL included fewer studies than the analysis of ALL. Similar to the result from the meta-analysis of ALL, the combined OR for the seven studies of c-ALL was also <1, although the CI was slightly wider (OR = 0.83, 95% CI: 0.70, 0.98). The subgroup analyses among studies of day-care attendance before age 1 or 2 years and c-ALL generated results similar to those for ALL (data not shown). No evidence of publication bias was observed for these analyses.

Table 4. Meta-analysis of studies examining the association between day-care attendance and risk of c-ALL

Study, year | Age | Day-care definition | Timing | Cases | OR (95% CI) | Wi (%)(a)
Schuz et al., 1999(42) | 1.5–14 | Deficit in social contacts: Yes/no | Before age 2 years | 658 | 1.00 (0.80, 1.20) | 19.9
Neglia et al., 2000(38) | 2–5 | Day care before age 2 years: Yes/no | Before age 2 years | 633 | 1.05 (0.80, 1.37) | 16.3
Chan et al., 2002(32) | 2–14 | Index and family day care: 3-level | First year of life | 66 | 0.93 (0.63, 1.36) | 11.4
Jourdan-Da Silva, 2004(35) | 1–14 | Day-care attendance: Yes/no | Birth to dx | 304 | 0.80 (0.60, 1.00) | 17.0
Gilham et al., 2005(33) | 2–5 | Social activity: Any/none | First year of life | 791 | 0.67 (0.55, 0.82) | 20.1
Ma et al., 2005, White(37) | 2–5 | Day care first year of life: Yes/no | First year of life | 74 | 0.49 (0.19, 1.26) | 2.8
Ma et al., 2005, Hispanic(37,b) | 2–5 | Day-care attendance: Yes/no | Birth to dx | 71 | 0.91 (0.41, 2.05) | 3.8
Kamper-Jorgensen et al., 2008(36) | 0–15 | Attendance to child care: Yes/no | Before age 2 years | 101 | 0.58 (0.36, 0.93) | 8.7
Combined | | | | 2698 | 0.83 (0.70, 0.98) | 100.0
P-value (heterogeneity) = 0.044

(a) Percent weight assigned to each OR in the random effects model.
(b) Day-care attendance censored on reference date was used due to the low number of Hispanic subjects attending day care during the first year of life.
dx, diagnosis; Wi, weight.
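The combined ORs and heterogeneity P-values in Tables 3 and 4 come from a random-effects model that folds both within-study and between-study variation into the summary estimate, with heterogeneity assessed by the Q statistic. The following is a self-contained sketch of the standard DerSimonian–Laird calculation (our implementation, not the authors' STATA code), recovering within-study variances from the reported 95% CIs; it assumes two or more studies:

```python
import math

def dersimonian_laird(estimates):
    """Random-effects (DerSimonian-Laird) pooled OR from (OR, lo, hi) tuples.

    Returns the pooled OR, its 95% CI, Cochran's Q (heterogeneity
    statistic) and the between-study variance tau^2.
    """
    y = [math.log(o) for o, lo, hi in estimates]               # log-ORs
    v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2     # within-study var
         for o, lo, hi in estimates]
    w = [1.0 / vi for vi in v]                                 # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))     # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                    # between-study var
    w_re = [1.0 / (vi + tau2) for vi in v]                     # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(mu), math.exp(mu - 1.96 * se),
            math.exp(mu + 1.96 * se), q, tau2)
```

The per-study weights Wi reported in the tables correspond to the normalized random-effects weights w_re.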
[ "Identification of studies", "Inclusion/exclusion criteria and definitions", "Data extraction and statistical approach", "Funding" ]
[ "Literature searches were conducted in PubMed to identify original research and review articles related to childhood leukaemia and day-care attendance and/or social contacts published between January 1966 and October 2008. The searches were conducted using the term ‘childhood leukaemia’ in combination with other terms including ‘infection’, ‘child care’, ‘day care’ and ‘social contact’. In addition, the bibliographies of epidemiology publications on childhood leukaemia and infections were searched to identify studies that may not have been captured through the initial database search. This included the review published in 2004 by McNally and Eden on the infectious aetiology of childhood acute leukaemia (AL).2", "Among the studies identified, inclusion in the meta-analysis was limited to observational studies of case–control or cohort design of any size, geographic location and race/ethnicity of study participants. When more than one publication from an individual study was available, either the most recent publication or the publication that performed the analysis most applicable to evaluating the ‘delayed infection’ hypothesis was selected. Studies needed to have reported a relative risk (RR) or odds ratio (OR) and confidence intervals (CIs), or original data by disease status from which a measure of effect could be calculated. The outcome of interest was defined as clinically diagnosed leukaemia in children between the ages of 0 and 19 years. In the very few studies that did not distinguish between specific leukaemia subtypes,22–26 it was assumed that ALL was the primary subtype since it accounts for the majority (∼80%) of leukaemia diagnoses in children.27\nThe exposure of interest was generally referred to as ‘day-care attendance’, which, in addition to formal day care, may have included preschool, nursery school, play groups, mother–toddler groups and other early social contacts. 
A strict criterion for the meaning of ‘regular attendance’ was not defined a priori since it was assumed that this would vary between studies. Of the primary studies identified, four were excluded for various reasons, including study emphasis on evaluating leukaemia prognosis and outcome,28 an earlier analysis of data from a study for which a more complete and recent publication is available,29 and not reporting a risk estimate for day-care attendance.30,31 After the exclusions, a total of 14 studies, all case–control in design, were retained for the meta-analysis.22–24,32–42", "For most studies,23,24,34–40 the ORs and 95% CIs for leukaemia, AL, ALL or c-ALL among those who attended day care compared with those who did not attend day care were extracted. Among the few studies that did not provide this estimate, the OR for a similar measure was extracted, including those for no deficit in social contacts,42 regular contact outside the home,22 >36 months duration of day-care attendance,41 increasing index and family day-care measure,32 and social activity.33 In two instances, the reported OR was recalculated to reflect the risk associated with the highest level of day-care attendance and/or social activity measure compared with the lowest.41,42 Furthermore, several studies reported risk estimates for stratified analyses by specific subtype of leukaemia,22,32,33,35–38,40–42 age at diagnosis,34,35,38 specific age of day-care attendance,22,33,32–38,40 or race/ethnicity;37 multiple estimates were extracted from these studies for the purposes of subgroup and sensitivity evaluations in the meta-analysis, including specific leukaemia subtypes, particularly ALL and c-ALL and timing of day-care attendance. In general, studies referred to the common precursor B-cell ALL subtype (CD10 and CD19 positive ALL) as c-ALL. 
Four studies defined c-ALL with an added criterion that specified an age range between 2 and 5 years.33,37,38,43 Risk estimates by specific diagnosis age groups were not extracted since there were only a few studies that provided this information and the age cut-points varied. For the one study that stratified by race/ethnicity,37 two separate risk estimates were included in the meta-analysis since the reported estimates were based on independent populations.\nThe between-study heterogeneity was assessed using the Q statistic, which tests the null hypothesis that the estimated effect is homogenous across all studies.44 Acknowledging that the eligible studies have been conducted independently and may represent only a random sample of the distribution of all possible effect sizes for this association, the random effects model was utilized, which incorporates an estimate of both between-study and within-study variation into the calculation of the summary effect measure.45 Compared with the fixed effects model,46 this method is more conservative and generally results in a wider CI. Finally, publication bias was evaluated visually using the funnel graph method that displays the distribution of all included studies by their point estimates and standard errors.47 In addition, the Begg and Mazumdar adjusted rank correlation test was used to test for correlation between the effect estimates and their variances which, if present, provides an indication of publication bias.48\nThe association with c-ALL was evaluated with a meta-analysis of 7 of the 14 studies.32,33,35–38,42 If a study reported multiple ORs and 95% CIs by timing of day-care attendance, the risk estimate associated with the earliest timing (e.g. 
age ≤ 2 years) was used to be consistent with the ‘delayed infection’ hypothesis.23,32,34,37,38 The effect of the timing of exposure was evaluated in subgroup meta-analyses of studies reporting risk estimates for early day-care attendance (age ≤ 2 years)22,23,32–34,36–38,42 and studies reporting risk estimates for day-care attendance anytime before diagnosis.23,24,35,37–39,41 Finally, a series of sensitivity analyses were conducted to evaluate the sources of study heterogeneity, namely, the influences of potential selection bias, and heterogeneity in disease classification and exposure definition. The analyses were conducted using the statistical software, STATA Version 9.49", "Grants from the US National Institute of Environmental Health Sciences [grant numbers PS42 ES04705, R01 ES09137] and the Children with Leukaemia Foundation, UK. Funding to pay the Open Access publication charges for this article was provided by the US National Institute of Environmental Health Sciences [grant numbers PS42 ES04705, R01 ES09137]." ]
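The Begg and Mazumdar procedure cited in the statistical approach tests for publication bias via rank correlation between the standardized effect estimates and their variances. A rough sketch of the core computation follows (our simplified implementation: Kendall's tau without tie handling or the normal approximation used for the P-value):

```python
import math

def begg_mazumdar_tau(estimates):
    """Kendall's tau between variance-standardized log-ORs and their
    variances, after Begg & Mazumdar; estimates are (OR, lo, hi) tuples.
    A strong positive tau hints at small-study (publication) bias.
    """
    y = [math.log(o) for o, lo, hi in estimates]
    v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2
         for o, lo, hi in estimates]
    w = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)   # pooled log-OR
    # Standardize each deviation by the variance of (y_i - ybar).
    t = [(yi - ybar) / math.sqrt(vi - 1.0 / sum(w))
         for yi, vi in zip(y, v)]
    concordant = discordant = 0
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            s = (t[i] - t[j]) * (v[i] - v[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```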
[ null, null, null, null ]
[ "Introduction", "Methods", "Identification of studies", "Inclusion/exclusion criteria and definitions", "Data extraction and statistical approach", "Results", "Discussion", "Funding" ]
[ "Evidence is growing in support of a role for infections in the aetiology of childhood leukaemia, particularly for the most common subtype, acute lymphoblastic leukaemia (ALL).1–3 Two infection-related hypotheses have gained popularity and are currently supported by substantial, yet inconsistent, epidemiologic findings. Kinlen first proposed the ‘population mixing’ hypothesis in response to the observed childhood leukaemia clusters occurring in the early 1980s in Seascale and Thurso, two remote and isolated communities in the UK that experienced a rapid influx of professional workers.4 He proposed that childhood leukaemia may result from an abnormal immune response to specific, although unidentified, infections commonly seen with the influx of infected persons into an area previously populated with non-immune and susceptible individuals. This hypothesis suggests a mechanism that involves a direct pathological role of specific infectious agents, presumably viruses, in the development of childhood leukaemia and that an immunizing effect may be acquired through previous exposure. 
Supportive data include several subsequent studies conducted by Kinlen and others examining similar examples of population mixing including rural new towns, situations of wartime population change and other circumstances contributing to unusual patterns of personal contact.4–11 Currently, there is no molecular evidence implicating cell transformation by a specific virus.12\nThe ‘delayed infection’ hypothesis proposed by Greaves emphasizes the critical nature of the timing of exposure and is intended to apply mostly to common B-cell precursor ALL (c-ALL), which largely accounts for the observed peak incidence of ALL between 2 and 5 years of age in developed countries.13,14 He described a role for infections in the context of a ‘two-hit’ model of the natural history of c-ALL,15 where the first ‘hit’ or initiating genetic event occurs in utero during fetal haematopoiesis producing a clinically covert pre-leukemic clone. The transition to overt disease occurs, in a small fraction (∼1%) of pre-leukaemia carriers, after a sufficient postnatal secondary genetic event, which may be caused by a proliferative stress-induced effect of common infections on the developing immune system of the child.1,13 This adverse immune response to infections is thought to be the result of insufficient priming of the immune system usually influenced by a delay in exposure to common infectious agents during early childhood. 
With the assumption that improved socio-economic conditions may lead to delay in exposure to infections, the Greaves hypothesis provides one plausible explanation for the notably higher incidence rates of ALL with its characteristic peak age between 2 and 5 years observed only in more socio-economically developed countries.16,17 Although different in hypothesized mechanism, both the ‘population mixing’ and ‘delayed infection’ hypotheses propose childhood leukaemia to be caused by an abnormal immune response to infection(s) acquired by personal contacts, and are compatible with available evidence. In some populations, it is possible that both mechanisms may be operating.\nSeveral previous epidemiological studies have used day-care attendance as an indicator of the increased likelihood of early exposure to infections,18 since it is well documented that in developed countries exposures to common infections, particularly those affecting the respiratory and gastrointestinal tracts, occur more frequently in this type of setting.19 The immaturity of children’s immune systems in combination with the lack of appropriate hygienic behaviour is believed to promote the transmission of infectious agents in this social setting.19–21 In the current analysis, we took a meta-analytic approach to summarize the findings to date on the relationship between day-care attendance and risk of childhood ALL.", " Identification of studies Literature searches were conducted in PubMed to identify original research and review articles related to childhood leukaemia and day-care attendance and/or social contacts published between January 1966 and October 2008. The searches were conducted using the term ‘childhood leukaemia’ in combination with other terms including ‘infection’, ‘child care’, ‘day care’ and ‘social contact’. 
In addition, the bibliographies of epidemiology publications on childhood leukaemia and infections were searched to identify studies that may not have been captured through the initial database search. This included the review published in 2004 by McNally and Eden on the infectious aetiology of childhood acute leukaemia (AL).2\n Inclusion/exclusion criteria and definitions Among the studies identified, inclusion in the meta-analysis was limited to observational studies of case–control or cohort design of any size, geographic location and race/ethnicity of study participants. When more than one publication from an individual study was available, either the most recent publication or the publication that performed the analysis most applicable to evaluating the ‘delayed infection’ hypothesis was selected. Studies needed to have reported a relative risk (RR) or odds ratio (OR) and confidence intervals (CIs), or original data by disease status from which a measure of effect could be calculated. The outcome of interest was defined as clinically diagnosed leukaemia in children between the ages of 0 and 19 years. 
In the very few studies that did not distinguish between specific leukaemia subtypes,22–26 it was assumed that ALL was the primary subtype since it accounts for the majority (∼80%) of leukaemia diagnoses in children.27\nThe exposure of interest was generally referred to as ‘day-care attendance’, which, in addition to formal day care, may have included preschool, nursery school, play groups, mother–toddler groups and other early social contacts. A strict criterion for the meaning of ‘regular attendance’ was not defined a priori since it was assumed that this would vary between studies. Of the primary studies identified, four were excluded for various reasons, including study emphasis on evaluating leukaemia prognosis and outcome,28 an earlier analysis of data from a study for which a more complete and recent publication is available,29 and not reporting a risk estimate for day-care attendance.30,31 After the exclusions, a total of 14 studies, all case–control in design, were retained for the meta-analysis.22–24,32–42\n Data extraction and statistical approach For most studies,23,24,34–40 the ORs and 95% CIs for leukaemia, AL, ALL or c-ALL among those who attended day care compared with those who did not attend day care were extracted. 
Among the few studies that did not provide this estimate, the OR for a similar measure was extracted, including those for no deficit in social contacts,42 regular contact outside the home,22 >36 months duration of day-care attendance,41 increasing index and family day-care measure,32 and social activity.33 In two instances, the reported OR was recalculated to reflect the risk associated with the highest level of day-care attendance and/or social activity measure compared with the lowest.41,42 Furthermore, several studies reported risk estimates for stratified analyses by specific subtype of leukaemia,22,32,33,35–38,40–42 age at diagnosis,34,35,38 specific age of day-care attendance,22,33,32–38,40 or race/ethnicity;37 multiple estimates were extracted from these studies for the purposes of subgroup and sensitivity evaluations in the meta-analysis, including specific leukaemia subtypes, particularly ALL and c-ALL and timing of day-care attendance. In general, studies referred to the common precursor B-cell ALL subtype (CD10 and CD19 positive ALL) as c-ALL. Four studies defined c-ALL with an added criterion that specified an age range between 2 and 5 years.33,37,38,43 Risk estimates by specific diagnosis age groups were not extracted since there were only a few studies that provided this information and the age cut-points varied. 
For the one study that stratified by race/ethnicity,37 two separate risk estimates were included in the meta-analysis since the reported estimates were based on independent populations.\nThe between-study heterogeneity was assessed using the Q statistic, which tests the null hypothesis that the estimated effect is homogenous across all studies.44 Acknowledging that the eligible studies have been conducted independently and may represent only a random sample of the distribution of all possible effect sizes for this association, the random effects model was utilized, which incorporates an estimate of both between-study and within-study variation into the calculation of the summary effect measure.45 Compared with the fixed effects model,46 this method is more conservative and generally results in a wider CI. Finally, publication bias was evaluated visually using the funnel graph method that displays the distribution of all included studies by their point estimates and standard errors.47 In addition, the Begg and Mazumdar adjusted rank correlation test was used to test for correlation between the effect estimates and their variances which, if present, provides an indication of publication bias.48\nThe association with c-ALL was evaluated with a meta-analysis of 7 of the 14 studies.32,33,35–38,42 If a study reported multiple ORs and 95% CIs by timing of day-care attendance, the risk estimate associated with the earliest timing (e.g. 
age ≤ 2 years) was used to be consistent with the ‘delayed infection’ hypothesis.23,32,34,37,38 The effect of the timing of exposure was evaluated in subgroup meta-analyses of studies reporting risk estimates for early day-care attendance (age ≤ 2 years)22,23,32–34,36–38,42 and studies reporting risk estimates for day-care attendance anytime before diagnosis.23,24,35,37–39,41 Finally, a series of sensitivity analyses were conducted to evaluate the sources of study heterogeneity, namely, the influences of potential selection bias, and heterogeneity in disease classification and exposure definition. The analyses were conducted using the statistical software, STATA Version 9.49", "Literature searches were conducted in PubMed to identify original research and review articles related to childhood leukaemia and day-care attendance and/or social contacts published between January 1966 and October 2008. The searches were conducted using the term ‘childhood leukaemia’ in combination with other terms including ‘infection’, ‘child care’, ‘day care’ and ‘social contact’. In addition, the bibliographies of epidemiology publications on childhood leukaemia and infections were searched to identify studies that may not have been captured through the initial database search. This included the review published in 2004 by McNally and Eden on the infectious aetiology of childhood acute leukaemia (AL).2", "Among the studies identified, inclusion in the meta-analysis was limited to observational studies of case–control or cohort design of any size, geographic location and race/ethnicity of study participants. When more than one publication from an individual study was available, either the most recent publication or the publication that performed the analysis most applicable to evaluating the ‘delayed infection’ hypothesis was selected. 
Studies needed to have reported a relative risk (RR) or odds ratio (OR) and confidence intervals (CIs), or original data by disease status from which a measure of effect could be calculated. The outcome of interest was defined as clinically diagnosed leukaemia in children between the ages of 0 and 19 years. In the very few studies that did not distinguish between specific leukaemia subtypes,22–26 it was assumed that ALL was the primary subtype since it accounts for the majority (∼80%) of leukaemia diagnoses in children.27\nThe exposure of interest was generally referred to as ‘day-care attendance’, which, in addition to formal day care, may have included preschool, nursery school, play groups, mother–toddler groups and other early social contacts. A strict criterion for the meaning of ‘regular attendance’ was not defined a priori since it was assumed that this would vary between studies. Of the primary studies identified, four were excluded for various reasons, including study emphasis on evaluating leukaemia prognosis and outcome,28 an earlier analysis of data from a study for which a more complete and recent publication is available,29 and not reporting a risk estimate for day-care attendance.30,31 After the exclusions, a total of 14 studies, all case–control in design, were retained for the meta-analysis.22–24,32–42", "For most studies,23,24,34–40 the ORs and 95% CIs for leukaemia, AL, ALL or c-ALL among those who attended day care compared with those who did not attend day care were extracted. 
Among the few studies that did not provide this estimate, the OR for a similar measure was extracted, including those for no deficit in social contacts,42 regular contact outside the home,22 >36 months duration of day-care attendance,41 increasing index and family day-care measure,32 and social activity.33 In two instances, the reported OR was recalculated to reflect the risk associated with the highest level of day-care attendance and/or social activity measure compared with the lowest.41,42 Furthermore, several studies reported risk estimates for stratified analyses by specific subtype of leukaemia,22,32,33,35–38,40–42 age at diagnosis,34,35,38 specific age of day-care attendance,22,33,32–38,40 or race/ethnicity;37 multiple estimates were extracted from these studies for the purposes of subgroup and sensitivity evaluations in the meta-analysis, including specific leukaemia subtypes, particularly ALL and c-ALL and timing of day-care attendance. In general, studies referred to the common precursor B-cell ALL subtype (CD10 and CD19 positive ALL) as c-ALL. Four studies defined c-ALL with an added criterion that specified an age range between 2 and 5 years.33,37,38,43 Risk estimates by specific diagnosis age groups were not extracted since there were only a few studies that provided this information and the age cut-points varied. 
For the one study that stratified by race/ethnicity,37 two separate risk estimates were included in the meta-analysis since the reported estimates were based on independent populations.\nThe between-study heterogeneity was assessed using the Q statistic, which tests the null hypothesis that the estimated effect is homogenous across all studies.44 Acknowledging that the eligible studies have been conducted independently and may represent only a random sample of the distribution of all possible effect sizes for this association, the random effects model was utilized, which incorporates an estimate of both between-study and within-study variation into the calculation of the summary effect measure.45 Compared with the fixed effects model,46 this method is more conservative and generally results in a wider CI. Finally, publication bias was evaluated visually using the funnel graph method that displays the distribution of all included studies by their point estimates and standard errors.47 In addition, the Begg and Mazumdar adjusted rank correlation test was used to test for correlation between the effect estimates and their variances which, if present, provides an indication of publication bias.48\nThe association with c-ALL was evaluated with a meta-analysis of 7 of the 14 studies.32,33,35–38,42 If a study reported multiple ORs and 95% CIs by timing of day-care attendance, the risk estimate associated with the earliest timing (e.g. 
age ≤ 2 years) was used to be consistent with the ‘delayed infection’ hypothesis.23,32,34,37,38 The effect of the timing of exposure was evaluated in subgroup meta-analyses of studies reporting risk estimates for early day-care attendance (age ≤ 2 years)22,23,32–34,36–38,42 and studies reporting risk estimates for day-care attendance anytime before diagnosis.23,24,35,37–39,41 Finally, a series of sensitivity analyses were conducted to evaluate the sources of study heterogeneity, namely, the influences of potential selection bias, and heterogeneity in disease classification and exposure definition. The analyses were conducted using the statistical software, STATA Version 9.49", "Table 1 presents selected characteristics of the 14 studies included in this meta-analysis. The studies, all case–control in design, were published between 1993 and 2008 and were conducted in many different geographic areas. Most studies achieved a population-based ascertainment of cases utilizing a national registry or a regional network of all major paediatric oncology centres. A population-based control selection strategy was most common with the exception of three studies that selected hospital-based controls.23,24,39 Only 1 of the 14 studies utilized a records-based day-care assessment protocol,36 whereas the remaining studies relied on standardized questionnaires administered either in person, by telephone or by mail. All studies have accounted for major confounding factors such as age, sex, race and socio-economic status through a matched study design and/or statistical adjustment in the analysis. 
Of the 14 studies identified, 11 either reported a statistically significant reduced risk associated with day-care attendance and/or social contact measures23,33–37,39 or provided some evidence of a reduced risk.22,24,40,41

Table 1. Select characteristics of studies included in the meta-analyses of day-care attendance and childhood leukaemia

Petridou et al., 1993 (Greece, Attica and Crete; ref. 23)
- Cases: 125 leukaemia (Attica), 11 leukaemia (Crete); age 0–14 years; children’s hospitals of the University of Athens (1987–91) and the University of Crete (1990–92)
- Controls: 187 frequency-matched children who attended the outpatient clinic of the hospitals where the children with leukaemia were treated
- Data collection: telephone interviews with parents of children
- Select results: attendance at crèche (yes/no); at any time, leukaemia: 0.67 (0.41, 1.11); in infancy (for ≥3 months in the first 2 years of life), leukaemia: 0.28 (0.09, 0.88)
- Confounding addressed: frequency match on sex, age, hospital; adjustment for place of residence, social class

Roman et al., 1994 (UK; ref. 40)
- Cases: 38 ALL; age 0–4 years; diagnosed 1972–89; born in west Berkshire or north Hampshire and resident there when diagnosed
- Controls: 112 individually matched children selected from hospital delivery registers
- Data collection: parents were interviewed
- Select results: preschool playgroup (yes/no) in the year before diagnosis (for ≥3 months): ALL 0.6 (0.2, 1.8)
- Confounding addressed: individual match on sex, date of birth, mother’s age, area of residence at birth, time of diagnosis; adjustment not specified

Petridou et al., 1997 (Greece; ref. 24)
- Cases: 153 leukaemia; age 0–14 years; diagnosed 1993–94; nationwide network of paediatric haematologists/oncologists
- Controls: 300 individually matched children hospitalized at the same time as the corresponding case for acute conditions
- Data collection: guardians of all subjects completed an interviewer-administered questionnaire
- Select results: day-care attendance (ever/never): leukaemia 0.83 (0.51, 1.37)
- Confounding addressed: individual match on sex, age, geographic region; adjustment for maternal age at birth, maternal education, sibship size, birth order, persons per room

Schuz et al., 1999 (Germany; ref. 42)
- Cases: 1010 AL (686 c-ALL); diagnosed 1980–94; age 0–14 years; nationwide German Children’s Cancer Registry at the University of Mainz
- Controls: 2588 matched children randomly selected from complete files of local offices of registration of residents
- Data collection: structured questionnaire based on the US Children’s Cancer Group; day-care attendance not directly assessed
- Select results: deficit in social contacts (yes/no; age ≤18 months excluded): AL 1.1 (0.9, 1.3); c-ALL 1.0 (0.8, 1.2)
- Confounding addressed: analysis of AL: individual match on date of birth, sex, district; adjustment for SES. Analysis of c-ALL: adjustment for sex, age, year of birth, study setting, SES, urbanization

Dockerty et al., 1999 (New Zealand; ref. 22)
- Cases: 97 ALL; diagnosed 1990–93; age 0–14 years; New Zealand Cancer Registry, public hospital admission/discharge computer system and the Children’s Cancer Registry; nationwide
- Controls: 97 individually matched children randomly selected from the New Zealand-born and resident childhood population using national birth records; 209 solid cancer cases
- Data collection: mothers interviewed in their homes using a questionnaire adapted from Patricia McKinney and Eve Roman in the UK
- Select results: regular contact with other children from outside home at <12 months (yes/no; age <15 months excluded): ALL 0.65 (0.36, 1.17)
- Confounding addressed: individual match on age and sex; adjustment for sex, age and several others including social class

Infante-Rivard et al., 2000 (Canada; ref. 34)
- Cases: 491 ALL; diagnosed 1980–93; age 0–9 years; tertiary care centre, similar to population-based ascertainment
- Controls: 491 individually matched children chosen from the most complete census of children for the study years
- Data collection: structured questionnaire administered to mothers by phone
- Select results: day-care attendance by age at entry: entry ≤2 years vs none, ALL 0.49 (0.31, 0.77); entry >2 years vs none, ALL 0.67 (0.45, 1.01)
- Confounding addressed: individual match on age, sex, region of residence at diagnosis; adjustment for maternal age, maternal education

Neglia et al., 2000 (USA; ref. 38)
- Cases: 1744 ALL (633 c-ALL; excludes cases <1 year); diagnosed 1989–93; age 0–14 years; Children’s Cancer Group member institutions throughout the USA
- Controls: 1879 individually matched children randomly selected using the random digit dialing (RDD) methodology
- Data collection: structured interview
- Select results: day-care attendance (age <1 year excluded): yes vs no, ALL 0.96 (0.82, 1.12), c-ALL 0.96 (0.75, 1.24); day care before age 2 vs no, ALL 0.99 (0.84, 1.17), c-ALL 1.05 (0.80, 1.37)
- Confounding addressed: individual match on age, race, telephone area code, exchange, sex (T-cell leukaemia only); adjustment for maternal race, education, family income

Rosenbaum et al., 2000 (USA; ref. 41)
- Cases: 255 ALL; diagnosed 1980–91; age 0–14 years; four clinical centers in a 31-county study region; institutional tumour registries and department of paediatric haematology–oncology records
- Controls: 760 frequency-matched children randomly selected through the Live Birth Certificate Registry maintained by the New York State Department of Health
- Data collection: self-administered questionnaire mailed to the parents of subjects
- Select results: total duration of out-of-home care (vs >36 months): ALL: stay home 1.32 (0.70, 2.52); 1–18 months 1.74 (0.89, 3.42); 19–36 months 1.32 (0.70, 2.52)
- Confounding addressed: frequency match on sex, age, race, birth year; adjustment for maternal age, maternal education, birth year, maternal employment, breastfeeding, birth order

Chan et al., 2002 (Hong Kong; ref. 32)
- Cases: 98 AL (66 c-ALL); diagnosed 1994–97; age 2–14 years; Hong Kong Pediatric Hematology and Oncology Study Group
- Controls: 228 children selected using RDD methodology
- Data collection: in-person interview using a structured questionnaire adapted from the UKCCS and translated into Chinese
- Select results: index and family day-care attendance (3-category variable), first year of life: AL 0.96 (0.70, 1.32); child peak 0.63 (0.38, 1.07); c-ALL 0.93 (0.63, 1.36)
- Confounding addressed: no matching; adjustment for age, number of children in household at reference date

Perrillat et al., 2002 (France; ref. 39)
- Cases: 280 AL (240 ALL); diagnosed 1995–99; age 0–15 years; hospitals of Lille, Lyon, Nancy and Paris; cases had to have resided in the hospital catchment area
- Controls: 288 frequency-matched children hospitalized in the same hospital as the cases and residing in the catchment area of the hospital
- Data collection: in-person standardized questionnaire
- Select results: day-care attendance (age <2 years excluded): ever vs never, AL 0.6 (0.4, 1.0); age started vs no day care: >12 months 0.5 (0.3, 1.0), 7–12 months 0.6 (0.2, 1.7), ≤6 months 0.5 (0.3, 1.0)
- Confounding addressed: frequency match on age, sex, hospital, hospital catchment area, ethnic origin; adjustment for age, sex, hospital, ethnic origin, maternal education, parental professional category

Jourdan-Da Silva et al., 2004 (France; ref. 35)
- Cases: 473 AL (408 ALL, 304 c-ALL); diagnosed 1995–98; age 0–14 years; National Registry of Childhood Leukaemia and Lymphoma (NRCL)
- Controls: 567 frequency-matched children randomly selected using age, sex and region quotas from a sample of 30 000 phone numbers representative of the French population on area of residence and municipality size
- Data collection: standardized self-administered questionnaire on mothers
- Select results: day-care attendance (age <1 year excluded): ever vs never, ALL 0.7 (0.6, 1.0), c-ALL 0.8 (0.6, 1.0); started at age <3 months vs never, ALL 0.6 (0.4, 0.8), c-ALL 0.6 (0.4, 0.9)
- Confounding addressed: frequency match on age, sex, region; adjustment for age, sex, region

Gilham et al., 2005 (UK; ref. 33)
- Cases: 1286 ALL (791 c-ALL; excludes cases <2 years); diagnosed 1991–96; age 0–14 years; nationwide ascertainment through paediatric oncology units
- Controls: 6238 individually matched children randomly selected from primary care population registers
- Data collection: in-person interview with parents using a structured questionnaire
- Select results: social activity in the first year of life (age <2 years excluded): any vs no social activity, ALL 0.66 (0.56, 0.77), c-ALL 0.67 (0.55, 0.82); age started vs no day care, ALL: <3 months 0.71 (0.60, 0.85), 3–5 months 0.71 (0.56, 0.90), 6–11 months 0.76 (0.63, 0.92)
- Confounding addressed: individual match on sex, month and year of birth, region of residence at diagnosis; adjustment for age at diagnosis, sex, region, maternal age, mother working at time of birth, deprivation

Ma et al., 2005 (USA; ref. 37)
- Cases: 294 ALL (145 c-ALL; excludes cases <1 year); diagnosed 1995–2002; age 0–14 years; population-based ascertainment from major paediatric clinical centres in Northern and Central California
- Controls: 376 individually matched children randomly selected from statewide birth certificate files maintained by the California Department of Health Services
- Data collection: personal interview with primary caretaker
- Select results: child-hours of exposure at day care (age <1 year excluded), ≥5000 child-hours (1st year) vs 0: ALL: Hispanic 2.10 (0.70, 6.34), White 0.42 (0.18, 0.99); c-ALL: Hispanic 2.53 (0.60, 10.7), White 0.33 (0.11, 1.01)
- Confounding addressed: individual match on date of birth, sex, mother’s race, Hispanic status; adjustment for annual household income, maternal education

Kamper-Jorgensen et al., 2008 (Denmark; ref. 36)
- Cases: 559 ALL (199 c-ALL); diagnosed 1989–2004; age 0–15 years; all cases of childhood leukaemia identified in a cohort of all children in Denmark
- Controls: 5590 individually matched children selected from population registers
- Data collection: records-based data from three population-based registries: the Nordic Society of Paediatric Haematology and Oncology, the Danish Civil Registration System and the Childcare Database
- Select results: child-care attendance during first 2 years of life (yes/no): ALL 0.68 (0.48, 0.95); c-ALL 0.58 (0.36, 0.93)
- Confounding addressed: individual match on date of birth, sex, birth cohort; several demographic characteristics were considered for adjustment but none were major confounders

SES, socioeconomic status; RDD, random digit dialing; UKCCS, United Kingdom Childhood Cancer Study.

As shown in Table 2, the 14 studies included a total of 6108 cases and generated a combined OR estimate indicating that day-care attendance is associated with a reduced risk of childhood ALL (OR = 0.76, 95% CI: 0.67, 0.87). Figure 1 provides a visual portrayal of the relationship between day-care attendance and the risk of childhood ALL. Three large studies conducted in Germany,42 the USA38 and the UK33 appeared to carry a large proportion of the weight in the meta-analysis at ∼13% each. The combined risk estimates excluding each of these studies individually remained similarly reduced, indicating that no one large study was able to completely explain the protective effect observed (data not shown). No remarkable evidence of publication bias was apparent from the funnel plot since the data points for these 14 studies were, in general, randomly distributed around the combined OR estimate (plot not shown). This visual interpretation of the results was confirmed by the large P-value from the rank correlation method (P = 0.553).

Table 2. Meta-analysis of studies examining the association between day-care attendance and risk of childhood ALL

Study, year | Outcome, age in years | Day-care definition | Timing | Cases | OR (95% CI) | Wi (%)(a)
Petridou et al., 1993 (ref. 23) | Leukaemia, 0–14 | Attendance at crèche: yes/no | Before age 2 years | 136 | 0.28 (0.09, 0.88) | 1.2
Roman et al., 1994 (ref. 40) | ALL, 0–4 | Preschool playgroup: yes/no | Year before dx | 38 | 0.60 (0.20, 1.80) | 1.3
Petridou et al., 1997 (ref. 24) | Leukaemia, 0–14 | Day care: ever/never | Birth to dx | 153 | 0.83 (0.51, 1.37) | 5.0
Schuz et al., 1999 (ref. 42)(b) | AL, 1.5–14 | Deficit in social contacts: yes/no | Before age 2 years | 921 | 0.91 (0.90, 1.30) | 12.7
Dockerty et al., 1999 (ref. 22) | ALL, 1.25–14 | Regular contact outside home: yes/no | First year of life | 90 | 0.65 (0.36, 1.17) | 3.8
Infante-Rivard et al., 2000 (ref. 34) | ALL, 0–9 | Day care: entry at ≤2 years/never | At or before age 2 years | 433 | 0.49 (0.31, 0.77) | 5.6
Neglia et al., 2000 (ref. 38) | ALL, 1–14 | Day care before age 2 years: yes/no | Before age 2 years | 1744 | 0.99 (0.84, 1.17) | 13.3
Rosenbaum et al., 2000 (ref. 41)(b) | ALL, 0–14 | Out-of-home care: >36 months/none | Birth to dx | 158 | 0.76 (0.70, 2.52) | 3.3
Chan et al., 2002 (ref. 32) | AL, 2–14 | Index and family day care: 3-level | First year of life | 98 | 0.96 (0.70, 1.32) | 8.5
Perrillat et al., 2002 (ref. 39) | AL, 2–15 | Day-care attendance: yes/no | Birth to dx | 246 | 0.60 (0.40, 1.00) | 5.5
Jourdan-Da Silva et al., 2004 (ref. 35) | ALL, 1–14 | Day-care attendance: yes/no | Birth to dx | 387 | 0.70 (0.60, 1.00) | 10.3
Gilham et al., 2005 (ref. 33) | ALL, 2–14 | Social activity: any/none | First year of life | 1272 | 0.66 (0.56, 0.77) | 13.6
Ma et al., 2005 (ref. 37), White | ALL, 1–14 | Day care first year of life: yes/no | First year of life | 136 | 0.77 (0.43, 1.40) | 3.8
Ma et al., 2005 (ref. 37), Hispanic(c) | ALL, 1–14 | Day-care attendance: yes/no | Birth to dx | 120 | 1.09 (0.62, 1.90) | 4.1
Kamper-Jorgensen et al., 2008 (ref. 36) | ALL, 0–15 | Attendance to child care: yes/no | Before age 2 years | 176 | 0.68 (0.48, 0.95) | 7.9
Combined | | | | 6108 | 0.76 (0.67, 0.87) | 100.0
P-value (heterogeneity) = 0.014

(a) Percent weight assigned to each OR in the random effects model.
(b) Schuz et al.: changed reference to ‘Yes-deficit in social contacts’ by calculating the inverse of the OR provided for ‘No-deficit in social contacts’; Rosenbaum et al.: estimated the OR for ‘>36 months’ by calculating the inverse of the originally provided OR for ‘stayed home’.
(c) Day-care attendance censored on reference date was used due to the low number of Hispanic subjects attending day care during the first year of life.
dx, diagnosis; Wi, weight.

Figure 1. Forest plot displaying ORs and 95% CIs of studies examining the association between day-care attendance and risk of childhood ALL. The risk estimates are plotted with boxes and the area of each box is inversely proportional to the variance of the estimated effect. The horizontal lines represent the 95% CIs of the risk estimate for each study. The solid vertical line at 1.0 represents a risk estimate of no effect. The dashed vertical line represents the combined risk estimate (OR = 0.76), and the width of the diamond is the 95% CI for this risk estimate (0.67–0.87).

We attempted to maintain a reasonable balance between maximizing the inclusion of studies and minimizing sources of heterogeneity by relaxing the eligibility criteria to include estimates for broader leukaemia subtypes, other social contact measures and unspecified timing of exposure. The influence of possible sources of heterogeneity on the combined risk estimate was evaluated.
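The Begg and Mazumdar adjusted rank correlation check reported above can be sketched with Kendall's tau. This is a simplified version that correlates effect estimates with their variances and uses a large-sample normal approximation for the P-value (ties are ignored); the inputs are illustrative log-ORs and variances, not the study data.

```python
import math

def kendall_tau(x, y):
    """Kendall rank correlation with a normal approximation for the P-value.

    Simplified sketch: ties are ignored, and the variance of tau under the
    null of no correlation uses the standard large-sample formula."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    tau = (concordant - discordant) / (n * (n - 1) / 2)
    # z = tau / sqrt(2(2n+5) / (9n(n-1))) under the null hypothesis.
    z = 3 * tau * math.sqrt(n * (n - 1)) / math.sqrt(2 * (2 * n + 5))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return tau, p

# Illustrative log-ORs and their variances (se^2); not the paper's data.
log_or = [-1.27, -0.51, -0.19, -0.01, -0.42]
var = [0.34, 0.31, 0.064, 0.0071, 0.0066]
tau, p = kendall_tau(log_or, var)
```

A large P-value, as in the analysis above (P = 0.553), gives no indication of a correlation between effect size and study precision, i.e. no evidence of publication bias.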
In subgroup meta-analyses presented in Table 3 examining the influence of the timing of exposure, the combined OR for the seven studies reporting estimates for day-care attendance or social contacts anytime before diagnosis showed a reduced risk of childhood ALL (OR = 0.81, 95% CI: 0.70, 0.94). When the meta-analysis was limited to the nine studies that specifically evaluated day-care attendance at or before age 1 or 2 years, a similarly reduced risk of ALL (OR = 0.79, 95% CI: 0.65, 0.95) was observed.

Table 3. Subgroup meta-analyses of day-care attendance and risk of childhood ALL evaluating the influence of timing of day-care attendance

Day care any time before diagnosis:
Study, year | Cases | OR (95% CI) | Wi (%)(a)
Petridou et al., 1993 (ref. 23) | 136 | 0.67 (0.41, 1.11) | 7.7
Petridou et al., 1997 (ref. 24) | 153 | 0.83 (0.51, 1.37) | 7.8
Neglia et al., 2000 (ref. 38) | 1744 | 0.96 (0.82, 1.12) | 38.0
Rosenbaum et al., 2000 (ref. 41)(b) | 158 | 0.76 (0.70, 2.52) | 4.9
Perrillat et al., 2002 (ref. 39) | 246 | 0.60 (0.40, 1.00) | 8.9
Jourdan-Da Silva et al., 2004 (ref. 35) | 387 | 0.70 (0.60, 1.00) | 22.1
Ma et al., 2005 (ref. 37), White | 136 | 0.75 (0.38, 1.45) | 4.5
Ma et al., 2005 (ref. 37), Hispanic | 120 | 1.09 (0.62, 1.90) | 6.2
Combined | 3080 | 0.81 (0.70, 0.94) | 100.0
P-value (heterogeneity) = 0.277

Day care at age ≤2 years:
Study, year | Cases | OR (95% CI) | Wi (%)(a)
Petridou et al., 1993 (ref. 23) | 136 | 0.28 (0.09, 0.88) | 2.3
Schuz et al., 1999 (ref. 42)(b) | 921 | 0.91 (0.90, 1.30) | 15.7
Dockerty et al., 1999 (ref. 22) | 90 | 0.65 (0.36, 1.17) | 6.5
Infante-Rivard et al., 2000 (ref. 34) | 433 | 0.49 (0.31, 0.77) | 8.8
Neglia et al., 2000 (ref. 38) | 1744 | 0.99 (0.84, 1.17) | 16.1
Chan et al., 2002 (ref. 32) | 98 | 0.96 (0.70, 1.32) | 12.0
Gilham et al., 2005 (ref. 33) | 1272 | 0.66 (0.56, 0.77) | 28.8
Ma et al., 2005 (ref. 37), White | 136 | 0.77 (0.43, 1.40) | 2.1
Ma et al., 2005 (ref. 37), Hispanic | 120 | 1.92 (0.89, 4.13) | 1.2
Kamper-Jorgensen et al., 2008 (ref. 36) | 176 | 0.68 (0.48, 0.95) | 6.3
Combined | 5126 | 0.79 (0.65, 0.95) | 100.0
P-value (heterogeneity) = 0.001

Roman et al., 1994 (ref. 40) contributed an estimate to neither subgroup.

(a) Percent weight assigned to each OR in the random effects model. Wi, weight.
(b) Schuz et al.: changed reference to ‘Yes-deficit in social contacts’ by calculating the inverse of the OR provided for ‘No-deficit in social contacts’; Rosenbaum et al.: estimated the OR for ‘>36 months’ by calculating the inverse of the originally provided OR for ‘stayed home’.

A series of sensitivity analyses were conducted on the meta-analysis of the 14 studies to examine the influence of individual study characteristics on the combined OR, namely, potential biases in the selection of controls, the categorization of leukaemia and the assessment of day-care attendance. Figure 2 presents a summary of these analyses, showing that none of these factors was able to completely account for the reduced risk of ALL observed in the main analysis of the 14 studies. For example, in the evaluation of potential control selection bias, reduced risks were observed for the analyses excluding the three studies that used hospital-based controls (OR = 0.78, 95% CI: 0.68, 0.90) and excluding the two studies that used random digit dialing (RDD) to select controls (OR = 0.72, 95% CI: 0.63, 0.81).
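The reference-category switches described in the table footnotes (Schuz et al.; Rosenbaum et al.) amount to taking reciprocals on the OR scale, with the CI bounds swapping when inverted. A minimal sketch, using the stay-home estimate of 1.32 (0.70, 2.52) reported by Rosenbaum et al. in Table 1:

```python
def invert_or(or_point, ci_low, ci_high):
    """Switch the reference category of an odds ratio by taking
    reciprocals; note the CI bounds swap when inverted."""
    return 1.0 / or_point, 1.0 / ci_high, 1.0 / ci_low

# An OR of 1.32 (0.70, 2.52) for 'stayed home' becomes the OR for the
# opposite contrast (more out-of-home care vs stayed home).
or_, lo, hi = invert_or(1.32, 0.70, 2.52)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 0.76 0.4 1.43
```

The inverted point estimate of 0.76 matches the Rosenbaum et al. value carried into the meta-analysis.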
Similarly reduced combined ORs were observed when excluding studies that included infants (<1 year of age) in the study population (OR = 0.81, 95% CI: 0.70, 0.94), studies not specifically examining ALL (OR = 0.74, 95% CI: 0.63, 0.87) and studies that did not define the exposure strictly as attendance at a day care or a similar type of setting (OR = 0.74, 95% CI: 0.61, 0.88).

Figure 2. Plot showing results of sensitivity meta-analyses evaluating the influence of potential biases within individual studies on combined risk estimates. RDD, random digit dialing.

Table 4 presents the results of the meta-analyses evaluating the association between childhood c-ALL and day-care attendance. The analysis of c-ALL contained fewer studies than the analysis of ALL. Similar to the result from the meta-analysis of ALL, the combined OR for the seven studies of c-ALL was also <1, although the CI was slightly wider (OR = 0.83, 95% CI: 0.70, 0.98). The subgroup analyses among studies of day-care attendance before age 1 or 2 years and c-ALL generated results similar to those for ALL (data not shown). No evidence of publication bias was observed for these analyses.

Table 4. Meta-analysis of studies examining the association between day-care attendance and risk of c-ALL

Study, year | Age in years | Day-care definition | Timing | Cases | OR (95% CI) | Wi (%)(a)
Schuz et al., 1999 (ref. 42) | 1.5–14 | Deficit in social contacts: yes/no | Before age 2 years | 658 | 1.00 (0.80, 1.20) | 19.9
Neglia et al., 2000 (ref. 38) | 2–5 | Day care before age 2 years: yes/no | Before age 2 years | 633 | 1.05 (0.80, 1.37) | 16.3
Chan et al., 2002 (ref. 32) | 2–14 | Index and family day care: 3-level | First year of life | 66 | 0.93 (0.63, 1.36) | 11.4
Jourdan-Da Silva et al., 2004 (ref. 35) | 1–14 | Day-care attendance: yes/no | Birth to dx | 304 | 0.80 (0.60, 1.00) | 17.0
Gilham et al., 2005 (ref. 33) | 2–5 | Social activity: any/none | First year of life | 791 | 0.67 (0.55, 0.82) | 20.1
Ma et al., 2005 (ref. 37), White | 2–5 | Day care first year of life: yes/no | First year of life | 74 | 0.49 (0.19, 1.26) | 2.8
Ma et al., 2005 (ref. 37), Hispanic(b) | 2–5 | Day-care attendance: yes/no | Birth to dx | 71 | 0.91 (0.41, 2.05) | 3.8
Kamper-Jorgensen et al., 2008 (ref. 36) | 0–15 | Attendance to child care: yes/no | Before age 2 years | 101 | 0.58 (0.36, 0.93) | 8.7
Combined | | | | 2698 | 0.83 (0.70, 0.98) | 100.0
P-value (heterogeneity) = 0.044

(a) Percent weight assigned to each OR in the random effects model.
(b) Day-care attendance censored on reference date was used due to the low number of Hispanic subjects attending day care during the first year of life.
dx, diagnosis; Wi, weight.

The evidence from a large and growing body of literature on exposure to infectious agents, as measured by day-care attendance, and the risk of childhood leukaemia was systematically evaluated using a meta-analytic approach. Heterogeneity between epidemiologic studies and their results is common and constitutes one of the major challenges in such a synthesis.
Although the random effects model was used in this analysis to account for some of the between-study variation, we acknowledge the importance of interpreting results together with a thorough consideration of the potential sources of heterogeneity.\nAll the studies included in this analysis were conducted with the a priori objective of testing the biologically plausible, ‘delayed infection’ hypothesis, which specifies a predicted direction of risk, timing of the exposure and the most applicable subtype of leukaemia. Overall, the studies show consistency in support of a reduced risk associated with day-care attendance or social contacts during early childhood, with the vast majority of studies either reporting an effect in the hypothesized direction or no association. A quantitative assessment using meta-analysis indicates that day-care attendance is associated with a reduced risk of childhood ALL, as well as c-ALL. The reduction in risk persisted despite a thorough consideration of potential sources of study heterogeneity. We did not conduct a meta-analysis specifically in non-c-ALL or acute myeloid leukaemia due to the limited number of studies reporting results for these associations. Of the four studies that present data for non-c-ALL,35–38 three studies showed reduced ORs,35–37 but lacked precision. Based on currently available data, it is difficult to determine whether the association applies to a specific subtype of ALL only or ALL in general.\nThe subgroup meta-analysis by timing of day-care attendance did not suggest a stronger reduction in risk for day care specifically at or before age 1 or 2 years as might have been expected based on the hypothesis. 
However, a few individual studies have shown that the strongest reduction in risk occurs when day-care attendance is started before 6 months of age.33,35,39 Although not formally evaluated in this meta-analysis, several individual studies that used detailed exposure assessment protocols demonstrated evidence of dose–response effects. Strong trends were observed for increasing levels of child-hours of day-care attendance,37 increasing levels of social activity33 and younger age at start of day care.35

We were not able to conduct a comparable meta-analysis of studies pertaining to the related mechanism of rural ‘population mixing’ and the risk of childhood leukaemia. Although it was not possible to analyse the role of ‘population mixing’ in the same manner as was done for the ‘delayed infection’ hypothesis, it is recognized that these two processes may be interrelated or occur simultaneously and that both mechanisms may be operating in a given population. Thus, the results observed for the analyses of studies providing data relevant to the timing of infection in early life cannot be interpreted as ‘ruling out’ the possible role of ‘population mixing’, but rather lend further support to the role of immune-related processes in the aetiology of ALL.

One major consideration in the evaluation of study validity is the possibility of selection bias, a type of systematic error that occurs when there is differential selection of either the cases or controls on the basis of characteristics which may affect exposure status. One way this may arise is if cases and controls do not originate from the same source population. A population-based ascertainment of cases is considered favourable since a defined source population, from which controls may be selected, is easily identifiable. Other strategies of case ascertainment may be appropriate as well, as long as the source population can be clearly defined.
As implemented in three of the included studies, selection of controls from the inpatient population of the same hospital where the case was diagnosed can fulfill this requirement, but can introduce bias if the illnesses/conditions of the control group are related to the exposure under study. Also, it has been suggested that the use of RDD, a population-based method of control recruitment, may result in a control group biased with respect to certain population characteristics that may be associated with exposures of interest.50 Analyses excluding the three studies that selected hospital-based controls23,24,39 or the two studies that used RDD to recruit population-based controls32,38 produced results similar to those for the full set of studies.

Similar types of systematic biases resulting in socio-economic differences between cases and controls have been implicated in other studies as well, including the large United Kingdom Childhood Cancer Study (UKCCS)33 and the Northern California Childhood Leukemia Study (NCCLS).37,51 Adjustments for these differences have been implemented in the analyses; however, the possibility of residual effects cannot be ruled out. To alleviate some of this concern, results of a subgroup analysis conducted in the NCCLS among matched cases and controls who had the same annual household income showed that the pattern of association with day-care attendance persisted.37

The potential for information bias in case–control studies is of particular importance due to the retrospective nature of data collection, and the recall of past exposures may be influenced by disease status. Most studies collected exposure data based on respondent recall using a standardized questionnaire administered either in person, by telephone or by mail.
Recall bias in the evaluation of c-ALL is expected to be less likely, since diagnoses of c-ALL are usually made between ages 2 and 5 years, and recall of early exposure histories may be easier for the primary caregiver. Although the influence of recall bias could not be formally evaluated in these meta-analyses, a records-based day-care study conducted by Kamper-Jorgensen et al. in Denmark reported a reduced risk of childhood ALL associated with childcare attendance during the first 2 years of life.36 Several subtype-specific analyses performed in that study showed the strongest association for B-cell precursor ALL and c-ALL.

In addition to potential biases associated with the ability of respondents to accurately recall past events, studies varied in the extent of exposure assessment and the categorization of individual exposures to infectious agents. For example, Schuz et al. reported results from a matched case–control study conducted in Germany that used a ‘deficit in social contacts’ variable based on the assumption that children were likely to have attended day care if both parents were in full-time work during the first 2 years of the child’s life.42 The assumption made in the formulation of this social contact variable most likely introduced some non-differential misclassification, which tends to bias findings towards the null. Their analysis did not indicate an association between a deficit in social contacts and AL or c-ALL.

In contrast, in the UKCCS, Gilham et al. created a hierarchical variable reflecting a child’s overall social activity based on interview data incorporating the frequency of regular activity with children outside the home, the frequency of attendance at a day nursery or nursery school, and the number of other children in attendance.33 These analyses indicated that social activity/day-care attendance is associated with a reduced risk of childhood ALL.
Ma et al., in the first publication on day-care attendance from the NCCLS, constructed a ‘child-hours of exposure’ variable incorporating the number of months attending a day care, the mean hours per week at that day care and the number of children the child was exposed to at that day care. They reported that children with more total child-hours of exposure had a reduced risk of ALL.29 These results were later confirmed in a follow-up analysis using a larger study population.37 Among non-Hispanic White children, those in the highest category of child-hours during infancy had a reduced risk of ALL and c-ALL compared with children who did not attend day care, with strong evidence of a dose–response effect. This association was not observed in Hispanic children, who, as noted by the authors, had different socio-economic and demographic characteristics, including larger family size and different day-care utilization patterns. Although these types of refined exposure assessment strategies that account for duration, frequency and size of the day-care facility serve as examples for future studies, results from these analyses may have contributed to study heterogeneity.
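The exact arithmetic of the NCCLS ‘child-hours of exposure’ index is not given in the text above, but the three ingredients it names (months attended, mean hours per week, number of children) suggest a simple multiplicative index. The formula and the weeks-per-month constant below are therefore assumptions for illustration only, not the published NCCLS definition:

```python
def child_hours(months_attended, hours_per_week, n_children,
                weeks_per_month=4.33):
    """Hypothetical reconstruction of a 'child-hours of exposure' index:
    total hours spent in day care scaled by the number of other children
    present. Illustrative only; the published formula may differ."""
    return months_attended * weeks_per_month * hours_per_week * n_children
```

Under this sketch, a child attending 20 hours per week for 12 months in a group of 5 other children accrues roughly five times the child-hours of an otherwise identical child in a group of 1, which is the kind of gradient the dose–response analyses above exploit.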
In a meta-analysis of 10 studies that strictly defined the exposure as attendance at a day care or other similar setting,23,24,34–41 a reduced risk estimate was observed.

Current evidence suggests that different subtypes of leukaemia, defined by both immunophenotypic and molecular characteristics, may be associated with distinct aetiological mechanisms.52,53 To minimize the bias associated with misclassification of phenotype, most studies specifically evaluating the infectious hypothesis have reported results by leukaemia subtype, such as c-ALL, and have excluded infants, since there is evidence suggesting that infant leukaemias may be associated with a causal mechanism involving transplacental chemical carcinogenesis.54–56 As observed in the sensitivity analysis, this is not expected to be a major source of error, since infant leukaemias comprise only a very small proportion of all leukaemia diagnoses (<5%).57 It is believed that the hypothesis on infections, particularly the ‘delayed infection’ hypothesis, is most relevant to ALL and its most common subtype, c-ALL.1 Limiting the meta-analysis to only those studies providing risk estimates for specific subtypes resulted in a reduced risk associated with both ALL22,33–38,40,41 and c-ALL.32,33,35–38,42

The UKCCS recently published results from the first records-based study examining the relationship between clinically diagnosed infections in the first year of life and childhood ALL.43,58 Contrary to what is expected under the ‘delayed infection’ hypothesis and what was observed in this meta-analysis of day-care attendance, this well-designed records-based study showed evidence of an increased risk of childhood ALL and c-ALL associated with clinically diagnosed infections in the first year of life. It is possible that these contrasting results reflect one of many mechanisms involved in the aetiology of childhood ALL.
The authors suggest that their findings may indicate that a dysregulated immune response to infections during the first few months of life leads to an increased risk of ALL.43

Alternatively, from a methodological perspective, it has been suggested that these contrasting results may indicate that previous studies using self-reported data on infections and social contacts, many of which found a reduced risk of ALL, were biased by differential recall/reporting between cases and controls.58 Although more studies are needed to evaluate this apparent discrepancy, it is important to note that infection based on clinical diagnosis may reflect a different infectious disease experience of the child compared with a self-reported infectious disease history, as mothers may not seek medical attention for all of the common infections experienced by the child.

Although still susceptible to recall bias, surrogate measures of exposure to infections, such as day-care attendance and birth order, are recognized as strong alternative measures for testing the ‘delayed infection’ hypothesis, since they are highly associated with common childhood infectious diseases and have the added advantage of capturing a child’s asymptomatic infections.59 It is not known to what extent recall bias may have affected results of previous day-care studies, but a recent Danish study also showed strong evidence of a reduced risk associated with a records-based assessment of day-care attendance.36

Overall, this meta-analysis of existing epidemiological data provides strong support for an association between exposure to common infections in early childhood and subsequent risk of ALL. As an indirect measure of exposure to infections, the ability of day-care attendance to serve as a surrogate measure may vary depending on characteristics of the facility attended and the child’s pattern of attendance.
Epidemiologic studies have shown that the transmission and development of infectious diseases are highly influenced by the age of the child, the frequency and duration of attendance, and the structure and size of the facility.19,21 Future epidemiologic studies of childhood leukaemia should attempt to obtain this type of detailed information on the facilities attended to refine the exposure classification.

Although inconsistent, there is evidence from studies of other surrogate measures of exposure to infections, including birth order,2 parental social contacts in the workplace60 and other immune-related factors (e.g. vaccination and breastfeeding history61,62), that supports a role for infections and immune response in the aetiology of childhood leukaemia. The causal significance of the role of infections in childhood ALL would be strengthened by identification of a plausible biological mechanism for the conversion of pre-leukaemic cells following infection1 and by incorporation of genetic biomarkers of susceptibility and immune response into further epidemiological studies.63,64 The protective effect of early infection on risk of subsequent childhood ALL parallels the similarly protective impact of parasitic infections on type I diabetes in both animal models and children.65 An important implication of these ‘hygiene’-related hypotheses and supportive data is that some form of prophylactic intervention in infancy may ultimately be possible.1,65

Funding: Grants from the US National Institute of Environmental Health Sciences [grant numbers PS42 ES04705, R01 ES09137] and the Children with Leukaemia Foundation, UK. Funding to pay the Open Access publication charges for this article was provided by the US National Institute of Environmental Health Sciences [grant numbers PS42 ES04705, R01 ES09137].
Keywords: Childhood leukaemia, day care, epidemiology, infection, meta-analysis, case–control studies
Introduction: Evidence is growing in support of a role for infections in the aetiology of childhood leukaemia, particularly for the most common subtype, acute lymphoblastic leukaemia (ALL).1–3 Two infection-related hypotheses have gained popularity and are currently supported by substantial, yet inconsistent, epidemiologic findings. Kinlen first proposed the ‘population mixing’ hypothesis in response to the observed childhood leukaemia clusters occurring in the early 1980s in Seascale and Thurso, two remote and isolated communities in the UK that experienced a rapid influx of professional workers.4 He proposed that childhood leukaemia may result from an abnormal immune response to specific, although unidentified, infections commonly seen with the influx of infected persons into an area previously populated with non-immune and susceptible individuals. This hypothesis suggests a mechanism that involves a direct pathological role of specific infectious agents, presumably viruses, in the development of childhood leukaemia and that an immunizing effect may be acquired through previous exposure. 
Supportive data include several subsequent studies conducted by Kinlen and others examining similar examples of population mixing including rural new towns, situations of wartime population change and other circumstances contributing to unusual patterns of personal contact.4–11 Currently, there is no molecular evidence implicating cell transformation by a specific virus.12 The ‘delayed infection’ hypothesis proposed by Greaves emphasizes the critical nature of the timing of exposure and is intended to apply mostly to common B-cell precursor ALL (c-ALL), which largely accounts for the observed peak incidence of ALL between 2 and 5 years of age in developed countries.13,14 He described a role for infections in the context of a ‘two-hit’ model of the natural history of c-ALL,15 where the first ‘hit’ or initiating genetic event occurs in utero during fetal haematopoiesis producing a clinically covert pre-leukemic clone. The transition to overt disease occurs, in a small fraction (∼1%) of pre-leukaemia carriers, after a sufficient postnatal secondary genetic event, which may be caused by a proliferative stress-induced effect of common infections on the developing immune system of the child.1,13 This adverse immune response to infections is thought to be the result of insufficient priming of the immune system usually influenced by a delay in exposure to common infectious agents during early childhood. 
With the assumption that improved socio-economic conditions may lead to a delay in exposure to infections, the Greaves hypothesis provides one plausible explanation for the notably higher incidence rates of ALL, with its characteristic peak age between 2 and 5 years, observed only in more socio-economically developed countries.16,17 Although different in hypothesized mechanism, both the ‘population mixing’ and ‘delayed infection’ hypotheses propose that childhood leukaemia is caused by an abnormal immune response to infection(s) acquired through personal contacts, and both are compatible with available evidence. In some populations, it is possible that both mechanisms may be operating. Several previous epidemiological studies have used day-care attendance as an indicator of the increased likelihood of early exposure to infections,18 since it is well documented that in developed countries exposures to common infections, particularly those affecting the respiratory and gastrointestinal tracts, occur more frequently in this type of setting.19 The immaturity of children’s immune systems, in combination with a lack of appropriate hygienic behaviour, is believed to promote the transmission of infectious agents in this social setting.19–21 In the current analysis, we took a meta-analytic approach to summarize the findings to date on the relationship between day-care attendance and risk of childhood ALL.

Methods

Identification of studies: Literature searches were conducted in PubMed to identify original research and review articles related to childhood leukaemia and day-care attendance and/or social contacts published between January 1966 and October 2008. The searches were conducted using the term ‘childhood leukaemia’ in combination with other terms including ‘infection’, ‘child care’, ‘day care’ and ‘social contact’.
In addition, the bibliographies of epidemiology publications on childhood leukaemia and infections were searched to identify studies that may not have been captured through the initial database search. This included the review published in 2004 by McNally and Eden on the infectious aetiology of childhood acute leukaemia (AL).2

Inclusion/exclusion criteria and definitions: Among the studies identified, inclusion in the meta-analysis was limited to observational studies of case–control or cohort design of any size, geographic location and race/ethnicity of study participants. When more than one publication from an individual study was available, either the most recent publication or the publication that performed the analysis most applicable to evaluating the ‘delayed infection’ hypothesis was selected. Studies needed to have reported a relative risk (RR) or odds ratio (OR) and confidence intervals (CIs), or original data by disease status from which a measure of effect could be calculated. The outcome of interest was defined as clinically diagnosed leukaemia in children between the ages of 0 and 19 years.
In the very few studies that did not distinguish between specific leukaemia subtypes,22–26 it was assumed that ALL was the primary subtype, since it accounts for the majority (∼80%) of leukaemia diagnoses in children.27 The exposure of interest, generally referred to as ‘day-care attendance’, may have included, in addition to formal day care, preschool, nursery school, play groups, mother–toddler groups and other early social contacts. A strict criterion for the meaning of ‘regular attendance’ was not defined a priori, since it was assumed that this would vary between studies. Of the primary studies identified, four were excluded for various reasons, including study emphasis on evaluating leukaemia prognosis and outcome,28 an earlier analysis of data from a study for which a more complete and recent publication is available29 and not reporting a risk estimate for day-care attendance.30,31 After the exclusions, a total of 14 studies, all case–control in design, were retained for the meta-analysis.22–24,32–42
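For studies contributing only original data by disease and exposure status rather than a reported OR, an OR and a Woolf-type 95% CI can be derived directly from the 2×2 table. A minimal sketch in Python (function name and illustrative counts are ours, not taken from any included study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf-type 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# e.g. 40 exposed / 60 unexposed cases vs 60 exposed / 40 unexposed controls
# gives OR ≈ 0.44 with 95% CI ≈ (0.25, 0.78)
print(odds_ratio_ci(40, 60, 60, 40))
```

An OR below 1, as here, corresponds to the reduced risk among day-care attenders reported by most of the included studies.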
Data extraction and statistical approach: For most studies,23,24,34–40 the ORs and 95% CIs for leukaemia, AL, ALL or c-ALL among those who attended day care compared with those who did not attend day care were extracted.
Among the few studies that did not provide this estimate, the OR for a similar measure was extracted, including those for no deficit in social contacts,42 regular contact outside the home,22 >36 months duration of day-care attendance,41 increasing index and family day-care measure,32 and social activity.33 In two instances, the reported OR was recalculated to reflect the risk associated with the highest level of day-care attendance and/or social activity measure compared with the lowest.41,42 Furthermore, several studies reported risk estimates for stratified analyses by specific subtype of leukaemia,22,32,33,35–38,40–42 age at diagnosis,34,35,38 specific age of day-care attendance,22,33,32–38,40 or race/ethnicity;37 multiple estimates were extracted from these studies for the purposes of subgroup and sensitivity evaluations in the meta-analysis, including specific leukaemia subtypes, particularly ALL and c-ALL and timing of day-care attendance. In general, studies referred to the common precursor B-cell ALL subtype (CD10 and CD19 positive ALL) as c-ALL. Four studies defined c-ALL with an added criterion that specified an age range between 2 and 5 years.33,37,38,43 Risk estimates by specific diagnosis age groups were not extracted since there were only a few studies that provided this information and the age cut-points varied. For the one study that stratified by race/ethnicity,37 two separate risk estimates were included in the meta-analysis since the reported estimates were based on independent populations. 
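Pooling the extracted estimates requires each study's log-OR and its standard error; when only the OR and 95% CI are published, the standard error can be recovered by assuming the interval is symmetric on the log scale (a standard meta-analytic step; the helper name is ours):

```python
import math

def log_or_and_se(or_value, ci_lo, ci_hi, z=1.96):
    """Recover the log-OR and its standard error from a reported OR
    and 95% CI, assuming the CI is symmetric on the log scale."""
    log_or = math.log(or_value)
    se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * z)
    return log_or, se

# e.g. the crèche estimate from Petridou et al. 1993: OR 0.67 (0.41, 1.11)
log_or, se = log_or_and_se(0.67, 0.41, 1.11)
```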
The between-study heterogeneity was assessed using the Q statistic, which tests the null hypothesis that the estimated effect is homogeneous across all studies.44 Acknowledging that the eligible studies were conducted independently and may represent only a random sample of the distribution of all possible effect sizes for this association, the random effects model was utilized, which incorporates an estimate of both between-study and within-study variation into the calculation of the summary effect measure.45 Compared with the fixed effects model,46 this method is more conservative and generally results in a wider CI. Finally, publication bias was evaluated visually using the funnel graph method, which displays the distribution of all included studies by their point estimates and standard errors.47 In addition, the Begg and Mazumdar adjusted rank correlation test was used to test for correlation between the effect estimates and their variances which, if present, provides an indication of publication bias.48

The association with c-ALL was evaluated with a meta-analysis of 7 of the 14 studies.32,33,35–38,42 If a study reported multiple ORs and 95% CIs by timing of day-care attendance, the risk estimate associated with the earliest timing (e.g. age ≤2 years) was used to be consistent with the ‘delayed infection’ hypothesis.23,32,34,37,38 The effect of the timing of exposure was evaluated in subgroup meta-analyses of studies reporting risk estimates for early day-care attendance (age ≤2 years)22,23,32–34,36–38,42 and studies reporting risk estimates for day-care attendance anytime before diagnosis.23,24,35,37–39,41 Finally, a series of sensitivity analyses were conducted to evaluate the sources of study heterogeneity, namely the influences of potential selection bias and heterogeneity in disease classification and exposure definition.
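The Q statistic and random effects pooling described above can be sketched with a compact DerSimonian–Laird implementation (an illustrative reimplementation in Python, not the STATA routine actually used in the analysis):

```python
import math

def dersimonian_laird(log_ors, ses):
    """Cochran's Q heterogeneity statistic and DerSimonian-Laird
    random-effects pooled OR from per-study log-ORs and standard errors."""
    w = [1 / s**2 for s in ses]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1 / (s**2 + tau2) for s in ses]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return q, tau2, math.exp(pooled), se_pooled
```

When Q does not exceed its degrees of freedom, the between-study variance estimate is truncated at zero and the random effects result coincides with the fixed effects result; otherwise the extra variance widens the CI, which is why the method is described as more conservative.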
The analyses were conducted using the statistical software, STATA Version 9.49
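The Begg and Mazumdar adjusted rank correlation test mentioned above amounts to computing Kendall's tau between variance-standardized effect estimates and their variances; a large positive correlation suggests small-study (publication) bias. A hedged sketch of that calculation (an illustrative reimplementation, omitting the test's p-value machinery):

```python
import math

def begg_mazumdar_tau(effects, variances):
    """Kendall's tau between variance-standardized effects and their
    variances, as in the Begg-Mazumdar rank correlation approach."""
    w = [1 / v for v in variances]
    pooled = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    # standardized deviates, removing the pooled estimate's variance
    t = [(y - pooled) / math.sqrt(v - 1 / sum(w))
         for y, v in zip(effects, variances)]
    n = len(effects)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (t[i] - t[j]) * (variances[i] - variances[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```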
Results: Table 1 presents selected characteristics of the 14 studies included in this meta-analysis. The studies, all case–control in design, were published between 1993 and 2008 and were conducted in many different geographic areas. Most studies achieved a population-based ascertainment of cases utilizing a national registry or a regional network of all major paediatric oncology centres. A population-based control selection strategy was most common, with the exception of three studies that selected hospital-based controls.23,24,39 Only 1 of the 14 studies utilized a records-based day-care assessment protocol,36 whereas the remaining studies relied on standardized questionnaires administered either in person, by telephone or by mail. All studies accounted for major confounding factors such as age, sex, race and socio-economic status through a matched study design and/or statistical adjustment in the analysis.
Of the 14 studies identified, 11 have reported either a statistically significant reduced risk associated with day-care attendance and/or social contact measures23,33–37,39 or some evidence of a reduced risk.22,24,40,41

Table 1. Select characteristics of studies included in the meta-analyses of day-care attendance and childhood leukaemia. Each entry lists case ascertainment, control selection, data collection, select results as OR (95% CI), and confounding addressed.

Petridou et al., 199323 (Greece, Attica and Crete). Cases: 125 leukaemia (Attica), 11 leukaemia (Crete); age 0–14 years; children’s hospitals of the University of Athens (1987–91) and the University of Crete (1990–92). Controls: 187 frequency-matched children who attended the outpatient clinic of the hospitals where the children with leukaemia were treated. Data collection: telephone interviews with parents. Select results: attendance at crèche (yes/no); at any time, leukaemia 0.67 (0.41, 1.11); in infancy (for ≥3 months in the first 2 years of life), leukaemia 0.28 (0.09, 0.88). Confounding: frequency match on sex, age, hospital; adjusted for place of residence, social class.

Roman et al., 199440 (UK). Cases: 38 ALL; age 0–4 years; diagnosed 1972–89; born in west Berkshire or north Hampshire and resident there when diagnosed. Controls: 112 individually matched children selected from hospital delivery registers. Data collection: parents were interviewed. Select results: preschool playgroup (yes/no); in the year before diagnosis (for ≥3 months), ALL 0.6 (0.2, 1.8). Confounding: individual match on sex, date of birth, mother’s age, area of residence at birth, time of diagnosis; adjustment not specified.

Petridou et al., 199724 (Greece). Cases: 153 leukaemia; age 0–14 years; diagnosed 1993–94; nationwide network of paediatric haematologists/oncologists. Controls: 300 individually matched children hospitalized at the same time as the corresponding case for acute conditions. Data collection: interviewer-administered questionnaire completed by guardians of all subjects. Select results: day-care attendance (ever/never), leukaemia 0.83 (0.51, 1.37). Confounding: individual match on sex, age, geographic region; adjusted for maternal age at birth, maternal education, sibship size, birth order, persons per room.

Schuz et al., 199942 (Germany). Cases: 1010 AL (686 c-ALL); diagnosed 1980–94; age 0–14 years; nationwide German Children’s Cancer Registry at the University of Mainz. Controls: 2588 matched children randomly selected from complete files of local offices of registration of residents. Data collection: structured questionnaire based on the US Children’s Cancer Group; day-care attendance not directly assessed. Select results: deficit in social contacts (yes/no; age ≤18 months excluded); AL 1.1 (0.9, 1.3); c-ALL 1.0 (0.8, 1.2). Confounding: analysis of AL, individual match on date of birth, sex, district, adjusted for SES; analysis of c-ALL, adjusted for sex, age, year of birth, study setting, SES, urbanization.

Dockerty et al., 199922 (New Zealand). Cases: 97 ALL; diagnosed 1990–93; age 0–14 years; New Zealand Cancer Registry, public hospital admission/discharge computer system and the Children’s Cancer Registry (nationwide). Controls: 97 individually matched children randomly selected from the New Zealand-born and resident childhood population using national birth records; 209 solid cancer cases. Data collection: mothers interviewed in their homes using a questionnaire adapted from Patricia McKinney and Eve Roman in the UK. Select results: regular contact with other children from outside the home at <12 months (yes/no; age <15 months excluded), ALL 0.65 (0.36, 1.17). Confounding: individual match on age and sex; adjusted for sex, age and several others including social class.

Infante-Rivard et al., 200034 (Canada). Cases: 491 ALL; diagnosed 1980–93; age 0–9 years; tertiary care centre, similar to population-based ascertainment. Controls: 491 individually matched children chosen from the most complete census of children for the study years. Data collection: structured questionnaire administered to mothers by phone. Select results: day-care attendance by age at entry; entry ≤2 years old vs none, ALL 0.49 (0.31, 0.77); entry >2 years old vs none, ALL 0.67 (0.45, 1.01). Confounding: individual match on age, sex, region of residence at diagnosis; adjusted for maternal age, maternal education.

Neglia et al., 200038 (USA). Cases: 1744 ALL (633 c-ALL; excludes cases <1 year); diagnosed 1989–93; age 0–14 years; Children’s Cancer Group member institutions throughout the USA. Controls: 1879 individually matched children randomly selected using random digit dialing (RDD). Data collection: structured interview. Select results: day-care attendance (age <1 year excluded); yes vs no, ALL 0.96 (0.82, 1.12), c-ALL 0.96 (0.75, 1.24); day care before age 2 vs no, ALL 0.99 (0.84, 1.17), c-ALL 1.05 (0.80, 1.37). Confounding: individual match on age, race, telephone area code, exchange, sex (T-cell leukaemia only); adjusted for maternal race, education, family income.

Rosenbaum et al., 200041 (USA). Cases: 255 ALL; diagnosed 1980–91; age 0–14 years; four clinical centers in a 31-county study region; institutional tumour registries and department of paediatric haematology–oncology records. Controls: 760 frequency-matched children randomly selected through the Live Birth Certificate Registry maintained by the New York State Department of Health. Data collection: self-administered questionnaire mailed to the parents of subjects. Select results: total duration of out-of-home care (duration vs >36 months); stay home 1.32 (0.70, 2.52); 1–18 months 1.74 (0.89, 3.42); 19–36 months 1.32 (0.70, 2.52). Confounding: frequency match on sex, age, race, birth year; adjusted for maternal age, maternal education, birth year, maternal employment, breastfeeding, birth order.

Chan et al., 200232 (Hong Kong). Cases: 98 AL (66 c-ALL); diagnosed 1994–97; age 2–14 years; Hong Kong Pediatric Hematology and Oncology Study Group. Controls: 228 children selected using RDD methodology. Data collection: in-person interview using a structured questionnaire adapted from the UKCCS and translated into Chinese. Select results: index and family day-care attendance (3-category variable); first year of life, AL 0.96 (0.70, 1.32); child peak 0.63 (0.38, 1.07); c-ALL 0.93 (0.63, 1.36). Confounding: no matching; adjusted for age and number of children in household at reference date.

Perrillat et al., 200239 (France). Cases: 280 AL (240 ALL); diagnosed 1995–99; age 0–15 years; hospitals of Lille, Lyon, Nancy and Paris; cases had to have resided in the hospital catchment area. Controls: 288 frequency-matched children hospitalized in the same hospital as the cases and residing in the hospital catchment area. Data collection: in-person standardized questionnaire. Select results: day-care attendance (age <2 years excluded); ever vs never, AL 0.6 (0.4, 1.0); age started vs no day care, >12 months 0.5 (0.3, 1.0), 7–12 months 0.6 (0.2, 1.7), ≤6 months 0.5 (0.3, 1.0). Confounding: frequency match on age, sex, hospital, hospital catchment area, ethnic origin; adjusted for age, sex, hospital, ethnic origin, maternal education, parental professional category.

Jourdan-Da Silva et al., 200435 (France). Cases: 473 AL (408 ALL, 304 c-ALL); diagnosed 1995–98; age 0–14 years; National Registry of Childhood Leukaemia and Lymphoma (NRCL). Controls: 567 frequency-matched children randomly selected using age, sex and region quotas from a sample of 30 000 phone numbers representative of the French population on area of residence and municipality size. Data collection: standardized self-administered questionnaire on mothers. Select results: day-care attendance (age <1 year excluded); ever vs never, ALL 0.7 (0.6, 1.0), c-ALL 0.8 (0.6, 1.0); started at age <3 months vs never, ALL 0.6 (0.4, 0.8), c-ALL 0.6 (0.4, 0.9). Confounding: frequency match on age, sex, region; adjusted for age, sex, region.

Gilham et al., 200533 (UK). Cases: 1286 ALL (791 c-ALL; excludes cases <2 years); diagnosed 1991–96; age 0–14 years; nationwide ascertainment through paediatric oncology units. Controls: 6238 individually matched children randomly selected from primary care population registers. Data collection: in-person interview with parents using a structured questionnaire. Select results: social activity in the first year of life (age <2 years excluded); any vs no social activity, ALL 0.66 (0.56, 0.77), c-ALL 0.67 (0.55, 0.82); age started vs no day care, ALL: <3 months 0.71 (0.60, 0.85), 3–5 months 0.71 (0.56, 0.90), 6–11 months 0.76 (0.63, 0.92). Confounding: individual match on sex, month and year of birth, region of residence at diagnosis; adjusted for age at diagnosis, sex, region, maternal age, mother working at time of birth, deprivation.

Ma et al., 200537 (USA). Cases: 294 ALL (145 c-ALL; excludes cases <1 year); diagnosed 1995–2002; age 0–14 years; population-based ascertainment from major paediatric clinical centres in Northern and Central California. Controls: 376 individually matched children randomly selected from statewide birth certificate files maintained by the California Department of Health Services. Data collection: personal interview with primary caretaker. Select results: child-hours of exposure at day care (age <1 year excluded); ≥5000 child-hours (first year) vs 0, ALL: Hispanic 2.10 (0.70, 6.34), White 0.42 (0.18, 0.99); c-ALL: Hispanic 2.53 (0.60, 10.7), White 0.33 (0.11, 1.01). Confounding: individual match on date of birth, sex, mother’s race, Hispanic status; adjusted for annual household income, maternal education.

Kamper-Jorgensen et al., 200836 (Denmark). Cases: 559 ALL (199 c-ALL); diagnosed 1989–2004; age 0–15 years; all cases of childhood leukaemia identified in a cohort of all children in Denmark. Controls: 5590 individually matched children selected from population registers. Data collection: records-based data from three population-based registries: the Nordic Society of Paediatric Haematology and Oncology, the Danish Civil Registration System and the Childcare database. Select results: child-care attendance during the first 2 years of life (yes/no); ALL 0.68 (0.48, 0.95); c-ALL 0.58 (0.36, 0.93). Confounding: individual match on date of birth, sex, birth cohort; several demographic characteristics were considered in adjustment but none were major confounders.

SES, socioeconomic status; RDD, random digit dialing; UKCCS, United Kingdom Childhood Cancer Study.
As shown in Table 2, the 14 studies included a total of 6108 cases and generated a combined OR estimate indicating that day-care attendance is associated with a reduced risk of childhood ALL (OR = 0.76, 95% CI: 0.67, 0.87). Figure 1 provides a visual portrayal of the relationship between day-care attendance and the risk of childhood ALL. Three large studies conducted in Germany,42 the USA38 and the UK33 each carried a large proportion of the weight in the meta-analysis, at ∼13%. The combined risk estimates excluding each of these studies individually remained similarly reduced, indicating that no single large study could completely explain the protective effect observed (data not shown). No remarkable evidence of publication bias was apparent from the funnel plot, since the data points for these 14 studies were, in general, randomly distributed around the combined OR estimate (plot not shown). This visual interpretation was confirmed by the large P-value from the rank correlation method (P = 0.553).

Table 2. Meta-analysis of studies examining the association between day-care attendance and risk of childhood ALL. Each entry lists: outcome, age in years; day-care definition; timing; cases; OR (95% CI); weight (Wi)a.

Petridou et al., 199323: leukaemia, 0–14; attendance at crèche (yes/no); before age 2 years; 136; 0.28 (0.09, 0.88); 1.2%
Roman et al., 199440: ALL, 0–4; preschool playgroup (yes/no); year before dx; 38; 0.60 (0.20, 1.80); 1.3%
Petridou et al., 199724: leukaemia, 0–14; day care (ever/never); birth to dx; 153; 0.83 (0.51, 1.37); 5.0%
Schuz et al., 199942,b: AL, 1.5–14; deficit in social contacts (yes/no); before age 2 years; 921; 0.91 (0.90, 1.30); 12.7%
Dockerty et al., 199922: ALL, 1.25–14; regular contact outside home (yes/no); first year of life; 90; 0.65 (0.36, 1.17); 3.8%
Infante-Rivard et al., 200034: ALL, 0–9; day care (entry at ≤2 years/never); at or before age 2 years; 433; 0.49 (0.31, 0.77); 5.6%
Neglia et al., 200038: ALL, 1–14; day care before age 2 years (yes/no); before age 2 years; 1744; 0.99 (0.84, 1.17); 13.3%
Rosenbaum et al., 200041,b: ALL, 0–14; out-of-home care (>36 months/none); birth to dx; 158; 0.76 (0.70, 2.52); 3.3%
Chan et al., 200232: AL, 2–14; index and family day care (3-level); first year of life; 98; 0.96 (0.70, 1.32); 8.5%
Perrillat et al., 200239: AL, 2–15; day-care attendance (yes/no); birth to dx; 246; 0.60 (0.40, 1.00); 5.5%
Jourdan-Da Silva et al., 200435: ALL, 1–14; day-care attendance (yes/no); birth to dx; 387; 0.70 (0.60, 1.00); 10.3%
Gilham et al., 200533: ALL, 2–14; social activity (any/none); first year of life; 1272; 0.66 (0.56, 0.77); 13.6%
Ma et al., 200537 (White): ALL, 1–14; day care first year of life (yes/no); first year of life; 136; 0.77 (0.43, 1.40); 3.8%
Ma et al., 200537 (Hispanic)c: ALL, 1–14; day-care attendance (yes/no); birth to dx; 120; 1.09 (0.62, 1.90); 4.1%
Kamper-Jorgensen et al., 200836: ALL, 0–15; attendance to child care (yes/no); before age 2 years; 176; 0.68 (0.48, 0.95); 7.9%
Combined: 6108 cases; 0.76 (0.67, 0.87); 100.0%. P-value (heterogeneity) = 0.014.

aPercent weight assigned to each OR in the random effects model.
bSchuz et al.: changed reference to ‘Yes-deficit in social contacts’ by calculating the inverse of the OR provided for ‘No-deficit in social contacts’; Rosenbaum et al.: estimated the OR for ‘>36 months’ by calculating the inverse of the originally provided OR for ‘stayed home’.
cDay-care attendance censored on reference date was used due to the low number of Hispanic subjects attending day care during the first year of life.
dx, diagnosis; Wi, weight.

Figure 1. Forest plot displaying ORs and 95% CIs of studies examining the association between day-care attendance and risk of childhood ALL. The risk estimates are plotted with boxes and the area of each box is inversely proportional to the variance of the estimated effect. The horizontal lines represent the 95% CIs of the risk estimate for each study. The solid vertical line at 1.0 represents a risk estimate of no effect. The dashed vertical line represents the combined risk estimate (OR = 0.76), and the width of the diamond is the 95% CI for this risk estimate (0.67–0.87).

We attempted to maintain a reasonable balance between maximizing the inclusion of studies and minimizing sources of heterogeneity by relaxing the eligibility criteria to include estimates for broader leukaemia subtypes, other social contact measures and unspecified timing of exposure. The influence of these possible sources of heterogeneity on the combined risk estimate was then evaluated.
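The rank-correlation check for funnel-plot asymmetry reported above (P = 0.553) can be illustrated with a Kendall tau between effect estimates and their variances. This is a simplified sketch: it omits the standardization step of the published Begg and Mazumdar procedure, assumes no ties, and uses a normal approximation for the P-value; the log-ORs and variances below are hypothetical.

```python
import math

def kendall_tau_test(effects, variances):
    """Kendall rank correlation between effect sizes and their variances,
    with a normal-approximation two-sided P-value (assumes no ties)."""
    n = len(effects)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (effects[i] - effects[j]) * (variances[i] - variances[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    tau = (conc - disc) / (n * (n - 1) / 2)
    # z = S / sqrt(Var(S)), with Var(S) = n(n-1)(2n+5)/18 under the null
    z = 3 * (conc - disc) / math.sqrt(n * (n - 1) * (2 * n + 5) / 2)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return tau, p_value

# Hypothetical log-ORs and variances, for illustration only
log_ors = [-0.40, -0.71, -0.01, -0.36, -0.42]
variances = [0.065, 0.054, 0.007, 0.030, 0.090]
tau, p = kendall_tau_test(log_ors, variances)
```

A correlation between effect size and variance (small, imprecise studies showing systematically stronger effects) is the funnel-plot asymmetry that signals possible publication bias; a large P-value, as here, gives no such indication.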
In subgroup meta-analyses presented in Table 3 examining the influence of the timing of exposure, the combined OR for the seven studies reporting estimates for day-care attendance or social contacts any time before diagnosis showed a reduced risk of childhood ALL (OR = 0.81, 95% CI: 0.70, 0.94). When the meta-analysis was limited to the nine studies that specifically evaluated day-care attendance at or before age 1 or 2 years, a similarly reduced risk of ALL (OR = 0.79, 95% CI: 0.65, 0.95) was observed.

Table 3. Subgroup meta-analyses of day-care attendance and risk of childhood ALL evaluating the influence of timing of day-care attendance. Each entry lists: cases; OR (95% CI); weight (Wi)a.

Day care any time:
Petridou et al., 199323: 136; 0.67 (0.41, 1.11); 7.7%
Petridou et al., 199724: 153; 0.83 (0.51, 1.37); 7.8%
Neglia et al., 200038: 1744; 0.96 (0.82, 1.12); 38.0%
Rosenbaum et al., 200041,b: 158; 0.76 (0.70, 2.52); 4.9%
Perrillat et al., 200239: 246; 0.60 (0.40, 1.00); 8.9%
Jourdan-Da Silva et al., 200435: 387; 0.70 (0.60, 1.00); 22.1%
Ma et al., 200537 (White): 136; 0.75 (0.38, 1.45); 4.5%
Ma et al., 200537 (Hispanic): 120; 1.09 (0.62, 1.90); 6.2%
Combined: 3080; 0.81 (0.70, 0.94); 100.0%. P-value (heterogeneity) = 0.277.

Day care at age ≤2 years:
Petridou et al., 199323: 136; 0.28 (0.09, 0.88); 2.3%
Schuz et al., 199942,b: 921; 0.91 (0.90, 1.30); 15.7%
Dockerty et al., 199922: 90; 0.65 (0.36, 1.17); 6.5%
Infante-Rivard et al., 200034: 433; 0.49 (0.31, 0.77); 8.8%
Neglia et al., 200038: 1744; 0.99 (0.84, 1.17); 16.1%
Chan et al., 200232: 98; 0.96 (0.70, 1.32); 12.0%
Gilham et al., 200533: 1272; 0.66 (0.56, 0.77); 28.8%
Ma et al., 200537 (White): 136; 0.77 (0.43, 1.40); 2.1%
Ma et al., 200537 (Hispanic): 120; 1.92 (0.89, 4.13); 1.2%
Kamper-Jorgensen et al., 200836: 176; 0.68 (0.48, 0.95); 6.3%
Combined: 5126; 0.79 (0.65, 0.95); 100.0%. P-value (heterogeneity) = 0.001.

aPercent weight assigned to each OR in the random effects model. Wi, weight.
bSchuz et al.: changed reference to ‘Yes-deficit in social contacts’ by calculating the inverse of the OR provided for ‘No-deficit in social contacts’; Rosenbaum et al.: estimated the OR for ‘>36 months’ by calculating the inverse of the originally provided OR for ‘stayed home’.

A series of sensitivity analyses was conducted on the meta-analysis of the 14 studies to examine the influence of individual study characteristics on the combined OR, namely, potential biases in the selection of controls, the categorization of leukaemia and the assessment of day-care attendance. Figure 2 presents a summary of these analyses, showing that none of these factors was able to completely account for the reduced risk of ALL observed in the main analysis of the 14 studies. For example, in the evaluation of potential control selection bias, reduced risks were observed for the analyses excluding the three studies that used hospital-based controls (OR = 0.78, 95% CI: 0.68, 0.90) and excluding the two studies that used random digit dialing (RDD) to select controls (OR = 0.72, 95% CI: 0.63, 0.81). Similarly reduced combined ORs were observed when excluding studies that included infants (<1 year of age) in the study population (OR = 0.81, 95% CI: 0.70, 0.94), studies not specifically examining ALL (OR = 0.74, 95% CI: 0.63, 0.87), and studies that did not define the exposure strictly as attendance at a day care or a similar type of setting (OR = 0.74, 95% CI: 0.61, 0.88).

Figure 2. Plot showing results of sensitivity meta-analyses evaluating the influence of potential biases within individual studies on combined risk estimates. RDD, random digit dialing.
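Exclusion-based sensitivity analyses like those summarized in Figure 2 amount to re-pooling after dropping a subset of studies. The sketch below uses fixed-effect inverse-variance pooling for brevity (the paper used a random effects model), and the study tuples are illustrative values taken from Table 2, not a reproduction of the published sensitivity results.

```python
import math

def pooled_or(studies):
    """Fixed-effect inverse-variance pooled OR from (OR, CI low, CI high) tuples.
    A simplified stand-in for the random effects model used in the paper."""
    num = den = 0.0
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        w = 1.0 / se ** 2
        num += w * math.log(or_)
        den += w
    return math.exp(num / den)

def sensitivity_excluding(studies, excluded_indices):
    """Re-pool after excluding the studies at the given indices
    (e.g. the hospital-based-control or RDD studies)."""
    dropped = set(excluded_indices)
    kept = [s for i, s in enumerate(studies) if i not in dropped]
    return pooled_or(kept)

# Illustrative rows from Table 2
studies = [(0.67, 0.41, 1.11), (0.49, 0.31, 0.77), (0.99, 0.84, 1.17), (0.66, 0.56, 0.77)]
full = pooled_or(studies)
without_first = sensitivity_excluding(studies, [0])
```

Running the exclusion over each candidate bias group in turn and comparing the re-pooled OR with the full-set estimate is exactly the comparison Figure 2 summarizes.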
Table 4 presents the results of the meta-analyses evaluating the association between childhood c-ALL and day-care attendance. The analysis of c-ALL included fewer studies than the analysis of ALL. Similar to the result from the meta-analysis of ALL, the combined OR for the seven studies of c-ALL was also <1, although the CI was slightly wider (OR = 0.83, 95% CI: 0.70, 0.98). The subgroup analyses among studies of day-care attendance before age 1 or 2 years and c-ALL generated results similar to those for ALL (data not shown). No evidence of publication bias was observed for these analyses.

Table 4. Meta-analysis of studies examining the association between day-care attendance and risk of c-ALL. Each entry lists: age in years; day-care definition; timing; cases; OR (95% CI); weight (Wi)a.

Schuz et al., 199942: 1.5–14; deficit in social contacts (yes/no); before age 2 years; 658; 1.00 (0.80, 1.20); 19.9%
Neglia et al., 200038: 2–5; day care before age 2 years (yes/no); before age 2 years; 633; 1.05 (0.80, 1.37); 16.3%
Chan et al., 200232: 2–14; index and family day care (3-level); first year of life; 66; 0.93 (0.63, 1.36); 11.4%
Jourdan-Da Silva et al., 200435: 1–14; day-care attendance (yes/no); birth to dx; 304; 0.80 (0.60, 1.00); 17.0%
Gilham et al., 200533: 2–5; social activity (any/none); first year of life; 791; 0.67 (0.55, 0.82); 20.1%
Ma et al., 200537 (White): 2–5; day care first year of life (yes/no); first year of life; 74; 0.49 (0.19, 1.26); 2.8%
Ma et al., 200537 (Hispanic)b: 2–5; day-care attendance (yes/no); birth to dx; 71; 0.91 (0.41, 2.05); 3.8%
Kamper-Jorgensen et al., 200836: 0–15; attendance to child care (yes/no); before age 2 years; 101; 0.58 (0.36, 0.93); 8.7%
Combined: 2698; 0.83 (0.70, 0.98); 100.0%. P-value (heterogeneity) = 0.044.

aPercent weight assigned to each OR in the random effects model.
bDay-care attendance censored on reference date was used due to the low number of Hispanic subjects attending day care during the first year of life.
dx, diagnosis; Wi, weight.

Discussion:

The evidence from a large and growing body of literature related to the exposure to infectious agents, as measured by day-care attendance, and the risk of childhood leukaemia was systematically evaluated using a meta-analytic approach. Heterogeneity between epidemiologic studies and their results is common and constitutes one of the major challenges in such a synthesis. Although the random effects model was used in this analysis to account for some of the between-study variation, we acknowledge the importance of interpreting results together with a thorough consideration of the potential sources of heterogeneity. All the studies included in this analysis were conducted with the a priori objective of testing the biologically plausible ‘delayed infection’ hypothesis, which specifies a predicted direction of risk, timing of the exposure and the most applicable subtype of leukaemia. Overall, the studies show consistency in support of a reduced risk associated with day-care attendance or social contacts during early childhood, with the vast majority of studies either reporting an effect in the hypothesized direction or no association. A quantitative assessment using meta-analysis indicates that day-care attendance is associated with a reduced risk of childhood ALL, as well as c-ALL. The reduction in risk persisted despite a thorough consideration of potential sources of study heterogeneity. We did not conduct a meta-analysis specifically in non-c-ALL or acute myeloid leukaemia due to the limited number of studies reporting results for these associations.
Of the four studies that present data for non-c-ALL,35–38 three studies showed reduced ORs,35–37 but lacked precision. Based on currently available data, it is difficult to determine whether the association applies to a specific subtype of ALL only or ALL in general. The subgroup meta-analysis by timing of day-care attendance did not suggest a stronger reduction in risk for day care specifically at or before age 1 or 2 years as might have been expected based on the hypothesis. However, a few individual studies have shown that the strongest reduction in risk occurs when day-care attendance is started <6 months of age.33,35,39 Although not formally evaluated in this meta-analysis, several individual studies that used detailed exposure assessment protocols demonstrated evidence of dose–response effects. Strong trends were observed for increasing levels of child-hours of day-care attendance,37 levels of social activity33 and age at start of day care.35 We were not able to conduct a comparable meta-analysis of studies pertaining to the related mechanism of rural ‘population mixing’ and the risk of childhood leukaemia. Although it was not possible to analyse the role of ‘population mixing’ in the same manner as was done for the ‘delayed infection’ hypothesis, it is recognized that these two processes may be interrelated or occur simultaneously and that both mechanisms may be operating in a given population. Thus, the results observed for the analyses of studies providing data relevant to the timing of infection in early life cannot be interpreted as ‘ruling out’ the possible role of ‘population mixing’, but rather lend further support to the role of immune related processes in the aetiology of ALL. One major consideration in the evaluation of study validity is the possibility of selection bias, a type of systematic error that occurs when there is differential selection of either the cases or controls on the basis of characteristics which may affect exposure status. 
One way this may arise is if cases and controls do not originate from the same source population. A population-based ascertainment of cases is considered favourable since a defined source population, from which controls may be selected, is easily identifiable. Other strategies of case ascertainment may be appropriate as well, as long as the source population can be clearly defined. As implemented in three of the included studies, selection of controls among the inpatient cohort of the same hospital as the case diagnosis can fulfill this requirement, but can introduce bias if the illnesses/conditions of the control group are related to the exposure under study. Also, it has been suggested that the use of RDD, a population-based method of control recruitment, may result in a control group biased with respect to certain population characteristics that may be associated with exposures of interest.50 Analyses excluding the three studies that selected hospital-based controls23,24,39 or the two studies that used RDD to recruit population-based controls32,38 produced similar results to those for the full set of studies. Similar types of systematic biases resulting in socio-economic differences between cases and controls have been implicated in other studies as well, including the large United Kingdom Childhood Cancer Study (UKCCS)33 and the Northern California Childhood Leukemia Study (NCCLS).37,51 Adjustments for these differences have been implemented in the analyses; however, the possibility of residual effects cannot be ruled out.
To alleviate some of this concern, results of a subgroup analysis conducted in the NCCLS among matched cases and controls who had the same annual household income showed that the pattern of association with day-care attendance persisted.37 The potential for information bias in case–control studies is of particular importance due to the retrospective nature of data collection, and the recall of past exposures may be influenced by disease status. Most studies collected exposure data based on respondent recall using a standardized questionnaire administered either in person, by telephone or by mail. Recall bias in the evaluation of c-ALL is expected to be less likely, since diagnoses of c-ALL are usually made between ages 2 and 5 years, and recall of early exposure histories may be easier for the primary caregiver. Although the influence of recall bias could not be formally evaluated in these meta-analyses, one records-based day-care study conducted by Kamper-Jorgensen et al. in Denmark reported a reduced risk of childhood ALL associated with childcare attendance during the first 2 years of life.36 Several subtype specific analyses performed in this study showed the strongest association in B-cell precursor ALL and c-ALL. In addition to potential biases associated with the ability of respondents to accurately recall past events, there was variation between studies in the extent of exposure assessment and categorization of individual exposures to infectious agents. For example, Schuz et al. reported results from a matched case–control study conducted in Germany that used a ‘deficit in social contacts’ variable based on the assumption that children were likely to have attended day care if during the first 2 years of their life both parents were in full-time work.42 The assumption made in the formulation of this social contact variable most likely contributed some non-differential misclassification, which tends to bias findings towards one of no effect. 
Their analysis did not indicate an association between deficit in social contact and AL or c-ALL. In contrast, in the UKCCS, Gilham et al. created a hierarchical variable that reflected a child’s overall social activity based on interview data incorporating information on frequency of regular activity with children outside the home, frequency of attendance at a day nursery or nursery school, and number of other children in attendance.33 These analyses indicated that social activity/day-care attendance is associated with a reduced risk of childhood ALL. Ma et al., in the first publication on day-care attendance from the NCCLS, constructed a ‘child-hours of exposure’ variable incorporating information on the number of months attending a day care, mean hours per week at this day care and the number of children exposed to at this day care. They reported that children who had more total child-hours of exposure had a reduced risk of ALL.29 These results were later confirmed in a follow-up analysis using a larger study population.37 In non-Hispanic White children, children in the highest category of child-hours during infancy had a reduced risk of ALL and c-ALL compared with children who did not attend day care, with strong evidence of a dose–response effect. This association was not observed in Hispanic children, who, as noted by the authors, had different socio-economic and demographic characteristics, including larger family size and different day-care utilization patterns. Although these types of refined exposure assessment strategies that account for duration, frequency and size of the day-care facility serve as examples for future studies, results from these analyses may have contributed to study heterogeneity. In a meta-analysis of 10 studies that strictly defined the exposure as attendance at a day care or other similar types of settings,23,24,34–41 a reduced risk estimate was observed. 
Current evidence suggests that different subtypes of leukaemia, defined by both immunophenotypic and molecular characteristics, may be associated with distinct aetiological mechanisms.52,53 To minimize the bias associated with misclassification of the phenotype, most studies specifically evaluating the infectious hypothesis have reported results by subtype-specific leukaemia such as c-ALL, and have excluded infants since there is evidence suggesting these leukaemias may be associated with a causal mechanism involving transplacental chemical carcinogenesis.54–56 This is not expected to be a major source of error, as observed in the sensitivity analysis, since infant leukaemias comprise only a very small proportion of all leukaemia diagnoses (<5%).57 It is believed that the hypothesis on infections, particularly the ‘delayed infection’ hypothesis, is most relevant to ALL and its most common subtype, c-ALL.1 Limiting the meta-analysis to only those studies providing risk estimates for specific subtypes resulted in a reduced risk associated with both ALL22,33–38,40,41 and c-ALL.32,33,35–38,42 The UKCCS recently published results from the first records-based study examining the relationship between clinically diagnosed infections in the first year of life and childhood ALL.43,58 Contrary to what is expected based on the ‘delayed infection’ hypothesis and what was observed in this meta-analysis of day-care attendance, the results of this well-designed records-based study showed evidence of an increased risk of childhood ALL and c-ALL associated with clinically diagnosed infections in the first year of life. It is possible that these contrasting results reflect one of many mechanisms involved in the aetiology of childhood ALL. 
The authors explain that their findings may indicate that a dysregulated immune response to infections during the first few months of life leads to an increased risk of ALL.43 Alternatively, from a methodological perspective, it has been suggested that these contrasting results may be an indication that previous studies using self-reported data on infections and social contacts, many of which have found a reduced risk of ALL, may be biased due to differential recall/reporting between cases and controls.58 Although more studies are needed to evaluate this apparent discrepancy, it is important to note at this juncture that infection based on clinical diagnosis may reflect a different infectious disease experience of the child compared with a self-reported infectious disease history, as mothers may not seek medical attention for all of the common infections experienced by the child. Although still susceptible to recall bias, surrogate measures of exposure to infections such as day-care attendance and birth order are recognized as strong alternative measures for testing the ‘delayed infection’ hypothesis, since they are highly associated with common childhood infectious diseases and have the added advantage of capturing a child’s asymptomatic infections.59 It is not known to what extent recall bias may have affected results of previous day-care studies, but there is evidence from a recent Danish study also showing strong evidence of a reduced risk associated with a records-based assessment of day-care attendance.36 Overall, this meta-analysis of existing epidemiological data provides strong support for an association between exposure to common infections in early childhood and subsequent risk of ALL. As an indirect measure of exposure to infections, the ability of day-care attendance to serve as a surrogate measure may vary depending on characteristics of the facility attended and the child’s pattern of attendance. 
Epidemiologic studies have shown that the transmission and development of infectious diseases are highly influenced by the age of the child, frequency and duration of attendance, structure and size of the facility.19,21 Future epidemiologic studies of childhood leukaemia should attempt to obtain this type of detailed information on the facilities attended to refine the exposure classification. Although inconsistent, there is evidence from studies of other surrogate measures of exposure to infections including birth order,2 parental social contacts in the workplace,60 and other immune-related factors (e.g. vaccination and breastfeeding history61,62), that support a role for infections and immune response in the aetiology of childhood leukaemia. The causal significance of the role of infections in childhood ALL would be strengthened by identification of a plausible biological mechanism for the conversion of pre-leukemic cells following infection1 and by incorporation of genetic biomarkers of susceptibility and immune response into further epidemiological studies.63,64 The protective effect of early infection on risk of subsequent childhood ALL parallels the similarly protective impact of parasitic infections on type I diabetes in both animal models and children.65 An important implication of these ‘hygiene’-related hypotheses and supportive data is that some form of prophylactic intervention in infancy may ultimately be possible.1,65 Funding: Grants from the US National Institute of Environmental Health Sciences [grant numbers PS42 ES04705, R01 ES09137] and the Children with Leukaemia Foundation, UK. Funding to pay the Open Access publication charges for this article was provided by the US National Institute of Environmental Health Sciences [grant numbers PS42 ES04705, R01 ES09137].
Background: Childhood acute lymphoblastic leukaemia (ALL) may be the result of a rare response to common infection(s) acquired by personal contact with infected individuals. A meta-analysis was conducted to examine the relationship between day-care attendance and risk of childhood ALL, specifically to address whether early-life exposure to infection is protective against ALL. Methods: Searches of the PubMed database and bibliographies of publications on childhood leukaemia and infections were conducted. Observational studies of any size or location published in English were eligible, resulting in the inclusion of 14 case-control studies. Results: The combined odds ratio (OR) based on the random effects model indicated that day-care attendance is associated with a reduced risk of ALL [OR = 0.76, 95% confidence interval (CI): 0.67, 0.87]. In subgroup analyses evaluating the influence of timing of exposure, a similarly reduced effect was observed for both day-care attendance occurring early in life (≤2 years of age) (OR = 0.79, 95% CI: 0.65, 0.95) and day-care attendance with unspecified timing (anytime prior to diagnosis) (OR = 0.81, 95% CI: 0.70, 0.94). Similar findings were observed with seven studies in which common ALL was analysed separately. The reduced risk estimates persisted in sensitivity analyses that examined the sources of study heterogeneity. Conclusions: This analysis provides strong support for an association between exposure to common infections in early childhood and a reduced risk of ALL. Implications of a 'hygiene'-related aetiology suggest that some form of prophylactic intervention in infancy may be possible.
null
null
11,739
311
[ 120, 320, 616, 60 ]
8
[ "care", "studies", "age", "day", "day care", "attendance", "care attendance", "risk", "leukaemia", "day care attendance" ]
[ "childhood acute leukaemia", "suggesting leukaemias associated", "childhood leukaemia clusters", "leukaemia infection related", "childhood leukaemia causal" ]
null
null
[CONTENT] Childhood | leukaemia | day care | epidemiology | infection | meta-analysis | case–control studies [SUMMARY]
[CONTENT] Childhood | leukaemia | day care | epidemiology | infection | meta-analysis | case–control studies [SUMMARY]
[CONTENT] Childhood | leukaemia | day care | epidemiology | infection | meta-analysis | case–control studies [SUMMARY]
null
[CONTENT] Childhood | leukaemia | day care | epidemiology | infection | meta-analysis | case–control studies [SUMMARY]
null
[CONTENT] Case-Control Studies | Child | Child Day Care Centers | Communicable Diseases | Humans | Precursor Cell Lymphoblastic Leukemia-Lymphoma | Risk Assessment [SUMMARY]
[CONTENT] Case-Control Studies | Child | Child Day Care Centers | Communicable Diseases | Humans | Precursor Cell Lymphoblastic Leukemia-Lymphoma | Risk Assessment [SUMMARY]
[CONTENT] Case-Control Studies | Child | Child Day Care Centers | Communicable Diseases | Humans | Precursor Cell Lymphoblastic Leukemia-Lymphoma | Risk Assessment [SUMMARY]
null
[CONTENT] Case-Control Studies | Child | Child Day Care Centers | Communicable Diseases | Humans | Precursor Cell Lymphoblastic Leukemia-Lymphoma | Risk Assessment [SUMMARY]
null
[CONTENT] childhood acute leukaemia | suggesting leukaemias associated | childhood leukaemia clusters | leukaemia infection related | childhood leukaemia causal [SUMMARY]
[CONTENT] childhood acute leukaemia | suggesting leukaemias associated | childhood leukaemia clusters | leukaemia infection related | childhood leukaemia causal [SUMMARY]
[CONTENT] childhood acute leukaemia | suggesting leukaemias associated | childhood leukaemia clusters | leukaemia infection related | childhood leukaemia causal [SUMMARY]
null
[CONTENT] childhood acute leukaemia | suggesting leukaemias associated | childhood leukaemia clusters | leukaemia infection related | childhood leukaemia causal [SUMMARY]
null
[CONTENT] care | studies | age | day | day care | attendance | care attendance | risk | leukaemia | day care attendance [SUMMARY]
[CONTENT] care | studies | age | day | day care | attendance | care attendance | risk | leukaemia | day care attendance [SUMMARY]
[CONTENT] care | studies | age | day | day care | attendance | care attendance | risk | leukaemia | day care attendance [SUMMARY]
null
[CONTENT] care | studies | age | day | day care | attendance | care attendance | risk | leukaemia | day care attendance [SUMMARY]
null
[CONTENT] immune | infections | childhood | response | countries | proposed | developed countries | developed | population | childhood leukaemia [SUMMARY]
[CONTENT] studies | care | day care | day | estimates | study | attendance | risk | day care attendance | care attendance [SUMMARY]
[CONTENT] age | sex | care | year | yes | birth | match | vs | maternal | months [SUMMARY]
null
[CONTENT] studies | care | day | day care | leukaemia | attendance | age | childhood | study | risk [SUMMARY]
null
[CONTENT] Childhood ||| ALL [SUMMARY]
[CONTENT] PubMed ||| English | 14 [SUMMARY]
[CONTENT] 0.76 | 95% | CI | 0.67 | 0.87 ||| 2 years of age | 0.79 | 95% | CI | 0.65 | 0.95 | 0.81 | 95% | CI | 0.70 | 0.94 ||| seven ||| [SUMMARY]
null
[CONTENT] Childhood ||| ALL ||| PubMed ||| English | 14 ||| ||| 0.76 | 95% | CI | 0.67 | 0.87 ||| 2 years of age | 0.79 | 95% | CI | 0.65 | 0.95 | 0.81 | 95% | CI | 0.70 | 0.94 ||| seven ||| ||| ALL ||| [SUMMARY]
null
Improving neuro-oncological patients care: basic and practical concepts for nurse specialist in neuro-rehabilitation.
23031446
The neuro-oncological population well expresses the complexity of neurological disability, owing to the multiple neurological deficits that affect these patients. Moreover, thanks to new therapeutic opportunities, survival times for patients with brain tumors have increased, and more of these patients require rehabilitation care. The role of the nurse in the interdisciplinary specialty of neurorehabilitation is not clearly defined, even though it is recognized as critical and expanding in this setting. The purpose of the study is to identify the standard competencies for neurorehabilitation nurses that could be taught by means of a specialization course.
BACKGROUND
A literature review was conducted with preference given to works published between January 2000 and December 2008 in English. The search strategy identified 523 non-duplicated references, of which 271 titles were considered relevant. After reviewing the abstracts, 147 papers were selected and made available to a group of healthcare professionals who were asked to classify them into a few main conceptual areas and define the related topics.
METHODS
The following five main areas were identified: clinical aspects of nursing; nursing techniques; nursing methodology; relational and organisational models; legal aspects of nursing. The related topics were included within each area. As the educational method, a structured course based on lectures and practical sessions was designed. Multiple-choice questions were also developed to evaluate the participants' level of knowledge, while a semi-structured interview was prepared to investigate students' satisfaction.
RESULTS
Literature shows that the development of rehabilitation depends on the improvement of scientific and practical knowledge of health care professionals. This structured training course could be incorporated into undergraduate nursing education programmes and also be inserted into continuing education programmes for graduate nurses. Developing nurses' expertise in neuro-rehabilitation will be critical to improving overall care and care management of patients with highly complex disabilities, such as patients affected by brain tumors. The next step will be to start discussing, at the level of scientific societies linked to the field of neurorehabilitation and oncology, the development of a specialisation course in neurorehabilitation nursing.
CONCLUSIONS
[ "Brain Neoplasms", "Education, Nursing", "Humans", "Nurse's Role", "Patient Care" ]
3527182
Background
Neurological damage is the underlying cause of disability in around 40% of the most severely disabled people (who require daily help), and in the majority of people with complex disabilities involving a combination of physical, cognitive and behavioural impairments [1,2]. The complexity of neurological disability is well represented by the neuro-oncological population: in the course of the disease, in fact, patients affected by malignant brain tumor (BT) present multiple neurological deficits, due to primary tumor effects and the adverse effects of treatments, that pose important limitations to the patient’s everyday functioning [3]. Impaired cognition, weakness, visuo-perceptual and motor problems were the most common neurological deficits reported in the population of patients with BTs [4]. Because of the recent advances in surgical techniques, chemotherapy, and radiation therapy, survival times for patients with BTs have increased and more of these patients require rehabilitation support and services [5-8]. In fact, when cancer is viewed as a chronic disease, the concept of cancer rehabilitation becomes an important aspect of comprehensive care: patients not only expect physical rehabilitation, but also a broad range of services offered to develop skills which can enable them to cope with the long-term consequences of cancer diseases [9,10]. For this reason, provision of individual- and group-oriented rehabilitation programs satisfies the patients’ demands for continuity in care and for encouragement to develop self-management skills as described in the Chronic Care Model of the World Health Organization (WHO) [11]. Rehabilitation intervention in cancer patients is recommended both in the early stage of disease, for restoring function after surgery and cancer therapy, and in the advanced stage of disease as an important part of palliative care, with the aim to prevent complications, control the symptoms and maintain patients’ independence and quality of life [12-16]. 
In the context of rehabilitation care to disabled neurological patients, nurses play a key role as patients are highly dependent both on them and on healthcare assistants [17]. Rehabilitation nursing practice is a specialty area in which the aim is to help individuals with disabilities and chronic illnesses regain and maintain optimal health, but also to prevent the occurrence of common complications [18]. In the past, the lead for rehabilitation programmes often came from physiotherapists and occupational therapists. The contribution of the nurse to the rehabilitation process has not always been valued or regarded as an equal member of the rehabilitation team [19]. Nurses were expected to assume little more than an understudy’s role, providing the necessary care required by the patient who was preparing for “rehabilitation”. However, much of this care remained invisible and almost absent from the literature [20], despite Henderson [21] proclaiming that nurses were “rehabilitators par excellence”. She recognized that many of the components of nursing care were not so much basic but essential rehabilitation nursing skills such as relieving pain; helping with hygiene and mobilization; giving pressure area care; ensuring adequate nutrition; promoting and managing continence; giving emotional support; providing patients and caregivers education; and providing opportunities for adequate sleep, rest and stimulation. Unless such needs are fully met and built into an educational rehabilitation programme, all other activities are ineffective. In addition to their clinical role, rehabilitation nurses also have an important administrative function, effectively acting as case managers, especially in acute care and acute rehabilitation settings. In this role, nurses must advocate for patients and families, representing their concerns regarding care both within and outside the clinical setting [22-24]. 
The case manager must review each patient individually to establish what treatments and services are appropriate. This role is bound to become increasingly important in the context of the ever-increasing need to achieve better management of resources and shorter hospitalizations. Nurses who are interested in neuro-oncological rehabilitation are concerned with changes and functional abilities, rather than the disease process, and with how to improve the remaining time, rather than with how many months an individual has left to live. As Dietz states, in fact, the goal of rehabilitation for people with cancer is to improve the quality of life for maximum productivity with minimum dependence, regardless of life expectancy [25]. The complexity of knowledge and skills required to provide such comprehensive care to neuro-oncological patients illustrates the need for increasing specialisation within the health professions [26,27]. Although nursing is purportedly about meeting the needs of all, the development of an understanding of patients with disabilities is one area that is generally not given specific attention in undergraduate nursing curricula [28]. Only a third of nurses felt, with hindsight, that their pre-registration education had provided them with adequate skills and knowledge for their role in rehabilitation; furthermore, nurses have expressed the need to have access to more education and training focused on rehabilitation per se and associated clinical skills, in order to strengthen and raise the profile of their professional role [29-31]. In this regard, The Specialty Practice of Rehabilitation Nursing: A Core Curriculum, published by the Association of Rehabilitation Nurses (ARN), is a key text. Designed both for professionals entering rehabilitation nursing and for those already in the field, it is an important resource for those preparing for the Certified Rehabilitation Registered Nurse (CRRN) examination. 
In short, in the US, it is a fundamental reference guide to rehabilitation nursing [32]. Currently in Europe there is a discrepancy in training courses for nurses (Table 1): training programmes confer different titles, and the duration of training in years also differs. The only unifying element, which dates back to 1977, is represented by the European directives (77/452/EEC and 77/453/EEC, 27 June 1977) that governed the harmonization of programmes and the number of hours needed to become a nurse: 2300 of theory and 2300 of clinical practice (180 credits - CFU). Nursing education in Europe In Italy, the role of the nursing profession in the interdisciplinary specialty of neurorehabilitation remains poorly defined. There is currently no structured system allowing nurses to undertake further training to become nurse specialists (NSps) or nurse practitioners (NPs) in neurorehabilitation, and there is no system for the validation and accreditation of nursing skills. There therefore exists a need to promote excellence in rehabilitation nursing care by validating specialist knowledge and introducing qualifications in this area. These needs prompted us to propose a structured pathway that could be followed by staff nurses wishing to become NSps in neurorehabilitation. Specifically, the purposes of this paper are to identify areas of need within nurses’ clinical education and to propose an education course, defining the main topics to be included in a neurorehabilitation nursing core curriculum.
Methods
A literature review was conducted by means of PubMed, Cochrane database, and web searches for potentially relevant titles combining the search terms “nurses” and “nursing” with “education”, “rehabilitation”, “neurology”, “neuro-oncology”, “brain tumors”, “learning”, “core curriculum”. The main limits applied for the PubMed search were: clinical trial; meta-analysis; practice guideline; review; classical article; consensus development conference, NIH; guideline; journal article; newspaper article; MEDLINE; nursing journals; systematic reviews. Preference was given to works published between January 2000 and December 2008 in English. The search strategy identified 523 non-duplicated references, of which 271 titles were considered relevant. After reviewing the abstracts, 147 papers were selected and made available to a group of healthcare professionals (nurses, physicians, physiotherapists, psychologists) with specific experience in neurorehabilitation, to perform a final revision. Each professional reviewed the articles and identified a limited number of areas and related topics deemed, by them, fundamental for anyone seeking to acquire the knowledge and skills needed to practice rehabilitation nursing. The results were compared and discussed among the professionals in order to include the identified areas and topics in the course; a consensus level of ≥60% was required, otherwise the area or topic was discarded. Course description The discussion among the professionals led to the identification of the following five main areas: a) clinical aspects of nursing; b) nursing techniques; c) nursing methodology; d) relational and organisational models; e) legal aspects of nursing. The topics included in each area are listed in Table 2. Course areas and topics These issues have become the contents of a structured course, amounting to a total of 160 hours that includes three modules: theory (58 hours), practice (22 hours) and observation of experienced nurses (80 hours). 
The first module, delivered in the form of lectures, focused on theoretical aspects related to the five main areas. In the second and third modules, the participants received supervised practical training and were able to familiarise themselves with the logistics and use of various equipment, with patient management and with intervention protocols. Basic techniques were demonstrated and then applied by all the participants in turn. The course should last four weeks (6 days/week, 7 hours/day). The mornings will be devoted to supervised practical activities and observations on the ward, and the afternoons to theoretical lessons. The setting for all these activities should be a highly specialised neurorehabilitation unit. The course teachers should be physicians (neurologists, an anaesthetist, a physiatrist), nurses, bioengineers, psychologists, and physiotherapists, all with specific experience in the field of neurorehabilitation. The course will end with the presentation of a thesis. Self-administered questionnaires with multiple-choice answers regarding all the topics should be compiled by the participants to assess their basic level of knowledge, learning and satisfaction.
null
null
null
null
[ "Background", "Course description", "Competing interests", "Authors’ contributions" ]
[ "Neurological damage is the underlying cause of disability in around 40% of the most severely disabled people (who require daily help), and in the majority of people with complex disabilities involving a combination of physical, cognitive and behavioural impairments [1,2].\nThe complexity of neurological disability is well represented by neuro-oncological population: in the course of the disease, in fact, patients affected by malignant brain tumor (BT) present multiple neurological deficits, due to primary tumor effects and the adverse effects of treatments that pose important limitations to patient’s everyday functioning [3].\nImpaired cognition, weakness, visuo-perceptual and motor problems were the most common neurological deficits reported in the population of patients with BTs [4]. Because of the recent advances in surgical techniques, chemotherapy, and radiation therapy, survival times for patients with BTs have increased and more of these patients require rehabilitation support and services [5-8]. In fact, when cancer is viewed as a chronic disease, the concept of cancer rehabilitation becomes an important aspect of comprehensive care: patients not only expect physical rehabilitation, but also a broad range of services offered to develop skills which can enable them to cope with the long term consequences of cancer diseases [9,10]. 
For this reason provision of individual- and group-oriented rehabilitation programs satisfies the patients’ demands for continuity in care and for encouragement to develop self-management skills as described in the Chronic Care Model of the World Health Organization (WHO) [11].\nRehabilitation intervention in cancer patients is recommended both in early stage of disease, for restoring function after surgery and cancer therapy, and in advanced stage of disease as important part of palliative care with the aim to prevent complication, control the symptoms and maintain patients’ independence and quality of life [12-16].\nIn the context of rehabilitation care to disabled neurological patients, nurses play a key role as patients are highly dependent both on them and on healthcare assistants [17].\nRehabilitation nursing practice is a specialty area in which the aim is to help individuals with disabilities and chronic illnesses regain and maintain optimal health, but also to prevent the occurrence of common complications [18].\nIn the past, the lead for rehabilitation programmes often came from physiotherapists and occupational therapists. The contribution of the nurse to the rehabilitation process has not always been valued or regarded as an equal member of the rehabilitation team [19].\nNurses were expected to assume little more than an understudy’s role, providing the necessary care required by the patient who was preparing for “rehabilitation”. However, much of this care remained invisible and almost absent from the literature [20], despite Henderson [21] proclaiming that nurses were “rehabilitators par excellence”. 
She recognized that many of the components of nursing care were not so much basic but essential rehabilitation nursing skills such as relieving pain; helping with hygiene and mobilization; giving pressure area care; ensuring adequate nutrition; promoting and managing continence; giving emotional support; providing patients and caregivers education; and providing opportunities for adequate sleep, rest and stimulation. Unless such needs are fully met and built into an educational rehabilitation programme, all other activities are ineffective.\nIn addition to their clinical role, rehabilitation nurses also have an important administrative function, effectively acting as case managers, especially in acute care and acute rehabilitation settings. In this role, nurses must advocate for patients and families, representing their concerns regarding care both within and outside the clinical setting [22-24]. The case manager must review each patient individually to establish what treatments and services are appropriate. This role is bound to become increasingly important in the context of the ever-increasing need to achieve better management of resources and shorter hospitalizations.\nNurses who are interested in neuro-oncological rehabilitation are concerned with changes and functional abilities, rather than the disease process, and with how to improve the remaining time, rather than with how many months an individual has left to live. 
As Dietz states, in fact, the goal of rehabilitation for people with cancer is to improve the quality of life for maximum productivity with minimum dependence, regardless of life expectancy [25].\nThe complexity of knowledge and skills required to provide such comprehensive care to neuro-oncological patients illustrates the need for increasing specialisation within the health professions [26,27].\nAlthough nursing is purportedly about meeting the needs of all, the development of an understanding of patients with disabilities is one area that is generally not given specific attention in undergraduate nursing curricula [28]. Only a third of nurses felt, with hindsight, that their pre-registration education had provided them with adequate skills and knowledge for their role in rehabilitation; furthermore, nurses have expressed the need to have access to more education and training focused on rehabilitation per se and associated clinical skills, in order to strengthen and raise the profile of their professional role [29-31].\nIn this regard, The Specialty Practice of Rehabilitation Nursing: A Core Curriculum, published by the Association of Rehabilitation Nurses (ARN), is a key text. Designed both for professionals entering rehabilitation nursing and for those already in the field, it is an important resource for those preparing for the Certified Rehabilitation Registered Nurse (CRRN) examination. In short, in the US, it is a fundamental reference guide to rehabilitation nursing [32].\nCurrently, nurse training courses across Europe differ considerably (Table 1): they lead to different titles, and the duration of training in years also varies. 
The only unifying element, which dates back to 1977, is represented by the European directives (77/452/EEC and 77/453/EEC, 27 June 1977) that governed the harmonization of programs and the number of hours needed to become a nurse: 2300 hours of theory and 2300 hours of clinical practice (180 credits - CFU).\nNursing education in Europe\nIn Italy, the role of the nursing profession in the interdisciplinary specialty of neurorehabilitation remains poorly defined. There is currently no structured system allowing nurses to undertake further training to become nurse specialists (NSps) or nurse practitioners (NPs) in neurorehabilitation, and there is no system for the validation and accreditation of nursing skills. There is therefore a need to promote excellence in rehabilitation nursing care by validating specialist knowledge and introducing qualifications in this area.\nThese needs prompted us to propose a structured pathway that could be followed by staff nurses wishing to become NSps in neurorehabilitation. Specifically, the purposes of this paper are to identify areas of need within nurses’ clinical education and to propose an education course, defining the main topics to be included in a neurorehabilitation nursing core curriculum.", "The discussion among the professionals led to the identification of the following five main areas: a) clinical aspects of nursing; b) nursing techniques; c) nursing methodology; d) relational and organisational models; e) legal aspects of nursing. The topics included in each area are listed in Table 2.\nCourse areas and topics\nThese issues have become the contents of a structured course, amounting to a total of 160 hours that includes three modules: theory (58 hours), practice (22 hours) and observation of experienced nurses (80 hours).\nThe first module, delivered in the form of lectures, focused on theoretical aspects related to the five main areas. 
In the second and third modules, the participants received supervised practical training and were able to familiarise themselves with the logistics and use of various equipment, with patient management and with intervention protocols. Basic techniques were demonstrated and then applied by all the participants in turn.\nThe course should last four weeks (6 days/week, 7 hours/day). The mornings should be devoted to supervised practical activities and observations on the ward, and the afternoons to theoretical lessons. The setting for all these activities should be a highly specialised neurorehabilitation unit.\nThe course teachers should be physicians (neurologists, an anaesthetist, a physiatrist), nurses, bioengineers, psychologists, and physiotherapists, all with specific experience in the field of neurorehabilitation.\nThe course will end with the presentation of a thesis. Self-administered questionnaires with multiple choice answers and regarding all the topics should be compiled by the participants to assess their basic level of knowledge, learning and satisfaction.", "The authors declare that they have no competing interests.", "MB conceived the paper, interpreted data and wrote the final manuscript; CZ conceived the paper, interpreted data and wrote the final manuscript; AP reviewed and commented the last version of the manuscript; AMDN helped to revise the first draft of the manuscript; MS and GS reviewed and commented the last version of the manuscript; FP interpreted data, reviewed and commented the last version of the manuscript. All authors read and approved the final manuscript." ]
[ null, null, null, null ]
[ "Background", "Methods", "Course description", "Discussion", "Competing interests", "Authors’ contributions" ]
[ "Neurological damage is the underlying cause of disability in around 40% of the most severely disabled people (who require daily help), and in the majority of people with complex disabilities involving a combination of physical, cognitive and behavioural impairments [1,2].\nThe complexity of neurological disability is well represented by the neuro-oncological population: in the course of the disease, in fact, patients affected by malignant brain tumor (BT) present multiple neurological deficits, due to primary tumor effects and the adverse effects of treatments that pose important limitations to patient’s everyday functioning [3].\nImpaired cognition, weakness, visuo-perceptual and motor problems were the most common neurological deficits reported in the population of patients with BTs [4]. Because of the recent advances in surgical techniques, chemotherapy, and radiation therapy, survival times for patients with BTs have increased and more of these patients require rehabilitation support and services [5-8]. In fact, when cancer is viewed as a chronic disease, the concept of cancer rehabilitation becomes an important aspect of comprehensive care: patients not only expect physical rehabilitation, but also a broad range of services offered to develop skills which can enable them to cope with the long term consequences of cancer diseases [9,10]. 
For this reason, the provision of individual- and group-oriented rehabilitation programs satisfies the patients’ demands for continuity in care and for encouragement to develop self-management skills as described in the Chronic Care Model of the World Health Organization (WHO) [11].\nRehabilitation intervention in cancer patients is recommended both in the early stage of disease, for restoring function after surgery and cancer therapy, and in the advanced stage of disease as an important part of palliative care, with the aim of preventing complications, controlling symptoms and maintaining patients’ independence and quality of life [12-16].\nIn the context of rehabilitation care to disabled neurological patients, nurses play a key role as patients are highly dependent both on them and on healthcare assistants [17].\nRehabilitation nursing practice is a specialty area in which the aim is to help individuals with disabilities and chronic illnesses regain and maintain optimal health, but also to prevent the occurrence of common complications [18].\nIn the past, the lead for rehabilitation programmes often came from physiotherapists and occupational therapists. The contribution of the nurse to the rehabilitation process has not always been valued, nor has the nurse always been regarded as an equal member of the rehabilitation team [19].\nNurses were expected to assume little more than an understudy’s role, providing the necessary care required by the patient who was preparing for “rehabilitation”. However, much of this care remained invisible and almost absent from the literature [20], despite Henderson [21] proclaiming that nurses were “rehabilitators par excellence”. 
She recognized that many of the components of nursing care were not so much basic as essential rehabilitation nursing skills such as relieving pain; helping with hygiene and mobilization; giving pressure area care; ensuring adequate nutrition; promoting and managing continence; giving emotional support; providing patient and caregiver education; and providing opportunities for adequate sleep, rest and stimulation. Unless such needs are fully met and built into an educational rehabilitation programme, all other activities are ineffective.\nIn addition to their clinical role, rehabilitation nurses also have an important administrative function, effectively acting as case managers, especially in acute care and acute rehabilitation settings. In this role, nurses must advocate for patients and families, representing their concerns regarding care both within and outside the clinical setting [22-24]. The case manager must review each patient individually to establish what treatments and services are appropriate. This role is bound to become increasingly important in the context of the ever-increasing need to achieve better management of resources and shorter hospitalizations.\nNurses who are interested in neuro-oncological rehabilitation are concerned with changes and functional abilities, rather than the disease process, and with how to improve the remaining time, rather than with how many months an individual has left to live. 
As Dietz states, in fact, the goal of rehabilitation for people with cancer is to improve the quality of life for maximum productivity with minimum dependence, regardless of life expectancy [25].\nThe complexity of knowledge and skills required to provide such comprehensive care to neuro-oncological patients illustrates the need for increasing specialisation within the health professions [26,27].\nAlthough nursing is purportedly about meeting the needs of all, the development of an understanding of patients with disabilities is one area that is generally not given specific attention in undergraduate nursing curricula [28]. Only a third of nurses felt, with hindsight, that their pre-registration education had provided them with adequate skills and knowledge for their role in rehabilitation; furthermore, nurses have expressed the need to have access to more education and training focused on rehabilitation per se and associated clinical skills, in order to strengthen and raise the profile of their professional role [29-31].\nIn this regard, The Specialty Practice of Rehabilitation Nursing: A Core Curriculum, published by the Association of Rehabilitation Nurses (ARN), is a key text. Designed both for professionals entering rehabilitation nursing and for those already in the field, it is an important resource for those preparing for the Certified Rehabilitation Registered Nurse (CRRN) examination. In short, in the US, it is a fundamental reference guide to rehabilitation nursing [32].\nCurrently, nurse training courses across Europe differ considerably (Table 1): they lead to different titles, and the duration of training in years also varies. 
The only unifying element, which dates back to 1977, is represented by the European directives (77/452/EEC and 77/453/EEC, 27 June 1977) that governed the harmonization of programs and the number of hours needed to become a nurse: 2300 hours of theory and 2300 hours of clinical practice (180 credits - CFU).\nNursing education in Europe\nIn Italy, the role of the nursing profession in the interdisciplinary specialty of neurorehabilitation remains poorly defined. There is currently no structured system allowing nurses to undertake further training to become nurse specialists (NSps) or nurse practitioners (NPs) in neurorehabilitation, and there is no system for the validation and accreditation of nursing skills. There is therefore a need to promote excellence in rehabilitation nursing care by validating specialist knowledge and introducing qualifications in this area.\nThese needs prompted us to propose a structured pathway that could be followed by staff nurses wishing to become NSps in neurorehabilitation. Specifically, the purposes of this paper are to identify areas of need within nurses’ clinical education and to propose an education course, defining the main topics to be included in a neurorehabilitation nursing core curriculum.", "A literature review was conducted by means of PubMed, Cochrane database, and web searches for potentially relevant titles combining the search terms “nurses” and “nursing” with “education”, “rehabilitation”, “neurology”, “neuro-oncology”, “brain tumors”, “learning”, “core curriculum”. The main limits applied for the PubMed search were: clinical trial; meta-analysis; practice guideline; review; classical article; consensus development conference, NIH; guideline; journal article; newspaper article; MEDLINE; nursing journals; systematic reviews. Preference was given to works published between January 2000 and December 2008 in English. The search strategy identified 523 non-duplicated references of which 271 titles were considered relevant. 
After reviewing the abstracts, 147 papers were selected and made available to a group of healthcare professionals (nurses, physicians, physiotherapists, psychologists) with specific experience in neurorehabilitation, to perform a final revision.\nEach professional reviewed the articles and identified a limited number of areas and related topics deemed, by them, fundamental for anyone seeking to acquire the knowledge and skills needed to practice rehabilitation nursing.\nThe results were compared and discussed among the professionals in order to include the identified areas and topics in the course; a consensus level ≥ 60% was required; otherwise the area or topic was discarded.\n Course description The discussion among the professionals led to the identification of the following five main areas: a) clinical aspects of nursing; b) nursing techniques; c) nursing methodology; d) relational and organisational models; e) legal aspects of nursing. The topics included in each area are listed in Table 2.\nCourse areas and topics\nThese issues have become the contents of a structured course, amounting to a total of 160 hours that includes three modules: theory (58 hours), practice (22 hours) and observation of experienced nurses (80 hours).\nThe first module, delivered in the form of lectures, focused on theoretical aspects related to the five main areas. In the second and third modules, the participants received supervised practical training and were able to familiarise themselves with the logistics and use of various equipment, with patient management and with intervention protocols. Basic techniques were demonstrated and then applied by all the participants in turn.\nThe course should last four weeks (6 days/week, 7 hours/day). The mornings should be devoted to supervised practical activities and observations on the ward, and the afternoons to theoretical lessons. 
The setting for all these activities should be a highly specialised neurorehabilitation unit.\nThe course teachers should be physicians (neurologists, an anaesthetist, a physiatrist), nurses, bioengineers, psychologists, and physiotherapists, all with specific experience in the field of neurorehabilitation.\nThe course will end with the presentation of a thesis. Self-administered questionnaires with multiple choice answers and regarding all the topics should be compiled by the participants to assess their basic level of knowledge, learning and satisfaction.\nThe discussion among the professionals led to the identification of the following five main areas: a) clinical aspects of nursing; b) nursing techniques; c) nursing methodology; d) relational and organisational models; e) legal aspects of nursing. The topics included in each area are listed in Table 2.\nCourse areas and topics\nThese issues have become the contents of a structured course, amounting to a total of 160 hours that includes three modules: theory (58 hours), practice (22 hours) and observation of experienced nurses (80 hours).\nThe first module, delivered in the form of lectures, focused on theoretical aspects related to the five main areas. In the second and third modules, the participants received supervised practical training and were able to familiarise themselves with the logistics and use of various equipment, with patient management and with intervention protocols. Basic techniques were demonstrated and then applied by all the participants in turn.\nThe course should last four weeks (6 days/week, 7 hours/day). The mornings should be devoted to supervised practical activities and observations on the ward, and the afternoons to theoretical lessons. 
The setting for all these activities should be a highly specialised neurorehabilitation unit.\nThe course teachers should be physicians (neurologists, an anaesthetist, a physiatrist), nurses, bioengineers, psychologists, and physiotherapists, all with specific experience in the field of neurorehabilitation.\nThe course will end with the presentation of a thesis. Self-administered questionnaires with multiple choice answers and regarding all the topics should be compiled by the participants to assess their basic level of knowledge, learning and satisfaction.", "The discussion among the professionals led to the identification of the following five main areas: a) clinical aspects of nursing; b) nursing techniques; c) nursing methodology; d) relational and organisational models; e) legal aspects of nursing. The topics included in each area are listed in Table 2.\nCourse areas and topics\nThese issues have become the contents of a structured course, amounting to a total of 160 hours that includes three modules: theory (58 hours), practice (22 hours) and observation of experienced nurses (80 hours).\nThe first module, delivered in the form of lectures, focused on theoretical aspects related to the five main areas. In the second and third modules, the participants received supervised practical training and were able to familiarise themselves with the logistics and use of various equipment, with patient management and with intervention protocols. Basic techniques were demonstrated and then applied by all the participants in turn.\nThe course should last four weeks (6 days/week, 7 hours/day). The mornings should be devoted to supervised practical activities and observations on the ward, and the afternoons to theoretical lessons. 
The setting for all these activities should be a highly specialised neurorehabilitation unit.\nThe course teachers should be physicians (neurologists, an anaesthetist, a physiatrist), nurses, bioengineers, psychologists, and physiotherapists, all with specific experience in the field of neurorehabilitation.\nThe course will end with the presentation of a thesis. Self-administered questionnaires with multiple choice answers and regarding all the topics should be compiled by the participants to assess their basic level of knowledge, learning and satisfaction.", "This paper identifies the standard competencies of neurorehabilitation nurses and describes a proposed structured education course to train specialist nurses in neurorehabilitation care.\nTo this end, drawing on the expertise of different clinicians and professionals, a consensus was reached on a minimum core set of topics which covered five aspects of rehabilitation nursing: clinical, technical, methodological, organisational and legal.\nConsistent with previous literature, this review seems to support the need (perceived by nurses themselves) for specific education and training in order to work with people with complex neurological disabilities [33].\nIndeed, a wider investigation of the role of nurses within the multiprofessional rehabilitation team revealed gaps in the skills and knowledge of graduate nurses working in rehabilitation settings: while the role of nurses has evolved considerably, there are still obvious gaps in current rehabilitation nursing training [34].\nMoreover, the precise role of nurses in rehabilitation is not clearly defined: the literature shows that rehabilitation nursing has developed to various degrees worldwide. 
Furthermore, no comprehensive framework for the specialty practice of rehabilitation nursing can be found in the English language literature through Medline and Google searches [35].\nThe proposed course aims to fill these gaps, providing the necessary theoretical and practical bases to train a professional NSp in neurorehabilitation. Specifically, its main objectives are: (a) to train nurses, providing them with the expertise to manage the care of neurological patients with disabilities, in both the acute and the chronic phase; (b) to provide them with the skills needed to lead and coordinate multidisciplinary teams so as to ensure the comprehensive care of patients; (c) to transfer to them knowledge about the clinical tools and technologies adopted within the field of neurorehabilitation; (d) to impart to them a working method that will enable them to go on expanding their knowledge base as well as to pass it on to other care providers, implementing this knowledge throughout the healthcare system, thereby increasing levels of both safety and quality.\nThe Association of Rehabilitation Nurses in the US has published a series of documents to guide the development of rehabilitation nursing practice – in particular three editions of a core curriculum [36-38]. Outside the US, however, these publications seem to have had limited impact. Elsewhere, there seems to be agreement that the potential role of nurses in rehabilitation is yet to be fully realized [39-49]. To achieve this goal, the development and implementation of formal education courses could be a key strategy, making it possible to train advanced practice nurses, particularly neurorehabilitation specialists, who could fill the growing need for expert clinicians able to assume major leadership roles in clinical, management and research areas. The course proposed in this paper is based on a minimum set of topics, grouped into five main areas, and could serve as a basis for a core curriculum. 
This model includes extensive clinical practice and focuses strongly on evidence-based practice; moreover, it highlights the importance of cross-disciplinary teaching, which aims to bring together and harmonise different professional skills in an interprofessional education framework [50,51].\nInterdisciplinary healthcare teams with members from many professions have to work closely with each other in order to optimise patient care [52]. In this context, non-technical skills such as communication, collaboration, cooperation and reflection are crucial for effective practice. As interprofessional collaboration is an important element in total quality management, education on how to function within a team is essential [53]: healthcare workers with different knowledge and backgrounds have to harmonise their intervention plans according to the competencies and goals of the other team members [54].\nThis need for integration is even greater for neuro-oncological patients, in whom the clinical complexity deriving from the coexistence of disability at different levels requires a coordinated and synergistic intervention. Based on the bio-psycho-social model of the WHO and a holistic approach to rehabilitation, cancer rehabilitation should in fact comprise multidisciplinary efforts including, among others, medical, psychological and physiotherapeutic treatment as well as occupational therapy and functional therapy, depending on the patient’s functional status [55,56]. Maintaining continuity, through coordination, represents one aspect of rehabilitation in which nursing has a key role that has been widely addressed in the oncology nursing literature.\nWe believe that our findings have the potential to make a contribution to the development of rehabilitation nursing and that this training course, the first of its kind in Italy, could be incorporated into undergraduate nursing education programmes and also be inserted into continuing education programmes for graduate nurses. 
However, further research is needed to refine the contents of the teaching units and to evaluate the course’s feasibility and costs. The content of the entire curriculum is in fact open to modification on the basis of evaluations and feedback after the first implementation.\nProfessional rehabilitation nurses must, in fact, combine their practice with continuing education in order to acquire specific knowledge and skills that will contribute to more efficient rehabilitation processes and services.\nBy teaching registered nurses the principles of rehabilitation nursing, and creating, for them, the specific qualification of neurorehabilitation nurse, the quality of overall care for neurological patients could be improved, through fewer complications, shorter hospital stays, better outcomes and better support for families.\nRecent studies have reported that the presence of nurses with a higher educational level improves patient outcomes. In fact, although the link between level of training and quality of care has not been conclusively demonstrated, associations between nurses’ training and a range of patient outcomes, including mortality, are well documented [57,58].\nDeveloping nurses’ expertise in neurorehabilitation will be critical to improving overall care according to the “simultaneous care” model [59], particularly for patients affected by BT, for whom the integration of different professionals’ expertise can provide solutions to the complex needs of the patient and caregivers [60,61].\nIn this view, nurses can contribute to the quality and satisfaction of patients’ lives by developing a philosophy that incorporates rehabilitation principles as an integral part of their practice.\nThe nursing profession has already made a significant contribution to the body of knowledge in the field of rehabilitation of cancer patients and their families; new generations of allied health professionals need a solid grounding in clinical skills, but as already suggested by 
previous authors, they also need a strong educational background and attitudes that will enable them to build their profession as well as their own professional practice [62,63]. These attitudes and skills have been suggested to include a desire to engage in lifelong learning and professional growth and an ability to identify and critically evaluate their own practice and the underlying theories and perceptions that inform the practice of nursing [64].\nIn our view, the crucial next step will be to start discussing, at the level of scientific societies linked to the field of neurorehabilitation and oncology, the development of a specialisation course in neurorehabilitation nursing.", "The authors declare that they have no competing interests.", "MB conceived the paper, interpreted data and wrote the final manuscript; CZ conceived the paper, interpreted data and wrote the final manuscript; AP reviewed and commented the last version of the manuscript; AMDN helped to revise the first draft of the manuscript; MS and GS reviewed and commented the last version of the manuscript; FP interpreted data, reviewed and commented the last version of the manuscript. All authors read and approved the final manuscript." ]
[ null, "methods", null, "discussion", null, null ]
[ "Neurorehabilitation", "Nurses", "Education", "Certification", "Core curriculum", "Team", "Brain tumors" ]
Background: Neurological damage is the underlying cause of disability in around 40% of the most severely disabled people (who require daily help), and in the majority of people with complex disabilities involving a combination of physical, cognitive and behavioural impairments [1,2]. The complexity of neurological disability is well represented by the neuro-oncological population: in the course of the disease, in fact, patients affected by malignant brain tumor (BT) present multiple neurological deficits, due to primary tumor effects and the adverse effects of treatments that pose important limitations to patient’s everyday functioning [3]. Impaired cognition, weakness, visuo-perceptual and motor problems were the most common neurological deficits reported in the population of patients with BTs [4]. Because of the recent advances in surgical techniques, chemotherapy, and radiation therapy, survival times for patients with BTs have increased and more of these patients require rehabilitation support and services [5-8]. In fact, when cancer is viewed as a chronic disease, the concept of cancer rehabilitation becomes an important aspect of comprehensive care: patients not only expect physical rehabilitation, but also a broad range of services offered to develop skills which can enable them to cope with the long term consequences of cancer diseases [9,10]. For this reason, the provision of individual- and group-oriented rehabilitation programs satisfies the patients’ demands for continuity in care and for encouragement to develop self-management skills as described in the Chronic Care Model of the World Health Organization (WHO) [11]. Rehabilitation intervention in cancer patients is recommended both in the early stage of disease, for restoring function after surgery and cancer therapy, and in the advanced stage of disease as an important part of palliative care, with the aim of preventing complications, controlling symptoms and maintaining patients’ independence and quality of life [12-16]. 
In the context of rehabilitation care to disabled neurological patients, nurses play a key role as patients are highly dependent both on them and on healthcare assistants [17]. Rehabilitation nursing practice is a specialty area in which the aim is to help individuals with disabilities and chronic illnesses regain and maintain optimal health, but also to prevent the occurrence of common complications [18]. In the past, the lead for rehabilitation programmes often came from physiotherapists and occupational therapists. The contribution of the nurse to the rehabilitation process has not always been valued, nor has the nurse always been regarded as an equal member of the rehabilitation team [19]. Nurses were expected to assume little more than an understudy’s role, providing the necessary care required by the patient who was preparing for “rehabilitation”. However, much of this care remained invisible and almost absent from the literature [20], despite Henderson [21] proclaiming that nurses were “rehabilitators par excellence”. She recognized that many of the components of nursing care were not so much basic as essential rehabilitation nursing skills such as relieving pain; helping with hygiene and mobilization; giving pressure area care; ensuring adequate nutrition; promoting and managing continence; giving emotional support; providing patient and caregiver education; and providing opportunities for adequate sleep, rest and stimulation. Unless such needs are fully met and built into an educational rehabilitation programme, all other activities are ineffective. In addition to their clinical role, rehabilitation nurses also have an important administrative function, effectively acting as case managers, especially in acute care and acute rehabilitation settings. In this role, nurses must advocate for patients and families, representing their concerns regarding care both within and outside the clinical setting [22-24]. 
The case manager must review each patient individually to establish what treatments and services are appropriate. This role is bound to become increasingly important in the context of the ever-increasing need to achieve better management of resources and shorter hospitalizations. Nurses who are interested in neuro-oncological rehabilitation are concerned with changes and functional abilities, rather than the disease process, and with how to improve the remaining time, rather than with how many months an individual has left to live. As Dietz states, in fact, the goal of rehabilitation for people with cancer is to improve the quality of life for maximum productivity with minimum dependence, regardless of life expectancy [25]. The complexity of knowledge and skills required to provide such comprehensive care to neuro-oncological patients illustrates the need for increasing specialisation within the health professions [26,27]. Although nursing is purportedly about meeting the needs of all, the development of an understanding of patients with disabilities is one area that is generally not given specific attention in undergraduate nursing curricula [28]. Only a third of nurses felt, with hindsight, that their pre-registration education had provided them with adequate skills and knowledge for their role in rehabilitation; furthermore, nurses have expressed the need to have access to more education and training focused on rehabilitation per se and associated clinical skills, in order to strengthen and raise the profile of their professional role [29-31]. In this regard, The Specialty Practice of Rehabilitation Nursing: A Core Curriculum, published by the Association of Rehabilitation Nurses (ARN), is a key text. Designed both for professionals entering rehabilitation nursing and for those already in the field, it is an important resource for those preparing for the Certified Rehabilitation Registered Nurse (CRRN) examination. 
In short, in the US, it is a fundamental reference guide to rehabilitation nursing [32]. Currently, training courses for nurses differ across Europe (Table 1: Nursing education in Europe): they lead to different titles, and the duration of training in years also varies. The only unifying element, which dates back to 1977, is represented by the European directives (77/452/EEC and 77/453/EEC, 27 June 1977) that governed the harmonization of programs and the number of hours needed to become a nurse: 2300 hours of theory and 2300 hours of clinical practice (180 credits - CFU). In Italy, the role of the nursing profession in the interdisciplinary specialty of neurorehabilitation remains poorly defined. There is currently no structured system allowing nurses to undertake further training to become nurse specialists (NSps) or nurse practitioners (NPs) in neurorehabilitation, and there is no system for the validation and accreditation of nursing skills. There therefore exists a need to promote excellence in rehabilitation nursing care by validating specialist knowledge and introducing qualifications in this area. These needs prompted us to propose a structured pathway that could be followed by staff nurses wishing to become NSps in neurorehabilitation. Specifically, the purposes of this paper are to identify areas of need within nurses’ clinical education and to propose an education course, defining the main topics to be included in a neurorehabilitation nursing core curriculum. Methods: A literature review was conducted by means of PubMed, the Cochrane database, and web searches for potentially relevant titles, combining the search terms “nurses” and “nursing” with “education”, “rehabilitation”, “neurology”, “neuro-oncology”, “brain tumors”, “learning”, “core curriculum”.
The main limits applied for the PubMed search were: clinical trial; meta-analysis; practice guideline; review; classical article; consensus development conference, NIH; guideline; journal article; newspaper article; MEDLINE; nursing journals; systematic reviews. Preference was given to works published between January 2000 and December 2008 in English. The search strategy identified 523 non-duplicated references, of which 271 titles were considered relevant. After reviewing the abstracts, 147 papers were selected and made available to a group of healthcare professionals (nurses, physicians, physiotherapists, psychologists) with specific experience in neurorehabilitation, to perform a final revision. Each professional reviewed the articles and identified a limited number of areas and related topics that they deemed fundamental for anyone seeking to acquire the knowledge and skills needed to practice rehabilitation nursing. The results were compared and discussed among the professionals in order to include the identified areas and topics in the course; a consensus level ≥ 60% was required, otherwise the area or topic was discarded. Course description: The discussion among the professionals led to the identification of the following five main areas: a) clinical aspects of nursing; b) nursing techniques; c) nursing methodology; d) relational and organisational models; e) legal aspects of nursing. The topics included in each area are listed in Table 2 (Course areas and topics). These topics form the contents of a structured course totalling 160 hours and comprising three modules: theory (58 hours), practice (22 hours) and observation of experienced nurses (80 hours). The first module, delivered in the form of lectures, focused on theoretical aspects related to the five main areas.
In the second and third modules, the participants received supervised practical training and were able to familiarise themselves with the logistics and use of various equipment, with patient management and with intervention protocols. Basic techniques were demonstrated and then applied by all the participants in turn. The course should last four weeks (6 days/week, 7 hours/day). The mornings will be devoted to supervised practical activities and observations on the ward, and the afternoons to theoretical lessons. The setting for all these activities should be a highly specialised neurorehabilitation unit. The course teachers should be physicians (neurologists, an anaesthetist, a physiatrist), nurses, bioengineers, psychologists, and physiotherapists, all with specific experience in the field of neurorehabilitation. The course will end with the presentation of a thesis. Self-administered questionnaires with multiple-choice answers covering all the topics should be completed by the participants to assess their basic level of knowledge, learning and satisfaction. Discussion: This paper identifies the standard competencies of neurorehabilitation nurses and describes a proposed structured education course to train specialist nurses in neurorehabilitation care. To this end, drawing on the expertise of different clinicians and professionals, a consensus was reached on a minimum core set of topics which covered five aspects of rehabilitation nursing: clinical, technical, methodological, organisational and legal. Consistent with previous literature, this review seems to support the need (perceived by nurses themselves) for specific education and training in order to work with people with complex neurological disabilities [33]. Indeed, a wider investigation of the role of nurses within the multiprofessional rehabilitation team revealed gaps in the skills and knowledge of graduate nurses working in rehabilitation settings: while the role of nurses has evolved considerably, there are still obvious gaps in current rehabilitation nursing training [34]. Moreover, the precise role of nurses in rehabilitation is not clearly defined: the literature shows that rehabilitation nursing has developed to various degrees worldwide.
Furthermore, no comprehensive framework for the specialty practice of rehabilitation nursing can be found in the English language literature through Medline and Google searches [35]. The proposed course aims to fill these gaps, providing the necessary theoretical and practical bases to train a professional NSp in neurorehabilitation. Specifically, its main objectives are: (a) to train nurses, providing them with the expertise to manage the care of neurological patients with disabilities, in both the acute and the chronic phase; (b) to provide them with the skills needed to lead and coordinate multidisciplinary teams so as to ensure the comprehensive care of patients; (c) to transfer to them knowledge about the clinical tools and technologies adopted within the field of neurorehabilitation; (d) to impart to them a working method that will enable them to go on expanding their knowledge base as well as to pass it on to other care providers, implementing this knowledge throughout the healthcare system, thereby increasing levels of both safety and quality. The Association of Rehabilitation Nurses in the US has published a series of documents to guide the development of rehabilitation nursing practice – in particular three editions of a core curriculum [36-38]. Outside the US, however, these publications seem to have had limited impact. Elsewhere, there seems to be agreement that the potential role of nurses in rehabilitation is yet to be fully realized [39-49]. To achieve this goal, the development and implementation of formal education courses could be a key strategy, making it possible to train advanced practice nurses, particularly neurorehabilitation specialists, who could fill the growing need for expert clinicians able to assume major leadership roles in clinical, management and research areas. The course proposed in this paper is based on a minimum set of topics, grouped into five main areas, and could serve as a basis for a core curriculum.
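The ≥ 60% consensus rule described in the Methods, used to decide which candidate areas and topics entered this core set, can be sketched in a few lines. All topic names and vote counts below are hypothetical, purely for illustration:

```python
# Illustrative sketch (not the authors' actual procedure) of the >= 60%
# consensus rule: a candidate topic is retained only if at least 60% of
# the reviewing professionals rated it fundamental.

CONSENSUS_LEVEL = 0.60

def retained_topics(votes_by_topic, n_reviewers, threshold=CONSENSUS_LEVEL):
    """Keep topics whose share of 'fundamental' votes meets the threshold."""
    return [topic for topic, votes in votes_by_topic.items()
            if votes / n_reviewers >= threshold]

# Hypothetical votes from a panel of 5 professionals:
votes = {"dysphagia management": 5, "pressure area care": 4,
         "ward logistics": 2}
print(retained_topics(votes, n_reviewers=5))
# 'ward logistics' (2/5 = 40%) falls below the 60% consensus level
```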
This model includes extensive clinical practice and focuses strongly on evidence-based practice; moreover, it highlights the importance of cross-disciplinary teaching, which aims to bring together and harmonise different professional skills in an interprofessional education framework [50,51]. Interdisciplinary healthcare teams with members from many professions have to work closely with each other in order to optimise patient care [52]. In this context, non-technical skills such as communication, collaboration, cooperation and reflection are crucial for effective practice. As interprofessional collaboration is an important element in total quality management, education on how to function within a team is essential [53]: healthcare workers with different knowledge and backgrounds have to harmonise their intervention plans according to the competencies and goals of the other team members [54]. This need for integration is even greater for neuro-oncological patients, in whom the clinical complexity deriving from the coexistence of disabilities at different levels requires a coordinated and synergistic intervention. Based on the WHO’s bio-psycho-social model and a holistic approach to rehabilitation, cancer rehabilitation should in fact comprise multidisciplinary efforts including, among others, medical, psychological and physiotherapeutic treatment as well as occupational therapy and functional therapy, depending on the patient’s functional status [55,56]. Maintaining continuity, through coordination, represents one aspect of rehabilitation in which nursing has a key role that has been widely addressed in the oncology nursing literature. We believe that our findings have the potential to make a contribution to the development of rehabilitation nursing and that this training course, the first of its kind in Italy, could be incorporated into undergraduate nursing education programmes and also be inserted into continuing education programmes for graduate nurses.
However, further research is needed to refine the contents of the teaching units and to evaluate its feasibility and costs. The content of the entire curriculum is in fact open to modification on the basis of evaluations and feedback after the first implementation. Professional rehabilitation nurses must, in fact, combine their practice with continuing education in order to acquire specific knowledge and skills that will contribute to more efficient rehabilitation processes and services. By teaching registered nurses the principles of rehabilitation nursing, and creating for them the specific qualification of neurorehabilitation nurse, the quality of overall care for neurological patients could be improved, through fewer complications, shorter hospital stays, better outcomes and better support for families. Recent studies have reported that the presence of nurses with a higher educational level improves patients’ outcomes. In fact, although the link between level of training and quality of care has not been conclusively demonstrated, associations between a series of patient outcomes, including mortality, and the training of nurses are well documented [57,58]. Developing expertise in neurorehabilitation for nurses will be critical to improving overall care according to the “simultaneous care” model [59], particularly for patients affected by BT, for whom the integration of different professionals’ expertise can provide solutions to the complex needs of the patient and caregivers [60,61]. In this view, nurses can contribute to the quality and satisfaction of patients’ lives by developing a philosophy that incorporates rehabilitation principles as an integral part of their practice.
The nursing profession has already made a significant contribution to the body of knowledge in the field of rehabilitation of the cancer patient and his/her family; new generations of allied health professionals need a solid grounding in clinical skills, but, as already suggested by previous authors, they also need a strong educational background and attitudes that will enable them to build their profession as well as their own professional practice [62,63]. These attitudes and skills have been suggested to include a desire to engage in lifelong learning and professional growth and an ability to identify and critically evaluate their own practice and the underlying theories and perceptions that inform the practice of nursing [64]. In our view, the crucial next step will be to start discussing, at the level of scientific societies linked to the field of neurorehabilitation and oncology, the development of a specialisation course in neurorehabilitation nursing. Competing interests: The authors declare that they have no competing interests. Authors’ contributions: MB conceived the paper, interpreted data and wrote the final manuscript; CZ conceived the paper, interpreted data and wrote the final manuscript; AP reviewed and commented the last version of the manuscript; AMDN helped to revise the first draft of the manuscript; MS and GS reviewed and commented the last version of the manuscript; FP interpreted data, reviewed and commented the last version of the manuscript. All authors read and approved the final manuscript.
Background: The neuro-oncological population well expresses the complexity of neurological disability, owing to the multiple neurological deficits that affect these patients. Moreover, thanks to new therapeutic opportunities, survival times for patients with brain tumors have increased, and more of these patients require rehabilitation care. The role of the nurse in the interdisciplinary specialty of neurorehabilitation is not clearly defined, even though it is recognized as critical and is expanding. The purpose of the study is to identify the standard competencies for neurorehabilitation nurses that could be taught by means of a specialization course. Methods: A literature review was conducted, with preference given to works published between January 2000 and December 2008 in English. The search strategy identified 523 non-duplicated references, of which 271 titles were considered relevant. After reviewing the abstracts, 147 papers were selected and made available to a group of healthcare professionals, who were asked to classify them into a few main conceptual areas and to define the relative topics. Results: The following five main areas were identified: clinical aspects of nursing; nursing techniques; nursing methodology; relational and organisational models; legal aspects of nursing. The relative topics were included within each area. As the educational method, a structured course based on lectures and practical sessions was designed. Multiple-choice questions were also developed in order to evaluate the participants' level of knowledge, while a semi-structured interview was prepared to investigate students' satisfaction. Conclusions: The literature shows that the development of rehabilitation depends on the improvement of the scientific and practical knowledge of health care professionals. This structured training course could be incorporated into undergraduate nursing education programmes and also be inserted into continuing education programmes for graduate nurses.
Developing expertise in neurorehabilitation for nurses will be critical to improving overall care and the care management of patients with highly complex disabilities, such as patients affected by brain tumors. The next step will be to start discussing, at the level of scientific societies linked to the field of neurorehabilitation and oncology, the development of a specialisation course in neurorehabilitation nursing.
null
null
3,941
385
[ 1283, 316, 10, 84 ]
6
[ "rehabilitation", "nursing", "nurses", "course", "patients", "care", "practice", "neurorehabilitation", "hours", "education" ]
[ "concept cancer rehabilitation", "field neurorehabilitation oncology", "neurorehabilitation oncology", "rehabilitation intervention cancer", "neuro oncological rehabilitation" ]
null
null
null
[CONTENT] Neurorehabilitation | Nurses | Education | Certification | Core curriculum | Team | Brain tumors [SUMMARY]
[CONTENT] Neurorehabilitation | Nurses | Education | Certification | Core curriculum | Team | Brain tumors [SUMMARY]
null
null
[CONTENT] Neurorehabilitation | Nurses | Education | Certification | Core curriculum | Team | Brain tumors [SUMMARY]
null
[CONTENT] Brain Neoplasms | Education, Nursing | Humans | Nurse's Role | Patient Care [SUMMARY]
[CONTENT] Brain Neoplasms | Education, Nursing | Humans | Nurse's Role | Patient Care [SUMMARY]
null
null
[CONTENT] Brain Neoplasms | Education, Nursing | Humans | Nurse's Role | Patient Care [SUMMARY]
null
[CONTENT] concept cancer rehabilitation | field neurorehabilitation oncology | neurorehabilitation oncology | rehabilitation intervention cancer | neuro oncological rehabilitation [SUMMARY]
[CONTENT] concept cancer rehabilitation | field neurorehabilitation oncology | neurorehabilitation oncology | rehabilitation intervention cancer | neuro oncological rehabilitation [SUMMARY]
null
null
[CONTENT] concept cancer rehabilitation | field neurorehabilitation oncology | neurorehabilitation oncology | rehabilitation intervention cancer | neuro oncological rehabilitation [SUMMARY]
null
[CONTENT] rehabilitation | nursing | nurses | course | patients | care | practice | neurorehabilitation | hours | education [SUMMARY]
[CONTENT] rehabilitation | nursing | nurses | course | patients | care | practice | neurorehabilitation | hours | education [SUMMARY]
null
null
[CONTENT] rehabilitation | nursing | nurses | course | patients | care | practice | neurorehabilitation | hours | education [SUMMARY]
null
[CONTENT] rehabilitation | patients | care | nursing | nurses | role | disease | important | cancer | skills [SUMMARY]
[CONTENT] course | hours | nursing | participants | topics | areas | aspects | nurses | supervised | supervised practical [SUMMARY]
null
null
[CONTENT] rehabilitation | nursing | nurses | manuscript | course | hours | patients | authors declare | competing | declare [SUMMARY]
null
[CONTENT] Neuro ||| ||| ||| [SUMMARY]
[CONTENT] January 2000 | December 2008 | English ||| 523 | 271 ||| 147 [SUMMARY]
null
null
[CONTENT] ||| ||| ||| ||| January 2000 | December 2008 | English ||| 523 | 271 ||| 147 ||| ||| five ||| ||| ||| ||| ||| ||| ||| [SUMMARY]
null
The prospective effects of workplace violence on physicians' job satisfaction and turnover intentions: the buffering effect of job control.
24438449
Health care professionals, including physicians, are at high risk of encountering workplace violence. At the same time physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide. The present study examined the prospective associations of work-related physical violence and bullying with physicians' turnover intentions and job satisfaction. In addition, we tested whether job control would modify these associations.
BACKGROUND
The present study was a 4-year longitudinal survey study, with data gathered in 2006 and 2010. The present sample included 1515 (61% women) Finnish physicians aged 25-63 years at baseline. Analyses of covariance (ANCOVA) were conducted while adjusting for gender, age, baseline levels, specialisation status, and employment sector.
METHODS
The results of covariance analyses showed that physical violence led to increased physician turnover intentions and that both bullying and physical violence led to reduced physician job satisfaction even after adjustments. We also found that opportunities for job control were able to alleviate the increase in turnover intentions resulting from bullying.
RESULTS
Our results suggest that workplace violence is an extensive problem in the health care sector and may lead to increased turnover and job dissatisfaction. Thus, health care organisations should approach this problem through different means, for example, by giving health care employees more opportunities to control their own work.
CONCLUSIONS
[ "Adult", "Bullying", "Female", "Finland", "Humans", "Job Satisfaction", "Male", "Middle Aged", "Personnel Turnover", "Physicians", "Prospective Studies", "Workplace Violence" ]
3898009
Background
Health care professionals, including physicians, are at high risk of encountering workplace violence. For example, 59 per cent of Australian general practitioners reported that they had experienced work-related violence during the previous 12 months [1]. In US emergency departments, 75 per cent of physicians had encountered verbal violence and 28 per cent indicated that they had been victims of physical assault in the previous 12 months [2]. In another study, 96 per cent of physician respondents in US emergency departments reported experiencing verbal violence and 78 per cent a verbal threat during the previous 6 months [3]. In a study conducted among hospital and community physicians in Israel, 56 per cent reported verbal violence and 9 per cent physical assault during the previous year [4]. In Finland, every fifth physician reported having encountered physical violence or the threat of it in the previous year [5]. Workplace violence may have many negative ramifications for health care employees. Workplace violence has been associated with lower job satisfaction and higher levels of turnover intentions in nurses and home healthcare assistants [6,7]. Moreover, workplace violence has been found to affect negatively hospital personnel’s health [8] and increase sickness absences [9]. In physicians, work-related violence has been shown to lead to reduced job satisfaction and a decline in job performance [10]. In addition, among healthcare professionals, workplace violence may lead to difficulties in listening to patients, rumination, poor concentration, and intrusive thoughts [11], as well as impact negatively on family life and quality of life [4]. From the health care sector’s point of view, tackling workplace violence encountered by physicians is important given that it can lead to lower job satisfaction and increased turnover. Physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide. 
Physician turnover may lead to decreased productivity, decreased quality of care and to an increased need to recruit and train new physicians. This is costly and may affect health outcomes [12,13]. In the US it has been estimated that the minimum cost of turnover may represent a loss of over 5 per cent of the total annual operating budget, due to hiring and training costs and productivity loss [14]. Job control refers to job and organisational characteristics, such as the employee’s decision-making authority, opportunities to participate, and opportunities to use skills and knowledge. Job control may have direct effects on job attitudes, health and wellbeing. In a study among Finnish anaesthesiologists, job control appeared as one of the most important work-related factors in relation to physicians’ work-related wellbeing [15]. Previous studies have repeatedly demonstrated the importance of job control for employees’ health. For example, low job control has been associated with increased myocardial infarction risk [16], increased heart disease risk [17], higher blood pressure [18], and to greater fibrinogen responses to stress [19]. Moreover, low job control has been associated with an increased number of sick-leave spells [20] and with poorer self-rated health eight years later [21]. In a study among emergency physicians, psychological health was not affected by the number and nature of hours worked but by the ability to control working hours and the perceived flexibility of the workplace [22]. High job control at work may protect employees from developing job dissatisfaction and psychiatric distress [23]. High job control may additionally increase organisational commitment [24] and decrease work-related anger [25]. A positive change in job control over a 4-year period was associated with higher levels of physical activity and self-rated health and lower levels of distress [26]. Job control has also been associated with job performance and ability to learn [27]. 
In addition, previous studies have shown that low control opportunities may affect employees’ attitudes to staying in or leaving a job, given that low job control has been associated with increased levels of retirement intentions [28,29]. In addition, job control has been found to mitigate retirement intentions associated with poor health and low work ability among physicians [30]. High job control may be viewed as a potential coping factor that helps distressed employees cope with demanding situations and, thus, lessen their job dissatisfaction and intentions to quit. According to Spector [31], job control can affect a person’s choice of coping strategy in a way that perceived high control is likely to lead to constructive coping, whereas lack of control is more likely to lead to destructive coping. Previous studies have indeed associated job control with successful coping [32,33] and successful coping, in turn, has been associated with fewer turnover intentions in demanding and stressful situations, such as organisational change [34,35]. Job control may provide flexibility to avoid certain tasks that have a high risk of violence and to take breaks from work, which helps employees to regulate emotional responses and reappraise work challenges more positively [36]. Frese [37] has suggested that control enables a person to perform the most stressful tasks when that person feels particularly able to do them; that is, people can adjust the situation according to their needs, and can, therefore, be more relaxed in their work. Control may also act as a safety signal, given that a person with a high degree of control knows that he or she is able to change the situation if it becomes too difficult, thus knowing that the conditions may never be worse than he or she is willing to withstand [38]. Thus, many opportunities to control one’s job may act as a buffer against the negative effects of stressful working conditions such as work-related violence.
The aim of the present study was to examine the associations of work-related violence (physical violence and bullying) with turnover intentions and job satisfaction in a four-year follow-up among Finnish physicians. Specifically, we were interested to see whether job control would modify these associations. We hypothesised that both physical violence and bullying would be associated with increased levels of turnover intentions and decreased job satisfaction. We additionally hypothesised that job control would act as a buffer for these negative effects of work-related violence.
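A buffering (moderation) hypothesis of this kind is typically tested by adding a violence × job-control interaction term to the regression model: a negative interaction coefficient means that the effect of violence weakens as job control rises. The following sketch illustrates the idea on simulated data; all variable names, effect sizes and the plain-OLS estimator are invented for illustration and are not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated data: the buffering pattern is built in on purpose.
violence = rng.integers(0, 3, n)           # 0 = never ... 2 = yearly or more
control = rng.normal(0.0, 1.0, n)          # standardised job control
turnover = (0.5 * violence                 # violence raises intentions...
            - 0.3 * violence * control     # ...less so when control is high
            + rng.normal(0.0, 0.5, n))

# OLS with an interaction term: y ~ b0 + b1*violence + b2*control + b3*v*c
X = np.column_stack([np.ones(n), violence, control, violence * control])
b0, b1, b2, b3 = np.linalg.lstsq(X, turnover, rcond=None)[0]

print(round(b1, 2), round(b3, 2))  # b1 > 0 (main effect), b3 < 0 (buffering)
```

In the study itself the corresponding test is carried out within the ANCOVA framework while adjusting for the covariates listed in the Methods; the interaction logic is the same.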
Methods
The present study is part of the Finnish Health Care Professionals Study, in which we drew a random sample of 5000 physicians in Finland (30% of the whole physician population) from the 2006 database of physicians maintained by the Finnish Medical Association. The register covers all licensed physicians in Finland. Phase 1 data were gathered with postal questionnaires in 2006 (Figure 1). Non-respondents were twice sent a reminder and a copy of the questionnaire. Responses were received from 2841 physicians (response rate 57%). The sample is representative of the eligible population in terms of age, gender, and employment sector [39]. The study flow diagram. Phase 2 took place four years later in 2010, with data gathered using either a web-based or a traditional postal survey. At phase 1 the respondents were asked their permission to participate in follow-up surveys and 2206 agreed. Those who had died or had incorrect address information were excluded (N = 37); thus, at phase 2 the survey was sent to 2169 physicians. First, an e-mail invitation to participate in the web-based survey was sent, followed by two reminders. For those who did not respond to these, a postal questionnaire was then sent once. E-mail and postal addresses were obtained from the Finnish Medical Association. The total number of respondents was 1705 (response rate 79%), of which 1018 (60%) answered the web-based and 687 (40%) the postal questionnaire (the response format is adjusted for in the analyses). Of these, 190 had incomplete data and were excluded; the final study sample therefore includes 1515 physicians (61% women) aged 25–63 years (mean = 45.7 years) (2006). Ethical approval for this study was obtained from the Ethical Review Board of the National Institute for Health and Welfare. Measures: The present study used violence variables, job control, baseline turnover intentions, baseline job satisfaction, and covariates from phase 1.
The outcome (turnover intentions and job satisfaction) variables were taken from phase 2. Job satisfaction was assessed with the mean of 3 items derived from Hackman and Oldham’s [40] Job Diagnostic Survey on a 5-point scale, ranging from 1 (totally disagree) to 5 (totally agree). Cronbach’s alpha coefficient for this study was 0.66 at phase 1 and 0.88 at phase 2 (an example of the items: "I am generally satisfied with my work.") Turnover intentions were measured with the mean of three questions concerning willingness to (a) change to other physician work, (b) to another profession, and (c) to quit (α = 0.61 at phase 1 and 0.66 at phase 2). The response alternatives were "1 = no, 2 = perhaps, and 3 = yes". Physical violence was measured with a question asking whether the respondent had experienced work-related violence (such as kicking and hitting) or had been threatened with it and how often. Responses were coded as: 0 = never, 1 = less than once a year, 2 = once a year or more often. Bullying was asked with the following question: "Psychological violence means continuous repetitive bullying, victimising or offending treatment. Are you now or have you previously been a target of this kind of psychological violence and bullying in your own work?" The answer options were 0 = no and 1 = yes. Job control was measured by combining the skill discretion (6 items) and decision authority (3 items) scales derived from Karasek’s Job Content Questionnaire JCQ [41]. Skill discretion measures how much the job requires skill, creativity, task variety, and learning of new skills (e.g., "My job requires that I learn new things"). Decision authority measures the freedom to make independent decisions and possibilities to choose how to perform work (e.g., "I have a lot of say about what happens in my job"). The items were rated on a 5-point Likert-scale, ranging from 1 (totally disagree) to 5 (totally agree) (α = 0.76 at phase 1). 
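The scale scoring described above (each scale score is the mean of its items, with internal consistency reported as Cronbach's alpha) can be illustrated with a small sketch. The item values below are invented for illustration only, not data from the study; the formula is the standard alpha = k/(k-1) * (1 - Σ item variances / variance of item sums).

```python
# Illustrative sketch with synthetic data (not the study's data):
# scale score = mean of the items; reliability = Cronbach's alpha.
def cronbach_alpha(items):
    """items: list of equally long lists, one list per scale item."""
    k = len(items)

    def var(xs):  # population variance; the n vs n-1 choice cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col) for col in zip(*items)]  # per-respondent item sums
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# three hypothetical 5-point job-satisfaction items from four respondents
item1 = [5, 4, 2, 3]
item2 = [4, 4, 1, 3]
item3 = [5, 3, 2, 2]

scores = [sum(t) / 3 for t in zip(item1, item2, item3)]  # per-person scale mean
alpha = cronbach_alpha([item1, item2, item3])
```

Because the same variance denominator is used in the numerator and denominator of the ratio, alpha is identical whether population or sample variances are used.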
Covariates included gender, age, specialisation status (specialist, specialisation on-going, and not specialised), and employment sector (hospital, primary care, and other). Statistical analyses Analyses of covariance (ANCOVA) were conducted, with turnover intentions at phase 2 as the dependent variable; physical violence, bullying, job control, gender, age, baseline turnover intentions, response format, specialisation status, and employment sector from phase 1 were included as independent variables. The analyses were conducted in four steps. In the first step, the univariate effects of physical violence, bullying, and job control on turnover intentions were examined in separate analyses (Model A). The second step included all the above-mentioned variables plus gender, age, baseline turnover intentions, and response format (Model B). In the third step, specialisation status and employment sector were additionally added (Model C). Finally, the interactions of job control with physical violence and bullying were added (Model D). A similar series of ANCOVAs was conducted with job satisfaction at phase 2 as the dependent variable.
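As a rough sketch of this four-step idea: an ANCOVA with continuous and dummy-coded predictors is equivalent to an ordinary least-squares regression, and each modelling step simply adds columns to the design matrix. The data, variable set, and effect sizes below are simulated and heavily simplified (they are not the study's), shown only to illustrate how adjustment changes the exposure estimate.

```python
import numpy as np

# Synthetic data: phase 2 turnover intentions depend on violence exposure,
# job control, and the phase 1 (baseline) level, plus noise.
rng = np.random.default_rng(0)
n = 500
violence = rng.integers(0, 3, n)         # exposure coded 0/1/2, as in the study
job_control = rng.normal(3.5, 0.5, n)    # 1-5 scale score
baseline = rng.normal(1.8, 0.4, n)       # baseline turnover intentions
outcome = (1.0 + 0.15 * violence - 0.2 * job_control
           + 0.5 * baseline + rng.normal(0, 0.3, n))

def fit(columns):
    """OLS fit of outcome on an intercept plus the given predictor columns."""
    X = np.column_stack([np.ones(n)] + list(columns))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

b_a = fit([violence])                          # Model A: univariate effect
b_b = fit([violence, job_control, baseline])   # Model B: adjusted effect
b_d = fit([violence, job_control, baseline,
           violence * job_control])            # Model D: interaction term added
```

With independent predictors the adjusted violence coefficient stays close to the true value while the residual variance shrinks; in real data, adjustment typically moves the estimate as well.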
Results
Table 1 shows the characteristics of the study sample. Sixty-one per cent had encountered physical violence in their career and 19 per cent had encountered bullying. The majority of the participants (77%) were specialised physicians; 46 per cent worked in hospitals, 22 per cent in primary care, and 32 per cent in other healthcare settings. The within-subjects differences in turnover intention and job satisfaction levels between phase 1 and phase 2 were examined with GLM repeated measures analyses. These analyses showed that job satisfaction levels had increased (F = 35.2, p < 0.001) and turnover intentions had decreased (F = 23.2, p < 0.001) during the study period. Characteristics of the study sample Table 2 shows the results from the ANCOVA models for turnover intentions. Physical violence and bullying were associated with higher levels of turnover intentions, and job control was associated with lower levels of turnover intentions. However, only the association between physical violence and turnover intentions persisted after adjusting for baseline level, response format, demographics and work-related variables. Older respondents were less likely to have turnover intentions than their younger counterparts. The results of the analyses of covariance for turnover intentions aModel A included univariate effects. bModel B included physical violence, bullying, job control, gender, age, baseline turnover intentions, and response format. cModel C included the variables from Model B plus specialisation status and employment sector. dModel D included, in addition to Model C, the interactions physical violence*job control and bullying*job control. The interaction between bullying and job control was significant for turnover intentions. As Figure 2 shows, job control was not related to turnover intentions among those who had not encountered bullying, whereas among those who had encountered bullying, higher job control was associated with lower levels of turnover intentions. 
That is, the highest levels of turnover intentions were among those who had low job control opportunities and had encountered bullying. The interaction between bullying and job control for turnover intentions. Estimated marginal means among those scoring low (below median) and high (above median) in job control, adjusted for baseline level, response format, demographics, and work-related variables. Table 3 shows the results from the ANCOVA models for job satisfaction. Physical violence and bullying were associated with lower levels of job satisfaction, and job control was associated with higher levels of job satisfaction. These associations persisted even after adjusting for baseline level, response format, demographics and work-related variables. Older respondents, those who answered by post, and those who worked in an employment sector other than hospitals or primary care were more likely to be satisfied with their jobs than their counterparts. The interactions with job control were not significant for job satisfaction. The results of the analyses of covariance for job satisfaction aModel A included univariate effects. bModel B included physical violence, bullying, job control, gender, age, baseline job satisfaction, and response format. cModel C included the variables from Model B plus specialisation status and employment sector. dModel D included, in addition to Model C, the interactions physical violence*job control and bullying*job control.
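The median-split reading of this interaction can be sketched as follows. The values below are invented for illustration only (they are not the study's estimated marginal means); they merely reproduce the buffering pattern described above, in which job control differences matter mainly among those who report bullying.

```python
# Hypothetical data: bullying indicator, job control score, and turnover
# intentions for eight respondents (invented for illustration).
bullied     = [1, 1, 1, 1, 0, 0, 0, 0]
job_control = [2.0, 2.5, 4.0, 4.5, 2.0, 2.5, 4.0, 4.5]
turnover    = [2.8, 2.6, 1.9, 1.7, 1.8, 1.9, 1.7, 1.8]

# lower median of job control; "high" = strictly above the median
median = sorted(job_control)[len(job_control) // 2 - 1]

def cell_mean(b, high):
    """Mean turnover intentions in one bullying-by-control cell."""
    vals = [t for t, bu, jc in zip(turnover, bullied, job_control)
            if bu == b and (jc > median) == high]
    return sum(vals) / len(vals)

# buffering pattern: the low-minus-high control gap is large for the
# bullied group and near zero for the non-bullied group
gap_bullied = cell_mean(1, False) - cell_mean(1, True)
gap_not_bullied = cell_mean(0, False) - cell_mean(0, True)
```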
Conclusions
Our results suggest that promoting employees’ control opportunities in health care organisations might help to buffer the negative effects of workplace violence on physicians’ turnover intentions. In addition, we showed that physical violence and bullying have longitudinal effects on job satisfaction and turnover intentions. Workplace violence is an extensive problem, especially in the health care sector, and organisations should approach it through multiple means, such as giving health care employees more opportunities to control their own work, in addition to direct measures.
[ "Background", "Measures", "Statistical analyses", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Health care professionals, including physicians, are at high risk of encountering workplace violence. For example, 59 per cent of Australian general practitioners reported that they had experienced work-related violence during the previous 12 months\n[1]. In US emergency departments, 75 per cent of physicians had encountered verbal violence and 28 per cent indicated that they had been victims of physical assault in the previous 12 months\n[2]. In another study, 96 per cent of physician respondents in US emergency departments reported experiencing verbal violence and 78 per cent a verbal threat during the previous 6 months\n[3]. In a study conducted among hospital and community physicians in Israel, 56 per cent reported verbal violence and 9 per cent physical assault during the previous year\n[4]. In Finland, every fifth physician reported having encountered physical violence or the threat of it in the previous year\n[5].\nWorkplace violence may have many negative ramifications for health care employees. Workplace violence has been associated with lower job satisfaction and higher levels of turnover intentions in nurses and home healthcare assistants\n[6,7]. Moreover, workplace violence has been found to affect negatively hospital personnel’s health\n[8] and increase sickness absences\n[9]. In physicians, work-related violence has been shown to lead to reduced job satisfaction and a decline in job performance\n[10]. In addition, among healthcare professionals, workplace violence may lead to difficulties in listening to patients, rumination, poor concentration, and intrusive thoughts\n[11], as well as impact negatively on family life and quality of life\n[4].\nFrom the health care sector’s point of view, tackling workplace violence encountered by physicians is important given that it can lead to lower job satisfaction and increased turnover. Physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide. 
Physician turnover may lead to decreased productivity, decreased quality of care and to an increased need to recruit and train new physicians. This is costly and may affect health outcomes\n[12,13]. In the US it has been estimated that the minimum cost of turnover may represent a loss of over 5 per cent of the total annual operating budget, due to hiring and training costs and productivity loss\n[14].\nJob control refers to job and organisational characteristics, such as the employee’s decision-making authority, opportunities to participate, and opportunities to use skills and knowledge. Job control may have direct effects on job attitudes, health and wellbeing. In a study among Finnish anaesthesiologists, job control appeared as one of the most important work-related factors in relation to physicians’ work-related wellbeing\n[15]. Previous studies have repeatedly demonstrated the importance of job control for employees’ health. For example, low job control has been associated with increased myocardial infarction risk\n[16], increased heart disease risk\n[17], higher blood pressure\n[18], and to greater fibrinogen responses to stress\n[19]. Moreover, low job control has been associated with an increased number of sick-leave spells\n[20] and with poorer self-rated health eight years later\n[21]. In a study among emergency physicians, psychological health was not affected by the number and nature of hours worked but by the ability to control working hours and the perceived flexibility of the workplace\n[22].\nHigh job control at work may protect employees from developing job dissatisfaction and psychiatric distress\n[23]. High job control may additionally increase organisational commitment\n[24] and decrease work-related anger\n[25]. A positive change in job control over a 4-year period was associated with higher levels of physical activity and self-rated health and lower levels of distress\n[26]. 
Job control has also been associated with job performance and ability to learn\n[27]. In addition, previous studies have shown that low control opportunities may affect employees’ attitudes to staying in or leaving a job, given that low job control has been associated with increased levels of retirement intentions\n[28,29]. In addition, job control has been found to mitigate retirement intentions associated with poor health and low work ability among physicians\n[30].\nHigh job control may be viewed as a potential coping factor that helps distressed employees cope with demanding situations and, thus, lessen their job dissatisfaction and intentions to quit. According to Spector\n[31], job control can affect a person’s choice of coping strategy in a way that perceived high control is likely to lead to constructive coping, whereas lack of control is more likely to lead to destructive coping. Previous studies have indeed associated job control with successful coping\n[32,33] and successful coping, in turn, has been associated with fewer turnover intentions in demanding and stressful situations, such as with organisational change\n[34,35].\nJob control may provide flexibility to avoid certain tasks that have a high risk of violence and to take breaks from work, which helps employees to regulate emotional responses and reappraise work challenges more positively\n[36]. Frese\n[37] has suggested that control enables a person to perform the most stressful tasks when that person feels particularly able to do them; that is, people can adjust the situation according to their needs, and can, therefore, be more relaxed in their work. Control may also act as a safety signal, given that a person with a high degree of control knows that he or she is able to change the situation if it becomes too difficult, thus knowing that the conditions may never be worse than he or she is willing to withstand\n[38]. 
Thus, many opportunities to control one’s job may act as a buffer against the negative effects of stressful working conditions such as work-related violence.\nThe aim of the present study was to examine the associations of work-related violence (physical violence and bullying) with turnover intentions and job satisfaction in a four-year follow-up among Finnish physicians. Specifically, we were interested to see whether job control would modify these associations. We hypothesised that both physical violence and bullying would be associated with increased levels of turnover intentions and decreased job satisfaction. We additionally hypothesised that job control would act as a buffer for these negative effects of work-related violence.", "The present study used violence variables, job control, baseline turnover intentions, baseline job satisfaction, and covariates from phase 1. The outcome (turnover intentions and job satisfaction) variables were taken from phase 2.\nJob satisfaction was assessed with the mean of 3 items derived from Hackman and Oldham’s\n[40] Job Diagnostic Survey on a 5-point scale, ranging from 1 (totally disagree) to 5 (totally agree). Cronbach’s alpha coefficient for this study was 0.66 at phase 1 and 0.88 at phase 2 (an example of the items: \"I am generally satisfied with my work.\")\nTurnover intentions were measured with the mean of three questions concerning willingness to (a) change to other physician work, (b) to another profession, and (c) to quit (α = 0.61 at phase 1 and 0.66 at phase 2). The response alternatives were \"1 = no, 2 = perhaps, and 3 = yes\".\nPhysical violence was measured with a question asking whether the respondent had experienced work-related violence (such as kicking and hitting) or had been threatened with it and how often. Responses were coded as: 0 = never, 1 = less than once a year, 2 = once a year or more often. 
Bullying was asked with the following question: \"Psychological violence means continuous repetitive bullying, victimising or offending treatment. Are you now or have you previously been a target of this kind of psychological violence and bullying in your own work?\" The answer options were 0 = no and 1 = yes.\nJob control was measured by combining the skill discretion (6 items) and decision authority (3 items) scales derived from Karasek’s Job Content Questionnaire JCQ\n[41]. Skill discretion measures how much the job requires skill, creativity, task variety, and learning of new skills (e.g., \"My job requires that I learn new things\"). Decision authority measures the freedom to make independent decisions and possibilities to choose how to perform work (e.g., \"I have a lot of say about what happens in my job\"). The items were rated on a 5-point Likert-scale, ranging from 1 (totally disagree) to 5 (totally agree) (α = 0.76 at phase 1).\nCovariates included gender, age, specialisation status (specialist, specialisation on-going, and not specialised), and employment sector (hospital, primary care, and other).", "Analyses of covariance (ANCOVA) were conducted, with turnover intentions at phase 2 as the dependent variable, and physical violence, bullying, job control, gender, age, baseline turnover intentions, response format, specialisation status, and employment sector from phase 1 were included as independent variables. The analyses were conducted in four steps. In the first step, the univariate effects of physical violence, bullying, and job control for turnover intentions were examined in separate analyses (Model A). A second step included all the above-mentioned variables and gender, age, baseline turnover intentions, and response format (Model B). In the third step, specialisation status and employment sector were additionally added to the former model (Model C). 
Finally, the interactions of job control with physical violence and bullying were added (Model D). A similar series of ANCOVAs was conducted with job satisfaction at phase 2 as the dependent variable.", "All authors declare that they have no competing interest.", "TH designed the study, directed its implementation, performed analyses and led all aspects of the work, including data analysis and writing. AK and ME contributed to planning the data analyses. AK, JV, MV and ME helped to conceptualise the ideas, interpret findings, and write and critically review drafts of the article. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/14/19/prepub\n" ]
[ null, null, null, null, null, null ]
[ "Background", "Methods", "Measures", "Statistical analyses", "Results", "Discussion", "Conclusions", "Competing interests", "Authors’ contributions", "Pre-publication history" ]
[ "Health care professionals, including physicians, are at high risk of encountering workplace violence. For example, 59 per cent of Australian general practitioners reported that they had experienced work-related violence during the previous 12 months\n[1]. In US emergency departments, 75 per cent of physicians had encountered verbal violence and 28 per cent indicated that they had been victims of physical assault in the previous 12 months\n[2]. In another study, 96 per cent of physician respondents in US emergency departments reported experiencing verbal violence and 78 per cent a verbal threat during the previous 6 months\n[3]. In a study conducted among hospital and community physicians in Israel, 56 per cent reported verbal violence and 9 per cent physical assault during the previous year\n[4]. In Finland, every fifth physician reported having encountered physical violence or the threat of it in the previous year\n[5].\nWorkplace violence may have many negative ramifications for health care employees. Workplace violence has been associated with lower job satisfaction and higher levels of turnover intentions in nurses and home healthcare assistants\n[6,7]. Moreover, workplace violence has been found to affect negatively hospital personnel’s health\n[8] and increase sickness absences\n[9]. In physicians, work-related violence has been shown to lead to reduced job satisfaction and a decline in job performance\n[10]. In addition, among healthcare professionals, workplace violence may lead to difficulties in listening to patients, rumination, poor concentration, and intrusive thoughts\n[11], as well as impact negatively on family life and quality of life\n[4].\nFrom the health care sector’s point of view, tackling workplace violence encountered by physicians is important given that it can lead to lower job satisfaction and increased turnover. Physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide. 
Physician turnover may lead to decreased productivity, decreased quality of care and to an increased need to recruit and train new physicians. This is costly and may affect health outcomes\n[12,13]. In the US it has been estimated that the minimum cost of turnover may represent a loss of over 5 per cent of the total annual operating budget, due to hiring and training costs and productivity loss\n[14].\nJob control refers to job and organisational characteristics, such as the employee’s decision-making authority, opportunities to participate, and opportunities to use skills and knowledge. Job control may have direct effects on job attitudes, health and wellbeing. In a study among Finnish anaesthesiologists, job control appeared as one of the most important work-related factors in relation to physicians’ work-related wellbeing\n[15]. Previous studies have repeatedly demonstrated the importance of job control for employees’ health. For example, low job control has been associated with increased myocardial infarction risk\n[16], increased heart disease risk\n[17], higher blood pressure\n[18], and to greater fibrinogen responses to stress\n[19]. Moreover, low job control has been associated with an increased number of sick-leave spells\n[20] and with poorer self-rated health eight years later\n[21]. In a study among emergency physicians, psychological health was not affected by the number and nature of hours worked but by the ability to control working hours and the perceived flexibility of the workplace\n[22].\nHigh job control at work may protect employees from developing job dissatisfaction and psychiatric distress\n[23]. High job control may additionally increase organisational commitment\n[24] and decrease work-related anger\n[25]. A positive change in job control over a 4-year period was associated with higher levels of physical activity and self-rated health and lower levels of distress\n[26]. 
Job control has also been associated with job performance and ability to learn\n[27]. In addition, previous studies have shown that low control opportunities may affect employees’ attitudes to staying in or leaving a job, given that low job control has been associated with increased levels of retirement intentions\n[28,29]. In addition, job control has been found to mitigate retirement intentions associated with poor health and low work ability among physicians\n[30].\nHigh job control may be viewed as a potential coping factor that helps distressed employees cope with demanding situations and, thus, lessen their job dissatisfaction and intentions to quit. According to Spector\n[31], job control can affect a person’s choice of coping strategy in a way that perceived high control is likely to lead to constructive coping, whereas lack of control is more likely to lead to destructive coping. Previous studies have indeed associated job control with successful coping\n[32,33] and successful coping, in turn, has been associated with fewer turnover intentions in demanding and stressful situations, such as with organisational change\n[34,35].\nJob control may provide flexibility to avoid certain tasks that have a high risk of violence and to take breaks from work, which helps employees to regulate emotional responses and reappraise work challenges more positively\n[36]. Frese\n[37] has suggested that control enables a person to perform the most stressful tasks when that person feels particularly able to do them; that is, people can adjust the situation according to their needs, and can, therefore, be more relaxed in their work. Control may also act as a safety signal, given that a person with a high degree of control knows that he or she is able to change the situation if it becomes too difficult, thus knowing that the conditions may never be worse than he or she is willing to withstand\n[38]. 
Thus, many opportunities to control one’s job may act as a buffer against the negative effects of stressful working conditions such as work-related violence.\nThe aim of the present study was to examine the associations of work-related violence (physical violence and bullying) with turnover intentions and job satisfaction in a four-year follow-up among Finnish physicians. Specifically, we were interested to see whether job control would modify these associations. We hypothesised that both physical violence and bullying would be associated with increased levels of turnover intentions and decreased job satisfaction. We additionally hypothesised that job control would act as a buffer for these negative effects of work-related violence.", "The present study is part of the Finnish Health Care Professionals Study, in which we drew a random sample of 5000 physicians in Finland (30% of the whole physician population) from the 2006 database of physicians maintained by the Finnish Medical Association. The register covers all licensed physicians in Finland. Phase 1 data were gathered with postal questionnaires in 2006 (Figure \n1). Non-respondents were twice sent a reminder and a copy of the questionnaire. Responses were received from 2841 physicians (response rate 57%). The sample is representative of the eligible population in terms of age, gender, and employment sector\n[39].\nThe study flow diagram.\nPhase 2 took place four years later in 2010, with data gathered using either a web-based or a traditional postal survey. At phase 1 the respondents were asked their permission to participate in follow-up surveys and 2206 agreed. Those who had died or had incorrect address information were excluded (N = 37), thus, at phase 2 the survey was sent to 2169 physicians. First, an e-mail invitation to participate in the web-based survey was sent, followed by two reminders. For those who did not respond to these a postal questionnaire was then sent once. 
E-mail and postal addresses were obtained from the Finnish Medical Association. The total number of respondents was 1705 (response rate 79%), of which 1018 (60%) answered the web-based and 687 (40%) the postal questionnaire (the response format is adjusted for in the analyses). Of these, 190 had incomplete data and were excluded; the final study sample therefore includes 1515 physicians (61% women) aged 25–63 years (mean = 45.7 years) (2006). Ethical approval for this study was obtained from the Ethical Review Board of the National Institute for Health and Welfare.\n Measures The present study used violence variables, job control, baseline turnover intentions, baseline job satisfaction, and covariates from phase 1. The outcome (turnover intentions and job satisfaction) variables were taken from phase 2.\nJob satisfaction was assessed with the mean of 3 items derived from Hackman and Oldham’s\n[40] Job Diagnostic Survey on a 5-point scale, ranging from 1 (totally disagree) to 5 (totally agree). Cronbach’s alpha coefficient for this study was 0.66 at phase 1 and 0.88 at phase 2 (an example of the items: \"I am generally satisfied with my work.\")\nTurnover intentions were measured with the mean of three questions concerning willingness to (a) change to other physician work, (b) to another profession, and (c) to quit (α = 0.61 at phase 1 and 0.66 at phase 2). The response alternatives were \"1 = no, 2 = perhaps, and 3 = yes\".\nPhysical violence was measured with a question asking whether the respondent had experienced work-related violence (such as kicking and hitting) or had been threatened with it and how often. Responses were coded as: 0 = never, 1 = less than once a year, 2 = once a year or more often. Bullying was asked with the following question: \"Psychological violence means continuous repetitive bullying, victimising or offending treatment. 
Are you now or have you previously been a target of this kind of psychological violence and bullying in your own work?\" The answer options were 0 = no and 1 = yes.\nJob control was measured by combining the skill discretion (6 items) and decision authority (3 items) scales derived from Karasek’s Job Content Questionnaire JCQ\n[41]. Skill discretion measures how much the job requires skill, creativity, task variety, and learning of new skills (e.g., \"My job requires that I learn new things\"). Decision authority measures the freedom to make independent decisions and possibilities to choose how to perform work (e.g., \"I have a lot of say about what happens in my job\"). The items were rated on a 5-point Likert-scale, ranging from 1 (totally disagree) to 5 (totally agree) (α = 0.76 at phase 1).\nCovariates included gender, age, specialisation status (specialist, specialisation on-going, and not specialised), and employment sector (hospital, primary care, and other).\nThe present study used violence variables, job control, baseline turnover intentions, baseline job satisfaction, and covariates from phase 1. The outcome (turnover intentions and job satisfaction) variables were taken from phase 2.\nJob satisfaction was assessed with the mean of 3 items derived from Hackman and Oldham’s\n[40] Job Diagnostic Survey on a 5-point scale, ranging from 1 (totally disagree) to 5 (totally agree). Cronbach’s alpha coefficient for this study was 0.66 at phase 1 and 0.88 at phase 2 (an example of the items: \"I am generally satisfied with my work.\")\nTurnover intentions were measured with the mean of three questions concerning willingness to (a) change to other physician work, (b) to another profession, and (c) to quit (α = 0.61 at phase 1 and 0.66 at phase 2). 
Statistical analyses

Analyses of covariance (ANCOVA) were conducted with turnover intentions at phase 2 as the dependent variable; physical violence, bullying, job control, gender, age, baseline turnover intentions, response format, specialisation status, and employment sector from phase 1 were included as independent variables. The analyses were conducted in four steps.
In the first step, the univariate effects of physical violence, bullying, and job control on turnover intentions were examined in separate analyses (Model A). The second step included all the above-mentioned variables plus gender, age, baseline turnover intentions, and response format (Model B). In the third step, specialisation status and employment sector were added to the former model (Model C). Finally, the interactions of job control with physical violence and bullying were added (Model D). A similar series of ANCOVAs was run with job satisfaction at phase 2 as the dependent variable (with baseline job satisfaction in place of baseline turnover intentions).
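The four nested steps can be sketched as predictor sets; the variable names below are illustrative labels of my own, not columns from the study dataset, and the `a:b` notation for interaction terms follows common model-formula conventions (e.g. R or statsmodels formulas):

```python
# Sketch of the four-step hierarchical ANCOVA specification for
# phase-2 turnover intentions. Labels are illustrative, not study data.

exposures = ["physical_violence", "bullying", "job_control"]

# Model A: the univariate effect of each exposure, fitted separately.
model_a = [[e] for e in exposures]

# Model B: all exposures plus baseline outcome and basic covariates.
model_b = exposures + ["gender", "age", "baseline_turnover", "response_format"]

# Model C: Model B plus the work-related covariates.
model_c = model_b + ["specialisation_status", "employment_sector"]

# Model D: Model C plus the two job-control interaction terms.
model_d = model_c + ["physical_violence:job_control", "bullying:job_control"]
```

Because each model strictly extends the previous one, the steps show whether an exposure's effect survives progressively fuller adjustment; the parallel job satisfaction series swaps in the baseline job satisfaction score.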
Results

Table 1 shows the characteristics of the study sample. Sixty-one per cent had encountered physical violence during their career and 19 per cent had encountered bullying. The majority of the participants (77%) were specialised physicians; 46 per cent worked in hospitals, 22 per cent in primary care, and 32 per cent in other healthcare settings. The within-subject differences in turnover intention and job satisfaction levels between phase 1 and phase 2 were examined with GLM repeated measures analyses.
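For two time points, such a repeated-measures contrast reduces to a paired comparison: the repeated-measures F on phase-1 versus phase-2 scores equals the square of the paired t statistic. A minimal sketch with made-up scores (not study data):

```python
# Paired within-subject contrast: for a two-level repeated-measures
# design, F(1, n-1) = t^2 of the paired t-test on the difference scores.

def paired_f(phase1, phase2):
    """F statistic (= t squared) for the phase-1 vs phase-2 mean difference."""
    diffs = [b - a for a, b in zip(phase1, phase2)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / (var_d / n) ** 0.5
    return t * t

# Illustrative job satisfaction scores for six hypothetical physicians.
satisfaction_p1 = [3.0, 3.3, 2.7, 3.5, 3.1, 2.9]
satisfaction_p2 = [3.4, 3.6, 3.0, 3.9, 3.2, 3.3]
f_stat = paired_f(satisfaction_p1, satisfaction_p2)
```

A consistent upward shift in the toy scores yields a large F, mirroring how the study's F values index systematic within-person change over the follow-up.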
Analyses showed that job satisfaction levels had increased (F = 35.2, p < 0.001) and turnover intentions had decreased (F = 23.2, p < 0.001) during the study period.

Table 1. Characteristics of the study sample.

Table 2 shows the results from the ANCOVA models for turnover intentions. Physical violence and bullying were associated with more turnover intentions, and job control with fewer. However, only the association between physical violence and turnover intentions persisted after adjusting for baseline level, response format, demographics, and work-related variables. Older respondents were less likely to have turnover intentions than their younger counterparts.

Table 2. The results of the analyses of covariance for turnover intentions.
a Model A included univariate effects.
b Model B included physical violence, bullying, job control, gender, age, baseline turnover intentions, and response format.
c Model C included the variables from Model B plus specialisation status and employment sector.
d Model D included, in addition to Model C, the interactions physical violence × job control and bullying × job control.

The interaction between bullying and job control was significant for turnover intentions. As Figure 2 shows, job control was not related to turnover intentions among those who had not encountered bullying, whereas among those who had encountered bullying, job control was associated with turnover intention levels: the highest levels of turnover intentions were found among those who had low job control and had encountered bullying.

Figure 2. The interaction between bullying and job control for turnover intentions. Estimated marginal means among those scoring low (below median) and high (above median) in job control, adjusted for baseline level, response format, demographics, and work-related variables.

Table 3 shows the results from the ANCOVA models for job satisfaction.
Physical violence and bullying were associated with lower levels of job satisfaction, and job control with higher levels. These associations persisted even after adjusting for baseline level, response format, demographics, and work-related variables. Older respondents, those who answered by post, and those who worked in an employment sector other than hospitals or primary care were more likely to be satisfied with their jobs than their counterparts. The interactions with job control were not significant for job satisfaction.

Table 3. The results of the analyses of covariance for job satisfaction.
a Model A included univariate effects.
b Model B included physical violence, bullying, job control, gender, age, baseline job satisfaction, and response format.
c Model C included the variables from Model B plus specialisation status and employment sector.
d Model D included, in addition to Model C, the interactions physical violence × job control and bullying × job control.

Discussion

The present four-year longitudinal study showed that workplace physical violence and bullying were associated with decreased job satisfaction and increased turnover intentions among Finnish physicians. In addition, we found that opportunities to control one's job alleviated the increase in turnover intentions resulting from bullying.

Our results highlight the importance of job control as a buffer against negative psychosocial working environments. We also found that job control was directly related to higher job satisfaction, but the association between job control and turnover intentions did not remain significant after adjusting for baseline turnover intentions and demographics. Previous studies have likewise reported that job control is an important buffer. For example, high job control has been found to mitigate retirement intentions resulting from poor health and low work ability among Finnish physicians [30].
Furthermore, high job control has previously been found to alleviate intentions to change profession associated with distress and sleeping problems [42]. Potential mechanisms behind this buffering effect could be that job control shapes coping strategies, gives flexibility to avoid certain tasks and to take breaks to regulate emotional responses, makes it possible to choose when to perform stressful tasks, and provides the assurance that a stressful situation can be changed if it becomes intolerable [31,36-38].

Job control could be improved by giving employees a greater variety of tasks, opportunities to fully use and develop their skills, and a stronger voice in decisions. For example, participative decision-making has been introduced as a method to increase job control [43], along with greater freedom over start and finish times, more discretion over how tasks are performed, and autonomous or self-regulated work teams [44]. Young and Leese [45] have proposed greater flexibility as a potential solution to the problems in the retention and recruitment of general practitioners. They suggested that flexibility could be improved by (a) varying the time commitment across the working day and week (part-time, job-share, temporary, and short-term), (b) offering a wider choice of long-term career paths, (c) offering more education and training, and (d) widening the scope of remuneration and contract conditions. In a similar way, Shanafelt et al. [46] highlighted job autonomy as the central organisational characteristic promoting physician well-being; they suggest that physicians should be given more opportunity to influence their work environment, to participate in decisions, and to control their schedules.

Workplace violence is a major problem in health care, and organisations should pay more attention to these issues.
In our study, for example, over sixty per cent of physicians had encountered physical violence during their career and approximately one in five had encountered bullying. Targeting efforts at increasing control opportunities could alleviate the negative effects of workplace violence. Nevertheless, direct action is also needed to actually decrease violence in workplaces. For example, metal detectors, security dog teams, cameras, and security personnel have been suggested to improve the security of health care personnel [47]. Hoag-Apel [48] suggested appointing a risk assessment team and training staff in, for example, body language, being alert to tone of voice, and not taking anger personally. It has also been shown that reducing staff stress by improving staff's cognitive efficiency and emotional control can lead to reduced violence [49].

In the present study, we found that physical violence led to increased turnover intentions, and that both bullying and physical violence led to reduced job satisfaction, even after adjustments. Previous findings have associated physical violence, bullying, and verbal violence with both lower job satisfaction and higher turnover intentions [6,7,10,50]. However, those findings came from cross-sectional studies, whereas our results confirm that work-related violence also has longitudinal effects.

We found that older respondents were more satisfied with their jobs and less likely to have turnover intentions than younger respondents, which corresponds well with previous findings among physicians [51-54]. In our study, gender was related to neither job satisfaction nor turnover intentions. Previous studies have found mixed results: among German and Norwegian hospital physicians, gender was unrelated to job satisfaction [52], whereas among German general practitioners women had higher levels of job satisfaction than men [54].
Among Chinese physicians, men had a higher likelihood of turnover intentions than women [53]. Moreover, we found that physicians working in hospitals and primary care were less satisfied than physicians from other sectors. This is congruent with a previous study showing that private sector physicians had higher levels of job satisfaction and organisational commitment, and lower levels of psychological distress and sleeping problems, than physicians working in the public sector [51].

The present study relied on self-reported measures, which may inflate the strength of relationships and introduce common method variance. We were also unable to differentiate between violence from patients or customers and violence from co-workers. For workplace bullying the source is more likely to be co-workers than it is for physical violence, and the effects of violence may vary depending on its source; that is, the effects of bullying might differ when it is caused by patients rather than by co-workers. This issue should be investigated in future studies. Moreover, the violence measures were collected four years prior to the turnover intentions, and we did not track whether violence experiences changed over the course of the study. It is therefore possible that this caused misclassification bias in our results; however, such bias would most likely simply weaken the associations found.

Furthermore, although we controlled for age, gender, response format, specialisation status, and employment sector, we cannot rule out the possibility of residual confounding. The present study used both a web-based and a more traditional postal survey to gather follow-up data, which is a further limitation; however, we adjusted for response format in the analyses.
The use of web-based surveys is increasing, but they often yield low response rates, so combining them with postal questionnaires may help to raise response rates. There may be differences in response styles between the formats, and it is therefore important to adjust for response format in analyses. Future studies on the subject are needed.

Conclusions

Our results suggest that promoting employees' control opportunities in health care organisations might help to buffer the negative effects of workplace violence on physicians' turnover intentions. In addition, we showed that physical violence and bullying have longitudinal effects on job satisfaction and turnover intentions. Workplace violence is an extensive problem, especially in the health care sector, and organisations should approach it through multiple means, such as giving health care employees more opportunities to control their own work, in addition to direct measures.

Competing interests

All authors declare that they have no competing interests.

Authors' contributions

TH designed the study, directed its implementation, performed the analyses, and led all aspects of the work, including data analysis and writing. AK and ME contributed to planning the data analyses. AK, JV, MV and ME helped to conceptualise the ideas, interpret the findings, and write and critically review drafts of the article. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-6963/14/19/prepub
Keywords: Job control; Work-related violence; Psychosocial resources; Intentions to quit; Physicians
Background

Health care professionals, including physicians, are at high risk of encountering workplace violence. For example, 59 per cent of Australian general practitioners reported having experienced work-related violence during the previous 12 months [1]. In US emergency departments, 75 per cent of physicians had encountered verbal violence and 28 per cent indicated that they had been victims of physical assault in the previous 12 months [2]. In another study, 96 per cent of physician respondents in US emergency departments reported experiencing verbal violence and 78 per cent a verbal threat during the previous 6 months [3]. In a study conducted among hospital and community physicians in Israel, 56 per cent reported verbal violence and 9 per cent physical assault during the previous year [4]. In Finland, every fifth physician reported having encountered physical violence or the threat of it in the previous year [5].

Workplace violence may have many negative ramifications for health care employees. It has been associated with lower job satisfaction and higher levels of turnover intentions in nurses and home healthcare assistants [6,7]. Moreover, workplace violence has been found to negatively affect hospital personnel's health [8] and to increase sickness absences [9]. In physicians, work-related violence has been shown to lead to reduced job satisfaction and a decline in job performance [10]. In addition, among healthcare professionals, workplace violence may lead to difficulties in listening to patients, rumination, poor concentration, and intrusive thoughts [11], as well as impact negatively on family life and quality of life [4].

From the health care sector's point of view, tackling workplace violence encountered by physicians is important given that it can lead to lower job satisfaction and increased turnover. Physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide.
Physician turnover may lead to decreased productivity, decreased quality of care, and an increased need to recruit and train new physicians. This is costly and may affect health outcomes [12,13]. In the US it has been estimated that the minimum cost of turnover may represent a loss of over 5 per cent of the total annual operating budget, due to hiring and training costs and productivity loss [14].

Job control refers to job and organisational characteristics such as the employee's decision-making authority, opportunities to participate, and opportunities to use skills and knowledge. Job control may have direct effects on job attitudes, health, and wellbeing. In a study among Finnish anaesthesiologists, job control appeared as one of the most important work-related factors for physicians' work-related wellbeing [15]. Previous studies have repeatedly demonstrated the importance of job control for employees' health. For example, low job control has been associated with increased myocardial infarction risk [16], increased heart disease risk [17], higher blood pressure [18], and greater fibrinogen responses to stress [19]. Moreover, low job control has been associated with an increased number of sick-leave spells [20] and with poorer self-rated health eight years later [21]. In a study among emergency physicians, psychological health was not affected by the number and nature of hours worked but by the ability to control working hours and the perceived flexibility of the workplace [22].

High job control at work may protect employees from developing job dissatisfaction and psychiatric distress [23]. It may additionally increase organisational commitment [24] and decrease work-related anger [25]. A positive change in job control over a 4-year period was associated with higher levels of physical activity and self-rated health and lower levels of distress [26]. Job control has also been associated with job performance and the ability to learn [27].
In addition, previous studies have shown that low control opportunities may affect employees' attitudes to staying in or leaving a job, given that low job control has been associated with increased levels of retirement intentions [28,29]. Job control has also been found to mitigate retirement intentions associated with poor health and low work ability among physicians [30].

High job control may be viewed as a potential coping factor that helps distressed employees cope with demanding situations and thus lessens their job dissatisfaction and intentions to quit. According to Spector [31], job control can affect a person's choice of coping strategy: perceived high control is likely to lead to constructive coping, whereas lack of control is more likely to lead to destructive coping. Previous studies have indeed associated job control with successful coping [32,33], and successful coping, in turn, has been associated with fewer turnover intentions in demanding and stressful situations, such as organisational change [34,35]. Job control may provide flexibility to avoid certain tasks that carry a high risk of violence and to take breaks from work, which helps employees regulate emotional responses and reappraise work challenges more positively [36]. Frese [37] has suggested that control enables a person to perform the most stressful tasks when that person feels particularly able to do them; that is, people can adjust the situation according to their needs and can therefore be more relaxed in their work. Control may also act as a safety signal: a person with a high degree of control knows that he or she is able to change the situation if it becomes too difficult, and thus that conditions can never be worse than he or she is willing to withstand [38]. Thus, ample opportunities to control one's job may buffer against the negative effects of stressful working conditions such as work-related violence.
The aim of the present study was to examine the associations of work-related violence (physical violence and bullying) with turnover intentions and job satisfaction over a four-year follow-up among Finnish physicians. Specifically, we were interested in whether job control would modify these associations. We hypothesised that both physical violence and bullying would be associated with increased turnover intentions and decreased job satisfaction, and additionally that job control would buffer these negative effects of work-related violence.

Methods

The present study is part of the Finnish Health Care Professionals Study, in which we drew a random sample of 5000 physicians in Finland (30% of the whole physician population) from the 2006 database of physicians maintained by the Finnish Medical Association. The register covers all licensed physicians in Finland. Phase 1 data were gathered with postal questionnaires in 2006 (Figure 1); non-respondents were twice sent a reminder and a copy of the questionnaire. Responses were received from 2841 physicians (response rate 57%). The sample is representative of the eligible population in terms of age, gender, and employment sector [39].

Figure 1. The study flow diagram.

Phase 2 took place four years later, in 2010, with data gathered using either a web-based or a traditional postal survey. At phase 1 the respondents were asked for permission to participate in follow-up surveys, and 2206 agreed. Those who had died or had incorrect address information were excluded (N = 37); thus, at phase 2 the survey was sent to 2169 physicians. First, an e-mail invitation to the web-based survey was sent, followed by two reminders; those who did not respond were then sent a postal questionnaire once. E-mail and postal addresses were obtained from the Finnish Medical Association.
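The recruitment figures reported in the Methods chain together as simple proportions; a quick arithmetic check using only numbers stated in the text:

```python
# Recruitment arithmetic from the Methods section.
sampled = 5000                       # random sample from the 2006 register
phase1 = 2841                        # phase-1 respondents
phase2_invited = 2206 - 37           # agreed to follow-up minus exclusions
phase2 = 1705                        # phase-2 respondents
final_n = phase2 - 190               # after excluding incomplete data

phase1_rate = round(100 * phase1 / sampled)         # reported as 57%
phase2_rate = round(100 * phase2 / phase2_invited)  # reported as 79%
```

The numbers are internally consistent: 2841/5000 rounds to the reported 57%, 1705/2169 to the reported 79%, and the exclusions leave the final analytic sample of 1515 physicians.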
The total number of respondents was 1705 (response rate 79%), of which 1018 (60%) answered the web-based and 687 (40%) the postal questionnaire (the response format is adjusted for in the analyses). Of these, 190 had incomplete data and were excluded; the final study sample therefore includes 1515 physicians (61% women) aged 25–63 years (mean = 45.7 years) (2006). Ethical approval for this study was obtained from the Ethical Review Board of the National Institute for Health and Welfare. Measures The present study used violence variables, job control, baseline turnover intentions, baseline job satisfaction, and covariates from phase 1. The outcome (turnover intentions and job satisfaction) variables were taken from phase 2. Job satisfaction was assessed with the mean of 3 items derived from Hackman and Oldham’s [40] Job Diagnostic Survey on a 5-point scale, ranging from 1 (totally disagree) to 5 (totally agree). Cronbach’s alpha coefficient for this study was 0.66 at phase 1 and 0.88 at phase 2 (an example of the items: "I am generally satisfied with my work.") Turnover intentions were measured with the mean of three questions concerning willingness to (a) change to other physician work, (b) to another profession, and (c) to quit (α = 0.61 at phase 1 and 0.66 at phase 2). The response alternatives were "1 = no, 2 = perhaps, and 3 = yes". Physical violence was measured with a question asking whether the respondent had experienced work-related violence (such as kicking and hitting) or had been threatened with it and how often. Responses were coded as: 0 = never, 1 = less than once a year, 2 = once a year or more often. Bullying was asked with the following question: "Psychological violence means continuous repetitive bullying, victimising or offending treatment. Are you now or have you previously been a target of this kind of psychological violence and bullying in your own work?" The answer options were 0 = no and 1 = yes. 
Job control was measured by combining the skill discretion (6 items) and decision authority (3 items) scales derived from Karasek’s Job Content Questionnaire JCQ [41]. Skill discretion measures how much the job requires skill, creativity, task variety, and learning of new skills (e.g., "My job requires that I learn new things"). Decision authority measures the freedom to make independent decisions and possibilities to choose how to perform work (e.g., "I have a lot of say about what happens in my job"). The items were rated on a 5-point Likert-scale, ranging from 1 (totally disagree) to 5 (totally agree) (α = 0.76 at phase 1). Covariates included gender, age, specialisation status (specialist, specialisation on-going, and not specialised), and employment sector (hospital, primary care, and other). The present study used violence variables, job control, baseline turnover intentions, baseline job satisfaction, and covariates from phase 1. The outcome (turnover intentions and job satisfaction) variables were taken from phase 2. Job satisfaction was assessed with the mean of 3 items derived from Hackman and Oldham’s [40] Job Diagnostic Survey on a 5-point scale, ranging from 1 (totally disagree) to 5 (totally agree). Cronbach’s alpha coefficient for this study was 0.66 at phase 1 and 0.88 at phase 2 (an example of the items: "I am generally satisfied with my work.") Turnover intentions were measured with the mean of three questions concerning willingness to (a) change to other physician work, (b) to another profession, and (c) to quit (α = 0.61 at phase 1 and 0.66 at phase 2). The response alternatives were "1 = no, 2 = perhaps, and 3 = yes". Physical violence was measured with a question asking whether the respondent had experienced work-related violence (such as kicking and hitting) or had been threatened with it and how often. Responses were coded as: 0 = never, 1 = less than once a year, 2 = once a year or more often. 
Bullying was asked with the following question: "Psychological violence means continuous repetitive bullying, victimising or offending treatment. Are you now or have you previously been a target of this kind of psychological violence and bullying in your own work?" The answer options were 0 = no and 1 = yes. Job control was measured by combining the skill discretion (6 items) and decision authority (3 items) scales derived from Karasek’s Job Content Questionnaire JCQ [41]. Skill discretion measures how much the job requires skill, creativity, task variety, and learning of new skills (e.g., "My job requires that I learn new things"). Decision authority measures the freedom to make independent decisions and possibilities to choose how to perform work (e.g., "I have a lot of say about what happens in my job"). The items were rated on a 5-point Likert-scale, ranging from 1 (totally disagree) to 5 (totally agree) (α = 0.76 at phase 1). Covariates included gender, age, specialisation status (specialist, specialisation on-going, and not specialised), and employment sector (hospital, primary care, and other). Statistical analyses Analyses of covariance (ANCOVA) were conducted, with turnover intentions at phase 2 as the dependent variable, and physical violence, bullying, job control, gender, age, baseline turnover intentions, response format, specialisation status, and employment sector from phase 1 were included as independent variables. The analyses were conducted in four steps. In the first step, the univariate effects of physical violence, bullying, and job control for turnover intentions were examined in separate analyses (Model A). A second step included all the above-mentioned variables and gender, age, baseline turnover intentions, and response format (Model B). In the third step, specialisation status and employment sector were additionally added to the former model (Model C). 
Finally, the interactions of job control with physical violence and bullying were added (Model D). A similar series of ANCOVAs was generated with job satisfaction at phase 2 as the dependent variable. 
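The stepwise Models A–D amount to fitting nested linear models and examining how associations change as covariates and interaction terms are added. A minimal numpy sketch with synthetic data (variable names and effect sizes are hypothetical; the authors' statistical software is not stated):

```python
import numpy as np

def ols_rss(X, y):
    """Fit ordinary least squares and return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

rng = np.random.default_rng(0)
n = 200
bullying = rng.integers(0, 2, n).astype(float)   # binary exposure
control = rng.normal(0.0, 1.0, n)                # moderator (job control)
baseline = rng.normal(3.0, 0.5, n)               # baseline outcome level
# synthetic phase-2 outcome with a bullying x control interaction
y = (1.0 + 0.5 * bullying - 0.3 * control + 0.6 * baseline
     - 0.4 * bullying * control + rng.normal(0.0, 0.5, n))

ones = np.ones(n)
X_a = np.column_stack([ones, bullying])                     # Model A: exposure only
X_b = np.column_stack([ones, bullying, control, baseline])  # Model B: + covariates
X_d = np.column_stack([ones, bullying, control, baseline,
                       bullying * control])                 # Model D: + interaction
rss = [ols_rss(X, y) for X in (X_a, X_b, X_d)]
# residual variance can only shrink as nested terms are added
```

Comparing the residual sums of squares (or, equivalently, F-tests on the added terms) is what distinguishes a direct effect that survives adjustment from one that does not.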
Results: Table 1 shows the characteristics of the study sample. Sixty-one per cent had encountered physical violence during their career and 19 per cent had encountered bullying. The majority of the participants (77%) were specialised physicians; 46 per cent worked in hospitals, 22 per cent in primary care, and 32 per cent in other healthcare settings. The within-subjects differences in turnover intention and job satisfaction levels between phase 1 and phase 2 were examined with GLM repeated-measures analyses. These analyses showed that job satisfaction levels had increased (F = 35.2, p < 0.001) and turnover intentions had decreased (F = 23.2, p < 0.001) during the study period. (Table 1: Characteristics of the study sample.) Table 2 shows the results from the ANCOVA models for turnover intentions. Physical violence and bullying were associated with more turnover intentions, and job control was associated with fewer turnover intentions. However, only the association between physical violence and turnover intentions persisted after adjusting for baseline level, response format, demographics, and work-related variables. Older respondents were less likely to have turnover intentions than their younger counterparts. Table 2: The results of the analyses of covariance for turnover intentions. (a) Model A included univariate effects. 
(b) Model B included physical violence, bullying, job control, gender, age, baseline turnover intentions, and response format. (c) Model C included the variables from Model A and specialisation status and employment sector. (d) Model D included, in addition to Model C, the interactions physical violence × job control and bullying × job control. The interaction between bullying and job control was significant for turnover intentions. As Figure 2 shows, job control was not related to increased turnover intentions among those who had not encountered bullying, whereas among those who had encountered bullying, job control was associated with turnover intention levels. That is, the highest levels of turnover intentions were found among those who had low job control opportunities and had encountered bullying. (Figure 2: The interaction between bullying and job control for turnover intentions. Estimated marginal means among those scoring low (below median) and high (above median) in job control, adjusted for baseline level, response format, demographics, and work-related variables.) Table 3 shows the results from the ANCOVA models for job satisfaction. Physical violence and bullying were associated with lower levels of job satisfaction, and job control was associated with higher levels of job satisfaction. These associations persisted even after adjusting for baseline level, response format, demographics, and work-related variables. Older respondents, those who answered by post, and those who worked in an employment sector other than hospitals or primary care were more likely to be satisfied with their jobs than their counterparts. The interactions with job control were not significant for job satisfaction. Table 3: The results of the analyses of covariance for job satisfaction. (a) Model A included univariate effects. (b) Model B included physical violence, bullying, job control, gender, age, baseline job satisfaction, and response format. 
(c) Model C included the variables from Model A and specialisation status and employment sector. (d) Model D included, in addition to Model C, the interactions physical violence × job control and bullying × job control. Discussion: The present four-year longitudinal study showed that workplace physical violence and bullying were associated with decreased job satisfaction and increased turnover intentions among Finnish physicians. In addition, we found that opportunities to control one’s job were able to alleviate the increase in turnover intentions resulting from bullying. Our results highlight the importance of job control as a buffer against negative psychosocial working environments. We also found that job control was directly related to higher job satisfaction, but the association between job control and turnover intentions did not remain significant after adjusting for baseline turnover intentions and demographics. Previous studies have likewise reported that job control is an important buffer. For example, high job control has been found to mitigate retirement intentions resulting from poor health and low work ability among Finnish physicians [30]. Furthermore, in a previous study, high job control alleviated intentions to change profession that were associated with distress and sleeping problems [42]. Potential mechanisms behind this effect could be that job control shapes coping strategies, gives flexibility to avoid certain tasks and to take breaks to regulate emotional responses, provides the possibility to choose when to perform stressful tasks, and gives the assurance that a stressful situation can be changed if it becomes intolerable [31,36-38]. Job control could be improved by giving employees a greater variety of tasks, opportunities to fully use and develop their skills, and a stronger voice in decisions. 
For example, participative decision-making has been introduced as a method to increase job control [43], along with greater freedom over start and finish times, more discretion over how tasks are performed, and autonomous or self-regulated work teams [44]. Young and Leese [45] have proposed greater flexibility as a potential solution for the problems in the retention and recruitment of general practitioners. They suggested that flexibility could be improved by (a) varying the time commitment across the working day and week (part-time, job-share, temporary, and short-term), (b) offering a wider choice of long-term career paths, (c) offering more education and training, and (d) widening the scope of remuneration and contract conditions. In a similar way, Shanafelt et al. [46] highlighted job autonomy as the central organisational characteristic that promotes well-being in physicians; they suggest that physicians should be provided with increased opportunity to influence their work environment, to participate in decisions, and to have more control over schedules. Workplace violence is a major problem in health care, and organisations should pay more attention to these issues. For example, in our study over sixty per cent of physicians had encountered physical violence in their career and approximately one in five had encountered bullying in the previous year. Targeting efforts at increasing control opportunities could alleviate the negative effects of workplace violence. Nevertheless, direct actions are also needed to actually decrease violence in workplaces. For example, metal detectors, security dog teams, cameras, and security personnel have been suggested to improve health care personnel’s security [47]. Hoag-Apel [48] suggested appointing a risk assessment team and training staff on, for example, body language, being alert to tone of voice, and not taking anger personally. 
It has also been shown that reducing staff stress by improving staff’s cognitive efficiency and emotional control can lead to reduced violence [49]. In the present study, we found that physical violence led to increased turnover intentions and that both bullying and physical violence led to reduced job satisfaction, even after adjustments. Previous findings have associated physical violence, bullying, and verbal violence with both lower levels of job satisfaction and higher turnover intentions [6,7,10,50]. However, those findings were from cross-sectional studies, while our results confirm that work-related violence also has longitudinal effects. We found that older respondents were more satisfied with their jobs and were less likely to have turnover intentions than younger respondents. This corresponds well with previous findings among physicians [51-54]. In our study, gender was unrelated to both job satisfaction and turnover intentions. Previous studies have found mixed results: among German and Norwegian hospital physicians, gender was unrelated to job satisfaction [52], whereas among German general practitioners women had higher levels of job satisfaction than men [54], and among Chinese physicians men had a higher likelihood of turnover intentions than women [53]. Moreover, we found that physicians working in hospitals and primary care were less satisfied than physicians from other sectors. This is congruent with a previous study showing that private sector physicians had higher levels of job satisfaction and organisational commitment and lower levels of psychological distress and sleeping problems compared with physicians working in the public sector [51]. The present study relied on self-reported measures, which may inflate the strength of the observed relationships and introduce common method variance. In our study we were not able to differentiate between violence from patients or customers and violence from co-workers. 
With workplace bullying, the source is more likely to be co-workers than it is with physical violence. The effects of violence may vary depending on its source, especially for bullying; that is, the effects of bullying might be different when caused by patients than when caused by co-workers. This issue should be investigated in future studies. Moreover, the violence measures were collected 4 years prior to the follow-up turnover intentions, and we did not assess whether violence experiences changed over the course of the study period. Therefore, it is possible that this caused a misclassification bias in our results; however, such bias would most likely only weaken the associations found. Moreover, although we controlled for age, gender, response format, specialisation status, and employment sector, we cannot rule out the possibility of residual confounding. The present study used both a web-based and a more traditional postal survey to gather follow-up data, which is a limitation; however, we controlled for response format in the analyses. The use of web-based surveys is increasing, but they often result in low response rates, so combining them with postal questionnaires might help to raise response rates. There may be differences in response styles between the response formats, and it is therefore important to adjust for response format in analyses. Future studies on the subject are needed. Conclusions: Our results suggest that promoting employees’ control opportunities in health care organisations might help to provide a buffer against the negative effects of workplace violence on turnover intentions in physicians. In addition, we showed that physical violence and bullying have longitudinal effects on job satisfaction and turnover intentions. 
Workplace violence is an extensive problem, especially in the health care sector, and organisations should approach it through multiple means, such as giving health care employees more opportunities to control their own work, in addition to direct measures. Competing interests: All authors declare that they have no competing interest. Authors’ contributions: TH designed the study, directed its implementation, performed the analyses, and led all aspects of the work, including data analysis and writing. AK and ME contributed to planning the data analyses. AK, JV, MV and ME helped to conceptualise the ideas, interpret the findings, and write and critically review drafts of the article. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6963/14/19/prepub
Background: Health care professionals, including physicians, are at high risk of encountering workplace violence. At the same time physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide. The present study examined the prospective associations of work-related physical violence and bullying with physicians' turnover intentions and job satisfaction. In addition, we tested whether job control would modify these associations. Methods: The present study was a 4-year longitudinal survey study, with data gathered in 2006 and 2010. The present sample included 1515 (61% women) Finnish physicians aged 25-63 years at baseline. Analyses of covariance (ANCOVA) were conducted while adjusting for gender, age, baseline levels, specialisation status, and employment sector. Results: The results of covariance analyses showed that physical violence led to increased physician turnover intentions and that both bullying and physical violence led to reduced physician job satisfaction even after adjustments. We also found that opportunities for job control were able to alleviate the increase in turnover intentions resulting from bullying. Conclusions: Our results suggest that workplace violence is an extensive problem in the health care sector and may lead to increased turnover and job dissatisfaction. Thus, health care organisations should approach this problem through different means, for example, by giving health care employees more opportunities to control their own work.
Background: Health care professionals, including physicians, are at high risk of encountering workplace violence. For example, 59 per cent of Australian general practitioners reported that they had experienced work-related violence during the previous 12 months [1]. In US emergency departments, 75 per cent of physicians had encountered verbal violence and 28 per cent indicated that they had been victims of physical assault in the previous 12 months [2]. In another study, 96 per cent of physician respondents in US emergency departments reported experiencing verbal violence and 78 per cent a verbal threat during the previous 6 months [3]. In a study conducted among hospital and community physicians in Israel, 56 per cent reported verbal violence and 9 per cent physical assault during the previous year [4]. In Finland, every fifth physician reported having encountered physical violence or the threat of it in the previous year [5]. Workplace violence may have many negative ramifications for health care employees. It has been associated with lower job satisfaction and higher levels of turnover intentions in nurses and home healthcare assistants [6,7]. Moreover, workplace violence has been found to negatively affect hospital personnel’s health [8] and to increase sickness absences [9]. In physicians, work-related violence has been shown to lead to reduced job satisfaction and a decline in job performance [10]. In addition, among healthcare professionals, workplace violence may lead to difficulties in listening to patients, rumination, poor concentration, and intrusive thoughts [11], as well as negatively impact family life and quality of life [4]. From the health care sector’s point of view, tackling workplace violence encountered by physicians is important given that it can lead to lower job satisfaction and increased turnover. Physician turnover is an increasing problem that threatens the functioning of the health care sector worldwide. 
Physician turnover may lead to decreased productivity, decreased quality of care and to an increased need to recruit and train new physicians. This is costly and may affect health outcomes [12,13]. In the US it has been estimated that the minimum cost of turnover may represent a loss of over 5 per cent of the total annual operating budget, due to hiring and training costs and productivity loss [14]. Job control refers to job and organisational characteristics, such as the employee’s decision-making authority, opportunities to participate, and opportunities to use skills and knowledge. Job control may have direct effects on job attitudes, health and wellbeing. In a study among Finnish anaesthesiologists, job control appeared as one of the most important work-related factors in relation to physicians’ work-related wellbeing [15]. Previous studies have repeatedly demonstrated the importance of job control for employees’ health. For example, low job control has been associated with increased myocardial infarction risk [16], increased heart disease risk [17], higher blood pressure [18], and to greater fibrinogen responses to stress [19]. Moreover, low job control has been associated with an increased number of sick-leave spells [20] and with poorer self-rated health eight years later [21]. In a study among emergency physicians, psychological health was not affected by the number and nature of hours worked but by the ability to control working hours and the perceived flexibility of the workplace [22]. High job control at work may protect employees from developing job dissatisfaction and psychiatric distress [23]. High job control may additionally increase organisational commitment [24] and decrease work-related anger [25]. A positive change in job control over a 4-year period was associated with higher levels of physical activity and self-rated health and lower levels of distress [26]. Job control has also been associated with job performance and ability to learn [27]. 
In addition, previous studies have shown that low control opportunities may affect employees’ attitudes to staying in or leaving a job, given that low job control has been associated with increased levels of retirement intentions [28,29]. In addition, job control has been found to mitigate retirement intentions associated with poor health and low work ability among physicians [30]. High job control may be viewed as a potential coping factor that helps distressed employees cope with demanding situations and, thus, lessen their job dissatisfaction and intentions to quit. According to Spector [31], job control can affect a person’s choice of coping strategy such that perceived high control is likely to lead to constructive coping, whereas lack of control is more likely to lead to destructive coping. Previous studies have indeed associated job control with successful coping [32,33] and successful coping, in turn, has been associated with fewer turnover intentions in demanding and stressful situations, such as with organisational change [34,35]. Job control may provide flexibility to avoid certain tasks that have a high risk of violence and to take breaks from work, which helps employees to regulate emotional responses and reappraise work challenges more positively [36]. Frese [37] has suggested that control enables a person to perform the most stressful tasks when that person feels particularly able to do them; that is, people can adjust the situation according to their needs, and can, therefore, be more relaxed in their work. Control may also act as a safety signal, given that a person with a high degree of control knows that he or she is able to change the situation if it becomes too difficult, thus knowing that the conditions may never be worse than he or she is willing to withstand [38]. Thus, many opportunities to control one’s job may act as a buffer against the negative effects of stressful working conditions such as work-related violence. 
The aim of the present study was to examine the associations of work-related violence (physical violence and bullying) with turnover intentions and job satisfaction in a four-year follow-up among Finnish physicians. Specifically, we were interested in whether job control would modify these associations. We hypothesised that both physical violence and bullying would be associated with increased levels of turnover intentions and decreased job satisfaction. We additionally hypothesised that job control would act as a buffer against these negative effects of work-related violence.
5,764
259
[ 1236, 503, 180, 10, 70, 16 ]
10
[ "job", "control", "violence", "job control", "turnover", "intentions", "turnover intentions", "bullying", "work", "phase" ]
[ "workplace violence extensive", "employees workplace violence", "workers physical violence", "violence encountered physicians", "encountering workplace violence" ]
[CONTENT] Job control | Work-related violence | Psychosocial resources | Intentions to quit | Physicians [SUMMARY]
[CONTENT] Adult | Bullying | Female | Finland | Humans | Job Satisfaction | Male | Middle Aged | Personnel Turnover | Physicians | Prospective Studies | Workplace Violence [SUMMARY]
[CONTENT] workplace violence extensive | employees workplace violence | workers physical violence | violence encountered physicians | encountering workplace violence [SUMMARY]
[CONTENT] job | control | violence | job control | turnover | intentions | turnover intentions | bullying | work | phase [SUMMARY]
[CONTENT] job | control | job control | violence | health | associated | previous | work | cent | physicians [SUMMARY]
[CONTENT] phase | job | items | violence | model | intentions | turnover intentions | turnover | totally | bullying [SUMMARY]
[CONTENT] job | job control | control | turnover | intentions | bullying | turnover intentions | included | bullying job | bullying job control [SUMMARY]
[CONTENT] health | health care | organisations | care | employees | problem | workplace violence | workplace | violence | addition [SUMMARY]
[CONTENT] job | violence | control | job control | turnover | intentions | turnover intentions | phase | bullying | work [SUMMARY]
[CONTENT] ||| ||| ||| [SUMMARY]
[CONTENT] 4-year | 2006 | 2010.The | 1515 | 61% | Finnish | 25-63 years ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| ||| ||| ||| 4-year | 2006 | 2010.The | 1515 | 61% | Finnish | 25-63 years ||| ||| ||| ||| ||| [SUMMARY]
Diabetes and pancreatic cancer survival: a prospective cohort-based study.
24786605
Diabetes is a risk factor for pancreatic cancer but its association with survival from pancreatic cancer is poorly understood. Our objective was to investigate the association of diabetes with survival among pancreatic cancer patients in a prospective cohort-based study where diabetes history was ascertained before pancreatic cancer diagnosis.
BACKGROUND
We evaluated survival by baseline (1993-2001) self-reported diabetes history (n=62) among 504 participants that developed exocrine pancreatic cancer within the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial. Hazard ratios (HRs) and 95% confidence intervals (CIs) for mortality were estimated using Cox proportional hazards model, adjusted for age, sex, body mass index, race, smoking, and tumour stage (local, locally advanced, and metastatic).
METHODS
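The stage variable used for adjustment has three categories; the paper's methods state that AJCC/UICC stages I/II map to local, III to locally advanced, and IV to metastatic disease. A minimal sketch of that conversion (the helper name is hypothetical, not from the paper):

```python
def stage_category(ajcc_stage):
    """Map an AJCC/UICC stage (I-IV) to the three analysis categories
    used in the paper: I/II -> local, III -> locally advanced, IV -> metastatic."""
    mapping = {
        "I": "local",
        "II": "local",
        "III": "locally advanced",
        "IV": "metastatic",
    }
    stage = ajcc_stage.strip().upper()
    if stage not in mapping:
        raise ValueError(f"unknown AJCC/UICC stage: {ajcc_stage!r}")
    return mapping[stage]

print(stage_category("II"))   # local
print(stage_category("IV"))   # metastatic
```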
The multivariable-adjusted HR for mortality comparing participants with diabetes to those without was 1.52 (95% CI=1.14-2.04, P-value <0.01). After excluding those diagnosed with pancreatic cancer within 3 years of study enrolment, HR for mortality among those with diabetes was 1.45 (95% CI=1.06-2.00, P-value=0.02).
RESULTS
Using prospectively collected data, our findings indicate that diabetes is associated with worse survival among patients with pancreatic cancer.
CONCLUSIONS
[ "Aged", "Cohort Studies", "Diabetes Mellitus", "Early Detection of Cancer", "Female", "Humans", "Male", "Middle Aged", "Pancreatic Neoplasms", "Proportional Hazards Models", "Prospective Studies", "Risk Factors", "Survival Analysis", "United States" ]
4090724
null
null
null
null
Results
Median age at baseline examination was 64 years. Baseline characteristics, by diabetes history, are listed in Table 1. Of the 504 participants with incident exocrine pancreatic cancer, 62 (12%) reported a history of diabetes before pancreatic cancer diagnosis. Participants with diabetes were comparable with those without diabetes for most characteristics except BMI. At the end of the follow-up period, 91% of the participants (N=460) had died. Median survival was shorter for those with diabetes (92 days) than for those without diabetes (139 days, P-value 0.05). Death fractions for diabetic/non-diabetic patients were 92% vs 91%. Most of the deaths among people with diabetes occurred during the first 500 days of follow-up (Figure 1). The majority (93%) of participants had pancreatic cancer listed as the cause of death on their death certificates (95% among those with diabetes and 93% among those without diabetes). Some other causes of death listed include ischaemic heart disease (four among those without diabetes and one among those with diabetes), cerebrovascular disease (four among those without diabetes and one among those with diabetes). A history of diabetes was associated with reduced survival (Figure 1). In multivariable adjusted Cox regression model, the HR for mortality comparing those with diabetes to those without diabetes was 1.52 (95% CI=1.14–2.04, P-value<0.01) (Table 2). In analyses stratified by tumour stage, diabetes was associated with a 2.3-fold (95%CI=1.16–4.58) and 1.5-fold (95% CI=1.04–2.24) greater mortality among those with localised and metastatic diseases, respectively, but not among those with locally advanced disease (HR=1.17, 95% CI=0.62–2.20). In sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment, diabetes was associated with increased hazard of mortality (HR=1.45, 95% CI=1.06–2.00, P-value=0.02). 
The results were identical in an analysis limited to participants with PDAC (n=437) (Supplementary Table 1).
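As a quick consistency check on the reported estimates (a sketch of standard Wald-interval arithmetic; this computation is not shown in the paper), the point estimate implied by a symmetric log-scale 95% CI is the geometric mean of its bounds:

```python
import math

def hr_from_ci(lower, upper):
    """Point estimate implied by a symmetric log-scale Wald CI (geometric mean of bounds)."""
    return math.exp((math.log(lower) + math.log(upper)) / 2)

def se_log_hr(lower, upper, z=1.96):
    """Standard error of log(HR) implied by a Wald CI with critical value z."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Reported multivariable-adjusted result: HR=1.52, 95% CI=1.14-2.04
print(round(hr_from_ci(1.14, 2.04), 2))   # 1.52, matching the reported HR
print(round(se_log_hr(1.14, 2.04), 3))    # implied SE of log(HR)
```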
null
null
[ "Design overview, study subjects, and endpoints", "Statistical analysis" ]
[ "The PLCO Cancer Screening Trial is a randomized, two-armed, controlled trial designed to determine the effects of screening on disease-specific mortality for cancers of the prostate, lung, colorectal, and ovaries. The PLCO study design and characteristics of the participants have been described in detail elsewhere (Zhu et al, 2013). Briefly, the PLCO enrolled 154 901 men and women aged 55–74 years from 10 centres in the United States between November 1993 and July 2001 (Zhu et al, 2013). Study participants completed a baseline questionnaire at study entry, where they provided demographic, personal, and medical information, including history of diabetes.\nEach eligible participant provided written informed consent. Pancreatic cancer diagnoses were determined from yearly questionnaires completed by participants or next of kin as well as state registries, death certificates, and physician reports and confirmed by PLCO staff (Oaks et al, 2010). At the time of last follow-up in December 2010, 627 primary newly diagnosed exocrine pancreas cancer cases had been identified. Pancreatic cancer stage was abstracted at the PLCO centres in categories of localised, locally advanced, and metastatic in 2010 from previously collected pathology reports and medical records used for cancer confirmation. As stage is an important prognostic factor, we decided a priori to include only those with information on tumour stage in our analyses. Tumour stage was classified as (i) local disease amenable to surgical resection; (ii) locally advanced disease with extra-pancreatic extension not amenable to surgical resection, but without distant metastases; and (iii) distant metastatic disease. The American Joint Committee on Cancer (AJCC)/International Union for Cancer Control (IUCC) tumour-lymph nodes-metastasis (TNM) staging was converted to the above categories. 
The AJCC/IUCC stages I and II correspond to local disease, stage III corresponds to locally advanced disease, and stage IV corresponds to metastatic disease. We excluded participants with missing information on tumour stage (N=95) and diabetes (N=28). Therefore, our final analytic cohort for this study consisted of 504 exocrine pancreas cancer cases (437 were pancreatic ductal adenocarcinomas, PDAC). Baseline characteristics were similar for those who had information on tumour stage and those who did not. For instance, 14% of those with no information on stage had a positive history of diabetes compared with 12% of those with information on stage (P-value=0.30).\nInformation on deaths and causes of death was obtained by linking the study population to the National Death Index (Miller et al, 2000). The institutional review boards of the National Cancer Institute and each of the centres that participated approved the study.", "We compared baseline characteristics of participants who developed pancreatic cancer by diabetes history using χ2-test for categorical variables and Wilcoxon Rank test for continuous variables. Survival was calculated from the day of pancreatic cancer diagnosis to the day of death or December 2010, whichever came first. We used a Cox proportional hazards model to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) of pancreatic cancer mortality by diabetes status. We first adjusted the model for age and tumour stage. In the multivariable-adjusted model, we adjusted for age, sex, body mass index (BMI), smoking status, race, and tumour stage. We conducted analyses stratified by sex and stage. Sex and stage were excluded from multivariable models stratified on these variables. 
Furthermore, because pancreatic cancer-induced diabetes could start 2 years before a diagnosis of pancreatic cancer is made, we conducted sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment.\nSurvival curves were generated using Kaplan–Meier method. The proportional hazard assumption was tested and satisfied through the use of time-dependent covariate method. Statistical analyses were performed using SAS 9.3 statistical package (SAS Institute, Cary, NC, USA). Statistical significance was set at P<0.05 and all P-values are two-sided." ]
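The survival curves described above were produced in SAS; a minimal pure-Python sketch of the Kaplan–Meier product-limit estimator (toy data, not the PLCO cohort) looks like this:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    times  : follow-up time per subject (e.g., days from diagnosis)
    events : 1 if death observed at that time, 0 if censored
    Returns a list of (time, S(t)) steps at each distinct death time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_time = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_time   # deaths and censorings both leave the risk set
        i += at_time
    return curve

# Toy example: 4 subjects, one censored at t=2
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```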
[ null, null ]
[ "Materials and Methods", "Design overview, study subjects, and endpoints", "Statistical analysis", "Results", "Discussion" ]
[ " Design overview, study subjects, and endpoints The PLCO Cancer Screening Trial is a randomized, two-armed, controlled trial designed to determine the effects of screening on disease-specific mortality for cancers of the prostate, lung, colorectal, and ovaries. The PLCO study design and characteristics of the participants have been described in detail elsewhere (Zhu et al, 2013). Briefly, the PLCO enroled 154 901 men and women aged 55–74 years from 10 centres in the United States between November 1993 and July 2001 (Zhu et al, 2013). Study participants filled a baseline questionnaire at study entry where they provided demographic, personal, and medical information, including history of diabetes.\nEach eligible participant provided written informed consent. Pancreatic cancer diagnoses were determined from yearly questionnaires completed by participants or next of kin as well as state registries, death certificates, and physician reports and confirmed by PLCO staff (Oaks et al, 2010). At the time of last follow-up in December 2010, 627 primary newly diagnosed exocrine pancreas cancer cases had been identified. Pancreatic cancer stage was abstracted at the PLCO centres in categories of localised, locally advanced, and metastatic in 2010 from previously collected pathology reports and medical records used for cancer confirmation. As stage is an important prognostic factor, we decided a priori to include only those with information on tumour stage in our analyses. Tumour stage was classified as (i) local disease amenable to surgical resection; (ii) locally advanced disease with extra-pancreatic extension not amenable to surgical resection, but without distant metastases; and (iii) distant metastatic disease. The American Joint Committee on Cancer (AJCC)/International Union for Cancer Control (IUCC) tumour-lymph nodes-metastasis (TNM) staging was converted to the above categories. 
The AJCC/IUCC stages I and II corresponds to local disease, stage III corresponds to locally advanced disease, and stage IV corresponds to metastatic disease. We excluded participants with missing information on tumour stage (N=95) and diabetes (N=28). Therefore, our final analytic cohort for this study consisted of 504 exocrine pancreas cancer cases (437 were pancreatic ductal adenocarcinomas, PDAC). Baseline characteristics were similar for those who had information on tumour stage and those who did not. For instance, 14% of those with no information on stage had a positive history of diabetes compared with 12% of those with information on stage (P-value=0.30).\nInformation on deaths and causes of death were obtained by linking the study population to the National Death Index (Miller et al, 2000). The institutional review boards of the National Cancer Institute and each of the centres that participated approved the study.\nThe PLCO Cancer Screening Trial is a randomized, two-armed, controlled trial designed to determine the effects of screening on disease-specific mortality for cancers of the prostate, lung, colorectal, and ovaries. The PLCO study design and characteristics of the participants have been described in detail elsewhere (Zhu et al, 2013). Briefly, the PLCO enroled 154 901 men and women aged 55–74 years from 10 centres in the United States between November 1993 and July 2001 (Zhu et al, 2013). Study participants filled a baseline questionnaire at study entry where they provided demographic, personal, and medical information, including history of diabetes.\nEach eligible participant provided written informed consent. Pancreatic cancer diagnoses were determined from yearly questionnaires completed by participants or next of kin as well as state registries, death certificates, and physician reports and confirmed by PLCO staff (Oaks et al, 2010). 
At the time of last follow-up in December 2010, 627 primary newly diagnosed exocrine pancreas cancer cases had been identified. Pancreatic cancer stage was abstracted at the PLCO centres in categories of localised, locally advanced, and metastatic in 2010 from previously collected pathology reports and medical records used for cancer confirmation. As stage is an important prognostic factor, we decided a priori to include only those with information on tumour stage in our analyses. Tumour stage was classified as (i) local disease amenable to surgical resection; (ii) locally advanced disease with extra-pancreatic extension not amenable to surgical resection, but without distant metastases; and (iii) distant metastatic disease. The American Joint Committee on Cancer (AJCC)/International Union for Cancer Control (IUCC) tumour-lymph nodes-metastasis (TNM) staging was converted to the above categories. The AJCC/IUCC stages I and II corresponds to local disease, stage III corresponds to locally advanced disease, and stage IV corresponds to metastatic disease. We excluded participants with missing information on tumour stage (N=95) and diabetes (N=28). Therefore, our final analytic cohort for this study consisted of 504 exocrine pancreas cancer cases (437 were pancreatic ductal adenocarcinomas, PDAC). Baseline characteristics were similar for those who had information on tumour stage and those who did not. For instance, 14% of those with no information on stage had a positive history of diabetes compared with 12% of those with information on stage (P-value=0.30).\nInformation on deaths and causes of death were obtained by linking the study population to the National Death Index (Miller et al, 2000). 
The institutional review boards of the National Cancer Institute and each of the centres that participated approved the study.\n Statistical analysis We compared baseline characteristics of participants who developed pancreatic cancer by diabetes history using χ2-test for categorical variables and Wilcoxon Rank test for continuous variables. Survival was calculated from the day of pancreatic cancer diagnosis to the day of death or December 2010, whichever came first. We used Cox proportional hazards model to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) of pancreatic cancer mortality by diabetes status. We first adjusted the model for age and tumour stage. In multivariable adjusted model, we adjusted for age, sex, body mass index (BMI), smoking status, race, and tumour stage. We conducted analyses stratified by sex and stage. Sex and stage were excluded from multivariable models stratified on these variables. Furthermore, because pancreatic cancer-induced diabetes could start 2 years before a diagnosis of pancreatic cancer is made, we conducted sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment.\nSurvival curves were generated using Kaplan–Meier method. The proportional hazard assumption was tested and satisfied through the use of time-dependent covariate method. Statistical analyses were performed using SAS 9.3 statistical package (SAS Institute, Cary, NC, USA). Statistical significance was set at P<0.05 and all P-values are two-sided.\nWe compared baseline characteristics of participants who developed pancreatic cancer by diabetes history using χ2-test for categorical variables and Wilcoxon Rank test for continuous variables. Survival was calculated from the day of pancreatic cancer diagnosis to the day of death or December 2010, whichever came first. 
We used Cox proportional hazards model to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) of pancreatic cancer mortality by diabetes status. We first adjusted the model for age and tumour stage. In multivariable adjusted model, we adjusted for age, sex, body mass index (BMI), smoking status, race, and tumour stage. We conducted analyses stratified by sex and stage. Sex and stage were excluded from multivariable models stratified on these variables. Furthermore, because pancreatic cancer-induced diabetes could start 2 years before a diagnosis of pancreatic cancer is made, we conducted sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment.\nSurvival curves were generated using Kaplan–Meier method. The proportional hazard assumption was tested and satisfied through the use of time-dependent covariate method. Statistical analyses were performed using SAS 9.3 statistical package (SAS Institute, Cary, NC, USA). Statistical significance was set at P<0.05 and all P-values are two-sided.", "The PLCO Cancer Screening Trial is a randomized, two-armed, controlled trial designed to determine the effects of screening on disease-specific mortality for cancers of the prostate, lung, colorectal, and ovaries. The PLCO study design and characteristics of the participants have been described in detail elsewhere (Zhu et al, 2013). Briefly, the PLCO enroled 154 901 men and women aged 55–74 years from 10 centres in the United States between November 1993 and July 2001 (Zhu et al, 2013). Study participants filled a baseline questionnaire at study entry where they provided demographic, personal, and medical information, including history of diabetes.\nEach eligible participant provided written informed consent. 
Pancreatic cancer diagnoses were determined from yearly questionnaires completed by participants or next of kin as well as state registries, death certificates, and physician reports and confirmed by PLCO staff (Oaks et al, 2010). At the time of last follow-up in December 2010, 627 primary newly diagnosed exocrine pancreas cancer cases had been identified. Pancreatic cancer stage was abstracted at the PLCO centres in categories of localised, locally advanced, and metastatic in 2010 from previously collected pathology reports and medical records used for cancer confirmation. As stage is an important prognostic factor, we decided a priori to include only those with information on tumour stage in our analyses. Tumour stage was classified as (i) local disease amenable to surgical resection; (ii) locally advanced disease with extra-pancreatic extension not amenable to surgical resection, but without distant metastases; and (iii) distant metastatic disease. The American Joint Committee on Cancer (AJCC)/International Union for Cancer Control (IUCC) tumour-lymph nodes-metastasis (TNM) staging was converted to the above categories. The AJCC/IUCC stages I and II corresponds to local disease, stage III corresponds to locally advanced disease, and stage IV corresponds to metastatic disease. We excluded participants with missing information on tumour stage (N=95) and diabetes (N=28). Therefore, our final analytic cohort for this study consisted of 504 exocrine pancreas cancer cases (437 were pancreatic ductal adenocarcinomas, PDAC). Baseline characteristics were similar for those who had information on tumour stage and those who did not. For instance, 14% of those with no information on stage had a positive history of diabetes compared with 12% of those with information on stage (P-value=0.30).\nInformation on deaths and causes of death were obtained by linking the study population to the National Death Index (Miller et al, 2000). 
The institutional review boards of the National Cancer Institute and each of the centres that participated approved the study.", "We compared baseline characteristics of participants who developed pancreatic cancer by diabetes history using χ2-test for categorical variables and Wilcoxon Rank test for continuous variables. Survival was calculated from the day of pancreatic cancer diagnosis to the day of death or December 2010, whichever came first. We used Cox proportional hazards model to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) of pancreatic cancer mortality by diabetes status. We first adjusted the model for age and tumour stage. In multivariable adjusted model, we adjusted for age, sex, body mass index (BMI), smoking status, race, and tumour stage. We conducted analyses stratified by sex and stage. Sex and stage were excluded from multivariable models stratified on these variables. Furthermore, because pancreatic cancer-induced diabetes could start 2 years before a diagnosis of pancreatic cancer is made, we conducted sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment.\nSurvival curves were generated using Kaplan–Meier method. The proportional hazard assumption was tested and satisfied through the use of time-dependent covariate method. Statistical analyses were performed using SAS 9.3 statistical package (SAS Institute, Cary, NC, USA). Statistical significance was set at P<0.05 and all P-values are two-sided.", "Median age at baseline examination was 64 years. Baseline characteristics, by diabetes history, are listed in Table 1. Of the 504 participants with incident exocrine pancreatic cancer, 62 (12%) reported a history of diabetes before pancreatic cancer diagnosis. Participants with diabetes were comparable with those without diabetes for most characteristics except BMI. At the end of the follow-up period, 91% of the participants (N=460) had died. 
Median survival was shorter for those with diabetes (92 days) than for those without diabetes (139 days, P-value 0.05). Death fractions for diabetic/non-diabetic patients were 92% vs 91%. Most of the deaths among people with diabetes occurred during the first 500 days of follow-up (Figure 1).\nThe majority (93%) of participants had pancreatic cancer listed as the cause of death on their death certificates (95% among those with diabetes and 93% among those without diabetes). Some other causes of death listed include ischaemic heart disease (four among those without diabetes and one among those with diabetes), cerebrovascular disease (four among those without diabetes and one among those with diabetes).\nA history of diabetes was associated with reduced survival (Figure 1). In multivariable adjusted Cox regression model, the HR for mortality comparing those with diabetes to those without diabetes was 1.52 (95% CI=1.14–2.04, P-value<0.01) (Table 2). In analyses stratified by tumour stage, diabetes was associated with a 2.3-fold (95%CI=1.16–4.58) and 1.5-fold (95% CI=1.04–2.24) greater mortality among those with localised and metastatic diseases, respectively, but not among those with locally advanced disease (HR=1.17, 95% CI=0.62–2.20). 
In sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment, diabetes was associated with increased hazard of mortality (HR=1.45, 95% CI=1.06–2.00, P-value=0.02).\nThe results were identical in analysis limited to participants with PDAC (n=437) (Supplementary Table 1).", "In this prospective cohort-based study, we observed that participants who reported having diabetes before being diagnosed with pancreatic cancer had worse survival compared with those who did not have diabetes.\nPrevious studies have evaluated the associations of diabetes with survival among pancreatic cancer patients using retrospectively collected data and among patients who underwent surgical resection, with conflicting results (Chu et al, 2010; McWilliams et al, 2010; Olson et al, 2010; Dandona et al, 2011; Hwang et al, 2013). In a large study conducted within the United Kingdom, there was no survival difference between those with diabetes and those without (Hwang et al, 2013). However, the authors reported increased pancreatic cancer mortality among those who had long-term diabetes (>5 years). Diabetes was not associated with survival from pancreatic cancer in three clinical studies (McWilliams et al, 2010; Olson et al, 2010; Dandona et al, 2011). Our study underscores the need for prospective studies to evaluate the associations of diabetes with survival among pancreatic cancer patients. In the Cancer Prevention Study II (CPS II), a history of diabetes was associated with higher death rates (Calle et al, 1998) but the analyses did not take into account tumour stage.\nExperimental studies suggest that diabetes can impact pancreatic cancer survival through its effect on glucose metabolism and insulin resistance-related metabolic markers. There is emerging evidence that diabetes alters, and reprograms the intracellular metabolic environment to a metabolism more favourable to proliferation (Regel et al, 2012; Sah et al, 2013). 
Kras, p53, and c-Myc pathways impact pancreatic cancer proliferation and studies have shown that these pathways also affect energy metabolism by influencing glucose and glutamine utilisation (Regel et al, 2012). In addition, hyperinsulinaemia enhances pancreatic cell proliferation and invasiveness either directly or indirectly through its effects on insulin-like growth factor (IGF) pathway. Metformin, an anti-diabetic drug, has been shown to improve survival among pancreatic cancer patients (Sadeghi et al, 2012), possibly via its impact on insulin/IGF-I signalling (Rozengurt et al, 2010). Median overall survival was 15.2 months among pancreatic cancer patients treated with metformin compared with 11.1 months among the non-metformin group (Sadeghi et al, 2012). A better understanding of the biological mechanisms driving the association between diabetes and pancreatic cancer survival could provide insights into the development of targeted therapies for some pancreatic cancer patients.\nAnother possible explanation for the worse survival may be because diabetics have more comorbidities, especially cardiovascular comorbidities, than those without diabetes. These comorbidities might put those with diabetes at a survival disadvantage directly or indirectly as physicians may be less likely to use aggressive chemotherapy on patients with diabetes because of the comorbidities. Hence, diabetes status may be driving aggressiveness of treatment decisions. Nevertheless, in our study cohort, pancreatic cancer was reported as the cause of death in 95% of participants with diabetes and 93% of participants without diabetes and there appears to be no difference in death due to ischaemic heart disease and cerebrovascular accident between the two groups. Further, we had no information available on the history of chronic pancreatitis. 
The worse survival among those with diabetes could also be related to later stage at diagnosis, although information on tumour staging in our data does not support this.\nThe major strength of our study is its prospective nature. Information on diabetes was collected before pancreatic cancer diagnosis. In addition, we adjusted for stage, which is an important determinant of pancreatic cancer survival. Our study has the following limitations. A sizeable proportion of diabetes may be undiagnosed. Using haemoglobin A1C concentrations as a diagnostic test, up to 19% of people with diabetes in the United States are undiagnosed (Cowie et al, 2010). Because history of diabetes was based on self-report, prevalence of diabetes in our study population would have been underestimated; hence some participants with diabetes would have been misclassified as not having diabetes. Any misclassification, though, would have been non-differential. Finally, because PLCO is a multicentre study, patients diagnosed with pancreatic cancer were treated at different hospitals in different geographical locations; hence, treatment programs, which were likely to vary among patients, could have contributed to differences in survival, but we had no information on the treatment programs, particularly surgery, received by the patients.\nIn conclusion, in this survival analysis using incident cases ascertained from the PLCO prospective cohort, a history of diabetes before pancreatic cancer diagnosis was associated with higher mortality among pancreatic cancer patients. Studies characterising the molecular mechanisms driving this association are needed, as these can provide insights into relevant pathways that could be targeted for possible therapeutic interventions." ]
[ "materials|methods", null, null, "results", "discussion" ]
[ "diabetes", "pancreatic cancer", "mortality", "survival", "cohort", "prospective study" ]
Materials and Methods: Design overview, study subjects, and endpoints The PLCO Cancer Screening Trial is a randomized, two-armed, controlled trial designed to determine the effects of screening on disease-specific mortality for cancers of the prostate, lung, colorectal, and ovaries. The PLCO study design and characteristics of the participants have been described in detail elsewhere (Zhu et al, 2013). Briefly, the PLCO enroled 154 901 men and women aged 55–74 years from 10 centres in the United States between November 1993 and July 2001 (Zhu et al, 2013). Study participants filled a baseline questionnaire at study entry where they provided demographic, personal, and medical information, including history of diabetes. Each eligible participant provided written informed consent. Pancreatic cancer diagnoses were determined from yearly questionnaires completed by participants or next of kin as well as state registries, death certificates, and physician reports and confirmed by PLCO staff (Oaks et al, 2010). At the time of last follow-up in December 2010, 627 primary newly diagnosed exocrine pancreas cancer cases had been identified. Pancreatic cancer stage was abstracted at the PLCO centres in categories of localised, locally advanced, and metastatic in 2010 from previously collected pathology reports and medical records used for cancer confirmation. As stage is an important prognostic factor, we decided a priori to include only those with information on tumour stage in our analyses. Tumour stage was classified as (i) local disease amenable to surgical resection; (ii) locally advanced disease with extra-pancreatic extension not amenable to surgical resection, but without distant metastases; and (iii) distant metastatic disease. The American Joint Committee on Cancer (AJCC)/International Union for Cancer Control (IUCC) tumour-lymph nodes-metastasis (TNM) staging was converted to the above categories. 
The AJCC/IUCC stages I and II corresponds to local disease, stage III corresponds to locally advanced disease, and stage IV corresponds to metastatic disease. We excluded participants with missing information on tumour stage (N=95) and diabetes (N=28). Therefore, our final analytic cohort for this study consisted of 504 exocrine pancreas cancer cases (437 were pancreatic ductal adenocarcinomas, PDAC). Baseline characteristics were similar for those who had information on tumour stage and those who did not. For instance, 14% of those with no information on stage had a positive history of diabetes compared with 12% of those with information on stage (P-value=0.30). Information on deaths and causes of death were obtained by linking the study population to the National Death Index (Miller et al, 2000). The institutional review boards of the National Cancer Institute and each of the centres that participated approved the study. The PLCO Cancer Screening Trial is a randomized, two-armed, controlled trial designed to determine the effects of screening on disease-specific mortality for cancers of the prostate, lung, colorectal, and ovaries. The PLCO study design and characteristics of the participants have been described in detail elsewhere (Zhu et al, 2013). Briefly, the PLCO enroled 154 901 men and women aged 55–74 years from 10 centres in the United States between November 1993 and July 2001 (Zhu et al, 2013). Study participants filled a baseline questionnaire at study entry where they provided demographic, personal, and medical information, including history of diabetes. Each eligible participant provided written informed consent. Pancreatic cancer diagnoses were determined from yearly questionnaires completed by participants or next of kin as well as state registries, death certificates, and physician reports and confirmed by PLCO staff (Oaks et al, 2010). 
At the time of last follow-up in December 2010, 627 primary newly diagnosed exocrine pancreas cancer cases had been identified. Pancreatic cancer stage was abstracted at the PLCO centres in categories of localised, locally advanced, and metastatic in 2010 from previously collected pathology reports and medical records used for cancer confirmation. As stage is an important prognostic factor, we decided a priori to include only those with information on tumour stage in our analyses. Tumour stage was classified as (i) local disease amenable to surgical resection; (ii) locally advanced disease with extra-pancreatic extension not amenable to surgical resection, but without distant metastases; and (iii) distant metastatic disease. The American Joint Committee on Cancer (AJCC)/International Union for Cancer Control (IUCC) tumour-lymph nodes-metastasis (TNM) staging was converted to the above categories. The AJCC/IUCC stages I and II corresponds to local disease, stage III corresponds to locally advanced disease, and stage IV corresponds to metastatic disease. We excluded participants with missing information on tumour stage (N=95) and diabetes (N=28). Therefore, our final analytic cohort for this study consisted of 504 exocrine pancreas cancer cases (437 were pancreatic ductal adenocarcinomas, PDAC). Baseline characteristics were similar for those who had information on tumour stage and those who did not. For instance, 14% of those with no information on stage had a positive history of diabetes compared with 12% of those with information on stage (P-value=0.30). Information on deaths and causes of death were obtained by linking the study population to the National Death Index (Miller et al, 2000). The institutional review boards of the National Cancer Institute and each of the centres that participated approved the study. 
Statistical analysis: We compared baseline characteristics of participants who developed pancreatic cancer by diabetes history using the χ2-test for categorical variables and the Wilcoxon rank test for continuous variables. Survival was calculated from the day of pancreatic cancer diagnosis to the day of death or December 2010, whichever came first. We used a Cox proportional hazards model to calculate hazard ratios (HRs) and 95% confidence intervals (CIs) of pancreatic cancer mortality by diabetes status. We first adjusted the model for age and tumour stage. In the multivariable-adjusted model, we adjusted for age, sex, body mass index (BMI), smoking status, race, and tumour stage. We conducted analyses stratified by sex and stage; sex and stage were excluded from multivariable models stratified on these variables. Furthermore, because pancreatic cancer-induced diabetes could start 2 years before a diagnosis of pancreatic cancer is made, we conducted a sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment. Survival curves were generated using the Kaplan–Meier method. The proportional hazards assumption was tested and satisfied using a time-dependent covariate method. Statistical analyses were performed using the SAS 9.3 statistical package (SAS Institute, Cary, NC, USA). Statistical significance was set at P<0.05 and all P-values are two-sided. 
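The Cox model reports hazard ratios with Wald confidence intervals computed on the log scale, CI = exp(ln HR ± 1.96·SE). As an illustrative back-calculation (not part of the original analysis), the log-scale standard error implied by the study's multivariable HR of 1.52 (95% CI 1.14–2.04) can be recovered from the published interval:

```python
import math

def log_se_from_ci(lower, upper, z=1.96):
    """Back-calculate the standard error of log(HR) from a 95% Wald CI."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def wald_ci(hr, se, z=1.96):
    """Reconstruct the Wald CI from an HR and its log-scale SE."""
    return math.exp(math.log(hr) - z * se), math.exp(math.log(hr) + z * se)

se = log_se_from_ci(1.14, 2.04)   # CI reported for diabetes vs no diabetes
lo, hi = wald_ci(1.52, se)        # reconstructed interval, ~1.14 to ~2.03
```

The reconstructed interval matches the published one up to rounding of the reported values.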
Results: Median age at baseline examination was 64 years. Baseline characteristics, by diabetes history, are listed in Table 1. Of the 504 participants with incident exocrine pancreatic cancer, 62 (12%) reported a history of diabetes before pancreatic cancer diagnosis. Participants with diabetes were comparable with those without diabetes for most characteristics except BMI. By the end of the follow-up period, 91% of the participants (N=460) had died. Median survival was shorter for those with diabetes (92 days) than for those without diabetes (139 days, P-value=0.05); the proportion who died was 92% among those with diabetes vs 91% among those without. Most of the deaths among people with diabetes occurred during the first 500 days of follow-up (Figure 1). 
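The survival curves in Figure 1 are Kaplan–Meier estimates: at each observed death time the current survival probability is multiplied by (1 − deaths/number at risk), and censored participants simply leave the risk set. A minimal product-limit sketch on synthetic follow-up data (illustrative values, not the study data):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates.
    times: follow-up in days; events: 1 = died, 0 = censored."""
    curve, s = [], 1.0
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        s *= 1 - deaths / at_risk          # step down at each death time
        curve.append((t, s))
    return curve

# Illustrative data only: deaths at days 2, 3, 5; censoring at days 3 and 8
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

The median survival times quoted above (92 vs 139 days) correspond to the first time each group's curve drops to or below 0.5.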
The majority (93%) of participants had pancreatic cancer listed as the cause of death on their death certificates (95% among those with diabetes and 93% among those without diabetes). Other causes of death listed included ischaemic heart disease (four among those without diabetes and one among those with diabetes) and cerebrovascular disease (four among those without diabetes and one among those with diabetes). A history of diabetes was associated with reduced survival (Figure 1). In the multivariable-adjusted Cox regression model, the HR for mortality comparing those with diabetes to those without diabetes was 1.52 (95% CI=1.14–2.04, P-value<0.01) (Table 2). In analyses stratified by tumour stage, diabetes was associated with a 2.3-fold (95% CI=1.16–4.58) and 1.5-fold (95% CI=1.04–2.24) greater mortality among those with localised and metastatic disease, respectively, but not among those with locally advanced disease (HR=1.17, 95% CI=0.62–2.20). In the sensitivity analysis excluding participants who developed pancreatic cancer within 3 years of enrolment, diabetes remained associated with an increased hazard of mortality (HR=1.45, 95% CI=1.06–2.00, P-value=0.02). The results were identical in analyses limited to participants with PDAC (n=437) (Supplementary Table 1). Discussion: In this prospective cohort-based study, we observed that participants who reported having diabetes before being diagnosed with pancreatic cancer had worse survival compared with those who did not have diabetes. Previous studies have evaluated the associations of diabetes with survival among pancreatic cancer patients using retrospectively collected data and among patients who underwent surgical resection, with conflicting results (Chu et al, 2010; McWilliams et al, 2010; Olson et al, 2010; Dandona et al, 2011; Hwang et al, 2013). In a large study conducted within the United Kingdom, there was no survival difference between those with diabetes and those without (Hwang et al, 2013). 
However, the authors reported increased pancreatic cancer mortality among those who had long-term diabetes (>5 years). Diabetes was not associated with survival from pancreatic cancer in three clinical studies (McWilliams et al, 2010; Olson et al, 2010; Dandona et al, 2011). Our study underscores the need for prospective studies to evaluate the associations of diabetes with survival among pancreatic cancer patients. In the Cancer Prevention Study II (CPS II), a history of diabetes was associated with higher death rates (Calle et al, 1998), but the analyses did not take tumour stage into account. Experimental studies suggest that diabetes can affect pancreatic cancer survival through its effects on glucose metabolism and insulin resistance-related metabolic markers. There is emerging evidence that diabetes alters and reprograms the intracellular metabolic environment towards a metabolism more favourable to proliferation (Regel et al, 2012; Sah et al, 2013). The Kras, p53, and c-Myc pathways affect pancreatic cancer proliferation, and studies have shown that these pathways also affect energy metabolism by influencing glucose and glutamine utilisation (Regel et al, 2012). In addition, hyperinsulinaemia enhances pancreatic cell proliferation and invasiveness, either directly or indirectly, through its effects on the insulin-like growth factor (IGF) pathway. Metformin, an anti-diabetic drug, has been shown to improve survival among pancreatic cancer patients (Sadeghi et al, 2012), possibly via its impact on insulin/IGF-I signalling (Rozengurt et al, 2010): median overall survival was 15.2 months among pancreatic cancer patients treated with metformin compared with 11.1 months in the non-metformin group (Sadeghi et al, 2012). A better understanding of the biological mechanisms driving the association between diabetes and pancreatic cancer survival could provide insights into the development of targeted therapies for some pancreatic cancer patients. 
Another possible explanation for the worse survival may be that patients with diabetes have more comorbidities, especially cardiovascular comorbidities, than those without diabetes. These comorbidities might put those with diabetes at a survival disadvantage directly, or indirectly if physicians are less likely to use aggressive chemotherapy in patients with diabetes because of the comorbidities; diabetes status may thus drive the aggressiveness of treatment decisions. Nevertheless, in our study cohort, pancreatic cancer was reported as the cause of death in 95% of participants with diabetes and 93% of participants without diabetes, and there appeared to be no difference in deaths due to ischaemic heart disease or cerebrovascular accident between the two groups. Further, we had no information available on history of chronic pancreatitis. The worse survival among those with diabetes could also be related to a later stage at diagnosis, although information on tumour staging in our data does not support this. The major strength of our study is its prospective nature: information on diabetes was collected before pancreatic cancer diagnosis. In addition, we adjusted for stage, which is an important determinant of pancreatic cancer survival. Our study has the following limitations. A sizeable proportion of diabetes cases may be undiagnosed; using haemoglobin A1C concentrations as a diagnostic test, up to 19% of people with diabetes in the United States are undiagnosed (Cowie et al, 2010). Because history of diabetes was based on self-report, the prevalence of diabetes in our study population would have been underestimated, and some participants with diabetes would have been misclassified as not having diabetes. Any such misclassification, though, would have been non-differential. 
Finally, because PLCO is a multicenter study, patients diagnosed with pancreatic cancer were treated at different hospitals in different geographical locations. Treatment programs, which were likely to vary among patients, could therefore have contributed to differences in survival, but we had no information on the treatments, particularly surgery, that patients received. In conclusion, in this survival analysis using incident cases ascertained from the PLCO prospective cohort, a history of diabetes before pancreatic cancer diagnosis was associated with higher mortality among pancreatic cancer patients. Studies characterising the molecular mechanisms driving this association are needed, as these could provide insights into relevant pathways that might be targeted for therapeutic interventions.
Background: Diabetes is a risk factor for pancreatic cancer but its association with survival from pancreatic cancer is poorly understood. Our objective was to investigate the association of diabetes with survival among pancreatic cancer patients in a prospective cohort-based study where diabetes history was ascertained before pancreatic cancer diagnosis. Methods: We evaluated survival by baseline (1993-2001) self-reported diabetes history (n=62) among 504 participants who developed exocrine pancreatic cancer within the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial. Hazard ratios (HRs) and 95% confidence intervals (CIs) for mortality were estimated using a Cox proportional hazards model, adjusted for age, sex, body mass index, race, smoking, and tumour stage (local, locally advanced, and metastatic). Results: The multivariable-adjusted HR for mortality comparing participants with diabetes to those without was 1.52 (95% CI=1.14-2.04, P-value <0.01). After excluding those diagnosed with pancreatic cancer within 3 years of study enrolment, the HR for mortality among those with diabetes was 1.45 (95% CI=1.06-2.00, P-value=0.02). Conclusions: Using prospectively collected data, our findings indicate that diabetes is associated with worse survival among patients with pancreatic cancer.
null
null
3,579
246
[ 505, 244 ]
5
[ "cancer", "diabetes", "pancreatic", "stage", "pancreatic cancer", "study", "participants", "disease", "information", "tumour" ]
[ "pancreatic cancer diabetes", "study plco cancer", "consent pancreatic cancer", "study cohort pancreatic", "plco cancer screening" ]
null
null
null
null
null
null
[CONTENT] diabetes | pancreatic cancer | mortality | survival | cohort | prospective study [SUMMARY]
null
[CONTENT] diabetes | pancreatic cancer | mortality | survival | cohort | prospective study [SUMMARY]
null
null
null
[CONTENT] Aged | Cohort Studies | Diabetes Mellitus | Early Detection of Cancer | Female | Humans | Male | Middle Aged | Pancreatic Neoplasms | Proportional Hazards Models | Prospective Studies | Risk Factors | Survival Analysis | United States [SUMMARY]
null
[CONTENT] Aged | Cohort Studies | Diabetes Mellitus | Early Detection of Cancer | Female | Humans | Male | Middle Aged | Pancreatic Neoplasms | Proportional Hazards Models | Prospective Studies | Risk Factors | Survival Analysis | United States [SUMMARY]
null
null
null
[CONTENT] pancreatic cancer diabetes | study plco cancer | consent pancreatic cancer | study cohort pancreatic | plco cancer screening [SUMMARY]
null
[CONTENT] pancreatic cancer diabetes | study plco cancer | consent pancreatic cancer | study cohort pancreatic | plco cancer screening [SUMMARY]
null
null
null
[CONTENT] cancer | diabetes | pancreatic | stage | pancreatic cancer | study | participants | disease | information | tumour [SUMMARY]
null
[CONTENT] cancer | diabetes | pancreatic | stage | pancreatic cancer | study | participants | disease | information | tumour [SUMMARY]
null
null
null
[CONTENT] diabetes | ci | 95 ci | listed | diabetes diabetes | days | hr | table | participants | 95 [SUMMARY]
null
[CONTENT] diabetes | cancer | pancreatic | pancreatic cancer | stage | study | information | participants | disease | survival [SUMMARY]
null
null
null
[CONTENT] 1.52 | 95% | CI=1.14-2.04 ||| 3 years | 1.45 | 95% | CI=1.06-2.00 [SUMMARY]
null
[CONTENT] ||| ||| 1993-2001 | 504 | Prostate | Lung | Colorectal | Ovarian | PLCO ||| Cancer Screening Trial ||| 95% | Cox ||| ||| 1.52 | 95% | CI=1.14-2.04 ||| 3 years | 1.45 | 95% | CI=1.06-2.00 ||| [SUMMARY]
null
Expression changes of serum LINC00941 and LINC00514 in HBV infection-related liver diseases and their potential application values.
34825738
Long non-coding RNAs (LncRNAs) are considered as potential diagnostic markers for a variety of tumors. Here, we aimed to explore the changes of LINC00941 and LINC00514 expression in hepatitis B virus (HBV) infection-related liver disease and evaluate their application value in disease diagnosis.
BACKGROUND
Serum levels of LINC00941 and LINC00514 were detected by qRT-PCR. Potential diagnostic values were evaluated by receiver operating characteristic curve (ROC) analysis.
METHODS
Serum LINC00941 and LINC00514 levels were elevated in patients with chronic hepatitis B (CHB), liver cirrhosis (LC), and hepatocellular carcinoma (HCC) compared with controls. When distinguishing HCC from controls, serum LINC00941 and LINC00514 had diagnostic parameters of an AUC of 0.919 and 0.808, sensitivity of 85% and 90%, and specificity of 86.67% and 56.67%, which were higher than parameters for alpha fetal protein (AFP) (all p < 0.0001). When distinguishing HCC from LC, CHB, or LC from controls, the combined detection of LINC00941 or LINC00514 can significantly improve the accuracy of AFP test alone (all p < 0.05).
RESULTS
LINC00941 and LINC00514 were increased in the serum of HBV infection-associated liver diseases and might be independent markers for the detection of liver diseases.
CONCLUSIONS
[ "Adult", "Biomarkers", "Female", "Hepatitis B", "Humans", "Liver Diseases", "Male", "Middle Aged", "RNA, Long Noncoding" ]
8761418
INTRODUCTION
Liver cancer is one of the most common human cancers worldwide and one of the leading causes of cancer death; its incidence and mortality rank 6th and 2nd, respectively, among all malignant tumors. 1 According to the latest global cancer statistics in 2018, there are about 840,000 new cases of liver cancer each year, with an incidence rate of about 4.7%, and about 780,000 liver cancer deaths occur every year, with a mortality rate of 8.2%. 2 China is one of the countries with a high incidence of liver cancer. Based on the pathologic characteristics of tissues, hepatocellular carcinoma (HCC) accounts for more than 90% of all liver cancers and ranks fourth and third in incidence and mortality, respectively, among all malignant tumors; in addition, males have a higher incidence and poorer prognosis than females. 3, 4 HCC is mainly caused by hepatitis virus infection, alcohol abuse, non-alcoholic steatohepatitis, toxin exposure, and metabolic syndrome. 5 Chronic hepatitis B virus (HBV) infection and aflatoxin exposure are the main pathogenic factors of HCC in China, and HCC caused by chronic HBV infection accounts for more than 80% of all HCC. 6 In addition to liver ultrasound, serum alpha fetal protein (AFP) detection is the main method for large-scale screening of HCC in China. However, because of its low sensitivity and specificity, the detection rate of HCC is not high, and many HCC patients miss the optimal window for surgery. It is therefore very important to explore novel and effective serum markers to improve the detection rate and prognosis of HCC. Non-coding RNA is currently a research hotspot in the field of molecular biology: although non-coding RNAs do not encode proteins, they play important roles at the epigenetic, transcriptional, and post-transcriptional levels. 
A growing number of lncRNAs have been found to be involved in the occurrence and development of a variety of tumors and can be used for tumor diagnosis and prognosis monitoring. Studies have reported that the expression of LINC00941 is increased in pancreatic cancer, colorectal cancer, lung cancer, and other tumors. 7, 8, 9, 10 It can promote tumor progression, including proliferation, metastasis, and invasion, through a variety of signaling pathways. 7, 8, 9, 11 It can also serve as a diagnostic or prognostic marker for gastric cancer, lung cancer, head and neck squamous cell carcinoma, and other tumors 12, 13, 14 and can predict recurrence-free survival and overall survival in HCC. 15 On the other hand, LINC00514 has been reported to be highly expressed in the tissues and cells of breast and pancreatic cancer, where it can promote tumor occurrence and development by regulating related microRNAs. 16, 17 However, so far, no studies have reported their diagnostic value in HCC. In this study, we detected the expression of LINC00941 and LINC00514 in healthy controls and patients with HBV infection-related liver disease and assessed their correlation with basic characteristics of patients with HCC. The diagnostic values of LINC00941 and LINC00514 in liver diseases were also analyzed by receiver operating characteristic (ROC) curve analysis.
null
null
RESULTS
Characteristics of healthy controls and patients with HBV infection-related liver disease: The main characteristics of the study population are summarized in Table 1. First, there was no statistically significant difference in age between the groups. Regarding the liver biochemical indicators ALT, AST, ALP, GGT, ALB, TBIL, and DBIL, the differences between the four groups were statistically significant (all p < 0.0001). For the level of HBV DNA, the CHB group was significantly higher than the LC and HCC groups (all p < 0.0001). In the analysis of serum tumor markers, the results showed that the AFP level of HCC patients was significantly higher than that of the control, CHB, and LC groups, and the difference was statistically significant. Basic biochemical data characteristics of controls and patients with HBV infection-related liver disease. Data are presented as means (SD), median (interquartile range), or percentage. Abbreviations: AFP, alpha fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CHB, chronic hepatitis B; DBIL, direct bilirubin; GGT, gamma-glutamyl transpeptidase; HBVDNA, hepatitis B virus DNA; HCC, hepatocellular carcinoma; LC, liver cirrhosis; TBIL, total bilirubin. 
Comparison of serum LINC00941 and LINC00514 levels in controls, CHB, LC and HCC: Furthermore, we used the LSD test to evaluate the serum levels of LINC00941 and LINC00514 in the control, CHB, LC and HCC groups. As shown in Figure 1A, serum LINC00941 levels in the CHB, LC and HCC groups were significantly higher than those in the control group (all p < 0.0001), and compared with patients with LC, serum LINC00941 levels in the CHB and HCC groups were significantly lower (all p < 0.0001). 
Compared to the controls, the serum LINC00514 level was significantly increased in patients with CHB, LC and HCC (all p < 0.0001); and the serum LINC00514 level was significantly lower in the CHB and HCC groups than in the LC group (all p < 0.0001, Figure 1B). Serum LINC00941 and LINC00514 expression in patients with chronic hepatitis B, patients with liver cirrhosis, patients with HCC, and healthy controls. ****p < 0.0001 represents a significant difference between two groups. Relationship between serum LINC00941 and LINC00514 expression and basic biochemical indexes in patients with HCC: To further evaluate whether serum LINC00941 and LINC00514 expression are related to liver function, HBV viral load, and the AFP level of HCC patients, we analyzed the correlation between high and low expression of the two lncRNAs and the above indexes. The results showed that serum LINC00941 and LINC00514 had no correlation with liver function indexes, HBV viral load, or AFP (Table 2). The association between the relative expression of LINC00941 and LINC00514 in serum of HCC patients and basic clinical data; patients (n = 40) were divided into high and low expression groups at the mean. Abbreviations: AFP, alpha fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; DBIL, direct bilirubin; GGT, gamma-glutamyl transpeptidase; HBVDNA, hepatitis B virus DNA; HCC, hepatocellular carcinoma; TBIL, total bilirubin. 
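Serum lncRNA levels measured by qRT-PCR, as in this study, are commonly expressed as relative expression via the 2^(−ΔΔCt) method. The exact normalisation gene and calculation used here are not described in this excerpt, so the following is a generic sketch with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt relative quantification (hypothetical Ct values):
    normalise the target to a reference gene within each sample,
    then normalise the sample to a control sample."""
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** -dd_ct

# e.g. target amplifies 2 cycles earlier (relative to the reference gene)
# in the patient sample than in the control sample -> 4-fold expression
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
```

Lower Ct means earlier amplification and hence more template, which is why the exponent is negated.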
Serum levels of LINC00941, LINC00514 and AFP at different HCC stages and liver function grades: To assess the changes of LINC00941, LINC00514, and AFP levels during the progression of HCC, patients with HCC were divided into early, middle, and advanced stages according to the Barcelona Clinic Liver Cancer (BCLC) system. 
We found that the serum level of AFP in advanced-stage HCC was significantly higher than in early and middle stages (all p < 0.01, Figure 2A). However, the levels of LINC00941 and LINC00514 showed no difference across stages of HCC, although they showed a gradual downward trend (Figure 2B,C). According to Child-Pugh class, patients with HCC were divided into classes A, B, and C. The results indicated that the AFP level of class C patients was significantly higher than that of class A patients, while LINC00941 and LINC00514 levels showed no difference across liver function grades, with LINC00514 showing a trend of progressive decrease (Figure 2D–F). Serum concentration of LINC00941, LINC00514 and alpha fetoprotein according to Barcelona Clinic Liver Cancer stage and Child-Pugh class in patients with hepatocellular carcinoma. **p < 0.01 represents a significant difference between two groups. Diagnostic value of serum LINC00941, LINC00514 and AFP: To explore whether serum LINC00941 and LINC00514 can be used as novel biomarkers for HCC, we evaluated their diagnostic value using the ROC curve model, with AFP as a reference. The results showed that the sensitivity of LINC00941 and LINC00514 in distinguishing HCC from healthy controls was 85.00% and 90%, and the specificity was 86.67% and 56.67%, respectively (Figure 3A). In addition, when combined with AFP, both LINC00941 and LINC00514 improved the accuracy of HCC diagnosis (0.962 vs. 0.815 and 0.918 vs. 0.815, p < 0.001 and p < 0.01). LINC00941 and LINC00514 showed no significant advantage in distinguishing HCC from CHB compared to AFP (Figure 3B). When distinguishing HCC from LC, LINC00941 and LINC00514 combined with AFP significantly improved the accuracy of HCC diagnosis (0.820 vs. 0.668 and 0.835 vs. 0.668, all p < 0.01, Figure 3C). 
Compared with AFP alone, LINC00941 and LINC00514, alone or in combination with AFP, improved sensitivity and accuracy in the diagnosis of CHB and LC (with healthy controls as the control group, Figure 3D,F). When differentiating LC from CHB, LINC00941 alone or in combination with AFP significantly improved the sensitivity and accuracy of LC diagnosis compared with AFP alone (all p < 0.01, Figure 3E) (Table 3). ROC curves of serum LINC00941, LINC00514 and alpha fetoprotein (AFP) in the differential diagnosis of hepatocellular carcinoma (HCC). (A) HCC versus controls. (B) HCC versus chronic hepatitis B (CHB). (C) HCC versus liver cirrhosis (LC). (D) CHB versus controls. (E) LC versus CHB. (F) LC versus controls Differential diagnostic efficacy of serum LINC00941, LINC00514, AFP and the combination Compared with LINC00514, * P < 0.05, ** P < 0.01; compared with AFP, # P < 0.05, ## P < 0.01, ### P < 0.001, #### P < 0.0001. Abbreviations: AFP, alpha fetoprotein; CHB, chronic hepatitis B; HCC, hepatocellular carcinoma; LC, liver cirrhosis; SEN, sensitivity; SPE, specificity.
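The AUC values discussed above can be understood through the equivalence between the area under the ROC curve and the normalised Mann‐Whitney U statistic (the same test family used in this study's statistical analysis). The following is a minimal, dependency‐free sketch with made‐up illustrative expression scores, not the study's data:

```python
def roc_auc(controls, cases):
    """AUC as the normalised Mann-Whitney U statistic: the probability
    that a randomly chosen case scores higher than a randomly chosen
    control, with ties counted as 0.5."""
    wins = 0.0
    for c in cases:
        for h in controls:
            if c > h:
                wins += 1.0
            elif c == h:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Illustrative relative-expression values (hypothetical, not measured data)
controls = [0.2, 0.4, 0.5, 0.7, 0.9]
hcc = [0.6, 1.1, 1.4, 1.8, 2.3]
print(roc_auc(controls, hcc))  # → 0.92
```

On these toy values the AUC is 0.92, meaning a randomly chosen HCC sample outranks a randomly chosen control 92% of the time; a combined marker (e.g. a logistic‐regression score over a lncRNA and AFP, a common way to build "marker + AFP" panels) would be scored with the same function.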
CONCLUSIONS
LINC00941 and LINC00514 were upregulated in HBV infection‐associated liver disease, and their abnormal expression can serve as an independent marker for the diagnosis of liver diseases. However, the underlying mechanisms remain unclear, and this study is limited by its small sample size; further studies are therefore needed.
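Using a continuous serum marker as a diagnostic test, as proposed here, requires fixing a decision cut‐off for the expression values. One common choice (an illustration, not necessarily the cut‐off selection used in this study) is the value maximising Youden's index, J = sensitivity + specificity − 1. A small sketch with hypothetical expression values:

```python
def sens_spec(controls, cases, cutoff):
    # Call a sample positive when its value is >= cutoff
    sens = sum(v >= cutoff for v in cases) / len(cases)
    spec = sum(v < cutoff for v in controls) / len(controls)
    return sens, spec

def best_cutoff_youden(controls, cases):
    # Scan every observed value as a candidate cut-off and keep the one
    # maximising Youden's index J = sensitivity + specificity - 1
    best = None
    for cut in sorted(set(controls) | set(cases)):
        sens, spec = sens_spec(controls, cases, cut)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, cut, sens, spec)
    return best

# Hypothetical relative-expression values, for illustration only
controls = [0.2, 0.4, 0.5, 0.7, 0.9]
hcc = [0.6, 1.1, 1.4, 1.8, 2.3]
print(best_cutoff_youden(controls, hcc))
```

On these toy values the best cut‐off is 1.1, giving sensitivity 0.8 and specificity 1.0; with real data the chosen threshold trades sensitivity against specificity exactly as in the SEN/SPE columns of Table 3.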
[ "INTRODUCTION", "Study population", "Sample collection", "Laboratory analysis", "Serum LINC00941 and LINC00514 extraction and qRT‐PCR analysis", "Statistical analysis", "Characteristics of healthy controls and patients with HBV infection‐related liver disease", "Comparison of serum LINC00941 and LINC00514 levels in controls, CHB, LC and HCC", "Relationship between serum LINC00941 and LINC00514 expression and basic biochemical indexes in patients with HCC", "Serum levels of LINC00941, LINC00514 and AFP at different HCC stages and liver function grades", "Diagnostic value of serum LINC00941, LINC00514 and AFP", "AUTHOR CONTRIBUTION" ]
[ "Liver cancer is one of the most common human cancers worldwide, and also one of the most important causes of individual death caused by cancer, and its morbidity and mortality rank 6th and 2nd among all malignant tumors, respectively.\n1\n According to the latest global cancer statistics in 2018, there are about 840,000 new cases of liver cancer each year, with an incidence rate of about 4.7%; and about 780,000 new liver cancer deaths occur every year, with a mortality rate of 8.2%.\n2\n China is one of the countries with a high incidence of liver cancer. According to the pathologic characteristics of tissues, hepatocellular carcinoma (HCC) accounts for more than 90% of all liver cancers, and ranks fourth and third in incidence and mortality among all malignant tumors; in addition, compared with females, males have a higher incidence and poorer prognosis.\n3\n, \n4\n HCC is mainly caused by hepatitis virus infection, alcohol abuse, non‐alcoholic steatohepatitis, toxin exposure, and metabolic syndrome.\n5\n Chronic infection of hepatitis B virus (HBV) and aflatoxin exposure are the main pathogenic factors of HCC in China, and HCC caused by chronic HBV infection accounts for more than 80% of all HCC.\n6\n In addition to liver ultrasound, serum alpha fetal protein (AFP) detection is the main method for extensive screening of HCC in China. However, due to its low sensitivity and specificity, the detection rate of HCC is not high, so that many HCC patients miss the optimal surgical period. Therefore, it is very important to explore novel and effective serum markers to improve the detection rate and prognosis of HCC.\nNon‐coding RNA is currently a research hotspot in the field of molecular biology. Although it does not have the function of encoding protein, it plays an important role in epigenetic, transcription, and post‐transcriptional levels. 
An increasing number of lncRNAs have been found to be involved in the occurrence and development of a variety of tumors, and can be used for tumor diagnosis and prognosis monitoring. Studies have reported that the expression of LINC00941 is increased in pancreatic cancer, colorectal cancer, lung cancer, etc.\n7\n, \n8\n, \n9\n, \n10\n It can promote tumor progression through a variety of signaling pathways, affecting proliferation, metastasis, invasion, and so on.\n7\n, \n8\n, \n9\n, \n11\n It can also be used as a diagnostic or prognostic marker for gastric cancer, lung cancer, head and neck squamous cell carcinoma, and other tumors\n12\n, \n13\n, \n14\n and could predict the recurrence‐free survival and overall survival of HCC.\n15\n On the other hand, it has been reported that LINC00514 is highly expressed in the tissues and cells of breast and pancreatic cancer and can promote tumor occurrence and development by regulating related microRNAs.\n16\n, \n17\n However, so far, no studies have reported their diagnostic value in HCC.\nIn this research, we detected the expression of LINC00941 and LINC00514 in healthy controls and patients with HBV infection‐related liver disease and assessed their correlation with the basic characteristics of patients with HCC. The diagnostic values of LINC00941 and LINC00514 in liver diseases were also analyzed by receiver operating characteristic (ROC) curves.", "All subjects were collected from the inpatient department of Renmin Hospital of Wuhan University from November 2019 to January 2020. A total of 147 subjects were divided into an HBV‐associated HCC group (40 cases), an HBV‐associated liver cirrhosis (LC) group (40 cases), and a chronic hepatitis B (CHB) group (37 cases). In addition, 30 normal males who received physical examination in our hospital during the same period were selected as the normal control group. The ages of all subjects ranged from 30 to 80 years, and there was no significant difference in age among the groups. 
The diagnosis of HCC and LC patients was in accordance with the guidelines of the Liver Disease Society of the Chinese Medical Association and the Infectious Diseases Society of China, and was confirmed by liver biopsy, X‐ray computed tomography, or magnetic resonance imaging. HCC and LC caused by other etiologies, as well as other infectious diseases, malignant tumors, and autoimmune diseases, were excluded. The diagnosis of CHB patients met the Diagnostic Criteria for Chronic Hepatitis B (2015 edition).\nThis research was approved and reviewed by the Medical Ethics Review Committee of Renmin Hospital of Wuhan University. All participants agreed and signed written informed consent in accordance with the policies of the hospital Ethics Committee.", "All vacuum blood collection tubes were purchased from BD Company. A yellow‐top coagulation‐promoting tube and a pearl‐white‐top coagulation‐promoting tube were collected from all subjects in the morning after fasting for more than 8 h. The yellow‐top tube blood was used for routine biochemical indexes, AFP, LINC00941, and LINC00514 detection, and the pearl‐white‐top procoagulant blood was used for HBV DNA detection. After collection, the blood was left at room temperature for 15 min; once completely coagulated, the serum was separated by centrifugation at 3500 r/min for 5 min.", "Automatic biochemical analyzer (Siemens, Germany) and supporting analysis reagents were used to detect serum alanine aminotransferase (ALT, normal reference range: 9–50 U/L), aspartate aminotransferase (AST, normal reference range: 15–40 U/L), alkaline phosphatase (ALP, normal reference range: 45–125 U/L), gamma‐glutamyl transferase (GGT, normal reference range: 10–60 U/L), albumin (ALB, normal reference range: 40–55 g/L), total bilirubin (TBIL, normal reference range: 0–23 μmol/L), and direct bilirubin (DBIL, normal reference range: 0–8 μmol/L). 
The real‐time fluorescent quantitative PCR instrument (ABI ViiA7, USA) was used to detect the HBV DNA load level, and the kit was produced by Shanghai Fosun Long March Medical Science Co., Ltd. An automatic immunoluminescence analyzer (Siemens, Germany) and supporting reagents were used to detect serum AFP.", "The serum LINC00941 and LINC00514 were extracted with a Beijing Biotech kit (blood RNA extraction, adsorption column type). The two non‐coding RNAs were quantified by fluorescence quantitative PCR using the reverse transcription and quantitative PCR kits (RR036A and RR091A) of Takara, Japan, with GAPDH as the housekeeping gene. Reverse transcription was performed at 37℃ for 15 min and 85℃ for 5 s. The qPCR program was 95℃ for 30 s (1 cycle), followed by 40 cycles of 95℃ for 5 s and 64℃ for 30 s. Primer sequences are as follows: LINC00941: forward primer CAAGCAACCGTCCAACTACCAGACA, reverse primer AAATCAAGAGCCCAAACATTGTGAA; LINC00514: forward primer CAACCAGGTGCTGGGGACAG, reverse primer GACCTCAAGTGATCCGCCCG; GAPDH: forward primer GGAGCGAGATCCCTCCAAAAT, reverse primer GGCTGTTGTCATACTTCTCATGG; the primer concentration was 10 μmol/L. Relative expression was calculated by the 2−ΔCT method.", "SPSS 20.0 and MedCalc 15.2.2 software were used for statistical analysis of the data, and GraphPad Prism 6 software was used for drawing. The Kolmogorov‐Smirnov (K‐S) test was used to evaluate the normality of each group of data. Quantitative data with a normal distribution were represented as mean ± SEM, and one‐way analysis of variance (ANOVA) was used for comparison among multiple groups, with the LSD test for homogeneous variances and Tamhane's T2 test for heterogeneous variances. Quantitative data that did not follow a normal distribution were represented by the median and interquartile range, and the Mann‐Whitney test was used to analyze the corresponding data. 
Pearson's correlation was used to evaluate the correlation among all indicators. The ROC curve was used to analyze the diagnostic value of LINC00941, LINC00514, and AFP for HCC. All data were analyzed by two‐sided tests, and p < 0.05 was considered statistically significant.", "The main characteristics of the study population are summarized in Table 1. First, there was no statistically significant difference in age between the groups. Regarding the liver biochemical indicators ALT, AST, ALP, GGT, ALB, TBIL, and DBIL, the differences between the four groups were statistically significant (all p < 0.0001). The HBV DNA level of the CHB group was significantly higher than that of the LC and HCC groups (all p < 0.0001). In the analysis of serum tumor markers, the results showed that the AFP level of HCC patients was markedly higher than that of the control, CHB, and LC groups, and the difference was statistically significant.\nBasic biochemical data characteristics of controls and patients with HBV infection‐related liver disease\nData are presented as mean (SD), median (interquartile range), or percentage.\nAbbreviations: AFP, alpha fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CHB, chronic hepatitis B; DBIL, direct bilirubin; GGT, gamma‐glutamyl transpeptidase; HBV DNA, hepatitis B virus deoxyribonucleic acid; HCC, hepatocellular carcinoma; LC, liver cirrhosis; TBIL, total bilirubin.", "Furthermore, we used the LSD test to evaluate the serum levels of LINC00941 and LINC00514 in the control, CHB, LC and HCC groups. As shown in Figure 1A, serum LINC00941 levels in the CHB, LC and HCC groups were significantly higher than those in the control group (all p < 0.0001), and compared with patients with LC, the serum LINC00941 levels in the CHB and HCC groups were markedly lower (all p < 0.0001). 
Compared to the controls, the serum LINC00514 level was significantly increased in patients with CHB, LC and HCC (all p < 0.0001); the serum LINC00514 level was significantly lower in the CHB and HCC groups than in the LC group (all p < 0.0001, Figure 1B).\nSerum LINC00941 and LINC00514 expression in patients with chronic hepatitis B, patients with liver cirrhosis, patients with HCC and healthy controls. ****\np < 0.0001 represents a significant difference between two groups", "To further evaluate whether serum LINC00941 and LINC00514 expression is related to the liver function, HBV viral load and AFP level of HCC patients, we analyzed the correlation between high and low expression of the two lncRNAs and the above indexes. The results showed that serum LINC00941 and LINC00514 had no correlation with liver function indexes, HBV viral load or AFP (Table 2).\nThe association between the relative expression of LINC00941 and LINC00514 in serum of HCC patients and basic clinical data\nPatients\n(n = 40)\nLINC00941\nexpression levels\nPatients\n(n = 40)\nLINC00514\nexpression levels\nPatients were divided into high‐ and low‐expression groups at the mean.\nAbbreviations: AFP, alpha fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; DBIL, direct bilirubin; GGT, gamma‐glutamyl transpeptidase; HBV DNA, hepatitis B virus deoxyribonucleic acid; HCC, hepatocellular carcinoma; TBIL, total bilirubin.", "To assess the changes in LINC00941, LINC00514, and AFP levels during the progression of HCC, patients with HCC were divided into early, middle and advanced stages according to the Barcelona Clinic Liver Cancer (BCLC) staging system. We found that the serum AFP level in advanced‐stage HCC was significantly higher than in the early and middle stages (all p < 0.01, Figure 2A). However, LINC00941 and LINC00514 levels showed no significant difference between HCC stages, although both showed a gradual downward trend (Figure 2B,C). 
According to Child‐Pugh class, patients with HCC were divided into classes A, B, and C. The results indicated that the AFP level of class C patients was markedly higher than that of class A patients, while LINC00941 and LINC00514 levels showed no significant difference between liver function grades; LINC00514 showed a trend of progressive decrease across grades (Figure 2D–F).\nSerum concentration of LINC00941, LINC00514 and alpha fetoprotein according to Barcelona Clinic Liver Cancer stage and Child‐Pugh class in patients with hepatocellular carcinoma. **\np < 0.01 represents a significant difference between two groups", "To explore whether serum LINC00941 and LINC00514 can be used as novel biomarkers for HCC, we evaluated their diagnostic value using the ROC curve model, with AFP as a reference. The results showed that the sensitivity and specificity in distinguishing HCC from healthy controls were 85.00% and 90.00% for LINC00941, and 86.67% and 56.67% for LINC00514 (Figure 3A). In addition, when combined with AFP, both LINC00941 and LINC00514 improved the accuracy of HCC diagnosis (0.962 vs. 0.815 and 0.918 vs. 0.815; p < 0.001 and p < 0.01, respectively). LINC00941 and LINC00514 showed no significant advantage over AFP in distinguishing HCC from CHB (Figure 3B). When distinguishing HCC from LC, LINC00941 and LINC00514 combined with AFP significantly improved the accuracy of HCC diagnosis (0.820 vs. 0.668 and 0.835 vs. 0.668, all p < 0.01, Figure 3C). Compared with AFP alone, LINC00941 and LINC00514, alone or in combination with AFP, improved sensitivity and accuracy in the diagnosis of CHB and LC (with healthy controls as the control group, Figure 3D,F). 
When differentiating LC from CHB, LINC00941 alone or in combination with AFP significantly improved the sensitivity and accuracy of LC diagnosis compared with AFP alone (all p < 0.01, Figure 3E) (Table 3).\nROC curves of serum LINC00941, LINC00514 and alpha fetoprotein (AFP) in the differential diagnosis of hepatocellular carcinoma (HCC). (A) HCC versus controls. (B) HCC versus chronic hepatitis B (CHB). (C) HCC versus liver cirrhosis (LC). (D) CHB versus controls. (E) LC versus CHB. (F) LC versus controls\nDifferential diagnostic efficacy of serum LINC00941, LINC00514, AFP and the combination\nCompared with LINC00514, *\nP < 0.05, **\nP < 0.01; compared with AFP, #\nP < 0.05, ##\nP < 0.01, ###\nP < 0.001, ####\nP < 0.0001.\nAbbreviations: AFP, alpha fetoprotein; CHB, chronic hepatitis B; HCC, hepatocellular carcinoma; LC, liver cirrhosis; SEN, sensitivity; SPE, specificity.", "PZ, JC, and DT proposed the concept of the work. JC carried out most of the experimental work and wrote the paper. HL provided critical reviews to improve the manuscript. All authors read and approved the final manuscript." ]
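The serum expression levels analysed throughout this study were derived with the 2−ΔCT method described in the qRT‐PCR section, with GAPDH as the reference gene. A minimal sketch of that calculation, using hypothetical Ct values for illustration only:

```python
def relative_expression(ct_target, ct_reference):
    # 2^-(ΔCT), with ΔCT = Ct(target) - Ct(reference, e.g. GAPDH):
    # each cycle of difference corresponds to a two-fold change in template
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical cycle-threshold values, not measured data
print(relative_expression(30.0, 20.0))  # → 0.0009765625 (i.e. 2^-10)
print(relative_expression(18.0, 20.0))  # → 4.0
```

A target that crosses threshold 10 cycles after GAPDH is thus reported at roughly one‐thousandth of the reference level, while one crossing 2 cycles earlier is reported at 4‐fold the reference level.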
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIAL AND METHODS", "Study population", "Sample collection", "Laboratory analysis", "Serum LINC00941 and LINC00514 extraction and qRT‐PCR analysis", "Statistical analysis", "RESULTS", "Characteristics of healthy controls and patients with HBV infection‐related liver disease", "Comparison of serum LINC00941 and LINC00514 levels in controls, CHB, LC and HCC", "Relationship between serum LINC00941 and LINC00514 expression and basic biochemical indexes in patients with HCC", "Serum levels of LINC00941, LINC00514 and AFP at different HCC stages and liver function grades", "Diagnostic value of serum LINC00941, LINC00514 and AFP", "DISCUSSION", "CONCLUSIONS", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTION" ]
[ "Liver cancer is one of the most common human cancers worldwide, and also one of the most important causes of individual death caused by cancer, and its morbidity and mortality rank 6th and 2nd among all malignant tumors, respectively.\n1\n According to the latest global cancer statistics in 2018, there are about 840,000 new cases of liver cancer each year, with an incidence rate of about 4.7%; and about 780,000 new liver cancer deaths occur every year, with a mortality rate of 8.2%.\n2\n China is one of the countries with a high incidence of liver cancer. According to the pathologic characteristics of tissues, hepatocellular carcinoma (HCC) accounts for more than 90% of all liver cancers, and ranks fourth and third in incidence and mortality among all malignant tumors; in addition, compared with females, males have a higher incidence and poorer prognosis.\n3\n, \n4\n HCC is mainly caused by hepatitis virus infection, alcohol abuse, non‐alcoholic steatohepatitis, toxin exposure, and metabolic syndrome.\n5\n Chronic infection of hepatitis B virus (HBV) and aflatoxin exposure are the main pathogenic factors of HCC in China, and HCC caused by chronic HBV infection accounts for more than 80% of all HCC.\n6\n In addition to liver ultrasound, serum alpha fetal protein (AFP) detection is the main method for extensive screening of HCC in China. However, due to its low sensitivity and specificity, the detection rate of HCC is not high, so that many HCC patients miss the optimal surgical period. Therefore, it is very important to explore novel and effective serum markers to improve the detection rate and prognosis of HCC.\nNon‐coding RNA is currently a research hotspot in the field of molecular biology. Although it does not have the function of encoding protein, it plays an important role in epigenetic, transcription, and post‐transcriptional levels. 
An increasing number of lncRNAs have been found to be involved in the occurrence and development of a variety of tumors, and can be used for tumor diagnosis and prognosis monitoring. Studies have reported that the expression of LINC00941 is increased in pancreatic cancer, colorectal cancer, lung cancer, etc.\n7\n, \n8\n, \n9\n, \n10\n It can promote tumor progression through a variety of signaling pathways, affecting proliferation, metastasis, invasion, and so on.\n7\n, \n8\n, \n9\n, \n11\n It can also be used as a diagnostic or prognostic marker for gastric cancer, lung cancer, head and neck squamous cell carcinoma, and other tumors\n12\n, \n13\n, \n14\n and could predict the recurrence‐free survival and overall survival of HCC.\n15\n On the other hand, it has been reported that LINC00514 is highly expressed in the tissues and cells of breast and pancreatic cancer and can promote tumor occurrence and development by regulating related microRNAs.\n16\n, \n17\n However, so far, no studies have reported their diagnostic value in HCC.\nIn this research, we detected the expression of LINC00941 and LINC00514 in healthy controls and patients with HBV infection‐related liver disease and assessed their correlation with the basic characteristics of patients with HCC. The diagnostic values of LINC00941 and LINC00514 in liver diseases were also analyzed by receiver operating characteristic (ROC) curves.", "Study population All subjects were collected from the inpatient department of Renmin Hospital of Wuhan University from November 2019 to January 2020. A total of 147 subjects were divided into an HBV‐associated HCC group (40 cases), an HBV‐associated liver cirrhosis (LC) group (40 cases), and a chronic hepatitis B (CHB) group (37 cases). In addition, 30 normal males who received physical examination in our hospital during the same period were selected as the normal control group. The ages of all subjects ranged from 30 to 80 years, and there was no significant difference in age among the groups. The diagnosis of HCC and LC patients was in accordance with the guidelines of the Liver Disease Society of the Chinese Medical Association and the Infectious Diseases Society of China, and was confirmed by liver biopsy, X‐ray computed tomography, or magnetic resonance imaging. HCC and LC caused by other etiologies, as well as other infectious diseases, malignant tumors, and autoimmune diseases, were excluded. The diagnosis of CHB patients met the Diagnostic Criteria for Chronic Hepatitis B (2015 edition).\nThis research was approved and reviewed by the Medical Ethics Review Committee of Renmin Hospital of Wuhan University. All participants agreed and signed written informed consent in accordance with the policies of the hospital Ethics Committee.\nSample collection All vacuum blood collection tubes were purchased from BD Company. A yellow‐top coagulation‐promoting tube and a pearl‐white‐top coagulation‐promoting tube were collected from all subjects in the morning after fasting for more than 8 h. The yellow‐top tube blood was used for routine biochemical indexes, AFP, LINC00941, and LINC00514 detection, and the pearl‐white‐top procoagulant blood was used for HBV DNA detection. After collection, the blood was left at room temperature for 15 min; once completely coagulated, the serum was separated by centrifugation at 3500 r/min for 5 min.\nLaboratory analysis Automatic biochemical analyzer (Siemens, Germany) and supporting analysis reagents were used to detect serum alanine aminotransferase (ALT, normal reference range: 9–50 U/L), aspartate aminotransferase (AST, normal reference range: 15–40 U/L), alkaline phosphatase (ALP, normal reference range: 45–125 U/L), gamma‐glutamyl transferase (GGT, normal reference range: 10–60 U/L), albumin (ALB, normal reference range: 40–55 g/L), total bilirubin (TBIL, normal reference range: 0–23 μmol/L), and direct bilirubin (DBIL, normal reference range: 0–8 μmol/L). The real‐time fluorescent quantitative PCR instrument (ABI ViiA7, USA) was used to detect the HBV DNA load level, and the kit was produced by Shanghai Fosun Long March Medical Science Co., Ltd. An automatic immunoluminescence analyzer (Siemens, Germany) and supporting reagents were used to detect serum AFP.\nSerum LINC00941 and LINC00514 extraction and qRT‐PCR analysis The serum LINC00941 and LINC00514 were extracted with a Beijing Biotech kit (blood RNA extraction, adsorption column type). The two non‐coding RNAs were quantified by fluorescence quantitative PCR using the reverse transcription and quantitative PCR kits (RR036A and RR091A) of Takara, Japan, with GAPDH as the housekeeping gene. Reverse transcription was performed at 37℃ for 15 min and 85℃ for 5 s. The qPCR program was 95℃ for 30 s (1 cycle), followed by 40 cycles of 95℃ for 5 s and 64℃ for 30 s. Primer sequences are as follows: LINC00941: forward primer CAAGCAACCGTCCAACTACCAGACA, reverse primer AAATCAAGAGCCCAAACATTGTGAA; LINC00514: forward primer CAACCAGGTGCTGGGGACAG, reverse primer GACCTCAAGTGATCCGCCCG; GAPDH: forward primer GGAGCGAGATCCCTCCAAAAT, reverse primer GGCTGTTGTCATACTTCTCATGG; the primer concentration was 10 μmol/L. Relative expression was calculated by the 2−ΔCT method.\nStatistical analysis SPSS 20.0 and MedCalc 15.2.2 software were used for statistical analysis of the data, and GraphPad Prism 6 software was used for drawing. The Kolmogorov‐Smirnov (K‐S) test was used to evaluate the normality of each group of data. Quantitative data with a normal distribution were represented as mean ± SEM, and one‐way analysis of variance (ANOVA) was used for comparison among multiple groups, with the LSD test for homogeneous variances and Tamhane's T2 test for heterogeneous variances. Quantitative data that did not follow a normal distribution were represented by the median and interquartile range, and the Mann‐Whitney test was used to analyze the corresponding data. Pearson's correlation was used to evaluate the correlation among all indicators. The ROC curve was used to analyze the diagnostic value of LINC00941, LINC00514, and AFP for HCC. All data were analyzed by two‐sided tests, and p < 0.05 was considered statistically significant.", "All subjects were collected from the inpatient department of Renmin Hospital of Wuhan University from November 2019 to January 2020. A total of 147 subjects were divided into an HBV‐associated HCC group (40 cases), an HBV‐associated liver cirrhosis (LC) group (40 cases), and a chronic hepatitis B (CHB) group (37 cases). In addition, 30 normal males who received physical examination in our hospital during the same period were selected as the normal control group. The ages of all subjects ranged from 30 to 80 years, and there was no significant difference in age among the groups. 
The diagnoses of HCC and LC were made in accordance with the guidelines of the Liver Disease Society of the Chinese Medical Association and the Infectious Diseases Society of China, and were confirmed by liver biopsy, X‐ray computed tomography, or magnetic resonance imaging. HCC and LC of other etiologies, as well as other infectious diseases, malignant tumors, and autoimmune diseases, were excluded. The diagnosis of CHB met the Diagnostic Criteria for Chronic Hepatitis B (2015 edition).

This research was approved and reviewed by the Medical Ethics Review Committee of Renmin Hospital of Wuhan University. All participants signed written informed consent in accordance with the policies of the hospital Ethics Committee.

Sample collection. All vacuum blood collection tubes were purchased from BD Company. One yellow‐top (clot‐activator) tube and one pearl‐white‐top (clot‐activator) tube were collected from each subject in the morning after a fast of more than 8 h. The yellow‐top tube was used for routine biochemical indexes and for AFP, LINC00941, and LINC00514 detection; the pearl‐white‐top tube was used for HBV DNA detection. After collection, the blood was left at room temperature for 15 min; once completely coagulated, serum was separated by centrifugation at 3500 r/min for 5 min.

Laboratory analysis. An automatic biochemical analyzer (Siemens, Germany) with matched reagents was used to measure serum alanine aminotransferase (ALT, normal reference range 9–50 U/L), aspartate aminotransferase (AST, 15–40 U/L), alkaline phosphatase (ALP, 45–125 U/L), gamma‐glutamyl transferase (GGT, 10–60 U/L), albumin (ALB, 40–55 g/L), total bilirubin (TBIL, 0–23 μmol/L), and direct bilirubin (DBIL, 0–8 μmol/L).
A real‐time fluorescent quantitative PCR instrument (ABI ViiA7, USA) was used to measure the HBV DNA load, with a kit produced by Shanghai Fosun Long March Medical Science Co., Ltd. An automatic immunoluminescence analyzer (Siemens, Germany) with matched reagents was used to measure serum AFP.
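The 2^−ΔCt relative quantification used for the two lncRNAs reduces to simple arithmetic on cycle‐threshold (Ct) values. A minimal illustrative sketch in Python (the Ct values below are hypothetical, not study data):

```python
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Relative expression by the 2^-dCt method.

    dCt = Ct(target RNA) - Ct(housekeeping gene, here GAPDH).
    Each PCR cycle roughly doubles the product, so abundance relative
    to the reference gene is 2 raised to the power -dCt.
    """
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)

# Hypothetical Ct values for illustration only
ct_linc00941 = 28.0  # target crosses the threshold later (less abundant)
ct_gapdh = 18.0      # housekeeping reference
print(relative_expression(ct_linc00941, ct_gapdh))  # prints 0.0009765625 (= 2^-10)
```

A target Ct below the GAPDH Ct would give a value above 1, i.e., expression higher than the reference gene.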
RESULTS: Characteristics of healthy controls and patients with HBV infection‐related liver disease. The main characteristics of the study population are summarized in Table 1. There was no statistically significant difference in age between the groups. For the liver biochemical indicators ALT, AST, ALP, GGT, ALB, TBIL, and DBIL, the differences among the four groups were statistically significant (all p < 0.0001). The HBV DNA level of the CHB group was significantly higher than that of the LC and HCC groups (all p < 0.0001). Among serum tumor markers, the AFP level of HCC patients was significantly higher than that of the control, CHB, and LC groups.

Table 1. Basic biochemical data characteristics of controls and patients with HBV infection‐related liver disease. Data are presented as mean (SD), median (interquartile range), or percentage. Abbreviations: AFP, alpha‐fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CHB, chronic hepatitis B; DBIL, direct bilirubin; GGT, gamma‐glutamyl transpeptidase; HBV DNA, hepatitis B virus deoxyribonucleic acid; HCC, hepatocellular carcinoma; LC, liver cirrhosis; TBIL, total bilirubin.
Comparison of serum LINC00941 and LINC00514 levels in controls, CHB, LC, and HCC. We used the LSD test to compare serum LINC00941 and LINC00514 levels among the control, CHB, LC, and HCC groups. As shown in Figure 1A, serum LINC00941 levels in the CHB, LC, and HCC groups were significantly higher than those in the control group (all p < 0.0001), while serum LINC00941 in the CHB and HCC groups was significantly lower than in the LC group (all p < 0.0001). Compared with controls, the serum LINC00514 level was significantly increased in patients with CHB, LC, and HCC (all p < 0.0001), and it was significantly lower in the CHB and HCC groups than in the LC group (all p < 0.0001; Figure 1B).

Figure 1. Serum LINC00941 and LINC00514 expression in patients with chronic hepatitis B, patients with liver cirrhosis, patients with HCC, and healthy controls.
****p < 0.0001 indicates a significant difference between two groups.

Relationship between serum LINC00941 and LINC00514 expression and basic biochemical indexes in patients with HCC. To evaluate whether serum LINC00941 and LINC00514 expression is related to liver function, HBV viral load, and AFP level in HCC patients, we analyzed the association between high versus low expression of the two lncRNAs and these indexes.
The results showed that serum LINC00941 and LINC00514 had no correlation with liver function indexes, HBV viral load, or AFP (Table 2).

Table 2. Association between the relative expression of LINC00941 and LINC00514 in serum of HCC patients (n = 40) and basic clinical data; for each lncRNA, patients were dichotomized into high‐ and low‐expression groups at the mean. Abbreviations: AFP, alpha‐fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; DBIL, direct bilirubin; GGT, gamma‐glutamyl transpeptidase; HBV DNA, hepatitis B virus deoxyribonucleic acid; HCC, hepatocellular carcinoma; TBIL, total bilirubin.

Serum levels of LINC00941, LINC00514, and AFP at different HCC stages and liver function grades. To assess changes in LINC00941, LINC00514, and AFP levels during HCC progression, patients with HCC were divided into early, middle, and advanced stages according to the Barcelona Clinic Liver Cancer (BCLC) staging system.
We found that the serum AFP level in advanced‐stage HCC was significantly higher than in early‐ and middle‐stage HCC (all p < 0.01, Figure 2A). The levels of LINC00941 and LINC00514 did not differ significantly across HCC stages, although both showed a gradual downward trend (Figure 2B,C). According to Child‐Pugh class, patients with HCC were divided into classes A, B, and C. The AFP level of class C patients was significantly higher than that of class A patients, whereas LINC00941 and LINC00514 levels did not differ significantly across liver function grades; LINC00514 showed a trend of progressive decrease across grades (Figure 2D–F).

Figure 2. Serum concentrations of LINC00941, LINC00514, and alpha‐fetoprotein according to Barcelona Clinic Liver Cancer stage and Child‐Pugh class in patients with hepatocellular carcinoma.
**p < 0.01 indicates a significant difference between two groups.

Diagnostic value of serum LINC00941, LINC00514, and AFP. To explore whether serum LINC00941 and LINC00514 can be used as novel biomarkers for HCC, we evaluated their diagnostic value using ROC curve models, with AFP as a reference. For distinguishing HCC from healthy controls, the sensitivity and specificity were 85.00% and 86.67% for LINC00941, and 90.00% and 56.67% for LINC00514 (Figure 3A). When combined with AFP, both LINC00941 and LINC00514 improved the accuracy of HCC diagnosis (AUC 0.962 vs. 0.815, p < 0.001; 0.918 vs. 0.815, p < 0.01). LINC00941 and LINC00514 showed no significant advantage over AFP in distinguishing HCC from CHB (Figure 3B). For distinguishing HCC from LC, LINC00941 and LINC00514 combined with AFP significantly improved the accuracy of HCC diagnosis (0.820 vs. 0.668 and 0.835 vs. 0.668, all p < 0.01, Figure 3C). Compared with AFP alone, LINC00941 and LINC00514, alone or in combination with AFP, improved sensitivity and accuracy in the diagnosis of CHB and LC (with healthy controls as the comparison group; Figure 3D,F). For differentiating LC from CHB, LINC00941 alone or in combination with AFP significantly improved the sensitivity and accuracy of LC diagnosis compared with AFP alone (all p < 0.01, Figure 3E; Table 3).

Figure 3. ROC curves of serum LINC00941, LINC00514, and alpha‐fetoprotein (AFP) in the differential diagnosis of hepatocellular carcinoma (HCC). (A) HCC versus controls. (B) HCC versus chronic hepatitis B (CHB). (C) HCC versus liver cirrhosis (LC). (D) CHB versus controls. (E) LC versus CHB.
(F) LC versus controls.

Table 3. Differential diagnostic efficacy of serum LINC00941, LINC00514, AFP, and their combinations. Compared with LINC00514, *P < 0.01, **P < 0.01; compared with AFP, #P < 0.05, ##P < 0.01, ###P < 0.001, ####P < 0.0001. Abbreviations: AFP, alpha‐fetoprotein; CHB, chronic hepatitis B; HCC, hepatocellular carcinoma; LC, liver cirrhosis; SEN, sensitivity; SPE, specificity.
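The ROC analyses summarized in Table 3 rest on two quantities per marker: the area under the curve and the sensitivity/specificity at an optimal cutoff. The sketch below shows how such values are typically derived (synthetic marker values, not the study's data; Youden's index is assumed here as the cutoff criterion, a common but not the only choice):

```python
from itertools import chain

def roc_metrics(cases, controls):
    """Empirical AUC plus sensitivity/specificity at the Youden-optimal cutoff.

    AUC is computed as the proportion of (case, control) pairs in which the
    case has the higher marker value (ties count 0.5) -- the rank-based
    estimate, equivalent to a scaled Mann-Whitney U statistic.
    """
    pairs = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                pairs += 1.0
            elif x == y:
                pairs += 0.5
    auc = pairs / (len(cases) * len(controls))

    # Try every observed value as a cutoff ("positive" means value >= cutoff)
    # and keep the one maximizing Youden's J = sensitivity + specificity - 1.
    best = (-1.0, None, None, None)
    for cut in sorted(set(chain(cases, controls))):
        sens = sum(x >= cut for x in cases) / len(cases)
        spec = sum(y < cut for y in controls) / len(controls)
        j = sens + spec - 1.0
        if j > best[0]:
            best = (j, cut, sens, spec)
    _, cutoff, sens, spec = best
    return auc, cutoff, sens, spec

# Synthetic example: marker tends to be higher in cases than in controls
cases = [2.1, 3.4, 2.8, 4.0, 1.9]
controls = [1.0, 1.5, 2.0, 0.8, 1.2]
auc, cutoff, sens, spec = roc_metrics(cases, controls)
# auc = 0.96, cutoff = 1.9, sens = 1.0, spec = 0.8
```

Evaluating a marker combined with AFP, as in Table 3, would feed a combined score (for example, the predicted probability from a logistic regression on both markers) through the same procedure.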
DISCUSSION: The high prevalence of liver cancer reflects its many pathogenic factors and complex pathogenesis, which also makes it difficult to diagnose with a single biomarker. The combined detection of multiple biomarkers is therefore of great significance for improving the detection rate of liver cancer. A variety of biomarkers have been applied to the diagnosis of HCC, including AFP, miRNAs, specifically expressed genes, and lncRNAs; among them, lncRNAs have become a new research direction and hotspot in recent years.

Various lncRNAs have been studied in liver cancer. For example, H19 was the first non‐coding RNA reported to be abnormally expressed in HCC [18, 19]. Unfried et al. reported that NIHCOLE expression is associated with poor prognosis and survival in patients with HCC, and that inhibiting NIHCOLE expression in HCC cells may limit cell proliferation and increase the apoptosis rate through the accumulation of DNA damage [20]. Research by Peng et al.
showed that LINC00511 expression was higher in HCC tissues, and mechanistic work revealed that LINC00511‐induced invadopodia formation and exosome secretion are involved in tumor progression [21]. Yin et al. found that LINC01133 expression was elevated in HCC tissues and can predict poor prognosis in HCC patients; in vitro, overexpression of LINC01133 promoted the proliferation and aggressive phenotype of HCC cells and promoted tumor growth and lung metastasis in vivo, whereas LINC01133 knockdown had the opposite effects [22]. In addition, lncRNAs reportedly play an important role in the chemotherapy resistance of HCC. Ma et al. found that LINC01134 expression was upregulated after treatment with oxaliplatin (OXA) and that higher LINC01134 expression was associated with a poorer response to OXA; mechanistic studies showed that the LINC01134/SP1/p62 axis modulated OXA resistance by altering cell viability, apoptosis, and mitochondrial homeostasis in vitro and in vivo, suggesting that targeting this axis may be a promising strategy to overcome OXA chemotherapy resistance [23]. These findings indicate that lncRNAs play important roles in the occurrence, development, treatment, and prognosis monitoring of HCC.

In our study, we measured the expression levels of LINC00941 and LINC00514 in the serum of controls and of CHB, LC, and HCC patients, and found that, compared with the healthy control group, serum levels of LINC00941 and LINC00514 in the CHB, LC, and HCC groups were significantly increased.
The levels of LINC00941 and LINC00514 in the LC group were significantly higher than those in the CHB and HCC groups.

To investigate whether LINC00941 and LINC00514 expression is related to the basic biochemical parameters of HCC patients, we dichotomized the expression of each lncRNA into high and low levels; the results showed that neither was related to the basic biochemical parameters of HCC patients. We then performed tumor staging and liver function grading for HCC patients and compared the levels of AFP, LINC00941, and LINC00514 across stages and grades. AFP levels in advanced HCC patients were significantly higher than in early‐ and middle‐stage patients, whereas LINC00941 and LINC00514 showed no significant differences across HCC stages, although both tended to decrease gradually as HCC progressed. The AFP level in class C HCC patients was significantly higher than in class A patients, and the LINC00514 level gradually decreased with deteriorating liver function.

To evaluate the value of LINC00941 and LINC00514 as diagnostic markers for HCC, we measured serum LINC00941, LINC00514, and AFP in the control, CHB, LC, and HCC populations and assessed their diagnostic value for HCC, LC, and CHB. In this small‐sample study, the sensitivities of serum LINC00941 and LINC00514 for distinguishing HCC patients from normal controls were 85% and 90%, the specificities were 86.67% and 56.67%, and the AUCs reached 0.919 and 0.808, indicating good diagnostic efficacy for HCC; when combined with AFP, they significantly improved the sensitivity and accuracy of AFP‐based diagnosis (87.5% vs. 60%, 75% vs. 60%, and 0.962 vs. 0.815, 0.918 vs. 0.815).
LINC00941 and LINC00514 showed no obvious advantage in the differential diagnosis of HCC versus CHB. For distinguishing HCC from LC, both showed good specificity and accuracy (75.5% and 100%; AUC 0.766 and 0.781); LINC00941 had a sensitivity of 75%, while LINC00514 had a poor sensitivity of only 45%. When combined with AFP, LINC00941 and LINC00514 significantly improved the sensitivity and accuracy of AFP detection (65% vs. 35%, 67.5% vs. 35%, and 0.820 vs. 0.668, 0.835 vs. 0.668). For distinguishing CHB or LC from controls, LINC00941 and LINC00514, alone or in combination with AFP, increased sensitivity and accuracy compared with AFP alone. For distinguishing LC from CHB, LINC00941 alone or combined with AFP significantly improved the sensitivity and accuracy of LC diagnosis compared with AFP alone.

CONCLUSIONS: LINC00941 and LINC00514 were upregulated in HBV infection‐associated liver disease, and their abnormal expression can serve as an independent marker for the diagnosis of liver diseases. However, the relevant mechanisms and effects remain unclear, and this research is limited by its sample size; further studies are needed.

CONFLICT OF INTEREST: The authors have no conflicts of interest.

AUTHOR CONTRIBUTIONS: PZ, JC, and DT proposed the concept of the work. JC carried out most of the experimental work and wrote the paper. HL provided critical reviews to improve the manuscript. All authors read and approved the final manuscript.
Keywords: diagnosis, hepatitis B virus, LINC00514, LINC00941, liver diseases
INTRODUCTION: Liver cancer is one of the most common human cancers worldwide and one of the leading causes of cancer death; its morbidity and mortality rank 6th and 2nd, respectively, among all malignant tumors [1]. According to the latest global cancer statistics in 2018, there are about 840,000 new cases of liver cancer each year, an incidence of about 4.7%, and about 780,000 liver cancer deaths each year, a mortality of 8.2% [2]. China is one of the countries with a high incidence of liver cancer. Based on histopathologic characteristics, hepatocellular carcinoma (HCC) accounts for more than 90% of all liver cancers and ranks fourth and third in incidence and mortality, respectively, among all malignant tumors; in addition, males have a higher incidence and poorer prognosis than females [3, 4]. HCC is mainly caused by hepatitis virus infection, alcohol abuse, non‐alcoholic steatohepatitis, toxin exposure, and metabolic syndrome [5]. Chronic hepatitis B virus (HBV) infection and aflatoxin exposure are the main pathogenic factors of HCC in China, and HCC caused by chronic HBV infection accounts for more than 80% of all HCC [6]. In addition to liver ultrasound, serum alpha‐fetoprotein (AFP) detection is the main method for large‐scale HCC screening in China. However, because of its low sensitivity and specificity, the detection rate of HCC is not high, and many HCC patients miss the optimal window for surgery. It is therefore very important to explore novel, effective serum markers to improve the detection rate and prognosis of HCC. Non‐coding RNA is currently a research hotspot in molecular biology; although non‐coding RNAs do not encode proteins, they play important roles at the epigenetic, transcriptional, and post‐transcriptional levels.
More and more lncRNAs have been found to be involved in the occurrence and development of a variety of tumors and can be used for tumor diagnosis and prognosis monitoring. Studies have reported that LINC00941 expression is increased in pancreatic cancer, colorectal cancer, lung cancer, and other tumors [7, 8, 9, 10]; that it can promote tumor proliferation, metastasis, and invasion through a variety of signaling pathways [7, 8, 9, 11]; that it can serve as a diagnostic or prognostic marker for gastric cancer, lung cancer, head and neck squamous cell carcinoma, and other tumors [12, 13, 14]; and that it can predict recurrence‐free and overall survival in HCC [15]. LINC00514, in turn, has been reported to be highly expressed in breast and pancreatic cancer tissues and cells and to promote tumor occurrence and development by regulating related microRNAs [16, 17]. However, no studies have yet reported their diagnostic value in HCC. In this research, we measured the expression of LINC00941 and LINC00514 in healthy controls and in patients with HBV infection‐related liver disease and assessed its correlation with the basic characteristics of patients with HCC. The diagnostic value of LINC00941 and LINC00514 in liver diseases was also analyzed by receiver operating characteristic (ROC) curve analysis.
The diagnosis of HCC and LC was made in accordance with the guidelines of the Liver Disease Society of the Chinese Medical Association and the Infectious Diseases Society of China and was confirmed by liver biopsy, X-ray computed tomography, or magnetic resonance imaging. Patients with HCC or LC of other etiologies, as well as those with other infectious diseases, malignant tumors, or autoimmune diseases, were excluded. The diagnosis of CHB met the Diagnostic Criteria for Chronic Hepatitis B (2015 edition). This research was approved and reviewed by the Medical Ethics Review Committee of Renmin Hospital of Wuhan University. All participants provided written informed consent in accordance with the policies of the hospital Ethics Committee.
Sample collection All vacuum blood collection tubes were purchased from BD Company. For each subject, one yellow-top coagulation-promoting tube and one pearl-white-top coagulation-promoting tube of blood were collected in the morning after a fast of more than 8 h. The yellow-top tube was used for routine biochemical indexes and for AFP, LINC00941, and LINC00514 detection; the pearl-white-top procoagulant tube was used for HBV DNA detection. After collection, the blood was left at room temperature for 15 min; once fully coagulated, it was centrifuged at 3500 r/min for 5 min and the serum separated for use. Laboratory analysis An automatic biochemical analyzer (Siemens, Germany) and matching reagents were used to measure serum alanine aminotransferase (ALT, normal reference range: 9–50 U/L), aspartate aminotransferase (AST, 15–40 U/L), alkaline phosphatase (ALP, 45–125 U/L), gamma-glutamyl transferase (GGT, 10–60 U/L), albumin (ALB, 40–55 g/L), total bilirubin (TBIL, 0–23 μmol/L), and direct bilirubin (DBIL, 0–8 μmol/L).
A real-time fluorescence quantitative PCR instrument (ABI ViiA7, USA) was used to measure the HBV DNA load, with a kit produced by Shanghai Fosun Long March Medical Science Co., Ltd. An automatic immunoluminescence analyzer (Siemens, Germany) and matching reagents were used to measure serum AFP. Serum LINC00941 and LINC00514 extraction and qRT-PCR analysis Serum RNA containing LINC00941 and LINC00514 was extracted with a Beijing Biotech kit (blood RNA extraction, adsorption-column type). The two non-coding RNAs were quantified by fluorescence quantitative PCR using Takara (Japan) reverse transcription and quantitative PCR kits (RR036A and RR091A), with GAPDH as the housekeeping gene. Reverse transcription was performed at 37℃ for 15 min followed by 85℃ for 5 s. The qPCR program was 95℃ for 30 s (1 cycle), followed by 40 cycles of 95℃ for 5 s and 64℃ for 30 s.
Primer sequences were as follows: LINC00941: forward primer CAAGCAACCGTCCAACTACCAGACA, reverse primer AAATCAAGAGCCCAAACATTGTGAA; LINC00514: forward primer CAACCAGGTGCTGGGGACAG, reverse primer GACCTCAAGTGATCCGCCCG; GAPDH: forward primer GGAGCGAGATCCCTCCAAAAT, reverse primer GGCTGTTGTCATACTTCTCATGG. The primer concentration was 10 μmol/L. Relative expression was calculated by the 2^−ΔCT method. Statistical analysis SPSS 20.0 and MedCalc 15.2.2 were used for statistical analysis, and GraphPad Prism 6 was used for plotting. The Kolmogorov-Smirnov (K-S) test was used to assess the normality of each group of data. Normally distributed quantitative data are presented as mean ± SEM and were compared among multiple groups by one-way analysis of variance (ANOVA), with the LSD test for homogeneous variances and Tamhane's T2 test for non-homogeneous variances.
Quantitative data that did not follow a normal distribution are presented as median and interquartile range and were analyzed with the Mann-Whitney test. Pearson's correlation was used to evaluate correlations among indicators. ROC curves were used to analyze the diagnostic value of LINC00941, LINC00514, and AFP for HCC. All tests were two-sided, and p < 0.05 was considered statistically significant.
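As a concrete illustration of the 2^−ΔCT relative quantification described in the qRT-PCR section, the calculation can be sketched in a few lines of Python. This is not the authors' code, and the Ct values below are hypothetical.

```python
# Illustrative sketch (not the study's code): relative expression by the
# 2^-dCT method, where dCT = Ct(target lncRNA) - Ct(GAPDH reference).
def relative_expression(ct_target: float, ct_reference: float) -> float:
    """Return 2**(-dCT); a lower target Ct means higher relative expression."""
    delta_ct = ct_target - ct_reference
    return 2 ** (-delta_ct)

if __name__ == "__main__":
    # Hypothetical Ct values for one serum sample.
    ct_linc00941, ct_gapdh = 30.0, 24.0
    print(relative_expression(ct_linc00941, ct_gapdh))  # 2**-6 = 0.015625
```

Because each cycle doubles the amplicon, a ΔCT difference of 1 between two samples corresponds to roughly a two-fold difference in starting template.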
RESULTS: Characteristics of healthy controls and patients with HBV infection-related liver disease The main characteristics of the study population are summarized in Table 1. There was no statistically significant difference in age between the groups. For the liver biochemical indicators ALT, AST, ALP, GGT, ALB, TBIL, and DBIL, the differences among the four groups were statistically significant (all p < 0.0001). The HBV DNA level in the CHB group was significantly higher than in the LC and HCC groups (all p < 0.0001). Among serum tumor markers, the AFP level of HCC patients was significantly higher than that of the control, CHB, and LC groups. Basic biochemical data characteristics of controls and patients with HBV infection-related liver disease Data are presented as mean (SD), median (interquartile range), or percentage.
Abbreviations: AFP, alpha-fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; CHB, chronic hepatitis B; DBIL, direct bilirubin; GGT, gamma-glutamyl transpeptidase; HBV DNA, hepatitis B virus deoxyribonucleic acid; HCC, hepatocellular carcinoma; LC, liver cirrhosis; TBIL, total bilirubin. Comparison of serum LINC00941 and LINC00514 levels in controls, CHB, LC and HCC We used the LSD test to compare serum levels of LINC00941 and LINC00514 among the control, CHB, LC, and HCC groups.
As shown in Figure 1A, serum LINC00941 levels in the CHB, LC, and HCC groups were significantly higher than in the control group (all p < 0.0001), and serum LINC00941 levels in the CHB and HCC groups were significantly lower than in the LC group (all p < 0.0001). Similarly, serum LINC00514 levels were significantly higher in patients with CHB, LC, and HCC than in controls (all p < 0.0001), and significantly lower in the CHB and HCC groups than in the LC group (all p < 0.0001, Figure 1B). Serum LINC00941 and LINC00514 expression in patients with chronic hepatitis B, patients with liver cirrhosis, patients with hepatocellular carcinoma (HCC), and healthy controls.
**** p < 0.0001 indicates a significant difference between two groups. Relationship between serum LINC00941 and LINC00514 expression and basic biochemical indexes in patients with HCC To evaluate whether serum LINC00941 and LINC00514 expression is related to liver function, HBV viral load, and AFP level in HCC patients, we analyzed the association between high versus low expression of the two lncRNAs and these indexes. Serum LINC00941 and LINC00514 showed no correlation with liver function indexes, HBV viral load, or AFP (Table 2). The association between the relative expression of LINC00941 and LINC00514 in serum of HCC patients and basic clinical data. Patients (n = 40), LINC00941 expression levels; patients (n = 40), LINC00514 expression levels. Patients were dichotomized at the mean expression level. Abbreviations: AFP, alpha-fetoprotein; ALB, albumin; ALP, alkaline phosphatase; ALT, alanine aminotransferase; AST, aspartate aminotransferase; DBIL, direct bilirubin; GGT, gamma-glutamyl transpeptidase; HBV DNA, hepatitis B virus deoxyribonucleic acid; HCC, hepatocellular carcinoma; TBIL, total bilirubin.
Serum levels of LINC00941, LINC00514 and AFP at different HCC stages and liver function grades To assess changes in LINC00941, LINC00514, and AFP levels during HCC progression, patients with HCC were divided into early, middle, and advanced stages according to the Barcelona Clinic Liver Cancer (BCLC) system. The serum AFP level in advanced-stage HCC was significantly higher than in early- and middle-stage disease (all p < 0.01, Figure 2A). In contrast, LINC00941 and LINC00514 levels did not differ significantly across HCC stages, although they showed a gradual downward trend (Figure 2B,C). Patients with HCC were also divided into Child-Pugh classes A, B, and C. The AFP level of class C patients was significantly higher than that of class A patients, whereas LINC00941 and LINC00514 levels did not differ significantly across liver function grades; LINC00514 showed a trend of progressive decrease across grades (Figure 2D–F). Serum concentration of LINC00941, LINC00514 and alpha-fetoprotein according to Barcelona Clinic Liver Cancer stage and Child-Pugh class in patients with hepatocellular carcinoma. ** p < 0.01 indicates a significant difference between two groups.
Diagnostic value of serum LINC00941, LINC00514 and AFP To explore whether serum LINC00941 and LINC00514 can serve as novel biomarkers for HCC, we evaluated their diagnostic value with ROC curve models, using AFP as a reference. In distinguishing HCC from healthy controls, the sensitivity and specificity were 85.00% and 90.00% for LINC00941, and 86.67% and 56.67% for LINC00514 (Figure 3A). In addition, when combined with AFP, both LINC00941 and LINC00514 improved the accuracy of HCC diagnosis (AUC 0.962 vs. 0.815 and 0.918 vs. 0.815; p < 0.001 and p < 0.01, respectively). Neither LINC00941 nor LINC00514 showed a significant advantage over AFP in distinguishing HCC from CHB (Figure 3B). In distinguishing HCC from LC, LINC00941 and LINC00514 combined with AFP significantly improved the accuracy of HCC diagnosis (AUC 0.820 vs. 0.668 and 0.835 vs. 0.668, all p < 0.01, Figure 3C). Compared with AFP alone, LINC00941 and LINC00514, alone or combined with AFP, improved sensitivity and accuracy in the diagnosis of CHB and LC (with healthy controls as the control group, Figure 3D,F).
When differentiating LC from CHB, LINC00941 alone or in combination with AFP significantly improved the sensitivity and accuracy of LC diagnosis compared with AFP alone (all p < 0.01, Figure 3E; Table 3). ROC curves of serum LINC00941, LINC00514 and alpha-fetoprotein (AFP) in the differential diagnosis of hepatocellular carcinoma (HCC). (A) HCC versus controls. (B) HCC versus chronic hepatitis B (CHB). (C) HCC versus liver cirrhosis (LC). (D) CHB versus controls. (E) LC versus CHB. (F) LC versus controls. Differential diagnostic efficacy of serum LINC00941, LINC00514, AFP and their combination. Compared with LINC00514, * P < 0.01, ** P < 0.01; compared with AFP, # P < 0.05, ## P < 0.01, ### P < 0.001, #### P < 0.0001. Abbreviations: AFP, alpha-fetoprotein; CHB, chronic hepatitis B; HCC, hepatocellular carcinoma; LC, liver cirrhosis; SEN, sensitivity; SPE, specificity.
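The ROC analysis above reduces to two quantities: the AUC (the probability that a randomly chosen case scores higher than a randomly chosen control) and the sensitivity/specificity pair at an optimal cutoff. A minimal sketch in plain Python, with hypothetical marker values (not the study's data or code):

```python
# Illustrative sketch: AUC via the Mann-Whitney formulation, plus the
# Youden-optimal cutoff that yields a sensitivity/specificity pair of the
# kind reported in the text. All values are hypothetical.
def auc(cases, controls):
    """P(case score > control score) + 0.5 * P(tie)."""
    wins = sum((c > h) + 0.5 * (c == h) for c in cases for h in controls)
    return wins / (len(cases) * len(controls))

def youden_cutoff(cases, controls):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    best = None
    for t in sorted(set(cases) | set(controls)):
        sens = sum(c >= t for c in cases) / len(cases)
        spec = sum(h < t for h in controls) / len(controls)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best[1:]  # (cutoff, sensitivity, specificity)

hcc = [5.1, 4.8, 6.0, 3.9, 5.5]   # hypothetical serum lncRNA levels, HCC
ctrl = [1.2, 0.9, 2.1, 1.5, 4.0]  # hypothetical levels, healthy controls
print(auc(hcc, ctrl))             # 0.96
print(youden_cutoff(hcc, ctrl))   # (3.9, 1.0, 0.8)
```

Software such as MedCalc applies the same logic to the observed marker values and additionally compares AUCs between markers.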
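The finding that each lncRNA combined with AFP raises the AUC can be illustrated with a toy example: a case missed by one marker may be caught by the other, so a combined score separates the groups better than either alone. The combination rule below (min-max normalize, then sum) is a deliberately simple stand-in for the model-based combination used in practice, and all numbers are hypothetical:

```python
# Illustrative sketch: combining two markers into one score can raise the AUC
# above either single marker. Values are hypothetical, not study data.
def auc(cases, controls):
    wins = sum((c > h) + 0.5 * (c == h) for c in cases for h in controls)
    return wins / (len(cases) * len(controls))

def combine(afp, linc):
    # Min-max normalize each marker over the whole cohort, then sum.
    lo_a, hi_a = min(afp), max(afp)
    lo_l, hi_l = min(linc), max(linc)
    return [(a - lo_a) / (hi_a - lo_a) + (l - lo_l) / (hi_l - lo_l)
            for a, l in zip(afp, linc)]

# Hypothetical values: 4 HCC cases followed by 4 controls.
afp  = [400, 8, 350, 12, 6, 9, 15, 7]           # ng/mL; two AFP-negative cases
linc = [5.0, 4.6, 1.1, 4.9, 1.2, 1.0, 1.4, 0.9]  # relative lncRNA levels
score = combine(afp, linc)
cases, ctrls = score[:4], score[4:]
print(auc(afp[:4], afp[4:]))    # 0.8125  (AFP alone)
print(auc(linc[:4], linc[4:]))  # 0.875   (lncRNA alone)
print(auc(cases, ctrls))        # 1.0     (combined)
```

In the study itself, the combined-marker AUCs (e.g., 0.962 vs. 0.815 for HCC versus controls) were computed from the observed data; this sketch only shows the mechanism by which complementary markers improve discrimination.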
Abbreviations: AFP, alpha fetoprotein; CHB, chronic hepatitis B; HCC, hepatocellular carcinoma; LC, liver cirrhosis; SEN, sensitivity; SPE, specificity. DISCUSSION: Liver cancer has many pathogenic factors and a complex pathogenesis, which makes it difficult to diagnose with a single biomarker. Therefore, combined detection of multiple biomarkers is important for improving the detection rate of liver cancer. At present, a variety of biomarkers have been applied to the diagnosis of HCC, including AFP, miRNAs, differentially expressed genes, and lncRNAs; among these, lncRNAs have become a major research focus in recent years. Various lncRNAs have been studied in liver cancer. For example, H19 was the first non‐coding RNA reported to be abnormally expressed in HCC. 18 , 19 Unfried et al. reported that NIHCOLE expression was related to poor prognosis and survival of patients with HCC, and that inhibition of NIHCOLE expression in HCC cells may limit cell proliferation and increase the apoptosis rate through the accumulation of DNA damage. 20 Research by Peng et al. showed that LINC00511 expression was higher in HCC tissues, and mechanistic studies revealed that the invasion pseudopodia and exosomal secretion induced by LINC00511 were involved in tumor progression. 21 Yin et al. found that LINC01133 expression was elevated in HCC tissues and could predict poor prognosis in HCC patients; in‐vitro studies showed that overexpression of LINC01133 promoted the proliferation and aggressive phenotype of HCC cells, and promoted tumor growth and lung metastasis in vivo, whereas LINC01133 knockdown had the opposite effect. 22 In addition, studies have also reported that lncRNAs play an important role in the chemotherapy resistance of HCC. The study of Ma et al.
found that LINC01134 expression was upregulated after treatment with oxaliplatin (OXA), and that higher LINC01134 expression was associated with a poorer response to OXA; mechanistic studies showed that the LINC01134/SP1/p62 axis modulated OXA resistance by changing cell viability, apoptosis, and mitochondrial homeostasis in vitro and in vivo, suggesting that targeting this axis may be a promising strategy to overcome OXA chemotherapy resistance. 23 These findings suggest that lncRNAs play an important role in the occurrence and development, treatment, and prognosis monitoring of HCC. In our study, we detected the expression levels of LINC00941 and LINC00514 in the serum of controls and of CHB, LC, and HCC patients, and found that, compared with the healthy control group, serum levels of LINC00941 and LINC00514 in the CHB, LC, and HCC groups were significantly increased. The levels of LINC00941 and LINC00514 in the LC group were significantly higher than those in the CHB and HCC groups. To investigate whether the expression levels of LINC00941 and LINC00514 are related to the basic biochemical parameters of HCC patients, we dichotomized their expression into high and low levels; the results showed that the expression levels of LINC00941 and LINC00514 were not related to the basic biochemical parameters of HCC patients. We then performed tumor staging and liver function grading for HCC patients and compared the levels of AFP, LINC00941, and LINC00514 across stages and grades. AFP levels in advanced HCC patients were significantly higher than those in early- and middle‐stage HCC patients, whereas LINC00941 and LINC00514 showed no significant differences across HCC stages, although they showed a trend of gradual decrease with the development of HCC.
The level of AFP in class C HCC patients was significantly higher than that in class A HCC patients, and the level of LINC00514 gradually decreased with the deterioration of liver function in HCC patients. To evaluate the value of LINC00941 and LINC00514 as diagnostic markers for HCC, we measured serum levels of LINC00941, LINC00514, and AFP in the control, CHB, LC, and HCC populations and evaluated their diagnostic value for HCC, LC, and CHB. In the current small-sample study, the sensitivity of serum LINC00941 and LINC00514 in the differential diagnosis of normal controls and HCC patients was 85% and 90%, specificity was 86.67% and 56.67%, and the AUC reached 0.919 and 0.808, respectively, indicating that serum LINC00941 and LINC00514 had good efficacy in the diagnosis of HCC; when combined with AFP, they significantly improved the sensitivity and accuracy of AFP-based diagnosis (87.5% vs. 60%, 75% vs. 60% and 0.962 vs. 0.815, 0.918 vs. 0.815). LINC00941 and LINC00514 did not show obvious advantages in the differential diagnosis of HCC and CHB. When used to distinguish HCC from LC, LINC00941 and LINC00514 both showed good specificity and accuracy (75.5%, 100% and 0.766, 0.781); LINC00941 had a sensitivity of 75%, while LINC00514 had a poor sensitivity of only 45%. When combined with AFP, LINC00941 and LINC00514 significantly improved the sensitivity and accuracy of AFP detection (65% vs. 35%, 67.5% vs. 35% and 0.820 vs. 0.668, 0.835 vs. 0.668). When used to distinguish CHB from controls or LC from controls, LINC00941 and LINC00514, alone or in combination with AFP, increased sensitivity and accuracy in the diagnosis of CHB and LC compared with AFP alone. When distinguishing LC from CHB, LINC00941 alone or combined with AFP significantly improved the sensitivity and accuracy of LC diagnosis compared with AFP alone. CONCLUSIONS: LINC00941 and LINC00514 were upregulated in HBV infection‐associated liver disease.
Their abnormal expression can be used as an independent marker for the diagnosis of liver diseases. However, the relevant mechanisms and effects are still unclear, and this research is limited by its sample size; further studies are therefore needed. CONFLICT OF INTEREST: The authors have no conflicts of interest. AUTHOR CONTRIBUTION: PZ, JC, and DT proposed the concept of the work. JC carried out most of the experimental work and wrote the paper. HL provided critical review to improve the manuscript. All authors read and approved the final manuscript.
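The single-marker and combined-marker ROC comparisons reported above follow a standard pattern that can be sketched as below. The data are simulated, and the logistic-regression combination is an illustrative assumption; the study does not state how its combined AFP + lncRNA score was constructed.

```python
# Sketch of single-marker AUCs plus a combined two-marker score on
# simulated data (all numbers here are illustrative, not the study's).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 30)                      # 0 = control, 1 = HCC
afp = rng.normal(loc=1.0 * y, scale=1.0)       # weaker toy marker
linc = rng.normal(loc=2.0 * y, scale=1.0)      # stronger toy marker

auc_afp = roc_auc_score(y, afp)
auc_linc = roc_auc_score(y, linc)

# Combined score from both markers via in-sample logistic regression
# (one common choice; an assumption, not the paper's stated method).
X = np.column_stack([afp, linc])
combined = LogisticRegression().fit(X, y).decision_function(X)
auc_combined = roc_auc_score(y, combined)

# Sensitivity/specificity at the Youden-optimal cutoff for one marker.
fpr, tpr, _ = roc_curve(y, linc)
best = int(np.argmax(tpr - fpr))
sensitivity, specificity = tpr[best], 1.0 - fpr[best]
```

With real data one would report these AUCs with confidence intervals and compare them with a paired test (e.g. DeLong), as the significance values in Table 3 imply.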
Background: Long non-coding RNAs (lncRNAs) are considered potential diagnostic markers for a variety of tumors. Here, we aimed to explore changes in LINC00941 and LINC00514 expression in hepatitis B virus (HBV) infection-related liver disease and evaluate their value in disease diagnosis. Methods: Serum levels of LINC00941 and LINC00514 were detected by qRT-PCR. Potential diagnostic value was evaluated by receiver operating characteristic (ROC) curve analysis. Results: Serum LINC00941 and LINC00514 levels were elevated in patients with chronic hepatitis B (CHB), liver cirrhosis (LC), and hepatocellular carcinoma (HCC) compared with controls. When distinguishing HCC from controls, serum LINC00941 and LINC00514 had an AUC of 0.919 and 0.808, sensitivity of 85% and 90%, and specificity of 86.67% and 56.67%, respectively, which were higher than the parameters for alpha-fetoprotein (AFP) (all p < 0.0001). When distinguishing HCC from LC or CHB, or LC from controls, combined detection of LINC00941 or LINC00514 with AFP significantly improved the accuracy of the AFP test alone (all p < 0.05). Conclusions: LINC00941 and LINC00514 were increased in the serum of patients with HBV infection-associated liver diseases and might be independent markers for the detection of liver diseases.
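The high/low grouping at the mean used in the clinical-association analysis, together with a chi-square test of that grouping against a binary clinical variable, can be sketched as follows. The data, variable names, and cutoff handling are illustrative assumptions, not the study's actual values.

```python
# Mean-split of expression values and a chi-square association test
# (toy data; the study's actual measurements are not reproduced here).
import numpy as np
from scipy.stats import chi2_contingency

def mean_split(values):
    """Label each expression value 'high' or 'low' relative to the sample mean."""
    values = np.asarray(values, dtype=float)
    return np.where(values >= values.mean(), "high", "low")

def association_p(levels, clinical_binary):
    """Chi-square test of independence between the expression group and a
    binary clinical variable (e.g. AFP above/below a cutoff)."""
    groups = mean_split(levels)
    table = np.array([
        [int(np.sum((groups == g) & (clinical_binary == c))) for c in (0, 1)]
        for g in ("low", "high")
    ])
    _, p, _, _ = chi2_contingency(table)
    return p

levels = [1.2, 3.4, 0.8, 2.9, 1.1, 3.1, 0.9, 3.0]   # toy serum levels
afp_high = np.array([0, 1, 0, 1, 0, 1, 0, 1])        # toy binary covariate
p_value = association_p(levels, afp_high)
```

In the toy data the four above-mean values coincide exactly with the elevated-AFP patients, so the test returns a small p-value; with the study's real 40-patient data no such association was found.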
INTRODUCTION: Liver cancer is one of the most common human cancers worldwide and a leading cause of cancer-related death; its morbidity and mortality rank 6th and 2nd, respectively, among all malignant tumors. 1 According to the 2018 global cancer statistics, there are about 840,000 new cases of liver cancer each year (an incidence rate of about 4.7%) and about 780,000 liver cancer deaths each year (a mortality rate of 8.2%). 2 China is one of the countries with a high incidence of liver cancer. Based on tissue pathology, hepatocellular carcinoma (HCC) accounts for more than 90% of all liver cancers and ranks fourth in incidence and third in mortality among all malignant tumors; in addition, males have a higher incidence and poorer prognosis than females. 3 , 4 HCC is mainly caused by hepatitis virus infection, alcohol abuse, non‐alcoholic steatohepatitis, toxin exposure, and metabolic syndrome. 5 Chronic hepatitis B virus (HBV) infection and aflatoxin exposure are the main pathogenic factors of HCC in China, and HCC caused by chronic HBV infection accounts for more than 80% of all HCC. 6 In addition to liver ultrasound, serum alpha-fetoprotein (AFP) detection is the main method for large-scale HCC screening in China. However, its low sensitivity and specificity limit the detection rate of HCC, so many HCC patients miss the optimal window for surgery. Therefore, it is very important to explore novel and effective serum markers to improve the detection rate and prognosis of HCC. Non‐coding RNAs are currently a research focus in molecular biology. Although they do not encode proteins, they play important roles at the epigenetic, transcriptional, and post‐transcriptional levels.
An increasing number of lncRNAs have been found to be involved in the occurrence and development of a variety of tumors and can be used for tumor diagnosis and prognosis monitoring. Studies have reported that LINC00941 expression is increased in pancreatic cancer, colorectal cancer, and lung cancer, among others. 7 , 8 , 9 , 10 It can promote tumor progression, including proliferation, metastasis, and invasion, through a variety of signaling pathways. 7 , 8 , 9 , 11 It can also be used as a diagnostic or prognostic marker for gastric cancer, lung cancer, head and neck squamous cell carcinoma, and other tumors 12 , 13 , 14 and could predict recurrence‐free survival and overall survival in HCC. 15 On the other hand, LINC00514 has been reported to be highly expressed in the tissues and cells of breast and pancreatic cancer and to promote tumor occurrence and development by regulating related microRNAs. 16 , 17 However, no studies to date have reported their diagnostic value in HCC. In this research, we detected the expression of LINC00941 and LINC00514 in healthy controls and patients with HBV infection‐related liver disease and assessed their correlation with the basic characteristics of patients with HCC. The diagnostic values of LINC00941 and LINC00514 in liver diseases were also analyzed by receiver operating characteristic (ROC) curve analysis. CONCLUSIONS: LINC00941 and LINC00514 were upregulated in HBV infection‐associated liver disease. Their abnormal expression can be used as an independent marker for the diagnosis of liver diseases. However, the relevant mechanisms and effects are still unclear. Moreover, this research is limited by sample size. Hence, further studies are needed.
Background: Long non-coding RNAs (lncRNAs) are considered potential diagnostic markers for a variety of tumors. Here, we aimed to explore changes in LINC00941 and LINC00514 expression in hepatitis B virus (HBV) infection-related liver disease and evaluate their value in disease diagnosis. Methods: Serum levels of LINC00941 and LINC00514 were detected by qRT-PCR. Potential diagnostic value was evaluated by receiver operating characteristic (ROC) curve analysis. Results: Serum LINC00941 and LINC00514 levels were elevated in patients with chronic hepatitis B (CHB), liver cirrhosis (LC), and hepatocellular carcinoma (HCC) compared with controls. When distinguishing HCC from controls, serum LINC00941 and LINC00514 had an AUC of 0.919 and 0.808, sensitivity of 85% and 90%, and specificity of 86.67% and 56.67%, respectively, which were higher than the parameters for alpha-fetoprotein (AFP) (all p < 0.0001). When distinguishing HCC from LC or CHB, or LC from controls, combined detection of LINC00941 or LINC00514 with AFP significantly improved the accuracy of the AFP test alone (all p < 0.05). Conclusions: LINC00941 and LINC00514 were increased in the serum of patients with HBV infection-associated liver diseases and might be independent markers for the detection of liver diseases.
8,442
254
[ 637, 236, 118, 184, 171, 171, 233, 199, 184, 216, 434, 46 ]
17
[ "hcc", "linc00514", "linc00941", "linc00941 linc00514", "afp", "serum", "patients", "lc", "chb", "liver" ]
[ "incidence liver cancer", "liver cancer related", "prevalence liver cancer", "hcc hepatocellular carcinoma", "hepatocellular carcinoma hcc" ]
null
[CONTENT] diagnosis | hepatitis B virus | LINC00514 | LINC00941 | liver diseases [SUMMARY]
null
[CONTENT] diagnosis | hepatitis B virus | LINC00514 | LINC00941 | liver diseases [SUMMARY]
[CONTENT] diagnosis | hepatitis B virus | LINC00514 | LINC00941 | liver diseases [SUMMARY]
[CONTENT] diagnosis | hepatitis B virus | LINC00514 | LINC00941 | liver diseases [SUMMARY]
[CONTENT] diagnosis | hepatitis B virus | LINC00514 | LINC00941 | liver diseases [SUMMARY]
[CONTENT] Adult | Biomarkers | Female | Hepatitis B | Humans | Liver Diseases | Male | Middle Aged | RNA, Long Noncoding [SUMMARY]
null
[CONTENT] Adult | Biomarkers | Female | Hepatitis B | Humans | Liver Diseases | Male | Middle Aged | RNA, Long Noncoding [SUMMARY]
[CONTENT] Adult | Biomarkers | Female | Hepatitis B | Humans | Liver Diseases | Male | Middle Aged | RNA, Long Noncoding [SUMMARY]
[CONTENT] Adult | Biomarkers | Female | Hepatitis B | Humans | Liver Diseases | Male | Middle Aged | RNA, Long Noncoding [SUMMARY]
[CONTENT] Adult | Biomarkers | Female | Hepatitis B | Humans | Liver Diseases | Male | Middle Aged | RNA, Long Noncoding [SUMMARY]
[CONTENT] incidence liver cancer | liver cancer related | prevalence liver cancer | hcc hepatocellular carcinoma | hepatocellular carcinoma hcc [SUMMARY]
null
[CONTENT] incidence liver cancer | liver cancer related | prevalence liver cancer | hcc hepatocellular carcinoma | hepatocellular carcinoma hcc [SUMMARY]
[CONTENT] incidence liver cancer | liver cancer related | prevalence liver cancer | hcc hepatocellular carcinoma | hepatocellular carcinoma hcc [SUMMARY]
[CONTENT] incidence liver cancer | liver cancer related | prevalence liver cancer | hcc hepatocellular carcinoma | hepatocellular carcinoma hcc [SUMMARY]
[CONTENT] incidence liver cancer | liver cancer related | prevalence liver cancer | hcc hepatocellular carcinoma | hepatocellular carcinoma hcc [SUMMARY]
[CONTENT] hcc | linc00514 | linc00941 | linc00941 linc00514 | afp | serum | patients | lc | chb | liver [SUMMARY]
null
[CONTENT] hcc | linc00514 | linc00941 | linc00941 linc00514 | afp | serum | patients | lc | chb | liver [SUMMARY]
[CONTENT] hcc | linc00514 | linc00941 | linc00941 linc00514 | afp | serum | patients | lc | chb | liver [SUMMARY]
[CONTENT] hcc | linc00514 | linc00941 | linc00941 linc00514 | afp | serum | patients | lc | chb | liver [SUMMARY]
[CONTENT] hcc | linc00514 | linc00941 | linc00941 linc00514 | afp | serum | patients | lc | chb | liver [SUMMARY]
[CONTENT] cancer | hcc | incidence | liver | rate | tumors | mortality | liver cancer | infection | important [SUMMARY]
null
[CONTENT] hcc | linc00514 | linc00941 | lc | afp | chb | patients | linc00941 linc00514 | figure | serum [SUMMARY]
[CONTENT] unclear research | relevant mechanisms | effects unclear research | effects unclear research limited | limited sample size studies | limited sample size | limited sample | independent | independent marker | independent marker diagnosis [SUMMARY]
[CONTENT] hcc | linc00514 | linc00941 | patients | lc | linc00941 linc00514 | afp | chb | liver | serum [SUMMARY]
[CONTENT] hcc | linc00514 | linc00941 | patients | lc | linc00941 linc00514 | afp | chb | liver | serum [SUMMARY]
[CONTENT] ||| LINC00941 [SUMMARY]
null
[CONTENT] Serum LINC00941 | CHB ||| serum LINC00941 | 0.919 | 0.808 | 85% and | 90% | 86.67% and | 56.67% | AFP | 0.0001 ||| CHB | LINC00941 | AFP | 0.05 [SUMMARY]
[CONTENT] LINC00941 | HBV [SUMMARY]
[CONTENT] ||| LINC00941 ||| LINC00941 ||| ROC ||| Serum LINC00941 | CHB ||| serum LINC00941 | 0.919 | 0.808 | 85% and | 90% | 86.67% and | 56.67% | AFP | 0.0001 ||| CHB | LINC00941 | AFP | 0.05 ||| LINC00941 | HBV [SUMMARY]
[CONTENT] ||| LINC00941 ||| LINC00941 ||| ROC ||| Serum LINC00941 | CHB ||| serum LINC00941 | 0.919 | 0.808 | 85% and | 90% | 86.67% and | 56.67% | AFP | 0.0001 ||| CHB | LINC00941 | AFP | 0.05 ||| LINC00941 | HBV [SUMMARY]
Association study indicates a protective role of phosphatidylinositol-4-phosphate-5-kinase against tardive dyskinesia.
25548108
Tardive dyskinesia is a disorder characterized by involuntary muscle movements that occur as a complication of long-term treatment with antipsychotic drugs. It has been suggested to be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit, which may be caused by oxidative stress-induced neurotoxicity.
BACKGROUND
The purpose of our study was to investigate the possible association between phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) function and tardive dyskinesia in 491 Caucasian patients with schizophrenia from 3 different psychiatric institutes in West Siberia. The Abnormal Involuntary Movement Scale was used to assess tardive dyskinesia. Individuals were genotyped for 3 single nucleotide polymorphisms in PIP5K2A gene: rs10828317, rs746203, and rs8341.
METHODS
A significant association was established between the functional mutation N251S-polymorphism of the PIP5K2A gene (rs10828317) and tardive dyskinesia, while the other 2 examined nonfunctional single nucleotide polymorphisms were not related.
RESULTS
We conclude from this association that PIP5K2A is possibly involved in a mechanism protecting against tardive dyskinesia-inducing neurotoxicity. This corresponds to our hypothesis that tardive dyskinesia is related to neurotoxicity at striatal indirect pathway medium-sized spiny neurons.
CONCLUSIONS
[ "Adult", "Antipsychotic Agents", "Dyskinesia, Drug-Induced", "Female", "Gene Frequency", "Genetic Association Studies", "Genetic Predisposition to Disease", "Humans", "Male", "Middle Aged", "Movement Disorders", "Phenotype", "Phosphotransferases (Alcohol Group Acceptor)", "Polymorphism, Single Nucleotide", "Protective Factors", "Risk Assessment", "Risk Factors", "Schizophrenia", "Siberia", "Young Adult" ]
4438543
Introduction
Dyskinesia is a collective name for a variety of involuntary hyperkinetic movements (Loonen and Van Praag, 2007). The movements are irregular, repetitive, and typically include motionless intervals. Dyskinesia may result from long-term treatment with antipsychotic drugs. This involuntary movement syndrome is termed tardive dyskinesia (TD) (Margolese et al., 2005; Kane, 2006). TD is a potentially disabling irreversible movement disorder, which has a prevalence of around 30% in patients chronically exposed to antipsychotics (Kane et al., 1988; Glazer, 2000). It can be subdivided into orofaciolingual (TDof) and limb-truncal (TDlt) dyskinesia (Al Hadithy et al., 2009, 2010). TD is classified as an extrapyramidal movement disorder and may be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit (Figure 1) (Loonen and Ivanova, 2013). The indirect pathway starts with dopamine-D2 receptor expressing medium-sized spiny neurons (MSNs) in the striatum. Activation of this pathway results in inhibition of motor parts of the frontal cerebral cortex, and malfunctioning of this circuit would result in disinhibition and therefore hyperkinesia (Loonen and Ivanova, 2013). The cortical-striatal-thalamic-cortical circuits, including the indirect and direct pathways. Activation of the direct pathway causes hyperkinesia and activation of the indirect pathway causes hypokinesia. ENK, enkephalin; GPe, globus pallidus, external segment; GPi, globus pallidus, internal segment; SNc, substantia nigra, pars compacta; SNr, substantia nigra, pars reticulata; SP/DYN, substance P/dynorphin; STh, subthalamic nucleus; D1, D2, medium-sized spiny neurons (MSNs) with D1 or D2 receptors. Red, excitatory (glutamatergic, dopaminergic); blue, inhibitory (GABAergic, dopaminergic). 
Recently, our group identified an important link between 2 other hyperkinetic extrapyramidal movement disorders: Huntington’s disease (HD) and Levodopa-induced dyskinesia (LID). Patients suffering from LID are more often carriers of the same variants of the GRIN2A gene as those determining an earlier age of onset of dyskinesia in HD patients (Ivanova et al., 2012). The GRIN2A gene encodes the NR2A subunit of the glutamatergic N-methyl-d-aspartate (NMDA) receptor (Paoletti and Neyton, 2007; Ivanova et al., 2012). In HD, symptoms are linked to NMDA receptor-induced excitotoxicity in indirect pathway MSNs (Estrada Sanchez et al., 2007; Fan and Raymond, 2007; Kumar et al., 2010). Our finding suggests that LID is related to a similar NMDA receptor-related malfunctioning of dopamine-D2 receptor carrying indirect pathway MSNs as HD. According to the neurotoxicity theory of TD, degeneration of indirect pathway MSNs in this disorder is related to neurotoxic effects of the free radicals produced by excessive metabolism of dopamine (Lohr et al., 2003). This theory suggests that antipsychotic drugs block dopamine D2 receptors and therefore trigger a compensatory release of excess dopamine. This excess requires increased metabolism of the spilled neurotransmitter. Increased dopamine metabolism releases high levels of hydrogen peroxide, which results in the production of free radicals, which then cause cell damage. Hence, excessive dopamine metabolism results in the production of more free radicals than the cell can handle. This hypothesis is consistent with the reported association between the incidence of TD and the presence of variants in the gene that encodes manganese superoxide dismutase, an enzyme that scavenges free radicals (Al Hadithy et al., 2010). A reduction in manganese superoxide dismutase activity would increase the likelihood of neurotoxic effects.
It can be concluded that HD, LID, and TD are related to neurotoxic damage of indirect pathway MSNs and that every factor that increases neurotoxicity may also increase the likelihood of their becoming symptomatic. Phosphatidylinositol 4-phosphate 5-kinase (PIP5K; EC 2.7.1.68) is a neuronal intracellular enzyme that produces phosphatidylinositol (4,5)-biphosphate, which is catalyzed by phospholipase C to the second messengers inositol (1,4,5) triphosphate and diacylglycerol (for review, see Van den Bout and Divecha, 2009). Three isoforms of this enzyme have been identified: PIP5Kα, PIP5Kβ, and PIP5Kγ. The PIP5Kα isoform is also known as phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) and localizes to the plasma membrane and the Golgi complex and in the nucleus. PIP5K2A is involved in many different processes, including signal transduction of G-protein-coupled receptors, cell survival by protection against apoptosis, and the genetic response to oxidative stress (Van den Bout and Divecha, 2009). The PIP5K2A gene has been shown to be associated with schizophrenia in several independent studies (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008; Fedorenko et al., 2013). This is possibly related to a similar direct vs indirect pathway MSN hyperactivity explaining positive psychotic symptoms in schizophrenia as well as dyskinesia in TD. Indeed, drug-naïve first-episode patients experience spontaneous dyskinesia more frequently than healthy controls (Tenback and Van Harten, 2011). Although the exact regulatory functions of different types of PIP5Ks are far from evident, these enzymes can be expected to also play a role in augmenting or decreasing the excitability of corticostriatal glutamatergic synapses with MSNs during the induction of long-term potentiation and long-term depression (LTD), respectively. 
Long-term potentiation and LTD may play an important role in the mechanism of dyskinesia, as they regulate the readiness of corticostriatal synapses to excitatory (including excitotoxic) effects (Ivanova et al., 2012). In mice, for example, NMDA receptor-mediated compensatory LTD depends upon activation of PIP5Kγ661, which results in AMPA receptor endocytosis (Unoki et al., 2012). In a heteromeric expression system, PIP5K2A has been identified as a novel signaling element in the regulation of the neuronal KCNQ2/KCNQ3 and KCNQ3/KCNQ5 channels, the EAAT3 glutamate transporter, and GluA1 function (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). It has been shown that PIP5K2A regulation is disrupted in the schizophrenia-associated mutant (N251S)-PIP5K2A (rs10828317), which may contribute to the pathogenesis of schizophrenia through uncontrolled dopaminergic firing and deranged glutamate metabolism in the brain of patients carrying this mutation (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). We therefore studied the association between TD prevalence and a PIP5K2A variant that, according to in vitro observations (Fedorenko et al., 2008, 2009; Seebohm et al., 2014), encodes a less active form of the enzyme, in comparison with 2 nonfunctional genetic variants, in a White Siberian patient population suffering from schizophrenia, in order to establish a possible role for PIP5K in the pathophysiology of this disorder.
Patients and Methods
Patients The work described in this article was carried out in accordance with the most recent version of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and with the Uniform Requirements for manuscripts submitted to biomedical journals. After obtaining approval of the study protocol by the institutional bioethics committee, suitable participants were recruited from 3 psychiatric hospitals in the Tomsk, Kemerovo, and Chita areas in Siberia (Russia). All subjects gave informed consent after proper explanation of the study. We included 491 subjects with a clinical diagnosis of schizophrenia according to the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10: F20; N=465; 94.7%) or schizotypal disorder (ICD-10: F21) and excluded subjects with non-Caucasian physical appearance (eg, Mongoloid, Buryats, Tyvans, or Khakassians) or those with organic or neurological disorders. Patients were assessed for the presence or absence of dyskinesia according to the Abnormal Involuntary Movement Scale (AIMS) (Loonen and Van Praag, 2007). The AIMS scores were transformed into a binary form (presence or absence of dyskinesia) using the Schooler and Kane (1982) criteria. The presence of TDof and TDlt was established by a cutoff score of ≥2 (mild but definite) on any of items 1 through 4 and 5 through 7 of the AIMS, respectively. The sum of the first 4 items was used as a proxy for the severity of TDof, while the sum of items 5 through 7 was used as a proxy for the severity of TDlt. A blood sample was taken for DNA isolation and genotyping. The other inclusion criteria were no addictions, no organic disorders, and a high-quality DNA sample.
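The AIMS binarization described above (a score of ≥2 on any of items 1 through 4 marks orofaciolingual TD, on any of items 5 through 7 marks limb-truncal TD, with the item sums as severity proxies) can be sketched as follows; the function and field names are illustrative, not the study's code.

```python
# Minimal sketch of the Schooler-Kane style classification used here.
def classify_td(aims_items):
    """aims_items: scores (0-4) for the first 7 AIMS items, in order."""
    orofacial, limb_truncal = aims_items[0:4], aims_items[4:7]
    return {
        "TDof": any(s >= 2 for s in orofacial),       # items 1-4
        "TDlt": any(s >= 2 for s in limb_truncal),    # items 5-7
        "TDof_severity": sum(orofacial),              # severity proxy
        "TDlt_severity": sum(limb_truncal),           # severity proxy
    }

result = classify_td([0, 2, 1, 0, 0, 0, 1])  # mild but definite orofacial sign
```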
Medication On the day of TD assessment, a complete documentation of the medications utilized was compiled by the raters. For comparison, daily antipsychotic medication dosages were converted into chlorpromazine equivalents (Andreasen et al., 2010).
Patients using clozapine who did not suffer from TD were excluded, as clozapine may suppress the symptoms of TD. Genotyping DNA extraction was conducted according to standard protocols using phenol-chloroform extraction. Genotyping of PIP5K2A polymorphisms (rs10828317, rs746203, rs8341) was performed on an ABI StepOnePlus with a TaqMan Validated SNP Genotyping Assay (Applied Biosystems). Statistics The Hardy-Weinberg equilibrium of genotypic frequencies was tested by the chi-square test. Statistical analyses were performed using SPSS software, release 17, for Windows; P<.05 was considered significant. To correct for multiple testing, we used the False Discovery Rate control algorithm described by Benjamini and Hochberg (1995). The chi-square test and, if necessary, Fisher's exact test were used for between-group comparisons of genotypic or allelic frequencies. Between-group differences in continuous variables were evaluated using the Student's t test or 1-way analysis of variance. Comparisons of AIMS scores in different groups were carried out with the Kruskal-Wallis test. The relevant Bonferroni correction for multiple testing was applied. Logistic regression analysis was performed to isolate possible TD-related variables: age, sex, duration of illness, age at onset, and PIP5K2A polymorphism.
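Two of the statistical procedures named in the Statistics paragraph, the Hardy-Weinberg chi-square test and Benjamini-Hochberg false-discovery-rate control, can be sketched as follows. These are illustrative implementations, not the study's SPSS workflow.

```python
# Sketches of the Hardy-Weinberg goodness-of-fit test and BH FDR control.
import numpy as np
from scipy.stats import chi2

def hwe_p(n_major_hom, n_het, n_minor_hom):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_major_hom + n_het + n_minor_hom
    p = (2 * n_major_hom + n_het) / (2 * n)           # major allele frequency
    exp = np.array([p * p, 2 * p * (1 - p), (1 - p) ** 2]) * n
    obs = np.array([n_major_hom, n_het, n_minor_hom])
    stat = ((obs - exp) ** 2 / exp).sum()
    return chi2.sf(stat, df=1)   # df = 3 classes - 1 - 1 estimated frequency

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean reject mask controlling the false discovery rate at alpha."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        reject[order[: np.nonzero(below)[0].max() + 1]] = True
    return reject
```

For genotype counts that exactly match expectation (e.g. 25/50/25 with allele frequency 0.5) the HWE statistic is 0 and the p-value is 1; the BH mask rejects the largest prefix of the sorted p-values that clears the stepped threshold.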
null
null
Discussion
None.
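The between-group comparison of genotypic frequencies described in the Statistics subsection amounts to a chi-square test of independence on a 2x3 contingency table (TD status by CC/CT/TT genotype). A minimal sketch, with hypothetical counts that are not the study's data:

```python
from scipy.stats import chi2_contingency

def genotype_association(counts_td, counts_without_td):
    """Chi-square test of independence between TD status and genotype.

    Each argument is a (CC, CT, TT) tuple of genotype counts.
    Returns (statistic, p_value, degrees_of_freedom).
    """
    stat, p_value, dof, _expected = chi2_contingency([counts_td, counts_without_td])
    return stat, p_value, dof

# Hypothetical counts, for illustration only (not the study's data):
stat, p_value, dof = genotype_association((40, 60, 30), (25, 80, 45))
```

When any expected cell count is small, Fisher's exact test is used instead, as the Statistics subsection notes.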
[ "Introduction", "Patients", "Medication", "Genotyping", "Statistics", "Results", "Discussion" ]
[ "Dyskinesia is a collective name for a variety of involuntary hyperkinetic movements (Loonen and Van Praag, 2007). The movements are irregular, repetitive, and typically include motionless intervals. Dyskinesia may result from long-term treatment with antipsychotic drugs. This involuntary movement syndrome is termed tardive dyskinesia (TD) (Margolese et al., 2005; Kane, 2006). TD is a potentially disabling irreversible movement disorder, which has a prevalence of around 30% in patients chronically exposed to antipsychotics (Kane et al., 1988; Glazer, 2000). It can be subdivided into orofaciolingual (TDof) and limb-truncal (TDlt) dyskinesia (Al Hadithy et al., 2009, 2010).\nTD is classified as an extrapyramidal movement disorder and may be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit (Figure 1) (Loonen and Ivanova, 2013). The indirect pathway starts with dopamine-D2 receptor expressing medium-sized spiny neurons (MSNs) in the striatum. Activation of this pathway results in inhibition of motor parts of the frontal cerebral cortex, and malfunctioning of this circuit would result in disinhibition and therefore hyperkinesia (Loonen and Ivanova, 2013).\nThe cortical-striatal-thalamic-cortical circuits, including the indirect and direct pathways. Activation of the direct pathway causes hyperkinesia and activation of the indirect pathway causes hypokinesia. ENK, enkephalin; GPe, globus pallidus, external segment; GPi, globus pallidus, internal segment; SNc, substantia nigra, pars compacta; SNr, substantia nigra, pars reticulata; SP/DYN, substance P/dynorphin; STh, subthalamic nucleus; D1, D2, medium-sized spiny neurons (MSNs) with D1 or D2 receptors. 
Red, excitatory (glutamatergic, dopaminergic); blue, inhibitory (GABAergic, dopaminergic).\nRecently, our group identified an important link between 2 other hyperkinetic extrapyramidal movement disorders: Huntington’s disease (HD) and Levodopa-induced dyskinesia (LID). Patients suffering from LID are more often carriers of the same GRIN2A gene variants that determine an earlier age of onset of dyskinesia in HD patients (Ivanova et al., 2012). The GRIN2A gene encodes the NR2A subunit of the glutamatergic N-methyl-d-aspartate (NMDA) receptor (Paoletti and Neyton, 2007; Ivanova et al., 2012). In HD, symptoms are linked to NMDA receptor-induced excitotoxicity in indirect pathway MSNs (Estrada Sanchez et al., 2007; Fan and Raymond, 2007; Kumar et al., 2010). Our finding suggests that LID is related to a similar NMDA receptor-related malfunctioning of dopamine-D2 receptor-carrying indirect pathway MSNs as in HD. According to the neurotoxicity theory of TD, degeneration of indirect pathway MSNs in this disorder is related to neurotoxic effects of the free radicals produced by excessive metabolism of dopamine (Lohr et al., 2003). This theory suggests that antipsychotic drugs block dopamine D2 receptors and therefore trigger a compensatory release of excess dopamine. This excess requires increased metabolism of the spilled neurotransmitter. Increased dopamine metabolism releases high levels of hydrogen peroxide, which results in the production of free radicals, which then cause cell damage. Hence, excessive dopamine metabolism results in the production of more free radicals than the cell can handle. This hypothesis is consistent with the reported association between the incidence of TD and the presence of variants in the gene that encodes manganese superoxide dismutase, an enzyme that scavenges free radicals (Al Hadithy et al., 2010). A reduction in manganese superoxide dismutase activity would increase the likelihood of neurotoxic effects. 
It can be concluded that HD, LID, and TD are related to neurotoxic damage of indirect pathway MSNs and that every factor that increases neurotoxicity may also increase the likelihood of their becoming symptomatic.\nPhosphatidylinositol 4-phosphate 5-kinase (PIP5K; EC 2.7.1.68) is a neuronal intracellular enzyme that produces phosphatidylinositol (4,5)-bisphosphate, which is cleaved by phospholipase C into the second messengers inositol (1,4,5)-trisphosphate and diacylglycerol (for review, see Van den Bout and Divecha, 2009). Three isoforms of this enzyme have been identified: PIP5Kα, PIP5Kβ, and PIP5Kγ. The PIP5Kα isoform is also known as phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) and localizes to the plasma membrane, the Golgi complex, and the nucleus. PIP5K2A is involved in many different processes, including signal transduction of G-protein-coupled receptors, cell survival by protection against apoptosis, and the genetic response to oxidative stress (Van den Bout and Divecha, 2009). The PIP5K2A gene has been shown to be associated with schizophrenia in several independent studies (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008; Fedorenko et al., 2013). This is possibly related to a similar direct vs indirect pathway MSN hyperactivity explaining positive psychotic symptoms in schizophrenia as well as dyskinesia in TD. Indeed, drug-naïve first-episode patients experience spontaneous dyskinesia more frequently than healthy controls (Tenback and Van Harten, 2011).\nAlthough the exact regulatory functions of different types of PIP5Ks are far from evident, these enzymes can be expected to also play a role in augmenting or decreasing the excitability of corticostriatal glutamatergic synapses with MSNs during the induction of long-term potentiation and long-term depression (LTD), respectively. 
Long-term potentiation and LTD may play an important role in the mechanism of dyskinesia, as they regulate the readiness of corticostriatal synapses to excitatory (including excitotoxic) effects (Ivanova et al., 2012). In mice, for example, NMDA receptor-mediated compensatory LTD depends upon activation of PIP5Kγ661, which results in AMPA receptor endocytosis (Unoki et al., 2012).\nIn a heteromeric expression system, PIP5K2A has been disclosed to be a novel signaling element in the regulation of the neuronal KCNQ2/KCNQ3 and KCNQ3/KCNQ5 channels, EAAT3 glutamate transporter, and GluA1 function (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). It has been shown that PIP5K2A regulation is disrupted in the schizophrenia-associated mutant (N251S)-PIP5K2A (rs10828317), which may contribute to the pathogenesis of schizophrenia through uncontrolled dopaminergic firing and deranged glutamate metabolism in the brain of schizophrenic patients carrying this mutation (Fedorenko et al., 2008, 2009; Seebohm et al., 2014).\nWe decided to study a possible association between a genetic variant of PIP5K2A encoding—according to in vitro observations (Fedorenko et al., 2008, 2009; Seebohm et al., 2014)—for a less active variant of this enzyme in comparison to 2 nonfunctional genetic variations and the prevalence of TD in a White Siberian patient population suffering from schizophrenia in order to establish a possible role for PIP5K in the pathophysiology of this disorder.", "The work described in this article was carried out in accordance with the most recent version of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and with the Uniform Requirements for manuscripts submitted to biomedical journals. After obtaining approval of the study protocol by the institutional bioethics committee, suitable participants were recruited from 3 psychiatric hospitals in the Tomsk, Kemerovo, and Chita areas in Siberia (Russia). 
All subjects gave informed consent after proper explanation of the study.\nWe included 491 subjects with a clinical diagnosis of schizophrenia according to the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10: F20; N=465; 94.7%) or schizotypal disorder (ICD-10: F21) and excluded subjects with non-Caucasian physical appearance (eg, Mongoloid, Buryats, Tyvans, or Khakassians) or those with organic or neurological disorders. Patients were assessed for the presence or absence of dyskinesia according to the abnormal involuntary movement scale (AIMS) (Loonen and Van Praag, 2007). The AIMS scores were transformed into a binary form (presence or absence of dyskinesia) with the Schooler and Kane (1982) criteria. The presence of TDof and TDlt was established by a cutoff score of ≥2 (mild but definite) on any of the items 1 through 4 and 5 through 7 of AIMS, respectively. The sum of the first 4 items was used as a proxy for the severity of TDof, while the sum of items 5 through 7 was used as a proxy for the severity of TDlt.\nA blood sample was taken for DNA isolation and genotyping. The other inclusion criteria were no addictions, no organic disorders, and a high-quality DNA sample.", "On the day of TD assessment, a complete documentation of the medications utilized was compiled by the raters. For comparison, daily antipsychotic medication dosages were converted into chlorpromazine equivalents (Andreasen et al., 2010). Patients using clozapine who did not suffer from TD were excluded, as clozapine may suppress the symptoms of TD.", "DNA extraction was conducted according to standard protocols using phenol-chloroform extraction. Genotyping of PIP5K2A polymorphisms (rs10828317, rs746203, rs8341) was performed on an ABI StepOnePlus with a TaqMan Validated SNP Genotyping Assay (Applied Biosystems).", "The Hardy-Weinberg equilibrium of genotypic frequencies was tested by the chi-square test. 
Statistical analyses were performed using SPSS software, release 17, for Windows; P<.05 was considered significant. To correct for multiple testing, we used the False Discovery Rate control algorithm described by Benjamini and Hochberg (1995).\nThe chi-square test and, where necessary, Fisher’s exact test were used for between-group comparisons of genotypic or allelic frequencies. Between-group differences in continuous variables were evaluated using the Student’s t test or 1-way analysis of variance. Comparisons of AIMS scores in different groups were carried out with the Kruskal-Wallis test. The relevant Bonferroni correction for multiple testing was applied.\nLogistic regression analysis was performed to isolate the possible TD-related variables: age, sex, duration of illness, age at onset, and PIP5K2A polymorphism.", "\nTable 1 shows the clinical and demographic characteristics of patients with and without TD. The genotype distributions of the PIP5K2A (rs10828317, rs746203, rs8341) polymorphisms were in agreement with Hardy-Weinberg equilibrium in this patient group. No significant differences in genotype frequencies of the 2 nonfunctional polymorphisms rs746203 and rs8341 between the 2 groups of patients with and without TD were found (Table 2). However, a significant association was demonstrated to exist between TD and the functional rs10828317 mutation. After correction for multiple testing, the observed differences remained statistically significant (P=.018). CC carriers had a higher risk of TDof (OR=2.55, 95% CI=1.56–4.14, P=.0006), TDlt (OR=1.85, 95% CI=1.1–3.13, P=.04), and TDtot (OR=2.17, 95% CI=1.34–3.51, P=.003). 
So, the frequency of CC carriers is about twice as high in the group of schizophrenic patients with TD compared to the group without TD.\nThe Clinical and Demographic Characteristics of Patients with and without TD\nAbbreviation: TD, tardive dyskinesia.\n*Chi-square test; **t test.\nDistribution of rs10828317, rs8341, and rs746203 Genotypes and Alleles in Patients with and without TD\nAbbreviation: TD, tardive dyskinesia.\nWe also found an association between genotype and severity of TD. Patients who are CC carriers of rs10828317 had a significantly (P<.02, Mann-Whitney with Bonferroni correction) higher mean AIMS TDtot and TDof score in comparison to those with the CT or TT genotype (data not shown).\nAnalysis of covariance with age, sex, duration of disease, and chlorpromazine equivalent incorporated as covariates showed that TD is significantly (P < .005) associated with the PIP5K2A (rs10828317) polymorphism (details not shown).\nUsing the binary logistic regression method, we revealed an association between the CC genotype of rs10828317 and TD (P=.005), whereas age (P=.329), sex (P=.956), duration of disease (P=.139), chlorpromazine equivalent (P=.683), and age of onset of the disorder (P=.608) did not contribute significantly to our model.", "In this study, we genotyped patients with and without TD with respect to 3 polymorphisms of the PIP5K2A gene. In Figure 2, the single nucleotide polymorphism positions are represented. Only one of them, rs10828317, is known to be a functional mutation. 
Replacement of T by C leads to a nonsynonymous amino-acid exchange (asparagine to serine) that increases the distance between 2 antiparallel helices from 3Å to 6Å and thereby interferes with the function of the enzyme (Fedorenko et al., 2008).\nRepresentation of the single nucleotide polymorphism positions of 3 studied polymorphisms of the PIP5K2A gene (He et al., 2007).\nWe decided to study the PIP5K2A gene because this gene has repeatedly been shown to be associated with schizophrenia (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008), and the vulnerability to develop TD is related to the likelihood of developing positive symptoms of schizophrenia. In a previous study, we confirmed this association in the presently studied Caucasian Siberian patients with schizophrenia (Fedorenko et al., 2013), but now we have also demonstrated a relationship with the prevalence of TD. Koning et al. (2011a, 2011b) have described an association of TD with schizotypy in unaffected siblings of patients with nonaffective psychosis. Moreover, drug-naïve first-episode patients sometimes show spontaneous dyskinesia (Tenback and Van Harten, 2011). Therefore, indirect evidence supports a possible role of genetic factors increasing the vulnerability to develop TD in patients with schizophrenia. Hereditary decreased activity of PIP5K might be one of them.\nThe present study did not address the mechanisms regulated by PIP5K2A and possibly contributing to the development of TD in carriers of the rs10828317 polymorphism. It is noteworthy, however, that PIP5K2A participates in the regulation of both glutamate receptor GluA1 (Seebohm et al., 2014) and glutamate carrier EAAT3 (Fedorenko et al., 2009). Thus, PIP5K2A may both increase glutamate sensitivity of neurons and terminate glutamate-induced excitation by accelerating clearance of glutamate from the synaptic cleft. 
It is tempting to speculate that deranged glutamate sensitivity or abundance may foster the development of TD, as it may increase the vulnerability of indirect pathway MSNs for oxidative stress-induced neurotoxicity (Loonen and Ivanova, 2013).\nIn conclusion, the present observations reveal an association of PIP5K2A gene variants with TD and thus suggest a clinical significance of this kinase in the control of movement and/or neuronal survival." ]
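The odds ratios with 95% confidence intervals reported in the Results (e.g., OR=2.55, 95% CI 1.56-4.14 for TDof in CC carriers) are the standard effect measure for a 2x2 genotype-by-outcome table; the usual Wald (Woolf logit) interval can be computed as below. The counts used here are made up for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:

        a = carriers with TD       b = carriers without TD
        c = non-carriers with TD   d = non-carriers without TD
    """
    odds_ratio = (a * d) / (b * c)
    # Standard error of the log odds ratio (Woolf's method)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
```

With these illustrative counts the odds ratio is 4.0; the interval excludes 1, which would indicate a significant association at the .05 level.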
[ null, null, null, null, null, null, null ]
[ "Introduction", "Patients and Methods", "Patients", "Medication", "Genotyping", "Statistics", "Results", "Discussion" ]
[ "Dyskinesia is a collective name for a variety of involuntary hyperkinetic movements (Loonen and Van Praag, 2007). The movements are irregular, repetitive, and typically include motionless intervals. Dyskinesia may result from long-term treatment with antipsychotic drugs. This involuntary movement syndrome is termed tardive dyskinesia (TD) (Margolese et al., 2005; Kane, 2006). TD is a potentially disabling irreversible movement disorder, which has a prevalence of around 30% in patients chronically exposed to antipsychotics (Kane et al., 1988; Glazer, 2000). It can be subdivided into orofaciolingual (TDof) and limb-truncal (TDlt) dyskinesia (Al Hadithy et al., 2009, 2010).\nTD is classified as an extrapyramidal movement disorder and may be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit (Figure 1) (Loonen and Ivanova, 2013). The indirect pathway starts with dopamine-D2 receptor expressing medium-sized spiny neurons (MSNs) in the striatum. Activation of this pathway results in inhibition of motor parts of the frontal cerebral cortex, and malfunctioning of this circuit would result in disinhibition and therefore hyperkinesia (Loonen and Ivanova, 2013).\nThe cortical-striatal-thalamic-cortical circuits, including the indirect and direct pathways. Activation of the direct pathway causes hyperkinesia and activation of the indirect pathway causes hypokinesia. ENK, enkephalin; GPe, globus pallidus, external segment; GPi, globus pallidus, internal segment; SNc, substantia nigra, pars compacta; SNr, substantia nigra, pars reticulata; SP/DYN, substance P/dynorphin; STh, subthalamic nucleus; D1, D2, medium-sized spiny neurons (MSNs) with D1 or D2 receptors. 
Red, excitatory (glutamatergic, dopaminergic); blue, inhibitory (GABAergic, dopaminergic).\nRecently, our group identified an important link between 2 other hyperkinetic extrapyramidal movement disorders: Huntington’s disease (HD) and Levodopa-induced dyskinesia (LID). Patients suffering from LID are more often carriers of the same GRIN2A gene variants that determine an earlier age of onset of dyskinesia in HD patients (Ivanova et al., 2012). The GRIN2A gene encodes the NR2A subunit of the glutamatergic N-methyl-d-aspartate (NMDA) receptor (Paoletti and Neyton, 2007; Ivanova et al., 2012). In HD, symptoms are linked to NMDA receptor-induced excitotoxicity in indirect pathway MSNs (Estrada Sanchez et al., 2007; Fan and Raymond, 2007; Kumar et al., 2010). Our finding suggests that LID is related to a similar NMDA receptor-related malfunctioning of dopamine-D2 receptor-carrying indirect pathway MSNs as in HD. According to the neurotoxicity theory of TD, degeneration of indirect pathway MSNs in this disorder is related to neurotoxic effects of the free radicals produced by excessive metabolism of dopamine (Lohr et al., 2003). This theory suggests that antipsychotic drugs block dopamine D2 receptors and therefore trigger a compensatory release of excess dopamine. This excess requires increased metabolism of the spilled neurotransmitter. Increased dopamine metabolism releases high levels of hydrogen peroxide, which results in the production of free radicals, which then cause cell damage. Hence, excessive dopamine metabolism results in the production of more free radicals than the cell can handle. This hypothesis is consistent with the reported association between the incidence of TD and the presence of variants in the gene that encodes manganese superoxide dismutase, an enzyme that scavenges free radicals (Al Hadithy et al., 2010). A reduction in manganese superoxide dismutase activity would increase the likelihood of neurotoxic effects. 
It can be concluded that HD, LID, and TD are related to neurotoxic damage of indirect pathway MSNs and that every factor that increases neurotoxicity may also increase the likelihood of their becoming symptomatic.\nPhosphatidylinositol 4-phosphate 5-kinase (PIP5K; EC 2.7.1.68) is a neuronal intracellular enzyme that produces phosphatidylinositol (4,5)-bisphosphate, which is cleaved by phospholipase C into the second messengers inositol (1,4,5)-trisphosphate and diacylglycerol (for review, see Van den Bout and Divecha, 2009). Three isoforms of this enzyme have been identified: PIP5Kα, PIP5Kβ, and PIP5Kγ. The PIP5Kα isoform is also known as phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) and localizes to the plasma membrane, the Golgi complex, and the nucleus. PIP5K2A is involved in many different processes, including signal transduction of G-protein-coupled receptors, cell survival by protection against apoptosis, and the genetic response to oxidative stress (Van den Bout and Divecha, 2009). The PIP5K2A gene has been shown to be associated with schizophrenia in several independent studies (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008; Fedorenko et al., 2013). This is possibly related to a similar direct vs indirect pathway MSN hyperactivity explaining positive psychotic symptoms in schizophrenia as well as dyskinesia in TD. Indeed, drug-naïve first-episode patients experience spontaneous dyskinesia more frequently than healthy controls (Tenback and Van Harten, 2011).\nAlthough the exact regulatory functions of different types of PIP5Ks are far from evident, these enzymes can be expected to also play a role in augmenting or decreasing the excitability of corticostriatal glutamatergic synapses with MSNs during the induction of long-term potentiation and long-term depression (LTD), respectively. 
Long-term potentiation and LTD may play an important role in the mechanism of dyskinesia, as they regulate the readiness of corticostriatal synapses to excitatory (including excitotoxic) effects (Ivanova et al., 2012). In mice, for example, NMDA receptor-mediated compensatory LTD depends upon activation of PIP5Kγ661, which results in AMPA receptor endocytosis (Unoki et al., 2012).\nIn a heteromeric expression system, PIP5K2A has been disclosed to be a novel signaling element in the regulation of the neuronal KCNQ2/KCNQ3 and KCNQ3/KCNQ5 channels, EAAT3 glutamate transporter, and GluA1 function (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). It has been shown that PIP5K2A regulation is disrupted in the schizophrenia-associated mutant (N251S)-PIP5K2A (rs10828317), which may contribute to the pathogenesis of schizophrenia through uncontrolled dopaminergic firing and deranged glutamate metabolism in the brain of schizophrenic patients carrying this mutation (Fedorenko et al., 2008, 2009; Seebohm et al., 2014).\nWe decided to study a possible association between a genetic variant of PIP5K2A encoding—according to in vitro observations (Fedorenko et al., 2008, 2009; Seebohm et al., 2014)—for a less active variant of this enzyme in comparison to 2 nonfunctional genetic variations and the prevalence of TD in a White Siberian patient population suffering from schizophrenia in order to establish a possible role for PIP5K in the pathophysiology of this disorder.", " Patients The work described in this article was carried out in accordance with the most recent version of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and with the Uniform Requirements for manuscripts submitted to biomedical journals. 
After obtaining approval of the study protocol by the institutional bioethics committee, suitable participants were recruited from 3 psychiatric hospitals in the Tomsk, Kemerovo, and Chita areas in Siberia (Russia). All subjects gave informed consent after proper explanation of the study.\nWe included 491 subjects with a clinical diagnosis of schizophrenia according to the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10: F20; N=465; 94.7%) or schizotypal disorder (ICD-10: F21) and excluded subjects with non-Caucasian physical appearance (eg, Mongoloid, Buryats, Tyvans, or Khakassians) or those with organic or neurological disorders. Patients were assessed for the presence or absence of dyskinesia according to the abnormal involuntary movement scale (AIMS) (Loonen and Van Praag, 2007). The AIMS scores were transformed into a binary form (presence or absence of dyskinesia) with the Schooler and Kane (1982) criteria. The presence of TDof and TDlt was established by a cutoff score of ≥2 (mild but definite) on any of the items 1 through 4 and 5 through 7 of AIMS, respectively. The sum of the first 4 items was used as a proxy for the severity of TDof, while the sum of items 5 through 7 was used as a proxy for the severity of TDlt.\nA blood sample was taken for DNA isolation and genotyping. The other inclusion criteria were no addictions, no organic disorders, and a high-quality DNA sample.\n Medication On the day of TD assessment, a complete documentation of the medications utilized was compiled by the raters. For comparison, daily antipsychotic medication dosages were converted into chlorpromazine equivalents (Andreasen et al., 2010). Patients using clozapine who did not suffer from TD were excluded, as clozapine may suppress the symptoms of TD.\n Genotyping DNA extraction was conducted according to standard protocols using phenol-chloroform extraction. Genotyping of PIP5K2A polymorphisms (rs10828317, rs746203, rs8341) was performed on an ABI StepOnePlus with a TaqMan Validated SNP Genotyping Assay (Applied Biosystems).\n Statistics The Hardy-Weinberg equilibrium of genotypic frequencies was tested by the chi-square test. Statistical analyses were performed using SPSS software, release 17, for Windows; P<.05 was considered significant. To correct for multiple testing, we used the False Discovery Rate control algorithm described by Benjamini and Hochberg (1995).\nThe chi-square test and, where necessary, Fisher’s exact test were used for between-group comparisons of genotypic or allelic frequencies. Between-group differences in continuous variables were evaluated using the Student’s t test or 1-way analysis of variance. Comparisons of AIMS scores in different groups were carried out with the Kruskal-Wallis test. The relevant Bonferroni correction for multiple testing was applied.\nLogistic regression analysis was performed to isolate the possible TD-related variables: age, sex, duration of illness, age at onset, and PIP5K2A polymorphism.", "The work described in this article was carried out in accordance with the most recent version of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and with the Uniform Requirements for manuscripts submitted to biomedical journals. After obtaining approval of the study protocol by the institutional bioethics committee, suitable participants were recruited from 3 psychiatric hospitals in the Tomsk, Kemerovo, and Chita areas in Siberia (Russia). All subjects gave informed consent after proper explanation of the study.\nWe included 491 subjects with a clinical diagnosis of schizophrenia according to the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10: F20; N=465; 94.7%) or schizotypal disorder (ICD-10: F21) and excluded subjects with non-Caucasian physical appearance (eg, Mongoloid, Buryats, Tyvans, or Khakassians) or those with organic or neurological disorders. Patients were assessed for the presence or absence of dyskinesia according to the abnormal involuntary movement scale (AIMS) (Loonen and Van Praag, 2007). The AIMS scores were transformed into a binary form (presence or absence of dyskinesia) with the Schooler and Kane (1982) criteria. 
The presence of TDof and TDlt was established by a cutoff score of ≥2 (mild but definite) on any of the items 1 through 4 and 5 through 7 of AIMS, respectively. The sum of the first 4 items was used as a proxy for the severity of TDof, while the sum of items 5 through 7 was used as a proxy for the severity of TDlt.\nA blood sample was taken for DNA isolation and genotyping. The other inclusion criteria were no addictions, no organic disorders, and a high-quality DNA sample.", "On the day of TD assessment, a complete documentation of the medications utilized was compiled by the raters. For comparison, daily antipsychotic medication dosages were converted into chlorpromazine equivalents (Andreasen et al., 2010). Patients using clozapine who did not suffer from TD were excluded, as clozapine may suppress the symptoms of TD.", "DNA extraction was conducted according to standard protocols using phenol-chloroform extraction. Genotyping of PIP5K2A polymorphisms (rs10828317, rs746203, rs8341) was performed on an ABI StepOnePlus with a TaqMan Validated SNP Genotyping Assay (Applied Biosystems).", "The Hardy-Weinberg equilibrium of genotypic frequencies was tested by the chi-square test. Statistical analyses were performed using SPSS software, release 17, for Windows; P<.05 was considered significant. To correct for multiple testing, we used the False Discovery Rate control algorithm described by Benjamini and Hochberg (1995).\nThe chi-square test and, where necessary, Fisher’s exact test were used for between-group comparisons of genotypic or allelic frequencies. Between-group differences in continuous variables were evaluated using the Student’s t test or 1-way analysis of variance. Comparisons of AIMS scores in different groups were carried out with the Kruskal-Wallis test. 
The relevant Bonferroni correction for multiple testing was applied.\nLogistic regression analysis was performed to isolate the possible TD-related variables: age, sex, duration of illness, age at onset, and PIP5K2A polymorphism.", "\nTable 1 shows the clinical and demographic characteristics of patients with and without TD. The genotype distributions of the PIP5K2A (rs10828317, rs746203, rs8341) polymorphisms were in agreement with Hardy-Weinberg equilibrium in this patient group. No significant differences in genotype frequencies of the 2 nonfunctional polymorphisms rs746203 and rs8341 were found between the 2 groups of patients with and without TD (Table 2). However, a significant association was demonstrated between TD and the functional rs10828317 mutation. After correction for multiple testing, the observed differences remained statistically significant (P=.018). CC carriers had a higher risk of TDof (OR=2.55, 95% CI=1.56–4.14, P=.0006), TDlt (OR=1.85, 95% CI=1.1–3.13, P=.04), and TDtot (OR=2.17, 95% CI=1.34–3.51, P=.003). Thus, the frequency of CC carriers is about twice as high in the group of schizophrenic patients with TD as in the group without TD.\nThe Clinical and Demographic Characteristics of Patients with and without TD\nAbbreviation: TD, tardive dyskinesia.\n*Chi-square test; **\nt test.\nDistribution of rs10828317, rs8341, and rs746203 Genotypes and Alleles in Patients with and without TD\nAbbreviation: TD, tardive dyskinesia.\nWe also found an association between genotype and severity of TD. 
Patients who are CC carriers of rs10828317 had a significantly (P<.02, Mann-Whitney with Bonferroni correction) higher mean AIMS TDtot and TDof score than those with the CT or TT genotype (data not shown).\nAnalysis of covariance with age, sex, duration of disease, and chlorpromazine equivalent incorporated as covariates showed that TD is significantly (P<.005) associated with the PIP5K2A (rs10828317) polymorphism (details not shown).\nUsing the binary logistic regression method, we revealed an association between the CC genotype of rs10828317 and TD (P=.005), whereas age (P=.329), sex (P=.956), duration of disease (P=.139), chlorpromazine equivalent (P=.683), and age of onset of the disorder (P=.608) did not contribute significantly to our model.", "In this study, we genotyped patients with and without TD with respect to 3 polymorphisms of the PIP5K2A gene. In Figure 2, the single nucleotide polymorphism positions are represented. Only one of them, rs10828317, is known to be a functional mutation. Replacement of T by C leads to a nonsynonymous amino-acid exchange (asparagine/serine) that increases the distance between 2 antiparallel helices from 3Å to 6Å and thereby interferes with the function of the enzyme (Fedorenko et al., 2008).\nRepresentation of the single nucleotide polymorphism positions of the 3 studied polymorphisms of the PIP5K2A gene (He et al., 2007).\nWe decided to study the PIP5K2A gene because it has repeatedly been shown to be associated with schizophrenia (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008), and the vulnerability to develop TD is related to the likelihood of developing positive symptoms of schizophrenia. In a previous study, we confirmed this association in the presently studied Caucasian Siberian patients with schizophrenia (Fedorenko et al., 2013), but now we have also demonstrated a relationship with the prevalence of TD. Koning et al. 
(2011a, 2011b) have described an association of TD with schizotypy in unaffected siblings of patients with nonaffective psychosis. Moreover, drug-naïve first-episode patients sometimes show spontaneous dyskinesia (Tenback and Van Harten, 2011). Therefore, indirect evidence supports a possible role of genetic factors increasing the vulnerability to develop TD in patients with schizophrenia. Hereditarily decreased activity of PIP5K might be one of them.\nThe present study did not address the mechanisms regulated by PIP5K2A that possibly contribute to the development of TD in carriers of the rs10828317 polymorphism. It is noteworthy, however, that PIP5K2A participates in the regulation of both the glutamate receptor GluA1 (Seebohm et al., 2014) and the glutamate carrier EAAT3 (Fedorenko et al., 2009). Thus, PIP5K2A may both increase the glutamate sensitivity of neurons and terminate glutamate-induced excitation by accelerating clearance of glutamate from the synaptic cleft. It is tempting to speculate that deranged glutamate sensitivity or abundance may foster the development of TD, as it may increase the vulnerability of indirect pathway MSNs to oxidative stress-induced neurotoxicity (Loonen and Ivanova, 2013).\nIn conclusion, the present observations reveal an association of PIP5K2A gene variants with TD and thus suggest a clinical significance of this kinase in the control of movement and/or neuronal survival." ]
[ null, "methods", null, null, null, null, null, null ]
[ "PIP5K2A", "schizophrenia", "tardive dyskinesia", "gene polymorphism", "medium spiny neurons", "neurotoxicity" ]
Introduction: Dyskinesia is a collective name for a variety of involuntary hyperkinetic movements (Loonen and Van Praag, 2007). The movements are irregular, repetitive, and typically include motionless intervals. Dyskinesia may result from long-term treatment with antipsychotic drugs. This involuntary movement syndrome is termed tardive dyskinesia (TD) (Margolese et al., 2005; Kane, 2006). TD is a potentially disabling irreversible movement disorder, which has a prevalence of around 30% in patients chronically exposed to antipsychotics (Kane et al., 1988; Glazer, 2000). It can be subdivided into orofaciolingual (TDof) and limb-truncal (TDlt) dyskinesia (Al Hadithy et al., 2009, 2010). TD is classified as an extrapyramidal movement disorder and may be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit (Figure 1) (Loonen and Ivanova, 2013). The indirect pathway starts with dopamine-D2 receptor expressing medium-sized spiny neurons (MSNs) in the striatum. Activation of this pathway results in inhibition of motor parts of the frontal cerebral cortex, and malfunctioning of this circuit would result in disinhibition and therefore hyperkinesia (Loonen and Ivanova, 2013). The cortical-striatal-thalamic-cortical circuits, including the indirect and direct pathways. Activation of the direct pathway causes hyperkinesia and activation of the indirect pathway causes hypokinesia. ENK, enkephalin; GPe, globus pallidus, external segment; GPi, globus pallidus, internal segment; SNc, substantia nigra, pars compacta; SNr, substantia nigra, pars reticulata; SP/DYN, substance P/dynorphin; STh, subthalamic nucleus; D1, D2, medium-sized spiny neurons (MSNs) with D1 or D2 receptors. Red, excitatory (glutamatergic, dopaminergic); blue, inhibitory (GABAergic, dopaminergic). 
Recently, our group identified an important link between 2 other hyperkinetic extrapyramidal movement disorders: Huntington’s disease (HD) and Levodopa-induced dyskinesia (LID). Patients suffering from LID are more often carriers of the same variants of the GRIN2A gene as those that determine an earlier age of onset of dyskinesia in HD patients (Ivanova et al., 2012). The GRIN2A gene encodes the NR2A subunit of the glutamatergic N-methyl-d-aspartate (NMDA) receptor (Paoletti and Neyton, 2007; Ivanova et al., 2012). In HD, symptoms are linked to NMDA receptor-induced excitotoxicity in indirect pathway MSNs (Estrada Sanchez et al., 2007; Fan and Raymond, 2007; Kumar et al., 2010). Our finding suggests that LID, like HD, is related to an NMDA receptor-related malfunctioning of dopamine-D2 receptor carrying indirect pathway MSNs. According to the neurotoxicity theory of TD, degeneration of indirect pathway MSNs in this disorder is related to neurotoxic effects of the free radicals produced by excessive metabolism of dopamine (Lohr et al., 2003). This theory suggests that antipsychotic drugs block dopamine D2 receptors and thereby trigger a compensatory release of excess dopamine. This excess requires increased metabolism of the spilled neurotransmitter. Increased dopamine metabolism releases high levels of hydrogen peroxide, which results in the production of free radicals, which then cause cell damage. Hence, excessive dopamine metabolism results in the production of more free radicals than the cell can handle. This hypothesis is consistent with the reported association between the incidence of TD and the presence of variants in the gene that encodes manganese superoxide dismutase, an enzyme that scavenges free radicals (Al Hadithy et al., 2010). A reduction in manganese superoxide dismutase activity would increase the likelihood of neurotoxic effects. 
It can be concluded that HD, LID, and TD are related to neurotoxic damage of indirect pathway MSNs and that every factor that increases neurotoxicity may also increase the likelihood of their becoming symptomatic. Phosphatidylinositol 4-phosphate 5-kinase (PIP5K; EC 2.7.1.68) is a neuronal intracellular enzyme that produces phosphatidylinositol (4,5)-bisphosphate, which is hydrolyzed by phospholipase C into the second messengers inositol (1,4,5)-trisphosphate and diacylglycerol (for review, see Van den Bout and Divecha, 2009). Three isoforms of this enzyme have been identified: PIP5Kα, PIP5Kβ, and PIP5Kγ. The PIP5Kα isoform is also known as phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) and localizes to the plasma membrane, the Golgi complex, and the nucleus. PIP5K2A is involved in many different processes, including signal transduction of G-protein-coupled receptors, cell survival by protection against apoptosis, and the genetic response to oxidative stress (Van den Bout and Divecha, 2009). The PIP5K2A gene has been shown to be associated with schizophrenia in several independent studies (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008; Fedorenko et al., 2013). This is possibly related to a similar direct vs indirect pathway MSN hyperactivity explaining positive psychotic symptoms in schizophrenia as well as dyskinesia in TD. Indeed, drug-naïve first-episode patients experience spontaneous dyskinesia more frequently than healthy controls (Tenback and Van Harten, 2011). Although the exact regulatory functions of different types of PIP5Ks are far from evident, these enzymes can be expected to also play a role in augmenting or decreasing the excitability of corticostriatal glutamatergic synapses with MSNs during the induction of long-term potentiation and long-term depression (LTD), respectively. 
Long-term potentiation and LTD may play an important role in the mechanism of dyskinesia, as they regulate the readiness of corticostriatal synapses to excitatory (including excitotoxic) effects (Ivanova et al., 2012). In mice, for example, NMDA receptor-mediated compensatory LTD depends upon activation of PIP5Kγ661, which results in AMPA receptor endocytosis (Unoki et al., 2012). In a heteromeric expression system, PIP5K2A has been disclosed to be a novel signaling element in the regulation of the neuronal KCNQ2/KCNQ3 and KCNQ3/KCNQ5 channels, the EAAT3 glutamate transporter, and GluA1 function (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). It has been shown that PIP5K2A regulation is disrupted in the schizophrenia-associated mutant (N251S)-PIP5K2A (rs10828317), which may contribute to the pathogenesis of schizophrenia through uncontrolled dopaminergic firing and deranged glutamate metabolism in the brain of schizophrenic patients carrying this mutation (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). We decided to study the association between the prevalence of TD and a genetic variant of PIP5K2A that, according to in vitro observations (Fedorenko et al., 2008, 2009; Seebohm et al., 2014), encodes a less active variant of this enzyme, alongside 2 nonfunctional genetic variations, in a White Siberian patient population suffering from schizophrenia, in order to establish a possible role for PIP5K in the pathophysiology of this disorder. Patients and Methods: Patients: The work described in this article was carried out in accordance with the most recent version of the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans and with the Uniform Requirements for manuscripts submitted to biomedical journals. After obtaining approval of the study protocol by the institutional bioethics committee, suitable participants were recruited from 3 psychiatric hospitals in the Tomsk, Kemerovo, and Chita areas in Siberia (Russia). All subjects gave informed consent after proper explanation of the study. We included 491 subjects with a clinical diagnosis of schizophrenia according to the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10: F20; N=465; 94.7%) or schizotypal disorder (ICD-10: F21) and excluded subjects with non-Caucasian physical appearance (eg, Mongoloid, Buryats, Tyvans, or Khakassians) or those with organic or neurological disorders. Patients were assessed for the presence or absence of dyskinesia according to the abnormal involuntary movement scale (AIMS) (Loonen and Van Praag, 2007). The AIMS scores were transformed into a binary form (presence or absence of dyskinesia) with the Schooler and Kane (1982) criteria. The presence of TDof and TDlt was established by a cutoff score of ≥2 (mild but definite) on any of the items 1 through 4 and 5 through 7 of AIMS, respectively. The sum of the first 4 items was used as a proxy for the severity of TDof, while the sum of items 5 through 7 was used as a proxy for the severity of TDlt. A blood sample was taken for DNA isolation and genotyping. The other inclusion criteria were no addictions, no organic disorders, and a high-quality DNA sample. Medication: On the day of TD assessment, a complete documentation of the medications utilized was compiled by the raters. For comparison, daily antipsychotic medication dosages were converted into chlorpromazine equivalents (Andreasen et al., 2010). Patients using clozapine who did not suffer from TD were excluded, as clozapine may suppress the symptoms of TD. Genotyping: DNA extraction was conducted according to standard protocols using phenol-chloroform extraction. Genotyping of PIP5K2A polymorphisms (rs10828317, rs746203, rs8341) was performed on an ABI StepOnePlus with a TaqMan Validated SNP Genotyping Assay (Applied Biosystems). Statistics: The Hardy-Weinberg equilibrium of genotypic frequencies was tested by the chi-square test. Statistical analyses were performed using SPSS software, release 17, for Windows; P<.05 was considered significant. To apply a correction for multiple testing, we used the algorithm for False Discovery Rate control described by Benjamini and Hochberg (1995). The chi-square test and, if necessary, the Fisher’s exact test were used for between-group comparisons of genotypic or allelic frequencies. Between-group differences in continuous variables were evaluated using the Student’s t test or 1-way analysis of variance. Comparisons of AIMS scores in different groups were carried out with the Kruskal-Wallis test. The relevant Bonferroni correction for multiple testing was applied. Logistic regression analysis was performed to isolate the possible TD-related variables: age, sex, duration of illness, age at onset, and PIP5K2A polymorphism. 
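The False Discovery Rate procedure of Benjamini and Hochberg mentioned in the Statistics section can be sketched in a few lines: sort the p-values, find the largest rank k with p_(k) ≤ (k/m)·α, and reject the hypotheses with the k smallest p-values. This is a minimal illustrative sketch (not the authors' SPSS workflow); the function name and example p-values are hypothetical.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a list of booleans marking which hypotheses are rejected
    under Benjamini-Hochberg FDR control at level alpha."""
    m = len(pvals)
    # Indices of p-values sorted in ascending order.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Largest rank k such that p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * alpha:
            k_max = rank
    # Reject the hypotheses belonging to the k_max smallest p-values.
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected
```

Note that, unlike a per-test threshold, a single large p-value early in the sorted list does not block rejections further down: the procedure keeps everything up to the largest qualifying rank.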
Results: Table 1 shows the clinical and demographic characteristics of patients with and without TD. The genotype distributions of the PIP5K2A (rs10828317, rs746203, rs8341) polymorphisms were in agreement with Hardy-Weinberg equilibrium in this patient group. No significant differences in genotype frequencies of the 2 nonfunctional polymorphisms rs746203 and rs8341 were found between the 2 groups of patients with and without TD (Table 2). However, a significant association was demonstrated between TD and the functional rs10828317 mutation. After correction for multiple testing, the observed differences remained statistically significant (P=.018). CC carriers had a higher risk of TDof (OR=2.55, 95% CI=1.56–4.14, P=.0006), TDlt (OR=1.85, 95% CI=1.1–3.13, P=.04), and TDtot (OR=2.17, 95% CI=1.34–3.51, P=.003). Thus, the frequency of CC carriers is about twice as high in the group of schizophrenic patients with TD as in the group without TD. The Clinical and Demographic Characteristics of Patients with and without TD Abbreviation: TD, tardive dyskinesia. *Chi-square test; **t test. Distribution of rs10828317, rs8341, and rs746203 Genotypes and Alleles in Patients with and without TD Abbreviation: TD, tardive dyskinesia. We also found an association between genotype and severity of TD. Patients who are CC carriers of rs10828317 had a significantly (P<.02, Mann-Whitney with Bonferroni correction) higher mean AIMS TDtot and TDof score than those with the CT or TT genotype (data not shown). Analysis of covariance with age, sex, duration of disease, and chlorpromazine equivalent incorporated as covariates showed that TD is significantly (P<.005) associated with the PIP5K2A (rs10828317) polymorphism (details not shown). 
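Odds ratios with Woolf-type 95% confidence intervals, as reported above for CC carriers, can be derived from a 2×2 genotype-by-TD contingency table. The sketch below is illustrative only: the counts in the usage comment are hypothetical and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI (Woolf's log method) for a
    2x2 table: a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR).
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, for illustration only:
# 20 CC carriers with TD, 10 CC without, 30 non-CC with, 40 non-CC without.
# or_, lo, hi = odds_ratio_ci(20, 10, 30, 40)
```

A CI whose lower bound exceeds 1 corresponds to a nominally significant positive association, matching how the intervals in the text are read.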
Using the binary logistic regression method, we revealed an association between the CC genotype of rs10828317 and TD (P=.005), whereas age (P=.329), sex (P=.956), duration of disease (P=.139), chlorpromazine equivalent (P=.683), and age of onset of the disorder (P=.608) did not contribute significantly to our model. Discussion: In this study, we genotyped patients with and without TD with respect to 3 polymorphisms of the PIP5K2A gene. In Figure 2, the single nucleotide polymorphism positions are represented. Only one of them, rs10828317, is known to be a functional mutation. Replacement of T by C leads to a nonsynonymous amino-acid exchange (asparagine/serine) that increases the distance between 2 antiparallel helices from 3Å to 6Å and thereby interferes with the function of the enzyme (Fedorenko et al., 2008). Representation of the single nucleotide polymorphism positions of the 3 studied polymorphisms of the PIP5K2A gene (He et al., 2007). We decided to study the PIP5K2A gene because it has repeatedly been shown to be associated with schizophrenia (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008), and the vulnerability to develop TD is related to the likelihood of developing positive symptoms of schizophrenia. In a previous study, we confirmed this association in the presently studied Caucasian Siberian patients with schizophrenia (Fedorenko et al., 2013), but now we have also demonstrated a relationship with the prevalence of TD. Koning et al. (2011a, 2011b) have described an association of TD with schizotypy in unaffected siblings of patients with nonaffective psychosis. Moreover, drug-naïve first-episode patients sometimes show spontaneous dyskinesia (Tenback and Van Harten, 2011). Therefore, indirect evidence supports a possible role of genetic factors increasing the vulnerability to develop TD in patients with schizophrenia. Hereditarily decreased activity of PIP5K might be one of them. 
The present study did not address the mechanisms regulated by PIP5K2A that possibly contribute to the development of TD in carriers of the rs10828317 polymorphism. It is noteworthy, however, that PIP5K2A participates in the regulation of both the glutamate receptor GluA1 (Seebohm et al., 2014) and the glutamate carrier EAAT3 (Fedorenko et al., 2009). Thus, PIP5K2A may both increase the glutamate sensitivity of neurons and terminate glutamate-induced excitation by accelerating clearance of glutamate from the synaptic cleft. It is tempting to speculate that deranged glutamate sensitivity or abundance may foster the development of TD, as it may increase the vulnerability of indirect pathway MSNs to oxidative stress-induced neurotoxicity (Loonen and Ivanova, 2013). In conclusion, the present observations reveal an association of PIP5K2A gene variants with TD and thus suggest a clinical significance of this kinase in the control of movement and/or neuronal survival.
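The Hardy-Weinberg check reported for the genotype distributions reduces to a chi-square goodness-of-fit test: estimate the allele frequency from the observed genotype counts, form the expected counts n·p², 2npq, n·q², and compare. A minimal sketch, with hypothetical counts (not the study's data):

```python
def hwe_chisq(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit statistic (1 df) comparing observed
    genotype counts to Hardy-Weinberg expected counts."""
    n = n_AA + n_Aa + n_aa
    # Allele frequency of A estimated from the genotype counts.
    p = (2 * n_AA + n_Aa) / (2 * n)
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A statistic near 0 (well below the 1-df critical value of 3.84 at P=.05) indicates agreement with Hardy-Weinberg equilibrium, as found for all 3 polymorphisms here.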
Background: Tardive dyskinesia is a disorder characterized by involuntary muscle movements that occur as a complication of long-term treatment with antipsychotic drugs. It has been suggested to be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit, which may be caused by oxidative stress-induced neurotoxicity. Methods: The purpose of our study was to investigate the possible association between phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) function and tardive dyskinesia in 491 Caucasian patients with schizophrenia from 3 different psychiatric institutes in West Siberia. The Abnormal Involuntary Movement Scale was used to assess tardive dyskinesia. Individuals were genotyped for 3 single nucleotide polymorphisms in PIP5K2A gene: rs10828317, rs746203, and rs8341. Results: A significant association was established between the functional mutation N251S-polymorphism of the PIP5K2A gene (rs10828317) and tardive dyskinesia, while the other 2 examined nonfunctional single nucleotide polymorphisms were not related. Conclusions: We conclude from this association that PIP5K2A is possibly involved in a mechanism protecting against tardive dyskinesia-inducing neurotoxicity. This corresponds to our hypothesis that tardive dyskinesia is related to neurotoxicity at striatal indirect pathway medium-sized spiny neurons.
Introduction: Dyskinesia is a collective name for a variety of involuntary hyperkinetic movements (Loonen and Van Praag, 2007). The movements are irregular, repetitive, and typically include motionless intervals. Dyskinesia may result from long-term treatment with antipsychotic drugs. This involuntary movement syndrome is termed tardive dyskinesia (TD) (Margolese et al., 2005; Kane, 2006). TD is a potentially disabling irreversible movement disorder, which has a prevalence of around 30% in patients chronically exposed to antipsychotics (Kane et al., 1988; Glazer, 2000). It can be subdivided into orofaciolingual (TDof) and limb-truncal (TDlt) dyskinesia (Al Hadithy et al., 2009, 2010). TD is classified as an extrapyramidal movement disorder and may be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit (Figure 1) (Loonen and Ivanova, 2013). The indirect pathway starts with dopamine-D2 receptor expressing medium-sized spiny neurons (MSNs) in the striatum. Activation of this pathway results in inhibition of motor parts of the frontal cerebral cortex, and malfunctioning of this circuit would result in disinhibition and therefore hyperkinesia (Loonen and Ivanova, 2013). The cortical-striatal-thalamic-cortical circuits, including the indirect and direct pathways. Activation of the direct pathway causes hyperkinesia and activation of the indirect pathway causes hypokinesia. ENK, enkephalin; GPe, globus pallidus, external segment; GPi, globus pallidus, internal segment; SNc, substantia nigra, pars compacta; SNr, substantia nigra, pars reticulata; SP/DYN, substance P/dynorphin; STh, subthalamic nucleus; D1, D2, medium-sized spiny neurons (MSNs) with D1 or D2 receptors. Red, excitatory (glutamatergic, dopaminergic); blue, inhibitory (GABAergic, dopaminergic). 
Recently, our group identified an important link between 2 other hyperkinetic extrapyramidal movement disorders: Huntington’s disease (HD) and Levodopa-induced dyskinesia (LID). Patients suffering from LID are more often carriers of the same variants of the GRIN2A gene as are determining an earlier age of onset of dyskinesia in HD patients (Ivanova et al., 2012). The GRIN2A gene encodes for the NR2A subunit of the glutamatergic N-methyl-d-aspartate (NMDA) receptor (Paoletti and Neyton, 2007; Ivanova et al., 2012). In HD, symptoms are linked to NMDA receptor-induced excitotoxicity in indirect pathway MSNs (Estrada Sanchez et al., 2007; Fan and Raymond, 2007; Kumar et al., 2010). Our finding suggests that LID is related to a similar NMDA receptor-related malfunctioning of dopamine-D2 receptor carrying indirect pathway MSNs as HD. According to the neurotoxicity theory of TD, degeneration of indirect pathway MSNs in this disorder is related to neurotoxic effects of the free radicals produced by excessive metabolism of dopamine (Lohr et al., 2003). This theory suggests that antipsychotic drugs block dopamine D2 receptors and therefore trigger a compensatory release of excess dopamine. This excess requires increased metabolism of the spilled neurotransmitter. Increased dopamine metabolism releases high levels of hydrogen peroxide, which results in the production of free radicals, which then cause cell damage. Hence, excessive dopamine metabolism results in the production of more free radicals than the cell can handle. This hypothesis is consistent with the reported association between the incidence of TD and the presence of variants in the gene that encodes manganese superoxidedismutase, an enzyme that scavenges free radicals (Al Hadithy et al., 2010). A reduction in manganese superoxidedismutase activity would increase the likelihood of neurotoxic effects. 
It can be concluded that HD, LID, and TD are related to neurotoxic damage of indirect pathway MSNs and that any factor that increases neurotoxicity may also increase the likelihood of their becoming symptomatic. Phosphatidylinositol 4-phosphate 5-kinase (PIP5K; EC 2.7.1.68) is a neuronal intracellular enzyme that produces phosphatidylinositol (4,5)-bisphosphate, which is cleaved by phospholipase C into the second messengers inositol (1,4,5)-trisphosphate and diacylglycerol (for review, see Van den Bout and Divecha, 2009). Three isoforms of this enzyme have been identified: PIP5Kα, PIP5Kβ, and PIP5Kγ. The PIP5Kα isoform is also known as phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) and localizes to the plasma membrane, the Golgi complex, and the nucleus. PIP5K2A is involved in many different processes, including signal transduction of G-protein-coupled receptors, cell survival by protection against apoptosis, and the genetic response to oxidative stress (Van den Bout and Divecha, 2009). The PIP5K2A gene has been shown to be associated with schizophrenia in several independent studies (Schwab et al., 2006; Bakker et al., 2007; He et al., 2007; Saggers-Gray et al., 2008; Fedorenko et al., 2013). This is possibly related to a similar imbalance between direct and indirect pathway MSN activity underlying positive psychotic symptoms in schizophrenia as well as dyskinesia in TD. Indeed, drug-naïve first-episode patients experience spontaneous dyskinesia more frequently than healthy controls (Tenback and Van Harten, 2011). Although the exact regulatory functions of the different types of PIP5Ks are far from clear, these enzymes can be expected to also play a role in augmenting or decreasing the excitability of corticostriatal glutamatergic synapses with MSNs during the induction of long-term potentiation and long-term depression (LTD), respectively. 
Long-term potentiation and LTD may play an important role in the mechanism of dyskinesia, as they regulate the responsiveness of corticostriatal synapses to excitatory (including excitotoxic) input (Ivanova et al., 2012). In mice, for example, NMDA receptor-mediated compensatory LTD depends upon activation of PIP5Kγ661, which results in AMPA receptor endocytosis (Unoki et al., 2012). In a heteromeric expression system, PIP5K2A has been shown to be a novel signaling element in the regulation of the neuronal KCNQ2/KCNQ3 and KCNQ3/KCNQ5 channels, the EAAT3 glutamate transporter, and GluA1 function (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). PIP5K2A regulation has been shown to be disrupted in the schizophrenia-associated mutant (N251S)-PIP5K2A (rs10828317), which may contribute to the pathogenesis of schizophrenia through uncontrolled dopaminergic firing and deranged glutamate metabolism in the brains of patients carrying this mutation (Fedorenko et al., 2008, 2009; Seebohm et al., 2014). To establish a possible role for PIP5K in the pathophysiology of TD, we studied the association between the prevalence of TD in a White Siberian patient population suffering from schizophrenia and a genetic variant of PIP5K2A that, according to in vitro observations (Fedorenko et al., 2008, 2009; Seebohm et al., 2014), encodes a less active form of the enzyme, comparing it with 2 nonfunctional genetic variants. Discussion: None.
Background: Tardive dyskinesia is a disorder characterized by involuntary muscle movements that occur as a complication of long-term treatment with antipsychotic drugs. It has been suggested to be related to a malfunctioning of the indirect pathway of the motor part of the cortical-striatal-thalamic-cortical circuit, which may be caused by oxidative stress-induced neurotoxicity. Methods: The purpose of our study was to investigate the possible association between phosphatidylinositol-4-phosphate-5-kinase type IIa (PIP5K2A) function and tardive dyskinesia in 491 Caucasian patients with schizophrenia from 3 different psychiatric institutes in West Siberia. The Abnormal Involuntary Movement Scale was used to assess tardive dyskinesia. Individuals were genotyped for 3 single nucleotide polymorphisms in PIP5K2A gene: rs10828317, rs746203, and rs8341. Results: A significant association was established between the functional mutation N251S-polymorphism of the PIP5K2A gene (rs10828317) and tardive dyskinesia, while the other 2 examined nonfunctional single nucleotide polymorphisms were not related. Conclusions: We conclude from this association that PIP5K2A is possibly involved in a mechanism protecting against tardive dyskinesia-inducing neurotoxicity. This corresponds to our hypothesis that tardive dyskinesia is related to neurotoxicity at striatal indirect pathway medium-sized spiny neurons.
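The genotype–phenotype association described above can be illustrated with a minimal, hypothetical sketch: a chi-square test on a 2x3 contingency table of TD status by rs10828317 genotype. All counts and genotype labels below are invented purely for illustration; the study's actual tests and counts are reported in the full text.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table: TD status (rows) by rs10828317
# genotype (columns). All counts are invented for illustration only.
table = [
    [40, 90, 60],   # patients with tardive dyskinesia
    [80, 140, 81],  # patients without tardive dyskinesia
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

A significant p-value in such a test would indicate that genotype frequencies differ between patients with and without TD, which is the kind of evidence the abstract reports for rs10828317.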
4,065
230
[ 1327, 326, 62, 43, 173, 396, 482 ]
8
[ "td", "patients", "pip5k2a", "dyskinesia", "test", "aims", "related", "2007", "schizophrenia", "rs10828317" ]
[ "involuntary hyperkinetic movements", "td tardive dyskinesia", "dyskinesia td drug", "onset dyskinesia", "schizophrenia dyskinesia td" ]
null
[CONTENT] PIP5K2A | schizophrenia | tardive dyskinesia | gene polymorphism | medium spiny neurons | neurotoxicity [SUMMARY]
null
[CONTENT] Adult | Antipsychotic Agents | Dyskinesia, Drug-Induced | Female | Gene Frequency | Genetic Association Studies | Genetic Predisposition to Disease | Humans | Male | Middle Aged | Movement Disorders | Phenotype | Phosphotransferases (Alcohol Group Acceptor) | Polymorphism, Single Nucleotide | Protective Factors | Risk Assessment | Risk Factors | Schizophrenia | Siberia | Young Adult [SUMMARY]
null
[CONTENT] involuntary hyperkinetic movements | td tardive dyskinesia | dyskinesia td drug | onset dyskinesia | schizophrenia dyskinesia td [SUMMARY]
null
[CONTENT] td | patients | pip5k2a | dyskinesia | test | aims | related | 2007 | schizophrenia | rs10828317 [SUMMARY]
null
[CONTENT] pathway | indirect | dopamine | indirect pathway | msns | receptor | dyskinesia | 2009 | hd | metabolism [SUMMARY]
[CONTENT] test | aims | genotyping | subjects | items | performed | presence | dna | td | 10 [SUMMARY]
null
[CONTENT] glutamate | gene | td | pip5k2a | pip5k2a gene | develop | vulnerability | study | schizophrenia | fedorenko [SUMMARY]
[CONTENT] td | test | patients | pip5k2a | genotyping | extraction | performed | clozapine | rs10828317 | aims [SUMMARY]
[CONTENT] dyskinesia ||| [SUMMARY]
[CONTENT] dyskinesia | 491 | Caucasian | 3 | West Siberia ||| The Abnormal Involuntary Movement Scale | dyskinesia ||| 3 | PIP5K2A | rs746203 | rs8341 [SUMMARY]
null
[CONTENT] dyskinesia ||| dyskinesia [SUMMARY]
[CONTENT] ||| ||| dyskinesia | 491 | Caucasian | 3 | West Siberia ||| The Abnormal Involuntary Movement Scale | dyskinesia ||| 3 | PIP5K2A | rs746203 | rs8341 ||| ||| dyskinesia | 2 ||| dyskinesia ||| dyskinesia [SUMMARY]
Q fever: Evidence of a massive yet undetected cross-border outbreak, with ongoing risk of extra mortality, in a Dutch-German border region.
32027783
Following outbreaks in other parts of the Netherlands, the Dutch border region of South Limburg experienced a large-scale outbreak of human Q fever related to a single dairy goat farm in 2009, with surprisingly few cases reported from neighbouring German counties. Late chronic Q fever, with recent spikes of newly detected cases, is an ongoing public health concern in the Netherlands. We aimed to assess the scope and scale of any undetected cross-border transmission to neighbouring German counties, where individuals unknowingly exposed may carry extra risk of overlooked diagnosis.
BACKGROUND
(A) Seroprevalence rates in the Dutch area were estimated fitting an exponential gradient to the geographical distribution of notified acute human Q fever cases, using seroprevalence in a sample of farm township inhabitants as baseline. (B) Seroprevalence rates in 122 neighbouring German postcode areas were estimated from a sample of blood donors living in these areas and attending the regional blood donation centre in January/February 2010 (n = 3,460). (C) Using multivariate linear regression, including goat and sheep densities, veterinary Q fever notifications and blood donor sampling densities as covariates, we assessed whether seroprevalence rates across the entire border region were associated with distance from the farm.
METHODS
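Step (A) rests on an exponential attack-rate gradient; the fitted coefficients reported in the paper's full text (469.074733 · exp(−0.321415 · distance in km)) can be wrapped in a small helper for illustration. The function name and the sample distances below are ours:

```python
import math

def attack_rate_per_100k(distance_km: float) -> float:
    """Exponential attack-rate gradient fitted to notified acute Q fever
    cases, using the coefficients reported in the paper."""
    return 469.074733 * math.exp(-0.321415 * distance_km)

# The predicted rate roughly halves every ~2.2 km (ln 2 / 0.321415).
for d in (0, 5, 10, 20, 40):
    print(f"{d:>2} km: {attack_rate_per_100k(d):8.2f} per 100,000")
```

Anchored to the baseline seroprevalence observed near the farm, this gradient is what allows seroprevalence to be estimated at any distance up to the border.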
(A) Seroprevalence in the outbreak farm's township was 16.1%. Overall seroprevalence in the Dutch area was 3.6%. (B) Overall seroprevalence in the German area was 0.9%. Estimated mean seroprevalence rates (per 100,000 population) declined with increasing distance from the outbreak farm (0-19 km = 2,302, 20-39 km = 1,122, 40-59 km = 432 and ≥60 km = 0). Decline was linear in multivariate regression using log-transformed seroprevalence rates (0-19 km = 2.9 [95% confidence interval (CI) = 2.6 to 3.2], 20-39 km = 1.9 [95% CI = 1.0 to 2.8], 40-59 km = 0.6 [95% CI = -0.2 to 1.3] and ≥60 km = 0.0 [95% CI = -0.3 to 0.3]).
RESULTS
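The reported decline of log-transformed seroprevalence across 20-km distance classes can be reproduced in outline with a dummy-coded linear regression on synthetic data. Everything below is invented for illustration (the study also adjusted for livestock densities, veterinary notifications and sampling densities, omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 120  # synthetic postcode areas, 30 per 20-km distance class

dist_class = np.repeat([0, 1, 2, 3], n // 4)  # 0-19, 20-39, 40-59, >=60 km

# Dummy-code the distance classes, with >=60 km as the reference category
D = np.column_stack([(dist_class == k).astype(float) for k in (0, 1, 2)])
X = np.column_stack([np.ones(n), D])

# Simulate log-transformed seroprevalence with coefficients near those reported
y = X @ np.array([0.0, 2.9, 1.9, 0.6]) + rng.normal(0.0, 0.2, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("distance-class estimates:", beta[1:].round(1))
```

The recovered dummy coefficients decline monotonically with distance, mirroring the pattern of the reported point estimates.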
Our findings were suggestive of widespread cross-border transmission, with thousands of undetected infections, arguing for intensified cross-border collaboration and surveillance and screening of individuals susceptible to chronic Q fever in the affected area.
CONCLUSIONS
[ "Animals", "Antibodies, Bacterial", "Blood Specimen Collection", "Communicable Diseases, Imported", "Coxiella burnetii", "Diagnostic Tests, Routine", "Disease Outbreaks", "Germany", "Humans", "Immunoglobulin G", "Immunoglobulin M", "Linear Models", "Mass Screening", "Netherlands", "Q Fever", "Real-Time Polymerase Chain Reaction", "Seroepidemiologic Studies", "Sheep" ]
7383856
INTRODUCTION
Following outbreaks in other parts of the Netherlands, the Dutch border region of South Limburg experienced a massive single‐point source outbreak of Q fever related to a local dairy goat farm, counting 253 notified cases of acute human Q fever and an estimated 9,000 undetected infections across the entire region, in 2009 (Hackert et al., 2012; van Leuken et al., 2013). Q fever is a bacterial zoonosis caused by Coxiella burnetii, transmitted to humans from infected ruminants, primarily by the airborne route (Cutler, Bouzid, & Cutler, 2007; Maurin & Raoult, 1999; Parker, Barralet, & Bell, 2006). In acute disease, flulike illness is usually prominent. Most Q fever infections are mild or asymptomatic and self‐limiting (Hackert et al., 2012; van der Hoek et al., 2012). However, a small percentage of infected individuals may develop chronic Q fever, which often goes unnoticed for years after infection but is usually fatal if left untreated. In addition, a substantial proportion of infected individuals may suffer symptoms referred to as Q fever fatigue syndrome, which may persist for years, with major health‐related consequences (Morroy et al., 2016). In South Limburg, the distribution of notified cases of acute human Q fever followed a west–east gradient of decreasing incidence from the source, following westerly winds predominant at the time of the outbreak, towards and up to the Dutch–German border (Hackert et al., 2012). In the same year, only six cases of acute Q fever (notifiable under German law) were reported from the entire German federal state of North Rhine‐Westphalia, none of whom was a resident of the two districts bordering South Limburg (Heinsberg and Aachen). In the five years preceding the outbreak (2004–2008), a total of 42 cases were reported from North Rhine‐Westphalia, just one of whom lived in Aachen (2006). 
In 2010, the year following the outbreak, North Rhine‐Westphalia counted a total of 14 cases, only 2 of whom were from Aachen (Robert Koch Institute, 2016). Whereas these data suggest that cross‐border transmission from South Limburg to neighbouring German counties was negligible, it is rather unlikely that airborne transmission stopped short of the Dutch–German border. A Belgian study suggests that a limited degree of transmission took place in westerly direction across the Dutch–Belgian border, but does not quantify the extent of transmission (Naesens et al., 2012). Recent spikes of newly detected cases of late chronic Q fever in the Netherlands, with a high burden of extra mortality, show that the Dutch epidemic is far from over and should be seen as an ongoing public health concern warranting unabated alertness (Radboud University Medical Center, 2018). This may be even more the case in a population unknowingly exposed to Q fever, where the risk of delayed or overlooked diagnosis of chronic Q fever may be even higher. We therefore aimed to assess the scope and scale of any undetected cross‐border transmission to neighbouring German counties associated with the regional outbreak in South Limburg.
null
null
RESULTS
Overall seroprevalence in the Dutch–German cross‐border region: Our smoothed map shows a large area of adjacent postcodes affected by human Q fever surrounding the outbreak farm, spreading over distances of more than 40 km into the German border region. Baseline seroprevalence in the outbreak farm township, adjusted for age and gender, was 16.1%. Calculated mean seroprevalence for the Dutch study area, derived from the observed gradient of attack rates, was 3.6%, close to the 4.4% observed in the age‐ and gender‐adjusted general population sample from the Dutch study area dating from 2010. The difference between mean calculated and mean observed seroprevalence was statistically not significant by t test (p = .26). Observed seroprevalence in blood donors from the German study area was 0.9% (31/3,460) overall, extrapolating to a total of 11,308 infections in those postcode areas with at least one seropositive blood donor. Among the positive blood donors, 61.3% (19/31) had a serological profile (anti‐Coxiella phase II IgM) indicative of a fresh or recent infection and/or were Coxiella DNA positive: 13 donors had a solitary anti‐Coxiella phase II IgM (one of whom was also qPCR positive) and six were positive for both phase II IgM and IgG, while 15 had a solitary phase II IgG response. 
Seroprevalence over 20‐km distance classes from the outbreak farm: Seroprevalence rates across radial 20‐km distance classes declined with increasing distance from the outbreak farm (Table 2 and Figure 3), comparable with log‐transformed estimates in our multivariate linear regression model. Multivariate analysis was performed twice, with and without inclusion of a high‐seroprevalence postcode on the German side, located at a distance of 54 km from the outbreak farm and figuring as a dark‐coloured ‘hotspot’ in our smoothed map. This postcode counted six donors, one of whom was seropositive (seroprevalence = 16.7%). Given that seroprevalence in surrounding postcodes was low (i.e. 1.6% for the entire district including 18 postcode areas and 257 donors), and there were no reports of Q fever in livestock from the district during the study period, high seroprevalence in this postcode was unlikely to reflect locally acquired infection. This ‘outlier’ had limited impact on our findings. 
Table 2: Blood donor test results and Q fever seroprevalence rates in postcode area populations in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands. Based on calculated seroprevalence in the Dutch area, derived from our exponential model and baseline sample from outbreak farm township general population, and observed seroprevalence in German blood donors. Adjusted for goat and sheep density, veterinary Q fever notifications and sampling density per 100,000 population, with 95% confidence intervals derived from bootstrapping (in brackets). 
Figure 3: Log‐transformed seroprevalence in radial 20‐km distance classes from outbreak farm, point estimates based on multivariate linear regression including residential distance from outbreak farm, livestock densities (sheep and goats) and screening rates as predictors, with 95% confidence intervals derived from bootstrapping, including the ‘outlier’ postcode in the German study area. 
Predicted versus observed seroprevalence in the German study area: Mean predicted log‐transformed seroprevalence for all postcode areas included in our German study area, based on the gradient of attack rates of acute Q fever observed in the Dutch study area, was 0.27 (untransformed 27 per 100,000), while mean observed log‐transformed seroprevalence was 0.43 (untransformed 262 per 100,000). This difference between predicted and observed log‐transformed seroprevalence estimates was statistically not significant by t test (p = .08). Correlation between predicted and observed log‐transformed values was 0.48 using Pearson correlation (p < .001) and 0.49 using Spearman's rho (p < .001).
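The extrapolation from blood-donor seroprevalence to population-level infection counts is simple proportional scaling per postcode area. The sketch below uses invented postcode data (the real computation used GIS-derived postcode populations and included only areas with at least one seropositive donor):

```python
# Invented postcode data, purely for illustration.
postcodes = {
    "52062": {"donors": 40, "positive": 1, "population": 18_000},
    "52066": {"donors": 25, "positive": 0, "population": 15_000},
    "52134": {"donors": 60, "positive": 2, "population": 30_000},
}

def extrapolated_infections(areas):
    """Sum estimated infections over postcode areas with >= 1 seropositive
    donor: donor seroprevalence scaled to the area's population."""
    total = 0.0
    for data in areas.values():
        if data["positive"] > 0:
            total += data["positive"] / data["donors"] * data["population"]
    return total

print(round(extrapolated_infections(postcodes)))
```

Applied to the real data (31/3,460 positive donors), this kind of scaling yields the reported estimate of 11,308 infections in the German study area.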
null
null
[ "INTRODUCTION", "Study area, study population and study period", "Epidemiological investigation", "Dutch study area", "German study area", "Combined Dutch–German cross‐border study area", "Laboratory investigation", "Veterinary data", "Statistical analysis", "Overall seroprevalence in the Dutch–German cross‐border region", "Seroprevalence over 20‐km distance classes from the outbreak farm", "Predicted versus observed seroprevalence in the German study area", "ETHICAL APPROVAL" ]
[ "Following outbreaks in other parts of the Netherlands, the Dutch border region of South Limburg experienced a massive single‐point source outbreak of Q fever related to a local dairy goat farm, counting 253 notified cases of acute human Q fever and an estimated 9,000 undetected infections across the entire region, in 2009 (Hackert et al., 2012; van Leuken et al., 2013). Q fever is a bacterial zoonosis caused by Coxiella burnetii, transmitted to humans from infected ruminants, primarily by the airborne route (Cutler, Bouzid, & Cutler, 2007; Maurin & Raoult, 1999; Parker, Barralet, & Bell, 2006). In acute disease, flulike illness is usually prominent. Most Q fever infections are mild or asymptomatic and self‐limiting (Hackert et al., 2012; van der Hoek et al., 2012). However, a small percentage of infected individuals may develop chronic Q fever, which often goes unnoticed for years after infection but is usually fatal if left untreated. In addition, a substantial proportion of infected individuals may suffer symptoms referred to as Q fever fatigue syndrome, which may persist for years, with major health‐related consequences (Morroy et al., 2016). In South Limburg, the distribution of notified cases of acute human Q fever followed a west–east gradient of decreasing incidence from the source, following westerly winds predominant at the time of the outbreak, towards and up to the Dutch–German border (Hackert et al., 2012). In the same year, only six cases of acute Q fever (notifiable under German law) were reported from the entire German federal state of North Rhine‐Westphalia, none of whom was a resident of the two districts bordering South Limburg (Heinsberg and Aachen). In the five years preceding the outbreak (2004–2008), a total of 42 cases were reported from North Rhine‐Westphalia, just one of whom lived in Aachen (2006). 
In 2010, the year following the outbreak, North Rhine‐Westphalia counted a total of 14 cases, only 2 of whom were from Aachen (Robert Koch Institute, 2016). Whereas these data suggest that cross‐border transmission from South Limburg to neighbouring German counties was negligible, it is rather unlikely that airborne transmission stopped short of the Dutch–German border. A Belgian study suggests that a limited degree of transmission took place in westerly direction across the Dutch–Belgian border, but does not quantify the extent of transmission (Naesens et al., 2012). Recent spikes of newly detected cases of late chronic Q fever in the Netherlands, with a high burden of extra mortality, show that the Dutch epidemic is far from over and should be seen as an ongoing public health concern with reason for unabated alertness (Radboud University Medical Center, 2018). This may be even more the case in a population unknowingly exposed to Q fever, where the risk of delayed or overlooked diagnosis of chronic Q fever may be even higher. We therefore aimed to assess the scope and scale of any undetected cross‐border transmission to neighbouring German counties associated with the regional outbreak in South Limburg.", "The Meuse–Rhine Euroregion provided the administrative background for our study (Wikipedia contributors, 2017). Geographically, it covers an area of about 11,000 km2 around the city corridor of Aachen (North Rhine‐Westphalia, Germany), Maastricht (South Limburg, Netherlands), Hasselt (Limburg, Belgium) and Liège (Liège, Belgium).\nThe Dutch study area was defined by the approximate catchment area of a large regional general hospital (346 km2, 12 municipalities and 308,410 inhabitants).\nThe adjacent German study area was defined by the 122 postcodes of individual residents from North Rhine‐Westphalia who donated blood at the RWTH Aachen University Hospital Blood Donation Centre in the first two months of 2010. 
For a summary of statistics regarding study area and study population, see Table 1. Of the 3,460 included German blood donors, the majority (n = 3,083, 89.1%) lived in postcode areas whose centroid was located within 40 km from the outbreak farm. The study was conducted from February 2009 (when first abortions were registered on the outbreak farm) to February 2010, when (a) no more incident human cases of acute Q fever were reported in the Dutch study area; (b) culling of pregnant dairy goats to prevent further transmission during the upcoming 2010 lambing season had been finalized; and (c) inclusion of German blood donors (January and February 2010) ended.\nStudy area characteristics in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands\nEastern South Limburg, defined by catchment area of local general hospital.\nCatchment area of RWTH Aachen University Hospital Blood Donation Centre, including 122 postcodes counting at least one resident visiting the centre in January/February 2010.", "To assess seroprevalence of Q fever in the Dutch, German and combined cross‐border study area in relation to distance from the Dutch outbreak farm, we used various population samples and methods of analysis. An outline is given in Figure 1.\nOutline of population samples and study design by study area (Dutch study area, German study area, combined cross‐border study area)", "The outbreak affected a population largely naive to Q fever, according to a serological survey from the year predating the outbreak (2008). Human Q fever cases notified to the PHS South Limburg in 2009/10 (n = 253) were scattered downwind from the outbreak farm, following a gradient of declining attack rates in easterly direction from the source up to the Dutch–German border (Hackert et al., 2012). 
Using SPSS's curve fitting tool, we derived a curve of best exponential fit from aforementioned gradient, corresponding to the following model: 469.074733 * EXP (−0.321415 * [distance to outbreak farm in km]). Q fever seroprevalence in a convenience sample of adult residents from the outbreak farm's township (n = 120, aged ≥ 18 years, mean household distance from outbreak farm = 2.7 km) served as a baseline estimate for the calculation of seroprevalence rates across the entire distance from the outbreak farm up to the Dutch–German border according to our exponential model (Hackert et al., 2015). Underlying was the assumption that infections followed the same geographical gradient as notified cases. Seroprevalence in this township sample was age‐ and sex‐matched within 10‐year age strata according to the demographic distribution in our sample of German blood donors. To assess validity of our exponential model, we compared the calculated cumulative seroprevalence for the entire Dutch area with seroprevalence observed in a post‐outbreak convenience sample of adults from the same area (Hackert et al., 2012).", "A total of 3,460 out of 3,493 blood donors (99.1%), who visited the Blood Donation Centre in January/February 2010, were residents of North Rhine‐Westphalia, while n = 39 were residents of remote German federal states and consequently excluded from our study. Seroprevalence was estimated using retention sample sera from the 3,460 included blood donors. The detection of IgG or IgM phase II antibodies by ELISA or of C. burnetii DNA by qPCR identified positive sera. Minimum age for blood donations in Germany is 18 years. 
Apart from standard exclusion criteria relating to donor blood safety, additional criteria were applied during the study period to exclude donors with increased risk of recent or acute Q fever infection (contact with livestock, such as cattle, sheep, goats, rabbits and ducks, or their excrements, over the preceding 5 weeks; living in the vicinity of a livestock holding; and signs or symptoms of fever, sweats, nausea, vomiting, diarrhoea, malaise or headaches in the five weeks preceding donation). Cases of acute Q fever reported to the German public health authorities were retrieved from the publicly accessible notification database (SurvStat) of the Robert Koch Institute (Robert Koch Institute, 2016). For reasons of privacy protection, demographic blood donor information was limited to residential postcode and 10‐year age group. Based on postcode centroids as a proxy for residential address, GIS was used to map the geographical distribution of seropositive and seronegative donors, to determine seroprevalence in postcode areas and to extrapolate the seroprevalence to the general population in these areas.", "A regression model, including distance zones of 20 km as dummy variables, adjusting for goat and sheep densities, veterinary Q fever notifications and sampling rates (i.e. number of individuals tested per 100,000 population in each postcode area), was used to assess the geographical relationship between seroprevalence (assumed to represent incidence rate of infection) and exposure dose (approximated by residential distance from the outbreak farm). In addition, we applied our fitted model, derived from the geographical distribution of attack rates in the Dutch study area, to predict seroprevalence rates in the German study area by distance from the outbreak farm, and tested the correlation between predicted and observed rates. 
We used Spatial Empirical Bayes Smoothing (where estimates per postcode are weighted against estimates in neighbouring areas sharing a common edge or border) to visualize our data by creating a smoothed map of seroprevalence rates in the combined Dutch–German cross‐border region (Figure 2). Computations were carried out in OpenGeoDa 1.2.0 (Anselin, 2005).\nBayesian‐smoothed extrapolated Q fever seroprevalence rates in the Dutch–German cross‐border region by postcode area (dairy goat farm = location of the outbreak farm in South Limburg, Netherlands)\nWe used multivariate linear regression to assess the relationship between postcode seroprevalence rates (log‐transformed for better visualization) and radial distance from the outbreak farm in 20‐km zones. Goat and sheep densities, veterinary Q fever outbreaks and sampling densities (per 100,000 population) were included as covariates. A priori, we chose to include goat and sheep densities as covariates in our multivariate regression model, irrespective of univariate linear regression outcome, given their important role as reservoirs and sources of human Q fever. Variables did not show collinearity.", "Laboratory procedures performed during the Dutch outbreak were described previously (Hackert et al., 2012). All retention samples of the German blood donors and the Dutch serum samples were screened for anti‐Coxiella phase II IgG according to manufacturers' protocols (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany). A selection of negative sera (n = 128) and all positive or indeterminate sera were additionally tested for anti‐Coxiella phase II IgM (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany) and for the presence of Coxiella DNA using qPCR.
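The spatial empirical Bayes smoothing performed in OpenGeoDa can be sketched as a Marshall‐type shrinkage estimator (a simplified illustration, not the exact GeoDa implementation):

```python
def spatial_eb_smooth(events, population, neighbours):
    """Spatial empirical Bayes smoothing: each area's raw rate is shrunk
    toward the mean rate of the area plus its contiguity neighbours; areas
    with small populations (unstable raw rates) are shrunk the most."""
    smoothed = []
    for i in range(len(events)):
        idx = [i] + list(neighbours[i])              # area plus bordering areas
        n_tot = sum(population[j] for j in idx)
        m = sum(events[j] for j in idx) / n_tot      # neighbourhood (prior) mean rate
        raw = events[i] / population[i]
        nbar = n_tot / len(idx)
        # Marshall-type prior variance estimate, floored at zero
        var = sum(population[j] * (events[j] / population[j] - m) ** 2
                  for j in idx) / n_tot - m / nbar
        var = max(var, 0.0)
        denom = var + m / population[i]
        shrink = var / denom if denom > 0 else 0.0
        smoothed.append(shrink * raw + (1 - shrink) * m)
    return smoothed
```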
We essentially applied the qPCR assay described elsewhere with a slightly modified TaqMan probe (Klee et al., 2006).", "Human Q fever cases from 2009 were linked to a single dairy goat farm in South Limburg, whose nearest distance to the Dutch–German border in a south‐eastern direction was 7.0 km (Hackert et al., 2012; van Leuken et al., 2013).\nMunicipality‐level data on goat, sheep and cattle population densities (number of animals per km2) in the Dutch study area were retrieved from the National Bureau of Statistics (Statistics Netherlands, Statline, 2009). Comparable animal data in the German study area were obtained from the statistical bureau of North Rhine‐Westphalia, according to its 2010 livestock census (Information und Technik, Nordrhein‐Westfalen, Geschäftsbereich Statistik, Statistische Berichte, Viehaltungen und Viehbestände am 1. März 2010, Ergebnisse der Landwirtschaftszählung). These data were available on a district level only. Confirmed (but not suspect) cases of Q fever in ruminants (goat, sheep and cattle) are notifiable under German law. Data on Q fever notifications in ruminants were retrieved from the Animal Disease Reporting System (TSN), the standard electronic system for registration of all notifiable and reportable animal diseases in Germany, for the entire German study area and study period (Probst, 2010).", "Statistical analyses were performed using IBM SPSS Statistics, version 21 (IBM Inc.). We performed bootstrapping on our estimates using non‐linear regression to obtain more robust confidence interval estimates. We report B values along with their 95% confidence intervals derived from bootstrapping.", "Our smoothed map shows a large area of adjacent postcodes affected by human Q fever surrounding the outbreak farm, spreading over distances of more than 40 km into the German border region.\nBaseline seroprevalence in the outbreak farm township, adjusted for age and gender, was 16.1%. 
Calculated mean seroprevalence for the Dutch study area, derived from the observed gradient of attack rates, was 3.6%, close to the 4.4% observed in the age‐ and gender‐adjusted general population sample from the Dutch study area dating from 2010. The difference between mean calculated and mean observed seroprevalence was not statistically significant by t test (p = .26).\nObserved seroprevalence in blood donors from the German study area was 0.9% (31/3,460) overall, extrapolating to a total of 11,308 infections in those postcode areas with at least one seropositive blood donor. Among the positive blood donors, 61.3% (19/31) had a serological profile (anti‐Coxiella phase II IgM) indicative of a fresh or recent infection and/or were Coxiella DNA positive: 13 donors had a solitary anti‐Coxiella phase II IgM (one of whom was also qPCR positive) and six were positive for both phase II IgM and IgG, while the remaining 12 had a solitary phase II IgG response.
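The extrapolation from donor seroprevalence to infections in the general population, restricted to postcodes with at least one seropositive donor as described above, amounts to the following (illustrative numbers, not the study data):

```python
def extrapolate_infections(donor_pos, donor_n, postcode_pop):
    """Apply each postcode's donor seroprevalence (positives / tested) to its
    resident population; sum only over postcodes with >= 1 positive donor."""
    total = 0.0
    for pos, n, pop in zip(donor_pos, donor_n, postcode_pop):
        if pos >= 1:
            total += (pos / n) * pop
    return total

# Three hypothetical postcodes: 1/50 positive, 0/40, 2/60, with populations
# 20,000 / 15,000 / 30,000 -> (1/50)*20000 + (2/60)*30000 = 1,400 infections.
```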
This ‘outlier’ had limited impact on our findings.\nBlood donor test results and Q fever seroprevalence rates in postcode area populations in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands\nBased on calculated seroprevalence in the Dutch area (derived from our exponential model and the baseline sample from the outbreak farm township general population) and observed seroprevalence in German blood donors.\nAdjusted for goat and sheep density, veterinary Q fever notifications and sampling density per 100,000 population, with 95% confidence intervals derived from bootstrapping (in brackets).\nLog‐transformed seroprevalence in radial 20‐km distance classes from outbreak farm, point estimates based on multivariate linear regression including residential distance from outbreak farm, livestock densities (sheep and goats) and screening rates as predictors, with 95% confidence intervals derived from bootstrapping, including the ‘outlier’ postcode in the German study area", "Mean predicted log‐transformed seroprevalence for all postcode areas included in our German study area, based on the gradient of attack rates of acute Q fever observed in the Dutch study area, was 0.27 (untransformed 27 per 100,000), while mean observed log‐transformed seroprevalence was 0.43 (untransformed 262 per 100,000). This difference between predicted and observed log‐transformed seroprevalence estimates was not statistically significant by t test (p = .08). Correlation between predicted and observed log‐transformed values was 0.48 using Pearson correlation (p < .001) and 0.49 using Spearman's rho (p < .001).", "Our study was ethically approved by the medical ethics committee of the Maastricht University Medical Centre (number 104034) and the medical ethics committee of the RWTH Aachen University Hospital (number EK 026‐10) and conforms to internationally recognized standards (Declaration of Helsinki)." ]
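The predicted‐versus‐observed comparison uses Pearson correlation and Spearman's rho; both can be sketched in plain Python (this rank step ignores ties, so the Spearman value matches the standard definition only for untied data):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of ranks (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    return pearson(ranks(x), ranks(y))
```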
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Study area, study population and study period", "Epidemiological investigation", "Dutch study area", "German study area", "Combined Dutch–German cross‐border study area", "Laboratory investigation", "Veterinary data", "Statistical analysis", "RESULTS", "Overall seroprevalence in the Dutch–German cross‐border region", "Seroprevalence over 20‐km distance classes from the outbreak farm", "Predicted versus observed seroprevalence in the German study area", "DISCUSSION", "ETHICAL APPROVAL", "CONFLICT OF INTERESTS" ]
[ "Following outbreaks in other parts of the Netherlands, the Dutch border region of South Limburg experienced a massive single‐point source outbreak of Q fever related to a local dairy goat farm, counting 253 notified cases of acute human Q fever and an estimated 9,000 undetected infections across the entire region, in 2009 (Hackert et al., 2012; van Leuken et al., 2013). Q fever is a bacterial zoonosis caused by Coxiella burnetii, transmitted to humans from infected ruminants, primarily by the airborne route (Cutler, Bouzid, & Cutler, 2007; Maurin & Raoult, 1999; Parker, Barralet, & Bell, 2006). In acute disease, flulike illness is usually prominent. Most Q fever infections are mild or asymptomatic and self‐limiting (Hackert et al., 2012; van der Hoek et al., 2012). However, a small percentage of infected individuals may develop chronic Q fever, which often goes unnoticed for years after infection but is usually fatal if left untreated. In addition, a substantial proportion of infected individuals may suffer symptoms referred to as Q fever fatigue syndrome, which may persist for years, with major health‐related consequences (Morroy et al., 2016). In South Limburg, the distribution of notified cases of acute human Q fever followed a west–east gradient of decreasing incidence from the source, following westerly winds predominant at the time of the outbreak, towards and up to the Dutch–German border (Hackert et al., 2012). In the same year, only six cases of acute Q fever (notifiable under German law) were reported from the entire German federal state of North Rhine‐Westphalia, none of whom was a resident of the two districts bordering South Limburg (Heinsberg and Aachen). In the five years preceding the outbreak (2004–2008), a total of 42 cases were reported from North Rhine‐Westphalia, just one of whom lived in Aachen (2006). 
In 2010, the year following the outbreak, North Rhine‐Westphalia counted a total of 14 cases, only 2 of whom were from Aachen (Robert Koch Institute, 2016). Whereas these data suggest that cross‐border transmission from South Limburg to neighbouring German counties was negligible, it is rather unlikely that airborne transmission stopped short of the Dutch–German border. A Belgian study suggests that a limited degree of transmission took place in westerly direction across the Dutch–Belgian border, but does not quantify the extent of transmission (Naesens et al., 2012). Recent spikes of newly detected cases of late chronic Q fever in the Netherlands, with a high burden of extra mortality, show that the Dutch epidemic is far from over and should be seen as an ongoing public health concern with reason for unabated alertness (Radboud University Medical Center, 2018). This may be even more the case in a population unknowingly exposed to Q fever, where the risk of delayed or overlooked diagnosis of chronic Q fever may be even higher. We therefore aimed to assess the scope and scale of any undetected cross‐border transmission to neighbouring German counties associated with the regional outbreak in South Limburg.", " Study area, study population and study period The Meuse–Rhine Euroregion provided the administrative background for our study (Wikipedia contributors, 2017). Geographically, it covers an area of about 11,000 km2 around the city corridor of Aachen (North Rhine‐Westphalia, Germany), Maastricht (South Limburg, Netherlands), Hasselt (Limburg, Belgium) and Liège (Liège, Belgium).\nThe Dutch study area was defined by the approximate catchment area of a large regional general hospital (346 km2, 12 municipalities and 308,410 inhabitants).\nThe adjacent German study area was defined by the 122 postcodes of individual residents from North Rhine‐Westphalia who donated blood at the RWTH Aachen University Hospital Blood Donation Centre in the first two months of 2010. 
For a summary of statistics regarding study area and study population, see Table 1. Of the 3,460 included German blood donors, the majority (n = 3,083, 89.1%) lived in postcode areas whose centroid was located within 40 km from the outbreak farm. The study was conducted from February 2009 (when first abortions were registered on the outbreak farm) to February 2010, when (a) no more incident human cases of acute Q fever were reported in the Dutch study area; (b) culling of pregnant dairy goats to prevent further transmission during the upcoming 2010 lambing season had been finalized; and (c) inclusion of German blood donors (January and February 2010) ended.\nStudy area characteristics in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands\nEastern South Limburg, defined by catchment area of local general hospital.\nCatchment area of RWTH Aachen University Hospital Blood Donation Centre, including 122 postcodes counting at least one resident visiting the centre in January/February 2010.\n Epidemiological investigation To assess seroprevalence of Q fever in the Dutch, German and combined cross‐border study area in relation to distance from the Dutch outbreak farm, we used various population samples and methods of analysis. An outline is given in Figure 1.\nOutline of population samples and study design by study area (Dutch study area, German study area, combined cross‐border study area)\n Dutch study area The outbreak affected a population largely naive to Q fever, according to a serological survey from the year predating the outbreak (2008).
Human Q fever cases notified to the PHS South Limburg in 2009/10 (n = 253) were scattered downwind from the outbreak farm, following a gradient of declining attack rates in easterly direction from the source up to the Dutch–German border (Hackert et al., 2012). Using SPSS's curve fitting tool, we derived a curve of best exponential fit from the aforementioned gradient, corresponding to the following model: 469.074733 * EXP (−0.321415 * [distance to outbreak farm in km]). Q fever seroprevalence in a convenience sample of adult residents from the outbreak farm's township (n = 120, aged ≥ 18 years, mean household distance from outbreak farm = 2.7 km) served as a baseline estimate for the calculation of seroprevalence rates across the entire distance from the outbreak farm up to the Dutch–German border according to our exponential model (Hackert et al., 2015). The underlying assumption was that infections followed the same geographical gradient as notified cases. Seroprevalence in this township sample was age‐ and sex‐matched within 10‐year age strata according to the demographic distribution in our sample of German blood donors. To assess the validity of our exponential model, we compared the calculated cumulative seroprevalence for the entire Dutch area with seroprevalence observed in a post‐outbreak convenience sample of adults from the same area (Hackert et al., 2012).\n German study area A total of 3,460 out of 3,493 blood donors (99.1%), who visited the Blood Donation Centre in January/February 2010, were residents of North Rhine‐Westphalia, while the remaining 33 were residents of remote German federal states and were consequently excluded from our study. Seroprevalence was estimated using retention sample sera from the 3,460 included blood donors. The detection of IgG or IgM phase II antibodies by ELISA or of C. burnetii DNA by qPCR identified positive sera. The minimum age for blood donation in Germany is 18 years. Apart from standard exclusion criteria relating to donor blood safety, additional criteria were applied during the study period to exclude donors with increased risk of recent or acute Q fever infection (contact with livestock, such as cattle, sheep, goats, rabbits and ducks, or their excrements, over the preceding five weeks; living in the vicinity of a livestock holding; and signs or symptoms of fever, sweats, nausea, vomiting, diarrhoea, malaise or headaches in the five weeks preceding donation).
Cases of acute Q fever reported to the German public health authorities were retrieved from the publicly accessible notification database (SurvStat) of the Robert Koch Institute (Robert Koch Institute, 2016). For reasons of privacy protection, demographic blood donor information was limited to residential postcode and 10‐year age group. Based on postcode centroids as a proxy for residential address, GIS was used to map the geographical distribution of seropositive and seronegative donors, to determine seroprevalence in postcode areas and to extrapolate the seroprevalence to the general population in these areas.\n Combined Dutch–German cross‐border study area A regression model, including distance zones of 20 km as dummy variables, adjusting for goat and sheep densities, veterinary Q fever notifications and sampling rates (i.e. number of individuals tested per 100,000 population in each postcode area), was used to assess the geographical relationship between seroprevalence (assumed to represent incidence rate of infection) and exposure dose (approximated by residential distance from the outbreak farm). In addition, we applied our fitted model, derived from the geographical distribution of attack rates in the Dutch study area, to predict seroprevalence rates in the German study area by distance from the outbreak farm, and tested the correlation between predicted and observed rates. We used Spatial Empirical Bayes Smoothing (where estimates per postcode are weighted against estimates in neighbouring areas sharing a common edge or border) to visualize our data by creating a smoothed map of seroprevalence rates in the combined Dutch–German cross‐border region (Figure 2). Computations were carried out in OpenGeoDa 1.2.0 (Anselin, 2005).\nBayesian‐smoothed extrapolated Q fever seroprevalence rates in the Dutch–German cross‐border region by postcode area (dairy goat farm = location of the outbreak farm in South Limburg, Netherlands)\nWe used multivariate linear regression to assess the relationship between postcode seroprevalence rates (log‐transformed for better visualization) and radial distance from the outbreak farm in 20‐km zones. Goat and sheep densities, veterinary Q fever outbreaks and sampling densities (per 100,000 population) were included as covariates.
A priori, we chose to include goat and sheep densities as covariates in our multivariate regression model, irrespective of univariate linear regression outcome, given their important role as reservoirs and sources of human Q fever. Variables did not show collinearity.\n Laboratory investigation Laboratory procedures performed during the Dutch outbreak were described previously (Hackert et al., 2012). All retention samples of the German blood donors and the Dutch serum samples were screened for anti‐Coxiella phase II IgG according to manufacturers' protocols (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany). A selection of negative sera (n = 128) and all positive or indeterminate sera were additionally tested for anti‐Coxiella phase II IgM (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany) and for the presence of Coxiella DNA using qPCR. We essentially applied the qPCR assay described elsewhere with a slightly modified TaqMan probe (Klee et al., 2006).\n Veterinary data Human Q fever cases from 2009 were linked to a single dairy goat farm in South Limburg, whose nearest distance to the Dutch–German border in a south‐eastern direction was 7.0 km (Hackert et al., 2012; van Leuken et al., 2013).\nMunicipality‐level data on goat, sheep and cattle population densities (number of animals per km2) in the Dutch study area were retrieved from the National Bureau of Statistics (Statistics Netherlands, Statline, 2009). Comparable animal data in the German study area were obtained from the statistical bureau of North Rhine‐Westphalia, according to its 2010 livestock census (Information und Technik, Nordrhein‐Westfalen, Geschäftsbereich Statistik, Statistische Berichte, Viehaltungen und Viehbestände am 1. März 2010, Ergebnisse der Landwirtschaftszählung). These data were available on a district level only. Confirmed (but not suspect) cases of Q fever in ruminants (goat, sheep and cattle) are notifiable under German law. Data on Q fever notifications in ruminants were retrieved from the Animal Disease Reporting System (TSN), the standard electronic system for registration of all notifiable and reportable animal diseases in Germany, for the entire German study area and study period (Probst, 2010).\n Statistical analysis Statistical analyses were performed using IBM SPSS Statistics, version 21 (IBM Inc.). We performed bootstrapping on our estimates using non‐linear regression to obtain more robust confidence interval estimates. We report B values along with their 95% confidence intervals derived from bootstrapping.", "The Meuse–Rhine Euroregion provided the administrative background for our study (Wikipedia contributors, 2017).
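The bootstrapped 95% confidence intervals reported for the regression coefficients can be sketched as a percentile bootstrap (SPSS's bootstrap options differ in detail; the resample count and seed here are our choices):

```python
import random

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap: resample the data with replacement, recompute the
    statistic each time, and take the alpha/2 and 1 - alpha/2 quantiles of the
    resampled statistics as the confidence interval."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        stats.append(statistic(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```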
Geographically, it covers an area of about 11,000 km2 around the city corridor of Aachen (North Rhine‐Westphalia, Germany), Maastricht (South Limburg, Netherlands), Hasselt (Limburg, Belgium) and Liège (Liège, Belgium).\nThe Dutch study area was defined by the approximate catchment area of a large regional general hospital (346 km2, 12 municipalities and 308,410 inhabitants).\nThe adjacent German study area was defined by the 122 postcodes of individual residents from North Rhine‐Westphalia who donated blood at the RWTH Aachen University Hospital Blood Donation Centre in the first two months of 2010. For a summary of statistics regarding study area and study population, see Table 1. Of the 3,460 included German blood donors, the majority (n = 3,083, 89.1%) lived in postcode areas whose centroid was located within 40 km from the outbreak farm. The study was conducted from February 2009 (when first abortions were registered on the outbreak farm) to February 2010, when (a) no more incident human cases of acute Q fever were reported in the Dutch study area; (b) culling of pregnant dairy goats to prevent further transmission during the upcoming 2010 lambing season had been finalized; and (c) inclusion of German blood donors (January and February 2010) ended.\nStudy area characteristics in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands\nEastern South Limburg, defined by catchment area of local general hospital.\nCatchment area of RWTH Aachen University Hospital Blood Donation Centre, including 122 postcodes counting at least one resident visiting the centre in January/February 2010.", "To assess seroprevalence of Q fever in the Dutch, German and combined cross‐border study area in relation to distance from the Dutch outbreak farm, we used various population samples and methods of analysis. 
An outline is given in Figure 1.

Figure 1: Outline of population samples and study design by study area (Dutch study area, German study area, combined cross‐border study area).

The outbreak affected a population largely naive to Q fever, according to a serological survey from the year predating the outbreak (2008). Human Q fever cases notified to the PHS South Limburg in 2009/10 (n = 253) were scattered downwind from the outbreak farm, following a gradient of declining attack rates in an easterly direction from the source up to the Dutch–German border (Hackert et al., 2012). Using SPSS's curve‐fitting tool, we derived a curve of best exponential fit from the aforementioned gradient, corresponding to the following model: 469.074733 * EXP(−0.321415 * [distance to outbreak farm in km]). Q fever seroprevalence in a convenience sample of adult residents from the outbreak farm's township (n = 120, aged ≥ 18 years, mean household distance from outbreak farm = 2.7 km) served as a baseline estimate for the calculation of seroprevalence rates across the entire distance from the outbreak farm up to the Dutch–German border according to our exponential model (Hackert et al., 2015). Underlying was the assumption that infections followed the same geographical gradient as notified cases. Seroprevalence in this township sample was age‐ and sex‐matched within 10‐year age strata according to the demographic distribution in our sample of German blood donors. To assess the validity of our exponential model, we compared the calculated cumulative seroprevalence for the entire Dutch area with the seroprevalence observed in a post‐outbreak convenience sample of adults from the same area (Hackert et al., 2012).

A total of 3,460 out of 3,493 blood donors (99.1%) who visited the Blood Donation Centre in January/February 2010 were residents of North Rhine‐Westphalia, while the remaining 33 were residents of remote German federal states and were consequently excluded from our study.
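The fitted exponential model lends itself to a short worked example. A minimal sketch in Python (the function name and the example distances are ours; the model itself is the SPSS best fit quoted above, expressing notified attack rates as a function of distance):

```python
import math

def predicted_attack_rate(distance_km):
    """Best-fit exponential model derived from the gradient of
    notified attack rates by distance from the outbreak farm."""
    return 469.074733 * math.exp(-0.321415 * distance_km)

# Attack rates fall steeply with distance: at the Dutch-German
# border (~7 km from the farm) the predicted rate is roughly a
# tenth of the rate at the source.
for d in (0, 7, 20, 40):
    print(f"{d:2d} km -> {predicted_attack_rate(d):8.2f}")
```

Seroprevalence estimates for each distance were then obtained by scaling this gradient to the baseline seroprevalence observed in the outbreak farm's township.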
Seroprevalence was estimated using retention sample sera from the 3,460 included blood donors. Sera were scored positive on detection of IgG or IgM phase II antibodies by ELISA or of C. burnetii DNA by qPCR. The minimum age for blood donation in Germany is 18 years. Apart from standard exclusion criteria relating to donor blood safety, additional criteria were applied during the study period to exclude donors with an increased risk of recent or acute Q fever infection: contact with livestock (such as cattle, sheep, goats, rabbits and ducks) or their excrements over the preceding five weeks; living in the vicinity of a livestock holding; and signs or symptoms of fever, sweats, nausea, vomiting, diarrhoea, malaise or headaches in the five weeks preceding donation. Cases of acute Q fever reported to the German public health authorities were retrieved from the publicly accessible notification database (SurvStat) of the Robert Koch Institute (Robert Koch Institute, 2016). For reasons of privacy protection, demographic blood donor information was limited to residential postcode and 10‐year age group. Based on postcode centroids as a proxy for residential address, GIS was used to map the geographical distribution of seropositive and seronegative donors, to determine seroprevalence in postcode areas and to extrapolate the seroprevalence to the general population in these areas.

A regression model, including distance zones of 20 km as dummy variables and adjusting for goat and sheep densities, veterinary Q fever notifications and sampling rates (i.e. the number of individuals tested per 100,000 population in each postcode area), was used to assess the geographical relationship between seroprevalence (assumed to represent the incidence rate of infection) and exposure dose (approximated by residential distance from the outbreak farm).
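The case definition for a positive serum and the extrapolation from donor seroprevalence to postcode populations can be sketched as follows. This is an illustrative reconstruction, not the study's code; the function names and the example figures are our assumptions:

```python
def is_seropositive(igg_phase2, igm_phase2, qpcr_dna):
    # A serum counts as positive if ELISA detects phase II IgG or
    # phase II IgM antibodies, or qPCR detects C. burnetii DNA.
    return igg_phase2 or igm_phase2 or qpcr_dna

def extrapolate_infections(n_positive, n_tested, population):
    """Scale donor seroprevalence in a postcode area up to its
    general population (assumes donors are representative)."""
    if n_tested == 0:
        return 0.0
    return population * n_positive / n_tested

# Illustrative example: 1 positive donor out of 50 tested in an
# area of 10,000 inhabitants extrapolates to 200 infections.
print(extrapolate_infections(1, 50, 10_000))
```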
In addition, we applied our fitted model, derived from the geographical distribution of attack rates in the Dutch study area, to predict seroprevalence rates in the German study area by distance from the outbreak farm, and tested the correlation between predicted and observed rates. We used Spatial Empirical Bayes Smoothing (where estimates per postcode are weighted against estimates in neighbouring areas sharing a common edge or border) to visualize our data by creating a smoothed map of seroprevalence rates in the combined Dutch–German cross‐border region (Figure 2). Computations were carried out in OpenGeoDa 1.2.0 (Anselin, 2005).

Figure 2: Bayesian‐smoothed extrapolated Q fever seroprevalence rates in the Dutch–German cross‐border region by postcode area (dairy goat farm = location of the outbreak farm in South Limburg, Netherlands).

We used multivariate linear regression to assess the relationship between postcode seroprevalence rates (log‐transformed for better visualization) and radial distance from the outbreak farm in 20‐km zones. Goat and sheep densities, veterinary Q fever outbreaks and sampling densities (per 100,000 population) were included as covariates. A priori, we chose to include goat and sheep densities as covariates in our multivariate regression model, irrespective of univariate linear regression outcomes, given their important role as reservoirs and sources of human Q fever. Variables did not show collinearity.

Laboratory procedures performed during the Dutch outbreak were described previously (Hackert et al., 2012). All retention samples of the German blood donors and the Dutch serum samples were screened for anti‐Coxiella phase II IgG according to the manufacturer's protocol (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany).
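The multivariate model with 20‐km distance-zone dummy variables can be illustrated with ordinary least squares on synthetic postcode data. A sketch only: the study used SPSS, whereas this uses numpy's lstsq, and all numbers below are simulated, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Synthetic postcode-level data: 20-km distance zone (0-3) and covariates.
zone = rng.integers(0, 4, n)
goat_density = rng.uniform(0, 5, n)
sheep_density = rng.uniform(0, 10, n)
sampling_rate = rng.uniform(50, 500, n)

# Simulated truth: log-seroprevalence drops by ~0.6 per zone, plus noise.
log_sero = 2.0 - 0.6 * zone + 0.02 * goat_density + rng.normal(0, 0.1, n)

# Design matrix: intercept, dummy variables for zones 1-3
# (zone 0, closest to the farm, is the reference) and covariates.
X = np.column_stack([
    np.ones(n),
    (zone == 1).astype(float),
    (zone == 2).astype(float),
    (zone == 3).astype(float),
    goat_density,
    sheep_density,
    sampling_rate,
])
beta, *_ = np.linalg.lstsq(X, log_sero, rcond=None)

# beta[1:4] are the B values for each distance zone relative to zone 0;
# they recover the simulated decline of about -0.6 per zone.
print(np.round(beta[:4], 2))
```

In the study itself, bootstrapped 95% confidence intervals were reported around the B values rather than the plain least-squares estimates shown here.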
A selection of negative sera (n = 128) and all positive or indeterminate sera were additionally tested for anti‐Coxiella phase II IgM (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany) and for the presence of Coxiella DNA using qPCR. We essentially applied the qPCR assay described elsewhere, with a slightly modified TaqMan probe (Klee et al., 2006).

Human Q fever cases from 2009 were linked to a single dairy goat farm in South Limburg, whose nearest distance to the Dutch–German border in a south‐eastern direction was 7.0 km (Hackert et al., 2012; van Leuken et al., 2013).

Municipality‐level data on goat, sheep and cattle population densities (number of animals per km2) in the Dutch study area were retrieved from the National Bureau of Statistics (Statistics Netherlands, Statline, 2009).
Overall seroprevalence in the Dutch–German cross‐border region

Our smoothed map shows a large area of adjacent postcodes affected by human Q fever surrounding the outbreak farm, spreading over distances of more than 40 km into the German border region.

Baseline seroprevalence in the outbreak farm township, adjusted for age and gender, was 16.1%. Calculated mean seroprevalence for the Dutch study area, derived from the observed gradient of attack rates, was 3.6%, close to the 4.4% observed in the age‐ and gender‐adjusted general population sample from the Dutch study area dating from 2010. The difference between mean calculated and mean observed seroprevalence was statistically not significant by t test (p = .26).

Observed seroprevalence in blood donors from the German study area was 0.9% (31/3,460) overall, extrapolating to a total of 11,308 infections in those postcode areas with at least one seropositive blood donor. Among the positive blood donors, 61.3% (19/31) had a serological profile (anti‐Coxiella phase II IgM) indicative of a fresh or recent infection and/or were Coxiella DNA positive: 13 donors had a solitary anti‐Coxiella phase II IgM (one of whom was also qPCR positive) and six were positive for both phase II IgM and IgG, while 15 had a solitary phase II IgG response.
Seroprevalence over 20‐km distance classes from the outbreak farm

Seroprevalence rates across radial 20‐km distance classes declined with increasing distance from the outbreak farm (Table 2 and Figure 3), comparable with log‐transformed estimates in our multivariate linear regression model. Multivariate analysis was performed twice, with and without inclusion of a high‐seroprevalence postcode on the German side, located at a distance of 54 km from the outbreak farm and figuring as a dark‐coloured ‘hotspot’ in our smoothed map. This postcode counted six donors, one of whom was seropositive (seroprevalence = 16.7%). Given that seroprevalence in surrounding postcodes was low (i.e. 1.6% for the entire district including 18 postcode areas and 257 donors), and there were no reports of Q fever in livestock from the district during the study period, high seroprevalence in this postcode was unlikely to reflect locally acquired infection. This ‘outlier’ had limited impact on our findings.

Table 2: Blood donor test results and Q fever seroprevalence rates in postcode area populations in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands. Notes: based on calculated seroprevalence in the Dutch area (derived from our exponential model and the baseline sample from the outbreak farm township general population) and on observed seroprevalence in German blood donors; adjusted for goat and sheep density, veterinary Q fever notifications and sampling density per 100,000 population, with 95% confidence intervals derived from bootstrapping (in brackets).

Figure 3: Log‐transformed seroprevalence in radial 20‐km distance classes from the outbreak farm; point estimates based on multivariate linear regression including residential distance from the outbreak farm, livestock densities (sheep and goats) and screening rates as predictors, with 95% confidence intervals derived from bootstrapping, including the ‘outlier’ postcode in the German study area.
Predicted versus observed seroprevalence in the German study area

Mean predicted log‐transformed seroprevalence for all postcode areas included in our German study area, based on the gradient of attack rates of acute Q fever observed in the Dutch study area, was 0.27 (untransformed 27 per 100,000), while mean observed log‐transformed seroprevalence was 0.43 (untransformed 262 per 100,000). This difference between predicted and observed log‐transformed seroprevalence estimates was statistically not significant by t test (p = .08). Correlation between predicted and observed log‐transformed values was 0.48 using Pearson correlation (p < .001) and 0.49 using Spearman's rho (p < .001).
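The comparison of predicted and observed log‐transformed rates rests on Pearson and Spearman correlations. These can be computed without any statistics package, since Spearman's rho is the Pearson correlation of ranks; a sketch on synthetic values (not the study's data, and the rank helper ignores ties):

```python
import numpy as np

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank-transformed data.
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

rng = np.random.default_rng(1)
predicted = rng.uniform(0, 1, 50)                    # model-based log rates
observed = 0.5 * predicted + rng.normal(0, 0.2, 50)  # noisy observations

print(round(pearson(predicted, observed), 2))
print(round(spearman(predicted, observed), 2))
```

With moderately correlated inputs such as these, both coefficients land well between 0 and 1, mirroring the moderate correlations the study reports.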
Transmission of Q fever over distances of at least 30 km has been described before (Eldin et al., 2017; Tissot‐Dupont, Amadei, Nezri, & Raoult, 2004).
Our study, however, is the first to provide evidence of large‐scale yet undetected long‐distance transmission of Q fever in a Dutch–German cross‐border context, associated with the 2007–2010 Q fever epidemic in the Netherlands. Presumed transmission into neighbouring German counties took place over distances of 40 km or more from a single dairy goat farm located in South Limburg, the southernmost part of the Netherlands.

Our study suggests that cross‐border transmission, in spite of evidence for massive numbers of infections dispersed over a wide geographical area, went largely undetected, putting susceptible patients at risk of long‐term sequelae, most notably chronic Q fever, due to delayed diagnosis, missed diagnosis or misdiagnosis. Extrapolating from our sample of blood donors, the estimated number of unreported infections in the affected German border area may have been as high as 11,000.

Our data suggest that the risk of Q fever infections going undetected was higher on the German than on the Dutch side of the border, with ascertainment rates (i.e. numbers of notified adult cases vs. estimated numbers of infections) being at least 10 times lower on the German side (Hackert et al., 2015, 2012). Based on unpublished hospital data from South Limburg, with 17 confirmed cases of Q fever, we estimate that the 10‐year incidence of chronic Q fever is 1 in 1,000 infected individuals, with a case fatality rate of approximately 70%. As yet unpublished data from the German cross‐border area, showing high numbers of overlooked or misdiagnosed chronic Q fever infections in hospitalized patients, lend further support to our findings.

The magnitude and relevance of this ongoing public health concern are underlined by recent figures from the Netherlands. As of 2018, the Netherlands had counted a total of 519 chronic Q fever patients since 2007. Of these, 86 patients had died, 21 of whom between 2016 and 2018 alone (Radboud University Medical Center, 2018).
While the incidence and prevalence of Q fever fatigue syndrome have been registered neither regionally nor nationally, studies suggest that approximately 20% of Q fever patients are affected by long‐term fatigue persisting for up to 20 years, adding to the disease burden related to the Dutch epidemic (Morroy et al., 2016).

Seroprevalence in our study declined exponentially with increasing distance from the outbreak farm across four 20‐km zones. The observed west–east gradient is compatible with dispersal of C. burnetii from the farm, given that transmission of Q fever is usually airborne and westerly winds were predominant in the study area at the time of the Q fever‐related abortions in pregnant goats on the outbreak farm.

Observed seroprevalence rates in the German study area were higher than the rates predicted by our exponential model, with moderate statistical correlation between the two. Alternative sources, such as contaminated manure, wildlife reservoirs and sheep flocks that migrate over longer distances and have been shown to carry C. burnetii in clinically inconspicuous animals, may have contributed to human infections in the German border region, but these phenomena would not explain the geographical distribution observed (Hermans, Jeurissen, Hackert, & Hoebe, 2014; Hilbert et al., 2012; Webster, Lloyd, & Macdonald, 1995). While reports of Q fever in livestock in the German border region during the study period may implicate cattle as potential sources, a recent study found that human contact with sheep and goats, rather than cattle, was a consistent risk factor in human outbreaks (Georgiev et al., 2013; Verso et al., 2016). Goat and sheep densities in the German cross‐border area, however, were rather low (see Table 1). While goats and sheep may have contributed to our seropositive findings, they thus seem unlikely sources for large numbers of human infections spread over a wide area.
In addition, our data did not reveal any plausible local source other than the outbreak farm.

Nevertheless, while the observed west–east gradient of seroprevalence rates in the population is consistent with airborne dispersal of C. burnetii from the index farm, we cannot exclude other routes of transmission contributing to the observed geographical distribution of infections. For example, active human mobility in the vicinity of dairy goat farms with a history of Q fever‐related abortion waves has been shown to increase the risk of positive Q fever serology (Klous et al., 2019). The province of Limburg is the most popular day‐travel destination for German tourists (ZKA Consultants & Planners, 2012). Moreover, 1.7% of workers in the region are cross‐border commuters from Germany. Thus, an unknown proportion of blood donors may have acquired the infection through travel or transit through South Limburg. Blood donors living closest to the border could be expected to have the highest risk of infection related to airborne transmission and travel alike, as they would be most likely to undertake day trips or commute into neighbouring South Limburg. Human cross‐border movement as a contributing factor would not diminish the relevance of our findings or change the fact that the hidden transmission revealed by our data has important implications for cross‐border communicable disease control, in terms of alerting members of the public and the medical profession to potential risks of exposure and associated health hazards.

Under‐ascertainment and under‐reporting of Q fever are usually attributed to its mild and non‐specific clinical presentation. Primary infections are often mild, sometimes resembling a common cold, or asymptomatic, diagnosed only retrospectively through systematic testing (Eldin et al., 2017). This phenomenon is mirrored by a growing number of studies in which seroprevalence rates in the population exceed reported cases of symptomatic Q fever.
One such study from Denmark showed a rate of 64% of asymptomatic primary infections, while a study from Italy reported 30 seropositive individuals with no related episodes of respiratory or febrile disease (Bacci et al., 2012; Verso et al., 2016). A recent study from Spain found fever, usually considered the hallmark of symptomatic infection, to be absent in almost a third of 39 Q fever‐positive cases, even though all cases without fever had pneumonia. Interestingly, a systematic review by the same authors reported the absence of traditional risk factors, most notably animal exposure, in a majority of almost 1,500 included Q fever cases, with as much as 60% of cases living in urban areas, raising the possibility of airborne and other routes of transmission in these cases (Alende‐Castro et al., 2018).

Seroprevalence studies from the Netherlands following the Dutch 2007–2010 Q fever epidemic revealed incident Q fever infections to exceed notified infections by factors of ten and higher (Hackert et al., 2015, 2012; Hogema et al., 2012; van der Hoek et al., 2012). A syndromic surveillance study that retrospectively identified three clusters of lower respiratory infections dating from 2005 and 2006, plausibly linked to the Dutch epidemic, appears to confirm that even clusters of more severe disease may easily be missed by physicians (van den Wijngaard et al., 2011). In the case of the cross‐border outbreak we describe here, limited attention paid to the outbreak in South Limburg by regional German media, health professionals and members of the public may have influenced people's help‐seeking behaviour and resulted in a low index of suspicion towards Q fever in clinical cases, as well as misperceptions about the outbreak's potential for widespread cross‐border transmission. Also, since goat husbandry is uncommon in Germany, there may have been a mistaken belief that the epidemic was just a domestic Dutch problem, reinforced by unfamiliarity with C. burnetii's potential for airborne transmission over long distances.

Our study has limitations. While we had data for the Dutch region showing that pre‐outbreak seroprevalence in 2008 was as low as 0.5%, we had no pre‐outbreak data for the German study area. Seropositive findings in the blood donors may thus reflect past exposure to sources other than the Dutch outbreak farm. However, more than 60% of the positive blood donors had serological profiles indicative of acute or recent infection, arguing for a close temporal association with the South Limburg outbreak. The seroresponse time of anti‐Coxiella phase II IgM antibodies, that is, the period from onset of symptoms to the onset of the phase II IgM seroresponse, appears to be extremely variable, ranging from zero to seven months with a median of less than one month (Wielders, Teunis, Hermans, van der Hoek, & Schneeberger, 2015). The same is true for the phase II IgG seroresponse. Blood donors from the German cross‐border area were recruited and tested 10–11 months after the peak of abortions on the outbreak farm in South Limburg. Thus, our solitary phase II IgM findings are highly suggestive of recent infections incurred somewhere between mid‐2009 and early 2010, well in line with the South Limburg outbreak, where new cases were reported throughout the entire period from April 2009 to March 2010, with a peak in May 2009. Any link to events predating the South Limburg outbreak can virtually be ruled out in blood donors with a solitary phase II IgM response. Blood donors with combined phase II IgM and IgG serology may also be linked to the South Limburg outbreak, although infections incurred earlier cannot be ruled out in these cases. Median half‐time decay rates of IgM phase II antibodies appear to vary widely, ranging from less than a month to several years.
Thus, even solitary phase II IgG findings would fit our hypothesis of a link with the South Limburg outbreak (Teunis et al., 2013).

Seroprevalence in 2010 German blood donors living in the city of Aachen, which borders directly on South Limburg, was more than twice as high as the 2008 pre‐outbreak seroprevalence in South Limburg. Since there are no natural or man‐made obstacles standing in the way of airborne transmission between the eastern part of South Limburg and Aachen, any Q fever events on either side of the border would likely impact both areas in similar ways, depending among other things on the prevailing wind direction at the time of the outbreak. Conversely, we would expect pre‐outbreak seroprevalence in Aachen to be no higher than in neighbouring South Limburg, suggesting that the higher seroprevalence rate observed in 2010 blood donors may reflect a real increase in Q fever infections.

The overall precision of our data for the German study area was limited. Sample sizes of blood donors per postcode were small, particularly in postcodes located at larger distances from the outbreak farm. For lack of individual blood donor data such as residential address and travel patterns, we had to use postcode centroids as a proxy, resulting in low resolution and possible misclassification regarding exposure location.

When interpreting seroprevalence and incidence rates in blood donors, one always needs to keep in mind that the study population consists of adult, healthy blood donors, not of randomly selected citizens. However, while donors in many cases poorly represent the general population, infections incurred through airborne transmission are generally independent of donor status, reducing biases caused by the comparison of donors and the general population. A Dutch study among blood donors showed that the age and sex distribution of the study population was very similar to the age and sex distribution of the notified Q fever cases in the Netherlands (Hogema et al., 2012).
A recent Australian study found the seroprevalence in blood donors to be lower than in the general population, but indicates that different laboratory methods and population sampling may account for some of the differences (Gidding et al., 2019). Seroprevalence in our group of blood donors may also have underestimated the true seroprevalence in the general population, owing to the selection process, which excluded donor candidates with signs of acute or recent infection and those with known risk exposures.

Ideally, our findings should be replicated by serological studies of preserved pre‐ and post‐outbreak human samples from other Dutch–German border regions that were likely affected by the Dutch epidemic in 2007–2010. Meanwhile, in the absence of such studies, our findings argue for intensified and harmonized cross‐border communicable disease control, including public health communication to professionals, the public and the media, as well as the exchange of data suitable for surveillance, detection and early warning. In addition, we urgently recommend that patients who live in affected areas and have predisposing conditions, serological evidence or clinical symptoms consistent with persistent focalized (chronic) Q fever infection should be considered for low‐threshold screening, keeping in mind that chronic Q fever may have atypical presentations (Melenotte, Million, & Raoult, 2020).

Our study was ethically approved by the medical ethics committee of the Maastricht University Medical Centre (number 104034) and the medical ethics committee of the RWTH Aachen University Hospital (number EK 026‐10) and conforms to internationally recognized standards (Declaration of Helsinki).

The authors declare they have no conflict of interests.
[ "communicable disease control", "Coxiella burnetii infection", "international health regulations", "one health", "outbreaks", "Q fever" ]
INTRODUCTION: Following outbreaks in other parts of the Netherlands, the Dutch border region of South Limburg experienced a massive single‐point source outbreak of Q fever in 2009, related to a local dairy goat farm and counting 253 notified cases of acute human Q fever and an estimated 9,000 undetected infections across the entire region (Hackert et al., 2012; van Leuken et al., 2013). Q fever is a bacterial zoonosis caused by Coxiella burnetii, transmitted to humans from infected ruminants, primarily by the airborne route (Cutler, Bouzid, & Cutler, 2007; Maurin & Raoult, 1999; Parker, Barralet, & Bell, 2006). In acute disease, flulike illness is usually prominent. Most Q fever infections are mild or asymptomatic and self‐limiting (Hackert et al., 2012; van der Hoek et al., 2012). However, a small percentage of infected individuals may develop chronic Q fever, which often goes unnoticed for years after infection but is usually fatal if left untreated. In addition, a substantial proportion of infected individuals may suffer symptoms referred to as Q fever fatigue syndrome, which may persist for years, with major health‐related consequences (Morroy et al., 2016). In South Limburg, the distribution of notified cases of acute human Q fever followed a west–east gradient of decreasing incidence from the source, following westerly winds predominant at the time of the outbreak, towards and up to the Dutch–German border (Hackert et al., 2012). In the same year, only six cases of acute Q fever (notifiable under German law) were reported from the entire German federal state of North Rhine‐Westphalia, none of whom was a resident of the two districts bordering South Limburg (Heinsberg and Aachen). In the five years preceding the outbreak (2004–2008), a total of 42 cases were reported from North Rhine‐Westphalia, just one of whom lived in Aachen (2006).
In 2010, the year following the outbreak, North Rhine‐Westphalia counted a total of 14 cases, only 2 of whom were from Aachen (Robert Koch Institute, 2016). Whereas these data suggest that cross‐border transmission from South Limburg to neighbouring German counties was negligible, it is rather unlikely that airborne transmission stopped short of the Dutch–German border. A Belgian study suggests that a limited degree of transmission took place in westerly direction across the Dutch–Belgian border, but does not quantify the extent of transmission (Naesens et al., 2012). Recent spikes of newly detected cases of late chronic Q fever in the Netherlands, with a high burden of extra mortality, show that the Dutch epidemic is far from over and should be seen as an ongoing public health concern with reason for unabated alertness (Radboud University Medical Center, 2018). This may be even more the case in a population unknowingly exposed to Q fever, where the risk of delayed or overlooked diagnosis of chronic Q fever may be even higher. We therefore aimed to assess the scope and scale of any undetected cross‐border transmission to neighbouring German counties associated with the regional outbreak in South Limburg. MATERIALS AND METHODS: Study area, study population and study period The Meuse–Rhine Euroregion provided the administrative background for our study (Wikipedia contributors, 2017). Geographically, it covers an area of about 11,000 km2 around the city corridor of Aachen (North Rhine‐Westphalia, Germany), Maastricht (South Limburg, Netherlands), Hasselt (Limburg, Belgium) and Liège (Liège, Belgium). The Dutch study area was defined by the approximate catchment area of a large regional general hospital (346 km2, 12 municipalities and 308,410 inhabitants). 
The adjacent German study area was defined by the 122 postcodes of individual residents from North Rhine‐Westphalia who donated blood at the RWTH Aachen University Hospital Blood Donation Centre in the first two months of 2010. For a summary of statistics regarding study area and study population, see Table 1. Of the 3,460 included German blood donors, the majority (n = 3,083, 89.1%) lived in postcode areas whose centroid was located within 40 km from the outbreak farm. The study was conducted from February 2009 (when first abortions were registered on the outbreak farm) to February 2010, when (a) no more incident human cases of acute Q fever were reported in the Dutch study area; (b) culling of pregnant dairy goats to prevent further transmission during the upcoming 2010 lambing season had been finalized; and (c) inclusion of German blood donors (January and February 2010) ended. Study area characteristics in radial 20‐km distance classes from the index dairy goat farm in South Limburg, Netherlands Eastern South Limburg, defined by catchment area of local general hospital. Catchment area of RWTH Aachen University Hospital Blood Donation Centre, including 122 postcodes counting at least one resident visiting the centre in January/February 2010.
Epidemiological investigation To assess seroprevalence of Q fever in the Dutch, German and combined cross‐border study area in relation to distance from the Dutch outbreak farm, we used various population samples and methods of analysis. An outline is given in Figure 1. Outline of population samples and study design by study area (Dutch study area, German study area, combined cross‐border study area) Dutch study area The outbreak affected a population largely naive to Q fever, according to a serological survey from the year predating the outbreak (2008).
Human Q fever cases notified to the PHS South Limburg in 2009/10 (n = 253) were scattered downwind from the outbreak farm, following a gradient of declining attack rates in easterly direction from the source up to the Dutch–German border (Hackert et al., 2012). Using SPSS's curve fitting tool, we derived a curve of best exponential fit from aforementioned gradient, corresponding to the following model: 469.074733 * EXP (−0.321415 * [distance to outbreak farm in km]). Q fever seroprevalence in a convenience sample of adult residents from the outbreak farm's township (n = 120, aged ≥ 18 years, mean household distance from outbreak farm = 2.7 km) served as a baseline estimate for the calculation of seroprevalence rates across the entire distance from the outbreak farm up to the Dutch–German border according to our exponential model (Hackert et al., 2015). Underlying was the assumption that infections followed the same geographical gradient as notified cases. Seroprevalence in this township sample was age‐ and sex‐matched within 10‐year age strata according to the demographic distribution in our sample of German blood donors. To assess validity of our exponential model, we compared the calculated cumulative seroprevalence for the entire Dutch area with seroprevalence observed in a post‐outbreak convenience sample of adults from the same area (Hackert et al., 2012).
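For illustration, the fitted exponential attack‐rate curve quoted above can be evaluated directly. This is a minimal sketch: the two coefficients are those reported in the text, while the function name is ours.

```python
import math

# Exponential model of the attack-rate gradient reported above:
# rate(d) = 469.074733 * exp(-0.321415 * d), with d the radial distance
# from the outbreak farm in km.
def fitted_attack_rate(distance_km: float) -> float:
    return 469.074733 * math.exp(-0.321415 * distance_km)

# The modelled rate declines steeply between the farm and the
# Dutch-German border (nearest border distance: 7.0 km).
rate_at_source = fitted_attack_rate(0.0)
rate_at_border = fitted_attack_rate(7.0)
```

Evaluating the curve at the township sample's mean household distance (2.7 km) versus the border distance shows how quickly the modelled exposure falls off with distance.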
German study area A total of 3,460 of the 3,493 blood donors (99.1%) who visited the Blood Donation Centre in January/February 2010 were residents of North Rhine‐Westphalia, while n = 39 were residents of more distant German federal states and were consequently excluded from our study. Seroprevalence was estimated using retention sample sera from the 3,460 included blood donors. The detection of IgG or IgM phase II antibodies by ELISA or of C. burnetii DNA by qPCR identified positive sera. Minimum age for blood donations in Germany is 18 years. Apart from standard exclusion criteria relating to donor blood safety, additional criteria were applied during the study period to exclude donors with increased risk of recent or acute Q fever infection (contact with livestock, such as cattle, sheep, goats, rabbits and ducks, or their excrements, over the preceding 5 weeks; living in the vicinity of a livestock holding; and signs or symptoms of fever, sweats, nausea, vomiting, diarrhoea, malaise or headaches in the five weeks preceding donation).
Cases of acute Q fever reported to the German public health authorities were retrieved from the publicly accessible notification database (SurvStat) of the Robert Koch Institute (Robert Koch Institute, 2016). For reasons of privacy protection, demographic blood donor information was limited to residential postcode and 10‐year age group. Based on postcode centroids as a proxy for residential address, GIS was used to map the geographical distribution of seropositive and seronegative donors, to determine seroprevalence in postcode areas and to extrapolate the seroprevalence to the general population in these areas.
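The per‐postcode extrapolation described above (donor seroprevalence applied to each postcode's general population) can be sketched in a few lines. All figures below are hypothetical, chosen only to show the mechanics, not study data.

```python
# Hypothetical per-postcode donor counts and populations:
# postcode: (donors_tested, donors_positive, population)
postcodes = {
    "52062": (120, 2, 25_000),
    "52070": (80, 0, 18_000),
    "52074": (60, 1, 30_000),
}

def extrapolated_infections(areas):
    """Apply each area's donor seroprevalence to its general population."""
    total = 0.0
    for tested, positive, population in areas.values():
        seroprevalence = positive / tested
        total += seroprevalence * population
    return total

estimated = extrapolated_infections(postcodes)  # crude population-level estimate
```

Note that, as in the study, areas with zero positive donors contribute nothing, which is one reason such extrapolations are sensitive to small per‐postcode sample sizes.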
Combined Dutch–German cross‐border study area A regression model, including distance zones of 20 km as dummy variables, adjusting for goat and sheep densities, veterinary Q fever notifications and sampling rates (i.e. number of individuals tested per 100,000 population in each postcode area), was used to assess the geographical relationship between seroprevalence (assumed to represent incidence rate of infection) and exposure dose (approximated by residential distance from the outbreak farm). In addition, we applied our fitted model, derived from the geographical distribution of attack rates in the Dutch study area, to predict seroprevalence rates in the German study area by distance from the outbreak farm, and tested the correlation between predicted and observed rates. We used Spatial Empirical Bayes Smoothing (where estimates per postcode are weighted against estimates in neighbouring areas sharing a common edge or border) to visualize our data by creating a smoothed map of seroprevalence rates in the combined Dutch–German cross‐border region (Figure 2). Computations were carried out in OpenGeoDa 1.2.0 (Anselin, 2005). Bayesian‐smoothed extrapolated Q fever seroprevalence rates in the Dutch–German cross‐border region by postcode area (dairy goat farm = location of the outbreak farm in South Limburg, Netherlands) We used multivariate linear regression to assess the relationship between postcode seroprevalence rates, log‐transformed for better visualization, and radial distance from the outbreak farm in 20‐km zones. Goat and sheep densities, veterinary Q fever outbreaks and sampling densities (per 100,000 population) were included as covariates.
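The idea behind the spatial empirical Bayes smoothing mentioned above can be sketched as follows: each area's raw rate is shrunk towards the mean rate of its contiguity neighbourhood, with less shrinkage for areas with larger populations. This mimics the approach used in OpenGeoDa but is a simplified moment estimator, not its exact implementation; the data and neighbour lists one would pass in are hypothetical.

```python
# Simplified spatial empirical Bayes rate smoothing (a sketch).
def seb_smooth(events, population, neighbours):
    smoothed = {}
    for area, nbrs in neighbours.items():
        group = [area] + list(nbrs)
        group_pop = sum(population[a] for a in group)
        prior_mean = sum(events[a] for a in group) / group_pop
        raw = events[area] / population[area]
        # Moment estimate of the between-area variance in the neighbourhood
        prior_var = max(
            sum(population[a] * (events[a] / population[a] - prior_mean) ** 2
                for a in group) / group_pop
            - prior_mean / (group_pop / len(group)),
            0.0,
        )
        denom = prior_var + prior_mean / population[area]
        # Larger populations -> weight closer to 1 -> less shrinkage
        weight = prior_var / denom if denom > 0 else 0.0
        smoothed[area] = weight * raw + (1 - weight) * prior_mean
    return smoothed
```

By construction, each smoothed rate lies between the area's raw rate and its neighbourhood mean, which is what produces the visually stabilized map for sparsely sampled postcodes.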
A priori, we chose to include goat and sheep densities as covariates in our multivariate regression model, irrespective of univariate linear regression outcome, given their important role as reservoirs and sources of human Q fever. Variables did not show collinearity.
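The regression set‐up described above (log‐transformed rates on 20‐km distance‐class dummies plus covariates) can be sketched with synthetic data. Only the design of the model follows the text; the data, coefficient values and variable names below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
distance_km = rng.uniform(0, 80, n)
# 20-km distance classes: 0 = 0-20 km (reference), 1 = 20-40, 2 = 40-60, 3 = 60-80
distance_class = np.minimum((distance_km // 20).astype(int), 3)

# Dummy-code the distance classes, leaving class 0 as the reference category
dummies = np.column_stack([(distance_class == k).astype(float) for k in (1, 2, 3)])
# Stand-ins for goat density, sheep density, notifications, sampling density
covariates = rng.uniform(0, 1, (n, 4))
X = np.column_stack([np.ones(n), dummies, covariates])

# Synthetic outcome: log seroprevalence declining with distance class
log_prev = 2.0 - 0.8 * distance_class + rng.normal(0, 0.2, n)
beta, *_ = np.linalg.lstsq(X, log_prev, rcond=None)
# beta[1:4] are the adjusted contrasts of each distance class vs. 0-20 km
```

With this coding, a monotone decline in seroprevalence with distance shows up as increasingly negative dummy coefficients relative to the innermost zone.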
Laboratory investigation Laboratory procedures performed during the Dutch outbreak were described previously (Hackert et al., 2012). All retention samples of the German blood donors and the Dutch serum samples were screened for anti‐Coxiella phase II IgG according to manufacturers' protocols (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany). A selection of negative sera (n = 128) and all positive or indeterminate sera was additionally tested for anti‐Coxiella phase II IgM (Serion ELISA classic, Institut Virion/Serion GmbH, Würzburg, Germany) and for the presence of Coxiella DNA using qPCR. We essentially applied the qPCR assay described elsewhere with a slightly modified TaqMan probe (Klee et al., 2006). Veterinary data Human Q fever cases from 2009 were linked to a single dairy goat farm in South Limburg, whose nearest distance to the Dutch–German border in a south‐eastern direction was 7.0 km (Hackert et al., 2012; van Leuken et al., 2013).
Municipality‐level data on goat, sheep and cattle population densities (number of animals per km2) in the Dutch study area were retrieved from the National Bureau of Statistics (Statistics Netherlands, Statline, 2009). Comparable animal data in the German study area were obtained from the statistical bureau of North Rhine‐Westphalia, according to its 2010 livestock census (Information und Technik, Nordrhein‐Westfalen, Geschäftsbereich Statistik, Statistische Berichte, Viehaltungen und Viehbestände am 1. März 2010, Ergebnisse der Landwirtschaftszählung). These data were available on a district level only. Confirmed (but not suspect) cases of Q fever in ruminants (goat, sheep and cattle) are notifiable under German law. Data on Q fever notifications in ruminants were retrieved from the Animal Disease Reporting System (TSN), the standard electronic system for registration of all notifiable and reportable animal diseases in Germany, for the entire German study area and study period (Probst, 2010).
Statistical analysis Statistical analyses were performed using IBM SPSS Statistics, version 21 (IBM Inc.). We performed bootstrapping on our estimates using non‐linear regression to obtain more robust confidence interval estimates. We report B values along with their 95% confidence intervals derived from bootstrapping.
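A percentile bootstrap of the kind mentioned above can be illustrated in a few lines. This is a sketch applied to a simple proportion rather than the study's non‐linear regression coefficients; the sample data and function name are ours.

```python
import random

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    stats = sorted(
        statistic([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = stats[int(n_boot * alpha / 2)]
    hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# e.g. seropositivity indicators for a hypothetical sample of 100 donors
sample = [1] * 5 + [0] * 95
ci = bootstrap_ci(sample, lambda xs: sum(xs) / len(xs))
```

Resampling the data with replacement and taking empirical percentiles avoids distributional assumptions, which is why bootstrap intervals are often more robust for skewed estimates than normal-theory ones.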
RESULTS: Overall seroprevalence in the Dutch–German cross‐border region Our smoothed map shows a large area of adjacent postcodes affected by human Q fever surrounding the outbreak farm, spreading over distances of more than 40 km into the German border region. Baseline seroprevalence in the outbreak farm township, adjusted for age and gender, was 16.1%. Calculated mean seroprevalence for the Dutch study area, derived from the observed gradient of attack rates, was 3.6%, close to the 4.4% observed in the age‐ and gender‐adjusted general population sample from the Dutch study area dating from 2010.
The difference between mean calculated and mean observed seroprevalence was statistically not significant by t test (p = .26). Observed seroprevalence in blood donors from the German study area was 0.9% (31/3,460) overall, extrapolating to a total of 11,308 infections in those postcode areas with at least one seropositive blood donor. Among the positive blood donors, 61.3% (19/31) had a serological profile (anti-Coxiella phase II IgM) indicative of a fresh or recent infection and/or were Coxiella DNA positive: 13 donors had a solitary anti-Coxiella phase II IgM (one of whom was also qPCR positive) and six were positive for both phase II IgM and IgG, while 15 had a solitary phase II IgG response.
Seroprevalence over 20-km distance classes from the outbreak farm

Seroprevalence rates across radial 20-km distance classes declined with increasing distance from the outbreak farm (Table 2 and Figure 3), comparable with log-transformed estimates in our multivariate linear regression model. Multivariate analysis was performed twice, with and without inclusion of a high-seroprevalence postcode on the German side, located at a distance of 54 km from the outbreak farm and figuring as a dark-coloured 'hotspot' in our smoothed map. This postcode counted six donors, one of whom was seropositive (seroprevalence = 16.7%). Given that seroprevalence in surrounding postcodes was low (i.e. 1.6% for the entire district including 18 postcode areas and 257 donors), and there were no reports of Q fever in livestock from the district during the study period, high seroprevalence in this postcode was unlikely to reflect locally acquired infection. This 'outlier' had limited impact on our findings.

Table 2: Blood donor test results and Q fever seroprevalence rates in postcode area populations in radial 20-km distance classes from the index dairy goat farm in South Limburg, Netherlands. Based on calculated seroprevalence in the Dutch area (derived from our exponential model and the baseline sample from the outbreak farm township general population) and observed seroprevalence in German blood donors; adjusted for goat and sheep density, veterinary Q fever notifications and sampling density per 100,000 population, with 95% confidence intervals derived from bootstrapping (in brackets).
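The exponential decline across distance classes can be sketched numerically. The sketch below fits a log-linear (i.e. exponential) decay to the class-specific rates reported for this study (2,302, 1,122 and 432 per 100,000 for the 0–19, 20–39 and 40–59 km classes); using class midpoints as distance values is an illustrative simplification, not the study's actual SPSS model.

```python
import numpy as np

# Estimated mean seroprevalence per 100,000 by radial distance class,
# as reported in this study (the >=60 km class was 0 and is left out of
# the log fit). Class midpoints are an illustrative simplification.
midpoint = np.array([10.0, 30.0, 50.0])   # km
rate = np.array([2302.0, 1122.0, 432.0])  # per 100,000

# Fit log(rate) = a + b * distance, i.e. rate = exp(a) * exp(b * distance).
b, a = np.polyfit(midpoint, np.log(rate), 1)
halving = np.log(2) / -b                  # distance over which the rate halves
print(f"decay rate b = {b:.4f} per km, rate halves every ~{halving:.0f} km")
```

With these three class rates the fitted decay corresponds to a halving of seroprevalence roughly every 17 km, consistent with the steep west–east gradient described in the text.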
Figure 3: Log-transformed seroprevalence in radial 20-km distance classes from the outbreak farm; point estimates based on multivariate linear regression including residential distance from the outbreak farm, livestock densities (sheep and goats) and screening rates as predictors, with 95% confidence intervals derived from bootstrapping, including the 'outlier' postcode in the German study area [Colour figure can be viewed at wileyonlinelibrary.com]
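The bootstrapped 95% confidence intervals reported for the regression coefficients can be illustrated with a minimal sketch: resample postcode-level observations with replacement, refit, and take percentiles of the resampled coefficients. The data below are synthetic and the OLS-on-log-rates setup is an assumption for illustration; the study's own analyses were run in SPSS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic postcode-level data (NOT the study data): log seroprevalence
# as a function of distance from the farm and goat density.
n = 200
distance = rng.uniform(0, 80, n)          # km from the outbreak farm
goats = rng.uniform(0, 5, n)              # goats per km2 (assumed scale)
log_sero = 3.0 - 0.04 * distance + 0.1 * goats + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), distance, goats])

def fit(X, y):
    """Least-squares regression coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

b = fit(X, log_sero)                      # point estimates (B values)

# Non-parametric bootstrap: resample rows with replacement, refit, and
# take percentiles of the resampled coefficients as 95% CIs.
boot = np.empty((2000, 3))
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = fit(X[idx], log_sero[idx])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)

for name, est, lo, hi in zip(["intercept", "distance", "goat density"],
                             b, ci_low, ci_high):
    print(f"{name}: B = {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Resampling whole rows (cases) preserves the joint distribution of outcome and covariates, which is why this simple scheme yields CIs that are robust to non-normal residuals.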
Predicted versus observed seroprevalence in the German study area

Mean predicted log-transformed seroprevalence for all postcode areas included in our German study area, based on the gradient of attack rates of acute Q fever observed in the Dutch study area, was 0.27 (untransformed 27 per 100,000), while mean observed log-transformed seroprevalence was 0.43 (untransformed 262 per 100,000). This difference between predicted and observed log-transformed seroprevalence estimates was statistically not significant by t test (p = .08). Correlation between predicted and observed log-transformed values was 0.48 using Pearson correlation (p < .001) and 0.49 using Spearman's rho (p < .001).
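The predicted-versus-observed comparison (t test plus Pearson and Spearman correlations) can be reproduced in outline as follows. The data are synthetic stand-ins, and treating the t test as a paired test on per-postcode differences is an assumption; the text does not specify the exact variant used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins (NOT the study data): predicted and observed
# log-transformed seroprevalence for 122 postcode areas.
n = 122
predicted = rng.gamma(1.5, 0.2, n)
observed = 0.6 * predicted + rng.gamma(1.0, 0.15, n)

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks (no ties here)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# Paired t statistic for the mean difference between the two series.
d = observed - predicted
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))

print(f"t = {t_stat:.2f}, Pearson r = {pearson(predicted, observed):.2f}, "
      f"Spearman rho = {spearman(predicted, observed):.2f}")
```

Computing Spearman's rho as the Pearson correlation of ranks makes explicit why the two coefficients can agree closely (here 0.48 vs. 0.49 in the study) when the relationship is roughly monotone and linear.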
DISCUSSION: Transmission of Q fever over distances of at least 30 km has been described before (Eldin et al., 2017; Tissot-Dupont, Amadei, Nezri, & Raoult, 2004). Our study, however, is the first to provide evidence of large-scale yet undetected long-distance transmission of Q fever in a Dutch–German cross-border context, associated with the 2007–2010 Q fever epidemic in the Netherlands. Presumed transmission into neighbouring German counties took place over distances of 40 km or more from a single dairy goat farm located in South Limburg, the southernmost part of the Netherlands. Our study suggests that cross-border transmission, in spite of evidence for massive numbers of infections dispersed over a wide geographical area, went largely undetected, putting susceptible patients at risk of long-term sequelae, most notably chronic Q fever, due to delayed diagnosis, missed diagnosis or misdiagnosis. Extrapolating from our sample of blood donors, the estimated number of unreported infections in the affected German border area may have been as high as 11,000. Our data suggest that the risk of Q fever infections going undetected was higher on the German than on the Dutch side of the border, with ascertainment rates (i.e. numbers of notified adult cases vs. estimated numbers of infections) being at least 10 times lower on the German side (Hackert et al., 2015, 2012). Based on unpublished hospital data from South Limburg, with 17 confirmed cases of Q fever, we estimate that the 10-year incidence of chronic Q fever is 1 in 1,000 infected individuals, with a case fatality rate of approximately 70%. As yet unpublished data from the German cross-border area, showing high numbers of overlooked or misdiagnosed chronic Q fever infections in hospitalized patients, lend further support to our findings.
The magnitude and relevance of this ongoing public health concern are underlined by recent figures from the Netherlands. As of 2018, the Netherlands had counted a total of 519 chronic Q fever patients since 2007. Of these, 86 patients had died, 21 of whom between 2016 and 2018 alone (Radboud University Medical Center, 2018). While incidence and prevalence of Q fever fatigue syndrome have been registered neither regionally nor nationally, studies suggest that approximately 20% of Q fever patients are affected by long‐term fatigue persisting for up to 20 years, adding to the disease burden related to the Dutch epidemic (Morroy et al., 2016). Seroprevalence in our study declined exponentially with increasing distance from the outbreak farm across four 20‐km zones. The observed west–east gradient is compatible with dispersal of C. burnetii from the farm, given that transmission of Q fever is usually airborne, and westerly winds were predominant in the study area at the time of Q fever‐related abortions in pregnant goats on the outbreak farm. Observed seroprevalence rates in the German study area were higher than rates predicted by our exponential model, with moderate statistical correlation between the two. Alternative sources, such as contaminated manure, wildlife reservoirs and sheep flocks that migrate over longer distances and have been shown to carry C. burnetii in clinically inconspicuous animals, may have contributed to human infections in the German border region, but these phenomena would not explain the geographical distribution observed (Hermans, Jeurissen, Hackert, & Hoebe, 2014; Hilbert et al., 2012; Webster, Lloyd, & Macdonald, 1995). While reports of Q fever in livestock in the German border region during the study period may implicate cattle as potential sources, a recent study found that human contact with sheep and goats, rather than cattle, was a consistent risk factor in human outbreaks (Georgiev et al., 2013; Verso et al., 2016). 
Goat and sheep densities in the German cross-border area, however, were rather low (see Table 1). While goats and sheep may have contributed to our seropositive findings, they thus seem unlikely sources for large numbers of human infections spread over a wide area. In addition, our data did not reveal any plausible local source other than the outbreak farm. Nevertheless, while the observed west–east gradient of seroprevalence rates in the population is consistent with airborne dispersal of C. burnetii from the index farm, we cannot exclude other routes of transmission contributing to the observed geographical distribution of infections. For example, active human mobility in the vicinity of dairy goat farms with a history of Q fever-related abortion waves has been shown to increase the risk for positive Q fever serology (Klous et al., 2019). The province of Limburg is the most popular day-travel destination for German tourists (ZKA Consultants & Planners, 2012). Moreover, 1.7% of workers in the region are cross-border commuters from Germany. Thus, an unknown proportion of blood donors may have acquired the infection through travel or transit through South Limburg. Blood donors living closest to the border could be expected to have the highest risk of infection related to airborne transmission and travel alike, as they would be most likely to undertake day trips or commute into neighbouring South Limburg. Human cross-border movement as a contributing factor would not diminish the relevance of our findings or change the fact that hidden transmission revealed by our data would have important implications for cross-border communicable disease control in terms of alerting members of the public and the medical profession to potential risks of exposure and associated health hazards. Under-ascertainment and under-reporting of Q fever are usually attributed to its mild and non-specific clinical presentation.
Primary infections are often mild, sometimes resembling a common cold, or asymptomatic, diagnosed only retrospectively through systematic testing (Eldin et al., 2017). This phenomenon is mirrored by a growing number of studies where seroprevalence rates in the population exceed reported cases of symptomatic Q fever. One such study from Denmark showed a rate of 64% of asymptomatic primary infections, while a study from Italy reported 30 seropositive individuals with no related episodes of respiratory or febrile disease (Bacci et al., 2012; Verso et al., 2016). A recent study from Spain found fever—usually considered the hallmark of symptomatic infection—to be absent in almost a third of 39 Q fever positive cases, even though all cases without fever had pneumonia. Interestingly, a systematic review by the same authors reported the absence of traditional risk factors, most notably animal exposure, in a majority of almost 1,500 included Q fever cases, with as much as 60% of cases living in urban areas, raising the possibility of airborne and other routes of transmission in these cases (Alende-Castro et al., 2018). Seroprevalence studies from the Netherlands following the Dutch 2007–2010 Q fever epidemic revealed incident Q fever infections to exceed notified infections by factors of ten and higher (Hackert et al., 2015, 2012; Hogema et al., 2012; van der Hoek et al., 2012). A syndromic surveillance study that retrospectively identified three clusters of lower respiratory infections dating from 2005 and 2006 plausibly linked to the Dutch epidemic appears to confirm that even clusters of more severe disease may easily be missed by physicians (van den Wijngaard et al., 2011).
In the case of the cross-border outbreak we describe here, limited attention paid to the outbreak in South Limburg by regional German media, health professionals and members of the public may have influenced people's help-seeking behaviour and resulted in a low index of suspicion towards Q fever in clinical cases, as well as misperceptions about the outbreak's potential for widespread cross-border transmission. Also, since goat husbandry is uncommon in Germany, there may have been a mistaken belief that the epidemic was just a domestic Dutch problem, reinforced by unfamiliarity with C. burnetii's potential for airborne transmission over long distances. Our study has limitations. While we had data for the Dutch region showing that pre-outbreak seroprevalence in 2008 was as low as 0.5%, we had no pre-outbreak data for the German study area. Seropositive findings in the blood donors may thus reflect past exposure to sources other than the Dutch outbreak farm. However, more than 60% of the positive blood donors had serological profiles indicative of acute or recent infection, arguing for a close temporal association with the South Limburg outbreak. Seroresponse time of anti-Coxiella phase II IgM antibodies, that is, the period from onset of symptoms to the onset of phase II IgM seroresponse, appears to be extremely variable, ranging from zero to seven months with a median of less than one month (Wielders, Teunis, Hermans, van der Hoek, & Schneeberger, 2015). The same is true for phase II IgG seroresponse. Blood donors from the German cross-border area were recruited and tested 10–11 months after the peak of abortions on the outbreak farm in South Limburg. Thus, our solitary phase II IgM findings are highly suggestive of recent infections incurred somewhere between mid-2009 and early 2010, well in line with the South Limburg outbreak, where new cases were reported throughout the entire period from April 2009 to March 2010, with a peak in May 2009.
Any link to events predating the South Limburg outbreak can virtually be ruled out in blood donors with solitary phase II IgM response. Blood donors with combined phase II IgM and IgG serology also may be linked to the South Limburg outbreak, although infections incurred earlier cannot be ruled out in these cases. Median half-time decay rates of IgM phase II antibodies appear to vary widely, ranging from less than a month to several years. Thus, even solitary phase II IgG findings would fit our hypothesis of a link with the South Limburg outbreak (Teunis et al., 2013). Seroprevalence in 2010 German blood donors living in the city of Aachen, which borders directly with South Limburg, was more than twice as high as 2008 pre-outbreak seroprevalence in South Limburg. Since there are no natural or man-made obstacles standing in the way of airborne transmission between the eastern part of South Limburg and Aachen, any Q fever events on either side of the border would likely impact both areas in similar ways, depending, among other factors, on the prevailing wind direction at the time of the outbreak. Conversely, we would expect pre-outbreak seroprevalence in Aachen not to be higher than in neighbouring South Limburg, suggesting that the higher seroprevalence rate observed in 2010 blood donors may reflect a real increase in Q fever infections. Overall precision of our data for the German study area was limited. Sample sizes of blood donors per postcode were small, particularly in postcodes located at larger distances from the outbreak farm. For lack of individual blood donor data such as residential address and travel patterns, we had to use postcode centroids as a proxy, resulting in low resolution and possibly misclassification regarding exposure location. When interpreting seroprevalence and incidence rates in blood donors, one always needs to realize that the study population consists of adult, healthy blood donors, not of randomly selected citizens.
However, while donors in many cases poorly represent the general population, infections incurred through airborne transmission are generally independent of donor status, reducing biases caused by the comparison of donors and the general population. A Dutch study among blood donors showed that the age and sex distribution of the study population was very similar to the age and sex distribution of the notified Q fever cases in the Netherlands (Hogema et al., 2012). A recent Australian study found the seroprevalence in blood donors to be lower than in the general population, but indicates that different laboratory methods and population sampling may account for some of the differences (Gidding et al., 2019). Seroprevalence in our group of blood donors also may have underestimated the true seroprevalence in the general population, due to the selection process, which excluded donor candidates with signs of acute or recent infection, as well as those with known risk exposures. Ideally, our findings should be replicated by serological studies of preserved pre- and post-outbreak human samples from other Dutch–German border regions that likely were affected by the Dutch epidemic in 2007–2010. Meanwhile, in the absence of such studies, our findings argue for intensified and harmonized cross-border communicable disease control, including public health communication to professionals, public and media, as well as exchange of data suitable for surveillance, detection and early warning. In addition, we urgently recommend that patients, who live in affected areas and have predisposing conditions, serological evidence or clinical symptoms consistent with persistent focalized (chronic) Q fever infection, should be considered for low-threshold screening, keeping in mind that chronic Q fever may have atypical presentations (Melenotte, Million, & Raoult, 2020).
ETHICAL APPROVAL: Our study was ethically approved by the medical ethics committee of the Maastricht University Medical Centre (number 104034) and the medical ethics committee of the RWTH Aachen University Hospital (number EK 026‐10) and conforms to internationally recognized standards (Declaration of Helsinki). CONFLICT OF INTERESTS: The authors declare they have no conflict of interests.
Background: Following outbreaks in other parts of the Netherlands, the Dutch border region of South Limburg experienced a large-scale outbreak of human Q fever related to a single dairy goat farm in 2009, with surprisingly few cases reported from neighbouring German counties. Late chronic Q fever, with recent spikes of newly detected cases, is an ongoing public health concern in the Netherlands. We aimed to assess the scope and scale of any undetected cross-border transmission to neighbouring German counties, where individuals unknowingly exposed may carry extra risk of overlooked diagnosis.

Methods: (A) Seroprevalence rates in the Dutch area were estimated by fitting an exponential gradient to the geographical distribution of notified acute human Q fever cases, using seroprevalence in a sample of farm township inhabitants as baseline. (B) Seroprevalence rates in 122 neighbouring German postcode areas were estimated from a sample of blood donors living in these areas and attending the regional blood donation centre in January/February 2010 (n = 3,460). (C) Using multivariate linear regression, including goat and sheep densities, veterinary Q fever notifications and blood donor sampling densities as covariates, we assessed whether seroprevalence rates across the entire border region were associated with distance from the farm.

Results: (A) Seroprevalence in the outbreak farm's township was 16.1%. Overall seroprevalence in the Dutch area was 3.6%. (B) Overall seroprevalence in the German area was 0.9%. Estimated mean seroprevalence rates (per 100,000 population) declined with increasing distance from the outbreak farm (0-19 km = 2,302, 20-39 km = 1,122, 40-59 km = 432 and ≥60 km = 0). Decline was linear in multivariate regression using log-transformed seroprevalence rates (0-19 km = 2.9 [95% confidence interval (CI) = 2.6 to 3.2], 20-39 km = 1.9 [95% CI = 1.0 to 2.8], 40-59 km = 0.6 [95% CI = -0.2 to 1.3] and ≥60 km = 0.0 [95% CI = -0.3 to 0.3]).
Conclusions: Our findings were suggestive of widespread cross-border transmission, with thousands of undetected infections, arguing for intensified cross-border collaboration and surveillance and screening of individuals susceptible to chronic Q fever in the affected area.
10,391
476
[ 587, 331, 69, 290, 296, 320, 134, 228, 48, 235, 327, 113, 48 ]
17
[ "study", "area", "seroprevalence", "german", "fever", "outbreak", "farm", "dutch", "study area", "blood" ]
[ "fever bacterial zoonosis", "fever livestock german", "fever outbreaks", "fever notifications ruminants", "cases fever ruminants" ]
[CONTENT] communicable disease control | Coxiella burnetii infection | international health regulations | one health | outbreaks | Q fever [SUMMARY]
[CONTENT] Animals | Antibodies, Bacterial | Blood Specimen Collection | Communicable Diseases, Imported | Coxiella burnetii | Diagnostic Tests, Routine | Disease Outbreaks | Germany | Humans | Immunoglobulin G | Immunoglobulin M | Linear Models | Mass Screening | Netherlands | Q Fever | Real-Time Polymerase Chain Reaction | Seroepidemiologic Studies | Sheep [SUMMARY]
[CONTENT] fever | transmission | cases | 2012 | border | chronic | infected | chronic fever | limburg | south [SUMMARY]
[CONTENT] seroprevalence | observed | area | postcode | log | log transformed | transformed | farm | outbreak farm | outbreak [SUMMARY]
[CONTENT] area | seroprevalence | study | outbreak | study area | fever | german | farm | blood | dutch [SUMMARY]
Role of Immunotherapy in Stage IV Large Cell Neuroendocrine Carcinoma of the Lung.
33639649
Despite approvals of immune checkpoint inhibitors in both small cell and non-small cell lung cancers, the role of immunotherapy in large cell neuroendocrine carcinoma (LCNEC) in lung is undefined.
BACKGROUND
Using the National Cancer Database (NCDB), Stage IV lung LCNEC cases diagnosed from 2014 to 2016 were analyzed. Information regarding cancer treatment was limited to first course of therapy, including surgery for primary lesion, radiation, chemotherapy, and immunotherapy. Survival analysis was performed using Kaplan-Meier curves and Log-rank tests. Cox proportional hazard model was used for multivariate analysis.
METHODS
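The Kaplan-Meier estimation named in the methods can be sketched by hand. The cohort below is synthetic (the follow-up distribution and ~20% censoring fraction are assumptions), so this illustrates the estimator itself, not the NCDB analysis.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival curve as a list of (time, survival) steps.

    time  : follow-up time per patient
    event : True if death observed, False if censored
    """
    surv, curve = 1.0, []
    for t in np.unique(time[event]):        # distinct event times, ascending
        at_risk = np.sum(time >= t)         # patients still under observation
        deaths = np.sum((time == t) & event)
        surv *= 1.0 - deaths / at_risk      # product-limit update
        curve.append((t, surv))
    return curve

# Tiny synthetic cohort (months of follow-up; NOT the NCDB data).
rng = np.random.default_rng(2)
time = rng.exponential(12.0, 60)
event = rng.random(60) < 0.8                # ~20% of patients censored
curve = kaplan_meier(time, event)
s12 = [s for t, s in curve if t <= 12][-1]  # estimated 12-month survival
print(f"estimated 12-month survival: {s12:.2f}")
```

Censored patients leave the risk set without triggering a step, which is how the estimator uses incomplete follow-up; landmark survivals such as the 12- and 18-month figures in the results are simply the curve's value at those times.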
Among 661 eligible cases, 37 patients were treated with immunotherapy. No significant association between use of immunotherapy and clinical demographics was observed except for use of chemotherapy (p = 0.0008). Chemotherapy was administered in 34 (92%) and 406 (65%) patients in the immunotherapy and non-immunotherapy groups, respectively. Use of immunotherapy was associated with improved overall survival (Log-rank p = 0.0018). Landmark analysis in the immunotherapy group showed 12- and 18-month survivals of 34.0% and 29.1%, respectively, whereas those in the non-immunotherapy group were 24.1% and 15.0%, respectively. Multivariate analysis demonstrated that female sex (HR = 0.79, p = 0.0063), liver metastases (HR = 0.75, p = 0.0392), surgery (HR = 0.50, p < 0.0001), use of chemotherapy (HR = 0.44, p < 0.0001), and use of immunotherapy (HR = 0.64, p = 0.0164) had statistical significance. Propensity score matching in overall survival analysis showed a nonsignificant trend (p = 0.0733) in favor of immunotherapy treatment.
RESULTS
This retrospective study using NCDB suggests that use of immunotherapy may improve survival of LCNEC patients.
CONCLUSION
[ "Aged", "Antineoplastic Agents, Immunological", "Carcinoma, Large Cell", "Carcinoma, Neuroendocrine", "Databases, Factual", "Female", "Humans", "Immunotherapy", "Lung Neoplasms", "Male", "Middle Aged", "Neoplasm Staging", "Propensity Score", "Retrospective Studies", "Survival Analysis", "United States" ]
8190341
Introduction
Large cell neuroendocrine carcinoma (LCNEC) is a relatively rare histologic subtype, accounting for approximately 3% of lung cancer cases (Fasano et al., 2015, Rekhtman, 2010). It has been increasingly recognized in recent decades since Travis et al. first described it in the early 1990s (Travis et al., 1991). Despite its neuroendocrine features, it was initially classified as a variant of large cell carcinoma by the 2004 World Health Organization (WHO) classification. The current 2015 version of the WHO classification lists it in a group of neuroendocrine neoplasms along with typical carcinoid, atypical carcinoid, and small cell carcinoma (Travis et al., 2015). Due to the scarcity of cases and the difficulty of diagnosis with small biopsy samples, standard systemic therapy for advanced-stage disease has not been well established. To our knowledge, no prospective, randomized study comparing multiple systemic regimens has been reported in the literature. The limited literature of retrospective studies, case reviews, and single-arm prospective trials suggests that regimens used for small cell lung cancer (i.e., platinum plus etoposide) are superior to those used for non-small cell lung cancer (i.e., platinum plus taxane) and result in improved patient outcomes for stage IV cancers, as well as for early stages when the chemotherapy is given as adjuvant therapy (Sun et al., 2012, Zhang et al., 2020, Le Treut et al., 2013, Niho et al., 2013), although contradictory reports exist (Derks et al., 2017). Recent progress in the development of immune checkpoint inhibitors has dramatically changed survival outcomes and disease management for lung cancer. Several agents have been approved as monotherapy or are used in combination with chemotherapy agents for a number of human cancer types. Pembrolizumab, nivolumab, atezolizumab, and durvalumab are approved by the FDA for the treatment of advanced non-small cell lung cancer (NSCLC). 
Treatment with atezolizumab or durvalumab, monoclonal antibodies directed against programmed cell death ligand 1 (PD-L1), resulted in improved overall survival of patients with extensive-stage small cell lung cancer (ES-SCLC) when combined with first-line chemotherapy (Horn et al., 2018, Paz-Ares et al., 2019). Although these agents are currently being investigated in early-stage settings of SCLC and NSCLC, rare cancer subtypes such as LCNEC may not be investigated soon due to the paucity of the disease. It seems unlikely that controlled randomized studies will be conducted specifically for LCNEC in the next few decades. Because of the limitations of retrospective case series and the lack of potential for prospective clinical trials in rare diseases such as LCNEC, cancer researchers commonly use large databases to analyze rare cancer types. Although there are some limitations, this approach allows assessment of prognosis and of the impact of therapeutic interventions across a larger patient population. Using the National Cancer Database (NCDB), we investigated whether the use of immunotherapeutic agents influences overall survival in patients with stage IV LCNEC.
null
null
Results
Patient characteristics A total of 661 patients with stage IV LCNEC diagnosed between 2014 and 2016 met eligibility for this study (Table 1). Among those, 37 and 624 patients were assigned to the IO and non-IO groups, respectively. In the IO group, the majority of cases were categorized as follows: less than age 70 (68%), male sex (62%), white, insured (100%), treated at non-academic centers (54%), Charlson-Deyo (CD) comorbidity score of 0-1, absence of brain metastasis, absence of liver metastasis (73%), absence of surgery (97%), and presence of chemotherapy. There was no significant association between the clinical characteristics and presence/absence of IO except for use of chemotherapy; more patients (92%) in the IO group received chemotherapy than in the non-IO group. In the propensity matched analysis, all the variables were matched and balanced. No significant correlation between any variable and IO status was observed (Table 2). Due to restrictions by CoC and NCDB, cells with fewer than 10 cases are not provided in Table 1 or 2. Survival analysis Univariate and multivariate analyses were conducted for the original cohort. In the univariate analysis, significantly improved survival was seen with young age, female sex, non-white race, academic institution, CD score of 0-1, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO. Female sex, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO remained statistically significant in multivariate analyses (Table 3). Kaplan-Meier and Log-rank methods demonstrated statistically significantly improved survival in the original cohort (p=0.0018) and a non-significant trend in the propensity score matched cohort (p=0.0733) (Figure 2). Study Flow Diagram of Case Eligibility. 
NSCLC, non-small cell lung cancer; NCDB, National Cancer Database; OS, overall survival; LCNEC, large cell neuroendocrine carcinoma; IO, immunotherapy Clinical Characteristics of Stage IV LCNEC Patients with or without Immunotherapy Note: Due to NCDB agreement, cells with less than 10 cases in Race, Charlson-Deyo score, Brain metastasis and Chemotherapy were combined with other opposing cells. Clinical Characteristics of Stage IV LCNEC Patients with or without Immunotherapy: Propensity Score Matched Cases Note: Due to NCDB agreement, cells with less than 10 cases in Race, Charlson-Deyo score, Brain metastasis and Chemotherapy were combined with other opposing cells Overall Survival According to Use of Immunotherapy Univariate and Multivariate Analysis in Original Cohort
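The propensity score matching reported above can be sketched as logistic-regression scores followed by greedy 1:1 nearest-neighbor matching. This is a simplified, hypothetical illustration on synthetic binary covariates, not the authors' actual procedure (performed per the XLSTAT guideline) or data:

```python
import math
import random

# Hypothetical sketch: plain gradient-descent logistic regression for
# propensity scores (treated = IO use), then greedy 1:1 matching.
def fit_logistic(X, y, lr=0.1, steps=2000):
    """Returns (weights, bias) for P(treated=1 | x)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(steps):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            for j, xj in enumerate(xi):
                gw[j] += (p - yi) * xj
            gb += p - yi
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def match(scores, treated):
    """Greedy 1:1 match of each treated case to the closest unused control."""
    controls = [i for i, t in enumerate(treated) if t == 0]
    pairs = []
    for i in [i for i, t in enumerate(treated) if t == 1]:
        j = min(controls, key=lambda c: abs(scores[c] - scores[i]))
        controls.remove(j)
        pairs.append((i, j))
    return pairs

# Toy cohort: 5 treated, 20 controls, two binary covariates.
random.seed(0)
treated = [1] * 5 + [0] * 20
X = [[random.randint(0, 1), random.randint(0, 1)] for _ in range(25)]
w, b = fit_logistic(X, treated)
scores = [1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
          for xi in X]
pairs = match(scores, treated)
```

Real implementations typically add a caliper on the score distance; the greedy pass here is only the core idea.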
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussion" ]
[ "Large cell neuroendocrine carcinoma (LCNEC) is a relatively rare histologic subtype, accounting for approximately 3% of lung cancer cases (Fasano et al., 2015, Rekhtman, 2010). It has been increasingly recognized in recent decades since Travis et al. first described it in the early 1990s (Travis et al., 1991). Despite its neuroendocrine features, it was initially classified as a variant of large cell carcinoma by the 2004 World Health Organization (WHO) classification. The current 2015 version of the WHO classification lists it in a group of neuroendocrine neoplasms along with typical carcinoid, atypical carcinoid, and small cell carcinoma (Travis et al., 2015).\nDue to the scarcity of cases and the difficulty of diagnosis with small biopsy samples, standard systemic therapy for advanced-stage disease has not been well established. To our knowledge, no prospective, randomized study comparing multiple systemic regimens has been reported in the literature. The limited literature of retrospective studies, case reviews, and single-arm prospective trials suggests that regimens used for small cell lung cancer (i.e., platinum plus etoposide) are superior to those used for non-small cell lung cancer (i.e., platinum plus taxane) and result in improved patient outcomes for stage IV cancers, as well as for early stages when the chemotherapy is given as adjuvant therapy (Sun et al., 2012, Zhang et al., 2020, Le Treut et al., 2013, Niho et al., 2013), although contradictory reports exist (Derks et al., 2017).\nRecent progress in the development of immune checkpoint inhibitors has dramatically changed survival outcomes and disease management for lung cancer. Several agents have been approved as monotherapy or are used in combination with chemotherapy agents for a number of human cancer types. Pembrolizumab, nivolumab, atezolizumab, and durvalumab are approved by the FDA for the treatment of advanced non-small cell lung cancer (NSCLC). 
Treatment with atezolizumab or durvalumab, monoclonal antibodies directed against programmed cell death ligand 1 (PD-L1), resulted in improved overall survival of patients with extensive-stage small cell lung cancer (ES-SCLC) when combined with first-line chemotherapy (Horn et al., 2018, Paz-Ares et al., 2019). Although these agents are currently being investigated in early-stage settings of SCLC and NSCLC, rare cancer subtypes such as LCNEC may not be investigated soon due to the paucity of the disease. It seems unlikely that controlled randomized studies will be conducted specifically for LCNEC in the next few decades. \nBecause of the limitations of retrospective case series and the lack of potential for prospective clinical trials in rare diseases such as LCNEC, cancer researchers commonly use large databases to analyze rare cancer types. Although there are some limitations, this approach allows assessment of prognosis and of the impact of therapeutic interventions across a larger patient population. Using the National Cancer Database (NCDB), we investigated whether the use of immunotherapeutic agents influences overall survival in patients with stage IV LCNEC.", "\nNCDB\n\nThe National Cancer Data Base (NCDB) is a joint project of the Commission on Cancer (CoC) of the American College of Surgeons and the American Cancer Society. The CoC’s NCDB and the hospitals participating in the CoC NCDB are the source of the de-identified data used herein; they have not verified and are not responsible for the statistical validity of the data analysis or the conclusions derived by the authors. The data are considered hospital-based rather than population-based.\nAfter obtaining approval by the CoC, access to information on deidentified cases with stage IV NSCLC was granted in October 2019. A total of 101,169 adult cases diagnosed between 2014 and 2016 at CoC-participating institutions in the United States were screened for the current study. 
Eligible cases must have the diagnostic ICD-O-3 code for LCNEC (8013/3) and have survived for at least one month (Figure 1). Presence or absence of IO (immunotherapy) as the first course of therapy was available. Cases were assigned to IO-positive vs. IO-negative groups. Information regarding the name, regimen, dose, and dosing frequency of IO was not available. \nAvailable background characteristics included age (<70 vs. 70+), sex (male vs. female), race (white vs. others), type of institution (academic vs. non-academic), presence of insurance, Charlson-Deyo comorbidity score (0-1 vs. 2-3), presence of brain/liver metastases, use of external beam radiation, and use of multiagent chemotherapy in the first course of therapy. Reporting any cell with fewer than 10 cases was prohibited according to the agreement with the CoC and NCDB.\nOverall survival data were available according to IO status in the first course of therapy. Progression-free, time-to-progression, and other survival data were not available.\n\nStatistics\n\nRelationships between clinical characteristics and use of IO were determined by chi-square tests. Survival analysis was conducted using Kaplan-Meier and Log-rank methods. A p-value of less than 0.05 on a two-tailed statistical analysis was considered significant. Univariate and multivariate Cox proportional hazard analyses were performed using JMP version 14 (SAS Institute, Cary, NC, USA). Propensity score matching analysis included all the variables listed in Table 1 and was performed according to the XLSTAT software guideline (Rosenbaum, 1989). \nThis is a hospital-based study that involves no identifiable information for individuals throughout the analyses. This study was reviewed by the institutional review board at Parkview Health and was designated exempt from human subject research.", "\nPatient characteristics\n\nA total of 661 patients with stage IV LCNEC diagnosed between 2014 and 2016 met eligibility for this study (Table 1). 
Among those, 37 and 624 patients were assigned to the IO and non-IO groups, respectively. \nIn the IO group, the majority of cases were categorized as follows: less than age 70 (68%), male sex (62%), white, insured (100%), treated at non-academic centers (54%), Charlson-Deyo (CD) comorbidity score of 0-1, absence of brain metastasis, absence of liver metastasis (73%), absence of surgery (97%), and presence of chemotherapy. There was no significant association between the clinical characteristics and presence/absence of IO except for use of chemotherapy; more patients (92%) in the IO group received chemotherapy than in the non-IO group. In the propensity matched analysis, all the variables were matched and balanced. No significant correlation between any variable and IO status was observed (Table 2). Due to restrictions by CoC and NCDB, cells with fewer than 10 cases are not provided in Table 1 or 2.\n\nSurvival analysis\n\nUnivariate and multivariate analyses were conducted for the original cohort. In the univariate analysis, significantly improved survival was seen with young age, female sex, non-white race, academic institution, CD score of 0-1, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO. Female sex, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO remained statistically significant in multivariate analyses (Table 3). Kaplan-Meier and Log-rank methods demonstrated statistically significantly improved survival in the original cohort (p=0.0018) and a non-significant trend in the propensity score matched cohort (p=0.0733) (Figure 2).\nStudy Flow Diagram of Case Eligibility. 
NSCLC, non-small cell lung cancer; NCDB, National Cancer Database; OS, overall survival; LCNEC, large cell neuroendocrine carcinoma; IO, immunotherapy\nClinical Characteristics of Stage IV LCNEC Patients with or without Immunotherapy\nNote: Due to NCDB agreement, cells with less than 10 cases in Race, Charlson-Deyo score, Brain metastasis and Chemotherapy were combined with other opposing cells.\nClinical Characteristics of Stage IV LCNEC Patients with or without Immunotherapy: Propensity Score Matched Cases\nNote: Due to NCDB agreement, cells with less than 10 cases in Race, Charlson-Deyo score, Brain metastasis and Chemotherapy were combined with other opposing cells\nOverall Survival According to Use of Immunotherapy\nUnivariate and Multivariate Analysis in Original Cohort", "LCNEC is a relatively rare and aggressive type of lung cancer with abysmal prognosis that accounts for 3% of lung cancer with most patients being diagnosed in advanced stages (Fasano et al., 2015). LCNEC is classified accordingly because of its biological and clinical features. It is included in the group of thoracic neuroendocrine tumor per 2015 WHO classification of lung and pleural tumors (Travis et al., 2015).\nPulmonary LCNEC may have the following features which include (1) morphology of nesting, peripheral palisading, and rosettes (2) expression of neuroendocrine markers like synaptophysin, chromogranin A, TTF-1 Thyroid transcription factor, and CD56 (3) necrosis over large zones with mitotic rates >10 per 10 high power fields (Travis et al., 2015). Despite efforts to define these features, diagnosis of LCNEC remains a challenge for pathologists, especially with small biopsy specimens. 
For instance, in the two prospective, single-arm clinical trials of SCLC-like chemotherapy regimens for advanced LCNEC, central pathology review determined that 11 of 41 cases and 11 of 40 cases should be reclassified as different diagnoses (Le Treut et al., 2013, Niho et al., 2013), demonstrating frequent disagreement among pathologists.\nTo improve diagnosis and understanding of LCNEC, researchers have investigated molecular characteristics of LCNEC to further define its unique biologic features. Rekhtman et al. (2016) identified that 40% of these tumors are similar to SCLC, being characterized by p53 and RB1 gene alterations, whereas 55.5% had mutations such as STK11/KRAS that are commonly seen in NSCLC. Moreover, 15% of LCNEC tumors showed genetic changes in the PI3K/AKT/mTOR pathway; it was also observed that LCNEC might have activating mutations in receptor tyrosine kinase genes such as EGFR, KIT, and ERBB2 (Umemura et al., 2014, Miyoshi et al., 2017). Although these findings do not seem very helpful in current practice, further research into the molecular mechanisms of LCNEC might assist oncologists in the future.\nMore practically, medical oncologists facing advanced LCNEC in clinic must determine how to manage these cases with systemic therapy. Most reports below suggest using SCLC-based regimens in LCNEC patients. Sun et al. (2012) revealed that advanced LCNEC could be treated in a similar manner as SCLC rather than NSCLC, with response rates of 60% to platinum-based chemotherapy vs. 11% to non-platinum-based chemotherapy, and overall survival (OS) of 16.5 vs. 9.2 months in the SCLC-regimen and NSCLC-regimen groups, respectively. \nConsistent with these findings, another study, conducted by Shimada et al. (2012), observed a response rate of 61% vs. 63% to initial chemotherapy and of 86% vs. 98% to chemoradiotherapy in patients with high-grade LCNEC vs. 
SCLC, suggesting that chemotherapy using the SCLC standard protocol significantly improves the OS of patients with LCNEC compared to NSCLC-based protocols.\nIn contrast, there are contradictory reports that do not suggest use of SCLC-like regimens. Igawa et al. (2010) evaluated 14 patients with high-grade unresectable LCNEC treated with various platinum-based combination regimens or irinotecan (SCLC-like regimens) vs. vinorelbine or docetaxel alone (NSCLC-like regimens), and found objective response rates of 50% (7/14) vs. 53% (41/77), one-year survival rates of 34% vs. 48%, and median survival times of 10 months vs. 12.3 months. \nIn keeping with this, a study by Varlotto et al. (2011), based on data from the Surveillance, Epidemiology, and End Results (SEER) Program of the US National Cancer Institute, reported that patients with LCNEC had overall survival and lung cancer-specific survival rates more similar to those of patients with other large cell carcinomas than to those with SCLC. Derks et al. (2017) also reported that overall survival for LCNEC patients treated with NSCLC-based regimens was significantly longer than for those treated with SCLC-based regimens, with median survivals of 8.5 vs. 6.7 months, respectively. While this controversy remains, the prognosis of advanced LCNEC remains extremely poor regardless of treatment regimen. There is a need for novel systemic therapies to improve poor outcomes for patients with LCNEC. \nThe use of IO agents has shown promising results in the treatment of solid tumors such as melanoma, NSCLC, and renal cell cancer; therefore, we investigated IO use in LCNEC of the lung. Since first-line chemotherapy has limited efficacy in LCNEC, the use of IO may become an alternative option in the treatment of advanced LCNEC. PD-1/PD-L1 inhibitors are proven to improve survival in advanced-stage NSCLC, and also have activity in SCLC (Horn et al., 2018, Paz-Ares et al., 2019). 
Still, evidence for IO efficacy in LCNEC is limited to a few case reports. \nIn the first case report, two patients with biopsy-confirmed metastatic LCNEC were treated with nivolumab as the sixth and third lines of treatment, respectively, with responses in both cases including a decrease in serum tumor marker levels and significant tumor reduction (Daido et al., 2017). In a second case report, a strong and robust response to pembrolizumab was observed in a metastatic LCNEC despite the tumor being PD-L1 negative by immunohistochemistry (Wang et al., 2017). A third paper reported a case of locally advanced LCNEC with complete tumor response after palliative thoracic radiotherapy and treatment with nivolumab, indicating that radiation may enhance the activity of PD-1/PD-L1 inhibitors in LCNEC (Mauclet et al., 2019).\nThis retrospective NCDB analysis demonstrated that the IO group had 12- and 18-month survival rates of 34.0% and 29.1%, as compared to 24.1% and 15.0% in the non-IO group. Multivariate analyses showed that female sex, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO remained statistically significant. Propensity score matching analysis of overall survival showed a non-significant trend (p=0.0733) in favor of the IO group. These findings suggest that IO treatment benefits patients with advanced LCNEC.\nWe acknowledge, however, the limitations of the current study. This is a retrospective, “hospital-based” data analysis using the NCDB database. With no central review, the histologic diagnosis of LCNEC was entirely up to local pathologists. As discussed earlier, there might be cases to which alternative diagnoses could be assigned due to common discrepancies among pathologists. Administration of IO agents was recorded only when they were used as part of the first course of therapy. Information regarding regimen, dose, frequency, duration, and the presence of other concurrent treatment modalities was not available. 
It was unknown how IO agents were obtained for treatment, for example, whether through prospective clinical trials. The number of cases treated with IO was relatively small, accounting for only 5.6% of the total population. Nevertheless, the current study includes propensity score matching and a larger sample size than what is currently available in the literature. \nIn conclusion, our findings suggest that use of IO might improve outcomes for advanced LCNEC patients. Further investigation is warranted to define the role of IO treatment in advanced LCNEC." ]
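The chi-square test of association between clinical characteristics and IO use described in the Methods can be sketched for a 2x2 table. The helper below is a minimal pure-Python illustration assuming 1 degree of freedom, not the software the authors used; the counts come from the abstract (34 of 37 IO patients vs. 406 of 624 non-IO patients received chemotherapy):

```python
import math

# Minimal 2x2 chi-square test of independence (no continuity correction).
def chi_square_2x2(a, b, c, d):
    """Table: [[a, b], [c, d]] (rows: chemo yes/no, cols: IO yes/no).
    Returns (chi2 statistic, p-value for 1 degree of freedom)."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    observed = [a, b, c, d]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Chemotherapy use by IO group, per the abstract:
# 34 chemo + IO, 406 chemo + non-IO, 3 no-chemo + IO, 218 no-chemo + non-IO.
chi2, p = chi_square_2x2(34, 406, 3, 218)
```

The resulting p-value lands near the p=0.0008 reported in the abstract, as expected for an uncorrected chi-square on these counts.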
[ "intro", "materials|methods", "results", "discussion" ]
[ "Immunotherapy", "large cell neuroendocrine carcinoma", "lung cancer" ]
Introduction: Large cell neuroendocrine carcinoma (LCNEC) is a relatively rare histologic subtype, accounting for approximately 3% of lung cancer cases (Fasano et al., 2015, Rekhtman, 2010). It has been increasingly recognized in recent decades since Travis et al. first described it in the early 1990s (Travis et al., 1991). Despite its neuroendocrine features, it was initially classified as a variant of large cell carcinoma by the 2004 World Health Organization (WHO) classification. The current 2015 version of the WHO classification lists it in a group of neuroendocrine neoplasms along with typical carcinoid, atypical carcinoid, and small cell carcinoma (Travis et al., 2015). Due to the scarcity of cases and the difficulty of diagnosis with small biopsy samples, standard systemic therapy for advanced-stage disease has not been well established. To our knowledge, no prospective, randomized study comparing multiple systemic regimens has been reported in the literature. The limited literature of retrospective studies, case reviews, and single-arm prospective trials suggests that regimens used for small cell lung cancer (i.e., platinum plus etoposide) are superior to those used for non-small cell lung cancer (i.e., platinum plus taxane) and result in improved patient outcomes for stage IV cancers, as well as for early stages when the chemotherapy is given as adjuvant therapy (Sun et al., 2012, Zhang et al., 2020, Le Treut et al., 2013, Niho et al., 2013), although contradictory reports exist (Derks et al., 2017). Recent progress in the development of immune checkpoint inhibitors has dramatically changed survival outcomes and disease management for lung cancer. Several agents have been approved as monotherapy or are used in combination with chemotherapy agents for a number of human cancer types. Pembrolizumab, nivolumab, atezolizumab, and durvalumab are approved by the FDA for the treatment of advanced non-small cell lung cancer (NSCLC). 
Treatment with atezolizumab or durvalumab, monoclonal antibodies directed against programmed cell death ligand 1 (PD-L1), resulted in improved overall survival of patients with extensive-stage small cell lung cancer (ES-SCLC) when combined with first-line chemotherapy (Horn et al., 2018, Paz-Ares et al., 2019). Although these agents are currently being investigated in early-stage settings of SCLC and NSCLC, rare cancer subtypes such as LCNEC may not be investigated soon due to the paucity of the disease. It seems unlikely that controlled randomized studies will be conducted specifically for LCNEC in the next few decades. Because of the limitations of retrospective case series and the lack of potential for prospective clinical trials in rare diseases such as LCNEC, cancer researchers commonly use large databases to analyze rare cancer types. Although there are some limitations, this approach allows assessment of prognosis and of the impact of therapeutic interventions across a larger patient population. Using the National Cancer Database (NCDB), we investigated whether the use of immunotherapeutic agents influences overall survival in patients with stage IV LCNEC. Materials and Methods: NCDB The National Cancer Data Base (NCDB) is a joint project of the Commission on Cancer (CoC) of the American College of Surgeons and the American Cancer Society. The CoC’s NCDB and the hospitals participating in the CoC NCDB are the source of the de-identified data used herein; they have not verified and are not responsible for the statistical validity of the data analysis or the conclusions derived by the authors. The data are considered hospital-based rather than population-based. After obtaining approval by the CoC, access to information on deidentified cases with stage IV NSCLC was granted in October 2019. A total of 101,169 adult cases diagnosed between 2014 and 2016 at CoC-participating institutions in the United States were screened for the current study. 
Eligible cases must have the diagnostic ICD-O-3 code for LCNEC (8013/3) and have survived for at least one month (Figure 1). Presence or absence of IO (immunotherapy) as the first course of therapy was available. Cases were assigned to IO-positive vs. IO-negative groups. Information regarding the name, regimen, dose, and dosing frequency of IO was not available. Available background characteristics included age (<70 vs. 70+), sex (male vs. female), race (white vs. others), type of institution (academic vs. non-academic), presence of insurance, Charlson-Deyo comorbidity score (0-1 vs. 2-3), presence of brain/liver metastases, use of external beam radiation, and use of multiagent chemotherapy in the first course of therapy. Reporting any cell with fewer than 10 cases was prohibited according to the agreement with the CoC and NCDB. Overall survival data were available according to IO status in the first course of therapy. Progression-free, time-to-progression, and other survival data were not available. Statistics Relationships between clinical characteristics and use of IO were determined by chi-square tests. Survival analysis was conducted using Kaplan-Meier and Log-rank methods. A p-value of less than 0.05 on a two-tailed statistical analysis was considered significant. Univariate and multivariate Cox proportional hazard analyses were performed using JMP version 14 (SAS Institute, Cary, NC, USA). Propensity score matching analysis included all the variables listed in Table 1 and was performed according to the XLSTAT software guideline (Rosenbaum, 1989). This is a hospital-based study that involves no identifiable information for individuals throughout the analyses. This study was reviewed by the institutional review board at Parkview Health and was designated exempt from human subject research. Results: Patient characteristics A total of 661 patients with stage IV LCNEC diagnosed between 2014 and 2016 met eligibility for this study (Table 1). 
Among those, 37 and 624 patients were assigned to the IO and non-IO groups, respectively. In the IO group, the majority of cases were categorized as follows: less than age 70 (68%), male sex (62%), white, insured (100%), treated at non-academic centers (54%), Charlson-Deyo (CD) comorbidity score of 0-1, absence of brain metastasis, absence of liver metastasis (73%), absence of surgery (97%), and presence of chemotherapy. There was no significant association between the clinical characteristics and presence/absence of IO except for use of chemotherapy; more patients (92%) in the IO group received chemotherapy than in the non-IO group. In the propensity matched analysis, all the variables were matched and balanced. No significant correlation between any variable and IO status was observed (Table 2). Due to restrictions by CoC and NCDB, cells with fewer than 10 cases are not provided in Table 1 or 2. Survival analysis Univariate and multivariate analyses were conducted for the original cohort. In the univariate analysis, significantly improved survival was seen with young age, female sex, non-white race, academic institution, CD score of 0-1, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO. Female sex, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO remained statistically significant in multivariate analyses (Table 3). Kaplan-Meier and Log-rank methods demonstrated statistically significantly improved survival in the original cohort (p=0.0018) and a non-significant trend in the propensity score matched cohort (p=0.0733) (Figure 2). Study Flow Diagram of Case Eligibility. 
NSCLC, non-small cell lung cancer; NCDB, National Cancer Database; OS, overall survival; LCNEC, large cell neuroendocrine carcinoma; IO, immunotherapy Clinical Characteristics of Stage IV LCNEC Patients with or without Immunotherapy Note: Due to NCDB agreement, cells with less than 10 cases in Race, Charlson-Deyo score, Brain metastasis and Chemotherapy were combined with other opposing cells. Clinical Characteristics of Stage IV LCNEC Patients with or without Immunotherapy: Propensity Score Matched Cases Note: Due to NCDB agreement, cells with less than 10 cases in Race, Charlson-Deyo score, Brain metastasis and Chemotherapy were combined with other opposing cells Overall Survival According to Use of Immunotherapy Univariate and Multivariate Analysis in Original Cohort Discussion: LCNEC is a relatively rare and aggressive type of lung cancer with abysmal prognosis that accounts for 3% of lung cancer with most patients being diagnosed in advanced stages (Fasano et al., 2015). LCNEC is classified accordingly because of its biological and clinical features. It is included in the group of thoracic neuroendocrine tumor per 2015 WHO classification of lung and pleural tumors (Travis et al., 2015). Pulmonary LCNEC may have the following features which include (1) morphology of nesting, peripheral palisading, and rosettes (2) expression of neuroendocrine markers like synaptophysin, chromogranin A, TTF-1 Thyroid transcription factor, and CD56 (3) necrosis over large zones with mitotic rates >10 per 10 high power fields (Travis et al., 2015). Despite efforts to define these features, diagnosis of LCNEC remains a challenge for pathologists, especially with small biopsy specimens. 
For instance, in the two prospective, single-arm clinical trials of SCLC-like chemotherapy regimens for advanced LCNEC, central pathology review determined that 11 of 41 cases and 11 of 40 cases should be reclassified as different diagnoses (Le Treut et al., 2013, Niho et al., 2013), demonstrating frequent disagreement among pathologists. To improve diagnosis and understanding of LCNEC, researchers have investigated molecular characteristics of LCNEC to further define its unique biologic features. Rekhtman et al. (2016) identified that 40% of these tumors are similar to SCLC, being characterized by p53 and RB1 gene alterations, whereas 55.5% had mutations such as STK11/KRAS that are commonly seen in NSCLC. Moreover, 15% of LCNEC tumors showed genetic changes in the PI3K/AKT/mTOR pathway; it was also observed that LCNEC might have activating mutations in receptor tyrosine kinase genes such as EGFR, KIT, and ERBB2 (Umemura et al., 2014, Miyoshi et al., 2017). Although these findings do not seem very helpful in current practice, further research into the molecular mechanisms of LCNEC might assist oncologists in the future. More practically, medical oncologists facing advanced LCNEC in clinic must determine how to manage these cases with systemic therapy. Most reports below suggest using SCLC-based regimens in LCNEC patients. Sun et al. (2012) revealed that advanced LCNEC could be treated in a similar manner as SCLC rather than NSCLC, with response rates of 60% to platinum-based chemotherapy vs. 11% to non-platinum-based chemotherapy, and overall survival (OS) of 16.5 vs. 9.2 months in the SCLC-regimen and NSCLC-regimen groups, respectively. Consistent with these findings, another study, conducted by Shimada et al. (2012), observed a response rate of 61% vs. 63% to initial chemotherapy and of 86% vs. 98% to chemoradiotherapy in patients with high-grade LCNEC vs. 
SCLC, suggesting that chemotherapy treatment using SCLC standard protocol significantly improves the OS of patients with LCNEC compared to that of NSCLC-based protocols. In contrast, there are contradictory reports that do not suggest use of SCLC-like regimens. Igawa el al., (2010) evaluated 14 patients with high-grade unresectable LCNEC, with various platinum-based combination regimens or irinotecan (SCLC-like regimen) vs. vinorelbine or docetaxel alone (NSCLC-like regimen), and found the objective response rate to be 50% (7/14) vs. 53% (41/77); one-year survival rate to be 34% vs. 48%; median survival time of 10 months vs. 12.3 months. In keeping with this, another study conducted by Varlotto et al., (2011) based on the data obtained from Surveillance, Epidemiology and End Results Program (SEER) of the US National Cancer Institute, stated that in patients with LCNEC had characteristics where overall survival and Lung cancer-specific survival rates were more similar to those with other large cell carcinomas than SCLC. Derks et al., (2017) also reported overall survival for LCNEC patients treated with NSCLC based regimen was significantly longer than that for those treated with SCLC based regimen with a median survival of 8.5 vs 6.7 months, respectively. While this controversy still remains, prognosis of advanced LCNEC remains extremely poor regardless of treatment regimens. There is a need for novel systemic therapies to improve poor outcomes for patients with LCNEC. The use of IO agents has shown promising results in the treatment of solid tumors such as melanoma, NSCLC, renal cell cancer; therefore, we investigated IO use in LCNEC of the lung. Since first-line chemotherapy has limited efficacy in LCNEC, the use of IO may become an alternative option in the treatment of advanced LCNEC. PD-1/PD-L1 inhibitors are proven to improve survival in advanced stage NSCLC, and also have activity for SCLC (Horn et al., 2018, Paz-Ares et al., 2019). 
Still, evidence on IO efficacy in LCNEC is limited to a few case reports. In the first, two patients with biopsy-confirmed metastatic LCNEC were treated with nivolumab as sixth- and third-line treatment, respectively, and both responded, with decreased serum tumor marker levels and significant tumor reduction (Daido et al., 2017). In a second case report, a strong and durable response to pembrolizumab was observed in a metastatic LCNEC despite the tumor being PD-L1 negative by immunohistochemistry (Wang et al., 2017). A third paper reported a case of locally advanced LCNEC with complete tumor response after palliative thoracic radiotherapy and treatment with nivolumab, suggesting that radiation may enhance the activity of PD-1/PD-L1 inhibitors in LCNEC (Mauclet et al., 2019). This retrospective NCDB analysis demonstrated that the IO group had 12- and 18-month survival of 34.0% and 29.1%, compared with 24.1% and 15.0% in the non-IO group. On multivariate analysis, female sex, absence of liver metastasis, use of surgery, use of chemotherapy, and use of IO remained statistically significant. Propensity score-matched analysis of overall survival showed a nonsignificant trend (p=0.0733) in favor of the IO group. These findings suggest that IO treatment benefits patients with advanced LCNEC. We acknowledge, however, the limitations of the current study. This is a retrospective analysis of the "hospital-based" NCDB. In the absence of central review, the histologic diagnosis of LCNEC was left entirely to local pathologists; as discussed earlier, some cases might warrant alternative diagnoses given the frequent discrepancies among pathologists. Administration of IO agents was recorded only when they were used as part of the first course of therapy, and information on regimen, dose, frequency, duration, and concurrent treatment modalities was not available.
It was also unknown how IO agents were obtained, for example through prospective clinical trials. The number of cases treated with IO was relatively small, accounting for only 5.6% of the total population. Nevertheless, the current study includes propensity score matching and a larger sample size than is currently available in the literature. In conclusion, our findings suggest that use of IO might improve outcomes for patients with advanced LCNEC. Further investigation is warranted to define the role of IO treatment in advanced LCNEC.
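Propensity score matching, as used in this analysis, pairs each IO-treated case with a non-IO case that has a similar estimated probability of receiving IO. As an illustration only (the study does not specify its matching algorithm or caliper; the greedy 1:1 nearest-neighbor scheme and the 0.05 caliper below are assumptions), the matching step can be sketched as:

```python
# Hypothetical sketch of 1:1 nearest-neighbor propensity score matching.
# The scores here would normally come from a logistic regression of
# treatment assignment on baseline covariates; they are made up below.

def match_nearest(treated, controls, caliper=0.05):
    """Greedily match each treated score to the closest unused control
    score within `caliper`; returns a list of (treated_idx, control_idx)."""
    used = set()
    pairs = []
    for ti, t in enumerate(treated):
        best, best_d = None, caliper
        for ci, c in enumerate(controls):
            if ci in used:
                continue
            d = abs(t - c)
            if d <= best_d:
                best, best_d = ci, d
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

pairs = match_nearest([0.30, 0.70], [0.28, 0.90, 0.69])
print(pairs)  # [(0, 0), (1, 2)]
```

A real analysis would also check covariate balance between the matched groups before comparing survival.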
Background: Despite approvals of immune checkpoint inhibitors in both small cell and non-small cell lung cancer, the role of immunotherapy in large cell neuroendocrine carcinoma (LCNEC) of the lung is undefined. Methods: Using the National Cancer Database (NCDB), Stage IV lung LCNEC cases diagnosed from 2014 to 2016 were analyzed. Information regarding cancer treatment was limited to the first course of therapy, including surgery for the primary lesion, radiation, chemotherapy, and immunotherapy. Survival analysis was performed using Kaplan-Meier curves and Log-rank tests. A Cox proportional hazards model was used for multivariate analysis. Results: Among 661 eligible cases, 37 patients were treated with immunotherapy. No significant association between use of immunotherapy and clinical demographics was observed except for use of chemotherapy (p=0.0008): chemotherapy was administered in 34 (92%) and 406 (65%) patients in the immunotherapy and non-immunotherapy groups, respectively. Use of immunotherapy was associated with improved overall survival (Log-rank p=0.0018). Landmark analysis in the immunotherapy group showed 12- and 18-month survival of 34.0% and 29.1%, respectively, versus 24.1% and 15.0% in the non-immunotherapy group. Multivariate analysis demonstrated that female sex (HR=0.79, p=0.0063), absence of liver metastases (HR=0.75, p=0.0392), surgery (HR=0.50, p<0.0001), use of chemotherapy (HR=0.44, p<0.0001), and use of immunotherapy (HR=0.64, p=0.0164) were statistically significant. Propensity score-matched overall survival analysis showed a nonsignificant trend (p=0.0733) in favor of immunotherapy treatment. Conclusions: This retrospective NCDB study suggests that use of immunotherapy may improve survival of LCNEC patients.
[ "lcnec", "io", "use", "survival", "cancer", "patients", "chemotherapy", "cases", "vs", "sclc" ]
[ "carcinoma lcnec relatively", "neuroendocrine carcinoma io", "thoracic neuroendocrine tumor", "lung cancer platinum", "lung cancer specific" ]
[CONTENT] Immunotherapy | large cell neuroendocrine carcinoma | lung cancer [SUMMARY]
[CONTENT] Aged | Antineoplastic Agents, Immunological | Carcinoma, Large Cell | Carcinoma, Neuroendocrine | Databases, Factual | Female | Humans | Immunotherapy | Lung Neoplasms | Male | Middle Aged | Neoplasm Staging | Propensity Score | Retrospective Studies | Survival Analysis | United States [SUMMARY]
[CONTENT] cancer | cell | small cell | small | lung | lung cancer | small cell lung | agents | rare | cell lung [SUMMARY]
[CONTENT] io | cells | metastasis | use | matched | cohort | score | absence | chemotherapy | patients [SUMMARY]
[CONTENT] lcnec | io | cancer | vs | use | survival | patients | chemotherapy | cases | sclc [SUMMARY]
[CONTENT] 661 | 37 ||| ||| Chemotherapy | 34 | 92% | 406 | 65% ||| Log-rank p=0.0018 ||| Landmark | 12 | 18-month | 34.0% | 29.1% | 24.1% | 15.0% ||| HR=0.79 | HR=0.75 | 0.50 | 0.44 | HR=0.64 | p=0.0164 ||| [SUMMARY]
[CONTENT] ||| the National Cancer Database | NCDB | 2014 | 2016 ||| first ||| Kaplan-Meier | Log-rank ||| ||| ||| 661 | 37 ||| ||| Chemotherapy | 34 | 92% | 406 | 65% ||| Log-rank p=0.0018 ||| Landmark | 12 | 18-month | 34.0% | 29.1% | 24.1% | 15.0% ||| HR=0.79 | HR=0.75 | 0.50 | 0.44 | HR=0.64 | p=0.0164 ||| ||| NCDB [SUMMARY]
Prognostic nomogram incorporating radiological features for predicting overall survival in patients with AIDS-related non-Hodgkin lymphoma.
34982056
Acquired immune deficiency syndrome (AIDS)-related non-Hodgkin lymphoma (AR-NHL) is a high-risk factor for morbidity and mortality in patients with AIDS. This study aimed to determine the prognostic factors associated with overall survival (OS) and to develop a prognostic nomogram incorporating computed tomography imaging features in patients with acquired immune deficiency syndrome-related non-Hodgkin lymphoma (AR-NHL).
BACKGROUND
A total of 121 AR-NHL patients between July 2012 and November 2019 were retrospectively reviewed. Clinical and radiological independent predictors of OS were confirmed using multivariable Cox analysis. A prognostic nomogram was constructed based on the above clinical and radiological factors and then provided optimum accuracy in predicting OS. The predictive accuracy of the nomogram was determined by Harrell C-statistic. Kaplan-Meier survival analysis was used to determine median OS. The prognostic value of adjuvant therapy was evaluated in different subgroups.
METHODS
In the multivariate Cox regression analysis, involvement of mediastinal or hilar lymph nodes, liver, necrosis in the lesions, the treatment with chemotherapy, and the CD4 ≤100 cells/μL were independent risk factors for poor OS (all P < 0.050). The predictive nomogram based on Cox regression has good discrimination (Harrell C-index = 0.716) and good calibration (Hosmer-Lemeshow test, P = 0.620) in high- and low-risk groups. Only patients in the high-risk group who received adjuvant chemotherapy had a significantly better survival outcome.
RESULTS
A survival-predicting nomogram was developed in this study, which was effective in assessing the survival outcomes of patients with AR-NHL. Notably, decision-making of chemotherapy regimens and more frequent follow-up should be considered in the high-risk group determined by this model.
CONCLUSION
[ "Acquired Immunodeficiency Syndrome", "Humans", "Lymphoma, Non-Hodgkin", "Neoplasm Staging", "Nomograms", "Prognosis", "Retrospective Studies" ]
8850812
Introduction
Acquired immune deficiency syndrome (AIDS)-related non-Hodgkin lymphoma (AR-NHL) is a high-risk factor for morbidity and mortality in patients with AIDS.[1,2] Although the incidence of AIDS-related tumors has decreased with the advent of highly active antiretroviral therapy (HAART), the incidence of AR-NHL has not declined as expected.[3,4] In addition to HAART, adjuvant chemotherapy can improve the tolerance and remission rate of patients; however, inappropriate adjuvant therapy may cause adverse effects, including myelosuppression, tissue necrosis, and liver dysfunction. To date, the application of chemotherapy and chemoradiotherapy remains controversial, partly because of the difficulty of selecting suitable AR-NHL patients. Additionally, previous studies reported that CD4+ count, human immunodeficiency virus (HIV) ribonucleic acid levels, Ann Arbor stage, lactate dehydrogenase (LDH) levels, international prognostic index (IPI) score, and age are key predictors of survival in AR-NHL patients.[5–7] However, only a few studies have investigated the significance of imaging characteristics for predicting prognosis and survival. Novel imaging modalities for assessing lymphoma can provide useful information for treatment planning and for predicting patients' prognoses. With the advancement of imaging techniques such as computed tomography (CT), magnetic resonance imaging, and positron emission tomography (PET)/CT, radiological techniques play an increasingly essential role in detecting lesions and evaluating disease.[8–10] CT can detect enlarged lymph nodes, guide biopsy, and detect early relapse on follow-up.[11–13] Magnetic resonance imaging and PET/CT are of limited availability due to various economic and social factors in some developing countries.[14,15] Therefore, a widely applicable nomogram based on clinical and CT characteristics is needed to appropriately predict prognosis in these patients.
To address this issue, we integrated clinical and CT-related factors to create a novel nomogram that stratifies patients into low- and high-risk groups. This study aimed to determine the prognostic factors associated with overall survival and to develop a prognostic nomogram incorporating clinical and CT imaging features in patients with AR-NHL.
Methods
Ethics approval
The study was conducted under approval by the Institutional Review Board of You’an Hospital Affiliated of Capital Medical University (No. 2018066). The consent to participate was waived due to the retrospective nature of the study.
Patients
In this multicenter retrospective study, information on 181 patients with AIDS-related lymphoma from three tertiary infectious disease hospitals was retrospectively reviewed, and their clinical and imaging data between July 2012 and November 2019 were analyzed. The diagnosis of HIV infection was based on the standards of the US Centers for Disease Control and Prevention. The diagnosis of lymphoma was based on puncture biopsy (163 patients), endoscopic biopsy (six patients), or operative specimens (12 patients). All interventions and treatments were carried out according to the National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines in Oncology: non-Hodgkin lymphomas.[16–18] Patients in stable condition were followed up once every 3 months in the first year, once every 6 months in the second year, and once every year from the third year onward; patients were followed up at any time if disease progression or deterioration occurred. Overall survival (OS) was selected as the endpoint, measured from the lymphoma diagnosis until the last follow-up or death from any cause. Follow-up continued until November 2020. Patients eligible for this study were (1) older than 18 years, (2) with a history of HIV infection, (3) with pathologically confirmed diffuse large B-cell lymphoma (DLBCL) or Burkitt lymphoma (BL),[19] and (4) with available clinical and CT imaging data before any clinical intervention.
Patients with Hodgkin lymphoma, indolent B-cell non-Hodgkin lymphoma (NHL), or T-cell NHL, or lacking a specific pathological type, were excluded. One patient younger than 18 years of age and two patients with severe artifacts in the CT images were also excluded, as were patients lost during follow-up. All examinations were performed with a Philips CT 256 scanner (Philips, Amsterdam, Netherlands); 39 patients underwent contrast-enhanced CT with intravenous contrast material. The CT protocol was: tube voltage, 120 kV; automatic tube current, 30 to 300 mA; rotation time, 0.75 s; collimation, 0.625 mm; pitch, 0.945; matrix, 512 × 512; section thickness, 5 mm; breath-hold at full inspiration. The images were transmitted to the workstation and the picture archiving and communication system for multiplanar reconstruction and post-processing. All images (both axial CT images and multiplanar reconstructions) were reviewed by three radiologists (Doctor A with 22 years of experience, B with 7 years, and C with 10 years) blinded to clinical and laboratory data. Three statisticians assessed the CT features independently. After separate evaluations, any divergences were resolved by discussion or consultation with a specialist in infectious disease imaging (Doctor D, with 33 years of experience), and the results were eventually reviewed by Doctor E for consistency analysis. Baseline data are provided in Supplementary Data.
Statistical analysis
The imaging findings were tested for agreement using the Kappa test. A Kappa value <0.400 indicated poor consistency of the diagnostic findings, whereas a Kappa value >0.750 indicated sufficient consistency. Continuous variables were tested for normal distribution using the Kolmogorov–Smirnov method. If the data fitted a normal distribution, mean ± standard deviation and the t test were used to check for differences between the two groups. The chi-square test and Fisher exact test were used to compare categorical variables. Univariate Kaplan–Meier (K–M) analysis was used to identify significant prognostic factors for OS. Factors with P values <0.050 were then tested in a multivariate Cox proportional hazards model for independence of association, and factors showing significant impact in the multivariate analysis were presented in a forest plot. The proportional hazards assumption was assessed through visual inspection of (log–log) plots of cumulative log hazard against time. A predictive model was developed for AR-NHL using Cox regression and illustrated by a nomogram. The accuracy of predictions was assessed by estimating the nomogram's discrimination, measured by the Harrell concordance index (C-index).
The C-index is the probability that, for two randomly chosen patients, the patient who had the event first also had the higher model-predicted probability of the event. A C-index of 0.500 represents agreement by chance; a C-index of 1.000 represents perfect discrimination.[20] The calibration of the nomogram was evaluated by the Hosmer–Lemeshow test. K–M estimates with Log-rank tests were used to determine median OS, defined as the time between the date of pathological diagnosis and the last follow-up or death. All statistical analyses were performed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). The figures were developed using GraphPad Prism 7 (GraphPad Software, San Diego, CA, USA) and R software (version 4.0.1; http://www.r-project.org). All statistical tests were two-sided, and the significance level was set at P < 0.050.
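The C-index definition above (pairwise concordance under right censoring) can be sketched in a few lines. This is a minimal illustration, not the implementation used in the study, and it ignores tied survival times:

```python
# Minimal sketch of Harrell's concordance index (C-index) for
# right-censored survival data: among comparable pairs, the fraction
# where the model assigns the higher risk to the patient who experiences
# the event earlier. Ties in risk score count as 0.5.

def c_index(times, events, risks):
    """times: survival times; events: 1 = death observed, 0 = censored;
    risks: model risk scores (higher = predicted earlier event)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i had the event before j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ordered risk scores give a C-index of 1.0.
print(c_index([2, 5, 9], [1, 1, 0], [0.9, 0.5, 0.1]))  # 1.0
```

A value of 0.716, as reported for this nomogram, means the model orders about 72% of comparable patient pairs correctly.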
Results
Patients’ demographic data
A total of 121 AR-NHL patients were included in the final analysis, reviewed at multiple centers between 2000 and 2016. The median age at diagnosis was 40 years (range, 35–53 years), and 112 (92.5%) of the eligible patients were male. Among all patients, 83 received chemotherapy and 57 underwent definitive HAART. Table 1 summarizes the demographic data, clinical and tumor characteristics, and laboratory results, such as white blood cell, neutrophil, and lymphocyte counts. All CT features showed good agreement among the three doctors (Kappa value 0.778–0.998; Supplementary Table S1).
Demographic, clinical, and tumor characteristics of patients with AR-NHL. Data are presented as n (%) or median (interquartile range). AR-NHL: AIDS-related non-Hodgkin lymphoma; BL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; NEUT: Neutrophil; WBC: White blood cell.
Prognostic factors for OS
The median follow-up of the entire cohort was 12 months. On univariable analysis, factors associated with poor OS included CD4 ≤100 cells/μL; involvement of the mediastinal or hilar lymph nodes, liver, or gastrointestinal tract; presence of extracapsular infiltration; necrosis inside the lesions; and treatment without chemotherapy [Table 2]. On multivariable Cox analysis, involvement of mediastinal or hilar lymph nodes, liver involvement, necrosis in the lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for short OS. Significant radiological features are shown in Figure 1.
Clinical prognostic factors for survival of total patients using univariate and multivariate analyses. BL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; OS: Overall survival; CI: Confidence interval.
(A) A 37-year-old man with DLBCL had tested positive for HIV antibodies for 1 month, with nausea and fever for 1 day. Thickening of the stomach wall was detected on CT (white arrow). The OS of this patient was 11 months. (B) A 31-year-old man with BL had tested positive for HIV antibodies for 1 month, with fever for 5 days. CT showed multiple, irregular, heterogeneous lesions of the liver (white arrow) and bilateral adrenal glands (black arrows). The OS of this patient was 5 months. (C, D) A 29-year-old man with BL had tested positive for HIV antibodies for 2 months, with dyspnea and fever for 3 days. The CT lung window showed multiple irregular lesions in both lungs, and enlarged lymph nodes were found in the mediastinum and hila; progression was found on 1-month follow-up (D). The OS of this patient was 3 months. (E) A 25-year-old man with DLBCL had tested positive for HIV antibodies for 3 months. A large, irregular, homogeneous, isoattenuating mass infiltrating adjacent bone and muscles was found in the right cervical area IX (white arrow). (F) A 40-year-old man with DLBCL had tested positive for HIV antibodies for 5 months. A large, circular, heterogeneous mass with necrosis was found in the left cervical area II (white arrow). BL: Burkitt lymphoma; CT: Computed tomography; DLBCL: Diffuse large B-cell lymphoma; HIV: Human immunodeficiency virus; OS: Overall survival.
Development of nomogram
The predictive model was based on Cox regression and is illustrated by the nomogram [Figure 2], indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The Harrell C-index was 0.716, suggesting relatively good discrimination, and the Hosmer–Lemeshow test gave P = 0.620 > 0.050, indicating no departure from a good fit. The probability of survival at 1, 2, and 3 years was obtained by drawing a vertical line from the “total points” axis straight down to the outcome axes, where the total number of points for each patient was the sum of the points for each individual factor in the nomogram. In the predictive model, treatment with chemotherapy adds 100 points to the total, followed by absence of liver involvement (82 points), CD4 count >100 cells/μL (69 points), absence of mediastinal or hilar lymph node involvement (58 points), and absence of necrosis (50 points). For instance, an AR-NHL patient with liver involvement (0 points), without mediastinal or hilar lymph node involvement (58 points), with a CD4 count of 125 cells/μL (69 points), with tumor necrosis (0 points), and treated with chemotherapy (100 points) has a total of 227 points.
This patient was predicted to have a 58% probability of surviving for 1 year, 40% probability of surviving for 2 years, and 31% probability of surviving for 3 years [Figure 3]. The nomogram indicates the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The points for CD4 were determined by drawing a line straight up to the points axis. This process was then repeated for the other four variables (necrosis, chemotherapy, liver involvement, mediastinal or hilar lymph node involvement). Each variable had a corresponding point value, marked on the upper scale. Adding up the scores of all variables predicts the OS probability. AR-NHL: AIDS-related non-Hodgkin lymphoma; NHL: Non-Hodgkin lymphoma; OS: Overall survival. Example of using the nomogram to predict individual survival probability by manually drawing straight lines across the diagram: a 32-year-old man had been positive for HIV-antibodies for 7 months and was on HAART. (A) A large circular, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). (B) Two irregular, heterogeneous lesions were found in the liver (black arrow). DLBCL was confirmed by biopsy. The patient received a chemotherapy regimen of CHOP and OS time was 14 months. The total is 227 points, so this patient is predicted to have a 58% probability of surviving for 1 year, 40% probability of surviving for 2 years, and 31% probability of surviving for 3 years (black line). CHOP: Cyclophosphamide, doxorubicin, vincristine and prednisolone; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; OS: Overall survival.
Clinical value of the novel classification system
Compared with patients with CD4 <100 cells/μL, patients with CD4 >100 cells/μL had a better prognosis (P = 0.020) [Figure 4]. K–M analysis and the log-rank test showed that patients who did not receive chemotherapy had poorer survival outcomes than those receiving chemotherapy (P < 0.050) [Supplementary Figure S1]. Patients with involvement of mediastinal or hilar lymph nodes, involvement of the liver, or lesions with necrosis had a worse prognosis (P < 0.050) [Supplementary Figure S1]. Kaplan–Meier curves of CD4 for the whole patient population confirmed that patients with CD4 >100 cells/μL had a better prognosis than those with CD4 <100 cells/μL (P = 0.020). Next, we compared the survival outcomes between patients with and without chemotherapy in each subgroup based on our novel survival-predicting model. Notably, patients in the high-risk group could benefit from chemotherapy treatment (P = 0.040).
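The point assignments described for the nomogram can be checked with a small calculator. This is a hedged sketch: the point values are those the paper reports for Figure 2, the factor names are descriptive labels of my own, and the mapping from total points to survival probabilities still has to be read off the nomogram's outcome axes.

```python
# Point values as reported for the nomogram in Figure 2 of the paper.
# The dictionary keys are illustrative labels, not the paper's variable names.
NOMOGRAM_POINTS = {
    "chemotherapy": 100,           # treated with chemotherapy
    "liver_free": 82,              # no liver involvement
    "cd4_gt_100": 69,              # CD4 count > 100 cells/uL
    "mediastinal_hilar_free": 58,  # no mediastinal/hilar node involvement
    "no_necrosis": 50,             # no intralesional necrosis
}

def nomogram_total(chemotherapy, liver_free, cd4_gt_100,
                   mediastinal_hilar_free, no_necrosis):
    """Sum the points for each favorable level that applies;
    unfavorable levels contribute 0 points, as in the worked example."""
    flags = {
        "chemotherapy": chemotherapy,
        "liver_free": liver_free,
        "cd4_gt_100": cd4_gt_100,
        "mediastinal_hilar_free": mediastinal_hilar_free,
        "no_necrosis": no_necrosis,
    }
    return sum(NOMOGRAM_POINTS[k] for k, v in flags.items() if v)

# The paper's example: liver involved, nodes free, CD4 = 125 cells/uL,
# necrosis present, chemotherapy given -> 0 + 58 + 69 + 0 + 100 = 227.
example_total = nomogram_total(chemotherapy=True, liver_free=False,
                               cd4_gt_100=True, mediastinal_hilar_free=True,
                               no_necrosis=False)
print(example_total)  # 227
```

For this 227-point total, the nomogram's outcome axes give roughly 58%, 40%, and 31% survival at 1, 2, and 3 years, matching the paper's worked example.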
null
null
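The Kaplan–Meier curves referenced throughout the results are built from the standard product-limit estimator. A minimal stdlib-only sketch follows; this is illustrative (the paper used SPSS/GraphPad), and the toy follow-up times and event flags are invented.

```python
# Minimal Kaplan-Meier product-limit estimator (stdlib only, illustrative).
# events: 1 = death observed at that time, 0 = censored.
# Convention: at tied times, deaths are counted before censoring, and all
# patients tied at time t are still in the at-risk set at t.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct time with at least one event."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve

# Toy cohort of 5 patients (months); the second 5 is a censored observation.
# Survival drops to about 0.8, 0.6, 0.3, 0.0 at months 3, 5, 11, 14.
curve = kaplan_meier([3, 5, 5, 11, 14], [1, 1, 0, 1, 1])
```

A log-rank comparison between groups (e.g. chemotherapy vs. none), as used in the paper, would then contrast the observed and expected event counts under the pooled at-risk sets at each event time.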
[ "Ethics approval", "Patients", "Statistical analysis", "Prognostic factors for OS", "Development of nomogram", "Clinical value of the novel classification system", "Acknowledgements" ]
[ "The study was conducted under approval by the Institutional Review Board of You’an Hospital Affiliated of Capital Medical University (No. 2018066). The consent to participate was waived due to the retrospective nature of the study.", "In this multicenter retrospective study, information of 181 patients with AIDS-related lymphoma from three tertiary infectious disease hospitals was retrospectively reviewed and their clinical and imaging data were analyzed between July 2012 and November 2019. The diagnosis of HIV infection was based on the standards of the Centers for Disease Control and Prevention of the USA. The diagnosis of lymphoma was based on puncture biopsy (163 patients), endoscopic biopsy (six patients), and operation specimens (12 patients). All intervention and treatment were processed according to National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines in Oncology: non-Hodgkin lymphomas.[16–18] If the patients were in stable conditions, they were followed up once every 3 months in the first year, once every 6 months in the second year, once every year in the third year, and so on. Patients were followed up at any time if disease progression and deterioration occurred. Overall survival (OS) was selected as the endpoint. OS was measured from the lymphoma diagnosis until the last follow-up or death from any cause. Follow-up was continued until November 2020.\nPatients eligible for this study were (1) with age >18 years, (2) with a history of HIV-infection, (3) with pathologically confirmed diffuse large B-cell lymphoma (DLBCL) or Burkitt lymphoma (BL),[19] and (4) with available clinical and CT imaging data before any clinical intervention. Patients with Hodgkin lymphoma, indolent B-cell non-Hodgkin lymphoma (NHL), and T-cell NHL or lacking a specific pathological type were excluded. One patient younger than 18 years of age and two patients with severe artifacts in the CT images were also excluded. 
Patients who were lost during follow-up were also excluded from the study.\nAll examinations were performed on a Philips CT 256 scanner (Philips, Amsterdam, Netherlands); 39 patients underwent contrast-enhanced CT with intravenous contrast material. The CT protocols were: tube voltage, 120 kV; automatic tube current, 30 to 300 mA; rotation time, 0.75 s; collimation, 0.625 mm; pitch, 0.945; matrix, 512 × 512; section thickness, 5 mm; breath-hold at full inspiration. The images were transmitted to the workstation and picture archiving and communication system for multiplanar reconstruction and post-processing. All images (both axial CT images and multiplanar reconstruction images) were reviewed by three radiologists (Doctor A with 22 years of experience, B with 7 years of experience, and C with 10 years of experience) blinded to clinical and laboratory data. Three statisticians assessed the CT features independently. After separate evaluations, any divergences were resolved by discussion or consultation from a specialist in infectious imaging (Doctor D with 33 years of experience), eventually reviewed by Doctor E for consistency analysis. Baseline data are provided in Supplementary Data.", "The imaging findings were tested for agreement using the Kappa test. If the Kappa value was <0.400, the consistency of the diagnostic findings was poor. If the Kappa value was >0.750, then the diagnostic findings were considered to be sufficiently consistent. Continuous variables of parameters were tested for normal distribution using the Kolmogorov–Smirnov method. If the data fitted a normal distribution, mean ± standard deviation and the t test were used to check for differences between the two groups. The chi-square test and Fisher exact test were used to compare categorical variables.\nUnivariate Kaplan–Meier (K–M) analysis was used to determine the significant prognostic factors for OS of the patients. 
If P values of prognostic factors were <0.050, they were tested in a multivariate Cox proportional hazard model for the independence of association. Factors showing significant impact in the multivariate analysis were expressed via forest graph. Proportional hazards assumption was assessed through visual inspection of (log–log) plots of cumulative log hazard against time. A predictive model was developed for AR-NHL using Cox regression and illustrated by nomogram. The accuracy of predictions was assessed by estimating the nomogram discrimination measured by Harrell concordance index (C-index). The C-index is the probability that, for two randomly chosen patients, the patient who had the event first also had the higher predicted risk of the event according to the model. C-index = 0.500 represents an agreement by chance; C-index = 1.000 represents perfect discrimination.[20] The calibration of the nomogram was evaluated by the Hosmer–Lemeshow test. K–M estimates with log-rank tests were used to determine median OS, defined as the time period between the date of pathological diagnosis and last follow-up or death. All statistical analyses were performed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). The figures were developed using GraphPad Prism 7 (GraphPad Software; San Diego, CA, USA) and R software (version 4.0.1; http://www.r-project.org). All statistical tests were two-sided, the significance level was set at P < 0.050.", "The median follow-up of the entire cohort was 12 months. On univariable analysis, factors associated with poor OS included CD4 ≤100 cells/μL, involvement of mediastinal or hilar lymph nodes, liver, gastrointestinal tract, presence of extracapsular infiltration, necrosis inside the lesions, and treatment without chemotherapy [Table 2]. On multivariable Cox analysis, involvement of mediastinal or hilar lymph nodes, liver, necrosis in the lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for short OS. 
Significant radiological features are shown in Figure 1.\nClinical prognostic factors for survival of total patients using univariate and multivariate analyses.\nBL: Burkitt's lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; OS: Overall survival. CI: confidence interval.\n(A) A 37-year-old man with DLBCL was found positive for HIV-antibodies for 1 month and with nausea and fever for 1 day. The thickening wall of the stomach was detected using CT (white arrow). The OS time of this patient was 11 months. (B) A 31-year-old man with BL was found positive for HIV-antibodies for 1 month, fever for 5 days. CT showed multiple, irregular, heterogeneous lesions of the liver (white arrow) and bilateral adrenal glands (black arrows). The OS of this patient was 5 months. (C, D) A 29-year-old man with BL was found positive for HIV-antibodies for 2 months and dyspnea and fever for 3 days. CT lung window showed multiple irregular lesions in bilateral lungs; enlarged lymph nodes were found in the mediastinum and hilum. Progression was found on a 1-month follow-up. (D) The OS time of this patient was 3 months. (E) A 25-year-old man with DLBCL was found positive for HIV-antibodies for 3 months. A large irregular, homogeneous, isoattenuating mass that infiltrated adjacent bone and muscles was found in the right cervical area IX (white arrow). (F) A 40-year-old man with DLBCL was found positive for HIV-antibodies for 5 months. A large circular, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). BL: Burkitt lymphoma; CT: Computed tomography; DLBCL: Diffuse large B-cell lymphoma; HIV: Human immunodeficiency virus; OS: Overall survival.", "The predictive models were based on Cox regression and illustrated by nomogram [Figure 2], indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. Harrell C-index was 0.716, suggesting relatively good discrimination. 
The Hosmer–Lemeshow test gave P = 0.620 (>0.050), indicating no significant departure from good fit. The probability of survival at 1, 2, and 3 years was obtained by drawing a vertical line from the “total points” axis straight down to the outcome axes. The total number of points for each patient was obtained by summing the points for each of the individual factors in the nomogram. In the predictive model, treatment with chemotherapy contributes 100 points to the total, followed by absence of liver involvement (82 points), CD4 count >100 cells/μL (69 points), absence of mediastinal or hilar lymph node involvement (58 points), and absence of necrosis (50 points). For instance, for an AR-NHL patient with liver involvement (0 points), without mediastinal or hilar lymph node involvement (58 points), a CD4 count of 125 cells/μL (69 points), with tumor necrosis (0 points), and treated with chemotherapy (100 points), the total is 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, 40% probability of surviving for 2 years, and 31% probability of surviving for 3 years [Figure 3].\nThe nomogram indicates the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The points for CD4 were determined by drawing a line straight up to the points axis. This process was then repeated for the other four variables (necrosis, chemotherapy, liver involvement, mediastinal or hilar lymph node involvement). Each variable had a corresponding point value, marked on the upper scale. Adding up the scores of all variables predicts the OS probability. AR-NHL: AIDS-related non-Hodgkin lymphoma; NHL: Non-Hodgkin lymphoma; OS: Overall survival.\nExamples of using the nomogram to predict the individual survival probability by manually drawing straight lines across the diagram. A 32-year-old man had been positive for HIV-antibodies for 7 months and was on HAART. 
(A) A large circular, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). (B) Two irregular, heterogeneous lesions were found in the liver (black arrow). DLBCL was confirmed by biopsy. The patient received a chemotherapy regimen of CHOP and OS time was 14 months. The total points are 227 points. This patient is predicted to have a 58% probability of surviving for 1 year, 40% probability of surviving for 2 years, and 31% probability of surviving for 3 years (black line). CHOP: Cyclophosphamide, doxorubicin, vincristine and prednisolone; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; OS: Overall survival.", "Compared with the patients with CD4 <100 cells/μL, the patients with CD4 over 100 cells/μL had a better prognosis (P = 0.020) [Figure 4]. K–M analysis and the log-rank test showed that patients who did not receive chemotherapy had poorer survival outcomes than those receiving chemotherapy (P < 0.050) [Supplementary Figure S1]. Patients with involvement of mediastinal or hilar lymph nodes, involvement of the liver, or lesions with necrosis had a worse prognosis (P < 0.050) [Supplementary Figure S1].\nKaplan–Meier curves of CD4 for the whole patient population. Compared with the patients with CD4 <100 cells/μL, the patients with CD4 over 100 cells/μL had a better prognosis (P = 0.020).\nNext, we further compared the survival outcomes between patients with and without chemotherapy in each subgroup based on our novel survival-predicting model. Notably, patients in the high-risk group could benefit from chemotherapy treatment (P = 0.040).", "We thank all the patients, investigators, co-investigators, and study teams at each participating site." ]
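The statistical-analysis text above defines Harrell's C-index verbally (0.500 = chance agreement, 1.000 = perfect discrimination). A stdlib-only sketch of the pairwise computation for right-censored data follows; it is illustrative only (the paper's 0.716 came from its fitted Cox model), and the simplification of skipping pairs with tied event times is my own.

```python
# Hedged sketch of Harrell's concordance index for right-censored data.
# A pair is usable when the patient with the shorter follow-up had an
# observed event; it is concordant when that patient also had the higher
# predicted risk. Ties in risk count 0.5; tied times are skipped here.

def harrell_c_index(times, events, risk_scores):
    usable = 0.0
    concordant = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so 'a' has the shorter follow-up time
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or events[a] == 0:
                continue  # pair not usable under Harrell's rule
            usable += 1
            if risk_scores[a] > risk_scores[b]:
                concordant += 1
            elif risk_scores[a] == risk_scores[b]:
                concordant += 0.5
    return concordant / usable if usable else float("nan")

# Toy data: higher risk score should mean earlier death; the last
# patient is censored at 14 months.
c = harrell_c_index([3, 5, 11, 14], [1, 1, 1, 0], [0.9, 0.8, 0.2, 0.4])
```

In this toy example five of the six usable pairs are concordant, so the C-index is 5/6 ≈ 0.83; the paper's value of 0.716 similarly reflects the proportion of correctly ordered patient pairs under its Cox model.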
[ null, "subjects", null, null, null, null, null ]
[ "Introduction", "Methods", "Ethics approval", "Patients", "Statistical analysis", "Results", "Patients’ demographic data", "Prognostic factors for OS", "Development of nomogram", "Clinical value of the novel classification system", "Discussion", "Acknowledgements", "Conflicts of interest", "Supplementary Material" ]
[ "Acquired immune deficiency syndrome (AIDS)-related non-Hodgkin lymphoma (AR-NHL) is a high-risk factor for morbidity and mortality in patients with AIDS.[1,2] Although the incidence of AIDS-related tumors has decreased with the advent of highly active antiretroviral therapy (HAART), the occurrence rate of AR-NHL has not declined as expected.[3,4] In addition to HAART, the use of adjuvant chemotherapy can improve the tolerance and remission rate of patients; however, inappropriate adjuvant therapy may induce adverse effects in patients, including myelosuppression, tissue necrosis, and liver dysfunction. To date, the application of chemotherapies and chemoradiotherapy remains controversial, partially due to the difficulty of selecting suitable AR-NHL patients.\nAdditionally, previous studies reported that CD4+ count, human immunodeficiency virus (HIV) ribonucleic acid levels, Ann Arbor stage, lactate dehydrogenase (LDH) levels, international prognostic index (IPI) score, and age are key predictors for survival in AR-NHL patients.[5–7] However, only a few studies investigated the significance of imaging characteristics for predicting prognosis and survival. Novel imaging modalities for assessing lymphoma can provide useful information for treatment regimens and predict patients’ prognoses. 
With the advancement of imaging techniques such as computed tomography (CT), magnetic resonance, and positron emission computed tomography (PET)/CT, radiological techniques play an increasingly essential role in detecting lesions and evaluating disease.[8–10] CT can detect enlarged lymph nodes, guide biopsy, and observe early relapse through follow-up.[11–13] Magnetic resonance imaging and PET/CT are limited due to various economic and social factors in some developing countries.[14,15] Therefore, a widely applicable nomogram based on clinical and CT characteristics is needed to appropriately predict prognosis in patients.\nTo address this issue, we integrated clinical and CT-related factors to create a novel nomogram to stratify the low- and high-risk groups. This study aimed to determine the prognostic factors associated with overall survival and to develop a prognostic nomogram incorporating clinical and CT imaging features in patients with AR-NHL.", "Ethics approval The study was conducted under approval by the Institutional Review Board of You’an Hospital Affiliated of Capital Medical University (No. 2018066). The consent to participate was waived due to the retrospective nature of the study.\nPatients In this multicenter retrospective study, information of 181 patients with AIDS-related lymphoma from three tertiary infectious disease hospitals was retrospectively reviewed and their clinical and imaging data were analyzed between July 2012 and November 2019. The diagnosis of HIV infection was based on the standards of the Centers for Disease Control and Prevention of the USA. The diagnosis of lymphoma was based on puncture biopsy (163 patients), endoscopic biopsy (six patients), and operation specimens (12 patients). 
All intervention and treatment were processed according to National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines in Oncology: non-Hodgkin lymphomas.[16–18] If the patients were in stable conditions, they were followed up once every 3 months in the first year, once every 6 months in the second year, once every year in the third year, and so on. Patients were followed up at any time if disease progression and deterioration occurred. Overall survival (OS) was selected as the endpoint. OS was measured from the lymphoma diagnosis until the last follow-up or death from any cause. Follow-up was continued until November 2020.\nPatients eligible for this study were (1) with age >18 years, (2) with a history of HIV-infection, (3) with pathologically confirmed diffuse large B-cell lymphoma (DLBCL) or Burkitt lymphoma (BL),[19] and (4) with available clinical and CT imaging data before any clinical intervention. Patients with Hodgkin lymphoma, indolent B-cell non-Hodgkin lymphoma (NHL), and T-cell NHL or lacking a specific pathological type were excluded. One patient younger than 18 years of age and two patients with severe artifacts in the CT images were also excluded. Patients who were lost during follow-up were also excluded from the study.\nAll examinations were performed on a Philips CT 256 scanner (Philips, Amsterdam, Netherlands); 39 patients underwent contrast-enhanced CT with intravenous contrast material. The CT protocols were: tube voltage, 120 kV; automatic tube current, 30 to 300 mA; rotation time, 0.75 s; collimation, 0.625 mm; pitch, 0.945; matrix, 512 × 512; section thickness, 5 mm; breath-hold at full inspiration. The images were transmitted to the workstation and picture archiving and communication system for multiplanar reconstruction and post-processing. 
All images (both axial CT images and multiplanar reconstruction images) were reviewed by three radiologists (Doctor A with 22 years of experience, B with 7 years of experience, and C with 10 years of experience) blinded to clinical and laboratory data. Three statisticians assessed the CT features independently. After separate evaluations, any divergences were resolved by discussion or consultation from a specialist in infectious imaging (Doctor D with 33 years of experience), eventually reviewed by Doctor E for consistency analysis. Baseline data are provided in Supplementary Data.\nStatistical analysis The imaging findings were tested for agreement using the Kappa test. If the Kappa value was <0.400, the consistency of the diagnostic findings was poor. 
If the Kappa value was >0.750, then the diagnostic findings were considered to be sufficiently consistent. Continuous variables of parameters were tested for normal distribution using the Kolmogorov–Smirnov method. If the data fitted a normal distribution, mean ± standard deviation and the t test were used to check for differences between the two groups. The chi-square test and Fisher exact test were used to compare categorical variables.\nUnivariate Kaplan–Meier (K–M) analysis was used to determine the significant prognostic factors for OS of the patients. If P values of prognostic factors were <0.050, they were tested in a multivariate Cox proportional hazard model for the independence of association. Factors showing significant impact in the multivariate analysis were expressed via forest graph. Proportional hazards assumption was assessed through visual inspection of (log–log) plots of cumulative log hazard against time. A predictive model was developed for AR-NHL using Cox regression and illustrated by nomogram. The accuracy of predictions was assessed by estimating the nomogram discrimination measured by Harrell concordance index (C-index). The C-index is the probability that, for two randomly chosen patients, the patient who had the event first also had the higher predicted risk of the event according to the model. C-index = 0.500 represents an agreement by chance; C-index = 1.000 represents perfect discrimination.[20] The calibration of the nomogram was evaluated by the Hosmer–Lemeshow test. K–M estimates with log-rank tests were used to determine median OS, defined as the time period between the date of pathological diagnosis and last follow-up or death. All statistical analyses were performed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). The figures were developed using GraphPad Prism 7 (GraphPad Software; San Diego, CA, USA) and R software (version 4.0.1; http://www.r-project.org). 
All statistical tests were two-sided, the significance level was set at P < 0.050.", "The study was conducted under approval by the Institutional Review Board of You’an Hospital Affiliated of Capital Medical University (No. 2018066). The consent to participate was waived due to the retrospective nature of the study.", "In this multicenter retrospective study, information of 181 patients with AIDS-related lymphoma from three tertiary infectious disease hospitals was retrospectively reviewed and their clinical and imaging data were analyzed between July 2012 and November 2019. The diagnosis of HIV infection was based on the standards of the Centers for Disease Control and Prevention of the USA. The diagnosis of lymphoma was based on puncture biopsy (163 patients), endoscopic biopsy (six patients), and operation specimens (12 patients). All intervention and treatment were processed according to National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines in Oncology: non-Hodgkin lymphomas.[16–18] If the patients were in stable conditions, they were followed up once every 3 months in the first year, once every 6 months in the second year, once every year in the third year, and so on. Patients were followed up at any time if disease progression and deterioration occurred. Overall survival (OS) was selected as the endpoint. OS was measured from the lymphoma diagnosis until the last follow-up or death from any cause. 
Follow-up was continued until November 2020.\nPatients eligible for this study were (1) aged >18 years, (2) with a history of HIV infection, (3) with pathologically confirmed diffuse large B-cell lymphoma (DLBCL) or Burkitt lymphoma (BL),[19] and (4) with available clinical and CT imaging data before any clinical intervention. Patients with Hodgkin lymphoma, indolent B-cell non-Hodgkin lymphoma (NHL), or T-cell NHL, or lacking a specific pathological type, were excluded. One patient younger than 18 years of age and two patients with severe artifacts in the CT images were also excluded. Patients who were lost during follow-up were also excluded from the study.\nAll examinations were performed on a Philips CT 256 scanner (Philips, Amsterdam, Netherlands); 39 patients underwent contrast-enhanced CT with intravenous contrast material. The CT protocols were: tube voltage, 120 kV; automatic tube current, 30 to 300 mA; rotation time, 0.75 s; collimation, 0.625 mm; pitch, 0.945; matrix, 512 × 512; section thickness, 5 mm; breath-hold at full inspiration. The images were transmitted to the workstation and picture archiving and communication system for multiplanar reconstruction and post-processing. All images (both axial CT images and multiplanar reconstruction images) were reviewed by three radiologists (Doctor A with 22 years of experience, B with 7 years of experience, and C with 10 years of experience) blinded to clinical and laboratory data. Three statisticians assessed the CT features independently. After separate evaluations, any divergences were resolved by discussion or by consultation with a specialist in infectious imaging (Doctor D with 33 years of experience), and the findings were eventually reviewed by Doctor E for consistency analysis. Baseline data are provided in Supplementary Data.", "The imaging findings were tested for agreement using the Kappa test. If the Kappa value was <0.400, the consistency of the diagnostic findings was considered poor. 
If the Kappa value was >0.750, the diagnostic findings were considered sufficiently consistent. Continuous variables were tested for normal distribution using the Kolmogorov–Smirnov method. If the data fitted a normal distribution, they were expressed as mean ± standard deviation and the t test was used to check for differences between the two groups. The chi-square test and Fisher exact test were used to compare categorical variables.\nUnivariate Kaplan–Meier (K–M) analysis was fitted to determine the significant prognostic factors for OS. Prognostic factors with P values <0.050 were tested in a multivariate Cox proportional hazard model for independence of association. Factors showing a significant impact in the multivariate analysis were presented in a forest plot. The proportional hazards assumption was assessed through visual inspection of (log–log) plots of cumulative log hazard against time. A predictive model was developed for AR-NHL using Cox regression and illustrated by a nomogram. The accuracy of predictions was assessed by estimating the nomogram discrimination measured by the Harrell concordance index (C-index). The C-index is the probability that, for two randomly chosen patients, the patient who experiences the event first has the higher predicted probability of the event according to the model. A C-index of 0.500 represents agreement by chance; a C-index of 1.000 represents perfect discrimination.[20] The calibration of the nomogram was evaluated by the Hosmer–Lemeshow test. Median OS, defined as the time from the date of pathological diagnosis to last follow-up or death, was estimated with K–M methods and compared using the log-rank test. All statistical analyses were performed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). The figures were developed using GraphPad Prism 7 (GraphPad Software; San Diego, CA, USA) and R software (version 4.0.1; http://www.r-project.org). 
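The discrimination measure described here, Harrell's C-index, counts concordant pairs among comparable pairs of patients. As an illustrative from-scratch sketch (the study computed it within its modeling software; names and inputs here are hypothetical):

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when patient i has an observed event
    strictly before patient j's time; the pair is concordant when patient i
    also has the higher predicted risk. 0.5 is chance, 1.0 is perfect.
    """
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have an observed event before j's (event or censoring) time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    # Tied risk predictions count as half-concordant, per the usual convention
    return (concordant + 0.5 * ties) / comparable
```

When higher predicted risk always corresponds to earlier events the index is 1.0; when the ordering is exactly reversed it is 0.0.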
All statistical tests were two-sided, and the significance level was set at P < 0.050.", "Patients’ demographic data A total of 121 patients with AR-NHL, reviewed at multiple centers between 2000 and 2016, were included in the final analysis. The median age at diagnosis was 40 years (range, 35–53 years), and 112 (92.5%) of the eligible patients were male. Among all patients, 83 received chemotherapy and 57 underwent definitive HAART. Table 1 summarizes the demographic, clinical, and tumor characteristics and the laboratory results, such as white blood cell, neutrophil, and lymphocyte counts. All CT features showed good agreement among the three readers (Kappa value 0.778–0.998; Supplementary Table S1).\nDemographic, clinical, and tumor characteristics of patients with AR-NHL.\nData are presented as n (%) or median (inter-quantile range). AR-NHL: AIDS-related non-Hodgkin lymphoma; BL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; NEUT: Neutrophil; WBC: White blood cell.\nPrognostic factors for OS The median follow-up of the entire cohort was 12 months. On univariable analysis, factors associated with poor OS included CD4 ≤100 cells/μL; involvement of the mediastinal or hilar lymph nodes, liver, or gastrointestinal tract; extracapsular infiltration; necrosis inside the lesions; and treatment without chemotherapy [Table 2]. On multivariable Cox analysis, involvement of mediastinal or hilar lymph nodes, liver involvement, necrosis in the lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for short OS. Significant radiological features are shown in Figure 1.\nClinical prognostic factors for survival of total patients using univariate and multivariate analyses.\nBL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; OS: Overall survival; CI: Confidence interval.\n(A) A 37-year-old man with DLBCL, HIV-antibody positive for 1 month, with nausea and fever for 1 day. CT showed thickening of the stomach wall (white arrow). The OS of this patient was 11 months. (B) A 31-year-old man with BL, HIV-antibody positive for 1 month, with fever for 5 days. CT showed multiple, irregular, heterogeneous lesions of the liver (white arrow) and bilateral adrenal glands (black arrows). The OS of this patient was 5 months. (C, D) A 29-year-old man with BL, HIV-antibody positive for 2 months, with dyspnea and fever for 3 days. The CT lung window showed multiple irregular lesions in both lungs; enlarged mediastinal and hilar lymph nodes were found. Progression was found at the 1-month follow-up. 
(D) The OS time of this patient was 3 months. (E) A 25-year-old man with DLBCL, HIV-antibody positive for 3 months. A large, irregular, homogeneous, isoattenuating mass infiltrating the adjacent bone and muscles was found in the right cervical area IX (white arrow). (F) A 40-year-old man with DLBCL, HIV-antibody positive for 5 months. A large, round, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). BL: Burkitt lymphoma; CT: Computed tomography; DLBCL: Diffuse large B-cell lymphoma; HIV: Human immunodeficiency virus; OS: Overall survival.\nDevelopment of nomogram The predictive model was based on Cox regression and illustrated by a nomogram [Figure 2] indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The Harrell C-index was 0.716, suggesting relatively good discrimination. The Hosmer–Lemeshow test gave P = 0.620 > 0.050, indicating no departure from a good fit. The probability of survival at 1, 2, and 3 years was obtained by drawing a vertical line from the “total points” axis straight down to the outcome axes. The total number of points for each patient was obtained by summing the points for each of the individual factors in the nomogram. In the predictive model, treatment with chemotherapy contributed 100 points to the total score, followed by absence of liver involvement (82 points), CD4 count >100 cells/μL (69 points), absence of mediastinal or hilar lymph node involvement (58 points), and absence of necrosis (50 points). 
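The point tally described for the nomogram can be sketched in a few lines. The feature keys below are hypothetical names, and note that converting total points into 1-, 2-, and 3-year survival probabilities still requires reading the nomogram's outcome axes; it is not a closed-form formula reproduced here:

```python
# Points awarded for each favorable feature, as reported for the nomogram
NOMOGRAM_POINTS = {
    "chemotherapy": 100,            # treated with chemotherapy
    "liver_free": 82,               # no liver involvement
    "cd4_gt_100": 69,               # CD4 count > 100 cells/uL
    "mediastinal_hilar_free": 58,   # no mediastinal/hilar lymph node involvement
    "no_necrosis": 50,              # no necrosis inside the lesions
}

def total_points(patient):
    """Sum the nomogram points for the favorable features a patient has.

    patient: dict mapping the feature keys above to True/False.
    """
    return sum(pts for feature, pts in NOMOGRAM_POINTS.items()
               if patient.get(feature))
```

The worked example in the text (liver involved, nodes free, CD4 125 cells/μL, necrosis present, chemotherapy given) yields 58 + 69 + 100 = 227 points.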
For instance, consider an AR-NHL patient with liver involvement (0 points), no mediastinal or hilar lymph node involvement (58 points), a CD4 count of 125 cells/μL (69 points), tumor necrosis (0 points), and treatment with chemotherapy (100 points); the total is 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years [Figure 3].\nNomogram indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The value of CD4 is located on its axis and a line is drawn straight up to the points axis. This process is then repeated for the other four variables (necrosis, chemotherapy, liver involvement, mediastinal or hilar lymph node involvement). Each variable has a corresponding point value, marked on the upper scale. Summing the scores of all variables predicts the OS probability. AR-NHL: AIDS-related non-Hodgkin lymphoma; NHL: Non-Hodgkin lymphoma; OS: Overall survival.\nExample of using the nomogram to predict individual survival probability by manually placing straight lines across the diagram. A 32-year-old man, HIV-antibody positive for 7 months and treated with HAART. (A) A large, round, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). (B) Two irregular, heterogeneous lesions were found in the liver (black arrow). DLBCL was confirmed by biopsy. The patient received a CHOP chemotherapy regimen, and the OS was 14 months. The total is 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years (black line). 
CHOP: Cyclophosphamide, doxorubicin, vincristine, and prednisolone; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; OS: Overall survival.\nClinical value of the novel classification system Compared with patients with CD4 <100 cells/μL, patients with CD4 over 100 cells/μL had a good prognosis (P = 0.020) [Figure 4]. K–M analysis and the log-rank test showed that patients who did not receive chemotherapy had poorer survival outcomes than those receiving chemotherapy (P < 0.050) [Supplementary Figure S1]. Patients with involvement of mediastinal or hilar lymph nodes, involvement of the liver, or lesions with necrosis had a worse prognosis (P < 0.050) [Supplementary Figure S1].\nKaplan–Meier curves of CD4 for the whole patient population. Compared with patients with CD4 <100 cells/μL, patients with CD4 over 100 cells/μL had a good prognosis (P = 0.020).\nNext, we further compared the survival outcomes between patients with and without chemotherapy in each subgroup based on our novel survival-predicting model. Notably, patients in the high-risk group could benefit from chemotherapy treatment (P = 0.040).", "A total of 121 patients with AR-NHL, reviewed at multiple centers between 2000 and 2016, were included in the final analysis. The median age at diagnosis was 40 years (range, 35–53 years), and 112 (92.5%) of the eligible patients were male. Among all patients, 83 received chemotherapy and 57 underwent definitive HAART. Table 1 summarizes the demographic, clinical, and tumor characteristics and the laboratory results, such as white blood cell, neutrophil, and lymphocyte counts. 
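The K–M survival estimates used throughout these comparisons can be illustrated with a from-scratch product-limit sketch (the study computed them in SPSS/R; this hypothetical code is for illustration only):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate for right-censored data.

    times  : follow-up time for each patient
    events : 1 if death was observed at that time, 0 if censored
    Returns a list of (time, S(t)) pairs at each observed event time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = leaving = 0
        while i < len(order) and times[order[i]] == t:  # group tied times
            deaths += events[order[i]]
            leaving += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # conditional survival at time t
            curve.append((t, surv))
        at_risk -= leaving  # both deaths and censored patients leave the risk set
    return curve

def median_survival(curve):
    """First event time at which S(t) falls to 0.5 or below (None if never)."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None
```

Median OS is then read off as the first time the estimated survival curve drops to 50%; group curves (for example, CD4 above vs. below 100 cells/μL) are compared with a log-rank test, which is not sketched here.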
All CT features showed good agreement among the three readers (Kappa value 0.778–0.998; Supplementary Table S1).\nDemographic, clinical, and tumor characteristics of patients with AR-NHL.\nData are presented as n (%) or median (inter-quantile range). AR-NHL: AIDS-related non-Hodgkin lymphoma; BL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; NEUT: Neutrophil; WBC: White blood cell.", "The median follow-up of the entire cohort was 12 months. On univariable analysis, factors associated with poor OS included CD4 ≤100 cells/μL; involvement of the mediastinal or hilar lymph nodes, liver, or gastrointestinal tract; extracapsular infiltration; necrosis inside the lesions; and treatment without chemotherapy [Table 2]. On multivariable Cox analysis, involvement of mediastinal or hilar lymph nodes, liver involvement, necrosis in the lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for short OS. Significant radiological features are shown in Figure 1.\nClinical prognostic factors for survival of total patients using univariate and multivariate analyses.\nBL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; OS: Overall survival; CI: Confidence interval.\n(A) A 37-year-old man with DLBCL, HIV-antibody positive for 1 month, with nausea and fever for 1 day. CT showed thickening of the stomach wall (white arrow). The OS of this patient was 11 months. (B) A 31-year-old man with BL, HIV-antibody positive for 1 month, with fever for 5 days. CT showed multiple, irregular, heterogeneous lesions of the liver (white arrow) and bilateral adrenal glands (black arrows). The OS of this patient was 5 months. 
(C, D) A 29-year-old man with BL, HIV-antibody positive for 2 months, with dyspnea and fever for 3 days. The CT lung window showed multiple irregular lesions in both lungs; enlarged mediastinal and hilar lymph nodes were found. Progression was found at the 1-month follow-up. (D) The OS time of this patient was 3 months. (E) A 25-year-old man with DLBCL, HIV-antibody positive for 3 months. A large, irregular, homogeneous, isoattenuating mass infiltrating the adjacent bone and muscles was found in the right cervical area IX (white arrow). (F) A 40-year-old man with DLBCL, HIV-antibody positive for 5 months. A large, round, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). BL: Burkitt lymphoma; CT: Computed tomography; DLBCL: Diffuse large B-cell lymphoma; HIV: Human immunodeficiency virus; OS: Overall survival.", "The predictive model was based on Cox regression and illustrated by a nomogram [Figure 2] indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The Harrell C-index was 0.716, suggesting relatively good discrimination. The Hosmer–Lemeshow test gave P = 0.620 > 0.050, indicating no departure from a good fit. The probability of survival at 1, 2, and 3 years was obtained by drawing a vertical line from the “total points” axis straight down to the outcome axes. The total number of points for each patient was obtained by summing the points for each of the individual factors in the nomogram. In the predictive model, treatment with chemotherapy contributed 100 points to the total score, followed by absence of liver involvement (82 points), CD4 count >100 cells/μL (69 points), absence of mediastinal or hilar lymph node involvement (58 points), and absence of necrosis (50 points). 
For instance, consider an AR-NHL patient with liver involvement (0 points), no mediastinal or hilar lymph node involvement (58 points), a CD4 count of 125 cells/μL (69 points), tumor necrosis (0 points), and treatment with chemotherapy (100 points); the total is 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years [Figure 3].\nNomogram indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The value of CD4 is located on its axis and a line is drawn straight up to the points axis. This process is then repeated for the other four variables (necrosis, chemotherapy, liver involvement, mediastinal or hilar lymph node involvement). Each variable has a corresponding point value, marked on the upper scale. Summing the scores of all variables predicts the OS probability. AR-NHL: AIDS-related non-Hodgkin lymphoma; NHL: Non-Hodgkin lymphoma; OS: Overall survival.\nExample of using the nomogram to predict individual survival probability by manually placing straight lines across the diagram. A 32-year-old man, HIV-antibody positive for 7 months and treated with HAART. (A) A large, round, heterogeneous mass with necrosis was found inside the left cervical area II (white arrow). (B) Two irregular, heterogeneous lesions were found in the liver (black arrow). DLBCL was confirmed by biopsy. The patient received a CHOP chemotherapy regimen, and the OS was 14 months. The total is 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years (black line). 
CHOP: Cyclophosphamide, doxorubicin, vincristine, and prednisolone; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; OS: Overall survival.", "Compared with patients with CD4 <100 cells/μL, patients with CD4 over 100 cells/μL had a good prognosis (P = 0.020) [Figure 4]. K–M analysis and the log-rank test showed that patients who did not receive chemotherapy had poorer survival outcomes than those receiving chemotherapy (P < 0.050) [Supplementary Figure S1]. Patients with involvement of mediastinal or hilar lymph nodes, involvement of the liver, or lesions with necrosis had a worse prognosis (P < 0.050) [Supplementary Figure S1].\nKaplan–Meier curves of CD4 for the whole patient population. Compared with patients with CD4 <100 cells/μL, patients with CD4 over 100 cells/μL had a good prognosis (P = 0.020).\nNext, we further compared the survival outcomes between patients with and without chemotherapy in each subgroup based on our novel survival-predicting model. Notably, patients in the high-risk group could benefit from chemotherapy treatment (P = 0.040).", "To the best of our knowledge, this is the largest study from three institutions in Asia to evaluate prognostic factors in patients with AR-NHL. We developed a nomogram based on patients’ demographics, CT imaging features, and laboratory data, which effectively stratified patients into low- and high-risk groups. Involvement of mediastinal or hilar lymph nodes, liver involvement, necrosis in lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for shorter OS. 
Notably, only patients in the high-risk group in our study were found to benefit significantly from chemotherapy, so we strongly recommend patients in the high-risk group as candidates for chemotherapy.\nImaging plays an important role in the detection and evaluation of AR-NHL lesions.[8,10,12] CT of the head and neck, chest, abdomen, and pelvis is a critical staging modality recommended by the NCCN guidelines.[18–20] The potential mechanism of necrosis in lymphoma is occlusion of the supplying hilar artery by the tumor (compression or invasion) in addition to lymphatic flow obstruction.[21,22] A previous study reported that HIV (−) NHL patients with necrosis had significantly higher Ann Arbor stages, greater IPI, and higher serum LDH levels than those without necrosis. However, in their K–M survival analysis, no statistically significant difference was noted for necrosis.[23] Our study focused on patients with AR-NHL, and necrosis in the lesions was an independent risk factor for shorter OS. Necrosis of the lesion indicates aggressive tumor behavior and a tendency toward treatment resistance. The presence of extracapsular infiltration also correlated with OS in the current study. Invasion by tumor cells may be a potential mechanism of extracapsular infiltration and may correlate with a poor prognosis.[24] Natural killer (NK) cells play an important role in the growth and infiltration of lymphoma cells, and activated NK cells could be a promising immunotherapeutic tool against lymphoma cells, either alone or in combination with conventional therapy.[25]\nAR-NHLs are usually B-cell, high-grade, and poorly differentiated lymphomas.[26] Extranodal site involvement is common. The liver is the second most common site, with an incidence ranging from 26% to 45%. 
HIV (+) patients have a higher incidence of NHL than HIV (−) patients.[27,28] A previous study indicated that primary mediastinal large B-cell lymphoma (PMBCL), representing 10% of all DLBCL, was predictive of poor OS and progression-free survival.[29] Compared with the cyclophosphamide, doxorubicin, vincristine, and prednisolone (CHOP) regimen, rituximab and its use with intensified chemotherapy such as R-Hyper-CVAD (cyclophosphamide, vincristine, doxorubicin, and dexamethasone) and R-EPOCH (etoposide, prednisone, vincristine, cyclophosphamide, doxorubicin) might improve the response rate and survival outcome for patients with mediastinal NHL, especially for PMBCL.[29–31]\nAlthough Guech-Ongey et al[32,33] found that the incidence of acquired immune deficiency syndrome-related Burkitt lymphoma declined at low CD4 counts, suggesting that functional CD4 lymphocytes may be required for BL to develop, we consider lower CD4 counts to reflect more severe immunodeficiency, which is likely to cause opportunistic infections and other malignant tumors in patients with AR-NHL.\nFor patients who have factors correlated with shorter OS time, intensive chemotherapy should be considered. Intensive chemotherapy is relatively safe and effective in AR-NHL.[34] Notably, only patients in the high-risk group in our study were found to benefit significantly from chemotherapy, so we strongly recommend patients in the high-risk group as candidates for chemotherapy. Chemotherapy with concomitant HAART for AR-NHL does not cause prolonged suppression of lymphocyte subsets. 
On the contrary, chemotherapy can increase the counts of CD4, CD8, CD19, and CD56 cell populations, which provides reassurance regarding the long-term consequences of chemotherapy in these individuals.[35] For patients in the lower-risk group, however, the survival difference was not statistically significant, so HAART alone and regular examination are recommended, because adverse effects of chemotherapy, such as severe bone marrow toxicity, should be considered.[36]\nThere are several limitations to the current study. First, it is a retrospective study with a limited number of patients, and patients were diagnosed in different hospitals. Therefore, the quality of chemotherapy and the methods employed by pathologists for diagnosing metastatic lymph nodes may not have been uniform, which might bias our results. Second, although univariate analysis showed that pathological classification (acquired immune deficiency syndrome-related diffuse large B-cell lymphoma vs. acquired immune deficiency syndrome-related Burkitt lymphoma) was not an independent prognostic factor in our patients, other studies have reported that pathological classifications correspond to different OS. Prospective investigations with larger samples focusing on specific pathological types should be designed to find more predictive factors for the prognosis of AR-NHL patients. Importantly, external validation of the proposed staging system in an independent cohort is required to determine whether it can be generalized to other institutions. Despite these limitations, our study remains valuable because CT features of lesion necrosis and organ involvement are readily recognized in clinical work.\nIn conclusion, a survival-predicting nomogram integrating CT features was developed in this study, which is promising for assessing the survival outcomes of patients with AR-NHL. 
Notably, the decision-making for chemotherapy regimens and more frequent follow-up should be considered in the high-risk group determined by this model.", "We thank all the patients, investigators, co-investigators, and study teams at each participating site.", "None.", "" ]
[ "intro", "methods", null, "subjects", null, "results", "subjects", null, null, null, "discussion", null, "COI-statement", "supplementary-material" ]
[ "Lymphoma", "AIDS-related AR-NHL", "Computed tomography", "Prognosis", "Nomogram" ]
Introduction: Acquired immune deficiency syndrome (AIDS)-related non-Hodgkin lymphoma (AR-NHL) is a high-risk factor for morbidity and mortality in patients with AIDS.[1,2] Although the incidence of AIDS-related tumors has decreased with the advent of highly active antiretroviral therapy (HAART), the occurrence rate of AR-NHL does not appear to have declined as expected.[3,4] In addition to HAART, the use of adjuvant chemotherapy can improve the tolerance and remission rate of patients; however, inappropriate adjuvant therapy may induce adverse effects in patients, including myelosuppression, tissue necrosis, and liver dysfunction. To date, the application of chemotherapy and chemoradiotherapy remains controversial, partially due to the difficulty of selecting suitable AR-NHL patients. Additionally, previous studies reported that CD4+ count, human immunodeficiency virus (HIV) ribonucleic acid levels, Ann Arbor stage, lactate dehydrogenase (LDH) levels, international prognostic index (IPI) score, and age are key predictors of survival in AR-NHL patients.[5–7] However, only a few studies have investigated the significance of imaging characteristics for predicting prognosis and survival. Novel imaging modalities for assessing lymphoma can provide useful information for treatment regimens and predict patients’ prognoses. With the advancement of imaging techniques such as computed tomography (CT), magnetic resonance imaging, and positron emission tomography (PET)/CT, radiological techniques play an increasingly essential role in detecting lesions and evaluating disease.[8–10] CT can detect enlarged lymph nodes, guide biopsy, and detect early relapse during follow-up.[11–13] Magnetic resonance imaging and PET/CT are limited by various economic and social factors in some developing countries.[14,15] Therefore, a widely applicable nomogram based on clinical and CT characteristics is needed to appropriately predict prognosis in patients. 
To address this issue, we integrated clinical and CT-related factors to create a novel nomogram to stratify patients into low- and high-risk groups. This study aimed to determine the prognostic factors associated with overall survival and to develop a prognostic nomogram incorporating clinical and CT imaging features in patients with AR-NHL. Methods: Ethics approval The study was conducted with the approval of the Institutional Review Board of You’an Hospital Affiliated of Capital Medical University (No. 2018066). The requirement for consent to participate was waived owing to the retrospective nature of the study. Patients In this multicenter retrospective study, the clinical and imaging data of 181 patients with AIDS-related lymphoma from three tertiary infectious disease hospitals were retrospectively reviewed between July 2012 and November 2019. The diagnosis of HIV infection was based on the standards of the Centers for Disease Control and Prevention of the USA. The diagnosis of lymphoma was based on puncture biopsy (163 patients), endoscopic biopsy (six patients), and operation specimens (12 patients). All interventions and treatments were administered according to the National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines in Oncology: non-Hodgkin lymphomas.[16–18] If patients were in stable condition, they were followed up once every 3 months in the first year, once every 6 months in the second year, and once every year from the third year onward. Patients were followed up at any time if disease progression or deterioration occurred. Overall survival (OS) was selected as the endpoint. OS was measured from the lymphoma diagnosis until the last follow-up or death from any cause.
Follow-up was continued until November 2020. Patients eligible for this study were (1) older than 18 years, (2) with a history of HIV infection, (3) with pathologically confirmed diffuse large B-cell lymphoma (DLBCL) or Burkitt lymphoma (BL),[19] and (4) with available clinical and CT imaging data before any clinical intervention. Patients with Hodgkin lymphoma, indolent B-cell non-Hodgkin lymphoma (NHL), or T-cell NHL, or lacking a specific pathological type, were excluded. One patient younger than 18 years of age and two patients with severe artifacts in the CT images were also excluded. Patients who were lost during follow-up were also excluded from the study. All examinations were performed on a Philips 256-slice CT scanner (Philips, Amsterdam, Netherlands); 39 patients underwent contrast-enhanced CT with intravenous contrast material. The CT protocols were: tube voltage, 120 kV; automatic tube current, 30 to 300 mA; rotation time, 0.75 s; collimation, 0.625 mm; pitch, 0.945; matrix, 512 × 512; section thickness, 5 mm; breath-hold at full inspiration. The images were transmitted to the workstation and picture archiving and communication system for multiplanar reconstruction and post-processing. All images (both axial CT images and multiplanar reconstruction images) were reviewed by three radiologists (Doctor A with 22 years of experience, B with 7 years of experience, and C with 10 years of experience) blinded to clinical and laboratory data. The three readers assessed the CT features independently. After separate evaluations, any divergences were resolved by discussion or by consultation with a specialist in infectious imaging (Doctor D with 33 years of experience), and the findings were eventually reviewed by Doctor E for consistency analysis. Baseline data are provided in the Supplementary Data.
Statistical analysis The imaging findings were tested for agreement using the Kappa test. A Kappa value <0.400 indicated poor consistency of the diagnostic findings, whereas a Kappa value >0.750 indicated sufficiently consistent findings. Continuous variables were tested for normal distribution using the Kolmogorov–Smirnov method. For normally distributed data, values were expressed as mean ± standard deviation and the t test was used to check for differences between the two groups. The chi-square test and Fisher exact test were used to compare categorical variables. Univariate Kaplan–Meier (K–M) analysis was used to determine the significant prognostic factors for OS. Prognostic factors with P values <0.050 were then tested in a multivariate Cox proportional hazards model for independence of association. Factors showing a significant impact in the multivariate analysis were presented in a forest plot.
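The inter-reader agreement criterion described above (Kappa <0.400 poor, >0.750 sufficiently consistent) can be illustrated with a minimal, self-contained sketch of Cohen's kappa for two readers. The ratings below are hypothetical and only serve to show the calculation:

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical ratings of the same cases."""
    n = len(r1)
    categories = set(r1) | set(r2)
    # Observed agreement: fraction of cases the readers rated identically.
    p_observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of each reader's marginal frequencies.
    p_chance = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical binary ratings (1 = CT feature present) for ten lesions.
reader_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
reader_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
kappa = cohen_kappa(reader_a, reader_b)
# By the thresholds in the text, kappa > 0.750 counts as sufficiently consistent.
```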
The proportional hazards assumption was assessed through visual inspection of log–log plots of the cumulative log hazard against time. A predictive model was developed for AR-NHL using Cox regression and illustrated by a nomogram. The accuracy of predictions was assessed by estimating the discrimination of the nomogram with the Harrell concordance index (C-index). The C-index is the probability that, for two randomly chosen patients, the patient who experienced the event first was assigned the higher event probability by the model; a C-index of 0.500 represents agreement by chance, and a C-index of 1.000 represents perfect discrimination.[20] The calibration of the nomogram was evaluated by the Hosmer–Lemeshow test. K–M estimates with the log-rank test were used to determine median OS, defined as the time from the date of pathological diagnosis to the last follow-up or death. All statistical analyses were performed using SPSS version 22.0 (IBM Corp., Armonk, NY, USA). The figures were developed using GraphPad Prism 7 (GraphPad Software, San Diego, CA, USA) and R software (version 4.0.1; http://www.r-project.org). All statistical tests were two-sided, and the significance level was set at P < 0.050.
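The pairwise definition of the C-index used above can be computed directly. A minimal sketch for right-censored survival data follows; the toy follow-up times and risk scores are hypothetical:

```python
def harrell_c_index(times, events, risks):
    """Harrell's concordance index.
    times: follow-up times; events: 1 = event observed, 0 = censored;
    risks: model-predicted risk scores (higher = worse prognosis)."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            # A pair is comparable only if the shorter time ended in an event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0   # model ranked the pair correctly
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half
    return concordant / comparable

# Hypothetical example: risk scores that perfectly match event order give 1.0
# (perfect discrimination); identical scores give the chance value 0.5.
c = harrell_c_index([2, 5, 8, 11], [1, 1, 0, 1], [0.9, 0.6, 0.4, 0.1])
```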
Results: Patients’ demographic data A total of 121 AR-NHL patients reviewed at multiple centers between 2000 and 2016 were included in the final analysis. The median age at diagnosis was 40 years (range, 35–53 years), and 112 (92.5%) of the eligible patients were male. Among all patients, 83 received chemotherapy and 57 underwent definitive HAART. Table 1 summarizes the demographic data and the clinical and tumor characteristics, including laboratory results such as white blood cell, neutrophil, and lymphocyte counts. All CT features showed good agreement among the three doctors (Kappa values 0.778–0.998; Supplementary Table S1). Demographic, clinical, and tumor characteristics of patients with AR-NHL. Data are presented as n (%) or median (interquartile range). AR-NHL: AIDS-related non-Hodgkin lymphoma; BL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; NEUT: Neutrophil; WBC: White blood cell.
Prognostic factors for OS The median follow-up of the entire cohort was 12 months. On univariable analysis, factors associated with poor OS included CD4 ≤100 cells/μL; involvement of mediastinal or hilar lymph nodes, the liver, or the gastrointestinal tract; presence of extracapsular infiltration; necrosis inside the lesions; and treatment without chemotherapy [Table 2]. On multivariable Cox analysis, involvement of mediastinal or hilar lymph nodes, liver involvement, necrosis in the lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for short OS. Significant radiological features are shown in Figure 1. Clinical prognostic factors for survival of total patients using univariate and multivariate analyses. BL: Burkitt lymphoma; CI: Confidence interval; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; OS: Overall survival.
(A) A 37-year-old man with DLBCL who had been HIV-antibody positive for 1 month, with nausea and fever for 1 day. CT showed thickening of the stomach wall (white arrow). The OS of this patient was 11 months. (B) A 31-year-old man with BL who had been HIV-antibody positive for 1 month, with fever for 5 days. CT showed multiple irregular, heterogeneous lesions of the liver (white arrow) and bilateral adrenal glands (black arrows). The OS of this patient was 5 months. (C, D) A 29-year-old man with BL who had been HIV-antibody positive for 2 months, with dyspnea and fever for 3 days. The CT lung window showed multiple irregular lesions in both lungs; enlarged lymph nodes were found in the mediastinum and hila. Progression was found on 1-month follow-up (D). The OS of this patient was 3 months. (E) A 25-year-old man with DLBCL who had been HIV-antibody positive for 3 months. A large irregular, homogeneous, isoattenuating mass infiltrating the adjacent bone and muscles was found in the right cervical area IX (white arrow). (F) A 40-year-old man with DLBCL who had been HIV-antibody positive for 5 months. A large circular, heterogeneous mass with necrosis was found in the left cervical area II (white arrow). BL: Burkitt lymphoma; CT: Computed tomography; DLBCL: Diffuse large B-cell lymphoma; HIV: Human immunodeficiency virus; OS: Overall survival.
Development of nomogram The predictive model was based on Cox regression and illustrated by a nomogram [Figure 2], indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The Harrell C-index was 0.716, suggesting relatively good discrimination.
The Hosmer–Lemeshow test gave P = 0.620 > 0.050, indicating no departure from a good fit. The probability of survival at 1, 2, and 3 years was obtained by drawing a vertical line from the “total points” axis straight down to the outcome axes. The total number of points for each patient was obtained by summing the points for each of the individual factors in the nomogram. In the predictive model, treatment with chemotherapy contributed 100 points to the total, followed by absence of liver involvement (82 points), CD4 count >100 cells/μL (69 points), absence of mediastinal or hilar lymph node involvement (58 points), and absence of necrosis (50 points). For instance, an AR-NHL patient with liver involvement (0 points), without mediastinal or hilar lymph node involvement (58 points), a CD4 count of 125 cells/μL (69 points), with tumor necrosis (0 points), and treated with chemotherapy (100 points) had a total of 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years [Figure 3]. Nomogram indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The points for CD4 were determined by drawing a line straight up to the points axis. This process was then repeated for the other four variables (necrosis, chemotherapy, liver involvement, and mediastinal or hilar lymph node involvement). Each variable had a corresponding value (points) marked on the upper scale. The sum of the scores of all variables predicts the OS probability. AR-NHL: AIDS-related non-Hodgkin lymphoma; NHL: Non-Hodgkin lymphoma; OS: Overall survival. Example of using the nomogram to predict the individual survival probability by manually placing straight lines across the diagram. A 32-year-old man who had been HIV-antibody positive for 7 months and was on HAART.
(A) A large circular, heterogeneous mass with necrosis was found in the left cervical area II (white arrow). (B) Two irregular, heterogeneous lesions were found in the liver (black arrow). DLBCL was confirmed by biopsy. The patient received a chemotherapy regimen of CHOP, and the OS was 14 months. The total points were 227, predicting a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years (black line). CHOP: Cyclophosphamide, doxorubicin, vincristine, and prednisolone; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; OS: Overall survival.
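The nomogram's point arithmetic described above can be collected into a small helper that reproduces the worked example. This is a sketch of the published point weights only; the mapping from total points to 1-/2-/3-year survival probabilities must still be read off the nomogram's outcome axes and is not reproduced here:

```python
# Point weights from the nomogram (the favorable level of each factor earns points).
POINTS = {
    "chemotherapy": 100,            # treated with chemotherapy
    "liver_free": 82,               # no liver involvement
    "cd4_gt_100": 69,               # CD4 count > 100 cells/uL
    "mediastinal_hilar_free": 58,   # no mediastinal/hilar lymph node involvement
    "no_necrosis": 50,              # no necrosis inside the lesions
}

def total_points(chemo, liver_involved, cd4, med_hilar_involved, necrosis):
    """Sum the nomogram points for one AR-NHL patient."""
    total = 0
    if chemo:
        total += POINTS["chemotherapy"]
    if not liver_involved:
        total += POINTS["liver_free"]
    if cd4 > 100:
        total += POINTS["cd4_gt_100"]
    if not med_hilar_involved:
        total += POINTS["mediastinal_hilar_free"]
    if not necrosis:
        total += POINTS["no_necrosis"]
    return total

# Worked example from the text: chemotherapy given, liver involved,
# CD4 = 125 cells/uL, no mediastinal/hilar involvement, necrosis present.
pts = total_points(True, True, 125, False, True)  # 100 + 69 + 58 = 227
```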
Clinical value of the novel classification system Compared with patients with CD4 <100 cells/μL, patients with CD4 over 100 cells/μL had a better prognosis (P = 0.020) [Figure 4].
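The Kaplan–Meier curves behind these group comparisons come from the product-limit estimator. A minimal sketch with hypothetical follow-up data:

```python
def km_curve(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up in months; events: 1 = death observed, 0 = censored.
    Returns (time, S(t)) at each distinct event time."""
    data = sorted(zip(times, events))
    n = len(data)
    survival, curve, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        at_risk = n - i                             # subjects still under observation
        deaths = sum(e for tt, e in data if tt == t)
        if deaths:
            survival *= 1 - deaths / at_risk        # product-limit step
            curve.append((t, survival))
        while i < n and data[i][0] == t:            # skip all subjects tied at t
            i += 1
    return curve

# Hypothetical cohort: deaths at 3, 6, and 12 months, one patient censored at 6.
curve = km_curve([3, 6, 6, 12], [1, 1, 0, 1])
```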
K–M analysis and the log-rank test showed that patients who did not receive chemotherapy had poorer survival outcomes than those receiving chemotherapy (P < 0.050) [Supplementary Figure S1]. Patients with involvement of mediastinal or hilar lymph nodes, involvement of the liver, or lesions with necrosis had a worse prognosis (P < 0.050) [Supplementary Figure S1]. Kaplan–Meier curves of CD4 for the whole patient population: compared with patients with CD4 <100 cells/μL, patients with CD4 over 100 cells/μL had a better prognosis (P = 0.020). Next, we further compared the survival outcomes between patients with and without chemotherapy in each subgroup based on our novel survival-predicting model. Notably, patients in the high-risk group could benefit from chemotherapy treatment (P = 0.040).
The median age at diagnosis was 40 years (range, 35–53 years), and 112 (92.5%) of the eligible patients were male. Among all patients, 83 received chemotherapy, 57 underwent definitive HAART. Table 1 summarizes the demographic data, clinical, and tumor characteristics of laboratory results, such as white blood cell, neutrophils, and lymphocytes. All CT features were defined as a good agreement among the three doctors (Kappa value 0.778–0.998; Supplementary Table S1). Demographic, clinical, and tumor characteristics of patients with AR-NHL. Data are presented as n (%) or median (inter-quantile range). AR-NHL: AIDS-related non-Hodgkin lymphoma; BL: Burkitt lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; NEUT: Neutrophil; WBC: White blood cell. Prognostic factors for OS: The median follow-up of the entire cohort was 12 months. On univariable analysis, factors associated with poor OS included CD4 ≤100 cells/μL, involvement of mediastinal or hilar lymph nodes, liver, gastrointestinal tract, presence of extracapsular infiltration, necrosis inside the lesions, and treatment without chemotherapy [Table 2]. On multivariable Cox analysis, involvement of mediastinal or hilar lymph nodes, liver, necrosis in the lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for short OS. Significant radiological features were shown in Figure 1. Clinical prognostic factors for survival of total patients using univariate and multivariate analyses. BL: Burkitt's lymphoma; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; IQR: Interquartile range; OS: Overall survival. CI: confidence interval. (A) A 37-year-old man with DLBCL was found positive HIV-antibodies for 1 month and with nausea and fever for 1 day. The thickening wall of the stomach was detected using CT (white arrow). 
The OS time of this patient was 11 months. (B) A 31-year-old man with BL had been HIV-antibody positive for 1 month and had fever for 5 days. CT showed multiple, irregular, heterogeneous lesions of the liver (white arrow) and bilateral adrenal glands (black arrows). The OS of this patient was 5 months. (C, D) A 29-year-old man with BL had been HIV-antibody positive for 2 months and had dyspnea and fever for 3 days. The CT lung window showed multiple irregular lesions in both lungs; enlarged lymph nodes were found in the mediastinum and hilum. Progression was found on a 1-month follow-up. (D) The OS time of this patient was 3 months. (E) A 25-year-old man with DLBCL had been HIV-antibody positive for 3 months. A large irregular, homogeneous, isoattenuating mass that infiltrated adjacent bone and muscles was found in the right cervical area IX (white arrow). (F) A 40-year-old man with DLBCL had been HIV-antibody positive for 5 months. A large circular, heterogeneous mass with necrosis was found in the left cervical area II (white arrow). BL: Burkitt lymphoma; CT: Computed tomography; DLBCL: Diffuse large B-cell lymphoma; HIV: Human immunodeficiency virus; OS: Overall survival. Development of nomogram: The predictive model was based on Cox regression and illustrated by a nomogram [Figure 2], indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The Harrell C-index was 0.716, suggesting relatively good discrimination. The Hosmer–Lemeshow test demonstrated P = 0.620 > 0.050, indicating no departure from a good fit. The total number of points for each patient was obtained by summing the points for each of the individual factors in the nomogram. The probability of survival at 1, 2, and 3 years was then obtained by drawing a vertical line from the “total points” axis straight down to the outcome axes.
In the predictive model, treatment with chemotherapy added 100 points to the total, followed by freedom from liver involvement (82 points), a CD4 count >100 cells/μL (69 points), freedom from mediastinal or hilar lymph node involvement (58 points), and absence of necrosis (50 points). For instance, for an AR-NHL patient with liver involvement (0 points), without mediastinal or hilar lymph node involvement (58 points), with a CD4 count of 125 cells/μL (69 points), with tumor necrosis (0 points), and treated with chemotherapy (100 points), the total was 227 points. This patient was predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years [Figure 3]. Nomogram indicating the probability of 1-, 2-, and 3-year OS in patients with AR-NHL. The points for CD4 were determined by drawing a line straight up to the points axis. This process was then repeated for the other four variables (necrosis, chemotherapy, liver involvement, and mediastinal or hilar lymph node involvement). Each variable had a corresponding value (points), marked on the points scale. The sum of the scores of all variables predicts the OS probability. AR-NHL: AIDS-related non-Hodgkin lymphoma; NHL: Non-Hodgkin lymphoma; OS: Overall survival. Examples of using the nomogram to predict individual survival probability by manually placing straight lines across the diagram. A 32-year-old man had been HIV-antibody positive for 7 months and was on HAART. (A) A large circular, heterogeneous mass with necrosis was found in the left cervical area II (white arrow). (B) Two irregular, heterogeneous lesions were found in the liver (black arrow). DLBCL was confirmed by biopsy. The patient received a chemotherapy regimen of CHOP, and the OS time was 14 months. The total points were 227.
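The point arithmetic above can be sketched in a few lines; a minimal illustration, assuming the point values quoted in the text (the 58%/40%/31% survival probabilities are taken from the single worked 227-point example, not from the full nomogram scale):

```python
# Sketch of the nomogram's total-points arithmetic.
# Point values are those stated in the text for the favourable factors.

POINTS = {
    "chemotherapy": 100,   # treated with chemotherapy
    "liver_free": 82,      # no liver involvement
    "cd4_gt_100": 69,      # CD4 count > 100 cells/uL
    "nodes_free": 58,      # no mediastinal/hilar lymph node involvement
    "no_necrosis": 50,     # no necrosis inside the lesions
}

def total_points(**flags):
    """Sum nomogram points for the favourable factors that apply."""
    return sum(POINTS[name] for name, present in flags.items() if present)

# Worked example from the text: liver involved, nodes free,
# CD4 = 125 cells/uL, necrosis present, treated with chemotherapy.
score = total_points(chemotherapy=True, liver_free=False,
                     cd4_gt_100=True, nodes_free=True, no_necrosis=False)
# score == 227; per the worked example, the nomogram maps this total
# to roughly 58%/40%/31% predicted 1-/2-/3-year OS.
```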
This patient is predicted to have a 58% probability of surviving for 1 year, a 40% probability of surviving for 2 years, and a 31% probability of surviving for 3 years (black line). CHOP: Cyclophosphamide, doxorubicin, vincristine, and prednisolone; DLBCL: Diffuse large B-cell lymphoma; HAART: Highly active antiretroviral therapy; HIV: Human immunodeficiency virus; OS: Overall survival. Clinical value of the novel classification system: Compared with the patients with CD4 levels <100 cells/μL, the patients with CD4 levels over 100 cells/μL had a better prognosis (P = 0.020) [Figure 4]. K–M analysis and the log-rank test showed that patients who did not receive chemotherapy had poorer survival outcomes than those receiving chemotherapy (P < 0.050) [Supplementary Figure S1]. The patients with involvement of mediastinal or hilar lymph nodes, involvement of the liver, or lesions with necrosis had a worse prognosis (P < 0.050) [Supplementary Figure S1]. Kaplan–Meier curves of CD4 for the whole patient population. Compared with the patients with CD4 levels <100 cells/μL, the patients with CD4 levels over 100 cells/μL had a better prognosis (P = 0.020). Next, we further compared the survival outcomes between patients with and without chemotherapy in each subgroup based on our novel survival-predicting model. Notably, patients in the high-risk group could benefit from chemotherapy treatment (P = 0.040). Discussion: To the best of our knowledge, this is the largest study from three institutions in Asia to evaluate prognostic factors in patients with AR-NHL. We developed a nomogram based on patients’ demographics, CT imaging features, and laboratory data, which could effectively stratify patients into low- and high-risk groups. Involvement of mediastinal or hilar lymph nodes, involvement of the liver, necrosis in lesions, CD4 ≤100 cells/μL, and treatment without chemotherapy were independent risk factors for shorter OS.
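The Kaplan–Meier curves behind these comparisons rest on the product-limit estimate; a minimal sketch on made-up follow-up times (illustrative numbers, not patient data from this study):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimator.

    times  -- follow-up time for each patient (e.g. months)
    events -- 1 if death observed at that time, 0 if censored
    Returns a list of (event_time, S(t)) pairs at each distinct event time.
    """
    deaths = Counter(t for t, e in zip(times, events) if e)
    surv, curve = 1.0, []
    for t in sorted(deaths):
        n_at_risk = sum(1 for tt in times if tt >= t)  # still under observation
        surv *= 1.0 - deaths[t] / n_at_risk            # product-limit step
        curve.append((t, surv))
    return curve

# Five hypothetical patients: deaths at 3, 5, 8 months; censored at 5 and 12.
curve = kaplan_meier([3, 5, 5, 8, 12], [1, 1, 0, 1, 0])
```

A log-rank test would then compare such curves between groups (e.g. chemotherapy vs. no chemotherapy); this sketch covers only the survival estimate itself.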
Notably, only patients in the high-risk group in our study were found to benefit significantly from chemotherapy, so we strongly recommend patients in the high-risk group as candidates for chemotherapy. Imaging plays an important role in the detection and evaluation of AR-NHL lesions.[8,10,12] CT of the head and neck, chest, abdomen, and pelvis is a critical staging modality recommended by the NCCN guidelines.[18–20] The potential mechanism of necrosis in lymphoma is the occlusion of the supplying hilar artery by the tumor (compression or invasion) in addition to lymphatic flow obstruction.[21,22] A previous study reported that HIV (−) NHL patients with necrosis had significantly higher Ann Arbor stages, greater IPI, and higher serum LDH levels than those without necrosis. However, in their K–M survival analysis, no statistically significant difference was noted for necrosis.[23] Our study focused on patients with AR-NHL, and necrosis in the lesions was an independent risk factor for shorter OS. Necrosis of the lesion indicated the aggressive behavior of the tumor and a tendency toward treatment resistance. The presence of extracapsular infiltration was also correlated with OS in the current study. The invasion of tumor cells may be a potential mechanism of extracapsular infiltration and may correlate with a poor prognosis.[24] Natural killer (NK) cells play an important role in the growth and infiltration of lymphoma cells, and activated NK cells could be a promising immunotherapeutic tool against lymphoma cells either alone or in combination with conventional therapy.[25] AR-NHLs are usually B-cell, high-grade, and poorly differentiated lymphomas.[26] Extranodal site involvement is common. The liver is the second most common site, with an incidence ranging from 26% to 45%.
HIV (+) patients have a higher incidence of NHL than HIV (−) patients.[27,28] A previous study indicated that primary mediastinal large B-cell lymphoma (PMBCL), representing 10% of all DLBCL, was predictive of poor OS and progression-free survival.[29] Compared with the cyclophosphamide, doxorubicin, vincristine, and prednisolone (CHOP) regimen, rituximab and its use with intensified chemotherapy such as R-Hyper-CVAD (cyclophosphamide, vincristine, doxorubicin, and dexamethasone) and R-EPOCH (etoposide, prednisone, vincristine, cyclophosphamide, doxorubicin) might improve the response rate and survival outcome for patients with mediastinal NHL, especially for PMBCL.[29–31] Although Guech-Ongey et al[32,33] found that the incidence of acquired immune deficiency syndrome-related Burkitt lymphoma declined at low CD4 counts, suggesting that functional CD4 lymphocytes may be required for BL to develop, we consider lower CD4 counts to reflect more severe immunodeficiency, which is likely to cause opportunistic infections and other malignant tumors in patients with AR-NHL. For patients who have factors correlated with shorter OS time, intensive chemotherapy should be considered. Intensive chemotherapy is relatively safe and effective in AR-NHL.[34] Notably, only patients in the high-risk group in our study were found to benefit significantly from chemotherapy, so we strongly recommend patients in the high-risk group as candidates for chemotherapy. Chemotherapy and concomitant HAART for AR-NHL do not cause prolonged suppression of lymphocyte subsets.
On the contrary, chemotherapy can increase the counts of the CD4, CD8, CD19, and CD56 cell populations, which provides reassurance regarding the long-term consequences of chemotherapy in these individuals.[35] For patients in the lower-risk group, in contrast, the survival difference was not statistically significant, so HAART alone and regular examination are recommended, because adverse effects of chemotherapy, such as severe bone marrow toxicity, should be considered.[36] There are several limitations to the current study. First, it is a retrospective study with a limited number of patients, and patients were diagnosed in different hospitals. Therefore, the quality of chemotherapy and the methods employed by pathologists for diagnosing metastatic lymph nodes were not uniform, which might bias our results. Second, although univariate analysis showed that pathology classification (acquired immune deficiency syndrome-related diffuse large B-cell lymphoma vs. acquired immune deficiency syndrome-related Burkitt lymphoma) was not a prognostic factor in our patients, other studies have reported that pathology classifications correspond to different OS. Prospective investigations with larger samples focusing on certain pathological types should be designed to find more predictive factors for the prognoses of AR-NHL patients. Importantly, external validation of the proposed staging system in an independent cohort is required to determine whether it can be generalized to other institutions. Despite the current limitations, our study still has high value because the necrosis characteristics of CT lesions and organ involvement are readily recognized in clinical work. In conclusion, a survival-predicting nomogram integrating CT features was developed in this study, which was promising for assessing the survival outcomes of patients with AR-NHL.
Notably, decision-making of chemotherapy regimens and more frequent follow-up should be considered in the high-risk group determined by this model. Acknowledgements: We thank all the patients, investigators, co-investigators, and study teams at each participating site. Conflicts of interest: None. Supplementary Material:
Background: Acquired immune deficiency syndrome (AIDS)-related non-Hodgkin lymphoma (AR-NHL) is a high-risk factor for morbidity and mortality in patients with AIDS. This study aimed to determine the prognostic factors associated with overall survival (OS) and to develop a prognostic nomogram incorporating computed tomography imaging features in patients with AR-NHL. Methods: A total of 121 AR-NHL patients between July 2012 and November 2019 were retrospectively reviewed. Clinical and radiological independent predictors of OS were confirmed using multivariable Cox analysis. A prognostic nomogram was constructed based on the above clinical and radiological factors to predict OS. The predictive accuracy of the nomogram was determined by the Harrell C-statistic. Kaplan-Meier survival analysis was used to determine median OS. The prognostic value of adjuvant therapy was evaluated in different subgroups. Results: In the multivariate Cox regression analysis, involvement of mediastinal or hilar lymph nodes, involvement of the liver, necrosis in the lesions, treatment without chemotherapy, and CD4 ≤100 cells/μL were independent risk factors for poor OS (all P < 0.050). The predictive nomogram based on Cox regression showed good discrimination (Harrell C-index = 0.716) and good calibration (Hosmer-Lemeshow test, P = 0.620) in the high- and low-risk groups. Only patients in the high-risk group who received adjuvant chemotherapy had a significantly better survival outcome. Conclusions: A survival-predicting nomogram was developed in this study, which was effective in assessing the survival outcomes of patients with AR-NHL. Notably, decision-making of chemotherapy regimens and more frequent follow-up should be considered in the high-risk group determined by this model.
null
null
9,052
348
[ 40, 551, 415, 481, 602, 214, 20 ]
14
[ "patients", "lymphoma", "os", "points", "nhl", "chemotherapy", "ct", "survival", "year", "hiv" ]
[ "lymphoma ar nhl", "lymphoma hiv human", "patients hodgkin lymphoma", "lymphoma acquired immune", "cell lymphoma hiv" ]
null
null
[CONTENT] Lymphoma | AIDS-related AR-NHL | Computed tomography | Prognosis | Nomogram [SUMMARY]
[CONTENT] Lymphoma | AIDS-related AR-NHL | Computed tomography | Prognosis | Nomogram [SUMMARY]
[CONTENT] Lymphoma | AIDS-related AR-NHL | Computed tomography | Prognosis | Nomogram [SUMMARY]
null
[CONTENT] Lymphoma | AIDS-related AR-NHL | Computed tomography | Prognosis | Nomogram [SUMMARY]
null
[CONTENT] Acquired Immunodeficiency Syndrome | Humans | Lymphoma, Non-Hodgkin | Neoplasm Staging | Nomograms | Prognosis | Retrospective Studies [SUMMARY]
[CONTENT] Acquired Immunodeficiency Syndrome | Humans | Lymphoma, Non-Hodgkin | Neoplasm Staging | Nomograms | Prognosis | Retrospective Studies [SUMMARY]
[CONTENT] Acquired Immunodeficiency Syndrome | Humans | Lymphoma, Non-Hodgkin | Neoplasm Staging | Nomograms | Prognosis | Retrospective Studies [SUMMARY]
null
[CONTENT] Acquired Immunodeficiency Syndrome | Humans | Lymphoma, Non-Hodgkin | Neoplasm Staging | Nomograms | Prognosis | Retrospective Studies [SUMMARY]
null
[CONTENT] lymphoma ar nhl | lymphoma hiv human | patients hodgkin lymphoma | lymphoma acquired immune | cell lymphoma hiv [SUMMARY]
[CONTENT] lymphoma ar nhl | lymphoma hiv human | patients hodgkin lymphoma | lymphoma acquired immune | cell lymphoma hiv [SUMMARY]
[CONTENT] lymphoma ar nhl | lymphoma hiv human | patients hodgkin lymphoma | lymphoma acquired immune | cell lymphoma hiv [SUMMARY]
null
[CONTENT] lymphoma ar nhl | lymphoma hiv human | patients hodgkin lymphoma | lymphoma acquired immune | cell lymphoma hiv [SUMMARY]
null
[CONTENT] patients | lymphoma | os | points | nhl | chemotherapy | ct | survival | year | hiv [SUMMARY]
[CONTENT] patients | lymphoma | os | points | nhl | chemotherapy | ct | survival | year | hiv [SUMMARY]
[CONTENT] patients | lymphoma | os | points | nhl | chemotherapy | ct | survival | year | hiv [SUMMARY]
null
[CONTENT] patients | lymphoma | os | points | nhl | chemotherapy | ct | survival | year | hiv [SUMMARY]
null
[CONTENT] ct | patients | imaging | ar | ar nhl | nhl | clinical ct | magnetic | pet | pet ct [SUMMARY]
[CONTENT] patients | images | years | ct | lymphoma | data | years experience | experience | test | index [SUMMARY]
[CONTENT] points | found | probability | os | year | patients | chemotherapy | cd4 | 100 | months [SUMMARY]
null
[CONTENT] patients | points | lymphoma | study | ct | os | nhl | chemotherapy | years | ar [SUMMARY]
null
[CONTENT] non-Hodgkin | AR-NHL ||| non-Hodgkin | AR-NHL [SUMMARY]
[CONTENT] 121 | AR-NHL | between July 2012 and | November 2019 ||| Cox ||| ||| Harrell C-statistic ||| Kaplan-Meier ||| [SUMMARY]
[CONTENT] ||| Cox | 0.716 | Hosmer-Lemeshow | 0.620 ||| [SUMMARY]
null
[CONTENT] non-Hodgkin | AR-NHL ||| non-Hodgkin | AR-NHL ||| 121 | AR-NHL | between July 2012 and | November 2019 ||| Cox ||| ||| Harrell C-statistic ||| Kaplan-Meier ||| ||| ||| ||| Cox | 0.716 | Hosmer-Lemeshow | 0.620 ||| ||| AR-NHL ||| [SUMMARY]
null
Age-Specific Differences in Online Grocery Shopping Behaviors and Attitudes among Adults with Low Income in the United States in 2021.
36297112
Online grocery shopping has surged in popularity, but we know little about online grocery shopping behaviors and attitudes of adults with low income, including differences by age.
BACKGROUND
From October to November 2021, we used a survey research firm to recruit a convenience sample of adults who have ever received Supplemental Nutrition Assistance Program (SNAP) benefits (n = 3526). Participants completed an online survey designed to assess diet and online food shopping behaviors. Using logistic regression, we examined the relationship between participant characteristics, including age, and the likelihood of online grocery shopping, and separately examined variation in the reasons for online grocery shopping by age.
METHODS
About 54% of the participants reported shopping online for groceries in the previous 12 months. Odds of online shopping were higher for those aged 18-33 years (OR = 1.95 (95% CI: 1.52, 2.52; p < 0.001)) and 34-44 years (OR = 1.50 (95% CI: 1.19, 1.90; p < 0.001)) than for those aged ≥65 years. Odds were also higher for those who were food insecure and those with income of USD 20,000 or above, higher educational attainment, and higher fruit and vegetable intake. Low prices were the most popular reason for online grocery shopping (57%). Adults aged 18-33 years old had higher odds of reporting low prices as a motivating factor than older adults (OR = 2.34 (95% CI: 1.78, 3.08; p < 0.001)) and lower odds of reporting being discouraged by lack of social interaction (OR = 0.34 (95% CI: 0.25, 0.45; p < 0.001)).
RESULTS
Strategies for making online grocery shopping more affordable for adults with lower income may be promising, especially online produce. For older adults, additional support may be needed to make online shopping a suitable replacement for in-store shopping, such as education on technology and combining it with opportunities for social support.
CONCLUSION
[ "Humans", "United States", "Aged", "Adolescent", "Young Adult", "Adult", "Food Supply", "Poverty", "Food Assistance", "Fruit", "Attitude", "Age Factors" ]
9609768
1. Introduction
To reduce food insecurity, the United States Department of Agriculture (USDA) offers financial assistance through the Supplemental Nutrition Assistance Program (SNAP) program, delivering nutrition benefits to over 41 million households [1]. Between 2019 and 2021, food insecurity grew by over 15% in the U.S., largely due to a surge in unemployment and income loss during the COVID-19 pandemic [1]. During this period, online grocery shopping rapidly expanded in popularity, and now accounts for 10% of all U.S. grocery sales [2]. Online food shopping may promote healthy choices by mitigating the influence of in-store triggers and support equitable food access [3,4], but it may also lead to more frequent purchases of less healthy foods due to targeted marketing [5]. In response to increased demand for online grocery shopping options, the USDA expanded the SNAP Online Purchasing Pilot (OPP) program to additional retailers and locations [6]. The program allows SNAP recipients to use their benefits in online transactions [7]. Partly fueled by the pandemic shutdown, the value of SNAP benefits redeemed online grew from USD 2.9 million in February 2020 to USD 196.3 million in September 2020, reaching 2.4% of total SNAP sales [8]. Research shows, however, that online grocery delivery services are inequitably distributed for those paying with SNAP benefits, with lower access in rural areas and areas with higher poverty and limited food access [9,10,11]. A recent review study highlighted several reasons for the low uptake of online grocery shopping among those with low income, including high cost and lack of control over food selection, lack of social interaction, and lack of interest [12]. Several benefits were also reported, such as lower stress, saving time, and fewer impulse purchases than in-store shopping. Another study found that purchases of fresh fruits and vegetables were lower online than in-store in SNAP-eligible households [13]. 
The majority of this previous work has focused on small geographic areas, with small samples, and thus lacked sufficient statistical power to test for differences by respondent characteristics. A more recent study, however, used a large, nationally representative sample of mostly food-secure adults and found that 39% had ever shopped online for groceries [14]. In this study, we characterized online grocery shopping behaviors and attitudes in a nationwide sample of adults with low income. We examined the extent to which the frequency of online grocery shopping differed by age and other sociodemographic characteristics and the frequency of fruit and vegetable intake. Given that younger (vs. older) individuals are more likely to use the internet and shop online [12], we also examined whether other behaviors and attitudes related to online grocery shopping differed by age.
null
null
3. Results
We excluded those participants who reused the same IP address (n = 265) and/or did not finish the survey (n = 90). We also excluded those who finished the survey in under one-third of the median completion time (<2.1 min) (n = 51). The final sample included 3526 adults, and the median completion time for the survey was 11.9 min. Approximately 51% of the sample identified as female, and the average age was 46.8 (SD = 15.9) years (Table 1). The average household size was 2.3 (SD = 1.0), including 1.4 (SD = 0.7) children per household. The majority of the sample identified as non-Hispanic/Latinx (90.0%) and/or White (75.2%). About 44% of the participants reported an annual household income <USD 20,000, 58.6% reported being unemployed, 67.0% reported currently receiving SNAP benefits, and 70.3% were classified as food insecure. On average, the participants reported consuming fruits and vegetables 16.7 (SD = 16.8) times per week. Compared with the FoodAPS sample, our sample had a higher percentage of older participants and participants who identified as White and non-Hispanic/Latinx, and a higher percentage of participants with household income <USD 20,000. In the full sample, 54% reported shopping online for groceries in the previous 12 months, primarily via Walmart (38%) or Amazon (19%) (Table 2). The likelihood ratio tests indicated that the odds of online grocery shopping differed across levels of age group, race, and fruit and vegetable intake (p values < 0.001). The model-based results indicate that the odds of online grocery shopping were higher for those aged 18–33 years (OR = 1.95 (95% CI: 1.52, 2.52)) and 34–44 years (OR = 1.50 (95% CI: 1.19, 1.90)) than for those 65 years or older, and higher for households with more children (OR = 1.24 for every additional child (95% CI: 1.07, 1.43)) (Table 3). 
Those who identified as Hispanic/Latinx (OR = 1.63 (95% CI: 1.21, 2.19)) or Black (OR = 1.52 (1.22, 1.89)) had higher odds of online grocery shopping than non-Hispanic/Latinx and White participants, respectively. The odds of online grocery shopping were lower for those with a high school education or less (OR = 0.83 (95% CI: 0.71, 0.97)) and income <USD 20,000 per year (OR = 0.81 (95% CI: 0.68, 0.96)), and higher for those who were employed (OR = 1.43 (95% CI: 1.20, 1.69)) or food insecure (OR = 1.42 (95% CI: 1.20, 1.67)). The odds of online grocery shopping were also higher for those with a higher self-reported intake of fruits and vegetables. Among those who shopped online for groceries, 54% reported shopping online at least once a month, and 18% at least once per week (Table 2). Fresh produce (19%) and desserts, snacks, and candy (18%) were the most popular items purchased online, whereas meat, poultry, and fish (12%), and grains (11%) were less popular. About 57% reported low prices as a motivating factor, and 32–40% of the participants reported being motivated by a good selection of produce, good quality food, the variety of goods, having someone else select grocery items on their behalf, and/or inexpensive or no delivery fees. Adults aged 18–33 years old had higher odds of reporting low prices as a motivating factor than older adults (OR = 2.34 (95% CI: 1.78, 3.08; p < 0.001)), with an 11% difference between age groups (Table 4). We observed similar associations between age group and other motivating reasons, including the variety of goods, good quality food, having someone else select grocery items on one’s behalf, and having an option for using SNAP benefits online. 
Among those who reported never shopping online for groceries, 64% reported a lack of social interaction as a reason preventing them from shopping online, and 27–29% of the participants reported high prices, not being able to interact with the food itself, and/or a lack of a loyalty/frequent shopping program to be the reasons preventing them from shopping online (Table 2). Adults aged 18–33 years old had lower odds of reporting being discouraged by lack of social interaction than older adults (OR = 0.34 (95% CI: 0.25, 0.45; p < 0.001)) (Table 4). Only 5% of the participants reported that the lack of an option for using SNAP benefits online was a discouraging factor, with no significant differences by age group.
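The odds ratios and 95% CIs reported above come from logistic-regression coefficients via exponentiation; a sketch of that conversion (the coefficient and standard error below are hypothetical values chosen to land near the age 18-33 estimate, not numbers taken from the study's model output):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a Wald 95% confidence interval.

    Returns (OR, lower bound, upper bound)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient: beta = 0.67 with SE = 0.13 gives
# OR ~ 1.95 with a Wald CI of roughly (1.51, 2.52), similar in
# magnitude to the age-group estimate reported above.
or_, lo, hi = odds_ratio_ci(0.67, 0.13)
```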
null
null
[ "2. Materials and Methods", "2.1. Data", "2.2. Outcomes", "2.3. Statistical Analysis" ]
[ " 2.1. Data This study was part of a grant designed to characterize online grocery shopping behaviors and attitudes in adults with low income and examine the extent to which financial incentives and behavioral nudges increased fruit and vegetable purchases in a randomized, controlled experiment. To achieve the goals of our grant, we used CloudResearch, a survey research firm, to recruit a convenience sample of adults aged 18 years or older who have ever received SNAP benefits, read and speak English, and live with fewer than four people (to accommodate the shopping budget for the randomized, controlled experiment of the parent grant). The sample was recruited to approximately match the distribution of gender and age of adults residing in the U.S. in 2019 [15], using non-random quota sampling from participant pools on several market research platforms. Invitations were sent to eligible participants via email and dashboards. CloudResearch has quality control mechanisms, such as English comprehension screener questions and attention checks, and those participants who complete CloudResearch’s online surveys receive points that can be redeemed for various incentives, including cash, lotteries, or donations to charity.\nThe survey was completed on a personal computer, laptop, tablet, or mobile phone from October to November 2021. Qualtrics, an online survey platform, was used to create and distribute the survey (Supplemental File S1) [16]. The survey was designed to assess sociodemographic characteristics, health status, food shopping behaviors, and fruit and vegetable intake. Sociodemographic and food insecurity questions were derived from the 2017–2018 National Health and Nutrition Examination Survey [17]. The questions related to grocery shopping were derived from the USDA National Household Food Acquisition and Purchase Survey (FoodAPS) [18] or adapted from previous work by the authors [19,20]. 
We captured weekly fruit and vegetable intake using a 6-item fruit and vegetable dietary intake module from the Behavioral Risk Factor Surveillance System [21].
 2.2. Outcomes Our primary outcome was whether the participants reported ever shopping online for groceries in the previous 12 months. Among those who shopped online for groceries, we estimated the frequency of online grocery shopping, types of groceries purchased online, types of retailers, and methods of delivery. We also calculated the frequency (%) of reasons that the participants chose as motivating or preventing them from purchasing groceries online.
 2.3. Statistical Analysis To determine the extent to which our convenience sample differs from a national probability sample, we used binomial tests and t-tests to compare the sociodemographic characteristics of our sample to respondents in the FoodAPS survey who ever participated in SNAP. We report descriptive results using averages and standard deviations, or median and interquartile range.
Using logistic regression, we examined the relationship between our primary outcome and sociodemographic characteristics, including age group (quartiles: 18–33, 34–44, 45–59, ≥60 years), gender, household size (total and children), ethnicity (yes/no Hispanic or Latinx), race (White; Black; Asian, Native Hawaiian, or Pacific Islander; American Indian or Alaska Native; and Other), educational attainment (yes/no high school or less), household income (yes/no <USD 20,000 annually), marital status (yes/no married), employment status (yes/no currently employed or student), SNAP status (yes/no currently receiving benefits), responsible for most household food shopping (yes/no), and responsible for most household food preparation (yes/no). The model also included food insecurity status, defined as yes if a participant indicated that it was true or sometimes true that (1) their household was worried whether their food would run out before they had money to buy more, and/or (2) the food that they bought did not last and they did not have enough money to buy more. We also included fruit and vegetable intake, which we transformed into times per week using the median of response options, and then categorizing responses into quartiles. For age group, race, and fruit and vegetable intake, we performed likelihood ratio tests to determine whether the odds of online grocery shopping differed among the levels overall; we then performed pairwise comparisons if the results were statistically significant. In additional analyses, we used logistic regression to examine the relationship between other online grocery shopping behaviors and attitudes and age group, controlling for other covariates. We used a two-sided alpha of 0.05 as the threshold for statistical significance in our primary analysis. 
In our additional analyses (n = 60), we used a p < 0.0008 significance threshold (0.05/60) and corrected for multiple comparisons using the Bonferroni–Holm procedure [22].", "This study was part of a grant designed to characterize online grocery shopping behaviors and attitudes in adults with low income and examine the extent to which financial incentives and behavioral nudges increased fruit and vegetable purchases in a randomized, controlled experiment. To achieve the goals of our grant, we used CloudResearch, a survey research firm, to recruit a convenience sample of adults aged 18 years or older who have ever received SNAP benefits, read and speak English, and live with fewer than four people (to accommodate the shopping budget for the randomized, controlled experiment of the parent grant). The sample was recruited to approximately match the distribution of gender and age of adults residing in the U.S. in 2019 [15], using non-random quota sampling from participant pools on several market research platforms. Invitations were sent to eligible participants via email and dashboards. 
CloudResearch has quality control mechanisms, such as English comprehension screener questions and attention checks, and those participants who complete CloudResearch’s online surveys receive points that can be redeemed for various incentives, including cash, lotteries, or donations to charity.\nThe survey was completed on a personal computer, laptop, tablet, or mobile phone from October to November 2021. Qualtrics, an online survey platform, was used to create and distribute the survey (Supplemental File S1) [16]. The survey was designed to assess sociodemographic characteristics, health status, food shopping behaviors, and fruit and vegetable intake. Sociodemographic and food insecurity questions were derived from the 2017–2018 National Health and Nutrition Examination Survey [17]. The questions related to grocery shopping were derived from the USDA National Household Food Acquisition and Purchase Survey (FoodAPS) [18] or adapted from previous work by the authors [19,20]. We captured weekly fruit and vegetable intake using a 6-item fruit and vegetable dietary intake module from the Behavioral Risk Factor Surveillance System [21].", "Our primary outcome was whether the participants reported ever shopping online for groceries in the previous 12 months. Among those who shopped online for groceries, we estimated the frequency of online grocery shopping, types of groceries purchased online, types of retailers, and methods of delivery. We also calculated the frequency (%) of reasons that the participants chose as motivating or preventing them from purchasing groceries online.", "To determine the extent to which our convenience sample differs from a national probability sample, we used binomial tests and t-tests to compare the sociodemographic characteristics of our sample to respondents in the FoodAPS survey who ever participated in SNAP. We report descriptive results using averages and standard deviations, or median and interquartile range. 
Using logistic regression, we examined the relationship between our primary outcome and sociodemographic characteristics, including age group (quartiles: 18–33, 34–44, 45–59, ≥60 years), gender, household size (total and children), ethnicity (yes/no Hispanic or Latinx), race (White; Black; Asian, Native Hawaiian, or Pacific Islander; American Indian or Alaska Native; and Other), educational attainment (yes/no high school or less), household income (yes/no <USD 20,000 annually), marital status (yes/no married), employment status (yes/no currently employed or student), SNAP status (yes/no currently receiving benefits), responsible for most household food shopping (yes/no), and responsible for most household food preparation (yes/no). The model also included food insecurity status, defined as yes if a participant indicated that it was true or sometimes true that (1) their household was worried whether their food would run out before they had money to buy more, and/or (2) the food that they bought did not last and they did not have enough money to buy more. We also included fruit and vegetable intake, which we transformed into times per week using the median of response options, and then categorized responses into quartiles. For age group, race, and fruit and vegetable intake, we performed likelihood ratio tests to determine whether the odds of online grocery shopping differed among the levels overall; we then performed pairwise comparisons if the results were statistically significant. In additional analyses, we used logistic regression to examine the relationship between other online grocery shopping behaviors and attitudes and age group, controlling for other covariates. We used a two-sided alpha of 0.05 as the threshold for statistical significance in our primary analysis. In our additional analyses (n = 60), we used a p < 0.0008 significance threshold (0.05/60) and corrected for multiple comparisons using the Bonferroni–Holm procedure [22]." ]
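The Bonferroni–Holm step-down procedure described in the statistical analysis can be sketched in a few lines. This is an illustrative implementation under the usual definition of the method, not code from the study; the function name and example p-values are hypothetical.

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down correction.

    Returns a list of booleans (True = reject H0), in the original
    order of `pvals`. The smallest p-value is tested against
    alpha / m, the next against alpha / (m - 1), and so on; testing
    stops at the first non-rejection.
    """
    m = len(pvals)
    # Indices sorted by ascending p-value.
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # all larger p-values fail as well
    return reject


# With 60 tests (as in the additional analyses), the smallest p-value
# is compared against 0.05 / 60, i.e. roughly 0.0008.
print(0.05 / 60)
```

Note that the step-down thresholds make Holm uniformly more powerful than a plain Bonferroni cut-off of alpha / m applied to every test, while still controlling the family-wise error rate.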
[ null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Data", "2.2. Outcomes", "2.3. Statistical Analysis", "3. Results", "4. Discussion" ]
[ "To reduce food insecurity, the United States Department of Agriculture (USDA) offers financial assistance through the Supplemental Nutrition Assistance Program (SNAP), delivering nutrition benefits to over 41 million households [1]. Between 2019 and 2021, food insecurity grew by over 15% in the U.S., largely due to a surge in unemployment and income loss during the COVID-19 pandemic [1]. During this period, online grocery shopping rapidly expanded in popularity, and now accounts for 10% of all U.S. grocery sales [2]. Online food shopping may promote healthy choices by mitigating the influence of in-store triggers and support equitable food access [3,4], but it may also lead to more frequent purchases of less healthy foods due to targeted marketing [5].\nIn response to increased demand for online grocery shopping options, the USDA expanded the SNAP Online Purchasing Pilot (OPP) program to additional retailers and locations [6]. The program allows SNAP recipients to use their benefits in online transactions [7]. Partly fueled by the pandemic shutdown, the value of SNAP benefits redeemed online grew from USD 2.9 million in February 2020 to USD 196.3 million in September 2020, reaching 2.4% of total SNAP sales [8]. Research shows, however, that online grocery delivery services are inequitably distributed for those paying with SNAP benefits, with lower access in rural areas and areas with higher poverty and limited food access [9,10,11].\nA recent review study highlighted several reasons for the low uptake of online grocery shopping among those with low income, including high cost and lack of control over food selection, lack of social interaction, and lack of interest [12]. Several benefits were also reported, such as lower stress, saving time, and fewer impulse purchases than in-store shopping. Another study found that purchases of fresh fruits and vegetables were lower online than in-store in SNAP-eligible households [13]. 
The majority of this previous work has focused on small geographic areas, with small samples, and thus lacked sufficient statistical power to test for differences by respondent characteristics. A more recent study, however, used a large, nationally representative sample of mostly food-secure adults and found that 39% had ever shopped online for groceries [14].\nIn this study, we characterized online grocery shopping behaviors and attitudes in a nationwide sample of adults with low income. We examined the extent to which the frequency of online grocery shopping differed by age and other sociodemographic characteristics and the frequency of fruit and vegetable intake. Given that younger (vs. older) individuals are more likely to use the internet and shop online [12], we also examined whether other behaviors and attitudes related to online grocery shopping differed by age.", " 2.1. Data This study was part of a grant designed to characterize online grocery shopping behaviors and attitudes in adults with low income and examine the extent to which financial incentives and behavioral nudges increased fruit and vegetable purchases in a randomized, controlled experiment. To achieve the goals of our grant, we used CloudResearch, a survey research firm, to recruit a convenience sample of adults aged 18 years or older who have ever received SNAP benefits, read and speak English, and live with fewer than four people (to accommodate the shopping budget for the randomized, controlled experiment of the parent grant). The sample was recruited to approximately match the distribution of gender and age of adults residing in the U.S. in 2019 [15], using non-random quota sampling from participant pools on several market research platforms. Invitations were sent to eligible participants via email and dashboards. 
CloudResearch has quality control mechanisms, such as English comprehension screener questions and attention checks, and those participants who complete CloudResearch’s online surveys receive points that can be redeemed for various incentives, including cash, lotteries, or donations to charity.\nThe survey was completed on a personal computer, laptop, tablet, or mobile phone from October to November 2021. Qualtrics, an online survey platform, was used to create and distribute the survey (Supplemental File S1) [16]. The survey was designed to assess sociodemographic characteristics, health status, food shopping behaviors, and fruit and vegetable intake. Sociodemographic and food insecurity questions were derived from the 2017–2018 National Health and Nutrition Examination Survey [17]. The questions related to grocery shopping were derived from the USDA National Household Food Acquisition and Purchase Survey (FoodAPS) [18] or adapted from previous work by the authors [19,20]. We captured weekly fruit and vegetable intake using a 6-item fruit and vegetable dietary intake module from the Behavioral Risk Factor Surveillance System [21].\n 2.2. Outcomes Our primary outcome was whether the participants reported ever shopping online for groceries in the previous 12 months. Among those who shopped online for groceries, we estimated the frequency of online grocery shopping, types of groceries purchased online, types of retailers, and methods of delivery. We also calculated the frequency (%) of reasons that the participants chose as motivating or preventing them from purchasing groceries online.\n 2.3. Statistical Analysis To determine the extent to which our convenience sample differs from a national probability sample, we used binomial tests and t-tests to compare the sociodemographic characteristics of our sample to respondents in the FoodAPS survey who ever participated in SNAP. We report descriptive results using averages and standard deviations, or median and interquartile range. Using logistic regression, we examined the relationship between our primary outcome and sociodemographic characteristics, including age group (quartiles: 18–33, 34–44, 45–59, ≥60 years), gender, household size (total and children), ethnicity (yes/no Hispanic or Latinx), race (White; Black; Asian, Native Hawaiian, or Pacific Islander; American Indian or Alaska Native; and Other), educational attainment (yes/no high school or less), household income (yes/no <USD 20,000 annually), marital status (yes/no married), employment status (yes/no currently employed or student), SNAP status (yes/no currently receiving benefits), responsible for most household food shopping (yes/no), and responsible for most household food preparation (yes/no). The model also included food insecurity status, defined as yes if a participant indicated that it was true or sometimes true that (1) their household was worried whether their food would run out before they had money to buy more, and/or (2) the food that they bought did not last and they did not have enough money to buy more. We also included fruit and vegetable intake, which we transformed into times per week using the median of response options, and then categorized responses into quartiles. For age group, race, and fruit and vegetable intake, we performed likelihood ratio tests to determine whether the odds of online grocery shopping differed among the levels overall; we then performed pairwise comparisons if the results were statistically significant. In additional analyses, we used logistic regression to examine the relationship between other online grocery shopping behaviors and attitudes and age group, controlling for other covariates. We used a two-sided alpha of 0.05 as the threshold for statistical significance in our primary analysis. In our additional analyses (n = 60), we used a p < 0.0008 significance threshold (0.05/60) and corrected for multiple comparisons using the Bonferroni–Holm procedure [22].", "This study was part of a grant designed to characterize online grocery shopping behaviors and attitudes in adults with low income and examine the extent to which financial incentives and behavioral nudges increased fruit and vegetable purchases in a randomized, controlled experiment. To achieve the goals of our grant, we used CloudResearch, a survey research firm, to recruit a convenience sample of adults aged 18 years or older who have ever received SNAP benefits, read and speak English, and live with fewer than four people (to accommodate the shopping budget for the randomized, controlled experiment of the parent grant). The sample was recruited to approximately match the distribution of gender and age of adults residing in the U.S. 
in 2019 [15], using non-random quota sampling from participant pools on several market research platforms. Invitations were sent to eligible participants via email and dashboards. CloudResearch has quality control mechanisms, such as English comprehension screener questions and attention checks, and those participants who complete CloudResearch’s online surveys receive points that can be redeemed for various incentives, including cash, lotteries, or donations to charity.\nThe survey was completed on a personal computer, laptop, tablet, or mobile phone from October to November 2021. Qualtrics, an online survey platform, was used to create and distribute the survey (Supplemental File S1) [16]. The survey was designed to assess sociodemographic characteristics, health status, food shopping behaviors, and fruit and vegetable intake. Sociodemographic and food insecurity questions were derived from the 2017–2018 National Health and Nutrition Examination Survey [17]. The questions related to grocery shopping were derived from the USDA National Household Food Acquisition and Purchase Survey (FoodAPS) [18] or adapted from previous work by the authors [19,20]. We captured weekly fruit and vegetable intake using a 6-item fruit and vegetable dietary intake module from the Behavioral Risk Factor Surveillance System [21].", "Our primary outcome was whether the participants reported ever shopping online for groceries in the previous 12 months. Among those who shopped online for groceries, we estimated the frequency of online grocery shopping, types of groceries purchased online, types of retailers, and methods of delivery. 
We also calculated the frequency (%) of reasons that the participants chose as motivating or preventing them from purchasing groceries online.", "To determine the extent to which our convenience sample differs from a national probability sample, we used binomial tests and t-tests to compare the sociodemographic characteristics of our sample to respondents in the FoodAPS survey who ever participated in SNAP. We report descriptive results using averages and standard deviations, or median and interquartile range. Using logistic regression, we examined the relationship between our primary outcome and sociodemographic characteristics, including age group (quartiles: 18–33, 34–44, 45–59, ≥60 years), gender, household size (total and children), ethnicity (yes/no Hispanic or Latinx), race (White; Black; Asian, Native Hawaiian, or Pacific Islander; American Indian or Alaska Native; and Other), educational attainment (yes/no high school or less), household income (yes/no <USD 20,000 annually), marital status (yes/no married), employment status (yes/no currently employed or student), SNAP status (yes/no currently receiving benefits), responsible for most household food shopping (yes/no), and responsible for most household food preparation (yes/no). The model also included food insecurity status, defined as yes if a participant indicated that it was true or sometimes true that (1) their household was worried whether their food would run out before they had money to buy more, and/or (2) the food that they bought did not last and they did not have enough money to buy more. We also included fruit and vegetable intake, which we transformed into times per week using the median of response options, and then categorized responses into quartiles. 
For age group, race, and fruit and vegetable intake, we performed likelihood ratio tests to determine whether the odds of online grocery shopping differed among the levels overall; we then performed pairwise comparisons if the results were statistically significant. In additional analyses, we used logistic regression to examine the relationship between other online grocery shopping behaviors and attitudes and age group, controlling for other covariates. We used a two-sided alpha of 0.05 as the threshold for statistical significance in our primary analysis. In our additional analyses (n = 60), we used a p < 0.0008 significance threshold (0.05/60) and corrected for multiple comparisons using the Bonferroni–Holm procedure [22].", "We excluded those participants who reused the same IP address (n = 265) and/or did not finish the survey (n = 90). We also excluded those who finished the survey in under one-third of the median completion time (<2.1 min) (n = 51). The final sample included 3526 adults, and the median completion time for the survey was 11.9 min. Approximately 51% of the sample identified as female, and the average age was 46.8 (SD = 15.9) years (Table 1). The average household size was 2.3 (SD = 1.0), including 1.4 (SD = 0.7) children per household. The majority of the sample identified as non-Hispanic/Latinx (90.0%) and/or White (75.2%). About 44% of the participants reported an annual household income <USD 20,000, 58.6% reported being unemployed, 67.0% reported currently receiving SNAP benefits, and 70.3% were classified as food insecure. On average, the participants reported consuming fruits and vegetables 16.7 (SD = 16.8) times per week. 
Compared with the FoodAPS sample, our sample had a higher percentage of older participants and participants who identified as White and non-Hispanic/Latinx, and a higher percentage of participants with household income <USD 20,000.\nIn the full sample, 54% reported shopping online for groceries in the previous 12 months, primarily via Walmart (38%) or Amazon (19%) (Table 2). The likelihood ratio tests indicated that the odds of online grocery shopping differed across levels of age group, race, and fruit and vegetable intake (p values < 0.001). The model-based results indicate that the odds of online grocery shopping were higher for those aged 18–33 years (OR = 1.95 (95% CI: 1.52, 2.52)) and 34–44 years (OR = 1.50 (95% CI: 1.19, 1.90)) than for those 60 years or older, and higher for households with more children (OR = 1.24 for every additional child (95% CI: 1.07, 1.43)) (Table 3). Those who identified as Hispanic/Latinx (OR = 1.63 (95% CI: 1.21, 2.19)) or Black (OR = 1.52 (1.22, 1.89)) had higher odds of online grocery shopping than non-Hispanic/Latinx and White participants, respectively. The odds of online grocery shopping were lower for those with a high school education or less (OR = 0.83 (95% CI: 0.71, 0.97)) and income <USD 20,000 per year (OR = 0.81 (95% CI: 0.68, 0.96)), and higher for those who were employed (OR = 1.43 (95% CI: 1.20, 1.69)) or food insecure (OR = 1.42 (95% CI: 1.20, 1.67)). The odds of online grocery shopping were also higher for those with a higher self-reported intake of fruits and vegetables.\nAmong those who shopped online for groceries, 54% reported shopping online at least once a month, and 18% at least once per week (Table 2). Fresh produce (19%) and desserts, snacks, and candy (18%) were the most popular items purchased online, whereas meat, poultry, and fish (12%), and grains (11%) were less popular. 
About 57% reported low prices as a motivating factor, and 32–40% of the participants reported being motivated by a good selection of produce, good quality food, the variety of goods, having someone else select grocery items on their behalf, and/or inexpensive or no delivery fees. Adults aged 18–33 years had higher odds of reporting low prices as a motivating factor than older adults (OR = 2.34 (95% CI: 1.78, 3.08; p < 0.001)), with an 11% difference between age groups (Table 4). We observed similar associations between age group and other motivating reasons, including the variety of goods, good quality food, having someone else select grocery items on one’s behalf, and having an option for using SNAP benefits online. \nAmong those who reported never shopping online for groceries, 64% reported a lack of social interaction as a reason preventing them from shopping online, and 27–29% of the participants reported high prices, not being able to interact with the food itself, and/or a lack of a loyalty/frequent shopping program to be the reasons preventing them from shopping online (Table 2). Adults aged 18–33 years had lower odds of reporting being discouraged by lack of social interaction than older adults (OR = 0.34 (95% CI: 0.25, 0.45; p < 0.001)) (Table 4). Only 5% of the participants reported that the lack of an option for using SNAP benefits online was a discouraging factor, with no significant differences by age group.", "We found that a little over half of the participants with lower income reported shopping online for groceries at least once in the previous 12 months. This is higher than the frequency estimates published prior to the pandemic [14,19,23] but similar to more recent estimates collected during the pandemic [13], and might reflect a combination of a surge in online shopping during the lockdown and the expansion of the SNAP OPP program in 2020–2021. 
A third of the sample reported that the lack of social interaction discouraged them from ever shopping online, which is consistent with previous work [12] and also suggests that for some, especially older adults, online grocery shopping is not a suitable replacement for in-store shopping.\nThough some fresh items, such as meat, poultry, and fish, were less frequently purchased online, almost a fifth of our sample reported shopping online for fresh produce, and a good selection of produce was a popular motivation to shop online, in addition to low prices. Indeed, those who consumed more fruits and vegetables per week were more likely to report shopping online for groceries. This is lower than reported in a recent study, which found that about half of mostly food-secure adults purchased fresh foods [14]. Taken together, these findings suggest that financial incentive programs targeting fruits and vegetables may be attractive and effective options to promote healthy purchasing behaviors among SNAP participants shopping in online retail settings. This may be an especially effective option for older adults, who had a lower percentage of purchasing fresh produce online relative to younger adults.\nLike previous studies [12], we found that younger individuals were more likely to shop online than their older counterparts, seemingly driven by lower prices and convenience, including the option to use SNAP benefits online. These results suggest that further expansion of the SNAP OPP program may be an effective strategy for motivating younger adults to start shopping online for groceries but may not be sufficient to motivate older adults with low income. 
Older adults were particularly discouraged by the lack of social interaction and may instead benefit from an expansion of “click-and-collect” options, wherein customers order groceries online for pick-up at a centralized location, such as a community center; or the delivery of groceries with a social support component (e.g., checking in to see how the customer is doing and providing additional resources as needed).\nWe also found those with higher food insecurity were more likely to shop online, which is consistent with another study that found a particularly high prevalence of online grocery shopping among higher-income food-insecure households, potentially due to job loss during the pandemic or limited food access [14]. Unlike that study, however, we found that participants who identified as Black and Hispanic/Latinx had higher odds of shopping online for groceries. This may be because Black and Hispanic/Latinx individuals are more likely to live in urban areas where online grocery shopping options may be more prevalent or where there is limited access to neighborhood supermarkets. It may also reflect differences in norms, preferences, and attitudes across racial/ethnic groups.\nThough our sample was recruited to match the distribution of gender and age of adults in the U.S., our sample was not nationally representative of SNAP-participating adults. Yet, it was larger and more geographically diverse than the samples in previous studies, which allowed us to examine unique relationships between participant characteristics and online grocery shopping frequency. Another limitation of our study is that the distribution of sociodemographic characteristics in our sample differed in some ways from that of the FoodAPS sample, and it was not recruited using random sampling. 
However, previous studies indicate that experimental results from convenience samples can yield similar findings to the results of studies conducted via probability-based samples, despite differences in demographic characteristics between samples [24,25,26]. Overall, our findings highlight the need to develop and test strategies for making online grocery shopping more affordable and appealing for individuals with lower income. Future research should strive to understand why specific groups are more likely to shop online for groceries, such as those in food-insecure households, and the extent to which their purchases differ from their counterparts who primarily shop in-store." ]
[ "intro", null, null, null, null, "results", "discussion" ]
[ "food security", "older adults", "internet", "disparities" ]
1. Introduction: To reduce food insecurity, the United States Department of Agriculture (USDA) offers financial assistance through the Supplemental Nutrition Assistance Program (SNAP), delivering nutrition benefits to over 41 million households [1]. Between 2019 and 2021, food insecurity grew by over 15% in the U.S., largely due to a surge in unemployment and income loss during the COVID-19 pandemic [1]. During this period, online grocery shopping rapidly expanded in popularity, and now accounts for 10% of all U.S. grocery sales [2]. Online food shopping may promote healthy choices by mitigating the influence of in-store triggers and support equitable food access [3,4], but it may also lead to more frequent purchases of less healthy foods due to targeted marketing [5]. In response to increased demand for online grocery shopping options, the USDA expanded the SNAP Online Purchasing Pilot (OPP) program to additional retailers and locations [6]. The program allows SNAP recipients to use their benefits in online transactions [7]. Partly fueled by the pandemic shutdown, the value of SNAP benefits redeemed online grew from USD 2.9 million in February 2020 to USD 196.3 million in September 2020, reaching 2.4% of total SNAP sales [8]. Research shows, however, that online grocery delivery services are inequitably distributed for those paying with SNAP benefits, with lower access in rural areas and areas with higher poverty and limited food access [9,10,11]. A recent review study highlighted several reasons for the low uptake of online grocery shopping among those with low income, including high cost and lack of control over food selection, lack of social interaction, and lack of interest [12]. Several benefits were also reported, such as lower stress, saving time, and fewer impulse purchases than in-store shopping. Another study found that purchases of fresh fruits and vegetables were lower online than in-store in SNAP-eligible households [13]. 
The majority of this previous work has focused on small geographic areas, with small samples, and thus lacked sufficient statistical power to test for differences by respondent characteristics. A more recent study, however, used a large, nationally representative sample of mostly food-secure adults and found that 39% had ever shopped online for groceries [14]. In this study, we characterized online grocery shopping behaviors and attitudes in a nationwide sample of adults with low income. We examined the extent to which the frequency of online grocery shopping differed by age and other sociodemographic characteristics and the frequency of fruit and vegetable intake. Given that younger (vs. older) individuals are more likely to use the internet and shop online [12], we also examined whether other behaviors and attitudes related to online grocery shopping differed by age. 2. Materials and Methods: 2.1. Data This study was part of a grant designed to characterize online grocery shopping behaviors and attitudes in adults with low income and examine the extent to which financial incentives and behavioral nudges increased fruit and vegetable purchases in a randomized, controlled experiment. To achieve the goals of our grant, we used CloudResearch, a survey research firm, to recruit a convenience sample of adults aged 18 years or older who have ever received SNAP benefits, read and speak English, and live with fewer than four people (to accommodate the shopping budget for the randomized, controlled experiment of the parent grant). The sample was recruited to approximately match the distribution of gender and age of adults residing in the U.S. in 2019 [15], using non-random quota sampling from participant pools on several market research platforms. Invitations were sent to eligible participants via email and dashboards. 
CloudResearch has quality control mechanisms, such as English comprehension screener questions and attention checks, and those participants who complete CloudResearch’s online surveys receive points that can be redeemed for various incentives, including cash, lotteries, or donations to charity. The survey was completed on a personal computer, laptop, tablet, or mobile phone from October to November 2021. Qualtrics, an online survey platform, was used to create and distribute the survey (Supplemental File S1) [16]. The survey was designed to assess sociodemographic characteristics, health status, food shopping behaviors, and fruit and vegetable intake. Sociodemographic and food insecurity questions were derived from the 2017–2018 National Health and Nutrition Examination Survey [17]. The questions related to grocery shopping were derived from the USDA National Household Food Acquisition and Purchase Survey (FoodAPS) [18] or adapted from previous work by the authors [19,20]. We captured weekly fruit and vegetable intake using a 6-item fruit and vegetable dietary intake module from the Behavioral Risk Factor Surveillance System [21]. 
2.2. Outcomes: Our primary outcome was whether the participants reported ever shopping online for groceries in the previous 12 months. Among those who shopped online for groceries, we estimated the frequency of online grocery shopping, types of groceries purchased online, types of retailers, and methods of delivery. We also calculated the frequency (%) of reasons that the participants choose as motivating or preventing them from purchasing groceries online. 
2.3. Statistical Analysis: To determine the extent to which our convenience sample differs from a national probability sample, we used binomial tests and t-tests to compare the sociodemographic characteristics of our sample to respondents in the FoodAPS survey who ever participated in SNAP. We report descriptive results using averages and standard deviations, or median and interquartile range. Using logistic regression, we examined the relationship between our primary outcome and sociodemographic characteristics, including age group (quartiles: 18–33, 34–44, 45–59, ≥60 years), gender, household size (total and children), ethnicity (yes/no Hispanic or Latinx), race (White; Black; Asian, Native Hawaiian, or Pacific Islander; American Indian or Alaska Native; and Other), educational attainment (yes/no high school or less), household income (yes/no <USD 20,000 annually), marital status (yes/no married), employment status (yes/no currently employed or student), SNAP status (yes/no currently receiving benefits), responsible for most household food shopping (yes/no), and responsible for most household food preparation (yes/no). The model also included food insecurity status, defined as yes if a participant indicated that it was true or sometimes true that (1) their household was worried whether their food would run out before they had money to buy more, and/or (2) the food that they bought did not last and they did not have enough money to buy more. We also included fruit and vegetable intake, which we transformed into times per week using the median of response options, and then categorizing responses into quartiles. For age group, race, and fruit and vegetable intake, we performed likelihood ratio tests to determine whether the odds of online grocery shopping differed among the levels overall; we then performed pairwise comparisons if the results were statistically significant. In additional analyses, we used logistic regression to examine the relationship between other online grocery shopping behaviors and attitudes and age group, controlling for other covariates. We used a two-sided alpha of 0.05 as the threshold for statistical significance in our primary analysis. In our additional analyses (n = 60), we used a p < 0.0008 significance threshold (0.05/60) and corrected for multiple comparisons using the Bonferroni–Holm procedure [22]. 
3. Results: We excluded those participants who reused the same IP address (n = 265) and/or did not finish the survey (n = 90). We also excluded those who finished the survey in under one-third of the median completion time (<2.1 min) (n = 51). The final sample included 3526 adults, and the median completion time for the survey was 11.9 min. Approximately 51% of the sample identified as female, and the average age was 46.8 (SD = 15.9) years (Table 1). The average household size was 2.3 (SD = 1.0), including 1.4 (SD = 0.7) children per household. The majority of the sample identified as non-Hispanic/Latinx (90.0%) and/or White (75.2%). About 44% of the participants reported an annual household income <USD 20,000, 58.6% reported being unemployed, 67.0% reported currently receiving SNAP benefits, and 70.3% were classified as food insecure. On average, the participants reported consuming fruits and vegetables 16.7 (SD = 16.8) times per week. Compared with the FoodAPS sample, our sample had a higher percentage of older participants and participants who identified as White and non-Hispanic/Latinx, and a higher percentage of participants with household income <USD 20,000. In the full sample, 54% reported shopping online for groceries in the previous 12 months, primarily via Walmart (38%) or Amazon (19%) (Table 2). The likelihood ratio tests indicated that the odds of online grocery shopping differed across levels of age group, race, and fruit and vegetable intake (p values < 0.001). 
The model-based results indicate that the odds of online grocery shopping were higher for those aged 18–33 years (OR = 1.95 (95% CI: 1.52, 2.52)) and 34–44 years (OR = 1.50 (95% CI: 1.19, 1.90)) than for those 65 years or older, and higher for households with more children (OR = 1.24 for every additional child (95% CI: 1.07, 1.43)) (Table 3). Those who identified as Hispanic/Latinx (OR = 1.63 (95% CI: 1.21, 2.19)) or Black (OR = 1.52 (1.22, 1.89)) had higher odds of online grocery shopping than non-Hispanic/Latinx and White participants, respectively. The odds of online grocery shopping were lower for those with a high school education or less (OR = 0.83 (95% CI: 0.71, 0.97)) and income <USD 20,000 per year (OR = 0.81 (95% CI: 0.68, 0.96)), and higher for those who were employed (OR = 1.43 (95% CI: 1.20, 1.69)) or food insecure (OR = 1.42 (95% CI: 1.20, 1.67)). The odds of online grocery shopping were also higher for those with a higher self-reported intake of fruits and vegetables. Among those who shopped online for groceries, 54% reported shopping online at least once a month, and 18% at least once per week (Table 2). Fresh produce (19%) and desserts, snacks, and candy (18%) were the most popular items purchased online, whereas meat, poultry, and fish (12%), and grains (11%) were less popular. About 57% reported low prices as a motivating factor, and 32–40% of the participants reported being motivated by a good selection of produce, good quality food, the variety of goods, having someone else select grocery items on their behalf, and/or inexpensive or no delivery fees. Adults aged 18–33 years old had higher odds of reporting low prices as a motivating factor than older adults (OR = 2.34 (95% CI: 1.78, 3.08; p < 0.001)), with an 11% difference between age groups (Table 4). 
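The odds ratios and 95% confidence intervals reported above follow the standard transformation of logistic-regression output: a coefficient and its Wald interval on the log-odds scale are exponentiated. As a minimal illustration (the coefficient and standard error below are hypothetical values chosen only to land near an OR of 1.95; they are not taken from the study's data):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logit coefficient and its Wald interval
    to get (odds ratio, 95% CI lower, 95% CI upper)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical inputs for illustration only.
or_, lo, hi = odds_ratio_ci(beta=0.668, se=0.13)
```

Because the exponential is monotone, the CI endpoints on the log-odds scale map directly to the OR scale, which is why reported intervals such as (1.52, 2.52) are asymmetric around the point estimate.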
We observed similar associations between age group and other motivating reasons, including the variety of goods, good quality food, having someone else select grocery items on one’s behalf, and having an option for using SNAP benefits online. Among those who reported never shopping online for groceries, 64% reported a lack of social interaction as a reason preventing them from shopping online, and 27–29% of the participants reported high prices, not being able to interact with the food itself, and/or a lack of a loyalty/frequent shopping program to be the reasons preventing them from shopping online (Table 2). Adults aged 18–33 years old had lower odds of reporting being discouraged by lack of social interaction than older adults (OR = 0.34 (95% CI: 0.25, 0.45; p < 0.001)) (Table 4). Only 5% of the participants reported that the lack of an option for using SNAP benefits online was a discouraging factor, with no significant differences by age group. 4. Discussion: We found that a little over half of the participants with lower income reported shopping online for groceries at least once in the previous 12 months, which is higher than the frequency estimates published prior to the pandemic [14,19,23] but similar to more recent estimates collected during the pandemic [13], which might reflect a combination of a surge in online shopping during the lockdown and the expansion of the SNAP OPP program in 2020–2021. A third of the sample reported that the lack of social interaction discouraged them from ever shopping online, which is consistent with previous work [12] and also suggests that for some, especially older adults, online grocery shopping is not a suitable replacement for in-store shopping. Though some fresh items, such as meat, poultry, and fish, were less frequently purchased online, almost a fifth of our sample reported shopping online for fresh produce, and a good selection of produce was a popular motivation to shop online, in addition to low prices. 
Indeed, those who consumed more fruits and vegetables per week were more likely to report shopping online for groceries. The share purchasing fresh produce online is lower than that reported in a recent study, which found that about half of mostly food-secure adults purchased fresh foods online [14]. Taken together, these findings suggest that financial incentive programs targeting fruits and vegetables may be attractive and effective options to promote healthy purchasing behaviors among SNAP participants shopping in online retail settings. This may be an especially effective option for older adults, who purchased fresh produce online at a lower rate than younger adults. Like previous studies [12], we found that younger individuals were more likely to shop online than their older counterparts, seemingly driven by lower prices and convenience, including the option to use SNAP benefits online. These results suggest that further expansion of the SNAP OPP program may be an effective strategy for motivating younger adults to start shopping online for groceries but may not be sufficient to motivate older adults with low income. Older adults were particularly discouraged by the lack of social interaction and may instead benefit from an expansion of “click-and-collect” options, wherein customers order groceries online for pick-up at a centralized location, such as a community center; or the delivery of groceries with a social support component (e.g., checking in to see how the customer is doing and providing additional resources as needed). We also found those with higher food insecurity were more likely to shop online, which is consistent with another study that found a particularly high prevalence of online grocery shopping among higher-income food-insecure households, potentially due to job loss during the pandemic or limited food access [14]. Unlike that study, however, we found that participants who identified as Black and Hispanic/Latinx had higher odds of shopping online for groceries. 
This may be because Black and Hispanic/Latinx individuals are more likely to live in urban areas where online grocery shopping options may be more prevalent or where there is limited access to neighborhood supermarkets. It may also reflect differences in norms, preferences, and attitudes across racial/ethnic groups. Though our sample was recruited to match the distribution of gender and age of adults in the U.S., our sample was not nationally representative of SNAP-participating adults. Yet, it was larger and more geographically diverse than the samples in previous studies, which allowed us to examine unique relationships between participant characteristics and online grocery shopping frequency. Another limitation of our study is that the distribution of sociodemographic characteristics in our sample differed in some ways from that of the FoodAPS sample, and it was not recruited using random sampling. However, previous studies indicate that experimental results from convenience samples can yield similar findings to the results of studies conducted via probability-based samples, despite differences in demographic characteristics between samples [24,25,26]. Overall, our findings highlight the need to develop and test strategies for making online grocery shopping more affordable and appealing for individuals with lower income. Future research should strive to understand why specific groups are more likely to shop online for groceries, such as those in food-insecure households, and the extent to which their purchases differ from their counterparts who primarily shop in-store.
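The additional analyses in the Methods hold 60 comparisons to a corrected threshold using the Bonferroni–Holm procedure [22]. As a minimal sketch of that step-down logic (this is our own illustration, not the authors' code; the function name is ours):

```python
def holm_correction(p_values, alpha=0.05):
    """Bonferroni-Holm step-down test.
    Returns a list of booleans: True where the null is rejected."""
    m = len(p_values)
    # Rank tests by p-value, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        # The k-th smallest p-value is compared to alpha / (m - k).
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: stop at the first non-significant test
    return reject
```

With m = 60, the smallest p-value faces a bound of 0.05/60 ≈ 0.0008, matching the threshold reported in the Methods; each subsequent p-value faces a slightly looser bound until the first failure stops the procedure.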
Background: Online grocery shopping has surged in popularity, but we know little about online grocery shopping behaviors and attitudes of adults with low income, including differences by age. Methods: From October to November 2021, we used a survey research firm to recruit a convenience sample of adults who have ever received Supplemental Nutrition Assistance Program (SNAP) benefits (n = 3526). Participants completed an online survey designed to assess diet and online food shopping behaviors. Using logistic regression, we examined the relationship between participant characteristics, including age, and the likelihood of online grocery shopping, and separately examined variation in the reasons for online grocery shopping by age. Results: About 54% of the participants reported shopping online for groceries in the previous 12 months. Odds of online shopping were higher for those aged 18-33 years (OR = 1.95 (95% CI: 1.52, 2.52; p < 0.001)) and 34-44 years (OR = 1.50 (95% CI: 1.19, 1.90; p < 0.001)) than for those aged ≥65 years. Odds were also higher for those who were food insecure and those with income below USD 20,000, higher educational attainment, and higher fruit and vegetable intake. Low prices were the most popular reason for online grocery shopping (57%). Adults aged 18-33 years old had higher odds of reporting low prices as a motivating factor than older adults (OR = 2.34 (95% CI: 1.78, 3.08; p < 0.001)) and lower odds of reporting being discouraged by lack of social interaction (OR = 0.34 (95% CI: 0.25, 0.45; p < 0.001)). Conclusions: Strategies for making online grocery shopping more affordable for adults with lower income may be promising, especially online produce. For older adults, additional support may be needed to make online shopping a suitable replacement for in-store shopping, such as education on technology and combining it with opportunities for social support.
null
null
4,974
392
[ 1775, 360, 75, 443 ]
7
[ "online", "shopping", "food", "grocery", "grocery shopping", "online grocery", "sample", "online grocery shopping", "survey", "yes" ]
[ "online groceries food", "online groceries estimated", "purchasing behaviors snap", "option snap benefits", "snap benefits redeemed" ]
null
null
null
[CONTENT] food security | older adults | internet | disparities [SUMMARY]
null
[CONTENT] food security | older adults | internet | disparities [SUMMARY]
null
[CONTENT] food security | older adults | internet | disparities [SUMMARY]
null
[CONTENT] Humans | United States | Aged | Adolescent | Young Adult | Adult | Food Supply | Poverty | Food Assistance | Fruit | Attitude | Age Factors [SUMMARY]
null
[CONTENT] Humans | United States | Aged | Adolescent | Young Adult | Adult | Food Supply | Poverty | Food Assistance | Fruit | Attitude | Age Factors [SUMMARY]
null
[CONTENT] Humans | United States | Aged | Adolescent | Young Adult | Adult | Food Supply | Poverty | Food Assistance | Fruit | Attitude | Age Factors [SUMMARY]
null
[CONTENT] online groceries food | online groceries estimated | purchasing behaviors snap | option snap benefits | snap benefits redeemed [SUMMARY]
null
[CONTENT] online groceries food | online groceries estimated | purchasing behaviors snap | option snap benefits | snap benefits redeemed [SUMMARY]
null
[CONTENT] online groceries food | online groceries estimated | purchasing behaviors snap | option snap benefits | snap benefits redeemed [SUMMARY]
null
[CONTENT] online | shopping | food | grocery | grocery shopping | online grocery | sample | online grocery shopping | survey | yes [SUMMARY]
null
[CONTENT] online | shopping | food | grocery | grocery shopping | online grocery | sample | online grocery shopping | survey | yes [SUMMARY]
null
[CONTENT] online | shopping | food | grocery | grocery shopping | online grocery | sample | online grocery shopping | survey | yes [SUMMARY]
null
[CONTENT] online | shopping | grocery | food | snap | million | online grocery | program | online grocery shopping | grocery shopping [SUMMARY]
null
[CONTENT] 95 | ci | 95 ci | reported | table | higher | online | participants | shopping | odds [SUMMARY]
null
[CONTENT] online | shopping | yes | food | groceries | grocery | survey | participants | grocery shopping | online grocery [SUMMARY]
null
[CONTENT] [SUMMARY]
null
[CONTENT] About 54% | the previous 12 months ||| 18-33 years | 1.95 | 95% | CI | 1.52 | 2.52 | p &lt | 0.001 | 34-44 years | 1.50 | 95% | CI | 1.19 | p &lt | 0.001 | years ||| ||| 57% ||| 18-33 years old | 2.34 | 95% | CI | 1.78 | 3.08 | p &lt | 0.001 | 0.34 | 95% | CI | 0.25 | 0.45 | p &lt | 0.001 [SUMMARY]
null
[CONTENT] ||| October to November 2021 | Supplemental Nutrition Assistance Program | 3526 ||| ||| ||| About 54% | the previous 12 months ||| 18-33 years | 1.95 | 95% | CI | 1.52 | 2.52 | p &lt | 0.001 | 34-44 years | 1.50 | 95% | CI | 1.19 | p &lt | 0.001 | years ||| ||| 57% ||| 18-33 years old | 2.34 | 95% | CI | 1.78 | 3.08 | p &lt | 0.001 | 0.34 | 95% | CI | 0.25 | 0.45 | p &lt | 0.001 ||| ||| [SUMMARY]
null
Left subclavian artery stenting: an option for the treatment of the coronary-subclavian steal syndrome.
25140474
The subclavian steal syndrome is characterized by inversion of vertebral artery flow due to a stenotic lesion at the origin of the subclavian artery. The Coronary-subclavian Steal Syndrome is a variation of the Subclavian Steal Syndrome and is characterized by inversion of flow in the Internal Thoracic artery that has been used as a conduit in a myocardial revascularization. Its diagnosis must be suspected in patients with a difference in pulse and arterial pressure between the upper limbs who present with angina pectoris and have undergone a myocardial revascularization. Its treatment may be a surgical bypass or a transluminal angioplasty.
INTRODUCTION
Retrospective, non-randomized study, based on review of the hospital records of the patients treated with stenting of the left subclavian artery from January 2006 to September 2012.
METHODS
In the mentioned period, 4,291 myocardial revascularizations were performed with the use of the left mammary artery, and 16 patients were identified as having the Coronary-subclavian steal syndrome. All of them underwent endovascular treatment. The success rate was 100%; two patients experienced minor complications; none presented with major complications. Eleven of the 16 patients had ultrasonographic documentation of a patent stent for at least one year; two patients were lost to follow-up and two died.
RESULTS
Stenting of the left subclavian artery is a good option for the treatment of the Coronary-subclavian Steal Syndrome, with a high rate of technical and clinical success.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Angioplasty, Balloon, Coronary", "Coronary Angiography", "Coronary-Subclavian Steal Syndrome", "Female", "Humans", "Male", "Middle Aged", "Prospective Studies", "Reproducibility of Results", "Risk Factors", "Stents", "Subclavian Artery", "Treatment Outcome" ]
4389454
INTRODUCTION
The subclavian steal syndrome (SSS) is characterized by inversion of vertebral artery flow due to a stenotic lesion at the origin of the subclavian artery. The Coronary-subclavian Steal Syndrome is a variation of the SSS and is characterized by inversion of flow in the Internal Mammary artery that has been used as a conduit in a myocardial revascularization, leading to myocardial infarction. Its diagnosis must be suspected in patients with a difference in pulse and arterial pressure between the upper limbs who present with angina pectoris and have undergone a myocardial revascularization. Its treatment may be a surgical bypass or, after the rise of minimally invasive techniques, a transluminal angioplasty. Objective The objective of this article is to show left subclavian artery stenting as a safe and effective method to treat the Coronary-subclavian Steal Syndrome.
METHODS
Retrospective, non-randomized study, based on review of the hospital records of the patients treated with stenting of the left subclavian artery from January 2006 to September 2012. Epidemiological and clinical data were assessed, as well as the technique and materials used in the procedure. The study was approved by the ethics commission. The procedures were performed under local anesthesia after positioning of a valved sheath in the right common femoral artery and in the left brachial artery, by the Seldinger technique. After systemic heparin, the lesion was crossed with a 0.035'' hydrophilic guidewire, which was then exchanged for a stiff wire of the same diameter to give support to the stent (Figure 1). This approach allows several attempts at recanalization, whether proximal or distal to the lesion, and appropriate angiographic control (Figure 2). Filling of the left subclavian artery by retrograde flow of the left internal mammary artery Proximal and distal access to the lesion, facilitating crossing and angiographic control Balloon-expandable stents were used in the majority of the cases; only one self-expandable stent was used. The sizes varied from 7 to 10 mm in diameter and 25 to 60 mm in length. The material selection was made by visual angiographic analysis and was based on the nominal diameter of the target vessel, the diameters proximal and distal to the lesion, and its extension (Figure 3). Only one of the four cases of occlusion required pre-dilatation (Figure 4), due to the difficulty of advancing the balloon-expandable stent through the lesion. Balloon-expandable stent being released by brachial access Anterograde filling of the left subclavian artery and the proximal third of the left internal thoracic artery All patients were receiving antiplatelet therapy with acetylsalicylic acid (ASA) at the time of diagnosis, in doses ranging from 75 to 325 mg/day. 
In addition, Clopidogrel was started at 75 mg/day orally, or a 300 mg loading dose on the morning of the procedure, maintaining dual antiplatelet therapy for 30 days. Monitoring was carried out on an outpatient basis by performing color duplex ultrasound of the left subclavian artery. Routinely, the patient returned to the Endovascular clinic in a week for initial evaluation of pulse and pressure measurement in the upper limbs. Monitoring then continued in the sector of origin.
RESULTS
Four thousand two hundred and ninety-one coronary artery bypasses were performed in this period with the use of the left internal mammary artery, along with 69 angioplasties of the subclavian artery in this Institute, identifying 16 patients with the Coronary-subclavian steal syndrome (CSSS). All of them underwent endovascular treatment. The mean age of the patients was 67.2 years (53-81), seven females and nine males. Table 1 shows the distribution of risk factors among the patients treated. The clinical presentation leading to the diagnosis of CSSS varied: three patients presented with acute myocardial ischemia, and in the other 12 the diagnosis was made by coronary angiography after a provocative test positive for ischemia. In only one case was the diagnosis made after coronary angiography prior to percutaneous valve replacement (Table 2). Of the 16 patients included, 13 had stenoses and three had occlusions, in all cases in the proximal left subclavian artery. Demographic characteristics of the analyzed patients. Non-dialysis chronic renal insufficiency patient, treated after adequate renal preparation Clinical characteristics of the patients. Asymptomatic patient; diagnosis in the coronary angiography prior to valve replacement surgery The therapeutic success rate was 100%, with the criterion of antegrade flow in the internal mammary artery on digital angiography. Two patients experienced minor complications, a minor hematoma and a pseudoaneurysm that did not require surgical correction. No patient had major complications. Upon examination of the medical records, 11 patients had sonographic documentation of stent patency for at least one year; two patients were lost to follow-up and two died, one of infection and sepsis from a diabetic foot and the other of unknown cause. In all cases there was clinical improvement of symptoms after the procedure.
CONCLUSION
Angioplasty and stenting of the left subclavian artery is a good option for the treatment of the coronary-subclavian steal syndrome, with high rates of technical and clinical success. Moreover, it does not preclude surgical treatment in the case of more than one unsuccessful endovascular attempt.
[ "Objective" ]
[ "The objective of this article is to show the left subclavian artery stenting as a\nsafe and effective method to treat the Coronary-subclavian Steal Syndrome." ]
[ null ]
[ "INTRODUCTION", "Objective", "METHODS", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "The subclavian steal syndrome (SSS) is characterized by the vertebral artery flow\ninversion, due to a stenotic lesion in the origin of the subclavian artery. The\nCoronary-subclavian Steal Syndrome is a variation of the SSS and is characterized by\ninversion of flow in the Internal Mammary artery that has been used as conduct in a\nmyocardial revascularization, leading to miocardial infarction.\nIts diagnosis must be suspected in patients with difference in pulse and arterial\npressure in the upper limbs, that present with angina pectoris and that have done a\nmyocardial revascularization.\nIts treatment must be a surgical bypass or, after the rise of the minimal invasive\ntechniques, a transluminal angioplasty.\n Objective The objective of this article is to show the left subclavian artery stenting as a\nsafe and effective method to treat the Coronary-subclavian Steal Syndrome.\nThe objective of this article is to show the left subclavian artery stenting as a\nsafe and effective method to treat the Coronary-subclavian Steal Syndrome.", "The objective of this article is to show the left subclavian artery stenting as a\nsafe and effective method to treat the Coronary-subclavian Steal Syndrome.", "Retrospective, non-randomized trial, through revision of the hospital records of the\npatients treated with the stenting of the left subclavian artery, from January 2006 to\nSeptember 2012. Epidemiological and clinical data were assessed, as well as technique\nand materials used in the procedure. The study was approved by the ethics\ncommission.\nThe procedures were performed after local anesthesia and positioning of a valvulated\nsheath in the common right femoral artery and in the left brachial artery, by the\nSeldinger technique. After systemic heparin, the lesion was crossed with a 0.035''\nhydrophilic guidewire, and then it was exchanged by a stiff wire of the same diameter,\nto give support to the stent (Figure 1). 
This\napproach allows several attempts at recanalization, whether proximal or distal to the\nlesion, and appropriate angiographic control (Figure\n2).\nFilling of the left subclavian artery by retrograde flow of the left internal\nmammary artery\nProximal and distal access to the lesion, facilitating crossing and\nangiographic control\nBalloon-expandable stents were used in the majority of the cases; only one\nself-expandable stent was used. The sizes varied from 7 to 10 mm in diameter and 25 to 60\nmm in length. The material selection was made by visual angiographic analysis and was\nbased on the nominal diameter of the target vessel, the diameters proximal and distal to the\nlesion, and its extension (Figure 3). Only\none of the four cases of occlusion required pre-dilatation (Figure 4), due to the difficulty of advancing the balloon-expandable\nstent through the lesion.\nBalloon-expandable stent being released by brachial access\nAnterograde filling of the left subclavian artery and the proximal third of the left\ninternal thoracic artery\nAll patients were receiving antiplatelet therapy with acetylsalicylic acid (ASA) at the\ntime of diagnosis, in doses ranging from 75 to 325 mg/day. In addition, Clopidogrel was\nstarted at 75 mg/day orally, or a 300 mg loading dose on the morning of the procedure,\nmaintaining dual antiplatelet therapy for 30 days.\nMonitoring was carried out on an outpatient basis by performing color duplex\nultrasound of the left subclavian artery. Routinely, the patient returned to the\nEndovascular clinic in a week for initial evaluation of pulse and pressure measurement\nin the upper limbs. 
Monitoring then continued in the sector of origin.", "Four thousand two hundred and ninety-one coronary artery bypasses were performed in this\nperiod with the use of the left internal mammary artery, along with 69 angioplasties\nof the subclavian artery in this Institute, identifying 16 patients with the\nCoronary-subclavian steal syndrome (CSSS). All of them underwent endovascular\ntreatment.\nThe mean age of the patients was 67.2 years (53-81), seven females and nine males. Table 1 shows the distribution of risk factors\namong the patients treated. The clinical presentation leading to the diagnosis of CSSS varied:\nthree patients presented with acute myocardial ischemia, and in the other 12 the\ndiagnosis was made by coronary angiography after a provocative test positive for ischemia.\nIn only one case was the diagnosis made after coronary angiography prior to percutaneous\nvalve replacement (Table 2). Of the 16 patients\nincluded, 13 had stenoses and three had occlusions, in all cases in the proximal left\nsubclavian artery.\nDemographic characteristics of the analyzed patients.\nNon-dialysis chronic renal insufficiency patient, treated after adequate\nrenal preparation\nClinical characteristics of the patients.\nAsymptomatic patient; diagnosis in the coronary angiography prior to valve\nreplacement surgery\nThe therapeutic success rate was 100%, with the criterion of antegrade flow in the\ninternal mammary artery on digital angiography. Two patients experienced minor\ncomplications, a minor hematoma and a pseudoaneurysm that did not require surgical\ncorrection. No patient had major complications. Upon examination of the medical records,\n11 patients had sonographic documentation of stent patency for at least one year; two\npatients were lost to follow-up and two died, one of infection and sepsis from a diabetic foot and\nthe other of unknown cause. 
In all cases there was clinical improvement of symptoms after the\nprocedure.", "With an incidence between 0.5% and 2% of patients undergoing coronary artery bypass\ngrafting[1], the CSSS was\ninitially described by Hargola / Valle and Tyras / Barner in the 70s, concurrently with\nthe beginning of the use of the internal mammary artery as a conduit artery[2]. The use of this artery is widely\naccepted because of its high long-term patency rate and low atherosclerosis, being used\nin most coronary artery bypasses[3-6]. The incidence of this syndrome was 0.3%\nin our study, equivalent to that found in the literature. All cases diagnosed\nunderwent endovascular treatment.\nThe left subclavian artery is the branch of the aortic arch most affected by\natherosclerosis[7], which is the\nmain cause of the syndrome. This also explains why the vast majority of CSSS cases in the\nliterature occur on the left side, which was no different in the cases presented.\nOther causes include Takayasu arteritis, actinic arteritis and giant cell\narteritis[1]. The occlusion of\nthe proximal subclavian artery causes flow reversal in the downstream arteries (vertebral\nand internal mammary), leading to several vertebrobasilar symptoms (dizziness,\nnystagmus, nausea) and myocardial ischemia[2].\nConventional surgical revascularization procedures have good long-term\npatency[7], but carry a morbidity\nrisk of 4-11% and a mortality rate of up to 5%. Options include\nsubclavian-subclavian, carotid-subclavian and axillary-axillary grafts,\nsubclavian-carotid transposition, or even transposition of the internal mammary artery. Potential\ncomplications include fistula of the thoracic duct, Horner's syndrome, supraclavicular\nnerve injury (e.g. phrenic and recurrent laryngeal nerves) and decompensation of preexisting\natherosclerotic disease in the supra-aortic trunks, thereby leading to ischemic or\nneurological symptoms[7,8].\nIn the cases presented, there were two minor complications and no major complications,\nwith therapeutic and clinical success of 100%. The technical literature reports\nsuccess of more than 80%, with complication rates from 3 to 6% and high patency at up\nto ten years of follow-up[5,7,9,10]. None of the patients experienced\nacute vertebrobasilar neurological symptoms due to reverse flow in the vertebral artery,\nwhich protects the cerebral circulation by diverting embolic plaque fragments to the\nupper limb. The forward flow is restored gradually, from 20 seconds to 30 minutes,\nprobably due to cerebral self-regulation systems that decrease vascular\nresistance[3,8]. There are reports in the literature of internal mammary\nartery blockage and aspiration of blood through the brachial catheter during the\nprocedure, to avoid potential embolization to the coronary territory, especially when\nan antegrade flow is noted in the internal mammary - a variant of the CSSS[3,10]. However, in the patients presented, this technique was not used,\nwithout any harm to the result.\nThe choice of balloon-expandable stents rather than self-expandable in most cases is due\nto their greater radial strength and greater accuracy in delivery. However, in very tight\nlesions or occlusions, when a certain resistance is perceived in positioning the\nstent, pre-dilation may be necessary to facilitate its passage and to prevent it from\ndeforming over the balloon. In the cases presented, only one required\npre-dilatation.\nOne should remember that the label of the balloon-expandable stents available on the\nmarket today does not include their use in the supra-aortic area. 
The use of these stents\nin this region is due to the excellent results in case series.\nIn most services, an angiographic study of the aortic arch and supra-aortic\ntrunks is not routine prior to coronary artery bypass[7,11]. Thus, physical\nexamination of the upper limbs is necessary to detect any change in\npulse/pressure or a supraclavicular bruit before surgery[5,12]. In contrast,\nin revascularized patients presenting with acute or insidious myocardial ischemia, we\nmust always remember the CSSS as a possible etiology. In fact, the development of this\nsyndrome less than a year after myocardial revascularization suggests the presence of a\nsubclavian steal syndrome not diagnosed by the time of surgery[5].", "Angioplasty and stenting of the left subclavian artery is a good option for the\ntreatment of the coronary-subclavian steal syndrome, with high rates of technical and\nclinical success. Moreover, it does not preclude surgical treatment in the case of more\nthan one unsuccessful endovascular attempt." ]
[ "intro", null, "methods", "results", "discussion", "conclusions" ]
[ "Angioplasty", "Peripheral Arterial Disease", "Coronary Disease", "Subclavian Artery", "Coronary-Subclavian Steal Syndrome" ]
INTRODUCTION: The subclavian steal syndrome (SSS) is characterized by inversion of vertebral artery flow due to a stenotic lesion at the origin of the subclavian artery. The Coronary-subclavian Steal Syndrome is a variation of the SSS and is characterized by inversion of flow in the Internal Mammary artery that has been used as a conduit in a myocardial revascularization, leading to myocardial infarction. Its diagnosis must be suspected in patients with a difference in pulse and arterial pressure between the upper limbs who present with angina pectoris and have undergone a myocardial revascularization. Its treatment may be a surgical bypass or, after the rise of minimally invasive techniques, a transluminal angioplasty. Objective The objective of this article is to show left subclavian artery stenting as a safe and effective method to treat the Coronary-subclavian Steal Syndrome. Objective: The objective of this article is to show left subclavian artery stenting as a safe and effective method to treat the Coronary-subclavian Steal Syndrome. METHODS: Retrospective, non-randomized study, based on review of the hospital records of the patients treated with stenting of the left subclavian artery from January 2006 to September 2012. Epidemiological and clinical data were assessed, as well as the technique and materials used in the procedure. The study was approved by the ethics commission. The procedures were performed under local anesthesia after positioning of a valved sheath in the right common femoral artery and in the left brachial artery, by the Seldinger technique. After systemic heparin, the lesion was crossed with a 0.035'' hydrophilic guidewire, which was then exchanged for a stiff wire of the same diameter to give support to the stent (Figure 1). 
This approach allows several attempts at recanalization, whether proximal or distal to the lesion, and appropriate angiographic control (Figure 2). Filling of the left subclavian artery by retrograde flow of the left internal mammary artery Proximal and distal access to the lesion, facilitating crossing and angiographic control Balloon-expandable stents were used in the majority of the cases; only one self-expandable stent was used. The sizes varied from 7 to 10 mm in diameter and 25 to 60 mm in length. The material selection was made by visual angiographic analysis and was based on the nominal diameter of the target vessel, the diameters proximal and distal to the lesion, and its extension (Figure 3). Only one of the four cases of occlusion required pre-dilatation (Figure 4), due to the difficulty of advancing the balloon-expandable stent through the lesion. Balloon-expandable stent being released by brachial access Anterograde filling of the left subclavian artery and the proximal third of the left internal thoracic artery All patients were receiving antiplatelet therapy with acetylsalicylic acid (ASA) at the time of diagnosis, in doses ranging from 75 to 325 mg/day. In addition, Clopidogrel was started at 75 mg/day orally, or a 300 mg loading dose on the morning of the procedure, maintaining dual antiplatelet therapy for 30 days. Monitoring was carried out on an outpatient basis by performing color duplex ultrasound of the left subclavian artery. Routinely, the patient returned to the Endovascular clinic in a week for initial evaluation of pulse and pressure measurement in the upper limbs. Monitoring then continued in the sector of origin. RESULTS: Four thousand two hundred and ninety-one coronary artery bypasses were performed in this period with the use of the left internal mammary artery, along with 69 angioplasties of the subclavian artery in this Institute, identifying 16 patients with the Coronary-subclavian steal syndrome (CSSS). 
All of them underwent endovascular treatment. The mean age of the patients was 67.2 years (53-81), seven females and nine males. Table 1 shows the distribution of risk factors among the patients treated. The clinical presentation leading to the diagnosis of CSSS varied: three patients presented with acute myocardial ischemia, and in the other 12 the diagnosis was made by coronary angiography after a provocative test positive for ischemia. In only one case was the diagnosis made after coronary angiography prior to percutaneous valve replacement (Table 2). Of the 16 patients included, 13 had stenoses and three had occlusions, in all cases in the proximal left subclavian artery. Demographic characteristics of the analyzed patients. Non-dialysis chronic renal insufficiency patient, treated after adequate renal preparation Clinical characteristics of the patients. Asymptomatic patient; diagnosis in the coronary angiography prior to valve replacement surgery The therapeutic success rate was 100%, with the criterion of antegrade flow in the internal mammary artery on digital angiography. Two patients experienced minor complications, a minor hematoma and a pseudoaneurysm that did not require surgical correction. No patient had major complications. Upon examination of the medical records, 11 patients had sonographic documentation of stent patency for at least one year; two patients were lost to follow-up and two died, one of infection and sepsis from a diabetic foot and the other of unknown cause. In all cases there was clinical improvement of symptoms after the procedure. DISCUSSION: With an incidence between 0.5% and 2% of patients undergoing coronary artery bypass grafting[1], the CSSS was initially described by Hargola / Valle and Tyras / Barner in the 70s, concurrently with the beginning of the use of the internal mammary artery as a conduit artery[2]. 
The use of this artery is widely accepted because of its high long-term patency rate and low atherosclerosis, being used in most coronary artery bypasses[3-6]. The incidence of this syndrome was 0.3% in our study, equivalent to that found in the literature. All cases diagnosed underwent endovascular treatment. The left subclavian artery is the branch of the aortic arch most affected by atherosclerosis[7], which is the main cause of the syndrome. This also explains why the vast majority of CSSS cases in the literature occur on the left side, which was no different in the cases presented. Other causes include Takayasu arteritis, actinic arteritis and giant cell arteritis[1]. The occlusion of the proximal subclavian artery causes flow reversal in the downstream arteries (vertebral and internal mammary), leading to several vertebrobasilar symptoms (dizziness, nystagmus, nausea) and myocardial ischemia[2]. Conventional surgical revascularization procedures have good long-term patency[7], but carry a morbidity risk of 4-11% and a mortality rate of up to 5%. Options include subclavian-subclavian, carotid-subclavian and axillary-axillary grafts, subclavian-carotid transposition, or even transposition of the internal mammary artery. Potential complications include fistula of the thoracic duct, Horner's syndrome, supraclavicular nerve injury (e.g. phrenic and recurrent laryngeal nerves) and decompensation of preexisting atherosclerotic disease in the supra-aortic trunks, thereby leading to ischemic or neurological symptoms[7,8]. In the cases presented, there were two minor complications and no major complications, with therapeutic and clinical success of 100%. The technical literature reports success of more than 80%, with complication rates from 3 to 6% and high patency at up to ten years of follow-up[5,7,9,10]. 
None of the patients experienced acute vertebrobasilar neurological symptoms due to reverse flow in the vertebral artery, which protects the cerebral circulation by diverting embolic plaque fragments to the upper limb. The forward flow is restored gradually, from 20 seconds to 30 minutes, probably due to cerebral self-regulation systems that decrease vascular resistance[3,8]. There are reports in the literature of internal mammary artery blockage and aspiration of blood through the brachial catheter during the procedure, to avoid potential embolization to the coronary territory, especially when an antegrade flow is noted in the internal mammary - a variant of the CSSS[3,10]. However, in the patients presented, this technique was not used, without any harm to the result. The choice of balloon-expandable stents rather than self-expandable in most cases is due to their greater radial strength and greater accuracy in delivery. However, in very tight lesions or occlusions, when a certain resistance is perceived in positioning the stent, pre-dilation may be necessary to facilitate its passage and to prevent it from deforming over the balloon. In the cases presented, only one required pre-dilatation. One should remember that the label of the balloon-expandable stents available on the market today does not include their use in the supra-aortic area. The use of these stents in this region is due to the excellent results in case series. In most services, an angiographic study of the aortic arch and supra-aortic trunks is not routine prior to coronary artery bypass[7,11]. Thus, physical examination of the upper limbs is necessary to detect any change in pulse/pressure or a supraclavicular bruit before surgery[5,12]. In contrast, in revascularized patients presenting with acute or insidious myocardial ischemia, we must always remember the CSSS as a possible etiology. 
In fact, the development of this syndrome less than a year after myocardial revascularization suggests the presence of a subclavian steal syndrome not diagnosed by the time of surgery[5]. CONCLUSION: Angioplasty and stenting of the left subclavian artery is a good option for the treatment of the coronary-subclavian steal syndrome, with high rates of technical and clinical success. Moreover, it does not preclude surgical treatment in the case of more than one unsuccessful endovascular attempt.
Background: The subclavian steal syndrome is characterized by inversion of vertebral artery flow due to a stenotic lesion at the origin of the subclavian artery. The Coronary-subclavian Steal Syndrome is a variation of the Subclavian Steal Syndrome and is characterized by inversion of flow in the Internal Thoracic artery that has been used as a conduit in a myocardial revascularization. Its diagnosis must be suspected in patients with a difference in pulse and arterial pressure between the upper limbs who present with angina pectoris and have undergone a myocardial revascularization. Its treatment may be a surgical bypass or a transluminal angioplasty. Methods: Retrospective, non-randomized study, based on review of the hospital records of the patients treated with stenting of the left subclavian artery from January 2006 to September 2012. Results: In the mentioned period, 4,291 myocardial revascularizations were performed with the use of the left mammary artery, and 16 patients were identified as having the Coronary-subclavian steal syndrome. All of them underwent endovascular treatment. The success rate was 100%; two patients experienced minor complications; none presented with major complications. Eleven of the 16 patients had ultrasonographic documentation of a patent stent for at least one year; two patients were lost to follow-up and two died. Conclusions: Stenting of the left subclavian artery is a good option for the treatment of the Coronary-subclavian Steal Syndrome, with a high rate of technical and clinical success.
INTRODUCTION: The subclavian steal syndrome (SSS) is characterized by inversion of vertebral artery flow due to a stenotic lesion at the origin of the subclavian artery. The Coronary-subclavian Steal Syndrome is a variation of the SSS and is characterized by inversion of flow in the Internal Mammary artery that has been used as a conduit in a myocardial revascularization, leading to myocardial infarction. Its diagnosis must be suspected in patients with a difference in pulse and arterial pressure between the upper limbs who present with angina pectoris and have undergone a myocardial revascularization. Its treatment may be a surgical bypass or, after the rise of minimally invasive techniques, a transluminal angioplasty. Objective The objective of this article is to show left subclavian artery stenting as a safe and effective method to treat the Coronary-subclavian Steal Syndrome. CONCLUSION: Angioplasty and stenting of the left subclavian artery is a good option for the treatment of the coronary-subclavian steal syndrome, with high rates of technical and clinical success. Moreover, it does not preclude surgical treatment in the case of more than one unsuccessful endovascular attempt.
Background: The subclavian steal syndrome is characterized by inversion of vertebral artery flow due to a stenotic lesion at the origin of the subclavian artery. The Coronary-subclavian Steal Syndrome is a variation of the Subclavian Steal Syndrome and is characterized by inversion of flow in the Internal Thoracic artery that has been used as a conduit in a myocardial revascularization. Its diagnosis must be suspected in patients with a difference in pulse and arterial pressure between the upper limbs who present with angina pectoris and have undergone a myocardial revascularization. Its treatment may be a surgical bypass or a transluminal angioplasty. Methods: Retrospective, non-randomized study, based on review of the hospital records of the patients treated with stenting of the left subclavian artery from January 2006 to September 2012. Results: In the mentioned period, 4,291 myocardial revascularizations were performed with the use of the left mammary artery, and 16 patients were identified as having the Coronary-subclavian steal syndrome. All of them underwent endovascular treatment. The success rate was 100%; two patients experienced minor complications; none presented with major complications. Eleven of the 16 patients had ultrasonographic documentation of a patent stent for at least one year; two patients were lost to follow-up and two died. Conclusions: Stenting of the left subclavian artery is a good option for the treatment of the Coronary-subclavian Steal Syndrome, with a high rate of technical and clinical success.
1,898
277
[ 29 ]
6
[ "artery", "subclavian", "patients", "left", "coronary", "subclavian artery", "syndrome", "internal", "left subclavian", "left subclavian artery" ]
[ "angioplasties subclavian artery", "treat coronary subclavian", "stenting left subclavian", "coronary subclavian steal", "subclavian artery stenting" ]
[CONTENT] Angioplasty | Peripheral Arterial Disease | Coronary Disease | Subclavian Artery | Coronary-Subclavian Steal Syndrome [SUMMARY]
[CONTENT] Angioplasty | Peripheral Arterial Disease | Coronary Disease | Subclavian Artery | Coronary-Subclavian Steal Syndrome [SUMMARY]
[CONTENT] Angioplasty | Peripheral Arterial Disease | Coronary Disease | Subclavian Artery | Coronary-Subclavian Steal Syndrome [SUMMARY]
[CONTENT] Angioplasty | Peripheral Arterial Disease | Coronary Disease | Subclavian Artery | Coronary-Subclavian Steal Syndrome [SUMMARY]
[CONTENT] Angioplasty | Peripheral Arterial Disease | Coronary Disease | Subclavian Artery | Coronary-Subclavian Steal Syndrome [SUMMARY]
[CONTENT] Angioplasty | Peripheral Arterial Disease | Coronary Disease | Subclavian Artery | Coronary-Subclavian Steal Syndrome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Angioplasty, Balloon, Coronary | Coronary Angiography | Coronary-Subclavian Steal Syndrome | Female | Humans | Male | Middle Aged | Prospective Studies | Reproducibility of Results | Risk Factors | Stents | Subclavian Artery | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Angioplasty, Balloon, Coronary | Coronary Angiography | Coronary-Subclavian Steal Syndrome | Female | Humans | Male | Middle Aged | Prospective Studies | Reproducibility of Results | Risk Factors | Stents | Subclavian Artery | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Angioplasty, Balloon, Coronary | Coronary Angiography | Coronary-Subclavian Steal Syndrome | Female | Humans | Male | Middle Aged | Prospective Studies | Reproducibility of Results | Risk Factors | Stents | Subclavian Artery | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Angioplasty, Balloon, Coronary | Coronary Angiography | Coronary-Subclavian Steal Syndrome | Female | Humans | Male | Middle Aged | Prospective Studies | Reproducibility of Results | Risk Factors | Stents | Subclavian Artery | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Angioplasty, Balloon, Coronary | Coronary Angiography | Coronary-Subclavian Steal Syndrome | Female | Humans | Male | Middle Aged | Prospective Studies | Reproducibility of Results | Risk Factors | Stents | Subclavian Artery | Treatment Outcome [SUMMARY]
[CONTENT] Aged | Aged, 80 and over | Angioplasty, Balloon, Coronary | Coronary Angiography | Coronary-Subclavian Steal Syndrome | Female | Humans | Male | Middle Aged | Prospective Studies | Reproducibility of Results | Risk Factors | Stents | Subclavian Artery | Treatment Outcome [SUMMARY]
[CONTENT] angioplasties subclavian artery | treat coronary subclavian | stenting left subclavian | coronary subclavian steal | subclavian artery stenting [SUMMARY]
[CONTENT] angioplasties subclavian artery | treat coronary subclavian | stenting left subclavian | coronary subclavian steal | subclavian artery stenting [SUMMARY]
[CONTENT] angioplasties subclavian artery | treat coronary subclavian | stenting left subclavian | coronary subclavian steal | subclavian artery stenting [SUMMARY]
[CONTENT] angioplasties subclavian artery | treat coronary subclavian | stenting left subclavian | coronary subclavian steal | subclavian artery stenting [SUMMARY]
[CONTENT] angioplasties subclavian artery | treat coronary subclavian | stenting left subclavian | coronary subclavian steal | subclavian artery stenting [SUMMARY]
[CONTENT] angioplasties subclavian artery | treat coronary subclavian | stenting left subclavian | coronary subclavian steal | subclavian artery stenting [SUMMARY]
[CONTENT] artery | subclavian | patients | left | coronary | subclavian artery | syndrome | internal | left subclavian | left subclavian artery [SUMMARY]
[CONTENT] artery | subclavian | patients | left | coronary | subclavian artery | syndrome | internal | left subclavian | left subclavian artery [SUMMARY]
[CONTENT] artery | subclavian | patients | left | coronary | subclavian artery | syndrome | internal | left subclavian | left subclavian artery [SUMMARY]
[CONTENT] artery | subclavian | patients | left | coronary | subclavian artery | syndrome | internal | left subclavian | left subclavian artery [SUMMARY]
[CONTENT] artery | subclavian | patients | left | coronary | subclavian artery | syndrome | internal | left subclavian | left subclavian artery [SUMMARY]
[CONTENT] artery | subclavian | patients | left | coronary | subclavian artery | syndrome | internal | left subclavian | left subclavian artery [SUMMARY]
[CONTENT] subclavian | objective | artery | steal syndrome | steal | subclavian steal | subclavian steal syndrome | syndrome | characterized | sss characterized [SUMMARY]
[CONTENT] lesion | diameter | figure | artery | left | distal | mg | proximal distal | access | stent [SUMMARY]
[CONTENT] patients | angiography | coronary angiography | diagnosis coronary angiography | diagnosis coronary | diagnosis | coronary | patient | replacement | characteristics [SUMMARY]
[CONTENT] treatment | case unsuccessfull endovascular attempt | angioplasty stenting left | angioplasty stenting | good option | good option treatment | good option treatment coronary | preclude | preclude surgical | preclude surgical treatment [SUMMARY]
[CONTENT] subclavian | artery | coronary | patients | syndrome | left | subclavian artery | coronary subclavian | coronary subclavian steal | coronary subclavian steal syndrome [SUMMARY]
[CONTENT] subclavian | artery | coronary | patients | syndrome | left | subclavian artery | coronary subclavian | coronary subclavian steal | coronary subclavian steal syndrome [SUMMARY]
[CONTENT] subclavian ||| the Subclavian Steal Syndrome | Internal Thracic ||| ||| [SUMMARY]
[CONTENT] subclavian | January 2006 to September 2012 [SUMMARY]
[CONTENT] 4.291 | 16 | Coronary ||| ||| 100% | two ||| Eleven | 16 | at least one year | two | two [SUMMARY]
[CONTENT] subclavian [SUMMARY]
[CONTENT] subclavian ||| the Subclavian Steal Syndrome | Internal Thracic ||| ||| ||| subclavian | January 2006 to September 2012 ||| ||| 4.291 | 16 | Coronary ||| ||| 100% | two ||| Eleven | 16 | at least one year | two | two ||| subclavian [SUMMARY]
[CONTENT] subclavian ||| the Subclavian Steal Syndrome | Internal Thracic ||| ||| ||| subclavian | January 2006 to September 2012 ||| ||| 4.291 | 16 | Coronary ||| ||| 100% | two ||| Eleven | 16 | at least one year | two | two ||| subclavian [SUMMARY]
Utilization and impact of cardiovascular magnetic resonance on patient management in heart failure: insights from the SCMR Registry.
36404335
Cardiovascular magnetic resonance (CMR) is an important diagnostic test used in the evaluation of patients with heart failure (HF). However, the demographics and clinical characteristics of those undergoing CMR for evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. The goal of this study was to describe the characteristics of patients undergoing CMR for HF and to determine the extent to which CMR leads to changes in downstream patient management by comparing pre-CMR indications and post-CMR diagnoses.
BACKGROUND
We utilized the Society for Cardiovascular Magnetic Resonance (SCMR) Registry as our data source and abstracted data for patients undergoing CMR scanning for HF indications from 2013 to 2019. Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher's exact test was used when comparing categorical variables. The Wilcoxon rank sum test was used to compare continuous variables.
METHODS
3,837 patients were included in our study. 94% of the CMRs were performed in the United States, with China, South Korea and India also contributing cases. Median age of HF patients was 59.3 years (IQR, 47.1, 68.3 years), and 67% of the scans were performed on women. Almost two-thirds of the patients were scanned on 3T CMR scanners. Overall, 49% of patients who underwent CMR scanning for HF had a change between the pre-test indication and post-CMR diagnosis. 53% of patients undergoing scanning on 3T had a change between the pre-test indication and post-CMR diagnosis, compared to 44% of patients who were scanned on 1.5T (p < 0.01).
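The 3T vs. 1.5T comparison above can be checked against the reported group sizes (2,385 patients scanned at 3T and 1,217 at 1.5T; see Results). The sketch below reconstructs approximate counts from the published percentages — these counts are illustrative, not registry data — and applies a pure-Python two-proportion z-test as a stand-in approximation for the Fisher's exact test used in the paper:

```python
import math

# Group sizes reported in the study
n_3t, n_15t = 2385, 1217

# Counts reconstructed from the reported change rates (illustrative only):
# 53% of 3T scans and 44% of 1.5T scans had a change in diagnosis.
changed_3t = round(0.53 * n_3t)    # 1264
changed_15t = round(0.44 * n_15t)  # 535

# Two-proportion z-test with a pooled standard error
p1, p2 = changed_3t / n_3t, changed_15t / n_15t
p_pool = (changed_3t + changed_15t) / (n_3t + n_15t)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_3t + 1 / n_15t))
z = (p1 - p2) / se

# Two-sided p-value from the normal approximation
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p = {p_value:.1e}")
```

With these reconstructed counts the difference is highly significant (p well below the 0.01 threshold quoted in the abstract), consistent with the reported result.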
RESULTS
Our results suggest a potential impact of CMR scanning on downstream diagnosis of patients referred for CMR for HF, with a larger potential impact on those scanned on 3T CMR scanners.
CONCLUSION
[ "Humans", "Female", "Predictive Value of Tests", "Magnetic Resonance Spectroscopy", "Heart Failure", "Magnetic Resonance Imaging", "Registries" ]
9677679
Background
Heart failure (HF) is a global public health problem affecting at least 26 million people worldwide and is increasing in prevalence [1–3]. Cardiovascular magnetic resonance (CMR) imaging is an important diagnostic test in the assessment of patients with HF [4–18]. However, the demographics and clinical characteristics of those undergoing CMR for the evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. Much of the currently available evidence in this area is built upon small single-center studies with widely varying results and conclusions. For example, CMR has been reported to lead to a change in patient management in anywhere between 16 and 65% of studies [19–21]. One way in which CMR can lead to a change in patient management is by providing valuable diagnostic information that leads to a change in the understanding of the etiology of HF post-test, when compared to pre-test [19–21]. This diagnostic information can help inform downstream treatment decisions and impact outcomes. However, the extent to which CMR can impact HF management in this manner is currently unclear. A systematic, multi-center evaluation of the impact of CMR on patient management is required to shed further light on these issues. The objectives of this paper are to utilize the Society for Cardiovascular Magnetic Resonance (SCMR) Registry in order to: (1) describe patient demographics, clinical and CMR scanning characteristics of patients undergoing CMR for HF; and (2) determine the extent to which CMR is associated with changes in downstream patient management by comparing the pre-CMR indication and the post-CMR diagnostic information.
null
null
Results
There were 6,654 patients in the registry who underwent CMR for the indication of HF during our study period. However, 2,817 patients were excluded because they did not have data entered regarding their pre-CMR indication and/or post-CMR diagnosis to allow us to evaluate whether or not receipt of CMR was associated with a change in management. Thus, 3,837 patients ultimately remained in our cohort. Of these, 94% of the CMRs were performed in the United States with 68% occurring at one site (see Table 1). Other countries with significant contributions to the SCMR Registry include China (n = 182, 4.7%) and South Korea (n = 41, 1.1%). India contributed 3 cases.

Table 1. Distribution of sites contributing to the Society for Cardiovascular Magnetic Resonance (SCMR) Registry

Country       | Site Number | No. of Patients | Percentage
United States | 87          | 1               | < 0.1%
United States | 70          | 1               | < 0.1%
India         | 54          | 3               | 0.1%
United States | 25          | 7               | 0.2%
United States | 63          | 31              | 0.8%
South Korea   | 68          | 41              | 1.1%
United States | 86          | 56              | 1.5%
United States | 36          | 116             | 3.0%
United States | 85          | 145             | 3.8%
China         | 18          | 182             | 4.7%
United States | 2           | 262             | 6.8%
United States | 8           | 376             | 9.8%
United States | 1           | 2,616           | 68.2%
Total         |             | 3,837           |
Conclusion
In our study of 3,837 SCMR Registry patients undergoing CMR for the evaluation of HF, we found that CMR was associated with a change between the pre-test indication and post-CMR diagnosis in 49% of cases, suggesting a potential impact on patient management. The rate of change occurred more commonly for patients scanned at 3T.
[ "Background", "Methods", "Data source", "Identification of the patient population and cohort creation", "Determination of whether CMR was associated with changes in downstream management", "Statistical analysis", "Baseline patient characteristics", "Sequences utilized in the CMRs", "Field strength and image quality", "Initial indications and diagnosis following CMR scanning", "Sensitivity analysis", "Comparison of cohort patients with excluded patients", "Limitations", "" ]
[ "Heart failure (HF) is a global public health problem affecting at least 26 million people worldwide and is increasing in prevalence [1–3]. Cardiovascular magnetic resonance (CMR) imaging is an important diagnostic test in the assessment of patients with HF [4–18]. However, the demographics and clinical characteristics of those undergoing CMR for the evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. Much of the current available evidence in this area is built upon small single center studies with widely varying results and conclusions. For example, CMR has been reported to lead to a change in patient management in anywhere between 16 and 65% of studies [19–21]. One way in which CMR can lead to a change in patient management is by providing valuable diagnostic information that leads to a change in the understanding of the etiology of HF post-test, when compared to pre-test [19] [20, 21]. This diagnostic information can help inform downstream treatment decisions and impact outcomes. However, the extent to which CMR can impact HF management in this manner is currently unclear. 
A systematic, multi-center evaluation of the impact of CMR on patient management is required to shed further light on these issues.
The objectives of this paper are to utilize the Society for Cardiovascular Magnetic Resonance (SCMR) Registry in order to:
(1) describe patient demographics, clinical and CMR scanning characteristics of patients undergoing CMR for HF; and
(2) determine the extent to which CMR is associated with changes in downstream patient management by comparing the pre-CMR indication and the post-CMR diagnostic information.", " Data source The data source was the SCMR Registry. This Registry was created by the SCMR in 2013 and the SCMR has continued to support this registry since its creation. For this project, data were abstracted from Jan 1, 2013 - Dec 31, 2019. The overarching vision of the SCMR was to provide a central platform to demonstrate the impact of CMR on patient outcomes and clinical care. One of its stated goals is the determination of the downstream impact of CMR on diagnostic clinical decision making [22]. At the time that this project was performed, the SCMR Registry contained demographic and CMR data from 13 centers across the world. Site participation was invited and advertised by the SCMR at multiple forums, including at the SCMR Annual Scientific Sessions. Data for the registry were collected by individual sites retrospectively on consecutive patients who underwent clinically indicated CMR scans and then entered into the SCMR registry by those sites.
The data source was the SCMR Registry. This Registry was created by the SCMR in 2013 and the SCMR has continued to support this registry since its creation. 
For this project, data were abstracted from Jan 1, 2013- Dec 31, 2019. The overarching vision of the SCMR was to provide a central platform to demonstrate the impact of CMR on patient outcomes and clinical care. One of its stated goals is the determination of the downstream impact of CMR on diagnostic clinical decision making [22]. At the time that this project was performed, the SCMR Registry contained demographic and CMR data from 13 centers across the world. Site participation was invited and advertised by the SCMR at multiple forums, including at the SCMR Annual Scientific Sessions. Data for the registry were collected by individual sites retrospectively on consecutive patients who underwent clinically indicated CMR scans and then entered into the SCMR registry by those sites.", "The data source was the SCMR Registry. This Registry was created by the SCMR in 2013 and the SCMR has continued to support this registry since its creation. For this project, data were abstracted from Jan 1, 2013- Dec 31, 2019. The overarching vision of the SCMR was to provide a central platform to demonstrate the impact of CMR on patient outcomes and clinical care. One of its stated goals is the determination of the downstream impact of CMR on diagnostic clinical decision making [22]. At the time that this project was performed, the SCMR Registry contained demographic and CMR data from 13 centers across the world. Site participation was invited and advertised by the SCMR at multiple forums, including at the SCMR Annual Scientific Sessions. Data for the registry were collected by individual sites retrospectively on consecutive patients who underwent clinically indicated CMR scans and then entered into the SCMR registry by those sites.", "CMRs were identified in the SCMR Registry if they were performed for a HF related pre-test indication. 
Specifically, patients with left ventricular ejection fraction (LVEF) < 55% with the following pre-CMR suspected indications were included: amyloidosis, coronary artery disease (CAD) AND categorized as having LV dysfunction, arrhythmogenic right ventricular dysplasia (ARVC)/cardiomyopathy, cardiomyopathy, arrhythmic disease, Friedreich’s ataxia, hypertrophic cardiomyopathy (HCM) or hemochromatosis. Derivation of the above-mentioned algorithm for patient selection involved the following steps: First, a draft list of indications was selected a priori by the lead author (IR) and a co-author who was largely responsible for establishing the SCMR Registry and who was intimately familiar with its structure and data (RK). This list was based on the principle of balancing the detection of the largest number of patients who received a CMR for the indication of HF whilst minimizing inclusion of patients who were not scanned for HF-related indications. It is for this reason that we included only those patients with reduced LVEF (LVEF < 55%) in the cohort. While this list excluded those with HF with preserved ejection fraction, we believed that removing the LVEF inclusion condition would contaminate our data by including many patients who were not scanned for HF indications. Next, as this is an SCMR-initiated paper, the proposal including the criteria for patient selection was reviewed and approved by the SCMR’s Science Committee. Following this initial approval, the dataset/cohort for the project was created from the larger SCMR Registry based on the approved patient selection criteria. Subsequently, the data and various iterations of the paper were approved by the SCMR’s Publications Committee. 
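The selection rule described above (reduced LVEF plus an HF-related pre-test indication) can be sketched as a simple filter. This is a minimal illustration only: the field names ("lvef", "indication") and the string labels are assumptions for the sketch, not the SCMR Registry schema.

```python
# Hypothetical sketch of the cohort-selection rule; field names are assumed.
HF_INDICATIONS = {
    "amyloidosis",
    "cad with lv dysfunction",
    "arvc/cardiomyopathy",
    "cardiomyopathy",
    "arrhythmic disease",
    "friedreich's ataxia",
    "hypertrophic cardiomyopathy",
    "hemochromatosis",
}

def in_hf_cohort(record: dict) -> bool:
    """Include patients with reduced LVEF (< 55%) scanned for an HF-related indication."""
    return record["lvef"] < 55 and record["indication"] in HF_INDICATIONS

# Example records (illustrative only)
scans = [
    {"lvef": 40, "indication": "cardiomyopathy"},    # included
    {"lvef": 60, "indication": "cardiomyopathy"},    # excluded: preserved EF
    {"lvef": 35, "indication": "valvular disease"},  # excluded: not an HF indication
]
cohort = [s for s in scans if in_hf_cohort(s)]
print(len(cohort))  # 1
```

Note that the LVEF threshold is applied together with the indication list, mirroring the paper's stated trade-off between sensitivity and contamination of the cohort.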
The final paper, prior to submission to this journal was approved by the SCMR’s Board of Trustees.", "We determined that a change in management was associated with the CMR if there was a change between the initial indication for the CMR and the subsequent diagnosis after the CMR was completed. The ‘indication’ field was an existing variable in the SCMR Registry. We then reviewed, for each subject, the text field where the final conclusions from the CMR scan were inserted. Using our clinical judgement, we determined if there was a clinically relevant change from the original pre-CMR indication, in a method similar to that used in other studies [21]. The initial screen was done by a senior cardiovascular disease resident (MH). In terms of training, this resident had completed a cardiology residency program (i.e. was a board certified cardiologist in Canada) and was undergoing a fellowship in advanced cardiac imaging at the time of the data review. As such, the resident was familiar with the role of CMR in the management of cardiovascular disease. All results were subsequently reviewed by a level 3 trained SCMR cardiologist and Fellow of the SCMR (IR).\n Statistical analysis Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher’s exact test was used when comparing categorical variables. Specifically, this test was used when comparing the change rate between patients undergoing CMR at 1.5 vs. 3T, between males and females, between patients of different body mass index (BMI) categories, and between patients with and without late gadolinium enhancement (LGE). The Wilcoxon rank sum test was used to compare continuous variables. P values of < 0.05 were considered statistically significant.\nDescriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher’s exact test was used when comparing categorical variables. 
Specifically, this test was used when comparing the change rate between patients undergoing CMR at 1.5 vs. 3T, between males and females, between patients of different body mass index (BMI) categories, and between patients with and without late gadolinium enhancement (LGE). The Wilcoxon rank sum test was used to compare continuous variables. P values of < 0.05 were considered statistically significant.", "Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher’s exact test was used when comparing categorical variables. Specifically, this test was used when comparing the change rate between patients undergoing CMR at 1.5 vs. 3T, between males and females, between patients of different body mass index (BMI) categories, and between patients with and without late gadolinium enhancement (LGE). The Wilcoxon rank sum test was used to compare continuous variables. P values of < 0.05 were considered statistically significant.", "Baseline patient characteristics are summarized in Table 2. Median age of subjects was 59.3 years (IQR, 47.1, 68.3 years) and the median BMI was 27.1 kg/m2 (IQR, 23.8, 31.5 kg/m2). Women constituted 67% of the patients. In terms of major cardiovascular risk factors, 49% of patients had hypertension, 18% had diabetes, 21% were active smokers and 11% had a family history of premature CAD. There were 36% of patients who had evidence of overt CAD (prior myocardial infarction, percutaneous coronary interventions and/or coronary artery disease). With regards to cardiac function, the median LVEF was 41% (IQR 29%, 50%) and the median right ventricular ejection fraction (RVEF) was 48% (IQR 39%, 54%). 3,540 patients had data entered regarding LGE positivity (positive or negative). Overall, 54% of patients were LGE positive. Patterns of LGE (for example ischemic vs. 
non-ischemic) as well as LGE quantity were not available in the data.

Table 2. Characteristics of the patient population

Demographics      | Mean ± SD   | Median | Q1   | Q3   | % missing values
Age, years        | 57.0 ± 16.1 | 59.3   | 47.1 | 68.3 | 0.3%
Height, meters    | 1.72 ± 0.11 | 1.73   | 1.65 | 1.80 | 1.1%
Weight, kilograms | 83.8 ± 21.9 | 81.5   | 68.0 | 96.5 | 0.4%
BMI, kg/m2        | 28.1 ± 6.5  | 27.1   | 23.8 | 31.5 | 1.1%
Female sex        | 67%         |        |      |      | 0%

Cardiac function | Median | Q1  | Q3  | % missing values
LVEDV, mL        | 195    | 156 | 250 | 15%
LVESV, mL        | 112    | 83  | 169 | 16%
LVEF, %          | 41     | 29  | 50  | 16%
LVM, g           | 128    | 100 | 161 | 25%
RVEDV, mL        | 147    | 116 | 185 | 22%
RVESV, mL        | 77     | 57  | 106 | 22%
LVEDVI, mL/m2    | 100    | 82  | 127 | 16%
LVESVI, mL/m2    | 57     | 43  | 86  | 16%
LVSV, mL         | 74     | 58  | 91  | 16%
LVEDD, mm        | 60     | 54  | 67  | 13%
LVESD, mm        | 47     | 40  | 57  | 14%
LVMI, g/m2       | 65     | 53  | 81  | 25%
RVEDVI, mL/m2    | 76     | 61  | 92  | 23%
RVESVI, mL/m2    | 39     | 30  | 54  | 23%
RVSV, mL         | 67     | 51  | 84  | 22%
RVEF, %          | 48     | 39  | 54  | 22%

Cardiovascular history                        | Percentage | % missing values
History of myocardial infarction              | 18%        | 11%
History of percutaneous coronary intervention | 12%        | 11%
History of coronary artery bypass grafting    | 6%         | 10%
History of hypertension                       | 49%        | 9%
History of diabetes                           | 18%        | 10%
History of heart failure                      | 37%        | 11%
History of dyslipidemia                       | 39%        | 10%
History of smoking                            | 21%        | 8%
History of peripheral vascular disease        | 4%         | 12%
Family history of coronary artery disease     | 11%        | 14%

BMI: Body Mass Index; LVEDV: Left ventricular (LV) end-diastolic volume; LVEDVI: LV end-diastolic volume indexed to body surface area; LVESV: LV end-systolic volume; LVESVI: LV end-systolic volume indexed to body surface area; LVSV: LV stroke volume; LVEF: LV ejection fraction; LVEDD: LV end-diastolic dimension; LVESD: LV end-systolic dimension; LVM: LV mass; LVMI: LV mass indexed to body surface area; RVEDV: Right ventricular (RV) end-diastolic volume; RVESV: RV end-systolic volume; RVSV: RV stroke volume; RVEF: RV ejection fraction; RVEDVI: RV end-diastolic volume indexed to body surface area; RVESVI: RV end-systolic volume indexed to body surface area", "Table 3 summarizes the pulse sequences employed in patients referred for CMR for HF. 94% of patients underwent cine balanced steady state free precession (bSSFP) imaging whilst 90% underwent LGE imaging, 43% underwent T2 weighted imaging and 22% underwent T1 mapping sequences.

Table 3. CMR Pulse Sequences

Variable Name                         | 1.5T (N = 1,217) | 3T (N = 2,385) | All Scans (N = 3,837)
Balanced steady state free precession | 89.6%            | 100.0%         | 94.2%
T2 weighted imaging                   | 37.6%            | 50.6%          | 42.5%
T1 mapping                            | 4.5%             | 33.5%          | 21.6%
T2 mapping                            | 2.8%             | 23.6%          | 14.7%
T2*                                   | 20.2%            | 33.1%          | 26.6%
Stress perfusion                      | 22.4%            | 16.3%          | 17.6%
Late gadolinium enhancement           | 86.9%            | 96.7%          | 90.4%", "There were 2,385 patients who were scanned on a 3T CMR system (62.1%) vs. 1,217 patients who were scanned at 1.5T (31.7%). Regarding CMR image quality, 88% of scans were ranked as ‘good’ or ‘excellent’ with 12% ranked as ‘poor’ or ‘fair’. These clinical judgements regarding image quality were site-based assessments and were performed qualitatively by the reading physician. There were no preset criteria to distinguish what constituted ‘excellent’, ‘good’, ‘fair’ or ‘poor’ image quality.", "The top 6 indications for CMRs performed in the SCMR Registry were: Cardiomyopathy, etiology not yet diagnosed (NYD) (1,776; 46.2%), CAD/ischemia/viability (1,230, 32.1%), ARVC (423, 11.0%), HCM (146, 3.8%), arrhythmic substrate (136, 3.5%) and amyloidosis (97, 2.5%). 
Overall, in 1,892 (49%) of patients, there was a change between the initial pre-test indication and the post-CMR diagnosis. When broken down by indication, CMR was associated with changes between indication and post-CMR diagnosis in 333 (79%) of those referred for ARVC, 1,114 (63%) of those referred for cardiomyopathy, 66 (49%) of those referred for arrhythmic disease, 66 (45%) of those undergoing CMR for HCM, 25 (26%) undergoing scanning for amyloidosis and 270 (22%) of those undergoing CMR for CAD. Table 4 summarizes the nature of these changes amongst those who had CMRs performed for the top 6 indications. Table 5 stratifies the change rate between the pre-CMR indication and post-CMR diagnosis according to site. The largest contributing American site had a similar change rate (51%) to that of the entire patient population. The second largest contributing site, also from the United States, similarly had a change rate of 50%. In total, there were 3,611 patients who received CMRs at American sites and they had an overall change rate of 50%. The country that contributed the second largest number of patients was China. They contributed 182 patients and reported a change rate of 52%, similar to the overall and American rate. The country that reported the lowest change rate was South Korea (15%).

Table 4. Pre-CMR indications and post-CMR diagnoses

Amyloidosis of the heart (total 97)
- No change in diagnosis: 72 (74%)
- Change in diagnosis: 25 (26%); post-CMR diagnoses: non-ischemic cardiomyopathy 13 (52%), coronary artery disease 4 (16%), left ventricular hypertrophy 3 (12%), no amyloidosis 4 (16%), possible hypertrophic cardiomyopathy 1 (4%)

Arrhythmic disease (total 136)
- No change in diagnosis: 70 (51%)
- Change in diagnosis: 66 (49%); post-CMR diagnoses: non-ischemic cardiomyopathy 44 (67%), coronary artery disease 9 (14%), possible myocarditis 4 (6%), myocarditis 1 (2%), hypertrophic cardiomyopathy 1 (2%), left ventricular non-compaction 1 (2%), coronary artery disease and non-ischemic cardiomyopathy 1 (2%), cardiomyopathy and ventricular septal defect 1 (2%), possible transplant rejection 1 (2%), no arrhythmic substrate 3 (5%)

ARVC (total 423)
- No change in diagnosis: 90 (21%)
- Change in diagnosis: 333 (79%); post-CMR diagnoses: non-ischemic cardiomyopathy 131 (39%), no ARVC 60 (18%), borderline left ventricular function without other significant abnormalities 96 (29%), coronary artery disease 13 (4%), myocarditis 3 (1%), ARVC and cardiomyopathy 3 (< 1%), LV non-compaction 5 (2%), infiltrative cardiomyopathy 2 (< 1%), likely inflammatory cardiomyopathy 1 (< 1%), left ventricular hypertrophy 2 (< 1%), ventricular septal defect 1 (< 1%), right atrial dilatation without ARVC 5 (2%), other diagnoses 11 (3%)

Cardiomyopathy NYD (total 1,776)
- No change in diagnosis: 662 (37%)
- Change in diagnosis: 1,114 (63%); post-CMR diagnoses: coronary artery disease 240 (22%), non-ischemic cardiomyopathy 484 (43%), myocarditis 52 (5%), left ventricular non-compaction 42 (4%), borderline LV function without other significant abnormalities 70 (6%), sarcoidosis 34 (3%), ARVC 9 (< 1%), hypertrophic cardiomyopathy 9 (< 1%), coronary artery disease and non-ischemic cardiomyopathy 16 (1%), infiltrative cardiomyopathy 26 (2%), hemochromatosis 2 (< 1%), other diagnoses 130 (12%)

Coronary artery disease (total 1,230)
- No change in diagnosis: 960 (78%)
- Change in diagnosis: 270 (22%); post-CMR diagnoses: non-ischemic cardiomyopathy 154 (57%), borderline left ventricular function without other significant abnormalities 42 (16%), no coronary artery disease 9 (3%), myocarditis 12 (4%), coronary artery disease and non-ischemic cardiomyopathy 11 (4%), sarcoidosis 4 (1%), left ventricular non-compaction 4 (1%), infiltrative cardiomyopathy 8 (3%), hypertrophic cardiomyopathy 2 (< 1%), hemochromatosis 1 (< 1%), other diagnoses 23 (9%)

Hypertrophic cardiomyopathy (total 146)
- No change in diagnosis: 80 (55%)
- Change in diagnosis: 66 (45%); post-CMR diagnoses: non-ischemic cardiomyopathy 22 (33%), coronary artery disease 11 (17%), borderline left ventricular function without other significant abnormalities 12 (18%), infiltration/amyloid 6 (9%), left ventricular hypertrophy 3 (5%), no hypertrophic cardiomyopathy 5 (8%), left ventricular non-compaction 6 (9%), concomitant hypertrophic cardiomyopathy and coronary artery disease 1 (2%)

Legend. ARVC: Arrhythmogenic right ventricular cardiomyopathy, NYD: Not yet diagnosed

Table 5. Stratification by site according to the rate of change between pre-CMR indication and post-CMR diagnosis

Country       | Site Number | No. of Patients | No change | Change | % with change
United States | 1           | 2,616           | 1,286     | 1,330  | 51%
United States | 8           | 376             | 187       | 189    | 50%
United States | 2           | 262             | 145       | 117    | 45%
United States | 85          | 145             | 67        | 78     | 54%
United States | 36          | 116             | 69        | 47     | 41%
United States | 86          | 56              | 43        | 13     | 23%
United States | 63          | 31              | 17        | 14     | 45%
United States | 25          | 7               | 5         | 2      | 29%
United States | 70          | 1               | 1         | 0      | 0%
United States | 87          | 1               | 1         | 0      | 0%
China         | 18          | 182             | 88        | 94     | 52%
South Korea   | 68          | 41              | 35        | 6      | 15%
India         | 54          | 3               | 1         | 2      | 67%
Total         |             | 3,837           | 1,945     | 1,892  | 49%

Patients scanned at 3T had higher change rates vs. those scanned at 1.5T (53% vs. 44% respectively, p < 0.001). Those scanned on a 3T CMR system had their images rated as ‘excellent’ or ‘good’ 87% of the time versus 90% of scans performed at 1.5T (p < 0.001). Males had a higher rate of change following CMR (52% vs. 48% respectively, p = 0.02). There was a non-significant trend towards a higher rate of change amongst those of normal weight/underweight vs. those who were overweight. Those with BMI < 25 kg/m2 had a change rate of 51% when compared with a change rate of 49% in those with a BMI >= 25 kg/m2 (p = 0.23). Amongst patients undergoing 3T CMR scanning, there was no significant difference regarding change rate between those with BMIs < 25 kg/m2 (55%), 25-29.9 kg/m2 (53%) and > 30 kg/m2 (48%), p = 0.54. Among patients who were LGE positive, CMR was associated with a change between initial indication and post-CMR diagnosis in 44% of the cases vs. 56% of the cases for those who were LGE negative (p < 0.001).", "To minimize the risk of bias caused by data submitted by small contributing sites, we repeated our main analysis after removing sites that contributed < 10 patients. After removal of these patients, 3,825 patients remained from 9 sites. 
Our results were essentially unchanged, with 1,888 patients (49%) having a change between the initial indication and post-CMR diagnosis and 1,937 patients (51%) not having such a change.", "Supplemental Table 1 compares patients ultimately included in the cohort with those excluded. Of note, there was no significant difference in median age, height, weight or BMI between the two groups (p > 0.05). Patients included in the cohort were slightly more likely to be female (67% vs. 64%, p = 0.01). There were also no significant differences in CMR characteristics between those patients who were included and those excluded. However, patients included in the study had significantly higher rates of major cardiovascular risk factors when compared to those who were excluded. The excluded patients also had significantly higher percentages of missing values for these clinical parameters.", "This study must be interpreted in the context of its limitations. First, data granularity in the registry was limited. For example, we did not have data on treatment and downstream clinical outcomes. We recognize that a change between the pre-CMR indication and post-CMR diagnosis is only one aspect of assessing a change in management, and we lack direct data on downstream treatment decisions and outcomes. Furthermore, CMR does have value in confirming diagnoses; evaluating for a change between the pre-test indication and the post-test diagnosis is therefore an imperfect surrogate in the overall assessment of whether a change in management occurred. With that said, the enhanced ability to detect new or alternate myocardial disease processes is valuable and uniquely differentiates CMR from many other imaging modalities. This has been recognized by other groups who used similar surrogates in their work evaluating the clinical impact of CMR [21]. 
In another example of limited data granularity, we did not have access to the raw images and relied on data input from CMR reports from the participating sites. Our work highlights the importance of improving data capture and granularity in the SCMR Registry to help facilitate future research. Second, there were substantial missing data for a number of parameters (summarized in Tables 2 and 3). Future SCMR Registry-related quality improvement processes should focus on increasing site compliance with data entry in order to produce a more complete and accurate registry. Third, although we used the global SCMR Registry, the vast majority of the data were from the United States, with most of that data derived from one American site. However, it is important to note that, for our study, the one dominant site had a very similar change rate (51%) when compared to the entire cohort (49%). Nonetheless, we recognize that using data primarily from one site may affect generalizability, and future efforts should focus on diversifying data input from other sites and countries. Fourth, some parameters were qualitative and relied solely on site assessments (such as the evaluation of image quality). Future SCMR Registry efforts should focus on harmonizing or standardizing such qualitative data across sites. Finally, due to the design of our study, it was not possible to blind the resident who evaluated whether or not a change occurred to the initial pre-CMR indication. This may have led to ascertainment bias. To minimize the impact of this potential bias, a second reviewer, a Fellow of the SCMR with > 10 years of experience in clinical cardiology, CMR clinical research and the interpretation of large datasets, reviewed and confirmed all the findings after the initial review by the resident.", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1" ]
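The group comparisons reported above (for example, the change rate at 3T vs. 1.5T) reduce to a test on a 2x2 contingency table. The paper used Fisher's exact test; the sketch below instead uses the closely related two-proportion z-test so that it needs only the Python standard library, and the cell counts are reconstructed from the reported percentages, so they are approximate:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (normal approximation).
    For samples this large its p-value is very close to that of
    Fisher's exact test, which the paper actually used."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # For one degree of freedom, the chi-square tail probability
    # equals the two-sided normal tail: p = erfc(|z| / sqrt(2)).
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Approximate counts from the reported rates:
# 3T: 53% of 2,385 scans changed; 1.5T: 44% of 1,217 scans changed.
z, p = two_proportion_z(round(0.53 * 2385), 2385, round(0.44 * 1217), 1217)
print(f"z = {z:.2f}, p = {p:.1e}")
```

With these reconstructed counts the difference is highly significant, consistent with the reported p < 0.001.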
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data source", "Identification of the patient population and cohort creation", "Determination of whether CMR was associated with changes in downstream management", "Statistical analysis", "Results", "Baseline patient characteristics", "Sequences utilized in the CMRs", "Field strength and image quality", "Initial indications and diagnosis following CMR scanning", "Sensitivity analysis", "Comparison of cohort patients with excluded patients", "Discussion", "Limitations", "Conclusion", "Electronic supplementary material", "" ]
[ "Heart failure (HF) is a global public health problem affecting at least 26 million people worldwide and is increasing in prevalence [1–3]. Cardiovascular magnetic resonance (CMR) imaging is an important diagnostic test in the assessment of patients with HF [4–18]. However, the demographics and clinical characteristics of those undergoing CMR for the evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. Much of the currently available evidence in this area is built upon small single-center studies with widely varying results and conclusions. For example, CMR has been reported to lead to a change in patient management in anywhere between 16% and 65% of studies [19–21]. One way in which CMR can lead to a change in patient management is by providing valuable diagnostic information that changes the understanding of the etiology of HF post-test when compared to pre-test [19–21]. This diagnostic information can help inform downstream treatment decisions and impact outcomes. However, the extent to which CMR can impact HF management in this manner is currently unclear. 
A systematic, multi-center evaluation of the impact of CMR on patient management is required to shed further light on these issues.\nThe objectives of this paper are to utilize the Society for Cardiovascular Magnetic Resonance (SCMR) Registry in order to:\n\nDescribe the patient demographics, clinical and CMR scanning characteristics of patients undergoing CMR for HF; and\nDetermine the extent to which CMR is associated with changes in downstream patient management by comparing the pre-CMR indication with the post-CMR diagnostic information.", " Data source The data source was the SCMR Registry. This Registry was created by the SCMR in 2013 and the SCMR has continued to support it since its creation. For this project, data were abstracted from Jan 1, 2013 to Dec 31, 2019. The overarching vision of the SCMR was to provide a central platform to demonstrate the impact of CMR on patient outcomes and clinical care. One of its stated goals is the determination of the downstream impact of CMR on diagnostic clinical decision making [22]. At the time that this project was performed, the SCMR Registry contained demographic and CMR data from 13 centers across the world. Site participation was invited and advertised by the SCMR at multiple forums, including at the SCMR Annual Scientific Sessions. Data for the registry were collected retrospectively by individual sites on consecutive patients who underwent clinically indicated CMR scans and then entered into the SCMR Registry by those sites.", "The data source was the SCMR Registry. This Registry was created by the SCMR in 2013 and the SCMR has continued to support it since its creation. For this project, data were abstracted from Jan 1, 2013 to Dec 31, 2019. The overarching vision of the SCMR was to provide a central platform to demonstrate the impact of CMR on patient outcomes and clinical care. One of its stated goals is the determination of the downstream impact of CMR on diagnostic clinical decision making [22]. At the time that this project was performed, the SCMR Registry contained demographic and CMR data from 13 centers across the world. Site participation was invited and advertised by the SCMR at multiple forums, including at the SCMR Annual Scientific Sessions. Data for the registry were collected retrospectively by individual sites on consecutive patients who underwent clinically indicated CMR scans and then entered into the SCMR Registry by those sites.", "CMRs were identified in the SCMR Registry if they were performed for an HF-related pre-test indication. 
Specifically, patients with left ventricular ejection fraction (LVEF) < 55% with the following pre-CMR suspected indications were included: amyloidosis, coronary artery disease (CAD) and categorized as having LV dysfunction, arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVC), cardiomyopathy, arrhythmic disease, Friedreich's ataxia, hypertrophic cardiomyopathy (HCM) or hemochromatosis. Derivation of the above-mentioned algorithm for patient selection involved the following steps. First, a draft list of indications was selected a priori by the lead author (IR) and a co-author who was largely responsible for establishing the SCMR Registry and who was intimately familiar with its structure and data (RK). This list was based on the principle of balancing the detection of the largest number of patients who received a CMR for the indication of HF whilst minimizing inclusion of patients who were not scanned for HF-related indications. It is for this reason that we included only those patients with reduced LVEF (LVEF < 55%) in the cohort. While this list excluded those with HF with preserved ejection fraction, we believed that removing the LVEF inclusion condition would contaminate our data by including many patients who were not scanned for HF indications. Next, as this is an SCMR-initiated paper, the proposal including the criteria for patient selection was reviewed and approved by the SCMR's Science Committee. Following this initial approval, the dataset/cohort for the project was created from the larger SCMR Registry based on the approved patient selection criteria. Subsequently, the data and various iterations of the paper were approved by the SCMR's Publications Committee. 
The final paper, prior to submission to this journal, was approved by the SCMR's Board of Trustees.", "We determined that a change in management was associated with the CMR if there was a change between the initial indication for the CMR and the subsequent diagnosis after the CMR was completed. The 'indication' field was an existing variable in the SCMR Registry. We then reviewed, for each subject, the text field where the final conclusions from the CMR scan were inserted. Using our clinical judgement, we determined if there was a clinically relevant change from the original pre-CMR indication, in a method similar to that used in other studies [21]. The initial screen was done by a senior cardiovascular disease resident (MH). In terms of training, this resident had completed a cardiology residency program (i.e. was a board-certified cardiologist in Canada) and was undergoing a fellowship in advanced cardiac imaging at the time of the data review. As such, the resident was familiar with the role of CMR in the management of cardiovascular disease. All results were subsequently reviewed by a level 3 trained SCMR cardiologist and Fellow of the SCMR (IR).\n Statistical analysis Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher's exact test was used when comparing categorical variables. Specifically, this test was used when comparing the change rate between patients undergoing CMR at 1.5 vs. 3T, between males and females, between patients of different body mass index (BMI) categories, and between patients with and without late gadolinium enhancement (LGE). The Wilcoxon rank sum test was used to compare continuous variables. P values of < 0.05 were considered statistically significant.", "Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher's exact test was used when comparing categorical variables. Specifically, this test was used when comparing the change rate between patients undergoing CMR at 1.5 vs. 3T, between males and females, between patients of different body mass index (BMI) categories, and between patients with and without late gadolinium enhancement (LGE). The Wilcoxon rank sum test was used to compare continuous variables. P values of < 0.05 were considered statistically significant.", "There were 6,654 patients in the registry who underwent CMR for the indication of HF during our study period. However, 2,817 patients were excluded because they did not have data entered regarding their pre-CMR indication and/or post-CMR diagnosis, which prevented us from evaluating whether receipt of CMR was associated with a change in management. Thus, 3,837 patients ultimately remained in our cohort. Of these, 94% of the CMRs were performed in the United States, with 68% occurring at one site (see Table 1). Other countries with significant contributions to the SCMR Registry included China (n = 182, 4.7%) and South Korea (n = 41, 1.1%). India contributed 3 cases.\n\nTable 1 Distribution of sites contributing to the Society for Cardiovascular Magnetic Resonance (SCMR) Registry\nCountry | Site | No. of patients | Percentage\nUnited States | 87 | 1 | < 0.1%\nUnited States | 70 | 1 | < 0.1%\nIndia | 54 | 3 | 0.1%\nUnited States | 25 | 7 | 0.2%\nUnited States | 63 | 31 | 0.8%\nSouth Korea | 68 | 41 | 1.1%\nUnited States | 86 | 56 | 1.5%\nUnited States | 36 | 116 | 3.0%\nUnited States | 85 | 145 | 3.8%\nChina | 18 | 182 | 4.7%\nUnited States | 2 | 262 | 6.8%\nUnited States | 8 | 376 | 9.8%\nUnited States | 1 | 2,616 | 68.2%\nTotal | | 3,837 |", "Baseline patient characteristics are summarized in Table 2. Median age was 59.3 years (IQR 47.1, 68.3 years) and median BMI was 27.1 kg/m2 (IQR 23.8, 31.5 kg/m2). Women constituted 67% of the patients. In terms of major cardiovascular risk factors, 49% of patients had hypertension, 18% had diabetes, 21% were active smokers and 11% had a family history of premature CAD. Overall, 36% of patients had evidence of overt CAD (prior myocardial infarction, percutaneous coronary interventions and/or coronary artery disease). With regard to cardiac function, the median LVEF was 41% (IQR 29%, 50%) and the median right ventricular ejection fraction (RVEF) was 48% (IQR 39%, 54%). 3,540 patients had data entered regarding LGE positivity (positive or negative). Overall, 54% of patients were LGE positive. Patterns of LGE (for example ischemic vs. 
non-ischemic) as well as LGE quantity were not available in the data.\n\nTable 2 Characteristics of the patient population\nVariable | Mean ± SD | Median | Q1 | Q3 | % missing values\nAge, years | 57.0 ± 16.1 | 59.3 | 47.1 | 68.3 | 0.3%\nHeight, meters | 1.72 ± 0.11 | 1.73 | 1.65 | 1.80 | 1.1%\nWeight, kilograms | 83.8 ± 21.9 | 81.5 | 68.0 | 96.5 | 0.4%\nBMI, kg/m2 | 28.1 ± 6.5 | 27.1 | 23.8 | 31.5 | 1.1%\nFemale sex | 67% | 0% missing\nCardiac function | Median | Q1 | Q3 | % missing values\nLVEDV, mL | 195 | 156 | 250 | 15%\nLVESV, mL | 112 | 83 | 169 | 16%\nLVEF, % | 41 | 29 | 50 | 16%\nLVM, g | 128 | 100 | 161 | 25%\nRVEDV, mL | 147 | 116 | 185 | 22%\nRVESV, mL | 77 | 57 | 106 | 22%\nLVEDVI, mL/m2 | 100 | 82 | 127 | 16%\nLVESVI, mL/m2 | 57 | 43 | 86 | 16%\nLVSV, mL | 74 | 58 | 91 | 16%\nLVEDD, mm | 60 | 54 | 67 | 13%\nLVESD, mm | 47 | 40 | 57 | 14%\nLVMI, g/m2 | 65 | 53 | 81 | 25%\nRVEDVI, mL/m2 | 76 | 61 | 92 | 23%\nRVESVI, mL/m2 | 39 | 30 | 54 | 23%\nRVSV, mL | 67 | 51 | 84 | 22%\nRVEF, % | 48 | 39 | 54 | 22%\nCardiovascular history | Percentage | % missing values\nHistory of myocardial infarction | 18% | 11%\nHistory of percutaneous coronary intervention | 12% | 11%\nHistory of coronary artery bypass grafting | 6% | 10%\nHistory of hypertension | 49% | 9%\nHistory of diabetes | 18% | 10%\nHistory of heart failure | 37% | 11%\nHistory of dyslipidemia | 39% | 10%\nHistory of smoking | 21% | 8%\nHistory of peripheral vascular disease | 4% | 12%\nFamily history of coronary artery disease | 11% | 14%\nBMI: Body Mass Index, LVEDV: Left ventricular (LV) end-diastolic volume, LVEDVI: LV end-diastolic volume indexed to body surface area, LVESV: LV end-systolic volume, LVESVI: LV end-systolic volume indexed to body surface area, LVSV: LV stroke volume, LVEF: LV ejection fraction, LVEDD: LV end-diastolic dimension, LVESD: LV end-systolic dimension, LVM: LV mass, LVMI: LV mass indexed to body surface area, RVEDV: Right ventricular (RV) end-diastolic volume, RVESV: RV end-systolic volume, RVSV: RV stroke volume, RVEF: RV ejection fraction, RVEDVI: RV end-diastolic volume indexed to body surface area, RVESVI: RV end-systolic volume indexed to body surface area", "Table 3 summarizes the pulse sequences employed in patients referred for CMR for HF. 94% of patients underwent cine balanced steady state free precession (bSSFP) imaging, whilst 90% underwent LGE imaging, 43% underwent T2-weighted imaging and 22% underwent T1 mapping sequences.\n\nTable 3 CMR pulse sequences\nSequence | 1.5T (N = 1,217) | 3T (N = 2,385) | All scans (N = 3,837)\nBalanced steady state free precession | 89.6% | 100.0% | 94.2%\nT2-weighted imaging | 37.6% | 50.6% | 42.5%\nT1 mapping | 4.5% | 33.5% | 21.6%\nT2 mapping | 2.8% | 23.6% | 14.7%\nT2* | 20.2% | 33.1% | 26.6%\nStress perfusion | 22.4% | 16.3% | 17.6%\nLate gadolinium enhancement | 86.9% | 96.7% | 90.4%", "There were 2,385 patients who were scanned on a 3T CMR system (62.1%) vs. 1,217 patients who were scanned at 1.5T (31.7%). Regarding CMR image quality, 88% of scans were ranked as 'good' or 'excellent', with 12% ranked as 'poor' or 'fair'. These judgements regarding image quality were site-based assessments performed qualitatively by the reading physician. There were no preset criteria to distinguish what constituted 'excellent', 'good', 'fair' or 'poor' image quality.", "The top 6 indications for CMRs performed in the SCMR Registry were: Cardiomyopathy, etiology not yet diagnosed (NYD) (1,776; 46.2%), CAD/ischemia/viability (1,230; 32.1%), ARVC (423; 11.0%), HCM (146; 3.8%), arrhythmic substrate (136; 3.5%) and amyloidosis (97; 2.5%). 
Overall, 1,892 patients (49%) had a change between the initial pre-test indication and the post-CMR diagnosis. When broken down by indication, CMR was associated with a change between indication and post-CMR diagnosis in 333 (79%) of those referred for ARVC, 1,114 (63%) of those referred for cardiomyopathy, 66 (49%) of those referred for arrhythmic disease, 66 (45%) of those undergoing CMR for HCM, 25 (26%) of those undergoing scanning for amyloidosis and 270 (22%) of those undergoing CMR for CAD. Table 4 summarizes the nature of these changes amongst those who had CMRs performed for the top 6 indications. Table 5 stratifies the change rate between the pre-CMR indication and post-CMR diagnosis according to site. The largest contributing American site had a similar change rate (51%) to that of the entire patient population. The second largest contributing site, also from the United States, similarly had a change rate of 50%. In total, 3,611 patients received CMRs at American sites, with an overall change rate of 50%. The country that contributed the second largest number of patients was China. They contributed 182 patients and reported a change rate of 52%, similar to the overall and American rate. 
The country that reported the lowest change rate was South Korea (15%).\n\nTable 4 Pre-CMR indications and post-CMR diagnoses\nAmyloidosis of the heart (total 97): no change in diagnosis 72 (74%); change in diagnosis 25 (26%), comprising non-ischemic cardiomyopathy 13 (52%), coronary artery disease 4 (16%), left ventricular hypertrophy 3 (12%), no amyloidosis 4 (16%) and possible hypertrophic cardiomyopathy 1 (4%)\nArrhythmic disease (total 136): no change in diagnosis 70 (51%); change in diagnosis 66 (49%), comprising non-ischemic cardiomyopathy 44 (67%), coronary artery disease 9 (14%), possible myocarditis 4 (6%), myocarditis 1 (2%), hypertrophic cardiomyopathy 1 (2%), left ventricular non-compaction 1 (2%), coronary artery disease and non-ischemic cardiomyopathy 1 (2%), cardiomyopathy and ventricular septal defect 1 (2%), possible transplant rejection 1 (2%) and no arrhythmic substrate 3 (5%)\nARVC (total 423): no change in diagnosis 90 (21%); change in diagnosis 333 (79%), comprising non-ischemic cardiomyopathy 131 (39%), no ARVC 60 (18%), borderline left ventricular function without other significant abnormalities 96 (29%), coronary artery disease 13 (4%), myocarditis 3 (1%), ARVC and cardiomyopathy 3 (< 1%), LV non-compaction 5 (2%), infiltrative cardiomyopathy 2 (< 1%), likely inflammatory cardiomyopathy 1 (< 1%), left ventricular hypertrophy 2 (< 1%), ventricular septal defect 1 (< 1%), right atrial dilatation without ARVC 5 (2%) and other diagnoses 11 (3%)\nCardiomyopathy NYD (total 1,776): no change in diagnosis 662 (37%); change in diagnosis 1,114 (63%), comprising coronary artery disease 240 (22%), non-ischemic cardiomyopathy 484 (43%), myocarditis 52 (5%), left ventricular non-compaction 42 (4%), borderline LV function without other significant abnormalities 70 (6%), sarcoidosis 34 (3%), ARVC 9 (< 1%), hypertrophic cardiomyopathy 9 (< 1%), coronary artery disease and non-ischemic cardiomyopathy 16 (1%), infiltrative cardiomyopathy 26 (2%), hemochromatosis 2 (< 1%) and other diagnoses 130 (12%)\nCoronary artery disease (total 1,230): no change in diagnosis 960 (78%); change in diagnosis 270 (22%), comprising non-ischemic cardiomyopathy 154 (57%), borderline left ventricular function without other significant abnormalities 42 (16%), no coronary artery disease 9 (3%), myocarditis 12 (4%), coronary artery disease and non-ischemic cardiomyopathy 11 (4%), sarcoidosis 4 (1%), left ventricular non-compaction 4 (1%), infiltrative cardiomyopathy 8 (3%), hypertrophic cardiomyopathy 2 (< 1%), hemochromatosis 1 (< 1%) and other diagnoses 23 (9%)\nHypertrophic cardiomyopathy (total 146): no change in diagnosis 80 (55%); change in diagnosis 66 (45%), comprising non-ischemic cardiomyopathy 22 (33%), coronary artery disease 11 (17%), borderline left ventricular function without other significant abnormalities 12 (18%), infiltration/amyloid 6 (9%), left ventricular hypertrophy 3 (5%), no hypertrophic cardiomyopathy 5 (8%), left ventricular non-compaction 6 (9%) and concomitant hypertrophic cardiomyopathy and coronary artery disease 1 (2%)\nLegend. ARVC: Arrhythmogenic right ventricular cardiomyopathy, NYD: Not yet diagnosed\n\nTable 5 Rate of change between pre-CMR indication and post-CMR diagnosis, stratified by site\nCountry | Site | No. of patients | No change | Change | % with change\nUnited States | 1 | 2,616 | 1,286 | 1,330 | 51%\nUnited States | 8 | 376 | 187 | 189 | 50%\nUnited States | 2 | 262 | 145 | 117 | 45%\nUnited States | 85 | 145 | 67 | 78 | 54%\nUnited States | 36 | 116 | 69 | 47 | 41%\nUnited States | 86 | 56 | 43 | 13 | 23%\nUnited States | 63 | 31 | 17 | 14 | 45%\nUnited States | 25 | 7 | 5 | 2 | 29%\nUnited States | 70 | 1 | 1 | 0 | 0%\nUnited States | 87 | 1 | 1 | 0 | 0%\nChina | 18 | 182 | 88 | 94 | 52%\nSouth Korea | 68 | 41 | 35 | 6 | 15%\nIndia | 54 | 3 | 1 | 2 | 67%\nTotal | | 3,837 | 1,945 | 1,892 | 49%\nPatients scanned at 3T had a higher change rate than those scanned at 1.5T (53% vs. 44%, p < 0.001). Images from 3T scans were rated 'excellent' or 'good' 87% of the time, versus 90% for scans performed at 1.5T (p < 0.001). Males had a higher rate of change following CMR than females (52% vs. 48%, p = 0.02). There was a non-significant trend towards a higher rate of change amongst those of normal weight or underweight vs. those who were overweight: those with BMI < 25 kg/m2 had a change rate of 51%, compared with 49% in those with BMI ≥ 25 kg/m2 (p = 0.23). Amongst patients undergoing 3T CMR scanning, there was no significant difference in change rate between those with BMI < 25 kg/m2 (55%), 25-29.9 kg/m2 (53%) and > 30 kg/m2 (48%) (p = 0.54). Among patients who were LGE positive, CMR was associated with a change between initial indication and post-CMR diagnosis in 44% of cases, vs. 56% for those who were LGE negative (p < 0.001).", "To minimize the risk of bias caused by data submitted by small contributing sites, we repeated our main analysis after removing sites that contributed < 10 patients. After removal of these patients, 3,825 patients remained from 9 sites. 
Our results were essentially unchanged, with 1,888 patients (49%) having a change between the initial indication and post-CMR diagnosis and 1,937 patients (51%) not having such a change.", "Supplemental Table 1 compares patients ultimately included in the cohort with those excluded. Of note, there was no significant difference in median age, height, weight or BMI between the two groups (p > 0.05). Patients included in the cohort were slightly more likely to be female (67% vs. 64%, p = 0.01). There were also no significant differences in CMR characteristics between those patients who were included and those excluded. However, patients included in the study had significantly higher rates of major cardiovascular risk factors when compared to those who were excluded. The excluded patients also had significantly higher percentages of missing values for these clinical parameters.", "Our analysis of 3,837 consecutive patients from the SCMR Registry referred for CMR for the evaluation of HF reveals a median age of approximately 59 years. The majority of CMR scans were performed on women, and almost two-thirds of the patients were scanned on 3T magnets. CMR was associated with a change between the initial indication and the post-CMR diagnosis in nearly half of patients. Patients scanned on 3T were significantly more likely to have a subsequent change than those scanned on 1.5T, despite 1.5T scans having slightly better-rated image quality.\nDespite CMR being considered the gold standard diagnostic test for patients with HF, there is little available evidence that its receipt leads to changes in downstream patient management. In a single-center study, White et al. examined 82 patients undergoing CMR for the diagnosis of arrhythmic substrate [21]. They reported that 50% of patients had a change in management, as identified by a change in diagnosis after CMR scanning. In a larger single-center study, Abbasi et al. 
studied 150 subjects with LVEF < 50%. They reported a downstream change in management in 52% of subjects after the CMR was performed [19]. Our large multi-center study of > 3,000 patients from the SCMR Registry found a similar rate of change in management after CMR scanning compared to the two aforementioned studies. These results are clinically important as they confirm the impact of CMR on the management of a large number of HF patients across multiple centers and countries. Further, our finding that CMR was more likely to impact management after scanning on a 3T magnet is interesting. Although our study was not designed to evaluate the reason underlying this finding, we speculate that it may be related to improved diagnostic quality of some tissue characterization sequences such as LGE, T2-weighted imaging and T1 mapping, despite the fact that our data do not show overall higher image quality for 3T CMR. Since our registry recorded data for overall CMR scan image quality but not for individual sequences, we are unable to test this hypothesis. It is also important to note that the main contributing US site exclusively uses 3T CMR systems. This may have been an important contributor to the differential impact that we observed for patients scanned at 1.5 vs. 3T in terms of diagnostic change.\nMost of the patient characteristics that we reported in our cohort were similar to previously published work from other HF registries. For example, the Swedish Heart Failure Registry reported a hypertension prevalence of 52% (vs. 49% in our cohort) and a diabetes prevalence of approximately 20% (vs. 18% in our cohort) [23]. One important difference is that the mean age in the Swedish registry was approximately 77 years versus our mean of approximately 57 years. One potential explanation for this discrepancy is that all the patients in our cohort, by definition, must have received a CMR. 
This is in comparison to the Swedish registry where only a small proportion of the patients underwent CMR scanning. There is the potential for selection bias in our study as physicians may be more likely to order CMRs on younger, healthier patients who may tolerate the CMR diagnostic test better and in whom there may be more therapeutic options. Another important finding of ours that differs from prior research is that approximately 2/3 of the patients in our cohort were female. Prior research has reported that the prevalence of HF is similar in men and women, raising the question of why our cohort contained such a high proportion of women [24–26]. A possible contributor to this is that women with HF have been shown to ascribe more positive meaning to their illness [24, 27–29], perhaps making them more likely to agree to undergo more extensive diagnostic evaluation, including with a CMR. Indeed, there are limited data to suggest that women are more likely to receive advanced imaging tests for the evaluation of possible CAD [30]. If this were the case for the SCMR Registry, it is possible that this attribute led to a selection bias that ultimately culminated in a significantly higher percentage of women being included.\nAlthough most sites had similar rates of changes between the initial indication and the post-CMR diagnosis, some sites had significantly lower rates. The lowest rate of change occurred in the South Korean contributing site. Given the overall small numbers of patients scanned at this site (n = 41), it is difficult to draw firm conclusions about possible reasons for this low rate. The pre-CMR indications were different at this site when compared to the overall cohort, with only approximately 56% being performed for cardiomyopathy NYD or CAD/ischemia (vs. approximately 78% of the total CMRs) and 22% performed for arrhythmic substrate (vs. approximately 4% of the total). 
Further, unlike the overall cohort, the South Korean site scanned patients predominantly at 1.5T (71%), which was associated with lower rates of diagnostic change post CMR in our study.\nIn a comparison of patients who were ultimately included in our cohort with those who were excluded, the former had a significantly higher prevalence of most major cardiovascular risk factors when compared to the latter despite no significant differences in terms of age, weight, BMI or cardiac function parameters. However, it is important to note that those patients excluded from our study also had a significantly higher percentage of missing values for cardiovascular history/risk factor parameters. Indeed, the reason those patients were excluded was because they had insufficient data for us to make a determination of whether or not there was a change between the initial pre-CMR indication and the post-CMR diagnosis.", "This study must be interpreted in the context of its limitations. First, there was limited data granularity in the registry. For example, we did not have data on treatment and downstream clinical outcomes. We recognize that a change between the pre-CMR indication and post CMR diagnosis is only one aspect in the assessment of a change in management and we lack direct data on downstream treatment decisions and outcomes. Furthermore, CMR does have value in confirming diagnoses and therefore evaluating for a change between the pre-test indication and the post-test diagnosis is an imperfect surrogate in the overall assessment of whether a change in management occurred. With that said, the enhanced ability to detect new or alternate myocardial disease processes is valuable and uniquely differentiates CMR from many other imaging modalities. This has been recognized by other groups who used similar surrogates in their work evaluating the clinical impact of CMR [21]. 
In another example of our lack of data granularity, we did not have access to the raw images and relied on data input from CMR reports from the participating sites. Our work highlights the importance of improving data capture/granularity in the SCMR Registry to help facilitate future research. Second, there was a significant amount of missing data for a number of parameters (summarized in Tables 2 and 3). Future SCMR Registry related quality improvement processes should focus on increasing site compliance with data entry in order to produce a more complete and accurate registry. Third, although we used the global SCMR Registry, the vast majority of the data were from the United States, with most of that data derived from one American site. However, it is important to note that for our study, the one dominant site had a very similar change rate (51%) when compared to the entire cohort (49%). Nonetheless, we recognize that using data primarily from one site may affect generalizability, and future efforts should focus on diversifying data input from other sites and countries. Fourth, some parameters were qualitative and were reliant solely on site assessments (such as the evaluation of image quality). Future SCMR Registry efforts should focus on harmonizing or standardizing such qualitative data across sites. Finally, due to the design of our study, the resident who evaluated whether or not a change occurred could not be blinded to the initial pre-CMR indication. This may have led to ascertainment bias. 
In order to minimize the impact of this potential bias, a second reviewer, a Fellow of the SCMR with > 10 years of experience in clinical cardiology, CMR clinical research and the interpretation of large datasets, reviewed and confirmed all the findings after initial review by the resident.", "In our study of 3,837 SCMR Registry patients undergoing CMR for the evaluation of HF, we found that CMR was associated with a change between the pre-test indication and post-CMR diagnosis in 49% of cases, suggesting a potential impact on patient management. Changes occurred more commonly for patients scanned at 3T.", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1" ]
[ null, null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", null, "conclusion", "supplementary-material", null ]
[ "Heart failure", "Registry", "Real World", "Impact of CMR" ]
Background: Heart failure (HF) is a global public health problem affecting at least 26 million people worldwide and is increasing in prevalence [1–3]. Cardiovascular magnetic resonance (CMR) imaging is an important diagnostic test in the assessment of patients with HF [4–18]. However, the demographics and clinical characteristics of those undergoing CMR for the evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. Much of the currently available evidence in this area is built upon small single-center studies with widely varying results and conclusions. For example, CMR has been reported to lead to a change in patient management in anywhere between 16% and 65% of studies [19–21]. One way in which CMR can lead to a change in patient management is by providing valuable diagnostic information that changes the understanding of the etiology of HF post-test, when compared to pre-test [19–21]. This diagnostic information can help inform downstream treatment decisions and impact outcomes. However, the extent to which CMR can impact HF management in this manner is currently unclear. A systematic, multi-center evaluation of the impact of CMR on patient management is required to shed further light on these issues. The objectives of this paper are to utilize the Society for Cardiovascular Magnetic Resonance (SCMR) Registry in order to: (1) describe the patient demographics, clinical characteristics and CMR scanning characteristics of patients undergoing CMR for HF; and (2) determine the extent to which CMR is associated with changes in downstream patient management by comparing the pre-CMR indication with the post-CMR diagnostic information. 
Methods: Data source: The data source was the SCMR Registry. This Registry was created by the SCMR in 2013, and the SCMR has continued to support it since its creation. For this project, data were abstracted from Jan 1, 2013 to Dec 31, 2019. The overarching vision of the SCMR was to provide a central platform to demonstrate the impact of CMR on patient outcomes and clinical care. One of its stated goals is the determination of the downstream impact of CMR on diagnostic clinical decision making [22]. At the time that this project was performed, the SCMR Registry contained demographic and CMR data from 13 centers across the world. Site participation was invited and advertised by the SCMR at multiple forums, including at the SCMR Annual Scientific Sessions. Data for the registry were collected by individual sites retrospectively on consecutive patients who underwent clinically indicated CMR scans and then entered into the SCMR Registry by those sites. Identification of the patient population and cohort creation: CMRs were identified in the SCMR Registry if they were performed for a HF-related pre-test indication. Specifically, patients with left ventricular ejection fraction (LVEF) < 55% with the following pre-CMR suspected indications were included: amyloidosis, coronary artery disease (CAD) AND categorized as having LV dysfunction, arrhythmogenic right ventricular dysplasia (ARVC)/cardiomyopathy, cardiomyopathy, arrhythmic disease, Friedreich’s ataxia, hypertrophic cardiomyopathy (HCM) or hemochromatosis. Derivation of the above-mentioned algorithm for patient selection involved the following steps: First, a draft list of indications was selected a priori by the lead author (IR) and a co-author who was largely responsible for establishing the SCMR Registry and who was intimately familiar with its structure and data (RK). 
This list was based on the principle of balancing the detection of the largest number of patients who received a CMR for the indication of HF whilst minimizing inclusion of patients who were not scanned for HF related indications. It is for this reason that we included only those patients with reduced LVEF (LVEF < 55%) in the cohort. While this list excluded those with HF with preserved ejection fraction, we believed that removing the LVEF inclusion condition would contaminate our data by including many patients who were not scanned for HF indications. Next, as this is an SCMR-initiated paper, the proposal including the criteria for patient selection was reviewed and approved by the SCMR’s Science Committee. Following this initial approval, the dataset/cohort for the project was created from the larger SCMR Registry based on the approved patient selection criteria. Subsequently, the data and various iterations of the paper were approved by the SCMR’s Publications Committee. The final paper, prior to submission to this journal, was approved by the SCMR’s Board of Trustees. Determination of whether CMR was associated with changes in downstream management: We determined that a change in management was associated with the CMR if there was a change between the initial indication for the CMR and the subsequent diagnosis after the CMR was completed. The ‘indication’ field was an existing variable in the SCMR Registry. We then reviewed, for each subject, the text field where the final conclusions from the CMR scan were inserted. Using our clinical judgement, we determined if there was a clinically relevant change from the original pre-CMR indication, using a method similar to that used in other studies [21]. The initial screen was done by a senior cardiovascular disease resident (MH). In terms of training, this resident had completed a cardiology residency program (i.e. 
was a board-certified cardiologist in Canada) and was undergoing a fellowship in advanced cardiac imaging at the time of the data review. As such, the resident was familiar with the role of CMR in the management of cardiovascular disease. All results were subsequently reviewed by a level 3 trained SCMR cardiologist and Fellow of the SCMR (IR). Statistical analysis: Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher’s exact test was used when comparing categorical variables. Specifically, this test was used when comparing the change rate between patients undergoing CMR at 1.5 vs. 3T, between males and females, between patients of different body mass index (BMI) categories, and between patients with and without late gadolinium enhancement (LGE). The Wilcoxon rank sum test was used to compare continuous variables. P values of < 0.05 were considered statistically significant. Results: There were 6,654 patients in the registry who underwent CMR for the indication of HF during our study period. However, 2,817 patients were excluded because they did not have data entered regarding their pre-CMR indication and/or post-CMR diagnosis to allow us to evaluate whether or not receipt of CMR was associated with a change in management. Thus, 3,837 patients ultimately remained in our cohort. Of these, 94% of the CMRs were performed in the United States, with 68% occurring at one site (see Table 1). Other countries with significant contributions to the SCMR Registry include China (n = 182, 4.7%) and South Korea (n = 41, 1.1%). India contributed 3 cases.

Table 1. Distribution of sites contributing to the Society for Cardiovascular Magnetic Resonance (SCMR) Registry
Country | Site number | No. of patients | Percentage
United States | 87 | 1 | < 0.1%
United States | 70 | 1 | < 0.1%
India | 54 | 3 | 0.1%
United States | 25 | 7 | 0.2%
United States | 63 | 31 | 0.8%
South Korea | 68 | 41 | 1.1%
United States | 86 | 56 | 1.5%
United States | 36 | 116 | 3.0%
United States | 85 | 145 | 3.8%
China | 18 | 182 | 4.7%
United States | 2 | 262 | 6.8%
United States | 8 | 376 | 9.8%
United States | 1 | 2,616 | 68.2%
Total | | 3,837 | 100%

Baseline patient characteristics: Baseline patient characteristics are summarized in Table 2. Median age of subjects was 59.3 years (IQR 47.1, 68.3 years) and the median BMI was 27.1 kg/m2 (IQR 23.8, 31.5 kg/m2). Women constituted 67% of the patients. In terms of major cardiovascular risk factors, 49% of patients had hypertension, 18% had diabetes, 21% were active smokers and 11% had a family history of premature CAD. 
There were 36% of patients who had evidence of overt CAD (prior myocardial infarction, percutaneous coronary intervention and/or coronary artery bypass grafting). With regards to cardiac function, the median LVEF was 41% (IQR 29%, 50%) and the median right ventricular ejection fraction (RVEF) was 48% (IQR 39%, 54%). 3,540 patients had data entered regarding LGE positivity (positive or negative). Overall, 54% of patients were LGE positive. Patterns of LGE (for example, ischemic vs. non-ischemic) as well as LGE quantity were not available in the data.

Table 2. Characteristics of the patient population
Demographics | Mean ± SD | Median | Q1 | Q3 | % missing values
Age, years | 57.0 ± 16.1 | 59.3 | 47.1 | 68.3 | 0.3%
Height, meters | 1.72 ± 0.11 | 1.73 | 1.65 | 1.80 | 1.1%
Weight, kilograms | 83.8 ± 21.9 | 81.5 | 68.0 | 96.5 | 0.4%
BMI, kg/m2 | 28.1 ± 6.5 | 27.1 | 23.8 | 31.5 | 1.1%
Female sex | 67% | (0% missing values)

Cardiac function | Median | Q1 | Q3 | % missing values
LVEDV, mL | 195 | 156 | 250 | 15%
LVESV, mL | 112 | 83 | 169 | 16%
LVEF, % | 41 | 29 | 50 | 16%
LVM, g | 128 | 100 | 161 | 25%
RVEDV, mL | 147 | 116 | 185 | 22%
RVESV, mL | 77 | 57 | 106 | 22%
LVEDVI, mL/m2 | 100 | 82 | 127 | 16%
LVESVI, mL/m2 | 57 | 43 | 86 | 16%
LVSV, mL | 74 | 58 | 91 | 16%
LVEDD, mm | 60 | 54 | 67 | 13%
LVESD, mm | 47 | 40 | 57 | 14%
LVMI, g/m2 | 65 | 53 | 81 | 25%
RVEDVI, mL/m2 | 76 | 61 | 92 | 23%
RVESVI, mL/m2 | 39 | 30 | 54 | 23%
RVSV, mL | 67 | 51 | 84 | 22%
RVEF, % | 48 | 39 | 54 | 22%

Cardiovascular history | Percentage | % missing values
History of myocardial infarction | 18% | 11%
History of percutaneous coronary intervention | 12% | 11%
History of coronary artery bypass grafting | 6% | 10%
History of hypertension | 49% | 9%
History of diabetes | 18% | 10%
History of heart failure | 37% | 11%
History of dyslipidemia | 39% | 10%
History of smoking | 21% | 8%
History of peripheral vascular disease | 4% | 12%
Family history of coronary artery disease | 11% | 14%

BMI: Body Mass Index; LVEDV: Left ventricular (LV) end-diastolic volume; LVEDVI: LV end-diastolic volume indexed to body surface area; LVESV: LV end-systolic volume; LVESVI: LV end-systolic volume indexed to body surface area; LVSV: LV stroke volume; LVEF: LV ejection fraction; LVEDD: LV end-diastolic dimension; LVESD: LV end-systolic dimension; LVM: LV mass; LVMI: LV mass indexed to body surface area; RVEDV: Right ventricular (RV) end-diastolic volume; RVESV: RV end-systolic volume; RVSV: RV stroke volume; RVEF: RV ejection fraction; RVEDVI: RV end-diastolic volume indexed to body surface area; RVESVI: RV end-systolic volume indexed to body surface area

Sequences utilized in the CMRs: Table 3 summarizes the pulse sequences employed in patients referred for CMR for HF. 94% of patients underwent cine balanced steady state free precession (bSSFP) imaging, whilst 90% underwent LGE imaging, 43% underwent T2 weighted imaging and 22% underwent T1 mapping sequences.

Table 3. CMR pulse sequences
Sequence | 1.5T (N = 1,217) | 3T (N = 2,385) | All scans (N = 3,837)
Balanced steady state free precession | 89.6% | 100.0% | 94.2%
T2 weighted imaging | 37.6% | 50.6% | 42.5%
T1 mapping | 4.5% | 33.5% | 21.6%
T2 mapping | 2.8% | 23.6% | 14.7%
T2* | 20.2% | 33.1% | 26.6%
Stress perfusion | 22.4% | 16.3% | 17.6%
Late gadolinium enhancement | 86.9% | 96.7% | 90.4%

Field strength and image quality: There were 2,385 patients who were scanned on a 3T CMR system (62.1%) vs. 1,217 patients who were scanned at 1.5T (31.7%). Regarding CMR image quality, 88% of scans were ranked as ‘good’ or ‘excellent’ with 12% ranked as ‘poor’ or ‘fair’. 
These clinical judgements regarding image quality were site-based assessments and were performed qualitatively by the reading physician. There were no preset criteria to distinguish what constituted ‘excellent’, ‘good’, ‘fair’ or ‘poor’ image quality. Initial indications and diagnosis following CMR scanning: The top 6 indications for CMRs performed in the SCMR Registry were: cardiomyopathy, etiology not yet diagnosed (NYD) (1,776; 46.2%), CAD/ischemia/viability (1,230; 32.1%), ARVC (423; 11.0%), HCM (146; 3.8%), arrhythmic substrate (136; 3.5%) and amyloidosis (97; 2.5%). Overall, in 1,892 (49%) of patients, there was a change between the initial pre-test indication and the post-CMR diagnosis. When broken down by indication, CMR was associated with changes between indication and post-CMR diagnosis in 333 (79%) of those referred for ARVC, 1,114 (63%) of those referred for cardiomyopathy, 66 (49%) of those referred for arrhythmic disease, 66 (45%) of those undergoing CMR for HCM, 25 (26%) undergoing scanning for amyloidosis and 270 (22%) of those undergoing CMR for CAD. Table 4 summarizes the nature of these changes amongst those who had CMRs performed for the top 6 indications. Table 5 stratifies the change rate between the pre-CMR indication and post-CMR diagnosis according to site. The largest contributing American site had a similar change rate (51%) to that of the entire patient population. The second largest contributing site, also from the United States, similarly had a change rate of 50%. In total, there were 3,611 patients who received CMRs at American sites, and they had an overall change rate of 50%. The country that contributed the second largest number of patients was China. They contributed 182 patients and reported a change rate of 52%, similar to the overall and American rates. The country that reported the lowest change rate was South Korea (15%). 
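The categorical comparisons in this study used the Fisher’s exact test on 2x2 tables of changed vs. unchanged diagnosis. As a minimal illustration (not the authors’ actual code), the field-strength comparison can be sketched from the published summary numbers; the per-group change counts below are approximations reconstructed from the reported denominators (2,385 scans at 3T, 1,217 at 1.5T) and change rates (53% and 44%), not the registry’s raw data.

```python
# Sketch of the study's categorical comparison: Fisher's exact test on a
# 2x2 table of (changed vs. unchanged post-CMR diagnosis) by field strength.
# Counts are approximations derived from the reported percentages.
from scipy.stats import fisher_exact

n_3t, rate_3t = 2385, 0.53    # scans at 3T, reported change rate
n_15t, rate_15t = 1217, 0.44  # scans at 1.5T, reported change rate

changed_3t = round(n_3t * rate_3t)     # ~1264 with a diagnostic change at 3T
changed_15t = round(n_15t * rate_15t)  # ~535 with a diagnostic change at 1.5T

table = [
    [changed_3t, n_3t - changed_3t],
    [changed_15t, n_15t - changed_15t],
]
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2g}")
```

With these approximate counts the test reproduces the direction and significance of the published comparison (p < 0.001); the exact p value depends on the true per-site counts, which are not reported.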
Table 4. Pre-CMR indications and post-CMR diagnoses

Amyloidosis of the heart (total 97):
No change in diagnosis: 72 (74%), all same as pre-CMR diagnosis.
Change in diagnosis: 25 (26%): non-ischemic cardiomyopathy 13 (52%); coronary artery disease 4 (16%); left ventricular hypertrophy 3 (12%); no amyloidosis 4 (16%); possible hypertrophic cardiomyopathy 1 (4%).

Arrhythmic disease (total 136):
No change in diagnosis: 70 (51%), all same as pre-CMR diagnosis.
Change in diagnosis: 66 (49%): non-ischemic cardiomyopathy 44 (67%); coronary artery disease 9 (14%); possible myocarditis 4 (6%); myocarditis 1 (2%); hypertrophic cardiomyopathy 1 (2%); left ventricular non-compaction 1 (2%); coronary artery disease and non-ischemic cardiomyopathy 1 (2%); cardiomyopathy and ventricular septal defect 1 (2%); possible transplant rejection 1 (2%); no arrhythmic substrate 3 (5%).

ARVC (total 423):
No change in diagnosis: 90 (21%), all same as pre-CMR diagnosis.
Change in diagnosis: 333 (79%): non-ischemic cardiomyopathy 131 (39%); no ARVC 60 (18%); borderline left ventricular function without other significant abnormalities 96 (29%); coronary artery disease 13 (4%); myocarditis 3 (1%); ARVC and cardiomyopathy 3 (< 1%); LV non-compaction 5 (2%); infiltrative cardiomyopathy 2 (< 1%); likely inflammatory cardiomyopathy 1 (< 1%); left ventricular hypertrophy 2 (< 1%); ventricular septal defect 1 (< 1%); right atrial dilatation without ARVC 5 (2%); other diagnoses 11 (3%).

Cardiomyopathy NYD (total 1,776):
No change in diagnosis: 662 (37%), all same as pre-CMR diagnosis.
Change in diagnosis: 1,114 (63%): coronary artery disease 240 (22%); non-ischemic cardiomyopathy 484 (43%); myocarditis 52 (5%); left ventricular non-compaction 42 (4%); borderline LV function without other significant abnormalities 70 (6%); sarcoidosis 34 (3%); ARVC 9 (< 1%); hypertrophic cardiomyopathy 9 (< 1%); coronary artery disease and non-ischemic cardiomyopathy 16 (1%); infiltrative cardiomyopathy 26 (2%); hemochromatosis 2 (< 1%); other diagnoses 130 (12%).

Coronary artery disease (total 1,230):
No change in diagnosis: 960 (78%), all same as pre-CMR diagnosis.
Change in diagnosis: 270 (22%): non-ischemic cardiomyopathy 154 (57%); borderline left ventricular function without other significant abnormalities 42 (16%); no coronary artery disease 9 (3%); myocarditis 12 (4%); coronary artery disease and non-ischemic cardiomyopathy 11 (4%); sarcoidosis 4 (1%); left ventricular non-compaction 4 (1%); infiltrative cardiomyopathy 8 (3%); hypertrophic cardiomyopathy 2 (< 1%); hemochromatosis 1 (< 1%); other diagnoses 23 (9%).

Hypertrophic cardiomyopathy (total 146):
No change in diagnosis: 80 (55%), all same as pre-CMR diagnosis.
Change in diagnosis: 66 (45%): non-ischemic cardiomyopathy 22 (33%); coronary artery disease 11 (17%); borderline left ventricular function without other significant abnormalities 12 (18%); infiltration/amyloid 6 (9%); left ventricular hypertrophy 3 (5%); no hypertrophic cardiomyopathy 5 (8%); left ventricular non-compaction 6 (9%); concomitant hypertrophic cardiomyopathy and coronary artery disease 1 (2%).

Legend. ARVC: Arrhythmogenic right ventricular cardiomyopathy; NYD: Not yet diagnosed

Table 5. Stratification by site according to the rate of change between pre-CMR indication and post-CMR diagnosis
Country | Site number | No. of patients | No change | Change | % with change
United States | 1 | 2,616 | 1,286 | 1,330 | 51%
United States | 8 | 376 | 187 | 189 | 50%
United States | 2 | 262 | 145 | 117 | 45%
United States | 85 | 145 | 67 | 78 | 54%
United States | 36 | 116 | 69 | 47 | 41%
United States | 86 | 56 | 43 | 13 | 23%
United States | 63 | 31 | 17 | 14 | 45%
United States | 25 | 7 | 5 | 2 | 29%
United States | 70 | 1 | 1 | 0 | 0%
United States | 87 | 1 | 1 | 0 | 0%
China | 18 | 182 | 88 | 94 | 52%
South Korea | 68 | 41 | 35 | 6 | 15%
India | 54 | 3 | 1 | 2 | 67%
Total | | 3,837 | 1,945 | 1,892 | 49%

Patients scanned at 3T had higher change rates vs. those scanned at 1.5T (53% vs. 44%, respectively, p < 0.001). Those scanned on a 3T CMR system had their images rated as ‘excellent’ or ‘good’ 87% of the time versus 90% of scans performed at 1.5T (p < 0.001). Males had a higher rate of change following CMR than females (52% vs. 48%, respectively, p = 0.02). There was a non-significant trend towards a higher rate of change amongst those of normal weight/underweight vs. those who were overweight: those with BMI < 25 kg/m2 had a change rate of 51% compared with 49% in those with a BMI >= 25 kg/m2 (p = 0.23). Amongst patients undergoing 3T CMR scanning, there was no significant difference in change rate between those with BMIs < 25 kg/m2 (55%), 25-29.9 kg/m2 (53%) and > 30 kg/m2 (48%), p = 0.54. Among patients who were LGE positive, CMR was associated with a change between initial indication and post-CMR diagnosis in 44% of the cases vs. 56% of the cases for those who were LGE negative (p < 0.001). Sensitivity analysis: To minimize the risk of bias caused by data submitted by small contributing sites, we repeated our main analysis after removing sites that contributed < 10 patients. After removal of these patients, 3,825 patients remained from 9 sites. 
There was no significant difference in our results with 1,888 patients having a change between the initial indication and post-CMR diagnosis (49%) and 1,937 patients (51%) not having such a change. Comparison of cohort patients with excluded patients: Supplemental Table 1 compares patients ultimately included in the cohort with those excluded. Of note, there was no significant difference in terms of median age, height, weight or BMI between the two groups (p-value > 0.05). Patients included in the cohort were slightly more likely to be female (67% vs. 64%, p-value = 0.01). There were also no significant differences in CMR characteristics between those patients who were included and those excluded. However, patients included in the study had significantly higher rates of major cardiovascular risk factors when compared to those who were excluded. The excluded patients also had significantly higher percentages of missing values for these clinical parameters. Discussion: Our analysis of 3,837 consecutive patients from the SCMR Registry referred for CMR for the evaluation of HF reveals a median age of approximately 59 years. The majority of CMR scans were performed on women. Almost 2/3 of the patients were scanned on 3T magnets. CMR was associated with changes between the initial indication and the post-CMR diagnosis in nearly half of patients. Patients undergoing scanning on 3T were significantly more likely to have a subsequent change when compared to those scanned on 1.5T, despite 1.5T scans having slightly better rated image quality. Although CMR is considered the gold standard diagnostic test for patients with HF, there is little available evidence that its receipt leads to changes in downstream patient management. In a single center study, White et al. examined 82 patients undergoing CMR for the diagnosis of arrhythmic substrate [21]. They reported that 50% of patients had a change in management, as identified by a change in diagnosis after CMR scanning. 
In a larger single center study, Abbasi et al. studied 150 subjects with LVEF < 50%. They reported a downstream change in management of 52% after the CMR was performed [19]. Our large multi-center study of > 3,000 patients from the SCMR Registry reported a similar rate of change in management after CMR scanning compared to the two aforementioned studies. These results are clinically important as they confirm the impact of CMR in the management of a large number of HF patients across multiple centers and countries. Further, our finding that CMR's impact on management was more likely after scanning on a 3T magnet is interesting. Although our study was not designed to evaluate the reason underlying this finding, we speculate that it may be related to improved diagnostic quality of some tissue characterization sequences such as LGE, T2 weighted imaging and T1 mapping, despite the fact that our data do not show overall higher image quality for 3T CMR. Since our registry recorded data for overall CMR scan image quality but not for individual sequences, we are unable to test this hypothesis. It is also important to note that the main contributing US site exclusively uses 3T CMR systems. This may have been an important contributor to the differential impact that we observed for patients scanned on 1.5 vs. 3T in terms of diagnostic change. Most of the patient characteristics that we reported in our cohort were similar to previously published work from other HF registries. For example, the Swedish Heart Failure Registry reported a hypertension prevalence of 52% (vs. 49% in our cohort) and a diabetes prevalence of approximately 20% (vs. 18% in our cohort) [23]. One important difference is that the mean age in the Swedish registry was approximately 77 years versus our mean of approximately 57 years. One potential explanation for this discrepancy is that all the patients in our cohort, by definition, must have received a CMR. 
This is in comparison to the Swedish registry, where only a small proportion of the patients underwent CMR scanning. There is the potential for selection bias in our study, as physicians may be more likely to order CMRs on younger, healthier patients who may tolerate the CMR diagnostic test better and in whom there may be more therapeutic options. Another important finding of ours that differs from prior research is that approximately 2/3 of the patients in our cohort were female. Prior research has reported that the prevalence of HF is similar in men and women, raising the question of why our cohort contained such a high proportion of women [24–26]. A possible contributor is that women with HF have been shown to ascribe more positive meaning to their illness [24, 27–29], perhaps making them more likely to agree to undergo more extensive diagnostic evaluation, including with CMR. Indeed, there are limited data to suggest that women are more likely to receive advanced imaging tests for the evaluation of possible CAD [30]. If this were the case for the SCMR Registry, it is possible that this attribute led to a selection bias that ultimately culminated in a significantly higher percentage of women being included. Although most sites had similar rates of changes between the initial indication and the post-CMR diagnosis, some sites had significantly lower rates. The lowest rate of change occurred at the South Korean contributing site. Given the overall small number of patients scanned at this site (n = 41), it is difficult to draw firm conclusions about possible reasons for this low rate. The pre-CMR indications were different at this site when compared to the overall cohort, with only approximately 56% being performed for cardiomyopathy NYD or CAD/ischemia (vs. approximately 78% of the total CMRs) and 22% performed for arrhythmic substrate (vs. approximately 4% of the total).
Further, unlike the overall cohort, the South Korean site scanned patients predominantly at 1.5T (71%), which was associated with lower rates of diagnostic change post CMR in our study. In a comparison of patients who were ultimately included in our cohort with those who were excluded, the former had a significantly higher prevalence of most major cardiovascular risk factors when compared to the latter despite no significant differences in terms of age, weight, BMI or cardiac function parameters. However, it is important to note that those patients excluded from our study also had a significantly higher percentage of missing values for cardiovascular history/risk factor parameters. Indeed, the reason those patients were excluded was because they had insufficient data for us to make a determination of whether or not there was a change between the initial pre-CMR indication and the post-CMR diagnosis. Limitations: This study must be interpreted in the context of its limitations. First, there was limited data granularity in the registry. For example, we did not have data on treatment and downstream clinical outcomes. We recognize that a change between the pre-CMR indication and post CMR diagnosis is only one aspect in the assessment of a change in management and we lack direct data on downstream treatment decisions and outcomes. Furthermore, CMR does have value in confirming diagnoses and therefore evaluating for a change between the pre-test indication and the post-test diagnosis is an imperfect surrogate in the overall assessment of whether a change in management occurred. With that said, the enhanced ability to detect new or alternate myocardial disease processes is valuable and uniquely differentiates CMR from many other imaging modalities. This has been recognized by other groups who used similar surrogates in their work evaluating the clinical impact of CMR [21]. 
In another example of our lack of data granularity, we did not have access to the raw images and relied on data input from CMR reports from the participating sites. Our work highlights the importance of improving data capture/granularity in the SCMR Registry to help facilitate future research. Second, there were significant missing data in a number of parameters (summarized in Tables 2 and 3). Future SCMR Registry related quality improvement processes should focus on increasing site compliance with data entry in order to produce a more complete and accurate registry. Third, although we used the global SCMR Registry, the vast majority of the data were from the United States with most of that data derived from one American site. However, it is important to note that for our study, the one dominant site had a very similar change rate (51%) when compared to the entire cohort (49%). Nonetheless, we recognize that using data primarily from one site may affect generalizability and future efforts should focus on diversifying data input from other sites and countries. Fourth, some parameters were qualitative and were reliant solely on site assessments (such as the evaluation of image quality). Future SCMR Registry efforts should focus on harmonizing or standardizing such qualitative data across sites. Finally, due to the design of our study, it was not possible to blind the resident who evaluated whether or not a change occurred to the initial pre-CMR indication. This may have led to ascertainment bias. In order to minimize the impact of this potential bias, a second reviewer who is a Fellow of the SCMR and who has > 10 years of experience in clinical cardiology, CMR clinical research and the interpretation of large datasets, reviewed and confirmed all the findings after initial review by the resident. 
Conclusion: In our study of 3,837 SCMR Registry patients undergoing CMR for the evaluation of HF, we found that CMR was associated with a change between the pre-test indication and post-CMR diagnosis in 49% of cases, suggesting a potential impact on patient management. Changes occurred more commonly for patients scanned at 3T. Electronic supplementary material: Below is the link to the electronic supplementary material. Supplementary Material 1
Background: Cardiovascular magnetic resonance (CMR) is an important diagnostic test used in the evaluation of patients with heart failure (HF). However, the demographics and clinical characteristics of those undergoing CMR for evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. The goal of this study was to describe the characteristics of patients undergoing CMR for HF and to determine the extent to which CMR leads to changes in downstream patient management by comparing pre-CMR indications and post-CMR diagnoses. Methods: We utilized the Society for Cardiovascular Magnetic Resonance (SCMR) Registry as our data source and abstracted data for patients undergoing CMR scanning for HF indications from 2013 to 2019. Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher's exact test was used when comparing categorical variables. The Wilcoxon rank sum test was used to compare continuous variables. Results: 3,837 patients were included in our study. 94% of the CMRs were performed in the United States with China, South Korea and India also contributing cases. Median age of HF patients was 59.3 years (IQR, 47.1, 68.3 years) with 67% of the scans occurring on women. Almost 2/3 of the patients were scanned on 3T CMR scanners. Overall, 49% of patients who underwent CMR scanning for HF had a change between the pre-test indication and post CMR diagnosis. 53% of patients undergoing scanning on 3T had a change between the pre-test indication and post CMR diagnosis when compared to 44% of patients who were scanned on 1.5T (p < 0.01). Conclusions: Our results suggest a potential impact of CMR scanning on downstream diagnosis of patients referred for CMR for HF, with a larger potential impact on those scanned on 3T CMR scanners.
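The Methods name Fisher's exact test for categorical comparisons. As a hedged illustration only (not the authors' analysis code), a minimal two-sided Fisher's exact test for a 2x2 table can be written with the Python standard library; the counts in the usage line are invented, not the study's data:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    total = comb(row1 + row2, col1)

    def p_table(x):
        # Hypergeometric probability of a table with top-left cell = x
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum probabilities of all tables at least as extreme as the observed one
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)

# Invented counts, e.g. change/no-change by scanner field strength
print(fisher_exact_2x2(53, 47, 44, 56))
```

For real analyses one would normally reach for `scipy.stats.fisher_exact`, which implements the same two-sided definition.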
Background: Heart failure (HF) is a global public health problem affecting at least 26 million people worldwide and is increasing in prevalence [1–3]. Cardiovascular magnetic resonance (CMR) imaging is an important diagnostic test in the assessment of patients with HF [4–18]. However, the demographics and clinical characteristics of those undergoing CMR for the evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. Much of the current available evidence in this area is built upon small single center studies with widely varying results and conclusions. For example, CMR has been reported to lead to a change in patient management in anywhere between 16 and 65% of studies [19–21]. One way in which CMR can lead to a change in patient management is by providing valuable diagnostic information that changes the understanding of the etiology of HF post-test, when compared to pre-test [19–21]. This diagnostic information can help inform downstream treatment decisions and impact outcomes. However, the extent to which CMR can impact HF management in this manner is currently unclear. A systematic, multi-center evaluation of the impact of CMR on patient management is required to shed further light on these issues. The objectives of this paper are to utilize the Society for Cardiovascular Magnetic Resonance (SCMR) Registry in order to: (1) describe patient demographics, clinical and CMR scanning characteristics of patients undergoing CMR for HF; and (2) determine the extent to which CMR is associated with changes in downstream patient management by comparing the pre-CMR indication and the post-CMR diagnostic information.
Conclusion: In our study of 3,837 SCMR Registry patients undergoing CMR for the evaluation of HF, we found that CMR was associated with a change between the pre-test indication and post-CMR diagnosis in 49% of cases, suggesting a potential impact on patient management. Changes occurred more commonly for patients scanned at 3T.
Background: Cardiovascular magnetic resonance (CMR) is an important diagnostic test used in the evaluation of patients with heart failure (HF). However, the demographics and clinical characteristics of those undergoing CMR for evaluation of HF are unknown. Further, the impact of CMR on subsequent HF patient care is unclear. The goal of this study was to describe the characteristics of patients undergoing CMR for HF and to determine the extent to which CMR leads to changes in downstream patient management by comparing pre-CMR indications and post-CMR diagnoses. Methods: We utilized the Society for Cardiovascular Magnetic Resonance (SCMR) Registry as our data source and abstracted data for patients undergoing CMR scanning for HF indications from 2013 to 2019. Descriptive statistics (percentages, proportions) were performed on key CMR and clinical variables of the patient population. The Fisher's exact test was used when comparing categorical variables. The Wilcoxon rank sum test was used to compare continuous variables. Results: 3,837 patients were included in our study. 94% of the CMRs were performed in the United States with China, South Korea and India also contributing cases. Median age of HF patients was 59.3 years (IQR, 47.1, 68.3 years) with 67% of the scans occurring on women. Almost 2/3 of the patients were scanned on 3T CMR scanners. Overall, 49% of patients who underwent CMR scanning for HF had a change between the pre-test indication and post CMR diagnosis. 53% of patients undergoing scanning on 3T had a change between the pre-test indication and post CMR diagnosis when compared to 44% of patients who were scanned on 1.5T (p < 0.01). Conclusions: Our results suggest a potential impact of CMR scanning on downstream diagnosis of patients referred for CMR for HF, with a larger potential impact on those scanned on 3T CMR scanners.
6,080
362
[ 363, 345, 170, 343, 418, 106, 672, 97, 108, 1228, 84, 135, 509, 18 ]
18
[ "cmr", "patients", "change", "scmr", "registry", "data", "pre", "indication", "patient", "scmr registry" ]
[ "cmr clinical research", "cmr characteristics patients", "heart failure", "heart failure hf", "cmr patient outcomes" ]
null
[CONTENT] Heart failure | Registry | Real World | Impact of CMR [SUMMARY]
null
[CONTENT] Heart failure | Registry | Real World | Impact of CMR [SUMMARY]
[CONTENT] Heart failure | Registry | Real World | Impact of CMR [SUMMARY]
[CONTENT] Heart failure | Registry | Real World | Impact of CMR [SUMMARY]
[CONTENT] Heart failure | Registry | Real World | Impact of CMR [SUMMARY]
[CONTENT] Humans | Female | Predictive Value of Tests | Magnetic Resonance Spectroscopy | Heart Failure | Magnetic Resonance Imaging | Registries [SUMMARY]
null
[CONTENT] Humans | Female | Predictive Value of Tests | Magnetic Resonance Spectroscopy | Heart Failure | Magnetic Resonance Imaging | Registries [SUMMARY]
[CONTENT] Humans | Female | Predictive Value of Tests | Magnetic Resonance Spectroscopy | Heart Failure | Magnetic Resonance Imaging | Registries [SUMMARY]
[CONTENT] Humans | Female | Predictive Value of Tests | Magnetic Resonance Spectroscopy | Heart Failure | Magnetic Resonance Imaging | Registries [SUMMARY]
[CONTENT] Humans | Female | Predictive Value of Tests | Magnetic Resonance Spectroscopy | Heart Failure | Magnetic Resonance Imaging | Registries [SUMMARY]
[CONTENT] cmr clinical research | cmr characteristics patients | heart failure | heart failure hf | cmr patient outcomes [SUMMARY]
null
[CONTENT] cmr clinical research | cmr characteristics patients | heart failure | heart failure hf | cmr patient outcomes [SUMMARY]
[CONTENT] cmr clinical research | cmr characteristics patients | heart failure | heart failure hf | cmr patient outcomes [SUMMARY]
[CONTENT] cmr clinical research | cmr characteristics patients | heart failure | heart failure hf | cmr patient outcomes [SUMMARY]
[CONTENT] cmr clinical research | cmr characteristics patients | heart failure | heart failure hf | cmr patient outcomes [SUMMARY]
[CONTENT] cmr | patients | change | scmr | registry | data | pre | indication | patient | scmr registry [SUMMARY]
null
[CONTENT] cmr | patients | change | scmr | registry | data | pre | indication | patient | scmr registry [SUMMARY]
[CONTENT] cmr | patients | change | scmr | registry | data | pre | indication | patient | scmr registry [SUMMARY]
[CONTENT] cmr | patients | change | scmr | registry | data | pre | indication | patient | scmr registry [SUMMARY]
[CONTENT] cmr | patients | change | scmr | registry | data | pre | indication | patient | scmr registry [SUMMARY]
[CONTENT] cmr | hf | information | diagnostic information | patient management | patient | management | diagnostic | demographics | demographics clinical [SUMMARY]
null
[CONTENT] united | contributing society | sites contributing society cardiovascular | sites contributing | contributing society cardiovascular magnetic | contributing society cardiovascular | sites contributing society | society cardiovascular magnetic | society | cardiovascular magnetic resonance [SUMMARY]
[CONTENT] cmr | change | hf found cmr associated | study 837 scmr | impact patient | impact patient management | impact patient management rate | patient management rate | study 837 | study 837 scmr registry [SUMMARY]
[CONTENT] cmr | patients | scmr | supplementary material | supplementary | material | change | registry | data | supplementary material supplementary material [SUMMARY]
[CONTENT] cmr | patients | scmr | supplementary material | supplementary | material | change | registry | data | supplementary material supplementary material [SUMMARY]
[CONTENT] CMR ||| CMR | HF ||| CMR ||| CMR for HF | CMR [SUMMARY]
null
[CONTENT] 3,837 ||| 94% | the United States | China | South Korea | India ||| HF | 59.3 years | IQR | 47.1 | 68.3 years | 67% ||| Almost 2/3 | 3 | CMR ||| 49% | CMR | HF | CMR ||| 53% | 3 | CMR | 44% | 1.5 | 0.01 [SUMMARY]
[CONTENT] CMR | CMR for HF | 3 | CMR [SUMMARY]
[CONTENT] CMR ||| CMR | HF ||| CMR ||| CMR for HF | CMR ||| the Society for Cardiovascular Magnetic Resonance | CMR | HF | 2013 to 2019 ||| CMR ||| Fisher ||| Wilcoxon ||| 3,837 ||| 94% | the United States | China | South Korea | India ||| HF | 59.3 years | IQR | 47.1 | 68.3 years | 67% ||| Almost 2/3 | 3 | CMR ||| 49% | CMR | HF | CMR ||| 53% | 3 | CMR | 44% | 1.5 | 0.01 ||| CMR | CMR for HF | 3 | CMR [SUMMARY]
[CONTENT] CMR ||| CMR | HF ||| CMR ||| CMR for HF | CMR ||| the Society for Cardiovascular Magnetic Resonance | CMR | HF | 2013 to 2019 ||| CMR ||| Fisher ||| Wilcoxon ||| 3,837 ||| 94% | the United States | China | South Korea | India ||| HF | 59.3 years | IQR | 47.1 | 68.3 years | 67% ||| Almost 2/3 | 3 | CMR ||| 49% | CMR | HF | CMR ||| 53% | 3 | CMR | 44% | 1.5 | 0.01 ||| CMR | CMR for HF | 3 | CMR [SUMMARY]
Dose of early intervention treatment during children's first 36 months of life is associated with developmental outcomes: an observational cohort study in three low/low-middle income countries.
25344731
The positive effects of early developmental intervention (EDI) on early child development have been reported in numerous controlled trials in a variety of countries. An important aspect to determining the efficacy of EDI is the degree to which dosage is linked to outcomes. However, few studies of EDI have conducted such analyses. This observational cohort study examined the association between treatment dose and children's development when EDI was implemented in three low and low-middle income countries as well as demographic and child health factors associated with treatment dose.
BACKGROUND
Infants (78 males, 67 females) born in rural communities in India, Pakistan, and Zambia received a parent-implemented EDI delivered through biweekly home visits by trainers during the first 36 months of life. Outcome was measured at age 36 months with the Mental (MDI) and Psychomotor (PDI) Development Indices of the Bayley Scales of Infant Development-II. Treatment dose was measured by the number of home visits completed and parent-reported implementation of assigned developmental stimulation activities between visits. Sociodemographic, prenatal, perinatal, and child health variables were measured as correlates.
METHODS
Average home visits dose exceeded 91% and mothers engaged the children in activities on average 62.5% of days. Higher home visits dose was significantly associated with higher MDI (mean for dose quintiles 1-2 combined = 97.8, quintiles 3-5 combined = 103.4, p = 0.0017). Higher treatment dose was also generally associated with greater mean PDI, but the relationships were non-linear. Location, sociodemographic, and child health variables were associated with treatment dose.
RESULTS
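The quintile comparison reported above (mean MDI for dose quintiles 1-2 vs. 3-5) can be sketched with the Python standard library. The dose and score values below are made-up illustrative numbers, not study data:

```python
from statistics import quantiles, mean

# Invented example data: percent of home visits completed, and MDI at 36 months
doses  = [80, 85, 88, 90, 92, 93, 94, 95, 96, 98]
scores = [95, 96, 99, 98, 101, 104, 103, 105, 106, 108]

cuts = quantiles(doses, n=5)              # 4 cut points defining 5 quintiles

def quintile(d):
    return 1 + sum(d > c for c in cuts)   # quintile index in 1..5

low  = [s for d, s in zip(doses, scores) if quintile(d) <= 2]
high = [s for d, s in zip(doses, scores) if quintile(d) >= 3]
print(round(mean(low), 1), round(mean(high), 1))
```

With these invented values the "low dose" group mean is lower than the "high dose" group mean, mirroring the direction of the reported result.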
Receiving a higher dose of EDI during the first 36 months of life is generally associated with better developmental outcomes. The greater benefit appears when receiving ≥91% of biweekly home visits and program activities on ≥67% of days over 3 years. It is important to ensure that EDI is implemented with a sufficiently high dose to achieve the desired effect. To this end, groups at risk of receiving a lower dose can be identified and may require special attention to ensure an adequate effect.
CONCLUSIONS
[ "Adult", "Child Development", "Child, Preschool", "Cohort Studies", "Developing Countries", "Developmental Disabilities", "Female", "Home Care Services", "Humans", "India", "Infant", "Infant, Newborn", "Male", "Neuropsychological Tests", "Pakistan", "Parents", "Program Evaluation", "Rural Population", "Zambia" ]
4288653
Background
Programs of early developmental intervention (EDI) implemented in the first years of life in children born with, or at risk for, neurodevelopmental disability have been shown to improve cognitive developmental outcomes and consequently, their quality of life. EDI includes various activities designed to enhance a young child’s development, directly via structured experiences and/or indirectly through influencing the care giving environment [1]. The positive effects of EDI on early child development have been reported in numerous controlled trials in high-income countries [2, 3], which have been confirmed through meta-analyses [4, 5] and expert reviews [6–8]. Several trials of EDI with risk groups of infants and young children have also been conducted in low or low-middle income countries (L/LMIC), which have also documented positive effects on child development, by itself or in combination with nutritional supplementation [9–16]. The involvement of parents in EDI is critical for achieving positive outcomes [1, 17–19], which can be optimized by implementing EDI through home visits by a parent trainer. This modality also matches well the circumstances of many L/LMIC where families often live far away from or have other barriers to reach providers that could implement EDI [20]. An important aspect to determining the efficacy of EDI is the degree to which dosage impacts outcomes, and what constitutes “sufficient dosage” [21]. Sufficient dosage with regard to EDI refers to a participant receiving adequate exposure to the intervention for it to be efficacious. Program intensity, or dosage, typically is measured by the quantity and quality the intervention actually achieved when implemented [21, 22], although it ideally should be determined based on the needs of the population at hand [23]. 
Common indicators of dosage for EDI include amount of time spent in a child development center, number of home visits completed by a specialist training a parent and/or engaging the child, or some indication of parent engagement in the EDI. Whereas there is more information linking outcomes with treatment dose for pre-school programs [21, 22], despite its importance few studies of EDI implemented in the first three years of life have conducted such analyses. A few previous studies generally indicate that children who receive more exposure to EDI display greater improvements in their cognitive development compared to those who receive less, even when differences in exposure were modest. Specifically, children who received EDI (home and center based) for more than 400 days, through age 3, exhibited significant improvements in cognitive development, while smaller but similar effects were evident among children who received treatment between 350 and 400 days [24]. Another study reported that optimal cognitive development of children in EDI was not associated with their background characteristics, such as birth weight or maternal education, but with three aspects related to treatment dosage: number of home visits received, days attending child care, and number of parent meetings attended [18]. However these studies as well as the broader discussions of implementation quality have focused on programs conducted in the United States [21, 22]. The applicability of this information to L/LMIC contexts is unclear at present. The only EDI treatment dose study conducted in a L/LMIC that we are aware of showed that, as the frequency of home visits increased from none, through monthly, biweekly, and weekly, developmental gains at 30 months of age increased as well [25]. 
Given the potential for EDI to significantly impact the development of children, and therefore the economic development of nations in the long-term [26], it will be important more broadly to examine treatment dose in L/LMIC to inform the implementation of such efforts on a larger scale. Parents may vary in their level of participation in home visit EDI programs due to a variety of factors. Previous research has indicated higher treatment dose among families participating in EDI who have better financial and social resources [20, 27–30]. Perinatal, neonatal, and other child health characteristics might also predict treatment dose for an intervention intending to promote the child’s development. Yet, studies that have examined both social and health predictors of EDI treatment dose are rare and have not considered a broad range of possible predictors [15]. It is important to examine various such factors in L/LMIC because they can identify processes that may influence parents’ adherence with EDI and those who may need additional support. In light of these gaps in our understanding, the aim of the current study was to determine (1) whether there is a dose effect in a home visiting EDI implemented in three L/LMIC and (2) what sociodemographic and health factors are associated with variation in treatment dose. We examined two indicators of dose of EDI. As in previous studies, the number of home visits completed over the course of the EDI was measured. Another important treatment element is the extent to which parents implement the assigned developmental activities with the child during the time between home visits, which we refer to as the program implementation dose. Despite its logical importance to the success of home visiting EDI, we are not aware that parent program implementation dose has been examined in EDI. 
We hypothesize that increased dose as measured by either indicator will be associated with better developmental outcomes from EDI when implemented in three L/LMIC.
Methods
Data used to examine the association between treatment adherence and developmental outcomes are from one of the conditions of the Brain Research to Ameliorate Impaired Neurodevelopment - Home-based Intervention Trial (BRAIN-HIT), a randomized controlled trial (RCT) detailed elsewhere (clinicaltrials.gov ID# NCT00639184) [31, 32]. Implemented in rural communities of India, Pakistan, and Zambia, the overall aim of BRAIN-HIT was to evaluate the efficacy of an EDI program on the development of children in L/LMIC who are at-risk for neurodevelopmental disability due to birth asphyxia that required resuscitation. A group of children who did not require resuscitation at birth was evaluated using the same protocol to compare the efficacy of the EDI in those with and without birth asphyxia. As detailed elsewhere [32, 33], mental development at 36 months of age was better in children with birth asphyxia who had received the EDI compared with those in the control condition (effect size = 4.6 points on the standardized scale from the Bayley Scales of Infant Development, see below), but there was no difference between trial conditions in the children without birth asphyxia. Psychomotor development was likewise higher in the EDI group, in this case for both the children with (effect size = 5.4) and without (effect size = 6.1) birth asphyxia, compared to those in the control condition. The issue of the effect of treatment dose on development is only relevant for the active EDI condition, and not the comparison condition, which intended to control for placebo, observation, and time effects and lacked a theoretically based developmental intervention. Therefore, only data from those randomized to receive EDI were analyzed in the present research, making this an observational study of that cohort. BRAIN-HIT was approved by the Institutional Review Board at each site and was conducted in accord with prevailing ethical principles. 
Study population Infants with birth asphyxia (resuscitated) and infants without birth asphyxia or other perinatal complications (non-resuscitated), born from January 2007 through June 2008 in rural communities in three sites in India, Pakistan and Zambia, were matched for country and chronological time and randomly selected from those enrolled in the First Breath Trial [34]. Infants were screened for enrollment into the BRAIN-HIT during the 7-day follow-up visit after birth [31], and were ineligible if: (1) birth weight was less than 1500 grams, (2) neurological examination at seven days of age (grade III by Ellis classification) [35], was severely abnormal (because they were not expected to benefit from EDI), (3) mother was less than 15 years old or unable/unwilling to participate, or (4) mother was not planning to stay in the study area for the next three years. Birth asphyxia was defined as the inability to initiate or sustain spontaneous breathing at birth using WHO definition (biochemical evidence of birth asphyxia could not be obtained in these settings) [36]. A list of potential enrollees was distributed to the investigators in each country to obtain written consent for the study, which was obtained during the second week after birth and before randomization to intervention conditions of the BRAIN-HIT.
Intervention procedures Investigators at each research site selected EDI parent trainers who were trained in an initial 5-day workshop, which was led by the same experts at each research site. A second workshop was conducted before participating children began to reach 18 months of age to adapt the approach to children up to 36 months, again conducted by the same experts at each site. To maintain quality of implementation, the trainers were supervised with observations during actual home visits and constructive feedback was provided on a regular basis. Each parent–child pair was assigned to the same trainer throughout the trial whenever possible, who was scheduled to make a home visit every two weeks over the 36-month trial period. As elaborated elsewhere [31, 32], the trainer presented one or two playful learning activities during each visit targeting developmentally appropriate milestones. These activities cover a spectrum of abilities across the cognitive, social and self-help, gross and fine motor, and language domains.
The parent practiced the activity in the presence of the trainer who provided feedback. Cards depicting the activities were then left with the parent, who was encouraged to apply the activities in daily life with the child until the next home visit. The trainer introduced new activities in subsequent visits to enhance the child's developmental competencies.
Treatment dose indicators Two indicators of treatment dose were calculated. Home visit dose was measured based on each parent trainer keeping a record of visit dates.
Following the first visit, visits were scheduled to occur every two weeks until the completion of the trial. A home visit was completed on schedule if it occurred within its assigned two-week window following the preceding visit. We calculated the percentage of scheduled home visits completed for each participant for the full 36-month trial. The reason for each missed visit was coded as due to illness, weather, death in family, refusal, child or mother unavailable for another reason, parent trainer schedule conflict, or other reasons. Program implementation dose was measured based on maternal report, obtained by the trainer at each home visit, of the proportion of days the assigned activities had been implemented since the previous visit. First, the number of days between subsequent completed visits was calculated (Yn). If the time between two home visits extended beyond 30 days, a maximum of 30 days was used. Program implementation credits were assigned for the time period between visits based on the mother’s report of implementation of activities, as follows: “not at all” (creditn = 1), “about one-quarter of days or less” (creditn = Yn*.25), “about one-half of days” (creditn = Yn*.50), “about three-quarters of days” (creditn = Yn*.75), and “almost every day or more” (creditn = Yn). The credits were then summed over the trial period, divided by the number of possible credits, and multiplied by 100. Thus, this score estimates the percent of days between home visits that the mother reported implementing child stimulation activities. As an additional descriptive measure of treatment dose, the parent trainer was surveyed at the conclusion of the study to estimate how often the activities had been implemented between the home visits, using a five-point scale (from “never” to “always”).
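Both dose indicators reduce to simple arithmetic. The following is a minimal Python sketch of the two calculations as described above (the function names and data shapes are hypothetical; the trial's actual scoring code is not published):

```python
def home_visit_dose(intervals_days, n_scheduled):
    """Percent of scheduled visits completed on schedule.

    intervals_days: days elapsed since the preceding completed visit, one
    entry per completed visit after the first; a visit counts as on schedule
    if it falls within the assigned two-week (14-day) window.
    n_scheduled: total visits scheduled over the trial period.
    """
    on_time = sum(1 for d in intervals_days if d <= 14)
    return 100.0 * on_time / n_scheduled


# Maternal-report categories mapped to credit as a function of the
# (capped) interval length Y_n, following the scheme in the text.
CREDIT = {
    "not at all": lambda y: 1,  # fixed credit of 1, as stated in the protocol text
    "about one-quarter of days or less": lambda y: y * 0.25,
    "about one-half of days": lambda y: y * 0.50,
    "about three-quarters of days": lambda y: y * 0.75,
    "almost every day or more": lambda y: y * 1.00,
}


def implementation_dose(visits):
    """Percent of days the mother reported implementing the activities.

    visits: list of (days_since_previous_visit, maternal_report) pairs.
    Intervals longer than 30 days are capped at 30 days.
    """
    earned = possible = 0.0
    for days, report in visits:
        y = min(days, 30)  # cap the interval at 30 days
        earned += CREDIT[report](y)
        possible += y
    return 100.0 * earned / possible
```

For example, two 14-day intervals reported as “about one-half of days” and “almost every day or more” earn (7 + 14) of 28 possible credits, a score of 75%.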
Developmental outcome measures

The Bayley Scales of Infant Development – II (BSID) [37] was selected as the main outcome measure for this trial because it has been used extensively in various L/LMIC. The BSID underwent pilot-testing at each site to verify validity in the local context, and a few items were slightly modified to make it more culturally appropriate (e.g., image of a sandal instead of a shoe). Evaluators across the sites were trained to standards in joint 4-day workshops conducted by experts before each yearly evaluation. The BSID was administered directly to each child by certified study evaluators, who were masked to the children’s birth history and randomization, in the appropriate language with standard material. Both the Mental Developmental Index (MDI) and Psychomotor Developmental Index (PDI) were used to measure developmental outcomes. Scores from the 36-month assessment, obtained just after the completion of the EDI, were used in this analysis as an indicator of treatment outcome.
Health and sociodemographic measures

Perinatal and neonatal health variables were obtained from records kept by the FIRST BREATH Trial [34]: child gender, birth weight (1500 g–2499 g, 2500 g–2999 g, 3000+ g), gestational age (28–36 weeks, 37+ weeks), number of prenatal visits (0, 1–3, 4+), and parity. Additional child health variables obtained as part of this trial at 12 months of age included weight for age/sex (<5th, 5th–14th, 15th+ percentile) and complete immunization status. Family demographic variables were obtained at enrollment in BRAIN-HIT using a structured parent interview: maternal age, education (none and illiterate, none but literate or primary, literate with some secondary), family assets, and home living standard. The presence of 11 family assets (e.g., radio, refrigerator, bicycle) was tallied as a Family Resources Index and classified into three levels (0–1, 2–4, 5+). A Home Living Standard Index was calculated based on seven indicators (e.g., home building material, water source, type of toilet) and classified into three levels (0–4, 5–7, 8+). A socio-economic status (SES) measure was used to classify participants into three groups (quintiles 1–3, 4, 5) [38].
Statistical analysis

Descriptive statistics were computed for child health and family demographic characteristics, treatment dose indicators (home visit dose and program implementation dose), and developmental outcomes (MDI and PDI at 36 months) for all individuals randomized to receive EDI. Child health and demographic characteristics were summarized separately for those randomized to receive EDI and included in the treatment dose analysis and for those excluded from this analysis; differences in mean values for continuous variables were tested using t-tests, and categorical measures were tested using chi-square and Fisher exact tests. A Pearson correlation was computed between the two treatment dose indicators.

Aim 1

In the absence of established criteria for adequate treatment dose for EDI, and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had the lowest dose and those in quintile 5 the highest dose of the indicator in question. Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI.
In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. If the omnibus 4-degree-of-freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred.

Aim 2

To evaluate associations with treatment dose, all sociodemographic and child health variables and trial location were initially entered into separate linear regression models to predict each treatment dose variable. Variables were selected for entry into multivariable models if they demonstrated P ≤ 0.20 in univariate association with the dose variable in question when adjusted either by location alone or by location and the variable-by-location interaction. We employed backward elimination with an alpha of 0.20 to choose the final models.
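For illustration, the Aim 1 quintile grouping can be sketched as a simple rank-based split (the trial's statistical software and exact tie-handling are not specified in the text, so this is an assumption):

```python
def quintiles(values):
    """Assign each value a quintile 1-5 by rank (lowest fifth -> quintile 1)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    q = [0] * n
    for rank, i in enumerate(order):
        q[i] = rank * 5 // n + 1
    return q


def mean_outcome_by_quintile(doses, outcomes):
    """Mean 36-month outcome (e.g., MDI) within each treatment dose quintile."""
    groups = quintiles(doses)
    means = {}
    for level in range(1, 6):
        vals = [o for o, g in zip(outcomes, groups) if g == level]
        means[level] = sum(vals) / len(vals)
    return means
```

Formal comparison across quintiles (e.g., quintiles 1–2 versus 3–5) then proceeds through the general linear models described above, not through these raw means alone.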
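The Aim 2 backward-elimination rule can be sketched generically. This assumes a caller-supplied p_value(current_terms, term) function wrapping the fitted regression; it illustrates the selection loop with alpha = 0.20 and is not the trial's actual code:

```python
def backward_eliminate(terms, p_value, alpha=0.20):
    """Drop, one at a time, the term with the largest p-value above alpha.

    terms: candidate predictor names.
    p_value: function (current_terms, term) -> p-value of `term` when the
    model is refit with `current_terms` (assumed to be supplied by the caller).
    """
    terms = list(terms)
    while terms:
        worst = max(terms, key=lambda t: p_value(terms, t))
        if p_value(terms, worst) <= alpha:
            break  # every remaining term meets the retention threshold
        terms.remove(worst)
    return terms
```

In this trial, the candidate sets were the variables passing the univariate P ≤ 0.20 screen (e.g., maternal education, parity, family resources), with location retained as an adjustment.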
Results
Study sample composition

The sample size was determined to provide adequate power to test EDI treatment efficacy, the primary aim of BRAIN-HIT. As outlined in Figure 1, of 540 births screened from January 2007 through June 2008, 438 (81% of those screened) were eligible. Only 3 infants were ineligible due to low birth weight or the neurological exam; the remaining 99 were ineligible because their mothers could not commit to staying in the study communities or could not be reached for screening within 7 days of birth. Informed consent was obtained for 407 (93% of eligible; 165 resuscitated, 242 not resuscitated), who were randomized into either EDI or a control intervention [20]. The 204 assigned to receive EDI (50.1% of those randomized) are relevant for this study, of whom 145 (71.1% of those assigned to EDI) were included in this analysis (Table 1). These participants had a mean age of 36.8 months (range = 35–41) at the time of the developmental assessment.

Figure 1. Study flow chart.

Table 1. Child health and family demographic characteristics of study sample. (a) Measured at enrollment unless otherwise indicated. (b) Differences in mean values for continuous variables were tested using t-tests, and categorical measures were tested using chi-square and Fisher exact tests; bold indicates significant p < .05.

Exclusions from this analysis were due to death (n = 7), withdrawal (n = 6), loss to follow-up (n = 5), incomplete 36-month BSID-II due to administration errors (n = 39), home-visit data unavailable (n = 1), or another reason (n = 1). Three children who completed the 36-month evaluation but discontinued the EDI prior to the end of the study were included in the analysis (two because the family had insufficient time to fulfill study requirements and one because the family moved).
When compared to those who were included in the analysis (Table 1), children excluded (n = 59) were significantly (p < .05) more likely to have been below the 5th percentile in weight and to have completed all immunizations at 12 months of age, and their mothers were more likely to have had prenatal care, lower parity, and more family resources.
Description of developmental outcomes and treatment dose

The sample had an unadjusted mean (SD) MDI of 101.2 (10.4) and PDI of 106.8 (14.1) at 36 months. Average home visit dose was 91.4% over 36 months, with 8,990 of 9,841 visits completed on schedule every two weeks, and 95% of the participants achieved a home visit dose of 80% or greater. The most common reason for a missed visit was the inability to locate the mother and child at home at the scheduled time (40.3%), for example because the family was travelling away from the home or had moved temporarily. The second most common category comprised reasons related to the parent trainer, such as being ill or having a conflict with another meeting (23.9%). Child or mother unavailable for other reasons (15.3%), for example because the mother was working or the baby was sleeping, and weather (10.0%) were the only other reasons accounting for at least 10% of the missed visits. The mother or family directly refusing the home visit at the scheduled time was rare (2.5%). Mothers reported engaging the child in the assigned activities on an average of 62.5% of days throughout the 36-month period. This program implementation dose equates to practicing the intervention activities 4.4 days per week, or 674 days over the 36-month trial period. Home visit dose was modestly correlated with program implementation dose (r = 0.35).
Parent trainers estimated at the end of the trial that 66.2% of families practiced the intervention “always” or “almost always” throughout the 36 months.

Associations between treatment dose and developmental outcomes

Higher home visit dose was associated with higher MDI at 36 months (Figure 2). Specifically, quintiles 1–2 had a mean MDI of 98, while quintiles 3–5 had a mean MDI of 103 (Table 2).
General linear models of MDI supported this relationship when home visit dose was entered as the primary predictor and site, resuscitation status at birth, and 12-month MDI were entered as covariates (Table 2). Most notably, in the model with only home visit dose (Model 1) and the model which included site (Model 2), mean MDI for quintiles 1 and 2 was significantly lower than for quintiles 3–5. A step-down test comparing mean MDI for those with home visit dose below the 40th percentile (quintiles 1 and 2) to those above the 40th percentile (quintiles 3–5) provided estimates of 97.8 and 103.4 (p = 0.0017), respectively. Adjusting by site increased the magnitude of the difference by at least 25% (96.8 vs. 103.9, p = 0.0005). When adjusting for 12-month MDI and the interaction between dose and 12-month MDI (Model 5), the adjusted mean scores for the dose quintiles mirrored unadjusted scores, with quintiles 1–2 consistently lower than quintiles 3–5 (p < 0.0001). The lower limit for quintile 3 includes those receiving a minimum of 91% of all the planned home visits.

Figure 2. Mental (MDI) and Psychomotor (PDI) Development Index by treatment dose quintiles.

Table 2. Treatment dose modeling results and mean mental (MDI) and psychomotor (PDI) developmental index by quintiles. (a) Bold indicates significant p < .05 for the relationship between the treatment dose indicator and the developmental outcome.

Based on the same general linear model analysis (Table 2), home visit dose was not significantly associated with PDI at 36 months when considered by itself (Model 1) or when adjusted by site, resuscitation status, and 12-month PDI (Models 2–4). However, there was a positive association between home visit dose and 36-month PDI when adjusting for the 12-month PDI and its interaction with dose (Model 5).
Here again, a home visit dose above the 40th percentile (quintiles 3–5) resulted in higher estimated PDI (108.5–111.0) compared with below this percentile (103.3–106.5). Higher program implementation dose was associated with slightly higher MDI at 36 months compared to a lesser dose. Quintiles 1–2 had a mean MDI of 100 or lower, while quintiles 4–5 had a mean MDI of 102 or higher (Table 2), and the difference appears larger when considering the medians of these quintiles. In a general linear model of 36-month MDI (Table 2), program implementation dose was not a significant predictor by itself (Model 1). However, when adjusting for 12-month MDI and its interaction with dose (Model 5), greater program implementation dose was associated with higher MDI (adjusted mean Q1 = 100.1 vs. Q5 = 103.1, p = 0.0434). PDI at 36 months was not linearly associated with program implementation dose (Table 2). Rather, mean PDI across quintiles followed a U-shape, with the highest mean scores for quintiles 1, 4, and 5. The lower limit for quintile 4 includes those implementing activities on 67% of days on average over the trial period.
Factors associated with treatment dose

The following variables were associated with home visit dose at P ≤ 0.20 when adjusted either by location or by the location-by-variable interaction: maternal education, parity, family resources, prenatal visits, birth attendant, 1-minute Apgar, preterm birth, and child’s weight at 36 months. These variables were entered into a generalized linear model along with those interaction terms with location that were significant. After backward elimination, the final model (R2 = .19) included parity (82.9 ± 3.0 [adjusted mean ± standard error] with 1 child, 79.7 ± 2.8 with 2–3 children, and 90.8 ± 3.5 with 4+ children [p = 0.0382]), 1-minute Apgar (86.9 ± 2.6 for <9 and 82.0 ± 2.6 for 9+ [p = 0.1754]), location (adjusted mean range 75.6–94.1 [p = 0.0019]), preterm birth (p = 0.4571), and the preterm-by-location interaction (p = 0.0020). The relationship between prematurity and home visit dose differed substantially across locations. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose in both groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose for preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term).
The following variables were associated with program implementation dose at P ≤ 0.20 when adjusted either by location or by the location-by-variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1-minute Apgar, preterm birth, and weight at birth and at 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant. After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence corresponded to a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted mean range 59.5–69.1 [p = 0.0019]). None of the interaction terms were retained in the final model.
There was a substantial difference in relationship to home visits dose by prematurity across location. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose between groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose in preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term). The following variables were associated with program implementation dose at P ≤ 0.20 when either adjusted by location or by the location by variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1 minute Apgar, preterm birth, and weight at birth, 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant. After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence resulted in a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted mean ranged from 59.5 - 69.1, [p = 0.0019]). None of the interaction terms were retained in the final model.
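The program implementation dose score underlying these results (described under Treatment dose indicators) can be sketched in a few lines. This is an illustrative reconstruction from the text, not the trial's actual code; the function name and data layout are ours, and the credit rule follows the text verbatim, including the fixed single credit for "not at all".

```python
# Illustrative sketch of the program implementation dose score: each
# inter-visit interval earns credits based on the mother's report of how
# often the assigned activities were practiced; credits are summed, divided
# by the possible credits, and multiplied by 100.

# Credit fraction per maternal report category, as given in the text.
FRACTION = {
    "about one-quarter of days or less": 0.25,
    "about one-half of days": 0.50,
    "about three-quarters of days": 0.75,
    "almost every day or more": 1.00,
}

def implementation_dose(intervals):
    """Percent of days between visits on which activities were practiced.

    intervals: list of (days_since_previous_visit, report_category) pairs.
    Gaps longer than 30 days are capped at 30, as in the protocol.
    """
    earned = possible = 0.0
    for days, category in intervals:
        days = min(days, 30)  # cap long gaps at 30 days
        possible += days
        if category == "not at all":
            earned += 1  # credit_n = 1, following the text verbatim
        else:
            earned += days * FRACTION[category]
    return 100.0 * earned / possible if possible else 0.0
```

For example, a mother reporting "about one-half of days" and then "almost every day or more" across two 14-day intervals would score 75, consistent with the trial-wide average of 62.5% reported above.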
Conclusions
The body of research in which the current study is embedded establishes quite consistently that, within an effective EDI, a higher dose is generally associated with better developmental outcomes. A large body of research indicates that EDI can improve the early development of children in L/LMIC. Therefore, EDI should be one approach used in L/LMIC to lay the foundation for improving the longer-term outcomes of their populations and interrupting the intergenerational transmission of poverty [26]. Yet, for this to be successful, efforts to implement EDI for children need to ensure that program elements reach the children at the intended intensity. Groups of children at risk of receiving a lower treatment dose may require special attention to ensure adequate effect.
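The dose-quintile grouping used in the Aim 1 analyses (dividing each treatment dose indicator into quintiles and comparing mean 36-month MDI/PDI across them) can be sketched as follows. This is a minimal assumed illustration of rank-based quintile assignment, not the study's statistical code; function names are ours, and the trial's general linear models with covariates are not reproduced here.

```python
# Minimal sketch: assign each participant's dose to a quintile (1 = lowest,
# 5 = highest) and summarize a developmental outcome by quintile group.
import statistics

def quintile(value, sorted_doses):
    """Rank-based quintile (1-5) of `value` within the sorted dose list.

    Intended for values drawn from the list itself; values below the
    minimum are not handled.
    """
    rank = sum(d <= value for d in sorted_doses)  # position in the ranking
    return min(5, 1 + (rank - 1) * 5 // len(sorted_doses))

def mean_outcome_by_quintile(doses, outcomes):
    """Mean outcome (e.g., 36-month MDI) for each dose quintile."""
    sorted_doses = sorted(doses)
    groups = {q: [] for q in range(1, 6)}
    for d, y in zip(doses, outcomes):
        groups[quintile(d, sorted_doses)].append(y)
    return {q: statistics.mean(v) for q, v in groups.items() if v}
```

With a dose-outcome gradient like the one reported for home visits dose, the quintile means rise from group 1 to group 5; the study then tested such differences with omnibus and step-down tests.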
[ "Background", "Study population", "Intervention procedures", "Treatment dose indicators", "Developmental outcome measures", "Health and sociodemographic measures", "Statistical analysis", "Aim 1", "Aim 2", "Study sample composition", "Description of developmental outcomes and treatment dose", "Associations between treatment dose and developmental outcomes", "Factors associated with treatment dose" ]
[ "Programs of early developmental intervention (EDI) implemented in the first years of life in children born with, or at risk for, neurodevelopmental disability have been shown to improve cognitive developmental outcomes and, consequently, their quality of life. EDI includes various activities designed to enhance a young child’s development, directly via structured experiences and/or indirectly through influencing the caregiving environment [1]. The positive effects of EDI on early child development have been reported in numerous controlled trials in high-income countries [2, 3], which have been confirmed through meta-analyses [4, 5] and expert reviews [6–8]. Several trials of EDI with risk groups of infants and young children have also been conducted in low or low-middle income countries (L/LMIC), which have also documented positive effects on child development, by itself or in combination with nutritional supplementation [9–16].\nThe involvement of parents in EDI is critical for achieving positive outcomes [1, 17–19], which can be optimized by implementing EDI through home visits by a parent trainer. This modality also matches well the circumstances of many L/LMIC, where families often live far away from, or have other barriers to reaching, providers that could implement EDI [20]. An important aspect of determining the efficacy of EDI is the degree to which dosage impacts outcomes, and what constitutes “sufficient dosage” [21]. Sufficient dosage with regard to EDI refers to a participant receiving adequate exposure to the intervention for it to be efficacious. Program intensity, or dosage, typically is measured by the quantity and quality of the intervention actually achieved when implemented [21, 22], although it ideally should be determined based on the needs of the population at hand [23]. 
Common indicators of dosage for EDI include the amount of time spent in a child development center, the number of home visits completed by a specialist training a parent and/or engaging the child, or some indication of parent engagement in the EDI.\nWhereas there is more information linking outcomes with treatment dose for pre-school programs [21, 22], few studies of EDI implemented in the first three years of life have conducted such analyses, despite their importance. A few previous studies generally indicate that children who receive more exposure to EDI display greater improvements in their cognitive development compared to those who receive less, even when differences in exposure were modest. Specifically, children who received EDI (home and center based) for more than 400 days, through age 3, exhibited significant improvements in cognitive development, while smaller but similar effects were evident among children who received treatment between 350 and 400 days [24]. Another study reported that optimal cognitive development of children in EDI was not associated with their background characteristics, such as birth weight or maternal education, but with three aspects related to treatment dosage: number of home visits received, days attending child care, and number of parent meetings attended [18].\nHowever, these studies, as well as the broader discussions of implementation quality, have focused on programs conducted in the United States [21, 22]. The applicability of this information to L/LMIC contexts is unclear at present. The only EDI treatment dose study conducted in a L/LMIC that we are aware of showed that, as the frequency of home visits increased from none, through monthly, biweekly, and weekly, developmental gains at 30 months of age increased as well [25]. 
Given the potential for EDI to significantly impact the development of children, and therefore the economic development of nations in the long-term [26], it will be important more broadly to examine treatment dose in L/LMIC to inform the implementation of such efforts on a larger scale.\nParents may vary in their level of participation in home visit EDI programs due to a variety of factors. Previous research has indicated higher treatment dose among families participating in EDI who have better financial and social resources [20, 27–30]. Perinatal, neonatal, and other child health characteristics might also predict treatment dose for an intervention intending to promote the child’s development. Yet, studies that have examined both social and health predictors of EDI treatment dose are rare and have not considered a broad range of possible predictors [15]. It is important to examine various such factors in L/LMIC because they can identify processes that may influence parents’ adherence with EDI and those who may need additional support.\nIn light of these gaps in our understanding, the aim of the current study was to determine (1) whether there is a dose effect in a home visiting EDI implemented in three L/LMIC and (2) what sociodemographic and health factors are associated with variation in treatment dose. We examined two indicators of dose of EDI. As in previous studies, the number of home visits completed over the course of the EDI was measured. Another important treatment element is the extent to which parents implement the assigned developmental activities with the child during the time between home visits, which we refer to as the program implementation dose. Despite its logical importance to the success of home visiting EDI, we are not aware that parent program implementation dose has been examined in EDI. 
We hypothesize that increased dose as measured by either indicator will be associated with better developmental outcomes from EDI when implemented in three L/LMIC.", "Infants with birth asphyxia (resuscitated) and infants without birth asphyxia or other perinatal complications (non-resuscitated), born from January 2007 through June 2008 in rural communities in three sites in India, Pakistan and Zambia, were matched for country and chronological time and randomly selected from those enrolled in the First Breath Trial [34]. Infants were screened for enrollment into the BRAIN-HIT during the 7-day follow-up visit after birth [31], and were ineligible if: (1) birth weight was less than 1500 grams, (2) neurological examination at seven days of age (grade III by Ellis classification) [35], was severely abnormal (because they were not expected to benefit from EDI), (3) mother was less than 15 years old or unable/unwilling to participate, or (4) mother was not planning to stay in the study area for the next three years. Birth asphyxia was defined as the inability to initiate or sustain spontaneous breathing at birth using WHO definition (biochemical evidence of birth asphyxia could not be obtained in these settings) [36]. A list of potential enrollees was distributed to the investigators in each country to obtain written consent for the study, which was obtained during the second week after birth and before randomization to intervention conditions of the BRAIN-HIT.", "Investigators at each research site selected EDI parent trainers who were trained in an initial 5-day workshop, which was led by the same experts at each research site. A second workshop was conducted before participating children began to reach 18 months of age to adapt the approach to children up to 36 months, again conducted by the same experts at each site. 
To maintain quality of implementation, the trainers were supervised with observations during actual home visits, and constructive feedback was provided on a regular basis.\nWhenever possible, each parent–child pair was assigned to the same trainer throughout the trial; the trainer was scheduled to make a home visit every two weeks over the 36-month trial period. As elaborated elsewhere [31, 32], the trainer presented one or two playful learning activities during each visit targeting developmentally appropriate milestones. These activities cover a spectrum of abilities across the cognitive, social and self-help, gross and fine motor, and language domains. The parent practiced the activity in the presence of the trainer, who provided feedback. Cards depicting the activities were then left with the parent, who was encouraged to apply the activities in daily life with the child until the next home visit. The trainer introduced new activities in subsequent visits to enhance the child’s developmental competencies.", "Two indicators of treatment dose were calculated. Home visit dose was measured based on each parent trainer keeping a record of visit dates. Following the first visit, visits were scheduled to occur every two weeks until the completion of the trial. A home visit was completed on schedule if it occurred within its assigned two-week window following the preceding visit. We calculated the percentage of scheduled home visits completed for each participant for the full 36-month trial. The reason for each missed visit was coded as due to illness, weather, death in family, refusal, child or mother unavailable for another reason, parent trainer schedule conflict, and other reasons.\nProgram implementation dose was measured based on maternal report obtained by the trainer at each home visit of the proportion of days the assigned activities had been implemented since the previous visit. First, the number of days between subsequent completed visits was calculated (Yn). 
If the time between two home visits extended beyond 30 days, a maximum of 30 days was used. Program implementation credits were assigned for the time period between visits based on the mother’s report of implementation of activities, as follows: “not at all” (creditn = 1), “about one-quarter of days or less” (creditn = Yn*.25), “about one-half of days” (creditn = Yn*.50), “about three-quarters of days” (creditn = Yn*.75), and “almost every day or more” (creditn = Yn). The credits were then added together over the trial period, divided by the number of possible credits, and multiplied by 100. Thus, this score estimates the percent of days between each home visit that the mother reported implementing child stimulation activities. As an additional descriptive measure of treatment dose, the parent trainer was surveyed at the conclusion of the study to estimate how often the activities had been implemented between the home visits, using a five-point scale (from “never” to “always”).", "The Bayley Scales of Infant Development – II (BSID) [37] was selected as the main outcome measure for this trial because it has been used extensively in various L/LMIC. The BSID underwent pilot-testing at each site to verify validity in the local context and a few items were slightly modified to make it more culturally appropriate (e.g., image of a sandal instead of a shoe). Evaluators across the sites were trained to standards in joint 4-day workshops conducted by experts before each yearly evaluation. The BSID was administered directly to each child by certified study evaluators, who were masked to the children’s birth history and randomization, in the appropriate language with standard material. Both the Mental Developmental Index (MDI) and Psychomotor Developmental Index (PDI) were used to measure developmental outcomes. 
Scores from the 36-month assessment, obtained just after the completion of the EDI, were used in this analysis as an indicator of treatment outcome.", "Perinatal and neonatal health variables were obtained from records kept by the FIRST BREATH Trial [34]: child gender, birth weight (1500 g-2499 g, 2500 g-2999 g, 3000 + g), gestational age (28–36 weeks, 37+ weeks), number of prenatal visits (0, 1–3, 4+), and parity. Additional child health variables obtained as part of this trial at 12 months of age included weight for age/sex (<5th, 5th-14th, 15th + percentile) and complete immunization status.\nFamily demographic variables were obtained at enrollment in BRAIN-HIT using a structured parent interview: maternal age, education (none and illiterate, none but literate or primary, literate with some secondary), family assets and home living standard. The presence of 11 family assets (e.g., radio, refrigerator, bicycle) were tallied as a Family Resources Index and classified into three levels (0–1, 2–4, 5+). A Home Living Standard Index was calculated based on seven indicators (e.g., home building material, water source, type of toilet) and classified into three levels (0–4, 5–7, 8+). A socio-economic status (SES) measure was used to classify participants into three groups (quintile 1–3, 4, 5) [38].", "Descriptive statistics were computed for child health and family demographic characteristics, treatment dose indicators (home visits dose and protocol implementation dose), and developmental outcomes (MDI and PDI at 36-months) for all individuals randomized to receive EDI. Child health and demographic characteristics were summarized separately for those randomized to receive EDI and included in the treatment dose analysis and those who were excluded from this analysis, and differences in mean values for continuous variables were tested using t-tests and categorical measures were tested using chi-square and Fisher exact tests. 
A Pearson correlation statistic was computed between the treatment dose characteristics.\n Aim 1 In the absence of established criteria for adequate treatment dose for EDI and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had lowest dose and those in quintile 5 had the highest dose of the indicator in question. Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI. In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. If the omnibus 4-degree of freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred.\n Aim 2 To evaluate associations with treatment dose, initially all sociodemographic and child health variables and trial location were entered into linear regression models separately to predict both treatment dose variables. Selected for entry in multivariable models were variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question when either adjusted by location alone or location and the variable by location interaction. We employed backward elimination with an alpha of 0.20 to choose the final models.", "In the absence of established criteria for adequate treatment dose for EDI and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had lowest dose and those in quintile 5 had the highest dose of the indicator in question. Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI. In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. 
If the omnibus 4-degree of freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred.", "To evaluate associations with treatment dose, initially all sociodemographic and child health variables and trial location were entered into linear regression models separately to predict both treatment dose variables. Selected for entry in multivariable models were variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question when either adjusted by location alone or location and the variable by location interaction. We employed backward elimination with an alpha of 0.20 to choose the final models.", "The sample size was determined to provide adequate power to test EDI treatment efficacy, the primary aim of BRAIN-HIT. As outlined in Figure 1, of 540 births screened from January 2007 through June 2008, 438 (81% of screened) were eligible. Only 3 infants were ineligible due to low birth weight or neurological exam; the remaining 99 were excluded because their mothers could not commit to staying in the study communities or could not be reached for screening within 7 days of birth. Informed consent was obtained for 407 (93% of eligible; 165 resuscitated, 242 not resuscitated) who were randomized into either EDI or a control intervention [20]. The 204 assigned to receive EDI (50.1% of those randomized) are relevant for this study, of whom 145 (71.1% of those assigned to EDI) were included in this analysis (Table 1). 
These participants had mean = 36.8 (range = 35-41) months of age at the time of the developmental assessment. Figure 1\nStudy flow chart.\n\nChild health and family demographic characteristics of study sample\n\naMeasured at enrollment unless otherwise indicated.\n\nbDifferences in mean values for continuous variables were tested using t-tests and categorical measures were tested using chi-square and Fisher exact tests; bold indicates significant p < .05.\nExclusions from this analysis were due to death (n = 7), withdrawal (n = 6), loss to follow up (n = 5), incomplete 36-month BSID-II (n = 39) due to administration errors, home-visit data unavailable (n = 1), or another reason (n = 1). Three children were included in the analysis who completed the 36-month evaluation but discontinued the EDI prior to the end of the study (two because the family had insufficient time to fulfill study requirements and one because the family moved). When compared to those who were included in the analysis (Table 1), children excluded (n = 59) were significantly (p < .05) more likely to have been below the 5th percentile in weight and to have completed all immunizations at 12 months of age, and their mothers to have had prenatal care, lower parity, and more family resources.
Child or mother unavailable for other reasons (15.3%), for example because the mother was working or baby was sleeping, and weather (10.0%) were the only other reasons accounting for at least 10% of the missed visits. Mother or family directly refusing the home visit at the scheduled time was rare (2.5%).\nMothers reported engaging the child in the assigned activities on an average of 62.5% of days throughout the 36-month period. This protocol implementation dose equates to practicing the intervention activities 4.4 days per week or 674 days over the 36-month trial period. Home visits dose was modestly correlated with protocol implementation dose (r = 0.35). Parent trainers estimated at the end of the trial that 66.2% of families practiced the intervention “always” or “almost always” throughout the 36 months.", "Higher home visits dose was associated with higher MDI at 36 months (Figure 2). Specifically, quintiles 1–2 mean MDI = 98, while quintiles 3–5 mean MDI = 103 (Table 2). General linear models of MDI supported this relationship when home visits dose was entered as a primary predictor and site, resuscitation status at birth, and 12-month MDI were entered as covariates (Table 2). Most notably, in the model with only home visits dose (Model 1) and the model that included site (Model 2), mean MDI for quintiles 1 and 2 was significantly lower than quintiles 3–5. A step-down test comparing mean MDI for those with home visit dose below the 40th percentile (quintiles 1 and 2) to those with home visit dose above the 40th percentile (quintiles 3–5) provided estimates of 97.8 and 103.4 (p = 0.0017), respectively. Adjusting by site increased the magnitude of the difference by at least 25% (96.8 vs. 103.9, p = 0.0005). When adjusting for 12-month MDI and the interaction between dose and 12-month MDI (Model 5), the adjusted mean scores for the dose quintiles mirrored unadjusted scores, with quintiles 1–2 consistently lower than quintiles 3–5 (p < 0.0001). 
The lower limit for quintile 3 includes those receiving a minimum of 91% of all the planned home visits. Figure 2\nMental (MDI) and Psychomotor (PDI) Development Index by treatment dose quintiles.\n\nTreatment dose modeling results and mean mental (MDI) and psychomotor (PDI) developmental index by quintiles\n\naBold indicates significant p < .05 for the relationship between the treatment dose indicator and the developmental outcome.\nBased on the same general linear model analysis (Table 2), home visit dose was not significantly associated with PDI at 36 months when considered by itself (Model 1) or when adjusted by site, resuscitation status, and 12-month PDI (Models 2–4). However, there was a positive association between home visits dose and 36-month PDI when adjusting for the 12-month PDI and its interaction with dose (Model 5). Here again, a home visit dose above the 40th percentile (quintiles 3–5) resulted in higher estimated PDI (108.5 – 111.0) compared with below this percentile (103.3 – 106.5).\nHigher program implementation dose was associated with slightly higher MDI at 36 months compared to those with a lesser dose. Quintiles 1–2 had a mean MDI of 100 or lower, while quintiles 4–5 had a mean MDI of 102 or higher (Table 2), and the difference appears larger when considering the medians of these quintiles. In a general linear model of 36-month MDI (Table 2), program implementation dose was not a significant predictor by itself (Model 1). However, prediction of program implementation dose when adjusting for 12-month MDI and its interaction with dose (Model 5) indicated that greater dose was associated with higher MDI (adjusted mean Q1 = 100.1 vs. Q5 = 103.1, p = 0.0434). PDI at 36 months was not linearly associated with program implementation dose (Table 2). Rather, mean PDI across quintiles followed a U-shape, with the highest mean scores for quintiles 1, 4 and 5. 
The lower limit for quintile 4 includes those implementing activities on 67% of days on average over the trial period.", "The following variables were associated with home visits dose at P ≤ 0.20 when adjusted either by location or by the location by variable interaction: maternal education, parity, family resources, prenatal visits, birth attendant, 1-minute Apgar, preterm birth, and child’s weight at 36 months. These variables were entered into a generalized linear model along with those interaction terms with location that were significant. After backward elimination, the final model (R2 = .19) included parity (82.9 ± 3.0 [adjusted mean ± standard error] with 1 child, 79.7 ± 2.8 with 2–3 children, and 90.8 ± 3.5 with 4+ children [p = 0.0382]), 1-minute Apgar (86.9 ± 2.6 for <9 and 82.0 ± 2.6 for 9+ [p = 0.1754]), location (adjusted means ranged from 75.6 to 94.1 [p = 0.0019]), preterm birth (p = 0.4571), and the preterm by location interaction (p = 0.0020). The relationship between home visits dose and prematurity differed substantially across locations. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose in both groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose for preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term).\nThe following variables were associated with program implementation dose at P ≤ 0.20 when adjusted either by location or by the location by variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1-minute Apgar, preterm birth, and weight at birth, 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant. After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence was associated with a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted means ranged from 59.5 to 69.1 [p = 0.0019]). None of the interaction terms were retained in the final model." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Intervention procedures", "Treatment dose indicators", "Developmental outcome measures", "Health and sociodemographic measures", "Statistical analysis", "Aim 1", "Aim 2", "Results", "Study sample composition", "Description of developmental outcomes and treatment dose", "Associations between treatment dose and developmental outcomes", "Factors associated with treatment dose", "Discussion", "Conclusions" ]
[ "Programs of early developmental intervention (EDI) implemented in the first years of life in children born with, or at risk for, neurodevelopmental disability have been shown to improve cognitive developmental outcomes and, consequently, their quality of life. EDI includes various activities designed to enhance a young child’s development, directly via structured experiences and/or indirectly through influencing the caregiving environment [1]. The positive effects of EDI on early child development have been reported in numerous controlled trials in high-income countries [2, 3], which have been confirmed through meta-analyses [4, 5] and expert reviews [6–8]. Several trials of EDI with risk groups of infants and young children have also been conducted in low or low-middle income countries (L/LMIC), which have also documented positive effects on child development, by itself or in combination with nutritional supplementation [9–16].\nThe involvement of parents in EDI is critical for achieving positive outcomes [1, 17–19], which can be optimized by implementing EDI through home visits by a parent trainer. This modality also matches well the circumstances of many L/LMIC, where families often live far away from, or have other barriers to reaching, providers that could implement EDI [20]. An important aspect of determining the efficacy of EDI is the degree to which dosage impacts outcomes, and what constitutes “sufficient dosage” [21]. Sufficient dosage with regard to EDI refers to a participant receiving adequate exposure to the intervention for it to be efficacious. Program intensity, or dosage, typically is measured by the quantity and quality of the intervention actually achieved when implemented [21, 22], although it ideally should be determined based on the needs of the population at hand [23]. 
Common indicators of dosage for EDI include the amount of time spent in a child development center, the number of home visits completed by a specialist training a parent and/or engaging the child, or some indication of parent engagement in the EDI.\nWhereas there is more information linking outcomes with treatment dose for pre-school programs [21, 22], few studies of EDI implemented in the first three years of life have conducted such analyses, despite their importance. A few previous studies generally indicate that children who receive more exposure to EDI display greater improvements in their cognitive development compared to those who receive less, even when differences in exposure were modest. Specifically, children who received EDI (home and center based) for more than 400 days, through age 3, exhibited significant improvements in cognitive development, while smaller but similar effects were evident among children who received treatment between 350 and 400 days [24]. Another study reported that optimal cognitive development of children in EDI was not associated with their background characteristics, such as birth weight or maternal education, but with three aspects related to treatment dosage: number of home visits received, days attending child care, and number of parent meetings attended [18].\nHowever, these studies, as well as the broader discussions of implementation quality, have focused on programs conducted in the United States [21, 22]. The applicability of this information to L/LMIC contexts is unclear at present. The only EDI treatment dose study conducted in a L/LMIC that we are aware of showed that, as the frequency of home visits increased from none, through monthly, biweekly, and weekly, developmental gains at 30 months of age increased as well [25]. 
Given the potential for EDI to significantly impact the development of children, and therefore the long-term economic development of nations [26], it will be important to examine treatment dose more broadly in L/LMIC to inform the implementation of such efforts on a larger scale.\nParents may vary in their level of participation in home visit EDI programs for a variety of reasons. Previous research has indicated higher treatment dose among families participating in EDI who have better financial and social resources [20, 27–30]. Perinatal, neonatal, and other child health characteristics might also predict treatment dose for an intervention intended to promote the child’s development. Yet studies that have examined both social and health predictors of EDI treatment dose are rare and have not considered a broad range of possible predictors [15]. It is important to examine such factors in L/LMIC because they can identify processes that may influence parents’ adherence to EDI and identify families who may need additional support.\nIn light of these gaps in our understanding, the aim of the current study was to determine (1) whether there is a dose effect in a home visiting EDI implemented in three L/LMIC and (2) what sociodemographic and health factors are associated with variation in treatment dose. We examined two indicators of EDI dose. As in previous studies, the number of home visits completed over the course of the EDI was measured. Another important treatment element is the extent to which parents implement the assigned developmental activities with the child between home visits, which we refer to as the program implementation dose. Despite its logical importance to the success of home visiting EDI, we are not aware of any prior study that has examined parent program implementation dose in EDI. 
We hypothesize that increased dose as measured by either indicator will be associated with better developmental outcomes from EDI when implemented in three L/LMIC.", "Data used to examine the association between treatment adherence and developmental outcomes are from one of the conditions of the Brain Research to Ameliorate Impaired Neurodevelopment - Home-based Intervention Trial (BRAIN-HIT), a randomized controlled trial (RCT) detailed elsewhere (clinicaltrials.gov ID# NCT00639184) [31, 32]. BRAIN-HIT, implemented in rural communities of India, Pakistan, and Zambia, aimed to evaluate the efficacy of an EDI program on the development of children in L/LMIC who are at risk for neurodevelopmental disability due to birth asphyxia that required resuscitation. A group of children who did not require resuscitation at birth was evaluated using the same protocol to compare the efficacy of the EDI in those with and without birth asphyxia.\nAs detailed elsewhere [32, 33], mental development at 36 months of age was better in children with birth asphyxia who had received the EDI compared with those in the control condition (effect size = 4.6 points on the standardized scale from the Bayley Scales of Infant Development, see below), but there was no difference between trial conditions in the children without birth asphyxia. Psychomotor development was likewise higher in the EDI group, in this case for both the children with (effect size = 5.4) and without (effect size = 6.1) birth asphyxia, compared to those in the control condition. The issue of the effect of treatment dose on development is only relevant for the active EDI condition, and not the comparison condition, which was intended to control for placebo, observation, and time effects and lacked a theoretically based developmental intervention. Therefore, only data from those randomized to receive EDI were analyzed in the present research, making this an observational study of that cohort. 
BRAIN-HIT was approved by the Institutional Review Board at each site and was conducted in accord with prevailing ethical principles.\n Study population Infants with birth asphyxia (resuscitated) and infants without birth asphyxia or other perinatal complications (non-resuscitated), born from January 2007 through June 2008 in rural communities at three sites in India, Pakistan, and Zambia, were matched for country and chronological time and randomly selected from those enrolled in the First Breath Trial [34]. Infants were screened for enrollment into BRAIN-HIT during the 7-day follow-up visit after birth [31], and were ineligible if: (1) birth weight was less than 1500 grams; (2) the neurological examination at seven days of age was severely abnormal (grade III by Ellis classification) [35], because such infants were not expected to benefit from EDI; (3) the mother was less than 15 years old or unable/unwilling to participate; or (4) the mother was not planning to stay in the study area for the next three years. Birth asphyxia was defined as the inability to initiate or sustain spontaneous breathing at birth, using the WHO definition (biochemical evidence of birth asphyxia could not be obtained in these settings) [36]. A list of potential enrollees was distributed to the investigators in each country to obtain written consent for the study, which was obtained during the second week after birth and before randomization to the intervention conditions of BRAIN-HIT.\n Intervention procedures Investigators at each research site selected EDI parent trainers who were trained in an initial 5-day workshop, led by the same experts at each research site. A second workshop was conducted before participating children began to reach 18 months of age to adapt the approach to children up to 36 months, again conducted by the same experts at each site. To maintain quality of implementation, the trainers were supervised through observations during actual home visits, and constructive feedback was provided on a regular basis.\nEach parent–child pair was assigned to the same trainer throughout the trial whenever possible, who was scheduled to make a home visit every two weeks over the 36-month trial period. As elaborated elsewhere [31, 32], the trainer presented one or two playful learning activities during each visit targeting developmentally appropriate milestones. These activities covered a spectrum of abilities across the cognitive, social and self-help, gross and fine motor, and language domains. 
The parent practiced the activity in the presence of the trainer, who provided feedback. Cards depicting the activities were then left with the parent, who was encouraged to apply the activities in daily life with the child until the next home visit. The trainer introduced new activities in subsequent visits to enhance the child’s developmental competencies.\n Treatment dose indicators Two indicators of treatment dose were calculated. Home visit dose was measured based on each parent trainer keeping a record of visit dates. 
Following the first visit, visits were scheduled to occur every two weeks until the completion of the trial. A home visit was completed on schedule if it occurred within its assigned two-week window following the preceding visit. We calculated the percentage of scheduled home visits completed for each participant for the full 36-month trial. The reason for each missed visit was coded as due to illness, weather, death in family, refusal, child or mother unavailable for another reason, parent trainer schedule conflict, or other reasons.\nProgram implementation dose was measured based on maternal report, obtained by the trainer at each home visit, of the proportion of days the assigned activities had been implemented since the previous visit. First, the number of days between subsequent completed visits was calculated (Y_n). If the time between two home visits extended beyond 30 days, a maximum of 30 days was used. Program implementation credits were assigned for the time period between visits based on the mother’s report of implementation of activities, as follows: “not at all” (credit_n = 1), “about one-quarter of days or less” (credit_n = Y_n × 0.25), “about one-half of days” (credit_n = Y_n × 0.50), “about three-quarters of days” (credit_n = Y_n × 0.75), and “almost every day or more” (credit_n = Y_n). The credits were then summed over the trial period, divided by the number of possible credits, and multiplied by 100. Thus, this score estimates the percentage of days between home visits on which the mother reported implementing child stimulation activities. As an additional descriptive measure of treatment dose, the parent trainer was surveyed at the conclusion of the study to estimate how often the activities had been implemented between the home visits, using a five-point scale (from “never” to “always”). 
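Both dose indicators can be computed mechanically from the visit log and the maternal reports. The following is a minimal sketch with hypothetical data and helper names (the trial's actual scoring script is not reproduced in the text); the 30-day cap, the credit weights, and the percent-of-scheduled-visits calculation follow the description above.

```python
# Sketch of the two treatment dose indicators (hypothetical data and
# helper names; not the trial's actual scoring code).
from datetime import date

# Credit weight for each maternal report category, per the protocol text.
# "not at all" receives a fixed credit of 1 rather than a weight.
CREDIT_WEIGHTS = {
    "about one-quarter of days or less": 0.25,
    "about one-half of days": 0.50,
    "about three-quarters of days": 0.75,
    "almost every day or more": 1.00,
}

def implementation_dose(visits):
    """Percent of days between visits on which the mother reported
    implementing the assigned activities.

    `visits` is an ordered list of (visit_date, report) tuples, where
    `report` covers the interval since the previous visit (the report
    entry of the first visit is ignored).
    """
    earned = possible = 0.0
    for (d0, _), (d1, report) in zip(visits, visits[1:]):
        days = min((d1 - d0).days, 30)  # cap each interval at 30 days
        if report == "not at all":
            earned += 1.0               # fixed credit per the text
        else:
            earned += days * CREDIT_WEIGHTS[report]
        possible += days
    return 100.0 * earned / possible

def home_visit_dose(completed, scheduled):
    """Percent of scheduled biweekly home visits completed on schedule."""
    return 100.0 * completed / scheduled

visits = [
    (date(2007, 3, 1), None),
    (date(2007, 3, 15), "almost every day or more"),  # 14-day interval
    (date(2007, 5, 1), "about one-half of days"),     # 47 days, capped at 30
]
print(round(implementation_dose(visits), 1))  # (14*1 + 30*0.5) / 44 = 65.9
print(round(home_visit_dose(65, 78), 1))      # 83.3
```

Over a full 36-month trial the denominator of the home visit dose would be the number of biweekly visits scheduled (roughly 78), as in the final call above.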
 Developmental outcome measures The Bayley Scales of Infant Development – II (BSID) [37] was selected as the main outcome measure for this trial because it has been used extensively in various L/LMIC. The BSID underwent pilot-testing at each site to verify validity in the local context, and a few items were slightly modified to make them more culturally appropriate (e.g., an image of a sandal instead of a shoe). Evaluators across the sites were trained to standards in joint 4-day workshops conducted by experts before each yearly evaluation. The BSID was administered directly to each child by certified study evaluators, who were masked to the children’s birth history and randomization, in the appropriate language with standard materials. 
Both the Mental Developmental Index (MDI) and Psychomotor Developmental Index (PDI) were used to measure developmental outcomes. Scores from the 36-month assessment, obtained just after the completion of the EDI, were used in this analysis as an indicator of treatment outcome.\n Health and sociodemographic measures Perinatal and neonatal health variables were obtained from records kept by the First Breath Trial [34]: child gender, birth weight (1500 g–2499 g, 2500 g–2999 g, 3000+ g), gestational age (28–36 weeks, 37+ weeks), number of prenatal visits (0, 1–3, 4+), and parity. Additional child health variables, obtained as part of this trial at 12 months of age, included weight for age/sex (<5th, 5th–14th, 15th+ percentile) and complete immunization status.\nFamily demographic variables were obtained at enrollment in BRAIN-HIT using a structured parent interview: maternal age, education (none and illiterate, none but literate or primary, literate with some secondary), family assets, and home living standard. The presence of 11 family assets (e.g., radio, refrigerator, bicycle) was tallied as a Family Resources Index and classified into three levels (0–1, 2–4, 5+). A Home Living Standard Index was calculated based on seven indicators (e.g., home building material, water source, type of toilet) and classified into three levels (0–4, 5–7, 8+). A socio-economic status (SES) measure was used to classify participants into three groups (quintile 1–3, 4, 5) [38]. 
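The asset and living-standard indices reduce to a tally-and-bin computation. The sketch below uses hypothetical household data; only the cut-points (0–1/2–4/5+ assets and 0–4/5–7/8+ score) come from the description above.

```python
# Sketch of the Family Resources Index and Home Living Standard Index
# binning (hypothetical household data; cut-points from the text).

def classify(count, cuts):
    """Map a tally to level 1, 2, or 3 given (low, high) upper bounds."""
    low, high = cuts
    if count <= low:
        return 1
    return 2 if count <= high else 3

# Family Resources Index: tally of 11 possible assets, levels 0-1 / 2-4 / 5+.
household_assets = {"radio": True, "refrigerator": False, "bicycle": True,
                    "television": False, "telephone": True}
n_assets = sum(household_assets.values())
resources_level = classify(n_assets, cuts=(1, 4))

# Home Living Standard Index: score over seven indicators, levels 0-4 / 5-7 / 8+.
living_standard_level = classify(7, cuts=(4, 7))

print(n_assets, resources_level, living_standard_level)  # prints: 3 2 2
```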
 Statistical analysis Descriptive statistics were computed for child health and family demographic characteristics, treatment dose indicators (home visit dose and program implementation dose), and developmental outcomes (MDI and PDI at 36 months) for all individuals randomized to receive EDI. Child health and demographic characteristics were summarized separately for those randomized to receive EDI who were included in the treatment dose analysis and those who were excluded from this analysis; differences in mean values for continuous variables were tested using t-tests, and categorical measures were tested using chi-square and Fisher exact tests. A Pearson correlation was computed between the two treatment dose indicators.\n Aim 1 In the absence of established criteria for adequate treatment dose for EDI, and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had the lowest dose and those in quintile 5 had the highest dose of the indicator in question. 
Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI. In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. If the omnibus 4-degree-of-freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred.\n Aim 2 To evaluate associations with treatment dose, all sociodemographic and child health variables and trial location were initially entered into linear regression models separately to predict both treatment dose variables. Variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question, when adjusted either by location alone or by location and the variable-by-location interaction, were selected for entry into multivariable models. 
We employed backward elimination with an alpha of 0.20 to choose the final models. 
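The quintile construction used in Aim 1 can be sketched as follows. The data here are simulated; the general linear models (with site, resuscitation status, and 12-month score as covariates) and the step-down tests are omitted, so the snippet only illustrates assigning rank-based quintiles and summarizing the 36-month outcome per quintile.

```python
# Sketch of the Aim 1 quintile construction (simulated data; the trial's
# general linear models and covariate adjustment are not shown).
import random

def quintiles(values):
    """Assign each value a quintile 1 (lowest dose) .. 5 (highest) by rank.
    Ties are broken by position, which is adequate for illustration."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    q = [0] * len(values)
    for rank, i in enumerate(order):
        q[i] = rank * 5 // len(values) + 1
    return q

random.seed(1)
dose = [random.uniform(0, 100) for _ in range(100)]      # e.g., % visits completed
mdi = [70 + 0.1 * d + random.gauss(0, 5) for d in dose]  # simulated 36-month MDI

qs = quintiles(dose)
for k in range(1, 6):  # descriptive statistics per quintile
    group = [m for m, q in zip(mdi, qs) if q == k]
    print(k, len(group), round(sum(group) / len(group), 1))
```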
We employed backward elimination with an alpha of 0.20 to choose the final models.\nTo evaluate associations with treatment dose, initially all sociodemographic and child health variables and trial location were entered into linear regression models separately to predict both treatment dose variables. Selected for entry in multivariable models were variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question when either adjusted by location alone or location and the variable by location interaction. We employed backward elimination with an alpha of 0.20 to choose the final models.", "Infants with birth asphyxia (resuscitated) and infants without birth asphyxia or other perinatal complications (non-resuscitated), born from January 2007 through June 2008 in rural communities in three sites in India, Pakistan and Zambia, were matched for country and chronological time and randomly selected from those enrolled in the First Breath Trial [34]. Infants were screened for enrollment into the BRAIN-HIT during the 7-day follow-up visit after birth [31], and were ineligible if: (1) birth weight was less than 1500 grams, (2) neurological examination at seven days of age (grade III by Ellis classification) [35], was severely abnormal (because they were not expected to benefit from EDI), (3) mother was less than 15 years old or unable/unwilling to participate, or (4) mother was not planning to stay in the study area for the next three years. Birth asphyxia was defined as the inability to initiate or sustain spontaneous breathing at birth using WHO definition (biochemical evidence of birth asphyxia could not be obtained in these settings) [36]. 
A list of potential enrollees was distributed to the investigators in each country to obtain written consent for the study, which was obtained during the second week after birth and before randomization to intervention conditions of the BRAIN-HIT.", "Investigators at each research site selected EDI parent trainers who were trained in an initial 5-day workshop, which was led by the same experts at each research site. A second workshop was conducted before participating children began to reach 18 months of age to adapt the approach to children up to 36 months, again conducted by the same experts at each site. To maintain quality of implementation, the trainers were supervised with observations during actual home visits and constructive feedback was provided on a regular basis.\nEach parent–child pair was assigned to the same trainer throughout the trial whenever possible, who was scheduled to make a home visit every two weeks over the 36-month trial period. As elaborated elsewhere [31, 32], the trainer presented one or two playful learning activities during each visit targeting developmentally appropriate milestones. These activities cover a spectrum of abilities across the cognitive, social and self-help, gross and fine motor, and language domains. The parent practiced the activity in the presence of the trainer who provided feedback. Cards depicting the activities were then left with the parent, who was encouraged to apply the activities in daily life with the child until the next home visit. The trainer introduced new activities in subsequent visits to enhance the child’s developmental competencies.", "Two indicators of treatment dose were calculated. Home visit dose was measured based on each parent trainer keeping a record of visit dates. Following the first visit, visits were scheduled to occur every two weeks until the completion of the trial. 
A home visit was completed on schedule if it occurred within its assigned two week window following the preceding visit. We calculated the percentage of scheduled home visits completed for each participant for the full 36-month trial. The reason for each missed visit was coded as due to illness, weather, death in family, refusal, child or mother unavailable for another reason, parent trainer schedule conflict, and other reasons.\nProgram implementation dose was measured based on maternal report obtained by the trainer at each home visit of the proportion of days the assigned activities had been implemented since the previous visit. First, the number of days between subsequent completed visits was calculated (Yn). If the time between two home visits extended beyond 30 days, a maximum of 30 days was used. Program implementation credits were assigned for the time period between visits based on the mother’s report of implementation of activities, as follows: “not at all” (creditn = 1), “about one-quarter of days or less” (creditn = Yn*.25), “about one-half of days” (creditn = Yn*.50), “about three-quarters of days” (creditn = Yn*.75), and “almost every day or more” (creditn = Yn). The credits were then added together over the trial period, divided by the number of possible credits, and multiplied by 100. Thus, this score estimates the percent of days between each home visit that the mother reported implementing child stimulation activities. As an additional descriptive measure of treatment dose, the parent trainer was surveyed at the conclusion of the study to estimate how often the activities had been implemented between the home visits, using a five-point scale (from “never” to “always”).", "The Bayley Scales of Infant Development – II (BSID) [37] was selected as the main outcome measure for this trial because it has been used extensively in various L/LMIC. 
The BSID underwent pilot-testing at each site to verify validity in the local context and a few items were slightly modified to make it more culturally appropriate (e.g., image of a sandal instead of a shoe). Evaluators across the sites were trained to standards in joint 4-day workshops conducted by experts before each yearly evaluation. The BSID was administered directly to each child by certified study evaluators, who were masked to the children’s birth history and randomization, in the appropriate language with standard material. Both the Mental Developmental Index (MDI) and Psychomotor Developmental Index (PDI) were used to measure developmental outcomes. Scores from the 36-month assessment, obtained just after the completion of the EDI, were used in this analysis as an indicator of treatment outcome.", "Perinatal and neonatal health variables were obtained from records kept by the FIRST BREATH Trial [34]: child gender, birth weight (1500 g-2499 g, 2500 g-2999 g, 3000 + g), gestational age (28–36 weeks, 37+ weeks), number of prenatal visits (0, 1–3, 4+), and parity. Additional child health variables obtained as part of this trial at 12 months of age included weight for age/sex (<5th, 5th-14th, 15th + percentile) and complete immunization status.\nFamily demographic variables were obtained at enrollment in BRAIN-HIT using a structured parent interview: maternal age, education (none and illiterate, none but literate or primary, literate with some secondary), family assets and home living standard. The presence of 11 family assets (e.g., radio, refrigerator, bicycle) were tallied as a Family Resources Index and classified into three levels (0–1, 2–4, 5+). A Home Living Standard Index was calculated based on seven indicators (e.g., home building material, water source, type of toilet) and classified into three levels (0–4, 5–7, 8+). 
A socio-economic status (SES) measure was used to classify participants into three groups (quintile 1–3, 4, 5) [38].

Descriptive statistics were computed for child health and family demographic characteristics, treatment dose indicators (home visits dose and protocol implementation dose), and developmental outcomes (MDI and PDI at 36 months) for all individuals randomized to receive EDI. Child health and demographic characteristics were summarized separately for those randomized to receive EDI and included in the treatment dose analysis and for those excluded from this analysis; differences in mean values for continuous variables were tested using t-tests, and categorical measures were tested using chi-square and Fisher exact tests. A Pearson correlation statistic was computed between the treatment dose characteristics.

Aim 1

In the absence of established criteria for adequate treatment dose for EDI, and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had the lowest dose and those in quintile 5 the highest dose of the indicator in question. Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI. In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. If the omnibus 4-degree-of-freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred.

Aim 2

To evaluate associations with treatment dose, all sociodemographic and child health variables and trial location were first entered into linear regression models separately to predict each treatment dose variable. Variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question, when adjusted either by location alone or by location and the variable-by-location interaction, were selected for entry into multivariable models. We employed backward elimination with an alpha of 0.20 to choose the final models.
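The rank-based quintile grouping used in Aim 1 can be sketched as follows. This is a minimal illustration with invented dose values, not the trial's analysis code (which went on to fit general linear models to the resulting quintiles).

```python
# Minimal sketch of dose-quintile assignment (1 = lowest, 5 = highest).
# Dose values are invented for illustration.

def quintile_assign(values):
    """Assign each observation to a quintile by rank within the sample."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # indices sorted by dose
    quintiles = [0] * n
    for rank, idx in enumerate(order):
        quintiles[idx] = min(5, rank * 5 // n + 1)     # map rank to quintile 1..5
    return quintiles

doses = [55, 60, 72, 80, 85, 88, 91, 94, 97, 100]  # % of planned visits completed
print(quintile_assign(doses))  # -> [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```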
Study sample composition

The sample size was determined to provide adequate power to test EDI treatment efficacy, the primary aim of BRAIN-HIT. As outlined in Figure 1, of 540 births screened from January 2007 through June 2008, 438 (81% of screened) were eligible. Only 3 infants were ineligible due to low birth weight or neurological exam; the remaining 99 were excluded because their mothers could not commit to staying in the study communities or could not be reached for screening within 7 days of birth. Informed consent was obtained for 407 (93% of eligible; 165 resuscitated, 242 not resuscitated), who were randomized into either EDI or a control intervention [20].
The 204 assigned to receive EDI (50.1% of those randomized) are relevant for this study, of whom 145 (71.1% of those assigned to EDI) were included in this analysis (Table 1). These participants had a mean age of 36.8 (range = 35–41) months at the time of the developmental assessment.

Figure 1. Study flow chart.

Table 1. Child health and family demographic characteristics of study sample. (a) Measured at enrollment unless otherwise indicated. (b) Differences in mean values for continuous variables were tested using t-tests and categorical measures were tested using chi-square and Fisher exact tests; bold indicates significant p < .05.

Exclusions from this analysis were due to death (n = 7), withdrawal (n = 6), loss to follow-up (n = 5), incomplete 36-month BSID-II due to administration errors (n = 39), unavailable home-visit data (n = 1), or another reason (n = 1). Three children who completed the 36-month evaluation but discontinued the EDI prior to the end of the study were included in the analysis (two because the family had insufficient time to fulfill study requirements and one because the family moved). Compared to those included in the analysis (Table 1), excluded children (n = 59) were significantly (p < .05) more likely to have been below the 5th percentile in weight and to have completed all immunizations at 12 months of age, and their mothers were more likely to have had prenatal care, lower parity, and more family resources.

Description of developmental outcomes and treatment dose

The sample had an unadjusted mean (SD) MDI = 101.2 (10.4) and PDI = 106.8 (14.1) at 36 months. Average home visits dose was 91.4% over 36 months, with 8,990 of 9,841 visits completed on schedule every two weeks, and 95% of the participants achieved a home visits dose of 80% or greater.
The most common reason for a missed visit was the inability to locate the mother and child at home at the scheduled time (40.3%), for example because the family was travelling away from the home or had moved temporarily. The second most common category was reasons related to the parent trainer, such as being ill or having a conflict with another meeting (23.9%). Child or mother unavailable for other reasons (15.3%), for example because the mother was working or the baby was sleeping, and weather (10.0%) were the only other reasons accounting for at least 10% of the missed visits. The mother or family directly refusing the home visit at the scheduled time was rare (2.5%).

Mothers reported engaging the child in the assigned activities on an average of 62.5% of days throughout the 36-month period. This protocol implementation dose equates to practicing the intervention activities 4.4 days per week, or 674 days over the 36-month trial period. Home visits dose was modestly correlated with protocol implementation dose (r = 0.35). Parent trainers estimated at the end of the trial that 66.2% of families practiced the intervention “always” or “almost always” throughout the 36 months.

Associations between treatment dose and developmental outcomes

Higher home visits dose was associated with higher MDI at 36 months (Figure 2). Specifically, quintiles 1–2 had mean MDI = 98, while quintiles 3–5 had mean MDI = 103 (Table 2). General linear models of MDI supported this relationship when home visits dose was entered as a primary predictor and site, resuscitation status at birth, and 12-month MDI were entered as covariates (Table 2). Most notably, in the model with only home visits dose (Model 1) and the model which included site (Model 2), mean MDI for quintiles 1 and 2 was significantly lower than for quintiles 3–5. A step-down test comparing mean MDI for those with home visit dose below the 40th percentile (quintiles 1 and 2) to those above the 40th percentile (quintiles 3–5) provided estimates of 97.8 and 103.4 (p = 0.0017), respectively. Adjusting by site increased the magnitude of the difference by at least 25% (96.8 vs. 103.9, p = 0.0005).
When adjusting for 12-month MDI and the interaction between dose and 12-month MDI (Model 5), the adjusted mean scores for the dose quintiles mirrored unadjusted scores, with quintiles 1–2 consistently lower than quintiles 3–5 (p < 0.0001). The lower limit for quintile 3 includes those receiving a minimum of 91% of all the planned home visits.

Figure 2. Mental (MDI) and Psychomotor (PDI) Development Index by treatment dose quintiles.

Table 2. Treatment dose modeling results and mean mental (MDI) and psychomotor (PDI) developmental index by quintiles. (a) Bold indicates significant p < .05 for the relationship between the treatment dose indicator and the developmental outcome.

Based on the same general linear model analysis (Table 2), home visit dose was not significantly associated with PDI at 36 months when considered by itself (Model 1) or when adjusted by site, resuscitation status, and 12-month PDI (Models 2–4). However, there was a positive association between home visits dose and 36-month PDI when adjusting for the 12-month PDI and its interaction with dose (Model 5). Here again, a home visit dose above the 40th percentile (quintiles 3–5) resulted in higher estimated PDI (108.5–111.0) compared with below this percentile (103.3–106.5).

Higher program implementation dose was associated with slightly higher MDI at 36 months compared to a lesser dose. Quintiles 1–2 had a mean MDI of 100 or lower, while quintiles 4–5 had a mean MDI of 102 or higher (Table 2), and the difference appears larger when considering the medians of these quintiles. In a general linear model of 36-month MDI (Table 2), program implementation dose was not a significant predictor by itself (Model 1).
However, when adjusting for 12-month MDI and its interaction with dose (Model 5), greater program implementation dose was associated with higher MDI (adjusted mean Q1 = 100.1 vs. Q5 = 103.1, p = 0.0434). PDI at 36 months was not linearly associated with program implementation dose (Table 2). Rather, mean PDI across quintiles followed a U-shape, with the highest mean scores for quintiles 1, 4, and 5. The lower limit for quintile 4 includes those implementing activities on 67% of days on average over the trial period.

Factors associated with treatment dose

The following variables were associated with home visits dose at P ≤ 0.20 when adjusted either by location or by the location-by-variable interaction: maternal education, parity, family resources, prenatal visits, birth attendant, 1-minute Apgar, preterm birth, and child’s weight at 36 months. These variables were entered into a generalized linear model along with those interaction terms with location that were significant. After backward elimination, the final model (R2 = .19) included parity (82.9 ± 3.0 [adjusted mean ± standard error] with 1 child, 79.7 ± 2.8 with 2–3 children, and 90.8 ± 3.5 with 4+ children [p = 0.0382]), 1-minute Apgar (86.9 ± 2.6 for <9 and 82.0 ± 2.6 for 9+ [p = 0.1754]), location (adjusted means ranged from 75.6 to 94.1 [p = 0.0019]), preterm birth (p = 0.4571), and the preterm-by-location interaction (p = 0.0020). The relationship between prematurity and home visits dose differed substantially across locations. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose in both groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose for preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term).

The following variables were associated with program implementation dose at P ≤ 0.20 when adjusted either by location or by the location-by-variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1-minute Apgar, preterm birth, and weight at birth and at 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant.
After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence resulted in a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted means ranged from 59.5 to 69.1 [p = 0.0019]). None of the interaction terms were retained in the final model.
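The variable-selection procedure used for both dose models (screening at P ≤ 0.20, then backward elimination with alpha = 0.20) can be sketched generically as below. The p-value function is a stand-in, and the variable names and p-values are invented for illustration; they are not trial results, where p-values came from (generalized) linear models of treatment dose.

```python
# Illustrative sketch of backward elimination with a retention threshold.
# `pvalue_of(model, var)` is a stand-in for refitting the model and
# reading off the p-value of `var`; here it is supplied by the caller.

ALPHA = 0.20

def backward_eliminate(candidates, pvalue_of, alpha=ALPHA):
    """Drop the least significant variable until all p-values are <= alpha."""
    model = list(candidates)
    while model:
        pvals = {v: pvalue_of(model, v) for v in model}
        worst = max(pvals, key=pvals.get)
        if pvals[worst] <= alpha:
            break  # every remaining term meets the retention threshold
        model.remove(worst)
    return model

# Toy p-values for illustration only (not trial estimates).
toy_p = {"parity": 0.04, "apgar1": 0.18, "location": 0.002, "weather": 0.55}
final = backward_eliminate(list(toy_p), lambda model, v: toy_p[v])
print(sorted(final))  # -> ['apgar1', 'location', 'parity']
```

In practice the p-values change each time a term is dropped, which is why the sketch recomputes them on every pass rather than ranking once up front.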
Consistent with our hypothesis, receiving a higher dose of EDI during the first 36 months of life, as indicated by number of home visits by a parent trainer and reported implementation of program activities between these home visits, is generally associated with better developmental outcomes at 36 months of age. This benefit is confirmed more consistently for mental compared to psychomotor development, and appears to some extent to be moderated by developmental status at 12 months. The higher benefit from treatment appears for those receiving at least 91% of the biweekly home visits and program activities on at least 67% of days on the average, or 716 days over 36 months.
In the context of the general developmental benefit demonstrated to be due to this program of EDI [32, 33], the difference in benefit between those receiving smaller vs. larger treatment doses is modest, about three to six points on a standardized developmental measure (M = 100, SD = 15). Variation in treatment dose was associated with child health and family sociodemographic factors as well as with trial location. In particular, more frequent use of the stimulation activities was reported by better-educated mothers who had already engaged in a schedule of prenatal care and had infants who reached a higher weight in the first year.

Limitations of this research include that results may not be generalizable to other L/LMIC or to other types of EDI programs. Moreover, we do not have independent observations of the implementation of the program activities at home, in terms of either quantity or quality. Program implementation dose was measured exclusively by self-report, which might have been susceptible, for example, to recall and acquiescence biases. Direct observation, though challenging to use in this context, should be less biased. Even though this trial of EDI enrolled one of the largest samples reported in L/LMIC, the sample size is still modest, and power to detect significant associations with treatment dose was therefore limited. This EDI was not intended for severely impaired infants. There was a 29% loss at follow-up, which included a higher proportion of parents with better resources. Although a broad range of health factors were examined for associations with treatment dose, it would be useful to learn from mothers what other factors possibly influenced their use of the stimulation activities, such as motivation, belief in their efficacy, and family support.
Treatment dose had a limited effect on psychomotor development, which may reflect that the EDI was not as successful in addressing development in these domains, or may be due to children reaching ceiling effects of the BSID at 36 months of age.

Only a few studies had previously examined whether dose of EDI during the first three years of life is associated with developmental outcomes. Our findings are consistent with prior studies that have generally reported that children who receive more exposure to EDI, however measured, display greater improvements in their cognitive development [18, 21, 24, 25]. Although only one of these studies was conducted in a L/LMIC, it too reported modest differences in developmental outcomes associated with varying home visit dose [19]. Program implementation dose was not examined. Given the differences between the EDI programs for which treatment dose has been evaluated, the countries where they were implemented, the populations targeted, and how treatment dose has been operationalized, it is difficult to generalize from this small body of research. It is not yet possible to establish a minimum effective dose. Given the importance of determining the efficacy of EDI in L/LMIC, which depends in part on information about sufficient dose, further research on the relationship between dose and outcome is much needed. Evaluations of EDI need to include such analyses to inform the setting of minimal targets for effective implementation.

EDI provided via home visiting has quite consistently been shown to promote development in children in L/LMIC, e.g., [9–16]. Our research has added to this literature by showing that the same program can do so across quite different cultures, represented here by India, Pakistan, and Zambia [32].
Whereas the identical program was used, for example in terms of the same basic structure and developmental activities, the social process transpiring in the home visits would naturally vary as a function of the specific people engaged and their local culture. One strength of home visiting EDI is that in this manner it can be both programmatically structured and culturally flexible.

Conclusions: The body of research in which the current study is embedded quite consistently establishes that, within an effective EDI, a higher dose is generally associated with better developmental outcomes. A large body of research indicates that EDI can improve early development of children in L/LMIC. Therefore, EDI should be one approach used in L/LMIC to lay the foundation for improving longer-term outcomes of its population and interrupting the intergenerational transmission of poverty [26]. Yet, for this to be successful, efforts to implement EDI for children need to ensure that program elements reach the children at the intended intensity. Groups of children at risk for receiving a lower treatment dose may require special attention to ensure adequate effect.
Keywords: Treatment dose, Early developmental intervention, Neurodevelopmental disability, Birth asphyxia, Developing countries
Background: Programs of early developmental intervention (EDI) implemented in the first years of life in children born with, or at risk for, neurodevelopmental disability have been shown to improve cognitive developmental outcomes and, consequently, quality of life. EDI includes various activities designed to enhance a young child’s development, directly via structured experiences and/or indirectly through influencing the caregiving environment [1]. The positive effects of EDI on early child development have been reported in numerous controlled trials in high-income countries [2, 3], and have been confirmed through meta-analyses [4, 5] and expert reviews [6–8]. Several trials of EDI with risk groups of infants and young children have also been conducted in low or low-middle income countries (L/LMIC), and these have likewise documented positive effects on child development, from EDI alone or in combination with nutritional supplementation [9–16]. The involvement of parents in EDI is critical for achieving positive outcomes [1, 17–19], which can be optimized by implementing EDI through home visits by a parent trainer. This modality also matches well the circumstances of many L/LMIC, where families often live far away from, or have other barriers to reaching, providers that could implement EDI [20]. An important aspect of determining the efficacy of EDI is the degree to which dosage impacts outcomes, and what constitutes “sufficient dosage” [21]. Sufficient dosage with regard to EDI refers to a participant receiving adequate exposure to the intervention for it to be efficacious. Program intensity, or dosage, is typically measured by the quantity and quality of the intervention as actually implemented [21, 22], although it ideally should be determined based on the needs of the population at hand [23].
Common indicators of dosage for EDI include the amount of time spent in a child development center, the number of home visits completed by a specialist training a parent and/or engaging the child, or some indication of parent engagement in the EDI. Whereas more information is available linking outcomes with treatment dose for pre-school programs [21, 22], few studies of EDI implemented in the first three years of life have conducted such analyses, despite their importance. The few previous studies generally indicate that children who receive more exposure to EDI display greater improvements in their cognitive development compared to those who receive less, even when differences in exposure were modest. Specifically, children who received EDI (home and center based) for more than 400 days, through age 3, exhibited significant improvements in cognitive development, while smaller but similar effects were evident among children who received treatment between 350 and 400 days [24]. Another study reported that optimal cognitive development of children in EDI was not associated with their background characteristics, such as birth weight or maternal education, but with three aspects related to treatment dosage: number of home visits received, days attending child care, and number of parent meetings attended [18]. However, these studies, as well as the broader discussions of implementation quality, have focused on programs conducted in the United States [21, 22]. The applicability of this information to L/LMIC contexts is unclear at present. The only EDI treatment dose study conducted in a L/LMIC that we are aware of showed that, as the frequency of home visits increased from none, through monthly, biweekly, and weekly, developmental gains at 30 months of age increased as well [25].
Given the potential for EDI to significantly impact the development of children, and therefore the economic development of nations in the long term [26], it will be important to examine treatment dose in L/LMIC more broadly to inform the implementation of such efforts on a larger scale. Parents may vary in their level of participation in home visit EDI programs due to a variety of factors. Previous research has indicated higher treatment dose among families participating in EDI who have better financial and social resources [20, 27–30]. Perinatal, neonatal, and other child health characteristics might also predict treatment dose for an intervention intended to promote the child’s development. Yet, studies that have examined both social and health predictors of EDI treatment dose are rare and have not considered a broad range of possible predictors [15]. It is important to examine various such factors in L/LMIC because they can identify processes that may influence parents’ adherence to EDI and those who may need additional support. In light of these gaps in our understanding, the aim of the current study was to determine (1) whether there is a dose effect in a home visiting EDI implemented in three L/LMIC and (2) what sociodemographic and health factors are associated with variation in treatment dose. We examined two indicators of dose of EDI. As in previous studies, the number of home visits completed over the course of the EDI was measured. Another important treatment element is the extent to which parents implement the assigned developmental activities with the child during the time between home visits, which we refer to as the program implementation dose. Despite its logical importance to the success of home visiting EDI, we are not aware of parent program implementation dose having previously been examined in EDI.
We hypothesize that increased dose as measured by either indicator will be associated with better developmental outcomes from EDI when implemented in three L/LMIC. Methods: Data used to examine the association between treatment adherence and developmental outcomes are from one of the conditions of the Brain Research to Ameliorate Impaired Neurodevelopment - Home-based Intervention Trial (BRAIN-HIT), a randomized controlled trial (RCT) detailed elsewhere (clinicaltrials.gov ID# NCT00639184) [31, 32]. Implemented in rural communities of India, Pakistan, and Zambia, the overall aim of BRAIN-HIT was to evaluate the efficacy of an EDI program on the development of children in L/LMIC who are at risk for neurodevelopmental disability due to birth asphyxia that required resuscitation. A group of children who did not require resuscitation at birth was evaluated using the same protocol to compare the efficacy of the EDI in those with and without birth asphyxia. As detailed elsewhere [32, 33], mental development at 36 months of age was better in children with birth asphyxia who had received the EDI compared with those in the control condition (effect size = 4.6 points on the standardized scale from the Bayley Scales of Infant Development, see below), but there was no difference between trial conditions in the children without birth asphyxia. Psychomotor development was likewise higher in the EDI group, in this case for both the children with (effect size = 5.4) and without (effect size = 6.1) birth asphyxia, compared to those in the control condition. The issue of the effect of treatment dose on development is only relevant for the active EDI condition, and not the comparison condition, which was intended to control for placebo, observation, and time effects and lacked a theoretically based developmental intervention. Therefore, only data from those randomized to receive EDI were analyzed in the present research, making this an observational study of that cohort.
BRAIN-HIT was approved by the Institutional Review Board at each site and was conducted in accord with prevailing ethical principles. Study population: Infants with birth asphyxia (resuscitated) and infants without birth asphyxia or other perinatal complications (non-resuscitated), born from January 2007 through June 2008 in rural communities in three sites in India, Pakistan and Zambia, were matched for country and chronological time and randomly selected from those enrolled in the First Breath Trial [34]. Infants were screened for enrollment into the BRAIN-HIT during the 7-day follow-up visit after birth [31], and were ineligible if: (1) birth weight was less than 1500 grams, (2) neurological examination at seven days of age was severely abnormal (grade III by Ellis classification [35]), because such infants were not expected to benefit from EDI, (3) mother was less than 15 years old or unable/unwilling to participate, or (4) mother was not planning to stay in the study area for the next three years. Birth asphyxia was defined as the inability to initiate or sustain spontaneous breathing at birth using the WHO definition (biochemical evidence of birth asphyxia could not be obtained in these settings) [36]. A list of potential enrollees was distributed to the investigators in each country to obtain written consent for the study, which was obtained during the second week after birth and before randomization to intervention conditions of the BRAIN-HIT.
Intervention procedures: Investigators at each research site selected EDI parent trainers, who were trained in an initial 5-day workshop led by the same experts at each research site. A second workshop was conducted before participating children began to reach 18 months of age to adapt the approach to children up to 36 months, again conducted by the same experts at each site. To maintain quality of implementation, the trainers were supervised with observations during actual home visits, and constructive feedback was provided on a regular basis. Each parent–child pair was assigned to the same trainer throughout the trial whenever possible; the trainer was scheduled to make a home visit every two weeks over the 36-month trial period. As elaborated elsewhere [31, 32], the trainer presented one or two playful learning activities during each visit targeting developmentally appropriate milestones. These activities cover a spectrum of abilities across the cognitive, social and self-help, gross and fine motor, and language domains.
The parent practiced the activity in the presence of the trainer, who provided feedback. Cards depicting the activities were then left with the parent, who was encouraged to apply the activities in daily life with the child until the next home visit. The trainer introduced new activities in subsequent visits to enhance the child’s developmental competencies. Treatment dose indicators: Two indicators of treatment dose were calculated. Home visit dose was measured based on each parent trainer keeping a record of visit dates.
Following the first visit, visits were scheduled to occur every two weeks until the completion of the trial. A home visit was completed on schedule if it occurred within its assigned two-week window following the preceding visit. We calculated the percentage of scheduled home visits completed for each participant for the full 36-month trial. The reason for each missed visit was coded as due to illness, weather, death in family, refusal, child or mother unavailable for another reason, parent trainer schedule conflict, or other reasons. Program implementation dose was measured based on maternal report, obtained by the trainer at each home visit, of the proportion of days the assigned activities had been implemented since the previous visit. First, the number of days between subsequent completed visits was calculated (Yn). If the time between two home visits extended beyond 30 days, a maximum of 30 days was used. Program implementation credits were assigned for the time period between visits based on the mother’s report of implementation of activities, as follows: “not at all” (creditn = 1), “about one-quarter of days or less” (creditn = Yn*.25), “about one-half of days” (creditn = Yn*.50), “about three-quarters of days” (creditn = Yn*.75), and “almost every day or more” (creditn = Yn). The credits were then added together over the trial period, divided by the number of possible credits, and multiplied by 100. Thus, this score estimates the percent of days between each home visit that the mother reported implementing child stimulation activities. As an additional descriptive measure of treatment dose, the parent trainer was surveyed at the conclusion of the study to estimate how often the activities had been implemented between the home visits, using a five-point scale (from “never” to “always”).
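The two dose indicators can be sketched in code. This is an illustrative reconstruction of the scoring rules as stated in the text; the function names and input format are our own, and the fixed credit of 1 for “not at all” follows the paper’s coding:

```python
# Credit fractions for the mother's report of activity implementation
# between visits, expressed as fractions of the interval days (Yn).
CREDIT_FRACTION = {
    "quarter": 0.25,         # "about one-quarter of days or less"
    "half": 0.50,            # "about one-half of days"
    "three-quarters": 0.75,  # "about three-quarters of days"
    "almost every day": 1.0, # "almost every day or more"
}

def home_visit_dose(completed, scheduled):
    """Percent of scheduled biweekly home visits completed over the trial."""
    return 100.0 * completed / scheduled

def implementation_dose(intervals):
    """Percent of possible credits earned between home visits.

    `intervals` is a list of (days_since_previous_visit, report) pairs.
    Gaps are capped at 30 days; "not at all" earns a fixed credit of 1,
    per the paper's coding.
    """
    earned = possible = 0.0
    for days, report in intervals:
        days = min(days, 30)
        earned += 1.0 if report == "not at all" else days * CREDIT_FRACTION[report]
        possible += days
    return 100.0 * earned / possible

# Example: two biweekly intervals, activities on about half the days,
# then almost every day -> (7 + 14) / 28 = 75% implementation dose.
print(implementation_dose([(14, "half"), (14, "almost every day")]))
```

In this scheme a mother who reports “almost every day” for every interval scores 100, matching the interpretation of the score as the percent of between-visit days with reported stimulation activities.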
Developmental outcome measures: The Bayley Scales of Infant Development – II (BSID) [37] was selected as the main outcome measure for this trial because it has been used extensively in various L/LMIC. The BSID underwent pilot-testing at each site to verify validity in the local context, and a few items were slightly modified to make them more culturally appropriate (e.g., image of a sandal instead of a shoe). Evaluators across the sites were trained to standards in joint 4-day workshops conducted by experts before each yearly evaluation. The BSID was administered directly to each child by certified study evaluators, who were masked to the children’s birth history and randomization, in the appropriate language with standard materials. Both the Mental Developmental Index (MDI) and Psychomotor Developmental Index (PDI) were used to measure developmental outcomes. Scores from the 36-month assessment, obtained just after the completion of the EDI, were used in this analysis as an indicator of treatment outcome.
Health and sociodemographic measures: Perinatal and neonatal health variables were obtained from records kept by the FIRST BREATH Trial [34]: child gender, birth weight (1500 g–2499 g, 2500 g–2999 g, 3000+ g), gestational age (28–36 weeks, 37+ weeks), number of prenatal visits (0, 1–3, 4+), and parity. Additional child health variables obtained as part of this trial at 12 months of age included weight for age/sex (<5th, 5th–14th, 15th+ percentile) and complete immunization status. Family demographic variables were obtained at enrollment in BRAIN-HIT using a structured parent interview: maternal age, education (none and illiterate, none but literate or primary, literate with some secondary), family assets, and home living standard. The presence of 11 family assets (e.g., radio, refrigerator, bicycle) was tallied as a Family Resources Index and classified into three levels (0–1, 2–4, 5+). A Home Living Standard Index was calculated based on seven indicators (e.g., home building material, water source, type of toilet) and classified into three levels (0–4, 5–7, 8+). A socio-economic status (SES) measure was used to classify participants into three groups (quintile 1–3, 4, 5) [38].
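As an illustration, the asset tally and three-level classification for the Family Resources Index might be coded as follows (a hypothetical sketch; the asset names in the comment are examples from the text):

```python
def family_resources_level(assets_present):
    """Classify the Family Resources Index (count of up to 11 household
    assets present, e.g., radio, refrigerator, bicycle) into the three
    levels used in the analysis: 0-1, 2-4, or 5+."""
    n = sum(bool(a) for a in assets_present)
    if n <= 1:
        return "0-1"
    if n <= 4:
        return "2-4"
    return "5+"

print(family_resources_level([True, False, True, True]))  # 3 assets -> "2-4"
```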
Statistical analysis: Descriptive statistics were computed for child health and family demographic characteristics, treatment dose indicators (home visit dose and program implementation dose), and developmental outcomes (MDI and PDI at 36 months) for all individuals randomized to receive EDI. Child health and demographic characteristics were summarized separately for those randomized to receive EDI and included in the treatment dose analysis and those who were excluded from this analysis; differences in mean values for continuous variables were tested using t-tests, and categorical measures were tested using chi-square and Fisher exact tests. A Pearson correlation statistic was computed between the treatment dose indicators. Aim 1: In the absence of established criteria for adequate treatment dose for EDI, and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had the lowest dose and those in quintile 5 had the highest dose of the indicator in question. Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI.
In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. If the omnibus 4-degree-of-freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred. Aim 2: To evaluate associations with treatment dose, initially all sociodemographic and child health variables and trial location were entered into linear regression models separately to predict both treatment dose variables. Variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question, when either adjusted by location alone or by location and the variable-by-location interaction, were selected for entry into multivariable models. We employed backward elimination with an alpha of 0.20 to choose the final models.
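The quintile split used for the dose indicators can be sketched with a simple rank-based assignment. This is an illustrative sketch with hypothetical dose values; a rank-based split gives equal-sized groups, which approximates a percentile cut when there are few ties:

```python
def dose_quintiles(doses):
    """Assign each dose value to a quintile (1 = lowest ... 5 = highest)
    by rank, mirroring the division of each dose indicator into quintiles."""
    order = sorted(range(len(doses)), key=lambda i: doses[i])
    quintile = [0] * len(doses)
    for rank, i in enumerate(order):
        # Map rank 0..n-1 onto quintiles 1..5.
        quintile[i] = rank * 5 // len(doses) + 1
    return quintile

# Hypothetical implementation-dose percentages for ten participants.
doses = [62, 88, 45, 97, 71, 53, 80, 91, 66, 58]
print(dose_quintiles(doses))
```

Mean 36-month MDI and PDI would then be summarized within each quintile, with the omnibus test across quintiles carried out in a general linear model adjusted for the covariates listed above.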
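The backward elimination procedure described for Aim 2 can be sketched as follows. This is a simplified illustration with fixed p-values; in the actual analysis the regression model is refit and p-values are recomputed after each removal, and the predictor names and p-values below are hypothetical stand-ins:

```python
def backward_eliminate(p_values, alpha=0.20):
    """Repeatedly drop the predictor with the largest p-value until
    every remaining predictor has p <= alpha.

    Simplification: p-values are treated as fixed here; a real
    backward elimination refits the model after each step.
    """
    kept = dict(p_values)
    while kept:
        worst = max(kept, key=kept.get)
        if kept[worst] <= alpha:
            break
        del kept[worst]
    return sorted(kept)

candidates = {
    "home_visit_adherence": 0.0004,
    "maternal_education": 0.04,
    "prenatal_care": 0.017,
    "weight_12mo": 0.0917,
    "location": 0.0019,
    "parity": 0.55,         # hypothetical, exceeds alpha -> eliminated
    "preterm_birth": 0.31,  # hypothetical, exceeds alpha -> eliminated
}
print(backward_eliminate(candidates))
```

With the liberal alpha of 0.20, predictors with moderate evidence (such as weight at 12 months, p = 0.0917) are retained in the final model.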
A second workshop, again conducted by the same experts at each site, was held before participating children began to reach 18 months of age to adapt the approach to children up to 36 months. To maintain quality of implementation, the trainers were supervised through observations during actual home visits, and constructive feedback was provided on a regular basis. Whenever possible, each parent–child pair was assigned to the same trainer throughout the trial; the trainer was scheduled to make a home visit every two weeks over the 36-month trial period. As elaborated elsewhere [31, 32], the trainer presented one or two playful learning activities during each visit targeting developmentally appropriate milestones. These activities covered a spectrum of abilities across the cognitive, social and self-help, gross and fine motor, and language domains. The parent practiced the activity in the presence of the trainer, who provided feedback. Cards depicting the activities were then left with the parent, who was encouraged to apply the activities in daily life with the child until the next home visit. The trainer introduced new activities in subsequent visits to enhance the child’s developmental competencies. Treatment dose indicators: Two indicators of treatment dose were calculated. Home visit dose was measured from each parent trainer’s record of visit dates. Following the first visit, visits were scheduled to occur every two weeks until the completion of the trial. A home visit was completed on schedule if it occurred within its assigned two-week window following the preceding visit. We calculated the percentage of scheduled home visits completed for each participant over the full 36-month trial. The reason for each missed visit was coded as illness, weather, death in family, refusal, child or mother unavailable for another reason, parent trainer schedule conflict, or other reasons.
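The home visit dose calculation can be sketched as follows. This is a minimal illustration, not trial code: the function name and the simplified scheduling logic (one visit expected per two-week window, counted on schedule if it falls within 14 days of the preceding visit) are assumptions.

```python
from datetime import date, timedelta

def home_visit_dose(visit_dates, start, end, window_days=14):
    """Percent of scheduled biweekly visits completed on schedule.

    A visit counts as on schedule if it occurs within the two-week
    window following the preceding visit (a simplification of the
    trial's windowing rules)."""
    scheduled = (end - start).days // window_days   # one visit per two weeks
    completed, prev = 0, start
    for d in sorted(visit_dates):
        if (d - prev).days <= window_days:
            completed += 1
        prev = d
    return 100.0 * completed / scheduled

# Hypothetical participant: 10 scheduled visits over 140 days, all on time.
start = date(2007, 1, 1)
end = start + timedelta(days=140)
visits = [start + timedelta(days=14 * k) for k in range(1, 11)]
dose_all = home_visit_dose(visits, start, end)      # all on schedule -> 100.0
```

Missing one visit also pushes the following visit outside its window, so a single gap can cost two scheduled visits under this rule.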
Program implementation dose was measured from the mother’s report, obtained by the trainer at each home visit, of the proportion of days the assigned activities had been implemented since the previous visit. First, the number of days between subsequent completed visits was calculated (Y_n). If the time between two home visits extended beyond 30 days, a maximum of 30 days was used. Program implementation credits were assigned for the period between visits based on the mother’s report of implementation of activities, as follows: “not at all” (credit_n = 1), “about one-quarter of days or less” (credit_n = 0.25 × Y_n), “about one-half of days” (credit_n = 0.50 × Y_n), “about three-quarters of days” (credit_n = 0.75 × Y_n), and “almost every day or more” (credit_n = Y_n). The credits were then summed over the trial period, divided by the number of possible credits, and multiplied by 100. Thus, this score estimates the percent of days between home visits on which the mother reported implementing child stimulation activities. As an additional descriptive measure of treatment dose, the parent trainer was surveyed at the conclusion of the study to estimate how often the activities had been implemented between the home visits, using a five-point scale (from “never” to “always”). Developmental outcome measures: The Bayley Scales of Infant Development – II (BSID) [37] was selected as the main outcome measure for this trial because it has been used extensively in various L/LMIC. The BSID underwent pilot-testing at each site to verify validity in the local context, and a few items were slightly modified to make it more culturally appropriate (e.g., an image of a sandal instead of a shoe). Evaluators across the sites were trained to standards in joint 4-day workshops conducted by experts before each yearly evaluation.
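The credit-based program implementation dose can be sketched as below. This is a hypothetical helper with abbreviated category labels; the "not at all" credit of 1 follows the scheme as stated in the text.

```python
# Credit per interval as a function of Y_n (days since previous visit, capped at 30).
CREDIT = {
    "not at all":       lambda y: 1,          # as stated in the text
    "quarter or less":  lambda y: 0.25 * y,
    "half":             lambda y: 0.50 * y,
    "three quarters":   lambda y: 0.75 * y,
    "almost every day": lambda y: y,
}

def implementation_dose(intervals):
    """intervals: (days_between_visits, report_category) pairs.

    Returns the percent of possible credits earned over the trial,
    i.e. the estimated percent of days the activities were implemented."""
    earned = possible = 0.0
    for days, category in intervals:
        y = min(days, 30)                     # cap interval length at 30 days
        earned += CREDIT[category](y)
        possible += y
    return 100.0 * earned / possible

# Example: almost every day in one 14-day interval, half of days in another.
dose = implementation_dose([(14, "almost every day"), (14, "half")])   # -> 75.0
```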
The BSID was administered directly to each child by certified study evaluators, who were masked to the children’s birth history and randomization, in the appropriate language with standard material. Both the Mental Developmental Index (MDI) and Psychomotor Developmental Index (PDI) were used to measure developmental outcomes. Scores from the 36-month assessment, obtained just after the completion of the EDI, were used in this analysis as an indicator of treatment outcome. Health and sociodemographic measures: Perinatal and neonatal health variables were obtained from records kept by the First Breath Trial [34]: child gender, birth weight (1500 g-2499 g, 2500 g-2999 g, 3000+ g), gestational age (28–36 weeks, 37+ weeks), number of prenatal visits (0, 1–3, 4+), and parity. Additional child health variables obtained as part of this trial at 12 months of age included weight for age/sex (<5th, 5th-14th, 15th+ percentile) and complete immunization status. Family demographic variables were obtained at enrollment in BRAIN-HIT using a structured parent interview: maternal age, education (none and illiterate, none but literate or primary, literate with some secondary), family assets, and home living standard. The presence of 11 family assets (e.g., radio, refrigerator, bicycle) was tallied as a Family Resources Index and classified into three levels (0–1, 2–4, 5+). A Home Living Standard Index was calculated based on seven indicators (e.g., home building material, water source, type of toilet) and classified into three levels (0–4, 5–7, 8+). A socio-economic status (SES) measure was used to classify participants into three groups (quintile 1–3, 4, 5) [38]. Statistical analysis: Descriptive statistics were computed for child health and family demographic characteristics, treatment dose indicators (home visits dose and protocol implementation dose), and developmental outcomes (MDI and PDI at 36 months) for all individuals randomized to receive EDI.
Child health and demographic characteristics were summarized separately for those randomized to receive EDI and included in the treatment dose analysis and those who were excluded from this analysis; differences in mean values for continuous variables were tested using t-tests, and categorical measures were tested using chi-square and Fisher exact tests. A Pearson correlation statistic was computed between the treatment dose characteristics. Aim 1: In the absence of established criteria for adequate treatment dose for EDI, and to determine where the effectiveness of the intervention may plateau, both treatment dose indicators were divided into quintiles. Those in quintile 1 had the lowest dose and those in quintile 5 the highest dose of the indicator in question. Descriptive statistics for the 36-month MDI and PDI were calculated for each quintile. General linear models were used to evaluate the associations of treatment dose quintile with 36-month MDI and PDI. In addition to the treatment dose indicator in question, covariates of interest included resuscitation status at birth, 12-month MDI or PDI, and site. If the omnibus 4-degree-of-freedom test for either MDI or PDI provided evidence of significant differences across quintiles of treatment dose, step-down tests were used to evaluate where those differences occurred. Aim 2: To evaluate associations with treatment dose, all sociodemographic and child health variables and trial location were initially entered into linear regression models separately to predict both treatment dose variables. Variables that demonstrated P ≤ 0.20 in univariate association with the adherence variable in question, when adjusted by either location alone or location and the variable-by-location interaction, were selected for entry into multivariable models. We employed backward elimination with an alpha of 0.20 to choose the final models.
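The quintile construction and the unadjusted omnibus test from Aim 1 can be sketched as below, using numpy with simulated data. All variable values are invented for illustration, and this shows only the covariate-free case; the published models additionally adjust for site, resuscitation status, and 12-month scores.

```python
import numpy as np

def assign_quintiles(dose):
    """Rank-based quintile labels: 1 = lowest dose, 5 = highest."""
    ranks = np.argsort(np.argsort(dose))      # 0..n-1, ties broken by position
    return ranks * 5 // len(dose) + 1

def oneway_f(groups):
    """Omnibus F statistic (4 df between groups for 5 quintiles)."""
    allv = np.concatenate(groups)
    grand = allv.mean()
    ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_b / (len(groups) - 1)) / (ss_w / (len(allv) - len(groups)))

rng = np.random.default_rng(0)
dose = rng.uniform(50, 100, 145)                 # simulated home visit dose (%)
mdi = 95 + 0.12 * dose + rng.normal(0, 8, 145)   # simulated 36-month MDI
q = assign_quintiles(dose)
groups = [mdi[q == k] for k in range(1, 6)]
f_stat = oneway_f(groups)                        # compare to an F(4, 140) critical value
```

A significant omnibus F would then be followed by step-down contrasts between quintile groups, as described above.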
Results: Study sample composition: The sample size was determined to provide adequate power to test EDI treatment efficacy, the primary aim of BRAIN-HIT. As outlined in Figure 1, of 540 births screened from January 2007 through June 2008, 438 (81% of screened) were eligible. Only 3 infants were ineligible due to low birth weight or the neurological exam; the remaining 99 were excluded because the mother could not commit to staying in the study communities or could not be reached for screening within 7 days of birth. Informed consent was obtained for 407 (93% of eligible; 165 resuscitated, 242 not resuscitated), who were randomized into either EDI or a control intervention [20]. The 204 assigned to receive EDI (50.1% of those randomized) are relevant for this study, of whom 145 (71.1% of those assigned to EDI) were included in this analysis (Table 1). These participants had a mean age of 36.8 (range = 35-41) months at the time of the developmental assessment. Figure 1. Study flow chart.
Table 1. Child health and family demographic characteristics of the study sample. (a) Measured at enrollment unless otherwise indicated. (b) Differences in mean values for continuous variables were tested using t-tests; categorical measures were tested using chi-square and Fisher exact tests; bold indicates significant p < .05. Exclusions from this analysis were due to death (n = 7), withdrawal (n = 6), loss to follow-up (n = 5), incomplete 36-month BSID-II (n = 39) due to administration errors, home-visit data unavailable (n = 1), or another reason (n = 1). Three children were included in the analysis who completed the 36-month evaluation but discontinued the EDI prior to the end of the study (two because the family had insufficient time to fulfill study requirements and one because the family moved). Compared to those who were included in the analysis (Table 1), children excluded (n = 59) were significantly (p < .05) more likely to have been below the 5th percentile in weight and to have completed all immunizations at 12 months of age, and their mothers were more likely to have had prenatal care, lower parity, and more family resources.
Description of developmental outcomes and treatment dose: The sample had an unadjusted mean (SD) MDI = 101.2 (10.4) and PDI = 106.8 (14.1) at 36 months. Average home visits dose was 91.4% over 36 months, with 8,990 of 9,841 visits completed on schedule every two weeks, and 95% of the participants achieved a home visits dose of 80% or greater. The most common reason for a missed visit was the inability to locate the mother and child at home at the scheduled time (40.3%), for example because the family was travelling away from home or had moved temporarily. The second most common reasons related to the parent trainer, such as being ill or having a conflict with another meeting (23.9%).
Child or mother unavailable for other reasons (15.3%), for example because the mother was working or the baby was sleeping, and weather (10.0%) were the only other reasons accounting for at least 10% of the missed visits. Direct refusal of the home visit by the mother or family at the scheduled time was rare (2.5%). Mothers reported engaging the child in the assigned activities on an average of 62.5% of days throughout the 36-month period. This protocol implementation dose equates to practicing the intervention activities 4.4 days per week, or 674 days over the 36-month trial period. Home visits dose was modestly correlated with protocol implementation dose (r = 0.35). Parent trainers estimated at the end of the trial that 66.2% of families practiced the intervention “always” or “almost always” throughout the 36 months.
Associations between treatment dose and developmental outcomes: Higher home visits dose was associated with higher MDI at 36 months (Figure 2). Specifically, quintiles 1–2 had a mean MDI of 98, while quintiles 3–5 had a mean MDI of 103 (Table 2). General linear models of MDI supported this relationship when home visits dose was entered as a primary predictor and site, resuscitation status at birth, and 12-month MDI were entered as covariates (Table 2). Most notably, in the model with only home visits dose (Model 1) and the model which included site (Model 2), mean MDI for quintiles 1 and 2 was significantly lower than for quintiles 3–5. A step-down test comparing mean MDI for those with home visit dose below the 40th percentile (quintiles 1 and 2) to those above the 40th percentile (quintiles 3–5) provided estimates of 97.8 and 103.4 (p = 0.0017), respectively. Adjusting by site increased the magnitude of the difference by at least 25% (96.8 vs. 103.9, p = 0.0005). When adjusting for 12-month MDI and the interaction between dose and 12-month MDI (Model 5), the adjusted mean scores for the dose quintiles mirrored unadjusted scores, with quintiles 1–2 consistently lower than quintiles 3–5 (p < 0.0001). The lower limit for quintile 3 includes those receiving a minimum of 91% of all planned home visits. Figure 2. Mental (MDI) and Psychomotor (PDI) Development Index by treatment dose quintiles.
Table 2. Treatment dose modeling results and mean mental (MDI) and psychomotor (PDI) developmental index by quintiles. (a) Bold indicates significant p < .05 for the relationship between the treatment dose indicator and the developmental outcome. Based on the same general linear model analysis (Table 2), home visit dose was not significantly associated with PDI at 36 months when considered by itself (Model 1) or when adjusted by site, resuscitation status, and 12-month PDI (Models 2–4). However, there was a positive association between home visits dose and 36-month PDI when adjusting for the 12-month PDI and its interaction with dose (Model 5). Here again, a home visit dose above the 40th percentile (quintiles 3–5) resulted in higher estimated PDI (108.5 – 111.0) compared with below this percentile (103.3 – 106.5). Higher program implementation dose was associated with slightly higher MDI at 36 months compared to a lesser dose. Quintiles 1–2 had a mean MDI of 100 or lower, while quintiles 4–5 had a mean MDI of 102 or higher (Table 2), and the difference appears larger when considering the medians of these quintiles. In a general linear model of 36-month MDI (Table 2), program implementation dose was not a significant predictor by itself (Model 1). However, when adjusting for 12-month MDI and its interaction with dose (Model 5), greater program implementation dose was associated with higher MDI (adjusted mean Q1 = 100.1 vs. Q5 = 103.1, p = 0.0434). PDI at 36 months was not linearly associated with program implementation dose (Table 2). Rather, mean PDI across quintiles followed a U-shape, with the highest mean scores for quintiles 1, 4, and 5. The lower limit for quintile 4 includes those implementing activities on 67% of days on average over the trial period.
Factors associated with treatment dose: The following variables were associated with home visits dose at P ≤ 0.20 when either adjusted by location or by the location-by-variable interaction: maternal education, parity, family resources, prenatal visits, birth attendant, 1-minute Apgar, preterm birth, and child’s weight at 36 months. These variables were entered into a generalized linear model along with those interaction terms with location that were significant.
After backward elimination, the final model (R2 = .19) included parity (82.9 ± 3.0 [adjusted mean ± standard error] with 1 child, 79.7 ± 2.8 with 2–3 children, and 90.8 ± 3.5 with 4+ children [p = 0.0382]), 1-minute Apgar (86.9 ± 2.6 for <9 and 82.0 ± 2.6 for 9+ [p = 0.1754]), location (adjusted means ranged from 75.6 to 94.1 [p = 0.0019]), preterm birth (p = 0.4571), and the preterm-by-location interaction (p = 0.0020). The relationship between prematurity and home visits dose differed substantially across locations. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose in both groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose for preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term). The following variables were associated with program implementation dose at P ≤ 0.20 when either adjusted by location or by the location-by-variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1-minute Apgar, preterm birth, and weight at birth, 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant. After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence was associated with a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted means ranged from 59.5 to 69.1 [p = 0.0019]). None of the interaction terms were retained in the final model.
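The backward elimination procedure behind these models (alpha = 0.20, described under Aim 2) can be sketched as below. This is a minimal numpy illustration with simulated data: it uses a normal approximation for coefficient p-values rather than the exact t distribution, and all predictor names and values are invented.

```python
import math
import numpy as np

def ols_pvalues(X, y):
    """OLS coefficient p-values (two-sided; normal approximation to the
    t distribution, adequate for a sketch)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = (resid @ resid) / (n - k)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    z = np.abs(beta / se)
    return [2 * (1 - 0.5 * (1 + math.erf(t / math.sqrt(2)))) for t in z]

def backward_eliminate(names, X, y, alpha=0.20):
    """Repeatedly drop the least significant predictor until all remaining
    p-values are <= alpha. Column 0 (intercept) is always kept."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        p = ols_pvalues(X[:, keep], y)
        worst = max(range(1, len(keep)), key=lambda i: p[i])  # skip intercept
        if p[worst] <= alpha:
            break
        keep.pop(worst)
    return [names[i] for i in keep]

rng = np.random.default_rng(1)
n = 145
x1 = rng.normal(size=n)                  # simulated strong predictor of dose
x2 = rng.normal(size=n)                  # simulated pure noise
X = np.column_stack([np.ones(n), x1, x2])
y = 80 + 6 * x1 + rng.normal(0, 5, n)    # simulated home visit dose
final = backward_eliminate(["intercept", "x1", "x2"], X, y)
```

With the relatively liberal alpha of 0.20 used in the trial, weak noise predictors survive elimination roughly one time in five, which is the intended trade-off at this screening stage.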
The following variables were associated with home visits dose at P ≤ 0.20 when either adjusted by location or by the location by variable interaction: maternal education, parity, family resources, prenatal visits, birth attendant, 1 minute Apgar, preterm birth, and child’s weight at 36-months. These variables were entered into a generalized linear model along with those interaction terms with location that were significant. After backward elimination, the final model (R2 = .19) included parity (82.9 ± 3.0 [adjusted mean ± standard error] with 1 child, 79.7 ± 2.8 with 2–3 children, and 90.8 ± 3.5 with 4+ children [p = 0.0382]), 1 minute Apgar (86.9 ± 2.6 for <9 and 82.0 ± 2.6 for 9+ [p = 0.1754]), location (adjusted mean ranged from 75.6 - 94.1, [p = 0.0019]), preterm [(p = 0.4571) and preterm by location interaction(p = 0.0020). There was a substantial difference in relationship to home visits dose by prematurity across location. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose between groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose in preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term). The following variables were associated with program implementation dose at P ≤ 0.20 when either adjusted by location or by the location by variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1 minute Apgar, preterm birth, and weight at birth, 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant. 
After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence resulted in a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted mean ranged from 59.5 - 69.1, [p = 0.0019]). None of the interaction terms were retained in the final model. Study sample composition: The sample size was determined to provide adequate power to test EDI treatment efficacy, the primary aim of BRAIN-HIT. As outlined in Figure 1, of 540 births screened from January 2007 through June 2008, 438 (81% of screened) were eligible. Only 3 infants were ineligible due to low birth weight or neurological exam, with the remaining 99 being due to mothers not being able to commit to staying in the study communities or could not be reached for screening within 7 days of birth. Informed consent was obtained for 407 (93% of eligible; 165 resuscitated, 242 not resuscitated) who were randomized into either EDI or a control intervention [20]. The 204 assigned to receive EDI (50.1% of those randomized) are relevant for this study, of whom 145 (71.1% of those assigned to EDI) were included in this analysis (Table 1). These participants had mean = 36.8 (range = 35-41) months of age at the time of the developmental assessment.Figure 1 Study flow chart. Study flow chart. Child health and family demographic characteristics of study sample aMeasured at enrollment unless otherwise indicated. bDifferences in mean values for continuous variables were tested using t-tests and categorical measures were tested using chi-square and Fisher exact tests; bold indicates significant p < .05. 
Exclusions from this analysis were due to death (n = 7), withdrawal (n = 6), loss to follow up (n = 5), incomplete 36-month BSID-II (n = 39) due to administration errors, home-visit data unavailable (n = 1), or another reason (n = 1). Three children were included in the analysis who completed the 36-month evaluation but discontinued the EDI prior to the end of the study (two because the family had insufficient time to fulfill study requirements and one because the family moved). When compared to those who were included in the analysis (Table 1), children excluded (n = 59) were significantly (p < .05) more likely to have been less than the 5th percentile in weight and completed all immunizations at 12-months of age, and their mothers to have had prenatal care, lower parity, and more family resources. Description of developmental outcomes and treatment dose: The sample had an unadjusted mean (SD) MDI = 101.2 (10.4) and PDI = 106.8 (14.1) at 36-months. Average home visits dose was 91.4% over 36 months, when 8,990 visits out of 9,841 were completed on schedule every two weeks, and 95% of the participants achieved 80% or greater home visits dose. The most common reason for a missed visit was the inability to locate the mother and child at home at the scheduled time (40.3%), for example because the family was travelling away from the home or had moved temporarily. However, the second most common reason was those related to the parent trainer, such as being ill or having a conflict with another meeting (23.9%). Child or mother unavailable for other reasons (15.3%), for example because the mother was working or baby was sleeping, and weather (10.0%) were the only other reasons accounting for at least 10% of the missed visits. Mother or family directly refusing the home visit at the scheduled time was rare (2.5%). Mothers reported engaging the child in the assigned activities on an average of 62.5% of days throughout the 36 month period. 
This protocol implementation dose equates to practicing the intervention activities 4.4 days per week, or 674 days over the 36-month trial period. Home visits dose was modestly correlated with protocol implementation dose (r = 0.35). Parent trainers estimated at the end of the trial that 66.2% of families practiced the intervention “always” or “almost always” throughout the 36 months. Associations between treatment dose and developmental outcomes: Higher home visits dose was associated with higher MDI at 36 months (Figure 2). Specifically, mean MDI was 98 for quintiles 1–2 and 103 for quintiles 3–5 (Table 2). General linear models of MDI supported this relationship when home visits dose was entered as a primary predictor and site, resuscitation status at birth, and 12-month MDI were entered as covariates (Table 2). Most notably, in the model with only home visits dose (Model 1) and the model which included site (Model 2), mean MDI for quintiles 1 and 2 was significantly lower than for quintiles 3–5. A step-down test comparing mean MDI for those with home visit dose below the 40th percentile (quintiles 1 and 2) to those above the 40th percentile (quintiles 3–5) provided estimates of 97.8 and 103.4 (p = 0.0017), respectively. Adjusting by site increased the magnitude of the difference by at least 25% (96.8 vs. 103.9, p = 0.0005). When adjusting for 12-month MDI and the interaction between dose and 12-month MDI (Model 5), the adjusted mean scores for the dose quintiles mirrored unadjusted scores, with quintiles 1–2 consistently lower than quintiles 3–5 (p < 0.0001). The lower limit for quintile 3 includes those receiving a minimum of 91% of all the planned home visits. Figure 2 Mental (MDI) and Psychomotor (PDI) Development Index by treatment dose quintiles. 
Treatment dose modeling results and mean mental (MDI) and psychomotor (PDI) developmental index by quintiles: a Bold indicates significant p < .05 for the relationship between the treatment dose indicator and the developmental outcome. Based on the same general linear model analysis (Table 2), home visit dose was not significantly associated with PDI at 36 months when considered by itself (Model 1) or when adjusted by site, resuscitation status, and 12-month PDI (Models 2–4). However, there was a positive association between home visits dose and 36-month PDI when adjusting for the 12-month PDI and its interaction with dose (Model 5). Here again, a home visit dose above the 40th percentile (quintiles 3–5) resulted in higher estimated PDI (108.5 – 111.0) compared with below this percentile (103.3 – 106.5). Higher program implementation dose was associated with slightly higher MDI at 36 months compared to a lesser dose. Quintiles 1–2 had a mean MDI of 100 or lower, while quintiles 4–5 had a mean MDI of 102 or higher (Table 2), and the difference appears larger when considering the medians of these quintiles. In a general linear model of 36-month MDI (Table 2), program implementation dose was not a significant predictor by itself (Model 1). However, when adjusting for 12-month MDI and its interaction with dose (Model 5), greater program implementation dose was associated with higher MDI (adjusted mean Q1 = 100.1 vs. Q5 = 103.1, p = 0.0434). PDI at 36 months was not linearly associated with program implementation dose (Table 2). Rather, mean PDI across quintiles followed a U-shape, with the highest mean scores for quintiles 1, 4 and 5. The lower limit for quintile 4 includes those implementing activities on 67% of days on average over the trial period. 
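The quintile-based step-down comparison used in these analyses (split participants into dose quintiles, then compare outcomes below vs. above the 40th percentile of dose) can be sketched on simulated data. Everything below is hypothetical illustration, not the study's data; the dose-outcome relationship is invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100
# Hypothetical data: home-visit dose (% of visits completed) and
# 36-month MDI, with a weak positive dose-outcome relationship.
dose = rng.uniform(60, 100, n)
mdi = 95 + 0.08 * dose + rng.normal(0, 8, n)

# Assign dose quintiles (1 = lowest 20%, 5 = highest 20%).
cuts = np.percentile(dose, [20, 40, 60, 80])
quintile = np.searchsorted(cuts, dose) + 1

# Step-down comparison: quintiles 1-2 vs. quintiles 3-5,
# i.e. below vs. above the 40th percentile of dose.
low = mdi[quintile <= 2]
high = mdi[quintile >= 3]
t, p = stats.ttest_ind(low, high)
print(f"mean MDI below 40th pct: {low.mean():.1f}")
print(f"mean MDI above 40th pct: {high.mean():.1f} (p = {p:.3f})")
```

With continuous dose values and n = 100, the split yields exactly 40 participants below and 60 above the 40th percentile, mirroring the 2:3 quintile grouping reported in the text.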
Factors associated with treatment dose: The following variables were associated with home visits dose at P ≤ 0.20 when either adjusted by location or by the location by variable interaction: maternal education, parity, family resources, prenatal visits, birth attendant, 1-minute Apgar, preterm birth, and child’s weight at 36 months. These variables were entered into a generalized linear model along with those interaction terms with location that were significant. After backward elimination, the final model (R2 = .19) included parity (82.9 ± 3.0 [adjusted mean ± standard error] with 1 child, 79.7 ± 2.8 with 2–3 children, and 90.8 ± 3.5 with 4+ children [p = 0.0382]), 1-minute Apgar (86.9 ± 2.6 for <9 and 82.0 ± 2.6 for 9+ [p = 0.1754]), location (adjusted means ranged from 75.6 to 94.1 [p = 0.0019]), preterm birth (p = 0.4571), and the preterm by location interaction (p = 0.0020). The relationship between prematurity and home visits dose differed substantially across locations. Location A had higher dose for term children (65.8 ± 6.3 for preterm and 85.3 ± 4.0 for term). Location B had essentially the same dose in both groups (92.8 ± 5.9 for preterm and 95.4 ± 2.9 for term). Location C had considerably higher dose for preterm children (90.5 ± 4.6 for preterm and 76.9 ± 3.5 for term). The following variables were associated with program implementation dose at P ≤ 0.20 when either adjusted by location or by the location by variable interaction: home visit adherence rate, maternal education, parity, family resources, living standard index, prenatal care, 1-minute Apgar, preterm birth, and weight at birth, 12, 24, and 36 months. These variables were entered into a model along with those interaction terms with location that were significant. 
After backward elimination and adjusting for location, the final model (R2 = .25) included home visit adherence rate (a one percent increase in home visit adherence resulted in a 0.64 ± 0.18 percent increase in program implementation adherence, p = 0.0004), maternal education (70.0 ± 2.8 for secondary/university and 60.9 ± 2.4 for none/illiterate [p = 0.0400]), prenatal care (71.0 ± 2.9 for 5+ visits and 65.3 ± 3.5 for no care [p = 0.0170]), weight at 12 months (66.7 ± 1.7 for >85th percentile and 61.1 ± 2.2 for <5th percentile [p = 0.0917]), and location (adjusted means ranged from 59.5 to 69.1 [p = 0.0019]). None of the interaction terms were retained in the final model. Discussion: Consistent with our hypothesis, receiving a higher dose of EDI during the first 36 months of life, as indicated by the number of home visits by a parent trainer and reported implementation of program activities between these home visits, is generally associated with better developmental outcomes at 36 months of age. This benefit is confirmed more consistently for mental than for psychomotor development, and appears to some extent to be moderated by developmental status at 12 months. The greater benefit from treatment appears for those receiving at least 91% of the biweekly home visits and program activities on at least 67% of days on average, or 716 days over 36 months. In the context of a general developmental benefit demonstrated to be due to this program of EDI [32, 33], the difference in benefit between those receiving smaller vs. larger treatment doses is modest, about three to six points on a standardized developmental measure (M = 100, SD = 15). Variation in treatment dose was associated with child health and family sociodemographic factors as well as with trial location. 
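The variable-selection procedure described above (enter candidate predictors, then repeatedly drop the least significant one until all retained terms meet a significance criterion) can be sketched generically. This is a simplified ordinary-least-squares sketch, not the study's generalized linear model; the simulated data, predictor names, and p-value cutoff are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """Two-sided t-test p-values for OLS coefficients."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return 2 * stats.t.sf(np.abs(beta / se), df=n - k)

def backward_eliminate(X, y, names, alpha=0.05):
    """Repeatedly drop the predictor with the largest p-value until
    every remaining p-value is <= alpha. Column 0 (intercept) is kept."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        p = ols_pvalues(X[:, keep], y)
        worst = 1 + int(np.argmax(p[1:]))  # position within `keep`; skip intercept
        if p[worst] <= alpha:
            break
        keep.pop(worst)
    return [names[i] for i in keep]

# Simulated data: x3 is pure noise and should usually be eliminated,
# while x1 and x2 truly drive the outcome.
rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 2.0 * x1 + 0.8 * x2 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2, x3])
selected = backward_eliminate(X, y, ["intercept", "x1", "x2", "x3"])
print(selected)
```

The strongly predictive terms survive elimination, which is the behavior the final-model descriptions above rely on.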
In particular, more frequent use of the stimulation activities was reported by better-educated mothers who had already engaged in a schedule of prenatal care and whose infants reached a higher weight in the first year. Limitations of this research include that results may not be generalizable to other L/LMIC or to other types of EDI programs. Moreover, we do not have independent observations of the implementation of the program activities at home, in terms of either quantity or quality. Program implementation dose was measured exclusively by self-report, which might have been susceptible, for example, to recall and acquiescence biases. Direct observation, though challenging to use in this context, should be less biased. Even though this trial of EDI enrolled one of the largest samples reported in L/LMIC, the sample size was still modest, limiting the power to detect significant associations with treatment dose. This EDI was not intended for severely impaired infants. There was a 29% loss at follow-up, which included a higher proportion of parents with better resources. Although a broad range of health factors were examined for associations with treatment dose, it would be useful to learn from mothers what other factors possibly influenced their use of the stimulation activities, such as motivation, belief in their efficacy, and family support. Treatment dose had a limited effect on psychomotor development, which may reflect that the EDI was less successful in addressing these domains, or that children reached the ceiling of the BSID at 36 months of age. Only a few studies had previously examined whether dose of EDI during the first three years of life is associated with developmental outcomes. 
Our findings are consistent with prior studies, which have generally reported that children who receive more exposure to EDI, however measured, display greater improvements in their cognitive development [18, 21, 24, 25]. Although only one of these studies was conducted in a L/LMIC, it too reported modest differences in developmental outcomes associated with varying home visit dose [19]. Program implementation dose was not examined. Given the differences between the EDI programs for which treatment dose has been evaluated, the countries where they were implemented, the populations targeted, and how treatment dose has been operationalized, it is difficult to generalize from this small body of research. A minimum effective dose cannot yet be established. Given the importance of determining the efficacy of EDI in L/LMIC, which depends in part on information about sufficient dose, further research on the relationship between dose and outcome is much needed. Evaluations of EDI need to include such analyses to inform the setting of minimal targets for effective implementation. EDI provided via home visiting has quite consistently been shown to promote development in children in L/LMIC, e.g., [9–16]. Our research has added to this literature by showing that the same program can do so across quite different cultures, represented here by India, Pakistan, and Zambia [32]. Although the identical program was used, for example in terms of the same basic structure and developmental activities, the social process transpiring in the home visits would naturally vary as a function of the specific people engaged and their local culture. One strength of home visiting EDI is that in this manner it can be both programmatically structured yet culturally flexible. Conclusions: The body of research in which the current study is embedded quite consistently establishes that within an effective EDI, a higher dose is generally associated with better developmental outcomes. 
A large body of research indicates that EDI can improve early development of children in L/LMIC. Therefore, EDI should be one approach used in L/LMIC to lay the foundation for improving the longer-term outcomes of their populations and interrupting the intergenerational transmission of poverty [26]. Yet, for this to be successful, efforts to implement EDI for children need to ensure that program elements reach the children at the intended intensity. Groups of children at risk for receiving lower treatment dose may require special attention to ensure adequate effect.
Background: The positive effects of early developmental intervention (EDI) on early child development have been reported in numerous controlled trials in a variety of countries. An important aspect to determining the efficacy of EDI is the degree to which dosage is linked to outcomes. However, few studies of EDI have conducted such analyses. This observational cohort study examined the association between treatment dose and children's development when EDI was implemented in three low and low-middle income countries, as well as demographic and child health factors associated with treatment dose. Methods: Infants (78 males, 67 females) born in rural communities in India, Pakistan, and Zambia received a parent-implemented EDI delivered through biweekly home visits by trainers during the first 36 months of life. Outcome was measured at age 36 months with the Mental (MDI) and Psychomotor (PDI) Development Indices of the Bayley Scales of Infant Development-II. Treatment dose was measured by number of home visits completed and parent-reported implementation of assigned developmental stimulation activities between visits. Sociodemographic, prenatal, perinatal, and child health variables were measured as correlates. Results: Average home visits dose exceeded 91% and mothers engaged the children in activities on average 62.5% of days. Higher home visits dose was significantly associated with higher MDI (mean for dose quintiles 1-2 combined = 97.8, quintiles 3-5 combined = 103.4, p = 0.0017). Higher treatment dose was also generally associated with greater mean PDI, but the relationships were non-linear. Location, sociodemographic, and child health variables were associated with treatment dose. Conclusions: Receiving a higher dose of EDI during the first 36 months of life is generally associated with better developmental outcomes. The higher benefit appears when receiving ≥91% of biweekly home visits and program activities on ≥67% of days over 3 years. 
It is important to ensure that EDI is implemented with a sufficiently high dose to achieve the desired effect. To this end, groups at risk for receiving a lower dose can be identified and may require special attention to ensure adequate effect.
Background: Programs of early developmental intervention (EDI) implemented in the first years of life in children born with, or at risk for, neurodevelopmental disability have been shown to improve cognitive developmental outcomes and, consequently, their quality of life. EDI includes various activities designed to enhance a young child’s development, directly via structured experiences and/or indirectly through influencing the caregiving environment [1]. The positive effects of EDI on early child development have been reported in numerous controlled trials in high-income countries [2, 3], which have been confirmed through meta-analyses [4, 5] and expert reviews [6–8]. Several trials of EDI with risk groups of infants and young children have also been conducted in low or low-middle income countries (L/LMIC), which have also documented positive effects on child development, alone or in combination with nutritional supplementation [9–16]. The involvement of parents in EDI is critical for achieving positive outcomes [1, 17–19], which can be optimized by implementing EDI through home visits by a parent trainer. This modality also matches well with the circumstances of many L/LMIC, where families often live far away from or have other barriers to reaching providers that could implement EDI [20]. An important aspect of determining the efficacy of EDI is the degree to which dosage impacts outcomes, and what constitutes “sufficient dosage” [21]. Sufficient dosage with regard to EDI refers to a participant receiving adequate exposure to the intervention for it to be efficacious. Program intensity, or dosage, typically is measured by the quantity and quality of the intervention as actually implemented [21, 22], although it ideally should be determined based on the needs of the population at hand [23]. 
Common indicators of dosage for EDI include the amount of time spent in a child development center, the number of home visits completed by a specialist training a parent and/or engaging the child, or some indication of parent engagement in the EDI. Although there is more information linking outcomes with treatment dose for preschool programs [21, 22], despite its importance few studies of EDI implemented in the first three years of life have conducted such analyses. A few previous studies generally indicate that children who receive more exposure to EDI display greater improvements in their cognitive development compared to those who receive less, even when differences in exposure were modest. Specifically, children who received EDI (home and center based) for more than 400 days through age 3 exhibited significant improvements in cognitive development, while smaller but similar effects were evident among children who received treatment between 350 and 400 days [24]. Another study reported that optimal cognitive development of children in EDI was not associated with their background characteristics, such as birth weight or maternal education, but with three aspects related to treatment dosage: number of home visits received, days attending child care, and number of parent meetings attended [18]. However, these studies, as well as the broader discussions of implementation quality, have focused on programs conducted in the United States [21, 22]. The applicability of this information to L/LMIC contexts is unclear at present. The only EDI treatment dose study conducted in a L/LMIC that we are aware of showed that, as the frequency of home visits increased from none through monthly, biweekly, and weekly, developmental gains at 30 months of age increased as well [25]. 
Given the potential for EDI to significantly impact the development of children, and therefore the economic development of nations in the long-term [26], it will be important more broadly to examine treatment dose in L/LMIC to inform the implementation of such efforts on a larger scale. Parents may vary in their level of participation in home visit EDI programs due to a variety of factors. Previous research has indicated higher treatment dose among families participating in EDI who have better financial and social resources [20, 27–30]. Perinatal, neonatal, and other child health characteristics might also predict treatment dose for an intervention intending to promote the child’s development. Yet, studies that have examined both social and health predictors of EDI treatment dose are rare and have not considered a broad range of possible predictors [15]. It is important to examine various such factors in L/LMIC because they can identify processes that may influence parents’ adherence with EDI and those who may need additional support. In light of these gaps in our understanding, the aim of the current study was to determine (1) whether there is a dose effect in a home visiting EDI implemented in three L/LMIC and (2) what sociodemographic and health factors are associated with variation in treatment dose. We examined two indicators of dose of EDI. As in previous studies, the number of home visits completed over the course of the EDI was measured. Another important treatment element is the extent to which parents implement the assigned developmental activities with the child during the time between home visits, which we refer to as the program implementation dose. Despite its logical importance to the success of home visiting EDI, we are not aware that parent program implementation dose has been examined in EDI. 
We hypothesize that increased dose as measured by either indicator will be associated with better developmental outcomes from EDI when implemented in three L/LMIC.
14,762
398
[ 1025, 257, 246, 397, 184, 260, 613, 159, 85, 462, 311, 690, 578 ]
17
[ "dose", "home", "treatment", "treatment dose", "mdi", "edi", "36", "visits", "location", "visit" ]
[ "neonatal child health", "developmental intervention data", "developmental outcomes treatment", "edi child health", "risk neurodevelopmental disability" ]
[CONTENT] Treatment dose | Early developmental intervention | Neurodevelopmental disability | Birth asphyxia | Developing countries [SUMMARY]
[CONTENT] Adult | Child Development | Child, Preschool | Cohort Studies | Developing Countries | Developmental Disabilities | Female | Home Care Services | Humans | India | Infant | Infant, Newborn | Male | Neuropsychological Tests | Pakistan | Parents | Program Evaluation | Rural Population | Zambia [SUMMARY]
[CONTENT] edi | dosage | development | lmic | child development | dose | studies | treatment | home | important [SUMMARY]
[CONTENT] dose | treatment dose | treatment | mdi pdi | birth | home | visit | asphyxia | birth asphyxia | variables [SUMMARY]
[CONTENT] dose | model | quintiles | mdi | mean | home | location | preterm | table | pdi [SUMMARY]
[CONTENT] ensure | children | edi | body | body research | lmic | research | outcomes | indicates edi | edi children need [SUMMARY]
[CONTENT] dose | home | treatment | treatment dose | edi | location | mdi | visits | pdi | quintiles [SUMMARY]
[CONTENT] EDI ||| EDI ||| EDI ||| EDI | three [SUMMARY]
[CONTENT] 78 | 67 | India | Pakistan | Zambia | EDI | the first 36 months ||| age 36 months | the Mental | Psychomotor | the Bayley Scales | Infant Development-II ||| ||| [SUMMARY]
[CONTENT] 91% | 62.5% | days ||| MDI | 1 | 97.8 | 3 | 103.4 | 0.0017 ||| PDI ||| [SUMMARY]
[CONTENT] EDI | the first 36 months ||| days | 3 years ||| EDI ||| [SUMMARY]
[CONTENT] EDI ||| EDI ||| EDI ||| EDI | three ||| 67 | India | Pakistan | Zambia | EDI | the first 36 months ||| age 36 months | the Mental | Psychomotor | the Bayley Scales | Infant Development-II ||| ||| ||| ||| 91% | 62.5% | days ||| MDI | 1 | 97.8 | 3 | 103.4 | 0.0017 ||| PDI ||| ||| EDI | the first 36 months ||| days | 3 years ||| EDI ||| [SUMMARY]
Effects of lornoxicam and intravenous ibuprofen on erythrocyte deformability and hepatic and renal blood flow in rats.
27536068
Changes in blood supply are held responsible for anesthesia-related abnormal tissue and organ perfusion. Decreased erythrocyte deformability and increased aggregation may be detected after surgery performed under general anesthesia. Nonsteroidal anti-inflammatory drugs have been shown to decrease erythrocyte deformability. Lornoxicam and/or intravenous (iv) ibuprofen are commonly preferred analgesic agents for postoperative pain management. In this study, we aimed to investigate the effects of lornoxicam (2 mg/kg, iv) and ibuprofen (30 mg/kg, iv) on erythrocyte deformability, as well as hepatic and renal blood flows, in male rats.
BACKGROUND
Eighteen male Wistar albino rats were randomly divided into three groups as follows: an iv lornoxicam-treated group (Group L), an iv ibuprofen-treated group (Group I), and a control group (Group C). Drugs were administered by the iv route in all groups except Group C. Hepatic and renal blood flows were studied by laser Doppler, and euthanasia was performed via intra-abdominal blood uptake. Erythrocyte deformability was measured using a constant-flow filtrometry system.
METHODS
Lornoxicam and ibuprofen increased the relative resistance, an indicator of erythrocyte deformability, of rats (P=0.016). Comparison of the results from Group L and Group I revealed no statistically significant difference (P=0.694), although the erythrocyte deformability levels in Group L and Group I were statistically higher than those observed in Group C (P=0.018 and P=0.008, respectively). Hepatic and renal blood flows in Group L and Group I were significantly lower than those in Group C.
RESULTS
We believe that lornoxicam and ibuprofen may lead to functional disorders related to renal and liver tissue perfusion secondary to both decreased blood flow and erythrocyte deformability. Further studies regarding these issues are thought to be essential.
CONCLUSION
[ "Anesthesia, General", "Animals", "Anti-Inflammatory Agents, Non-Steroidal", "Erythrocyte Deformability", "Ibuprofen", "Infusions, Intravenous", "Injections, Intravenous", "Kidney", "Liver", "Piroxicam", "Rats", "Rats, Wistar", "Renal Circulation" ]
4977097
Introduction
Erythrocytes are crucial for normal blood flow and hemodynamics. Cell deformability, aggregability, and adherence to endothelial cells are important properties of erythrocytes, which considerably affect blood flow. Under normal flow conditions, with shear stress within normal ranges, erythrocytes are dispersed and properly deformable to maintain tissue perfusion.1 However, abnormal erythrocyte properties are usually seen in various clinical situations, such as heart diseases, hypertension, diabetes, cancers, malaria, anemia, sickle cell disease, and thrombosis.2 Alterations in hemoglobin structure (inflammation, oxidative stress, and hemoglobinopathies), changes in plasma contents including albumin, fibrinogen, and other coagulation elements, as well as pathological states associated with diminished flow (ischemia, trauma, and surgery) are suspected underlying factors that may result in circulatory deterioration.3 Nonsteroidal anti-inflammatory drugs (NSAIDs) are frequently used agents for postoperative pain, alone or in combination with other types of analgesics. NSAIDs exert their analgesic effects via inhibition of cyclooxygenase (COX) (prostaglandin synthase G2/H2) enzymes, which produce important prostaglandins (PGs) (eg, PGE1, PGE2, PGF2, and PGI2).4 However, especially in the kidney, PGs are prominent elements in regulating important processes related to blood pressure, such as salt/water balance, renin release, and vascular tone. 
Following the inhibition of PG synthesis by NSAIDs, salt retention, increased vascular tone in glomerular vascular bed, and decreased glomerular filtration rate may occur, and all these effects may accelerate renal failure, hypertensive disease, and end organ damage.5 Effects of NSAIDs on hepatic blood flow have not been widely investigated; however, it is well known that several PGs, such as PGE1 and PGE2, improve hepatic blood flow.6,7 In this study, we investigated the effects of two NSAIDs – lornoxicam and intravenous (iv) ibuprofen – on renal and hepatic blood flow, as well as on erythrocyte deformability, in rats.
Statistical analysis
Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA) version 12.0 was used for statistical analysis. Erythrocyte deformability and the hepatic and renal blood flows were compared across the study groups using the Kruskal–Wallis test. The Bonferroni-adjusted Mann–Whitney U-test was applied after significant Kruskal–Wallis results to determine which group differed from the others. Results were expressed as mean ± standard deviation. Statistical significance was set at a P-value <0.05 for all analyses and at P<0.033 (0.1/3) for the Bonferroni-adjusted Mann–Whitney U-tests.
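The analysis plan above can be sketched in code. The following is an illustrative pure-Python implementation of the rank statistics involved (the study itself used SPSS 12.0, and the readings below are hypothetical, not the study's data): the Kruskal–Wallis H statistic across the three groups, followed by pairwise Mann–Whitney U statistics.

```python
# Illustrative sketch of the rank statistics described above (the study
# used SPSS 12.0; the readings below are hypothetical).
from itertools import combinations

def ranks(values):
    """1-based midranks (ties receive the average of their ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to cover a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction)."""
    data = [x for g in groups for x in g]
    r = ranks(data)
    n = len(data)
    h, start = 0.0, 0
    for g in groups:
        rank_sum = sum(r[start:start + len(g)])
        h += rank_sum * rank_sum / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

def mann_whitney_u(a, b):
    """Smaller of the two Mann-Whitney U statistics."""
    r = ranks(list(a) + list(b))
    u1 = sum(r[:len(a)]) - len(a) * (len(a) + 1) / 2
    return min(u1, len(a) * len(b) - u1)

# Hypothetical relative-resistance readings for Groups C, L, and I (n=6 each)
groups = {
    "C": [1.02, 0.95, 1.10, 0.98, 1.05, 1.00],
    "L": [1.35, 1.42, 1.50, 1.38, 1.45, 1.40],
    "I": [1.44, 1.52, 1.39, 1.48, 1.55, 1.41],
}

print(f"Kruskal-Wallis H = {kruskal_h(list(groups.values())):.3f}")
for a, b in combinations(groups, 2):
    print(f"{a} vs {b}: U = {mann_whitney_u(groups[a], groups[b]):.1f}")
```

The omnibus H would be referred to a chi-square distribution with 2 degrees of freedom, and each pairwise P-value would be compared against the adjusted threshold (P<0.033 in the text) only after a significant omnibus result.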
Results
Lornoxicam and ibuprofen increased the relative resistance, an inverse indicator of erythrocyte deformability (higher resistance reflects reduced deformability), in rats (P=0.016). Comparison of Group L and Group I revealed no statistically significant difference (P=0.694), whereas both Group L and Group I showed significantly higher values than Group C (P=0.018 and P=0.008, respectively) (Figure 1). Hepatic and renal blood flows were significantly lower in Group L and Group I than in Group C (Figures 2 and 3, respectively) (P=0.013 and P=0.016, respectively). Liver blood flow values in Group L and Group I were significantly lower than that in Group C (P=0.010 and P=0.010, respectively), with no significant difference between Group L and Group I (P=0.994). Similarly, renal blood flow values in Group L and Group I were significantly lower than that in Group C (P=0.031 and P=0.006, respectively), with no significant difference between Group L and Group I (P=0.431).
null
null
[ "Methods", "Hepatic and renal blood flow measurement", "Deformability measurements" ]
[ "This study was conducted in the GUDAM Laboratory of Gazi University with the consent of the Experimental Animals Ethics Committee of Gazi University. All of the procedures were performed according to the accepted standards of the Guide for the Care and Use of Laboratory Animals.\nIn the study, 18 male Wistar albino rats, weighing 225–280 g and raised under the same environmental conditions, were used. The rats were maintained at a temperature of 20°C–21°C with cycles of 12-hour daylight and 12-hour darkness; they had free access to food until 2 hours before the anesthesia procedure.\nThree groups of rats constituted the study and control groups. Six randomized rats were grouped as the control; no surgical procedure was performed on the animals in the control group, and they received an equal volume of normal saline only (Group C, n=6). In the lornoxicam group, rats were administered lornoxicam (Xefo®; Abdi İbrahim İlaç Sanayi ve Tic A.Ş, İstanbul, Turkey) 2 mg/kg intravenously (Group L). In the iv ibuprofen group, rats were administered iv ibuprofen (Intrafen®; Gen İlaç ve Sağlık Ürünleri A.Ş, Ankara, Turkey) 30 mg/kg intravenously (Group I).\nTwo hours after administration of lornoxicam and iv ibuprofen, the rats were weighed, anesthetized with ketamine (Ketalar® 50 mg/mL; Pfizer, İstanbul, Turkey), and euthanized via intra-abdominal blood withdrawal. Heparinized total blood samples were used to prepare erythrocyte packs. Deformability measurements were conducted using erythrocyte suspensions with 5% hematocrit in phosphate-buffered saline.\n Hepatic and renal blood flow measurement Hepatic and renal blood flows were recorded. Blood flow measurements were conducted using a laser Doppler micro-vascular perfusion monitor (OxyLab LDF; Oxford Optronix Limited, Oxford, UK) by fixing the probe on the tissue.\n Deformability measurements Erythrocyte deformability was measured using a constant-flow filtrometry system (MP 30; Biopac Systems Inc, Commat, USA). Erythrocyte suspension, delivered at a 1 mL/min flow rate, was passed through a Nuclepore™ polycarbonate filter (Thermo Fisher Scientific, Waltham, MA, USA) with 5 μm pore diameter, and alterations in the filtration pressure corresponding to different flow rates were measured. The pressure alterations were transferred to a computer with an MP30 data acquisition system (Biopac Systems, Santa Barbara, CA, USA). The ratio of the filtration pressure of the cellular suspension to that of the buffer was calculated, and the relative resistance was thereafter derived.\n Statistical analysis Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA) version 12.0 was used for statistical analysis. Erythrocyte deformability and the hepatic and renal blood flows were compared across the study groups using the Kruskal–Wallis test. The Bonferroni-adjusted Mann–Whitney U-test was applied after significant Kruskal–Wallis results to determine which group differed from the others. Results were expressed as mean ± standard deviation. Statistical significance was set at a P-value <0.05 for all analyses and at P<0.033 (0.1/3) for the Bonferroni-adjusted Mann–Whitney U-tests.", "Hepatic and renal blood flows were recorded. Blood flow measurements were conducted using a laser Doppler micro-vascular perfusion monitor (OxyLab LDF; Oxford Optronix Limited, Oxford, UK) by fixing the probe on the tissue.", "Erythrocyte deformability was measured using a constant-flow filtrometry system (MP 30; Biopac Systems Inc, Commat, USA). Erythrocyte suspension, delivered at a 1 mL/min flow rate, was passed through a Nuclepore™ polycarbonate filter (Thermo Fisher Scientific, Waltham, MA, USA) with 5 μm pore diameter, and alterations in the filtration pressure corresponding to different flow rates were measured. The pressure alterations were transferred to a computer with an MP30 data acquisition system (Biopac Systems, Santa Barbara, CA, USA). The ratio of the filtration pressure of the cellular suspension to that of the buffer was calculated, and the relative resistance was thereafter derived." ]
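The relative-resistance computation described in the deformability measurements can be sketched as follows. This is a minimal illustration under assumed numbers (hypothetical filtration pressures, not measured data; the averaging step is an assumption, not the authors' stated procedure): relative resistance is the ratio of the suspension's filtration pressure to the buffer's.

```python
# Minimal sketch of the relative-resistance calculation (assumed
# post-processing, not the authors' exact code; pressures are hypothetical).
buffer_pressure = [4.0, 8.1, 12.2, 16.0]       # buffer alone, per flow rate
suspension_pressure = [5.2, 10.9, 16.8, 22.4]  # 5% hematocrit suspension

# Ratio of suspension to buffer filtration pressure at each flow rate
ratios = [s / b for s, b in zip(suspension_pressure, buffer_pressure)]
relative_resistance = sum(ratios) / len(ratios)

print(f"per-flow ratios: {[round(x, 3) for x in ratios]}")
print(f"relative resistance: {relative_resistance:.3f}")
```

A higher relative resistance means the suspension filters less easily than the buffer, ie, reduced erythrocyte deformability.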
[ "methods", null, null ]
[ "Introduction", "Methods", "Hepatic and renal blood flow measurement", "Deformability measurements", "Statistical analysis", "Results", "Discussion" ]
[ "Erythrocytes are crucial for normal blood flow and hemodynamics. Cell deformability, aggregability, and adherence to endothelial cells are important properties of erythrocytes, which considerably affect blood flow. Under normal flow conditions with a shear stress between normal ranges, erythrocytes are dispersed and properly deformable to maintain tissue perfusion.1\nHowever, abnormal erythrocyte properties are usually seen in various clinical situations, such as heart diseases, hypertension, diabetes, cancers, malaria, anemia, sickle cell disease, and thrombosis.2 Alterations in hemoglobin structure (inflammation, oxidative stress, and hemoglobinopathies), changes in plasma contents including albumin, fibrinogen, and other coagulation elements, as well as pathological states associated with diminished flow (ischemia, trauma, and surgery) are suspected underlying factors that may result in circulation deterioration.3\nNonsteroidal anti-inflammatory drugs (NSAIDs) are frequently used agents for postoperative pain, solely or in combination with other types of analgesics. NSAIDs exert their analgesic effects via inhibition of cyclooxygenase (COX) (prostaglandin synthase G2/H2) enzymes, which produce important prostaglandins (PGs) (eg, PGE1, PGE2, PGF2, and PGI2).4 However, especially in the kidney, PGs are prominent elements in regulating important processes related with blood pressure, such as salt/water balance, renin release, and vascular tone. 
Following the inhibition of PG synthesis by NSAIDs, salt retention, increased vascular tone in glomerular vascular bed, and decreased glomerular filtration rate may occur, and all these effects may accelerate renal failure, hypertensive disease, and end organ damage.5 Effects of NSAIDs on hepatic blood flow have not been widely investigated; however, it is well known that several PGs, such as PGE1 and PGE2, improve hepatic blood flow.6,7\nIn this study, we investigated the effects of two NSAIDs – lornoxicam and intravenous (iv) ibuprofen – on renal and hepatic blood flow, as well as on erythrocyte deformability, in rats.", "This study was conducted in the GUDAM Laboratory of Gazi University with the consent of the Experimental Animals Ethics Committee of Gazi University. All of the procedures were performed according to the accepted standards of the Guide for the Care and Use of Laboratory Animals.\nIn the study, 18 male Wistar albino rats, weighing 225–280 g and raised under the same environmental conditions, were used. The rats were maintained under a temperature of 20°C–21°C with cycles of 12-hour daylight and 12-hour darkness; they had free access to food until 2 hours before the anesthesia procedure.\nThree groups of rats constituted the study and control groups. Six randomized rats were grouped as the control; no surgical procedure was performed on the animals in the control group and they received an equal volume of normal saline only (Group C, n=6). In the lornoxicam group, rats were administered lornoxicam (Xefo®; Abdi İbrahim İlaç Sanayi ve Tic A.Ş, İstanbul, Turkey) 2 mg/kg intravenously (Group L). 
In the iv ibuprofen group, rats were administered iv ibuprofen (Intrafen®; Gen İlaç ve Sağlık Ürünleri A.Ş, Ankara, Turkey) 30 mg/kg intravenously (Group I).\nTwo hours after administration of lornoxicam and iv ibuprofen, the rats were weighed, anesthetized with ketamine (Ketalar® 50 mg/mL; Pfizer, İstanbul, Turkey), and euthanized via intra-abdominal blood withdrawal. Heparinized total blood samples were used to prepare erythrocyte packs. Deformability measurements were conducted using erythrocyte suspensions with 5% hematocrit in phosphate-buffered saline.\n Hepatic and renal blood flow measurement Hepatic and renal blood flows were recorded. Blood flow measurements were conducted using a laser Doppler micro-vascular perfusion monitor (OxyLab LDF; Oxford Optronix Limited, Oxford, UK) by fixing the probe on the tissue.\n Deformability measurements Erythrocyte deformability was measured using a constant-flow filtrometry system (MP 30; Biopac Systems Inc, Commat, USA). Erythrocyte suspension, delivered at a 1 mL/min flow rate, was passed through a Nuclepore™ polycarbonate filter (Thermo Fisher Scientific, Waltham, MA, USA) with 5 μm pore diameter, and alterations in the filtration pressure corresponding to different flow rates were measured. The pressure alterations were transferred to a computer with an MP30 data acquisition system (Biopac Systems, Santa Barbara, CA, USA). The ratio of the filtration pressure of the cellular suspension to that of the buffer was calculated, and the relative resistance was thereafter derived.\n Statistical analysis Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA) version 12.0 was used for statistical analysis. Erythrocyte deformability and the hepatic and renal blood flows were compared across the study groups using the Kruskal–Wallis test. The Bonferroni-adjusted Mann–Whitney U-test was applied after significant Kruskal–Wallis results to determine which group differed from the others. Results were expressed as mean ± standard deviation. Statistical significance was set at a P-value <0.05 for all analyses and at P<0.033 (0.1/3) for the Bonferroni-adjusted Mann–Whitney U-tests.", "Hepatic and renal blood flows were recorded. Blood flow measurements were conducted using a laser Doppler micro-vascular perfusion monitor (OxyLab LDF; Oxford Optronix Limited, Oxford, UK) by fixing the probe on the tissue.", "Erythrocyte deformability was measured using a constant-flow filtrometry system (MP 30; Biopac Systems Inc, Commat, USA). Erythrocyte suspension, delivered at a 1 mL/min flow rate, was passed through a Nuclepore™ polycarbonate filter (Thermo Fisher Scientific, Waltham, MA, USA) with 5 μm pore diameter, and alterations in the filtration pressure corresponding to different flow rates were measured. The pressure alterations were transferred to a computer with an MP30 data acquisition system (Biopac Systems, Santa Barbara, CA, USA). The ratio of the filtration pressure of the cellular suspension to that of the buffer was calculated, and the relative resistance was thereafter derived.", "Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA) version 12.0 was used for statistical analysis. Erythrocyte deformability and the hepatic and renal blood flows were compared across the study groups using the Kruskal–Wallis test. The Bonferroni-adjusted Mann–Whitney U-test was applied after significant Kruskal–Wallis results to determine which group differed from the others. Results were expressed as mean ± standard deviation. Statistical significance was set at a P-value <0.05 for all analyses and at P<0.033 (0.1/3) for the Bonferroni-adjusted Mann–Whitney U-tests.", "Lornoxicam and ibuprofen increased the relative resistance, an inverse indicator of erythrocyte deformability, in rats (P=0.016). 
Comparison of Group L and Group I revealed no statistically significant difference (P=0.694), whereas both Group L and Group I showed significantly higher values than Group C (P=0.018 and P=0.008, respectively) (Figure 1).\nHepatic and renal blood flows were significantly lower in Group L and Group I than in Group C (Figures 2 and 3, respectively) (P=0.013 and P=0.016, respectively).\nLiver blood flow values in Group L and Group I were significantly lower than that in Group C (P=0.010 and P=0.010, respectively), with no significant difference between Group L and Group I (P=0.994).\nSimilarly, renal blood flow values in Group L and Group I were significantly lower than that in Group C (P=0.031 and P=0.006, respectively), with no significant difference between Group L and Group I (P=0.431).", "In this experimental study, the effects of two different NSAIDs on erythrocyte deformability, as well as on renal and hepatic blood flows, were investigated. We found that erythrocyte deformability and the renal and hepatic blood flows were significantly decreased after lornoxicam and ibuprofen administration when compared with the results recorded in the control group.\nThere are only a few studies in the literature investigating the effects of NSAIDs on erythrocyte deformability. Bozzo et al8 investigated the effects of different NSAIDs (aspirin, dipyrone, ketorolac, and ibuprofen) on erythrocyte deformability and found similar erythrocyte deformability values for all NSAIDs compared with the control (dipyrone: 0.80±0.07; ibuprofen: 0.83±0.04; ketorolac: 0.85±0.05; aspirin: 0.56±0.05). However, they reported significantly reduced erythrocyte deformability in only the aspirin-treated group when compared with the other groups treated with different NSAIDs (P<0.05). In contrast to the study conducted by Bozzo et al, iv ibuprofen and lornoxicam significantly reduced erythrocyte deformability when compared with the control in this study. In another study, Arslan et al9 compared the effects of lornoxicam and iv paracetamol on erythrocyte deformability, as well as renal and liver blood flows. They found that lornoxicam significantly reduced erythrocyte deformability when compared with iv paracetamol and control. Moreover, decreased renal and liver blood flow rates were found in the lornoxicam-treated group; however, these differences were not statistically significant.\nThere are two subtypes of COX enzymes: COX-1 is mainly found in the gastric mucosa, platelets, vascular endothelium, and kidney; COX-2 is substantially different from COX-1 and is mainly located in monocytes and macrophages. In addition to inflammatory cells, COX-2 may be found in smooth muscle cells, epithelial cells, and neuronal cells. Production of this enzyme is closely related to inflammatory reactions.10\nLornoxicam and ibuprofen exhibit potent inhibitory activity against both isozymes.11,12 Inhibition of these enzymes results in limited production of PGs such as PGA2, PGD2, PGE1, PGE2, PGF2α, PGI2, and thromboxane. PGs play a major role in central pain perception and pain recognition via peripheral nociceptors. The main mechanism underlying the analgesic activity of NSAIDs is the inhibition of PGs that trigger the peripheral nociceptors that recognize pain, thereby reducing inflammatory responses at the wound and/or incision site. Furthermore, in the central nervous system – both brain and spinal cord – central inhibition of COX enzymes results in reduced PG production. This process leads to decreased pain perception and reduced hyperalgesia.13,14 On the other hand, inhibition of PGs such as PGE2 and PGI2 leads to the gastrointestinal and renal side effects of NSAIDs.15 The renal system is significantly affected, manifested as decreased renal blood flow, increased salt retention, and renin secretion. Similar to previous studies, we found diminished renal blood flow after ibuprofen and lornoxicam administration. We suggest that these findings may be the result of decreased production of vasodilatory PGs – eg, PGE2 and PGI2 – as a consequence of the inhibition of COX enzymes by the two NSAIDs used in the experiments.\nHepatic blood flow is less affected by PG-mediated local effects than the renal system; however, Sunose et al16 reported that increased thromboxane-A2 production by liver sinusoidal endothelial cells via increased COX-1 activity is the primary factor for the increased portal resistance and hyperresponse to vasoconstrictors. Shin et al6 showed that PGE1 improves hepatic blood flow and reduces ischemia–reperfusion injuries during liver transplantation. Similarly, PGI2 has been shown to protect against liver ischemia–reperfusion injury by maintaining blood flow, limiting the activity of vasoconstrictors, inhibiting local vascular thrombosis, and decreasing the expression of inflammatory cytokines.17,18 We found decreased hepatic flow with administration of both NSAIDs – lornoxicam and ibuprofen – when compared with the control, and we suggest that NSAID-induced inhibition of PG production may result in diminished hepatic blood flow.\nWe can conclude that neither iv ibuprofen nor lornoxicam is superior to the other in terms of the degree of harmful effects on erythrocyte deformability and the renal and hepatic blood flows. We suggest that the negative effects of these agents on the microcirculation, as well as on the renal and hepatic blood flow parameters, have to be taken into consideration when these agents are preferred for postoperative pain management. The results of this study should be supported by future animal and human studies." ]
[ "intro", "methods", null, null, "methods", "results", "discussion" ]
[ "rat", "lornoxicam", "iv ibuprofen", "erythrocyte deformability", "blood flow" ]
Introduction: Erythrocytes are crucial for normal blood flow and hemodynamics. Cell deformability, aggregability, and adherence to endothelial cells are important properties of erythrocytes, which considerably affect blood flow. Under normal flow conditions with a shear stress between normal ranges, erythrocytes are dispersed and properly deformable to maintain tissue perfusion.1 However, abnormal erythrocyte properties are usually seen in various clinical situations, such as heart diseases, hypertension, diabetes, cancers, malaria, anemia, sickle cell disease, and thrombosis.2 Alterations in hemoglobin structure (inflammation, oxidative stress, and hemoglobinopathies), changes in plasma contents including albumin, fibrinogen, and other coagulation elements, as well as pathological states associated with diminished flow (ischemia, trauma, and surgery) are suspected underlying factors that may result in circulation deterioration.3 Nonsteroidal anti-inflammatory drugs (NSAIDs) are frequently used agents for postoperative pain, solely or in combination with other types of analgesics. NSAIDs exert their analgesic effects via inhibition of cyclooxygenase (COX) (prostaglandin synthase G2/H2) enzymes, which produce important prostaglandins (PGs) (eg, PGE1, PGE2, PGF2, and PGI2).4 However, especially in the kidney, PGs are prominent elements in regulating important processes related with blood pressure, such as salt/water balance, renin release, and vascular tone. 
Following the inhibition of PG synthesis by NSAIDs, salt retention, increased vascular tone in glomerular vascular bed, and decreased glomerular filtration rate may occur, and all these effects may accelerate renal failure, hypertensive disease, and end organ damage.5 Effects of NSAIDs on hepatic blood flow have not been widely investigated; however, it is well known that several PGs, such as PGE1 and PGE2, improve hepatic blood flow.6,7 In this study, we investigated the effects of two NSAIDs – lornoxicam and intravenous (iv) ibuprofen – on renal and hepatic blood flow, as well as on erythrocyte deformability, in rats. Methods: This study was conducted in the GUDAM Laboratory of Gazi University with the consent of the Experimental Animals Ethics Committee of Gazi University. All of the procedures were performed according to the accepted standards of the Guide for the Care and Use of Laboratory Animals. In the study, 18 male Wistar albino rats, weighing 225–280 g and raised under the same environmental conditions, were used. The rats were maintained under a temperature of 20°C–21°C with cycles of 12-hour daylight and 12-hour darkness; they had free access to food until 2 hours before the anesthesia procedure. Three groups of rats constituted the study and control groups. Six randomized rats were grouped as the control; no surgical procedure was performed on the animals in the control group and they received an equal volume of normal saline only (Group C, n=6). In the lornoxicam group, rats were administered lornoxicam (Xefo®; Abdi İbrahim İlaç Sanayi ve Tic A.Ş, İstanbul, Turkey) 2 mg/kg intravenously (Group L). In the iv ibuprofen group, rats were administered iv ibuprofen (Intrafen®, Gen İlaç ve Sağlık Ürünleri A.Ş, Ankara, Turkey) 30 mg/kg intravenously (Group I). 
Two hours after administration of lornoxicam and iv ibuprofen, the rats were weighed, anesthetized with ketamine (Ketalar® 50 mg/mL; Pfizer, İstanbul, Turkey), and euthanized via intra-abdominal blood withdrawal. Heparinized total blood samples were used to prepare erythrocyte packs. Deformability measurements were conducted using erythrocyte suspensions with 5% hematocrit in phosphate-buffered saline. Hepatic and renal blood flow measurement Hepatic and renal blood flows were recorded. Blood flow measurements were conducted using a laser Doppler micro-vascular perfusion monitor (OxyLab LDF; Oxford Optronix Limited, Oxford, UK) by fixing the probe on the tissue. Deformability measurements Erythrocyte deformability was measured using a constant-flow filtrometry system (MP 30; Biopac Systems Inc, Commat, USA). Erythrocyte suspension, delivered at a 1 mL/min flow rate, was passed through a Nuclepore™ polycarbonate filter (Thermo Fisher Scientific, Waltham, MA, USA) with 5 μm pore diameter, and alterations in the filtration pressure corresponding to different flow rates were measured. The pressure alterations were transferred to a computer with an MP30 data acquisition system (Biopac Systems, Santa Barbara, CA, USA). The ratio of the filtration pressure of the cellular suspension to that of the buffer was calculated, and the relative resistance was thereafter derived. Statistical analysis Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA) version 12.0 was used for statistical analysis. Erythrocyte deformability and the hepatic and renal blood flows were compared across the study groups using the Kruskal–Wallis test. The Bonferroni-adjusted Mann–Whitney U-test was applied after significant Kruskal–Wallis results to determine which group differed from the others. Results were expressed as mean ± standard deviation. Statistical significance was set at a P-value <0.05 for all analyses and at P<0.033 (0.1/3) for the Bonferroni-adjusted Mann–Whitney U-tests. Hepatic and renal blood flow measurement: Hepatic and renal blood flows were recorded. Blood flow measurements were conducted using a laser Doppler micro-vascular perfusion monitor (OxyLab LDF; Oxford Optronix Limited, Oxford, UK) by fixing the probe on the tissue. Deformability measurements: Erythrocyte deformability was measured using a constant-flow filtrometry system (MP 30; Biopac Systems Inc, Commat, USA). Erythrocyte suspension, delivered at a 1 mL/min flow rate, was passed through a Nuclepore™ polycarbonate filter (Thermo Fisher Scientific, Waltham, MA, USA) with 5 μm pore diameter, and alterations in the filtration pressure corresponding to different flow rates were measured. The pressure alterations were transferred to a computer with an MP30 data acquisition system (Biopac Systems, Santa Barbara, CA, USA). The ratio of the filtration pressure of the cellular suspension to that of the buffer was calculated, and the relative resistance was thereafter derived. Statistical analysis: Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA) version 12.0 was used for statistical analysis. Erythrocyte deformability and the hepatic and renal blood flows were compared across the study groups using the Kruskal–Wallis test. The Bonferroni-adjusted Mann–Whitney U-test was applied after significant Kruskal–Wallis results to determine which group differed from the others. Results were expressed as mean ± standard deviation. Statistical significance was set at a P-value <0.05 for all analyses and at P<0.033 (0.1/3) for the Bonferroni-adjusted Mann–Whitney U-tests. Results: Lornoxicam and ibuprofen increased the relative resistance, an inverse indicator of erythrocyte deformability, in rats (P=0.016). Comparison of Group L and Group I revealed no statistically significant difference (P=0.694), whereas both Group L and Group I showed significantly higher values than Group C (P=0.018 and P=0.008, respectively) (Figure 1). 
Hepatic and renal blood flows were significantly lower in Group L and Group I than those measured in Group C (Figures 2 and 3, respectively) (P=0.013 and P=0.016, respectively). Liver blood flow values in Group L and Group I were significantly lower than that in Group C (P=0.010 and P=0.010, respectively). Comparison of Group L and Group I revealed no statistically different results (P=0.994). Similarly, renal blood flow values in Group L and Group I were significantly lower than that in Group C (P=0.031 and P=0.006, respectively). Comparison of Group L and Group I revealed no statistically different results (P=0.431). Discussion: In this experimental study, the effects of two different NSAIDs on erythrocyte deformability, as well as renal and hepatic blood flows, were investigated. We found that erythrocyte deformability and the renal and hepatic blood flows were significantly decreased after lornoxicam and ibuprofen administration when compared with the results recorded in the control group. There are only few studies in the literature investigating the effects of NSAIDs on erythrocyte deformability. Bozzo et al8 investigated the effects of different NSAIDs (aspirin, dipyrone, ketorolac, and ibuprofen) on erythrocyte deformability and found similar erythrocyte deformability values for all NSAIDs compared with the control (dipyrone: 0.80±0.07; ibuprofen: 0.83±0.04; ketorolac: 0.85±0.05; aspirin: 0.56±0.05). However, they reported significantly reduced erythrocyte deformability in only the aspirin-treated group when compared with the other groups treated with different NSAIDs (P<0.05). In contrast to the study conducted by Bozzo et al, iv ibuprofen and lornoxicam significantly reduced erythrocyte deformability when compared with the control in this study. In another study, Arslan et al9 compared the effects of lornoxicam and iv paracetamol on erythrocyte deformability, as well as renal and liver blood flows. 
They found that lornoxicam significantly reduced erythrocyte deformability when compared with iv paracetamol and control. Moreover, decreased renal and liver blood flow rates were found in the lornoxicam-treated group; however, these differences were not statistically significant. There are two subtypes of COX enzymes: COX-1 is mainly found in the gastric mucosa, platelets, vascular endothelium, and kidney, whereas COX-2 is substantially different from COX-1 and is mainly located in monocytes and macrophages. In addition to inflammatory cells, COX-2 may be found in smooth muscle cells, epithelial cells, and neuronal cells. Production of this enzyme is closely related to inflammatory reactions.10 Lornoxicam and ibuprofen exhibit potent inhibitory activity against both isozymes.11,12 Inhibition of these enzymes results in limited production of PGs such as PGA2, PGD2, PGE1, PGE2, PGF2α, PGI2, and thromboxane. PGs play a major role in central pain perception and pain recognition via peripheral nociceptors. The main mechanism underlying the analgesic activity of NSAIDs is the inhibition of PGs that trigger the peripheral nociceptors that recognize pain, thereby reducing inflammatory responses at the wound and/or incision site. Furthermore, in the central nervous system – both brain and spinal cord – central inhibition of COX enzymes results in reduced PG production. This process leads to decreased pain perception and reduced hyperalgesia.13,14 On the other hand, inhibition of PGs such as PGE2 and PGI2 leads to the gastrointestinal and renal side effects of NSAIDs.15 The renal system is significantly affected, which manifests as decreased renal blood flow, increased salt retention, and increased renin secretion. Similar to previous studies, we found diminished renal blood flow after ibuprofen and lornoxicam administration. 
We suggest that these findings may be the result of decreased production of vasodilatory PGs – eg, PGE2 and PGI2 – as a consequence of the inhibition of COX enzymes by the two NSAIDs used in the experiments. Hepatic blood flow is less affected by PG-mediated local effects than the renal system; however, Sunose et al16 reported that increased thromboxane-A2 production by liver sinusoidal endothelial cells via increased COX-1 activity is the primary factor for increased portal resistance and hyperresponse to vasoconstrictors. Shin et al6 showed that PGE1 improves hepatic blood flow and reduces ischemia–reperfusion injuries during liver transplantation. Similarly, the protective roles of PGI2 in liver ischemia–reperfusion injury – maintaining blood flow, limiting the activity of vasoconstrictors, inhibiting local vascular thrombosis, and decreasing expression of inflammatory cytokines – have been shown.17,18 We found decreased hepatic blood flow with administration of both NSAIDs – lornoxicam and ibuprofen – when compared with the control, and we suggest that NSAID-induced inhibition of PG production may result in diminished hepatic blood flow. We conclude that neither iv ibuprofen nor lornoxicam is superior to the other in terms of the degree of harmful effects on erythrocyte deformability and the renal and hepatic blood flows. We suggest that the negative effects of these agents on the microcirculation, as well as on the renal and hepatic blood flow parameters, have to be taken into consideration when these agents are preferred for postoperative pain management. The results of this study should be confirmed by future animal and human studies.
Background: Change in blood supply is held responsible for anesthesia-related abnormal tissue and organ perfusion. Decreased erythrocyte deformability and increased aggregation may be detected after surgery performed under general anesthesia. It has been shown that nonsteroidal anti-inflammatory drugs decrease erythrocyte deformability. Lornoxicam and/or intravenous (iv) ibuprofen are commonly preferred analgesic agents for postoperative pain management. In this study, we aimed to investigate the effects of lornoxicam (2 mg/kg, iv) and ibuprofen (30 mg/kg, iv) on erythrocyte deformability, as well as hepatic and renal blood flows, in male rats. Methods: Eighteen male Wistar albino rats were randomly divided into three groups as follows: iv lornoxicam-treated group (Group L), iv ibuprofen-treated group (Group I), and control group (Group C). Drug administration was carried out by the iv route in all groups except Group C. Hepatic and renal blood flows were studied by laser Doppler, and euthanasia was performed via intra-abdominal blood uptake. Erythrocyte deformability was measured using a constant-flow filtrometry system. Results: Lornoxicam and ibuprofen increased the relative resistance, an indicator of decreased erythrocyte deformability, in rats (P=0.016). Comparison of the results from Group L and Group I revealed no statistically significant differences (P=0.694), although the relative resistance values in Group L and Group I were statistically higher than those observed in Group C (P=0.018 and P=0.008, respectively). Hepatic and renal blood flows in Group L and Group I were significantly lower than those in Group C. Conclusions: We believe that lornoxicam and ibuprofen may lead to functional disorders related to renal and liver tissue perfusion secondary to both decreased blood flow and decreased erythrocyte deformability. Further studies regarding these issues are thought to be essential.
null
null
2,572
339
[ 900, 42, 130 ]
7
[ "blood", "flow", "group", "erythrocyte", "renal", "deformability", "blood flow", "hepatic", "erythrocyte deformability", "results" ]
[ "flow erythrocyte deformability", "blood flow ibuprofen", "blood flow erythrocyte", "different nsaids erythrocyte", "effects nsaids erythrocyte" ]
null
null
[CONTENT] rat | lornoxicam | iv ibuprofen | erythrocyte deformability | blood flow [SUMMARY]
[CONTENT] rat | lornoxicam | iv ibuprofen | erythrocyte deformability | blood flow [SUMMARY]
[CONTENT] rat | lornoxicam | iv ibuprofen | erythrocyte deformability | blood flow [SUMMARY]
null
[CONTENT] rat | lornoxicam | iv ibuprofen | erythrocyte deformability | blood flow [SUMMARY]
null
[CONTENT] Anesthesia, General | Animals | Anti-Inflammatory Agents, Non-Steroidal | Erythrocyte Deformability | Ibuprofen | Infusions, Intravenous | Injections, Intravenous | Kidney | Liver | Piroxicam | Rats | Rats, Wistar | Renal Circulation [SUMMARY]
[CONTENT] Anesthesia, General | Animals | Anti-Inflammatory Agents, Non-Steroidal | Erythrocyte Deformability | Ibuprofen | Infusions, Intravenous | Injections, Intravenous | Kidney | Liver | Piroxicam | Rats | Rats, Wistar | Renal Circulation [SUMMARY]
[CONTENT] Anesthesia, General | Animals | Anti-Inflammatory Agents, Non-Steroidal | Erythrocyte Deformability | Ibuprofen | Infusions, Intravenous | Injections, Intravenous | Kidney | Liver | Piroxicam | Rats | Rats, Wistar | Renal Circulation [SUMMARY]
null
[CONTENT] Anesthesia, General | Animals | Anti-Inflammatory Agents, Non-Steroidal | Erythrocyte Deformability | Ibuprofen | Infusions, Intravenous | Injections, Intravenous | Kidney | Liver | Piroxicam | Rats | Rats, Wistar | Renal Circulation [SUMMARY]
null
[CONTENT] flow erythrocyte deformability | blood flow ibuprofen | blood flow erythrocyte | different nsaids erythrocyte | effects nsaids erythrocyte [SUMMARY]
[CONTENT] flow erythrocyte deformability | blood flow ibuprofen | blood flow erythrocyte | different nsaids erythrocyte | effects nsaids erythrocyte [SUMMARY]
[CONTENT] flow erythrocyte deformability | blood flow ibuprofen | blood flow erythrocyte | different nsaids erythrocyte | effects nsaids erythrocyte [SUMMARY]
null
[CONTENT] flow erythrocyte deformability | blood flow ibuprofen | blood flow erythrocyte | different nsaids erythrocyte | effects nsaids erythrocyte [SUMMARY]
null
[CONTENT] blood | flow | group | erythrocyte | renal | deformability | blood flow | hepatic | erythrocyte deformability | results [SUMMARY]
[CONTENT] blood | flow | group | erythrocyte | renal | deformability | blood flow | hepatic | erythrocyte deformability | results [SUMMARY]
[CONTENT] blood | flow | group | erythrocyte | renal | deformability | blood flow | hepatic | erythrocyte deformability | results [SUMMARY]
null
[CONTENT] blood | flow | group | erythrocyte | renal | deformability | blood flow | hepatic | erythrocyte deformability | results [SUMMARY]
null
[CONTENT] nsaids | flow | effects | important | erythrocytes | blood | blood flow | hepatic blood | hepatic blood flow | pgs [SUMMARY]
[CONTENT] test | statistical | mann whitney | whitney test | adjusted mann whitney test | adjusted mann whitney | adjusted mann | adjusted | mann whitney test | wallis [SUMMARY]
[CONTENT] group | group group | respectively | revealed statistically | group revealed statistically | revealed | group group revealed | group revealed | group group revealed statistically | statistically [SUMMARY]
null
[CONTENT] group | blood | flow | erythrocyte | blood flow | usa | renal | statistical | nsaids | test [SUMMARY]
null
[CONTENT] anesthesia ||| anesthesia ||| erythrocyte ||| ||| 2 mg/kg | 30 mg/kg [SUMMARY]
[CONTENT] Eighteen | Wistar | three | Group L | Group İ | Group C ||| Group C. Hepatic | Doppler ||| Erythrocyte [SUMMARY]
[CONTENT] Lornoxicam | erythrocyte ||| Group L and Group | Group L and Group | Group C ||| Group C. [SUMMARY]
null
[CONTENT] anesthesia ||| anesthesia ||| erythrocyte ||| ||| 2 mg/kg | 30 mg/kg ||| Eighteen | Wistar | three | Group L | Group İ | Group C ||| Group C. Hepatic | Doppler ||| Erythrocyte ||| ||| Lornoxicam | erythrocyte ||| Group L and Group | Group L and Group | Group C ||| Group C. Conclusions ||| [SUMMARY]
null
Idiopathic mesenteric phlebosclerosis associated with long-term oral intake of geniposide.
34168411
Idiopathic mesenteric phlebosclerosis (IMP) is a rare disease, and its etiology and risk factors remain uncertain.
BACKGROUND
The detailed formula of herbal liquid prescriptions of all patients was studied, and the herbal ingredients were compared to identify the toxic agent as a possible etiological factor. Abdominal computed tomography (CT) and colonoscopy images were reviewed to determine the extent and severity of mesenteric phlebosclerosis and the presence of findings regarding colitis. The disease CT score was determined by the distribution of mesenteric vein calcification and colon wall thickening on CT images. The drinking index of medicinal liquor was calculated from the daily quantity and drinking years of Chinese medicinal liquor. Subsequently, Spearman's correlation analysis was conducted to evaluate the correlation between the drinking index and the CT disease score.
METHODS
The mean age of the 8 enrolled patients was 75.7 years, and male predominance was found (all 8 patients were men). The patients had histories of 5-40 years of oral intake of Chinese herbal liquids containing geniposide and exhibited typical imaging characteristics (e.g., threadlike calcifications along the colonic and mesenteric vessels or associated with a thickened colonic wall on CT images). Calcifications were confined to the right-side mesenteric vein in 6 of the 8 patients (75%) and involved the left-side mesenteric vein in 2 cases (25%), and the calcifications extended to the mesorectum in 1 of them. The thickening of the colon wall mainly occurred in the right colon and the transverse colon. The median disease CT score was 4.88 (n = 7) and the median drinking index was 5680 (n = 7). After Spearman's correlation analysis, the disease CT score showed a significant positive correlation with the drinking index (r = 0.842, P < 0.05).
RESULTS
Long-term oral intake of Chinese herbal liquid containing geniposide may play a role in the pathogenesis of IMP.
CONCLUSION
[ "Aged", "Colon", "Colonoscopy", "Humans", "Iridoids", "Male", "Mesenteric Veins" ]
8192294
INTRODUCTION
Idiopathic mesenteric phlebosclerosis (IMP) is a rare form of ischemic colitis that usually affects the right hemicolon. It is almost exclusively observed in Asian populations and is characterized by calcification of mesenteric veins and thickening of the wall of the right hemicolon. The etiology and pathogenesis remain unclear, but it is thought that long-term and frequent ingestion of biochemicals and toxins is associated with the disease[1,2]. As clarified by existing studies, long-term intake of herbal medicines or medicinal liquor containing geniposide is recognized as one of the major causes of IMP[2,3]. In this study, we describe 8 patients with mesenteric phlebosclerosis with long-term exposure to Chinese herbal medicines or medicinal liquor. The clinical manifestations and imaging features were summarized, and the relationship between the alcohol index and the severity of IMP observed by computed tomography (CT) was analyzed. This report on a relatively rare disease may provide insights into the etiology of IMP.
MATERIALS AND METHODS
Ethical approval: This was a retrospective study, and the data originated from the medical records at Zhejiang Provincial People's Hospital from June 2016 to December 2020. The study was approved by the Institutional Review Board at Zhejiang Provincial People's Hospital. Informed consent was waived; patient confidentiality was protected. Patient selection: The medical records of patients meeting the following inclusion criteria were retrospectively reviewed: (1) The clinical diagnosis of IMP was confirmed by abdominal CT, endoscopy, or pathology, including at least one complete abdominal CT examination with or without intravenous contrast medium injection; and (2) Patients had complete clinical information. Clinical data collection: Eight patients were identified and their clinical data, including symptoms, anamnesis, history of herbal medicines, and therapy, were collected. Patient herbal medicine history, herbal medicine names, contact time, and daily intake were highlighted. Of note, 7 patients had consumed similar dosages of two medicines, Wu chia-pee liquor and Wanying die-da wine, for a long time, and the alcohol content of the two wines was similar. The drinking index was calculated as the daily intake (mL) × drinking duration (years). Gastroscopy procedures: Gastroscopy was performed with an Olympus 290 colonoscope (Olympus, Tokyo, Japan). The endoscope was inserted to 5-10 cm from the terminal ileum. The color of the mucosa and the vascular textures of the terminal ileum, ileocecal area, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum were noted, and the presence of congestion, edema, erosion, and ulcers was verified. Multiple samples of the intestinal mucosa were collected for histopathological examination. CT acquisition: CT scans were generated with a 64-channel multi-detector CT scanner (Somatom Definition Flash, Siemens Medical Solutions, Forchheim, Germany). The CT scanning parameters were: detector collimation, 1 mm; pitch, 1.5:1; tube voltage, 120 kV; tube current, 120-250 mAs; rotation time, 0.33 s. Contrast-enhanced CT was performed with 80-90 mL of 370 mg I/mL iodinated contrast agent (Ultravist, Bayer Schering Pharma AG) injected in a peripheral vein with a dual high-pressure syringe at a flow rate of 2.2-3 mL/s. A bolus-tracking technique was used to obtain arterial- and venous-phase CT images with delays of 10 s and 50-65 s after a 100 Hounsfield unit threshold of the descending abdominal aorta. Image reconstruction was performed with a 1 mm slice thickness and a 1 mm slice interval with an Application Development Workstation (MM Reading, syngo.via, Version VB20A, Siemens Healthcare GmbH, Forchheim, Germany). Assessment of the disease CT score: The CT imaging characteristics were assessed by 2 radiologists with 5 and 10 years of clinical experience. The severity of IMP was assessed with consideration of the calcification distribution at the tributaries of the portal vein and the range of thickening of the intestinal wall. Colonic wall thickening was defined as a lumen width exceeding 2 cm and wall thickness exceeding 5 mm. 
The CT scores of the IMP cases are listed in Table 1. A 4-grade calcification score was evaluated by the scope of mesenteric venous calcification of the colon[4]. Specifically, venous calcifications limited to the straight vein were scored as 1, and calcifications extending to the paracolic marginal vein were scored as 2. If the calcifications extended to the proximal part of the main branch of the mesenteric vein, the score was 3. If the distal end of the main branch was involved, then the score was 4. Illustrations of calcification distribution are shown in Figure 1. The severity of colon wall thickening was classified by three scores depending on the extent of the lesion. Lesions confined to the ascending colon had a severity score of 1. Those extending to the transverse colon had a severity score of 2, and those involving the descending colon or distal segment had a severity score of 3. The calcification and colon wall thickening scores were summed to obtain the disease CT score. Graphical illustration of the distribution of calcification in the superior mesenteric vein. SMV: Superior mesenteric vein. Imaging characteristics of lesions on endoscopy and computed tomography. +: Positive; -: Negative; CT: Computed tomography; A-colon: Ascending colon; IMV: Inferior mesenteric vein; L-colon: Left colon; PV: Portal vein; SMV: Superior mesenteric vein; T-colon: Transverse colon. Calcification scoring system. Computed tomography score of idiopathic mesenteric phlebosclerosis = bowel wall thickening score + calcification score (+ = 1 point). +: Calcifications limited to the straight vein of the colon; ++: Calcifications extended to the paracolic marginal vein; +++: Calcifications extended to the main branch of the mesenteric vein; ++++: Calcifications included the trunk of the mesenteric vein; –: No calcifications. Statistical analysis: The statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS version 25.0, IBM Corp, Armonk, NY, United States). Spearman's correlation analysis was used to assess the correlation between the drinking index and the disease CT score. Two-tailed P values of < 0.05 indicated statistical significance.
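The two derived quantities defined in this methods section, the disease CT score (calcification score plus wall-thickening score) and the drinking index (daily intake in mL × drinking duration in years), and their Spearman correlation can be sketched as below. All per-patient values are hypothetical illustrations, not the study's data; only the score ranges (1-4 for calcification, 1-3 for wall thickening, with 0 assumed for no involvement) follow the grading described above.

```python
# Hypothetical sketch of the scoring and correlation analysis described
# above. Per-patient numbers are invented for illustration.
from scipy import stats

def disease_ct_score(calcification_score: int, wall_thickening_score: int) -> int:
    """Disease CT score = calcification score (0-4) + wall thickening score (0-3)."""
    assert 0 <= calcification_score <= 4 and 0 <= wall_thickening_score <= 3
    return calcification_score + wall_thickening_score

# Hypothetical values for 7 patients
calcification = [1, 2, 2, 3, 3, 4, 2]
wall_thickening = [1, 1, 2, 2, 3, 3, 2]
daily_intake_ml = [50, 100, 150, 100, 200, 250, 120]
drinking_years = [10, 20, 30, 40, 30, 35, 25]

ct_scores = [disease_ct_score(c, w) for c, w in zip(calcification, wall_thickening)]
# Drinking index = daily intake (mL) x drinking duration (years)
drinking_index = [d * y for d, y in zip(daily_intake_ml, drinking_years)]

rho, p_value = stats.spearmanr(drinking_index, ct_scores)
print(f"Spearman rho = {rho:.3f}, P = {p_value:.3f}")
```

With these toy values, the rank correlation is strongly positive, illustrating the kind of relationship the study reports between the drinking index and the disease CT score.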
null
null
CONCLUSION
The number of cases in our retrospective study was relatively small, and the pathogenesis of IMP needs to be determined by further study with a larger data set.
[ "INTRODUCTION", "Ethical approval", "Patient selection", "Clinical data collection", "Gastroscopy procedures", "CT acquisition", "Assessment of the disease CT score", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Idiopathic mesenteric phlebosclerosis (IMP) is a rare form of ischemic colitis that usually affects the right hemicolon. It is almost exclusively observed in Asian populations, is characterized by calcification of mesenteric veins and thickening of the wall of the right hemicolon. The etiology and pathogenesis remain unclear, but it is thought that long-term and frequent ingestion of biochemicals and toxins are associated with the disease[1,2]. As clarified by existing studies, long-term intake of herbal medicines or medicinal liquor containing geniposide is recognized as one of the major causes of IMP[2,3]. In this study, we describe 8 patients with mesenteric phlebosclerosis with long-term exposure to Chinese herbal medicines or medicinal liquor. The clinical manifestations and imaging features were summarized, and the relationship between the alcohol index and the severity of IMP observed by computed tomography (CT) were analyzed. This is an interesting article on a relatively rare disease and may present insights into the etiology of IMP.", "This was a retrospective study, and the data originated from the medical records at Zhejiang Provincial People's Hospital from June 2016 to December 2020. The study was approved by the Institutional Review Board at Zhejiang Provincial People's Hospital. Informed consent was waived; patient confidentiality was protected.", "The medical records of patients meeting the following inclusion criteria were retrospectively reviewed: (1) The clinical diagnosis of IMP was confirmed by abdominal CT, endoscopy, or pathology including at least one complete abdominal CT examination with or without intravenous contrast medium injection; and (2) Patients had complete clinical information.", "Eight patients were identified and their clinical data, including symptoms, anamnesis, history of herbal medicines, and therapy, were collected. Patient herbal medicine history, herbal medicine names, contact time, and daily intake were highlighted. 
Of note, 7 patients consumed similar dosages of two medicines, Wu chia-pee liquor and Wanying die-da wine for a long time. Moreover, the degree of the two wines was similar. The drinking index was calculated as the daily intake (mL) × drinking duration (years).", "Gastroscopy was performed with an Olympus 290 colonoscope (Olympus, Tokyo, Japan). The endoscope was inserted to 5-10 cm from the terminal ileum. The color of mucosa, the vascular textures of the terminal ileum, ileocecal area, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum were noted, and the presence of congestion, edema, erosion and ulcer were verified. Multiple samples of the intestinal mucosa were collected for histopathological examination.", "CT scans were generated with a 64-channel multi-detector CT scanner (Somatom Definition Flash, Siemens Medical Solutions, Forchheim, Germany). The CT scanning parameters were: Detector collimation, 1 mm; pitch, 1.5:1; tube voltage, 120 kV; tube current, 120-250 mAs; rotation time, 0.33 s. Contrast-enhanced CT was performed with 80-90 mL of 370 mg I/mL iodinated contrast agent (Ultravist, Bayer Schering Pharma AG) injected in a peripheral vein with a dual high-pressure syringe at a flow rate of 2.2-3 mL/s. A bolus-tracking technique was used to obtain arterial- and venous-phase CT images with delays of 10 s and 50-65 s after a 100 Hounsfield unit threshold of the descending abdominal aorta. Image reconstruction was performed with a 1 mm slice thickness and a 1 mm slice interval with an Application Development Workstation (MM Reading, syngo.via, Version VB20A, Siemens Healthcare GmbH, Forchheim, Germany).", "The CT imaging characteristics were assessed by 2 radiologists with 5 and 10 years of clinical experience. The severity of IMP was assessed with consideration of the calcification distribution at the tributaries of the portal vein and the range of thickening of the intestinal wall. 
Colonic wall thickening was defined as a lumen width exceeding 2 cm and wall thickness exceeding 5 mm. The CT scores of the IMP cases are listed in Table 1. A 4-grade calcification score was evaluated by the scope of mesenteric venous calcification of the colon[4]. Specifically, venous calcifications limited to the straight vein were scored as 1, and calcifications extending to the paracolic marginal vein were scored as 2. If the calcifications extended to the proximal part of the main branch of the mesenteric vein, the score was 3. If the distal end of the main branch was involved, then the score was 4. Illustrations of calcification distribution are shown in Figure 1. The severity of colon wall thickening was classified by three scores depending on the extent of the lesion. Lesions confined to the ascending colon had a severity score of 1. Those extending to the transverse colon had a severity score of 2, and those involving the descending colon or distal segment had a severity score of 3. The calcification and colon wall thickening scores were summed to obtain the disease CT score.\n\nGraphical illustration of the distribution of calcification in the superior mesenteric vein. SMV: Superior mesenteric vein.\nImaging characteristics of lesions on endoscopy and computed tomography\n+: Positive; -: Negative; CT: Computed tomography; A-colon: Ascending colon; IMV: Inferior mesenteric vein; L-colon: Left colon; PV: Portal vein; SMV: Superior mesenteric vein; T-colon: Transverse colon.\nCalcification scoring system. \nComputed tomography score of idiopathic mesenteric phlebosclerosis = bowel wall thickening score + calcification score (+ = 1 point). +: Calcifications limited to straight vein of the colon; ++: Calcifications extended to the paracolic marginal vein; +++: Calcifications extended to the main branch of mesenteric vein; ++++: Calcifications included the trunk of the mesenteric vein; –: No calcifications. 
", "The statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS version 25.0, IBM Corp, Armonk NY, United States). Spearman’s correlation analysis was used to assess the correlation between the drinking index and the disease CT score. Two-tailed P values of < 0.05 indicated statistical significance.", "Eight male patients with an average age of 75.7 (range of 59–88) year were included. Five presented with abdominal symptoms (e.g., abdominal pain, fullness, and diarrhea), one of whom had an intestinal obstruction. All 8 patients had histories of long-term use of Chinese herbal medicines or medicinal liquors. One had used an oral liquid for the treatment of chronic rhinitis. The 7 others had used Chinese medical liquors that contained gardenia, chuan xiong, and angelia dahurica. Tables 2 and 3 show the patient clinic characteristics and the ingredients that Chinese traditional medicines had in common. The patients all received conservative treatment.\nClinical characteristics of patients with idiopathic mesenteric phlebosclerosis\nM: Male.\nComposition of Chinese herbal liquid medicines\nAll 8 patients had taken one of the three Chinese herbal liquid medicines. The ingredients included in all the liquid formulations are listed.\nAll patients presented with punctate or linear calcification on CT images. Mesenteric venous calcification involved the ascending colon of all patients and extended to the transverse colon in 4 (Figures 1 and 2). In 2 of the 8 patients, the lesions extended to the descending colon. In 1 patient, the entire colon was involved (Figure 3). Calcification was limited to the right-side mesenteric vein in 6 of the 8 cases (75%). In 2 cases (25%), the left-side mesenteric vein was involved. Diffuse wall thickening in the affected region was observed in 7 patients. One patient presented with calcification without obvious thickening of the colon wall. 
The wall thickening was most often seen in the right and the transverse colon.\n\nAbdominal computed tomography shows colonic wall thickenings with threadlike calcifications (arrow) of the superior mesenteric vein in the transverse colon and ascending colon. A-C: Case 2; D-F: Case 4. Three-dimensional maximum intensity projection reconstruction of computed tomography angiography effectively illustrates the extent of calcification along the mesenteric vein (C, F).\n\nAbdominal computed tomography shows numerous linear and arc-like dense calcifications (arrow) distributed within the bowel wall of the ascending and hepatic flexure of the colon with thickening of the colon wall. A-C: Case 1; D-F: Case 5. Local stenosis is seen in the transverse colon of case 5 (thick arrow). Volume rendering image shows that calcifications were more prominently distributed in the mesenteric veins in the right hemicolon (C, F).\nThe median disease CT score was 4.88 (n = 7) and the median drinking index was 5680 (n = 7). The dispersion diagram in Figure 4 shows the relationship between the drinking index and the disease CT score. Spearman correlation analysis found a significant positive correlation between the alcohol drinking index and the disease CT score (r = 0.842, P < 0.05). In the 4 patients evaluated by colonoscopy, blue or dark blue colored mucosa was the most characteristic variation. The colonoscopy revealed multiple erosions and ulcers in 1 patient (Figure 5). Table 1 lists the characteristic endoscopic view and CT findings. Histopathology of the biopsy samples showed the deposition of collagen fibers in the subepithelium and around the blood vessels. The vitreous deposits were negatively stained by Congo red and appeared blue following Masson trichrome staining, which indicated lamina propria hyalinization and fibrosis (Figure 6).\n\nDispersion diagram and best fitting line for the drinking index vs the disease computed tomography score. 
Patients 2, 7, and 8, who had histories of diabetes, chronic nephritis, or prostate cancer, are shown as red dots. CT: Computed tomography.\n\nAbdominal computed tomography shows multiple threadlike calcifications within the colon wall and adjacent vein from the ileocecal junction to the descending colon (arrow). A-C: Case 6; D-F: Case 7. In case 6, calcifications of the mesenteric vein extended to the rectum; mild diffuse thickening of the colon wall is seen. Volume rendering image illustrates the distribution of calcifications in the mesenteric veins, with multifocal calcifications in the inferior mesenteric vein (C, F).\n\nRepresentative endoscopic views. A-C: Colonoscopy in case 3 revealed light blue discoloration in the transverse colon; D-F: Colonoscopy of case 2 showed edematous congested mucosa with pigmentation, and dark blue discoloration extending to the transverse colon; G-I: Colonoscopy of case 5 revealed edematous dark purple colonic mucosa and sclerotic changes of the colonic walls extending from the cecum to the splenic flexure of the colon.", "IMP, which is also known as phlebosclerotic colitis, is a rare intestinal ischemia syndrome with gradual onset and progression. It is characterized by thickening of the wall of the right hemicolon and calcification of mesenteric veins. Most cases have been reported in East Asian nations and regions, especially Japan and Taiwan. In 1991, Koyama et al[5] initially described the disease. To distinguish this disease from ischemic colitis associated with arterial diseases, it was termed “phlebosclerotic colitis” by Yao et al[6] in 2000. In 2003, Iwashita et al[7] advocated the term “idiopathic mesenteric phlebosclerosis”, as the affected site of this disease showed weak inflammatory changes. Most ischemic bowel diseases result from an insufficient arterial supply attributed to atherosclerosis, thrombosis, and embolus[8]. Disturbed venous return may also cause colitis, including IMP as described here. 
IMP is usually attributed to chronic ischemia of the colon resulting from calcification of the mesenteric venous system, which causes venous congestion of the colon and even hemorrhagic infarction.\nThe disease incidence is low, with mostly chronic and insidious onset. Patients with IMP usually present with nonspecific symptoms (e.g., abdominal pain, diarrhea, nausea, and vomiting). As the disease mostly involves the right colon, abdominal pain is more common in the right lower abdomen. Patients may be asymptomatic in the early stage of disease but may develop intestinal obstruction and even perforation in the advanced stage of the disease[9,10]. In this study, most of the 8 patients developed abdominal pain and diarrhea. One presented with intestinal obstruction as the first symptom, and another presented with gastrointestinal bleeding, which was basically consistent with existing reports.\nThe pathogenesis and etiology of IMP remain unclear. IMP is characterized by a defined geographic area and endemic population distribution, and a relationship with region-specific lifestyle has been stressed in the etiology of this disease[11]. Long-term and frequent ingestion of biochemical substances and toxins is considered to be associated with the disease. Most reported cases of IMP have been associated with the use of herbal medicines and medicinal liquor, most of which contained gardenia fruits[12]. Gardenia fruit is the dried mature fruit of Gardenia jasminoides Ellis. It is a popular crude drug used as a Chinese herb and has been extensively employed for treating cardiovascular and cerebrovascular diseases, hepatobiliary diseases, and diabetes. The main active ingredient of the gardenia fruit is geniposide. As deduced by some scholars, if patients take Chinese herbal drugs containing Gardenia for a long time, the geniposide can be hydrolyzed to genipin by bacteria in the intestinal tract, and the absorbed genipin reacts with proteins in mesenteric vein plasma. 
In addition, collagen gradually accumulates under the mucosa, which subsequently progresses to hyperplastic myointima in the veins, accompanied by fibrosis/sclerosis. The changes ultimately result in venous occlusion[13]. As geniposide is a glycoside, orally administered geniposide is not directly absorbed after reaching the lower digestive tract. Geniposide is hydrolyzed only after entering the cecum and ascending colon, where it is transformed by the numerous colonic bacteria to its metabolite, genipin, which permeates the enterocyte membrane[12]. The transformation and absorption processes occur largely in the right colon and transverse colon, which explains the characteristic lesion site of mesenteric venous sclerosis.\nAs shown in Table 2, a clear male predominance was found among the patients. This trend differs from the female predominance described in existing reports from Japan and Taiwan, which might be explained as follows. In Japan and Taiwan, herbal prescriptions containing geniposide are commonly used and are thought to be effective for female-specific symptoms, which may account for the female predominance[14,15]. However, most of our patients had a history of taking Chinese medical liquors for a long time. Most were male, which may be the reason for the male predominance in our study. Region-specific lifestyle may thus contribute to the understanding of the etiology of this disease. In this study, 7 patients had a history of taking the Chinese medical liquors named Wu chia-pee liquor and Wanying die-da wine, which consist of multiple Chinese herbs soaked in liquor and have various effects (e.g., enhancing fitness and optimizing immune responses)[3,14]. Another patient had no history of drinking alcohol and had been taking Biyuanshu oral liquid for a long time to treat chronic rhinitis. The medical liquids used by our 8 patients all contained geniposide, chuan xiong, and Angelica dahurica (Table 3). 
A study by Hiramatsu et al[14] of 25 IMP patients, in which geniposide was the only Chinese medicine common to all, is further evidence that Chinese herbal medicines containing geniposide are involved in the pathology of IMP. Nevertheless, whether geniposide is the only factor directly involved in the pathogenesis of IMP, or whether it acts together with other Chinese medicines, needs to be further determined in larger datasets.\nThe clinical symptoms of IMP lack specificity, and the diagnosis is largely determined by the results of radiology and colonoscopy[9,16]. Abdominal CT scans show calcifications of the involved superior mesenteric vein and its branches, which are linear and follow the course of the blood vessels. The involved colon wall becomes swollen and thickened[17]. Endoscopy shows a blue or bluish purple mucosa at the lesion site[18], and the color might be attributed to chronic congestion with ischemia or toxins that stained the bowel mucosa[19]. Tortuous and irregular veins can be seen under the mucosa with poor light transmission. In severe cases, the vessels might disappear. Hyperemia and edema of the colonic mucosa are sometimes accompanied by erosion or ulceration. The lesions are continuous, and the chronic course of disease involves spread from the ileocecal region toward the anal side. IMP mainly affects the right colon but may also involve the left colon and extend to the sigmoid colon; it generally does not involve the terminal ileum.\nIn this study, 2 patients had phlebosclerosis extending to the left colonic vein branch, 1 had chronic nephritis, and 1 had been treated 5 years previously with endocrine therapy and radiotherapy for prostate cancer, which are rare in IMP[2,20]. 
The investigators speculated that poor renal function and long-term treatment of malignant tumors prolong the clearance of genipin, an active metabolite of geniposide, which allows genipin to accumulate in the branches of the veins following absorption, thus aggravating the severity of mesenteric venous sclerosis and reducing the absorption capacity of the colon. Genipin that is not completely absorbed in the ascending and transverse colon is absorbed from the left colon, leading to sclerosis and calcification of the left colonic venous branch. In addition, systemic microvascular disease complicating diabetes and resulting in chronic hypoxia may increase the vulnerability of the colon wall and colonic veins[1]. Consequently, chronic nephritis, malignant tumors, and diabetes may increase the risk of the progression of IMP (Figure 4).\nAs shown in Figure 4, the severity of IMP is related to the drinking index, which reflects the daily intake of liquid medicine and the duration of contact. The effect on the vessel wall is time- and dose-dependent and may be related to colonic flora and colonic absorption capacity. IMP has characteristic manifestations on both CT and endoscopy, which make it relatively easy to diagnose. Conventional histopathology shows fibrosis and calcification of the vein wall and collagen deposition around the vessels. Because of the superficial location of the lesions, the pathological value of specimens taken during colonoscopy for the diagnosis of IMP is limited, and resected specimens may require in-depth observation. The treatment strategy for IMP can be determined on an individual basis. Patients with mild symptoms or no symptoms can be treated conservatively; progression stops once the exposure to pathogenic ingredients (e.g., Chinese herbal medicines) is discontinued. 
Surgical treatment is necessary if severe complications such as colonic obstruction, necrosis, perforation, or massive intestinal bleeding occur[21]. However, poor circulation may make colonic surgery inappropriate, and it must be chosen with care.\n\nHistologic examination of the biopsy specimen of case 2. A: Hematoxylin and eosin staining revealed marked fibrosis of the deep layer of the mucosa and wall thickening of the venules (× 40); B: Masson’s trichrome stain indicated that the deposits were stained blue, suggesting that they were collagen fibers (× 40); C: Congo red staining revealed absence of vitreous deposition, excluding amyloidosis (× 40).", "The evidence from this study supports geniposide as the ingredient most likely to be involved in the pathology of IMP. Clinical conditions, including chronic nephritis, malignant tumors, and diabetes mellitus, may be risk factors for IMP. It is recommended that long-term use of Chinese herbs and medical liquors be avoided, especially prescriptions or formulations containing gardenia. Both endoscopic and radiologic examinations can lead to a conclusive diagnosis even if biopsy results are insufficient or inconclusive." ]
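The study's key quantitative step, correlating the drinking index (daily intake in mL × duration in years) with the composite disease CT score by Spearman's rank correlation, can be illustrated with a small pure-Python sketch. The study itself used SPSS; the numbers below are invented for illustration (not patient data), and the simple rank formula shown assumes no tied values.

```python
# Illustrative sketch of the study's analysis: drinking index vs disease CT score.
# All values here are hypothetical, not the reported patient data.

def drinking_index(daily_intake_ml: float, duration_years: float) -> float:
    """Drinking index as defined in the study: daily intake (mL) x duration (years)."""
    return daily_intake_ml * duration_years

def spearman_rho(x, y):
    """Spearman's rank correlation, tie-free case: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented example: larger drinking indices paired with larger CT scores.
indices = [drinking_index(50, 10), drinking_index(100, 20), drinking_index(150, 30)]
scores = [3, 5, 7]
print(spearman_rho(indices, scores))  # perfectly monotonic data -> 1.0
```

With real, tied data a tie-corrected implementation (e.g., `scipy.stats.spearmanr`) would be used instead of the shortcut formula.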
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Ethical approval", "Patient selection", "Clinical data collection", "Gastroscopy procedures", "CT acquisition", "Assessment of the disease CT score", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Idiopathic mesenteric phlebosclerosis (IMP) is a rare form of ischemic colitis that usually affects the right hemicolon. It is almost exclusively observed in Asian populations, is characterized by calcification of mesenteric veins and thickening of the wall of the right hemicolon. The etiology and pathogenesis remain unclear, but it is thought that long-term and frequent ingestion of biochemicals and toxins are associated with the disease[1,2]. As clarified by existing studies, long-term intake of herbal medicines or medicinal liquor containing geniposide is recognized as one of the major causes of IMP[2,3]. In this study, we describe 8 patients with mesenteric phlebosclerosis with long-term exposure to Chinese herbal medicines or medicinal liquor. The clinical manifestations and imaging features were summarized, and the relationship between the alcohol index and the severity of IMP observed by computed tomography (CT) were analyzed. This is an interesting article on a relatively rare disease and may present insights into the etiology of IMP.", "Ethical approval This was a retrospective study, and the data originated from the medical records at Zhejiang Provincial People's Hospital from June 2016 to December 2020. The study was approved by the Institutional Review Board at Zhejiang Provincial People's Hospital. Informed consent was waived; patient confidentiality was protected.\nThis was a retrospective study, and the data originated from the medical records at Zhejiang Provincial People's Hospital from June 2016 to December 2020. The study was approved by the Institutional Review Board at Zhejiang Provincial People's Hospital. 
Informed consent was waived; patient confidentiality was protected.\nPatient selection The medical records of patients meeting the following inclusion criteria were retrospectively reviewed: (1) The clinical diagnosis of IMP was confirmed by abdominal CT, endoscopy, or pathology including at least one complete abdominal CT examination with or without intravenous contrast medium injection; and (2) Patients had complete clinical information.\nThe medical records of patients meeting the following inclusion criteria were retrospectively reviewed: (1) The clinical diagnosis of IMP was confirmed by abdominal CT, endoscopy, or pathology including at least one complete abdominal CT examination with or without intravenous contrast medium injection; and (2) Patients had complete clinical information.\nClinical data collection Eight patients were identified and their clinical data, including symptoms, anamnesis, history of herbal medicines, and therapy, were collected. Patient herbal medicine history, herbal medicine names, contact time, and daily intake were highlighted. Of note, 7 patients consumed similar dosages of two medicines, Wu chia-pee liquor and Wanying die-da wine for a long time. Moreover, the degree of the two wines was similar. The drinking index was calculated as the daily intake (mL) × drinking duration (years).\nEight patients were identified and their clinical data, including symptoms, anamnesis, history of herbal medicines, and therapy, were collected. Patient herbal medicine history, herbal medicine names, contact time, and daily intake were highlighted. Of note, 7 patients consumed similar dosages of two medicines, Wu chia-pee liquor and Wanying die-da wine for a long time. Moreover, the degree of the two wines was similar. The drinking index was calculated as the daily intake (mL) × drinking duration (years).\nGastroscopy procedures Gastroscopy was performed with an Olympus 290 colonoscope (Olympus, Tokyo, Japan). 
The endoscope was inserted to 5-10 cm from the terminal ileum. The color of the mucosa and the vascular textures of the terminal ileum, ileocecal area, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum were noted, and the presence of congestion, edema, erosion, and ulceration was verified. Multiple samples of the intestinal mucosa were collected for histopathological examination.\nCT acquisition CT scans were generated with a 64-channel multi-detector CT scanner (Somatom Definition Flash, Siemens Medical Solutions, Forchheim, Germany). The CT scanning parameters were: Detector collimation, 1 mm; pitch, 1.5:1; tube voltage, 120 kV; tube current, 120-250 mAs; rotation time, 0.33 s. Contrast-enhanced CT was performed with 80-90 mL of 370 mg I/mL iodinated contrast agent (Ultravist, Bayer Schering Pharma AG) injected in a peripheral vein with a dual high-pressure syringe at a flow rate of 2.2-3 mL/s. A bolus-tracking technique was used to obtain arterial- and venous-phase CT images with delays of 10 s and 50-65 s after a 100 Hounsfield unit threshold of the descending abdominal aorta. Image reconstruction was performed with a 1 mm slice thickness and a 1 mm slice interval with an Application Development Workstation (MM Reading, syngo.via, Version VB20A, Siemens Healthcare GmbH, Forchheim, Germany).\nAssessment of the disease CT score The CT imaging characteristics were assessed by 2 radiologists with 5 and 10 years of clinical experience. The severity of IMP was assessed with consideration of the calcification distribution at the tributaries of the portal vein and the range of thickening of the intestinal wall. Colonic wall thickening was defined as a lumen width exceeding 2 cm and wall thickness exceeding 5 mm. The CT scores of the IMP cases are listed in Table 1. A 4-grade calcification score was evaluated by the scope of mesenteric venous calcification of the colon[4]. Specifically, venous calcifications limited to the straight vein were scored as 1, and calcifications extending to the paracolic marginal vein were scored as 2. If the calcifications extended to the proximal part of the main branch of the mesenteric vein, the score was 3. If the distal end of the main branch was involved, then the score was 4. Illustrations of the calcification distribution are shown in Figure 1. The severity of colon wall thickening was classified by three scores depending on the extent of the lesion. Lesions confined to the ascending colon had a severity score of 1. 
Those extending to the transverse colon had a severity score of 2, and those involving the descending colon or a more distal segment had a severity score of 3. The calcification and colon wall thickening scores were summed to obtain the disease CT score.\n\nGraphical illustration of the distribution of calcification in the superior mesenteric vein. SMV: Superior mesenteric vein.\nImaging characteristics of lesions on endoscopy and computed tomography\n+: Positive; -: Negative; CT: Computed tomography; A-colon: Ascending colon; IMV: Inferior mesenteric vein; L-colon: Left colon; PV: Portal vein; SMV: Superior mesenteric vein; T-colon: Transverse colon.\nCalcification scoring system. \nComputed tomography score of idiopathic mesenteric phlebosclerosis = bowel wall thickening score + calcification score (+ = 1 point). +: Calcifications limited to the straight vein of the colon; ++: Calcifications extended to the paracolic marginal vein; +++: Calcifications extended to the main branch of the mesenteric vein; ++++: Calcifications included the trunk of the mesenteric vein; –: No calcifications. \nStatistical analysis The statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS version 25.0, IBM Corp, Armonk NY, United States). Spearman’s correlation analysis was used to assess the correlation between the drinking index and the disease CT score. Two-tailed P values of < 0.05 indicated statistical significance.", "This was a retrospective study, and the data originated from the medical records at Zhejiang Provincial People's Hospital from June 2016 to December 2020. The study was approved by the Institutional Review Board at Zhejiang Provincial People's Hospital. Informed consent was waived; patient confidentiality was protected.", "The medical records of patients meeting the following inclusion criteria were retrospectively reviewed: (1) The clinical diagnosis of IMP was confirmed by abdominal CT, endoscopy, or pathology including at least one complete abdominal CT examination with or without intravenous contrast medium injection; and (2) Patients had complete clinical information.", "Eight patients were identified and their clinical data, including symptoms, anamnesis, history of herbal medicines, and therapy, were collected. Patient herbal medicine history, herbal medicine names, contact time, and daily intake were highlighted. Of note, 7 patients consumed similar dosages of two medicines, Wu chia-pee liquor and Wanying die-da wine, for a long time. Moreover, the alcohol content of the two wines was similar. The drinking index was calculated as the daily intake (mL) × drinking duration (years).", "Gastroscopy was performed with an Olympus 290 colonoscope (Olympus, Tokyo, Japan). The endoscope was inserted to 5-10 cm from the terminal ileum. The color of the mucosa and the vascular textures of the terminal ileum, ileocecal area, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum were noted, and the presence of congestion, edema, erosion, and ulceration was verified. Multiple samples of the intestinal mucosa were collected for histopathological examination.", "CT scans were generated with a 64-channel multi-detector CT scanner (Somatom Definition Flash, Siemens Medical Solutions, Forchheim, Germany). 
The CT scanning parameters were: Detector collimation, 1 mm; pitch, 1.5:1; tube voltage, 120 kV; tube current, 120-250 mAs; rotation time, 0.33 s. Contrast-enhanced CT was performed with 80-90 mL of 370 mg I/mL iodinated contrast agent (Ultravist, Bayer Schering Pharma AG) injected in a peripheral vein with a dual high-pressure syringe at a flow rate of 2.2-3 mL/s. A bolus-tracking technique was used to obtain arterial- and venous-phase CT images with delays of 10 s and 50-65 s after a 100 Hounsfield unit threshold of the descending abdominal aorta. Image reconstruction was performed with a 1 mm slice thickness and a 1 mm slice interval with an Application Development Workstation (MM Reading, syngo.via, Version VB20A, Siemens Healthcare GmbH, Forchheim, Germany).", "The CT imaging characteristics were assessed by 2 radiologists with 5 and 10 years of clinical experience. The severity of IMP was assessed with consideration of the calcification distribution at the tributaries of the portal vein and the range of thickening of the intestinal wall. Colonic wall thickening was defined as a lumen width exceeding 2 cm and wall thickness exceeding 5 mm. The CT scores of the IMP cases are listed in Table 1. A 4-grade calcification score was evaluated by the scope of mesenteric venous calcification of the colon[4]. Specifically, venous calcifications limited to the straight vein were scored as 1, and calcifications extending to the paracolic marginal vein were scored as 2. If the calcifications extended to the proximal part of the main branch of the mesenteric vein, the score was 3. If the distal end of the main branch was involved, then the score was 4. Illustrations of the calcification distribution are shown in Figure 1. The severity of colon wall thickening was classified by three scores depending on the extent of the lesion. Lesions confined to the ascending colon had a severity score of 1. 
Those extending to the transverse colon had a severity score of 2, and those involving the descending colon or a more distal segment had a severity score of 3. The calcification and colon wall thickening scores were summed to obtain the disease CT score.\n\nGraphical illustration of the distribution of calcification in the superior mesenteric vein. SMV: Superior mesenteric vein.\nImaging characteristics of lesions on endoscopy and computed tomography\n+: Positive; -: Negative; CT: Computed tomography; A-colon: Ascending colon; IMV: Inferior mesenteric vein; L-colon: Left colon; PV: Portal vein; SMV: Superior mesenteric vein; T-colon: Transverse colon.\nCalcification scoring system. \nComputed tomography score of idiopathic mesenteric phlebosclerosis = bowel wall thickening score + calcification score (+ = 1 point). +: Calcifications limited to the straight vein of the colon; ++: Calcifications extended to the paracolic marginal vein; +++: Calcifications extended to the main branch of the mesenteric vein; ++++: Calcifications included the trunk of the mesenteric vein; –: No calcifications. ", "The statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS version 25.0, IBM Corp, Armonk NY, United States). Spearman’s correlation analysis was used to assess the correlation between the drinking index and the disease CT score. Two-tailed P values of < 0.05 indicated statistical significance.", "Eight male patients with an average age of 75.7 (range 59–88) years were included. Five presented with abdominal symptoms (e.g., abdominal pain, fullness, and diarrhea), one of whom had an intestinal obstruction. All 8 patients had histories of long-term use of Chinese herbal medicines or medicinal liquors. One had used an oral liquid for the treatment of chronic rhinitis. The 7 others had used Chinese medical liquors that contained gardenia, chuan xiong, and Angelica dahurica. 
Tables 2 and 3 show the patients' clinical characteristics and the ingredients that the Chinese traditional medicines had in common. The patients all received conservative treatment.\nClinical characteristics of patients with idiopathic mesenteric phlebosclerosis\nM: Male.\nComposition of Chinese herbal liquid medicines\nAll 8 patients had taken one of the three Chinese herbal liquid medicines. The ingredients included in all the liquid formulations are listed.\nAll patients presented with punctate or linear calcification on CT images. Mesenteric venous calcification involved the ascending colon of all patients and extended to the transverse colon in 4 (Figures 1 and 2). In 2 of the 8 patients, the lesions extended to the descending colon. In 1 patient, the entire colon was involved (Figure 3). Calcification was limited to the right-side mesenteric vein in 6 of the 8 cases (75%). In 2 cases (25%), the left-side mesenteric vein was involved. Diffuse wall thickening in the affected region was observed in 7 patients. One patient presented with calcification without obvious thickening of the colon wall. The wall thickening was most often seen in the right and the transverse colon.\n\nAbdominal computed tomography shows colonic wall thickenings with threadlike calcifications (arrow) of the superior mesenteric vein in the transverse colon and ascending colon. A-C: Case 2; D-F: Case 4. Three-dimensional maximum intensity projection reconstruction of computed tomography angiography effectively illustrates the extent of calcification along the mesenteric vein (C, F).\n\nAbdominal computed tomography shows numerous linear and arc-like dense calcifications (arrow) distributed within the bowel wall of the ascending and hepatic flexure of the colon with thickening of the colon wall. A-C: Case 1; D-F: Case 5. Local stenosis is seen in the transverse colon of case 5 (thick arrow). 
Volume rendering image shows that calcifications were more prominently distributed in the mesenteric veins in the right hemicolon (C, F).\nThe median disease CT score was 4.88 (n = 7) and the median drinking index was 5680 (n = 7). The dispersion diagram in Figure 4 shows the relationship between the drinking index and the disease CT score. Spearman correlation analysis found a significant positive correlation between the alcohol drinking index and the disease CT score (r = 0.842, P < 0.05). In the 4 patients evaluated by colonoscopy, blue or dark blue colored mucosa was the most characteristic variation. The colonoscopy revealed multiple erosions and ulcers in 1 patient (Figure 5). Table 1 lists the characteristic endoscopic view and CT findings. Histopathology of the biopsy samples showed the deposition of collagen fibers in the subepithelium and around the blood vessels. The vitreous deposits were negatively stained by Congo red and appeared blue following Masson trichrome staining, which indicated lamina propria hyalinization and fibrosis (Figure 6).\n\nDispersion diagram and best fitting line for the drinking index vs the disease computed tomography score. Patients 2, 7, and 8, who had histories of diabetes, chronic nephritis, or prostate cancer, are shown as red dots. CT: Computed tomography.\n\nAbdominal computed tomography shows multiple threadlike calcifications within the colon wall and adjacent vein from the ileocecal junction to the descending colon (arrow). A-C: Case 6; D-F: Case 7. In case 6, calcifications of the mesenteric vein extended to the rectum; mild diffuse thickening of the colon wall is seen. Volume rendering image illustrates the distribution of calcifications in the mesenteric veins, with multifocal calcifications in the inferior mesenteric vein (C, F).\n\nRepresentative endoscopic views. 
A-C: Colonoscopy in case 3 revealed light blue discoloration in the transverse colon; D-F: Colonoscopy of case 2 showed edematous congested mucosa with pigmentation, and dark blue discoloration extending to the transverse colon; G-I: Colonoscopy of case 5 revealed edematous dark purple colonic mucosa and sclerotic changes of the colonic walls extending from the cecum to the splenic flexure of the colon.", "IMP, which is also known as phlebosclerotic colitis, is a rare intestinal ischemia syndrome with gradual onset and progression. It is characterized by thickening of the wall of the right hemicolon and calcification of the mesenteric veins. Most cases have been reported in East Asian nations and regions, especially Japan and Taiwan. The disease was first described by Koyama et al[5] in 1991. To distinguish it from ischemic colitis associated with arterial disease, it was termed “phlebosclerotic colitis” by Yao et al[6] in 2000. In 2003, Iwashita et al[7] advocated the term “idiopathic mesenteric phlebosclerosis”, as the affected site of this disease shows only weak inflammatory changes. Most ischemic bowel diseases result from an insufficient arterial supply attributable to atherosclerosis, thrombosis, or embolism[8]. Disturbed venous return may also cause colitis, including the IMP described here. IMP is usually attributed to chronic ischemia of the colon resulting from calcification of the mesenteric venous system, which causes venous congestion of the colon and even hemorrhagic infarction.\nThe disease incidence is low, with mostly chronic and insidious onset. Patients with IMP usually present with nonspecific symptoms (e.g., abdominal pain, diarrhea, nausea, and vomiting). As the disease mostly involves the right colon, abdominal pain is more common in the right lower abdomen. Patients may be asymptomatic in the early stage of disease but may develop intestinal obstruction and even perforation in the advanced stage of the disease[9,10]. 
In this study, most of the 8 patients developed abdominal pain and diarrhea. One presented with intestinal obstruction as the first symptom, and another presented with gastrointestinal bleeding, which is largely consistent with existing reports.\nThe pathogenesis and etiology of IMP remain unclear. IMP shows a defined geographic and endemic population distribution, and a relationship with a region-specific lifestyle has been stressed in the etiology of this disease[11]. Long-term and frequent ingestion of biochemical substances and toxins is considered to be associated with the disease. Most reported cases of IMP have been associated with the use of herbal medicines and medicinal liquors, most of which contained gardenia fruits[12]. Gardenia fruit is the dried mature fruit of Gardenia jasminoides Ellis. It is a popular crude drug used as a Chinese herb and has been extensively employed for treating cardiovascular and cerebrovascular diseases, hepatobiliary diseases, and diabetes. The main active ingredient of the gardenia fruit is geniposide. Some scholars have deduced that if patients take Chinese herbal drugs containing gardenia for a long time, the geniposide is hydrolyzed to genipin by bacteria in the intestinal tract, and the absorbed genipin reacts with proteins in mesenteric vein plasma. In addition, collagen gradually accumulates under the mucosa, which subsequently progresses to hyperplastic myointima in the veins, accompanied by fibrosis/sclerosis. These changes ultimately result in venous occlusion[13]. Because geniposide is a glycoside, orally administered geniposide is not absorbed directly; it is hydrolyzed only after entering the cecum and ascending colon, where it is transformed by the numerous colonic bacteria into its metabolite, genipin, which permeates the enterocyte membrane[12]. 
The transformation and absorption processes have been largely localized to the right colon and transverse colon, which explains the characteristic lesion site of mesenteric venous sclerosis.\nAs shown in Table 2, a clear male predominance was found among the patients. This trend differs from the female predominance described in existing reports from Japan and Taiwan, which might be explained as follows. In Japan and Taiwan, herbal prescriptions containing geniposide are commonly used and are thought to be effective for female-specific symptoms, which may account for the female predominance there[14,15]. However, most of our patients had a history of taking Chinese medical liquors for a long time, and most were male, which may be the reason for the male predominance in our study. Region-specific lifestyle may thus contribute to the understanding of the etiology of this disease. In this study, 7 patients had a history of taking the Chinese medical liquors named Wu chia-pee liquor and Wanying die-da wine, which consist of multiple Chinese herbs soaked in liquor and are taken for various purported effects (e.g., enhancing fitness and optimizing immune responses)[3,14]. Another patient had no history of drinking alcohol and had been taking Biyuanshu oral liquid for a long time to treat chronic rhinitis. The medical liquids used by our 8 patients all contained geniposide, chuan xiong, and Angelica dahurica (Table 3). A study by Hiramatsu et al[14] of 25 IMP patients, in which geniposide was the only Chinese medicine common to all, is further evidence that Chinese herbal medicines containing geniposide are involved in the pathology of IMP. 
Nevertheless, whether geniposide is the only factor directly involved in the pathogenesis of IMP, or whether other Chinese medicines also contribute, needs to be determined in larger datasets.\nThe clinical symptoms of IMP lack specificity, and the diagnosis is largely determined by the results of radiology and colonoscopy[9,16]. Abdominal CT scans show calcifications of the involved superior mesenteric vein and its branches, which are linear and follow the course of the blood vessels. The involved colon wall becomes swollen and thickened[17]. Endoscopy shows a blue or bluish purple mucosa at the lesion site[18]; the color might be attributed to chronic congestion with ischemia or to toxins that stain the bowel mucosa[19]. Tortuous and irregular veins can be seen under the mucosa with poor light transmission, and in severe cases the vessels might disappear. Hyperemia and edema of the colonic mucosa are sometimes accompanied by erosion or ulcers. The lesions are continuous, and over the chronic course of the disease they spread from the ileocecal region toward the anal side. IMP mainly affects the right colon but may also involve the left colon and extend to the sigmoid colon; it generally does not involve the terminal ileum.\nIn this study, 2 patients had phlebosclerosis extending to the left colonic vein branch; of these, 1 had chronic nephritis, and 1 had been treated 5 years previously with endocrine therapy and radiotherapy for prostate cancer, findings that are rare in IMP[2,20]. We speculate that poor renal function and long-term treatment of malignant tumors prolong the clearance of genipin, the active metabolite of geniposide, which allows genipin to accumulate in the venous branches following absorption, thus aggravating the severity of mesenteric venous sclerosis and reducing the absorptive capacity of the colon. 
Genipin that is not completely absorbed in the ascending and transverse colon is absorbed from the left colon, leading to sclerosis and calcification of the left colonic venous branches. In addition, the systemic microvascular disease that complicates diabetes and results in chronic hypoxia may increase the vulnerability of the colon wall and colonic veins[1]. Consequently, chronic nephritis, malignant tumors, and diabetes may increase the risk of progression of IMP, as shown in Figure 6.\nAs shown in Figure 7, the severity of IMP is related to the drinking index, which reflects the daily intake of liquid medicine and the duration of exposure. The effect on the vessel wall is time- and dose-dependent and may be related to colonic flora and colonic absorptive capacity. IMP has characteristic manifestations on both CT and endoscopy, which make it relatively easy to diagnose. Conventional histopathology shows fibrosis and calcification of the vein wall and collagen deposition around the vessels. Because of the superficial location of biopsy sampling, the pathological value of specimens taken during colonoscopy for the diagnosis of IMP is limited, and in-depth examination of resected specimens may be required. The treatment strategy for IMP can be determined on an individual basis. Patients with mild symptoms or no symptoms can be treated conservatively; progression stops once exposure to the pathogenic ingredients (e.g., Chinese herbal medicines) ceases. Surgical treatment is necessary if severe complications such as colonic obstruction, necrosis, perforation, or massive intestinal bleeding occur[21]. However, poor local circulation may make colonic surgery inappropriate, so it must be chosen with care.\n\nHistologic examination of the biopsy specimen of case 2. 
A: Hematoxylin and eosin staining revealed marked fibrosis of the deep layer of the mucosa and wall thickening of the venules (× 40); B: Masson’s trichrome stain showed that the deposits were stained blue, suggesting that they were collagen fibers (× 40); C: Congo red staining revealed an absence of vitreous deposition, excluding amyloidosis (× 40).", "The evidence from this study supports geniposide as the agent most likely to be involved in the pathology of IMP. Clinical conditions including chronic nephritis, malignant tumors, and diabetes mellitus may be risk factors for IMP. It is recommended that long-term use of Chinese herbs and medical liquors be avoided, especially prescriptions or formulations containing gardenia. Both endoscopic and radiologic examinations can lead to a conclusive diagnosis even if biopsy results are insufficient or inconclusive." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[ "Idiopathic mesenteric phlebosclerosis", "Phlebosclerotic colitis", "Chinese herbal liquid", "Geniposide", "Colonoscopy", "Computed tomography" ]
INTRODUCTION: Idiopathic mesenteric phlebosclerosis (IMP) is a rare form of ischemic colitis that usually affects the right hemicolon. It is almost exclusively observed in Asian populations and is characterized by calcification of the mesenteric veins and thickening of the wall of the right hemicolon. The etiology and pathogenesis remain unclear, but long-term and frequent ingestion of biochemicals and toxins is thought to be associated with the disease[1,2]. As clarified by existing studies, long-term intake of herbal medicines or medicinal liquor containing geniposide is recognized as one of the major causes of IMP[2,3]. In this study, we describe 8 patients with mesenteric phlebosclerosis and long-term exposure to Chinese herbal medicines or medicinal liquor. The clinical manifestations and imaging features were summarized, and the relationship between the drinking index and the severity of IMP observed on computed tomography (CT) was analyzed. This report on a relatively rare disease may provide insights into the etiology of IMP. MATERIALS AND METHODS: Ethical approval This was a retrospective study, and the data originated from the medical records at Zhejiang Provincial People's Hospital from June 2016 to December 2020. The study was approved by the Institutional Review Board at Zhejiang Provincial People's Hospital. Informed consent was waived; patient confidentiality was protected. 
Patient selection The medical records of patients meeting the following inclusion criteria were retrospectively reviewed: (1) The clinical diagnosis of IMP was confirmed by abdominal CT, endoscopy, or pathology, including at least one complete abdominal CT examination with or without intravenous contrast medium injection; and (2) Patients had complete clinical information. Clinical data collection Eight patients were identified and their clinical data, including symptoms, anamnesis, history of herbal medicines, and therapy, were collected. Patient herbal medicine history, herbal medicine names, contact time, and daily intake were highlighted. Of note, 7 patients had consumed similar dosages of two medicines, Wu chia-pee liquor and Wanying die-da wine, for a long time. Moreover, the alcohol content of the two wines was similar. The drinking index was calculated as the daily intake (mL) × drinking duration (years). Colonoscopy procedures Colonoscopy was performed with an Olympus 290 colonoscope (Olympus, Tokyo, Japan). The endoscope was inserted to 5-10 cm from the terminal ileum. 
The color of the mucosa and the vascular textures of the terminal ileum, ileocecal area, ascending colon, transverse colon, descending colon, sigmoid colon, and rectum were noted, and the presence of congestion, edema, erosion, and ulcers was verified. Multiple samples of the intestinal mucosa were collected for histopathological examination. CT acquisition CT scans were generated with a 64-channel multi-detector CT scanner (Somatom Definition Flash, Siemens Medical Solutions, Forchheim, Germany). The CT scanning parameters were: Detector collimation, 1 mm; pitch, 1.5:1; tube voltage, 120 kV; tube current, 120-250 mAs; rotation time, 0.33 s. Contrast-enhanced CT was performed with 80-90 mL of 370 mg I/mL iodinated contrast agent (Ultravist, Bayer Schering Pharma AG) injected in a peripheral vein with a dual high-pressure syringe at a flow rate of 2.2-3 mL/s. A bolus-tracking technique was used to obtain arterial- and venous-phase CT images with delays of 10 s and 50-65 s after a 100 Hounsfield unit threshold was reached in the descending abdominal aorta. Image reconstruction was performed with a 1 mm slice thickness and a 1 mm slice interval with an Application Development Workstation (MM Reading, syngo.via, Version VB20A, Siemens Healthcare GmbH, Forchheim, Germany). 
Assessment of the disease CT score The CT imaging characteristics were assessed by 2 radiologists with 5 and 10 years of clinical experience. The severity of IMP was assessed with consideration of the calcification distribution at the tributaries of the portal vein and the range of thickening of the intestinal wall. Colonic wall thickening was defined as a lumen width exceeding 2 cm and wall thickness exceeding 5 mm. The CT scores of the IMP cases are listed in Table 1. A 4-grade calcification score was evaluated by the scope of mesenteric venous calcification of the colon[4]. Specifically, venous calcifications limited to the straight vein were scored as 1, and calcifications extending to the paracolic marginal vein were scored as 2. If the calcifications extended to the proximal part of the main branch of the mesenteric vein, the score was 3. If the distal end of the main branch was involved, then the score was 4. Illustrations of calcification distribution are shown in Figure 1. The severity of colon wall thickening was classified by three scores depending on the extent of the lesion. Lesions confined to the ascending colon had a severity score of 1. 
Those extending to the transverse colon had a severity score of 2, and those involving the descending colon or a distal segment had a severity score of 3. The calcification and colon wall thickening scores were summed to obtain the disease CT score. Graphical illustration of the distribution of calcification in the superior mesenteric vein. SMV: Superior mesenteric vein. Imaging characteristics of lesions on endoscopy and computed tomography +: Positive; -: Negative; CT: Computed tomography; A-colon: Ascending colon; IMV: Inferior mesenteric vein; L-colon: Left colon; PV: Portal vein; SMV: Superior mesenteric vein; T-colon: Transverse colon. Calcification scoring system. Computed tomography score of idiopathic mesenteric phlebosclerosis = bowel wall thickening score + calcification score (+ = 1 point). +: Calcifications limited to the straight vein of the colon; ++: Calcifications extended to the paracolic marginal vein; +++: Calcifications extended to the main branch of the mesenteric vein; ++++: Calcifications included the trunk of the mesenteric vein; –: No calcifications. 
Statistical analysis The statistical analysis was performed with the Statistical Package for the Social Sciences (SPSS version 25.0, IBM Corp, Armonk, NY, United States). Spearman’s correlation analysis was used to assess the correlation between the drinking index and the disease CT score. Two-tailed P values of < 0.05 indicated statistical significance. 
RESULTS: Eight male patients with an average age of 75.7 (range 59-88) years were included. Five presented with abdominal symptoms (e.g., abdominal pain, fullness, and diarrhea), one of whom had an intestinal obstruction. All 8 patients had histories of long-term use of Chinese herbal medicines or medicinal liquors. One had used an oral liquid for the treatment of chronic rhinitis. The 7 others had used Chinese medical liquors that contained gardenia, chuan xiong, and Angelica dahurica. 
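The quantitative pieces of the analysis above — the drinking index (daily intake in mL × drinking duration in years), the disease CT score (calcification grade 1-4 plus wall-thickening grade 1-3), and Spearman's rank correlation between them — can be sketched in a short, self-contained Python example. This is only an illustration of the stated definitions; the helper names and the per-patient values below are hypothetical, not the authors' code or raw data:

```python
def drinking_index(daily_intake_ml: float, duration_years: float) -> float:
    """Drinking index as defined in the Methods: daily intake (mL) x duration (years)."""
    return daily_intake_ml * duration_years

def disease_ct_score(calcification_grade: int, wall_thickening_grade: int) -> int:
    """Disease CT score = calcification grade (1-4, venous extent)
    + wall-thickening grade (1-3, colonic extent)."""
    if not (1 <= calcification_grade <= 4 and 1 <= wall_thickening_grade <= 3):
        raise ValueError("grade out of range")
    return calcification_grade + wall_thickening_grade

def _ranks(values):
    """1-based ranks, averaging ties (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = midrank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks (inputs must not be constant)."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-patient values (NOT the study's data); note the tie in the scores:
index = [drinking_index(50.0, 4), drinking_index(100.0, 10),
         drinking_index(100.0, 30), drinking_index(200.0, 30)]
score = [disease_ct_score(1, 1), disease_ct_score(2, 2),
         disease_ct_score(2, 2), disease_ct_score(4, 3)]
print(f"rho = {spearman_rho(index, score):.3f}")  # prints "rho = 0.949"
```

A strictly monotone relationship gives rho = 1; the study reports r = 0.842 over its 7 evaluable patients, i.e., a strong but imperfect rank agreement between exposure and radiologic severity.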
Tables 2 and 3 show the patient clinic characteristics and the ingredients that Chinese traditional medicines had in common. The patients all received conservative treatment. Clinical characteristics of patients with idiopathic mesenteric phlebosclerosis M: Male. Composition of Chinese herbal liquid medicines All 8 patients had taken one of the three Chinese herbal liquid medicines. The ingredients included in all the liquid formulations are listed. All patients presented with punctate or linear calcification on CT images. Mesenteric venous calcification involved the ascending colon of all patients and extended to the transverse colon in 4 (Figures 1 and 2). In 2 of the 8 patients, the lesions extended to the descending colon. In 1 patient, the entire colon was involved (Figure 3). Calcification was limited to the right-side mesenteric vein in 6 of the 8 cases (75%). In 2 cases (25%), the left-side mesenteric vein was involved. Diffuse wall thickening in the affected region was observed in 7 patients. One patient presented with calcification without obvious thickening of the colon wall. The wall thickening was most often seen in the right and the transverse colon. Abdominal computed tomography shows colonic wall thickenings with threadlike calcifications (arrow) of the superior mesenteric vein in the transverse colon and ascending colon. A-C: Case 2; D-F: Case 4. Three-dimensional maximum intensity projection reconstruction of computed tomography angiography effectively illustrates the extent of calcification along the mesenteric vein (C, F). Abdominal computed tomography shows numerous linear and arc-like dense calcifications (arrow) distributed within the bowel wall of the ascending and hepatic flexure of the colon with thickening of the colon wall. A-C: Case 1; D-F: Case 5. Case 5 Local stenosis is seen in the transverse colon of case 5 (thick arrow). 
Volume rendering image shows that calcifications were more prominently distributed in the mesenteric veins in the right hemicolon (C, F). The median disease CT score was 4.88 (n = 7) and the median drinking index was 5680 (n = 7). The dispersion diagram in Figure 4 shows the relationship between the drinking index and the disease CT score. Spearman correlation analysis found a significant positive correlation between the alcohol drinking index and the disease CT score (r = 0.842, P < 0.05). In the 4 patients evaluated by of colonoscopy, blue or dark blue colored mucosa was the most characteristic variation. The colonoscopy revealed multiple erosions and ulcers in 1 patient (Figure 5). Table 1 lists the characteristic endoscopic view and CT findings. Histopathology of the biopsy samples showed the deposition of collagen fibers in the subepithelium and around the blood vessels. The vitreous deposits were negatively stained by Congo red and appeared blue following Masson-trichrome, staining, which indicated lamina propria hyalinization and fibrosis (Figure 6). Dispersion diagram and best fitting line for the drinking index vs the disease computed tomography score. Patients two, seven, and eight had histories of diabetes, chronic nephritis, or prostate cancer are shown by red dots. CT: Computed tomography. Abdominal computed tomography shows multiple threadlike calcifications within the colon wall and adjacent vein from the ileocecal junction to the descending colon (arrow). A-C: Case 6; D-F: Case 7. In case 6, calcifications of the mesenteric vein extended to the rectum; mild diffuse thickening of colon wall is seen. Volume rendering image illustrates the distribution of calcifications in the mesenteric veins, the inferior mesenteric vein with multifocal calcifications (C, F). Representative endoscopic views. 
A-C: Colonoscopy in case 3 revealed light blue discoloration in the transverse colon; D-F: Colonoscopy case 2 showed edematous congested mucosa with pigmentation, and dark blue discoloration extending to the transverse colon; G-I: Colonoscopy of case 5 revealed edematous dark purple colonic mucosa and sclerotic changes of the colonic walls extending from the cecum to the splenic flexure of colon. DISCUSSION: IMP, which is also known as phlebosclerotic colitis, is a rare intestinal ischemia syndrome with gradual onset and progression. It is characterized by thickening of the wall of the right hemicolon and calcification of mesenteric veins. Most cases have been reported in East Asian nations and regions, especially Japan and Taiwan. In 1991, Koyama et al[5] initially described the disease. To distinguish this disease from ischemic colitis associated with arterial diseases, it was termed as “phlebosclerotic colitis” by Yao et al[6] in 2000. In 2003, Iwashita et al[7] advocated the term “idiopathic mesenteric phlebosclerosis”, as the affected site of this disease showed weak inflammatory changes. Most ischemic bowel diseases result from an insufficient arterial supply attributed to atherosclerosis, thrombosis, and embolus[8]. Disturbed venous return may also cause colitis, including IMP as described here. IMP is usually attributed to chronic ischemia of the colon resulting from calcification of the mesenteric venous system that causes venous congestion of the colon and even hemorrhagic infarction. The disease incidence is low, with mostly chronic and insidious onset. Patients subject to IMP usually present with nonspecific symptoms (e.g., abdominal pain, diarrhea, nausea, and vomiting). As the disease mostly involves the right colon, abdominal pain is more common in the right lower abdomen. Patients may be asymptomatic in the early stage of disease but may develop intestinal obstruction and even perforation in the advanced stage of the disease[9,10]. 
In this study, most of the 8 patients developed abdominal pain and diarrhea. One presented with intestinal obstruction as the first symptom, and another presented with gastrointestinal bleeding, which is largely consistent with existing reports. The pathogenesis and etiology of IMP remain unclear. IMP shows a defined geographic and endemic population distribution, and a relationship with a region-specific lifestyle has been stressed in the etiology of this disease[11]. Long-term and frequent ingestion of biochemical substances and toxins is considered to be associated with the disease. Most reported cases of IMP have been associated with the use of herbal medicines and medicinal liquor, most of which contained gardenia fruits[12]. Gardenia fruit is the dried mature fruit of Gardenia jasminoides Ellis. It is a popular crude drug used as a Chinese herb and has been extensively employed for treating cardiovascular and cerebrovascular diseases, hepatobiliary diseases, and diabetes. The main active ingredient of the gardenia fruit is geniposide. Some scholars have deduced that if patients take Chinese herbal drugs containing gardenia for a long time, the geniposide can be hydrolyzed to genipin by bacteria in the intestinal tract, and the absorbed genipin reacts with protein in mesenteric vein plasma. In addition, collagen gradually accumulates under the mucosa, which subsequently progresses to hyperplastic myointima in the veins, accompanied by fibrosis/sclerosis. These changes ultimately result in venous occlusion[13]. Because geniposide is a glycoside, orally administered geniposide is not directly absorbed before reaching the lower digestive tract. Geniposide is hydrolyzed only after entering the cecum and ascending colon, where the numerous colonic bacteria transform it to its metabolite, genipin, which permeates the enterocyte membrane[12]. 
The transformation and absorption processes have been largely identified in the right colon and transverse colon, which explains the characteristic lesion site of mesenteric venous sclerosis. As shown in Table 2, a clear male predominance was found among the patients. This trend differs from the female predominance described in existing reports from Japan and Taiwan, which might be explained as follows. In Japan and Taiwan, herbal prescriptions containing geniposide are commonly used and are thought to be effective for female-specific symptoms, which may account for the female predominance[14,15]. However, most of our patients had a history of taking Chinese medicinal liquors for a long time. Most were male, which may be the reason for the male predominance in our study. Region-specific lifestyle may thus contribute to the understanding of the etiology of this disease. In this study, 7 patients had a history of taking the Chinese medicinal liquors named Wu chia-pee liquor and Wanying die-da wine, which consist of multiple Chinese herbs soaked in liquor and have various purported effects (e.g., enhancing fitness and optimizing immune responses)[3,14]. Another patient had no history of drinking alcohol and had been taking Biyuanshu oral liquid for a long time to treat chronic rhinitis. The medicinal liquids used by our 8 patients all contained geniposide, chuan xiong, and Angelica dahurica (Table 3). A study by Hiramatsu et al[14] of 25 IMP patients, in which geniposide was the only Chinese medicine common to all, is further evidence that Chinese herbal medicines containing geniposide are involved in the pathology of IMP. Nevertheless, whether geniposide is the only factor directly involved in the pathogenesis of IMP, or whether other Chinese medicines also contribute, needs to be determined in larger datasets. The clinical symptoms of IMP lack specificity, and the diagnosis is largely determined by the results of radiology and colonoscopy[9,16]. 
Abdominal CT scans show calcifications of the involved superior mesenteric vein and its branches, which are linear and follow the course of the blood vessels. The involved colon wall becomes swollen and thickened[17]. Endoscopy shows a blue or bluish purple mucosa at the lesion site[18]; the color might be attributed to chronic congestion with ischemia or to toxins that stained the bowel mucosa[19]. Tortuous and irregular veins can be seen under the mucosa with poor light transmission. In severe cases, the vessels might disappear. Hyperemia and edema of the colonic mucosa are sometimes accompanied by erosion or ulceration. The lesions are continuous, and the chronic course of the disease involves spread from the ileocecal to the anal side. IMP mainly affects the right colon but may also involve the left colon and extend to the sigmoid colon; it generally does not involve the terminal ileum. In this study, 2 patients had phlebosclerosis extending to the left colonic vein branch; 1 had chronic nephritis, and 1 had been treated 5 years previously with endocrine therapy and radiotherapy for prostate cancer, which are rare in IMP[2,20]. We speculate that poor renal function and long-term treatment of malignant tumors prolong the clearance of genipin, an active metabolite of geniposide, which allows genipin to accumulate in the branches of the veins following absorption, thus aggravating the severity of mesenteric venous sclerosis and reducing the absorption capacity of the colon. Genipin that is not completely absorbed by the ascending and transverse colon is absorbed from the left colon, leading to sclerosis and calcification of the left colonic venous branch. In addition, systemic microvascular disease complicating diabetes and resulting in chronic hypoxia may increase the vulnerability of the colon wall and colonic veins[1]. Consequently, chronic nephritis, malignant tumors, and diabetes may increase the risk of the progression of IMP, as shown in Figure 6. 
As shown in Figure 7, the severity of IMP is related to the drinking index, which reflects the daily intake of the medicinal liquid and the duration of exposure. The effect on the vessel wall is time- and dose-dependent and may be related to the colonic flora and colonic absorption capacity. IMP has characteristic manifestations on both CT and endoscopy, which make it relatively easy to diagnose. Conventional histopathology shows fibrosis and calcification of the vein wall and collagen deposition around the vessels. Because of the superficial location of the lesions, the pathological value of specimens taken during colonoscopy for the diagnosis of IMP is limited, and the resected specimen may require in-depth observation. The treatment strategy for IMP can be determined on an individual basis. Patients with mild symptoms or no symptoms can be treated conservatively, and progression generally stops once exposure to the pathogenic ingredients (e.g., Chinese herbal medicine) ceases. Surgical treatment is necessary if severe complications such as colonic obstruction, necrosis, perforation, or massive intestinal bleeding occur[21]. However, poor local circulation may make colonic surgery inappropriate, so it must be chosen with care. Histologic examination of the biopsy specimen of case 2. A: Hematoxylin and eosin staining revealed marked fibrosis of the deep layer of the mucosa and wall thickening of the venules (× 40); B: Masson’s trichrome stain showed that the deposits stained blue, suggesting that they were collagen fibers (× 40); C: Congo red staining revealed an absence of vitreous deposition, excluding amyloidosis (× 40). CONCLUSION: The evidence from this study supports geniposide as the agent most likely to be involved in the pathology of IMP. Clinical conditions, including chronic nephritis, malignant tumors, and diabetes mellitus, may be risk factors for IMP. 
It is recommended that long-term use of Chinese herbs and medicinal liquors be avoided, especially prescriptions or formulations containing gardenia. Both endoscopic and radiologic examinations can lead to a conclusive diagnosis even if biopsy results are insufficient or inconclusive.
Background: Idiopathic mesenteric phlebosclerosis (IMP) is a rare disease, and its etiology and risk factors remain uncertain. Methods: The detailed formula of the herbal liquid prescriptions of all patients was studied, and the herbal ingredients were compared to identify the toxic agent as a possible etiological factor. Abdominal computed tomography (CT) and colonoscopy images were reviewed to determine the extent and severity of mesenteric phlebosclerosis and the presence of findings indicating colitis. The disease CT score was determined from the distribution of mesenteric vein calcification and colon wall thickening on CT images. The drinking index of medicinal liquor was calculated from the daily quantity and the number of years of drinking Chinese medicinal liquor. Subsequently, Spearman's correlation analysis was conducted to evaluate the correlation between the drinking index and the disease CT score. Results: The mean age of the 8 enrolled patients was 75.7 years, and male predominance was found (all 8 patients were men). The patients had histories of 5-40 years of oral intake of Chinese herbal liquids containing geniposide and exhibited typical imaging characteristics (e.g., threadlike calcifications along the colonic and mesenteric vessels or associated with a thickened colonic wall on CT images). Calcifications were confined to the right-side mesenteric vein in 6 of the 8 patients (75%) and involved the left-side mesenteric vein in 2 cases (25%), with the calcifications extending to the mesorectum in 1 of them. Thickening of the colon wall mainly occurred in the right colon and the transverse colon. The median disease CT score was 4.88 (n = 7) and the median drinking index was 5680 (n = 7). Spearman's correlation analysis showed a significant positive correlation between the disease CT score and the drinking index (r = 0.842, P < 0.05). Conclusions: Long-term oral intake of Chinese herbal liquid containing geniposide may play a role in the pathogenesis of IMP.
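The drinking index described above (daily quantity multiplied by years of intake) and its Spearman correlation with a CT score can be sketched as follows; the patient values below are hypothetical illustrations, not the study's raw data:

```python
def ranks(values):
    # Average ranks (1-based), handling ties
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical values for seven patients (daily ml of medicinal liquor, years of intake)
daily_ml = [50, 100, 150, 100, 200, 250, 100]
years = [10, 20, 30, 40, 25, 30, 5]
drinking_index = [d * y for d, y in zip(daily_ml, years)]
ct_score = [1, 3, 5, 6, 5, 7, 2]
rho = spearman(drinking_index, ct_score)
```

With real data, `scipy.stats.spearmanr` gives the same statistic together with a P value; the hand-rolled version above only shows how the rank-based coefficient is formed.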
INTRODUCTION: Idiopathic mesenteric phlebosclerosis (IMP) is a rare form of ischemic colitis that usually affects the right hemicolon. It is observed almost exclusively in Asian populations and is characterized by calcification of the mesenteric veins and thickening of the wall of the right hemicolon. The etiology and pathogenesis remain unclear, but long-term and frequent ingestion of biochemicals and toxins is thought to be associated with the disease[1,2]. As clarified by existing studies, long-term intake of herbal medicines or medicinal liquor containing geniposide is recognized as one of the major causes of IMP[2,3]. In this study, we describe 8 patients with mesenteric phlebosclerosis with long-term exposure to Chinese herbal medicines or medicinal liquor. The clinical manifestations and imaging features were summarized, and the relationship between the alcohol index and the severity of IMP observed by computed tomography (CT) was analyzed. CONCLUSION: The number of cases in our retrospective study was relatively small, and the pathogenesis of IMP needs to be determined by further study with a larger data set.
Background: Idiopathic mesenteric phlebosclerosis (IMP) is a rare disease, and its etiology and risk factors remain uncertain. Methods: The detailed formula of the herbal liquid prescriptions of all patients was studied, and the herbal ingredients were compared to identify the toxic agent as a possible etiological factor. Abdominal computed tomography (CT) and colonoscopy images were reviewed to determine the extent and severity of mesenteric phlebosclerosis and the presence of findings indicating colitis. The disease CT score was determined from the distribution of mesenteric vein calcification and colon wall thickening on CT images. The drinking index of medicinal liquor was calculated from the daily quantity and the number of years of drinking Chinese medicinal liquor. Subsequently, Spearman's correlation analysis was conducted to evaluate the correlation between the drinking index and the disease CT score. Results: The mean age of the 8 enrolled patients was 75.7 years, and male predominance was found (all 8 patients were men). The patients had histories of 5-40 years of oral intake of Chinese herbal liquids containing geniposide and exhibited typical imaging characteristics (e.g., threadlike calcifications along the colonic and mesenteric vessels or associated with a thickened colonic wall on CT images). Calcifications were confined to the right-side mesenteric vein in 6 of the 8 patients (75%) and involved the left-side mesenteric vein in 2 cases (25%), with the calcifications extending to the mesorectum in 1 of them. Thickening of the colon wall mainly occurred in the right colon and the transverse colon. The median disease CT score was 4.88 (n = 7) and the median drinking index was 5680 (n = 7). Spearman's correlation analysis showed a significant positive correlation between the disease CT score and the drinking index (r = 0.842, P < 0.05). Conclusions: Long-term oral intake of Chinese herbal liquid containing geniposide may play a role in the pathogenesis of IMP.
5,727
366
[ 180, 53, 58, 101, 89, 191, 419, 60, 860, 1597, 83 ]
12
[ "colon", "vein", "ct", "mesenteric", "score", "patients", "imp", "calcification", "wall", "calcifications" ]
[ "phlebosclerosis imp", "known phlebosclerotic colitis", "mesenteric phlebosclerosis male", "mesenteric phlebosclerosis long", "mesenteric phlebosclerosis bowel" ]
null
[CONTENT] Idiopathic mesenteric phlebosclerosis | Phlebosclerotic colitis | Chinese herbal liquid | Geniposide | Colonoscopy | Computed tomography [SUMMARY]
[CONTENT] Idiopathic mesenteric phlebosclerosis | Phlebosclerotic colitis | Chinese herbal liquid | Geniposide | Colonoscopy | Computed tomography [SUMMARY]
null
[CONTENT] Idiopathic mesenteric phlebosclerosis | Phlebosclerotic colitis | Chinese herbal liquid | Geniposide | Colonoscopy | Computed tomography [SUMMARY]
[CONTENT] Idiopathic mesenteric phlebosclerosis | Phlebosclerotic colitis | Chinese herbal liquid | Geniposide | Colonoscopy | Computed tomography [SUMMARY]
[CONTENT] Idiopathic mesenteric phlebosclerosis | Phlebosclerotic colitis | Chinese herbal liquid | Geniposide | Colonoscopy | Computed tomography [SUMMARY]
[CONTENT] Aged | Colon | Colonoscopy | Humans | Iridoids | Male | Mesenteric Veins [SUMMARY]
[CONTENT] Aged | Colon | Colonoscopy | Humans | Iridoids | Male | Mesenteric Veins [SUMMARY]
null
[CONTENT] Aged | Colon | Colonoscopy | Humans | Iridoids | Male | Mesenteric Veins [SUMMARY]
[CONTENT] Aged | Colon | Colonoscopy | Humans | Iridoids | Male | Mesenteric Veins [SUMMARY]
[CONTENT] Aged | Colon | Colonoscopy | Humans | Iridoids | Male | Mesenteric Veins [SUMMARY]
[CONTENT] phlebosclerosis imp | known phlebosclerotic colitis | mesenteric phlebosclerosis male | mesenteric phlebosclerosis long | mesenteric phlebosclerosis bowel [SUMMARY]
[CONTENT] phlebosclerosis imp | known phlebosclerotic colitis | mesenteric phlebosclerosis male | mesenteric phlebosclerosis long | mesenteric phlebosclerosis bowel [SUMMARY]
null
[CONTENT] phlebosclerosis imp | known phlebosclerotic colitis | mesenteric phlebosclerosis male | mesenteric phlebosclerosis long | mesenteric phlebosclerosis bowel [SUMMARY]
[CONTENT] phlebosclerosis imp | known phlebosclerotic colitis | mesenteric phlebosclerosis male | mesenteric phlebosclerosis long | mesenteric phlebosclerosis bowel [SUMMARY]
[CONTENT] phlebosclerosis imp | known phlebosclerotic colitis | mesenteric phlebosclerosis male | mesenteric phlebosclerosis long | mesenteric phlebosclerosis bowel [SUMMARY]
[CONTENT] colon | vein | ct | mesenteric | score | patients | imp | calcification | wall | calcifications [SUMMARY]
[CONTENT] colon | vein | ct | mesenteric | score | patients | imp | calcification | wall | calcifications [SUMMARY]
null
[CONTENT] colon | vein | ct | mesenteric | score | patients | imp | calcification | wall | calcifications [SUMMARY]
[CONTENT] colon | vein | ct | mesenteric | score | patients | imp | calcification | wall | calcifications [SUMMARY]
[CONTENT] colon | vein | ct | mesenteric | score | patients | imp | calcification | wall | calcifications [SUMMARY]
[CONTENT] imp | long term | term | mesenteric | etiology | medicinal liquor | rare | herbal medicines medicinal liquor | observed | medicines medicinal liquor [SUMMARY]
[CONTENT] colon | vein | score | ct | mesenteric | calcifications | calcification | mesenteric vein | mm | wall [SUMMARY]
null
[CONTENT] imp | likely involved pathology | mellitus risk | term use chinese herbs | use chinese herbs | use chinese herbs medical | chinese herbs medical | results insufficient | results insufficient inconclusive | chinese herbs medical liquors [SUMMARY]
[CONTENT] colon | vein | mesenteric | ct | patients | score | imp | calcifications | calcification | wall [SUMMARY]
[CONTENT] colon | vein | mesenteric | ct | patients | score | imp | calcifications | calcification | wall [SUMMARY]
[CONTENT] IMP [SUMMARY]
[CONTENT] ||| ||| CT ||| daily | Chinese ||| Spearman | CT [SUMMARY]
null
[CONTENT] Chinese | IMP [SUMMARY]
[CONTENT] IMP ||| ||| ||| CT ||| daily | Chinese ||| Spearman | CT ||| ||| 8 | 75.7 years | 8 ||| 5-40 years | Chinese | CT ||| 6 | 8 | 75% | 2 | 25% | 1 ||| ||| CT | 4.88 | 7 | 5680 | 7 ||| Spearman | CT | 0.842 ||| Chinese | IMP [SUMMARY]
[CONTENT] IMP ||| ||| ||| CT ||| daily | Chinese ||| Spearman | CT ||| ||| 8 | 75.7 years | 8 ||| 5-40 years | Chinese | CT ||| 6 | 8 | 75% | 2 | 25% | 1 ||| ||| CT | 4.88 | 7 | 5680 | 7 ||| Spearman | CT | 0.842 ||| Chinese | IMP [SUMMARY]
Diagnostic performance of the Elecsys SARS-CoV-2 antigen assay in the clinical routine of a tertiary care hospital: Preliminary results from a single-center evaluation.
34251047
This report describes a manufacturer-independent evaluation of the diagnostic accuracy of the Elecsys SARS-CoV-2 antigen assay from Roche Diagnostics in a tertiary care setting.
BACKGROUND
In this single-center study, we used nasopharyngeal swabs from 403 cases from the emergency department and intensive care unit of our hospital. The reference standard for detecting SARS-CoV-2 was the reverse-transcription polymerase chain reaction (RT-PCR) assay. Cycle threshold (Ct) values were recorded for positive RT-PCR assays. The index test was the Elecsys SARS-CoV-2 antigen assay. This electrochemiluminescence immunoassay produces results as cutoff index (COI) values, with values ≥1.00 being reported as positive.
METHODS
Of the 403 cases, 47 showed positive results in RT-PCR assays. Of the 47 RT-PCR-positive cases, 12 showed positive results in the antigen assay. Of the 356 RT-PCR-negative cases, all showed negative results in the antigen assay. Thus, the antigen assay showed a sensitivity of 26% (95% CI, 14%-40%) and specificity of 100% (95% CI, 99%-100%). Analysis of the relationship between Ct values and COI values in the 47 RT-PCR-positive cases showed a correlation coefficient of -0.704 (95% CI, -0.824 to -0.522). The true-positive rate of the antigen assay for Ct values of 15-24.9, 25-29.9, 30-34.9, and 35-39.9 was 100%, 44%, 8%, and 6%, respectively.
RESULTS
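The headline accuracy figures above follow directly from the reported 2×2 counts (12 of 47 RT-PCR-positive cases antigen-positive; all 356 RT-PCR-negative cases antigen-negative); a minimal sketch:

```python
def diagnostic_performance(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Counts reported in the study: 47 RT-PCR positives (12 detected by the
# antigen assay, 35 missed) and 356 RT-PCR negatives (0 false positives)
sens, spec = diagnostic_performance(tp=12, fn=35, tn=356, fp=0)
```

The confidence intervals quoted in the results would come from an exact or Wilson binomial method on these same counts, which is not reproduced here.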
The Elecsys SARS-CoV-2 antigen assay has a low sensitivity for detecting SARS-CoV-2 from nasopharyngeal swabs. Hence, we decided to not use this assay in the clinical routine of our hospital.
CONCLUSIONS
[ "Antigens, Viral", "COVID-19", "COVID-19 Nucleic Acid Testing", "COVID-19 Serological Testing", "Humans", "Intensive Care Units", "Nasopharynx", "SARS-CoV-2", "Sensitivity and Specificity", "Tertiary Care Centers", "Viral Load" ]
8373346
INTRODUCTION
The RNA virus severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) causes coronavirus disease 2019 (COVID‐19).1 Infection with SARS‐CoV‐2 can be asymptomatic or may result in symptomatic disease ranging in severity from mild upper respiratory tract symptoms to severe pneumonia with respiratory failure and multiple organ failure.1 The gold standard laboratory tests to detect SARS‐CoV‐2 from clinical specimens (eg, nasopharyngeal swabs, oropharyngeal swabs, and bronchoalveolar lavage fluid) are nucleic acid amplification tests (NAATs), mainly reverse‐transcription polymerase chain reaction (RT‐PCR) assays.1, 2 Currently, a variety of NAATs are commercially available for use in routine clinical practice.1, 3 However, since the testing capacity afforded by NAATs is insufficient to cope with the COVID‐19 pandemic, various manufacturers have also developed rapid antigen immunoassays, which do not require skilled personnel and dedicated instrumentation, for detection of the virus from nasopharyngeal and oropharyngeal swabs. SARS‐CoV‐2 rapid point‐of‐care antigen tests have also been commercially available for some time.1, 4 At present, antigen point‐of‐care tests in many countries help to ensure the necessary quantity of SARS‐CoV‐2 tests for their respective testing strategies,1, 4 but these tests have been criticized because of their lower clinical sensitivity in comparison with NAATs.1, 4 Recently, Roche Diagnostics (Rotkreuz, Switzerland) launched a high‐throughput antigen test for medical laboratories called “Elecsys SARS‐CoV‐2 antigen assay,” which runs on the company's analyzers. We evaluated the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay prior to its planned use in our clinical routine. Herein, we report the results of our evaluation.
METHODS
Study design and clinical samples

This report describes the findings of a single‐center evaluation of the diagnostic accuracy of the Elecsys SARS‐CoV‐2 antigen assay as an index test in comparison with RT‐PCR as the reference standard. Our manufacturer‐independent evaluation was conducted from March 11, 2021, to April 26, 2021, at the Department of Clinical Pathology, Hospital of Bolzano, province of South Tyrol, Italy. During this period, the median 7‐day incidence rate of new SARS‐CoV‐2‐positive cases per 100,000 population was 149 (starting from 245 on March 11, 2021, and declining to 121 on April 26, 2021) in the province of South Tyrol (Amministrazione Provincia Bolzano, Sicurezza e protezione civile, web: http://www.provincia.bz.it/sicurezza‐protezione‐civile/protezione‐civile/dati‐attuali‐sul‐coronavirus.asp, last access: April 27, 2021). During this time, the Department of Clinical Pathology received 403 requests for simultaneous RT‐PCR and antigen assays from the emergency department of the hospital and from the intensive care unit, which care for COVID‐19 patients. These 403 requests pertained to 336 patients. In all 403 cases, two nasopharyngeal swabs were obtained simultaneously by skilled personnel, of which one was sent for the RT‐PCR assay and the other was sent to run the antigen assay. We used the data from these 403 cases to evaluate the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay. A referral to the ethics committee was not deemed necessary because the project was an assay validation/verification that was in line with good laboratory practice. Such evaluations are routinely performed in medical laboratories before introducing a new assay into the clinical routine.
Reference standard—RT‐PCR assay

The personnel from the emergency department of the hospital and from the intensive care unit used standard swabs and transport media from two different manufacturers, namely FLOQSwabs® (Ref. 503CS01, Copan Italia S.p.A., Brescia, Italy) in combination with the UTM Universal Transport Medium (Ref. 
330C, filled with 3 ml UTM® medium, Copan Italia S.p.A., Brescia, Italy) and the combined specimen collection device ∑‐Transwab® Liquid Amies (one Sigma swab plus 1 ml of liquid Amies transport medium, Ref. MW176S; Medical Wire, Corsham, United Kingdom). The nasopharyngeal swabs were handled as specified by the manufacturer. After the smear, samples were sent to our laboratory where the RT‐PCR assay was performed immediately. The RT‐PCR assay was performed using the Xpert Xpress SARS‐CoV‐2 test (Ref. XPRSARS‐COV2‐10, Cepheid, Sunnyvale, CA, USA) on a GeneXpert® IV instrument (Cepheid, Sunnyvale, CA, USA). The Xpert Xpress SARS‐CoV‐2 test is a rapid, real‐time RT‐PCR test intended for qualitative detection of nucleic acids from SARS‐CoV‐2 in upper respiratory specimens. We performed the entire Xpert Xpress SARS‐CoV‐2 test procedure according to the manufacturer's instructions. The system uses single‐use disposable cartridges that hold RT‐PCR reagents and host the RT‐PCR process. The sample‐processing control and probe‐check control are also included in the cartridge. The Xpert Xpress SARS‐CoV‐2 test provides test results based on the detection of two gene targets, namely the amplification of the SARS‐CoV‐2 E and N2 genes.1, 3 The limit of detection of this test was 250 copies/ml, and the time to result was 45 min.1, 3 The Xpert Xpress SARS‐CoV‐2 test includes an early assay termination function, which can provide an earlier time to result for high‐titer specimens if the signal from the target nucleic acid reaches a predetermined threshold before the full 45 PCR cycles have been completed. Using the GeneXpert software (Cepheid, Sunnyvale, CA, USA), we considered positive RT‐PCR results when the SARS‐CoV‐2 signal for the N2 nucleic acid target had a PCR cycle threshold (Ct) value of <40.0, irrespective of the signal for the E nucleic acid target. 
In contrast, when the Ct value for the SARS‐CoV‐2 N2 gene was ≥40.0, or when the results of RT‐PCR testing were definitely negative (with reference to a positive result for the sample‐processing control), we classified the result of the RT‐PCR test as negative. Further, we categorized the results of RT‐PCR tests that showed negative signals for the SARS‐CoV‐2 E and N2 genes as well as a negative signal for the sample‐processing control as invalid; in these cases, we repeated the analysis.
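The interpretation rule just described can be sketched as a small helper function. This is a hypothetical illustration only; in practice the GeneXpert software applies these rules itself:

```python
def interpret_rt_pcr(n2_ct, spc_positive):
    """Classify an Xpert Xpress result from the N2 target Ct value.

    n2_ct: Ct value for the N2 target, or None when no signal was detected.
    spc_positive: True if the sample-processing control gave a valid signal.
    """
    if n2_ct is not None and n2_ct < 40.0:
        # Positive when N2 Ct < 40.0, irrespective of the E target
        return "positive"
    if n2_ct is not None or spc_positive:
        # N2 Ct >= 40.0, or no target signal with a valid processing control
        return "negative"
    # No target signal and no control signal: invalid, repeat the analysis
    return "invalid"
```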
Index test—Elecsys SARS‐CoV‐2 antigen assay

Specimen collection and preparation for detection of the SARS‐CoV‐2 antigen was performed as recommended by Roche Diagnostics Italy and in accordance with the package insert of the Elecsys SARS‐CoV‐2 antigen assay. We prepared sample collection tubes without any additives (Vacuette® Z No Additive 4 ml, Ref. 454001, Greiner Bio‐One, Kremsmunster, Austria) containing 1.0 ml of the SARS‐CoV‐2 extraction solution (Ref. 09370064190; Roche Diagnostics, Rotkreuz, Switzerland). The SARS‐CoV‐2 extraction solution is intended for the elution and transportation of samples for use in the Elecsys SARS‐CoV‐2 antigen assay. 
The personnel from the emergency department of the hospital and from the intensive care unit received these specifically prepared sample collection tubes and FLOQSwabs® (Ref. 519CS01, Copan Italia S.p.A., Brescia, Italy) for sample collection. The nasopharyngeal smear for detection of the SARS‐CoV‐2 antigen was performed in exactly the same way and at the same time as the smear for RT‐PCR test. The collection tubes were opened; the swab was soaked in the solution; and the swab was stirred 20 times. The swab was then left in the solution for 2 min. Next, the personnel from the emergency department of the hospital or the intensive care unit removed the swab while pressing it against the tube wall to extract the liquid from the swab. The collection tube was then recapped and immediately sent to our laboratory, where the samples were stored for a maximum of 36 h at 2–8°C. According to the package insert of the Elecsys SARS‐CoV‐2 antigen assay, the samples have an in vitro stability of two days at 2–8°C. Finally, we performed the Elecsys SARS‐CoV‐2 antigen assay using the collection tubes. The Elecsys SARS‐CoV‐2 antigen assay (Ref. 09345299190, Roche Diagnostics, Rotkreuz, Switzerland) is an electrochemiluminescence immunoassay for qualitative detection of the nucleocapsid antigen of SARS‑CoV‐2 in nasopharyngeal and oropharyngeal swab samples. This assay uses monoclonal antibodies directed against the SARS‑CoV‐2 nucleocapsid protein in a double‐antibody sandwich assay format. In our evaluation, we ran this assay on a single Cobas e801 system (Roche Diagnostics, Rotkreuz, Switzerland) according to the manufacturer's instructions. This assay produces results as a cutoff index (COI; signal of sample divided by cutoff), wherein results ≥1.00 are reported as reactive/positive. For the internal quality control, we used the PreciControl SARS‑CoV‐2 antigen (Ref. 09345302190) once daily at two COI levels. 
We allowed sample measurements only if the controls were within the defined limits. We determined the limit of blank (LoB) as previously suggested5: Measurements were obtained with the SARS‐CoV‐2 extraction solution in replicates of 20, and the LoB was calculated as LoB = mean(blank) + 1.645 × SD(blank). Using this procedure, we found an LoB of 0.60 COI. We evaluated the linearity of the Elecsys SARS‑CoV‐2 antigen assay according to the CLSI guideline EP6‐A6 using six different analyte concentrations. Fresh samples were used to prepare high‐ and low‐concentration pools. We then conducted a direct dilution series with the low‐ and high‐concentration patient sample pools in the following volume ratios (low‐concentration pool + high‐concentration pool): pool 1, low only; pool 2, 0.8 low + 0.2 high; pool 3, 0.6 low + 0.4 high; pool 4, 0.4 low + 0.6 high; pool 5, 0.2 low + 0.8 high; and pool 6, high only. Three measurements were performed for each concentration, and the default criteria were set at 5% for repeatability and 15 COI for nonlinearity. The mean COIs of the low‐ and high‐concentration pools were 0.49 and 759, respectively. The standard errors of regression (Sy,x) and t‐tests from regression analyses showed that the first‐order model fitted better than the second‐ and third‐order models: first‐order model b1, Sy,x = 12.457; t‐test = 86.878 (p < 0.001); second‐order model b2, Sy,x = 11.622; t‐test = 1.839 (p = 0.086); and third‐order model b3, Sy,x = 10.755; t‐test = 1.875 (p = 0.082). In addition, all default criteria were met, so the method was linear up to 750 COI. To evaluate the precision of the Elecsys SARS‑CoV‐2 antigen assay in our laboratory, we performed a replication study according to the Clinical and Laboratory Standards Institute (CLSI; formerly NCCLS) guideline EP5‐A.7 Two pooled patient samples with COI values near the reactive/positive cutoff values of the assay were aliquoted into ten plastic tubes for each concentration level and frozen at –80°C.
We analyzed these samples in duplicate in two runs per day for 10 days within 2 weeks of sample collection. Within‐run and total analytical precision (CV) were calculated using the CLSI double‐run precision evaluation test.7 The Elecsys SARS‑CoV‐2 antigen assay had a within‐run CV of 3.3% and a total CV of 3.5% at a mean concentration of 1.12 COI (pool 1) and a within‐run CV of 3.1% and a total CV of 5.7% at a mean concentration of 1.82 COI (pool 2).

Statistical analysis

We performed a purely descriptive statistical analysis by calculating the sensitivity, specificity, area under the ROC curve, positive likelihood ratio, negative likelihood ratio, positive predictive value, and negative predictive value for the Elecsys SARS‑CoV‐2 antigen assay against the reference standard. Sensitivity, specificity, positive and negative predictive values, and disease prevalence were expressed as percentages.
The confidence intervals for sensitivity and specificity were the "exact" Clopper‐Pearson confidence intervals. The confidence intervals for the likelihood ratios were calculated using the log method, as suggested by Altman et al.8 Confidence intervals for the predictive values were the standard logit confidence intervals given by Mercaldo et al.9 The area under the ROC curve was estimated using established procedures.10, 11, 12 For correlation analysis, we calculated the Spearman correlation coefficient (rho) with a p‐value and a 95% confidence interval (CI) for the correlation coefficient. Data analysis was performed using the MedCalc software package, version 17.2 (MedCalc Software Ltd, Ostend, Belgium).
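Two of the interval methods named above can be sketched in a few lines of Python. This is an illustrative reimplementation, not the authors' MedCalc workflow; the function names are ours, and scipy is assumed to be available.

```python
# Illustrative sketch (not the authors' MedCalc code): the "exact"
# Clopper-Pearson CI for a proportion and a log-method CI for the
# negative likelihood ratio, as described by Altman et al.
from math import exp, log, sqrt

from scipy.stats import beta, norm


def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) 1-alpha confidence interval for k/n."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper


def negative_lr_ci(tp: int, fn: int, fp: int, tn: int, alpha: float = 0.05):
    """Negative likelihood ratio (1 - sens) / spec with a log-method CI."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    nlr = (1 - sens) / spec
    # Standard error of ln(LR-) computed from the 2x2 counts (log method)
    se = sqrt(1 / fn - 1 / (tp + fn) + 1 / tn - 1 / (tn + fp))
    z = norm.ppf(1 - alpha / 2)
    return nlr, exp(log(nlr) - z * se), exp(log(nlr) + z * se)
```

With 2×2 counts inferred from the percentages reported below (12 antigen‐positive among 47 RT‐PCR‐positive cases, no false positives among 356 RT‐PCR‐negative cases), `clopper_pearson(12, 47)` gives roughly 14–40% for sensitivity and `negative_lr_ci(12, 35, 0, 356)` gives a negative likelihood ratio of about 0.74 (0.63–0.88); the counts are our inference, not figures quoted from the paper's tables.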
RESULTS
In this study on the diagnostic accuracy of the Elecsys SARS‑CoV‐2 antigen assay, we used the samples obtained in 403 clinical requests for simultaneous RT‐PCR and antigen assays. The 403 requests were from 336 patients (median age, 74 years; range, 15–100 years; 188 males [56%]). Specifically, 330 requests for SARS‐CoV‐2 testing were from 321 patients in the emergency department of the hospital, and 73 requests were from 15 patients in the intensive care unit, which cared for patients with severe COVID‐19. For the emergency department patients, RT‐PCR assays were ordered by the treating physicians to decide whether the patients were to be admitted to the COVID‐19 wards or to the “clean” COVID‐19‐free wards. In the intensive care unit, RT‐PCR assays were ordered by the treating physicians for follow‐up evaluations of patients with severe COVID‐19. In the 403 cases, 47 RT‐PCR‐positive results were obtained. This corresponds to an RT‐PCR‐positive prevalence of 12% (95% CI, 9–15) in our cohort. Of the 330 requests for SARS‐CoV‐2 testing from the emergency department, 11 showed positive results with the RT‐PCR assay (median Ct value, 32.5; range, 19.2–39.6). Of the 73 requests for SARS‐CoV‐2 testing from the intensive care unit, 36 showed positive results with the RT‐PCR assay (median Ct value, 33.7; range, 18.6–39.5). Table 1 details the overall results from the Elecsys SARS‑CoV‐2 antigen assay against the RT‐PCR assay. Our data yielded the following findings: sensitivity, 26% (95% CI, 14–40); specificity, 100% (95% CI, 99–100); area under the ROC curve, 0.63 (95% CI, 0.58–0.68); positive likelihood ratio, not applicable; negative likelihood ratio, 0.74 (95% CI, 0.63–0.88); positive predictive value, 100%; and negative predictive value, 91% (95% CI, 90–92). 
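The headline figures can be reproduced from the underlying 2×2 table. The counts below are inferred from the reported percentages (they are not listed explicitly in the text), so this is a worked arithmetic check rather than source data:

```python
# 2x2 counts inferred from the reported percentages (47 RT-PCR-positive
# cases, 26% sensitivity, 100% specificity among 403 cases in total).
# These counts are our inference, for illustration only.
tp, fn = 12, 35     # antigen-positive / antigen-negative among RT-PCR-positives
fp, tn = 0, 356     # among RT-PCR-negatives

sensitivity = tp / (tp + fn)                   # 12/47   = 0.255 -> 26%
specificity = tn / (tn + fp)                   # 356/356 = 1.00  -> 100%
npv = tn / (tn + fn)                           # 356/391 = 0.910 -> 91%
prevalence = (tp + fn) / (tp + fn + fp + tn)   # 47/403  = 0.117 -> 12%
negative_lr = (1 - sensitivity) / specificity  # 0.74
# The positive likelihood ratio is undefined (reported as "not applicable")
# because there were no false positives.
```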
Table 1. Overall performance of the Elecsys SARS‐CoV‐2 antigen assay (ie, index test) versus the RT‐PCR assay (ie, reference standard) in 403 cases. Abbreviations: RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.

Next, we examined the 47 RT‐PCR‐positive cases with respect to the Ct values of the SARS‐CoV‐2 signal for the N2 nucleic acid target found in RT‐PCR and the COI values in the Elecsys SARS‑CoV‐2 antigen assay. Analysis of the relationship between the Ct values and COI values in the 47 RT‐PCR‐positive cases showed a Spearman's coefficient of rank correlation (rho) of −0.704 (95% CI, −0.824 to −0.522; p < 0.0001). Figure 1 shows the respective scattergrams. In Table 2, we compared the results of the 47 RT‐PCR‐positive cases categorized by viral load (expressed as Ct values) with the corresponding results of the Elecsys SARS‐CoV‐2 antigen assay. The results showed that the true‐positive rate of the Elecsys SARS‐CoV‐2 antigen assay was 100% for Ct values of 15–24.9, 44% for Ct values of 25–29.9, 8% for Ct values of 30–34.9, and 6% for Ct values of 35–39.9. Table S1 shows the individual results of the 47 RT‐PCR‐positive cases.

Figure 1. Scatterplot of the cycle threshold (Ct) values of SARS‐CoV‐2 RT‐PCR versus the cutoff index (COI) values of the Elecsys SARS‐CoV‐2 antigen assay in our 47 RT‐PCR‐positive cases. The horizontal dotted line indicates the cutoff value of the Elecsys SARS‐CoV‐2 antigen assay (negative, COI <1.0; positive, COI ≥1.0). Open triangles indicate requests from the emergency department; open circles indicate requests from the intensive care unit. Abbreviations: COI, cutoff index; Ct, cycle threshold; RT‐PCR, reverse‐transcription polymerase chain reaction; and SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.
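The rank correlation used here can be written out without any external libraries. The helper below is our own illustrative implementation, applied to synthetic Ct/COI pairs that mimic the reported inverse relationship (not the study's patient data):

```python
# Spearman's rank correlation, written out for illustration. The simple
# d^2 formula applies because the synthetic data below contain no ties.
def spearman_rho(x, y):
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n * (n * n - 1))


# Synthetic example: lower Ct (higher viral load) pairs with higher COI,
# giving a strongly negative rho, as in the study (rho = -0.704).
ct = [19.2, 24.5, 28.0, 31.3, 33.7, 36.1, 39.5]
coi = [850.0, 120.0, 5.4, 1.2, 0.8, 0.6, 0.5]
rho = spearman_rho(ct, coi)
```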
Table 2. Comparison of the 47 SARS‐CoV‐2 RT‐PCR‐positive cases categorized by virus load (expressed as Ct values) versus the results of the Elecsys SARS‐CoV‐2 antigen assay. Abbreviations: COI, cutoff index; Ct, cycle threshold; RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.
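The Ct stratification behind Table 2 amounts to a simple binning routine. The (Ct, antigen result) pairs below are synthetic placeholders, not the study's patient-level data (which appear in Table S1):

```python
# Tabulate the antigen-assay true-positive rate within RT-PCR Ct bins,
# mirroring the Table 2 layout. The cases are synthetic placeholders.
bins = [(15.0, 24.9), (25.0, 29.9), (30.0, 34.9), (35.0, 39.9)]
cases = [(19.2, True), (23.0, True), (27.5, True), (28.9, False),
         (31.0, False), (33.7, False), (36.1, False), (39.5, False)]

rates = {}
for low, high in bins:
    hits = [positive for ct, positive in cases if low <= ct <= high]
    if hits:  # skip empty bins rather than dividing by zero
        rates[(low, high)] = 100.0 * sum(hits) / len(hits)

for (low, high), rate in rates.items():
    print(f"Ct {low}-{high}: {rate:.0f}% antigen-positive")
```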
null
null
[ "INTRODUCTION", "Study design and clinical samples", "Reference standard—RT‐PCR assay", "Index test—Elecsys SARS‐CoV‐2 antigen assay", "Statistical analysis", "AUTHOR CONTRIBUTION" ]
[ "The RNA virus severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) causes coronavirus disease 2019 (COVID‐19).1 Infection with SARS‐CoV‐2 can be asymptomatic or may result in symptomatic disease ranging in severity from mild upper respiratory tract symptoms to severe pneumonia with respiratory failure and multiple organ failure.1 The gold standard laboratory tests to detect SARS‐CoV‐2 from clinical specimens (eg, nasopharyngeal swabs, oropharyngeal swabs, and bronchoalveolar lavage fluid) are nucleic acid amplification tests (NAATs), mainly reverse‐transcription polymerase chain reaction (RT‐PCR) assays.1, 2 Currently, a variety of NAATs are commercially available for use in routine clinical practice.1, 3\n\nHowever, since the testing capacity afforded by NAATs is insufficient to cope with the COVID‐19 pandemic, various manufacturers have also developed rapid antigen immunoassays, which do not require skilled personnel and dedicated instrumentation, for detection of the virus from nasopharyngeal and oropharyngeal swabs. SARS‐CoV‐2 rapid point‐of‐care antigen tests have also been commercially available for some time.1, 4 At present, antigen point‐of‐care tests in many countries help to ensure the necessary quantity of SARS‐CoV‐2 tests for their respective testing strategies,1, 4 but these tests have been criticized because of their lower clinical sensitivity in comparison with NAATs.1, 4\n\nRecently, Roche Diagnostics (Rotkreuz, Switzerland) launched a high‐throughput antigen test for medical laboratories called “Elecsys SARS‐CoV‐2 antigen assay,” which runs on the company's analyzers. We evaluated the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay prior to its planned use in our clinical routine. 
Herein, we report the results of our evaluation.", "This report describes the findings of a single‐center evaluation of the diagnostic accuracy of the Elecsys SARS‐CoV‐2 antigen assay as an index test in comparison with RT‐PCR as the reference standard. Our manufacturer‐independent evaluation was conducted from March 11, 2021, to April 26, 2021, at the Department of Clinical Pathology, Hospital of Bolzano, province of South Tyrol, Italy. During this period, the median 7‐day incidence rate of new SARS‐CoV‐2‐positive cases per 100,000 population was 149 (starting from 245 on March 11, 2021, and declining to 121 on April 26, 2021) in the province of South Tyrol (Amministrazione Provincia Bolzano, Sicurezza e protezione civile, web: http://www.provincia.bz.it/sicurezza‐protezione‐civile/protezione‐civile/dati‐attuali‐sul‐coronavirus.asp, last access: April 27, 2021). During this time, the Department of Clinical Pathology received 403 requests for simultaneous RT‐PCR and antigen assays from the emergency department of the hospital and from the intensive care unit, which care for COVID‐19 patients. These 403 requests pertained to 336 patients. In all 403 cases, two nasopharyngeal swabs were obtained simultaneously by skilled personnel, of which one was sent for the RT‐PCR assay and the other was sent to run the antigen assay. We used the data from these 403 cases to evaluate the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay. A referral to the ethics committee was not deemed necessary because the project was an assay validation/verification that was in line with good laboratory practice. Such evaluations are routinely performed in medical laboratories before introducing a new assay into the clinical routine.", "The personnel from the emergency department of the hospital and from the intensive care unit used standard swabs and transport media from two different manufacturers, namely FLOQSwabs® (Ref. 
503CS01, Copan Italia S.p.A., Brescia, Italy) in combination with the UTM Universal Transport Medium (Ref. 330C, filled with 3 ml UTM® medium, Copan Italia S.p.A., Brescia, Italy) and the combined specimen collection device ∑‐Transwab® Liquid Amies (one Sigma swab plus 1 ml of liquid Amies transport medium, Ref. MW176S; Medical Wire, Corsham, United Kingdom). The nasopharyngeal swabs were handled as specified by the manufacturer. After the smear, samples were sent to our laboratory where the RT‐PCR assay was performed immediately.\nThe RT‐PCR assay was performed using the Xpert Xpress SARS‐CoV‐2 test (Ref. XPRSARS‐COV2‐10, Cepheid, Sunnyvale, CA, USA) on a GeneXpert® IV instrument (Cepheid, Sunnyvale, CA, USA). The Xpert Xpress SARS‐CoV‐2 test is a rapid, real‐time RT‐PCR test intended for qualitative detection of nucleic acids from SARS‐CoV‐2 in upper respiratory specimens. We performed the entire Xpert Xpress SARS‐CoV‐2 test procedure according to the manufacturer's instructions. The system uses single‐use disposable cartridges that hold RT‐PCR reagents and host the RT‐PCR process. The sample‐processing control and probe‐check control are also included in the cartridge. The Xpert Xpress SARS‐CoV‐2 test provides test results based on the detection of two gene targets, namely the amplification of the SARS‐CoV‐2 E and N2 genes.1, 3 The limit of detection of this test was 250 copies/ml, and the time to result was 45 min.1, 3 The Xpert Xpress SARS‐CoV‐2 test includes an early assay termination function, which can provide an earlier time to result for high‐titer specimens if the signal from the target nucleic acid reaches a predetermined threshold before the full 45 PCR cycles have been completed.\nUsing the GeneXpert software (Cepheid, Sunnyvale, CA, USA), we considered positive RT‐PCR results when the SARS‐CoV‐2 signal for the N2 nucleic acid target had a PCR cycle threshold (Ct) value of <40.0, irrespective of the signal for the E nucleic acid target. 
In contrast, when the Ct value for the SARS‐CoV‐2 N2 gene was ≥40.0, or when the results of RT‐PCR testing were definitely negative (with reference to a positive result for the sample‐processing control), we classified the result of the RT‐PCR test as negative. Further, we categorized the results of RT‐PCR tests that showed negative signals for the SARS‐CoV‐2 E and N2 genes as well as a negative signal for the sample‐processing control as invalid; in these cases, we repeated the analysis.", "Specimen collection and preparation for detection of the SARS‐CoV‐2 antigen was performed as recommended by Roche Diagnostics Italy and in accordance with the package insert of the Elecsys SARS‐CoV‐2 antigen assay. We prepared sample collection tubes without any additives (Vacuette® Z No Additive 4 ml, Ref. 454001, Greiner Bio‐One, Kremsmunster, Austria) containing 1.0 ml of the SARS‐CoV‐2 extraction solution (Ref. 09370064190; Roche Diagnostics, Rotkreuz, Switzerland). The SARS‐CoV‐2 extraction solution is intended for the elution and transportation of samples for use in the Elecsys SARS‐CoV‐2 antigen assay. The personnel from the emergency department of the hospital and from the intensive care unit received these specifically prepared sample collection tubes and FLOQSwabs® (Ref. 519CS01, Copan Italia S.p.A., Brescia, Italy) for sample collection. The nasopharyngeal smear for detection of the SARS‐CoV‐2 antigen was performed in exactly the same way and at the same time as the smear for RT‐PCR test. The collection tubes were opened; the swab was soaked in the solution; and the swab was stirred 20 times. The swab was then left in the solution for 2 min. Next, the personnel from the emergency department of the hospital or the intensive care unit removed the swab while pressing it against the tube wall to extract the liquid from the swab. The collection tube was then recapped and immediately sent to our laboratory, where the samples were stored for a maximum of 36 h at 2–8°C. 
According to the package insert of the Elecsys SARS‐CoV‐2 antigen assay, the samples have an in vitro stability of two days at 2–8°C. Finally, we performed the Elecsys SARS‐CoV‐2 antigen assay using the collection tubes.\nThe Elecsys SARS‐CoV‐2 antigen assay (Ref. 09345299190, Roche Diagnostics, Rotkreuz, Switzerland) is an electrochemiluminescence immunoassay for qualitative detection of the nucleocapsid antigen of SARS‑CoV‐2 in nasopharyngeal and oropharyngeal swab samples. This assay uses monoclonal antibodies directed against the SARS‑CoV‐2 nucleocapsid protein in a double‐antibody sandwich assay format. In our evaluation, we ran this assay on a single Cobas e801 system (Roche Diagnostics, Rotkreuz, Switzerland) according to the manufacturer's instructions. This assay produces results as a cutoff index (COI; signal of sample divided by cutoff), wherein results ≥1.00 are reported as reactive/positive. For the internal quality control, we used the PreciControl SARS‑CoV‐2 antigen (Ref. 09345302190) once daily at two COI levels. We allowed sample measurements only if the controls were within the defined limits.\nWe determined the limit of blank (LoB) as previously suggested 5: Measurements were obtained with the SARS‐CoV‐2 extraction solution in replicates of 20 and calculated LoB = meanblank + 1.645 (SDblank). Using this procedure, we found an LoB of 0.60 COI.\nWe evaluated the linearity of the Elecsys SARS‑CoV‐2 antigen assay according to the CLSI guideline EP6‐A6 using six different analyte concentrations. Fresh samples were used to prepare high‐ and low‐concentration pools. We then conducted a direct dilution series with the low‐ and high‐concentration patient sample pools in the following volume ratios (low‐concentration pool +high‐concentration pool): pool 1, low only; pool 2, 0.8 low +0.2 high; pool 3, 0.6 low +0.4 high; pool 4, 0.4 low +0.6 high; pool 5, 0.2 low +0.8 high; and pool 6, high only. 
Three measurements were performed for each concentration, and the default criteria were set at 5% for repeatability and 15 COI for nonlinearity. The mean COIs of the low‐ and high‐concentration pools were 0.49 and 759, respectively. The standard errors of regression (Sy,x) and t‐tests from regression analyses showed that the first‐order model fitted better than the second‐ and third‐order models: first‐order model b1, Sy,x = 12.457; t‐test = 86.878 (p < 0.001); second‐order model b2, Sy,x = 11.622; t‐test = 1.839 (p = 0.086); and third‐order model b3, Sy,x = 10.755; t‐test = 1.875 (p = 0.082). In addition, all default criteria were met, so the method was linear up to 750 COI.\nTo evaluate the precision of the Elecsys SARS‑CoV‐2 antigen assay in our laboratory, we performed a replication study according to the Clinical and Laboratory Standards Institute (CLSI; formerly NCCLS) guideline EP5‐A.7 Two pooled patient samples with COI values near the reactive/positive cutoff values of the assay were aliquoted into ten plastic tubes for each concentration level and frozen at –80°C. We analyzed these samples in duplicate in two runs per day for 10 days within 2 weeks of sample collection. Within‐run and total analytical precision (CV) were calculated using the CLSI double‐run precision evaluation test.7 The Elecsys SARS‑CoV‐2 antigen assay had a within‐run CV of 3.3% and a total CV of 3.5% at a mean concentration of 1.12 COI (pool 1) and a within‐run CV of 3.1% and a total CV of 5.7% at a mean concentration of 1.82 COI (pool 2).", "We performed a purely descriptive statistical analysis by calculating the sensitivity, specificity, area under the ROC curve, positive likelihood ratio, negative likelihood ratio, positive predictive value, and negative predictive value for the Elecsys SARS‑CoV‐2 antigen assay against the reference standard. Sensitivity, specificity, positive and negative predictive values, and disease prevalence were expressed as percentages. 
The confidence intervals for sensitivity and specificity were the \"exact\" Clopper‐Pearson confidence intervals. The confidence intervals for the likelihood ratios were calculated using the log method, as suggested by Altman et al.8 Confidence intervals for the predictive values were the standard logit confidence intervals given by Mercaldo et al.9 The area under the ROC curve was estimated using established procedures.10, 11, 12 For correlation analysis, we calculated the Spearman correlation coefficient (rho) with a p‐value and a 95% confidence interval (CI) for the correlation coefficient. Data analysis was performed using MedCalc software package MedCalc 17.2 (MedCalc Software Ltd, Ostend, Belgium).", "Thomas Mueller: Conceptualization, data collection, data analysis and interpretation, drafting of the article. Julia Kompatscher: Data collection, data analysis and interpretation, critical revision of the article. Mario La Guardia: Data collection, data analysis and interpretation, critical revision of the article. All authors: Final approval of the article." ]
[ null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design and clinical samples", "Reference standard—RT‐PCR assay", "Index test—Elecsys SARS‐CoV‐2 antigen assay", "Statistical analysis", "RESULTS", "DISCUSSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTION", "Supporting information" ]
[ "The RNA virus severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) causes coronavirus disease 2019 (COVID‐19).1 Infection with SARS‐CoV‐2 can be asymptomatic or may result in symptomatic disease ranging in severity from mild upper respiratory tract symptoms to severe pneumonia with respiratory failure and multiple organ failure.1 The gold standard laboratory tests to detect SARS‐CoV‐2 from clinical specimens (eg, nasopharyngeal swabs, oropharyngeal swabs, and bronchoalveolar lavage fluid) are nucleic acid amplification tests (NAATs), mainly reverse‐transcription polymerase chain reaction (RT‐PCR) assays.1, 2 Currently, a variety of NAATs are commercially available for use in routine clinical practice.1, 3\n\nHowever, since the testing capacity afforded by NAATs is insufficient to cope with the COVID‐19 pandemic, various manufacturers have also developed rapid antigen immunoassays, which do not require skilled personnel and dedicated instrumentation, for detection of the virus from nasopharyngeal and oropharyngeal swabs. SARS‐CoV‐2 rapid point‐of‐care antigen tests have also been commercially available for some time.1, 4 At present, antigen point‐of‐care tests in many countries help to ensure the necessary quantity of SARS‐CoV‐2 tests for their respective testing strategies,1, 4 but these tests have been criticized because of their lower clinical sensitivity in comparison with NAATs.1, 4\n\nRecently, Roche Diagnostics (Rotkreuz, Switzerland) launched a high‐throughput antigen test for medical laboratories called “Elecsys SARS‐CoV‐2 antigen assay,” which runs on the company's analyzers. We evaluated the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay prior to its planned use in our clinical routine. 
Herein, we report the results of our evaluation.", "Study design and clinical samples This report describes the findings of a single‐center evaluation of the diagnostic accuracy of the Elecsys SARS‐CoV‐2 antigen assay as an index test in comparison with RT‐PCR as the reference standard. Our manufacturer‐independent evaluation was conducted from March 11, 2021, to April 26, 2021, at the Department of Clinical Pathology, Hospital of Bolzano, province of South Tyrol, Italy. During this period, the median 7‐day incidence rate of new SARS‐CoV‐2‐positive cases per 100,000 population was 149 (starting from 245 on March 11, 2021, and declining to 121 on April 26, 2021) in the province of South Tyrol (Amministrazione Provincia Bolzano, Sicurezza e protezione civile, web: http://www.provincia.bz.it/sicurezza‐protezione‐civile/protezione‐civile/dati‐attuali‐sul‐coronavirus.asp, last access: April 27, 2021). During this time, the Department of Clinical Pathology received 403 requests for simultaneous RT‐PCR and antigen assays from the emergency department of the hospital and from the intensive care unit, which care for COVID‐19 patients. These 403 requests pertained to 336 patients. In all 403 cases, two nasopharyngeal swabs were obtained simultaneously by skilled personnel, of which one was sent for the RT‐PCR assay and the other was sent to run the antigen assay. We used the data from these 403 cases to evaluate the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay. A referral to the ethics committee was not deemed necessary because the project was an assay validation/verification that was in line with good laboratory practice. Such evaluations are routinely performed in medical laboratories before introducing a new assay into the clinical routine.\nReference standard—RT‐PCR assay The personnel from the emergency department of the hospital and from the intensive care unit used standard swabs and transport media from two different manufacturers, namely FLOQSwabs® (Ref. 503CS01, Copan Italia S.p.A., Brescia, Italy) in combination with the UTM Universal Transport Medium (Ref.
330C, filled with 3 ml UTM® medium, Copan Italia S.p.A., Brescia, Italy) and the combined specimen collection device ∑‐Transwab® Liquid Amies (one Sigma swab plus 1 ml of liquid Amies transport medium, Ref. MW176S; Medical Wire, Corsham, United Kingdom). The nasopharyngeal swabs were handled as specified by the manufacturer. After the smear, samples were sent to our laboratory, where the RT‐PCR assay was performed immediately.

The RT‐PCR assay was performed using the Xpert Xpress SARS‐CoV‐2 test (Ref. XPRSARS‐COV2‐10, Cepheid, Sunnyvale, CA, USA) on a GeneXpert® IV instrument (Cepheid, Sunnyvale, CA, USA). The Xpert Xpress SARS‐CoV‐2 test is a rapid, real‐time RT‐PCR test intended for qualitative detection of nucleic acids from SARS‐CoV‐2 in upper respiratory specimens. We performed the entire Xpert Xpress SARS‐CoV‐2 test procedure according to the manufacturer's instructions. The system uses single‐use disposable cartridges that hold the RT‐PCR reagents and host the RT‐PCR process. The sample‐processing control and probe‐check control are also included in the cartridge. The Xpert Xpress SARS‐CoV‐2 test provides test results based on the detection of two gene targets, namely the amplification of the SARS‐CoV‐2 E and N2 genes.1, 3 The limit of detection of this test is 250 copies/ml, and the time to result is 45 min.1, 3 The Xpert Xpress SARS‐CoV‐2 test includes an early assay termination function, which can provide an earlier time to result for high‐titer specimens if the signal from the target nucleic acid reaches a predetermined threshold before the full 45 PCR cycles have been completed.

Using the GeneXpert software (Cepheid, Sunnyvale, CA, USA), we considered RT‐PCR results positive when the SARS‐CoV‐2 signal for the N2 nucleic acid target had a PCR cycle threshold (Ct) value of <40.0, irrespective of the signal for the E nucleic acid target.
In contrast, when the Ct value for the SARS‐CoV‐2 N2 gene was ≥40.0, or when the results of RT‐PCR testing were definitely negative (with reference to a positive result for the sample‐processing control), we classified the result of the RT‐PCR test as negative. Further, we categorized the results of RT‐PCR tests that showed negative signals for the SARS‐CoV‐2 E and N2 genes as well as a negative signal for the sample‐processing control as invalid; in these cases, we repeated the analysis.

Index test—Elecsys SARS‐CoV‐2 antigen assay

Specimen collection and preparation for detection of the SARS‐CoV‐2 antigen were performed as recommended by Roche Diagnostics Italy and in accordance with the package insert of the Elecsys SARS‐CoV‐2 antigen assay. We prepared sample collection tubes without any additives (Vacuette® Z No Additive 4 ml, Ref. 454001, Greiner Bio‐One, Kremsmünster, Austria) containing 1.0 ml of the SARS‐CoV‐2 extraction solution (Ref. 09370064190; Roche Diagnostics, Rotkreuz, Switzerland). The SARS‐CoV‐2 extraction solution is intended for the elution and transportation of samples for use in the Elecsys SARS‐CoV‐2 antigen assay.
The personnel from the emergency department of the hospital and from the intensive care unit received these specifically prepared sample collection tubes and FLOQSwabs® (Ref. 519CS01, Copan Italia S.p.A., Brescia, Italy) for sample collection. The nasopharyngeal smear for detection of the SARS‐CoV‐2 antigen was performed in exactly the same way and at the same time as the smear for the RT‐PCR test. The collection tubes were opened; the swab was immersed in the solution and stirred 20 times, then left in the solution for 2 min. Next, the personnel from the emergency department of the hospital or the intensive care unit removed the swab while pressing it against the tube wall to extract the liquid from the swab. The collection tube was then recapped and immediately sent to our laboratory, where the samples were stored for a maximum of 36 h at 2–8°C. According to the package insert of the Elecsys SARS‐CoV‐2 antigen assay, the samples have an in vitro stability of two days at 2–8°C. Finally, we performed the Elecsys SARS‐CoV‐2 antigen assay using the collection tubes.

The Elecsys SARS‐CoV‐2 antigen assay (Ref. 09345299190, Roche Diagnostics, Rotkreuz, Switzerland) is an electrochemiluminescence immunoassay for qualitative detection of the nucleocapsid antigen of SARS‑CoV‐2 in nasopharyngeal and oropharyngeal swab samples. This assay uses monoclonal antibodies directed against the SARS‑CoV‐2 nucleocapsid protein in a double‐antibody sandwich assay format. In our evaluation, we ran this assay on a single Cobas e801 system (Roche Diagnostics, Rotkreuz, Switzerland) according to the manufacturer's instructions. The assay reports results as a cutoff index (COI; signal of the sample divided by the cutoff), wherein results ≥1.00 are reported as reactive/positive. For the internal quality control, we used the PreciControl SARS‑CoV‐2 antigen (Ref. 09345302190) once daily at two COI levels.
We allowed sample measurements only if the controls were within the defined limits.

We determined the limit of blank (LoB) as previously suggested.5 Measurements were obtained with the SARS‐CoV‐2 extraction solution in replicates of 20, and the LoB was calculated as LoB = mean_blank + 1.645 × SD_blank. Using this procedure, we found an LoB of 0.60 COI.

We evaluated the linearity of the Elecsys SARS‑CoV‐2 antigen assay according to the CLSI guideline EP6‐A6 using six different analyte concentrations. Fresh samples were used to prepare high‐ and low‐concentration pools. We then conducted a direct dilution series with the low‐ and high‐concentration patient sample pools in the following volume ratios (low‐concentration pool + high‐concentration pool): pool 1, low only; pool 2, 0.8 low + 0.2 high; pool 3, 0.6 low + 0.4 high; pool 4, 0.4 low + 0.6 high; pool 5, 0.2 low + 0.8 high; and pool 6, high only. Three measurements were performed for each concentration, and the default criteria were set at 5% for repeatability and 15 COI for nonlinearity. The mean COIs of the low‐ and high‐concentration pools were 0.49 and 759, respectively. The standard errors of regression (Sy,x) and t‐tests from regression analyses showed that the first‐order model fitted better than the second‐ and third‐order models: first‐order model b1, Sy,x = 12.457, t‐test = 86.878 (p < 0.001); second‐order model b2, Sy,x = 11.622, t‐test = 1.839 (p = 0.086); and third‐order model b3, Sy,x = 10.755, t‐test = 1.875 (p = 0.082). In addition, all default criteria were met, so the method was considered linear up to 750 COI.

To evaluate the precision of the Elecsys SARS‑CoV‐2 antigen assay in our laboratory, we performed a replication study according to the Clinical and Laboratory Standards Institute (CLSI; formerly NCCLS) guideline EP5‐A.7 Two pooled patient samples with COI values near the reactive/positive cutoff of the assay were aliquoted into ten plastic tubes for each concentration level and frozen at –80°C.
We analyzed these samples in duplicate in two runs per day for 10 days within 2 weeks of sample collection. Within‐run and total analytical precision (CV) were calculated using the CLSI double‐run precision evaluation test.7 The Elecsys SARS‑CoV‐2 antigen assay had a within‐run CV of 3.3% and a total CV of 3.5% at a mean concentration of 1.12 COI (pool 1) and a within‐run CV of 3.1% and a total CV of 5.7% at a mean concentration of 1.82 COI (pool 2).

Statistical analysis

We performed a purely descriptive statistical analysis by calculating the sensitivity, specificity, area under the ROC curve, positive likelihood ratio, negative likelihood ratio, positive predictive value, and negative predictive value for the Elecsys SARS‑CoV‐2 antigen assay against the reference standard. Sensitivity, specificity, positive and negative predictive values, and disease prevalence were expressed as percentages.
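For illustration, these descriptive measures all derive from a standard 2×2 contingency table of index test versus reference standard. The sketch below is not part of the study's analysis (which used MedCalc); the counts passed in at the bottom are reconstructed from the reported percentages (47 RT‐PCR‐positive cases among 403, sensitivity 26%, specificity 100%) rather than quoted from Table 1, and should be treated as an assumption.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Point estimates of descriptive accuracy measures from a 2x2 table."""
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)                             # sensitivity
    spec = tn / (tn + fp)                             # specificity
    ppv = tp / (tp + fp) if (tp + fp) else None       # positive predictive value
    npv = tn / (tn + fn) if (tn + fn) else None       # negative predictive value
    # The positive likelihood ratio is undefined ("not applicable")
    # when specificity is 100%, because it divides by (1 - specificity).
    lr_pos = sens / (1 - spec) if spec < 1 else None
    lr_neg = (1 - sens) / spec                        # negative likelihood ratio
    prevalence = (tp + fn) / total
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv, "npv": npv,
            "lr_pos": lr_pos, "lr_neg": lr_neg, "prevalence": prevalence}

# Counts inferred from the reported percentages (illustrative assumption):
# 12 true positives, 35 false negatives, 0 false positives, 356 true negatives.
m = diagnostic_metrics(tp=12, fp=0, fn=35, tn=356)
```

With these inferred counts, the sketch reproduces the reported point estimates: sensitivity 26%, negative predictive value 91%, negative likelihood ratio 0.74, and an undefined positive likelihood ratio at 100% specificity.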
The confidence intervals for sensitivity and specificity were the "exact" Clopper‐Pearson confidence intervals. The confidence intervals for the likelihood ratios were calculated using the log method, as suggested by Altman et al.8 Confidence intervals for the predictive values were the standard logit confidence intervals given by Mercaldo et al.9 The area under the ROC curve was estimated using established procedures.10, 11, 12 For correlation analysis, we calculated the Spearman correlation coefficient (rho) with a p‐value and a 95% confidence interval (CI) for the correlation coefficient. Data analysis was performed using the MedCalc software package, version 17.2 (MedCalc Software Ltd, Ostend, Belgium).

Results

In this study on the diagnostic accuracy of the Elecsys SARS‑CoV‐2 antigen assay, we used the samples obtained in 403 clinical requests for simultaneous RT‐PCR and antigen assays. The 403 requests were from 336 patients (median age, 74 years; range, 15–100 years; 188 males [56%]).
Specifically, 330 requests for SARS‐CoV‐2 testing were from 321 patients in the emergency department of the hospital, and 73 requests were from 15 patients in the intensive care unit, which cared for patients with severe COVID‐19. For the emergency department patients, RT‐PCR assays were ordered by the treating physicians to decide whether the patients were to be admitted to the COVID‐19 wards or to the “clean” COVID‐19‐free wards. In the intensive care unit, RT‐PCR assays were ordered by the treating physicians for follow‐up evaluations of patients with severe COVID‐19. In the 403 cases, 47 RT‐PCR‐positive results were obtained. This corresponds to an RT‐PCR‐positive prevalence of 12% (95% CI, 9–15) in our cohort. Of the 330 requests for SARS‐CoV‐2 testing from the emergency department, 11 showed positive results with the RT‐PCR assay (median Ct value, 32.5; range, 19.2–39.6). Of the 73 requests for SARS‐CoV‐2 testing from the intensive care unit, 36 showed positive results with the RT‐PCR assay (median Ct value, 33.7; range, 18.6–39.5).

Table 1 details the overall results from the Elecsys SARS‑CoV‐2 antigen assay against the RT‐PCR assay. Our data yielded the following findings: sensitivity, 26% (95% CI, 14–40); specificity, 100% (95% CI, 99–100); area under the ROC curve, 0.63 (95% CI, 0.58–0.68); positive likelihood ratio, not applicable; negative likelihood ratio, 0.74 (95% CI, 0.63–0.88); positive predictive value, 100%; and negative predictive value, 91% (95% CI, 90–92).

Table 1. Overall performance of the Elecsys SARS‐CoV‐2 antigen assay (ie, index test) versus the RT‐PCR assay (ie, reference standard) in 403 cases. Abbreviations: RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.

Next, we examined the 47 RT‐PCR‐positive cases with respect to the Ct values of the SARS‐CoV‐2 signal for the N2 nucleic acid target found in RT‐PCR and the COI values in the Elecsys SARS‑CoV‐2 antigen assay.
Analysis of the relationship between the Ct values and COI values in the 47 RT‐PCR‐positive cases showed a Spearman coefficient of rank correlation (rho) of −0.704 (95% CI, −0.824 to −0.522; p < 0.0001). Figure 1 shows the respective scattergrams. In Table 2, we compared the results of the 47 RT‐PCR‐positive cases categorized by viral load (expressed as Ct values) with the corresponding results of the Elecsys SARS‐CoV‐2 antigen assay. The results showed that the true‐positive rate of the Elecsys SARS‐CoV‐2 antigen assay was 100% for Ct values of 15–24.9, 44% for Ct values of 25–29.9, 8% for Ct values of 30–34.9, and 6% for Ct values of 35–39.9. Table S1 shows the individual results of the 47 RT‐PCR‐positive cases.

Figure 1. Scatterplot of the cycle threshold (Ct) values of SARS‐CoV‐2 RT‐PCR versus the cutoff index (COI) values of the Elecsys SARS‐CoV‐2 antigen assay in our 47 RT‐PCR‐positive cases. The horizontal dotted line indicates the cutoff value of the Elecsys SARS‐CoV‐2 antigen assay (negative, COI <1.0; positive, COI ≥1.0). Open triangles indicate requests from the emergency department; open circles indicate requests from the intensive care unit. Abbreviations: COI, cutoff index; Ct, cycle threshold; RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.

Table 2. Comparison of the 47 SARS‐CoV‐2 RT‐PCR‐positive cases categorized by viral load (expressed as Ct values) versus the results of the Elecsys SARS‐CoV‐2 antigen assay. Abbreviations: COI, cutoff index; Ct, cycle threshold; RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2.

Discussion

Although this is only a small single‐center study, the main characteristics of the Elecsys SARS‐CoV‐2 antigen assay can be determined from our results.
The Elecsys SARS‐CoV‐2 antigen assay had high specificity (it showed no false‐positive results compared with the RT‐PCR assay) but markedly lower sensitivity than the RT‐PCR assay (it yielded many false‐negative results): only 26% in our cohort. As expected, the rate of false‐negative results with the Elecsys SARS‐CoV‐2 antigen assay decreased with increasing viral load. In our evaluation, all Elecsys SARS‐CoV‐2 antigen assay results were positive in cases with Ct values of 15–24.9. However, for Ct values of 30–39.9, the Elecsys SARS‐CoV‐2 antigen assay had a sensitivity of only 6%–8% in our cohort, which seems too low for a tertiary care setting. Therefore, we decided not to use this assay in the clinical routine of our hospital.

Our data suggest a clear relationship between the Ct value (as a surrogate measure of viral load) and the sensitivity of the Elecsys SARS‐CoV‐2 antigen assay. Recently published studies, for example, demonstrated that SARS‐CoV‐2 infectivity varies with the viral load, among other factors.13, 14 Individuals with high viral loads (as determined by Ct values) were the most infectious.13 Although rapid point‐of‐care antigen tests for detection of SARS‐CoV‐2 have been criticized for their lower clinical sensitivity compared with NAATs, these assays may help detect the most infectious cases.13 These rapid point‐of‐care antigen tests usually have a relatively high sensitivity in respiratory specimens with high viral loads (typically >80% in specimens with Ct values <25), while their positive rate in samples with a low viral load (eg, Ct values >25/30) is usually <80%.4, 15, 16 These data support the use of rapid point‐of‐care antigen tests for the detection of SARS‐CoV‐2 in high‐viral‐load individuals. These considerations might also hold true for the Elecsys SARS‐CoV‐2 antigen assay.
However, our data do not conclusively determine whether the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay is adequate for population screening programs of asymptomatic or pre‐symptomatic individuals to reduce transmission of SARS‐CoV‐2. Further studies in larger cohorts are necessary to address these issues.\nWhen comparing the results of our evaluation with the data from the package insert of the Elecsys SARS‐CoV‐2 antigen assay, considerable differences in the diagnostic performance were noted. The package insert describes the performance of the antigen assay in comparison with the Roche Diagnostics SARS‐CoV‐2 RT‐PCR assay. According to Roche Diagnostics, the Elecsys SARS‐CoV‐2 antigen assay has a relative sensitivity of approximately 97% at Ct values <30; however, our evaluation showed a relative sensitivity of approximately 67% at Ct values <30. Furthermore, while the package insert described a relative sensitivity of approximately 84% at Ct values of 30–35, our evaluation showed a relative sensitivity of approximately 8% at Ct values of 30–35. According to the manufacturer, the Elecsys SARS‐CoV‐2 antigen assay has a relative sensitivity of approximately 61% for Ct values of 35–40, but our evaluation showed a relative sensitivity of approximately 6% for Ct values of 35–40. Thus, our assay evaluation suggested that the diagnostic sensitivity of the Elecsys SARS‐CoV‐2 antigen assay was worse than that indicated in the package insert. However, we cannot provide a definitive explanation for these differences with the data available to us. We speculate that the large differences in the reported assay performance data may be related to the use of the SARS‐CoV‐2 extraction solution. 
Indeed, the package insert of the Elecsys SARS‐CoV‐2 antigen assay says nothing about the use of the SARS‐CoV‐2 extraction solution, whereas we were advised by Roche Diagnostics, Italy, to use 1.0 ml of the SARS‐CoV‐2 extraction solution for each nasopharyngeal swab (as described in the Methods). The use of 1.0 ml of this SARS‐CoV‐2 extraction solution may have diluted the SARS‐CoV‐2 antigen, which could have negatively affected the sensitivity of the Elecsys SARS‐CoV‐2 antigen assay. However, as mentioned above, this consideration is speculative.\nA diverse range of rapid point‐of‐care antigen tests for the detection of SARS‐CoV‐2 from nasopharyngeal swabs and oropharyngeal swabs is currently available on the market. Several excellent publications have described the evaluation results for these rapid point‐of‐care assays,17, 18, 19, 20, 21, 22, 23 and meta‐analyses on this topic have also been published.15, 16 In summary, the published data suggest that the sensitivity of these rapid point‐of‐care antigen assays is generally low, ranging from 20% to 95% depending on the assay and the viral load. Thus, the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay is not better than that of the rapid point‐of‐care assays described in the literature; it offers the advantage of high throughput but the disadvantage of a relatively long time to result.\nIn conclusion, it remains to be established whether the Elecsys SARS‐CoV‐2 antigen assay can be considered for detecting potentially infective individuals and thus for reducing the spread of the virus. If so, the Elecsys SARS‐CoV‐2 antigen assay could be useful for population screening of asymptomatic or pre‐symptomatic individuals in accordance with the respective testing strategies of the authorities. 
In a tertiary care setting, however, the Elecsys SARS‐CoV‐2 antigen assay does not appear to be useful in its current form for clinical decision‐making, in our opinion.", "None declared.", "Thomas Mueller: Conceptualization, data collection, data analysis and interpretation, drafting of the article. Julia Kompatscher: Data collection, data analysis and interpretation, critical revision of the article. Mario La Guardia: Data collection, data analysis and interpretation, critical revision of the article. All authors: Final approval of the article.", "Table S1\nClick here for additional data file." ]
[ null, "methods", null, null, null, null, "results", "discussion", "COI-statement", null, "supplementary-material" ]
[ "Antigen", "COVID‐19", "diagnostic test", "immunoassay", "laboratory medicine", "polymerase chain reaction", "SARS‐CoV‐2", "virology" ]
INTRODUCTION: The RNA virus severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) causes coronavirus disease 2019 (COVID‐19).1 Infection with SARS‐CoV‐2 can be asymptomatic or may result in symptomatic disease ranging in severity from mild upper respiratory tract symptoms to severe pneumonia with respiratory failure and multiple organ failure.1 The gold standard laboratory tests to detect SARS‐CoV‐2 from clinical specimens (eg, nasopharyngeal swabs, oropharyngeal swabs, and bronchoalveolar lavage fluid) are nucleic acid amplification tests (NAATs), mainly reverse‐transcription polymerase chain reaction (RT‐PCR) assays.1, 2 Currently, a variety of NAATs are commercially available for use in routine clinical practice.1, 3 However, since the testing capacity afforded by NAATs is insufficient to cope with the COVID‐19 pandemic, various manufacturers have also developed rapid antigen immunoassays, which do not require skilled personnel and dedicated instrumentation, for detection of the virus from nasopharyngeal and oropharyngeal swabs. SARS‐CoV‐2 rapid point‐of‐care antigen tests have also been commercially available for some time.1, 4 At present, antigen point‐of‐care tests in many countries help to ensure the necessary quantity of SARS‐CoV‐2 tests for their respective testing strategies,1, 4 but these tests have been criticized because of their lower clinical sensitivity in comparison with NAATs.1, 4 Recently, Roche Diagnostics (Rotkreuz, Switzerland) launched a high‐throughput antigen test for medical laboratories called “Elecsys SARS‐CoV‐2 antigen assay,” which runs on the company's analyzers. We evaluated the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay prior to its planned use in our clinical routine. Herein, we report the results of our evaluation. 
METHODS: Study design and clinical samples This report describes the findings of a single‐center evaluation of the diagnostic accuracy of the Elecsys SARS‐CoV‐2 antigen assay as an index test in comparison with RT‐PCR as the reference standard. Our manufacturer‐independent evaluation was conducted from March 11, 2021, to April 26, 2021, at the Department of Clinical Pathology, Hospital of Bolzano, province of South Tyrol, Italy. During this period, the median 7‐day incidence rate of new SARS‐CoV‐2‐positive cases per 100,000 population was 149 (starting from 245 on March 11, 2021, and declining to 121 on April 26, 2021) in the province of South Tyrol (Amministrazione Provincia Bolzano, Sicurezza e protezione civile, web: http://www.provincia.bz.it/sicurezza‐protezione‐civile/protezione‐civile/dati‐attuali‐sul‐coronavirus.asp, last access: April 27, 2021). During this time, the Department of Clinical Pathology received 403 requests for simultaneous RT‐PCR and antigen assays from the emergency department of the hospital and from the intensive care unit, which care for COVID‐19 patients. These 403 requests pertained to 336 patients. In all 403 cases, two nasopharyngeal swabs were obtained simultaneously by skilled personnel, of which one was sent for the RT‐PCR assay and the other was sent to run the antigen assay. We used the data from these 403 cases to evaluate the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay. A referral to the ethics committee was not deemed necessary because the project was an assay validation/verification that was in line with good laboratory practice. Such evaluations are routinely performed in medical laboratories before introducing a new assay into the clinical routine. Reference standard—RT‐PCR assay The personnel from the emergency department of the hospital and from the intensive care unit used standard swabs and transport media from two different manufacturers, namely FLOQSwabs® (Ref. 
330C, filled with 3 ml UTM® medium, Copan Italia S.p.A., Brescia, Italy) and the combined specimen collection device ∑‐Transwab® Liquid Amies (one Sigma swab plus 1 ml of liquid Amies transport medium, Ref. MW176S; Medical Wire, Corsham, United Kingdom). The nasopharyngeal swabs were handled as specified by the manufacturer. After the smear, samples were sent to our laboratory where the RT‐PCR assay was performed immediately. The RT‐PCR assay was performed using the Xpert Xpress SARS‐CoV‐2 test (Ref. XPRSARS‐COV2‐10, Cepheid, Sunnyvale, CA, USA) on a GeneXpert® IV instrument (Cepheid, Sunnyvale, CA, USA). The Xpert Xpress SARS‐CoV‐2 test is a rapid, real‐time RT‐PCR test intended for qualitative detection of nucleic acids from SARS‐CoV‐2 in upper respiratory specimens. We performed the entire Xpert Xpress SARS‐CoV‐2 test procedure according to the manufacturer's instructions. The system uses single‐use disposable cartridges that hold RT‐PCR reagents and host the RT‐PCR process. The sample‐processing control and probe‐check control are also included in the cartridge. The Xpert Xpress SARS‐CoV‐2 test provides test results based on the detection of two gene targets, namely the amplification of the SARS‐CoV‐2 E and N2 genes.1, 3 The limit of detection of this test was 250 copies/ml, and the time to result was 45 min.1, 3 The Xpert Xpress SARS‐CoV‐2 test includes an early assay termination function, which can provide an earlier time to result for high‐titer specimens if the signal from the target nucleic acid reaches a predetermined threshold before the full 45 PCR cycles have been completed. Using the GeneXpert software (Cepheid, Sunnyvale, CA, USA), we considered positive RT‐PCR results when the SARS‐CoV‐2 signal for the N2 nucleic acid target had a PCR cycle threshold (Ct) value of <40.0, irrespective of the signal for the E nucleic acid target. 
In contrast, when the Ct value for the SARS‐CoV‐2 N2 gene was ≥40.0, or when the results of RT‐PCR testing were definitely negative (with reference to a positive result for the sample‐processing control), we classified the result of the RT‐PCR test as negative. Further, we categorized the results of RT‐PCR tests that showed negative signals for the SARS‐CoV‐2 E and N2 genes as well as a negative signal for the sample‐processing control as invalid; in these cases, we repeated the analysis. Index test—Elecsys SARS‐CoV‐2 antigen assay Specimen collection and preparation for detection of the SARS‐CoV‐2 antigen was performed as recommended by Roche Diagnostics Italy and in accordance with the package insert of the Elecsys SARS‐CoV‐2 antigen assay. We prepared sample collection tubes without any additives (Vacuette® Z No Additive 4 ml, Ref. 454001, Greiner Bio‐One, Kremsmunster, Austria) containing 1.0 ml of the SARS‐CoV‐2 extraction solution (Ref. 09370064190; Roche Diagnostics, Rotkreuz, Switzerland). The SARS‐CoV‐2 extraction solution is intended for the elution and transportation of samples for use in the Elecsys SARS‐CoV‐2 antigen assay. 
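The RT‐PCR result‐interpretation rules described above (positive, negative, or invalid, based on the N2 Ct value, the E target signal, and the sample‐processing control) can be summarized as a small decision function. This is a minimal sketch with hypothetical function and argument names of our choosing, not part of the Cepheid software:

```python
# Sketch of the RT-PCR interpretation rules used in this study:
# positive : N2 target detected at Ct < 40.0, irrespective of the E target;
# invalid  : no signal for E, N2, or the sample-processing control (SPC)
#            -> repeat the analysis;
# negative : otherwise (N2 Ct >= 40.0 or definitely negative with a valid SPC).

def interpret_rtpcr(n2_ct, e_detected, spc_positive):
    """n2_ct: Ct value for the N2 target, or None if undetected."""
    if n2_ct is not None and n2_ct < 40.0:
        return "positive"
    if n2_ct is None and not e_detected and not spc_positive:
        return "invalid"  # repeat the run
    return "negative"
```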
The personnel from the emergency department of the hospital and from the intensive care unit received these specifically prepared sample collection tubes and FLOQSwabs® (Ref. 519CS01, Copan Italia S.p.A., Brescia, Italy) for sample collection. The nasopharyngeal smear for detection of the SARS‐CoV‐2 antigen was performed in exactly the same way and at the same time as the smear for RT‐PCR test. The collection tubes were opened; the swab was soaked in the solution; and the swab was stirred 20 times. The swab was then left in the solution for 2 min. Next, the personnel from the emergency department of the hospital or the intensive care unit removed the swab while pressing it against the tube wall to extract the liquid from the swab. The collection tube was then recapped and immediately sent to our laboratory, where the samples were stored for a maximum of 36 h at 2–8°C. According to the package insert of the Elecsys SARS‐CoV‐2 antigen assay, the samples have an in vitro stability of two days at 2–8°C. Finally, we performed the Elecsys SARS‐CoV‐2 antigen assay using the collection tubes. The Elecsys SARS‐CoV‐2 antigen assay (Ref. 09345299190, Roche Diagnostics, Rotkreuz, Switzerland) is an electrochemiluminescence immunoassay for qualitative detection of the nucleocapsid antigen of SARS‑CoV‐2 in nasopharyngeal and oropharyngeal swab samples. This assay uses monoclonal antibodies directed against the SARS‑CoV‐2 nucleocapsid protein in a double‐antibody sandwich assay format. In our evaluation, we ran this assay on a single Cobas e801 system (Roche Diagnostics, Rotkreuz, Switzerland) according to the manufacturer's instructions. This assay produces results as a cutoff index (COI; signal of sample divided by cutoff), wherein results ≥1.00 are reported as reactive/positive. For the internal quality control, we used the PreciControl SARS‑CoV‐2 antigen (Ref. 09345302190) once daily at two COI levels. 
We allowed sample measurements only if the controls were within the defined limits. We determined the limit of blank (LoB) as previously suggested 5: Measurements were obtained with the SARS‐CoV‐2 extraction solution in replicates of 20 and calculated LoB = meanblank + 1.645 (SDblank). Using this procedure, we found an LoB of 0.60 COI. We evaluated the linearity of the Elecsys SARS‑CoV‐2 antigen assay according to the CLSI guideline EP6‐A6 using six different analyte concentrations. Fresh samples were used to prepare high‐ and low‐concentration pools. We then conducted a direct dilution series with the low‐ and high‐concentration patient sample pools in the following volume ratios (low‐concentration pool +high‐concentration pool): pool 1, low only; pool 2, 0.8 low +0.2 high; pool 3, 0.6 low +0.4 high; pool 4, 0.4 low +0.6 high; pool 5, 0.2 low +0.8 high; and pool 6, high only. Three measurements were performed for each concentration, and the default criteria were set at 5% for repeatability and 15 COI for nonlinearity. The mean COIs of the low‐ and high‐concentration pools were 0.49 and 759, respectively. The standard errors of regression (Sy,x) and t‐tests from regression analyses showed that the first‐order model fitted better than the second‐ and third‐order models: first‐order model b1, Sy,x = 12.457; t‐test = 86.878 (p < 0.001); second‐order model b2, Sy,x = 11.622; t‐test = 1.839 (p = 0.086); and third‐order model b3, Sy,x = 10.755; t‐test = 1.875 (p = 0.082). In addition, all default criteria were met, so the method was linear up to 750 COI. To evaluate the precision of the Elecsys SARS‑CoV‐2 antigen assay in our laboratory, we performed a replication study according to the Clinical and Laboratory Standards Institute (CLSI; formerly NCCLS) guideline EP5‐A.7 Two pooled patient samples with COI values near the reactive/positive cutoff values of the assay were aliquoted into ten plastic tubes for each concentration level and frozen at –80°C. 
We analyzed these samples in duplicate in two runs per day for 10 days within 2 weeks of sample collection. Within‐run and total analytical precision (CV) were calculated using the CLSI double‐run precision evaluation test.7 The Elecsys SARS‑CoV‐2 antigen assay had a within‐run CV of 3.3% and a total CV of 3.5% at a mean concentration of 1.12 COI (pool 1) and a within‐run CV of 3.1% and a total CV of 5.7% at a mean concentration of 1.82 COI (pool 2). Statistical analysis We performed a purely descriptive statistical analysis by calculating the sensitivity, specificity, area under the ROC curve, positive likelihood ratio, negative likelihood ratio, positive predictive value, and negative predictive value for the Elecsys SARS‑CoV‐2 antigen assay against the reference standard. Sensitivity, specificity, positive and negative predictive values, and disease prevalence were expressed as percentages. The confidence intervals for sensitivity and specificity were the "exact" Clopper‐Pearson confidence intervals. The confidence intervals for the likelihood ratios were calculated using the log method, as suggested by Altman et al.8 Confidence intervals for the predictive values were the standard logit confidence intervals given by Mercaldo et al.9 The area under the ROC curve was estimated using established procedures.10, 11, 12 For correlation analysis, we calculated the Spearman correlation coefficient (rho) with a p‐value and a 95% confidence interval (CI) for the correlation coefficient. Data analysis was performed using the MedCalc software package, version 17.2 (MedCalc Software Ltd, Ostend, Belgium). 
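The "exact" Clopper‐Pearson confidence intervals mentioned above can be computed directly from the binomial distribution. The following is a standard‐library sketch for illustration only (the study itself used MedCalc 17.2):

```python
# Clopper-Pearson "exact" two-sided CI for a binomial proportion k/n,
# found by bisection on the binomial CDF (standard library only).
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    def bisect(pred):
        # pred is True on the low-p side; converge to the crossing point.
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if pred(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound: largest p with P(X >= k | p) <= alpha/2.
    lower = 0.0 if k == 0 else bisect(lambda p: 1 - binom_cdf(k - 1, n, p) < alpha / 2)
    # Upper bound: largest p with P(X <= k | p) >= alpha/2.
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, n, p) >= alpha / 2)
    return lower, upper
```

For example, a sensitivity of 12/47 would yield an interval bracketing 25.5%; for k = 0 the lower limit is exactly 0.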
Study design and clinical samples: This report describes the findings of a single‐center evaluation of the diagnostic accuracy of the Elecsys SARS‐CoV‐2 antigen assay as an index test in comparison with RT‐PCR as the reference standard. Our manufacturer‐independent evaluation was conducted from March 11, 2021, to April 26, 2021, at the Department of Clinical Pathology, Hospital of Bolzano, province of South Tyrol, Italy. During this period, the median 7‐day incidence rate of new SARS‐CoV‐2‐positive cases per 100,000 population was 149 (starting from 245 on March 11, 2021, and declining to 121 on April 26, 2021) in the province of South Tyrol (Amministrazione Provincia Bolzano, Sicurezza e protezione civile, web: http://www.provincia.bz.it/sicurezza‐protezione‐civile/protezione‐civile/dati‐attuali‐sul‐coronavirus.asp, last access: April 27, 2021). During this time, the Department of Clinical Pathology received 403 requests for simultaneous RT‐PCR and antigen assays from the emergency department of the hospital and from the intensive care unit, which care for COVID‐19 patients. These 403 requests pertained to 336 patients. In all 403 cases, two nasopharyngeal swabs were obtained simultaneously by skilled personnel, of which one was sent for the RT‐PCR assay and the other was sent to run the antigen assay. We used the data from these 403 cases to evaluate the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay. A referral to the ethics committee was not deemed necessary because the project was an assay validation/verification that was in line with good laboratory practice. Such evaluations are routinely performed in medical laboratories before introducing a new assay into the clinical routine. Reference standard—RT‐PCR assay: The personnel from the emergency department of the hospital and from the intensive care unit used standard swabs and transport media from two different manufacturers, namely FLOQSwabs® (Ref. 
503CS01, Copan Italia S.p.A., Brescia, Italy) in combination with the UTM Universal Transport Medium (Ref. 330C, filled with 3 ml UTM® medium, Copan Italia S.p.A., Brescia, Italy) and the combined specimen collection device ∑‐Transwab® Liquid Amies (one Sigma swab plus 1 ml of liquid Amies transport medium, Ref. MW176S; Medical Wire, Corsham, United Kingdom). The nasopharyngeal swabs were handled as specified by the manufacturer. After the smear, samples were sent to our laboratory where the RT‐PCR assay was performed immediately. The RT‐PCR assay was performed using the Xpert Xpress SARS‐CoV‐2 test (Ref. XPRSARS‐COV2‐10, Cepheid, Sunnyvale, CA, USA) on a GeneXpert® IV instrument (Cepheid, Sunnyvale, CA, USA). The Xpert Xpress SARS‐CoV‐2 test is a rapid, real‐time RT‐PCR test intended for qualitative detection of nucleic acids from SARS‐CoV‐2 in upper respiratory specimens. We performed the entire Xpert Xpress SARS‐CoV‐2 test procedure according to the manufacturer's instructions. The system uses single‐use disposable cartridges that hold RT‐PCR reagents and host the RT‐PCR process. The sample‐processing control and probe‐check control are also included in the cartridge. The Xpert Xpress SARS‐CoV‐2 test provides test results based on the detection of two gene targets, namely the amplification of the SARS‐CoV‐2 E and N2 genes.1, 3 The limit of detection of this test was 250 copies/ml, and the time to result was 45 min.1, 3 The Xpert Xpress SARS‐CoV‐2 test includes an early assay termination function, which can provide an earlier time to result for high‐titer specimens if the signal from the target nucleic acid reaches a predetermined threshold before the full 45 PCR cycles have been completed. Using the GeneXpert software (Cepheid, Sunnyvale, CA, USA), we considered positive RT‐PCR results when the SARS‐CoV‐2 signal for the N2 nucleic acid target had a PCR cycle threshold (Ct) value of <40.0, irrespective of the signal for the E nucleic acid target. 
In contrast, when the Ct value for the SARS‐CoV‐2 N2 gene was ≥40.0, or when the results of RT‐PCR testing were definitely negative (with reference to a positive result for the sample‐processing control), we classified the result of the RT‐PCR test as negative. Further, we categorized the results of RT‐PCR tests that showed negative signals for the SARS‐CoV‐2 E and N2 genes as well as a negative signal for the sample‐processing control as invalid; in these cases, we repeated the analysis. Index test—Elecsys SARS‐CoV‐2 antigen assay: Specimen collection and preparation for detection of the SARS‐CoV‐2 antigen was performed as recommended by Roche Diagnostics Italy and in accordance with the package insert of the Elecsys SARS‐CoV‐2 antigen assay. We prepared sample collection tubes without any additives (Vacuette® Z No Additive 4 ml, Ref. 454001, Greiner Bio‐One, Kremsmunster, Austria) containing 1.0 ml of the SARS‐CoV‐2 extraction solution (Ref. 09370064190; Roche Diagnostics, Rotkreuz, Switzerland). The SARS‐CoV‐2 extraction solution is intended for the elution and transportation of samples for use in the Elecsys SARS‐CoV‐2 antigen assay. The personnel from the emergency department of the hospital and from the intensive care unit received these specifically prepared sample collection tubes and FLOQSwabs® (Ref. 519CS01, Copan Italia S.p.A., Brescia, Italy) for sample collection. The nasopharyngeal smear for detection of the SARS‐CoV‐2 antigen was performed in exactly the same way and at the same time as the smear for RT‐PCR test. The collection tubes were opened; the swab was soaked in the solution; and the swab was stirred 20 times. The swab was then left in the solution for 2 min. Next, the personnel from the emergency department of the hospital or the intensive care unit removed the swab while pressing it against the tube wall to extract the liquid from the swab. 
The collection tube was then recapped and immediately sent to our laboratory, where the samples were stored for a maximum of 36 h at 2–8°C. According to the package insert of the Elecsys SARS‐CoV‐2 antigen assay, the samples have an in vitro stability of two days at 2–8°C. Finally, we performed the Elecsys SARS‐CoV‐2 antigen assay using the collection tubes. The Elecsys SARS‐CoV‐2 antigen assay (Ref. 09345299190, Roche Diagnostics, Rotkreuz, Switzerland) is an electrochemiluminescence immunoassay for qualitative detection of the nucleocapsid antigen of SARS‑CoV‐2 in nasopharyngeal and oropharyngeal swab samples. This assay uses monoclonal antibodies directed against the SARS‑CoV‐2 nucleocapsid protein in a double‐antibody sandwich assay format. In our evaluation, we ran this assay on a single Cobas e801 system (Roche Diagnostics, Rotkreuz, Switzerland) according to the manufacturer's instructions. This assay produces results as a cutoff index (COI; signal of sample divided by cutoff), wherein results ≥1.00 are reported as reactive/positive. For the internal quality control, we used the PreciControl SARS‑CoV‐2 antigen (Ref. 09345302190) once daily at two COI levels. We allowed sample measurements only if the controls were within the defined limits. We determined the limit of blank (LoB) as previously suggested 5: Measurements were obtained with the SARS‐CoV‐2 extraction solution in replicates of 20, and the LoB was calculated as LoB = mean_blank + 1.645 × SD_blank. Using this procedure, we found an LoB of 0.60 COI. We evaluated the linearity of the Elecsys SARS‑CoV‐2 antigen assay according to the CLSI guideline EP6‐A6 using six different analyte concentrations. Fresh samples were used to prepare high‐ and low‐concentration pools. 
We then conducted a direct dilution series with the low‐ and high‐concentration patient sample pools in the following volume ratios (low‐concentration pool +high‐concentration pool): pool 1, low only; pool 2, 0.8 low +0.2 high; pool 3, 0.6 low +0.4 high; pool 4, 0.4 low +0.6 high; pool 5, 0.2 low +0.8 high; and pool 6, high only. Three measurements were performed for each concentration, and the default criteria were set at 5% for repeatability and 15 COI for nonlinearity. The mean COIs of the low‐ and high‐concentration pools were 0.49 and 759, respectively. The standard errors of regression (Sy,x) and t‐tests from regression analyses showed that the first‐order model fitted better than the second‐ and third‐order models: first‐order model b1, Sy,x = 12.457; t‐test = 86.878 (p < 0.001); second‐order model b2, Sy,x = 11.622; t‐test = 1.839 (p = 0.086); and third‐order model b3, Sy,x = 10.755; t‐test = 1.875 (p = 0.082). In addition, all default criteria were met, so the method was linear up to 750 COI. To evaluate the precision of the Elecsys SARS‑CoV‐2 antigen assay in our laboratory, we performed a replication study according to the Clinical and Laboratory Standards Institute (CLSI; formerly NCCLS) guideline EP5‐A.7 Two pooled patient samples with COI values near the reactive/positive cutoff values of the assay were aliquoted into ten plastic tubes for each concentration level and frozen at –80°C. We analyzed these samples in duplicate in two runs per day for 10 days within 2 weeks of sample collection. Within‐run and total analytical precision (CV) were calculated using the CLSI double‐run precision evaluation test.7 The Elecsys SARS‑CoV‐2 antigen assay had a within‐run CV of 3.3% and a total CV of 3.5% at a mean concentration of 1.12 COI (pool 1) and a within‐run CV of 3.1% and a total CV of 5.7% at a mean concentration of 1.82 COI (pool 2). 
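The LoB rule and the six-pool dilution scheme above are straightforward to sketch. In this illustration, the low- and high-concentration pool means (0.49 and 759 COI) are taken from the evaluation itself, while the 20 blank readings are hypothetical stand-ins rather than the laboratory's raw data:

```python
import statistics

# Limit of blank: LoB = mean_blank + 1.645 * SD_blank, from 20 replicate
# measurements of the extraction solution. These readings are illustrative only.
blanks = [0.42, 0.48, 0.45, 0.51, 0.47, 0.44, 0.50, 0.46, 0.49, 0.43,
          0.48, 0.45, 0.52, 0.47, 0.44, 0.50, 0.46, 0.49, 0.45, 0.48]
lob = statistics.mean(blanks) + 1.645 * statistics.stdev(blanks)

# Linearity series: mix the low and high pools in the stated volume ratios.
# Under a linear assay response, the expected COI of each mixture is the
# volume-weighted average of the two pool means.
low_coi, high_coi = 0.49, 759.0
high_fractions = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]   # pools 1 through 6
expected = [(1 - f) * low_coi + f * high_coi for f in high_fractions]
```

Comparing the measured pool means against these volume-weighted expectations, and checking whether a first-order polynomial fits better than higher-order models as in EP6-A, is what establishes linearity over the tested range.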
Statistical analysis: We performed a purely descriptive statistical analysis by calculating the sensitivity, specificity, area under the ROC curve, positive likelihood ratio, negative likelihood ratio, positive predictive value, and negative predictive value for the Elecsys SARS‑CoV‐2 antigen assay against the reference standard. Sensitivity, specificity, positive and negative predictive values, and disease prevalence were expressed as percentages. The confidence intervals for sensitivity and specificity were the "exact" Clopper‐Pearson confidence intervals. The confidence intervals for the likelihood ratios were calculated using the log method, as suggested by Altman et al.8 Confidence intervals for the predictive values were the standard logit confidence intervals given by Mercaldo et al.9 The area under the ROC curve was estimated using established procedures.10, 11, 12 For correlation analysis, we calculated the Spearman correlation coefficient (rho) with a p‐value and a 95% confidence interval (CI) for the correlation coefficient. Data analysis was performed using the MedCalc software package, version 17.2 (MedCalc Software Ltd, Ostend, Belgium). RESULTS: In this study on the diagnostic accuracy of the Elecsys SARS‑CoV‐2 antigen assay, we used the samples obtained in 403 clinical requests for simultaneous RT‐PCR and antigen assays. The 403 requests were from 336 patients (median age, 74 years; range, 15–100 years; 188 males [56%]). Specifically, 330 requests for SARS‐CoV‐2 testing were from 321 patients in the emergency department of the hospital, and 73 requests were from 15 patients in the intensive care unit, which cared for patients with severe COVID‐19. For the emergency department patients, RT‐PCR assays were ordered by the treating physicians to decide whether the patients were to be admitted to the COVID‐19 wards or to the “clean” COVID‐19‐free wards. 
In the intensive care unit, RT‐PCR assays were ordered by the treating physicians for follow‐up evaluations of patients with severe COVID‐19. In the 403 cases, 47 RT‐PCR‐positive results were obtained. This corresponds to an RT‐PCR‐positive prevalence of 12% (95% CI, 9–15) in our cohort. Of the 330 requests for SARS‐CoV‐2 testing from the emergency department, 11 showed positive results with the RT‐PCR assay (median Ct value, 32.5; range, 19.2–39.6). Of the 73 requests for SARS‐CoV‐2 testing from the intensive care unit, 36 showed positive results with the RT‐PCR assay (median Ct value, 33.7; range, 18.6–39.5). Table 1 details the overall results from the Elecsys SARS‑CoV‐2 antigen assay against the RT‐PCR assay. Our data yielded the following findings: sensitivity, 26% (95% CI, 14–40); specificity, 100% (95% CI, 99–100); area under the ROC curve, 0.63 (95% CI, 0.58–0.68); positive likelihood ratio, not applicable; negative likelihood ratio, 0.74 (95% CI, 0.63–0.88); positive predictive value, 100%; and negative predictive value, 91% (95% CI, 90–92). Overall performance of the Elecsys SARS‐CoV‐2 antigen assay (ie, index test) versus the RT‐PCR assay (ie, reference standard) in 403 cases Abbreviations: RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. Next, we examined the 47 RT‐PCR‐positive cases with respect to the Ct values of the SARS‐CoV‐2 signal for the N2 nucleic acid target found in RT‐PCR and the COI values in the Elecsys SARS‑CoV‐2 antigen assay. Analysis of the relationship between the Ct values and COI values in the 47 RT‐PCR‐positive cases showed a Spearman's coefficient of rank correlation (rho) of −0.704 (95% CI, −0.824 to −0.522; p < 0.0001). Figure 1 shows the respective scattergrams. In Table 2, we compared the results of the 47 RT‐PCR‐positive cases categorized by viral load (expressed as Ct values) with the corresponding results of the Elecsys SARS‐CoV‐2 antigen assay. 
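As a cross-check, the headline accuracy figures above follow directly from the underlying 2×2 counts (12 true positives, 35 false negatives, 356 true negatives, no false positives). A stdlib-only sketch, with the exact Clopper-Pearson interval obtained by bisection on the binomial tail probability:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    def solve(f):
        # Bisection for the root of a decreasing function f on [0, 1].
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(x - 1, n, p)))
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) - alpha / 2)
    return lower, upper

# 2x2 counts: antigen assay vs. RT-PCR reference standard
tp, fn, tn, fp = 12, 35, 356, 0
sens = tp / (tp + fn)        # sensitivity
spec = tn / (tn + fp)        # specificity
npv = tn / (tn + fn)         # negative predictive value
lr_neg = (1 - sens) / spec   # negative likelihood ratio
sens_ci = clopper_pearson(tp, tp + fn)
```

With these counts, the sensitivity comes out at about 26% with an exact interval of roughly 14%–40%, the negative likelihood ratio at about 0.74, and the negative predictive value at about 91%, matching the reported values.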
The results showed that the true‐positive rate of the Elecsys SARS‐CoV‐2 antigen assay was 100% for Ct values of 15–24.9, 44% for Ct values of 25–29.9, 8% for Ct values of 30–34.9, and 6% for Ct values of 35–39.9. Table S1 shows the individual results of the 47 RT‐PCR‐positive cases. Scatterplot of the cycle threshold (Ct) values of SARS‐CoV‐2 RT‐PCR versus the cutoff index (COI) values of the Elecsys SARS‐CoV‐2 antigen assay in our 47 RT‐PCR‐positive cases. The horizontal dotted line indicates the cutoff value of the Elecsys SARS‐CoV‐2 antigen assay (negative, COI <1.0; positive, COI ≥1.0). Open triangles indicate requests from the emergency department; open circles indicate requests from the intensive care unit. Abbreviations: COI, cutoff index; Ct, cycle threshold; RT‐PCR, reverse‐transcription polymerase chain reaction; and SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. Comparison of the 47 SARS‐CoV‐2 RT‐PCR‐positive cases categorized by virus load (expressed as Ct values) versus the results of the Elecsys SARS‐CoV‐2 antigen assay Abbreviations: COI, cutoff index; Ct, cycle threshold; RT‐PCR, reverse‐transcription polymerase chain reaction; SARS‐CoV‐2, severe acute respiratory syndrome coronavirus 2. DISCUSSION: Although this is only a small single‐center study, the main characteristics of the Elecsys SARS‐CoV‐2 antigen assay can be determined from our results. The Elecsys SARS‐CoV‐2 antigen assay had high specificity (it showed no false‐positive results compared to the RT‐PCR assay), but the assay showed lower sensitivity compared with the RT‐PCR assay (it yielded many false‐negative results). The assay showed a sensitivity of 26% in our cohort, which was fairly low. As expected, the rate of false‐negative results with the Elecsys SARS‐CoV‐2 antigen assay decreased with increasing viral load. In our evaluation, all Elecsys SARS‐CoV‐2 antigen assay results were positive in cases with Ct values of 15–24.9. 
However, for Ct values of 30–39.9, the Elecsys SARS‐CoV‐2 antigen assay had a sensitivity of only 6%–8% in our cohort. This seems to be too low for a tertiary care setting. Therefore, we decided to not use this assay in the clinical routine of our hospital. Our data suggest a clear relationship between the Ct value (as a surrogate measure of viral load) and the sensitivity of the Elecsys SARS‐CoV‐2 antigen assay. A recently published study, for example, demonstrated that SARS‐CoV‐2 infectivity varies with the viral load, among other factors.13, 14 Individuals with high viral loads (as determined by Ct values) were the most infectious.13 Although rapid point‐of‐care antigen tests for detection of SARS‐CoV‐2 have been criticized because of their lower clinical sensitivity than NAATs, these assays may help detect the most infectious cases.13 These rapid point‐of‐care antigen tests usually have a relatively high sensitivity in respiratory specimens with high viral loads (typically >80% in specimens with Ct values <25), while their positive rate in samples with a low viral load (eg, Ct values >25/30) is usually <80%.4, 15, 16 These data support the use of rapid point‐of‐care antigen tests for the detection of SARS‐CoV‐2 in high‐viral‐load individuals. These considerations might also hold true for the Elecsys SARS‐CoV‐2 antigen assay. However, our data do not conclusively determine whether the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay is adequate for population screening programs of asymptomatic or pre‐symptomatic individuals to reduce transmission of SARS‐CoV‐2. Further studies in larger cohorts are necessary to address these issues. When comparing the results of our evaluation with the data from the package insert of the Elecsys SARS‐CoV‐2 antigen assay, considerable differences in the diagnostic performance were noted. 
The package insert describes the performance of the antigen assay in comparison with the Roche Diagnostics SARS‐CoV‐2 RT‐PCR assay. According to Roche Diagnostics, the Elecsys SARS‐CoV‐2 antigen assay has a relative sensitivity of approximately 97% at Ct values <30; however, our evaluation showed a relative sensitivity of approximately 67% at Ct values <30. Furthermore, while the package insert described a relative sensitivity of approximately 84% at Ct values of 30–35, our evaluation showed a relative sensitivity of approximately 8% at Ct values of 30–35. According to the manufacturer, the Elecsys SARS‐CoV‐2 antigen assay has a relative sensitivity of approximately 61% for Ct values of 35–40, but our evaluation showed a relative sensitivity of approximately 6% for Ct values of 35–40. Thus, our assay evaluation suggested that the diagnostic sensitivity of the Elecsys SARS‐CoV‐2 antigen assay was worse than that indicated in the package insert. However, we cannot provide a definitive explanation for these differences with the data available to us. We speculate that the large differences in the reported assay performance data may be related to the use of the SARS‐CoV‐2 extraction solution. Indeed, the package insert of the Elecsys SARS‐CoV‐2 antigen assay did not specify anything about the use of the SARS‐CoV‐2 extraction solution, whereas we were advised by Roche Diagnostics, Italy, to use 1.0 ml of the SARS‐CoV‐2 extraction solution for each nasopharyngeal swab (as described in the Methods). The use of 1.0 ml of this SARS‐CoV‐2 extraction solution may have led to a dilution effect of the SARS‐CoV‐2 antigen, which could have negatively affected the sensitivity of the Elecsys SARS‐CoV‐2 antigen assay. However, as mentioned above, this consideration is speculative. A diverse range of rapid point‐of‐care antigen tests for the detection of SARS‐CoV‐2 from nasopharyngeal swabs and oropharyngeal swabs are currently available in the market. 
Some excellent publications have described the evaluation results for these rapid point‐of‐care assays,17, 18, 19, 20, 21, 22, 23 and meta‐analyses on this topic have also been published.15, 16 A summary of the published data suggests that the sensitivity of these rapid point‐of‐care antigen assays is generally low, ranging from 20% to 95% depending on the assay and the virus load. Therefore, the diagnostic performance of the Elecsys SARS‐CoV‐2 antigen assay is not better than that of the rapid point‐of‐care assays described in the literature; it offers the advantage of high throughput but the disadvantage of a relatively long time to result. In conclusion, it remains to be established whether the Elecsys SARS‐CoV‐2 antigen assay can be considered for detecting potentially infective individuals and thus for reducing the spread of the virus. If so, the Elecsys SARS‐CoV‐2 antigen assay could be useful for population screening of asymptomatic or pre‐symptomatic individuals in accordance with the respective testing strategies of the authorities. In a tertiary care setting, however, the Elecsys SARS‐CoV‐2 antigen assay does not appear to be useful in its current form for clinical decision‐making, in our opinion. CONFLICT OF INTEREST: None declared. AUTHOR CONTRIBUTION: Thomas Mueller: Conceptualization, data collection, data analysis and interpretation, drafting of the article. Julia Kompatscher: Data collection, data analysis and interpretation, critical revision of the article. Mario La Guardia: Data collection, data analysis and interpretation, critical revision of the article. All authors: Final approval of the article. Supporting information: Table S1.
Background: This report describes a manufacturer-independent evaluation of the diagnostic accuracy of the Elecsys SARS-CoV-2 antigen assay from Roche Diagnostics in a tertiary care setting. Methods: In this single-center study, we used nasopharyngeal swabs from 403 cases from the emergency department and intensive care unit of our hospital. The reference standard for detecting SARS-CoV-2 was the reverse-transcription polymerase chain reaction (RT-PCR) assay. Cycle threshold (Ct) values were recorded for positive RT-PCR assays. The index test was the Elecsys SARS-CoV-2 antigen assay. This electrochemiluminescence immunoassay produces results as cutoff index (COI) values, with values ≥1.00 being reported as positive. Results: Of the 403 cases, 47 showed positive results in RT-PCR assays. Of the 47 RT-PCR-positive cases, 12 showed positive results in the antigen assay. Of the 356 RT-PCR-negative cases, all showed negative results in the antigen assay. Thus, the antigen assay showed a sensitivity of 26% (95% CI, 14%-40%) and specificity of 100% (95% CI, 99%-100%). Analysis of the relationship between Ct values and COI values in the 47 RT-PCR-positive cases showed a correlation coefficient of -0.704 (95% CI, -0.824 to -0.522). The true-positive rate of the antigen assay for Ct values of 15-24.9, 25-29.9, 30-34.9, and 35-39.9 was 100%, 44%, 8%, and 6%, respectively. Conclusions: The Elecsys SARS-CoV-2 antigen assay has a low sensitivity for detecting SARS-CoV-2 from nasopharyngeal swabs. Hence, we decided to not use this assay in the clinical routine of our hospital.
null
null
8,018
346
[ 283, 281, 503, 970, 183, 62 ]
11
[ "sars", "cov", "sars cov", "assay", "antigen", "cov antigen", "sars cov antigen", "pcr", "antigen assay", "rt" ]
[ "coronavirus disease 2019", "sars cov tests", "sars cov antigen", "coronavirus examined", "respiratory syndrome coronavirus" ]
null
null
[CONTENT] Antigen | COVID‐19 | diagnostic test | immunoassay | laboratory medicine | polymerase chain reaction | SARS‐CoV‐2 | virology [SUMMARY]
[CONTENT] Antigen | COVID‐19 | diagnostic test | immunoassay | laboratory medicine | polymerase chain reaction | SARS‐CoV‐2 | virology [SUMMARY]
[CONTENT] Antigen | COVID‐19 | diagnostic test | immunoassay | laboratory medicine | polymerase chain reaction | SARS‐CoV‐2 | virology [SUMMARY]
null
[CONTENT] Antigen | COVID‐19 | diagnostic test | immunoassay | laboratory medicine | polymerase chain reaction | SARS‐CoV‐2 | virology [SUMMARY]
null
[CONTENT] Antigens, Viral | COVID-19 | COVID-19 Nucleic Acid Testing | COVID-19 Serological Testing | Humans | Intensive Care Units | Nasopharynx | SARS-CoV-2 | Sensitivity and Specificity | Tertiary Care Centers | Viral Load [SUMMARY]
[CONTENT] Antigens, Viral | COVID-19 | COVID-19 Nucleic Acid Testing | COVID-19 Serological Testing | Humans | Intensive Care Units | Nasopharynx | SARS-CoV-2 | Sensitivity and Specificity | Tertiary Care Centers | Viral Load [SUMMARY]
[CONTENT] Antigens, Viral | COVID-19 | COVID-19 Nucleic Acid Testing | COVID-19 Serological Testing | Humans | Intensive Care Units | Nasopharynx | SARS-CoV-2 | Sensitivity and Specificity | Tertiary Care Centers | Viral Load [SUMMARY]
null
[CONTENT] Antigens, Viral | COVID-19 | COVID-19 Nucleic Acid Testing | COVID-19 Serological Testing | Humans | Intensive Care Units | Nasopharynx | SARS-CoV-2 | Sensitivity and Specificity | Tertiary Care Centers | Viral Load [SUMMARY]
null
[CONTENT] coronavirus disease 2019 | sars cov tests | sars cov antigen | coronavirus examined | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] coronavirus disease 2019 | sars cov tests | sars cov antigen | coronavirus examined | respiratory syndrome coronavirus [SUMMARY]
[CONTENT] coronavirus disease 2019 | sars cov tests | sars cov antigen | coronavirus examined | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] coronavirus disease 2019 | sars cov tests | sars cov antigen | coronavirus examined | respiratory syndrome coronavirus [SUMMARY]
null
[CONTENT] sars | cov | sars cov | assay | antigen | cov antigen | sars cov antigen | pcr | antigen assay | rt [SUMMARY]
[CONTENT] sars | cov | sars cov | assay | antigen | cov antigen | sars cov antigen | pcr | antigen assay | rt [SUMMARY]
[CONTENT] sars | cov | sars cov | assay | antigen | cov antigen | sars cov antigen | pcr | antigen assay | rt [SUMMARY]
null
[CONTENT] sars | cov | sars cov | assay | antigen | cov antigen | sars cov antigen | pcr | antigen assay | rt [SUMMARY]
null
[CONTENT] tests | naats | cov | sars cov | sars | antigen | clinical | commercially available | commercially | failure [SUMMARY]
[CONTENT] sars cov | cov | sars | assay | antigen | pool | test | pcr | concentration | sample [SUMMARY]
[CONTENT] pcr | rt | rt pcr | ct | cov | sars cov | sars | pcr positive | rt pcr positive | ct values [SUMMARY]
null
[CONTENT] sars | cov | sars cov | declared | assay | antigen | pcr | sars cov antigen | cov antigen | antigen assay [SUMMARY]
null
[CONTENT] Elecsys | Roche Diagnostics | tertiary [SUMMARY]
[CONTENT] 403 ||| RT-PCR ||| RT-PCR ||| Elecsys ||| ≥1.00 [SUMMARY]
[CONTENT] 403 | 47 | RT-PCR ||| 47 | RT-PCR | 12 ||| 356 | RT-PCR ||| 26% | 95% | CI | 14%-40% | 100% | 95% | CI | 99%-100% ||| COI | 47 | RT-PCR | 95% | CI ||| 15 | 25 | 30-34.9 | 35 | 100% | 44% | 8% | 6% [SUMMARY]
null
[CONTENT] ||| Elecsys | Roche Diagnostics | tertiary ||| 403 ||| RT-PCR ||| RT-PCR ||| Elecsys ||| ≥1.00 ||| 403 | 47 | RT-PCR ||| 47 | RT-PCR | 12 ||| 356 | RT-PCR ||| 26% | 95% | CI | 14%-40% | 100% | 95% | CI | 99%-100% ||| COI | 47 | RT-PCR | 95% | CI ||| 15 | 25 | 30-34.9 | 35 | 100% | 44% | 8% | 6% ||| Elecsys ||| [SUMMARY]
null
Circular stapling anastomosis with indocyanine green fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy: a propensity score-matched analysis.
35488244
Thoracoscopic esophagectomy has been extensively used worldwide as a curative surgery for patients with esophageal cancer; however, complications such as anastomotic leakage and stenosis remain a major concern. Therefore, the objective of this study was to evaluate the efficacy of circular stapling anastomosis with indocyanine green (ICG) fluorescence imaging, which was standardized for cervical esophagogastric anastomosis after thoracoscopic esophagectomy.
BACKGROUND
Altogether, 121 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection and cervical esophagogastric anastomosis from November 2009 to December 2020 at Tottori University Hospital were enrolled in this study. Patients who underwent surgery before the anastomotic method was standardized were included in the classical group (n = 82) and patients who underwent surgery after the anastomotic method was standardized were included in the ICG circular group (n = 39). The short-term postoperative outcomes, including anastomotic complications, were compared between the two groups using propensity-matched analysis and the risk factors for anastomotic leakage were evaluated using logistic regression analyses.
METHODS
Of the 121 patients, 33 were included in each group after propensity score matching. The clinicopathological characteristics of patients did not differ between the two groups after propensity score matching. In terms of perioperative outcomes, a significantly higher proportion of patients who underwent surgery using the laparoscopic approach (P < 0.001) and narrow gastric tube (P = 0.003), as well as those who had a lower volume of blood loss (P = 0.009) in the ICG circular group were observed after matching. Moreover, the ICG circular group had a significantly lower incidence of anastomotic leakage (39% vs. 9%, P = 0.004) and anastomotic stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001) than the classical group. According to the multivariate analysis, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (P = 0.013).
RESULTS
Circular stapling anastomosis with ICG fluorescence imaging is effective in reducing complications such as anastomotic leakage and stenosis.
CONCLUSIONS
[ "Anastomosis, Surgical", "Anastomotic Leak", "Constriction, Pathologic", "Esophageal Neoplasms", "Esophagectomy", "Humans", "Indocyanine Green", "Optical Imaging", "Propensity Score" ]
9052471
Background
Esophageal cancer is the ninth most commonly diagnosed cancer worldwide and the sixth most common cause of cancer-related mortality [1]. Esophagectomy is the mainstay of treatment for resectable esophageal cancer. Thoracoscopic esophagectomy was first reported by Cuschieri et al. in 1992 [2] and has since been widely adopted worldwide as a curative surgery for esophageal cancer. In a randomized controlled trial, it was reported to reduce the incidence of respiratory complications compared with open esophagectomy [3]. However, complications, including anastomotic leakage and stenosis, remain a major cause of concern. At our institution, thoracoscopic esophagectomy was first performed in November 2009 and the procedure has been standardized; however, the anastomotic method was not standardized until June 2018. In fact, various anastomotic methods, such as hand-sewn, triangulating [4], circular stapling, and Collard anastomosis [5], were performed, but the rate of anastomotic complications did not decrease. A systematic review reported that indocyanine green (ICG) fluorescence imaging could be an important adjunct tool for reducing anastomotic leakage following esophagectomy [6]. ICG is a water-soluble near-infrared fluorescent dye with established immediate and long-term safety [7, 8]. ICG fluorescence imaging is a simple evaluation method. In recent robot-assisted surgery, high-resolution near-infrared images have been obtained with the Firefly system of the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). Therefore, to assess blood flow in the gastric tube during reconstruction after esophagectomy, the use of ICG fluorescence imaging was standardized at our institution in July 2018. In addition, the anastomotic method was standardized to circular stapling anastomosis because it is simple and applicable to nearly all cases, including those with a short remnant esophagus. 
This study aimed to evaluate the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing the short-term outcomes before and after the anastomotic method was standardized via a propensity-matched analysis.
null
null
Results
Characteristics of patients Of 121 patients, 82 were included in the classical group and 39 in the ICG circular group before matching. Next, 33 patients were included in each group after matching (Fig. 1). Table 1 shows the clinicopathological characteristics of patients before and after matching. Before matching, significant differences were noted in terms of the American Society of Anesthesiologists physical status (ASA-PS) score (P = 0.021) and histological type (P = 0.009). However, after matching, the background characteristics did not significantly differ between the two groups. Table 1Characteristics of patientsBefore matchingAfter matchingClassical groupICG circular groupP valueClassical groupICG circular groupP value(n = 82)(n = 39)(n = 33)(n = 33)Age (years)0.8920.872Median (quartiles)66 (61–72)66 (61–71)65 (61–73)67 (63–73)Sex0.9140.282 Male70 (85%)33 (85%)30 (90%)27 (82%) Female12 (15%)6 (15%)3 (9%)6 (18%)Body mass index (kg/m2)21.2 ± 3.122.2 ± 3.30.08321.8 ± 3.121.9 ± 3.30.677Serum albumin level (g/dL)4.2 ± 0.44.1 ± 0.40.4944.1 ± 0.44.1 ± 0.40.985Brinkman index0.7330.278Median (quartiles)800 (445–1000)860 (435–1000)820 (600–1140)840 (405–1000)ECOG performance status0.5860.601 068 (83%)34 (87%)27 (82%)28 (85%) 112 (15%)5 (13%)5 (15%)5 (15%) 22 (2%)0 (0%)1 (3%)0 (0%)Comorbidity Diabetes15 (18%)4 (10%)0.25610 (30%)4 (12%)0.071 Cardiovascular disease10 (12%)6 (15%)0.6282 (6%)5 (15%)0.230 Obstructive ventilation failure27 (33%)13 (33%)0.96510 (30%)11 (33%)0.792ASA-PS score0.0210.115 111 (13%)1 (3%)3 (9%)1 (3%) 263 (77%)28 (72%)27 (82%)23 (70%) 38 (10%)10 (26%)3 (9%)9 (27%)Histological type0.0091.000 Squamous cell carcinoma78 (95%)30 (77%)30 (91%)30 (91%) Adenocarcinoma2 (2%)6 (15%)2 (6%)2 (6%) Others2 (2%)3 (8%)1 (3%)1 (3%)Tumor location0.2290.642 Upper thoracic11 (13%)6 (15%)5 (15%)6 (18%) Middle thoracic43 (52%)16 (41%)15 (46%)16 (49%) Lower thoracic24 (29%)11 (28%)11 (33%)7 (21%) Abdominal4 (5%)6 (15%)2 (6%)4 (12%)cT0.4050.667 136 (44%)16 (41%)16 (49%)14 
(42%) 214 (17%)11 (28%)5 (15%)9 (27%) 331 (38%)11 (28%)11 (33%)9 (27%) 4a1 (1%)1 (3%)1 (3%)1 (3%)cN0.2240.420 044 (54%)28 (72%)19 (58%)24 (73%) 117 (21%)5 (13%)7 (21%)4 (12%) 220 (24%)5 (13%)7 (21%)5 (15%) 31 (1%)1 (3%)0 (0%)0 (0%)cStage0.2960.393 132 (39%)14 (36%)16 (49%)13 (39%) 219 (23%)14 (36%)5 (15%)10 (30%) 331 (38%)11 (28%)12 (36%)10 (30%)Neoadjuvant chemotherapy0.9741.000 Absent36 (44%)17 (44%)16 (49%)16 (49%) Present46 (56%)22 (56%)17 (52%)17 (52%)ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Characteristics of patients ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Of 121 patients, 82 were included in the classical group and 39 in the ICG circular group before matching. Next, 33 patients were included in each group after matching (Fig. 1). Table 1 shows the clinicopathological characteristics of patients before and after matching. Before matching, significant differences were noted in terms of the American Society of Anesthesiologists physical status (ASA-PS) score (P = 0.021) and histological type (P = 0.009). However, after matching, the background characteristics did not significantly differ between the two groups. 
Table 1Characteristics of patientsBefore matchingAfter matchingClassical groupICG circular groupP valueClassical groupICG circular groupP value(n = 82)(n = 39)(n = 33)(n = 33)Age (years)0.8920.872Median (quartiles)66 (61–72)66 (61–71)65 (61–73)67 (63–73)Sex0.9140.282 Male70 (85%)33 (85%)30 (90%)27 (82%) Female12 (15%)6 (15%)3 (9%)6 (18%)Body mass index (kg/m2)21.2 ± 3.122.2 ± 3.30.08321.8 ± 3.121.9 ± 3.30.677Serum albumin level (g/dL)4.2 ± 0.44.1 ± 0.40.4944.1 ± 0.44.1 ± 0.40.985Brinkman index0.7330.278Median (quartiles)800 (445–1000)860 (435–1000)820 (600–1140)840 (405–1000)ECOG performance status0.5860.601 068 (83%)34 (87%)27 (82%)28 (85%) 112 (15%)5 (13%)5 (15%)5 (15%) 22 (2%)0 (0%)1 (3%)0 (0%)Comorbidity Diabetes15 (18%)4 (10%)0.25610 (30%)4 (12%)0.071 Cardiovascular disease10 (12%)6 (15%)0.6282 (6%)5 (15%)0.230 Obstructive ventilation failure27 (33%)13 (33%)0.96510 (30%)11 (33%)0.792ASA-PS score0.0210.115 111 (13%)1 (3%)3 (9%)1 (3%) 263 (77%)28 (72%)27 (82%)23 (70%) 38 (10%)10 (26%)3 (9%)9 (27%)Histological type0.0091.000 Squamous cell carcinoma78 (95%)30 (77%)30 (91%)30 (91%) Adenocarcinoma2 (2%)6 (15%)2 (6%)2 (6%) Others2 (2%)3 (8%)1 (3%)1 (3%)Tumor location0.2290.642 Upper thoracic11 (13%)6 (15%)5 (15%)6 (18%) Middle thoracic43 (52%)16 (41%)15 (46%)16 (49%) Lower thoracic24 (29%)11 (28%)11 (33%)7 (21%) Abdominal4 (5%)6 (15%)2 (6%)4 (12%)cT0.4050.667 136 (44%)16 (41%)16 (49%)14 (42%) 214 (17%)11 (28%)5 (15%)9 (27%) 331 (38%)11 (28%)11 (33%)9 (27%) 4a1 (1%)1 (3%)1 (3%)1 (3%)cN0.2240.420 044 (54%)28 (72%)19 (58%)24 (73%) 117 (21%)5 (13%)7 (21%)4 (12%) 220 (24%)5 (13%)7 (21%)5 (15%) 31 (1%)1 (3%)0 (0%)0 (0%)cStage0.2960.393 132 (39%)14 (36%)16 (49%)13 (39%) 219 (23%)14 (36%)5 (15%)10 (30%) 331 (38%)11 (28%)12 (36%)10 (30%)Neoadjuvant chemotherapy0.9741.000 Absent36 (44%)17 (44%)16 (49%)16 (49%) Present46 (56%)22 (56%)17 (52%)17 (52%)ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Characteristics of patients 
ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Changes in anastomotic methods and perioperative outcomes Figure 3 presents changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy, and Table 2 depicts the perioperative outcomes in both groups. As shown in Table 2, prior to matching, a significantly higher proportion of patients underwent surgery using the laparoscopic approach (P < 0.001) and narrow gastric tube (P = 0.001) and those who had a lower volume of blood loss (P = 0.038) in the ICG circular group. After matching, the same factors indicated a significant difference. In terms of postoperative outcomes, the ICG circular group had a significantly lower proportion of patients who were observed to have anastomotic leakage (34% vs. 8%, P = 0.002) and a shorter postoperative hospital stay (29 vs. 20 days, P < 0.001) before matching. After matching, a significantly lower proportion of patients were noted to have anastomotic leakage (39% vs. 9%, P = 0.004) and stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001) in the ICG circular group. Fig. 3Changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. Changes in the classical and ICG circular groups, and the annual incidence rates of anastomotic leakage and stenosis in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy are shown Changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. 
Table 2. Perioperative outcomes of patients with esophageal cancer after thoracoscopic esophagectomy. The first three data columns show the cohort before matching; the last three, after matching.

| Outcome | Classical (n = 82) | ICG circular (n = 39) | P value | Classical (n = 33) | ICG circular (n = 33) | P value |
|---|---|---|---|---|---|---|
| Abdominal approach: open | 34 (42%) | 0 (0%) | < 0.001 | 15 (46%) | 0 (0%) | < 0.001 |
| Abdominal approach: laparoscopic | 48 (59%) | 39 (100%) | | 18 (55%) | 33 (100%) | |
| Lymph node dissection: two-field | 18 (22%) | 13 (33%) | 0.180 | 10 (30%) | 10 (30%) | 1.000 |
| Lymph node dissection: three-field | 64 (78%) | 26 (67%) | | 23 (70%) | 23 (70%) | |
| Route of reconstruction: retrosternal | 68 (83%) | 37 (95%) | 0.070 | 27 (82%) | 31 (94%) | 0.131 |
| Route of reconstruction: posterior mediastinal | 14 (17%) | 2 (5%) | | 6 (18%) | 2 (6%) | |
| Shape of gastric tube: wide | 21 (26%) | 0 (0%) | 0.001 | 8 (24%) | 0 (0%) | 0.003 |
| Shape of gastric tube: narrow | 61 (74%) | 39 (100%) | | 25 (76%) | 33 (100%) | |
| Total operative time (min) | 634 ± 89 | 617 ± 53 | 0.573 | 638 ± 100 | 616 ± 51 | 0.753 |
| Volume of blood loss (mL) | 186 ± 219 | 103 ± 97 | 0.038 | 251 ± 298 | 93 ± 89 | 0.009 |
| Anastomotic leakage | 28 (34%) | 3 (8%) | 0.002 | 13 (39%) | 3 (9%) | 0.004 |
| Anastomotic stenosis | 29 (35%) | 8 (21%) | 0.097 | 15 (46%) | 7 (21%) | 0.037 |
| Pneumonia | 18 (22%) | 11 (28%) | 0.451 | 8 (24%) | 9 (27%) | 0.778 |
| Recurrent nerve paralysis | 14 (17%) | 2 (5%) | 0.070 | 7 (21%) | 2 (6%) | 0.073 |
| Postoperative hospital stay (days), median (quartiles) | 29 (22–44) | 20 (16–28) | < 0.001 | 30 (25–44) | 20 (17–28) | < 0.001 |
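The categorical comparisons above used the χ2 test (see Statistical analysis). As a sketch, a plain-Python Pearson χ2 for a 2×2 table (no continuity correction; statistical packages that apply a correction may give slightly different values) reproduces the reported P = 0.004 for anastomotic leakage in the matched cohort:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided P value), 1 df."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, chi-square is the square of a standard
    # normal, so the upper tail is P(X > stat) = erfc(sqrt(stat / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Anastomotic leakage in the matched cohort (Table 2):
# classical 13 of 33 vs. ICG circular 3 of 33.
stat, p = chi2_2x2(13, 20, 3, 30)
print(round(stat, 2), round(p, 3))  # 8.25 0.004
```

The P value matches the one reported after matching, which suggests no continuity correction was applied to this comparison.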
Risk factor analyses of anastomotic leakage

Finally, risk factors for anastomotic leakage were evaluated in the 66 propensity-matched patients. Univariate analysis indicated that the Brinkman index (P = 0.048) and the anastomotic method (P = 0.008) were significantly associated with anastomotic leakage (Table 3).
Multivariate analysis identified the anastomotic method as an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (odds ratio [OR]: 5.983, 95% confidence interval [CI]: 1.469–24.359, P = 0.013) (Table 4).

Table 3. Univariate logistic regression analyses of anastomotic leakage. ORs are given for the first listed level versus the second (reference) level.

| Variable (level vs. reference) | Leakage absent (n = 50) | Leakage present (n = 16) | OR | 95% CI | P value |
|---|---|---|---|---|---|
| Age (years): <65 / ≥65 | 20 (40%) / 30 (60%) | 8 (50%) / 8 (50%) | 1.500 | 0.484–4.651 | 0.483 |
| Sex: male / female | 42 (84%) / 8 (16%) | 15 (94%) / 1 (6%) | 2.857 | 0.329–24.795 | 0.341 |
| Body mass index (kg/m2): <22 / ≥22 | 24 (48%) / 26 (52%) | 10 (63%) / 6 (38%) | 1.806 | 0.569–5.726 | 0.316 |
| Serum albumin (g/dL): <4 / ≥4 | 20 (40%) / 30 (60%) | 5 (31%) / 11 (69%) | 0.682 | 0.206–2.261 | 0.531 |
| Brinkman index: <800 / ≥800 | 24 (48%) / 26 (52%) | 3 (19%) / 13 (81%) | 0.250 | 0.063–0.986 | 0.048 |
| Performance status: 0 / 1, 2 | 42 (84%) / 8 (16%) | 13 (81%) / 3 (19%) | 0.825 | 0.191–3.574 | 0.797 |
| Diabetes: absent / present | 39 (78%) / 11 (22%) | 13 (81%) / 3 (19%) | 1.222 | 0.295–5.069 | 0.782 |
| Cardiovascular disease: absent / present | 46 (92%) / 4 (8%) | 13 (81%) / 3 (19%) | 0.377 | 0.075–1.901 | 0.237 |
| Obstructive ventilation failure: absent / present | 36 (72%) / 14 (28%) | 9 (56%) / 7 (44%) | 0.500 | 0.156–1.603 | 0.243 |
| ASA-PS score: 1 / 2, 3 | 3 (6%) / 47 (94%) | 1 (6%) / 15 (94%) | 1.044 | 0.101–10.806 | 0.971 |
| Histological type: squamous cell carcinoma / others | 45 (90%) / 5 (10%) | 15 (94%) / 1 (6%) | 1.667 | 0.180–15.425 | 0.653 |
| Tumor location: Ut, Mt / Lt, Ae | 31 (62%) / 19 (38%) | 11 (69%) / 5 (31%) | 1.348 | 0.406–4.484 | 0.626 |
| cT: 1 / 2, 3, 4a | 21 (42%) / 29 (58%) | 9 (56%) / 7 (44%) | 1.776 | 0.570–5.531 | 0.322 |
| cN: absent / present | 33 (66%) / 17 (34%) | 10 (63%) / 6 (38%) | 0.859 | 0.267–2.764 | 0.798 |
| cStage: 1 / 2, 3 | 20 (40%) / 30 (60%) | 9 (56%) / 7 (44%) | 1.929 | 0.618–6.020 | 0.258 |
| Neoadjuvant chemotherapy: absent / present | 22 (44%) / 28 (56%) | 10 (63%) / 6 (38%) | 2.121 | 0.668–6.739 | 0.202 |
| Abdominal approach: open / laparoscopic | 11 (22%) / 39 (78%) | 4 (25%) / 12 (75%) | 1.182 | 0.317–4.400 | 0.803 |
| Lymph node dissection: two-field / three-field | 15 (30%) / 35 (70%) | 5 (31%) / 11 (69%) | 1.061 | 0.314–3.585 | 0.925 |
| Route of reconstruction: retrosternal / posterior mediastinal | 43 (86%) / 7 (14%) | 15 (94%) / 1 (6%) | 2.442 | 0.277–21.519 | 0.421 |
| Shape of gastric tube: wide / narrow | 5 (10%) / 45 (90%) | 3 (19%) / 13 (81%) | 2.077 | 0.437–9.871 | 0.358 |
| Total operative time (min): <600 / ≥600 | 20 (40%) / 30 (60%) | 6 (38%) / 10 (63%) | 0.900 | 0.282–2.870 | 0.859 |
| Blood loss (mL): <100 / ≥100 | 26 (52%) / 24 (48%) | 7 (44%) / 9 (56%) | 0.718 | 0.231–2.229 | 0.566 |
| Anastomotic method: ICG circular / classical | 30 (60%) / 20 (40%) | 3 (19%) / 13 (81%) | 0.154 | 0.039–0.610 | 0.008 |

OR odds ratio, CI confidence interval, ASA-PS American Society of Anesthesiologists physical status

Table 4. Multivariate logistic regression analyses of anastomotic leakage

| Variable | OR | 95% CI | P value |
|---|---|---|---|
| Brinkman index (≥ 800) | 3.538 | 0.842–14.860 | 0.084 |
| Anastomotic method (classical group) | 5.983 | 1.469–24.359 | 0.013 |

OR odds ratio, CI confidence interval
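As a cross-check on Table 3, the unadjusted odds ratio and Woolf (log-method) 95% CI for anastomotic leakage can be computed directly from the matched-cohort 2×2 counts (classical 13/20, ICG circular 3/30, events/non-events); the result, 6.5 (1.64–25.76), is the reciprocal of the reported ICG-vs-classical OR of 0.154 (0.039–0.610) and is close to, but distinct from, the multivariate OR of 5.983, which additionally adjusts for the Brinkman index. This is a minimal sketch, not the SPSS procedure used in the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Woolf (log-method) CI for a 2x2 table:
    exposed group a/b (events/non-events) vs. unexposed group c/d."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Leakage in the matched cohort: classical 13/20 vs. ICG circular 3/30.
or_, lo, hi = odds_ratio_ci(13, 20, 3, 30)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 6.5 1.64 25.76
```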
Conclusions
Circular stapling anastomosis with ICG fluorescence imaging effectively reduced anastomotic complications of cervical esophagogastric anastomosis after thoracoscopic esophagectomy. Standardization of the anastomotic method was a particularly important element of this improvement. However, the incidence of anastomotic stenosis still leaves room for improvement and should be addressed in future work.
[ "Background", "Methods", "Patients", "Surgical procedure", "ICG circular anastomosis method", "Definition of perioperative complications", "Statistical analysis", "Characteristics of patients", "Changes in anastomotic methods and perioperative outcomes", "Risk factor analyses of anastomotic leakage" ]
[ "Esophageal cancer is the ninth most commonly diagnosed cancer worldwide and the sixth most common cause of cancer-related mortality [1]. Esophagectomy is the mainstay of treatment for resectable esophageal cancer. Thoracoscopic esophagectomy was first reported by Cuschieri et al. in 1992 [2] and has been used worldwide to a large extent as a curative surgery for esophageal cancer. It was reported to reduce the incidence of respiratory complications compared with open esophagectomy in a randomized controlled trial [3]. However, complications, including anastomotic leakage and stenosis, are a major cause of concern. At our institution, thoracoscopic esophagectomy was initially performed in November 2009 and this procedure has been standardized; however, the anastomotic method was not standardized until June 2018. In fact, various anastomotic methods, such as hand-sewn, triangulating [4], circular stapling, and Collard anastomosis [5], have been performed, but the complication rate of anastomosis has not reduced.\nA systematic review reported that indocyanine green (ICG) fluorescence imaging could be an important adjunct tool for reducing anastomotic leakage following esophagectomy [6]. ICG is a water-soluble near-infrared phosphor that has immediate and long-term safety [7, 8]. ICG fluorescence imaging is a simple evaluation method. In a recent robot-assisted surgery, high-resolution near-infrared images were obtained by employing the Firefly system with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). Therefore, in order to assess the blood flow in the gastric tube during esophagectomy reconstruction, the use of ICG fluorescence imaging was standardized at our institution in July 2018. 
In addition, the anastomotic method was standardized to circular stapling anastomosis because it is simple and applicable to nearly all cases, including those with a short remnant esophagus.\nThis study aimed to evaluate the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing the short-term outcomes before and after the anastomotic method was standardized via a propensity-matched analysis.", "Patients Altogether, 145 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection between November 2009 and December 2020 at Tottori University Hospital were included in this study. Among them, 19, 2, and 3 patients who underwent reconstruction using the jejunum or colon, pharyngeal gastric tube anastomosis caused by the simultaneous duplication of hypopharyngeal cancer, and two-stage reconstruction, respectively, were excluded. Finally, 121 patients were enrolled in this study (Fig. 1). Patients who underwent surgery until June 2018, i.e., before the standardization of the anastomotic method, were included in the classical group, and those who underwent surgery from July 2018, i.e., after the standardization of the anastomotic method, were included in the ICG circular group. The clinicopathological findings were determined as per the Japanese Classification of Esophageal Cancer (11th edition) [9, 10]. This study was approved by the institutional review board of Tottori University School of Medicine (20A234), and the requirement for informed consent was waived.\n\nFig. 
1Patient selection for the evaluation of cervical esophagogastric anastomosis after thoracoscopic esophagectomy\nSurgical procedure All patients underwent thoracoscopic subtotal esophagectomy with mediastinal lymph node dissection in the prone position under right pneumothorax, and robot-assisted esophagectomy has been used since February 2020. After completion of the thoracic procedure, patients were repositioned in the supine position, and cervical and abdominal procedures were simultaneously initiated.
Cervical lymph node dissection was not performed in patients with lower thoracic or abdominal esophageal cancer without cervical or upper mediastinal lymph node metastasis. Abdominal procedures, namely laparotomy, hand-assisted laparoscopic surgery, or complete laparoscopy, were performed for abdominal lymph node dissection. In patients treated with complete laparoscopy, an 8-cm incision was made in the upper abdomen after abdominal lymph node dissection was completed, and the gastric tube was created under direct visualization in all cases. The gastric tube was created with either a wide or a narrow shape: for the wide gastric tube, the stomach was resected just below the esophagogastric junction; for the narrow gastric tube, the lesser curvature of the stomach was resected so that the diameter of the tube was approximately 3.5 cm. The tube was then pulled up to the neck through the retrosternal or posterior mediastinal route, and esophagogastric anastomosis was performed on the left side of the neck. In the classical group, before the anastomotic method was standardized, additional Kocher mobilization was performed as needed so that a portion of the gastric tube with good blood flow on visual inspection could be used for reconstruction.
ICG circular anastomosis method The ICG circular anastomosis approach was used as follows: After the gastric tube was created, ICG at a dose of 10 mg/body was administered intravenously, and ICG fluorescence imaging of the blood flow in the gastric tube was assessed using the PhotoDynamic Eye (Hamamatsu Photonics, Hamamatsu, Japan) or the Firefly system, which was integrated with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). The gastric tube was used for reconstruction up to the site where its wall showed uniform contrast within 20 s after the right gastroepiploic artery was contrasted with ICG, as reported by Noma et al. [11] (Fig. 2a, b). After the gastric tube was pulled up to the neck, end-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler (Medtronic, Minneapolis, Minnesota) (Fig. 2c).
The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge (Medtronic, Minneapolis, Minnesota) (Fig. 2d). Finally, the staple line was buried.\n\nFig. 2Procedures in indocyanine green (ICG) circular anastomosis. a The right gastroepiploic artery was contrasted with ICG; b The site where the wall of the gastric tube had a uniform contrast with ICG; c End-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler; d The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge
Definition of perioperative complications Anastomotic leakage was defined as saliva leakage from the cervical wound, contrast leakage outside the gastrointestinal tract on gastrointestinal series, and abnormal air or fluid accumulation around the site of anastomosis on computed tomography (CT) scan. A routine gastrointestinal series was performed on postoperative day 7; oral intake was initiated on postoperative day 8 for patients who experienced no problems in the postoperative course. This protocol was maintained unchanged throughout the study period.
Anastomotic stenosis was defined as cases in which an endoscope of 9.0 mm diameter could not pass through the anastomosis and balloon dilation was required during endoscopy for postoperative dysphagia. Pneumonia was defined as the appearance of consolidation on chest radiography or CT scan and the detection of bacteria on sputum culture. Recurrent nerve paralysis was assessed by an otolaryngologist on postoperative day 6 or 7 via laryngoscopy. The follow-up period for postoperative complications was 1 year postoperatively for anastomotic stenosis and until postoperative day 30 for other complications.\nStatistical analysis Continuous data were presented as mean ± standard deviation or median with quartiles, as indicated. The Mann–Whitney U-test and the χ2 test were used to evaluate differences in continuous and categorical variables, respectively.
A propensity-matched analysis was conducted using the logistic regression model and covariates such as age, sex, histological type, tumor location, clinical stage, and presence or absence of neoadjuvant chemotherapy. Univariate and multivariate logistic regression analyses were used to identify the risk factors for anastomotic leakage. Variables that were considered statistically significant in the univariate analysis were used for the multivariate analysis. P values of < 0.05 indicated statistically significant differences, and the Statistical Package for the Social Sciences software version 25 (IBM SPSS Inc., Chicago, Illinois) was used for statistical analyses.", "Altogether, 145 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection between November 2009 and December 2020 at Tottori University Hospital were included in this study.
Among them, 19, 2, and 3 patients who underwent reconstruction using the jejunum or colon, pharyngeal gastric tube anastomosis caused by the simultaneous duplication of hypopharyngeal cancer, and two-stage reconstruction, respectively, were excluded. Finally, 121 patients were enrolled in this study (Fig. 1). Patients who underwent surgery until June 2018, i.e., before the standardization of the anastomotic method, were included in the classical group, and those who underwent surgery from July 2018, i.e., after the standardization of the anastomotic method, were included in the ICG circular group. The clinicopathological findings were determined as per the Japanese Classification of Esophageal Cancer (11th edition) [9, 10]. This study was approved by the institutional review board of Tottori University School of Medicine (20A234), and the requirement for informed consent was waived.\n\nFig. 1Patient selection for the evaluation of cervical esophagogastric anastomosis after thoracoscopic esophagectomy", "All patients underwent thoracoscopic subtotal esophagectomy with mediastinal lymph node dissection in the prone position under right pneumothorax, and robot-assisted esophagectomy has been used since February 2020. After completion of the thoracic procedure, patients were repositioned in the supine position, and cervical and abdominal procedures were simultaneously initiated. Cervical lymph node dissection was not performed in patients with lower thoracic or abdominal esophageal cancer without cervical or upper mediastinal lymph node metastasis. Abdominal procedures, such as laparotomy, hand-assisted laparoscopic surgery, and complete laparoscopy for abdominal lymph node dissection, were performed.
In patients treated with complete laparoscopy, an 8-cm incision was made in the upper abdomen after abdominal lymph node dissection was completed, and the gastric tube was created under direct visualization in all cases. The gastric tube was created with either a wide or a narrow shape: for the wide gastric tube, the stomach was resected just below the esophagogastric junction; for the narrow gastric tube, the lesser curvature of the stomach was resected so that the diameter of the tube was approximately 3.5 cm. The tube was then pulled up to the neck through the retrosternal or posterior mediastinal route, and esophagogastric anastomosis was performed on the left side of the neck. In the classical group, before the anastomotic method was standardized, additional Kocher mobilization was performed as needed so that a portion of the gastric tube with good blood flow on visual inspection could be used for reconstruction.", "The ICG circular anastomosis approach was used as follows: After the gastric tube was created, ICG at a dose of 10 mg/body was administered intravenously, and ICG fluorescence imaging of the blood flow in the gastric tube was assessed using the PhotoDynamic Eye (Hamamatsu Photonics, Hamamatsu, Japan) or the Firefly system, which was integrated with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). The gastric tube was used for reconstruction up to the site where its wall showed uniform contrast within 20 s after the right gastroepiploic artery was contrasted with ICG, as reported by Noma et al. [11] (Fig. 2a, b). After the gastric tube was pulled up to the neck, end-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler (Medtronic, Minneapolis, Minnesota) (Fig. 2c). The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge (Medtronic, Minneapolis, Minnesota) (Fig.
2d). Finally, the staple line was buried.\n\nFig. 2 Procedures in indocyanine green (ICG) circular anastomosis. a The right gastroepiploic artery was contrasted with ICG; b The site where the wall of the gastric tube had a uniform contrast with ICG; c End-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler; d The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge", "Anastomotic leakage was defined as saliva leakage from the cervical wound, contrast leakage outside the gastrointestinal tract on a gastrointestinal series, or abnormal air or fluid accumulation around the anastomotic site on computed tomography (CT). A routine gastrointestinal series was performed on postoperative day 7, and oral intake was initiated on postoperative day 8 in patients with an uneventful postoperative course. This protocol was maintained unchanged throughout the study period. Anastomotic stenosis was defined as the anastomosis not admitting a 9.0-mm endoscope and requiring balloon dilation during endoscopy for postoperative dysphagia. Pneumonia was defined as the appearance of consolidation on chest radiography or CT together with the detection of bacteria on sputum culture. Recurrent nerve paralysis was assessed by an otolaryngologist on postoperative day 6 or 7 via laryngoscopy.
The follow-up period for postoperative complications was 1 year postoperatively for anastomotic stenosis and until postoperative day 30 for other complications.", "Continuous data were presented as mean ± standard deviation or median with quartiles, as indicated. The Mann–Whitney U-test and the χ2 test were used to evaluate differences in continuous and categorical variables, respectively. A propensity-matched analysis was conducted using the logistic regression model and covariates such as age, sex, histological type, tumor location, clinical stage, and presence or absence of neoadjuvant chemotherapy. Univariate and multivariate logistic regression analyses were used to identify the risk factors for anastomotic leakage. Variables that were considered statistically significant in the univariate analysis were used for the multivariate analysis. P values of < 0.05 indicated statistically significant differences, and the Statistical Package for the Social Sciences software version 25 (IBM SPSS Inc., Chicago, Illinois) was used for statistical analyses.", "Of 121 patients, 82 were included in the classical group and 39 in the ICG circular group before matching. Next, 33 patients were included in each group after matching (Fig. 1). Table 1 shows the clinicopathological characteristics of patients before and after matching. Before matching, significant differences were noted in terms of the American Society of Anesthesiologists physical status (ASA-PS) score (P = 0.021) and histological type (P = 0.009). 
However, after matching, the background characteristics did not significantly differ between the two groups.\n\nTable 1 Characteristics of patients

| | Before: Classical (n = 82) | Before: ICG circular (n = 39) | P value | After: Classical (n = 33) | After: ICG circular (n = 33) | P value |
|---|---|---|---|---|---|---|
| Age (years), median (quartiles) | 66 (61–72) | 66 (61–71) | 0.892 | 65 (61–73) | 67 (63–73) | 0.872 |
| Sex: male | 70 (85%) | 33 (85%) | 0.914 | 30 (90%) | 27 (82%) | 0.282 |
| Sex: female | 12 (15%) | 6 (15%) | | 3 (9%) | 6 (18%) | |
| Body mass index (kg/m²) | 21.2 ± 3.1 | 22.2 ± 3.3 | 0.083 | 21.8 ± 3.1 | 21.9 ± 3.3 | 0.677 |
| Serum albumin level (g/dL) | 4.2 ± 0.4 | 4.1 ± 0.4 | 0.494 | 4.1 ± 0.4 | 4.1 ± 0.4 | 0.985 |
| Brinkman index, median (quartiles) | 800 (445–1000) | 860 (435–1000) | 0.733 | 820 (600–1140) | 840 (405–1000) | 0.278 |
| ECOG performance status 0 | 68 (83%) | 34 (87%) | 0.586 | 27 (82%) | 28 (85%) | 0.601 |
| ECOG performance status 1 | 12 (15%) | 5 (13%) | | 5 (15%) | 5 (15%) | |
| ECOG performance status 2 | 2 (2%) | 0 (0%) | | 1 (3%) | 0 (0%) | |
| Comorbidity: diabetes | 15 (18%) | 4 (10%) | 0.256 | 10 (30%) | 4 (12%) | 0.071 |
| Comorbidity: cardiovascular disease | 10 (12%) | 6 (15%) | 0.628 | 2 (6%) | 5 (15%) | 0.230 |
| Comorbidity: obstructive ventilation failure | 27 (33%) | 13 (33%) | 0.965 | 10 (30%) | 11 (33%) | 0.792 |
| ASA-PS score 1 | 11 (13%) | 1 (3%) | 0.021 | 3 (9%) | 1 (3%) | 0.115 |
| ASA-PS score 2 | 63 (77%) | 28 (72%) | | 27 (82%) | 23 (70%) | |
| ASA-PS score 3 | 8 (10%) | 10 (26%) | | 3 (9%) | 9 (27%) | |
| Histology: squamous cell carcinoma | 78 (95%) | 30 (77%) | 0.009 | 30 (91%) | 30 (91%) | 1.000 |
| Histology: adenocarcinoma | 2 (2%) | 6 (15%) | | 2 (6%) | 2 (6%) | |
| Histology: others | 2 (2%) | 3 (8%) | | 1 (3%) | 1 (3%) | |
| Tumor location: upper thoracic | 11 (13%) | 6 (15%) | 0.229 | 5 (15%) | 6 (18%) | 0.642 |
| Tumor location: middle thoracic | 43 (52%) | 16 (41%) | | 15 (46%) | 16 (49%) | |
| Tumor location: lower thoracic | 24 (29%) | 11 (28%) | | 11 (33%) | 7 (21%) | |
| Tumor location: abdominal | 4 (5%) | 6 (15%) | | 2 (6%) | 4 (12%) | |
| cT1 | 36 (44%) | 16 (41%) | 0.405 | 16 (49%) | 14 (42%) | 0.667 |
| cT2 | 14 (17%) | 11 (28%) | | 5 (15%) | 9 (27%) | |
| cT3 | 31 (38%) | 11 (28%) | | 11 (33%) | 9 (27%) | |
| cT4a | 1 (1%) | 1 (3%) | | 1 (3%) | 1 (3%) | |
| cN0 | 44 (54%) | 28 (72%) | 0.224 | 19 (58%) | 24 (73%) | 0.420 |
| cN1 | 17 (21%) | 5 (13%) | | 7 (21%) | 4 (12%) | |
| cN2 | 20 (24%) | 5 (13%) | | 7 (21%) | 5 (15%) | |
| cN3 | 1 (1%) | 1 (3%) | | 0 (0%) | 0 (0%) | |
| cStage 1 | 32 (39%) | 14 (36%) | 0.296 | 16 (49%) | 13 (39%) | 0.393 |
| cStage 2 | 19 (23%) | 14 (36%) | | 5 (15%) | 10 (30%) | |
| cStage 3 | 31 (38%) | 11 (28%) | | 12 (36%) | 10 (30%) | |
| Neoadjuvant chemotherapy: absent | 36 (44%) | 17 (44%) | 0.974 | 16 (49%) | 16 (49%) | 1.000 |
| Neoadjuvant chemotherapy: present | 46 (56%) | 22 (56%) | | 17 (52%) | 17 (52%) | |

ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status", "Figure 3 presents the changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy, and Table 2 shows the perioperative outcomes in both groups. Before matching, the ICG circular group had significantly higher proportions of patients who underwent the laparoscopic abdominal approach (P < 0.001) and narrow gastric tube reconstruction (P = 0.001) and a significantly lower volume of blood loss (P = 0.038); the same factors differed significantly after matching. In terms of postoperative outcomes, before matching the ICG circular group had a significantly lower incidence of anastomotic leakage (34% vs. 8%, P = 0.002) and a shorter postoperative hospital stay (29 vs. 20 days, P < 0.001). After matching, the ICG circular group had significantly lower incidences of anastomotic leakage (39% vs. 9%, P = 0.004) and anastomotic stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001).\n\nFig. 3 Changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. Changes in the classical and ICG circular groups, and the annual incidence rates of anastomotic leakage and stenosis, are shown\n\nTable 2 Perioperative outcomes of patients with esophageal cancer after thoracoscopic esophagectomy

| | Before: Classical (n = 82) | Before: ICG circular (n = 39) | P value | After: Classical (n = 33) | After: ICG circular (n = 33) | P value |
|---|---|---|---|---|---|---|
| Abdominal approach: open | 34 (42%) | 0 (0%) | < 0.001 | 15 (46%) | 0 (0%) | < 0.001 |
| Abdominal approach: laparoscopic | 48 (59%) | 39 (100%) | | 18 (55%) | 33 (100%) | |
| Lymph node dissection: two-field | 18 (22%) | 13 (33%) | 0.180 | 10 (30%) | 10 (30%) | 1.000 |
| Lymph node dissection: three-field | 64 (78%) | 26 (67%) | | 23 (70%) | 23 (70%) | |
| Route of reconstruction: retrosternal | 68 (83%) | 37 (95%) | 0.070 | 27 (82%) | 31 (94%) | 0.131 |
| Route of reconstruction: posterior mediastinal | 14 (17%) | 2 (5%) | | 6 (18%) | 2 (6%) | |
| Shape of the gastric tube: wide | 21 (26%) | 0 (0%) | 0.001 | 8 (24%) | 0 (0%) | 0.003 |
| Shape of the gastric tube: narrow | 61 (74%) | 39 (100%) | | 25 (76%) | 33 (100%) | |
| Total operative time (min) | 634 ± 89 | 617 ± 53 | 0.573 | 638 ± 100 | 616 ± 51 | 0.753 |
| Volume of blood loss (mL) | 186 ± 219 | 103 ± 97 | 0.038 | 251 ± 298 | 93 ± 89 | 0.009 |
| Anastomotic leakage | 28 (34%) | 3 (8%) | 0.002 | 13 (39%) | 3 (9%) | 0.004 |
| Anastomotic stenosis | 29 (35%) | 8 (21%) | 0.097 | 15 (46%) | 7 (21%) | 0.037 |
| Pneumonia | 18 (22%) | 11 (28%) | 0.451 | 8 (24%) | 9 (27%) | 0.778 |
| Recurrent nerve paralysis | 14 (17%) | 2 (5%) | 0.070 | 7 (21%) | 2 (6%) | 0.073 |
| Postoperative hospital stay (days), median (quartiles) | 29 (22–44) | 20 (16–28) | < 0.001 | 30 (25–44) | 20 (17–28) | < 0.001 |", "Finally, the risk factors for anastomotic leakage were evaluated in the 66 propensity score-matched patients. The univariate analysis indicated that the Brinkman index (P = 0.048) and the anastomotic method (P = 0.008) were significantly associated with anastomotic leakage (Table 3). 
According to the multivariate analysis, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (odds ratio: 5.983, 95% confidence interval (CI): 1.469–24.359, P = 0.013) (Table 4).\n\nTable 3 Univariate logistic regression analyses of anastomotic leakage

| Variable | Leakage absent (n = 50) | Leakage present (n = 16) | OR | 95% CI | P value |
|---|---|---|---|---|---|
| Age < 65 years | 20 (40%) | 8 (50%) | 1.500 | 0.484–4.651 | 0.483 |
| Age ≥ 65 years | 30 (60%) | 8 (50%) | 1 (reference) | | |
| Male sex | 42 (84%) | 15 (94%) | 2.857 | 0.329–24.795 | 0.341 |
| Female sex | 8 (16%) | 1 (6%) | 1 (reference) | | |
| Body mass index < 22 kg/m² | 24 (48%) | 10 (63%) | 1.806 | 0.569–5.726 | 0.316 |
| Body mass index ≥ 22 kg/m² | 26 (52%) | 6 (38%) | 1 (reference) | | |
| Serum albumin < 4 g/dL | 20 (40%) | 5 (31%) | 0.682 | 0.206–2.261 | 0.531 |
| Serum albumin ≥ 4 g/dL | 30 (60%) | 11 (69%) | 1 (reference) | | |
| Brinkman index < 800 | 24 (48%) | 3 (19%) | 0.250 | 0.063–0.986 | 0.048 |
| Brinkman index ≥ 800 | 26 (52%) | 13 (81%) | 1 (reference) | | |
| Performance status 0 | 42 (84%) | 13 (81%) | 0.825 | 0.191–3.574 | 0.797 |
| Performance status 1, 2 | 8 (16%) | 3 (19%) | 1 (reference) | | |
| Diabetes absent | 39 (78%) | 13 (81%) | 1.222 | 0.295–5.069 | 0.782 |
| Diabetes present | 11 (22%) | 3 (19%) | 1 (reference) | | |
| Cardiovascular disease absent | 46 (92%) | 13 (81%) | 0.377 | 0.075–1.901 | 0.237 |
| Cardiovascular disease present | 4 (8%) | 3 (19%) | 1 (reference) | | |
| Obstructive ventilation failure absent | 36 (72%) | 9 (56%) | 0.500 | 0.156–1.603 | 0.243 |
| Obstructive ventilation failure present | 14 (28%) | 7 (44%) | 1 (reference) | | |
| ASA-PS score 1 | 3 (6%) | 1 (6%) | 1.044 | 0.101–10.806 | 0.971 |
| ASA-PS score 2, 3 | 47 (94%) | 15 (94%) | 1 (reference) | | |
| Squamous cell carcinoma | 45 (90%) | 15 (94%) | 1.667 | 0.180–15.425 | 0.653 |
| Other histology | 5 (10%) | 1 (6%) | 1 (reference) | | |
| Tumor location Ut, Mt | 31 (62%) | 11 (69%) | 1.348 | 0.406–4.484 | 0.626 |
| Tumor location Lt, Ae | 19 (38%) | 5 (31%) | 1 (reference) | | |
| cT1 | 21 (42%) | 9 (56%) | 1.776 | 0.570–5.531 | 0.322 |
| cT2, 3, 4a | 29 (58%) | 7 (44%) | 1 (reference) | | |
| cN absent | 33 (66%) | 10 (63%) | 0.859 | 0.267–2.764 | 0.798 |
| cN present | 17 (34%) | 6 (38%) | 1 (reference) | | |
| cStage 1 | 20 (40%) | 9 (56%) | 1.929 | 0.618–6.020 | 0.258 |
| cStage 2, 3 | 30 (60%) | 7 (44%) | 1 (reference) | | |
| Open abdominal approach | 11 (22%) | 4 (25%) | 1.182 | 0.317–4.400 | 0.803 |
| Laparoscopic abdominal approach | 39 (78%) | 12 (75%) | 1 (reference) | | |
| Neoadjuvant chemotherapy absent | 22 (44%) | 10 (63%) | 2.121 | 0.668–6.739 | 0.202 |
| Neoadjuvant chemotherapy present | 28 (56%) | 6 (38%) | 1 (reference) | | |
| Two-field lymph node dissection | 15 (30%) | 5 (31%) | 1.061 | 0.314–3.585 | 0.925 |
| Three-field lymph node dissection | 35 (70%) | 11 (69%) | 1 (reference) | | |
| Retrosternal route | 43 (86%) | 15 (94%) | 2.442 | 0.277–21.519 | 0.421 |
| Posterior mediastinal route | 7 (14%) | 1 (6%) | 1 (reference) | | |
| Wide gastric tube | 5 (10%) | 3 (19%) | 2.077 | 0.437–9.871 | 0.358 |
| Narrow gastric tube | 45 (90%) | 13 (81%) | 1 (reference) | | |
| Total operative time < 600 min | 20 (40%) | 6 (38%) | 0.900 | 0.282–2.870 | 0.859 |
| Total operative time ≥ 600 min | 30 (60%) | 10 (63%) | 1 (reference) | | |
| Blood loss < 100 mL | 26 (52%) | 7 (44%) | 0.718 | 0.231–2.229 | 0.566 |
| Blood loss ≥ 100 mL | 24 (48%) | 9 (56%) | 1 (reference) | | |
| Anastomotic method: ICG circular group | 30 (60%) | 3 (19%) | 0.154 | 0.039–0.610 | 0.008 |
| Anastomotic method: classical group | 20 (40%) | 13 (81%) | 1 (reference) | | |

OR odds ratio, CI confidence interval, ASA-PS American Society of Anesthesiologists physical status\n\nTable 4 Multivariate logistic regression analyses of anastomotic leakage

| Variable | OR | 95% CI | P value |
|---|---|---|---|
| Brinkman index (≥ 800) | 3.538 | 0.842–14.860 | 0.084 |
| Anastomotic method (classical group) | 5.983 | 1.469–24.359 | 0.013 |

OR odds ratio, CI confidence interval" ]
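As a quick consistency check on the univariate results, the crude odds ratio and Wald 95% confidence interval for the anastomotic-method row of Table 3 can be recomputed directly from the 2 × 2 counts (leakage present/absent: 3/30 in the ICG circular group vs. 13/20 in the classical group). This is a sketch of the standard textbook formula, not the authors' actual workflow (they used SPSS):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a/b = events/non-events in the exposed group,
    c/d = events/non-events in the reference group."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Table 3, anastomotic method: ICG circular (exposed) vs. classical (reference)
or_, lo, hi = odds_ratio_ci(3, 30, 13, 20)
print(f"{or_:.3f} {lo:.3f} {hi:.3f}")  # 0.154 0.039 0.610, as in Table 3
```

The reciprocal, 1/0.154 ≈ 6.5, is the crude odds ratio for the classical group; the value of 5.983 reported in Table 4 differs slightly because it is adjusted for the Brinkman index in the multivariate model.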
[ null, null, null, null, null, null, null, null, null, null ]
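The 1:1 propensity-matched analysis described in the statistical-analysis section (a logistic regression of group membership on age, sex, histological type, tumor location, clinical stage, and neoadjuvant chemotherapy, followed by pairing) can be sketched as greedy nearest-neighbor matching on the estimated propensity score. The paper does not report its matching algorithm or caliper, so both are assumptions here, and the helper below takes precomputed scores rather than fitting the regression itself:

```python
# Hypothetical sketch of 1:1 greedy nearest-neighbor matching on propensity
# scores. The scores would come from a logistic regression of group membership
# on the listed covariates; the 0.05 caliper is an assumption, not a value
# reported in the paper.
def greedy_match(treated_scores, control_scores, caliper=0.05):
    pairs = []    # (treated index, control index)
    used = set()  # control indices already matched
    # Walk the treated subjects in score order so each gets its closest
    # still-unmatched control within the caliper.
    for i, pt in sorted(enumerate(treated_scores), key=lambda x: x[1]):
        best, best_d = None, caliper
        for j, pc in enumerate(control_scores):
            if j in used:
                continue
            d = abs(pt - pc)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

# Toy example: two of the three "treated" scores find a control within the caliper.
print(greedy_match([0.2, 0.5, 0.8], [0.19, 0.52, 0.9, 0.3]))  # [(0, 0), (1, 1)]
```

Unmatched subjects in either group are simply dropped, which is consistent with the reduction from 82 and 39 patients to 33 per group reported in the study.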
[ "Background", "Methods", "Patients", "Surgical procedure", "ICG circular anastomosis method", "Definition of perioperative complications", "Statistical analysis", "Results", "Characteristics of patients", "Changes in anastomotic methods and perioperative outcomes", "Risk factor analyses of anastomotic leakage", "Discussion", "Conclusions" ]
[ "Esophageal cancer is the ninth most commonly diagnosed cancer worldwide and the sixth most common cause of cancer-related mortality [1]. Esophagectomy is the mainstay of treatment for resectable esophageal cancer. Thoracoscopic esophagectomy was first reported by Cuschieri et al. in 1992 [2] and has since been widely adopted worldwide as a curative surgery for esophageal cancer. In a randomized controlled trial, it was reported to reduce the incidence of respiratory complications compared with open esophagectomy [3]. However, complications, including anastomotic leakage and stenosis, remain a major cause of concern. At our institution, thoracoscopic esophagectomy was first performed in November 2009 and the procedure has been standardized; however, the anastomotic method was not standardized until June 2018. In fact, various anastomotic methods, such as hand-sewn, triangulating [4], circular stapling, and Collard anastomosis [5], were performed, but the rate of anastomotic complications did not decrease.\nA systematic review reported that indocyanine green (ICG) fluorescence imaging could be an important adjunct tool for reducing anastomotic leakage following esophagectomy [6]. ICG is a water-soluble near-infrared fluorophore with established immediate and long-term safety [7, 8], and ICG fluorescence imaging is a simple evaluation method. In recent robot-assisted surgery, high-resolution near-infrared images can be obtained with the Firefly system integrated with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). Therefore, to assess the blood flow in the gastric tube during reconstruction after esophagectomy, the use of ICG fluorescence imaging was standardized at our institution in July 2018. 
In addition, the anastomotic method was standardized to circular stapling anastomosis because it is simple and applicable to nearly all cases, including those with a short remnant esophagus.\nThis study aimed to evaluate the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing the short-term outcomes before and after the anastomotic method was standardized via a propensity-matched analysis.", "Patients Altogether, 145 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection between November 2009 and December 2020 at Tottori University Hospital were included in this study. Among them, 24 patients were excluded: 19 who underwent reconstruction using the jejunum or colon, 2 who underwent pharyngeal gastric tube anastomosis because of synchronous hypopharyngeal cancer, and 3 who underwent two-stage reconstruction. Finally, 121 patients were enrolled in this study (Fig. 1). Patients who underwent surgery until June 2018, i.e., before the standardization of the anastomotic method, were included in the classical group, and those who underwent surgery from July 2018, i.e., after the standardization of the anastomotic method, were included in the ICG circular group. The clinicopathological findings were determined as per the Japanese Classification of Esophageal Cancer (11th edition) [9, 10]. This study was approved by the institutional review board of Tottori University School of Medicine (20A234), and the requirement for informed consent was waived.\n\nFig. 
1 Patient selection for the evaluation of cervical esophagogastric anastomosis after thoracoscopic esophagectomy\nSurgical procedure All patients underwent thoracoscopic subtotal esophagectomy with mediastinal lymph node dissection in the prone position under right pneumothorax, and robot-assisted esophagectomy has been used since February 2020. After completion of the thoracic procedure, patients were repositioned in the supine position, and cervical and abdominal procedures were simultaneously initiated. 
Cervical lymph node dissection was not performed in patients with lower thoracic or abdominal esophageal cancer without cervical or upper mediastinal lymph node metastasis. The abdominal approach for abdominal lymph node dissection was laparotomy, hand-assisted laparoscopic surgery, or complete laparoscopy. In patients who underwent complete laparoscopy, a small laparotomy was made through an 8-cm incision in the upper abdomen after abdominal lymph node dissection was completed, and the gastric tube was created under direct visualization in all cases. The gastric tube was created in either a wide or a narrow shape: for the wide gastric tube, the stomach was divided just below the esophagogastric junction, whereas for the narrow gastric tube, the lesser curvature of the stomach was resected so that the diameter of the gastric tube was approximately 3.5 cm. The gastric tube was then pulled up to the neck through the retrosternal or posterior mediastinal route, and esophagogastric anastomosis was performed on the left side of the neck. In the classical group, before the anastomotic method was standardized, additional Kocher mobilization was performed as required, and the gastric tube was used up to the area that showed good blood flow on visual inspection.\nICG circular anastomosis method The ICG circular anastomosis approach was used as follows: after the gastric tube was created, ICG at a dose of 10 mg/body was administered intravenously, and ICG fluorescence imaging of the blood flow in the gastric tube was assessed using the PhotoDynamic Eye (Hamamatsu Photonics, Hamamatsu, Japan) or the Firefly system integrated with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). The gastric tube was used up to the site at which its wall showed uniform ICG contrast within 20 s of contrast reaching the right gastroepiploic artery, as reported by Noma et al. [11] (Fig. 2a, b). After the gastric tube was pulled up to the neck, end-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler (Medtronic, Minneapolis, Minnesota) (Fig. 2c). 
The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge (Medtronic, Minneapolis, Minnesota) (Fig. 2d). Finally, the staple line was buried.\n\nFig. 2 Procedures in indocyanine green (ICG) circular anastomosis. a The right gastroepiploic artery was contrasted with ICG; b The site where the wall of the gastric tube had a uniform contrast with ICG; c End-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler; d The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge\nDefinition of perioperative complications Anastomotic leakage was defined as saliva leakage from the cervical wound, contrast leakage outside the gastrointestinal tract on a gastrointestinal series, or abnormal air or fluid accumulation around the anastomotic site on computed tomography (CT). A routine gastrointestinal series was performed on postoperative day 7, and oral intake was initiated on postoperative day 8 in patients with an uneventful postoperative course. This protocol was maintained unchanged throughout the study period. 
Anastomotic stenosis was defined as the anastomosis not admitting a 9.0-mm endoscope and requiring balloon dilation during endoscopy for postoperative dysphagia. Pneumonia was defined as the appearance of consolidation on chest radiography or CT together with the detection of bacteria on sputum culture. Recurrent nerve paralysis was assessed by an otolaryngologist on postoperative day 6 or 7 via laryngoscopy. The follow-up period for postoperative complications was 1 year postoperatively for anastomotic stenosis and until postoperative day 30 for other complications.\nStatistical analysis Continuous data were presented as mean ± standard deviation or median with quartiles, as indicated. The Mann–Whitney U-test and the χ2 test were used to evaluate differences in continuous and categorical variables, respectively. A propensity-matched analysis was conducted using a logistic regression model with age, sex, histological type, tumor location, clinical stage, and the presence or absence of neoadjuvant chemotherapy as covariates. Univariate and multivariate logistic regression analyses were used to identify risk factors for anastomotic leakage; variables that were statistically significant in the univariate analysis were entered into the multivariate analysis. P values of < 0.05 indicated statistically significant differences, and the Statistical Package for the Social Sciences software version 25 (IBM SPSS Inc., Chicago, Illinois) was used for all statistical analyses.", "Altogether, 145 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection between November 2009 and December 2020 at Tottori University Hospital were included in this study. 
Among them, 24 patients were excluded: 19 who underwent reconstruction using the jejunum or colon, 2 who underwent pharyngeal gastric tube anastomosis because of synchronous hypopharyngeal cancer, and 3 who underwent two-stage reconstruction. Finally, 121 patients were enrolled in this study (Fig. 1). Patients who underwent surgery until June 2018, i.e., before the standardization of the anastomotic method, were included in the classical group, and those who underwent surgery from July 2018, i.e., after the standardization of the anastomotic method, were included in the ICG circular group. The clinicopathological findings were determined as per the Japanese Classification of Esophageal Cancer (11th edition) [9, 10]. This study was approved by the institutional review board of Tottori University School of Medicine (20A234), and the requirement for informed consent was waived.\n\nFig. 1 Patient selection for the evaluation of cervical esophagogastric anastomosis after thoracoscopic esophagectomy", "All patients underwent thoracoscopic subtotal esophagectomy with mediastinal lymph node dissection in the prone position under right pneumothorax, and robot-assisted esophagectomy has been used since February 2020. After completion of the thoracic procedure, patients were repositioned in the supine position, and cervical and abdominal procedures were simultaneously initiated. Cervical lymph node dissection was not performed in patients with lower thoracic or abdominal esophageal cancer without cervical or upper mediastinal lymph node metastasis. The abdominal approach for abdominal lymph node dissection was laparotomy, hand-assisted laparoscopic surgery, or complete laparoscopy. 
In patients who underwent complete laparoscopy, a small laparotomy was made through an 8-cm incision in the upper abdomen after abdominal lymph node dissection was completed, and the gastric tube was created under direct visualization in all cases. The gastric tube was created in either a wide or a narrow shape: for the wide gastric tube, the stomach was divided just below the esophagogastric junction, whereas for the narrow gastric tube, the lesser curvature of the stomach was resected so that the diameter of the gastric tube was approximately 3.5 cm. The gastric tube was then pulled up to the neck through the retrosternal or posterior mediastinal route, and esophagogastric anastomosis was performed on the left side of the neck. In the classical group, before the anastomotic method was standardized, additional Kocher mobilization was performed as required, and the gastric tube was used up to the area that showed good blood flow on visual inspection.", "The ICG circular anastomosis approach was used as follows: after the gastric tube was created, ICG at a dose of 10 mg/body was administered intravenously, and ICG fluorescence imaging of the blood flow in the gastric tube was assessed using the PhotoDynamic Eye (Hamamatsu Photonics, Hamamatsu, Japan) or the Firefly system integrated with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). The gastric tube was used up to the site at which its wall showed uniform ICG contrast within 20 s of contrast reaching the right gastroepiploic artery, as reported by Noma et al. [11] (Fig. 2a, b). After the gastric tube was pulled up to the neck, end-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler (Medtronic, Minneapolis, Minnesota) (Fig. 2c). The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge (Medtronic, Minneapolis, Minnesota) (Fig. 
2d). The staple line was lastly buried.\n\nFig. 2Procedures in indocyanine green (ICG) circular anastomosis. a The right gastroepiploic artery was contrasted with ICG; b The site where the wall of the gastric tube had a uniform contrast with ICG; c End-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler; d The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60 mm purple cartridge\nProcedures in indocyanine green (ICG) circular anastomosis. a The right gastroepiploic artery was contrasted with ICG; b The site where the wall of the gastric tube had a uniform contrast with ICG; c End-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler; d The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60 mm purple cartridge", "Anastomotic leakage was defined as saliva leakage from the cervical wound, contrast leakage outside the gastrointestinal tract on gastrointestinal series, and abnormal air or fluid accumulation around the site of anastomosis on computed tomography (CT) scan. A routine gastrointestinal series was performed on postoperative day 7; oral intake was initiated on postoperative day 8 for patients who experienced no problems in the postoperative course. This protocol maintained as is during the study period. Anastomotic stenosis was defined as cases in which an endoscope of 9.0 mm diameter could not pass through the anastomosis and balloon dilation was required during endoscopy for postoperative dysphagia. Pneumonia was defined as the appearance of consolidation on chest radiography or CT scan and the detection of bacteria on sputum culture. Recurrent nerve paralysis was assessed by an otolaryngologist on postoperative day 6 or 7 via laryngoscopy. 
The follow-up period for postoperative complications was 1 year postoperatively for anastomotic stenosis and until postoperative day 30 for all other complications.

Continuous data are presented as mean ± standard deviation or as median with quartiles, as indicated. The Mann–Whitney U-test and the χ2 test were used to evaluate differences in continuous and categorical variables, respectively. A propensity-matched analysis was conducted using a logistic regression model with age, sex, histological type, tumor location, clinical stage, and presence or absence of neoadjuvant chemotherapy as covariates. Univariate and multivariate logistic regression analyses were used to identify risk factors for anastomotic leakage; variables that were statistically significant in the univariate analysis were entered into the multivariate analysis. P values < 0.05 were considered statistically significant, and all analyses were performed using the Statistical Package for the Social Sciences version 25 (IBM SPSS Inc., Chicago, Illinois).

Characteristics of patients

Of 121 patients, 82 were included in the classical group and 39 in the ICG circular group before matching; after matching, 33 patients were included in each group (Fig. 1). Table 1 shows the clinicopathological characteristics of patients before and after matching. Before matching, significant differences were noted in the American Society of Anesthesiologists physical status (ASA-PS) score (P = 0.021) and histological type (P = 0.009). After matching, however, the background characteristics did not differ significantly between the two groups.

Table 1. Characteristics of patients
(Columns: before matching, classical group [n = 82] vs. ICG circular group [n = 39] with P value; after matching, classical group [n = 33] vs. ICG circular group [n = 33] with P value.)

Age, years, median (quartiles) | 66 (61–72) | 66 (61–71) | 0.892 | 65 (61–73) | 67 (63–73) | 0.872
Sex | | | 0.914 | | | 0.282
  Male | 70 (85%) | 33 (85%) | | 30 (90%) | 27 (82%) |
  Female | 12 (15%) | 6 (15%) | | 3 (9%) | 6 (18%) |
Body mass index, kg/m2 | 21.2 ± 3.1 | 22.2 ± 3.3 | 0.083 | 21.8 ± 3.1 | 21.9 ± 3.3 | 0.677
Serum albumin level, g/dL | 4.2 ± 0.4 | 4.1 ± 0.4 | 0.494 | 4.1 ± 0.4 | 4.1 ± 0.4 | 0.985
Brinkman index, median (quartiles) | 800 (445–1000) | 860 (435–1000) | 0.733 | 820 (600–1140) | 840 (405–1000) | 0.278
ECOG performance status | | | 0.586 | | | 0.601
  0 | 68 (83%) | 34 (87%) | | 27 (82%) | 28 (85%) |
  1 | 12 (15%) | 5 (13%) | | 5 (15%) | 5 (15%) |
  2 | 2 (2%) | 0 (0%) | | 1 (3%) | 0 (0%) |
Comorbidity
  Diabetes | 15 (18%) | 4 (10%) | 0.256 | 10 (30%) | 4 (12%) | 0.071
  Cardiovascular disease | 10 (12%) | 6 (15%) | 0.628 | 2 (6%) | 5 (15%) | 0.230
  Obstructive ventilation failure | 27 (33%) | 13 (33%) | 0.965 | 10 (30%) | 11 (33%) | 0.792
ASA-PS score | | | 0.021 | | | 0.115
  1 | 11 (13%) | 1 (3%) | | 3 (9%) | 1 (3%) |
  2 | 63 (77%) | 28 (72%) | | 27 (82%) | 23 (70%) |
  3 | 8 (10%) | 10 (26%) | | 3 (9%) | 9 (27%) |
Histological type | | | 0.009 | | | 1.000
  Squamous cell carcinoma | 78 (95%) | 30 (77%) | | 30 (91%) | 30 (91%) |
  Adenocarcinoma | 2 (2%) | 6 (15%) | | 2 (6%) | 2 (6%) |
  Others | 2 (2%) | 3 (8%) | | 1 (3%) | 1 (3%) |
Tumor location | | | 0.229 | | | 0.642
  Upper thoracic | 11 (13%) | 6 (15%) | | 5 (15%) | 6 (18%) |
  Middle thoracic | 43 (52%) | 16 (41%) | | 15 (46%) | 16 (49%) |
  Lower thoracic | 24 (29%) | 11 (28%) | | 11 (33%) | 7 (21%) |
  Abdominal | 4 (5%) | 6 (15%) | | 2 (6%) | 4 (12%) |
cT | | | 0.405 | | | 0.667
  1 | 36 (44%) | 16 (41%) | | 16 (49%) | 14 (42%) |
  2 | 14 (17%) | 11 (28%) | | 5 (15%) | 9 (27%) |
  3 | 31 (38%) | 11 (28%) | | 11 (33%) | 9 (27%) |
  4a | 1 (1%) | 1 (3%) | | 1 (3%) | 1 (3%) |
cN | | | 0.224 | | | 0.420
  0 | 44 (54%) | 28 (72%) | | 19 (58%) | 24 (73%) |
  1 | 17 (21%) | 5 (13%) | | 7 (21%) | 4 (12%) |
  2 | 20 (24%) | 5 (13%) | | 7 (21%) | 5 (15%) |
  3 | 1 (1%) | 1 (3%) | | 0 (0%) | 0 (0%) |
cStage | | | 0.296 | | | 0.393
  1 | 32 (39%) | 14 (36%) | | 16 (49%) | 13 (39%) |
  2 | 19 (23%) | 14 (36%) | | 5 (15%) | 10 (30%) |
  3 | 31 (38%) | 11 (28%) | | 12 (36%) | 10 (30%) |
Neoadjuvant chemotherapy | | | 0.974 | | | 1.000
  Absent | 36 (44%) | 17 (44%) | | 16 (49%) | 16 (49%) |
  Present | 46 (56%) | 22 (56%) | | 17 (52%) | 17 (52%) |

ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status

Changes in anastomotic methods and perioperative outcomes

Figure 3 presents the changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy, and Table 2 shows the perioperative outcomes in both groups. Before matching, the ICG circular group had significantly higher proportions of patients who underwent a laparoscopic abdominal approach (P < 0.001) and received a narrow gastric tube (P = 0.001), as well as a lower volume of blood loss (P = 0.038); the same factors differed significantly after matching. In terms of postoperative outcomes, the ICG circular group had a significantly lower rate of anastomotic leakage (34% vs. 8%, P = 0.002) and a shorter postoperative hospital stay (29 vs. 20 days, P < 0.001) before matching. After matching, the ICG circular group had significantly lower rates of anastomotic leakage (39% vs. 9%, P = 0.004) and anastomotic stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001).

Fig. 3 Changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. The changes in the classical and ICG circular groups and the annual incidence rates of anastomotic leakage and stenosis are shown.

Table 2. Perioperative outcomes of patients with esophageal cancer after thoracoscopic esophagectomy
(Columns: before matching, classical group [n = 82] vs. ICG circular group [n = 39] with P value; after matching, classical group [n = 33] vs. ICG circular group [n = 33] with P value.)

Abdominal approach | | | < 0.001 | | | < 0.001
  Open | 34 (42%) | 0 (0%) | | 15 (46%) | 0 (0%) |
  Laparoscopic | 48 (59%) | 39 (100%) | | 18 (55%) | 33 (100%) |
Lymph node dissection | | | 0.180 | | | 1.000
  Two-field | 18 (22%) | 13 (33%) | | 10 (30%) | 10 (30%) |
  Three-field | 64 (78%) | 26 (67%) | | 23 (70%) | 23 (70%) |
Route of reconstruction | | | 0.070 | | | 0.131
  Retrosternal | 68 (83%) | 37 (95%) | | 27 (82%) | 31 (94%) |
  Posterior mediastinal | 14 (17%) | 2 (5%) | | 6 (18%) | 2 (6%) |
Shape of the gastric tube | | | 0.001 | | | 0.003
  Wide | 21 (26%) | 0 (0%) | | 8 (24%) | 0 (0%) |
  Narrow | 61 (74%) | 39 (100%) | | 25 (76%) | 33 (100%) |
Total operative time, min | 634 ± 89 | 617 ± 53 | 0.573 | 638 ± 100 | 616 ± 51 | 0.753
Volume of blood loss, mL | 186 ± 219 | 103 ± 97 | 0.038 | 251 ± 298 | 93 ± 89 | 0.009
Postoperative complications
  Anastomotic leakage | 28 (34%) | 3 (8%) | 0.002 | 13 (39%) | 3 (9%) | 0.004
  Anastomotic stenosis | 29 (35%) | 8 (21%) | 0.097 | 15 (46%) | 7 (21%) | 0.037
  Pneumonia | 18 (22%) | 11 (28%) | 0.451 | 8 (24%) | 9 (27%) | 0.778
  Recurrent nerve paralysis | 14 (17%) | 2 (5%) | 0.070 | 7 (21%) | 2 (6%) | 0.073
Postoperative hospital stay, days, median (quartiles) | 29 (22–44) | 20 (16–28) | < 0.001 | 30 (25–44) | 20 (17–28) | < 0.001
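The group comparisons of categorical outcomes above use the χ2 test. As a check, the after-matching anastomotic leakage comparison in Table 2 (13/33 in the classical group vs. 3/33 in the ICG circular group, P = 0.004) can be reproduced with a minimal sketch, assuming an uncorrected Pearson χ2 on the 2 × 2 table (the helper function below is illustrative, not from the paper):

```python
import math

def chi2_2x2(a, b, c, d):
    """Uncorrected Pearson chi-square for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 degree of freedom, P(X > stat) = erfc(sqrt(stat / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# After matching: leakage present/absent was 13/20 (classical) vs. 3/30 (ICG circular)
stat, p = chi2_2x2(13, 20, 3, 30)
print(f"chi2 = {stat:.2f}, P = {p:.3f}")  # chi2 = 8.25, P = 0.004
```

Note that a Yates-corrected χ2 (the default in some software for 2 × 2 tables) would give a larger P value; the uncorrected statistic matches the reported result.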
Risk factor analyses of anastomotic leakage

Finally, the risk factors for anastomotic leakage were evaluated in the 66 propensity score-matched patients. The univariate analysis indicated that the Brinkman index (P = 0.048) and the anastomotic method (P = 0.008) were significantly associated with anastomotic leakage (Table 3).
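The univariate odds ratios and confidence intervals in Table 3 can be reproduced directly from the 2 × 2 counts. A minimal sketch for the anastomotic-method row, assuming the standard Wald interval on the log-odds scale (not code from the study itself):

```python
import math

# Table 3, anastomotic method row (leakage present / absent per group):
a, b = 3, 30   # ICG circular group: 3 present, 30 absent
c, d = 13, 20  # classical group (reference): 13 present, 20 absent

odds_ratio = (a * d) / (b * c)  # odds of leakage, ICG circular vs. classical

# Wald 95% confidence interval computed on the log-odds scale
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.3f}, 95% CI: {ci_low:.3f}-{ci_high:.3f}")
# OR = 0.154, 95% CI: 0.039-0.610  (matches Table 3)
```

Because the interval excludes 1, the association is significant at the 0.05 level, consistent with the reported P = 0.008.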
According to the multivariate analysis, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (odds ratio: 5.983, 95% confidence interval [CI]: 1.469–24.359, P = 0.013) (Table 4).

Table 3. Univariate logistic regression analyses of anastomotic leakage
(Columns: leakage absent [n = 50], leakage present [n = 16], OR, 95% CI, P value; "1 (ref)" denotes the reference category.)

Age, years
  < 65 | 20 (40%) | 8 (50%) | 1.500 | 0.484–4.651 | 0.483
  ≥ 65 | 30 (60%) | 8 (50%) | 1 (ref) | |
Sex
  Male | 42 (84%) | 15 (94%) | 2.857 | 0.329–24.795 | 0.341
  Female | 8 (16%) | 1 (6%) | 1 (ref) | |
Body mass index, kg/m2
  < 22 | 24 (48%) | 10 (63%) | 1.806 | 0.569–5.726 | 0.316
  ≥ 22 | 26 (52%) | 6 (38%) | 1 (ref) | |
Serum albumin level, g/dL
  < 4 | 20 (40%) | 5 (31%) | 0.682 | 0.206–2.261 | 0.531
  ≥ 4 | 30 (60%) | 11 (69%) | 1 (ref) | |
Brinkman index
  < 800 | 24 (48%) | 3 (19%) | 0.250 | 0.063–0.986 | 0.048
  ≥ 800 | 26 (52%) | 13 (81%) | 1 (ref) | |
Performance status
  0 | 42 (84%) | 13 (81%) | 0.825 | 0.191–3.574 | 0.797
  1, 2 | 8 (16%) | 3 (19%) | 1 (ref) | |
Diabetes
  Absent | 39 (78%) | 13 (81%) | 1.222 | 0.295–5.069 | 0.782
  Present | 11 (22%) | 3 (19%) | 1 (ref) | |
Cardiovascular disease
  Absent | 46 (92%) | 13 (81%) | 0.377 | 0.075–1.901 | 0.237
  Present | 4 (8%) | 3 (19%) | 1 (ref) | |
Obstructive ventilation failure
  Absent | 36 (72%) | 9 (56%) | 0.500 | 0.156–1.603 | 0.243
  Present | 14 (28%) | 7 (44%) | 1 (ref) | |
ASA-PS score
  1 | 3 (6%) | 1 (6%) | 1.044 | 0.101–10.806 | 0.971
  2, 3 | 47 (94%) | 15 (94%) | 1 (ref) | |
Histological type
  Squamous cell carcinoma | 45 (90%) | 15 (94%) | 1.667 | 0.180–15.425 | 0.653
  Others | 5 (10%) | 1 (6%) | 1 (ref) | |
Tumor location
  Ut, Mt | 31 (62%) | 11 (69%) | 1.348 | 0.406–4.484 | 0.626
  Lt, Ae | 19 (38%) | 5 (31%) | 1 (ref) | |
cT
  1 | 21 (42%) | 9 (56%) | 1.776 | 0.570–5.531 | 0.322
  2, 3, 4a | 29 (58%) | 7 (44%) | 1 (ref) | |
cN
  Absent | 33 (66%) | 10 (63%) | 0.859 | 0.267–2.764 | 0.798
  Present | 17 (34%) | 6 (38%) | 1 (ref) | |
cStage
  1 | 20 (40%) | 9 (56%) | 1.929 | 0.618–6.020 | 0.258
  2, 3 | 30 (60%) | 7 (44%) | 1 (ref) | |
Neoadjuvant chemotherapy
  Absent | 22 (44%) | 10 (63%) | 2.121 | 0.668–6.739 | 0.202
  Present | 28 (56%) | 6 (38%) | 1 (ref) | |
Abdominal approach
  Open | 11 (22%) | 4 (25%) | 1.182 | 0.317–4.400 | 0.803
  Laparoscopic | 39 (78%) | 12 (75%) | 1 (ref) | |
Lymph node dissection
  Two-field | 15 (30%) | 5 (31%) | 1.061 | 0.314–3.585 | 0.925
  Three-field | 35 (70%) | 11 (69%) | 1 (ref) | |
Route of reconstruction
  Retrosternal | 43 (86%) | 15 (94%) | 2.442 | 0.277–21.519 | 0.421
  Posterior mediastinal | 7 (14%) | 1 (6%) | 1 (ref) | |
Shape of gastric tube
  Wide | 5 (10%) | 3 (19%) | 2.077 | 0.437–9.871 | 0.358
  Narrow | 45 (90%) | 13 (81%) | 1 (ref) | |
Total operative time, min
  < 600 | 20 (40%) | 6 (38%) | 0.900 | 0.282–2.870 | 0.859
  ≥ 600 | 30 (60%) | 10 (63%) | 1 (ref) | |
Blood loss, mL
  < 100 | 26 (52%) | 7 (44%) | 0.718 | 0.231–2.229 | 0.566
  ≥ 100 | 24 (48%) | 9 (56%) | 1 (ref) | |
Anastomotic method
  ICG circular group | 30 (60%) | 3 (19%) | 0.154 | 0.039–0.610 | 0.008
  Classical group | 20 (40%) | 13 (81%) | 1 (ref) | |

OR odds ratio, CI confidence interval, ASA-PS American Society of Anesthesiologists physical status

Table 4. Multivariate logistic regression analyses of anastomotic leakage

Variable | OR | 95% CI | P value
Brinkman index (≥ 800) | 3.538 | 0.842–14.860 | 0.084
Anastomotic method (classical group) | 5.983 | 1.469–24.359 | 0.013

OR odds ratio, CI confidence interval

This study aimed to evaluate the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing short-term outcomes before and after standardization of the anastomotic method via a propensity-matched analysis. The ICG circular group had significantly lower rates of complications, including anastomotic leakage and stenosis, and a shorter postoperative hospital stay. Furthermore, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy.

The incidence of anastomotic leakage was significantly lower in the ICG circular group than in the classical group, and the anastomotic method was found to be an independent risk factor for postoperative anastomotic leakage. According to a systematic review of ICG fluorescence imaging in esophageal cancer surgery, the overall anastomotic leakage rate in patients who underwent surgery with ICG fluorescence imaging was lower than that in controls (13.5% [118/873] vs.
18.5% [86/466]) [6]. Furthermore, when anastomosis was performed at a site with good ICG perfusion, the incidence of anastomotic leakage was 9.0% (67/746). In this study, anastomosis was performed at a site with good ICG perfusion in all patients in the ICG circular group; thus, our anastomotic leakage outcomes were comparable. Honda et al. performed a systematic review comparing outcomes of hand-sewn and mechanical anastomosis with a circular stapler after esophagectomy [12]. The anastomotic leakage rate with circular stapling anastomosis (6.1%, 41/668) was similar to that with hand-sewn anastomosis (6.1%, 39/640) (risk ratio [RR]: 1.02, 95% CI: 0.66–1.59, P = 0.43); compared with our results, particularly those in the classical group, these rates were extremely low. The most likely reason for the high anastomotic leakage rate in the classical group was that too many different anastomotic methods were in use, whereas in the ICG circular group, anastomosis was performed with a completely uniform technique in all cases, which may have had a substantial impact on our results. Moreover, the review by Honda et al. included studies in which intrathoracic anastomosis was performed, so its outcomes are not directly comparable to those of cervical esophagogastric anastomosis, and our results were acceptable. In addition, Honda et al. showed that mechanical circular stapling anastomosis considerably reduced the operative time, by 15.3 min, compared with hand-sewn anastomosis. In our study, the total operative time tended to be shorter in the ICG circular group than in the classical group, but the difference was not significant.
Therefore, circular stapling anastomosis with ICG fluorescence imaging is a simple and safe method that can reduce anastomotic leakage after thoracoscopic esophagectomy.
This study also showed that the incidence of anastomotic stenosis was substantially lower in the ICG circular group than in the classical group after propensity score matching. However, the anastomotic stenosis rate in the classical group after matching was extremely high at 46%, which may have influenced this result. Honda et al., in a subgroup and meta-regression analysis, showed that the stenosis rate of circular stapling anastomosis was significantly higher than that of hand-sewn anastomosis (16.9% [106/626] vs. 9.9% [62/629]; RR: 1.67, 95% CI: 1.16–2.42; P = 0.006), with no significant differences according to anastomotic site, circular stapler diameter, layer, or configuration [12]. A randomized controlled trial by Hayata et al. comparing cervical esophagogastric circular stapling and triangulating stapling anastomoses after esophagectomy showed no significant difference in the anastomotic stenosis rate between the circular stapling group (17%, 8/47) and the triangulating stapling group (19%, 9/51) (P = 0.935) [13]. In this study, the anastomotic stenosis rate in the ICG circular group was 21%, similar to that in previous reports but still not satisfactory. Therefore, although the incidence of anastomotic stenosis has improved with the standardization of the anastomotic method, it should be reduced further.
This study had several limitations. First, this was a retrospective study with a small sample size. Ideally, all relevant clinicopathological and technical factors other than the anastomotic method would have been included as matching factors; however, this was not practical because of the small sample size. 
As a result, reconstruction-related perioperative outcomes, such as the abdominal approach and the shape of the gastric tube, were not consistent between the two groups. Therefore, the outcomes of ICG circular anastomosis after standardization must be evaluated prospectively. Second, various anastomotic methods were used in the classical group; a propensity-matched analysis was therefore performed to eliminate differences in patient characteristics and to minimize bias as much as possible.", "Circular stapling anastomosis with ICG fluorescence imaging was found to be effective in reducing anastomotic complications for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. Standardization of the anastomotic method was particularly crucial in this study. However, the incidence of anastomotic stenosis can still be improved, and this problem should be addressed in the future." ]
[ null, null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusion" ]
[ "Esophageal cancer", "Thoracoscopic esophagectomy", "Cervical esophagogastric anastomosis", "Indocyanine green fluorescence imaging", "Circular stapling anastomosis", "Anastomotic leakage", "Anastomotic stenosis", "Propensity score matching" ]
Background: Esophageal cancer is the ninth most commonly diagnosed cancer worldwide and the sixth most common cause of cancer-related mortality [1]. Esophagectomy is the mainstay of treatment for resectable esophageal cancer. Thoracoscopic esophagectomy was first reported by Cuschieri et al. in 1992 [2] and has since been widely adopted worldwide as curative surgery for esophageal cancer; a randomized controlled trial showed that it reduces the incidence of respiratory complications compared with open esophagectomy [3]. However, complications, including anastomotic leakage and stenosis, remain a major concern. At our institution, thoracoscopic esophagectomy was first performed in November 2009 and the procedure has been standardized; however, the anastomotic method was not standardized until June 2018. In fact, various anastomotic methods, such as hand-sewn, triangulating [4], circular stapling, and Collard anastomosis [5], were performed, but the rate of anastomotic complications did not decrease. A systematic review reported that indocyanine green (ICG) fluorescence imaging could be an important adjunct tool for reducing anastomotic leakage following esophagectomy [6]. ICG is a water-soluble near-infrared fluorophore with established immediate and long-term safety [7, 8], and ICG fluorescence imaging is a simple evaluation method. In recent robot-assisted surgery, high-resolution near-infrared images can be obtained with the Firefly system of the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). Therefore, to assess the blood flow in the gastric tube during reconstruction after esophagectomy, the use of ICG fluorescence imaging was standardized at our institution in July 2018. In addition, the anastomotic method was standardized to circular stapling anastomosis because it is simple and applicable to nearly all cases, including those with a short remnant esophagus. 
This study aimed to evaluate the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing the short-term outcomes before and after the anastomotic method was standardized via a propensity-matched analysis.

Methods: Patients
Altogether, 145 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection between November 2009 and December 2020 at Tottori University Hospital were included in this study. Among them, 24 patients were excluded: 19 who underwent reconstruction using the jejunum or colon, 2 who underwent pharyngeal gastric tube anastomosis because of simultaneous hypopharyngeal cancer, and 3 who underwent two-stage reconstruction. Finally, 121 patients were enrolled (Fig. 1). Patients who underwent surgery until June 2018, i.e., before the standardization of the anastomotic method, were included in the classical group, and those who underwent surgery from July 2018, i.e., after the standardization, were included in the ICG circular group. The clinicopathological findings were determined as per the Japanese Classification of Esophageal Cancer (11th edition) [9, 10]. This study was approved by the institutional review board of Tottori University School of Medicine (20A234), and the requirement for informed consent was waived.

Fig. 1 Patient selection for the evaluation of cervical esophagogastric anastomosis after thoracoscopic esophagectomy

Surgical procedure
All patients underwent thoracoscopic subtotal esophagectomy with mediastinal lymph node dissection in the prone position under right pneumothorax; robot-assisted esophagectomy has been used since February 2020. After completion of the thoracic procedure, patients were repositioned supine, and the cervical and abdominal procedures were initiated simultaneously. Cervical lymph node dissection was not performed in patients with lower thoracic or abdominal esophageal cancer without cervical or upper mediastinal lymph node metastasis. Abdominal lymph node dissection was performed via laparotomy, hand-assisted laparoscopic surgery, or complete laparoscopy. Patients treated with complete laparoscopy underwent laparotomy through an 8-cm upper abdominal incision after the abdominal lymph node dissection was completed, and the gastric tube was created under direct visualization in all cases. The gastric tube was created with either a wide or a narrow shape: for the wide gastric tube, the stomach was resected just below the esophagogastric junction; for the narrow gastric tube, the lesser curvature of the stomach was resected so that the diameter of the gastric tube was approximately 3.5 cm. The tube was then pulled up to the neck through the retrosternal or posterior mediastinal route, and esophagogastric anastomosis was performed on the left side of the neck. In the classical group, before the anastomotic method was standardized, additional Kocher mobilization was performed as required, and the gastric tube was used up to the area with good visually assessed blood flow.

ICG circular anastomosis method
The ICG circular anastomosis approach was as follows. After the gastric tube was created, ICG at a dose of 10 mg/body was administered intravenously, and ICG fluorescence imaging of the blood flow in the gastric tube was assessed using the PhotoDynamic Eye (Hamamatsu Photonics, Hamamatsu, Japan) or the Firefly system integrated with the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). The gastric tube was used up to the site where its wall showed uniform contrast within 20 s after the right gastroepiploic artery was contrasted with ICG, as reported by Noma et al. [11] (Fig. 2a, b). After the gastric tube was pulled up to the neck, end-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler (Medtronic, Minneapolis, Minnesota) (Fig. 2c). The stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge (Medtronic, Minneapolis, Minnesota) (Fig. 2d). The staple line was lastly buried.

Fig. 2 Procedures in indocyanine green (ICG) circular anastomosis. a The right gastroepiploic artery was contrasted with ICG; b the site where the wall of the gastric tube showed uniform contrast with ICG; c end-to-side esophagogastric anastomosis was performed on the posterior wall of the gastric tube using a 25-mm DST Series EEA circular stapler; d the stump of the gastric tube was sectioned and closed using the Signia stapling system with a 60-mm purple cartridge

Definition of perioperative complications
Anastomotic leakage was defined as saliva leakage from the cervical wound, contrast leakage outside the gastrointestinal tract on a gastrointestinal series, or abnormal air or fluid accumulation around the anastomotic site on computed tomography (CT). A routine gastrointestinal series was performed on postoperative day 7, and oral intake was initiated on postoperative day 8 in patients with an uneventful postoperative course; this protocol was maintained unchanged throughout the study period. Anastomotic stenosis was defined as a case in which a 9.0-mm endoscope could not pass through the anastomosis and balloon dilation was required during endoscopy for postoperative dysphagia. Pneumonia was defined as the appearance of consolidation on chest radiography or CT together with the detection of bacteria on sputum culture. Recurrent nerve paralysis was assessed by an otolaryngologist on postoperative day 6 or 7 via laryngoscopy. The follow-up period for postoperative complications was 1 year postoperatively for anastomotic stenosis and until postoperative day 30 for all other complications.

Statistical analysis
Continuous data were presented as mean ± standard deviation or median with quartiles, as indicated. The Mann–Whitney U-test and the χ2 test were used to evaluate differences in continuous and categorical variables, respectively. A propensity-matched analysis was conducted using a logistic regression model with the covariates age, sex, histological type, tumor location, clinical stage, and presence or absence of neoadjuvant chemotherapy. Univariate and multivariate logistic regression analyses were used to identify risk factors for anastomotic leakage; variables that were statistically significant in the univariate analysis were entered into the multivariate analysis. 
P values of < 0.05 indicated statistically significant differences, and the Statistical Package for the Social Sciences software version 25 (IBM SPSS Inc., Chicago, Illinois) was used for statistical analyses.

Results: Characteristics of patients
Of 121 patients, 82 were included in the classical group and 39 in the ICG circular group before matching. Next, 33 patients were included in each group after matching (Fig. 1). Table 1 shows the clinicopathological characteristics of patients before and after matching. Before matching, significant differences were noted in terms of the American Society of Anesthesiologists physical status (ASA-PS) score (P = 0.021) and histological type (P = 0.009). However, after matching, the background characteristics did not significantly differ between the two groups. 
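The propensity-matched analysis described in Methods scores each patient by logistic regression on the listed covariates and then pairs patients across groups with similar scores. A minimal, illustrative sketch of the 1:1 pairing step only (the greedy nearest-neighbor algorithm and the 0.2 caliper are our assumptions, not the authors' exact procedure):

```python
# Greedy 1:1 nearest-neighbor propensity matching (illustrative sketch).
# In the study, propensity scores came from a logistic regression on age, sex,
# histological type, tumor location, clinical stage, and neoadjuvant chemotherapy.

def greedy_match(treated, control, caliper=0.2):
    """treated/control: lists of (patient_id, propensity_score).
    Returns list of (treated_id, control_id) matched pairs."""
    pairs = []
    unused = dict(control)  # remaining control candidates
    for tid, ps in sorted(treated, key=lambda x: x[1]):
        if not unused:
            break
        # closest remaining control by absolute score distance
        cid = min(unused, key=lambda k: abs(unused[k] - ps))
        if abs(unused[cid] - ps) <= caliper:
            pairs.append((tid, cid))
            del unused[cid]  # each control is used at most once
    return pairs

treated = [("t1", 0.30), ("t2", 0.55)]
control = [("c1", 0.28), ("c2", 0.60), ("c3", 0.90)]
print(greedy_match(treated, control))  # [('t1', 'c1'), ('t2', 'c2')]
```

In practice this would be done with a dedicated matching routine (e.g., in SPSS, as used by the authors); the sketch only shows the pairing logic.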
Table 1Characteristics of patientsBefore matchingAfter matchingClassical groupICG circular groupP valueClassical groupICG circular groupP value(n = 82)(n = 39)(n = 33)(n = 33)Age (years)0.8920.872Median (quartiles)66 (61–72)66 (61–71)65 (61–73)67 (63–73)Sex0.9140.282 Male70 (85%)33 (85%)30 (90%)27 (82%) Female12 (15%)6 (15%)3 (9%)6 (18%)Body mass index (kg/m2)21.2 ± 3.122.2 ± 3.30.08321.8 ± 3.121.9 ± 3.30.677Serum albumin level (g/dL)4.2 ± 0.44.1 ± 0.40.4944.1 ± 0.44.1 ± 0.40.985Brinkman index0.7330.278Median (quartiles)800 (445–1000)860 (435–1000)820 (600–1140)840 (405–1000)ECOG performance status0.5860.601 068 (83%)34 (87%)27 (82%)28 (85%) 112 (15%)5 (13%)5 (15%)5 (15%) 22 (2%)0 (0%)1 (3%)0 (0%)Comorbidity Diabetes15 (18%)4 (10%)0.25610 (30%)4 (12%)0.071 Cardiovascular disease10 (12%)6 (15%)0.6282 (6%)5 (15%)0.230 Obstructive ventilation failure27 (33%)13 (33%)0.96510 (30%)11 (33%)0.792ASA-PS score0.0210.115 111 (13%)1 (3%)3 (9%)1 (3%) 263 (77%)28 (72%)27 (82%)23 (70%) 38 (10%)10 (26%)3 (9%)9 (27%)Histological type0.0091.000 Squamous cell carcinoma78 (95%)30 (77%)30 (91%)30 (91%) Adenocarcinoma2 (2%)6 (15%)2 (6%)2 (6%) Others2 (2%)3 (8%)1 (3%)1 (3%)Tumor location0.2290.642 Upper thoracic11 (13%)6 (15%)5 (15%)6 (18%) Middle thoracic43 (52%)16 (41%)15 (46%)16 (49%) Lower thoracic24 (29%)11 (28%)11 (33%)7 (21%) Abdominal4 (5%)6 (15%)2 (6%)4 (12%)cT0.4050.667 136 (44%)16 (41%)16 (49%)14 (42%) 214 (17%)11 (28%)5 (15%)9 (27%) 331 (38%)11 (28%)11 (33%)9 (27%) 4a1 (1%)1 (3%)1 (3%)1 (3%)cN0.2240.420 044 (54%)28 (72%)19 (58%)24 (73%) 117 (21%)5 (13%)7 (21%)4 (12%) 220 (24%)5 (13%)7 (21%)5 (15%) 31 (1%)1 (3%)0 (0%)0 (0%)cStage0.2960.393 132 (39%)14 (36%)16 (49%)13 (39%) 219 (23%)14 (36%)5 (15%)10 (30%) 331 (38%)11 (28%)12 (36%)10 (30%)Neoadjuvant chemotherapy0.9741.000 Absent36 (44%)17 (44%)16 (49%)16 (49%) Present46 (56%)22 (56%)17 (52%)17 (52%)ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Characteristics of patients 
ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Of 121 patients, 82 were included in the classical group and 39 in the ICG circular group before matching. Next, 33 patients were included in each group after matching (Fig. 1). Table 1 shows the clinicopathological characteristics of patients before and after matching. Before matching, significant differences were noted in terms of the American Society of Anesthesiologists physical status (ASA-PS) score (P = 0.021) and histological type (P = 0.009). However, after matching, the background characteristics did not significantly differ between the two groups. Table 1Characteristics of patientsBefore matchingAfter matchingClassical groupICG circular groupP valueClassical groupICG circular groupP value(n = 82)(n = 39)(n = 33)(n = 33)Age (years)0.8920.872Median (quartiles)66 (61–72)66 (61–71)65 (61–73)67 (63–73)Sex0.9140.282 Male70 (85%)33 (85%)30 (90%)27 (82%) Female12 (15%)6 (15%)3 (9%)6 (18%)Body mass index (kg/m2)21.2 ± 3.122.2 ± 3.30.08321.8 ± 3.121.9 ± 3.30.677Serum albumin level (g/dL)4.2 ± 0.44.1 ± 0.40.4944.1 ± 0.44.1 ± 0.40.985Brinkman index0.7330.278Median (quartiles)800 (445–1000)860 (435–1000)820 (600–1140)840 (405–1000)ECOG performance status0.5860.601 068 (83%)34 (87%)27 (82%)28 (85%) 112 (15%)5 (13%)5 (15%)5 (15%) 22 (2%)0 (0%)1 (3%)0 (0%)Comorbidity Diabetes15 (18%)4 (10%)0.25610 (30%)4 (12%)0.071 Cardiovascular disease10 (12%)6 (15%)0.6282 (6%)5 (15%)0.230 Obstructive ventilation failure27 (33%)13 (33%)0.96510 (30%)11 (33%)0.792ASA-PS score0.0210.115 111 (13%)1 (3%)3 (9%)1 (3%) 263 (77%)28 (72%)27 (82%)23 (70%) 38 (10%)10 (26%)3 (9%)9 (27%)Histological type0.0091.000 Squamous cell carcinoma78 (95%)30 (77%)30 (91%)30 (91%) Adenocarcinoma2 (2%)6 (15%)2 (6%)2 (6%) Others2 (2%)3 (8%)1 (3%)1 (3%)Tumor location0.2290.642 Upper thoracic11 (13%)6 (15%)5 (15%)6 (18%) Middle thoracic43 (52%)16 (41%)15 (46%)16 (49%) Lower thoracic24 (29%)11 (28%)11 (33%)7 (21%) 
Abdominal4 (5%)6 (15%)2 (6%)4 (12%)cT0.4050.667 136 (44%)16 (41%)16 (49%)14 (42%) 214 (17%)11 (28%)5 (15%)9 (27%) 331 (38%)11 (28%)11 (33%)9 (27%) 4a1 (1%)1 (3%)1 (3%)1 (3%)cN0.2240.420 044 (54%)28 (72%)19 (58%)24 (73%) 117 (21%)5 (13%)7 (21%)4 (12%) 220 (24%)5 (13%)7 (21%)5 (15%) 31 (1%)1 (3%)0 (0%)0 (0%)cStage0.2960.393 132 (39%)14 (36%)16 (49%)13 (39%) 219 (23%)14 (36%)5 (15%)10 (30%) 331 (38%)11 (28%)12 (36%)10 (30%)Neoadjuvant chemotherapy0.9741.000 Absent36 (44%)17 (44%)16 (49%)16 (49%) Present46 (56%)22 (56%)17 (52%)17 (52%)ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Characteristics of patients ECOG Eastern Cooperative Oncology Group, ASA-PS American Society of Anesthesiologists physical status Changes in anastomotic methods and perioperative outcomes Figure 3 presents changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy, and Table 2 depicts the perioperative outcomes in both groups. As shown in Table 2, prior to matching, a significantly higher proportion of patients underwent surgery using the laparoscopic approach (P < 0.001) and narrow gastric tube (P = 0.001) and those who had a lower volume of blood loss (P = 0.038) in the ICG circular group. After matching, the same factors indicated a significant difference. In terms of postoperative outcomes, the ICG circular group had a significantly lower proportion of patients who were observed to have anastomotic leakage (34% vs. 8%, P = 0.002) and a shorter postoperative hospital stay (29 vs. 20 days, P < 0.001) before matching. After matching, a significantly lower proportion of patients were noted to have anastomotic leakage (39% vs. 9%, P = 0.004) and stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001) in the ICG circular group. Fig. 3Changes in anastomotic methods for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. 
Changes in the classical and ICG circular groups, and the annual incidence rates of anastomotic leakage and stenosis, are shown.

Table 2 Perioperative outcomes of patients with esophageal cancer after thoracoscopic esophagectomy

| Outcome | Before: Classical (n = 82) | Before: ICG circular (n = 39) | Before: P value | After: Classical (n = 33) | After: ICG circular (n = 33) | After: P value |
|---|---|---|---|---|---|---|
| Abdominal approach | | | < 0.001 | | | < 0.001 |
| Open | 34 (42%) | 0 (0%) | | 15 (46%) | 0 (0%) | |
| Laparoscopic | 48 (59%) | 39 (100%) | | 18 (55%) | 33 (100%) | |
| Lymph node dissection | | | 0.180 | | | 1.000 |
| Two-field | 18 (22%) | 13 (33%) | | 10 (30%) | 10 (30%) | |
| Three-field | 64 (78%) | 26 (67%) | | 23 (70%) | 23 (70%) | |
| Route of reconstruction | | | 0.070 | | | 0.131 |
| Retrosternal | 68 (83%) | 37 (95%) | | 27 (82%) | 31 (94%) | |
| Posterior mediastinal | 14 (17%) | 2 (5%) | | 6 (18%) | 2 (6%) | |
| Shape of the gastric tube | | | 0.001 | | | 0.003 |
| Wide | 21 (26%) | 0 (0%) | | 8 (24%) | 0 (0%) | |
| Narrow | 61 (74%) | 39 (100%) | | 25 (76%) | 33 (100%) | |
| Total operative time (min) | 634 ± 89 | 617 ± 53 | 0.573 | 638 ± 100 | 616 ± 51 | 0.753 |
| Volume of blood loss (mL) | 186 ± 219 | 103 ± 97 | 0.038 | 251 ± 298 | 93 ± 89 | 0.009 |
| Postoperative complications | | | | | | |
| Anastomotic leakage | 28 (34%) | 3 (8%) | 0.002 | 13 (39%) | 3 (9%) | 0.004 |
| Anastomotic stenosis | 29 (35%) | 8 (21%) | 0.097 | 15 (46%) | 7 (21%) | 0.037 |
| Pneumonia | 18 (22%) | 11 (28%) | 0.451 | 8 (24%) | 9 (27%) | 0.778 |
| Recurrent nerve paralysis | 14 (17%) | 2 (5%) | 0.070 | 7 (21%) | 2 (6%) | 0.073 |
| Postoperative hospital stay (days), median (quartiles) | 29 (22–44) | 20 (16–28) | < 0.001 | 30 (25–44) | 20 (17–28) | < 0.001 |
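The between-group comparisons of complication rates in Table 2 are consistent with a Pearson chi-squared test without continuity correction; for example, the post-matching leakage counts (13/33 classical vs. 3/33 ICG circular) reproduce P = 0.004. A minimal arithmetic sketch (the paper does not state which test was used, so this is an assumption):

```python
import math

# Post-matching anastomotic leakage from Table 2:
# classical 13/33, ICG circular 3/33.
table = [[13, 20],   # classical: leakage, no leakage
         [3, 30]]    # ICG circular: leakage, no leakage

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
n = sum(row)

# Pearson chi-squared statistic (no continuity correction).
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# Survival function of chi-squared with 1 df: P(X > x) = erfc(sqrt(x / 2)).
p = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")  # chi2 = 8.25, P = 0.004
```

With Yates' continuity correction the statistic would drop to about 6.68 (P ≈ 0.010), so the uncorrected test is the one that matches the reported value.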
Risk factor analyses of anastomotic leakage

Finally, the risk factors for anastomotic leakage were evaluated in the 66 propensity score-matched patients. The univariate analysis indicated that the Brinkman index (P = 0.048) and the anastomotic method (P = 0.008) were significantly associated with anastomotic leakage (Table 3).
According to the multivariate analysis, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (odds ratio: 5.983, 95% confidence interval [CI]: 1.469–24.359, P = 0.013) (Table 4).

Table 3 Univariate logistic regression analyses of anastomotic leakage

| Variable | Absent (n = 50) | Present (n = 16) | OR | 95% CI | P value |
|---|---|---|---|---|---|
| Age < 65 years | 20 (40%) | 8 (50%) | 1.500 | 0.484–4.651 | 0.483 |
| Age ≥ 65 years | 30 (60%) | 8 (50%) | 1 (reference) | | |
| Sex, male | 42 (84%) | 15 (94%) | 2.857 | 0.329–24.795 | 0.341 |
| Sex, female | 8 (16%) | 1 (6%) | 1 (reference) | | |
| Body mass index < 22 kg/m² | 24 (48%) | 10 (63%) | 1.806 | 0.569–5.726 | 0.316 |
| Body mass index ≥ 22 kg/m² | 26 (52%) | 6 (38%) | 1 (reference) | | |
| Serum albumin < 4 g/dL | 20 (40%) | 5 (31%) | 0.682 | 0.206–2.261 | 0.531 |
| Serum albumin ≥ 4 g/dL | 30 (60%) | 11 (69%) | 1 (reference) | | |
| Brinkman index < 800 | 24 (48%) | 3 (19%) | 0.250 | 0.063–0.986 | 0.048 |
| Brinkman index ≥ 800 | 26 (52%) | 13 (81%) | 1 (reference) | | |
| Performance status 0 | 42 (84%) | 13 (81%) | 0.825 | 0.191–3.574 | 0.797 |
| Performance status 1, 2 | 8 (16%) | 3 (19%) | 1 (reference) | | |
| Diabetes absent | 39 (78%) | 13 (81%) | 1.222 | 0.295–5.069 | 0.782 |
| Diabetes present | 11 (22%) | 3 (19%) | 1 (reference) | | |
| Cardiovascular disease absent | 46 (92%) | 13 (81%) | 0.377 | 0.075–1.901 | 0.237 |
| Cardiovascular disease present | 4 (8%) | 3 (19%) | 1 (reference) | | |
| Obstructive ventilation failure absent | 36 (72%) | 9 (56%) | 0.500 | 0.156–1.603 | 0.243 |
| Obstructive ventilation failure present | 14 (28%) | 7 (44%) | 1 (reference) | | |
| ASA-PS score 1 | 3 (6%) | 1 (6%) | 1.044 | 0.101–10.806 | 0.971 |
| ASA-PS score 2, 3 | 47 (94%) | 15 (94%) | 1 (reference) | | |
| Squamous cell carcinoma | 45 (90%) | 15 (94%) | 1.667 | 0.180–15.425 | 0.653 |
| Other histology | 5 (10%) | 1 (6%) | 1 (reference) | | |
| Tumor location Ut, Mt | 31 (62%) | 11 (69%) | 1.348 | 0.406–4.484 | 0.626 |
| Tumor location Lt, Ae | 19 (38%) | 5 (31%) | 1 (reference) | | |
| cT 1 | 21 (42%) | 9 (56%) | 1.776 | 0.570–5.531 | 0.322 |
| cT 2, 3, 4a | 29 (58%) | 7 (44%) | 1 (reference) | | |
| cN absent | 33 (66%) | 10 (63%) | 0.859 | 0.267–2.764 | 0.798 |
| cN present | 17 (34%) | 6 (38%) | 1 (reference) | | |
| cStage 1 | 20 (40%) | 9 (56%) | 1.929 | 0.618–6.020 | 0.258 |
| cStage 2, 3 | 30 (60%) | 7 (44%) | 1 (reference) | | |
| Neoadjuvant chemotherapy absent | 22 (44%) | 10 (63%) | 2.121 | 0.668–6.739 | 0.202 |
| Neoadjuvant chemotherapy present | 28 (56%) | 6 (38%) | 1 (reference) | | |
| Abdominal approach, open | 11 (22%) | 4 (25%) | 1.182 | 0.317–4.400 | 0.803 |
| Abdominal approach, laparoscopic | 39 (78%) | 12 (75%) | 1 (reference) | | |
| Two-field lymph node dissection | 15 (30%) | 5 (31%) | 1.061 | 0.314–3.585 | 0.925 |
| Three-field lymph node dissection | 35 (70%) | 11 (69%) | 1 (reference) | | |
| Retrosternal reconstruction | 43 (86%) | 15 (94%) | 2.442 | 0.277–21.519 | 0.421 |
| Posterior mediastinal reconstruction | 7 (14%) | 1 (6%) | 1 (reference) | | |
| Wide gastric tube | 5 (10%) | 3 (19%) | 2.077 | 0.437–9.871 | 0.358 |
| Narrow gastric tube | 45 (90%) | 13 (81%) | 1 (reference) | | |
| Total operative time < 600 min | 20 (40%) | 6 (38%) | 0.900 | 0.282–2.870 | 0.859 |
| Total operative time ≥ 600 min | 30 (60%) | 10 (63%) | 1 (reference) | | |
| Blood loss < 100 mL | 26 (52%) | 7 (44%) | 0.718 | 0.231–2.229 | 0.566 |
| Blood loss ≥ 100 mL | 24 (48%) | 9 (56%) | 1 (reference) | | |
| Anastomotic method, ICG circular group | 30 (60%) | 3 (19%) | 0.154 | 0.039–0.610 | 0.008 |
| Anastomotic method, classical group | 20 (40%) | 13 (81%) | 1 (reference) | | |

OR odds ratio, CI confidence interval, ASA-PS American Society of Anesthesiologists physical status

Table 4 Multivariate logistic regression analyses of anastomotic leakage

| Variable | OR | 95% CI | P value |
|---|---|---|---|
| Brinkman index (≥ 800) | 3.538 | 0.842–14.860 | 0.084 |
| Anastomotic method (classical group) | 5.983 | 1.469–24.359 | 0.013 |

OR odds ratio, CI confidence interval
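The univariate odds ratio for the anastomotic method can be reproduced from the 2 × 2 counts in Table 3 with the standard Woolf (log-OR) confidence interval. This is a sketch of the arithmetic only, not the authors' statistical software:

```python
import math

# 2 x 2 counts for the anastomotic method from Table 3
# (rows: group, columns: leakage present / absent):
#   classical:    13 / 20
#   ICG circular:  3 / 30
a, b, c, d = 13, 20, 3, 30

odds_ratio = (a * d) / (b * c)                 # classical vs. ICG circular
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # Woolf SE of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 6.50, 95% CI 1.64-25.76
```

Inverting gives OR ≈ 0.154 (95% CI ≈ 0.039–0.610) for the ICG circular group, matching the values reported in Table 3.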
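The 1:1 propensity-score matching step that produced the 33-patient groups can be sketched as follows. The greedy nearest-neighbor rule, the caliper, and the illustrative scores are assumptions for exposition; the paper does not specify its exact matching algorithm or covariate set:

```python
import random

random.seed(0)

# Hypothetical propensity scores (probability of being in the ICG circular
# group) for 39 treated and 82 control patients -- illustrative values only.
ps_treated = [random.uniform(0.2, 0.8) for _ in range(39)]
ps_control = [random.uniform(0.1, 0.7) for _ in range(82)]

# Greedy 1:1 nearest-neighbor matching without replacement, within a caliper.
caliper = 0.05
unused = set(range(len(ps_control)))
pairs = []
for t, pt in enumerate(ps_treated):
    best = min(unused, key=lambda c: abs(ps_control[c] - pt), default=None)
    if best is not None and abs(ps_control[best] - pt) <= caliper:
        unused.discard(best)
        pairs.append((t, best))

print(f"{len(pairs)} matched pairs")
```

Treated patients with no control inside the caliper are dropped, which is how a 39-vs-82 cohort can shrink to 33 matched pairs.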
Discussion: This study evaluated the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing short-term outcomes before and after standardization of the anastomotic method in a propensity-matched analysis. The ICG circular group had significantly lower rates of complications, including anastomotic leakage and stenosis, and a shorter postoperative hospital stay, and the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy. In a systematic review of ICG fluorescence imaging in esophageal cancer surgery, the overall anastomotic leakage rate in patients who underwent surgery with ICG fluorescence imaging was lower than that in controls (13.5% [118/873] vs. 18.5% [86/466]) [6].
Furthermore, when anastomosis was performed at a site with good ICG perfusion, the reported incidence of anastomotic leakage was 9.0% (67/746). In this study, anastomosis was performed at a site with good ICG perfusion in all patients in the ICG circular group, so our leakage outcomes were comparable. Honda et al. performed a systematic review comparing hand-sewn and mechanical circular stapling anastomosis after esophagectomy [12]. The anastomotic leakage rate with circular stapling anastomosis (6.1%, 41/668) was similar to that with hand-sewn anastomosis (6.1%, 39/640) (risk ratio [RR]: 1.02, 95% CI: 0.66–1.59, P = 0.43); both rates are far lower than ours, particularly that of our classical group. The most likely reason for the high leakage rate in the classical group is that many different anastomotic methods were used, whereas in the ICG circular group anastomosis was performed with a completely uniform technique in all cases, which may have had a substantial impact on our results. Moreover, the review by Honda et al. included studies with intrathoracic anastomosis, so its outcomes are not fully comparable with those of cervical esophagogastric anastomosis, and our results appear acceptable. Honda et al. also showed that mechanical circular stapling anastomosis reduced operative time by 15.3 min compared with hand-sewn anastomosis. In our study, total operative time tended to be shorter in the ICG circular group than in the classical group, but the difference was not significant. Therefore, circular stapling anastomosis with ICG fluorescence imaging is a simple and safe method that can reduce anastomotic leakage after thoracoscopic esophagectomy.
This study showed that the incidence of anastomotic stenosis was substantially lower in the ICG circular group than in the classical group after propensity score matching, although the extremely high stenosis rate in the matched classical group (46%) may have magnified this difference. In a subgroup and meta-regression analysis, Honda et al. showed that the stenosis rate of circular stapling anastomosis was significantly higher than that of hand-sewn anastomosis (16.9% [106/626] vs. 9.9% [62/629]; RR: 1.67, 95% CI: 1.16–2.42, P = 0.006), with no significant effect of anastomotic site, circular stapler diameter, layer, or configuration [12]. A randomized controlled trial by Hayata et al. comparing cervical esophagogastric circular stapling and triangulating stapling anastomoses after esophagectomy showed no significant difference in stenosis rate between the circular stapling group (17%, 8/47) and the triangulating stapling group (19%, 9/51) (P = 0.935) [13]. In this study, the anastomotic stenosis rate in the ICG circular group was 21%, similar to these previous reports but still unsatisfactory. Thus, although standardization of the anastomotic method is improving the stenosis rate, it should be reduced further. This study had several limitations. First, it was a retrospective study with a small sample size. Ideally, all relevant clinicopathological and technical factors other than the anastomotic method would have been included as matching factors, but this was not practical given the small sample size. As a result, reconstruction-related factors such as the abdominal approach and the shape of the gastric tube were not consistent between the two groups, so the outcomes of ICG circular anastomosis after standardization must be evaluated prospectively.
Second, various anastomotic methods were used in the classical group. A propensity-matched analysis was therefore performed to minimize differences in patient characteristics and reduce bias as far as possible. Conclusions: Circular stapling anastomosis with ICG fluorescence imaging was effective in reducing anastomotic complications of cervical esophagogastric anastomosis after thoracoscopic esophagectomy, and standardization of the anastomotic method was particularly crucial in this study. However, the incidence of anastomotic stenosis can still be improved, and this problem should be addressed in the future.
Background: Thoracoscopic esophagectomy has been widely used worldwide as a curative surgery for patients with esophageal cancer; however, complications such as anastomotic leakage and stenosis remain a major concern. The objective of this study was therefore to evaluate the efficacy of circular stapling anastomosis with indocyanine green (ICG) fluorescence imaging, standardized for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. Methods: Altogether, 121 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection and cervical esophagogastric anastomosis from November 2009 to December 2020 at Tottori University Hospital were enrolled in this study. Patients operated on before standardization of the anastomotic method formed the classical group (n = 82), and those operated on after standardization formed the ICG circular group (n = 39). Short-term postoperative outcomes, including anastomotic complications, were compared between the two groups using a propensity-matched analysis, and the risk factors for anastomotic leakage were evaluated using logistic regression analyses. Results: Of the 121 patients, 33 were included in each group after propensity score matching, after which the clinicopathological characteristics did not differ between the two groups. After matching, the ICG circular group had a significantly higher proportion of patients treated via the laparoscopic approach (P < 0.001) and with a narrow gastric tube (P = 0.003), and a lower volume of blood loss (P = 0.009). Moreover, the ICG circular group had a significantly lower incidence of anastomotic leakage (39% vs. 9%, P = 0.004) and anastomotic stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001) than the classical group. In the multivariate analysis, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (P = 0.013). Conclusions: Circular stapling anastomosis with ICG fluorescence imaging is effective in reducing complications such as anastomotic leakage and stenosis.
Background: Esophageal cancer is the ninth most commonly diagnosed cancer worldwide and the sixth most common cause of cancer-related mortality [1]. Esophagectomy is the mainstay of treatment for resectable esophageal cancer. Thoracoscopic esophagectomy, first reported by Cuschieri et al. in 1992 [2], has since been widely adopted as a curative surgery for esophageal cancer, and a randomized controlled trial reported that it reduces the incidence of respiratory complications compared with open esophagectomy [3]. However, complications, including anastomotic leakage and stenosis, remain a major concern. At our institution, thoracoscopic esophagectomy was first performed in November 2009 and the procedure itself has been standardized; the anastomotic method, however, was not standardized until June 2018. Various anastomotic methods, such as hand-sewn, triangulating [4], circular stapling, and Collard anastomosis [5], were performed, but the anastomotic complication rate did not decrease. A systematic review reported that indocyanine green (ICG) fluorescence imaging could be an important adjunct for reducing anastomotic leakage following esophagectomy [6]. ICG is a water-soluble near-infrared fluorescent dye with established immediate and long-term safety [7, 8], and ICG fluorescence imaging is a simple evaluation method. In recent robot-assisted surgery, high-resolution near-infrared images have been obtained with the Firefly system on the da Vinci Xi surgical robot (Intuitive Surgical Inc., Sunnyvale, California). Therefore, to assess blood flow in the gastric tube during esophagectomy reconstruction, the use of ICG fluorescence imaging was standardized at our institution in July 2018. In addition, the anastomotic method was standardized to circular stapling anastomosis because it is simple and applicable to nearly all cases, including those with a short remnant esophagus.
This study aimed to evaluate the efficacy of circular stapling anastomosis with ICG fluorescence imaging for cervical esophagogastric anastomosis after thoracoscopic esophagectomy by comparing the short-term outcomes before and after the anastomotic method was standardized via a propensity-matched analysis.
Background: Thoracoscopic esophagectomy has been widely adopted worldwide as a curative surgery for patients with esophageal cancer; however, complications such as anastomotic leakage and stenosis remain a major concern. The objective of this study was therefore to evaluate the efficacy of circular stapling anastomosis with indocyanine green (ICG) fluorescence imaging, standardized for cervical esophagogastric anastomosis after thoracoscopic esophagectomy. Methods: Altogether, 121 patients with esophageal cancer who underwent thoracoscopic esophagectomy with radical lymph node dissection and cervical esophagogastric anastomosis from November 2009 to December 2020 at Tottori University Hospital were enrolled. Patients who underwent surgery before the anastomotic method was standardized formed the classical group (n = 82), and those who underwent surgery after standardization formed the ICG circular group (n = 39). Short-term postoperative outcomes, including anastomotic complications, were compared between the two groups using propensity-matched analysis, and risk factors for anastomotic leakage were evaluated using logistic regression analyses. Results: Of the 121 patients, 33 were included in each group after propensity score matching, after which the clinicopathological characteristics did not differ between the two groups. Regarding perioperative outcomes, the matched ICG circular group had significantly higher proportions of patients treated with a laparoscopic approach (P < 0.001) and a narrow gastric tube (P = 0.003), as well as lower blood loss (P = 0.009). Moreover, the ICG circular group had a significantly lower incidence of anastomotic leakage (39% vs. 9%, P = 0.004) and anastomotic stenosis (46% vs. 21%, P = 0.037) and a shorter postoperative hospital stay (30 vs. 20 days, P < 0.001) than the classical group. In the multivariate analysis, the anastomotic method was an independent risk factor for anastomotic leakage after thoracoscopic esophagectomy (P = 0.013). Conclusions: Circular stapling anastomosis with ICG fluorescence imaging is effective in reducing complications such as anastomotic leakage and stenosis.
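The comparison above relies on propensity score matching: each patient in one group is paired with a similar patient in the other by a propensity score before outcomes are compared. As a rough illustration of the matching step only (not the authors' actual procedure; the scores, caliper value, and function name below are invented for the example), greedy 1:1 nearest-neighbor matching within a caliper can be sketched as:

```python
# Greedy 1:1 nearest-neighbor propensity-score matching with a caliper.
# Scores below are illustrative only; in practice they come from a
# logistic regression of group membership on the covariates.

def match_pairs(treated, control, caliper=0.1):
    """Match each treated score to the nearest unused control score
    within `caliper`; returns a list of (treated_idx, control_idx)."""
    used = set()
    pairs = []
    for i, t in enumerate(treated):
        best, best_dist = None, caliper
        for j, c in enumerate(control):
            if j in used:
                continue
            dist = abs(t - c)
            if dist <= best_dist:  # ties resolved in favor of later controls
                best, best_dist = j, dist
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

# Example: three "treated" and four "control" propensity scores.
print(match_pairs([0.62, 0.55, 0.70], [0.50, 0.58, 0.95, 0.69]))
# → [(0, 1), (1, 0), (2, 3)]
```

Real analyses estimate the scores from the covariates and verify covariate balance after matching; unmatched patients (e.g., a treated score with no control within the caliper) are dropped, which is how 121 patients can reduce to 33 per group.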
10,553
416
[ 390, 2496, 223, 267, 419, 177, 151, 612, 584, 586 ]
13
[ "anastomotic", "anastomosis", "circular", "icg", "gastric", "tube", "gastric tube", "patients", "15", "leakage" ]
[ "leakage thoracoscopic esophagectomy", "thoracoscopic esophagectomy changes", "thoracoscopic esophagectomy altogether", "thoracoscopic esophagectomy comparing", "cancer thoracoscopic esophagectomy" ]
null
[CONTENT] Esophageal cancer | Thoracoscopic esophagectomy | Cervical esophagogastric anastomosis | Indocyanine green fluorescence imaging | Circular stapling anastomosis | Anastomotic leakage | Anastomotic stenosis | Propensity score matching [SUMMARY]
null
[CONTENT] Anastomosis, Surgical | Anastomotic Leak | Constriction, Pathologic | Esophageal Neoplasms | Esophagectomy | Humans | Indocyanine Green | Optical Imaging | Propensity Score [SUMMARY]
null
[CONTENT] leakage thoracoscopic esophagectomy | thoracoscopic esophagectomy changes | thoracoscopic esophagectomy altogether | thoracoscopic esophagectomy comparing | cancer thoracoscopic esophagectomy [SUMMARY]
null
[CONTENT] anastomotic | anastomosis | circular | icg | gastric | tube | gastric tube | patients | 15 | leakage [SUMMARY]
null
[CONTENT] standardized | esophagectomy | cancer | anastomotic method standardized | anastomotic | imaging | fluorescence imaging | fluorescence | icg fluorescence imaging | icg fluorescence [SUMMARY]
null
[CONTENT] 15 | 33 | 13 | anastomotic | 28 | 44 | 30 | 10 | 16 | 11 [SUMMARY]
[CONTENT] anastomotic | crucial anastomotic method | future | stenosis improved problem | stenosis improved problem addressed | study incidence rate anastomotic | study incidence rate | study incidence | improved problem addressed future | standardized study incidence [SUMMARY]
[CONTENT] anastomotic | anastomosis | icg | circular | gastric | gastric tube | tube | 15 | esophagectomy | patients [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] 121 | 33 ||| two ||| 0.001 | 0.003 | 0.009 | ICG ||| ICG | 39% | 9% | 0.004 | 46% | 21% | 0.037 | 30 | 20 days | 0.001 ||| 0.013 [SUMMARY]
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| 121 | November 2009 to December 2020 | Tottori University Hospital ||| 82 | ICG | 39 ||| two ||| 121 | 33 ||| two ||| 0.001 | 0.003 | 0.009 | ICG ||| ICG | 39% | 9% | 0.004 | 46% | 21% | 0.037 | 30 | 20 days | 0.001 ||| 0.013 ||| [SUMMARY]
The relationship between epicardial fat tissue thickness and visceral adipose tissue in lean patients with polycystic ovary syndrome.
26545735
Polycystic ovary syndrome (PCOS) is related to metabolic syndrome, insulin resistance, and cardiometabolic disorders. This is particularly true for individuals with central and abdominal obesity, because visceral abdominal adipose tissue (VAAT) and epicardial adipose tissue (EAT) produce large amounts of proinflammatory and proatherogenic cytokines. The present study aimed to determine whether there are changes in VAAT and EAT levels, which are considered indirect predictors of subclinical atherosclerosis, in lean patients with PCOS.
BACKGROUND
The clinical and demographic characteristics of 35 patients with PCOS and 38 healthy control subjects were recorded for the present study. Additionally, the serum levels of various biochemical parameters were measured and EAT levels were assessed using 2D-transthoracic echocardiography.
METHODS
There were no significant differences in mean age (p = 0.056) or mean body mass index (BMI) (p = 0.446) between the patient and control groups. However, the body fat percentage, waist-to-hip ratio, amount of abdominal subcutaneous adipose tissue, and VAAT thickness were higher in the PCOS patient group than in the control group. The amounts of EAT in the patient and control groups were similar (p = 0.384). EAT was correlated with BMI, fat mass, waist circumference, and hip circumference but not with any biochemical metabolic parameters including the homeostasis model assessment of insulin resistance index or the levels of triglycerides, low-density lipoprotein cholesterol, and high-density lipoprotein (HDL) cholesterol. However, there was a small positive correlation between the amounts of VAAT and EAT. VAAT was directly correlated with body fat parameters such as BMI, fat mass, and abdominal subcutaneous adipose thickness and inversely correlated with the HDL cholesterol level.
RESULTS
The present study found that increased abdominal adipose tissue in patients with PCOS was associated with atherosclerosis. Additionally, EAT may aid in the determination of the risk of atherosclerosis in patients with PCOS because it is easily measured.
CONCLUSIONS
[ "Adult", "Body Mass Index", "Case-Control Studies", "Female", "Humans", "Intra-Abdominal Fat", "Pericardium", "Polycystic Ovary Syndrome", "Thinness", "Waist-Hip Ratio" ]
4636769
Background
Polycystic ovary syndrome (PCOS) is a heterogeneous disease that affects 5 to 10 % of women in the reproductive period [1]. Many studies have shown that PCOS is associated with various cardiovascular risk factors such as obesity, insulin resistance, hyperlipidemia, metabolic syndrome, and hypertension [1–3]. Additionally, patients with PCOS have a high incidence of central and abdominal obesity and marked increases in the waist circumference (WC) and waist-to-hip ratio (WHR) [4, 5]. Visceral abdominal adipose tissue (VAAT) surrounds the internal organs, and increased amounts of VAAT are more important than increased levels of subcutaneous fat in terms of the risks of metabolic syndrome, insulin resistance, and cardiovascular mortality [6]. Epicardial adipose tissue (EAT), located between the myocardium and the visceral pericardium, is derived from the same origin as visceral adipose tissue (VAT) [7]. This is important because both of these body fat tissues produce large numbers of proinflammatory and proatherogenic cytokines [8, 9]. The reported findings regarding abdominal fat tissue and EAT in patients with PCOS are controversial [8–21]. For example, patients with PCOS have been shown to have increased [10–14], similar [15–17], or decreased [18] amounts of abdominal fat. Similarly, the amount of EAT in patients with PCOS has been reported as both increased and unchanged compared with healthy control groups [19–21]. However, not all patients with PCOS are obese; in fact, a 2001 study of 346 patients with PCOS found that 56 % of such patients are lean [22]: 56.0 % had a body mass index (BMI) of < 25 kg/m2, 11.3 % had a BMI of 25 to 27 kg/m2, and 32.7 % had a BMI of ≥ 27 kg/m2 [22]. Thus, the present study aimed to determine whether there are changes in the amounts of VAAT and EAT in lean patients with PCOS compared with healthy control subjects.
null
null
Results
The body fat distribution, total body water, WC, HC, WHR, amount of abdominal subcutaneous adipose tissue, VAAT thickness, blood pressure, and levels of LDL cholesterol, HDL cholesterol, and TG are provided in Table 1. There were no significant differences in the mean age (p = 0.056) or mean BMI (p = 0.446) between the patient and control groups, but the body fat percentage, WHR, amount of abdominal subcutaneous adipose tissue, and VAAT thickness were higher in the patient group. However, the patient and control groups had similar amounts of EAT (p = 0.384) (Table 1).

Table 1. Comparison of demographic, body analysis, and laboratory parameters of the patient and control groups

Feature | Patients (n = 35) | Controls (n = 38) | P value
Age (years) | 25.16 ± 4.12 | 27.44 ± 4.31 | 0.056
BMI (kg/m2) | 25.60 ± 5.22 | 23.67 ± 3.70 | 0.404
Fat mass (%) | 31.74 ± 10.12 | 26.33 ± 7.61 | 0.042
Total body water (kg) | 34.11 ± 4.48 | 32.72 ± 2.33 | 0.193
Waist circumference (cm) | 91.76 ± 18.95 | 86.50 ± 9.71 | 0.677
Hip circumference (cm) | 103.56 ± 15.27 | 100.55 ± 7.48 | 0.557
WHR | 0.94 ± 0.06 | 0.89 ± 0.04 | 0.007
VAAT thickness | 9.24 ± 4.67 | 6.77 ± 2.68 | 0.042
Abdominal subcutaneous adipose tissue thickness | 40.48 ± 8.83 | 32.86 ± 9.48 | 0.008
Systolic blood pressure (mmHg) | 120.10 ± 7.78 | 120.28 ± 7.28 | 0.979
Diastolic blood pressure (mmHg) | 70.26 ± 7.40 | 72.76 ± 9.21 | 0.318
FPG (mg/dL) | 89.16 ± 9.12 | 87.72 ± 7.92 | 0.565
Insulin (μIU/mL) | 9.77 ± 6.27 | 6.80 ± 3.45 | 0.073
HOMA-IR | 2.13 ± 1.33 | 1.60 ± 0.67 | 0.082
TG (mg/dL) | 11.52 ± 53.81 | 85.52 ± 31.49 | 0.058
LDL (mg/dL) | 106.40 ± 35.57 | 96.38 ± 28.73 | 0.305
HDL (mg/dL) | 55.69 ± 16.25 | 58.90 ± 11.96 | 0.300
EAT (mm) | 4.72 ± 0.88 | 4.43 ± 1.31 | 0.384

BMI body mass index, VAAT visceral abdominal adipose tissue, FPG fasting plasma glucose, HOMA-IR Homeostatic Model Assessment-insulin resistance, TG triglyceride, LDL low-density lipoprotein, HDL high-density lipoprotein, EAT epicardial adipose tissue. Continuous variables were compared with independent-samples t-tests or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests.

EAT had a significant positive correlation with BMI (r = 0.260, p = 0.034), fat mass (r = 0.250, p = 0.041), WC (r = 0.301, p = 0.016), and hip circumference (r = 0.254, p = 0.043), but it was not correlated with the HOMA-IR index or the levels of TG, LDL cholesterol, or HDL cholesterol (p > 0.05) (Table 2). There was a small positive correlation between VAAT and EAT (r = 0.248, p = 0.048). VAAT was also directly associated with BMI (r = 0.921, p < 0.01), fat mass (r = 0.941, p < 0.01), WC (r = 0.941, p < 0.01), HC (r = 0.876, p < 0.01), abdominal subcutaneous adipose thickness (r = 0.896, p < 0.01), the HOMA-IR index (r = 0.618, p < 0.01), and the levels of TG (r = 0.388, p < 0.01) and LDL cholesterol (r = 0.288, p = 0.016). Conversely, VAAT was inversely associated with the HDL cholesterol level (r = −0.488, p < 0.01) (Table 3).

Table 2. Correlation analysis between EAT and variables

Variable | r value | p value
BMI (kg/m2) | 0.260 | 0.034
Fat mass (%) | 0.250 | 0.041
Waist circumference (cm) | 0.301 | 0.016
Hip circumference (cm) | 0.254 | 0.043
HOMA-IR | 0.119 | 0.490
TG (mg/dL) | 0.076 | 0.550
LDL (mg/dL) | 0.158 | 0.209
HDL (mg/dL) | −0.185 | 0.141

Table 3. Correlation analysis between VAAT and variables

Variable | r value | p value
EAT (mm) | 0.248 | 0.048
BMI (kg/m2) | 0.921 | <0.01
Fat mass (%) | 0.941 | <0.01
Waist circumference (cm) | 0.941 | <0.01
Hip circumference (cm) | 0.876 | <0.01
Abdominal subcutaneous adipose tissue thickness | 0.896 | <0.01
HOMA-IR | 0.618 | <0.01
TG (mg/dL) | 0.388 | <0.01
LDL (mg/dL) | 0.288 | 0.016
HDL (mg/dL) | −0.488 | <0.01

Abbreviations and statistical tests for Tables 2 and 3 are as in Table 1.
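The r values above come from Spearman's rank test (per the Statistical analysis section). As a minimal illustration of the statistic itself (the data below are made up, not the study's measurements, and ties are assumed absent), Spearman's rho for untied data reduces to a closed formula on squared rank differences:

```python
# Spearman's rank correlation for untied data (illustrative sketch;
# statistical packages also handle ties and report p values).

def spearman_rho(x, y):
    """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) for untied samples,
    where d_i is the difference between the ranks of x_i and y_i."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Perfectly monotone data give rho = 1; reversed order gives rho = -1.
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
print(spearman_rho([1, 2, 3, 4], [4, 3, 2, 1]))      # -1.0
```

Because the statistic depends only on ranks, it captures monotone associations (such as VAAT vs. HDL) without assuming normality, which matches the study's choice of nonparametric tests.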
Conclusions
The present study observed several associations between EAT thickness and cardiovascular risk in patients with PCOS. Because of the difficulties related to the measurement of abdominal adipose tissue thickness, the assessment of EAT may be a relatively easy-to-use but important tool for the determination of cardiovascular risk.
[ "Selection of subjects", "Exclusion criteria", "Measurements", "Body composition analysis", "Biochemical analysis", "Echocardiography", "Statistical analysis" ]
[ "The present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three criteria from the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) the presence of clinical or biochemical hyperandrogenism; and/or c) the presence of polycystic ovaries as determined by a pelvic ultrasound [22]. The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, the absence of hirsutism, and no polycystic ovary as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent.", "Subjects were excluded from the present study if they had history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months.", "The height and weight of all subjects were measured, and their BMI was calculated as the weight in kilograms divided by the square of height in meters. 
WC was measured from the narrowest part of the body between the iliac crest and the rib, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21].", "The basal metabolic rate, body fat percentage, and total body water of each patient were evaluated with a Tanita Body Composition Analyzer (Model TBF-300; Tanita Corporation, Itabashi-ku, Tokyo, Japan) while the patient was in a standing position without shoes and with light clothing on after a ≥ 8-h fast with sufficient hydration. Abdominal subcutaneous fat tissue thickness and visceral abdominal fat tissue thickness were recorded using bioelectrical impedance with the Tanita Abdominal Fat Analyzer (AB-140 Viscan; Tanita Corporation, Tokyo, Japan). The blood pressure of each patient was assessed after at least 10 min of rest with a sphygmomanometer (ERKA; Bad Tölz, Germany); two measurements were performed, and the average of the blood pressure measurements was calculated [23].", "Blood samples were obtained from the patients in the morning after at least 8 h of fasting. The fasting plasma glucose (FPG) and fasting insulin levels of the patients were measured, and the homeostatic model assessment-insulin resistance (HOMA-IR) index was calculated using the following formula: (FPG [mg/dL] × fasting plasma insulin [μIU/mL] / 405). If the HOMA-IR index was > 2.7, insulin resistance was considered to be present [24]. Serum lipid levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides (TG) were measured using xylidine blue with an end-point colorimetric method (Roche Diagnostics GmbH; Mannheim, Germany). FPG levels were measured with a hexokinase method (Roche Diagnostics GmbH).", "All patients were directed to the Department of Cardiology at Sakarya Training and Research Hospital, and the EAT thickness was measured using 2D-transthoracic echocardiography by the same cardiologist. 
The parasternal long- and short-axis images of EAT, which allow for the most accurate measurement from the right ventricle, were obtained using a standard parasternal image in the left lateral decubitus position. EAT was defined as the echo-free space between the outer wall of the myocardium and the visceral layer of pericardium at end-systole in the right ventricle [25].", "All statistical analyses were performed with SPSS software, version 15 (SPSS, Inc., Chicago, IL, USA). Nonparametric tests were utilized because the variables were not normally distributed. The Mann–Whitney U test was applied to assess differences between groups, and Spearman’s test was applied to assess correlations between the variables. A p value of < 0.05 was considered to indicate statistical significance, and a correlation was considered to be present if Spearman’s value was ≥ 0.50. Continuous variables are expressed as either the mean ± standard deviation (SD) or the median (minimum–maximum), and categorical variables are expressed as either frequency or percentage. Continuous variables were compared with independent-samples t-tests or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests." ]
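The indices defined in the Methods follow simple closed forms: BMI is weight (kg) divided by height (m) squared, WHR is WC divided by HC, and HOMA-IR is FPG (mg/dL) times fasting insulin (μIU/mL) divided by 405, with values above 2.7 taken as insulin resistance. A small sketch of these calculations (the helper names and sample values are my own, not from the study):

```python
# Anthropometric and metabolic indices as defined in the Methods.

def bmi(weight_kg, height_m):
    """Body mass index: kg / m^2."""
    return weight_kg / height_m ** 2

def whr(waist_cm, hip_cm):
    """Waist-to-hip ratio: waist circumference / hip circumference."""
    return waist_cm / hip_cm

def homa_ir(fpg_mg_dl, insulin_uiu_ml):
    """HOMA-IR = fasting glucose (mg/dL) * fasting insulin (uIU/mL) / 405."""
    return fpg_mg_dl * insulin_uiu_ml / 405

def insulin_resistant(fpg_mg_dl, insulin_uiu_ml, threshold=2.7):
    """The study treats HOMA-IR > 2.7 as insulin resistance."""
    return homa_ir(fpg_mg_dl, insulin_uiu_ml) > threshold

# Illustrative values only.
print(round(bmi(70, 1.70), 2))      # 24.22
print(round(whr(90, 100), 2))       # 0.9
print(round(homa_ir(90, 13.5), 2))  # 3.0
print(insulin_resistant(90, 13.5))  # True
```

Note how the group mean HOMA-IR values reported in the Results (2.13 for patients, 1.60 for controls) both fall below the 2.7 cutoff, consistent with the "lean PCOS" framing of the study.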
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Selection of subjects", "Exclusion criteria", "Measurements", "Body composition analysis", "Biochemical analysis", "Echocardiography", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "Polycystic ovary syndrome (PCOS) is a heterogeneous disease that affects 5 to 10 % of women in the reproductive period [1]. Many studies have shown that PCOS is associated with various cardiovascular risk factors such as obesity, insulin resistance, hyperlipidemia, metabolic syndrome, and hypertension [1–3]. Additionally, patients with PCOS have a high incidence of central and abdominal obesity and marked increases in the waist circumference (WC) and waist-to-hip ratio (WHR) [4, 5].\nVisceral abdominal adipose tissue (VAAT) surrounds the internal organs, and increased amounts of VAAT are more important than increased levels of subcutaneous fat in terms of the risks of metabolic syndrome, insulin resistance, and cardiovascular mortality [6]. Epicardial adipose tissue (EAT) and visceral adipose tissue (VAT), located between the myocardium and visceral epicardium, respectively, are derived from the same origin [7]. This is important because both of these body fat tissues produce large numbers of proinflammatory and proatherogenic cytokines [8, 9]. The reported findings regarding abdominal fat tissue and EAT in patients with PCOS are controversial [8–21]. For example, patients with PCOS have been shown to have increased [10–14], similar [15–17], or decreased [18] amounts of abdominal fat. Similarly, the amount of EAT in patients with PCOS has been reported to be increased and unchanged compared with healthy control groups [19–21]. However, not all patients with PCOS are obese; in fact, a 2001 study of 346 patients with PCOS found that 56 % of such patients are lean [22]; 56.0 % had a body mass index (BMI) of < 25 kg/m2, 11.3 % had a BMI of 25 to 27 kg/m2, and 32.7 % had a BMI of ≥ 27 kg/m2 [22]. 
Thus, the present study aimed to determine whether there are changes in the amounts of VAAT and EAT in lean patients with PCOS compared with healthy control subjects.", " Selection of subjects The present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three criteria from the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) the presence of clinical or biochemical hyperandrogenism; and/or c) the presence of polycystic ovaries as determined by a pelvic ultrasound [22]. The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, the absence of hirsutism, and no polycystic ovary as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent.\nThe present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. 
Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three criteria from the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) the presence of clinical or biochemical hyperandrogenism; and/or c) the presence of polycystic ovaries as determined by a pelvic ultrasound [22]. The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, the absence of hirsutism, and no polycystic ovary as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent.\n Exclusion criteria Subjects were excluded from the present study if they had history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months.\nSubjects were excluded from the present study if they had history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months.\n Measurements The height and weight of all subjects 
were measured, and their BMI was calculated as the weight in kilograms divided by the square of height in meters. WC was measured from the narrowest part of the body between the iliac crest and the rib, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21].\nThe height and weight of all subjects were measured, and their BMI was calculated as the weight in kilograms divided by the square of height in meters. WC was measured from the narrowest part of the body between the iliac crest and the rib, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21].\n Body composition analysis The basal metabolic rate, body fat percentage, and total body water of each patient were evaluated with a Tanita Body Composition Analyzer (Model TBF-300; Tanita Corporation, Itabashi-ku, Tokyo, Japan) while the patient was in a standing position without shoes and with light clothing on after a ≥ 8-h fast with sufficient hydration. Abdominal subcutaneous fat tissue thickness and visceral abdominal fat tissue thickness were recorded using bioelectrical impedance with the Tanita Abdominal Fat Analyzer (AB-140 Viscan; Tanita Corporation, Tokyo, Japan). The blood pressure of each patient was assessed after at least 10 min of rest with a sphygmomanometer (ERKA; Bad Tölz, Germany); two measurements were performed, and the average of the blood pressure measurements was calculated [23].\nThe basal metabolic rate, body fat percentage, and total body water of each patient were evaluated with a Tanita Body Composition Analyzer (Model TBF-300; Tanita Corporation, Itabashi-ku, Tokyo, Japan) while the patient was in a standing position without shoes and with light clothing on after a ≥ 8-h fast with sufficient hydration. 
Abdominal subcutaneous fat tissue thickness and visceral abdominal fat tissue thickness were recorded using bioelectrical impedance with the Tanita Abdominal Fat Analyzer (AB-140 Viscan; Tanita Corporation, Tokyo, Japan). The blood pressure of each patient was assessed after at least 10 min of rest with a sphygmomanometer (ERKA; Bad Tölz, Germany); two measurements were performed, and the average of the blood pressure measurements was calculated [23].\n Biochemical analysis Blood samples were obtained from the patients in the morning after at least 8 h of fasting. The fasting plasma glucose (FPG) and fasting insulin levels of the patients were measured, and the homeostatic model assessment-insulin resistance (HOMA-IR) index was calculated using the following formula: (FPG [mg/dL] × fasting plasma insulin [μIU/mL] / 405). If the HOMA-IR index was > 2.7, insulin resistance was considered to be present [24]. Serum lipid levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides (TG) were measured using xylidine blue with an end-point colorimetric method (Roche Diagnostics GmbH; Mannheim, Germany). FPG levels were measured with a hexokinase method (Roche Diagnostics GmbH).\nBlood samples were obtained from the patients in the morning after at least 8 h of fasting. The fasting plasma glucose (FPG) and fasting insulin levels of the patients were measured, and the homeostatic model assessment-insulin resistance (HOMA-IR) index was calculated using the following formula: (FPG [mg/dL] × fasting plasma insulin [μIU/mL] / 405). If the HOMA-IR index was > 2.7, insulin resistance was considered to be present [24]. Serum lipid levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides (TG) were measured using xylidine blue with an end-point colorimetric method (Roche Diagnostics GmbH; Mannheim, Germany). 
FPG levels were measured with a hexokinase method (Roche Diagnostics GmbH).\n Echocardiography All patients were directed to the Department of Cardiology at Sakarya Training and Research Hospital, and the EAT thickness was measured using 2D-transthoracic echocardiography by the same cardiologist. The parasternal long- and short-axis images of EAT, which allow for the most accurate measurement from the right ventricle, were obtained using a standard parasternal image in the left lateral decubitus position. EAT was defined as the echo-free space between the outer wall of the myocardium and the visceral layer of pericardium at end-systole in the right ventricle [25].\nAll patients were directed to the Department of Cardiology at Sakarya Training and Research Hospital, and the EAT thickness was measured using 2D-transthoracic echocardiography by the same cardiologist. The parasternal long- and short-axis images of EAT, which allow for the most accurate measurement from the right ventricle, were obtained using a standard parasternal image in the left lateral decubitus position. EAT was defined as the echo-free space between the outer wall of the myocardium and the visceral layer of pericardium at end-systole in the right ventricle [25].\n Statistical analysis All statistical analyses were performed with SPSS software, version 15 (SPSS, Inc., Chicago, IL, USA). Nonparametric tests were utilized due to the prevalence of variables. The Mann–Whitney U test was applied to assess differences between groups, and Spearman’s test was applied to assess correlations between the variables. A p value of < 0.05 was considered to indicate statistical significance, and a correlation was considered to be present if Spearman’s value was ≥ 0.50. Continuous variables are expressed as either the mean ± standard deviation (SD) or the median (minimum–maximum), and categorical variables are expressed as either frequency or percentage. 
Continuous variables were compared with the independent-samples t-test or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests.

Selection of subjects

The present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) criteria for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) clinical or biochemical hyperandrogenism; and/or c) polycystic ovaries as determined by a pelvic ultrasound [22].
The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, no hirsutism, and no polycystic ovaries as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent.

Exclusion criteria

Subjects were excluded from the present study if they had a history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months.

Measurements

The height and weight of all subjects were measured, and their BMI was calculated as the weight in kilograms divided by the square of the height in meters. WC was measured at the narrowest part of the body between the iliac crest and the rib cage, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21].
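As a worked example of the anthropometric formulas above (BMI = weight / height²; WHR = WC / HC), with hypothetical measurements:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    # BMI = weight in kilograms divided by the square of the height in meters
    return weight_kg / height_m ** 2


def whr(waist_cm: float, hip_cm: float) -> float:
    # Waist-to-hip ratio = waist circumference / hip circumference
    return waist_cm / hip_cm


# Hypothetical subject: 65 kg, 1.60 m tall, WC 80 cm, HC 100 cm
print(round(bmi(65, 1.60), 1))  # 25.4
print(whr(80, 100))             # 0.8
```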
Results

The body fat distribution, total body water, WC, HC, WHR, abdominal subcutaneous adipose tissue thickness, VAAT thickness, blood pressure, and levels of LDL cholesterol, HDL cholesterol, and TG are provided in Table 1. There were no significant differences in mean age (p = 0.056) or mean BMI (p = 0.446) between the patient and control groups, but the body fat percentage, WHR, abdominal subcutaneous adipose tissue thickness, and VAAT thickness were higher in the patient group.
However, the patient and control groups had similar amounts of EAT (p = 0.384) (Table 1).

Table 1. Comparison of demographic, body analysis, and laboratory parameters of the patient and control groups

| Feature | Patients (n = 35) | Controls (n = 38) | p value |
|---|---|---|---|
| Age (years) | 25.16 ± 4.12 | 27.44 ± 4.31 | 0.056 |
| BMI (kg/m²) | 25.60 ± 5.22 | 23.67 ± 3.70 | 0.404 |
| Fat mass (%) | 31.74 ± 10.12 | 26.33 ± 7.61 | 0.042 |
| Total body water (kg) | 34.11 ± 4.48 | 32.72 ± 2.33 | 0.193 |
| Waist circumference (cm) | 91.76 ± 18.95 | 86.50 ± 9.71 | 0.677 |
| Hip circumference (cm) | 103.56 ± 15.27 | 100.55 ± 7.48 | 0.557 |
| WHR | 0.94 ± 0.06 | 0.89 ± 0.04 | 0.007 |
| VAAT thickness | 9.24 ± 4.67 | 6.77 ± 2.68 | 0.042 |
| Abdominal subcutaneous adipose tissue thickness | 40.48 ± 8.83 | 32.86 ± 9.48 | 0.008 |
| Systolic blood pressure (mmHg) | 120.10 ± 7.78 | 120.28 ± 7.28 | 0.979 |
| Diastolic blood pressure (mmHg) | 70.26 ± 7.40 | 72.76 ± 9.21 | 0.318 |
| FPG (mg/dL) | 89.16 ± 9.12 | 87.72 ± 7.92 | 0.565 |
| Insulin (μIU/mL) | 9.77 ± 6.27 | 6.80 ± 3.45 | 0.073 |
| HOMA-IR | 2.13 ± 1.33 | 1.60 ± 0.67 | 0.082 |
| TG (mg/dL) | 11.52 ± 53.81 | 85.52 ± 31.49 | 0.058 |
| LDL (mg/dL) | 106.40 ± 35.57 | 96.38 ± 28.73 | 0.305 |
| HDL (mg/dL) | 55.69 ± 16.25 | 58.90 ± 11.96 | 0.300 |
| EAT (mm) | 4.72 ± 0.88 | 4.43 ± 1.31 | 0.384 |

BMI body mass index, VAAT visceral abdominal adipose tissue, FPG fasting plasma glucose, HOMA-IR homeostatic model assessment-insulin resistance, TG triglyceride, LDL low-density lipoprotein, HDL high-density lipoprotein, EAT epicardial adipose tissue. Continuous variables were compared with the independent-samples t-test or the Mann–Whitney U test; a p value of < 0.05 was considered to indicate statistical significance.

EAT had a significantly positive correlation with BMI (r = 0.260, p = 0.034), fat mass (r = 0.250, p = 0.041), WC (r = 0.301, p = 0.016), and HC (r = 0.254, p = 0.043), but it was not correlated with the HOMA-IR index or the levels of TG, LDL cholesterol, or HDL cholesterol (p > 0.05) (Table 2). There was a small positive correlation between VAAT and EAT (r = 0.248, p = 0.048). VAAT was also directly associated with BMI (r = 0.921, p < 0.01), fat mass (r = 0.941, p < 0.01), WC (r = 0.941, p < 0.01), HC (r = 0.876, p < 0.01), abdominal subcutaneous adipose tissue thickness (r = 0.896, p < 0.01), the HOMA-IR index (r = 0.618, p < 0.01), and the levels of TG (r = 0.388, p < 0.01) and LDL cholesterol (r = 0.288, p = 0.016). Conversely, VAAT was inversely associated with the HDL cholesterol level (r = −0.488, p < 0.01) (Table 3).

Table 2. Correlation analysis between EAT and the study variables

| Variable | r value | p value |
|---|---|---|
| BMI (kg/m²) | 0.260 | 0.034 |
| Fat mass (%) | 0.250 | 0.041 |
| Waist circumference (cm) | 0.301 | 0.016 |
| Hip circumference (cm) | 0.254 | 0.043 |
| HOMA-IR | 0.119 | 0.490 |
| TG (mg/dL) | 0.076 | 0.550 |
| LDL (mg/dL) | 0.158 | 0.209 |
| HDL (mg/dL) | −0.185 | 0.141 |

Table 3. Correlation analysis between VAAT and the study variables

| Variable | r value | p value |
|---|---|---|
| EAT (mm) | 0.248 | 0.048 |
| BMI (kg/m²) | 0.921 | <0.01 |
| Fat mass (%) | 0.941 | <0.01 |
| Waist circumference (cm) | 0.941 | <0.01 |
| Hip circumference (cm) | 0.876 | <0.01 |
| Abdominal subcutaneous adipose tissue thickness | 0.896 | <0.01 |
| HOMA-IR | 0.618 | <0.01 |
| TG (mg/dL) | 0.388 | <0.01 |
| LDL (mg/dL) | 0.288 | 0.016 |
| HDL (mg/dL) | −0.488 | <0.01 |

BMI body mass index, HOMA-IR homeostatic model assessment-insulin resistance, TG triglyceride, LDL low-density lipoprotein, HDL high-density lipoprotein, EAT epicardial adipose tissue, VAAT visceral abdominal adipose tissue. Correlations were assessed with Spearman’s test; a p value of < 0.05 was considered to indicate statistical significance.
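The analyses reported in Tables 1–3 (nonparametric group comparisons and Spearman correlations) can be sketched as follows; this uses SciPy rather than SPSS, and the data are randomly generated stand-ins, not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical EAT thickness (mm) for 35 patients and 38 controls
eat_patients = rng.normal(4.7, 0.9, 35)
eat_controls = rng.normal(4.4, 1.3, 38)

# Between-group comparison with the Mann-Whitney U test
u_stat, p_group = stats.mannwhitneyu(eat_patients, eat_controls)

# Spearman correlation between EAT and a second variable (e.g. BMI)
bmi_patients = rng.normal(25.6, 5.2, 35)
rho, p_corr = stats.spearmanr(eat_patients, bmi_patients)

# The study's decision rules: p < 0.05 for significance, |rho| >= 0.50 for a correlation
print(p_group < 0.05, abs(rho) >= 0.50)
```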
Discussion

Although the patient and control groups in the present study had similar ages and BMIs, the lean patients with PCOS exhibited a higher WHR, VAAT thickness, and abdominal subcutaneous fat tissue thickness than did the control group. In contrast, there was no significant difference in EAT. However, EAT was significantly correlated with VAAT, BMI, fat mass, WC, and HC.

Previous studies have shown a positive correlation between EAT thickness and VAAT thickness [26–28] independent of obesity [26, 27], and this correlation appears to be stronger than that with WC [28]. Although the amount of EAT is increased in obese patients with PCOS compared with obese patients without PCOS [19–21], EAT does not differ between lean patients with PCOS and the normal population [21, 29]. Similarly, the present study found no differences in EAT between the two groups. Compared with normal individuals, patients with PCOS exhibit increases in total fat mass and organ-specific VAT [29]. Furthermore, these increases are positively correlated with both systolic and diastolic blood pressure and the levels of fasting glucose, insulin, LDL cholesterol, TG, and transaminases but negatively correlated with the insulin sensitivity index and HDL cholesterol. Likewise, the present study found that fat mass, abdominal subcutaneous adipose tissue thickness, and VAAT thickness were higher in the patient group than in the control group. Additionally, VAAT thickness was positively correlated with BMI, fat mass, WC, HC, abdominal subcutaneous adipose tissue thickness, and the levels of TG and LDL cholesterol but negatively correlated with HDL cholesterol.

Sahin et al. [21] reported that EAT is positively correlated with age, BMI, WC, the glucose level, the HOMA-IR index, and the TG level.
Similarly, the present study found that EAT was positively correlated with BMI, WC, HC, and VAAT; however, in contrast to those previous findings, EAT was not correlated with fasting glucose, the HOMA-IR index, or the lipid parameters. This may be because the patients with PCOS in the present study were lean rather than obese. Another study found that EAT is correlated with BMI, WC, VAT thickness, and insulin resistance [30]. In the present study, EAT was positively correlated with BMI, WC, and VAAT but not with the HOMA-IR index. EAT is reportedly more closely associated with VAT than with total body fat [28, 30].

In studies employing magnetic resonance imaging (MRI), the abdominal adipose tissue thicknesses of patients with PCOS and normal control subjects did not significantly differ [16, 31–33]. It has also been shown that VAAT thickness is increased only in mildly obese patients with PCOS relative to control subjects [15] and that obesity predicts insulin resistance independently of PCOS [31]. Furthermore, a study conducted using a bioimpedance device found less VAT in lean patients with PCOS than in the control group [18]. In contrast, the present study found greater VAAT thickness in lean patients with PCOS than in the control group, even though EAT thickness did not differ.

The gold-standard tests for measuring VAT are MRI and computed tomography (CT) [34]. Thus, a limitation of the present study may be that abdominal subcutaneous adipose tissue and VAT were assessed using bioelectrical impedance. However, measuring adipose tissue with CT or MRI is difficult because of cost, radiation exposure, and the use of contrast media. Additionally, previous studies have shown that bioelectrical impedance and CT measurements of VAT are closely correlated [35].
Consequently, given that the present study found a small positive correlation between EAT thickness and VAAT thickness, echocardiography appears to be a simple, noninvasive, reliable, and accessible method for measuring these parameters relative to MRI [25]. EAT is important for the determination of both VAAT thickness and cardiovascular risk [26, 36], and increased abdominal adipose tissue is related to an increased risk of atherosclerosis [37] and mortality [38]. Because abdominal adipose tissue can be difficult to measure, it may be more economical and efficient to determine these risk factors by measuring EAT.

Conclusions

The present study observed several associations between EAT thickness and cardiovascular risk in patients with PCOS. Because of the difficulties related to the measurement of abdominal adipose tissue thickness, the assessment of EAT may be a relatively easy-to-use but important tool for the determination of cardiovascular risk.
Keywords: Polycystic ovary, Epicardial adipose
Background

Polycystic ovary syndrome (PCOS) is a heterogeneous disease that affects 5 to 10 % of women of reproductive age [1]. Many studies have shown that PCOS is associated with various cardiovascular risk factors such as obesity, insulin resistance, hyperlipidemia, metabolic syndrome, and hypertension [1–3]. Additionally, patients with PCOS have a high incidence of central and abdominal obesity and marked increases in waist circumference (WC) and the waist-to-hip ratio (WHR) [4, 5]. Visceral abdominal adipose tissue (VAAT) surrounds the internal organs, and increased amounts of VAAT are more important than increased subcutaneous fat in terms of the risks of metabolic syndrome, insulin resistance, and cardiovascular mortality [6]. Epicardial adipose tissue (EAT), located between the myocardium and the visceral pericardium, shares a common origin with visceral adipose tissue (VAT) [7]. This is important because both of these fat depots produce large numbers of proinflammatory and proatherogenic cytokines [8, 9]. The reported findings regarding abdominal fat tissue and EAT in patients with PCOS are controversial [8–21]. For example, patients with PCOS have been shown to have increased [10–14], similar [15–17], or decreased [18] amounts of abdominal fat. Similarly, the amount of EAT in patients with PCOS has been reported to be either increased or unchanged compared with healthy control groups [19–21]. However, not all patients with PCOS are obese; a 2001 study of 346 patients with PCOS found that 56.0 % were lean, with a body mass index (BMI) of < 25 kg/m², whereas 11.3 % had a BMI of 25 to 27 kg/m² and 32.7 % had a BMI of ≥ 27 kg/m² [22]. Thus, the present study aimed to determine whether the amounts of VAAT and EAT differ between lean patients with PCOS and healthy control subjects.
Methods: Selection of subjects The present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three criteria from the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) the presence of clinical or biochemical hyperandrogenism; and/or c) the presence of polycystic ovaries as determined by a pelvic ultrasound [22]. The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, the absence of hirsutism, and no polycystic ovary as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent. The present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. 
Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three criteria from the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) the presence of clinical or biochemical hyperandrogenism; and/or c) the presence of polycystic ovaries as determined by a pelvic ultrasound [22]. The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, the absence of hirsutism, and no polycystic ovary as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent. Exclusion criteria Subjects were excluded from the present study if they had history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months. Subjects were excluded from the present study if they had history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months. 
Measurements The height and weight of all subjects were measured, and their BMI was calculated as the weight in kilograms divided by the square of height in meters. WC was measured from the narrowest part of the body between the iliac crest and the rib, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21]. The height and weight of all subjects were measured, and their BMI was calculated as the weight in kilograms divided by the square of height in meters. WC was measured from the narrowest part of the body between the iliac crest and the rib, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21]. Body composition analysis The basal metabolic rate, body fat percentage, and total body water of each patient were evaluated with a Tanita Body Composition Analyzer (Model TBF-300; Tanita Corporation, Itabashi-ku, Tokyo, Japan) while the patient was in a standing position without shoes and with light clothing on after a ≥ 8-h fast with sufficient hydration. Abdominal subcutaneous fat tissue thickness and visceral abdominal fat tissue thickness were recorded using bioelectrical impedance with the Tanita Abdominal Fat Analyzer (AB-140 Viscan; Tanita Corporation, Tokyo, Japan). The blood pressure of each patient was assessed after at least 10 min of rest with a sphygmomanometer (ERKA; Bad Tölz, Germany); two measurements were performed, and the average of the blood pressure measurements was calculated [23]. The basal metabolic rate, body fat percentage, and total body water of each patient were evaluated with a Tanita Body Composition Analyzer (Model TBF-300; Tanita Corporation, Itabashi-ku, Tokyo, Japan) while the patient was in a standing position without shoes and with light clothing on after a ≥ 8-h fast with sufficient hydration. 
Abdominal subcutaneous fat tissue thickness and visceral abdominal fat tissue thickness were recorded using bioelectrical impedance with the Tanita Abdominal Fat Analyzer (AB-140 Viscan; Tanita Corporation, Tokyo, Japan). The blood pressure of each patient was assessed after at least 10 min of rest with a sphygmomanometer (ERKA; Bad Tölz, Germany); two measurements were performed, and the average of the blood pressure measurements was calculated [23]. Biochemical analysis Blood samples were obtained from the patients in the morning after at least 8 h of fasting. The fasting plasma glucose (FPG) and fasting insulin levels of the patients were measured, and the homeostatic model assessment-insulin resistance (HOMA-IR) index was calculated using the following formula: (FPG [mg/dL] × fasting plasma insulin [μIU/mL] / 405). If the HOMA-IR index was > 2.7, insulin resistance was considered to be present [24]. Serum lipid levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides (TG) were measured using xylidine blue with an end-point colorimetric method (Roche Diagnostics GmbH; Mannheim, Germany). FPG levels were measured with a hexokinase method (Roche Diagnostics GmbH). Blood samples were obtained from the patients in the morning after at least 8 h of fasting. The fasting plasma glucose (FPG) and fasting insulin levels of the patients were measured, and the homeostatic model assessment-insulin resistance (HOMA-IR) index was calculated using the following formula: (FPG [mg/dL] × fasting plasma insulin [μIU/mL] / 405). If the HOMA-IR index was > 2.7, insulin resistance was considered to be present [24]. Serum lipid levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides (TG) were measured using xylidine blue with an end-point colorimetric method (Roche Diagnostics GmbH; Mannheim, Germany). 
FPG levels were measured with a hexokinase method (Roche Diagnostics GmbH). Echocardiography All patients were directed to the Department of Cardiology at Sakarya Training and Research Hospital, and the EAT thickness was measured using 2D-transthoracic echocardiography by the same cardiologist. The parasternal long- and short-axis images of EAT, which allow for the most accurate measurement from the right ventricle, were obtained using a standard parasternal image in the left lateral decubitus position. EAT was defined as the echo-free space between the outer wall of the myocardium and the visceral layer of pericardium at end-systole in the right ventricle [25]. All patients were directed to the Department of Cardiology at Sakarya Training and Research Hospital, and the EAT thickness was measured using 2D-transthoracic echocardiography by the same cardiologist. The parasternal long- and short-axis images of EAT, which allow for the most accurate measurement from the right ventricle, were obtained using a standard parasternal image in the left lateral decubitus position. EAT was defined as the echo-free space between the outer wall of the myocardium and the visceral layer of pericardium at end-systole in the right ventricle [25]. Statistical analysis All statistical analyses were performed with SPSS software, version 15 (SPSS, Inc., Chicago, IL, USA). Nonparametric tests were utilized due to the prevalence of variables. The Mann–Whitney U test was applied to assess differences between groups, and Spearman’s test was applied to assess correlations between the variables. A p value of < 0.05 was considered to indicate statistical significance, and a correlation was considered to be present if Spearman’s value was ≥ 0.50. Continuous variables are expressed as either the mean ± standard deviation (SD) or the median (minimum–maximum), and categorical variables are expressed as either frequency or percentage. 
Continuous variables were compared with an independent-samples t-tests or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests. All statistical analyses were performed with SPSS software, version 15 (SPSS, Inc., Chicago, IL, USA). Nonparametric tests were utilized due to the prevalence of variables. The Mann–Whitney U test was applied to assess differences between groups, and Spearman’s test was applied to assess correlations between the variables. A p value of < 0.05 was considered to indicate statistical significance, and a correlation was considered to be present if Spearman’s value was ≥ 0.50. Continuous variables are expressed as either the mean ± standard deviation (SD) or the median (minimum–maximum), and categorical variables are expressed as either frequency or percentage. Continuous variables were compared with an independent-samples t-tests or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests. Selection of subjects: The present study included 38 healthy control subjects and 35 patients with PCOS and concurrent hyperandrogenism and/or ovulatory dysfunction who were admitted to the Endocrinology and Metabolism Department of Sakarya Training and Research Hospital at Sakarya University from January 2013 to June 2014. Some of these patients were diagnosed by a gynecologist, and some were diagnosed by the present authors based on the presence of two of the three criteria from the Rotterdam European Society for Human Reproduction and Embryology/American Society for Reproductive Medicine (ESHRE/ASRM) for PCOS: a) oligomenorrhea, amenorrhea, or anovulation; b) the presence of clinical or biochemical hyperandrogenism; and/or c) the presence of polycystic ovaries as determined by a pelvic ultrasound [22]. 
The control group comprised healthy secretaries, nurses, and doctors from our hospital who volunteered for the study and had regular menstrual cycles, normal androgen levels, the absence of hirsutism, and no polycystic ovary as determined by a pelvic ultrasound. The demographic data of the patient and control groups were recorded. The present study was approved by the Sakarya University Faculty of Medicine Ethics Committee (Date: 24.02.2014; No. 27), and all participants provided written informed consent. Exclusion criteria: Subjects were excluded from the present study if they had history of smoking, diabetes, or hypertension; had been diagnosed with Cushing’s syndrome (based on the 1-mg dexamethasone suppression test) or non-classic congenital adrenal hyperplasia (based on a 17-OH progesterone level of > 10 ng/dL after stimulation); exhibited cardiac disease; and/or had used antidiabetic, antihypertensive, antilipidemic, or oral contraceptive drugs within the past 3 months. Measurements: The height and weight of all subjects were measured, and their BMI was calculated as the weight in kilograms divided by the square of height in meters. WC was measured from the narrowest part of the body between the iliac crest and the rib, and hip circumference (HC) was measured at the widest part of the hips. The WHR was calculated as the ratio of WC to HC [21]. Body composition analysis: The basal metabolic rate, body fat percentage, and total body water of each patient were evaluated with a Tanita Body Composition Analyzer (Model TBF-300; Tanita Corporation, Itabashi-ku, Tokyo, Japan) while the patient was in a standing position without shoes and with light clothing on after a ≥ 8-h fast with sufficient hydration. Abdominal subcutaneous fat tissue thickness and visceral abdominal fat tissue thickness were recorded using bioelectrical impedance with the Tanita Abdominal Fat Analyzer (AB-140 Viscan; Tanita Corporation, Tokyo, Japan). 
The blood pressure of each patient was assessed after at least 10 min of rest with a sphygmomanometer (ERKA; Bad Tölz, Germany); two measurements were performed, and the average of the blood pressure measurements was calculated [23]. Biochemical analysis: Blood samples were obtained from the patients in the morning after at least 8 h of fasting. The fasting plasma glucose (FPG) and fasting insulin levels of the patients were measured, and the homeostatic model assessment-insulin resistance (HOMA-IR) index was calculated using the following formula: (FPG [mg/dL] × fasting plasma insulin [μIU/mL] / 405). If the HOMA-IR index was > 2.7, insulin resistance was considered to be present [24]. Serum lipid levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, and triglycerides (TG) were measured using xylidine blue with an end-point colorimetric method (Roche Diagnostics GmbH; Mannheim, Germany). FPG levels were measured with a hexokinase method (Roche Diagnostics GmbH). Echocardiography: All patients were directed to the Department of Cardiology at Sakarya Training and Research Hospital, and the EAT thickness was measured using 2D-transthoracic echocardiography by the same cardiologist. The parasternal long- and short-axis images of EAT, which allow for the most accurate measurement from the right ventricle, were obtained using a standard parasternal image in the left lateral decubitus position. EAT was defined as the echo-free space between the outer wall of the myocardium and the visceral layer of pericardium at end-systole in the right ventricle [25]. Statistical analysis: All statistical analyses were performed with SPSS software, version 15 (SPSS, Inc., Chicago, IL, USA). Nonparametric tests were utilized due to the distribution of the variables. The Mann–Whitney U test was applied to assess differences between groups, and Spearman’s test was applied to assess correlations between the variables. 
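The HOMA-IR formula and the 2.7 cutoff described in the biochemical analysis above can be sketched as follows (function names are illustrative, not from the study):

```python
def homa_ir(fpg_mg_dl: float, insulin_uiu_ml: float) -> float:
    """HOMA-IR = (fasting plasma glucose [mg/dL] x fasting insulin [uIU/mL]) / 405."""
    return fpg_mg_dl * insulin_uiu_ml / 405.0

def is_insulin_resistant(fpg_mg_dl: float, insulin_uiu_ml: float,
                         cutoff: float = 2.7) -> bool:
    """Insulin resistance is considered present when HOMA-IR exceeds 2.7 [24]."""
    return homa_ir(fpg_mg_dl, insulin_uiu_ml) > cutoff

# Illustrative values only:
print(round(homa_ir(90.0, 10.0), 2))      # 2.22
print(is_insulin_resistant(100.0, 15.0))  # True (HOMA-IR ~ 3.70)
```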
A p value of < 0.05 was considered to indicate statistical significance, and a correlation was considered to be present if Spearman’s value was ≥ 0.50. Continuous variables are expressed as either the mean ± standard deviation (SD) or the median (minimum–maximum), and categorical variables are expressed as either frequency or percentage. Continuous variables were compared with an independent-samples t-test or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests. Results: The body fat distribution, total body water, WC, HC, WHR, amount of abdominal subcutaneous adipose tissue, VAAT thickness, blood pressure, and levels of LDL cholesterol, HDL cholesterol, and TG are provided in Table 1. There were no significant differences in the mean age (p = 0.056) or mean BMI (p = 0.446) between the patient and control groups, but the body fat percentage, WHR, amount of abdominal subcutaneous adipose tissue, and VAAT thickness were higher in the patient group. 
However, the patient and control groups had similar amounts of EAT (p = 0.384) (Table 1).

Table 1. Comparison of demographic, body analysis and laboratory parameters of the patient and the control group

| Feature | Patients (n = 35) | Controls (n = 38) | P value |
| Age (years) | 25.16 ± 4.12 | 27.44 ± 4.31 | 0.056 |
| BMI (kg/m2) | 25.60 ± 5.22 | 23.67 ± 3.70 | 0.404 |
| Fat mass (%) | 31.74 ± 10.12 | 26.33 ± 7.61 | 0.042 |
| Total body water (kg) | 34.11 ± 4.48 | 32.72 ± 2.33 | 0.193 |
| Waist circumference (cm) | 91.76 ± 18.95 | 86.50 ± 9.71 | 0.677 |
| Hip circumference (cm) | 103.56 ± 15.27 | 100.55 ± 7.48 | 0.557 |
| WHR | 0.94 ± 0.06 | 0.89 ± 0.04 | 0.007 |
| VAAT thickness | 9.24 ± 4.67 | 6.77 ± 2.68 | 0.042 |
| Abdominal subcutaneous adipose tissue thickness | 40.48 ± 8.83 | 32.86 ± 9.48 | 0.008 |
| Systolic blood pressure (mmHg) | 120.10 ± 7.78 | 120.28 ± 7.28 | 0.979 |
| Diastolic blood pressure (mmHg) | 70.26 ± 7.40 | 72.76 ± 9.21 | 0.318 |
| FPG (mg/dL) | 89.16 ± 9.12 | 87.72 ± 7.92 | 0.565 |
| Insulin (μIU/mL) | 9.77 ± 6.27 | 6.80 ± 3.45 | 0.073 |
| HOMA-IR | 2.13 ± 1.33 | 1.60 ± 0.67 | 0.082 |
| TG (mg/dL) | 11.52 ± 53.81 | 85.52 ± 31.49 | 0.058 |
| LDL (mg/dL) | 106.40 ± 35.57 | 96.38 ± 28.73 | 0.305 |
| HDL (mg/dL) | 55.69 ± 16.25 | 58.90 ± 11.96 | 0.300 |
| EAT (mm) | 4.72 ± 0.88 | 4.43 ± 1.31 | 0.384 |

BMI body mass index, VAAT visceral abdominal adipose tissue, FPG fasting plasma glucose, HOMA-IR Homeostatic Model Assessment-insulin resistance, TG triglyceride, LDL low density lipoprotein, HDL high density lipoprotein, EAT epicardial adipose tissue. Continuous variables were compared with an independent-samples t-test or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests.

EAT had a significantly positive correlation with BMI (r = 0.260, p = 0.034), fat mass (r = 0.250, p = 0.041), WC (r = 0.301, p = 0.016), and hip circumference (r = 0.254, p = 0.043), but it was not correlated with the HOMA-IR index or the levels of TG, LDL cholesterol, or HDL cholesterol (p > 0.05) (Table 2). There was a small positive correlation between VAAT and EAT (r = 0.248, p = 0.048). VAAT was also directly associated with BMI (r = 0.921, p < 0.01), fat mass (r = 0.941, p < 0.01), WC (r = 0.941, p < 0.01), HC (r = 0.876, p < 0.01), abdominal subcutaneous adipose thickness (r = 0.896, p < 0.01), the HOMA-IR index (r = 0.618, p < 0.01), and the levels of TG (r = 0.388, p < 0.01) and LDL cholesterol (r = 0.288, p = 0.016). Conversely, VAAT was inversely associated with the HDL cholesterol level (r = −0.488, p < 0.01) (Table 3).

Table 2. Correlation analysis between EAT and variables

| Variable | r value | p value |
| BMI (kg/m2) | 0.260 | 0.034 |
| Fat mass (%) | 0.250 | 0.041 |
| Waist circumference (cm) | 0.301 | 0.016 |
| Hip circumference (cm) | 0.254 | 0.043 |
| HOMA-IR | 0.119 | 0.490 |
| TG (mg/dL) | 0.076 | 0.550 |
| LDL (mg/dL) | 0.158 | 0.209 |
| HDL (mg/dL) | −0.185 | 0.141 |

Table 3. Correlation analysis between VAAT and variables

| Variable | r value | P value |
| EAT (mm) | 0.248 | 0.048 |
| BMI (kg/m2) | 0.921 | <0.01 |
| Fat mass (%) | 0.941 | <0.01 |
| Waist circumference (cm) | 0.941 | <0.01 |
| Hip circumference (cm) | 0.876 | <0.01 |
| Abdominal subcutaneous adipose tissue thickness | 0.896 | <0.01 |
| HOMA-IR | 0.618 | <0.01 |
| TG (mg/dL) | 0.388 | <0.01 |
| LDL (mg/dL) | 0.288 | 0.016 |
| HDL (mg/dL) | −0.488 | <0.01 |

For Tables 2 and 3: BMI body mass index, HOMA-IR Homeostatic Model Assessment-insulin resistance, TG triglyceride, LDL low density lipoprotein, HDL high density lipoprotein, EAT epicardial adipose tissue, VAAT visceral abdominal adipose tissue. Continuous variables were compared with an independent-samples t-test or the Mann–Whitney U test, and categorical variables were compared using Pearson’s chi-square test. A p value of < 0.05 was considered to indicate statistical significance for all tests.

Discussion: Although the patient and control groups in the present study had similar ages and BMIs, the lean patients with PCOS exhibited a higher WHR, VAAT, and abdominal subcutaneous fat tissue thickness than did the control group. In contrast, there were no significant differences in EAT. However, EAT was significantly correlated with VAAT, BMI, fat mass, WC, and HC. Previous studies have shown a positive correlation between EAT thickness and VAAT thickness [26–28] independent of obesity [26, 27], and this correlation seems to be more important than WC [28]. Although there is an increased amount of EAT in obese patients with PCOS compared with obese patients without PCOS [19–21], EAT does not differ between lean patients with PCOS and the normal population [21, 29]. Similarly, the present study found no differences in EAT between the two groups. Compared with normal individuals, patients with PCOS exhibit increases in total fat mass and organ-specific VAT [29]. Furthermore, these increases are positively correlated with both systolic and diastolic blood pressure and the levels of fasting glucose, insulin, LDL cholesterol, TG, and transaminases but negatively correlated with the insulin sensitivity index and HDL cholesterol. Likewise, the present study found that fat mass, abdominal subcutaneous adipose tissue, and VAAT thickness were higher in the patient group than in the control group. Additionally, VAAT thickness was positively correlated with BMI, fat mass, WC, HC, abdominal subcutaneous adipose tissue thickness, and the levels of TG and LDL cholesterol but negatively correlated with HDL cholesterol. Sahin et al. [21] reported that EAT is positively correlated with age, BMI, WC, the glucose level, the HOMA-IR index, and the TG level. 
Similarly, the present study found that EAT was positively correlated with BMI, WC, HC, and VAAT; however, in contrast to those previous findings, EAT was not correlated with fasting glucose, the HOMA-IR index, or the lipid parameters. This may be because the patients with PCOS in the present study were lean rather than obese. Another study found that EAT is correlated with BMI, WC, VAT thickness, and insulin resistance [30]. In the present study, EAT was positively correlated with BMI, WC, and VAAT but not with the HOMA-IR index. EAT is reportedly more closely associated with VAT than with total body fat [28, 30]. In studies employing magnetic resonance imaging (MRI), the abdominal adipose tissue thicknesses of patients with PCOS and normal control subjects did not significantly differ [16, 31–33]. It has also been shown that VAAT thickness is increased only in mildly obese patients with PCOS relative to control subjects [15] and that obesity predicts insulin resistance independently of PCOS [31]. Furthermore, a study conducted using a bioimpedance device found that there was less VAT in lean patients with PCOS than in the control group [18]. In contrast, the present study found greater VAAT thickness in lean patients with PCOS than in the control group, even though EAT thickness did not differ. The gold standard tests for measuring VAT are MRI and computed tomography (CT) [34]. Thus, a limitation of the present study may be that the amounts of abdominal subcutaneous adipose tissue and VAT were assessed using bioelectrical impedance. However, it is difficult to measure adipose tissue using CT or MRI because of cost, radiation exposure, and the use of contrast media. Additionally, previous studies have shown that measurements of VAT by bioelectrical impedance and by CT scans are closely correlated [35]. 
Consequently, given that the present study found a small positive correlation between EAT thickness and VAAT thickness, echocardiography would appear to be an easy, simple, noninvasive, reliable, and accessible method for the measurement of these parameters relative to the use of MRI scans [25]. EAT is important for the determination of both VAAT thickness and cardiovascular risk [26, 36], and increased abdominal adipose tissue is related to an increased risk of atherosclerosis [37] and mortality [38]. Because it can be difficult to measure abdominal adipose tissue, it may be more economical and efficient to determine these risk factors by measuring EAT. Conclusions: The present study observed several associations between EAT thickness and cardiovascular risk in patients with PCOS. Because of the difficulties related to the measurement of abdominal adipose tissue thickness, the assessment of EAT may be a relatively easy-to-use but important tool for the determination of cardiovascular risk.
Background: Polycystic ovary syndrome (PCOS) is related to metabolic syndrome, insulin resistance, and cardiovascular metabolic syndromes. This is particularly true for individuals with central and abdominal obesity because visceral abdominal adipose tissue (VAAT) and epicardial adipose tissue (EAT) produce a large number of proinflammatory and proatherogenic cytokines. The present study aimed to determine whether there are changes in VAAT and EAT levels which were considered as indirect predictors for subclinical atherosclerosis in lean patients with PCOS. Methods: The clinical and demographic characteristics of 35 patients with PCOS and 38 healthy control subjects were recorded for the present study. Additionally, the serum levels of various biochemical parameters were measured and EAT levels were assessed using 2D-transthoracic echocardiography. Results: There were no significant differences in mean age (p = 0.056) or mean body mass index (BMI) (p = 0.446) between the patient and control groups. However, the body fat percentage, waist-to-hip ratio, amount of abdominal subcutaneous adipose tissue, and VAAT thickness were higher in the PCOS patient group than in the control group. The amounts of EAT in the patient and control groups were similar (p = 0.384). EAT was correlated with BMI, fat mass, waist circumference, and hip circumference but not with any biochemical metabolic parameters including the homeostasis model assessment of insulin resistance index or the levels of triglycerides, low-density lipoprotein cholesterol, and high-density lipoprotein (HDL) cholesterol. However, there was a small positive correlation between the amounts of VAAT and EAT. VAAT was directly correlated with body fat parameters such as BMI, fat mass, and abdominal subcutaneous adipose thickness and inversely correlated with the HDL cholesterol level. Conclusions: The present study found that increased abdominal adipose tissue in patients with PCOS was associated with atherosclerosis. 
Additionally, EAT may aid in the determination of the risk of atherosclerosis in patients with PCOS because it is easily measured.
Background: Polycystic ovary syndrome (PCOS) is a heterogeneous disease that affects 5 to 10 % of women in the reproductive period [1]. Many studies have shown that PCOS is associated with various cardiovascular risk factors such as obesity, insulin resistance, hyperlipidemia, metabolic syndrome, and hypertension [1–3]. Additionally, patients with PCOS have a high incidence of central and abdominal obesity and marked increases in the waist circumference (WC) and waist-to-hip ratio (WHR) [4, 5]. Visceral abdominal adipose tissue (VAAT) surrounds the internal organs, and increased amounts of VAAT are more important than increased levels of subcutaneous fat in terms of the risks of metabolic syndrome, insulin resistance, and cardiovascular mortality [6]. Epicardial adipose tissue (EAT), located between the myocardium and the visceral epicardium, and visceral adipose tissue (VAT) are derived from the same origin [7]. This is important because both of these body fat tissues produce large numbers of proinflammatory and proatherogenic cytokines [8, 9]. The reported findings regarding abdominal fat tissue and EAT in patients with PCOS are controversial [8–21]. For example, patients with PCOS have been shown to have increased [10–14], similar [15–17], or decreased [18] amounts of abdominal fat. Similarly, the amount of EAT in patients with PCOS has been reported to be either increased or unchanged compared with healthy control groups [19–21]. However, not all patients with PCOS are obese; in fact, a 2001 study of 346 patients with PCOS found that 56 % of such patients are lean [22]: 56.0 % had a body mass index (BMI) of < 25 kg/m2, 11.3 % had a BMI of 25 to 27 kg/m2, and 32.7 % had a BMI of ≥ 27 kg/m2 [22]. Thus, the present study aimed to determine whether there are changes in the amounts of VAAT and EAT in lean patients with PCOS compared with healthy control subjects. 
5,629
378
[ 222, 89, 78, 147, 164, 103, 178 ]
12
[ "eat", "patients", "variables", "tissue", "present", "body", "abdominal", "thickness", "test", "fat" ]
[ "eat epicardial adipose", "visceral abdominal adipose", "pcos associated cardiovascular", "patients pcos obese", "epicardial adipose tissue" ]
null
[CONTENT] Polycystic | Ovary | Epicardial | Adipose [SUMMARY]
null
[CONTENT] Polycystic | Ovary | Epicardial | Adipose [SUMMARY]
[CONTENT] Polycystic | Ovary | Epicardial | Adipose [SUMMARY]
[CONTENT] Polycystic | Ovary | Epicardial | Adipose [SUMMARY]
[CONTENT] Polycystic | Ovary | Epicardial | Adipose [SUMMARY]
[CONTENT] Adult | Body Mass Index | Case-Control Studies | Female | Humans | Intra-Abdominal Fat | Pericardium | Polycystic Ovary Syndrome | Thinness | Waist-Hip Ratio [SUMMARY]
null
[CONTENT] Adult | Body Mass Index | Case-Control Studies | Female | Humans | Intra-Abdominal Fat | Pericardium | Polycystic Ovary Syndrome | Thinness | Waist-Hip Ratio [SUMMARY]
[CONTENT] Adult | Body Mass Index | Case-Control Studies | Female | Humans | Intra-Abdominal Fat | Pericardium | Polycystic Ovary Syndrome | Thinness | Waist-Hip Ratio [SUMMARY]
[CONTENT] Adult | Body Mass Index | Case-Control Studies | Female | Humans | Intra-Abdominal Fat | Pericardium | Polycystic Ovary Syndrome | Thinness | Waist-Hip Ratio [SUMMARY]
[CONTENT] Adult | Body Mass Index | Case-Control Studies | Female | Humans | Intra-Abdominal Fat | Pericardium | Polycystic Ovary Syndrome | Thinness | Waist-Hip Ratio [SUMMARY]
[CONTENT] eat epicardial adipose | visceral abdominal adipose | pcos associated cardiovascular | patients pcos obese | epicardial adipose tissue [SUMMARY]
null
[CONTENT] eat epicardial adipose | visceral abdominal adipose | pcos associated cardiovascular | patients pcos obese | epicardial adipose tissue [SUMMARY]
[CONTENT] eat epicardial adipose | visceral abdominal adipose | pcos associated cardiovascular | patients pcos obese | epicardial adipose tissue [SUMMARY]
[CONTENT] eat epicardial adipose | visceral abdominal adipose | pcos associated cardiovascular | patients pcos obese | epicardial adipose tissue [SUMMARY]
[CONTENT] eat epicardial adipose | visceral abdominal adipose | pcos associated cardiovascular | patients pcos obese | epicardial adipose tissue [SUMMARY]
[CONTENT] eat | patients | variables | tissue | present | body | abdominal | thickness | test | fat [SUMMARY]
null
[CONTENT] eat | patients | variables | tissue | present | body | abdominal | thickness | test | fat [SUMMARY]
[CONTENT] eat | patients | variables | tissue | present | body | abdominal | thickness | test | fat [SUMMARY]
[CONTENT] eat | patients | variables | tissue | present | body | abdominal | thickness | test | fat [SUMMARY]
[CONTENT] eat | patients | variables | tissue | present | body | abdominal | thickness | test | fat [SUMMARY]
[CONTENT] pcos | patients pcos | patients | increased | kg | m2 | kg m2 | fat | vaat | amounts [SUMMARY]
null
[CONTENT] variables | density | density lipoprotein | variables compared | lipoprotein | 01 | adipose | mass | test | adipose tissue [SUMMARY]
[CONTENT] cardiovascular risk | risk | cardiovascular | eat | thickness | present study observed | pcos difficulties | pcos difficulties related | tissue thickness assessment | tissue thickness assessment eat [SUMMARY]
[CONTENT] eat | patients | measured | variables | pcos | thickness | study | fat | abdominal | tissue [SUMMARY]
[CONTENT] eat | patients | measured | variables | pcos | thickness | study | fat | abdominal | tissue [SUMMARY]
[CONTENT] ||| EAT ||| VAAT | EAT | PCOS [SUMMARY]
null
[CONTENT] 0.056 | BMI | 0.446 ||| PCOS ||| EAT | 0.384 ||| EAT | BMI | HDL ||| VAAT | EAT ||| VAAT | BMI | HDL [SUMMARY]
[CONTENT] PCOS ||| EAT [SUMMARY]
[CONTENT] ||| EAT ||| VAAT | EAT | PCOS ||| 35 | PCOS | 38 ||| EAT ||| 0.056 | BMI | 0.446 ||| PCOS ||| EAT | 0.384 ||| EAT | BMI | HDL ||| VAAT | EAT ||| VAAT | BMI | HDL ||| PCOS ||| EAT [SUMMARY]
[CONTENT] ||| EAT ||| VAAT | EAT | PCOS ||| 35 | PCOS | 38 ||| EAT ||| 0.056 | BMI | 0.446 ||| PCOS ||| EAT | 0.384 ||| EAT | BMI | HDL ||| VAAT | EAT ||| VAAT | BMI | HDL ||| PCOS ||| EAT [SUMMARY]
Stromal expression of miR-21 in T3-4a colorectal cancer is an independent predictor of early tumor relapse.
25609245
MicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes. miR-21 has been associated with progression of some types of cancer. Metastasis-associated protein1 expression and loss of E-cadherin expression are correlated with cancer progression and metastasis in many cancer types. In advanced colorectal cancer, the clinical significance of miR-21 expression remains unclear. We aimed to investigate the impact of miR-21 expression in advanced colorectal cancer and its correlation with target proteins associated with colorectal cancer progression.
BACKGROUND
From 2004 to 2007, 277 consecutive patients with T3-4a colorectal cancer treated with R0 surgical resection were included. Patients with neoadjuvant therapy and distant metastasis at presentation were excluded. The expression of miR-21 was investigated by in situ hybridization. Immunohistochemistry was used to detect E-cadherin and metastasis-associated protein1 expression.
METHODS
High stromal expression of miR-21 was found in 76 of 277 (27.4%) colorectal cancer samples and was correlated with low E-cadherin expression (P = 0.019) and high metastasis-associated protein1 expression (P = 0.004). T3-4a colorectal cancer patients with high miR-21 expression had significantly shorter recurrence-free survival than those with low miR-21 expression. When analyzing colon and rectal cancer separately, high expression of miR-21 was an independent prognostic factor of unfavorable recurrence-free survival in T3-4a colon cancer patients (P = 0.038, HR = 2.45; 95% CI = 1.05-5.72) but not in T3-4a rectal cancer patients. In a sub-classification analysis, high miR-21 expression was associated with shorter recurrence-free survival in the stage II cancer (P = 0.001) but not in the stage III subgroup (P = 0.267).
RESULTS
Stromal miR-21 expression is related to the expression of E-cadherin and metastasis-associated protein1 in colorectal cancer. Stage II colorectal cancer patients with high levels of miR-21 are at higher risk for tumor recurrence and should be considered for more intensive treatment.
CONCLUSIONS
[ "Aged", "Cadherins", "Colonic Neoplasms", "Disease-Free Survival", "Female", "Histone Deacetylases", "Humans", "Male", "MicroRNAs", "Middle Aged", "Neoplasm Recurrence, Local", "Neoplasm Staging", "Rectal Neoplasms", "Repressor Proteins", "Risk Factors", "Trans-Activators" ]
4308857
Background
Colorectal cancer (CRC) is the third most commonly diagnosed cancer in Korea [1]. The prognosis of CRC is associated with tumor progression; five-year survival rates range from 93% to 8% [2]. There are many proposed serological and molecular markers as predictive and prognostic indicators of CRC; however, they are not widely accepted as providing reliable prognostic information due to a lack of reproducibility, validation and standardization among studies [3,4]. Therefore, there is a need to identify more reliable prognostic mediators of tumor progression and metastasis in order to define the behavior of CRC and improve postoperative treatment strategies. MicroRNAs are small noncoding RNA molecules, 18-25 nucleotides in length, which post-transcriptionally regulate gene expression by binding to the 3’ untranslated regions of target messenger RNAs and play a central role in regulation of mRNA expression [5]. MicroRNAs have been shown to influence all cellular processes [6] and have a high degree of sequence conservation among distantly related organisms, indicating their likely participation in essential biological processes [7]. Of note, microRNAs have been reported to have a marked influence on carcinogenesis through the dysregulation of oncogenes and tumor suppressor genes [8]. Cancer-related microRNAs typically show altered expression levels in tumors as compared to the level of expression in the corresponding normal tissue. MicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes, such as PTEN and PDCD4, and has been reported to be consistently up-regulated in various types of cancers, including colon, breast, lung, and stomach cancers [9-16]. MiR-21 is known to contribute to the regulation of apoptosis, cell proliferation and migration [9,11,17]. Moreover, miR-21 levels increase in the advanced stages of cancer, suggesting a central role for miR-21 in invasion and dissemination of cancer [12,14]. 
In CRC tissue samples, miR-21 expression is up-regulated during tumor progression and is also known to be associated with poor survival and response to chemotherapy [12,13,18]. However, the clinical significance of miR-21 expression in advanced CRC remains unclear. In situ hybridization (ISH) for microRNA has an advantage over quantitative microRNA expression analysis platforms in that ISH allows for precise histological localization of microRNAs in formalin-fixed paraffin-embedded tissue blocks [19,20]. Loss of E-cadherin expression is associated with activation of epithelial-mesenchymal transition, invasion and metastasis in various cancers [21]. Conversely, expression of Metastasis-associated protein1 (MTA1) is correlated with cancer progression and metastasis in numerous cancer types, including CRC [22,23]. Previous studies on the association between MTA1 and E-cadherin have shown that MTA1 regulates E-cadherin expression through AKT activation in prostate cancer, and that low E-cadherin expression promotes cancer metastasis [21,24]. However, the exact role of these proteins in CRC remains unclear. We investigated miR-21 expression using ISH in specimens from T3-4a CRC patients treated by surgical resection. We also evaluated the relationship between expression of miR-21, E-cadherin and MTA1 and their clinical significance as potential biomarkers for prognosis of T3-4a CRC patients.
null
null
Results
Demographic and clinicopathological variables of the study participants are listed in Table 1.

Table 1. Correlations of clinicopathological parameters and expression of miR-21 in 277 patients with T3-4a colorectal cancer

| Parameter | | N | miR-21 high | miR-21 low | p-value |
| Age | <65 | 139 | 36 (25.9%) | 103 (74.1%) | 0.565 |
| | ≥65 | 138 | 40 (29.0%) | 98 (71.0%) | |
| Gender | Male | 181 | 51 (28.2%) | 130 (71.8%) | 0.705 |
| | Female | 96 | 25 (26.0%) | 71 (74.0%) | |
| Primary site | Right colon | 82 | 20 (24.4%) | 62 (75.6%) | 0.636 |
| | Left colon | 91 | 28 (30.8%) | 63 (69.2%) | |
| | Rectum | 104 | 28 (26.9%) | 76 (73.1%) | |
| Histologic type | Non-mucinous | 262 | 73 (27.9%) | 189 (72.1%) | 0.767 |
| | Mucinous | 15 | 3 (20.0%) | 12 (80.0%) | |
| Differentiation | Well or moderately | 260 | 74 (28.5%) | 186 (71.5%) | 0.168 |
| | Poorly | 17 | 2 (11.8%) | 15 (88.2%) | |
| Depth of invasion | pT3 | 228 | 65 (28.5%) | 163 (71.5%) | 0.388 |
| | pT4a | 49 | 11 (22.4%) | 38 (77.6%) | |
| Lymph node metastasis | Absent | 138 | 39 (28.3%) | 99 (71.7%) | 0.759 |
| | Present | 139 | 37 (26.6%) | 102 (73.7%) | |
| AJCC stage | IIA | 122 | 34 (27.9%) | 88 (72.1%) | 0.118 |
| | IIB | 16 | 5 (31.2%) | 11 (68.8%) | |
| | IIIB | 82 | 28 (34.1%) | 54 (65.9%) | |
| | IIIC | 57 | 9 (15.8%) | 48 (84.2%) | |
| Perineural invasion | Absent | 219 | 58 (26.5%) | 161 (73.5%) | 0.490 |
| | Present | 58 | 18 (31.0%) | 40 (69.0%) | |
| Lymphatic invasion | Absent | 120 | 33 (27.5%) | 87 (72.5%) | 0.984 |
| | Present | 157 | 43 (27.4%) | 114 (72.6%) | |
| Vascular invasion | Absent | 255 | 72 (28.2%) | 183 (71.8%) | 0.311 |
| | Present | 22 | 4 (18.2%) | 18 (81.8%) | |
| CEA (ng/dL)a | <5 | 163 | 46 (28.2%) | 117 (71.8%) | 0.542 |
| | ≥5 | 78 | 25 (32.1%) | 53 (67.9%) | |
| Adjuvant therapy | No | 16 | 1 (6.2%) | 15 (93.8%) | 0.079 |
| | Yes | 261 | 75 (28.7%) | 186 (71.3%) | |
| E-cadherin | Low | 109 | 40 (36.7%) | 69 (63.3%) | 0.019 |
| | High | 161 | 37 (23.0%) | 124 (77.0%) | |
| MTA1 | Low | 168 | 37 (22.0%) | 131 (78.0%) | 0.004 |
| | High | 102 | 39 (38.2%) | 63 (61.8%) | |

aPreoperative serum level of carcinoembryonic antigen (CEA) was measured in 241 colorectal cancer patients. AJCC, American Joint Committee on Cancer. 
Immunohistochemistry for E-cadherin and MTA1 was available in 270 cases.

miR-21 expression by in situ hybridization: miR-21 expression was found to be predominantly localized to the stroma surrounding the tumor cells (Figure 1). High levels of miR-21 were found in 76 of 277 (27.4%) CRC specimens. There was no significant correlation between high miR-21 expression and the clinicopathological features of the patients (Table 1).

Figure 1. In situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400.

Correlation between miR-21 and MTA1/E-cadherin expression: The expression patterns of E-cadherin and MTA1 in stained tumor cells were membranous and nuclear, respectively (Figure 1). Low expression of E-cadherin was found in 109 of 277 (39.4%) CRCs, and high MTA1 expression was seen in 102 (36.8%) tumors. High miR-21 expression was significantly correlated with low E-cadherin expression (P = 0.019) and high MTA1 expression (P = 0.004) (Table 1). 
E-cadherin expression was negatively correlated with MTA1 expression (P = 0.005).

Recurrence-free survival and overall survival

In all 277 CRC patients, variables significantly associated with RFS included miR-21 expression (P = 0.010, Figure 2A), histological differentiation (P = 0.031), pT stage (P = 0.0005), lymph node metastasis (P = 0.00001), and serum CEA level (P = 0.006) (Table 2). In multivariate analysis, high miR-21 expression (P = 0.007, HR = 2.24; 95% CI = 1.25-4.02), pT stage, lymph node metastasis, and serum CEA level were independent prognostic factors for unfavorable RFS (Table 3). However, OS was not associated with the expression levels of miR-21, E-cadherin, or MTA1.

Figure 2. Association between miR-21 expression and recurrence-free survival in patients with T3-4a colorectal cancer. Kaplan-Meier survival curves for recurrence-free survival in all (A), stage II (B), and stage III (C) cancer patients according to miR-21 expression status. (A) High miR-21 expression is associated with shorter recurrence-free survival in colon cancer patients but not in rectal cancer patients. (B) For the 138 patients with stage II cancer, the association between high miR-21 expression and recurrence-free survival is statistically significant only in colon cancer patients. (C) Among the 139 patients with stage III cancer, high miR-21 expression is not associated with poor recurrence-free survival.

Table 2. Univariate analysis for overall recurrence-free survival among patients with T3-4a colorectal cancer

Variable | Colorectal (n = 277): HR (95% CI) | p-value | Rectal (n = 104): HR (95% CI) | p-value | Colon (n = 173): HR (95% CI) | p-value
miR-21 expression (low vs. high) | 2.02 (1.18-3.45) | 0.010 | 1.32 (0.62-2.85) | 0.474 | 3.09 (1.41-6.76) | 0.005
Age (<65 vs. ≥65 years) | 0.97 (0.57-1.66) | 0.920 | 0.45 (0.20-1.02) | 0.055 | 2.12 (0.95-4.77) | 0.068
Tumor type (non-mucinous vs. mucinous) | 1.18 (0.37-3.78) | 0.783 | 3.03 (0.70-13.05) | 0.138 | 0.67 (0.09-4.94) | 0.693
Differentiation (well or moderately vs. poorly) | 2.56 (1.09-6.00) | 0.031 | 4.57 (1.33-15.67) | 0.016 | 2.06 (0.60-7.00) | 0.249
pT (T3 vs. T4a) | 2.68 (1.51-4.76) | 0.0005 | 3.97 (1.73-9.12) | 0.001 | 2.30 (1.00-5.30) | 0.044
Lymph node metastasis (absent vs. present) | 3.69 (1.98-6.87) | 0.00001 | 6.65 (2.30-19.22) | 0.0004 | 2.24 (1.00-5.02) | 0.045
CEA (<5 vs. ≥5 ng/dL) | 2.24 (1.26-3.99) | 0.006 | 2.26 (1.02-5.05) | 0.046 | 2.23 (0.97-5.15) | 0.060
Adjuvant therapy (no vs. yes) | 4.19 (0.58-30.35) | 0.156 | NA | NA | 3.23 (0.44-23.98) | 0.252

HR, hazard ratio; CI, confidence interval; NA, not available.

Table 3. Multivariate analysis of prognostic factors predicting overall recurrence-free survival according to cancer location

Variable | Colorectal (n = 277): HR (95% CI) | p-value | Rectal (n = 104): HR (95% CI) | p-value | Colon (n = 173): HR (95% CI) | p-value
miR-21 expression (low vs. high) | 2.24 (1.25-4.02) | 0.007 | 1.65 (0.65-4.16) | 0.295 | 2.45 (1.05-5.72) | 0.038
Age (<65 vs. ≥65 years) | 1.03 (0.56-1.89) | 0.924 | 0.27 (0.10-0.70) | 0.007 | 2.48 (1.00-6.12) | 0.049
Tumor type (non-mucinous vs. mucinous) | 0.61 (0.13-2.97) | 0.539 | 0.62 (0.03-11.50) | 0.751 | 1.03 (0.13-8.48) | 0.976
Differentiation (well or moderately vs. poorly) | 2.18 (0.83-5.71) | 0.114 | 2.60 (0.55-12.21) | 0.225 | 1.56 (0.41-5.94) | 0.513
pT (T3 vs. T4a) | 1.97 (1.01-3.83) | 0.046 | 2.26 (0.75-6.79) | 0.145 | 2.27 (0.86-5.97) | 0.098
Lymph node metastasis (absent vs. present) | 4.55 (2.23-9.29) | 0.00003 | 11.75 (3.33-41.48) | 0.0001 | 3.02 (1.22-7.47) | 0.017
CEA (<5 vs. ≥5 ng/dL) | 2.63 (1.46-4.74) | 0.001 | 3.32 (1.39-7.51) | 0.006 | 2.65 (1.13-6.21) | 0.025
Adjuvant therapy (no vs. yes) | 2.48 (0.32-19.32) | 0.386 | NA | NA | 2.15 (0.26-18.08) | 0.431

HR, hazard ratio; CI, confidence interval; NA, not available. The multivariate analysis is adjusted for age (<65 vs. ≥65 years), tumor type (non-mucinous vs. mucinous), differentiation (well or moderately vs. poorly), pT (T3 vs. T4a), lymph node metastasis (absent vs. present), CEA (<5 vs. ≥5 ng/dL) and adjuvant therapy (no vs. yes).

To further understand the association of prognostic factors with RFS according to the primary cancer site, we analyzed their HRs for RFS in colon and rectal cancer separately (Tables 3 and 4). High expression of miR-21 was associated with shorter RFS in patients with T3-4a colon cancer (n = 173, P = 0.005, Figure 2A), but not in patients with T3-4a rectal cancer (n = 104, P = 0.474, Figure 2A).

Table 4. Characteristics of studies that evaluated the association between high miR-21 expression and recurrence-free survival or overall survival in colorectal cancer

First author [ref] | Year | Origin | No. of cases | AJCC stage | RFS HR (95% CI) | OS HR (95% CI) | Cut-off value | Statistical analysis | Detection method
Schetter [13] | 2008 | USA^a | CC 71 | I-IV | NA | 2.7 (1.3-5.5) | Third tertile | Multivariate | RT-PCR
Schetter [13] | 2008 | China^a | CC 103 | I-IV | NA | 2.4 (1.4-4.1) | Dichotomize | Multivariate | Microarray
Shibuya [18] | 2010 | Japan | CRC 156 | I-IV | 0.396 (0.186-0.897) | 0.513 (0.280-0.956) | Mean | Multivariate | RT-PCR
Nielsen [19] | 2011 | Denmark | CC 129 | II | 1.28 (1.06-1.55) | 1.17 (1.02-1.34) | Dichotomize | Multivariate | ISH
Nielsen [19] | 2011 | Denmark | RC 67 | II | 0.85 (0.73-1.01) | 0.97 (0.83-1.13) | Dichotomize | Multivariate | ISH
Kjaer-Frifeldt [20] | 2012 | Denmark | CC 764 | II | 1.41 (1.19-1.67) | 1.05 (0.94-1.18) | Mean log | Multivariate | ISH
Zhang [29] | 2013 | China | CC 138 | II | 1.98 (0.95-4.15) | NA | Dichotomize | Univariate | RT-PCR
Zhang [29] | 2013 | China | CC 137 | II | 1.88 (0.95-3.75) | NA | Dichotomize | Univariate | RT-PCR
Zhang [29] | 2013 | China | CC 255 | II | 1.79 (1.22-2.62) | NA | Dichotomize | Univariate | RT-PCR
Bovell [30] | 2013 | USA | CRC 55 | IV | NA | 3.25 (1.37-7.72) | Mean | Multivariate | RT-PCR
Toiyama [31] | 2013 | Japan | CRC 166 | I-IV | NA | 0.59 (0.21-1.63) | 3.7 | Multivariate | RT-PCR
Chen [32] | 2013 | Taiwan | CRC 195 | I-IV | NA | 1.655 (0.992-2.762) | Mean | Univariate | RT-PCR
Hansen [33] | 2014 | Denmark | CC 554 | II | 1.348 (1.032-1.760) | 1.075 (0.889-1.301) | Dichotomize | Multivariate | RT-PCR
Oue [34] | 2014 | Japan | CC 156 | I-IV | NA | 1.80 (0.91-3.58) | Third tertile | Multivariate | RT-PCR
Oue [34] | 2014 | Japan | CC 87 | II-III | NA | 3.13 (1.20-8.17) | Third tertile | Multivariate | RT-PCR
Oue [34] | 2014 | Germany | CC 145 | II | NA | 2.65 (1.06-6.66) | Third tertile | Multivariate | RT-PCR
Present study | | Korea | CC 173 | II-III | 3.09 (1.41-6.76) | 0.425 (0.142-1.271) | Dichotomize | Multivariate | ISH
Present study | | Korea | RC 104 | II-III | 1.32 (0.62-2.85) | 2.046 (0.557-7.513) | Dichotomize | Multivariate | ISH

^a Only including patients with typical adenocarcinoma. AJCC, American Joint Committee on Cancer; CI, confidence interval; HR, hazard ratio; NA, not available; RT-PCR, reverse-transcription PCR; ISH, in situ hybridization; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer.

The T3-4a CRC patients were then divided into subgroups according to American Joint Committee on Cancer stage. In the stage II (T3-4aN0M0) subgroup, patients with high miR-21 expression had a significantly shorter RFS than those with low miR-21 expression, regardless of the primary site (colon cancer, P = 0.007; rectal cancer, P = 0.030, Figure 2B). In the stage III subgroup, however, there was no significant difference in RFS between patients with high and low miR-21 expression (colon cancer, P = 0.053; rectal cancer, P = 0.588, Figure 2C).
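The RFS curves in Figure 2 are Kaplan-Meier estimates. For readers unfamiliar with the product-limit method, here is a minimal sketch in Python on synthetic follow-up data (not the study cohort): at each event time t, the running survival probability is multiplied by (1 - d/n), where d is the number of events and n the number still at risk.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times (e.g. months); events: 1 = recurrence, 0 = censored.
    Returns a list of (t, S(t)) pairs at each observed event time."""
    data = sorted(zip(times, events))          # sort subjects by follow-up time
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        ties = sum(1 for tt, e in data[i:] if tt == t)
        if deaths:
            s *= 1 - deaths / n_at_risk        # S(t) = prod(1 - d_i / n_i)
            curve.append((t, s))
        n_at_risk -= ties                      # drop everyone observed at t
        i += ties
    return curve

# Synthetic example: recurrences at 2, 3 and 5 months; censoring at 3 and 8.
print(kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0]))
```

Survival drops to 0.8 after the first recurrence (1 of 5 at risk), then to 0.6 and 0.3 as the risk set shrinks through events and censoring.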
Meta-analysis

A total of 10 studies were included in the meta-analysis; their characteristics are summarized in Table 4 [13,18-20,29-34]. Heterogeneity across the included studies was high. For all CRC patients, high miR-21 expression was significantly associated with poor RFS (HR = 1.327, 95% CI = 1.053-1.673, Figure 3) and poor OS (HR = 1.272, 95% CI = 1.065-1.519, Figure 4).
In subgroup analysis, high miR-21 expression was significantly correlated with poor RFS and OS in colon cancer patients (HR = 1.423, 95% CI = 1.280-1.582 and HR = 1.357, 95% CI = 1.102-1.672, respectively), but not in rectal cancer patients or in studies of mixed CRC cohorts.

Figure 3. Forest plot of the meta-analysis for the association of high miR-21 expression with recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients; the association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.

Figure 4. Forest plot of the meta-analysis for the association of high miR-21 expression with overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival.
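The pooled HRs above come from inverse-variance meta-analysis, with a random-effects model used when heterogeneity was significant (see the meta-analysis methods). A minimal DerSimonian-Laird sketch in Python follows; the three input studies are illustrative colon-cancer HR (95% CI) values, not a reproduction of the exact pooled analysis.

```python
import math

def pool_hazard_ratios(hrs, ci_lows, ci_highs, z=1.96):
    """Inverse-variance pooling of log hazard ratios with a
    DerSimonian-Laird random-effects tau^2.
    Returns (pooled_HR, tau2, Q)."""
    y = [math.log(h) for h in hrs]                       # log HRs
    se = [(math.log(b) - math.log(a)) / (2 * z)          # SE from 95% CI width
          for a, b in zip(ci_lows, ci_highs)]
    w = [1 / s ** 2 for s in se]                         # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    k = len(y)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    wr = [1 / (s ** 2 + tau2) for s in se]               # random-effects weights
    pooled = math.exp(sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr))
    return pooled, tau2, q

# Illustrative inputs: three colon-cancer studies reporting HR (95% CI) for RFS.
hr, tau2, q = pool_hazard_ratios([1.41, 1.35, 3.09],
                                 [1.19, 1.03, 1.41],
                                 [1.67, 1.76, 6.76])
print(f"pooled HR = {hr:.2f}, tau^2 = {tau2:.3f}, Q = {q:.2f}")
```

When Q does not exceed its degrees of freedom (k - 1), tau^2 is zero and the result collapses to the fixed-effect estimate, mirroring the model choice described in the Methods.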
Conclusion
miR-21 is overexpressed in the stroma of CRC specimens and is strongly associated with E-cadherin and MTA1 expression. A high level of miR-21 is an independent risk factor for early tumor recurrence in T3-4a colon cancer and in stage II CRC. Thus, CRC patients with miR-21 overexpression are at higher risk of tumor recurrence and may benefit from more intensive treatment.
Methods

Patients

From January 2004 until June 2007, a total of 526 consecutive patients underwent surgical resection for CRC at Seoul St. Mary's Hospital. Of these, 277 patients with pathological T3 (invasion of the subserosa or pericolic/perirectal adipose tissue) or T4a (serosal invasion) cancer were selected for the study, based on the following inclusion criteria: (i) no neoadjuvant chemotherapy or radiation therapy, (ii) no evidence of direct invasion into adjacent structures or organs, (iii) no postoperative death within six weeks, and (iv) no distant metastasis at presentation. The patients comprised 181 males and 96 females (mean age 63.0 years). Overall survival (OS) was defined as the time interval between surgery and death from any cause or the most recent follow-up date. Recurrence-free survival (RFS) was defined as the time from the date of surgery to the date of first cancer recurrence or the most recent disease-free follow-up. This study was approved by the Institutional Review Board of Seoul St. Mary's Hospital, The Catholic University of Korea. Written informed consent was obtained from all patients.

Tissue microarray construction

We constructed tissue microarrays from formalin-fixed, paraffin-embedded tissues as previously described [25,26]. Two 2-mm-diameter tissue cores were collected from each representative tumor specimen and inserted into a recipient paraffin block. The tissue microarray blocks were serially cut into 4-μm-thick sections for immunohistochemistry and 6-μm-thick sections for ISH.

Immunohistochemistry for E-cadherin and MTA1

Immunohistochemical staining was performed using specific antibodies against E-cadherin (4A2C7, Zymed, South San Francisco, CA) and MTA1 (A-11, Santa Cruz Biotechnology, Santa Cruz, CA) with the Polink-2 Plus polymer HRP detection system (Golden Bridge International, Mukilteo, WA, USA), according to each manufacturer's protocol. The specificity of each antibody was confirmed by both Western blotting and immunocytochemistry in several cell lines with known protein expression status. Negative controls were performed by substituting the primary antibodies with normal mouse IgG at the same concentration. Multi-tissue blocks containing known-positive tumor tissues served as positive controls. Staining was examined in triplicate by two gastrointestinal pathologists (CKJ and SHL) who were blinded to the clinicopathological data; specimens with discordant interpretations were reviewed until agreement was reached. Immunohistochemical staining was assessed with a semiquantitative score of staining intensity (0, none; 1, weak; 2, moderate; 3, strong) because nearly all positively staining tumors showed a diffuse staining pattern for both proteins. These scores were then used to group samples into two categories: low (0 or 1) and high (2 or 3). Membranous staining of E-cadherin was scored as '2' when tumor cells displayed staining intensity similar to that seen in normal colonic mucosa. MTA1 expression was evaluated as nuclear staining.

In situ hybridization for miR-21

ISH was performed using the miRCURY locked nucleic acid (LNA) microRNA Detection FFPE microRNA ISH Optimization Kit 2 (Exiqon, Vedbaek, Denmark) in a StatSpin ThermoBrite Slide Hybridizer (Fisher Scientific, Westwood, MA), as previously described [19]. We used a double-digoxigenin-labeled LNA miR-21 probe (Exiqon, sequence: 5′-TCAACATCAGTCTGATAAGCTA-3′), a positive control LNA U6 snRNA probe (Exiqon, sequence: 5′-CACGAATTTGCGTGTCATCCTT-3′) and a negative control LNA scrambled microRNA probe (Exiqon, sequence: 5′-GTGTAACACGTCTATACGCCCA-3′). Tissue sections were counterstained with nuclear fast red. Semiquantitative assessment of the ISH staining was performed by two pathologists (CKJ and SHL) who were unaware of the clinicopathological and immunohistochemical data; where disagreements occurred, a consensus was reached by the investigators. Staining intensity was scored as negative (0), weak (1), moderate (2), or strong (3), as previously described [27,28], and samples were grouped into two categories: low (0 or 1) and high (2 or 3) expression.

Statistical analysis

The relationships between the expression of miR-21, E-cadherin and MTA1 and the clinicopathological parameters were analyzed using the chi-square test. Cumulative incidence curves for OS and RFS were plotted using the Kaplan-Meier method, and the log-rank test was used to detect differences among groups. Multivariate analysis for OS and RFS was conducted using the Cox proportional hazards regression model. All statistical analyses were performed using SPSS, version 16 (SPSS Inc., Chicago, IL). A p value <0.05 was considered significant.

Meta-analysis for the association of miR-21 expression and patient survival

Two authors (CKJ and SHL) searched PubMed, Embase and Google for literature published up to November 2014 and independently selected eligible articles. Inclusion criteria were: (1) relevance to the association between miR-21 expression and CRC prognosis, (2) original articles, and (3) sufficient RFS or OS data, including a hazard ratio (HR) with a 95% confidence interval (CI). We performed a meta-analysis of the HRs for the effect of miR-21 expression on RFS or OS in colon or rectal cancer patients. Heterogeneity among studies was assessed using the Cochran Q test and I² values; P < 0.10 or I² > 50% was considered significant heterogeneity. If statistical heterogeneity was observed, the random-effects model was used for the meta-analysis; otherwise, a fixed-effect model was used. Meta-analyses were performed using Comprehensive Meta Analysis Version 2.0 (Biostat Inc., Englewood, NJ).
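The group comparisons behind the Kaplan-Meier curves were tested with the log-rank test (computed in SPSS in the original analysis). As an illustration, a compact, dependency-free two-group log-rank test in Python on synthetic data: at each event time, the observed events in group 1 are compared with the number expected under the null hypothesis of equal hazards, with a hypergeometric variance term.

```python
import math

def logrank_test(times1, events1, times2, events2):
    """Two-group log-rank test. events: 1 = event, 0 = censored.
    Returns (chi2, p), with p from the chi-square distribution, df = 1."""
    pooled = [(t, e, 0) for t, e in zip(times1, events1)] + \
             [(t, e, 1) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in pooled if e == 1})
    obs1 = exp1 = var = 0.0
    for t in event_times:
        n = sum(1 for tt, _, _ in pooled if tt >= t)         # at risk, overall
        n1 = sum(1 for tt, _, g in pooled if tt >= t and g == 0)
        d = sum(1 for tt, e, _ in pooled if tt == t and e == 1)
        d1 = sum(1 for tt, e, g in pooled if tt == t and e == 1 and g == 0)
        obs1 += d1
        exp1 += n1 * d / n                                   # expected events in group 1
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = (obs1 - exp1) ** 2 / var
    p = math.erfc(math.sqrt(chi2 / 2))                       # df = 1 survival function
    return chi2, p

# Synthetic example: group 1 recurs early, group 2 later.
chi2, p = logrank_test([1, 2], [1, 1], [2, 3], [1, 1])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

With so few subjects the difference is unsurprisingly non-significant; the point is only the mechanics of the observed-minus-expected accumulation over event times.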
There was no significant correlation between high miR-21 expression and the clinicopathological features of the patients (Table 1).Figure 1In situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400.\nIn situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400.", "The expression patterns of E-cadherin and MTA1 in stained tumor cells were membranous and nuclear, respectively (Figure 1). Low expression of E-cadherin was found in 109 of 277 (39.4%) CRCs, and high MTA1 expression was seen in 102 (36.8%) tumors. High miR-21 expression was significantly correlated with low E-cadherin expression (P = 0.019) and high MTA1 expression (P = 0.004) (Table 1). 
E-cadherin expression was negatively correlated with MTA1 expression (P = 0.005).", "In all 277 CRC patients, variables significantly associated with RFS included miR-21 expression (P = 0.010, Figure 2A), histological differentiation (P = 0.031), pT stage (P = 0.0005), lymph node metastasis (P = 0.00001), and serum CEA level (P = 0.006) (Table 2). In a multivariate analysis, high levels of miR-21 (P = 0.007, HR = 2.24; 95% CI = 1.25-4.02), pT stage, lymph node metastasis, and serum CEA level were independent prognostic factors for unfavorable RFS (Table 3). However, the OS rate was not associated with the expression levels of miR-21, E-cadherin or MTA1.
Figure 2. Association between miR-21 expression and recurrence-free survival in patients with T3-4a colorectal cancer. Kaplan-Meier survival curves for recurrence-free survival in all (A), stage II (B) and stage III (C) cancer patients according to miR-21 expression status. (A) High miR-21 expression is associated with shorter recurrence-free survival in colon cancer patients but not in rectal cancer patients. (B) For the 138 patients with stage II cancer, the association between high miR-21 expression and recurrence-free survival is statistically significant only in colon cancer patients. (C) Among stage III cancer patients, high miR-21 expression is not associated with poor recurrence-free survival.

Table 2. Univariate analysis for overall recurrence-free survival among patients with T3-4a colorectal cancer

| Variable | Colorectal cancer (n = 277) HR (95% CI) | p-value | Rectal cancer (n = 104) HR (95% CI) | p-value | Colon cancer (n = 173) HR (95% CI) | p-value |
|---|---|---|---|---|---|---|
| miR-21 expression (low vs. high) | 2.02 (1.18-3.45) | 0.010 | 1.32 (0.62-2.85) | 0.474 | 3.09 (1.41-6.76) | 0.005 |
| Age (<65 years vs. ≥65 years) | 0.97 (0.57-1.66) | 0.920 | 0.45 (0.20-1.02) | 0.055 | 2.12 (0.95-4.77) | 0.068 |
| Tumor type (non-mucinous vs. mucinous) | 1.18 (0.37-3.78) | 0.783 | 3.03 (0.70-13.05) | 0.138 | 0.67 (0.09-4.94) | 0.693 |
| Differentiation (well or moderately vs. poorly) | 2.56 (1.09-6.00) | 0.031 | 4.57 (1.33-15.67) | 0.016 | 2.06 (0.60-7.00) | 0.249 |
| pT (T3 vs. T4a) | 2.68 (1.51-4.76) | 0.0005 | 3.97 (1.73-9.12) | 0.001 | 2.30 (1.00-5.30) | 0.044 |
| Lymph node metastasis (absent vs. present) | 3.69 (1.98-6.87) | 0.00001 | 6.65 (2.30-19.22) | 0.0004 | 2.24 (1.00-5.02) | 0.045 |
| CEA (<5 ng/dL vs. ≥5 ng/dL) | 2.24 (1.26-3.99) | 0.006 | 2.26 (1.02-5.05) | 0.046 | 2.23 (0.97-5.15) | 0.060 |
| Adjuvant therapy (no vs. yes) | 4.19 (0.58-30.35) | 0.156 | NA | NA | 3.23 (0.44-23.98) | 0.252 |

HR, hazard ratio; CI, confidence interval; NA, not available.

Table 3. Multivariate analysis of prognostic factors predicting overall recurrence-free survival according to cancer location

| Variable | Colorectal cancer (n = 277) HR (95% CI) | p-value | Rectal cancer (n = 104) HR (95% CI) | p-value | Colon cancer (n = 173) HR (95% CI) | p-value |
|---|---|---|---|---|---|---|
| miR-21 expression (low vs. high) | 2.24 (1.25-4.02) | 0.007 | 1.65 (0.65-4.16) | 0.295 | 2.45 (1.05-5.72) | 0.038 |
| Age (<65 years vs. ≥65 years) | 1.03 (0.56-1.89) | 0.924 | 0.27 (0.10-0.70) | 0.007 | 2.48 (1.00-6.12) | 0.049 |
| Tumor type (non-mucinous vs. mucinous) | 0.61 (0.13-2.97) | 0.539 | 0.62 (0.03-11.50) | 0.751 | 1.03 (0.13-8.48) | 0.976 |
| Differentiation (well or moderately vs. poorly) | 2.18 (0.83-5.71) | 0.114 | 2.60 (0.55-12.21) | 0.225 | 1.56 (0.41-5.94) | 0.513 |
| pT (T3 vs. T4a) | 1.97 (1.01-3.83) | 0.046 | 2.26 (0.75-6.79) | 0.145 | 2.27 (0.86-5.97) | 0.098 |
| Lymph node metastasis (absent vs. present) | 4.55 (2.23-9.29) | 0.00003 | 11.75 (3.33-41.48) | 0.0001 | 3.02 (1.22-7.47) | 0.017 |
| CEA (<5 ng/dL vs. ≥5 ng/dL) | 2.63 (1.46-4.74) | 0.001 | 3.32 (1.39-7.51) | 0.006 | 2.65 (1.13-6.21) | 0.025 |
| Adjuvant therapy (no vs. yes) | 2.48 (0.32-19.32) | 0.386 | NA | NA | 2.15 (0.26-18.08) | 0.431 |

HR, hazard ratio; CI, confidence interval; NA, not available. The multivariate analysis is adjusted for age (<65 years vs. ≥65 years), tumor type (non-mucinous vs. mucinous), differentiation (well or moderately vs. poorly), pT (T3 vs. T4a), lymph node metastasis (absent vs. present), CEA (<5 ng/dL vs. ≥5 ng/dL) and adjuvant therapy (no vs. yes).
To further understand the association of prognostic factors and RFS according to the primary cancer site, we analyzed their HRs for RFS in colon and rectal cancer separately (Tables 3 and 4). High expression of miR-21 was associated with shorter RFS in patients with T3-4a colon cancer (n = 173, P = 0.005, Figure 2A), but not in patients with T3-4a rectal cancer (n = 104, P = 0.474, Figure 2A).

Table 4. Characteristics of studies that evaluated the association between high miR-21 expression and recurrence-free survival or overall survival in colorectal cancer

| First author [ref] | Year | Origin | No. of cases | AJCC stage | RFS HR (95% CI) | OS HR (95% CI) | Cut-off value | Statistical analysis | Detection method |
|---|---|---|---|---|---|---|---|---|---|
| Schetter [13] | 2008 | USA (a) | CC 71 | I-IV | NA | 2.7 (1.3-5.5) | Third tertile | Multivariate | RT-PCR |
| | | China (a) | CC 103 | I-IV | NA | 2.4 (1.4-4.1) | Dichotomize | Multivariate | Microarray |
| Shibuya [18] | 2010 | Japan | CRC 156 | I-IV | 0.396 (0.186-0.897) | 0.513 (0.280-0.956) | Mean | Multivariate | RT-PCR |
| Nielsen [19] | 2011 | Denmark | CC 129 | II | 1.28 (1.06-1.55) | 1.17 (1.02-1.34) | Dichotomize | Multivariate | ISH |
| | | | RC 67 | II | 0.85 (0.73-1.01) | 0.97 (0.83-1.13) | Dichotomize | Multivariate | ISH |
| Kjaer-Frifeldt [20] | 2012 | Denmark | CC 764 | II | 1.41 (1.19-1.67) | 1.05 (0.94-1.18) | Mean log | Multivariate | ISH |
| Zhang [29] | 2013 | China | CC 138 | II | 1.98 (0.95-4.15) | NA | Dichotomize | Univariate | RT-PCR |
| | | | CC 137 | II | 1.88 (0.95-3.75) | NA | Dichotomize | Univariate | RT-PCR |
| | | | CC 255 | II | 1.79 (1.22-2.62) | NA | Dichotomize | Univariate | RT-PCR |
| Bovell [30] | 2013 | USA | CRC 55 | IV | NA | 3.25 (1.37-7.72) | Mean | Multivariate | RT-PCR |
| Toiyama [31] | 2013 | Japan | CRC 166 | I-IV | NA | 0.59 (0.21-1.63) | 3.7 | Multivariate | RT-PCR |
| Chen [32] | 2013 | Taiwan | CRC 195 | I-IV | NA | 1.655 (0.992-2.762) | Mean | Univariate | RT-PCR |
| Hansen [33] | 2014 | Denmark | CC 554 | II | 1.348 (1.032-1.760) | 1.075 (0.889-1.301) | Dichotomize | Multivariate | RT-PCR |
| Oue [34] | 2014 | Japan | CC 156 | I-IV | NA | 1.80 (0.91-3.58) | Third tertile | Multivariate | RT-PCR |
| | | | CC 87 | II-III | NA | 3.13 (1.20-8.17) | Third tertile | Multivariate | RT-PCR |
| | | Germany | CC 145 | II | NA | 2.65 (1.06-6.66) | Third tertile | Multivariate | RT-PCR |
| Present study | | Korea | CC 173 | II-III | 3.09 (1.41-6.76) | 0.425 (0.142-1.271) | Dichotomize | Multivariate | ISH |
| | | | RC 104 | II-III | 1.32 (0.62-2.85) | 2.046 (0.557-7.513) | Dichotomize | Multivariate | ISH |

(a) Only including patients with typical adenocarcinoma.
AJCC, American Joint Committee on Cancer; CI, confidence interval; HR, hazard ratio; NA, not available; RT-PCR, reverse-transcription PCR; ISH, in situ hybridization; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer.
The T3-4a CRC patients were divided into subgroups according to American Joint Committee on Cancer stage. In the stage II (T3-4aN0M0) subgroup, patients with a high miR-21 expression level had significantly shorter RFS than those with a low miR-21 level regardless of the primary site (colon cancer, P = 0.007; rectal cancer, P = 0.030, Figure 2B). However, in the stage III (T3-4aN1M0) subgroup, there was no significant difference in RFS between patients with high and low levels of miR-21 expression (colon cancer, P = 0.053; rectal cancer, P = 0.588, Figure 2C).", "A total of 10 studies were included in the meta-analysis; their characteristics are summarized in Table 4 [13,18-20,29-34]. High heterogeneity was found in the analysis. For all CRC patients, high miR-21 expression was significantly associated with poor RFS (HR = 1.327, 95% CI = 1.053-1.673, Figure 3) and poor OS (HR = 1.272, 95% CI = 1.065-1.519, Figure 4). In subgroup analysis, high miR-21 expression was significantly correlated with poor RFS and OS in colon cancer patients (HR = 1.423, 95% CI = 1.280-1.582; HR = 1.357, 95% CI = 1.102-1.672, respectively), but not in rectal cancer patients or in studies of mixed CRC cohorts.
Figure 3. Forest plot of the meta-analysis for the association of high miR-21 expression with recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients; the observed association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.
Figure 4. Forest plot of the meta-analysis for the association of high miR-21 expression with overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival." ]
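The recurrence-free survival comparisons above rest on Kaplan-Meier product-limit curves (Figure 2), estimated from right-censored follow-up times. As a reading aid, here is a minimal Python sketch of that estimator; the four toy subjects are invented for illustration and are not data from the study.

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative sketch).
# The toy follow-up times and event flags below are invented; they are
# NOT patient data from this study.

def kaplan_meier(times, events):
    """Return [(time, survival_probability)] for right-censored data.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g., recurrence) was observed, 0 if censored
    """
    data = sorted(zip(times, events))  # order subjects by follow-up time
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        n = at_risk
        d = 0
        # Pool every subject whose follow-up ends at time t.
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            at_risk -= 1
            i += 1
        if d:  # the curve drops only at observed event times
            surv *= (n - d) / n
            curve.append((t, surv))
    return curve

# Toy cohort: recurrences at t=2 and t=5, censoring at t=3 and t=7.
curve = kaplan_meier([2, 3, 5, 7], [1, 0, 1, 0])
```

Censored subjects leave the risk set without lowering the curve, which is why group comparisons use the log-rank test rather than a naive comparison of recurrence proportions.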
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Tissue microarray construction", "Immunohistochemistry for E-cadherin and MTA1", "In situ hybridization for miR-21", "Statistical analysis", "Meta-analysis for the association of miR-21 expression and patient survival", "Results", "miR-21 expression by in situ hybridization", "Correlation between miR-21 and MTA1/E-cadherin expression", "Recurrence-free survival and overall survival", "Meta-analysis", "Discussion", "Conclusion" ]
[ "Colorectal cancer (CRC) is the third most commonly diagnosed cancer in Korea [1]. The prognosis of CRC is associated with tumor progression; five-year survival rates range from 93% to 8% [2]. Many serological and molecular markers have been proposed as predictive and prognostic indicators of CRC; however, they are not widely accepted as providing reliable prognostic information due to a lack of reproducibility, validation and standardization among studies [3,4]. Therefore, there is a need to identify more reliable prognostic mediators of tumor progression and metastasis in order to define the behavior of CRC and improve postoperative treatment strategies.\nMicroRNAs are small noncoding RNA molecules, 18-25 nucleotides in length, which post-transcriptionally regulate gene expression by binding to the 3’ untranslated regions of target messenger RNAs, playing a central role in the regulation of mRNA expression [5]. MicroRNAs have been shown to influence all cellular processes [6] and have a high degree of sequence conservation among distantly related organisms, indicating their likely participation in essential biological processes [7]. Of note, microRNAs have been reported to have a marked influence on carcinogenesis through the dysregulation of oncogenes and tumor suppressor genes [8]. Cancer-related microRNAs typically show altered expression levels in tumors as compared to the level of expression in the corresponding normal tissue.\nMicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes, such as PTEN and PDCD4, and has been reported to be consistently up-regulated in various types of cancers, including colon, breast, lung, and stomach cancers [9-16]. MiR-21 is known to contribute to the regulation of apoptosis, cell proliferation and migration [9,11,17]. Moreover, miR-21 levels increase in the advanced stages of cancer, suggesting a central role for miR-21 in invasion and dissemination of cancer [12,14].
In CRC tissue samples, miR-21 expression is up-regulated during tumor progression and is also known to be associated with poor survival and poor response to chemotherapy [12,13,18]. However, the clinical significance of miR-21 expression in advanced CRC remains unclear.\nIn situ hybridization (ISH) for microRNA has an advantage over quantitative microRNA expression analysis platforms in that ISH allows for precise histological localization of microRNAs in formalin-fixed paraffin-embedded tissue blocks [19,20].\nLoss of E-cadherin expression is associated with activation of epithelial-mesenchymal transition, invasion and metastasis in various cancers [21]. Conversely, expression of metastasis-associated protein 1 (MTA1) is correlated with cancer progression and metastasis in numerous cancer types, including CRC [22,23]. Previous studies on the association between MTA1 and E-cadherin have shown that MTA1 regulates E-cadherin expression through AKT activation in prostate cancer, and that low E-cadherin expression promotes cancer metastasis [21,24]. However, the exact role of these proteins in CRC remains unclear.\nWe investigated miR-21 expression using ISH in specimens from T3-4a CRC patients treated by surgical resection. We also evaluated the relationship between expression of miR-21, E-cadherin and MTA1 and their clinical significance as potential biomarkers for prognosis of T3-4a CRC patients.", " Patients From January 2004 until June 2007, a total of 526 consecutive patients underwent surgical resection for CRC at Seoul St. Mary’s Hospital.
Of these, 277 patients with pathological T3 (invasion of the subserosa or pericolic/perirectal adipose tissue) or T4a (serosal invasion) cancer were selected for the study, based on the following inclusion criteria: (i) no neoadjuvant chemotherapy or radiation therapy, (ii) no evidence of direct invasion into adjacent structures or organs, (iii) no postoperative death within six weeks, and (iv) no distant metastasis at presentation. The patients consisted of 181 males and 96 females (mean age 63.0 years). Overall survival (OS) was defined as the time interval between surgery and death from any cause or the most recent follow-up date. Recurrence-free survival (RFS) was defined as the time from the date of surgery to the date of first cancer recurrence or the most recent disease-free follow-up. This study was approved by the Institutional Review Board of Seoul St. Mary’s Hospital, The Catholic University of Korea. Written informed consent was obtained from all patients.
 Tissue microarray construction We constructed tissue microarrays from formalin-fixed, paraffin-embedded tissues as previously described [25,26]. Two 2-mm-diameter tissue cores were collected from each representative tumor specimen and inserted into a recipient paraffin block. The tissue microarray blocks were serially cut into 4-μm-thick sections for immunohistochemistry and 6-μm-thick sections for ISH.
 Immunohistochemistry for E-cadherin and MTA1 Immunohistochemical staining was performed using specific antibodies against E-cadherin (4A2C7, Zymed, South San Francisco, CA), MTA1 (A-11, Santa Cruz Biotechnology, Santa Cruz, CA) and the Polink-2 plus polymer HRP detection system (Golden Bridge International, Mukilteo, WA, USA) according to each manufacturer’s protocol. The specificity of each antibody was confirmed using both Western blotting and immunocytochemistry in several cell lines with known protein expression status. Negative controls were performed by substituting the primary antibodies with normal mouse IgG at the same concentration. Multi-tissue blocks containing known-positive tumor tissues were used as positive controls. Staining was examined in triplicate by two gastrointestinal pathologists (CKJ and SHL) who were blinded to the clinicopathological data. Specimens with discordant interpretations were reviewed until an agreement was reached. Immunohistochemical staining results were assessed only by a semiquantitative score of staining intensity (0, none; 1, weak; 2, moderate; 3, strong) because nearly all positively-staining tumors showed a diffuse staining pattern for both proteins. These scores were subsequently used to group samples into two categories: low (0 or 1) and high (2 or 3) staining. Membranous staining of E-cadherin was evaluated and scored as ‘2’ when tumor cells displayed staining intensity similar to that seen in normal colonic mucosa. MTA1 expression was evaluated as nuclear staining.
 In situ hybridization for miR-21 ISH was performed using the miRCURY locked nucleic acid (LNA) microRNA Detection FFPE microRNA ISH Optimization Kit 2 (Exiqon, Vedbaek, Denmark) in a StatSpin ThermoBrite Slide Hybridizer (Fisher Scientific, Westwood, MA) as previously described [19]. We used a double-digoxigenin-labeled LNA miR-21 probe (Exiqon, sequence: 5′-TCAACATCAGTCTGATAAGCTA-3′), a positive control LNA U6 snRNA probe (Exiqon, sequence: 5′-CACGAATTTGCGTGTCATCCTT-3′) and a negative control LNA scrambled microRNA probe (Exiqon, sequence: 5′-GTGTAACACGTCTATACGCCCA-3′). Tissue sections were counterstained with nuclear fast red. Semiquantitative assessment of the ISH staining results was performed by two pathologists (CKJ and SHL) who were unaware of the clinicopathological and immunohistochemical data. In all cases where disagreements occurred, a consensus was reached by the investigators. The intensity of the staining was scored as negative (0), weak (1), moderate (2), or strong (3), as previously described [27,28], and samples were subsequently grouped into two categories: low (0 or 1) and high (2 or 3) expression.
 Statistical analysis The relationships between the expression of miR-21, E-cadherin and MTA1 and the clinicopathological parameters were analyzed using the Chi-square test. Cumulative incidence curves for OS and RFS were plotted using the Kaplan–Meier method. The log-rank test was used to detect differences among groups. Multivariate analysis for OS and RFS was conducted using the Cox proportional hazards regression model. All statistical analyses were performed using SPSS, version 16 (SPSS Inc., Chicago, IL). A p value <0.05 was considered significant.
 Meta-analysis for the association of miR-21 expression and patient survival Two authors (CKJ and SHL) performed literature searches using the PubMed, Embase and Google databases up to November 2014, and independently selected eligible articles. Inclusion criteria were: 1) related to the association between miR-21 expression and CRC prognosis, 2) original articles, and 3) sufficient RFS or OS data, including a hazard ratio (HR) with a 95% confidence interval (CI). We performed a meta-analysis of the HR for the effect of miR-21 expression on RFS or OS in colon or rectal cancer patients. Heterogeneity among studies was assessed using the Cochran Q test and I2 values. P < 0.10 or I2 > 50% was considered significant heterogeneity. If statistical heterogeneity was observed, the random-effects model was used for the meta-analysis; otherwise, a fixed-effect model was used. Meta-analyses were performed using Comprehensive Meta Analysis Version 2.0 (Biostat Inc., Englewood, NJ).", "From January 2004 until June 2007, a total of 526 consecutive patients underwent surgical resection for CRC at Seoul St. Mary’s Hospital.
Of these, 277 patients with pathological T3 (invasion of the subserosa or pericolic/perirectal adipose tissue) or T4a (serosal invasion) cancer were selected for the study, based on the following inclusion criteria: (i) no neoadjuvant chemotherapy or radiation therapy, (ii) no evidence of direct invasion into adjacent structures or organs, (iii) no postoperative death within six weeks, and (iv) no distant metastasis at presentation. The patients consisted of 181 males and 96 females (mean age 63.0 years). Overall survival (OS) was defined as the time interval between surgery and death from any cause or the most recent follow-up date. Recurrence-free survival (RFS) was defined as the time from the date of surgery to the date of first cancer recurrence or the most recent disease-free follow-up. This study was approved by the Institutional Review Board of Seoul, St. Mary’s Hospital, The Catholic University of Korea. Written informed consent was obtained from all patients.", "We constructed tissue microarrays from formalin-fixed, paraffin-embedded tissues as previously described [25,26]. Two 2-mm-diameter tissue cores were collected from each representative tumor specimen and inserted in a recipient paraffin block. The tissue microarray blocks were serially cut into 4-μm-thick sections for immunohistochemistry and 6-μm-thick sections for ISH.", "Immunohistochemical staining was performed using specific antibodies against E-cadherin (4A2C7, Zymed, South San Francisco, CA), MTA1 (A-11, Santa Cruz Biotechnology, Santa Cruz, CA) and the Polink-2 plus polymer HRP detection system (Golden Bridge International, Mukilteo, WA, USA) according to each manufacturer’s protocol. The specificity of each antibody was confirmed using both Western blotting and immunocytochemistry in several cell lines with known protein expression status. 
Negative controls were performed by the substitution of the primary antibodies with normal mouse IgG at the same concentration as the primary antibodies. Multi-tissue blocks containing known-positive tumor tissues were used as positive controls. Staining was examined in triplicate by two gastrointestinal pathologists (CKJ and SHL) who were blinded to the clinicopathological data. Specimens with discordant interpretations were reviewed until an agreement was reached. Immunohistochemical staining results were only assessed by a semiquantitative score of staining intensity (0, no; 1, weak; 2, moderate; 3, strong staining) because nearly all positively-staining tumors showed a diffuse staining pattern for both proteins. These scores were subsequently used to group samples into two categories: low (0 or 1) and high staining (2 or 3). Membrane staining of E-cadherin was evaluated and scored as ‘2’ when tumor cells displayed staining intensity similar to that seen in normal colonic mucosa. MTA1 expression was evaluated as nuclear staining.", "ISH was performed using the miRCURY locked nucleic acid (LNA) microRNA Detection FFPE microRNA ISH Optimization Kit 2 (Exiqon, Vedbaek, Denmark) in a StatSpin ThermoBrite Slide Hybridizer (Fisher Scientific, Westwood, MA) as previously described [19]. We used a double-digoxigenin-labeled LNA miR-21 probe (Exiqon, sequence: 5′-TCAACATCAGTCTGATAAGCTA-3′), a positive control LNA U6 snRNA probe (Exiqon, sequence: 5′-CACGAATTTGCGTGTCATCCTT-3′) and a negative control LNA scrambled microRNA probe (Exiqon, sequence: 5′-GTGTAACACGTCTATACGCCCA-3′). Tissue sections were counterstained with nuclear fast red. Semiquantitative assessment of the ISH staining results was performed by two pathologists (CKJ and SHL) who were unaware of the clinicopathological and immunohistochemical data. In all cases where disagreements occurred, a consensus was reached by the investigators.
The intensity of the staining was scored as negative (0), weak (1), moderate (2), or strong (3), as previously described [27,28], and samples were subsequently grouped into two categories: low (0 or 1) and high (2 or 3) expression.", "The relationships between the expression of miR-21, E-cadherin and MTA1 and the clinicopathological parameters were analyzed using the Chi-square test. Cumulative incidence curves for OS and RFS were plotted using the Kaplan–Meier method. The log-rank test was used to detect differences among groups. Multivariate analysis for OS and RFS was conducted using the Cox proportional hazards regression model. All statistical analyses were performed using SPSS, version 16 (SPSS Inc., Chicago, IL). A p value <0.05 was considered significant.", "Two authors (CKJ and SHL) performed literature searches using PubMed, Embase databases and Google up to November 2014, and independently selected eligible articles. Inclusion criteria include 1) being related to the association between miR-21 expression and CRC prognosis, 2) original articles, and 3) sufficient RFS or OS data including hazard ratio (HR) with a 95% confidence interval (CI). We performed a meta-analysis of the HR for the effect of miR-21 expression on RFS or OS in colon or rectal cancer patients. Heterogeneity among studies was assessed using the Cochran Q test and I2 values. A P < 0.10 or I2 > 50% was considered significant heterogeneity. If statistical heterogeneity was observed, the random-effects model was used for the meta-analysis. Otherwise, we used a fixed-effect model for the meta-analysis.
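The fixed- versus random-effects decision described here can be made concrete with a small sketch. Below, inverse-variance fixed-effect pooling of log hazard ratios is computed together with Cochran's Q and I²; the two HR/CI pairs are the colon-cancer RFS estimates from Table 4 (Kjaer-Frifeldt and the present study), used purely for illustration — this is not a re-run of the published meta-analysis, nor the Comprehensive Meta Analysis implementation.

```python
# Illustrative inverse-variance pooling of hazard ratios with Cochran's Q
# and I-squared. Sketch only; not the software used in the paper.
import math

def pool_hrs(hrs, ci_los, ci_his):
    """Fixed-effect pooled HR plus heterogeneity statistics (Q, I^2 in %)."""
    logs = [math.log(h) for h in hrs]
    # Standard error recovered from a 95% CI: (ln(hi) - ln(lo)) / (2 * 1.96)
    ses = [(math.log(hi) - math.log(lo)) / 3.92
           for lo, hi in zip(ci_los, ci_his)]
    w = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
    pooled = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - pooled) ** 2 for wi, li in zip(w, logs))
    df = len(hrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), q, i2

# Two colon-cancer RFS estimates from Table 4:
# 1.41 (95% CI 1.19-1.67) and 3.09 (95% CI 1.41-6.76).
hr, q, i2 = pool_hrs([1.41, 3.09], [1.19, 1.41], [1.67, 6.76])
```

With these two studies I² exceeds 50%, which under the rule stated above would trigger a random-effects model; that model adds a between-study variance term to each weight's denominator, pulling the pooled estimate toward the smaller study.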
Meta-analyses were performed using Comprehensive Meta Analysis Version 2.0 (Biostat Inc., Englewood, NJ).", "Demographic and clinicopathological variables of the study participants are listed in Table 1.Table 1\nCorrelations of clinicopathological parameters and expression of miR-21 in 277 patients with T3-4a colorectal cancer\nParameterNmiR-21 expressionp-valueHighLowAge<6513936 (25.9%)103 (74.1%)0.565≥6513840 (29.0%)98 (71.0%)GenderMale18151 (28.2%)130 (71.8%)0.705Female9625 (26.0%)71 (74.0%)Primary siteRight colon8220 (24.4%)62 (75.6%)0.636Left colon9128 (30.8%)63 (69.2%)Rectum10428 (26.9%)76 (73.1%)Histologic typeNon-mucinous26273 (27.9%)189 (72.1%)0.767Mucinous153 (20.0%)12 (80.0%)DifferentiationWell or moderately26074 (28.5%)186 (71.5%)0.168Poorly172 (11.8%)15 (88.2%)Depth of invasionpT322865 (28.5%)163 (71.5%)0.388pT4a4911 (22.4%)38 (77.6%)Lymph node metastasisAbsent13839 (28.3%)99 (71.7%)0.759Present13937 (26.6%)102 (73.7%)AJCC stage0.118IIA12234 (27.9%)88 (72.1%)IIB165 (31.2%)11 (68.8%)IIIB8228 (34.1%)54 (65.9%)IIIC579 (15.8%)48 (84.2%)Perineural invasionAbsent21958 (26.5%)161 (73.5%)0.490Present5818 (31.0%)40 (69.0%)Lymphatic invasionAbsent12033 (27.5%)87 (72.5%)0.984Present15743 (27.4%)114 (72.6%)Vascular invasionAbsent25572 (28.2%)183 (71.8%)0.311Present224 (18.2%)18 (81.8%)CEA (ng/dL)a<516346 (28.2%)117 (71.8%)0.542≥57825 (32.1%)53 (67.9%)Adjuvant therapyNo161 (6.2%)15 (93.8%)0.079Yes26175 (28.7%)186 (71.3%)E-cadherinLow10940 (36.7%)69 (63.3%)0.019High16137 (23.0%)124 (77.0%)MTA1Low16837 (22.0%)131 (78.0%)0.004High10239 (38.2%)63 (61.8%)aPreoperative serum level of carcinoembryonic antigen (CEA) was measured in 241 colorectal cancer patients. AJCC, American Joint Committee on Cancer. 
Immunohistochemistry for E-cadherin and MTA1 was available in 270 cases.\n\nCorrelations of clinicopathological parameters and expression of miR-21 in 277 patients with T3-4a colorectal cancer\n\naPreoperative serum level of carcinoembryonic antigen (CEA) was measured in 241 colorectal cancer patients. AJCC, American Joint Committee on Cancer. Immunohistochemistry for E-cadherin and MTA1 was available in 270 cases.\n miR-21 expression by in situ hybridization miR-21 expression was found to be predominantly localized to the stroma surrounding the tumor cells (Figure 1). High levels of miR-21 were found in 76 of 277 (27.4%) CRC specimens. There was no significant correlation between high miR-21 expression and the clinicopathological features of the patients (Table 1).Figure 1In situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400.\nIn situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. 
Correlation between miR-21 and MTA1/E-cadherin expression

The expression patterns of E-cadherin and MTA1 in stained tumor cells were membranous and nuclear, respectively (Figure 1). Low expression of E-cadherin was found in 109 of 277 (39.4%) CRCs, and high MTA1 expression was seen in 102 (36.8%) tumors. High miR-21 expression was significantly correlated with low E-cadherin expression (P = 0.019) and with high MTA1 expression (P = 0.004) (Table 1). E-cadherin expression was negatively correlated with MTA1 expression (P = 0.005).

Recurrence-free survival and overall survival

In all 277 CRC patients, the variables significantly associated with RFS were miR-21 expression (P = 0.010, Figure 2A), histological differentiation (P = 0.031), pT stage (P = 0.0005), lymph node metastasis (P = 0.00001), and serum CEA level (P = 0.006) (Table 2). In multivariate analysis, high miR-21 expression (P = 0.007; HR = 2.24, 95% CI = 1.25-4.02), pT stage, lymph node metastasis, and serum CEA level were independent prognostic factors for unfavorable RFS (Table 3). However, OS was not associated with the expression levels of miR-21, E-cadherin, or MTA1.

Figure 2. Association between miR-21 expression and recurrence-free survival in patients with T3-4a colorectal cancer. Kaplan-Meier curves for recurrence-free survival in all (A), stage II (B), and stage III (C) cancer patients according to miR-21 expression status. (A) High miR-21 expression is associated with shorter recurrence-free survival in colon cancer patients but not in rectal cancer patients. (B) For the 138 patients with stage II cancer, the association between high miR-21 expression and recurrence-free survival is statistically significant only in colon cancer patients.
(C) Among the 139 patients with stage III cancer, high miR-21 expression is not associated with poor recurrence-free survival.

Table 2. Univariate analysis for overall recurrence-free survival among patients with T3-4a colorectal cancer

| Variable | Colorectal cancer (n = 277), HR (95% CI) | p-value | Rectal cancer (n = 104), HR (95% CI) | p-value | Colon cancer (n = 173), HR (95% CI) | p-value |
|---|---|---|---|---|---|---|
| miR-21 expression (low vs. high) | 2.02 (1.18-3.45) | 0.010 | 1.32 (0.62-2.85) | 0.474 | 3.09 (1.41-6.76) | 0.005 |
| Age (<65 vs. ≥65 years) | 0.97 (0.57-1.66) | 0.920 | 0.45 (0.20-1.02) | 0.055 | 2.12 (0.95-4.77) | 0.068 |
| Tumor type (non-mucinous vs. mucinous) | 1.18 (0.37-3.78) | 0.783 | 3.03 (0.70-13.05) | 0.138 | 0.67 (0.09-4.94) | 0.693 |
| Differentiation (well or moderately vs. poorly) | 2.56 (1.09-6.00) | 0.031 | 4.57 (1.33-15.67) | 0.016 | 2.06 (0.60-7.00) | 0.249 |
| pT (T3 vs. T4a) | 2.68 (1.51-4.76) | 0.0005 | 3.97 (1.73-9.12) | 0.001 | 2.30 (1.00-5.30) | 0.044 |
| Lymph node metastasis (absent vs. present) | 3.69 (1.98-6.87) | 0.00001 | 6.65 (2.30-19.22) | 0.0004 | 2.24 (1.00-5.02) | 0.045 |
| CEA (<5 vs. ≥5 ng/dL) | 2.24 (1.26-3.99) | 0.006 | 2.26 (1.02-5.05) | 0.046 | 2.23 (0.97-5.15) | 0.060 |
| Adjuvant therapy (no vs. yes) | 4.19 (0.58-30.35) | 0.156 | NA | NA | 3.23 (0.44-23.98) | 0.252 |

HR, hazard ratio; CI, confidence interval; NA, not available.

Table 3. Multivariate analysis of prognostic factors predicting overall recurrence-free survival according to cancer location

| Variable | Colorectal cancer (n = 277), HR (95% CI) | p-value | Rectal cancer (n = 104), HR (95% CI) | p-value | Colon cancer (n = 173), HR (95% CI) | p-value |
|---|---|---|---|---|---|---|
| miR-21 expression (low vs. high) | 2.24 (1.25-4.02) | 0.007 | 1.65 (0.65-4.16) | 0.295 | 2.45 (1.05-5.72) | 0.038 |
| Age (<65 vs. ≥65 years) | 1.03 (0.56-1.89) | 0.924 | 0.27 (0.10-0.70) | 0.007 | 2.48 (1.00-6.12) | 0.049 |
| Tumor type (non-mucinous vs. mucinous) | 0.61 (0.13-2.97) | 0.539 | 0.62 (0.03-11.50) | 0.751 | 1.03 (0.13-8.48) | 0.976 |
| Differentiation (well or moderately vs. poorly) | 2.18 (0.83-5.71) | 0.114 | 2.60 (0.55-12.21) | 0.225 | 1.56 (0.41-5.94) | 0.513 |
| pT (T3 vs. T4a) | 1.97 (1.01-3.83) | 0.046 | 2.26 (0.75-6.79) | 0.145 | 2.27 (0.86-5.97) | 0.098 |
| Lymph node metastasis (absent vs. present) | 4.55 (2.23-9.29) | 0.00003 | 11.75 (3.33-41.48) | 0.0001 | 3.02 (1.22-7.47) | 0.017 |
| CEA (<5 vs. ≥5 ng/dL) | 2.63 (1.46-4.74) | 0.001 | 3.32 (1.39-7.51) | 0.006 | 2.65 (1.13-6.21) | 0.025 |
| Adjuvant therapy (no vs. yes) | 2.48 (0.32-19.32) | 0.386 | NA | NA | 2.15 (0.26-18.08) | 0.431 |

HR, hazard ratio; CI, confidence interval; NA, not available. The multivariate analysis is adjusted for age (<65 vs. ≥65 years), tumor type (non-mucinous vs. mucinous), differentiation (well or moderately vs. poorly), pT (T3 vs. T4a), lymph node metastasis (absent vs. present), CEA (<5 vs. ≥5 ng/dL), and adjuvant therapy (no vs. yes).

To further examine the association between prognostic factors and RFS according to the primary cancer site, we analyzed their HRs for RFS in colon and rectal cancer separately (Tables 3 and 4). High expression of miR-21 was associated with shorter RFS in patients with T3-4a colon cancer (n = 173, P = 0.005, Figure 2A), but not in patients with T3-4a rectal cancer (n = 104, P = 0.474, Figure 2A).

Table 4. Characteristics of studies that evaluated the association between high miR-21 expression and recurrence-free survival or overall survival in colorectal cancer

| First author [ref] | Year | Origin | Cases | AJCC stage | RFS HR (95% CI) | OS HR (95% CI) | Cut-off value | Analysis | Method |
|---|---|---|---|---|---|---|---|---|---|
| Schetter [13] | 2008 | USA^a | CC 71 | I-IV | NA | 2.7 (1.3-5.5) | Third tertile | Multivariate | RT-PCR |
| | 2008 | China^a | CC 103 | I-IV | NA | 2.4 (1.4-4.1) | Dichotomize | Multivariate | Microarray |
| Shibuya [18] | 2010 | Japan | CRC 156 | I-IV | 0.396 (0.186-0.897) | 0.513 (0.280-0.956) | Mean | Multivariate | RT-PCR |
| Nielsen [19] | 2011 | Denmark | CC 129 | II | 1.28 (1.06-1.55) | 1.17 (1.02-1.34) | Dichotomize | Multivariate | ISH |
| | | Denmark | RC 67 | II | 0.85 (0.73-1.01) | 0.97 (0.83-1.13) | Dichotomize | Multivariate | ISH |
| Kjaer-Frifeldt [20] | 2012 | Denmark | CC 764 | II | 1.41 (1.19-1.67) | 1.05 (0.94-1.18) | Mean log | Multivariate | ISH |
| Zhang [29] | 2013 | China | CC 138 | II | 1.98 (0.95-4.15) | NA | Dichotomize | Univariate | RT-PCR |
| | | China | CC 137 | II | 1.88 (0.95-3.75) | NA | Dichotomize | Univariate | RT-PCR |
| | | China | CC 255 | II | 1.79 (1.22-2.62) | NA | Dichotomize | Univariate | RT-PCR |
| Bovell [30] | 2013 | USA | CRC 55 | IV | NA | 3.25 (1.37-7.72) | Mean | Multivariate | RT-PCR |
| Toiyama [31] | 2013 | Japan | CRC 166 | I-IV | NA | 0.59 (0.21-1.63) | 3.7 | Multivariate | RT-PCR |
| Chen [32] | 2013 | Taiwan | CRC 195 | I-IV | NA | 1.655 (0.992-2.762) | Mean | Univariate | RT-PCR |
| Hansen [33] | 2014 | Denmark | CC 554 | II | 1.348 (1.032-1.760) | 1.075 (0.889-1.301) | Dichotomize | Multivariate | RT-PCR |
| Oue [34] | 2014 | Japan | CC 156 | I-IV | NA | 1.80 (0.91-3.58) | Third tertile | Multivariate | RT-PCR |
| | | | CC 87 | II-III | NA | 3.13 (1.20-8.17) | Third tertile | Multivariate | RT-PCR |
| | | Germany | CC 145 | II | NA | 2.65 (1.06-6.66) | Third tertile | Multivariate | RT-PCR |
| Present study | | Korea | CC 173 | II-III | 3.09 (1.41-6.76) | 0.425 (0.142-1.271) | Dichotomize | Multivariate | ISH |
| | | Korea | RC 104 | II-III | 1.32 (0.62-2.85) | 2.046 (0.557-7.513) | Dichotomize | Multivariate | ISH |

^a Only including patients with typical adenocarcinoma. AJCC, American Joint Committee on Cancer; CI, confidence interval; HR, hazard ratio; NA, not available; RT-PCR, reverse-transcription PCR; ISH, in situ hybridization; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer.

The T3-4a CRC patients were divided into subgroups according to American Joint Committee on Cancer stage. In the stage II (T3-4aN0M0) subgroup, patients with high miR-21 expression had significantly shorter RFS than those with low miR-21 expression, regardless of the primary site (colon cancer, P = 0.007; rectal cancer, P = 0.030; Figure 2B). However, in the stage III (T3-4aN1M0) subgroup, there was no significant difference in RFS between patients with high and low miR-21 expression (colon cancer, P = 0.053; rectal cancer, P = 0.588; Figure 2C).

Meta-analysis

A total of 10 studies were included in the meta-analysis; their characteristics are summarized in Table 4 [13,18-20,29-34]. High heterogeneity was found in the analysis. For all CRC patients, high miR-21 expression was significantly associated with poor RFS (HR = 1.327, 95% CI = 1.053-1.673, Figure 3) and poor OS (HR = 1.272, 95% CI = 1.065-1.519, Figure 4). In subgroup analysis, high miR-21 expression was significantly correlated with poor RFS and OS in colon cancer patients (HR = 1.423, 95% CI = 1.280-1.582 and HR = 1.357, 95% CI = 1.102-1.672, respectively), but not in rectal cancer patients or in studies of combined colorectal cohorts.

Figure 3. Forest plot of the meta-analysis of the association between high miR-21 expression and recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients. The observed association is not statistically significant in rectal cancer.
CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.

Figure 4. Forest plot of the meta-analysis of the association between high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival.
CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.Figure 4Forest plot of meta-analysis for the association of high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival.\nForest plot of meta-analysis for the association of high miR-21 expression and recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients. The observed association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.\nForest plot of meta-analysis for the association of high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival.", "In the present study, we detected high miR-21 expression in 27.1% (81 of 299) of T3-4a CRCs, and this was associated with low E-cadherin expression and high MTA1 expression. The multivariate analysis revealed that high miR-21 expression was an independent predictor of tumor recurrence in patients with T3-4a CRC.\nWe observed that miR-21 overexpression occurred in the stroma rather than in the actual tumor cells. Previous studies have reported that miR-21 predominantly localizes to fibroblast-like cells within the tumor-associated stroma of CRC, breast cancer and esophageal cancer [19,35,36]. Using high sensitivity TaqMan quantitative RT-PCR assays in microdissected tissue, Bullock et al. 
found that miR-21 expression was undetectable in CRC tumor cells but was present in the tumor-associated stroma [35]. Up-regulated miR-21 expression in CRC-associated stroma was associated with transforming growth factor (TGF)-β-dependent fibroblast-to-myofibroblast transformation and with decreased expression of reversion-inducing cysteine-rich protein with Kazal motifs (RECK) [35]. The authors proposed that myofibroblast-derived factors mediate tumor progression, and that miR-21 promotes chemo-resistance and tumor invasion by increasing matrix metalloproteinase 2 activity [35]. These results suggest that miR-21 may regulate tumor progression through modulation of the tumor microenvironment.

Recent studies have shown that high stromal miR-21 expression, as measured by ISH, correlates with shorter RFS in stage II colon cancer [19,20]. In analyses of OS in stage II colon cancer patients, Nielsen et al. [19] reported prognostic significance for miR-21, whereas Kjaer-Frifeldt et al. [20] were unable to show any significant impact on OS. In the present study using ISH, we found that stromal miR-21 expression was a prognostic factor for RFS in stage II CRC patients but not in stage III patients. Therefore, miR-21 overexpression may play an important role in tumor progression and recurrence before the development of lymph node or distant metastases. We found no prognostic value for miR-21 in our analysis of OS, which was calculated as the time from surgery to death from any cause.

In the meta-analysis stratified by tumor site, we found that high miR-21 expression was associated with shorter RFS and worse OS in colon cancer, but not in rectal cancer or in mixed CRC cohorts. The RFS results are consistent with the findings of the present study.

It has been reported that MTA1 regulates E-cadherin expression via AKT activation in prostate cancer, and that miR-21 is required for regulation of phosphorylated AKT expression in glioblastoma multiforme [24,37]. Xiong et al.
suggested that miR-21 influences tumor biology through the PTEN/PI-3K/Akt pathway in CRC [38]. In our immunohistochemical analysis of E-cadherin and MTA1 expression, high MTA1 levels were associated with low E-cadherin expression. The expression profiles of these proteins were also significantly correlated with miR-21 expression patterns. Taken together, these results led us to hypothesize that MTA1 may negatively regulate E-cadherin expression via high miR-21 expression in CRC. However, further studies will be needed to determine whether miR-21 plays a direct role in the regulation of MTA1 and E-cadherin expression.

Our study has some limitations, including its retrospective, single-institution design and the lack of validation of these results in an independent CRC patient population. Thus, further prospective studies are needed to evaluate the prognostic significance of miR-21 expression.

Conclusions: miR-21 is overexpressed in the stroma of CRC specimens and shows strong associations with the expression of E-cadherin and MTA1. A high level of miR-21 is an independent risk factor predictive of early tumor recurrence in T3-4a colon cancer and stage II CRC. Thus, CRC patients with miR-21 overexpression are at higher risk for tumor recurrence and may benefit from more intensive treatment.
Keywords: Colorectal neoplasms; Neoplasm recurrence; microRNA; Cadherins; MTA-1 protein
Background: Colorectal cancer (CRC) is the third most commonly diagnosed cancer in Korea [1]. The prognosis of CRC is associated with tumor progression; five-year survival rates range from 93% to 8% [2]. Many serological and molecular markers have been proposed as predictive and prognostic indicators of CRC; however, they are not widely accepted as providing reliable prognostic information owing to a lack of reproducibility, validation and standardization among studies [3,4]. Therefore, more reliable prognostic mediators of tumor progression and metastasis need to be identified in order to define the behavior of CRC and improve postoperative treatment strategies. MicroRNAs are small noncoding RNA molecules, 18-25 nucleotides in length, that post-transcriptionally regulate gene expression by binding to the 3' untranslated regions of target messenger RNAs and play a central role in the regulation of mRNA expression [5]. MicroRNAs have been shown to influence all cellular processes [6] and show a high degree of sequence conservation among distantly related organisms, indicating their likely participation in essential biological processes [7]. Of note, microRNAs have been reported to have a marked influence on carcinogenesis through the dysregulation of oncogenes and tumor suppressor genes [8]. Cancer-related microRNAs typically show altered expression levels in tumors compared with the corresponding normal tissue. MicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes, such as PTEN and PDCD4, and has been reported to be consistently up-regulated in various types of cancer, including colon, breast, lung and stomach cancers [9-16]. MiR-21 is known to contribute to the regulation of apoptosis, cell proliferation and migration [9,11,17]. Moreover, miR-21 levels increase in the advanced stages of cancer, suggesting a central role for miR-21 in the invasion and dissemination of cancer [12,14]. In CRC tissue samples, miR-21 expression is up-regulated during tumor progression and is also known to be associated with poor survival and poor response to chemotherapy [12,13,18]. However, the clinical significance of miR-21 expression in advanced CRC remains unclear. In situ hybridization (ISH) for microRNA has an advantage over quantitative microRNA expression platforms in that ISH allows precise histological localization of microRNAs in formalin-fixed, paraffin-embedded tissue blocks [19,20]. Loss of E-cadherin expression is associated with activation of epithelial-mesenchymal transition, invasion and metastasis in various cancers [21]. Conversely, expression of metastasis-associated protein 1 (MTA1) is correlated with cancer progression and metastasis in numerous cancer types, including CRC [22,23]. Previous studies on the association between MTA1 and E-cadherin have shown that MTA1 regulates E-cadherin expression through AKT activation in prostate cancer, and that low E-cadherin expression promotes cancer metastasis [21,24]. However, the exact role of these proteins in CRC remains unclear. We investigated miR-21 expression using ISH in specimens from T3-4a CRC patients treated by surgical resection. We also evaluated the relationships among the expression of miR-21, E-cadherin and MTA1 and their clinical significance as potential prognostic biomarkers in T3-4a CRC patients.

Methods: Patients: From January 2004 until June 2007, a total of 526 consecutive patients underwent surgical resection for CRC at Seoul St. Mary's Hospital.
Of these, 277 patients with pathological T3 (invasion of the subserosa or pericolic/perirectal adipose tissue) or T4a (serosal invasion) cancer were selected for the study, based on the following inclusion criteria: (i) no neoadjuvant chemotherapy or radiation therapy, (ii) no evidence of direct invasion into adjacent structures or organs, (iii) no postoperative death within six weeks, and (iv) no distant metastasis at presentation. The patients consisted of 181 males and 96 females (mean age 63.0 years). Overall survival (OS) was defined as the time interval between surgery and death from any cause or the most recent follow-up date. Recurrence-free survival (RFS) was defined as the time from the date of surgery to the date of first cancer recurrence or the most recent disease-free follow-up. This study was approved by the Institutional Review Board of Seoul St. Mary's Hospital, The Catholic University of Korea. Written informed consent was obtained from all patients.

Tissue microarray construction: We constructed tissue microarrays from formalin-fixed, paraffin-embedded tissues as previously described [25,26]. Two 2-mm-diameter tissue cores were collected from each representative tumor specimen and inserted into a recipient paraffin block. The tissue microarray blocks were serially cut into 4-μm-thick sections for immunohistochemistry and 6-μm-thick sections for ISH.

Immunohistochemistry for E-cadherin and MTA1: Immunohistochemical staining was performed using specific antibodies against E-cadherin (4A2C7, Zymed, South San Francisco, CA), MTA1 (A-11, Santa Cruz Biotechnology, Santa Cruz, CA) and the Polink-2 plus polymer HRP detection system (Golden Bridge International, Mukilteo, WA, USA) according to each manufacturer's protocol. The specificity of each antibody was confirmed by both Western blotting and immunocytochemistry in several cell lines with known protein expression status. Negative controls were performed by substituting normal mouse IgG, at the same concentration, for the primary antibodies. Multi-tissue blocks containing known-positive tumor tissues were used as positive controls. Staining was examined in triplicate by two gastrointestinal pathologists (CKJ and SHL) who were blinded to the clinicopathological data. Specimens with discordant interpretations were reviewed until an agreement was reached.
Immunohistochemical staining was assessed only by a semiquantitative score of staining intensity (0, none; 1, weak; 2, moderate; 3, strong) because nearly all positively staining tumors showed a diffuse staining pattern for both proteins. These scores were used to group samples into two categories: low (0 or 1) and high (2 or 3) staining. Membrane staining of E-cadherin was evaluated and scored as '2' when tumor cells displayed staining intensity similar to that seen in normal colonic mucosa. MTA1 expression was evaluated as nuclear staining.

In situ hybridization for miR-21: ISH was performed using the miRCURY locked nucleic acid (LNA) microRNA Detection FFPE microRNA ISH Optimization Kit 2 (Exiqon, Vedbaek, Denmark) in a StatSpin ThermoBrite Slide Hybridizer (Fisher Scientific, Westwood, MA) as previously described [19]. We used a double-digoxigenin-labeled LNA miR-21 probe (Exiqon, sequence: 5′-TCAACATCAGTCTGATAAGCTA-3′), a positive control LNA U6 snRNA probe (Exiqon, sequence: 5′-CACGAATTTGCGTGTCATCCTT-3′) and a negative control LNA scrambled microRNA probe (Exiqon, sequence: 5′-GTGTAACACGTCTATACGCCCA-3′). Tissue sections were counterstained with nuclear fast red. Semiquantitative assessment of the ISH staining results was performed by two pathologists (CKJ and SHL) who were unaware of the clinicopathological and immunohistochemical data. In all cases where disagreements occurred, a consensus was reached by the investigators. The intensity of staining was scored as negative (0), weak (1), moderate (2), or strong (3), as previously described [27,28], and samples were subsequently grouped into two categories: low (0 or 1) and high (2 or 3) expression.

Statistical analysis: The relationships between the expression of miR-21, E-cadherin and MTA1 and the clinicopathological parameters were analyzed using the Chi-square test. Cumulative incidence curves for OS and RFS were plotted using the Kaplan–Meier method, and the log-rank test was used to detect differences among groups. Multivariate analysis for OS and RFS was conducted using the Cox proportional hazards regression model. All statistical analyses were performed using SPSS, version 16 (SPSS Inc., Chicago, IL). A p value <0.05 was considered significant.

Meta-analysis for the association of miR-21 expression and patient survival: Two authors (CKJ and SHL) performed literature searches of PubMed, Embase and Google up to November 2014, and independently selected eligible articles. The inclusion criteria were: 1) relevance to the association between miR-21 expression and CRC prognosis; 2) original articles; and 3) sufficient RFS or OS data, including a hazard ratio (HR) with a 95% confidence interval (CI). We performed a meta-analysis of the HRs for the effect of miR-21 expression on RFS or OS in colon or rectal cancer patients. Heterogeneity among studies was assessed using the Cochran Q test and I² values; P < 0.10 or I² > 50% was considered to indicate significant heterogeneity. If statistical heterogeneity was observed, a random-effects model was used for the meta-analysis; otherwise, a fixed-effect model was used. Meta-analyses were performed using Comprehensive Meta Analysis Version 2.0 (Biostat Inc., Englewood, NJ).
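The pooling and heterogeneity assessment described for the meta-analysis can be sketched with a short, self-contained example. This is an illustration of the standard inverse-variance arithmetic only, not the Comprehensive Meta Analysis implementation: the function name `pool_hazard_ratios` and the choice of the DerSimonian-Laird estimator for the random-effects model are assumptions for this sketch. The example input reuses three of the stage II colon-cancer RFS estimates listed in Table 4 (Nielsen, Kjaer-Frifeldt, Hansen).

```python
import math

def pool_hazard_ratios(studies, z=1.96):
    """Inverse-variance pooling of hazard ratios on the log scale.

    Each study is given as (HR, CI_lower, CI_upper); the standard error of
    log(HR) is recovered from the width of the 95% CI. Returns fixed- and
    random-effects (DerSimonian-Laird) pooled HRs plus Cochran's Q and I^2.
    """
    logs = [math.log(hr) for hr, lo, hi in studies]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z) for _, lo, hi in studies]

    w = [1.0 / se ** 2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)

    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))  # Cochran's Q
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q * 100.0) if q > 0 else 0.0       # I^2 in percent

    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    wr = [1.0 / (se ** 2 + tau2) for se in ses]             # random-effects weights
    random_ = sum(wi * yi for wi, yi in zip(wr, logs)) / sum(wr)

    return {"fixed_HR": math.exp(fixed), "random_HR": math.exp(random_),
            "Q": q, "I2": i2, "tau2": tau2}

# Three colon-cancer stage II RFS estimates taken from Table 4
studies = [(1.28, 1.06, 1.55), (1.41, 1.19, 1.67), (1.348, 1.032, 1.760)]
print(pool_hazard_ratios(studies))
```

For these three homogeneous estimates Q falls below its degrees of freedom, so I² is truncated to 0% and the random-effects result collapses to the fixed-effect one; with heterogeneous inputs (as in the full meta-analysis here) the two pooled HRs diverge.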
Results: Demographic and clinicopathological variables of the study participants are listed in Table 1.

Table 1. Correlations of clinicopathological parameters and expression of miR-21 in 277 patients with T3-4a colorectal cancer

| Parameter | N | miR-21 high | miR-21 low | p-value |
| Age: <65 years | 139 | 36 (25.9%) | 103 (74.1%) | 0.565 |
| Age: ≥65 years | 138 | 40 (29.0%) | 98 (71.0%) | |
| Gender: male | 181 | 51 (28.2%) | 130 (71.8%) | 0.705 |
| Gender: female | 96 | 25 (26.0%) | 71 (74.0%) | |
| Primary site: right colon | 82 | 20 (24.4%) | 62 (75.6%) | 0.636 |
| Primary site: left colon | 91 | 28 (30.8%) | 63 (69.2%) | |
| Primary site: rectum | 104 | 28 (26.9%) | 76 (73.1%) | |
| Histologic type: non-mucinous | 262 | 73 (27.9%) | 189 (72.1%) | 0.767 |
| Histologic type: mucinous | 15 | 3 (20.0%) | 12 (80.0%) | |
| Differentiation: well or moderately | 260 | 74 (28.5%) | 186 (71.5%) | 0.168 |
| Differentiation: poorly | 17 | 2 (11.8%) | 15 (88.2%) | |
| Depth of invasion: pT3 | 228 | 65 (28.5%) | 163 (71.5%) | 0.388 |
| Depth of invasion: pT4a | 49 | 11 (22.4%) | 38 (77.6%) | |
| Lymph node metastasis: absent | 138 | 39 (28.3%) | 99 (71.7%) | 0.759 |
| Lymph node metastasis: present | 139 | 37 (26.6%) | 102 (73.7%) | |
| AJCC stage: IIA | 122 | 34 (27.9%) | 88 (72.1%) | 0.118 |
| AJCC stage: IIB | 16 | 5 (31.2%) | 11 (68.8%) | |
| AJCC stage: IIIB | 82 | 28 (34.1%) | 54 (65.9%) | |
| AJCC stage: IIIC | 57 | 9 (15.8%) | 48 (84.2%) | |
| Perineural invasion: absent | 219 | 58 (26.5%) | 161 (73.5%) | 0.490 |
| Perineural invasion: present | 58 | 18 (31.0%) | 40 (69.0%) | |
| Lymphatic invasion: absent | 120 | 33 (27.5%) | 87 (72.5%) | 0.984 |
| Lymphatic invasion: present | 157 | 43 (27.4%) | 114 (72.6%) | |
| Vascular invasion: absent | 255 | 72 (28.2%) | 183 (71.8%) | 0.311 |
| Vascular invasion: present | 22 | 4 (18.2%) | 18 (81.8%) | |
| CEA (ng/dL)(a): <5 | 163 | 46 (28.2%) | 117 (71.8%) | 0.542 |
| CEA (ng/dL)(a): ≥5 | 78 | 25 (32.1%) | 53 (67.9%) | |
| Adjuvant therapy: no | 16 | 1 (6.2%) | 15 (93.8%) | 0.079 |
| Adjuvant therapy: yes | 261 | 75 (28.7%) | 186 (71.3%) | |
| E-cadherin: low | 109 | 40 (36.7%) | 69 (63.3%) | 0.019 |
| E-cadherin: high | 161 | 37 (23.0%) | 124 (77.0%) | |
| MTA1: low | 168 | 37 (22.0%) | 131 (78.0%) | 0.004 |
| MTA1: high | 102 | 39 (38.2%) | 63 (61.8%) | |

(a)Preoperative serum level of carcinoembryonic antigen (CEA) was measured in 241 colorectal cancer patients. AJCC, American Joint Committee on Cancer.
Immunohistochemistry for E-cadherin and MTA1 was available in 270 cases.

miR-21 expression by in situ hybridization: miR-21 expression was found to be predominantly localized to the stroma surrounding the tumor cells (Figure 1). High levels of miR-21 were found in 76 of 277 (27.4%) CRC specimens. There was no significant correlation between high miR-21 expression and the clinicopathological features of the patients (Table 1).

Figure 1: In situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of the insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400.

Correlation between miR-21 and MTA1/E-cadherin expression: The expression patterns of E-cadherin and MTA1 in stained tumor cells were membranous and nuclear, respectively (Figure 1). Low expression of E-cadherin was found in 109 of 277 (39.4%) CRCs, and high MTA1 expression was seen in 102 (36.8%) tumors. High miR-21 expression was significantly correlated with low E-cadherin expression (P = 0.019) and high MTA1 expression (P = 0.004) (Table 1).
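The Chi-square comparison behind the miR-21/E-cadherin association in Table 1 can be illustrated with a minimal 2x2 Pearson test. The counts come from Table 1 (the 270 cases with E-cadherin immunohistochemistry); the helper `chi_square_2x2` is hypothetical and omits the continuity correction, so its p-value will not exactly reproduce the SPSS value of 0.019 reported in Table 1, but it reaches the same conclusion at the 0.05 level.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square test (df = 1, no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]]. Returns (chi2, p)."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi2 with df = 1
    return chi2, p

# Counts from Table 1 (270 cases with E-cadherin immunohistochemistry):
# rows = E-cadherin low/high, columns = miR-21 high/low
chi2, p = chi_square_2x2(40, 69, 37, 124)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

The identity used for the p-value (the chi-square survival function with one degree of freedom equals `erfc(sqrt(x/2))`) keeps the sketch dependency-free; in practice this test is a one-liner with a statistics library.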
E-cadherin expression was negatively correlated with MTA1 expression (P = 0.005).

Recurrence-free survival and overall survival

In all 277 CRC patients, variables significantly associated with RFS included miR-21 expression (P = 0.010, Figure 2A), histological differentiation (P = 0.031), pT stage (P = 0.0005), lymph node metastasis (P = 0.00001), and serum CEA level (P = 0.006) (Table 2). In multivariate analysis, high miR-21 expression (P = 0.007; HR = 2.24, 95% CI = 1.25-4.02), pT stage, lymph node metastasis, and serum CEA level were independent prognostic factors for unfavorable RFS (Table 3). However, OS was not associated with the expression levels of miR-21, E-cadherin or MTA1.

Figure 2. Association between miR-21 expression and recurrence-free survival in patients with T3-4a colorectal cancer. Kaplan-Meier curves for recurrence-free survival in all (A), stage II (B) and stage III (C) cancer patients according to miR-21 expression status. (A) High miR-21 expression is associated with shorter recurrence-free survival in colon cancer patients but not in rectal cancer patients. (B) For the 138 patients with stage II cancer, the association between high miR-21 expression and recurrence-free survival is statistically significant only in colon cancer patients. (C) Among patients with stage III cancer, high miR-21 expression is not associated with poor recurrence-free survival.

Table 2. Univariate analysis for overall recurrence-free survival among patients with T3-4a colorectal cancer

| Variable | Colorectal cancer (n = 277), HR (95% CI) | p-value | Rectal cancer (n = 104), HR (95% CI) | p-value | Colon cancer (n = 173), HR (95% CI) | p-value |
|---|---|---|---|---|---|---|
| miR-21 expression (low vs. high) | 2.02 (1.18-3.45) | 0.010 | 1.32 (0.62-2.85) | 0.474 | 3.09 (1.41-6.76) | 0.005 |
| Age (<65 vs. ≥65 years) | 0.97 (0.57-1.66) | 0.920 | 0.45 (0.20-1.02) | 0.055 | 2.12 (0.95-4.77) | 0.068 |
| Tumor type (non-mucinous vs. mucinous) | 1.18 (0.37-3.78) | 0.783 | 3.03 (0.70-13.05) | 0.138 | 0.67 (0.09-4.94) | 0.693 |
| Differentiation (well or moderately vs. poorly) | 2.56 (1.09-6.00) | 0.031 | 4.57 (1.33-15.67) | 0.016 | 2.06 (0.60-7.00) | 0.249 |
| pT (T3 vs. T4a) | 2.68 (1.51-4.76) | 0.0005 | 3.97 (1.73-9.12) | 0.001 | 2.30 (1.00-5.30) | 0.044 |
| Lymph node metastasis (absent vs. present) | 3.69 (1.98-6.87) | 0.00001 | 6.65 (2.30-19.22) | 0.0004 | 2.24 (1.00-5.02) | 0.045 |
| CEA (<5 vs. ≥5 ng/dL) | 2.24 (1.26-3.99) | 0.006 | 2.26 (1.02-5.05) | 0.046 | 2.23 (0.97-5.15) | 0.060 |
| Adjuvant therapy (no vs. yes) | 4.19 (0.58-30.35) | 0.156 | NA | NA | 3.23 (0.44-23.98) | 0.252 |

HR, hazard ratio; CI, confidence interval; NA, not available.

Table 3. Multivariate analysis of prognostic factors predicting overall recurrence-free survival according to cancer location

| Variable | Colorectal cancer (n = 277), HR (95% CI) | p-value | Rectal cancer (n = 104), HR (95% CI) | p-value | Colon cancer (n = 173), HR (95% CI) | p-value |
|---|---|---|---|---|---|---|
| miR-21 expression (low vs. high) | 2.24 (1.25-4.02) | 0.007 | 1.65 (0.65-4.16) | 0.295 | 2.45 (1.05-5.72) | 0.038 |
| Age (<65 vs. ≥65 years) | 1.03 (0.56-1.89) | 0.924 | 0.27 (0.10-0.70) | 0.007 | 2.48 (1.00-6.12) | 0.049 |
| Tumor type (non-mucinous vs. mucinous) | 0.61 (0.13-2.97) | 0.539 | 0.62 (0.03-11.50) | 0.751 | 1.03 (0.13-8.48) | 0.976 |
| Differentiation (well or moderately vs. poorly) | 2.18 (0.83-5.71) | 0.114 | 2.60 (0.55-12.21) | 0.225 | 1.56 (0.41-5.94) | 0.513 |
| pT (T3 vs. T4a) | 1.97 (1.01-3.83) | 0.046 | 2.26 (0.75-6.79) | 0.145 | 2.27 (0.86-5.97) | 0.098 |
| Lymph node metastasis (absent vs. present) | 4.55 (2.23-9.29) | 0.00003 | 11.75 (3.33-41.48) | 0.0001 | 3.02 (1.22-7.47) | 0.017 |
| CEA (<5 vs. ≥5 ng/dL) | 2.63 (1.46-4.74) | 0.001 | 3.32 (1.39-7.51) | 0.006 | 2.65 (1.13-6.21) | 0.025 |
| Adjuvant therapy (no vs. yes) | 2.48 (0.32-19.32) | 0.386 | NA | NA | 2.15 (0.26-18.08) | 0.431 |

HR, hazard ratio; CI, confidence interval; NA, not available. The multivariate analysis is adjusted for age (<65 vs. ≥65 years), tumor type (non-mucinous vs. mucinous), differentiation (well or moderately vs. poorly), pT (T3 vs. T4a), lymph node metastasis (absent vs. present), CEA (<5 vs. ≥5 ng/dL) and adjuvant therapy (no vs. yes).

To further understand the association between prognostic factors and RFS according to the primary cancer site, we analyzed their HRs for RFS in colon and rectal cancer separately (Tables 3 and 4). High miR-21 expression was associated with shorter RFS in patients with T3-4a colon cancer (n = 173, P = 0.005, Figure 2A) but not in patients with T3-4a rectal cancer (n = 104, P = 0.474, Figure 2A).

Table 4. Characteristics of studies that evaluated the association between high miR-21 expression and recurrence-free survival or overall survival in colorectal cancer

| First author [ref] | Year | Origin | No. of cases | AJCC stage | RFS, HR (95% CI) | OS, HR (95% CI) | Cut-off value | Statistical analysis | Detection method |
|---|---|---|---|---|---|---|---|---|---|
| Schetter [13] | 2008 | USA (a) | CC 71 | I-IV | NA | 2.7 (1.3-5.5) | Third tertile | Multivariate | RT-PCR |
| Schetter [13] | 2008 | China (a) | CC 103 | I-IV | NA | 2.4 (1.4-4.1) | Dichotomize | Multivariate | Microarray |
| Shibuya [18] | 2010 | Japan | CRC 156 | I-IV | 0.396 (0.186-0.897) | 0.513 (0.280-0.956) | Mean | Multivariate | RT-PCR |
| Nielsen [19] | 2011 | Denmark | CC 129 | II | 1.28 (1.06-1.55) | 1.17 (1.02-1.34) | Dichotomize | Multivariate | ISH |
| Nielsen [19] | 2011 | Denmark | RC 67 | II | 0.85 (0.73-1.01) | 0.97 (0.83-1.13) | Dichotomize | Multivariate | ISH |
| Kjaer-Frifeldt [20] | 2012 | Denmark | CC 764 | II | 1.41 (1.19-1.67) | 1.05 (0.94-1.18) | Mean log | Multivariate | ISH |
| Zhang [29] | 2013 | China | CC 138 | II | 1.98 (0.95-4.15) | NA | Dichotomize | Univariate | RT-PCR |
| Zhang [29] | 2013 | China | CC 137 | II | 1.88 (0.95-3.75) | NA | Dichotomize | Univariate | RT-PCR |
| Zhang [29] | 2013 | China | CC 255 | II | 1.79 (1.22-2.62) | NA | Dichotomize | Univariate | RT-PCR |
| Bovell [30] | 2013 | USA | CRC 55 | IV | NA | 3.25 (1.37-7.72) | Mean | Multivariate | RT-PCR |
| Toiyama [31] | 2013 | Japan | CRC 166 | I-IV | NA | 0.59 (0.21-1.63) | 3.7 | Multivariate | RT-PCR |
| Chen [32] | 2013 | Taiwan | CRC 195 | I-IV | NA | 1.655 (0.992-2.762) | Mean | Univariate | RT-PCR |
| Hansen [33] | 2014 | Denmark | CC 554 | II | 1.348 (1.032-1.760) | 1.075 (0.889-1.301) | Dichotomize | Multivariate | RT-PCR |
| Oue [34] | 2014 | Japan | CC 156 | I-IV | NA | 1.80 (0.91-3.58) | Third tertile | Multivariate | RT-PCR |
| Oue [34] | 2014 | Japan | CC 87 | II-III | NA | 3.13 (1.20-8.17) | Third tertile | Multivariate | RT-PCR |
| Oue [34] | 2014 | Germany | CC 145 | II | NA | 2.65 (1.06-6.66) | Third tertile | Multivariate | RT-PCR |
| Present study | | Korea | CC 173 | II-III | 3.09 (1.41-6.76) | 0.425 (0.142-1.271) | Dichotomize | Multivariate | ISH |
| Present study | | Korea | RC 104 | II-III | 1.32 (0.62-2.85) | 2.046 (0.557-7.513) | Dichotomize | Multivariate | ISH |

(a) Only including patients with typical adenocarcinoma. AJCC, American Joint Committee on Cancer; CI, confidence interval; HR, hazard ratio; NA, not available; RT-PCR, reverse-transcription PCR; ISH, in situ hybridization; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer.

The T3-4a CRC patients were divided into subgroups according to American Joint Committee on Cancer stage. In the stage II (T3-4aN0M0) subgroup, patients with high miR-21 expression had significantly shorter RFS than those with low miR-21 expression regardless of the primary site (colon cancer, P = 0.007; rectal cancer, P = 0.030; Figure 2B). In the stage III (T3-4aN1M0) subgroup, however, there was no significant difference in RFS between patients with high and low miR-21 expression (colon cancer, P = 0.053; rectal cancer, P = 0.588; Figure 2C).
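The survival curves in Figure 2 are Kaplan-Meier estimates. As a minimal, self-contained sketch of the product-limit estimator (the follow-up times below are hypothetical; the article's group comparisons would additionally use a log-rank test):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function.
    times: follow-up time per patient (e.g. months to recurrence or
    last follow-up); events: 1 = recurrence observed, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Count recurrences and all observations tied at this time point
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / at_risk   # product-limit step
            curve.append((t, surv))
        at_risk -= leaving
        while i < len(data) and data[i][0] == t:
            i += 1                         # advance past ties
    return curve

# Hypothetical high-miR-21 group: recurrences at 12 and 20 months,
# censoring at 30 and 48 months
curve = kaplan_meier([12, 20, 30, 48], [1, 1, 0, 0])
```

Plotting one such curve per miR-21 expression group (high vs. low) reproduces the structure of the panels in Figure 2.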
Meta-analysis

A total of 10 studies were included in the meta-analysis; their characteristics are summarized in Table 4 [13,18-20,29-34]. High heterogeneity was found in the analysis. For all CRC patients, high miR-21 expression was significantly associated with poor RFS (HR = 1.327, 95% CI = 1.053-1.673, Figure 3) and poor OS (HR = 1.272, 95% CI = 1.065-1.519, Figure 4). In subgroup analysis, high miR-21 expression was significantly correlated with poor RFS and OS in colon cancer patients (HR = 1.423, 95% CI = 1.280-1.582 and HR = 1.357, 95% CI = 1.102-1.672, respectively), but not in rectal cancer patients or in mixed CRC cohorts.

Figure 3. Forest plot of the meta-analysis for the association between high miR-21 expression and recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients; the association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.

Figure 4. Forest plot of the meta-analysis for the association between high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival.
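The pooled hazard ratios above come from inverse-variance meta-analysis. As a minimal sketch of DerSimonian-Laird random-effects pooling of log hazard ratios, recovering each study's standard error from its reported 95% CI: the inputs below are a subset of the colon-cancer RFS estimates from Table 4, chosen for illustration only, so the result will not exactly reproduce the published pooled value, which used the full study set.

```python
import math

def pool_hazard_ratios(hrs, cis, z=1.96):
    """DerSimonian-Laird random-effects pooling of hazard ratios.
    hrs: per-study hazard ratios; cis: matching (lower, upper) 95% CIs.
    The SE of each log HR is recovered from the CI width."""
    y = [math.log(h) for h in hrs]
    se = [(math.log(u) - math.log(l)) / (2 * z) for l, u in cis]
    w = [1 / s ** 2 for s in se]                      # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0    # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]          # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_mu = math.sqrt(1 / sum(w_re))
    return math.exp(mu), math.exp(mu - z * se_mu), math.exp(mu + z * se_mu)

# Illustrative subset of colon-cancer RFS estimates from Table 4
hr, lo, hi = pool_hazard_ratios(
    [1.28, 1.41, 1.348, 3.09],
    [(1.06, 1.55), (1.19, 1.67), (1.032, 1.760), (1.41, 6.76)])
```

A forest plot such as Figure 3 simply displays each study's HR and CI alongside the pooled estimate computed this way.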
A total of 10 studies were included for the meta-analysis and their characteristics are summarized in Table 4 [13,18-20,29-34]. High heterogeneity was found in the analysis. For all CRC patients, high miR-21 expression was significantly associated with poor RFS (HR = 1.327, 95% CI = 1.053-1.673, Figure 3) and poor OS (HR = 1.272, 95% CI = 1.065-1.519, Figure 4). In subgroup analysis, the high miR-21 expression was significantly correlated with poor RFS and OS in colon cancer patients (HR = 1.423, 95% CI = 1.280-1.582; HR = 1.357, 95% CI = 1.102-1.672, respectively), but not in rectal cancer or CRC patients.Figure 3Forest plot of meta-analysis for the association of high miR-21 expression and recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients. The observed association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.Figure 4Forest plot of meta-analysis for the association of high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival. Forest plot of meta-analysis for the association of high miR-21 expression and recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients. The observed association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival. 
Forest plot of meta-analysis for the association of high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival. miR-21 expression by in situ hybridization: miR-21 expression was found to be predominantly localized to the stroma surrounding the tumor cells (Figure 1). High levels of miR-21 were found in 76 of 277 (27.4%) CRC specimens. There was no significant correlation between high miR-21 expression and the clinicopathological features of the patients (Table 1).Figure 1In situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400. In situ hybridization for miR-21 and immunohistochemistry for E-cadherin and MTA1. (A) A representative 2 mm tumor tissue core from the colorectal cancer tissue microarray shows diffuse strong miR-21 expression in the stroma. (B) High-magnification image of insert in (A) shows that miR-21 signals are strong in the stromal cells of colorectal cancer but not in the tumor cells. Magnification x400. (C) Tumor cells show strong membranous expression of E-cadherin. Magnification x400. (D) Tumor cells show strong nuclear expression of MTA1. Magnification x400. 
Correlation between miR-21 and MTA1/E-cadherin expression: The expression patterns of E-cadherin and MTA1 in stained tumor cells were membranous and nuclear, respectively (Figure 1). Low expression of E-cadherin was found in 109 of 277 (39.4%) CRCs, and high MTA1 expression was seen in 102 (36.8%) tumors. High miR-21 expression was significantly correlated with low E-cadherin expression (P = 0.019) and high MTA1 expression (P = 0.004) (Table 1). E-cadherin expression was negatively correlated with MTA1 expression (P = 0.005). Recurrence-free survival and overall survival: In all 277 CRC patients, variables significantly associated with RFS included miR-21 expression (P = 0.010, Figure 2A), histological differentiation (P = 0.031), pT stage (P = 0.0005), lymph node metastasis (P = 0.00001), and serum CEA level (P =0.006) (Table 2). In a multivariate analysis, high levels of miR-21 (P = 0.007, HR = 2.24; 95% CI = 1.25-4.02), pT stage, lymph node metastasis, and serum CEA level were independent prognostic factors for unfavorable RFS (Table 3). However, the OS rate was not associated with the expression levels of miR-21, E-cadherin or MTA1.Figure 2Association between miR-21 expression and recurrence-free survival in patients with T3-4a colorectal cancer. Kaplan-Meier survival curves for recurrence-free survival in all (A), stage II (B) and stage III (C) cancer patients according to miR-21 expression status. (A) High miR-21 expression is associated with recurrence-free survival in colon cancer patients but not in rectal cancer patients. (B) For the 138 patients with stage II cancer, the association between high miR-21 expression and recurrence-free survival is statistically significant only in colon cancer patients. 
(C) Among 277 stage III cancer patients, high miR-21 expression is not associated with poor recurrence-free survival.Table 2 Univariate analysis for overall recurrence-free survival among patients with T3-4a colorectal cancer Colorectal cancer (n = 277)Rectal cancer (n = 104)Colon cancer (n = 173)VariablesHR (95% CI)p-valueHR (95% CI)p-valueHR (95% CI)p-valuemiR-21 expression (low vs. high)2.02 (1.18-3.45)0.0101.32 (0.62-2.85)0.4743.09 (1.41-6.76)0.005Age (<65 years vs. ≥65 years)0.97 (0.57-1.66)0.9200.45 (0.20-1.02)0.0552.12 (0.95-4.77)0.068Tumor type (non-mucinous vs. mucinous)1.18 (0.37-3.78)0.7833.03 (0.70-13.05)0.1380.67 (0.09-4.94)0.693Differentiation (well or moderately vs. poorly)2.56 (1.09-6.00)0.0314.57 (1.33-15.67)0.0162.06 (0.60-7.00)0.249pT (T3 vs. T4a)2.68 (1.51-4.76)0.00053.97 (1.73-9.12)0.0012.30 (1.00-5.30)0.044Lymph node metastasis (absent vs. present)3.69 (1.98-6.87)0.000016.65 (2.30-19.22)0.00042.24 (1.00-5.02)0.045CEA (<5 ng/dL vs. ≥5 ng/dL)2.24 (1.26-3.99)0.0062.26 (1.02-5.05)0.0462.23 (0.97-5.15)0.060Adjuvant therapy (no vs. yes)4.19 (0.58-30.35)0.156NANA3.23 (0.44-23.98)0.252HR, hazard ratio; CI, confidence interval; NA, not available.Table 3 Multivariate analysis of prognostic factors predicting overall recurrence-free survival according to cancer location Colorectal cancer (n = 277)Rectal cancer (n = 104)Colon cancer (n = 173)VariablesHR (95% CI)p-valueHR (95% CI)p-valueHR (95% CI)p-valuemiR-21 expression (low vs. high)2.24 (1.25-4.02)0.0071.65 (0.65-4.16)0.2952.45 (1.05-5.72)0.038Age (<65 years vs. ≥65 years)1.03 (0.56-1.89)0.9240.27 (0.10-0.70)0.0072.48 (1.00-6.12)0.049Tumor type (non-mucinous vs. mucinous)0.61 (0.13-2.97)0.5390.62 (0.03-11.50)0.7511.03 (0.13-8.48)0.976Differentiation (well or moderately vs. poorly)2.18 (0.83-5.71)0.1142.60 (0.55-12.21)0.2251.56 (0.41-5.94)0.513pT (T3 vs. T4a)1.97 (1.01-3.83)0.0462.26 (0.75-6.79)0.1452.27 (0.86-5.97)0.098Lymph node metastasis (absent vs. 
present)4.55 (2.23-9.29)0.0000311.75 (3.33-41.48)0.00013.02 (1.22-7.47)0.017CEA (<5 ng/dL vs. ≥5 ng/dL)2.63 (1.46-4.74)0.0013.32 (1.39-7.51)0.0062.65 (1.13-6.21)0.025Adjuvant therapy (no vs. yes)2.48 (0.32-19.32)0.386NANA2.15 (0.26-18.08)0.431HR, hazard ratio; CI, confidence interval; NA, not available. Multivariate analysis is adjusted for age (<65 years vs. ≥65 years), tumor type (non-mucinous vs. mucinous), differentiation (well or moderately vs. poorly), pT (T3 vs. T4a), lymph node metastasis (absent vs. present), CEA (<5 ng/dL vs. ≥5 ng/dL) and adjuvant therapy (no vs. yes). Association between miR-21 expression and recurrence-free survival in patients with T3-4a colorectal cancer. Kaplan-Meier survival curves for recurrence-free survival in all (A), stage II (B) and stage III (C) cancer patients according to miR-21 expression status. (A) High miR-21 expression is associated with recurrence-free survival in colon cancer patients but not in rectal cancer patients. (B) For the 138 patients with stage II cancer, the association between high miR-21 expression and recurrence-free survival is statistically significant only in colon cancer patients. (C) Among 277 stage III cancer patients, high miR-21 expression is not associated with poor recurrence-free survival. Univariate analysis for overall recurrence-free survival among patients with T3-4a colorectal cancer HR, hazard ratio; CI, confidence interval; NA, not available. Multivariate analysis of prognostic factors predicting overall recurrence-free survival according to cancer location HR, hazard ratio; CI, confidence interval; NA, not available. Multivariate analysis is adjusted for age (<65 years vs. ≥65 years), tumor type (non-mucinous vs. mucinous), differentiation (well or moderately vs. poorly), pT (T3 vs. T4a), lymph node metastasis (absent vs. present), CEA (<5 ng/dL vs. ≥5 ng/dL) and adjuvant therapy (no vs. yes). 
To further understand the association of prognostic factors and RFS according to the primary cancer site, we analyzed their HRs for RFS in colon and rectal cancer separately (Tables 3 and 4). High expression of miR-21 was associated with shorter RFS in patients with T3-4a colon cancer (n = 173, P = 0.005, Figure 2A), but not in patients with T3-4a rectal cancer (n = 104, P = 0.474, Figure 2A).Table 4 Characteristics of studies that evaluated the association between the high expression of miR-21 and recurrence-free survival or overall survival in colorectal cancer First author (reference)YearOriginNo. of casesAJCC stageRecurrence-free survivalOverall survivalCut-off valueStatistic analysisDetection methodHR95% CIHR95% CISchetter [13]2008USAaCC 71I-IVNANA2.71.3-5.5Third tertileMultivariateRT-PCRChinaaCC 103I-IVNANA2.41.4-4.1DichotomizeMultivariateMicroarrayShibuya [18]2010JapanCRC 156I-IV0.3960.186-0.8970.5130.280-0.956MeanMultivariateRT-PCRNielsen [19]2011DenmarkCC 129II1.281.06-1.551.171.02-1.34DichotomizeMultivariateISHRC 67II0.850.73–1.010.970.83-1.13DichotomizeMultivariateISHKjaer-Frifeldt [20]2012DenmarkCC 764II1.411.19-1.671.050.94-1.18Mean logMultivariateISHZhang [29]2013ChinaCC 138II1.980.95-4.15NANADichotomizeUnivariateRT-PCRCC 137II1.880.95-3.75NANADichotomizeUnivariateRT-PCRCC 255II1.791.22-2.62NANADichotomizeUnivariateRT-PCRBovell [30]2013USACRC 55IVNANA3.251.37-7.72MeanMultivariateRT-PCRToiyama [31]2013JapanCRC 166I-IVNANA0.590.21-1.633.7MultivariateRT-PCRChen [32]2013TaiwanCRC 195I-IVNANA1.6550.992-2.762MeanUnivariateRT-PCRHansen [33]2014DenmarkCC 554II1.3481.032-1.7601.0750.889-1.301DichotomizeMultivariateRT-PCROue [34]2014JapanCC 156I-IVNANA1.800.91-3.58Third tertileMultivariateRT-PCRCC 87II-IIINANA3.131.20-8.17Third tertileMultivariateRT-PCRGermanyCC 145IINANA2.651.06-6.66Third tertileMultivariateRT-PCRPresent studyKoreaCC 173II-III3.091.41-6.760.4250.142-1.271DichotomizeMultivariateISHRC 
104II-III1.320.62-2.852.0460.557-7.513DichotomizeMultivariateISHaOnly including patients with typical adenocarcinoma. AJCC, American Joint Committee on Cancer; CI, confidence interval; HR, hazard ratio; NA, not available; RT-PCR, reverse-transcription PCR; ISH, in situ hybridization; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer. Characteristics of studies that evaluated the association between the high expression of miR-21 and recurrence-free survival or overall survival in colorectal cancer aOnly including patients with typical adenocarcinoma. AJCC, American Joint Committee on Cancer; CI, confidence interval; HR, hazard ratio; NA, not available; RT-PCR, reverse-transcription PCR; ISH, in situ hybridization; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer. The T3-4a CRC patients were divided into subgroups according to American Joint Committee on Cancer stage. In the stage II (T3-4aN0M0) subgroup, we found that patients with high miR-21 expression level had a significantly shorter RFS time than those with low miR-21 level regardless of the primary site (colon cancer, P = 0.007; rectal cancer, P = 0.030, Figure 2B). However, in the stage III (T3-4aN1M0) subgroup, there was no significant difference in RFS between patients with high or low levels of miR-21 expression (colon cancer, P = 0.053; rectal cancer, P = 0.588, Figure 2C). Meta-analysis: A total of 10 studies were included for the meta-analysis and their characteristics are summarized in Table 4 [13,18-20,29-34]. High heterogeneity was found in the analysis. For all CRC patients, high miR-21 expression was significantly associated with poor RFS (HR = 1.327, 95% CI = 1.053-1.673, Figure 3) and poor OS (HR = 1.272, 95% CI = 1.065-1.519, Figure 4). 
In subgroup analysis, the high miR-21 expression was significantly correlated with poor RFS and OS in colon cancer patients (HR = 1.423, 95% CI = 1.280-1.582; HR = 1.357, 95% CI = 1.102-1.672, respectively), but not in rectal cancer or CRC patients.

Figure 3. Forest plot of meta-analysis for the association of high miR-21 expression and recurrence-free survival in colorectal cancer patients. There is a statistically significant association between high miR-21 expression and poor recurrence-free survival in colon cancer patients. The observed association is not statistically significant in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; RFS, recurrence-free survival.

Figure 4. Forest plot of meta-analysis for the association of high miR-21 expression and overall survival in colorectal cancer patients. High miR-21 expression is associated with poor overall survival in colon cancer patients but not in rectal cancer. CI, confidence interval; CC, colon cancer; RC, rectal cancer; CRC, colorectal cancer; OS, overall survival.
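The pooled hazard ratios above come from an inverse-variance meta-analysis of the per-study log-HRs. A minimal sketch of how such a pooled HR can be derived from the HRs and 95% CIs listed in Table 4, using DerSimonian-Laird random-effects weights; the four input tuples are the colon-cancer RFS rows of the table, and this illustrates the general technique, not the authors' exact software or model:

```python
import math

def pooled_hr(studies):
    """Random-effects (DerSimonian-Laird) pooling of hazard ratios.

    Each study is (HR, ci_lower, ci_upper); the standard error of the
    log-HR is recovered from the 95% CI width: SE = (ln U - ln L) / (2 * 1.96).
    Returns the pooled HR with its 95% CI.
    """
    logs = [math.log(hr) for hr, lo, up in studies]
    ses = [(math.log(up) - math.log(lo)) / (2 * 1.96) for hr, lo, up in studies]
    w = [1 / se ** 2 for se in ses]                  # fixed-effect weights
    k = len(studies)
    mean_fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - mean_fixed) ** 2 for wi, yi in zip(w, logs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_re = [1 / (se ** 2 + tau2) for se in ses]      # random-effects weights
    mean_re = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
    se_re = math.sqrt(1 / sum(w_re))
    return (math.exp(mean_re),
            math.exp(mean_re - 1.96 * se_re),
            math.exp(mean_re + 1.96 * se_re))

# Colon-cancer RFS rows of Table 4: Nielsen, Kjaer-Frifeldt, Hansen, present study
colon_rfs = [(1.28, 1.06, 1.55), (1.41, 1.19, 1.67),
             (1.348, 1.032, 1.760), (3.09, 1.41, 6.76)]
hr, lo, up = pooled_hr(colon_rfs)
print(f"pooled HR = {hr:.2f} (95% CI {lo:.2f}-{up:.2f})")
```

With these four studies the pooled estimate lands close to the colon-cancer RFS result reported above; small differences are expected because the published analysis pooled additional cohorts.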
Discussion: In the present study, we detected high miR-21 expression in 27.1% (81 of 299) of T3-4a CRCs, and this was associated with low E-cadherin expression and high MTA1 expression. The multivariate analysis revealed that high miR-21 expression was an independent predictor of tumor recurrence in patients with T3-4a CRC. We observed that miR-21 overexpression occurred in the stroma rather than in the actual tumor cells. Previous studies have reported that miR-21 predominantly localizes to fibroblast-like cells within the tumor-associated stroma of CRC, breast cancer and esophageal cancer [19,35,36]. Using high-sensitivity TaqMan quantitative RT-PCR assays in microdissected tissue, Bullock et al. found that miR-21 expression was undetectable in CRC tumor cells but was present in the tumor-associated stroma [35]. Up-regulated miR-21 expression in CRC-associated stroma was associated with transforming growth factor-β (TGF-β)-dependent fibroblast-to-myofibroblast transformation and with decreased expression of reversion-inducing cysteine-rich protein with Kazal motifs (RECK) [35]. The authors proposed that myofibroblast-derived factors mediated tumor progression, and that miR-21 promoted chemo-resistance and tumor invasion by increasing matrix metalloproteinase 2 activity [35]. These results suggest that miR-21 may regulate tumor progression through modulation of the tumor microenvironment. Recent studies have shown that high stromal miR-21 expression, as measured by ISH, is correlated with shorter RFS in stage II colon cancer [19,20]. In the analysis of OS in stage II colon cancer patients, Nielsen et al. [19] reported on the prognostic significance of miR-21, whereas Kjaer-Frifeldt et al. [20] were unable to show any significant impact on OS. In the present study using ISH, we found that stromal miR-21 expression was a prognostic factor for RFS in stage II CRC patients but not in stage III patients.
Therefore, miR-21 overexpression may have an important role in tumor progression and recurrence prior to the development of lymph node or distant metastases. We found no prognostic value for miR-21 in our analysis of OS, which was calculated as the time from surgery to time of death from any cause. In the stratified meta-analysis by tumor site, we found that high miR-21 expression was associated with shorter RFS and worse OS in colon cancer, but not in rectal cancer or CRC. The RFS results are consistent with the findings of the present study. It has been reported that MTA1 regulates E-cadherin expression via AKT activation in prostate cancer, and that miR-21 is required for regulation of phosphorylated AKT expression in glioblastoma multiforme [24,37]. Xiong et al. suggested that miR-21 influences tumor biology through the PTEN/PI3K/Akt pathway in CRC [38]. In our immunohistochemical analysis of E-cadherin and MTA1 expression, high MTA1 level was associated with low E-cadherin expression. The expression profiles of these proteins were also significantly correlated with miR-21 expression patterns. Taken together, these results led us to hypothesize that MTA1 may negatively regulate E-cadherin expression via high miR-21 expression in CRC. However, further studies will be needed to determine whether there is a direct role for miR-21 in regulation of MTA1 and E-cadherin expression. Our study has some limitations, including the retrospective, single-institution design and the lack of validation of these results in an independent CRC patient population. Thus, further prospective studies are needed to evaluate the prognostic significance of miR-21 expression. Conclusion: miR-21 is overexpressed in the stroma of CRC specimens and has strong associations with the expression of E-cadherin and MTA1. A high level of miR-21 is an independent risk factor predictive of early tumor recurrence in T3-4a colon cancer and stage II CRC.
Thus, CRC patients with miR-21 overexpression are at higher risk for tumor recurrence and may benefit from more intensive treatment.
Background: MicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes. miR-21 has been associated with progression of some types of cancer. Metastasis-associated protein1 expression and loss of E-cadherin expression are correlated with cancer progression and metastasis in many cancer types. In advanced colorectal cancer, the clinical significance of miR-21 expression remains unclear. We aimed to investigate the impact of miR-21 expression in advanced colorectal cancer and its correlation with target proteins associated with colorectal cancer progression. Methods: From 2004 to 2007, 277 consecutive patients with T3-4a colorectal cancer treated with R0 surgical resection were included. Patients with neoadjuvant therapy and distant metastasis at presentation were excluded. The expression of miR-21 was investigated by in situ hybridization. Immunohistochemistry was used to detect E-cadherin and metastasis-associated protein1 expression. Results: High stromal expression of miR-21 was found in 76 of 277 (27.4%) colorectal cancer samples and was correlated with low E-cadherin expression (P = 0.019) and high metastasis-associated protein1 expression (P = 0.004). T3-4a colorectal cancer patients with high miR-21 expression had significantly shorter recurrence-free survival than those with low miR-21 expression. When analyzing colon and rectal cancer separately, high expression of miR-21 was an independent prognostic factor of unfavorable recurrence-free survival in T3-4a colon cancer patients (P = 0.038, HR = 2.45; 95% CI = 1.05-5.72) but not in T3-4a rectal cancer patients. In a sub-classification analysis, high miR-21 expression was associated with shorter recurrence-free survival in the stage II cancer (P = 0.001) but not in the stage III subgroup (P = 0.267). Conclusions: Stromal miR-21 expression is related to the expression of E-cadherin and metastasis-associated protein1 in colorectal cancer. 
Stage II colorectal cancer patients with high levels of miR-21 are at higher risk for tumor recurrence and should be considered for more intensive treatment.
Background: Colorectal cancer (CRC) is the third most commonly diagnosed cancer in Korea [1]. The prognosis of CRC is associated with tumor progression; five-year survival rates range from 93% to 8% [2]. There are many proposed serological and molecular markers as predictive and prognostic indicators of CRC; however, they are not widely accepted as providing reliable prognostic information due to a lack of reproducibility, validation and standardization among studies [3,4]. Therefore, there is a need to identify more reliable prognostic mediators of tumor progression and metastasis in order to define the behavior of CRC and improve postoperative treatment strategies. MicroRNAs are small noncoding RNA molecules, 18-25 nucleotides in length, which post-transcriptionally regulate gene expression by binding to the 3' untranslated regions of target messenger RNAs and play a central role in regulation of mRNA expression [5]. MicroRNAs have been shown to influence all cellular processes [6] and have a high degree of sequence conservation among distantly related organisms, indicating their likely participation in essential biological processes [7]. Of note, microRNAs have been reported to have a marked influence on carcinogenesis through the dysregulation of oncogenes and tumor suppressor genes [8]. Cancer-related microRNAs typically show altered expression levels in tumors as compared to the level of expression in the corresponding normal tissue. MicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes, such as PTEN and PDCD4, and has been reported to be consistently up-regulated in various types of cancers, including colon, breast, lung, and stomach cancers [9-16]. MiR-21 is known to contribute to the regulation of apoptosis, cell proliferation and migration [9,11,17].
Moreover, miR-21 levels increase in the advanced stages of cancer, suggesting a central role for miR-21 in invasion and dissemination of cancer [12,14]. In CRC tissue samples, miR-21 expression is up-regulated during tumor progression and is also known to be associated with poor survival and response to chemotherapy [12,13,18]. However, the clinical significance of miR-21 expression in advanced CRC remains unclear. In situ hybridization (ISH) for microRNA has an advantage over quantitative microRNA expression analysis platforms in that ISH allows for precise histological localization of microRNAs in formalin-fixed paraffin-embedded tissue blocks [19,20]. Loss of E-cadherin expression is associated with activation of epithelial-mesenchymal transition, invasion and metastasis in various cancers [21]. Conversely, expression of Metastasis-associated protein1 (MTA1) is correlated with cancer progression and metastasis in numerous cancer types, including CRC [22,23]. Previous studies on the association between MTA1 and E-cadherin have shown that MTA1 regulates E-cadherin expression through AKT activation in prostate cancer, and that low E-cadherin expression promotes cancer metastasis [21,24]. However, the exact role of these proteins in CRC remains unclear. We investigated miR-21 expression using ISH in specimens from T3-4a CRC patients treated by surgical resection. We also evaluated the relationship between expression of miR-21, E-cadherin and MTA1 and their clinical significance as potential biomarkers for prognosis of T3-4a CRC patients. Conclusion: miR-21 is overexpressed in the stroma of CRC specimens and has strong associations with the expression of E-cadherin and MTA1. A high level of miR-21 is an independent risk factor predictive of early tumor recurrence in T3-4a colon cancer and stage II CRC. Thus, CRC patients with miR-21 overexpression are at higher risk for tumor recurrence and may benefit from more intensive treatment.
Background: MicroRNA-21 (miR-21) is an oncogenic microRNA that regulates the expression of multiple cancer-related target genes. miR-21 has been associated with progression of some types of cancer. Metastasis-associated protein1 expression and loss of E-cadherin expression are correlated with cancer progression and metastasis in many cancer types. In advanced colorectal cancer, the clinical significance of miR-21 expression remains unclear. We aimed to investigate the impact of miR-21 expression in advanced colorectal cancer and its correlation with target proteins associated with colorectal cancer progression. Methods: From 2004 to 2007, 277 consecutive patients with T3-4a colorectal cancer treated with R0 surgical resection were included. Patients with neoadjuvant therapy and distant metastasis at presentation were excluded. The expression of miR-21 was investigated by in situ hybridization. Immunohistochemistry was used to detect E-cadherin and metastasis-associated protein1 expression. Results: High stromal expression of miR-21 was found in 76 of 277 (27.4%) colorectal cancer samples and was correlated with low E-cadherin expression (P = 0.019) and high metastasis-associated protein1 expression (P = 0.004). T3-4a colorectal cancer patients with high miR-21 expression had significantly shorter recurrence-free survival than those with low miR-21 expression. When analyzing colon and rectal cancer separately, high expression of miR-21 was an independent prognostic factor of unfavorable recurrence-free survival in T3-4a colon cancer patients (P = 0.038, HR = 2.45; 95% CI = 1.05-5.72) but not in T3-4a rectal cancer patients. In a sub-classification analysis, high miR-21 expression was associated with shorter recurrence-free survival in the stage II cancer (P = 0.001) but not in the stage III subgroup (P = 0.267). Conclusions: Stromal miR-21 expression is related to the expression of E-cadherin and metastasis-associated protein1 in colorectal cancer. 
Stage II colorectal cancer patients with high levels of miR-21 are at higher risk for tumor recurrence and should be considered for more intensive treatment.
12,392
384
[ 219, 69, 270, 212, 99, 181, 275, 111, 1631, 460 ]
15
[ "cancer", "expression", "21", "mir", "mir 21", "patients", "high", "survival", "21 expression", "mir 21 expression" ]
[ "prognostic value mir", "tumor progression mir", "micrornas reported marked", "quantitative microrna expression", "cancer related micrornas" ]
null
[CONTENT] Colorectal neoplasms | Neoplasm recurrence | microRNA | Cadherins | MTA-1 protein [SUMMARY]
null
[CONTENT] Colorectal neoplasms | Neoplasm recurrence | microRNA | Cadherins | MTA-1 protein [SUMMARY]
[CONTENT] Colorectal neoplasms | Neoplasm recurrence | microRNA | Cadherins | MTA-1 protein [SUMMARY]
[CONTENT] Colorectal neoplasms | Neoplasm recurrence | microRNA | Cadherins | MTA-1 protein [SUMMARY]
[CONTENT] Colorectal neoplasms | Neoplasm recurrence | microRNA | Cadherins | MTA-1 protein [SUMMARY]
[CONTENT] Aged | Cadherins | Colonic Neoplasms | Disease-Free Survival | Female | Histone Deacetylases | Humans | Male | MicroRNAs | Middle Aged | Neoplasm Recurrence, Local | Neoplasm Staging | Rectal Neoplasms | Repressor Proteins | Risk Factors | Trans-Activators [SUMMARY]
null
[CONTENT] Aged | Cadherins | Colonic Neoplasms | Disease-Free Survival | Female | Histone Deacetylases | Humans | Male | MicroRNAs | Middle Aged | Neoplasm Recurrence, Local | Neoplasm Staging | Rectal Neoplasms | Repressor Proteins | Risk Factors | Trans-Activators [SUMMARY]
[CONTENT] Aged | Cadherins | Colonic Neoplasms | Disease-Free Survival | Female | Histone Deacetylases | Humans | Male | MicroRNAs | Middle Aged | Neoplasm Recurrence, Local | Neoplasm Staging | Rectal Neoplasms | Repressor Proteins | Risk Factors | Trans-Activators [SUMMARY]
[CONTENT] Aged | Cadherins | Colonic Neoplasms | Disease-Free Survival | Female | Histone Deacetylases | Humans | Male | MicroRNAs | Middle Aged | Neoplasm Recurrence, Local | Neoplasm Staging | Rectal Neoplasms | Repressor Proteins | Risk Factors | Trans-Activators [SUMMARY]
[CONTENT] Aged | Cadherins | Colonic Neoplasms | Disease-Free Survival | Female | Histone Deacetylases | Humans | Male | MicroRNAs | Middle Aged | Neoplasm Recurrence, Local | Neoplasm Staging | Rectal Neoplasms | Repressor Proteins | Risk Factors | Trans-Activators [SUMMARY]
[CONTENT] prognostic value mir | tumor progression mir | micrornas reported marked | quantitative microrna expression | cancer related micrornas [SUMMARY]
null
[CONTENT] prognostic value mir | tumor progression mir | micrornas reported marked | quantitative microrna expression | cancer related micrornas [SUMMARY]
[CONTENT] prognostic value mir | tumor progression mir | micrornas reported marked | quantitative microrna expression | cancer related micrornas [SUMMARY]
[CONTENT] prognostic value mir | tumor progression mir | micrornas reported marked | quantitative microrna expression | cancer related micrornas [SUMMARY]
[CONTENT] prognostic value mir | tumor progression mir | micrornas reported marked | quantitative microrna expression | cancer related micrornas [SUMMARY]
[CONTENT] cancer | expression | 21 | mir | mir 21 | patients | high | survival | 21 expression | mir 21 expression [SUMMARY]
null
[CONTENT] cancer | expression | 21 | mir | mir 21 | patients | high | survival | 21 expression | mir 21 expression [SUMMARY]
[CONTENT] cancer | expression | 21 | mir | mir 21 | patients | high | survival | 21 expression | mir 21 expression [SUMMARY]
[CONTENT] cancer | expression | 21 | mir | mir 21 | patients | high | survival | 21 expression | mir 21 expression [SUMMARY]
[CONTENT] cancer | expression | 21 | mir | mir 21 | patients | high | survival | 21 expression | mir 21 expression [SUMMARY]
[CONTENT] expression | micrornas | crc | cancer | 21 | progression | metastasis | mir 21 | mir | microrna [SUMMARY]
null
[CONTENT] cancer | vs | survival | 21 | expression | patients | mir | mir 21 | colorectal | colorectal cancer [SUMMARY]
[CONTENT] risk | tumor recurrence | crc | mir 21 | 21 | mir | recurrence | specimens strong associations | recurrence benefit | recurrence benefit intensive [SUMMARY]
[CONTENT] expression | cancer | 21 | mir | mir 21 | staining | patients | high | 21 expression | tumor [SUMMARY]
[CONTENT] expression | cancer | 21 | mir | mir 21 | staining | patients | high | 21 expression | tumor [SUMMARY]
[CONTENT] ||| ||| protein1 ||| ||| [SUMMARY]
null
[CONTENT] 76 | 277 | 27.4% | 0.019 | 0.004 ||| T3-4a ||| T3-4a | 0.038 | 2.45 | 95% | CI | 1.05 | T3-4a ||| II | 0.001 | 0.267 [SUMMARY]
[CONTENT] protein1 ||| II [SUMMARY]
[CONTENT] ||| ||| protein1 ||| ||| ||| 2004 to 2007 | 277 | T3-4a | R0 ||| ||| ||| ||| 76 | 277 | 27.4% | 0.019 | 0.004 ||| T3-4a ||| T3-4a | 0.038 | 2.45 | 95% | CI | 1.05 | T3-4a ||| II | 0.001 | 0.267 ||| protein1 ||| II [SUMMARY]
[CONTENT] ||| ||| protein1 ||| ||| ||| 2004 to 2007 | 277 | T3-4a | R0 ||| ||| ||| ||| 76 | 277 | 27.4% | 0.019 | 0.004 ||| T3-4a ||| T3-4a | 0.038 | 2.45 | 95% | CI | 1.05 | T3-4a ||| II | 0.001 | 0.267 ||| protein1 ||| II [SUMMARY]
Does initial buccal crest thickness affect final buccal crest thickness after flapless immediate implant placement and provisionalization: A prospective cone beam computed tomogram cohort study.
34981616
Flapless immediate implant placement and provisionalization (FIIPP) in the aesthetic zone is still controversial. Especially, an initial buccal crest thickness (BCT) of ≤1 mm is thought to be disruptive for the final buccal crest stability jeopardizing the aesthetic outcome.
BACKGROUND
The study was designed as a prospective study on FIIPP. Only patients were included in whom one maxillary incisor was considered as lost. In six centers, 100 consecutive patients received FIIPP. Implants were placed in a maximal palatal position of the socket, thereby creating a buccal space of at least 2 mm, which was subsequently filled with a bovine bone substitute. Files of preoperative (T0), peroperative (T1) and 1-year postoperative (T3) cone beam computed tomogram (CBCT) scans were imported into the Maxillim™ software to analyze the changes in BCT-BCH over time.
MATERIALS AND METHODS
Preoperatively, 85% of the cases showed a BCT ≤1 mm, in 25% of the patients also a small buccal defect (≤5 mm) was present. Mean BCT at the level of the implant-shoulder increased from 0.6 mm at baseline to 3.3 mm immediate postoperatively and compacted to 2.4 mm after 1 year. Mean BCH improved from 0.7 to 3.1 mm peroperatively, and resorbed to 1.7 mm after 1 year. The Pearson correlation of 0.38 between initial and final BCT was significant (p = 0.01) and therefore is valued as moderate. If only patients (75%) with an intact alveolus were included in the analysis, still a "moderate correlation" of 0.32 (p = 0.01) was calculated.
RESULTS
A "moderate correlation" was shown for the hypothesis that "thinner preoperative BCT's deliver thinner BCT's" 1 year after performing FIIPP.
CONCLUSIONS
[ "Animals", "Cattle", "Cohort Studies", "Cone-Beam Computed Tomography", "Dental Implants", "Dental Implants, Single-Tooth", "Esthetics, Dental", "Humans", "Immediate Dental Implant Loading", "Maxilla", "Prospective Studies" ]
9306851
INTRODUCTION
Replacement of maxillary incisors by immediate implant placement and provisionalization (IIPP) may be a reliable therapy with respect to implant survival and pink aesthetic outcome [1, 2, 3]. This minimally invasive procedure improves patient comfort, reduces both treatment time and postoperative complaints, as well as costs compared to early or delayed placement protocols. However, due to the natural process of post-extraction bone remodeling [4, 5, 6, 7], the pink aesthetic outcome may vary. In this perspective, thickness of the buccal bone crest is crucial [8]. Dimensional alterations of the facial soft tissues and buccal bone following tooth extraction negatively influence a successful aesthetic outcome in implant therapy [9]. Cone beam computed tomogram (CBCT) analysis showed vertical loss of the buccal crest up to 7.5 mm within 8 weeks after flapless extraction [10]. This bone loss compromises the aesthetic outcome, because it is followed by retraction of the covering soft tissues, resulting in midfacial soft tissue recession, which in the end may lead to exposure of the implant surface. Immediately replacing a root by a dental implant itself does not prevent resorption of the buccal crest [11, 12, 13]. In order to compensate for bone resorption, bone augmentation procedures in advance of implant installation have been suggested. Ridge preservation procedures, filling the extraction socket with bone or a bone substitute, have proven to be effective in limiting both horizontal and vertical ridge alterations after extraction [14, 15, 16, 17]. In these procedures, applying freeze-dried bovine bone xenografts is favorable, as they preserve more alveolar bone volume compared to the use of autogenous bone [18, 19, 20]. During immediate implant placement also ridge preservation can be performed, provided that sufficient space is created, which subsequently can be filled with a bone substitute [21, 22]. To create such a gap with optimal dimensions, new insights advocate installing the implant in a more palatal position, on condition of the presence of sufficient apical bone volume. Such a gap leads to less crestal bone resorption and minimal midfacial soft-tissue recession [23, 24, 25, 26, 27, 28]. This buccal gap even allows new bone formation, coronal to the receding buccal bone wall [29, 30, 31]. To achieve a minimal gap width of at least 2 mm, the use of implants with a smaller diameter is advocated. Thickness of the buccal crest itself also plays an important role, as it consists of bundle bone. As herein only minimal vascularization and regenerative ability are present, thinner buccal alveolar crests will resorb more [32]. To prevent resorption, the buccal gap may be filled with a freeze-dried bovine bone xenograft. In combination with immediate provisionalization, an instant support of the papillae and midfacial soft tissue is delivered, thereby reducing marginal bone changes [33, 34, 35, 36]. Provisional restoration at the time of flapless implant placement is advocated only in case of sufficient initial stability [37, 38, 39, 40], independent of the patient's biotype [41]. CBCT data giving insight into the process of buccal bone remodeling after IIPP are rare. This prospective multicenter CBCT study reveals the process of buccal bone remodeling in the aesthetic zone after FIIPP, while placing the implants in a palatal position and simultaneously performing a ridge preservation procedure. The aim of the study is to evaluate bone remodeling of the reconstructed buccal wall after FIIPP and to assess whether there is a relation between the "initial buccal crest thickness" on one hand, and the "final thickness" of the reconstructed buccal crest on the other hand. It is hypothesized that the thinner the initial crest thickness, the thinner the reconstructed buccal crest will be after performing FIIPP.
null
null
RESULTS
Of the 100 included patients, 98 (57 females, 41 males) were available for evaluation. One patient was excluded because of trauma resulting in implant loss; another moved abroad. Age of the included patients varied between 17 and 80 years, with an average of 45.8 years. On average, IIPP took place 37 days (range 0–210 days) after intake. Reasons for extraction were trauma, root fracture, failed endodontic treatment, or lack of ferrule. NobelActive™ CC implants with a diameter of 3.0 mm (6×) or 3.5 mm (17×) were used to replace lateral incisors. Central incisors were replaced by NobelActive™ CC implants with 3.5 mm diameter (30×) or with 4.3 mm diameter (45×). Implant length varied between 11.5 and 18 mm. In all cases, primary implant stability was sufficient to allow immediate provisional restoration. No implant was lost during the first year of the study; all implants received a final restoration. CBCT analysis: In 17 cases, one of the CBCT scans (T0, T1, or T3) could not be interpreted due to movement artifacts, scattering, or beam hardening. In total, 81 complete CBCT series were analyzed. Reliability was 0.901 (p < 0.001), showing a satisfactory correlation between both observers. There was no evidence of a structural difference between both observers, as the mean difference was 0.069 mm, with 95% CI = [−0.037 to 0.175 mm] (p = 0.192). The random error was 0.222 mm. Initial BCT-T0 was measured (1) directly or (2) by subtracting OBC from IBC. The paired sample correlation was 0.984. Between the two methods, the measured difference in BCT was 0.01 mm (p = 0.241). Therefore, both methods are valid for measuring the BCT (Figure 4). Figure 4: Two methods to measure buccal bone thickness (BCT). The x-axis represents the BCT which was directly measured. The y-axis reflects the BCT measured as OBC-IBC: the difference between the "outer buccal crest" (OBC) and the "inner buccal crest" (IBC) in relation to the implant surface. Both mean BCT and BCH per time point are depicted in Table 1.
Direct post‐operatively (T1), mean BCT increased from 0.6 mm at baseline (SD = 0.5) to 3.3 mm (SD = 1.2). After 1 year (T3) mean BCT reduced to 2.4 mm (SD = 1.1). Mean and standard deviation of the bone crest thickness (BCT) and bone crest height (BCH) on three time points (T0, T1, T2) Mean BCH at T0 was 0.7 mm (SD = 0.5), which enlarged to 3.1 mm (SD = 1.2) direct postoperatively (T1). Over a period of 1 year (T3) BCH condensed to 1.7 mm (SD = 2.4). With respect to BCT and BCH, differences between T1 versus T0, T3 versus T0, and T3 versus T1 were statistically significant (all p = 0.003) (Table 2). Differences in bone crest thickness (BCT) and bone crest height (BCH) are significant between time points T1‐T0, T2‐T0, and T3‐T2 Preoperatively, 85% of the patients presented a BCT‐T0 of ≤1 mm (Table 3). At T1, thus immediately after performing IIPP, 98% of all patients showed a BCT‐T1 of at least 2 mm (Table 3). After 1 year, 8 patients (10%) showed a BCT‐T3 less than 1 mm (Table 3): 2 patients (2.5%) showed a BCT‐T3 of 0.6 and 0.8 mm each. In the other 6 (7.5%) patients, no bone crest was present (BCT‐T3 = 0). In these 6 patients also BCH‐T3 failed, meaning that after 1 year, bone height was lower than the level of the implant‐shoulder. Distribution of patients in relation to their BCT at times T0, T1, and T3 To assess how the initial bone crest thickness (BCT‐T0) is related to the final bone crest thickness (BCT‐T3), the Pearson's correlation was calculated for all 81 patients (Figure 5). The outcome of 0.38 (p = 0.01) suggests that a moderate correlation is present. For solely the 61 patients (75%: Table 3) with an intact alveolus at T0 a Pearson's correlation of 0.32 (p: 0.011) was calculated, which still stands for a moderate correlation. 
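A Pearson's r of roughly 0.3–0.5 is conventionally read as a moderate correlation, which is how the values of 0.38 and 0.32 above are interpreted. A stdlib-only sketch of the computation; the per-patient measurements of the study are not published, so the BCT pairs below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical initial (T0) and 1-year (T3) buccal crest thicknesses in mm
bct_t0 = [0.2, 0.4, 0.5, 0.6, 0.7, 0.9, 1.1, 1.4]
bct_t3 = [2.6, 1.8, 3.0, 2.1, 2.7, 2.2, 3.2, 2.9]
r = pearson_r(bct_t0, bct_t3)
print(f"Pearson r = {r:.2f}")  # a positive, moderate correlation for this toy data
```

On the real data the observed r of 0.38 (p = 0.01) means thinner initial crests tend to end up thinner, but the initial thickness explains only a minority of the variance in the final thickness.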
In this graph, the Pearson's relation between the initial BCT-T0 (x-axis) and the final achieved BCT-T3 (y-axis) is depicted for all 81 patients.
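The inter-observer statistics reported in this section (a mean difference with a 95% CI, plus a "random error") can be derived from paired duplicate measurements. A sketch under the assumption that the random error follows Dahlberg's formula, sqrt(Σd²/2n), and that the CI uses a normal approximation; the paper does not state its exact formulas, and the readings below are hypothetical:

```python
import math

def observer_agreement(obs1, obs2):
    """Agreement between two observers measuring the same cases.

    Returns the mean difference, its approximate 95% CI
    (normal approximation), and Dahlberg's random error sqrt(sum d^2 / 2n).
    """
    d = [a - b for a, b in zip(obs1, obs2)]
    n = len(d)
    mean_d = sum(d) / n
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    dahlberg = math.sqrt(sum(x ** 2 for x in d) / (2 * n))
    return mean_d, (mean_d - half, mean_d + half), dahlberg

# Hypothetical duplicate BCT readings (mm) by the two observers
o1 = [0.5, 0.8, 1.2, 2.4, 3.1, 0.9, 1.6, 2.0]
o2 = [0.6, 0.7, 1.4, 2.3, 3.0, 1.1, 1.5, 2.2]
mean_d, ci, err = observer_agreement(o1, o2)
print(f"mean difference = {mean_d:.3f} mm, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}], random error = {err:.3f} mm")
```

A CI for the mean difference that spans zero, as in the study (−0.037 to 0.175 mm), is what supports the conclusion of no structural difference between observers.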
null
null
[ "INTRODUCTION", "Study population", "Multicenter", "Preoperative measures", "Operative procedure", "Radiographic procedure measurements", "Statistical methods", "\nCBCT‐analysis", "AUTHOR CONTRIBUTIONS" ]
[ "Replacement of maxillary incisors by immediate implant placement and provisionalization (IIPP) may be a reliable therapy with respect to implant survival and pink aesthetic outcome.\n1\n, \n2\n, \n3\n This minimally invasive procedure improves patient comfort and reduces treatment time, postoperative complaints, and costs compared to early or delayed placement protocols. However, due to the natural process of post‐extraction bone remodeling,\n4\n, \n5\n, \n6\n, \n7\n the pink aesthetic outcome may vary. In this perspective, thickness of the buccal bone crest is crucial.\n8\n\n\nDimensional alterations of the facial soft tissues and buccal bone following tooth extraction negatively influence a successful aesthetic outcome in implant therapy.\n9\n Cone beam computed tomography (CBCT) analysis showed vertical loss of the buccal crest up to 7.5 mm within 8 weeks after flapless extraction.\n10\n This bone loss compromises the aesthetic outcome, because it is followed by retraction of the covering soft tissues, resulting in midfacial soft tissue recession, which in the end may lead to exposure of the implant surface. Immediately replacing a root by a dental implant itself does not prevent resorption of the buccal crest.\n11\n, \n12\n, \n13\n In order to compensate for bone resorption, bone augmentation procedures in advance of implant installation have been suggested. 
Ridge preservation procedures, filling the extraction socket with bone or a bone substitute, have proven to be effective in limiting both horizontal and vertical ridge alterations after extraction.\n14\n, \n15\n, \n16\n, \n17\n In these procedures, applying freeze‐dried bovine bone xenografts is favorable, as they preserve more alveolar bone volume compared to the use of autogenous bone.\n18\n, \n19\n, \n20\n\n\nRidge preservation can also be performed during immediate implant placement, provided that sufficient space is created, which subsequently can be filled with a bone substitute.\n21\n, \n22\n To create such a gap with optimal dimensions, new insights advocate installing the implant in a more palatal position, provided sufficient apical bone volume is present. Such a gap leads to less crestal bone resorption and minimal midfacial soft‐tissue recession.\n23\n, \n24\n, \n25\n, \n26\n, \n27\n, \n28\n This buccal gap even allows new bone formation, coronal to the receding buccal bone wall.\n29\n, \n30\n, \n31\n To achieve a minimal gap width of at least 2 mm, the use of implants with a smaller diameter is advocated. Thickness of the buccal crest itself also plays an important role, as it consists of bundle bone. As bundle bone possesses only minimal vascularization and regenerative ability, thinner buccal alveolar crests will resorb more.\n32\n To prevent resorption, the buccal gap may be filled with a freeze‐dried bovine bone xenograft. In combination with immediate provisionalization, an instant support of the papillae and midfacial soft tissue is delivered, thereby reducing marginal bone changes.\n33\n, \n34\n, \n35\n, \n36\n Only in case of sufficient initial stability is provisional restoration advocated at the time of flapless implant placement,\n37\n, \n38\n, \n39\n, \n40\n independent of the patient's biotype.\n41\n\n\nCBCT data giving insight into the process of buccal bone remodeling after IIPP are rare. 
This prospective multicenter CBCT study reveals the process of buccal bone remodeling in the aesthetic zone after flapless IIPP (FIIPP), while placing the implants in a palatal position and simultaneously performing a ridge preservation procedure.\nThe aim of the study is to evaluate bone remodeling of the reconstructed buccal wall after FIIPP and to determine whether there is a relation between the “initial buccal crest thickness” on one hand, and the “final thickness” of the reconstructed buccal crest on the other hand. It is hypothesized that the thinner the initial crest thickness, the thinner the reconstructed buccal crest will be after performing FIIPP.", "In total, 100 consecutive patients were included in this prospective clinical study between 2014 and 2017. In all patients, one upper tooth was in danger of being lost. This study was approved by the Ethics Committee of the Radboud University Medical Center Nijmegen (2014/157) and registered in the Dutch Trial Register (NTR) on 20 October 2015 (NTR5583/NL4170). Written informed consent to participate in this study, as well as for use and publication of the data, was obtained from all participants. This manuscript was written in conformance with the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) guidelines. These guidelines were created to aid the author in ensuring high‐quality presentation of the conducted observational study.\nPrerequisites were that the failing single maxillary incisor was surrounded by two healthy teeth, that the extraction socket was intact and that sufficient occlusal support was present in the absence of periodontal disease and bruxism. 
Furthermore, to allow for primary implant stability, sufficient apical bone volume had to be present in the apical region.\nBesides intact sockets, sockets showing solely a periapical bone defect or a bone crest defect ≤5 mm, defined as EDS‐2 or EDS‐3,\n42\n were also included.\nPatients suffering from the following habits or diseases were excluded: smoking more than 10 units a day, drug or alcohol abuse, uncontrolled diabetes, pregnancy, or when disturbed bone healing could be expected, such as in case of local or systemic disease, severe osteoporosis, Paget's disease, renal osteodystrophy, radiation in the head–neck region, immune‐suppression or corticosteroid treatment in the recent past. Finally, the aesthetic expectations had to be achievable.", "In total, six centers for oral implant therapy participated; one university, one hospital, and four referral dental clinics. In the first two centers, an oral maxillofacial surgeon installed the implants, while the restorative procedure was performed by a separate restorative dentist. In the remaining four centers, the complete IIPP procedure was performed by a dentist trained in oral implantology.", "Patients were instructed to take 2 g amoxicillin 1 h before surgery, and 500 mg/3× day for 5 days starting in the morning after surgery. If patients were allergic to amoxicillin, 600 mg clindamycin 1 h before surgery, and 300 mg/4× day for 5 days starting in the morning after surgery, was advised. In addition, patients had to rinse with 0.12% chlorhexidine solution twice a day for 14 days, starting the day before surgery. Patients were also instructed to take 1 g paracetamol or 600 mg ibuprofen 1 h before surgery.", "After atraumatic tooth removal, an osteotomy was conducted in the palatal wall of the socket in a more palato‐apical direction compared to the original apex (Figure 1A). Subsequently, the last used drill remained in the preparation to prevent bone substitute from plugging it (Figure 1B). 
Hereafter, the socket was filled with a mixture of blood and bovine bone (Bio Oss™ S 0.25–1 mm, Geistlich Biomaterials, Wolhusen, Switzerland) after which the drill was removed carefully by turning it anti‐clockwise, creating a clean corridor to install the implant (NobelActive Conical Connection™ NobelBiocare, Washington, DC) (Figure 1C). The seat of the implant was placed 3 mm apically from the buccal gingival margin and at least 2 mm palatal of the buccal bone plate (Figure 1D) to create the recommended space buccally and to allow a suitable emergence angle of <30°.\n43\n To evaluate the implant position, a low‐dose small‐field CBCT scan was made; in case of inaccuracies, corrections could still be conducted.\n(A) The osteotomy was conducted in the palatal wall, after which (B) the last drill used was placed into the socket. After filling the socket with (C) a mixture of blood and Bio‐Oss™, the drill was removed and (D) the implant installed\nHereafter, a titanium temporary customized platform‐switch Procera™ abutment (Nobel Biocare, Washington, DC) was placed (Figure 2A), allowing fabrication of a composite screw‐retained provisional restoration. Care was taken to prevent contact with the antagonistic dentition in occlusion or articulation. Three to nine months after implant placement, the final impression was taken to fabricate either an individualized, screw‐retained, zirconium‐oxide porcelain veneered crown, or an individualized zirconium‐oxide abutment (Procera™, NobelBiocare, Washington, DC) with a resin cemented porcelain facing (Figure 2B).\n44\n\n\n(A) Titanium temporary abutment was placed onto the implant, allowing the fabrication of a provisional crown. (B) The aesthetic result after 1 year", "To minimize the effective dose, only small field‐of‐view scans (6 × 6 cm) were applied. 
For analysis, the preoperative, peroperative, and 12‐month postoperative CBCT data were imported as Digital Imaging and Communications in Medicine (DICOM) files into the Maxilim™ software (version 2.3.0.3, Medicim NV, Mechelen, Belgium).\nSuperimposition of the different CBCT scans using the voxel‐based alignment procedure in Maxilim™ was performed prior to analysis of the buccal crest. Using the palate, anterior nasal spine, and adjacent teeth as reference areas for voxel‐based alignment, optimal superimposition of the dimensions of the (reconstructed) buccal crest became feasible. Thickness of the buccal crest was measured at the level of the implant‐shoulder, ensuring that thickness of the buccal crest was measured at the same position and angulation at all time points. By subtracting the preoperative (T0), peroperative (T1), and 1 year postoperative (T3) dimensions, both changes in buccal crest thickness (BCT) and buccal crest height (BCH) could be calculated (Figure 3).\nBCT: bone crest thickness. BCH: bone crest height. Preoperatively (T0), directly postoperatively (T1), and after 1 year (T3) measurements (red dots) were conducted. The green dotted reference line reflects the shoulder of the implant\nBuccal crest thickness (BCT‐T0) before treatment was measured using two methods: (1) directly or (2) by subtracting the distance from the inner buccal crest (IBC) to the implant from the outer buccal crest (OBC) to the implant.\nBetween the midfacial position and positions 1 mm to the mesial or distal side, no difference in thickness or height measurements is present;\n36\n therefore, solely midfacial measurements were conducted.", "For all measurements, the range, median, mean, and standard deviation were calculated. Differences in BCT and BCH were tested with a paired sample t‐test. Statistics were calculated for all clinical parameters using SPSS (SPSS Inc., Chicago, IL). 
Statistical significance was defined as p ≤ 0.05.\nThe inter‐observer performance was analyzed using a paired sample t‐test. For this purpose, in total 36 measurements were repeated. The reliability was calculated as Pearson's correlation, and the random error was calculated as the standard deviation of the difference between observers, divided by √2.\nTo assess whether the two methods (BCT vs OBC minus IBC) led to a different outcome, a paired sample t‐test was also conducted.\nTo determine whether there was a correlation between the “initial BCT” and “final BCT after 1 year,” the Pearson's correlation coefficient was calculated.", "In 17 cases, one of the CBCT‐scans (T0, T1, or T2) could not be interpreted due to movement artifacts, scattering, or beam hardening. In total, 81 complete CBCT series were analyzed.\nReliability was 0.901 (p < 0.001), showing a satisfactory correlation between both observers. There was no evidence of a structural difference between both observers, as the mean difference was 0.069 mm, with 95% CI = [−0.037–0.175 mm] (p = 0.192). The random error was 0.222 mm.\nInitial BCT‐T0 was measured (1) directly or (2) by subtracting IBC from OBC. The paired sample correlation was 0.984. Between the two methods, the measured difference in BCT was 0.01 mm (p = 0.241). Therefore, both methods are valid for measuring the BCT (Figure 4).\nTwo methods to measure buccal bone thickness (BCT). The x‐axis represents the BCT which was directly measured. The y‐axis reflects the BCT measured as OBC‐IBC: the difference between the “outer buccal crest” (OBC) and the “inner buccal crest” (IBC) in relation to the implant surface\nBoth mean BCT and BCH per time point are depicted in Table 1. Directly postoperatively (T1), mean BCT increased from 0.6 mm at baseline (SD = 0.5) to 3.3 mm (SD = 1.2). 
After 1 year (T3), mean BCT reduced to 2.4 mm (SD = 1.1).\nMean and standard deviation of the bone crest thickness (BCT) and bone crest height (BCH) on three time points (T0, T1, T2)\nMean BCH at T0 was 0.7 mm (SD = 0.5), which enlarged to 3.1 mm (SD = 1.2) directly postoperatively (T1). Over a period of 1 year (T3), BCH decreased to 1.7 mm (SD = 2.4).\nWith respect to BCT and BCH, differences between T1 versus T0, T3 versus T0, and T3 versus T1 were statistically significant (all p = 0.003) (Table 2).\nDifferences in bone crest thickness (BCT) and bone crest height (BCH) are significant between time points T1‐T0, T2‐T0, and T3‐T2\nPreoperatively, 85% of the patients presented a BCT‐T0 of ≤1 mm (Table 3). At T1, thus immediately after performing IIPP, 98% of all patients showed a BCT‐T1 of at least 2 mm (Table 3). After 1 year, 8 patients (10%) showed a BCT‐T3 of less than 1 mm (Table 3): 2 patients (2.5%) showed a BCT‐T3 of 0.6 and 0.8 mm, respectively. In the other 6 (7.5%) patients, no bone crest was present (BCT‐T3 = 0). In these 6 patients, BCH‐T3 also failed, meaning that after 1 year, bone height was lower than the level of the implant‐shoulder.\nDistribution of patients in relation to their BCT at times T0, T1, and T3\nTo assess how the initial bone crest thickness (BCT‐T0) is related to the final bone crest thickness (BCT‐T3), the Pearson's correlation was calculated for all 81 patients (Figure 5). The outcome of 0.38 (p = 0.01) suggests that a moderate correlation is present. For only the 61 patients (75%; Table 3) with an intact alveolus at T0, a Pearson's correlation of 0.32 (p = 0.011) was calculated, which still represents a moderate correlation.\nIn this graph, the Pearson's relation between the initial BCT‐T0 (x‐axis) and the final achieved BCT‐T3 (y‐axis) is depicted for all 81 patients", "Tristan Ariaan Staas conceptualized the project idea, conducted the literature search, collected and analyzed the data, and drafted the manuscript. 
Edith Groenendijk conceptualized the project idea, collected and analyzed data, and made corrections to the drafted manuscript. Gerry Max Raghoebar contributed to the data analysis. Ewald Bronkhorst conducted the statistical analysis. Gerry Max Raghoebar and Gert Jacobus Meijer critically reviewed and revised the manuscript." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIAL AND METHODS", "Study population", "Multicenter", "Preoperative measures", "Operative procedure", "Radiographic procedure measurements", "Statistical methods", "RESULTS", "CBCT‐analysis", "DISCUSSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS" ]
[ "Replacement of maxillary incisors by immediate implant placement and provisionalization (IIPP) may be a reliable therapy with respect to implant survival and pink aesthetic outcome.\n1\n, \n2\n, \n3\n This minimally invasive procedure improves patient comfort and reduces treatment time, postoperative complaints, and costs compared to early or delayed placement protocols. However, due to the natural process of post‐extraction bone remodeling,\n4\n, \n5\n, \n6\n, \n7\n the pink aesthetic outcome may vary. In this perspective, thickness of the buccal bone crest is crucial.\n8\n\n\nDimensional alterations of the facial soft tissues and buccal bone following tooth extraction negatively influence a successful aesthetic outcome in implant therapy.\n9\n Cone beam computed tomography (CBCT) analysis showed vertical loss of the buccal crest up to 7.5 mm within 8 weeks after flapless extraction.\n10\n This bone loss compromises the aesthetic outcome, because it is followed by retraction of the covering soft tissues, resulting in midfacial soft tissue recession, which in the end may lead to exposure of the implant surface. Immediately replacing a root by a dental implant itself does not prevent resorption of the buccal crest.\n11\n, \n12\n, \n13\n In order to compensate for bone resorption, bone augmentation procedures in advance of implant installation have been suggested. 
Ridge preservation procedures, filling the extraction socket with bone or a bone substitute, have proven to be effective in limiting both horizontal and vertical ridge alterations after extraction.\n14\n, \n15\n, \n16\n, \n17\n In these procedures, applying freeze‐dried bovine bone xenografts is favorable, as they preserve more alveolar bone volume compared to the use of autogenous bone.\n18\n, \n19\n, \n20\n\n\nRidge preservation can also be performed during immediate implant placement, provided that sufficient space is created, which subsequently can be filled with a bone substitute.\n21\n, \n22\n To create such a gap with optimal dimensions, new insights advocate installing the implant in a more palatal position, provided sufficient apical bone volume is present. Such a gap leads to less crestal bone resorption and minimal midfacial soft‐tissue recession.\n23\n, \n24\n, \n25\n, \n26\n, \n27\n, \n28\n This buccal gap even allows new bone formation, coronal to the receding buccal bone wall.\n29\n, \n30\n, \n31\n To achieve a minimal gap width of at least 2 mm, the use of implants with a smaller diameter is advocated. Thickness of the buccal crest itself also plays an important role, as it consists of bundle bone. As bundle bone possesses only minimal vascularization and regenerative ability, thinner buccal alveolar crests will resorb more.\n32\n To prevent resorption, the buccal gap may be filled with a freeze‐dried bovine bone xenograft. In combination with immediate provisionalization, an instant support of the papillae and midfacial soft tissue is delivered, thereby reducing marginal bone changes.\n33\n, \n34\n, \n35\n, \n36\n Only in case of sufficient initial stability is provisional restoration advocated at the time of flapless implant placement,\n37\n, \n38\n, \n39\n, \n40\n independent of the patient's biotype.\n41\n\n\nCBCT data giving insight into the process of buccal bone remodeling after IIPP are rare. 
This prospective multicenter CBCT study reveals the process of buccal bone remodeling in the aesthetic zone after flapless IIPP (FIIPP), while placing the implants in a palatal position and simultaneously performing a ridge preservation procedure.\nThe aim of the study is to evaluate bone remodeling of the reconstructed buccal wall after FIIPP and to determine whether there is a relation between the “initial buccal crest thickness” on one hand, and the “final thickness” of the reconstructed buccal crest on the other hand. It is hypothesized that the thinner the initial crest thickness, the thinner the reconstructed buccal crest will be after performing FIIPP.", "Study population In total, 100 consecutive patients were included in this prospective clinical study between 2014 and 2017. In all patients, one upper tooth was in danger of being lost. This study was approved by the Ethics Committee of the Radboud University Medical Center Nijmegen (2014/157) and registered in the Dutch Trial Register (NTR) on 20 October 2015 (NTR5583/NL4170). Written informed consent to participate in this study, as well as for use and publication of the data, was obtained from all participants. This manuscript was written in conformance with the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) guidelines. These guidelines were created to aid the author in ensuring high‐quality presentation of the conducted observational study.\nPrerequisites were that the failing single maxillary incisor was surrounded by two healthy teeth, that the extraction socket was intact and that sufficient occlusal support was present in the absence of periodontal disease and bruxism. 
Furthermore, to allow for primary implant stability, sufficient apical bone volume had to be present in the apical region.\nBesides intact sockets, sockets showing solely a periapical bone defect or a bone crest defect ≤5 mm, defined as EDS‐2 or EDS‐3,\n42\n were also included.\nPatients suffering from the following habits or diseases were excluded: smoking more than 10 units a day, drug or alcohol abuse, uncontrolled diabetes, pregnancy, or when disturbed bone healing could be expected, such as in case of local or systemic disease, severe osteoporosis, Paget's disease, renal osteodystrophy, radiation in the head–neck region, immune‐suppression or corticosteroid treatment in the recent past. Finally, the aesthetic expectations had to be achievable.\nMulticenter In total, six centers for oral implant therapy participated; one university, one hospital, and four referral dental clinics. In the first two centers, an oral maxillofacial surgeon installed the implants, while the restorative procedure was performed by a separate restorative dentist. In the remaining four centers, the complete IIPP procedure was performed by a dentist trained in oral implantology.\nPreoperative measures Patients were instructed to take 2 g amoxicillin 1 h before surgery, and 500 mg/3× day for 5 days starting in the morning after surgery. If patients were allergic to amoxicillin, 600 mg clindamycin 1 h before surgery, and 300 mg/4× day for 5 days starting in the morning after surgery, was advised. 
In addition, patients had to rinse with 0.12% chlorhexidine solution twice a day for 14 days, starting the day before surgery. Patients were also instructed to take 1 g paracetamol or 600 mg ibuprofen 1 h before surgery.\nOperative procedure After atraumatic tooth removal, an osteotomy was conducted in the palatal wall of the socket in a more palato‐apical direction compared to the original apex (Figure 1A). Subsequently, the last used drill remained in the preparation to prevent bone substitute from plugging it (Figure 1B). Hereafter, the socket was filled with a mixture of blood and bovine bone (Bio Oss™ S 0.25–1 mm, Geistlich Biomaterials, Wolhusen, Switzerland) after which the drill was removed carefully by turning it anti‐clockwise, creating a clean corridor to install the implant (NobelActive Conical Connection™ NobelBiocare, Washington, DC) (Figure 1C). The seat of the implant was placed 3 mm apically from the buccal gingival margin and at least 2 mm palatal of the buccal bone plate (Figure 1D) to create the recommended space buccally and to allow a suitable emergence angle of <30°.\n43\n To evaluate the implant position, a low‐dose small‐field CBCT scan was made; in case of inaccuracies, corrections could still be conducted.\n(A) The osteotomy was conducted in the palatal wall, after which (B) the last drill used was placed into the socket. 
After filling the socket with (C) a mixture of blood and Bio‐Oss™, the drill was removed and (D) the implant installed\nHereafter, a titanium temporary customized platform‐switch Procera™ abutment (Nobel Biocare, Washington, DC) was placed (Figure 2A), allowing fabrication of a composite screw‐retained provisional restoration. Care was taken to prevent contact with the antagonistic dentition in occlusion or articulation. Three to nine months after implant placement, the final impression was taken to fabricate either an individualized, screw‐retained, zirconium‐oxide porcelain veneered crown, or an individualized zirconium‐oxide abutment (Procera™, NobelBiocare, Washington, DC) with a resin cemented porcelain facing (Figure 2B).\n44\n\n\n(A) Titanium temporary abutment was placed onto the implant, allowing the fabrication of a provisional crown. (B) The aesthetic result after 1 year\nRadiographic procedure measurements To minimize the effective dose, only small field‐of‐view scans (6 × 6 cm) were applied. For analysis, the preoperative, peroperative, and 12‐month postoperative CBCT data were imported as Digital Imaging and Communications in Medicine (DICOM) files into the Maxilim™ software (version 2.3.0.3, Medicim NV, Mechelen, Belgium).\nSuperimposition of the different CBCT scans using the voxel‐based alignment procedure in Maxilim™ was performed prior to analysis of the buccal crest. 
Using the palate, anterior nasal spine, and adjacent teeth as reference areas for voxel‐based alignment, optimal superimposition of the dimensions of the (reconstructed) buccal crest became feasible. Thickness of the buccal crest was measured at the level of the implant‐shoulder, ensuring that thickness of the buccal crest was measured at the same position and angulation at all time points. By subtracting the preoperative (T0), peroperative (T1), and 1 year postoperative (T3) dimensions, both changes in buccal crest thickness (BCT) and buccal crest height (BCH) could be calculated (Figure 3).\nBCT: bone crest thickness. BCH: bone crest height. Preoperatively (T0), directly postoperatively (T1), and after 1 year (T3) measurements (red dots) were conducted. The green dotted reference line reflects the shoulder of the implant\nBuccal crest thickness (BCT‐T0) before treatment was measured using two methods: (1) directly or (2) by subtracting the distance from the inner buccal crest (IBC) to the implant from the outer buccal crest (OBC) to the implant.\nBetween the midfacial position and positions 1 mm to the mesial or distal side, no difference in thickness or height measurements is present;\n36\n therefore, solely midfacial measurements were conducted.\nStatistical methods For all measurements, the range, median, mean, and standard deviation were calculated. Differences in BCT and BCH were tested with a paired sample t‐test. Statistics were calculated for all clinical parameters using SPSS (SPSS Inc., Chicago, IL). Statistical significance was defined as p ≤ 0.05.\nThe inter‐observer performance was analyzed using a paired sample t‐test. For this purpose, in total 36 measurements were repeated. 
The reliability was calculated as Pearson's correlation, and the random error was calculated as the standard deviation of the difference between observers, divided by √2.\nTo assess whether the two methods (direct BCT vs OBC minus IBC) led to a different outcome, a paired‐sample t‐test was also conducted.\nTo assess whether there was a correlation between the “initial BCT” and the “final BCT after 1 year,” Pearson's correlation coefficient was calculated.", "In total, 100 consecutive patients were included in this prospective clinical study between 2014 and 2017. In all patients, one upper tooth was in danger of being lost. This study was approved by the Ethics Committee of the Radboud University Medical Center Nijmegen (2014/157) and registered in the Dutch Trial Register (NTR) on 20 October 2015 (NTR5583/NL4170). Written informed consent to participate in this study, as well as for use and publication of the data, was obtained from all participants. This manuscript was written in conformance with the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) guidelines. 
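The inter‐observer analysis described in the statistical methods above — a paired‐sample t‐test on the observer differences, Pearson's correlation as reliability, and a random error equal to the standard deviation of the differences divided by √2 — can be sketched in plain Python. The observer measurements below are hypothetical illustration values, not study data, and p‐values are omitted because they would require a t‐distribution (e.g., from scipy), which this dependency‐free sketch avoids:

```python
import math
from statistics import mean, stdev

def pearson(x, y):
    """Pearson's correlation coefficient, computed from first principles."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def interobserver_stats(obs1, obs2):
    """Inter-observer statistics as described in the study's methods:
    mean difference, paired-sample t statistic, Pearson reliability,
    and random error = SD(differences) / sqrt(2)."""
    diffs = [a - b for a, b in zip(obs1, obs2)]
    n = len(diffs)
    mean_diff = mean(diffs)
    sd_diff = stdev(diffs)                         # SD of inter-observer differences
    t_stat = mean_diff / (sd_diff / math.sqrt(n))  # paired-sample t statistic
    reliability = pearson(obs1, obs2)              # agreement between observers
    random_error = sd_diff / math.sqrt(2)          # Dahlberg-style random error
    return mean_diff, t_stat, reliability, random_error

# Hypothetical repeated buccal-crest measurements (mm) by two observers
observer1 = [0.6, 1.2, 2.4, 3.3, 0.8, 1.7]
observer2 = [0.7, 1.1, 2.5, 3.2, 0.9, 1.8]
md, t, r, err = interobserver_stats(observer1, observer2)
```

A small mean difference combined with a non‐significant t statistic and a high Pearson's r, as in the study, would indicate no structural bias between observers and satisfactory reliability.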
These guidelines were created to aid authors in ensuring high‐quality presentation of the conducted observational study.\nPrerequisites were that the failing single maxillary incisor was surrounded by two healthy teeth, that the extraction socket was intact, and that sufficient occlusal support was present in the absence of periodontal disease and bruxism. Furthermore, to allow for primary implant stability, sufficient bone volume had to be present in the apical region.\nBesides intact sockets, sockets showing solely a periapical bone defect or a bone crest defect ≤5 mm, defined as EDS‐2 or EDS‐3,\n42\n were also included.\nPatients with the following habits or diseases were excluded: smoking more than 10 units a day, drug or alcohol abuse, uncontrolled diabetes, pregnancy, or conditions in which disturbed bone healing could be expected, such as local or systemic disease, severe osteoporosis, Paget's disease, renal osteodystrophy, radiation in the head–neck region, or immune suppression or corticosteroid treatment in the recent past. Finally, the aesthetic expectations had to be achievable.", "In total, six centers for oral implant therapy participated: one university, one hospital, and four referral dental clinics. In the first two centers, an oral maxillofacial surgeon installed the implants, while the restorative procedure was performed by a separate restorative dentist. In the remaining four centers, the complete IIPP procedure was performed by a dentist trained in oral implantology.", "Patients were instructed to take 2 g amoxicillin 1 h before surgery, and 500 mg 3×/day for 5 days starting the morning after surgery. If patients were allergic to amoxicillin, 600 mg clindamycin 1 h before surgery, followed by 300 mg 4×/day for 5 days starting the morning after surgery, was advised. In addition, patients had to rinse with 0.12% chlorhexidine solution twice a day for 14 days, starting the day before surgery. 
Patients were also instructed to take 1 g paracetamol or 600 mg ibuprofen 1 h before surgery.", "After atraumatic tooth removal, an osteotomy was conducted in the palatal wall of the socket in a more palato‐apical direction compared to the original apex (Figure 1A). Subsequently, the last drill used remained in the preparation to prevent bone substitute from plugging it (Figure 1B). Hereafter, the socket was filled with a mixture of blood and bovine bone (Bio‐Oss™ S 0.25–1 mm, Geistlich Biomaterials, Wolhusen, Switzerland), after which the drill was removed carefully by turning it anti‐clockwise, creating a clean corridor to install the implant (NobelActive Conical Connection™, NobelBiocare, Washington, DC) (Figure 1C). The seat of the implant was placed 3 mm apically from the buccal gingival margin and at least 2 mm palatal of the buccal bone plate (Figure 1D) to create the recommended space buccally and to allow a suitable emergence angle of <30°.\n43\n To evaluate the implant position, a low‐dose small‐field CBCT scan was made; in case of inaccuracies, corrections could still be conducted.\n(A) The osteotomy was conducted in the palatal wall, after which (B) the last drill used was placed into the socket. After filling the socket with (C) a mixture of blood and Bio‐Oss™, the drill was removed and (D) the implant installed\nHereafter, a titanium temporary customized platform‐switch Procera™ abutment (Nobel Biocare, Washington, DC) was placed (Figure 2A), allowing fabrication of a composite screw‐retained provisional restoration. Care was taken to prevent contact with the antagonistic dentition in occlusion or articulation. 
After implant placement (3–9 months), the final impression was taken to fabricate either an individualized, screw‐retained, zirconium‐oxide porcelain‐veneered crown, or an individualized zirconium‐oxide abutment (Procera™, NobelBiocare, Washington, DC) with a resin‐cemented porcelain facing (Figure 2B).\n44\n\n\n(A) Titanium temporary abutment was placed onto the implant, allowing the fabrication of a provisional crown. (B) The aesthetic result after 1 year", "Of the 100 included patients, 98 (57 females, 41 males) were available for evaluation. One patient was excluded because of trauma resulting in implant loss; another moved abroad. Age of the included patients varied between 17 and 80 years, with an average of 45.8 years.\nOn average, IIPP took place 37 days (range 0–210 days) after intake. Reasons for extraction were trauma, root fracture, failed endodontic treatment, or lack of ferrule.\nNobelActive™ CC implants with a diameter of 3.0 mm (6×) or 3.5 mm (17×) were used to replace lateral incisors. 
Central incisors were replaced by NobelActive™ CC implants with a 3.5 mm diameter (30×) or a 4.3 mm diameter (45×). Implant length varied between 11.5 and 18 mm. In all cases, primary implant stability was sufficient to allow immediate provisional restoration. No implant was lost during the first year of the study; all implants received a final restoration.\n\nCBCT‐analysis\nIn 17 cases, one of the CBCT‐scans (T0, T1, or T3) could not be interpreted due to movement artifacts, scattering, or beam hardening. In total, 81 complete CBCT series were analyzed.\nReliability was 0.901 (p < 0.001), showing a satisfactory correlation between both observers. There was no evidence of a structural difference between both observers, as the mean difference was 0.069 mm, with 95% CI = [−0.037–0.175 mm] (p = 0.192). The random error was 0.222 mm.\nInitial BCT‐T0 was measured (1) directly or (2) by subtracting IBC from OBC. The paired sample correlation was 0.984. Between the methods used, the measured difference in BCT was 0.01 mm (p = 0.241). Therefore, both methods are valid for measuring the BCT (Figure 4).\nTwo methods to measure buccal bone thickness (BCT). The x‐axis represents the BCT as directly measured. The y‐axis reflects the BCT measured as OBC−IBC: the difference between the “outer buccal crest” (OBC) and the “inner buccal crest” (IBC) in relation to the implant surface\nBoth mean BCT and BCH per time point are depicted in Table 1. Directly postoperatively (T1), mean BCT increased from 0.6 mm at baseline (SD = 0.5) to 3.3 mm (SD = 1.2). After 1 year (T3), mean BCT reduced to 2.4 mm (SD = 1.1).\nMean and standard deviation of the bone crest thickness (BCT) and bone crest height (BCH) at three time points (T0, T1, T3)\nMean BCH at T0 was 0.7 mm (SD = 0.5), which increased to 3.1 mm (SD = 1.2) directly postoperatively (T1). 
Over a period of 1 year (T3), BCH decreased to 1.7 mm (SD = 2.4).\nWith respect to BCT and BCH, differences between T1 versus T0, T3 versus T0, and T3 versus T1 were statistically significant (all p = 0.003) (Table 2).\nDifferences in bone crest thickness (BCT) and bone crest height (BCH) are significant between time points T1‐T0, T3‐T0, and T3‐T1\nPreoperatively, 85% of the patients presented a BCT‐T0 of ≤1 mm (Table 3). At T1, thus immediately after performing IIPP, 98% of all patients showed a BCT‐T1 of at least 2 mm (Table 3). After 1 year, 8 patients (10%) showed a BCT‐T3 of less than 1 mm (Table 3): 2 patients (2.5%) showed a BCT‐T3 of 0.6 and 0.8 mm, respectively. In the other 6 patients (7.5%), no bone crest was present (BCT‐T3 = 0). In these 6 patients, BCH‐T3 also failed, meaning that after 1 year, bone height was lower than the level of the implant shoulder.\nDistribution of patients in relation to their BCT at times T0, T1, and T3\nTo assess how the initial bone crest thickness (BCT‐T0) is related to the final bone crest thickness (BCT‐T3), Pearson's correlation was calculated for all 81 patients (Figure 5). The outcome of 0.38 (p = 0.01) suggests that a moderate correlation is present. For only the 61 patients (75%; Table 3) with an intact alveolus at T0, a Pearson's correlation of 0.32 (p = 0.011) was calculated, which still represents a moderate correlation.\nIn this graph, the Pearson's relation between the initial BCT‐T0 (x‐axis) and the final achieved BCT‐T3 (y‐axis) is depicted for all 81 patients", "Although IIPP procedures are more accepted nowadays, midfacial recession as a result of bone loss of the buccal crest is still considered a major risk. 
Publications about this topic are difficult to compare with each other, because surgical procedures differ substantially with respect to the choice of implant type, implant diameter, implant position, and whether or not to raise a flap. Furthermore, discussion is ongoing as to whether additional connective tissue grafts or bone substitutes should be applied. It is also confusing that, with respect to the prosthetic treatment, most publications included various abutment materials and shapes. Moreover, debate continues as to whether, and at what time point, provisional restorations should be used.\n45\n\n\nFrom a patient's point of view, immediate tooth replacement is an attractive strategy, because in one session both aesthetics and comfort are delivered, together with a substantial gain in treatment time, as in the same treatment session the tooth is extracted and the implant installed. Also with respect to the aesthetic result after 1 year, IIPP offers advantages: midfacial recession is 0.75 mm less compared to delayed restoration after 1 year.\n46\n\n\nWith respect to the question of whether or not a buccal flap should be raised, Naji et al. presented the 6‐month results of a CBCT study in which different soft tissue techniques were compared.\n47\n In three groups of 16 patients each, a buccal gap of at least 2 mm was created: group 1 received a bone graft and membrane, after which the wound bed was closed with a primary flap. Group 2 received primary flap closure only. In group 3, no extra technique was applied; solely immediate implant installation was conducted. The least horizontal dimensional change after implant placement was recorded for group 3, in which also the least postoperative pain was monitored. Both findings can be explained by the fact that no flap was raised, thereby stressing the importance of the local blood supply and, as such, the vitality of the soft tissues. 
These results are in agreement with others, who confirmed that, if no flap is elevated, greater preservation of the buccal alveolar bone width from resorption is seen.\n48\n, \n49\n\n\nConsidering the need for filling the gap, Naji et al. suggested that only a thick bone crest, with a BCT of ≥1 mm, allows a stabilized coagulum to form without the need for regenerative materials.\n47\n Others stated that in case of an initial buccal bone plate width of <1 mm or with fenestration, gap grafting and regeneration are recommended to enhance bone filling and reduce bone loss.\n50\n, \n51\n, \n52\n\n\nCBCT is a useful tool that has been successfully used for reproducible and accurate bone crest level measurements,\n53\n, \n54\n as corroborated in the present study, showing a mean difference of 0.069 mm between both observers (p = 0.192).\nThe initial mean BCT in our study was 0.6 mm (SD 0.5), which is in accordance with previous studies using CBCT scans to measure bone width around maxillary anterior teeth.\n8\n, \n55\n, \n56\n With our IIPP protocol, immediately after surgery a gain in BCT of 2.7 mm was achieved: from 0.6 mm (mean) to 3.3 mm (mean). After 12 months, a decrease of 0.9 mm was observed, still leaving a mean BCT of 2.4 mm. In only a few articles were both initial and postoperative BCT measured in combination with IIPP. Although Morimoto et al. described 12 patients retrospectively and also filled the buccal gap with bone graft material, they did not create a buccal gap of at least 2 mm.\n55\n They reported an initial median BCT of 0.5 mm and a median thickness of 1.8 mm after 1 year, resulting in a total gain of 1.3 mm, which is lower than that reported in our study (1.8 mm) when generating a minimal gap of 2 mm. Degidi et al. created buccal gaps of between 1 and 4 mm and filled them with Bio‐Oss™ Collagen (Geistlich Pharma AB, Wolhusen, Switzerland). 
Although they did not present an initial BCT, the same decrease of 0.9 mm in mean BCT was reported after 1 year: from 3.0 to 2.1 mm.\n57\n\n\nUnfortunately, also with respect to the vertical dimension, Degidi et al. presented no initial bone heights, and thereby no initial increase in BCH. The immediately postoperatively achieved BCH of 3.0 mm reduced to 2.2 mm after 1 year. This reduction (0.8 mm) is less than the 1.4 mm in our CBCT study, in which BCH decreased from 3.1 to 1.7 mm.\n57\n However, this can be explained by the composition of their patient population: 50% of the implant sites were not in the anterior region, but in the premolar and canine regions.\n57\n Furthermore, only patients with a BCT of at least 0.5 mm were included, while in our study cases with a smaller BCT were also allowed. Morimoto et al. only presented a preoperative BCH (median 1.5 mm) and a BCH after 1 year (median 1.1 mm). It is unclear whether their clinical procedure resulted in a gain in BCH.\n55\n\n\nThe vertical increase in BCH, as measured in this study, may seem surprising, but is in agreement with the gain in height that was already reported in the ridge preservation studies of Iasella et al. in 2003\n14\n and Vance et al. in 2004,\n58\n who reported an average gain of 1.3 and 0.7 mm, respectively.\nApplying a bone substitute simultaneously with IIPP significantly enlarged the buccal crest both in width and in height. A key question is the exact composition of the final buccal crest. After all, a limitation of this study is that no histology of the buccal bone volume was conducted; it was not specified which percentage of the buccal crest consists of newly formed bone or bone substitute. It can be assured that immediately after application of the bone substitute, the buccal bone crest consists of a combination of the original buccal bone at the outside and bone substitute at the inside. 
The measured horizontal bone reduction after 12 months can be explained by resorption of the buccal bundle bone, initiated by the removal of the periodontal ligament, which reduces the local blood supply and its regenerative capacity. Horizontal reduction of BCT can also be explained by condensation of the bone substitute particles over time.\nIn 20 (25%) of the patients, a small buccal bone defect (≤5 mm) was present preoperatively. If this group was included in the statistical analysis, the Pearson correlation between initial and final BCT was 0.38. If this group was excluded from the analysis, a significant Pearson's correlation of 0.32 still remained. As such, there is indeed a moderate correlation, indicating that thin buccal plates will also result in thinner buccal plates after reconstruction. Of course, other factors will also influence the end result after reconstruction, such as age, general condition, and the width of the created gap.\nTo our knowledge, this is the first prospective study assessing the relation between the initial BCT and the BCT after 1 year. A moderate correlation was shown for the hypothesis that thinner preoperative BCTs deliver thinner BCTs 1 year after performing FIIPP. Nevertheless, independent of the initial BCT, 1 year after following the presented FIIPP protocol, both bone crest thickness and height were still substantial in 90% of the cases, meaning that more than the required minimal BCT of 1 mm was present, thereby creating a stable base for the soft tissues.\nThe lesson to be learned is that FIIPP may be successful in cases of a bone defect at the implant shoulder (BCT‐T0 = 0); however, it is not a panacea. In total, 20 such cases (25%) were included: 14 were successful (BCT‐T3 ≥1 mm) and 6 cases (7.5%) failed, meaning that the bone crest scored zero in both thickness and height 1 year postoperatively.\nLong‐term prospective studies need to be performed to prove whether both BCT and BCH will also be stable over time. 
Retrospective CBCT data after 7 years already showed promising results.\n59\n\n", "The authors declare no conflict of interest.", "Tristan Ariaan Staas conceptualized the project idea, conducted the literature search, collected and analyzed the data, and drafted the manuscript. Edith Groenendijk conceptualized the project idea, collected and analyzed data, and made corrections to the drafted manuscript. Gerry Max Raghoebar contributed to the data analysis. Ewald Bronkhorst conducted the statistical analysis. Gerry Max Raghoebar and Gert Jacobus Meijer critically reviewed and revised the manuscript." ]
[ null, "materials-and-methods", null, null, null, null, null, null, "results", null, "discussion", "COI-statement", null ]
[ "aesthetic outcome", "CBCT analysis", "flapless immediate implant placement", "immediate restoration" ]
INTRODUCTION: Replacement of maxillary incisors by immediate implant placement and provisionalization (IIPP) may be a reliable therapy with respect to implant survival and pink aesthetic outcome. 1 , 2 , 3 This minimally invasive procedure improves patient comfort and reduces treatment time, postoperative complaints, and costs compared to early or delayed placement protocols. However, due to the natural process of post‐extraction bone remodeling, 4 , 5 , 6 , 7 the pink aesthetic outcome may vary. In this perspective, thickness of the buccal bone crest is crucial. 8 Dimensional alterations of the facial soft tissues and buccal bone following tooth extraction negatively influence a successful aesthetic outcome in implant therapy. 9 Cone beam computed tomography (CBCT) analysis showed vertical loss of the buccal crest of up to 7.5 mm within 8 weeks after flapless extraction. 10 This bone loss compromises the aesthetic outcome, because it is followed by retraction of the covering soft tissues, resulting in midfacial soft tissue recession, which in the end may lead to exposure of the implant surface. Immediately replacing a root by a dental implant does not itself prevent resorption of the buccal crest. 11 , 12 , 13 In order to compensate for bone resorption, bone augmentation procedures in advance of implant installation have been suggested. Ridge preservation procedures, filling the extraction socket with bone or a bone substitute, have proven to be effective in limiting both horizontal and vertical ridge alterations after extraction. 14 , 15 , 16 , 17 In these procedures, applying freeze‐dried bovine bone xenografts is favorable, as they preserve more alveolar bone volume compared to the use of autogenous bone. 18 , 19 , 20 During immediate implant placement, ridge preservation can also be performed, provided that sufficient space is created, which subsequently can be filled with a bone substitute. 
21 , 22 To create such a gap with optimal dimensions, new insights advocate installing the implant in a more palatal position, on condition of the presence of sufficient apical bone volume. Such a gap leads to less crestal bone resorption and minimal midfacial soft‐tissue recession. 23 , 24 , 25 , 26 , 27 , 28 This buccal gap even allows new bone formation, coronal to the receding buccal bone wall. 29 , 30 , 31 To achieve a minimal gap width of at least 2 mm, the use of implants with a smaller diameter is advocated. Thickness of the buccal crest itself also plays an important role, as it consists of bundle bone. As only minimal vascularization and regenerative ability are present herein, thinner buccal alveolar crests will resorb more. 32 To prevent resorption, the buccal gap may be filled with a freeze‐dried bovine bone xenograft. In combination with immediate provisionalization, instant support of the papillae and midfacial soft tissue is delivered, thereby reducing marginal bone changes. 33 , 34 , 35 , 36 Only in case of sufficient initial stability is provisional restoration advocated at the time of flapless implant placement, 37 , 38 , 39 , 40 independent of the patient's biotype. 41 CBCT data giving insight into the process of buccal bone remodeling after IIPP are rare. This prospective multicenter CBCT study reveals the process of buccal bone remodeling in the aesthetic zone after FIIPP, with the implants placed in a palatal position and a ridge preservation procedure performed simultaneously. The aim of the study is to evaluate bone remodeling of the reconstructed buccal wall after FIIPP and to assess whether there is a relation between the “initial buccal crest thickness” on the one hand and the “final thickness” of the reconstructed buccal crest on the other. It is hypothesized that the thinner the initial crest thickness, the thinner the reconstructed buccal crest will be after performing FIIPP. 
MATERIAL AND METHODS: Study population In total, 100 consecutive patients were included in this prospective clinical study between 2014 and 2017. In all patients, one upper tooth was in danger of being lost. This study was approved by the Ethics Committee of the Radboud University Medical Center Nijmegen (2014/157) and registered in the Dutch Trial Register (NTR) on 20 October 2015 (NTR5583/NL4170). Written informed consent to participate in this study, as well as for use and publication of the data, was obtained from all participants. This manuscript was written in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines, which were created to aid authors in ensuring high‐quality presentation of observational studies. Prerequisites were that the one failing single maxillary incisor was surrounded by two healthy teeth, that the extraction socket was intact, and that sufficient occlusal support was present in the absence of periodontal disease and bruxism. Furthermore, to allow for primary implant stability, sufficient bone volume had to be present in the apical region. Besides intact sockets, sockets showing solely a periapical bone defect or a bone crest defect ≤5 mm, defined as EDS‐2 or EDS‐3, 42 were also included. Patients with the following habits or conditions were excluded: smoking more than 10 units a day, drug or alcohol abuse, uncontrolled diabetes, pregnancy, or any condition in which disturbed bone healing could be expected, such as local or systemic disease, severe osteoporosis, Paget's disease, renal osteodystrophy, radiation in the head–neck region, or immune suppression or corticosteroid treatment in the recent past. Finally, the aesthetic expectations had to be achievable. Multicenter In total, six centers for oral implant therapy participated: one university, one hospital, and four referral dental clinics. In the first two centers, an oral and maxillofacial surgeon installed the implants, while the restorative procedure was performed by a separate restorative dentist. In the remaining four centers, the complete IIPP procedure was performed by a dentist trained in oral implantology.
Preoperative measures Patients were instructed to take 2 g amoxicillin 1 h before surgery, followed by 500 mg three times a day for 5 days, starting the morning after surgery. Patients allergic to amoxicillin were advised to take 600 mg clindamycin 1 h before surgery, followed by 300 mg four times a day for 5 days, starting the morning after surgery. In addition, patients had to rinse with a 0.12% chlorhexidine solution twice a day for 14 days, starting the day before surgery. They were also instructed to take 1 g paracetamol or 600 mg ibuprofen 1 h before surgery. Operative procedure After atraumatic tooth removal, an osteotomy was conducted in the palatal wall of the socket, directed more palato‐apically than the original apex (Figure 1A). Subsequently, the last drill used remained in the preparation to prevent the bone substitute from plugging it (Figure 1B). Hereafter, the socket was filled with a mixture of blood and bovine bone (Bio‐Oss™ S 0.25–1 mm, Geistlich Biomaterials, Wolhusen, Switzerland), after which the drill was removed carefully by turning it anti‐clockwise, creating a clean corridor to install the implant (NobelActive Conical Connection™, Nobel Biocare, Washington, DC) (Figure 1C). The seat of the implant was placed 3 mm apically from the buccal gingival margin and at least 2 mm palatal to the buccal bone plate (Figure 1D), creating the recommended buccal space and allowing a suitable emergence angle of <30°. 43 To evaluate the implant position, a low‐dose small‐field CBCT scan was made; in case of inaccuracies, corrections could still be conducted. (A) The osteotomy was conducted in the palatal wall, after which (B) the last drill used was placed into the socket. After filling the socket with (C) a mixture of blood and Bio‐Oss™, the drill was removed and (D) the implant installed. Hereafter, a titanium temporary customized platform‐switch Procera™ abutment (Nobel Biocare, Washington, DC) was placed (Figure 2A), allowing fabrication of a composite screw‐retained provisional restoration. Care was taken to prevent contact with the antagonistic dentition in occlusion or articulation. After implant placement (3–9 months), the final impression was taken to fabricate either an individualized, screw‐retained, zirconium‐oxide porcelain‐veneered crown, or an individualized zirconium‐oxide abutment (Procera™, Nobel Biocare, Washington, DC) with a resin‐cemented porcelain facing (Figure 2B). 44 (A) Titanium temporary abutment placed onto the implant, allowing the fabrication of a provisional crown. (B) The aesthetic result after 1 year. Radiographic procedure measurements To minimize the effective dose, only small field‐of‐view scans (6 × 6 cm) were applied.
For analysis, the preoperative, peroperative, and 12‐month CBCT data were imported as Digital Imaging and Communications in Medicine (DICOM) files into the Maxilim™ software (version 2.3.0.3, Medicim NV, Mechelen, Belgium). Prior to analysis of the buccal crest, the different CBCT scans were superimposed using the voxel‐based alignment procedure in Maxilim™. Using the palate, the anterior nasal spine, and the adjacent teeth as reference areas for voxel‐based alignment, optimal superimposition of the dimensions of the (reconstructed) buccal crest became feasible. Thickness of the buccal crest was measured at the level of the implant shoulder, ensuring that it was measured at the same position and angulation at all time points. By subtracting the preoperative (T0), peroperative (T1), and 1‐year postoperative (T3) dimensions, changes in both buccal crest thickness (BCT) and buccal crest height (BCH) could be calculated (Figure 3). BCT: bone crest thickness. BCH: bone crest height. Measurements (red dots) were conducted preoperatively (T0), directly postoperatively (T1), and after 1 year (T3). The green dotted reference line reflects the shoulder of the implant. Buccal crest thickness before treatment (BCT‐T0) was measured using two methods: (1) directly, or (2) by subtracting the distance from the implant to the inner buccal crest (IBC) from the distance from the implant to the outer buccal crest (OBC). As there is no difference in thickness or height measurements between the midfacial position and positions 1 mm to the mesial or distal side, 36 solely midfacial measurements were conducted. Statistical methods For all measurements, the range, median, mean, and standard deviation were calculated. Differences in BCT and BCH were tested with a paired‐sample t‐test. Statistics were calculated for all clinical parameters using SPSS (SPSS Inc., Chicago, IL). Statistical significance was defined as p ≤ 0.05. The inter‐observer performance was analyzed using a paired‐sample t‐test; for this purpose, 36 measurements were repeated in total. Reliability was calculated as Pearson's correlation, and the random error was calculated as the standard deviation of the difference between observers divided by √2. To assess whether the two methods (BCT vs OBC minus IBC) led to different outcomes, a paired‐sample t‐test was also conducted. To investigate whether there was a correlation between the initial BCT and the final BCT after 1 year, Pearson's correlation coefficient was calculated.
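The inter‐observer statistics described above are simple enough to sketch in plain Python. The observer values below are hypothetical illustrations, not study data; the random error follows the formula in the text (the standard deviation of the paired differences divided by √2, i.e., Dahlberg's formula):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two measurement series."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

def paired_t(xs, ys):
    """t-statistic of a paired-sample t-test (df = n - 1)."""
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / (sd(d) / math.sqrt(len(d)))

def random_error(xs, ys):
    """Random error between repeated measurements:
    SD of the paired differences divided by sqrt(2) (Dahlberg's formula)."""
    d = [x - y for x, y in zip(xs, ys)]
    return sd(d) / math.sqrt(2)

# Hypothetical repeated BCT measurements (mm) by two observers
obs1 = [0.5, 0.8, 1.1, 0.6, 0.9, 0.7]
obs2 = [0.6, 0.7, 1.2, 0.6, 1.0, 0.6]

r = pearson_r(obs1, obs2)    # reliability between observers
t = paired_t(obs1, obs2)     # tests for a structural difference
e = random_error(obs1, obs2) # Dahlberg random error (mm)
```

With real data, the 36 repeated measurements of the study would replace the hypothetical lists, and a p‐value for the t‐statistic would come from a t‐distribution with n − 1 degrees of freedom (e.g., `scipy.stats.ttest_rel`).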
RESULTS: Of the 100 included patients, 98 (57 females, 41 males) were available for evaluation. One patient was excluded because of trauma resulting in implant loss; another moved abroad. The age of the included patients varied between 17 and 80 years, with an average of 45.8 years. On average, IIPP took place 37 days (range 0–210 days) after intake. Reasons for extraction were trauma, root fracture, failed endodontic treatment, or lack of ferrule. NobelActive™ CC implants with a diameter of 3.0 mm (6×) or 3.5 mm (17×) were used to replace lateral incisors. Central incisors were replaced by NobelActive™ CC implants with a diameter of 3.5 mm (30×) or 4.3 mm (45×). Implant length varied between 11.5 and 18 mm. In all cases, primary implant stability was sufficient to allow immediate provisional restoration. No implant was lost during the first year of the study; all implants received a final restoration. CBCT‐analysis In 17 cases, one of the CBCT scans (T0, T1, or T3) could not be interpreted due to movement artifacts, scattering, or beam hardening.
In total, 81 complete CBCT series were analyzed. Reliability was 0.901 (p < 0.001), showing a satisfactory correlation between both observers. There was no evidence of a structural difference between the observers, as the mean difference was 0.069 mm, with 95% CI = [−0.037 to 0.175 mm] (p = 0.192). The random error was 0.222 mm. Initial BCT‐T0 was measured (1) directly or (2) by subtracting IBC from OBC. The paired‐sample correlation was 0.984. The measured difference in BCT between the two methods was 0.01 mm (p = 0.241); therefore, both methods are valid for measuring the BCT (Figure 4). Two methods to measure buccal bone thickness (BCT). The x‐axis represents the directly measured BCT. The y‐axis reflects the BCT measured as OBC − IBC: the difference between the distances from the implant surface to the outer buccal crest (OBC) and to the inner buccal crest (IBC). Mean BCT and BCH per time point are depicted in Table 1. Directly postoperatively (T1), mean BCT increased from 0.6 mm at baseline (SD = 0.5) to 3.3 mm (SD = 1.2). After 1 year (T3), mean BCT had reduced to 2.4 mm (SD = 1.1). Mean and standard deviation of the bone crest thickness (BCT) and bone crest height (BCH) at three time points (T0, T1, T3). Mean BCH at T0 was 0.7 mm (SD = 0.5), which enlarged to 3.1 mm (SD = 1.2) directly postoperatively (T1). Over a period of 1 year (T3), BCH reduced to 1.7 mm (SD = 2.4). With respect to BCT and BCH, the differences between T1 versus T0, T3 versus T0, and T3 versus T1 were statistically significant (all p = 0.003) (Table 2). Differences in bone crest thickness (BCT) and bone crest height (BCH) are significant between time points T1−T0, T3−T0, and T3−T1. Preoperatively, 85% of the patients presented a BCT‐T0 of ≤1 mm (Table 3). At T1, thus immediately after performing IIPP, 98% of all patients showed a BCT‐T1 of at least 2 mm (Table 3). After 1 year, 8 patients (10%) showed a BCT‐T3 of less than 1 mm (Table 3): 2 patients (2.5%) showed a BCT‐T3 of 0.6 and 0.8 mm, respectively. In the other 6 patients (7.5%), no bone crest was present (BCT‐T3 = 0). In these 6 patients, BCH‐T3 also failed, meaning that after 1 year, the bone height was lower than the level of the implant shoulder. Distribution of patients in relation to their BCT at times T0, T1, and T3. To assess how the initial bone crest thickness (BCT‐T0) is related to the final bone crest thickness (BCT‐T3), Pearson's correlation was calculated for all 81 patients (Figure 5). The outcome of 0.38 (p = 0.01) suggests that a moderate correlation is present. For solely the 61 patients (75%; Table 3) with an intact alveolus at T0, a Pearson's correlation of 0.32 (p = 0.011) was calculated, which still represents a moderate correlation. In this graph, the Pearson's relation between the initial BCT‐T0 (x‐axis) and the final achieved BCT‐T3 (y‐axis) is depicted for all 81 patients.
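The horizontal changes reported above can be checked directly from the published means; this is simple arithmetic on the mean values in Table 1, not patient‐level data:

```python
# Reported mean buccal crest thickness (mm) at the three time points (Table 1)
bct_t0 = 0.6  # preoperative
bct_t1 = 3.3  # directly postoperative
bct_t3 = 2.4  # after 1 year

surgical_gain = bct_t1 - bct_t0    # thickness added at surgery: 2.7 mm
remodeling_loss = bct_t1 - bct_t3  # first-year reduction: 0.9 mm
net_gain = bct_t3 - bct_t0         # overall 1-year gain: 1.8 mm
```

The 1.8 mm net gain is the figure the Discussion later compares against the 1.3 mm gain reported by Morimoto et al.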
DISCUSSION: Although IIPP procedures are more accepted nowadays, midfacial recession as a result of bone loss of the buccal crest is still considered a major risk. Publications on this topic are difficult to compare with one another, because surgical procedures differ substantially with respect to the choice of implant type, implant diameter, implant position, and whether or not a flap is raised. Furthermore, discussion is ongoing as to whether additional connective tissue grafts or bone substitutes should be applied. A further source of confusion is that, with respect to the prosthetic treatment, most publications include various abutment materials and shapes.
Moreover, debate continues as to whether, and at what time point, provisional restorations should be used. 45 From a patient's point of view, immediate tooth replacement is an attractive strategy: in one session, both the tooth is extracted and the implant installed, delivering aesthetics and comfort together with a substantial gain in treatment time. IIPP also offers advantages with respect to the aesthetic result: after 1 year, midfacial recession is 0.75 mm less compared to delayed restoration. 46 With respect to the question of whether or not a buccal flap should be raised, Naji et al. presented the 6‐month results of a CBCT study in which different soft‐tissue techniques were compared. 47 In three groups of 16 patients each, a buccal gap of at least 2 mm was created: group 1 received a bone graft and membrane, after which the wound bed was closed with a primary flap. Group 2 received primary flap closure only. In group 3, no extra technique was applied; solely immediate implant installation was conducted. The least horizontal dimensional change after implant placement was recorded for group 3, in which the least postoperative pain was also monitored. Both can be explained by the fact that no flap was raised, thereby stressing the importance of the local blood supply and, as such, the vitality of the soft tissues. These results are in agreement with others, which confirmed that, if no flap is elevated, greater preservation of the buccal alveolar bone width from resorption is seen. 48 , 49 Considering the need to fill the gap, Naji et al. suggested that only a thick bone crest, with a BCT of ≥1 mm, allows a stabilized coagulum to form without the need for regenerative materials. 47 Others stated that in case of an initial buccal bone plate width of <1 mm, or with fenestration, gap grafting and regeneration are recommended to enhance bone filling and reduce bone resorption.
50 , 51 , 52 CBCT is a useful tool that has been successfully used for reproducible and accurate bone crest level measurements, 53 , 54 as corroborated in the present study, which showed a mean difference of 0.069 mm between both observers (p = 0.192). The initial mean BCT in our study was 0.6 mm (SD 0.5), which is in accordance with previous studies using CBCT scans to measure bone width around maxillary anterior teeth. 8 , 55 , 56 With our IIPP protocol, a gain in BCT of 2.7 mm was achieved immediately after surgery: from 0.6 mm (mean) to 3.3 mm (mean). After 12 months, a decrease of 0.9 mm was observed, still leaving a mean BCT of 2.4 mm. In only a few articles were both initial and postoperative BCT measured in combination with IIPP. Although Morimoto et al. described 12 patients retrospectively and also filled the buccal gap with bone graft material, they did not create a buccal gap of at least 2 mm. 55 They reported an initial median BCT of 0.5 mm and a median thickness of 1.8 mm after 1 year, resulting in a total gain of 1.3 mm, which is lower than the 1.8 mm reported in our study when generating a minimal gap of 2 mm. Degidi et al. created buccal gaps of between 1 and 4 mm and filled them with Bio‐Oss™ Collagen (Geistlich Pharma AB, Wolhusen, Switzerland). Although they did not present an initial BCT, the same decrease of 0.9 mm in the mean BCT was reported after 1 year: from 3.0 to 2.1 mm. 57 Unfortunately, also with respect to the vertical dimension, Degidi et al. presented no initial bone heights, and thereby no initial increase in BCH. The immediately postoperatively achieved BCH of 3.0 mm reduced to 2.2 mm after 1 year. This reduction (0.8 mm) is less than the 1.4 mm in our CBCT study, in which BCH decreased from 3.1 to 1.7 mm. 57 However, this can be explained by the composition of their patient population: 50% of the implant sites were not in the front, but in the premolar and canine region.
57 Furthermore, only patients with a BCT of at least 0.5 mm were included, while in our study cases with a smaller BCT were also allowed. Morimoto et al. only presented a preoperative BCH (median 1.5 mm) and a BCH after 1 year (median 1.1 mm); it is unclear if their clinical procedure resulted in a gain in BCH. 55 The vertical increase in BCH, as measured in this study, may seem surprising, but is in agreement with the gain in height already reported in the ridge preservation studies by Iasella et al. in 2003 14 and Vance et al. in 2004, 58 who reported an average gain of 1.3 and 0.7 mm, respectively. Applying a bone substitute simultaneously with IIPP significantly enlarged the buccal crest both in width and height. A key question is the exact composition of the final buccal crest. After all, a limitation of this study is that no histology of the buccal bone volume was conducted; it was not specified which percentage of the buccal crest consists of newly formed bone or bone substitute. It can be assumed that, immediately after application of the bone substitute, the buccal bone crest consists of a combination of the original buccal bone on the outside and bone substitute on the inside. The measured horizontal bone reduction after 12 months can be explained by resorption of the buccal bundle bone initiated by the removal of the periodontal ligament, reducing the local blood supply and its regenerative capacity. Horizontal reduction of BCT can also be explained by condensation of the bone substitute particles over time. In 20 (25%) of the patients, a small buccal bone defect (≤5 mm) was present preoperatively. If this group was included in the statistical analysis, the Pearson correlation between initial and final BCT was 0.38. If this group was excluded from the analysis, a significant Pearson's correlation of 0.32 still remained. As such, there is indeed a moderate correlation, indicating that thin buccal plates will also result in thinner buccal plates after reconstruction.
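As an aside, the "moderate" label used for these coefficients follows the usual rule of thumb for Pearson's r (roughly 0.3 to 0.5). A minimal sketch of such a correlation check is given below; the BCT values are hypothetical placeholders, since the study's patient-level measurements are not published.

```python
import math
import random

# Hypothetical BCT values (mm), for illustration only - NOT the study's patient data.
random.seed(1)
bct_t0 = [random.uniform(0.0, 1.5) for _ in range(81)]               # preoperative thickness
bct_t3 = [2.0 + 0.8 * x + random.gauss(0.0, 0.6) for x in bct_t0]    # thickness after 1 year

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(bct_t0, bct_t3)
# By the common convention, 0.3 <= |r| < 0.5 is read as a "moderate" correlation.
```

In the study itself the coefficients (0.38 for all 81 patients, 0.32 for the 61 with an intact alveolus) fall in exactly this band.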
Of course, other factors will also influence the end result after reconstruction, such as age, general condition, and the width of the created gap. To our knowledge, this is the first prospective study assessing the relation between the initial BCT and the BCT after 1 year. A ‘moderate correlation’ was shown for the hypothesis that ‘thinner preoperative BCTs deliver thinner BCTs’ one year after performing FIIPP. Nevertheless, independent of the initial BCT, 1 year after following the presented FIIPP protocol, both bone crest thickness and height were still substantial in 90% of the cases, meaning that more than the required minimal BCT of 1 mm was present, thereby creating a stable base for the soft tissues. The lesson to be learned is that FIIPP may be successful in cases of a bone defect at the implant shoulder (BCT‐T0 = 0); however, it is not a panacea. In total, 20 such cases (25%) were included: 14 were successful (BCT‐T3 ≥1 mm) and 6 cases (7.5%) failed, meaning that the bone crest scored zero in both thickness and height 1 year postoperatively. Long‐term prospective studies need to be performed to prove whether both BCT and BCH will also be stable over time. Retrospective CBCT data after 7 years already showed promising results. 59 CONFLICT OF INTEREST: The authors declare no conflict of interest. AUTHOR CONTRIBUTIONS: Tristan Ariaan Staas conceptualized the project idea, conducted the literature search, collected and analyzed the data, and drafted the manuscript. Edith Groenendijk conceptualized the project idea, collected and analyzed data, and made corrections to the drafted manuscript. Gerry Max Raghoebar contributed to the data analysis. Ewald Bronkhorst conducted the statistical analysis. Gerry Max Raghoebar and Gert Jacobus Meijer critically reviewed and revised the manuscript.
Background: Flapless immediate implant placement and provisionalization (FIIPP) in the aesthetic zone is still controversial. In particular, an initial buccal crest thickness (BCT) of ≤1 mm is thought to be disruptive for the final buccal crest stability, jeopardizing the aesthetic outcome. Methods: The study was designed as a prospective study on FIIPP. Only patients in whom one maxillary incisor was considered lost were included. In six centers, 100 consecutive patients received FIIPP. Implants were placed in a maximally palatal position in the socket, thereby creating a buccal space of at least 2 mm, which was subsequently filled with a bovine bone substitute. Files of preoperative (T0), peroperative (T1), and 1-year postoperative (T3) cone beam computed tomography (CBCT) scans were imported into the Maxillim™ software to analyze the changes in BCT and buccal crest height (BCH) over time. Results: Preoperatively, 85% of the cases showed a BCT of ≤1 mm, and in 25% of the patients a small buccal defect (≤5 mm) was also present. Mean BCT at the level of the implant shoulder increased from 0.6 mm at baseline to 3.3 mm immediately postoperatively and compacted to 2.4 mm after 1 year. Mean BCH improved from 0.7 to 3.1 mm peroperatively, and resorbed to 1.7 mm after 1 year. The Pearson correlation of 0.38 between initial and final BCT was significant (p = 0.01) and is therefore valued as moderate. If only the patients (75%) with an intact alveolus were included in the analysis, a "moderate correlation" of 0.32 (p = 0.01) was still calculated. Conclusions: A "moderate correlation" was shown for the hypothesis that "thinner preoperative BCTs deliver thinner BCTs" 1 year after performing FIIPP.
null
null
9,223
354
[ 770, 315, 69, 114, 392, 351, 168, 747, 74 ]
13
[ "bct", "bone", "mm", "crest", "buccal", "implant", "patients", "t0", "buccal crest", "t3" ]
[ "root dental implant", "extracted implant", "buccal crest limitation", "oral implantology preoperative", "resorption buccal crest" ]
null
null
null
[CONTENT] aesthetic outcome | CBCT analysis | flapless immediate implant placement | immediate restoration [SUMMARY]
null
[CONTENT] Animals | Cattle | Cohort Studies | Cone-Beam Computed Tomography | Dental Implants | Dental Implants, Single-Tooth | Esthetics, Dental | Humans | Immediate Dental Implant Loading | Maxilla | Prospective Studies [SUMMARY]
null
[CONTENT] root dental implant | extracted implant | buccal crest limitation | oral implantology preoperative | resorption buccal crest [SUMMARY]
null
[CONTENT] bct | bone | mm | crest | buccal | implant | patients | t0 | buccal crest | t3 [SUMMARY]
null
[CONTENT] bone | buccal | gap | soft | crest | implant | remodeling | bone remodeling | aesthetic outcome | resorption [SUMMARY]
null
[CONTENT] bct | t0 | mm | t3 | t1 | patients | table | crest | sd | bone [SUMMARY]
null
[CONTENT] bct | bone | mm | crest | buccal | patients | implant | t0 | buccal crest | t3 [SUMMARY]
null
[CONTENT] FIIPP ||| ≤1 | mm [SUMMARY]
null
[CONTENT] 85% | mm | 25% | ≤5 ||| Mean BCT | 0.6 mm | 3.3 mm | 2.4 mm | 1 year ||| BCH | 0.7 to | 3.1 mm | 1.7 mm | 1 year ||| Pearson | 0.38 | BCT | 0.01 ||| 75% | 0.32 | 0.01 [SUMMARY]
null
[CONTENT] FIIPP ||| ≤1 | mm ||| FIIPP ||| one ||| six | 100 | FIIPP ||| at least 2 ||| T0 | T1 | 1-year | T3 | Maxillim | BCT-BCH ||| ||| 85% | mm | 25% | ≤5 ||| Mean BCT | 0.6 mm | 3.3 mm | 2.4 mm | 1 year ||| BCH | 0.7 to | 3.1 mm | 1.7 mm | 1 year ||| Pearson | 0.38 | BCT | 0.01 ||| 75% | 0.32 | 0.01 ||| BCT | BCT | 1 year | FIIPP [SUMMARY]
null
Inhibition of p38 MAPK diminishes doxorubicin-induced drug resistance associated with P-glycoprotein in human leukemia K562 cells.
23018344
Several studies have shown that multidrug transporters, such as P-glycoprotein (PGP), are involved in cell resistance to chemotherapy and refractory epilepsy. The p38 mitogen-activated protein kinase (MAPK) signaling pathway may increase PGP activity. However, p38-mediated drug resistance associated with PGP is unclear. Here, we investigated p38-mediated doxorubicin-induced drug resistance in human leukemia K562 cells.
BACKGROUND
The expression of PGP was detected by RT-PCR, Western blot, and immunocytochemistry. Cell viability and half-inhibitory concentrations (IC50) were determined by CCK-8 assay. The intracellular concentration of drugs was measured by HPLC.
MATERIAL/METHODS
A doxorubicin-induced PGP-overexpressing cell line, K562/Dox, was generated. The p38 inhibitor SB202190 significantly decreased MDR1 mRNA expression, as well as PGP, in K562/Dox cells. The IC50 of phenytoin sodium and doxorubicin in K562/Dox cells was significantly higher than that in wild-type K562 cells, indicating the drug resistance of K562/Dox cells. When p38 activity was blocked in the presence of SB202190, cell numbers were significantly reduced after phenytoin sodium and doxorubicin treatment, and the IC50 of phenytoin sodium and doxorubicin was decreased in K562/Dox cells. HPLC showed that the intracellular levels of phenytoin sodium and doxorubicin were significantly lower in K562/Dox cells than in K562 cells. This decrease in the intracellular level of these drugs was significantly abolished in the presence of SB202190.
RESULTS
Our study demonstrated that p38 is, at least in part, involved in doxorubicin-induced drug resistance. The mechanism of MAPK-mediated PGP regulation and the action of SB202190 need further investigation.
CONCLUSIONS
[ "ATP Binding Cassette Transporter, Subfamily B, Member 1", "Cell Survival", "Dose-Response Relationship, Drug", "Doxorubicin", "Drug Resistance, Neoplasm", "Humans", "Imidazoles", "K562 Cells", "Leukemia", "Phenytoin", "Protein Kinase Inhibitors", "Pyridines", "p38 Mitogen-Activated Protein Kinases" ]
3560559
Background
The p38 mitogen-activated protein kinase (MAPK) signaling pathway mediates multiple cellular events, including proliferation, differentiation, migration, adhesion and apoptosis, in response to various extracellular stimuli, such as growth factors, hormones, ligands for G protein-coupled receptors, inflammatory cytokines, and stresses [1]. It has been shown that p38 MAPK signaling is associated with cancers in humans and mice [2] and regulates gene expression through the activation of transcription factors. Long-term exposure of tumor cells to certain types of chemotherapy drugs causes resistance. The best example is doxorubicin, an anti-cancer drug that often leads to drug resistance [3,4]. Recent studies on cell resistance to chemotherapy and refractory epilepsy drugs showed that multidrug resistance (MDR) transporters, especially P-glycoprotein (PGP) encoded by MDR1, play an important role in multidrug resistance [5–7]. PGP is a membrane-associated protein with 6 transmembrane domains and an adenosine triphosphate (ATP) binding site. This energy-dependent structure provides the characteristics of a drug efflux transporter that can pump drugs and other hydrophobic compounds out of cells, reducing the intracellular drug concentration, thus leading to drug resistance [8–10]. PGP expression can be induced by several factors, including cytotoxic drugs, irradiation, heat shock, and other stresses [11–13]. These factors also activate the p38 MAPK signaling pathway [14–16], suggesting that the p38 MAPK signaling pathway may be involved in the regulation of PGP expression. In this study we investigated the effect of a highly selective, potent, cell-permeable inhibitor of p38 MAPK (SB202190) on doxorubicin-induced drug resistance associated with PGP in a leukemia cell line. We demonstrated that p38 MAPK is involved in doxorubicin-induced PGP expression, cell resistance to the antiepileptic drug phenytoin sodium, and the chemotherapy drug, doxorubicin, in leukemia cells.
Statistical analysis
All statistical analyses were carried out using SigmaStat (Chicago, IL). Comparisons between groups were performed using either paired Student's t-tests or one-way ANOVA, where indicated. Data are presented as mean ±SD or SEM. Differences were considered significant at P<0.05.
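For illustration, the paired comparison used here can be sketched as follows; the numbers are hypothetical placeholders, not data from this study.

```python
import math

# Hypothetical paired measurements (e.g., a readout with vs. without inhibitor);
# these values are placeholders, not data from this study.
control   = [1.00, 0.95, 1.10, 1.05, 0.98, 1.02]
treatment = [0.62, 0.51, 0.73, 0.66, 0.55, 0.60]

diffs = [c - t for c, t in zip(control, treatment)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))  # sample SD of differences
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired Student's t statistic, df = n - 1
```

The resulting t statistic is then compared against the t distribution with n − 1 degrees of freedom, with P < 0.05 as the significance threshold used in the study.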
Results
Generation of K562/Dox cells by doxorubicin and responsiveness to p38 inhibitor
K562/Dox cells were generated by repeated treatments with doxorubicin and confirmed by the induction of PGP expression. Immunocytochemistry showed that wild-type K562 cells had an undetectable level of PGP expression (Figure 1A, left panel), whereas most K562/Dox cells were PGP-positive and appeared light-brown (Figure 1A, right panel), indicating the induction of PGP expression by doxorubicin. This drug-resistant cell line was further confirmed by the detection of the multidrug resistance 1 gene, MDR1, in the absence or presence of a p38 inhibitor, SB202190 (Figure 1B). After quantitative analysis of RT-PCR, we found that SB202190 treatment significantly decreased MDR1 expression in K562/Dox cells (Figure 1C; P<0.001; n=10). However, treatment with U0126, a highly selective inhibitor of both MEK1 and MEK2, had no effect on MDR1 expression.
Inhibition of p38 leading to a decrease of PGP expression in K562/Dox cells
The expression of PGP in K562/Dox cells was further detected by Western blot. SB202190 treatment for 48 hours decreased PGP expression, whereas U0126 treatment had no effect (Figure 2A). After quantitative analysis, we found that the p38 inhibitor significantly reduced the expression of PGP (Figure 2B). Next, we confirmed that SB202190 indeed significantly suppressed phospho-p38, the active form of p38, as well as total p38 in K562/Dox cells (Figure 2C, D).
Reducing drug resistance by p38 inhibitor in K562/Dox cells
We subsequently investigated whether p38 was involved in the cells' resistance to the drugs tested. First, we examined cell viability by pretreating the cells with the p38 inhibitor SB202190 or a positive control, verapamil, followed by treatment with phenytoin sodium or doxorubicin. The inhibition of cell viability by phenytoin sodium and doxorubicin was determined by CCK-8 assay. We found that in the presence of SB202190, phenytoin sodium and doxorubicin significantly decreased the number of living cells (Figure 3A, B). Control experiments confirmed that treatment with 10 μM SB202190 or verapamil alone had no effect on cell viability in either K562 or K562/Dox cells (data not shown). Second, we measured the IC50 of phenytoin and doxorubicin in K562/Dox cells. The IC50 of phenytoin and doxorubicin in K562/Dox cells was significantly higher than that in K562 cells (2186.33±214.70 vs. 468.82±44.67 μg/mL and 4.33±0.50 vs. 0.32±0.05 μg/mL, respectively) (Table 1), indicating that K562/Dox cells were drug-resistant. After blocking p38 with 10 μM SB202190 in K562/Dox cells, we observed that the IC50 of phenytoin was significantly decreased, from 2186.33±214.70 to 949.83±131.31 μg/mL, with an RI of 2.30, similar to that of the verapamil control (2.56). The IC50 of doxorubicin was also lower in SB202190-treated than in untreated K562/Dox cells (0.40±0.09 vs. 4.33±0.50 μg/mL), with an RI of 10.83, similar to that of verapamil (12.37). Third, we measured the intracellular concentrations of phenytoin and doxorubicin by HPLC. The intracellular levels of phenytoin and doxorubicin were significantly lower in K562/Dox cells than in K562 cells (Figure 4A, B), further confirming the drug resistance of K562/Dox cells. This decrease in the intracellular drug levels in K562/Dox cells was largely abolished in the presence of SB202190 (Figure 4). These data clearly demonstrate that p38 is, at least in part, involved in the regulation of drug resistance in K562/Dox cells.
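The reversal index (RI) quoted above is defined in the Methods as the IC50 without inhibitor divided by the IC50 with inhibitor. A minimal check of the reported values:

```python
# RI = IC50 without inhibitor / IC50 with inhibitor (as defined in the Methods).
def reversal_index(ic50_without: float, ic50_with: float) -> float:
    return ic50_without / ic50_with

# Mean IC50 values reported for K562/Dox cells (μg/mL):
ri_phenytoin = reversal_index(2186.33, 949.83)   # phenytoin, with vs. without SB202190
ri_doxorubicin = reversal_index(4.33, 0.40)      # doxorubicin, with vs. without SB202190
# These come out to roughly 2.3 and 10.8, in line with the reported RIs
# of 2.30 and 10.83 (verapamil controls: 2.56 and 12.37).
```

A higher RI thus means a stronger reversal of resistance; by this measure SB202190 reversed doxorubicin resistance far more effectively than phenytoin resistance.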
Conclusions
Our study shows that the p38 signaling pathway is involved in doxorubicin-induced drug resistance. The inhibition of p38 MAPK diminishes doxorubicin-induced drug resistance, associated with the down-regulation of PGP. Thus, inhibitors of p38 may provide a new chemotherapeutic option to overcome drug resistance in the treatment of cancer and epilepsy. Further studies on the mechanisms of p38 inhibitors and the development of effective PGP-specific antagonists with low toxicity will improve the clinical effects of chemotherapy and anti-epilepsy therapy.
[ "Background", "Material", "Generation of K562/Dox cell line and treatment", "Immunocytochemical staining", "RNA extraction and RT-PCR", "Western blotting", "Cell viability and half maximal inhibitory concentration", "Measurement of intracellular concentration of PHT and Dox", "Generation of K562/Dox cells by doxorubicin and responsiveness to p38 inhibitor", "Inhibition of p38 leading to a decrease of PGP expression in K562/Dox cells", "Reducing drug resistance by p38 inhibitor in K562/Dox cells" ]
[ "The p38 mitogen-activated protein kinase (MAPK) signaling pathway mediates multiple cellular events, including proliferation, differentiation, migration, adhesion and apoptosis, in response to various extracellular stimuli, such as growth factors, hormones, ligands for G protein-coupled receptors, inflammatory cytokines, and stresses [1]. It has been shown that p38 MAPK signaling is associated with cancers in humans and mice [2] and regulates gene expression through the activation of transcription factors.\nLong-term exposure of tumor cells to certain types of chemotherapy drugs causes resistance. The best example is doxorubicin, an anti-cancer drug that often leads to drug resistance [3,4]. Recent studies on cell resistance to chemotherapy and refractory epilepsy drugs showed that multidrug resistance (MDR) transporters, especially P-glycoprotein (PGP) encoded by MDR1, play an important role in multidrug resistance [5–7]. PGP is a membrane-associated protein with 6 transmembrane domains and an adenosine triphosphate (ATP) binding site. This energy-dependent structure provides the characteristics of a drug efflux transporter that can pump drugs and other hydrophobic compounds out of cells, reducing the intracellular drug concentration, thus leading to drug resistance [8–10].\nPGP expression can be induced by several factors, including cytotoxic drugs, irradiation, heat shock, and other stresses [11–13]. These factors also activate the p38 MAPK signaling pathway [14–16], suggesting that the p38 MAPK signaling pathway may be involved in the regulation of PGP expression. In this study we investigated the effect of a highly selective, potent, cell-permeable inhibitor of p38 MAPK (SB202190) on doxorubicin-induced drug resistance associated with PGP in a leukemia cell line. 
We demonstrated that p38 MAPK is involved in doxorubicin-induced PGP expression, cell resistance to the antiepileptic drug phenytoin sodium, and the chemotherapy drug, doxorubicin, in leukemia cells.", "Human leukemia cell line K562 was obtained from the Blood Institute of the Chinese Academy of Sciences (Tianjin, China). Doxorubicin hydrochloride (Dox), phenytoin sodium (PHT), verapamil hydrochloride (Ver), rabbit anti-human PGP antibody), SB202190 (4-(4-Fluorophenyl)-2-(4-hydroxyphenyl)-5-(4-pyridyl)-1H-imidazole), U0126 (1,4-diamino-2,3-dicyano-1,4-bis[2-aminophenylthio] butadiene), Cell Counting Kit-8 (CCK-8), and DAB (3,3′-Diaminobenzidine) were purchased from Sigma (St. Louis, MO). RPMI-1640, penicillin and streptomycin were from Gibco (Invitrogen, NY). Anti-p38 antibodies (total and phosphor T180 - Y182) were from Santa Cruz Technologies.", "To generate a resistant cell line K562/Dox, K562 cells were cultured in RPMI-1640 medium supplemented with 15% fetal calf serum (FCS), 100 U/mL penicillin, and 100 μg/mL streptomycin at 37°C overnight, followed by treatment with 10 μg/mL doxorubicin at 37°C for 2 hours. Cells were then centrifuged and recovered with fresh medium. After recovery, cells were retreated with doxorubicin at the same dose. These processes were repeated several times until drug resistance was acquired. All K562 cells were kept in the logarithmic growth phase during doxorubicin treatment. The established cell line was then maintained in fresh complete medium supplemented with 0.1 μg/mL doxorubicin.\nAll K562/Dox cells were cultured in the absence of Dox for 10 days prior to compound treatment. The cells were then plated and treated with 10 μM phenytoin sodium, 10 μM doxorubicin, 10 μM SB202190, 10 μM U0126, or 10 μM verapamil in DMSO for a period of time as indicated below. Equal amount DMSO was used for a negative control.", "K562 and K562/Dox cells were seeded on 0.1% poly-lysine coated slides (Sigma) and fixed in ice-cold acetone for 10 min. 
After washing 3 times with PBS, the cells were permeated with 0.25% Triton X-100 plus 5% DMSO in PBS for 10 min. After washing 3 times with PBS, the cells were treated with 1.5% and 3% hydrogen peroxide for 15 min each to block endogenous peroxidase and peroxidase-like activity. Following block with 10% goat serum in PBS for 1 hour, the cells were incubated with specific antibody against human PGP (1: 200 dilution) at 4°C overnight. After incubation of horseradish peroxidase-labeled goat anti-rabbit secondary antibody (1:1000 dilution) at 37°C for 1 hour, the cells were washed with 0.1 mol/L Tris-HCl buffer for 5 min. The cells were then incubated with 0.05% DAB substrate in 0.05 mol/L Tris-HCl buffer, followed by incubation of 2 drops of 3% hydrogen peroxide for 5~15 min until cells were coloured light brown. The reaction was stopped by putting the slides into 0.05 mol/L Tris-HCl buffer and air-drying. After mounting, the cells were observed under the microscope and photos were taken.", "Total RNA was extracted from cells using Trizol reagent (Invitrogen). One microgram of total RNA was reversely transcribed using a reverse transcription kit (MBI Fermentas, Burlington, Canada). The PCR amplification was carried out in a volume of 25 μl using the Fermentas kit. The primers of MDR1 and β-actin were synthesized by Shanghai Saibaisheng Company (Shanghai, China). The primer sequences were 5′-TTTTCATGCTATAATGCGAC-3′ (forward) and 5′-TCCAAGAACAGGACTGATGG-3′ (reverse) for MDR1 (226 bp) and 5′-CCTCGCCTTTGCCGATCC-3′ (forward) and 5′-GGATCTTCATGAGGTAGTCAGTC-3′ (reverse) for β-actin (620 bp). PCR amplification was performed at 94°C for 45 sec, 54°C for 45 sec, and 72°C for 1 min for 30 cycles. An initial step to denature RNA at 95°C for 2 min and a final extension of 5 min at 72°C were also performed. PCR products were then separated on 1.5% agarose gel and analyzed using a gel imaging system (GeneGenius, USA).", "Total protein was extracted from 1×109 cells. 
Equal amount protein samples were run on SDS-PAGE gels and transferred to polyvinylidene difluoride (PVDF) membranes. After blocking with 1% skim milk in TBS-T at room temperature for 1 hour, the membrane was probed with mouse anti-human primary antibody (1:500 dilution) at 4°C overnight and subsequently incubated with goat anti-mouse horseradish peroxidase-conjugated secondary antibody (1:2000 dilution) at room temperature for 2 hours. Signals were detected using ECL-Plus (Santa Clara, CA) and quantified using the Bio-Rad2000 gel imaging system with QUANTITY ONE software (Bio-Rad Laboratories, Hercules, CA).", "Cell viability and drug resistance was determined by cell counting method using CCK-8 assay according to the protocol of the manufacturer. Briefly, cells were pre-treated with 10 μM SB202190 or 10 μM verapamil for 1 hour and treated with phenytoin or doxorubicin for 48 hours. After adding 10 μl of the CCK-8 solution to each well of the plate, cells were incubated for 2 hours in the incubator. The absorbance was measured at 450 nm using a microplate reader. Cell viability was calculated using the data obtained from the wells that contain known numbers of viable cells. The 50% inhibitory concentration (IC50) of each drug was calculated using a weighted regression of the plot. Reversal index (RI) was calculated as RI=IC50 without inhibitor/IC50 with inhibitor.", "The concentration of intracellular phenytoin or doxorubicin was measured by HPLC. Briefly, cells were pretreated with 10 μM SB202190 or 10 μM verapamil for 1 hour and treated with phenytoin sodium or doxorubicin for 36 hours at a final concentration of 10 μM. Cells (2×106 in 2 ml culture medium) were then collected and re-suspended in 0.3 mol/L HCl/50% ethanol. 
After centrifugation at 10,000 rpm for 10 min, the supernatant (20 μl) was loaded into the column of HPLC for the measurement of the intracellular concentration of drug according to the protocol of the manufacturer.", "K562/Dox cells were generated by repeating treatments of doxorubicin and confirmed by the induction of PGP expression. Immunocytochemistry showed that wild-type K562 cells had an undetectable level of PGP expression (Figure 1A, left panel), whereas most K562/Dox cells were PGP-positive and appeared light-brown (Figure 1A, right panel), indicating the induction of PGP expression by doxorubicin. This drug-resistant cell line was further confirmed by the detection of multi-drug resistance 1 gene, MDR1, in the absence or presence of a p38 inhibitor, SB202190 (Figure 1B). After quantitative analysis of RT-PCR, we found that SB202190 treatment significantly decreased MDR1 expression in K562/Dox cells (Figure 1C; P<0.001; n=10). However, the treatment of U0126, a highly selective inhibitor of both MEK1 and MEK2, had no effect on MDR1 expression.", "The expression of PGP in K562/Dox cells was further detected by Western blot. SB202190 treatment for 48 hours decreased PGP expression, whereas U0126 treatment had no effect (Figure 2A). After quantitative analysis, we found that p38 inhibitor significantly reduced the expression of PGP (Figure 2B). Next, we confirmed that SB202190 indeed significantly suppressed phopho-p38, an active form of p38, as well as total-p38 in K562/Dox cells (Figure 2C, D).", "We subsequently investigated whether p38 was involved in cells’ resistance to the drug tested. First, we examined cell viability by pretreating the cells with p38 inhibitor SB202190 and a positive control, verapamil, followed by treating the cells with phenytoin sodium or doxorubicin. The inhibition of cell viabilities by phenytoin sodium and doxorubicin was determined by CCK-8 assay. 
We found that in the presence of SB202190, phenytoin sodium and doxorubicin significantly decreased the number of living cells (Figure 3A, B). Control experiments confirmed that treatment with 10 μM SB202190 or verapamil alone had no effect on cell viability in either K562 or K562/Dox cells (data not shown). Second, we measured the IC50 of phenytoin and doxorubicin in K562/Dox cells. The IC50 values of phenytoin and doxorubicin in K562/Dox cells were significantly higher than those in K562 cells (2186.33±214.70 vs. 468.82±44.67 μg/mL and 4.33±0.50 vs. 0.32±0.05 μg/mL, respectively) (Table 1), indicating that K562/Dox cells were drug-resistant. After blocking p38 with 10 μM SB202190 in K562/Dox cells, we observed that the IC50 of phenytoin was significantly decreased, from 2186.33±214.70 to 949.83±131.31 μg/mL, with an RI of 2.30, similar to that of the verapamil control (2.56). The IC50 of doxorubicin was also lower in cells treated with 10 μM SB202190 than in untreated K562/Dox cells (0.40±0.09 vs. 4.33±0.50 μg/mL), with an RI of 10.83, similar to that of verapamil (12.37). Third, we measured the intracellular concentrations of phenytoin and doxorubicin by HPLC. The intracellular levels of phenytoin and doxorubicin were significantly lower in K562/Dox cells than in K562 cells (Figure 4A, B), further confirming the drug resistance of K562/Dox cells. The decrease in the intracellular levels of phenytoin and doxorubicin in K562/Dox cells was largely abolished in the presence of SB202190 (Figure 4). These data demonstrate that p38 is, at least in part, involved in the regulation of drug resistance in K562/Dox cells." ]
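As a quick arithmetic check on the reversal indices reported above, RI can be recomputed from the mean IC50 values quoted in the text. This is a minimal sketch using only the reported means; the authors' exact figures came from unrounded data, so the last digit may differ slightly.

```python
# Recompute reversal indices: RI = IC50 without inhibitor / IC50 with inhibitor.
# Mean IC50 values (ug/mL) are taken from the text; SB202190 is the p38 inhibitor.
ic50_ug_per_ml = {
    "phenytoin":   {"no_inhibitor": 2186.33, "sb202190": 949.83},
    "doxorubicin": {"no_inhibitor": 4.33,    "sb202190": 0.40},
}

for drug, vals in ic50_ug_per_ml.items():
    ri = vals["no_inhibitor"] / vals["sb202190"]
    print(f"{drug}: RI = {ri:.2f}")
```

This reproduces the reported RIs of 2.30 (phenytoin) and ~10.83 (doxorubicin) to within rounding of the published means.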
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Material and Methods", "Material", "Generation of K562/Dox cell line and treatment", "Immunocytochemical staining", "RNA extraction and RT-PCR", "Western blotting", "Cell viability and half maximal inhibitory concentration", "Measurement of intracellular concentration of PHT and Dox", "Statistical analysis", "Results", "Generation of K562/Dox cells by doxorubicin and responsiveness to p38 inhibitor", "Inhibition of p38 leading to a decrease of PGP expression in K562/Dox cells", "Reducing drug resistance by p38 inhibitor in K562/Dox cells", "Discussion", "Conclusions" ]
[ "The p38 mitogen-activated protein kinase (MAPK) signaling pathway mediates multiple cellular events, including proliferation, differentiation, migration, adhesion and apoptosis, in response to various extracellular stimuli, such as growth factors, hormones, ligands for G protein-coupled receptors, inflammatory cytokines, and stresses [1]. It has been shown that p38 MAPK signaling is associated with cancers in humans and mice [2] and regulates gene expression through the activation of transcription factors.\nLong-term exposure of tumor cells to certain types of chemotherapy drugs causes resistance. The best example is doxorubicin, an anti-cancer drug that often leads to drug resistance [3,4]. Recent studies on cell resistance to chemotherapy and refractory epilepsy drugs showed that multidrug resistance (MDR) transporters, especially P-glycoprotein (PGP) encoded by MDR1, play an important role in multidrug resistance [5–7]. PGP is a membrane-associated protein with 6 transmembrane domains and an adenosine triphosphate (ATP) binding site. This energy-dependent structure provides the characteristics of a drug efflux transporter that can pump drugs and other hydrophobic compounds out of cells, reducing the intracellular drug concentration, thus leading to drug resistance [8–10].\nPGP expression can be induced by several factors, including cytotoxic drugs, irradiation, heat shock, and other stresses [11–13]. These factors also activate the p38 MAPK signaling pathway [14–16], suggesting that the p38 MAPK signaling pathway may be involved in the regulation of PGP expression. In this study we investigated the effect of a highly selective, potent, cell-permeable inhibitor of p38 MAPK (SB202190) on doxorubicin-induced drug resistance associated with PGP in a leukemia cell line. 
We demonstrated that p38 MAPK is involved in doxorubicin-induced PGP expression and in cell resistance to the antiepileptic drug phenytoin sodium and the chemotherapy drug doxorubicin in leukemia cells.", " Material Human leukemia cell line K562 was obtained from the Blood Institute of the Chinese Academy of Sciences (Tianjin, China). Doxorubicin hydrochloride (Dox), phenytoin sodium (PHT), verapamil hydrochloride (Ver), rabbit anti-human PGP antibody, SB202190 (4-(4-Fluorophenyl)-2-(4-hydroxyphenyl)-5-(4-pyridyl)-1H-imidazole), U0126 (1,4-diamino-2,3-dicyano-1,4-bis[2-aminophenylthio]butadiene), Cell Counting Kit-8 (CCK-8), and DAB (3,3′-diaminobenzidine) were purchased from Sigma (St. Louis, MO). RPMI-1640, penicillin, and streptomycin were from Gibco (Invitrogen, NY). Anti-p38 antibodies (total and phospho-T180/Y182) were from Santa Cruz Biotechnology.\n Generation of K562/Dox cell line and treatment To generate the resistant cell line K562/Dox, K562 cells were cultured in RPMI-1640 medium supplemented with 15% fetal calf serum (FCS), 100 U/mL penicillin, and 100 μg/mL streptomycin at 37°C overnight, followed by treatment with 10 μg/mL doxorubicin at 37°C for 2 hours. Cells were then centrifuged and recovered in fresh medium. After recovery, cells were retreated with doxorubicin at the same dose. 
These processes were repeated several times until drug resistance was acquired. All K562 cells were kept in the logarithmic growth phase during doxorubicin treatment. The established cell line was then maintained in fresh complete medium supplemented with 0.1 μg/mL doxorubicin.\nAll K562/Dox cells were cultured in the absence of Dox for 10 days prior to compound treatment. The cells were then plated and treated with 10 μM phenytoin sodium, 10 μM doxorubicin, 10 μM SB202190, 10 μM U0126, or 10 μM verapamil in DMSO for the periods of time indicated below. An equal amount of DMSO was used as a negative control.\n Immunocytochemical staining K562 and K562/Dox cells were seeded on 0.1% poly-lysine-coated slides (Sigma) and fixed in ice-cold acetone for 10 min. After washing 3 times with PBS, the cells were permeabilized with 0.25% Triton X-100 plus 5% DMSO in PBS for 10 min. 
After washing 3 times with PBS, the cells were treated with 1.5% and 3% hydrogen peroxide for 15 min each to block endogenous peroxidase and peroxidase-like activity. Following blocking with 10% goat serum in PBS for 1 hour, the cells were incubated with a specific antibody against human PGP (1:200 dilution) at 4°C overnight. After incubation with horseradish peroxidase-labeled goat anti-rabbit secondary antibody (1:1000 dilution) at 37°C for 1 hour, the cells were washed with 0.1 mol/L Tris-HCl buffer for 5 min. The cells were then incubated with 0.05% DAB substrate in 0.05 mol/L Tris-HCl buffer, followed by the addition of 2 drops of 3% hydrogen peroxide for 5–15 min until the cells were coloured light brown. The reaction was stopped by placing the slides in 0.05 mol/L Tris-HCl buffer and air-drying. After mounting, the cells were observed under the microscope and photos were taken.\n RNA extraction and RT-PCR Total RNA was extracted from cells using Trizol reagent (Invitrogen). One microgram of total RNA was reverse transcribed using a reverse transcription kit (MBI Fermentas, Burlington, Canada). PCR amplification was carried out in a volume of 25 μl using the Fermentas kit. The primers for MDR1 and β-actin were synthesized by Shanghai Saibaisheng Company (Shanghai, China). The primer sequences were 5′-TTTTCATGCTATAATGCGAC-3′ (forward) and 5′-TCCAAGAACAGGACTGATGG-3′ (reverse) for MDR1 (226 bp) and 5′-CCTCGCCTTTGCCGATCC-3′ (forward) and 5′-GGATCTTCATGAGGTAGTCAGTC-3′ (reverse) for β-actin (620 bp). PCR amplification was performed at 94°C for 45 sec, 54°C for 45 sec, and 72°C for 1 min for 30 cycles. An initial denaturation step at 95°C for 2 min and a final extension of 5 min at 72°C were also performed. PCR products were then separated on a 1.5% agarose gel and analyzed using a gel imaging system (GeneGenius, USA).\n Western blotting Total protein was extracted from 1×10⁹ cells. Equal amounts of protein were run on SDS-PAGE gels and transferred to polyvinylidene difluoride (PVDF) membranes. After blocking with 1% skim milk in TBS-T at room temperature for 1 hour, the membrane was probed with mouse anti-human primary antibody (1:500 dilution) at 4°C overnight and subsequently incubated with goat anti-mouse horseradish peroxidase-conjugated secondary antibody (1:2000 dilution) at room temperature for 2 hours. Signals were detected using ECL-Plus (Santa Clara, CA) and quantified using the Bio-Rad2000 gel imaging system with QUANTITY ONE software (Bio-Rad Laboratories, Hercules, CA).\n Cell viability and half maximal inhibitory concentration Cell viability and drug resistance were determined by a cell-counting method using the CCK-8 assay according to the manufacturer's protocol. Briefly, cells were pre-treated with 10 μM SB202190 or 10 μM verapamil for 1 hour and then treated with phenytoin or doxorubicin for 48 hours. After adding 10 μl of the CCK-8 solution to each well of the plate, cells were incubated for 2 hours in the incubator. The absorbance was measured at 450 nm using a microplate reader. Cell viability was calculated using the data obtained from wells containing known numbers of viable cells. The 50% inhibitory concentration (IC50) of each drug was calculated using a weighted regression of the plot. The reversal index (RI) was calculated as RI=IC50 without inhibitor/IC50 with inhibitor.\n Measurement of intracellular concentration of PHT and Dox The concentration of intracellular phenytoin or doxorubicin was measured by HPLC. Briefly, cells were pretreated with 10 μM SB202190 or 10 μM verapamil for 1 hour and treated with phenytoin sodium or doxorubicin for 36 hours at a final concentration of 10 μM. Cells (2×10⁶ in 2 ml culture medium) were then collected and re-suspended in 0.3 mol/L HCl/50% ethanol. After centrifugation at 10,000 rpm for 10 min, the supernatant (20 μl) was loaded onto the HPLC column for measurement of the intracellular drug concentration according to the manufacturer's protocol.\n Statistical analysis All statistical analyses were carried out using SigmaStat (Chicago, IL). Comparisons between groups were performed using either paired Student's t-tests or one-way ANOVA, as indicated. Data are presented as mean ±SD or SEM. Differences were considered significant at P<0.05.", "Human leukemia cell line K562 was obtained from the Blood Institute of the Chinese Academy of Sciences (Tianjin, China). Doxorubicin hydrochloride (Dox), phenytoin sodium (PHT), verapamil hydrochloride (Ver), rabbit anti-human PGP antibody, SB202190 (4-(4-Fluorophenyl)-2-(4-hydroxyphenyl)-5-(4-pyridyl)-1H-imidazole), U0126 (1,4-diamino-2,3-dicyano-1,4-bis[2-aminophenylthio]butadiene), Cell Counting Kit-8 (CCK-8), and DAB (3,3′-diaminobenzidine) were purchased from Sigma (St. Louis, MO). RPMI-1640, penicillin, and streptomycin were from Gibco (Invitrogen, NY). Anti-p38 antibodies (total and phospho-T180/Y182) were from Santa Cruz Biotechnology.", "To generate the resistant cell line K562/Dox, K562 cells were cultured in RPMI-1640 medium supplemented with 15% fetal calf serum (FCS), 100 U/mL penicillin, and 100 μg/mL streptomycin at 37°C overnight, followed by treatment with 10 μg/mL doxorubicin at 37°C for 2 hours. Cells were then centrifuged and recovered in fresh medium. After recovery, cells were retreated with doxorubicin at the same dose. 
These processes were repeated several times until drug resistance was acquired. All K562 cells were kept in the logarithmic growth phase during doxorubicin treatment. The established cell line was then maintained in fresh complete medium supplemented with 0.1 μg/mL doxorubicin.\nAll K562/Dox cells were cultured in the absence of Dox for 10 days prior to compound treatment. The cells were then plated and treated with 10 μM phenytoin sodium, 10 μM doxorubicin, 10 μM SB202190, 10 μM U0126, or 10 μM verapamil in DMSO for the periods of time indicated below. An equal amount of DMSO was used as a negative control.", "K562 and K562/Dox cells were seeded on 0.1% poly-lysine-coated slides (Sigma) and fixed in ice-cold acetone for 10 min. After washing 3 times with PBS, the cells were permeabilized with 0.25% Triton X-100 plus 5% DMSO in PBS for 10 min. After washing 3 times with PBS, the cells were treated with 1.5% and 3% hydrogen peroxide for 15 min each to block endogenous peroxidase and peroxidase-like activity. Following blocking with 10% goat serum in PBS for 1 hour, the cells were incubated with a specific antibody against human PGP (1:200 dilution) at 4°C overnight. After incubation with horseradish peroxidase-labeled goat anti-rabbit secondary antibody (1:1000 dilution) at 37°C for 1 hour, the cells were washed with 0.1 mol/L Tris-HCl buffer for 5 min. The cells were then incubated with 0.05% DAB substrate in 0.05 mol/L Tris-HCl buffer, followed by the addition of 2 drops of 3% hydrogen peroxide for 5–15 min until the cells were coloured light brown. The reaction was stopped by placing the slides in 0.05 mol/L Tris-HCl buffer and air-drying. After mounting, the cells were observed under the microscope and photos were taken.", "Total RNA was extracted from cells using Trizol reagent (Invitrogen). One microgram of total RNA was reverse transcribed using a reverse transcription kit (MBI Fermentas, Burlington, Canada). 
PCR amplification was carried out in a volume of 25 μl using the Fermentas kit. The primers for MDR1 and β-actin were synthesized by Shanghai Saibaisheng Company (Shanghai, China). The primer sequences were 5′-TTTTCATGCTATAATGCGAC-3′ (forward) and 5′-TCCAAGAACAGGACTGATGG-3′ (reverse) for MDR1 (226 bp) and 5′-CCTCGCCTTTGCCGATCC-3′ (forward) and 5′-GGATCTTCATGAGGTAGTCAGTC-3′ (reverse) for β-actin (620 bp). PCR amplification was performed at 94°C for 45 sec, 54°C for 45 sec, and 72°C for 1 min for 30 cycles. An initial denaturation step at 95°C for 2 min and a final extension of 5 min at 72°C were also performed. PCR products were then separated on a 1.5% agarose gel and analyzed using a gel imaging system (GeneGenius, USA).", "Total protein was extracted from 1×10⁹ cells. Equal amounts of protein were run on SDS-PAGE gels and transferred to polyvinylidene difluoride (PVDF) membranes. After blocking with 1% skim milk in TBS-T at room temperature for 1 hour, the membrane was probed with mouse anti-human primary antibody (1:500 dilution) at 4°C overnight and subsequently incubated with goat anti-mouse horseradish peroxidase-conjugated secondary antibody (1:2000 dilution) at room temperature for 2 hours. Signals were detected using ECL-Plus (Santa Clara, CA) and quantified using the Bio-Rad2000 gel imaging system with QUANTITY ONE software (Bio-Rad Laboratories, Hercules, CA).", "Cell viability and drug resistance were determined by a cell-counting method using the CCK-8 assay according to the manufacturer's protocol. Briefly, cells were pre-treated with 10 μM SB202190 or 10 μM verapamil for 1 hour and then treated with phenytoin or doxorubicin for 48 hours. After adding 10 μl of the CCK-8 solution to each well of the plate, cells were incubated for 2 hours in the incubator. The absorbance was measured at 450 nm using a microplate reader. Cell viability was calculated using the data obtained from wells containing known numbers of viable cells. 
The 50% inhibitory concentration (IC50) of each drug was calculated using a weighted regression of the plot. The reversal index (RI) was calculated as RI=IC50 without inhibitor/IC50 with inhibitor.", "The concentration of intracellular phenytoin or doxorubicin was measured by HPLC. Briefly, cells were pretreated with 10 μM SB202190 or 10 μM verapamil for 1 hour and treated with phenytoin sodium or doxorubicin for 36 hours at a final concentration of 10 μM. Cells (2×10⁶ in 2 ml culture medium) were then collected and re-suspended in 0.3 mol/L HCl/50% ethanol. After centrifugation at 10,000 rpm for 10 min, the supernatant (20 μl) was loaded onto the HPLC column for measurement of the intracellular drug concentration according to the manufacturer's protocol.", "All statistical analyses were carried out using SigmaStat (Chicago, IL). Comparisons between groups were performed using either paired Student's t-tests or one-way ANOVA, as indicated. Data are presented as mean ±SD or SEM. Differences were considered significant at P<0.05.", " Generation of K562/Dox cells by doxorubicin and responsiveness to p38 inhibitor K562/Dox cells were generated by repeated doxorubicin treatments and confirmed by the induction of PGP expression. Immunocytochemistry showed that wild-type K562 cells had an undetectable level of PGP expression (Figure 1A, left panel), whereas most K562/Dox cells were PGP-positive and appeared light brown (Figure 1A, right panel), indicating the induction of PGP expression by doxorubicin. This drug-resistant cell line was further confirmed by detection of the multidrug resistance 1 gene, MDR1, in the absence or presence of a p38 inhibitor, SB202190 (Figure 1B). Quantitative analysis of RT-PCR showed that SB202190 treatment significantly decreased MDR1 expression in K562/Dox cells (Figure 1C; P<0.001; n=10). 
However, treatment with U0126, a highly selective inhibitor of both MEK1 and MEK2, had no effect on MDR1 expression.\n Inhibition of p38 leading to a decrease of PGP expression in K562/Dox cells The expression of PGP in K562/Dox cells was further examined by Western blot. SB202190 treatment for 48 hours decreased PGP expression, whereas U0126 treatment had no effect (Figure 2A). Quantitative analysis showed that the p38 inhibitor significantly reduced the expression of PGP (Figure 2B). Next, we confirmed that SB202190 indeed significantly suppressed phospho-p38, the active form of p38, as well as total p38 in K562/Dox cells (Figure 2C, D).\n Reducing drug resistance by p38 inhibitor in K562/Dox cells We subsequently investigated whether p38 was involved in the cells' resistance to the drugs tested. First, we examined cell viability by pretreating the cells with the p38 inhibitor SB202190 or a positive control, verapamil, followed by treating the cells with phenytoin sodium or doxorubicin. The inhibition of cell viability by phenytoin sodium and doxorubicin was determined by CCK-8 assay. We found that in the presence of SB202190, phenytoin sodium and doxorubicin significantly decreased the number of living cells (Figure 3A, B). Control experiments confirmed that treatment with 10 μM SB202190 or verapamil alone had no effect on cell viability in either K562 or K562/Dox cells (data not shown). Second, we measured the IC50 of phenytoin and doxorubicin in K562/Dox cells. The IC50 values of phenytoin and doxorubicin in K562/Dox cells were significantly higher than those in K562 cells (2186.33±214.70 vs. 468.82±44.67 μg/mL and 4.33±0.50 vs. 0.32±0.05 μg/mL, respectively) (Table 1), indicating that K562/Dox cells were drug-resistant. After blocking p38 with 10 μM SB202190 in K562/Dox cells, we observed that the IC50 of phenytoin was significantly decreased, from 2186.33±214.70 to 949.83±131.31 μg/mL, with an RI of 2.30, similar to that of the verapamil control (2.56). The IC50 of doxorubicin was also lower in cells treated with 10 μM SB202190 than in untreated K562/Dox cells (0.40±0.09 vs. 4.33±0.50 μg/mL), with an RI of 10.83, similar to that of verapamil (12.37). Third, we measured the intracellular concentrations of phenytoin and doxorubicin by HPLC. The intracellular levels of phenytoin and doxorubicin were significantly lower in K562/Dox cells than in K562 cells (Figure 4A, B), further confirming the drug resistance of K562/Dox cells. The decrease in the intracellular levels of phenytoin and doxorubicin in K562/Dox cells was largely abolished in the presence of SB202190 (Figure 4). These data demonstrate that p38 is, at least in part, involved in the regulation of drug resistance in K562/Dox cells.", "K562/Dox cells were generated by repeated doxorubicin treatments and confirmed by the induction of PGP expression. Immunocytochemistry showed that wild-type K562 cells had an undetectable level of PGP expression (Figure 1A, left panel), whereas most K562/Dox cells were PGP-positive and appeared light brown (Figure 1A, right panel), indicating the induction of PGP expression by doxorubicin. This drug-resistant cell line was further confirmed by detection of the multidrug resistance 1 gene, MDR1, in the absence or presence of a p38 inhibitor, SB202190 (Figure 1B). Quantitative analysis of RT-PCR showed that SB202190 treatment significantly decreased MDR1 expression in K562/Dox cells (Figure 1C; P<0.001; n=10). However, treatment with U0126, a highly selective inhibitor of both MEK1 and MEK2, had no effect on MDR1 expression.", "The expression of PGP in K562/Dox cells was further examined by Western blot. SB202190 treatment for 48 hours decreased PGP expression, whereas U0126 treatment had no effect (Figure 2A). Quantitative analysis showed that the p38 inhibitor significantly reduced the expression of PGP (Figure 2B). Next, we confirmed that SB202190 indeed significantly suppressed phospho-p38, the active form of p38, as well as total p38 in K562/Dox cells (Figure 2C, D).", "We subsequently investigated whether p38 was involved in the cells' resistance to the drugs tested. 
First, we examined cell viability by pretreating the cells with the p38 inhibitor SB202190 or a positive control, verapamil, followed by treating the cells with phenytoin sodium or doxorubicin. The inhibition of cell viability by phenytoin sodium and doxorubicin was determined by CCK-8 assay. We found that in the presence of SB202190, phenytoin sodium and doxorubicin significantly decreased the number of living cells (Figure 3A, B). Control experiments confirmed that treatment with 10 μM SB202190 or verapamil alone had no effect on cell viability in either K562 or K562/Dox cells (data not shown). Second, we measured the IC50 of phenytoin and doxorubicin in K562/Dox cells. The IC50 values of phenytoin and doxorubicin in K562/Dox cells were significantly higher than those in K562 cells (2186.33±214.70 vs. 468.82±44.67 μg/mL and 4.33±0.50 vs. 0.32±0.05 μg/mL, respectively) (Table 1), indicating that K562/Dox cells were drug-resistant. After blocking p38 with 10 μM SB202190 in K562/Dox cells, we observed that the IC50 of phenytoin was significantly decreased, from 2186.33±214.70 to 949.83±131.31 μg/mL, with an RI of 2.30, similar to that of the verapamil control (2.56). The IC50 of doxorubicin was also lower in cells treated with 10 μM SB202190 than in untreated K562/Dox cells (0.40±0.09 vs. 4.33±0.50 μg/mL), with an RI of 10.83, similar to that of verapamil (12.37). Third, we measured the intracellular concentrations of phenytoin and doxorubicin by HPLC. The intracellular levels of phenytoin and doxorubicin were significantly lower in K562/Dox cells than in K562 cells (Figure 4A, B), further confirming the drug resistance of K562/Dox cells. The decrease in the intracellular levels of phenytoin and doxorubicin in K562/Dox cells was largely abolished in the presence of SB202190 (Figure 4). 
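The IC50 values above were obtained by "a weighted regression of the plot", without the model being specified. One common choice is a four-parameter logistic (Hill) curve fit to viability-versus-concentration data; the sketch below uses simulated data of the same order as the phenytoin+SB202190 value, not the authors' measurements, and the weighted fit is an assumption about their method.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response: viability as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

rng = np.random.default_rng(0)
true_ic50 = 950.0  # ug/mL, same order as phenytoin + SB202190 in Table 1
conc = np.array([50.0, 150.0, 450.0, 950.0, 1900.0, 3800.0, 7600.0])
viability = hill(conc, 100.0, 0.0, true_ic50, 1.2)
viability = viability + rng.normal(0.0, 2.0, size=conc.size)  # simulated assay noise

# sigma supplies per-point uncertainties, making this a weighted least-squares fit.
popt, _ = curve_fit(
    hill, conc, viability,
    p0=[100.0, 0.0, 500.0, 1.0],
    sigma=np.full(conc.size, 2.0),
    bounds=([50.0, -20.0, 1.0, 0.1], [150.0, 20.0, 1e5, 5.0]),
)
print(f"fitted IC50 = {popt[2]:.0f} ug/mL (simulated truth: {true_ic50:.0f})")
```

The fitted IC50 should recover the simulated value to within the noise; the reversal index then follows by taking the ratio of fits with and without inhibitor.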
These data clearly demonstrate that p38 is, at least in part, involved in the regulation of drug resistance in K562/Dox cells.", "Drug resistance often occurs in anti-cancer and anti-epileptic therapy. Previous studies have shown that the multidrug transporter PGP is involved in cell resistance to chemotherapy and refractory epilepsy [17,18]. A PGP antagonist may effectively reverse chemotherapy and epilepsy drug resistance [7,19]. Here, we demonstrated that the p38 MAPK signaling pathway is involved in doxorubicin-induced drug resistance associated with PGP regulation, and a p38 inhibitor may serve as a PGP antagonist.\nPGP is a transmembrane glycoprotein, functioning as a drug transport that actively pumps out a variety of anti-cancer agents and other hydrophobic compounds from the cells [7,20], thus reducing intracellular drug concentrations and leading to drug resistance [10]. It has been shown that long-term exposure of tumor cells to some types of chemotherapy drugs causes resistance [21]. This is consistent with results of our current study that drug resistance associated with PGP expression can be induced by repeating treatment of doxorubicin. PGP-overexpressing K562/Dox cells allow us to study the effect of p38 inhibitor on drug resistance. The expression of MDR genes and multidrug transporters, such as PGP, are regulated by many factors, including cytotoxic drugs and stresses [11–13]; these factors also activate the p38 MAPK pathway [16,22]. Both PGP and p38 MAPK are involved in cellular processes (eg, apoptosis and cell proliferation) [23,24], indicating that there may be a link between p38 MAPK and PGP. We and others demonstrated that inhibition of p38 by SB202190 can decrease the expression of PGP and MDR1, a gene that encodes PGP, in K562/Dox (current study) and gastric cancer cells [25], suggesting that p38 MAPK signaling is involved in the regulation of PGP. 
U0126, a highly selective inhibitor of mitogen-activated protein kinase/extracellular signal-regulated kinase (ERK) kinase (MEK) [26], can reduce the endogenous expression of PGP in the human colorectal cancer cell lines HCT-15 and SW620-14 [27] and functionally antagonizes AP-1 transcriptional activity through noncompetitive inhibition of MEK1/2 [28]. However, in this study we found that the expression of PGP in K562/Dox cells was not affected by U0126, indicating that the MEK1/2 signaling pathway is not a major pathway in PGP regulation in leukemia cells and that the effect of U0126 may be cell-type specific.
To restore the sensitivity of resistant cells to phenytoin and doxorubicin, we applied SB202190, a p38 MAPK-specific antagonist [29], supporting the specificity of the p38 MAPK pathway in PGP regulation. K562/Dox cells that highly expressed PGP were resistant to doxorubicin and phenytoin sodium, and in the presence of SB202190 these cells reversed their drug resistance to a degree similar to that achieved with the well-known PGP antagonist verapamil. This result further confirms that the p38 MAPK pathway is involved in multidrug resistance through the regulation of PGP. Our data are consistent with the previous finding that, in an acidic environment, the p38 MAPK pathway mediates the upregulation of PGP in rat prostate cancer cells [30]. However, how p38 MAPK regulates PGP expression is not yet clear. It has been reported that there are NF-κB binding sites in the MDR1 promoter region, suggesting that MDR1 may be activated by NF-κB [31]. Because p38 MAPK can activate NF-κB [32], p38 MAPK may regulate PGP expression through activation of this transcription factor.
Conclusions: Our study shows that the p38 signaling pathway is involved in doxorubicin-induced drug resistance. Inhibition of p38 MAPK diminishes doxorubicin-induced drug resistance in association with the down-regulation of PGP.
Thus, p38 inhibitors may provide a new chemotherapeutic option for overcoming drug resistance in the treatment of cancer and epilepsy. Further studies on the mechanisms of p38 inhibitors and the development of effective, low-toxicity PGP-specific antagonists will improve the clinical effectiveness of chemotherapy and anti-epilepsy therapy.
[ "p38 MAPK", "drug resistance", "P-glycoprotein", "doxorubicin", "cancer" ]
Background: The p38 mitogen-activated protein kinase (MAPK) signaling pathway mediates multiple cellular events, including proliferation, differentiation, migration, adhesion and apoptosis, in response to various extracellular stimuli, such as growth factors, hormones, ligands for G protein-coupled receptors, inflammatory cytokines, and stresses [1]. It has been shown that p38 MAPK signaling is associated with cancers in humans and mice [2] and regulates gene expression through the activation of transcription factors. Long-term exposure of tumor cells to certain types of chemotherapy drugs causes resistance. The best example is doxorubicin, an anti-cancer drug that often leads to drug resistance [3,4]. Recent studies on cell resistance to chemotherapy and refractory epilepsy drugs showed that multidrug resistance (MDR) transporters, especially P-glycoprotein (PGP) encoded by MDR1, play an important role in multidrug resistance [5–7]. PGP is a membrane-associated protein with 6 transmembrane domains and an adenosine triphosphate (ATP) binding site. This energy-dependent structure provides the characteristics of a drug efflux transporter that can pump drugs and other hydrophobic compounds out of cells, reducing the intracellular drug concentration, thus leading to drug resistance [8–10]. PGP expression can be induced by several factors, including cytotoxic drugs, irradiation, heat shock, and other stresses [11–13]. These factors also activate the p38 MAPK signaling pathway [14–16], suggesting that the p38 MAPK signaling pathway may be involved in the regulation of PGP expression. In this study we investigated the effect of a highly selective, potent, cell-permeable inhibitor of p38 MAPK (SB202190) on doxorubicin-induced drug resistance associated with PGP in a leukemia cell line. 
We demonstrated that p38 MAPK is involved in doxorubicin-induced PGP expression and in the resistance of leukemia cells to the antiepileptic drug phenytoin sodium and the chemotherapy drug doxorubicin. Material and Methods: Material Human leukemia cell line K562 was obtained from the Blood Institute of the Chinese Academy of Sciences (Tianjin, China). Doxorubicin hydrochloride (Dox), phenytoin sodium (PHT), verapamil hydrochloride (Ver), rabbit anti-human PGP antibody, SB202190 (4-(4-Fluorophenyl)-2-(4-hydroxyphenyl)-5-(4-pyridyl)-1H-imidazole), U0126 (1,4-diamino-2,3-dicyano-1,4-bis[2-aminophenylthio]butadiene), Cell Counting Kit-8 (CCK-8), and DAB (3,3′-Diaminobenzidine) were purchased from Sigma (St. Louis, MO). RPMI-1640, penicillin, and streptomycin were from Gibco (Invitrogen, NY). Anti-p38 antibodies (total and phospho-T180/Y182) were from Santa Cruz Biotechnology. Generation of K562/Dox cell line and treatment To generate the resistant cell line K562/Dox, K562 cells were cultured in RPMI-1640 medium supplemented with 15% fetal calf serum (FCS), 100 U/mL penicillin, and 100 μg/mL streptomycin at 37°C overnight, followed by treatment with 10 μg/mL doxorubicin at 37°C for 2 hours. Cells were then centrifuged and recovered with fresh medium. After recovery, cells were retreated with doxorubicin at the same dose.
This process was repeated several times until drug resistance was acquired. All K562 cells were kept in the logarithmic growth phase during doxorubicin treatment. The established cell line was then maintained in fresh complete medium supplemented with 0.1 μg/mL doxorubicin. All K562/Dox cells were cultured in the absence of Dox for 10 days prior to compound treatment. The cells were then plated and treated with 10 μM phenytoin sodium, 10 μM doxorubicin, 10 μM SB202190, 10 μM U0126, or 10 μM verapamil in DMSO for the periods indicated below. An equal amount of DMSO was used as a negative control. Immunocytochemical staining K562 and K562/Dox cells were seeded on 0.1% poly-lysine-coated slides (Sigma) and fixed in ice-cold acetone for 10 min. After washing 3 times with PBS, the cells were permeabilized with 0.25% Triton X-100 plus 5% DMSO in PBS for 10 min.
After washing 3 times with PBS, the cells were treated with 1.5% and then 3% hydrogen peroxide for 15 min each to block endogenous peroxidase and peroxidase-like activity. After blocking with 10% goat serum in PBS for 1 hour, the cells were incubated with a specific antibody against human PGP (1:200 dilution) at 4°C overnight. After incubation with horseradish peroxidase-labeled goat anti-rabbit secondary antibody (1:1000 dilution) at 37°C for 1 hour, the cells were washed with 0.1 mol/L Tris-HCl buffer for 5 min. The cells were then incubated with 0.05% DAB substrate in 0.05 mol/L Tris-HCl buffer, followed by the addition of 2 drops of 3% hydrogen peroxide for 5–15 min until the cells were colored light brown. The reaction was stopped by transferring the slides into 0.05 mol/L Tris-HCl buffer and air-drying.
After mounting, the cells were observed under the microscope and photographed. RNA extraction and RT-PCR Total RNA was extracted from cells using Trizol reagent (Invitrogen). One microgram of total RNA was reverse transcribed using a reverse transcription kit (MBI Fermentas, Burlington, Canada). PCR amplification was carried out in a volume of 25 μl using the Fermentas kit. The primers for MDR1 and β-actin were synthesized by Shanghai Saibaisheng Company (Shanghai, China). The primer sequences were 5′-TTTTCATGCTATAATGCGAC-3′ (forward) and 5′-TCCAAGAACAGGACTGATGG-3′ (reverse) for MDR1 (226 bp), and 5′-CCTCGCCTTTGCCGATCC-3′ (forward) and 5′-GGATCTTCATGAGGTAGTCAGTC-3′ (reverse) for β-actin (620 bp). PCR amplification was performed at 94°C for 45 sec, 54°C for 45 sec, and 72°C for 1 min for 30 cycles, with an initial denaturation at 95°C for 2 min and a final extension at 72°C for 5 min. PCR products were then separated on a 1.5% agarose gel and analyzed using a gel imaging system (GeneGenius, USA).
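The MDR1 RT-PCR signal is quantified relative to the β-actin loading control from the same lane (as in the densitometric analysis of Figure 1C). A minimal sketch of that normalization step; the band intensities and lane names below are hypothetical, for illustration only, and are not taken from the study:

```python
# Illustrative normalization of RT-PCR band densitometry: each MDR1 band
# is divided by the β-actin band from the same lane. The intensities and
# lane names below are hypothetical, for illustration only.

def relative_expression(target_intensity, control_intensity):
    """Target band normalized to the loading-control band of the same lane."""
    return target_intensity / control_intensity

lanes = {
    "K562/Dox + DMSO":     {"MDR1": 1520.0, "beta_actin": 2010.0},
    "K562/Dox + SB202190": {"MDR1": 430.0,  "beta_actin": 1985.0},
}

for lane, bands in lanes.items():
    ratio = relative_expression(bands["MDR1"], bands["beta_actin"])
    print(f"{lane}: MDR1/beta-actin = {ratio:.2f}")
```

Dividing by the loading control corrects for lane-to-lane differences in the amount of template, so only the ratio, not the raw band intensity, is compared between treatments.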
Western blotting Total protein was extracted from 1×10⁹ cells. Equal amounts of protein were run on SDS-PAGE gels and transferred to polyvinylidene difluoride (PVDF) membranes. After blocking with 1% skim milk in TBS-T at room temperature for 1 hour, the membranes were probed with mouse anti-human primary antibody (1:500 dilution) at 4°C overnight and subsequently incubated with goat anti-mouse horseradish peroxidase-conjugated secondary antibody (1:2000 dilution) at room temperature for 2 hours. Signals were detected using ECL-Plus (Santa Clara, CA) and quantified using the Bio-Rad 2000 gel imaging system with QUANTITY ONE software (Bio-Rad Laboratories, Hercules, CA). Cell viability and half-maximal inhibitory concentration Cell viability and drug resistance were determined by cell counting using the CCK-8 assay according to the manufacturer's protocol. Briefly, cells were pretreated with 10 μM SB202190 or 10 μM verapamil for 1 hour and then treated with phenytoin or doxorubicin for 48 hours. After adding 10 μl of CCK-8 solution to each well, the plates were incubated for 2 hours in the incubator. Absorbance was measured at 450 nm using a microplate reader. Cell viability was calculated using data obtained from wells containing known numbers of viable cells.
The 50% inhibitory concentration (IC50) of each drug was calculated using a weighted regression of the dose-response plot. The reversal index (RI) was calculated as RI = IC50 without inhibitor / IC50 with inhibitor. Measurement of intracellular concentration of PHT and Dox The concentration of intracellular phenytoin or doxorubicin was measured by HPLC. Briefly, cells were pretreated with 10 μM SB202190 or 10 μM verapamil for 1 hour and then treated with phenytoin sodium or doxorubicin at a final concentration of 10 μM for 36 hours. Cells (2×10⁶ in 2 ml of culture medium) were then collected and resuspended in 0.3 mol/L HCl/50% ethanol.
After centrifugation at 10,000 rpm for 10 min, the supernatant (20 μl) was loaded onto the HPLC column for measurement of the intracellular drug concentration according to the manufacturer's protocol. Statistical analysis All statistical analyses were carried out using SigmaStat (Chicago, IL). Comparisons between groups were performed using either paired Student's t-tests or one-way ANOVA, as indicated. Data are presented as mean ±SD or ±SEM. Differences were considered significant at P<0.05.
Results: Generation of K562/Dox cells by doxorubicin and responsiveness to p38 inhibitor K562/Dox cells were generated by repeated treatments with doxorubicin, and resistance was confirmed by the induction of PGP expression. Immunocytochemistry showed that wild-type K562 cells had an undetectable level of PGP expression (Figure 1A, left panel), whereas most K562/Dox cells were PGP-positive and appeared light brown (Figure 1A, right panel), indicating the induction of PGP expression by doxorubicin. The drug-resistant cell line was further characterized by detection of the multidrug resistance 1 gene, MDR1, in the absence or presence of the p38 inhibitor SB202190 (Figure 1B).
After quantitative analysis of RT-PCR, we found that SB202190 treatment significantly decreased MDR1 expression in K562/Dox cells (Figure 1C; P<0.001; n=10). However, treatment with U0126, a highly selective inhibitor of both MEK1 and MEK2, had no effect on MDR1 expression. Inhibition of p38 leading to a decrease of PGP expression in K562/Dox cells The expression of PGP in K562/Dox cells was further examined by Western blot. SB202190 treatment for 48 hours decreased PGP expression, whereas U0126 treatment had no effect (Figure 2A). After quantitative analysis, we found that the p38 inhibitor significantly reduced the expression of PGP (Figure 2B).
Next, we confirmed that SB202190 indeed significantly suppressed phospho-p38, the active form of p38, as well as total p38 in K562/Dox cells (Figure 2C, D). Reducing drug resistance by p38 inhibitor in K562/Dox cells We subsequently investigated whether p38 was involved in the cells' resistance to the drugs tested. First, we examined cell viability by pretreating the cells with the p38 inhibitor SB202190 or the positive control verapamil, followed by treatment with phenytoin sodium or doxorubicin. The inhibition of cell viability by phenytoin sodium and doxorubicin was determined by CCK-8 assay. We found that in the presence of SB202190, phenytoin sodium and doxorubicin significantly decreased the number of living cells (Figure 3A, B). Control experiments confirmed that treatment with 10 μM SB202190 or verapamil alone had no effect on cell viability in either K562 or K562/Dox cells (data not shown). Second, we measured the IC50 of phenytoin and doxorubicin in K562/Dox cells. The IC50 values of phenytoin and doxorubicin in K562/Dox cells were significantly higher than those in K562 cells (2186.33±214.70 vs. 468.82±44.67 μg/mL and 4.33±0.50 vs. 0.32±0.05 μg/mL, respectively) (Table 1), indicating that K562/Dox cells were drug-resistant. After blocking p38 with 10 μM SB202190 in K562/Dox cells, the IC50 of phenytoin decreased significantly, from 2186.33±214.70 to 949.83±131.31 μg/mL, with an RI of 2.30, similar to that of the verapamil control (2.56). The IC50 of doxorubicin was also lower in cells treated with 10 μM SB202190 than in untreated K562/Dox cells (0.40±0.09 vs. 4.33±0.50 μg/mL), with an RI of 10.83, similar to that of verapamil (12.37). Third, we measured the intracellular concentrations of phenytoin and doxorubicin by HPLC.
The intracellular levels of phenytoin and doxorubicin were significantly lower in K562/Dox cells than in K562 cells (Figure 4A, B), further confirming the drug resistance of K562/Dox cells. This decrease in the intracellular levels of phenytoin and doxorubicin in K562/Dox cells was significantly reversed in the presence of SB202190 (Figure 4). These data demonstrate that p38 is, at least in part, involved in the regulation of drug resistance in K562/Dox cells.
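The reversal index defined in Methods (RI = IC50 without inhibitor / IC50 with inhibitor) can be checked against the IC50 values reported above for K562/Dox cells. A small sketch reproducing the reported RI values; the function name is ours, while the IC50 figures come from the text:

```python
# Reversal index (RI) as defined in Methods:
#   RI = IC50 without inhibitor / IC50 with inhibitor.
# An RI > 1 means the inhibitor re-sensitized the cells to the drug.

def reversal_index(ic50_without_inhibitor, ic50_with_inhibitor):
    return ic50_without_inhibitor / ic50_with_inhibitor

# IC50 values (μg/mL) for K562/Dox cells reported in the text:
ri_phenytoin = reversal_index(2186.33, 949.83)  # + 10 μM SB202190
ri_doxorubicin = reversal_index(4.33, 0.40)     # + 10 μM SB202190

# These agree with the reported RIs of 2.30 and 10.83
# (verapamil controls: 2.56 and 12.37).
print(f"phenytoin RI ≈ {ri_phenytoin:.2f}")
print(f"doxorubicin RI ≈ {ri_doxorubicin:.2f}")
```

The larger RI for doxorubicin than for phenytoin indicates that p38 inhibition restored sensitivity to the chemotherapy drug more strongly, consistent with the verapamil comparison in Table 1.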
Reducing drug resistance by p38 inhibitor in K562/Dox cells: We subsequently investigated whether p38 was involved in cell resistance to the drugs tested. First, we examined cell viability by pretreating the cells with the p38 inhibitor SB202190 or a positive control, verapamil, followed by treating the cells with phenytoin sodium or doxorubicin. The inhibition of cell viability by phenytoin sodium and doxorubicin was determined by CCK-8 assay. We found that in the presence of SB202190, phenytoin sodium and doxorubicin significantly decreased the number of living cells (Figure 3A, B). Treatment with 10 μM SB202190 or verapamil alone had no effect on cell viability in either K562 or K562/Dox cells (data not shown). Second, we measured the IC50 of phenytoin and doxorubicin in K562/Dox cells. The IC50 of phenytoin and doxorubicin in K562/Dox cells was significantly higher than that in K562 cells (2186.33±214.70 vs. 468.82±44.67 μg/mL and 4.33±0.50 vs. 0.32±0.05 μg/mL, respectively) (Table 1), indicating that K562/Dox cells were drug-resistant. After blocking p38 with 10 μM SB202190 in K562/Dox cells, we observed that the IC50 of phenytoin was significantly decreased, from 2186.33±214.70 to 949.83±131.31 μg/mL, with an RI of 2.30, similar to that of the verapamil control (2.56). The IC50 of doxorubicin was also lower in cells treated with 10 μM SB202190 than in untreated K562/Dox cells (0.40±0.09 vs. 4.33±0.50 μg/mL), with an RI of 10.83, similar to that of verapamil (12.37). Third, we measured the intracellular concentrations of phenytoin and doxorubicin by HPLC. The intracellular levels of phenytoin and doxorubicin were significantly lower in K562/Dox cells than in K562 cells (Figure 4A, B), further confirming the drug resistance of K562/Dox cells. This decrease in the intracellular levels of phenytoin and doxorubicin in K562/Dox cells was significantly abolished in the presence of SB202190 (Figure 4). 
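The RI values quoted above follow directly from the reported IC50 means: each RI is the ratio of the IC50 without the inhibitor to the IC50 with it, as a quick check shows:

```python
# RI as implied by the reported numbers: the fold-decrease in IC50 when the
# resistant K562/Dox line is co-treated with the p38 inhibitor SB202190.
# IC50 means (μg/mL) are taken from the text.

def ri(ic50_untreated, ic50_with_inhibitor):
    """Fold-decrease in IC50 on co-treatment with the inhibitor."""
    return ic50_untreated / ic50_with_inhibitor

ri_phenytoin = ri(2186.33, 949.83)  # ≈ 2.30, matching the reported RI
ri_doxorubicin = ri(4.33, 0.40)     # ≈ 10.83, matching the reported RI
```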
These data clearly demonstrate that p38 is, at least in part, involved in the regulation of drug resistance in K562/Dox cells. Discussion: Drug resistance often occurs in anti-cancer and anti-epileptic therapy. Previous studies have shown that the multidrug transporter PGP is involved in cell resistance to chemotherapy and refractory epilepsy [17,18]. A PGP antagonist may effectively reverse chemotherapy and epilepsy drug resistance [7,19]. Here, we demonstrated that the p38 MAPK signaling pathway is involved in doxorubicin-induced drug resistance associated with PGP regulation, and that a p38 inhibitor may serve as a PGP antagonist. PGP is a transmembrane glycoprotein that functions as a drug transporter, actively pumping a variety of anti-cancer agents and other hydrophobic compounds out of cells [7,20], thus reducing intracellular drug concentrations and leading to drug resistance [10]. It has been shown that long-term exposure of tumor cells to some types of chemotherapy drugs causes resistance [21]. This is consistent with the results of our current study, in which drug resistance associated with PGP expression was induced by repeated treatment with doxorubicin. PGP-overexpressing K562/Dox cells allowed us to study the effect of a p38 inhibitor on drug resistance. The expression of MDR genes and multidrug transporters, such as PGP, is regulated by many factors, including cytotoxic drugs and stresses [11–13]; these factors also activate the p38 MAPK pathway [16,22]. Both PGP and p38 MAPK are involved in cellular processes (eg, apoptosis and cell proliferation) [23,24], indicating that there may be a link between p38 MAPK and PGP. We and others demonstrated that inhibition of p38 by SB202190 can decrease the expression of PGP and MDR1, the gene that encodes PGP, in K562/Dox (current study) and gastric cancer cells [25], suggesting that p38 MAPK signaling is involved in the regulation of PGP. 
U0126, a highly selective inhibitor of mitogen-activated protein kinase/extracellular signal-regulated kinase (ERK) kinase (MEK) [26], can reduce the endogenous expression levels of PGP in the human colorectal cancer cells HCT-15 and SW620-14 [27] and functionally antagonizes AP-1 transcriptional activity through noncompetitive inhibition of MEK1/2 [28]. However, in this study we found that the expression of PGP in K562/Dox cells was not affected by U0126, indicating that the MEK1/2 signaling pathway is not a major pathway involved in PGP regulation in leukemia cells and that the effect of U0126 may be cell-type specific. To restore the sensitivity of resistant cells to phenytoin and doxorubicin, we applied SB202190, a p38 MAPK-specific antagonist [29], which supports the specificity of the p38 MAPK pathway in PGP regulation. K562/Dox cells that highly expressed PGP were resistant to doxorubicin and phenytoin sodium, and in the presence of SB202190 these cells reversed their drug resistance to a degree similar to that achieved with the well-known PGP antagonist verapamil. This result further confirms that the p38 MAPK pathway is involved in multidrug resistance through the regulation of PGP. Our data are consistent with the previous finding that, in an acidic environment, the p38 MAPK pathway mediates the upregulation of PGP in rat prostate cancer cells [30]. However, how p38 MAPK regulates PGP expression is not yet clear. It has been reported that there are NF-κB binding sites in the MDR1 promoter region, suggesting that MDR1 may be activated by NF-κB [31]. p38 MAPK can activate NF-κB expression [32], raising the possibility that p38 MAPK regulates PGP expression through the activation of this transcription factor. Conclusions: Our study shows that the p38 signaling pathway is involved in doxorubicin-induced drug resistance. The inhibition of p38 MAPK diminishes doxorubicin-induced drug resistance, associated with the down-regulation of PGP. 
Thus, p38 inhibitors may provide a new chemotherapeutic option to overcome drug resistance in the treatment of cancer and epilepsy. Further studies on the mechanisms of p38 inhibitors and the development of effective PGP-specific antagonists with low toxicity will improve the clinical effects of chemotherapy and anti-epileptic therapy.
Background: Several studies have shown that multidrug transporters, such as P-glycoprotein (PGP), are involved in cell resistance to chemotherapy and refractory epilepsy. The p38 mitogen-activated protein kinase (MAPK) signaling pathway may increase PGP activity. However, p38-mediated drug resistance associated with PGP is unclear. Here, we investigated p38-mediated doxorubicin-induced drug resistance in human leukemia K562 cells. Methods: The expression of PGP was detected by RT-PCR, Western blot, and immunocytochemistry. Cell viability and half-inhibitory concentrations (IC50) were determined by CCK-8 assay. The intracellular concentration of drugs was measured by HPLC. Results: A doxorubicin-induced PGP-overexpressing cell line, K562/Dox, was generated. The p38 inhibitor SB202190 significantly decreased MDR1 mRNA expression, as well as PGP, in K562/Dox cells. The IC50 of phenytoin sodium and doxorubicin in K562/Dox cells was significantly higher than that in wild-type K562 cells, indicating the drug resistance of K562/Dox cells. When p38 activity was blocked with SB202190, cell number was significantly reduced after phenytoin sodium and doxorubicin treatment, and the IC50 of both drugs was decreased in K562/Dox cells. HPLC showed that the intracellular levels of phenytoin sodium and doxorubicin were significantly lower in K562/Dox cells than in K562 cells. This decrease in intracellular drug levels was significantly abolished in the presence of SB202190. Conclusions: Our study demonstrated that p38 is, at least in part, involved in doxorubicin-induced drug resistance. The mechanistic study of MAPK-mediated PGP and the action of SB202190 need further investigation.
Background: The p38 mitogen-activated protein kinase (MAPK) signaling pathway mediates multiple cellular events, including proliferation, differentiation, migration, adhesion and apoptosis, in response to various extracellular stimuli, such as growth factors, hormones, ligands for G protein-coupled receptors, inflammatory cytokines, and stresses [1]. It has been shown that p38 MAPK signaling is associated with cancers in humans and mice [2] and regulates gene expression through the activation of transcription factors. Long-term exposure of tumor cells to certain types of chemotherapy drugs causes resistance. The best example is doxorubicin, an anti-cancer drug that often leads to drug resistance [3,4]. Recent studies on cell resistance to chemotherapy and refractory epilepsy drugs showed that multidrug resistance (MDR) transporters, especially P-glycoprotein (PGP) encoded by MDR1, play an important role in multidrug resistance [5–7]. PGP is a membrane-associated protein with 6 transmembrane domains and an adenosine triphosphate (ATP) binding site. This energy-dependent structure provides the characteristics of a drug efflux transporter that can pump drugs and other hydrophobic compounds out of cells, reducing the intracellular drug concentration, thus leading to drug resistance [8–10]. PGP expression can be induced by several factors, including cytotoxic drugs, irradiation, heat shock, and other stresses [11–13]. These factors also activate the p38 MAPK signaling pathway [14–16], suggesting that the p38 MAPK signaling pathway may be involved in the regulation of PGP expression. In this study we investigated the effect of a highly selective, potent, cell-permeable inhibitor of p38 MAPK (SB202190) on doxorubicin-induced drug resistance associated with PGP in a leukemia cell line. 
We demonstrated that p38 MAPK is involved in doxorubicin-induced PGP expression, cell resistance to the antiepileptic drug phenytoin sodium, and the chemotherapy drug, doxorubicin, in leukemia cells.
6,949
326
[ 359, 132, 202, 244, 191, 130, 145, 109, 168, 93, 395 ]
16
[ "cells", "k562", "dox", "doxorubicin", "10", "k562 dox", "p38", "pgp", "dox cells", "k562 dox cells" ]
[ "inhibition p38 mapk", "multidrug resistance regulation", "drug resistance p38", "cell resistance chemotherapy", "signaling associated cancers" ]
[CONTENT] p38 MAPK | drug resistance | P-glycoprotein | doxorubicin | cancer [SUMMARY]
[CONTENT] ATP Binding Cassette Transporter, Subfamily B, Member 1 | Cell Survival | Dose-Response Relationship, Drug | Doxorubicin | Drug Resistance, Neoplasm | Humans | Imidazoles | K562 Cells | Leukemia | Phenytoin | Protein Kinase Inhibitors | Pyridines | p38 Mitogen-Activated Protein Kinases [SUMMARY]
[CONTENT] inhibition p38 mapk | multidrug resistance regulation | drug resistance p38 | cell resistance chemotherapy | signaling associated cancers [SUMMARY]
[CONTENT] cells | k562 | dox | doxorubicin | 10 | k562 dox | p38 | pgp | dox cells | k562 dox cells [SUMMARY]
[CONTENT] mapk | resistance | p38 mapk | drug | factors | mapk signaling | drugs | signaling | p38 | pgp [SUMMARY]
[CONTENT] sd | tests | presented mean sd sem | way | way anova | way anova indicated | way anova indicated data | sem differences considered significant | anova | anova indicated [SUMMARY]
[CONTENT] k562 | cells | dox cells | k562 dox | k562 dox cells | dox | figure | significantly | expression | p38 [SUMMARY]
[CONTENT] inhibitors | p38 | induced drug resistance | induced | induced drug | doxorubicin induced | doxorubicin induced drug | doxorubicin induced drug resistance | epilepsy | resistance [SUMMARY]
[CONTENT] cells | k562 | 10 | pgp | p38 | doxorubicin | dox | k562 dox | expression | drug [SUMMARY]
[CONTENT] PGP ||| PGP ||| PGP ||| K562 [SUMMARY]
[CONTENT] PGP | RT-PCR ||| half | CCK-8 ||| HPLC [SUMMARY]
[CONTENT] PGP | K562/Dox ||| PGP | K562 ||| K562 | K562 | K562/Dox ||| SB202190 | K562 ||| HPLC | K562 | K562 ||| SB202190 [SUMMARY]
[CONTENT] ||| PGP | SB202190 [SUMMARY]
[CONTENT] PGP ||| PGP ||| PGP ||| K562 ||| PGP | RT-PCR ||| half | CCK-8 ||| HPLC ||| PGP | K562/Dox ||| PGP | K562 ||| K562 | K562 | K562/Dox ||| SB202190 | K562 ||| HPLC | K562 | K562 ||| SB202190 ||| ||| PGP | SB202190 [SUMMARY]
Lp-PLA2 evaluates the severity of carotid artery stenosis and predicts the occurrence of cerebrovascular events in high stroke-risk populations.
33458873
Lipoprotein-associated phospholipase A2 (Lp-PLA2) is an independent risk factor for cardiovascular disease. However, its relationship with carotid artery stenosis and cerebrovascular events in high stroke-risk populations is still unclear.
BACKGROUND
A total of 835 people at a high risk of stroke were screened from 15,933 people aged >40 years in April 2013 and followed at 3, 6, 12, and 24 months. Finally, 823 participants met the screening criteria, and the clinical data and biochemical parameters were investigated.
METHODS
Among the 823 participants, 286 had varying degrees of carotid artery stenosis and 18 had cerebrovascular events. The level of Lp-PLA2 in the carotid artery stenosis group was higher than that in the no-stenosis group, and the level in the event group was higher than that in the no-event group (p < 0.05). Spearman correlation analysis showed that Lp-PLA2 was positively correlated with the degree of carotid artery stenosis (r = 0.093, p = 0.07) and stenosis involvement (r = 0.094, p = 0.07). Among the lipoproteins, the correlation coefficient between Lp-PLA2 and sdLDL was the highest (r = 0.555, p < 0.001), followed by non-HDL, LDL, TC, and TG. Cox multivariate regression analysis revealed that, compared with the first quartile of Lp-PLA2 level (Q1, low level), the risk of cerebrovascular events in the fourth quartile (Q4) was 10.170 times higher (OR = 10.170, 95% CI 1.302-79.448, p = 0.027).
RESULTS
Lp-PLA2 levels can evaluate carotid artery stenosis and predict the occurrence of cerebrovascular events in high stroke-risk populations and provide scientific guidance for risk stratification management.
CONCLUSIONS
[ "1-Alkyl-2-acetylglycerophosphocholine Esterase", "Aged", "Aged, 80 and over", "Biomarkers", "Carotid Arteries", "Carotid Stenosis", "Cerebrovascular Disorders", "Female", "Humans", "Male", "Middle Aged", "Risk", "Risk Factors", "Stroke" ]
7957999
INTRODUCTION
Cerebrovascular disease (CVD) has become a major social and public health problem worldwide, 1 , 2 encompassing stroke, cerebrovascular malformations, and other disorders of cerebral blood circulation. 3 The mortality rate of cerebrovascular diseases in urban and rural areas is 125.78 per 100,000 and 151.91 per 100,000, respectively. 4 , 5 Given the high morbidity, mortality, disability, and recurrence rates of CVD, 6 it places a very heavy burden on families and society. Therefore, the prevention and treatment of cerebrovascular diseases are urgently needed. Hypertension, dyslipidemia, heart disease, diabetes, smoking, being overweight, and lack of physical activity are common risk factors for CVD. 7 Among these risk factors, blood biomarkers play a critical role in the formation and rupture of atherosclerotic plaques. 8 However, currently available evidence is insufficient to predict CVD events. To identify an adequate method for predicting the occurrence of cerebrovascular disease in populations at a high risk of stroke, research on new biomarkers for the early prediction of CVD events has attracted wide attention. Recently, researchers have explored biomarkers associated with atherosclerosis (AS) and CVD. Among these studies, lipoprotein-associated phospholipase A2 (Lp-PLA2) was found to significantly promote AS. 9 Atherosclerosis is a disorder of lipid metabolism and a chronic inflammatory disease. 10 , 11 Endothelial dysfunction is a pathological basis of cerebrovascular disease and one of the major factors in the formation of carotid artery stenosis. 12 This eventually leads to cardiovascular and cerebrovascular events. Lp-PLA2 is mainly secreted by macrophages, T lymphocytes, monocytes, and mast cells. 
13 Lp-PLA2 plays a key role in the development of inflammation and AS, mainly through the aggregation and activation of leukocytes and platelets, vascular smooth muscle cell proliferation and migration, endothelial dysfunction, the expression of adhesion molecules and cytokines, and the formation of the necrotic core of plaques. Lp-PLA2 downregulates the synthesis and release of nitric oxide in endothelial cells, enhances the oxidative stress response, and promotes endothelial cell apoptosis. 14 This study mainly aimed to explore the relationship between Lp-PLA2 levels and carotid artery stenosis and the value of Lp-PLA2 for the early prediction of CVD events in stroke-risk populations, and to provide a scientific basis for risk stratification management and early prevention in these populations, in order to reduce the heavy burden of cerebrovascular disease on families and society.
METHODS
Ethics statements: This study was approved by the Ethics Committee of Zhejiang Provincial People's Hospital (No. 2020QT286), which waived the requirement of informed consent.

Study population: In April 2013, the study subjects were screened from 15,933 residents aged >40 years in Zhaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. From those at a high risk of stroke, 835 candidates were selected, and members of the research team conducted telephone follow-ups at the 3rd, 6th, 12th, and 24th months from April 2013; 823 participants eventually met the criteria, including 473 women and 350 men (study flow is shown in Figure 1). The inclusion criteria were as follows: aged >40 years; a history of stroke; a history of transient ischemic attack; a history of hypertension (≥140/90 mm Hg) or taking antihypertensive drugs; atrial fibrillation and valvular disease; smoking; dyslipidemia or unknown; diabetes; rarely performed physical exercise (the standard frequency of physical exercise is ≥3 times a week, each ≥30 minutes, for more than 1 year; those engaged in moderate to severe physical labor are regarded as having regular physical exercise); obesity (body mass index [BMI] ≥26 kg/m2); and a family history of stroke. Those who had lived or worked outside the study areas for more than half a year; had severe liver or kidney disease, malignant tumors, mental illness, or systemic immune disease; or had incomplete data or were lost during the 2-year follow-up period were excluded. The screening of the above at-risk population is in line with national clinical guidelines for stroke. 15

China stroke prevention project committee (CSPPC) stroke program: Stroke is a chronic disease that seriously threatens the health of the Chinese population. The incidence of stroke across the country is rising at an annual rate of 8.7%. The annual cost of treating cerebrovascular diseases is more than 10 billion yuan, and indirect economic losses cost nearly 20 billion yuan every year. To address the various challenges caused by stroke, the Chinese Ministry of Health established the CSPPC in April 2011. CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke-risk factor screening and risk assessment for permanent residents over 40 years old in high-incidence areas; to conduct health education and regular physical examinations for selected low-risk populations; to provide intervention guidance for the middle-risk population based on individual characteristics; and to perform further inspections and comprehensive intervention for the high-risk population. During regular follow-up of the middle-risk and high-risk groups, patients identified to have cervical vascular disease or suspected stroke are referred to the hospital for further diagnosis and treatment. 
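The screening lists risk factors but not the combining rule. A minimal sketch, assuming the commonly used CSPPC criterion (any history of stroke or TIA, or at least three of the listed modifiable risk factors); the ≥3 threshold is an assumption, not stated in this text:

```python
# Hedged sketch of a stroke-risk screening rule. The ">= 3 of the listed
# factors, or any stroke/TIA history" threshold is an assumption based on
# common CSPPC screening practice; this text does not state the rule.

RISK_FACTORS = {"hypertension", "atrial_fibrillation", "smoking",
                "dyslipidemia", "diabetes", "physical_inactivity",
                "obesity", "family_history_of_stroke"}

def is_high_risk(history_stroke_or_tia: bool, factors: set) -> bool:
    """Classify a resident as high stroke risk under the assumed rule."""
    if history_stroke_or_tia:
        return True
    return len(factors & RISK_FACTORS) >= 3

print(is_high_risk(False, {"smoking", "obesity"}))              # False
print(is_high_risk(False, {"smoking", "obesity", "diabetes"}))  # True
```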
Carotid ultrasound and physical examination: The neck ultrasound examinations of all subjects were performed using the S2000 ultrasound diagnostic apparatus (Siemens, Germany) according to the guidelines established by the European Stroke Conference. The blood vessels examined included the bilateral common carotid artery, carotid sinus, internal carotid artery, subclavian artery, and vertebral artery. Carotid artery stenosis 16 was divided into (i) mild stenosis, in which the inner diameter is reduced by 1%–49%, the ultrasound image shows local plaques, and there is no significant change in blood flow; (ii) moderate stenosis, in which the inner diameter is reduced by 50%–69%, the blood flow is accelerated at the plaque stenosis, and a pathological vortex is formed at the distal end of the stenosis; and (iii) severe stenosis, in which the inner diameter is reduced by 70%–99%, the plaque burden is aggravated, the blood flow is further accelerated at the plaque stenosis, and pathological vortex and turbulent mixed signals are formed at the distal end. All relevant measurements, such as weight, height, waist circumference, and blood pressure (BP), were taken by trained medical personnel in strict accordance with the corresponding standards. After verification, the data were entered into the database of the China Stroke Data Center.

Data collection and blood biochemistry: All subjects fasted for at least 8 hours, and 3 mL of cubital venous blood was drawn into a separating gel-accelerating vacuum blood collection tube and left for approximately 15 minutes. The serum was separated by centrifugation at a relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room. Blood glucose (GLU), triglyceride (TG), total cholesterol (TC), low-density lipoprotein (LDL), high-density lipoprotein (HDL), homocysteine (HCY), uric acid (UA), and free fatty acid (FFA) tests were completed on the same day. The other portion was immediately stored in a refrigerator at −80°C; small and dense low-density lipoprotein (sdLDL), Lp-PLA2, high-sensitivity C-reactive protein (hs-CRP), cystatin C (Cys C), and lipoprotein(a) (LPa) were measured on the day of the experiment. 
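The grading cut-offs above transcribe directly into a small classifier. The "no stenosis" and "occlusion" ends of the scale are assumptions outside the quoted 1%–99% range:

```python
# Direct transcription of the ultrasound grading cut-offs in the text:
# 1-49% mild, 50-69% moderate, 70-99% severe. The "no stenosis" and
# "occlusion" categories are assumed extensions, not stated in the text.

def stenosis_grade(percent_reduction: float) -> str:
    """Grade carotid stenosis from % inner-diameter reduction."""
    if percent_reduction <= 0:
        return "no stenosis"   # assumed
    if percent_reduction < 50:
        return "mild"
    if percent_reduction < 70:
        return "moderate"
    if percent_reduction < 100:
        return "severe"
    return "occlusion"         # assumed

print(stenosis_grade(35))  # mild
print(stenosis_grade(65))  # moderate
print(stenosis_grade(80))  # severe
```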
Demographic parameters (age, sex, waist circumference, BMI, systolic blood pressure, and diastolic blood pressure); medical history (heart disease, diabetes, hypertension, dyslipidemia, and smoking); family history (hypertension, diabetes, coronary heart disease, and stroke); carotid artery ultrasound results; and cardiovascular and cerebrovascular events were all collected from the China Stroke Data Center database. 
Statistical analysis SPSS 20.0 was used to analyze the data, and GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA) was used to draw graphs. Count data were expressed as percentages, and comparisons between groups were analyzed using the chi‐square test. Measurement data with a normal distribution were expressed as mean ± standard deviation. Comparisons between two groups were performed using the t test, and comparisons among multiple groups were performed by one‐way analysis of variance. Measurement data with a non‐normal distribution were expressed as median (interquartile range), and the rank‐sum test was used for comparisons between groups. Correlation analysis was performed using Spearman or Pearson correlation analysis. Regression analysis was performed using multivariate logistic regression. Survival analysis was performed using the proportional hazards regression model (Cox regression), and p < 0.05 was considered statistically significant.
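As a concrete example of the correlation step, a Pearson coefficient can be computed from paired measurements. This is a minimal pure‐Python sketch of the standard formula, with made‐up illustrative values rather than study data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative paired values only (not study data)
lp_pla2 = [480.0, 520.0, 610.0, 655.0, 700.0]
sdldl = [0.8, 0.9, 1.2, 1.3, 1.5]
r = pearson_r(lp_pla2, sdldl)
```

In practice SPSS (as used here) or a library routine would report the coefficient together with its p value.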
RESULTS
Lp‐PLA2 levels in groups with different degrees of carotid artery stenosis According to the neck ultrasound findings, there were 537 patients in the no‐stenosis group, 254 in the mild stenosis group, and 32 in the moderate to severe stenosis group. As the severity of carotid artery stenosis increased, age, systolic BP, the proportion of men, and the proportion with a history of diabetes increased, and the differences among the three groups were statistically significant (p < 0.05) (Table 1). The level of Lp‐PLA2 was higher in the moderate to severe stenosis group than in the no‐stenosis group (p < 0.05), and higher in the mild stenosis group than in the no‐stenosis group (p < 0.05). No statistically significant differences in sdLDL or the other parameters were found among the three groups (p > 0.05). The correlation between Lp‐PLA2 and the other parameters was then analyzed: Lp‐PLA2 was positively correlated with both the degree and the range of carotid artery stenosis. In this high‐risk population, the clinical biochemical indicators TG, TC, LDL, nonHDL, GLU, Homeostatic Model Assessment for Insulin Resistance (HOMA‐IR), UA, HCY, FFA, Cys C, hs‐CRP, and sdLDL were positively correlated with Lp‐PLA2 (r > 0, p < 0.05), and the correlation coefficients between Lp‐PLA2 and the lipid parameters ranked as follows: sdLDL > nonHDL > LDL > TC > TG. sdLDL showed the strongest correlation with Lp‐PLA2 (r = 0.555), whereas HDL was negatively correlated with Lp‐PLA2 (r = −0.145, p < 0.001). 
Demographic characteristics and clinical biochemical parameters in different carotid artery stenosis groups Abbreviations: BMI, body mass index; DBP, diastolic blood pressure; FFA, free fatty acid; HCY, homocysteine; HDL, high‐density lipoprotein; HOMA‐IR, homeostasis model assessment for insulin resistance; HOMA‐IS, homeostasis model assessment for insulin sensitivity; hs‐CRP, high‐sensitivity C‐reactive protein; LDL, low‐density lipoprotein; LP(a), lipoprotein(a); Lp‐PLA2, lipoprotein‐associated phospholipase A2; nonHDL, non–high‐density lipoprotein cholesterol; SBP, systolic blood pressure; sdLDL, small dense low‐density lipoprotein; TC, total cholesterol; TG, triglycerides; UA, uric acid. 
Association between Lp‐PLA2 level and occurrence of cerebrovascular events We followed the patients for 2 years and found that 18 had cerebrovascular events, and the remaining 805 had no cerebrovascular events. The level of Lp‐PLA2 was higher in the group with cerebrovascular events than in the group without (662.81 ± 111.25 vs 559.86 ± 130.05 IU/L, p < 0.001) (Table 2 and Figure 2). Levels of parameters that reflect renal function, such as Cys C, HCY, and UA, were also higher in the event group than in the no‐event group (Table 2, p < 0.05). The level of hs‐CRP, an inflammatory marker, was likewise higher in the event group than in the no‐event group (Table 2, p < 0.05). 
No statistically significant difference was found between the event and no‐event groups for the other parameters, such as HDL and LDL (Table 2, p > 0.05). Lp‐PLA2 levels were then divided into quartiles (Q1–Q4). The incidence of cerebrovascular events increased with increasing Lp‐PLA2 quartile (Table 3, p = 0.027), and the fourth quartile had a higher incidence of cardiovascular and cerebrovascular events than the first quartile (Table 3, p < 0.05). Comparison of demographic characteristics and clinical biochemical parameters by cerebrovascular events. Differences in Lp‐PLA2 levels among people with different degrees of stenosis: *p < 0.05, **p < 0.01, ***p < 0.001. Relationship between the incidence of cerebrovascular events after grouping by Lp‐PLA2 quartiles: the difference between the event and no‐event groups across Lp‐PLA2 levels. First quartile (Q1): Lp‐PLA2 ≤ 473.10 IU/L; second quartile (Q2): 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; third quartile (Q3): 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; fourth quartile (Q4): Lp‐PLA2 ≥ 643.90 IU/L. p < 0.05 was considered statistically significant. 
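The quartile grouping defined above can be expressed as a small lookup. This is a hypothetical helper; the cutoffs are taken directly from the quartile definitions in the text, and the boundary handling mirrors the inequalities as written:

```python
def lp_pla2_quartile(level_iu_per_l: float) -> str:
    """Assign an Lp-PLA2 level (IU/L) to the quartile group defined
    in the text (hypothetical helper; cutoffs from the definitions
    above: Q1 <= 473.10 < Q2 < 560.70 <= Q3 < 643.90 <= Q4)."""
    if level_iu_per_l <= 473.10:
        return "Q1"
    if level_iu_per_l < 560.70:
        return "Q2"
    if level_iu_per_l < 643.90:
        return "Q3"
    return "Q4"
```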
Cerebrovascular events are positively correlated with the degree of carotid artery stenosis The incidence of cerebrovascular events was higher in the stenosis group than in the no‐stenosis group, but the difference was not statistically significant (Table 4, p = 0.17). With the progression of stenosis, that is, from no stenosis to severe stenosis, increases in the degree of stenosis likely lead to an increase in the incidence of cerebrovascular events (Table 4). Furthermore, the greater the number of stenoses, the greater the probability of cerebrovascular events (Table 4). 
Relationship between carotid artery stenosis and the incidence of cerebrovascular events: the difference between the cerebrovascular event and no‐event groups in the stenosis and no‐stenosis groups, with varying degrees and ranges of stenosis. p < 0.05 was considered statistically significant. Effects of Lp‐PLA2 levels on cerebrovascular events and mortality Participants were followed for 2 years. In the subsequent Cox regression analysis using the forward LR method, the occurrence of a cerebrovascular event was used as the event indicator, the follow‐up time as the time variable, and the Lp‐PLA2 quartile as the categorical covariate, with the first quartile as the reference group. The results suggest that Lp‐PLA2 is a risk factor for cerebrovascular events (Figure 3, p = 0.046). The risk of cerebrovascular events increased with increasing quartile; the risk in the fourth quartile was 10.170 times that in the first quartile (OR = 10.170, 95% CI 1.302–79.448, p = 0.027) (Table 5 and Figure 4). 
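As a simpler, unadjusted analogue of the quartile comparison above, an odds ratio between two groups can be computed directly from event counts. This is a pure‐Python sketch with illustrative counts, not the study's Cox‐adjusted estimate, and the counts shown are assumptions for demonstration only:

```python
def odds_ratio(events_a: int, total_a: int,
               events_b: int, total_b: int) -> float:
    """Unadjusted odds ratio of group A vs group B
    (illustrative helper; not the study's Cox-adjusted estimate)."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# Illustrative counts only: hypothetical Q4 vs Q1 groups of equal size
or_q4_vs_q1 = odds_ratio(events_a=10, total_a=206, events_b=1, total_b=206)
```

A Cox model additionally uses the follow‐up time of each participant, which is why the study reports its quartile comparison as a hazard‐based estimate rather than a raw odds ratio.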
Differences in Lp‐PLA2 levels between groups with and without cerebrovascular events: *p < 0.05, **p < 0.01, ***p < 0.001. Cox regression analysis for cerebrovascular events and death by Lp‐PLA2 level (IU/L). Abbreviations: CI, confidence interval; OR, odds ratio. Cox cumulative hazard for people with various Lp‐PLA2 levels. The Lp‐PLA2 levels were grouped as follows: Q1, Lp‐PLA2 ≤ 473.10 IU/L; Q2, 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; Q3, 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; Q4, Lp‐PLA2 ≥ 643.90 IU/L
null
null
[ "INTRODUCTION", "Ethics statements", "Study population", "China stroke prevention project committee (CSPPC) stroke program", "Carotid ultrasound and physical examination", "Data collection and blood biochemistry", "Statistical analysis", "Lp‐PLA2 levels in groups with different degrees of carotid artery stenosis", "Association between Lp‐PLA2 level and occurrence of cerebrovascular events", "Cerebrovascular events are positively correlated with the degree of carotid artery stenosis", "Effects of Lp‐PLA2 levels on cerebrovascular events and mortality" ]
[ "Cerebrovascular disease (CVD) has become a major social and public health problem worldwide,\n1\n, \n2\n including stroke, abnormal cerebrovascular, and malformations, and other disorders of cerebral blood circulation.\n3\n The mortality rate of cerebrovascular diseases in urban and rural areas is 125.78 per 100,000 and 151.91 per 100,000, respectively.\n4\n, \n5\n Given the high morbidity, high mortality, high disability, and high recurrence rates of CVD,\n6\n it places a very heavy burden on families and society. Therefore, the prevention and treatment of cardiovascular diseases are urgently needed.\nHypertension, dyslipidemia, heart disease, diabetes, smoking, being overweight, and lack of physical activity are common risk factors for CVD.\n7\n Among these risk factors, blood biomarkers play a critical role in the formation and rupture of atherosclerotic plaques.\n8\n However, currently available evidence is insufficient to predict CVD events. To identify a sufficient method for the occurrence of cerebrovascular disease in populations at a high risk of stroke, research of new biomarkers to predict CVD events early has attracted wide attention. Recently, researchers have explored biomarkers associated with atherosclerosis (AS) and CVD. 
Among these studies, lipoprotein‐associated phospholipase (Lp‐PLA2) was found to significantly promote AS.\n9\n\n\nAtherosclerosis is a disorder of lipid metabolism and chronic inflammatory diseases.\n10\n, \n11\n Endothelial dysfunction is a pathological basis for cerebrovascular diseases and is one of the major factors in the formation of carotid artery stenosis.\n12\n This eventually leads to cardiovascular and cerebrovascular events.\nLp‐PLA2 is mainly secreted by macrophages, T lymphocytes, monocytes, and mast cells.\n13\n Lp‐PLA2 plays a key role in the development of inflammation and AS, mainly including the aggregation and activation of leukocytes and platelets, vascular smooth muscle cell proliferation and migration, endothelial dysfunction, expression of adhesion molecules and cytokines, and the core of plaque necrosis. The formation of Lp‐PLA2 downregulates the synthesis and release of nitric oxide in endothelial cells, enhances oxidative stress response, and promotes endothelial cell apoptosis.\n14\n\n\nThis study mainly aimed to explore the relationship between Lp‐PLA2 levels and carotid artery stenosis and the value of early prediction of CVD events in stroke‐risk populations and to provide a scientific basis for risk stratification management and early prevention of stroke‐risk populations in order to reduce the heavy burden of cardiovascular disease on families and society.", "This study was approved by the Ethics Committee of Zhejiang Provincial People's Hospital (No. 2020QT286), which waived the requirement of informed consent.", "In April 2013, the study subjects were screened from 15,933 residents aged >40 years in Zhaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. 
For those at a high risk of stroke, 835 candidates were selected, and members of the research team conducted telephone follow‐ups of people at a high risk of stroke at the 3rd, 6th, 12th, and 24th months from April 2013, and 823 eventually met the criteria, including 473 women and 350 men. (Study flow was shown on Figure 1).\nStudy flowchart\nThe inclusion criteria were as follows: aged >40 years; a history of stroke; a history of transient ischemic attack; a history of hypertension (≥140/90 mm Hg) or taking antihypertensive drugs; atrial fibrillation and valvular disease; smoking; dyslipidemia or unknown; diabetes; rarely performed physical exercise (standard frequency of physical exercise is ≥3 times a week, each ≥30 minutes, for more than 1 year; those engaged in moderate to severe physical labor are regarded as having regular physical exercise), obesity (body mass index [BMI] ≥ 26 kg/m2); and a family history of stroke. Those who have lived or worked outside the study areas for more than half a year; have severe liver or kidney disease or malignant tumors, mental illness, or systemic immune disease; or have 2‐year follow‐up period, incomplete data, or lost to follow‐up were excluded. The screening of the above at‐risk population is in line with national clinical guidelines for stroke.\n15\n\n", "Stroke is a chronic disease that seriously threatens the health of the Chinese population. The incidence of stroke across the country is rising at an annual rate of 8.7%. The annual cost of treating cerebrovascular diseases is more than 10 billion yuan, and the indirect economic losses cost nearly 20 billion yuan every year. To address the various challenges caused by stroke, the Chinese Ministry of Health established the CSPPC in April 2011. 
CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke‐risk factor screening and risk assessment for permanent residents over 40 years old in high‐incidence areas and to conduct health education and regular physical examinations for selected low‐risk populations, intervention guidance for the middle‐risk population based on individual characteristics, further inspections for the high‐risk population, and comprehensive intervention. During regular follow‐up of the middle‐risk and high‐risk groups, patients identified to have cervical vascular disease or suspected stroke will be referred to the hospital for further diagnosis and treatment.", "The neck ultrasound examinations of all subjects were performed using ultrasound diagnostic apparatus S2000 (Siemens, Germany) according to the guidelines established by the European Stroke Conference. The blood vessels examined included the bilateral common carotid artery, carotid sinus, internal carotid artery, subclavian artery, and vertebral artery. Carotid artery stenosis\n16\n is then divided into (i) mild stenosis, in which the inner diameter is reduced by 1%–49%, the ultrasound image shows local plaques, and there is no significant change in blood flow; (ii) moderate stenosis, in which the inner diameter is reduced by 50%–69%, the blood flow is accelerated at the plaque stenosis, and the pathological vortex is formed at the distal end of the stenosis; and (iii) severe stenosis, in which the inner diameter is reduced by 70%–99%, the plaque is aggravated, the blood flow is further accelerated at the plaque stenosis, and pathological vortex and turbulent mixed signals are formed at the distal end.\nAll relevant measurements such as weight, height, waist circumference, and blood pressure (BP) were measured by trained medical personnel in strict accordance with the corresponding standards. 
After verification and verification, data were entered into the database of the China Stroke Data Center.", "All subjects fasted for at least 8 hours, and 3 mL of cubital venous blood was drawn into a separating gel‐accelerating vacuum blood collection tube and left for approximately 15 minutes after the plasma was precipitated. The serum was separated by centrifugation at relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room. Blood glucose (GLU), triglycerides (TG), total cholesterol (TC), low‐density lipoprotein (LDL), high‐density lipoprotein (HDL), homocysteine (HCY), uric acid (UA), and free fatty acid (FFA) tests were completed on the same day. One portion was immediately stored in the refrigerator at −80°C. Small and dense low‐density lipoprotein (sdLDL), Lp‐PLA2, high‐sensitivity C‐reactive protein (hs‐CRP), cystatin C (Cys C), and lipoprotein a (LPa) completed the test on the day of the experiment.\nDemographic parameters such as age; sex; waist circumference; BMI, systolic blood pressure; diastolic blood pressure; medical history of all subjects, such as heart disease history, diabetes history, hypertension history, dyslipidemia history, smoking history, family history of hypertension, family history of diabetes data such as stroke, family history of coronary heart disease, family history of stroke; carotid artery ultrasound results; and cardiovascular and cerebrovascular events were all collected from the China Stroke Data Center database.", "SPSS 20.0 was used to analyze the data, and GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA) was used to draw graphs. Count data were expressed as a percentage, and the comparison between groups was analyzed using the chi‐square test. Measurement data with normal distribution were expressed as mean ± standard deviation (±s). 
The comparison between two groups was performed using the t test, and the comparison of multiple groups was performed by single‐factor analysis of variance. Measurement data of non‐normal distribution are expressed as median (interquartile range), and the rank‐sum test was used for comparison between groups. Correlation analysis was performed using Spearman correlation analysis or Pearson correlation analysis. Regression analysis was performed using logistic multivariate regression analysis. Survival analysis uses the proportional hazards regression model (Cox regression analysis) method, and p < 0.05 was considered statistically significant.", "According to the neck ultrasound findings, there were 537 patients in the no stenosis group, 254 patients in the mild stenosis group, and 32 patients in the moderate to severe stenosis group. As the severity of carotid artery stenosis increased, the age, systolic BP, male ratio, and diabetes history ratio increased, and the difference among the three groups was statistically significant (p < 0.05) (Table 1). The level of Lp‐PLA2 was higher in the moderate to severe stenosis group than in the no stenosis group (p < 0.05), and the level in the mild stenosis group was higher than that in the non‐stenosis group (p < 0.05). No statistically significant differences in sdLDL and other parameters were found among the three groups (p > 0.05). Then, the correlation between Lp‐PLA2 and other parameters was analyzed. Studies have shown that Lp‐PLA2 is positively correlated with the degree of carotid artery stenosis and the range of stenosis. In these high‐risk groups, the clinical biochemical indicators TG, TC, LDL, nonHDL, GLU, Homeostatic Model Assessment for Insulin Resistance, UA, HCY, FFA, CYS C, hs‐CRP, and sdLDL were positively correlated with Lp‐PLA2 (r > 0, p < 0.05), and the correlation coefficient between Lp‐PLA2 and other lipoproteins was in this order sdLDL > non‐HDL > LDL > TC > TG. 
sdLDL showed the strongest correlation with Lp‐PLA2 (r = 0.555), and HDL was negatively correlated with Lp‐PLA2 (r = −0.145, p < 0.001).\nDemographic characteristics and clinical biochemical parameters in different carotid artery stenosis groups\nAbbreviation: BMI, body mass index; DBP, diastolic blood pressure; FFA, Free Fat Acid; HCY, homocysteine; HDL, High‐density lipoprotein; HOMA‐IR, Homeostasis model assessment for insulin resistance; HOMA‐IS, Homeostasis model assessment for insulin sensitivity; hs‐CRP, High‐sensitivity C‐reactive protein; LDL, Low‐density lipoprotein; LP(a), Lipoproteins(a); LP‐PLA2, Lipoprotein‐associated phospholipase A2; nonHDL, Non–high‐density lipoprotein cholesterol; SBP, systolic blood pressure; sdLDL, small dense low‐density lipoprotein; TC, Total cholesterol; TG, Triglycerides; UA, uric acid.", "We followed the patients for 2 years and found that 18 had cerebrovascular events, and the remaining 805 had no cerebrovascular events. The level of Lp‐PLA2 was higher in the group with cerebrovascular events than in the group without cerebrovascular events (662.81 ± 111.25 vs 559.86 ± 130.05, p < 0.001) (Table 2 and Figure 2). The levels of some parameters that can reflect renal function, such as Cys C, HCY, and UA, were also higher in the event group than in the no event group (Table 2, p < 0.05). The level of hs‐CRP, an inflammatory marker, was also higher in the event group than in the no event group (Table 2, p < 0.05). No statistical difference was found between the other parameters of the event group, such as HDL, LDL, and the no event group (Table 2, p > 0.05). The levels of Lp‐PLA2 were then divided into four groups (Q1–Q4). As the quartile of Lp‐PLA2 levels increases, the incidence of cerebrovascular events increases. In the cerebrovascular event group, the incidence of cerebrovascular events increased with the increase in Lp‐PLA2 quantile (Table 3, p = 0.027). 
In the group with cerebrovascular events, the fourth quartile had a higher incidence of cardiovascular and cerebrovascular events than the first quartile (Table 3, p < 0.05).\nComparison of demographic characteristics and clinical biochemical parameters of cerebrovascular events\nDifferences in Lp‐PLA2 levels among people with different degrees of stenosis. *p < 0.05, **p < 0.01, ***p < 0.001\nRelationship between the incidence of cerebrovascular events after grouping Lp‐PLA2 quartiles\nThe difference between the events and no evens group of Lp‐PLA2 with varying level.\nFirst quantile (Q1): Lp‐PLA2 ≤ 473.10 IU/L; second quantile (Q2): 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; third quantile (Q3): 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; fourth quartile (Q4): Lp‐PLA2 ≥ 643.90 IU/L.\n\np < 0.05 is considered statistically significant.", "The incidence of cerebrovascular events in the stenosis group was higher than that in the no stenosis group (Table 4, p = 0.17), but no statistically significant difference was noted. As regards progression of stenosis, that is, from no stenosis to severe stenosis, increases in the degree of stenosis likely lead to an increase in the incidence of cerebrovascular events (Table 4). Furthermore, the greater the number of stenoses, the greater the probability of cerebrovascular events (Table 4).\nRelationship between carotid artery stenosis and the incidence of cerebrovascular events\nThe diference between the cebrovascular events and no events in the carotid artery stenosis group and no stenosis, with varying degrees of stenosis and stenosis range.\n\np < 0.05 is considered statistically significant.", "Participants were followed for 2 years. 
In the subsequent analysis using Cox regression to analyze the forward LR method, the occurrence of cerebrovascular events was used as the value indicating that the event had occurred, the follow‐up time was used as the time variable, the Lp‐PLA2 level quartile was used as the classification covariate, and the first quantile was used as the reference group. The results suggest that Lp‐PLA2 is a risk factor for cerebrovascular events (Figure 3, p = 0.046). Compared with the first quartile, the risk of cerebrovascular events increased as the quartile increased. Compared with the first quantile, the risk of cerebrovascular events in the fourth quantile was 10.170 times that of the first quantile (OR=10.170, 95%CI 1.302–79.448, p = 0.027) (Table 5 and Figure 4).\nDifferences in Lp‐PLA2 levels between groups with cerebrovascular events and without cerebrovascular events. *p < 0.05, **p < 0.01, ***p < 0.001\nCox regression analysis for cerebrovascular events and death by Lp‐PLA2 (UI/L) Levels\nAbbreviations: CI, confidence interval; OR, odds ratio.\nCox cumulative hazard for people with various Lp‐PLA2 levels. The Lp‐PLA2 level was grouped as follows: Q1, Lp‐PLA2 ≤ 473.10 IU/L; Q2, 473.10 IU/L < LpPLA2 < 560.70 IU/L; Q3, 560.70 IU/L ≤ LpPLA2 < 643.90 IU/L; Q4, Lp‐PLA2 ≥ 643.90 IU/L" ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Ethics statements", "Study population", "China stroke prevention project committee (CSPPC) stroke program", "Carotid ultrasound and physical examination", "Data collection and blood biochemistry", "Statistical analysis", "RESULTS", "Lp‐PLA2 levels in groups with different degrees of carotid artery stenosis", "Association between Lp‐PLA2 level and occurrence of cerebrovascular events", "Cerebrovascular events are positively correlated with the degree of carotid artery stenosis", "Effects of Lp‐PLA2 levels on cerebrovascular events and mortality", "DISCUSSION" ]
[ "Cerebrovascular disease (CVD) has become a major social and public health problem worldwide,\n1\n, \n2\n encompassing stroke, cerebrovascular abnormalities and malformations, and other disorders of cerebral blood circulation.\n3\n The mortality rate of cerebrovascular diseases in urban and rural areas is 125.78 per 100,000 and 151.91 per 100,000, respectively.\n4\n, \n5\n Given the high morbidity, mortality, disability, and recurrence rates of CVD,\n6\n it places a very heavy burden on families and society. Therefore, the prevention and treatment of cerebrovascular diseases are urgently needed.\nHypertension, dyslipidemia, heart disease, diabetes, smoking, being overweight, and lack of physical activity are common risk factors for CVD.\n7\n Among these risk factors, blood biomarkers play a critical role in the formation and rupture of atherosclerotic plaques.\n8\n However, currently available evidence is insufficient to predict CVD events. To find a reliable method for predicting the occurrence of cerebrovascular disease in populations at a high risk of stroke, research on new biomarkers for early prediction of CVD events has attracted wide attention. Recently, researchers have explored biomarkers associated with atherosclerosis (AS) and CVD. 
Among these studies, lipoprotein‐associated phospholipase A2 (Lp‐PLA2) was found to significantly promote AS.\n9\n\n\nAtherosclerosis is a chronic inflammatory disease involving disordered lipid metabolism.\n10\n, \n11\n Endothelial dysfunction is a pathological basis of cerebrovascular disease and one of the major factors in the formation of carotid artery stenosis,\n12\n eventually leading to cardiovascular and cerebrovascular events.\nLp‐PLA2 is mainly secreted by macrophages, T lymphocytes, monocytes, and mast cells.\n13\n Lp‐PLA2 plays a key role in the development of inflammation and AS, contributing to the aggregation and activation of leukocytes and platelets, vascular smooth muscle cell proliferation and migration, endothelial dysfunction, expression of adhesion molecules and cytokines, and formation of the necrotic core of plaques. Lp‐PLA2 downregulates the synthesis and release of nitric oxide in endothelial cells, enhances the oxidative stress response, and promotes endothelial cell apoptosis.\n14\n\n\nThis study mainly aimed to explore the relationship between Lp‐PLA2 levels and carotid artery stenosis and the value of Lp‐PLA2 for early prediction of CVD events in stroke‐risk populations, and to provide a scientific basis for risk stratification management and early prevention in these populations in order to reduce the heavy burden of cerebrovascular disease on families and society.", "Ethics statements This study was approved by the Ethics Committee of Zhejiang Provincial People's Hospital (No. 2020QT286), which waived the requirement of informed consent.\nStudy population In April 2013, the study subjects were screened from 15,933 residents aged >40 years in Zhaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. 
For those at a high risk of stroke, 835 candidates were selected, and members of the research team conducted telephone follow‐ups at the 3rd, 6th, 12th, and 24th months from April 2013; 823 subjects eventually met the criteria, including 473 women and 350 men (study flow is shown in Figure 1).\nStudy flowchart\nThe inclusion criteria were as follows: aged >40 years; a history of stroke; a history of transient ischemic attack; a history of hypertension (≥140/90 mm Hg) or taking antihypertensive drugs; atrial fibrillation and valvular disease; smoking; dyslipidemia or unknown lipid status; diabetes; rarely performed physical exercise (the standard frequency of physical exercise is ≥3 times a week, each ≥30 minutes, for more than 1 year; those engaged in moderate to heavy physical labor are regarded as having regular physical exercise); obesity (body mass index [BMI] ≥ 26 kg/m2); and a family history of stroke. Those who had lived or worked outside the study areas for more than half a year; had severe liver or kidney disease, malignant tumors, mental illness, or systemic immune disease; or had incomplete data or were lost to follow‐up during the 2‐year follow‐up period were excluded. The screening of the above at‐risk population is in line with national clinical guidelines for stroke.\n15\n\n\nChina stroke prevention project committee (CSPPC) stroke program Stroke is a chronic disease that seriously threatens the health of the Chinese population. The incidence of stroke across the country is rising at an annual rate of 8.7%. The annual cost of treating cerebrovascular diseases is more than 10 billion yuan, and indirect economic losses amount to nearly 20 billion yuan every year. To address the various challenges caused by stroke, the Chinese Ministry of Health established the CSPPC in April 2011. 
CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke‐risk factor screening and risk assessment for permanent residents over 40 years old in high‐incidence areas; it conducts health education and regular physical examinations for the low‐risk population, intervention guidance for the middle‐risk population based on individual characteristics, and further inspections and comprehensive intervention for the high‐risk population. During regular follow‐up of the middle‐risk and high‐risk groups, patients identified to have cervical vascular disease or suspected stroke are referred to the hospital for further diagnosis and treatment.\nCarotid ultrasound and physical examination The neck ultrasound examinations of all subjects were performed using the S2000 ultrasound diagnostic apparatus (Siemens, Germany) according to the guidelines established by the European Stroke Conference. The blood vessels examined included the bilateral common carotid artery, carotid sinus, internal carotid artery, subclavian artery, and vertebral artery. Carotid artery stenosis\n16\n is divided into (i) mild stenosis, in which the inner diameter is reduced by 1%–49%, the ultrasound image shows local plaques, and there is no significant change in blood flow; (ii) moderate stenosis, in which the inner diameter is reduced by 50%–69%, blood flow is accelerated at the plaque stenosis, and a pathological vortex forms at the distal end of the stenosis; and (iii) severe stenosis, in which the inner diameter is reduced by 70%–99%, the plaque is aggravated, blood flow is further accelerated at the plaque stenosis, and pathological vortex and turbulent mixed signals form at the distal end.\nAll relevant measurements such as weight, height, waist circumference, and blood pressure (BP) were taken by trained medical personnel in strict accordance with the corresponding standards. After verification, data were entered into the database of the China Stroke Data Center.\nData collection and blood biochemistry All subjects fasted for at least 8 hours, and 3 mL of cubital venous blood was drawn into a separating gel‐accelerating vacuum blood collection tube and left for approximately 15 minutes after the plasma was precipitated. The serum was separated by centrifugation at a relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room. Blood glucose (GLU), triglycerides (TG), total cholesterol (TC), low‐density lipoprotein (LDL), high‐density lipoprotein (HDL), homocysteine (HCY), uric acid (UA), and free fatty acid (FFA) tests were completed on the same day. One portion was immediately stored in the refrigerator at −80°C. 
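The three-tier grading of carotid stenosis described above can be sketched as a simple lookup on the percent reduction of the inner diameter. This is an illustrative sketch, not the study's measurement software; the "occlusion" label for 100% is an assumption, since the protocol only grades 1%–99%.

```python
# Hedged sketch: map percent reduction of the carotid inner diameter to the
# stenosis grade defined in the ultrasound protocol (mild 1-49%, moderate
# 50-69%, severe 70-99%). Function name and the 100% case are assumptions.

def stenosis_grade(diameter_reduction_pct):
    """Grade carotid stenosis from percent reduction of the inner diameter."""
    if diameter_reduction_pct <= 0:
        return "no stenosis"
    if diameter_reduction_pct <= 49:
        return "mild"       # local plaques, no significant flow change
    if diameter_reduction_pct <= 69:
        return "moderate"   # accelerated flow, distal pathological vortex
    if diameter_reduction_pct <= 99:
        return "severe"     # turbulent mixed signals at the distal end
    return "occlusion"      # 100% falls outside the 1%-99% grading range
```

The boundaries are inclusive at the upper end of each band, matching the 1%–49% / 50%–69% / 70%–99% definitions.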
Small and dense low‐density lipoprotein (sdLDL), Lp‐PLA2, high‐sensitivity C‐reactive protein (hs‐CRP), cystatin C (Cys C), and lipoprotein(a) (Lp(a)) were tested on the day of the experiment.\nDemographic parameters such as age, sex, waist circumference, BMI, systolic blood pressure, and diastolic blood pressure; medical history of all subjects, such as heart disease, diabetes, hypertension, dyslipidemia, and smoking history; family history of hypertension, diabetes, coronary heart disease, and stroke; carotid artery ultrasound results; and cardiovascular and cerebrovascular events were all collected from the China Stroke Data Center database.\nStatistical analysis SPSS 20.0 was used to analyze the data, and GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA) was used to draw graphs. Count data were expressed as percentages, and comparisons between groups were analyzed using the chi‐square test. Measurement data with normal distribution were expressed as mean ± standard deviation. Comparisons between two groups were performed using the t test, and comparisons among multiple groups were performed by one‐way analysis of variance. Measurement data with non‐normal distribution are expressed as median (interquartile range), and the rank‐sum test was used for comparisons between groups. Correlation analysis was performed using Spearman or Pearson correlation analysis. Regression analysis was performed using multivariate logistic regression. Survival analysis used the Cox proportional hazards regression model, and p < 0.05 was considered statistically significant.", "This study was approved by the Ethics Committee of Zhejiang Provincial People's Hospital (No. 2020QT286), which waived the requirement of informed consent.", "In April 2013, the study subjects were screened from 15,933 residents aged >40 years in Zhaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. For those at a high risk of stroke, 835 candidates were selected, and members of the research team conducted telephone follow‐ups at the 3rd, 6th, 12th, and 24th months from April 2013; 823 subjects eventually met the criteria, including 473 women and 350 men. 
(Study flow is shown in Figure 1).\nStudy flowchart\nThe inclusion criteria were as follows: aged >40 years; a history of stroke; a history of transient ischemic attack; a history of hypertension (≥140/90 mm Hg) or taking antihypertensive drugs; atrial fibrillation and valvular disease; smoking; dyslipidemia or unknown lipid status; diabetes; rarely performed physical exercise (the standard frequency of physical exercise is ≥3 times a week, each ≥30 minutes, for more than 1 year; those engaged in moderate to heavy physical labor are regarded as having regular physical exercise); obesity (body mass index [BMI] ≥ 26 kg/m2); and a family history of stroke. Those who had lived or worked outside the study areas for more than half a year; had severe liver or kidney disease, malignant tumors, mental illness, or systemic immune disease; or had incomplete data or were lost to follow‐up during the 2‐year follow‐up period were excluded. The screening of the above at‐risk population is in line with national clinical guidelines for stroke.\n15\n\n", "Stroke is a chronic disease that seriously threatens the health of the Chinese population. The incidence of stroke across the country is rising at an annual rate of 8.7%. The annual cost of treating cerebrovascular diseases is more than 10 billion yuan, and indirect economic losses amount to nearly 20 billion yuan every year. To address the various challenges caused by stroke, the Chinese Ministry of Health established the CSPPC in April 2011. CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke‐risk factor screening and risk assessment for permanent residents over 40 years old in high‐incidence areas; it conducts health education and regular physical examinations for the low‐risk population, intervention guidance for the middle‐risk population based on individual characteristics, and further inspections and comprehensive intervention for the high‐risk population. 
During regular follow‐up of the middle‐risk and high‐risk groups, patients identified to have cervical vascular disease or suspected stroke are referred to the hospital for further diagnosis and treatment.", "The neck ultrasound examinations of all subjects were performed using the S2000 ultrasound diagnostic apparatus (Siemens, Germany) according to the guidelines established by the European Stroke Conference. The blood vessels examined included the bilateral common carotid artery, carotid sinus, internal carotid artery, subclavian artery, and vertebral artery. Carotid artery stenosis\n16\n is divided into (i) mild stenosis, in which the inner diameter is reduced by 1%–49%, the ultrasound image shows local plaques, and there is no significant change in blood flow; (ii) moderate stenosis, in which the inner diameter is reduced by 50%–69%, blood flow is accelerated at the plaque stenosis, and a pathological vortex forms at the distal end of the stenosis; and (iii) severe stenosis, in which the inner diameter is reduced by 70%–99%, the plaque is aggravated, blood flow is further accelerated at the plaque stenosis, and pathological vortex and turbulent mixed signals form at the distal end.\nAll relevant measurements such as weight, height, waist circumference, and blood pressure (BP) were taken by trained medical personnel in strict accordance with the corresponding standards. After verification, data were entered into the database of the China Stroke Data Center.", "All subjects fasted for at least 8 hours, and 3 mL of cubital venous blood was drawn into a separating gel‐accelerating vacuum blood collection tube and left for approximately 15 minutes after the plasma was precipitated. The serum was separated by centrifugation at a relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room. 
Blood glucose (GLU), triglycerides (TG), total cholesterol (TC), low‐density lipoprotein (LDL), high‐density lipoprotein (HDL), homocysteine (HCY), uric acid (UA), and free fatty acid (FFA) tests were completed on the same day. One portion was immediately stored in the refrigerator at −80°C. Small and dense low‐density lipoprotein (sdLDL), Lp‐PLA2, high‐sensitivity C‐reactive protein (hs‐CRP), cystatin C (Cys C), and lipoprotein(a) (Lp(a)) were tested on the day of the experiment.\nDemographic parameters such as age, sex, waist circumference, BMI, systolic blood pressure, and diastolic blood pressure; medical history of all subjects, such as heart disease, diabetes, hypertension, dyslipidemia, and smoking history; family history of hypertension, diabetes, coronary heart disease, and stroke; carotid artery ultrasound results; and cardiovascular and cerebrovascular events were all collected from the China Stroke Data Center database.", "SPSS 20.0 was used to analyze the data, and GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA) was used to draw graphs. Count data were expressed as percentages, and comparisons between groups were analyzed using the chi‐square test. Measurement data with normal distribution were expressed as mean ± standard deviation. Comparisons between two groups were performed using the t test, and comparisons among multiple groups were performed by one‐way analysis of variance. Measurement data with non‐normal distribution are expressed as median (interquartile range), and the rank‐sum test was used for comparisons between groups. Correlation analysis was performed using Spearman or Pearson correlation analysis. Regression analysis was performed using multivariate logistic regression. 
Survival analysis used the Cox proportional hazards regression model, and p < 0.05 was considered statistically significant.", "Lp‐PLA2 levels in groups with different degrees of carotid artery stenosis According to the neck ultrasound findings, there were 537 patients in the no stenosis group, 254 patients in the mild stenosis group, and 32 patients in the moderate to severe stenosis group. As the severity of carotid artery stenosis increased, the age, systolic BP, male ratio, and diabetes history ratio increased, and the differences among the three groups were statistically significant (p < 0.05) (Table 1). The level of Lp‐PLA2 was higher in the moderate to severe stenosis group than in the no stenosis group (p < 0.05), and the level in the mild stenosis group was also higher than that in the no stenosis group (p < 0.05). No statistically significant differences in sdLDL and other parameters were found among the three groups (p > 0.05). The correlation between Lp‐PLA2 and other parameters was then analyzed: Lp‐PLA2 was positively correlated with the degree of carotid artery stenosis and the range of stenosis. In these high‐risk groups, the clinical biochemical indicators TG, TC, LDL, nonHDL, GLU, Homeostatic Model Assessment for Insulin Resistance, UA, HCY, FFA, Cys C, hs‐CRP, and sdLDL were positively correlated with Lp‐PLA2 (r > 0, p < 0.05), and the correlation coefficients between Lp‐PLA2 and the lipoproteins ranked in the order sdLDL > non‐HDL > LDL > TC > TG. 
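The Spearman rank correlation named in the statistical analysis, used here for the non-normally distributed biomarkers, can be sketched in a few lines: Spearman's rho is the Pearson correlation of the rank vectors, with tied values given their average rank. The implementation and toy data below are illustrative, not the study's SPSS output.

```python
# Hedged sketch of Spearman's rank correlation; invented toy data only.

def _ranks(xs):
    """Average 1-based ranks, assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any strictly monotonic relationship, such as Lp‐PLA2 rising with stenosis grade, yields rho = 1 regardless of how nonlinear the raw values are, which is why rank correlation suits skewed biomarkers.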
sdLDL showed the strongest correlation with Lp‐PLA2 (r = 0.555), and HDL was negatively correlated with Lp‐PLA2 (r = −0.145, p < 0.001).\nDemographic characteristics and clinical biochemical parameters in different carotid artery stenosis groups\nAbbreviations: BMI, body mass index; DBP, diastolic blood pressure; FFA, free fatty acid; HCY, homocysteine; HDL, high‐density lipoprotein; HOMA‐IR, homeostasis model assessment for insulin resistance; HOMA‐IS, homeostasis model assessment for insulin sensitivity; hs‐CRP, high‐sensitivity C‐reactive protein; LDL, low‐density lipoprotein; Lp(a), lipoprotein(a); Lp‐PLA2, lipoprotein‐associated phospholipase A2; nonHDL, non–high‐density lipoprotein cholesterol; SBP, systolic blood pressure; sdLDL, small dense low‐density lipoprotein; TC, total cholesterol; TG, triglycerides; UA, uric acid.\nAssociation between Lp‐PLA2 level and occurrence of cerebrovascular events We followed the patients for 2 years and found that 18 had cerebrovascular events and the remaining 805 did not. The level of Lp‐PLA2 was higher in the group with cerebrovascular events than in the group without (662.81 ± 111.25 vs 559.86 ± 130.05 IU/L, p < 0.001) (Table 2 and Figure 2). The levels of parameters that reflect renal function, such as Cys C, HCY, and UA, were also higher in the event group than in the no event group (Table 2, p < 0.05). The level of hs‐CRP, an inflammatory marker, was also higher in the event group than in the no event group (Table 2, p < 0.05). 
No statistically significant difference was found between the event and no event groups for other parameters, such as HDL and LDL (Table 2, p > 0.05). The levels of Lp‐PLA2 were then divided into four quartile groups (Q1–Q4). As the quartile of Lp‐PLA2 levels increased, the incidence of cerebrovascular events increased (Table 3, p = 0.027). In the group with cerebrovascular events, the fourth quartile had a higher incidence of cardiovascular and cerebrovascular events than the first quartile (Table 3, p < 0.05).\nComparison of demographic characteristics and clinical biochemical parameters of cerebrovascular events\nDifferences in Lp‐PLA2 levels among people with different degrees of stenosis. *p < 0.05, **p < 0.01, ***p < 0.001\nRelationship between the incidence of cerebrovascular events after grouping Lp‐PLA2 quartiles\nThe difference between the event and no event groups across Lp‐PLA2 levels.\nFirst quartile (Q1): Lp‐PLA2 ≤ 473.10 IU/L; second quartile (Q2): 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; third quartile (Q3): 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; fourth quartile (Q4): Lp‐PLA2 ≥ 643.90 IU/L.\n\np < 0.05 is considered statistically significant.\nCerebrovascular events are positively correlated with the degree of carotid artery stenosis The incidence of cerebrovascular events in the stenosis group was higher than that in the no stenosis group (Table 4, p = 0.17), but the difference was not statistically significant. As stenosis progressed from none to severe, increases in the degree of stenosis likely led to an increase in the incidence of cerebrovascular events (Table 4). 
Furthermore, the greater the number of stenoses, the greater the probability of cerebrovascular events (Table 4).\nRelationship between carotid artery stenosis and the incidence of cerebrovascular events\nThe diference between the cebrovascular events and no events in the carotid artery stenosis group and no stenosis, with varying degrees of stenosis and stenosis range.\n\np < 0.05 is considered statistically significant.\nThe incidence of cerebrovascular events in the stenosis group was higher than that in the no stenosis group (Table 4, p = 0.17), but no statistically significant difference was noted. As regards progression of stenosis, that is, from no stenosis to severe stenosis, increases in the degree of stenosis likely lead to an increase in the incidence of cerebrovascular events (Table 4). Furthermore, the greater the number of stenoses, the greater the probability of cerebrovascular events (Table 4).\nRelationship between carotid artery stenosis and the incidence of cerebrovascular events\nThe diference between the cebrovascular events and no events in the carotid artery stenosis group and no stenosis, with varying degrees of stenosis and stenosis range.\n\np < 0.05 is considered statistically significant.\nEffects of Lp‐PLA2 levels on cerebrovascular events and mortality Participants were followed for 2 years. In the subsequent analysis using Cox regression to analyze the forward LR method, the occurrence of cerebrovascular events was used as the value indicating that the event had occurred, the follow‐up time was used as the time variable, the Lp‐PLA2 level quartile was used as the classification covariate, and the first quantile was used as the reference group. The results suggest that Lp‐PLA2 is a risk factor for cerebrovascular events (Figure 3, p = 0.046). Compared with the first quartile, the risk of cerebrovascular events increased as the quartile increased. 
Compared with the first quartile, the risk of cerebrovascular events in the fourth quartile was 10.170 times that of the first quartile (OR = 10.170, 95% CI 1.302–79.448, p = 0.027) (Table 5 and Figure 4).\nDifferences in Lp‐PLA2 levels between groups with cerebrovascular events and without cerebrovascular events. *p < 0.05, **p < 0.01, ***p < 0.001\nCox regression analysis for cerebrovascular events and death by Lp‐PLA2 (IU/L) levels\nAbbreviations: CI, confidence interval; OR, odds ratio.\nCox cumulative hazard for people with various Lp‐PLA2 levels. 
The Lp‐PLA2 level was grouped as follows: Q1, Lp‐PLA2 ≤ 473.10 IU/L; Q2, 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; Q3, 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; Q4, Lp‐PLA2 ≥ 643.90 IU/L", "According to the neck ultrasound findings, there were 537 patients in the no stenosis group, 254 patients in the mild stenosis group, and 32 patients in the moderate to severe stenosis group. As the severity of carotid artery stenosis increased, the age, systolic BP, male ratio, and diabetes history ratio increased, and the difference among the three groups was statistically significant (p < 0.05) (Table 1). The level of Lp‐PLA2 was higher in the moderate to severe stenosis group than in the no stenosis group (p < 0.05), and the level in the mild stenosis group was also higher than that in the no stenosis group (p < 0.05). No statistically significant differences in sdLDL and other parameters were found among the three groups (p > 0.05). The correlation between Lp‐PLA2 and the other parameters was then analyzed. Lp‐PLA2 was positively correlated with both the degree and the range of carotid artery stenosis. In these high‐risk groups, the clinical biochemical indicators TG, TC, LDL, nonHDL, GLU, Homeostatic Model Assessment for Insulin Resistance, UA, HCY, FFA, CYS C, hs‐CRP, and sdLDL were positively correlated with Lp‐PLA2 (r > 0, p < 0.05), and the correlation coefficients between Lp‐PLA2 and the lipid parameters ranked as follows: sdLDL > non‐HDL > LDL > TC > TG. 
sdLDL showed the strongest correlation with Lp‐PLA2 (r = 0.555), and HDL was negatively correlated with Lp‐PLA2 (r = −0.145, p < 0.001).\nDemographic characteristics and clinical biochemical parameters in different carotid artery stenosis groups\nAbbreviations: BMI, body mass index; DBP, diastolic blood pressure; FFA, free fatty acid; HCY, homocysteine; HDL, high‐density lipoprotein; HOMA‐IR, homeostasis model assessment for insulin resistance; HOMA‐IS, homeostasis model assessment for insulin sensitivity; hs‐CRP, high‐sensitivity C‐reactive protein; LDL, low‐density lipoprotein; LP(a), lipoprotein(a); Lp‐PLA2, lipoprotein‐associated phospholipase A2; nonHDL, non–high‐density lipoprotein cholesterol; SBP, systolic blood pressure; sdLDL, small dense low‐density lipoprotein; TC, total cholesterol; TG, triglycerides; UA, uric acid.", "We followed the patients for 2 years and found that 18 had cerebrovascular events, and the remaining 805 had no cerebrovascular events. The level of Lp‐PLA2 was higher in the group with cerebrovascular events than in the group without cerebrovascular events (662.81 ± 111.25 vs 559.86 ± 130.05, p < 0.001) (Table 2 and Figure 2). The levels of some parameters that reflect renal function, such as Cys C, HCY, and UA, were also higher in the event group than in the no event group (Table 2, p < 0.05). The level of hs‐CRP, an inflammatory marker, was also higher in the event group than in the no event group (Table 2, p < 0.05). No statistically significant differences were found between the event and no event groups for the other parameters, such as HDL and LDL (Table 2, p > 0.05). The Lp‐PLA2 levels were then divided into quartiles (Q1–Q4). The incidence of cerebrovascular events increased with the Lp‐PLA2 quartile (Table 3, p = 0.027). 
In the group with cerebrovascular events, the fourth quartile had a higher incidence of cardiovascular and cerebrovascular events than the first quartile (Table 3, p < 0.05).\nComparison of demographic characteristics and clinical biochemical parameters of cerebrovascular events\nDifferences in Lp‐PLA2 levels among people with different degrees of stenosis. *p < 0.05, **p < 0.01, ***p < 0.001\nRelationship between the incidence of cerebrovascular events after grouping by Lp‐PLA2 quartiles\nThe difference between the event and no‐event groups at varying Lp‐PLA2 levels.\nFirst quartile (Q1): Lp‐PLA2 ≤ 473.10 IU/L; second quartile (Q2): 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; third quartile (Q3): 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; fourth quartile (Q4): Lp‐PLA2 ≥ 643.90 IU/L.\n\np < 0.05 is considered statistically significant.", "The incidence of cerebrovascular events in the stenosis group was higher than that in the no stenosis group, but the difference was not statistically significant (Table 4, p = 0.17). As stenosis progressed from none to severe, the incidence of cerebrovascular events tended to increase with the degree of stenosis (Table 4). Furthermore, the greater the number of stenoses, the greater the probability of cerebrovascular events (Table 4).\nRelationship between carotid artery stenosis and the incidence of cerebrovascular events\nThe difference between the cerebrovascular events and no‐events groups in the carotid artery stenosis and no stenosis groups, with varying degrees of stenosis and stenosis range.\n\np < 0.05 is considered statistically significant.", "Participants were followed for 2 years. 
In the subsequent Cox regression analysis using the forward LR method, the occurrence of a cerebrovascular event was used as the event indicator, the follow‐up time was used as the time variable, the Lp‐PLA2 level quartile was used as the categorical covariate, and the first quartile was used as the reference group. The results suggest that Lp‐PLA2 is a risk factor for cerebrovascular events (Figure 3, p = 0.046). Compared with the first quartile, the risk of cerebrovascular events increased as the quartile increased. The risk of cerebrovascular events in the fourth quartile was 10.170 times that of the first quartile (OR = 10.170, 95% CI 1.302–79.448, p = 0.027) (Table 5 and Figure 4).\nDifferences in Lp‐PLA2 levels between groups with cerebrovascular events and without cerebrovascular events. *p < 0.05, **p < 0.01, ***p < 0.001\nCox regression analysis for cerebrovascular events and death by Lp‐PLA2 (IU/L) levels\nAbbreviations: CI, confidence interval; OR, odds ratio.\nCox cumulative hazard for people with various Lp‐PLA2 levels. The Lp‐PLA2 level was grouped as follows: Q1, Lp‐PLA2 ≤ 473.10 IU/L; Q2, 473.10 IU/L < Lp‐PLA2 < 560.70 IU/L; Q3, 560.70 IU/L ≤ Lp‐PLA2 < 643.90 IU/L; Q4, Lp‐PLA2 ≥ 643.90 IU/L", "In this study, the association between CVD and Lp‐PLA2, a novel plasma biomarker involved in the development of carotid artery stenosis and cerebrovascular events in populations at a high risk of stroke, was investigated. The results showed that the level of Lp‐PLA2 in the carotid artery stenosis group was significantly higher than that in the no stenosis group, and its level was positively correlated with the degree of carotid artery stenosis. Cox regression analysis showed that Lp‐PLA2 was an independent factor in the risk prediction of cerebrovascular events. 
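A Wald‐type confidence interval is symmetric about the point estimate on the log scale, so the reported OR = 10.170 (95% CI 1.302–79.448) can be sanity‐checked numerically. This is our own consistency check, not part of the study's analysis:

```python
import math

or_hat, lo, hi = 10.170, 1.302, 79.448

# For a Wald interval, log(lo) and log(hi) are centred on log(OR),
# so the midpoint of the log-scale CI should match log(or_hat).
log_mid = (math.log(lo) + math.log(hi)) / 2.0
assert abs(log_mid - math.log(or_hat)) < 0.01  # consistent with the report

# Implied standard error of log(OR) from the CI half-width:
se_log_or = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(round(se_log_or, 3))
```

The wide interval (implied standard error of log(OR) above 1) reflects the small number of events (18), which the authors note as a limitation.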
All findings demonstrated the possible clinical application of Lp‐PLA2 to predict the occurrence of cerebrovascular events and assess the degree of carotid artery stenosis.\nLp‐PLA2 is a biomarker produced by inflammatory cells, which can break down oxidized phospholipids and release products that promote inflammation and further aggravate AS.\n17\n Some previous studies have referred to the role of Lp‐PLA2 in the formation of carotid AS and carotid stenosis. Charniot et al.\n18\n showed that the level of Lp‐PLA2 in patients with carotid artery stenosis increased significantly with the severity of atherosclerotic lesions, and the level of Lp‐PLA2 in the severe stenosis group was the highest. A group of researchers investigated 111 patients with chronic coronary artery disease confirmed by angiography and reported that Lp‐PLA2 was positively correlated with carotid intima‐media thickness. In addition, patients in the high Lp‐PLA2 group had more severe carotid artery stenosis and greater intima‐media thickness than the control group,\n19\n which implies that Lp‐PLA2 may be a main factor in carotid artery thickening. Another group recruited 678 patients diagnosed with coronary artery disease by angiography; multivariate regression analysis showed that Lp‐PLA2 is an independent risk factor for AS. A cross‐sectional study measured Lp‐PLA2 levels in the blood of people aged >40 years and showed that this new marker is closely linked to carotid AS and carotid artery stenosis. However, that study could not establish the role of high Lp‐PLA2 levels in carotid plaque formation.\n20\n The above results suggest that increased expression of Lp‐PLA2 is highly associated with atherosclerotic lesions. Therefore, it plays a very important role in the risk assessment of carotid artery stenosis. 
However, most of these studies had relatively small samples and may not establish the clinical value of Lp‐PLA2 in these circulatory diseases.\nLp‐PLA2 plays a role in CVD, as it can aggravate atherosclerotic lesions and promote the rupture of complex, vulnerable plaques, leading to CVD events. Several recent studies have shown that plasma Lp‐PLA2 levels are associated with the risk of subsequent coronary heart disease and ischemic stroke.\n19\n, \n21\n, \n22\n, \n23\n In a multi‐ethnic cohort study, high Lp‐PLA2 levels and activity were associated with an increased incidence of cardiovascular disease and coronary heart disease in people without baseline clinical cardiovascular disease.\n24\n In this study, the patients who were at a high risk for stroke were followed for 2 years, and the level of Lp‐PLA2 in the cerebrovascular event group was significantly higher than that in the no event group. Moreover, we grouped Lp‐PLA2 levels into quartiles and found that the high‐level group (Q4) had a much higher risk of cerebrovascular events than the low‐level group; as the Lp‐PLA2 quartile increased, the incidence of cerebrovascular events increased. In addition, the risk of cerebrovascular events in the fourth quartile was 10.170 times that of the first quartile. These results suggest that Lp‐PLA2 is a risk factor for the occurrence of cerebrovascular events and that the risk increases with the Lp‐PLA2 level. The Lp‐PLA2 level thus has predictive value for the occurrence of cerebrovascular events in populations at a high risk of stroke.\nThe limitations of this study are as follows. First, the sample size was not large, the follow‐up time was only 2 years, and the number of cerebrovascular events that eventually occurred was relatively small. 
Second, this study was conducted at a single center and the study population mainly included people aged >40 years at a high risk of stroke, so the results of the study only represented a small part of the population. Finally, we only analyzed the correlation between other demographic parameters and clinical biochemical indicators and Lp‐PLA2. However, we did not combine these indicators with Lp‐PLA2 to assess carotid artery stenosis and cerebrovascular events. In the future, we will increase the sample size and follow patients for a longer period of time. We will further explore the clinical value of multiple indicators in cerebrovascular events and carotid artery stenosis.\nIn conclusion, the level of Lp‐PLA2 was positively correlated with the degree of carotid artery stenosis and predicted cerebrovascular events. Our results suggest that Lp‐PLA2 may be a tool for evaluating the prognosis of the development of cardiovascular and cerebrovascular diseases." ]
[ null, "methods", null, null, null, null, null, null, "results", null, null, null, null, "discussion" ]
[ "cerebrovascular events", "high‐risk population", "lipoproteinassociated phospholipase A2", "stroke" ]
INTRODUCTION: Cerebrovascular disease (CVD) has become a major social and public health problem worldwide, 1 , 2 encompassing stroke, cerebrovascular abnormalities and malformations, and other disorders of cerebral blood circulation. 3 The mortality rate of cerebrovascular diseases in urban and rural areas is 125.78 per 100,000 and 151.91 per 100,000, respectively. 4 , 5 Given the high morbidity, mortality, disability, and recurrence rates of CVD, 6 it places a very heavy burden on families and society. Therefore, the prevention and treatment of cerebrovascular diseases are urgently needed. Hypertension, dyslipidemia, heart disease, diabetes, smoking, being overweight, and lack of physical activity are common risk factors for CVD. 7 Among these risk factors, blood biomarkers play a critical role in the formation and rupture of atherosclerotic plaques. 8 However, currently available evidence is insufficient to predict CVD events. To find a sufficient method for predicting the occurrence of cerebrovascular disease in populations at a high risk of stroke, research on new biomarkers for the early prediction of CVD events has attracted wide attention. Recently, researchers have explored biomarkers associated with atherosclerosis (AS) and CVD. Among these studies, lipoprotein‐associated phospholipase A2 (Lp‐PLA2) was found to significantly promote AS. 9 Atherosclerosis is a disorder of lipid metabolism and a chronic inflammatory disease. 10 , 11 Endothelial dysfunction is a pathological basis for cerebrovascular diseases and is one of the major factors in the formation of carotid artery stenosis, 12 eventually leading to cardiovascular and cerebrovascular events. Lp‐PLA2 is mainly secreted by macrophages, T lymphocytes, monocytes, and mast cells. 
13 Lp‐PLA2 plays a key role in the development of inflammation and AS, mainly through the aggregation and activation of leukocytes and platelets, vascular smooth muscle cell proliferation and migration, endothelial dysfunction, the expression of adhesion molecules and cytokines, and the formation of the necrotic plaque core. Lp‐PLA2 downregulates the synthesis and release of nitric oxide in endothelial cells, enhances the oxidative stress response, and promotes endothelial cell apoptosis. 14 This study mainly aimed to explore the relationship between Lp‐PLA2 levels and carotid artery stenosis and the value of early prediction of CVD events in stroke‐risk populations, and to provide a scientific basis for risk stratification management and early prevention in stroke‐risk populations in order to reduce the heavy burden of cerebrovascular disease on families and society. METHODS: Ethics statements This study was approved by the Ethics Committee of Zhejiang Provincial People's Hospital (No. 2020QT286), which waived the requirement of informed consent. Study population In April 2013, the study subjects were screened from 15,933 residents aged >40 years in Zhaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. From those at a high risk of stroke, 835 candidates were selected, and members of the research team conducted telephone follow‐ups at the 3rd, 6th, 12th, and 24th months from April 2013; 823 eventually met the criteria, including 473 women and 350 men (the study flow is shown in Figure 1). 
Study flowchart The inclusion criteria were as follows: aged >40 years; a history of stroke; a history of transient ischemic attack; a history of hypertension (≥140/90 mm Hg) or taking antihypertensive drugs; atrial fibrillation and valvular disease; smoking; dyslipidemia or unknown; diabetes; rarely performed physical exercise (the standard frequency of physical exercise is ≥3 times a week, each ≥30 minutes, for more than 1 year; those engaged in moderate to severe physical labor are regarded as having regular physical exercise); obesity (body mass index [BMI] ≥ 26 kg/m2); and a family history of stroke. Those who had lived or worked outside the study areas for more than half a year; had severe liver or kidney disease, malignant tumors, mental illness, or systemic immune disease; or had incomplete data or were lost to follow‐up during the 2‐year follow‐up period were excluded. The screening of the above at‐risk population is in line with national clinical guidelines for stroke. 15 
China stroke prevention project committee (CSPPC) stroke program Stroke is a chronic disease that seriously threatens the health of the Chinese population. The incidence of stroke across the country is rising at an annual rate of 8.7%. The annual cost of treating cerebrovascular diseases is more than 10 billion yuan, and the indirect economic losses cost nearly 20 billion yuan every year. To address the various challenges caused by stroke, the Chinese Ministry of Health established the CSPPC in April 2011. The CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke‐risk factor screening and risk assessment for permanent residents over 40 years old in high‐incidence areas. It conducts health education and regular physical examinations for selected low‐risk populations, provides intervention guidance for the middle‐risk population based on individual characteristics, and performs further inspections and comprehensive intervention for the high‐risk population. 
During regular follow‐up of the middle‐risk and high‐risk groups, patients identified to have cervical vascular disease or suspected stroke will be referred to the hospital for further diagnosis and treatment. Carotid ultrasound and physical examination The neck ultrasound examinations of all subjects were performed using the ultrasound diagnostic apparatus S2000 (Siemens, Germany) according to the guidelines established by the European Stroke Conference. The blood vessels examined included the bilateral common carotid artery, carotid sinus, internal carotid artery, subclavian artery, and vertebral artery. 
Carotid artery stenosis 16 was then divided into (i) mild stenosis, in which the inner diameter is reduced by 1%–49%, the ultrasound image shows local plaques, and there is no significant change in blood flow; (ii) moderate stenosis, in which the inner diameter is reduced by 50%–69%, blood flow is accelerated at the plaque stenosis, and a pathological vortex forms at the distal end of the stenosis; and (iii) severe stenosis, in which the inner diameter is reduced by 70%–99%, the plaque burden is aggravated, blood flow is further accelerated at the plaque stenosis, and pathological vortex and turbulent mixed signals form at the distal end. All relevant measurements such as weight, height, waist circumference, and blood pressure (BP) were taken by trained medical personnel in strict accordance with the corresponding standards. After verification, data were entered into the database of the China Stroke Data Center. 
Data collection and blood biochemistry All subjects fasted for at least 8 hours, and 3 mL of cubital venous blood was drawn into a separating gel‐accelerating vacuum blood collection tube and left for approximately 15 minutes until the plasma precipitated. The serum was separated by centrifugation at a relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room. 
The serum was separated by centrifugation at relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room. Blood glucose (GLU), triglycerides (TG), total cholesterol (TC), low‐density lipoprotein (LDL), high‐density lipoprotein (HDL), homocysteine (HCY), uric acid (UA), and free fatty acid (FFA) tests were completed on the same day. One portion was immediately stored in the refrigerator at −80°C. Small and dense low‐density lipoprotein (sdLDL), Lp‐PLA2, high‐sensitivity C‐reactive protein (hs‐CRP), cystatin C (Cys C), and lipoprotein a (LPa) completed the test on the day of the experiment. Demographic parameters such as age; sex; waist circumference; BMI, systolic blood pressure; diastolic blood pressure; medical history of all subjects, such as heart disease history, diabetes history, hypertension history, dyslipidemia history, smoking history, family history of hypertension, family history of diabetes data such as stroke, family history of coronary heart disease, family history of stroke; carotid artery ultrasound results; and cardiovascular and cerebrovascular events were all collected from the China Stroke Data Center database. Statistical analysis SPSS 20.0 was used to analyze the data, and GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA) was used to draw graphs. Count data were expressed as a percentage, and the comparison between groups was analyzed using the chi‐square test. Measurement data with normal distribution were expressed as mean ± standard deviation (±s). The comparison between two groups was performed using the t test, and the comparison of multiple groups was performed by single‐factor analysis of variance. Measurement data of non‐normal distribution are expressed as median (interquartile range), and the rank‐sum test was used for comparison between groups. Correlation analysis was performed using Spearman correlation analysis or Pearson correlation analysis. 
Regression analysis was performed using logistic multivariate regression analysis. Survival analysis used the proportional hazards regression model (Cox regression analysis), and p < 0.05 was considered statistically significant. Ethics statements: This study was approved by the Ethics Committee of Zhejiang Provincial People's Hospital (No. 2020QT286), which waived the requirement of informed consent. Study population: In April 2013, the study subjects were screened from 15,933 residents aged >40 years in Zhaohui Street, Xiacheng District, Hangzhou, Zhejiang Province. From those at a high risk of stroke, 835 candidates were selected, and members of the research team conducted telephone follow‐ups at the 3rd, 6th, 12th, and 24th months from April 2013; 823 eventually met the criteria, including 473 women and 350 men (the study flow is shown in Figure 1). 
Study flowchart The inclusion criteria were as follows: aged >40 years; a history of stroke; a history of transient ischemic attack; a history of hypertension (≥140/90 mm Hg) or taking antihypertensive drugs; atrial fibrillation and valvular disease; smoking; dyslipidemia or unknown; diabetes; rarely performed physical exercise (standard frequency of physical exercise is ≥3 times a week, each ≥30 minutes, for more than 1 year; those engaged in moderate to severe physical labor are regarded as having regular physical exercise), obesity (body mass index [BMI] ≥ 26 kg/m2); and a family history of stroke. Those who have lived or worked outside the study areas for more than half a year; have severe liver or kidney disease or malignant tumors, mental illness, or systemic immune disease; or have 2‐year follow‐up period, incomplete data, or lost to follow‐up were excluded. The screening of the above at‐risk population is in line with national clinical guidelines for stroke. 15 China stroke prevention project committee (CSPPC) stroke program: Stroke is a chronic disease that seriously threatens the health of the Chinese population. The incidence of stroke across the country is rising at an annual rate of 8.7%. The annual cost of treating cerebrovascular diseases is more than 10 billion yuan, and the indirect economic losses cost nearly 20 billion yuan every year. To address the various challenges caused by stroke, the Chinese Ministry of Health established the CSPPC in April 2011. CSPPC formulates policies, issues clinical guidelines, and organizes community hospitals to carry out stroke‐risk factor screening and risk assessment for permanent residents over 40 years old in high‐incidence areas and to conduct health education and regular physical examinations for selected low‐risk populations, intervention guidance for the middle‐risk population based on individual characteristics, further inspections for the high‐risk population, and comprehensive intervention. 
During regular follow-up of the middle-risk and high-risk groups, patients identified as having cervical vascular disease or suspected stroke were referred to the hospital for further diagnosis and treatment. Carotid ultrasound and physical examination: The neck ultrasound examinations of all subjects were performed using the ultrasound diagnostic apparatus S2000 (Siemens, Germany) according to the guidelines established by the European Stroke Conference. The blood vessels examined included the bilateral common carotid artery, carotid sinus, internal carotid artery, subclavian artery, and vertebral artery. Carotid artery stenosis 16 was graded as (i) mild stenosis, in which the inner diameter is reduced by 1%–49%, the ultrasound image shows local plaques, and there is no significant change in blood flow; (ii) moderate stenosis, in which the inner diameter is reduced by 50%–69%, blood flow is accelerated at the plaque stenosis, and a pathological vortex forms at the distal end of the stenosis; and (iii) severe stenosis, in which the inner diameter is reduced by 70%–99%, the plaque burden is aggravated, blood flow is further accelerated at the plaque stenosis, and pathological vortex and turbulent mixed signals form at the distal end. All relevant measurements, such as weight, height, waist circumference, and blood pressure (BP), were taken by trained medical personnel in strict accordance with the corresponding standards. After verification, data were entered into the database of the China Stroke Data Center. Data collection and blood biochemistry: All subjects fasted for at least 8 hours, and 3 mL of cubital venous blood was drawn into a separating-gel accelerating vacuum blood collection tube and left for approximately 15 minutes until the plasma had precipitated. The serum was separated by centrifugation at a relative centrifugal force of 560 g for 15 minutes, and one part was sent to the laboratory biochemistry room.
Blood glucose (GLU), triglycerides (TG), total cholesterol (TC), low-density lipoprotein (LDL), high-density lipoprotein (HDL), homocysteine (HCY), uric acid (UA), and free fatty acid (FFA) tests were completed on the same day. One portion was immediately stored in a refrigerator at −80°C. Tests for small and dense low-density lipoprotein (sdLDL), Lp-PLA2, high-sensitivity C-reactive protein (hs-CRP), cystatin C (Cys C), and lipoprotein a (LPa) were completed on the day of the experiment. Demographic parameters (age, sex, waist circumference, BMI, systolic blood pressure, and diastolic blood pressure); medical history (heart disease, diabetes, hypertension, dyslipidemia, and smoking); family history (hypertension, diabetes, stroke, and coronary heart disease); carotid artery ultrasound results; and cardiovascular and cerebrovascular events were all collected from the China Stroke Data Center database. Statistical analysis: SPSS 20.0 was used to analyze the data, and GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA) was used to draw graphs. Count data were expressed as percentages, and comparisons between groups were analyzed using the chi-square test. Measurement data with a normal distribution were expressed as mean ± standard deviation (±s); comparisons between two groups were performed using the t test, and comparisons among multiple groups were performed by one-way analysis of variance. Measurement data with a non-normal distribution were expressed as median (interquartile range), and the rank-sum test was used for comparisons between groups. Correlation analysis was performed using Spearman or Pearson correlation analysis. Regression analysis was performed using multivariate logistic regression.
Survival analysis uses the proportional hazards regression model (Cox regression analysis) method, and p < 0.05 was considered statistically significant. RESULTS: Lp‐PLA2 levels in groups with different degrees of carotid artery stenosis According to the neck ultrasound findings, there were 537 patients in the no stenosis group, 254 patients in the mild stenosis group, and 32 patients in the moderate to severe stenosis group. As the severity of carotid artery stenosis increased, the age, systolic BP, male ratio, and diabetes history ratio increased, and the difference among the three groups was statistically significant (p < 0.05) (Table 1). The level of Lp‐PLA2 was higher in the moderate to severe stenosis group than in the no stenosis group (p < 0.05), and the level in the mild stenosis group was higher than that in the non‐stenosis group (p < 0.05). No statistically significant differences in sdLDL and other parameters were found among the three groups (p > 0.05). Then, the correlation between Lp‐PLA2 and other parameters was analyzed. Studies have shown that Lp‐PLA2 is positively correlated with the degree of carotid artery stenosis and the range of stenosis. In these high‐risk groups, the clinical biochemical indicators TG, TC, LDL, nonHDL, GLU, Homeostatic Model Assessment for Insulin Resistance, UA, HCY, FFA, CYS C, hs‐CRP, and sdLDL were positively correlated with Lp‐PLA2 (r > 0, p < 0.05), and the correlation coefficient between Lp‐PLA2 and other lipoproteins was in this order sdLDL > non‐HDL > LDL > TC > TG. sdLDL showed the strongest correlation with Lp‐PLA2 (r = 0.555), and HDL was negatively correlated with Lp‐PLA2 (r = −0.145, p < 0.001). 
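The rank correlations reported above (e.g. r = 0.555 for sdLDL vs. Lp-PLA2) are Spearman coefficients: Pearson correlation computed on the ranks of the data. As an illustration only, the following standard-library sketch computes Spearman's rho with average ranks for ties; a real analysis would use a statistics package such as `scipy.stats.spearmanr`.

```python
# Minimal Spearman rank correlation: Pearson correlation of rank vectors,
# with ties assigned their average rank. Standard library only.
from math import sqrt

def _ranks(values):
    """Average 1-based ranks; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation coefficient of two equal-length sequences."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

Any strictly increasing relationship yields rho = 1 and any strictly decreasing one rho = −1, which is why Spearman is suited to monotone but non-linear biomarker relationships.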
Demographic characteristics and clinical biochemical parameters in different carotid artery stenosis groups. Abbreviations: BMI, body mass index; DBP, diastolic blood pressure; FFA, free fatty acid; HCY, homocysteine; HDL, high-density lipoprotein; HOMA-IR, homeostasis model assessment for insulin resistance; HOMA-IS, homeostasis model assessment for insulin sensitivity; hs-CRP, high-sensitivity C-reactive protein; LDL, low-density lipoprotein; LP(a), lipoprotein(a); Lp-PLA2, lipoprotein-associated phospholipase A2; nonHDL, non-high-density lipoprotein cholesterol; SBP, systolic blood pressure; sdLDL, small dense low-density lipoprotein; TC, total cholesterol; TG, triglycerides; UA, uric acid.
Association between Lp-PLA2 level and occurrence of cerebrovascular events. We followed the patients for 2 years and found that 18 had cerebrovascular events, and the remaining 805 had no cerebrovascular events. The level of Lp-PLA2 was higher in the group with cerebrovascular events than in the group without (662.81 ± 111.25 vs 559.86 ± 130.05, p < 0.001) (Table 2 and Figure 2). Parameters reflecting renal function, such as Cys C, HCY, and UA, were also higher in the event group than in the no event group (Table 2, p < 0.05). The level of hs-CRP, an inflammatory marker, was likewise higher in the event group (Table 2, p < 0.05).
No statistically significant difference was found between the event and no event groups for the other parameters, such as HDL and LDL (Table 2, p > 0.05). Lp-PLA2 levels were then divided into quartiles (Q1–Q4). As the Lp-PLA2 quartile increased, the incidence of cerebrovascular events increased (Table 3, p = 0.027), and the fourth quartile had a higher incidence of cardiovascular and cerebrovascular events than the first quartile (Table 3, p < 0.05). Comparison of demographic characteristics and clinical biochemical parameters by cerebrovascular events. Differences in Lp-PLA2 levels among people with different degrees of stenosis. *p < 0.05, **p < 0.01, ***p < 0.001. Relationship between Lp-PLA2 quartile and the incidence of cerebrovascular events: the difference between the event and no event groups across Lp-PLA2 levels. First quartile (Q1): Lp-PLA2 ≤ 473.10 IU/L; second quartile (Q2): 473.10 IU/L < Lp-PLA2 < 560.70 IU/L; third quartile (Q3): 560.70 IU/L ≤ Lp-PLA2 < 643.90 IU/L; fourth quartile (Q4): Lp-PLA2 ≥ 643.90 IU/L. p < 0.05 is considered statistically significant.
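The quartile cutoffs quoted above can be applied as a simple lookup. The sketch below reproduces the source's boundary conventions exactly (note that they are asymmetric: Q1 includes its upper bound, Q2 excludes both bounds, Q3 includes its lower bound); the function name is hypothetical.

```python
# Assign an Lp-PLA2 measurement (IU/L) to its quartile group using the
# cutoffs stated in the text. Boundary handling follows the source:
# Q1 <= 473.10 < Q2 < 560.70 <= Q3 < 643.90 <= Q4.

def lp_pla2_quartile(level: float) -> str:
    if level <= 473.10:
        return "Q1"
    elif level < 560.70:
        return "Q2"
    elif level < 643.90:
        return "Q3"
    return "Q4"
```

In practice quartile cutpoints would be derived from the cohort itself (e.g. with `pandas.qcut`); the fixed values here simply mirror the published grouping.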
Cerebrovascular events are positively correlated with the degree of carotid artery stenosis. The incidence of cerebrovascular events was higher in the stenosis group than in the no stenosis group, although the difference was not statistically significant (Table 4, p = 0.17). With progression from no stenosis to severe stenosis, increases in the degree of stenosis tended to be accompanied by an increase in the incidence of cerebrovascular events (Table 4). Furthermore, the greater the number of stenoses, the greater the probability of cerebrovascular events (Table 4).
Relationship between carotid artery stenosis and the incidence of cerebrovascular events: the difference between cerebrovascular events and no events across the stenosis and no stenosis groups, with varying degrees of stenosis and stenosis extent. p < 0.05 is considered statistically significant. Effects of Lp-PLA2 levels on cerebrovascular events and mortality. Participants were followed for 2 years. In the subsequent Cox regression analysis (forward LR method), the occurrence of a cerebrovascular event was the event indicator, follow-up time was the time variable, the Lp-PLA2 quartile was the categorical covariate, and the first quartile was the reference group. The results suggest that Lp-PLA2 is a risk factor for cerebrovascular events (Figure 3, p = 0.046). The risk of cerebrovascular events increased with increasing quartile; the risk in the fourth quartile was 10.170 times that of the first quartile (OR = 10.170, 95% CI 1.302–79.448, p = 0.027) (Table 5 and Figure 4).
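The 10.17-fold figure above comes from a Cox model with the first quartile as reference. For intuition only, a crude unadjusted analogue is the ratio of raw event incidences between quartile groups; the sketch below uses hypothetical counts (not the study's data) to show how such a ratio is computed.

```python
# Crude, unadjusted incidence ratio between Lp-PLA2 quartile groups.
# This is NOT the paper's Cox hazard estimate; the counts are hypothetical.

def incidence(events: int, n: int) -> float:
    """Raw event incidence: events per person over follow-up."""
    return events / n

def risk_ratio(events_by_q: dict, n_by_q: dict, num="Q4", den="Q1") -> float:
    """Ratio of raw incidences between two quartile groups."""
    return incidence(events_by_q[num], n_by_q[num]) / incidence(events_by_q[den], n_by_q[den])

# Hypothetical per-quartile counts for illustration:
events = {"Q1": 1, "Q2": 3, "Q3": 4, "Q4": 10}
n = {"Q1": 206, "Q2": 206, "Q3": 206, "Q4": 205}
rr = risk_ratio(events, n)
```

A Cox model additionally accounts for censoring and event timing, which is why the published estimate is a hazard-based quantity rather than this simple ratio.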
Differences in Lp-PLA2 levels between groups with and without cerebrovascular events. *p < 0.05, **p < 0.01, ***p < 0.001. Cox regression analysis for cerebrovascular events and death by Lp-PLA2 (IU/L) level. Abbreviations: CI, confidence interval; OR, odds ratio. Cox cumulative hazard for people with various Lp-PLA2 levels.
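Figure 4 plots Cox cumulative hazard curves by Lp-PLA2 group. As a self-contained illustration of what a cumulative hazard is, the sketch below implements the Nelson-Aalen estimator, a non-parametric cousin of those curves (the paper itself fits a Cox model; this is not its method). Event = 1 marks a cerebrovascular event, 0 marks censoring at the end of follow-up.

```python
# Nelson-Aalen estimate of the cumulative hazard H(t) from
# (time, event) follow-up data: at each distinct event time,
# add (events at t) / (subjects still at risk just before t).

def nelson_aalen(times, events):
    """Return [(t, H(t))] evaluated at each distinct event time."""
    data = sorted(zip(times, events))
    n = len(data)
    at_risk = n
    H = 0.0
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        j, deaths = i, 0
        while j < n and data[j][0] == t:
            deaths += data[j][1]
            j += 1
        if deaths:
            H += deaths / at_risk
            curve.append((t, H))
        at_risk -= (j - i)  # remove events and censorings at time t
        i = j
    return curve
```

Computed per Lp-PLA2 quartile, such curves would show the higher quartiles accumulating hazard faster, which is the visual content of a figure like Figure 4.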
The Lp-PLA2 level was grouped as follows: Q1, Lp-PLA2 ≤ 473.10 IU/L; Q2, 473.10 IU/L < Lp-PLA2 < 560.70 IU/L; Q3, 560.70 IU/L ≤ Lp-PLA2 < 643.90 IU/L; Q4, Lp-PLA2 ≥ 643.90 IU/L.
DISCUSSION: In this study, the association between CVD and Lp-PLA2, a novel plasma biomarker involved in the development of carotid artery stenosis and cerebrovascular events in populations at a high risk of stroke, was investigated. The results showed that the level of Lp-PLA2 in the carotid artery stenosis group was significantly higher than that in the no stenosis group, and its level was positively correlated with the degree of carotid artery stenosis. Cox regression analysis showed that Lp-PLA2 was an independent factor in the risk prediction of cerebrovascular events.
All findings demonstrated the possible clinical application of Lp-PLA2 to predict the occurrence of cerebrovascular events and to assess the degree of carotid artery stenosis. Lp-PLA2 is a biomarker produced by inflammatory cells; it breaks down oxidized phospholipids and releases products that promote inflammation and further aggravate AS. 17 Previous studies have addressed the role of Lp-PLA2 in the formation of carotid AS and carotid stenosis. Charniot et al. 18 showed that the level of Lp-PLA2 in patients with carotid artery stenosis increased significantly with the severity of atherosclerotic lesions, with the highest levels in the severe stenosis group. One group of researchers investigated 111 patients with chronic coronary artery disease confirmed by angiography and reported that Lp-PLA2 was positively correlated with carotid intima-media thickness; moreover, carotid artery stenosis in the high-Lp-PLA2 group was more severe and the intima-media thicker than in the control group, 19 which implies that Lp-PLA2 may be a major factor in carotid artery thickening. Another group recruited 678 patients diagnosed with coronary artery disease by angiography; multivariate regression analysis showed that Lp-PLA2 is an independent risk factor for AS. A cross-sectional study measured Lp-PLA2 levels in the blood of people aged >40 years and showed that this new marker is closely linked to carotid AS and carotid artery stenosis, although that study could not establish a causal role of high Lp-PLA2 levels in carotid plaque formation. 20 Together, these results suggest that increased expression of Lp-PLA2 is strongly associated with atherosclerotic lesions and therefore plays an important role in risk assessment of carotid artery stenosis. However, most study samples were relatively small and may be insufficient to establish the clinical value of Lp-PLA2 in these circulatory diseases.
Lp-PLA2 plays a role in CVD, as it can aggravate atherosclerotic lesions and destabilize complex, vulnerable plaques prone to rupture, leading to CVD events. Several recent studies have shown that plasma Lp-PLA2 levels are associated with the risk of subsequent coronary heart disease and ischemic stroke. 19 , 21 , 22 , 23 In a multi-ethnic cohort study, high Lp-PLA2 levels and activity were associated with an increased incidence of cardiovascular disease and coronary heart disease in people without clinical cardiovascular disease at baseline. 24 In this study, patients at a high risk for stroke were followed for 2 years, and the level of Lp-PLA2 in the cerebrovascular event group was significantly higher than that in the no event group. Moreover, we grouped Lp-PLA2 levels into quartiles and found that the high-level group (Q4) had a much higher risk of cerebrovascular events than the low-level group; as the Lp-PLA2 quartile increased, the incidence of cerebrovascular events increased. In addition, the risk of cerebrovascular events in the fourth quartile was 10.170 times that of the first quartile. These results suggest that Lp-PLA2 is a risk factor for the occurrence of cerebrovascular events and that the risk rises as the level of Lp-PLA2 increases; thus, the Lp-PLA2 level has predictive value for the occurrence of cerebrovascular events in populations at a high risk of stroke. The limitations of this study are as follows. First, the sample size was not large, the follow-up time was only 2 years, and the number of cerebrovascular events that eventually occurred was relatively small. Second, this study was conducted at a single center, and the study population mainly comprised people aged >40 years at a high risk of stroke, so the results represent only a small part of the population. Finally, we only analyzed the correlation between the other demographic and clinical biochemical indicators and Lp-PLA2.
However, we did not combine these indicators with Lp‐PLA2 to assess carotid artery stenosis and cerebrovascular events. In the future, we will increase the sample size and follow patients for a longer period of time. We will further explore the clinical value of multiple indicators in cerebrovascular events and carotid artery stenosis. In conclusion, the level of Lp‐PLA2 was positively correlated with the degree of carotid artery stenosis and predicted cerebrovascular events. Our results suggest that Lp‐PLA2 may be a tool for evaluating the prognosis of the development of cardiovascular and cerebrovascular diseases.
Background: Lipoprotein-associated phospholipase A2 (Lp-PLA2) is an independent risk factor for cardiovascular disease. However, its relationship with carotid artery stenosis and cerebrovascular events in high stroke-risk populations is still unclear. Methods: A total of 835 people at a high risk of stroke were screened from 15,933 people aged >40 years in April 2013 and followed at 3, 6, 12, and 24 months. Finally, 823 participants met the screening criteria, and their clinical data and biochemical parameters were investigated. Results: Among the 823 participants, 286 had varying degrees of carotid artery stenosis and 18 had cerebrovascular events. The level of Lp-PLA2 in the carotid artery stenosis group was higher than that in the no stenosis group, and the level in the event group was higher than that in the no event group (p < 0.05). Spearman correlation analysis showed that Lp-PLA2 was positively correlated with the degree of carotid artery stenosis (r = 0.093, p = 0.07) and stenosis involvement (r = 0.094, p = 0.07). The correlation coefficient between Lp-PLA2 and the lipoproteins was highest for sdLDL (r = 0.555, p < 0.001), followed by non-HDL, LDL, TC, and TG. Cox multivariate regression analysis revealed that, compared with the first quantile of Lp-PLA2 level (Q1, low level), the risk of cerebrovascular events in the fourth quantile was 10.170 times higher (OR = 10.170, 95% CI 1.302-79.448, p = 0.027). Conclusions: Lp-PLA2 levels can evaluate carotid artery stenosis and predict the occurrence of cerebrovascular events in high stroke-risk populations, providing scientific guidance for risk stratification management.
null
Hypoxia-inducible factor-1 α/platelet derived growth factor axis in HIV-associated pulmonary vascular remodeling.
21819559
Human immunodeficiency virus (HIV)-infected patients are at increased risk of developing pulmonary arterial hypertension (PAH). Recent reports have demonstrated that HIV-associated viral proteins induce reactive oxygen species (ROS), with resultant endothelial cell dysfunction and related vascular injury. In this study, we explored the impact of HIV-protein-induced oxidative stress on the production of hypoxia-inducible factor (HIF)-1α and platelet-derived growth factor (PDGF), critical mediators implicated in the pathogenesis of HIV-PAH.
BACKGROUND
The lungs from 4- to 5-month-old HIV-1 transgenic (Tg) rats were assessed for the presence of pulmonary vascular remodeling and HIF-1α/PDGF-BB expression in comparison with wild-type controls. Human primary pulmonary arterial endothelial cells (HPAEC) were treated with HIV-associated proteins, with or without antioxidant pretreatment, for 24 h, followed by estimation of ROS levels and western blot analysis of HIF-1α or PDGF-BB.
METHODS
HIV-Tg rats, a model with marked viral-protein-induced vascular oxidative stress in the absence of active HIV-1 replication, demonstrated significant medial thickening of pulmonary vessels and increased right ventricular mass compared with wild-type controls, along with increased expression of HIF-1α and PDGF-BB. The up-regulation of both HIF-1α and PDGF-B chain mRNA in each HIV-Tg rat correlated directly with an increase in the right ventricular/(left ventricular + septum) ratio. Supporting these in-vivo findings, HPAECs treated with the HIV proteins Tat and gp120 demonstrated increased ROS and a parallel increase in PDGF-BB expression, with the maximum induction observed on treatment with R5-type gp-120CM. Pre-treatment of endothelial cells with antioxidants, or transfection of cells with HIF-1α small interfering RNA, abrogated the gp-120CM-mediated induction of PDGF-BB, confirming that ROS generation and activation of HIF-1α play a critical role in gp120-mediated up-regulation of PDGF-BB.
RESULTS
In summary, these findings indicate that viral-protein-induced oxidative stress results in HIF-1α-dependent up-regulation of PDGF-BB and suggest the possible involvement of this pathway in the development of HIV-PAH.
CONCLUSION
[ "Animals", "Antioxidants", "Becaplermin", "Blotting, Western", "Cells, Cultured", "Disease Models, Animal", "Endothelial Cells", "Familial Primary Pulmonary Hypertension", "HIV Envelope Protein gp120", "HIV Infections", "HIV-1", "Humans", "Hypertension, Pulmonary", "Hypertrophy, Right Ventricular", "Hypoxia-Inducible Factor 1, alpha Subunit", "Lung", "Microvessels", "Oxidative Stress", "Platelet-Derived Growth Factor", "Proto-Oncogene Proteins c-sis", "Pulmonary Artery", "RNA Interference", "RNA, Messenger", "Rats", "Rats, Sprague-Dawley", "Rats, Transgenic", "Reactive Oxygen Species", "Signal Transduction", "Time Factors", "Transfection", "Up-Regulation", "tat Gene Products, Human Immunodeficiency Virus" ]
3163194
Introduction
The advent of antiretroviral therapy (ART) has clearly led to improved survival among HIV-1 infected individuals, yet this advancement has resulted in the unexpected consequence of virus-associated noninfectious complications such as HIV-related pulmonary arterial hypertension (HIV-PAH) [1,2]. Despite adherence to ART, development of HIV-PAH serves as an independent predictor of death in patients with HIV infection [3]. A precise characterization of the pathogenesis of HIV-PAH has so far proven elusive. As there is little evidence for direct viral infection within the pulmonary vascular bed [4-7], a popular hypothesis is that secreted HIV-1 viral proteins in the circulation are capable of inducing vascular oxidative stress, direct endothelial cell dysfunction, and the smooth muscle cell proliferation critical to the development of HIV-related arteriopathy [8,9]. Further, accumulating evidence suggests that HIV-1 infection of monocytes/macrophages and lymphocytes stimulates increased production of pro-inflammatory markers and/or growth factors implicated in the pathogenesis of HIV-PAH, such as platelet-derived growth factor (PDGF)-BB [10-16]. These soluble mediators can then initiate endothelial injury followed by smooth muscle cell proliferation and migration [2,17,18]. Previous studies provide evidence for the possible involvement of PDGF in the pathogenesis of pulmonary vascular remodeling in animal models [19,20] and in lung biopsies from patients with PPH or with HIV-PAH [12]. Furthermore, imatinib, a non-specific inhibitor of PDGF signaling, has demonstrated the ability to diminish vascular remodeling in animal studies and to mitigate clinical decline in human PAH trials [21-24]. Our previous work demonstrated over-expression of PDGF in vitro in HIV-infected macrophages [25] and in vivo in simian HIV-infected macaques [16].
Our recent work supports an HIV-protein-mediated up-regulation of PDGF-BB in un-infectable vascular cell types such as human primary pulmonary arterial endothelial and smooth muscle cells [26]. However, the mechanism(s) by which HIV infection or viral protein binding induces PDGF expression, and the role of this potent mitogen in the setting of HIV-associated pulmonary arteriopathy, have not been well characterized. HIV-associated viral proteins, including Tat and gp-120, have demonstrated the ability to trigger the generation of reactive oxygen species (ROS) [27,28]. As oxidative stress stabilizes hypoxia-inducible factor (HIF)-1α, a transcription factor critical for the regulation of important proliferative and vasoactive mediators [29-31], we hypothesized that viral-protein-generated ROS induce HIF-1α accumulation, with a resultant enhanced transcription of the PDGF-B chain. Thus, given the need for clarification of the mechanisms responsible for HIV-related pulmonary vascular remodeling, in the present study we first utilized the non-infectious NL4-3Δgag/pol HIV-1 transgenic (HIV-Tg) rat model [32,33] to explore the direct role of viral proteins in the development of pulmonary vascular remodeling. This HIV-Tg rat model [34] develops many clinical multisystem manifestations similar to those found in AIDS patients and, most importantly, has previously been demonstrated to be under significant oxidative stress. Furthermore, given that pulmonary artery endothelial dysfunction plays a key role in the initiation and progression of PAH [35-37], we next used a primary pulmonary endothelial cell-culture system to delineate the importance of oxidative stress and HIF-1α activation in viral-protein-mediated up-regulation of PDGF-BB.
Methods
HIV-1 transgenic and wild type rats HIV-1 transgenic (Tg) Sprague Dawley (SD) and SD wild-type (WT) rats were purchased from Harlan (Indianapolis, Indiana). Young 4- to 5-month-old Tg rats (n = 6) and age-matched SD wild-type rats (n = 6) were used for analysis. The HIV-1 Tg rat contains a gag-pol-deleted NL4-3 provirus and expresses HIV viral RNA and proteins in various tissues, including lymphocytes and monocytes. The animals were euthanized by inhalation of 2.5-3% isoflurane gas, followed by transcardial saline perfusion. Following euthanasia, one half of the lung was post-fixed for histological examination, while the other half was snap frozen for RNA analysis. Animal care at the Kansas University Medical Center was in strict accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals. Right Ventricular Mass Evaluation Hearts were removed from the euthanized animals. After the removal of the atria, the wall of the right ventricle (RV) was separated from the left ventricle (LV) and septum (LV+S) according to the established method [38].
Wet weights of both RV and LV+S were quantified, normalized to the total body weight, and used to calculate the RV/LV+S ratio. Histology and immuno-histochemical analysis of pulmonary arteries Excised lungs were immersed in 4% paraformaldehyde overnight, followed by 70% ethanol, and then used for paraffin embedding. Paraffin sections of 5 μm thickness were used for Hematoxylin & Eosin (H&E) or Verhoeff von Gieson (VVG) staining. Digital scans of the whole section from each animal were generated with a ScanScope scanner and then visualized and analyzed using Aperio image view software. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries (50-250 μm diameter) in a blinded manner. Wall thickness and outer diameter of approximately 25 muscularized arteries were measured in each section at two perpendicular points and then averaged. The percentage medial wall thickness was then calculated as described before [39]. Immunohistochemistry staining of paraffin-embedded lung sections was performed as previously described [16] with primary antibodies including α-SMA and factor VIII from Dako Corporation (Carpentaria, CA, USA), HIF-1α from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA), and PDGF-BB from Abcam, Inc. (Cambridge, MA).
Cell culture and treatments Human primary pulmonary microvascular endothelial cells (HPMVEC) were purchased from ScienCell Research Laboratories (Carlsbad, CA) and grown in endothelial cell basal media containing 2% fetal bovine serum (FBS), 1 μg/ml hydrocortisone, 10 ng/ml human epidermal growth factor, 3 ng/ml basic fibroblast growth factor, 10 μg/ml heparin, and gentamycin/amphotericin. Cells were treated with the viral proteins Tat 1-72 (1 μM, University of Kentucky) and gp-120CM or gp-120LAV (100 ng/ml, Protein Sciences Corporation, Meriden, CT) for 24 h or 1 h, followed by western blot analysis and ROS quantification, respectively. Tat or gp120 stock solution was heat-inactivated by boiling for 30 min. For treatment with CCR5 neutralizing antibody or IgG isotype control (10 μg/ml, R&D Systems), or with an antioxidant cocktail (0.2 mM ascorbate, 0.5 mM glutathione, and 3.5 μM α-tocopherol), cells were pre-treated with the inhibitors for 30 min followed by treatment with gp-120CM.
Quantification of cellular oxidative stress using dichlorofluorescein (DCF) assay Pulmonary endothelial cells were treated with 5-(and-6)-carboxy-2',7'-dichlorodihydrofluorescein diacetate (DCFH-DA) (Molecular Probes, Inc.) for 30 min, followed by treatment with the viral protein(s) for 1 h. In the presence of H2O2, DCFH is oxidized to fluorescent DCF within the cytoplasm, which was read on a fluorescent plate reader at an excitation of 485 nm and an emission of 530 nm [40].
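Plate-reader DCF readings like these are typically reported relative to untreated cells. The sketch below is illustrative only (the function name, replicate values, and the optional blank-subtraction step are assumptions, not taken from the methods above):

```python
def ros_fold_change(treated_rfu, control_rfu, blank_rfu=0.0):
    """Fold-change in ROS-dependent DCF fluorescence (485/530 nm).

    treated_rfu / control_rfu: lists of replicate relative-fluorescence
    readings; blank_rfu: optional cell-free background (a hypothetical
    step, not stated in the protocol above).
    """
    def mean(xs):
        return sum(xs) / len(xs)
    treated = mean(treated_rfu) - blank_rfu
    control = mean(control_rfu) - blank_rfu
    return treated / control

# Hypothetical readings: viral-protein-treated wells vs. untreated wells
fold = ros_fold_change([210.0, 190.0, 200.0], [100.0, 100.0, 100.0])
# fold == 2.0, i.e. a two-fold increase in DCF signal over control
```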
Transfection of pulmonary endothelial cells with small interfering (si) RNA Silencer Select pre-designed and validated siRNA duplexes targeting HIF-1α were obtained from Applied Biosystems (Carlsbad, CA). Cells were also transfected with Silencer Select negative control siRNA for comparison. HPMVECs were transfected with 10 nM siRNA using HiPerFect transfection reagent (Qiagen, Valencia, CA) as instructed by the manufacturer. The transfected cells were then treated with or without cocaine and/or Tat for 24 h, followed by protein extraction for western blot analysis. Real-Time RT-PCR analysis We used real-time RT-PCR to analyze RNA extracted from the frozen lungs of HIV-1 Tg rats and WT controls obtained after non-fixative perfusion. Quantitative analysis of HIF-1α, PDGF, and ET-1 mRNA in Tg and WT rats was done using primers from SA Biosciences (Frederick, MD) by real-time RT-PCR with the SYBR Green detection method. Total RNA was isolated from frozen lung tissues by lysis in Trizol and then converted into first-strand cDNA for real-time PCR. Detection was performed with an ABI Prism 7700 sequence detector. The average Ct value of the housekeeping gene, HPRT, was subtracted from that of the target gene to give the change in Ct (dCt). The fold-change in gene expression (difference in dCt, or ddCt) was then determined in log2 relative units.
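The dCt/ddCt arithmetic just described reduces to a few lines of code. The sketch below is a minimal illustration of the standard 2^-ddCt calculation (the Ct values and function names are made up, not from this study); taking log2 of the result gives the relative units reported here:

```python
def delta_ct(target_ct, housekeeping_ct):
    """dCt: target-gene Ct minus housekeeping-gene (e.g. HPRT) Ct."""
    return target_ct - housekeeping_ct

def fold_change(dct_sample, dct_reference):
    """Relative expression by the 2^-ddCt method,
    where ddCt = dCt(sample) - dCt(reference)."""
    ddct = dct_sample - dct_reference
    return 2.0 ** (-ddct)

# Hypothetical Ct values for one target gene:
dct_tg = delta_ct(22.0, 18.0)  # HIV-Tg lung: dCt = 4.0
dct_wt = delta_ct(24.0, 18.0)  # WT lung:     dCt = 6.0
fc = fold_change(dct_tg, dct_wt)  # 2^-(4-6) = 4.0-fold up-regulation
```

On the log2 scale used in the text, this example would be reported as +2 relative units.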
Western Blot Analysis Frozen rat lung tissues or endothelial cells were lysed in lysis buffer (Sigma, St. Louis, MO) containing protease inhibitors. Protein content in these samples was measured using the micro-BCA protein assay kit (Pierce Chemical Co., Rockford, IL). Western blot analyses were performed using primary antibodies against HIF-1α (Santa Cruz), PDGF-BB (Santa Cruz), and β-actin (Sigma).
The secondary antibodies used were horseradish peroxidase-conjugated anti-mouse or anti-rabbit (1:5000, Pierce Chemical Co.), and detection was performed using the enhanced chemiluminescence system (Pierce Chemical Co.). NIH ImageJ software was used for densitometric analysis of the immunoblots. Statistical Analysis Statistical analysis was performed using two-way analysis of variance with a post-hoc Student t-test or the non-parametric Wilcoxon rank-sum test, as appropriate. To test for association of the RV/LV+septum ratio with other mediators, the non-parametric Spearman rank correlation coefficient was used and the coefficient of determination (R2) was calculated. Exact two-sided p-values were calculated for all analyses using SAS 9.1 software (SAS Institute, Inc., Cary, NC, USA). A type I error rate of 5% was used for determining statistical significance.
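Spearman's coefficient is simply the Pearson correlation computed on the ranks of the data, and R2 is its square. A self-contained sketch of that calculation follows (this is not the SAS code used in the study; the average-rank tie handling and the example numbers are illustrative assumptions):

```python
def average_ranks(values):
    """1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # mean of the tied 1-based positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# e.g. RV/LV+S ratio vs. PDGF-B mRNA (hypothetical numbers):
rho = spearman_rho([0.25, 0.28, 0.31, 0.35], [1.1, 1.4, 1.9, 2.6])
r_squared = rho ** 2  # coefficient of determination; here rho == 1.0
```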
null
null
Conclusion
All authors have read and approved the manuscript. JM contributed to writing the manuscript, quantitated medial wall thickness, and participated in data analysis; HG performed all the cell-culture experiments; XB performed immunohistochemistry and western blots on rat lungs; FL harvested lung tissues, extracted RNA, and performed the real-time RT-PCR experiments; OT reviewed the H&E and immunohistochemically stained sections; SJB and SB contributed to critiquing the manuscript; AL participated in interpretation of the data and writing the manuscript; NKD designed the study, supervised the overall experimental plans, analyzed and interpreted the data, and wrote the manuscript.
[ "Introduction", "HIV-1 transgenic and wild type rats", "Right Ventricular Mass Evaluation", "Histology and immuno-histochemical analysis of pulmonary arteries", "Cell culture and treatments", "Quantification of cellular oxidative stress using dichlorofluorescein (DCF) assay", "Transfection of pulmonary endothelial cells with small interfering (si) RNA", "Real-Time RT-PCR analysis", "Western Blot Analysis", "Statistical Analysis", "Results", "Pulmonary vascular remodeling in HIV-Tg rats", "Characterization of pulmonary vascular lesions in HIV-Tg rats", "Right ventricular hypertrophy (RVH) in HIV-Tg rats", "Increased expression of HIF-1a and PDGF-BB in HIV-Tg rats", "Increased expression of PDGF-BB in HIV-protein(s) treated pulmonary microvascular endothelial cells", "Reactive oxygen species are involved in HIV-protein mediated PDGF-BB induction", "ROS dependent stimulation of HIF-1α is necessary for HIV-protein mediated PDGF-BB induction", "Discussion", "Conclusion" ]
[ "The advent of antiretroviral therapy (ART) has clearly led to improved survival among HIV-1 infected individuals, yet this advancement has resulted in the unexpected consequence of virus-associated noninfectious complications such as HIV-related pulmonary arterial hypertension (HIV-PAH) [1,2]. Despite adherence with ART, development of HIV-PAH serves as an independent predictor of death in patients with HIV-infection [3]. A precise characterization of the pathogenesis of HIV-PAH has so far proven elusive. As there is little evidence for direct viral infection within the pulmonary vascular bed [4-7], popular hypothesis is that secretary HIV-1 viral proteins in circulation are capable of inducing vascular oxidative stress and direct endothelial cell dysfunction and smooth muscle cell proliferation critical to the development of HIV-related arteriopathy [8,9]. Further, evidence is accumulating which suggests that the HIV-1 infection of monocyte/macrophages and lymphocytes stimulates increased production of pro-inflammatory markers and/or growth factors. implicated in the pathogenesis of HIV-PAH such as platelet derived growth factor (PDGF)-BB [10-16]. These soluble mediators can then initiate endothelial injury followed by smooth muscle cell proliferation and migration [2,17,18].\nPrevious studies provide evidence for the possible involvement of PDGF in the pathogenesis of pulmonary vascular remodeling in animal models [19,20] and in lung biopsies from patients with PPH or with HIV-PAH [12]. Furthermore, a non-specific inhibitor of PDGF signaling, imatinib, has demonstrated the ability to diminish vascular remodeling in animal studies and to mitigate clinical decline in human PAH trials [21-24]. Our previous work demonstrates an over-expression of PDGF in-vitro in HIV-infected macrophages [25] and in-vivo in Simian HIV-infected macaques [16]. 
Our recent work supports an HIV-protein mediated up-regulation of PDGF-BB in un-infectable vascular cell types such as human primary pulmonary arterial endothelial and smooth muscle cells [26]. However, the mechanism(s) by which HIV infection or viral protein(s) binding induces PDGF expression and the role of this potent mitogen in the setting of HIV-associated pulmonary arteriopathy has not been well characterized. HIV associated viral proteins including Tat and gp-120 have demonstrated the ability to trigger the generation of reactive oxygen species (ROS) [27,28]. As oxidative stress stabilizes hypoxia inducible factor (HIF)-1α, a transcription factor critical for regulation of important proliferative and vaso-active mediators [29-31], we hypothesize that viral protein generated reactive oxygen species (ROS) induce HIF-1α accumulation, with a resultant enhanced transcription of PDGF-B chain.\nThus, given the need for clarification of the mechanisms responsible for HIV-related pulmonary vascular remodeling, we, in the present study, first utilized the non-infectious NL4-3Δgag/pol HIV-1 transgenic (HIV-Tg) rat model [32,33] to explore the direct role of viral proteins in the development of pulmonary vascular remodeling. This HIV-Tg rat model [34], develops many clinical multisystem manifestations similar to those found in AIDS patients and most importantly, has earlier been demonstrated to be under significant oxidative stress. Furthermore, given that the pulmonary artery endothelial dysfunction plays a key role in the initiation and progression of PAH [35-37], utilizing the primary pulmonary endothelial cell-culture system we next delineated the importance of oxidative stress and HIF-1α activation in viral protein mediated up-regulation of PDGF-BB.", "HIV-1 transgenic (Tg) Sprague Dawley (SD) and SD wild type (WT) rats were purchased from Harlan (Indianapolis, Indiana). 
Young 4-5 months old Tg rats (n = 6) and age matched SD wild type rats (n = 6) were used for analysis. The HIV-1 Tg rat contains a gag-pol deleted NL4-3 provirus and expresses HIV viral RNA and proteins in various tissues including lymphocytes and monocytes. The animals were euthanized with inhalation of 2.5-3% isofluorane gas, followed by transcardial saline perfusion. Following euthanasia, one half of the lung was post-fixed for histological examination, while the other half was snap frozen for RNA analysis. The animal care at the Kansas University Medical Center was in strict accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals.", "Hearts were removed from the euthanized animals. After the removal of atria, the wall of the right ventricle (RV) was separated from the left ventricle (LV) and septum (LV+S) according to the established method [38]. Wet weights of both RV & LV+S were quantified, normalized to the total body weight and used to calculate the RV/LV+S ratio.", "Excised lungs were immersed in 4% paraformaldehyde overnight followed by 70% ethanol and then used for paraffin embedding. Paraffin sections of 5 μm thickness were used for Hematoxylin & Eosin (H&E) or Verhoeff von Gieson (VVG) staining. The digital scans of whole section from each animal were generated with a ScanScope scanner and then visualized and analyzed using Aperio image view software. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries (50-250 μm diameter) in blinded manner. Wall thickness and outer diameter of approximately 25 muscularized arteries were measured in each section at two perpendicular points and then averaged. The percentage medial wall thickness was then calculated as described before [39]. 
Immunohistochemistry staining of paraffin-embedded lung sections was performed as previously described [16] with primary antibodies including α-SMA, factor VIII, from Dako Corporation (Carpentaria, CA, USA), HIF-1α, from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA) and PDGF-BB, from Abcam, Inc. (Cambridge, MA).", "Human primary pulmonary microvascular endothelial cells (HPMVEC) were purchased from ScienCell research laboratories (Carlsbad, CA) and grown in endothelial cell basal media containing 2% fetal bovine serum (FBS), 1 μg/ml hydrocortisone, 10 ng/ml human epidermal growth factor, 3 ng/ml basic fibroblast growth factor, 10 μg/ml heparin, and gentamycin/amphotericin. Cells were treated with viral proteins: Tat 1-72 (1 μM, University of Kentucky), gp-120CM or gp-120LAV (100 ng/ml, Protein Sciences Corporation, Meriden, CT) for 24 hrs or 1 hr followed by western blot analysis and ROS quantification, respectively. Tat or gp120 stock solution was heat-inactivated by boiling for 30 min. For treatment with CCR5 neutralizing antibody or IgG isotype control (10 μg/ml, R&D systems) or; with antioxidant cocktail (0.2 mM ascorbate, 0.5 mM glutathione, and 3.5 μM α-tocopherol), cells were pre-treated with inhibitors for 30 min followed by treatment with gp-120 CM.", "Pulmonary endothelial cells were treated with 5-(and -6)-carboxy-2', 7'-dichlorodihydroflourescein diacetate (DCFH-DA) (Molecular Probes, Inc.) for 30 min followed by treatment with viral protein(s) for 1 hr. In the presence of H2O2, DCFH is oxidized to a fluorescent DCF within the cytoplasm which was read by fluorescent plate reader at an excitation of 485 nm with an emission of 530 nm [40].", "The silencer select pre-designed and validated siRNA duplexes targeting HIF-1α were obtained from Applied Biosystems (Carlsbad, CA). Cells were also transfected with silencer select negative control siRNA for comparison. 
HPMVECs were transfected with 10 nM siRNA using Hiperfect transfection reagent (Qiagen, Valencia, CA) as instructed by the manufacturer. The transfected cells were then treated with or without cocaine and/or Tat for 24 hrs followed by protein extraction for western blot analysis.", "We used Real-Time RT-PCR to analyze RNA extracted from frozen lungs of HIV-1 Tg rats and WT controls obtained after non-fixative perfusion. Quantitative analysis of HIF-1a, PDGF and ET-1 mRNA in Tg and WT rats was done using primers from SA Biosciences (Frederick, MD) by Real-Time RT-PCR using the SYBR Green detection method. Total RNA was isolated from frozen lung tissues by lysis in Trizol and was then converted into first strand cDNA to be used for real-time PCR. Detection was performed with an ABI Prism 7700 sequence detector. The average Ct value of the house-keeping gene, HPRT, was subtracted from that of target gene to give changes in Ct (dCt). The fold-change in gene expression (differences in dCt, or ddCt) was then determined as log2 relative units.", "Frozen rat lung tissues or endothelial cells were lysed in lysing buffer (Sigma, St. Louis, MO) containing protease inhibitors. Protein estimation in these samples was measured using the micro-BCA method protein assay kit (Pierce Chemical Co., Rockford, IL). Western blot analyses were performed using primary antibodies against HIF-1α (Santa Cruz), PDGF-BB (Santa Cruz), and β-actin (Sigma). The secondary antibodies used were horseradish peroxidase-conjugated anti-mouse or anti-rabbit (1:5000, Pierce Chemical Co) and detection was performed using the enhanced chemiluminescence system (Pierce Chemical Co.). The NIH imageJ software was used for densitometric analysis of immunoblots.", "Statistical analysis was performed using two-way analysis of variance with a post-hoc Student t- test or non-parametric Wilcoxon Rank-Sum test as appropriate. 
To test for association of the RV/LV+septum ratio with other mediators, the non-parametric Spearman's rank correlation coefficient was used and the coefficient of determination (R2) was calculated. Exact two-sided p-values were calculated for all analyses using SAS 9.1 software (SAS Institute, Inc., Cary, NC, USA). A type I error rate of 5% was used for determining statistical significance.
Pulmonary vascular remodeling in HIV-Tg rats
Reports suggesting respiratory difficulty in HIV-Tg rats [34] led us to utilize this model to look for evidence of pulmonary arteriopathy associated with HIV-related proteins. As clinical manifestations of AIDS in HIV-Tg rats begin as early as 5 months [34], we compared 4-5 month old Tg rats with age-matched WT control rats (n = 6 in each group). Analysis of both H&E- and VVG-stained paraffin-embedded lung sections from 5 month old HIV-Tg rats demonstrated moderate to severe vascular remodeling. Representative images of H&E and VVG staining from each group are shown in Figure 1. There was a significant increase in the thickness of the medial wall of muscular arteries in HIV-Tg rats (Figure 1b, c, e, f) compared to normal vessels in wild-type controls (Figure 1a, d). Further, a smooth muscle layer was observed in many of the normally non-muscular distal arteries of HIV-Tg rats. The VVG-stained sections revealed a well-defined internal elastic lamina (black stain) in WT control rats, whereas the elastic lamina was disrupted in HIV-Tg rats (Figure 1). As shown in Figure 2, the percentage medial wall thickness of pulmonary arteries with outer diameters between 50 and 200 μm was significantly higher in HIV-Tg rats than in WT controls (p < 0.001).
Histological evidence of pulmonary vascular remodeling in HIV-Tg rats. Representative images of H&E (a, b, c) and VVG (d, e, f) stained sections from HIV-Tg (b, c, e, f) and WT control (a, d) rats.
H&E photomicrographs were captured at 10× (a, b) and at 20× (c) magnification whereas VVG images were captured at 4× (d, e) and at 20× (f) original magnification (scale bar: 100 μm). Each representative image is from a different animal.
Increase in medial wall thickness of pulmonary arteries in HIV-Tg rats compared with WT rats. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries with diameters ranging from 50 μm to 250 μm. P ≤ 0.001, HIV-Tg rats vs. WT controls.
Characterization of pulmonary vascular lesions in HIV-Tg rats
In order to characterize the cellular composition of pulmonary vascular lesions in HIV-Tg rats, the lung sections were stained for α-SMA and factor VIII. As shown in the representative lung sections from each group in Figure 3, we confirmed the presence of vascular remodeling with medial wall thickening in HIV-Tg rats, while normal blood vessels were observed in WT control rats. A marked increase in the medial wall thickness of muscular arteries was observed, due to increased proliferation of smooth muscle cells (SMC), in the HIV-Tg group (Figure 3b) compared to WT controls (Figure 3a). The endothelial monolayer was generally damaged, with signs of increased expression of factor VIII (vWF) in both thickened and non-muscular vessels (Figure 3d).
Presence of medial wall thickening in the pulmonary arteries of HIV-Tg rats. Immunohistochemistry of paraffin-embedded lung sections with anti-α smooth muscle actin (brown) (a-b) and factor VIII (brown) (c-d) indicated abnormal vascular lesions with significant medial wall thickening in the lung sections from HIV-Tg rats (b, d) compared with WT controls (a, c).
Representative images were captured at 10× original magnification (scale bar: 100 μm).
Right ventricular hypertrophy (RVH) in HIV-Tg rats
The HIV-Tg rats exhibited an increase in the ratio of the wet weight of the right ventricle (RV) to the sum of the wet weights of the left ventricle and interventricular septum (LV+S), compared to the control group (Figure 4). This increase in the RV/LV+S ratio suggests disproportionate growth of the right ventricle relative to the left, indicating early RV hypertrophy in these HIV-Tg rats.
Right ventricular hypertrophy in HIV-Tg rats (n = 6) compared with age-matched WT rats (n = 6). The ratio of the wet weight of the RV wall to that of the LV wall with septum (RV/LV+septum) was measured. P = 0.06, HIV-Tg rats vs. WT controls.
Increased expression of HIF-1α and PDGF-BB in HIV-Tg rats
Having determined, through observation of right ventricular changes, the degree of pulmonary arteriopathy in HIV-Tg rats, we next compared the level of HIF-1α expression in the lungs of these rats to that in controls. Although RNA analysis of lung extracts suggested a non-significant increase in the expression of HIF-1α (p = 0.078) (Figure 5A), western blot analysis demonstrated a significant (p < 0.05) increase in HIF-1α protein, confirming increased expression of HIF-1α in HIV-Tg rats compared to WT controls (Figure 5B). This increase in HIF-1α expression was further confirmed by immunohistochemical analysis of lung sections from HIV-Tg and WT controls. As shown in Figure 5C, the lung parenchyma, along with the endothelial cells lining vessels demonstrating medial thickening, showed strong expression of HIF-1α in lung sections from HIV-Tg rats. Enhanced expression was also observed in mononuclear cells around the thickened vessels. Smooth muscle cells in these arteries, however, did not demonstrate a significant increase in HIF-1α compared to controls.
Increased expression of HIF-1α in HIV-Tg rats compared to wild-type controls.
A) Real-time RT-PCR analysis of total mRNA and B) western blot analysis of total protein extracted from lungs of HIV-Tg rats and age-matched wild-type SD rats. The histogram below the western blot image represents the average densitometric ratio of the 135 kDa HIF-1α band to β-actin in wild-type and HIV-Tg rats. Statistical significance was calculated using a two-tailed, independent t-test (*p ≤ 0.05). C) Immunohistochemistry of paraffin-embedded lung sections with anti-HIF-1α. Representative photomicrographs of immunostaining from the wild-type and HIV-Tg groups are shown. Original magnification: 60×.
We next evaluated the expression of the pro-proliferative factor PDGF-BB, which is suggested to be regulated in a HIF-dependent manner [29-31]. As shown in Figure 6A, real-time RT-PCR analysis of total mRNA extracted from lung homogenates suggested an increase in the expression of PDGF-B chain mRNA in HIV-Tg rats compared to control rats with normal vasculature. Interestingly, this increased expression of PDGF-B chain in the HIV-Tg group was positively associated with the increase in expression of HIF-1α (p = 0.002, R = 0.97) (Figure 6B). Furthermore, immunohistochemical analysis suggested enhanced expression of PDGF-BB in endothelial cells and in infiltrated mononuclear cells around thickened vessels (Figure 6C) from HIV-Tg rats, similar to the HIF-1α staining (Figure 5C). Additionally, the increased levels of HIF-1α (p = 0.009, R = 0.94) and PDGF-B chain (p = 0.036, R = 0.78) correlated strongly and linearly with the increased RV/LV+septum ratio in HIV-Tg rats (Figure 6D). No notable trends were found within the wild-type group.
Increased expression of PDGF-B chain in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA in the lungs of HIV-Tg rats and age-matched wild-type SD rats. B) Correlation of PDGF-B chain mRNA with the expression of HIF-1α in HIV-Tg rats.
C) Immunohistochemistry for PDGF-BB on paraffin-embedded lung sections from HIV-Tg and WT rats. Original magnification: 60×. D) Correlation of the RV/LV+S ratio with the expression of HIF-1α and PDGF-BB in HIV-Tg rats. Correlations were calculated using the non-parametric Spearman's rank correlation coefficient.
Increased expression of PDGF-BB in HIV-protein-treated pulmonary microvascular endothelial cells
Since we observed increased expression of HIF-1α and PDGF-BB in the lungs of HIV-Tg rats, including the endothelial cells lining the pulmonary arterial vessels, we next sought to delineate whether a HIF-dependent mechanism is involved in the viral protein-mediated up-regulation of PDGF-BB in the pulmonary endothelium. Two major HIV proteins, Tat and gp-120, are known to be actively secreted by infected cells and have been detected in the serum of HIV-infected patients [41-43]. Furthermore, given that both Tat and gp-120 were found to be expressed in the lung homogenates of HIV-Tg rats (data not shown), we first treated HPMVECs with these viral proteins for 24 hrs and assessed the expression of PDGF-BB by western blot analysis. As shown in Figure 7, treatment with Tat, gp-120LAV (from X4-type virus) or gp-120CM (from R5-type virus) resulted in a significant increase in PDGF-BB protein expression compared to untreated controls. However, when cells were treated with the same concentration of heat-inactivated Tat or gp-120, no induction of PDGF-BB expression was observed. Additionally, the maximum increase, observed on treatment with R5-type gp-120CM, was significantly greater than the PDGF-BB induction obtained with Tat or X4-type gp-120LAV treatment.
Increased expression of PDGF-BB in pulmonary endothelial cells on treatment with HIV proteins. Representative western blot showing PDGF-BB expression in cellular extracts from Tat (25 ng/ml), gp-120LAV (100 ng/ml), gp-120CM (100 ng/ml), heat-inactivated (HI) Tat or HI-gp-120 treated human pulmonary microvascular endothelial cells. The blots were re-probed with human β-actin antibodies.
The histogram represents the average densitometric ratio of PDGF-BB to β-actin from three independent experiments. Statistical significance was calculated using a two-tailed, independent t-test (**p ≤ 0.01 vs. control; #p ≤ 0.05 vs. Tat or gp-120 treatment).
Reactive oxygen species are involved in HIV-protein-mediated PDGF-BB induction
Since both Tat and gp-120 [27,28,44] are known to induce oxidative stress, we next evaluated the levels of cytoplasmic ROS in Tat- or gp-120-treated HPMVECs by DCF assay. Our findings demonstrated that treatment of cells with Tat, gp-120LAV or gp-120CM results in a significant increase in the production of ROS compared to controls (Figure 8A). Similar to PDGF-BB expression, the maximum oxidative burst was observed on treatment with R5-type gp-120CM. Based on these findings, we next focused on elucidating the mechanism(s) involved in gp-120CM-mediated up-regulation of PDGF-BB in pulmonary endothelial cells. We first investigated whether the chemokine receptor CCR5 is specifically involved in gp-120CM-mediated generation of ROS using a CCR5 neutralizing antibody. As illustrated in Figure 8B, pretreatment of HPMVECs with CCR5 antibody for 30 min prevented ROS production on gp-120CM treatment, whereas pretreatment with an isotype-matched control antibody had no effect. Furthermore, to examine whether these enhanced levels of ROS are involved in the PDGF-BB increase associated with gp-120CM treatment of pulmonary endothelial cells, cells were pretreated with an antioxidant cocktail for 30 min followed by 24 h treatment with gp-120CM. As shown in Figure 8C, western blot analysis of total cell extracts demonstrated the ability of antioxidants to prevent the gp-120CM-mediated increase in PDGF-BB expression.
Taken together, these data suggest a role for the oxidative burst in R5-type gp-120-mediated up-regulation of PDGF-BB in pulmonary endothelial cells.
Involvement of oxidative stress in gp-120-mediated PDGF-BB induction in pulmonary endothelial cells. A) Enhanced oxidative stress in pulmonary endothelial cells on Tat and gp-120 treatment. Human pulmonary microvascular endothelial cells (HPMVECs) were incubated with carboxy-H2-DCF-DA followed by Tat (25 ng/ml) or gp-120 (100 ng/ml) treatment for 60 min, and assessed for oxidative stress (mean ± SD; **P ≤ 0.01, ***P < 0.001 vs. control). B) Effect of CCR5 neutralizing antibody on gp-120CM (100 ng/ml)-mediated oxidative stress in HPMVECs. Cells were pretreated with CCR5 antibody (10 μg/ml) or an equal amount of IgG isotype control for 30 min, followed by DCF assay (mean ± SD; ***P < 0.001 treatment versus control; #P < 0.05 vs. gp-120CM treatment). C) Gp-120CM-mediated PDGF-BB expression in the presence of antioxidant cocktail. HPMVECs were pretreated with antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. Cells were then used for protein extraction followed by sequential immunoblotting with antibodies specifically directed to PDGF-BB and β-actin. Representative western blot images (upper panel) are shown with histograms (lower panel) showing the average densitometric analysis of the PDGF-BB band normalized to the corresponding β-actin band from three independent experiments (***P ≤ 0.001 versus control; ###P ≤ 0.001 versus gp-120CM treatment).
ROS-dependent stimulation of HIF-1α is necessary for HIV-protein-mediated PDGF-BB induction
It is well known that most of the pathological effects of ROS in various oxidative stress-associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore investigated whether ROS-mediated activation of HIF-1α on gp-120CM treatment is important for the increased expression of PDGF-BB in gp-120-treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein. As shown in Figure 9A, western blot analysis of gp-120CM-treated cellular extracts demonstrated increased levels of HIF-1α compared to untreated controls. Furthermore, this gp-120CM-mediated induction of HIF-1α expression was inhibited by pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), confirming ROS-mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells.
Oxidative stress-dependent HIF-1α expression is involved in gp-120CM-mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours.
B) Evaluation of HIF-1α knockdown by western blot analysis of whole cell lysates from HPMVECs transfected with siRNA specific to HIF-1α (10nM) or with negative control siRNA in presence of gp120CM treatment. (C) Knock down of HIF-1α resulted in inhibition of gp120CM-mediated induction of PDGF-BB expression in HPMVECs. Blots are representative of three independent experiments with histogram (lower panel) showing the average densitometric analysis normalized to β-actin. All values are mean ± SD. *P < = 0.01,**P < = 0.001 treatment versus control; #P < = 0.01, ##P < = 0.001 treatment versus gp120CM treated untransfected cells.\nNext to determine the involvement of HIF-1α in gp-120 mediated regulation of PDGF-BB expression, we used HIF-1α specific siRNA knock down experiments. First we optimized that the transfection of HPMVECs with 10nM HIF-1α siRNA was efficient in decreasing around 80% of gp-120CM induced HIF-1α expression when compared to cells transfected with non-specific siRNA control (Figure 9B). Furthermore, the HIF-1α siRNA transfected cells showed significant decrease in the expression of PDGF-BB in the presence of gp-120CM when compared with untransfected or non-specfic siRNA transfected gp-120CM treated cells (Figure 9C) thus underscoring the role of HIF-1α activation in the gp-120CM mediated PDGF-BB expression in pulmonary endothelial cells.\nIt is well known that most of the pathological effects of ROS in various oxidative stress associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore next investigated if ROS mediated activation of HIF-1α on gp-120CM treatment is important for increased expression of PDGF-BB in gp-120 treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein. As shown in Figure 9A, western blot analysis of gp-120CM treated cellular extracts demonstrated increased levels of HIF-1α as compared to untreated controls. 
Furthermore, this gp-120CM mediated induction of HIF-1α expression was inhibited on pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), thus confirming the ROS mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells.\nOxidative stress dependent HIF-1α expression is involved in gp-120CM mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. B) Evaluation of HIF-1α knockdown by western blot analysis of whole cell lysates from HPMVECs transfected with siRNA specific to HIF-1α (10nM) or with negative control siRNA in presence of gp120CM treatment. (C) Knock down of HIF-1α resulted in inhibition of gp120CM-mediated induction of PDGF-BB expression in HPMVECs. Blots are representative of three independent experiments with histogram (lower panel) showing the average densitometric analysis normalized to β-actin. All values are mean ± SD. *P < = 0.01,**P < = 0.001 treatment versus control; #P < = 0.01, ##P < = 0.001 treatment versus gp120CM treated untransfected cells.\nNext to determine the involvement of HIF-1α in gp-120 mediated regulation of PDGF-BB expression, we used HIF-1α specific siRNA knock down experiments. First we optimized that the transfection of HPMVECs with 10nM HIF-1α siRNA was efficient in decreasing around 80% of gp-120CM induced HIF-1α expression when compared to cells transfected with non-specific siRNA control (Figure 9B). 
Furthermore, the HIF-1α siRNA transfected cells showed significant decrease in the expression of PDGF-BB in the presence of gp-120CM when compared with untransfected or non-specfic siRNA transfected gp-120CM treated cells (Figure 9C) thus underscoring the role of HIF-1α activation in the gp-120CM mediated PDGF-BB expression in pulmonary endothelial cells.", "Reports suggesting respiratory difficulty in HIV-Tg rats [34] led us to utilize this model in looking for evidence of pulmonary arteriopathy associated with HIV-related proteins. As clinical manifestations of AIDS in HIV-Tg rats begins as early as 5 months [34], we compared 4-5 months old Tg with age-matched WT-control rats (n = 6 in each group). Analysis of both, H&E and VVG staining, of paraffin embedded lung sections from 5 months old HIV-Tg rats demonstrated moderate to severe vascular remodeling. Representative images of H&E and VVG staining from each group are shown in Figure 1A. There was a significant increase in the thickness of the medial wall of muscular arteries in HIV-Tg rats (Figure 1b, c, e, f) compared to normal vessels in wild-type control (Figure 1a, d). Further, the presence of a smooth muscle layer was observed in many of the normally non-muscular distal arteries of HIV-Tg rats. The VVG-stained sections from both the groups revealed a well defined internal elastic lamina (black stain) in WT-control rats whereas elastic lamina was found to be disrupted in HIV-Tg rats (Figure 1). As shown in Figure 2, percentage of medial wall thickness of pulmonary arteries with outer diameter ranging between 50-200 μm was observed to be significantly high in HIV-Tg rats as compared to WT controls (p < 0.001).\nHistological evidence of pulmonary vascular remodeling in HIV-Tg rats. Representative images of H& E (a, b, c) and VVG (d, e, f) stained sections from HIV-Tg (b, c, e, f) and WT control (a, d) rats. 
H&E photomicrographs were captured at 10× (a, b) and 20× (c) magnification, whereas VVG images were captured at 4× (d, e) and 20× (f) original magnification (scale bar: 100 μm). Each representative image is from a different animal.\nIncrease in medial wall thickness of pulmonary arteries in HIV-Tg rats compared with WT rats. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries with diameters ranging from 50 to 250 μm. P ≤ 0.001, HIV-Tg rats vs. WT controls.", "To characterize the cellular composition of pulmonary vascular lesions in HIV-Tg rats, the lung sections were stained for α-SMA and factor VIII. As shown in the representative lung sections from each group in Figure 3, we confirmed the presence of vascular remodeling with medial wall thickening in HIV-Tg rats, while normal blood vessels were observed in WT control rats. A marked increase in the medial wall thickness of muscular arteries, due to increased proliferation of smooth muscle cells (SMC), was observed in the HIV-Tg group (Figure 3b) compared to WT controls (Figure 3a). The endothelial monolayer was generally damaged, with signs of increased expression of factor VIII (von Willebrand factor, vWF) in both thickened and non-muscular vessels (Figure 3d).\nPresence of medial wall thickening in the pulmonary arteries of HIV-Tg rats. Immunohistochemistry of paraffin-embedded lung sections with anti-α-smooth muscle actin (brown) (a-b) and factor VIII (brown) (c-d) indicated abnormal vascular lesions with significant medial wall thickening in the lung sections from HIV-Tg rats (b, d) compared with WT controls (a, c). Representative images were captured at 10× original magnification (scale bar: 100 μm).", "The HIV-Tg rats exhibited an increase in the ratio of the wet weight of the right ventricle (RV) to the sum of the wet weights of the left ventricle and interventricular septum (LV+S), compared to that found in the control group (Figure 4).
This increase in the RV/LV+S ratio suggests disproportionate growth of the right ventricle compared to the left, indicating early RV hypertrophy in these HIV-Tg rats.\nRight ventricular hypertrophy in HIV-Tg rats (n = 6) compared with age-matched WT rats (n = 6). The ratio of the wet weight of the RV wall to that of the LV wall with septum (RV/LV+septum) was measured. P = 0.06, HIV-Tg rats vs. WT controls.", "Having established, through observation of right ventricular changes, the degree of pulmonary arteriopathy in HIV-Tg rats, we next compared the level of HIF-1α expression in the lungs of these rats to that found in controls. Although RNA analysis of lung extracts suggested a non-significant increase in the expression of HIF-1α (p = 0.078) (Figure 5A), western blot analysis demonstrated a significant (p < 0.05) increase in HIF-1α protein, confirming the increased expression of HIF-1α in HIV-Tg rats compared to WT controls (Figure 5B). This increase in the expression of HIF-1α was further confirmed by immunohistochemical analysis of the lung sections from HIV-Tg and WT controls. As shown in Figure 5C, the lung parenchyma, along with endothelial cells lining the vessels demonstrating medial thickening, showed strong expression of HIF-1α in the lung sections from HIV-Tg rats. Enhanced expression was also observed in mononuclear cells around the thickened vessels. Smooth muscle cells in these arteries, however, did not demonstrate a significant increase in HIF-1α compared to those from controls.\nIncreased expression of HIF-1α in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA and B) western blot analysis of total protein, extracted from lungs of HIV-Tg rats and age-matched wild-type SD rats. The histogram below the western blot image represents the average densitometric ratio of the 135 kDa HIF-1α band to β-actin in wild-type and HIV-Tg rats. Statistical significance was calculated using a two-tailed, independent t-test.
(*p ≤ 0.05) C) Immunohistochemistry of paraffin-embedded lung sections with anti-HIF-1α. Representative photomicrographs of immunostaining from the wild-type and HIV-Tg groups are shown. Original magnification: 60×.\nWe next evaluated the expression of the pro-proliferative factor PDGF-BB, which is suggested to be regulated in an HIF-dependent manner [29-31]. As shown in Figure 6A, real-time RT-PCR analysis of total mRNA extracted from lung homogenates suggested an increase in the expression of PDGF-B chain mRNA in HIV-Tg rats compared to control rats with normal vasculature. Interestingly, this increased expression of PDGF-B chain in the HIV-Tg group was positively associated with the increased expression of HIF-1α (p = 0.002, R = 0.97) (Figure 6B). Furthermore, immunohistochemical analysis suggested enhanced expression of PDGF-BB in endothelial cells and in mononuclear infiltrated cells around thickened vessels (Figure 6C) from HIV-Tg rats, similar to the HIF-1α staining (Figure 5C). Additionally, the increased levels of HIF-1α (p = 0.009, R = 0.94) and PDGF-B chain (p = 0.036, R = 0.78) correlated strongly with the increased RV/LV+septum ratio in HIV-Tg rats (Figure 6D). No notable trends were found within the wild-type group.\nIncreased expression of PDGF-B chain in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA in the lungs of HIV-Tg rats and age-matched wild-type SD rats. B) Correlation of PDGF-B chain mRNA with the expression of HIF-1α in HIV-Tg rats. C) Immunohistochemistry for PDGF-BB on paraffin-embedded lung sections from HIV-Tg and WT rats. Original magnification: 60×. D) Correlation of the RV/LV+S ratio with the expression of HIF-1α and PDGF-BB in HIV-Tg rats.
Correlation was calculated using the non-parametric Spearman's rank correlation coefficient.", "Since we observed increased expression of HIF-1α and PDGF-BB in the lungs of HIV-Tg rats, including in endothelial cells lining the pulmonary arterial vessels, we next sought to delineate whether an HIF-dependent mechanism is involved in the viral protein-mediated up-regulation of PDGF-BB in the pulmonary endothelium. Two major HIV proteins, Tat and gp-120, are known to be actively secreted by infected cells and have been detected in the serum of HIV-infected patients [41-43]. Furthermore, given that both Tat and gp-120 were detected in the lung homogenates of HIV-Tg rats (data not shown), we first treated HPMVECs with these viral proteins for 24 hrs and assessed the expression of PDGF-BB by western blot analysis. As shown in Figure 7, treatment with Tat, gp-120LAV (from X4-type virus) or gp-120CM (from R5-type virus) resulted in a significant increase of PDGF-BB protein expression compared to untreated controls. However, when cells were treated with the same concentration of heat-inactivated Tat or gp-120, no induction of PDGF-BB expression was observed. Additionally, the maximum increase, observed on treatment with R5-type gp-120CM, was significantly greater than the PDGF-BB induction obtained on Tat or X4-type gp-120LAV treatment.\nIncreased expression of PDGF-BB in pulmonary endothelial cells on treatment with HIV proteins. Representative western blot showing PDGF-BB expression in cellular extracts from Tat (25 ng/ml), gp-120LAV (100 ng/ml), gp-120CM (100 ng/ml), heat-inactivated (HI) Tat or HI-gp-120 treated human pulmonary microvascular endothelial cells. The blots were re-probed with human β-actin antibodies. The histogram represents the average densitometric ratio of PDGF-BB to β-actin from three independent experiments. Statistical significance was calculated using a two-tailed, independent t-test. (** p ≤ 0.01 vs.
control, #p ≤ 0.05 vs. Tat or gp-120 treatment).", "Since both Tat and gp-120 [27,28,44] are known to induce oxidative stress, we next evaluated the levels of cytoplasmic ROS in Tat- or gp-120-treated HPMVECs by DCF assay. Our findings demonstrated that treatment of cells with Tat, gp-120LAV or gp-120CM resulted in a significant increase in the production of ROS compared to controls (Figure 8A). As with PDGF-BB expression, the maximum oxidative burst was observed on treatment with R5-type gp-120CM. Based on these findings, we next focused on elucidating the mechanism(s) involved in the gp-120CM-mediated up-regulation of PDGF-BB in pulmonary endothelial cells. We first investigated whether the chemokine receptor CCR5 is specifically involved in gp-120CM-mediated generation of ROS by use of a CCR5 neutralizing antibody. As illustrated in Figure 8B, pretreatment of HPMVECs with CCR5 antibody for 30 min prevented ROS production on gp-120CM treatment, whereas pretreatment with an isotype-matched control antibody had no effect. Furthermore, to examine whether these enhanced levels of ROS are involved in the PDGF-BB increase associated with gp-120CM treatment of pulmonary endothelial cells, cells were pretreated with antioxidant cocktail for 30 min followed by 24 h treatment with gp-120CM. As shown in Figure 8C, western blot analysis of the total cell extract demonstrated the ability of antioxidants to prevent the gp-120CM-mediated increase in PDGF-BB expression. Taken together, these data suggest a role for the oxidative burst in R5-type gp-120-mediated up-regulation of PDGF-BB in pulmonary endothelial cells.\nInvolvement of oxidative stress in gp-120-mediated PDGF-BB induction in pulmonary endothelial cells. A) Enhanced oxidative stress in pulmonary endothelial cells on Tat and gp120 treatment.
Human pulmonary microvascular endothelial cells (HPMVECs) were incubated with carboxy-H2-DCF-DA followed by Tat (25 ng/ml) or gp-120 (100 ng/ml) treatment for 60 min, and assessed for oxidative stress (Mean ± SD, **P ≤ 0.01, ***P < 0.001 vs. control). B) Effect of CCR5 neutralizing antibody on gp-120CM (100 ng/ml) mediated oxidative stress in HPMVECs. Cells were pretreated with CCR5 antibody (10 μg/ml) or an equal amount of IgG isotype control for 30 min, followed by DCF assay (Mean ± SD, ***P < 0.001 treatment versus control; #P < 0.05 vs. gp120CM treatment). C) Gp-120CM-mediated PDGF-BB expression in the presence of antioxidant cocktail. HPMVECs were pretreated with antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. Cells were then used for protein extraction followed by sequential immunoblotting with antibodies specifically directed to PDGF-BB and β-actin. Representative western blot images (upper panel) are shown with histograms (lower panel) showing the average densitometric analysis of the PDGF-BB band normalized to the corresponding β-actin band from three independent experiments (***P ≤ 0.001 versus control; ###P ≤ 0.001 versus gp120CM treatment).", "It is well known that most of the pathological effects of ROS in various oxidative stress-associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore next investigated whether ROS-mediated activation of HIF-1α on gp-120CM treatment is important for the increased expression of PDGF-BB in gp-120-treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein. As shown in Figure 9A, western blot analysis of gp-120CM-treated cellular extracts demonstrated increased levels of HIF-1α compared to untreated controls.
Furthermore, this gp-120CM-mediated induction of HIF-1α expression was inhibited on pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), confirming the ROS-mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells.\nOxidative stress-dependent HIF-1α expression is involved in gp-120CM-mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. B) Evaluation of HIF-1α knockdown by western blot analysis of whole-cell lysates from HPMVECs transfected with siRNA specific to HIF-1α (10 nM) or with negative control siRNA in the presence of gp120CM treatment. C) Knockdown of HIF-1α resulted in inhibition of gp120CM-mediated induction of PDGF-BB expression in HPMVECs. Blots are representative of three independent experiments, with the histogram (lower panel) showing the average densitometric analysis normalized to β-actin. All values are mean ± SD. *P ≤ 0.01, **P ≤ 0.001 treatment versus control; #P ≤ 0.01, ##P ≤ 0.001 treatment versus gp120CM-treated untransfected cells.\nNext, to determine the involvement of HIF-1α in gp-120-mediated regulation of PDGF-BB expression, we performed HIF-1α-specific siRNA knockdown experiments. We first established that transfection of HPMVECs with 10 nM HIF-1α siRNA decreased gp-120CM-induced HIF-1α expression by approximately 80% compared to cells transfected with non-specific control siRNA (Figure 9B).
Furthermore, the HIF-1α siRNA-transfected cells showed a significant decrease in the expression of PDGF-BB in the presence of gp-120CM compared with untransfected or non-specific siRNA-transfected gp-120CM-treated cells (Figure 9C), underscoring the role of HIF-1α activation in the gp-120CM-mediated PDGF-BB expression in pulmonary endothelial cells.", "In this study, we offer histological and physiologic evidence of pulmonary vascular remodeling, with significant thickening of the medial layer of the arteries and elevated RV mass, in a non-infectious rat model of HIV-1. The pulmonary arteriopathy exhibited by the HIV-Tg rats was manifested primarily by smooth muscle proliferation within the medial wall and endothelial disruption, with little indication of endothelial cell proliferation and an absence of classic plexiform lesions. Although RV hypertrophy in the HIV-Tg rats is suggestive of concomitant right heart pressure overload, the presence of pulmonary arteriopathy alone does not necessarily predict pulmonary hypertension. Furthermore, in humans only a fraction of individuals with HIV develop PAH, suggesting that the etiology of HIV-PAH is multi-factorial and complex, where multiple insults such as HIV infection, drugs of abuse, and genetic predisposition may be necessary to induce clinical disease. Therefore, one could hypothesize that the viral protein(s) provide the first 'hit', and a second 'hit', such as administration of stimulants, may lead to more severe pathology in these HIV-Tg rats.\nInflammation is considered to play an important role in HIV-associated pulmonary vascular remodeling, with accumulation of macrophages and T lymphocytes found in the vicinity of pulmonary vessels of pulmonary hypertension patients [46,47]. Consistent with these findings, we also observed infiltration of mononuclear cells near or around the thickened vessels, with mild interstitial pneumonitis as described before in this model [34].
HIV-1 infection is known to stimulate monocytes/macrophages and lymphocytes to secrete elevated levels of cytokines, growth factors and viral proteins such as Nef, Tat and gp-120 [10-16], which can then initiate endothelial injury and SMC proliferation and migration, leading to the development of HIV-PAH [8-10,18,26,48]. It is plausible that the medial wall thickening, an important determinant of pulmonary vascular resistance, observed in the HIV-Tg rat model is the result of the integrated effects of various HIV proteins and related inflammatory mediators, including PDGF-BB.\nExamination of the HIV-1 Tg rat lungs revealed increased staining of PDGF-BB in macrophages around hypertrophied vessels and in endothelial cells. Earlier studies suggest induction of PDGF-BB by endothelial cells [49], but not by SMCs [50], in response to hypoxia. Furthermore, the vasculature and lungs of this HIV-Tg rat model have previously been demonstrated to be under significant oxidative stress [32,33]. Along these lines, we observed enhanced expression of HIF-1α, a crucial transcription factor responsible for sensing and responding to oxidant stress and hypoxic conditions [51], suggesting, in part, the involvement of the ROS/HIF-1α pathway in the over-expression of PDGF-BB. HIF-1α controls a large program of genes critical to the development of pulmonary arterial hypertension [29,31,52,53]. Interestingly, the expression of HIF-1α and PDGF was not only elevated and positively associated in the lungs of the HIV-1 Tg rats, but the quantity of each was also directly related, in a linear fashion, to the degree of increase in right ventricular hypertrophy (RV/LV+septum ratio).\nWhile these correlations in the HIV-1 transgenic model are consistent with our hypothesis, our in-vitro work in pulmonary endothelial cells validates that a viral protein-mediated oxidative stress/HIF-1α pathway results in induction of PDGF-BB.
Injury to the endothelium, an initiating event in PAH [54], is known to be associated with the induction of oxidative stress [44]. The HIV-associated proteins Tat and gp-120, as confirmed by our findings and those of others, demonstrate the ability to invoke oxidative stress-mediated endothelial dysfunction [27,28,44,55]. In addition, our results demonstrating enhanced levels of HIF-1α in viral protein-treated pulmonary endothelial cells are in concert with previous findings supporting the activation and accumulation of HIF-1α by HIV-1 through the production of ROS [56]. PDGF-BB, known to be involved in hypoxia-induced vascular remodeling [30,57,58], has been suggested to be up-regulated in an HIF-dependent manner, but the mechanism by which HIF-1α and PDGF levels are elevated during the vascular remodeling associated with PAH is still not completely understood. A putative HIF-response element on the PDGF gene has been identified [59], but studies demonstrating the direct involvement of HIF-1α in the regulation of PDGF expression are lacking. Here, we provide evidence validating the significance of HIF-1α in the pathogenesis of HIV-associated vascular dysfunction, and report the novel finding that its response to viral protein-generated oxidative stress is to augment PDGF expression in the pulmonary endothelium. To our knowledge, this is the first report validating that HIV-1 viral proteins induce PDGF expression through the activation of HIF-1α.\nThe HIV-1 virus is unable to actively infect endothelial cells due to the absence of the necessary CD4 receptors. However, viral proteins have been demonstrated to act on endothelial cells through direct binding to their CCR5 (R5) or CXCR4 (X4) co-receptors [60]. This is corroborated by our findings showing mitigation of the gp-120CM-induced increase in PDGF-BB expression in the presence of CCR5 neutralizing antibody.
Maximum PDGF-BB expression and ROS production were seen on treatment with R5-type gp-120, which is expected to be secreted in abundance by the infiltrating HIV-infected CCR5+ T cells [61] and macrophages seen around the pulmonary vascular lesions associated with PAH [62]. In addition, studies on co-receptor usage of HIV have shown that virus utilizing CCR5 as a co-receptor is the predominant type found in HIV-infected individuals [63]. Furthermore, R5 gp-120 has been reported to induce the expression of cell-cycle and cell proliferation-related genes more strongly than X4 gp-120 in peripheral blood mononuclear cells [64], and this differential potency of the gp-120 effect may be present in pulmonary endothelial cells as well.", "In summary, we demonstrate that the influence of HIV-1 proteins alone, without viral infection, is associated with pulmonary arteriopathy, including accumulation of HIF-1α and PDGF, as observed in the HIV-1 Tg rats. Furthermore, our in-vitro findings confirm that HIV-1 viral protein-mediated generation of oxidative stress and the resultant activation of HIF-1α lead to subsequent induction of PDGF expression in pulmonary endothelial cells. Consistent with a possible role of PDGF in the development of idiopathic PAH, the correlation of this mediator with RVH suggests that this pathway may be one of the many insults involved in the development of HIV-related pulmonary arteriopathy and potentially HIV-PAH." ]
[ "Introduction", "Methods", "HIV-1 transgenic and wild type rats", "Right Ventricular Mass Evaluation", "Histology and immuno-histochemical analysis of pulmonary arteries", "Cell culture and treatments", "Quantification of cellular oxidative stress using dichlorofluorescein (DCF) assay", "Transfection of pulmonary endothelial cells with small interfering (si) RNA", "Real-Time RT-PCR analysis", "Western Blot Analysis", "Statistical Analysis", "Results", "Pulmonary vascular remodeling in HIV-Tg rats", "Characterization of pulmonary vascular lesions in HIV-Tg rats", "Right ventricular hypertrophy (RVH) in HIV-Tg rats", "Increased expression of HIF-1a and PDGF-BB in HIV-Tg rats", "Increased expression of PDGF-BB in HIV-protein(s) treated pulmonary microvascular endothelial cells", "Reactive oxygen species are involved in HIV-protein mediated PDGF-BB induction", "ROS dependent stimulation of HIF-1α is necessary for HIV-protein mediated PDGF-BB induction", "Discussion", "Conclusion" ]
[ "The advent of antiretroviral therapy (ART) has clearly led to improved survival among HIV-1 infected individuals, yet this advancement has resulted in the unexpected consequence of virus-associated noninfectious complications such as HIV-related pulmonary arterial hypertension (HIV-PAH) [1,2]. Despite adherence to ART, development of HIV-PAH serves as an independent predictor of death in patients with HIV infection [3]. A precise characterization of the pathogenesis of HIV-PAH has so far proven elusive. As there is little evidence for direct viral infection within the pulmonary vascular bed [4-7], a popular hypothesis is that secreted HIV-1 viral proteins in circulation are capable of inducing vascular oxidative stress, direct endothelial cell dysfunction and the smooth muscle cell proliferation critical to the development of HIV-related arteriopathy [8,9]. Further, evidence is accumulating which suggests that HIV-1 infection of monocytes/macrophages and lymphocytes stimulates increased production of pro-inflammatory markers and/or growth factors implicated in the pathogenesis of HIV-PAH, such as platelet derived growth factor (PDGF)-BB [10-16]. These soluble mediators can then initiate endothelial injury followed by smooth muscle cell proliferation and migration [2,17,18].\nPrevious studies provide evidence for the possible involvement of PDGF in the pathogenesis of pulmonary vascular remodeling in animal models [19,20] and in lung biopsies from patients with PPH or with HIV-PAH [12]. Furthermore, a non-specific inhibitor of PDGF signaling, imatinib, has demonstrated the ability to diminish vascular remodeling in animal studies and to mitigate clinical decline in human PAH trials [21-24]. Our previous work demonstrates over-expression of PDGF in-vitro in HIV-infected macrophages [25] and in-vivo in Simian HIV-infected macaques [16].
Our recent work supports an HIV-protein-mediated up-regulation of PDGF-BB in un-infectable vascular cell types such as human primary pulmonary arterial endothelial and smooth muscle cells [26]. However, the mechanism(s) by which HIV infection or viral protein binding induces PDGF expression, and the role of this potent mitogen in the setting of HIV-associated pulmonary arteriopathy, have not been well characterized. HIV-associated viral proteins including Tat and gp-120 have demonstrated the ability to trigger the generation of reactive oxygen species (ROS) [27,28]. As oxidative stress stabilizes hypoxia inducible factor (HIF)-1α, a transcription factor critical for the regulation of important proliferative and vaso-active mediators [29-31], we hypothesize that viral protein-generated ROS induce HIF-1α accumulation, with resultant enhanced transcription of the PDGF-B chain.\nThus, given the need for clarification of the mechanisms responsible for HIV-related pulmonary vascular remodeling, in the present study we first utilized the non-infectious NL4-3Δgag/pol HIV-1 transgenic (HIV-Tg) rat model [32,33] to explore the direct role of viral proteins in the development of pulmonary vascular remodeling. This HIV-Tg rat model [34] develops many clinical multisystem manifestations similar to those found in AIDS patients and, most importantly, has previously been demonstrated to be under significant oxidative stress. Furthermore, given that pulmonary artery endothelial dysfunction plays a key role in the initiation and progression of PAH [35-37], we next used a primary pulmonary endothelial cell-culture system to delineate the importance of oxidative stress and HIF-1α activation in viral protein-mediated up-regulation of PDGF-BB.", " HIV-1 transgenic and wild type rats HIV-1 transgenic (Tg) Sprague Dawley (SD) and SD wild type (WT) rats were purchased from Harlan (Indianapolis, Indiana).
Young 4-5-month-old Tg rats (n = 6) and age-matched SD wild-type rats (n = 6) were used for analysis. The HIV-1 Tg rat contains a gag-pol-deleted NL4-3 provirus and expresses HIV viral RNA and proteins in various tissues including lymphocytes and monocytes. The animals were euthanized with inhalation of 2.5-3% isoflurane gas, followed by transcardial saline perfusion. Following euthanasia, one half of the lung was post-fixed for histological examination, while the other half was snap frozen for RNA analysis. The animal care at the Kansas University Medical Center was in strict accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals.\n Right Ventricular Mass Evaluation Hearts were removed from the euthanized animals. After the removal of the atria, the wall of the right ventricle (RV) was separated from the left ventricle (LV) and septum (LV+S) according to the established method [38]. Wet weights of both the RV and LV+S were quantified, normalized to the total body weight and used to calculate the RV/LV+S ratio.
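The right-ventricular mass evaluation reduces to a simple wet-weight ratio; a minimal Python sketch (the weights below are illustrative placeholders, not measurements from this study):

```python
def rv_mass_index(rv_wet_weight_g, lv_plus_septum_wet_weight_g):
    """RV/(LV+S) wet-weight ratio used as an index of right ventricular
    hypertrophy. A body-weight normalization applied to both terms, as in
    the protocol above, cancels out in the ratio itself."""
    return rv_wet_weight_g / lv_plus_septum_wet_weight_g

# Illustrative placeholder weights (grams), not study data:
index = rv_mass_index(0.20, 0.80)  # ratio of 1:4
```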
 Histology and immuno-histochemical analysis of pulmonary arteries Excised lungs were immersed in 4% paraformaldehyde overnight, followed by 70% ethanol, and then used for paraffin embedding. Paraffin sections of 5 μm thickness were used for Hematoxylin & Eosin (H&E) or Verhoeff-Van Gieson (VVG) staining. Digital scans of the whole section from each animal were generated with a ScanScope scanner and then visualized and analyzed using Aperio image view software. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries (50-250 μm diameter) in a blinded manner. Wall thickness and outer diameter of approximately 25 muscularized arteries were measured in each section at two perpendicular points and then averaged. The percentage medial wall thickness was then calculated as described before [39]. Immunohistochemical staining of paraffin-embedded lung sections was performed as previously described [16] with primary antibodies including α-SMA and factor VIII from Dako Corporation (Carpinteria, CA, USA), HIF-1α from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA) and PDGF-BB from Abcam, Inc. (Cambridge, MA).
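The percentage medial wall thickness is calculated "as described before [39]", so the exact formula lives in that reference; a commonly used convention, assumed here for illustration, is twice the averaged wall thickness divided by the external diameter, expressed as a percentage:

```python
def percent_medial_wall_thickness(wall_measurements_um, outer_diameter_um):
    """Assumed convention: %MWT = (2 * mean wall thickness / outer diameter) * 100.
    `wall_measurements_um` holds the wall-thickness values taken at two
    perpendicular points on one artery; they are averaged before the ratio."""
    mean_wall = sum(wall_measurements_um) / len(wall_measurements_um)
    return (2.0 * mean_wall / outer_diameter_um) * 100.0

# Illustrative measurements (micrometres), not study data:
pct = percent_medial_wall_thickness([10.0, 14.0], 120.0)  # ≈ 20% of vessel diameter
```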
 Cell culture and treatments Human primary pulmonary microvascular endothelial cells (HPMVECs) were purchased from ScienCell Research Laboratories (Carlsbad, CA) and grown in endothelial cell basal media containing 2% fetal bovine serum (FBS), 1 μg/ml hydrocortisone, 10 ng/ml human epidermal growth factor, 3 ng/ml basic fibroblast growth factor, 10 μg/ml heparin, and gentamycin/amphotericin. Cells were treated with the viral proteins Tat 1-72 (1 μM, University of Kentucky) and gp-120CM or gp-120LAV (100 ng/ml, Protein Sciences Corporation, Meriden, CT) for 24 hrs or 1 hr, followed by western blot analysis or ROS quantification, respectively. Tat or gp120 stock solution was heat-inactivated by boiling for 30 min. For treatment with CCR5 neutralizing antibody or IgG isotype control (10 μg/ml, R&D Systems), or with antioxidant cocktail (0.2 mM ascorbate, 0.5 mM glutathione, and 3.5 μM α-tocopherol), cells were pre-treated with the inhibitors for 30 min followed by treatment with gp-120CM.
 Quantification of cellular oxidative stress using dichlorofluorescein (DCF) assay Pulmonary endothelial cells were treated with 5-(and-6)-carboxy-2',7'-dichlorodihydrofluorescein diacetate (DCFH-DA) (Molecular Probes, Inc.) for 30 min followed by treatment with the viral protein(s) for 1 hr. In the presence of H2O2, DCFH is oxidized to fluorescent DCF within the cytoplasm, which was read by a fluorescent plate reader at an excitation of 485 nm and an emission of 530 nm [40].\n Transfection of pulmonary endothelial cells with small interfering (si) RNA The Silencer Select pre-designed and validated siRNA duplexes targeting HIF-1α were obtained from Applied Biosystems (Carlsbad, CA). Cells were also transfected with Silencer Select negative control siRNA for comparison. HPMVECs were transfected with 10 nM siRNA using HiPerFect transfection reagent (Qiagen, Valencia, CA) as instructed by the manufacturer.
The transfected cells were then treated with or without cocaine and/or Tat for 24 hrs, followed by protein extraction for western blot analysis.

Real-Time RT-PCR analysis

We used Real-Time RT-PCR to analyze RNA extracted from the frozen lungs of HIV-1 Tg rats and WT controls obtained after non-fixative perfusion. Quantitative analysis of HIF-1α, PDGF and ET-1 mRNA in Tg and WT rats was performed with primers from SA Biosciences (Frederick, MD) by Real-Time RT-PCR using the SYBR Green detection method. Total RNA was isolated from frozen lung tissue by lysis in Trizol and then converted into first-strand cDNA for real-time PCR. Detection was performed with an ABI Prism 7700 sequence detector. The average Ct value of the housekeeping gene, HPRT, was subtracted from that of the target gene to give the change in Ct (dCt). The fold-change in gene expression (the difference in dCt values, or ddCt) was then expressed in log2 relative units.
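The dCt/ddCt arithmetic described above follows the standard comparative-Ct scheme; the sketch below assumes the usual sign convention (log2 fold-change = -ddCt) and uses hypothetical Ct values for illustration:

```python
def dct(ct_target, ct_housekeeping):
    # dCt: normalize the target gene's Ct to the housekeeping gene (HPRT)
    return ct_target - ct_housekeeping

def log2_fold_change(ct_target_tg, ct_hprt_tg, ct_target_wt, ct_hprt_wt):
    # ddCt = dCt(Tg) - dCt(WT); expression reported in log2 relative units
    ddct = dct(ct_target_tg, ct_hprt_tg) - dct(ct_target_wt, ct_hprt_wt)
    return -ddct
```

For example, a target reaching threshold two cycles earlier in Tg lungs, with HPRT unchanged, gives a log2 fold-change of +2 (a 4-fold increase).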
Western Blot Analysis

Frozen rat lung tissues or endothelial cells were lysed in lysis buffer (Sigma, St. Louis, MO) containing protease inhibitors. Protein concentration in these samples was measured using the micro-BCA protein assay kit (Pierce Chemical Co., Rockford, IL). Western blot analyses were performed using primary antibodies against HIF-1α (Santa Cruz), PDGF-BB (Santa Cruz), and β-actin (Sigma). The secondary antibodies were horseradish peroxidase-conjugated anti-mouse or anti-rabbit (1:5000, Pierce Chemical Co.), and detection was performed using the enhanced chemiluminescence system (Pierce Chemical Co.). NIH ImageJ software was used for densitometric analysis of the immunoblots.

Statistical Analysis

Statistical analysis was performed using two-way analysis of variance with a post-hoc Student t-test or the non-parametric Wilcoxon rank-sum test, as appropriate. To test for association of the RV/LV+septum ratio with other mediators, the non-parametric Spearman's rank correlation coefficient was used and the coefficient of determination (R2) was calculated. Exact two-sided p-values were calculated for all analyses using SAS 9.1 software (SAS Institute, Inc., Cary, NC, USA). A type I error rate of 5% was used to determine statistical significance.

Animals

HIV-1 transgenic (Tg) Sprague Dawley (SD) and SD wild-type (WT) rats were purchased from Harlan (Indianapolis, Indiana). Young, 4-5 month old Tg rats (n = 6) and age-matched SD wild-type rats (n = 6) were used for analysis. The HIV-1 Tg rat contains a gag-pol-deleted NL4-3 provirus and expresses HIV viral RNA and proteins in various tissues, including lymphocytes and monocytes. The animals were euthanized by inhalation of 2.5-3% isoflurane gas, followed by transcardial saline perfusion. Following euthanasia, one half of the lung was post-fixed for histological examination, while the other half was snap-frozen for RNA analysis. Animal care at the Kansas University Medical Center was in strict accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals.

Assessment of right ventricular hypertrophy

Hearts were removed from the euthanized animals. After removal of the atria, the wall of the right ventricle (RV) was separated from the left ventricle (LV) and septum (LV+S) according to the established method [38]. Wet weights of both the RV and LV+S were quantified, normalized to total body weight, and used to calculate the RV/LV+S ratio.
Pulmonary vascular remodeling in HIV-Tg rats

Reports suggesting respiratory difficulty in HIV-Tg rats [34] led us to use this model to look for evidence of pulmonary arteriopathy associated with HIV-related proteins. As clinical manifestations of AIDS in HIV-Tg rats begin as early as 5 months [34], we compared 4-5 month old Tg rats with age-matched WT control rats (n = 6 in each group). Analysis of both H&E and VVG staining of paraffin-embedded lung sections from 5 month old HIV-Tg rats demonstrated moderate to severe vascular remodeling. Representative images of H&E and VVG staining from each group are shown in Figure 1.
There was a significant increase in the thickness of the medial wall of muscular arteries in HIV-Tg rats (Figure 1b, c, e, f) compared to normal vessels in wild-type controls (Figure 1a, d). Further, a smooth muscle layer was present in many of the normally non-muscular distal arteries of HIV-Tg rats. The VVG-stained sections revealed a well-defined internal elastic lamina (black stain) in WT control rats, whereas the elastic lamina was disrupted in HIV-Tg rats (Figure 1). As shown in Figure 2, the percentage medial wall thickness of pulmonary arteries with outer diameters between 50-200 μm was significantly higher in HIV-Tg rats than in WT controls (p < 0.001).

Histological evidence of pulmonary vascular remodeling in HIV-Tg rats. Representative images of H&E (a, b, c) and VVG (d, e, f) stained sections from HIV-Tg (b, c, e, f) and WT control (a, d) rats. H&E photomicrographs were captured at 10× (a, b) and 20× (c) magnification, whereas VVG images were captured at 4× (d, e) and 20× (f) original magnification (scale bar: 100 μm). Each representative image is from a different animal.

Increase in medial wall thickness of pulmonary arteries in HIV-Tg rats compared with WT rats. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries with diameters ranging from 50 μm to 250 μm. P ≤ 0.001, HIV-Tg rats vs. WT controls.

Characterization of pulmonary vascular lesions in HIV-Tg rats

To characterize the cellular composition of the pulmonary vascular lesions in HIV-Tg rats, lung sections were stained for α-SMA and factor VIII. As shown in the representative lung sections from each group in Figure 3, we confirmed the presence of vascular remodeling with medial wall thickening in HIV-Tg rats, while normal blood vessels were observed in WT control rats.
A marked increase in the medial wall thickness of muscular arteries, due to increased proliferation of smooth muscle cells (SMC), was observed in the HIV-Tg group (Figure 3b) compared to WT controls (Figure 3a). The endothelial monolayer was generally damaged, with signs of increased expression of factor VIII (vWF) in both thickened and non-muscular vessels (Figure 3d).

Presence of medial wall thickening in the pulmonary arteries of HIV-Tg rats. Immunohistochemistry of paraffin-embedded lung sections with anti-α smooth muscle actin (brown) (a-b) and factor VIII (brown) (c-d) indicated abnormal vascular lesions with significant medial wall thickening in lung sections from HIV-Tg rats (b, d) compared with WT controls (a, c). Representative images were captured at 10× original magnification (scale bar: 100 μm).

Right ventricular hypertrophy (RVH) in HIV-Tg rats

The HIV-Tg rats exhibited an increase in the ratio of the wet weight of the right ventricle (RV) to the sum of the wet weights of the left ventricle and interventricular septum (LV+S), compared to the control group (Figure 4). This increase in the RV/LV+S ratio suggests disproportionate growth of the right ventricle compared to the left, indicating early RV hypertrophy in these HIV-Tg rats.

Right ventricular hypertrophy in HIV-Tg rats (n = 6) compared with age-matched WT rats (n = 6). The ratio of the wet weight of the RV wall to that of the LV wall with septum (RV/LV+septum) was measured. P = 0.06, HIV-Tg rats vs. WT controls.

Increased expression of HIF-1α and PDGF-BB in HIV-Tg rats

Having determined, through observation of right ventricular changes, the degree of pulmonary arteriopathy in HIV-Tg rats, we next compared the level of HIF-1α expression in the lungs of these rats to that found in controls.
Although RNA analysis of lung extracts showed only a non-significant increase in HIF-1α expression (p = 0.078) (Figure 5A), western blot analysis demonstrated a significant (p < 0.05) increase in HIF-1α protein, confirming the increased expression of HIF-1α in HIV-Tg rats compared with WT controls (Figure 5B). This increase was further confirmed by immunohistochemical analysis of lung sections from HIV-Tg and WT controls. As shown in Figure 5C, the lung parenchyma, along with the endothelial cells lining vessels with medial thickening, showed strong expression of HIF-1α in lung sections from HIV-Tg rats. Enhanced expression was also observed in mononuclear cells around the thickened vessels. Smooth muscle cells in these arteries, however, did not demonstrate a significant increase in HIF-1α compared with controls.

Increased expression of HIF-1α in HIV-Tg rats compared to wild-type controls. A) Real-Time RT-PCR analysis of total mRNA and B) western blot analysis of total protein extracted from lungs of HIV-Tg rats and age-matched wild-type SD rats. The histogram below the western blot image represents the average densitometric ratio of 135 kDa HIF-1α to β-actin in wild-type and HIV-Tg rats. Statistical significance was calculated using a two-tailed, independent t-test (* p ≤ 0.05). C) Immunohistochemistry of paraffin-embedded lung sections with anti-HIF-1α. Representative photomicrographs of immunostaining from the wild-type and HIV-Tg groups are shown. Original magnification: 60×.

We next evaluated the expression of the pro-proliferative factor PDGF-BB, which is thought to be regulated in a HIF-dependent manner [29-31]. As shown in Figure 6A, real-time RT-PCR analysis of total mRNA extracted from lung homogenates suggested an increase in the expression of PDGF-B chain mRNA in HIV-Tg rats compared to control rats with normal vasculature. Interestingly, this increased expression of PDGF-B chain in the HIV-Tg group was positively associated with the increased expression of HIF-1α (p = 0.002, R = 0.97) (Figure 6B). Furthermore, immunohistochemical analysis suggested enhanced expression of PDGF-BB in endothelial cells and in infiltrating mononuclear cells around thickened vessels (Figure 6C) from HIV-Tg rats, similar to the HIF-1α staining (Figure 5C). Additionally, the increased levels of HIF-1α (p = 0.009, R = 0.94) and PDGF-B chain (p = 0.036, R = 0.78) correlated strongly and linearly with the increased RV/LV+S ratio in HIV-Tg rats (Figure 6D). No notable trends were found within the wild-type group.

Increased expression of PDGF-B chain in HIV-Tg rats compared to wild-type controls. A) Real-Time RT-PCR analysis of total mRNA in the lungs of HIV-Tg rats and age-matched wild-type SD rats. B) Correlation of PDGF-B chain mRNA with the expression of HIF-1α in HIV-Tg rats. C) Immunohistochemistry for PDGF-BB on paraffin-embedded lung sections from HIV-Tg and WT rats. Original magnification: 60×. D) Correlation of the RV/LV+S ratio with the expression of HIF-1α and PDGF-BB in HIV-Tg rats. Correlation was calculated using the non-parametric Spearman's rank correlation coefficient.

Increased expression of PDGF-BB in HIV-protein-treated pulmonary microvascular endothelial cells

Since we observed increased expression of HIF-1α and PDGF-BB in the lungs of HIV-Tg rats, including in endothelial cells lining the pulmonary arterial vessels, we next sought to delineate whether a HIF-dependent mechanism is involved in the viral protein-mediated up-regulation of PDGF-BB in the pulmonary endothelium. Two major HIV proteins, Tat and gp-120, are known to be actively secreted by infected cells and have been detected in the serum of HIV-infected patients [41-43]. Furthermore, given that both Tat and gp-120 were found to be expressed in the lung homogenates of HIV-Tg rats (data not shown), we first treated HPMVECs with these viral proteins over a period of 24 hrs and assessed the expression of PDGF-BB by western blot analysis. As shown in Figure 7, treatment with Tat, gp-120LAV (from an X4-type virus) or gp-120CM (from an R5-type virus) resulted in a significant increase in PDGF-BB protein expression compared to untreated control.
However, when cells were treated with the same concentrations of heat-inactivated Tat or gp-120, no induction of PDGF-BB expression was observed. Additionally, the maximal increase, observed on treatment with R5-type gp-120CM, was significantly greater than the PDGF-BB induction obtained with Tat or X4-type gp-120LAV treatment.

Increased expression of PDGF-BB in pulmonary endothelial cells on treatment with HIV proteins. Representative western blot showing PDGF-BB expression in cellular extracts from Tat (25 ng/ml), gp-120LAV (100 ng/ml), gp-120CM (100 ng/ml), heat-inactivated (HI) Tat or HI-gp-120 treated human pulmonary microvascular endothelial cells. The blots were re-probed with human β-actin antibodies. The histogram represents the average densitometric ratio of PDGF-BB to β-actin from three independent experiments. Statistical significance was calculated using a two-tailed, independent t-test (** p ≤ 0.01 vs. control, # p ≤ 0.05 vs. Tat or gp-120 treatment).

Reactive oxygen species are involved in HIV-protein-mediated PDGF-BB induction

Since both Tat and gp-120 [27,28,44] are known to induce oxidative stress, we next evaluated the levels of cytoplasmic ROS in Tat- or gp-120-treated HPMVECs by DCF assay. Our findings demonstrated that treatment of cells with Tat, gp-120LAV or gp-120CM resulted in a significant increase in ROS production compared to controls (Figure 8A). As with PDGF-BB expression, the maximal oxidative burst was observed on treatment with R5-type gp-120CM. Based on these findings, we next focused on elucidating the mechanism(s) involved in the gp-120CM-mediated up-regulation of PDGF-BB in pulmonary endothelial cells. We first investigated whether the chemokine receptor CCR5 is specifically involved in gp-120CM-mediated generation of ROS by use of a CCR5 neutralizing antibody.
As illustrated in Figure 8B, pretreatment of HPMVECs with CCR5 antibody for 30 min, prevented the ROS production on gp-120 CM treatment whereas the pretreatment with isotype matched control antibody control had no affect. Furthermore, to examine if this enhanced levels of ROS are involved in PDGF-BB increase associated with gp-120CM treatment of pulmonary endothelial cells, cells were pretreated with antioxidant cocktail for 30 min. followed by 24 h treatment with gp-120CM. As shown in Figure 8C, western blot analysis of the total cell extract demonstrated the ability of antioxidants to prevent the gp-120 CM mediated increase in the PDGF-BB expression. Taken together these data suggest the role of oxidative burst in R5-type gp-120 mediated up-regulation of PDGF-BB in pulmonary endothelial cells.\nInvolvement of oxidative stress in gp-120 mediated PDGF-BB induction in pulmonary endothelial cells. A) Enhanced oxidative stress in pulmonary endothelial cells on Tat and gp120 treatment. Human pulmonary microvascular endothelial cells (HPMVECs) were incubated with carboxy-H2-DCF-DA followed by Tat (25 ng/ml) or gp-120 (100 ng/ml) treatment for 60 min, and assessed for oxidative stress (Mean ± SD., **P ≤ 0.01, ***P < 0.001 vs. control). B) Effect of CCR5 neutralizing antibody on gp-120CM (100 ng/ml) mediated oxidative stress in HPMVECs. Cells were pretreated with CCR5 antibody (10 μg/ml) or equal amount of IgG isotype control for 30 min, followed by DCF assay (Mean ± SD., ***P < 0.001 treatment versus control; #P < 0.05 vs. gp120CM treatment). C) Gp-120CM mediated PDGF-BB expression in the presence of antioxidant cocktail. HPMVECs were pretreated with antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. Cells were then used for protein extraction followed by sequential immunobloting with antibodies specifically directed to the PDGF-BB and β-actin. 
Representative western blot images (upper panel) are shown with histograms (lower panel) showing the average densitometric analysis of the PDGF-BB band normalized to the corresponding β-actin band from three independent experiments (***P ≤ 0.001 versus control; ###P ≤ 0.001 versus gp120CM treatment).
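The quantification described in these figure legends, normalizing each PDGF-BB band to its β-actin loading control and comparing groups with a two-tailed independent t-test, can be sketched as below. The band intensities are illustrative placeholders, not the study's data, and Welch's form of the t statistic (which does not assume equal variances) is assumed, since the paper does not specify the variant used:

```python
# Sketch of densitometric normalization and a two-tailed independent t statistic.
# All intensity values below are hypothetical, for illustration only.
from statistics import mean, stdev
from math import sqrt

def normalized_ratios(target_bands, actin_bands):
    """Normalize each target band intensity to its beta-actin loading control."""
    return [t / a for t, a in zip(target_bands, actin_bands)]

def welch_t(sample_a, sample_b):
    """Welch's two-tailed independent t statistic for two small samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / sqrt(va / na + vb / nb)

# Three independent experiments, control vs. gp-120CM-treated (arbitrary units):
control = normalized_ratios([1.0, 1.1, 0.9], [1.0, 1.0, 1.0])
treated = normalized_ratios([2.4, 2.6, 2.5], [1.0, 1.0, 1.0])
t_stat = welch_t(treated, control)  # large positive t -> treated mean is higher
```

In practice the resulting t statistic would be converted to a p-value against a t distribution (e.g. with `scipy.stats.ttest_ind(..., equal_var=False)`), which is how significance thresholds like p ≤ 0.01 in the legends are obtained.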
ROS dependent stimulation of HIF-1α is necessary for HIV-protein mediated PDGF-BB induction

It is well known that most of the pathological effects of ROS in various oxidative stress-associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore next investigated whether ROS-mediated activation of HIF-1α on gp-120CM treatment is important for the increased expression of PDGF-BB in gp-120 treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein.
As shown in Figure 9A, western blot analysis of gp-120CM treated cellular extracts demonstrated increased levels of HIF-1α compared to untreated controls. Furthermore, this gp-120CM mediated induction of HIF-1α expression was inhibited by pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), thus confirming the ROS-mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells.

Oxidative stress dependent HIF-1α expression is involved in gp-120CM mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. B) Evaluation of HIF-1α knockdown by western blot analysis of whole cell lysates from HPMVECs transfected with siRNA specific to HIF-1α (10 nM) or with negative control siRNA in the presence of gp120CM treatment. C) Knockdown of HIF-1α resulted in inhibition of gp120CM-mediated induction of PDGF-BB expression in HPMVECs. Blots are representative of three independent experiments, with the histogram (lower panel) showing the average densitometric analysis normalized to β-actin. All values are mean ± SD. *P ≤ 0.01, **P ≤ 0.001 treatment versus control; #P ≤ 0.01, ##P ≤ 0.001 treatment versus gp120CM treated untransfected cells.

Next, to determine the involvement of HIF-1α in gp-120 mediated regulation of PDGF-BB expression, we performed HIF-1α specific siRNA knockdown experiments. We first established that transfection of HPMVECs with 10 nM HIF-1α siRNA reduced gp-120CM induced HIF-1α expression by approximately 80% compared to cells transfected with a non-specific siRNA control (Figure 9B).
Furthermore, the HIF-1α siRNA transfected cells showed a significant decrease in PDGF-BB expression in the presence of gp-120CM compared with untransfected or non-specific siRNA transfected gp-120CM treated cells (Figure 9C), underscoring the role of HIF-1α activation in gp-120CM mediated PDGF-BB expression in pulmonary endothelial cells.
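The roughly 80% knockdown quoted above is the kind of number obtained by comparing β-actin-normalized densitometric signals between conditions; a minimal sketch, using illustrative values rather than the study's measurements:

```python
def percent_knockdown(control_signal, sirna_signal):
    """Percent reduction of a band signal in siRNA-treated cells vs. control.

    Both inputs are densitometric intensities already normalized to
    beta-actin (hypothetical values used here for illustration).
    """
    return (1.0 - sirna_signal / control_signal) * 100.0

# gp-120CM alone vs. gp-120CM + HIF-1a siRNA (arbitrary normalized units):
knockdown = percent_knockdown(1.0, 0.2)  # ~80% reduction
```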
Reports suggesting respiratory difficulty in HIV-Tg rats [34] led us to use this model to look for evidence of pulmonary arteriopathy associated with HIV-related proteins. As clinical manifestations of AIDS in HIV-Tg rats begin as early as 5 months [34], we compared 4-5-month-old Tg rats with age-matched WT control rats (n = 6 in each group). Analysis of both H&E and VVG staining of paraffin-embedded lung sections from 5-month-old HIV-Tg rats demonstrated moderate to severe vascular remodeling. Representative images of H&E and VVG staining from each group are shown in Figure 1A. There was a significant increase in the thickness of the medial wall of muscular arteries in HIV-Tg rats (Figure 1b, c, e, f) compared to normal vessels in wild-type controls (Figure 1a, d). Further, a smooth muscle layer was observed in many of the normally non-muscular distal arteries of HIV-Tg rats.
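Medial hypertrophy of this kind is conventionally expressed as percent medial wall thickness relative to vessel diameter. The paper does not state the exact formula it used, so the following sketch assumes the common morphometric definition (%MT = 2 × medial wall thickness / external diameter × 100):

```python
def percent_medial_thickness(external_diameter_um, internal_diameter_um):
    """Percent medial wall thickness of a vessel cross-section.

    Assumes the standard morphometric definition:
    %MT = 2 x medial wall thickness / external diameter x 100,
    with wall thickness taken as half the diameter difference.
    """
    wall = (external_diameter_um - internal_diameter_um) / 2.0
    return 2.0 * wall / external_diameter_um * 100.0

# Example: a 100 um artery whose internal (lumen-side) diameter is 60 um.
pmt = percent_medial_thickness(100.0, 60.0)  # -> 40.0 percent
```

Values computed this way per vessel (for arteries in the 50-200 μm range examined here) would then be averaged per animal and compared between the HIV-Tg and WT groups.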
The VVG-stained sections revealed a well-defined internal elastic lamina (black stain) in WT control rats, whereas the elastic lamina was disrupted in HIV-Tg rats (Figure 1). As shown in Figure 2, the percentage medial wall thickness of pulmonary arteries with outer diameter ranging between 50-200 μm was significantly higher in HIV-Tg rats compared to WT controls (p < 0.001).

Histological evidence of pulmonary vascular remodeling in HIV-Tg rats. Representative images of H&E (a, b, c) and VVG (d, e, f) stained sections from HIV-Tg (b, c, e, f) and WT control (a, d) rats. H&E photomicrographs were captured at 10× (a, b) and at 20× (c) magnification, whereas VVG images were captured at 4× (d, e) and at 20× (f) original magnification (scale bar: 100 μm). Each representative image is from a different animal.

Increase in medial wall thickness of pulmonary arteries in HIV-Tg rats compared with WT rats. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries with diameters ranging from 50 μm-250 μm. P ≤ 0.001, HIV-Tg rats vs. WT controls.

In order to characterize the cellular composition of pulmonary vascular lesions in HIV-Tg rats, the lung sections were stained for α-SMA and Factor VIII. As shown in the representative lung sections from each group in Figure 3, we confirmed the presence of vascular remodeling with medial wall thickening in HIV-Tg rats, while normal blood vessels were observed in WT control rats. A marked increase in the medial wall thickness of muscular arteries was observed, due to increased proliferation of smooth muscle cells (SMC), in the HIV-Tg group (Figure 3b) compared to WT controls (Figure 3a). The endothelial monolayer was generally damaged, with signs of increased expression of Factor VIII (vWF) in both thickened and non-muscular vessels (Figure 3d).

Presence of medial wall thickening in the pulmonary arteries of HIV-Tg rats.
Immuno-histochemistry of paraffin-embedded lung sections with anti-α-smooth muscle actin (brown) (a-b) and Factor VIII (brown) (c-d) antibodies indicated abnormal vascular lesions with significant medial wall thickening in lung sections from HIV-Tg rats (b, d) compared with WT controls (a, c). Representative images were captured at 10× original magnification (scale bar: 100 μm).

The HIV-Tg rats exhibited an increase in the ratio of the wet weight of the right ventricle (RV) to the sum of the wet weights of the left ventricle and interventricular septum (LV+S) compared to the control group (Figure 4). This increase in the RV/LV+S ratio suggests disproportionate growth of the right ventricle relative to the left, indicating early RV hypertrophy in these HIV-Tg rats.

Right ventricular hypertrophy in HIV-Tg rats (n = 6) compared with age-matched WT rats (n = 6). The ratio of the wet weight of the RV wall to that of the LV wall with septum (RV/LV+septum) was measured. P = 0.06, HIV-Tg rats vs. WT controls.

Having determined, through observation of right ventricular changes, the degree of pulmonary arteriopathy in HIV-Tg rats, we next compared the level of HIF-1α expression in the lungs of these rats to that found in controls. Although RNA analysis of lung extracts suggested a non-significant increase in the expression of HIF-1α (p = 0.078) (Figure 5A), western blot analysis demonstrated a significant (p < 0.05) increase in HIF-1α protein, confirming the increased expression of HIF-1α in HIV-Tg rats compared to WT controls (Figure 5B). This increase in HIF-1α expression was further confirmed by immunohistochemical analysis of lung sections from HIV-Tg and WT controls. As shown in Figure 5C, the lung parenchyma, along with endothelial cells lining the vessels demonstrating medial thickening, showed strong expression of HIF-1α in lung sections from HIV-Tg rats.
Enhanced expression was also observed in mononuclear cells around the thickened vessels. Smooth muscle cells in these arteries, however, did not demonstrate a significant increase in HIF-1α compared to those from controls.

Increased expression of HIF-1α in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA and B) western blot analysis of total protein extracted from lungs of HIV-Tg rats and age-matched wild-type SD rats. The histogram below the western blot image represents the average densitometric ratio of the 135 kDa HIF-1α band to β-actin in wild-type and HIV-Tg rats. Statistical significance was calculated using a two-tailed, independent t-test (*p ≤ 0.05). C) Immuno-histochemistry of paraffin-embedded lung sections with anti-HIF-1α. Representative photomicrographs of immunostaining from the wild-type and HIV-Tg groups are shown. Original magnification: 60×.

We next evaluated the expression of the pro-proliferative factor PDGF-BB, which is suggested to be regulated in an HIF-dependent manner [29-31]. As shown in Figure 6A, real-time RT-PCR analysis of total mRNA extracted from lung homogenates indicated an increase in the expression of PDGF-B chain mRNA in HIV-Tg rats compared to control rats with normal vasculature. Interestingly, this increased expression of PDGF-B chain in the HIV-Tg group was positively associated with the increase in expression of HIF-1α (p = 0.002, R = 0.97) (Figure 6B). Furthermore, immunohistochemical analysis suggested enhanced expression of PDGF-BB in endothelial cells and in mononuclear infiltrated cells around thickened vessels (Figure 6C) from HIV-Tg rats, similar to HIF-1α staining (Figure 5C). Additionally, the increased levels of HIF-1α (p = 0.009, R = 0.94) and PDGF-B chain (p = 0.036, R = 0.78) correlated linearly with the increased RV/LV+septum ratio in HIV-Tg rats (Figure 6D).
No notable trends were found within the wild-type group.

Increased expression of PDGF-B chain in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA in the lungs of HIV-Tg rats and age-matched wild-type SD rats. B) Correlation of PDGF-B chain mRNA with the expression of HIF-1α in HIV-Tg rats. C) Immuno-histochemistry for PDGF-BB on paraffin-embedded lung sections from HIV-Tg and WT rats. Original magnification: 60×. D) Correlation of the RV/LV+S ratio with the expression of HIF-1α and PDGF-BB in HIV-Tg rats. Correlation was calculated using the non-parametric Spearman's rank correlation coefficient.
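The non-parametric Spearman coefficient used for these correlations is simply the Pearson correlation computed on the ranks of the two variables (with tied values given their average rank). A self-contained sketch, using illustrative numbers rather than the study's data:

```python
from statistics import mean

def _ranks(values):
    """1-based ranks of a sequence, with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average 1-based rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-animal HIF-1a and PDGF-B expression levels:
rho = spearman_rho([1.2, 3.4, 2.2, 5.0], [0.9, 2.8, 2.1, 4.4])  # identical rank order -> 1.0
```

In a real analysis one would use `scipy.stats.spearmanr`, which also returns the associated p-value reported alongside R in the text.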
In this study, we offer histological and physiological evidence of pulmonary vascular remodeling, with significant thickening of the medial layer of the arteries and elevated RV mass, in the non-infectious rat model of HIV-1. The pulmonary arteriopathy exhibited by the HIV-Tg rats was manifested primarily by smooth muscle proliferation within the medial wall and endothelial disruption, with little indication of endothelial cell proliferation and an absence of classic plexiform lesions. Although RV hypertrophy in the HIV-Tg rats is suggestive of concomitant right heart pressure overload, the presence of pulmonary arteriopathy alone does not necessarily predict pulmonary hypertension. Furthermore, in humans only a fraction of individuals with HIV develop PAH, suggesting that the etiology of HIV-PAH is multi-factorial and complex, where multiple insults such as HIV infection, drugs of abuse, and genetic predilection may be necessary to induce clinical disease.
Therefore, one could hypothesize that the viral protein(s) provide the first 'hit', and that a second 'hit', such as administration of stimulants, may lead to more severe pathology in these HIV-Tg rats.

Inflammation is considered to play an important role in HIV-associated pulmonary vascular remodeling, with accumulation of macrophages and T lymphocytes found in the vicinity of pulmonary vessels of pulmonary hypertension patients [46,47]. Consistent with these findings, we also observed infiltration of mononuclear cells near or around the thickened vessels, with mild interstitial pneumonitis as described before in this model [34]. HIV-1 infection is known to stimulate monocytes/macrophages and lymphocytes to secrete elevated levels of cytokines, growth factors and viral proteins such as Nef, Tat and gp-120 [10-16], which can then initiate endothelial injury, SMC proliferation and migration, leading to the development of HIV-PAH [8-10,18,26,48]. It is plausible that the medial wall thickening, an important determinant of pulmonary vascular resistance, found in the HIV-Tg rat model is the result of the integrated effects of various HIV proteins and related inflammatory mediators, including PDGF-BB.

Examination of the HIV-1 Tg rat lungs revealed increased staining of PDGF-BB in macrophages around hypertrophied vessels and in endothelial cells. Earlier studies suggest induction of PDGF-BB by endothelial cells [49] but not by SMCs [50] in response to hypoxia. Furthermore, the vasculature and lungs of this HIV-Tg rat model have previously been demonstrated to be under significant oxidative stress [32,33]. Along these lines, we observed enhanced expression of HIF-1α, a crucial transcription factor responsible for sensing and responding to oxidant stress and hypoxic conditions [51], suggesting, in part, the involvement of the ROS/HIF-1α pathway in the over-expression of PDGF-BB.
HIF-1α controls a large program of genes critical to the development of pulmonary arterial hypertension [29,31,52,53]. Interestingly, the expression of HIF-1α and PDGF was not only elevated and positively associated in the lungs of the HIV-1 Tg rats, but the quantity of each was directly related, in a linear fashion, to the degree of right ventricular hypertrophy (RV/LV+septum ratio).

While these correlations in the HIV-1 transgenic model are consistent with our hypothesis, our in-vitro work in pulmonary endothelial cells validates that a viral protein-mediated oxidative stress/HIF-1α pathway results in the induction of PDGF-BB. Injury to the endothelium, an initiating event in PAH [54], is known to be associated with the induction of oxidative stress [44]. The HIV-associated proteins Tat and gp-120, as confirmed by our findings and others, demonstrate the ability to invoke oxidative stress-mediated endothelial dysfunction [27,28,44,55]. In addition, our results demonstrating enhanced levels of HIF-1α in viral protein-treated pulmonary endothelial cells are in concert with previous findings supporting the activation and accumulation of HIF-1α by HIV-1 through the production of ROS [56]. PDGF-BB, known to be involved in hypoxia-induced vascular remodeling [30,57,58], has been suggested to be up-regulated in an HIF-dependent manner, but the mechanisms by which HIF-1α and PDGF levels are elevated during vascular remodeling associated with PAH are still not completely understood. A putative HIF-response element on the PDGF gene has been identified [59], but studies demonstrating the direct involvement of HIF-1α in the regulation of PDGF expression are lacking. Here, we provide evidence validating the significance of HIF-1α in the pathogenesis of HIV-associated vascular dysfunction, and report the novel finding that its response to viral protein-generated oxidative stress is to augment PDGF expression in the pulmonary endothelium.
To our knowledge, this is the first report demonstrating that HIV-1 viral proteins induce PDGF expression through the activation of HIF-1α.

The HIV-1 virus is unable to actively infect endothelial cells due to the absence of the necessary CD4 receptors. However, viral proteins have been demonstrated to act on endothelial cells through direct binding to their CCR5 (R5) or CXCR4 (X4) co-receptors [60]. This is corroborated by our findings showing mitigation of the gp-120CM induced increase in PDGF-BB expression in the presence of a CCR5-neutralizing antibody. Maximum PDGF-BB expression and ROS production were seen on treatment with R5-type gp-120, which is expected to be secreted in abundance by the infiltrated HIV-infected CCR5+ T cells [61] and macrophages seen around the pulmonary vascular lesions associated with PAH [62]. In addition, studies on co-receptor usage of HIV have shown that virus utilizing CCR5 as a co-receptor is the predominant type found in HIV-infected individuals [63]. Furthermore, R5 gp-120 has been reported to induce the expression of cell-cycle and cell proliferation-related genes more strongly than X4 gp-120 in peripheral blood mononuclear cells [64], and this differential potency of the gp-120 effect may be present in pulmonary endothelial cells as well.

In summary, we demonstrate that the influence of HIV-1 proteins alone, without viral infection, is associated with pulmonary arteriopathy, including accumulation of HIF-1α and PDGF, as observed in the HIV-1 Tg rats. Furthermore, our in-vitro findings confirm that HIV-1 viral protein-mediated generation of oxidative stress and the resultant activation of HIF-1α lead to subsequent induction of PDGF expression in pulmonary endothelial cells.
Consistent with a possible role of PDGF in the development of idiopathic PAH, the correlation of this mediator with RVH suggests that this pathway may be one of many insults involved in the development of HIV-related pulmonary arteriopathy and, potentially, HIV-PAH.
Keywords: lungs, endothelial cells, gp-120, oxidative stress
Introduction

The advent of antiretroviral therapy (ART) has clearly led to improved survival among HIV-1 infected individuals, yet this advancement has resulted in the unexpected consequence of virus-associated non-infectious complications such as HIV-related pulmonary arterial hypertension (HIV-PAH) [1,2]. Despite adherence to ART, the development of HIV-PAH serves as an independent predictor of death in patients with HIV infection [3]. A precise characterization of the pathogenesis of HIV-PAH has so far proven elusive. As there is little evidence for direct viral infection within the pulmonary vascular bed [4-7], a popular hypothesis is that secreted HIV-1 viral proteins in the circulation are capable of inducing vascular oxidative stress, direct endothelial cell dysfunction and the smooth muscle cell proliferation critical to the development of HIV-related arteriopathy [8,9]. Further, evidence is accumulating which suggests that HIV-1 infection of monocytes/macrophages and lymphocytes stimulates increased production of pro-inflammatory markers and/or growth factors implicated in the pathogenesis of HIV-PAH, such as platelet-derived growth factor (PDGF)-BB [10-16]. These soluble mediators can then initiate endothelial injury followed by smooth muscle cell proliferation and migration [2,17,18]. Previous studies provide evidence for the possible involvement of PDGF in the pathogenesis of pulmonary vascular remodeling in animal models [19,20] and in lung biopsies from patients with PPH or with HIV-PAH [12]. Furthermore, a non-specific inhibitor of PDGF signaling, imatinib, has demonstrated the ability to diminish vascular remodeling in animal studies and to mitigate clinical decline in human PAH trials [21-24]. Our previous work demonstrates an over-expression of PDGF in-vitro in HIV-infected macrophages [25] and in-vivo in simian HIV-infected macaques [16].
Our recent work supports an HIV-protein mediated up-regulation of PDGF-BB in un-infectable vascular cell types such as human primary pulmonary arterial endothelial and smooth muscle cells [26]. However, the mechanism(s) by which HIV infection or viral protein(s) binding induces PDGF expression and the role of this potent mitogen in the setting of HIV-associated pulmonary arteriopathy has not been well characterized. HIV associated viral proteins including Tat and gp-120 have demonstrated the ability to trigger the generation of reactive oxygen species (ROS) [27,28]. As oxidative stress stabilizes hypoxia inducible factor (HIF)-1α, a transcription factor critical for regulation of important proliferative and vaso-active mediators [29-31], we hypothesize that viral protein generated reactive oxygen species (ROS) induce HIF-1α accumulation, with a resultant enhanced transcription of PDGF-B chain. Thus, given the need for clarification of the mechanisms responsible for HIV-related pulmonary vascular remodeling, we, in the present study, first utilized the non-infectious NL4-3Δgag/pol HIV-1 transgenic (HIV-Tg) rat model [32,33] to explore the direct role of viral proteins in the development of pulmonary vascular remodeling. This HIV-Tg rat model [34], develops many clinical multisystem manifestations similar to those found in AIDS patients and most importantly, has earlier been demonstrated to be under significant oxidative stress. Furthermore, given that the pulmonary artery endothelial dysfunction plays a key role in the initiation and progression of PAH [35-37], utilizing the primary pulmonary endothelial cell-culture system we next delineated the importance of oxidative stress and HIF-1α activation in viral protein mediated up-regulation of PDGF-BB. Methods: HIV-1 transgenic and wild type rats HIV-1 transgenic (Tg) Sprague Dawley (SD) and SD wild type (WT) rats were purchased from Harlan (Indianapolis, Indiana). 
Young 4-5-month-old Tg rats (n = 6) and age-matched SD wild-type rats (n = 6) were used for analysis. The HIV-1 Tg rat contains a gag-pol-deleted NL4-3 provirus and expresses HIV viral RNA and proteins in various tissues, including lymphocytes and monocytes. The animals were euthanized by inhalation of 2.5-3% isoflurane gas, followed by transcardial saline perfusion. Following euthanasia, one half of the lung was post-fixed for histological examination, while the other half was snap frozen for RNA analysis. Animal care at the Kansas University Medical Center was in strict accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals.

Right Ventricular Mass Evaluation. Hearts were removed from the euthanized animals. After removal of the atria, the wall of the right ventricle (RV) was separated from the left ventricle (LV) and septum (LV+S) according to an established method [38]. Wet weights of both the RV and LV+S were quantified, normalized to total body weight, and used to calculate the RV/LV+S ratio.

Histology and immunohistochemical analysis of pulmonary arteries. Excised lungs were immersed in 4% paraformaldehyde overnight, transferred to 70% ethanol, and then embedded in paraffin. Paraffin sections of 5 μm thickness were used for Hematoxylin & Eosin (H&E) or Verhoeff-van Gieson (VVG) staining. Digital scans of the whole section from each animal were generated with a ScanScope scanner and then visualized and analyzed using Aperio image view software. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries (50-250 μm diameter) in a blinded manner. Wall thickness and outer diameter of approximately 25 muscularized arteries were measured in each section at two perpendicular points and then averaged. The percentage medial wall thickness was then calculated as described before [39]. Immunohistochemistry staining of paraffin-embedded lung sections was performed as previously described [16] with primary antibodies including α-SMA and factor VIII from Dako Corporation (Carpinteria, CA, USA), HIF-1α from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA), and PDGF-BB from Abcam, Inc. (Cambridge, MA).

Cell culture and treatments. Human primary pulmonary microvascular endothelial cells (HPMVEC) were purchased from ScienCell Research Laboratories (Carlsbad, CA) and grown in endothelial cell basal medium containing 2% fetal bovine serum (FBS), 1 μg/ml hydrocortisone, 10 ng/ml human epidermal growth factor, 3 ng/ml basic fibroblast growth factor, 10 μg/ml heparin, and gentamycin/amphotericin. Cells were treated with the viral proteins Tat 1-72 (1 μM, University of Kentucky), gp-120CM, or gp-120LAV (100 ng/ml, Protein Sciences Corporation, Meriden, CT) for 24 h or 1 h, followed by western blot analysis or ROS quantification, respectively. Tat or gp-120 stock solution was heat-inactivated by boiling for 30 min. For treatment with CCR5-neutralizing antibody or IgG isotype control (10 μg/ml, R&D Systems), or with an antioxidant cocktail (0.2 mM ascorbate, 0.5 mM glutathione, and 3.5 μM α-tocopherol), cells were pre-treated with the inhibitors for 30 min followed by treatment with gp-120CM.

Quantification of cellular oxidative stress using the dichlorofluorescein (DCF) assay. Pulmonary endothelial cells were treated with 5-(and-6)-carboxy-2',7'-dichlorodihydrofluorescein diacetate (DCFH-DA) (Molecular Probes, Inc.) for 30 min, followed by treatment with viral protein(s) for 1 h. In the presence of H2O2, DCFH is oxidized to fluorescent DCF within the cytoplasm, which was read on a fluorescent plate reader at an excitation of 485 nm and an emission of 530 nm [40].

Transfection of pulmonary endothelial cells with small interfering (si)RNA. Silencer Select pre-designed and validated siRNA duplexes targeting HIF-1α were obtained from Applied Biosystems (Carlsbad, CA). Cells were also transfected with Silencer Select negative control siRNA for comparison. HPMVECs were transfected with 10 nM siRNA using HiPerFect transfection reagent (Qiagen, Valencia, CA) as instructed by the manufacturer. The transfected cells were then treated with or without cocaine and/or Tat for 24 h, followed by protein extraction for western blot analysis.

Real-Time RT-PCR analysis. We used real-time RT-PCR to analyze RNA extracted from the frozen lungs of HIV-1 Tg rats and WT controls obtained after non-fixative perfusion. Quantitative analysis of HIF-1α, PDGF, and ET-1 mRNA in Tg and WT rats was performed using primers from SA Biosciences (Frederick, MD) with the SYBR Green detection method. Total RNA was isolated from frozen lung tissues by lysis in Trizol and converted into first-strand cDNA for real-time PCR. Detection was performed with an ABI Prism 7700 sequence detector. The average Ct value of the housekeeping gene, HPRT, was subtracted from that of the target gene to give the change in Ct (dCt). The fold-change in gene expression (difference in dCt, or ddCt) was then expressed in log2 relative units.

Western Blot Analysis. Frozen rat lung tissues or endothelial cells were lysed in lysis buffer (Sigma, St. Louis, MO) containing protease inhibitors. Protein in these samples was quantified using the micro-BCA protein assay kit (Pierce Chemical Co., Rockford, IL). Western blot analyses were performed using primary antibodies against HIF-1α (Santa Cruz), PDGF-BB (Santa Cruz), and β-actin (Sigma). The secondary antibodies were horseradish peroxidase-conjugated anti-mouse or anti-rabbit (1:5000, Pierce Chemical Co.), and detection was performed using the enhanced chemiluminescence system (Pierce Chemical Co.). NIH ImageJ software was used for densitometric analysis of the immunoblots.

Statistical Analysis. Statistical analysis was performed using two-way analysis of variance with a post-hoc Student t-test or the non-parametric Wilcoxon rank-sum test, as appropriate. To test for association of the RV/LV+S ratio with other mediators, the non-parametric Spearman rank correlation coefficient was used and the coefficient of determination (R2) was calculated. Exact two-sided p-values were calculated for all analyses using SAS 9.1 software (SAS Institute, Inc., Cary, NC, USA). A type I error rate of 5% was used to determine statistical significance.
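The relative-quantification arithmetic described in the RT-PCR subsection (normalize each target Ct to HPRT, then compare groups) is the standard ΔΔCt method; a minimal sketch, using hypothetical Ct values rather than the study's data:

```python
def log2_fold_change(ct_target_tg, ct_hprt_tg, ct_target_wt, ct_hprt_wt):
    """ddCt relative quantification: dCt = Ct(target) - Ct(HPRT) per group,
    ddCt = dCt(Tg) - dCt(WT); expression is reported in log2 relative units
    (the linear fold-change is 2 ** -ddCt)."""
    dct_tg = ct_target_tg - ct_hprt_tg
    dct_wt = ct_target_wt - ct_hprt_wt
    ddct = dct_tg - dct_wt
    return -ddct  # log2 fold-change

# Hypothetical Ct values (not from the study): the target amplifies two
# cycles earlier, relative to HPRT, in Tg lungs than in WT lungs.
print(log2_fold_change(24.0, 20.0, 26.0, 20.0))  # 2.0, i.e. 4-fold up-regulation
```

In practice the average Ct across replicate wells would be fed in for each gene and group, as the text describes.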
Results: Pulmonary vascular remodeling in HIV-Tg rats. Reports suggesting respiratory difficulty in HIV-Tg rats [34] led us to use this model to look for evidence of pulmonary arteriopathy associated with HIV-related proteins. As clinical manifestations of AIDS in HIV-Tg rats begin as early as 5 months [34], we compared 4-5-month-old Tg rats with age-matched WT control rats (n = 6 in each group). Analysis of both H&E and VVG staining of paraffin-embedded lung sections from 5-month-old HIV-Tg rats demonstrated moderate to severe vascular remodeling. Representative images of H&E and VVG staining from each group are shown in Figure 1. There was a significant increase in the thickness of the medial wall of muscular arteries in HIV-Tg rats (Figure 1b, c, e, f) compared to normal vessels in wild-type controls (Figure 1a, d). Further, a smooth muscle layer was observed in many of the normally non-muscular distal arteries of HIV-Tg rats. The VVG-stained sections revealed a well-defined internal elastic lamina (black stain) in WT control rats, whereas the elastic lamina was disrupted in HIV-Tg rats (Figure 1). As shown in Figure 2, the percentage medial wall thickness of pulmonary arteries with outer diameters ranging between 50 and 200 μm was significantly higher in HIV-Tg rats than in WT controls (p < 0.001).

Figure 1. Histological evidence of pulmonary vascular remodeling in HIV-Tg rats. Representative images of H&E (a, b, c) and VVG (d, e, f) stained sections from HIV-Tg (b, c, e, f) and WT control (a, d) rats. H&E photomicrographs were captured at 10× (a, b) and 20× (c) magnification, whereas VVG images were captured at 4× (d, e) and 20× (f) original magnification (scale bar: 100 μm). Each representative image is from a different animal.

Figure 2. Increase in medial wall thickness of pulmonary arteries in HIV-Tg rats compared with WT rats. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries with diameters ranging from 50 to 250 μm. p ≤ 0.001, HIV-Tg rats vs. WT controls.

Characterization of pulmonary vascular lesions in HIV-Tg rats. To characterize the cellular composition of the pulmonary vascular lesions in HIV-Tg rats, lung sections were stained for α-SMA and factor VIII. As shown in the representative lung sections from each group in Figure 3, we confirmed the presence of vascular remodeling with medial wall thickening in HIV-Tg rats, while normal blood vessels were observed in WT control rats. A marked increase in the medial wall thickness of muscular arteries, due to increased proliferation of smooth muscle cells (SMC), was observed in the HIV-Tg group (Figure 3b) compared to WT controls (Figure 3a). The endothelial monolayer was generally damaged, with signs of increased expression of factor VIII (vWF) in both thickened and non-muscular vessels (Figure 3d).

Figure 3. Presence of medial wall thickening in the pulmonary arteries of HIV-Tg rats. Immunohistochemistry of paraffin-embedded lung sections with anti-α-smooth muscle actin (brown) (a-b) and factor VIII (brown) (c-d) indicated abnormal vascular lesions with significant medial wall thickening in lung sections from HIV-Tg rats (b, d) compared with WT controls (a, c). Representative images were captured at 10× original magnification (scale bar: 100 μm).

Right ventricular hypertrophy (RVH) in HIV-Tg rats. The HIV-Tg rats exhibited an increase in the ratio of the wet weight of the right ventricle (RV) to the sum of the wet weights of the left ventricle and interventricular septum (LV+S), compared to the control group (Figure 4). This increase in the RV/LV+S ratio suggests disproportionate growth of the right ventricle compared to the left, indicating early RV hypertrophy in these HIV-Tg rats.

Figure 4. Right ventricular hypertrophy in HIV-Tg rats (n = 6) compared with age-matched WT rats (n = 6). The ratio of the wet weight of the RV wall to that of the LV wall with septum (RV/LV+septum) was measured. p = 0.06, HIV-Tg rats vs. WT controls.

Increased expression of HIF-1α and PDGF-BB in HIV-Tg rats. Having determined, through observation of right ventricular changes, the degree of pulmonary arteriopathy in HIV-Tg rats, we next compared the level of HIF-1α expression in the lungs of these rats to that in controls. Although RNA analysis of lung extracts suggested a non-significant increase in the expression of HIF-1α (p = 0.078) (Figure 5A), western blot analysis demonstrated a significant (p < 0.05) increase in HIF-1α protein, confirming increased expression of HIF-1α in HIV-Tg rats compared to WT controls (Figure 5B). This increase was further confirmed by immunohistochemical analysis of lung sections from HIV-Tg rats and WT controls. As shown in Figure 5C, the lung parenchyma, along with the endothelial cells lining vessels with medial thickening, showed strong expression of HIF-1α in lung sections from HIV-Tg rats. Enhanced expression was also observed in mononuclear cells around the thickened vessels. Smooth muscle cells in these arteries, however, did not demonstrate a significant increase in HIF-1α compared to controls.

Figure 5. Increased expression of HIF-1α in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA and B) western blot analysis of total protein extracted from the lungs of HIV-Tg rats and age-matched wild-type SD rats. The histogram below the western blot image represents the average densitometric ratio of 135-kDa HIF-1α to β-actin in wild-type and HIV-Tg rats. Statistical significance was calculated using a two-tailed, independent t-test (* p ≤ 0.05). C) Immunohistochemistry of paraffin-embedded lung sections with anti-HIF-1α. Representative photomicrographs of immunostaining from the wild-type and HIV-Tg groups are shown. Original magnification: 60×.

We next evaluated the expression of the pro-proliferative factor PDGF-BB, which is suggested to be regulated in a HIF-dependent manner [29-31]. As shown in Figure 6A, real-time RT-PCR analysis of total mRNA extracted from lung homogenates suggested an increase in the expression of PDGF-B chain mRNA in HIV-Tg rats compared to the control rats with normal vasculature.
Interestingly, this increased expression of PDGF-B chain in the HIV-Tg group was associated positively with the increase in expression of HIF-1α (p = 0.002, R = 0.97) (Figure 6B). Furthermore, immunohistochemical analysis suggested enhanced expression of PDGF-BB in endothelial cells and in mononuclear infiltrated cells around thickened vessels (Figure 6C) from HIV-Tg rats similar to HIF-1α staining (Figure 5C). Additionally, the increased levels of HIF-1α (p = 0.009, R = 0.94) and PDGF-B chain (p = 0.036, R = 0.78) strongly correlated linearly with the increased RV/LV+ septum ratio in HIV-Tg rats (Figure 6D). No notable trends were found within the wild-type group. Increased expression of PDGF-B chain in HIV-Tg rats compared to wild type controls. A) Real-Time RT-PCR analysis of total mRNA in the lungs of HIV-Tg rats and age matched wild-type SD rats. B) Correlation of PDGF-B chain mRNA with the expression of HIF-1α in HIV-Tg rats. C) Immuno-histochemistry for PDGF-BB on the paraffin embedded lung sections from HIV-Tg and WT rats. Original magnification: 60×. D) Correlation of RV/LV+S ratio with the expression of HIF-1α and PDGF-BB in HIV-Tg rats. Correlation was calculated using the non-parametric Spearman's rank correlation coefficient. Increased expression of PDGF-BB in HIV-protein(s) treated pulmonary microvascular endothelial cells Since we observed increased expression of HIF-1α and PDGF-BB in the lungs from HIV-Tg rats including endothelial cells lining the pulmonary arterial vessels, we next sought to delineate if HIF-dependent mechanism is involved in the viral protein mediated up regulation of PDGF-BB in the pulmonary endothelium. Two major HIV-proteins: Tat and gp-120 are known to be actively secreted by infected cells and has been detected in the serum of HIV-infected patients [41-43]. 
Furthermore, given that both Tat and gp-120 were found to express in the lung homogenates of HIV-Tg rats (data not shown), we first treated HPMVECs with these viral proteins over a period of 24 hrs and assessed for the expression of PDGF-BB by western blot analysis. As shown in Figure 7, treatments with Tat, gp-120LAV (from X4-type virus) or gp-120CM (from R5-type virus) resulted in significant increase of PDGF-BB protein expression compared to untreated control. However, when cells were subjected to treatment with the same concentration of heat inactivated Tat or gp-120, no induction in the PDGF-BB expression was observed. Additionally, the maximum increase that was observed on treatment with R5-type gp-120CM was also significantly more when compared with the PDGF-BB induction obtained on Tat or X4-type gp-120LAV treatment. Increased expression of PDGF-BB in pulmonary endothelial cells on treatment with HIV-proteins. Representative western blot showing PDGF-BB expression in cellular extracts from Tat (25 ng/ml), gp-120LAV (100 ng/ml), gp-120CM (100 ng/ml), heat-inactivated (HI) Tat or HI-gp-120 treated human pulmonary microvascular endothelial cells. The blots were re-probed with human β-actin antibodies. Histogram represents the average densitometric ratio of PDGF-BB to β-actin of three independent experiments. Statistical significance was calculated using a two-tail, independent t-test. (** p ≤ 0.01 vs. control, #p ≤ 0.05 vs. Tat or gp-120 treatment). Since we observed increased expression of HIF-1α and PDGF-BB in the lungs from HIV-Tg rats including endothelial cells lining the pulmonary arterial vessels, we next sought to delineate if HIF-dependent mechanism is involved in the viral protein mediated up regulation of PDGF-BB in the pulmonary endothelium. Two major HIV-proteins: Tat and gp-120 are known to be actively secreted by infected cells and has been detected in the serum of HIV-infected patients [41-43]. 
Furthermore, given that both Tat and gp-120 were found to express in the lung homogenates of HIV-Tg rats (data not shown), we first treated HPMVECs with these viral proteins over a period of 24 hrs and assessed for the expression of PDGF-BB by western blot analysis. As shown in Figure 7, treatments with Tat, gp-120LAV (from X4-type virus) or gp-120CM (from R5-type virus) resulted in significant increase of PDGF-BB protein expression compared to untreated control. However, when cells were subjected to treatment with the same concentration of heat inactivated Tat or gp-120, no induction in the PDGF-BB expression was observed. Additionally, the maximum increase that was observed on treatment with R5-type gp-120CM was also significantly more when compared with the PDGF-BB induction obtained on Tat or X4-type gp-120LAV treatment. Increased expression of PDGF-BB in pulmonary endothelial cells on treatment with HIV-proteins. Representative western blot showing PDGF-BB expression in cellular extracts from Tat (25 ng/ml), gp-120LAV (100 ng/ml), gp-120CM (100 ng/ml), heat-inactivated (HI) Tat or HI-gp-120 treated human pulmonary microvascular endothelial cells. The blots were re-probed with human β-actin antibodies. Histogram represents the average densitometric ratio of PDGF-BB to β-actin of three independent experiments. Statistical significance was calculated using a two-tail, independent t-test. (** p ≤ 0.01 vs. control, #p ≤ 0.05 vs. Tat or gp-120 treatment). Reactive oxygen species are involved in HIV-protein mediated PDGF-BB induction Since both Tat and gp-120 [27,28,44] are known to induce oxidative stress, we next evaluated the levels of cytoplasmic ROS in Tat or gp-120 treated HPMVECs by DCF assay. Our findings demonstrated that the treatment of cells with Tat, gp-120LAV or gp-120CM results in significant increase in the production of ROS when compared to controls (Figure 8A). 
Similar to the PDGF-BB expression the maximum oxidative burst was observed on treatment with R5 type gp-120CM. Based on these findings we next focused on elucidating the mechanism(s) involved in the gp-120CM mediated up-regulation of PDGF-BB in pulmonary endothelial cells. We first investigated if chemokine receptor CCR5 is specifically involved in gp-120CM mediated generation of ROS by use of CCR5 neutralizing antibody. As illustrated in Figure 8B, pretreatment of HPMVECs with CCR5 antibody for 30 min, prevented the ROS production on gp-120 CM treatment whereas the pretreatment with isotype matched control antibody control had no affect. Furthermore, to examine if this enhanced levels of ROS are involved in PDGF-BB increase associated with gp-120CM treatment of pulmonary endothelial cells, cells were pretreated with antioxidant cocktail for 30 min. followed by 24 h treatment with gp-120CM. As shown in Figure 8C, western blot analysis of the total cell extract demonstrated the ability of antioxidants to prevent the gp-120 CM mediated increase in the PDGF-BB expression. Taken together these data suggest the role of oxidative burst in R5-type gp-120 mediated up-regulation of PDGF-BB in pulmonary endothelial cells. Involvement of oxidative stress in gp-120 mediated PDGF-BB induction in pulmonary endothelial cells. A) Enhanced oxidative stress in pulmonary endothelial cells on Tat and gp120 treatment. Human pulmonary microvascular endothelial cells (HPMVECs) were incubated with carboxy-H2-DCF-DA followed by Tat (25 ng/ml) or gp-120 (100 ng/ml) treatment for 60 min, and assessed for oxidative stress (Mean ± SD., **P ≤ 0.01, ***P < 0.001 vs. control). B) Effect of CCR5 neutralizing antibody on gp-120CM (100 ng/ml) mediated oxidative stress in HPMVECs. Cells were pretreated with CCR5 antibody (10 μg/ml) or equal amount of IgG isotype control for 30 min, followed by DCF assay (Mean ± SD., ***P < 0.001 treatment versus control; #P < 0.05 vs. gp120CM treatment). 
C) Gp-120CM mediated PDGF-BB expression in the presence of antioxidant cocktail. HPMVECs were pretreated with antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. Cells were then used for protein extraction followed by sequential immunobloting with antibodies specifically directed to the PDGF-BB and β-actin. Representative western blot images (upper panel) are shown with histograms (lower panel) showing the average densitometric analysis of the PDGF-BB band normalized to corresponding β-actin band from three independent experiments (*** P < = 0.001 versus control; ###P < = 0.001 versus gp120CM treatment). Since both Tat and gp-120 [27,28,44] are known to induce oxidative stress, we next evaluated the levels of cytoplasmic ROS in Tat or gp-120 treated HPMVECs by DCF assay. Our findings demonstrated that the treatment of cells with Tat, gp-120LAV or gp-120CM results in significant increase in the production of ROS when compared to controls (Figure 8A). Similar to the PDGF-BB expression the maximum oxidative burst was observed on treatment with R5 type gp-120CM. Based on these findings we next focused on elucidating the mechanism(s) involved in the gp-120CM mediated up-regulation of PDGF-BB in pulmonary endothelial cells. We first investigated if chemokine receptor CCR5 is specifically involved in gp-120CM mediated generation of ROS by use of CCR5 neutralizing antibody. As illustrated in Figure 8B, pretreatment of HPMVECs with CCR5 antibody for 30 min, prevented the ROS production on gp-120 CM treatment whereas the pretreatment with isotype matched control antibody control had no affect. Furthermore, to examine if this enhanced levels of ROS are involved in PDGF-BB increase associated with gp-120CM treatment of pulmonary endothelial cells, cells were pretreated with antioxidant cocktail for 30 min. followed by 24 h treatment with gp-120CM. 
As shown in Figure 8C, western blot analysis of the total cell extract demonstrated the ability of antioxidants to prevent the gp-120 CM mediated increase in the PDGF-BB expression. Taken together these data suggest the role of oxidative burst in R5-type gp-120 mediated up-regulation of PDGF-BB in pulmonary endothelial cells. Involvement of oxidative stress in gp-120 mediated PDGF-BB induction in pulmonary endothelial cells. A) Enhanced oxidative stress in pulmonary endothelial cells on Tat and gp120 treatment. Human pulmonary microvascular endothelial cells (HPMVECs) were incubated with carboxy-H2-DCF-DA followed by Tat (25 ng/ml) or gp-120 (100 ng/ml) treatment for 60 min, and assessed for oxidative stress (Mean ± SD., **P ≤ 0.01, ***P < 0.001 vs. control). B) Effect of CCR5 neutralizing antibody on gp-120CM (100 ng/ml) mediated oxidative stress in HPMVECs. Cells were pretreated with CCR5 antibody (10 μg/ml) or equal amount of IgG isotype control for 30 min, followed by DCF assay (Mean ± SD., ***P < 0.001 treatment versus control; #P < 0.05 vs. gp120CM treatment). C) Gp-120CM mediated PDGF-BB expression in the presence of antioxidant cocktail. HPMVECs were pretreated with antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. Cells were then used for protein extraction followed by sequential immunobloting with antibodies specifically directed to the PDGF-BB and β-actin. Representative western blot images (upper panel) are shown with histograms (lower panel) showing the average densitometric analysis of the PDGF-BB band normalized to corresponding β-actin band from three independent experiments (*** P < = 0.001 versus control; ###P < = 0.001 versus gp120CM treatment). 
ROS dependent stimulation of HIF-1α is necessary for HIV-protein mediated PDGF-BB induction It is well known that most of the pathological effects of ROS in various oxidative stress associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore next investigated if ROS mediated activation of HIF-1α on gp-120CM treatment is important for increased expression of PDGF-BB in gp-120 treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein. As shown in Figure 9A, western blot analysis of gp-120CM treated cellular extracts demonstrated increased levels of HIF-1α as compared to untreated controls. Furthermore, this gp-120CM mediated induction of HIF-1α expression was inhibited on pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), thus confirming the ROS mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells. Oxidative stress dependent HIF-1α expression is involved in gp-120CM mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. B) Evaluation of HIF-1α knockdown by western blot analysis of whole cell lysates from HPMVECs transfected with siRNA specific to HIF-1α (10nM) or with negative control siRNA in presence of gp120CM treatment. (C) Knock down of HIF-1α resulted in inhibition of gp120CM-mediated induction of PDGF-BB expression in HPMVECs. Blots are representative of three independent experiments with histogram (lower panel) showing the average densitometric analysis normalized to β-actin. All values are mean ± SD. *P < = 0.01,**P < = 0.001 treatment versus control; #P < = 0.01, ##P < = 0.001 treatment versus gp120CM treated untransfected cells. 
Next to determine the involvement of HIF-1α in gp-120 mediated regulation of PDGF-BB expression, we used HIF-1α specific siRNA knock down experiments. First we optimized that the transfection of HPMVECs with 10nM HIF-1α siRNA was efficient in decreasing around 80% of gp-120CM induced HIF-1α expression when compared to cells transfected with non-specific siRNA control (Figure 9B). Furthermore, the HIF-1α siRNA transfected cells showed significant decrease in the expression of PDGF-BB in the presence of gp-120CM when compared with untransfected or non-specfic siRNA transfected gp-120CM treated cells (Figure 9C) thus underscoring the role of HIF-1α activation in the gp-120CM mediated PDGF-BB expression in pulmonary endothelial cells. It is well known that most of the pathological effects of ROS in various oxidative stress associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore next investigated if ROS mediated activation of HIF-1α on gp-120CM treatment is important for increased expression of PDGF-BB in gp-120 treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein. As shown in Figure 9A, western blot analysis of gp-120CM treated cellular extracts demonstrated increased levels of HIF-1α as compared to untreated controls. Furthermore, this gp-120CM mediated induction of HIF-1α expression was inhibited on pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), thus confirming the ROS mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells. Oxidative stress dependent HIF-1α expression is involved in gp-120CM mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. 
Pulmonary vascular remodeling in HIV-Tg rats: Reports of respiratory difficulty in HIV-Tg rats [34] led us to use this model to look for evidence of pulmonary arteriopathy associated with HIV-related proteins. As clinical manifestations of AIDS in HIV-Tg rats begin as early as 5 months [34], we compared 4-5-month-old HIV-Tg rats with age-matched WT control rats (n = 6 in each group). Analysis of both H&E- and VVG-stained paraffin-embedded lung sections from 5-month-old HIV-Tg rats demonstrated moderate to severe vascular remodeling.
Representative images of H&E and VVG staining from each group are shown in Figure 1A. There was a significant increase in the thickness of the medial wall of muscular arteries in HIV-Tg rats (Figure 1b, c, e, f) compared to the normal vessels of wild-type controls (Figure 1a, d). Further, a smooth muscle layer was observed in many of the normally non-muscular distal arteries of HIV-Tg rats. The VVG-stained sections revealed a well-defined internal elastic lamina (black stain) in WT control rats, whereas the elastic lamina was disrupted in HIV-Tg rats (Figure 1). As shown in Figure 2, the percentage medial wall thickness of pulmonary arteries with an outer diameter of 50-200 μm was significantly higher in HIV-Tg rats than in WT controls (p < 0.001). Histological evidence of pulmonary vascular remodeling in HIV-Tg rats. Representative images of H&E (a, b, c) and VVG (d, e, f) stained sections from HIV-Tg (b, c, e, f) and WT control (a, d) rats. H&E photomicrographs were captured at 10× (a, b) and 20× (c) magnification, whereas VVG images were captured at 4× (d, e) and 20× (f) original magnification (scale bar: 100 μm). Each representative image is from a different animal. Increase in medial wall thickness of pulmonary arteries in HIV-Tg rats compared with WT rats. VVG-stained sections from each animal were evaluated for medial wall thickness of pulmonary arteries with diameters ranging from 50 μm to 250 μm. P ≤ 0.001, HIV-Tg rats vs. WT controls. Characterization of pulmonary vascular lesions in HIV-Tg rats: To characterize the cellular composition of pulmonary vascular lesions in HIV-Tg rats, lung sections were stained for α-SMA and Factor VIII. As shown in the representative lung sections from each group in Figure 3, we confirmed vascular remodeling with medial wall thickening in HIV-Tg rats, while normal blood vessels were observed in WT control rats.
A marked increase in the medial wall thickness of muscular arteries, due to increased proliferation of smooth muscle cells (SMCs), was observed in the HIV-Tg group (Figure 3b) compared to WT controls (Figure 3a). The endothelial monolayer was generally damaged, with signs of increased expression of Factor VIII (von Willebrand factor, vWF) in both thickened and non-muscular vessels (Figure 3d). Presence of medial wall thickening in the pulmonary arteries of HIV-Tg rats. Immuno-histochemistry of paraffin-embedded lung sections with anti-α-smooth muscle actin (brown) (a-b) and Factor VIII (brown) (c-d) indicated abnormal vascular lesions with significant medial wall thickening in the lung sections from HIV-Tg rats (b, d) compared with WT controls (a, c). Representative images were captured at 10× original magnification (scale bar: 100 μm). Right ventricular hypertrophy (RVH) in HIV-Tg rats: The HIV-Tg rats exhibited an increase in the ratio of the wet weight of the right ventricle (RV) to the sum of the wet weights of the left ventricle and interventricular septum (LV+S), compared to the control group (Figure 4). This increase in the RV/LV+S ratio suggests disproportionate growth of the right ventricle relative to the left, indicating early RV hypertrophy in these HIV-Tg rats. Right ventricular hypertrophy in HIV-Tg rats (n = 6) compared with age-matched WT rats (n = 6). The ratio of the wet weight of the RV wall to that of the LV wall with septum (RV/LV+septum) was measured. P = 0.06, HIV-Tg rats vs. WT controls. Increased expression of HIF-1α and PDGF-BB in HIV-Tg rats: Having determined, through observation of right ventricular changes, the degree of pulmonary arteriopathy in HIV-Tg rats, we next compared the level of HIF-1α expression in the lungs of these rats to that found in controls.
Although RNA analysis of lung extracts suggested a non-significant increase in the expression of HIF-1α (p = 0.078) (Figure 5A), western blot analysis demonstrated a significant (p < 0.05) increase in HIF-1α protein, confirming the increased expression of HIF-1α in HIV-Tg rats compared to WT controls (Figure 5B). This increase was further confirmed by immunohistochemical analysis of lung sections from HIV-Tg rats and WT controls. As shown in Figure 5C, the lung parenchyma, along with the endothelial cells lining vessels with medial thickening, showed strong expression of HIF-1α in the lung sections from HIV-Tg rats. Enhanced expression was also observed in mononuclear cells around the thickened vessels. Smooth muscle cells in these arteries, however, did not demonstrate a significant increase in HIF-1α compared to those from controls. Increased expression of HIF-1α in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA and B) western blot analysis of total protein, extracted from lungs of HIV-Tg rats and age-matched wild-type SD rats. The histogram below the western blot image represents the average densitometric ratio of the 135 kDa HIF-1α band to β-actin in wild-type and HIV-Tg rats. Statistical significance was calculated using a two-tailed, independent t-test (* p ≤ 0.05). C) Immuno-histochemistry of paraffin-embedded lung sections with anti-HIF-1α. Representative photomicrographs of immunostaining from the wild-type and HIV-Tg groups are shown. Original magnification: 60×. We next evaluated the expression of the pro-proliferative factor PDGF-BB, which is suggested to be regulated in an HIF-dependent manner [29-31]. As shown in Figure 6A, real-time RT-PCR analysis of total mRNA extracted from lung homogenates suggested an increase in the expression of PDGF-B chain mRNA in HIV-Tg rats compared to control rats with normal vasculature.
Interestingly, this increased expression of PDGF-B chain in the HIV-Tg group was positively associated with the increased expression of HIF-1α (p = 0.002, R = 0.97) (Figure 6B). Furthermore, immunohistochemical analysis suggested enhanced expression of PDGF-BB in endothelial cells and in infiltrating mononuclear cells around thickened vessels (Figure 6C) from HIV-Tg rats, similar to the HIF-1α staining (Figure 5C). Additionally, the increased levels of HIF-1α (p = 0.009, R = 0.94) and PDGF-B chain (p = 0.036, R = 0.78) correlated strongly and linearly with the increased RV/LV+septum ratio in HIV-Tg rats (Figure 6D). No notable trends were found within the wild-type group. Increased expression of PDGF-B chain in HIV-Tg rats compared to wild-type controls. A) Real-time RT-PCR analysis of total mRNA in the lungs of HIV-Tg rats and age-matched wild-type SD rats. B) Correlation of PDGF-B chain mRNA with the expression of HIF-1α in HIV-Tg rats. C) Immuno-histochemistry for PDGF-BB on paraffin-embedded lung sections from HIV-Tg and WT rats. Original magnification: 60×. D) Correlation of the RV/LV+S ratio with the expression of HIF-1α and PDGF-BB in HIV-Tg rats. Correlations were calculated using the non-parametric Spearman's rank correlation coefficient. Increased expression of PDGF-BB in HIV-protein(s)-treated pulmonary microvascular endothelial cells: Since we observed increased expression of HIF-1α and PDGF-BB in the lungs of HIV-Tg rats, including the endothelial cells lining the pulmonary arterial vessels, we next sought to delineate whether an HIF-dependent mechanism is involved in the viral protein-mediated upregulation of PDGF-BB in the pulmonary endothelium. Two major HIV proteins, Tat and gp-120, are known to be actively secreted by infected cells and have been detected in the serum of HIV-infected patients [41-43].
Furthermore, given that both Tat and gp-120 were found to be expressed in the lung homogenates of HIV-Tg rats (data not shown), we first treated HPMVECs with these viral proteins for 24 h and assessed the expression of PDGF-BB by western blot analysis. As shown in Figure 7, treatment with Tat, gp-120LAV (from an X4-type virus) or gp-120CM (from an R5-type virus) resulted in a significant increase in PDGF-BB protein expression compared to untreated controls. However, when cells were treated with the same concentration of heat-inactivated Tat or gp-120, no induction of PDGF-BB expression was observed. Additionally, the maximal increase, observed on treatment with R5-type gp-120CM, was significantly greater than the PDGF-BB induction obtained with Tat or X4-type gp-120LAV treatment. Increased expression of PDGF-BB in pulmonary endothelial cells on treatment with HIV proteins. Representative western blot showing PDGF-BB expression in cellular extracts from Tat (25 ng/ml), gp-120LAV (100 ng/ml), gp-120CM (100 ng/ml), heat-inactivated (HI) Tat or HI-gp-120 treated human pulmonary microvascular endothelial cells. The blots were re-probed with human β-actin antibodies. The histogram represents the average densitometric ratio of PDGF-BB to β-actin from three independent experiments. Statistical significance was calculated using a two-tailed, independent t-test (** p ≤ 0.01 vs. control, #p ≤ 0.05 vs. Tat or gp-120 treatment). Reactive oxygen species are involved in HIV-protein-mediated PDGF-BB induction: Since both Tat and gp-120 [27,28,44] are known to induce oxidative stress, we next evaluated the levels of cytoplasmic ROS in Tat- or gp-120-treated HPMVECs by DCF assay. Our findings demonstrated that treatment of cells with Tat, gp-120LAV or gp-120CM resulted in a significant increase in ROS production compared to controls (Figure 8A).
As with PDGF-BB expression, the maximal oxidative burst was observed on treatment with R5-type gp-120CM. Based on these findings, we next focused on elucidating the mechanism(s) involved in the gp-120CM-mediated up-regulation of PDGF-BB in pulmonary endothelial cells. We first investigated whether the chemokine receptor CCR5 is specifically involved in gp-120CM-mediated generation of ROS, using a CCR5-neutralizing antibody. As illustrated in Figure 8B, pretreatment of HPMVECs with CCR5 antibody for 30 min prevented ROS production on gp-120CM treatment, whereas pretreatment with an isotype-matched control antibody had no effect. Furthermore, to examine whether these enhanced ROS levels are involved in the PDGF-BB increase associated with gp-120CM treatment of pulmonary endothelial cells, cells were pretreated with an antioxidant cocktail for 30 min followed by 24 h of treatment with gp-120CM. As shown in Figure 8C, western blot analysis of total cell extracts demonstrated that antioxidants prevented the gp-120CM-mediated increase in PDGF-BB expression. Taken together, these data suggest a role for the oxidative burst in R5-type gp-120-mediated up-regulation of PDGF-BB in pulmonary endothelial cells. Involvement of oxidative stress in gp-120-mediated PDGF-BB induction in pulmonary endothelial cells. A) Enhanced oxidative stress in pulmonary endothelial cells on Tat and gp120 treatment. Human pulmonary microvascular endothelial cells (HPMVECs) were incubated with carboxy-H2-DCF-DA followed by Tat (25 ng/ml) or gp-120 (100 ng/ml) treatment for 60 min, and assessed for oxidative stress (mean ± SD; **P ≤ 0.01, ***P < 0.001 vs. control). B) Effect of CCR5-neutralizing antibody on gp-120CM (100 ng/ml)-mediated oxidative stress in HPMVECs. Cells were pretreated with CCR5 antibody (10 μg/ml) or an equal amount of IgG isotype control for 30 min, followed by DCF assay (mean ± SD; ***P < 0.001 treatment versus control; #P < 0.05 vs. gp120CM treatment).
C) Gp-120CM-mediated PDGF-BB expression in the presence of antioxidant cocktail. HPMVECs were pretreated with antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours. Cells were then used for protein extraction followed by sequential immunoblotting with antibodies specifically directed against PDGF-BB and β-actin. Representative western blot images (upper panel) are shown with histograms (lower panel) showing the average densitometric analysis of the PDGF-BB band normalized to the corresponding β-actin band from three independent experiments (***P ≤ 0.001 versus control; ###P ≤ 0.001 versus gp120CM treatment). ROS-dependent stimulation of HIF-1α is necessary for HIV-protein-mediated PDGF-BB induction: It is well known that most of the pathological effects of ROS in oxidative stress-associated disorders are mediated by activation and stabilization of HIF-1α [45]. We therefore investigated whether ROS-mediated activation of HIF-1α on gp-120CM treatment is important for the increased expression of PDGF-BB in gp-120-treated HPMVECs. We first examined whether gp-120CM treatment of HPMVECs could result in increased levels of HIF-1α protein. As shown in Figure 9A, western blot analysis of gp-120CM-treated cellular extracts demonstrated increased levels of HIF-1α compared to untreated controls. Furthermore, this gp-120CM-mediated induction of HIF-1α expression was inhibited by pre-treatment of HPMVECs with an antioxidant cocktail (Figure 9A), confirming ROS-mediated augmentation of HIF-1α expression on R5 gp-120 treatment of endothelial cells. Oxidative stress-dependent HIF-1α expression is involved in gp-120CM-mediated PDGF-BB induction. A) Western blot analysis of HIF-1α expression in human pulmonary microvascular endothelial cells (HPMVECs) pretreated with or without antioxidant cocktail for 30 min followed by incubation with gp-120CM (100 ng/ml) for 24 hours.
B) Evaluation of HIF-1α knockdown by western blot analysis of whole cell lysates from HPMVECs transfected with siRNA specific to HIF-1α (10 nM) or with negative control siRNA in the presence of gp120CM treatment. (C) Knockdown of HIF-1α resulted in inhibition of gp120CM-mediated induction of PDGF-BB expression in HPMVECs. Blots are representative of three independent experiments, with histograms (lower panel) showing the average densitometric analysis normalized to β-actin. All values are mean ± SD. *P ≤ 0.01, **P ≤ 0.001 treatment versus control; #P ≤ 0.01, ##P ≤ 0.001 treatment versus gp120CM-treated untransfected cells. Next, to determine the involvement of HIF-1α in gp-120-mediated regulation of PDGF-BB expression, we performed HIF-1α-specific siRNA knockdown experiments. We first established that transfection of HPMVECs with 10 nM HIF-1α siRNA decreased gp-120CM-induced HIF-1α expression by approximately 80% compared with cells transfected with a non-specific siRNA control (Figure 9B). Furthermore, HIF-1α siRNA-transfected cells showed a significant decrease in PDGF-BB expression in the presence of gp-120CM compared with untransfected or non-specific siRNA-transfected gp-120CM-treated cells (Figure 9C), thus underscoring the role of HIF-1α activation in gp-120CM-mediated PDGF-BB expression in pulmonary endothelial cells. Discussion: In this study, we offer histological and physiologic evidence of pulmonary vascular remodeling, with significant thickening of the medial layer of the arteries and elevated RV mass, in the non-infectious rat model of HIV-1. Pulmonary arteriopathy exhibited by the HIV-Tg rats was manifested primarily by smooth muscle proliferation within the medial wall and endothelial disruption, with little indication of endothelial cell proliferation and an absence of classic plexiform lesions.
Although RV hypertrophy in the HIV-Tg rats is suggestive of concomitant right heart pressure overload, the presence of pulmonary arteriopathy alone does not necessarily predict pulmonary hypertension. Furthermore, in humans only a fraction of individuals with HIV develop PAH, suggesting that the etiology of HIV-PAH is multi-factorial and complex, where multiple insults such as HIV infection, drugs of abuse and genetic predilection may be necessary to induce clinical disease. Therefore, one could hypothesize that the viral protein(s) provide the first 'hit', and a second 'hit', such as administration of stimulants, may lead to more severe pathology in these HIV-Tg rats. Inflammation is considered to play an important role in HIV-associated pulmonary vascular remodeling, with accumulation of macrophages and T lymphocytes found in the vicinity of pulmonary vessels of pulmonary hypertension patients [46,47]. Consistent with these findings, we also observed infiltration of mononuclear cells near or around the thickened vessels with mild interstitial pneumonitis, as described before in this model [34]. HIV-1 infection is known to stimulate monocyte/macrophages and lymphocytes to secrete elevated levels of cytokines, growth factors and viral proteins such as Nef, Tat and gp-120 [10-16], which can then initiate endothelial injury, SMC proliferation and migration, leading to the development of HIV-PAH [8-10,18,26,48]. It is plausible that the medial wall thickening, an important determinant of pulmonary vascular resistance, discovered in the HIV-Tg rat model is the result of the integrated effects of various HIV proteins and related inflammatory mediators including PDGF-BB. Examination of the HIV-1 Tg rat lungs revealed increased staining of PDGF-BB in macrophages around hypertrophied vessels and in endothelial cells. Earlier studies suggest induction of PDGF-BB by endothelial cells [49] but not by SMCs [50] in response to hypoxia.
Furthermore, the vasculature and lungs of this HIV-Tg rat model have earlier been demonstrated to be under significant oxidative stress [32,33]. Along these lines, we observed enhanced expression of HIF-1α, a crucial transcription factor responsible for sensing and responding to oxidant stress and hypoxic conditions [51], suggesting, in part, the involvement of the ROS/HIF-1α pathway in the overexpression of PDGF-BB. HIF-1α controls a large program of genes critical to the development of pulmonary arterial hypertension [29,31,52,53]. Interestingly, the expression of HIF-1α and PDGF was not only elevated and positively associated with each other in the lungs of the HIV-1 Tg rats, but the quantity of each was directly related, in a linear fashion, to the degree of increase in right ventricular hypertrophy (RV/LV+septum ratio). While these correlations in the HIV-1 transgenic model are consistent with our hypothesis, our in-vitro work in pulmonary endothelial cells validates that viral protein-mediated oxidative stress/HIF-1α signaling results in induction of PDGF-BB. Injury to the endothelium, an initiating event in PAH [54], is known to be associated with the induction of oxidative stress [44]. The HIV-associated proteins Tat and gp-120, as confirmed by our findings and others, demonstrate the ability to invoke oxidative stress-mediated endothelial dysfunction [27,28,44,55]. In addition, results demonstrating enhanced levels of HIF-1α in viral protein-treated pulmonary endothelial cells are in concert with previous findings supporting the activation and accumulation of HIF-1α by HIV-1 through the production of ROS [56]. PDGF-BB, known to be involved in hypoxia-induced vascular remodeling [30,57,58], has been suggested to be up-regulated in a HIF-dependent manner, but the mechanisms by which HIF-1α and PDGF levels are elevated during the vascular remodeling associated with PAH are still not completely understood.
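The "directly related, in a linear fashion" observation above is the kind of association quantified by a Pearson correlation coefficient. A stdlib-only sketch with hypothetical per-animal values (not the study's measurements) of relative PDGF-B mRNA versus the RV/(LV+septum) ratio:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired values: relative PDGF-B mRNA vs. RV/(LV+septum) ratio
pdgf = [1.0, 1.4, 1.9, 2.3, 2.8]
rv_ratio = [0.25, 0.27, 0.30, 0.33, 0.36]
r = pearson_r(pdgf, rv_ratio)
```

With near-linear data like this, r approaches 1, which is the pattern the authors describe between HIF-1α/PDGF expression and right ventricular hypertrophy.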
A putative HIF-response element on the PDGF gene has been identified [59], but studies demonstrating the direct involvement of HIF-1α in the regulation of PDGF expression are lacking. Here, we provide evidence validating the significance of HIF-1α in the pathogenesis of HIV-associated vascular dysfunction, and report the novel finding that its response to viral protein-generated oxidative stress is to augment PDGF expression in the pulmonary endothelium. To our knowledge, this is the first report validating that HIV-1 viral proteins induce PDGF expression through the activation of HIF-1α. The HIV-1 virus is unable to actively infect endothelial cells due to the absence of the necessary CD4 receptors. However, viral proteins have been demonstrated to act on endothelial cells through direct binding to their CCR5 (R5) or CXCR4 (X4) co-receptors [60]. This is corroborated by our findings showing mitigation of the gp-120CM response to increase PDGF-BB expression in the presence of CCR5 neutralizing antibody. Maximum PDGF-BB expression and ROS production were seen on treatment with R5-type gp-120, which is expected to be secreted in abundance by the infiltrated HIV-infected CCR5+ T cells [61] and macrophages seen around the pulmonary vascular lesions associated with PAH [62]. In addition, studies on co-receptor usage of HIV have shown that virus utilizing CCR5 as a co-receptor is the predominant type found in HIV-infected individuals [63]. Furthermore, R5 gp-120 has been reported earlier to induce the expression of cell-cycle and cell proliferation-related genes more strongly than X4 gp-120 in peripheral blood mononuclear cells [64], and this differential potency of the gp-120 effect may be present in pulmonary endothelial cells as well. Conclusion: In summary, we demonstrate that the influence of HIV-1 proteins alone, without viral infection, is associated with pulmonary arteriopathy, including accumulation of HIF-1α and PDGF, as observed in the HIV-1 Tg rats.
Furthermore, our in-vitro findings confirm that HIV-1 viral protein-mediated generation of oxidative stress and the resultant activation of HIF-1α lead to subsequent induction of PDGF expression in pulmonary endothelial cells. Consistent with a possible role of PDGF in the development of idiopathic PAH, the correlation of this mediator with RVH suggests that this pathway may be one of the many insults involved in the development of HIV-related pulmonary arteriopathy and potentially HIV-PAH.
Background: Human immunodeficiency virus (HIV) infected patients are at increased risk for the development of pulmonary arterial hypertension (PAH). Recent reports have demonstrated that HIV-associated viral proteins induce reactive oxygen species (ROS) with resultant endothelial cell dysfunction and related vascular injury. In this study, we explored the impact of HIV protein-induced oxidative stress on production of hypoxia inducible factor (HIF)-1α and platelet-derived growth factor (PDGF), critical mediators implicated in the pathogenesis of HIV-PAH. Methods: The lungs from 4-5-month-old HIV-1 transgenic (Tg) rats were assessed for the presence of pulmonary vascular remodeling and HIF-1α/PDGF-BB expression in comparison with wild type controls. Human primary pulmonary arterial endothelial cells (HPAEC) were treated with HIV-associated proteins in the presence or absence of pretreatment with antioxidants for 24 hrs, followed by estimation of ROS levels and western blot analysis of HIF-1α or PDGF-BB. Results: HIV-Tg rats, a model with marked viral protein-induced vascular oxidative stress in the absence of active HIV-1 replication, demonstrated significant medial thickening of pulmonary vessels and increased right ventricular mass compared to wild-type controls, with increased expression of HIF-1α and PDGF-BB. The up-regulation of both HIF-1α and PDGF-B chain mRNA in each HIV-Tg rat was directly correlated with an increase in the right ventricular/left ventricular+septum ratio. Supporting our in-vivo findings, HPAECs treated with the HIV proteins Tat and gp120 demonstrated increased ROS and a parallel increase of PDGF-BB expression, with the maximum induction observed on treatment with R5-type gp-120CM.
Pre-treatment of endothelial cells with antioxidants or transfection of cells with HIF-1α small interfering RNA resulted in abrogation of gp-120CM-mediated induction of PDGF-BB, thereby confirming that ROS generation and activation of HIF-1α play a critical role in gp120-mediated up-regulation of PDGF-BB. Conclusions: In summary, these findings indicate that viral protein-induced oxidative stress results in HIF-1α-dependent up-regulation of PDGF-BB and suggest the possible involvement of this pathway in the development of HIV-PAH.
Introduction: The advent of antiretroviral therapy (ART) has clearly led to improved survival among HIV-1 infected individuals, yet this advancement has resulted in the unexpected consequence of virus-associated noninfectious complications such as HIV-related pulmonary arterial hypertension (HIV-PAH) [1,2]. Despite adherence to ART, development of HIV-PAH serves as an independent predictor of death in patients with HIV infection [3]. A precise characterization of the pathogenesis of HIV-PAH has so far proven elusive. As there is little evidence for direct viral infection within the pulmonary vascular bed [4-7], a popular hypothesis is that secreted HIV-1 viral proteins in circulation are capable of inducing vascular oxidative stress and direct endothelial cell dysfunction and smooth muscle cell proliferation critical to the development of HIV-related arteriopathy [8,9]. Further, evidence is accumulating which suggests that HIV-1 infection of monocyte/macrophages and lymphocytes stimulates increased production of pro-inflammatory markers and/or growth factors implicated in the pathogenesis of HIV-PAH, such as platelet-derived growth factor (PDGF)-BB [10-16]. These soluble mediators can then initiate endothelial injury followed by smooth muscle cell proliferation and migration [2,17,18]. Previous studies provide evidence for the possible involvement of PDGF in the pathogenesis of pulmonary vascular remodeling in animal models [19,20] and in lung biopsies from patients with PPH or with HIV-PAH [12]. Furthermore, a non-specific inhibitor of PDGF signaling, imatinib, has demonstrated the ability to diminish vascular remodeling in animal studies and to mitigate clinical decline in human PAH trials [21-24]. Our previous work demonstrates an over-expression of PDGF in-vitro in HIV-infected macrophages [25] and in-vivo in Simian HIV-infected macaques [16].
Our recent work supports an HIV-protein-mediated up-regulation of PDGF-BB in un-infectable vascular cell types such as human primary pulmonary arterial endothelial and smooth muscle cells [26]. However, the mechanism(s) by which HIV infection or viral protein binding induces PDGF expression, and the role of this potent mitogen in the setting of HIV-associated pulmonary arteriopathy, have not been well characterized. HIV-associated viral proteins including Tat and gp-120 have demonstrated the ability to trigger the generation of reactive oxygen species (ROS) [27,28]. As oxidative stress stabilizes hypoxia inducible factor (HIF)-1α, a transcription factor critical for the regulation of important proliferative and vaso-active mediators [29-31], we hypothesized that viral protein-generated ROS induce HIF-1α accumulation, with a resultant enhanced transcription of the PDGF-B chain. Thus, given the need for clarification of the mechanisms responsible for HIV-related pulmonary vascular remodeling, in the present study we first utilized the non-infectious NL4-3Δgag/pol HIV-1 transgenic (HIV-Tg) rat model [32,33] to explore the direct role of viral proteins in the development of pulmonary vascular remodeling. This HIV-Tg rat model [34] develops many clinical multisystem manifestations similar to those found in AIDS patients and, most importantly, has earlier been demonstrated to be under significant oxidative stress. Furthermore, given that pulmonary artery endothelial dysfunction plays a key role in the initiation and progression of PAH [35-37], we next used a primary pulmonary endothelial cell-culture system to delineate the importance of oxidative stress and HIF-1α activation in viral protein-mediated up-regulation of PDGF-BB. Authors' contributions: All authors have read and approved the manuscript.
JM contributed to writing the manuscript, quantitated medial wall thickness and participated in data analysis; HG performed all the cell-culture experiments; XB performed immunohistochemistry and western blots on rat lungs; FL harvested lung tissues, extracted RNA and performed real-time RT-PCR experiments; OT reviewed the H&E and immunohistochemically stained sections; SJB and SB contributed to critiquing the manuscript; AL participated in interpretation of the data and writing the manuscript; NKD designed the study, supervised overall experimental plans, analyzed and interpreted the data, and wrote the manuscript.
14,779
412
[ 662, 166, 72, 203, 199, 76, 84, 161, 131, 113, 6029, 454, 242, 150, 697, 395, 576, 446, 1097, 119 ]
21
[ "hiv", "rats", "tg", "pdgf", "hiv tg", "gp", "hif", "1α", "hif 1α", "expression" ]
[ "arteriopathy exhibited hiv", "pulmonary arteries hiv", "hiv pulmonary arteriopathy", "arterial hypertension hiv", "hiv associated vascular" ]
[CONTENT] lungs | endothelial cells | gp-120 | oxidative stress [SUMMARY]
[CONTENT] Animals | Antioxidants | Becaplermin | Blotting, Western | Cells, Cultured | Disease Models, Animal | Endothelial Cells | Familial Primary Pulmonary Hypertension | HIV Envelope Protein gp120 | HIV Infections | HIV-1 | Humans | Hypertension, Pulmonary | Hypertrophy, Right Ventricular | Hypoxia-Inducible Factor 1, alpha Subunit | Lung | Microvessels | Oxidative Stress | Platelet-Derived Growth Factor | Proto-Oncogene Proteins c-sis | Pulmonary Artery | RNA Interference | RNA, Messenger | Rats | Rats, Sprague-Dawley | Rats, Transgenic | Reactive Oxygen Species | Signal Transduction | Time Factors | Transfection | Up-Regulation | tat Gene Products, Human Immunodeficiency Virus [SUMMARY]
[CONTENT] hiv | pah | vascular | hiv pah | viral | pulmonary | pdgf | infection | cell | pulmonary vascular [SUMMARY]
[CONTENT] analysis | ml | performed | cruz | santa cruz | santa | cells | rna | followed | lv [SUMMARY]
[CONTENT] hiv | development | pah | pdgf | arteriopathy | pulmonary arteriopathy | pulmonary | viral | influence hiv proteins viral | tg rats furthermore vitro [SUMMARY]
[CONTENT] hiv | rats | gp | tg | pdgf | hiv tg | hif | 1α | hif 1α | hiv tg rats [SUMMARY]
[CONTENT] PAH ||| ROS ||| PDGF | HIV-PAH [SUMMARY]
[CONTENT] 4-5 months old | HIF-1α/PDGF-BB ||| 24 | ROS | HIF-1α | PDGF-BB [SUMMARY]
[CONTENT] HIF-1α | PDGF-BB [SUMMARY]
[CONTENT] PAH ||| ROS ||| PDGF | HIV-PAH ||| 4-5 months old | HIF-1α/PDGF-BB ||| 24 | ROS | HIF-1α ||| HIV-1 | HIF-1α | PDGF-BB ||| HIF-1α ||| ROS | PDGF-BB ||| HIF-1α | RNA | gp-120CM | PDGF-BB | ROS | HIF-1α | gp120 | PDGF-BB ||| HIF-1α | PDGF-BB [SUMMARY]
Effect of Vitamin D Supplementation in Early Life on Children's Growth and Body Composition: A Systematic Review and Meta-Analysis of Randomized Controlled Trials.
33562750
Vitamin D deficiency during pregnancy or infancy is associated with adverse growth in children. No systematic review has been conducted to summarize available evidence on the effect of vitamin D supplementation in pregnancy and infancy on growth and body composition in children.
BACKGROUND
A systematic review and meta-analysis were performed on the effects of vitamin D supplementation during early life on children's growth and body composition (bone, lean and fat). A literature search of randomized controlled trials (RCTs) was conducted to identify relevant studies on the effects of vitamin D supplementation during pregnancy and infancy on children's body composition (bone, lean and fat) in PubMed, EMBASE and Cochrane Library from inception to 31 December 2020. A Cochrane Risk Assessment Tool was used for quality assessment. The comparison was vitamin D supplementation vs. placebo or standard care. Random-effects and fixed-effect meta-analyses were conducted. The effects are presented as mean differences (MDs) or risk ratios (RRs) with 95% confidence intervals (CIs).
METHOD
A total of 3960 participants from eleven randomized controlled trials were eligible for inclusion. Vitamin D supplementation during pregnancy was associated with higher triceps skinfold thickness (mm) (MD 0.33, 95% CI, 0.12, 0.54; I2 = 34%) in neonates. Vitamin D supplementation during pregnancy or infancy was associated with significantly increased length for age z-score in infants at 1 year of age (MD 0.29, 95% CI, 0.03, 0.54; I2 = 0%), and was associated with lower body mass index (BMI) (kg/m2) (MD -0.19, 95% CI -0.34, -0.04; I2 = 0%) and body mass index z-score (BMIZ) (MD -0.12, 95% CI -0.21, -0.04; I2 = 0%) in offspring at 3-6 years of age. Vitamin D supplementation during early life was not observed to be associated with children's bone, lean or fat mass.
RESULTS
Vitamin D supplementation during pregnancy or infancy may be associated with reduced adiposity in childhood. Further large clinical trials of the effects of vitamin D supplementation on childhood body composition are warranted.
CONCLUSION
[ "Adiposity", "Bias", "Body Composition", "Body Height", "Body Mass Index", "Body Weight", "Bone Density", "Confidence Intervals", "Female", "Growth", "Humans", "Infant", "Infant, Newborn", "Odds Ratio", "Placebos", "Pregnancy", "Randomized Controlled Trials as Topic", "Skinfold Thickness", "Vitamin D", "Vitamins" ]
7914476
1. Introduction
There is growing interest regarding the association of early life vitamin D status with children's growth, bone health, adiposity and muscle development. It is widely accepted that vitamin D plays a critical role in bone health by maintaining calcium homeostasis [1]. This function becomes especially important during pregnancy, when the developing fetus is entirely dependent on the mother for accretion of roughly 30 g of calcium for skeletal purposes [2,3]. In addition to its calcium metabolic functions, mixed evidence suggests that infant adiposity and lean mass are in part determined by vitamin D status [2]. Vitamin D may also play a role in maintaining normal glucose homeostasis during pregnancy, thus preventing fetal macrosomia and excess deposition of subcutaneous fat [4]. Vitamin D receptors have been isolated in skeletal muscle tissues [5], and low vitamin D concentration is associated with proximal myopathy and reduced physical performance [6]. Several observational studies [7,8,9,10,11,12,13,14,15] on maternal vitamin D status and growth or body composition in offspring have been conducted. Low vitamin D concentrations were associated with lower birthweight [11]. Offspring exposed to higher maternal serum 25(OH)D concentrations had lower fat mass and higher bone mass during infancy [6]. In its most severe form, maternal vitamin D deficiency placed infants at elevated risk of rickets [16,17,18]. While there are few observational studies relating postnatal muscle development to intrauterine 25(OH)D exposure, no association between the two in adulthood was reported in one study [19]. Another observational study concluded that prenatal vitamin D exposure may have a greater effect on muscle strength than on muscle mass in the development of offspring [6].
Considering the high prevalence of low vitamin D status during pregnancy and infancy [20,21,22,23], and the inconsistent results of the clinical trials [2,6], this systematic review and meta-analysis aimed to assess the effect of vitamin D supplementation in early life (pregnancy, lactation and infancy) on child growth, bone health, lean mass and adiposity.
2. Methods
We followed the guidelines for Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [24].

2.1. Search Strategy

An electronic literature search of published studies was performed on PubMed, EMBASE and the Cochrane Library up to 31 December 2020. The search strategy systematically combined (AND/OR) controlled vocabulary (i.e., MeSH terms: “Vitamin D” [MeSH], “body composition” [MeSH]) with specific text words (including “vitamin D”, “calciferol”, “supplementation”, “pregnancy”, “infancy”, “growth”, “body composition”, “bone”, “lean mass”, “fat mass”), restricted to records with English-language abstracts. The details of the search strategy are presented in Table S1. Only English-language papers on human clinical trials were considered. The reference lists of relevant reviews and studies were screened for additional articles [3,25,26,27,28].

2.2. Study Selection

Selected studies had to fulfill the following criteria to qualify for inclusion: (a) the study design was a randomized controlled trial (RCT) of vitamin D supplementation (800–5000 IU/day vs. placebo or standard care) or, in cases of co-intervention, with consistent additional supplements across treatment groups; (b) the study population was children; (c) the outcomes included at least one of the following: bone mineral content (BMC), fat mass, lean mass, skinfold thickness, body mass index (BMI), body mass index z-score (BMIZ), weight for age z-score (WAZ), length for age z-score (LAZ) and head circumference for age z-score (HCAZ); (d) the study met the methodological quality assessment criteria for RCTs [29]. Studies were excluded if: (a) the outcome data were incomplete or impossible to compare with other studies; (b) there was no appropriate control group.

Two authors (K. M. and W. G. B.) independently searched for and assessed the eligibility of the electronic literature by initially screening titles and abstracts. Full-length articles of potentially eligible studies were then obtained and read to make final inclusion or exclusion decisions. In cases of disagreement, a third reviewer (S. Q. W.) was consulted.

2.3. Quality Assessment

Using the Cochrane Risk Assessment Tool, we evaluated the methodological quality of each included clinical trial based on the following criteria: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other biases [29]. We rated each of these items as having a low, high or unclear risk of bias for each eligible RCT.

2.4. Data Extraction and Synthesis

A data extraction form was used to collect information on the characteristics of each clinical trial: the first author’s last name, year of publication, country of origin, study design, total sample size, characteristics of participants, initiation of supplementation, interventions and outcomes. Data were extracted by two reviewers independently following a per-protocol analysis.

All statistical analyses were performed using Review Manager (version 5.3). For interventional studies with multiple experimental groups receiving varying amounts of vitamin D supplements, data were merged to form a single experimental group per study. All outcomes in this analysis were continuous. The mean, standard deviation and number of participants for the control and experimental groups of each outcome were used to calculate the sample-size-weighted mean difference (MD). Point estimates were illustrated by forest plots for each study with a 95% confidence interval (CI). Heterogeneity was assessed by calculating the I-squared (I2) statistic. Results were pooled using a fixed-effects model when I2 was less than 50%, and a random-effects model was applied when I2 reached 50% or more. A p-value of less than 0.05 was considered significant.
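The pooling procedure described above (inverse-variance weighting of mean differences, Cochran's Q, I2, and the fixed- vs. random-effects decision rule) can be sketched as follows. This is an illustrative sketch only, not the Review Manager implementation; the study values used in the example are hypothetical.

```python
import math

def pool_mean_differences(studies):
    """Inverse-variance fixed-effect pooling of mean differences (MDs),
    with Cochran's Q and the I-squared heterogeneity statistic.

    Each study is a tuple: (mean_t, sd_t, n_t, mean_c, sd_c, n_c).
    Mirrors the decision rule described above: a fixed-effects model
    when I2 < 50%, otherwise a random-effects model would be applied.
    """
    mds, weights = [], []
    for mean_t, sd_t, n_t, mean_c, sd_c, n_c in studies:
        md = mean_t - mean_c
        var = sd_t ** 2 / n_t + sd_c ** 2 / n_c  # variance of the MD
        mds.append(md)
        weights.append(1.0 / var)                # inverse-variance weight
    total_w = sum(weights)
    pooled = sum(w * md for w, md in zip(weights, mds)) / total_w
    se = math.sqrt(1.0 / total_w)
    ci95 = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Heterogeneity: Cochran's Q and I2 = max(0, (Q - df) / Q) * 100
    q = sum(w * (md - pooled) ** 2 for w, md in zip(weights, mds))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    model = "fixed" if i2 < 50.0 else "random"
    return pooled, ci95, i2, model
```

For two hypothetical trials with identical effects, Q is zero, so I2 = 0% and the fixed-effects estimate applies; widely discrepant trial effects drive I2 toward 100% and trigger the random-effects model.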
3. Results
3.1. Study Selection

Our search strategy identified 1665 potential publications. After screening the titles and abstracts, we read 53 full-text articles, of which 12 studies were included in this systematic review [3,26,27,30,31,32,33,34,35,36,37,38]. The selection process of the relevant literature is summarized in the PRISMA flow diagram (Figure 1).

3.2. Characteristics of Included Trials

Twelve RCTs [3,26,27,30,31,32,33,34,35,36,37,38,39,40] involving a total of 4583 participants were included in this systematic review. Nine trials conducted vitamin D supplementation during pregnancy [3,26,27,30,31,33,34,36,40], and three trials performed vitamin D supplementation during infancy [30,34,38]. One study involved a subgroup of vitamin D supplementation during lactation [37]. One study [32] followed up children at 1 (Hazell 2014) [33] and 3 (Hazell 2017) [34] years of age. Six studies [3,26,36,37,38,39] were placebo-controlled; four [27,30,34,40] compared higher vs. lower doses of vitamin D, with the lowest-dose group (400 IU/day, part of standard care) serving as the control; and two [33,36] involved control groups without supplements. All intervention groups were supplemented with cholecalciferol, except two studies [31,39] that used ergocalciferol. One study provided vitamin D and calcium supplementation in all treatment groups, but the vitamin D doses differed between the intervention groups and the control group (intervention: 60,000 IU/4 weeks or 60,000 IU/8 weeks; control: 400 IU/day) [27]. Three of the RCTs [32,34,35] were follow-up studies. Details of the characteristics of the included studies are shown in Table 1.

3.3. Risk of Bias of Included Clinical Trials

The risk of bias of the included clinical trials is presented in Table S2. Participation completion rates were especially low in the two RCT follow-up studies in infants, at 43.9% [38] and 66% [33], respectively. For random sequence generation, eight studies had a low risk of bias and two studies in pregnancy [35,39] an unclear risk of bias. For allocation concealment, nine studies had a low risk of bias, one study [31] a high risk of bias and one study [35] an unclear risk of bias. For blinding of participants and personnel, ten studies had a low risk of bias and one study [33] an unclear risk of bias. For blinding of outcome assessment, eight studies had a low risk of bias and three studies an unclear risk of bias. For incomplete outcome data, eight studies had a low risk of bias and three studies a high risk of bias. For selective reporting, all studies had a low risk of bias except Brooke et al. [39], which had a high risk of bias. For other sources of bias, one study [31] had a high risk of bias; the rest had a low risk of bias.

3.4. Bone Mineral Content (BMC)

Whole-body BMC (g) was examined in five RCTs [3,26,27,31,33] involving 1444 and 1349 mother–newborn pairs. Dual-energy X-ray absorptiometry (DXA) was used in all of these studies to assess bone parameters. Bone health assessment was performed in the infants at one week, three weeks, between 12 and 16 months, and at 3–6 years. There was no association between vitamin D supplementation during pregnancy and whole-body BMC (g) in neonates (MD 1.09, 95% CI 0.64, 2.81; I2 = 0%) or BMC (g) in infants at 1 year of age (MD −19.38, 95% CI −60.55, 21.79; I2 = 73%) (Figure 2A).

3.5. Lean Mass (g) and Lean Mass Percentage (%)

Lean mass (g) was reported in six RCTs [27,30,31,33,34,40] involving 631 participants. Lean mass was measured using DXA at approximately seven days, 6 months, between 12 and 16 months and at 36 months. Vitamin D supplementation was not associated with total lean mass in infants at 6 months (MD −18.42, 95% CI −586.29, 549.45; I2 = 81%), 1 year (MD −1.00, 95% CI −624.71, 622.71; I2 = 67%) or 3 years (MD 102.63, 95% CI −185.44, 390.71; I2 = 0%) (Figure 2B). Data could not be pooled when lean mass was not assessed at similar ages. Cooper et al. [26] observed that lean mass in infants born to mothers assigned to cholecalciferol supplementation was not significantly different from that in infants born to mothers in the placebo group. One study [33] showed that lean mass percentage in infants did not differ with vitamin D supplementation.

3.6. Fat Mass (g) and Fat Mass Percentage (%)

Fat mass (g) was reported in five RCTs [27,30,34,35,40] involving 621 participants. Fat mass was measured using DXA at approximately seven days, 6 months, 12–16 months and 3 years of age. Vitamin D supplementation was not associated with total body fat mass (g) in infants at 6 months (MD −153.28, 95% CI −348.14 to 41.57; I2 = 0%), 1 year (MD −141.77, 95% CI −471.04 to 187.50; I2 = 0%) or 3 years (MD −53.47, 95% CI −256.90 to 149.95; I2 = 0%) (Figure 2C). Data could not be pooled when fat mass was not assessed at similar ages. Three RCTs involving 360 participants reported fat mass percentage (%) at ages 1 year and 3–6 years. Vitamin D supplementation was not associated with fat mass percentage in infants at 1 year of age (MD −0.92, 95% CI −3.65, 1.81; I2 = 0%).

3.7. Skinfold Thickness

Skinfold (triceps) thickness (mm) was assessed in three RCTs [35,38,39] with 555 participants. Outcomes were measured at birth in two RCTs [35,39] and between the ages of three and six years in the third study [38]. Meta-analysis could only be performed for the two RCTs that measured outcomes at birth, owing to the age disparity of the outcome measurements in the third study. Neonates whose mothers had been supplemented with vitamin D had significantly greater skinfold thickness (mm) than those whose mothers had not (MD 0.33, 95% CI 0.12, 0.54), with no significant heterogeneity (I2 = 34%) (Figure 3A). Trilok-Kumar et al. [38] reported no association between vitamin D supplementation in infancy and skinfold thickness.

3.8. Body Mass Index (BMI)

Two RCTs [34,38] involving 999 participants reported BMI (kg/m2). Vitamin D supplementation (vs. placebo or standard care) in infancy was associated with significantly lower BMI (kg/m2) between the ages of 3 and 6 years (MD −0.19, 95% CI −0.34, −0.04). Heterogeneity was not significant (I2 = 0%) (Figure 3B).

3.9. Body Mass Index Z-Score (BMIZ)

Four RCTs [31,34,38,40] involving 1674 participants reported infant BMIZ. Offspring who received prenatal or postnatal vitamin D supplementation (vs. placebo or standard care) had a significantly lower BMIZ at three to six years of age (MD −0.12, 95% CI −0.21, −0.04). No significant heterogeneity was detected (I2 = 0%) (Figure 3C).

3.10. Weight for Age Z-Score (WAZ) and Length for Age Z-Score (LAZ)

WAZ was examined in six RCTs [27,31,32,34,35,37] and LAZ in four RCTs [23,28,30,39], involving 2495 and 1196 participants, respectively. Both outcomes were assessed in children at one year [34], between 12 and 16 months [27], at three years [32] and between three and six years [31,35]. Because of these age differences, results were merged separately for outcomes examined at ages 12–18 months [27,34] and at three to six years [32,35]. There was no significant difference between the intervention and control groups in WAZ at 1 year (MD −0.07, 95% CI −0.20 to 0.07) or at 3–6 years (MD −0.06, 95% CI −0.18, 0.06). LAZ was higher in infants at 1 year of age in the vitamin D supplementation group than in the control group (MD 0.29, 95% CI 0.03, 0.54; I2 = 0%); however, there was no significant difference in LAZ between the two groups at 3–6 years (MD 0.04, 95% CI −0.08, 0.16; I2 = 0%).

3.11. Head Circumference for Age Z-Score (HCAZ)

HCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95% CI −0.18, 0.42). There was no significant heterogeneity.
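The age- and sex-standardized growth outcomes used above (WAZ, LAZ, HCAZ, BMIZ) express a child's measurement as the number of reference standard deviations from the reference median for that age and sex. A minimal sketch, assuming a simplified normal reference; note that the WHO growth standards actually use LMS (Box-Cox) parameters, and the reference values below are hypothetical, chosen only for illustration.

```python
def growth_z_score(measurement, ref_median, ref_sd):
    """Simplified age/sex-specific z-score: the distance of a child's
    measurement from the reference median, in reference SDs.
    (WHO standards use LMS/Box-Cox parameters; this plain median/SD
    form is an illustrative simplification.)"""
    return (measurement - ref_median) / ref_sd

# Hypothetical example: a 12-month-old weighing 9.0 kg, against an
# assumed reference median of 9.6 kg and reference SD of 1.1 kg.
waz = growth_z_score(9.0, 9.6, 1.1)  # about -0.55
```

A pooled MD of −0.12 in BMIZ, as reported above, therefore corresponds to about one-eighth of a reference standard deviation in BMI for age and sex.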
5. Conclusions
This systematic review of randomized clinical trials suggests that vitamin D supplementation during pregnancy is associated with higher skinfold thickness in neonates, and that vitamin D supplementation during pregnancy or infancy is associated with lower BMI and BMI z-score in offspring at 3 to 6 years of age. Based on currently published clinical trials, vitamin D supplementation in early life was not observed to be associated with bone mineral content, lean mass or fat mass measured by DXA. Future large, well-designed, double-blinded RCTs are needed to assess the effectiveness of vitamin D supplementation in early life on children's bone health, lean mass and adiposity.
placebo or standard care) in infancy was associated with significantly lower BMI (kg/m2) between the ages of 3 and 6 years (MD −0.19, 95%CI −0.34, −0.04). Heterogeneity was not significant (I2 = 0%) (Figure 3B).", "Four RCTs [31,34,38,40] involving 1674 participants reported the outcome of infant BMIZ. Offspring who had prenatal or postnatal vitamin D supplementation (vs. placebo or standard care) had a significantly lower BMIZ at three to six years old (MD −0.12; 95% CI −0.21, −0.04). No significant heterogeneity was detected (I2 = 0%) (Figure 3C).", "WAZ was examined in six RCTs [27,31,32,34,35,37], and LAZ was examined in four RCTs [23,28,30,39] involving 2495 and 1196 participants, respectively. Both outcomes were assessed in children at ages one year [34], between 12 and 16 months [27], three years [32] and between three and six years [31,35]. Due to age differences, results were separately merged for outcomes examined at ages 12–18 months [27,34] and three to six years [32,35]. There was no significantly difference between the intervention group in the outcome WAZ in children at ages 1 year (MD −0.07; 95%CI −0.20 to 0.07) and 3–6 years (MD −0.06, 95% CI −0.18, 0.06). LAZ was higher in infants 1 year of age in the vitamin D supplementation group compared with the control group (MD 0.29, 95% CI 0.03, 0.54; I2 = 0%); however, there was no significant difference in LAZ in children at 3–6 years between the two groups (MD 0.04, 95%CI −0.08, 0.16; I2 = 0%).", "HCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95%CI −0.18, 0.42). There was no significant heterogeneity.", "This is the first systematic review and meta-analysis of the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on children’s body composition (bone health, lean mass and adiposity). 
We found that vitamin D supplementation during pregnancy was associated with higher skinfold thickness in neonates. Vitamin D supplementation in early life was associated with significantly higher length for age z-score in infants at 1 year of age, and was associated with lower BMI and BMI z-score in offspring at 3 to 6 years of age. From current evidence, vitamin D supplementation during early life was not found to be associated with children’s BMC, lean mass (g, %), WAZ and HCAZ. We found that vitamin D supplementation during early life showed a consistent trend toward lower body fat mass (g, %), although the 95% confidence intervals included the null effect. There was no heterogeneity (I2 = 0) across studies. These null effects may be due to the small sample sizes in the included trials. Large, well-designed clinical trials are needed to confirm the above associations.", "This systematic review adds to the existing literature by including a greater number of recent RCTs and by being the first systematic review and meta-analysis of RCTs on the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on the outcomes of children’s bone (whole-body BMC), muscle (lean mass and lean mass percentage), adiposity (skinfold thickness, fat mass and fat mass percentage) and growth (age and sex specific indicators: BMIZ, WAZ, LAZ and HCAZ).
The results show that vitamin D supplementation in early life was associated with higher skinfold thickness in neonates, higher LAZ in infants and lower BMIZ in children at 3–6 years of age, suggesting that vitamin D in early life may play an important role in children’s adiposity development, which may have public health implications for the early intervention or prevention of childhood overweight/obesity and related cardiometabolic health issues.", "There are several systematic reviews [1,11,25,28,41,42,43,44,45] on the effects of maternal vitamin D supplementation, intake or status during pregnancy on maternal, neonatal or infant health outcomes. In contrast, we could not identify any meta-analysis examining the effects of vitamin D supplementation during early life on child body composition. One narrative review [46] described the relationship between vitamin D and BMD and found it to be inconsistent across studies; however, the authors did not perform a meta-analysis. While most studies [1,25,28,41,44,45] included anthropometric measures, such as birthweight, birth length and head circumference, none of them reported the respective sex-specific and age-specific z-scores.
Harvey et al. published a comprehensive review of both observational studies and clinical trials on the role of vitamin D during pregnancy in perinatal outcomes (such as birthweight, birth length, head circumference, anthropometry and body composition, and low birthweight) [25]. Like our review, their study [25] showed that child BMC was not affected by supplementation of vitamin D, and the results were inconsistent regarding skinfold thickness. However, our systematic review included more recent studies, and the meta-analysis was based only on RCTs. Another review by Curtis et al. evaluated the link between prenatal vitamin D supplementation and child bone development, but lacked results on other body composition outcomes, such as fat and lean mass [2].
This study found that achieving a higher level of serum 25-hydroxyvitamin D [25(OH)D] in pregnancy might have beneficial effects on the bone development of offspring. However, there are not enough high quality RCTs to assess this, and the timing of assessment is variable among existing trials. The absence of an association between vitamin D and BMC in early childhood does not preclude an effect in adolescence or adulthood. Longer-term follow-up is needed.
Our previous meta-analysis showed that low maternal vitamin D status during pregnancy was associated with lower birthweight and higher weight at 9 months of age, which indicates that prenatal vitamin D status was related to accelerated weight gain during infancy that may be linked to increased adiposity in offspring [11]. Our other systematic review demonstrated that vitamin D supplementation during pregnancy increased birthweight and reduced the risk of small for gestational age [43]. However, the above two studies did not examine the effect of vitamin D on bone health, lean mass and fat mass. Although four RCTs [31,32,35,36] in the current review showed that BMI and BMIZ were lower in participants who received prenatal or postnatal vitamin D supplementation, it is important to note that children were studied around the usual age of BMI and adiposity rebound, which occur at approximately 4 and 6 years of age, respectively [35,47]. The long-term effects of vitamin D supplementation during early life on BMI and BMIZ are unclear. More high quality RCTs are required to assess the link between vitamin D supplementation and lean mass in early life.", "Vitamin D is important for the differentiation of mesenchymal stem cells into adipocytes. Early life vitamin D adequacy promotes the maturation of these precursor cells into myocytes rather than mature adipocytes.
A study performed on mice showed that offspring gestated under a vitamin D-deficient maternal diet possessed larger visceral body fat pads and greater susceptibility to high fat diet-induced adipocyte hypertrophy [48]. Moreover, greater nuclear receptor peroxisome proliferator-activated receptor gamma (Pparg) expression in visceral adipose tissue was also observed in the same study. The nuclear receptor PPARG takes part in both adipogenesis and lipid storage [49,50,51].
While direct supplementation of vitamin D did not lead to a difference in lean mass between control and experimental groups in this meta-analysis, Hazell et al. showed that higher vitamin D status correlates with a leaner body composition; infants with a plasma 25(OH)D3 concentration above 75 nmol/L did not differ in lean mass and fat mass compared with those below 75 nmol/L [37]. Previous work has shown that the biologically active form of vitamin D, 1,25(OH)2D, binds to vitamin D receptors to signal gene transcription and sensitize the Akt/mTOR pathway involved in protein synthesis [32,52].", "Our systematic review has several strengths. It is the first systematic review and meta-analysis of randomized controlled trials to assess the effect of vitamin D supplementation during early life (pregnancy and/or infancy) on body composition. Risk of bias in the RCTs was evaluated to ensure quality of the included studies. This study has some limitations. First, we included eleven RCTs of vitamin D supplementation in early life on children’s body composition; the outcome measures were quite different across individual studies, and therefore, for each outcome, there were only a few RCTs. Second, outcome assessment was performed in children at different ages, which made pooling the data impossible for certain outcomes. The baseline vitamin D status, timing and the dose of vitamin D supplementation administered during pregnancy or infancy also differed across studies.
There was a lack of data on visceral vs. subcutaneous adiposity. Moreover, most trials had no information on compliance with vitamin D supplementation. Finally, small sample sizes and loss to follow-up were additional limiting factors." ]
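The synthesis procedure stated in the Methods (sample-size/inverse-variance weighted mean difference with 95% CI, heterogeneity via the I2 statistic, fixed-effects model when I2 < 50%) can be sketched in Python. This is an illustrative re-implementation of standard inverse-variance pooling, not the Review Manager source code, and the trial numbers below are invented, not data from the included RCTs:

```python
import math

def study_md(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Mean difference (treatment - control) and its variance for one trial."""
    md = mean_t - mean_c
    var = sd_t ** 2 / n_t + sd_c ** 2 / n_c
    return md, var

def pool_fixed_effect(studies):
    """Inverse-variance fixed-effect pooled MD, 95% CI and I^2 (%)."""
    mds, variances = zip(*(study_md(*s) for s in studies))
    weights = [1.0 / v for v in variances]
    pooled = sum(w * m for w, m in zip(weights, mds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q; I^2 is the share of between-study variability beyond chance
    q = sum(w * (m - pooled) ** 2 for w, m in zip(weights, mds))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical example: two trials, each given as
# (mean_t, sd_t, n_t, mean_c, sd_c, n_c)
trials = [(10.0, 4.0, 100, 9.0, 4.0, 100),
          (12.0, 5.0, 80, 10.0, 5.0, 80)]
md, ci, i2 = pool_fixed_effect(trials)
```

Per the rule described in the Methods, when I2 reaches 50% or more a random-effects model (e.g., DerSimonian–Laird, which inflates each study's variance by an estimate of between-study variance) would replace these fixed-effect weights.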
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Search Strategy", "2.2. Study Selection", "2.3. Quality Assessment", "2.4. Data Extraction and Synthesis", "3. Results", "3.1. Study Selection", "3.2. Characteristics of Included Trials", "3.3. Risk of Bias of Included Clinical Trials", "3.4. Bone Mineral Content (BMC)", "3.5. Lean Mass (g) and Lean Mass Percentage (%)", "3.6. Fat Mass (g) and Fat Mass Percentage (%)", "3.7. Skinfold Thickness", "3.8. Body Mass Index (BMI)", "3.9. Body Mass Index Z-Score (BMIZ)", "3.10. Weight for Age Z-Score (WAZ) and Length for Age Z-Score (LAZ)", "3.11. Head Circumference for Age Z-Score (HCAZ)", "4. Discussion", "4.1. Statement of Main Findings", "4.2. Importance and Implications", "4.3. Comparison with Previous Studies", "4.4. Mechanisms", "4.5. Strengths and Limitations", "5. Conclusions" ]
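Several outcomes in the section list above (BMIZ, WAZ, LAZ, HCAZ) are age- and sex-standardized z-scores. As background only: growth-reference z-scores are commonly derived with the LMS method (Box-Cox power L, median M, coefficient of variation S, published by age and sex in growth references). The sketch below uses invented parameter values for illustration, not values from any actual reference table:

```python
import math

def lms_zscore(x, L, M, S):
    """Z-score of a measurement x against LMS reference values:
    L = Box-Cox power, M = reference median, S = coefficient of variation."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Invented example: a BMI of 18 against a hypothetical reference
# median of 16 (L and S values are made up for illustration)
z = lms_zscore(18.0, -1.6, 16.0, 0.08)
```

A measurement equal to the reference median yields a z-score of 0; values above the median give positive z-scores and values below give negative ones.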
[ "There is growing interest regarding the association between early life vitamin D status and children’s growth, bone health, adiposity and muscle development. It is widely accepted that vitamin D plays a critical role in bone health by maintaining calcium homeostasis [1]. This function becomes especially important during pregnancy when the developing fetus is entirely dependent on the mother for accretion of roughly 30 g of calcium for skeletal purposes [2,3]. In addition to vitamin D’s calcium metabolic functions, mixed evidence suggests that infant adiposity and lean mass are in part determined by vitamin D status [2]. Vitamin D may also play a role in maintaining normal glucose homeostasis during pregnancy, thus preventing fetal macrosomia and excess deposition of subcutaneous fat [4]. Vitamin D receptors have been isolated in skeletal muscle tissues [5], and low vitamin D concentration is associated with proximal myopathy and reduced physical performance [6].
Several observational studies [7,8,9,10,11,12,13,14,15] on maternal vitamin D status and growth or body composition in offspring have been conducted. Low vitamin D concentrations were associated with lower birthweight [11]. Offspring exposed to higher maternal serum 25(OH)D concentrations had lower fat mass and higher bone mass during infancy [6]. In its most severe form, maternal vitamin D deficiency places infants at elevated risk of rickets [16,17,18]. While there are few observational studies relating postnatal muscle development to intrauterine 25(OH)D exposure, no association was reported between the two in adulthood in one study [19].
Another observational study concluded that prenatal vitamin D exposure may have a greater effect on muscle strength than on muscle mass in the development of offspring [6].
Considering the high prevalence of low vitamin D status during pregnancy and infancy [20,21,22,23], and the inconsistent results of the clinical trials [2,6], this systematic review and meta-analysis aimed to assess the effect of vitamin D supplementation in early life (pregnancy, lactation and infancy) on child growth, bone health, lean mass and adiposity.", "We followed the guidelines for Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [24].
 2.1. Search Strategy An electronic literature search of published studies was performed on PubMed, EMBASE and Cochrane Library up to 31 December 2020. The systematic literature search was based on the following search strategy: controlled vocabulary (i.e., MeSH Terms: “Vitamin D” [MeSH], “body composition” [MeSH]) as well as specific text words (including “vitamin D”, “calciferol”, “supplementation”, “pregnancy”, “infancy”, “growth”, “body composition”, “bone”, “lean mass”, “fat mass”) were included and systematically combined (AND/OR) with English language abstracts available. The details of the search strategy are presented in Table S1. Only English language papers on human clinical trials were considered. The reference lists of relevant reviews and studies were screened for additional articles [3,25,26,27,28].
 2.2. Study Selection Selected studies had to fulfill the following criteria to qualify for inclusion: (a) the study design is a randomized controlled trial (RCT) of vitamin D supplementation (800–5000 IU/day vs. placebo or standard care) or, in cases of co-intervention, with consistent additional supplements across treatment groups; (b) the study population are children; (c) the outcomes measured at least one of the following: bone mineral content (BMC), fat mass, lean mass, skinfold thickness, body mass index (BMI), body mass index z-score (BMIZ), weight for age z-score (WAZ), length for age z-score (LAZ) and head circumference for age z-score (HCAZ); (d) the study met the methodological quality assessment criteria for RCTs [29]. Studies were excluded if: (a) the outcome data were incomplete or impossible to compare with other studies; (b) there was no appropriate control group.
Two authors (K. M. and W. G. B.) independently searched for and assessed the eligibility of the electronic literature by initially screening titles and abstracts. Full-length articles of potential studies to be included were then obtained and read to make final inclusion or exclusion decisions. In case of disagreement, a third reviewer (S. Q. W.) was consulted.
 2.3. Quality Assessment Using the Cochrane Risk Assessment Tool, we evaluated the methodological quality of each included clinical trial based on the following criteria: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other biases [29]. We assigned each of the abovementioned items as having either a low, high or unclear risk of bias for each eligible RCT.
 2.4. Data Extraction and Synthesis A data extraction form was used to collect the information of the individual clinical trial regarding the study characteristics: the first author’s last name, year of publication, country of origin, study design, total sample size, characteristics of participants, initiation of supplementation, interventions and outcomes. Data were extracted by two reviewers independently following a per-protocol analysis.
All statistical analyses were performed using Review Manager (version 5.3). For interventional studies with multiple experimental groups receiving varying amounts of vitamin D supplements, data were merged to form only one experimental group per study. All outcomes in this analysis had continuous data. The mean, standard deviation and number of participants for both the control and experimental group of each studied outcome were used to calculate the sample size weighted mean difference (MD). The point estimate was illustrated by forest plots for each study with a 95% confidence interval (CI). Heterogeneity was assessed by calculating the I squared (I2) statistic. Results were merged using a fixed effects model for I2 less than 50%, and a random effects model was applied when I2 reached 50% or more. A p-value of less than 0.05 was considered significant for our systematic review.", "An electronic literature search of published studies was performed on PubMed, EMBASE and Cochrane Library up to 31 December 2020. The systematic literature search was based on the following search strategy: controlled vocabulary (i.e., MeSH Terms: “Vitamin D” [MeSH], “body composition” [MeSH]) as well as specific text words (including “vitamin D”, “calciferol”, “supplementation”, “pregnancy”, “infancy”, “growth”, “body composition”, “bone”, “lean mass”, “fat mass”) were included and systematically combined (AND/OR) with English language abstracts available.
The details of the search strategy are presented in Table S1. Only English language papers on human clinical trials were considered. The reference lists of relevant reviews and studies were screened for additional articles [3,25,26,27,28].", "Selected studies had to fulfill the following criteria to qualify for inclusion: (a) the study design is a randomized controlled trial (RCT) of vitamin D supplementation (800–5000 IU/day vs. placebo or standard care) or, in cases of co-intervention, with consistent additional supplements across treatment groups; (b) the study population are children; (c) the outcomes measured at least one of the following: bone mineral content (BMC), fat mass, lean mass, skinfold thickness, body mass index (BMI), body mass index z-score (BMIZ), weight for age z-score (WAZ), length for age z-score (LAZ) and head circumference for age z-score (HCAZ); (d) the study met the methodological quality assessment criteria for RCTs [29]. Studies were excluded if: (a) the outcome data were incomplete or impossible to compare with other studies; (b) there was no appropriate control group.\nTwo authors (K. M. and W. G. B.) independently searched for and assessed the eligibility of the electronic literature by initially screening titles and abstracts. Full-length articles of potential studies to be included were then obtained and read to make final inclusion or exclusion decisions. In case of disagreement, a third reviewer (S. Q. W.) was consulted.", "Using the Cochrane Risk Assessment Tool, we evaluated the methodological quality of each included clinical trial based on the following criteria: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other biases [29]. 
We assigned each of the abovementioned items as having either a low, high or unclear risk of bias for each eligible RCT.", "A data extraction form was used to collect the information of the individual clinical trial regarding the study characteristics: the first author’s last name, year of publication, country of origin, study design, total sample size, characteristics of participants, initiation of supplementation, interventions and outcomes. Data were extracted by two reviewers independently following a per-protocol analysis.
All statistical analyses were performed using Review Manager (version 5.3). For interventional studies with multiple experimental groups receiving varying amounts of vitamin D supplements, data were merged to form only one experimental group per study. All outcomes in this analysis had continuous data. The mean, standard deviation and number of participants for both the control and experimental group of each studied outcome were used to calculate the sample size weighted mean difference (MD). The point estimate was illustrated by forest plots for each study with a 95% confidence interval (CI). Heterogeneity was assessed by calculating the I squared (I2) statistic. Results were merged using a fixed effects model for I2 less than 50%, and a random effects model was applied when I2 reached 50% or more. A p-value of less than 0.05 was considered significant for our systematic review.", " 3.1. Study Selection Our search strategy identified 1665 potential publications. After screening the titles and abstracts, we read 53 full articles, of which 12 studies were included in this systematic review [3,26,27,30,31,32,33,34,35,36,37,38]. The selection process of the relevant literature is summarized in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Flow Diagram (Figure 1).
 3.2. Characteristics of Included Trials Twelve RCTs [3,26,27,30,31,32,33,34,35,36,37,38,39,40] involving a total of 4583 participants were included in this systematic review. Nine trials conducted vitamin D supplementation during pregnancy [3,26,27,30,31,33,34,36,40], and three trials performed vitamin D supplementation during infancy [30,34,38]. One study involved a subgroup of vitamin D supplementation in lactation [37]. One study [32] followed up children at 1 (Hazell 2014) [33] and 3 (Hazell 2017) [34] years of age. Six studies [3,26,36,37,38,39] were placebo-controlled; four [27,30,34,40] were comparisons between higher vs. lower doses of vitamin D, and the lowest-dose group (400 IU/day) (this low dose is part of the standard care) served as the control; and two [33,36] involved control groups without supplements. All intervention groups were supplemented with cholecalciferol, except two studies [31,39] that used ergocalciferol supplementation. One study conducted vitamin D and calcium supplementation in all treatment groups, but the doses of vitamin D were different between intervention groups and the control group (intervention: 60,000 IU/4 weeks or 60,000 IU/8 weeks; Control: 400 IU/day) [27]. Three of the RCTs [32,34,35] were follow-up studies. Details of the characteristics of included studies are shown in Table 1.
 3.3. Risk of Bias of Included Clinical Trials Risk of bias of included clinical trials is presented in Table S2. Participation completion rates were especially low in the two RCT follow-up studies in infants at 43.9% [38] and 66% [33], respectively. For random sequence generation, there were eight studies with low risk of bias and two studies in pregnancy [35,39] with unclear risk of bias. For allocation concealment, there were nine studies with low risk of bias, one study with high risk of bias [31] and one study [35] with unclear risk of bias. For blinding of participants and personnel, there were ten studies with low risk of bias and one study [33] with unclear risk of bias. For blinding of outcome assessment, there were eight studies with low risk of bias and three studies with unclear risk of bias. For incomplete outcome data, there were eight studies with low risk of bias and three studies with high risk of bias. For selective outcome data, all studies had a low risk of bias except for Brooke et al. [39], which had a high risk of bias. For other sources of bias, there was one study [31] with a high risk of bias; the rest had a low risk of bias.
 3.4. Bone Mineral Content (BMC) Whole-body BMC (g) was examined in five RCTs [3,26,27,31,33] involving 1444 and 1349 mother–newborn pairs. Dual-energy X-ray absorptiometry (DXA) was used in all of these studies to assess bone parameters. Bone health assessment was performed in the infants at one week, three weeks, between 12 and 16 months, and 3–6 years. There was no association between vitamin D supplementation during pregnancy and whole-body BMC (gram) in neonates (MD 1.09, 95% CI 0.64, 2.81; I2 = 0) or BMC (gram) in infants at 1 year of age (MD −19.38, 95% CI −60.55, 21.79; I2 = 73%) (Figure 2A).
 3.5. Lean Mass (g) and Lean Mass Percentage (%) Lean mass (gram) was reported in six RCTs [27,30,31,33,34,40] involving 631 participants. Lean mass was measured using DXA at approximately seven days, 6 months, between 12 and 16 months and at 36 months. Vitamin D supplementation was not associated with total lean mass in infants at ages 6 months (MD, −18.42, 95% CI −586.29, 549.45; I2 = 81%), 1 year (MD −1.00, 95% CI −624.71, 622.71, I2 = 67%) and 3 years (MD 102.63, 95% CI −185.44, 390.71, I2 = 0%) (Figure 2B). The data could not be pooled when lean mass was not assessed at similar ages. Cooper et al. [26] observed that the lean mass of infants born to mothers assigned to cholecalciferol supplementation was not significantly different from that of infants born to mothers in the placebo group. One study [33] showed that lean mass percentage in infants did not differ with vitamin D supplementation.
 3.6. Fat Mass (g) and Fat Mass Percentage (%) Fat mass (g) was reported in five RCTs [27,30,34,35,40] involving 621 participants. Fat mass was measured using DXA at approximately seven days, 6 months, 12–16 months and 3 years of age. Vitamin D supplementation was not associated with total body fat mass (g) in the infants at ages 6 months (MD, −153.28, 95% CI −348.14 to 41.57, I2 = 0%), 1 year (MD, −141.77; 95% CI, −471.04 to 187.50, I2 = 0%) and 3 years (MD, −53.47; 95% CI, −256.90 to 149.95, I2 = 0%) (Figure 2C). The data could not be pooled when fat mass was not assessed at similar ages. Three RCTs involving 360 participants reported the outcome of fat mass percentage (%) at ages 1 year and 3–6 years. Vitamin D supplementation was not associated with fat mass percentage (%) in the infants at 1 year of age (MD −0.92, 95% CI −3.65, 1.81, I2 = 0).
 3.7. Skinfold Thickness Skinfold (triceps) thickness (mm) was assessed in three RCTs [35,38,39] with 555 participants. Outcomes were measured at birth in two RCTs [35,39] and between the ages of three and six years in the third study [38]. Meta-analysis could only be performed for the two RCTs that measured outcomes at birth due to the age disparity in outcome measurement with the third study. Neonates whose mothers had been supplemented with vitamin D had significantly higher skinfold thickness (mm) than those who had not (MD 0.33, 95% CI 0.12, 0.54). There was no significant heterogeneity (I2 = 34%) (Figure 3A). Trilok-Kumar et al. [38] reported no association between infancy supplementation of vitamin D and skinfold thicknesses.
 3.8. Body Mass Index (BMI) Two RCTs [34,38] involving 999 participants reported the outcome of BMI (kg/m2). Vitamin D supplementation (vs. placebo or standard care) in infancy was associated with significantly lower BMI (kg/m2) between the ages of 3 and 6 years (MD −0.19, 95% CI −0.34, −0.04). Heterogeneity was not significant (I2 = 0%) (Figure 3B).
 3.9. Body Mass Index Z-Score (BMIZ) Four RCTs [31,34,38,40] involving 1674 participants reported the outcome of infant BMIZ. Offspring who had prenatal or postnatal vitamin D supplementation (vs. placebo or standard care) had a significantly lower BMIZ at three to six years old (MD −0.12; 95% CI −0.21, −0.04). No significant heterogeneity was detected (I2 = 0%) (Figure 3C).
 3.10. Weight for Age Z-Score (WAZ) and Length for Age Z-Score (LAZ) WAZ was examined in six RCTs [27,31,32,34,35,37], and LAZ was examined in four RCTs [23,28,30,39] involving 2495 and 1196 participants, respectively. Both outcomes were assessed in children at ages one year [34], between 12 and 16 months [27], three years [32] and between three and six years [31,35]. Due to age differences, results were separately merged for outcomes examined at ages 12–18 months [27,34] and three to six years [32,35]. There was no significant difference between the intervention and control groups in WAZ in children at ages 1 year (MD −0.07; 95% CI −0.20 to 0.07) and 3–6 years (MD −0.06, 95% CI −0.18, 0.06).
LAZ was higher in infants 1 year of age in the vitamin D supplementation group compared with the control group (MD 0.29, 95% CI 0.03, 0.54; I2 = 0%); however, there was no significant difference in LAZ in children at 3–6 years between the two groups (MD 0.04, 95%CI −0.08, 0.16; I2 = 0%).\nWAZ was examined in six RCTs [27,31,32,34,35,37], and LAZ was examined in four RCTs [23,28,30,39] involving 2495 and 1196 participants, respectively. Both outcomes were assessed in children at ages one year [34], between 12 and 16 months [27], three years [32] and between three and six years [31,35]. Due to age differences, results were separately merged for outcomes examined at ages 12–18 months [27,34] and three to six years [32,35]. There was no significantly difference between the intervention group in the outcome WAZ in children at ages 1 year (MD −0.07; 95%CI −0.20 to 0.07) and 3–6 years (MD −0.06, 95% CI −0.18, 0.06). LAZ was higher in infants 1 year of age in the vitamin D supplementation group compared with the control group (MD 0.29, 95% CI 0.03, 0.54; I2 = 0%); however, there was no significant difference in LAZ in children at 3–6 years between the two groups (MD 0.04, 95%CI −0.08, 0.16; I2 = 0%).\n 3.11. Head Circumference for Age Z-Score (HCAZ) HCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95%CI −0.18, 0.42). There was no significant heterogeneity.\nHCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95%CI −0.18, 0.42). There was no significant heterogeneity.", "Our search strategy identified 1665 potential publications. After screening the titles and abstracts, we read 53 full articles, of which 12 studies were included in this systematic review [3,26,27,30,31,32,33,34,35,36,37,38]. 
The selection process for the relevant literature is summarized in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram (Figure 1).\nTwelve RCTs [3,26,27,30,31,32,33,34,35,36,37,38,39,40] involving a total of 4583 participants were included in this systematic review. Nine trials conducted vitamin D supplementation during pregnancy [3,26,27,30,31,33,34,36,40], and three trials performed vitamin D supplementation during infancy [30,34,38]. One study involved a subgroup of vitamin D supplementation during lactation [37]. One study [32] followed up children at 1 (Hazell 2014) [33] and 3 (Hazell 2017) [34] years of age. Six studies [3,26,36,37,38,39] were placebo-controlled; four [27,30,34,40] compared higher vs. lower doses of vitamin D, with the lowest-dose group (400 IU/day, part of standard care) serving as the control; and two [33,36] involved control groups without supplements. All intervention groups were supplemented with cholecalciferol, except in two studies [31,39] that used ergocalciferol supplementation. One study conducted vitamin D and calcium supplementation in all treatment groups, but the doses of vitamin D differed between the intervention groups and the control group (intervention: 60,000 IU/4 weeks or 60,000 IU/8 weeks; control: 400 IU/day) [27]. Three of the RCTs [32,34,35] were follow-up studies. Details of the characteristics of the included studies are shown in Table 1.\nRisk of bias in the included clinical trials is presented in Table S2. Participation completion rates were especially low in the two RCT follow-up studies in infants, at 43.9% [38] and 66% [33], respectively. For random sequence generation, there were eight studies with low risk of bias and two studies in pregnancy [35,39] with unclear risk of bias. For allocation concealment, there were nine studies with low risk of bias, one study with high risk of bias [31] and one study [35] with unclear risk of bias. 
For blinding of participants and personnel, there were ten studies with low risk of bias and one study [33] with unclear risk of bias. For blinding of outcome assessment, there were eight studies with low risk of bias and three studies with unclear risk of bias. For incomplete outcome data, there were eight studies with low risk of bias and three studies with high risk of bias. For selective outcome reporting, all studies had a low risk of bias except for Brooke et al. [39], which had a high risk of bias. For other sources of bias, there was one study [31] with a high risk of bias; the rest had a low risk of bias.\n 4.1. Statement of Main Findings This is the first systematic review and meta-analysis of the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on children’s body composition (bone health, lean mass and adiposity). We found that vitamin D supplementation during pregnancy was associated with higher skinfold thickness in neonates. 
Vitamin D supplementation in early life was associated with significantly higher length-for-age z-score in infants at 1 year of age, and with lower BMI and BMI z-score in offspring at 3 to 6 years of age. From current evidence, vitamin D supplementation during early life was not found to be associated with children’s BMC, lean mass (g, %), WAZ or HCAZ. We found that vitamin D supplementation during early life showed a consistent trend toward lower body fat mass (g, %), although the 95% confidence intervals included the null effect. There was no heterogeneity (I2 = 0%) across studies. These null effects may be due to the small sample sizes in the included trials. Large, well-designed clinical trials are needed to confirm the above associations.\n 4.2. 
Importance and Implications This systematic review adds to the existing literature by including a greater number of recent RCTs and is the first systematic review and meta-analysis of RCTs on the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on the outcomes of children’s bone (whole-body BMC), muscle (lean mass and lean mass percentage), adiposity (skinfold thickness, fat mass and fat mass percentage) and growth (age- and sex-specific indicators: BMIZ, WAZ, LAZ and HCAZ). The results show that vitamin D supplementation in early life was associated with higher skinfold thickness in neonates, higher LAZ in infants and lower BMIZ in children at 3–6 years of age, suggesting that vitamin D in early life may play an important role in children’s adiposity development, with public health implications for the early intervention or prevention of childhood overweight/obesity and related cardiometabolic health issues.\n 4.3. 
Comparison with Previous Studies There are several systematic reviews [1,11,25,28,41,42,43,44,45] on the effects of maternal vitamin D supplementation intake or status during pregnancy on maternal, neonatal or infant health outcomes. In contrast, we could not identify any meta-analysis examining the effects of vitamin D supplementation during early life on child body composition. One narrative review [46] described the relationship between vitamin D and BMD and found it to be inconsistent across studies; however, the authors did not perform a meta-analysis. While most studies [1,25,28,41,44,45] included anthropometric measures, such as birthweight, birth length and head circumference, none of them reported the respective sex-specific and age-specific z-scores.\nHarvey et al. published a comprehensive review of both observational studies and clinical trials of the role of vitamin D during pregnancy in perinatal outcomes (such as birthweight, birth length, head circumference, anthropometry and body composition, and low birthweight) [25]. Like our review, their study [25] showed that child BMC was not affected by vitamin D supplementation, and the results were inconsistent regarding skinfold thickness. However, our systematic review included more recent studies, and the meta-analysis was based only on RCTs. Another review, by Curtis et al., evaluated the link between prenatal vitamin D supplementation and child bone development, but lacked results on other body composition outcomes, such as fat and lean mass [2]. That study found that achieving a higher level of serum 25-hydroxyvitamin D [25(OH)D] in pregnancy might have beneficial effects on the bone development of offspring. However, there are not enough high-quality RCTs to assess this, and the timing of assessment is variable among existing trials. The absence of an association between vitamin D and BMC in early childhood does not preclude an effect in adolescence and adulthood. Longer-term follow-up is needed.\nOur previous meta-analysis showed that low maternal vitamin D status during pregnancy was associated with lower birthweight and higher weight at 9 months of age, which indicates that prenatal vitamin D status was related to accelerated weight gain during infancy that may be linked to increased adiposity in offspring [11]. Our other systematic review demonstrated that vitamin D supplementation during pregnancy increased birthweight and reduced the risk of small for gestational age [43]. However, the above two studies did not examine the effect of vitamin D on bone health, lean mass or fat mass. Although four RCTs [31,32,35,36] in the current review showed that BMI and BMIZ were lower in participants who received prenatal or postnatal vitamin D supplementation, it is important to note that children were studied around the usual age of BMI and adiposity rebound, which occur at approximately 4 and 6 years of age, respectively [35,47]. The long-term effects of vitamin D supplementation during early life on BMI and BMIZ are unclear. More high-quality RCTs are required to assess the link between vitamin D supplementation and lean mass in early life.\n 4.4. Mechanisms Vitamin D is important for the differentiation of mesenchymal stem cells into adipocytes. Early-life vitamin D adequacy promotes the maturation of preadipocytes into myocytes rather than mature adipocytes. A study in mice showed that offspring gestated on a vitamin D-deficient diet had larger visceral fat pads and greater susceptibility to high-fat-diet-induced adipocyte hypertrophy [48]. Moreover, greater nuclear receptor peroxisome proliferator-activated receptor gamma (Pparg) expression in visceral adipose tissue was also observed in that study. The nuclear receptor PPARG takes part in both adipogenesis and lipid storage [49,50,51].\nWhile direct supplementation of vitamin D did not lead to a difference in lean mass between control and experimental groups in this meta-analysis, Hazell et al. showed that higher vitamin D status correlates with a leaner body composition, although infants with a plasma 25(OH)D3 concentration above 75 nmol/L did not differ in lean mass and fat mass from those below 75 nmol/L [37]. Previous work has shown that the biologically active form of vitamin D, 1,25(OH)2D, binds to vitamin D receptors to signal gene transcription and sensitize the Akt/mTOR pathway involved in protein synthesis [32,52].
4.5. Strengths and Limitations Our systematic review has its strengths. It is the first systematic review and meta-analysis of randomized controlled trials to assess the effectiveness of vitamin D supplementation during early life (pregnancy and/or infancy) on body composition. Risk of bias in the RCTs was evaluated to ensure the quality of the included studies. This study also has some limitations. First, we included eleven RCTs of vitamin D supplementation in early life on children’s body composition; the outcome measures differed considerably across individual studies, and therefore, for each outcome, there were only a few RCTs. Second, outcome assessment was performed in children at different ages, which made pooling the data impossible for certain outcomes. The baseline vitamin D status, timing and dose of vitamin D supplementation administered during pregnancy or infancy also differed across studies. There was a lack of data on visceral vs. subcutaneous adiposity. Moreover, most trials had no information on compliance with vitamin D supplementation. Finally, small sample sizes and loss to follow-up were additional limiting factors.
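As an illustrative aside, the pooled estimates quoted throughout the results (mean difference, 95% CI, I2) follow the standard inverse-variance approach. The sketch below (Python, fixed-effect variant only, with invented example values rather than data from the included trials; a random-effects model would additionally estimate a between-study variance) shows the underlying arithmetic:

```python
import math

def pool_fixed(effects, ses):
    """Inverse-variance fixed-effect pooling of study mean differences.

    effects: per-study mean differences; ses: their standard errors.
    Returns (pooled MD, 95% CI lower bound, upper bound, I^2 in percent).
    """
    # Each study is weighted by the inverse of its variance.
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    # Cochran's Q, then I^2 = max(0, (Q - df) / Q) expressed as a percentage.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, lo, hi, i2

# Two hypothetical studies with identical precision and similar effects
# pool to their average, with no detectable heterogeneity (I^2 = 0%).
print(pool_fixed([0.3, 0.4], [0.1, 0.1]))
```

When the study effects diverge far beyond their standard errors (e.g. MDs of 0.0 and 1.0 with SE 0.1), Q greatly exceeds its degrees of freedom and I2 approaches 100%, which is the situation flagged above for lean mass at 6 months (I2 = 81%).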
Vitamin D supplementation in early life was associated with significantly higher length for age z-score in infants at 1 year of age, and was associated with lower BMI and BMI z-score in offspring at 3 to 6 years of age. From current evidence, vitamin D supplementation during early life was not found to be associated with children’s BMC, lean mass (g, %), WAZ and HCAZ. We found that vitamin D supplementation during early life had a consistent trend to decrease body fat mass (g, %), although the 95% CI confidence intervals included the null effect. There was no heterogeneity (I2 = 0) across studies. These null effects may be due to the small sample size in the included trials. Large well-designed clinical trials are needed to confirm the above associations.", "This systematic review added to the existing literature by including a greater number of recent RCTs and the first systematic review and meta-analysis of RCTs on the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on the outcomes of children’s bone (whole-body BMC), muscle (lean mass and lean mass percentage), adiposity (skinfold thickness, fat mass and fat mass percentage) and growth (age and sex specific indicators: BMIZ, WAZ, LAZ and HCAZ). The results show that vitamin D supplementation in early life was associated with higher skinfold thickness in neonates, higher LAZ in infants and lower BMIZ in children at 3–6 years of age, suggesting that vitamin D in early life may play an important role in children’s adiposity development, which may have a public health implication for the early intervention or prevention of childhood overweight/obesity and related cardiometabolic health issues.", "There are several systematic reviews [1,11,25,28,41,42,43,44,45] on the effects of maternal vitamin D supplementation intake or status during pregnancy on maternal, neonatal or infant health outcomes. 
In contrast, we could not identify any meta-analysis examining the effects of vitamin D supplementation during early life on child body composition. One narrative review [46] described the relationship between vitamin D and BMD and found that it is inconsistent across studies; however, the authors did not perform a meta-analysis. While most studies [1,25,28,41,44,45] included anthropometric measures, such as birthweight, birth length and head circumference, none of them reported the respective sex-specific and age-specific z-scores.\nHarvey et al. published a comprehensive review on both observational and clinical trials of the role of vitamin D during pregnancy in perinatal outcomes (such as birthweight, birth length, head circumference, anthropometry and body composition and low birthweight) [25]. Like our review, their study [25] showed that child BMC was not affected by supplementation of vitamin D, and the results were inconsistent regarding skinfold thickness. However, our systematic review included more recent studies, and the meta-analysis was based only on RCTs. Another review by Curtis et al. evaluated the link between prenatal vitamin D supplementation and child bone development, but lacked results on other body composition outcomes, such as fat and lean mass [2]. This study found that achieving a higher level of serum 25 hydroxyvitamin D [25(OH)D] in pregnancy might have beneficial effects on the bone development of offspring. However, there are not enough high quality RCTs to assess this, and the timing of assessment is variable among existing trials. No association of vitamin D with BMC in early childhood could preclude an effect on adolescence and adulthood. 
Longer-term follow-up is needed.\nOur previous meta-analysis showed that maternal low vitamin D status during pregnancy was associated with lower birthweight and higher weight at 9 months of age, which indicates that prenatal vitamin D status was related to accelerated weight gain during infancy that may be linked to increased adiposity in offspring [11]. Our other systematic review demonstrated that vitamin D supplementation during pregnancy increased birthweight and reduced the risk of small for gestational age [43]. However, those two studies did not examine the effect of vitamin D on bone health, lean mass or fat mass. Although four RCTs [31,32,35,36] in this current review showed that BMI and BMIZ were lower in participants who received prenatal or postnatal vitamin D supplementation, it is important to note that children were studied around the usual ages of the BMI and adiposity rebound, which occur at approximately 4 and 6 years of age, respectively [35,47]. The long-term effects of vitamin D supplementation during early life on BMI and BMIZ are unclear. More high-quality RCTs are required to assess the link between vitamin D supplementation and lean mass in early life.", "Vitamin D is important for the differentiation of mesenchymal stem cells into adipocytes. Early life vitamin D adequacy promotes the maturation of precursor cells into myocytes rather than mature adipocytes. A study in mice showed that offspring gestated on a vitamin D-deficient diet had larger visceral fat pads and greater susceptibility to high fat diet-induced adipocyte hypertrophy [48]. Moreover, greater nuclear receptor peroxisome proliferator-activated receptor gamma (Pparg) expression in visceral adipose tissue was also observed in that study. 
The nuclear receptor PPARG takes part in both adipogenesis and lipid storage [49,50,51].\nWhile direct supplementation of vitamin D did not lead to a difference in lean mass between the control and experimental groups in this meta-analysis, Hazell et al. examined whether higher vitamin D status correlates with a leaner body composition; in their trial, infants with a plasma 25(OH)D3 concentration above 75 nmol/L did not differ in lean mass and fat mass compared with those below 75 nmol/L [37]. Previous work has shown that the biologically active form of vitamin D, 1,25(OH)2D, binds to vitamin D receptors to signal gene transcription and sensitize the Akt/mTOR pathway involved in protein synthesis [32,52].", "Our systematic review has several strengths. It is the first systematic review and meta-analysis of randomized controlled trials to assess the effectiveness of vitamin D supplementation during early life (pregnancy and/or infancy) on body composition. Risk of bias in the RCTs was evaluated to ensure the quality of the included studies. This study also has some limitations. First, we included eleven RCTs of vitamin D supplementation in early life on children’s body composition; the outcome measures were quite different across individual studies, and therefore, for each outcome, there were only a few RCTs. Second, outcome assessment was performed in children at different ages, which made pooling the data impossible for certain outcomes. The baseline vitamin D status and the timing and dose of vitamin D supplementation administered during pregnancy or infancy also differed across studies. There was a lack of data on visceral vs. subcutaneous adiposity. Moreover, most trials had no information on compliance with vitamin D supplementation. Finally, small sample sizes and loss to follow-up were additional limiting factors.", "This systematic review of randomised clinical trials suggests that vitamin D supplementation during pregnancy is associated with higher skinfold thickness in neonates. 
Vitamin D supplementation during pregnancy or infancy is associated with lower BMI and BMI z-score in offspring at 3 to 6 years of age. Based on currently published clinical trials, vitamin D supplementation in early life is not observed to be associated with bone mass, lean mass or fat mass measured by DXA. Future large, well-designed, double-blinded RCTs are needed to assess the effectiveness of vitamin D supplementation in early life on children’s bone health, lean mass and adiposity." ]
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null, null, "discussion", null, null, null, null, null, "conclusions" ]
[ "Vitamin D", "pregnancy", "infancy", "randomized controlled trials", "childhood", "body composition", "adiposity" ]
1. Introduction: There is growing interest regarding the association between early life vitamin D status and children’s growth, bone health, adiposity and muscle development. It is widely accepted that vitamin D plays a critical role in bone health by maintaining calcium homeostasis [1]. This function becomes especially important during pregnancy, when the developing fetus is entirely dependent on the mother for the accretion of roughly 30 g of calcium for skeletal purposes [2,3]. In addition to its calcium metabolic functions, mixed evidence suggests that infant adiposity and lean mass are in part determined by vitamin D status [2]. Vitamin D may also play a role in maintaining normal glucose homeostasis during pregnancy, thus preventing fetal macrosomia and excess deposition of subcutaneous fat [4]. Vitamin D receptors have been isolated in skeletal muscle tissues [5], and low vitamin D concentration is associated with proximal myopathy and reduced physical performance [6]. Several observational studies [7,8,9,10,11,12,13,14,15] on maternal vitamin D status and growth or body composition in offspring have been conducted. Low vitamin D concentrations were associated with lower birthweight [11]. Offspring exposed to higher maternal serum 25(OH)D concentrations had lower fat mass and higher bone mass during infancy [6]. In the most severe form of deficiency, infants born to mothers with vitamin D deficiency were at elevated risk of rickets [16,17,18]. While there are few observational studies relating postnatal muscle development to intrauterine 25(OH)D exposure, one study reported no association between the two in adulthood [19]. Another observational study concluded that prenatal vitamin D exposure may have a greater effect on muscle strength than on muscle mass in the development of offspring [6]. 
Considering the high prevalence of low vitamin D status during pregnancy and infancy [20,21,22,23], and the inconsistent results of the clinical trials [2,6], this systematic review and meta-analysis aimed to assess the effect of vitamin D supplementation in early life (pregnancy, lactation and infancy) on child growth, bone health, lean mass and adiposity. 2. Methods: We followed the guidelines for Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [24]. 2.1. Search Strategy: An electronic literature search of published studies was performed on PubMed, EMBASE and the Cochrane Library up to 31 December 2020. The systematic literature search was based on the following search strategy: controlled vocabulary (i.e., MeSH Terms: “Vitamin D” [MeSH], “body composition” [MeSH]) as well as specific text words (including “vitamin D”, “calciferol”, “supplementation”, “pregnancy”, “infancy”, “growth”, “body composition”, “bone”, “lean mass”, “fat mass”) were included and systematically combined (AND/OR), with English language abstracts available. The details of the search strategy are presented in Table S1. Only English language papers on human clinical trials were considered. The reference lists of relevant reviews and studies were screened for additional articles [3,25,26,27,28]. 2.2. Study Selection: Selected studies had to fulfill the following criteria to qualify for inclusion: (a) the study design is a randomized controlled trial (RCT) of vitamin D supplementation (800–5000 IU/day vs. placebo or standard care) or, in cases of co-intervention, with consistent additional supplements across treatment groups; (b) the study population is children; (c) the outcomes measured at least one of the following: bone mineral content (BMC), fat mass, lean mass, skinfold thickness, body mass index (BMI), body mass index z-score (BMIZ), weight for age z-score (WAZ), length for age z-score (LAZ) and head circumference for age z-score (HCAZ); (d) the study met the methodological quality assessment criteria for RCTs [29]. Studies were excluded if: (a) the outcome data were incomplete or impossible to compare with other studies; (b) there was no appropriate control group. Two authors (K. M. and W. G. B.) independently searched for and assessed the eligibility of the electronic literature by initially screening titles and abstracts. Full-length articles of potential studies to be included were then obtained and read to make final inclusion or exclusion decisions. In case of disagreement, a third reviewer (S. Q. W.) was consulted. 2.3. Quality Assessment: Using the Cochrane Risk Assessment Tool, we evaluated the methodological quality of each included clinical trial based on the following criteria: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other biases [29]. We assigned each of the abovementioned items as having either a low, high or unclear risk of bias for each eligible RCT. 2.4. Data Extraction and Synthesis: A data extraction form was used to collect the information of each individual clinical trial regarding the study characteristics: the first author’s last name, year of publication, country of origin, study design, total sample size, characteristics of participants, initiation of supplementation, interventions and outcomes. Data were extracted by two reviewers independently following a per-protocol analysis.
All statistical analyses were performed using Review Manager (version 5.3). For interventional studies with multiple experimental groups receiving varying amounts of vitamin D supplements, data were merged to form only one experimental group per study. All outcomes in this analysis had continuous data. The mean, standard deviation and number of participants for both the control and experimental group of each studied outcome were used to calculate the sample size weighted mean difference (MD). The point estimate was illustrated by forest plots for each study with a 95% confidence interval (CI). Heterogeneity was assessed by calculating the I squared (I2) statistic. Results were merged using a fixed effects model for I2 less than 50%, and a random effects model was applied when I2 reached 50% or more. A p-value of less than 0.05 was considered significant. 3. Results: 3.1. Study Selection: Our search strategy identified 1665 potential publications. After screening the titles and abstracts, we read 53 full articles, of which 12 studies were included in this systematic review [3,26,27,30,31,32,33,34,35,36,37,38]. The selection process of the relevant literature is summarized in the PRISMA Flow Diagram (Figure 1). 3.2. Characteristics of Included Trials: Twelve RCTs [3,26,27,30,31,32,33,34,35,36,37,38,39,40] involving a total of 4583 participants were included in this systematic review. Nine trials conducted vitamin D supplementation during pregnancy [3,26,27,30,31,33,34,36,40], and three trials performed vitamin D supplementation during infancy [30,34,38]. One study involved a subgroup of vitamin D supplementation in lactation [37]. One study [32] followed up children at 1 (Hazell 2014) [33] and 3 (Hazell 2017) [34] years of age. Six studies [3,26,36,37,38,39] were placebo-controlled; four [27,30,34,40] were comparisons between higher vs. lower doses of vitamin D, and the lowest-dose group (400 IU/day, part of standard care) served as the control; and two [33,36] involved control groups without supplements. 
All intervention groups were supplemented with cholecalciferol, except two studies [31,39] that used ergocalciferol supplementation. One study conducted vitamin D and calcium supplementation in all treatment groups, but the doses of vitamin D differed between the intervention groups and the control group (intervention: 60,000 IU/4 weeks or 60,000 IU/8 weeks; control: 400 IU/day) [27]. Three of the RCTs [32,34,35] were follow-up studies. Details of the characteristics of included studies are shown in Table 1. 3.3. Risk of Bias of Included Clinical Trials: Risk of bias of included clinical trials is presented in Table S2. Participation completion rates were especially low in the two RCT follow-up studies in infants, at 43.9% [38] and 66% [33], respectively. For random sequence generation, there were eight studies with low risk of bias and two studies in pregnancy [35,39] with unclear risk of bias. For allocation concealment, there were nine studies with low risk of bias, one study with high risk of bias [31] and one study [35] with unclear risk of bias. For blinding of participants and personnel, there were ten studies with low risk of bias and one study [33] with unclear risk of bias. For blinding of outcome assessment, there were eight studies with low risk of bias and three studies with unclear risk of bias. For incomplete outcome data, there were eight studies with low risk of bias and three studies with high risk of bias. For selective outcome data, all studies had a low risk of bias except for Brooke et al. [39], which had a high risk of bias. For other sources of bias, there was one study [31] with a high risk of bias; the rest had a low risk of bias.
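The pooling approach described in the Methods (per-study mean differences combined by inverse-variance weighting into a fixed-effect summary with a 95% CI, plus the I2 heterogeneity statistic that governs the fixed- vs random-effects choice) can be sketched as follows. This is a minimal illustration, not the Review Manager implementation; the function names and example numbers are hypothetical.

```python
import math

def study_md(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Mean difference and its variance for one trial (continuous outcome)."""
    md = mean_t - mean_c
    var = sd_t ** 2 / n_t + sd_c ** 2 / n_c
    return md, var

def fixed_effect_pool(studies):
    """Inverse-variance fixed-effect pooled MD, 95% CI and I^2 (percent).

    `studies` is a list of (mean_t, sd_t, n_t, mean_c, sd_c, n_c) tuples.
    """
    mds, weights = [], []
    for s in studies:
        md, var = study_md(*s)
        mds.append(md)
        weights.append(1.0 / var)  # weight = inverse of the MD variance
    w_sum = sum(weights)
    pooled = sum(w * md for w, md in zip(weights, mds)) / w_sum
    se = math.sqrt(1.0 / w_sum)
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q, then I^2 = max(0, (Q - df) / Q) * 100
    q = sum(w * (md - pooled) ** 2 for w, md in zip(weights, mds))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2
```

With I2 computed this way, a value of 50% or more would trigger the switch to a random-effects model described in the Methods.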
3.4. Bone Mineral Content (BMC): Whole-body BMC (g) was examined in five RCTs [3,26,27,31,33] involving 1444 and 1349 mother–newborn pairs. Dual-energy X-ray absorptiometry (DXA) was used in all of these studies to assess bone parameters. Bone health assessment was performed in the infants at one week, three weeks, between 12 and 16 months, and at 3–6 years. There was no association of vitamin D supplementation during pregnancy with whole-body BMC (g) in neonates (MD 1.09, 95% CI 0.64, 2.81; I2 = 0) or with BMC (g) in infants at 1 year of age (MD −19.38, 95% CI −60.55, 21.79; I2 = 73%) (Figure 2A). 3.5. Lean Mass (g) and Lean Mass Percentage (%): Lean mass (g) was reported in six RCTs [27,30,31,33,34,40] involving 631 participants. Lean mass was measured using DXA at approximately seven days, 6 months, between 12 and 16 months and at 36 months. Vitamin D supplementation was not associated with total lean mass in infants at ages 6 months (MD −18.42, 95% CI −586.29, 549.45; I2 = 81%), 1 year (MD −1.00, 95% CI −624.71, 622.71; I2 = 67%) and 3 years (MD 102.63, 95% CI −185.44, 390.71; I2 = 0%) (Figure 2B). The data could not be pooled when lean mass was not assessed at similar ages. Cooper et al. [26] observed that the lean mass of infants born to mothers assigned to cholecalciferol supplementation was not significantly different from that of infants of mothers in the placebo group. One study [33] showed that lean mass percentage in infants did not differ with vitamin D supplementation. 3.6.
Fat Mass (g) and Fat Mass Percentage (%): Fat mass (g) was reported in five RCTs [27,30,34,35,40] involving 621 participants. Fat mass was measured using DXA at approximately seven days, 6 months, 12–16 months and 3 years of age. Vitamin D supplementation was not associated with total body fat mass (g) in the infants at ages 6 months (MD −153.28, 95% CI −348.14 to 41.57; I2 = 0%), 1 year (MD −141.77, 95% CI −471.04 to 187.50; I2 = 0%) and 3 years (MD −53.47, 95% CI −256.90 to 149.95; I2 = 0%) (Figure 2C). The data could not be pooled when fat mass was not assessed at similar ages. Three RCTs involving 360 participants reported the outcome of fat mass percentage (%) at ages 1 year and 3–6 years. Vitamin D supplementation was not associated with fat mass percentage (%) in the infants at 1 year of age (MD −0.92, 95% CI −3.65, 1.81; I2 = 0). 3.7. Skinfold Thickness: Skinfold (triceps) thickness (mm) was assessed in three RCTs [35,38,39] with 555 participants. Outcomes were measured at birth in two RCTs [35,39] and between the ages of three and six years in the third study [38]. Meta-analysis could only be performed for the two RCTs that measured outcomes at birth, due to the age disparity of the outcome measurements in the third study. Neonates whose mothers had been supplemented with vitamin D had significantly higher skinfold thickness (mm) than those whose mothers had not (MD 0.33, 95% CI 0.12, 0.54). There was no significant heterogeneity (I2 = 34%) (Figure 3A). Trilok-Kumar et al. [38] reported no association between infancy supplementation of vitamin D and skinfold thicknesses. 3.8. Body Mass Index (BMI): Two RCTs [34,38] involving 999 participants reported the outcome of BMI (kg/m2). Vitamin D supplementation (vs. placebo or standard care) in infancy was associated with significantly lower BMI (kg/m2) between the ages of 3 and 6 years (MD −0.19, 95% CI −0.34, −0.04). Heterogeneity was not significant (I2 = 0%) (Figure 3B). 3.9.
Body Mass Index Z-Score (BMIZ): Four RCTs [31,34,38,40] involving 1674 participants reported the outcome of infant BMIZ. Offspring who had prenatal or postnatal vitamin D supplementation (vs. placebo or standard care) had a significantly lower BMIZ at three to six years old (MD −0.12; 95% CI −0.21, −0.04). No significant heterogeneity was detected (I2 = 0%) (Figure 3C). 3.10. Weight for Age Z-Score (WAZ) and Length for Age Z-Score (LAZ): WAZ was examined in six RCTs [27,31,32,34,35,37], and LAZ was examined in four RCTs [23,28,30,39], involving 2495 and 1196 participants, respectively. 
Both outcomes were assessed in children at ages one year [34], between 12 and 16 months [27], three years [32] and between three and six years [31,35]. Due to age differences, results were separately merged for outcomes examined at ages 12–18 months [27,34] and three to six years [32,35]. There was no significantly difference between the intervention group in the outcome WAZ in children at ages 1 year (MD −0.07; 95%CI −0.20 to 0.07) and 3–6 years (MD −0.06, 95% CI −0.18, 0.06). LAZ was higher in infants 1 year of age in the vitamin D supplementation group compared with the control group (MD 0.29, 95% CI 0.03, 0.54; I2 = 0%); however, there was no significant difference in LAZ in children at 3–6 years between the two groups (MD 0.04, 95%CI −0.08, 0.16; I2 = 0%). 3.11. Head Circumference for Age Z-Score (HCAZ) HCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95%CI −0.18, 0.42). There was no significant heterogeneity. HCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95%CI −0.18, 0.42). There was no significant heterogeneity. 3.1. Study Selection: Our search strategy identified 1665 potential publications. After screening the titles and abstracts, we read 53 full articles, of which 12 studies were included in this systematic review [3,26,27,30,31,32,33,34,35,36,37,38]. The selection process of the relevant literature is summarized in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Flow Diagram (Figure 1). 3.2. Characteristics of Included Trials: Twelve RCTs [3,26,27,30,31,32,33,34,35,36,37,38,39,40] involving a total of 4583 participants were included in this systematic review. Nine trials conducted vitamin D supplementation during pregnancy [3,26,27,30,31,33,34,36,40], and three trials performed vitamin D supplementation during infancy [30,34,38]. 
One study involved a subgroup of vitamin D supplementation during lactation [37]. One study [32] followed up children at 1 (Hazell 2014) [33] and 3 (Hazell 2017) [34] years of age. Six studies [3,26,36,37,38,39] were placebo-controlled; four [27,30,34,40] compared higher vs. lower doses of vitamin D, with the lowest-dose group (400 IU/day, part of standard care) serving as the control; and two [33,36] involved control groups without supplements. All intervention groups were supplemented with cholecalciferol, except two studies [31,39] that used ergocalciferol. One study provided vitamin D and calcium supplementation in all treatment groups, but the doses of vitamin D differed between the intervention groups and the control group (intervention: 60,000 IU/4 weeks or 60,000 IU/8 weeks; control: 400 IU/day) [27]. Three of the RCTs [32,34,35] were follow-up studies. Details of the characteristics of the included studies are shown in Table 1. 3.3. Risk of Bias of Included Clinical Trials: The risk of bias of the included clinical trials is presented in Table S2. Participation completion rates were especially low in the two RCT follow-up studies in infants, at 43.9% [38] and 66% [33], respectively. For random sequence generation, eight studies had a low risk of bias and two studies in pregnancy [35,39] had an unclear risk of bias. For allocation concealment, nine studies had a low risk of bias, one study [31] had a high risk of bias and one study [35] had an unclear risk of bias. For blinding of participants and personnel, ten studies had a low risk of bias and one study [33] had an unclear risk of bias. For blinding of outcome assessment, eight studies had a low risk of bias and three studies had an unclear risk of bias. For incomplete outcome data, eight studies had a low risk of bias and three studies had a high risk of bias.
For selective outcome reporting, all studies had a low risk of bias except for Brooke et al. [39], which had a high risk of bias. For other sources of bias, one study [31] had a high risk of bias; the rest had a low risk of bias. 3.4. Bone Mineral Content (BMC): Whole-body BMC (g) was examined in five RCTs [3,26,27,31,33] involving 1444 and 1349 mother–newborn pairs. Dual-energy X-ray absorptiometry (DXA) was used in all of these studies to assess bone parameters. Bone health assessment was performed in the infants at one week, three weeks, between 12 and 16 months, and at 3–6 years. There was no association between vitamin D supplementation during pregnancy and whole-body BMC (g) in neonates (MD 1.09, 95% CI 0.64, 2.81; I2 = 0%) or BMC (g) in infants at 1 year of age (MD −19.38, 95% CI −60.55, 21.79; I2 = 73%) (Figure 2A). 3.5. Lean Mass (g) and Lean Mass Percentage (%): Lean mass (g) was reported in six RCTs [27,30,31,33,34,40] involving 631 participants. Lean mass was measured using DXA at approximately seven days, 6 months, between 12 and 16 months and at 36 months. Vitamin D supplementation was not associated with total lean mass in infants at ages 6 months (MD −18.42, 95% CI −586.29, 549.45; I2 = 81%), 1 year (MD −1.00, 95% CI −624.71, 622.71; I2 = 67%) and 3 years (MD 102.63, 95% CI −185.44, 390.71; I2 = 0%) (Figure 2B). The data could not be pooled when lean mass was not assessed at similar ages. Cooper et al. [26] observed that the lean mass of infants born to mothers assigned to cholecalciferol supplementation was not significantly different from that of infants born to mothers in the placebo group. One study [33] showed that lean mass percentage in infants did not differ with vitamin D supplementation. 3.6. Fat Mass (g) and Fat Mass Percentage (%): Fat mass (g) was reported in five RCTs [27,30,34,35,40] involving 621 participants. Fat mass was measured using DXA at approximately seven days, 6 months, 12–16 months and 3 years of age.
Vitamin D supplementation was not associated with total body fat mass (g) in infants at ages 6 months (MD −153.28, 95% CI −348.14 to 41.57; I2 = 0%), 1 year (MD −141.77, 95% CI −471.04 to 187.50; I2 = 0%) and 3 years (MD −53.47, 95% CI −256.90 to 149.95; I2 = 0%) (Figure 2C). The data could not be pooled when fat mass was not assessed at similar ages. Three RCTs involving 360 participants reported the outcome of fat mass percentage (%) at ages 1 year and 3–6 years. Vitamin D supplementation was not associated with fat mass percentage (%) in infants at 1 year of age (MD −0.92, 95% CI −3.65, 1.81; I2 = 0%). 3.7. Skinfold Thickness: Skinfold (triceps) thickness (mm) was assessed in three RCTs [35,38,39] with 555 participants. Outcomes were measured at birth in two RCTs [35,39] and between the ages of three and six years in the third study [38]. Meta-analysis could only be performed for the two RCTs that measured outcomes at birth, owing to the disparity in age at outcome measurement in the third study. Neonates whose mothers had been supplemented with vitamin D had significantly higher skinfold thickness (mm) than those who had not (MD 0.33, 95% CI 0.12, 0.54). There was no significant heterogeneity (I2 = 34%) (Figure 3A). Trilok-Kumar et al. [38] reported no association between vitamin D supplementation in infancy and skinfold thicknesses. 3.8. Body Mass Index (BMI): Two RCTs [34,38] involving 999 participants reported the outcome of BMI (kg/m2). Vitamin D supplementation (vs. placebo or standard care) in infancy was associated with significantly lower BMI (kg/m2) between the ages of 3 and 6 years (MD −0.19, 95% CI −0.34, −0.04). Heterogeneity was not significant (I2 = 0%) (Figure 3B). 3.9. Body Mass Index Z-Score (BMIZ): Four RCTs [31,34,38,40] involving 1674 participants reported the outcome of infant BMIZ. Offspring who had prenatal or postnatal vitamin D supplementation (vs.
placebo or standard care) had a significantly lower BMIZ at three to six years old (MD −0.12; 95% CI −0.21, −0.04). No significant heterogeneity was detected (I2 = 0%) (Figure 3C). 3.10. Weight for Age Z-Score (WAZ) and Length for Age Z-Score (LAZ): WAZ was examined in six RCTs [27,31,32,34,35,37], and LAZ was examined in four RCTs [23,28,30,39], involving 2495 and 1196 participants, respectively. Both outcomes were assessed in children at ages one year [34], between 12 and 16 months [27], three years [32] and between three and six years [31,35]. Due to age differences, results were merged separately for outcomes examined at ages 12–18 months [27,34] and three to six years [32,35]. There was no significant difference between the intervention and control groups in WAZ in children at ages 1 year (MD −0.07; 95% CI −0.20 to 0.07) and 3–6 years (MD −0.06, 95% CI −0.18, 0.06). LAZ was higher in infants at 1 year of age in the vitamin D supplementation group compared with the control group (MD 0.29, 95% CI 0.03, 0.54; I2 = 0%); however, there was no significant difference in LAZ in children at 3–6 years between the two groups (MD 0.04, 95% CI −0.08, 0.16; I2 = 0%). 3.11. Head Circumference for Age Z-Score (HCAZ): HCAZ was measured in two RCTs [27,34] with 183 infants. No association was found between maternal vitamin D supplementation and HCAZ (MD 0.12, 95% CI −0.18, 0.42). There was no significant heterogeneity. 4. Discussion: 4.1. Statement of Main Findings This is the first systematic review and meta-analysis of the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on children’s body composition (bone health, lean mass and adiposity). We found that vitamin D supplementation during pregnancy was associated with higher skinfold thickness in neonates.
Vitamin D supplementation in early life was associated with a significantly higher length for age z-score in infants at 1 year of age, and with lower BMI and BMI z-score in offspring at 3 to 6 years of age. From current evidence, vitamin D supplementation during early life was not found to be associated with children’s BMC, lean mass (g, %), WAZ or HCAZ. We found that vitamin D supplementation during early life showed a consistent trend toward decreased body fat mass (g, %), although the 95% confidence intervals included the null effect. There was no heterogeneity (I2 = 0%) across studies. These null effects may be due to the small sample sizes in the included trials. Large, well-designed clinical trials are needed to confirm the above associations. 4.2.
Importance and Implications This systematic review adds to the existing literature by including a greater number of recent RCTs; it is the first systematic review and meta-analysis of RCTs on the effects of vitamin D supplementation during early life (during pregnancy, lactation or infancy) on children’s bone (whole-body BMC), muscle (lean mass and lean mass percentage), adiposity (skinfold thickness, fat mass and fat mass percentage) and growth (age- and sex-specific indicators: BMIZ, WAZ, LAZ and HCAZ). The results show that vitamin D supplementation in early life was associated with higher skinfold thickness in neonates, higher LAZ in infants and lower BMIZ in children at 3–6 years of age, suggesting that vitamin D in early life may play an important role in children’s adiposity development. This may have public health implications for the early intervention or prevention of childhood overweight/obesity and related cardiometabolic health issues. 4.3.
Comparison with Previous Studies There are several systematic reviews [1,11,25,28,41,42,43,44,45] on the effects of maternal vitamin D supplementation or status during pregnancy on maternal, neonatal or infant health outcomes. In contrast, we could not identify any meta-analysis examining the effects of vitamin D supplementation during early life on child body composition. One narrative review [46] described the relationship between vitamin D and BMD and found that it is inconsistent across studies; however, the authors did not perform a meta-analysis. While most studies [1,25,28,41,44,45] included anthropometric measures, such as birthweight, birth length and head circumference, none of them reported the respective sex-specific and age-specific z-scores. Harvey et al. published a comprehensive review of both observational studies and clinical trials on the role of vitamin D during pregnancy in perinatal outcomes (such as birthweight, birth length, head circumference, anthropometry and body composition, and low birthweight) [25]. Like our review, their study [25] showed that child BMC was not affected by vitamin D supplementation, and the results were inconsistent regarding skinfold thickness. However, our systematic review included more recent studies, and the meta-analysis was based only on RCTs. Another review, by Curtis et al., evaluated the link between prenatal vitamin D supplementation and child bone development, but lacked results on other body composition outcomes, such as fat and lean mass [2]. That review found that achieving a higher level of serum 25-hydroxyvitamin D [25(OH)D] in pregnancy might have beneficial effects on the bone development of offspring. However, there are not enough high-quality RCTs to assess this, and the timing of assessment varies among existing trials. The absence of an association between vitamin D and BMC in early childhood does not preclude an effect in adolescence and adulthood. Longer-term follow-up is needed.
Our previous meta-analysis showed that maternal low vitamin D status during pregnancy was associated with lower birthweight and higher weight at 9 months of age, indicating that prenatal vitamin D status was related to accelerated weight gain during infancy, which may be linked to increased adiposity in offspring [11]. Our other systematic review demonstrated that vitamin D supplementation during pregnancy increased birthweight and reduced the risk of small for gestational age [43]. However, these two studies did not examine the effect of vitamin D on bone health, lean mass or fat mass. Although four RCTs [31,32,35,36] in the current review showed that BMI and BMIZ were lower in participants who received prenatal or postnatal vitamin D supplementation, it is important to note that children were studied around the usual ages of BMI and adiposity rebound, which occur at approximately 4 and 6 years of age, respectively [35,47]. The long-term effects of vitamin D supplementation during early life on BMI and BMIZ are unclear. More high-quality RCTs are required to assess the link between vitamin D supplementation and lean mass in early life. 4.4. Mechanisms Vitamin D is important for the differentiation of mesenchymal stem cells into adipocytes. Early-life vitamin D adequacy promotes the maturation of preadipocytes into myocytes rather than mature adipocytes. A study in mice showed that offspring gestated on a vitamin D-deficient diet had larger visceral fat pads and greater susceptibility to high-fat diet-induced adipocyte hypertrophy [48]. Moreover, greater expression of the nuclear receptor peroxisome proliferator-activated receptor gamma (Pparg) was observed in the visceral adipose tissue of these mice. PPARG takes part in both adipogenesis and lipid storage [49,50,51]. While direct supplementation of vitamin D did not lead to a difference in lean mass between the control and experimental groups in this meta-analysis, Hazell et al. showed that higher vitamin D status correlates with a leaner body composition; infants with a plasma 25(OH)D3 concentration above 75 nmol/L did not differ in lean mass and fat mass compared with those below 75 nmol/L [37]. Previous work has shown that the biologically active form of vitamin D, 1,25(OH)2D, binds to vitamin D receptors to signal gene transcription and sensitize the Akt/mTOR pathway involved in protein synthesis [32,52]. 4.5. Strengths and Limitations Our systematic review has several strengths. It is the first systematic review and meta-analysis of randomized controlled trials to assess the effectiveness of vitamin D supplementation during early life (pregnancy and/or infancy) on body composition. The risk of bias in the RCTs was evaluated to ensure the quality of the included studies. This study also has some limitations. First, we included eleven RCTs of vitamin D supplementation in early life on children’s body composition; the outcome measures differed considerably across individual studies, and therefore, for each outcome, only a few RCTs were available. Second, outcome assessment was performed in children at different ages, which made pooling the data impossible for certain outcomes. The baseline vitamin D status, timing and dose of vitamin D supplementation administered during pregnancy or infancy also differed across studies. There was a lack of data on visceral vs. subcutaneous adiposity. Moreover, most trials had no information on compliance with vitamin D supplementation. Finally, small sample sizes and loss to follow-up were additional limiting factors. 5. Conclusions: This systematic review of randomized clinical trials suggests that vitamin D supplementation during pregnancy is associated with higher skinfold thickness in neonates.
Vitamin D supplementation during pregnancy or infancy is associated with lower BMI and BMI z-score in offspring at 3 to 6 years of age. Based on currently published clinical trials, vitamin D supplementation in early life is not observed to be associated with bone, lean or fat mass measured by DXA. Future large, well-designed, double-blinded RCTs are needed to assess the effectiveness of vitamin D supplementation in early life on children’s bone health, lean mass and adiposity.
Background: Vitamin D deficiency during pregnancy or infancy is associated with adverse growth in children. No systematic review has been conducted to summarize available evidence on the effect of vitamin D supplementation in pregnancy and infancy on growth and body composition in children. Methods: A systematic review and meta-analysis were performed on the effects of vitamin D supplementation during early life on children's growth and body composition (bone, lean and fat). A literature search of randomized controlled trials (RCTs) was conducted to identify relevant studies on the effects of vitamin D supplementation during pregnancy and infancy on children's body composition (bone, lean and fat) in PubMed, EMBASE and Cochrane Library from inception to 31 December 2020. A Cochrane Risk Assessment Tool was used for quality assessment. The comparison was vitamin D supplementation vs. placebo or standard care. Random-effects and fixed-effect meta-analyses were conducted. The effects are presented as mean differences (MDs) or risk ratios (RRs) with 95% confidence intervals (CIs). Results: A total of 3960 participants from eleven randomized controlled trials were eligible for inclusion. Vitamin D supplementation during pregnancy was associated with higher triceps skinfold thickness (mm) (MD 0.33, 95% CI, 0.12, 0.54; I2 = 34%) in neonates. Vitamin D supplementation during pregnancy or infancy was associated with significantly increased length for age z-score in infants at 1 year of age (MD 0.29, 95% CI, 0.03, 0.54; I2 = 0%), and was associated with lower body mass index (BMI) (kg/m2) (MD -0.19, 95% CI -0.34, -0.04; I2 = 0%) and body mass index z-score (BMIZ) (MD -0.12, 95% CI -0.21, -0.04; I2 = 0%) in offspring at 3-6 years of age. Vitamin D supplementation during early life was not observed to be associated with children's bone, lean or fat mass. Conclusions: Vitamin D supplementation during pregnancy or infancy may be associated with reduced adiposity in childhood. 
Further large clinical trials of the effects of vitamin D supplementation on childhood body composition are warranted.
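The pooled estimates above (MDs with 95% CIs and I2 values) are the standard outputs of an inverse-variance meta-analysis. As an illustrative sketch only, the code below pools per-study mean differences with a fixed-effect model and computes Cochran's Q and I2; the study values used are hypothetical and are not taken from the included RCTs.

```python
import math

def fixed_effect_meta(mds, ses):
    """Inverse-variance fixed-effect pooling of mean differences.

    Returns (pooled MD, 95% CI tuple, I^2 as a percentage)."""
    weights = [1.0 / se ** 2 for se in ses]        # inverse-variance weights
    w_sum = sum(weights)
    pooled = sum(w * md for w, md in zip(weights, mds)) / w_sum
    se_pooled = math.sqrt(1.0 / w_sum)
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # Cochran's Q and I^2 quantify between-study heterogeneity
    q = sum(w * (md - pooled) ** 2 for w, md in zip(weights, mds))
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical per-study mean differences (e.g., BMI in kg/m^2) and standard errors
mds = [0.2, 0.4]
ses = [0.1, 0.1]
pooled, ci, i2 = fixed_effect_meta(mds, ses)
print(round(pooled, 3), tuple(round(x, 3) for x in ci), round(i2, 1))
```

A random-effects model (as also used in the review) additionally inflates the within-study variances by an estimate of the between-study variance before weighting.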
1. Introduction: There is growing interest in the association between early life vitamin D status and children’s growth, bone health, adiposity and muscle development. It is widely accepted that vitamin D plays a critical role in bone health by maintaining calcium homeostasis [1]. This function becomes especially important during pregnancy, when the developing fetus is entirely dependent on the mother for the accretion of roughly 30 g of calcium for skeletal purposes [2,3]. In addition to its calcium metabolic functions, mixed evidence suggests that infant adiposity and lean mass are in part determined by vitamin D status [2]. Vitamin D may also play a role in maintaining normal glucose homeostasis during pregnancy, thus preventing fetal macrosomia and excess deposition of subcutaneous fat [4]. Vitamin D receptors have been isolated in skeletal muscle tissue [5], and low vitamin D concentration is associated with proximal myopathy and reduced physical performance [6]. Several observational studies [7,8,9,10,11,12,13,14,15] on maternal vitamin D status and growth or body composition in offspring have been conducted. Low vitamin D concentrations were associated with lower birthweight [11]. Offspring exposed to higher maternal serum 25(OH)D concentrations had lower fat mass and higher bone mass during infancy [6]. In its most severe form, infants born to mothers with vitamin D deficiency were at elevated risk of rickets [16,17,18]. While there are few observational studies relating postnatal muscle development to intrauterine 25(OH)D exposure, one study reported no association between the two in adulthood [19]. Another observational study concluded that prenatal vitamin D exposure may have a greater effect on muscle strength than on muscle mass in the development of offspring [6].
Considering the high prevalence of low vitamin D status during pregnancy and infancy [20,21,22,23], and the inconsistent results of the clinical trials [2,6], this systematic review and meta-analysis aimed to assess the effect of vitamin D supplementation in early life (pregnancy, lactation and infancy) on child growth, bone health, lean mass and adiposity.
12,176
424
[ 166, 261, 78, 232, 66, 247, 245, 138, 186, 200, 148, 75, 70, 207, 40, 217, 175, 560, 230, 196 ]
25
[ "vitamin", "supplementation", "mass", "vitamin supplementation", "studies", "study", "rcts", "age", "risk", "95" ]
[ "role vitamin pregnancy", "vitamin status growth", "vitamin supplementation pregnancy", "adiposity found vitamin", "maternal low vitamin" ]
[CONTENT] Vitamin D | pregnancy | infancy | randomized controlled trials | childhood | body composition | adiposity [SUMMARY]
[CONTENT] Adiposity | Bias | Body Composition | Body Height | Body Mass Index | Body Weight | Bone Density | Confidence Intervals | Female | Growth | Humans | Infant | Infant, Newborn | Odds Ratio | Placebos | Pregnancy | Randomized Controlled Trials as Topic | Skinfold Thickness | Vitamin D | Vitamins [SUMMARY]
[CONTENT] role vitamin pregnancy | vitamin status growth | vitamin supplementation pregnancy | adiposity found vitamin | maternal low vitamin [SUMMARY]
[CONTENT] vitamin | supplementation | mass | vitamin supplementation | studies | study | rcts | age | risk | 95 [SUMMARY]
[CONTENT] muscle | vitamin | status | vitamin status | observational | low vitamin | calcium | mass | development | growth [SUMMARY]
[CONTENT] following | study | data | search | studies | mass | mesh | score | criteria | experimental [SUMMARY]
[CONTENT] bias | risk bias | risk | 34 | md | 95 ci | 95 | ci | studies | mass [SUMMARY]
[CONTENT] associated | vitamin supplementation | supplementation | vitamin | vitamin supplementation early | supplementation early | supplementation early life | vitamin supplementation early life | vitamin supplementation pregnancy | bmi [SUMMARY]
[CONTENT] vitamin | mass | supplementation | vitamin supplementation | studies | rcts | study | 95 | md | 95 ci [SUMMARY]
[CONTENT] Vitamin ||| [SUMMARY]
[CONTENT] ||| PubMed | EMBASE | Cochrane Library | 31 December 2020 ||| ||| ||| ||| 95% [SUMMARY]
[CONTENT] 3960 | eleven ||| Vitamin D | mm | MD | 0.33 | 95% | CI | 0.12 | 0.54 | I2 | 34% ||| Vitamin D | 1 year of age | MD 0.29 | 95% | CI | 0.03 | 0.54 | I2 | 0% | BMI | kg/m2 | MD -0.19 | 95% | CI | -0.04 | I2 | 0% | BMIZ | MD | 95% | CI | -0.04 | I2 | 0% | 3-6 years of age ||| Vitamin D [SUMMARY]
[CONTENT] Vitamin D ||| [SUMMARY]
[CONTENT] Vitamin ||| ||| ||| PubMed | EMBASE | Cochrane Library | 31 December 2020 ||| ||| ||| ||| 95% ||| 3960 | eleven ||| Vitamin D | mm | MD | 0.33 | 95% | CI | 0.12 | 0.54 | I2 | 34% ||| Vitamin D | 1 year of age | MD 0.29 | 95% | CI | 0.03 | 0.54 | I2 | 0% | BMI | kg/m2 | MD -0.19 | 95% | CI | -0.04 | I2 | 0% | BMIZ | MD | 95% | CI | -0.04 | I2 | 0% | 3-6 years of age ||| Vitamin D ||| ||| [SUMMARY]
Needle tract seeding in renal tumor biopsies: experience from a single institution.
33993889
Percutaneous needle biopsy of renal masses has been increasingly utilized to aid the diagnosis and guide management. It is generally considered as a safe procedure. However, tumor seeding along the needle tract, one of the complications, theoretically poses potential risk of tumor spread by seeded malignant cells. Prior studies on the frequency of needle tract seeding in renal tumor biopsies are limited and clinical significance of biopsy-associated tumor seeding remains largely controversial.
BACKGROUND
Here we investigated the frequencies of biopsy needle tract tumor seeding at our institution by reviewing the histology of renal cell carcinoma nephrectomy specimens with a prior biopsy within the last seventeen years. Biopsy site changes were recognized as a combination of foreign body reaction, hemosiderin deposition, fibrosis and fat necrosis. The histologic evidence of needle tract tumor seeding was identified as clusters of tumor cells embedded in perinephric tissue spatially associated with the biopsy site. In addition, association between parameters of biopsy techniques and tumor seeding were investigated.
METHODS
We observed needle tract tumor seeding to perinephric tissue in six out of ninety-eight (6 %) renal cell carcinoma cases including clear cell renal cell carcinoma, papillary renal cell carcinoma, chromophobe, and clear cell papillary renal cell carcinoma. The needle tract tumor seeding was exclusively observed in papillary renal cell carcinomas (6/28, 21 %) that were unifocal, small-sized (≤ 4 cm), confined to the kidney and had type 1 features. No recurrence or metastasis was observed in the papillary renal cell carcinoma cases with tumor seeding or the stage-matched cases without tumor seeding.
RESULTS
Our study demonstrated a higher than reported frequency of needle tract tumor seeding. Effective communication between pathologists and clinicians as well as documentation of tumor seeding is recommended. Further studies with a larger patient cohort and longer follow up to evaluate the impact of needle tract tumor seeding on long term prognosis are needed. This may also help reach a consensus on appropriate pathologic staging of renal cell carcinoma when the only site of perinephric fat invasion is within a biopsy needle tract.
CONCLUSIONS
[ "Adult", "Aged", "Biopsy, Large-Core Needle", "Carcinoma, Renal Cell", "Female", "Humans", "Kidney Neoplasms", "Male", "Middle Aged", "Neoplasm Seeding", "Retrospective Studies" ]
8127231
Introduction
The indications for and demand for percutaneous needle biopsy of renal tumors have been expanding with rapid advances in medical imaging technology and treatment modalities [1]. Historically, renal mass biopsy (RMB) was limited to differentiating renal cell carcinoma (RCC) from other differential diagnoses, including benign tumors, metastatic disease, infection or lymphoma. Nowadays, in contrast, it is increasingly considered for risk stratification of renal cell carcinoma as well as for guiding treatment strategies. The 2020 National Comprehensive Cancer Network (NCCN) guidelines recommend RMB of small lesions for diagnosis and for stratifying patients to active surveillance, cryosurgery or radiofrequency ablation therapy [2]. The American Urological Association (AUA) guideline also emphasizes that an RMB should be performed prior to ablation therapy to provide a pathologic diagnosis and guide subsequent surveillance [3]. In addition, RMB has been found highly accurate for the diagnosis of malignancy and the histologic determination of RCC subtypes in several systematic analyses [3–6]. Percutaneous needle core biopsy of renal tumors has generally been considered a safe procedure. Complications other than hematoma are rare; these include tumor seeding along the needle tract, arteriovenous fistula formation, infection and pneumothorax [7–9]. In particular, tumor seeding along the biopsy needle tract is always a safety issue to consider in biopsy procedures or fine needle aspiration of mass lesions in various tissues, as it poses a potential risk of iatrogenic local tumor spread by seeded malignant cells and possible subsequent cancer recurrence or dissemination [1]. Historically, the rate of needle tract tumor seeding in renal biopsy was estimated to be as low as 0.01 % by Smith in 1991, and by Herts and Baker in 1995 [9, 10]. To date, only a handful of case reports and a small case series have been published [8, 11–16].
However, the frequency and clinical significance of biopsy-associated tumor seeding remain largely controversial, due at least in part to the lack of systematic review, histological analysis and follow up data in early studies [9, 10, 17]. In recent years, a few studies revisited the phenomenon of tumor seeding along the core needle biopsy tract in renal cell carcinomas and challenged the previously acknowledged rarity of needle tract tumor seeding following renal tumor biopsy. For example, one case series reported a 1.2 % overall incidence of tumor seeding [16]. In a study of more than 20,000 patients with clinical T1a RCC, the upstaging rate was 2.1 % for patients with a prior history of RMB, compared with 1.1 % in patients without prior RMB, although that study provided no histological evidence associating perinephric fat invasion with a prior biopsy site [18]. Considering the potentially significant impact of tumor seeding, we retrospectively assessed the histologic evidence of tumor seeding along the biopsy needle tract in resection specimens of renal cell carcinomas at our institution and examined the correlation between tumor seeding and histologic subtypes of renal cell carcinoma.
Results
Tumor seeding is exclusively observed in PRCC Patients’ demographics and essential pathologic features are summarized in Table 1. The average ages of patients at diagnosis were similar among PRCC, CCRCC and CCPRCC. Patients with ChRCC were relatively younger. The proportions of cases in various pathologic stages (without considering the effect of needle tract tumor seeding) were comparable between CCRCC and PRCC, with pathological T1a stage in more than half the cases of PRCCs and CCRCCs. Table 1Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsyPapillary renal cell carcinomaClear cell renal cell carcinomaClear cell papillary renal cell carcinomaChromophobe renal cell carcinomaNumber of cases286334Age (mean ± SD)62.5 ± 11.561.1 ± 12.060.0 ± 5.644.5 ± 6.5Sex (Male/Female)22/636/272/11/3Tumor stagingapT1a173532pT1b41601pT2a2101pT3a51000pT3b0100Nucleolar grade (ISUP)116Not applicableNot applicable211383718401Lymphovascular invasion1400Lymph node involvement1100Distant metastasis0100a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition for Tumor Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsy a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition for Tumor Needle tract tumor seeding within the perinephric adipose tissue was identified in 6 out of 98 (6 %) renal cell carcinoma cases. This was exclusively observed in PRCC (6/28, 21 %), with type 1 features, unifocal, small-sized (≤ 4 cm), and confined to the kidney (Table 2). Histology of the representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with biopsy needle tract were documented in the pathology reports. 
Three of these four cases, otherwise pT1a, were upstaged to pT3a due to needle tract tumor seeding. The majority (5/6) of the tumors showed low nucleolar grade (grade 1 or 2). Post-operative follow-up period of the six cases with tumor seeding ranged from 1 month to 52 months with a median of 10.5 months. One patient died of complications from stroke one month following nephrectomy, and one patient was lost for follow up 13 months post nephrectomy. Post-operative follow-up period of the comparable 9 pT1a low grade PRCC cases without tumor seeding ranged from 7 month to 130 months with a median of 36 months; one patient was lost to follow up. No recurrence or metastasis were identified in any of the pT1a PRCC cases, with or without tumor seeding. With regard to the pT3a PRCC cases, one out of the 5 patients developed metastatic lesions in multiple retroperitoneal lymph nodes at 7 months after radical nephrectomy. However, the primary PRCC in this patient had type 2 features, was of high nucleolar grade, and exhibited lymphovascular invasion and lymph node involvement at the time of nephrectomy. The other four patients (one with type 2 features and high nucleolar grade; one with type 1 features and high nucleolar grade; the other two with type 1 features and low nucleolar grade) showed no evidence of local recurrence or metastases during the follow-up period ranging from 24 to 51 months. 
Table 2Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tractCaseAge (years)SexRCC subtypeSurgical procedureTumor stageaTumor size (cm)ISUP nucleolar gradeComplete samplingFollow up159FPRCC, type 1Partial nephrectomypT1a2.31No1 month, died of stroke255MPRCC, type 1Partial nephrectomypT1a2.12No11 months, no recurrence370FPRCC, type 1Partial nephrectomypT1a1.71Yes52 months, no recurrence448MPRCC, type 1Partial nephrectomypT1a1.51No6 months, no recurrence562MPRCC, type 1Partial nephrectomypT1a1.22Yes10 months, no recurrence646MPRCC, type 1Partial nephrectomypT1a3.83No13 months, no recurrence; lost to follow upa Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into considerationAbbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinomaFig. 1Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. 
(a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes and tumor seeding to the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead) and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) Low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tract a Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into consideration Abbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinoma Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. 
(a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes and tumor seeding to the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead) and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) Low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue Patients’ demographics and essential pathologic features are summarized in Table 1. The average ages of patients at diagnosis were similar among PRCC, CCRCC and CCPRCC. Patients with ChRCC were relatively younger. The proportions of cases in various pathologic stages (without considering the effect of needle tract tumor seeding) were comparable between CCRCC and PRCC, with pathological T1a stage in more than half the cases of PRCCs and CCRCCs. 
Table 1Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsyPapillary renal cell carcinomaClear cell renal cell carcinomaClear cell papillary renal cell carcinomaChromophobe renal cell carcinomaNumber of cases286334Age (mean ± SD)62.5 ± 11.561.1 ± 12.060.0 ± 5.644.5 ± 6.5Sex (Male/Female)22/636/272/11/3Tumor stagingapT1a173532pT1b41601pT2a2101pT3a51000pT3b0100Nucleolar grade (ISUP)116Not applicableNot applicable211383718401Lymphovascular invasion1400Lymph node involvement1100Distant metastasis0100a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition for Tumor Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsy a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition for Tumor Needle tract tumor seeding within the perinephric adipose tissue was identified in 6 out of 98 (6 %) renal cell carcinoma cases. This was exclusively observed in PRCC (6/28, 21 %), with type 1 features, unifocal, small-sized (≤ 4 cm), and confined to the kidney (Table 2). Histology of the representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with biopsy needle tract were documented in the pathology reports. Three of these four cases, otherwise pT1a, were upstaged to pT3a due to needle tract tumor seeding. The majority (5/6) of the tumors showed low nucleolar grade (grade 1 or 2). Post-operative follow-up period of the six cases with tumor seeding ranged from 1 month to 52 months with a median of 10.5 months. One patient died of complications from stroke one month following nephrectomy, and one patient was lost for follow up 13 months post nephrectomy. 
Post-operative follow-up period of the comparable 9 pT1a low grade PRCC cases without tumor seeding ranged from 7 month to 130 months with a median of 36 months; one patient was lost to follow up. No recurrence or metastasis were identified in any of the pT1a PRCC cases, with or without tumor seeding. With regard to the pT3a PRCC cases, one out of the 5 patients developed metastatic lesions in multiple retroperitoneal lymph nodes at 7 months after radical nephrectomy. However, the primary PRCC in this patient had type 2 features, was of high nucleolar grade, and exhibited lymphovascular invasion and lymph node involvement at the time of nephrectomy. The other four patients (one with type 2 features and high nucleolar grade; one with type 1 features and high nucleolar grade; the other two with type 1 features and low nucleolar grade) showed no evidence of local recurrence or metastases during the follow-up period ranging from 24 to 51 months. Table 2Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tractCaseAge (years)SexRCC subtypeSurgical procedureTumor stageaTumor size (cm)ISUP nucleolar gradeComplete samplingFollow up159FPRCC, type 1Partial nephrectomypT1a2.31No1 month, died of stroke255MPRCC, type 1Partial nephrectomypT1a2.12No11 months, no recurrence370FPRCC, type 1Partial nephrectomypT1a1.71Yes52 months, no recurrence448MPRCC, type 1Partial nephrectomypT1a1.51No6 months, no recurrence562MPRCC, type 1Partial nephrectomypT1a1.22Yes10 months, no recurrence646MPRCC, type 1Partial nephrectomypT1a3.83No13 months, no recurrence; lost to follow upa Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into considerationAbbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinomaFig. 1Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. 
(a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes and tumor seeding to the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead) and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) Low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tract a Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into consideration Abbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinoma Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. 
Potential impact of specimen sampling and biopsy techniques on biopsy site identification and tumor seeding

To further evaluate whether the observed differences in needle tract tumor seeding rates among the histological types of RCC were confounded by the extent of specimen sampling, we examined biopsy site identification with respect to the approach to specimen sampling and the surgical procedure (Table 3). Interestingly, the biopsy site was identified in 9 of the 28 PRCC cases, and tumor seeding along the biopsy needle tract was seen in six of these. In contrast, the biopsy site was present in only 2 CCRCC cases, neither of which showed tumor seeding. Because of the small sample sizes, no statistical analysis of the correlation between biopsy site identification and tumor seeding was performed.
Table 3. Summary of biopsy site identification and tumor seeding

Subtype | Procedure / Sampling | n (%) | Biopsy site, n (%) | Tumor seeding, n (%)
PRCC | Partial nephrectomy | 19 (68%) | 9 (32%) | 6 (21%)
PRCC | Radical nephrectomy | 9 (32%) | 0 (0%) | 0 (0%)
PRCC | Entire sampling(a) | 7 (25%) | 4 (14%) | 2 (7%)
PRCC | Incomplete sampling(b) | 21 (75%) | 5 (18%) | 4 (14%)
CCRCC | Partial nephrectomy | 43 (68%) | 0 (0%) | 0 (0%)
CCRCC | Radical nephrectomy | 20 (32%) | 2 (3%) | 0 (0%)
CCRCC | Entire sampling(a) | 9 (14%) | 0 (0%) | 0 (0%)
CCRCC | Incomplete sampling(b) | 54 (85%) | 2 (3%) | 0 (0%)
CCPRCC | Partial nephrectomy | 1 (33%) | 0 (0%) | 0 (0%)
CCPRCC | Radical nephrectomy | 2 (67%) | 0 (0%) | 0 (0%)
CCPRCC | Entire sampling(a) | 1 (17%) | 0 (0%) | 0 (0%)
CCPRCC | Incomplete sampling(b) | 2 (67%) | 0 (0%) | 0 (0%)
ChRCC | Partial nephrectomy | 4 (100%) | 0 (0%) | 0 (0%)
ChRCC | Radical nephrectomy | 0 (0%) | 0 (0%) | 0 (0%)
ChRCC | Entire sampling(a) | 0 (0%) | 0 (0%) | 0 (0%)
ChRCC | Incomplete sampling(b) | 4 (100%) | 0 (0%) | 0 (0%)

(a) Specimens entirely submitted for microscopic examination.
(b) Specimens with representative sections submitted for microscopic examination.
Abbreviations: PRCC, papillary renal cell carcinoma; CCRCC, clear cell renal cell carcinoma; CCPRCC, clear cell papillary renal cell carcinoma; ChRCC, chromophobe renal cell carcinoma.

Of all 98 cases, 17 tumors were entirely submitted for microscopic evaluation, while the other 81 were incompletely sampled. Biopsy sites were identified in 23% of cases with complete tumor sampling and 9% of cases with incomplete tumor sampling. Although the difference in the frequency of identifying biopsy site changes was not statistically significant (p = 0.08), thorough specimen sampling appeared to increase the chance of identifying these changes. The extent of sampling of the perinephric fat would be directly relevant to the likelihood of identifying the biopsy site; however, data on the extent of perinephric fat sampling were not available from the gross descriptions for most cases.
Whether a suspicious biopsy tract site was identified was not mentioned in any of the cases. Regarding the surgical approach, 67 cases were partial nephrectomies and 31 were radical nephrectomies. Biopsy sites were identified in 13% of partial nephrectomy cases and 6% of radical nephrectomy cases. We also evaluated whether renal tumor biopsy technique affected the frequency of tumor seeding in PRCCs by comparing several biopsy parameters between pT1a cases with and without tumor seeding (Table 4). The parameters examined are those considered in the literature to potentially affect the risk of biopsy tract tumor seeding: biopsy needle size (a smaller diameter is associated with a lower frequency of seeding), use of a coaxial sheath (associated with a lower chance of seeding), and the number of passes (controversial, but multiple passes are generally associated with a higher risk of seeding) [20, 21]. Biopsy procedure information was available for all 6 cases with tumor seeding and for 8 cases without. Among these cases, there were no significant differences in biopsy needle size (p = 0.78), use of a coaxial sheath (p = 0.53), or number of passes (p = 0.69) between cases with and without tumor seeding.
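The sampling comparison above can be reproduced with a two-sided Fisher's exact test, as named in the methods. The sketch below is illustrative only: the cell counts (biopsy site in 4 of 17 completely sampled versus 7 of 81 incompletely sampled tumors) are inferred from the reported percentages and Table 3, and the implementation uses only the Python standard library rather than a statistics package.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, row1)

    def p_table(x):
        # Probability of x in the top-left cell, given fixed margins.
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Inferred counts: 4/17 complete-sampling vs 7/81 incomplete-sampling cases
# with an identified biopsy site. The result is not significant at the 0.05
# level, in line with the reported p = 0.08 (the exact value depends on the
# underlying counts and the test variant used).
p = fisher_exact_two_sided(4, 13, 7, 74)
print(round(p, 3))
```

The same function could be reused for the other 2x2 comparisons in this section (coaxial sheath use, needle gauge grouped as 18G vs 20G), though the paper does not state which test variant produced each reported p-value.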
Table 4. Renal tumor biopsy techniques in pT1a papillary renal cell carcinoma cases with and without tumor seeding

Case | Biopsy needle size (gauge) | Coaxial sheath technique | Number of passes | Tumor seeding
1 | 18 | No | 2 | Yes
2 | 18 | Yes | 4 | Yes
3 | 18 | Yes | 3 | Yes
4 | 18 | Yes | 5 | Yes
5 | 18 | Yes | 6 | Yes
6 | 18 | No | 2 | Yes
7 | 18 | Yes | 3 | No
8 | 18 | Yes | 3 | No
9 | 20 | Yes | 6 | No
10 | 18 | No | 4 | No
11 | 20 | No | 6 | No
12 | 18 | No | 3 | No
13 | 18 | No | 3 | No
14 | 20 | Yes | 4 | No
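The per-case data in Table 4 can be summarized by seeding status with a short script. This is an illustrative sketch: the rows are transcribed directly from the table, and the summary (group size, mean gauge, coaxial count, mean passes) simply restates the raw data behind the non-significant comparisons reported above.

```python
# Each row: (needle gauge, coaxial sheath used, number of passes, tumor seeding),
# transcribed from Table 4.
cases = [
    (18, False, 2, True), (18, True, 4, True), (18, True, 3, True),
    (18, True, 5, True), (18, True, 6, True), (18, False, 2, True),
    (18, True, 3, False), (18, True, 3, False), (20, True, 6, False),
    (18, False, 4, False), (20, False, 6, False), (18, False, 3, False),
    (18, False, 3, False), (20, True, 4, False),
]

def summarize(rows):
    """Return (n, mean gauge, number of coaxial cases, mean passes)."""
    n = len(rows)
    mean_gauge = sum(r[0] for r in rows) / n
    coaxial = sum(r[1] for r in rows)
    mean_passes = sum(r[2] for r in rows) / n
    return n, mean_gauge, coaxial, mean_passes

seeded = [r for r in cases if r[3]]
not_seeded = [r for r in cases if not r[3]]
# All six seeded cases used an 18G needle; the groups differ little in
# coaxial use or number of passes, consistent with the reported p-values.
print(summarize(seeded))
print(summarize(not_seeded))
```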
Tumor seeding is exclusively observed in PRCC

Patients' demographics and essential pathologic features are summarized in Table 1. The average age at diagnosis was similar among patients with PRCC, CCRCC and CCPRCC; patients with ChRCC were relatively younger. The proportions of cases in the various pathologic stages (without considering the effect of needle tract tumor seeding) were comparable between CCRCC and PRCC, with pathological stage T1a in more than half of the PRCC and CCRCC cases.

Table 1. Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsy

Feature | PRCC | CCRCC | CCPRCC | ChRCC
Number of cases | 28 | 63 | 3 | 4
Age (mean ± SD) | 62.5 ± 11.5 | 61.1 ± 12.0 | 60.0 ± 5.6 | 44.5 ± 6.5
Sex (Male/Female) | 22/6 | 36/27 | 2/1 | 1/3
Tumor staging(a): pT1a | 17 | 35 | 3 | 2
Tumor staging(a): pT1b | 4 | 16 | 0 | 1
Tumor staging(a): pT2a | 2 | 1 | 0 | 1
Tumor staging(a): pT3a | 5 | 10 | 0 | 0
Tumor staging(a): pT3b | 0 | 1 | 0 | 0
Nucleolar grade (ISUP): 1 | 1 | 6 | Not applicable | Not applicable
Nucleolar grade (ISUP): 2 | 11 | 38 | - | -
Nucleolar grade (ISUP): 3 | 7 | 18 | - | -
Nucleolar grade (ISUP): 4 | 0 | 1 | - | -
Lymphovascular invasion | 1 | 4 | 0 | 0
Lymph node involvement | 1 | 1 | 0 | 0
Distant metastasis | 0 | 1 | 0 | 0

(a) The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition.

Needle tract tumor seeding within the perinephric adipose tissue was identified in 6 of the 98 (6%) renal cell carcinoma cases. It was observed exclusively in PRCC (6/28, 21%), in tumors with type 1 features that were unifocal, small (≤ 4 cm), and confined to the kidney (Table 2). Histology of representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with the biopsy needle tract was documented in the pathology report.
Three of these four cases, otherwise pT1a, were upstaged to pT3a because of needle tract tumor seeding. The majority (5/6) of the tumors were of low nucleolar grade (grade 1 or 2). Post-operative follow-up of the six cases with tumor seeding ranged from 1 to 52 months, with a median of 10.5 months. One patient died of complications of a stroke one month after nephrectomy, and one patient was lost to follow-up 13 months after nephrectomy.
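The upstaging decision can be illustrated with a deliberately simplified sketch of the AJCC 8th edition pT rules for renal tumors. This covers only the criteria relevant here (tumor size and perinephric fat involvement); other pT3/pT4 criteria (renal vein invasion, sinus fat invasion, extension beyond Gerota's fascia) are omitted, so it is an illustration, not a staging tool.

```python
def simplified_pt_stage(size_cm, perinephric_fat_involved):
    """Simplified pT assignment for kidney tumors (AJCC 8th edition),
    using only tumor size and perinephric fat involvement. Other pT3/pT4
    criteria are intentionally omitted for brevity."""
    if perinephric_fat_involved:
        return "pT3a"
    if size_cm <= 4:
        return "pT1a"
    if size_cm <= 7:
        return "pT1b"
    if size_cm <= 10:
        return "pT2a"
    return "pT2b"

# A 2.3 cm tumor confined to the kidney is pT1a; if seeded tumor cells in
# the perinephric fat are counted as fat involvement, the same tumor is
# reported as pT3a - the upstaging dilemma described in the text.
print(simplified_pt_stage(2.3, False))  # pT1a
print(simplified_pt_stage(2.3, True))   # pT3a
```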
Introduction

The indications for, and demand for, percutaneous needle biopsy of renal tumors have been expanding with rapid advances in medical imaging technology and treatment modalities [1]. Historically, renal mass biopsy (RMB) was limited to differentiating renal cell carcinoma (RCC) from other entities in the differential diagnosis, including benign tumors, metastatic disease, infection and lymphoma. Nowadays, it is increasingly used for risk stratification of renal cell carcinoma and for guiding treatment strategies. The 2020 National Comprehensive Cancer Network (NCCN) guidelines recommend RMB of small lesions for diagnosis and for stratification to active surveillance, cryosurgery or radiofrequency ablation therapy [2]. The American Urological Association (AUA) guideline also emphasizes that an RMB should be performed prior to ablation therapy to provide a pathologic diagnosis and guide subsequent surveillance [3]. In addition, several systematic analyses have found RMB to be highly accurate for the diagnosis of malignancy and the histologic determination of RCC subtype [3–6]. Percutaneous needle core biopsy of renal tumors has generally been considered a safe procedure; complications other than hematoma are rare and include tumor seeding along the needle tract, arteriovenous fistula formation, infection and pneumothorax [7–9]. In particular, tumor seeding along the biopsy needle tract is a safety concern in biopsy or fine needle aspiration of mass lesions in various tissues, as it poses a potential risk of iatrogenic local tumor spread by seeded malignant cells, with possible subsequent cancer recurrence or dissemination [1]. Historically, the rate of needle tract tumor seeding in renal biopsy was estimated to be as low as 0.01% by Smith in 1991 and by Herts and Baker in 1995 [9, 10]. To date, only a handful of case reports and a small case series have been published [8, 11–16].
However, the frequency and clinical significance of biopsy-associated tumor seeding remain controversial, at least in part because early studies lacked systematic review, histological analysis and follow-up data [9, 10, 17]. In recent years, a few studies have revisited the phenomenon of tumor seeding along the core needle biopsy tract in renal cell carcinoma and challenged the previously accepted rarity of needle tract tumor seeding after renal tumor biopsy. For example, one case series reported a 1.2% overall incidence of tumor seeding [16]. In a study of more than 20,000 patients with clinical T1a RCC, the upstaging rate was 2.1% for patients with a prior history of RMB, compared with 1.1% in patients without, although that study provided no histological evidence associating perinephric fat invasion with a prior biopsy site [18].

Considering the potentially significant impact of tumor seeding, we retrospectively assessed the histologic evidence of tumor seeding along the biopsy needle tract in resection specimens of renal cell carcinoma at our institution and examined the correlation between tumor seeding and histologic subtype of renal cell carcinoma.

Materials and methods

Our institution's pathology database was searched for cases diagnosed as renal cell carcinoma on biopsy with subsequent nephrectomy from January 2003 to April 2020. To identify patients who underwent both biopsy and nephrectomy, all available medical records and pathology reports were reviewed. A total of 116 patients were identified, including 62 with papillary renal cell carcinoma (PRCC), 71 with clear cell renal cell carcinoma (CCRCC), 4 with clear cell papillary renal cell carcinoma (CCPRCC), and 6 with chromophobe renal cell carcinoma (ChRCC). Of these, the nephrectomy slides of 28 PRCC, 63 CCRCC, 3 CCPRCC, and 4 ChRCC cases (98 cases in total) were available for review.
All slides were retrospectively reviewed by two of the authors (LB and YZ) to assess biopsy site changes and needle tract tumor seeding. The histologic evidence of biopsy site changes on resection specimens comprised a combination of foreign body reaction, hemosiderin deposition, fibrosis, fat necrosis, and the presence of absorbable gelatin [19]. Biopsy needle tract tumor seeding was identified as clusters of tumor cells embedded in perinephric tissue spatially associated with the biopsy site changes described above. Fisher's exact test or the t-test was used to compare the rates of biopsy site identification, as well as to compare biopsy technique parameters between cases with and without tumor seeding.
edition for Tumor\nNeedle tract tumor seeding within the perinephric adipose tissue was identified in 6 out of 98 (6 %) renal cell carcinoma cases. This was exclusively observed in PRCC (6/28, 21 %), with type 1 features, unifocal, small-sized (≤ 4 cm), and confined to the kidney (Table 2). Histology of the representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with biopsy needle tract were documented in the pathology reports. Three of these four cases, otherwise pT1a, were upstaged to pT3a due to needle tract tumor seeding. The majority (5/6) of the tumors showed low nucleolar grade (grade 1 or 2). Post-operative follow-up period of the six cases with tumor seeding ranged from 1 month to 52 months with a median of 10.5 months. One patient died of complications from stroke one month following nephrectomy, and one patient was lost for follow up 13 months post nephrectomy. Post-operative follow-up period of the comparable 9 pT1a low grade PRCC cases without tumor seeding ranged from 7 month to 130 months with a median of 36 months; one patient was lost to follow up. No recurrence or metastasis were identified in any of the pT1a PRCC cases, with or without tumor seeding. With regard to the pT3a PRCC cases, one out of the 5 patients developed metastatic lesions in multiple retroperitoneal lymph nodes at 7 months after radical nephrectomy. However, the primary PRCC in this patient had type 2 features, was of high nucleolar grade, and exhibited lymphovascular invasion and lymph node involvement at the time of nephrectomy. 
The other four patients (one with type 2 features and high nucleolar grade; one with type 1 features and high nucleolar grade; the other two with type 1 features and low nucleolar grade) showed no evidence of local recurrence or metastases during the follow-up period ranging from 24 to 51 months.\nTable 2Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tractCaseAge (years)SexRCC subtypeSurgical procedureTumor stageaTumor size (cm)ISUP nucleolar gradeComplete samplingFollow up159FPRCC, type 1Partial nephrectomypT1a2.31No1 month, died of stroke255MPRCC, type 1Partial nephrectomypT1a2.12No11 months, no recurrence370FPRCC, type 1Partial nephrectomypT1a1.71Yes52 months, no recurrence448MPRCC, type 1Partial nephrectomypT1a1.51No6 months, no recurrence562MPRCC, type 1Partial nephrectomypT1a1.22Yes10 months, no recurrence646MPRCC, type 1Partial nephrectomypT1a3.83No13 months, no recurrence; lost to follow upa Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into considerationAbbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinomaFig. 1Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. 
(a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes and tumor seeding to the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead) and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) Low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue\nSummary of clinicopathologic features of cases with tumor seeding along the biopsy needle tract\na Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into consideration\nAbbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinoma\nRepresentative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. 
(a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes and tumor seeding to the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead) and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) Low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) High power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue\nPatients’ demographics and essential pathologic features are summarized in Table 1. The average ages of patients at diagnosis were similar among PRCC, CCRCC and CCPRCC. Patients with ChRCC were relatively younger. 
The proportions of cases in various pathologic stages (without considering the effect of needle tract tumor seeding) were comparable between CCRCC and PRCC, with pathological stage T1a in more than half of the PRCCs and CCRCCs.

Table 1. Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsy

                          PRCC         CCRCC        CCPRCC          ChRCC
Number of cases           28           63           3               4
Age (mean ± SD)           62.5 ± 11.5  61.1 ± 12.0  60.0 ± 5.6      44.5 ± 6.5
Sex (Male/Female)         22/6         36/27        2/1             1/3
Tumor staging^a
  pT1a                    17           35           3               2
  pT1b                    4            16           0               1
  pT2a                    2            1            0               1
  pT3a                    5            10           0               0
  pT3b                    0            1            0               0
Nucleolar grade (ISUP)                              Not applicable  Not applicable
  Grade 1                 1            6
  Grade 2                 11           38
  Grade 3                 7            18
  Grade 4                 0            1
Lymphovascular invasion   1            4            0               0
Lymph node involvement    1            1            0               0
Distant metastasis        0            1            0               0

^a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition, for Tumor. PRCC, papillary renal cell carcinoma; CCRCC, clear cell renal cell carcinoma; CCPRCC, clear cell papillary renal cell carcinoma; ChRCC, chromophobe renal cell carcinoma.

Needle tract tumor seeding within the perinephric adipose tissue was identified in 6 of 98 (6 %) renal cell carcinoma cases. This was exclusively observed in PRCC (6/28, 21 %); all six tumors had type 1 features and were unifocal, small (≤ 4 cm), and confined to the kidney (Table 2). Histology of representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with the biopsy needle tract was documented in the pathology reports. Three of these four cases, otherwise pT1a, were upstaged to pT3a due to needle tract tumor seeding. The majority (5/6) of the tumors showed low nucleolar grade (grade 1 or 2).
The post-operative follow-up period of the six cases with tumor seeding ranged from 1 to 52 months, with a median of 10.5 months. One patient died of complications from a stroke one month after nephrectomy, and one patient was lost to follow-up 13 months after nephrectomy. The post-operative follow-up period of the comparable 9 pT1a low-grade PRCC cases without tumor seeding ranged from 7 to 130 months, with a median of 36 months; one patient was lost to follow-up. No recurrence or metastasis was identified in any of the pT1a PRCC cases, with or without tumor seeding. With regard to the pT3a PRCC cases, one of the 5 patients developed metastatic lesions in multiple retroperitoneal lymph nodes 7 months after radical nephrectomy. However, the primary PRCC in this patient had type 2 features, was of high nucleolar grade, and exhibited lymphovascular invasion and lymph node involvement at the time of nephrectomy. The other four patients (one with type 2 features and high nucleolar grade; one with type 1 features and high nucleolar grade; the other two with type 1 features and low nucleolar grade) showed no evidence of local recurrence or metastasis during follow-up periods ranging from 24 to 51 months.

Table 2. Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tract

Case  Age  Sex  RCC subtype   Surgical procedure   Tumor stage^a  Size (cm)  ISUP grade  Complete sampling  Follow-up
1     59   F    PRCC, type 1  Partial nephrectomy  pT1a           2.3        1           No                 1 month, died of stroke
2     55   M    PRCC, type 1  Partial nephrectomy  pT1a           2.1        2           No                 11 months, no recurrence
3     70   F    PRCC, type 1  Partial nephrectomy  pT1a           1.7        1           Yes                52 months, no recurrence
4     48   M    PRCC, type 1  Partial nephrectomy  pT1a           1.5        1           No                 6 months, no recurrence
5     62   M    PRCC, type 1  Partial nephrectomy  pT1a           1.2        2           Yes                10 months, no recurrence
6     46   M    PRCC, type 1  Partial nephrectomy  pT1a           3.8        3           No                 13 months, no recurrence; lost to follow-up

^a Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into consideration. Abbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinoma.

Fig. 1. Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site. (a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes, and tumor seeding to the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing a combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead) and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) high power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) high power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue.
Potential impact of specimen sampling and biopsy techniques on biopsy site identification and tumor seeding

To further evaluate whether the observed differences in needle tract tumor seeding rates among histological types of RCC were confounded by the extent of specimen sampling, we examined the identification of the biopsy site with regard to the approach to specimen sampling and the surgical procedure (Table 3). Interestingly, the biopsy site was identified in 9 of the 28 PRCC cases, and tumor seeding along the biopsy needle tract was seen in six cases. In contrast, the biopsy site was present in 2 CCRCC cases; neither showed tumor seeding.
Due to small sample sizes, no statistical analysis of the correlation between biopsy site identification and tumor seeding was performed.

Table 3. Summary of biopsy site and tumor seeding

Papillary renal cell carcinoma (PRCC), n = 28
  Partial nephrectomy: 19 (68 %); biopsy site identified, 9 (32 %); tumor seeding, 6 (21 %)
  Radical nephrectomy: 9 (32 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Entire sampling^a: 7 (25 %); biopsy site identified, 4 (14 %); tumor seeding, 2 (7 %)
  Incomplete sampling^b: 21 (75 %); biopsy site identified, 5 (18 %); tumor seeding, 4 (14 %)

Clear cell renal cell carcinoma (CCRCC), n = 63
  Partial nephrectomy: 43 (68 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Radical nephrectomy: 20 (32 %); biopsy site identified, 2 (3 %); tumor seeding, 0 (0 %)
  Entire sampling^a: 9 (14 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Incomplete sampling^b: 54 (85 %); biopsy site identified, 2 (3 %); tumor seeding, 0 (0 %)

Clear cell papillary renal cell carcinoma (CCPRCC), n = 3
  Partial nephrectomy: 1 (33 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Radical nephrectomy: 2 (67 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Entire sampling^a: 1 (17 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Incomplete sampling^b: 2 (67 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)

Chromophobe renal cell carcinoma (ChRCC), n = 4
  Partial nephrectomy: 4 (100 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)
  Radical nephrectomy: 0 (0 %)
  Entire sampling^a: 0 (0 %)
  Incomplete sampling^b: 4 (100 %); biopsy site identified, 0 (0 %); tumor seeding, 0 (0 %)

^a Specimens entirely submitted for microscopic examination.
^b Specimens with representative sections submitted for microscopic examination.

Of all 98 cases, 17 tumors were entirely submitted for microscopic evaluation, while the other 81 were incompletely sampled. Biopsy sites were identified in 23 % of cases with complete tumor sampling and 9 % of cases with incomplete tumor sampling. Although the difference in the frequency of identifying biopsy site changes was not statistically significant (p = 0.08), thorough specimen sampling appeared to lead to a higher chance of identifying these changes. The extent of sampling of the perinephric fat would be directly relevant to the likelihood of identifying the biopsy site.
However, data on the extent of perinephric fat sampling were not available from the gross descriptions for most cases, and none of the gross descriptions mentioned whether a suspicious biopsy tract site was identified. Regarding the surgical approach, 67 cases were partial nephrectomies and 31 were radical nephrectomies. Biopsy sites were identified in 13 % of partial nephrectomy cases and 6 % of radical nephrectomy cases.

We also evaluated whether renal tumor biopsy technique had any effect on the frequency of tumor seeding in PRCCs by comparing several biopsy parameters between pT1a cases with and without tumor seeding (Table 4). The parameters examined were those considered in the literature to potentially affect the risk of biopsy tract tumor seeding: biopsy needle size (a smaller diameter is associated with a lower frequency of tumor seeding), use of a coaxial sheath (associated with a lower chance of tumor seeding), and the number of passes (controversial, but multiple passes are generally associated with a higher risk of tumor seeding) [20, 21]. Biopsy procedure information was available for all 6 cases with tumor seeding and for 8 cases without tumor seeding.
Among these cases, there were no significant differences in biopsy needle size (p = 0.78), use of the coaxial sheath technique (p = 0.53), or number of passes (p = 0.69) between cases with and without tumor seeding.

Table 4. Renal tumor biopsy techniques in pT1a papillary renal cell carcinoma cases with and without tumor seeding

Case  Needle size (gauge)  Coaxial sheath  Number of passes  Tumor seeding
1     18                   No              2                 Yes
2     18                   Yes             4                 Yes
3     18                   Yes             3                 Yes
4     18                   Yes             5                 Yes
5     18                   Yes             6                 Yes
6     18                   No              2                 Yes
7     18                   Yes             3                 No
8     18                   Yes             3                 No
9     20                   Yes             6                 No
10    18                   No              4                 No
11    20                   No              6                 No
12    18                   No              3                 No
13    18                   No              3                 No
14    20                   Yes             4                 No

Discussion

Our present study is one of the largest case series from a single institution to date evaluating the incidence of biopsy needle tract tumor seeding confirmed by histological examination. It is also the first study investigating the differential frequencies of tumor seeding among specific histologic subtypes of RCC. In our cohort, tumor seeding within the perinephric adipose tissue along the biopsy needle tract was observed in 6 % (6/98) of all RCC resection cases, but exclusively among patients with PRCC (6/28, 21 %). The overall tumor seeding rate previously reported in the literature ranges from 0.01 % [9, 10] to 1.2 % [16]. The lower tumor seeding rates reported in studies from the 1990s were estimated from responses to questionnaires at multiple institutions [9, 10]. Although the total number of biopsies in those studies was large (more than 10,000 biopsies of abdominal masses, including renal masses), no standardized protocols for the detection of tumor seeding were described, likely resulting in underestimation of its frequency. Microscopically, we observed clear histologic evidence of biopsy tract changes with intermingled tumor cell clusters, discontinuous from the main tumor, in all six cases that we interpreted as needle tract tumor seeding.
However, according to a recent multi-institutional survey, there is a lack of specific histologic criteria for interpreting needle tract tumor seeding, and interobserver variability exists [22]; this could contribute to the variable frequencies of needle tract tumor seeding reported.

To date, a total of 25 cases of tumor seeding along the percutaneous renal mass biopsy tract have been reported, in several case reports and one case series [8, 11–16]. Of these, PRCC (15 cases) was the most commonly encountered pathologic subtype. Other histologic subtypes included 3 CCRCC, 4 renal cell carcinomas (subtypes not specified), 1 oncocytoma, 1 urothelial carcinoma of the kidney, and 1 “angiomyoliposarcoma”. Although this phenomenon was observed across several renal tumor types, a predilection for tumor seeding was identified in PRCC compared to the other types.

We did not observe biopsy needle tract tumor seeding in CCRCC in our case series, although rare cases of tumor seeding in CCRCC have been published previously [8, 12]. Biopsy tract seeding was not observed in CCPRCC or ChRCC either, but the sample sizes for these two subtypes were small. We evaluated several factors possibly affecting the detection of tumor seeding in the resection specimens. Although the tumors along with perinephric tissue were sampled per CAP (College of American Pathologists) protocols, not all were entirely sampled (especially the larger tumors). Our data suggest that gross sampling might influence microscopic identification of biopsy site changes. Moreover, careful gross examination of the nephrectomy specimen for scarring, fat necrosis, hemorrhage and fibrosis in the perinephric fat, or hemorrhagic foci on the capsular surface, followed by more diligent sampling of such areas, if present, might help identify the needle tract and thus allow more efficient evaluation for potential tumor seeding [19].
In our series, retrospective review found no macroscopic descriptions of a suspected biopsy tract in any case, suggesting there was no targeted sampling of the biopsy tract. However, the drastic difference in the tumor seeding rate between CCRCC and PRCC is not readily explained by tumor sampling alone. A few theories have been proposed in the literature to explain the higher frequency of needle tract tumor seeding in PRCC. Some studies observed that PRCC tends to exhibit an incomplete or absent peritumoral pseudo-capsule more frequently than CCRCC, facilitating tumor invasion into the perinephric fat [23]. Other hypotheses include the friable nature of the tumor facilitating tumor cell adherence to the needle, a higher frequency of exophytic growth allowing tumor seeding more often into the extrarenal space, and a possibly higher chance of tumor cell survival when explanted into the needle tract [17]. Nevertheless, the exact reasons for the differences in the frequency of biopsy needle tract tumor seeding among the various pathologic types of renal tumors need further investigation.

Pathologic staging of renal cell carcinoma is an essential prognostic factor and guides patient management, especially surveillance following surgery. Localized pT1a or pT1b renal cell carcinomas are considered low-risk disease, with a recurrence risk of 1–8 %. For these patients, abdominal imaging is recommended annually for 3 years. In contrast, patients with localized T2 or higher disease are considered at moderate to high risk of recurrence (30–78 %).
A more intensive surveillance protocol is warranted, with abdominal imaging (CT or MRI) recommended at 3- to 6-month intervals for the first 3 years, then annually through the fifth year [24].

To date, there is no evidence-based standard protocol among pathologists on whether upstaging is justified solely on the basis of perinephric tumor seeding along the biopsy needle tract. In prior reports, one case of PRCC with tumor foci involving the perinephric fat was initially staged as pT3a [14]. However, after confirmation that the tumor foci represented seeding of the prior biopsy tract within the perinephric fat, the final stage was revised from pT3a to pT1a, indicating that the authors did not consider perinephric tumor seeding along the biopsy tract to be true cancer invasion [14]. In contrast, in a seven-case series with tumor seeding along the biopsy needle tract involving perinephric fat, six of seven tumors (PRCC and CCRCC) were upstaged to pT3a solely because of biopsy tract seeding, although they would otherwise have been staged as pT1a [16]. Understanding the biological behavior of tumor cells spread along the biopsy tract is fundamental to appropriate cancer staging. It is questionable whether passive displacement of potentially indolent tumor cells to a location that would theoretically necessitate upstaging is equivalent to a genetically aggressive counterpart that actively invades the perirenal tissue. For example, a review of tumor seeding following breast needle biopsy found that the incidence of detecting tumor seeding declines as the interval between biopsy and surgery lengthens, suggesting reduced viability of the seeded tumor cells [25]. On the other hand, it could be argued that increased access to lymphatic structures and blood vessels in the perirenal tissue by seeded tumor may play a more important role. Thus far, basic mechanistic studies and long-term clinical follow-up data are sparse.
It is unclear whether and how the pathologic features of the original tumor and the microenvironment of the seeding site affect tumor regrowth. It may also be technically challenging to determine the causal relationship between later recurrence/metastasis and prior tumor seeding. The follow-up data from our series of PRCC suggest a similarly low risk of recurrence in patients with low-grade pT1a disease with and without tumor seeding in the perinephric adipose tissue. Studies with larger patient cohorts and longer follow-up are needed for a more definitive prognostic assessment. Despite these uncertainties, it is documented that two (CCRCC and RCC not specified) of the 25 previously published cases with perinephric biopsy tract tumor seeding showed local cancer recurrence associated with the prior biopsy site. Moreover, seven cases (RCC not specified, CCRCC, PRCC, and oncocytoma) exhibited extrarenal subcutaneous or retroperitoneal tumor nodules histologically consistent with the original renal tumors. Therefore, thorough and diligent gross and microscopic examination for biopsy site changes and signs of tumor seeding is recommended, especially in small tumors, whose management and/or follow-up may differ significantly depending on whether the tumor is confined to the kidney. Effective communication between pathologists and clinicians and precise documentation of tumor seeding are essential to facilitate appropriate follow-up and patient management.\nThere are several limitations to our study. First, owing to its retrospective nature, we could not ascertain whether evidence of needle tract changes was diligently sought and adequately sampled during grossing. Second, the lengths of follow-up for the six cases with biopsy tract tumor seeding were relatively short, limiting long-term evaluation of prognosis.
Third, the numbers of CCPRCC and ChRCC cases were relatively small, limiting the study of tumor seeding in these two subtypes.\nTumor seeding along the biopsy needle tract in patients with RCC warrants increased attention because of its higher frequency than previously documented and its potential impact on patient management. Future studies on a larger scale and with longer follow-up to evaluate the association between needle tract tumor seeding and prognosis are warranted." ]
[ "introduction", "materials|methods", "results", null, null, "discussion" ]
[ "Biopsy needle tract", "Tumor seeding", "Renal cell carcinoma", "Papillary renal cell carcinoma", "Clear cell renal cell carcinoma" ]
Introduction: The indications for, and demand for, percutaneous needle biopsy of renal tumors have been expanding with rapid advances in medical imaging technology and treatment modalities [1]. Historically, renal mass biopsy (RMB) was limited to differentiating renal cell carcinoma (RCC) from other diagnoses, including benign tumors, metastatic disease, infection, and lymphoma. Nowadays, in contrast, it is increasingly considered for risk stratification of renal cell carcinoma, as well as for guiding treatment strategies. The 2020 National Comprehensive Cancer Network (NCCN) guidelines recommend RMB of small lesions for diagnosis and for stratification to active surveillance, cryosurgery, or radiofrequency ablation therapy [2]. The American Urological Association (AUA) guideline also emphasizes that an RMB should be performed prior to ablation therapy to provide a pathologic diagnosis and guide subsequent surveillance [3]. In addition, RMB has been found highly accurate for the diagnosis of malignancy and histologic determination of RCC subtype in several systematic analyses [3–6]. Percutaneous needle core biopsy of renal tumors has generally been considered a safe procedure. Complications other than hematoma are rare; these include tumor seeding along the needle tract, arteriovenous fistula formation, infection, and pneumothorax [7–9]. In particular, tumor seeding along the biopsy needle tract is always a safety concern in biopsy or fine needle aspiration of mass lesions in various tissues, as it poses a potential risk of iatrogenic local tumor spread by seeded malignant cells and possible subsequent cancer recurrence or dissemination [1]. Historically, the rate of needle tract tumor seeding in renal biopsy was estimated to be as low as 0.01 % by Smith in 1991, and by Herts and Baker in 1995 [9, 10]. To date, only a handful of case reports and small case series have been published [8, 11–16].
However, the frequency and clinical significance of biopsy-associated tumor seeding remain largely controversial, at least in part because of the lack of systematic review, histological analysis, and follow-up data in early studies [9, 10, 17]. In recent years, a few studies revisited the phenomenon of tumor seeding along the core needle biopsy tract in renal cell carcinomas and challenged the previously acknowledged rarity of needle tract tumor seeding following renal tumor biopsy. For example, one case series reported a 1.2 % overall incidence of tumor seeding [16]. In a study of more than 20,000 patients with clinical T1a RCC, the upstaging rate was 2.1 % for patients with a prior history of RMB, compared with 1.1 % for patients without prior RMB, although that study provided no histological evidence associating perinephric fat invasion with a prior biopsy site [18]. Considering the potentially significant impact of tumor seeding, we retrospectively assessed the histologic evidence of tumor seeding along the biopsy needle tract in resection specimens of renal cell carcinoma at our institution and examined the correlation between tumor seeding and histologic subtype. Materials and methods: Our institution's pathology database was searched for cases diagnosed as renal cell carcinoma on biopsy with subsequent nephrectomy from January 2003 to April 2020. To identify patients who underwent both biopsy and nephrectomy, all available medical records and pathology reports were reviewed. A total of 116 patients were identified, including 62 with papillary renal cell carcinoma (PRCC), 71 with clear cell renal cell carcinoma (CCRCC), 4 with clear cell papillary renal cell carcinoma (CCPRCC), and 6 with chromophobe renal cell carcinoma (ChRCC). Of these, the nephrectomy slides of 28 PRCC, 63 CCRCC, 3 CCPRCC, and 4 ChRCC cases (98 cases in total) were available for review.
All slides were retrospectively reviewed by two of the authors (LB and YZ) to assess for biopsy site changes and needle tract tumor seeding. The histologic evidence of biopsy site changes on resection specimens comprised a combination of foreign body reaction, hemosiderin deposition, fibrosis, fat necrosis, and presence of absorbable gelatin [19]. Biopsy needle tract tumor seeding was identified as clusters of tumor cells embedded in perinephric tissue spatially associated with the above-described biopsy site changes. Fisher's exact test or the t-test was used to compare the rates of biopsy site identification, as well as to compare the parameters of biopsy technique between cases with and without tumor seeding. Results: Tumor seeding is exclusively observed in PRCC: Patients' demographics and essential pathologic features are summarized in Table 1. The average ages at diagnosis were similar among patients with PRCC, CCRCC, and CCPRCC; patients with ChRCC were relatively younger. The proportions of cases in the various pathologic stages (without considering the effect of needle tract tumor seeding) were comparable between CCRCC and PRCC, with pathologic stage T1a in more than half the cases of both PRCC and CCRCC.
Table 1. Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsy

                           PRCC          CCRCC         CCPRCC        ChRCC
Number of cases            28            63            3             4
Age, years (mean ± SD)     62.5 ± 11.5   61.1 ± 12.0   60.0 ± 5.6    44.5 ± 6.5
Sex (male/female)          22/6          36/27         2/1           1/3
Tumor staging^a
  pT1a                     17            35            3             2
  pT1b                     4             16            0             1
  pT2a                     2             1             0             1
  pT3a                     5             10            0             0
  pT3b                     0             1             0             0
Nucleolar grade (ISUP)                                 n/a           n/a
  Grade 1                  1             6
  Grade 2                  11            38
  Grade 3                  7             18
  Grade 4                  0             1
Lymphovascular invasion    1             4             0             0
Lymph node involvement     1             1             0             0
Distant metastasis         0             1             0             0
^a American Joint Committee on Cancer (AJCC) cancer staging, 8th edition, for tumor (T). PRCC, papillary renal cell carcinoma; CCRCC, clear cell renal cell carcinoma; CCPRCC, clear cell papillary renal cell carcinoma; ChRCC, chromophobe renal cell carcinoma.

Needle tract tumor seeding within the perinephric adipose tissue was identified in 6 of the 98 (6 %) renal cell carcinoma cases. It was observed exclusively in PRCC (6/28, 21 %); all six tumors had type 1 features and were unifocal, small (≤ 4 cm), and confined to the kidney (Table 2). Histology of representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with the biopsy needle tract was documented in the pathology reports. Three of these four cases, otherwise pT1a, were upstaged to pT3a because of needle tract tumor seeding. The majority (5/6) of the tumors showed low nucleolar grade (grade 1 or 2). The post-operative follow-up period of the six cases with tumor seeding ranged from 1 to 52 months (median, 10.5 months). One patient died of complications of stroke one month after nephrectomy, and one patient was lost to follow-up 13 months after nephrectomy.
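The subtype-specific rates above (seeding in 6 of 28 PRCC versus 0 of 63 CCRCC) can be checked with a two-sided Fisher's exact test. The following is a minimal, standard-library-only sketch; the helper name and the tie tolerance are ours, and the original analysis may have used dedicated statistical software:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same row and
    column totals whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # probability of observing x in the top-left cell with margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # small tolerance so equally probable tables are counted despite rounding
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Seeding counts from this series: PRCC 6/28 versus CCRCC 0/63
p = fisher_exact_two_sided(6, 22, 0, 63)
```

On these counts, p falls well below 0.01, in keeping with the observation that seeding was confined to PRCC in this series.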
The post-operative follow-up period of the comparable 9 pT1a low-grade PRCC cases without tumor seeding ranged from 7 to 130 months (median, 36 months); one patient was lost to follow-up. No recurrence or metastasis was identified in any of the pT1a PRCC cases, with or without tumor seeding. With regard to the pT3a PRCC cases, one of the five patients developed metastatic lesions in multiple retroperitoneal lymph nodes 7 months after radical nephrectomy. However, the primary PRCC in this patient had type 2 features, was of high nucleolar grade, and exhibited lymphovascular invasion and lymph node involvement at the time of nephrectomy. The other four patients (one with type 2 features and high nucleolar grade; one with type 1 features and high nucleolar grade; the other two with type 1 features and low nucleolar grade) showed no evidence of local recurrence or metastasis during follow-up periods ranging from 24 to 51 months.

Table 2. Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tract

Case  Age  Sex  RCC subtype    Surgical procedure   Stage^a  Size (cm)  ISUP grade  Complete sampling  Follow-up
1     59   F    PRCC, type 1   Partial nephrectomy  pT1a     2.3        1           No                 1 month, died of stroke
2     55   M    PRCC, type 1   Partial nephrectomy  pT1a     2.1        2           No                 11 months, no recurrence
3     70   F    PRCC, type 1   Partial nephrectomy  pT1a     1.7        1           Yes                52 months, no recurrence
4     48   M    PRCC, type 1   Partial nephrectomy  pT1a     1.5        1           No                 6 months, no recurrence
5     62   M    PRCC, type 1   Partial nephrectomy  pT1a     1.2        2           Yes                10 months, no recurrence
6     46   M    PRCC, type 1   Partial nephrectomy  pT1a     3.8        3           No                 13 months, no recurrence; lost to follow-up
^a Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into consideration. Abbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinoma.

Fig. 1. Representative images of two cases (a-c and d-f, respectively) demonstrating the histologic evidence of biopsy site changes and tumor seeding along the biopsy site.
(a) Low-power view showing the primary PRCC, perinephric tissue with biopsy site changes, and tumor seeding into the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing a combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead), and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) high-power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) low-power view showing the primary PRCC confined within the capsule, and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) high-power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue.
Potential impact of specimen sampling and biopsy techniques on biopsy site identification and tumor seeding: To further evaluate whether the observed differences in needle tract tumor seeding rates among the histological types of RCC were confounded by the extent of specimen sampling, we examined the identification of the biopsy site with respect to the approach to specimen sampling and the surgical procedure (Table 3). Interestingly, the biopsy site was identified in 9 of the 28 PRCC cases, and tumor seeding along the biopsy needle tract was seen in six of them. In contrast, the biopsy site was present in only 2 CCRCC cases, neither of which showed tumor seeding. Because of the small sample sizes, no statistical analysis of the correlation between biopsy site identification and tumor seeding was performed.
Table 3. Summary of biopsy site identification and tumor seeding

                       PRCC                          CCRCC                         CCPRCC                       ChRCC
                       n (%)     Site     Seeding    n (%)     Site    Seeding    n (%)    Site    Seeding    n (%)     Site    Seeding
Partial nephrectomy    19 (68%)  9 (32%)  6 (21%)    43 (68%)  0 (0%)  0 (0%)     1 (33%)  0 (0%)  0 (0%)     4 (100%)  0 (0%)  0 (0%)
Radical nephrectomy    9 (32%)   0 (0%)   0 (0%)     20 (32%)  2 (3%)  0 (0%)     2 (67%)  0 (0%)  0 (0%)     0 (0%)    0 (0%)  0 (0%)
Entire sampling^a      7 (25%)   4 (14%)  2 (7%)     9 (14%)   0 (0%)  0 (0%)     1 (17%)  0 (0%)  0 (0%)     0 (0%)    0 (0%)  0 (0%)
Incomplete sampling^b  21 (75%)  5 (18%)  4 (14%)    54 (85%)  2 (3%)  0 (0%)     2 (67%)  0 (0%)  0 (0%)     4 (100%)  0 (0%)  0 (0%)
^a Specimens entirely submitted for microscopic examination. ^b Specimens with representative sections submitted for microscopic examination. Abbreviations: PRCC, papillary renal cell carcinoma; CCRCC, clear cell renal cell carcinoma; CCPRCC, clear cell papillary renal cell carcinoma; ChRCC, chromophobe renal cell carcinoma.

Of all 98 cases, 17 tumors were entirely submitted for microscopic evaluation, while the other 81 were incompletely sampled. Biopsy sites were identified in 23 % of cases with complete tumor sampling and 9 % of cases with incomplete sampling. Although the difference in the frequency of identifying biopsy site changes was not statistically significant (p = 0.08), thorough specimen sampling appeared to increase the chance of identifying these changes. The extent of sampling of the perinephric fat is evidently directly relevant to the likelihood of identifying the biopsy site; however, data on the extent of perinephric fat sampling were not available from the gross descriptions for most cases.
None of the gross descriptions mentioned whether a suspicious biopsy tract site was identified. Regarding the surgical approach, 67 cases were partial and 31 were radical nephrectomies. Biopsy sites were identified in 13 % of partial nephrectomy cases and 6 % of radical nephrectomy cases. We also evaluated whether renal tumor biopsy technique affected the frequency of tumor seeding in PRCC by comparing several biopsy parameters between pT1a cases with and without tumor seeding (Table 4). The parameters examined are those considered in the literature to potentially affect the risk of biopsy tract tumor seeding: biopsy needle size (smaller diameter associated with a lower frequency of seeding), use of a coaxial sheath (associated with a lower chance of seeding), and the number of passes (controversial, but multiple passes are generally associated with a higher risk of seeding) [20, 21]. Biopsy procedure information was available for all 6 cases with tumor seeding and for 8 cases without tumor seeding. Among these cases, there were no significant differences in biopsy needle size (p = 0.78), application of the coaxial sheath technique (p = 0.53), or number of passes (p = 0.69) between cases with and without tumor seeding.
Table 4. Renal tumor biopsy techniques in pT1a papillary renal cell carcinoma cases with and without tumor seeding

Case  Needle size (gauge)  Coaxial sheath  Passes  Tumor seeding
1     18                   No              2       Yes
2     18                   Yes             4       Yes
3     18                   Yes             3       Yes
4     18                   Yes             5       Yes
5     18                   Yes             6       Yes
6     18                   No              2       Yes
7     18                   Yes             3       No
8     18                   Yes             3       No
9     20                   Yes             6       No
10    18                   No              4       No
11    20                   No              6       No
12    18                   No              3       No
13    18                   No              3       No
14    20                   Yes             4       No
Table 3Summary of biopsy site and tumor seedingPapillary renal cell carcinoma (PRCC)Clear cell renal cell carcinoma (CCRCC)Clear cell papillary renal cell carcinoma (CCPRCC)Chromophobe renal cell carcinoma (ChRCC)n (%)Biopsy site,n (%)Tumor seeding,n (%)n (%)Biopsy site,n (%)Tumor seeding,n (%)n (%)Biopsy site,n (%)Tumor seeding,n (%)n (%)Biopsy site,n (%)Tumor seeding,n (%)Partial nephrectomy19 (68 %)9 (32 %)6 (21 %)43 (68 %)0 (0 %)0 (0 %)1 (33 %)0 (0 %)0 (0 %)4 (100 %)0 (0 %)0 (0 %)Radical nephrectomy9 (32 %)0 (0 %)0 (0 %)20 (32 %)2 (3 %)0 (0 %)2 (67 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)Entire samplinga7 (25 %)4 (14 %)2 (7 %)9 (14 %)0 (0 %)0 (0 %)1 (17 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)0 (0 %)Incomplete samplingb21 (75 %)5 (18 %)4 (14 %)54 (85 %)2 (3 %)0 (0 %)2 (67 %)0 (0 %)0 (0 %)4 (100 %)0 (0 %)0 (0 %)a refers to the specimens entirely submitted for microscopic examinationb refers to the specimens with representative sections submitted for microscopic examination Summary of biopsy site and tumor seeding a refers to the specimens entirely submitted for microscopic examination b refers to the specimens with representative sections submitted for microscopic examination Of all the 98 cases, 17 tumors were entirely submitted for microscopic evaluation, while the other 81 tumors were incompletely sampled. The biopsy sites were identified in 23 % of cases with complete tumor sampling and 9 % of cases with incomplete tumor sampling. Although the differences on the frequencies of identifying biopsy site changes were not statistically significant (p = 0.08), thorough specimen sampling appeared to lead to a higher chance of identifying these changes. It is evident that the extent of sampling of the perinephric fat would be directly relevant to the likelihood of identifying the biopsy site. However, data on the extent of perinephric fat sampling was not available based on the gross descriptions for most cases. 
Whether a suspicious biopsy tract site was identified was not mentioned in any of the cases. Regarding the surgical approach, 67 and 31 cases were partial and radical nephrectomies, respectively. Biopsy sites were identified in 13 % of cases with partial nephrectomy and 6 % of cases with radical nephrectomy. We also evaluated whether there is any effect of renal tumor biopsy techniques on the frequency of tumor seeding in PRCCs, by comparing a few biopsy parameters between pT1a cases with and without tumor seeding (Table 4). The biopsy parameters we looked into are those considered potentially affecting the risk of biopsy tract tumor seeding in the literature, including the biopsy needle size (smaller diameter associated with lower frequency of tumor seeding), use of coaxial sheath (associated with lower chance of tumor seeding), and the number of passes (controversial, but generally speaking, multiple passes associated with higher risk of tumor seeding) [20, 21]. Biopsy procedure information was available in all the 6 cases with tumor seeding and 8 cases without tumor seeding. Among these cases, there are no significant differences in the following parameters, biopsy needle size (p = 0.78), application of coaxial sheath technique (p = 0.53) and number of passes (p = 0.69), between cases with and without tumor seeding. Table 4Renal tumor biopsy techniques in pT1a papillary renal cell carcinoma cases with and without tumor seedingCaseBiopsy needle size (gauge)Coaxial sheath techniqueNumber of passesTumor seeding118No2Yes218Yes4Yes318Yes3Yes418Yes5Yes518Yes6Yes618No2Yes718Yes3No818Yes3No920Yes6No1018No4No1120No6No1218No3No1318No3No1420Yes4No Renal tumor biopsy techniques in pT1a papillary renal cell carcinoma cases with and without tumor seeding Tumor seeding is exclusively observed in PRCC: Patients’ demographics and essential pathologic features are summarized in Table 1. The average ages of patients at diagnosis were similar among PRCC, CCRCC and CCPRCC. 
Patients with ChRCC were relatively younger. The proportions of cases in various pathologic stages (without considering the effect of needle tract tumor seeding) were comparable between CCRCC and PRCC, with pathological T1a stage in more than half the cases of PRCCs and CCRCCs. Table 1Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsyPapillary renal cell carcinomaClear cell renal cell carcinomaClear cell papillary renal cell carcinomaChromophobe renal cell carcinomaNumber of cases286334Age (mean ± SD)62.5 ± 11.561.1 ± 12.060.0 ± 5.644.5 ± 6.5Sex (Male/Female)22/636/272/11/3Tumor stagingapT1a173532pT1b41601pT2a2101pT3a51000pT3b0100Nucleolar grade (ISUP)116Not applicableNot applicable211383718401Lymphovascular invasion1400Lymph node involvement1100Distant metastasis0100a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition for Tumor Demographics and pathologic features of the 98 renal cell carcinoma cases with prior biopsy a The American Joint Committee on Cancer (AJCC) cancer staging, 8th edition for Tumor Needle tract tumor seeding within the perinephric adipose tissue was identified in 6 out of 98 (6 %) renal cell carcinoma cases. This was exclusively observed in PRCC (6/28, 21 %), with type 1 features, unifocal, small-sized (≤ 4 cm), and confined to the kidney (Table 2). Histology of the representative cases with tumor seeding along the biopsy needle tract is shown in Fig. 1. In contrast, none of the other tumors (63 CCRCC, 3 CCPRCC, and 4 ChRCC) showed tumor seeding along the biopsy needle tract. In four of the six cases, the presence of tumor cells within the perinephric adipose tissue associated with biopsy needle tract were documented in the pathology reports. Three of these four cases, otherwise pT1a, were upstaged to pT3a due to needle tract tumor seeding. The majority (5/6) of the tumors showed low nucleolar grade (grade 1 or 2). 
Post-operative follow-up period of the six cases with tumor seeding ranged from 1 month to 52 months with a median of 10.5 months. One patient died of complications from stroke one month following nephrectomy, and one patient was lost for follow up 13 months post nephrectomy. Post-operative follow-up period of the comparable 9 pT1a low grade PRCC cases without tumor seeding ranged from 7 month to 130 months with a median of 36 months; one patient was lost to follow up. No recurrence or metastasis were identified in any of the pT1a PRCC cases, with or without tumor seeding. With regard to the pT3a PRCC cases, one out of the 5 patients developed metastatic lesions in multiple retroperitoneal lymph nodes at 7 months after radical nephrectomy. However, the primary PRCC in this patient had type 2 features, was of high nucleolar grade, and exhibited lymphovascular invasion and lymph node involvement at the time of nephrectomy. The other four patients (one with type 2 features and high nucleolar grade; one with type 1 features and high nucleolar grade; the other two with type 1 features and low nucleolar grade) showed no evidence of local recurrence or metastases during the follow-up period ranging from 24 to 51 months. 
Table 2. Summary of clinicopathologic features of cases with tumor seeding along the biopsy needle tract

| Case | Age (years) | Sex | RCC subtype | Surgical procedure | Tumor stage^a | Tumor size (cm) | ISUP nucleolar grade | Complete sampling | Follow up |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 59 | F | PRCC, type 1 | Partial nephrectomy | pT1a | 2.3 | 1 | No | 1 month, died of stroke |
| 2 | 55 | M | PRCC, type 1 | Partial nephrectomy | pT1a | 2.1 | 2 | No | 11 months, no recurrence |
| 3 | 70 | F | PRCC, type 1 | Partial nephrectomy | pT1a | 1.7 | 1 | Yes | 52 months, no recurrence |
| 4 | 48 | M | PRCC, type 1 | Partial nephrectomy | pT1a | 1.5 | 1 | No | 6 months, no recurrence |
| 5 | 62 | M | PRCC, type 1 | Partial nephrectomy | pT1a | 1.2 | 2 | Yes | 10 months, no recurrence |
| 6 | 46 | M | PRCC, type 1 | Partial nephrectomy | pT1a | 3.8 | 3 | No | 13 months, no recurrence; lost to follow-up |

^a Based on the 8th edition American Joint Committee on Cancer (AJCC) pTNM staging system; biopsy tract seeding not taken into consideration
Abbreviations: RCC, renal cell carcinoma; PRCC, papillary renal cell carcinoma

Fig. 1. Representative images of two cases (a-c and d-f, respectively) demonstrating histologic evidence of biopsy site changes and tumor seeding along the biopsy site. (a) Low power view showing the primary PRCC, perinephric tissue with biopsy site changes, and tumor seeding into the adjacent perinephric adipose tissue beyond the renal capsule; (b) biopsy site changes showing a combination of foreign material deposition (asterisk = gelfoam), foreign body reaction, hemosiderin deposition (arrowhead), and fibrosis, as well as a few clusters of tumor cells (arrow) seeded within the biopsy site; (c) high power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue; (d) low power view showing the primary PRCC confined within the capsule and perinephric tissue with biopsy site changes with hemorrhage and foreign material deposition (inset); (e) perinephric tissue with foreign body reaction (dashed arrow); (f) high power view highlighting nests of tumor cells (arrow) seeded in the perinephric adipose tissue.
Potential impact of specimen sampling and biopsy techniques on biopsy site identification and tumor seeding: To further evaluate whether the observed differences in needle tract tumor seeding rates among the histologic types of RCC were confounded by the extent of specimen sampling, we evaluated identification of the biopsy site with regard to the different approaches to specimen sampling and surgical procedure (Table 3). Interestingly, the biopsy site was identified in 9 of the 28 PRCC cases, and tumor seeding along the biopsy needle tract was seen in six cases. In contrast, the biopsy site was present in only 2 CCRCC cases; neither showed tumor seeding. Due to the small sample sizes, no statistical analysis of the correlation between biopsy site identification and tumor seeding was performed.
Table 3. Summary of biopsy site and tumor seeding

| Category | RCC subtype | n (%) | Biopsy site, n (%) | Tumor seeding, n (%) |
|---|---|---|---|---|
| Partial nephrectomy | PRCC | 19 (68 %) | 9 (32 %) | 6 (21 %) |
| Partial nephrectomy | CCRCC | 43 (68 %) | 0 (0 %) | 0 (0 %) |
| Partial nephrectomy | CCPRCC | 1 (33 %) | 0 (0 %) | 0 (0 %) |
| Partial nephrectomy | ChRCC | 4 (100 %) | 0 (0 %) | 0 (0 %) |
| Radical nephrectomy | PRCC | 9 (32 %) | 0 (0 %) | 0 (0 %) |
| Radical nephrectomy | CCRCC | 20 (32 %) | 2 (3 %) | 0 (0 %) |
| Radical nephrectomy | CCPRCC | 2 (67 %) | 0 (0 %) | 0 (0 %) |
| Radical nephrectomy | ChRCC | 0 (0 %) | 0 (0 %) | 0 (0 %) |
| Entire sampling^a | PRCC | 7 (25 %) | 4 (14 %) | 2 (7 %) |
| Entire sampling^a | CCRCC | 9 (14 %) | 0 (0 %) | 0 (0 %) |
| Entire sampling^a | CCPRCC | 1 (17 %) | 0 (0 %) | 0 (0 %) |
| Entire sampling^a | ChRCC | 0 (0 %) | 0 (0 %) | 0 (0 %) |
| Incomplete sampling^b | PRCC | 21 (75 %) | 5 (18 %) | 4 (14 %) |
| Incomplete sampling^b | CCRCC | 54 (85 %) | 2 (3 %) | 0 (0 %) |
| Incomplete sampling^b | CCPRCC | 2 (67 %) | 0 (0 %) | 0 (0 %) |
| Incomplete sampling^b | ChRCC | 4 (100 %) | 0 (0 %) | 0 (0 %) |

Abbreviations: PRCC, papillary renal cell carcinoma; CCRCC, clear cell renal cell carcinoma; CCPRCC, clear cell papillary renal cell carcinoma; ChRCC, chromophobe renal cell carcinoma
^a refers to specimens entirely submitted for microscopic examination
^b refers to specimens with representative sections submitted for microscopic examination

Of all 98 cases, 17 tumors were entirely submitted for microscopic evaluation, while the other 81 tumors were incompletely sampled. Biopsy sites were identified in 23 % of cases with complete tumor sampling and 9 % of cases with incomplete tumor sampling. Although the difference in the frequency of identifying biopsy site changes was not statistically significant (p = 0.08), thorough specimen sampling appeared to increase the chance of identifying these changes. The extent of sampling of the perinephric fat is directly relevant to the likelihood of identifying the biopsy site; however, data on the extent of perinephric fat sampling were not available from the gross descriptions for most cases.
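The comparison above reduces to a 2×2 contingency table (biopsy site identified or not, by complete vs. incomplete sampling; roughly 4/17 vs. 7/81 based on the counts in Table 3). The paper does not state which test produced p = 0.08; as one plausible choice, a minimal sketch of a two-sided Fisher's exact test using only the Python standard library might look like:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    (a, b), (c, d) = table
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(k):
        # probability of k in the top-left cell, with margins fixed
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # small relative tolerance guards against float round-off
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-12))

# Counts implied by Table 3: biopsy site identified in 4/17 completely
# sampled vs. 7/81 incompletely sampled specimens (assumed layout)
print(fisher_exact_two_sided([[4, 13], [7, 74]]))
```

The resulting p-value may differ somewhat from the reported p = 0.08 depending on which test the authors actually used (e.g., a chi-square test rather than Fisher's exact test).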
Whether a suspicious biopsy tract site was identified during grossing was not mentioned in any of the cases. Regarding the surgical approach, 67 and 31 cases were partial and radical nephrectomies, respectively. Biopsy sites were identified in 13 % of cases with partial nephrectomy and 6 % of cases with radical nephrectomy. We also evaluated whether renal tumor biopsy technique had any effect on the frequency of tumor seeding in PRCCs, by comparing several biopsy parameters between pT1a cases with and without tumor seeding (Table 4). The biopsy parameters examined are those considered in the literature to potentially affect the risk of biopsy tract tumor seeding: biopsy needle size (smaller diameter associated with a lower frequency of tumor seeding), use of a coaxial sheath (associated with a lower chance of tumor seeding), and the number of passes (controversial, but generally, multiple passes are associated with a higher risk of tumor seeding) [20, 21]. Biopsy procedure information was available for all 6 cases with tumor seeding and 8 cases without tumor seeding. Among these cases, there were no significant differences between cases with and without tumor seeding in biopsy needle size (p = 0.78), application of a coaxial sheath technique (p = 0.53), or number of passes (p = 0.69).

Table 4. Renal tumor biopsy techniques in pT1a papillary renal cell carcinoma cases with and without tumor seeding

| Case | Biopsy needle size (gauge) | Coaxial sheath technique | Number of passes | Tumor seeding |
|---|---|---|---|---|
| 1 | 18 | No | 2 | Yes |
| 2 | 18 | Yes | 4 | Yes |
| 3 | 18 | Yes | 3 | Yes |
| 4 | 18 | Yes | 5 | Yes |
| 5 | 18 | Yes | 6 | Yes |
| 6 | 18 | No | 2 | Yes |
| 7 | 18 | Yes | 3 | No |
| 8 | 18 | Yes | 3 | No |
| 9 | 20 | Yes | 6 | No |
| 10 | 18 | No | 4 | No |
| 11 | 20 | No | 6 | No |
| 12 | 18 | No | 3 | No |
| 13 | 18 | No | 3 | No |
| 14 | 20 | Yes | 4 | No |

Discussion: Our present study is one of the largest case series from a single institution to date evaluating the incidence of biopsy needle tract tumor seeding confirmed by histologic examination.
It is also the first study investigating the differential frequencies of tumor seeding among specific histologic subtypes of RCC. In our cohort, tumor seeding within the perinephric adipose tissue along the biopsy needle tract was observed in 6 % (6/98) of all RCC resection cases, but exclusively among patients with PRCC (6/28, 21 %). The previously reported overall tumor seeding rate ranges from 0.01 % [9, 10] to 1.2 % [16] in the literature. The lower tumor seeding rates reported in the studies from the 1990s were estimated from responses to questionnaires at multiple institutions [9, 10]. Although the total number of biopsies in those studies was large (more than 10,000 biopsies of abdominal masses, including renal masses), no standardized protocols for detection of tumor seeding were described, likely resulting in underestimation of the frequency of tumor seeding. Microscopically, we observed clear histologic evidence of biopsy tract changes with intermingled tumor cell clusters, discontinuous from the main tumor, in all six cases that we interpreted as needle tract tumor seeding. However, according to a recent multi-institutional survey, there is a lack of specific histologic criteria for interpreting biopsy tract seeding, and interobserver variability exists [22], which could contribute to the variable frequencies of needle tract tumor seeding reported. To date, a total of 25 cases of tumor seeding along the percutaneous renal mass biopsy tract have been reported, in several case reports and one case series [8, 11–16]. Of these, PRCC (15 cases) was the most commonly encountered pathologic subtype. Other histologic subtypes included 3 CCRCC, 4 renal cell carcinomas (subtypes not specified), 1 oncocytoma, 1 urothelial carcinoma of the kidney, and 1 "angiomyoliposarcoma". Although this phenomenon was observed in several types of renal tumors, a predilection for tumor seeding was identified in PRCC compared to the other types.
We did not observe biopsy needle tract tumor seeding in CCRCC in our case series, although rare cases of tumor seeding in CCRCC have been published previously [8, 12]. Biopsy tract seeding was not observed in CCPRCC or ChRCC either, but the sample sizes for these two subtypes were small. We evaluated several factors possibly affecting detection of tumor seeding in the resection specimens. Although the tumors along with perinephric tissue were sampled per CAP (College of American Pathologists) protocols, not all were entirely sampled (especially the larger tumors). Our data suggest that gross sampling might influence microscopic identification of biopsy site changes. Moreover, careful gross examination of the nephrectomy specimen for scarring, fat necrosis, hemorrhage, and fibrosis in the perinephric fat, or hemorrhagic foci on the capsular surface, together with more diligent sampling of such areas, if present, might help identify the needle tract, thus allowing more efficient evaluation for potential tumor seeding [19]. In our series, no macroscopic descriptions of a suspected biopsy tract were found on retrospective review of the cases, suggesting no targeted sampling of the biopsy tract. However, the drastic difference in the tumor seeding rate between CCRCC and PRCC is not readily explained by tumor sampling alone. A few theories have been proposed in the literature to explain the higher frequency of needle tract tumor seeding in PRCC. Some studies observed that PRCCs tend to exhibit an incomplete or absent peritumoral pseudocapsule more frequently than CCRCCs, facilitating tumor invasion into the perinephric fat [23]. Other hypotheses for the higher rate of needle tract tumor seeding in PRCC include the friable nature of the tumor facilitating tumor cell adherence to the needle, a higher frequency of exophytic growth allowing tumor seeding more often in the extrarenal space, and a possibly higher chance of tumor cell survival when explanted into the needle tract [17].
Nevertheless, the exact reasons for the differences in the frequencies of biopsy needle tract tumor seeding among the various pathologic types of renal tumors need further investigation. Pathologic staging of renal cell carcinoma is one of the essential prognostic factors and guides patient management, especially surveillance following surgery. Localized pT1a or pT1b renal cell carcinomas are considered low-risk disease, with a recurrence risk of 1–8 %. For these patients, abdominal imaging is recommended annually for 3 years. In contrast, patients with localized T2 or higher disease are considered to have a moderate to high risk of recurrence (30–78 %). A more intensive surveillance protocol is warranted, with abdominal imaging (CT or MRI) recommended at 3- to 6-month intervals for the first 3 years, then annually to the fifth year [24]. To date, there is no evidence-based standard protocol among pathologists on whether upstaging is justified solely based on the finding of perinephric tumor seeding along a biopsy needle tract. In prior reports, one case of PRCC with tumor foci involving the perinephric fat was initially staged as pT3a [14]. However, following confirmation that the tumor foci represented seeding of the prior biopsy tract within the perinephric fat, the final stage was revised from pT3a to pT1a, indicating that the authors did not consider perinephric tumor seeding along the biopsy tract as true cancer invasion [14]. In contrast, in a seven-case series with tumor seeding along the biopsy needle tract involving perinephric fat, six of the seven tumors (PRCC and CCRCC) were upstaged to pT3a solely due to biopsy tract seeding, and would otherwise have been staged as pT1a [16]. Understanding the biological behavior of tumor cells spread along the biopsy tract is fundamental to ascertaining appropriate cancer staging.
It is questionable whether passive displacement of potentially indolent tumor cells to a location that would theoretically necessitate upstaging is equivalent to a genetically aggressive counterpart that actively invades the perirenal tissue. For example, a review of tumor seeding following breast needle biopsy found that the incidence of detecting tumor seeding declines as the interval between biopsy and surgery lengthens, suggesting reduced viability of the seeded tumor cells [25]. On the other hand, it could be argued that increased access to lymphatic structures and blood vessels in the perirenal tissue afforded by tumor seeding may play a more important role. Thus far, basic mechanistic studies and long-term clinical follow-up data are sparse. It is unclear whether and how the pathologic features of the original tumor and the microenvironment of the tumor seeding site would affect tumor regrowth. It may also be technically challenging to determine the causal relationship between recurrence/metastasis at a later time and a prior tumor seeding event. The follow-up data from our series of PRCC suggest a similarly low risk of recurrence in patients with low-grade pT1a disease with and without tumor seeding in the perinephric adipose tissue. Studies with a larger patient cohort and longer follow-up are needed for a more definitive prognostic assessment. Despite these uncertainties, it is documented that two (CCRCC and RCC not specified) of the 25 previously published cases with perinephric biopsy tract tumor seeding showed local cancer recurrence associated with the prior biopsy site. Moreover, seven cases (RCC not specified, CCRCC, PRCC, and oncocytoma) exhibited extrarenal subcutaneous or retroperitoneal tumor nodules histologically consistent with the original renal tumors.
Therefore, thorough and diligent grossing and microscopic examination for biopsy site changes and signs of tumor seeding are recommended, especially in small tumors, the management and/or follow-up of which may differ significantly based on whether the tumor is confined to the kidney. Effective communication between pathologists and clinicians and precise documentation of tumor seeding are essential to facilitate appropriate follow-up and patient management. There are several limitations to our study. First, due to its retrospective nature, we were not able to ascertain whether evidence of needle tract changes was diligently looked for and adequately sampled during grossing. Second, the follow-up periods for the six cases with biopsy tract tumor seeding were relatively short, limiting long-term evaluation of prognosis. Third, the numbers of CCPRCC and ChRCC cases were relatively small, limiting the study of tumor seeding in these two subtypes. Tumor seeding along the biopsy needle tract in patients with RCC warrants increased attention due to its higher frequency than previously documented and its potential impact on patient management. Future studies on a larger scale and with longer follow-up to evaluate the association between needle tract tumor seeding and prognosis are warranted.
Background: Percutaneous needle biopsy of renal masses has been increasingly utilized to aid diagnosis and guide management. It is generally considered a safe procedure. However, tumor seeding along the needle tract, one of its complications, theoretically poses a risk of tumor spread by seeded malignant cells. Prior studies on the frequency of needle tract seeding in renal tumor biopsies are limited, and the clinical significance of biopsy-associated tumor seeding remains largely controversial. Methods: We investigated the frequency of biopsy needle tract tumor seeding at our institution by reviewing the histology of renal cell carcinoma nephrectomy specimens with a prior biopsy within the last seventeen years. Biopsy site changes were recognized as a combination of foreign body reaction, hemosiderin deposition, fibrosis, and fat necrosis. Histologic evidence of needle tract tumor seeding was identified as clusters of tumor cells embedded in perinephric tissue spatially associated with the biopsy site. In addition, the association between biopsy technique parameters and tumor seeding was investigated. Results: We observed needle tract tumor seeding to perinephric tissue in six of ninety-eight (6 %) renal cell carcinoma cases, including clear cell renal cell carcinoma, papillary renal cell carcinoma, chromophobe renal cell carcinoma, and clear cell papillary renal cell carcinoma. Needle tract tumor seeding was exclusively observed in papillary renal cell carcinomas (6/28, 21 %) that were unifocal, small (≤ 4 cm), confined to the kidney, and had type 1 features. No recurrence or metastasis was observed in the papillary renal cell carcinoma cases with tumor seeding or in the stage-matched cases without tumor seeding. Conclusions: Our study demonstrated a higher than previously reported frequency of needle tract tumor seeding. Effective communication between pathologists and clinicians as well as documentation of tumor seeding is recommended.
Further studies with a larger patient cohort and longer follow-up to evaluate the impact of needle tract tumor seeding on long-term prognosis are needed. This may also help reach a consensus on appropriate pathologic staging of renal cell carcinoma when the only site of perinephric fat invasion is within a biopsy needle tract.
9,079
397
[ 1192, 1008 ]
6
[ "tumor", "biopsy", "seeding", "tumor seeding", "cases", "renal", "site", "biopsy site", "cell", "tract" ]
[ "renal cell carcinomafig", "renal biopsy estimated", "renal mass biopsy", "renal tumors thorough", "renal cell carcinoma" ]
[CONTENT] Biopsy needle tract | Tumor seeding | Renal cell carcinoma | Papillary renal cell carcinoma | Clear cell renal cell carcinoma [SUMMARY]
[CONTENT] Adult | Aged | Biopsy, Large-Core Needle | Carcinoma, Renal Cell | Female | Humans | Kidney Neoplasms | Male | Middle Aged | Neoplasm Seeding | Retrospective Studies [SUMMARY]
[CONTENT] renal cell carcinomafig | renal biopsy estimated | renal mass biopsy | renal tumors thorough | renal cell carcinoma [SUMMARY]
[CONTENT] tumor | biopsy | seeding | tumor seeding | cases | renal | site | biopsy site | cell | tract [SUMMARY]
[CONTENT] rmb | tumor | biopsy | renal | seeding | tumor seeding | needle | tract | case | prior [SUMMARY]
[CONTENT] tumor | biopsy | cases | seeding | tumor seeding | site | biopsy site | months | renal | cell [SUMMARY]
[CONTENT] tumor | biopsy | seeding | tumor seeding | cases | renal | cell | site | tract | biopsy site [SUMMARY]
Soluble ST2 in the prediction of heart failure and death in patients with atrial fibrillation.
35188278
Biomarkers may be a useful marker for predicting heart failure (HF) or death in patients with atrial fibrillation (AF).
BACKGROUND
This is a prospective study of patients with nonvalvular AF. Clinical outcomes were HF or death. Clinical and laboratory data were compared between those with and without clinical outcomes. Univariate and multivariate analysis was performed to determine whether sST2 is an independent predictor for heart failure or death in patients with nonvalvular AF.
METHODS
A total of 185 patients (mean age: 68.9 ± 11.0 years) were included; 116 (62.7%) were male. The average sST2 and N-terminal pro-brain natriuretic peptide (NT-proBNP) levels were 31.3 ± 19.7 ng/ml and 2399.5 ± 6853.0 pg/ml, respectively. The best receiver operating characteristic (ROC) cutoff of sST2 for predicting HF or death was 30.14 ng/ml. Seventy-three (39.5%) patients had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/ml. The average follow-up was 33.1 ± 6.6 months. Twenty-nine (15.7%) patients died, and 33 (17.8%) developed HF during follow-up. Multivariate analysis revealed high sST2 to be an independent risk factor for death or HF, with an HR of 2.60 (95% CI: 1.41-4.78). The predictive value of sST2 was better than that of NT-proBNP, and it remained significant in AF patients irrespective of history of HF and NT-proBNP levels.
RESULTS
sST2 is an independent predictor of death or HF in patients with AF irrespective of history of HF or NT-proBNP levels.
CONCLUSIONS
[ "Aged", "Atrial Fibrillation", "Biomarkers", "Female", "Heart Failure", "Humans", "Interleukin-1 Receptor-Like 1 Protein", "Male", "Middle Aged", "Natriuretic Peptide, Brain", "Peptide Fragments", "Prognosis", "Prospective Studies" ]
9019881
INTRODUCTION
Non-valvular atrial fibrillation (AF) is one of the most common cardiac arrhythmias [1], and the prevalence of AF increases in the older adult population [2]. Heart failure (HF) is one of the coexisting conditions frequently seen in patients with AF [3, 4], and the prevalence of HF also increases in older adults [5]. When AF and HF coexist in a patient, it is often difficult to determine which condition is the cause and which is the effect [6]. Practice guidelines mainly focus on stroke prevention in patients with AF, and HF is often overlooked. Results from the Global Anticoagulant Registry in the Field-Atrial Fibrillation (GARFIELD-AF) registry, a large global registry of patients with newly diagnosed AF, revealed a rate of HF of 2.41 per 100 person-years, which is greater than the rates of ischemic stroke, major bleeding, and cardiovascular death [7]. The recent European Society of Cardiology (ESC) guideline for the management of AF emphasizes the treatment of comorbidities such as hypertension, diabetes, and HF [8]. Natriuretic peptides, such as brain natriuretic peptide (BNP) and N-terminal pro-BNP (NT-proBNP), have been shown to be both diagnostic and prognostic biomarkers for HF [9]. Soluble ST2 (sST2) is another biomarker that has been demonstrated to be a good prognostic marker in patients with HF [10, 11]. The American College of Cardiology (ACC) guideline for the management of patients with HF suggests that sST2 may be useful as an additive biomarker for prognosis [12]. The objectives of this study were to determine (1) the prognostic value of sST2 for HF and death in patients with AF; (2) the prognostic value of sST2 for HF and death in patients with AF with and without a history of HF; and (3) whether the prognostic value of sST2 for HF and death in patients with AF is independent of NT-proBNP level.
METHODS
Study population: This is a prospective study. Patients who were at least 18 years of age with a diagnosis of nonvalvular AF were prospectively enrolled. The presence of AF was confirmed by 12-lead electrocardiography (ECG) or ambulatory ECG monitoring. Patients with at least one of the following criteria were excluded: (1) rheumatic mitral stenosis; (2) mechanical heart valve; (3) AF from a transient reversible cause, such as pneumonia; (4) pregnancy; (5) life expectancy less than 3 years; (6) unwillingness to participate; (7) hospitalization within 1 month before study enrollment; (8) ongoing participation in a clinical trial; and/or (9) inability to attend follow-up appointments. The protocol for this study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, and all patients gave written informed consent to participate.

Study protocol and data collection: Baseline data were collected and recorded from medical record reviews and patient interviews.
Included patients were followed up every 6 months for 3 years. In addition to study-related data collected at each follow-up visit, the authors investigated, determined, and recorded the occurrence of study outcomes (HF or death) during the preceding six months. The following data were collected: demographic data; type, duration, and symptoms of AF; left ventricular ejection fraction (LVEF) from echocardiogram; comorbid conditions, including history of HF, coronary artery disease (CAD), ischemic stroke/transient ischemic attack (TIA), diabetes mellitus (DM), hypertension (HT), dyslipidemia (DLP), smoking, implantable devices, and dementia; medications; and laboratory data, such as creatinine clearance for the determination of chronic kidney disease (CKD) and renal replacement therapy (RRT), hematocrit for assessment of anemia, NT-proBNP, and sST2.
Definitions: CAD was defined as the presence of significant stenosis of at least one major coronary artery on coronary angiography or coronary computed tomography (CT) angiography, a history of documented myocardial infarction or coronary revascularization, or positive stress imaging by nuclear stress test, magnetic resonance imaging, or echocardiography. Anemia was defined as a hemoglobin level <13 g/dl for males and <12 g/dl for females. CKD in this study was defined as CKD stages 3–5, or an estimated glomerular filtration rate (eGFR [ml/min/1.73 m2]) less than 60 ml/min by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula.

Laboratory investigations: sST2 was measured from plasma samples using a high-sensitivity sandwich monoclonal immunoassay (Presage ST2 Assay, Critical Diagnostics). The sST2 assay had a within-run coefficient of variation of less than 2.5%, a total coefficient of variation of 4%, and a limit of detection of 1.31 ng/ml. NT-proBNP was measured from plasma using a commercially available immunoassay (Elecsys NT-proBNP assay, Roche Diagnostics). eGFR (ml/min/1.73 m2) was calculated using the CKD-EPI formula.
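The CKD-EPI creatinine equation mentioned above is fully specified in the literature; as an illustration only, a minimal sketch of the 2009 version (the race coefficient, which was removed in the 2021 revision, is included as an optional flag) might look like:

```python
def ckd_epi_egfr(scr_mg_dl, age_years, female, black=False):
    """eGFR (ml/min/1.73 m2) by the 2009 CKD-EPI creatinine equation.

    scr_mg_dl : serum creatinine in mg/dl
    black     : 2009 race coefficient; removed in the 2021 revision
    """
    kappa = 0.7 if female else 0.9          # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411    # exponent below the threshold
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# CKD as defined in this study corresponds to eGFR < 60 ml/min
print(ckd_epi_egfr(2.0, 50, female=False))
```

The study does not state whether the 2009 or a later refit of the equation was used, so the constants above are an assumption based on the original 2009 publication.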
Outcomes: The primary outcomes of this study were HF or death. We used the standard definitions for cardiovascular endpoint events proposed by the American College of Cardiology (ACC) and American Heart Association (AHA) [13]. HF was defined as an urgent, unscheduled clinic or emergency department visit or hospital admission with a primary diagnosis of HF, where the patient exhibits new or worsening symptoms of HF on presentation, has objective evidence of new or worsening HF, and receives initiation or intensification of treatment specifically for HF. Objective evidence consists of at least two physical examination findings, or at least one physical examination finding and at least one laboratory criterion, of new or worsening HF on presentation. Death was subcategorized into cardiovascular (CV) death, non-CV death, or undetermined cause. To minimize bias, all outcomes were confirmed by a separate adjudication team. The sample size of this registry was sufficient to determine differences in outcome between two groups with 90% power.
Objective evidence consists of at least two physical examination findings, or at least one physical examination finding and at least one laboratory criterion, of new or worsening HF on presentation. Death was subcategorized as cardiovascular (CV) death, non‐CV death, or death of undetermined cause. To minimize bias, all outcomes were confirmed by a separate adjudication team. The sample size of this registry was sufficient to detect differences in outcome between the two groups with 90% power.

Statistical analysis

Continuous data were compared by Student's t‐test for unpaired data and are described as mean ± standard deviation (SD). Categorical data were compared by the χ2 test or Fisher's exact test and are described as number and percentage. Clinical outcome data are shown as the proportion of patients with the outcome in each group, and as the rate of the outcome per 100 person‐years with 95% confidence interval (CI). Kaplan‐Meier estimates were used to assess time‐to‐event outcomes, with survival probability computed from the number of patients at risk at each event time. The log‐rank test was used to compare survival probability between groups. Univariate and multivariate analyses were performed using the Cox proportional hazards model to assess the effect of baseline variables on clinical outcomes. Results are presented as hazard ratios with 95% CIs. The primary analysis was based on the sST2 cut‐off derived from receiver operating characteristic (ROC) curve analysis. Sensitivity analyses were performed (1) using the median sST2 as the cut‐off, (2) comparing four groups of sST2 separated by quartiles, and (3) treating sST2 as continuous data and testing its effect on heart failure or death, death, and heart failure with restricted cubic spline graphs. A p‐value <.05 was considered statistically significant, and SPSS Statistics (SPSS, Inc.) and R version 3.6.3 were used for data analyses.
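The Kaplan‐Meier estimate described above can be sketched in a few lines: at each event time the running survival probability is multiplied by (1 − d/n), where n is the number of patients still at risk. This is a generic illustration, not the study's analysis code.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate: at each event the running survival
    probability is multiplied by (1 - 1/n), n = patients still at risk.
    events: 1 = outcome occurred, 0 = censored."""
    surv, curve = 1.0, []
    at_risk = len(times)
    # sort by time; at tied times process events before censorings
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:
            surv *= 1.0 - 1.0 / at_risk
            curve.append((t, surv))
        at_risk -= 1
    return curve
```

For four subjects with follow‐up times [1, 2, 3, 4] and event indicators [1, 0, 1, 1], the estimated survival drops to 0.75 after the first event, to 0.375 after the event at time 3 (one subject censored in between), and to 0 after the last event.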
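The rates per 100 person‐years with 95% CIs reported below can be sketched as follows. This uses a normal approximation on the log scale under a Poisson assumption; the registry's exact interval method is not stated, so results may differ slightly from an exact Poisson CI.

```python
import math

def rate_per_100py(events: int, person_years: float):
    """Event rate per 100 person-years with an approximate 95% CI
    (normal approximation on the log scale, assuming a Poisson count)."""
    rate = 100.0 * events / person_years
    se_log = 1.0 / math.sqrt(events)  # SE of log(rate) for a Poisson count
    return rate, rate * math.exp(-1.96 * se_log), rate * math.exp(1.96 * se_log)
```

With the registry's figures (54 HF‐or‐death events over 502.2 person‐years), this gives a rate of about 10.8 per 100 person‐years with an interval close to the reported 8.08–14.03.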
RESULTS
Baseline characteristics

Of the 185 patients enrolled, 116 (62.7%) were male. The average age was 68.9 ± 11.0 years, and the average sST2 level was 31.3 ± 19.7 ng/ml (median and interquartile range [IQR]: 26.78 and 18.54–38.38 ng/ml). The average NT‐proBNP level was 2399 ± 6853 pg/ml (median and IQR: 974.4 and 490.9–1841.0 pg/ml).

Clinical outcomes

The average follow‐up duration was 33.1 ± 6.6 months, or 502.2 person‐years. There were 54 patients with death or heart failure during follow‐up (29 deaths and 33 heart failures). Baseline characteristics of patients with and without a clinical outcome are shown in Table 1. Older age, history of HF, CAD, DM, HT, CKD, RRT, anemia, low LVEF, and elevated sST2 level were all significantly associated with an increased risk of HF or death. From ROC analysis, the best sST2 cut‐off for death or heart failure was 30.14 ng/ml (area under the curve: 0.69). Seventy‐three (39.5%) patients had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/ml. The proportions of patients with clinical outcomes in the sST2 <30.14 and ≥30.14 ng/ml groups are shown in Figure 1. Sixty‐nine (37.3%) patients had a history of HF. The differences between the two sST2 groups are also shown for patients with and without a history of HF, and for patients with NT‐proBNP below and at or above the median. The rates (95% CI) of HF or death, death, and HF were 10.75 (8.08–14.03), 5.77 (3.87–8.29), and 6.57 (4.52–9.23) per 100 person‐years, respectively.
The incidence rates of clinical outcomes in patients with sST2 <30.14 ng/ml and ≥30.14 ng/ml are shown in Table S1. Incidence rates were higher in patients with sST2 ≥30.14 ng/ml, and Table S1 also demonstrates a higher incidence rate of each outcome for patients with sST2 ≥30.14 ng/ml regardless of history of heart failure or NT‐proBNP level.

Table 1. Baseline characteristics of NVAF patients compared between those with and without HF or death. Note: Data presented as mean ± standard deviation or number and percentage. Bold values are statistically significant at p < .05. Abbreviations: ACEI, angiotensin‐converting enzyme inhibitor; ARB, angiotensin receptor blocker; HF, heart failure; NT‐proBNP, N‐terminal pro‐brain natriuretic peptide; NVAF, non‐valvular atrial fibrillation; TIA, transient ischemic attack.

Figure 1. Rate of heart failure (HF) and death according to soluble ST2 (sST2) group for (A) all patients, (B) patients with a history of HF, (C) patients without a history of HF, (D) patients with N‐terminal pro‐brain natriuretic peptide (NT‐proBNP) level ≥median, and (E) patients with NT‐proBNP level <median.
Univariate and multivariate analysis

The results of univariate and multivariate Cox proportional hazards analyses are shown as a forest plot in Figure 2. Factors with a p‐value <.2 in Table 1 were selected for univariate and multivariate Cox proportional hazards model analysis. Univariate analysis identified history of HF, CAD, DM, RRT, CKD, anemia, LVEF <50%, NT‐proBNP >median, and sST2 ≥30.14 ng/ml as predictors of HF or death.
Subsequent multivariate analysis revealed history of HF, NT‐proBNP >median, and sST2 ≥30.14 ng/ml to be independent predictors of HF or death.

Figure 2. Forest plot of univariate and multivariate analysis of factors that predict heart failure or death. CAD, coronary artery disease; CI, confidence interval; CKD, chronic kidney disease; LVEF, left ventricular ejection fraction; NT‐proBNP, N‐terminal pro‐brain natriuretic peptide; sST2, soluble ST2; TIA, transient ischemic attack.

Survival analysis

The cumulative event rates of HF or death, HF, and death are shown in Figure 3. The event rate in patients with sST2 ≥30.14 ng/ml increased with follow‐up time and was significantly different from that in patients with sST2 <30.14 ng/ml in both the unadjusted and adjusted models. Moreover, the separation between the two curves (sST2 ≥30.14 and <30.14 ng/ml) widened as the follow‐up duration increased.

Figure 3. Cumulative rate of heart failure (HF) or death, death, and HF compared between patients with sST2 level ≥30.14 and <30.14 ng/ml.
A–C: unadjusted; D–F: adjusted for confounders.

Sensitivity analysis and test of interaction effect

Sensitivity analysis was performed by treating sST2 as continuous data and testing its effect on heart failure or death, death, and heart failure. Restricted cubic spline graphs demonstrate that the risk of heart failure or death increased as sST2 levels increased in both the unadjusted and adjusted models (variables with p < .2 in Table 1 were included in the adjusted model) (Figure 4). Figure S1 demonstrates that there were no significant interactions (interaction test p > .05) between history of heart failure and sST2 level (Figure S1A–C) or between NT‐proBNP level and sST2 level (Figure S1D–F) for any of the clinical outcomes.

Figure 4. Cubic spline graphs showing hazard ratios and 95% confidence intervals (CIs) for heart failure (HF) or death, death, and HF with sST2 as continuous data. A–C: unadjusted; D–F: adjusted for confounders.

Sensitivity analysis was also performed using the median sST2 (26.78 ng/ml) as a cut‐off and by comparing four groups of sST2 separated by quartiles (1st quartile: <18.54 ng/ml; 2nd quartile: 18.54–26.78 ng/ml; 3rd quartile: 26.78–38.38 ng/ml; 4th quartile: ≥38.38 ng/ml). The results are shown in Figure S2.
Subgroup analysis of the predictive value of sST2 for HF or death showed that sST2 predicted HF or death in the majority of subgroups of patients with AF, including subgroups defined by age, sex, history of HF, CAD, stroke, hypertension, diabetes, CKD, LVEF, and NT‐proBNP (Figure S3).
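The 30.14 ng/ml threshold used throughout the results was derived from ROC analysis. One common criterion for choosing such a cut‐off is maximizing Youden's J (sensitivity + specificity − 1); the sketch below assumes that criterion, since the study does not state which one was used.

```python
def best_cutoff(values, outcomes):
    """Scan candidate thresholds and return the one maximizing
    Youden's J = sensitivity + specificity - 1 (higher value = positive test)."""
    pos = [v for v, y in zip(values, outcomes) if y]
    neg = [v for v, y in zip(values, outcomes) if not y]
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):
        sens = sum(v >= c for v in pos) / len(pos)  # true-positive fraction
        spec = sum(v < c for v in neg) / len(neg)   # true-negative fraction
        if sens + spec - 1.0 > best_j:
            best_c, best_j = c, sens + spec - 1.0
    return best_c, best_j
```

For a toy biomarker where all event patients have higher values than all event‐free patients, the scan returns the lowest value among event patients, with J = 1.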
CONCLUSION
The results of this study revealed sST2 level to be an independent predictor of death or HF in patients with non‐valvular AF, irrespective of history of HF or NT‐proBNP level.
[ "INTRODUCTION", "Study population", "Study protocol and data collection", "Definitions", "Laboratory investigations", "Outcomes", "Statistical analysis", "Baseline characteristics", "Clinical outcomes", "Univariate and multivariate analysis", "Survival analysis", "Sensitivity analysis and test of interaction effect", "Limitations", "AUTHOR CONTRIBUTIONS" ]
[ "Non‐valvular atrial fibrillation (AF) is one of the most common cardiac arrhythmias,\n1\n and the prevalence of AF increases in older adult population.\n2\n Heart failure (HF) is one of the coexisting conditions frequently seen in patients with AF,\n3\n, \n4\n and the prevalence of HF also increases in older adults.\n5\n When AF and HF coexist in a patient, it is often difficult to determine which condition is the cause, and which is the effect.\n6\n Practice guidelines mainly focus on stroke prevention in patients with AF and HF is often overlooked. Results from the Global Anticoagulant Registry in the Field‐Atrial Fibrillation (GARFIELD‐AF) registry, which is a large global registry of patients with newly diagnosed AF, revealed a rate of HF of 2.41 per 100 person‐years, which is greater than the rate of ischemic stroke, major bleeding, and cardiovascular death.\n7\n Recent European Society of Cardiology (ESC) guideline for management of AF emphasizes the treatment of comorbidities, such as hypertension, diabetes, and HF.\n8\n\n\nNatriuretic peptide, such as brain natriuretic peptide (BNP) and N‐terminal pro‐BNP (NT‐proBNP), has been shown to be both a diagnostic and prognostic biomarker for HF.\n9\n Soluble ST2 (sST2) is another biomarker that has been demonstrated to be a good prognostic marker in patients with HF.\n10\n, \n11\n American College of Cardiology (ACC) guideline for management of patients with HF suggests that sST2 may be useful as an additive biomarker for prognosis of patients with HF.\n12\n The objectives of this study were to determine (1) the prognostic value of sST2 for HF and death in patients with AF; (2) the prognostic value of sST2 for HF and death in patients with AF with and without history of HF; and (3) whether the prognostic value of sST2 for HF and death in patients with AF is independent of NT‐proBNP level.", "This is a prospective study. 
Patients who were at least 18 years of age with a diagnosis of nonvalvular AF were prospectively enrolled. The presence of AF was confirmed by 12‐lead electrocardiography (ECG) or ambulatory ECG monitoring. Patients with at least one of the following criteria were excluded: (1) rheumatic mitral stenosis; (2) mechanical heart valve; (3) AF from transient reversible cause, such as pneumonia; (4) pregnancy; (5) life expectancy less than 3 years; (6) unwilling to participate; (7) hospitalization within 1 month before study enrollment; (8) ongoing participation in a clinical trial; and/or (9) inability to attend follow‐up appointments. The protocol for this study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, and all patients gave written informed consent to participate.", "Baseline data were collected and recorded from medical record reviews and patient interviews. Included patients were followed‐up every 6 months for 3 years. 
In addition to study‐related data that were collected at each follow‐up visit, the authors investigated for, determined, and recorded the occurrence of study outcomes (HR or death) that occurred during the preceding six months.\nThe following data were collected: demographic data; type, duration, and symptom of AF; left ventricular ejection fraction (LVEF) from echocardiogram; comorbid conditions, including history of HF, coronary artery disease (CAD), ischemic stroke/transient ischemic attack (TIA), diabetes mellitus (DM), hypertension (HT), dyslipidemia (DLP), smoking, implantable devices, and dementia; medications; and, laboratory data, such as creatinine clearance for the calculation for chronic kidney disease (CKD) and renal replacement therapy (RRT), hematocrit for assessment of anemia, NT‐proBNP, and sST2.", "CAD was defined as the presence of significant stenosis of at least one major coronary artery by coronary angiogram or coronary computed tomography (CT) angiography, or history of documented myocardial infarction or coronary revascularization or positive stress imaging either by nuclear stress test, magnetic resonance imaging, or echocardiography. Anemia was defined as hemoglobin level <13 g/dl for males, and <12 g/dl for females. CKD in this study was defined as CKD stages 3–5 or an estimated glomerular filtration rate (eGFR [ml/min/1.73 m2]) by Chronic Kidney Disease Epidemiology Collaboration (CKD‐EPI) formula less than 60 ml/min.", "sST2 was measured from plasma samples using a high‐sensitivity sandwich monoclonal immunoassay (Presage ST2 Assay, Critical Diagnostics). The sST2 assay had a within‐run coefficient of less than 2.5%, a total coefficient of variation of 4%, and a limit of detection of 1.31 ng/ml. NT‐proBNP was measured from plasma using a commercially available immunoassay (Elecsys NT‐proBNP assay, Roche Diagnostics). eGFR (ml/min/1.73 m2) was calculated using the CKD‐EPI formula.", "The primary outcomes of this study were HF or death. 
We used standard definition for cardiovascular endpoint events proposed by the American College of Cardiology (ACC) and American Heart Association (AHA).\n13\n HF was defined an urgent, unscheduled clinic or emergency department visit or hospital admission, with a primary diagnosis of HF, where the patient exhibits new or worsening symptoms of HF on presentation, has objective evidence of new or worsening HF, and receives initiation or intensification of treatment specifically for HF. Objective evidence consists of at least two physical examination findings OR at least one physical examination finding and at least one laboratory criterion of new or worsening HF on presentation. Death was subcategorized into cardiovascular (CV) death, non‐CV death, or undetermined cause. To minimize the bias, all outcomes were confirmed by a separate adjudication team. The sample size of this registry was enough to determine the differences in outcome between two groups with 90% power.", "Continuous data were compared by the Student's t‐test for unpaired data, and are described as mean ± standard deviation (SD). Categorical data were compared by χ\n2 test or Fisher's exact test, and are described as number and percentage. Clinical outcome data are shown as proportion of outcome in each group, and rate of outcome per 100 person‐years with 95% confidence interval (CI). Kaplan‐Meier estimate was performed to assess the time‐to‐event as the probability of surviving divided by the number of patients at risk. Log‐rank test was performed to compare the difference in survival probability between groups. Univariate and multivariate analysis was performed using Cox proportional hazard function to assess the effect of baseline variables on clinical outcomes. The results are presented as hazard ratio and 95% confidence interval. The primary analysis was based on the sST2 cut‐off derived from receiver operating characteristics (ROC) curve analysis. 
Sensitivity analysis was performed (1) by using median of sST2 as a cut off (2) by comparing four groups of sST2 separated by quartiles (3) by treating sST2 as continuous data and testing the effect of sST2 on heart failure or death, death, and heart failure outcome by cubic spline graph. A p‐value of <.05 was considered statistically significant, and SPSS Statistics software (SPSS, Inc.) and R version 3.6.3 was used to perform data analyses.", "Of the 185 patients that were enrolled, 116 (62.7%) were male. The average age of patients was 68.9 ± 11.0 years, and the average sST2 level was 31.3 ± 19.7 ng/ml (median and interquartile range [IQR]: 26.78 and 18.54–38.38 ng/ml). The average NT‐proBNP level was 2399 ± 6853 pg/ml (median and IQR: 974.4 and 490.9–1841.0 pg/ml).", "The average follow‐up duration was 33.1 ± 6.6 months or 502.2 persons‐year. There were 54 patients with death or heart failure during follow‐up (29 deaths and 33 heart failures). Baseline characteristics of patients with and without clinical outcome are shown in Table 1. Older age, history of HF, CAD, DM, HT, CKD, RRT, anemia, low LVEF, and elevated sST2 level were all found to be significantly associated with an increased risk of HF or death. From ROC analysis the best cut‐off of sST2 for death or heart failure was 30.14 ng/ml (area under the curve of 0.69). Seventy‐three (39.5%) patients had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/dl. The proportion of patients with clinical outcomes compared between patients with sST2 level <30.14 and sST2 level ≥30.14 ng/ml is shown in Figure 1. Sixty‐nine (37.3%) patients in our study had history of HF. The differences between the 2 sST2 groups are also shown in patients with and without history of HF, and in patients with NT‐proBNP <median and ≥median. The median (IQR) rate of HF or death, death, and HF was 10.75 (8.08–14.03), 5.77 (3.87–8.29), 6.57 (4.52–9.23) per 100 persons‐years, respectively. 
The incidence rate of clinical outcomes in patients with sST2 < 30.14 ng/ml and ≥30.14 ng/ml is shown in Table S1. The incidence rate of clinical outcomes was increased in patients with sST2 ≥ 30.14 ng/ml. Table S1 also demonstrated a higher incidence rate of each outcome for patients with sST2 ≥ 30.14 ng/ml regardless of history of heart failure and NT‐proBNP levels.\nBaseline characteristics of NVAF patients compared between those with and without HF or death\n\nNote: Data presented as mean ± standard deviation or number and percentage. The bold values are statistically significant p < .05.\nAbbreviations: ACEI, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; HF, heart failure; NT‐proBNP level, N‐terminal pro‐brain natriuretic peptide; NVAF, non‐valvular atrial fibrillation; TIA, transient ischemic attack.\nRate of heart failure (HF) and death according to soluble sST2 group for (A) all patients, (B) patients with history of HF, (C) patients no history of HF, (D) patients with N‐terminal pro‐brain natriuretic peptide (NT‐proBNP) level ≥median, and (E) patients with NT‐proBNP level <median", "The results of univariate and multivariate Cox proportional analysis are shown as a forest plot in Figure 2. Factors with p‐value <.2 from Table 1 were selected for univariate and multivariate Cox‐proportional Hazard model analysis. Univariate analysis showed history of HF, CAD, DM, RRT, CKD, anemia, LVEF < 50%, NT‐proBNP >median, and sST2 > 30.14 ng/ml to be predictors of HF or death. Subsequent multivariate analysis revealed history of HF, NT‐proBNP >median, and sST2 ≥ 30.14 ng/ml to be independent predictors for HF or death.\nForest plot of univariate and multivariate analysis for factors that predict heart failure or death. 
CAD, coronary artery disease; CI, confidence interval; CKD, chronic kidney disease; LVEF, left ventricular ejection fraction; NT‐proBNP level, N‐terminal pro‐brain natriuretic peptide; sST2, soluble ST2; TIA, transient ischemic attack", "The cumulative event rates of HF or death, HF, and death are shown in Figure 3. The event rate in patients with sST2 ≥ 30.14 ng/ml increased as the follow‐up time increased and significantly different from those with sST2 < 30.14 ng/ml both for unadjusted and adjusted model. Moreover, the distance between the two plots (sST2 ≥ 30.14 and sST2 < 30.14 ng/ml) becomes greater as the follow‐up duration increases.\nCumulative rate of heart failure (HF) or death, death, and HF compared between patients with sST2 level ≥30.14 and <30.14 ng/ml. A–C: unadjusted, D–F: adjusted for confounders", "Sensitivity analysis was performed by treating sST2 as continuous data and testing the effect of sST2 on heart failure or death, death, and heart failure outcome. Restricted cubic spline graph demonstrates that the risk of heart failure or death increased as the levels of sST2 increased both for unadjusted and adjusted model (variables with p < .2 from Table 1 were included in the adjusted model) (Figure 4). Figure S1 demonstrates that there were no significant interactions (interaction test p > .05) between history of heart failure and sST2 levels (Figure S1A–C) and NT‐proBNP levels and sST2 levels (Figure S1D–F) on each of the clinical outcomes.\nCubic spline graph showing hazard ratio and 95% confidence interval (CI) heart failure (HF) or death, death, and HF of sST2 as continuous data with A–C: unadjusted, D–F: adjusted for confounders\nSensitivity analysis was also performed by using median (26.78 ng/ml) of sST2 as a cut‐off and by comparing four groups of sST2 separated by quartiles (1st quartile: <18.54 ng/ml, 2nd quartile: 18.54–26.78 ng/ml, 3rd quartile: 26.78–38.38 ng/ml, 4th quartile: ≥38.38 ng/ml). 
The results are shown in Figure S2.\nSubgroup analysis for the predictive value of sST2 for HF or death showed that sST2 can predict HF or death in patients with AF in the majority of subgroups including age, sex, history of HF, CAD, stroke, hypertension, diabetes, CKD, LVEF, and NT‐proBNP (Figure S3).", "This study has some mentionable limitations. First, the size of our study population is relatively small, so our study may have been insufficiently powered to identify all statistically significant differences and associations. However, we enrolled all eligible non‐valvular AF patients during our study period. Moreover, the sufficient statistical power of our study may be supported by the fact that we found sST2 ≥ 30.14 ng/ml to be significantly and independently associated with death or HF regardless of history of HF or NT‐proBNP level status. Second, our center is a large tertiary care hospital that is often referred more complex cases, so our results may not be immediately generalizable to AF population seeking/receiving treatment at primary care centers. Third and last, sST2 laboratory data were analyzed only at baseline. sST2 remained a significant predictor for clinical outcomes.", "All authors contributed substantially to the following: study conception and design; acquisition or analysis and interpretation of the data; drafting and/or critically revising the article; and, preparing the manuscript for submission to our target journal. All authors are in agreement with both the final version of the manuscript, and the decision to submit this manuscript for journal publication." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study population", "Study protocol and data collection", "Definitions", "Laboratory investigations", "Outcomes", "Statistical analysis", "RESULTS", "Baseline characteristics", "Clinical outcomes", "Univariate and multivariate analysis", "Survival analysis", "Sensitivity analysis and test of interaction effect", "DISCUSSION", "Limitations", "CONCLUSION", "CONFLICT OF INTERESTS", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "Non‐valvular atrial fibrillation (AF) is one of the most common cardiac arrhythmias,\n1\n and the prevalence of AF increases in older adult population.\n2\n Heart failure (HF) is one of the coexisting conditions frequently seen in patients with AF,\n3\n, \n4\n and the prevalence of HF also increases in older adults.\n5\n When AF and HF coexist in a patient, it is often difficult to determine which condition is the cause, and which is the effect.\n6\n Practice guidelines mainly focus on stroke prevention in patients with AF and HF is often overlooked. Results from the Global Anticoagulant Registry in the Field‐Atrial Fibrillation (GARFIELD‐AF) registry, which is a large global registry of patients with newly diagnosed AF, revealed a rate of HF of 2.41 per 100 person‐years, which is greater than the rate of ischemic stroke, major bleeding, and cardiovascular death.\n7\n Recent European Society of Cardiology (ESC) guideline for management of AF emphasizes the treatment of comorbidities, such as hypertension, diabetes, and HF.\n8\n\n\nNatriuretic peptide, such as brain natriuretic peptide (BNP) and N‐terminal pro‐BNP (NT‐proBNP), has been shown to be both a diagnostic and prognostic biomarker for HF.\n9\n Soluble ST2 (sST2) is another biomarker that has been demonstrated to be a good prognostic marker in patients with HF.\n10\n, \n11\n American College of Cardiology (ACC) guideline for management of patients with HF suggests that sST2 may be useful as an additive biomarker for prognosis of patients with HF.\n12\n The objectives of this study were to determine (1) the prognostic value of sST2 for HF and death in patients with AF; (2) the prognostic value of sST2 for HF and death in patients with AF with and without history of HF; and (3) whether the prognostic value of sST2 for HF and death in patients with AF is independent of NT‐proBNP level.", "Study population This is a prospective study. 
Patients who were at least 18 years of age with a diagnosis of nonvalvular AF were prospectively enrolled. The presence of AF was confirmed by 12‐lead electrocardiography (ECG) or ambulatory ECG monitoring. Patients with at least one of the following criteria were excluded: (1) rheumatic mitral stenosis; (2) mechanical heart valve; (3) AF from transient reversible cause, such as pneumonia; (4) pregnancy; (5) life expectancy less than 3 years; (6) unwilling to participate; (7) hospitalization within 1 month before study enrollment; (8) ongoing participation in a clinical trial; and/or (9) inability to attend follow‐up appointments. The protocol for this study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, and all patients gave written informed consent to participate.\nThis is a prospective study. Patients who were at least 18 years of age with a diagnosis of nonvalvular AF were prospectively enrolled. The presence of AF was confirmed by 12‐lead electrocardiography (ECG) or ambulatory ECG monitoring. Patients with at least one of the following criteria were excluded: (1) rheumatic mitral stenosis; (2) mechanical heart valve; (3) AF from transient reversible cause, such as pneumonia; (4) pregnancy; (5) life expectancy less than 3 years; (6) unwilling to participate; (7) hospitalization within 1 month before study enrollment; (8) ongoing participation in a clinical trial; and/or (9) inability to attend follow‐up appointments. The protocol for this study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, and all patients gave written informed consent to participate.\nStudy protocol and data collection Baseline data were collected and recorded from medical record reviews and patient interviews. Included patients were followed‐up every 6 months for 3 years. 
In addition to study-related data collected at each follow-up visit, the authors investigated, determined, and recorded the occurrence of study outcomes (HF or death) during the preceding six months.

The following data were collected: demographic data; type, duration, and symptoms of AF; left ventricular ejection fraction (LVEF) from echocardiogram; comorbid conditions, including history of HF, coronary artery disease (CAD), ischemic stroke/transient ischemic attack (TIA), diabetes mellitus (DM), hypertension (HT), dyslipidemia (DLP), smoking, implantable devices, and dementia; medications; and laboratory data, such as creatinine clearance for the determination of chronic kidney disease (CKD) and renal replacement therapy (RRT), hematocrit for the assessment of anemia, NT-proBNP, and sST2.

Definitions

CAD was defined as the presence of significant stenosis of at least one major coronary artery on coronary angiography or coronary computed tomography (CT) angiography, or a history
of documented myocardial infarction or coronary revascularization, or positive stress imaging by nuclear stress test, magnetic resonance imaging, or echocardiography. Anemia was defined as a hemoglobin level <13 g/dl for males and <12 g/dl for females. CKD was defined as CKD stage 3-5, that is, an estimated glomerular filtration rate (eGFR) less than 60 ml/min/1.73 m2 by the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula.

Laboratory investigations

sST2 was measured from plasma samples using a high-sensitivity sandwich monoclonal immunoassay (Presage ST2 Assay, Critical Diagnostics). The assay had a within-run coefficient of variation of less than 2.5%, a total coefficient of variation of 4%, and a limit of detection of 1.31 ng/ml.
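For reference, the eGFR calculation named in the definitions above can be sketched in Python. This is the published 2009 CKD-EPI creatinine equation, not code from the study; the race coefficient is omitted for brevity, and the example values in the usage note are purely illustrative.

```python
def egfr_ckd_epi(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR (ml/min/1.73 m2) by the 2009 CKD-EPI creatinine equation.

    The race coefficient of the original publication is omitted here.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    return egfr


def has_ckd(scr_mg_dl: float, age_years: float, female: bool) -> bool:
    """CKD as defined in this study: stage 3-5, i.e. eGFR below 60."""
    return egfr_ckd_epi(scr_mg_dl, age_years, female) < 60.0
```

With these inputs, a 70-year-old woman with a serum creatinine of 1.4 mg/dl falls below the 60 ml/min/1.73 m2 threshold and would be classified as having CKD under the study's definition.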
NT-proBNP was measured from plasma using a commercially available immunoassay (Elecsys NT-proBNP assay, Roche Diagnostics). eGFR (ml/min/1.73 m2) was calculated using the CKD-EPI formula.

Outcomes

The primary outcomes of this study were HF or death. We used the standard definitions for cardiovascular endpoint events proposed by the American College of Cardiology (ACC) and American Heart Association (AHA) [13]. HF was defined as an urgent, unscheduled clinic or emergency department visit or hospital admission with a primary diagnosis of HF, in which the patient exhibited new or worsening symptoms of HF on presentation, had objective evidence of new or worsening HF, and received initiation or intensification of treatment specifically for HF. Objective evidence consisted of at least two physical examination findings, or at least one physical examination finding and at least one laboratory criterion, of new or worsening HF on presentation.
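The "objective evidence" clause in the HF definition above is a simple boolean rule; a hypothetical helper (the function name and input counts are ours, not the adjudication team's) makes the logic explicit:

```python
def objective_evidence_of_hf(n_exam_findings: int, n_lab_criteria: int) -> bool:
    """ACC/AHA-style rule quoted above: at least two physical-examination
    findings, OR at least one examination finding plus at least one
    laboratory criterion of new or worsening HF."""
    return n_exam_findings >= 2 or (n_exam_findings >= 1 and n_lab_criteria >= 1)
```

Note that laboratory criteria alone, with no examination finding, do not satisfy the rule.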
Death was subcategorized into cardiovascular (CV) death, non-CV death, or undetermined cause. To minimize bias, all outcomes were confirmed by a separate adjudication team. The sample size of this registry was sufficient to detect differences in outcome between two groups with 90% power.

Statistical analysis

Continuous data were compared by Student's t-test for unpaired data and are described as mean ± standard deviation (SD). Categorical data were compared by the χ2 test or Fisher's exact test and are described as number and percentage.
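The Kaplan-Meier estimate used for the time-to-event analysis multiplies, at each event time, the fraction of at-risk patients who survive that time; a minimal sketch (illustrative only, not the study's code) is:

```python
def kaplan_meier(times, events):
    """Return (event_time, survival_probability) pairs.

    times:  follow-up time for each patient
    events: 1 = event observed, 0 = censored at that time
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        block = [e for tt, e in data if tt == t]  # patients leaving at time t
        d = sum(block)                            # events at time t
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= len(block)
        i += len(block)
    return curve
```

Censored patients shrink the risk set without producing a step, which is why the curve only drops at observed event times.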
Clinical outcome data are shown as the proportion of outcomes in each group and as the rate of outcomes per 100 person-years with 95% confidence interval (CI). Kaplan-Meier estimation was performed to assess time-to-event, with the survival probability calculated from the number of patients remaining at risk. The log-rank test was used to compare survival probability between groups. Univariate and multivariate analyses were performed using the Cox proportional hazards model to assess the effect of baseline variables on clinical outcomes; results are presented as hazard ratios with 95% CIs. The primary analysis was based on the sST2 cut-off derived from receiver operating characteristic (ROC) curve analysis. Sensitivity analyses were performed (1) using the median sST2 as a cut-off; (2) comparing four sST2 groups separated by quartiles; and (3) treating sST2 as continuous data and testing its effect on the outcomes of heart failure or death, death, and heart failure with restricted cubic splines. A p-value <.05 was considered statistically significant, and SPSS Statistics (SPSS, Inc.) and R version 3.6.3 were used for data analyses.

Baseline characteristics

Of the 185 patients enrolled, 116 (62.7%) were male. The average age was 68.9 ± 11.0 years, and the average sST2 level was 31.3 ± 19.7 ng/ml (median and interquartile range [IQR]: 26.78 and 18.54-38.38 ng/ml). The average NT-proBNP level was 2399 ± 6853 pg/ml (median and IQR: 974.4 and 490.9-1841.0 pg/ml).

Clinical outcomes

The average follow-up duration was 33.1 ± 6.6 months, or 502.2 person-years. There were 54 patients with death or heart failure during follow-up (29 deaths and 33 heart failure events). Baseline characteristics of patients with and without a clinical outcome are shown in Table 1. Older age, history of HF, CAD, DM, HT, CKD, RRT, anemia, low LVEF, and elevated sST2 level were all significantly associated with an increased risk of HF or death.
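The ROC-derived cut-off used in the primary analysis can be illustrated on toy data. We assume Youden's J (sensitivity + specificity − 1) as the optimality criterion, since the section does not state which criterion was applied; the values below are synthetic, not study data.

```python
def best_cutoff_youden(values, labels):
    """Scan the observed values as candidate thresholds and return the one
    maximising Youden's J, classifying value >= threshold as 'positive'."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(values)):
        sens = sum(v >= t for v in pos) / len(pos)
        spec = sum(v < t for v in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j


# Synthetic sST2-like values: events (label 1) cluster at higher levels.
vals = [12, 15, 18, 22, 25, 28, 31, 35, 40, 55]
labs = [0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
cutoff, j = best_cutoff_youden(vals, labs)
```

On this toy sample the scan settles on the threshold that captures every event while misclassifying only one non-event.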
From the ROC analysis, the best sST2 cut-off for death or heart failure was 30.14 ng/ml (area under the curve 0.69). Seventy-three patients (39.5%) had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/ml. The proportion of patients with clinical outcomes, compared between patients with sST2 <30.14 ng/ml and ≥30.14 ng/ml, is shown in Figure 1. Sixty-nine patients (37.3%) had a history of HF. The differences between the two sST2 groups are also shown for patients with and without a history of HF, and for patients with NT-proBNP <median and ≥median. The rates (95% CI) of HF or death, death, and HF were 10.75 (8.08-14.03), 5.77 (3.87-8.29), and 6.57 (4.52-9.23) per 100 person-years, respectively. The incidence rates of clinical outcomes in patients with sST2 <30.14 ng/ml and ≥30.14 ng/ml are shown in Table S1; each rate was higher in patients with sST2 ≥30.14 ng/ml, regardless of history of heart failure and NT-proBNP level.

Table 1. Baseline characteristics of NVAF patients compared between those with and without HF or death. Data are presented as mean ± standard deviation or number and percentage; bold values are statistically significant (p < .05). Abbreviations: ACEI, angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; HF, heart failure; NT-proBNP, N-terminal pro-brain natriuretic peptide; NVAF, non-valvular atrial fibrillation; TIA, transient ischemic attack.

Figure 1. Rate of heart failure (HF) and death according to sST2 group for (A) all patients, (B) patients with a history of HF, (C) patients with no history of HF, (D) patients with N-terminal pro-brain natriuretic peptide (NT-proBNP) level ≥median, and (E) patients with NT-proBNP level <median.
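As an arithmetic cross-check, the crude incidence rates reported above follow directly from the event counts and the 502.2 person-years of total follow-up:

```python
PERSON_YEARS = 502.2  # total follow-up reported in the study


def rate_per_100py(n_events: int, person_years: float = PERSON_YEARS) -> float:
    """Crude incidence rate per 100 person-years."""
    return 100.0 * n_events / person_years


hf_or_death = rate_per_100py(54)  # 54 composite events -> ~10.75
death = rate_per_100py(29)        # 29 deaths           -> ~5.77
hf = rate_per_100py(33)           # 33 HF events        -> ~6.57
```

The accompanying 95% CIs in the text (e.g., 8.08-14.03 for the composite outcome) appear consistent with exact Poisson limits on the event counts.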
Univariate and multivariate analysis

The results of the univariate and multivariate Cox proportional hazards analyses are shown as a forest plot in Figure 2. Factors with a p-value <.2 in Table 1 were selected for the univariate and multivariate Cox proportional hazards model analysis. Univariate analysis showed history of HF, CAD, DM, RRT, CKD, anemia, LVEF <50%, NT-proBNP >median, and sST2 >30.14 ng/ml to be predictors of HF or death. Subsequent multivariate analysis revealed history of HF, NT-proBNP >median, and sST2 ≥30.14 ng/ml to be independent predictors of HF or death.

Figure 2. Forest plot of univariate and multivariate analysis for factors that predict heart failure or death. CAD, coronary artery disease; CI, confidence interval; CKD, chronic kidney disease; LVEF, left ventricular ejection fraction; NT-proBNP, N-terminal pro-brain natriuretic peptide; sST2, soluble ST2; TIA, transient ischemic attack.
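The variable-screening step above (any factor with univariate p < .2 enters the multivariate model) is a simple filter; the p-values below are illustrative placeholders, not the study's actual Table 1 values:

```python
# Illustrative univariate p-values only; the study's values are in Table 1.
univariate_p = {
    "history_of_HF": 0.001, "CAD": 0.03, "DM": 0.04, "CKD": 0.01,
    "anemia": 0.08, "LVEF_lt_50": 0.002, "NT_proBNP_gt_median": 0.001,
    "sST2_ge_cutoff": 0.001, "sex_male": 0.45, "dyslipidemia": 0.62,
}

SCREEN_P = 0.2  # entry criterion used in the paper

# Candidate variables carried forward into the multivariate Cox model.
candidates = sorted(k for k, p in univariate_p.items() if p < SCREEN_P)
```

A liberal entry threshold such as .2 is commonly used so that weak but potentially confounding factors are not discarded before joint adjustment.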
Survival analysis

The cumulative event rates of HF or death, HF, and death are shown in Figure 3. The event rate in patients with sST2 ≥30.14 ng/ml increased as follow-up time increased and was significantly different from that in patients with sST2 <30.14 ng/ml, in both the unadjusted and adjusted models. Moreover, the separation between the two curves (sST2 ≥30.14 vs. <30.14 ng/ml) widened as the follow-up duration increased.

Figure 3. Cumulative rate of heart failure (HF) or death, death, and HF compared between patients with sST2 level ≥30.14 and <30.14 ng/ml. A–C: unadjusted; D–F: adjusted for confounders.

Sensitivity analysis and test of interaction effect

Sensitivity analysis was performed by treating sST2 as continuous data and testing its effect on the outcomes of heart failure or death, death, and heart failure.
Restricted cubic spline curves demonstrated that the risk of heart failure or death increased as the sST2 level increased, in both the unadjusted and adjusted models (variables with p < .2 in Table 1 were included in the adjusted model) (Figure 4). Figure S1 demonstrates that there were no significant interactions (interaction test p > .05) between history of heart failure and sST2 level (Figure S1A–C) or between NT-proBNP level and sST2 level (Figure S1D–F) for any of the clinical outcomes.

Figure 4. Cubic spline graph showing hazard ratios and 95% confidence intervals (CI) for heart failure (HF) or death, death, and HF, with sST2 as continuous data. A–C: unadjusted; D–F: adjusted for confounders.

Sensitivity analysis was also performed using the median sST2 (26.78 ng/ml) as a cut-off, and by comparing four groups of sST2 separated by quartiles (1st quartile: <18.54 ng/ml; 2nd quartile: 18.54–26.78 ng/ml; 3rd quartile: 26.78–38.38 ng/ml; 4th quartile: ≥38.38 ng/ml). The results are shown in Figure S2.

Subgroup analysis of the predictive value of sST2 for HF or death showed that sST2 predicted HF or death in the majority of subgroups, including those defined by age, sex, history of HF, CAD, stroke, hypertension, diabetes, CKD, LVEF, and NT-proBNP (Figure S3).
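The restricted cubic spline used in the sensitivity analysis constrains the fitted curve to be linear beyond the outermost knots. A minimal basis construction (Harrell's truncated-power parameterization, with arbitrary illustrative knots; the section does not report the knot locations actually used) is:

```python
def rcs_basis(x: float, knots: list) -> list:
    """Restricted cubic spline basis at x: [x, f_1(x), ..., f_{k-2}(x)].

    Each nonlinear term f_j is a combination of truncated cubes chosen so
    that the cubic and quadratic pieces cancel beyond the outer knots,
    leaving the overall curve linear in both tails.
    """
    k = len(knots)
    t, tk, tk1 = knots, knots[-1], knots[-2]

    def p3(u):  # truncated cube (u)_+^3
        return max(u, 0.0) ** 3

    basis = [x]
    for j in range(k - 2):
        f = (p3(x - t[j])
             - p3(x - tk1) * (tk - t[j]) / (tk - tk1)
             + p3(x - tk) * (tk1 - t[j]) / (tk - tk1))
        basis.append(f)
    return basis
```

Regressing the outcome on these basis columns yields a curve that is cubic between knots and linear in the tails, which is what keeps the spline's hazard-ratio estimates stable at extreme sST2 values.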
This prospective study in patients with nonvalvular AF revealed sST2 to be an independent predictor of death or HF. sST2 was also found to be an independent predictor of HF and death when each of those two study outcomes was considered individually.
The importance of sST2 as an independent predictor of outcome was demonstrated in patients with and without a history of HF, and in patients with NT-proBNP ≥median and <median.

Patients with AF had a 3-fold increased risk of HF, and patients with HF had a 4.5–5.9-fold increased risk of AF [14]. Practice guidelines recommend that the treatment of AF focus not only on prevention of ischemic stroke and on rate and rhythm control, but also on management of comorbidities such as HT and DM [8]. Integrated management of AF patients with oral anticoagulants (OAC) and management of comorbidities have been shown to be associated with better clinical outcomes [15].

The Meta-Analysis Global Group in Chronic Heart Failure (MAGGIC) risk score has been proposed for the prediction of mortality in patients with chronic HF, including both HF with reduced ejection fraction (HFrEF) and HF with preserved ejection fraction (HFpEF) [16]. Moreover, some biomarkers, such as troponin, BNP or NT-proBNP, and sST2, have been shown to improve the performance of models designed to predict risk in patients with HF [12, 17]. Although many data exist on biomarkers and the prognosis of heart failure [18, 19], there are limited data on the prediction of HF, especially in patients with AF. Data from the present study showed history of HF, CKD, and sST2 level ≥30.14 ng/ml (ROC cut-off) to be independent predictors of HF in patients with AF.
NT‐proBNP level has been recommended not only for diagnosis, but also for prognostic assessment in patients with HF. 12 , 17 BNP can be used to predict the risk of HF in high‐risk populations. 20 Natriuretic peptide levels are elevated approximately 20%–30% in patients with AF; therefore, the criteria for diagnosis of HF in patients with AF should differ from those used in patients without AF. 21 Increased BNP levels predict an increased risk of mortality in patients with and without HF. 22 Data from the Fushimi AF registry showed that increased BNP levels in patients with AF without known HF were associated with increased risk of mortality, ischemic stroke, and HF. 23 Data from the same study demonstrated an increased risk of adverse outcome in patients with pre‐existing HF. The results of univariate analysis in our study showed history of HF, CAD, DM, RRT, CKD, anemia, LVEF < 50%, NT‐proBNP >median, and sST2 > 30.14 ng/ml to be predictors of death or HF among patients with AF. Our multivariate analysis revealed history of HF, CKD, and sST2 ≥ 30.14 ng/ml to be independent predictors of HF or death in patients with AF. NT‐proBNP >median was not included in the final multivariate analysis model.

Among patients with HF, a previous study found sST2 level to be a stronger predictor than BNP and troponin‐T levels for future death and HF. 24 Among patients with AF, sST2 levels predict recurrence of AF after RF ablation. 25 In a Chinese population, sST2 was shown to be a predictor of HF risk in patients with AF. 26 Data from a European population with anticoagulated AF showed sST2 to be a marker for increased risk of mortality. 27 The strength of the present study is that we explored both the mortality and HF outcomes, both as a composite and individually. We also performed a separate subgroup analysis in patients with and without history of HF, and in patients with NT‐proBNP levels ≥median and <median.
Our results showed sST2 to be a predictor of HF or death in AF patients regardless of history of HF, and regardless of NT‐proBNP level.

The results of this study suggest several important considerations. First, the risk of HF is high in patients with AF. The rate of HF in AF was even greater than the rate of ischemic stroke/TIA. This finding emphasizes the importance of a management strategy to reduce HF risk. Second, sST2 was shown to be a useful biomarker that can augment clinical data in the prediction of HF. Moreover, the predictive power of sST2 was even greater than that of NT‐proBNP. Third, although we did not have data on sST2‐guided management of HF in patients with AF, previous studies in patients with HF and sinus rhythm showed sST2 level to be significantly associated with HF risk, and that patients with reduced sST2 level after treatment had a better prognosis. 28 , 29

Limitations: This study has some mentionable limitations. First, the size of our study population is relatively small, so our study may have been insufficiently powered to identify all statistically significant differences and associations. However, we enrolled all eligible non‐valvular AF patients during our study period. Moreover, adequate statistical power may be supported by the fact that we found sST2 ≥ 30.14 ng/ml to be significantly and independently associated with death or HF regardless of history of HF or NT‐proBNP level. Second, our center is a large tertiary care hospital that often receives referrals of more complex cases, so our results may not be immediately generalizable to AF populations seeking/receiving treatment at primary care centers. Third and last, sST2 laboratory data were analyzed only at baseline; nevertheless, baseline sST2 remained a significant predictor of clinical outcomes.
CONCLUSIONS: The results of this study revealed sST2 level to be an independent predictor of death or HF in patients with non‐valvular AF, irrespective of history of HF or NT‐proBNP level.

CONFLICT OF INTEREST: All authors declare no personal or professional conflicts of interest, and no financial support from the companies that produce and/or distribute the drugs, devices, or materials described in this report.

AUTHOR CONTRIBUTIONS: All authors contributed substantially to the following: study conception and design; acquisition or analysis and interpretation of the data; drafting and/or critically revising the article; and preparing the manuscript for submission to our target journal. All authors are in agreement with both the final version of the manuscript and the decision to submit this manuscript for journal publication.

SUPPORTING INFORMATION: Supporting information is available as an additional data file.
Keywords: history of heart failure; nonvalvular atrial fibrillation; patients; prognostic significance; soluble ST2 level.
INTRODUCTION: Non‐valvular atrial fibrillation (AF) is one of the most common cardiac arrhythmias, 1 and the prevalence of AF increases in the older adult population. 2 Heart failure (HF) is one of the coexisting conditions frequently seen in patients with AF, 3 , 4 and the prevalence of HF also increases in older adults. 5 When AF and HF coexist in a patient, it is often difficult to determine which condition is the cause and which is the effect. 6 Practice guidelines mainly focus on stroke prevention in patients with AF, and HF is often overlooked. Results from the Global Anticoagulant Registry in the Field‐Atrial Fibrillation (GARFIELD‐AF) registry, which is a large global registry of patients with newly diagnosed AF, revealed a rate of HF of 2.41 per 100 person‐years, which is greater than the rates of ischemic stroke, major bleeding, and cardiovascular death. 7 The recent European Society of Cardiology (ESC) guideline for the management of AF emphasizes the treatment of comorbidities, such as hypertension, diabetes, and HF. 8 Natriuretic peptides, such as brain natriuretic peptide (BNP) and N‐terminal pro‐BNP (NT‐proBNP), have been shown to be both diagnostic and prognostic biomarkers for HF. 9 Soluble ST2 (sST2) is another biomarker that has been demonstrated to be a good prognostic marker in patients with HF. 10 , 11 The American College of Cardiology (ACC) guideline for the management of patients with HF suggests that sST2 may be useful as an additive biomarker for the prognosis of patients with HF. 12 The objectives of this study were to determine (1) the prognostic value of sST2 for HF and death in patients with AF; (2) the prognostic value of sST2 for HF and death in patients with AF with and without history of HF; and (3) whether the prognostic value of sST2 for HF and death in patients with AF is independent of NT‐proBNP level.

METHODS: Study population: This is a prospective study.
Patients who were at least 18 years of age with a diagnosis of nonvalvular AF were prospectively enrolled. The presence of AF was confirmed by 12‐lead electrocardiography (ECG) or ambulatory ECG monitoring. Patients with at least one of the following criteria were excluded: (1) rheumatic mitral stenosis; (2) mechanical heart valve; (3) AF from a transient reversible cause, such as pneumonia; (4) pregnancy; (5) life expectancy less than 3 years; (6) unwillingness to participate; (7) hospitalization within 1 month before study enrollment; (8) ongoing participation in a clinical trial; and/or (9) inability to attend follow‐up appointments. The protocol for this study was approved by the Institutional Review Board (IRB) of the Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand, and all patients gave written informed consent to participate.

Study protocol and data collection: Baseline data were collected and recorded from medical record reviews and patient interviews. Included patients were followed‐up every 6 months for 3 years.
In addition to study‐related data that were collected at each follow‐up visit, the authors investigated for, determined, and recorded the occurrence of study outcomes (HF or death) during the preceding six months. The following data were collected: demographic data; type, duration, and symptoms of AF; left ventricular ejection fraction (LVEF) from echocardiogram; comorbid conditions, including history of HF, coronary artery disease (CAD), ischemic stroke/transient ischemic attack (TIA), diabetes mellitus (DM), hypertension (HT), dyslipidemia (DLP), smoking, implantable devices, and dementia; medications; and laboratory data, such as creatinine clearance for the assessment of chronic kidney disease (CKD) and renal replacement therapy (RRT), hematocrit for assessment of anemia, NT‐proBNP, and sST2.
Definitions: CAD was defined as the presence of significant stenosis of at least one major coronary artery on coronary angiogram or coronary computed tomography (CT) angiography, a history of documented myocardial infarction or coronary revascularization, or positive stress imaging by nuclear stress test, magnetic resonance imaging, or echocardiography. Anemia was defined as hemoglobin level <13 g/dl for males, and <12 g/dl for females. CKD in this study was defined as CKD stages 3–5, that is, an estimated glomerular filtration rate (eGFR [ml/min/1.73 m2]) by the Chronic Kidney Disease Epidemiology Collaboration (CKD‐EPI) formula of less than 60 ml/min.

Laboratory investigations: sST2 was measured from plasma samples using a high‐sensitivity sandwich monoclonal immunoassay (Presage ST2 Assay, Critical Diagnostics). The sST2 assay had a within‐run coefficient of variation of less than 2.5%, a total coefficient of variation of 4%, and a limit of detection of 1.31 ng/ml. NT‐proBNP was measured from plasma using a commercially available immunoassay (Elecsys NT‐proBNP assay, Roche Diagnostics). eGFR (ml/min/1.73 m2) was calculated using the CKD‐EPI formula.
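As an illustration of the renal and hematologic definitions above, the 2009 CKD‐EPI creatinine equation and the study's CKD and anemia cut‐offs can be sketched as follows (a minimal sketch, not study code; function and variable names are ours):

```python
def egfr_ckd_epi(scr_mg_dl, age, female, black=False):
    """eGFR (ml/min/1.73 m2) by the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine constant
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def has_ckd(egfr):
    # Study definition: CKD stages 3-5, i.e. eGFR < 60 ml/min/1.73 m2
    return egfr < 60

def anemic(hb_g_dl, female):
    # Study definition: Hb < 13 g/dl for males, < 12 g/dl for females
    return hb_g_dl < (12 if female else 13)
```

For example, a 40‐year‐old man with serum creatinine 0.9 mg/dl has an eGFR of roughly 106 ml/min/1.73 m2, well above the study's CKD threshold.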
Outcomes: The primary outcomes of this study were HF or death. We used the standard definitions for cardiovascular endpoint events proposed by the American College of Cardiology (ACC) and American Heart Association (AHA). 13 HF was defined as an urgent, unscheduled clinic or emergency department visit or hospital admission, with a primary diagnosis of HF, where the patient exhibits new or worsening symptoms of HF on presentation, has objective evidence of new or worsening HF, and receives initiation or intensification of treatment specifically for HF.
Objective evidence consists of at least two physical examination findings OR at least one physical examination finding and at least one laboratory criterion of new or worsening HF on presentation. Death was subcategorized into cardiovascular (CV) death, non‐CV death, or undetermined cause. To minimize bias, all outcomes were confirmed by a separate adjudication team. The sample size of this registry was sufficient to determine the differences in outcome between two groups with 90% power.

Statistical analysis: Continuous data were compared by the Student's t‐test for unpaired data, and are described as mean ± standard deviation (SD). Categorical data were compared by χ2 test or Fisher's exact test, and are described as number and percentage. Clinical outcome data are shown as the proportion of outcomes in each group, and as the rate of outcomes per 100 person‐years with 95% confidence interval (CI). Kaplan‐Meier estimation was performed to assess time‐to‐event as the probability of surviving divided by the number of patients at risk. The log‐rank test was performed to compare the difference in survival probability between groups. Univariate and multivariate analyses were performed using the Cox proportional hazards model to assess the effect of baseline variables on clinical outcomes. The results are presented as hazard ratio and 95% confidence interval. The primary analysis was based on the sST2 cut‐off derived from receiver operating characteristic (ROC) curve analysis. Sensitivity analyses were performed (1) by using the median of sST2 as a cut‐off, (2) by comparing four groups of sST2 separated by quartiles, and (3) by treating sST2 as continuous data and testing the effect of sST2 on the HF or death, death, and HF outcomes with a cubic spline graph. A p‐value of <.05 was considered statistically significant, and SPSS Statistics software (SPSS, Inc.) and R version 3.6.3 were used to perform data analyses.
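An ROC‐derived cut‐off such as the one used in the primary analysis is commonly obtained by maximizing Youden's J (sensitivity + specificity − 1) over candidate thresholds. The study does not state its exact procedure, so the following pure‐Python sketch on synthetic data is illustrative only:

```python
def youden_cutoff(values, events):
    """Return (threshold, J) maximizing sensitivity + specificity - 1.

    values: biomarker levels; events: 1 if the outcome occurred, else 0.
    Each observed value is tried as a candidate '>= threshold' rule."""
    best_t, best_j = None, -1.0
    pos = sum(events)               # number of events
    neg = len(events) - pos         # number of non-events
    for t in sorted(set(values)):
        tp = sum(1 for v, e in zip(values, events) if v >= t and e == 1)
        tn = sum(1 for v, e in zip(values, events) if v < t and e == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Tiny synthetic illustration: higher levels concentrate among events
levels = [10, 15, 20, 25, 31, 35, 40, 50]
events = [0,  0,  0,  0,  1,  0,  1,  1]
cut, j = youden_cutoff(levels, events)
```

On this toy data the rule selects 31 as the threshold; in practice the cut‐off is read off the ROC curve computed on the full cohort.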
RESULTS: Baseline characteristics: Of the 185 patients that were enrolled, 116 (62.7%) were male. The average age of patients was 68.9 ± 11.0 years, and the average sST2 level was 31.3 ± 19.7 ng/ml (median and interquartile range [IQR]: 26.78 and 18.54–38.38 ng/ml). The average NT‐proBNP level was 2399 ± 6853 pg/ml (median and IQR: 974.4 and 490.9–1841.0 pg/ml).
Clinical outcomes: The average follow‐up duration was 33.1 ± 6.6 months, or 502.2 person‐years. There were 54 patients with death or heart failure during follow‐up (29 deaths and 33 heart failures). Baseline characteristics of patients with and without a clinical outcome are shown in Table 1. Older age, history of HF, CAD, DM, HT, CKD, RRT, anemia, low LVEF, and elevated sST2 level were all found to be significantly associated with an increased risk of HF or death. From ROC analysis, the best cut‐off of sST2 for death or heart failure was 30.14 ng/ml (area under the curve of 0.69). Seventy‐three (39.5%) patients had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/ml. The proportion of patients with clinical outcomes compared between patients with sST2 level <30.14 and ≥30.14 ng/ml is shown in Figure 1. Sixty‐nine (37.3%) patients in our study had history of HF. The differences between the two sST2 groups are also shown for patients with and without history of HF, and for patients with NT‐proBNP <median and ≥median. The rate (95% CI) of HF or death, death, and HF was 10.75 (8.08–14.03), 5.77 (3.87–8.29), and 6.57 (4.52–9.23) per 100 person‐years, respectively. The incidence rate of clinical outcomes in patients with sST2 < 30.14 ng/ml and ≥30.14 ng/ml is shown in Table S1. The incidence rate of clinical outcomes was increased in patients with sST2 ≥ 30.14 ng/ml. Table S1 also demonstrates a higher incidence rate of each outcome for patients with sST2 ≥ 30.14 ng/ml regardless of history of heart failure and NT‐proBNP level.
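As a worked check, the event rates above follow directly from the reported event counts and total follow‐up time:

```python
def rate_per_100py(n_events, person_years):
    # incidence rate per 100 person-years of follow-up
    return n_events / person_years * 100

person_years = 502.2                                  # total follow-up reported above
rate_hf_or_death = rate_per_100py(54, person_years)   # ~10.75 per 100 person-years
rate_death = rate_per_100py(29, person_years)         # ~5.77
rate_hf = rate_per_100py(33, person_years)            # ~6.57
```

These reproduce the point estimates reported above; the accompanying intervals come from the 95% CI of the Poisson rate, which this sketch does not compute.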
Table 1. Baseline characteristics of NVAF patients compared between those with and without HF or death. Note: Data presented as mean ± standard deviation or number and percentage. Bold values are statistically significant at p < .05. Abbreviations: ACEI, angiotensin converting enzyme inhibitor; ARB, angiotensin receptor blocker; HF, heart failure; NT‐proBNP, N‐terminal pro‐brain natriuretic peptide; NVAF, non‐valvular atrial fibrillation; TIA, transient ischemic attack.

Figure 1. Rate of heart failure (HF) and death according to sST2 group for (A) all patients, (B) patients with history of HF, (C) patients with no history of HF, (D) patients with N‐terminal pro‐brain natriuretic peptide (NT‐proBNP) level ≥median, and (E) patients with NT‐proBNP level <median.
Univariate and multivariate analysis: The results of univariate and multivariate Cox proportional hazards analysis are shown as a forest plot in Figure 2. Factors with p‐value <.2 from Table 1 were selected for univariate and multivariate Cox proportional hazards model analysis. Univariate analysis showed history of HF, CAD, DM, RRT, CKD, anemia, LVEF < 50%, NT‐proBNP >median, and sST2 ≥ 30.14 ng/ml to be predictors of HF or death. Subsequent multivariate analysis revealed history of HF, NT‐proBNP >median, and sST2 ≥ 30.14 ng/ml to be independent predictors of HF or death.

Figure 2. Forest plot of univariate and multivariate analysis for factors that predict heart failure or death.
CAD, coronary artery disease; CI, confidence interval; CKD, chronic kidney disease; LVEF, left ventricular ejection fraction; NT‐proBNP, N‐terminal pro‐brain natriuretic peptide; sST2, soluble ST2; TIA, transient ischemic attack.

Survival analysis: The cumulative event rates of HF or death, HF, and death are shown in Figure 3. The event rate in patients with sST2 ≥ 30.14 ng/ml increased as the follow‐up time increased, and was significantly different from that in patients with sST2 < 30.14 ng/ml in both the unadjusted and adjusted models.
Sensitivity analysis and test of interaction effect: Sensitivity analysis was performed by treating sST2 as continuous data and testing the effect of sST2 on the outcomes of heart failure or death, death, and heart failure. The restricted cubic spline graph demonstrates that the risk of heart failure or death increased as the level of sST2 increased in both the unadjusted and adjusted models (variables with p < .2 from Table 1 were included in the adjusted model) (Figure 4). Figure S1 demonstrates that there were no significant interactions (interaction test p > .05) between history of heart failure and sST2 levels (Figure S1A–C) or between NT‐proBNP levels and sST2 levels (Figure S1D–F) on any of the clinical outcomes. Cubic spline graph showing hazard ratio and 95% confidence interval (CI) for heart failure (HF) or death, death, and HF with sST2 as continuous data. A–C: unadjusted, D–F: adjusted for confounders. Sensitivity analysis was also performed by using the median (26.78 ng/ml) of sST2 as a cut‐off and by comparing four groups of sST2 separated by quartiles (1st quartile: <18.54 ng/ml, 2nd quartile: 18.54–26.78 ng/ml, 3rd quartile: 26.78–38.38 ng/ml, 4th quartile: ≥38.38 ng/ml). The results are shown in Figure S2. Subgroup analysis for the predictive value of sST2 for HF or death showed that sST2 can predict HF or death in patients with AF in the majority of subgroups, including age, sex, history of HF, CAD, stroke, hypertension, diabetes, CKD, LVEF, and NT‐proBNP (Figure S3).
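The quartile cut-offs used in the sensitivity analysis (18.54, 26.78, and 38.38 ng/ml) are the 25th, 50th, and 75th percentiles of the observed sST2 distribution. A sketch of how such cut-offs and group assignments can be derived with the Python standard library, using hypothetical values rather than the study data:

```python
import statistics

# Hypothetical sST2 measurements (ng/ml) -- illustrative only, not study data
sst2 = [12.1, 15.0, 18.6, 20.2, 24.9, 26.8, 27.5, 31.0, 38.4, 44.7, 52.3, 60.0]

# 25th, 50th, and 75th percentile cut-offs
q1, q2, q3 = statistics.quantiles(sst2, n=4, method="inclusive")

def quartile_group(value):
    """Assign a value to quartile group 1-4 using the computed cut-offs."""
    if value < q1:
        return 1
    if value < q2:
        return 2
    if value < q3:
        return 3
    return 4

groups = [quartile_group(v) for v in sst2]
print(q1, q2, q3, groups)
```

The `inclusive` method interpolates linearly between order statistics, matching the common "linear" percentile definition.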
Baseline characteristics: Of the 185 patients that were enrolled, 116 (62.7%) were male. The average age of patients was 68.9 ± 11.0 years, and the average sST2 level was 31.3 ± 19.7 ng/ml (median and interquartile range [IQR]: 26.78 and 18.54–38.38 ng/ml). The average NT‐proBNP level was 2399 ± 6853 pg/ml (median and IQR: 974.4 and 490.9–1841.0 pg/ml). Clinical outcomes: The average follow‐up duration was 33.1 ± 6.6 months, or 502.2 person‐years. There were 54 patients with death or heart failure during follow‐up (29 deaths and 33 heart failure events). Baseline characteristics of patients with and without a clinical outcome are shown in Table 1.
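The incidence rates reported for this cohort are crude rates: event counts divided by total person-time. Using the counts given above (54 deaths or HF events, 29 deaths, and 33 HF events over 502.2 person-years), the calculation can be sketched as:

```python
def rate_per_100py(events, person_years):
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100 * events / person_years

# Counts reported in the text: 502.2 person-years of total follow-up
print(round(rate_per_100py(54, 502.2), 2))  # HF or death -> 10.75
print(round(rate_per_100py(29, 502.2), 2))  # death       -> 5.77
print(round(rate_per_100py(33, 502.2), 2))  # HF          -> 6.57
```

These reproduce the per-100-person-year rates reported in this section.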
Older age, history of HF, CAD, DM, HT, CKD, RRT, anemia, low LVEF, and elevated sST2 level were all found to be significantly associated with an increased risk of HF or death. From ROC analysis, the best cut‐off of sST2 for death or heart failure was 30.14 ng/ml (area under the curve: 0.69). Seventy‐three (39.5%) patients had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/ml. The proportion of patients with clinical outcomes compared between patients with sST2 level <30.14 and sST2 level ≥30.14 ng/ml is shown in Figure 1. Sixty‐nine (37.3%) patients in our study had a history of HF. The differences between the two sST2 groups are also shown in patients with and without history of HF, and in patients with NT‐proBNP <median and ≥median. The rate (95% CI) of HF or death, death, and HF was 10.75 (8.08–14.03), 5.77 (3.87–8.29), and 6.57 (4.52–9.23) per 100 person‐years, respectively.
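The 30.14 ng/ml cut-off was derived from ROC analysis. One common criterion for choosing such a cut-off, shown here as an illustrative sketch with invented data (not necessarily the authors' exact criterion), is the Youden index, which maximizes sensitivity + specificity − 1 over candidate thresholds:

```python
def youden_cutoff(values, labels):
    """Choose the threshold maximizing Youden's J = sensitivity + specificity - 1.

    values: biomarker levels; labels: 1 if the outcome occurred, else 0."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Invented sST2 levels and outcome labels -- illustrative only
values = [12, 18, 22, 25, 28, 31, 35, 40, 48, 55]
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
print(youden_cutoff(values, labels))
```

Each observed value is tried as a "positive if ≥ cut" threshold, and the one with the best sensitivity/specificity trade-off is returned.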
DISCUSSION: This prospective study in patients with nonvalvular AF revealed sST2 to be an independent predictor of death or HF. sST2 was also found to be an independent predictor of HF and of death when each of these two study outcomes was considered individually. The importance of sST2 as an independent predictor of outcome was demonstrated in patients with and without a history of HF, and in patients with NT‐proBNP ≥median and <median.
Patients with AF had a 3‐fold increased risk of HF, and patients with HF had a 4.5–5.9‐fold increased risk of AF. 14 Practice guidelines recommend that the treatment of AF focus not only on prevention of ischemic stroke and on rate and rhythm control, but also on the management of comorbidities, such as HT and DM. 8 Integrated management of AF patients with oral anticoagulants (OAC) and management of comorbidities have been shown to be associated with better clinical outcomes. 15 The Meta‐Analysis Global Group in Chronic Heart Failure (MAGGIC) risk score has been proposed for the prediction of mortality in patients with chronic HF, including both HF with reduced ejection fraction (HFrEF) and HF with preserved ejection fraction (HFpEF). 16 Moreover, some biomarkers, such as troponin, BNP or NT‐proBNP, and sST2, have been shown to improve the performance of models designed to predict the risk of patients with HF. 12 , 17 Although there are many data on biomarkers and the prognosis of heart failure, 18 , 19 there were limited data on the prediction of HF, especially in patients with AF. Data from the present study showed history of HF, CKD, and sST2 level ≥30.14 ng/ml (ROC cut‐off) to be independent predictors of HF in patients with AF. NT‐proBNP level has been recommended not only for diagnosis, but also for prognostic assessment in patients with HF. 12 , 17 BNP can be used to predict the risk of HF in high‐risk populations. 20 Natriuretic peptide levels are elevated by approximately 20%–30% in patients with AF; therefore, the criteria for diagnosis of HF in patients with AF should be different from those used for diagnosis of HF in patients without AF. 21 Increased BNP levels predict an increased risk of mortality in patients with and without HF. 22 Data from the Fushimi AF registry showed that increased BNP levels in patients with AF without known HF were associated with increased risk of mortality, ischemic stroke, and HF.
23 Data from the same study demonstrated an increased risk of adverse outcomes in patients with pre‐existing HF. The results of univariate analysis in our study showed history of HF, CAD, DM, RRT, CKD, anemia, LVEF < 50%, NT‐proBNP >median, and sST2 ≥ 30.14 ng/ml to be predictors of death or HF among patients with AF. Our multivariate analysis revealed history of HF, CKD, and sST2 ≥ 30.14 ng/ml to be independent predictors of HF or death in patients with AF. NT‐proBNP >median was not included in the final multivariate analysis model. Among patients with HF, a previous study found sST2 level to be stronger than BNP and troponin‐T levels for predicting future death and HF. 24 Among patients with AF, sST2 levels predict recurrence of AF after RF ablation. 25 In a Chinese population, sST2 was shown to be a predictor of HF risk in patients with AF. 26 Data from a European population with anticoagulated AF showed sST2 to be a marker of increased mortality risk. 27 The strength of the present study is that we explored both mortality and HF outcomes, both as a composite and individually. We also performed a separate subanalysis in patients with and without a history of HF, and in patients with NT‐proBNP levels ≥median and <median. Our results showed sST2 to be a predictor of HF or death in AF patients regardless of history of HF and regardless of NT‐proBNP level. The results of this study suggest several important considerations. First, the risk of HF is high in patients with AF. The rate of HF in AF was even greater than the rate of ischemic stroke/TIA. This finding emphasizes the importance of a management strategy to reduce HF risk. Second, sST2 was shown to be a useful biomarker that can augment clinical data in the prediction of HF. Moreover, the predictive power of sST2 was even greater than that of NT‐proBNP.
Third, although we did not have data on sST2‐guided management of HF in patients with AF, previous studies in patients with HF and sinus rhythm showed sST2 level to be significantly associated with reduced HF risk, and that patients with a reduced sST2 level after treatment had a better prognosis. 28 , 29 Limitations: This study has some limitations worth mentioning. First, the size of our study population is relatively small, so our study may have been insufficiently powered to identify all statistically significant differences and associations. However, we enrolled all eligible non‐valvular AF patients during our study period. Moreover, the adequacy of our statistical power may be supported by the fact that we found sST2 ≥ 30.14 ng/ml to be significantly and independently associated with death or HF regardless of history of HF or NT‐proBNP level status. Second, our center is a large tertiary care hospital that often receives referrals of more complex cases, so our results may not be immediately generalizable to AF populations seeking/receiving treatment at primary care centers. Third and last, sST2 laboratory data were analyzed only at baseline. Nevertheless, baseline sST2 remained a significant predictor of clinical outcomes.
CONCLUSION: The results of this study revealed sST2 level to be an independent predictor of death or HF in patients with non‐valvular AF, irrespective of history of HF or NT‐proBNP levels. CONFLICT OF INTERESTS: All authors declare no personal or professional conflicts of interest, and no financial support from the companies that produce and/or distribute the drugs, devices, or materials described in this report. AUTHOR CONTRIBUTIONS: All authors contributed substantially to the following: study conception and design; acquisition, analysis, and interpretation of the data; drafting and/or critically revising the article; and preparing the manuscript for submission to our target journal. All authors are in agreement with both the final version of the manuscript and the decision to submit this manuscript for journal publication. Supporting information.
Background: Biomarkers may be useful for predicting heart failure (HF) or death in patients with atrial fibrillation (AF). Methods: This is a prospective study of patients with nonvalvular AF. The clinical outcomes were HF or death. Clinical and laboratory data were compared between those with and without clinical outcomes. Univariate and multivariate analyses were performed to determine whether soluble ST2 (sST2) is an independent predictor of heart failure or death in patients with nonvalvular AF. Results: A total of 185 patients (mean age: 68.9 ± 11.0 years) were included; 116 (62.7%) were male. The average sST2 and N-terminal pro-brain natriuretic peptide (NT-proBNP) levels were 31.3 ± 19.7 ng/ml and 2399.5 ± 6853.0 pg/ml, respectively. The best receiver operating characteristic (ROC) cut-off of sST2 for predicting HF or death was 30.14 ng/ml. Seventy-three (39.5%) patients had an sST2 level ≥30.14 ng/ml, and 112 (60.5%) had an sST2 level <30.14 ng/ml. The average follow-up was 33.1 ± 6.6 months. Twenty-nine (15.7%) patients died, and 33 (17.8%) developed HF during follow-up. Multivariate analysis revealed high sST2 to be an independent risk factor for death or HF, with an HR (95% CI) of 2.60 (1.41-4.78). The predictive value of sST2 was better than that of NT-proBNP, and it remained significant in AF patients irrespective of history of HF and NT-proBNP levels. Conclusions: sST2 is an independent predictor of death or HF in patients with AF irrespective of history of HF or NT-proBNP levels.
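The hazard ratio and confidence interval quoted in the Results (2.60, 95% CI 1.41-4.78) come from a Cox model coefficient by exponentiation: HR = exp(β) and CI = exp(β ± 1.96·SE). A minimal sketch, where β and SE are illustrative values chosen to give a similar (not identical) HR, not the fitted values from the study:

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Convert a Cox log-hazard coefficient (beta) and its standard error (se)
    into a hazard ratio with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# beta and se below are illustrative assumptions, not values reported by the study
hr, lower, upper = hazard_ratio_ci(0.956, 0.31)
print(f"HR {hr:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```

Because the CI is built on the log-hazard scale and then exponentiated, it is asymmetric around the HR.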
INTRODUCTION: Non‐valvular atrial fibrillation (AF) is one of the most common cardiac arrhythmias, 1 and the prevalence of AF increases in the older adult population. 2 Heart failure (HF) is one of the coexisting conditions frequently seen in patients with AF, 3 , 4 and the prevalence of HF also increases in older adults. 5 When AF and HF coexist in a patient, it is often difficult to determine which condition is the cause and which is the effect. 6 Practice guidelines mainly focus on stroke prevention in patients with AF, and HF is often overlooked. Results from the Global Anticoagulant Registry in the Field‐Atrial Fibrillation (GARFIELD‐AF) registry, a large global registry of patients with newly diagnosed AF, revealed a rate of HF of 2.41 per 100 person‐years, which is greater than the rates of ischemic stroke, major bleeding, and cardiovascular death. 7 The recent European Society of Cardiology (ESC) guideline for the management of AF emphasizes the treatment of comorbidities, such as hypertension, diabetes, and HF. 8 Natriuretic peptides, such as brain natriuretic peptide (BNP) and N‐terminal pro‐BNP (NT‐proBNP), have been shown to be both diagnostic and prognostic biomarkers for HF. 9 Soluble ST2 (sST2) is another biomarker that has been demonstrated to be a good prognostic marker in patients with HF. 10 , 11 The American College of Cardiology (ACC) guideline for the management of patients with HF suggests that sST2 may be useful as an additive biomarker for the prognosis of patients with HF. 12 The objectives of this study were to determine (1) the prognostic value of sST2 for HF and death in patients with AF; (2) the prognostic value of sST2 for HF and death in patients with AF with and without history of HF; and (3) whether the prognostic value of sST2 for HF and death in patients with AF is independent of NT‐proBNP level.
Keywords: history of heart failure | nonvalvular atrial fibrillation | patients | prognostic significance | soluble ST2 level
MeSH terms: Aged | Atrial Fibrillation | Biomarkers | Female | Heart Failure | Humans | Interleukin-1 Receptor-Like 1 Protein | Male | Middle Aged | Natriuretic Peptide, Brain | Peptide Fragments | Prognosis | Prospective Studies
Treatment Seeking Behavior, Treatment Cost and Quality of Life of Head and Neck Cancer Patients: A Cross-Sectional Analytical Study from South India.
PMID: 34582675
Background: Head and neck cancers constitute one-third of all cancers. Due to the complex nature of head and neck cancer treatment, expenditure on cancer treatment is higher, and the quality of life of patients is also compromised. The objectives of the study were to determine the time taken by patients to seek care from registered medical practitioners, the time to definitive diagnosis and treatment initiation, the expenditure incurred, and quality of life. Methods: The present study was a cross-sectional descriptive study involving outpatients with head and neck cancer who reported to the department of radiotherapy, regional cancer centre, JIPMER. Quality of life was assessed using the validated FACT-H&N scale. Results: The preferred first contact for seeking care for most patients was the private sector (52%). The median (IQR) presentation interval, diagnostic interval, and treatment initiation interval were 36.5 (16-65.7), 14 (7-31.5), and 65.5 (45-104) days, respectively. The average indirect cost incurred was INR 8424 (4095-16570) in JIPMER, which was spent over an average duration of 240 days. The median (IQR) wage loss by the patients and/or caregivers was INR 18000 (5250-61575). The source of expenditure was mainly family savings (56%). Functional well-being was severely impaired. Patients who were employed, were heads of their families, or had early-stage cancer had a significantly better quality of life. Conclusion: The majority of the patients were diagnosed at the regional cancer centre, JIPMER, although their preferred first point of contact was private practitioners. The average time interval from diagnosis to treatment initiation was more than two months. The expenditure during treatment was mainly due to indirect costs and wage loss. Functional quality of life was severely impaired for the majority of cases.
MeSH terms: Adult | Cross-Sectional Studies | Female | Head and Neck Neoplasms | Health Care Costs | Humans | India | Male | Middle Aged | Patient Acceptance of Health Care | Quality of Life
PMCID: 8850892
Introduction
Cancer is the second most common cause of death, following cardiovascular disorders (Roth et al., 2018). Globally, 14.1 million new cancer cases and 8.2 million deaths occur every year, and 32.5 million people are living with cancer (World Health Organization, 2014b). Head and neck cancers (HNC) account for 23% of all cancer cases (Dikshit et al., 2012). They arise from the mucosal lining (squamous cells) and include oral and oropharyngeal carcinoma, nasal and nasopharyngeal carcinoma, and hypopharyngeal carcinoma (Shah and Lydiatt, 1995). HNC is the most common cancer among males and the third most common among females. The Indian Council of Medical Research (ICMR) has estimated that approximately 0.20 to 0.25 million new head and neck cancer patients are diagnosed every year (National Cancer Registry Programme, Indian Council of Medical Research, 2016), and this constitutes about 30% of all incident cancers. India has the highest rate of oropharyngeal cancers, accounting for 30-40% of all cancers (National Cancer Registry Programme, Indian Council of Medical Research, 2016), and its mortality was 18% in males and 7% in females (World Health Organization, 2014a). The International Agency for Research on Cancer (IARC) estimated that the incidence of cancer would sharply increase by 50% by 2020 (Ferlay and Soerjomataram, 2015); the reasons behind this are increasing life expectancy and an aging population worldwide. This prediction was made considering the current trend of increasing tobacco consumption and the adoption of unhealthy lifestyles (World Health Organization, 2014b). Early diagnosis and timely initiation of treatment of head and neck cancers improve survival, lower the cost of care, and result in retention of a better quality of life (Kumar et al., 2012). Most patients experience a drop in their income while undergoing diagnosis and treatment.
During treatment, indirect costs are a major burden to patients, increasing their financial stress, and can drive many families into economic catastrophe (Kavosi et al., 2014; Sharp and Timmons, 2010; Nair et al., 2013). In India, public health facilities provide free or subsidized treatment. Patients usually initiate care in the private sector, because of perceived better treatment and perceived better chances of survival, before they start seeking care in a public facility (Nair et al., 2013). Quality of life (QOL) is a multidimensional concept measuring the physical, social/familial, emotional, and functional wellbeing of an individual (Webster et al., 2003). HNC can affect the quality of life of an individual by affecting normal speech, breathing, and eating, and through disfigurement (Bernier et al., 2004). In India, the literature on the QOL of patients with HNC and the time taken for seeking care, getting diagnosed, and being treated is limited. Against this background, we planned to conduct a study among the head and neck cancer patients who attended the department of radiation oncology; the objectives were 1) to determine the time intervals in presentation, diagnosis, and treatment initiation and the various pathways of care sought before reaching our tertiary care facility, 2) to estimate their treatment cost and the sources of their health expenditure, and 3) to identify the socio-demographic and clinical factors associated with quality of life (QOL).
null
null
Results
A total of 192 patients out of the eligible 195 were recruited; the response rate was 98%. The majority of the head and neck cancer patients were aged between 45-59 years (90, 46.9%), male (128, 66.7%), from rural areas (133, 69.3%), unemployed (142, 74.0%), and belonged to the lower middle class (77, 40.1%). The common sites of cancer were the oral cavity and oropharynx (146, 76%), and the majority presented with stage IV cancer (124, 64.6%), as shown in Table 1. The median (IQR) presentation interval, diagnostic interval, and treatment initiation interval were 36.5 (16-65.7), 14 (7-31.5), and 65.5 (45-104) days respectively, as shown in Table 2. The majority (87%) of the patients visited at least one health care provider before reaching the department. Private sector clinics/hospitals were preferred by 52% of the patients for initial consultation (Figure 1). A definite diagnosis of cancer was made in our tertiary care facility for almost 90% of cases. The median (IQR) total direct cost, among those who had ever spent on their treatment services, was INR 2400 (700-7300). This estimated total cost in private facilities was spent over a median (IQR) period of 7 (1-20) days. The median direct cost incurred by head and neck cancer patients in our centre and other government facilities was nil. Table 3 shows the expenditure on food and lodgement by patients and their caregivers during their diagnosis and treatment in different facilities. The patients and their caregivers had sought care in the studied health care facility for a median period of 240 days, and the total median (IQR) wage loss was INR 18000 (5250-61575) during this period. The major sources of expenditure were family savings (56%), the sale of assets (22%), and borrowed money (22%). The mean Quality of Life (QOL) score was lowest for functional well-being, which was categorized as severely impaired. 
Other domains of QOL (i.e., physical, social, emotional, and head and neck specific) and the summary scales (FACT-G and FACT HandN) had mean scores in the moderate range. Better QOL was significantly associated with occupation, being the head of the family, and the site and early stage of cancer (Table 4). Discussion: The present study highlighted the treatment-seeking behavior, treatment cost, and quality of life of head and neck cancer patients. The majority of the patients were male, aged more than 45 years, and presented with oral and oropharyngeal cancer in an advanced stage (III and IV), which was similar to various other studies in India (World Health Organization, 2014a; Deka et al., 2015; Mohanti et al., 2007). The preferred first contact for seeking care was the private sector (54%), followed by the government sector (30%). This finding is in contrast to an earlier study conducted in five hospitals across India, which had reported that cancer patients' interest and faith were more inclined towards the government sector (47%) than the private sector (45%) (Joshi et al., 2014). Around 11% of the patients reported to our centre directly; another study conducted in a cancer hospital also reported that a smaller proportion of study patients (7%) initially report to cancer hospitals (Kumar et al., 2012). The first point of contact for one-fourth of the patients was the primary/community health center. The definite diagnosis of cancer was made in our tertiary care centre for 90% of the cases. This may be due to the fact that ours is a regional cancer centre which is well equipped for making the diagnosis. People also visit the center for economic reasons, as diagnosis and treatment are almost free at this center. The median presentation interval, diagnostic interval, and treatment initiation interval in our study were 36.5, 15, and 66 days respectively. 
A study conducted in a centre comparable to ours also reported similar findings, except that the diagnostic interval there was two times longer than in our study (Kumar et al., 2012). All direct costs related to consultation, diagnosis, and treatment are free of charge in our centre. Patients from the nearby state of Tamil Nadu avail services through the Chief Minister's Comprehensive Health Insurance Scheme (CMCHIS), under which individuals with an annual income of less than INR 72000 (~1006 $) can avail free treatment services. The majority of the study patients (82%) were from Tamil Nadu state and of low socio-economic status, insured by CMCHIS ("Chief Minister's Comprehensive Health Insurance Scheme," n.d.). The average direct cost ever spent by patients in public (7 patients) and private facilities (89 patients) was INR 1000 (400-5,000) and 1,600 (500-6,533) for an average of 14 and 7 days of treatment respectively. The average direct cost ever paid to quacks was INR 5000 (725-7,500) over an average of 30 days of seeking care. The out-of-pocket expenses during treatment were mostly due to indirect costs. A study on the economic burden of cancer conducted in a similar setting in Delhi also observed that approximately 60% of patient expenditure was on transportation, food, and lodging during treatment (Mukhopadhyay et al., 2011). A study among 508 cancer patients (all types of cancer) conducted in tertiary care centers of five major cities (Aizawl, Bikaner, Kolkata, Thiruvananthapuram, and Mumbai) of India in 2011 showed that the mean cost of investigation, treatment, and indirect expenses over a period of one year was INR 16,739, INR 41,311, and INR 27,248 respectively (Nair et al., 2013). In our study the indirect cost incurred by patients was comparatively less (one-third). The difference may be due to the concessional transport scheme available for patients who came from Tamil Nadu, and approximately 72% of the participants were in their first year of treatment. 
A study among 100 oral cancer patients in a private tertiary hospital stated that the direct costs varied according to the stage of oral cancer, and the total direct cost was INR 146092 (72401-228919), which is much higher compared to our study (Goyal et al., 2014). The treatment cost estimated in our study may not be representative of head and neck cancer patients seeking care in a private setting. Considering the Quality of Life (QOL) of the patients, there was severe impairment in functional wellbeing, whereas there was moderate impairment in the other dimensions. A study at the All India Institute of Medical Sciences, Delhi stated that functional scores decline during treatment and for those having disease-related symptoms (like pain, fatigue, nausea, etc.), which increase during the course of treatment (Bansal et al., 2004). Patients who were employed, were heads of their families, and were in the early stage of nasal, nasopharyngeal, parotid, and thyroid cancer had significantly better QOL than other cancer patients. A study from Karachi, Pakistan, which used the same scale (FACT-G and HandN scale), also found a significant association of occupation, stage, and site of cancer with QOL (Bilal et al., 2015). We could not find a significant association between QOL and various demographic variables or socioeconomic status. In contrast to our study, a study conducted at the Regional Cancer Centre, Trivandrum using the FACT-G scale reported that patients with higher socioeconomic status had better QOL (Thomas et al., 2004). Limitation In our study, for the patients who were not able to speak due to their illness, the QOL assessment was based on the responses of their caregivers; thus, there is a possibility of information bias. Patients were asked about their symptom recognition and pathway of care, so there is a chance of recall bias. 
Since the study was conducted in a hospital setting, the characteristics of cancer patients seeking care from the hospital may differ from those of cancer patients in the community, owing to Berksonian bias. The treatment cost estimated in our study may not apply to all head and neck cancer patients, especially those seeking care in a private setting. Strength The present study tried to identify important aspects of the treatment of head and neck cancer patients. A validated Tamil version of the tool was used for collecting information related to the quality of life of patients. In conclusion, the preferred healthcare provider was the private sector, as reflected in the pathways of care, and the majority of patients visited at least one provider before reaching the tertiary care facility. The average treatment initiation interval was more than two months. The expenditure was mostly on indirect costs; initially, patients and their caregivers spent from their own savings, but at a later stage they started selling their assets and ultimately ended up borrowing money for their treatment. Their overall quality of life was moderately impaired. Screening and referral mechanisms at primary/community health centers can reduce the presentation time interval, as has already been initiated by the National Programme for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases and Stroke. Further research is needed to understand the physical, social, familial, and functional quality of life in relation to different disease parameters.
null
null
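The three time intervals reported in the results (presentation, diagnostic, and treatment initiation) are defined in the methods as consecutive date differences along a patient's care pathway. A minimal Python sketch of how such intervals could be computed from recorded dates; the function and field names are illustrative, not taken from the study's actual questionnaire:

```python
from datetime import date

def care_intervals(symptom_onset, first_consult, diagnosis, treatment_start):
    """Compute the three care-seeking intervals (in days) described in the study.

    All arguments are datetime.date objects. Names are illustrative assumptions:
    - presentation: symptom onset -> first registered medical practitioner visit
    - diagnostic: first consultation -> definitive cancer diagnosis
    - treatment_initiation: definitive diagnosis -> definitive treatment start
    """
    return {
        "presentation": (first_consult - symptom_onset).days,
        "diagnostic": (diagnosis - first_consult).days,
        "treatment_initiation": (treatment_start - diagnosis).days,
    }

# Hypothetical patient record for illustration
intervals = care_intervals(date(2016, 1, 1), date(2016, 2, 6),
                           date(2016, 2, 20), date(2016, 4, 25))
```

Each interval is a plain day count, matching how the medians in Table 2 are reported.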
[ "Author Contribution Statement" ]
[ "None." ]
[ null ]
[ "Introduction", "Materials and Methods", "Results", "Author Contribution Statement" ]
[ "Cancer is the second most common cause of death, following cardiovascular disorders (Roth et al., 2018). Globally, 14.1 million new cancer cases and 8.2 million deaths occur every year, and 32.5 million people are living with cancer (World Health Organization, 2014b). Head and neck cancers (HNC) account for 23% of all cancer cases (Dikshit et al., 2012). They arise from the mucosal lining (squamous cell) and include oral and oropharyngeal carcinoma, nasal and nasopharyngeal carcinoma, and hypopharyngeal carcinoma (Shah and Lydiatt, 1995). HNC is the most common cancer among males and the third most common among females. The Indian Council of Medical Research (ICMR) has estimated that approximately 0.20 to 0.25 million new head and neck cancer patients are diagnosed every year (National Cancer Registry and Programme Indian Council of Medical Research, 2016), constituting about 30% of all incident cancers. India has the highest rate of oropharyngeal cancers, accounting for 30-40% of all cancers (National Cancer Registry and Programme Indian Council of Medical Research, 2016), and its mortality was 18% in males and 7% in females (World Health Organization, 2014a). The International Agency for Research on Cancer (IARC) estimated that the incidence of cancer would increase sharply, by 50%, by 2020 (Ferlay and Soerjomataram, 2015), the reasons being increasing life expectancy and an aging population worldwide. This prediction was made considering the current trend of increasing tobacco consumption and the adoption of unhealthy lifestyles (World Health Organization, 2014b). \nEarly diagnosis and timely initiation of treatment of head and neck cancers improve survival, lower the cost of care, and result in retention of a better quality of life (Kumar et al., 2012). Most patients experience a drop in their income while undergoing diagnosis and treatment. 
During treatment, indirect costs are a major burden to patients, increasing their financial stress, and can drive many families into economic catastrophe (Kavosi et al., 2014; Sharp and Timmons, 2010; Nair et al., 2013). In India, public health facilities provide free or subsidized treatment. Patients usually initiate care in the private sector, because of perceived better treatment and perceived better chances of survival, before they start seeking care in a public facility (Nair et al., 2013). \nQuality of life (QOL) is a multidimensional concept measuring the physical, social/familial, emotional, and functional wellbeing of an individual (Webster et al., 2003). HNC can affect the quality of life of an individual by affecting normal speech, breathing, and eating, and through disfigurement (Bernier et al., 2004). In India, the literature on the QOL of patients with HNC and the time taken for seeking care, getting diagnosed, and being treated is limited.\nAgainst this background, we planned to conduct a study among the head and neck cancer patients who attended the department of radiation oncology; the objectives were 1) to determine the time intervals in presentation, diagnosis, and treatment initiation and the various pathways of care sought before reaching our tertiary care facility, 2) to estimate their treatment cost and the sources of their health expenditure, and 3) to identify the socio-demographic and clinical factors associated with quality of life (QOL).", "\nStudy design \n\nThe study was a hospital-based cross-sectional analytic study conducted among patients who attended the department of radiation oncology.\n\nSetting\n\nThe study was carried out at the Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), an institute of national importance located in Puducherry. Puducherry is one of the eight union territories of India. The Union Territory of Puducherry lies in the southern part of the Indian Peninsula. The population of Puducherry was 1.2 million as per the 2011 census. 
The Regional Cancer Centre (RCC) offers services to around 3,000 new cancer patients every year, of whom 990 suffered from head and neck cancer. The RCC now includes the Departments of Radiotherapy, Medical Oncology, and Surgical Oncology. Cancer treatment for patients at JIPMER is mostly free. Cancer patients are also referred from other eastern and southern Indian states. Patients from the nearby state of Tamil Nadu avail services through the Chief Minister's Comprehensive Health Insurance Scheme, under which individuals with an annual income of less than INR 72,000 (~1006 $) can avail free treatment services. In 2002, the Department of Radiotherapy was upgraded to a Regional Cancer Centre. Approximately 1,200 patients avail advanced diagnostic and treatment services, including radio-diagnosis, pathology, medical oncology, surgical oncology, and radiotherapy. Consultation and investigation are free for all patients.\n\nSelection of patients \n\nThe study participants included all newly registered and follow-up adult patients with head and neck cancer seeking treatment at the Radiotherapy department, JIPMER, between 1st August 2016 and 30th September 2016. Convenience sampling was adopted for the study. A total of 195 adult patients with head and neck cancer who attended the Radiotherapy OPD during the period of data collection were approached for inclusion in the study. Among them, 192 patients who gave consent were included in the study. All diagnosed head and neck cancer patients with a date of definitive treatment were recruited consecutively. Patients who had been diagnosed more than three years earlier were excluded. \n\nData collection and processing\n\nSociodemographic details and clinical and medical history were extracted from the patients' case sheets. The dates of diagnosis and treatment initiation were extracted from the patients' current and previous hospital records. The hospital record files of the patients were retrieved from the medical records department. 
The file numbers were noted based on the eligibility criteria, and eligible patients were approached for inclusion in the study. After completion of their consultation with the treating physician or of any procedures, the participants were interviewed in a separate room, ensuring privacy. Information on the date of symptom recognition, the type and number of health care providers visited, the dates of visits, and the dates of definitive diagnosis and treatment was collected using a self-administered structured questionnaire. \nThree types of time intervals were elicited: the first was from the onset of symptoms until the first consultation with a registered medical practitioner (presentation interval); the second was from that consultation until a definitive cancer diagnosis was made (diagnostic interval); and the third was from the time of diagnosis until the initiation of definitive cancer treatment (treatment initiation interval). Expenditure on consultation, investigation, and treatment was considered a direct cost, and expenditure on transport, food, and lodgement was considered an indirect cost. All costs incurred by patients were elicited for the whole duration of their illness preceding the date of the interview and recorded in Indian National Rupees (INR); 1 US dollar equals INR 66.9.\n\nStudy tool \n\nThe instrument used for assessing Quality of Life (QOL) was the validated Tamil version of the Functional Assessment of Cancer Therapy (FACT) scale (version 4). The patients' responses were marked on a scale of 0 to 4, as most appropriate to their condition in the past seven days. Negatively stated items were reversed by subtracting the response from 4. All subscale items were summed to derive the total score. 
Four subscales, i.e., physical, social, emotional, and functional (27 items), together constituted the FACT-General (FACT-G) summary score. Questions specific to the head and neck were added to the above four subscales in the FACT-HandN scale, which has 39 items. The total scores were divided into three groups based on the absolute number (Fisch et al., 2003). A low score was considered severe impairment, a moderate score moderate impairment, and a high score low impairment. The higher the score, the better the QOL.\nQOL was measured using the Functional Assessment of Cancer Therapy (FACT) general and specific scales. \nProrated subscale score = [Sum of item scores] × [N of items in subscale] ÷ [N of items answered]\n\nStatistical methods and Analysis\n\nThe data were entered using the EpiData Entry client (v2.0.9.25) and analyzed using EpiData Analysis version 2.2.2.183 (EpiData Association, Odense, Denmark) and SPSS version 19.\nSociodemographic, clinical, and treatment variables were expressed as frequencies and proportions. Continuous variables like time intervals and direct and indirect costs were expressed as medians with interquartile range (IQR). The reference time point for the economic cost to the patient and QOL was the data collection period. \nQOL subscale and summary scale scores were expressed as mean and standard deviation (SD). The association between the exposures and the FACT summary scale was analyzed using Kruskal-Wallis ANOVA and the independent t-test. Statistical significance was considered as a p-value of less than 0.05.\n\nEthical approval\n\nThe study was approved by the Institute's Scientific Advisory Committee, and ethical clearance was obtained from the Institute Ethics Committee (Human Studies) before the start of the study [project no JIP/IEC/SC/2016/29/890]. Informed written consent was obtained from all the patients. The interview was conducted in a separate room, and the confidentiality of the patients' information was maintained throughout the study. 
The patient information sheet and written informed consent in the regional languages were obtained from the participants before conducting the interviews.\nDistribution of Socio-demographic and Clinical Characteristics of Head and Neck Cancer Patients Attended at Radiation Oncology (N=192).\nMean (SD) age, 54.92 (10.58); Range, 28-85; *Others include Karnataka, West Bengal, Jharkhand, Andhra Pradesh and Andaman Nicobar Island. †Socio-economic status of the patients was calculated according to the Modified BG Prasad scale (CPI (IW) Base 2001=100 Monthly Index). Because the study participants came from different states, the all-India general index (277) was used for calculating socioeconomic status. ‡Others include nasal (n=1), nasopharynx (n=7), thyroid (n=3) and parotid gland (n=1).\nDistribution of Different Time Intervals (in days) among Head and Neck Cancer Patients at the Department of Radiation Oncology During August-September 2016\n*20 patients were diagnosed before reporting to the study centre. Their median time interval till definitive diagnosis was 10.5 (IQR, 4.75-15.75)\n†2 patients had initiated their treatment prior to reporting to the study centre.\nTotal Indirect Cost Incurred by All Head and Neck Cancer Patients and Attendants ever Reported in Different Facilities in INR\n*ESIC-Employees' State Insurance Corporation hospital; †Ear Nose and Throat specialist; ‡Quack- an unqualified person who claims medical knowledge or other skills.\nSocio-demographic and Clinical Factors Associated with Quality of Life among Head and Neck Cancer Patients at Radiation Oncology (N=192)\n*Others include Nasal, Nasopharynx, Parotid, and Thyroid; †Stage IV includes IVA, IVB and IVC; Tests used for estimation of association are the independent t test and one-way ANOVA with post hoc test (Tukey).\nPathway of Care among Head and Neck Cancer Patients at a Tertiary Cancer Centre (N=192); *Ear Nose Throat Specialist, †Employees' State Insurance Corporation, ‡Primary Health Centre/ Community Health Centre", "A 
total of 192 patients out of the eligible 195 were recruited; the response rate was 98%. The majority of the head and neck cancer patients were aged between 45-59 years (90, 46.9%), male (128, 66.7%), from rural areas (133, 69.3%), unemployed (142, 74.0%), and belonged to the lower middle class (77, 40.1%). The common sites of cancer were the oral cavity and oropharynx (146, 76%), and the majority presented with stage IV cancer (124, 64.6%), as shown in Table 1.\nThe median (IQR) presentation interval, diagnostic interval, and treatment initiation interval were 36.5 (16-65.7), 14 (7-31.5), and 65.5 (45-104) days respectively, as shown in Table 2.\nThe majority (87%) of the patients visited at least one health care provider before reaching the department. Private sector clinics/hospitals were preferred by 52% of the patients for initial consultation (Figure 1). \nA definite diagnosis of cancer was made in our tertiary care facility for almost 90% of cases. The median (IQR) total direct cost, among those who had ever spent on their treatment services, was INR 2400 (700-7300). This estimated total cost in private facilities was spent over a median (IQR) period of 7 (1-20) days. The median direct cost incurred by head and neck cancer patients in our centre and other government facilities was nil. \n\nTable 3 shows the expenditure on food and lodgement by patients and their caregivers during their diagnosis and treatment in different facilities. \nThe patients and their caregivers had sought care in the studied health care facility for a median period of 240 days, and the total median (IQR) wage loss was INR 18000 (5250-61575) during this period. The major sources of expenditure were family savings (56%), the sale of assets (22%), and borrowed money (22%). \nThe mean Quality of Life (QOL) score was lowest for functional well-being, which was categorized as severely impaired. 
Other domains of QOL (i.e., physical, social, emotional, and head and neck specific) and the summary scales (FACT-G and FACT HandN) had mean scores in the moderate range. Better QOL was significantly associated with occupation, being the head of the family, and the site and early stage of cancer (Table 4).\nDiscussion: The present study highlighted the treatment-seeking behavior, treatment cost, and quality of life of head and neck cancer patients. The majority of the patients were male, aged more than 45 years, and presented with oral and oropharyngeal cancer in an advanced stage (III and IV), which was similar to various other studies in India (World Health Organization, 2014a; Deka et al., 2015; Mohanti et al., 2007). The preferred first contact for seeking care was the private sector (54%), followed by the government sector (30%). This finding is in contrast to an earlier study conducted in five hospitals across India, which had reported that cancer patients' interest and faith were more inclined towards the government sector (47%) than the private sector (45%) (Joshi et al., 2014). Around 11% of the patients reported to our centre directly; another study conducted in a cancer hospital also reported that a smaller proportion of study patients (7%) initially report to cancer hospitals (Kumar et al., 2012). The first point of contact for one-fourth of the patients was the primary/community health center. The definite diagnosis of cancer was made in our tertiary care centre for 90% of the cases. This may be due to the fact that ours is a regional cancer centre which is well equipped for making the diagnosis. People also visit the center for economic reasons, as diagnosis and treatment are almost free at this center. \nThe median presentation interval, diagnostic interval, and treatment initiation interval in our study were 36.5, 15, and 66 days respectively. 
A study conducted in a centre comparable to ours also reported similar findings, except that the diagnostic interval there was two times longer than in our study (Kumar et al., 2012). \nAll direct costs related to consultation, diagnosis, and treatment are free of charge in our centre. Patients from the nearby state of Tamil Nadu avail services through the Chief Minister's Comprehensive Health Insurance Scheme (CMCHIS), under which individuals with an annual income of less than INR 72000 (~1006 $) can avail free treatment services. The majority of the study patients (82%) were from Tamil Nadu state and of low socio-economic status, insured by CMCHIS ("Chief Minister's Comprehensive Health Insurance Scheme," n.d.). \nThe average direct cost ever spent by patients in public (7 patients) and private facilities (89 patients) was INR 1000 (400-5,000) and 1,600 (500-6,533) for an average of 14 and 7 days of treatment respectively. The average direct cost ever paid to quacks was INR 5000 (725-7,500) over an average of 30 days of seeking care. The out-of-pocket expenses during treatment were mostly due to indirect costs. A study on the economic burden of cancer conducted in a similar setting in Delhi also observed that approximately 60% of patient expenditure was on transportation, food, and lodging during treatment (Mukhopadhyay et al., 2011). \nA study among 508 cancer patients (all types of cancer) conducted in tertiary care centers of five major cities (Aizawl, Bikaner, Kolkata, Thiruvananthapuram, and Mumbai) of India in 2011 showed that the mean cost of investigation, treatment, and indirect expenses over a period of one year was INR 16,739, INR 41,311, and INR 27,248 respectively (Nair et al., 2013). In our study the indirect cost incurred by patients was comparatively less (one-third). 
The difference may be due to the concessional transport scheme available for patients who came from Tamil Nadu, and approximately 72% of the participants were in their first year of treatment. \nA study among 100 oral cancer patients in a private tertiary hospital stated that the direct costs varied according to the stage of oral cancer, and the total direct cost was INR 146092 (72401-228919), which is much higher compared to our study (Goyal et al., 2014). The treatment cost estimated in our study may not be representative of head and neck cancer patients seeking care in a private setting.\nConsidering the Quality of Life (QOL) of the patients, there was severe impairment in functional wellbeing, whereas there was moderate impairment in the other dimensions. A study at the All India Institute of Medical Sciences, Delhi stated that functional scores decline during treatment and for those having disease-related symptoms (like pain, fatigue, nausea, etc.), which increase during the course of treatment (Bansal et al., 2004). Patients who were employed, were heads of their families, and were in the early stage of nasal, nasopharyngeal, parotid, and thyroid cancer had significantly better QOL than other cancer patients. A study from Karachi, Pakistan, which used the same scale (FACT-G and HandN scale), also found a significant association of occupation, stage, and site of cancer with QOL (Bilal et al., 2015). We could not find a significant association between QOL and various demographic variables or socioeconomic status. In contrast to our study, a study conducted at the Regional Cancer Centre, Trivandrum using the FACT-G scale reported that patients with higher socioeconomic status had better QOL (Thomas et al., 2004). \n\nLimitation\n\nIn our study, for the patients who were not able to speak due to their illness, the QOL assessment was based on the responses of their caregivers; thus, there is a possibility of information bias. 
Patients were asked about their symptom recognition and pathway of care, so there is a chance of recall bias. Since the study was conducted in a hospital setting, the characteristics of cancer patients seeking care from the hospital may differ from those of cancer patients in the community, owing to Berksonian bias. The treatment cost estimated in our study may not apply to all head and neck cancer patients, especially those seeking care in a private setting.\n\nStrength\n\nThe present study tried to identify important aspects of the treatment of head and neck cancer patients. A validated Tamil version of the tool was used for collecting information related to the quality of life of patients.\nIn conclusion, the preferred healthcare provider was the private sector, as reflected in the pathways of care, and the majority of patients visited at least one provider before reaching the tertiary care facility. The average treatment initiation interval was more than two months. The expenditure was mostly on indirect costs; initially, patients and their caregivers spent from their own savings, but at a later stage they started selling their assets and ultimately ended up borrowing money for their treatment. Their overall quality of life was moderately impaired. \nScreening and referral mechanisms at primary/community health centers can reduce the presentation time interval, as has already been initiated by the National Programme for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases and Stroke. Further research is needed to understand the physical, social, familial, and functional quality of life in relation to different disease parameters.", "None." ]
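The prorating rule quoted in the study-tool section, Prorated subscale score = [Sum of item scores] × [N of items in subscale] ÷ [N of items answered], can be sketched as a small Python helper. The function name and the use of None for unanswered items are illustrative assumptions, not part of the FACT scoring manual as described here:

```python
def prorated_subscale_score(item_scores, n_items_in_subscale):
    """Prorate a FACT subscale score when some items are unanswered.

    item_scores: list of responses (each 0-4 after reversing negatively
    stated items), with None marking unanswered items (an assumption for
    this sketch). n_items_in_subscale: total number of items in the
    subscale. Implements: [sum of item scores] x [N of items in subscale]
    / [N of items answered].
    """
    answered = [s for s in item_scores if s is not None]
    if not answered:
        raise ValueError("no items answered; subscale cannot be prorated")
    return sum(answered) * n_items_in_subscale / len(answered)

# Hypothetical example: 5 of 7 subscale items answered, raw sum 15
score = prorated_subscale_score([3, 4, 2, 3, 3, None, None], 7)
```

With 5 of 7 items answered and a raw sum of 15, the prorated score is 15 × 7 ÷ 5 = 21, so partially answered subscales remain comparable to fully answered ones.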
[ "intro", "materials|methods", "results", null ]
[ "Head and neck cancer", "FACT", "HandN", "quality of life", "treatment cost", "treatment seeking behavior" ]
Introduction: Cancer is the second most common cause of death, following cardiovascular disorders (Roth et al., 2018). Globally, 14.1 million new cancer cases and 8.2 million deaths occur every year, and 32.5 million people are living with cancer (World Health Organization, 2014b). Head and neck cancers (HNC) account for 23% of all cancer cases (Dikshit et al., 2012). They arise from the mucosal lining (squamous cell) and include oral and oropharyngeal carcinoma, nasal and nasopharyngeal carcinoma, and hypopharyngeal carcinoma (Shah and Lydiatt, 1995). HNC is the most common cancer among males and the third most common among females. The Indian Council of Medical Research (ICMR) has estimated that approximately 0.20 to 0.25 million new head and neck cancer patients are diagnosed every year (National Cancer Registry and Programme Indian Council of Medical Research, 2016), constituting about 30% of all incident cancers. India has the highest rate of oropharyngeal cancers, accounting for 30-40% of all cancers (National Cancer Registry and Programme Indian Council of Medical Research, 2016), and its mortality was 18% in males and 7% in females (World Health Organization, 2014a). The International Agency for Research on Cancer (IARC) estimated that the incidence of cancer would increase sharply, by 50%, by 2020 (Ferlay and Soerjomataram, 2015), the reasons being increasing life expectancy and an aging population worldwide. This prediction was made considering the current trend of increasing tobacco consumption and the adoption of unhealthy lifestyles (World Health Organization, 2014b). Early diagnosis and timely initiation of treatment of head and neck cancers improve survival, lower the cost of care, and result in retention of a better quality of life (Kumar et al., 2012). Most patients experience a drop in their income while undergoing diagnosis and treatment. 
During treatment, indirect cost is a major burden to patients, increasing their financial stress, and can drive many families into economic catastrophe (Kavosi et al., 2014; Sharp and Timmons, 2010; Nair et al., 2013). In India, public health facilities provide free or subsidized treatment. Patients usually initiate care in the private sector because of perceived better treatment and perceived better chances of survival before they start seeking care in a public facility (Nair et al., 2013). Quality of life (QOL) is a multidimensional concept measuring the physical, social/familial, emotional, and functional wellbeing of an individual (Webster et al., 2003). HNC can affect the quality of life of an individual by impairing normal speech, breathing, and eating, and through disfigurement (Bernier et al., 2004). In India, the literature on the QOL of patients with HNC and the time taken for seeking care, getting diagnosed, and being treated is limited. Against this background, we planned a study among head and neck cancer patients who attended the department of radiation oncology; the objectives were 1) to determine the time intervals in presentation, diagnosis, and treatment initiation and the various pathways of care sought before reaching our tertiary care facility, 2) to estimate their treatment cost and the sources of their health expenditure, and 3) to identify the socio-demographic and clinical factors associated with quality of life (QOL). Materials and Methods: Study design The study was a hospital-based cross-sectional analytic study conducted among patients who attended the department of radiation oncology. Setting The study was carried out at the Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), an institute of national importance located in Puducherry. Puducherry is one of the eight union territories of India and lies in the southern part of the Indian Peninsula. The population of Puducherry was 1.2 million as per the 2011 census. 
In 2002, the Department of Radiotherapy was upgraded to a Regional Cancer Centre (RCC), which now includes the Departments of Radiotherapy, Medical Oncology, and Surgical Oncology. The RCC offers services to around 3,000 new cancer patients every year, of whom 990 suffered from head and neck cancer. Cancer treatment for patients in JIPMER is mostly free, and the fee for consultation and investigation is waived for all patients. Cancer patients are also referred from other eastern and southern Indian states. Patients from the nearby state of Tamil Nadu avail services through the Chief Minister’s Comprehensive Health Insurance Scheme, under which individuals with an annual income of less than INR 72,000 (~1,006 USD) can avail free treatment services. Approximately 1,200 patients avail advanced diagnostic and treatment services including radio-diagnosis, pathology, medical oncology, surgical oncology, and radiotherapy. Selection of patients The study participants included all newly registered and follow-up adult patients with head and neck cancer seeking treatment at the Radiotherapy department, JIPMER, between 1st August 2016 and 30th September 2016. Convenience sampling was adopted for the study. A total of 195 adult patients with head and neck cancer who attended the Radiotherapy OPD during the period of data collection were approached for inclusion in the study. Among them, 192 patients who gave consent were included in the study. All diagnosed head and neck cancer patients with a date of definitive treatment were recruited consecutively. Patients diagnosed more than three years earlier were excluded. Data collection and processing Sociodemographic details and clinical and medical history were extracted from the patient’s case sheet. The dates of diagnosis and treatment initiation were extracted from the patient’s current and previous hospital records. The hospital record files of the patients were retrieved from the medical records department. 
The file numbers were noted based on the eligibility criteria, and eligible patients were approached for inclusion in the study. Study participants were interviewed after completion of their consultation with the treating physician or procedures; the interviews were conducted in a separate room ensuring privacy. Information on the date of recognition of symptoms, the type and number of health care providers visited, the date of each visit, and the dates of definitive diagnosis and treatment was collected using a self-administered structured questionnaire. Three types of time interval were elicited: the first interval was from the onset of symptoms until the first consultation with a registered medical practitioner (presentation interval); the second was from the consultation with the registered medical practitioner until a definitive cancer diagnosis was made (diagnostic interval); and the third was from the time of diagnosis until initiation of definitive cancer treatment (treatment initiation interval). Expenditure on consultation, investigation, and treatment was considered direct cost, and expenditure on transport, food, and lodging was considered indirect cost. All costs incurred by patients were elicited for the whole duration of their illness preceding the date of the interview and recorded in Indian National Rupees (INR); 1 US dollar equals INR 66.9. Study tool The instrument used for assessing quality of life (QOL) was the validated Tamil version of the Functional Assessment of Cancer Therapy (FACT) scale (version 4). The patients’ responses were marked on a scale of 0 to 4, as most appropriate to their condition in the past seven days. Negatively stated items were reversed by subtracting the response from 4. All subscale items were summed to derive the total score. 
Four subscales, i.e., physical, social, emotional, and functional wellbeing (27 items), together constituted the FACT-General (FACT-G) summary score. Questions specific to the head and neck were added to the above four subscales to form the FACT-HandN scale, with 39 items. The total scores were divided into three groups based on the absolute number (Fisch et al., 2003): a low score was considered severe impairment, a moderate score moderate impairment, and a high score low impairment. The higher the score, the better the QOL. When some items were unanswered, the subscale score was prorated as: Prorated subscale score = [sum of item scores] × [number of items in subscale] ÷ [number of items answered]. Statistical methods and analysis The data were entered using EpiData Entry client (v2.0.9.25) and analyzed using EpiData Analysis version 2.2.2.183 (EpiData Association, Odense, Denmark) and SPSS version 19. Sociodemographic, clinical, and treatment variables were expressed as frequencies and proportions. Continuous variables such as time intervals and direct and indirect costs were expressed as medians with interquartile range (IQR). The reference time point for the economic cost to the patient and for QOL was the data collection period. QOL subscale and summary scale scores were expressed as mean and standard deviation (SD). The association between the exposures and the FACT summary scale was analyzed using Kruskal-Wallis ANOVA and the independent t-test. Statistical significance was set at a p-value of less than 0.05. Ethical approval The study was approved by the Institute’s Scientific Advisory Committee, and ethical clearance was obtained from the Institute Ethics Committee (Human Studies) before the start of the study [project no. JIP/IEC/SC/2016/29/890]. Informed written consent was obtained from all patients. The interviews were conducted in a separate room, and the confidentiality of patient information was maintained throughout the study. 
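As a rough illustration (not the authors' scoring software), the FACT proration rule for unanswered items described in the methods can be sketched as follows; the item values are hypothetical:

```python
def prorated_subscale_score(item_scores, n_items_in_subscale):
    """Prorate a FACT subscale score when some items are unanswered.

    item_scores: per-item scores (0-4 after reversing negatively stated
    items), with None marking unanswered items.
    n_items_in_subscale: total number of items in the subscale.
    Formula: sum of answered items x items in subscale / items answered.
    """
    answered = [s for s in item_scores if s is not None]
    if not answered:
        raise ValueError("no items answered in this subscale")
    return sum(answered) * n_items_in_subscale / len(answered)

# Hypothetical 7-item subscale with one unanswered item:
# answered sum = 19 over 6 items, scaled back up to 7 items.
print(prorated_subscale_score([3, 4, 2, 3, None, 4, 3], 7))
```

Note that when every item is answered, the prorated score reduces to the plain sum of item scores.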
The patient information sheet was provided, and written informed consent in the regional languages was obtained from the participants before conducting the interviews. Distribution of socio-demographic and clinical characteristics of head and neck cancer patients who attended Radiation Oncology (N=192). Mean (SD) age, 54.92 (10.58); range, 28-85. *Others include Karnataka, West Bengal, Jharkhand, Andhra Pradesh, and the Andaman and Nicobar Islands. †Socio-economic status of the patients was calculated according to the Modified BG Prasad scale (CPI (IW) Base 2001=100 monthly index); because the study participants came from multiple states, the all-India general index (277) was used for calculating socioeconomic status. ‡Others include nasal (n=1), nasopharynx (n=7), thyroid (n=3), and parotid gland (n=1). Distribution of different time intervals (in days) among head and neck cancer patients at the Department of Radiation Oncology during August-September 2016. *20 patients were diagnosed before reporting to the study centre; their median time interval to definitive diagnosis was 10.5 days (IQR, 4.75-15.75). †2 patients had initiated their treatment prior to reporting to the study centre. Total indirect cost incurred by all head and neck cancer patients and attendants ever reported in different facilities, in INR. *ESIC, Employees' State Insurance Corporation hospital; †ENT, Ear, Nose, and Throat specialist; ‡Quack, an unqualified person who claims medical knowledge or other skills. Socio-demographic and clinical factors associated with quality of life among head and neck cancer patients at Radiation Oncology (N=192). *Others include nasal, nasopharynx, parotid, and thyroid; †Stage IV includes IVA, IVB, and IVC. Tests used for estimation of association were the independent t-test and one-way ANOVA with post hoc test (Tukey). 
Pathway of care among head and neck cancer patients at a tertiary cancer centre (N=192); *Ear, Nose, and Throat specialist; †Employees’ State Insurance Corporation; ‡Primary Health Centre/Community Health Centre. Results: A total of 192 of the 195 eligible patients were recruited, giving a response rate of 98%. The majority of the head and neck cancer patients were aged between 45-59 years (90, 46.9%), male (128, 66.7%), from rural areas (133, 69.3%), unemployed (142, 74.0%), and of lower middle class (77, 40.1%). The common sites of cancer were the oral cavity and oropharynx (146, 76%), and the majority presented with stage IV cancer (124, 64.6%), as shown in Table 1. The median (IQR) presentation interval, diagnostic interval, and treatment initiation interval were 36.5 (16-65.7), 14 (7-31.5), and 65.5 (45-104) days, respectively, as shown in Table 2. The majority (87%) of the patients visited at least one health care provider before reaching the department. Private sector clinics/hospitals were preferred by 52% of the patients for initial consultation (Figure 1). The definitive diagnosis of cancer was made in our tertiary care facility for almost 90% of cases. The median (IQR) total direct cost, among those who had ever spent on treatment services, was INR 2,400 (700-7,300). This estimated total cost in private facilities was spent over a median (IQR) period of 7 (1-20) days. The median direct cost incurred by head and neck cancer patients in our centre and other government facilities was nil. Table 3 shows the expenditure on food and lodging by patients and their caregivers during diagnosis and treatment in different facilities. The patients and their caregivers had sought care at the studied health care facility for a median period of 240 days, and the total median (IQR) wage loss during this period was INR 18,000 (5,250-61,575). 
The major sources of expenditure were family savings (56%), sale of assets (22%), and borrowing (22%). The mean quality of life (QOL) score was lowest for functional wellbeing, which was categorized as severely impaired. The mean scores of the other QOL domains (physical, social, emotional, and head-and-neck specific) and the summary scales (FACT-G and FACT-HandN) were in the moderate range. Better QOL was significantly associated with occupation, being the head of the family, the site of cancer, and an early stage of cancer (Table 4). Discussion: The present study highlighted the treatment-seeking behavior, treatment cost, and quality of life of head and neck cancer patients. The majority of the patients were male, aged more than 45 years, and presented with oral and oropharyngeal cancer in an advanced stage (III and IV), similar to various other studies in India (World Health Organization, 2014a; Deka et al., 2015; Mohanti et al., 2007). The preferred first contact for seeking care was the private sector (54%) followed by the government sector (30%). This finding contrasts with an earlier study conducted in five hospitals across India, which reported that cancer patients’ interest and faith were more inclined towards the government sector (47%) than the private sector (45%) (Joshi et al., 2014). Around 11% of the patients reported to our centre directly; another study conducted in a cancer hospital also reported that a smaller proportion of patients (7%) initially report to cancer hospitals (Kumar et al., 2012). The first point of contact for one-fourth of the patients was a primary/community health centre. The definitive diagnosis of cancer was made in our tertiary care centre for 90% of the cases. This may be because ours is a regional cancer centre that is well equipped for making the diagnosis. People also visit the centre for economic reasons, as diagnosis and treatment are almost free there. 
The median presentation interval, diagnostic interval, and treatment initiation interval in our study were 36.5, 14, and 65.5 days, respectively. A study conducted in a centre comparable to ours reported similar findings, except that the diagnostic interval there was twice as long as in our study (Kumar et al., 2012). All direct costs related to consultation, diagnosis, and treatment are free in our centre. Patients from the nearby state of Tamil Nadu avail services through the Chief Minister’s Comprehensive Health Insurance Scheme (CMCHIS), under which individuals with an annual income of less than INR 72,000 (~1,006 USD) can avail free treatment services. The majority of the study patients (82%) were from Tamil Nadu and of low socio-economic status, and were insured by CMCHIS (“Chief Minister’s Comprehensive Health Insurance Scheme,” n.d.). The median direct cost ever spent by patients in public facilities (7 patients) and private facilities (89 patients) was INR 1,000 (400-5,000) and INR 1,600 (500-6,533) over a median of 14 and 7 days of treatment, respectively. The median direct cost ever paid to quacks was INR 5,000 (725-7,500) over a median of 30 days of seeking care. Out-of-pocket expenses during treatment were mostly due to indirect costs. A study on the economic burden of cancer conducted in a similar setting in Delhi also observed that approximately 60% of patient expenditure was on transportation, food, and lodging during treatment (Mukhopadhyay et al., 2011). A study among 508 cancer patients (all types of cancer) conducted in tertiary care centres of five major Indian cities (Aizawl, Bikaner, Kolkata, Thiruvananthapuram, and Mumbai) in 2011 showed that the mean costs of investigation, treatment, and indirect expenses over a period of one year were INR 16,739, INR 41,311, and INR 27,248, respectively (Nair et al., 2013). In our study, the indirect cost incurred by patients was comparatively lower (about one-third). 
The difference may be due to the concessional transport scheme available for patients coming from Tamil Nadu, and to the fact that approximately 72% of the participants were in their first year of treatment. A study among 100 oral cancer patients in a private tertiary hospital reported that direct costs varied according to the stage of oral cancer, and the total direct cost was INR 146,092 (72,401-228,919), much higher than in our study (Goyal et al., 2014). The treatment cost estimated in our study may not be representative of head and neck cancer patients seeking care in a private setting. Regarding the quality of life (QOL) of the patients, there was severe impairment in functional wellbeing, whereas there was moderate impairment in the other dimensions. A study at the All India Institute of Medical Sciences, Delhi, stated that functional scores decline during treatment and for those having disease-related symptoms (such as pain, fatigue, and nausea), which increase during the course of treatment (Bansal et al., 2004). The patients who were employed, were the head of their family, and were in an early stage of nasal, nasopharyngeal, parotid, or thyroid cancer had significantly better QOL than other cancer patients. A study from Karachi, Pakistan, which used the same scales (FACT-G and FACT-HandN), also found a significant association of occupation, stage, and site of cancer with QOL (Bilal et al., 2015). We could not find a significant association between QOL and various demographic variables or socioeconomic status. In contrast to our study, a study conducted at the Regional Cancer Centre, Trivandrum, using the FACT-G scale reported that patients with higher socioeconomic status had better QOL (Thomas et al., 2004). Limitations For patients who were unable to speak due to their illness, QOL assessment was based on the responses of their caregivers; thus, there is a possibility of information bias. 
Patients were asked about their symptom recognition and pathway of care, so there is a chance of recall bias. Since the study was conducted in a hospital setting, the characteristics of cancer patients seeking care from the hospital may differ from those of cancer patients in the community (Berksonian bias). The treatment cost estimated in our study may not apply to all head and neck cancer patients, especially those seeking care in a private setting. Strengths The present study tried to identify important aspects of the treatment of head and neck cancer patients. A validated Tamil version of the tool was used for collecting information related to the quality of life of patients. In conclusion, the preferred healthcare provider was the private sector, as reflected in the pathways of care, and the majority of patients visited at least one provider before reaching the tertiary care facility. The median treatment initiation interval was more than two months. Expenditure was mostly on indirect costs; initially, patients and their caregivers spent from their own savings, but at a later stage they started selling their assets and ultimately ended up borrowing money for their treatment. Their overall quality of life was moderately impaired. Screening and referral mechanisms at primary/community health centres can reduce the presentation interval, as has already been initiated under the National Programme for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases and Stroke. Further research is needed to relate the physical, social, familial, and functional quality of life to different disease parameters. Author Contribution Statement: None.
Background: Head and neck cancers constitute about one-third of all cancers. Due to the complex nature of head and neck cancer treatment, expenditure on cancer treatment is higher and the quality of life of patients is also compromised. The objectives of the study were to determine the time taken by patients to seek care from registered medical practitioners, the time to definitive diagnosis and treatment initiation, the expenditure incurred, and quality of life. Methods: The present study was a cross-sectional descriptive study involving outpatients with head and neck cancer who reported to the department of radiotherapy, regional cancer centre, JIPMER. Quality of life was assessed using the validated FACT-HandN scale. Results: The preferred first contact for seeking care for most patients was the private sector (52%). The median (IQR) presentation interval, diagnostic interval, and treatment initiation interval were 36.5 (16-65.7), 14 (7-31.5), and 65.5 (45-104) days, respectively. The median indirect cost incurred in JIPMER was INR 8,424 (4,095-16,570), spent over a median duration of 240 days. The median (IQR) wage loss by the patients and/or caregivers was INR 18,000 (5,250-61,575). The source of expenditure was mainly family savings (56%). Functional wellbeing was severely impaired. Patients who were employed, were the head of the family, and had early-stage cancer had significantly better quality of life. Conclusions: The majority of the patients were diagnosed at the regional cancer centre, JIPMER, although their preferred first point of contact was private practitioners. The median interval from diagnosis to treatment initiation was more than two months. The expenditure during treatment was mainly due to indirect costs and wage loss. The functional quality of life was severely impaired for the majority of the cases.
null
null
3,978
357
[ 2 ]
4
[ "patients", "cancer", "study", "treatment", "head", "cost", "care", "neck", "cancer patients", "head neck" ]
[ "cancer oral oropharynx", "oropharyngeal cancer advanced", "head neck cancer", "oropharyngeal cancers accounting", "head neck cancers" ]
null
null
null
[CONTENT] Head and neck cancer | FACT | HandN | quality of life | treatment cost | treatment seeking behavior [SUMMARY]
null
[CONTENT] Head and neck cancer | FACT | HandN | quality of life | treatment cost | treatment seeking behavior [SUMMARY]
null
[CONTENT] Head and neck cancer | FACT | HandN | quality of life | treatment cost | treatment seeking behavior [SUMMARY]
null
[CONTENT] Adult | Cross-Sectional Studies | Female | Head and Neck Neoplasms | Health Care Costs | Humans | India | Male | Middle Aged | Patient Acceptance of Health Care | Quality of Life [SUMMARY]
null
[CONTENT] Adult | Cross-Sectional Studies | Female | Head and Neck Neoplasms | Health Care Costs | Humans | India | Male | Middle Aged | Patient Acceptance of Health Care | Quality of Life [SUMMARY]
null
[CONTENT] Adult | Cross-Sectional Studies | Female | Head and Neck Neoplasms | Health Care Costs | Humans | India | Male | Middle Aged | Patient Acceptance of Health Care | Quality of Life [SUMMARY]
null
[CONTENT] cancer oral oropharynx | oropharyngeal cancer advanced | head neck cancer | oropharyngeal cancers accounting | head neck cancers [SUMMARY]
null
[CONTENT] cancer oral oropharynx | oropharyngeal cancer advanced | head neck cancer | oropharyngeal cancers accounting | head neck cancers [SUMMARY]
null
[CONTENT] cancer oral oropharynx | oropharyngeal cancer advanced | head neck cancer | oropharyngeal cancers accounting | head neck cancers [SUMMARY]
null
[CONTENT] patients | cancer | study | treatment | head | cost | care | neck | cancer patients | head neck [SUMMARY]
null
[CONTENT] patients | cancer | study | treatment | head | cost | care | neck | cancer patients | head neck [SUMMARY]
null
[CONTENT] patients | cancer | study | treatment | head | cost | care | neck | cancer patients | head neck [SUMMARY]
null
[CONTENT] cancer | cancers | treatment | hnc | patients | care | life | health | million | indian council medical research [SUMMARY]
null
[CONTENT] patients | cancer | study | treatment | cost | care | private | inr | interval | cancer patients [SUMMARY]
null
[CONTENT] patients | cancer | treatment | study | care | head | head neck | neck | cost | cancer patients [SUMMARY]
null
[CONTENT] one-third ||| the Quality of Life ||| Quality of Life [SUMMARY]
null
[CONTENT] first | 52% ||| IQR | 36.5 | 16 - 65.7 | 14 | 65.5 | 45 - 104 | days ||| INR 8424 | 4095-16570 | JIPMER | 240 days ||| IQR | INR18000 | 5250-61575 ||| 56% ||| ||| [SUMMARY]
null
[CONTENT] one-third ||| the Quality of Life ||| Quality of Life ||| JIPMER ||| ||| ||| first | 52% ||| IQR | 36.5 | 16 - 65.7 | 14 | 65.5 | 45 - 104 | days ||| INR 8424 | 4095-16570 | JIPMER | 240 days ||| IQR | INR18000 | 5250-61575 ||| 56% ||| ||| ||| JIPMER | first ||| more than two months ||| ||| [SUMMARY]
null
Impact of serum omentin-1 levels on cardiac prognosis in patients with heart failure.
24755035
Various adipokines are reported to be associated with the development of heart failure (HF) through insulin resistance and chronic inflammation. Omentin-1 is a novel adipokine and is associated with incident coronary artery disease. However, it remains unclear whether serum omentin-1 levels are associated with cardiac prognosis in patients with HF.
BACKGROUND
We measured serum omentin-1 levels at admission in 136 consecutive patients with HF, and 20 control subjects without signs of significant heart disease. We prospectively followed patients with HF to endpoints of cardiac death or re-hospitalization for worsening HF.
METHODS
Serum omentin-1 levels were markedly lower in HF patients with cardiac events compared with those without. Patients in New York Heart Association (NYHA) functional class IV showed significantly lower serum omentin-1 levels compared to those in classes II and III, whereas serum omentin-1 levels correlated only weakly with serum brain natriuretic peptide levels (r = 0.217, P = 0.011). We divided the HF patients into three groups based on the tertiles of serum omentin-1 level (low T1, middle T2, and high T3). Multivariate Cox hazard analysis showed that the lowest serum omentin-1 tertile (T1) was independently associated with cardiac events after adjustment for confounding factors (hazard ratio 5.78, 95% confidence interval 1.20-12.79). We also divided the HF patients into two groups according to the median serum omentin-1 level. Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events compared with those with high serum omentin-1 levels (log-rank test p < 0.001).
RESULTS
Decreased serum omentin-1 levels were associated with a poor cardiac outcome in patients with HF.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Biomarkers", "Cytokines", "Female", "Follow-Up Studies", "GPI-Linked Proteins", "Heart Failure", "Humans", "Lectins", "Male", "Middle Aged", "Prognosis", "Prospective Studies", "Risk Factors" ]
4006671
Background
Heart failure (HF) remains a major cause of death worldwide and has a poor prognosis despite advances in treatment [1]. Adipocytokines, such as tumor necrosis factor-alpha, interleukin-6, and plasminogen activator inhibitor-1, play a crucial role in the development of cardiovascular diseases through insulin resistance and chronic inflammation [2-5]. Adipokines such as adiponectin are also reported to have anti-inflammatory, anti-oxidant, and anti-apoptotic properties, and are decreased in patients with cardiovascular disease [6-9]. There has been a move to clarify the causal relationship between various adipokines and cardiovascular disease [10,11]. Omentin-1 is a novel adipokine whose serum levels are decreased in obese individuals and is associated with insulin resistance [12-16]. Omentin-1 has been suggested to play a beneficial role in preventing atherosclerosis [17,18]; however, it remains unclear whether serum omentin-1 levels are associated with clinical outcome in patients with HF. The purpose of this study was to clarify the impact of serum omentin-1 levels on cardiac prognosis in patients with HF.
Methods
Study population We enrolled 136 consecutive patients who were admitted to the Yamagata University Hospital for treatment of worsening HF, diagnosis and pathophysiological investigations, or for therapeutic evaluation of HF. We also enrolled 20 control subjects without signs of significant heart disease. A diagnosis of HF was based on a history of dyspnea and symptoms of exercise intolerance followed by pulmonary congestion, pleural effusion, or left ventricular enlargement by chest X-ray or echocardiography [19,20]. Control subjects were excluded if they had significant coronary artery disease, systolic and diastolic dysfunction, valvular heart disease, or myocardial hypertrophy on echocardiography [21]. All patients gave written informed consent prior to their participation, and the protocol was approved by the institution’s Human Investigation Committee. The procedures were performed in accordance with the Helsinki Declaration. 
Measurement of serum omentin-1 and brain natriuretic peptide levels Blood samples were drawn at admission and centrifuged at 2,500 g for 15 minutes at 4°C within 30 minutes of collection. The serum was stored at -80°C until analysis. Serum omentin-1 concentrations were measured with a sandwich enzyme-linked immunosorbent assay (ELISA, Immuno-Biological Laboratories Co., Ltd., Gunma, Japan), according to the manufacturer’s instructions [22,23]. The serum omentin-1 levels were measured in duplicate by an investigator unaware of the associated patients’ characteristics. Serum brain natriuretic peptide (BNP) concentrations were measured using a commercially available specific radio-immunoassay for human BNP (Shiono RIA BNP assay kit, Shionogi & Co., Ltd., Tokyo, Japan) [24]. Endpoints and follow-up The patients were prospectively followed for a median duration of 399 ± 378 days. The end points were cardiac death, including death due to progressive HF, myocardial infarction, stroke, and sudden cardiac death, and re-hospitalization for worsening HF. Sudden cardiac death was defined as death without definite premonitory symptoms or signs, and was confirmed by the attending physician. 
Two cardiologists who were blinded to the blood biomarker data reviewed the medical records and conducted telephone interviews to survey the incidence of cardiovascular events. Statistical analysis Data are presented as the mean ± standard deviation (SD). The Mann–Whitney U-test was used when the data were not distributed normally. If the data were not distributed normally, they were presented as medians with an interquartile range. The unpaired Student’s t-test and the chi-square test were used for comparisons of continuous and categorical variables, respectively. Comparison of data among three groups was performed by the Kruskal-Wallis test. Uni- and multivariate analyses with Cox proportional hazard regression were used to determine significant predictors of cardiovascular events. Cumulative overall and event-free survival rates were computed using the Kaplan-Meier method and were compared using the log-rank test. We calculated the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) to measure the quantity of improvement for the correct reclassification and sensitivity according to the addition of serum omentin-1 levels to the prediction model [25]. NRI and IDI are new statistical measures to assess and quantify the improvement in risk prediction offered by a new marker. A P value < 0.05 was considered statistically significant. 
All statistical analyses were performed with a standard statistical program package (JMP version 10; SAS Institute, Cary, North Carolina, USA), and the R-3.0.2 with additional packages (Rcmdr, Epi, pROC, and PredictABEL). Data are presented as the mean ± standard deviation (SD). The Mann–Whitney U-test was used when the data were not distributed normally. If the data were not distributed normally, they were presented as medians with an interquartile range. The unpaired Student’s t-test and the chi-square test were used for comparisons of continuous and categorical variables, respectively. Comparison of data among three groups was performed by the Kruskal-Wallis test. Uni- and multivariate analyses with Cox proportional hazard regression were used to determine significant predictors of cardiovascular events. Cumulative overall and event-free survival rates were computed using the Kaplan-Meier method and were compared using the log-rank test. We calculated the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) to measure the quantity of improvement for the correct reclassification and sensitivity according to the addition of serum omentin-1 levels to the prediction model [25]. NRI and IDI are new statistical measures to assess and quantify the improvement in risk prediction offered by a new marker. A P value < 0.05 was considered statistically significant. All statistical analyses were performed with a standard statistical program package (JMP version 10; SAS Institute, Cary, North Carolina, USA), and the R-3.0.2 with additional packages (Rcmdr, Epi, pROC, and PredictABEL).
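The continuous (category-free) forms of NRI and IDI can be computed directly from the per-patient risks predicted by the baseline model and by the model with the new marker added. A minimal Python sketch with hypothetical inputs, not the study data or the PredictABEL implementation the authors actually used:

```python
# Sketch of category-free NRI and IDI (Pencina-style), stdlib only.
# p_old / p_new are hypothetical predicted event risks from a baseline
# model and from the same model plus a new marker (e.g. omentin-1).

def nri_idi(p_old, p_new, event):
    """p_old, p_new: predicted risks per patient; event: 1 if a cardiac event occurred."""
    # risk changes among patients with and without an event
    ev = [n - o for o, n, e in zip(p_old, p_new, event) if e == 1]
    ne = [n - o for o, n, e in zip(p_old, p_new, event) if e == 0]
    # continuous NRI: net proportion of events reclassified upward
    # plus net proportion of non-events reclassified downward
    nri = (sum(d > 0 for d in ev) - sum(d < 0 for d in ev)) / len(ev) \
        + (sum(d < 0 for d in ne) - sum(d > 0 for d in ne)) / len(ne)
    # IDI: gain in mean predicted risk among events minus gain among non-events
    idi = sum(ev) / len(ev) - sum(ne) / len(ne)
    return nri, idi
```

A positive IDI means the new model widens the separation between the average predicted risks of patients with and without events, which is the sense in which Table 4 reports improved discrimination.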
Results
Comparison between patients with and without heart failure

The patients with HF had a lower BMI, lower left ventricular ejection fraction, lower serum total cholesterol and triglyceride levels, and higher serum BNP levels compared with control subjects (Table 1).

Baseline clinical characteristics
Data are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.

Comparison between HF patients with and without cardiac events

There were 59 cardiac events, including 17 deaths and 32 re-hospitalizations, in patients with HF during the follow-up period (Table 2). The patients who experienced cardiac events were in a more severe New York Heart Association (NYHA) functional class and had a lower estimated glomerular filtration rate, lower left ventricular ejection fraction, larger left ventricular end-diastolic diameter, and higher serum BNP levels compared with those who did not. Moreover, patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (Figure 1). There were no significant differences in the etiologies of HF between patients with and without cardiac events (Table 2).

Comparison of patients with or without cardiac event
Data are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.

Comparisons of serum omentin-1 levels between control subjects and HF patients with or without cardiac events. HF patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (p < 0.001). HF, heart failure.
Serum omentin-1 levels and HF severity

The patients in NYHA functional class IV showed significantly lower serum omentin-1 levels than those in classes II and III (P = 0.029 vs. class II and P = 0.041 vs. class III, Figure 2A). In contrast, serum omentin-1 levels did not differ significantly between patients in NYHA functional classes II and III (P = 0.582). Furthermore, there was no relationship between serum omentin-1 levels and serum BNP levels (r = 0.217, Figure 2B).

Serum omentin-1 levels and heart failure severity. A. The patients in NYHA functional class IV showed significantly lower serum omentin-1 levels than those in classes II and III (*P = 0.029 vs. class II, #P = 0.041 vs. class III; number of patients: II = 71, III = 46, IV = 19). B. The association between serum omentin-1 levels and serum BNP levels; there was no relationship (r = 0.217). BNP, brain natriuretic peptide; NYHA, New York Heart Association.

Association between serum omentin-1 levels and cardiac events

We divided the patients with HF into three groups according to tertiles of serum omentin-1 levels. Multivariate Cox hazard analysis showed that the lowest serum omentin-1 tertile (T1) was independently associated with cardiac events after adjustment for age, gender, NYHA functional class, left ventricular ejection fraction, and serum brain natriuretic peptide levels (hazard ratio 5.65, 95% confidence interval 2.61-12.20; Figure 3, Table 3). We also divided the patients into two groups according to the median serum omentin-1 level. Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events than those with high serum omentin-1 levels (log-rank test p < 0.001, Figure 4).

Hazard ratio of the tertiles of omentin-1 levels for cardiac events after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. BNP, brain natriuretic peptide; HDLc, high density lipoprotein cholesterol; NYHA, New York Heart Association.
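The two grouping steps above, splitting patients at tertile cut points and estimating event-free survival per group, can be sketched with the standard library alone. This is an illustrative sketch with hypothetical data, not the authors' R/JMP analysis:

```python
# Sketch: tertile assignment and a basic Kaplan-Meier estimator, stdlib only.

def tertile(values):
    """Assign each value to tertile 0 (lowest), 1, or 2 (highest).
    Cut points are simple order statistics; ties at a cut point fall upward."""
    ranked = sorted(values)
    n = len(values)
    c1, c2 = ranked[n // 3], ranked[2 * n // 3]
    return [0 if v < c1 else 1 if v < c2 else 2 for v in values]

def kaplan_meier(times, events):
    """times: follow-up duration per patient; events: 1 = cardiac event, 0 = censored.
    Returns (time, survival) pairs, stepping down at each observed event."""
    # at tied times, process events before censorings (standard KM convention)
    order = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    surv, at_risk, curve = 1.0, len(order), []
    for t, e in order:
        if e:
            surv *= (at_risk - 1) / at_risk  # multiply by fraction surviving this event
            curve.append((t, surv))
        at_risk -= 1  # censored patients leave the risk set without a step
    return curve
```

Running `kaplan_meier` separately on the low-omentin-1 and high-omentin-1 groups yields the two curves compared by the log-rank test in Figure 4.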
Univariate and multivariate analyses for cardiac events
*Adjusted HR after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. BNP, brain natriuretic peptide; CI, confidence interval; HDLc, high density lipoprotein cholesterol; HR, hazard ratio; NYHA, New York Heart Association; SD, standard deviation.

Kaplan-Meier analysis. The patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001).

Net reclassification improvement and integrated discrimination improvement

To measure the improvement in correct reclassification and sensitivity obtained by adding serum omentin-1 levels to the prediction model, we calculated the NRI and the IDI. Including serum omentin-1 levels in the prediction model (age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events improved the NRI and IDI values, suggesting effective reclassification and discrimination (Table 4).

Statistics for model fit and improvement with the addition of serum omentin-1 levels to the prediction of cardiac events
The prediction model includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels. BNP, brain natriuretic peptide; CI, confidence interval; IDI, integrated discrimination improvement; NRI, net reclassification improvement; NYHA, New York Heart Association.
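For two groups, the log-rank test used for the Kaplan-Meier comparison reduces to a chi-square statistic with one degree of freedom, built from observed and expected event counts at each event time. A stdlib-only sketch with hypothetical data, not the study dataset:

```python
# Sketch of the two-sample log-rank statistic: (O1 - E1)^2 / V, chi-square, 1 df.

def log_rank_chi2(times1, events1, times2, events2):
    data = [(t, e, 0) for t, e in zip(times1, events1)] + \
           [(t, e, 1) for t, e in zip(times2, events2)]
    o1 = e1 = v = 0.0
    for t in sorted({t for t, e, _ in data if e}):  # distinct event times
        n1 = sum(1 for tt, _, g in data if g == 0 and tt >= t)  # at risk, group 1
        n2 = sum(1 for tt, _, g in data if g == 1 and tt >= t)  # at risk, group 2
        d1 = sum(1 for tt, ee, g in data if g == 0 and tt == t and ee)
        d2 = sum(1 for tt, ee, g in data if g == 1 and tt == t and ee)
        n, d = n1 + n2, d1 + d2
        o1 += d1              # observed events in group 1
        e1 += d * n1 / n      # expected events in group 1 under the null
        if n > 1:             # hypergeometric variance contribution
            v += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return (o1 - e1) ** 2 / v
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom; the p < 0.001 reported for Figure 4 corresponds to a value well above 10.8.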
null
null
[ "Background", "Study population", "Measurement of serum omentin-1 and brain natriuretic peptide levels", "Endpoints and follow-up", "Statistical analysis", "Comparison between patients with and without heart failure", "Comparison between HF patients with and without cardiac events", "Serum omentin-1 levels and HF severity", "Association between serum omentin-1 levels and cardiac events", "Net reclassification improvement and integrated discrimination improvement", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Heart failure (HF) remain a major cause of death worldwide and has a poor prognosis despite advances in treatment [1]. Adipocytokines, such as tumor necrosis factor-alpha, interleukin-6, and plasminogen activator inhibitor-1, play a crucial role in the development of cardiovascular diseases through insulin resistance and chronic inflammation [2-5]. Adipokines, such as adiponectin, are also reported to have anti-inflammatory, anti-oxidant, and anti-apoptotic properties, and are decreased in patients with cardiovascular disease [6-9]. There has been a move to clarify the causal relationship between various adipokines and cardiovascular disease [10,11].\nOmentin-1 is a novel adipokine whose serum levels are decreased in obese individuals, and is associated with insulin resistance [12-16]. Omentin-1 has been suggested to play a beneficial role in preventing atherosclerosis [17,18], however, it remains unclear whether serum omentin-1 levels are associated with clinical outcome in patients with HF.\nThe purpose of this study was to clarify the impact of serum omentin-1 levels on cardiac prognosis in patients with HF.", "We enrolled 136 consecutive patients who were admitted to the Yamagata University Hospital for treatment of worsening HF, diagnosis and pathophysiological investigations, or for therapeutic evaluation of HF. We also enrolled 20 control subjects without signs of significant heart disease.\nA diagnosis of HF was based on a history of dyspnea and symptoms of exercise intolerance followed by pulmonary congestion, pleural effusion, or left ventricular enlargement by chest X-ray or echocardiography [19,20]. Control subjects were excluded if they had significant coronary artery disease, systolic and diastolic dysfunction, valvular heart disease, or myocardial hypertrophy on echocardiography [21]. All patients gave written informed consent prior to their participation, and the protocol was approved by the institution’s Human Investigation Committee. 
The procedures were performed in accordance with the Helsinki Declaration.", "Blood samples were drawn at admission and centrifuged at 2,500 g for 15 minutes at 4°C within 30 minutes of collection. The serum was stored at -80°C until analysis. Serum omentin-1 concentrations were measured with a sandwich enzyme-linked immunosorbent assay (ELISA, Immuno-Biological Laboratories CO., Ltd., Gunma, Japan), according to the manufacturer’s instructions [22,23]. The serum omentin-1 levels were measured in duplicate by an investigator unaware of the associated patients’ characteristics. Serum brain natriuretic peptide (BNP) concentrations were measured using a commercially available specific radio-immuno assay for human BNP (Shiono RIA BNP assay kit, Shionogi & Co., Ltd., Tokyo, Japan) [24].", "The patients were prospectively followed for a median duration of 399 ± 378 days. The end points were cardiac death, including death due to progressive HF, myocardial infarction, stroke and sudden cardiac death, and re-hospitalization for worsening HF. Sudden cardiac death was defined as death without definite premonitory symptoms or signs, and was confirmed by the attending physician. Two cardiologists who were blinded to the blood biomarker data reviewed the medical records and conducted telephone interviews to survey the incidence of cardiovascular events.", "Data are presented as the mean ± standard deviation (SD). The Mann–Whitney U-test was used when the data were not distributed normally. If the data were not distributed normally, they were presented as medians with an interquartile range. The unpaired Student’s t-test and the chi-square test were used for comparisons of continuous and categorical variables, respectively. Comparison of data among three groups was performed by the Kruskal-Wallis test. Uni- and multivariate analyses with Cox proportional hazard regression were used to determine significant predictors of cardiovascular events. 
Cumulative overall and event-free survival rates were computed using the Kaplan-Meier method and were compared using the log-rank test. We calculated the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) to measure the quantity of improvement for the correct reclassification and sensitivity according to the addition of serum omentin-1 levels to the prediction model [25]. NRI and IDI are new statistical measures to assess and quantify the improvement in risk prediction offered by a new marker. A P value < 0.05 was considered statistically significant. All statistical analyses were performed with a standard statistical program package (JMP version 10; SAS Institute, Cary, North Carolina, USA), and the R-3.0.2 with additional packages (Rcmdr, Epi, pROC, and PredictABEL).", "The patients with HF had a lower BMI and left ventricular ejection fraction, and lower serum total cholesterol, triglyceride levels, and higher serum BNP levels compared with control subjects (Table 1).\nBaseline clinical characteristics\nData are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, Blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.", "There were 59 cardiac events including 17 deaths and 32 re-hospitalizations in patients with HF during the follow-up period (Table 2). The patients who experienced cardiac events were in a more severe New York Heart Association (NYHA) functional class, and had a lower estimated glomerular filtration rate, lower left ventricular ejection fraction, higher left ventricular end-diastolic diameter, and higher serum BNP levels compared with those who did not. 
Moreover, patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (Figure 1). There were no significant differences in etiologies of HF between patients with and without cardiac events (Table 2).\nComparison of patients with or without cardiac event\nData are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, Blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.\nComparisons of serum omentin-1 levels between control subjects and HF patients with or without cardiac events. HF patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (p < 0.001). HF, heart failure.", "The patients who were in NYHA functional class IV showed significantly lower serum omentin-1 levels compared to those in class II and III (P = 0.029 vs. class II and P = 0.041 vs. class III, Figure 2A). On the other hand, serum omentin-1 levels were not significantly different between the patients who were in NYHA functional class II and III (P = 0.582). Furthermore, there was no relationship between the serum omentin-1 levels and the serum BNP levels (r = 0.217, Figure 2B).\nSerum omentin-1 levels and heart failure severity. A. The patients who were in NYHA functional class IV showed significantly lower serum omentin-1 levels compared to those in class II and III (*P = 0.029 vs. class II and #P = 0.041 vs. class III, Figure 2A). (The number of patients; II = 71, III = 46, IV = 19) B. The association between serum omentin-1 levels and serum BNP levels. There was no relationship between the serum omentin-1 levels and the serum BNP levels (r = 0.217). 
BNP, brain natriuretic peptide; NYHA, New York Heart Association.", "We divided patients with HF into three groups according to the tertiles of serum omentin-1 levels. Multivariate Cox hazard analysis showed that the lowest serum omentin-1 levels (T1) were independently associated with cardiac events after adjustment for age, gender, NYHA functional class, left ventricular ejection fraction, and serum brain natriuretic peptide levels (hazard ratio 5.65, 95% confidence interval 2.61-12.20; Figure 3, Table 3). We divided the patients into two groups according to the median serum omentin-1 levels. Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001, Figure 4).\nHazard ratio of the tertiles of omentin-1 levels for cardiac events after adjustment of age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. BNP, brain natriuretic peptide; HDLc, high density lipoprotein cholesterol; NYHA, New York Heart Association.\nUnivariate and multivariate analyses for cardiac events\n*Adjusted HR after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels.\nBNP, brain natriuretic peptide; CI, confidence interval; HDLc, high density lipoprotein cholesterol; HR, hazard ratio; NYHA, New York Heart Association; SD, standard deviation.\nKaplan-Meier analysis. The patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001).", "To measure the quantity of improvement for the correct reclassification and sensitivity according to the addition of serum omentin-1 levels to the prediction model, we calculated the NRI and the IDI. 
The inclusion of serum omentin-1 levels in the prediction model (includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events, improved the NRI and IDI values, suggesting effective reclassification and discrimination (Table 4).\nStatistics for model fit and improvement with addition of serum omentin-1 level predicted on the prediction of cardiac events\nPrediction model includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels.\nBNP, brain natriuretic peptide; CI, confidence interval; IDI, integrated discrimination improvement; NRI, net reclassification improvement; NYHA, New York Heart Association.", "BMI: Body mass index; BNP: Brain natriuretic peptide; eGFR: Estimated glomerular filtration rate; ELISA: Sandwich enzyme-linked immunosorbent assay; HF: Heart failure; IDI: Integrated discrimination improvement; NRI: Net reclassification improvement; NYHA: New York heart association; SD: Standard deviation.", "The authors report that there is no duality of interest associated with this manuscript.", "TN, TW and IK contributed to discussions about study design and data analyses. SK, DK, MY, YO, and YH conceived and carried out experiments. TN and TW participated in the interpretation of the results and the writing of the manuscript. SN, HT, TA, TS, and TM helped with data collection. All authors have read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Measurement of serum omentin-1 and brain natriuretic peptide levels", "Endpoints and follow-up", "Statistical analysis", "Results", "Comparison between patients with and without heart failure", "Comparison between HF patients with and without cardiac events", "Serum omentin-1 levels and HF severity", "Association between serum omentin-1 levels and cardiac events", "Net reclassification improvement and integrated discrimination improvement", "Discussion", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Heart failure (HF) remain a major cause of death worldwide and has a poor prognosis despite advances in treatment [1]. Adipocytokines, such as tumor necrosis factor-alpha, interleukin-6, and plasminogen activator inhibitor-1, play a crucial role in the development of cardiovascular diseases through insulin resistance and chronic inflammation [2-5]. Adipokines, such as adiponectin, are also reported to have anti-inflammatory, anti-oxidant, and anti-apoptotic properties, and are decreased in patients with cardiovascular disease [6-9]. There has been a move to clarify the causal relationship between various adipokines and cardiovascular disease [10,11].\nOmentin-1 is a novel adipokine whose serum levels are decreased in obese individuals, and is associated with insulin resistance [12-16]. Omentin-1 has been suggested to play a beneficial role in preventing atherosclerosis [17,18], however, it remains unclear whether serum omentin-1 levels are associated with clinical outcome in patients with HF.\nThe purpose of this study was to clarify the impact of serum omentin-1 levels on cardiac prognosis in patients with HF.", " Study population We enrolled 136 consecutive patients who were admitted to the Yamagata University Hospital for treatment of worsening HF, diagnosis and pathophysiological investigations, or for therapeutic evaluation of HF. We also enrolled 20 control subjects without signs of significant heart disease.\nA diagnosis of HF was based on a history of dyspnea and symptoms of exercise intolerance followed by pulmonary congestion, pleural effusion, or left ventricular enlargement by chest X-ray or echocardiography [19,20]. Control subjects were excluded if they had significant coronary artery disease, systolic and diastolic dysfunction, valvular heart disease, or myocardial hypertrophy on echocardiography [21]. 
All patients gave written informed consent prior to their participation, and the protocol was approved by the institution’s Human Investigation Committee. The procedures were performed in accordance with the Helsinki Declaration.\nWe enrolled 136 consecutive patients who were admitted to the Yamagata University Hospital for treatment of worsening HF, diagnosis and pathophysiological investigations, or for therapeutic evaluation of HF. We also enrolled 20 control subjects without signs of significant heart disease.\nA diagnosis of HF was based on a history of dyspnea and symptoms of exercise intolerance followed by pulmonary congestion, pleural effusion, or left ventricular enlargement by chest X-ray or echocardiography [19,20]. Control subjects were excluded if they had significant coronary artery disease, systolic and diastolic dysfunction, valvular heart disease, or myocardial hypertrophy on echocardiography [21]. All patients gave written informed consent prior to their participation, and the protocol was approved by the institution’s Human Investigation Committee. The procedures were performed in accordance with the Helsinki Declaration.\n Measurement of serum omentin-1 and brain natriuretic peptide levels Blood samples were drawn at admission and centrifuged at 2,500 g for 15 minutes at 4°C within 30 minutes of collection. The serum was stored at -80°C until analysis. Serum omentin-1 concentrations were measured with a sandwich enzyme-linked immunosorbent assay (ELISA, Immuno-Biological Laboratories CO., Ltd., Gunma, Japan), according to the manufacturer’s instructions [22,23]. The serum omentin-1 levels were measured in duplicate by an investigator unaware of the associated patients’ characteristics. 
Serum brain natriuretic peptide (BNP) concentrations were measured using a commercially available specific radioimmunoassay for human BNP (Shiono RIA BNP assay kit, Shionogi & Co., Ltd., Tokyo, Japan) [24].\n Endpoints and follow-up The patients were prospectively followed for a median duration of 399 ± 378 days. The end points were cardiac death, including death due to progressive HF, myocardial infarction, stroke and sudden cardiac death, and re-hospitalization for worsening HF. Sudden cardiac death was defined as death without definite premonitory symptoms or signs, and was confirmed by the attending physician. Two cardiologists who were blinded to the blood biomarker data reviewed the medical records and conducted telephone interviews to survey the incidence of cardiovascular events.\n Statistical analysis Data are presented as the mean ± standard deviation (SD). Data that were not normally distributed are presented as medians with interquartile ranges and were compared using the Mann–Whitney U-test. The unpaired Student’s t-test and the chi-square test were used for comparisons of continuous and categorical variables, respectively. Comparisons of data among three groups were performed with the Kruskal-Wallis test. Uni- and multivariate Cox proportional hazard regression analyses were used to determine significant predictors of cardiovascular events. Cumulative overall and event-free survival rates were computed using the Kaplan-Meier method and were compared using the log-rank test. We calculated the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) to quantify the improvement in reclassification and discrimination gained by adding serum omentin-1 levels to the prediction model [25]. NRI and IDI are statistical measures that assess and quantify the improvement in risk prediction offered by a new marker. A P value < 0.05 was considered statistically significant. All statistical analyses were performed with a standard statistical program package (JMP version 10; SAS Institute, Cary, North Carolina, USA) and R 3.0.2 with additional packages (Rcmdr, Epi, pROC, and PredictABEL).", "We enrolled 136 consecutive patients who were admitted to the Yamagata University Hospital for treatment of worsening HF, diagnosis and pathophysiological investigations, or for therapeutic evaluation of HF. We also enrolled 20 control subjects without signs of significant heart disease.\nA diagnosis of HF was based on a history of dyspnea and symptoms of exercise intolerance followed by pulmonary congestion, pleural effusion, or left ventricular enlargement by chest X-ray or echocardiography [19,20]. Control subjects were excluded if they had significant coronary artery disease, systolic and diastolic dysfunction, valvular heart disease, or myocardial hypertrophy on echocardiography [21]. All patients gave written informed consent prior to their participation, and the protocol was approved by the institution’s Human Investigation Committee.
The procedures were performed in accordance with the Helsinki Declaration.", "Blood samples were drawn at admission and centrifuged at 2,500 g for 15 minutes at 4°C within 30 minutes of collection. The serum was stored at -80°C until analysis. Serum omentin-1 concentrations were measured with a sandwich enzyme-linked immunosorbent assay (ELISA, Immuno-Biological Laboratories Co., Ltd., Gunma, Japan), according to the manufacturer’s instructions [22,23]. The serum omentin-1 levels were measured in duplicate by an investigator unaware of the associated patients’ characteristics. Serum brain natriuretic peptide (BNP) concentrations were measured using a commercially available specific radioimmunoassay for human BNP (Shiono RIA BNP assay kit, Shionogi & Co., Ltd., Tokyo, Japan) [24].", "The patients were prospectively followed for a median duration of 399 ± 378 days. The end points were cardiac death, including death due to progressive HF, myocardial infarction, stroke and sudden cardiac death, and re-hospitalization for worsening HF. Sudden cardiac death was defined as death without definite premonitory symptoms or signs, and was confirmed by the attending physician. Two cardiologists who were blinded to the blood biomarker data reviewed the medical records and conducted telephone interviews to survey the incidence of cardiovascular events.", "Data are presented as the mean ± standard deviation (SD). Data that were not normally distributed are presented as medians with interquartile ranges and were compared using the Mann–Whitney U-test. The unpaired Student’s t-test and the chi-square test were used for comparisons of continuous and categorical variables, respectively. Comparisons of data among three groups were performed with the Kruskal-Wallis test. Uni- and multivariate Cox proportional hazard regression analyses were used to determine significant predictors of cardiovascular events.
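The survival analyses described here were run with standard software (JMP and R); purely as an illustrative aside, the Kaplan-Meier product-limit estimator can be sketched in a few lines of plain Python. The function name and follow-up data below are hypothetical, not from the study:

```python
# Illustrative sketch only (the study used JMP and R): the Kaplan-Meier
# product-limit estimator over right-censored follow-up data.
# Each subject is (time_in_days, event), where event = 1 marks a cardiac
# event and event = 0 marks censoring. All data below are hypothetical.

def kaplan_meier(subjects):
    """Return [(event_time, survival_probability)] after each event time."""
    subjects = sorted(subjects)          # order by follow-up time
    n_at_risk = len(subjects)
    survival = 1.0
    curve = []
    i = 0
    while i < len(subjects):
        t = subjects[i][0]
        events = censored = 0
        # gather all subjects tied at time t
        while i < len(subjects) and subjects[i][0] == t:
            if subjects[i][1] == 1:
                events += 1
            else:
                censored += 1
            i += 1
        if events:
            # multiply in the conditional survival for this event time
            survival *= 1.0 - events / n_at_risk
            curve.append((t, survival))
        n_at_risk -= events + censored
    return curve

# Hypothetical follow-up data: (days, event indicator)
follow_up = [(90, 1), (200, 0), (250, 1), (399, 0), (500, 1), (700, 0)]
print(kaplan_meier(follow_up))
```

Comparing two such curves (e.g. low versus high omentin-1 groups) is what the log-rank test formalizes; the study's actual curves and tests were produced with the software named above.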
Cumulative overall and event-free survival rates were computed using the Kaplan-Meier method and were compared using the log-rank test. We calculated the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) to quantify the improvement in reclassification and discrimination gained by adding serum omentin-1 levels to the prediction model [25]. NRI and IDI are statistical measures that assess and quantify the improvement in risk prediction offered by a new marker. A P value < 0.05 was considered statistically significant. All statistical analyses were performed with a standard statistical program package (JMP version 10; SAS Institute, Cary, North Carolina, USA) and R 3.0.2 with additional packages (Rcmdr, Epi, pROC, and PredictABEL).", " Comparison between patients with and without heart failure The patients with HF had a lower BMI and left ventricular ejection fraction, lower serum total cholesterol and triglyceride levels, and higher serum BNP levels compared with control subjects (Table 1).\nBaseline clinical characteristics\nData are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.\n Comparison between HF patients with and without cardiac events There were 59 cardiac events including 17 deaths and 32 re-hospitalizations in patients with HF during the follow-up period (Table 2). The patients who experienced cardiac events were in a more severe New York Heart Association (NYHA) functional class, and had a lower estimated glomerular filtration rate, lower left ventricular ejection fraction, higher left ventricular end-diastolic diameter, and higher serum BNP levels compared with those who did not. Moreover, patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (Figure 1). There were no significant differences in etiologies of HF between patients with and without cardiac events (Table 2).\nComparison of patients with or without cardiac event\nData are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.\nComparisons of serum omentin-1 levels between control subjects and HF patients with or without cardiac events. HF patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (p < 0.001). HF, heart failure.\n Serum omentin-1 levels and HF severity The patients who were in NYHA functional class IV showed significantly lower serum omentin-1 levels compared to those in class II and III (P = 0.029 vs. class II and P = 0.041 vs. class III, Figure 2A). On the other hand, serum omentin-1 levels were not significantly different between the patients in NYHA functional class II and III (P = 0.582). Furthermore, there was no relationship between the serum omentin-1 levels and the serum BNP levels (r = 0.217, Figure 2B).\nSerum omentin-1 levels and heart failure severity. A. The patients who were in NYHA functional class IV showed significantly lower serum omentin-1 levels compared to those in class II and III (*P = 0.029 vs. class II and #P = 0.041 vs. class III; number of patients: II = 71, III = 46, IV = 19). B. The association between serum omentin-1 levels and serum BNP levels. There was no relationship between the serum omentin-1 levels and the serum BNP levels (r = 0.217). BNP, brain natriuretic peptide; NYHA, New York Heart Association.\n Association between serum omentin-1 levels and cardiac events We divided patients with HF into three groups according to the tertiles of serum omentin-1 levels. Multivariate Cox hazard analysis showed that the lowest serum omentin-1 levels (T1) were independently associated with cardiac events after adjustment for age, gender, NYHA functional class, left ventricular ejection fraction, and serum brain natriuretic peptide levels (hazard ratio 5.65, 95% confidence interval 2.61-12.20; Figure 3, Table 3). We divided the patients into two groups according to the median serum omentin-1 levels. Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001, Figure 4).\nHazard ratio of the tertiles of omentin-1 levels for cardiac events after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. BNP, brain natriuretic peptide; HDLc, high density lipoprotein cholesterol; NYHA, New York Heart Association.\nUnivariate and multivariate analyses for cardiac events\n*Adjusted HR after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels.\nBNP, brain natriuretic peptide; CI, confidence interval; HDLc, high density lipoprotein cholesterol; HR, hazard ratio; NYHA, New York Heart Association; SD, standard deviation.\nKaplan-Meier analysis. The patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001).\n Net reclassification improvement and integrated discrimination improvement To quantify the improvement in reclassification and discrimination gained by adding serum omentin-1 levels to the prediction model, we calculated the NRI and the IDI.
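The study computed NRI and IDI with the R package PredictABEL; purely to illustrate what these measures quantify, the sketch below implements one common variant, the category-free (continuous) NRI, together with the IDI, from the predicted event probabilities of a baseline model and of the same model plus a new marker. All function names and probability values below are hypothetical, not from the study:

```python
# Illustrative sketch only (the study used the R package PredictABEL):
# the category-free (continuous) NRI and the IDI, computed from the
# predicted event probabilities of a baseline model (p_old) and of the
# same model plus a new marker (p_new). All data are hypothetical.

def continuous_nri(p_old, p_new, events):
    """Category-free NRI: net upward reclassification among events
    plus net downward reclassification among non-events."""
    up_e = down_e = up_ne = down_ne = 0
    n_events = sum(events)
    n_nonevents = len(events) - n_events
    for po, pn, ev in zip(p_old, p_new, events):
        if ev:
            up_e += pn > po
            down_e += pn < po
        else:
            up_ne += pn > po
            down_ne += pn < po
    return (up_e - down_e) / n_events + (down_ne - up_ne) / n_nonevents

def idi(p_old, p_new, events):
    """IDI: gain in mean predicted risk among events minus the
    gain in mean predicted risk among non-events."""
    mean = lambda xs: sum(xs) / len(xs)
    e_new = [pn for pn, ev in zip(p_new, events) if ev]
    e_old = [po for po, ev in zip(p_old, events) if ev]
    ne_new = [pn for pn, ev in zip(p_new, events) if not ev]
    ne_old = [po for po, ev in zip(p_old, events) if not ev]
    return (mean(e_new) - mean(e_old)) - (mean(ne_new) - mean(ne_old))

# Hypothetical predicted probabilities of a cardiac event for six patients
events = [1, 1, 1, 0, 0, 0]
p_old = [0.60, 0.50, 0.40, 0.30, 0.20, 0.10]   # baseline model
p_new = [0.70, 0.55, 0.35, 0.25, 0.25, 0.05]   # baseline + new marker
print(continuous_nri(p_old, p_new, events), idi(p_old, p_new, events))
```

A positive NRI means the new marker moves predicted risks in the right direction (up for events, down for non-events) on balance; a positive IDI means it widens the separation between the mean predicted risks of the two groups.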
The inclusion of serum omentin-1 levels in the prediction model (which includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events improved the NRI and IDI values, suggesting effective reclassification and discrimination (Table 4).\nStatistics for model fit and improvement with addition of serum omentin-1 level to the prediction of cardiac events\nPrediction model includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels.\nBNP, brain natriuretic peptide; CI, confidence interval; IDI, integrated discrimination improvement; NRI, net reclassification improvement; NYHA, New York Heart Association.", "The patients with HF had a lower BMI and left ventricular ejection fraction, lower serum total cholesterol and triglyceride levels, and higher serum BNP levels compared with control subjects (Table 1).\nBaseline clinical characteristics\nData are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting
enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, Blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.", "There were 59 cardiac events including 17 deaths and 32 re-hospitalizations in patients with HF during the follow-up period (Table 2). The patients who experienced cardiac events were in a more severe New York Heart Association (NYHA) functional class, and had a lower estimated glomerular filtration rate, lower left ventricular ejection fraction, higher left ventricular end-diastolic diameter, and higher serum BNP levels compared with those who did not. Moreover, patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (Figure 1). There were no significant differences in etiologies of HF between patients with and without cardiac events (Table 2).\nComparison of patients with or without cardiac event\nData are presented as mean±SD or % unless otherwise indicated; ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, Blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.\nComparisons of serum omentin-1 levels between control subjects and HF patients with or without cardiac events. HF patients with cardiac events showed markedly lower serum omentin-1 levels compared with those without (p < 0.001). HF, heart failure.", "The patients who were in NYHA functional class IV showed significantly lower serum omentin-1 levels compared to those in class II and III (P = 0.029 vs. class II and P = 0.041 vs. 
class III, Figure 2A). On the other hand, serum omentin-1 levels were not significantly different between the patients who were in NYHA functional class II and III (P = 0.582). Furthermore, there was no relationship between the serum omentin-1 levels and the serum BNP levels (r = 0.217, Figure 2B).\nSerum omentin-1 levels and heart failure severity. A. The patients who were in NYHA functional class IV showed significantly lower serum omentin-1 levels compared to those in class II and III (*P = 0.029 vs. class II and #P = 0.041 vs. class III, Figure 2A). (The number of patients; II = 71, III = 46, IV = 19) B. The association between serum omentin-1 levels and serum BNP levels. There was no relationship between the serum omentin-1 levels and the serum BNP levels (r = 0.217). BNP, brain natriuretic peptide; NYHA, New York Heart Association.", "We divided patients with HF into three groups according to the tertiles of serum omentin-1 levels. Multivariate Cox hazard analysis showed that the lowest serum omentin-1 levels (T1) were independently associated with cardiac events after adjustment for age, gender, NYHA functional class, left ventricular ejection fraction, and serum brain natriuretic peptide levels (hazard ratio 5.65, 95% confidence interval 2.61-12.20; Figure 3, Table 3). We divided the patients into two groups according to the median serum omentin-1 levels. Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001, Figure 4).\nHazard ratio of the tertiles of omentin-1 levels for cardiac events after adjustment of age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. 
BNP, brain natriuretic peptide; HDLc, high density lipoprotein cholesterol; NYHA, New York Heart Association.\nUnivariate and multivariate analyses for cardiac events\n*Adjusted HR after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels.\nBNP, brain natriuretic peptide; CI, confidence interval; HDLc, high density lipoprotein cholesterol; HR, hazard ratio; NYHA, New York Heart Association; SD, standard deviation.\nKaplan-Meier analysis. The patients with low serum omentin-1 levels had a higher risk of cardiac events compared to those with high serum omentin-1 levels (log-rank test p < 0.001).", "To quantify the improvement in reclassification and discrimination gained by adding serum omentin-1 levels to the prediction model, we calculated the NRI and the IDI. The inclusion of serum omentin-1 levels in the prediction model (which includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events improved the NRI and IDI values, suggesting effective reclassification and discrimination (Table 4).\nStatistics for model fit and improvement with addition of serum omentin-1 level to the prediction of cardiac events\nPrediction model includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels.\nBNP, brain natriuretic peptide; CI, confidence interval; IDI, integrated discrimination improvement; NRI, net reclassification improvement; NYHA, New York Heart Association.", "The present study demonstrated that decreased serum omentin-1 levels predicted cardiac events in patients with HF. Serum omentin-1 level appears to be a novel prognostic marker for the risk stratification of patients with HF.\nVarious types of adipocytokines are reported to be predictors of unfavorable cardiac outcomes in patients with HF [26].
In addition to their roles as predictors of cardiac outcome, a variety of adipocytokines have been associated with the development of HF through insulin resistance and chronic inflammation [14,27-29]. Serum adiponectin levels are reported to be correlated with BNP levels, and are associated with HF severity and unfavorable outcomes in patients with HF [30,31]. Adiponectin has been suggested to play a role in the prevention of cardiovascular diseases via its anti-inflammatory, anti-oxidant, and anti-apoptotic properties [6-9]. Recently, reports have shown several adipokines to have beneficial effects on cardiovascular diseases [32-34]. However, the precise role of these adipokines remains unclear.\nOmentin-1 is a 38 kDa novel adipokine identified in 2004 from visceral adipose tissue [12,13]. Shibata et al. reported that decreased plasma omentin-1 levels predict the prevalence of coronary artery disease [18]. Yang et al. reported that omentin-1 enhances insulin-stimulated glucose uptake in human adipocytes and may regulate insulin sensitivity [13]. Yamawaki et al. reported that omentin-1 modulates vascular function and attenuates cyclooxygenase-2 expression and c-jun N-terminal kinase (JNK) activation in cytokine-stimulated endothelial cells [35,36]. These studies all suggest that omentin-1 may improve insulin resistance and suppress vascular inflammation. Interestingly, Pan et al. suggested that omentin-1 expression and production are decreased with elevated inflammatory adipokines, such as tumor necrosis factor-alpha and interleukin-6, in patients with impaired glucose tolerance and newly diagnosed type 2 diabetes mellitus [37].\nUnlike adiponectin, serum omentin-1 was reported to decrease with chronic inflammation and oxidative stress in patients with HF. The bioactivity of omentin-1 appears multifaceted and remains to be fully defined.
In contrast to adiponectin [30], the present study showed no correlation between serum omentin-1 and BNP levels, suggesting that these markers indicate different features of the pathophysiological process of HF. Serum omentin-1 levels may represent a promising biomarker for cardiac prognosis, irrespective of serum BNP levels. The inclusion of serum omentin-1 levels in the prediction model (which includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events improved the NRI and IDI values, suggesting effective reclassification and discrimination.\nThe present study has certain limitations. Firstly, the sample size was relatively small and it was a single-center study. Nonetheless, there was a significant relationship between serum omentin-1 levels and cardiac events. In addition, the inclusion of serum omentin-1 levels in the prediction model with conventional risk factors, including serum BNP levels, for the prediction of cardiac events improved the NRI and IDI values. Secondly, there were no data for other adipocytokines. Further study is needed to clarify the association between serum omentin-1 and other adipocytokines in a large HF population.\nIn conclusion, decreased serum omentin-1 levels were associated with cardiac events in patients with HF, irrespective of serum BNP levels. Serum omentin-1 level appears to represent a novel prognostic marker for the risk stratification of patients with HF.", "BMI: Body mass index; BNP: Brain natriuretic peptide; eGFR: Estimated glomerular filtration rate; ELISA: Sandwich enzyme-linked immunosorbent assay; HF: Heart failure; IDI: Integrated discrimination improvement; NRI: Net reclassification improvement; NYHA: New York Heart Association; SD: Standard deviation.", "The authors report that there is no duality of interest associated with this manuscript.", "TN, TW and IK contributed to discussions about study design and data analyses.
SK, DK, MY, YO, and YH conceived and carried out experiments. TN and TW participated in the interpretation of the results and the writing of the manuscript. SN, HT, TA, TS, and TM helped with data collection. All authors have read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, "results", null, null, null, null, null, "discussion", null, null, null ]
[ "Omentin-1", "Heart failure", "Prognosis" ]
Background: Heart failure (HF) remains a major cause of death worldwide and has a poor prognosis despite advances in treatment [1]. Adipocytokines, such as tumor necrosis factor-alpha, interleukin-6, and plasminogen activator inhibitor-1, play a crucial role in the development of cardiovascular diseases through insulin resistance and chronic inflammation [2-5]. Adipokines, such as adiponectin, are also reported to have anti-inflammatory, anti-oxidant, and anti-apoptotic properties, and are decreased in patients with cardiovascular disease [6-9]. There has been a move to clarify the causal relationship between various adipokines and cardiovascular disease [10,11]. Omentin-1 is a novel adipokine whose serum levels are decreased in obese individuals, and is associated with insulin resistance [12-16]. Omentin-1 has been suggested to play a beneficial role in preventing atherosclerosis [17,18]; however, it remains unclear whether serum omentin-1 levels are associated with clinical outcome in patients with HF. The purpose of this study was to clarify the impact of serum omentin-1 levels on cardiac prognosis in patients with HF. Methods: Study population We enrolled 136 consecutive patients who were admitted to the Yamagata University Hospital for treatment of worsening HF, diagnosis and pathophysiological investigations, or for therapeutic evaluation of HF. We also enrolled 20 control subjects without signs of significant heart disease. A diagnosis of HF was based on a history of dyspnea and symptoms of exercise intolerance followed by pulmonary congestion, pleural effusion, or left ventricular enlargement by chest X-ray or echocardiography [19,20]. Control subjects were excluded if they had significant coronary artery disease, systolic and diastolic dysfunction, valvular heart disease, or myocardial hypertrophy on echocardiography [21]. All patients gave written informed consent prior to their participation, and the protocol was approved by the institution’s Human Investigation Committee. The procedures were performed in accordance with the Helsinki Declaration. Measurement of serum omentin-1 and brain natriuretic peptide levels Blood samples were drawn at admission and centrifuged at 2,500 g for 15 minutes at 4°C within 30 minutes of collection. The serum was stored at -80°C until analysis. Serum omentin-1 concentrations were measured with a sandwich enzyme-linked immunosorbent assay (ELISA, Immuno-Biological Laboratories Co., Ltd., Gunma, Japan), according to the manufacturer’s instructions [22,23]. The serum omentin-1 levels were measured in duplicate by an investigator unaware of the associated patients’ characteristics. Serum brain natriuretic peptide (BNP) concentrations were measured using a commercially available specific radioimmunoassay for human BNP (Shiono RIA BNP assay kit, Shionogi & Co., Ltd., Tokyo, Japan) [24]. Endpoints and follow-up The patients were prospectively followed for a median duration of 399 ± 378 days. The end points were cardiac death, including death due to progressive HF, myocardial infarction, stroke and sudden cardiac death, and re-hospitalization for worsening HF. Sudden cardiac death was defined as death without definite premonitory symptoms or signs, and was confirmed by the attending physician. Two cardiologists who were blinded to the blood biomarker data reviewed the medical records and conducted telephone interviews to survey the incidence of cardiovascular events. Statistical analysis Data are presented as the mean ± standard deviation (SD).
Non-normally distributed data are presented as medians with interquartile ranges and were compared using the Mann–Whitney U-test. The unpaired Student's t-test and the chi-square test were used for comparisons of normally distributed continuous variables and categorical variables, respectively. Comparisons among three groups were performed with the Kruskal-Wallis test. Uni- and multivariate Cox proportional hazard regression analyses were used to identify significant predictors of cardiovascular events. Cumulative overall and event-free survival rates were computed using the Kaplan-Meier method and compared using the log-rank test. We calculated the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) to quantify the improvement in reclassification and discrimination gained by adding serum omentin-1 levels to the prediction model [25]; NRI and IDI are recently introduced statistical measures of the incremental risk-prediction value of a new marker. A P value < 0.05 was considered statistically significant. All statistical analyses were performed with JMP version 10 (SAS Institute, Cary, North Carolina, USA) and R 3.0.2 with the Rcmdr, Epi, pROC, and PredictABEL packages.
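The NRI and IDI are computed from each patient's predicted risks under the base model and the model with omentin-1 added. The following is a minimal pure-Python sketch, not the authors' actual PredictABEL analysis: it implements the category-free (continuous) NRI variant and the standard IDI, and all risks and event flags are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the category-free NRI and the
# IDI from predicted risks under two models; all inputs are hypothetical.

def continuous_nri(old_risk, new_risk, event):
    """Category-free NRI: net proportion of events whose predicted risk moves
    up, plus net proportion of non-events whose predicted risk moves down."""
    up_e = down_e = up_ne = down_ne = 0
    n_e = n_ne = 0
    for old, new, ev in zip(old_risk, new_risk, event):
        if ev:
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:
            n_ne += 1
            up_ne += new > old
            down_ne += new < old
    return (up_e - down_e) / n_e + (down_ne - up_ne) / n_ne

def idi(old_risk, new_risk, event):
    """IDI: change in mean predicted risk among events minus the
    corresponding change among non-events."""
    mean = lambda xs: sum(xs) / len(xs)
    ev_old = [r for r, e in zip(old_risk, event) if e]
    ev_new = [r for r, e in zip(new_risk, event) if e]
    ne_old = [r for r, e in zip(old_risk, event) if not e]
    ne_new = [r for r, e in zip(new_risk, event) if not e]
    return (mean(ev_new) - mean(ev_old)) - (mean(ne_new) - mean(ne_old))

# Hypothetical risks from a base model and a model adding omentin-1:
old = [0.10, 0.40, 0.30, 0.20]
new = [0.05, 0.60, 0.50, 0.10]
ev  = [0,    1,    1,    0]
print(continuous_nri(old, new, ev))  # 2.0: every event moved up, every non-event down
print(idi(old, new, ev))             # 0.275
```

A positive NRI and IDI, as reported for omentin-1 in Table 4, indicate that the new marker shifts predicted risks in the correct direction for both events and non-events.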
Results

Comparison between patients with and without heart failure: The patients with HF had a lower BMI, lower left ventricular ejection fraction, lower serum total cholesterol and triglyceride levels, and higher serum BNP levels compared with control subjects (Table 1).

Table 1. Baseline clinical characteristics. Data are presented as mean ± SD or % unless otherwise indicated. ACE, angiotensin-converting enzyme; ARB, angiotensin receptor blocker; BNP, brain natriuretic peptide; BUN, blood urea nitrogen; eGFR, estimated glomerular filtration rate; HDLc, high-density lipoprotein cholesterol; hsCRP, high-sensitivity C-reactive protein; IQR, interquartile range; LDLc, low-density lipoprotein cholesterol; LV, left ventricular; NYHA, New York Heart Association.
Comparison between HF patients with and without cardiac events: There were 59 cardiac events, including 17 deaths and 32 re-hospitalizations, in patients with HF during the follow-up period (Table 2). The patients who experienced cardiac events were in a more severe New York Heart Association (NYHA) functional class and had a lower estimated glomerular filtration rate, lower left ventricular ejection fraction, larger left ventricular end-diastolic diameter, and higher serum BNP levels than those who did not. Moreover, patients with cardiac events showed markedly lower serum omentin-1 levels than those without (Figure 1). There were no significant differences in the etiologies of HF between patients with and without cardiac events (Table 2).

Table 2. Comparison of patients with or without cardiac events. Data are presented as mean ± SD or % unless otherwise indicated; abbreviations as in Table 1.

Figure 1. Comparison of serum omentin-1 levels between control subjects and HF patients with or without cardiac events. HF patients with cardiac events showed markedly lower serum omentin-1 levels than those without (p < 0.001). HF, heart failure.

Serum omentin-1 levels and HF severity: The patients in NYHA functional class IV showed significantly lower serum omentin-1 levels than those in classes II and III (P = 0.029 vs. class II and P = 0.041 vs. class III, Figure 2A), whereas serum omentin-1 levels did not differ significantly between classes II and III (P = 0.582). Furthermore, there was no relationship between serum omentin-1 levels and serum BNP levels (r = 0.217, Figure 2B).
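The three-way comparison of omentin-1 levels across NYHA classes uses the Kruskal-Wallis test named in the statistical methods, which reduces the comparison to ranks. A minimal sketch of the H statistic follows; the omentin-1 values by class are hypothetical, no tie correction is applied, and in a real analysis H would be referred to a chi-square distribution with k−1 degrees of freedom.

```python
# Minimal Kruskal-Wallis H statistic for comparing one variable across groups.
# Ties are handled with midranks; the small-sample tie correction is omitted.

def kruskal_wallis_h(*groups):
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)

    def rank(v):
        # Midrank: average of the 1-based ranks a tied value would occupy.
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2

    h = 0.0
    for g in groups:
        r = sum(rank(v) for v in g)   # rank sum of this group
        h += r * r / len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical omentin-1 levels (ng/mL) by NYHA class:
class2 = [520, 480, 450, 500]
class3 = [430, 460, 410]
class4 = [300, 320, 290]
print(kruskal_wallis_h(class2, class3, class4))  # ≈ 7.318
```

With three groups, H ≈ 7.32 exceeds the 5% chi-square critical value for 2 degrees of freedom (5.99), so these hypothetical data would show a significant between-class difference, mirroring the pattern reported for class IV versus classes II and III.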
Figure 2. Serum omentin-1 levels and heart failure severity. A. Serum omentin-1 levels by NYHA functional class (*P = 0.029 vs. class II, #P = 0.041 vs. class III; number of patients: class II = 71, class III = 46, class IV = 19). B. The association between serum omentin-1 levels and serum BNP levels (r = 0.217). BNP, brain natriuretic peptide; NYHA, New York Heart Association.

Association between serum omentin-1 levels and cardiac events: We divided the patients with HF into three groups according to the tertiles of serum omentin-1 levels.
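The grouping used here, tertiles (T1 low, T2 middle, T3 high) and a median split, can be sketched as below. The cut-point convention (order statistics at n/3 and 2n/3) is one of several in use and may differ from the statistical software the authors used; all biomarker values are hypothetical.

```python
# Sketch of splitting patients into biomarker tertiles and median-based halves.
# Cut points are approximate order statistics; all values are hypothetical.

def tertile_labels(values):
    order = sorted(values)
    n = len(order)
    c1, c2 = order[n // 3], order[2 * n // 3]  # approximate tertile cut points
    return ["T1" if v < c1 else "T2" if v < c2 else "T3" for v in values]

levels = [210, 450, 320, 610, 150, 380, 500, 270, 560]
print(tertile_labels(levels))
# ['T1', 'T2', 'T2', 'T3', 'T1', 'T2', 'T3', 'T1', 'T3']

median = sorted(levels)[len(levels) // 2]  # exact median for odd n
low_group = [v for v in levels if v < median]  # "low omentin-1" half
```

With nine hypothetical patients this yields three balanced tertiles; the median split (here 380) defines the low-omentin-1 group used in the Kaplan-Meier comparison.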
Multivariate Cox hazard analysis showed that the lowest serum omentin-1 tertile (T1) was independently associated with cardiac events after adjustment for age, gender, NYHA functional class, left ventricular ejection fraction, and serum brain natriuretic peptide levels (hazard ratio 5.65, 95% confidence interval 2.61-12.20; Figure 3, Table 3). We also divided the patients into two groups according to the median serum omentin-1 level; Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events than those with high serum omentin-1 levels (log-rank test p < 0.001, Figure 4).

Figure 3. Hazard ratios of the tertiles of omentin-1 levels for cardiac events after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. BNP, brain natriuretic peptide; HDLc, high-density lipoprotein cholesterol; NYHA, New York Heart Association.

Table 3. Univariate and multivariate analyses for cardiac events. *Adjusted HR after adjustment for age, gender, body mass index, NYHA functional class, left ventricular ejection fraction, serum triglycerides, serum HDLc levels, and serum BNP levels. BNP, brain natriuretic peptide; CI, confidence interval; HDLc, high-density lipoprotein cholesterol; HR, hazard ratio; NYHA, New York Heart Association; SD, standard deviation.

Figure 4. Kaplan-Meier analysis. The patients with low serum omentin-1 levels had a higher risk of cardiac events than those with high serum omentin-1 levels (log-rank test p < 0.001).
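The Kaplan-Meier event-free survival curves compared by the log-rank test come from the product-limit estimator, which a short sketch can make concrete. The follow-up times and event flags below are hypothetical (1 = cardiac event, 0 = censored), and this is an illustration, not the authors' analysis code.

```python
# Minimal Kaplan-Meier (product-limit) estimator sketch.
# times: follow-up days; events: 1 = cardiac event, 0 = censored.

def kaplan_meier(times, events):
    """Return (time, survival) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            d += data[i][1]                        # events at t
            n_t += 1                               # subjects leaving at t
            i += 1
        if d:
            s *= 1 - d / at_risk                   # product-limit step
            curve.append((t, s))
        at_risk -= n_t
    return curve

# Hypothetical follow-up data for six patients:
days   = [100, 200, 200, 300, 400, 500]
events = [1,   0,   1,   1,   0,   0]
print(kaplan_meier(days, events))
# [(100, 0.8333...), (200, 0.6666...), (300, 0.4444...)]
```

The survival estimate only drops at event times; censored patients simply leave the risk set, which is why the curve for the low-omentin-1 group can fall faster even with unequal follow-up.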
Net reclassification improvement and integrated discrimination improvement: To quantify the improvement in reclassification and discrimination gained by adding serum omentin-1 levels to the prediction model, we calculated the NRI and the IDI. Including serum omentin-1 levels in the prediction model (age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events improved both the NRI and the IDI values, indicating effective reclassification and discrimination (Table 4).

Table 4. Statistics for model fit and improvement with the addition of serum omentin-1 levels to the prediction of cardiac events. The prediction model includes age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels. BNP, brain natriuretic peptide; CI, confidence interval; IDI, integrated discrimination improvement; NRI, net reclassification improvement; NYHA, New York Heart Association.
Discussion: The present study demonstrated that decreased serum omentin-1 levels predicted cardiac events in patients with HF; serum omentin-1 thus appears to be a novel prognostic marker for the risk stratification of patients with HF. Various adipocytokines are reported to be predictors of unfavorable cardiac outcomes in patients with HF [26]. In addition to their roles as predictors of cardiac outcome, a variety of adipocytokines have been associated with the development of HF through insulin resistance and chronic inflammation [14,27-29]. Serum adiponectin levels are reported to correlate with BNP levels and to be associated with HF severity and unfavorable outcomes in patients with HF [30,31]. Adiponectin has been suggested to play a role in the prevention of cardiovascular disease via its anti-inflammatory, anti-oxidant, and anti-apoptotic properties [6-9]. More recently, several adipokines have been reported to have beneficial effects on cardiovascular disease [32-34], although the precise roles of these adipokines remain unclear. Omentin-1 is a novel 38-kDa adipokine identified in 2004 in visceral adipose tissue [12,13]. Shibata et al. reported that decreased plasma omentin-1 levels predict the prevalence of coronary artery disease [18]. Yang et al. reported that omentin-1 enhances insulin-stimulated glucose uptake in human adipocytes and may regulate insulin sensitivity [13]. Yamawaki et al. reported that omentin-1 modulates vascular function and attenuates cyclooxygenase-2 expression and c-Jun N-terminal kinase (JNK) activation in cytokine-stimulated endothelial cells [35,36]. Together, these studies suggest that omentin-1 may improve insulin resistance and suppress vascular inflammation. Interestingly, Pan et al.
suggested that omentin-1 expression and production are decreased, alongside elevated inflammatory adipokines such as tumor necrosis factor-alpha and interleukin-6, in patients with impaired glucose tolerance and newly diagnosed type 2 diabetes mellitus [37]. Unlike adiponectin, serum omentin-1 levels were reported to decrease with chronic inflammation and oxidative stress in patients with HF. The bioactivity of omentin-1 appears multifaceted and remains to be fully defined. In contrast to adiponectin [30], the present study showed no correlation between serum omentin-1 and BNP levels, suggesting that these markers reflect different features of the pathophysiological process of HF. Serum omentin-1 may therefore represent a promising biomarker for cardiac prognosis, irrespective of serum BNP levels; indeed, adding serum omentin-1 levels to the prediction model (age, gender, NYHA functional class, left ventricular ejection fraction, and serum BNP levels) for the prediction of cardiac events improved the NRI and IDI values, indicating effective reclassification and discrimination. The present study has certain limitations. First, the sample size was relatively small and this was a single-center study; nonetheless, there was a significant relationship between serum omentin-1 levels and cardiac events, and adding serum omentin-1 levels to a prediction model with conventional risk factors, including serum BNP levels, improved the NRI and IDI values. Second, no data were available for other adipocytokines; further study is needed to clarify the association between serum omentin-1 and other adipocytokines in a large HF population. In conclusion, decreased serum omentin-1 levels were associated with cardiac events in patients with HF, irrespective of serum BNP levels. Serum omentin-1 appears to represent a novel prognostic marker for the risk stratification of patients with HF.
Abbreviations: BMI: Body mass index; BNP: Brain natriuretic peptide; eGFR: Estimated glomerular filtration rate; ELISA: Sandwich enzyme-linked immunosorbent assay; HF: Heart failure; IDI: Integrated discrimination improvement; NRI: Net reclassification improvement; NYHA: New York heart association; SD: Standard deviation. Competing interests: The authors report that there is no duality of interest associated with this manuscript. Authors’ contributions: TN, TW and IK contributed to discussions about study design and data analyses. SK, DK, MY, YO, and YH conceived and carried out experiments. TN and TW participated in the interpretation of the results and the writing of the manuscript. SN, HT, TA, TS, and TM helped with data collection. All authors have read and approved the final manuscript.
Background: Various adipokines are reported to be associated with the development of heart failure (HF) through insulin resistance and chronic inflammation. Omentin-1 is a novel adipokine associated with incident coronary artery disease. However, it remains unclear whether serum omentin-1 levels are associated with cardiac prognosis in patients with HF. Methods: We measured serum omentin-1 levels at admission in 136 consecutive patients with HF and 20 control subjects without signs of significant heart disease. We prospectively followed the patients with HF to the endpoints of cardiac death or re-hospitalization for worsening HF. Results: Serum omentin-1 levels were markedly lower in HF patients with cardiac events compared with those without. The patients in New York Heart Association (NYHA) functional class IV showed significantly lower serum omentin-1 levels than those in classes II and III, whereas serum omentin-1 levels did not correlate with serum brain natriuretic peptide levels (r = 0.217, P = 0.011). We divided the HF patients into three groups based on the tertiles of serum omentin-1 level (low T1, middle T2, and high T3). Multivariate Cox hazard analysis showed that the lowest serum omentin-1 tertile (T1) was independently associated with cardiac events after adjustment for confounding factors (hazard ratio 5.78, 95% confidence interval 1.20-12.79). We also divided the HF patients into two groups according to the median serum omentin-1 level; Kaplan-Meier analysis revealed that the patients with low serum omentin-1 levels had a higher risk of cardiac events compared with those with high serum omentin-1 levels (log-rank test p < 0.001). Conclusions: Decreased serum omentin-1 levels were associated with a poor cardiac outcome in patients with HF.
Keywords: Omentin-1 | Heart failure | Prognosis
MeSH terms: Aged | Aged, 80 and over | Biomarkers | Cytokines | Female | Follow-Up Studies | GPI-Linked Proteins | Heart Failure | Humans | Lectins | Male | Middle Aged | Prognosis | Prospective Studies | Risk Factors
null
[CONTENT] ||| ||| HF [SUMMARY]
[CONTENT] 136 | HF | 20 ||| HF | HF [SUMMARY]
[CONTENT] HF ||| New York Heart Association | IV | 0.217 | 0.011 ||| HF | three | T1 | T2 ||| T1 | 5.78 | 95% | 1.20 ||| HF | two ||| Kaplan-Meier | 0.001 [SUMMARY]
null
[CONTENT] ||| ||| HF ||| 136 | HF | 20 ||| HF | HF ||| HF ||| New York Heart Association | IV | 0.217 | 0.011 ||| HF | three | T1 | T2 ||| T1 | 5.78 | 95% | 1.20 ||| HF | two ||| Kaplan-Meier | 0.001 ||| HF [SUMMARY]
null
No red cell alloimmunization or change of clinical outcome after using fresh frozen cancellous allograft bone for acetabular reconstruction in revision hip arthroplasty: a follow up study.
23009246
Possible immunization to blood group or other antigens, with subsequent inhibition of remodeling or incorporation, after the use of untreated human bone allograft has been described previously. This study presents the immunological, clinical and radiological results of 30 patients who underwent acetabular revision using fresh frozen non-irradiated bone allograft.
BACKGROUND
AB0-incompatible (donor-recipient) bone transplantation was performed in 22 cases and Rh(D)-incompatible transplantation in 6 cases. The mean follow up of 23 months included measurement of the Harris Hip Score and radiological examination with evaluation of remodeling of the bone graft, implant migration and heterotopic ossification. In addition, all patients were screened for alloimmunization to Rh blood group antigens.
METHODS
Compared to the whole study group, there were no differences in clinical or radiological measurements for the groups with AB0- or Rh(D)-incompatible bone transplantation. The mean Harris Hip Score was 80.6. X-rays confirmed total remodeling of all allografts with no acetabular loosening. At follow up, blood tests revealed no alloimmunization to Rh blood group donor antigens.
RESULTS
The use of fresh frozen non-irradiated bone allograft in acetabular revision is a reliable supplement to reconstruction. The risk of alloimmunization to donor-blood group antigens after AB0- or Rh-incompatible allograft transplantation with a negative long-term influence on bone-remodeling or the clinical outcome is negligible.
CONCLUSIONS
[ "ABO Blood-Group System", "Acetabulum", "Aged", "Aged, 80 and over", "Arthroplasty, Replacement, Hip", "Blood Group Incompatibility", "Bone Transplantation", "Chi-Square Distribution", "Cryopreservation", "Erythrocytes", "Female", "Femur Head", "Follow-Up Studies", "Hip Prosthesis", "Humans", "Isoantibodies", "Male", "Middle Aged", "Ossification, Heterotopic", "Postoperative Complications", "Prosthesis Failure", "Radiography", "Reoperation", "Retrospective Studies", "Rh-Hr Blood-Group System", "Time Factors", "Tissue and Organ Harvesting", "Treatment Outcome" ]
3477012
Background
Aseptic loosening is the most common long-term complication in total hip arthroplasty. Revision of the failed acetabular component remains challenging due to migration of the implant during loosening and procedures to remove the primary implant often result in an extensive loss of pelvic bone. Bone grafting combined with insertion of a revision acetabular component is an established method to restore pelvic bone stock [1-4]. Because of its limited availability and poor quality in elderly patients the use of an autogenous graft is often not feasible. Therefore, allografts are utilized in most acetabular revisions. Regardless of whether treated (chemical, freeze dried, irradiated) or fresh-frozen non-irradiated allografts are used, the clinical outcome is usually good [5-7]. We have been using fresh frozen untreated allografts from our own bone bank in revision acetabular hip arthroplasty for decades with good results. Nevertheless, immunization to blood group antigens or other antigens and subsequent possible inhibition of long-term remodeling or incorporation of the transplanted bone is mentioned as an argument against the use of fresh frozen non-irradiated allografts [8,9]. The purpose of this study was to evaluate whether allografting of AB0- and Rh-incompatible patients (donor-recipient) leads to recipient-alloimmunization with proof of irregular erythrocyte antibodies (Rh system). In addition, clinical and radiological findings should be observed in the postoperative course.
Methods
Graft extraction Femoral head bone grafts were obtained from donors through total hip arthroplasty. The grafts were not treated, immediately double packed and stored at –80°C at our local bone bank. Besides blood group determination (AB0 and Rhesus), donors were screened for infectious diseases (HIV, Hepatitis B and C, Syphilis) before and at least six weeks after surgery according to the local guidelines for operating a bone bank.
Patients We retrospectively reviewed 30 patients (13 males, 17 females). The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0-incompatible (donor-recipient) bone transplantation was performed in 22 cases; 6 Rh(D)-negative patients received bone from Rh(D)-positive donors. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90).
Follow up All patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens. Clinical assessments were evaluated according to the criteria of the Harris Hip Score, including scoring of pain, walking and mobility of the revised hip [11]. Radiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. The Brooker classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was assessed on the basis of the appearance of trabecular remodeling within the graft.
Statistical analysis The paired t-test and Pearson's chi-square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. A p-value ≤ 0.05 was considered significant.
Results
Alloimmunization AB0-incompatible (donor-recipient) allograft transplantation was performed in 22 cases, Rhesus(D)-incompatible transplantation in 6 of 30 cases. No antibodies to donor blood antigens were found in any patient in the Rhesus system (Dd, Cc, Ee) during follow up. In particular, Rh(D)-incompatible transplantation did not lead to a detectable alloimmunization. We also found no differences in clinical or radiological measurements for these groups (Table 1). Study groups: * Rh(D)-negative patients received bone from Rh(D)-positive donors. ** Screening for antibodies against 5 Rhesus antigens (D, C, c, E, e).
Clinical and radiological findings The revision rate of the entire study group of 30 patients was 3.3%, due to a superficial septic complication in one patient after an AB0- and Rh-compatible allograft transplantation. All 30 acetabular components were still in place at the time of follow up. The mean Harris Hip Score at the latest follow up was 80.6 points (range 43 to 100). Significant acetabular component tilting > 5° (range 0° to 4.4°), horizontal migration ≥ 5 mm (0.1 to 6.0 mm) or vertical migration ≥ 5 mm (0 to 4.2 mm) was found in one case. All allografts remodeled with homogeneous trabeculation and no radiolucent lines at the host-allograft interface (Figure 1). Periacetabular heterotopic ossification was found in 6 cases (20%): 5 patients with grade II and 1 patient with grade III. However, in 5 of 6 cases, preoperative radiographs revealed heterotopic ossification of a similar grade. There was no correlation between increasing preoperative acetabular defects and Harris Hip Score at follow up (p = 0.46). Advanced age was negatively correlated with the Harris Hip Score, but did not reach statistical significance (R = −0.51, p = 0.21).
Figure 1. a. Preoperative anteroposterior radiograph of a 72-year-old woman, 13 years after total hip replacement, showing aseptic loosening of the acetabular component with migration into the small pelvis (type III B). b. 7 days after acetabular revision using a Mueller ring and fresh frozen non-irradiated allograft bone. Note the distinguishable allograft bone chips medial of the ring. A polyethylene cup was cemented into the ring. c. 28-month follow-up radiograph showing the implants to be unchanged with complete remodeling of the allograft bone and homogeneous trabeculation.
Conclusions
We conclude that the risk of alloimmunization against blood group antigens after AB0- or Rh-incompatible bone transplantation, with an influence on bone remodeling or the clinical outcome, is very low.
[ "Background", "Graft extraction", "Patients", "Follow up", "Statistical analysis", "Alloimmunization", "Clinical and radiological findings", "Competing interests", "Authors` contributions", "Pre-publication history" ]
[ "Aseptic loosening is the most common long-term complication in total hip arthroplasty. Revision of the failed acetabular component remains challenging due to migration of the implant during loosening and procedures to remove the primary implant often result in an extensive loss of pelvic bone. Bone grafting combined with insertion of a revision acetabular component is an established method to restore pelvic bone stock [1-4]. Because of its limited availability and poor quality in elderly patients the use of an autogenous graft is often not feasible. Therefore, allografts are utilized in most acetabular revisions. Regardless of whether treated (chemical, freeze dried, irradiated) or fresh-frozen non-irradiated allografts are used, the clinical outcome is usually good [5-7]. We have been using fresh frozen untreated allografts from our own bone bank in revision acetabular hip arthroplasty for decades with good results. Nevertheless, immunization to blood group antigens or other antigens and subsequent possible inhibition of long-term remodeling or incorporation of the transplanted bone is mentioned as an argument against the use of fresh frozen non-irradiated allografts [8,9].\nThe purpose of this study was to evaluate whether allografting of AB0- and Rh-incompatible patients (donor-recipient) leads to recipient-alloimmunization with proof of irregular erythrocyte antibodies (Rh system). In addition, clinical and radiological findings should be observed in the postoperative course.", "Femoral head bone grafts were obtained from donors through total hip arthroplasty. The grafts were not treated, immediately double packed and stored at – 80°C at our local bone bank. Besides blood group determination (AB0 and Rhesus) donors were screened for infectious diseases (HIV, Hepatitis B and -C, Syphilis) before and at least six weeks after surgery according to the local guidelines for operating a bone bank.", "We retrospectively reviewed 30 patients (13 males, 17 females). 
The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0 incompatible (donor-recipient) bone transplantation was performed in 22 cases. 6 Rh(D) negative patients received bone from Rh(D) positive patients. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90).", "All patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens.\nClinical assessments were evaluated according to the criteria of the Harris Hip Score including scoring of pain, walking and mobility of the revised hip [11].\nRadiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. 
Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. The Brooker-classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was measured on the basis of appearance of trabecular remodeling within the graft.", "The paired t-test and Pearson´s chi square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. A p-value ≤ 0.05 was considered significant.", "AB0 incompatible (donor-recipient) allograft transplantation was performed in 22 cases, Rhesus(D)-incompatible transplantation in 6 of 30 cases. No antibodies to donor blood-antigens were found in any patient in the Rhesus-system (Dd, Cc, Ee) during follow up. Especially Rh(D) incompatible transplantation did not lead to a detectable alloimmunization. We also found no differences in clinical or radiological measurements for these groups (Table 1).\nStudy groups\n* Rh(D)-negative patients received bone from Rh(D)-positive donors.\n** Screening for antibodies against 5 Rhesus antigens (D, C, c, E, e).", "The revision rate of the entire study group of 30 patients was 3.3% due to a superficial septic complication in one patient after an AB0- and Rh-compatible allograft transplantation. All 30 acetabular components were still in place at time of follow up. The mean Harris-Hip-Score at the latest follow up was 80.6 points (range 43 to 100). Significant acetabular component tilting > 5° (range 0° to 4.4°), horizontal migration ≥ 5 mm (0.1 to 6.0 mm) or vertical migration ≥ 5 mm (0 to 4.2 mm) was found in one case. All allografts remodeled with homogeneous trabeculation and no radiolucent lines at the host-allograft interface (Figures 1). 
Periacetabular heterotopic ossification was found in 6 cases (20%): 5 patients with grade II and 1 patient with grade III. However, in 5 of 6 cases, preoperative radiographs revealed heterotopic ossification of a similar grade. There was no correlation between increasing preoperative acetabular defects and Harris Hip Score at follow up (p = 0.46). Advanced age was negatively correlated with the Harris Hip Score, but did not reach statistical significance (R = −0.51, p = 0.21).\na. Preoperative anteroposterior radiograph of a 72 year old woman, 13 years after total hip replacement, showing aseptic loosening of the acetabular component with migration into the small pelvis (type III B). b. 7 days after acetabular revision using a Mueller ring and fresh frozen non-irradiated allograft bone. Note the distinguishable allograft bone chips medial of the ring. A polyethylene cup was cemented into the ring. c. 28-month follow up radiograph showing the implants to be unchanged with complete remodeling of the allograft bone and homogeneous trabeculation.", "The authors declare that they have no competing interests.", "Each author has made substantive intellectual contributions to this study: FM: participated in collecting data and study design, drafted the manuscript. MS: participated in collecting data. RS: participated in study design (immunologic research), manuscript revision. TK: participated in study design, performed the acetabular revisions, manuscript revision. IP: participated in study design, manuscript revision. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2474/13/187/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Graft extraction", "Patients", "Follow up", "Statistical analysis", "Results", "Alloimmunization", "Clinical and radiological findings", "Discussion", "Conclusions", "Competing interests", "Authors` contributions", "Pre-publication history" ]
[ "Aseptic loosening is the most common long-term complication in total hip arthroplasty. Revision of the failed acetabular component remains challenging due to migration of the implant during loosening and procedures to remove the primary implant often result in an extensive loss of pelvic bone. Bone grafting combined with insertion of a revision acetabular component is an established method to restore pelvic bone stock [1-4]. Because of its limited availability and poor quality in elderly patients the use of an autogenous graft is often not feasible. Therefore, allografts are utilized in most acetabular revisions. Regardless of whether treated (chemical, freeze dried, irradiated) or fresh-frozen non-irradiated allografts are used, the clinical outcome is usually good [5-7]. We have been using fresh frozen untreated allografts from our own bone bank in revision acetabular hip arthroplasty for decades with good results. Nevertheless, immunization to blood group antigens or other antigens and subsequent possible inhibition of long-term remodeling or incorporation of the transplanted bone is mentioned as an argument against the use of fresh frozen non-irradiated allografts [8,9].\nThe purpose of this study was to evaluate whether allografting of AB0- and Rh-incompatible patients (donor-recipient) leads to recipient-alloimmunization with proof of irregular erythrocyte antibodies (Rh system). In addition, clinical and radiological findings should be observed in the postoperative course.", " Graft extraction Femoral head bone grafts were obtained from donors through total hip arthroplasty. The grafts were not treated, immediately double packed and stored at – 80°C at our local bone bank. 
Besides blood group determination (AB0 and Rhesus) donors were screened for infectious diseases (HIV, Hepatitis B and -C, Syphilis) before and at least six weeks after surgery according to the local guidelines for operating a bone bank.\nFemoral head bone grafts were obtained from donors through total hip arthroplasty. The grafts were not treated, immediately double packed and stored at – 80°C at our local bone bank. Besides blood group determination (AB0 and Rhesus) donors were screened for infectious diseases (HIV, Hepatitis B and -C, Syphilis) before and at least six weeks after surgery according to the local guidelines for operating a bone bank.\n Patients We retrospectively reviewed 30 patients (13 males, 17 females). The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0 incompatible (donor-recipient) bone transplantation was performed in 22 cases. 6 Rh(D) negative patients received bone from Rh(D) positive patients. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90).\nWe retrospectively reviewed 30 patients (13 males, 17 females). 
The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0 incompatible (donor-recipient) bone transplantation was performed in 22 cases. 6 Rh(D) negative patients received bone from Rh(D) positive patients. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90).\n Follow up All patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens.\nClinical assessments were evaluated according to the criteria of the Harris Hip Score including scoring of pain, walking and mobility of the revised hip [11].\nRadiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. 
Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. The Brooker-classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was measured on the basis of appearance of trabecular remodeling within the graft.\nAll patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens.\nClinical assessments were evaluated according to the criteria of the Harris Hip Score including scoring of pain, walking and mobility of the revised hip [11].\nRadiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. The Brooker-classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was measured on the basis of appearance of trabecular remodeling within the graft.\n Statistical analysis The paired t-test and Pearson´s chi square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. A p-value ≤ 0.05 was considered significant.\nThe paired t-test and Pearson´s chi square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. 
Graft extraction
Femoral head bone grafts were obtained from donors undergoing total hip arthroplasty. The grafts were not treated; they were immediately double packed and stored at −80°C in our local bone bank. Besides blood group determination (AB0 and Rhesus), donors were screened for infectious diseases (HIV, hepatitis B and C, syphilis) before and at least six weeks after surgery, according to the local guidelines for operating a bone bank.
Results

Alloimmunization
AB0-incompatible (donor-recipient) allograft transplantation was performed in 22 cases, and Rhesus(D)-incompatible transplantation in 6 of 30 cases. No antibodies against donor blood group antigens of the Rhesus system (Dd, Cc, Ee) were found in any patient during follow-up. In particular, Rh(D)-incompatible transplantation did not lead to detectable alloimmunization. We also found no differences in clinical or radiological measurements between these groups (Table 1).
Table 1. Study groups.
* Rh(D)-negative patients received bone from Rh(D)-positive donors.
** Screening for antibodies against 5 Rhesus antigens (D, C, c, E, e).
Clinical and radiological findings
The revision rate in the entire study group of 30 patients was 3.3%, due to a superficial septic complication in one patient after an AB0- and Rh-compatible allograft transplantation. All 30 acetabular components were still in place at the time of follow-up. The mean Harris Hip Score at the latest follow-up was 80.6 points (range 43 to 100). Significant acetabular component tilting > 5° (range 0° to 4.4°), horizontal migration ≥ 5 mm (0.1 to 6.0 mm) or vertical migration ≥ 5 mm (0 to 4.2 mm) was found in one case. All allografts remodeled with homogeneous trabeculation and without radiolucent lines at the host-allograft interface (Figure 1). Periacetabular heterotopic ossification was found in 6 cases (20%): 5 patients with grade II and 1 patient with grade III. However, in 5 of 6 cases, preoperative radiographs already showed heterotopic ossification of a similar grade. There was no correlation between increasing preoperative acetabular defect size and the Harris Hip Score at follow-up (p = 0.46). Advanced age was negatively correlated with the Harris Hip Score, but this did not reach statistical significance (R = −0.51, p = 0.21).
Figure 1. a. Preoperative anteroposterior radiograph of a 72-year-old woman, 13 years after total hip replacement, showing aseptic loosening of the acetabular component with migration into the small pelvis (type III B). b. 7 days after acetabular revision using a Mueller ring and fresh frozen non-irradiated allograft bone.
Note the distinguishable allograft bone chips medial to the ring. A polyethylene cup was cemented into the ring. c. 28-month follow-up radiograph showing the implants unchanged, with complete remodeling of the allograft bone and homogeneous trabeculation.
Discussion
The best method of preparing and processing bone allografts is still under discussion. We use fresh frozen allografts in our department without further chemical or physical processing. The use of fresh frozen allografts became subject to the restrictive European Union Directive because of concern about possible transmission of infectious or other diseases [15]. A low residual risk of disease transmission remains, and transmissions have been described in the literature [16,17]. Sufficient donor screening is therefore essential. Except for Rh(D)-negative females of childbearing age, fresh frozen allografts are commonly transplanted AB0- and Rh-incompatible.
In the present study we found no alloimmunization in Rh(D)-incompatibly transplanted recipients. With respect to the other clinically relevant antigens of the Rh system (C, c, E, e), no irregular antibodies could be detected in the recipients after transplantation. However, a few cases of Rh(D) alloimmunization after bone grafting have been reported in the literature [18,19].
Concerning the AB0 system, the human body naturally contains antibodies against the other blood group antigens (except in genotype AB). Therefore, the detection of anti-A or anti-B alloantibodies cannot be regarded as proof of alloimmunization after AB0-incompatible transplantation.
However, AB0-incompatible transplantation might cause an increase (boosting) of the anti-A or anti-B titer [20]. Due to the retrospective design of our study, we could not investigate a possible boosting of anti-A or anti-B alloantibodies after AB0-incompatible bone transplantation. Stassen et al. found no irregular antibodies in patients before and after transplantation of frozen allogeneic bone in orthopaedic or maxillofacial surgery [21].
Although we found no antibodies, and in keeping with current standards, we still recommend transplanting only Rh(D)-negative bone to Rh(D)-negative women of childbearing age. For all other patients, our study confirms that blood-group-compatible transplantation of fresh frozen allografts is not necessary in revision hip arthroplasty.
To minimize the risk of infection, allografts can be sterilized by irradiation and chemical or physical treatment. Such treatment may destroy the bone matrix and reduce its strength, affecting long-term graft incorporation [22]. Despite this consideration, studies show good mid- and long-term results of treated bone grafts in revision hip arthroplasty [1-7,23-26].
Our study revealed 100% graft remodeling and only one case of significant acetabular component migration (6 mm migration without clinical symptoms; no surgical revision was necessary) at a mean follow-up of 23 months. We found radiological evidence of good allograft trabeculation as a sign of complete remodeling and integration into the recipient bone structure, even after only 6 months. This rapid remodeling rate has also been demonstrated in several other studies [23,24]. We have to consider that complete remodeling does not necessarily mean complete incorporation of the graft.
This could only be confirmed by biopsy and histological examination.
Acetabular reconstruction using a revision implant and allograft bone to restore pelvic bone stock is a reliable method of managing acetabular defects. The question remains whether to use treated or untreated allografts in hip revision surgery, as both show good clinical results. Advantages of untreated fresh frozen non-irradiated allografts are their cost-effectiveness, presumably better biological quality and availability in a local bone bank. A disadvantage is the slightly increased risk of disease transmission, which can be minimized by sufficient donor screening and sterile handling.

Conclusions
We conclude that the risk of alloimmunization against blood group antigens after AB0- or Rh-incompatible bone transplantation, with an influence on bone remodeling or the clinical outcome, is very low.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
Each author has made substantive intellectual contributions to this study. FM participated in collecting data and in study design, and drafted the manuscript. MS participated in collecting data. RS participated in study design (immunologic research) and in manuscript revision. TK participated in study design, performed the acetabular revisions and revised the manuscript. IP participated in study design and in manuscript revision. All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2474/13/187/prepub
Keywords: Acetabular revision, Allograft bone, Remodeling, Alloimmunization, AB0, Rhesus
Background
Aseptic loosening is the most common long-term complication in total hip arthroplasty. Revision of a failed acetabular component remains challenging because the implant migrates during loosening, and removal of the primary implant often results in extensive loss of pelvic bone. Bone grafting combined with insertion of a revision acetabular component is an established method to restore pelvic bone stock [1-4]. Because of its limited availability and poor quality in elderly patients, the use of an autogenous graft is often not feasible. Therefore, allografts are utilized in most acetabular revisions. Regardless of whether treated (chemically processed, freeze-dried, irradiated) or fresh frozen non-irradiated allografts are used, the clinical outcome is usually good [5-7]. We have been using fresh frozen untreated allografts from our own bone bank in revision acetabular hip arthroplasty for decades, with good results. Nevertheless, immunization to blood group or other antigens, with possible subsequent inhibition of long-term remodeling or incorporation of the transplanted bone, is mentioned as an argument against the use of fresh frozen non-irradiated allografts [8,9]. The purpose of this study was to evaluate whether allografting of AB0- and Rh-incompatible patients (donor-recipient) leads to recipient alloimmunization with proof of irregular erythrocyte antibodies (Rh system). In addition, clinical and radiological findings were recorded in the postoperative course.
Femoral head bone grafts were obtained from donors through total hip arthroplasty. The grafts were not treated, immediately double packed and stored at – 80°C at our local bone bank. Besides blood group determination (AB0 and Rhesus) donors were screened for infectious diseases (HIV, Hepatitis B and -C, Syphilis) before and at least six weeks after surgery according to the local guidelines for operating a bone bank. Patients We retrospectively reviewed 30 patients (13 males, 17 females). The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0 incompatible (donor-recipient) bone transplantation was performed in 22 cases. 6 Rh(D) negative patients received bone from Rh(D) positive patients. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90). We retrospectively reviewed 30 patients (13 males, 17 females). The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). 
Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0 incompatible (donor-recipient) bone transplantation was performed in 22 cases. 6 Rh(D) negative patients received bone from Rh(D) positive patients. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90). Follow up All patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens. Clinical assessments were evaluated according to the criteria of the Harris Hip Score including scoring of pain, walking and mobility of the revised hip [11]. Radiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. 
The Brooker-classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was measured on the basis of appearance of trabecular remodeling within the graft. All patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens. Clinical assessments were evaluated according to the criteria of the Harris Hip Score including scoring of pain, walking and mobility of the revised hip [11]. Radiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. The Brooker-classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was measured on the basis of appearance of trabecular remodeling within the graft. Statistical analysis The paired t-test and Pearson´s chi square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. A p-value ≤ 0.05 was considered significant. The paired t-test and Pearson´s chi square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. A p-value ≤ 0.05 was considered significant. Graft extraction: Femoral head bone grafts were obtained from donors through total hip arthroplasty. The grafts were not treated, immediately double packed and stored at – 80°C at our local bone bank. 
Besides blood group determination (AB0 and Rhesus) donors were screened for infectious diseases (HIV, Hepatitis B and -C, Syphilis) before and at least six weeks after surgery according to the local guidelines for operating a bone bank. Patients: We retrospectively reviewed 30 patients (13 males, 17 females). The study was performed in compliance with the Helsinki Declaration and approved by the local Ethics Committee (Nr. 254/2010BO2, University Tuebingen, Germany). Between 2006 and 2010 all included patients received fresh frozen cancellous allograft bone from our bone bank during acetabular revision at our institution by the corresponding author (T.K.). Acetabular defects were determined from preoperative radiographs and the intraoperative assessment using the classification introduced by Paprosky et al. [10]. Type I defects were present in 8 hips (26.7%), type II A in 9 (30%), type II B in 3 (10%), type II C in 3 (10%), type III A in 2 (6.7%), type III B in 3 (10%) and type IV with complete pelvic discontinuity in 2 (6.7%). The amount of impacted bone material was determined by the size of the defect. AB0 incompatible (donor-recipient) bone transplantation was performed in 22 cases. 6 Rh(D) negative patients received bone from Rh(D) positive patients. In most cases, revision components were implanted (Burch-Schneider reinforcement ring or Mueller ring, Zimmer GmbH, Switzerland) for acetabular reconstruction. The average age at the time of surgery was 71 years (range 48 to 90). Follow up: All patients were screened for alloimmunization to Rh blood group antigens (D, C, c, E, e) with a minimum clinical and radiographic follow-up of 6 months (mean 23 months). We did not screen for further blood group antigens. Clinical assessments were evaluated according to the criteria of the Harris Hip Score including scoring of pain, walking and mobility of the revised hip [11]. 
Radiological evaluation was performed after 7 days, 6 weeks and at the time of study-related follow up at least 6 months after surgery. The acetabular index and horizontal and vertical migration of the acetabular component were measured [12]. Acetabular component loosening was defined if the sum of horizontal and vertical migration was ≥ 5 mm, if the change in the acetabular index was ≥ 5° or if there was a progressive radiolucent line ≥ 1 mm around the whole acetabular component [13]. The Brooker-classification was used for determination of heterotopic ossification [14]. Remodeling of the allograft was measured on the basis of appearance of trabecular remodeling within the graft. Statistical analysis: The paired t-test and Pearson´s chi square test were used for intra-group analysis. The Pearson correlation coefficient was calculated to measure the dependence between the different variables. A p-value ≤ 0.05 was considered significant. Results: Alloimmunization AB0 incompatible (donor-recipient) allograft transplantation was performed in 22 cases, Rhesus(D)-incompatible transplantation in 6 of 30 cases. No antibodies to donor blood-antigens were found in any patient in the Rhesus-system (Dd, Cc, Ee) during follow up. Especially Rh(D) incompatible transplantation did not lead to a detectable alloimmunization. We also found no differences in clinical or radiological measurements for these groups (Table 1). Study groups * Rh(D)-negative patients received bone from Rh(D)-positive donors. ** Screening for antibodies against 5 Rhesus antigens (D, C, c, E, e). AB0 incompatible (donor-recipient) allograft transplantation was performed in 22 cases, Rhesus(D)-incompatible transplantation in 6 of 30 cases. No antibodies to donor blood-antigens were found in any patient in the Rhesus-system (Dd, Cc, Ee) during follow up. Especially Rh(D) incompatible transplantation did not lead to a detectable alloimmunization. 
We also found no differences in clinical or radiological measurements for these groups (Table 1). Study groups * Rh(D)-negative patients received bone from Rh(D)-positive donors. ** Screening for antibodies against 5 Rhesus antigens (D, C, c, E, e). Clinical and radiological findings The revision rate of the entire study group of 30 patients was 3.3% due to a superficial septic complication in one patient after an AB0- and Rh-compatible allograft transplantation. All 30 acetabular components were still in place at time of follow up. The mean Harris-Hip-Score at the latest follow up was 80.6 points (range 43 to 100). Significant acetabular component tilting > 5° (range 0° to 4.4°), horizontal migration ≥ 5 mm (0.1 to 6.0 mm) or vertical migration ≥ 5 mm (0 to 4.2 mm) was found in one case. All allografts remodeled with homogeneous trabeculation and no radiolucent lines at the host-allograft interface (Figures 1). Periacetabular heterotopic ossification was found in 6 cases (20%): 5 patients with grade II and 1 patient with grade III. However, in 5 of 6 cases, preoperative radiographs revealed heterotopic ossification of a similar grade. There was no correlation between increasing preoperative acetabular defects and Harris Hip Score at follow up (p = 0.46). Advanced age was negatively correlated with the Harris Hip Score, but did not reach statistical significance (R = −0.51, p = 0.21). a. Preoperative anteroposterior radiograph of a 72 year old woman, 13 years after total hip replacement, showing aseptic loosening of the acetabular component with migration into the small pelvis (type III B). b. 7 days after acetabular revision using a Mueller ring and fresh frozen non-irradiated allograft bone. Note the distinguishable allograft bone chips medial of the ring. A polyethylene cup was cemented into the ring. c. 28-month follow up radiograph showing the implants to be unchanged with complete remodeling of the allograft bone and homogeneous trabeculation. 
The revision rate of the entire study group of 30 patients was 3.3% due to a superficial septic complication in one patient after an AB0- and Rh-compatible allograft transplantation. All 30 acetabular components were still in place at time of follow up. The mean Harris-Hip-Score at the latest follow up was 80.6 points (range 43 to 100). Significant acetabular component tilting > 5° (range 0° to 4.4°), horizontal migration ≥ 5 mm (0.1 to 6.0 mm) or vertical migration ≥ 5 mm (0 to 4.2 mm) was found in one case. All allografts remodeled with homogeneous trabeculation and no radiolucent lines at the host-allograft interface (Figures 1). Periacetabular heterotopic ossification was found in 6 cases (20%): 5 patients with grade II and 1 patient with grade III. However, in 5 of 6 cases, preoperative radiographs revealed heterotopic ossification of a similar grade. There was no correlation between increasing preoperative acetabular defects and Harris Hip Score at follow up (p = 0.46). Advanced age was negatively correlated with the Harris Hip Score, but did not reach statistical significance (R = −0.51, p = 0.21). a. Preoperative anteroposterior radiograph of a 72 year old woman, 13 years after total hip replacement, showing aseptic loosening of the acetabular component with migration into the small pelvis (type III B). b. 7 days after acetabular revision using a Mueller ring and fresh frozen non-irradiated allograft bone. Note the distinguishable allograft bone chips medial of the ring. A polyethylene cup was cemented into the ring. c. 28-month follow up radiograph showing the implants to be unchanged with complete remodeling of the allograft bone and homogeneous trabeculation. Alloimmunization: AB0 incompatible (donor-recipient) allograft transplantation was performed in 22 cases, Rhesus(D)-incompatible transplantation in 6 of 30 cases. No antibodies to donor blood-antigens were found in any patient in the Rhesus-system (Dd, Cc, Ee) during follow up. 
Especially Rh(D) incompatible transplantation did not lead to a detectable alloimmunization. We also found no differences in clinical or radiological measurements for these groups (Table 1). Study groups * Rh(D)-negative patients received bone from Rh(D)-positive donors. ** Screening for antibodies against 5 Rhesus antigens (D, C, c, E, e). Clinical and radiological findings: The revision rate of the entire study group of 30 patients was 3.3% due to a superficial septic complication in one patient after an AB0- and Rh-compatible allograft transplantation. All 30 acetabular components were still in place at time of follow up. The mean Harris-Hip-Score at the latest follow up was 80.6 points (range 43 to 100). Significant acetabular component tilting > 5° (range 0° to 4.4°), horizontal migration ≥ 5 mm (0.1 to 6.0 mm) or vertical migration ≥ 5 mm (0 to 4.2 mm) was found in one case. All allografts remodeled with homogeneous trabeculation and no radiolucent lines at the host-allograft interface (Figures 1). Periacetabular heterotopic ossification was found in 6 cases (20%): 5 patients with grade II and 1 patient with grade III. However, in 5 of 6 cases, preoperative radiographs revealed heterotopic ossification of a similar grade. There was no correlation between increasing preoperative acetabular defects and Harris Hip Score at follow up (p = 0.46). Advanced age was negatively correlated with the Harris Hip Score, but did not reach statistical significance (R = −0.51, p = 0.21). a. Preoperative anteroposterior radiograph of a 72 year old woman, 13 years after total hip replacement, showing aseptic loosening of the acetabular component with migration into the small pelvis (type III B). b. 7 days after acetabular revision using a Mueller ring and fresh frozen non-irradiated allograft bone. Note the distinguishable allograft bone chips medial of the ring. A polyethylene cup was cemented into the ring. c. 
28-month follow up radiograph showing the implants to be unchanged, with complete remodeling of the allograft bone and homogeneous trabeculation. Discussion: The best method of preparation and processing of bone allografts is still under discussion. We use fresh frozen allografts in our department, with no further chemical or physical processing. The use of fresh frozen allografts became subject to the restrictive European Union Directive because of concern about possible transmission of infectious or other disease [15]. A low risk of disease transmission remains, and some such cases are described in the literature [16,17]. Thorough donor screening is therefore essential. Except for Rh(D)-negative females of childbearing age, fresh frozen allografts are commonly transplanted without regard to AB0 and Rh compatibility. In the present study we found no alloimmunization in recipients of Rh(D)-incompatible transplants. With respect to the other clinically relevant antigens of the Rh system (C, c, E, e), no irregular antibodies could be detected in the recipients after transplantation. Reviewing the literature, however, there are rare reported cases of Rh(D) alloimmunization after bone grafting [18,19]. Concerning the AB0 system, the human body naturally contains antibodies against the blood group antigens it lacks (except genotype AB). Therefore, the detection of anti-A or anti-B alloantibodies cannot be regarded as proof of alloimmunization after AB0-incompatible transplantation. However, AB0-incompatible transplantation might cause an increase of the anti-A or anti-B titer (boosting) [20]. Due to the retrospective design of our study, we could not investigate possible boosting of anti-A or anti-B alloantibodies after AB0-incompatible bone transplantation. Stassen et al. found no irregular antibodies in patients before and after transplantation of frozen allogeneic bone in orthopaedic or maxillo-facial surgery [21]. 
Although we found no antibodies, in accordance with current standards we still recommend transplanting only Rh(D)-negative bone to Rh(D)-negative women of childbearing age. For all other patients, our study confirms that blood-group-compatible transplantation of fresh frozen allografts is not necessary in revision hip arthroplasty. To minimize the risk of infection, allografts can be sterilized by irradiation and chemical or physical treatment. However, such treatment can damage the bone matrix, with a subsequent reduction in strength that may affect long-term graft incorporation [22]. Despite this consideration, studies show good mid- and long-term results of treated bone grafts in revision hip arthroplasty [1-7,23-26]. Our study revealed 100% graft remodeling and only one case of significant acetabular component migration (6 mm of migration with no clinical symptoms; no surgical revision was necessary) at a mean follow up of 23 months. We found radiological evidence of good allograft trabeculation, as a sign of complete remodeling and integration into the recipient bone structure, as early as 6 months after surgery. This rapid remodeling has also been demonstrated in several other studies [23,24]. We have to consider that complete radiological remodeling does not necessarily mean complete incorporation of the graft; this could only be confirmed by biopsy and histological examination. Acetabular reconstruction using a revision implant and allograft bone to restore pelvic bone stock is a reliable method of managing acetabular defects. The question remains whether to use treated or untreated allografts in hip revision surgery, as both show good clinical results. Advantages of untreated fresh frozen non-irradiated allografts are their cost effectiveness, presumably better biological quality and availability from a local bone bank. A disadvantage is the slightly increased risk of disease transmission, which can be minimized by thorough donor screening and sterile handling. 
Conclusions: We conclude that the risk of alloimmunization against blood group antigens after AB0- or Rh-incompatible transplantation of bone with an influence on bone-remodeling or the clinical outcome is very low. Competing interests: The authors declare that they have no competing interests. Authors' contributions: Each author has made substantive intellectual contributions to this study: FM: participated in collecting data and study design, drafted the manuscript. MS: participated in collecting data. RS: participated in study design (immunologic research), manuscript revision. TK: participated in study design, performed the acetabular revisions, manuscript revision. IP: participated in study design, manuscript revision. All authors read and approved the final manuscript. Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/13/187/prepub
Background: Possible immunization to blood group or other antigens and subsequent inhibition of remodeling or incorporation after use of untreated human bone allograft was described previously. This study presents the immunological, clinical and radiological results of 30 patients with acetabular revisions using fresh frozen non-irradiated bone allograft. Methods: AB0-incompatible (donor-recipient) bone transplantation was performed in 22 cases, Rh(D) incompatible transplantation in 6 cases. The mean follow up of 23 months included measuring Harris hip score and radiological examination with evaluation of remodeling of the bone graft, implant migration and heterotopic ossification. In addition, all patients were screened for alloimmunization to Rh blood group antigens. Results: Compared to the whole study group, there were no differences in clinical or radiological measurements for the groups with AB0- or Rh(D)-incompatible bone transplantation. The mean Harris Hip Score was 80.6. X-rays confirmed total remodeling of all allografts with no acetabular loosening. At follow up, blood tests revealed no alloimmunization to Rh blood group donor antigens. Conclusions: The use of fresh frozen non-irradiated bone allograft in acetabular revision is a reliable supplement to reconstruction. The risk of alloimmunization to donor-blood group antigens after AB0- or Rh-incompatible allograft transplantation with a negative long-term influence on bone-remodeling or the clinical outcome is negligible.
Background: Aseptic loosening is the most common long-term complication in total hip arthroplasty. Revision of a failed acetabular component remains challenging: the implant migrates during loosening, and procedures to remove the primary implant often result in extensive loss of pelvic bone. Bone grafting combined with insertion of a revision acetabular component is an established method to restore pelvic bone stock [1-4]. Because of its limited availability and poor quality in elderly patients, the use of an autogenous graft is often not feasible. Therefore, allografts are utilized in most acetabular revisions. Regardless of whether treated (chemically processed, freeze dried, irradiated) or fresh frozen non-irradiated allografts are used, the clinical outcome is usually good [5-7]. We have been using fresh frozen untreated allografts from our own bone bank in revision acetabular hip arthroplasty for decades, with good results. Nevertheless, immunization to blood group or other antigens, and a subsequent possible inhibition of long-term remodeling or incorporation of the transplanted bone, is mentioned as an argument against the use of fresh frozen non-irradiated allografts [8,9]. The purpose of this study was to evaluate whether allografting of AB0- and Rh-incompatible patients (donor-recipient) leads to recipient alloimmunization, with proof of irregular erythrocyte antibodies (Rh system). In addition, clinical and radiological findings were observed in the postoperative course. Conclusions: We conclude that the risk of alloimmunization against blood group antigens after AB0- or Rh-incompatible transplantation of bone with an influence on bone-remodeling or the clinical outcome is very low.
Background: Possible immunization to blood group or other antigens and subsequent inhibition of remodeling or incorporation after use of untreated human bone allograft was described previously. This study presents the immunological, clinical and radiological results of 30 patients with acetabular revisions using fresh frozen non-irradiated bone allograft. Methods: AB0-incompatible (donor-recipient) bone transplantation was performed in 22 cases, Rh(D) incompatible transplantation in 6 cases. The mean follow up of 23 months included measuring Harris hip score and radiological examination with evaluation of remodeling of the bone graft, implant migration and heterotopic ossification. In addition, all patients were screened for alloimmunization to Rh blood group antigens. Results: Compared to the whole study group, there were no differences in clinical or radiological measurements for the groups with AB0- or Rh(D)-incompatible bone transplantation. The mean Harris Hip Score was 80.6. X-rays confirmed total remodeling of all allografts with no acetabular loosening. At follow up, blood tests revealed no alloimmunization to Rh blood group donor antigens. Conclusions: The use of fresh frozen non-irradiated bone allograft in acetabular revision is a reliable supplement to reconstruction. The risk of alloimmunization to donor-blood group antigens after AB0- or Rh-incompatible allograft transplantation with a negative long-term influence on bone-remodeling or the clinical outcome is negligible.
4,315
255
[ 262, 79, 260, 214, 45, 117, 339, 10, 80, 16 ]
14
[ "bone", "acetabular", "rh", "patients", "hip", "allograft", "follow", "type", "revision", "study" ]
[ "acetabular reconstruction revision", "transplantation 30 acetabular", "bone allografts discussion", "untreated allografts hip", "acetabular hip arthroplasty" ]
[CONTENT] Acetabular revision | Allograft bone | Remodeling | Alloimmunization | AB0 | Rhesus [SUMMARY]
[CONTENT] Acetabular revision | Allograft bone | Remodeling | Alloimmunization | AB0 | Rhesus [SUMMARY]
[CONTENT] Acetabular revision | Allograft bone | Remodeling | Alloimmunization | AB0 | Rhesus [SUMMARY]
[CONTENT] Acetabular revision | Allograft bone | Remodeling | Alloimmunization | AB0 | Rhesus [SUMMARY]
[CONTENT] Acetabular revision | Allograft bone | Remodeling | Alloimmunization | AB0 | Rhesus [SUMMARY]
[CONTENT] Acetabular revision | Allograft bone | Remodeling | Alloimmunization | AB0 | Rhesus [SUMMARY]
[CONTENT] ABO Blood-Group System | Acetabulum | Aged | Aged, 80 and over | Arthroplasty, Replacement, Hip | Blood Group Incompatibility | Bone Transplantation | Chi-Square Distribution | Cryopreservation | Erythrocytes | Female | Femur Head | Follow-Up Studies | Hip Prosthesis | Humans | Isoantibodies | Male | Middle Aged | Ossification, Heterotopic | Postoperative Complications | Prosthesis Failure | Radiography | Reoperation | Retrospective Studies | Rh-Hr Blood-Group System | Time Factors | Tissue and Organ Harvesting | Treatment Outcome [SUMMARY]
[CONTENT] ABO Blood-Group System | Acetabulum | Aged | Aged, 80 and over | Arthroplasty, Replacement, Hip | Blood Group Incompatibility | Bone Transplantation | Chi-Square Distribution | Cryopreservation | Erythrocytes | Female | Femur Head | Follow-Up Studies | Hip Prosthesis | Humans | Isoantibodies | Male | Middle Aged | Ossification, Heterotopic | Postoperative Complications | Prosthesis Failure | Radiography | Reoperation | Retrospective Studies | Rh-Hr Blood-Group System | Time Factors | Tissue and Organ Harvesting | Treatment Outcome [SUMMARY]
[CONTENT] ABO Blood-Group System | Acetabulum | Aged | Aged, 80 and over | Arthroplasty, Replacement, Hip | Blood Group Incompatibility | Bone Transplantation | Chi-Square Distribution | Cryopreservation | Erythrocytes | Female | Femur Head | Follow-Up Studies | Hip Prosthesis | Humans | Isoantibodies | Male | Middle Aged | Ossification, Heterotopic | Postoperative Complications | Prosthesis Failure | Radiography | Reoperation | Retrospective Studies | Rh-Hr Blood-Group System | Time Factors | Tissue and Organ Harvesting | Treatment Outcome [SUMMARY]
[CONTENT] ABO Blood-Group System | Acetabulum | Aged | Aged, 80 and over | Arthroplasty, Replacement, Hip | Blood Group Incompatibility | Bone Transplantation | Chi-Square Distribution | Cryopreservation | Erythrocytes | Female | Femur Head | Follow-Up Studies | Hip Prosthesis | Humans | Isoantibodies | Male | Middle Aged | Ossification, Heterotopic | Postoperative Complications | Prosthesis Failure | Radiography | Reoperation | Retrospective Studies | Rh-Hr Blood-Group System | Time Factors | Tissue and Organ Harvesting | Treatment Outcome [SUMMARY]
[CONTENT] ABO Blood-Group System | Acetabulum | Aged | Aged, 80 and over | Arthroplasty, Replacement, Hip | Blood Group Incompatibility | Bone Transplantation | Chi-Square Distribution | Cryopreservation | Erythrocytes | Female | Femur Head | Follow-Up Studies | Hip Prosthesis | Humans | Isoantibodies | Male | Middle Aged | Ossification, Heterotopic | Postoperative Complications | Prosthesis Failure | Radiography | Reoperation | Retrospective Studies | Rh-Hr Blood-Group System | Time Factors | Tissue and Organ Harvesting | Treatment Outcome [SUMMARY]
[CONTENT] ABO Blood-Group System | Acetabulum | Aged | Aged, 80 and over | Arthroplasty, Replacement, Hip | Blood Group Incompatibility | Bone Transplantation | Chi-Square Distribution | Cryopreservation | Erythrocytes | Female | Femur Head | Follow-Up Studies | Hip Prosthesis | Humans | Isoantibodies | Male | Middle Aged | Ossification, Heterotopic | Postoperative Complications | Prosthesis Failure | Radiography | Reoperation | Retrospective Studies | Rh-Hr Blood-Group System | Time Factors | Tissue and Organ Harvesting | Treatment Outcome [SUMMARY]
[CONTENT] acetabular reconstruction revision | transplantation 30 acetabular | bone allografts discussion | untreated allografts hip | acetabular hip arthroplasty [SUMMARY]
[CONTENT] acetabular reconstruction revision | transplantation 30 acetabular | bone allografts discussion | untreated allografts hip | acetabular hip arthroplasty [SUMMARY]
[CONTENT] acetabular reconstruction revision | transplantation 30 acetabular | bone allografts discussion | untreated allografts hip | acetabular hip arthroplasty [SUMMARY]
[CONTENT] acetabular reconstruction revision | transplantation 30 acetabular | bone allografts discussion | untreated allografts hip | acetabular hip arthroplasty [SUMMARY]
[CONTENT] acetabular reconstruction revision | transplantation 30 acetabular | bone allografts discussion | untreated allografts hip | acetabular hip arthroplasty [SUMMARY]
[CONTENT] acetabular reconstruction revision | transplantation 30 acetabular | bone allografts discussion | untreated allografts hip | acetabular hip arthroplasty [SUMMARY]
[CONTENT] bone | acetabular | rh | patients | hip | allograft | follow | type | revision | study [SUMMARY]
[CONTENT] bone | acetabular | rh | patients | hip | allograft | follow | type | revision | study [SUMMARY]
[CONTENT] bone | acetabular | rh | patients | hip | allograft | follow | type | revision | study [SUMMARY]
[CONTENT] bone | acetabular | rh | patients | hip | allograft | follow | type | revision | study [SUMMARY]
[CONTENT] bone | acetabular | rh | patients | hip | allograft | follow | type | revision | study [SUMMARY]
[CONTENT] bone | acetabular | rh | patients | hip | allograft | follow | type | revision | study [SUMMARY]
[CONTENT] allografts | bone | irradiated | acetabular | revision acetabular | fresh | fresh frozen | frozen | long | pelvic bone [SUMMARY]
[CONTENT] type | acetabular | bone | 10 | 10 type | patients | type ii | months | ii | local [SUMMARY]
[CONTENT] allograft | follow | found | grade | mm | acetabular | cases | patient | transplantation | hip [SUMMARY]
[CONTENT] bone influence bone remodeling | alloimmunization blood group | antigens ab0 rh | risk alloimmunization | risk alloimmunization blood | risk alloimmunization blood group | bone remodeling clinical outcome | bone remodeling clinical | alloimmunization blood group antigens | alloimmunization blood [SUMMARY]
[CONTENT] bone | acetabular | rh | type | transplantation | hip | patients | revision | allograft | incompatible [SUMMARY]
[CONTENT] bone | acetabular | rh | type | transplantation | hip | patients | revision | allograft | incompatible [SUMMARY]
[CONTENT] ||| 30 [SUMMARY]
[CONTENT] 22 | 6 ||| 23 months | Harris ||| [SUMMARY]
[CONTENT] ||| Harris Hip Score | 80.6 ||| ||| [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] ||| 30 ||| 22 | 6 ||| 23 months | Harris ||| ||| ||| Harris Hip Score | 80.6 ||| ||| ||| ||| [SUMMARY]
[CONTENT] ||| 30 ||| 22 | 6 ||| 23 months | Harris ||| ||| ||| Harris Hip Score | 80.6 ||| ||| ||| ||| [SUMMARY]
Reference ranges for bone mineral density and prevalence of osteoporosis in Vietnamese men and women.
21831301
The aim of this study was to examine the effect of different reference ranges in bone mineral density on the diagnosis of osteoporosis.
BACKGROUND
This cross-sectional study involved 357 men and 870 women aged between 18 and 89 years, who were randomly sampled from various districts within Ho Chi Minh City, Vietnam. BMD at the femoral neck, lumbar spine and whole body was measured by DXA (Hologic QDR4500). Polynomial regression models and bootstraps method were used to determine peak BMD and standard deviation (SD). Based on the two parameters, we computed T-scores (denoted by TVN) for each individual in the study. A similar diagnosis was also done based on T-scores provided by the densitometer (TDXA), which is based on the US White population (NHANES III). We then compared the concordance between TVN and TDXA in the classification of osteoporosis. Osteoporosis was defined according to the World Health Organization criteria.
METHODS
In post-menopausal women, the prevalence of osteoporosis based on femoral neck TVN was 29%, but when the diagnosis was based on TDXA, the prevalence was 44%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10%, which was lower than TDXA-based prevalence (30%). Among 177 women who were diagnosed with osteoporosis by TDXA, 35% were actually osteopenia by TVN. The kappa-statistic was 0.54 for women and 0.41 for men.
RESULTS
These data suggest that the T-scores provided by the Hologic QDR4500 over-diagnosed osteoporosis in Vietnamese men and women. This over-diagnosis could lead to over-treatment and influence the decision of recruitment of participants in clinical trials.
CONCLUSION
[ "Absorptiometry, Photon", "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Asian People", "Bone Density", "Cross-Sectional Studies", "Female", "Humans", "Male", "Middle Aged", "Osteoporosis", "Osteoporosis, Postmenopausal", "Prevalence", "Reference Values", "Vietnam", "Young Adult" ]
3163638
Background
Osteoporosis and its consequence of fragility fracture represent a major public health problem not only in developed countries, but in developing countries as well [1]. The number of fractures in Asia is higher than that in all European countries combined. Of all the fractures in the world, approximately 17% were found to occur in Southeast Asia and 29% in the West Pacific, compared with 35% occurring in Europe [2]. However, the prevalence of and risk factors for osteoporosis in Asian populations have not been well documented. Part of the problem is the lack of well-defined criteria for the diagnosis of osteoporosis in Asian men and women. Currently, the operational definition of osteoporosis is based on a measurement of bone mineral density (BMD), which is the most robust predictor of fracture risk [3,4]. An individual's BMD is often expressed in terms of its peak level and standard deviation to yield a T-score. The two parameters (i.e., peak BMD level and standard deviation) are commonly derived from a well characterized population of young individuals [5]. An individual's T-score is the number of standard deviations by which their BMD differs from the peak BMD achieved between the ages of 20 and 30 years [6,7]. However, previous studies have suggested that peak BMD differs among ethnicities and between men and women [8,9]. Therefore, the diagnosis of osteoporosis should ideally be based on sex- and ethnicity-specific reference ranges [10,11]. Dual-energy X-ray absorptiometry (DXA) is considered the gold standard method for measuring BMD [6]. In recent years, DXA has been introduced to many Asian countries, including Vietnam, and is commonly used for the diagnosis of osteoporosis and for treatment decisions. In the absence of sex-specific reference data for the local population, most doctors use the T-scores provided by the densitometer as the reference value when making a diagnosis for an individual. 
However, it is not clear whether the reference database used in the derivation of T-scores in these densitometers is appropriate for a local population. We hypothesized that there is considerable discrepancy in the diagnosis of osteoporosis between reference data sets. The present study was designed to test this hypothesis by determining the reference range of peak bone density for an Asian population, and then comparing the concordance between a population-specific T-score and the DXA-based T-score in the diagnosis of osteoporosis.
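The T-score arithmetic described above is simple enough to sketch in Python. The peak-BMD and SD values below are purely illustrative, not the study's Vietnamese reference data:

```python
def t_score(bmd, peak_bmd, sd):
    """Number of standard deviations an individual's BMD lies below or
    above the young-adult peak BMD of the reference population."""
    return (bmd - peak_bmd) / sd

# Illustrative reference values only (NOT the study's data):
# assume peak femoral-neck BMD of 0.80 g/cm2 with an SD of 0.11.
print(round(t_score(0.52, 0.80, 0.11), 2))  # -2.55, i.e. osteoporosis by the WHO cut-off
```

This makes concrete why the choice of reference population matters: a different peak BMD or SD shifts every T-score, and therefore shifts who crosses the -2.5 diagnostic threshold.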
Methods
Study design and participants The study was designed as a cross-sectional investigation, with the setting being Ho Chi Minh City, a major city in Vietnam. The research protocol and procedures were approved by the Scientific Committee of the People's Hospital 115 and Pham Ngoc Thach University of Medicine. All volunteer participants were provided with full information about the study's purpose and gave informed consent to participate in the study, according to the principles of medical ethics of the World Health Organization. We used a simple random sampling technique for identifying potential participants. We approached community organizations, including churches and temples, and obtained the list of members, and then randomly selected individuals aged 18 or above. We sent a letter of invitation to the selected individuals. The participants received a free health check-up and lipid analyses, but did not receive any financial incentive. No invited participants refused to participate in the study. Participants were excluded from the study if they had diseases deemed to affect bone metabolism, such as hyperthyroidism, hyperparathyroidism, renal failure, malabsorption syndrome, alcoholism, chronic colitis, multiple myeloma, leukemia, and chronic arthritis. The study was designed as a cross-sectional investigation, with the setting being Ho Chi Minh City, a major city in Vietnam. The research protocol and procedures were approved by the Scientific Committee of the People's Hospital 115 and Pham Ngoc Thach University of Medicine. All volunteer participants were provided with full information about the study's purpose and gave informed consent to participate in the study, according to the principles of medical ethics of the World Health Organization. We used a simple random sampling technique for identifying potential participants. 
We approached community organizations, including churches and temples, and obtained the list of members, and then randomly selected individuals aged 18 or above. We sent a letter of invitation to the selected individuals. The participants received a free health check-up and lipid analyses, but did not receive any financial incentive. No invited participants refused to participate in the study. Participants were excluded from the study if they had diseases deemed to affect bone metabolism, such as hyperthyroidism, hyperparathyroidism, renal failure, malabsorption syndrome, alcoholism, chronic colitis, multiple myeloma, leukemia, and chronic arthritis. Measurements and data collection Data collection was done by trained research doctors and nurses using a validated questionnaire. The questionnaire solicited information, including anthropometry, lifestyle factors, dietary intakes, physical activity, and clinical history. Anthropometric parameters including age, weight, and standing height were obtained. Body weight was measured on an electronic scale with indoor clothing without shoes. Height was determined without shoes on a portable stadiometer with the mandible plane parallel to the floor. Each participant was asked to provide information on current and past smoking habits. Smoking was quantified in terms of the number of pack-years consumed in each ten-year interval age group. Alcohol intake in average numbers of standard drinks per day, at present as well as within the last 5 years, was obtained. Clinical data including blood pressure, pulse, reproductive history (i.e. parity, age of menarche, and age of menopause), and medical history (i.e. previous fracture, previous and current use of pharmacological therapies) were also obtained. Data collection was done by trained research doctors and nurses using a validated questionnaire. The questionnaire solicited information, including anthropometry, lifestyle factors, dietary intakes, physical activity, and clinical history. 
Anthropometric parameters including age, weight, standing height were obtained. Body weight was measured on an electronic scale with indoor clothing without shoes. Height was determined without shoes on a portable stadiometer with mandible plane parallel to the floor. Each participant was asked to provide information on current and past smoking habits. Smoking was quantified in terms of the number of pack-years consumed in each ten-year interval age group. Alcohol intake in average numbers of standard drinks per day, at present as well as within the last 5 years, was obtained. Clinical data including blood pressure, pulse, and reproductive history (i.e. parity, age of menarche, and age of menopause), medical history (i.e. previous fracture, previous and current use of pharmacological therapies) were also obtained. Bone mineral density Areal BMD was measured at the lumbar spine (L2-L4), femoral neck, and whole body using a Hologic QDR 4500 (Hologic Corp, Madison, WI, USA). The short-term in vivo precision expressed as the coefficient of variation was 1.8% for the lumbar spine and 1.5% for the hip. The machine was standardized by standard phantom before each measurement. The densitometer provided a T-score for each measured site. In this paper, the T-score is referred to as TDXA. We used the WHO criteria to categorize TDXA into three groups: osteoporosis if the T-score is equal to or lower than -2.5; osteopenia if T-score is between -1 and -2.5; and normal if T-score is equal or greater than -1. Areal BMD was measured at the lumbar spine (L2-L4), femoral neck, and whole body using a Hologic QDR 4500 (Hologic Corp, Madison, WI, USA). The short-term in vivo precision expressed as the coefficient of variation was 1.8% for the lumbar spine and 1.5% for the hip. The machine was standardized by standard phantom before each measurement. The densitometer provided a T-score for each measured site. In this paper, the T-score is referred to as TDXA. 
We used the WHO criteria to categorize TDXA into three groups: osteoporosis if the T-score is equal to or lower than -2.5; osteopenia if the T-score is between -1 and -2.5; and normal if the T-score is equal to or greater than -1. Determination of reference range In this analysis, we made use of the functional relationship between BMD and age to construct a reference range. A series of polynomial regression models (up to the third degree) were fitted to femoral neck, total hip and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)² + β3(age)³, where α is the intercept and β1, β2, and β3 are regression parameters, which were estimated from the observed data. Reduced models (i.e., quadratic and linear models) were considered, and the "final" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and ages at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and ages of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with R statistical software [12]. Based on the parameters in the polynomial regression models, we determined the means of peak BMD and standard deviation (SD) for spine and femoral neck BMD. Using the two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely, osteoporosis, osteopenia, and normal. The concordance between TDXA and TVN was then assessed by the kappa statistic. In this analysis, we made use of the functional relationship between BMD and age to construct a reference range. A series of polynomial regression models (up to the third degree) were fitted to femoral neck, total hip and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)² + β3(age)³, where α is the intercept and β1, β2, and β3 are regression parameters, which were estimated from the observed data. 
Reduced models (i.e., quadratic and linear models) were considered, and the "final" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and ages at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and ages of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with R statistical software [12]. Based on the parameters in the polynomial regression models, we determined the means of peak BMD and standard deviation (SD) for spine and femoral neck BMD. Using the two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely, osteoporosis, osteopenia, and normal. The concordance between TDXA and TVN was then assessed by the kappa statistic.
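The pipeline described above (cubic fit of BMD on age, peak BMD read off the fitted curve, WHO classification of the resulting T-scores, and a kappa statistic for concordance) can be sketched as follows. This is a rough Python analogue on synthetic data; the study itself used R, and the AIC model selection and bootstrap confidence intervals are omitted for brevity:

```python
import numpy as np

def fit_peak_bmd(age, bmd, degree=3):
    """Fit BMD = a + b1*age + b2*age^2 + b3*age^3 by least squares and
    return (peak BMD, age at peak) from the fitted curve."""
    coefs = np.polyfit(age, bmd, degree)              # highest power first
    grid = np.linspace(min(age), max(age), 1000)
    fitted = np.polyval(coefs, grid)
    i = int(np.argmax(fitted))
    return float(fitted[i]), float(grid[i])

def who_category(t):
    """WHO criteria: osteoporosis if T <= -2.5, osteopenia if
    -2.5 < T < -1, normal if T >= -1."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two classifications."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    p_observed = float(np.mean(a == b))
    p_expected = sum(float(np.mean(a == c)) * float(np.mean(b == c))
                     for c in np.union1d(a, b))
    return (p_observed - p_expected) / (1.0 - p_expected)
```

Each participant's T-score would then be `(bmd - peak_bmd) / sd`, classified with `who_category`, and the two resulting label sets (population-specific TVN versus densitometer-based TDXA) compared with `cohen_kappa`.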
null
null
Conclusion
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/12/182/prepub
[ "Background", "Study design and participants", "Measurements and data collection", "Bone mineral density", "Determination of reference range", "Results", "Peak bone mineral density", "Prevalence of osteoporosis", "Discussion", "Conclusion" ]
[ "Osteoporosis and its consequence of fragility fracture represent a major public health problem not only in developed countries, but in developing countries as well [1]. The number of fractures in Asia is higher than that in European countries combined. Of all the fractures in the world, approximately 17% was found to occur in Southeast Asia, 29% in West Pacific, as compared to 35% occurring in Europe [2]. However, the prevalence of and risk factors for osteoporosis in Asian populations have not been well documented. Part of the problem is due to the lack of well-defined criteria for the diagnosis of osteoporosis in Asian men and women.\nCurrently, the operational definition of osteoporosis is based on a measurement of bone mineral density (BMD), which is the most robust predictor of fracture risk [3,4]. BMD of an individual is often expressed in terms of its peak level and standard deviation to yield a T-score. The two parameters (i.e., peak BMD level and standard deviation) are commonly derived from a well characterized population of young individuals [5]. An individual's T-score is actually the number of standard deviations from the peak BMD achieved during the age of 20 and 30 years [6,7]. However, previous studies have suggested that peak BMD is different among ethnicities and between men and women [8,9]. Therefore, the diagnosis of osteoporosis should ideally be based on sex- and ethnicity-specific reference range [10,11].\nDual-energy X-ray absorptiometry (DXA) is considered the gold standard method for measuring BMD [6]. In recent years, DXA has been introduced to many Asian countries, including Vietnam, and is commonly used for the diagnosis of osteoporosis and treatment decision. In the absence of sex-specific reference data for local population, most doctors used the T-scores provided by the densitometer as a referent value to make diagnosis for an individual. 
However, it is not clear whether the reference database used in the derivation of T-scores in these densitometers is appropriate for a local population. We hypothesize that there is considerable discrepancy in the diagnosis of osteoporosis between reference databases. The present study was designed to test this hypothesis by determining the reference range of peak bone density for an Asian population, and then comparing the concordance between a population-specific T-score and the DXA-based T-score in the diagnosis of osteoporosis.", "The study was designed as a cross-sectional investigation set in Ho Chi Minh City, a major city in Vietnam. The research protocol and procedures were approved by the Scientific Committee of the People's Hospital 115 and Pham Ngoc Thach University of Medicine. All volunteer participants were provided with full information about the study's purpose and gave informed consent to participate in the study, according to the principles of medical ethics of the World Health Organization.\nWe used a simple random sampling technique to identify potential participants. We approached community organizations, including churches and temples, obtained their lists of members, and then randomly selected individuals aged 18 or above. We sent a letter of invitation to the selected individuals. The participants received a free health check-up and lipid analyses, but did not receive any financial incentive. No invited participant refused to participate in the study.\nParticipants were excluded from the study if they had diseases deemed to affect bone metabolism, such as hyperthyroidism, hyperparathyroidism, renal failure, malabsorption syndrome, alcoholism, chronic colitis, multiple myeloma, leukemia, and chronic arthritis.", "Data collection was done by trained research doctors and nurses using a validated questionnaire. 
The questionnaire solicited information on anthropometry, lifestyle factors, dietary intakes, physical activity, and clinical history. Anthropometric and demographic parameters, including age, weight, and standing height, were obtained. Body weight was measured on an electronic scale in indoor clothing without shoes. Height was determined without shoes on a portable stadiometer with the mandible plane parallel to the floor. Each participant was asked to provide information on current and past smoking habits. Smoking was quantified in terms of the number of pack-years consumed in each ten-year age interval. Alcohol intake was recorded as the average number of standard drinks per day, both at present and within the last 5 years. Clinical data, including blood pressure, pulse, reproductive history (i.e., parity, age of menarche, and age of menopause), and medical history (i.e., previous fracture, and previous and current use of pharmacological therapies), were also obtained.", "Areal BMD was measured at the lumbar spine (L2-L4), femoral neck, and whole body using a Hologic QDR 4500 (Hologic Corp, Madison, WI, USA). The short-term in vivo precision, expressed as the coefficient of variation, was 1.8% for the lumbar spine and 1.5% for the hip. The machine was standardized with a standard phantom before each measurement.\nThe densitometer provided a T-score for each measured site. In this paper, this T-score is referred to as TDXA. We used the WHO criteria to categorize TDXA into three groups: osteoporosis if the T-score is equal to or lower than -2.5; osteopenia if the T-score is between -1 and -2.5; and normal if the T-score is equal to or greater than -1.", "In this analysis, we made use of the functional relationship between BMD and age to construct a reference range. 
A series of polynomial regression models (up to the third degree) were fitted to femoral neck, total hip, and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)² + β3(age)³, where α is the intercept and β1, β2, and β3 are regression parameters, which were estimated from the observed data. Reduced models (i.e., quadratic and linear models) were considered, and the \"final\" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and the age at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and the age of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with R statistical software [12].\nBased on the parameters in the polynomial regression models, we determined the means of peak BMD and standard deviation (SD) for spine and femoral neck BMD. Using the two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely, osteoporosis, osteopenia, and normal. The concordance between TDXA and TVN was then assessed by the kappa statistic.", "In total, 1227 individuals (357 men and 870 women) aged 18 years or older participated in the study. In this sample, 58.5% of men and 51% of women were aged 50+ years. As expected, BMD in men was higher than in women, by ~12% at the femoral neck and by ~7% at the lumbar spine (Table 1).\nCharacteristics of participants\nValues are mean (standard deviation)\nBMD, bone mineral density\nP-values were derived from unpaired t-tests for differences between men and women.\n Peak bone mineral density The relationship between BMD and age was best described by the third-degree polynomial regression model (Figures 1 and 2). 
The relationship was characterized by three phases: BMD increased between the ages of 18 and 25, remained steady between the ages of 25 and 45, and then gradually declined after the age of 45. The age-related decrease in BMD in women was greater than that in men. For example, compared with lumbar spine BMD among women aged 20-30 years, lumbar spine BMD among women aged 70+ years was 27% lower; in men, the corresponding decrease was ~15%. A similar sex-differential decline was also observed in femoral neck BMD.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for men.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for women.\nBased on the parameters of the polynomial regression model (Table 2), we estimated pBMD and the age of pBMD for men and women (Table 3). Consistently, pBMD was higher in men than in women, but the difference was dependent on skeletal site. For example, at the lumbar spine, pBMD in men (1.05 ± 0.12 g/cm²; mean ± SD) was about 9% higher than in women (0.96 ± 0.11 g/cm²). Similarly, pBMD at the femoral neck in men (0.85 ± 0.13 g/cm²) was 6% higher than in women (0.80 ± 0.11 g/cm²). The age at which pBMD was reached was younger in women than in men. For example, at the femoral neck, the age of pBMD in women was 22.4 years (95% CI: 19 - 24), which was earlier than in men (26 years; 95% CI: 24 - 29). This trend was also observed at the lumbar spine (25 years in women and 27 years in men).\nEstimates of parameters of the third-degree polynomial regression model\nValues are coefficients (standard error) of the model BMD = α + β1(age) + β2(age)² + β3(age)³\nR², coefficient of determination, indicates the proportion of variance in BMD that could be \"explained\" by the polynomial model; SEE, standard error of estimate.\nPeak bone mineral density (pBMD) and age of pBMD in men and women\nValues are mean (standard deviation) and mean (95% confidence interval)\n Prevalence of osteoporosis Based on pBMD and SD, T-scores were calculated for men aged 50+ years and post-menopausal women; these are referred to as TVN, to differentiate them from TDXA, which was automatically provided by the densitometer (Table 4). In women aged over 50 years, TVN was higher than TDXA at the femoral neck (-1.84 ± 0.96 vs -2.27 ± 0.96; P < 0.0001) and at the lumbar spine (-1.61 ± 1.28 vs -2.39 ± 1.31; P < 0.0001). In men aged over 50 years, the same trend was also found at the femoral neck (-1.50 ± 0.90 vs -2.01 ± 0.86; P < 0.0001) and at the lumbar spine (-1.33 ± 1.33 vs -1.81 ± 1.43; P < 0.0001).\nPrevalence of osteoporosis and osteopenia in men and women aged 50+ years\nData are numbers of individuals in each subgroup, and percentage of the sex-specific total.\nAs expected, although the absolute values of TVN and TDXA were different, the correlation between them was high (r > 0.98). The linear equation (without intercept) linking the two scores is as follows: at the femoral neck, TVN = 1.177 × TDXA for women and TVN = 1.246 × TDXA for men; at the lumbar spine, TVN = 1.298 × TDXA for women and TVN = 1.207 × TDXA for men. 
The equations suggest that, for example, at the femoral neck, TVN was higher than TDXA by 0.18 SD (in men) to 0.25 SD (in women).\nThe concordance between the two diagnoses of osteoporosis (i.e., TVN and TDXA) is shown in Table 5. TDXA tended to over-diagnose osteoporosis relative to TVN. In women aged 50+ years, using femoral neck TVN, the prevalence of osteoporosis was 28.6%, but when the diagnosis was based on TDXA, the prevalence was 43.7%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10.4%, only about a third of the TDXA-based prevalence (29.6%). The discrepancy mainly occurred in the osteopenic group. For example, among 40 men diagnosed by TDXA as having osteoporosis, 65% (n = 26) were actually classified as having osteopenia by TVN. Similarly, among 177 women who were diagnosed with osteoporosis by TDXA, 35% (n = 61) were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men.\nConcordance in diagnosis of osteoporosis between DXA-provided T-scores and actual T-scores\nValues are shown as numbers of individuals in each subgroup, and percentage of the row-wise total. Kappa statistic for men: κ = 0.41 (95% CI: 0.30 - 0.52); women: κ = 0.54 (95% CI: 0.48 - 0.60)\nUsing the National Health and Nutrition Examination Survey (NHANES) reference data for US Whites (aged between 20 and 29) [13], we computed the T-score for each individual aged 50+ years, and classified each into the normal, osteopenia, or osteoporosis group. We found that the prevalence of osteoporosis was 30% (n = 40/135) in men and 43% (n = 160/368) in women. These prevalence rates are almost identical to the prevalence derived from TDXA. 
In fact, the concordance in osteoporosis classification between TDXA and NHANES data was 100% for men and 96% for women.", "The relationship between BMD and age was best described by the third-degree polynomial regression model (Figures 1 and 2). The relationship was characterized by three phases: BMD increased between the ages of 18 and 25, remained steady between the ages of 25 and 45, and then gradually declined after the age of 45. The age-related decrease in BMD in women was greater than that in men. For example, compared with lumbar spine BMD among women aged 20-30 years, lumbar spine BMD among women aged 70+ years was 27% lower; in men, the corresponding decrease was ~15%. 
A similar sex-differential decline was also observed in femoral neck BMD.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for men.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for women.\nBased on the parameters of the polynomial regression model (Table 2), we estimated pBMD and the age of pBMD for men and women (Table 3). Consistently, pBMD was higher in men than in women, but the difference was dependent on skeletal site. For example, at the lumbar spine, pBMD in men (1.05 ± 0.12 g/cm²; mean ± SD) was about 9% higher than in women (0.96 ± 0.11 g/cm²). Similarly, pBMD at the femoral neck in men (0.85 ± 0.13 g/cm²) was 6% higher than in women (0.80 ± 0.11 g/cm²). The age at which pBMD was reached was younger in women than in men. For example, at the femoral neck, the age of pBMD in women was 22.4 years (95% CI: 19 - 24), which was earlier than in men (26 years; 95% CI: 24 - 29). This trend was also observed at the lumbar spine (25 years in women and 27 years in men).\nEstimates of parameters of the third-degree polynomial regression model\nValues are coefficients (standard error) of the model BMD = α + β1(age) + β2(age)² + β3(age)³\nR², coefficient of determination, indicates the proportion of variance in BMD that could be \"explained\" by the polynomial model; SEE, standard error of estimate.\nPeak bone mineral density (pBMD) and age of pBMD in men and women\nValues are mean (standard deviation) and mean (95% confidence interval)", "Based on pBMD and SD, T-scores were calculated for men aged 50+ years and post-menopausal women; these are referred to as TVN, to differentiate them from TDXA, which was automatically provided by the densitometer (Table 4). In women aged over 50 years, TVN was higher than TDXA at the femoral neck (-1.84 ± 0.96 vs -2.27 ± 0.96; P < 0.0001) and at the lumbar spine (-1.61 ± 1.28 vs -2.39 ± 1.31; P < 0.0001). 
In men aged over 50 years, the same trend was also found at the femoral neck (-1.50 ± 0.90 vs -2.01 ± 0.86; P < 0.0001) and at the lumbar spine (-1.33 ± 1.33 vs -1.81 ± 1.43; P < 0.0001).\nPrevalence of osteoporosis and osteopenia in men and women aged 50+ years\nData are numbers of individuals in each subgroup, and percentage of the sex-specific total.\nAs expected, although the absolute values of TVN and TDXA were different, the correlation between them was high (r > 0.98). The linear equation (without intercept) linking the two scores is as follows: at the femoral neck, TVN = 1.177 × TDXA for women and TVN = 1.246 × TDXA for men; at the lumbar spine, TVN = 1.298 × TDXA for women and TVN = 1.207 × TDXA for men. The equations suggest that, for example, at the femoral neck, TVN was higher than TDXA by 0.18 SD (in men) to 0.25 SD (in women).\nThe concordance between the two diagnoses of osteoporosis (i.e., TVN and TDXA) is shown in Table 5. TDXA tended to over-diagnose osteoporosis relative to TVN. In women aged 50+ years, using femoral neck TVN, the prevalence of osteoporosis was 28.6%, but when the diagnosis was based on TDXA, the prevalence was 43.7%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10.4%, only about a third of the TDXA-based prevalence (29.6%). The discrepancy mainly occurred in the osteopenic group. For example, among 40 men diagnosed by TDXA as having osteoporosis, 65% (n = 26) were actually classified as having osteopenia by TVN. Similarly, among 177 women who were diagnosed with osteoporosis by TDXA, 35% (n = 61) were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men.\nConcordance in diagnosis of osteoporosis between DXA-provided T-scores and actual T-scores\nValues are shown as numbers of individuals in each subgroup, and percentage of the row-wise total. 
Kappa statistic for men: κ = 0.41 (95% CI: 0.30 - 0.52); women: κ = 0.54 (95% CI: 0.48 - 0.60)\nUsing the National Health and Nutrition Examination Survey (NHANES) reference data for US Whites (aged between 20 and 29) [13], we computed the T-score for each individual aged 50+ years, and classified each into the normal, osteopenia, or osteoporosis group. We found that the prevalence of osteoporosis was 30% (n = 40/135) in men and 43% (n = 160/368) in women. These prevalence rates are almost identical to the prevalence derived from TDXA. In fact, the concordance in osteoporosis classification between TDXA and NHANES data was 100% for men and 96% for women.", "To assess the magnitude of the problem, it is essential to establish an appropriate measure for the diagnosis of osteoporosis. Currently, osteoporosis is operationally defined in terms of BMD, which is compared to a normative database [5]. However, it is well known that measured values of BMD differ across ethnicities [9,11,14], and the referent database should therefore be ethnicity-specific. In this study, we have shown that there was a considerable discrepancy in the diagnosis of osteoporosis between referent data derived from the local population and referent data provided by the densitometer.\nIt is clear from this analysis that the densitometer reference data over-diagnosed osteoporosis in the Vietnamese population. Using the local normative data, we found that the prevalence of osteoporosis in Vietnamese women and men aged 50+ years was 29% and 10%, respectively. However, using the DXA-provided normative data, the prevalence in women and men was 44% and 30%, respectively. The discrepancy raises the question of which T-score is more appropriate. In a recent study of 328 Vietnamese postmenopausal women using a DXA Lunar Prodigy, the prevalence of osteoporosis was 26% [15]. Another, smaller study in Vietnamese postmenopausal women living in the United States showed that this prevalence was 37% [16]. 
The prevalence of osteoporosis in postmenopausal Thai women was around 29% [17]. In Caucasians, the prevalence of osteoporosis in postmenopausal women ranged between 20% and 25% [18,19]. In summary, most of these studies in Asian and Caucasian women found that the prevalence of osteoporosis ranged between 20 and 30% [18-20], which is highly consistent with the present study's estimate. These data also suggest that the densitometer-provided T-score is not appropriate for the diagnosis of osteoporosis in Vietnamese women.\nWhy were there differences between TVN and TDXA? The most \"proximate\" explanation is that there were differences in peak BMD and standard deviation between the Hologic normative data and the present normative data. However, the standard deviation in BMD is very stable across populations; therefore, the main reason could be that the peak BMD value provided by the Hologic densitometer was higher than the peak BMD in Vietnamese. Assuming that the SD of femoral neck BMD was 0.12 g/cm², one could infer from TDXA that peak BMD was 0.92 and 0.86 g/cm² in men and women, respectively. These values are identical to the femoral neck BMD reference values for US White men and women in the National Health and Nutrition Examination Survey (NHANES) [13]. In reality, the observed peak BMD in our study was 0.85 g/cm² (men) and 0.80 g/cm² (women). It is obvious that the peak BMD provided by the densitometer was derived from a non-Vietnamese population, and may not be applicable to the Vietnamese population.\nIn this study, the relationship between BMD and age followed a third-degree polynomial function, which is consistent with a recent study [15]. According to this functional relationship, Vietnamese women achieved their peak BMD at the age of 27-29, which was later than that in Caucasians (20-25 years). Although it is not possible to determine the underlying factors for this apparent difference, it is well known that Asian girls tend to have a later menarche than Caucasian girls (13 vs 12 years).\nOsteoporosis in men, particularly Asian men, has not been well documented. The present study is among the first studies of osteoporosis in Asian men. In this study, about one tenth of men aged over 50 had osteoporosis. This prevalence is highly comparable with previous estimates from Caucasian men [19]. Individuals with osteoporosis are at high risk of fragility fracture [21,22]. In this study, we found that almost 30% of women (and 10% of men) aged 50+ years had osteoporosis, implying that the magnitude of osteoporosis in Vietnam is as high as in developed countries.\nThe present results have to be interpreted within the context of strengths and potential limitations. First, the study represents one of the largest studies of osteoporosis in Asian populations, and as such, it increases the reliability of the estimates of peak bone mass and prevalence of osteoporosis. Second, the study population is highly homogeneous, which reduces the effects of potential confounders that could compromise the estimates. The participants were randomly selected according to a rigorous sampling scheme, which ensures that the sample is representative of the general population. Third, the technique of BMD measurement is considered the \"gold standard\" for the assessment of bone strength. Nevertheless, the study also has a number of potential weaknesses. The participants were sampled from an urban population; as a result, the study's findings may not be generalizable to the rural population. Because we excluded individuals with diseases deemed to interfere with bone metabolism, the prevalence of osteoporosis reported here could be an underestimate of the prevalence in the general population. 
Ideally, peak bone density should be estimated from a longitudinal study in which a large number of men and women are followed from age 5 until age 30, but such a study is not practically feasible. On the other hand, estimates of peak BMD from a cross-sectional study such as the present one can be biased by unmeasured confounders.\nNevertheless, the present findings have important public health and clinical implications. Because individuals with T-scores at or below -2.5 are often treated, the over-diagnosis by TDXA could have led to over-treatment in the general population. Moreover, because individuals with T-scores at or below -2.5 are also candidates for anti-fracture clinical trials or clinical studies, the use of TDXA could have included some women in such studies and exposed them to unnecessary risk. Thus, it seems prudent to use local normative data for the diagnosis of osteoporosis in order to avoid over-diagnosis or over-treatment.", "In summary, these data suggest that the prevalence of osteoporosis in Vietnamese men (10%) and women (30%) aged 50+ years is comparable with that in Caucasian populations. The data also indicate that the T-score provided by the Hologic QDR 4500 over-diagnosed osteoporosis in Vietnamese men and women. We propose that the reference data developed in this study be used for the diagnosis of osteoporosis in the Vietnamese population." ]
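The WHO T-score classification and the kappa concordance analysis described in the sections above can be sketched in plain Python. This is a minimal illustration, not the study's code: the function names are assumptions, and the peak BMD / SD values used in the example below are illustrative rather than the study's fitted estimates.

```python
# Sketch of the WHO T-score classification and Cohen's kappa used above.
# Peak BMD and SD values passed in are illustrative assumptions.

def t_score(bmd, peak_bmd, sd):
    """T-score: number of SDs the measured BMD lies from the reference peak BMD."""
    return (bmd - peak_bmd) / sd

def who_category(t):
    """WHO criteria: osteoporosis if T <= -2.5; osteopenia if -2.5 < T < -1;
    normal if T >= -1."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

def cohens_kappa(labels_a, labels_b):
    """Unweighted Cohen's kappa: chance-corrected agreement between two
    classifications of the same individuals."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    cats = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats
    )
    return (observed - expected) / (1 - expected)
```

For a femoral neck BMD of 0.55 g/cm² against an assumed local peak of 0.80 g/cm² (SD 0.11), the T-score is about -2.27 (osteopenia); against a higher assumed reference peak of 0.86 g/cm², the same scan falls at or below -2.5 (osteoporosis). This is precisely the kind of reference-dependent re-classification the study quantifies with the kappa statistic.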
[ null, null, null, null, null, null, null, null, null, null ]
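The reference-range construction described in the Methods (polynomial fit of BMD on age up to the third degree, model selection by AIC, then locating the peak of the fitted curve) was performed in R. A rough Python/NumPy sketch of the same idea follows; the function names, the Gaussian-error AIC formula, and the synthetic data are assumptions for illustration, not the study's implementation.

```python
# Sketch: fit polynomials of BMD on age (degrees 1-3), pick the degree with
# the lowest AIC, then grid-search the fitted curve for peak BMD and its age.
# Assumes Gaussian errors, so AIC = n*log(RSS/n) + 2k with k = d+2 parameters
# (d+1 coefficients plus the error variance).
import numpy as np

def fit_bmd_curve(age, bmd, max_degree=3):
    """Return (degree, coefficients) of the AIC-best polynomial fit."""
    n = len(age)
    best = None
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(age, bmd, d)          # highest-degree term first
        rss = np.sum((np.polyval(coeffs, age) - bmd) ** 2)
        aic = n * np.log(rss / n) + 2 * (d + 2)
        if best is None or aic < best[0]:
            best = (aic, d, coeffs)
    return best[1], best[2]

def peak_bmd(coeffs, age_lo=18.0, age_hi=80.0):
    """Locate the maximum of the fitted curve: (age of peak, peak BMD)."""
    ages = np.linspace(age_lo, age_hi, 1000)
    curve = np.polyval(coeffs, ages)
    i = np.argmax(curve)
    return ages[i], curve[i]
```

In the study itself, confidence intervals for pBMD and its age were then obtained by bootstrap resampling; the same could be done here by refitting on resampled (age, BMD) pairs.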
[ "Background", "Methods", "Study design and participants", "Measurements and data collection", "Bone mineral density", "Determination of reference range", "Results", "Peak bone mineral density", "Prevalence of osteoporosis", "Discussion", "Conclusion" ]
[ "Osteoporosis and its consequence of fragility fracture represent a major public health problem not only in developed countries, but in developing countries as well [1]. The number of fractures in Asia is higher than that in all European countries combined. Of all the fractures in the world, approximately 17% were found to occur in Southeast Asia and 29% in the West Pacific, as compared to 35% occurring in Europe [2]. However, the prevalence of and risk factors for osteoporosis in Asian populations have not been well documented. Part of the problem is due to the lack of well-defined criteria for the diagnosis of osteoporosis in Asian men and women.\nCurrently, the operational definition of osteoporosis is based on a measurement of bone mineral density (BMD), which is the most robust predictor of fracture risk [3,4]. BMD of an individual is often expressed in terms of its peak level and standard deviation to yield a T-score. The two parameters (i.e., peak BMD level and standard deviation) are commonly derived from a well-characterized population of young individuals [5]. An individual's T-score is the number of standard deviations from the peak BMD achieved between the ages of 20 and 30 years [6,7]. However, previous studies have suggested that peak BMD differs among ethnicities and between men and women [8,9]. Therefore, the diagnosis of osteoporosis should ideally be based on sex- and ethnicity-specific reference ranges [10,11].\nDual-energy X-ray absorptiometry (DXA) is considered the gold standard method for measuring BMD [6]. In recent years, DXA has been introduced to many Asian countries, including Vietnam, and is commonly used for the diagnosis of osteoporosis and for treatment decisions. In the absence of sex-specific reference data for the local population, most doctors use the T-scores provided by the densitometer as referent values to make a diagnosis for an individual. 
However, it is not clear whether the reference database used in the derivation of T-scores in these densitometers is appropriate for a local population. We hypothesize that there is considerable discrepancy in the diagnosis of osteoporosis between reference databases. The present study was designed to test this hypothesis by determining the reference range of peak bone density for an Asian population, and then comparing the concordance between a population-specific T-score and the DXA-based T-score in the diagnosis of osteoporosis.", " Study design and participants The study was designed as a cross-sectional investigation set in Ho Chi Minh City, a major city in Vietnam. The research protocol and procedures were approved by the Scientific Committee of the People's Hospital 115 and Pham Ngoc Thach University of Medicine. All volunteer participants were provided with full information about the study's purpose and gave informed consent to participate in the study, according to the principles of medical ethics of the World Health Organization.\nWe used a simple random sampling technique to identify potential participants. We approached community organizations, including churches and temples, obtained their lists of members, and then randomly selected individuals aged 18 or above. We sent a letter of invitation to the selected individuals. The participants received a free health check-up and lipid analyses, but did not receive any financial incentive. No invited participant refused to participate in the study.\nParticipants were excluded from the study if they had diseases deemed to affect bone metabolism, such as hyperthyroidism, hyperparathyroidism, renal failure, malabsorption syndrome, alcoholism, chronic colitis, multiple myeloma, leukemia, and chronic arthritis.\n Measurements and data collection Data collection was done by trained research doctors and nurses using a validated questionnaire. The questionnaire solicited information on anthropometry, lifestyle factors, dietary intakes, physical activity, and clinical history. Anthropometric and demographic parameters, including age, weight, and standing height, were obtained. Body weight was measured on an electronic scale in indoor clothing without shoes. Height was determined without shoes on a portable stadiometer with the mandible plane parallel to the floor. Each participant was asked to provide information on current and past smoking habits. Smoking was quantified in terms of the number of pack-years consumed in each ten-year age interval. Alcohol intake was recorded as the average number of standard drinks per day, both at present and within the last 5 years. Clinical data, including blood pressure, pulse, reproductive history (i.e., parity, age of menarche, and age of menopause), and medical history (i.e., previous fracture, and previous and current use of pharmacological therapies), were also obtained.\n Bone mineral density Areal BMD was measured at the lumbar spine (L2-L4), femoral neck, and whole body using a Hologic QDR 4500 (Hologic Corp, Madison, WI, USA). The short-term in vivo precision, expressed as the coefficient of variation, was 1.8% for the lumbar spine and 1.5% for the hip. The machine was standardized with a standard phantom before each measurement.\nThe densitometer provided a T-score for each measured site. In this paper, this T-score is referred to as TDXA. We used the WHO criteria to categorize TDXA into three groups: osteoporosis if the T-score is equal to or lower than -2.5; osteopenia if the T-score is between -1 and -2.5; and normal if the T-score is equal to or greater than -1.\n Determination of reference range In this analysis, we made use of the functional relationship between BMD and age to construct a reference range. A series of polynomial regression models (up to the third degree) were fitted to femoral neck, total hip, and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)² + β3(age)³, where α is the intercept and β1, β2, and β3 are regression parameters, which were estimated from the observed data. Reduced models (i.e., quadratic and linear models) were considered, and the \"final\" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and the age at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and the age of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with R statistical software [12].\nBased on the parameters in the polynomial regression models, we determined the means of peak BMD and standard deviation (SD) for spine and femoral neck BMD. 
Using the two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely, osteoporosis, osteopenia, and normal. The concordance between TDXA and TVN was then assessed by the kappa statistic.\nIn this analysis, we made use of the functional relationship between BMD and age to construct a reference range. A series of polynomial regression models (up to the third degree) were fitted to femoral neck, total hip and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)2 + β3(age)3, where α is the intercept, β1, β1, and β3 are regression parameters, which were estimated from the observed data. Reduced models (i.e., quadratic and linear models) were considered, and the \"final\" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and ages at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and ages of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with R statistical software [12].\nBased on the parameters in the polynomial regression models, we determined the means of peak BMD and standard deviation (SD) for spine and femoral neck BMD. Using the two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely, osteoporosis, osteopenia, and normal. The concordance between TDXA and TVN was then assessed by the kappa statistic.", "The study was designed as a cross-sectional investigation, with the setting being Ho Chi Minh City, a major city in Vietnam. The research protocol and procedures were approved by the Scientific Committee of the People's Hospital 115 and Pham Ngoc Thach University of Medicine. 
All volunteer participants were provided with full information about the study's purpose and gave informed consent to participate in the study, according to the principles of medical ethics of the World Health Organization.\nWe used simple random sampling technique for identifying potential participants. We approached community organizations, including church and temples, and obtained the list of members, and then randomly selected individuals aged 18 or above. We sent a letter of invitation to the selected individuals. The participants received a free health check-up, and lipid analyses, but did not receive any financial incentive. No invited participants refused to participate in the study.\nParticipants were excluded from the study if they had diseases deemed to affect to bone metabolism such as hyperthyroidism, hyperparathyroidism, renal failure, malabsorption syndrome, alcoholism, chronic colitis, multi- myeloma, leukemia, and chronic arthritis.", "Data collection was done by trained research doctors and nurses using a validated questionnaire. The questionnaire solicited information, including anthropometry, lifestyle factors, dietary intakes, physical activity, and clinical history. Anthropometric parameters including age, weight, standing height were obtained. Body weight was measured on an electronic scale with indoor clothing without shoes. Height was determined without shoes on a portable stadiometer with mandible plane parallel to the floor. Each participant was asked to provide information on current and past smoking habits. Smoking was quantified in terms of the number of pack-years consumed in each ten-year interval age group. Alcohol intake in average numbers of standard drinks per day, at present as well as within the last 5 years, was obtained. Clinical data including blood pressure, pulse, and reproductive history (i.e. parity, age of menarche, and age of menopause), medical history (i.e. 
previous fracture, previous and current use of pharmacological therapies) were also obtained.", "Areal BMD was measured at the lumbar spine (L2-L4), femoral neck, and whole body using a Hologic QDR 4500 (Hologic Corp, Madison, WI, USA). The short-term in vivo precision expressed as the coefficient of variation was 1.8% for the lumbar spine and 1.5% for the hip. The machine was standardized by standard phantom before each measurement.\nThe densitometer provided a T-score for each measured site. In this paper, the T-score is referred to as TDXA. We used the WHO criteria to categorize TDXA into three groups: osteoporosis if the T-score is equal to or lower than -2.5; osteopenia if T-score is between -1 and -2.5; and normal if T-score is equal or greater than -1.", "In this analysis, we made use of the functional relationship between BMD and age to construct a reference range. A series of polynomial regression models (up to the third degree) were fitted to femoral neck, total hip and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)2 + β3(age)3, where α is the intercept, β1, β1, and β3 are regression parameters, which were estimated from the observed data. Reduced models (i.e., quadratic and linear models) were considered, and the \"final\" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and ages at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and ages of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with R statistical software [12].\nBased on the parameters in the polynomial regression models, we determined the means of peak BMD and standard deviation (SD) for spine and femoral neck BMD. Using the two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely, osteoporosis, osteopenia, and normal. 
The concordance between TDXA and TVN was then assessed by the kappa statistic.", "In total, 1227 individuals (357 men and 870 women) aged 18 years or older participated in the study. In this sample, 58.5% of men and 51% of women were at age 50+ years, respectively. As expected, BMD in men was higher than in women by ~12% at femoral neck and by ~7% at lumbar spine (Table 1).\nCharacteristics of participants\nValues are mean (standard deviation)\nBMD, bone mineral density\nP-value was derived from unpaired t-test for difference between men and women.\n Peak bone mineral density The relationship between BMD and age was best described by the third-degree polynomial regression model (Figures 1 and 2). The relationship was characterized by three phases, namely, BMD increased between the ages of 18 and 25, followed by a steady period (aged between 25 and 45), and then gradually declined after the age of 45. The age-related decrease in BMD in women was greater than that in men. For example, compared with lumbar spine BMD among women aged between 20-30 years, lumbar spine BMD among women aged 70+ years was decreased by 27%; however, in men, the corresponding rate of decrease was ~15%. A similar sex-differential decline was also observed in femoral neck BMD.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for men.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for women.\nBased on the parameters of the polynomial regression model (Table 2), we estimated pBMD and the age of pBMD for men and women (Table 3). Consistently, pBMD was higher in men compared with women, but the difference was dependent on skeletal site. For example, for lumbar spine, pBMD in men (1.05 ± 0.12 g/cm2; mean ± SD) was about 9% higher than in women (0.96 ± 0.11 g/cm2). Similarly, pBMD at the femoral neck in men (0.85 ± 0.13 g/cm2) was 6% higher than in women (0.80 ± 0.11 g/cm2). 
The age achieving pBMD was reached in women was younger than in men. For example, at the femoral neck, age of pBMD in women was 22.4 years (95% CI: 19 - 24) which was earlier than in men (26; 95% CI: 24 - 29). This trend was also observed at the lumbar spine (25 in women and 27 years in men).\nEstimates of parameters of the third degree polynomial regression model\nValues are coefficient (standard error) of the model BMD = α + β1age + β2age2 + β3age3\nR2, coefficient of determination indicates the proportion of variance in BMD that could be \"explained\" by the polynomial model; SEE, Standard error of estimate.\nPeak bone mineral density (pBMD) and age of pBMD in men and women\nValues are amean (standard deviation) and bmean (95% confidence interval)\nThe relationship between BMD and age was best described by the third-degree polynomial regression model (Figures 1 and 2). The relationship was characterized by three phases, namely, BMD increased between the ages of 18 and 25, followed by a steady period (aged between 25 and 45), and then gradually declined after the age of 45. The age-related decrease in BMD in women was greater than that in men. For example, compared with lumbar spine BMD among women aged between 20-30 years, lumbar spine BMD among women aged 70+ years was decreased by 27%; however, in men, the corresponding rate of decrease was ~15%. A similar sex-differential decline was also observed in femoral neck BMD.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for men.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for women.\nBased on the parameters of the polynomial regression model (Table 2), we estimated pBMD and the age of pBMD for men and women (Table 3). Consistently, pBMD was higher in men compared with women, but the difference was dependent on skeletal site. 
For example, for lumbar spine, pBMD in men (1.05 ± 0.12 g/cm2; mean ± SD) was about 9% higher than in women (0.96 ± 0.11 g/cm2). Similarly, pBMD at the femoral neck in men (0.85 ± 0.13 g/cm2) was 6% higher than in women (0.80 ± 0.11 g/cm2). The age achieving pBMD was reached in women was younger than in men. For example, at the femoral neck, age of pBMD in women was 22.4 years (95% CI: 19 - 24) which was earlier than in men (26; 95% CI: 24 - 29). This trend was also observed at the lumbar spine (25 in women and 27 years in men).\nEstimates of parameters of the third degree polynomial regression model\nValues are coefficient (standard error) of the model BMD = α + β1age + β2age2 + β3age3\nR2, coefficient of determination indicates the proportion of variance in BMD that could be \"explained\" by the polynomial model; SEE, Standard error of estimate.\nPeak bone mineral density (pBMD) and age of pBMD in men and women\nValues are amean (standard deviation) and bmean (95% confidence interval)\n Prevalence of osteoporosis Based on pBMD and SD, T-scores were calculated for men aged 50+ years or post-menopausal women, and these were referred to as TVN, to differentiate with TDXA which was automatically provided by the densitometer (Table 4). In women aged over 50 years, TVN was higher than TDXA at femoral neck (-1.84 ± 0.96 vs -2.27 ± 0.96; P < 0.0001) and at the lumbar spine (-1.61 ± 1.28 vs -2.39 ± 1.31; P < 0.0001). In men aged over 50 years, the same trend also was also found at the femoral neck (-1.50 ± 0.90 vs -2.01 ± 0.86; P < 0.0001), and at the lumbar spine (-1.33 ± 1.33 vs -1.81 ± 1.43; P < 0.0001).\nPrevalence of osteoporosis and osteopenia in men and women aged 50+ years\nData are actual number of individuals in each subgroup, and percentage of sex-specific total.\nAs expected, although absolute values of TVN and TDXA were different, the correlation between them was high (r > 0.98). 
The linear equation (without intercept) linking the two scores is as follows: at the femoral neck, TVN = 1.177 × TDXA for women, and TVN = 1.246 × TDXA for men; at the lumbar spine, TVN = 1.298 × TDXA for women, and TVN = 1.207 × TDXA for men. The equations suggest that, for example, at the femoral neck, TVN was higher than TDXA by 0.18 SD (in men) to 0.25 SD (in women).\nThe concordance between two diagnoses of osteoporosis (i.e., TVN and TDXA) is shown in Table 5. TDXA tended to over-diagnose osteoporosis more than did TVN. In women aged 50+ years, using femoral neck TVN, the prevalence of osteoporosis was 28.6%, but when the diagnosis was based on TDXA, the prevalence was 43.7%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10.4%, which was only a-third of the TDXA-based prevalence (29.6%). The discrepancy mainly occurred in the osteopenic group. For example, among 40 men diagnosed by TDXA to have osteoporosis, there was 65% of them (n = 26) were actually identified as having osteopenia by TVN. Similarly, among 177 women where were diagnosed with osteoporosis by TDXA, 35% (n = 61) were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men.\nConcordance in diagnosis of osteoporosis between DXA provided T-scores and actual T-scores\nValues are shown as number of individuals in each subgroup, and percentage of row-wise total. Kappa statistic for men: κ = 0.41 (95% CI: 0.30 - 0.52); women: κ = 0.54 (95% CI: 0.48 - 0.60)\nUsing the National Health and Nutrition Examination Survey (NHANES) reference data for US Whites (aged between 20 and 29) [13], we computed T-score for each individual aged 50+ years, and classified into either normal, osteopenia or osteoporosis group. We found that the prevalence of osteoporosis was 30% (n = 40/135) in men and 43% (n = 160/368) in women. These prevalence rates are almost identical to the prevalence derived from the TDXA. 
In fact, the concordance in osteoporosis classification between TDXA and NHANES data was 100% for men and 96% for women.\nBased on pBMD and SD, T-scores were calculated for men aged 50+ years or post-menopausal women, and these were referred to as TVN, to differentiate with TDXA which was automatically provided by the densitometer (Table 4). In women aged over 50 years, TVN was higher than TDXA at femoral neck (-1.84 ± 0.96 vs -2.27 ± 0.96; P < 0.0001) and at the lumbar spine (-1.61 ± 1.28 vs -2.39 ± 1.31; P < 0.0001). In men aged over 50 years, the same trend also was also found at the femoral neck (-1.50 ± 0.90 vs -2.01 ± 0.86; P < 0.0001), and at the lumbar spine (-1.33 ± 1.33 vs -1.81 ± 1.43; P < 0.0001).\nPrevalence of osteoporosis and osteopenia in men and women aged 50+ years\nData are actual number of individuals in each subgroup, and percentage of sex-specific total.\nAs expected, although absolute values of TVN and TDXA were different, the correlation between them was high (r > 0.98). The linear equation (without intercept) linking the two scores is as follows: at the femoral neck, TVN = 1.177 × TDXA for women, and TVN = 1.246 × TDXA for men; at the lumbar spine, TVN = 1.298 × TDXA for women, and TVN = 1.207 × TDXA for men. The equations suggest that, for example, at the femoral neck, TVN was higher than TDXA by 0.18 SD (in men) to 0.25 SD (in women).\nThe concordance between two diagnoses of osteoporosis (i.e., TVN and TDXA) is shown in Table 5. TDXA tended to over-diagnose osteoporosis more than did TVN. In women aged 50+ years, using femoral neck TVN, the prevalence of osteoporosis was 28.6%, but when the diagnosis was based on TDXA, the prevalence was 43.7%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10.4%, which was only a-third of the TDXA-based prevalence (29.6%). The discrepancy mainly occurred in the osteopenic group. 
For example, among 40 men diagnosed by TDXA to have osteoporosis, there was 65% of them (n = 26) were actually identified as having osteopenia by TVN. Similarly, among 177 women where were diagnosed with osteoporosis by TDXA, 35% (n = 61) were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men.\nConcordance in diagnosis of osteoporosis between DXA provided T-scores and actual T-scores\nValues are shown as number of individuals in each subgroup, and percentage of row-wise total. Kappa statistic for men: κ = 0.41 (95% CI: 0.30 - 0.52); women: κ = 0.54 (95% CI: 0.48 - 0.60)\nUsing the National Health and Nutrition Examination Survey (NHANES) reference data for US Whites (aged between 20 and 29) [13], we computed T-score for each individual aged 50+ years, and classified into either normal, osteopenia or osteoporosis group. We found that the prevalence of osteoporosis was 30% (n = 40/135) in men and 43% (n = 160/368) in women. These prevalence rates are almost identical to the prevalence derived from the TDXA. In fact, the concordance in osteoporosis classification between TDXA and NHANES data was 100% for men and 96% for women.", "The relationship between BMD and age was best described by the third-degree polynomial regression model (Figures 1 and 2). The relationship was characterized by three phases, namely, BMD increased between the ages of 18 and 25, followed by a steady period (aged between 25 and 45), and then gradually declined after the age of 45. The age-related decrease in BMD in women was greater than that in men. For example, compared with lumbar spine BMD among women aged between 20-30 years, lumbar spine BMD among women aged 70+ years was decreased by 27%; however, in men, the corresponding rate of decrease was ~15%. 
A similar sex-differential decline was also observed in femoral neck BMD.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for men.\nRelationship between age and bone density at the femoral neck, total hip, and lumbar spine for women.\nBased on the parameters of the polynomial regression model (Table 2), we estimated pBMD and the age of pBMD for men and women (Table 3). Consistently, pBMD was higher in men compared with women, but the difference was dependent on skeletal site. For example, for lumbar spine, pBMD in men (1.05 ± 0.12 g/cm2; mean ± SD) was about 9% higher than in women (0.96 ± 0.11 g/cm2). Similarly, pBMD at the femoral neck in men (0.85 ± 0.13 g/cm2) was 6% higher than in women (0.80 ± 0.11 g/cm2). The age achieving pBMD was reached in women was younger than in men. For example, at the femoral neck, age of pBMD in women was 22.4 years (95% CI: 19 - 24) which was earlier than in men (26; 95% CI: 24 - 29). This trend was also observed at the lumbar spine (25 in women and 27 years in men).\nEstimates of parameters of the third degree polynomial regression model\nValues are coefficient (standard error) of the model BMD = α + β1age + β2age2 + β3age3\nR2, coefficient of determination indicates the proportion of variance in BMD that could be \"explained\" by the polynomial model; SEE, Standard error of estimate.\nPeak bone mineral density (pBMD) and age of pBMD in men and women\nValues are amean (standard deviation) and bmean (95% confidence interval)", "Based on pBMD and SD, T-scores were calculated for men aged 50+ years or post-menopausal women, and these were referred to as TVN, to differentiate with TDXA which was automatically provided by the densitometer (Table 4). In women aged over 50 years, TVN was higher than TDXA at femoral neck (-1.84 ± 0.96 vs -2.27 ± 0.96; P < 0.0001) and at the lumbar spine (-1.61 ± 1.28 vs -2.39 ± 1.31; P < 0.0001). 
In men aged over 50 years, the same trend also was also found at the femoral neck (-1.50 ± 0.90 vs -2.01 ± 0.86; P < 0.0001), and at the lumbar spine (-1.33 ± 1.33 vs -1.81 ± 1.43; P < 0.0001).\nPrevalence of osteoporosis and osteopenia in men and women aged 50+ years\nData are actual number of individuals in each subgroup, and percentage of sex-specific total.\nAs expected, although absolute values of TVN and TDXA were different, the correlation between them was high (r > 0.98). The linear equation (without intercept) linking the two scores is as follows: at the femoral neck, TVN = 1.177 × TDXA for women, and TVN = 1.246 × TDXA for men; at the lumbar spine, TVN = 1.298 × TDXA for women, and TVN = 1.207 × TDXA for men. The equations suggest that, for example, at the femoral neck, TVN was higher than TDXA by 0.18 SD (in men) to 0.25 SD (in women).\nThe concordance between two diagnoses of osteoporosis (i.e., TVN and TDXA) is shown in Table 5. TDXA tended to over-diagnose osteoporosis more than did TVN. In women aged 50+ years, using femoral neck TVN, the prevalence of osteoporosis was 28.6%, but when the diagnosis was based on TDXA, the prevalence was 43.7%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10.4%, which was only a-third of the TDXA-based prevalence (29.6%). The discrepancy mainly occurred in the osteopenic group. For example, among 40 men diagnosed by TDXA to have osteoporosis, there was 65% of them (n = 26) were actually identified as having osteopenia by TVN. Similarly, among 177 women where were diagnosed with osteoporosis by TDXA, 35% (n = 61) were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men.\nConcordance in diagnosis of osteoporosis between DXA provided T-scores and actual T-scores\nValues are shown as number of individuals in each subgroup, and percentage of row-wise total. 
Kappa statistic for men: κ = 0.41 (95% CI: 0.30 - 0.52); women: κ = 0.54 (95% CI: 0.48 - 0.60)\nUsing the National Health and Nutrition Examination Survey (NHANES) reference data for US Whites (aged between 20 and 29) [13], we computed T-score for each individual aged 50+ years, and classified into either normal, osteopenia or osteoporosis group. We found that the prevalence of osteoporosis was 30% (n = 40/135) in men and 43% (n = 160/368) in women. These prevalence rates are almost identical to the prevalence derived from the TDXA. In fact, the concordance in osteoporosis classification between TDXA and NHANES data was 100% for men and 96% for women.", "To assess the magnitude of the problem, it is essential to establish an appropriate measure for the diagnosis of osteoporosis. Currently, osteoporosis is operationally defined in terms of BMD, which is compared to a normative database [5]. However, it is well known that measured values of BMD differ across ethnicities [9,11,14], and the referent database should therefore be ethnicity-specific. In this study, we have shown that there was a considerable discrepancy in the diagnosis of osteoporosis between referent data derived from the local population and referent data that are provided by the densitometer.\nIt is clear from this analysis that the densitometer reference data over-diagnosed osteoporosis in the Vietnamese population. Using the local normative data, we found that the prevalence of osteoporosis in Vietnamese women and men aged 50+ years was 29% and 10%, respectively. However, using the DXA-provided normative data, the prevalence in women and men was 44% and 30%, respectively. The discrepancy raises a question of which T-score is more appropriate. In a recent study in 328 Vietnamese postmenopausal women using DXA Lunar Prodigy, the prevalence of osteoporosis was 26% [15]. Another smaller study in Vietnamese postmenopausal women living in United State showed that this prevalence was 37% [16]. 
The prevalence of osteoporosis in postmenopausal Thai women was around 29% [17]. In Caucasians, the prevalence of osteoporosis in postmenopausal women ranged between 20% and 25% [18,19]. In summary, most of these studies in Asian and Caucasian women found that the prevalence of osteoporosis ranged between 20 and 30% [18-20], which is highly consistent with the present study's estimate. These data also suggest that the densitometer-provided T-score is not appropriate for the diagnosis of osteoporosis in Vietnamese women.\nWhy there were differences between TVN and TDXA? The most \"proximate\" explanation is that there were differences in peak BMD and standard deviation between the Hologic normative data and the present normative data. However, the standard deviation in BMD is very stable across populations; therefore, the main reason could be that peak BMD value provided by the Hologic densitometer was higher than peak BMD in Vietnamese. Assuming that SD of femoral neck BMD was 0.12 g/cm2, with TDXA, one could infer that peak BMD was 0.92 and 0.86 g/cm2 in men and women, respectively. These values are identical to the femoral neck BMD reference values for US White men and women of the National Health and Nutrition Examination Survey (NHANES)[13]. In reality, the observed peak BMD in our study was 0.85 g/cm2 (men) and 0.80 g/cm2 (women). It is obvious that the peak BMD provided by the densitometer was derived from a non-Vietnamese population, which may not be applicable to the Vietnamese population.\nIn this study, the relationship between BMD and age followed a third degree polynomial function, which is consistent with a recent study [15]. According to this functional relationship, Vietnamese women achieved their peak BMD at the age of 27-29, which was later than that in Caucasian (20-25 years). 
Although it is not possible to determine the underlying factors for this apparent difference, it is well-known that Asian girls tend to have a later menarche than Caucasian girls (13 vs 12 years).\nOsteoporosis in men, particularly Asian men, has not been well documented. The present study was among the first research about osteoporosis in Asian men. In this study, about one tenth of men aged over 50 had osteoporosis. This prevalence is highly comparable with previous estimate from Caucasian men [19]. Individuals with osteoporosis are at high risk of fragility fracture [21,22]. In this study, we found that almost 30% of women (and 10% of men) aged 50+ years had osteoporosis, implying that the magnitude of osteoporosis in Vietnam is as high as in developed countries.\nThe present results have to be interpreted within the context of strengths and potential limitations. First, the study represents one of the largest studies of osteoporosis in Asian populations, and as such, it increased the reliability of estimates of peak bone mass and prevalence of osteoporosis. Second, the study population is highly homogeneous, which reduces the effects of potential confounders that could compromise the estimates. The participants were randomly selected according to a rigorous sampling scheme, which ensures the representativeness of the general population. Third, the technique of measurement of BMD is considered \"gold standard\" for the assessment of bone strength. Nevertheless, the study also has a number of potential weaknesses. The participants in this study were sampled from an urban population; as a result, the study's finding may not be generalizable to the rural population. Because we excluded individuals with diseases deemed to interfere with bone metabolism, the prevalence of osteoporosis reported here could be an underestimate of the prevalence in the general population. 
Ideally, peak bone density should be estimated from a longitudinal study in which a large number of men and women is followed from the age of 5 till the age of 30, but such a study is not practically feasible. On the other hand, estimate of peak BMD in cross-sectional study such as the present study can be biased by unmeasured confounders.\nNevertheless, the present findings have important public health and clinical implications. Because individuals with T-scores being or less than -2.5 are often treated, the over-diagnosis by TDXA could have led to over-treatment in the general population. Moreover, individuals with T-scores being or less than -2.5 are also candidates for anti-fracture clinical trials or clinical studies, the use of TDXA could have included some women in such studies and exposed them to unnecessary risk. Thus, it seems prudent to use local normative data for the diagnosis of osteoporosis in order to avoid over-diagnosis or over-treatment.", "In summary, these data suggest that the prevalence of osteoporosis in Vietnamese men (10%) and women (30%) aged 50+ years is comparable with those in Caucasian populations. The data also indicated that the T-score provided by the Hologic QDR4500 over-diagnosed osteoporosis in Vietnamese men and women. We propose to use the data developed in this study for the diagnosis of osteoporosis in the Vietnamese population." ]
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[ "\nreference range\n", "\nbone mineral density\n", "\nosteoporosis\n", "\nwomen\n", "\nmen\n" ]
Background: Osteoporosis and its consequence of fragility fracture represent a major public health problem not only in developed countries but in developing countries as well [1]. The number of fractures in Asia is higher than that in all European countries combined: of all fractures in the world, approximately 17% were found to occur in Southeast Asia and 29% in the Western Pacific, as compared with 35% occurring in Europe [2]. However, the prevalence of and risk factors for osteoporosis in Asian populations have not been well documented. Part of the problem is the lack of well-defined criteria for the diagnosis of osteoporosis in Asian men and women. Currently, the operational definition of osteoporosis is based on a measurement of bone mineral density (BMD), which is the most robust predictor of fracture risk [3,4]. An individual's BMD is often expressed relative to the population peak level and standard deviation to yield a T-score. The two parameters (i.e., peak BMD and its standard deviation) are commonly derived from a well-characterized population of young individuals [5]. An individual's T-score is the number of standard deviations by which the individual's BMD differs from the peak BMD achieved between the ages of 20 and 30 years [6,7]. However, previous studies have suggested that peak BMD differs among ethnicities and between men and women [8,9]. Therefore, the diagnosis of osteoporosis should ideally be based on sex- and ethnicity-specific reference ranges [10,11]. Dual-energy X-ray absorptiometry (DXA) is considered the gold standard method for measuring BMD [6]. In recent years, DXA has been introduced to many Asian countries, including Vietnam, and is commonly used for the diagnosis of osteoporosis and for treatment decisions. In the absence of sex-specific reference data for the local population, most doctors use the T-scores provided by the densitometer as the referent values for making a diagnosis for an individual.
However, it is not clear whether the reference database used in the derivation of T-scores in these densitometers is appropriate for a local population. We hypothesized that there is considerable discrepancy in the diagnosis of osteoporosis between reference databases. The present study was designed to test this hypothesis by determining the reference range of peak bone density for an Asian population, and then comparing the concordance between a population-specific T-score and the DXA-based T-score in the diagnosis of osteoporosis.

Methods: Study design and participants. The study was designed as a cross-sectional investigation, with the setting being Ho Chi Minh City, a major city in Vietnam. The research protocol and procedures were approved by the Scientific Committee of the People's Hospital 115 and Pham Ngoc Thach University of Medicine. All volunteer participants were provided with full information about the study's purpose and gave informed consent to participate in the study, according to the principles of medical ethics of the World Health Organization. We used a simple random sampling technique to identify potential participants. We approached community organizations, including churches and temples, obtained the lists of members, and then randomly selected individuals aged 18 or above. We sent a letter of invitation to the selected individuals. The participants received a free health check-up and lipid analyses, but did not receive any financial incentive. No invited participants refused to participate in the study. Participants were excluded from the study if they had diseases deemed to affect bone metabolism, such as hyperthyroidism, hyperparathyroidism, renal failure, malabsorption syndrome, alcoholism, chronic colitis, multiple myeloma, leukemia, and chronic arthritis.

Measurements and data collection. Data collection was done by trained research doctors and nurses using a validated questionnaire. The questionnaire solicited information including anthropometry, lifestyle factors, dietary intakes, physical activity, and clinical history. Anthropometric parameters including age, weight, and standing height were obtained. Body weight was measured on an electronic scale with indoor clothing, without shoes. Height was determined without shoes on a portable stadiometer with the mandible plane parallel to the floor. Each participant was asked to provide information on current and past smoking habits. Smoking was quantified in terms of the number of pack-years consumed in each ten-year age interval. Alcohol intake, in average numbers of standard drinks per day at present as well as within the last 5 years, was obtained. Clinical data including blood pressure, pulse, reproductive history (i.e., parity, age of menarche, and age of menopause), and medical history (i.e., previous fracture, previous and current use of pharmacological therapies) were also obtained.

Bone mineral density. Areal BMD was measured at the lumbar spine (L2-L4), femoral neck, and whole body using a Hologic QDR 4500 (Hologic Corp, Madison, WI, USA). The short-term in vivo precision, expressed as the coefficient of variation, was 1.8% for the lumbar spine and 1.5% for the hip. The machine was standardized by a standard phantom before each measurement. The densitometer provided a T-score for each measured site; in this paper, this T-score is referred to as TDXA. We used the WHO criteria to categorize TDXA into three groups: osteoporosis if the T-score is equal to or lower than -2.5; osteopenia if the T-score is between -1 and -2.5; and normal if the T-score is equal to or greater than -1.

Determination of reference range. In this analysis, we made use of the functional relationship between BMD and age to construct a reference range. A series of polynomial regression models (up to the third degree) was fitted to femoral neck, total hip, and lumbar spine BMD as a function of age as follows: BMD = α + β1(age) + β2(age)^2 + β3(age)^3, where α is the intercept and β1, β2, and β3 are regression parameters estimated from the observed data. Reduced models (i.e., quadratic and linear models) were considered, and the "final" model was chosen based on the Akaike Information Criterion (AIC). Peak BMD (pBMD) and the age at which it was reached were then estimated from the final model. Ninety-five percent confidence intervals (95% CI) for pBMD and the age of pBMD were determined by the bootstrap (resampling) method. The analysis was performed with the R statistical software [12]. Based on the parameters of the polynomial regression models, we determined the mean peak BMD and standard deviation (SD) for spine and femoral neck BMD. Using these two parameters, we calculated the T-score for each individual (denoted by TVN), and used the WHO criteria to classify the T-score into three groups, namely osteoporosis, osteopenia, and normal. The concordance between TDXA and TVN was then assessed by the kappa statistic.
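The reference-range construction described above can be sketched in code. The snippet below is a minimal illustration, not the authors' R script: the ages, BMD curve, and noise level are synthetic (invented for demonstration), but the steps mirror the method: fit linear, quadratic, and cubic models, select one by AIC, take the fitted maximum as peak BMD, and bootstrap a 95% CI for the age at which it occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: BMD peaks near age 27 and declines afterwards.
age = rng.uniform(18, 85, 600)
bmd = 0.95 - 0.00012 * (age - 27) ** 2 + rng.normal(0, 0.08, age.size)

def fit_poly(x, y, degree):
    """Least-squares polynomial fit; returns coefficients and a Gaussian AIC."""
    coefs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coefs, x)) ** 2)
    n = x.size
    k = degree + 2  # degree+1 coefficients plus the error variance
    return coefs, n * np.log(rss / n) + 2 * k

# Compare linear, quadratic, and cubic models; the smaller AIC wins.
fits = {d: fit_poly(age, bmd, d) for d in (1, 2, 3)}
best = min(fits, key=lambda d: fits[d][1])
coefs = fits[best][0]

# Peak BMD = maximum of the fitted curve over young adulthood.
grid = np.linspace(18, 45, 2000)
curve = np.polyval(coefs, grid)
peak_age, peak_bmd = grid[np.argmax(curve)], curve.max()

# Bootstrap (resampling) 95% CI for the age of peak BMD.
boot = []
for _ in range(200):
    i = rng.integers(0, age.size, age.size)
    c, _ = fit_poly(age[i], bmd[i], best)
    boot.append(grid[np.argmax(np.polyval(c, grid))])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

An individual's TVN is then (BMD - peak BMD) / SD, with peak BMD and SD taken from the young-adult reference derived from the selected model.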
Results: In total, 1227 individuals (357 men and 870 women) aged 18 years or older participated in the study. In this sample, 58.5% of men and 51% of women were aged 50+ years. As expected, BMD in men was higher than in women, by ~12% at the femoral neck and by ~7% at the lumbar spine (Table 1). Characteristics of participants: values are mean (standard deviation); BMD, bone mineral density; P-value derived from an unpaired t-test for the difference between men and women.

Peak bone mineral density. The relationship between BMD and age was best described by the third-degree polynomial regression model (Figures 1 and 2). The relationship was characterized by three phases: BMD increased between the ages of 18 and 25, remained steady between the ages of 25 and 45, and then gradually declined after the age of 45. The age-related decrease in BMD in women was greater than that in men. For example, compared with lumbar spine BMD among women aged 20-30 years, lumbar spine BMD among women aged 70+ years was decreased by 27%; in men, the corresponding rate of decrease was ~15%. A similar sex-differential decline was also observed in femoral neck BMD. Figure 1: relationship between age and bone density at the femoral neck, total hip, and lumbar spine for men. Figure 2: relationship between age and bone density at the femoral neck, total hip, and lumbar spine for women.

Based on the parameters of the polynomial regression model (Table 2), we estimated pBMD and the age of pBMD for men and women (Table 3). Consistently, pBMD was higher in men than in women, but the difference depended on skeletal site. For example, lumbar spine pBMD in men (1.05 ± 0.12 g/cm2; mean ± SD) was about 9% higher than in women (0.96 ± 0.11 g/cm2). Similarly, femoral neck pBMD in men (0.85 ± 0.13 g/cm2) was 6% higher than in women (0.80 ± 0.11 g/cm2). The age at which pBMD was reached was younger in women than in men. For example, at the femoral neck, the age of pBMD in women was 22.4 years (95% CI: 19-24), earlier than in men (26 years; 95% CI: 24-29). This trend was also observed at the lumbar spine (25 years in women and 27 years in men). Estimates of parameters of the third-degree polynomial regression model: values are coefficient (standard error) of the model BMD = α + β1(age) + β2(age)^2 + β3(age)^3; R2, coefficient of determination, indicates the proportion of variance in BMD that could be "explained" by the polynomial model; SEE, standard error of estimate. Peak bone mineral density (pBMD) and age of pBMD in men and women: values are mean (standard deviation) and mean (95% confidence interval).

Prevalence of osteoporosis. Based on pBMD and SD, T-scores were calculated for men aged 50+ years and post-menopausal women; these are referred to as TVN, to differentiate them from TDXA, which was automatically provided by the densitometer (Table 4). In women aged over 50 years, TVN was higher than TDXA at the femoral neck (-1.84 ± 0.96 vs -2.27 ± 0.96; P < 0.0001) and at the lumbar spine (-1.61 ± 1.28 vs -2.39 ± 1.31; P < 0.0001). In men aged over 50 years, the same trend was found at the femoral neck (-1.50 ± 0.90 vs -2.01 ± 0.86; P < 0.0001) and at the lumbar spine (-1.33 ± 1.33 vs -1.81 ± 1.43; P < 0.0001).
Prevalence of osteoporosis and osteopenia in men and women aged 50+ years: data are the actual number of individuals in each subgroup and the percentage of the sex-specific total.

As expected, although the absolute values of TVN and TDXA differed, the correlation between them was high (r > 0.98). The linear equations (without intercept) linking the two scores are as follows: at the femoral neck, TVN = 1.177 × TDXA for women and TVN = 1.246 × TDXA for men; at the lumbar spine, TVN = 1.298 × TDXA for women and TVN = 1.207 × TDXA for men. These equations suggest that, for example, at the femoral neck, TVN was higher than TDXA by 0.18 SD (in men) to 0.25 SD (in women).

The concordance between the two diagnoses of osteoporosis (i.e., TVN and TDXA) is shown in Table 5. TDXA tended to over-diagnose osteoporosis relative to TVN. In women aged 50+ years, using femoral neck TVN, the prevalence of osteoporosis was 28.6%, but when the diagnosis was based on TDXA, the prevalence was 43.7%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10.4%, only a third of the TDXA-based prevalence (29.6%). The discrepancy mainly occurred in the osteopenic group. For example, among 40 men diagnosed by TDXA as having osteoporosis, 65% (n = 26) were actually identified as having osteopenia by TVN. Similarly, among 177 women who were diagnosed with osteoporosis by TDXA, 35% (n = 61) were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men. Concordance in the diagnosis of osteoporosis between DXA-provided T-scores and actual T-scores: values are shown as the number of individuals in each subgroup and the percentage of the row-wise total. Kappa statistic for men: κ = 0.41 (95% CI: 0.30-0.52); for women: κ = 0.54 (95% CI: 0.48-0.60).

Using the National Health and Nutrition Examination Survey (NHANES) reference data for US Whites (aged between 20 and 29) [13], we computed a T-score for each individual aged 50+ years and classified each into the normal, osteopenia, or osteoporosis group. We found that the prevalence of osteoporosis was 30% (n = 40/135) in men and 43% (n = 160/368) in women. These prevalence rates are almost identical to the prevalence derived from TDXA. In fact, the concordance in osteoporosis classification between TDXA and the NHANES data was 100% for men and 96% for women.
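The classification and concordance computations above can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the example BMD value of 0.55 g/cm2 is invented, while the femoral-neck peak/SD pairs (0.80/0.11 local, 0.86/0.12 densitometer-implied) are taken from values reported in the text. The toy kappa inputs are likewise invented.

```python
def t_score(bmd, peak_bmd, sd):
    """Standard deviations above/below the young-adult peak BMD."""
    return (bmd - peak_bmd) / sd

def who_class(t):
    """WHO operational categories from a T-score."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

def cohen_kappa(a, b):
    """Chance-corrected agreement between two sets of diagnoses."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # by chance
    return (po - pe) / (1 - pe)

# A hypothetical woman with femoral neck BMD 0.55 g/cm2: the local reference
# (peak 0.80, SD 0.11) puts her in the osteopenic range, while a reference
# with a higher peak (0.86, SD 0.12) pushes her past the -2.5 threshold.
t_vn = t_score(0.55, 0.80, 0.11)    # about -2.27
t_dxa = t_score(0.55, 0.86, 0.12)   # about -2.58
```

The same individual can thus be classified differently depending solely on which young-adult reference is used, which is exactly the discordance the kappa statistic quantifies across the whole sample.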
Discussion: To assess the magnitude of the problem, it is essential to establish an appropriate measure for the diagnosis of osteoporosis. Currently, osteoporosis is operationally defined in terms of BMD, which is compared with a normative database [5]. However, it is well known that measured values of BMD differ across ethnicities [9,11,14], and the referent database should therefore be ethnicity-specific. In this study, we have shown that there was a considerable discrepancy in the diagnosis of osteoporosis between referent data derived from the local population and referent data provided by the densitometer. It is clear from this analysis that the densitometer reference data over-diagnosed osteoporosis in the Vietnamese population.
Using the local normative data, we found that the prevalence of osteoporosis in Vietnamese women and men aged 50+ years was 29% and 10%, respectively. However, using the DXA-provided normative data, the prevalence in women and men was 44% and 30%, respectively. The discrepancy raises the question of which T-score is more appropriate. In a recent study of 328 Vietnamese postmenopausal women using a DXA Lunar Prodigy, the prevalence of osteoporosis was 26% [15]. Another, smaller study of Vietnamese postmenopausal women living in the United States found a prevalence of 37% [16]. The prevalence of osteoporosis in postmenopausal Thai women was around 29% [17]. In Caucasians, the prevalence of osteoporosis in postmenopausal women ranged between 20% and 25% [18,19]. In summary, most of these studies in Asian and Caucasian women found that the prevalence of osteoporosis ranged between 20% and 30% [18-20], which is highly consistent with the present study's estimate. These data also suggest that the densitometer-provided T-score is not appropriate for the diagnosis of osteoporosis in Vietnamese women. Why were there differences between TVN and TDXA? The most "proximate" explanation is that there were differences in peak BMD and standard deviation between the Hologic normative data and the present normative data. However, the standard deviation of BMD is very stable across populations; therefore, the main reason is likely that the peak BMD value provided by the Hologic densitometer was higher than the peak BMD in Vietnamese. Assuming that the SD of femoral neck BMD was 0.12 g/cm2, one could infer from TDXA that peak BMD was 0.92 and 0.86 g/cm2 in men and women, respectively. These values are identical to the femoral neck BMD reference values for US White men and women in the National Health and Nutrition Examination Survey (NHANES) [13]. In reality, the observed peak BMD in our study was 0.85 g/cm2 (men) and 0.80 g/cm2 (women).
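The arithmetic behind this inference can be made explicit. A T-score expresses the measured BMD in SD units from the young-adult peak, T = (BMD − peak)/SD, so the peak implied by a densitometer-reported T-score can be recovered as peak = BMD − T × SD. A minimal sketch follows; the measured BMD value is hypothetical, while the peak values (0.80 vs 0.86 g/cm²) and SD (0.12 g/cm²) are the femoral neck figures for women quoted in the text:

```python
# T-score arithmetic behind the TVN / TDXA discrepancy.
# The measured BMD below is hypothetical; the peak and SD values are the
# femoral neck figures for women quoted in the text.

def t_score(bmd, peak_bmd, sd):
    """T-score: measured BMD in SD units from the young-adult peak."""
    return (bmd - peak_bmd) / sd

def implied_peak(bmd, t, sd):
    """Peak BMD implied by a reported T-score: peak = BMD - T * SD."""
    return bmd - t * sd

def classify(t):
    """WHO operational criteria."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia"
    return "normal"

SD = 0.12              # femoral neck SD, g/cm^2 (assumed in the text)
PEAK_VN_WOMEN = 0.80   # observed Vietnamese peak, g/cm^2
PEAK_US_WOMEN = 0.86   # NHANES US White peak, g/cm^2 (densitometer reference)

bmd = 0.55             # hypothetical femoral neck BMD, woman aged 50+
t_vn = t_score(bmd, PEAK_VN_WOMEN, SD)
t_dxa = t_score(bmd, PEAK_US_WOMEN, SD)
print(round(t_vn, 2), classify(t_vn))    # local reference
print(round(t_dxa, 2), classify(t_dxa))  # densitometer reference
```

The same scan yields T ≈ −2.08 (osteopenia) against the local peak but T ≈ −2.58 (osteoporosis) against the densitometer's higher peak, which is exactly the mechanism of over-diagnosis described above.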
It is obvious that the peak BMD provided by the densitometer was derived from a non-Vietnamese population, and may therefore not be applicable to the Vietnamese population. In this study, the relationship between BMD and age followed a third-degree polynomial function, which is consistent with a recent study [15]. According to this functional relationship, Vietnamese women achieved their peak BMD at the age of 27-29 years, later than Caucasian women (20-25 years). Although it is not possible to determine the underlying factors for this apparent difference, it is well known that Asian girls tend to have a later menarche than Caucasian girls (13 vs 12 years). Osteoporosis in men, particularly Asian men, has not been well documented. The present study is among the first to investigate osteoporosis in Asian men. In this study, about one tenth of men aged over 50 had osteoporosis. This prevalence is highly comparable with previous estimates in Caucasian men [19]. Individuals with osteoporosis are at high risk of fragility fracture [21,22]. In this study, we found that almost 30% of women (and 10% of men) aged 50+ years had osteoporosis, implying that the magnitude of osteoporosis in Vietnam is as high as in developed countries. The present results have to be interpreted within the context of strengths and potential limitations. First, the study represents one of the largest studies of osteoporosis in Asian populations, which increases the reliability of the estimates of peak bone mass and prevalence of osteoporosis. Second, the study population is highly homogeneous, which reduces the effects of potential confounders that could compromise the estimates. The participants were randomly selected according to a rigorous sampling scheme, which ensures representativeness of the general population. Third, the technique used to measure BMD is considered the "gold standard" for the assessment of bone strength.
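The third-degree polynomial approach to locating peak BMD described above can be sketched as follows: fit BMD on age, differentiate the fitted cubic, and take the real root of the derivative at which the curve is concave (a local maximum). The data below are synthetic, generated from a known cubic with its maximum at age 28, purely to illustrate the procedure; the coefficients are not the study's fitted model.

```python
import numpy as np

# Synthetic BMD-age data from a known cubic whose local maximum is at age 28
# (coefficients chosen for illustration only; not the study's fitted model).
ages = np.linspace(18, 89, 200)
true = np.poly1d([5e-6, -7.35e-4, 2.94e-2, 0.5])  # derivative roots at 28 and 70
bmd = true(ages)

# Fit a third-degree polynomial, as in the study's peak-BMD estimation.
coefs = np.polyfit(ages, bmd, 3)
fit = np.poly1d(coefs)

# Peak age: real root of the first derivative where the second derivative
# is negative (i.e., a local maximum of the fitted curve).
d1, d2 = fit.deriv(1), fit.deriv(2)
peak_age = next(r.real for r in d1.roots
                if abs(r.imag) < 1e-9 and d2(r.real) < 0)
peak_bmd = fit(peak_age)
print(round(peak_age, 1), round(peak_bmd, 3))
```

In the study itself, the SD around the fitted peak was obtained by bootstrapping this fit, which this sketch does not reproduce.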
Nevertheless, the study also has a number of potential weaknesses. The participants were sampled from an urban population; as a result, the findings may not be generalizable to the rural population. Because we excluded individuals with diseases deemed to interfere with bone metabolism, the prevalence of osteoporosis reported here could be an underestimate of the prevalence in the general population. Ideally, peak bone density should be estimated from a longitudinal study in which a large number of men and women are followed from the age of 5 to the age of 30, but such a study is not practically feasible. On the other hand, estimates of peak BMD from a cross-sectional study such as the present one can be biased by unmeasured confounders. Nevertheless, the present findings have important public health and clinical implications. Because individuals with T-scores of -2.5 or less are often treated, the over-diagnosis by TDXA could have led to over-treatment in the general population. Moreover, because individuals with T-scores of -2.5 or less are also candidates for anti-fracture clinical trials and clinical studies, the use of TDXA could have inappropriately included some women in such studies, exposing them to unnecessary risk. Thus, it seems prudent to use local normative data for the diagnosis of osteoporosis in order to avoid over-diagnosis and over-treatment. Conclusion: In summary, these data suggest that the prevalence of osteoporosis in Vietnamese men (10%) and women (30%) aged 50+ years is comparable with that in Caucasian populations. The data also indicate that the T-score provided by the Hologic QDR4500 over-diagnosed osteoporosis in Vietnamese men and women. We propose using the data developed in this study for the diagnosis of osteoporosis in the Vietnamese population.
Background: The aim of this study was to examine the effect of different bone mineral density reference ranges on the diagnosis of osteoporosis. Methods: This cross-sectional study involved 357 men and 870 women aged between 18 and 89 years, who were randomly sampled from various districts within Ho Chi Minh City, Vietnam. BMD at the femoral neck, lumbar spine and whole body was measured by DXA (Hologic QDR4500). Polynomial regression models and the bootstrap method were used to determine peak BMD and its standard deviation (SD). Based on these two parameters, we computed T-scores (denoted by TVN) for each individual in the study. A similar diagnosis was also made based on the T-scores provided by the densitometer (TDXA), which are based on the US White population (NHANES III). We then compared the concordance between TVN and TDXA in the classification of osteoporosis. Osteoporosis was defined according to the World Health Organization criteria. Results: In post-menopausal women, the prevalence of osteoporosis based on femoral neck TVN was 29%, but when the diagnosis was based on TDXA, the prevalence was 44%. In men aged 50+ years, the TVN-based prevalence of osteoporosis was 10%, which was lower than the TDXA-based prevalence (30%). Among 177 women who were diagnosed with osteoporosis by TDXA, 35% were actually osteopenic by TVN. The kappa statistic was 0.54 for women and 0.41 for men. Conclusions: These data suggest that the T-scores provided by the Hologic QDR4500 over-diagnose osteoporosis in Vietnamese men and women. This over-diagnosis could lead to over-treatment and influence the recruitment of participants into clinical trials.
Background: Osteoporosis and its consequence, fragility fracture, represent a major public health problem not only in developed countries, but in developing countries as well [1]. The number of fractures in Asia is higher than that in all European countries combined. Of all fractures in the world, approximately 17% were found to occur in Southeast Asia and 29% in the Western Pacific, compared with 35% occurring in Europe [2]. However, the prevalence of and risk factors for osteoporosis in Asian populations have not been well documented. Part of the problem is the lack of well-defined criteria for the diagnosis of osteoporosis in Asian men and women. Currently, the operational definition of osteoporosis is based on a measurement of bone mineral density (BMD), which is the most robust predictor of fracture risk [3,4]. An individual's BMD is often expressed in terms of its peak level and standard deviation to yield a T-score. The two parameters (i.e., peak BMD level and standard deviation) are commonly derived from a well-characterized population of young individuals [5]. An individual's T-score is the number of standard deviations from the peak BMD achieved between the ages of 20 and 30 years [6,7]. However, previous studies have suggested that peak BMD differs among ethnicities and between men and women [8,9]. Therefore, the diagnosis of osteoporosis should ideally be based on sex- and ethnicity-specific reference ranges [10,11]. Dual-energy X-ray absorptiometry (DXA) is considered the gold standard method for measuring BMD [6]. In recent years, DXA has been introduced to many Asian countries, including Vietnam, and is commonly used for the diagnosis of osteoporosis and treatment decisions. In the absence of sex-specific reference data for the local population, most doctors use the T-scores provided by the densitometer as a reference value to make a diagnosis for an individual.
However, it is not clear whether the reference database used in the derivation of T-scores in these densitometers is appropriate for a local population. We hypothesized that there is considerable discrepancy in the diagnosis of osteoporosis between reference databases. The present study was designed to test this hypothesis by determining the reference range of peak bone density for an Asian population, and then comparing the concordance between a population-specific T-score and the DXA-based T-score in the diagnosis of osteoporosis. Conclusion: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/12/182/prepub
Embracing the Nutritional Assessment in Cerebral Palsy: A Toolkit for Healthcare Professionals for Daily Practice.
35334837
Nutritional status assessment (NSA) can be challenging in children with cerebral palsy (CP). There are high omission rates of weight and height information in national surveillance reports. Alternative methods used to assess nutritional status may be unknown to the healthcare professionals (HCP) who report these children. Caregivers experience challenges when dealing with feeding problems (FP) common in CP. Our aim was to assess the difficulties in NSA that are causing this underreporting and to create solutions for registers and caregivers.
BACKGROUND
An online questionnaire was created for registers. Three meetings with HCP and caregivers were held to discuss problems and solutions regarding NSA and intervention.
METHODS
HCP mentioned difficulty in NSA due to a lack of time, collaboration with others, and equipment, and to children's motor impairment. Caregivers experienced difficulty in preparing nutritious meals with adapted textures. The creation of educational tools and other strategies was suggested. A toolkit was created for HCP, describing the weight and height assessment methods, and another for caregivers to help deal with common FP.
RESULTS
There are several difficulties experienced by HCP that might be overcome with educational tools, such as a toolkit. This will facilitate nutritional assessment and intervention and hopefully reduce underreporting.
CONCLUSIONS
[ "Caregivers", "Cerebral Palsy", "Child", "Delivery of Health Care", "Humans", "Nutrition Assessment", "Nutritional Status" ]
8950259
1. Introduction
Cerebral palsy (CP) is associated with motor problems accompanied by disorders of cognition, communication and behavior, epilepsy episodes, and musculoskeletal problems. It is the most common motor disability in childhood and persists into adulthood [1]. It is a heterogeneous condition that can include spastic (85%), dyskinetic (7%, comprising dystonia and choreoathetosis), and ataxic (4%) forms. CP motor severity can be established using the Gross Motor Function Classification System (GMFCS), which indicates a child's level of gross motor function and mobility [2], ranging from I (mild symptoms) to V (most severe). Children with CP are usually shorter and lighter than their peers, especially when functional limitations are more severe [3,4]. The amounts of body fat and muscle mass also tend to be lower, due to lack of mobility [5] and to the duration and severity of the neurological disorder, which reduces function in daily activities [6]. There is a high risk of malnutrition (29-48%), which increases with greater motor impairment [7,8,9]. The etiology of malnutrition is multifactorial and includes both nutritional and non-nutritional factors [10]. Dietary intake is conditioned by oromotor impairment [11] and the presence of gastroesophageal reflux and constipation [10]. Moreover, altered energy requirements frequently occur, decreasing as ambulatory status declines and more limbs are involved [12]. Malnutrition impacts growth and quality of life [13], decreases immune function [4,14,15], increases use of health care [16], limits participation in social activities, and worsens survival prognosis [4,17]. In recent decades, it has become clear that a correct diagnosis leading to actions aimed at attenuating eating problems decreases hospitalization rates and improves nutritional status [18].
This assessment will allow for adequate multidisciplinary interventions to restore linear growth, stabilize weight, reduce irritability and spasticity, improve circulation and healing, and increase social participation and quality of life [10]. There is no single method to assess nutritional status, but the analysis of a set of parameters can provide important nutritional information. In children, anthropometric parameters such as weight, height, and body mass index are the most frequently used, since they are simple to collect and noninvasive [19]. For children with CP, anthropometric measurements can be difficult to assess, mainly if they have contractures, spasticity, scoliosis, or positioning problems that hinder the use of conventional weighing and other measuring methods [20]. In the last few decades, several indirect methods to determine anthropometrics in children with CP have been developed, albeit with no consensus on the ideal one(s) despite extensive discussion of the topic [20,21,22]. A variety of growth charts can be used to assess and monitor growth [23]. Specific charts for children with CP are available, such as the Brooks charts [24]. However, they reflect how children with CP have grown, rather than how they can be expected to optimally grow [25]. National and other surveillance programs for children with CP provide clinical information [26,27] as well as nutritional status data. Countries that perform this analysis report elevated rates of undernutrition, but most concerning is the high underreporting of anthropometric data [28,29]. To our knowledge, there is no record of the reasons why registers from the program do not always perform this evaluation. We therefore hypothesized that registers experience difficulties when assessing the nutritional status of these children.
As part of a project funded by the Polytechnic Institute of Lisbon, IPL/2020/PIN-PC_ESTESL (Project for Nutritional Intervention in Cerebral Palsy), we developed a strategy to identify the main reasons for the underreporting and developed actions to minimize this situation. This project included (1) assessment of registers' motives for underreporting; (2) assessment of the difficulties experienced by healthcare professionals working with these children; (3) assessment of the feeding difficulties experienced by caregivers; and (4) a review of the literature on anthropometric procedures developed for children with CP and the development of pedagogical material adapted to the needs previously assessed.
3. Results
3.1. Difficulties Experienced by Registers of the Surveillance Program and Healthcare Professionals When Assessing Nutritional Status, and Suggested Solutions Of the 20 registers of the Portuguese national surveillance program of children with CP invited to fill in the questionnaire, 65% (n = 13) submitted a response. Most were medical doctors (n = 8), followed by physical therapists (n = 2), dietitians (n = 2), and nurses (n = 1). All of them considered nutritional assessment important, with a focus on weight assessment (WA) and height assessment (HA). The difficulties reported by these professionals were lack of time (15% for WA and HA), lack of collaboration with other health professionals (7% for WA and 23% for HA), the child's motor impairment (30% for both), and lack of equipment such as a scale or stadiometer (38% for both). All respondents mentioned having a measuring tape in their work setting. Alternative methods, such as segmental length measures, were known by 69%. Regarding possible solutions to facilitate this assessment, 15% of the registers found brochures/flyers a useful tool to compile information about different methodologies, 84% suggested creating a guide/manual with procedures described step by step, and 46% suggested a mobile app. Professional training on alternative methods of assessing nutritional status in children with high motor impairment was considered helpful by 92%. After each meeting with the healthcare professionals, the perspectives of all 35 participants were recorded from an informal conversation. The group included general practitioners, pediatricians, speech therapists, physiotherapists, occupational therapists, psychologists, nurses, dietitians, and caregivers. The contributions from healthcare professionals were similar to those expressed in the questionnaire filled in by registers. To remove some of the barriers to the assessment of nutritional status, they suggested the creation of an online platform containing all of the equations involved in nutritional assessment, into which data can be inserted to automatically calculate the measurements; the creation of an educational program for healthcare professionals; and the assembly of a team of highly skilled professionals to visit all CP centers in the country to evaluate and register all children. It was also mentioned that involving the community surrounding children with CP would be helpful, the goal being to promote an active contribution to the evaluation and registration of their anthropometric parameters. All difficulties experienced by registers and healthcare professionals, as well as the suggested solutions to improve nutritional assessment, are summarized in Table 1. 3.2.
Difficulties Experienced by Caregivers When Dealing with Children’s Feeding Problems and Discussed Strategies to Minimize These Problems During the meetings with caregivers, the main problems referred were the difficulty in meal’s preparation with adapted textures that meet nutritional needs; difficulty to ensure correct hydration status; fear of meals where fish is the main protein source due to presence of fish bones; and how to position the child during mealtime. These answers were recorded while an informal conversation was happening. Some strategies were discussed, including the instruction of the canteen staff in how to prepare different textured and nutritional meals and the creation of a manual with this information summarized; creation of a manual with information on ways to aromatize water and tips to encourage the consumption of liquids (e.g., alarms during the day or mark the bottles to control the intake during the day); and ways to deal with common gastrointestinal problems such as constipation, dysphagia and gastroesophageal reflux. In addition, it was mentioned that there was little financial support for the prescription of artificial nutrition for families with children with CP. During the meetings with caregivers, the main problems referred were the difficulty in meal’s preparation with adapted textures that meet nutritional needs; difficulty to ensure correct hydration status; fear of meals where fish is the main protein source due to presence of fish bones; and how to position the child during mealtime. These answers were recorded while an informal conversation was happening. 
Some strategies were discussed, including training canteen staff to prepare meals with different textures and adequate nutritional value and compiling this information in a manual; creating a manual with ways to flavor water and tips to encourage liquid intake (e.g., setting alarms during the day or marking bottles to monitor intake); and ways to deal with common gastrointestinal problems such as constipation, dysphagia, and gastroesophageal reflux. In addition, it was mentioned that there is little financial support for the prescription of artificial nutrition for families of children with CP.

3.3. Toolkit to Healthcare Professionals and Registers

The toolkit for healthcare professionals and registers focused on methods to assess weight and height in children with CP, since these are the only indicators currently requested in the surveillance program forms regarding the evaluation of nutritional status.

3.3.1. Weight Assessment

Weight can be measured directly on a scale or estimated by indirect measurement. A summary of these methods is presented in Figure 1, and step-by-step procedures are described in the Supplementary Material.

3.3.2. Height Assessment

As with weight, height can be measured directly with a stadiometer or estimated by indirect measurement. A summary of these methods is presented in Figure 2, and the step-by-step methodologies are described in the Supplementary Material.

Alternative methods may be needed to estimate weight and height. These methods rely on the measurement of segmental lengths, whose results are entered into equations that yield an estimate of these anthropometric parameters. These equations are summarized in Table 2. A step-by-step description that helps the healthcare professional perform each measurement according to the child's characteristics, and then calculate the child's weight and height, is presented in detail in the toolkit (Supplementary Material). The second toolkit, dedicated to helping caregivers, included practical ways of dealing with the need to adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).
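The segmental-length approach can be sketched in code. As an illustration only, the coefficients below are taken from Stevenson's (1995) equations for children with CP; they stand in for the equations summarized in Table 2 (which are not reproduced here), and the toolkit (Supplementary Material) remains the authoritative reference for which equation to apply.

```python
# Illustrative sketch of indirect anthropometric estimation in children with CP.
# Height coefficients are from Stevenson (1995); they are examples of the kind
# of equations summarized in Table 2, not the toolkit's prescribed ones.

def height_from_knee_height(kh_cm: float) -> float:
    """Estimated height (cm) from knee height: E = 2.69 * KH + 24.2."""
    return 2.69 * kh_cm + 24.2


def height_from_tibial_length(tl_cm: float) -> float:
    """Estimated height (cm) from tibial length: E = 3.26 * TL + 30.8."""
    return 3.26 * tl_cm + 30.8


def weight_by_difference(caregiver_with_child_kg: float, caregiver_kg: float) -> float:
    """Indirect weight: weigh the caregiver holding the child on a scale,
    then subtract the caregiver's own weight (a common workaround when the
    child cannot stand on a scale)."""
    return caregiver_with_child_kg - caregiver_kg
```

For example, a knee height of 30.0 cm gives an estimated height of about 104.9 cm, and a caregiver weighing 60.1 kg who weighs 82.4 kg while holding the child gives an estimated child weight of about 22.3 kg.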
5. Conclusions
The amount of time needed to provide all of the care required by children with CP limits the time available to carry out anthropometric measurements, especially if the procedures are not described in an accessible way. The great variety of methodologies used to assess nutritional status in these children makes the choice challenging. Healthcare professionals and registers should therefore be instructed on how to carry out the procedures; otherwise, the underreporting of weight and height information will continue to rise. The toolkit was designed to overcome some of the difficulties expressed by healthcare professionals, some of whom are registers in the national surveillance program. Having the information in a step-by-step structure, with original illustrations to exemplify the procedures, will hopefully help them in the evaluation. In the future, other strategies can be implemented in accordance with their suggestions. Caregivers also experience challenges when dealing with the feeding problems of these children. A toolkit was developed to help them improve the nutritional assessment of children with CP, identify cases of malnutrition, and enable faster and more personalized nutritional intervention (data not shown). This kind of support will need to be continually updated to remain in line with the latest recommendations.
[ "2. Materials and Methods", "2.1. Challenges While Registration Anthropometric Parameters: Questionnaire for Registers of Portuguese National Surveillance Program of Children with Cerebral Palsy", "2.2. Challenges While Assessing Nutritional Status and Dealing with Feeding Problems: Meetings with Healthcare Professionals and Caregivers", "2.3. Development of the Toolkit for Healthcare Professionals and Registers", "3.1. Difficulties Experienced by Registers of the Surveillance Program and Healthcare Professionals When Assessing Nutritional Status and Suggested Solutions", "3.2. Difficulties Experienced by Caregivers When Dealing with Children’s Feeding Problems and Discussed Strategies to Minimize These Problems", "3.3. Toolkit to Healthcare Professionals and Registers", "3.3.1. Weight Assessment", "3.3.2. Height Assessment" ]
[ "2.1. Challenges While Registration Anthropometric Parameters: Questionnaire for Registers of Portuguese National Surveillance Program of Children with Cerebral Palsy A questionnaire was developed and applied to a group of registers from a Portuguese national surveillance program of children with CP to access the major difficulties to fulfill nutritional information of national surveillance report. A cross-sectional survey was conducted using an original questionnaire developed by the research team and hosted by an online platform (Google Forms). The questionnaire was first applied to a group of healthcare professionals for piloted test to ensure functionality and clarity of questions. In August 2020, an invitation email was sent to 20 voluntary registers that follow children with CP within the scope of the national surveillance program. The survey was opened for five weeks. The anonymous questionnaire was focused on the following: importance given to the evaluation; existence of collaboration with other professionals to assess weight and height, and difficulties experienced when assessing (strategies that could facilitate those measurements). Data were aggregated descriptive analysis of categorical data was performed.\nA questionnaire was developed and applied to a group of registers from a Portuguese national surveillance program of children with CP to access the major difficulties to fulfill nutritional information of national surveillance report. A cross-sectional survey was conducted using an original questionnaire developed by the research team and hosted by an online platform (Google Forms). The questionnaire was first applied to a group of healthcare professionals for piloted test to ensure functionality and clarity of questions. In August 2020, an invitation email was sent to 20 voluntary registers that follow children with CP within the scope of the national surveillance program. The survey was opened for five weeks. 
The anonymous questionnaire was focused on the following: importance given to the evaluation; existence of collaboration with other professionals to assess weight and height, and difficulties experienced when assessing (strategies that could facilitate those measurements). Data were aggregated descriptive analysis of categorical data was performed.\n2.2. Challenges While Assessing Nutritional Status and Dealing with Feeding Problems: Meetings with Healthcare Professionals and Caregivers A set of meetings were organized with healthcare professionals who work with children with CP—two in the hospital context and one in a CP center in a geographical area of Portugal (Alentejo). Quality data were recorded by one of the investigators, while two other investigators conducted the meeting. The meeting was structured in three phases. First, a presentation was made based on the nutritional data of the results of the last national report. Second, a collaborative brainstorming session was conducted where participants were able to freely share their beliefs and practical experiences. This allowed the research team to identify possible explanations for the higher underreporting results. Third, a workshop was held on conventional and alternative methodologies of anthropometric measurements for nutritional assessment with moments or practical assessment of children with CP where conventional and alternative anthropometric measurements were applied. Regarding the measured children, the caregiver was informed before the meeting about the anthropometric procedures and gave the informed consent to perform the evaluation with those healthcare professionals. Caregivers were also present during assessment. All of the procedures were explained, and the data were then included in the clinical record of the children. The ethical principles of the Helsinki declaration were followed. Another set of meetings was developed with caregivers. 
The quality data were recorded by one of the investigators while two other investigators conducted the meeting. The meeting was structured in two phases. First, the project was explained and the importance of adequate food for the health of children with CP. In the second phase, an inquiry was carried out on the main difficulties associated with feeding these children. A participatory methodology was used in which participants had the possibility to freely share their experiences, beliefs, and expectations. Finally, the main shared difficulties were presented and, together, caregivers’ practical interventions to solve these problems were systematized.\nA set of meetings were organized with healthcare professionals who work with children with CP—two in the hospital context and one in a CP center in a geographical area of Portugal (Alentejo). Quality data were recorded by one of the investigators, while two other investigators conducted the meeting. The meeting was structured in three phases. First, a presentation was made based on the nutritional data of the results of the last national report. Second, a collaborative brainstorming session was conducted where participants were able to freely share their beliefs and practical experiences. This allowed the research team to identify possible explanations for the higher underreporting results. Third, a workshop was held on conventional and alternative methodologies of anthropometric measurements for nutritional assessment with moments or practical assessment of children with CP where conventional and alternative anthropometric measurements were applied. Regarding the measured children, the caregiver was informed before the meeting about the anthropometric procedures and gave the informed consent to perform the evaluation with those healthcare professionals. Caregivers were also present during assessment. All of the procedures were explained, and the data were then included in the clinical record of the children. 
The ethical principles of the Helsinki declaration were followed. Another set of meetings was developed with caregivers. The quality data were recorded by one of the investigators while two other investigators conducted the meeting. The meeting was structured in two phases. First, the project was explained and the importance of adequate food for the health of children with CP. In the second phase, an inquiry was carried out on the main difficulties associated with feeding these children. A participatory methodology was used in which participants had the possibility to freely share their experiences, beliefs, and expectations. Finally, the main shared difficulties were presented and, together, caregivers’ practical interventions to solve these problems were systematized.\n2.3. Development of the Toolkit for Healthcare Professionals and Registers A narrative review of the literature was made following the SANRA criteria [30]. The search using PubMed platform from 1980 and 2021 was performed with the following search terms ‘cerebral palsy’, ‘children with disabilities’; ‘neurological disorders’; ‘nutritional status’; ‘body composition’; ‘body weight’; “weights and measures”, ‘’child development’; ‘physical examination’; ‘growth disorders’; ‘anthropometry/methods’; ‘assessment, nutritional’; ‘feeding problems’; ‘constipation’ and ‘gastroesophageal reflux’. For initial exploratory research on the topic, only systematic review articles and meta-analysis were considered. Articles in English, Portuguese and Spanish were included. The purpose was to investigate conventional and alternative methods of evaluating anthropometric parameters and nutritional status in children with CP, as well as common feeding problems. The articles cited in those were investigated in more detail, especially when they indicated descriptive procedures of measuring these children and how to interpret the results. 
The assessment methods to estimate weight and height recommended by ESPGHAN were chosen over the others, even though these were also considered as alternatives. After analysis of all of the data collected, two investigators with a communication background elaborated a practical toolkit to help healthcare professionals in nutritional assessment (Supplementary Material) and other to assist caregivers on feeding problems (data not shown). To illustrate the anthropometric procedures, original figures were created.\nA narrative review of the literature was made following the SANRA criteria [30]. The search using PubMed platform from 1980 and 2021 was performed with the following search terms ‘cerebral palsy’, ‘children with disabilities’; ‘neurological disorders’; ‘nutritional status’; ‘body composition’; ‘body weight’; “weights and measures”, ‘’child development’; ‘physical examination’; ‘growth disorders’; ‘anthropometry/methods’; ‘assessment, nutritional’; ‘feeding problems’; ‘constipation’ and ‘gastroesophageal reflux’. For initial exploratory research on the topic, only systematic review articles and meta-analysis were considered. Articles in English, Portuguese and Spanish were included. The purpose was to investigate conventional and alternative methods of evaluating anthropometric parameters and nutritional status in children with CP, as well as common feeding problems. The articles cited in those were investigated in more detail, especially when they indicated descriptive procedures of measuring these children and how to interpret the results. The assessment methods to estimate weight and height recommended by ESPGHAN were chosen over the others, even though these were also considered as alternatives. 
After analysis of all of the data collected, two investigators with a communication background elaborated a practical toolkit to help healthcare professionals in nutritional assessment (Supplementary Material) and other to assist caregivers on feeding problems (data not shown). To illustrate the anthropometric procedures, original figures were created.", "A questionnaire was developed and applied to a group of registers from a Portuguese national surveillance program of children with CP to access the major difficulties to fulfill nutritional information of national surveillance report. A cross-sectional survey was conducted using an original questionnaire developed by the research team and hosted by an online platform (Google Forms). The questionnaire was first applied to a group of healthcare professionals for piloted test to ensure functionality and clarity of questions. In August 2020, an invitation email was sent to 20 voluntary registers that follow children with CP within the scope of the national surveillance program. The survey was opened for five weeks. The anonymous questionnaire was focused on the following: importance given to the evaluation; existence of collaboration with other professionals to assess weight and height, and difficulties experienced when assessing (strategies that could facilitate those measurements). Data were aggregated descriptive analysis of categorical data was performed.", "A set of meetings were organized with healthcare professionals who work with children with CP—two in the hospital context and one in a CP center in a geographical area of Portugal (Alentejo). Quality data were recorded by one of the investigators, while two other investigators conducted the meeting. The meeting was structured in three phases. First, a presentation was made based on the nutritional data of the results of the last national report. 
Second, a collaborative brainstorming session was conducted where participants were able to freely share their beliefs and practical experiences. This allowed the research team to identify possible explanations for the higher underreporting results. Third, a workshop was held on conventional and alternative methodologies of anthropometric measurements for nutritional assessment with moments or practical assessment of children with CP where conventional and alternative anthropometric measurements were applied. Regarding the measured children, the caregiver was informed before the meeting about the anthropometric procedures and gave the informed consent to perform the evaluation with those healthcare professionals. Caregivers were also present during assessment. All of the procedures were explained, and the data were then included in the clinical record of the children. The ethical principles of the Helsinki declaration were followed. Another set of meetings was developed with caregivers. The quality data were recorded by one of the investigators while two other investigators conducted the meeting. The meeting was structured in two phases. First, the project was explained and the importance of adequate food for the health of children with CP. In the second phase, an inquiry was carried out on the main difficulties associated with feeding these children. A participatory methodology was used in which participants had the possibility to freely share their experiences, beliefs, and expectations. Finally, the main shared difficulties were presented and, together, caregivers’ practical interventions to solve these problems were systematized.", "A narrative review of the literature was made following the SANRA criteria [30]. 
The search using PubMed platform from 1980 and 2021 was performed with the following search terms ‘cerebral palsy’, ‘children with disabilities’; ‘neurological disorders’; ‘nutritional status’; ‘body composition’; ‘body weight’; “weights and measures”, ‘’child development’; ‘physical examination’; ‘growth disorders’; ‘anthropometry/methods’; ‘assessment, nutritional’; ‘feeding problems’; ‘constipation’ and ‘gastroesophageal reflux’. For initial exploratory research on the topic, only systematic review articles and meta-analysis were considered. Articles in English, Portuguese and Spanish were included. The purpose was to investigate conventional and alternative methods of evaluating anthropometric parameters and nutritional status in children with CP, as well as common feeding problems. The articles cited in those were investigated in more detail, especially when they indicated descriptive procedures of measuring these children and how to interpret the results. The assessment methods to estimate weight and height recommended by ESPGHAN were chosen over the others, even though these were also considered as alternatives. After analysis of all of the data collected, two investigators with a communication background elaborated a practical toolkit to help healthcare professionals in nutritional assessment (Supplementary Material) and other to assist caregivers on feeding problems (data not shown). To illustrate the anthropometric procedures, original figures were created.", "Of the 20 registers of the Portuguese national surveillance program of children with CP invited to fulfil the questionnaire, 65% (n = 13) submitted their response. Most were medical doctors (n = 8), followed by physical therapists (n = 2), dietitians (n = 2), and nurses (n = 1). All of them considered nutritional assessment important, with a focus on weight assessment (WA) and height assessment (HA). 
Regarding this matter, the difficulties revealed by these professionals were lack of time (15% for WA and HA), lack of collaboration with other health professionals (7% for WA and 23% for HA), child’s motor impairment (30% for both) and lack of equipment such as scale or stadiometer (38% for both). All of the inquired mentioned having a measuring tape in their work setting. Alternative methods, such as segmental length measures, were known by 69%. In regards of possible solutions to facilitate this assessment, 15% of the registers found brochures/flyers a useful tool to compile information about different methodologies, 84% suggested to create and design a guide/manual of procedures described step-by-step and 46% suggested a mobile app. It was mentioned by 92% that professional training about alternative methods of assessing nutritional status in children with high motor impairment would be helpful. After each meeting with the healthcare professionals, all perspectives from the 35 participants were recorded from an informal conversation. The group included general practitioners, pediatricians, speech therapists, physiotherapists, occupational therapists, psychologists, nurses, dietitians, and caregivers. The contributions from healthcare professionals were similar to those expressed in the questionnaire filled by registers. In order to remove some of the barriers to the assessment of nutritional status, they mentioned the creation of an online platform with all of the equations involved in nutritional assessment, where data can be inserted and automatically calculate the measurements; one can create an educational program for healthcare professionals; and one can gather a team of high skilled professionals that visits all CP centers in the country to evaluate and register all children. It was also mentioned that involving the community surrounding children with CP would be helpful. 
The goal was to promote an active contribution to the evaluation and registration of their anthropometric parameters. All difficulties experienced by registers and healthcare professionals, as well as the suggested solutions to improve the nutritional assessment are summarized in Table 1.", "During the meetings with caregivers, the main problems referred were the difficulty in meal’s preparation with adapted textures that meet nutritional needs; difficulty to ensure correct hydration status; fear of meals where fish is the main protein source due to presence of fish bones; and how to position the child during mealtime. These answers were recorded while an informal conversation was happening. Some strategies were discussed, including the instruction of the canteen staff in how to prepare different textured and nutritional meals and the creation of a manual with this information summarized; creation of a manual with information on ways to aromatize water and tips to encourage the consumption of liquids (e.g., alarms during the day or mark the bottles to control the intake during the day); and ways to deal with common gastrointestinal problems such as constipation, dysphagia and gastroesophageal reflux. In addition, it was mentioned that there was little financial support for the prescription of artificial nutrition for families with children with CP.", "The toolkit for healthcare professionals and registers focused on methods to assess weight and height in children with CP since these are the only indicators currently asked in the surveillance program forms regarding the evaluation of nutritional status.\n3.3.1. Weight Assessment This can be performed directly on a scale or estimated by indirect measurement. A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).\nThis can be performed directly on a scale or estimated by indirect measurement. 
A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).\n3.3.2. Height Assessment As with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).\nAs with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). 
The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).", "This can be performed directly on a scale or estimated by indirect measurement. A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).", "As with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown)." ]
[ null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Challenges While Registration Anthropometric Parameters: Questionnaire for Registers of Portuguese National Surveillance Program of Children with Cerebral Palsy", "2.2. Challenges While Assessing Nutritional Status and Dealing with Feeding Problems: Meetings with Healthcare Professionals and Caregivers", "2.3. Development of the Toolkit for Healthcare Professionals and Registers", "3. Results", "3.1. Difficulties Experienced by Registers of the Surveillance Program and Healthcare Professionals When Assessing Nutritional Status and Suggested Solutions", "3.2. Difficulties Experienced by Caregivers When Dealing with Children’s Feeding Problems and Discussed Strategies to Minimize These Problems", "3.3. Toolkit to Healthcare Professionals and Registers", "3.3.1. Weight Assessment", "3.3.2. Height Assessment", "4. Discussion", "5. Conclusions" ]
[ "Cerebral palsy (PC) is associated with motor problems accompanied by disorders in cognition, communication and behavior, epilepsy episodes, and musculoskeletal problems. It is the most common motor deficiency in childhood and persists in adulthood [1]. It is a heterogeneous condition that can include spastic (85%), dyskinetic (7%), which includes dystonia and choreoathetosis, and ataxic (4%) forms. CP motor severity can be established using the Gross Motor Function Classification System (GMFCS), which indicates a child’s level of gross motor function and mobility [2], ranking from I (light symptoms) to V (more severe). Children with CP are usually shorter and lighter than peers, especially when functional limitations are more severe [3,4]. The amount of body fat and muscle mass also tend to be lower, due to lack of mobility [5], as well as the duration and severity of the neurological disorder which reduces function in terms of daily activities [6]. There is a high risk of malnutrition (29–48%), which increases when motor impairment is higher [7,8,9]. Etiology of malnutrition is multifactorial and includes both nutritional and non-nutritional factors [10]. Dietary intake is conditioned by oromotor impairment [11] and the presence of gastroesophageal reflux and constipation [10]. Moreover, altered energy requirements frequently occur and decrease as ambulatory status declines and more limbs are involved [12]. Malnutrition impacts growth and quality of life [13], decreases immune function [4,14,15], increases use of health care [16], limits participation in social activities, and worsens survival prognosis [4,17]. In recent decades, it has been perceived that a correct diagnosis that leads to actions with the purpose of attenuating eating problems, decreases hospitalization rates and improves nutritional status [18]. 
This assessment will allow or adequate multidisciplinary interventions to restore linear growth, stabilize weight, reduce irritability and spasticity, improve circulation and healing, and increase social participation and quality of life [10]. There is no single method to assess nutritional status, but the analyses of a set of parameters can provide important nutritional information. In children, anthropometric parameters such as weight, height, and body mass index are the most frequently used since they are simple to collect and noninvasive [19]. For children with CP, anthropometric measurements can be difficult to assess, mainly if they have contractures, spasticity, scoliosis, and positioning problems that hinder the use of conventional weighing and other measuring methods [20]. In the last few decades, several indirect methods to determine anthropometrics in children with CP were developed, albeit with no consensus on the ideal one/ones despite de extensive discussion on the topic [20,21,22]. A variety of growth charts can be used to assess and monitor growth [23]. Specific charts for children with CP are available, such as Brooks charts [24]. However, they reflect how children with CP have grown, rather than how they can be expected to optimally grow [25]. National and other surveillance programs of children with CP provide clinical information [26,27] as well as nutritional status data. Countries that preform this analysis report elevated rates of undernutrition, but the most concerning is the higher underreport of anthropometric data [28,29]. To our knowledge, there is no record on the reasons why registers from the program do not always assess this evaluation parameters. The hypothesis was raised that registers experience some difficulties when assessing the nutritional status of these children. 
As part of a project funded by the Polytechnic Institute of Lisbon, IPL/2020/PIN-PC_ESTESL (Project for Nutritional Intervention in Cerebral Palsy), we developed a strategy to identify the main reasons for the higher underreport and developed actions to minimize this situation. This project included (1) assessment of registers’ motives for underreporting; (2) assessment of the difficulties experienced by healthcare professionals working with these children; (3) assessment of the feeding difficulties experienced by caregivers; and (4) a review of the literature on anthropometric procedures developed for children with CP and developed a pedagogical material adapted to the needs previous assessed.", "2.1. Challenges While Registration Anthropometric Parameters: Questionnaire for Registers of Portuguese National Surveillance Program of Children with Cerebral Palsy A questionnaire was developed and applied to a group of registers from a Portuguese national surveillance program of children with CP to access the major difficulties to fulfill nutritional information of national surveillance report. A cross-sectional survey was conducted using an original questionnaire developed by the research team and hosted by an online platform (Google Forms). The questionnaire was first applied to a group of healthcare professionals for piloted test to ensure functionality and clarity of questions. In August 2020, an invitation email was sent to 20 voluntary registers that follow children with CP within the scope of the national surveillance program. The survey was opened for five weeks. The anonymous questionnaire was focused on the following: importance given to the evaluation; existence of collaboration with other professionals to assess weight and height, and difficulties experienced when assessing (strategies that could facilitate those measurements). 
Data were aggregated and a descriptive analysis of categorical data was performed.

2.2. Challenges While Assessing Nutritional Status and Dealing with Feeding Problems: Meetings with Healthcare Professionals and Caregivers

A set of meetings was organized with healthcare professionals who work with children with CP: two in a hospital context and one in a CP center in the Alentejo region of Portugal. Qualitative data were recorded by one of the investigators, while two other investigators conducted the meeting. The meeting was structured in three phases. First, a presentation was made based on the nutritional data from the last national report. Second, a collaborative brainstorming session was conducted in which participants could freely share their beliefs and practical experiences; this allowed the research team to identify possible explanations for the high underreporting. Third, a workshop was held on conventional and alternative anthropometric measurement methodologies for nutritional assessment, with moments of practical assessment in which these measurements were applied to children with CP. For the measured children, the caregiver was informed about the anthropometric procedures before the meeting and gave informed consent for the evaluation to be performed by those healthcare professionals. Caregivers were also present during the assessment. All procedures were explained, and the data were then included in the children’s clinical records. The ethical principles of the Declaration of Helsinki were followed.

Another set of meetings was held with caregivers. Qualitative data were recorded by one of the investigators while two other investigators conducted the meeting. The meeting was structured in two phases. First, the project was explained, along with the importance of adequate nutrition for the health of children with CP. In the second phase, an inquiry was carried out into the main difficulties associated with feeding these children. A participatory methodology was used in which participants could freely share their experiences, beliefs, and expectations. Finally, the main shared difficulties were presented and, together with the caregivers, practical interventions to solve these problems were systematized.

2.3. Development of the Toolkit for Healthcare Professionals and Registers

A narrative review of the literature was conducted following the SANRA criteria [30].
The search was performed on the PubMed platform, covering 1980 to 2021, with the following search terms: ‘cerebral palsy’; ‘children with disabilities’; ‘neurological disorders’; ‘nutritional status’; ‘body composition’; ‘body weight’; ‘weights and measures’; ‘child development’; ‘physical examination’; ‘growth disorders’; ‘anthropometry/methods’; ‘assessment, nutritional’; ‘feeding problems’; ‘constipation’; and ‘gastroesophageal reflux’. For the initial exploratory research on the topic, only systematic reviews and meta-analyses were considered. Articles in English, Portuguese, and Spanish were included. The purpose was to investigate conventional and alternative methods of evaluating anthropometric parameters and nutritional status in children with CP, as well as common feeding problems. The articles cited in those reviews were investigated in more detail, especially when they described procedures for measuring these children and how to interpret the results. The assessment methods to estimate weight and height recommended by ESPGHAN were preferred over the others, although the others were also considered as alternatives. After analysis of all of the data collected, two investigators with a communication background elaborated a practical toolkit to help healthcare professionals with nutritional assessment (Supplementary Material) and another to assist caregivers with feeding problems (data not shown). Original figures were created to illustrate the anthropometric procedures.

3.1. Difficulties Experienced by Registers of the Surveillance Program and Healthcare Professionals When Assessing Nutritional Status, and Suggested Solutions

Of the 20 registers of the Portuguese national surveillance program of children with CP invited to complete the questionnaire, 65% (n = 13) submitted a response. Most were medical doctors (n = 8), followed by physical therapists (n = 2), dietitians (n = 2), and nurses (n = 1). All of them considered nutritional assessment important, with a focus on weight assessment (WA) and height assessment (HA). The difficulties reported by these professionals were lack of time (15% for both WA and HA), lack of collaboration with other health professionals (7% for WA and 23% for HA), the child’s motor impairment (30% for both), and lack of equipment such as a scale or stadiometer (38% for both). All of those surveyed mentioned having a measuring tape in their work setting. Alternative methods, such as segmental length measures, were known to 69%. Regarding possible solutions to facilitate this assessment, 15% of the registers found brochures/flyers a useful tool to compile information about different methodologies, 84% suggested creating a step-by-step guide/manual of procedures, and 46% suggested a mobile app. Professional training on alternative methods of assessing nutritional status in children with severe motor impairment was considered helpful by 92%.
After each meeting with the healthcare professionals, the perspectives of all 35 participants were recorded from an informal conversation. The group included general practitioners, pediatricians, speech therapists, physiotherapists, occupational therapists, psychologists, nurses, dietitians, and caregivers. The contributions from the healthcare professionals were similar to those expressed in the questionnaire completed by the registers. To remove some of the barriers to the assessment of nutritional status, they suggested creating an online platform containing all of the equations involved in nutritional assessment, into which data could be inserted to calculate the estimates automatically; creating an educational program for healthcare professionals; and assembling a team of highly skilled professionals to visit all CP centers in the country to evaluate and register all children. It was also mentioned that involving the community surrounding children with CP would be helpful, the goal being to promote an active contribution to the evaluation and registration of their anthropometric parameters. All difficulties experienced by registers and healthcare professionals, as well as the suggested solutions to improve nutritional assessment, are summarized in Table 1.

3.2.
Difficulties Experienced by Caregivers When Dealing with Children’s Feeding Problems, and Strategies Discussed to Minimize These Problems

During the meetings with caregivers, the main problems reported were difficulty preparing meals with adapted textures that meet nutritional needs; difficulty ensuring an adequate hydration status; fear of meals in which fish is the main protein source, due to the presence of fish bones; and how to position the child during mealtime. These answers were recorded during an informal conversation. Several strategies were discussed, including instructing canteen staff on how to prepare nutritionally adequate meals with different textures, together with a manual summarizing this information; creating a manual with ways to flavor water and tips to encourage the consumption of liquids (e.g., alarms during the day, or marking bottles to monitor intake throughout the day); and ways to deal with common gastrointestinal problems such as constipation, dysphagia, and gastroesophageal reflux. In addition, participants mentioned that there was little financial support for the prescription of artificial nutrition for families of children with CP.

3.3. Toolkit for Healthcare Professionals and Registers

The toolkit for healthcare professionals and registers focused on methods to assess weight and height in children with CP, since these are the only indicators currently requested in the surveillance program forms regarding the evaluation of nutritional status.

3.3.1. Weight Assessment

Weight can be measured directly on a scale or estimated by indirect measurement. A summary of these methods is presented in Figure 1, and step-by-step procedures are described in the Supplementary Material.

3.3.2. Height Assessment

As with weight, height can be measured directly with a stadiometer or estimated by indirect measurement. A summary of these methods is presented in Figure 2, and the step-by-step methodologies are described in the Supplementary Material.

Alternative methods may be needed to estimate weight and height. These methods involve measuring segmental lengths; the results are then entered into equations that provide an estimate of these anthropometric parameters. These equations are summarized in Table 2.
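The equations themselves are given in Table 2 and are not reproduced here. As a hedged sketch of how such estimations work, the example below combines two commonly described approaches: weighing the child held in a caregiver’s arms and subtracting the caregiver’s weight, and estimating stature from a segmental length using the frequently cited Stevenson (1995) coefficients for children with CP. The coefficients shown are illustrative; in practice, the equations adopted in the toolkit should be used.

```python
def weight_by_difference(combined_kg: float, caregiver_kg: float) -> float:
    """Estimate the child's weight by weighing caregiver and child
    together on a standard scale, then subtracting the caregiver's weight."""
    return combined_kg - caregiver_kg

def stature_from_knee_height(knee_height_cm: float) -> float:
    """Estimate stature (cm) from knee height (cm). Coefficients are the
    commonly cited Stevenson (1995) values, shown for illustration only."""
    return 2.69 * knee_height_cm + 24.2

def stature_from_tibial_length(tibial_length_cm: float) -> float:
    """Estimate stature (cm) from tibial length (cm), same caveat as above."""
    return 3.26 * tibial_length_cm + 30.8

print(weight_by_difference(82.5, 62.5))           # -> 20.0
print(round(stature_from_knee_height(30.0), 1))   # -> 104.9
```

An online calculator of the kind suggested by the healthcare professionals would essentially wrap functions like these behind a data-entry form.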
The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).\nAs with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).\nThe toolkit for healthcare professionals and registers focused on methods to assess weight and height in children with CP since these are the only indicators currently asked in the surveillance program forms regarding the evaluation of nutritional status.\n3.3.1. Weight Assessment This can be performed directly on a scale or estimated by indirect measurement. 
A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).\nThis can be performed directly on a scale or estimated by indirect measurement. A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).\n3.3.2. Height Assessment As with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).\nAs with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. 
The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).", "Of the 20 registers of the Portuguese national surveillance program of children with CP invited to fulfil the questionnaire, 65% (n = 13) submitted their response. Most were medical doctors (n = 8), followed by physical therapists (n = 2), dietitians (n = 2), and nurses (n = 1). All of them considered nutritional assessment important, with a focus on weight assessment (WA) and height assessment (HA). Regarding this matter, the difficulties revealed by these professionals were lack of time (15% for WA and HA), lack of collaboration with other health professionals (7% for WA and 23% for HA), child’s motor impairment (30% for both) and lack of equipment such as scale or stadiometer (38% for both). All of the inquired mentioned having a measuring tape in their work setting. Alternative methods, such as segmental length measures, were known by 69%. In regards of possible solutions to facilitate this assessment, 15% of the registers found brochures/flyers a useful tool to compile information about different methodologies, 84% suggested to create and design a guide/manual of procedures described step-by-step and 46% suggested a mobile app. It was mentioned by 92% that professional training about alternative methods of assessing nutritional status in children with high motor impairment would be helpful. After each meeting with the healthcare professionals, all perspectives from the 35 participants were recorded from an informal conversation. 
The group included general practitioners, pediatricians, speech therapists, physiotherapists, occupational therapists, psychologists, nurses, dietitians, and caregivers. The contributions from healthcare professionals were similar to those expressed in the questionnaire filled by registers. In order to remove some of the barriers to the assessment of nutritional status, they mentioned the creation of an online platform with all of the equations involved in nutritional assessment, where data can be inserted and automatically calculate the measurements; one can create an educational program for healthcare professionals; and one can gather a team of high skilled professionals that visits all CP centers in the country to evaluate and register all children. It was also mentioned that involving the community surrounding children with CP would be helpful. The goal was to promote an active contribution to the evaluation and registration of their anthropometric parameters. All difficulties experienced by registers and healthcare professionals, as well as the suggested solutions to improve the nutritional assessment are summarized in Table 1.", "During the meetings with caregivers, the main problems referred were the difficulty in meal’s preparation with adapted textures that meet nutritional needs; difficulty to ensure correct hydration status; fear of meals where fish is the main protein source due to presence of fish bones; and how to position the child during mealtime. These answers were recorded while an informal conversation was happening. 
Some strategies were discussed, including the instruction of the canteen staff in how to prepare different textured and nutritional meals and the creation of a manual with this information summarized; creation of a manual with information on ways to aromatize water and tips to encourage the consumption of liquids (e.g., alarms during the day or mark the bottles to control the intake during the day); and ways to deal with common gastrointestinal problems such as constipation, dysphagia and gastroesophageal reflux. In addition, it was mentioned that there was little financial support for the prescription of artificial nutrition for families with children with CP.", "The toolkit for healthcare professionals and registers focused on methods to assess weight and height in children with CP since these are the only indicators currently asked in the surveillance program forms regarding the evaluation of nutritional status.\n3.3.1. Weight Assessment This can be performed directly on a scale or estimated by indirect measurement. A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).\nThis can be performed directly on a scale or estimated by indirect measurement. A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).\n3.3.2. Height Assessment As with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. 
The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).\nAs with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. The summary of these methods is described in Figure 2, and the step-by-step methodologies will be addressed ahead (Supplementary Material).\nAlternative methods may be needed to evaluate weight and height. These methods include the measurement of segmental lengths. The results are then included in equations and give us an estimation of these anthropometric parameters. In Table 2, these equations are summarized. The step-by-step description to assist the healthcare professional to perform each measurement according to the children characteristics and allow the calculation of the child’s weight and the height is presented in detail in the toolkit (Supplementary Material). The second toolkit dedicated to help caregivers included practical ways of dealing with the need of adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).", "This can be performed directly on a scale or estimated by indirect measurement. A summary of these methods is described in Figure 1 and step by step procedures will be addressed ahead (Supplementary Material).", "As with weight, this can be performed directly with a stadiometer or estimated by indirect measurement. 
", "The work fields of the healthcare professionals included in this study were wide, for both the registers and the other healthcare professionals who work with these children. These different healthcare settings are beneficial insofar as they allow more opportunities to evaluate these children, so that the anthropometric information is always up to date [35].\nSeveral difficulties were mentioned in assessing nutritional status in children with CP, including lack of time, lack of collaboration with other colleagues, the child's motor impairment, and lack of equipment. Professionals appear to have little autonomy and limited knowledge to carry out this evaluation within the short time of a routine appointment. Children's motor impairment made it difficult to carry out the nutritional assessment using conventional methods [32]. Therefore, it is important that healthcare professionals are trained to apply alternative methods, such as measuring segmental lengths and using the results in specific equations. 
The lack of specialized equipment referred to by the participants can be overcome with the use of an anthropometric tape (all of the professionals involved mentioned having one).\nTo our knowledge, this is the first time that the reasons behind the high underreporting of anthropometric data by registers in the Surveillance Program of Children with Cerebral Palsy have been addressed. High omission rates of weight and height information have been reported: in Portugal, the omission rates are 53% for weight and 62.3% for height [29]. In a study from Nepal, the nutritional status information in national reports was also very limited, and the authors note that this gap in evidence remains a major obstacle to planning targeted nutritional interventions for children with CP [28]. Thus, it is important to promote ways to improve the availability of this information.\nFrom the results of the questionnaire applied to registers, it was possible to understand that educational tools would be helpful. This includes the development of a manual that can serve as a theoretical basis for informative brochures and other educational materials, such as online presentations on the topic. In the meetings, the healthcare professionals suggested that the community could be more involved, since many people live alongside these children and could weigh and measure them routinely. The creation of a skilled team to carry out this evaluation in each country or region could be a solution, but it would be more challenging to implement in a short amount of time and would have no financial support. The development of an online platform that includes all of the equations used in nutritional assessment was also suggested. This tool would speed up the interpretation of the results measured during each appointment and could also include the step-by-step procedures addressed in the manual. 
The challenges shared by these professionals (registers of the surveillance program and the other healthcare professionals) are similar, and the different solutions addressed can be applicable to both groups. The main goal is to have weight and height information available, in order to keep epidemiological reports up to date and to implement adequate interventions that improve these children's lives. For the present work, the toolkit was created as a first strategy to reduce underreporting in the surveillance program. Moreover, the meetings with healthcare professionals were also an opportunity to provide education on performing nutritional status evaluation. In the future, the other suggestions can be implemented.\nSome of the challenges addressed by the caregivers are known and have been previously published in the scientific literature [36,37]. Feeding problems are prevalent in children with CP (21–58%), and some of them were discussed during the meetings [36]. Nevertheless, the main problems expressed by caregivers related to very practical issues, such as how to prepare meals and adapt their textures, how to prepare nutritious food, and other issues. These difficulties and suggestions were an added value for our communication professionals in developing tools that help caregivers deal with those challenges (data not shown).\nThe development of a toolkit was important to ensure that the information is aggregated in a practical way, so that health professionals have quick access to it and support for decision-making in the assessment of anthropometric measures such as weight and height. Although there is evidence on assessing the nutritional status of children with CP, a step-by-step manual was needed. In addition, the toolkit includes original figures that exemplify the measures described, making them easier to put into practice. 
The design of the toolkit was based on a participatory process with caregivers and with health professionals who work with children with CP in their work settings [38,39]. Understanding their previous knowledge, beliefs, and resources regarding the nutritional assessment and feeding care of children with CP was necessary. According to the theoretical framework of the Behavior Change Process [40], communication for behavior change requires understanding at what stage of knowledge people are regarding a particular problem, or regarding the need to adopt a new behavior. Therefore, it is essential to adapt communication messages accordingly. Studies suggest that individuals are more likely to change behavior when interventions are culturally appropriate, locally relevant, and participatory at all stages [41].\nThe present work focuses on the evaluation of weight and height in children with CP, due to the urgent need to improve the availability of data in the surveillance program reports. The evaluation methods used should be feasible, efficient, inexpensive, and non-stressful for the child and caregivers. Measurements should be performed repeatedly to prevent incorrect diagnoses and to allow continuous monitoring [17]. Moreover, the same method should be applied consistently in order to gain experience and allow a proper evaluation of these children. In this toolkit, only the measurements of weight and height were addressed. In the future, it is important to perform a complete evaluation of these children's nutritional status, to take the child's clinical history, and to gather information on their activity level, including the presence of comorbidities and pharmacotherapy that may affect nutritional status [42]. ESPGHAN recommends assessment of anthropometrics, body composition, and laboratory parameters [43]. 
It is recommended that one looks for warning signs of undernutrition, including pressure ulcers; weight-for-age/sex Z-score below −2; triceps skinfold thickness, arm circumference, or arm muscle area for age/sex below the 10th centile; and impaired growth or weight gain velocity, or difficulty in recovering [17].\nBody mass index (BMI), weight for height, and ideal body weight are frequently used to estimate body fat or nutritional status, but they are poor predictors of body fat percentage and have limited usefulness in guiding nutritional intervention [44]. Skinfolds reflect body mass reserves and could start to be routinely performed in children with CP, even if in some cases this is a challenging task [45]. This type of assessment should be performed by an experienced healthcare professional so that measurements are accurate. Triceps and subscapular skinfolds can be measured using the common methodology, taking into account that the results must be interpreted considering a more localized distribution of fat in the abdominal area [7,46]. The results can then be applied in the specific equations to predict body fat mass percentage shown in Table 3. The Slaughter equation, developed for a healthy population, is not suitable for children with neuromotor impairment. Instead, the Gurka equation, which includes correction factors such as the GMFCS classification, correlates best with the gold standard method [47]. Anthropometric measurements should be performed on the least affected side of the body [47,48,49]. Bioelectrical impedance analysis (BIA) allows for assessment of body composition, being easy to perform, non-invasive, and involving no discomfort for the child. When available, it is preferable to anthropometric measurements (skinfolds and body perimeters), as it is more suitable for children with CP [42]. 
Even so, its limitations must be considered; hydration status is quite heterogeneous during childhood and depends on age and gender, and the results may also be confounded by the frequent dehydration of children with CP [20,50].\nCalculations of mid-upper-arm muscle and fat areas are based on measurements of the upper arm circumference and triceps skinfold [51], using the equation in Table 4.\nThe Lohman classification can be used to classify the body fat mass percentage results. It was created with a large sample of healthy children and considers low body fat mass (≤10% for boys and ≤15% for girls), adequate fat mass (10–25% for boys and 16–30% for girls), and excess fat mass (>25% for boys and >30% for girls). Values >30% of fat mass in boys and >35% in girls indicate a likely diagnosis of obesity, whereas a fat mass percentage below 7% in boys and below 15% in girls suggests very low weight [52]. Frisancho's tables also provide centiles to classify triceps skinfold thickness. After calculation of the mid-upper-arm muscle circumference and mid-upper-arm muscle area, the values can also be interpreted according to Frisancho's centiles, even though these were not validated for children with CP; therefore, they should be evaluated with caution. In children with CP, muscle mass values may be underestimated [51]. This information could be included in a future expanded version of the toolkit to guide healthcare professionals in nutritional status assessment, as suggested by ESPGHAN. The clinical information of each child would thus be more complete and could, perhaps, become part of the surveillance program form.", "The amount of time needed to provide all of the care required by children with CP limits the time available to carry out anthropometric measurements, especially if the procedures are not described in a perceivable way. The great variety of methodologies used to assess nutritional status in these children makes the choice challenging. 
Healthcare professionals and registers should therefore be instructed on how to carry out the procedures; otherwise, the underreporting of weight and height information will keep rising. The toolkit was designed to overcome some of the difficulties expressed by healthcare professionals, some of whom are registers in the national surveillance program. Having the information in a step-by-step structure, with original illustrations to exemplify the procedures, will hopefully help them in the evaluation. In the future, other strategies can be implemented in accordance with their suggestions. Caregivers also experience challenges when dealing with the feeding problems of these children. A toolkit was developed to help them improve the nutritional assessment of children with CP, identify cases of malnutrition, and enable faster and more personalized nutritional intervention (data not shown). This kind of support will need to be continually updated to remain in line with the latest recommendations." ]
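The Lohman cut-offs quoted in the Results above, and the mid-upper-arm muscle/fat area calculation, can be sketched as small helpers. The arm-area formulas used here are the standard Frisancho equations, assumed because the article's Table 4 is not reproduced in this excerpt; the function names and the "M"/"F" encoding are illustrative:

```python
import math

# Lohman cut-offs for % body fat mass, as quoted in the text:
# low <=10% (boys) / <=15% (girls); adequate 10-25% / 16-30%;
# excess >25% / >30%; likely obesity >30% / >35%.
def classify_fat_mass(percent_fat: float, sex: str) -> str:
    low, adequate, obese = (10.0, 25.0, 30.0) if sex == "M" else (15.0, 30.0, 35.0)
    if percent_fat <= low:
        return "low"
    if percent_fat <= adequate:
        return "adequate"
    if percent_fat <= obese:
        return "excess"
    return "likely obesity"

# Standard Frisancho upper-arm areas (cm^2) from mid-upper-arm circumference
# (MUAC) and triceps skinfold (TSF), both in cm -- assumed here, since the
# article's own equation (Table 4) is not reproduced:
#   total  = MUAC^2 / (4*pi)
#   muscle = (MUAC - pi*TSF)^2 / (4*pi)
#   fat    = total - muscle
def arm_areas(muac_cm: float, tsf_cm: float) -> tuple:
    total = muac_cm ** 2 / (4 * math.pi)
    muscle = (muac_cm - math.pi * tsf_cm) ** 2 / (4 * math.pi)
    return muscle, total - muscle
```

As the text notes, Frisancho's centiles were not validated for children with CP, so any interpretation of these values should be made with caution.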
[ "intro", null, null, null, null, "results", null, null, null, null, null, "discussion", "conclusions" ]
[ "cerebral palsy", "anthropometric measures", "underreported data", "surveillance program", "child", "neurology" ]
1. Introduction: Cerebral palsy (CP) is associated with motor problems accompanied by disorders in cognition, communication and behavior, epilepsy episodes, and musculoskeletal problems. It is the most common motor deficiency in childhood and persists into adulthood [1]. It is a heterogeneous condition that can include spastic (85%), dyskinetic (7%, which includes dystonia and choreoathetosis), and ataxic (4%) forms. CP motor severity can be established using the Gross Motor Function Classification System (GMFCS), which indicates a child's level of gross motor function and mobility [2], ranging from I (mild symptoms) to V (most severe). Children with CP are usually shorter and lighter than their peers, especially when functional limitations are more severe [3,4]. The amount of body fat and muscle mass also tends to be lower, due to lack of mobility [5] and to the duration and severity of the neurological disorder, which reduces function in daily activities [6]. There is a high risk of malnutrition (29–48%), which increases with higher motor impairment [7,8,9]. The etiology of malnutrition is multifactorial and includes both nutritional and non-nutritional factors [10]. Dietary intake is conditioned by oromotor impairment [11] and by the presence of gastroesophageal reflux and constipation [10]. Moreover, altered energy requirements frequently occur and decrease as ambulatory status declines and more limbs are involved [12]. Malnutrition impacts growth and quality of life [13], decreases immune function [4,14,15], increases use of health care [16], limits participation in social activities, and worsens survival prognosis [4,17]. In recent decades, it has become clear that a correct diagnosis, leading to actions aimed at attenuating eating problems, decreases hospitalization rates and improves nutritional status [18]. 
This assessment allows for adequate multidisciplinary interventions to restore linear growth, stabilize weight, reduce irritability and spasticity, improve circulation and healing, and increase social participation and quality of life [10]. There is no single method to assess nutritional status, but the analysis of a set of parameters can provide important nutritional information. In children, anthropometric parameters such as weight, height, and body mass index are the most frequently used, since they are simple to collect and noninvasive [19]. For children with CP, anthropometric measurements can be difficult to assess, mainly if the children have contractures, spasticity, scoliosis, or positioning problems that hinder the use of conventional weighing and other measuring methods [20]. In the last few decades, several indirect methods to determine anthropometrics in children with CP have been developed, albeit with no consensus on the ideal one(s) despite the extensive discussion on the topic [20,21,22]. A variety of growth charts can be used to assess and monitor growth [23]. Specific charts for children with CP are available, such as the Brooks charts [24]. However, they reflect how children with CP have grown, rather than how they can be expected to optimally grow [25]. National and other surveillance programs of children with CP provide clinical information [26,27] as well as nutritional status data. Countries that perform this analysis report elevated rates of undernutrition, but the most concerning finding is the high underreporting of anthropometric data [28,29]. To our knowledge, there is no record of the reasons why registers from the program do not always assess these evaluation parameters. The hypothesis was raised that registers experience difficulties when assessing the nutritional status of these children. 
As part of a project funded by the Polytechnic Institute of Lisbon, IPL/2020/PIN-PC_ESTESL (Project for Nutritional Intervention in Cerebral Palsy), we developed a strategy to identify the main reasons for the high underreporting and developed actions to minimize this situation. This project included (1) assessment of registers' motives for underreporting; (2) assessment of the difficulties experienced by healthcare professionals working with these children; (3) assessment of the feeding difficulties experienced by caregivers; and (4) a review of the literature on anthropometric procedures developed for children with CP and the development of pedagogical material adapted to the needs previously assessed. 2. Materials and Methods: 2.1. Challenges While Registering Anthropometric Parameters: Questionnaire for Registers of the Portuguese National Surveillance Program of Children with Cerebral Palsy A questionnaire was developed and applied to a group of registers from the Portuguese national surveillance program of children with CP to assess the major difficulties in fulfilling the nutritional information of the national surveillance report. A cross-sectional survey was conducted using an original questionnaire developed by the research team and hosted on an online platform (Google Forms). The questionnaire was first applied to a group of healthcare professionals as a pilot test to ensure functionality and clarity of the questions. In August 2020, an invitation email was sent to 20 voluntary registers who follow children with CP within the scope of the national surveillance program. The survey was open for five weeks. The anonymous questionnaire focused on the following: importance given to the evaluation; existence of collaboration with other professionals to assess weight and height; and difficulties experienced when assessing (and strategies that could facilitate those measurements). Data were aggregated, and descriptive analysis of categorical data was performed. 
2.2. Challenges While Assessing Nutritional Status and Dealing with Feeding Problems: Meetings with Healthcare Professionals and Caregivers A set of meetings was organized with healthcare professionals who work with children with CP—two in a hospital context and one in a CP center in a geographical area of Portugal (Alentejo). Qualitative data were recorded by one of the investigators, while two other investigators conducted the meeting. The meeting was structured in three phases. First, a presentation was made based on the nutritional data from the last national report. Second, a collaborative brainstorming session was conducted in which participants were able to freely share their beliefs and practical experiences; this allowed the research team to identify possible explanations for the high underreporting results. Third, a workshop was held on conventional and alternative methodologies of anthropometric measurement for nutritional assessment, with moments of practical assessment in which conventional and alternative anthropometric measurements were applied to children with CP. Regarding the measured children, the caregiver was informed before the meeting about the anthropometric procedures and gave informed consent for the evaluation to be performed with those healthcare professionals. Caregivers were also present during the assessment. All of the procedures were explained, and the data were then included in the clinical record of the children. The ethical principles of the Helsinki Declaration were followed. Another set of meetings was held with caregivers. Qualitative data were recorded by one of the investigators while two other investigators conducted the meeting. The meeting was structured in two phases. First, the project was explained, along with the importance of adequate food for the health of children with CP. In the second phase, an inquiry was carried out into the main difficulties associated with feeding these children. A participatory methodology was used, in which participants could freely share their experiences, beliefs, and expectations. Finally, the main shared difficulties were presented and, together with the caregivers, practical interventions to solve these problems were systematized. 2.3. Development of the Toolkit for Healthcare Professionals and Registers A narrative review of the literature was conducted following the SANRA criteria [30]. 
The search was performed on the PubMed platform, covering 1980 to 2021, with the following search terms: 'cerebral palsy'; 'children with disabilities'; 'neurological disorders'; 'nutritional status'; 'body composition'; 'body weight'; 'weights and measures'; 'child development'; 'physical examination'; 'growth disorders'; 'anthropometry/methods'; 'assessment, nutritional'; 'feeding problems'; 'constipation'; and 'gastroesophageal reflux'. For the initial exploratory research on the topic, only systematic reviews and meta-analyses were considered. Articles in English, Portuguese, and Spanish were included. The purpose was to investigate conventional and alternative methods of evaluating anthropometric parameters and nutritional status in children with CP, as well as common feeding problems. The articles cited therein were investigated in more detail, especially when they described procedures for measuring these children and how to interpret the results. The assessment methods to estimate weight and height recommended by ESPGHAN were chosen over the others, even though the latter were also considered as alternatives. After analysis of all the data collected, two investigators with a communication background elaborated a practical toolkit to help healthcare professionals in nutritional assessment (Supplementary Material) and another to assist caregivers with feeding problems (data not shown). To illustrate the anthropometric procedures, original figures were created. 
3. Results: 3.1. Difficulties Experienced by Registers of the Surveillance Program and Healthcare Professionals When Assessing Nutritional Status, and Suggested Solutions Of the 20 registers of the Portuguese national surveillance program of children with CP invited to fill in the questionnaire, 65% (n = 13) submitted their responses. Most were medical doctors (n = 8), followed by physical therapists (n = 2), dietitians (n = 2), and nurses (n = 1). All of them considered nutritional assessment important, with a focus on weight assessment (WA) and height assessment (HA). The difficulties reported by these professionals were lack of time (15% for WA and HA), lack of collaboration with other health professionals (7% for WA and 23% for HA), the child's motor impairment (30% for both), and lack of equipment such as a scale or stadiometer (38% for both). All of those inquired mentioned having a measuring tape in their work setting. Alternative methods, such as segmental length measures, were known by 69%. 
In regard to possible solutions to facilitate this assessment, 15% of the registers found brochures/flyers a useful way to compile information about the different methodologies, 84% suggested creating a guide/manual with the procedures described step by step, and 46% suggested a mobile app. Professional training on alternative methods of assessing nutritional status in children with high motor impairment was considered helpful by 92%. After each meeting with the healthcare professionals, the perspectives of the 35 participants were recorded from an informal conversation. The group included general practitioners, pediatricians, speech therapists, physiotherapists, occupational therapists, psychologists, nurses, dietitians, and caregivers. The contributions from healthcare professionals were similar to those expressed in the questionnaire filled in by the registers. To remove some of the barriers to the assessment of nutritional status, they suggested the creation of an online platform containing all of the equations involved in nutritional assessment, into which data can be entered to calculate the estimates automatically; the creation of an educational program for healthcare professionals; and the assembly of a team of highly skilled professionals that would visit all CP centers in the country to evaluate and register all children. Involving the community surrounding children with CP was also mentioned as helpful, with the goal of promoting an active contribution to the evaluation and registration of their anthropometric parameters. All difficulties experienced by registers and healthcare professionals, as well as the suggested solutions to improve nutritional assessment, are summarized in Table 1. 3.2. Difficulties Experienced by Caregivers When Dealing with Children's Feeding Problems and Discussed Strategies to Minimize These Problems: During the meetings with caregivers, the main problems reported were the difficulty of preparing meals with adapted textures that meet nutritional needs; the difficulty of ensuring correct hydration status; fear of meals in which fish is the main protein source, due to the presence of fish bones; and how to position the child during mealtime. These answers were recorded during an informal conversation. Several strategies were discussed, including instructing the canteen staff in how to prepare meals with different textures and adequate nutrition, and creating a manual summarizing this information; creating a manual with ways to flavor water and tips to encourage liquid intake (e.g., alarms during the day or marking bottles to track intake); and ways to deal with common gastrointestinal problems such as constipation, dysphagia, and gastroesophageal reflux.
In addition, it was mentioned that there is little financial support for the prescription of artificial nutrition for families of children with CP. 3.3. Toolkit for Healthcare Professionals and Registers: The toolkit for healthcare professionals and registers focused on methods to assess weight and height in children with CP, since these are the only indicators currently requested in the surveillance program forms regarding the evaluation of nutritional status. 3.3.1. Weight Assessment: Weight can be measured directly on a scale or estimated by indirect measurement. A summary of these methods is given in Figure 1, and the step-by-step procedures are addressed in the Supplementary Material. 3.3.2. Height Assessment: As with weight, height can be measured directly with a stadiometer or estimated by indirect measurement. A summary of these methods is given in Figure 2, and the step-by-step methodologies are addressed in the Supplementary Material. Alternative methods may be needed to evaluate weight and height; these include the measurement of segmental lengths, the results of which are entered into equations that estimate the corresponding anthropometric parameter. These equations are summarized in Table 2. A step-by-step description that helps the healthcare professional perform each measurement according to the child's characteristics, and thereby calculate the child's weight and height, is presented in detail in the toolkit (Supplementary Material). The second toolkit, dedicated to helping caregivers, included practical ways of dealing with the need to adapt meal textures, gastroesophageal reflux, constipation, nutritional deficiencies, and dehydration (data not shown).
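The segmental-length approach lends itself to a short code illustration. The coefficients below are the widely cited Stevenson (1995) segmental equations for children with CP; whether these are the exact equations compiled in Table 2 is an assumption, so treat this as a sketch of the approach rather than the toolkit itself.

```python
# Illustrative sketch: estimating stature (cm) in children with CP from
# segmental lengths. The coefficients are the commonly cited Stevenson
# (1995) equations; the article's Table 2 may list different ones, so
# this is an example of the technique, not the published toolkit.

def height_from_knee_height(kh_cm: float) -> float:
    """Estimated stature (cm) from knee height (cm)."""
    return 2.69 * kh_cm + 24.2

def height_from_upper_arm_length(ual_cm: float) -> float:
    """Estimated stature (cm) from upper-arm length (cm)."""
    return 4.35 * ual_cm + 21.8

def height_from_tibial_length(tl_cm: float) -> float:
    """Estimated stature (cm) from tibial length (cm)."""
    return 3.26 * tl_cm + 30.8

if __name__ == "__main__":
    # A knee height of 40 cm gives an estimated stature of ~131.8 cm.
    print(round(height_from_knee_height(40.0), 1))
```

In practice, the same segmental measure should be used consistently for a given child, so that serial estimates remain comparable over time.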
4. Discussion: The work fields of the healthcare professionals included in this study were wide, both for the registers and for the other healthcare professionals who work with these children. These different healthcare settings are beneficial insofar as they allow more opportunities to evaluate these children, so that the anthropometric information is always up to date [35].
Several difficulties in assessing nutritional status in children with CP were mentioned, including lack of time, lack of collaboration with colleagues, the child's motor impairment, and lack of equipment. There seems to be little autonomy and knowledge regarding this evaluation, which would allow it to be carried out quickly during a routine appointment. Children's motor impairment makes it difficult to perform nutritional assessment using conventional methods [32]. Therefore, it is important that healthcare professionals are trained to apply alternative methods, such as measuring segmental lengths and entering the results into specific equations. The lack of specialized equipment mentioned by the participants can be overcome with the use of an anthropometric tape (all of the professionals involved mentioned that they have one). To our knowledge, this is the first time that the reasons behind the high underreporting of anthropometric data by registers in the Surveillance Program of Children with Cerebral Palsy have been addressed. High omission rates of weight and height information have been reported; in Portugal, the omission rates are 53% for weight and 62.3% for height [29]. In a study from Nepal, the nutritional status information in national reports was also very limited, and the authors note that this gap in evidence remains a major obstacle to planning targeted nutritional interventions for children with CP [28]. Thus, it is important to promote ways to improve the availability of this information. From the results of the questionnaire applied to registers, it was possible to understand that educational tools would be helpful. These include the development of a manual, which can serve as a theoretical basis for creating informative brochures and other educational materials, such as online presentations on the topic.
In the meetings, the healthcare professionals suggested that the community could be more involved, since many people live alongside these children and could weigh and measure them routinely. The creation of a skilled team to carry out this evaluation in each region of the country could be a solution, but it would be more challenging to implement in a short amount of time and would have no financial support. The development of an online platform that includes all of the equations used in nutritional assessment was also mentioned. This tool would speed up the interpretation of the measurements taken during each appointment and could also include the step-by-step procedures addressed in the manual. The challenges shared by these professionals (registers of the surveillance program and the other healthcare professionals) are similar, and the different solutions discussed are applicable to both groups. The main goal is to have weight and height information available, in order to keep epidemiological reports up to date and to carry out adequate interventions that improve these children's lives. For the present work, the toolkit was created as a first strategy to reduce underreporting in the surveillance program. Moreover, the meetings with healthcare professionals were also an opportunity for education on performing nutritional status evaluation. In the future, the other suggestions can be implemented. Some of the challenges raised by the caregivers are known and have been previously described in the scientific literature [36,37]. Feeding problems are prevalent in children with CP (21–58%), and some of them were discussed during the meetings [36]. Nevertheless, the main problems expressed by caregivers related to very practical issues, such as how to prepare meals, how to adapt their textures, and how to prepare nutritious food, among others.
The difficulties and suggestions were an added value for our communication professionals in developing tools that help caregivers deal with those challenges (data not shown). The development of a toolkit was important to ensure that the information is aggregated in a practical way, so that health professionals have quick access to it and are supported in decision-making when assessing anthropometric measures such as weight and height. Although there is evidence on assessing the nutritional status of children with CP, a step-by-step manual was needed. In addition, the toolkit includes original figures that exemplify the measures described, making them easier to put into practice. The design of the toolkit was based on a participatory process with health professionals who work with children with CP in their work settings and with caregivers [38,39]. Understanding their previous knowledge, beliefs, and resources regarding the nutritional assessment and feeding care of children with CP was necessary. In the theoretical framework of the Behavior Change Process [40], communication for behavior change requires understanding what stage people have reached in their knowledge of a particular problem, or of the need to adopt a new behavior. Therefore, it is essential to adapt communication messages to these constraints. Studies suggest that individuals are more likely to change behavior when interventions are culturally appropriate, locally relevant, and participatory at all stages [41]. The present work focuses on the evaluation of weight and height in children with CP due to the urgent need to improve the availability of these data in the surveillance program reports. The evaluation methods used should be feasible, efficient, inexpensive, and non-stressful for the child and caregivers. Measurements should be performed repeatedly to prevent incorrect diagnoses and to allow continuous monitoring [17].
Moreover, the same method should be applied each time, in order to gain experience and allow a proper evaluation of these children. In this toolkit, only the measurement of weight and height was addressed. In the future, it is important to perform a complete evaluation of these children's nutritional status, to take the children's clinical history, and to gather information on their activity level, including the presence of comorbidities and pharmacotherapy that may affect nutritional status [42]. ESPGHAN recommends the assessment of anthropometrics, body composition, and laboratory parameters [43]. It is recommended to look for warning signs of undernutrition, including pressure ulcers, weight-for-age/sex Z-score below −2, triceps skinfold thickness and arm circumference or arm muscle area for age/sex below the 10th centile, and impairment of growth or weight-gain velocity or difficulty in recovering [17]. Body mass index (BMI), weight for height, and ideal body weight are frequently used to estimate body fat or nutritional status, but they are poor predictors of body fat percentage and have limited usefulness in guiding nutritional intervention [44]. Skinfolds reflect body mass reserves and could start to be routinely measured in children with CP, even if in some cases this is a challenging task [45]. This type of assessment should be performed by an experienced healthcare professional so that the measurements are accurate. Triceps and subscapular skinfolds can be measured using the common methodology, but the results must be interpreted bearing in mind a more localized distribution of fat in the abdominal area [7,46]. The results can then be applied in the specific equations shown in Table 3 to predict body fat mass percentage. The Slaughter equation, developed in a healthy population, is not suitable for children with neuromotor impairment.
Instead, the Gurka equation, which includes correction factors such as the GMFCS classification, correlates best with the gold-standard method [47]. Anthropometric measurements should be performed on the least affected side of the body [47,48,49]. Bioelectrical impedance analysis (BIA) allows the assessment of body composition and is easy to perform, non-invasive, and free of discomfort for the child. When available, it is preferable to anthropometric measurements (skinfolds and body perimeters), as it is more suitable for children with CP [42]. Even so, its limitations must be considered: hydration status is quite heterogeneous during childhood and depends on age and gender, and the results may also be distorted by the frequent dehydration of children with CP [20,50]. Calculations of mid-upper-arm muscle and fat areas are based on measurements of the upper-arm circumference and triceps skinfold [51], using the equation in Table 4. The Lohman classification can be used to classify the body fat mass percentage. It was created with a large sample of healthy children and considers body fat mass low (≤10% for boys and ≤15% for girls), adequate (10–25% for boys and 16–30% for girls), or excessive (>25% for boys and >30% for girls). Values above 30% of fat mass in boys and above 35% in girls indicate a likely diagnosis of obesity, whereas a fat mass percentage below 7% in boys and below 15% in girls suggests very low weight [52]. Frisancho's tables also provide centiles to classify triceps skinfold thickness. After calculation of the mid-upper-arm muscle circumference and mid-upper-arm muscle area, the values can also be interpreted according to the Frisancho centiles, although these were not validated for children with CP and should therefore be interpreted with caution. In children with CP, muscle mass values may be underestimated [51].
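Several of the quantitative rules discussed in this section (the warning signs of undernutrition, the arm-area calculation, and the Lohman fat-mass bands) can be sketched as simple helper functions. In the sketch below, the arm-area formula is the standard Frisancho-style formulation, which may differ from the article's Table 4; the fat-mass cutoffs follow the values quoted in the text, with the slight overlaps between the published bands resolved one way so the example runs; all function and parameter names are illustrative assumptions.

```python
import math

# Sketch of nutritional-screening helpers based on values quoted in the
# text. Function and parameter names are illustrative, not the toolkit's.

def arm_muscle_area(muac_cm: float, tsf_cm: float) -> float:
    """Upper-arm muscle area (cm^2) from mid-upper-arm circumference and
    triceps skinfold (both in cm), using the standard Frisancho-style
    formula (assumed; the article's Table 4 is not reproduced here)."""
    return (muac_cm - math.pi * tsf_cm) ** 2 / (4 * math.pi)

def classify_fat_mass(percent_fm: float, sex: str) -> str:
    """Lohman-style classification of body fat mass percentage, using the
    cutoffs quoted in the text; overlapping band edges are resolved
    pragmatically for this sketch."""
    if sex == "boy":
        if percent_fm < 7:
            return "very low weight likely"
        if percent_fm <= 10:
            return "low"
        if percent_fm <= 25:
            return "adequate"
        if percent_fm <= 30:
            return "excess"
        return "obesity likely"
    # girls
    if percent_fm <= 15:
        return "low"
    if percent_fm <= 30:
        return "adequate"
    if percent_fm <= 35:
        return "excess"
    return "obesity likely"

def undernutrition_warning_signs(pressure_ulcers: bool,
                                 weight_for_age_z: float,
                                 tsf_centile: float,
                                 arm_centile: float,
                                 growth_velocity_impaired: bool) -> list:
    """Flags the ESPGHAN-style warning signs listed in the text; centiles
    and Z-scores are assumed to be precomputed from reference charts."""
    flags = []
    if pressure_ulcers:
        flags.append("pressure ulcers")
    if weight_for_age_z < -2:
        flags.append("weight-for-age Z-score below -2")
    if tsf_centile < 10:
        flags.append("triceps skinfold below 10th centile")
    if arm_centile < 10:
        flags.append("arm circumference/muscle area below 10th centile")
    if growth_velocity_impaired:
        flags.append("impaired growth or weight-gain velocity")
    return flags
```

As the text notes, any such automated interpretation should be applied with caution in CP, since the reference values were derived from healthy children.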
This information could be included in a future expanded version of the toolkit to guide healthcare professionals in nutritional status assessment, as suggested by ESPGHAN. The clinical information on each child would then be more complete and could, perhaps, become part of the surveillance program form. 5. Conclusions: The amount of time needed to provide all of the care required by children with CP limits the time available to carry out anthropometric measurements, especially if the procedures are not described in an accessible way. The great variety of methodologies used to assess nutritional status in these children makes the choice challenging. Healthcare professionals and registers should therefore be instructed on how to carry out the procedures; otherwise, the underreporting of weight and height information will continue to rise. The toolkit was designed to overcome some of the difficulties expressed by healthcare professionals, some of whom are registers in the national surveillance program. Having the information in a step-by-step structure, with original illustrations exemplifying the procedures, will hopefully help them in the evaluation. In the future, other strategies can be implemented in line with their suggestions. Caregivers also experience challenges when dealing with the feeding problems of these children. A second toolkit was developed to help them improve the nutritional care of children with CP, identify cases of malnutrition, and enable faster and more personalized nutritional intervention (data not shown). This kind of support will need to be continually updated to remain in line with the latest recommendations.
Background: Nutritional status assessment (NSA) can be challenging in children with cerebral palsy (CP). There are high omission rates of weight and height information in national surveillance reports. Alternative methods used to assess nutritional status may be unknown to the healthcare professionals (HCP) who report on these children. Caregivers experience challenges when dealing with the feeding problems (FP) common in CP. Our aim was to assess the difficulties in NSA that are causing this underreporting and to create solutions for registers and caregivers. Methods: An online questionnaire was created for registers. Three meetings with HCP and caregivers were held to discuss problems and solutions regarding NSA and intervention. Results: HCP mentioned difficulty in NSA due to a lack of time, collaboration with others, and equipment, and to the children's motor impairment. Caregivers experienced difficulty in preparing nutritious meals with adapted textures. The creation of educational tools and other strategies was suggested. A toolkit was created for HCP, describing the weight and height assessment methods, and another for caregivers, addressing common FP. Conclusions: There are several difficulties experienced by HCP that might be overcome with educational tools, such as a toolkit. This will facilitate nutritional assessment and intervention and hopefully reduce underreporting.
1. Introduction: Cerebral palsy (CP) is associated with motor problems accompanied by disorders of cognition, communication and behavior, epilepsy episodes, and musculoskeletal problems. It is the most common motor deficiency in childhood and persists into adulthood [1]. It is a heterogeneous condition that can include spastic (85%), dyskinetic (7%), which includes dystonia and choreoathetosis, and ataxic (4%) forms. CP motor severity can be established using the Gross Motor Function Classification System (GMFCS), which indicates a child's level of gross motor function and mobility [2], ranging from I (mildest symptoms) to V (most severe). Children with CP are usually shorter and lighter than their peers, especially when functional limitations are more severe [3,4]. The amounts of body fat and muscle mass also tend to be lower, due to lack of mobility [5] and to the duration and severity of the neurological disorder, which reduces function in daily activities [6]. There is a high risk of malnutrition (29–48%), which increases with greater motor impairment [7,8,9]. The etiology of malnutrition is multifactorial and includes both nutritional and non-nutritional factors [10]. Dietary intake is conditioned by oromotor impairment [11] and the presence of gastroesophageal reflux and constipation [10]. Moreover, altered energy requirements frequently occur, decreasing as ambulatory status declines and more limbs are involved [12]. Malnutrition impacts growth and quality of life [13], decreases immune function [4,14,15], increases use of health care [16], limits participation in social activities, and worsens survival prognosis [4,17]. In recent decades, it has become clear that a correct diagnosis, leading to actions aimed at attenuating eating problems, decreases hospitalization rates and improves nutritional status [18].
This assessment will allow for adequate multidisciplinary interventions to restore linear growth, stabilize weight, reduce irritability and spasticity, improve circulation and healing, and increase social participation and quality of life [10]. There is no single method to assess nutritional status, but the analysis of a set of parameters can provide important nutritional information. In children, anthropometric parameters such as weight, height, and body mass index are the most frequently used, since they are simple to collect and noninvasive [19]. For children with CP, anthropometric measurements can be difficult to obtain, mainly if they have contractures, spasticity, scoliosis, or positioning problems that hinder the use of conventional weighing and other measuring methods [20]. In the last few decades, several indirect methods to determine anthropometrics in children with CP have been developed, albeit with no consensus on the ideal one despite extensive discussion of the topic [20,21,22]. A variety of growth charts can be used to assess and monitor growth [23]. Specific charts for children with CP are available, such as the Brooks charts [24]. However, they reflect how children with CP have grown, rather than how they can be expected to grow optimally [25]. National and other surveillance programs for children with CP provide clinical information [26,27] as well as nutritional status data. Countries that perform this analysis report elevated rates of undernutrition, but the most concerning finding is the high underreporting of anthropometric data [28,29]. To our knowledge, there is no record of the reasons why registers from the program do not always report these evaluation parameters. The hypothesis was raised that registers experience difficulties when assessing the nutritional status of these children.
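As a minimal illustration of the anthropometric indices mentioned above, body mass index is simply weight divided by height squared. The helper below is a hypothetical sketch for illustration only, not part of any toolkit described here.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

# Example: a child weighing 20 kg and measuring 1.15 m
print(round(bmi(20.0, 1.15), 1))  # 15.1 kg/m^2
```

In practice, the raw value would then be compared against age- and sex-specific reference charts, as discussed above.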
As part of a project funded by the Polytechnic Institute of Lisbon, IPL/2020/PIN-PC_ESTESL (Project for Nutritional Intervention in Cerebral Palsy), we developed a strategy to identify the main reasons for the high underreporting and designed actions to minimize this situation. This project included (1) assessment of registers’ motives for underreporting; (2) assessment of the difficulties experienced by healthcare professionals working with these children; (3) assessment of the feeding difficulties experienced by caregivers; and (4) a review of the literature on anthropometric procedures developed for children with CP and the development of pedagogical material adapted to the previously assessed needs.
9,049
238
[ 1636, 171, 342, 277, 470, 182, 475, 38, 174 ]
13
[ "children", "nutritional", "professionals", "assessment", "cp", "weight", "healthcare", "step", "methods", "children cp" ]
[ "terms cerebral palsy", "children neuromotor impairment", "cerebral palsy questionnaire", "motor deficiency childhood", "child motor impairment" ]
null
[CONTENT] cerebral palsy | anthropometric measures | underreported data | surveillance program | child | neurology [SUMMARY]
null
[CONTENT] cerebral palsy | anthropometric measures | underreported data | surveillance program | child | neurology [SUMMARY]
[CONTENT] cerebral palsy | anthropometric measures | underreported data | surveillance program | child | neurology [SUMMARY]
[CONTENT] cerebral palsy | anthropometric measures | underreported data | surveillance program | child | neurology [SUMMARY]
[CONTENT] cerebral palsy | anthropometric measures | underreported data | surveillance program | child | neurology [SUMMARY]
[CONTENT] Caregivers | Cerebral Palsy | Child | Delivery of Health Care | Humans | Nutrition Assessment | Nutritional Status [SUMMARY]
null
[CONTENT] Caregivers | Cerebral Palsy | Child | Delivery of Health Care | Humans | Nutrition Assessment | Nutritional Status [SUMMARY]
[CONTENT] Caregivers | Cerebral Palsy | Child | Delivery of Health Care | Humans | Nutrition Assessment | Nutritional Status [SUMMARY]
[CONTENT] Caregivers | Cerebral Palsy | Child | Delivery of Health Care | Humans | Nutrition Assessment | Nutritional Status [SUMMARY]
[CONTENT] Caregivers | Cerebral Palsy | Child | Delivery of Health Care | Humans | Nutrition Assessment | Nutritional Status [SUMMARY]
[CONTENT] terms cerebral palsy | children neuromotor impairment | cerebral palsy questionnaire | motor deficiency childhood | child motor impairment [SUMMARY]
null
[CONTENT] terms cerebral palsy | children neuromotor impairment | cerebral palsy questionnaire | motor deficiency childhood | child motor impairment [SUMMARY]
[CONTENT] terms cerebral palsy | children neuromotor impairment | cerebral palsy questionnaire | motor deficiency childhood | child motor impairment [SUMMARY]
[CONTENT] terms cerebral palsy | children neuromotor impairment | cerebral palsy questionnaire | motor deficiency childhood | child motor impairment [SUMMARY]
[CONTENT] terms cerebral palsy | children neuromotor impairment | cerebral palsy questionnaire | motor deficiency childhood | child motor impairment [SUMMARY]
[CONTENT] children | nutritional | professionals | assessment | cp | weight | healthcare | step | methods | children cp [SUMMARY]
null
[CONTENT] children | nutritional | professionals | assessment | cp | weight | healthcare | step | methods | children cp [SUMMARY]
[CONTENT] children | nutritional | professionals | assessment | cp | weight | healthcare | step | methods | children cp [SUMMARY]
[CONTENT] children | nutritional | professionals | assessment | cp | weight | healthcare | step | methods | children cp [SUMMARY]
[CONTENT] children | nutritional | professionals | assessment | cp | weight | healthcare | step | methods | children cp [SUMMARY]
[CONTENT] motor | function | children | developed | cp | growth | charts | nutritional | children cp | malnutrition [SUMMARY]
null
[CONTENT] step | measurement | methods | assessment | nutritional | professionals | weight | step step | supplementary | supplementary material [SUMMARY]
[CONTENT] carry | children | time | healthcare professionals registers | procedures | professionals registers | nutritional | step | toolkit | help [SUMMARY]
[CONTENT] step | children | nutritional | methods | assessment | measurement | cp | professionals | weight | children cp [SUMMARY]
[CONTENT] step | children | nutritional | methods | assessment | measurement | cp | professionals | weight | children cp [SUMMARY]
[CONTENT] ||| ||| HCP ||| FP | CP ||| NSA [SUMMARY]
null
[CONTENT] HCP | NSA ||| ||| ||| HCP | FP [SUMMARY]
[CONTENT] HCP ||| [SUMMARY]
[CONTENT] ||| ||| HCP ||| FP | CP ||| NSA ||| ||| Three | HCP | NSA ||| HCP | NSA ||| ||| ||| HCP | FP ||| HCP ||| [SUMMARY]
[CONTENT] ||| ||| HCP ||| FP | CP ||| NSA ||| ||| Three | HCP | NSA ||| HCP | NSA ||| ||| ||| HCP | FP ||| HCP ||| [SUMMARY]
Dosing regimens of oral ciprofloxacin for children with severe malnutrition: a population pharmacokinetic study with Monte Carlo simulation.
21831986
Severe malnutrition is frequently complicated by sepsis, leading to high case fatality. Oral ciprofloxacin is a potential alternative to the standard parenteral ampicillin/gentamicin combination, but its pharmacokinetics in malnourished children is unknown.
BACKGROUND
Ciprofloxacin (10 mg/kg, 12 hourly) was administered either 2 h before or up to 2 h after feeds to Kenyan children hospitalized with severe malnutrition. Four plasma ciprofloxacin concentrations were measured over 24 h. Population analysis with NONMEM investigated factors affecting the oral clearance (CL) and the oral volume of distribution (V). Monte Carlo simulations investigated dosage regimens to achieve a target AUC(0-24)/MIC ratio of ≥125.
METHODS
Data comprised 202 ciprofloxacin concentration measurements from 52 children aged 8–102 months. Absorption was generally rapid but variable; Cmax ranged from 0.6 to 4.5 mg/L. Data were fitted by a one-compartment model with first-order absorption and lag. The parameters were CL (L/h) = 42.7 (L/h/70 kg) × [weight (kg)/70]^0.75 × [1 + 0.0368 (Na+ − 136)] × [1 − 0.283 (high risk)] and V (L) = 372 (L/70 kg) × [weight (kg)/70] × [1 + 0.0291 (Na+ − 136)]. Estimates of AUC0–24 ranged from 8 to 61 mg·h/L. The breakpoint for Gram-negative organisms was <0.06 mg/L with doses of 20 mg/kg/day and <0.125 mg/L with doses of 30 or 45 mg/kg/day. The cumulative fraction of response with 30 mg/kg/day was ≥80% for Escherichia coli, Klebsiella pneumoniae and Salmonella species, but <60% for Pseudomonas aeruginosa.
RESULTS
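The final covariate model reported in the results above can be expressed as a small calculator. This is an illustrative sketch, not the authors' code: the weight scaling of V (allometric exponent 1) is assumed from the allometric approach described in the Methods, and the function names are ours.

```python
def cipro_cl_v(weight_kg: float, sodium: float, high_risk: bool):
    """Oral clearance CL (L/h) and oral volume V (L) from the reported
    final population model (sodium in mmol/L; high_risk as in the study)."""
    cl = (42.7 * (weight_kg / 70) ** 0.75
          * (1 + 0.0368 * (sodium - 136))
          * (1 - 0.283 * high_risk))
    v = 372 * (weight_kg / 70) * (1 + 0.0291 * (sodium - 136))
    return cl, v

def auc24(daily_dose_mg: float, cl: float) -> float:
    """Steady-state AUC over 24 h (mg*h/L) = daily dose / oral clearance."""
    return daily_dose_mg / cl

# Example: 10 kg child, Na+ 136 mmol/L, not high risk, 30 mg/kg/day
cl, v = cipro_cl_v(10, 136, False)
print(round(cl, 2), round(v, 1), round(auc24(300, cl), 1))
```

For this example child the computed AUC0–24 (about 30 mg·h/L) falls inside the 8–61 mg·h/L range reported above.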
An oral ciprofloxacin dose of 10 mg/kg three times daily (30 mg/kg/day) may be a suitable alternative antibiotic for the management of sepsis in severely malnourished children. Absorption was unaffected by the simultaneous administration of feeds.
CONCLUSIONS
[ "Administration, Oral", "Anti-Bacterial Agents", "Bacteremia", "Child", "Child, Preschool", "Ciprofloxacin", "Dehydration", "Drug Resistance, Multiple, Bacterial", "Escherichia coli", "Female", "Humans", "Infant", "Klebsiella pneumoniae", "Male", "Malnutrition", "Microbial Sensitivity Tests", "Monte Carlo Method", "Pseudomonas aeruginosa", "Salmonella" ]
3172043
Introduction
Severe malnutrition remains a common cause of admission to hospital in less-developed countries. Many centres, particularly in Africa, report poor outcome despite adherence to recommended treatment guidelines.1–4 The children at the greatest risk of fatal outcome are those with Gram-negative septicaemia, constituting 48%–55% of invasive bacterial pathogens, and those admitted with diarrhoea and/or shock.5,6 Changes in the intestinal mucosal integrity and gut microbial balance occur in severe malnutrition,7 resulting in treatment failure and adverse clinical outcome.8 The higher prevalence of gut barrier dysfunction in children with severe malnutrition may have important effects on the absorption of antimicrobials and their bioavailability, and therefore may limit choices for the delivery of antimicrobial medication. Children with severe and complicated malnutrition routinely receive broad-spectrum parenteral antibiotics.9 In vitro antibiotic susceptibility testing indicates that up to 85% of organisms are fully susceptible to the first-line treatment, parenteral ampicillin and gentamicin, recommended by the WHO for children with severe and complicated malnutrition.4 Pharmacokinetic studies have demonstrated satisfactory plasma concentrations of these commonly used antibiotics.10 However, in vitro resistance has been associated with later deaths, and the current second-line antibiotic (chloramphenicol) was found to offer little advantage over the ampicillin and gentamicin combination.4 Fluoroquinolones are effective against most Gram-negative organisms and have activity against Gram-positive organisms, especially when given in combination.11 The quinolones are used in the treatment of serious bacterial infections in adults; however, their use in children has been restricted due to concerns about potential cartilage damage.12 Nevertheless, quinolones are increasingly prescribed for paediatric patients. 
Ciprofloxacin is licensed in children >1 year of age for pseudomonal infections in cystic fibrosis, for complicated urinary tract infections, and for the treatment and prophylaxis of inhalation anthrax.13 When the benefits of treatment outweigh the risks, ciprofloxacin is also licensed in the UK for children >1 year of age with severe respiratory tract and gastrointestinal system infection.14 In its many years of clinical use, ciprofloxacin has been found effective, even with oral administration, owing to its bioavailability of ∼70%.15 However, few studies have evaluated the population pharmacokinetics of ciprofloxacin in children,16–18 and no studies have been conducted in severe malnutrition. For the treatment of Gram-negative infection, common in severe malnutrition, pragmatic and cost-effective treatments are needed to improve outcome. Since intravenous formulations of fluoroquinolones are very expensive, they are rarely used in resource-poor settings. Oral formulations would appear to be the best option in patients who are able to ingest and adequately absorb medication. The aims of this study were to determine the pharmacokinetic profile of oral ciprofloxacin given at a dose of 10 mg/kg twice daily to children admitted to hospital with severe malnutrition, to develop a population model to describe the pharmacokinetics of ciprofloxacin in this patient group, and to use Monte Carlo simulation techniques to investigate potential relationships between dosage regimens and antimicrobial efficacy.
Methods
Patients The study was conducted at Kilifi District Hospital, Kenya. All children admitted to the ward were examined by a member of the clinical research team. Between July 2008 and February 2009, children >6 months of age were assessed for eligibility for the study. Eligible children had severe malnutrition, defined as one of the following: weight-for-height z-score (WHZ) of ≤3; mid-upper arm circumference of <11 cm; or the presence of bilateral pedal oedema (kwashiorkor). Children with evidence of intrinsic renal disease (creatinine concentration >300 μmol/L and hypertension or hyperkalaemia) were excluded. No cases had coexisting chronic bone or joint disease, or concurrently prescribed antacids, ketoconazole, theophylline or corticosteroids. The study was explained to the child's parent or guardian in their usual language and written informed consent was obtained. The Kenya Medical Research Institute/National Ethical Review Committee approved the study. Baseline laboratory tests included a full blood count, blood film for malaria parasites, plasma creatinine, electrolytes, plasma glucose, blood gases, blood culture and HIV rapid antibody test. Blood cultures were processed by a BACTEC 9050 instrument (Becton Dickinson, NJ, USA). Children were treated according to the standard WHO management guidelines for severe malnutrition. These include nutritional support with a special milk-based formula (F75 and F100), multivitamin and multimineral supplementation and reduced sodium oral rehydration solution (RESOMAL) for children with diarrhoea (>3 watery stools/day). At admission, all children were prescribed parenteral ampicillin (50 mg/kg four times a day) and intramuscular gentamicin (7.5 mg/kg once daily) for 7 days. These were revised, when indicated by the child's clinical condition or culture results. Other fluoroquinolones were not prescribed during the study period, since they were not routinely available. 
Study procedures Ciprofloxacin concentration–time profiles were obtained during two study periods. In the initial study period, children were given the ciprofloxacin either 2 h before or 2 h after they had their nutritional milks or meal. In this first period of the study (n = 36), an equal number of patients were included in three subgroups: low risk; intermediate risk; and high risk of fatality. The stratification of risk was based on a previous report.4 High risk included children presenting with one of the following: depressed conscious state; bradycardia (heart rate <80 beats per minute); evidence of shock (capillary refill time ≥2 seconds, temperature gradient or weak pulse); or hypoglycaemia (blood glucose <3 mmol/L). Intermediate risk included any one of: deep ‘acidotic’ breathing; signs of severe dehydration (plus diarrhoea); lethargy; hyponatraemia (sodium <125 mmol/L); or hypokalaemia (potassium <2.5 mmol/L). Children with low risk had none of these factors. In the second study period (n = 16), the drug was administered at the time they received nutritional feeds and did not have any risk stratification.
Drug administration and blood sampling Participants received witnessed doses of oral ciprofloxacin (10 mg/kg body weight), i.e. a standard treatment dose, every 12 h for 48 h, starting on the day of admission. Routine antimicrobials were given simultaneously as the empirical treatment for invasive infections. Since oral suspensions are not available locally, ciprofloxacin tablets (Bactiflox™ 250 mg, Mepha Pharma AG, Switzerland) were reformulated into an aqueous suspension by the study pharmacist. A separate tablet was used to prepare each individual dose and the suspension was then used immediately. Individual doses were measured by the pharmacist under the observation of the trial nurse, according to the patient's body weight. This approach has been used and reported previously.19 Children were allocated, using a computer-generated randomization list, to one of three blood sampling schedules: Group A at 2, 4, 8 and 24 h; Group B at 3, 5, 9 and 12 h; and Group C at 1, 3, 6 and 10 h. Children in each of the risk categories were evenly allocated across the sampling schedules. To minimize discomfort to the child, blood sampling for the measurement of ciprofloxacin concentrations was through an in situ cannula, which was only used for this purpose. At each timepoint, a venous blood sample (0.5 mL) was collected into a lithium heparin tube, centrifuged, and the plasma separated and stored in cryovials at −20°C until analysed.
A rapid, selective and sensitive HPLC method coupled with fluorescence detection was used to determine the concentration of ciprofloxacin in the plasma samples.20 The intra- and interassay imprecisions of the HPLC method were <8.0%, and accuracy values ranged from 93% to 105% for quality control samples at 0.2, 1.8 and 3.6 mg/L. Calibration curves of ciprofloxacin were linear over the concentration range of 0.02–4 mg/L, with correlation coefficients ≥0.998.
Pharmacokinetic analysis Population pharmacokinetic parameter estimates were obtained with NONMEM version VI using first-order conditional estimation with interaction.21 Post-processing of the NONMEM results was performed with Xpose version 4 programmed in R 2.9.2.22,23 Preliminary analyses found that a one-compartment elimination model adequately described the concentration–time profiles; absorption was described using first- and zero-order models with an absorption lag and with a transit compartment model.24 Relationships between the oral clearance of ciprofloxacin (CL) and weight, and between the oral volume of distribution (V) and weight, were described using an allometric approach,25 i.e. CL = CL(70 kg) × [weight (kg)/70]^0.75 and V = V(70 kg) × [weight (kg)/70]. Between-subject variabilities (BSV) in the pharmacokinetic parameters were assumed to be log-normally distributed, and covariance between the variabilities in the pharmacokinetic parameters was investigated. A combined proportional and additive error model was used to describe the residual error. A wide range of demographic, biochemical and haematological data were collected in the course of the study.
In addition, creatinine clearance estimates for each patient, according to age range, were calculated using the equations of Schwarz et al.26,27 Potential relationships between the clinical and demographic factors and empirical Bayes' estimates (individual estimates) of ciprofloxacin oral CL, oral V and absorption rate were initially examined visually using scatter plots and by general additive modelling within Xpose.22 Factors identified as potential covariates that might explain variability in ciprofloxacin pharmacokinetics were then added individually to the population model. A statistically significant improvement in the fit of the model to the data was defined as a reduction in the objective function value (OFV) of ≥3.84 (P <0.05) in the forward stepwise analysis. The covariate that produced the greatest fall in OFV was included first, and then other covariates were added to and removed from the model in a stepwise manner that included changing the order of inclusion and removal. Statistical significance was defined as an increase in OFV of ≥6.63 (P < 0.01) when a covariate was removed. Additional criteria, such as goodness-of-fit plots, the precision of the parameter estimates, and the ability of covariates to explain variability in the oral CL, oral V and absorption rate, were also considered. The final population model was evaluated in three ways: a bootstrap sampling procedure with 1000 samples; a prediction-corrected visual predictive check (pcVPC) based on 1000 simulations; and by examination of normalized prediction distribution errors (npde) from 1000 simulations. Both the bootstrapping procedure and the pcVPC were performed using PsN toolkit;28,29 npde were computed using the software developed by Brendel et al.30 Time to maximum (Tmax) and maximum concentrations of ciprofloxacin (Cmax) for each patient were obtained from the raw data and estimated using the individual pharmacokinetic parameter values. 
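For intuition, the one-compartment model with first-order absorption and lag described above has a closed-form concentration profile (the standard Bateman equation with a lag term). The sketch below uses illustrative parameter values in the range suggested by the study's results, not any individual's actual estimates.

```python
import math

def conc_1cmt_oral(t_h: float, dose_mg: float, cl: float, v: float,
                   ka: float, tlag: float = 0.0) -> float:
    """Plasma concentration (mg/L) at time t_h after a single oral dose.
    cl and v are apparent (oral) values, so bioavailability is folded in;
    ka is the first-order absorption rate constant (1/h)."""
    if t_h <= tlag:
        return 0.0
    ke = cl / v  # elimination rate constant (1/h)
    t = t_h - tlag
    return dose_mg * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative 100 mg dose with CL = 9.9 L/h, V = 53 L, ka = 1.5 /h, 0.3 h lag
c_2h = conc_1cmt_oral(2.0, 100, 9.9, 53, 1.5, 0.3)
```

At these assumed values the 2 h concentration is about 1.4 mg/L, within the Cmax range (0.6–4.5 mg/L) observed in the study.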
Estimates of steady-state AUC within each dosage interval (AUC12) and each day (AUC24) were calculated from 12 hourly dose/CLi and daily dose/CLi, respectively, where CLi are the individual estimates of oral CL. Individual estimates of the elimination half-life (t½) were calculated from 0.693 × Vi/CLi, where Vi are the individual estimates of oral V.
Monte Carlo simulations

The final parameters of the population pharmacokinetic model were used to simulate estimates of oral CL for 10 000 patients using NONMEM version VI.21 Relevant clinical characteristics (weight and sodium concentration) were assumed to arise from log-normal distributions with outer limits set to the values observed in the raw data. The incidence of ‘high risk’ in the simulated dataset was 31%. AUC24 estimates were then determined for each simulated patient from daily dose/oral CL. Simulations were performed for three dosage regimens: 20 mg/kg/day (the current daily dosage regimen); 30 mg/kg/day; and 45 mg/kg/day. For evaluation of these dosage regimens, MIC values were chosen across the range 0.03–8 mg/L. The probability of target attainment (PTA) was defined as the probability that the target AUC0–24/MIC ratio was achieved at each MIC. Target AUC0–24/MIC ratios of ≥125 were used.31 For each ciprofloxacin regimen, the highest MIC at which the PTA was ≥90% was defined as the pharmacokinetic/pharmacodynamic susceptible breakpoint. A second analysis was conducted using the MIC distributions for Escherichia coli, Pseudomonas aeruginosa, Salmonella spp. and Klebsiella pneumoniae derived from the database of the European Committee on Antimicrobial Susceptibility Testing.32 These MIC distributions were extracted from 17 877 strains of E. coli, 27 825 strains of P. aeruginosa, 5898 strains of K. pneumoniae and 1733 strains of Salmonella spp. The cumulative fraction of response (CFR) was used to estimate the overall response of pathogens to ciprofloxacin with each of the three dosage regimens. 
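A sketch of the PTA and breakpoint calculation described above; the clearance distribution is an illustrative stand-in for the fitted covariate model, not the study's parameters:

```python
import math
import random

TARGET = 125.0  # target AUC0-24/MIC ratio

def simulate_auc24(n, daily_dose_per_kg, seed=1):
    """Simulate daily AUC (mg*h/L) as daily dose / oral CL for n patients.
    The log-normal CL distribution (median 1.0 L/h/kg, ~40% CV) is an
    illustrative stand-in for the study's covariate model."""
    rng = random.Random(seed)
    return [daily_dose_per_kg / rng.lognormvariate(math.log(1.0), 0.4)
            for _ in range(n)]

def pta(aucs, mic):
    """Probability of target attainment at one MIC."""
    return sum(a / mic >= TARGET for a in aucs) / len(aucs)

def pkpd_breakpoint(aucs, mics):
    """Highest MIC at which PTA >= 90% (the susceptible breakpoint), or None."""
    attained = [m for m in mics if pta(aucs, m) >= 0.90]
    return max(attained) if attained else None

mics = [0.03, 0.06, 0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
aucs = simulate_auc24(10_000, 20.0)  # the 20 mg/kg/day regimen
breakpoint_mic = pkpd_breakpoint(aucs, mics)
```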
This estimate accounts for variability in drug exposure across the population combined with the distribution of MICs for each pathogen. For each MIC, the fraction of simulated patients who met the pharmacodynamic target (AUC0–24/MIC ≥ 125) was multiplied by the fraction of the distribution of microorganisms at that MIC. The CFR was calculated as the sum of these fraction products over all MICs.
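The CFR computation reduces to a weighted sum; a minimal sketch with made-up AUC values and an invented MIC distribution (not the EUCAST data):

```python
def cfr(aucs, mic_counts, target=125.0):
    """Cumulative fraction of response: the fraction of patients attaining
    AUC0-24/MIC >= target at each MIC, weighted by the fraction of isolates
    at that MIC, summed over all MICs."""
    n_isolates = sum(mic_counts.values())
    total = 0.0
    for mic, count in mic_counts.items():
        frac_attaining = sum(a / mic >= target for a in aucs) / len(aucs)
        total += frac_attaining * (count / n_isolates)
    return total

# Illustrative inputs (hypothetical, not the study's simulated AUCs or the
# EUCAST MIC distributions):
aucs = [12.0, 20.0, 30.0, 45.0]              # simulated AUC0-24 values (mg*h/L)
mic_counts = {0.03: 60, 0.125: 30, 1.0: 10}  # isolate counts per MIC (mg/L)
print(round(cfr(aucs, mic_counts), 3))       # 0.825
```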
Results
Participants and admission clinical characteristics

Of the 90 children with severe malnutrition admitted during the study period, 52 were enrolled into the study. Twelve who fulfilled the entry criteria declined to give consent, intravenous access was not possible in four, and 22 were excluded because recruitment to a specific subgroup had been completed. The demographic and clinical characteristics of the patients included in the study are summarized in Table 1. Median age [interquartile range (IQR)] was 23 months (15–33 months), 29 (56%) were male, oedema (defining kwashiorkor) was present in 24 patients (46%) and eight (15%) children were HIV antibody positive. The median weight (IQR) was 6.9 kg (6.1–8.4 kg). The estimated creatinine clearance ranged from 5 to 128.7 mL/min/1.73 m²; only one patient had severe renal impairment. According to the risk stratification, mortality was 21%, 12% and 0% in the high-risk, intermediate-risk and low-risk groups, respectively. Bacteraemia was present in six children: three had E. coli, two had S. pneumoniae and one had K. pneumoniae. The isolates were tested against ampicillin, gentamicin, ciprofloxacin, chloramphenicol and ceftriaxone. The K. pneumoniae isolate was resistant to ampicillin only, while one E. coli isolate was resistant to all of the tested antibiotics. One S. pneumoniae isolate was resistant to ampicillin and had intermediate susceptibility to ciprofloxacin. Ciprofloxacin was well tolerated and there were no adverse events reported with its use. 
Table 1. Summary of the demographic and clinical characteristics of the patients who participated in the study

Characteristic (normal range) | Number (%)/median (range) | Interquartile range
Male/female | 29/23 (56/44) |
Oedema | 24 (46) |
HIV positive | 8 (15) |
Age (months) | 23 (8–102) | 15–33
Weight (kg) | 6.9 (4.1–14.5) | 6.1–8.4
Height (cm) | 75.4 (58.5–114.4) | 70.1–81.4
MUAC (cm) | 11 (7.7–14.3) | 10–12
WHZ | −3.26 (−5.69–0.04) | −3.67 to −2.48
Vomiting | 15 (29) |
Shock | 7 (13) |
Dehydration | 25 (48) |
Low risk | 22 (42) |
Intermediate risk | 14 (27) |
High risk | 16 (31) |
White blood cell (×10⁶/L) (6–17.5) | 13.3 (5.5–84.4) | 10.3–17.1
Haemoglobin (g/dL) (9–14) | 9.0 (2.1–12.8) | 7.2–9.8
Platelets (×10⁶/L) (150–400) | 309 (16–1369) | 210–451
Sodium (mmol/L) (138–145) | 136 (120–160) | 131–138
Potassium (mmol/L) (3.5–5) | 3.1 (1.2–5.1) | 2.3–4.1
Glucose (mmol/L) (2.8–5) | 4 (0.4–11.4) | 2.8–4.7
Bicarbonate (mmol/L) (22–29) | 15.4 (4.7–26) | 11.7–18.5
Base excess (−4 to +2) | −8.0 (−26.9–2.1) | −13.1 to −4.5
Serum creatinine (μmol/L) (44–88) | 44 (27–676) | 36.8–51.9
Creatinine clearance^a (mL/min/1.73 m²) | 85.8 (5.0–128.7) | 70.7–101.4

MUAC, mid-upper arm circumference.
^a Estimated according to age using the equations of Schwartz et al.26,27
Pharmacokinetic data analysis

A total of 202 plasma ciprofloxacin concentration measurements were available, with a median of 4 (range 2–4) measurements per patient. Individual concentration–time profiles are presented in Figure 1. Tmax ranged from 1 to 5 h (median 3 h) after the dose, and in 69% of patients Cmax was observed within 1–3 h. Cmax ranged from 0.6 to 4.5 mg/L with a median of 1.7 mg/L. Cmax was >1 mg/L and >2 mg/L in 77% and 31% of patients, respectively. Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L.

Figure 1. Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.

Both the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. The transit compartment model was tested again with the final covariate model, but offered no clear advantage. 
Plots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V. Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below. 
Table 2. Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children

Parameter | Population estimate | Bootstrap estimate | Bootstrap 95% CI
θ1 | 42.7 | 42.5 | 37.0–49.3
θ2 | 0.0368 | 0.0361 | 0.0217–0.0446
θ3 | −0.283 | −0.285 | −0.412 to −0.118
θ4 | 372 | 367 | 316–429
θ5 | 0.0291 | 0.0282 | 0.0155–0.0388
θ6 | 2.97 | 3.44 | 1.32–8.86
θ7 | 0.742 | 0.792 | 0.168–0.924
BSV CL | 38.1 | 37.8 | 28.7–45.8
BSV V | 43.0 | 42.9 | 32.4–51.8
BSV ka | 102 | 110 | 56–159
Residual error, additive (SD) | 0.0273 | 0.0278 | 0.0041–0.0438
Residual error, proportional (%CV) | 18.6 | 17.8 | 14.0–22.6

Structural model: TVCL = θ1 × (WT/70)^0.75 × [1 + θ2 × (Na+ − 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ − 136)]; TVKA = θ6; ALAG = θ7.

Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L).

The population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. 
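The structural model in Table 2 can be evaluated directly; a sketch using the reported population estimates (typical values only, ignoring between-subject variability):

```python
def typical_cl_v(wt_kg, na_mmol, high_risk,
                 th1=42.7, th2=0.0368, th3=-0.283, th4=372.0, th5=0.0291):
    """Typical oral clearance (L/h) and volume of distribution (L) from the
    final covariate model, using the population estimates in Table 2."""
    risk = 1.0 if high_risk else 0.0
    tvcl = (th1 * (wt_kg / 70) ** 0.75
            * (1 + th2 * (na_mmol - 136))
            * (1 + th3 * risk))
    tvv = th4 * (wt_kg / 70) * (1 + th5 * (na_mmol - 136))
    return tvcl, tvv

# Standardized 70 kg adult with Na 136 mmol/L:
cl, v = typical_cl_v(70, 136, high_risk=False)    # -> 42.7 L/h, 372 L
cl_hr, _ = typical_cl_v(70, 136, high_risk=True)  # 28.3% lower: ~30.6 L/h
```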
The model described an increase (or decrease) in oral CL of 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the ‘high-risk’ category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L. All three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r² = 0.659) and the individual parameter estimates (r² = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset.

Figure 2. Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line.

Figure 3. Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model. 
Table 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories.

Table 3. Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition

Variable | n | Mean | SD | Median | Minimum | Maximum
CL (L/h) | 52 | 7.43 | 3.54 | 7.19 | 1.83 | 17.1
CL (L/h/kg) | 52 | 1.02 | 0.52 | 0.87 | 0.32 | 2.54
CL (L/h/kg), low/intermediate risk | 36 | 1.14 | 0.53 | 0.98 | 0.46 | 2.54
CL (L/h/kg), high risk | 16 | 0.77 | 0.41 | 0.67 | 0.32 | 1.53
V (L/kg) | 52 | 5.47 | 2.69 | 4.49 | 2.14 | 14.2
t½ (h) | 52 | 3.97 | 1.38 | 3.78 | 2.01 | 9.04
Observed Tmax (h) | 52 | 2.77 | 1.08 | 3.00 | 1.00 | 5.17
Model-predicted Tmax (h) | 52 | 1.91 | 0.58 | 1.79 | 1.09 | 3.85
Observed Cmax (mg/L) | 52 | 1.68 | 0.79 | 1.71 | 0.58 | 4.52
Model-predicted Cmax (mg/L) | 52 | 1.51 | 0.60 | 1.50 | 0.61 | 3.56
AUC0–24 (mg·h/L) | 52 | 24.8 | 12.4 | 22.4 | 7.9 | 61.3
AUC0–24 (mg·h/L), low/intermediate risk | 36 | 21.1 | 8.8 | 20.5 | 7.9 | 43.4
AUC0–24 (mg·h/L), high risk | 16 | 32.9 | 15.5 | 29.7 | 13.1 | 61.3

Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC. 
Monte Carlo simulations

The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day. The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L. When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested.

Figure 4. Percentage probability of achieving a target AUC0–24/MIC ratio ≥125.

Table 4. Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coli

Organism | Target AUC0–24/MIC ratio | Cumulative fraction of predicted response (%): 20 mg/kg/day | 30 mg/kg/day | 45 mg/kg/day
Salmonella spp. | 125 | 96 | 98 | 99
P. aeruginosa | 125 | 43 | 55 | 64
K. pneumoniae | 125 | 76 | 80 | 83
E. coli | 125 | 85 | 87 | 88
Conclusions
The pharmacokinetics of oral ciprofloxacin in children with severe malnutrition are influenced by weight, serum sodium concentration and the risk of mortality, but clearance and the rate of absorption were highly variable. Oral ciprofloxacin absorption was unaffected by the simultaneous administration of nutritional feeds. An oral dose of 10 mg/kg twice daily should be effective against E. coli and Salmonella spp., but a higher dose of 10 mg/kg three times a day would be recommended for K. pneumoniae. Oral ciprofloxacin is unlikely to be an effective treatment for P. aeruginosa. Irrespective of the bacterial pathogen, patients with severe illness, at high risk of mortality, should initially receive intravenous antibiotics.
[ "Patients", "Study procedures", "Drug administration and blood sampling", "Pharmacokinetic analysis", "Monte Carlo simulations", "Participants and admission clinical characteristics", "Pharmacokinetic data analysis", "Monte Carlo simulations", "Funding", "Transparency declarations" ]
[ "The study was conducted at Kilifi District Hospital, Kenya. All children admitted to the ward were examined by a member of the clinical research team. Between July 2008 and February 2009, children >6 months of age were assessed for eligibility for the study. Eligible children had severe malnutrition, defined as one of the following: weight-for-height z-score (WHZ) of ≤−3; mid-upper arm circumference of <11 cm; or the presence of bilateral pedal oedema (kwashiorkor). Children with evidence of intrinsic renal disease (creatinine concentration >300 μmol/L and hypertension or hyperkalaemia) were excluded. No cases had coexisting chronic bone or joint disease, or concurrently prescribed antacids, ketoconazole, theophylline or corticosteroids. The study was explained to the child's parent or guardian in their usual language and written informed consent was obtained. The Kenya Medical Research Institute/National Ethical Review Committee approved the study.\nBaseline laboratory tests included a full blood count, blood film for malaria parasites, plasma creatinine, electrolytes, plasma glucose, blood gases, blood culture and HIV rapid antibody test. Blood cultures were processed by a BACTEC 9050 instrument (Becton Dickinson, NJ, USA). Children were treated according to the standard WHO management guidelines for severe malnutrition. These include nutritional support with a special milk-based formula (F75 and F100), multivitamin and multimineral supplementation and reduced sodium oral rehydration solution (RESOMAL) for children with diarrhoea (>3 watery stools/day). At admission, all children were prescribed parenteral ampicillin (50 mg/kg four times a day) and intramuscular gentamicin (7.5 mg/kg once daily) for 7 days. These were revised, when indicated by the child's clinical condition or culture results. 
Other fluoroquinolones were not prescribed during the study period, since they were not routinely available.", "Ciprofloxacin concentration–time profiles were obtained during two study periods. In the initial study period, children were given the ciprofloxacin either 2 h before or 2 h after they had their nutritional milks or meal. In this first period of the study (n = 36), an equal number of patients were included in three subgroups: low risk; intermediate risk; and high risk of fatality. The stratification of risk was based on a previous report.4 High risk included children presenting with one of the following: depressed conscious state; bradycardia (heart rate <80 beats per minute); evidence of shock (capillary refill time ≥2 seconds, temperature gradient or weak pulse); or hypoglycaemia (blood glucose <3 mmol/L). Intermediate risk included any one of: deep ‘acidotic’ breathing; signs of severe dehydration (plus diarrhoea); lethargy; hyponatraemia (sodium <125 mmol/L); or hypokalaemia (potassium <2.5 mmol/L). Children with low risk had none of these factors. In the second study period (n = 16), the drug was administered at the time they received nutritional feeds and did not have any risk stratification.", "Participants received witnessed doses of oral ciprofloxacin (10 mg/kg body weight), i.e. a standard treatment dose, every 12 h for 48 h, starting on the day of admission. Routine antimicrobials were given simultaneously as the empirical treatment for invasive infections. Since oral suspensions are not available locally, ciprofloxacin tablets (Bactiflox™ 250 mg, Mepha Pharma AG, Switzerland) were reformulated into an aqueous suspension by the study pharmacist. A separate tablet was used to prepare each individual dose and the suspension was then used immediately. Individual doses were measured by the pharmacist under the observation of the trial nurse, according to the patient's body weight. 
This approach has been used and reported previously.19\nChildren were allocated, using a computer-generated randomization list, to one of three blood sampling schedules: Group A at 2, 4, 8 and 24 h; Group B at 3, 5, 9 and 12 h; and Group C at 1, 3, 6 and 10 h. Children in each of the risk categories were evenly allocated across the sampling schedules. To minimize discomfort to the child, blood sampling for the measurement of ciprofloxacin concentrations was through an in situ cannula, which was only used for this purpose. At each timepoint, a venous blood sample (0.5 mL) was collected into a lithium heparin tube, centrifuged, and the plasma separated and stored in cryovials at −20°C until analysed. A rapid, selective and sensitive HPLC method coupled with fluorescence detection was used to determine the concentration of ciprofloxacin in the plasma samples.20 The intra- and interassay imprecisions of the HPLC method were <8.0%, and accuracy values ranged from 93% to 105% for quality control samples at 0.2, 1.8 and 3.6 mg/L. 
Calibration curves of ciprofloxacin were linear over the concentration range of 0.02–4 mg/L, with correlation coefficients ≥0.998.", "Population pharmacokinetic parameter estimates were obtained with NONMEM version VI using first-order conditional estimation with interaction.21 Post-processing of the NONMEM results was performed with Xpose version 4 programmed in R 2.9.2.22,23 Preliminary analyses found that a one-compartment elimination model adequately described the concentration–time profiles; absorption was described using first- and zero-order models with an absorption lag and with a transit compartment model.24 Relationships between the oral clearance of ciprofloxacin (CL) and weight, and between the oral volume of distribution (V) and weight were described using an allometric approach,25 i.e. CL = CLstd × (WT/70)^0.75 and V = Vstd × (WT/70)^1.0, where WT is body weight (kg) and CLstd and Vstd are the parameter values standardized to a 70 kg adult.\nBetween-subject variabilities (BSV) in the pharmacokinetic parameters were assumed to be log-normally distributed and covariance between the variabilities in the pharmacokinetic parameters was investigated. A combined proportional and additive error model was used to describe the residual error.\nA wide range of demographic, biochemical and haematological data were collected in the course of the study. In addition, creatinine clearance estimates for each patient, according to age range, were calculated using the equations of Schwartz et al.26,27 Potential relationships between the clinical and demographic factors and empirical Bayes' estimates (individual estimates) of ciprofloxacin oral CL, oral V and absorption rate were initially examined visually using scatter plots and by general additive modelling within Xpose.22 Factors identified as potential covariates that might explain variability in ciprofloxacin pharmacokinetics were then added individually to the population model. 
A statistically significant improvement in the fit of the model to the data was defined as a reduction in the objective function value (OFV) of ≥3.84 (P <0.05) in the forward stepwise analysis. The covariate that produced the greatest fall in OFV was included first, and then other covariates were added to and removed from the model in a stepwise manner that included changing the order of inclusion and removal. Statistical significance was defined as an increase in OFV of ≥6.63 (P < 0.01) when a covariate was removed. Additional criteria, such as goodness-of-fit plots, the precision of the parameter estimates, and the ability of covariates to explain variability in the oral CL, oral V and absorption rate, were also considered.\nThe final population model was evaluated in three ways: a bootstrap sampling procedure with 1000 samples; a prediction-corrected visual predictive check (pcVPC) based on 1000 simulations; and by examination of normalized prediction distribution errors (npde) from 1000 simulations. Both the bootstrapping procedure and the pcVPC were performed using PsN toolkit;28,29 npde were computed using the software developed by Brendel et al.30\nTime to maximum (Tmax) and maximum concentrations of ciprofloxacin (Cmax) for each patient were obtained from the raw data and estimated using the individual pharmacokinetic parameter values. Estimates of steady-state AUC within each dosage interval (AUC12) and each day (AUC24) were calculated from 12 hourly dose/CLi and daily dose/CLi, respectively, where CLi are the individual estimates of oral CL. 
Individual estimates of the elimination half-life (t½) were calculated from 0.693 × Vi/CLi, where Vi are the individual estimates of oral V.", "The final parameters of the population pharmacokinetic model were used to simulate estimates of oral CL for 10 000 patients using NONMEM version VI.21 Relevant clinical characteristics (weight and sodium concentration) were assumed to arise from log-normal distributions with outer limits set to the values observed in the raw data. The incidence of ‘high risk’ in the simulated dataset was 31%. AUC24 estimates were then determined for each simulated patient from daily dose/oral CL. Simulations were performed for three dosage regimens: 20 mg/kg/day (current daily dosage regimen); 30 mg/kg/day; and 45 mg/kg/day. For evaluation of these dosage regimens, MIC values were chosen across the range 0.03–8 mg/L. The probability of target attainment (PTA) was defined as the probability that the target AUC0–24/MIC ratio was achieved at each MIC. Target AUC0–24/MIC ratios of ≥125 were used.31 For each ciprofloxacin regimen, the highest MIC at which the PTA was ≥90% was defined as the pharmacokinetic/pharmacodynamic susceptible breakpoint.\nA second analysis was conducted using the MIC distributions for Escherichia coli, Pseudomonas aeruginosa, Salmonella spp. and Klebsiella pneumoniae derived from the database of the European Committee on Antimicrobial Susceptibility Testing.32 These MIC distributions were extracted from 17 877 strains of E. coli, 27 825 strains of P. aeruginosa, 5898 strains of K. pneumoniae and 1733 strains of Salmonella spp. The cumulative fraction of response (CFR) was used to estimate the overall response of pathogens to ciprofloxacin with each of the three dosage regimens. This estimate accounts for the variability of drug exposure in the population and the variability in the MIC combined with the distributions of MICs for the pathogens. 
For each MIC, the fraction of simulated patients who met the pharmacodynamic target (AUC0–24/MIC ≥ 125) was multiplied by the fraction of the distribution of microorganisms for each MIC. The CFR was calculated as the sum of fraction products over all MICs.", "Of the 90 children with severe malnutrition admitted during the study period, 52 were enrolled into the study. Twelve who fulfilled the entry criteria declined to give consent, in four intravenous access was not possible and 22 were excluded due to completion of recruitment to a specific subgroup. The demographic and clinical characteristics of the patients included in the study are summarized in Table 1. Median age [interquartile range (IQR)] was 23 months (15–33 months), 29 (56%) were male, oedema (defining kwashiorkor) was present in 24 patients (46%) and eight (15%) children were HIV antibody positive. The median weight (IQR) was 6.9 kg (6.1–8.4 kg). The estimated creatinine clearance ranged from 5 to 128.7 mL/min/1.73 m2; only one patient had severe renal impairment. According to the risk stratification, mortality was 21%, 12% and 0% in the high-risk, intermediate-risk and low-risk groups, respectively. Bacteraemia was present in six children; three had E. coli, two had S. pneumoniae and one had K. pneumoniae. The isolates were tested against ampicillin, gentamicin, ciprofloxacin, chloramphenicol and ceftriaxone. The K. pneumoniae isolate was resistant to ampicillin only, while one E. coli isolate was resistant to all of the tested antibiotics. One S. pneumoniae isolate was resistant to ampicillin and had intermediate susceptibility to ciprofloxacin. 
Ciprofloxacin was well tolerated and there were no adverse events reported with its use.
Table 1. Summary of the demographic and clinical characteristics of the patients who participated in the study

Characteristic                              Number (%)/median (range)   Interquartile range
Male/female                                 29/23 (56/44)
Oedema                                      24 (46)
HIV positive                                8 (15)
Age (months)                                23 (8–102)                  15–33
Weight (kg)                                 6.9 (4.1–14.5)              6.1–8.4
Height (cm)                                 75.4 (58.5–114.4)           70.1–81.4
MUAC (cm)                                   11 (7.7–14.3)               10–12
WHZ                                         −3.26 (−5.69–0.04)          −3.67 to −2.48
Vomiting                                    15 (29)
Shock                                       7 (13)
Dehydration                                 25 (48)
Low risk                                    22 (42)
Intermediate risk                           14 (27)
High risk                                   16 (31)
White blood cell (×10⁶/L) (6–17.5)          13.3 (5.5–84.4)             10.3–17.1
Haemoglobin (g/dL) (9–14)                   9.0 (2.1–12.8)              7.2–9.8
Platelets (×10⁶/L) (150–400)                309 (16–1369)               210–451
Sodium (mmol/L) (138–145)                   136 (120–160)               131–138
Potassium (mmol/L) (3.5–5)                  3.1 (1.2–5.1)               2.3–4.1
Glucose (mmol/L) (2.8–5)                    4 (0.4–11.4)                2.8–4.7
Bicarbonate (mmol/L) (22–29)                15.4 (4.7–26)               11.7–18.5
Base excess (−4 to +2)                      −8.0 (−26.9–2.1)            −13.1 to −4.5
Serum creatinine (μmol/L) (44–88)           44 (27–676)                 36.8–51.9
Creatinine clearance^a (mL/min/1.73 m²)     85.8 (5.0–128.7)            70.7–101.4

MUAC, mid-upper arm circumference.
^a Estimated according to age using the equations of Schwartz et al.26,27", "A total of 202 plasma ciprofloxacin concentration measurements were available, with a median of 4 (range 2–4) measurements per patient. Individual concentration–time profiles are presented in Figure 1. Tmax ranged from 1 to 5 h (median 3 h) after the dose, and in 69% of patients Cmax was observed within 1–3 h. Cmax ranged from 0.6 to 4.5 mg/L with a median of 1.7 mg/L. Cmax was >1 mg/L and >2 mg/L in 77% and 31% of patients, respectively. 
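The creatinine clearance estimates footnoted in Table 1 use the age-specific Schwartz equations; a minimal sketch with the commonly published constants (the exact k values used in the study are in its references 26 and 27, so those below are assumptions):

```python
def schwartz_crcl(height_cm, creatinine_umol_per_l, age_years):
    """Estimated creatinine clearance (mL/min/1.73 m^2) via the original
    Schwartz formula: k * height(cm) / serum creatinine (mg/dL).
    The k constants here are the widely quoted ones, not necessarily the study's."""
    scr_mg_dl = creatinine_umol_per_l / 88.4  # convert umol/L -> mg/dL
    k = 0.45 if age_years < 1 else 0.55
    return k * height_cm / scr_mg_dl

# Median child from Table 1: height 75.4 cm, creatinine 44 umol/L, age ~2 years
print(round(schwartz_crcl(75.4, 44, 2), 1))
```

For the median child this gives ≈83 mL/min/1.73 m², close to the reported median of 85.8.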
Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L.\nFigure 1. Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.\nBoth the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. The transit compartment model was tested again with the final covariate model, but offered no clear advantage.\nPlots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. 
No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V. Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below.
Table 2. Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children

Parameter             Population estimate   Bootstrap estimate   Bootstrap 95% CI
θ1                    42.7                  42.5                 37.0–49.3
θ2                    0.0368                0.0361               0.0217–0.0446
θ3                    −0.283                −0.285               −0.412 to −0.118
θ4                    372                   367                  316–429
θ5                    0.0291                0.0282               0.0155–0.0388
θ6                    2.97                  3.44                 1.32–8.86
θ7                    0.742                 0.792                0.168–0.924
BSV CL                38.1                  37.8                 28.7–45.8
BSV V                 43.0                  42.9                 32.4–51.8
BSV ka                102                   110                  56–159
Residual error
  Additive (SD)       0.0273                0.0278               0.0041–0.0438
  Proportional (%CV)  18.6                  17.8                 14.0–22.6

Structural model: TVCL = θ1 × (WT/70)^0.75 × [1 + θ2 × (Na+ − 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ − 136)]; TVKA = θ6; ALAG = θ7.
Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L).
The population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. The model described an increase (or decrease) in oral CL by 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the ‘high-risk’ category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L.
All three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r2 = 0.659) and the individual parameter estimates (r2 = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset.
Figure 2. Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line.
Figure 3. Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. 
The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model.\nTable 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was a wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. 
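The structural model in Table 2 can be read as a small calculator for typical parameter values (point estimates only; real individual predictions additionally include the log-normal between-subject random effects):

```python
def typical_cl(weight_kg, sodium=136, high_risk=False):
    """TVCL (L/h) = 42.7 * (WT/70)^0.75 * [1 + 0.0368*(Na - 136)],
    reduced by 28.3% for high-risk patients (Table 2 point estimates)."""
    cl = 42.7 * (weight_kg / 70) ** 0.75 * (1 + 0.0368 * (sodium - 136))
    return cl * (1 - 0.283) if high_risk else cl

def typical_v(weight_kg, sodium=136):
    """TVV (L) = 372 * (WT/70) * [1 + 0.0291*(Na - 136)] (Table 2 point estimates)."""
    return 372 * (weight_kg / 70) * (1 + 0.0291 * (sodium - 136))

# Standardized 70 kg high-risk patient: ~30.6 L/h, as quoted in the text
print(round(typical_cl(70, high_risk=True), 1))
# Median study child (6.9 kg, Na 136): ~1.1 L/h/kg, in line with Table 3
print(round(typical_cl(6.9) / 6.9, 2))
```

The per-kg clearance rising above the 70 kg value is the expected consequence of the 0.75 allometric exponent in smaller patients.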
Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories.
Table 3. Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition

Variable                                 n    Mean   SD     Median   Minimum   Maximum
CL (L/h)                                 52   7.43   3.54   7.19     1.83      17.1
CL (L/h/kg)                              52   1.02   0.52   0.87     0.32      2.54
CL (L/h/kg) low/intermediate risk        36   1.14   0.53   0.98     0.46      2.54
CL (L/h/kg) high risk                    16   0.77   0.41   0.67     0.32      1.53
V (L/kg)                                 52   5.47   2.69   4.49     2.14      14.2
t½ (h)                                   52   3.97   1.38   3.78     2.01      9.04
Observed Tmax (h)                        52   2.77   1.08   3.00     1.00      5.17
Model-predicted Tmax (h)                 52   1.91   0.58   1.79     1.09      3.85
Observed Cmax (mg/L)                     52   1.68   0.79   1.71     0.58      4.52
Model-predicted Cmax (mg/L)              52   1.51   0.60   1.50     0.61      3.56
AUC0–24 (mg·h/L)                         52   24.8   12.4   22.4     7.9       61.3
AUC0–24 (mg·h/L) low/intermediate risk   36   21.1   8.8    20.5     7.9       43.4
AUC0–24 (mg·h/L) high risk               16   32.9   15.5   29.7     13.1      61.3

Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC.", "The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day. 
The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L. When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested.
Table 4. Cumulative fraction of predicted response (%) to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coli

Organism          Target AUC0–24/MIC ratio   20 mg/kg/day   30 mg/kg/day   45 mg/kg/day
Salmonella spp.   125                        96             98             99
P. aeruginosa     125                        43             55             64
K. pneumoniae     125                        76             80             83
E. coli           125                        85             87             88

Figure 4. Percentage probability of achieving a target AUC0–24/MIC ratio ≥125.", "This study was supported by Wellcome Trust core funding to KEMRI-Wellcome Trust Research Programme (grant Reference No. 077092). Nahashon Thuo is supported by a Wellcome Trust Masters Training Fellowship (grant reference No. 089353/Z/09/Z). The funding sources had no role in study design, analysis or in the writing of the report.", "None to declare." ]
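The selected one-compartment model with first-order absorption and a lag implies closed forms for the concentration profile and Tmax. A sketch using per-kg parameters (CL and V are the median individual estimates from Table 3; ka and the lag are θ6 and θ7 from Table 2; F is folded into the oral CL/F and V/F parameterization, so it is set to 1 here):

```python
import math

def conc(t, dose_mg_kg=10, ka=2.97, lag=0.742, cl=1.02, v=5.47):
    """One-compartment, first-order absorption with lag time (per-kg parameters):
    C(t) = D*ka / (V*(ka-ke)) * (exp(-ke*(t-lag)) - exp(-ka*(t-lag))) for t > lag."""
    if t <= lag:
        return 0.0
    ke = cl / v  # elimination rate constant (1/h)
    tt = t - lag
    return dose_mg_kg * ka / (v * (ka - ke)) * (math.exp(-ke * tt) - math.exp(-ka * tt))

def tmax(ka=2.97, lag=0.742, cl=1.02, v=5.47):
    """Analytic time of peak: ln(ka/ke)/(ka - ke) + lag."""
    ke = cl / v
    return math.log(ka / ke) / (ka - ke) + lag

print(round(tmax(), 2), round(conc(tmax()), 2))
```

With these medians the sketch predicts a peak of ≈1.5 mg/L at ≈1.7 h, consistent with the model-predicted median Cmax of 1.50 mg/L and Tmax of 1.79 h in Table 3.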
[ "Introduction", "Methods", "Patients", "Study procedures", "Drug administration and blood sampling", "Pharmacokinetic analysis", "Monte Carlo simulations", "Results", "Participants and admission clinical characteristics", "Pharmacokinetic data analysis", "Monte Carlo simulations", "Discussion", "Conclusions", "Funding", "Transparency declarations" ]
[ "Severe malnutrition remains a common cause of admission to hospital in less-developed countries. Many centres, particularly in Africa, report poor outcome despite adherence to recommended treatment guidelines.1–4 The children at the greatest risk of fatal outcome are those with Gram-negative septicaemia, constituting 48%–55% of invasive bacterial pathogens, and those admitted with diarrhoea and/or shock.5,6 Changes in the intestinal mucosal integrity and gut microbial balance occur in severe malnutrition,7 resulting in treatment failure and adverse clinical outcome.8 The higher prevalence of gut barrier dysfunction in children with severe malnutrition may have important effects on the absorption of antimicrobials and their bioavailability, and therefore may limit choices for the delivery of antimicrobial medication.\nChildren with severe and complicated malnutrition routinely receive broad-spectrum parenteral antibiotics.9 In vitro antibiotic susceptibility testing indicates that up to 85% of organisms are fully susceptible to the first-line treatment, parenteral ampicillin and gentamicin, recommended by the WHO for children with severe and complicated malnutrition.4 Pharmacokinetic studies have demonstrated satisfactory plasma concentrations of these commonly used antibiotics.10 However, in vitro resistance has been associated with later deaths, and the current second-line antibiotic (chloramphenicol) was found to offer little advantage over the ampicillin and gentamicin combination.4\nFluoroquinolones are effective against most Gram-negative organisms and have activity against Gram-positive organisms, especially when given in combination.11 The quinolones are used in the treatment of serious bacterial infections in adults; however, their use in children has been restricted due to concerns about potential cartilage damage.12 Nevertheless, quinolones are increasingly prescribed for paediatric patients. 
Ciprofloxacin is licensed in children >1 year of age for pseudomonal infections in cystic fibrosis, for complicated urinary tract infections, and for the treatment and prophylaxis of inhalation anthrax.13 When the benefits of treatment outweigh the risks, ciprofloxacin is also licensed in the UK for children >1 year of age with severe respiratory tract and gastrointestinal system infection.14 In its many years of clinical use, ciprofloxacin has been found effective, even with oral administration, owing to its bioavailability of ∼70%.15 However, few studies have evaluated the population pharmacokinetics of ciprofloxacin in children,16–18 and no studies have been conducted in severe malnutrition.\nFor the treatment of Gram-negative infection, common in severe malnutrition, pragmatic and cost-effective treatments are needed to improve outcome. Since intravenous formulations of fluoroquinolones are very expensive, they are rarely used in resource-poor settings. Oral formulations would appear to be the best option in patients who are able to ingest and adequately absorb medication. The aims of this study were to determine the pharmacokinetic profile of oral ciprofloxacin given at a dose of 10 mg/kg twice daily to children admitted to hospital with severe malnutrition, to develop a population model to describe the pharmacokinetics of ciprofloxacin in this patient group, and to use Monte Carlo simulation techniques to investigate potential relationships between dosage regimens and antimicrobial efficacy.", " Patients The study was conducted at Kilifi District Hospital, Kenya. All children admitted to the ward were examined by a member of the clinical research team. Between July 2008 and February 2009, children >6 months of age were assessed for eligibility for the study. 
Eligible children had severe malnutrition, defined as one of the following: weight-for-height z-score (WHZ) of ≤−3; mid-upper arm circumference of <11 cm; or the presence of bilateral pedal oedema (kwashiorkor). Children with evidence of intrinsic renal disease (creatinine concentration >300 μmol/L and hypertension or hyperkalaemia) were excluded. No cases had coexisting chronic bone or joint disease, or concurrently prescribed antacids, ketoconazole, theophylline or corticosteroids. The study was explained to the child's parent or guardian in their usual language and written informed consent was obtained. The Kenya Medical Research Institute/National Ethical Review Committee approved the study.\nBaseline laboratory tests included a full blood count, blood film for malaria parasites, plasma creatinine, electrolytes, plasma glucose, blood gases, blood culture and HIV rapid antibody test. Blood cultures were processed by a BACTEC 9050 instrument (Becton Dickinson, NJ, USA). Children were treated according to the standard WHO management guidelines for severe malnutrition. These include nutritional support with a special milk-based formula (F75 and F100), multivitamin and multimineral supplementation and reduced sodium oral rehydration solution (RESOMAL) for children with diarrhoea (>3 watery stools/day). At admission, all children were prescribed parenteral ampicillin (50 mg/kg four times a day) and intramuscular gentamicin (7.5 mg/kg once daily) for 7 days. These were revised, when indicated by the child's clinical condition or culture results. Other fluoroquinolones were not prescribed during the study period, since they were not routinely available.\n Study procedures Ciprofloxacin concentration–time profiles were obtained during two study periods. In the initial study period, children were given the ciprofloxacin either 2 h before or 2 h after they had their nutritional milks or meal. 
In this first period of the study (n = 36), an equal number of patients were included in three subgroups: low risk; intermediate risk; and high risk of fatality. The stratification of risk was based on a previous report.4 High risk included children presenting with any one of the following: depressed conscious state; bradycardia (heart rate <80 beats per minute); evidence of shock (capillary refill time ≥2 seconds, temperature gradient or weak pulse); or hypoglycaemia (blood glucose <3 mmol/L). Intermediate risk included any one of: deep ‘acidotic’ breathing; signs of severe dehydration (plus diarrhoea); lethargy; hyponatraemia (sodium <125 mmol/L); or hypokalaemia (potassium <2.5 mmol/L). Children with low risk had none of these factors. In the second study period (n = 16), the drug was administered at the time the children received their nutritional feeds, without risk stratification.
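The risk stratification above is a simple precedence rule (any high-risk sign dominates, then any intermediate-risk sign). A minimal sketch, assuming hypothetical string labels for the clinical signs listed in the text:

```python
def risk_category(signs):
    """Illustrative risk stratification following the criteria above.

    `signs` is a set of strings; the label names are hypothetical
    shorthand for the clinical features described in the text.
    """
    high = {"depressed_conscious_state", "bradycardia", "shock", "hypoglycaemia"}
    intermediate = {"acidotic_breathing", "severe_dehydration", "lethargy",
                    "hyponatraemia", "hypokalaemia"}
    if signs & high:            # any high-risk sign takes precedence
        return "high"
    if signs & intermediate:
        return "intermediate"
    return "low"                # none of the listed factors
```

A child with both shock and lethargy is classified as high risk, matching the "any one of the following" wording for the high-risk group.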
Drug administration and blood sampling

Participants received witnessed doses of oral ciprofloxacin (10 mg/kg body weight), i.e. a standard treatment dose, every 12 h for 48 h, starting on the day of admission. Routine antimicrobials were given simultaneously as the empirical treatment for invasive infections. Since oral suspensions are not available locally, ciprofloxacin tablets (Bactiflox™ 250 mg, Mepha Pharma AG, Switzerland) were reformulated into an aqueous suspension by the study pharmacist. A separate tablet was used to prepare each individual dose and the suspension was used immediately. Individual doses were measured by the pharmacist, under the observation of the trial nurse, according to the patient's body weight. This approach has been used and reported previously.19

Children were allocated, using a computer-generated randomization list, to one of three blood sampling schedules: Group A at 2, 4, 8 and 24 h; Group B at 3, 5, 9 and 12 h; and Group C at 1, 3, 6 and 10 h. Children in each of the risk categories were evenly allocated across the sampling schedules. To minimize discomfort to the child, blood sampling for the measurement of ciprofloxacin concentrations was through an in situ cannula, which was used only for this purpose. At each timepoint, a venous blood sample (0.5 mL) was collected into a lithium heparin tube and centrifuged, and the plasma was separated and stored in cryovials at −20°C until analysed. A rapid, selective and sensitive HPLC method coupled with fluorescence detection was used to determine the concentration of ciprofloxacin in the plasma samples.20 The intra- and interassay imprecision of the HPLC method was <8.0%, and accuracy values ranged from 93% to 105% for quality control samples at 0.2, 1.8 and 3.6 mg/L.
Calibration curves of ciprofloxacin were linear over the concentration range 0.02–4 mg/L, with correlation coefficients ≥0.998.
Pharmacokinetic analysis

Population pharmacokinetic parameter estimates were obtained with NONMEM version VI using first-order conditional estimation with interaction.21 Post-processing of the NONMEM results was performed with Xpose version 4 programmed in R 2.9.2.22,23 Preliminary analyses found that a one-compartment elimination model adequately described the concentration–time profiles; absorption was described using first- and zero-order models with an absorption lag and with a transit compartment model.24 Relationships between the oral clearance of ciprofloxacin (CL) and weight, and between the oral volume of distribution (V) and weight, were described using an allometric approach,25 i.e.

CL = CLstd × (WT/WTstd)^0.75

V = Vstd × (WT/WTstd)^1

where WT is the child's body weight, WTstd is a standard (reference) weight, and CLstd and Vstd are the parameter values at the standard weight.

Between-subject variabilities (BSV) in the pharmacokinetic parameters were assumed to be log-normally distributed, and covariance between the variabilities in the pharmacokinetic parameters was investigated. A combined proportional and additive error model was used to describe the residual error.

A wide range of demographic, biochemical and haematological data were collected in the course of the study. In addition, creatinine clearance estimates for each patient, according to age range, were calculated using the equations of Schwartz et al.26,27 Potential relationships between the clinical and demographic factors and empirical Bayes' estimates (individual estimates) of ciprofloxacin oral CL, oral V and absorption rate were initially examined visually using scatter plots and by general additive modelling within Xpose.22 Factors identified as potential covariates that might explain variability in ciprofloxacin pharmacokinetics were then added individually to the population model.
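The structural model described above (one-compartment elimination, first-order absorption with a lag, allometric weight scaling and log-normal BSV) can be sketched as follows. This is a minimal illustration, not the study's fitted model: all parameter values (theta_cl, wt_std, omega) are placeholders, and bioavailability is folded into the oral parameters (CL/F, V/F), as in the text's convention.

```python
import math
import random

def conc(t, dose_mg, cl, v, ka, tlag):
    """Plasma concentration (mg/L) for a one-compartment model with
    first-order absorption and an absorption lag; cl and v are the
    oral (apparent) clearance and volume."""
    if t <= tlag:
        return 0.0
    ke = cl / v                       # elimination rate constant
    tt = t - tlag
    return dose_mg * ka / (v * (ka - ke)) * (math.exp(-ke * tt) - math.exp(-ka * tt))

def individual_cl(theta_cl, wt, wt_std=7.0, omega=0.4):
    """Allometric weight scaling (fixed exponent 0.75) combined with
    log-normally distributed between-subject variability; theta_cl,
    wt_std and omega are illustrative values only."""
    eta = random.gauss(0.0, omega)    # eta ~ N(0, omega^2)
    return theta_cl * (wt / wt_std) ** 0.75 * math.exp(eta)
```

Before the lag time the predicted concentration is zero; afterwards it follows the usual Bateman-type rise and fall governed by ka and CL/V.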
A statistically significant improvement in the fit of the model to the data was defined as a reduction in the objective function value (OFV) of ≥3.84 (the χ2 critical value for one degree of freedom at P < 0.05) in the forward stepwise analysis. The covariate that produced the greatest fall in OFV was included first, and then other covariates were added to and removed from the model in a stepwise manner that included changing the order of inclusion and removal. Statistical significance was defined as an increase in OFV of ≥6.63 (P < 0.01) when a covariate was removed. Additional criteria, such as goodness-of-fit plots, the precision of the parameter estimates and the ability of covariates to explain variability in the oral CL, oral V and absorption rate, were also considered.

The final population model was evaluated in three ways: a bootstrap sampling procedure with 1000 samples; a prediction-corrected visual predictive check (pcVPC) based on 1000 simulations; and examination of normalized prediction distribution errors (npde) from 1000 simulations. Both the bootstrapping procedure and the pcVPC were performed using the PsN toolkit;28,29 npde were computed using the software developed by Brendel et al.30

The time to maximum concentration (Tmax) and the maximum concentration of ciprofloxacin (Cmax) for each patient were obtained from the raw data and estimated using the individual pharmacokinetic parameter values. Estimates of steady-state AUC within each dosage interval (AUC12) and each day (AUC24) were calculated from the 12 hourly dose/CLi and the daily dose/CLi, respectively, where CLi are the individual estimates of oral CL.
Individual estimates of the elimination half-life (t½) were calculated from 0.693 × Vi/CLi, where Vi are the individual estimates of oral V.
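The secondary parameters defined above follow directly from the individual oral CL and V estimates. A minimal sketch (the function name and example values are illustrative, not study estimates):

```python
def derived_parameters(cl_i, v_i, dose_12h_mg):
    """Secondary parameters as defined in the text, for one patient:
    steady-state AUCs from dose/CL and half-life from 0.693*V/CL,
    with cl_i (L/h) and v_i (L) the individual oral estimates."""
    auc12 = dose_12h_mg / cl_i        # AUC per 12 h dosing interval (mg*h/L)
    auc24 = 2 * dose_12h_mg / cl_i    # daily dose / CL (mg*h/L)
    t_half = 0.693 * v_i / cl_i       # elimination half-life (h)
    return auc12, auc24, t_half
```

For example, a 70 mg 12-hourly dose with CL = 2 L/h and V = 10 L gives AUC12 = 35 mg·h/L, AUC24 = 70 mg·h/L and t½ ≈ 3.5 h.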
Monte Carlo simulations

The final parameters of the population pharmacokinetic model were used to simulate estimates of oral CL for 10 000 patients using NONMEM version VI.21 Relevant clinical characteristics (weight and sodium concentration) were assumed to arise from log-normal distributions with outer limits set to the values observed in the raw data. The incidence of ‘high risk’ in the simulated dataset was 31%. AUC24 estimates were then determined for each simulated patient from daily dose/oral CL. Simulations were performed for three dosage regimens: 20 mg/kg/day (the current daily dosage regimen); 30 mg/kg/day; and 45 mg/kg/day. For evaluation of these dosage regimens, MIC values were chosen across the range 0.03–8 mg/L. The probability of target attainment (PTA) was defined as the probability that the target AUC0–24/MIC ratio was achieved at each MIC. A target AUC0–24/MIC ratio of ≥125 was used.31 For each ciprofloxacin regimen, the highest MIC at which the PTA was ≥90% was defined as the pharmacokinetic/pharmacodynamic susceptible breakpoint.

A second analysis was conducted using the MIC distributions for Escherichia coli, Pseudomonas aeruginosa, Salmonella spp. and Klebsiella pneumoniae derived from the database of the European Committee on Antimicrobial Susceptibility Testing.32 These MIC distributions were extracted from 17 877 strains of E. coli, 27 825 strains of P. aeruginosa, 5898 strains of K. pneumoniae and 1733 strains of Salmonella spp. The cumulative fraction of response (CFR) was used to estimate the overall response of pathogens to ciprofloxacin with each of the three dosage regimens. This estimate accounts for the variability of drug exposure in the population and the variability in the MIC, combined with the distributions of MICs for the pathogens.
For each MIC, the fraction of simulated patients who met the pharmacodynamic target (AUC0–24/MIC ≥ 125) was multiplied by the fraction of the distribution of microorganisms at that MIC. The CFR was calculated as the sum of these fraction products over all MICs.
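The PTA and CFR calculations described above can be sketched as below. This is an illustrative simulation under assumed values: the population parameters (theta_cl, omega, the weight distribution) are placeholders, not the fitted estimates, and only the weight covariate is simulated.

```python
import math
import random

def simulate_auc24(n, daily_dose_mg_per_kg=20.0, median_wt=7.0,
                   theta_cl=1.5, omega=0.4, seed=1):
    """Simulate AUC0-24 (mg*h/L) = daily dose / oral CL for n virtual
    patients; all parameter values are illustrative placeholders."""
    rng = random.Random(seed)
    aucs = []
    for _ in range(n):
        wt = median_wt * math.exp(rng.gauss(0.0, 0.3))   # log-normal weight
        cl = theta_cl * (wt / median_wt) ** 0.75 * math.exp(rng.gauss(0.0, omega))
        aucs.append(daily_dose_mg_per_kg * wt / cl)
    return aucs

def pta(aucs, mic, target=125.0):
    """Probability of target attainment: fraction of simulated patients
    with AUC0-24/MIC at or above the target ratio."""
    return sum(a / mic >= target for a in aucs) / len(aucs)

def cfr(aucs, mic_distribution, target=125.0):
    """Cumulative fraction of response: PTA at each MIC weighted by the
    fraction of isolates at that MIC, summed over the distribution."""
    return sum(frac * pta(aucs, mic, target)
               for mic, frac in mic_distribution.items())
```

PTA falls monotonically as the MIC rises, and the CFR is simply the MIC-distribution-weighted average of the per-MIC PTAs.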
Participants and admission clinical characteristics

Of the 90 children with severe malnutrition admitted during the study period, 52 were enrolled into the study. Twelve who fulfilled the entry criteria declined to give consent, in four intravenous access was not possible, and 22 were excluded because recruitment to a specific subgroup had been completed. The demographic and clinical characteristics of the patients included in the study are summarized in Table 1. The median age [interquartile range (IQR)] was 23 months (15–33 months), 29 children (56%) were male, oedema (defining kwashiorkor) was present in 24 patients (46%) and eight children (15%) were HIV antibody positive. The median (IQR) weight was 6.9 kg (6.1–8.4 kg). The estimated creatinine clearance ranged from 5 to 128.7 mL/min/1.73 m2; only one patient had severe renal impairment. According to the risk stratification, mortality was 21%, 12% and 0% in the high-risk, intermediate-risk and low-risk groups, respectively. Bacteraemia was present in six children: three had E. coli, two had S. pneumoniae and one had K. pneumoniae. The isolates were tested against ampicillin, gentamicin, ciprofloxacin, chloramphenicol and ceftriaxone. The K. pneumoniae isolate was resistant to ampicillin only, while one E. coli isolate was resistant to all of the tested antibiotics. One S. pneumoniae isolate was resistant to ampicillin and had intermediate susceptibility to ciprofloxacin.
Ciprofloxacin was well tolerated and there were no adverse events reported with its use.

Table 1. Summary of the demographic and clinical characteristics of the patients who participated in the study

Characteristic | Number (%)/median (range) | Interquartile range
Male/female | 29/23 (56/44) |
Oedema | 24 (46) |
HIV positive | 8 (15) |
Age (months) | 23 (8–102) | 15–33
Weight (kg) | 6.9 (4.1–14.5) | 6.1–8.4
Height (cm) | 75.4 (58.5–114.4) | 70.1–81.4
MUAC (cm) | 11 (7.7–14.3) | 10–12
WHZ | −3.26 (−5.69–0.04) | −3.67 to −2.48
Vomiting | 15 (29) |
Shock | 7 (13) |
Dehydration | 25 (48) |
Low risk | 22 (42) |
Intermediate risk | 14 (27) |
High risk | 16 (31) |
White blood cells (×106/L) (6–17.5) | 13.3 (5.5–84.4) | 10.3–17.1
Haemoglobin (g/dL) (9–14) | 9.0 (2.1–12.8) | 7.2–9.8
Platelets (×106/L) (150–400) | 309 (16–1369) | 210–451
Sodium (mmol/L) (138–145) | 136 (120–160) | 131–138
Potassium (mmol/L) (3.5–5) | 3.1 (1.2–5.1) | 2.3–4.1
Glucose (mmol/L) (2.8–5) | 4 (0.4–11.4) | 2.8–4.7
Bicarbonate (mmol/L) (22–29) | 15.4 (4.7–26) | 11.7–18.5
Base excess (−4 to +2) | −8.0 (−26.9–2.1) | −13.1 to −4.5
Serum creatinine (μmol/L) (44–88) | 44 (27–676) | 36.8–51.9
Creatinine clearancea (mL/min/1.73 m2) | 85.8 (5.0–128.7) | 70.7–101.4

MUAC, mid-upper arm circumference. Reference ranges are shown in parentheses after the units.
aEstimated according to age using the equations of Schwartz et al.26,27
(mL/min/1.73 m2)85.8 (5.0–128.7)70.7–101.4MUAC, mid-upper arm circumference.aEstimated according to age using the equations of Schwartz et al.26,27\nSummary of the demographic and clinical characteristics of the patients who participated in the study\nMUAC, mid-upper arm circumference.\naEstimated according to age using the equations of Schwartz et al.26,27\n Pharmacokinetic data analysis A total of 202 plasma ciprofloxacin concentration measurements were available, with a median of 4 (range 2–4) measurements per patient. Individual concentration–time profiles are presented in Figure 1. Tmax ranged from 1 to 5 h (median 3 h) after the dose, and in 69% of patients Cmax was observed within 1–3 h. Cmax ranged from 0.6 to 4.5 mg/L with a median of 1.7 mg/L. Cmax was >1 mg/L and >2 mg/L in 77% and 31% of patients, respectively. Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L.\nFigure 1.Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.\nCiprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.\nBoth the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. 
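The creatinine clearance values in Table 1 were estimated from height and serum creatinine using the age-dependent Schwartz equations. A minimal sketch of that calculation follows; the k coefficients used here are the commonly cited original Schwartz values, and which variant the authors applied is an assumption:

```python
# Illustrative sketch of a Schwartz-style creatinine clearance estimate.
# The k coefficients (0.45 for infants under 1 year, 0.55 for children)
# are the commonly cited original Schwartz values and are an assumption
# about which variant was used in the study.

def schwartz_crcl(height_cm, scr_umol_per_l, k=0.55):
    """Estimated creatinine clearance (mL/min/1.73 m^2).

    Serum creatinine is converted from umol/L to mg/dL (1 mg/dL = 88.4 umol/L).
    """
    scr_mg_per_dl = scr_umol_per_l / 88.4
    return k * height_cm / scr_mg_per_dl

# Median patient from Table 1: height 75.4 cm, serum creatinine 44 umol/L.
print(round(schwartz_crcl(75.4, 44), 1))  # close to the reported median of 85.8
```

The rough agreement with the reported median (85.8 mL/min/1.73 m²) is only a sanity check; individual estimates in the study ranged from 5.0 to 128.7 mL/min/1.73 m².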
The transit compartment model was tested again with the final covariate model, but offered no clear advantage.\nPlots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V. Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. 
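The OFV reductions quoted above can be read as likelihood-ratio tests: for one added parameter, the drop in OFV is compared against a chi-squared distribution with 1 degree of freedom, so a drop of more than 3.84 corresponds to p < 0.05. A minimal sketch of that conversion:

```python
import math

def lrt_p_value(delta_ofv):
    """p-value of a likelihood-ratio test with 1 degree of freedom.

    For a chi-squared variable with 1 df, P(X > x) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(delta_ofv / 2))

print(round(lrt_p_value(3.84), 3))  # the conventional p < 0.05 cut-off
print(round(lrt_p_value(6.24), 3))  # sodium effect on oral CL
print(round(lrt_p_value(6.05), 3))  # high-risk category effect on oral CL
```

On this reading, both the sodium and high-risk effects individually clear the conventional significance threshold, consistent with their retention in the covariate model.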
The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below.

Table 2. Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children

Parameter                            Population estimate   Bootstrap estimate   Bootstrap 95% CI
θ1                                   42.7                  42.5                 37.0–49.3
θ2                                   0.0368                0.0361               0.0217–0.0446
θ3                                   −0.283                −0.285               −0.412 to −0.118
θ4                                   372                   367                  316–429
θ5                                   0.0291                0.0282               0.0155–0.0388
θ6                                   2.97                  3.44                 1.32–8.86
θ7                                   0.742                 0.792                0.168–0.924
BSV CL (%)                           38.1                  37.8                 28.7–45.8
BSV V (%)                            43.0                  42.9                 32.4–51.8
BSV ka (%)                           102                   110                  56–159
Residual error, additive (SD)        0.0273                0.0278               0.0041–0.0438
Residual error, proportional (%CV)   18.6                  17.8                 14.0–22.6

Structural model: TVCL = θ1 × (WT/70)^0.75 × [1 + θ2 × (Na+ − 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ − 136)]; TVKA = θ6; ALAG = θ7.
Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L).

The population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. The model described an increase (or decrease) in oral CL of 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the 'high-risk' category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L.

All three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r² = 0.659) and the individual parameter estimates (r² = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset.

Figure 2. Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line.

Figure 3. Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model.

Table 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories.

Table 3. Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition

Variable                                   n    Mean   SD     Median   Minimum   Maximum
CL (L/h)                                   52   7.43   3.54   7.19     1.83      17.1
CL (L/h/kg)                                52   1.02   0.52   0.87     0.32      2.54
CL (L/h/kg), low/intermediate risk         36   1.14   0.53   0.98     0.46      2.54
CL (L/h/kg), high risk                     16   0.77   0.41   0.67     0.32      1.53
V (L/kg)                                   52   5.47   2.69   4.49     2.14      14.2
t½ (h)                                     52   3.97   1.38   3.78     2.01      9.04
Observed Tmax (h)                          52   2.77   1.08   3.00     1.00      5.17
Model-predicted Tmax (h)                   52   1.91   0.58   1.79     1.09      3.85
Observed Cmax (mg/L)                       52   1.68   0.79   1.71     0.58      4.52
Model-predicted Cmax (mg/L)                52   1.51   0.60   1.50     0.61      3.56
AUC0–24 (mg·h/L)                           52   24.8   12.4   22.4     7.9       61.3
AUC0–24 (mg·h/L), low/intermediate risk    36   21.1   8.8    20.5     7.9       43.4
AUC0–24 (mg·h/L), high risk                16   32.9   15.5   29.7     13.1      61.3

Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC.
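The structural model in Table 2 can be applied directly to compute a typical patient's parameters, and from those the derived quantities in Table 3 (at steady state, AUC0–24 = daily dose / CL, and t½ = ln 2 × V / CL). A minimal sketch using the reported point estimates; the example covariate values are the study medians and are illustrative only:

```python
import math

# Point estimates from Table 2 (final population model).
THETA1 = 42.7    # standardized oral CL (L/h) for a 70 kg adult
THETA2 = 0.0368  # fractional change in CL per mmol/L sodium away from 136
THETA3 = -0.283  # fractional change in CL for the high-risk category
THETA4 = 372.0   # standardized oral V (L) for a 70 kg adult
THETA5 = 0.0291  # fractional change in V per mmol/L sodium away from 136

def typical_cl(weight_kg, sodium, high_risk):
    """Typical oral clearance (L/h) with allometric weight scaling."""
    cl = THETA1 * (weight_kg / 70) ** 0.75
    cl *= 1 + THETA2 * (sodium - 136)
    if high_risk:
        cl *= 1 + THETA3
    return cl

def typical_v(weight_kg, sodium):
    """Typical oral volume of distribution (L)."""
    return THETA4 * (weight_kg / 70) * (1 + THETA5 * (sodium - 136))

# Median study patient: 6.9 kg, sodium 136 mmol/L, not high risk.
cl = typical_cl(6.9, 136, high_risk=False)  # about 7.5 L/h, i.e. ~1.1 L/h/kg
v = typical_v(6.9, 136)                     # about 37 L, i.e. ~5.3 L/kg
auc = 20 * 6.9 / cl                         # AUC0-24 for 20 mg/kg/day, in mg.h/L
half_life = math.log(2) * v / cl            # elimination half-life, in h
```

These typical values sit close to the medians reported in Table 3, and the high-risk flag reproduces the roughly 28% lower clearance (and hence higher AUC0–24) seen in that subgroup.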
Monte Carlo simulations

The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day. The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L. When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested.

Table 4. Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coli

                   Cumulative fraction of predicted response (%)
Organism           Target AUC0–24/MIC ratio   20 mg/kg/day   30 mg/kg/day   45 mg/kg/day
Salmonella spp.    125                        96             98             99
P. aeruginosa      125                        43             55             64
K. pneumoniae      125                        76             80             83
E. coli            125                        85             87             88

Figure 4. Percentage probability of achieving a target AUC0–24/MIC ratio ≥125.
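The target-attainment analysis above can be sketched with a small Monte Carlo simulation: draw clearances from a lognormal distribution matching the estimated between-subject variability, compute AUC0–24 = daily dose / CL for each simulated patient, and report the fraction achieving AUC0–24/MIC ≥ 125. The snippet below is a simplified illustration, not the authors' simulation code: it varies only CL (using the 38.1% BSV from Table 2) for an assumed typical 6.9 kg patient.

```python
import math
import random

def prob_target_attainment(daily_dose_mg_per_kg, mic, weight_kg=6.9,
                           typical_cl_l_per_h=7.5, bsv_cv=0.381,
                           n_sim=20000, target=125, seed=1):
    """Fraction of simulated patients with AUC0-24/MIC >= target.

    CL is drawn from a mean-preserving lognormal distribution whose
    coefficient of variation matches the estimated BSV.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1 + bsv_cv ** 2))
    hits = 0
    for _ in range(n_sim):
        cl = typical_cl_l_per_h * math.exp(rng.gauss(0, sigma) - sigma ** 2 / 2)
        auc = daily_dose_mg_per_kg * weight_kg / cl
        if auc / mic >= target:
            hits += 1
    return hits / n_sim

# Higher daily doses push the PK/PD breakpoint to higher MIC values.
for dose in (20, 30, 45):
    print(dose, round(prob_target_attainment(dose, mic=0.125), 2))
```

With these assumed inputs the simulation lands close to the reported pattern at an MIC of 0.125 mg/L (roughly 76%, 95% and 99% attainment for 20, 30 and 45 mg/kg/day), although the study simulations also accounted for variability in the other model parameters and covariates.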
Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L.\nFigure 1.Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.\nCiprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.\nBoth the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. The transit compartment model was tested again with the final covariate model, but offered no clear advantage.\nPlots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. 
No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V. Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below.\nTable 2.Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished childrenParameterPopulation estimateBootstrap estimateBootstrap 95% CIθ142.742.537.0–49.3θ20.03680.03610.0217–0.0446θ3−0.283−0.285−0.412 to −0.118θ4372367316–429θ50.02910.02820.0155–0.0388θ62.973.441.32–8.86θ70.7420.7920.168–0.924BSV CL38.137.828.7–45.8BSV V43.042.932.4–51.8BSV ka10211056–159Residual error Additive (SD)0.02730.02780.0041–0.0438Proportional (%CV)18.617.814.0–22.6Structural model: TVCL = θ1 × (WT/70)0.75 × [1 + θ2 × (Na+ – 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ – 136)]; TVKA = θ6; ALAG = θ7.Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L).\n\n\n\n\nParameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children\nStructural model: TVCL = θ1 × (WT/70)0.75 × [1 + θ2 × (Na+ – 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ – 136)]; TVKA = θ6; ALAG = θ7.\nAbbreviations: BSV, between-subject variability expressed as a % 
coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L).\nThe population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. The model described an increase (or decrease) in oral CL by 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the ‘high-risk’ category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L.\nAll three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r2 = 0.659) and the individual parameter estimates (r2 = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset.\nFigure 2.Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line.\nFigure 3.Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. 
The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model.\nObserved versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line.\nPrediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model.\nTable 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was a wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. 
Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories.

Table 3. Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition

Variable                                  n    Mean   SD     Median  Minimum  Maximum
CL (L/h)                                  52   7.43   3.54   7.19    1.83     17.1
CL (L/h/kg)                               52   1.02   0.52   0.87    0.32     2.54
CL (L/h/kg), low/intermediate risk        36   1.14   0.53   0.98    0.46     2.54
CL (L/h/kg), high risk                    16   0.77   0.41   0.67    0.32     1.53
V (L/kg)                                  52   5.47   2.69   4.49    2.14     14.2
t½ (h)                                    52   3.97   1.38   3.78    2.01     9.04
Observed Tmax (h)                         52   2.77   1.08   3.00    1.00     5.17
Model-predicted Tmax (h)                  52   1.91   0.58   1.79    1.09     3.85
Observed Cmax (mg/L)                      52   1.68   0.79   1.71    0.58     4.52
Model-predicted Cmax (mg/L)               52   1.51   0.60   1.50    0.61     3.56
AUC0–24 (mg·h/L)                          52   24.8   12.4   22.4    7.9      61.3
AUC0–24 (mg·h/L), low/intermediate risk   36   21.1   8.8    20.5    7.9      43.4
AUC0–24 (mg·h/L), high risk               16   32.9   15.5   29.7    13.1     61.3

Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC.

The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day.
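The attainment percentages above can be approximated with a simple Monte Carlo sketch. The lognormal clearance distribution below (median from Table 3, with an assumed 50% coefficient of variation) is illustrative only and is not the study's actual simulation model.

```python
# Illustrative Monte Carlo probability-of-target-attainment calculation
# for an AUC0-24/MIC target of >= 125, where AUC0-24 = daily dose / CL.
import math
import random

def pta(daily_dose_mg_per_kg, mic_mg_per_l, n=20000,
        cl_median=0.87, cl_cv=0.5, seed=1):
    """Fraction of simulated patients achieving AUC0-24/MIC >= 125.
    cl_median (L/h/kg) is the Table 3 median; cl_cv is an assumed
    between-subject coefficient of variation (illustrative)."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cl_cv ** 2))  # lognormal SD
    hits = 0
    for _ in range(n):
        cl = cl_median * math.exp(rng.gauss(0.0, sigma))  # L/h/kg
        auc24 = daily_dose_mg_per_kg / cl                 # mg*h/L
        if auc24 / mic_mg_per_l >= 125.0:
            hits += 1
    return hits / n
```

With these assumptions the sketch reproduces the qualitative pattern in Figure 4: attainment at an MIC of 0.125 mg/L rises substantially as the daily dose increases from 20 to 45 mg/kg.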
The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L. When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested.

Table 4. Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coli

                                        Cumulative fraction of predicted response (%)
Organism          Target AUC0–24/MIC    20 mg/kg/day    30 mg/kg/day    45 mg/kg/day
Salmonella spp.   125                   96              98              99
P. aeruginosa     125                   43              55              64
K. pneumoniae     125                   76              80              83
E. coli           125                   85              87              88

Figure 4. Percentage probability of achieving a target AUC0–24/MIC ratio ≥125.

Discussion

This study examined the population pharmacokinetics of ciprofloxacin following oral doses of 10 mg/kg given 12 hourly on the day of admission to a group of 52 paediatric patients with severe malnutrition. Since the study was designed purely to investigate ciprofloxacin pharmacokinetics and not efficacy, all patients also received standard therapy for malnutrition and for sepsis on admission. Patient weight, high risk of mortality and serum sodium concentration were the main factors that influenced the concentration–time profile of ciprofloxacin in this patient group. Estimates of AUC24 ranged from 8 to 61 mg·h/L, indicating that an AUC/MIC ratio ≥125 would only be achieved in all patients studied if the MIC was <0.06 mg/L.

Sparse sampling was necessary due to the nature of the population.
The sampling windows covered different sections of the 12 h dosage interval; additional trough concentrations measured after the second dose were available from 16 patients. The observed Cmax concentrations ranged from 0.6 to 4.5 mg/L. This range is lower than the values averaging ∼8.4 mg/L reported by Schaefer et al.18 following intravenous doses of 10 mg/kg to paediatric patients with cystic fibrosis, but is consistent, when corrected for dose, with their range of ∼2.5–5 mg/L (mean 3.5 mg/L) achieved with an oral dose of 15 mg/kg. In contrast, despite using a higher dose of 15 mg/kg, Peltola et al.19 observed similar values of Cmax (0.5–5.3 mg/L) when they administered ground tablets in water, and mean Cmax values of ∼2–2.7 mg/L following an oral suspension of 10 mg/kg.33 The observed variability in Cmax and Tmax (1–5 h) in the present study may reflect a combination of variable oral bioavailability and the sampling strategy. One limitation of the present study was that most samples were taken ≥2 h after the dose, by which time Cmax may already have been attained. Peak concentrations were typically observed at 1–2 h in previous studies.19,33 The observed Cmax measurements suggest that MIC values <0.1 mg/L would ideally be necessary to consistently achieve Cmax/MIC ratios of >10, as recommended by MacGowan et al.34

The concentration–time data were adequately described by first-order absorption with lag and a monoexponential decline. Studies following the intravenous administration of ciprofloxacin have identified a biexponential decline with a short distribution half-life of ∼10–30 min,16–18,35,36 but the sparse sampling schedule in the present study precluded the identification of a distribution phase. The population model indicated a rapid absorption of ciprofloxacin with an estimated half-life of 14 min after a lag of ∼45 min.
Previous population analyses in paediatric patients reported shorter or similar absorption lag times, but rates of absorption were slower, with half-lives ranging from ∼30 to 96 min.16–18 In contrast, Peltola et al.,19 who also used ground tablets, reported mean absorption half-lives of 24 min in infants up to 14 weeks old and 17 min in children aged 1–5 years. Both the present and previous population studies identified wide variability in the absorption rate, with BSV estimates ranging from 50% to 103%.16,17

Rajagopalan and Gastonguay17 reported a standardized clearance of 30.3 L/h/70 kg in paediatric patients aged 14 weeks to 17 years. Correcting for their bioavailability estimate of 61% gives an oral clearance of 49.7 L/h/70 kg, which is similar to the value of 42.7 L/h/70 kg obtained in the present study. Both results are reasonably consistent with values of ∼40–70 L/h reported in adults with normal renal function.37–39 Individual weight-corrected oral CL estimates were in the range 0.3–2.5 L/h/kg in the present study and were similar to the CL values reported by Rajagopalan and Gastonguay17 (0.2–1.3 L/h/kg), when corrected for bioavailability (0.3–2.2 L/h/kg), and the oral CL observed by Peltola et al.33 (1–1.5 L/h/kg). The population model developed by Payen et al.16 predicts an oral clearance increasing from 2 to 20 L/h over the age range covered in the present study. These values are again consistent with the present observations of 1.8–17 L/h. These findings suggest that oral CL in this population of malnourished children is similar to that reported in children without malnutrition.

Since no intravenous ciprofloxacin dose was given, oral bioavailability could not be determined directly from this study. However, the similarity between the estimates of oral CL in the present study and the results from previous studies suggests that bioavailability may also be similar.
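The dose correction used in these between-study comparisons is simply a division of systemic clearance by the bioavailability fraction (CL/F = CL / F); as a minimal sketch:

```python
# Converting a systemic clearance to an apparent oral clearance:
# CL/F = CL_systemic / F, where F is the oral bioavailability fraction.

def oral_clearance(cl_systemic_l_per_h, bioavailability):
    """Apparent oral clearance (CL/F) from systemic CL and F (0-1)."""
    return cl_systemic_l_per_h / bioavailability
```

For example, 30.3 L/h/70 kg with F = 0.61 gives ≈49.7 L/h/70 kg, the value quoted above.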
Ciprofloxacin is generally well absorbed, with a typical bioavailability of ∼70% in adults15 and 60%–70% in children.16–18,40 Rubio et al.40 reported a lower bioavailability in children with cystic fibrosis aged 5–12 years (68%) compared with those aged 13–17 years (95%), but this age effect was not observed in the other studies. The formulation used in the present study, i.e. tablets reformulated into a suspension with water, has practical advantages over the commercial oral suspension,33 as it is much cheaper and easier to prepare on site.

Another major aim of this study was to evaluate the impact of feeding on oral clearance. Both milk and divalent cations (such as magnesium and other minerals included in the nutritional milk) have been reported to chelate quinolones and reduce the oral bioavailability of ciprofloxacin.41–43 Since most children are managed in resource-poor healthcare settings with a limited number of healthcare personnel, it is both impractical and difficult to ensure that feeding times are synchronized around the drug administration times. Consequently, a significant proportion of children are likely to receive ciprofloxacin concomitantly with food, and a reduction in bioavailability could compromise efficacy. Although the preliminary analysis found an increase in oral clearance in patients who received ciprofloxacin with feeding, the objective function value only fell by 4.5 points, indicating a weak effect. In the final model, no influence of feeding was identified. Thirteen of the 16 patients who were given ciprofloxacin with food were in the low/intermediate-risk category, which was associated with higher oral CL and lower AUC estimates. The influence of risk may therefore have confounded the identification of a food effect, if one existed. Conversely, if food was important, it may have enhanced the apparent influence of risk in the model. The lack of a clear influence of feeding on absorption is a positive finding.
If a significant interaction had been observed, then the future use of this antimicrobial would have been compromised by the regular feeding patterns required, and by possible variability in gut motility and gastric emptying in children with severe malnutrition.

Dehydration and shock were initially found to influence oral CL when examined alone, but both were related to other factors that had a more powerful effect. Of the 16 patients in the high-risk category, 12 had dehydration (of whom 11 had diarrhoea) and 6 had shock. ‘High risk’ therefore accounted for both of these factors and only sodium concentration provided an additional improvement in the fit of the model to the data. An unexpected finding was the lack of an effect of renal function on oral CL, since ciprofloxacin CL is known to decrease in renal impairment.37–39,44 However, the only patient in the group with overt renal failure (creatinine concentration 676 μmol/L) was also categorized as high risk. This individual had one of the lowest estimates of oral CL at 1.8 L/h (0.33 L/h/kg). Apart from this one individual, no other patient had severe renal impairment; the highest serum creatinine concentration in other patients was 95 μmol/L. Interestingly, another patient in the high-risk category had a similarly very low oral CL (0.32 L/h/kg), despite a creatinine concentration of 76 μmol/L. The t½ of ciprofloxacin is ∼3–5 h in adults with normal renal function.37,38,44 Similar values have been found in previous studies in infants and young children16,33,45 and in the present study, where t½ had a median of 3.8 h and ranged from 2 to 9 h.

Although children in the high-risk category (including children with impaired consciousness, shock and hypoglycaemia) had significantly lower estimates of oral CL, individual estimates varied widely in both groups (Table 3).
‘High-risk’ children represent the most critically ill cases, in whom both the logistics of administration and the gut absorption of oral antimicrobial agents are likely to be compromised. Importantly, this finding highlights the need for parenteral agents for the most critically ill, for which third-generation cephalosporins are likely to be the most appropriate. We have previously shown delayed uptake and reduced clearance of gentamicin following an intramuscular dose of 7.5 mg/kg in children with severe malnutrition complicated by septic shock.10

The finding that a low sodium concentration is associated with a reduced oral CL is also of interest. Hyponatraemia in severe malnutrition has been postulated to be due to ‘adaptive reduction’, whereby the normal homeostatic cellular ATPase and sodium-potassium pumps are faulty, resulting in low sodium due to increased extracellular water.46,47 The reasons why low sodium is associated with lower estimates of oral CL and oral V are not clear; although statistically significant, the inclusion of this factor only reduced BSV in oral CL by 6% and BSV in oral V by 4%, so it had a weak effect overall and may be an incidental finding.

The standardized estimate of oral V (372 L/70 kg) was similar to that reported by Forrest et al.37 in adult patients with normal renal function (321 L/1.73 m2), but higher than the estimate of volume of distribution at steady state (Vss) obtained by Rajagopalan and Gastonguay17 in paediatric patients (240 L/70 kg when corrected for bioavailability). Individual estimates of oral V ranged from 2 to 14 L/kg and the median of 4.5 L/kg was more than twice the values of ∼2 L/kg obtained by Schaefer et al.18 and Payen et al.16 Overall, these results indicate that oral V is elevated in patients with malnutrition.
Similar results have been observed with gentamicin in malnourished children, with both a higher mean and wide interpatient variability.10 As with oral CL, a small but significant relationship between oral V and sodium concentration was identified, with higher sodium concentrations being associated with larger estimates of oral V.

For fluoroquinolones in general, the ratio of the daily AUC to the MIC (AUC/MIC) is likely to be the pharmacodynamic criterion most predictive of clinical outcome.31 Forrest et al.31 found that the probability of clinical cure in a group of seriously ill patients was 80% if the AUC/MIC was ≥125, but only 42% below this value, and that faster eradication rates occurred if the AUC/MIC was >250. The median estimate of steady-state AUC0–12 in the present study (11.2 mg·h/L) is lower than the mean values of ∼13–19 mg·h/L reported by Lipman et al.48 following intravenous doses of 10 mg/kg twice daily to paediatric patients with sepsis. These differences probably reflect absorption, since correcting for a bioavailability of 70% yields results similar to the present study (predicted oral AUC0–12 range 9–13 mg·h/L). Even with the higher AUC, Lipman et al. recommended increasing the dose to 10 mg/kg 8 hourly to obtain AUC/MIC values of 100–150 for organisms with MICs >0.3 mg/L.45 Estimates of AUC0–24 in the present study ranged from 8 to 61 mg·h/L. The individual results indicated that only 12% of the study patients would achieve an AUC/MIC ≥125 if the MIC of the organism was 0.3 mg/L, and 0% if it was 0.5 mg/L. Monte Carlo simulations produced slightly higher results (24% and 2%, respectively). Using a range of MIC values, it was found that a dose of ≥30 mg/kg per day would be required to achieve an AUC/MIC of ≥125 in >90% of patients if the MIC was 0.125 mg/L, and that an MIC of <0.06 mg/L was necessary to achieve satisfactory AUC/MIC values with a dose of 20 mg/kg/day.
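The AUC/MIC target translates directly into a required daily exposure (AUC required = 125 × MIC), so individual attainment reduces to a comparison against each patient's AUC0–24 estimate. A minimal sketch (the example AUC values in the test are hypothetical, not study data):

```python
# The AUC/MIC target maps to a required daily AUC: AUC_required = target * MIC.

def required_auc(mic_mg_per_l, target=125.0):
    """Daily AUC (mg*h/L) needed to reach the AUC0-24/MIC target."""
    return target * mic_mg_per_l

def fraction_attaining(auc_values, mic_mg_per_l, target=125.0):
    """Fraction of individual AUC0-24 estimates meeting the target."""
    need = required_auc(mic_mg_per_l, target)
    return sum(a >= need for a in auc_values) / len(auc_values)
```

At an MIC of 0.3 mg/L the required AUC0–24 is 37.5 mg·h/L, and at 0.5 mg/L it is 62.5 mg·h/L, above the highest individual estimate (61 mg·h/L), which is why no study patient attained the target at that MIC.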
Using internationally derived distributions of the MICs of a range of organisms, including the most common isolates in children with severe malnutrition complicated by invasive bacterial disease, a dose of 20 mg/kg/day was sufficient for most isolates of E. coli and Salmonella spp., but 30 mg/kg/day would be necessary to treat Klebsiella infections. Even with higher doses of 45 mg/kg/day, targets for P. aeruginosa could only be achieved in 64% of cases.

Similar problems of underdosing have also been identified in adults. Standard intravenous doses of 400 mg twice daily yielded inadequate AUC/MIC and Cmax/MIC ratios in critically ill patients unless the MIC was <0.25 mg/L.36 Simulations conducted by Montgomery et al.35 demonstrated that, for an MIC of 0.5 mg/L, 400 mg 12 hourly would only achieve an AUC/MIC ≥125 in 15% of adults with cystic fibrosis, and that an increase to 600 mg 8 hourly would be required to achieve >90% success. If the dose used in the present study was increased to 15 mg/kg 8 hourly, and assuming no change in bioavailability, the probability of achieving an AUC/MIC ≥125 would increase to 83% for an MIC of 0.3 mg/L and 32% for an MIC of 0.5 mg/L. However, 37% of patients would then reach daily AUCs >60 mg·h/L. These values are higher than the mean daily AUCs reported following the administration of intravenous high-dose ciprofloxacin (400 mg 8 hourly) to critically ill adults48 and it is possible that such high exposure would increase the risk of toxicity in such patients.

Conclusions

The pharmacokinetics of oral ciprofloxacin in children with severe malnutrition is influenced by weight, serum sodium concentration and the risk of mortality, but there was high variability in the clearance and rate of absorption. Oral ciprofloxacin absorption was unaffected by the simultaneous administration of nutritional feeds. An oral dose of 10 mg/kg twice daily should be effective against E.
coli and Salmonella spp., but a higher dose of 10 mg/kg three times a day would be recommended for K. pneumoniae. Oral ciprofloxacin is unlikely to be an effective treatment for P. aeruginosa. Irrespective of the bacterial pathogen, patients with severe illness, at high risk of mortality, should initially receive intravenous antibiotics.

Funding

This study was supported by Wellcome Trust core funding to the KEMRI-Wellcome Trust Research Programme (grant reference no. 077092).
Nahashon Thuo is supported by a Wellcome Trust Masters Training Fellowship (grant reference no. 089353/Z/09/Z). The funding sources had no role in study design, analysis or in the writing of the report.

Transparency declarations

None to declare.
Keywords: quinolone, drug absorption, marasmus, kwashiorkor, Gram-negative
Introduction: Severe malnutrition remains a common cause of admission to hospital in less-developed countries. Many centres, particularly in Africa, report poor outcome despite adherence to recommended treatment guidelines.1–4 The children at the greatest risk of fatal outcome are those with Gram-negative septicaemia, constituting 48%–55% of invasive bacterial pathogens, and those admitted with diarrhoea and/or shock.5,6 Changes in the intestinal mucosal integrity and gut microbial balance occur in severe malnutrition,7 resulting in treatment failure and adverse clinical outcome.8 The higher prevalence of gut barrier dysfunction in children with severe malnutrition may have important effects on the absorption of antimicrobials and their bioavailability, and therefore may limit choices for the delivery of antimicrobial medication. Children with severe and complicated malnutrition routinely receive broad-spectrum parenteral antibiotics.9 In vitro antibiotic susceptibility testing indicates that up to 85% of organisms are fully susceptible to the first-line treatment, parenteral ampicillin and gentamicin, recommended by the WHO for children with severe and complicated malnutrition.4 Pharmacokinetic studies have demonstrated satisfactory plasma concentrations of these commonly used antibiotics.10 However, in vitro resistance has been associated with later deaths, and the current second-line antibiotic (chloramphenicol) was found to offer little advantage over the ampicillin and gentamicin combination.4 Fluoroquinolones are effective against most Gram-negative organisms and have activity against Gram-positive organisms, especially when given in combination.11 The quinolones are used in the treatment of serious bacterial infections in adults; however, their use in children has been restricted due to concerns about potential cartilage damage.12 Nevertheless, quinolones are increasingly prescribed for paediatric patients. 
Ciprofloxacin is licensed in children >1 year of age for pseudomonal infections in cystic fibrosis, for complicated urinary tract infections, and for the treatment and prophylaxis of inhalation anthrax.13 When the benefits of treatment outweigh the risks, ciprofloxacin is also licensed in the UK for children >1 year of age with severe respiratory tract and gastrointestinal system infection.14 In its many years of clinical use, ciprofloxacin has been found effective, even with oral administration, owing to its bioavailability of ∼70%.15 However, few studies have evaluated the population pharmacokinetics of ciprofloxacin in children,16–18 and no studies have been conducted in severe malnutrition. For the treatment of Gram-negative infection, common in severe malnutrition, pragmatic and cost-effective treatments are needed to improve outcome. Since intravenous formulations of fluoroquinolones are very expensive, they are rarely used in resource-poor settings. Oral formulations would appear to be the best option in patients who are able to ingest and adequately absorb medication. The aims of this study were to determine the pharmacokinetic profile of oral ciprofloxacin given at a dose of 10 mg/kg twice daily to children admitted to hospital with severe malnutrition, to develop a population model to describe the pharmacokinetics of ciprofloxacin in this patient group, and to use Monte Carlo simulation techniques to investigate potential relationships between dosage regimens and antimicrobial efficacy.

Methods

Patients

The study was conducted at Kilifi District Hospital, Kenya. All children admitted to the ward were examined by a member of the clinical research team. Between July 2008 and February 2009, children >6 months of age were assessed for eligibility for the study.
Eligible children had severe malnutrition, defined as one of the following: a weight-for-height z-score (WHZ) of ≤−3; a mid-upper arm circumference of <11 cm; or the presence of bilateral pedal oedema (kwashiorkor). Children with evidence of intrinsic renal disease (creatinine concentration >300 μmol/L and hypertension or hyperkalaemia) were excluded. No cases had coexisting chronic bone or joint disease, or were concurrently prescribed antacids, ketoconazole, theophylline or corticosteroids. The study was explained to the child's parent or guardian in their usual language and written informed consent was obtained. The Kenya Medical Research Institute/National Ethical Review Committee approved the study. Baseline laboratory tests included a full blood count, blood film for malaria parasites, plasma creatinine, electrolytes, plasma glucose, blood gases, blood culture and an HIV rapid antibody test. Blood cultures were processed by a BACTEC 9050 instrument (Becton Dickinson, NJ, USA). Children were treated according to the standard WHO management guidelines for severe malnutrition. These include nutritional support with a special milk-based formula (F75 and F100), multivitamin and multimineral supplementation and reduced-sodium oral rehydration solution (RESOMAL) for children with diarrhoea (>3 watery stools/day). At admission, all children were prescribed parenteral ampicillin (50 mg/kg four times a day) and intramuscular gentamicin (7.5 mg/kg once daily) for 7 days. These were revised when indicated by the child's clinical condition or culture results. Other fluoroquinolones were not prescribed during the study period, since they were not routinely available.

Study procedures

Ciprofloxacin concentration–time profiles were obtained during two study periods. In the initial study period, children were given the ciprofloxacin either 2 h before or 2 h after they had their nutritional milks or meal.
In this first period of the study (n = 36), an equal number of patients were included in three subgroups: low risk, intermediate risk and high risk of fatality. The stratification of risk was based on a previous report.4 High risk included children presenting with one of the following: depressed conscious state; bradycardia (heart rate <80 beats per minute); evidence of shock (capillary refill time ≥2 seconds, temperature gradient or weak pulse); or hypoglycaemia (blood glucose <3 mmol/L). Intermediate risk included any one of: deep ‘acidotic’ breathing; signs of severe dehydration (plus diarrhoea); lethargy; hyponatraemia (sodium <125 mmol/L); or hypokalaemia (potassium <2.5 mmol/L). Children with low risk had none of these factors. In the second study period (n = 16), the drug was administered at the time the children received their nutritional feeds, without any risk stratification.

Drug administration and blood sampling

Participants received witnessed doses of oral ciprofloxacin (10 mg/kg body weight), i.e. a standard treatment dose, every 12 h for 48 h, starting on the day of admission. Routine antimicrobials were given simultaneously as the empirical treatment for invasive infections. Since oral suspensions are not available locally, ciprofloxacin tablets (Bactiflox™ 250 mg, Mepha Pharma AG, Switzerland) were reformulated into an aqueous suspension by the study pharmacist. A separate tablet was used to prepare each individual dose and the suspension was then used immediately. Individual doses were measured by the pharmacist under the observation of the trial nurse, according to the patient's body weight. This approach has been used and reported previously.19 Children were allocated, using a computer-generated randomization list, to one of three blood sampling schedules: Group A at 2, 4, 8 and 24 h; Group B at 3, 5, 9 and 12 h; and Group C at 1, 3, 6 and 10 h. Children in each of the risk categories were evenly allocated across the sampling schedules. To minimize discomfort to the child, blood sampling for the measurement of ciprofloxacin concentrations was through an in situ cannula, which was only used for this purpose. At each timepoint, a venous blood sample (0.5 mL) was collected into a lithium heparin tube, centrifuged, and the plasma separated and stored in cryovials at −20°C until analysed. A rapid, selective and sensitive HPLC method coupled with fluorescence detection was used to determine the concentration of ciprofloxacin in the plasma samples.20 The intra- and interassay imprecisions of the HPLC method were <8.0%, and accuracy values ranged from 93% to 105% for quality control samples at 0.2, 1.8 and 3.6 mg/L.
Calibration curves of ciprofloxacin were linear over the concentration range of 0.02–4 mg/L, with correlation coefficients ≥0.998.
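The weight-based dose preparation described above is straightforward arithmetic; a minimal sketch follows. The 10 mg/kg dose and the 250 mg tablet strength are from the study, but the suspension volume per tablet (and hence the 10 mg/mL concentration) is a hypothetical example value, as the study does not report it.

```python
# Weight-based dose and suspension-volume calculation. The 10 mg/kg dose
# and 250 mg tablet strength are from the study; the 25 mL suspension
# volume per tablet (giving 10 mg/mL) is a hypothetical example value.

def dose_mg(weight_kg, mg_per_kg=10.0):
    """Ciprofloxacin dose in mg for a given body weight."""
    return mg_per_kg * weight_kg

def dose_volume_ml(weight_kg, tablet_mg=250.0, suspension_ml=25.0):
    """Volume of the reformulated suspension delivering the dose."""
    concentration_mg_per_ml = tablet_mg / suspension_ml
    return dose_mg(weight_kg) / concentration_mg_per_ml
```

For a 7 kg child this gives a 70 mg dose, i.e. 7 mL of the assumed 10 mg/mL suspension.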
Pharmacokinetic analysis Population pharmacokinetic parameter estimates were obtained with NONMEM version VI using first-order conditional estimation with interaction.21 Post-processing of the NONMEM results was performed with Xpose version 4 programmed in R 2.9.2.22,23 Preliminary analyses found that a one-compartment elimination model adequately described the concentration–time profiles; absorption was described using first- and zero-order models with an absorption lag and with a transit compartment model.24 Relationships between the oral clearance of ciprofloxacin (CL) and weight, and between the oral volume of distribution (V) and weight were described using an allometric approach,25 i.e. Between-subject variabilities (BSV) in the pharmacokinetic parameters were assumed to be log-normally distributed and covariance between the variabilities in the pharmacokinetic parameters was investigated. A combined proportional and additive error model was used to describe the residual error. A wide range of demographic, biochemical and haematological data were collected in the course of the study. In addition, creatinine clearance estimates for each patient, according to age range, were calculated using the equations of Schwarz et al.26,27 Potential relationships between the clinical and demographic factors and empirical Bayes' estimates (individual estimates) of ciprofloxacin oral CL, oral V and absorption rate were initially examined visually using scatter plots and by general additive modelling within Xpose.22 Factors identified as potential covariates that might explain variability in ciprofloxacin pharmacokinetics were then added individually to the population model. A statistically significant improvement in the fit of the model to the data was defined as a reduction in the objective function value (OFV) of ≥3.84 (P <0.05) in the forward stepwise analysis. 
The covariate that produced the greatest fall in OFV was included first, and then other covariates were added to and removed from the model in a stepwise manner that included changing the order of inclusion and removal. Statistical significance was defined as an increase in OFV of ≥6.63 (P < 0.01) when a covariate was removed. Additional criteria, such as goodness-of-fit plots, the precision of the parameter estimates, and the ability of covariates to explain variability in the oral CL, oral V and absorption rate, were also considered. The final population model was evaluated in three ways: a bootstrap sampling procedure with 1000 samples; a prediction-corrected visual predictive check (pcVPC) based on 1000 simulations; and by examination of normalized prediction distribution errors (npde) from 1000 simulations. Both the bootstrapping procedure and the pcVPC were performed using PsN toolkit;28,29 npde were computed using the software developed by Brendel et al.30 Time to maximum (Tmax) and maximum concentrations of ciprofloxacin (Cmax) for each patient were obtained from the raw data and estimated using the individual pharmacokinetic parameter values. Estimates of steady-state AUC within each dosage interval (AUC12) and each day (AUC24) were calculated from 12 hourly dose/CLi and daily dose/CLi, respectively, where CLi are the individual estimates of oral CL. Individual estimates of the elimination half-life (t½) were calculated from 0.693 × Vi/CLi, where Vi are the individual estimates of oral V. 
Population pharmacokinetic parameter estimates were obtained with NONMEM version VI using first-order conditional estimation with interaction.21 Post-processing of the NONMEM results was performed with Xpose version 4 programmed in R 2.9.2.22,23 Preliminary analyses found that a one-compartment elimination model adequately described the concentration–time profiles; absorption was described using first- and zero-order models with an absorption lag and with a transit compartment model.24 Relationships between the oral clearance of ciprofloxacin (CL) and weight, and between the oral volume of distribution (V) and weight were described using an allometric approach,25 i.e. Between-subject variabilities (BSV) in the pharmacokinetic parameters were assumed to be log-normally distributed and covariance between the variabilities in the pharmacokinetic parameters was investigated. A combined proportional and additive error model was used to describe the residual error. A wide range of demographic, biochemical and haematological data were collected in the course of the study. In addition, creatinine clearance estimates for each patient, according to age range, were calculated using the equations of Schwarz et al.26,27 Potential relationships between the clinical and demographic factors and empirical Bayes' estimates (individual estimates) of ciprofloxacin oral CL, oral V and absorption rate were initially examined visually using scatter plots and by general additive modelling within Xpose.22 Factors identified as potential covariates that might explain variability in ciprofloxacin pharmacokinetics were then added individually to the population model. A statistically significant improvement in the fit of the model to the data was defined as a reduction in the objective function value (OFV) of ≥3.84 (P <0.05) in the forward stepwise analysis. 
The covariate that produced the greatest fall in OFV was included first, and then other covariates were added to and removed from the model in a stepwise manner that included changing the order of inclusion and removal. Statistical significance was defined as an increase in OFV of ≥6.63 (P < 0.01) when a covariate was removed. Additional criteria, such as goodness-of-fit plots, the precision of the parameter estimates, and the ability of covariates to explain variability in the oral CL, oral V and absorption rate, were also considered. The final population model was evaluated in three ways: a bootstrap sampling procedure with 1000 samples; a prediction-corrected visual predictive check (pcVPC) based on 1000 simulations; and by examination of normalized prediction distribution errors (npde) from 1000 simulations. Both the bootstrapping procedure and the pcVPC were performed using PsN toolkit;28,29 npde were computed using the software developed by Brendel et al.30 Time to maximum (Tmax) and maximum concentrations of ciprofloxacin (Cmax) for each patient were obtained from the raw data and estimated using the individual pharmacokinetic parameter values. Estimates of steady-state AUC within each dosage interval (AUC12) and each day (AUC24) were calculated from 12 hourly dose/CLi and daily dose/CLi, respectively, where CLi are the individual estimates of oral CL. Individual estimates of the elimination half-life (t½) were calculated from 0.693 × Vi/CLi, where Vi are the individual estimates of oral V. Monte Carlo simulations The final parameters of the population pharmacokinetic model were used to simulate estimates of oral CL for 10 000 patients using NONMEM version VI.21 Relevant clinical characteristics (weight and sodium concentration) were assumed to arise from log-normal distributions with outer limits set to the values observed in the raw data. The incidence of ‘high risk’ in the simulated dataset was 31%. 
AUC24 estimates were then determined for each simulated patient from daily dose/oral CL. Simulations were performed for three dosage regimens: 20 mg/kg/day (current daily dosage regimen); 30 mg/kg/day; and 45 mg/kg/day. For evaluation of these dosage regimens, MIC values were chosen across the range 0.03–8 mg/L. The probability of target attainment (PTA) was defined as the probability that the target AUC0–24/MIC ratio was achieved at each MIC. Target AUC0–24/MIC ratios of ≥125 were used.31 For each ciprofloxacin regimen, the highest MIC at which the PTA was ≥90% was defined as the pharmacokinetic/pharmacodynamic susceptible breakpoint. A second analysis was conducted using the MIC distributions for Escherichia coli, Pseudomonas aeruginosa, Salmonella spp. and Klebsiella pneumoniae derived from the database of the European Committee on Antimicrobial Susceptibility Testing.32 These MIC distributions were extracted from 17 877 strains of E. coli, 27 825 strains of P. aeruginosa, 5898 strains of K. pneumoniae and 1733 strains of Salmonella spp. The cumulative fraction of response (CFR) was used to estimate the overall response of pathogens to ciprofloxacin with each of the three dosage regimens. This estimate accounts for the variability of drug exposure in the population and the variability in the MIC combined with the distributions of MICs for the pathogens. For each MIC, the fraction of simulated patients who met the pharmacodynamic target (AUC0–24/MIC ≥ 125) was multiplied by the fraction of the distribution of microorganisms for each MIC. The CFR was calculated as the sum of fraction products over all MICs. The final parameters of the population pharmacokinetic model were used to simulate estimates of oral CL for 10 000 patients using NONMEM version VI.21 Relevant clinical characteristics (weight and sodium concentration) were assumed to arise from log-normal distributions with outer limits set to the values observed in the raw data. 
The incidence of ‘high risk’ in the simulated dataset was 31%. AUC24 estimates were then determined for each simulated patient from daily dose/oral CL. Simulations were performed for three dosage regimens: 20 mg/kg/day (current daily dosage regimen); 30 mg/kg/day; and 45 mg/kg/day. For evaluation of these dosage regimens, MIC values were chosen across the range 0.03–8 mg/L. The probability of target attainment (PTA) was defined as the probability that the target AUC0–24/MIC ratio was achieved at each MIC. Target AUC0–24/MIC ratios of ≥125 were used.31 For each ciprofloxacin regimen, the highest MIC at which the PTA was ≥90% was defined as the pharmacokinetic/pharmacodynamic susceptible breakpoint. A second analysis was conducted using the MIC distributions for Escherichia coli, Pseudomonas aeruginosa, Salmonella spp. and Klebsiella pneumoniae derived from the database of the European Committee on Antimicrobial Susceptibility Testing.32 These MIC distributions were extracted from 17 877 strains of E. coli, 27 825 strains of P. aeruginosa, 5898 strains of K. pneumoniae and 1733 strains of Salmonella spp. The cumulative fraction of response (CFR) was used to estimate the overall response of pathogens to ciprofloxacin with each of the three dosage regimens. This estimate accounts for the variability of drug exposure in the population and the variability in the MIC combined with the distributions of MICs for the pathogens. For each MIC, the fraction of simulated patients who met the pharmacodynamic target (AUC0–24/MIC ≥ 125) was multiplied by the fraction of the distribution of microorganisms for each MIC. The CFR was calculated as the sum of fraction products over all MICs. Patients: The study was conducted at Kilifi District Hospital, Kenya. All children admitted to the ward were examined by a member of the clinical research team. Between July 2008 and February 2009, children >6 months of age were assessed for eligibility for the study. 
Eligible children had severe malnutrition, defined as one of the following: weight-for-height z-score (WHZ) of ≤ −3; mid-upper arm circumference of <11 cm; or the presence of bilateral pedal oedema (kwashiorkor). Children with evidence of intrinsic renal disease (creatinine concentration >300 μmol/L and hypertension or hyperkalaemia) were excluded. No cases had coexisting chronic bone or joint disease, and none were concurrently prescribed antacids, ketoconazole, theophylline or corticosteroids. The study was explained to the child's parent or guardian in their usual language and written informed consent was obtained. The Kenya Medical Research Institute/National Ethical Review Committee approved the study. Baseline laboratory tests included a full blood count, blood film for malaria parasites, plasma creatinine, electrolytes, plasma glucose, blood gases, blood culture and an HIV rapid antibody test. Blood cultures were processed by a BACTEC 9050 instrument (Becton Dickinson, NJ, USA). Children were treated according to the standard WHO management guidelines for severe malnutrition. These include nutritional support with a special milk-based formula (F75 and F100), multivitamin and multimineral supplementation and reduced-sodium oral rehydration solution (RESOMAL) for children with diarrhoea (>3 watery stools/day). At admission, all children were prescribed parenteral ampicillin (50 mg/kg four times a day) and intramuscular gentamicin (7.5 mg/kg once daily) for 7 days. These were revised when indicated by the child's clinical condition or culture results. Other fluoroquinolones were not prescribed during the study period, since they were not routinely available. Study procedures: Ciprofloxacin concentration–time profiles were obtained during two study periods. In the initial study period, children were given ciprofloxacin either 2 h before or 2 h after they had their nutritional milks or meal. 
In this first period of the study (n = 36), an equal number of patients were included in three subgroups: low risk, intermediate risk and high risk of fatality. The stratification of risk was based on a previous report.4 High risk included children presenting with one of the following: depressed conscious state; bradycardia (heart rate <80 beats per minute); evidence of shock (capillary refill time ≥2 seconds, temperature gradient or weak pulse); or hypoglycaemia (blood glucose <3 mmol/L). Intermediate risk included any one of: deep ‘acidotic’ breathing; signs of severe dehydration (plus diarrhoea); lethargy; hyponatraemia (sodium <125 mmol/L); or hypokalaemia (potassium <2.5 mmol/L). Children with low risk had none of these factors. In the second study period (n = 16), ciprofloxacin was administered at the time the children received their nutritional feeds, and no risk stratification was applied. Drug administration and blood sampling: Participants received witnessed doses of oral ciprofloxacin (10 mg/kg body weight), i.e. a standard treatment dose, every 12 h for 48 h, starting on the day of admission. Routine antimicrobials were given simultaneously as the empirical treatment for invasive infections. Since oral suspensions are not available locally, ciprofloxacin tablets (Bactiflox™ 250 mg, Mepha Pharma AG, Switzerland) were reformulated into an aqueous suspension by the study pharmacist. A separate tablet was used to prepare each individual dose and the suspension was then used immediately. Individual doses were measured by the pharmacist under the observation of the trial nurse, according to the patient's body weight. This approach has been used and reported previously.19 Children were allocated, using a computer-generated randomization list, to one of three blood sampling schedules: Group A at 2, 4, 8 and 24 h; Group B at 3, 5, 9 and 12 h; and Group C at 1, 3, 6 and 10 h. Children in each of the risk categories were evenly allocated across the sampling schedules. 
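The risk-stratification rules above can be expressed as a small classifier. This is an illustrative sketch, not study code: the function name and dictionary keys (e.g. `heart_rate_bpm`) are hypothetical labels for the admission findings described in the text.

```python
def risk_category(signs):
    """Classify admission risk using the study's stratification rules.

    `signs` is a dict of admission findings; the keys used here are
    illustrative names, not identifiers from the original study database.
    """
    high = (
        signs.get("depressed_conscious_state", False)
        or signs.get("heart_rate_bpm", 120) < 80           # bradycardia
        or signs.get("shock", False)                       # CRT >=2 s, temperature gradient or weak pulse
        or signs.get("glucose_mmol_per_L", 4.0) < 3.0      # hypoglycaemia
    )
    if high:
        return "high"
    intermediate = (
        signs.get("deep_acidotic_breathing", False)
        or signs.get("severe_dehydration_with_diarrhoea", False)
        or signs.get("lethargy", False)
        or signs.get("sodium_mmol_per_L", 136.0) < 125.0   # hyponatraemia
        or signs.get("potassium_mmol_per_L", 3.5) < 2.5    # hypokalaemia
    )
    return "intermediate" if intermediate else "low"
```

Any single high-risk sign dominates; a child with none of the listed factors is classified low risk, mirroring the text.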
To minimize discomfort to the child, blood sampling for the measurement of ciprofloxacin concentrations was through an in situ cannula, which was used only for this purpose. At each timepoint, a venous blood sample (0.5 mL) was collected into a lithium heparin tube, centrifuged, and the plasma separated and stored in cryovials at −20°C until analysed. A rapid, selective and sensitive HPLC method coupled with fluorescence detection was used to determine the concentration of ciprofloxacin in the plasma samples.20 The intra- and interassay imprecisions of the HPLC method were <8.0%, and accuracy values ranged from 93% to 105% for quality control samples at 0.2, 1.8 and 3.6 mg/L. Calibration curves of ciprofloxacin were linear over the concentration range of 0.02–4 mg/L, with correlation coefficients ≥0.998. Pharmacokinetic analysis: Population pharmacokinetic parameter estimates were obtained with NONMEM version VI using first-order conditional estimation with interaction.21 Post-processing of the NONMEM results was performed with Xpose version 4 programmed in R 2.9.2.22,23 Preliminary analyses found that a one-compartment elimination model adequately described the concentration–time profiles; absorption was described using first- and zero-order models with an absorption lag and with a transit compartment model.24 Relationships between the oral clearance of ciprofloxacin (CL) and weight, and between the oral volume of distribution (V) and weight, were described using an allometric approach,25 i.e. CL = CLstd × (WT/70)^0.75 and V = Vstd × (WT/70), where WT is body weight (kg) and CLstd and Vstd are the values standardized to a 70 kg adult. Between-subject variabilities (BSV) in the pharmacokinetic parameters were assumed to be log-normally distributed and covariance between the variabilities in the pharmacokinetic parameters was investigated. A combined proportional and additive error model was used to describe the residual error. A wide range of demographic, biochemical and haematological data were collected in the course of the study. 
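The selected structural model, one-compartment disposition with first-order absorption and an absorption lag, has a standard closed-form concentration equation. A minimal sketch follows; this is not the NONMEM implementation, and the parameter values used for illustration below are assumptions chosen near the study's reported medians.

```python
import math

def conc_oral_1cmt(t, dose, cl, v, ka, tlag):
    """Plasma concentration after a single oral dose for a one-compartment
    model with first-order absorption and an absorption lag.

    cl and v are apparent (oral) clearance and volume, so bioavailability
    is implicitly included. Units: dose mg, cl L/h, v L, t and tlag h.
    """
    if t <= tlag:
        return 0.0  # nothing absorbed before the lag time
    ke = cl / v  # first-order elimination rate constant (1/h)
    tt = t - tlag
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * tt) - math.exp(-ka * tt))
```

With values near the study medians (a 7 kg child given 70 mg, CL 7.2 L/h, V 31.4 L, ka 2.97/h, lag 0.74 h) this profile peaks at roughly 1.8 mg/L around 1.7 h post-dose, of the same order as the observed median Cmax of 1.7 mg/L.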
In addition, creatinine clearance estimates for each patient, according to age range, were calculated using the equations of Schwartz et al.26,27 Potential relationships between the clinical and demographic factors and empirical Bayes' estimates (individual estimates) of ciprofloxacin oral CL, oral V and absorption rate were initially examined visually using scatter plots and by general additive modelling within Xpose.22 Factors identified as potential covariates that might explain variability in ciprofloxacin pharmacokinetics were then added individually to the population model. A statistically significant improvement in the fit of the model to the data was defined as a reduction in the objective function value (OFV) of ≥3.84 (P < 0.05) in the forward stepwise analysis. The covariate that produced the greatest fall in OFV was included first, and then other covariates were added to and removed from the model in a stepwise manner that included changing the order of inclusion and removal. Statistical significance was defined as an increase in OFV of ≥6.63 (P < 0.01) when a covariate was removed. Additional criteria, such as goodness-of-fit plots, the precision of the parameter estimates, and the ability of covariates to explain variability in the oral CL, oral V and absorption rate, were also considered. The final population model was evaluated in three ways: a bootstrap sampling procedure with 1000 samples; a prediction-corrected visual predictive check (pcVPC) based on 1000 simulations; and examination of normalized prediction distribution errors (npde) from 1000 simulations. Both the bootstrapping procedure and the pcVPC were performed using the PsN toolkit;28,29 npde were computed using the software developed by Brendel et al.30 Time to maximum concentration (Tmax) and maximum concentrations of ciprofloxacin (Cmax) for each patient were obtained from the raw data and estimated using the individual pharmacokinetic parameter values. 
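As a rough illustration of such age-adjusted creatinine clearance estimates, the bedside Schwartz formula can be sketched as below. The k constants used (0.45 for infants under 1 year, 0.55 for children) are taken from the classic Schwartz publications and are an assumption here; the study's exact age-specific equations are those of refs 26 and 27.

```python
def schwartz_crcl(height_cm, creatinine_umol_per_L, age_years):
    """Estimated creatinine clearance (mL/min/1.73 m^2) via the bedside
    Schwartz formula: CrCl = k * height (cm) / serum creatinine (mg/dL).

    The k constants are the classic Schwartz values (assumed here);
    the study applied age-specific equations from its cited references.
    """
    scr_mg_per_dl = creatinine_umol_per_L / 88.4  # convert umol/L to mg/dL
    k = 0.45 if age_years < 1.0 else 0.55
    return k * height_cm / scr_mg_per_dl
```

For a child with the study's median height (75.4 cm) and creatinine (44 μmol/L), this gives roughly 83 mL/min/1.73 m², of the same order as the reported median clearance of 85.8 mL/min/1.73 m².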
Estimates of steady-state AUC within each dosage interval (AUC12) and each day (AUC24) were calculated from 12 hourly dose/CLi and daily dose/CLi, respectively, where CLi are the individual estimates of oral CL. Individual estimates of the elimination half-life (t½) were calculated from 0.693 × Vi/CLi, where Vi are the individual estimates of oral V. Monte Carlo simulations: The final parameters of the population pharmacokinetic model were used to simulate estimates of oral CL for 10 000 patients using NONMEM version VI.21 Relevant clinical characteristics (weight and sodium concentration) were assumed to arise from log-normal distributions with outer limits set to the values observed in the raw data. The incidence of ‘high risk’ in the simulated dataset was 31%. AUC24 estimates were then determined for each simulated patient from daily dose/oral CL. Simulations were performed for three dosage regimens: 20 mg/kg/day (current daily dosage regimen); 30 mg/kg/day; and 45 mg/kg/day. For evaluation of these dosage regimens, MIC values were chosen across the range 0.03–8 mg/L. The probability of target attainment (PTA) was defined as the probability that the target AUC0–24/MIC ratio was achieved at each MIC. Target AUC0–24/MIC ratios of ≥125 were used.31 For each ciprofloxacin regimen, the highest MIC at which the PTA was ≥90% was defined as the pharmacokinetic/pharmacodynamic susceptible breakpoint. A second analysis was conducted using the MIC distributions for Escherichia coli, Pseudomonas aeruginosa, Salmonella spp. and Klebsiella pneumoniae derived from the database of the European Committee on Antimicrobial Susceptibility Testing.32 These MIC distributions were extracted from 17 877 strains of E. coli, 27 825 strains of P. aeruginosa, 5898 strains of K. pneumoniae and 1733 strains of Salmonella spp. The cumulative fraction of response (CFR) was used to estimate the overall response of pathogens to ciprofloxacin with each of the three dosage regimens. 
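The Monte Carlo PTA/CFR machinery can be sketched as follows. This is a simplified stand-in for the NONMEM-based simulation: it draws oral CL from a single log-normal distribution rather than from the full covariate model, and the numeric values in the usage example are illustrative.

```python
import math
import random

random.seed(1)

TARGET_AUC_MIC = 125.0  # pharmacodynamic target AUC0-24/MIC

def simulate_auc24(n, daily_dose_mg_per_kg, median_cl, bsv_cv):
    """Simulate steady-state AUC0-24 (mg*h/L) as daily dose / oral CL for n
    virtual patients, with log-normal between-subject variability in CL
    (median_cl in L/h/kg, bsv_cv as a fraction, e.g. 0.381 for 38.1% CV)."""
    sigma = math.sqrt(math.log(1.0 + bsv_cv ** 2))  # lognormal sigma from %CV
    return [daily_dose_mg_per_kg / (median_cl * math.exp(random.gauss(0.0, sigma)))
            for _ in range(n)]

def pta(aucs, mic):
    """Probability of target attainment: fraction of simulated patients
    achieving AUC0-24/MIC >= target at this MIC."""
    return sum(auc / mic >= TARGET_AUC_MIC for auc in aucs) / len(aucs)

def cfr(aucs, mic_distribution):
    """Cumulative fraction of response: PTA at each MIC weighted by the
    fraction of isolates with that MIC, summed over all MICs."""
    return sum(frac * pta(aucs, mic) for mic, frac in mic_distribution.items())
```

The pharmacokinetic/pharmacodynamic susceptible breakpoint then corresponds to the highest MIC in a doubling series at which `pta(...)` is at least 0.9.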
This estimate accounts for the variability of drug exposure in the population and the variability in the MIC combined with the distributions of MICs for the pathogens. For each MIC, the fraction of simulated patients who met the pharmacodynamic target (AUC0–24/MIC ≥ 125) was multiplied by the fraction of the distribution of microorganisms for each MIC. The CFR was calculated as the sum of fraction products over all MICs. Results: Participants and admission clinical characteristics: Of the 90 children with severe malnutrition admitted during the study period, 52 were enrolled into the study. Twelve who fulfilled the entry criteria declined to give consent, in four, intravenous access was not possible, and 22 were excluded owing to completion of recruitment to a specific subgroup. The demographic and clinical characteristics of the patients included in the study are summarized in Table 1. Median age [interquartile range (IQR)] was 23 months (15–33 months), 29 (56%) were male, oedema (defining kwashiorkor) was present in 24 patients (46%) and eight (15%) children were HIV antibody positive. The median weight (IQR) was 6.9 kg (6.1–8.4 kg). The estimated creatinine clearance ranged from 5 to 128.7 mL/min/1.73 m²; only one patient had severe renal impairment. According to the risk stratification, mortality was 21%, 12% and 0% in the high-risk, intermediate-risk and low-risk groups, respectively. Bacteraemia was present in six children; three had E. coli, two had S. pneumoniae and one had K. pneumoniae. The isolates were tested against ampicillin, gentamicin, ciprofloxacin, chloramphenicol and ceftriaxone. The K. pneumoniae isolate was resistant to ampicillin only, while one E. coli isolate was resistant to all of the tested antibiotics. One S. pneumoniae isolate was resistant to ampicillin and had intermediate susceptibility to ciprofloxacin. Ciprofloxacin was well tolerated and there were no adverse events reported with its use. 
Table 1. Summary of the demographic and clinical characteristics of the patients who participated in the study

Characteristic | Number (%)/median (range) | Interquartile range
Male/female | 29/23 (56/44) |
Oedema | 24 (46) |
HIV positive | 8 (15) |
Age (months) | 23 (8–102) | 15–33
Weight (kg) | 6.9 (4.1–14.5) | 6.1–8.4
Height (cm) | 75.4 (58.5–114.4) | 70.1–81.4
MUAC (cm) | 11 (7.7–14.3) | 10–12
WHZ | −3.26 (−5.69–0.04) | −3.67 to −2.48
Vomiting | 15 (29) |
Shock | 7 (13) |
Dehydration | 25 (48) |
Low risk | 22 (42) |
Intermediate risk | 14 (27) |
High risk | 16 (31) |
White blood cells (×10⁶/L) (6–17.5) | 13.3 (5.5–84.4) | 10.3–17.1
Haemoglobin (g/dL) (9–14) | 9.0 (2.1–12.8) | 7.2–9.8
Platelets (×10⁶/L) (150–400) | 309 (16–1369) | 210–451
Sodium (mmol/L) (138–145) | 136 (120–160) | 131–138
Potassium (mmol/L) (3.5–5) | 3.1 (1.2–5.1) | 2.3–4.1
Glucose (mmol/L) (2.8–5) | 4 (0.4–11.4) | 2.8–4.7
Bicarbonate (mmol/L) (22–29) | 15.4 (4.7–26) | 11.7–18.5
Base excess (−4 to +2) | −8.0 (−26.9–2.1) | −13.1 to −4.5
Serum creatinine (μmol/L) (44–88) | 44 (27–676) | 36.8–51.9
Creatinine clearance^a (mL/min/1.73 m²) | 85.8 (5.0–128.7) | 70.7–101.4

MUAC, mid-upper arm circumference. ^a Estimated according to age using the equations of Schwartz et al.26,27 
Pharmacokinetic data analysis: A total of 202 plasma ciprofloxacin concentration measurements were available, with a median of 4 (range 2–4) measurements per patient. Individual concentration–time profiles are presented in Figure 1. Tmax ranged from 1 to 5 h (median 3 h) after the dose, and in 69% of patients Cmax was observed within 1–3 h. Cmax ranged from 0.6 to 4.5 mg/L with a median of 1.7 mg/L. Cmax was >1 mg/L and >2 mg/L in 77% and 31% of patients, respectively. Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L. Figure 1. Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients. Both the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. The transit compartment model was tested again with the final covariate model, but offered no clear advantage. 
Plots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V. Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below. 
Table 2. Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children

Parameter | Population estimate | Bootstrap estimate | Bootstrap 95% CI
θ1 | 42.7 | 42.5 | 37.0–49.3
θ2 | 0.0368 | 0.0361 | 0.0217–0.0446
θ3 | −0.283 | −0.285 | −0.412 to −0.118
θ4 | 372 | 367 | 316–429
θ5 | 0.0291 | 0.0282 | 0.0155–0.0388
θ6 | 2.97 | 3.44 | 1.32–8.86
θ7 | 0.742 | 0.792 | 0.168–0.924
BSV CL | 38.1 | 37.8 | 28.7–45.8
BSV V | 43.0 | 42.9 | 32.4–51.8
BSV ka | 102 | 110 | 56–159
Residual error, additive (SD) | 0.0273 | 0.0278 | 0.0041–0.0438
Residual error, proportional (%CV) | 18.6 | 17.8 | 14.0–22.6

Structural model: TVCL = θ1 × (WT/70)^0.75 × [1 + θ2 × (Na⁺ − 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na⁺ − 136)]; TVKA = θ6; ALAG = θ7.
Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na⁺, serum sodium concentration (mmol/L).

The population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. 
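The final structural model can be evaluated directly from the reported point estimates. A sketch using the population values (42.7 L/h and 372 L standardized to 70 kg, plus the sodium and high-risk effects); this is an illustration of the published model equations, not study code:

```python
def typical_cl(weight_kg, sodium_mmol_per_L, high_risk):
    """Typical oral clearance (L/h) from the final covariate model:
    TVCL = 42.7 * (WT/70)**0.75 * (1 + 0.0368*(Na - 136)) * (1 - 0.283 if high risk)."""
    cl = 42.7 * (weight_kg / 70.0) ** 0.75
    cl *= 1.0 + 0.0368 * (sodium_mmol_per_L - 136.0)
    if high_risk:
        cl *= 1.0 - 0.283  # 28.3% lower clearance in the high-risk category
    return cl

def typical_v(weight_kg, sodium_mmol_per_L):
    """Typical oral volume of distribution (L):
    TVV = 372 * (WT/70) * (1 + 0.0291*(Na - 136))."""
    return 372.0 * (weight_kg / 70.0) * (1.0 + 0.0291 * (sodium_mmol_per_L - 136.0))
```

For the median 6.9 kg child with sodium 136 mmol/L this gives an oral CL of about 7.5 L/h (≈1.1 L/h/kg) and, via t½ = 0.693 × V/CL, a half-life of roughly 3.4 h, of the same order as the individual estimates in Table 3.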
The model described an increase (or decrease) in oral CL of 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the ‘high-risk’ category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L. All three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r² = 0.659) and the individual parameter estimates (r² = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset. Figure 2. Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line. Figure 3. Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model. 
Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model. Table 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was a wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories. Table 3.Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutritionVariablenMeanSDMedianMinimumMaximumCL(L/h)527.433.547.191.8317.1CL(L/h/kg)521.020.520.870.322.54CL (L/h/kg) low/intermediate risk361.140.530.980.462.54CL (L/h/kg) high risk160.770.410.670.321.53V (L/kg)525.472.694.492.1414.2t½ (h)523.971.383.782.019.04Observed Tmax (h)522.771.083.001.005.17Model-predicted Tmax (h)521.910.581.791.093.85Observed Cmax (mg/L)521.680.791.710.584.52Model-predicted Cmax (mg/L)521.510.601.500.613.56AUC0–24 (mg·h/L)5224.812.422.47.961.3AUC0–24 (mg·h/L) low/intermediate risk3621.18.820. 57.943.4AUC0–24 (mg·h/L) high risk1632.915.529.713.161.3Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC. 
Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC. A total of 202 plasma ciprofloxacin concentration measurements were available, with a median of 4 (range 2–4) measurements per patient. Individual concentration–time profiles are presented in Figure 1. Tmax ranged from 1 to 5 h (median 3 h) after the dose, and in 69% of patients Cmax was observed within 1–3 h. Cmax ranged from 0.6 to 4.5 mg/L with a median of 1.7 mg/L. Cmax was >1 mg/L and >2 mg/L in 77% and 31% of patients, respectively. Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L. Figure 1.Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients. Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients. Both the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. The transit compartment model was tested again with the final covariate model, but offered no clear advantage. 
Plots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V. Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below. 
Table 2.Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished childrenParameterPopulation estimateBootstrap estimateBootstrap 95% CIθ142.742.537.0–49.3θ20.03680.03610.0217–0.0446θ3−0.283−0.285−0.412 to −0.118θ4372367316–429θ50.02910.02820.0155–0.0388θ62.973.441.32–8.86θ70.7420.7920.168–0.924BSV CL38.137.828.7–45.8BSV V43.042.932.4–51.8BSV ka10211056–159Residual error Additive (SD)0.02730.02780.0041–0.0438Proportional (%CV)18.617.814.0–22.6Structural model: TVCL = θ1 × (WT/70)0.75 × [1 + θ2 × (Na+ – 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ – 136)]; TVKA = θ6; ALAG = θ7.Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L). Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children Structural model: TVCL = θ1 × (WT/70)0.75 × [1 + θ2 × (Na+ – 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ – 136)]; TVKA = θ6; ALAG = θ7. Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L). The population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. 
The model described an increase (or decrease) in oral CL by 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the ‘high-risk’ category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L. All three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r2 = 0.659) and the individual parameter estimates (r2 = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset. Figure 2.Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line. Figure 3.Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model. Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line. 
Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition. The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model. Table 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was a wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories. Table 3.Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutritionVariablenMeanSDMedianMinimumMaximumCL(L/h)527.433.547.191.8317.1CL(L/h/kg)521.020.520.870.322.54CL (L/h/kg) low/intermediate risk361.140.530.980.462.54CL (L/h/kg) high risk160.770.410.670.321.53V (L/kg)525.472.694.492.1414.2t½ (h)523.971.383.782.019.04Observed Tmax (h)522.771.083.001.005.17Model-predicted Tmax (h)521.910.581.791.093.85Observed Cmax (mg/L)521.680.791.710.584.52Model-predicted Cmax (mg/L)521.510.601.500.613.56AUC0–24 (mg·h/L)5224.812.422.47.961.3AUC0–24 (mg·h/L) low/intermediate risk3621.18.820. 57.943.4AUC0–24 (mg·h/L) high risk1632.915.529.713.161.3Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC. 
Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC. Monte Carlo simulations The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day. The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L. When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested. Table 4.Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coliCumulative fraction of predicted response (%)OrganismTarget AUC0–24/MIC ratio20 mg/kg/day30 mg/kg/day45 mg/kg/daySalmonella spp.125969899P. aeruginosa125435564K. pneumonia125768083E. coli125858788 Figure 4.Percentage probability of achieving a target AUC0–24/MIC ratio ≥125. Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. 
coli Percentage probability of achieving a target AUC0–24/MIC ratio ≥125. The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day. The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L. When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested. Table 4.Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coliCumulative fraction of predicted response (%)OrganismTarget AUC0–24/MIC ratio20 mg/kg/day30 mg/kg/day45 mg/kg/daySalmonella spp.125969899P. aeruginosa125435564K. pneumonia125768083E. coli125858788 Figure 4.Percentage probability of achieving a target AUC0–24/MIC ratio ≥125. Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coli Percentage probability of achieving a target AUC0–24/MIC ratio ≥125. Participants and admission clinical characteristics: Of the 90 children with severe malnutrition admitted during the study period, 52 were enrolled into the study. 
Twelve children who fulfilled the entry criteria declined to give consent, intravenous access was not possible in four, and 22 were excluded because recruitment to a specific subgroup was already complete. The demographic and clinical characteristics of the patients included in the study are summarized in Table 1. Median age [interquartile range (IQR)] was 23 months (15–33 months), 29 (56%) were male, oedema (defining kwashiorkor) was present in 24 patients (46%) and eight (15%) children were HIV antibody positive. The median weight (IQR) was 6.9 kg (6.1–8.4 kg). The estimated creatinine clearance ranged from 5 to 128.7 mL/min/1.73 m2; only one patient had severe renal impairment. According to the risk stratification, mortality was 21%, 12% and 0% in the high-risk, intermediate-risk and low-risk groups, respectively. Bacteraemia was present in six children; three had E. coli, two had S. pneumoniae and one had K. pneumoniae. The isolates were tested against ampicillin, gentamicin, ciprofloxacin, chloramphenicol and ceftriaxone. The K. pneumoniae isolate was resistant to ampicillin only, while one E. coli isolate was resistant to all of the tested antibiotics. One S. pneumoniae isolate was resistant to ampicillin and had intermediate susceptibility to ciprofloxacin. Ciprofloxacin was well tolerated and there were no adverse events reported with its use.
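The creatinine clearance values above were estimated with the age-dependent equations of Schwartz et al. As a minimal sketch of that calculation, the following assumes the classic bedside form (CrCl = k × height/SCr) with the traditional age-dependent k coefficients; the study cites Schwartz et al. without listing the exact coefficients used, so treat these values as assumptions.

```python
def schwartz_crcl(height_cm: float, scr_umol_l: float, age_years: float, male: bool = True) -> float:
    """Estimated creatinine clearance (mL/min/1.73 m2), bedside Schwartz form.

    The k coefficients are the classic age-dependent values (assumed here,
    not taken from the study itself).
    """
    scr_mg_dl = scr_umol_l / 88.4  # convert creatinine from umol/L to mg/dL
    if age_years < 1:
        k = 0.45        # term infants
    elif age_years < 13 or not male:
        k = 0.55        # children and adolescent girls
    else:
        k = 0.70        # adolescent boys
    return k * height_cm / scr_mg_dl

# Median child from Table 1: height 75.4 cm, creatinine 44 umol/L, age 23 months
crcl = schwartz_crcl(75.4, 44.0, 23 / 12)  # ~83 mL/min/1.73 m2
```

For the median child this gives roughly 83 mL/min/1.73 m2, consistent with the reported median of 85.8 mL/min/1.73 m2.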
Table 1. Summary of the demographic and clinical characteristics of the patients who participated in the study

Characteristic (normal range) | Number (%)/median (range) | Interquartile range
Male/female | 29/23 (56/44)
Oedema | 24 (46)
HIV positive | 8 (15)
Age (months) | 23 (8–102) | 15–33
Weight (kg) | 6.9 (4.1–14.5) | 6.1–8.4
Height (cm) | 75.4 (58.5–114.4) | 70.1–81.4
MUAC (cm) | 11 (7.7–14.3) | 10–12
WHZ | −3.26 (−5.69 to 0.04) | −3.67 to −2.48
Vomiting | 15 (29)
Shock | 7 (13)
Dehydration | 25 (48)
Low risk | 22 (42)
Intermediate risk | 14 (27)
High risk | 16 (31)
White blood cells (×10⁶/L) (6–17.5) | 13.3 (5.5–84.4) | 10.3–17.1
Haemoglobin (g/dL) (9–14) | 9.0 (2.1–12.8) | 7.2–9.8
Platelets (×10⁶/L) (150–400) | 309 (16–1369) | 210–451
Sodium (mmol/L) (138–145) | 136 (120–160) | 131–138
Potassium (mmol/L) (3.5–5) | 3.1 (1.2–5.1) | 2.3–4.1
Glucose (mmol/L) (2.8–5) | 4 (0.4–11.4) | 2.8–4.7
Bicarbonate (mmol/L) (22–29) | 15.4 (4.7–26) | 11.7–18.5
Base excess (−4 to +2) | −8.0 (−26.9 to 2.1) | −13.1 to −4.5
Serum creatinine (μmol/L) (44–88) | 44 (27–676) | 36.8–51.9
Creatinine clearance^a (mL/min/1.73 m2) | 85.8 (5.0–128.7) | 70.7–101.4

MUAC, mid-upper arm circumference.
^a Estimated according to age using the equations of Schwartz et al.26,27

Pharmacokinetic data analysis

A total of 202 plasma ciprofloxacin concentration measurements were available, with a median of 4 (range 2–4) measurements per patient. Individual concentration–time profiles are presented in Figure 1. Tmax ranged from 1 to 5 h (median 3 h) after the dose, and in 69% of patients Cmax was observed within 1–3 h. Cmax ranged from 0.6 to 4.5 mg/L with a median of 1.7 mg/L. Cmax was >1 mg/L and >2 mg/L in 77% and 31% of patients, respectively. Trough concentrations at 12 h after the first dose were available from 17 patients and ranged from 0.1 to 0.7 mg/L with a median of 0.3 mg/L.
Figure 1. Ciprofloxacin concentration measurements in 52 children with malnutrition following oral doses of 10 mg/kg. Samples were measured after the first dose in all patients and after a second dose 12 h later in 16 patients.

Both the first-order with lag and the transit compartment absorption models provided good fits of the population and individual concentration–time profiles. Since the transit compartment model was unstable during validation procedures and provided little improvement in the overall fit of the data, the first-order model with lag was used for covariate model development. The transit compartment model was tested again with the final covariate model, but offered no clear advantage.

Plots of individual estimates of oral CL and oral V against the measured and derived clinical and demographic data identified potential relationships between oral CL and risk category, serum sodium concentration, serum potassium concentration, feeding status, shock and dehydration, and between oral V and risk category, serum sodium concentration, shock, dehydration, diarrhoea and vomiting. These factors all achieved small, but statistically significant, reductions in the OFV when included individually in the population model for oral CL. The biggest reductions occurred with sodium concentration (6.24) and the high-risk category (6.05). When these factors were combined, the OFV fell by a further 14.46 points. BSV in oral CL fell from 49.8% with the base model to 38.1% with the covariate model. No additional factors reduced the variability in oral CL, but a further reduction in OFV and BSV in oral V (from 48.4% to 43.0%) was obtained when sodium concentration was added to the model for oral V.
Although the inclusion of bicarbonate concentration as a factor influencing the absorption rate constant (ka) produced a statistically significant reduction in OFV, it did not reduce BSV in ka and was therefore excluded. The final population model parameters and bootstrap estimates are presented in Table 2 and summarized below.

Table 2. Parameter estimates arising from the final population model describing the pharmacokinetics of oral ciprofloxacin in malnourished children

Parameter | Population estimate | Bootstrap estimate | Bootstrap 95% CI
θ1 | 42.7 | 42.5 | 37.0–49.3
θ2 | 0.0368 | 0.0361 | 0.0217–0.0446
θ3 | −0.283 | −0.285 | −0.412 to −0.118
θ4 | 372 | 367 | 316–429
θ5 | 0.0291 | 0.0282 | 0.0155–0.0388
θ6 | 2.97 | 3.44 | 1.32–8.86
θ7 | 0.742 | 0.792 | 0.168–0.924
BSV CL (%) | 38.1 | 37.8 | 28.7–45.8
BSV V (%) | 43.0 | 42.9 | 32.4–51.8
BSV ka (%) | 102 | 110 | 56–159
Residual error, additive (SD) | 0.0273 | 0.0278 | 0.0041–0.0438
Residual error, proportional (%CV) | 18.6 | 17.8 | 14.0–22.6

Structural model: TVCL = θ1 × (WT/70)^0.75 × [1 + θ2 × (Na+ − 136)] × [1 + θ3 × (high risk)]; TVV = θ4 × (WT/70) × [1 + θ5 × (Na+ − 136)]; TVKA = θ6; ALAG = θ7.

Abbreviations: BSV, between-subject variability expressed as a % coefficient of variation; CL, oral clearance; V, oral volume of distribution; TVCL, typical value of oral clearance (L/h); TVV, typical value of oral volume of distribution (L); TVKA, typical value of the absorption rate constant (ka) (1/h); ALAG, absorption lag time (h); WT, body weight (kg); Na+, serum sodium concentration (mmol/L).
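The structural model in Table 2 can be evaluated directly from the reported θ estimates. The following sketch (illustrative only; the function names are ours) computes the typical oral CL and V for a given weight, serum sodium concentration and risk category.

```python
# Population estimates taken from Table 2 (final model)
THETA = {"cl_std": 42.7, "cl_na": 0.0368, "cl_high_risk": -0.283,
         "v_std": 372.0, "v_na": 0.0291}

def typical_cl(weight_kg: float, sodium_mmol_l: float, high_risk: bool) -> float:
    """Typical oral clearance (L/h): allometric weight, linear sodium, risk effect."""
    cl = THETA["cl_std"] * (weight_kg / 70.0) ** 0.75
    cl *= 1.0 + THETA["cl_na"] * (sodium_mmol_l - 136.0)
    if high_risk:
        cl *= 1.0 + THETA["cl_high_risk"]
    return cl

def typical_v(weight_kg: float, sodium_mmol_l: float) -> float:
    """Typical oral volume of distribution (L): linear in weight and sodium."""
    return THETA["v_std"] * (weight_kg / 70.0) * (1.0 + THETA["v_na"] * (sodium_mmol_l - 136.0))

# Median study child: 6.9 kg, sodium 136 mmol/L, not in the high-risk category
cl = typical_cl(6.9, 136.0, high_risk=False)   # ~7.5 L/h
v = typical_v(6.9, 136.0)                      # ~36.7 L
```

For the median study child this gives a typical oral CL of about 7.5 L/h, in line with the mean individual estimate of 7.43 L/h reported in Table 3.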
The population model therefore identified a standardized oral CL in an adult weighing 70 kg of 42.7 L/h, and found that oral CL was linearly and positively related to sodium concentration. The model described an increase (or decrease) in oral CL of 3.7% for every 1 mmol/L increase (or decrease) in the serum sodium concentration from the median value of 136 mmol/L. The standardized oral CL fell by 28% to 30.6 L/h/70 kg in patients in the ‘high-risk’ category. The standardized oral V estimate was 372 L/70 kg and changed by 2.9% for every 1 mmol/L change in the serum sodium concentration from 136 mmol/L.

All three validation methods indicated that the model provided a satisfactory description of the data. Figure 2 shows good agreement between the measured concentrations and the concentrations predicted by both the population pharmacokinetic model (r2 = 0.659) and the individual parameter estimates (r2 = 0.971). The pcVPC presented in Figure 3 shows that the population model was able to describe the distribution of the raw concentration data, and the npde check confirmed a normal distribution around each individual observation within the simulated dataset.

Figure 2. Observed versus population (a) and individual (b) predicted concentrations of ciprofloxacin in malnourished infants based on the final population model. The thin line represents the line of identity; the thick line represents the linear regression line.

Figure 3. Prediction-corrected visual predictive check of the final model describing the pharmacokinetics of oral ciprofloxacin in infants with malnutrition.
The solid line represents the median of the raw data, the dotted lines are the 10th and 90th percentiles of the raw data, and the shaded areas are the 90% confidence intervals of the 10th, 50th and 90th percentiles of the 1000 simulations based on the final model.

Table 3 summarizes the individual Bayes' estimates of the pharmacokinetic parameters and derived estimates of Cmax, Tmax, t½ and AUC0–24. Oral CL had a median of 0.98 L/h/kg in patients in the low and intermediate categories, and 0.67 L/h/kg in high-risk patients. There was a wide variability in the individual estimates of AUC0–24, which ranged from 7.9 to 61 mg·h/L. Median estimates of AUC0–24 were higher in patients in the high-risk category at 29.7 mg·h/L compared with 20.5 mg·h/L in the low and intermediate categories.
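The derived quantities in Table 3 follow from each child's individual CL, V, ka and lag estimates through standard one-compartment relationships: ke = CL/V, t½ = ln 2/ke, Tmax and Cmax from the first-order absorption solution, and steady-state AUC0–24 = daily dose/CL. A sketch using these textbook formulas (not the authors' code), evaluated at typical values for the median 6.9 kg child:

```python
import math

def derived_pk(cl_l_h, v_l, ka_h, lag_h, dose_mg, daily_dose_mg):
    """Derived metrics for a one-compartment model with first-order absorption.

    Standard textbook relationships; bioavailability is implicit in the oral
    CL and V, and single-dose formulas are used for Tmax/Cmax (accumulation
    at steady state is ignored in this sketch).
    """
    ke = cl_l_h / v_l                                   # elimination rate constant (1/h)
    t_half = math.log(2) / ke                           # elimination half-life (h)
    tmax = math.log(ka_h / ke) / (ka_h - ke) + lag_h    # time of peak after a single dose
    cmax = (dose_mg * ka_h / (v_l * (ka_h - ke))) * (
        math.exp(-ke * (tmax - lag_h)) - math.exp(-ka_h * (tmax - lag_h)))
    auc24 = daily_dose_mg / cl_l_h                      # steady-state 24 h AUC (mg*h/L)
    return {"t_half_h": t_half, "tmax_h": tmax, "cmax_mg_l": cmax, "auc24_mg_h_l": auc24}

# Typical 6.9 kg child on 10 mg/kg twice daily, using population values from Table 2
pk = derived_pk(cl_l_h=7.5, v_l=36.7, ka_h=2.97, lag_h=0.742, dose_mg=69.0, daily_dose_mg=138.0)
```

The results land close to the model-predicted medians in Table 3 (Tmax ~1.7 vs 1.79 h; Cmax ~1.5 vs 1.50 mg/L; t½ ~3.4 vs 3.78 h), which is what one would expect for a near-median patient.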
Table 3. Summary of individual ciprofloxacin pharmacokinetic parameter estimates obtained in 52 children with severe malnutrition

Variable | n | Mean | SD | Median | Minimum | Maximum
CL (L/h) | 52 | 7.43 | 3.54 | 7.19 | 1.83 | 17.1
CL (L/h/kg) | 52 | 1.02 | 0.52 | 0.87 | 0.32 | 2.54
CL (L/h/kg), low/intermediate risk | 36 | 1.14 | 0.53 | 0.98 | 0.46 | 2.54
CL (L/h/kg), high risk | 16 | 0.77 | 0.41 | 0.67 | 0.32 | 1.53
V (L/kg) | 52 | 5.47 | 2.69 | 4.49 | 2.14 | 14.2
t½ (h) | 52 | 3.97 | 1.38 | 3.78 | 2.01 | 9.04
Observed Tmax (h) | 52 | 2.77 | 1.08 | 3.00 | 1.00 | 5.17
Model-predicted Tmax (h) | 52 | 1.91 | 0.58 | 1.79 | 1.09 | 3.85
Observed Cmax (mg/L) | 52 | 1.68 | 0.79 | 1.71 | 0.58 | 4.52
Model-predicted Cmax (mg/L) | 52 | 1.51 | 0.60 | 1.50 | 0.61 | 3.56
AUC0–24 (mg·h/L) | 52 | 24.8 | 12.4 | 22.4 | 7.9 | 61.3
AUC0–24 (mg·h/L), low/intermediate risk | 36 | 21.1 | 8.8 | 20.5 | 7.9 | 43.4
AUC0–24 (mg·h/L), high risk | 16 | 32.9 | 15.5 | 29.7 | 13.1 | 61.3

Abbreviations: CL, oral clearance; V, oral volume of distribution; t½, elimination half-life; Tmax, the time of the maximum concentration; AUC0–24, the steady-state 24 h AUC.

Monte Carlo simulations

The percentage of simulated patients who achieved an AUC0–24/MIC ratio of ≥125 at each MIC value with the three ciprofloxacin daily dosage regimens is presented in Figure 4. With 20 mg/kg/day, only 76% of patients would be expected to achieve the target AUC0–24/MIC ratio if the MIC was 0.125 mg/L. However, with daily doses of 30 and 45 mg/kg, the percentages increased to 95% and 99%, respectively. Consequently, the pharmacokinetic/pharmacodynamic breakpoint for Gram-negative organisms was <0.06 mg/L for the study dose of 20 mg/kg/day, and <0.125 mg/L for doses of 30 and 45 mg/kg/day. The target AUC0–24/MIC ratio was achieved in <5% of patients with all dosage regimens if the MIC was >1 mg/L.
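The target-attainment calculation can be sketched as a simplified Monte Carlo simulation: draw clearances log-normally around the typical value using the model's between-subject variability, compute AUC0–24 = daily dose/CL for each simulated child, and count the fraction meeting AUC0–24/MIC ≥ 125. This is illustrative only; the study's simulations used the full covariate model, so the exact percentages differ.

```python
import math
import random

def pta(daily_dose_mg_kg, mic, weight_kg=6.9, n=10_000, seed=1):
    """Fraction of simulated patients with AUC0-24/MIC >= 125.

    Simplified sketch: clearance drawn log-normally around the typical value
    (42.7 L/h/70 kg, BSV 38.1% CV, both from Table 2), ignoring the sodium
    and risk covariates used in the study's own simulations.
    """
    rng = random.Random(seed)
    tv_cl = 42.7 * (weight_kg / 70.0) ** 0.75        # typical oral CL (L/h)
    omega = math.sqrt(math.log(1 + 0.381 ** 2))      # lognormal SD from %CV
    hits = 0
    for _ in range(n):
        cl = tv_cl * math.exp(rng.gauss(0.0, omega))
        auc24 = daily_dose_mg_kg * weight_kg / cl
        if auc24 / mic >= 125:
            hits += 1
    return hits / n
```

Integrating these fractions over an observed MIC distribution, weighted by the proportion of isolates at each MIC, gives the cumulative fraction of response (CFR) values of the kind reported in Table 4.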
When the results were integrated with the MIC distribution for each organism, the CFR was >80% with all three daily dosage regimens for E. coli and Salmonella spp., and was 80% for K. pneumoniae with a dose of 30 mg/kg/day (Table 4). CFR values for P. aeruginosa were <70% for all doses tested.

Table 4. Cumulative fraction of predicted response to achieve the target AUC0–24/MIC ratio for three ciprofloxacin dosage regimens against strains of Salmonella spp., P. aeruginosa, K. pneumoniae and E. coli

Organism | Target AUC0–24/MIC ratio | CFR (%) at 20 mg/kg/day | 30 mg/kg/day | 45 mg/kg/day
Salmonella spp. | 125 | 96 | 98 | 99
P. aeruginosa | 125 | 43 | 55 | 64
K. pneumoniae | 125 | 76 | 80 | 83
E. coli | 125 | 85 | 87 | 88

Figure 4. Percentage probability of achieving a target AUC0–24/MIC ratio ≥125.

Discussion

This study examined the population pharmacokinetics of ciprofloxacin following oral doses of 10 mg/kg given 12 hourly on the day of admission to a group of 52 paediatric patients with severe malnutrition. Since the study was designed purely to investigate ciprofloxacin pharmacokinetics and not efficacy, all patients also received standard therapy for malnutrition and for sepsis on admission. Patient weight, high risk of mortality and serum sodium concentration were the main factors that influenced the concentration–time profile of ciprofloxacin in this patient group. Estimates of AUC0–24 ranged from 8 to 61 mg·h/L, indicating that an AUC/MIC ratio ≥125 would only be achieved in all patients studied if the MIC was <0.06 mg/L.

Sparse sampling was necessary due to the nature of the population.
The sampling windows covered different sections of the 12 h dosage interval; additional trough concentrations measured after the second dose were available from 16 patients. The observed Cmax concentrations ranged from 0.6 to 4.5 mg/L. This range is lower than the values averaging ∼8.4 mg/L reported by Schaefer et al.18 following intravenous doses of 10 mg/kg to paediatric patients with cystic fibrosis, but is consistent, when corrected for dose, with their range of ∼2.5–5 mg/L (mean 3.5 mg/L) achieved with an oral dose of 15 mg/kg. In contrast, despite using a higher dose of 15 mg/kg, Peltola et al.19 observed similar values of Cmax (0.5–5.3 mg/L) when they administered ground tablets in water and mean Cmax values of ∼2–2.7 mg/L following an oral suspension of 10 mg/kg.33 The observed variability in Cmax and Tmax (1–5 h) in the present study may reflect a combination of variable oral bioavailability and the sampling strategy. One limitation of the present study was that most samples were taken ≥2 h after the dose and the Cmax may have already been attained. Peak concentrations were typically observed at 1–2 h in previous studies.19,33 The observed Cmax measurements suggest that MIC values <0.1 mg/L would ideally be necessary to consistently achieve Cmax/MIC ratios of >10, as recommended by MacGowan et al.34 The concentration–time data were adequately described by first-order absorption with lag and a monoexponential decline. Studies following the intravenous administration of ciprofloxacin have identified a biexponential decline with a short distribution half-life of ∼10–30 min,16–18,35,36 but the sparse sampling schedule in the present study precluded the identification of a distribution phase. The population model indicated a rapid absorption of ciprofloxacin with an estimated half-life of 14 min after a lag of ∼45 min. 
Previous population analyses in paediatric patients reported shorter or similar absorption lag times, but rates of absorption were slower, with half-lives ranging from ∼30 to 96 min.16–18 In contrast, Peltola et al.,19 who also used ground tablets, reported mean absorption half-lives of 24 min in infants up to 14 weeks old and 17 min in children aged 1–5 years. Both the present and previous population studies identified wide variability in the absorption rate, with BSV estimates ranging from 50% to 103%.16,17 Rajagopalan and Gastonguay17 reported a standardized clearance of 30.3 L/h/70 kg in paediatric patients aged 14 weeks to 17 years. Correcting for their bioavailability estimate of 61% gives an oral clearance of 49.7 L/h/70 kg, which is similar to the value of 42.7 L/h/70 kg obtained in the present study. Both results are reasonably consistent with values of ∼40–70 L/h reported in adults with normal renal function.37–39 Individual weight-corrected oral CL estimates were in the range 0.3–2.5 L/h/kg in the present study and were similar to the CL values reported by Rajagopalan and Gastonguay17 (0.2–1.3 L/h/kg), when corrected for bioavailability (0.3–2.2 L/h/kg), and the oral CL observed by Peltola et al.33 (1–1.5 L/h/kg). The population model developed by Payen et al.16 predicts an oral clearance increasing from 2 to 20 L/h over the age range covered in the present study. These values are again consistent with the present observations of 1.8–17 L/h. These findings suggest that oral CL is similar in this population of malnourished children to those reported in children without malnutrition. Since there was no intravenous ciprofloxacin dose given, oral bioavailability could not be determined directly from this study. However, the similarity between the estimates of oral CL in the present study with the results from previous studies suggests that bioavailability may also be similar. 
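As a quick arithmetic check on these comparisons: scaling the study's standardized clearance allometrically down to the median study weight recovers the observed mean clearance, and correcting the Rajagopalan and Gastonguay estimate for their 61% bioavailability reproduces the 49.7 L/h/70 kg figure quoted above.

```python
def scale_cl(cl_std_70kg: float, weight_kg: float) -> float:
    """Allometric clearance scaling with the fixed 0.75 exponent."""
    return cl_std_70kg * (weight_kg / 70.0) ** 0.75

cl_child = scale_cl(42.7, 6.9)   # ~7.5 L/h for the median 6.9 kg child (observed mean 7.43 L/h)
cl_oral_70 = 30.3 / 0.61         # ~49.7 L/h/70 kg oral clearance after the bioavailability correction
```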
Ciprofloxacin is generally well absorbed, with a typical bioavailability of ∼70% in adults15 and 60%–70% in children.16–18,40 Rubio et al.40 reported a lower bioavailability in children with cystic fibrosis aged 5–12 years (68%) compared with those aged 13–17 years (95%), but this age effect was not observed in the other studies. The formulation that was used in the present study, i.e. tablets reformulated into a suspension with water, has practical advantages over the commercial oral suspension33 as it is much cheaper and easy to prepare on site. Another major aim of this study was to evaluate the impact of feeding on oral clearance. Both milk and divalent cations (such as magnesium and other minerals included in the nutritional milk) have been reported to chelate quinolones and reduce the oral bioavailability of ciprofloxacin.41–43 Since most children are managed in resource-poor healthcare settings with a limited number of healthcare personnel, it is both impractical and difficult to ensure that the feeding times are synchronized around the drug administration times. Consequently, a significant proportion of children are likely to receive ciprofloxacin concomitantly with food and a reduction in bioavailability could compromise efficacy. Although the preliminary analysis found an increase in oral clearance in patients who received ciprofloxacin with feeding, the objective function value only fell by 4.5 points, indicating a weak effect. In the final model, no influence of feeding was identified. Thirteen of the 16 patients who were given ciprofloxacin with food were in the low/intermediate-risk category, which was associated with higher oral CL and lower AUC estimates. The influence of risk may therefore have confounded the identification of a food effect, if one existed. Conversely, if food was important, it may have enhanced the apparent influence of risk in the model. The lack of a clear influence of feeding on absorption is a positive finding. 
If a significant interaction had been observed, then the future use of this antimicrobial would have been compromised by the regular feeding patterns required, and possible variability in the gut motility and gastric emptying in children with severe malnutrition. Dehydration and shock were initially found to influence oral CL when examined alone, but both were related to other factors that had a more powerful effect. Of the 16 patients in the high-risk category, 12 had dehydration (of whom 11 had diarrhoea) and 6 had shock. ‘High risk’ therefore accounted for both of these factors and only sodium concentration provided an additional improvement in the fit of the model to the data. An unexpected finding was the lack of an effect of renal function on oral CL, since ciprofloxacin CL is known to decrease in renal impairment.37–39,44 However, the only patient in the group with overt renal failure (creatinine concentration 676 μmol/L) was also categorized as high risk. This individual had one of the lowest estimates of oral CL at 1.8 L/h (0.33 L/h/kg). Apart from this one individual, no other patient had severe renal impairment; the highest serum creatinine concentration in other patients was 95 μmol/L. Interestingly, another patient in the high-risk category had a similar and very low oral CL (0.32 L/h/kg), despite a creatinine concentration of 76 μmol/L. The t½ of ciprofloxacin is ∼3–5 h in adults with normal renal function.37,38,44 Similar values have been found in previous studies in infants and young children16,33,45 and in the present study, where t½ had a median of 3.8 h and ranged from 2 to 9 h. Although children in the high-risk category (including children with impaired consciousness, shock and hypoglycaemia) had significantly lower estimates of oral CL, individual estimates varied widely in both groups (Table 3). 
‘High risk’ children represent the most critically ill cases, where both the logistics of administration and gut absorption of oral antimicrobial agents are likely to be compromised. Importantly, this finding highlights the need for parenteral agents for the most critically ill, for which third-generation cephalosporins are likely to be the most appropriate. We have previously shown delayed uptake and reduced clearance of gentamicin following an intramuscular dose of 7.5 mg/kg in children with severe malnutrition complicated by septic shock.10 The finding that low sodium concentration is associated with a reduced oral CL is also of interest. Hyponatraemia in severe malnutrition has been postulated to be due to an ‘adaptive reduction’ in which normal homeostatic cellular ATPase and sodium-potassium pumps are faulty, resulting in low sodium due to increased extracellular water.46,47 The reasons why low sodium is associated with lower estimates of oral CL and oral V are not clear; although statistically significant, the inclusion of this factor only reduced BSV in oral CL by 6% and BSV in oral V by 4%, so it had a weak effect overall and may be an incidental finding. The standardized estimate of oral V (372 L/70 kg) was similar to that reported by Forrest et al.37 in adult patients with normal renal function (321 L/1.73 m2), but higher than the estimate of volume of distribution at steady state (Vss) obtained by Rajagopalan and Gastonguay17 in paediatric patients (240 L/70 kg when corrected for bioavailability). Individual estimates of oral V ranged from 2 to 14 L/kg and the median of 4.5 L/kg was more than twice the values of ∼2 L/kg obtained by Schaefer et al.18 and Payen et al.16 Overall, these results indicate that oral V is elevated in patients with malnutrition. 
Similar results have been observed with gentamicin in malnourished children—both higher mean and a wide interpatient variability.10 As with oral CL, a small but significant relationship between oral V and sodium concentration was identified, with higher sodium concentrations being associated with larger estimates of oral V. For fluoroquinolones in general, the ratio of the daily AUC to the MIC (AUC/MIC) is likely to be the pharmacodynamic criterion most predictive of clinical outcome.31 Forrest et al.31 found that the probability of clinical cure in a group of seriously ill patients was 80% if the AUC/MIC was ≥125, but was only 42% below this value and that faster eradication rates occurred if the AUC/MIC was >250. The median estimate of steady-state AUC0–12 in the present study (11.2 mg·h/L) is lower than the mean values of ∼13–19 mg·h/L reported by Lipman et al.48 following intravenous doses of 10 mg/kg twice daily to paediatric patients with sepsis. These differences probably reflect absorption, since correcting for a bioavailability of 70% yields similar results to the present study (predicted oral AUC0–12 range 9–13 mg·h/L). Even with the higher AUC, Lipman et al. recommended increasing the dose to 10 mg/kg 8 hourly to obtain AUC/MIC values of 100–150 for organisms with MICs >0.3 mg/L.45 Estimates of AUC0–24 in the present study ranged from 8 to 61 mg·h/L. The individual results indicated that only 12% of the study patients would achieve an AUC/MIC ≥125 if the MIC of the organism was 0.3 mg/L and 0% if it was 0.5 mg/L. Monte Carlo simulations produced slightly higher results (24% and 2%, respectively). Using a range of MIC values, it was found that a dose of ≥30 mg/kg per day would be required to achieve an AUC/MIC of ≥125 in >90% of patients if the MIC was 0.125 mg/L and that an MIC of <0.06 mg/L was necessary to achieve satisfactory AUC/MIC values with a dose of 20 mg/kg/day. 
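The dose-to-target reasoning above can be sketched numerically. The following is a minimal Monte Carlo sketch of the probability of target attainment (PTA) for AUC0–24/MIC ≥125: the 42.7 L/h/70 kg typical oral CL and allometric weight scaling are taken from the reported population model, while the lognormal between-subject variability (an illustrative 60% CV), the example weight and the simulation settings are assumptions, not the study's own simulation code.

```python
import math
import random

def simulate_pta(daily_dose_mg_per_kg, mic, weight_kg=10.0,
                 cl_typical=42.7, bsv_cv=0.6, n=20000, seed=1):
    """Monte Carlo probability of target attainment (PTA) for
    AUC0-24/MIC >= 125, where AUC0-24 = daily oral dose / oral CL.
    Between-subject variability in CL is modelled as lognormal."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1 + bsv_cv ** 2))    # lognormal SD from a CV
    cl_pop = cl_typical * (weight_kg / 70) ** 0.75  # allometric weight scaling
    daily_dose = daily_dose_mg_per_kg * weight_kg   # total daily dose (mg)
    hits = 0
    for _ in range(n):
        cl_i = cl_pop * math.exp(rng.gauss(0, sigma))  # individual CL (L/h)
        if (daily_dose / cl_i) / mic >= 125:
            hits += 1
    return hits / n
```

Under these assumptions, raising the dose from 20 to 30 mg/kg/day markedly increases the attainment fraction at an MIC of 0.125 mg/L, mirroring the pattern described above.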
Using internationally derived distributions of the MICs of a range of organisms, including the most common isolates in children with severe malnutrition complicated by invasive bacterial disease, a dose of 20 mg/kg/day was sufficient for most isolates of E. coli and Salmonella spp., but 30 mg/kg/day would be necessary to treat Klebsiella infections. Even with higher doses of 45 mg/kg/day, targets for P. aeruginosa could only be achieved in 64% of cases. Similar problems of underdosing have also been identified in adults. Standard intravenous doses of 400 mg twice daily yielded inadequate AUC/MIC and Cmax/MIC ratios in critically ill patients unless the MIC was <0.25 mg/L.36 Simulations conducted by Montgomery et al.35 demonstrated that for an MIC of 0.5 mg/L, 400 mg 12 hourly would only achieve an AUC/MIC ≥125 in 15% of adults with cystic fibrosis and that an increase to 600 mg 8 hourly would be required to achieve >90% success. If the dose used in the present study was increased to 15 mg/kg 8 hourly and assuming no change in bioavailability, the probability of achieving an AUC/MIC ≥125 would increase to 83% for an MIC of 0.3 mg/L and 32% for an MIC of 0.5 mg/L. However, 37% of patients would then reach daily AUCs >60 mg·h/L. These values are higher than the mean daily AUCs reported following the administration of intravenous high-dose ciprofloxacin (400 mg 8 hourly) to critically ill adults48 and it is possible that such high exposure would increase the risk of toxicity in such patients. Conclusions: The pharmacokinetics of oral ciprofloxacin in children with severe malnutrition is influenced by weight, serum sodium concentration and the risk of mortality, but there was high variability in the clearance and rate of absorption. Oral ciprofloxacin absorption was unaffected by the simultaneous administration of nutritional feeds. An oral dose of 10 mg/kg twice daily should be effective against E. 
coli and Salmonella spp., but a higher dose of 10 mg/kg three times a day would be recommended for K. pneumoniae. Oral ciprofloxacin is unlikely to be an effective treatment for P. aeruginosa. Irrespective of the bacterial pathogen, patients with severe illness, at high risk of mortality, should initially receive intravenous antibiotics. Funding: This study was supported by Wellcome Trust core funding to KEMRI-Wellcome Trust Research Programme (grant Reference No. 077092). 
Nahashon Thuo is supported by a Wellcome Trust Masters Training Fellowship (grant reference No. 089353/Z/09/Z). The funding sources had no role in study design, analysis or in the writing of the report. Transparency declarations: None to declare.
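The cumulative fraction of response (CFR) that underlies the organism-specific dosing recommendations combines the MIC distribution of an organism with the probability of target attainment (PTA) at each MIC. A minimal sketch follows; the MIC frequencies and PTA values in the example are purely illustrative, not the study's data.

```python
def cumulative_fraction_of_response(pta_by_mic, mic_freq):
    """CFR: weight the probability of target attainment (PTA) at each MIC
    by the fraction of isolates with that MIC, then sum."""
    return sum(freq * pta_by_mic.get(mic, 0.0)
               for mic, freq in mic_freq.items())

# Purely illustrative numbers (not the study's data):
example_pta = {0.06: 0.95, 0.125: 0.70, 0.25: 0.20}  # PTA at each MIC (mg/L)
example_freq = {0.06: 0.5, 0.125: 0.3, 0.25: 0.2}    # MIC distribution
cfr = cumulative_fraction_of_response(example_pta, example_freq)
```

A CFR near 1 indicates that the regimen is expected to attain the pharmacodynamic target for almost all isolates of that organism.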
Background: Severe malnutrition is frequently complicated by sepsis, leading to high case fatality. Oral ciprofloxacin is a potential alternative to the standard parenteral ampicillin/gentamicin combination, but its pharmacokinetics in malnourished children is unknown. Methods: Ciprofloxacin (10 mg/kg, 12 hourly) was administered either 2 h before or up to 2 h after feeds to Kenyan children hospitalized with severe malnutrition. Four plasma ciprofloxacin concentrations were measured over 24 h. Population analysis with NONMEM investigated factors affecting the oral clearance (CL) and the oral volume of distribution (V). Monte Carlo simulations investigated dosage regimens to achieve a target AUC(0-24)/MIC ratio of ≥125. Results: Data comprised 202 ciprofloxacin concentration measurements from 52 children aged 8-102 months. Absorption was generally rapid but variable; C(max) ranged from 0.6 to 4.5 mg/L. Data were fitted by a one-compartment model with first-order absorption and lag. The parameters were CL (L/h) = 42.7 (L/h/70 kg) × [weight (kg)/70]^0.75 × [1 + 0.0368 (Na(+) - 136)] × [1 - 0.283 (high risk)] and V (L) = 372 (L/70 kg) × [1 + 0.0291 (Na(+) - 136)]. Estimates of AUC(0-24) ranged from 8 to 61 mg·h/L. The breakpoint for Gram-negative organisms was <0.06 mg/L with doses of 20 mg/kg/day and <0.125 mg/L with doses of 30 or 45 mg/kg/day. The cumulative fraction of response with 30 mg/kg/day was ≥80% for Escherichia coli, Klebsiella pneumoniae and Salmonella species, but <60% for Pseudomonas aeruginosa. Conclusions: An oral ciprofloxacin dose of 10 mg/kg three times daily (30 mg/kg/day) may be a suitable alternative antibiotic for the management of sepsis in severely malnourished children. Absorption was unaffected by the simultaneous administration of feeds.
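The covariate model quoted in the abstract can be expressed directly as code. The sketch below implements only the reported typical-value equations (no between-subject variability terms); linear weight scaling of V is assumed from its per-70 kg standardization, and the example inputs in the usage below are hypothetical.

```python
def oral_cl(weight_kg, sodium_mmol_l, high_risk):
    """Typical oral clearance (L/h) from the reported covariate model."""
    return (42.7 * (weight_kg / 70) ** 0.75
            * (1 + 0.0368 * (sodium_mmol_l - 136))
            * (1 - 0.283 * (1 if high_risk else 0)))

def oral_v(weight_kg, sodium_mmol_l):
    """Typical oral volume of distribution (L), assuming linear
    weight scaling from the per-70 kg standardization."""
    return 372 * (weight_kg / 70) * (1 + 0.0291 * (sodium_mmol_l - 136))

def auc_0_24(daily_dose_mg, weight_kg, sodium_mmol_l, high_risk):
    """Steady-state AUC0-24 (mg.h/L) = daily oral dose / oral CL."""
    return daily_dose_mg / oral_cl(weight_kg, sodium_mmol_l, high_risk)
```

For a hypothetical 10 kg child with normal sodium (136 mmol/L) who is not high risk, the typical oral CL is about 9.9 L/h, and 20 mg/kg/day (200 mg) gives an AUC0-24 of roughly 20 mg·h/L, within the 8-61 mg·h/L range reported above.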
Introduction: Severe malnutrition remains a common cause of admission to hospital in less-developed countries. Many centres, particularly in Africa, report poor outcome despite adherence to recommended treatment guidelines.1–4 The children at the greatest risk of fatal outcome are those with Gram-negative septicaemia, constituting 48%–55% of invasive bacterial pathogens, and those admitted with diarrhoea and/or shock.5,6 Changes in the intestinal mucosal integrity and gut microbial balance occur in severe malnutrition,7 resulting in treatment failure and adverse clinical outcome.8 The higher prevalence of gut barrier dysfunction in children with severe malnutrition may have important effects on the absorption of antimicrobials and their bioavailability, and therefore may limit choices for the delivery of antimicrobial medication. Children with severe and complicated malnutrition routinely receive broad-spectrum parenteral antibiotics.9 In vitro antibiotic susceptibility testing indicates that up to 85% of organisms are fully susceptible to the first-line treatment, parenteral ampicillin and gentamicin, recommended by the WHO for children with severe and complicated malnutrition.4 Pharmacokinetic studies have demonstrated satisfactory plasma concentrations of these commonly used antibiotics.10 However, in vitro resistance has been associated with later deaths, and the current second-line antibiotic (chloramphenicol) was found to offer little advantage over the ampicillin and gentamicin combination.4 Fluoroquinolones are effective against most Gram-negative organisms and have activity against Gram-positive organisms, especially when given in combination.11 The quinolones are used in the treatment of serious bacterial infections in adults; however, their use in children has been restricted due to concerns about potential cartilage damage.12 Nevertheless, quinolones are increasingly prescribed for paediatric patients. 
Ciprofloxacin is licensed in children >1 year of age for pseudomonal infections in cystic fibrosis, for complicated urinary tract infections, and for the treatment and prophylaxis of inhalation anthrax.13 When the benefits of treatment outweigh the risks, ciprofloxacin is also licensed in the UK for children >1 year of age with severe respiratory tract and gastrointestinal system infection.14 In its many years of clinical use, ciprofloxacin has been found effective, even with oral administration, owing to its bioavailability of ∼70%.15 However, few studies have evaluated the population pharmacokinetics of ciprofloxacin in children,16–18 and no studies have been conducted in severe malnutrition. For the treatment of Gram-negative infection, common in severe malnutrition, pragmatic and cost-effective treatments are needed to improve outcome. Since intravenous formulations of fluoroquinolones are very expensive, they are rarely used in resource-poor settings. Oral formulations would appear to be the best option in patients who are able to ingest and adequately absorb medication. The aims of this study were to determine the pharmacokinetic profile of oral ciprofloxacin given at a dose of 10 mg/kg twice daily to children admitted to hospital with severe malnutrition, to develop a population model to describe the pharmacokinetics of ciprofloxacin in this patient group, and to use Monte Carlo simulation techniques to investigate potential relationships between dosage regimens and antimicrobial efficacy.
17,205
405
[ 354, 228, 352, 583, 375, 499, 1750, 363, 66, 4 ]
15
[ "oral", "mg", "kg", "ciprofloxacin", "model", "mic", "patients", "concentration", "children", "risk" ]
[ "therapy malnutrition sepsis", "malnutrition complicated septic", "antibiotics conclusions pharmacokinetics", "admission routine antimicrobials", "children malnutrition intravenous" ]
[CONTENT] quinolone | drug absorption | marasmus | kwashiorkor | Gram-negative [SUMMARY]
[CONTENT] quinolone | drug absorption | marasmus | kwashiorkor | Gram-negative [SUMMARY]
[CONTENT] quinolone | drug absorption | marasmus | kwashiorkor | Gram-negative [SUMMARY]
[CONTENT] quinolone | drug absorption | marasmus | kwashiorkor | Gram-negative [SUMMARY]
[CONTENT] quinolone | drug absorption | marasmus | kwashiorkor | Gram-negative [SUMMARY]
[CONTENT] quinolone | drug absorption | marasmus | kwashiorkor | Gram-negative [SUMMARY]
[CONTENT] Administration, Oral | Anti-Bacterial Agents | Bacteremia | Child | Child, Preschool | Ciprofloxacin | Dehydration | Drug Resistance, Multiple, Bacterial | Escherichia coli | Female | Humans | Infant | Klebsiella pneumoniae | Male | Malnutrition | Microbial Sensitivity Tests | Monte Carlo Method | Pseudomonas aeruginosa | Salmonella [SUMMARY]
[CONTENT] Administration, Oral | Anti-Bacterial Agents | Bacteremia | Child | Child, Preschool | Ciprofloxacin | Dehydration | Drug Resistance, Multiple, Bacterial | Escherichia coli | Female | Humans | Infant | Klebsiella pneumoniae | Male | Malnutrition | Microbial Sensitivity Tests | Monte Carlo Method | Pseudomonas aeruginosa | Salmonella [SUMMARY]
[CONTENT] Administration, Oral | Anti-Bacterial Agents | Bacteremia | Child | Child, Preschool | Ciprofloxacin | Dehydration | Drug Resistance, Multiple, Bacterial | Escherichia coli | Female | Humans | Infant | Klebsiella pneumoniae | Male | Malnutrition | Microbial Sensitivity Tests | Monte Carlo Method | Pseudomonas aeruginosa | Salmonella [SUMMARY]
[CONTENT] Administration, Oral | Anti-Bacterial Agents | Bacteremia | Child | Child, Preschool | Ciprofloxacin | Dehydration | Drug Resistance, Multiple, Bacterial | Escherichia coli | Female | Humans | Infant | Klebsiella pneumoniae | Male | Malnutrition | Microbial Sensitivity Tests | Monte Carlo Method | Pseudomonas aeruginosa | Salmonella [SUMMARY]
[CONTENT] Administration, Oral | Anti-Bacterial Agents | Bacteremia | Child | Child, Preschool | Ciprofloxacin | Dehydration | Drug Resistance, Multiple, Bacterial | Escherichia coli | Female | Humans | Infant | Klebsiella pneumoniae | Male | Malnutrition | Microbial Sensitivity Tests | Monte Carlo Method | Pseudomonas aeruginosa | Salmonella [SUMMARY]
[CONTENT] Administration, Oral | Anti-Bacterial Agents | Bacteremia | Child | Child, Preschool | Ciprofloxacin | Dehydration | Drug Resistance, Multiple, Bacterial | Escherichia coli | Female | Humans | Infant | Klebsiella pneumoniae | Male | Malnutrition | Microbial Sensitivity Tests | Monte Carlo Method | Pseudomonas aeruginosa | Salmonella [SUMMARY]
[CONTENT] therapy malnutrition sepsis | malnutrition complicated septic | antibiotics conclusions pharmacokinetics | admission routine antimicrobials | children malnutrition intravenous [SUMMARY]
[CONTENT] therapy malnutrition sepsis | malnutrition complicated septic | antibiotics conclusions pharmacokinetics | admission routine antimicrobials | children malnutrition intravenous [SUMMARY]
[CONTENT] therapy malnutrition sepsis | malnutrition complicated septic | antibiotics conclusions pharmacokinetics | admission routine antimicrobials | children malnutrition intravenous [SUMMARY]
[CONTENT] therapy malnutrition sepsis | malnutrition complicated septic | antibiotics conclusions pharmacokinetics | admission routine antimicrobials | children malnutrition intravenous [SUMMARY]
[CONTENT] therapy malnutrition sepsis | malnutrition complicated septic | antibiotics conclusions pharmacokinetics | admission routine antimicrobials | children malnutrition intravenous [SUMMARY]
[CONTENT] therapy malnutrition sepsis | malnutrition complicated septic | antibiotics conclusions pharmacokinetics | admission routine antimicrobials | children malnutrition intravenous [SUMMARY]
[CONTENT] oral | mg | kg | ciprofloxacin | model | mic | patients | concentration | children | risk [SUMMARY]
[CONTENT] oral | mg | kg | ciprofloxacin | model | mic | patients | concentration | children | risk [SUMMARY]
[CONTENT] oral | mg | kg | ciprofloxacin | model | mic | patients | concentration | children | risk [SUMMARY]
[CONTENT] oral | mg | kg | ciprofloxacin | model | mic | patients | concentration | children | risk [SUMMARY]
[CONTENT] oral | mg | kg | ciprofloxacin | model | mic | patients | concentration | children | risk [SUMMARY]
[CONTENT] oral | mg | kg | ciprofloxacin | model | mic | patients | concentration | children | risk [SUMMARY]
[CONTENT] treatment | severe | malnutrition | children | outcome | gram | severe malnutrition | complicated | studies | ciprofloxacin [SUMMARY]
[CONTENT] mic | estimates | oral | blood | children | ciprofloxacin | study | risk | mg | model [SUMMARY]
[CONTENT] oral | model | mg | kg | 24 | concentration | median | cl | patients | line [SUMMARY]
[CONTENT] oral | risk mortality | oral ciprofloxacin | effective | dose 10 mg kg | dose 10 mg | dose 10 | mortality | ciprofloxacin | 10 mg [SUMMARY]
[CONTENT] oral | declare | mg | mic | ciprofloxacin | kg | children | risk | model | estimates [SUMMARY]
[CONTENT] oral | declare | mg | mic | ciprofloxacin | kg | children | risk | model | estimates [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] 10 mg/kg | 12 hourly | 2 | up to 2 | Kenyan ||| Four | 24 | NONMEM ||| Monte Carlo | AUC(0-24)/MIC | ≥125 [SUMMARY]
[CONTENT] 202 | 52 | 8-102 months ||| 0.6 to 4.5 | one | first ||| 42.7 ||| kg)/70](0.75 | 1 + | Na(+ | 136 | 1 -  | 0.283 | 372 ||| 1 + 0.0291 | Na(+ | 136 ||| 8 ||| Gram-negative ||| 20 mg/kg/day | 30 ||| 30 mg/kg/day | Escherichia | Klebsiella | Salmonella | 60% [SUMMARY]
[CONTENT] 10 mg/kg | three | 30 mg/kg/day ||| [SUMMARY]
[CONTENT] ||| ||| 10 mg/kg | 12 hourly | 2 | up to 2 | Kenyan ||| Four | 24 | NONMEM ||| Monte Carlo | AUC(0-24)/MIC | ≥125 ||| ||| 202 | 52 | 8-102 months ||| 0.6 to 4.5 | one | first ||| 42.7 ||| kg)/70](0.75 | 1 + | Na(+ | 136 | 1 -  | 0.283 | 372 ||| 1 + 0.0291 | Na(+ | 136 ||| 8 ||| Gram-negative ||| 20 mg/kg/day | 30 ||| 30 mg/kg/day | Escherichia | Klebsiella | Salmonella | 60% ||| 10 mg/kg | three | 30 mg/kg/day ||| [SUMMARY]
[CONTENT] ||| ||| 10 mg/kg | 12 hourly | 2 | up to 2 | Kenyan ||| Four | 24 | NONMEM ||| Monte Carlo | AUC(0-24)/MIC | ≥125 ||| ||| 202 | 52 | 8-102 months ||| 0.6 to 4.5 | one | first ||| 42.7 ||| kg)/70](0.75 | 1 + | Na(+ | 136 | 1 -  | 0.283 | 372 ||| 1 + 0.0291 | Na(+ | 136 ||| 8 ||| Gram-negative ||| 20 mg/kg/day | 30 ||| 30 mg/kg/day | Escherichia | Klebsiella | Salmonella | 60% ||| 10 mg/kg | three | 30 mg/kg/day ||| [SUMMARY]
Effect of different acupuncture and moxibustion methods on functional dyspepsia caused by sequelae of COVID-19: A protocol for systematic review and meta-analysis.
36197210
Functional dyspepsia (FD) is a group of conditions that cannot be explained by routine clinical examination and is characterized by postprandial fullness, early satiety, and upper abdominal pain or burning. According to current statistics, FD has become one of the frequent sequelae of coronavirus disease 2019 (COVID-19), reducing patients' quality of life and increasing their psychological burden and economic costs. However, its optimal treatment remains an unresolved problem. A large number of studies have shown that acupuncture and moxibustion are effective and safe in the treatment of FD caused by sequelae of COVID-19, making them worthy of further research. Therefore, based on the current literature, the effectiveness and safety of different acupuncture and moxibustion methods will be systematically evaluated to provide a possible alternative therapy for FD.
BACKGROUND
Eligible studies will be randomized controlled trials that use different acupuncture and moxibustion methods as the sole treatment for FD; study selection and data extraction will be performed by 2 researchers, with a third researcher introduced for arbitration in case of disagreement. Mean difference or relative risk with a fixed- or random-effect model, reported with 95% confidence intervals, will be adopted for the data synthesis. The Cochrane risk of bias assessment tool will be utilized to evaluate the risk of bias. Sensitivity or subgroup analyses will be conducted when high heterogeneity (I² > 50%) is encountered.
METHODS
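The planned synthesis (mean differences or relative risks pooled under fixed- or random-effect models, with an I² heterogeneity check) can be illustrated with a minimal inverse-variance sketch. The DerSimonian-Laird estimator used below is one common random-effects choice; the function is generic, and the numbers fed to it would come from the extracted trial data (the values in the test are illustrative).

```python
def pool(effects, variances):
    """Inverse-variance fixed-effect pooling plus DerSimonian-Laird
    random-effects pooling. Returns (fixed, random, I2_percent)."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0      # between-study variance
    wr = [1 / (v + tau2) for v in variances]             # random-effects weights
    rand = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    return fixed, rand, i2
```

When I² exceeds 50%, as in the protocol's threshold, the random-effects estimate and sensitivity or subgroup analyses become the appropriate summary.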
This meta-analysis will provide a rigorous synthesis of the evidence on different acupuncture and moxibustion methods for FD caused by sequelae of COVID-19.
RESULTS
This meta-analysis will evaluate the effect of acupuncture and moxibustion on FD caused by sequelae of COVID-19, providing evidence to guide treatment in these patients.
CONCLUSION
[ "Acupuncture Therapy", "COVID-19", "Dyspepsia", "Humans", "Meta-Analysis as Topic", "Moxibustion", "Quality of Life", "Systematic Reviews as Topic" ]
9508946
1. Introduction
Coronavirus disease 2019 (COVID-19), caused by SARS-CoV-2, has caused extremely serious harm to human life and health since the outbreak at the end of 2019. As the number of patients recovering from COVID-19 increases, the emergence of long-term persistent symptoms has become another focus of attention for experts worldwide after the acute infection period. A large number of early studies have confirmed that patients with COVID-19 show clinical manifestations of functional dyspepsia (FD) during infection, such as loss of appetite, diarrhea, vomiting and other gastrointestinal symptoms.[1] The frequency of digestive symptoms in patients with COVID-19 differs by region, being highest in North America, followed by Europe and Asia.[2,3] Dyspepsia is a digestive system disease characterized by epigastric symptoms and comprises 2 major etiological types: organic and functional. FD's clinical manifestations are mostly postprandial abdominal bloating and discomfort, early satiety, epigastric pain, vomiting, nausea and belching.[4] At present, the diagnosis of FD is mainly based on the latest Rome IV criteria for functional gastrointestinal diseases, formulated by the Rome Committee in 2016. Compared with the Rome III criteria, the Rome IV criteria have been adjusted in terms of the frequency, severity and subgroup diagnosis of specific symptoms.[5] The diagnosis of FD still requires a disease course of >6 months, with symptoms present during the past 3 months. In addition, the 4 core symptoms (epigastric pain, early satiety, postprandial discomfort and epigastric burning) must reach an uncomfortable level that affects daily life.[6] The overall global prevalence of FD is 11.5% to 14.5%[7] and has been rising in recent years. 
Up to now, the etiology and pathogenesis of FD remain unclear, which complicates clinical treatment, results in repeated medical visits, seriously affects quality of life and consumes substantial medical resources. Therefore, it is necessary to seek effective treatment. At present, there has been much clinical research on the use of gastrointestinal motility drugs for FD, but no uniform consensus has emerged. Conventional therapies include gastro-kinetic agents, acid-suppressive drugs, drugs for eradication of Helicobacter pylori (HP) infection, and antidepressants. Gastro-kinetic agents include dopamine receptor antagonists such as domperidone and itopride, and 5-HT4 receptor agonists such as levosulpiride and mosapride. Acid-suppressive drugs mainly include H2 receptor antagonists and proton pump inhibitors. Antidepressants include doxepin and flunarizine. However, because the mechanism of gastrointestinal symptoms after COVID-19 is unclear, conventional therapies such as gastro-kinetic agents are currently used. Against this background, acupuncture and moxibustion therapies appear increasingly in the clinical treatment of FD.[8] These therapies are diverse, safe and effective, and can effectively alleviate the clinical symptoms of patients with FD. Acupuncture and moxibustion therapy takes a variety of forms and methods, such as ordinary filiform needle acupuncture, abdominal needling, warm acupuncture, electro-acupuncture, mild moxibustion and thermo-sensitive moxibustion, all of which have a good effect on FD and other gastrointestinal diseases. In summary, the vague pathogenesis of FD caused by COVID-19 and its unsystematic treatment have resulted in a lack of supporting systematic evidence, which has had a negative impact on treatment to a certain extent. Moreover, although highly effective, acupuncture and moxibustion therapy is still regarded only as complementary and alternative medicine. 
Therefore, in order to objectively assess the efficacy and safety of these therapies in the treatment of FD, this study aims to collect randomized controlled trials of different methods of acupuncture and moxibustion for FD and to complete a systematic review and meta-analysis, providing a reliable evidence-based basis for clinical application.
2. Methods
This study has been registered as PROSPERO CRD42022346782 (https://www.crd.york.ac.uk/prospero/). This protocol for meta-analysis will be performed according to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols statement.[9]

2.1. Selection criteria

2.1.1. Types of studies. There are no limitations on publication language or publication time. Only randomized controlled trials (RCTs) will be included, which means nonrandomized controlled trials, reviews, case reports, experimental studies, observational cohorts, case-control studies and animal studies will be excluded.

2.1.2. Participants. COVID-19 patients with FD lasting for ≥1 week will be included. There are no restrictions on gender, race, or stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. The diagnosis of COVID-19 may follow Chinese or international diagnostic criteria.

2.1.3. Types of interventions. In addition to the treatment of COVID-19, treatment group interventions comprise different methods of acupuncture and moxibustion. As well as treating COVID-19, the comparator group interventions include comfort therapy (placebo, sham acupuncture, or blank control) and other therapies (Western medicine, usual care, non-drug therapy, etc).

2.2. Outcomes

The primary outcome will be effectiveness, including the quantity of duodenal eosinophils and the status of the vagus nerve and intestinal flora. The secondary outcomes will be gastrointestinal symptoms, health-related quality of life, and mortality and hospitalization rates.

2.3. Search strategy

First, PubMed, Embase, Web of Science, the Chinese National Knowledge Infrastructure database, the Chinese Biomedical Database, the Chinese Science and Technology Periodical database, the WanFang database and the Cochrane Central Register of Controlled Trials will be searched for relevant articles up to July 2022, using a combination of the main search terms “Functional Dyspepsia,” “duodenum eosinophils,” “vagus nerve,” and “intestinal flora” within the restriction limit of “randomized controlled trial.” Taking PubMed as an example, the retrieval strategy is shown in Table 1 (Search strategy for the PubMed database). Second, to include complete and updated outcomes, abstracts and presentations of ongoing RCTs on FD from several of the most important international conferences (e.g., the American Gastroenterological Association) from 2018 to 2022 will be inspected. Last, the reference lists of the relevant articles will be checked for additional articles.
The search process will be conducted systematically by 2 researchers across the 8 databases from their inception to the present date. In case of divergence, a third researcher will be consulted.

2.4. Study selection

We will use EndNote X9 to facilitate document management when screening studies. Data collection and analysis will be conducted by 2 professionally trained researchers. If there are differences, they will resolve them through discussion or seek the opinion of a third researcher. Articles will be initially screened by reviewing titles and abstracts, after which the full texts will be read to exclude unqualified records: not an RCT, incorrect intervention, an RCT that does not meet the inclusion criteria, or no data available for extraction. Finally, the literature meeting the requirements will be evaluated and included in our study. The detailed screening process is shown in Figure 1 (PRISMA flow chart of the study selection process; PRISMA = Preferred Reporting Items for Systematic Review and Meta-Analysis).

2.5. Data extraction

Two reviewers will be responsible for the extraction and management of data according to the retrieval strategy, including study title, journal, year of publication, name of first author, general information, study design, experimental intervention and timing of intervention, results, and adverse events. If there is any disagreement between the 2 reviewers during the data extraction process, the panel will jointly arbitrate and make a decision.
2.6. Risk of bias and quality assessment

For risk of bias assessment, we will use the Cochrane Handbook for Systematic Reviews of Interventions (version 6.3, 2022) to evaluate the quality of the included RCTs.[10] The quality of evidence for the outcomes will be evaluated using the Grading of Recommendations Assessment, Development and Evaluation system.[11] Two researchers will perform the analysis and the results will be cross-checked. We will evaluate the following 6 aspects: selection bias (bias arising from the randomization process); implementation bias (bias due to deviations from intended interventions); measurement bias (bias in measurement of the outcome); follow-up bias (bias due to missing outcome data); reporting bias (bias in selection of the reported result); and other bias. The quality assessment will be conducted systematically by 2 researchers. When data are ambiguous, contradictory, erroneous or missing, a third reviewer will be consulted. To cope with potential divergence, the researchers will contact the corresponding author for clarified, corrected or missing data when needed.

2.7. Assessment of reporting bias

Funnel plots will be used to detect potential reporting bias, and the Begg and Egger tests will be used to assess funnel plot symmetry.

2.8. Statistical analysis

For continuous outcomes, the effect size for the intervention will be calculated as the difference between the means of the intervention and control groups at the end of the intervention. For morbidity and mortality, the relative risk with 95% confidence interval will be calculated. For each outcome, heterogeneity will be assessed using the Cochran Q and I2 statistics; a P value of <.05 for the Cochran Q test and I2 > 50% will be considered significant, respectively. When there is significant heterogeneity, the data will be pooled using a random-effects model; otherwise, a fixed-effects model will be used. Publication bias will be assessed graphically using a funnel plot and mathematically using the Egger test.
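As a worked illustration of the pooling rules above (the registered analyses will be run in the meta-analysis software named below, not in Python), a minimal numpy sketch of Cochran's Q, the I2 statistic, and the fixed- versus random-effects decision might look like this; the trial data are hypothetical:

```python
import numpy as np

def pool_mean_differences(md, se):
    """Pool per-study mean differences, choosing the model by heterogeneity.

    md: per-study mean differences (intervention minus control)
    se: their standard errors
    Returns (pooled estimate, 95% CI, I2 in percent) under the rule used in
    this protocol: random effects if I2 > 50%, otherwise fixed effects.
    """
    md, se = np.asarray(md, float), np.asarray(se, float)
    w = 1.0 / se**2                           # inverse-variance weights
    fixed = np.sum(w * md) / np.sum(w)        # fixed-effect estimate
    q = np.sum(w * (md - fixed) ** 2)         # Cochran's Q
    df = len(md) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    if i2 > 50:                               # DerSimonian-Laird random effects
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)         # between-study variance
        w = 1.0 / (se**2 + tau2)
    est = np.sum(w * md) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (est - 1.96 * se_pooled, est + 1.96 * se_pooled)
    return est, ci, i2

# Hypothetical mean differences in a symptom score from 3 trials
est, ci, i2 = pool_mean_differences([-1.2, -0.8, -1.5], [0.4, 0.5, 0.6])
```

With these hypothetical inputs heterogeneity is negligible (I2 = 0), so the fixed-effects model is selected, matching the decision rule stated above.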
For these analyses, Comprehensive Meta-Analysis software version 2 (Biostat, Englewood, NJ) and STATA 16 software (Stata Corp LP, College Station, TX) will be used.

2.9. Sensitivity and subgroup analysis

Meta-regression will be used to determine whether the effect of acupuncture and moxibustion methods is confounded by baseline clinical characteristics. Subgroup analysis stratified by method (warm acupuncture, thermo-sensitive moxibustion or electroacupuncture) will be performed.

2.10. Ethical issues

This meta-analysis is a literature study.
Ethical approval is not required because this meta-analysis will not involve any subject directly.
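The Egger regression test planned in Sections 2.7 and 2.8 can also be sketched briefly (illustrative only, with hypothetical data; in practice it would be run in STATA): the standardized effect (effect/SE) is regressed on precision (1/SE), and an intercept far from zero indicates funnel plot asymmetry.

```python
import numpy as np

def egger_intercept(effects, se):
    """Egger's regression for funnel plot asymmetry.

    Regress effect/se on 1/se; returns the intercept and its t statistic.
    A P value would come from a t distribution with n - 2 degrees of freedom.
    """
    effects, se = np.asarray(effects, float), np.asarray(se, float)
    y, x = effects / se, 1.0 / se             # standardized effect vs precision
    n = len(effects)
    slope, intercept = np.polyfit(x, y, 1)    # y ≈ intercept + slope * x
    resid = y - (intercept + slope * x)
    s2 = resid @ resid / (n - 2)              # residual variance
    se_int = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / ((x - x.mean()) ** 2).sum()))
    return intercept, intercept / se_int

# Hypothetical effects and standard errors from 5 trials
b0, t_stat = egger_intercept([-1.1, -0.9, -1.4, -0.7, -1.2],
                             [0.30, 0.35, 0.55, 0.25, 0.45])
```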
Author contributions

Xingzhen Lin is the guarantor of the article and will be the arbitrator when disagreements arise. All research members participated in developing the criteria and drafting the protocol for this systematic review. TP, XH and MZ established the search strategy. XH, YX and ZL will independently accomplish the study selection, data extraction and risk of bias assessment. XF, LL and WL will perform the data syntheses. The subsequent and final versions of the protocol were critically reviewed, modified and authorized by all authors.

Methodology: Yue Xiong.

Writing – original draft: Tianzhong Peng, Xuedi Huang, Manhua Zhu, Xinyue Fang, Wanning Lan, Xingzhen Lin.

Writing – review & editing: Tianzhong Peng, Xuedi Huang, Manhua Zhu, Xinju Hou, Xinyue Fang, Zitong Lin, Lu Liu, Wanning Lan, Xingzhen Lin.
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Selection criteria", "2.1.1. Types of studies.", "2.1.2. Participants.", "2.1.3. Types of interventions.", "2.2. Outcomes", "2.3. Search strategy", "2.4. Study selection", "2.5. Data extraction", "2.6. Risk of bias and quality assessment", "2.7. Assessment of reporting bias", "2.8. Statistical analysis", "2.9. Sensitivity and subgroup analysis", "2.10. Ethical issues", "3. Discussion", "Author contributions" ]
[ "Coronavirus disease 2019 (COVID-19) caused by the SARS-CoV-2 has caused extremely serious harm to human’s life and health since the outbreak at the end of 2019. With the number of healers of COVID-19 is increasing, the emergence of long-term persistent symptoms has become another focus of attention for experts from all countries after the acute infection period. A large number of early studies have confirmed that patients with COVID-19 have clinical manifestations of functional dyspepsia (FD) during infection such as loss of appetite, diarrhea, vomiting and other gastrointestinal symptoms.[1] The frequency of digestive symptoms in patients with COVID-19 presents regional differently, for higher in North America, followed by Europe and Asia.[2,3] FD is a digestive system disease characterized by epigastric symptoms, including 2 major types of etiology: organic and functional. FD’s clinical manifestations are mostly postprandial abdominal bloating and discomfort, early satiety, epigastric pain, vomiting, nausea and belching.[4] At present, the diagnosis of FD is mainly based on the latest Rome IV standard for functional gastrointestinal diseases, which was formulated by the Rome Committee in 2016. Compared with the Rome III standard, the Rome IV standard has adjusted in terms of the frequency, severity and subgroup diagnosis of specific symptoms.[5] The diagnosis of FD still requires >6 months’ disease process, as well as the symptoms should having been onset in the past 3 months. In addition, the extent of 4 core symptoms (epigastric pain, early satiety, postprandial discomfort and epigastric burning) was assessed to the degree of symptoms reached an uncomfortable level affecting daily life.[6] The overall global prevalence of FD is 11.5% to 14.5%,[7] at the same time rising in recent years. 
Up to now, the etiology and pathogenesis of FD are still unclear, which bring great difficulties to clinical treatment, resulting in repeated medical treatment, seriously affecting the quality of life and consuming a lot of medical resources. Therefore, it is necessary to seek effective treatment.\nAt present, there have been many clinical research of the use of gastrointestinal motility drugs on FD, but still without uniformity and consensus. Conventional therapies include gastro-kinetic agents, acid suppressive drugs, drugs for eradication of helicobacter pylori (HP) infection, and antidepressants. Gastro-kinetic agent is dopamine receptor antagonists such as domperidone and itopride, 5-HT4 receptor agonists such as levosulpiride and mosapride. Acid suppressive drugs mainly include H2 receptor antagonists, and proton pump inhibitors. Antidepressants are doxepin and flunarizine, etc. However, due to the unclear mechanism of gastrointestinal symptoms after COVID-19, conventional therapies such as gastro-kinetic agents are currently used.\nIn this situation, acupuncture and moxibustion therapies increasingly appear in clinical treatment of FD.[8] The therapies are diverse, safe and effective, which could effectively alleviate the clinical symptoms of FD’s patients. Acupuncture and moxibustion therapy has a variety of forms and methods, such as ordinary filiform needle acupuncture, abdominal needle, warm acupuncture, electro-acupuncture, mild moxibustion, thermo-sensitive moxibustion, etc., having a good effect on FD and other gastrointestinal diseases.\nTo sum up, the vague pathogenesis of FD caused by COVID-19 and unsystematic treatment resulting in the lack of supporting systematic evidence, which has had a negative impact of the treatment to a certain extent. However, acupuncture and moxibustion therapy of highly effective, is only regarded as supplementary medicine and alternative medicine. 
Therefore, in order to objectively understand the efficacy and safety of this oriental therapy in the treatment of FD, this study aims to collect randomized controlled trials of different methods of acupuncture and moxibustion on FD and complete systematic reviews and meta-analyses, providing a reliable evidence-based basis for clinical application.", "This study has been registered as PROSPERO CRD42022346782 (https://www.crd.york.ac.uk/prospero/). This protocol for meta-analysis will be performed according to the Preferred Reporting Items for Systematic Review and Meta analysis Protocols statement.[9]\n2.1. Selection criteria 2.1.1. Types of studies. There are not any limitations on the publication language or publication time. Only randomized controlled trials (RCTs) will be included in our literature, which means nonrandomized controlled trial, reviews, case reports, experimental study, observational cohort, case control studies and animal study will be deleted by our researchers.\nThere are not any limitations on the publication language or publication time. Only randomized controlled trials (RCTs) will be included in our literature, which means nonrandomized controlled trial, reviews, case reports, experimental study, observational cohort, case control studies and animal study will be deleted by our researchers.\n2.1.2. Participants. COVID-19 patients with FD lasted for ≥1 week. There are no restrictions on gender, race, and stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. The diagnosis of COVID-19 includes Chinese or international diagnostic criteria.\nCOVID-19 patients with FD lasted for ≥1 week. There are no restrictions on gender, race, and stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. The diagnosis of COVID-19 includes Chinese or international diagnostic criteria.\n2.1.3. Types of interventions. 
In addition to COVID-19 treatment, the treatment group will receive one of several acupuncture and moxibustion methods. The comparator group, also receiving COVID-19 treatment, will receive comfort therapy (placebo, sham acupuncture, or blank control) or other therapies (Western medicine, usual care, non-drug therapy, etc.).\n2.2. Outcomes\nThe primary outcome will be effectiveness, including the quantity of duodenal eosinophils and the status of the vagus nerve and intestinal flora. The secondary outcomes will be gastrointestinal symptoms, health-related quality of life, and mortality and hospitalization rates.\n2.3. 
Search strategy\nFirst, PubMed, Embase, Web of Science, the Chinese National Knowledge Infrastructure database, the Chinese Biomedical Database, the Chinese Science and Technology Periodical database, the WanFang database, and the Cochrane Central Register of Controlled Trials will be searched for relevant articles up to July 2022, using a combination of the main search terms “Functional Dyspepsia,” “duodenum eosinophils,” “vagus nerve,” and “intestinal flora,” limited to randomized controlled trials. Taking PubMed as an example, the retrieval strategy is shown in Table 1.\nSearch strategy for the PubMed database\nSecond, to capture complete and updated outcomes, abstracts and presentations of ongoing RCTs on FD from major international conferences (American Gastroenterological Association) from 2018 to 2022 will be inspected.\nLast, the reference lists of relevant articles will be checked for additional studies.\nThe search will be conducted systematically by 2 researchers across the 8 databases from their inception to the present date. 
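As a rough illustration of how such a database query might be assembled, the sketch below joins the free-text terms and restricts to RCTs. The OR-combination, field tags, and helper name are assumptions for illustration, not the protocol's exact Table 1 strategy.

```python
# Hypothetical sketch of a PubMed-style Boolean query builder; the term list
# comes from the protocol, but the combination logic and field tags are assumed.
TERMS = ["Functional Dyspepsia", "duodenum eosinophils", "vagus nerve", "intestinal flora"]

def build_pubmed_query(terms, filter_tag="randomized controlled trial[Publication Type]"):
    """Combine free-text terms with OR, then restrict the result set with AND."""
    block = " OR ".join(f'"{t}"[Title/Abstract]' for t in terms)
    return f"({block}) AND {filter_tag}"

query = build_pubmed_query(TERMS)
```

A builder like this keeps the term list in one place so the same strategy can be adapted per database.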
Any divergence will be resolved through discussion with a third researcher.\n2.4. Study selection\nEndNote X9 will be used for document management during screening. Data collection and analysis will be conducted by 2 professionally trained researchers; any differences will be resolved through discussion or by consulting a third researcher. Articles will first be screened by title and abstract, and the full text will then be read to exclude unqualified records: non-RCTs, incorrect interventions, RCTs that do not meet the inclusion criteria, and studies with no extractable data. Finally, the literature meeting the requirements will be evaluated and included in our study. 
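The full-text exclusion rules above can be sketched as a simple filter. This is a minimal illustration assuming each screened record is a dict; the field names are hypothetical, not part of the protocol.

```python
# Minimal sketch of the full-text screening step; the record fields
# (design, intervention_correct, etc.) are hypothetical illustration names.
def passes_full_text_screen(record):
    """Keep only RCTs with a correct intervention that meet the
    inclusion criteria and have extractable data."""
    return (record["design"] == "RCT"
            and record["intervention_correct"]
            and record["meets_inclusion_criteria"]
            and record["has_extractable_data"])

records = [
    {"design": "RCT", "intervention_correct": True,
     "meets_inclusion_criteria": True, "has_extractable_data": True},
    {"design": "case report", "intervention_correct": True,
     "meets_inclusion_criteria": True, "has_extractable_data": True},
]
included = [r for r in records if passes_full_text_screen(r)]  # only the RCT survives
```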
The detailed screening process is shown in Figure 1.\nPRISMA flow chart of study selection process. PRISMA = Preferred Reporting Items for Systematic Review and Meta Analysis.\n2.5. Data extraction\nTwo reviewers will be responsible for the extraction and management of data according to the retrieval strategy, including study title, journal, year of publication, first author, general information, study design, experimental intervention and timing of intervention, results, and adverse events. Any disagreement between the 2 reviewers during data extraction will be arbitrated jointly by the panel.\n2.6. 
Risk of bias and quality assessment\nFor risk of bias assessment, we will use the risk-of-bias tool from the Cochrane Handbook for Systematic Reviews of Interventions (Version 6.3, 2022) to evaluate the quality of the included RCTs.[10] The quality of evidence for the outcomes will be evaluated with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system.[11]\nTwo researchers will perform the analysis and cross-check the results. We will evaluate the following 6 aspects: selection bias (bias arising from the randomization process); implementation bias (bias due to deviations from intended interventions); measurement bias (bias in measurement of the outcome); follow-up bias (bias due to missing outcome data); reporting bias (bias in selection of the reported result); and other bias.\nThe quality assessment will be conducted systematically by 2 researchers. Ambiguous, contradictory, erroneous, or missing data will be discussed with a third reviewer, and the corresponding author will be contacted for clarification or missing data when needed.\n2.7. Assessment of reporting bias\nFunnel plots will be used to detect potential reporting bias, and the Begg and Egger tests will be used to assess funnel plot asymmetry.\n2.8. Statistical analysis\nFor continuous outcomes, the effect size will be calculated as the difference between the means of the intervention and control groups at the end of the intervention. For morbidity and mortality, relative risk with 95% confidence interval will be calculated. 
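The relative-risk calculation just described can be sketched with the standard log-scale confidence interval. The 2x2 counts below are made-up illustration data, not results from any included trial.

```python
import math

# Sketch of a relative risk with a standard log-scale 95% CI for a 2x2 table;
# events_t/n_t are the treatment arm, events_c/n_c the control arm.
def relative_risk(events_t, n_t, events_c, n_c):
    rr = (events_t / n_t) / (events_c / n_c)
    # SE of log(RR) for cohort-style counts
    se_log = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

rr, ci = relative_risk(events_t=8, n_t=50, events_c=16, n_c=50)  # rr = 0.5
```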
For each outcome, heterogeneity will be assessed using the Cochran Q and I² statistics; P < .05 for the Cochran Q test and I² > 50% will be considered significant. When there is significant heterogeneity, the data will be pooled using a random-effects model; otherwise, a fixed-effects model will be used. Publication bias will be assessed graphically using a funnel plot and statistically using the Egger test. These analyses will be performed with Comprehensive Meta Analysis version 2 (Biostat, Englewood, NJ) and Stata 16 (Stata Corp LP, College Station, TX).\n2.9. Sensitivity and subgroup analysis\nMeta-regression will be used to determine whether the effect of acupuncture and moxibustion methods is confounded by baseline clinical characteristics. Subgroup analysis stratified by method (warm acupuncture, thermo-sensitive moxibustion, or electroacupuncture) will be performed.\n2.10. Ethical issues\nThis meta-analysis is a literature study; ethical approval is not required because it will not directly involve any subjects.", "2.1.1. Types of studies. There are no restrictions on publication language or date. 
Only randomized controlled trials (RCTs) will be included; non-randomized trials, reviews, case reports, experimental studies, observational cohorts, case-control studies, and animal studies will be excluded.\n2.1.2. Participants. COVID-19 patients with FD lasting ≥1 week. There are no restrictions on gender, race, or stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. COVID-19 may be diagnosed according to Chinese or international diagnostic criteria.\n2.1.3. Types of interventions. In addition to COVID-19 treatment, the treatment group will receive one of several acupuncture and moxibustion methods; the comparator group, also receiving COVID-19 treatment, will receive comfort therapy (placebo, sham acupuncture, or blank control) or other therapies (Western medicine, usual care, non-drug therapy, etc.).", "There are no restrictions on publication language or date. 
Only randomized controlled trials (RCTs) will be included; non-randomized trials, reviews, case reports, experimental studies, observational cohorts, case-control studies, and animal studies will be excluded.", "COVID-19 patients with FD lasting ≥1 week. There are no restrictions on gender, race, or stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. COVID-19 may be diagnosed according to Chinese or international diagnostic criteria.", "In addition to COVID-19 treatment, the treatment group will receive one of several acupuncture and moxibustion methods; the comparator group, also receiving COVID-19 treatment, will receive comfort therapy (placebo, sham acupuncture, or blank control) or other therapies (Western medicine, usual care, non-drug therapy, etc.).", "The primary outcome will be effectiveness, including the quantity of duodenal eosinophils and the status of the vagus nerve and intestinal flora. 
The secondary outcomes will be gastrointestinal symptoms, health-related quality of life, and mortality and hospitalization rates.", "First, PubMed, Embase, Web of Science, the Chinese National Knowledge Infrastructure database, the Chinese Biomedical Database, the Chinese Science and Technology Periodical database, the WanFang database, and the Cochrane Central Register of Controlled Trials will be searched for relevant articles up to July 2022, using a combination of the main search terms “Functional Dyspepsia,” “duodenum eosinophils,” “vagus nerve,” and “intestinal flora,” limited to randomized controlled trials. Taking PubMed as an example, the retrieval strategy is shown in Table 1.\nSearch strategy for the PubMed database\nSecond, to capture complete and updated outcomes, abstracts and presentations of ongoing RCTs on FD from major international conferences (American Gastroenterological Association) from 2018 to 2022 will be inspected.\nLast, the reference lists of relevant articles will be checked for additional studies.\nThe search will be conducted systematically by 2 researchers across the 8 databases from their inception to the present date; any divergence will be resolved through discussion with a third researcher.", "EndNote X9 will be used for document management during screening. Data collection and analysis will be conducted by 2 professionally trained researchers; any differences will be resolved through discussion or by consulting a third researcher. Articles will first be screened by title and abstract, and the full text will then be read to exclude unqualified records: non-RCTs, incorrect interventions, RCTs that do not meet the inclusion criteria, and studies with no extractable data. Finally, the literature meeting the requirements will be evaluated and included in our study. 
The detailed screening process is shown in Figure 1.\nPRISMA flow chart of study selection process. PRISMA = Preferred Reporting Items for Systematic Review and Meta Analysis.", "Two reviewers will be responsible for the extraction and management of data according to the retrieval strategy, including study title, journal, year of publication, first author, general information, study design, experimental intervention and timing of intervention, results, and adverse events. Any disagreement between the 2 reviewers during data extraction will be arbitrated jointly by the panel.", "For risk of bias assessment, we will use the risk-of-bias tool from the Cochrane Handbook for Systematic Reviews of Interventions (Version 6.3, 2022) to evaluate the quality of the included RCTs.[10] The quality of evidence for the outcomes will be evaluated with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system.[11]\nTwo researchers will perform the analysis and cross-check the results. We will evaluate the following 6 aspects: selection bias (bias arising from the randomization process); implementation bias (bias due to deviations from intended interventions); measurement bias (bias in measurement of the outcome); follow-up bias (bias due to missing outcome data); reporting bias (bias in selection of the reported result); and other bias.\nThe quality assessment will be conducted systematically by 2 researchers. Ambiguous, contradictory, erroneous, or missing data will be discussed with a third reviewer, and the corresponding author will be contacted for clarification or missing data when needed.", "Funnel plots will be used to detect potential reporting bias. 
The Begg and Egger tests will be used to assess funnel plot asymmetry and detect publication bias.", "For continuous outcomes, the effect size will be calculated as the difference between the means of the intervention and control groups at the end of the intervention. For morbidity and mortality, relative risk with 95% confidence interval will be calculated. For each outcome, heterogeneity will be assessed using the Cochran Q and I² statistics; P < .05 for the Cochran Q test and I² > 50% will be considered significant. When there is significant heterogeneity, the data will be pooled using a random-effects model; otherwise, a fixed-effects model will be used. Publication bias will be assessed graphically using a funnel plot and statistically using the Egger test. These analyses will be performed with Comprehensive Meta Analysis version 2 (Biostat, Englewood, NJ) and Stata 16 (Stata Corp LP, College Station, TX).", "Meta-regression will be used to determine whether the effect of acupuncture and moxibustion methods is confounded by baseline clinical characteristics. Subgroup analysis stratified by method (warm acupuncture, thermo-sensitive moxibustion, or electroacupuncture) will be performed.", "This meta-analysis is a literature study; ethical approval is not required because it will not directly involve any subjects.", "Regarding FD as an important sequela of COVID-19,[2] much of the literature has examined the necessity of treating FD, so that humanistic care for patients with COVID-19 can be further demonstrated. Although the effect of acupuncture and moxibustion methods in FD patients has been reported, there is still insufficient meta-analytic evidence to support the conclusion. Moreover, acupuncture and moxibustion for the sequelae of COVID-19 have not yet been taken seriously. 
To the best of our knowledge, this is the first meta-analysis protocol on different acupuncture and moxibustion methods for FD as a sequela of COVID-19. The results will evaluate whether different methods of acupuncture and moxibustion are beneficial for the long-term management of FD in patients with COVID-19, providing evidence regarding the choice of acupuncture and moxibustion in these patients.", "Xingzhen Lin is the guarantor of the article and will be the arbitrator in case of disagreement. All research members participated in developing the criteria and drafting the protocol for this systematic review. TP, XH and MZ established the search strategy. XH, YX and ZL will independently accomplish the study selection, data extraction and risk of bias assessment. XF, LL and WL will perform the data syntheses. The subsequent and final versions of the protocol were critically reviewed, modified and authorized by all authors.\nMethodology: Yue Xiong.\nWriting – original draft: Tianzhong Peng, Xuedi Huang, Manhua Zhu, Xinyue Fang, Wanning Lan, Xingzhen Lin.\nWriting – review & editing: Tianzhong Peng, Xuedi Huang, Manhua Zhu, Xinju Hou, Xinyue Fang, Zitong Lin, Lu Liu, Wanning Lan, Xingzhen Lin." ]
[ "intro", "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, "discussion", null ]
[ "acupuncture and moxibustion", "COVID-19", "functional dyspepsia", "protocol", "systematic review" ]
1. Introduction: Coronavirus disease 2019 (COVID-19), caused by SARS-CoV-2, has done extremely serious harm to human life and health since the outbreak at the end of 2019. As the number of people recovering from COVID-19 increases, the emergence of long-term persistent symptoms has become another focus of attention for experts worldwide after the acute infection period. A large number of early studies have confirmed that patients with COVID-19 have clinical manifestations of functional dyspepsia (FD) during infection, such as loss of appetite, diarrhea, vomiting, and other gastrointestinal symptoms.[1] The frequency of digestive symptoms in patients with COVID-19 differs by region, being highest in North America, followed by Europe and Asia.[2,3] Dyspepsia is a digestive system disease characterized by epigastric symptoms, with 2 major etiologic types: organic and functional. The clinical manifestations of FD are mostly postprandial abdominal bloating and discomfort, early satiety, epigastric pain, vomiting, nausea, and belching.[4] At present, the diagnosis of FD is mainly based on the latest Rome IV criteria for functional gastrointestinal diseases, formulated by the Rome Committee in 2016. Compared with the Rome III criteria, the Rome IV criteria adjusted the frequency, severity, and subgroup diagnosis of specific symptoms.[5] The diagnosis of FD still requires a disease course of >6 months, with symptoms present during the past 3 months. In addition, the 4 core symptoms (epigastric pain, early satiety, postprandial discomfort, and epigastric burning) must reach an uncomfortable level that affects daily life.[6] The overall global prevalence of FD is 11.5% to 14.5%[7] and has been rising in recent years. Up to now, the etiology and pathogenesis of FD remain unclear, which brings great difficulty to clinical treatment, resulting in repeated medical visits, seriously affecting quality of life, and consuming substantial medical resources. Therefore, it is necessary to seek effective treatment. At present, there has been much clinical research on the use of gastrointestinal motility drugs for FD, but still without uniformity or consensus. Conventional therapies include gastro-kinetic agents, acid-suppressive drugs, drugs for eradication of Helicobacter pylori (HP) infection, and antidepressants. Gastro-kinetic agents include dopamine receptor antagonists such as domperidone and itopride, and 5-HT4 receptor agonists such as levosulpiride and mosapride. Acid-suppressive drugs mainly include H2 receptor antagonists and proton pump inhibitors. Antidepressants include doxepin and flunarizine, etc. However, because the mechanism of gastrointestinal symptoms after COVID-19 is unclear, conventional therapies such as gastro-kinetic agents are currently used. In this situation, acupuncture and moxibustion therapies increasingly appear in the clinical treatment of FD.[8] These therapies are diverse, safe, and effective, and can effectively alleviate the clinical symptoms of FD patients. Acupuncture and moxibustion therapy takes a variety of forms and methods, such as ordinary filiform needle acupuncture, abdominal needling, warm acupuncture, electro-acupuncture, mild moxibustion, and thermo-sensitive moxibustion, with good effects on FD and other gastrointestinal diseases. In summary, the vague pathogenesis of FD after COVID-19 and the unsystematic treatment have resulted in a lack of supporting systematic evidence, which has negatively affected treatment to a certain extent. However, acupuncture and moxibustion therapy, although highly effective, is still regarded only as complementary and alternative medicine. 
Therefore, in order to objectively understand the efficacy and safety of this oriental therapy in the treatment of FD, this study aims to collect randomized controlled trials of different acupuncture and moxibustion methods for FD and to complete a systematic review and meta-analysis, providing a reliable evidence-based basis for clinical application. 2. Methods: This study has been registered as PROSPERO CRD42022346782 (https://www.crd.york.ac.uk/prospero/). This protocol will be reported according to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols statement.[9] 2.1. Selection criteria 2.1.1. Types of studies. There are no restrictions on publication language or date. Only randomized controlled trials (RCTs) will be included; non-randomized trials, reviews, case reports, experimental studies, observational cohorts, case-control studies, and animal studies will be excluded. 2.1.2. Participants. COVID-19 patients with FD lasting ≥1 week. There are no restrictions on gender, race, or stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. COVID-19 may be diagnosed according to Chinese or international diagnostic criteria. 2.1.3. Types of interventions. In addition to COVID-19 treatment, the treatment group will receive one of several acupuncture and moxibustion methods; the comparator group, also receiving COVID-19 treatment, will receive comfort therapy (placebo, sham acupuncture, or blank control) or other therapies (Western medicine, usual care, non-drug therapy, etc.). 2.2. Outcomes The primary outcome will be effectiveness, including the quantity of duodenal eosinophils and the status of the vagus nerve and intestinal flora. The secondary outcomes will be gastrointestinal symptoms, health-related quality of life, and mortality and hospitalization rates. 2.3. Search strategy First, PubMed, Embase, Web of Science, the Chinese National Knowledge Infrastructure database, the Chinese Biomedical Database, the Chinese Science and Technology Periodical database, the WanFang database, and the Cochrane Central Register of Controlled Trials will be searched for relevant articles up to July 2022, using a combination of the main search terms “Functional Dyspepsia,” “duodenum eosinophils,” “vagus nerve,” and “intestinal flora,” limited to randomized controlled trials. Taking PubMed as an example, the retrieval strategy is shown in Table 1. Search strategy for the PubMed database Second, to capture complete and updated outcomes, abstracts and presentations of ongoing RCTs on FD from major international conferences (American Gastroenterological Association) from 2018 to 2022 will be inspected. Last, the reference lists of relevant articles will be checked for additional studies. The search will be conducted systematically by 2 researchers across the 8 databases from their inception to the present date; any divergence will be resolved through discussion with a third researcher. 2.4. Study selection EndNote X9 will be used for document management during screening. Data collection and analysis will be conducted by 2 professionally trained researchers; any differences will be resolved through discussion or by consulting a third researcher. Articles will first be screened by title and abstract, and the full text will then be read to exclude unqualified records: non-RCTs, incorrect interventions, RCTs that do not meet the inclusion criteria, and studies with no extractable data. Finally, the literature meeting the requirements will be evaluated and included in our study. The detailed screening process is shown in Figure 1. PRISMA flow chart of study selection process. PRISMA = Preferred Reporting Items for Systematic Review and Meta Analysis. 2.5. Data extraction Two reviewers will be responsible for the extraction and management of data according to the retrieval strategy, including study title, journal, year of publication, first author, general information, study design, experimental intervention and timing of intervention, results, and adverse events. Any disagreement between the 2 reviewers during data extraction will be arbitrated jointly by the panel. 
Two reviewers will be responsible for the extraction and management of data according to the retrieval strategy, including study title, journal, year of publication, name of first author, general information, study design, experimental intervention and timing of intervention, results, and adverse events. If there is any disagreement between the 2 reviewers during the data extraction process, the panel will jointly arbitrate and make a decision. 2.6. Risk of bias and quality assessment As for risk of bias assessment, we will use the tool, Cochrane System Reviewer Manual (Version 6.3, 2022) to evaluate the quality of the included RCTs.[10] The quality of evidence for the outcomes will be evaluated by the use of the Grading of Recommendations Assessment, Development and Evaluation system.[11] Two researchers will perform the analysis and the results will be cross-checked. We will evaluate the following 6 aspects: Selection bias: Bias arising from the randomization process; Implementation bias: Bias due to deviations from intended interventions; Measurement bias: Bias in measurement of the outcome; Follow-up bias: Bias due to missing outcome data; Reporting bias: Bias in selection of the reported result; Other bias. The process of quality assessment will be conducted systematically by 2 researchers. When data meet with ambiguity, contradiction, errors or omission, discussion with a third reviewer will be conducted. So as to cope with potential divergence, the researcher will contact the corresponding author for the clarified, correct or missing data, when needed. As for risk of bias assessment, we will use the tool, Cochrane System Reviewer Manual (Version 6.3, 2022) to evaluate the quality of the included RCTs.[10] The quality of evidence for the outcomes will be evaluated by the use of the Grading of Recommendations Assessment, Development and Evaluation system.[11] Two researchers will perform the analysis and the results will be cross-checked. 
We will evaluate the following 6 aspects: Selection bias: Bias arising from the randomization process; Implementation bias: Bias due to deviations from intended interventions; Measurement bias: Bias in measurement of the outcome; Follow-up bias: Bias due to missing outcome data; Reporting bias: Bias in selection of the reported result; Other bias. The process of quality assessment will be conducted systematically by 2 researchers. When data meet with ambiguity, contradiction, errors or omission, discussion with a third reviewer will be conducted. So as to cope with potential divergence, the researcher will contact the corresponding author for the clarified, correct or missing data, when needed. 2.7. Assessment of reporting bias Funnel plots will be conducted to evaluate reported deviations. We will use funnel plots to detect potential reporting bias. Begg and Egger test will be used to assess the symmetry of the funnel, draw and detect release deviations. Funnel plots will be conducted to evaluate reported deviations. We will use funnel plots to detect potential reporting bias. Begg and Egger test will be used to assess the symmetry of the funnel, draw and detect release deviations. 2.8. Statistical analysis For continuous outcomes, the effect size for the intervention will be calculated by the difference between the means of the intervention and control groups at the end of the intervention. For morbidity and mortality, relative risk with 95% confidence interval will be calculated. For each outcome, heterogeneity will be assessed using the Cochran Q and I2 statistic; for the Cochran Q and I2 statistic, a P value of <.05 and I2 > 50% will be considered significant, respectively. When there is significant heterogeneity, the data will be pooled using a random-effects model; otherwise, a fixed-effects model will be used. Publication bias will be assessed graphically using a funnel plot and mathematically using Egger test. 
For these analyses, Comprehensive Meta Analysis Software version 2 (Biostat, Englewood, NJ) and STATA 16 software (Stata Corp LP, College Station, TX) will be used. For continuous outcomes, the effect size for the intervention will be calculated by the difference between the means of the intervention and control groups at the end of the intervention. For morbidity and mortality, relative risk with 95% confidence interval will be calculated. For each outcome, heterogeneity will be assessed using the Cochran Q and I2 statistic; for the Cochran Q and I2 statistic, a P value of <.05 and I2 > 50% will be considered significant, respectively. When there is significant heterogeneity, the data will be pooled using a random-effects model; otherwise, a fixed-effects model will be used. Publication bias will be assessed graphically using a funnel plot and mathematically using Egger test. For these analyses, Comprehensive Meta Analysis Software version 2 (Biostat, Englewood, NJ) and STATA 16 software (Stata Corp LP, College Station, TX) will be used. 2.9. Sensitivity and subgroup analysis Meta-regression will be used to determine whether the effect of acupuncture and moxibustion methods will be confounded by baseline clinical characteristics. Subgroup analysis stratified by route of different methods (warm acupuncture, thermo-sensitive moxibustion or electroacupuncture) will be performed. Meta-regression will be used to determine whether the effect of acupuncture and moxibustion methods will be confounded by baseline clinical characteristics. Subgroup analysis stratified by route of different methods (warm acupuncture, thermo-sensitive moxibustion or electroacupuncture) will be performed. 2.10. Ethical issues This meta-analysis is a literature study. Ethical approval is not required because this meta-analysis will not involve any subject directly. This meta-analysis is a literature study. 
Ethical approval is not required because this meta-analysis will not involve any subject directly. 2.1. Selection criteria: 2.1.1. Types of studies. There are not any limitations on the publication language or publication time. Only randomized controlled trials (RCTs) will be included in our literature, which means nonrandomized controlled trial, reviews, case reports, experimental study, observational cohort, case control studies and animal study will be deleted by our researchers. There are not any limitations on the publication language or publication time. Only randomized controlled trials (RCTs) will be included in our literature, which means nonrandomized controlled trial, reviews, case reports, experimental study, observational cohort, case control studies and animal study will be deleted by our researchers. 2.1.2. Participants. COVID-19 patients with FD lasted for ≥1 week. There are no restrictions on gender, race, and stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. The diagnosis of COVID-19 includes Chinese or international diagnostic criteria. COVID-19 patients with FD lasted for ≥1 week. There are no restrictions on gender, race, and stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. The diagnosis of COVID-19 includes Chinese or international diagnostic criteria. 2.1.3. Types of interventions. In addition to the treatment of COVID-19, treatment group interventions comprised different methods of acupuncture and moxibustion. As well as treating COVID-19, the comparator groups intervention included comfort therapy (placebo, pseudo-acupuncture, or blank control), other therapies (Western medicine, usual care or non-drug therapy, etc). In addition to the treatment of COVID-19, treatment group interventions comprised different methods of acupuncture and moxibustion. 
As well as treating COVID-19, the comparator groups intervention included comfort therapy (placebo, pseudo-acupuncture, or blank control), other therapies (Western medicine, usual care or non-drug therapy, etc). 2.1.1. Types of studies.: There are not any limitations on the publication language or publication time. Only randomized controlled trials (RCTs) will be included in our literature, which means nonrandomized controlled trial, reviews, case reports, experimental study, observational cohort, case control studies and animal study will be deleted by our researchers. 2.1.2. Participants.: COVID-19 patients with FD lasted for ≥1 week. There are no restrictions on gender, race, and stage of disease. Patients with a history of FD before COVID-19 infection will be excluded. The diagnosis of COVID-19 includes Chinese or international diagnostic criteria. 2.1.3. Types of interventions.: In addition to the treatment of COVID-19, treatment group interventions comprised different methods of acupuncture and moxibustion. As well as treating COVID-19, the comparator groups intervention included comfort therapy (placebo, pseudo-acupuncture, or blank control), other therapies (Western medicine, usual care or non-drug therapy, etc). 2.2. Outcomes: The primary outcome will be effectiveness including the quantity of duodenum eosinophils, the status of vagus nerve and intestinal flora. The secondary outcomes will be gastrointestinal symptoms, health related quality of life, and mortality and hospitalization rates. 2.3. 
Search strategy: First, PubMed, Embase, Web of Science, the Chinese National Knowledge Infrastructure database, the Chinese Biomedical Database, the Chinese Science and Technology Periodical database, the WanFang database, and the Cochrane Central Register of Controlled Trials will be searched for relevant articles from their inception to July 2022, using a combination of the main search terms “Functional Dyspepsia,” “duodenum eosinophils,” “vagus nerve,” and “intestinal flora” with the filter “randomized controlled trial.” Taking PubMed as an example, the retrieval strategy is shown in Table 1 (Search strategy for the PubMed database). Second, to capture complete and updated outcomes, abstracts and presentations of ongoing RCTs on FD from major international conferences (e.g., the American Gastroenterological Association) from 2018 to 2022 will be inspected. Last, the reference lists of relevant articles will be checked for additional articles. The search will be conducted systematically by 2 researchers across the 8 databases; any divergence will be resolved through discussion with a third researcher. 2.4. Study selection: EndNote X9 will be used to facilitate document management during screening. Data collection and analysis will be conducted by 2 professionally trained researchers; any differences will be resolved through discussion or by consulting a third researcher. Articles will be initially screened by reviewing each study’s title and abstract, and the full text will then be read to exclude unqualified documents: non-RCTs, incorrect interventions, RCTs that do not meet the inclusion criteria, and studies with no extractable data. Finally, the literature meeting the requirements will be evaluated and included in our study. The detailed screening process is shown in Figure 1. PRISMA flow chart of study selection process.
PRISMA = Preferred Reporting Items for Systematic Reviews and Meta-Analyses. 2.5. Data extraction: Two reviewers will be responsible for the extraction and management of data according to the retrieval strategy, including study title, journal, year of publication, name of first author, general information, study design, experimental intervention and timing of intervention, results, and adverse events. Any disagreement between the 2 reviewers during data extraction will be jointly arbitrated and decided by the panel. 2.6. Risk of bias and quality assessment: For risk of bias assessment, we will use the tool described in the Cochrane Handbook for Systematic Reviews of Interventions (Version 6.3, 2022) to evaluate the quality of the included RCTs.[10] The quality of evidence for the outcomes will be evaluated using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system.[11] Two researchers will perform the analysis and the results will be cross-checked. We will evaluate the following 6 aspects: selection bias (bias arising from the randomization process); implementation bias (bias due to deviations from intended interventions); measurement bias (bias in measurement of the outcome); follow-up bias (bias due to missing outcome data); reporting bias (bias in selection of the reported result); and other bias. The quality assessment will be conducted systematically by 2 researchers; ambiguous, contradictory, erroneous, or missing data will be discussed with a third reviewer and, when needed, the corresponding author will be contacted for clarification or for the missing data. 2.7. Assessment of reporting bias: Funnel plots will be used to detect potential reporting bias, and Begg’s and Egger’s tests will be used to assess funnel plot asymmetry. 2.8.
Statistical analysis: For continuous outcomes, the effect size for the intervention will be calculated as the difference between the means of the intervention and control groups at the end of the intervention. For morbidity and mortality, the relative risk with its 95% confidence interval will be calculated. For each outcome, heterogeneity will be assessed using the Cochran Q test and the I² statistic; a P value < .05 for the Q test and I² > 50% will be considered significant, respectively. When there is significant heterogeneity, the data will be pooled using a random-effects model; otherwise, a fixed-effects model will be used. Publication bias will be assessed graphically using a funnel plot and statistically using Egger’s test. For these analyses, Comprehensive Meta Analysis software version 2 (Biostat, Englewood, NJ) and Stata 16 (StataCorp LP, College Station, TX) will be used. 2.9. Sensitivity and subgroup analysis: Meta-regression will be used to determine whether the effect of acupuncture and moxibustion methods is confounded by baseline clinical characteristics. Subgroup analysis stratified by method (warm acupuncture, thermo-sensitive moxibustion, or electroacupuncture) will be performed. 2.10. Ethical issues: This meta-analysis is a literature study; ethical approval is not required because it will not directly involve any subjects. 3. Discussion: As FD is an important sequela of COVID-19,[2] many studies have emphasized the necessity of treating FD so that humanistic care for patients with COVID-19 can be further realized. Although the effects of acupuncture and moxibustion methods in FD patients have been reported, meta-analytic evidence supporting this conclusion remains insufficient. Moreover, acupuncture and moxibustion for the sequelae of COVID-19 have not yet received due attention.
To the best of our knowledge, this is the first meta-analysis protocol on different acupuncture and moxibustion methods for FD caused by sequelae of COVID-19. The results will evaluate whether different acupuncture and moxibustion methods are beneficial for the long-term management of FD in patients with COVID-19, providing evidence regarding the choice of acupuncture and moxibustion in these patients. Author contributions: Xingzhen Lin is the guarantor of the article and will be the arbitrator in case of disagreements. All research members participated in developing the criteria and drafting the protocol for this systematic review. TP, XH and MZ established the search strategy. XH, YX and ZL will independently accomplish the study selection, data extraction and risk of bias assessment. XF, LL and WL will perform the data syntheses. The subsequent and final versions of the protocol were critically reviewed, modified and authorized by all authors. Methodology: Yue Xiong. Writing – original draft: Tianzhong Peng, Xuedi Huang, Manhua Zhu, Xinyue Fang, Wanning Lan, Xingzhen Lin. Writing – review & editing: Tianzhong Peng, Xuedi Huang, Manhua Zhu, Xinju Hou, Xinyue Fang, Zitong Lin, Lu Liu, Wanning Lan, Xingzhen Lin.
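As a supplementary illustration of the pooling rule stated in the statistical analysis plan (fixed-effects pooling unless Cochran's Q and I² indicate significant heterogeneity, i.e., I² > 50%, in which case a random-effects model is used), the logic can be sketched in Python. This is only an illustrative sketch under stated assumptions: the function name and the example numbers are hypothetical, the random-effects variant shown is the DerSimonian-Laird estimator, and the protocol itself specifies Comprehensive Meta Analysis and Stata for the actual analyses.

```python
import math

def pool_effects(effects, variances):
    """Illustrative (hypothetical) inverse-variance pooling of per-study
    effect sizes (e.g., log relative risks). Heterogeneity is assessed
    with Cochran's Q and I^2; per the protocol's rule, a random-effects
    (DerSimonian-Laird) model is used when I^2 > 50%, otherwise a
    fixed-effects model."""
    w = [1.0 / v for v in variances]                # fixed-effects weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effects mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 in percent
    if i2 > 50:
        # DerSimonian-Laird between-study variance (tau^2)
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        w_re = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        model = "random"
    else:
        pooled, se, model = fixed, math.sqrt(1.0 / sum(w)), "fixed"
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)   # 95% confidence interval
    return {"model": model, "i2": i2, "pooled": pooled, "ci95": ci}

# Hypothetical log-RR values and variances for three trials:
print(pool_effects([0.10, 0.25, 0.18], [0.02, 0.03, 0.025]))
```

With the low-heterogeneity sample numbers above, Q falls below its degrees of freedom, I² is truncated to 0%, and the fixed-effects branch is taken; widely divergent effects would instead trigger the random-effects branch.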
Background: Functional dyspepsia (FD) is a group of disorders that cannot be explained after routine clinical examination and is characterized by postprandial fullness, early satiety, and upper abdominal pain or burning. According to the statistics, FD has become one of the high-risk sequelae of coronavirus disease 2019 (COVID-19), affecting patients' quality of life and increasing their psychological burden and economic costs. However, its optimal treatment remains an unresolved problem. A large number of studies have shown that acupuncture and moxibustion are effective and safe in the treatment of FD caused by sequelae of COVID-19, which makes them worth investigating. Therefore, based on the current literature, the effectiveness and safety of different acupuncture and moxibustion methods will be systematically evaluated to provide a possible alternative therapy for FD. Methods: Two researchers will search for eligible randomized controlled trials that use different acupuncture and moxibustion methods as the sole treatment for FD and will extract their data. In case of disagreement, a third researcher will be introduced for arbitration. Mean differences or relative risks with 95% confidence intervals, under fixed- or random-effects models, will be adopted for the data synthesis. To evaluate the risk of bias, the Cochrane risk of bias assessment tool will be utilized. Sensitivity or subgroup analyses will also be conducted in cases of high heterogeneity (I² > 50%). Results: This meta-analysis will provide an authentic synthesis of different acupuncture and moxibustion methods for FD caused by sequelae of COVID-19. Conclusions: This meta-analysis will evaluate the effect of acupuncture and moxibustion on FD caused by sequelae of COVID-19, providing evidence as to the treatment of these patients.
Keywords: acupuncture and moxibustion | COVID-19 | functional dyspepsia | protocol | systematic review
MeSH terms: Acupuncture Therapy | COVID-19 | Dyspepsia | Humans | Meta-Analysis as Topic | Moxibustion | Quality of Life | Systematic Reviews as Topic
"I Just Went for the Screening, But I Did Not Go for the Results". Utilization of Cervical Cancer Screening and Vaccination among Females at Oyibi Community.
34181335
Cervical cancer screening and vaccination practices are reported to have low coverage in most developing countries. It has been reported that most women worldwide are aware of cervical cancer screening and vaccination. Nevertheless, the rate at which women participate in cervical cancer screening and vaccination has been found to be low both locally and internationally. Consequently, in sub-Saharan Africa, cervical cancer screening programs have poor coverage. The aim of this study was to explore the practices of cervical cancer screening and vaccination among females in the Oyibi community.
BACKGROUND
The researchers employed a qualitative exploratory design and recruited 35 participants, purposively assigned to five Focus Group Discussions (FGDs) of seven (7) members each. The sample size was based on data saturation. Data were collected using a semi-structured interview guide, with the researchers serving as moderators of the groups.
METHODS
Two (2) main themes with eight (8) subthemes were generated from the data analysis. The themes were cervical cancer screening and vaccination practices, and perceived benefits of cervical cancer screening and vaccination. The subthemes that emerged were: types of cervical screening and vaccination done by participants; experiences during cervical cancer screening; experiences during cervical cancer vaccination; the decision to go for cervical cancer screening and vaccination; willingness to recommend cervical cancer screening and vaccination to other women; early detection of cervical cancer through early screening; benefits of cervical cancer vaccination; and willingness to receive the cervical cancer vaccine. The study also revealed that most of the women who had undergone screening and vaccination were young (19-29 years).
RESULTS
The results from the study indicated that the participants' utilization of cervical cancer screening and vaccination was poor, although they were conscious of the benefits of cervical cancer screening and vaccination and were willing to recommend them to their relatives and loved ones.
CONCLUSION
[ "Adult", "Female", "Focus Groups", "Ghana", "Health Knowledge, Attitudes, Practice", "Humans", "Mass Screening", "Papillomavirus Infections", "Papillomavirus Vaccines", "Patient Acceptance of Health Care", "Qualitative Research", "Uterine Cervical Neoplasms" ]
8418834
Introduction
Cervical cancer screening and vaccination practices are reported to be low in most developing countries. Most studies have ascertained that even though most women are aware of the types of screening for cervical cancer, few have participated (Markovic et al., 2005; Jassim et al., 2018; Liebermann et al., 2020). For instance, the study of Bakogianni et al. (2012) among 472 female students ascertained that although the majority of the participants (94.07%) knew about the Pap test, only 44.82% had been screened and vaccinated against cervical cancer. The major sources of information about cervical cancer screening and vaccination identified by authors included the media, relatives, friends, and health workers (Ezem, 2007; Awodele et al., 2011). The majority of the participants in one study indicated that they received the vaccine after their first sexual intercourse (Bakogianni et al., 2012). The reasons cited by some females for not screening and vaccinating included lack of appreciation of the importance of screening, feelings of embarrassment, fear, and the attendant high cost (Makwe and Ihuma, 2012). In South Africa, a study revealed that only 16 (9.8%) participants had had a Pap smear test done, of whom 11 (69%) knew their result (Hogue et al., 2014). A point worth noting is that, even though awareness and knowledge of cervical cancer were high among staff, their patronage of screening was low (Owoeye and Ibrahim, 2013). Cervical cancer is a type of cancer which affects the cells of the cervix and is associated with the Human Papilloma Virus (HPV) (Brisson and Drolet, 2019). There are several types of HPV, but the two common types linked to cervical cancer are types 16 and 18 (Arbyn et al., 2020; Vu et al., 2013). Cervical cancer screening is a form of analysis to detect HPV and precancerous cells, aiding in the reduction of cervical cancer incidence and morbidity (Massad et al., 2013).
The Pap smear test is one type of cervical cancer screening in which a sample is obtained from the cervix, smeared onto a labeled glass slide, and fixed with 95% ethyl alcohol in a jar for analysis (Sachan et al., 2018). In sub-Saharan Africa, cervical cancer screening programs have poor coverage (Irabor et al., 2017). For example, it has been found that cervical cancer screening coverage in sub-Saharan Africa ranges from 2% to 20% in urban areas and from 0.4% to 14% in rural areas (Louie et al., 2009). Sixty to eighty percent (60-80%) of women who develop cervical cancer in sub-Saharan Africa live in rural areas with no opportunity to take part in cervical screening (Irabor et al., 2017). Also, cervical cancer screening programs have been found to have better coverage among those of higher socio-economic class, which still leaves the people of sub-Saharan Africa among those with poor coverage of screening programs and vaccination utilization (Irabor et al., 2018). Human Papilloma Virus (HPV) vaccines are increasingly becoming available in developing countries, but the cost is prohibitive for most people living in low- and middle-income countries (Songane et al., 2019). The vaccine is recommended for girls prior to their sexual debut, yet it has not been deployed in the National Program on Immunization (NPI) in African countries including Nigeria (Sadoh et al., 2018; Mutiu et al., 2019). Evidence suggests that in most developing countries, especially Ghana, there is inadequate infrastructure and quality control (Bobdey et al., 2016). Hence, the researchers reported that high-quality cytology screening may not be feasible for wide-scale implementation, thereby contributing to the high incidence of cervical cancer in developing countries.
Despite the fact that the HPV vaccine has been authorized for use in Ghana and is available in some private and public health care centers, cervical cancer screening rates in urban and rural settings in Ghana are low (3.2% and 2.2%, respectively) (Williams and Amoateng, 2012). Cervical cancer screening among women in Ghana is very poor, since those who undergo cervical cancer screening delay seeking treatment until the point where their cervical tumors may have metastasized (Williams et al., 2013). The cervical cancer screening methods used in many hospitals in Accra, especially Ridge Hospital, are the Papanicolaou (Pap) smear and visual inspection of the cervix with acetic acid (VIA) (Sanghvi et al., 2008), which help in the early detection of cervical cancer. In Ghana, some women have the intention and willingness to receive the HPV vaccine due to its perceived benefits (Juntasopeepun et al., 2011). Despite the availability of cervical cancer screening tools in the country, including those appropriate for low-resource settings, the rate of preventive cervical cancer screening remains extremely low among women in low- and middle-income countries (LMICs), which in turn affects the use of vaccines (Williams et al., 2018). It can therefore be inferred that a positive attitude towards cervical cancer vaccination and screening increases the chances of being screened and of receiving the vaccine (Kang and Moneyham, 2011). Purpose of the study: The purpose of this study was to explore the practices of cervical cancer screening and vaccination among females in the Oyibi community.
Results
Socio-demographic characteristics of the participants Thirty-five (35) participants, constituting five (5) Focus Group Discussions (FGDs) of seven (7) participants each, were interviewed to obtain the necessary data for the study. The group consisted solely of women from the Oyibi community within the Kpone-Katamanso District in the Greater Accra Region of Ghana. The findings revealed that the majority of the participants (69%) were single, while a few of them, constituting 11 (31%), had been married for 5 years or more. Concerning age, the results revealed that the majority of the participants were within the range of 19-29 years, with a few above 40 years; the youngest participant was 19 years old and the oldest was 60. The study also revealed that the majority of the respondents, 15 participants, had secondary education (57.1%) and 13 (37.1%) had tertiary education, while the fewest, 7 (20%), had a basic education background. The majority of the participants, 33 (94.3%), were Christians, while a few (5.7%) were Muslims, with diverse cultural backgrounds from the Volta, Ashanti, Eastern, Greater Accra, Northern, and Western regions. The rest of the demographic data is presented below. Two themes emerged from this study: cervical cancer screening and vaccination utilization by women, and cervical cancer vaccination effectiveness and cost. Utilization of Cervical Cancer Screening and Vaccination This theme presents practices of cervical cancer screening and vaccination among women. The five (5) sub-themes which emerged were: the types of cervical screening and vaccination done by participants; experiences during cervical cancer screening; experiences during cervical cancer vaccination; the decision to go for cervical cancer screening and vaccination; and willingness to recommend cervical cancer screening and vaccination to other women.
The types of cervical screening and vaccination done by participants Analysis of the data collected revealed that few of the participants had undergone cervical cancer screening 2 (5.7%) (20 and 25 years) and vaccination 1 (2.9%) (28 years). The following quotes provide details of the above results: “I have done the pap smear at work, but I haven’t taken the vaccine. It was free at my workplace, so I just went to screen to know if I have it since I am young. However, they didn’t tell me to come for the vaccine because I didn’t go for the report. They did not also do any follow up to ask why I didn’t come for the results although they took my number”(p.20). “Yes, I was told the screening is called pap smear. I went to screen last year when it was the cervical cancer screening month. It was free as at that time. After I was told it was a negative, I paid for the vaccine and they administered it to metwice. I did that because I married at the age 23 and two years now, I have not been able to conceive so I wanted to be sure nothing was wrong with my cervix’’ (p.25). Few participants who refused to go for the cervical cancer screening and vaccination stated that theywere virgins hence not legible for screening: “No, I have not gone to screen or take the vaccine. Per the information I had, I was told that only those who are sexually active could be screened of which I am not. I am a virgin so there is no need going to do it. I might do it in the future” (p.15). Other participants did not go for the cervical cancer screening and vaccination because they lacked knowledge on its importance: ‘’ I haven’tbeen screened and vaccinated because I don’t think it is necessary to screen because I don’t have the disease. Even at the age of 45years, I am still strong. I am saying this because I am not sick, and I think it is not necessary. I don’t also know the types of cervical cancer screening and vaccines given, so why should I screen and vaccinate?’ (p.16). 
“I cannot be affected by this cancer because I am 50 years plus and I have not been hospitalized before and in my family no one has had this condition’ (p.35). Experiences during cervical cancer screening The study discovered that only few of the participants had done the cervical cancer screening. Various interesting and insightful experiences reported by participants during cervical cancer screening were as follows: “Oh, I did the pap smear at my work place, so I was a bit comfortable with the environment. The procedure itself was uncomfortable though painless. When I got to the room, I was asked to lie down on the bed and put my legs on a leg support. I was alone with the nurse. The nurse inserted an equipment into my vagina to take fluid for investigation’’(p.6). ‘’ We entered a room.Then the nur explained how it was going to be. I was okay because there was no man there. She assisted me to lie on the bed and raised my legs and put them on a leg support.After that she put some cloth on my thighs then I felt her putting something in my vagina, but it wasn’t painful. It was normal, however I felt something brushing inside like twice’’ (p.20). Experiences during cervical cancer vaccination A significant experience revealed in this study was pain associated with the cervical cancer vaccination since it is injected. Hence, the study found that only one (2.8%) participant had received the vaccine for cervical cancer even though few of them had done the screening ‘’After I paid for the vaccine, I was asked to wait in a room. The nurse then came in with some needles and the vaccine in a small bottle. She told me she wouldinject my upper arm and it would feel uncomfortable a bit. She took the vaccine with the needle and then cleaned the area with cotton and injected it, even though it felt painful she was soon done’’ (p.25). 
Other participants shared their experiences on how the vaccination might be even though they had not been vaccinated: ‘’ I don’t know if it’s painful or not even though I know it’s an injection. I didn’t go for the vaccine even though I have done the screening once because I do not like injections” (p.20). ‘’ I have heard of the vaccine but I don’t know if it is an injection or poured into the mouth like the vitamin A vaccine, so I cannot tell how it feels like’’ (p.1). ‘‘No please, I have no idea about the vaccine and how it’s like. I didn’teven know that cancer has a vaccine that prevents it. If my grand-daughter was to be around she would have been able to tell you’’ (p.4). Decision to go for cervical cancer screening and vaccination The results suggest that more than half of the participants, 20 (57.1%) were willing and eager to receive cervical cancer screening and vaccination ifthey have the opportunity; “I will be very glad to go for the cervical cancer screening and vaccination since I am aware of the cervical cancer screening and vaccination now but I will be happy to get it done soon’’(p.2). “Yeah, definitely the idea came to mind one day. I might go for the screening and get a review of my reproductive organs to know if everything is okay soon‘’.(p.6) “I would like to screen and get vaccinated against cervical cancer soon. I had the intention of getting myself screenedabout a year ago, but I have been too busy with work to take time off to do so’’(p.17). However, few participants didn’t see the need to screen and get vaccinated because they were advanced in age; ‘’ No, I didn’t get screened and vaccinated when I was young because I didn’t know about the condition, and also I don’t have any intention to get vaccinated now because I am old and my husband is is no more alive so I will rather recommend it to the young ones who are sexually active’’ (p.8). “I have not screened and vaccinated. 
In fact, I have no reason to be screened because I am not sick, I am healthy. It’s when you are ill that you visit the hospital to get treated.S I am well, so why should I go?” (p.29). Willingness to recommend cervical cancer screening and vaccination to other women Further discussion with the participants on the above subject matter revealed that they are ready to recommend to all women including their friends, families and church members to screen and vaccinate against cervical cancer: “It’s a good initiative, so I will recommend it to other people to go and bescreened. I will tell my friends, church members and family about it since I now know about it. Knowing about this disease and getting screened for it comes with huge benefits, so I will take it upon myself as a duty to tell all women in my church it”. (p.1) “I will recommend it to all women to get screened for it and also to get the vaccine because sex is common among the young women of today, so it will be good if they know their status” ( p.24). Participants revealed that they are not aware of any relative or friend who had done the screening and hence, recounted that they were going to recommend the screening and vaccination to their friends and relatives: “I don’t know of any relative and friends who have gone to be screened and also received the vaccine, so I will tell my family members and friends about it when I go home” (p.34). “No please, I don’t know of any family members or relatives that have done it. You know, when it comes to illnesses like this,it’s not something people would want to know, so I think that is the reason why they have not done it.however, I will tell them the importance of doing it, so that we all can go for the screening”(p.31). Perceived Benefits of Cervical Cancer Screening and Vaccination The participants viewed the screening as beneficial. 
Three (3) subthemes emerged from this theme: early detection of cervical cancer through early screening, willingness to receive cervical cancer vaccine, cost effectiveness. Early detection of cervical cancer through early screening Interrogations with the participants on the above subject matter revealed that early detection of the disease helps in early treatment and prevention which eventually reduces the death rate of cervical cancer. In regard to the above findings, participants explained in the following words: “I think if you go early to screen for cervical cancer and it is detected , it is good because early detection of the disease is key to prevent it. You waiting until the disease spreads to other parts of the body will be worse, so early detection is the best” (p.28). Few participants narrated that knowing it early will help one commence treatment early to prevent complications: “Yes, I say it helps to detect it early because if you go and get screened and it is positive you get treated and when its negative you get vaccinated to protect you, so both are good”.(p.20) ‘‘I believe that you going to get screenedand vaccinated will reduce the number of women who are infected with cervical cancer since the screening helps to detect cervical cancer. You can receive treatment when it is detected early. It will not only reduce the number of people having the disease but it will also reduce the death rate. It will help salvage the rest of the cervix that is not infected and treat the in time’’ (p.35). Protection from cervical cancer Cervical cancer vaccination is one of the most important things to do after screening. The participants also recounted that vaccination for cervical cancer is a step in the right direction to preventing the disease. “The vaccine gives lifetime protection when taken; you don’t have to worry about getting cervical cancer after screening and taking the vaccination”. (p.4). 
“I believe that early screening and vaccination against cervical cancer will help to prevent this disease because after screening, if the result is negative, you will receive the vaccine that prevents you from cervical cancer. And as our people say prevention is better than cure’’. (p.3). However some participants believed that cervical cancer could not be prevented by receiving the HPV vaccine since cancers cannot be prevented: ‘‘Cancers in general cannot be prevented with vaccines. I don’t know of any vaccine that prevents cervical cancer. When you get cervical cancer that is it. The only thing left is to go for treatment like chemotherapy and other medications’’(p.33). “Vaccines for preventing cancer? I didn’t know there was a vaccine available for preventing cervical cancer because people are still dying of cancers. You just have to be mindful of your lifestyle, that is the best way to prevent it’’ (p.24). Willingness to receive cervical cancer vaccination Participants revealed that after screening for cervical cancer, it is important to go for vaccination to protect oneself. The following quotes provides details of the results above: “I’ m willing to go for the screening and vaccination to protect myself from this deadly disease, even if I have contracted the cancer after being screened, I will be ready to be treated for the , and if I am negative too, I will go for the vaccination to help protect me from the cancer. (p.13). ‘‘In my case, I have gone to have cervical cancer screening already but I was reluctant to go for the vaccination but I think I am ready to go for the vaccination now’’ (p.15) ‘‘I have taken all my children for immunizations and they are all fine, so I am sure that this vaccine too will not cause any problem, so I will go for it whenever I am off duty’’ (p.4) Demographic Characteristics of the Participants Source, Filed Survey Data (2020).
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussions" ]
[ "Cervical cancer screening and vaccination practices are reported to be low in most developing countries. Most studies have ascertained that even though most women are aware of the types of screening for cervical cancer , few have participated (Markovic, et al., 2005; Jassim, et al., 2018; Liebermann et al., 2020 ). For instance, the study of Bakogianni et al (2012) among 472 female students ascertained that although majority of the participants (94.07%) knew about the Pap test, only 44.82% of the participants had been screened and vaccinated against cervical cancer. The major source of information about cervical cancer screening and vaccination identified by authors included the media, relatives, friends, and health workers (Ezem, 2007; Awodele et al., 2011). Majority of the participants in a study indicated that they received the vaccine after their first sexual intercourse (Bakogianni, et al., 2012). The reasons cited by some females for not screening and vaccinating included lack of appreciation of the importance of screening, feeling of embarrassment, fear and the attendant high cost (Makwe and Ihuma, 2012). In South Africa, a study revealed that, only 16 (9.8%) participants have had a Pap smear test done of which 11 (69%) knew their result (Hogue et al., 2014). A point worth noting is that, even though awareness and knowledge of cervical cancer were high among staff, their patronage of screening were low (Owoeye and Ibrahim, 2013)\nCervical cancer is a type of cancer which affects the cells of the cervix and is associated with the Human Papilloma virus (HPV) (Brisson, and Drolet, 2019). There are several types of HPV but the two common types linked to cervical cancer are type 16 and 18 (Arbyn et al., 2020 ; Vu et al., 2013). Cervical cancer screening is a form of analysis to detect HPV and precancerous cell to aid in reducing cervical cancer incidence and morbidities (Massad et al., 2013). 
Pap smear test is one of the types of cervical cancer screening where a sample is obtained from the cervix which is then smeared onto a labeled glass slide and fixed with 95% ethyl alcohol in a jar for analysis (Sachan et al., 2018). \nIn sub-Saharan Africa, cervical cancer screening programs have poor coverage (Irabor et al., 2017). For example, it has been found that cervical cancer screening coverage in sub-Saharan Africa ranges from 2% to 20% in the urban areas and 0.4 to 14% in the rural areas (Louie et al., 2009). Sixty to eighty percent (60 - 80%) of women who develop cervical cancer in sub-Saharan Africa live in rural areas with no opportunity of taking part in a cervical screening (Irabor et al., 2017). Also, cervical cancer screening programs have been found to have a better coverage among those of higher socio-economic class which still puts the people of sub-Saharan Africa among those who have poor coverage of the screening programs and vaccination utilization (Irabor et al., 2018).\nRegarding Human Papilloma Virus (HPV) vaccines, it is increasingly becoming available in the developing countries, but the cost is prohibitive for most people living in low and middle-income countries (Songane et al., 2019). The vaccine is recommended for girls prior to their sexual debut yet the HPV vaccine has not yet been deployed in the National Program on Immunization (NPI) in countries in Africa including Nigeria (Sadoh et al., 2018: Mutiu et al., 2019). Evidence suggests that in most developing countries, especially Ghana, there is inadequate infrastructure and quality control (Bobdey et al., 2016). Hence, the researchers reported that high quality cytology screening may not be feasible for wide-scale implementation thereby contributing to the high incidence of cervical cancer in developing countries. 
\nDespite the fact that the HPV vaccine has been authorized for use in Ghana and is available in some private and public health care centers, cervical cancer screening rates in urban and rural settings in Ghana are low thus 3.2% and 2.2% respectively (Williams and Amoateng, 2012). Cervical cancer screening among women in Ghana is very poor since those who undergo cervical cancer screening delay in looking for treatment until the point where their cervical growth tumors may have metastasized (Williams et al., 2013). It is shown that cervical cancer screening in many hospitals in Accra, especially the Ridge hospital are Papanicolau (Pap) smear and visual inspection of the cervix with acetic acid (VIA) (Sanghvi et al., 2008) which help in early detection of cervical cancer. \nIn Ghana, some women have the intention and willingness to receive HPV vaccine due to their perceived benefits (Juntasopeepun et al., 2011). Despite the availability of cervical cancer screening tools in the country, including those that are appropriate for low resource setting, the rate of preventive cervical cancer screening remain extremely low among women in Low and Middle Income Countries (LMICS), hence, affects the use of vaccines (Williams et al., 2018). It can therefore be inferred that a positive attitude towards the cervical cancer vaccination and screening increase the chances to be screened and also receive the vaccine (Kang, and Moneyham, 2011).\n\nPurpose of the study\n\nThe purpose of this study was to explore the practices of cervical cancer screening and vaccination among females at Oyibi Community.", "\nMethodology\n\nIn this study, a qualitative exploratory design was employed. Qualitative research is a form of inquiry that studies individuals in their natural settings and helps to interpret a phenomena in terms of the meanings people bring to them (Aspers and Corte, 2019). A focus group discussion was used in collecting data from participants. 
This was used because the researchers were interested in exploring the varied opinions of the women on cervical cancer screening and vaccination. \nA focus group discussion was used to gain an in-depth understanding of cervical cancer screening and vaccination from participants’ point of view(Nyumbaet al., 2018). The researchers served as facilitators during the data collection. The target population of this study included women living in the Oyibi community who were 18 years and above since women of such age range were at risk of cervical cancer because they are sexually active. The study included women between the ages of 18 to 65 years who could express themselves in Twi and English and were willing to participate in the study.\nThe research setting is Oyibi located in the Greater Accra Region of Ghana. It is one of the rural communities in the G.A of Ghana and no study has looked at cervical cancer screening and vaccination in this group. Finally, this community has no cervical cancer screening center close to them.Neither do they have a hospital. \nPurposive sampling was used to recruit all participants who met the inclusion criteria and were willing to engage in the FGDs to provide necessaryinformation to ensure credibility of the study. Sample size was based on data saturation. Five (5) FGDs were held with seven (7) members in each group. Hence, the sample size for this study was 35 participants. All the 35 participants completed the interviews with none opting out. Interviews were conducted from one group to the other till no new responses were retrieved (Data saturation). Ethical clearance was obtained from the Dodowa Health Research Center Institutional Review Board (DHRC- IRB 31/03/20) before the data collection. The clearance letter from the ethical board was submitted to the assembly man and elders of the Oyibi community who gave their permission for entry into the community. 
The researchers contacted various females from the selected community in various locations and gatherings such as churches, weddings, marketplaces and houses within the community. The researchers established rapport by introducing themselves to the participants before explaining the purpose of the study to them. The benefits participants stood to gain from the study was also explained to them. The researchers scheduled the days for the data collection and a venue to suit participants’ availability. Moreover, all methods were carried out in accordance torelevant guidelines and regulations including the informed consent, voluntary participation, anonymity and confidentiality. The interviews were recorded with an audio tape. The researchers met participants in a private place to conduct the interviews ensuring that no other person gets access to the recorded data. The participants were asked to use numbers to identify themselves in the group instead of their original names to conceal their identity. They were also informed about some “dos” and “don’ts” such as “Not interrupting others when they are sharing their views, allowing each member to share their views, not arguing with members of the group but can disagree where appropriate and not disclosing any personal information given during the discussion or making mockery of a group member after the discussion. The interviews were conducted by all the authors in English since all the participants speakand understand English. Focus group interviews lasted for 45-60 minutes. Data was collected over a period of 6weeks. The participants were congratulated by the researchers after the interviews. \nA semi-structured interview was used by the researchers who served as moderators to collect data from the members of each FGs. The interview guide consisted of open-ended questions with probes for further clarifications. 
The interview guide consisted of demographic characteristics of participants as well as questions based on the objectives of the study which were as follows: practices, willingness to screen and vaccinate and perceptions on cervical cancer screening benefits. The tool was carefully designed by the researchers and reviewed by all researchers. It was also given to other nursing researchers to peer review. The tool was pretested among two females in Malejor who have similar characteristics as the women in Oyibi. All interviews were done once. The recorded interviews were transcribed and saved on a personal laptop and it was secured with a password known only to the researchers. This was to ensure the safety of the interviews in case the laptop was stolen or became faulty.\n\nStatistical Analysis\n\nData was analyzed using content analysis. Content analysis has been defined as a systematic way of compressing several words into fewer content categories based on explicit rules of coding (Stemler, 2000). It allows researchers sift through large volumes of data. The analysis was done after each FGDs. Data collection and data analysis were done concurrently. The audio taped data were played by the researchers and typed into a word document saving them with FGD numbers (1-5). The researchers played and listened to each discussion over and over again to familiarize themselves with the data which were transcribed verbatim. Each transcript was read and re-read to understand what participants said and to contact participants for clarification where appropriate. The researcher then coded the data by reading through and giving meanings throughout the transcripts by representing it with two (2) to four (4) words. Similar meanings were put together. Themes which were formulated for each group were written down and grouped based on patterns or relationships amongst them. 
In all, two (2) themes emerged and eight (8) subthemes were formulated based on theobjectives of the study.\nThe methodological rigor was maintained to ensure the validity and reliability of the findings. This ensured that+ findings and the processes for conducting the data were s trustworthy. The trustworthiness was maintained by ensuring the following weremaintained: credibility, transferability, dependability, and confirmability (Bittlinger, 2017).", "\nSocio-demographic characteristics of the participants\n\nThirty-five (35) participants constituting five (5) Focus Group Discussions (FGD) were interacted with to obtain necessary data for the study. Each FGD consisted of seven (7) participants. The group consisted solely of o women from Oyibi Community within the Kpone-Katamanso District in the Greater Accra Region of Ghana.\nThe findings of the study revealed that majority of the participants (69%) were single whilst few of them, constituting 11 (31%) were married for 5years and above. Concerning the age of participants, the results revealed that majority of the participants were within the age of 19 -29 years with few above 40 years. Thus, the least age recorded was 19 and 60 years as the highest age recorded. The study again revealed that majority, that is, 15 participants of the respondents had Secondary education (57.1%) and Tertiary education 13 (37.1%) with 7 making the least participants having basic education background (20%). Majority of the participants 33 (94.3%) were Christians while a few of them (5.7%), were Muslims with diverse cultural backgrounds from the Volta region, Ashanti region, Eastern region, Greater Accra region, Northern region, and Western region. The rest of the demographic data is presented below.\nTwo themes emerged from this study which were cervical cancer screening and vaccination utilization by women and cervical cancer vaccination effectiveness and cost. 
\n\nUtilization of Cervical Cancer Screening and Vaccination\n\nThis theme presents practices of cervical cancer screening and vaccination among women. The five (5) sub-themes which emerged were the types of cervical screening and vaccination done by participants, experience during cervical cancer screening, experiences during cervical cancer vaccination, decision to go for cervical cancer screening and vaccination, and willingness to recommend cervical cancer screening and vaccination to other women.\n\nThe types of cervical screening and vaccination done by participants\n\nAnalysis of the data collected revealed that few of the participants had undergone cervical cancer screening 2 (5.7%) (20 and 25 years) and vaccination 1 (2.9%) (28 years). The following quotes provide details of the above results:\n“I have done the pap smear at work, but I haven’t taken the vaccine. It was free at my workplace, so I just went to screen to know if I have it since I am young. However, they didn’t tell me to come for the vaccine because I didn’t go for the report. They did not also do any follow up to ask why I didn’t come for the results although they took my number”(p.20).\n“Yes, I was told the screening is called pap smear. I went to screen last year when it was the cervical cancer screening month. It was free as at that time. After I was told it was a negative, I paid for the vaccine and they administered it to metwice. I did that because I married at the age 23 and two years now, I have not been able to conceive so I wanted to be sure nothing was wrong with my cervix’’ (p.25).\nFew participants who refused to go for the cervical cancer screening and vaccination stated that theywere virgins hence not legible for screening:\n“No, I have not gone to screen or take the vaccine. Per the information I had, I was told that only those who are sexually active could be screened of which I am not. I am a virgin so there is no need going to do it. 
I might do it in the future” (p.15).\nOther participants did not go for the cervical cancer screening and vaccination because they lacked knowledge on its importance:\n‘’ I haven’tbeen screened and vaccinated because I don’t think it is necessary to screen because I don’t have the disease. Even at the age of 45years, I am still strong. I am saying this because I am not sick, and I think it is not necessary. I don’t also know the types of cervical cancer screening and vaccines given, so why should I screen and vaccinate?’ (p.16).\n“I cannot be affected by this cancer because I am 50 years plus and I have not been hospitalized before and in my family no one has had this condition’ (p.35).\n\nExperiences during cervical cancer screening\n\nThe study discovered that only few of the participants had done the cervical cancer screening. Various interesting and insightful experiences reported by participants during cervical cancer screening were as follows:\n“Oh, I did the pap smear at my work place, so I was a bit comfortable with the environment. The procedure itself was uncomfortable though painless. When I got to the room, I was asked to lie down on the bed and put my legs on a leg support. I was alone with the nurse. The nurse inserted an equipment into my vagina to take fluid for investigation’’(p.6).\n‘’ We entered a room.Then the nur explained how it was going to be. I was okay because there was no man there. She assisted me to lie on the bed and raised my legs and put them on a leg support.After that she put some cloth on my thighs then I felt her putting something in my vagina, but it wasn’t painful. It was normal, however I felt something brushing inside like twice’’ (p.20).\n\nExperiences during cervical cancer vaccination\n\nA significant experience revealed in this study was pain associated with the cervical cancer vaccination since it is injected. 
Hence, the study found that only one (2.8%) participant had received the vaccine for cervical cancer even though few of them had done the screening\n‘’After I paid for the vaccine, I was asked to wait in a room. The nurse then came in with some needles and the vaccine in a small bottle. She told me she wouldinject my upper arm and it would feel uncomfortable a bit. She took the vaccine with the needle and then cleaned the area with cotton and injected it, even though it felt painful she was soon done’’ (p.25).\nOther participants shared their experiences on how the vaccination might be even though they had not been vaccinated:\n‘’ I don’t know if it’s painful or not even though I know it’s an injection. I didn’t go for the vaccine even though I have done the screening once because I do not like injections” (p.20).\n‘’ I have heard of the vaccine but I don’t know if it is an injection or poured into the mouth like the vitamin A vaccine, so I cannot tell how it feels like’’ (p.1).\n‘‘No please, I have no idea about the vaccine and how it’s like. I didn’teven know that cancer has a vaccine that prevents it. If my grand-daughter was to be around she would have been able to tell you’’ (p.4).\n\nDecision to go for cervical cancer screening and vaccination\n\nThe results suggest that more than half of the participants, 20 (57.1%) were willing and eager to receive cervical cancer screening and vaccination ifthey have the opportunity;\n“I will be very glad to go for the cervical cancer screening and vaccination since I am aware of the cervical cancer screening and vaccination now but I will be happy to get it done soon’’(p.2).\n“Yeah, definitely the idea came to mind one day. I might go for the screening and get a review of my reproductive organs to know if everything is okay soon‘’.(p.6)\n“I would like to screen and get vaccinated against cervical cancer soon. 
I had the intention of getting myself screenedabout a year ago, but I have been too busy with work to take time off to do so’’(p.17).\nHowever, few participants didn’t see the need to screen and get vaccinated because they were advanced in age;\n‘’ No, I didn’t get screened and vaccinated when I was young because I didn’t know about the condition, and also I don’t have any intention to get vaccinated now because I am old and my husband is is no more alive so I will rather recommend it to the young ones who are sexually active’’ (p.8).\n“I have not screened and vaccinated. In fact, I have no reason to be screened because I am not sick, I am healthy. It’s when you are ill that you visit the hospital to get treated.S I am well, so why should I go?” (p.29).\n\nWillingness to recommend cervical cancer screening and vaccination to other women\n\nFurther discussion with the participants on the above subject matter revealed that they are ready to recommend to all women including their friends, families and church members to screen and vaccinate against cervical cancer:\n“It’s a good initiative, so I will recommend it to other people to go and bescreened. I will tell my friends, church members and family about it since I now know about it. Knowing about this disease and getting screened for it comes with huge benefits, so I will take it upon myself as a duty to tell all women in my church it”. 
(p.1)

“I will recommend it to all women to get screened for it and also to get the vaccine because sex is common among the young women of today, so it will be good if they know their status” (p.24).

Participants revealed that they were not aware of any relative or friend who had done the screening and recounted that they were going to recommend the screening and vaccination to their friends and relatives:

“I don’t know of any relatives and friends who have gone to be screened and also received the vaccine, so I will tell my family members and friends about it when I go home” (p.34).

“No please, I don’t know of any family members or relatives that have done it. You know, when it comes to illnesses like this, it’s not something people would want to know, so I think that is the reason why they have not done it. However, I will tell them the importance of doing it, so that we all can go for the screening” (p.31).

Perceived Benefits of Cervical Cancer Screening and Vaccination

The participants viewed the screening as beneficial. Three (3) subthemes emerged from this theme: early detection of cervical cancer through early screening, willingness to receive the cervical cancer vaccine, and cost effectiveness.

Early detection of cervical cancer through early screening

Discussions with the participants on the above subject matter revealed that early detection of the disease helps in early treatment and prevention, which eventually reduces the death rate of cervical cancer. In regard to the above findings, participants explained in the following words:

“I think if you go early to screen for cervical cancer and it is detected, it is good because early detection of the disease is key to preventing it.
Waiting until the disease spreads to other parts of the body will be worse, so early detection is the best” (p.28).

A few participants narrated that knowing it early will help one commence treatment early to prevent complications:

“Yes, I say it helps to detect it early because if you go and get screened and it is positive you get treated, and when it’s negative you get vaccinated to protect you, so both are good” (p.20).

“I believe that going to get screened and vaccinated will reduce the number of women who are infected with cervical cancer since the screening helps to detect cervical cancer. You can receive treatment when it is detected early. It will not only reduce the number of people having the disease but it will also reduce the death rate. It will help salvage the rest of the cervix that is not infected and treat it in time” (p.35).

Protection from cervical cancer

Cervical cancer vaccination is one of the most important things to do after screening. The participants also recounted that vaccination for cervical cancer is a step in the right direction towards preventing the disease.

“The vaccine gives lifetime protection when taken; you don’t have to worry about getting cervical cancer after screening and taking the vaccination” (p.4).

“I believe that early screening and vaccination against cervical cancer will help to prevent this disease because after screening, if the result is negative, you will receive the vaccine that prevents you from getting cervical cancer. And as our people say, prevention is better than cure” (p.3).

However, some participants believed that cervical cancer could not be prevented by receiving the HPV vaccine since, in their view, cancers cannot be prevented:

“Cancers in general cannot be prevented with vaccines. I don’t know of any vaccine that prevents cervical cancer. When you get cervical cancer, that is it. The only thing left is to go for treatment like chemotherapy and other medications” (p.33).

“Vaccines for preventing cancer?
I didn’t know there was a vaccine available for preventing cervical cancer because people are still dying of cancers. You just have to be mindful of your lifestyle; that is the best way to prevent it” (p.24).

Willingness to receive cervical cancer vaccination

Participants revealed that after screening for cervical cancer, it is important to go for vaccination to protect oneself. The following quotes provide details of the results above:

“I’m willing to go for the screening and vaccination to protect myself from this deadly disease. Even if I have contracted the cancer after being screened, I will be ready to be treated for it, and if I am negative too, I will go for the vaccination to help protect me from the cancer” (p.13).

“In my case, I have gone to have cervical cancer screening already, but I was reluctant to go for the vaccination; I think I am ready to go for the vaccination now” (p.15).

“I have taken all my children for immunizations and they are all fine, so I am sure that this vaccine too will not cause any problem, so I will go for it whenever I am off duty” (p.4).

Demographic Characteristics of the Participants

Source: Field Survey Data (2020).

Utilization of Cervical Cancer Screening and Vaccination

Results of the present study revealed that just a few participants had undergone cervical cancer screening and vaccination, that is, 2 (5.7%) and 1 (2.9%) respectively. Participants who had undergone screening recalled the Pap smear as the test done. However, the participant who had received the vaccination could not recall the name of the vaccine that was administered. The present study further reveals that some of the participants did not go to be screened and vaccinated on the basis that they were virgins and hence not eligible for screening.
This finding is in consonance with a study by Aniebue and Aniebue (2010) among 394 female university students in Nigeria which unveiled that about 23.1% identified the Pap smear as a screening test type and only 5.2% of respondents had ever been screened. The findings also supported a study done in Ghana which discovered that the majority of women (97.7%) had never heard of the Pap smear test before (Ebu et al., 2015). Nevertheless, the study was in contrast to Harper and DeMars’ (2016) study in Canada, which showed that Cervarix and Gardasil 9 were some vaccines mentioned by participants.

The few participants who had undergone cervical cancer screening shared their experiences during screening. These participants recounted that the screening was not painful during the procedure, but they were rather uncomfortable because of the way they were placed on the table to be examined and the equipment that was inserted into their vagina. Findings also recorded that some participants were comfortable during the screening due to the familiar environment and the absence of male nurses or doctors. The present study is similar to a study conducted by Rositch et al. (2012), who discovered that some women reported a low level of physical discomfort during Pap smear collection. In addition, over 80% of women reported that they would feel comfortable using a self-sampling device (82%) and would prefer at-home sample collection (84%). This present study’s finding was in contrast with a study where 26.47% of respondents were married but none of them had undergone the screening test, in the belief that it would be painful (Pegu et al., 2016). A similar study in Ghana also revealed that only 15 of the participants (8.5%) had undergone the Pap smear for cervical cancer screening, due to poor knowledge about it (Adanu, 2002).

Experiences during cervical cancer vaccination

A significant experience revealed in this study was pain associated with cervical cancer vaccination.
The study found that only one (2.9%) participant had received the vaccine for cervical cancer even though a few of them had done the screening. The participant who had received the cervical cancer vaccination described the process as a bit painful, although not as painful as she thought it would be. She felt the needle pricking her skin for just a brief period. This implies that the experiences of females with cervical cancer vaccination are mainly subjective, since others may perceive it differently. Similarly, a study conducted in North Carolina reported that pain from HPV vaccination was commonly reported by parents but was less frequent compared to other adolescent vaccines and did not appear to affect vaccine regimen completion. These findings may be important for increasing HPV vaccination coverage, since women perceive it as less painful (Hudson et al., 2016). Moreover, a study done in Ghana ascertained that the main concern about the vaccination was ensuring safety during administration rather than the pain identified in this study (Coleman et al., 2011).

Regarding the decision to go for cervical cancer screening and vaccination, the study discovered that more than half of the participants, 20 (57.1%), were willing and eager to receive cervical cancer screening and vaccination should they have the opportunity. In relation to the present study, a study conducted in Nigeria revealed that the majority of the participants (62.5%) demonstrated readiness to be screened and vaccinated against cervical cancer (Eze et al., 2012).
About participants’ willingness to recommend cervical cancer screening and vaccination to other women, the study established that the majority of the participants were willing to recommend to all women, including their friends, families and church members, to be screened and vaccinated against cervical cancer, since they believed most youth were into early sexual debut, which was identified to increase the risk of acquiring cervical cancer. The participants also revealed that they were not aware of any relative or friend who had done the screening. They stated they were going to recommend the screening and vaccination to their friends and relatives. This finding was surprising because the majority of the study participants had not been screened and vaccinated but were willing to recommend it to others. The findings of this current investigation are also consistent with those of Adejuyigbe et al. (2015), who did a study among medical students of the University of Lagos and found that most of the respondents supported vaccination of adolescent girls (65.7%) and were willing to recommend vaccination to colleagues/friends (82.1%) and to future clients (80.0%).

Perceived Benefits of Cervical Cancer Screening and Vaccination

Early detection of cervical cancer was one of the benefits enumerated by the study participants. The few participants who knew about cervical cancer screening and vaccination were of the notion that screening for cervical cancer aids in the early detection of the cancer’s hidden warning signs long before symptoms appear and thus helps one to commence treatment early. The findings of this present study are consistent with those of Wang et al. (2019), who discovered that early cervical cancer screening and vaccination reduce the cumulative incidence of cervical cancer, increase life span, reduce the cost of treating cervical cancer and improve quality of life.
Similar to the present study findings, Ibekwe et al. (2010) in South Africa concluded that the majority (87%) were of the belief that cervical cancer screening is important, while 75% indicated that screening could find changes in the cervix before full cancer arises and that when cervical cancer is detected earlier, it can be easily treated.

The majority of the participants in this present study acknowledged that the money spent in preventing cervical cancer is less than the money involved in treating cervical cancer, especially when the condition advances. Thus, it is perceived to be cost effective. This implies that preventive intervention for cervical cancer is cheaper, more effective and more beneficial than the cost involved after the onset of cervical cancer. However, these findings were surprising, since the majority of the participants had not been screened and vaccinated against cervical cancer. A study in Ghana was in accordance with these findings: it discovered that the majority of the respondents (76%) were willing to pay at least something for screening and vaccination because they were aware of the cost effectiveness of these services (Opoku et al., 2016).

In conclusion, the results from the study indicated that the participants’ utilization of cervical cancer screening and vaccination was poor; nevertheless, they were conscious of cervical cancer screening and vaccination and were willing to recommend it to friends, relatives and loved ones.

Implications

In nursing practice, the study revealed that the cervical cancer screening and vaccination patronage rate among women who partook in this study was low; hence, healthcare professionals should find ways of sensitizing women during healthcare delivery to encourage more women to partake in the screening. Moreover, nurses should prepare women who are being booked for the screening and vaccination to help allay the fears and anxieties that they go through.
Healthcare professionals should follow up on the women who come for screening to motivate them to go for vaccination. Durbars and seminars should be organized in various rural communities in Ghana by healthcare professionals, hospitals and NGOs in the country to sensitize women on cervical cancer screening and vaccination and to help clear misconceptions.

Recommendations

The government should formulate policies to help reduce the cost of screening and vaccination. The NHIS should absorb the cost to help improve screening and vaccination coverage.

Ministry of Health (MoH)

• The Ministry of Health should lobby the government for funds for the inclusion of the cost of cervical cancer screening and vaccination in the NHIS.

Ghana Health Service (GHS)

• The Ghana Health Service should establish a screening center close to the rural community where the study was conducted to motivate more women in this area to patronize screening for cervical cancer and vaccination.
• It should also make screening and vaccination services available and accessible nationwide.

Practicing Nurses

• They should make time to explain the screening procedure to the client and the possible discomfort that might occur.
• They should also take the telephone numbers of the clients who undergo screening in order to call them when the results are ready.
• Healthcare professionals should conduct more research in this field to help find innovative ways to improve screening and vaccination practices.

Limitations

• The study might have revealed more robust findings if a mixed-methods design had been used. Hence, it is recommended that other researchers consider doing a mixed-methods study in this area.
Keywords: Utilization, cervical cancer, screening, vaccination, females
Introduction

Cervical cancer screening and vaccination practices are reported to be low in most developing countries. Most studies have ascertained that even though most women are aware of the types of screening for cervical cancer, few have participated (Markovic et al., 2005; Jassim et al., 2018; Liebermann et al., 2020). For instance, the study of Bakogianni et al. (2012) among 472 female students ascertained that although the majority of the participants (94.07%) knew about the Pap test, only 44.82% of the participants had been screened and vaccinated against cervical cancer. The major sources of information about cervical cancer screening and vaccination identified by authors included the media, relatives, friends, and health workers (Ezem, 2007; Awodele et al., 2011). The majority of the participants in one study indicated that they received the vaccine after their first sexual intercourse (Bakogianni et al., 2012). The reasons cited by some females for not screening and vaccinating included lack of appreciation of the importance of screening, feelings of embarrassment, fear and the attendant high cost (Makwe and Ihuma, 2012). In South Africa, a study revealed that only 16 (9.8%) participants had had a Pap smear test done, of whom 11 (69%) knew their result (Hogue et al., 2014). A point worth noting is that, even though awareness and knowledge of cervical cancer were high among staff, their patronage of screening was low (Owoeye and Ibrahim, 2013).

Cervical cancer is a type of cancer which affects the cells of the cervix and is associated with the Human Papilloma Virus (HPV) (Brisson and Drolet, 2019). There are several types of HPV, but the two common types linked to cervical cancer are types 16 and 18 (Arbyn et al., 2020; Vu et al., 2013). Cervical cancer screening is a form of analysis to detect HPV and precancerous cells to aid in reducing cervical cancer incidence and morbidities (Massad et al., 2013).
The Pap smear test is one of the types of cervical cancer screening in which a sample is obtained from the cervix, smeared onto a labeled glass slide and fixed with 95% ethyl alcohol in a jar for analysis (Sachan et al., 2018). In sub-Saharan Africa, cervical cancer screening programs have poor coverage (Irabor et al., 2017). For example, it has been found that cervical cancer screening coverage in sub-Saharan Africa ranges from 2% to 20% in urban areas and 0.4% to 14% in rural areas (Louie et al., 2009). Sixty to eighty percent (60-80%) of women who develop cervical cancer in sub-Saharan Africa live in rural areas with no opportunity of taking part in cervical screening (Irabor et al., 2017). Also, cervical cancer screening programs have been found to have better coverage among those of higher socio-economic class, which still puts the people of sub-Saharan Africa among those with poor coverage of screening programs and vaccination utilization (Irabor et al., 2018). Regarding Human Papilloma Virus (HPV) vaccines, they are increasingly becoming available in developing countries, but the cost is prohibitive for most people living in low- and middle-income countries (Songane et al., 2019). The vaccine is recommended for girls prior to their sexual debut, yet the HPV vaccine has not yet been deployed in the National Program on Immunization (NPI) in countries in Africa including Nigeria (Sadoh et al., 2018; Mutiu et al., 2019). Evidence suggests that in most developing countries, especially Ghana, there is inadequate infrastructure and quality control (Bobdey et al., 2016). Hence, the researchers reported that high-quality cytology screening may not be feasible for wide-scale implementation, thereby contributing to the high incidence of cervical cancer in developing countries.
Despite the fact that the HPV vaccine has been authorized for use in Ghana and is available in some private and public health care centers, cervical cancer screening rates in urban and rural settings in Ghana are low, at 3.2% and 2.2% respectively (Williams and Amoateng, 2012). Cervical cancer screening among women in Ghana is very poor, since those who undergo cervical cancer screening delay in looking for treatment until the point where their cervical tumors may have metastasized (Williams et al., 2013). It is shown that the cervical cancer screening methods used in many hospitals in Accra, especially the Ridge Hospital, are the Papanicolaou (Pap) smear and visual inspection of the cervix with acetic acid (VIA) (Sanghvi et al., 2008), which help in early detection of cervical cancer. In Ghana, some women have the intention and willingness to receive the HPV vaccine due to its perceived benefits (Juntasopeepun et al., 2011). Despite the availability of cervical cancer screening tools in the country, including those that are appropriate for low-resource settings, the rate of preventive cervical cancer screening remains extremely low among women in Low- and Middle-Income Countries (LMICs), which in turn affects the use of vaccines (Williams et al., 2018). It can therefore be inferred that a positive attitude towards cervical cancer vaccination and screening increases the chances of being screened and also receiving the vaccine (Kang and Moneyham, 2011).

Purpose of the study

The purpose of this study was to explore the practices of cervical cancer screening and vaccination among females in the Oyibi Community.

Materials and Methods

In this study, a qualitative exploratory design was employed. Qualitative research is a form of inquiry that studies individuals in their natural settings and helps to interpret phenomena in terms of the meanings people bring to them (Aspers and Corte, 2019). A focus group discussion was used in collecting data from participants.
This was used because the researchers were interested in exploring the varied opinions of the women on cervical cancer screening and vaccination. A focus group discussion was used to gain an in-depth understanding of cervical cancer screening and vaccination from the participants’ point of view (Nyumba et al., 2018). The researchers served as facilitators during the data collection. The target population of this study included women living in the Oyibi community who were 18 years and above, since women of such an age range are at risk of cervical cancer because they are sexually active. The study included women between the ages of 18 and 65 years who could express themselves in Twi and English and were willing to participate in the study. The research setting is Oyibi, located in the Greater Accra Region of Ghana. It is one of the rural communities in the Greater Accra Region, and no study has looked at cervical cancer screening and vaccination in this group. Finally, this community has no cervical cancer screening center close to it, nor does it have a hospital. Purposive sampling was used to recruit all participants who met the inclusion criteria and were willing to engage in the FGDs to provide the necessary information to ensure credibility of the study. Sample size was based on data saturation. Five (5) FGDs were held with seven (7) members in each group. Hence, the sample size for this study was 35 participants. All 35 participants completed the interviews, with none opting out. Interviews were conducted from one group to the other until no new responses were retrieved (data saturation). Ethical clearance was obtained from the Dodowa Health Research Center Institutional Review Board (DHRC-IRB 31/03/20) before the data collection. The clearance letter from the ethical board was submitted to the assembly man and elders of the Oyibi community, who gave their permission for entry into the community.
The researchers contacted various females from the selected community in various locations and gatherings such as churches, weddings, marketplaces and houses within the community. The researchers established rapport by introducing themselves to the participants before explaining the purpose of the study to them. The benefits participants stood to gain from the study were also explained to them. The researchers scheduled the days for the data collection and a venue to suit participants’ availability. Moreover, all methods were carried out in accordance with relevant guidelines and regulations, including informed consent, voluntary participation, anonymity and confidentiality. The interviews were recorded with an audio tape. The researchers met participants in a private place to conduct the interviews, ensuring that no other person got access to the recorded data. The participants were asked to use numbers to identify themselves in the group instead of their original names to conceal their identity. They were also informed about some “dos” and “don’ts”, such as not interrupting others when they are sharing their views, allowing each member to share their views, not arguing with members of the group (though they could disagree where appropriate), and not disclosing any personal information given during the discussion or making mockery of a group member after the discussion. The interviews were conducted by all the authors in English, since all the participants speak and understand English. Focus group interviews lasted 45-60 minutes. Data was collected over a period of 6 weeks. The participants were thanked by the researchers after the interviews. A semi-structured interview guide was used by the researchers, who served as moderators, to collect data from the members of each FGD. The interview guide consisted of open-ended questions with probes for further clarification.
The interview guide consisted of demographic characteristics of participants as well as questions based on the objectives of the study, which were as follows: practices, willingness to screen and vaccinate, and perceptions of the benefits of cervical cancer screening. The tool was carefully designed and reviewed by all the researchers. It was also given to other nursing researchers for peer review. The tool was pretested among two females in Malejor who have similar characteristics to the women in Oyibi. All interviews were done once. The recorded interviews were transcribed and saved on a personal laptop secured with a password known only to the researchers. This was to ensure the safety of the interviews in case the laptop was stolen or became faulty.

Statistical Analysis

Data was analyzed using content analysis. Content analysis has been defined as a systematic way of compressing many words into fewer content categories based on explicit rules of coding (Stemler, 2000). It allows researchers to sift through large volumes of data. The analysis was done after each FGD. Data collection and data analysis were done concurrently. The audio-taped data were played by the researchers and typed into a Word document, saved with FGD numbers (1-5). The researchers played and listened to each discussion over and over again to familiarize themselves with the data, which were transcribed verbatim. Each transcript was read and re-read to understand what participants said and to contact participants for clarification where appropriate. The researchers then coded the data by reading through and assigning meanings throughout the transcripts, representing each with two (2) to four (4) words. Similar meanings were put together. Themes formulated for each group were written down and grouped based on patterns or relationships among them. In all, two (2) themes emerged and eight (8) subthemes were formulated based on the objectives of the study.
Methodological rigor was maintained to ensure the validity and reliability of the findings. This ensured that the findings and the processes for collecting the data were trustworthy. Trustworthiness was maintained by ensuring the following: credibility, transferability, dependability, and confirmability (Bittlinger, 2017).

Results

Socio-demographic characteristics of the participants

Thirty-five (35) participants constituting five (5) Focus Group Discussions (FGDs) were interviewed to obtain the necessary data for the study. Each FGD consisted of seven (7) participants. The group consisted solely of women from the Oyibi Community within the Kpone-Katamanso District in the Greater Accra Region of Ghana. The findings of the study revealed that the majority of the participants (69%) were single, whilst a few of them, constituting 11 (31%), had been married for 5 years and above. Concerning the age of participants, the results revealed that the majority of the participants were within the age range of 19-29 years, with few above 40 years; the lowest age recorded was 19 and the highest 60 years. The study again revealed that 15 of the respondents had secondary education (42.9%) and 13 had tertiary education (37.1%), with the fewest participants, 7, having a basic education background (20%). The majority of the participants, 33 (94.3%), were Christians, while a few of them (5.7%) were Muslims, with diverse cultural backgrounds from the Volta, Ashanti, Eastern, Greater Accra, Northern, and Western regions. The rest of the demographic data is presented below. Two themes emerged from this study: cervical cancer screening and vaccination utilization by women, and cervical cancer vaccination effectiveness and cost.

Utilization of Cervical Cancer Screening and Vaccination

This theme presents practices of cervical cancer screening and vaccination among women.
The five (5) sub-themes which emerged were: the types of cervical screening and vaccination done by participants, experiences during cervical cancer screening, experiences during cervical cancer vaccination, the decision to go for cervical cancer screening and vaccination, and willingness to recommend cervical cancer screening and vaccination to other women.

The types of cervical screening and vaccination done by participants

Analysis of the data collected revealed that few of the participants had undergone cervical cancer screening, 2 (5.7%) (aged 20 and 25 years), and vaccination, 1 (2.9%) (aged 28 years). The following quotes provide details of the above results:

“I have done the Pap smear at work, but I haven’t taken the vaccine. It was free at my workplace, so I just went to screen to know if I have it since I am young. However, they didn’t tell me to come for the vaccine because I didn’t go for the report. They did not also do any follow-up to ask why I didn’t come for the results, although they took my number” (p.20).

“Yes, I was told the screening is called Pap smear. I went to screen last year when it was the cervical cancer screening month. It was free at that time. After I was told it was negative, I paid for the vaccine and they administered it to me twice. I did that because I married at the age of 23 and, two years on, I have not been able to conceive, so I wanted to be sure nothing was wrong with my cervix” (p.25).

A few participants who refused to go for the cervical cancer screening and vaccination stated that they were virgins and hence not eligible for screening:

“No, I have not gone to screen or take the vaccine. Per the information I had, I was told that only those who are sexually active could be screened, which I am not. I am a virgin so there is no need going to do it. I might do it in the future” (p.15).
Other participants did not go for the cervical cancer screening and vaccination because they lacked knowledge of its importance:

“I haven’t been screened and vaccinated because I don’t think it is necessary to screen, because I don’t have the disease. Even at the age of 45 years, I am still strong. I am saying this because I am not sick, and I think it is not necessary. I don’t also know the types of cervical cancer screening and vaccines given, so why should I screen and vaccinate?” (p.16).

“I cannot be affected by this cancer because I am 50 years plus and I have not been hospitalized before, and in my family no one has had this condition” (p.35).

Experiences during cervical cancer screening

The study discovered that only a few of the participants had done the cervical cancer screening. Various interesting and insightful experiences reported by participants during cervical cancer screening were as follows:

“Oh, I did the Pap smear at my workplace, so I was a bit comfortable with the environment. The procedure itself was uncomfortable though painless. When I got to the room, I was asked to lie down on the bed and put my legs on a leg support. I was alone with the nurse. The nurse inserted an instrument into my vagina to take fluid for investigation” (p.6).

“We entered a room. Then the nurse explained how it was going to be. I was okay because there was no man there. She assisted me to lie on the bed and raised my legs and put them on a leg support. After that she put some cloth on my thighs, then I felt her putting something in my vagina, but it wasn’t painful. It was normal; however, I felt something brushing inside like twice” (p.20).

Experiences during cervical cancer vaccination

A significant experience revealed in this study was pain associated with the cervical cancer vaccination, since it is injected.
Hence, the study found that only one (2.8%) participant had received the vaccine for cervical cancer even though few of them had done the screening ‘’After I paid for the vaccine, I was asked to wait in a room. The nurse then came in with some needles and the vaccine in a small bottle. She told me she wouldinject my upper arm and it would feel uncomfortable a bit. She took the vaccine with the needle and then cleaned the area with cotton and injected it, even though it felt painful she was soon done’’ (p.25). Other participants shared their experiences on how the vaccination might be even though they had not been vaccinated: ‘’ I don’t know if it’s painful or not even though I know it’s an injection. I didn’t go for the vaccine even though I have done the screening once because I do not like injections” (p.20). ‘’ I have heard of the vaccine but I don’t know if it is an injection or poured into the mouth like the vitamin A vaccine, so I cannot tell how it feels like’’ (p.1). ‘‘No please, I have no idea about the vaccine and how it’s like. I didn’teven know that cancer has a vaccine that prevents it. If my grand-daughter was to be around she would have been able to tell you’’ (p.4). Decision to go for cervical cancer screening and vaccination The results suggest that more than half of the participants, 20 (57.1%) were willing and eager to receive cervical cancer screening and vaccination ifthey have the opportunity; “I will be very glad to go for the cervical cancer screening and vaccination since I am aware of the cervical cancer screening and vaccination now but I will be happy to get it done soon’’(p.2). “Yeah, definitely the idea came to mind one day. I might go for the screening and get a review of my reproductive organs to know if everything is okay soon‘’.(p.6) “I would like to screen and get vaccinated against cervical cancer soon. 
I had the intention of getting myself screened about a year ago, but I have been too busy with work to take time off to do so’’ (p.17). However, a few participants did not see the need to screen and get vaccinated because they were advanced in age: ‘’No, I didn’t get screened and vaccinated when I was young because I didn’t know about the condition, and also I don’t have any intention to get vaccinated now because I am old and my husband is no more alive, so I will rather recommend it to the young ones who are sexually active’’ (p.8). “I have not screened and vaccinated. In fact, I have no reason to be screened because I am not sick, I am healthy. It’s when you are ill that you visit the hospital to get treated. I am well, so why should I go?” (p.29).

Willingness to recommend cervical cancer screening and vaccination to other women

Further discussion with the participants on the above subject matter revealed that they are ready to recommend to all women, including their friends, families and church members, to screen and vaccinate against cervical cancer: “It’s a good initiative, so I will recommend it to other people to go and be screened. I will tell my friends, church members and family about it since I now know about it. Knowing about this disease and getting screened for it comes with huge benefits, so I will take it upon myself as a duty to tell all women in my church about it” (p.1). “I will recommend it to all women to get screened for it and also to get the vaccine because sex is common among the young women of today, so it will be good if they know their status” (p.24). Participants revealed that they are not aware of any relative or friend who had done the screening and hence recounted that they were going to recommend the screening and vaccination to their friends and relatives: “I don’t know of any relative and friends who have gone to be screened and also received the vaccine, so I will tell my family members and friends about it when I go home” (p.34).
“No please, I don’t know of any family members or relatives that have done it. You know, when it comes to illnesses like this, it’s not something people would want to know, so I think that is the reason why they have not done it. However, I will tell them the importance of doing it, so that we all can go for the screening” (p.31).

Perceived Benefits of Cervical Cancer Screening and Vaccination

The participants viewed the screening as beneficial. Three (3) subthemes emerged from this theme: early detection of cervical cancer through early screening, willingness to receive the cervical cancer vaccine, and cost effectiveness.

Early detection of cervical cancer through early screening

Interrogations with the participants on the above subject matter revealed that early detection of the disease helps in early treatment and prevention, which eventually reduces the death rate of cervical cancer. In regard to the above findings, participants explained in the following words: “I think if you go early to screen for cervical cancer and it is detected, it is good because early detection of the disease is key to prevent it. Waiting until the disease spreads to other parts of the body will be worse, so early detection is the best” (p.28). A few participants narrated that knowing it early will help one commence treatment early to prevent complications: “Yes, I say it helps to detect it early because if you go and get screened and it is positive you get treated, and when it’s negative you get vaccinated to protect you, so both are good” (p.20). ‘‘I believe that going to get screened and vaccinated will reduce the number of women who are infected with cervical cancer since the screening helps to detect cervical cancer. You can receive treatment when it is detected early. It will not only reduce the number of people having the disease but it will also reduce the death rate. It will help salvage the rest of the cervix that is not infected and treat it in time’’ (p.35).
Protection from cervical cancer

Cervical cancer vaccination is one of the most important things to do after screening. The participants also recounted that vaccination for cervical cancer is a step in the right direction to preventing the disease: “The vaccine gives lifetime protection when taken; you don’t have to worry about getting cervical cancer after screening and taking the vaccination” (p.4). “I believe that early screening and vaccination against cervical cancer will help to prevent this disease because after screening, if the result is negative, you will receive the vaccine that prevents you from cervical cancer. And as our people say, prevention is better than cure’’ (p.3). However, some participants believed that cervical cancer could not be prevented by receiving the HPV vaccine since cancers cannot be prevented: ‘‘Cancers in general cannot be prevented with vaccines. I don’t know of any vaccine that prevents cervical cancer. When you get cervical cancer, that is it. The only thing left is to go for treatment like chemotherapy and other medications’’ (p.33). “Vaccines for preventing cancer? I didn’t know there was a vaccine available for preventing cervical cancer because people are still dying of cancers. You just have to be mindful of your lifestyle; that is the best way to prevent it’’ (p.24).

Willingness to receive cervical cancer vaccination

Participants revealed that after screening for cervical cancer, it is important to go for vaccination to protect oneself. The following quotes provide details of the results above: “I’m willing to go for the screening and vaccination to protect myself from this deadly disease; even if I have contracted the cancer after being screened, I will be ready to be treated for it, and if I am negative too, I will go for the vaccination to help protect me from the cancer” (p.13).
‘‘In my case, I have gone to have cervical cancer screening already but I was reluctant to go for the vaccination, but I think I am ready to go for the vaccination now’’ (p.15). ‘‘I have taken all my children for immunizations and they are all fine, so I am sure that this vaccine too will not cause any problem, so I will go for it whenever I am off duty’’ (p.4).

Demographic Characteristics of the Participants

Source: Field Survey Data (2020).

Discussions: Utilization of Cervical Cancer Screening and Vaccination

Results of the present study revealed that just a few participants had undergone cervical cancer screening and vaccination, that is, 2 (5.7%) and 1 (2.9%) respectively. Participants who had undergone screening recalled the Pap smear as the test done. However, participants who had received the vaccination could not recall the name of the vaccine that was administered. The present study further reveals that some of the participants did not go to be screened and vaccinated on the basis that they were virgins and hence not eligible for screening. This finding is in consonance with a study by Aniebue and Aniebue (2010) among 394 female university students in Nigeria, which unveiled that about 23.1% identified the Pap smear as a screening test type and only 5.2% of respondents had ever been screened. The findings also supported a study done in Ghana which discovered that the majority of women (97.7%) had never heard of the Pap smear test before (Ebu et al., 2015). Nevertheless, the study was in contrast to Harper and DeMars’ (2016) study in Canada, which showed that Cervarix and Gardasil 9 were some vaccines mentioned by participants. The few participants who had undergone cervical cancer screening shared their experiences during screening. These participants recounted that the screening was not painful during the procedure, but they were rather uncomfortable because of the way they were placed on the table to be examined and the equipment that was inserted into their vagina.
Findings also recorded that some participants were comfortable during the screening due to the familiar environment and the absence of male nurses or doctors. The present study is similar to a study conducted by Rositch et al. (2012), who discovered that some women reported a low level of physical discomfort during Pap smear collection. In addition, over 80% of women reported that they would feel comfortable using a self-sampling device (82%) and would prefer at-home sample collection (84%). This present study’s finding was in contrast with a study in which 26.47% of respondents were married but none of them had undergone the screening test, in the belief that it would be painful (Pegu et al., 2016). A similar study in Ghana also revealed that only 15 of the participants (8.5%) had undergone the Pap smear for cervical cancer screening, due to poor knowledge about it (Adanu, 2002).

Experiences during cervical cancer vaccination

A significant experience revealed in this study was the pain associated with cervical cancer vaccination. The study found that only one (2.8%) participant had received the vaccine for cervical cancer even though a few of them had done the screening. The participant who had received the cervical cancer vaccination described the process as a bit painful, although not as painful as she thought it would be. She felt the needle pricking her skin for just a brief period. This implies that the experiences of females on cervical cancer vaccination are mainly subjective, since others may perceive it differently. Similarly, a study conducted in North Carolina reported that pain from HPV vaccination was commonly reported by parents but was less frequent compared to other adolescent vaccines and did not appear to affect vaccine regimen completion. These findings may be important for increasing HPV vaccination coverage, since women perceive it as less painful (Hudson et al., 2016).
Moreover, a study done in Ghana ascertained that the main concern of the vaccination was ensuring safety during administration rather than the pain identified in this study (Coleman et al., 2011). Regarding the decision to go for cervical cancer screening and vaccination, the study discovered that more than half of the participants, 20 (57.1%), were willing and eager to receive cervical cancer screening and vaccination should they have the opportunity. In relation to the present study, a study conducted in Nigeria revealed that the majority of the participants (62.5%) demonstrated readiness to be screened and vaccinated against cervical cancer (Eze et al., 2012). About participants’ willingness to recommend cervical cancer screening and vaccination to other women, the study established that the majority of the participants were willing to recommend to all women, including their friends, families and church members, to be screened and vaccinated against cervical cancer, since they believed most youth were into early sexual debut, which was identified to increase the risk of acquiring cervical cancer. The participants also revealed that they were not aware of any relative or friend who had done the screening. They stated they were going to recommend the screening and vaccination to their friends and relatives. This finding was surprising because the majority of the study participants had not been screened and vaccinated but were willing to recommend it to others. The findings of this current investigation are also consistent with those of Adejuyigbe et al. (2015), who did a study among medical students of the University of Lagos and found that most of the respondents supported vaccination of adolescent girls (65.7%) and were willing to recommend vaccination to colleagues/friends (82.1%) and to future clients (80.0%).
Perceived Benefits of Cervical Cancer Screening and Vaccination

Early detection of cervical cancer was one of the benefits enumerated by the study participants. A few of the participants who knew about cervical cancer screening and vaccination were of the notion that screening for cervical cancer aids in the early detection of the cancer’s hidden warning signs long before symptoms appear, and thus helps one to commence treatment early. The findings of this present study are consistent with those of Wang et al. (2019), who discovered that early cervical cancer screening and vaccination reduce the cumulative incidence of cervical cancer, increase life span, reduce the cost of treating cervical cancer and improve quality of life. Similar to the present study findings, Ibekwe et al. (2010) in South Africa concluded that the majority (87%) were of the belief that cervical cancer screening is important, while 75% indicated that screening could find changes in the cervix before full cancer arises and that when cervical cancer is detected earlier, it can be easily treated. The majority of the participants in this present study acknowledged that the money spent in preventing cervical cancer was less than the money involved in treating cervical cancer, especially when the condition advances. Thus, it is perceived to be cost effective. This implies that the cost of preventive intervention for cervical cancer is lower, and the intervention more beneficial, than the cost involved after the onset of cervical cancer. However, these findings were surprising since the majority of the participants had not been screened and vaccinated against cervical cancer. A study was in accordance with these findings, where it was discovered that the majority of the respondents (76%) in a study in Ghana were willing to pay at least something for screening and vaccination because they were aware of the cost effectiveness of these services (Opoku et al., 2016).
In conclusion, the results from the study indicated that the participants’ utilization of cervical cancer screening and vaccination was poor; nevertheless, they were conscious of cervical cancer screening and vaccination and were willing to recommend it to friends, relatives and loved ones.

Implications

In nursing practice, the study revealed that the cervical cancer screening and vaccination uptake among the women who partook in this study was low; hence, healthcare professionals should find ways of sensitizing women during healthcare delivery to encourage more women to partake in the screening. Moreover, nurses should prepare women who are being booked for the screening and vaccination to help allay the fears and anxieties that they go through. Healthcare professionals should follow up on the women who come for screening to motivate them to go for vaccination. Durbars and seminars should be organized in various rural communities in Ghana by healthcare professionals, hospitals and NGOs in the country to sensitize women on cervical cancer screening and vaccination and to help clear misconceptions.

Recommendations

The government should formulate policies to help reduce the cost of screening and vaccination. The NHIS should absorb the cost to help improve screening and vaccination coverage.

Ministry of Health (MoH)
• The Ministry of Health should lobby government for funds for the inclusion of the cost of cervical cancer screening and vaccination in the NHIS.

GHANA HEALTH SERVICE (GHS)
• The Ghana Health Service should establish more screening centers, including one close to the rural community where the study was conducted, to motivate more women in this area to patronize screening for cervical cancer and vaccination.
• They should also make screening and vaccination services available and accessible nationwide.

PRACTICING NURSES
• They should make out time to explain the screening procedure to the client and the possible discomfort that might occur.
• They should also take the telephone numbers of the clients who undergo screening in order to call them when the results are ready.
• Healthcare professionals should conduct more research in this field to help find innovative ways to improve screening and vaccination practices.

Limitations
• The study might have revealed more robust findings if a mixed-methods design had been used. Hence, it is recommended that other researchers consider doing a mixed-methods study in this area.
Background: Cervical cancer screening and vaccination practices are reported to have low coverage in most developing countries. It has been reported that most women are aware of cervical cancer screening and vaccination worldwide. Nevertheless, the rate at which women participate in cervical cancer screening and vaccination was found to be low both locally and internationally. Consequently, in sub-Saharan Africa, cervical cancer screening programs have poor coverage. The aim of this study was to explore the practices of cervical cancer screening and vaccination among females in the Oyibi community. Methods: The researchers employed a qualitative exploratory design to recruit 35 participants into five Focus Group Discussions (FGDs) of seven (7) members each. The members were purposively recruited, and the sample size was based on data saturation. Data were collected using a semi-structured interview guide, with the researchers serving as moderators in the groups. Results: Two (2) main themes with eight (8) subthemes were generated from the data analysis. The themes were cervical cancer screening and vaccination practices, and perceived benefits of cervical cancer screening and vaccination. The subthemes that emerged were as follows: types of cervical cancer screening and vaccination done by participants, experiences during cervical cancer screening, experiences during cervical cancer vaccination, decision to go for cervical cancer screening and vaccination, willingness to recommend cervical cancer screening and vaccination to other women, early detection of cervical cancer through early screening, benefits of cervical cancer vaccination, and willingness to receive the cervical cancer vaccine. The study also revealed that most of the women who had done the screening and vaccination were young (19-29 years).
Conclusions: The results from the study indicated that the participants' utilization of cervical cancer screening and vaccination was poor, although they were conscious of the benefits of cervical cancer screening and vaccination and were willing to recommend it to their relatives and loved ones.
null
null
6,714
381
[]
4
[ "cancer", "cervical", "cervical cancer", "screening", "vaccination", "participants", "cancer screening", "cervical cancer screening", "study", "screening vaccination" ]
[ "vaccinate perceptions cervical", "cervical cancer vaccination", "screening experiences cervical", "cervical cancer screening", "cervical cancer participants" ]
null
null
null
[CONTENT] Utilization | cervical cancer | screening | vaccination | females [SUMMARY]
null
[CONTENT] Utilization | cervical cancer | screening | vaccination | females [SUMMARY]
null
[CONTENT] Utilization | cervical cancer | screening | vaccination | females [SUMMARY]
null
[CONTENT] Adult | Female | Focus Groups | Ghana | Health Knowledge, Attitudes, Practice | Humans | Mass Screening | Papillomavirus Infections | Papillomavirus Vaccines | Patient Acceptance of Health Care | Qualitative Research | Uterine Cervical Neoplasms [SUMMARY]
null
[CONTENT] Adult | Female | Focus Groups | Ghana | Health Knowledge, Attitudes, Practice | Humans | Mass Screening | Papillomavirus Infections | Papillomavirus Vaccines | Patient Acceptance of Health Care | Qualitative Research | Uterine Cervical Neoplasms [SUMMARY]
null
[CONTENT] Adult | Female | Focus Groups | Ghana | Health Knowledge, Attitudes, Practice | Humans | Mass Screening | Papillomavirus Infections | Papillomavirus Vaccines | Patient Acceptance of Health Care | Qualitative Research | Uterine Cervical Neoplasms [SUMMARY]
null
[CONTENT] vaccinate perceptions cervical | cervical cancer vaccination | screening experiences cervical | cervical cancer screening | cervical cancer participants [SUMMARY]
null
[CONTENT] vaccinate perceptions cervical | cervical cancer vaccination | screening experiences cervical | cervical cancer screening | cervical cancer participants [SUMMARY]
null
[CONTENT] vaccinate perceptions cervical | cervical cancer vaccination | screening experiences cervical | cervical cancer screening | cervical cancer participants [SUMMARY]
null
[CONTENT] cancer | cervical | cervical cancer | screening | vaccination | participants | cancer screening | cervical cancer screening | study | screening vaccination [SUMMARY]
null
[CONTENT] cancer | cervical | cervical cancer | screening | vaccination | participants | cancer screening | cervical cancer screening | study | screening vaccination [SUMMARY]
null
[CONTENT] cancer | cervical | cervical cancer | screening | vaccination | participants | cancer screening | cervical cancer screening | study | screening vaccination [SUMMARY]
null
[CONTENT] cervical | cancer | cervical cancer | screening | cervical cancer screening | cancer screening | countries | low | africa | hpv [SUMMARY]
null
[CONTENT] cancer | cervical | cervical cancer | screening | vaccination | know | participants | vaccine | cervical cancer screening | cancer screening [SUMMARY]
null
[CONTENT] cancer | cervical | cervical cancer | screening | participants | vaccination | cervical cancer screening | cancer screening | study | data [SUMMARY]
null
[CONTENT] ||| ||| ||| Africa ||| Oyibi [SUMMARY]
null
[CONTENT] Two | 2 | Eight | 8) ||| ||| ||| 19-29 years [SUMMARY]
null
[CONTENT] ||| ||| ||| Africa ||| Oyibi ||| 35 | five | Focus Group Discussions ||| Five | seven | 7 ||| ||| ||| ||| ||| Two | 2 | Eight | 8) ||| ||| ||| 19-29 years ||| ||| [SUMMARY]
null
Biomechanical evaluation of a novel transtibial posterior cruciate ligament reconstruction using high-strength sutures in a porcine bone model.
34629417
Multiple techniques are commonly used for posterior cruciate ligament (PCL) reconstruction. However, the optimum method regarding the fixation of PCL reconstruction after PCL tears remains debatable. The purpose of this study was to compare the biomechanical properties among three different tibial fixation procedures for transtibial single-bundle PCL reconstruction.
BACKGROUND
Thirty-six porcine tibias and porcine extensor tendons were randomized into three fixation study groups: the interference screw fixation (IS) group, the transtibial tubercle fixation (TTF) group, and TTF + IS group (n = 12 in each group). The structural properties of the three fixation groups were tested under cyclic loading and load-to-failure. The slippage after the cyclic loading test and the stiffness and ultimate failure load after load-to-failure testing were recorded.
METHODS
After 1000 cycles of cyclic testing, no significant difference was observed in graft slippage among the three groups. For load-to-failure testing, the TTF + IS group showed a higher ultimate failure load than the TTF group and the IS group (876.34 ± 58.78 N vs. 660.92 ± 77.74 N [P < 0.001] vs. 556.49 ± 65.33 N [P < 0.001]). The stiffness in the TTF group was significantly lower than that in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/mm in the IS group [P = 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in the mean stiffness was found between the IS group and the TTF + IS group (P = 0.127).
RESULTS
In this biomechanical study, supplementary fixation with transtibial tubercle sutures increased the ultimate failure load during load-to-failure testing for PCL reconstruction.
CONCLUSIONS
[ "Animals", "Biomechanical Phenomena", "Posterior Cruciate Ligament Reconstruction", "Sutures", "Swine", "Tendons", "Tibia" ]
8509899
Introduction
Compared with the treatments used for anterior cruciate ligament (ACL) reconstruction, the optimal treatment for posterior cruciate ligament (PCL) tears has been debated because many patients develop residual posterior laxity following PCL reconstruction.[1–13] Recent studies have evaluated several PCL reconstruction techniques, including the transtibial or inlay technique, femoral and tibial tunnel placement, and femoral and/or tibial fixation.[6–9,14–17] However, the gold standard technique for PCL reconstruction has not been established. Tibial-side fixation has recently been of particular interest in PCL reconstruction. Numerous types of tibial fixation have been introduced for PCL reconstruction using hamstring autografts with the transtibial technique, such as interference screws, cross-pins, screws, spiked washers, and endobuttons.[6–8,11,14,18–20] The transtibial technique with a hamstring autograft is one of the most frequently employed techniques in clinical practice. However, four-stranded hamstring autografts fixed with interference screws via the transtibial technique may have decreased pullout strength, which might lead to decreased overall graft stiffness and increased total graft deformation. Therefore, we proposed a new surgical technique for PCL reconstruction with transtibial tubercle fixation (TTF) on the tibial side using several high-strength sutures, which are not restricted by the graft length and increase the biomechanical properties of the reconstructed graft. To our knowledge, no study has compared TTF using several high-strength sutures with interference screw fixation for tibial-side graft fixation in transtibial PCL reconstruction.
The purpose of this study was to compare the biomechanical properties of three different tibial-side fixation procedures for transtibial PCL reconstruction: interference screw fixation (IS) alone, TTF alone, and the new TTF + IS technique. The primary hypothesis was that supplementary fixation with transosseous high-strength sutures would improve the initial strength and stiffness of IS under cyclic loading and load-to-failure testing. The secondary hypothesis was that the initial fixation achieved with the new TTF technique alone would be comparable with that achieved with IS.
Methods
Graft preparation and tunnel preparation

This study was approved by the Ethics Committee of the First Affiliated Hospital of China Medical University. The availability of young cadaver knees is limited for biomechanical testing; porcine tibias, which were used in this study, have been reported to have biomechanical properties similar to those of young human bone.[21] A randomized controlled experimental study in a porcine model was performed using 36 fresh-frozen porcine tibias and 24 porcine digital extensor tendons from healthy male pigs aged 12 to 16 months and weighing 90 kg. The bone mineral density (BMD) of the porcine tibias was assessed using dual-energy X-ray absorptiometry (Hologic QDR Whole-Body X-ray Bone Densitometer; Hologic, Bedford, MA, USA). The BMD was 24.21 ± 0.85 kg/m2 in the IS group, 24.05 ± 0.62 kg/m2 in the TTF group, and 24.29 ± 0.53 kg/m2 in the TTF + IS group. Both the tibias and tendons were stored at −80°C. Before testing, all specimens were thawed at room temperature for 12 h; all of the specimens underwent one freeze-thaw cycle before biomechanical testing. The preparation procedures for the graft and tibial tunnel were similar for the three groups. The specimens were blocked and randomly divided into three groups: IS alone, TTF alone, and TTF + IS (n = 12 in each group) [Figure 1]. Computer drawing of the three fixation groups: (A) IS; (B) TTF; (C) TTF + IS. IS: Interference screw fixation; TTF: Transtibial tubercle fixation. For all porcine tibias, a tunnel with a diameter of 8 mm and a length of 5 to 6 cm was prepared on the tibia by the transtibial technique. A PCL tibial drill guide (Arthrex, Naples, FL, USA) was used, and the drill guide angle on the tibia was oriented at 55° to 60°. A double-looped graft was prepared on the table, folded in half, and trimmed to 8 mm in diameter and 9 to 10 cm in length. Three No.
2 Ultrabraid sutures (Smith & Nephew, Andover, MA, USA) were used to sew 3 cm of both ends of each tendon together using a crisscross stitch. Then, the grafts were wrapped in 0.9% saline solution-soaked gauze before testing. In the IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel [Figure 2]. In the TTF + IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel, and then the ends of the sutures were tied at the tibia. An eyelet-passing pin was drilled transversely 1 cm distal to the tibial tunnel (parallel to the tibial joint line and 1 cm posterior to the anterior tibial cortex). The sutures were passed through the transtibial tubercle with the eyelet-passing pin. All the ends of the sutures were tied at the tibia [Figure 3]. Sawbone model demonstrating the TTF technique. (A–C) The graft was pulled into the tibial tunnel; an eyelet-passing pin was drilled transversely into the transtibial tubercle. (D–F) The sutures were passed to the lateral side. (G–I) The transosseous sutures were tied at the tibia with a knot pusher. TTF: Transtibial tubercle fixation. (A) IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel. (B) TTF: the graft was fixed with high-strength sutures tied at the tibia. (C) TTF + IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel; the ends of the high-strength sutures were tied at the tibia. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.

Biomechanical testing using animal tissue

Biomechanical testing was performed in a similar manner to the methods described by Zhang et al.[20] The tibias were fixed in a custom testing jig [Figures 2 and 3]. All biomechanical tests of the graft-fixation method-tibia complexes were administered using a testing machine. The looped end of the double-looped porcine tendon graft was fixed to a bar attached to the base of the material testing machine. The free graft was kept at a length of 3 cm. The direction of the tensile force and tibial bone tunnel formed an angle of 130° in the sagittal plane.
For each graft-fixation method, the tibia complex was pre-conditioned at 50 N for 5 min, and cyclic loads between 50 and 250 N were then applied for 1000 cycles at a frequency of 1 Hz. Grafts were marked with lines at the tunnel exit points under the pre-conditioning load and again after the cyclic loading test; graft slippage was measured as the distance between these two lines. After cyclic load testing, the constructs were pre-loaded at 20 N for 2 min and then underwent load-to-failure testing at a rate of 10 mm/min. The ultimate failure load (N) was recorded. Pull-out stiffness (N/mm) was calculated as the slope of the linear portion of the load-elongation curve. The failure modes were noted.
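The pull-out stiffness calculation above (slope of the linear portion of the load-elongation curve) can be illustrated with a short least-squares sketch. The fractional-load thresholds used to delimit the "linear portion" are an assumption for this example; the study does not report how that region was identified.

```python
def pullout_stiffness(elongation_mm, load_n, lo_frac=0.2, hi_frac=0.8):
    """Estimate pull-out stiffness (N/mm) as the least-squares slope of
    the load-elongation curve between lo_frac and hi_frac of the peak
    load — a common proxy for the curve's linear portion (the
    thresholds here are illustrative assumptions)."""
    peak = max(load_n)
    pts = [(x, f) for x, f in zip(elongation_mm, load_n)
           if lo_frac * peak <= f <= hi_frac * peak]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # slope in N/mm

# Synthetic perfectly linear ramp: 0–10 mm elongation, 100 N per mm.
elong = [i * 0.5 for i in range(21)]
load = [100.0 * x for x in elong]
k = pullout_stiffness(elong, load)  # -> 100.0 N/mm
```

On a perfectly linear synthetic curve the estimator recovers the true slope exactly; on real test data the chosen load window determines which part of the curve is fitted.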
Statistical analysis
Statistical analysis was performed using SPSS 21.0 (IBM, Armonk, NY, USA). The Kolmogorov-Smirnov test was used to verify that the variables were normally distributed within each group. The Student's t test was used for pairwise comparisons of elongation, stiffness, and failure load among the three test groups. The significance level was set at P < 0.050.
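As a check on the reported pairwise comparisons, the two-sample Student's t statistic (equal variances, as the stated test implies) can be recomputed from the published group means, SDs, and sizes. This is an independent back-of-the-envelope sketch, not the study's SPSS output; the helper function below is illustrative.

```python
import math

def student_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Two-sample Student's t statistic and degrees of freedom,
    computed from summary statistics under the equal-variance
    assumption (pooled variance)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return (m2 - m1) / se, df

# Reported ultimate failure loads: IS 556.49 ± 65.33 N vs.
# TTF 660.92 ± 77.74 N, n = 12 per group (from the Results section).
t, df = student_t_from_summary(556.49, 65.33, 12, 660.92, 77.74, 12)
# t ≈ 3.56 with df = 22, consistent with the reported P = 0.001
```

A t statistic of roughly 3.56 on 22 degrees of freedom corresponds to a two-tailed P close to 0.001, in line with the value reported for the IS vs. TTF comparison.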
Results
Cyclic testing
No failures occurred during cyclic testing. Table 1 reports the cyclic testing results (1000 cycles) in the three groups. The mean graft slippage values for the IS group, TTF group, and TTF + IS group were 1.37 ± 0.45, 1.98 ± 0.46, and 1.39 ± 0.50 mm, respectively. There were no significant differences in slippage among the three groups. Cyclic testing and load-to-failure testing in the three groups. Data are presented as mean ± standard deviation. ∗IS vs. TTF; †IS vs. TTF + IS; ‡TTF vs. TTF + IS. IS: Interference screw; TTF: Transtibial tubercle fixation.
Load-to-failure testing
The ultimate failure load in the TTF + IS group was significantly higher than those in the IS group and the TTF group (876.34 ± 58.78 N in the TTF + IS group vs. 556.49 ± 65.33 N in the IS group [P < 0.001] and 660.92 ± 77.74 N in the TTF group [P < 0.001]). The ultimate failure load in the TTF group was also significantly higher than that in the IS group (660.92 ± 77.74 N vs. 556.49 ± 65.33 N; P = 0.001). The stiffness in the TTF group was significantly lower than those in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/mm in the IS group [P < 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in mean stiffness was found between the IS group and the TTF + IS group [Table 1].
null
null
[ "Graft preparation and tunnel preparation", "Biomechanical testing using animal tissue", "Statistical analysis", "Cyclic testing", "Load-to-failure testing" ]
[ "This study was approved by the Ethics Committee of First Affiliated Hospital of China Medical University. The availability of young cadaver knees is limited for biomechanical testing. Porcine tibias, which were used in this study, have been reported to have biomechanical properties similar to those of young human bone.[21] A randomized controlled experimental study in a porcine model was performed using 36 fresh-frozen porcine tibias and 24 porcine digital extensor tendons from healthy male pigs aged 12 to 16 months and weighing 90 kg. The bone mineral density (BMD) of the porcine tibias was assessed using dual-energy X-ray absorptiometry (Hologic QDR Whole-Body X-ray Bone Densitometer; Hologic, Bedford, MA, USA). BMD in IS group was 24.21 ± 0.85 kg/m2, in TTF group was 24.05 ± 0.62 kg/m2, and in TTF + IS group was 24.29 ± 0.53 kg/m2. Both the tibias and tendons were stored at −80°C. Before testing, all specimens were thawed at room temperature for 12 h. All of the specimens underwent one freeze-thaw cycle before biomechanical testing. The preparation procedures for the graft and tibial tunnel were similar for the three groups. The specimens were blocked and randomly divided into three groups: IS alone, TTF alone, and TTF + IS (n = 12 in each group) [Figure 1].\nComputer drawing of the three fixation groups. (A) IS; (B) TTF; (C) TTF + IS. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.\nFor all porcine tibiae, a tunnel with a diameter of 8 mm and a length of 5 to 6 cm was prepared on the tibia by the transtibial technique. A PCL tibial drill guide (Arthrex, Naples, FL, USA) was used, and the drill guide angle of the tibia was oriented at 55° to 60°. A double-looped graft was prepared on the table, folded in half, and thinned to 8 mm in diameter and 9 to 10 mm in length. Three No. 2 Ultrabraid sutures (Smith & Nephew, Andover, MA, USA) were used to sew 3 cm of both ends of each tendon together using a crisscross stitch. 
Then, the grafts were wrapped in 0.9% saline solution-soaked gauze before testing. In the IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel [Figure 2]. In the TTF + IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel, and then the ends of the sutures were tied at the tibia. An eyelet-passing pin was drilled transversely 1 cm distal to the tibial tunnel (parallel to the tibial joint line and 1 cm posterior to the anterior tibial cortex). The sutures were passed through the transtibial tubercle with the eyelet-passing pin. All the ends of the sutures were tied at the tibia [Figure 3].\nSawbone model demonstrating the TTF technique. (A–C) The graft was pulled into the tibial tunnel; an eyelet-passing pin was drilled transversely into the transtibial tubercle. (D–F) The sutures were passed to the lateral side. (G–I) The transosseous sutures were tied at the tibia with a knot pusher. TTF: Transtibial tubercle fixation.\n(A) IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel. (B) TTF: the graft was fixed with high-strength sutures tied at the tibia. (C) TTF + IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel; the ends of the high-strength sutures were tied at the tibia. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.", "Biomechanical testing was performed in a similar manner to the methods described by Zhang et al[20] The tibias were fixed in a custom testing jig [Figures 2 and 3]. All biomechanical tests of the graft-fixation method-tibia complexes were administered using a testing machine. The looped end of the double-looped porcine tendon graft was fixed to a bar attached to the base of the material testing machine. The free graft was kept at a length of 3 cm. 
The direction of the tensile force and tibial bone tunnel formed an angle of 130° in the sagittal plane. For the graft-fixation method, the tibia complex was pre-conditioned at 50 N for 5 min, and cyclic loads between 50 and 250 N were applied for 1000 cycles at a frequency of 1 Hz. Grafts were marked with links at tunnel exit points applying the pre-conditioning load and again after the cyclic loading test. Graft slippage was measured as the distance between these two lines. After clinical load testing, the constructs were pre-loaded at 20 N for 2 min; then, they underwent load-to-failure testing at a rate of 10 mm/min. The ultimate failure load (N) was determined. Pull-out stiffness (N/mm) was calculated as the slope of the linear portion of the load-elongation curve. The failure modes were noted.", "Statistical analysis was performed using SPSS 21.0 (IBM, Armonk, NY, USA). We used the Kolmogorov-Smirnov test to determine the normally distributed variables within the groups. The Student's t test was used to compare the elongation, stiffness, and failure load among the three test groups. The significance level was set at P < 0.050.", "No failures occurred during cyclic testing. Table 1 reports the cyclic testing results (1000 cycles) in the three groups. The mean graft slippage values for the IS group, TTF group, and TTF + IS group were 1.37 ± 0.45, 1.98 ± 0.46, and 1.39 ± 0.50 mm, respectively. There were no significant differences in slippage among the three groups.\nCyclic testing and load-to-failure testing in the three groups.\nData are presented as mean ± standard deviation. ∗IS vs. TTF; †IS vs. TTF + IS; ‡TTF vs. TTF + IS. IS: Interference screw; TTF: Transtibial tubercle fixation.", "The ultimate failure load in the TTF + IS group was significantly higher than those in the IS group and the TTF group (876.34 ± 58.78 N in the TTF + IS group vs. 556.49 ± 65.33 N in the IS group [P < 0.001] and 660.92 ± 77.74 N in the TTF group [P < 0.001]). 
The ultimate failure load in the TTF group was also significantly higher than that in the IS group (660.92 ± 77.74 N vs. 556.49 ± 65.33 N; P = 0.001).\nThe stiffness in the TTF group was significantly lower than those in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/m in the IS group [P < 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in the mean stiffness was found between the IS group and the TTF + IS group [Table 1]." ]
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Graft preparation and tunnel preparation", "Biomechanical testing using animal tissue", "Statistical analysis", "Results", "Cyclic testing", "Load-to-failure testing", "Discussion", "Conflicts of interest" ]
[ "Compared with the treatments used for anterior cruciate ligament (ACL) reconstruction, the optimal treatment for posterior cruciate ligament (PCL) tears has been debated because many patients develop residual posterior laxity following PCL reconstruction.[1–13] Recent studies have evaluated several PCL reconstruction techniques, including the transtibial technique or inlay technique, femoral and tibial tunnel placement, femoral and/or tibial fixation, etc.[6–9,14–17] However, the gold standard technique for PCL reconstruction has not been established.\nTibial side fixation has been the recent treatment of interest for PCL reconstruction. Numerous types of tibial fixation have been introduced for PCL reconstruction using hamstring autografts during the transtibial technique, such as interference screws, cross-pins, screws, spiked washers, and endobuttons.[6–8,11,14,18–20] The transtibial technique with a hamstring autograft is one of the most frequently employed techniques in clinical practice. However, four-stranded hamstring autografts fixed with interference screws with the transtibial technique may have short decreased pullout strength, which might lead to decreased overall graft stiffness and increased total graft deformation. Therefore, we proposed a new surgical technique for PCL reconstruction with tibial transtibial tubercle fixation (TTF) using several high-strength sutures that are not restricted by the graft length and increase the biomechanical properties of the reconstructed graft. 
To our knowledge, no study has compared tibial graft fixation in the transtibial technique for PCL reconstruction with TTF using several high-strength sutures with interference screws at the tibial side fixation.\nThe purpose of this study was to compare the biomechanical properties among three different procedures at the tibial site in terms of their ability in transtibial PCL reconstruction: the use of interference screw fixation (IS) alone, the TTF alone, and the new TTF + IS technique. The primary hypothesis is that supplementary fixation with transosseous high-strength sutures will improve the initial IS strength and stiffness under cyclic loading and load-to-failure testing. The secondary hypothesis is that the initial new TTF technique will be comparable with that achieved with IS.", "Graft preparation and tunnel preparation This study was approved by the Ethics Committee of First Affiliated Hospital of China Medical University. The availability of young cadaver knees is limited for biomechanical testing. Porcine tibias, which were used in this study, have been reported to have biomechanical properties similar to those of young human bone.[21] A randomized controlled experimental study in a porcine model was performed using 36 fresh-frozen porcine tibias and 24 porcine digital extensor tendons from healthy male pigs aged 12 to 16 months and weighing 90 kg. The bone mineral density (BMD) of the porcine tibias was assessed using dual-energy X-ray absorptiometry (Hologic QDR Whole-Body X-ray Bone Densitometer; Hologic, Bedford, MA, USA). BMD in IS group was 24.21 ± 0.85 kg/m2, in TTF group was 24.05 ± 0.62 kg/m2, and in TTF + IS group was 24.29 ± 0.53 kg/m2. Both the tibias and tendons were stored at −80°C. Before testing, all specimens were thawed at room temperature for 12 h. All of the specimens underwent one freeze-thaw cycle before biomechanical testing. The preparation procedures for the graft and tibial tunnel were similar for the three groups. 
The specimens were blocked and randomly divided into three groups: IS alone, TTF alone, and TTF + IS (n = 12 in each group) [Figure 1].\nComputer drawing of the three fixation groups. (A) IS; (B) TTF; (C) TTF + IS. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.\nFor all porcine tibiae, a tunnel with a diameter of 8 mm and a length of 5 to 6 cm was prepared on the tibia by the transtibial technique. A PCL tibial drill guide (Arthrex, Naples, FL, USA) was used, and the drill guide angle of the tibia was oriented at 55° to 60°. A double-looped graft was prepared on the table, folded in half, and thinned to 8 mm in diameter and 9 to 10 mm in length. Three No. 2 Ultrabraid sutures (Smith & Nephew, Andover, MA, USA) were used to sew 3 cm of both ends of each tendon together using a crisscross stitch. Then, the grafts were wrapped in 0.9% saline solution-soaked gauze before testing. In the IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel [Figure 2]. In the TTF + IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel, and then the ends of the sutures were tied at the tibia. An eyelet-passing pin was drilled transversely 1 cm distal to the tibial tunnel (parallel to the tibial joint line and 1 cm posterior to the anterior tibial cortex). The sutures were passed through the transtibial tubercle with the eyelet-passing pin. All the ends of the sutures were tied at the tibia [Figure 3].\nSawbone model demonstrating the TTF technique. (A–C) The graft was pulled into the tibial tunnel; an eyelet-passing pin was drilled transversely into the transtibial tubercle. (D–F) The sutures were passed to the lateral side. (G–I) The transosseous sutures were tied at the tibia with a knot pusher. TTF: Transtibial tubercle fixation.\n(A) IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel. 
(B) TTF: the graft was fixed with high-strength sutures tied at the tibia. (C) TTF + IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel; the ends of the high-strength sutures were tied at the tibia. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.\nThis study was approved by the Ethics Committee of First Affiliated Hospital of China Medical University. The availability of young cadaver knees is limited for biomechanical testing. Porcine tibias, which were used in this study, have been reported to have biomechanical properties similar to those of young human bone.[21] A randomized controlled experimental study in a porcine model was performed using 36 fresh-frozen porcine tibias and 24 porcine digital extensor tendons from healthy male pigs aged 12 to 16 months and weighing 90 kg. The bone mineral density (BMD) of the porcine tibias was assessed using dual-energy X-ray absorptiometry (Hologic QDR Whole-Body X-ray Bone Densitometer; Hologic, Bedford, MA, USA). BMD in IS group was 24.21 ± 0.85 kg/m2, in TTF group was 24.05 ± 0.62 kg/m2, and in TTF + IS group was 24.29 ± 0.53 kg/m2. Both the tibias and tendons were stored at −80°C. Before testing, all specimens were thawed at room temperature for 12 h. All of the specimens underwent one freeze-thaw cycle before biomechanical testing. The preparation procedures for the graft and tibial tunnel were similar for the three groups. The specimens were blocked and randomly divided into three groups: IS alone, TTF alone, and TTF + IS (n = 12 in each group) [Figure 1].\nComputer drawing of the three fixation groups. (A) IS; (B) TTF; (C) TTF + IS. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.\nFor all porcine tibiae, a tunnel with a diameter of 8 mm and a length of 5 to 6 cm was prepared on the tibia by the transtibial technique. 
A PCL tibial drill guide (Arthrex, Naples, FL, USA) was used, and the drill guide angle of the tibia was oriented at 55° to 60°. A double-looped graft was prepared on the table, folded in half, and thinned to 8 mm in diameter and 9 to 10 mm in length. Three No. 2 Ultrabraid sutures (Smith & Nephew, Andover, MA, USA) were used to sew 3 cm of both ends of each tendon together using a crisscross stitch. Then, the grafts were wrapped in 0.9% saline solution-soaked gauze before testing. In the IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel [Figure 2]. In the TTF + IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel, and then the ends of the sutures were tied at the tibia. An eyelet-passing pin was drilled transversely 1 cm distal to the tibial tunnel (parallel to the tibial joint line and 1 cm posterior to the anterior tibial cortex). The sutures were passed through the transtibial tubercle with the eyelet-passing pin. All the ends of the sutures were tied at the tibia [Figure 3].\nSawbone model demonstrating the TTF technique. (A–C) The graft was pulled into the tibial tunnel; an eyelet-passing pin was drilled transversely into the transtibial tubercle. (D–F) The sutures were passed to the lateral side. (G–I) The transosseous sutures were tied at the tibia with a knot pusher. TTF: Transtibial tubercle fixation.\n(A) IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel. (B) TTF: the graft was fixed with high-strength sutures tied at the tibia. (C) TTF + IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel; the ends of the high-strength sutures were tied at the tibia. 
IS: Interference screw fixation; TTF: Transtibial tubercle fixation.\nBiomechanical testing using animal tissue Biomechanical testing was performed in a similar manner to the methods described by Zhang et al[20] The tibias were fixed in a custom testing jig [Figures 2 and 3]. All biomechanical tests of the graft-fixation method-tibia complexes were administered using a testing machine. The looped end of the double-looped porcine tendon graft was fixed to a bar attached to the base of the material testing machine. The free graft was kept at a length of 3 cm. The direction of the tensile force and tibial bone tunnel formed an angle of 130° in the sagittal plane. For the graft-fixation method, the tibia complex was pre-conditioned at 50 N for 5 min, and cyclic loads between 50 and 250 N were applied for 1000 cycles at a frequency of 1 Hz. Grafts were marked with links at tunnel exit points applying the pre-conditioning load and again after the cyclic loading test. Graft slippage was measured as the distance between these two lines. After clinical load testing, the constructs were pre-loaded at 20 N for 2 min; then, they underwent load-to-failure testing at a rate of 10 mm/min. The ultimate failure load (N) was determined. Pull-out stiffness (N/mm) was calculated as the slope of the linear portion of the load-elongation curve. The failure modes were noted.\nBiomechanical testing was performed in a similar manner to the methods described by Zhang et al[20] The tibias were fixed in a custom testing jig [Figures 2 and 3]. All biomechanical tests of the graft-fixation method-tibia complexes were administered using a testing machine. The looped end of the double-looped porcine tendon graft was fixed to a bar attached to the base of the material testing machine. The free graft was kept at a length of 3 cm. The direction of the tensile force and tibial bone tunnel formed an angle of 130° in the sagittal plane. 
For the graft-fixation method, the tibia complex was pre-conditioned at 50 N for 5 min, and cyclic loads between 50 and 250 N were applied for 1000 cycles at a frequency of 1 Hz. Grafts were marked with links at tunnel exit points applying the pre-conditioning load and again after the cyclic loading test. Graft slippage was measured as the distance between these two lines. After clinical load testing, the constructs were pre-loaded at 20 N for 2 min; then, they underwent load-to-failure testing at a rate of 10 mm/min. The ultimate failure load (N) was determined. Pull-out stiffness (N/mm) was calculated as the slope of the linear portion of the load-elongation curve. The failure modes were noted.\nStatistical analysis Statistical analysis was performed using SPSS 21.0 (IBM, Armonk, NY, USA). We used the Kolmogorov-Smirnov test to determine the normally distributed variables within the groups. The Student's t test was used to compare the elongation, stiffness, and failure load among the three test groups. The significance level was set at P < 0.050.\nStatistical analysis was performed using SPSS 21.0 (IBM, Armonk, NY, USA). We used the Kolmogorov-Smirnov test to determine the normally distributed variables within the groups. The Student's t test was used to compare the elongation, stiffness, and failure load among the three test groups. The significance level was set at P < 0.050.", "This study was approved by the Ethics Committee of First Affiliated Hospital of China Medical University. The availability of young cadaver knees is limited for biomechanical testing. Porcine tibias, which were used in this study, have been reported to have biomechanical properties similar to those of young human bone.[21] A randomized controlled experimental study in a porcine model was performed using 36 fresh-frozen porcine tibias and 24 porcine digital extensor tendons from healthy male pigs aged 12 to 16 months and weighing 90 kg. 
The bone mineral density (BMD) of the porcine tibias was assessed using dual-energy X-ray absorptiometry (Hologic QDR Whole-Body X-ray Bone Densitometer; Hologic, Bedford, MA, USA). BMD in IS group was 24.21 ± 0.85 kg/m2, in TTF group was 24.05 ± 0.62 kg/m2, and in TTF + IS group was 24.29 ± 0.53 kg/m2. Both the tibias and tendons were stored at −80°C. Before testing, all specimens were thawed at room temperature for 12 h. All of the specimens underwent one freeze-thaw cycle before biomechanical testing. The preparation procedures for the graft and tibial tunnel were similar for the three groups. The specimens were blocked and randomly divided into three groups: IS alone, TTF alone, and TTF + IS (n = 12 in each group) [Figure 1].\nComputer drawing of the three fixation groups. (A) IS; (B) TTF; (C) TTF + IS. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.\nFor all porcine tibiae, a tunnel with a diameter of 8 mm and a length of 5 to 6 cm was prepared on the tibia by the transtibial technique. A PCL tibial drill guide (Arthrex, Naples, FL, USA) was used, and the drill guide angle of the tibia was oriented at 55° to 60°. A double-looped graft was prepared on the table, folded in half, and thinned to 8 mm in diameter and 9 to 10 mm in length. Three No. 2 Ultrabraid sutures (Smith & Nephew, Andover, MA, USA) were used to sew 3 cm of both ends of each tendon together using a crisscross stitch. Then, the grafts were wrapped in 0.9% saline solution-soaked gauze before testing. In the IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel [Figure 2]. In the TTF + IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel, and then the ends of the sutures were tied at the tibia. 
An eyelet-passing pin was drilled transversely 1 cm distal to the tibial tunnel (parallel to the tibial joint line and 1 cm posterior to the anterior tibial cortex). The sutures were passed through the transtibial tubercle with the eyelet-passing pin. All the ends of the sutures were tied at the tibia [Figure 3].\nSawbone model demonstrating the TTF technique. (A–C) The graft was pulled into the tibial tunnel; an eyelet-passing pin was drilled transversely into the transtibial tubercle. (D–F) The sutures were passed to the lateral side. (G–I) The transosseous sutures were tied at the tibia with a knot pusher. TTF: Transtibial tubercle fixation.\n(A) IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel. (B) TTF: the graft was fixed with high-strength sutures tied at the tibia. (C) TTF + IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel; the ends of the high-strength sutures were tied at the tibia. IS: Interference screw fixation; TTF: Transtibial tubercle fixation.", "Biomechanical testing was performed in a similar manner to the methods described by Zhang et al[20] The tibias were fixed in a custom testing jig [Figures 2 and 3]. All biomechanical tests of the graft-fixation method-tibia complexes were administered using a testing machine. The looped end of the double-looped porcine tendon graft was fixed to a bar attached to the base of the material testing machine. The free graft was kept at a length of 3 cm. The direction of the tensile force and tibial bone tunnel formed an angle of 130° in the sagittal plane. For the graft-fixation method, the tibia complex was pre-conditioned at 50 N for 5 min, and cyclic loads between 50 and 250 N were applied for 1000 cycles at a frequency of 1 Hz. Grafts were marked with links at tunnel exit points applying the pre-conditioning load and again after the cyclic loading test. Graft slippage was measured as the distance between these two lines. 
After cyclic load testing, the constructs were pre-loaded at 20 N for 2 min; then, they underwent load-to-failure testing at a rate of 10 mm/min. The ultimate failure load (N) was determined. Pull-out stiffness (N/mm) was calculated as the slope of the linear portion of the load-elongation curve. The failure modes were noted.", "Statistical analysis was performed using SPSS 21.0 (IBM, Armonk, NY, USA). We used the Kolmogorov-Smirnov test to determine whether the variables were normally distributed within the groups. Student's t test was used to compare the elongation, stiffness, and failure load among the three test groups. The significance level was set at P < 0.050.", "Cyclic testing No failures occurred during cyclic testing. Table 1 reports the cyclic testing results (1000 cycles) in the three groups. The mean graft slippage values for the IS group, TTF group, and TTF + IS group were 1.37 ± 0.45, 1.98 ± 0.46, and 1.39 ± 0.50 mm, respectively. There were no significant differences in slippage among the three groups.\nCyclic testing and load-to-failure testing in the three groups.\nData are presented as mean ± standard deviation. ∗IS vs. TTF; †IS vs. TTF + IS; ‡TTF vs. TTF + IS. IS: Interference screw; TTF: Transtibial tubercle fixation.\nLoad-to-failure testing The ultimate failure load in the TTF + IS group was significantly higher than those in the IS group and the TTF group (876.34 ± 58.78 N in the TTF + IS group vs. 556.49 ± 65.33 N in the IS group [P < 0.001] and 660.92 ± 77.74 N in the TTF group [P < 0.001]). The ultimate failure load in the TTF group was also significantly higher than that in the IS group (660.92 ± 77.74 N vs. 556.49 ± 65.33 N; P = 0.001).\nThe stiffness in the TTF group was significantly lower than those in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/mm in the IS group [P < 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in the mean stiffness was found between the IS group and the TTF + IS group [Table 1].", "No failures occurred during cyclic testing. Table 1 reports the cyclic testing results (1000 cycles) in the three groups. The mean graft slippage values for the IS group, TTF group, and TTF + IS group were 1.37 ± 0.45, 1.98 ± 0.46, and 1.39 ± 0.50 mm, respectively. There were no significant differences in slippage among the three groups.\nCyclic testing and load-to-failure testing in the three groups.\nData are presented as mean ± standard deviation. ∗IS vs. TTF; †IS vs. TTF + IS; ‡TTF vs. TTF + IS. IS: Interference screw; TTF: Transtibial tubercle fixation.", "The ultimate failure load in the TTF + IS group was significantly higher than those in the IS group and the TTF group (876.34 ± 58.78 N in the TTF + IS group vs. 556.49 ± 65.33 N in the IS group [P < 0.001] and 660.92 ± 77.74 N in the TTF group [P < 0.001]). The ultimate failure load in the TTF group was also significantly higher than that in the IS group (660.92 ± 77.74 N vs. 556.49 ± 65.33 N; P = 0.001).\nThe stiffness in the TTF group was significantly lower than those in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/mm in the IS group [P < 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in the mean stiffness was found between the IS group and the TTF + IS group [Table 1].", "The principal finding of our study was that the surgical technique for transtibial PCL reconstruction with TTF using several high-strength sutures provided a higher ultimate failure load than IS alone or TTF alone during PCL reconstruction on the tibial side in a porcine model. In the cyclic testing study, there were no significant differences in the slippage between the IS group and the TTF + IS group. In the load-to-failure testing, the TTF + IS group had a higher ultimate failure load than the IS group and the TTF group. The stiffness in the TTF group was significantly lower than that in the IS group and the TTF + IS group.
No significant difference in mean stiffness was found between the IS group and the TTF + IS group (P = 0.127).\nRecent studies have shown that the transtibial technique or tibial inlay technique can improve the stability of the knee in PCL-reconstructed knees.[1,13–17,19,20,22–27] However, the optimal PCL reconstruction technique has yet to be determined because PCL reconstruction has not had the same success in restoring knee stability as ACL reconstruction. Many authors have pointed out that graft fixation techniques and graft fixation levels are critical factors for successful PCL reconstruction using hamstring tendon grafts.[3]\nSome biomechanical studies suggest that supplementary tibial fixation for ACL reconstruction may be beneficial,[20,28] showing that supplementary fixation with staple or push-lock screws increases the ultimate failure load compared with interference fixation alone.[28] Multiple strands of high-strength sutures can theoretically provide a higher ultimate failure load. The transosseous suture fixation technique with high-strength sutures has been used for the repair of rotator cuff, patellar tendon, and quadriceps tendon ruptures,[11,18,29–35] and we used transosseous suture fixation with high-strength sutures for PCL reconstruction in this biomechanical study. A similar technical note has been published for TTF without hardware in ACL and PCL reconstruction.[13] Our study is the first to compare the biomechanics of supplementary TTF using several high-strength sutures with those of IS in transtibial PCL reconstruction. Regarding TTF + IS vs. IS alone or TTF alone, we found that supplementary TTF provided a higher ultimate failure load than IS alone or TTF alone for PCL graft-to-tibial tunnel fixation. This procedure may theoretically be recommended for supplementary fixation in cases of revision surgery with tunnel widening and graft-tunnel mismatch in PCL reconstruction.
The supplementary transtibial tubercle technique does not require implants and is therefore much less expensive than other techniques, such as suspension buttons, screws or washers, and metallic anchors. Considering the decreased biomechanical properties with TTF alone (relatively lower stiffness), it might not be recommended for PCL fixation alone in a clinical setting. A longer effective length of reconstructed graft could lead to increased overall graft stiffness and decreased total graft deformation.[4] We chose to avoid the use of IS alone or TTF alone for PCL graft fixation because of the decreased pullout strength and/or decreased stiffness, which might have led to decreased overall graft stiffness and increased total graft deformation.\nThere were some limitations to our study. First, this biomechanical study focused only on time-zero outcomes. Second, we could not study the healing of the graft to bone over time. Third, human bone was not used because the availability of young human bone and hamstring tendons for biomechanical testing is limited; the porcine model may not reflect the actual situation in human surgical repair, which limits the value of the study. However, porcine specimens are commonly used in biomechanical studies because of their structural and material similarity to young human bone and hamstring tendons.[20,21] Fourth, there are no comparative clinical outcome data for this technique, although it is a relatively uncomplicated and inexpensive option for knee ligament reconstruction surgeons. Finally, future clinical studies are needed to provide evidence that this technique restores knee function, motion, and stability.\nThe results of this biomechanical study suggest that supplementary TTF + IS increased the ultimate failure loads compared with conventional IS alone.", "None." ]
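The pull-out stiffness reported above is defined in the Methods as the slope of the linear portion of the load-elongation curve. A minimal numerical sketch of that definition follows; the function name, the 20–80%-of-peak window used to select the "linear portion", and the synthetic curve are illustrative assumptions, not the authors' actual analysis code.

```python
import numpy as np

def pullout_stiffness(elongation_mm, load_n, lower=0.2, upper=0.8):
    """Estimate pull-out stiffness (N/mm) as the slope of the near-linear
    region of a load-elongation curve, here taken as the part of the curve
    between `lower` and `upper` fractions of the peak load (an assumption)."""
    elongation_mm = np.asarray(elongation_mm, dtype=float)
    load_n = np.asarray(load_n, dtype=float)
    peak = load_n.max()
    mask = (load_n >= lower * peak) & (load_n <= upper * peak)
    # least-squares line through the selected region; slope is stiffness
    slope, _intercept = np.polyfit(elongation_mm[mask], load_n[mask], 1)
    return slope

# Synthetic, idealized curve: a perfectly linear 120 N/mm rise.
x = np.linspace(0.0, 7.0, 200)   # elongation (mm)
y = 120.0 * x                    # load (N)
print(round(pullout_stiffness(x, y), 1))  # 120.0 for this ideal curve
```

For a real test curve (with toe-in and post-yield regions), the chosen window strongly affects the estimate, which is why studies usually state that stiffness was taken from the linear portion only.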
[ "intro", "methods", null, null, null, "results", null, null, "discussion", "COI-statement" ]
[ "Posterior cruciate ligament", "Transtibial technique", "Biomechanics", "Interference screw", "High-strength sutures" ]
Introduction: Compared with the treatments used for anterior cruciate ligament (ACL) reconstruction, the optimal treatment for posterior cruciate ligament (PCL) tears has been debated because many patients develop residual posterior laxity following PCL reconstruction.[1–13] Recent studies have evaluated several PCL reconstruction techniques, including the transtibial or inlay technique, femoral and tibial tunnel placement, and femoral and/or tibial fixation.[6–9,14–17] However, the gold standard technique for PCL reconstruction has not been established. Tibial-side fixation has been a recent focus of interest in PCL reconstruction. Numerous types of tibial fixation have been introduced for PCL reconstruction using hamstring autografts during the transtibial technique, such as interference screws, cross-pins, screws, spiked washers, and endobuttons.[6–8,11,14,18–20] The transtibial technique with a hamstring autograft is one of the most frequently employed techniques in clinical practice. However, four-stranded hamstring autografts fixed with interference screws in the transtibial technique may have decreased pullout strength, which might lead to decreased overall graft stiffness and increased total graft deformation. Therefore, we proposed a new surgical technique for PCL reconstruction with transtibial tubercle fixation (TTF) using several high-strength sutures, which is not restricted by the graft length and increases the biomechanical properties of the reconstructed graft. To our knowledge, no study has compared TTF using several high-strength sutures with interference screw fixation at the tibial side in transtibial PCL reconstruction.
The purpose of this study was to compare the biomechanical properties of three different tibial-side fixation procedures for transtibial PCL reconstruction: interference screw fixation (IS) alone, TTF alone, and the new TTF + IS technique. The primary hypothesis was that supplementary fixation with transosseous high-strength sutures would improve the initial IS strength and stiffness under cyclic loading and load-to-failure testing. The secondary hypothesis was that the initial fixation achieved with the new TTF technique alone would be comparable with that achieved with IS. Methods: Graft preparation and tunnel preparation This study was approved by the Ethics Committee of First Affiliated Hospital of China Medical University. The availability of young cadaver knees is limited for biomechanical testing. Porcine tibias, which were used in this study, have been reported to have biomechanical properties similar to those of young human bone.[21] A randomized controlled experimental study in a porcine model was performed using 36 fresh-frozen porcine tibias and 24 porcine digital extensor tendons from healthy male pigs aged 12 to 16 months and weighing 90 kg. The bone mineral density (BMD) of the porcine tibias was assessed using dual-energy X-ray absorptiometry (Hologic QDR Whole-Body X-ray Bone Densitometer; Hologic, Bedford, MA, USA). BMD was 24.21 ± 0.85 kg/m2 in the IS group, 24.05 ± 0.62 kg/m2 in the TTF group, and 24.29 ± 0.53 kg/m2 in the TTF + IS group. Both the tibias and tendons were stored at −80°C. Before testing, all specimens were thawed at room temperature for 12 h. All of the specimens underwent one freeze-thaw cycle before biomechanical testing. The preparation procedures for the graft and tibial tunnel were similar for the three groups. The specimens were blocked and randomly divided into three groups: IS alone, TTF alone, and TTF + IS (n = 12 in each group) [Figure 1]. Computer drawing of the three fixation groups. (A) IS; (B) TTF; (C) TTF + IS.
IS: Interference screw fixation; TTF: Transtibial tubercle fixation. For all porcine tibiae, a tunnel with a diameter of 8 mm and a length of 5 to 6 cm was prepared on the tibia by the transtibial technique. A PCL tibial drill guide (Arthrex, Naples, FL, USA) was used, and the drill guide angle of the tibia was oriented at 55° to 60°. A double-looped graft was prepared on the table, folded in half, and thinned to 8 mm in diameter and 9 to 10 cm in length. Three No. 2 Ultrabraid sutures (Smith & Nephew, Andover, MA, USA) were used to sew 3 cm of both ends of each tendon together using a crisscross stitch. Then, the grafts were wrapped in 0.9% saline solution-soaked gauze before testing. In the IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel [Figure 2]. In the TTF + IS group, the graft was fixed with an 8 × 25 mm titanium interference screw (Guardsman) in the proximal tibial tunnel, and then the ends of the sutures were tied at the tibia. An eyelet-passing pin was drilled transversely 1 cm distal to the tibial tunnel (parallel to the tibial joint line and 1 cm posterior to the anterior tibial cortex). The sutures were passed through the transtibial tubercle with the eyelet-passing pin. All the ends of the sutures were tied at the tibia [Figure 3]. Sawbone model demonstrating the TTF technique. (A–C) The graft was pulled into the tibial tunnel; an eyelet-passing pin was drilled transversely into the transtibial tubercle. (D–F) The sutures were passed to the lateral side. (G–I) The transosseous sutures were tied at the tibia with a knot pusher. TTF: Transtibial tubercle fixation. (A) IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel. (B) TTF: the graft was fixed with high-strength sutures tied at the tibia.
(C) TTF + IS: the graft was fixed with a titanium interference screw in the proximal tibial tunnel; the ends of the high-strength sutures were tied at the tibia. IS: Interference screw fixation; TTF: Transtibial tubercle fixation. Biomechanical testing using animal tissue Biomechanical testing was performed in a similar manner to the methods described by Zhang et al[20] The tibias were fixed in a custom testing jig [Figures 2 and 3].
All biomechanical tests of the graft-fixation method-tibia complexes were administered using a testing machine. The looped end of the double-looped porcine tendon graft was fixed to a bar attached to the base of the material testing machine. The free graft was kept at a length of 3 cm. The direction of the tensile force and tibial bone tunnel formed an angle of 130° in the sagittal plane. For the graft-fixation method, the tibia complex was pre-conditioned at 50 N for 5 min, and cyclic loads between 50 and 250 N were applied for 1000 cycles at a frequency of 1 Hz. Grafts were marked with lines at the tunnel exit points after applying the pre-conditioning load and again after the cyclic loading test. Graft slippage was measured as the distance between these two lines. After cyclic load testing, the constructs were pre-loaded at 20 N for 2 min; then, they underwent load-to-failure testing at a rate of 10 mm/min. The ultimate failure load (N) was determined. Pull-out stiffness (N/mm) was calculated as the slope of the linear portion of the load-elongation curve. The failure modes were noted. Statistical analysis Statistical analysis was performed using SPSS 21.0 (IBM, Armonk, NY, USA). We used the Kolmogorov-Smirnov test to determine whether the variables were normally distributed within the groups. Student's t test was used to compare the elongation, stiffness, and failure load among the three test groups. The significance level was set at P < 0.050. Results: Cyclic testing No failures occurred during cyclic testing. Table 1 reports the cyclic testing results (1000 cycles) in the three groups. The mean graft slippage values for the IS group, TTF group, and TTF + IS group were 1.37 ± 0.45, 1.98 ± 0.46, and 1.39 ± 0.50 mm, respectively. There were no significant differences in slippage among the three groups. Cyclic testing and load-to-failure testing in the three groups. Data are presented as mean ± standard deviation. ∗IS vs. TTF; †IS vs. TTF + IS; ‡TTF vs. TTF + IS. IS: Interference screw; TTF: Transtibial tubercle fixation. Load-to-failure testing The ultimate failure load in the TTF + IS group was significantly higher than those in the IS group and the TTF group (876.34 ± 58.78 N in the TTF + IS group vs. 556.49 ± 65.33 N in the IS group [P < 0.001] and 660.92 ± 77.74 N in the TTF group [P < 0.001]). The ultimate failure load in the TTF group was also significantly higher than that in the IS group (660.92 ± 77.74 N vs. 556.49 ± 65.33 N; P = 0.001). The stiffness in the TTF group was significantly lower than those in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/mm in the IS group [P < 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in the mean stiffness was found between the IS group and the TTF + IS group [Table 1]. Discussion: The principal finding of our study was that the surgical technique for transtibial PCL reconstruction with TTF using several high-strength sutures provided a higher ultimate failure load than IS alone or TTF alone during PCL reconstruction on the tibial side in a porcine model. In the cyclic testing study, there were no significant differences in the slippage between the IS group and the TTF + IS group. In the load-to-failure testing, the TTF + IS group had a higher ultimate failure load than the IS group and the TTF group. The stiffness in the TTF group was significantly lower than that in the IS group and the TTF + IS group. No significant difference in mean stiffness was found between the IS group and the TTF + IS group (P = 0.127). Recent studies have shown that the transtibial technique or tibial inlay technique can improve the stability of the knee in PCL-reconstructed knees.[1,13–17,19,20,22–27] However, the optimal PCL reconstruction technique has yet to be determined because PCL reconstruction has not had the same success in restoring knee stability as ACL reconstruction.
Many authors have pointed out that graft fixation techniques and graft fixation levels are critical factors for successful PCL reconstruction using hamstring tendon grafts.[3] Some biomechanical studies suggest that supplementary tibial fixation for ACL reconstruction may be beneficial,[20,28] showing that supplementary fixation with staple or push-lock screws increases the ultimate failure load compared with interference fixation alone.[28] Multiple strands of high-strength sutures can theoretically provide a higher ultimate failure load. The transosseous suture fixation technique with high-strength sutures has been used for the repair of rotator cuff, patellar tendon, and quadriceps tendon ruptures,[11,18,29–35] and we used transosseous suture fixation with high-strength sutures for PCL reconstruction in this biomechanical study. A similar technical note has been published for TTF without hardware in ACL and PCL reconstruction.[13] Our study is the first to compare the biomechanics of supplementary TTF using several high-strength sutures with those of IS in transtibial PCL reconstruction. Regarding TTF + IS vs. IS alone or TTF alone, we found that supplementary TTF provided a higher ultimate failure load than IS alone or TTF alone for PCL graft-to-tibial tunnel fixation. This procedure may theoretically be recommended for supplementary fixation in cases of revision surgery with tunnel widening and graft-tunnel mismatch in PCL reconstruction. The supplementary transtibial tubercle technique does not require implants and is therefore much less expensive than other techniques, such as suspension buttons, screws or washers, and metallic anchors. Considering the decreased biomechanical properties with TTF alone (relatively lower stiffness), it might not be recommended for PCL fixation alone in a clinical setting.
A longer effective length of reconstructed graft could lead to increased overall graft stiffness and decreased total graft deformation.[4] We chose to avoid the use of IS alone or TTF alone for PCL graft fixation because of the decreased pullout strength and/or decreased stiffness, which might have led to decreased overall graft stiffness and increased total graft deformation. There were some limitations to our study. First, this biomechanical study focused only on time-zero outcomes. Second, we could not study the healing of the graft to bone over time. Third, human bone was not used because the availability of young human bone and hamstring tendons for biomechanical testing is limited; the porcine model may not reflect the actual situation in human surgical repair, which limits the value of the study. However, porcine specimens are commonly used in biomechanical studies because of their structural and material similarity to young human bone and hamstring tendons.[20,21] Fourth, there are no comparative clinical outcome data for this technique, although it is a relatively uncomplicated and inexpensive option for knee ligament reconstruction surgeons. Finally, future clinical studies are needed to provide evidence that this technique restores knee function, motion, and stability. The results of this biomechanical study suggest that supplementary TTF + IS increased the ultimate failure loads compared with conventional IS alone. Conflicts of interest: None.
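The statistical analysis above compares the three groups pairwise with Student's t test at P < 0.050. The pooled-variance t statistic that underlies those comparisons can be sketched as follows; `student_t` is a hypothetical helper, and the group values are synthetic numbers placed around the reported ultimate-failure-load means, not the study's raw data.

```python
import math

def student_t(group_a, group_b):
    """Two-sample Student's t statistic with pooled variance,
    as used for pairwise group comparisons (df = n1 + n2 - 2)."""
    n1, n2 = len(group_a), len(group_b)
    m1 = sum(group_a) / n1
    m2 = sum(group_b) / n2
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)  # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Illustrative data only: two groups of n = 12, centered near the
# reported TTF + IS and IS ultimate failure loads.
ttf_is = [880, 870, 850, 900, 820, 910, 875, 860, 890, 845, 905, 865]
is_only = [560, 540, 580, 500, 610, 555, 565, 530, 590, 545, 600, 520]
t, df = student_t(ttf_is, is_only)
print(df)  # 22 degrees of freedom for two groups of 12
```

The p-value would then come from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats.t.sf`); with a between-group difference this large relative to the spread, the result is far below the 0.050 threshold.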
Background: Multiple techniques are commonly used for posterior cruciate ligament (PCL) reconstruction. However, the optimal method of tibial fixation in PCL reconstruction after PCL tears remains debatable. The purpose of this study was to compare the biomechanical properties of three different tibial fixation procedures for transtibial single-bundle PCL reconstruction. Methods: Thirty-six porcine tibias and porcine extensor tendons were randomized into three fixation study groups: the interference screw fixation (IS) group, the transtibial tubercle fixation (TTF) group, and the TTF + IS group (n = 12 in each group). The structural properties of the three fixation groups were tested under cyclic loading and load-to-failure. The slippage after the cyclic loading test and the stiffness and ultimate failure load after load-to-failure testing were recorded. Results: After 1000 cycles of cyclic testing, no significant difference was observed in graft slippage among the three groups. For load-to-failure testing, the TTF + IS group showed a higher ultimate failure load than the TTF group and the IS group (876.34 ± 58.78 N vs. 660.92 ± 77.74 N [P < 0.001] vs. 556.49 ± 65.33 N [P < 0.001]). The stiffness in the TTF group was significantly lower than that in the IS group and the TTF + IS group (92.77 ± 20.16 N/mm in the TTF group vs. 120.27 ± 15.66 N/mm in the IS group [P = 0.001] and 131.79 ± 17.95 N/mm in the TTF + IS group [P < 0.001]). No significant difference in mean stiffness was found between the IS group and the TTF + IS group (P = 0.127). Conclusions: In this biomechanical study, supplementary fixation with transtibial tubercle sutures increased the ultimate failure load during load-to-failure testing for PCL reconstruction.
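The between-group comparisons reported above (ultimate failure load and stiffness, tested with P values across three groups of n = 12) follow standard one-way ANOVA logic. As a minimal, hedged sketch — using made-up per-specimen values clustered near the reported group means, not the study's raw data — the F statistic can be computed in pure Python:

```python
# One-way ANOVA F statistic, sketched in pure Python on MADE-UP per-specimen
# failure loads (N) clustered near the reported group means -- not study data.
from statistics import mean

groups = {
    "IS":     [550.0, 560.0, 545.0, 570.0, 556.0, 548.0,
               562.0, 558.0, 551.0, 566.0, 540.0, 555.0],
    "TTF":    [660.0, 655.0, 670.0, 648.0, 665.0, 658.0,
               672.0, 650.0, 663.0, 659.0, 668.0, 661.0],
    "TTF+IS": [875.0, 880.0, 870.0, 885.0, 872.0, 878.0,
               881.0, 869.0, 876.0, 883.0, 871.0, 877.0],
}

def one_way_anova_f(samples):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(samples)                              # number of groups
    n = sum(len(s) for s in samples)              # total observations
    grand = sum(x for s in samples for x in s) / n
    ss_between = sum(len(s) * (mean(s) - grand) ** 2 for s in samples)
    ss_within = sum((x - mean(s)) ** 2 for s in samples for x in s)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_anova_f(list(groups.values()))
print(f"F(2, 33) = {f:.1f}")  # a very large F here corresponds to P < 0.001
```

With group means this far apart relative to the within-group spread, the F statistic is enormous, consistent with the P < 0.001 differences reported in the abstract.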
null
null
5,620
392
[ 766, 265, 69, 136, 227 ]
10
[ "ttf", "group", "graft", "testing", "ttf group", "fixation", "tibial", "load", "tunnel", "transtibial" ]
[ "ligament reconstruction", "transtibial pcl reconstruction", "pcl graft tibial", "cruciate ligament pcl", "pcl reconstruction hamstring" ]
null
null
[CONTENT] Posterior cruciate ligament | Transtibial technique | Biomechanics | Interference screw | High-strength sutures [SUMMARY]
[CONTENT] Posterior cruciate ligament | Transtibial technique | Biomechanics | Interference screw | High-strength sutures [SUMMARY]
[CONTENT] Posterior cruciate ligament | Transtibial technique | Biomechanics | Interference screw | High-strength sutures [SUMMARY]
null
[CONTENT] Posterior cruciate ligament | Transtibial technique | Biomechanics | Interference screw | High-strength sutures [SUMMARY]
null
[CONTENT] Animals | Biomechanical Phenomena | Posterior Cruciate Ligament Reconstruction | Sutures | Swine | Tendons | Tibia [SUMMARY]
[CONTENT] Animals | Biomechanical Phenomena | Posterior Cruciate Ligament Reconstruction | Sutures | Swine | Tendons | Tibia [SUMMARY]
[CONTENT] Animals | Biomechanical Phenomena | Posterior Cruciate Ligament Reconstruction | Sutures | Swine | Tendons | Tibia [SUMMARY]
null
[CONTENT] Animals | Biomechanical Phenomena | Posterior Cruciate Ligament Reconstruction | Sutures | Swine | Tendons | Tibia [SUMMARY]
null
[CONTENT] ligament reconstruction | transtibial pcl reconstruction | pcl graft tibial | cruciate ligament pcl | pcl reconstruction hamstring [SUMMARY]
[CONTENT] ligament reconstruction | transtibial pcl reconstruction | pcl graft tibial | cruciate ligament pcl | pcl reconstruction hamstring [SUMMARY]
[CONTENT] ligament reconstruction | transtibial pcl reconstruction | pcl graft tibial | cruciate ligament pcl | pcl reconstruction hamstring [SUMMARY]
null
[CONTENT] ligament reconstruction | transtibial pcl reconstruction | pcl graft tibial | cruciate ligament pcl | pcl reconstruction hamstring [SUMMARY]
null
[CONTENT] ttf | group | graft | testing | ttf group | fixation | tibial | load | tunnel | transtibial [SUMMARY]
[CONTENT] ttf | group | graft | testing | ttf group | fixation | tibial | load | tunnel | transtibial [SUMMARY]
[CONTENT] ttf | group | graft | testing | ttf group | fixation | tibial | load | tunnel | transtibial [SUMMARY]
null
[CONTENT] ttf | group | graft | testing | ttf group | fixation | tibial | load | tunnel | transtibial [SUMMARY]
null
[CONTENT] reconstruction | pcl reconstruction | technique | pcl | tibial | fixation | transtibial | screws | tibial fixation | strength [SUMMARY]
[CONTENT] tibia | graft | tibial | ttf | tunnel | sutures | testing | fixed | porcine | tibial tunnel [SUMMARY]
[CONTENT] group | ttf | ttf group | 001 | vs | group 001 | group ttf group | group ttf | cyclic testing | 92 [SUMMARY]
null
[CONTENT] ttf | group | ttf group | graft | testing | tibial | fixation | reconstruction | groups | load [SUMMARY]
null
[CONTENT] PCL ||| PCL | PCL ||| three | PCL [SUMMARY]
[CONTENT] Thirty-six | three | TTF | TTF | 12 ||| three ||| [SUMMARY]
[CONTENT] 1000 | three ||| TTF | TTF | 876.34 ± | 58.78 | 660.92 | 77.74 ||| 556.49 ||| 65.33 ||| ||| TTF | TTF | 92.77 ± | 20.16 | TTF | 120.27  | 15.66 | N ||| 0.001 | 131.79 ± | 17.95 N/mm | TTF ||| TTF | 0.127 [SUMMARY]
null
[CONTENT] PCL ||| PCL | PCL ||| three | PCL ||| Thirty-six | three | TTF | TTF | 12 ||| three ||| ||| 1000 | three ||| TTF | TTF | 876.34 ± | 58.78 | 660.92 | 77.74 ||| 556.49 ||| 65.33 ||| ||| TTF | TTF | 92.77 ± | 20.16 | TTF | 120.27  | 15.66 | N ||| 0.001 | 131.79 ± | 17.95 N/mm | TTF ||| TTF | 0.127 ||| PCL [SUMMARY]
null
Evaluation of Acute and Sub-Acute Toxicity of Aqueous Extracts of Artemisia afra
33883843
The majority of the population relies on traditional medicine as a source of healthcare. Artemisia afra is a plant traditionally used for its medicinal value, including the treatment of malaria, in many parts of the world. Currently, it is also attracting attention because of a claim that a related species, Artemisia annua, is a remedy for the COVID-19 pandemic. The aim of the present study was to investigate the toxic effects of A. afra on the brain, heart and suprarenal glands in mice aged 8-12 weeks and weighing 25-30g.
BACKGROUND
Leaves of A. afra were collected from Bale National Park, dried under shade, crushed into powder and soaked in distilled water to yield an aqueous extract for oral administration. For the acute toxicity study, seven treated groups and one control group, with 3 female mice each, were used. They were given a single dose of 200mg/kg, 700mg/kg, 1200mg/kg, 2200mg/kg, 3200mg/kg, 4200mg/kg or 5000mg/kg b/wt of the extract. For the sub-acute toxicity study, two treated groups and one control group, with 5 female and 5 male mice each, were used. They were treated daily with 600mg/kg or 1800mg/kg b/wt of the extract.
METHODS
LD50 was found to be greater than 5000mg/kg, indicating that the plant is relatively safe. In the sub-acute study, no signs of toxicity were observed in any treatment group. On microscopic examination of the brain, heart and suprarenal glands, no sign of cellular injury was observed.
RESULTS
The findings of this study suggest that the leaf extract of A. afra is relatively safe in mice.
CONCLUSION
[ "Animals", "Artemisia", "Brain", "Female", "Male", "Mice", "Plant Extracts", "Plant Leaves", "Water" ]
8047245
Introduction
The genus Artemisia, which belongs to the family Asteraceae, contains more than 400 species and is widely used in many parts of the world, either alone or in combination with other plants, as an herbal remedy for a variety of human ailments. A. afra is a medium-size perennial herb, rarely exceeding 2m high. It is found in Ethiopia, Kenya, Zimbabwe, Malawi, Angola and South Africa (1). Various parts of the plant contain volatile oil, terpenoids, coumarins, acetylenes, scopoletin and flavonoids (2). The volatile oil contains 1,8-cineole, α-thujone, β-thujone, camphor and borneol, and has definite anti-microbial and anti-oxidative properties. Thujone is known to cause neurotoxicity with different neurological symptoms like dizziness, tremor, convulsion and hallucination (3). Artemisia species are most commonly used in traditional folk medicine, notably in the treatment of malaria. In Ethiopia, A. afra is traditionally used in combination with other herbals as a remedy against headache, eye diseases, ringworm, haematuria and stabbing pain (4). It is also used to treat infertility, febrile illness, common cold, spirit and epilepsy (5). A. afra has recently attracted worldwide attention of researchers for its possible use in the treatment of chronic diseases like diabetes, cardiovascular diseases and cancer. Aqueous extract of A. afra has cardioprotective, antihyperlipidemic, antioxidant and antihypertensive activities (6). Its related species, A. annua, has recently been claimed in Madagascar to be a remedy for the current COVID-19 pandemic. Ethanol extract of A. afra was observed to arrest the cell cycle of cancer cells (7). Aqueous extract of A. afra was shown to decrease glucose levels to near the normal range and to have antioxidant activity in diabetic rats (8). It was also reported to have bronchodilation and anti-inflammatory activities (9). 
Whilst evidence-based studies indicating the efficacy of herbal remedies are still being unveiled, increasing evidence regarding adverse effects of herbal medicine has highlighted the demand for toxicological studies of herbal products (10). This could also be true for A. afra, as only limited studies have investigated its toxicity. Acute oral administration of aqueous extract of A. afra to mice was non-toxic, with an LD50 of 8960mg/kg (2). The same study showed that chronic oral administration of this extract in rats was relatively safe, with minor intermittent diarrhea, salivation and partial hypo-activity. Acute and sub-chronic toxicity studies on the aqueous leaf extract of A. afra have also shown no significant sign of toxicity on liver, kidney and some blood parameters in Wistar rats (11). However, studies on the effect of the plant extract on other vital organs are lacking. The aim of the present study was to investigate toxic effects of A. afra on brain, heart and suprarenal glands in albino mice.
null
null
Results
Acute toxicity study: The single oral administration of aqueous extract of A. afra in mice did not cause any mortality, even at the highest dose of 5000mg/kg. No signs of toxicity were observed at the four lower doses, i.e., 200mg/kg, 700mg/kg, 1200mg/kg and 2200mg/kg. However, signs of mild toxicity, such as anxiety and piloerection, were seen at doses of 3200mg/kg, 4200mg/kg and 5000mg/kg. These symptoms gradually disappeared during the two-week observation period. Moreover, there was a gradual increase in the body weight of treated mice, though the differences among groups were not statistically significant. Sub-acute Toxicity Effects of A. afra aqueous leaves extract on behavior and body weight of mice: During the 28 days of the sub-acute toxicity study, mice treated orally with both low and high doses of the extract showed no noticeable change in their general behavior compared to the control group. There was also no significant difference between the two sexes, except that the males, in both treated and non-treated control groups, were more aggressive than the females (Table 1). Moreover, there were no toxicity-related deaths throughout the study period. Neurobehavioral neurotoxicity evaluation of male and female mice treated with aqueous extract of A. afra as compared to the controls during four consecutive weeks of observation. “+” means behavioral response present; “-” means behavioral response absent. n=5. Gradual weight gain was observed in both treated and control groups, though it was not statistically significant during the study period, and no significant difference was observed between male and female groups (Figure 1). Mean body weight growth between female (A) and male (B) mice treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra as compared to the controls. Values are mean ± SEM. n=5. Effects of A. 
afra aqueous leaves extract on gross pathology and relative organ weight: On gross examination of the brain, heart and suprarenal glands, no abnormal gross pathological findings, such as dark or white spots or necrosis, were observed in any of the treatment or control groups. There were no significant absolute or relative organ weight changes between treated and control groups in either sex (Table 2). Absolute and relative organ weights of mice orally treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra as compared to the controls. Relative organ weight is calculated using the weight of the brain as a denominator. Each value is expressed as mean ± SEM, n=5 for each group. Effects of aqueous leaves extract on histology of cerebral cortex, heart and suprarenal glands: Microscopic examination of the cerebral cortex of mice treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra indicated no structural disturbance compared to the controls. No signs of individual neuron death or focal lesions, such as pyknosis, karyorrhexis, karyolysis or eosinophilic cytoplasm, were observed in either male or female mice. In addition, no signs of inflammation, such as lymphocytic infiltration, were observed in male and female mice (Figure 2). The cytoarchitecture of the cerebral cortex was identical between treated and control mice. Photomicrographs of H & E stained sections of cerebral cortex from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A); I = Molecular, II/III = Supragranular Pyramidal, IV = Granular, V = Deep Pyramidal and VI = Polymorphic (Multiform) layers. Magnifications X200. No architectural difference was observed on microscopic examination of the heart of treated and control female and male mice. 
In mice treated with both 600mg/kg and 1800mg/kg of the extract, no signs of myocardial cell injury, such as pyknosis, karyorrhexis, karyolysis, vacuolation, focal necrosis or fibrosis, were observed. Furthermore, no sign of inflammation (leukocytic infiltration) was observed in the female and male treated mice groups (Figure 3). Photomicrographs of H & E stained sections of heart from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A). Magnifications X300. On examination of the sections of the suprarenal glands, no signs of toxicity were observed in either the cortical or medullary regions. The architecture of both the cortex and medulla of mice treated with 600mg/kg and 1800mg/kg of the extract was identical to that of the control. In both treated and control mice, two cortical zones were observed, with some mice showing an additional zone (X-zone). However, no cortical lesions, such as degeneration (vacuolar or granular), necrosis or hemorrhage, were observed (Figure 4). Photomicrographs of H & E stained sections of suprarenal glands from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A); G = zona glomerulosa, F = zona fasciculata, X = X-zone and M = adrenal medulla; Magnifications X200
null
null
[]
[]
[]
[ "Introduction", "Materials and Methods", "Results", "Discussion" ]
[ "The genus Artemisia, which belongs to the family Asteraceae, contains more than 400 species and is widely used in many parts of the world, either alone or in combination with other plants, as an herbal remedy for a variety of human ailments. A. afra is a medium-size perennial herb, rarely exceeding 2m high. It is found in Ethiopia, Kenya, Zimbabwe, Malawi, Angola and South Africa (1).\nVarious parts of the plant contain volatile oil, terpenoids, coumarins, acetylenes, scopoletin and flavonoids (2). The volatile oil contains 1,8-cineole, α-thujone, β-thujone, camphor and borneol, and has definite anti-microbial and anti-oxidative properties. Thujone is known to cause neurotoxicity with different neurological symptoms like dizziness, tremor, convulsion and hallucination (3).\nArtemisia species are most commonly used in traditional folk medicine, notably in the treatment of malaria. In Ethiopia, A. afra is traditionally used in combination with other herbals as a remedy against headache, eye diseases, ringworm, haematuria and stabbing pain (4). It is also used to treat infertility, febrile illness, common cold, spirit and epilepsy (5).\nA. afra has recently attracted worldwide attention of researchers for its possible use in the treatment of chronic diseases like diabetes, cardiovascular diseases and cancer. Aqueous extract of A. afra has cardioprotective, antihyperlipidemic, antioxidant and antihypertensive activities (6). Its related species, A. annua, has recently been claimed in Madagascar to be a remedy for the current COVID-19 pandemic. Ethanol extract of A. afra was observed to arrest the cell cycle of cancer cells (7). Aqueous extract of A. afra was shown to decrease glucose levels to near the normal range and to have antioxidant activity in diabetic rats (8). 
It was also reported that it has bronchodilation and anti-inflammatory activities (9).\nWhilst evidence-based studies indicating the efficacy of herbal remedies are still being unveiled, increasing evidence regarding adverse effects of herbal medicine has highlighted the demand for toxicological studies of herbal products (10). This could also be true for A. afra, as only limited studies have investigated its toxicity. Acute oral administration of aqueous extract of A. afra to mice was non-toxic, with an LD50 of 8960mg/kg (2). The same study showed that chronic oral administration of this extract in rats was relatively safe, with minor intermittent diarrhea, salivation and partial hypo-activity. Acute and sub-chronic toxicity studies on the aqueous leaf extract of A. afra have also shown no significant sign of toxicity on liver, kidney and some blood parameters in Wistar rats (11). However, studies on the effect of the plant extract on other vital organs are lacking. The aim of the present study was to investigate toxic effects of A. afra on brain, heart and suprarenal glands in albino mice.", "The study was a laboratory-based experiment conducted at Addis Ababa University (AAU), College of Health Sciences, Departments of Anatomy and Physiology, and the Ethiopian Public Health Institute (EPHI) from September 2014 to July 2015.\nCollection of plant materials: A. afra was collected from Bale National Park, 400km southeast of Addis Ababa in Oromia regional state, during the month of September 2014. The plant was identified by a taxonomist, and a few samples were deposited at the National Herbarium in the College of Natural and Computational Sciences, Addis Ababa University (AAU), with a voucher specimen number of 392/NKI/PHARM.\nPreparation of aqueous leaf extract: The plant leaves were cleaned, dried in shade, ground to powder (400g) and macerated with distilled water for 2 hours and 30 minutes with intermittent agitation by an orbital shaker. 
The supernatant part of agitated materials was decanted and filtered with 0.1 mm2 mesh gauze from the un-dissolved portion of the plant. The filtrate was freeze-dried to give 43g (10.75% yield) of crude extract.\nExperimental animals: The animals used in this study were bred and reared at the animal house of EPHI and transported to AAU, College of Health Sciences, Department of Physiology. Experiments were conducted on 54 healthy adult male and female mice aged 8–12 weeks and weighing 25–30g. Grouping of mice was done randomly. The animals were kept in separate polycarbonate cages and provided with bedding of clean paddy husk. The mice were acclimatized to laboratory conditions for one week prior to experimentation to minimize nonspecific stress (12).\nThe test substance was administered in single and repeated doses by gavages for acute and sub-acute toxicity studies, respectively. For the acute toxicity study, prior to dosing food but not water was withheld for 3 hours. Following the period of fasting, the animals were weighed, and the dose was calculated according to the body weight for each animal. The test substance dissolved in distilled water was then administered. After the substance was administered, food was withheld for 2 hours (12).\nAcute toxicity study and LD50 determination: Acute toxicity test was done as per the OECD guideline for testing of chemicals 423 (12). It was started with low initial single dose of 200mg/kg. This dose was selected in reference to a previous efficacy study of A. afra which showed significant anti-diabetic activity at 200mg/kg in diabetic rats (13). Additional six higher single doses of 700mg/kg, 1200mg/kg, 2200mg/kg, 3200mg/kg, 4200mg/kg and 5000mg/kg were administered. 
A total of eight groups of mice were used (seven treated and one control) each consisting of 3 female adult albino Swiss mice.\nThe treated and control groups were observed continuously for 3hrs and then every 24hrs for the next 14 days; and any signs of toxicity and mortality were recorded. The presence or absence of toxic signs like increased motor activity, tremors, ptosis, lacrimation, exophthalmos, piloerection, salivation and depression were observed during the study period. The body weight of each mouse was recorded at the 7th and 14th days. The differences in the body weight were also recorded.\nLethal doses for fifty percent of the mice (LD50) for aqueous leaf extracts of A. afra were determined using Protocol for LD50 determination (12). On the 14th day of treatment, all mice were sacrificed with anesthetic diethyl ether. Comprehensive gross pathological observations were carried out on the brain, heart and suprarenal glands to check for any signs of abnormality and presence of lesions.\nSub-acute toxicity study: The study was carried out using 15 female and 15 male mice which were grouped into six groups, 3 groups for the females and another three groups for the males. For both sexes, two groups were given 600mg/kg (low dose) and 1800mg/kg (high dose) of the extract for 28 days, while another group was given vehicle (distilled water). The low dose was selected in reference to an efficacy study of A. afra in treatment of malaria which showed 400 mg/kg as effective dose (14). However, this dose was modified to 600 mg/kg based on clinical observation of acute toxicity studies (15). The volume of the extract and distilled water was calculated as 1.5ml/100g and given at constant time between 9:00–10:00 am once a day.\nIndividual weights of mice were taken shortly before the test substance was administered and weekly thereafter using digital electronic balance. Weight changes were also calculated and recorded. 
Mice were observed individually once during the first 30 minutes after dosing, four times during the first 4 hours at one-hour intervals, and daily thereafter for 28 days. Clinical observations for morbidity and mortality were recorded once a day (15).\nFor screening possible neurobehavioral toxicity, in-cage and open-field observations were done daily. A functional observational battery was employed to assess a wide range of neurobiological functions, including sensory, motor and autonomic components. In the present study, a functional observational battery comprising a series of assessments designed to measure motor, sensory and autonomic function was used (16). These included posture, abnormal motor activity (like tremor, fasciculation and convulsion), ease of removal from cage, reactivity to handling, lacrimation, anorexia and salivation.\nAt the end of the test, animals were weighed and then humanely sacrificed under diethyl ether anesthesia (15). Organs of interest, namely the brain, heart and suprarenal glands, were carefully dissected out and weighed. The relative organ weights of heart and brain were calculated using the animal weight as a denominator, while that of the suprarenal glands was calculated using the brain as a denominator.\nGross pathological examinations for toxicological lesions, including the presence of dark and white spots and necrosis, were done. For the brain, frontal lobes of the cerebrum were taken by coronal section at the level of Bregma. Both the right and left suprarenal glands were taken as samples. The heart was sectioned longitudinally through the ventricles and atria from the base to the apex, and the left atrium and ventricle were taken for histopathological investigation. Sample tissues were taken and placed in a labeled test tube containing 10% buffered formalin. Fixed tissues were dehydrated and cleared, respectively, in an ascending graded series of ethanol and xylene, infiltrated with molten paraffin wax and embedded in paraffin blocks. 
These were sectioned at a thickness of 5µm using a Leica rotary microtome (LEICA RM 2125 RT, Germany). Ribbons of tissue sections were gently collected and placed onto the surface of a water bath heated to 40°C. They were then collected onto gelatin-coated glass slides and placed in an oven overnight. Sections were deparaffinized in xylene, hydrated through a descending series of alcohols, and stained with Harris' hematoxylin. Slides were differentiated with 1% acid alcohol and counterstained in eosin. Stained sections were dehydrated with increasing concentrations of ethyl alcohol, cleared in xylene, mounted using DPX and covered with cover slips.\nStained tissue sections of brain, heart and suprarenal glands were carefully examined under a binocular compound light microscope (LEICA DM 750, Germany). Tissue sections from the treated groups were examined blindly by a pathologist for any evidence of histopathological changes with respect to those of the controls. After examination, photomicrographs of selected sample sections of brain, heart and suprarenal glands from both treated and control mice were taken under a x20 objective using an automated built-in digital photo-camera (EVOS XL, USA).\nData processing and analysis: All quantitative data were organized and analyzed using Statistical Package for Social Science (SPSS) version 21 statistical software. The values of body and organ weight, including relative organ weights, were analyzed, and the results were expressed as mean ± SEM (standard error of mean). Differences between treated and control groups were compared using one-way ANOVA. P-values <0.05 were considered statistically significant.\nEthical approval: Ethical approval was obtained from the Research Review Committee of the Department of Anatomy, College of Health Sciences, AAU.", "Acute toxicity study: The single oral administration of aqueous extract of A. afra in mice did not cause any mortality, even at the highest dose of 5000mg/kg. 
No signs of toxicity were observed at the four lower doses, i.e., 200mg/kg, 700mg/kg, 1200mg/kg and 2200mg/kg. However, signs of mild toxicity, such as anxiety and piloerection, were seen at doses of 3200mg/kg, 4200mg/kg and 5000mg/kg. These symptoms gradually disappeared during the two-week observation period. Moreover, there was a gradual increase in the body weight of treated mice, though the differences among groups were not statistically significant.\n\nSub-acute Toxicity\n\nEffects of A. afra aqueous leaves extract on behavior and body weight of mice: During the 28 days of the sub-acute toxicity study, mice treated orally with both low and high doses of the extract showed no noticeable change in their general behavior compared to the control group. There was also no significant difference between the two sexes, except that the males, in both treated and non-treated control groups, were more aggressive than the females (Table 1). Moreover, there were no toxicity-related deaths throughout the study period.\nNeurobehavioral neurotoxicity evaluation of male and female mice treated with aqueous extract of A. afra as compared to the controls during four consecutive weeks of observation\n“+” means behavioral response present. “-” means behavioral response absent. n=5\nGradual weight gain was observed in both treated and control groups, though it was not statistically significant during the study period, and no significant difference was observed between male and female groups (Figure 1).\nMean body weight growth between female (A) and male (B) mice treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra as compared to the controls. Values are mean ± SEM. n=5.\nEffects of A. afra aqueous leaves extract on gross pathology and relative organ weight: On gross examination of the brain, heart and suprarenal glands, no abnormal gross pathological findings, such as dark or white spots or necrosis, were observed in any of the treatment or control groups. 
There were no significant absolute or relative organ weight changes between treated and control groups in either sex (Table 2).\nAbsolute and relative organ weights of mice orally treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra as compared to the controls\nRelative organ weight is calculated using the weight of the brain as a denominator. Each value is expressed as mean ± SEM, n=5 for each group\nEffects of aqueous leaves extract on histology of cerebral cortex, heart and suprarenal glands: Microscopic examination of the cerebral cortex of mice treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra indicated no structural disturbance compared to the controls. No signs of individual neuron death or focal lesions, such as pyknosis, karyorrhexis, karyolysis or eosinophilic cytoplasm, were observed in either male or female mice. In addition, no signs of inflammation, such as lymphocytic infiltration, were observed in male and female mice (Figure 2). The cytoarchitecture of the cerebral cortex was identical between treated and control mice.\nPhotomicrographs of H & E stained sections of cerebral cortex from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A); I = Molecular, II/III = Supragranular Pyramidal, IV = Granular, V = Deep Pyramidal and VI = Polymorphic (Multiform) layers. Magnifications X200\nNo architectural difference was observed on microscopic examination of the heart of treated and control female and male mice. In mice treated with both 600mg/kg and 1800mg/kg of the extract, no signs of myocardial cell injury, such as pyknosis, karyorrhexis, karyolysis, vacuolation, focal necrosis or fibrosis, were observed. Furthermore, no sign of inflammation (leukocytic infiltration) was observed in the female and male treated mice groups (Figure 3).\nPhotomicrographs of H & E stained sections of heart from female mice treated with aqueous leaves extract of A. 
afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A). Magnifications X300\nOn examination of the sections of the suprarenal glands, no signs of toxicity were observed in either the cortical or medullary regions. The architecture of both the cortex and medulla of mice treated with 600mg/kg and 1800mg/kg of the extract was identical to that of the control. In both treated and control mice, two cortical zones were observed, with some mice showing an additional zone (X-zone). However, no cortical lesions, such as degeneration (vacuolar or granular), necrosis or hemorrhage, were observed (Figure 4).\nPhotomicrographs of H & E stained sections of suprarenal glands from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A); G = zona glomerulosa, F = zona fasciculata, X = X-zone and M = adrenal medulla; Magnifications X200", "Evaluation of the pathological alterations induced in laboratory animals by novel treatment agents represents one of the safety assessments prior to conducting clinical trials. This preliminary assessment makes a major contribution to the development of new treatments for humans and animals. Acute toxicity tests provide data on the relative toxicity likely to arise from a single or brief exposure; they are an initial assessment of the toxic manifestations induced by the test substance(s) (17).\nThis study showed that the LD50 of the aqueous extract of A. afra was above 5000mg/kg, which is grouped under category 5 or unclassified according to the Globally Harmonized Classification System (13). This is in agreement with the findings of previous experiments in mice (2) and rats (11). Oral administration of the extract up to a dose of 2200 mg/kg did not cause any alteration in the behavioral pattern of the mice as compared to the control group. However, mild signs of toxicity were observed at the three higher doses. 
There was no significant difference in weight gain between treated and control groups. Gross examination of heart, suprarenal glands and brain revealed no treatment-related gross findings at necropsy. The results of the acute toxicity test in this study, therefore, indicate that aqueous leaf extract of A. afra is tolerated up to the limit of 5000mg/kg body weight as per OECD guidelines (12).\nSub-acute toxicity study provides information on possible health hazards likely to occur in the nervous, immune and endocrine systems from repeated exposure over a relatively limited period of time (15). The present sub-acute toxicity study with oral aqueous extract of A. afra on body weight, brain, heart and suprarenal glands showed no significant behavioral changes between treated and control groups in both sexes. Changes in animals' body weights are usually used as an indicator of toxic effects of test substances (18). On the other hand, increase in animals' body weight could also be related to body fat accumulation rather than to toxicity (19). The non-significant increase in body weight observed in the present study might be attributed to fat accumulation in the body. Evaluation of organ-to-body weight and organ-to-brain weight ratios is also used to assess treatment-related effects in toxicological studies. For suprarenal glands, the organ-to-brain ratio is more predictive of suprarenal gland weight change than the organ-to-body weight ratio (20). No significant changes were observed in absolute and relative organ weights of all groups in the present study. These findings are in line with those of previous studies done with other species of the genus Artemisia (21).\nIn response to injury, a number of changes may occur in neurons and their processes (axons and dendrites), like shrinkage of the cell body, pyknosis of the nucleus, disappearance of the nucleolus and loss of Nissl substance with intense eosinophilia of the cytoplasm (red neurons) (22). 
In the histopathological examination of cerebral cortex, none of these morphological changes was observed, which was supported by the absence of abnormal behavior and motor activity on cage-side observation. This might suggest that the aqueous leaf extract of A. afra may not cause toxicity to the cerebral cortex of mice.\nBecause of its high oxidative metabolic need, the heart can be injured by any compound that interferes with its oxygen supply. On microscopic examination, myocardial damage can take the form of cytoplasmic alterations such as vacuolation, pyknotic nuclei, karyorrhexis and karyolysis in diffuse or localized areas (23). In the present study, none of these lesions was observed, which is in line with the results of the previous study done on the same plant with a different dose (2).\nSuprarenal glands are reported to be the most common endocrine organs associated with chemically induced lesions (24). These lesions are more frequent in the zona fasciculata than in the zona glomerulosa. The adrenal cortex produces steroid hormones with a 17-carbon nucleus following a series of hydroxylation reactions that occur in the mitochondria and endoplasmic reticulum. Toxic agents for the adrenal cortex include short-chain aliphatic compounds, lipidosis inducers, amphiphilic compounds, natural and synthetic steroids, and chemicals that affect hydroxylation (25). Morphologic evaluation of cortical lesions provides insight into the sites of inhibition of steroidogenesis. In the present study, no histological degenerative or proliferative lesions of the cortex or medulla were observed, suggesting the non-toxic effect of the plant on the glands, which is in agreement with that of the previous study done in another species of the same genus (21).\nIn conclusion, the present findings suggest that administration of 600mg/kg and 1800mg/kg of body weight of aqueous leaves extract of A. afra in mice for a month is safe." ]
[ "intro", "materials|methods", "results", "discussion" ]
[ "A. afra", "Toxicity study", "histopathology", "brain", "heart", "suprarenal glands", "Swiss albino mice" ]
Introduction: The genus Artemisia, which belongs to the family of Asteraceae, contains more than 400 species and is widely used in many parts of the world, either alone or in combination with other plants, as an herbal remedy for a variety of human ailments. A. afra is a medium-size perennial herb, rarely exceeding 2m in height. It is found in Ethiopia, Kenya, Zimbabwe, Malawi, Angola and South Africa (1). Various parts of the plant contain volatile oil, terpenoids, coumarins, acetylenes, scopoletin and flavonoids (2). The volatile oil contains 1,8-cineole, α-thujone, β-thujone, camphor and borneol, and has definite anti-microbial and anti-oxidative properties. Thujone is known to cause neurotoxicity with different neurological symptoms like dizziness, tremor, convulsion and hallucination (3). Artemisia species are most commonly used in traditional folk medicine, notably in the treatment of malaria. In Ethiopia, A. afra is traditionally used in combination with other herbals as a remedy against headache, eye diseases, ringworm, haematuria and stabbing pain (4). It is also used to treat infertility, febrile illness, common cold, spirit and epilepsy (5). A. afra has recently attracted the worldwide attention of researchers for its possible use in the treatment of chronic diseases like diabetes, cardiovascular diseases and cancer. Aqueous extract of A. afra has cardioprotective, antihyperlipidemic, antioxidant and antihypertensive activities (6). Its related species, A. annua, has recently been claimed in Madagascar to be a remedy for the current COVID-19 pandemic. Ethanol extract of A. afra was observed to arrest the cell cycle of cancer cells (7). Aqueous extract of A. afra was shown to decrease glucose levels to near the normal range and to have antioxidant activity in diabetic rats (8). It has also been reported to have bronchodilatory and anti-inflammatory activities (9). 
Whilst evidence-based studies indicating the efficacy of herbal remedies are still being unveiled, increasing evidence regarding adverse effects of herbal medicine has highlighted the demand for toxicological studies of herbal products (10). This could also be true for A. afra, as only limited studies have investigated its toxicity. Acute oral administration of aqueous extract of A. afra to mice was non-toxic, with an LD50 of 8960mg/kg (2). The same study showed that chronic oral administration of this extract in rats was relatively safe, with minor intermittent diarrhea, salivation and partial hypo-activity. Acute and sub-chronic toxicity studies on the aqueous leaf extract of A. afra have also shown no significant sign of toxicity on liver, kidney and some blood parameters in Wistar rats (11). However, studies on the effect of the plant extract on other vital organs are lacking. The aim of the present study was to investigate toxic effects of A. afra on brain, heart and suprarenal glands in albino mice. Materials and Methods: The study was a laboratory-based experiment conducted at Addis Ababa University (AAU), College of Health Sciences, Departments of Anatomy and Physiology, and the Ethiopian Public Health Institute (EPHI) from September 2014 to July 2015. Collection of plant materials: A. afra was collected from Bale National Park, 400km southeast of Addis Ababa in Oromia regional state, during the month of September 2014. The plant was identified by a taxonomist, and a few samples were deposited at the National Herbarium in the College of Natural and Computational Sciences, Addis Ababa University (AAU), with a voucher specimen number of 392/NKI/PHARM. Preparation of aqueous leaf extract: The plant leaves were cleaned, dried in shade, ground to powder (400g) and macerated with distilled water for 2hrs and 30 minutes with intermittent agitation by an orbital shaker. 
The supernatant part of the agitated material was decanted and filtered with 0.1 mm2 mesh gauze from the un-dissolved portion of the plant. The filtrate was freeze-dried to give 43g (10.75% yield) of crude extract. Experimental animals: The animals used in this study were bred and reared at the animal house of EPHI and transported to AAU, College of Health Sciences, Department of Physiology. Experiments were conducted on 54 healthy adult male and female mice aged 8–12 weeks and weighing 25–30g. Grouping of mice was done randomly. The animals were kept in separate polycarbonate cages and provided with bedding of clean paddy husk. The mice were acclimatized to laboratory conditions for one week prior to experimentation to minimize nonspecific stress (12). The test substance was administered in single and repeated doses by gavage for the acute and sub-acute toxicity studies, respectively. For the acute toxicity study, prior to dosing, food but not water was withheld for 3 hours. Following the period of fasting, the animals were weighed, and the dose was calculated according to the body weight of each animal. The test substance dissolved in distilled water was then administered. After the substance was administered, food was withheld for 2 hours (12). Acute toxicity study and LD50 determination: The acute toxicity test was done as per the OECD guideline for testing of chemicals 423 (12). It was started with a low initial single dose of 200mg/kg. This dose was selected in reference to a previous efficacy study of A. afra which showed significant anti-diabetic activity at 200mg/kg in diabetic rats (13). Six additional higher single doses of 700mg/kg, 1200mg/kg, 2200mg/kg, 3200mg/kg, 4200mg/kg and 5000mg/kg were administered. A total of eight groups of mice were used (seven treated and one control), each consisting of 3 female adult albino Swiss mice. 
The treated and control groups were observed continuously for 3hrs and then every 24hrs for the next 14 days, and any signs of toxicity and mortality were recorded. The presence or absence of toxic signs like increased motor activity, tremors, ptosis, lacrimation, exophthalmos, piloerection, salivation and depression was recorded during the study period. The body weight of each mouse was recorded on the 7th and 14th days. The differences in body weight were also recorded. The lethal dose for fifty percent of the mice (LD50) of the aqueous leaf extract of A. afra was determined using the protocol for LD50 determination (12). On the 14th day of treatment, all mice were sacrificed under diethyl ether anesthesia. Comprehensive gross pathological observations were carried out on the brain, heart and suprarenal glands to check for any signs of abnormality and presence of lesions. Sub-acute toxicity study: The study was carried out using 15 female and 15 male mice which were divided into six groups, three groups for the females and three groups for the males. For both sexes, two groups were given 600mg/kg (low dose) and 1800mg/kg (high dose) of the extract for 28 days, while the third group was given the vehicle (distilled water). The low dose was selected in reference to an efficacy study of A. afra in the treatment of malaria which showed 400mg/kg as the effective dose (14). However, this dose was modified to 600mg/kg based on clinical observations of the acute toxicity study (15). The volume of the extract and distilled water was calculated as 1.5ml/100g and given at a constant time between 9:00–10:00 am once a day. Individual weights of mice were taken shortly before the test substance was administered and weekly thereafter using a digital electronic balance. Weight changes were also calculated and recorded. Mice were observed individually once during the first 30 minutes after dosing, four times during the first 4 hours at one-hour intervals, and daily thereafter for 28 days. 
Clinical observations for morbidity and mortality were recorded once a day (15). For screening possible neurobehavioral toxicity, in-cage and open-field observations were done daily. A functional observational battery, comprising a series of assessments designed to measure motor, sensory and autonomic function, was employed to assess a wide range of neurobiological functions (16). These included posture, abnormal motor activity (like tremor, fasciculation and convulsion), ease of removal from cage, reactivity to handling, lacrimation, anorexia and salivation. At the end of the test, animals were weighed and then humanely sacrificed under diethyl ether anesthesia (15). Organs of interest, namely the brain, heart and suprarenal glands, were carefully dissected out and weighed. The relative organ weights of heart and brain were calculated using the animal weight as a denominator, while that of the suprarenal glands was calculated using the brain weight as a denominator. Gross pathological examinations for toxicological lesions, including the presence of dark and white spots and necrosis, were done. For the brain, frontal lobes of cerebrum were taken by coronal section at the level of Bregma. Both the right and left suprarenal glands were taken as samples. The heart was sectioned longitudinally through the ventricles and atria from the base to the apex, and the left atrium and ventricle were taken for histopathological investigation. Sample tissues were taken and placed in a labeled test tube containing 10% buffered formalin. Fixed tissues were dehydrated and cleared, respectively, in an ascending graded series of ethanol and xylene, infiltrated with molten paraffin wax and embedded in paraffin blocks. These were sectioned at a thickness of 5µm using a Leica rotary microtome (LEICA RM 2125 RT, Germany). 
Ribbons of tissue sections were gently collected and placed onto the surface of a water bath heated to 40°C. They were then collected onto gelatin-coated glass slides and placed in an oven overnight. Sections were deparaffinized with xylene, hydrated through a descending series of alcohols, and stained with Harris' hematoxylin. Slides were differentiated with 1% acid alcohol and counterstained with eosin. Stained sections were dehydrated with increasing concentrations of ethyl alcohol, cleared in xylene, mounted using DPX and covered with cover slips. Stained tissue sections of brain, heart and suprarenal glands were carefully examined under a binocular compound light microscope (LEICA DM 750, Germany). Tissue sections from the treated groups were examined blindly by a pathologist for any evidence of histopathological changes with respect to those of the controls. After examination, photomicrographs of selected sample sections of brain, heart and suprarenal glands from both treated and control mice were taken under a x20 objective using an automated built-in digital photo-camera (EVOS XL, USA). Data processing and analysis: All quantitative data were organized and analyzed using Statistical Package for Social Science (SPSS) version 21 statistical software. The values of body and organ weight, including relative organ weights, were analyzed, and the results were expressed as mean ± SEM (standard error of mean). Differences between treated and control groups were compared using one-way ANOVA. P-values <0.05 were considered statistically significant. Ethical approval: Ethical approval was obtained from the Research Review Committee of the Department of Anatomy, College of Health Sciences, AAU. Results: Acute toxicity study: The single oral administration of aqueous extract of A. afra in mice did not cause any mortality even with the highest dose, which was 5000mg/kg. 
No signs of toxicity were observed at the lower four doses, i.e., 200mg/kg, 700mg/kg, 1200mg/kg and 2200mg/kg. However, signs of mild toxicity like anxiety and piloerection were seen at the doses of 3200mg/kg, 4200mg/kg and 5000mg/kg. These symptoms gradually disappeared after a wash-out period over the two weeks of observation. Moreover, there was a gradual increase in body weight of treated mice, though not statistically different among the different groups. Sub-acute Toxicity Effects of A. afra aqueous leaves extract on behavior and body weight of mice: During the 28 days of the sub-acute toxicity study, mice treated orally with both low and high doses of the extract showed no noticeable change in their general behavior compared to the control group. There was also no significant difference between the two sexes, except that the males, in both treated and non-treated control groups, were more aggressive compared to the females (Table 1). Moreover, there was no toxicity-related death throughout the period of the study. Neurobehavioral neurotoxicity evaluation of male and female mice treated with aqueous extract of A. afra as compared to the controls during four consecutive weeks of observation. “+” means behavioral response present. “-” means behavioral response absent. n=5. Gradual weight gain was observed in both treated and control groups, though not statistically significant during the study period, and no significant difference was observed between male and female groups (Figure 1). Mean body weight growth between female (A) and male (B) mice treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra as compared to the controls. Values are mean ± SEM, n=5. Effects of A. afra aqueous leaves extract on gross pathology and relative organ weight: On gross examination of brain, heart and suprarenal glands, no abnormal gross pathological findings like dark and white spots and necrosis were observed in any of the treatment and control groups. 
There were no significant absolute and relative organ weight changes between treated and control groups in both sexes (Table 2). Absolute and relative organ weights of mice orally treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra as compared to the controls. Relative organ weight is calculated using the weight of brain as a denominator. Each value is expressed as mean ± SEM, n=5 for each group. Effects of aqueous leaves extract on histology of cerebral cortex, heart and suprarenal glands: Microscopic examination of cerebral cortex of mice treated with 600mg/kg and 1800mg/kg of aqueous leaves extract of A. afra indicated no structural disturbance compared to the controls. No signs of individual neuron death or focal lesions such as pyknosis, karyorrhexis, karyolysis and eosinophilic cytoplasm were observed in either male or female mice. In addition, no sign of inflammation, such as lymphocytic infiltration, was observed in male and female mice (Figure 2). The cytoarchitecture of cerebral cortex was identical between treated and control mice. Photomicrographs of H & E stained sections of cerebral cortex from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A); I = Molecular, II/III = Supragranular Pyramidal, IV = Granular, V = Deep Pyramidal and VI = Polymorphic (Multiform) layers. Magnification X200. No architectural difference was observed in microscopic examination of the heart of treated and control female and male mice. In mice treated with both 600mg/kg and 1800mg/kg of the extract, no signs of myocardial cell injury such as pyknosis, karyorrhexis, karyolysis, vacuolation, focal necrosis and fibrosis were observed. Furthermore, no sign of inflammation (leukocytic infiltration) was observed in female and male treated mice groups (Figure 3). Photomicrographs of H & E stained sections of heart from female mice treated with aqueous leaves extract of A. 
afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A). Magnification X300. On examination of the sections of suprarenal glands, no signs of toxicity were observed in either the cortical or medullary regions. The architecture of both cortex and medulla of mice treated with 600mg/kg and 1800mg/kg of the extract was identical with that of the control. In both treated and control mice, two cortical zones were observed, with some mice showing an additional zone (X-zone). However, no cortical lesion like degeneration (vacuolar or granular), necrosis or hemorrhage was observed (Figure 4). Photomicrographs of H & E stained sections of suprarenal glands from female mice treated with aqueous leaves extract of A. afra at 600mg/kg (B) and 1800mg/kg (C) as compared to the control (A); G = zona glomerulosa, F = zona fasciculata, X = X-zone and M = adrenal medulla; Magnification X200. Discussion: Evaluation of the pathological alterations induced in laboratory animals by novel treatment agents represents one of the safety assessments prior to conducting clinical trials. This preliminary assessment makes a major contribution to the development of new treatments for humans and animals. Acute toxicity tests provide data on the relative toxicity likely to arise from a single or brief exposure. It is an initial assessment of toxic manifestations induced by the test substance(s) (17). This study showed that the LD50 of the aqueous extract of A. afra was above 5000mg/kg, which is grouped under category 5 or unclassified according to the Globally Harmonized Classification System (13). This is in agreement with the findings of previous experiments in mice (2) and rats (11). Oral administration of the extract up to a dose of 2200mg/kg did not cause any alteration in the behavioral pattern of the mice as compared to the control group. However, mild signs of toxicity were observed at the three higher doses. 
There was no significant difference in weight gain between treated and control groups. Gross examination of heart, suprarenal glands and brain revealed no treatment-related gross findings at necropsy. The results of the acute toxicity test in this study, therefore, indicate that aqueous leaf extract of A. afra is tolerated up to the limit of 5000mg/kg body weight as per OECD guidelines (12). Sub-acute toxicity study provides information on possible health hazards likely to occur in the nervous, immune and endocrine systems from repeated exposure over a relatively limited period of time (15). The present sub-acute toxicity study with oral aqueous extract of A. afra on body weight, brain, heart and suprarenal glands showed no significant behavioral changes between treated and control groups in both sexes. Changes in animals' body weights are usually used as an indicator of toxic effects of test substances (18). On the other hand, increase in animals' body weight could also be related to body fat accumulation rather than to toxicity (19). The non-significant increase in body weight observed in the present study might be attributed to fat accumulation in the body. Evaluation of organ-to-body weight and organ-to-brain weight ratios is also used to assess treatment-related effects in toxicological studies. For suprarenal glands, the organ-to-brain ratio is more predictive of suprarenal gland weight change than the organ-to-body weight ratio (20). No significant changes were observed in absolute and relative organ weights of all groups in the present study. These findings are in line with those of previous studies done with other species of the genus Artemisia (21). In response to injury, a number of changes may occur in neurons and their processes (axons and dendrites), like shrinkage of the cell body, pyknosis of the nucleus, disappearance of the nucleolus and loss of Nissl substance with intense eosinophilia of the cytoplasm (red neurons) (22). 
In the histopathological examination of cerebral cortex, none of these morphological changes was observed, which was supported by the absence of abnormal behavior and motor activity on cage-side observation. This might suggest that the aqueous leaf extract of A. afra may not cause toxicity to the cerebral cortex of mice. Because of its high oxidative metabolic need, the heart can be injured by any compound that interferes with its oxygen supply. On microscopic examination, myocardial damage can take the form of cytoplasmic alterations such as vacuolation, pyknotic nuclei, karyorrhexis and karyolysis in diffuse or localized areas (23). In the present study, none of these lesions was observed, which is in line with the results of the previous study done on the same plant with a different dose (2). Suprarenal glands are reported to be the most common endocrine organs associated with chemically induced lesions (24). These lesions are more frequent in the zona fasciculata than in the zona glomerulosa. The adrenal cortex produces steroid hormones with a 17-carbon nucleus following a series of hydroxylation reactions that occur in the mitochondria and endoplasmic reticulum. Toxic agents for the adrenal cortex include short-chain aliphatic compounds, lipidosis inducers, amphiphilic compounds, natural and synthetic steroids, and chemicals that affect hydroxylation (25). Morphologic evaluation of cortical lesions provides insight into the sites of inhibition of steroidogenesis. In the present study, no histological degenerative or proliferative lesions of the cortex or medulla were observed, suggesting the non-toxic effect of the plant on the glands, which is in agreement with that of the previous study done in another species of the same genus (21). In conclusion, the present findings suggest that administration of 600mg/kg and 1800mg/kg of body weight of aqueous leaves extract of A. afra in mice for a month is safe.
Background: The majority of the population relies on traditional medicine as a source of healthcare. Artemisia afra is a plant traditionally used for its medicinal values, including treatment of malaria, in many parts of the world. Currently, it is also attracting attention because of a claim that a related species, Artemisia annua, is a remedy for the COVID-19 pandemic. The aim of the present study was to investigate toxic effects of A. afra on brain, heart and suprarenal glands in mice aged 8-12 weeks and weighing 25-30g. Methods: Leaves of A. afra were collected from Bale National Park, dried under shade, crushed into powder and soaked in distilled water to yield an aqueous extract for oral administration. For the acute toxicity study, seven treated groups and one control group, with 3 female mice each, were used. They were given a single dose of 200mg/kg, 700mg/kg, 1200mg/kg, 2200mg/kg, 3200mg/kg, 4200mg/kg or 5000mg/kg b/wt of the extract. For the sub-acute toxicity study, two treated groups and one control group, with 5 female and 5 male mice each, were used. They were treated daily with 600mg/kg or 1800mg/kg b/wt of the extract. Results: The LD50 was found to be greater than 5000mg/kg, indicating that the plant is relatively safe. In the sub-acute study, no signs of toxicity were observed in any treatment group. On microscopic examination of the brain, heart and suprarenal glands, no sign of cellular injury was observed. Conclusions: The findings of this study suggest that the leaves extract of A. afra is relatively safe in mice.
null
null
3,986
324
[]
4
[ "kg", "mice", "extract", "afra", "toxicity", "treated", "study", "observed", "weight", "aqueous" ]
[ "leaves extract afraat", "ethanol extract afra", "artemisia species commonly", "malaria ethiopia afra", "afra treatment malaria" ]
null
null
null
[CONTENT] A. afra | Toxicity study | histopathology | brain | heart | suprarenal glands | Swiss albino mice [SUMMARY]
null
[CONTENT] A. afra | Toxicity study | histopathology | brain | heart | suprarenal glands | Swiss albino mice [SUMMARY]
null
[CONTENT] A. afra | Toxicity study | histopathology | brain | heart | suprarenal glands | Swiss albino mice [SUMMARY]
null
[CONTENT] Animals | Artemisia | Brain | Female | Male | Mice | Plant Extracts | Plant Leaves | Water [SUMMARY]
null
[CONTENT] Animals | Artemisia | Brain | Female | Male | Mice | Plant Extracts | Plant Leaves | Water [SUMMARY]
null
[CONTENT] Animals | Artemisia | Brain | Female | Male | Mice | Plant Extracts | Plant Leaves | Water [SUMMARY]
null
[CONTENT] leaves extract afraat | ethanol extract afra | artemisia species commonly | malaria ethiopia afra | afra treatment malaria [SUMMARY]
null
[CONTENT] leaves extract afraat | ethanol extract afra | artemisia species commonly | malaria ethiopia afra | afra treatment malaria [SUMMARY]
null
[CONTENT] leaves extract afraat | ethanol extract afra | artemisia species commonly | malaria ethiopia afra | afra treatment malaria [SUMMARY]
null
[CONTENT] kg | mice | extract | afra | toxicity | treated | study | observed | weight | aqueous [SUMMARY]
null
[CONTENT] kg | mice | extract | afra | toxicity | treated | study | observed | weight | aqueous [SUMMARY]
null
[CONTENT] kg | mice | extract | afra | toxicity | treated | study | observed | weight | aqueous [SUMMARY]
null
[CONTENT] afra | herbal | extract | extract afra | studies | diseases | chronic | thujone | remedy | anti [SUMMARY]
null
[CONTENT] treated | kg | mice | female | control | extract | aqueous leaves | mice treated | aqueous leaves extract | leaves extract [SUMMARY]
null
[CONTENT] kg | mice | extract | afra | treated | weight | toxicity | study | extract afra | aqueous [SUMMARY]
null
[CONTENT] ||| Artemisia ||| Artemisia | COVD-19 ||| A. | 8-12 weeks | 25-30 [SUMMARY]
null
[CONTENT] ||| ||| [SUMMARY]
null
[CONTENT] ||| Artemisia ||| Artemisia | COVD-19 ||| A. | 8-12 weeks | 25-30 | Bale National Park ||| seven | one | 3 female ||| 200mg/kg | 700mg | 2200mg/kg ||| two | one | 5 female | 5 ||| daily | 600mg | 1800mg/kg ||| ||| ||| ||| ||| A. [SUMMARY]
null
Role of blood pressure on stroke-related mortality: a 45-year follow-up study in China.
35026771
Hypertension is associated with stroke-related mortality. However, the long-term association between blood pressure (BP) and the risk of stroke-related mortality, and the path by which BP influences stroke-related death, remain unknown. The current study aimed to estimate the long-term causal associations between BP and stroke-related mortality and the potential mediating and moderated mediating models of these associations.
BACKGROUND
This is a 45-year follow-up cohort study; a total of 1696 subjects were enrolled in 1976, and 1081 participants had died by the latest follow-up in 2020. A Cox proportional hazard model was used to explore the associations of stroke-related death with baseline systolic blood pressure (SBP)/diastolic blood pressure (DBP) categories and BP changes from 1976 to 1994. Mediating and moderated mediating analyses were performed to detect the possible influencing paths from BP to stroke-related death. The E value was calculated in the sensitivity analysis.
METHODS
Among 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 were men (66.3%). After a 45-year follow-up, a total of 201 (11.9%) stroke-related deaths occurred. After adjustment, the Cox proportional hazard model showed that among the participants with SBP ≥ 160 mmHg or DBP ≥ 100 mmHg in 1976, the risk of stroke-related death increased by 217.5% (hazard ratio [HR] = 3.175, 95% confidence interval [CI]: 2.297-4.388), and the adjusted HRs were higher in male participants. Among the participants with hypertension in both 1976 and 1994, the risk of stroke-related death increased by 110.4% (HR = 2.104, 95% CI: 1.632-2.713), and the adjusted HRs of the BP changes were higher in male participants. Body mass index (BMI) significantly mediated the association of SBP with stroke-related death, and this mediating effect was moderated by gender.
RESULTS
In a 45-year follow-up, high BP and persistent hypertension are associated with stroke-related death, and these associations were even more pronounced in male participants. The paths of association are mediated by BMI and moderated by gender.
CONCLUSIONS
[ "Adult", "Blood Pressure", "China", "Follow-Up Studies", "Humans", "Hypertension", "Male", "Middle Aged", "Risk Factors", "Stroke" ]
8869560
Introduction
The status of hypertension in Chinese adults, according to the China Hypertension Survey (2012–2015), is a high prevalence with low rates of awareness, treatment, and control.[1] Hypertension is associated with the morbidity, progression, and mortality of cardiovascular disease,[2–4] especially stroke-related death.[5] High systolic blood pressure (SBP) ranked first for the number of deaths, accounting for 2.54 million, and second for the percentage of disability-adjusted life years, namely the total healthy life years lost from the onset of hypertension to death, in China.[6] Therefore, higher blood pressure (BP), as the strongest causal and high-exposure factor, is the leading attributable risk factor for stroke-related death worldwide.[7,8] Nevertheless, long-term observational research on the associations of BP and BP changes with stroke-related death, and on the paths of these associations, is still rare, as most cohort studies have not been followed long enough. A cohort study with a sufficiently long follow-up in a fixed population can not only observe the long-term association between BP level and death but also explore the influence path under the premise of causal order, avoiding causal-inversion bias. The present study was a 45-year prospective cohort study performed to estimate the long-term influence of baseline BP and changes of BP on stroke-related death and to explore the possible influencing paths.
Methods
Ethics approval The cohort was approved by the Ethics Committee of the People's Liberation Army General Hospital, China (EC0411-2001). All participants provided their written informed consent. Subjects The data in the current study originated from the Xi’an Machinery Factory cohort study, which included all employees aged ≥35 years. The physical and biochemical examination data were collected in the hospital of the machinery factory and in the teaching hospital of the Fourth Military Medical University, as previously reported.[9,10] From the baseline survey in 1976, the latest follow-up was in 2020, with a 3-year follow-up interval. Information on demographics, physiological indices, and lifestyle factors was collected through face-to-face interviews by trained staff in 1976 and 1994. The cause of death was recorded in the follow-up every 4 years.
In the baseline survey wave, a total of 1842 persons were recruited. After excluding those lost to follow-up or lacking baseline information, a total of 1696 subjects were enrolled. A total of 169 participants had died from stroke-related causes by the latest follow-up in December 2020. Exposure BP was measured twice at 10-min intervals by nurses using a stethoscope and a mercury sphygmomanometer, and the average of the two readings served as the BP value. Participants with SBP < 140 mmHg and diastolic blood pressure (DBP) < 90 mmHg were defined as having normal BP, while those with SBP ≥140 mmHg and/or DBP ≥90 mmHg were defined as having hypertension.[11] To better interpret the results, baseline SBP/DBP was grouped into <130 mmHg/<80 mmHg, 130 to 139 mmHg/80 to 89 mmHg, 140 to 159 mmHg/90 to 99 mmHg, and ≥160 mmHg/≥100 mmHg. Changes in BP categories from 1976 to 1994 were defined as normal BP → normal BP, normal BP → hypertension, hypertension → normal BP, and hypertension → hypertension.[11] Covariables Body mass index (BMI) was calculated as weight (kg) divided by height squared (m2). Total cholesterol (TC) and triglyceride were measured in the medical insurance designated hospital.
Self-reported information on demographic characteristics (education, occupation, and marital status), family history of disease, and lifestyle factors (smoking and drinking) was collected during the investigation. Determination of the cause of death The medical insurance designated hospitals of all participants were fixed, and pension payments had to be reported to the local Ministry of Personnel; therefore, follow-up of death was 100% complete. The cause of death was verified according to ICD-10 (I64.X04) and ICD-11 (8B11, 8B20) by two doctors in the medical insurance hospital every 3 years. The endpoint of this study was stroke-related death. Statistical analysis Differences among gender and BP groups were tested using Student's t test and the chi-squared test according to the type of variable. The incidence density of mortality was calculated. The Cox proportional hazard model was used to estimate hazard ratios (HRs) and 95% confidence intervals (CIs) for death in association with baseline SBP/DBP categories and the 19-year changes of BP.
The Schoenfeld residual trend test was used to check the proportional hazards assumption for the associations of BP categories in 1976, and of BP change by 1994, with stroke-related death. Both models satisfied the proportional hazards assumption for the independent variables (BP/BP change) (P > 0.05). Models were stratified by gender; continuous variables (age, BMI, and TC) and categorical variables (marital status, education, occupation, smoking, drinking, and diabetes) were adjusted for. When comparing HRs across BP categories or BP changes, the floating absolute risk method was used to estimate the HRs and 95% CIs.[12] The Cox–Stuart test was used to assess the trend of the adjusted HRs and 95% CIs. Because the associations between BP and death over such a long period are unlikely to reflect a simple direct influence, possible paths of association were also explored; accordingly, mediation and moderation analyses were performed.[13] Mediation models examine in what way, or through which variable, an independent variable affects a dependent variable: in this study, the independent variable (BP in 1976) could exert an indirect effect on the dependent variable (stroke-related deaths by 2020) through an intermediary variable (BMI in 1994). Moderation models examine how the influence of an independent variable on a dependent variable differs across situations or populations. All mediation and moderated mediation analyses were performed with the PROCESS macro for SPSS 24.0 (IBM SPSS Statistics for Windows, Version 24.0. IBM, Armonk, NY, USA).[14] The simple mediation model was tested with PROCESS model 4 and the moderated mediation model with PROCESS model 5.[15] All mediation and moderated mediation models were based on 5000 bootstrap samples.
E values were reported in the sensitivity analysis; the E value quantifies how strong unmeasured confounding would have to be to explain away an observed association.[16] All analyses were performed using SPSS 24.0, STATA 15.0 (Stata Corp., College Station, TX, USA), and EmpowerStats (http://www.empowerstats.com, X&Y Solutions, Inc., Boston, MA, USA).
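The E value used in the sensitivity analysis has a simple closed form. Below is a minimal sketch (not the authors' code), assuming the common rare-outcome approximation under which an HR is treated like a risk ratio; the HR of 3.258 and its lower confidence limit of 2.353 are the Table 3 figures cited in the Results:

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio (or an HR under the rare-outcome
    approximation) : the minimum strength of association an unmeasured
    confounder would need with both exposure and outcome to fully
    explain away the observed association."""
    if rr < 1:
        rr = 1 / rr  # flip protective estimates onto the >1 scale
    return rr + math.sqrt(rr * (rr - 1))

# Point estimate and lower CI limit for SBP >=160 / DBP >=100 mmHg (Table 3)
print(round(e_value(3.258), 2))  # E-value for the HR itself
print(round(e_value(2.353), 2))  # E-value for the CI limit closer to the null
```

Both E values exceed 2, matching the paper's reading that substantial unmeasured confounding would be required to negate the observed HRs.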
Results
Basic characteristics of 1696 participants in 1976 Among 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 were men (66.3%). No statistically significant differences in SBP and DBP were found between male and female participants. A total of 617 participants (36.4%) were identified with hypertension in 1976, with an average age of 46.04 ± 7.11 years; 64.8% were men. Average age, TC, and BMI rose with increasing BP level (Ptrend < 0.05) [Table 1 and Supplementary Table 1]. Characteristics of 1696 participants with different baseline BP in 1976. Data are presented as mean ± standard deviation or n (%). BMI: Body mass index; BP: Blood pressure; DBP: Diastolic blood pressure; SBP: Systolic blood pressure; TC: Total cholesterol; TG: Triglyceride.
Stroke-related deaths according to the baseline BP After a 45-year follow-up, a total of 201 stroke-related deaths occurred. The stroke-related mortality over 45 years was 11.9% (95% CI: 10.3–13.4%), and the incidence density was 0.26 per 100 person-years. The incidence density of stroke-related mortality was significantly higher in participants with hypertension than in those without, and higher in male than in female participants [Table 2]. Stroke-related mortality according to the baseline hypertension (N = 1696). Data are presented as n/N, or n (range).
The associations between BP and stroke-related mortality The Cox proportional hazard model showed that, after adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, the risk of stroke-related death among participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976 increased by 225.8% (HR = 3.258, 95% CI: 2.353–4.510). These associations were even more pronounced in male participants [Table 3]. HRs of stroke-related death associated with BP categories from 1976 to 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.
Changes in BP levels and subsequent risk of stroke-related mortality During the 45-year follow-up from 1976 to 2020, 201 stroke-related deaths were recorded, of which 169 occurred from 1994 to 2020. After adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, changes in BP group from 1976 to 1994 were associated with stroke-related mortality. Compared with the normal BP → normal BP group, the adjusted HR was 2.104 (95% CI: 1.632–2.713) for the hypertension → hypertension group, and the adjusted HRs were 2.415 (95% CI: 1.801–3.239) and 1.895 (95% CI: 1.109–3.239) in male and female participants, respectively [Table 4]. HRs of stroke-related death associated with changes of BP categories from 1994 to 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.
Sensitivity analysis E values were calculated in the sensitivity analysis and were higher than the corresponding HR values. Most E values were >2, indicating that considerable unmeasured confounding would be needed to negate the observed HRs, which implies that the current associations are relatively robust [Tables 3 and 4]. Results were consistent in the sensitivity analysis after excluding participants with cardiovascular diseases at baseline [Supplementary Tables 2 and 3].
The mediation and moderated mediation analysis We examined, adjusting for the potential confounding factors above, whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020. The mediation analysis showed that BMI in 1994, as a statistically significant mediator, partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020, and the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1]. The mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.
The moderated mediation analysis showed that the direct effect of SBP on stroke-related death in the mediation model above was moderated by gender, indicating that the effect of SBP on stroke-related death was stronger in male participants. With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2]. The moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of conditional process analysis (mediation and moderation) of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 5.8%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.
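Two of the quantities reported above are simple arithmetic transforms that can be reproduced directly. A small illustrative sketch (not the authors' code); the ~77,300 person-years figure is inferred here from the reported 201 deaths and 0.26 per 100 person-years, and is not stated in the article:

```python
def percent_increase(hr: float) -> float:
    """Convert a hazard ratio into the 'risk increased by X%'
    phrasing used in the text: (HR - 1) * 100."""
    return (hr - 1) * 100

def incidence_density(deaths: int, person_years: float) -> float:
    """Deaths per 100 person-years of follow-up."""
    return deaths / person_years * 100

# HR = 3.258 for SBP >=160 / DBP >=100 mmHg -> "increased by 225.8%"
print(round(percent_increase(3.258), 1))

# 201 stroke-related deaths over roughly 77,300 person-years (inferred)
# reproduces the reported 0.26 per 100 person-years
print(round(incidence_density(201, 77_300), 2))
```

The same transform links the abstract's HR of 2.104 to its "increased by 110.4%" wording.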
null
null
[ "Ethics approval", "Exposure", "Covariables", "Determination of the cause of death", "Statistical analysis", "Basic characteristics of 1696 participants in 1976", "Stroke-related deaths according to the baseline BP", "The associations between BP and stroke-related mortality", "Changes in BP levels and subsequent risk of stroke-related mortality", "Sensitivity analysis", "The mediation and moderated mediation analysis" ]
[ "The cohort was approved by the Ethics Committee of the People's Liberation Army General Hospital, China (EC0411-2001). All participants provided their written informed consent.", "BP was measured twice at 10-min intervals by nurses using a stethoscope and a mercury-stand sphygmomanometer, and the average value served as the BP value. Participants with an SBP < 140 mmHg and diastolic blood pressure (DBP) < 90 mmHg were defined as normal BP, while those with SBP ≥140 mmHg and/or DBP ≥90 mmHg were defined as hypertension.[11] To better understand the results, SBP/DBP was grouped into < 130 mmHg/ < 80 mmHg, 130 to 139 mmHg/80 to 89 mmHg, 140 to 159 mmHg/90 to 99 mmHg, and ≥160 mmHg/≥100 mmHg at the baseline. Changes in BP categories from 1976 to 1994 were defined as normal BP → normal BP, normal BP → hypertension, hypertension → normal BP, and hypertension → hypertension.[11]", "Body mass index (BMI) was calculated as weight (kg) divided by height squared (m2). Total cholesterol (TC) and triglyceride were detected in the medical insurance designated hospital. Self-reported information of demographic characteristics (education, occupation, and marital status), family history of the disease, and lifestyle factors (smoking and drinking) were collected according to the investigation.", "The medical insurance designated hospitals of all participants were fixed, and the pension payment needs to be reported to the local Ministry of Personnel, therefore, the follow-up of death was 100% complete. The determination of the cause of deaths was checked according to ICD-10 (I64.X04) and ICD-11 (8B11, 8B20) by two doctors in the medical insurance hospital every 3 years. The endpoints of this study were stroke-related deaths.", "Differences among gender and BP groups were performed using Student's t and chi-squared test according to the type of variables. The incidence density of mortality was calculated. 
COX proportional hazard model was used to explore the hazard ratios (HRs) and 95% confidence intervals (CIs) for death in associations with baseline SBP/DBP categories and 19-year changes of BP. Schoenfeld residual trend test was used to test the proportional hazard assumption in the associations of BP categories in 1976/BP change in 1994 and stroke-related death, respectively. The results show that the independent variables (BP/BP change) and the two models meet the preconditions of proportional risk (P > 0.05). Models were stratified by gender, and continuous variables such as age, BMI, TC, and categorical variables such as marital status, education, occupation, smoking, drinking, and diabetes were adjusted. While comparing HRs from BP categories or BP changes, the floating absolute risk was used to estimate the HRs and 95% CI.[12] The Cox–Stuart test was used to estimate the trend of the adjusted HRs and 95% CI. The associations between BP and deaths in such a long term were not a simple direct influence, and the exploration of possible paths of associations should not be given up. Therefore, the possible mediations and moderations were performed using the analytic methods.[13] The mediation models explore the independent variable affect the dependent variable in what way or by what variable. In this study, the independent variable (BP in 1976) can exert an indirect effect on the dependent variable (stroke-related deaths in 2020) through an intermediary variable (BMI in 1994). The moderation model explores the different influences of independent variables on the dependent variables in different situations or populations. All mediation and moderated mediation analyses were performed by scripts of PROCESS of SPSS 24.0 (IBM SPSS Statistics for Windows, Version 24.0. 
IBM, Armonk, NY, USA).[14] The simple mediating model was tested by PROCESS model 4, and the moderated mediating model by PROCESS model 5.[15] All mediating and moderated mediating models were based on a 5000 sample bootstrapping set.\nE values were reported in the sensitivity analysis, which is related to the potential subject to unmeasured confounding.[16] All analyses were performed using SPSS 24.0, STATA 15.0 (Stata Corp., College Station, TX, US), and EmpowerStats (http://www.empowerstats.com, X&Y Solutions, Inc., Boston, MA, USA).", "Among 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 were men (66.3%). No significant statistical differences in SBP and DBP were found between male and female participants. A total of 617 participants were identified with hypertension (36.4%) in 1976, with an average age of 46.04 ± 7.11 years, with 64.8% men. The levels of average age, TC, and BMI were elevated with the increasing BP level (Ptrend < 0.05) [Table 1 and Supplementary Table 1].\nCharacteristics of 1696 participants with different baseline BP in 1976.\nData are presented as mean ± standard deviation or n(%). BMI: Body mass index; BP: Blood pressure; DBP: Diastolic blood pressure; SBP: Systolic blood pressure; TC: Total cholesterol; TG: Triglyceride.", "After a 45-year follow-up, a total of 201 stroke-related deaths occurred. The stroke-related mortality in 45 years was 11.9% (95% CI: 10.3–13.4%), and the incidence density was 0.26 per 100 person-years. 
The incidence density of stroke-related mortality was significantly higher in the participants with hypertension than those without hypertension and higher in male than female participants [Table 2].\nStroke-related mortality according to the baseline hypertension (N = 1696).\nData are presented as n/N, or n (range).", "The COX proportional hazard model showed that, after adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, TC, among the participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976, the risk of stroke-related death increased by 225.8% (HR = 3.258, 95% CI: 2.353–4.510). In these associations, the risks of stroke-related death were even more pronounced in male participants [Table 3].\nHRs of stroke-related death associated with BP categories from 1976 to 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.", "During a 45-year follow-up from 1976 to 2020, 201 stroke-related deaths were recorded, and 169 stroke-related deaths were recorded from 1994 to 2020. After adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, changes in BP groups from 1976 to 1994 were associated with stroke-related mortality. 
Compared with the normal BP → normal BP group, the adjusted HR was 2.104 (95% CI: 1.632–2.713) for the hypertension → hypertension group and the adjusted HRs were 2.415 (95% CI: 1.801–3.239) and 1.895 (95% CI: 1.109–3.239) in male and female participants, respectively [Table 4].\nHRs of stroke-related death associated with changes of BP categories from 1994 to 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC.BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.", "E value was calculated in the sensitivity analysis, and the E values were higher than the HR values. Most E values >2 indicated that considerable unmeasured significant confounding factors could be needed to negate the existing HRs, which implied the current associations tended to be more stable [Tables 3 and 4]. Results were consistent in the sensitivity analysis after excluding the participants with cardiovascular diseases at baseline [Supplementary Tables 2 and 3].", "We examined, adjusting for the potential confounding factors above, whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020, respectively. The mediation analysis showed that BMI in 1994, as a statistically significant mediator, partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020, and the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1].\nThe mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol.\nThe flow of mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. 
The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.\nThe moderated mediation analysis showed that the direct effect from SBP to stroke-related death of the mediation model above was moderated by gender, which indicated the effect of SBP on stroke-related death was moderated by male participants. With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2].\nThe moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol.\nThe flow of conditional process analysis (mediation and moderation) of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 5.8%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Ethics approval", "Subjects", "Exposure", "Covariables", "Determination of the cause of death", "Statistical analysis", "Results", "Basic characteristics of 1696 participants in 1976", "Stroke-related deaths according to the baseline BP", "The associations between BP and stroke-related mortality", "Changes in BP levels and subsequent risk of stroke-related mortality", "Sensitivity analysis", "The mediation and moderated mediation analysis", "Discussion", "Conflicts of interest", "Supplementary Material" ]
[ "Hypertension in Chinese adults is characterized by high prevalence and low rates of awareness, treatment, and control, according to the China Hypertension Survey (2012–2015).[1] Hypertension is associated with the morbidity, progression, and mortality of cardiovascular disease,[2–4] especially stroke-related death.[5] High systolic blood pressure (SBP) ranked first for the number of deaths, accounting for 2.54 million, and ranked second for the percentage of disability-adjusted life years, namely the total healthy life years lost from the onset of hypertension to death, in China.[6] Therefore, higher blood pressure (BP), as the strongest causal and most prevalent exposure factor, is the leading attributable risk factor for stroke-related death worldwide.[7,8] Nevertheless, long-term observational research on the associations of BP and BP changes with stroke-related deaths, and on the paths of these associations, is still rare, as most cohort studies were not followed long enough. A cohort study with sufficiently long follow-up in a fixed population can not only observe the long-term association between BP level and death but also explore the influence path under the premise of causal order, avoiding causal-inversion bias. The present study performed a 45-year prospective cohort study to estimate the long-term influence of baseline BP and BP changes on stroke-related deaths and to explore the possible influencing paths.", " Ethics approval The cohort was approved by the Ethics Committee of the People's Liberation Army General Hospital, China (EC0411-2001). All participants provided their written informed consent.\n Subjects The data in the current study originated from the Xi’an Machinery Factory cohort study including all employees aged ≥35 years. 
The information of physical and biochemical examination was collected in the hospital of the machinery factory and the teaching hospital of the Fourth Military Medical University, as previously reported.[9,10] From the baseline survey in 1976, the latest follow-up was 2020, with a 3-year follow-up interval. The information on demographics, physiological indices, and lifestyle factors was collected through face-to-face interviews by trained staff in 1976 and 1994. The cause of death was recorded in the follow-up every 4 years. In the baseline survey wave, a total of 1842 persons were recruited. After excluding those who were lost to follow-up or lacked baseline information, a total of 1696 subjects were enrolled. A total of 169 participants had died from stroke-related causes by the latest follow-up in December 2020.\n Exposure BP was measured twice at 10-min intervals by nurses using a stethoscope and a mercury-stand sphygmomanometer, and the average value served as the BP value. 
Participants with an SBP <140 mmHg and diastolic blood pressure (DBP) <90 mmHg were defined as normal BP, while those with SBP ≥140 mmHg and/or DBP ≥90 mmHg were defined as hypertension.[11] To better understand the results, baseline SBP/DBP was grouped into <130 mmHg/<80 mmHg, 130 to 139 mmHg/80 to 89 mmHg, 140 to 159 mmHg/90 to 99 mmHg, and ≥160 mmHg/≥100 mmHg. Changes in BP categories from 1976 to 1994 were defined as normal BP → normal BP, normal BP → hypertension, hypertension → normal BP, and hypertension → hypertension.[11]\n Covariables Body mass index (BMI) was calculated as weight (kg) divided by height squared (m2). Total cholesterol (TC) and triglyceride were measured in the medical insurance designated hospital. Self-reported information on demographic characteristics (education, occupation, and marital status), family history of disease, and lifestyle factors (smoking and drinking) was collected in the survey.\n Determination of the cause of death The medical insurance designated hospitals of all participants were fixed, and pension payments had to be reported to the local Ministry of Personnel; therefore, the follow-up of death was 100% complete. The cause of each death was verified according to ICD-10 (I64.X04) and ICD-11 (8B11, 8B20) by two doctors in the medical insurance hospital every 3 years. The endpoints of this study were stroke-related deaths.\n Statistical analysis Differences among sex and BP groups were compared using Student's t test or the chi-squared test according to the type of variable. The incidence density of mortality was calculated. The Cox proportional hazards model was used to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs) for death in association with baseline SBP/DBP categories and 19-year changes of BP. The Schoenfeld residual trend test was used to check the proportional hazards assumption in the associations of BP categories in 1976 and BP change in 1994 with stroke-related death, respectively; the independent variables (BP/BP change) in both models met the proportional hazards precondition (P > 0.05). 
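The BP definitions in the Exposure section translate directly into a small classifier; a minimal sketch, assuming the four-level grouping's third band is 140–159/90–99 mmHg (that line is garbled in the source, so this band is an inference, not a quotation):

```python
def bp_status(sbp, dbp):
    """Normal BP: SBP < 140 and DBP < 90; otherwise hypertension.[11]"""
    return "normal BP" if sbp < 140 and dbp < 90 else "hypertension"


def bp_category(sbp, dbp):
    """Four-level grouping; a participant falls into the highest
    band reached by either SBP or DBP."""
    if sbp >= 160 or dbp >= 100:
        return ">=160/>=100 mmHg"
    if sbp >= 140 or dbp >= 90:
        return "140-159/90-99 mmHg"  # assumed band (source text garbled)
    if sbp >= 130 or dbp >= 80:
        return "130-139/80-89 mmHg"
    return "<130/<80 mmHg"
```

The 19-year change groups (e.g., normal BP → hypertension) then follow from comparing `bp_status` at the two survey waves.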
Models were stratified by sex; continuous variables (age, BMI, and TC) and categorical variables (marital status, education, occupation, smoking, drinking, and diabetes) were adjusted for. When comparing HRs across BP categories or BP changes, the floating absolute risk was used to estimate the HRs and 95% CIs.[12] The Cox–Stuart test was used to estimate the trend of the adjusted HRs and 95% CIs. Over such a long term, the associations between BP and deaths are unlikely to be a simple direct influence, so possible paths of association were also explored. Therefore, possible mediation and moderation were examined using conditional process analysis.[13] A mediation model explores how, or through which variable, the independent variable affects the dependent variable; in this study, the independent variable (BP in 1976) can exert an indirect effect on the dependent variable (stroke-related deaths in 2020) through an intermediary variable (BMI in 1994). A moderation model explores the different influences of the independent variable on the dependent variable in different situations or populations. All mediation and moderated mediation analyses were performed with the PROCESS scripts for SPSS 24.0 (IBM SPSS Statistics for Windows, Version 24.0. IBM, Armonk, NY, USA).[14] The simple mediating model was tested by PROCESS model 4, and the moderated mediating model by PROCESS model 5.[15] All mediating and moderated mediating models were based on a 5000-sample bootstrap.\nE values were reported in the sensitivity analysis to quantify the potential influence of unmeasured confounding.[16] All analyses were performed using SPSS 24.0, STATA 15.0 (Stata Corp., College Station, TX, US), and EmpowerStats (http://www.empowerstats.com, X&Y Solutions, Inc., Boston, MA, USA).", "The cohort was approved by the Ethics Committee of the People's Liberation Army General Hospital, China (EC0411-2001). All participants provided their written informed consent.", "The data in the current study originated from the Xi’an Machinery Factory cohort study including all employees aged ≥35 years. The information of physical and biochemical examination was collected in the hospital of the machinery factory and the teaching hospital of the Fourth Military Medical University, as previously reported.[9,10] From the baseline survey in 1976, the latest follow-up was 2020, with a 3-year follow-up interval. The information on demographics, physiological indices, and lifestyle factors was collected through face-to-face interviews by trained staff in 1976 and 1994. The cause of death was recorded in the follow-up every 4 years. In the baseline survey wave, a total of 1842 persons were recruited. After excluding those who were lost to follow-up or lacked baseline information, a total of 1696 subjects were enrolled. A total of 169 participants had died from stroke-related causes by the latest follow-up in December 2020.", "BP was measured twice at 10-min intervals by nurses using a stethoscope and a mercury-stand sphygmomanometer, and the average value served as the BP value. 
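The E values reported in the sensitivity analysis follow the standard formula for a point estimate above 1, E = HR + sqrt(HR × (HR − 1)), treating the hazard ratio approximately as a risk ratio; a minimal sketch, illustrative rather than the authors' exact computation:

```python
import math


def e_value(hr):
    """E-value for a point estimate: the minimum strength of association
    an unmeasured confounder would need with both exposure and outcome
    to fully explain away the observed HR."""
    if hr < 1:
        hr = 1 / hr  # flip protective estimates onto the >1 scale
    return hr + math.sqrt(hr * (hr - 1))
```

For example, the HR of 3.258 reported later for SBP ≥160 mmHg gives an E value of about 5.97, well above 2.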
Participants with an SBP <140 mmHg and diastolic blood pressure (DBP) <90 mmHg were defined as normal BP, while those with SBP ≥140 mmHg and/or DBP ≥90 mmHg were defined as hypertension.[11] To better understand the results, baseline SBP/DBP was grouped into <130 mmHg/<80 mmHg, 130 to 139 mmHg/80 to 89 mmHg, 140 to 159 mmHg/90 to 99 mmHg, and ≥160 mmHg/≥100 mmHg. Changes in BP categories from 1976 to 1994 were defined as normal BP → normal BP, normal BP → hypertension, hypertension → normal BP, and hypertension → hypertension.[11]", "Body mass index (BMI) was calculated as weight (kg) divided by height squared (m2). Total cholesterol (TC) and triglyceride were measured in the medical insurance designated hospital. Self-reported information on demographic characteristics (education, occupation, and marital status), family history of disease, and lifestyle factors (smoking and drinking) was collected in the survey.", "The medical insurance designated hospitals of all participants were fixed, and pension payments had to be reported to the local Ministry of Personnel; therefore, the follow-up of death was 100% complete. The cause of each death was verified according to ICD-10 (I64.X04) and ICD-11 (8B11, 8B20) by two doctors in the medical insurance hospital every 3 years. The endpoints of this study were stroke-related deaths.", "Differences among sex and BP groups were compared using Student's t test or the chi-squared test according to the type of variable. The incidence density of mortality was calculated. The Cox proportional hazards model was used to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs) for death in association with baseline SBP/DBP categories and 19-year changes of BP. The Schoenfeld residual trend test was used to check the proportional hazards assumption in the associations of BP categories in 1976 and BP change in 1994 with stroke-related death, respectively. 
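The incidence density mentioned above (and reported later as 0.26 stroke-related deaths per 100 person-years) is simply deaths divided by accumulated person-time; a minimal sketch, where the person-time figure is back-calculated for illustration and is not a number given in the article:

```python
def incidence_density_per_100py(deaths, person_years):
    """Deaths per 100 person-years of follow-up."""
    return 100 * deaths / person_years


# Illustrative only: 201 deaths over roughly 77,000 person-years
# reproduces the magnitude of the rate reported in the Results.
rate = incidence_density_per_100py(201, 77_000)
```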
The independent variables (BP/BP change) in both models met the proportional hazards precondition (P > 0.05). Models were stratified by sex; continuous variables (age, BMI, and TC) and categorical variables (marital status, education, occupation, smoking, drinking, and diabetes) were adjusted for. When comparing HRs across BP categories or BP changes, the floating absolute risk was used to estimate the HRs and 95% CIs.[12] The Cox–Stuart test was used to estimate the trend of the adjusted HRs and 95% CIs. Over such a long term, the associations between BP and deaths are unlikely to be a simple direct influence, so possible paths of association were also explored. Therefore, possible mediation and moderation were examined using conditional process analysis.[13] A mediation model explores how, or through which variable, the independent variable affects the dependent variable; in this study, the independent variable (BP in 1976) can exert an indirect effect on the dependent variable (stroke-related deaths in 2020) through an intermediary variable (BMI in 1994). A moderation model explores the different influences of the independent variable on the dependent variable in different situations or populations. All mediation and moderated mediation analyses were performed with the PROCESS scripts for SPSS 24.0 (IBM SPSS Statistics for Windows, Version 24.0. 
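The simple-mediation logic of PROCESS model 4 (indirect effect = path a × path b, with the proportion mediated as indirect/total) can be sketched with ordinary least squares on synthetic data. Everything below is an illustrative assumption — the data are simulated, the binary death outcome is handled with a linear-probability shortcut rather than the authors' PROCESS setup, and the coefficients are not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
sbp = rng.normal(126, 20, n)                       # exposure X (SBP in 1976)
bmi = 20 + 0.03 * sbp + rng.normal(0, 2, n)        # mediator M (BMI in 1994)
risk = 0.002 * sbp + 0.01 * bmi                    # synthetic death probability
death = (rng.random(n) < risk).astype(float)       # outcome Y

# Path a: regress mediator on exposure (M ~ X)
Xa = np.column_stack([np.ones(n), sbp])
a = np.linalg.lstsq(Xa, bmi, rcond=None)[0][1]

# Paths c' (direct) and b: regress outcome on exposure and mediator (Y ~ X + M)
Xb = np.column_stack([np.ones(n), sbp, bmi])
coef = np.linalg.lstsq(Xb, death, rcond=None)[0]
c_prime, b = coef[1], coef[2]

indirect = a * b                       # mediated (indirect) effect
total = c_prime + indirect             # total effect of X on Y
proportion_mediated = indirect / total
```

In the article's notation, `a` is a1, `b` is b1, `c_prime` is c1’, and `total` is c1; PROCESS additionally bootstraps the indirect effect (5000 resamples here) to obtain its confidence interval.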
IBM, Armonk, NY, USA).[14] The simple mediating model was tested by PROCESS model 4, and the moderated mediating model by PROCESS model 5.[15] All mediating and moderated mediating models were based on a 5000-sample bootstrap.\nE values were reported in the sensitivity analysis to quantify the potential influence of unmeasured confounding.[16] All analyses were performed using SPSS 24.0, STATA 15.0 (Stata Corp., College Station, TX, US), and EmpowerStats (http://www.empowerstats.com, X&Y Solutions, Inc., Boston, MA, USA).", " Basic characteristics of 1696 participants in 1976 Among 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 were men (66.3%). No statistically significant differences in SBP and DBP were found between male and female participants. A total of 617 participants were identified with hypertension (36.4%) in 1976, with an average age of 46.04 ± 7.11 years and 64.8% men. Average age, TC, and BMI rose with increasing BP level (Ptrend < 0.05) [Table 1 and Supplementary Table 1].\nCharacteristics of 1696 participants with different baseline BP in 1976.\nData are presented as mean ± standard deviation or n (%). BMI: Body mass index; BP: Blood pressure; DBP: Diastolic blood pressure; SBP: Systolic blood pressure; TC: Total cholesterol; TG: Triglyceride.\n Stroke-related deaths according to the baseline BP After a 45-year follow-up, a total of 201 stroke-related deaths occurred. The stroke-related mortality over 45 years was 11.9% (95% CI: 10.3–13.4%), and the incidence density was 0.26 per 100 person-years. The incidence density of stroke-related mortality was significantly higher in participants with hypertension than in those without, and higher in male than in female participants [Table 2].\nStroke-related mortality according to the baseline hypertension (N = 1696).\nData are presented as n/N, or n (range).\n The associations between BP and stroke-related mortality The Cox proportional hazards model showed that, after adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, among participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976, the risk of stroke-related death increased by 225.8% (HR = 3.258, 95% CI: 2.353–4.510). In these associations, the risks of stroke-related death were even more pronounced in male participants [Table 3].\nHRs of stroke-related death associated with BP categories from 1976 to 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.\n Changes in BP levels and subsequent risk of stroke-related mortality During a 45-year follow-up from 1976 to 2020, 201 stroke-related deaths were recorded, of which 169 were recorded from 1994 to 2020. After adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, changes in BP groups from 1976 to 1994 were associated with stroke-related mortality. Compared with the normal BP → normal BP group, the adjusted HR was 2.104 (95% CI: 1.632–2.713) for the hypertension → hypertension group, and the adjusted HRs were 2.415 (95% CI: 1.801–3.239) and 1.895 (95% CI: 1.109–3.239) in male and female participants, respectively [Table 4].\nHRs of stroke-related death associated with changes of BP categories from 1994 to 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.\n Sensitivity analysis E values were calculated in the sensitivity analysis and were higher than the corresponding HRs. Most E values were >2, indicating that substantial unmeasured confounding would be needed to explain away the observed HRs, which suggests the current associations are robust [Tables 3 and 4]. Results were consistent in the sensitivity analysis after excluding participants with cardiovascular disease at baseline [Supplementary Tables 2 and 3].\n The mediation and moderated mediation analysis We examined, adjusting for the potential confounding factors above, whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020. The mediation analysis showed that BMI in 1994, as a statistically significant mediator, partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020, and the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1].\nThe mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol.\nThe flow of the mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.\nThe moderated mediation analysis showed that the direct effect from SBP to stroke-related death in the mediation model above was moderated by sex, with a stronger effect in male participants. With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2].\nThe moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol.\nThe flow of conditional process analysis (mediation and moderation) of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 5.8%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.", "Among 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 were men (66.3%). No statistically significant differences in SBP and DBP were found between male and female participants. A total of 617 participants were identified with hypertension (36.4%) in 1976, with an average age of 46.04 ± 7.11 years and 64.8% men. 
Average age, TC, and BMI rose with increasing BP level (Ptrend < 0.05) [Table 1 and Supplementary Table 1].\nCharacteristics of 1696 participants with different baseline BP in 1976.\nData are presented as mean ± standard deviation or n (%). BMI: Body mass index; BP: Blood pressure; DBP: Diastolic blood pressure; SBP: Systolic blood pressure; TC: Total cholesterol; TG: Triglyceride.", "After a 45-year follow-up, a total of 201 stroke-related deaths occurred. The stroke-related mortality over 45 years was 11.9% (95% CI: 10.3–13.4%), and the incidence density was 0.26 per 100 person-years. The incidence density of stroke-related mortality was significantly higher in participants with hypertension than in those without, and higher in male than in female participants [Table 2].\nStroke-related mortality according to the baseline hypertension (N = 1696).\nData are presented as n/N, or n (range).", "The Cox proportional hazards model showed that, after adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, among participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976, the risk of stroke-related death increased by 225.8% (HR = 3.258, 95% CI: 2.353–4.510). In these associations, the risks of stroke-related death were even more pronounced in male participants [Table 3].\nHRs of stroke-related death associated with BP categories from 1976 to 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.", "During a 45-year follow-up from 1976 to 2020, 201 stroke-related deaths were recorded, of which 169 were recorded from 1994 to 2020. 
After adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, changes in BP groups from 1976 to 1994 were associated with stroke-related mortality. Compared with the normal BP → normal BP group, the adjusted HR was 2.104 (95% CI: 1.632–2.713) for the hypertension → hypertension group, and the adjusted HRs were 2.415 (95% CI: 1.801–3.239) and 1.895 (95% CI: 1.109–3.239) in male and female participants, respectively [Table 4].\nHRs of stroke-related death associated with changes of BP categories from 1994 to 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.", "E values were calculated in the sensitivity analysis and were higher than the corresponding HRs. Most E values were >2, indicating that substantial unmeasured confounding would be needed to explain away the observed HRs, which suggests the current associations are robust [Tables 3 and 4]. Results were consistent in the sensitivity analysis after excluding participants with cardiovascular disease at baseline [Supplementary Tables 2 and 3].", "We examined, adjusting for the potential confounding factors above, whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020. The mediation analysis showed that BMI in 1994, as a statistically significant mediator, partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020, and the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1].\nThe mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020.\nAdjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. 
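HRs such as those above are converted to percent increases in hazard as (HR − 1) × 100, the convention used earlier when HR = 3.258 was described as a 225.8% increase; a trivial sketch:

```python
def hr_to_pct_increase(hr):
    """Percent increase in hazard implied by a hazard ratio."""
    return (hr - 1) * 100
```

So the hypertension → hypertension HR of 2.104 corresponds to roughly a 110.4% higher hazard of stroke-related death.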
BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol.\nThe flow of the mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.\nThe moderated mediation analysis showed that the direct effect of SBP on stroke-related death in the mediation model above was moderated by gender, with a more pronounced effect in male participants. With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2].\nThe moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020.\nAdjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol.\nThe flow of the conditional process analysis (mediation and moderation) of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 5.8%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.", "The results of our study have shown long-term positive associations between BP, BP changes, and stroke-related death outcomes, highlighting the complexity of the pathways underlying these associations.
The association between BP and stroke-related death was mediated by BMI and moderated by gender.\nThe average SBP and DBP were 125.6 ± 20.6 mmHg and 82.3 ± 18.2 mmHg in 1976, and 127.7 ± 18.2 mmHg and 83.3 ± 10.5 mmHg in 1994. In 1976, China was at the early stage of reform and opening-up, with economic development still lagging; by 1994, China had entered a period of rapid economic development. The average BP level increased slightly over this period, a trend consistent with prior studies.[1,17,18] The prevalence of hypertension increased gradually with age and across periods.\nExposure to high BP is an independent risk factor for the incidence, progression, and mortality of stroke. Whether this causal relationship strengthens with longer exposure to high BP was unclear. Studies on this association have been limited by insufficient follow-up time, which cannot rule out reverse causation. Most observational studies have focused on the morbidity, progression, and mortality of cardiovascular disease, owing to the long-term nature of the BP–death association and insufficient follow-up time.[2,19]\nIn this study, the 19-year changes in BP were recorded, and the relationship between changes in BP and stroke-related deaths was explored. The changes in BP may be due to lifestyle changes, drug intervention, and other factors. The results showed that long-term hypertension was associated with stroke-related death to varying degrees, whereas no such association existed in participants without a significant increase in BP. It can be speculated that effective control of BP may reverse or reduce the risk of stroke-related death.
Prior studies support this point, reporting that BP reductions achieved with antihypertensive medications provided greater vascular benefits.[20,21]\nThe mediation test suggested that BMI mediated the influence of SBP on stroke-related death: the higher the SBP, the higher the BMI, and higher BMI was in turn associated with a greater likelihood of stroke-related death. The moderated mediation results suggested that this mediation by BMI operated in male participants. To date, no similar study of these models has been reported.[22]\nThis study has some limitations. First, the results from the Xi’an machinery factory cohort study may not generalize to other populations. Second, the baseline family disease histories were self-reported, so recall bias is difficult to avoid. Third, information on antihypertensive drug use, which might influence BP levels, was missing; this may have led to underestimation of the harmful associations between BP and death outcomes.\nIn conclusion, the current study demonstrated that high BP and increases in BP were associated with stroke-related death, and that mediating and moderated mediating effects significantly shaped these associations. This indicates that the association between high BP and stroke-related death is a long-term, multi-path process, further supporting the necessity of hypertension control.", "None.", "" ]
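The hypertension definition used in this study (SBP ≥140 mmHg and/or DBP ≥90 mmHg), the 1976 → 1994 BP change groups, and the BMI covariable (weight in kg divided by height in m squared) can be sketched as follows; the function names are illustrative, not from the study's analysis code:

```python
# Sketch of the study's BP definitions and 19-year BP change groups;
# cutoffs follow the paper (SBP >= 140 mmHg and/or DBP >= 90 mmHg = hypertension).

def bp_status(sbp: float, dbp: float) -> str:
    """Classify a BP reading as 'normal BP' or 'hypertension'."""
    return "hypertension" if sbp >= 140 or dbp >= 90 else "normal BP"

def bp_change_group(bp_1976: tuple, bp_1994: tuple) -> str:
    """Label the change in BP category from 1976 to 1994."""
    return f"{bp_status(*bp_1976)} → {bp_status(*bp_1994)}"

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

print(bp_change_group((120, 80), (150, 95)))  # normal BP → hypertension
print(round(bmi(70, 1.75), 1))                # 22.9
```

Under this scheme a participant is assigned to one of the four change groups named in the Methods (normal BP → normal BP, normal BP → hypertension, hypertension → normal BP, hypertension → hypertension).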
[ "intro", "methods", null, "subjects", null, null, null, null, "results", null, null, null, null, null, null, "discussion", "COI-statement", "supplementary-material" ]
[ "Blood pressure", "Stroke", "Mortality", "Mediation", "Cohort study" ]
Introduction: According to the China Hypertension Survey (2012–2015), hypertension in Chinese adults is characterized by high prevalence and low rates of awareness, treatment, and control.[1] Hypertension is associated with the morbidity, progression, and mortality of cardiovascular disease,[2–4] especially stroke-related death.[5] In China, high systolic blood pressure (SBP) ranked first in number of deaths, accounting for 2.54 million, and second in percentage of disability-adjusted life years, that is, the total healthy life years lost from the onset of hypertension to death.[6] Higher blood pressure (BP), as the strongest causal and most prevalent exposure factor, is therefore the leading attributable risk factor for stroke-related death worldwide.[7,8] Nevertheless, long-term observational research on the associations of BP and BP changes with stroke-related deaths, and on the pathways of these associations, is still rare, as most cohort studies have not been followed long enough. A cohort study with sufficiently long follow-up in a fixed population can not only capture the long-term association between BP level and death but also explore the influence pathways under the premise of causal ordering, avoiding reverse-causation bias. The present study performed a 45-year prospective cohort study to estimate the long-term influence of baseline BP and BP changes on stroke-related deaths and to explore the possible influencing pathways. Methods: Ethics approval The cohort was approved by the Ethics Committee of the People's Liberation Army General Hospital, China (EC0411-2001). All participants provided their written informed consent. Subjects The data in the current study originated from the Xi’an Machinery Factory cohort study, which included all employees aged ≥35 years.
The information from physical and biochemical examinations was collected in the hospital of the machinery factory and the teaching hospital of the Fourth Military Medical University, as previously reported.[9,10] From the baseline survey in 1976 to the latest follow-up in 2020, participants were followed at 3-year intervals. Information on demographics, physiological indices, and lifestyle factors was collected through face-to-face interviews by trained staff in 1976 and 1994. The cause of death was recorded in the follow-up every 4 years. In the baseline survey wave, a total of 1842 persons were recruited. After excluding those lost to follow-up or lacking baseline information, a total of 1696 subjects were enrolled. A total of 169 participants had died from stroke-related causes by the latest follow-up in December 2020. Exposure BP was measured twice at 10-min intervals by nurses using a stethoscope and a mercury sphygmomanometer, and the average value served as the BP value.
Participants with SBP <140 mmHg and diastolic blood pressure (DBP) <90 mmHg were defined as having normal BP, while those with SBP ≥140 mmHg and/or DBP ≥90 mmHg were defined as having hypertension.[11] To better understand the results, baseline SBP/DBP was grouped into <130 mmHg/<80 mmHg, 130 to 139 mmHg/80 to 89 mmHg, 140 to 159 mmHg/90 to 99 mmHg, and ≥160 mmHg/≥100 mmHg. Changes in BP categories from 1976 to 1994 were defined as normal BP → normal BP, normal BP → hypertension, hypertension → normal BP, and hypertension → hypertension.[11] Covariables Body mass index (BMI) was calculated as weight (kg) divided by height squared (m²). Total cholesterol (TC) and triglyceride were measured at the medical-insurance-designated hospital. Self-reported information on demographic characteristics (education, occupation, and marital status), family history of disease, and lifestyle factors (smoking and drinking) was collected during the investigation. Determination of the cause of death The medical-insurance-designated hospitals of all participants were fixed, and pension payments had to be reported to the local Ministry of Personnel; therefore, follow-up of death was 100% complete. The cause of death was verified according to ICD-10 (I64.X04) and ICD-11 (8B11, 8B20) by two doctors in the medical insurance hospital every 3 years. The endpoint of this study was stroke-related death. Statistical analysis Differences among gender and BP groups were assessed using Student's t test and the chi-squared test, according to the type of variable. The incidence density of mortality was calculated. The Cox proportional hazard model was used to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs) for death in association with baseline SBP/DBP categories and 19-year changes in BP. The Schoenfeld residual trend test was used to test the proportional hazards assumption in the associations of BP categories in 1976 and BP change in 1994 with stroke-related death, respectively. Both models met the proportional hazards precondition for the independent variables (BP/BP change) (P > 0.05).
Models were stratified by gender, and continuous variables (age, BMI, and TC) and categorical variables (marital status, education, occupation, smoking, drinking, and diabetes) were adjusted for. When comparing HRs across BP categories or BP changes, the floating absolute risk was used to estimate the HRs and 95% CIs.[12] The Cox–Stuart test was used to assess the trend of the adjusted HRs and 95% CIs. The associations between BP and death over such a long term are unlikely to be a simple direct influence, so possible pathways of association were also explored. Accordingly, possible mediation and moderation effects were analyzed.[13] Mediation models examine in what way, or through which variable, the independent variable affects the dependent variable; in this study, the independent variable (BP in 1976) can exert an indirect effect on the dependent variable (stroke-related deaths in 2020) through an intermediary variable (BMI in 1994). Moderation models examine how the influence of the independent variable on the dependent variable differs across situations or populations. All mediation and moderated mediation analyses were performed with the PROCESS scripts for SPSS 24.0 (IBM SPSS Statistics for Windows, Version 24.0. IBM, Armonk, NY, USA).[14] The simple mediation model was tested with PROCESS model 4, and the moderated mediation model with PROCESS model 5.[15] All mediation and moderated mediation models were based on a 5000-sample bootstrap set. E values, which relate to the potential for unmeasured confounding, were reported in the sensitivity analysis.[16] All analyses were performed using SPSS 24.0, STATA 15.0 (Stata Corp., College Station, TX, USA), and EmpowerStats (http://www.empowerstats.com, X&Y Solutions, Inc., Boston, MA, USA). Results: Basic characteristics of 1696 participants in 1976 Among the 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 (66.3%) were men. No statistically significant differences in SBP or DBP were found between male and female participants. A total of 617 participants (36.4%) were identified as having hypertension in 1976, with an average age of 46.04 ± 7.11 years and 64.8% men. Average age, TC, and BMI increased with BP level (Ptrend < 0.05) [Table 1 and Supplementary Table 1]. Characteristics of 1696 participants with different baseline BP in 1976. Data are presented as mean ± standard deviation or n (%). BMI: Body mass index; BP: Blood pressure; DBP: Diastolic blood pressure; SBP: Systolic blood pressure; TC: Total cholesterol; TG: Triglyceride. Stroke-related deaths according to the baseline BP After a 45-year follow-up, a total of 201 stroke-related deaths occurred.
The stroke-related mortality over 45 years was 11.9% (95% CI: 10.3–13.4%), and the incidence density was 0.26 per 100 person-years. The incidence density of stroke-related mortality was significantly higher in participants with hypertension than in those without, and higher in male than in female participants [Table 2]. Stroke-related mortality according to baseline hypertension (N = 1696). Data are presented as n/N, or n (range). The associations between BP and stroke-related mortality The Cox proportional hazard model showed that, after adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976 had a 225.8% higher risk of stroke-related death (HR = 3.258, 95% CI: 2.353–4.510). These risks were even more pronounced in male participants [Table 3]. HRs of stroke-related death associated with BP categories from 1976 to 2020. Adjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.
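The two summary quantities reported above, incidence density per 100 person-years and the percent risk increase implied by a hazard ratio, follow directly from their definitions. A minimal sketch (the cohort figures below are illustrative, except the study's reported HR of 3.258):

```python
# Incidence density: events per 100 person-years of follow-up.
def incidence_density(events: int, person_years: float) -> float:
    return 100.0 * events / person_years

# Percent increase in risk implied by a hazard ratio: (HR - 1) x 100%.
def pct_increase(hr: float) -> float:
    return (hr - 1.0) * 100.0

# Illustrative: 13 deaths over 5000 person-years gives 0.26 per 100 person-years.
print(incidence_density(13, 5000))
print(round(pct_increase(3.258), 1))  # 225.8, matching HR = 3.258 in the text
```

This is how "HR = 3.258" and "the risk increased by 225.8%" are two statements of the same estimate.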
Changes in BP levels and subsequent risk of stroke-related mortality During the 45-year follow-up from 1976 to 2020, 201 stroke-related deaths were recorded, of which 169 occurred from 1994 to 2020. After adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, changes in BP group from 1976 to 1994 were associated with stroke-related mortality. Compared with the normal BP → normal BP group, the adjusted HR for the hypertension → hypertension group was 2.104 (95% CI: 1.632–2.713) overall, and 2.415 (95% CI: 1.801–3.239) and 1.895 (95% CI: 1.109–3.239) in male and female participants, respectively [Table 4]. HRs of stroke-related death associated with changes of BP categories from 1994 to 2020. Adjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol. Sensitivity analysis E values were calculated in the sensitivity analysis and were higher than the corresponding HRs. Most E values were >2, indicating that substantial unmeasured confounding would be needed to explain away the observed HRs; this suggests the associations are relatively robust [Tables 3 and 4]. Results were consistent in the sensitivity analysis after excluding participants with cardiovascular diseases at baseline [Supplementary Tables 2 and 3].
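The E values used in the sensitivity analysis can be computed from a point estimate alone. A sketch using the standard VanderWeele–Ding formula, E = HR + sqrt(HR × (HR − 1)) for an HR above 1:

```python
import math

# E-value for a risk/hazard ratio (VanderWeele & Ding): the minimum strength of
# association an unmeasured confounder would need with both exposure and outcome
# to fully explain away the observed estimate.
def e_value(hr: float) -> float:
    if hr < 1:            # for protective estimates, invert first
        hr = 1.0 / hr
    return hr + math.sqrt(hr * (hr - 1.0))

print(round(e_value(3.258), 2))  # about 5.97 for the HR reported in Table 3
print(round(e_value(2.104), 2))  # for the hypertension -> hypertension HR
```

For example, the HR of 3.258 yields an E value near 5.97, well above 2, which is the pattern the text describes.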
The mediation and moderated mediation analysis We examined, adjusting for the potential confounding factors above, whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020, respectively. The mediation analysis showed that BMI in 1994, as a statistically significant mediator, partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020, and the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1]. The mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths. The moderated mediation analysis showed that the direct effect from SBP to stroke-related death of the mediation model above was moderated by gender, which indicated the effect of SBP on stroke-related death was moderated by male participants. With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2]. The moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of conditional process analysis (mediation and moderation) of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. 
The mediating effect accounted for 5.8%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths. We examined, adjusting for the potential confounding factors above, whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020, respectively. The mediation analysis showed that BMI in 1994, as a statistically significant mediator, partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020, and the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1]. The mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1’: the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths. The moderated mediation analysis showed that the direct effect from SBP to stroke-related death of the mediation model above was moderated by gender, which indicated the effect of SBP on stroke-related death was moderated by male participants. With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2]. The moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. 
Basic characteristics of 1696 participants in 1976: Among the 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 (66.3%) were men. No statistically significant differences in SBP or DBP were found between male and female participants. A total of 617 participants (36.4%) were identified with hypertension in 1976, with an average age of 46.04 ± 7.11 years; 64.8% were men. Average age, TC, and BMI rose with increasing BP level (Ptrend < 0.05) [Table 1 and Supplementary Table 1]. Characteristics of 1696 participants with different baseline BP in 1976. Data are presented as mean ± standard deviation or n (%). BMI: Body mass index; BP: Blood pressure; DBP: Diastolic blood pressure; SBP: Systolic blood pressure; TC: Total cholesterol; TG: Triglyceride.

Stroke-related deaths according to the baseline BP: After a 45-year follow-up, a total of 201 stroke-related deaths occurred. The stroke-related mortality over 45 years was 11.9% (95% CI: 10.3–13.4%), and the incidence density was 0.26 per 100 person-years. The incidence density of stroke-related mortality was significantly higher in participants with hypertension than in those without, and higher in male than in female participants [Table 2]. Stroke-related mortality according to the baseline hypertension (N = 1696).
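The reported 95% CI for the 45-year mortality is consistent with a normal-approximation (Wald) interval for a binomial proportion; a minimal sketch (the paper does not state which interval method was actually used):

```python
import math

deaths, n = 201, 1696  # stroke-related deaths among participants

p = deaths / n                          # observed proportion
se = math.sqrt(p * (1 - p) / n)         # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se   # Wald 95% confidence interval

print(f"mortality {p:.1%}, 95% CI: {lo:.1%}-{hi:.1%}")
# -> mortality 11.9%, 95% CI: 10.3%-13.4%
```

The result matches the 11.9% (95% CI: 10.3–13.4%) reported in the text.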
Data are presented as n/N or n (range).

The associations between BP and stroke-related mortality: The Cox proportional hazards model showed that, after adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, among the participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976, the risk of stroke-related death increased by 225.8% (HR = 3.258, 95% CI: 2.353–4.510). In these associations, the risks of stroke-related death were even more pronounced in male participants [Table 3]. HRs of stroke-related death associated with BP categories from 1976 to 2020. Adjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.

Changes in BP levels and subsequent risk of stroke-related mortality: During the 45-year follow-up from 1976 to 2020, 201 stroke-related deaths were recorded, of which 169 occurred from 1994 to 2020. After adjusting for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC, changes in BP category from 1976 to 1994 were associated with stroke-related mortality. Compared with the normal BP → normal BP group, the adjusted HR was 2.104 (95% CI: 1.632–2.713) for the hypertension → hypertension group, and the adjusted HRs were 2.415 (95% CI: 1.801–3.239) and 1.895 (95% CI: 1.109–3.239) in male and female participants, respectively [Table 4]. HRs of stroke-related death associated with changes of BP categories from 1994 to 2020. Adjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; BP: Blood pressure; CI: Confidence interval; DBP: Diastolic blood pressure; HRs: Hazard ratios; SBP: Systolic blood pressure; TC: Total cholesterol.
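The percent increases quoted here are the usual (HR − 1) × 100% transformation, and the E values used in the sensitivity analysis that follows can be computed with the standard VanderWeele–Ding formula for a ratio estimate above 1 (treating the HR as an approximate risk ratio); a minimal sketch using two HRs reported above:

```python
import math

def pct_increase(hr: float) -> float:
    """Percent increase in risk implied by a hazard ratio > 1."""
    return (hr - 1) * 100

def e_value(hr: float) -> float:
    """E-value for a ratio estimate > 1, treating the HR as an
    approximate risk ratio: E = HR + sqrt(HR * (HR - 1))."""
    return hr + math.sqrt(hr * (hr - 1))

for hr in (3.258, 2.104):  # HRs reported in Tables 3 and 4
    print(f"HR {hr}: +{pct_increase(hr):.1f}%, E-value {e_value(hr):.2f}")
# -> HR 3.258: +225.8%, E-value 5.97
# -> HR 2.104: +110.4%, E-value 3.63
```

Both computed E values exceed 2, consistent with the paper's statement that most E values were >2.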
Sensitivity analysis: E values were calculated in the sensitivity analysis and were higher than the corresponding HRs. Most E values were >2, indicating that substantial unmeasured confounding would be required to explain away the observed HRs, which implies the associations are relatively robust [Tables 3 and 4]. Results were consistent in the sensitivity analysis after excluding participants with cardiovascular disease at baseline [Supplementary Tables 2 and 3].

The mediation and moderated mediation analysis: Adjusting for the potential confounding factors above, we examined whether BMI in 1994 mediated the influence of BP in 1976 (SBP and DBP) on stroke-related deaths in 2020. The mediation analysis showed that BMI in 1994 was a statistically significant mediator that partially mediated the effect of SBP in 1976 on stroke-related deaths in 2020; the mediating effect accounted for 10.1% of the total effect [Table 5 and Figure 1]. The mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of the mediating effect of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 10.1%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1': the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths. The moderated mediation analysis showed that the direct effect of SBP on stroke-related death in the mediation model above was moderated by sex, with a stronger effect in male participants.
With the moderating effect, the mediating effect accounted for 5.81% of the total effect [Table 6 and Figure 2]. The moderated mediation analysis of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. Adjusted for age, sex, BMI, marital status, education, occupation, smoking, drinking, diabetes, and TC. BMI: Body mass index; CI: Confidence interval; SBP: Systolic blood pressure; TC: Total cholesterol. The flow of the conditional process analysis (mediation and moderation) of BMI in 1994 between SBP in 1976 and stroke-related deaths in 2020. The mediating effect accounted for 5.8%. BMI: Body mass index; SBP: Systolic blood pressure. a1: the effect size of SBP on BMI; b1: the effect size of BMI on stroke-related deaths; c1': the direct effect size of SBP on stroke-related deaths; c1: the total effect size of SBP on stroke-related deaths.

Discussion: Our results show long-term positive associations between BP, BP changes, and stroke-related death, highlighting the complexity of the models underlying these associations. The association between BP and stroke-related death was mediated by BMI and moderated by sex. The average SBP and DBP levels were 125.6 ± 20.6 mmHg and 82.3 ± 18.2 mmHg in 1976, and the average values in 1994 were 127.7 ± 18.2 mmHg and 83.3 ± 10.5 mmHg. In 1976, China was at the early stage of reform and opening-up, in a lag period of economic development; by 1994, China had entered a period of rapid economic growth. The average BP level increased slightly, a trend consistent with prior studies.[1,17,18] The prevalence of hypertension has gradually increased with age and across periods. Exposure to high BP is an independent risk factor for the incidence, progression, and mortality of stroke. Whether this causal relationship strengthens with longer duration of exposure to high BP was unclear.
Studies of this association have been limited by insufficient follow-up time, which cannot rule out reverse causation. Most observational studies have focused on the morbidity, progression, and mortality of cardiovascular disease, reflecting both the long-term nature of the association between BP and death and insufficient follow-up time.[2,19] In this study, 19-year changes in BP were recorded, and the relationship between changes in BP and stroke-related deaths was explored. Changes in BP may be due to lifestyle changes, drug intervention, and other factors. The results show that long-term hypertension was associated with stroke-related death to varying degrees, whereas the associations were absent in participants with no significant increase in BP. It can be speculated that effective control of BP may reverse or reduce the risk of stroke-related death. Supporting this point, prior studies have reported that BP reductions from antihypertensive medications provide greater vascular benefits.[20,21] The mediation test suggested that BMI mediated the influence of SBP on stroke-related death: the higher the SBP, the higher the BMI, and higher BMI was in turn associated with a greater likelihood of stroke-related death. The moderated mediation results suggested that this mediation operated in male participants. To our knowledge, no similar study has examined these models.[22] This study has some limitations. First, the results from the Xi'an machinery factory cohort may not generalize to other populations. Second, family disease histories at baseline were self-reported, so recall bias is difficult to avoid. Third, information on antihypertensive drugs for cardiovascular disease, which might influence BP levels, was missing; this might lead to underestimation of the harmful associations between BP and death outcomes.
To conclude, the current study demonstrated that high BP and increases in BP over time were associated with stroke-related death, and that mediating and moderated mediating effects significantly shaped these associations. This indicates that the association between high BP and stroke-related death is a long-term, multipath process, further supporting the necessity of hypertension control. Conflicts of interest: None. Supplementary Material:
Background: Hypertension is associated with stroke-related mortality. However, the long-term association of blood pressure (BP) with the risk of stroke-related mortality, and the path by which BP influences stroke-related death, remain unknown. The current study aimed to estimate the long-term causal associations between BP and stroke-related mortality and the potential mediating and moderated mediating models of these associations. Methods: This was a 45-year follow-up cohort study; a total of 1696 subjects were enrolled in 1976, and 1081 participants had died by the latest follow-up in 2020. The Cox proportional hazards model was used to explore the associations of stroke-related death with baseline systolic blood pressure (SBP)/diastolic blood pressure (DBP) categories and BP changes from 1976 to 1994. Mediating and moderated mediating analyses were performed to detect possible influencing paths from BP to stroke-related death. The E value was calculated in the sensitivity analysis. Results: Among 1696 participants, the average age was 44.38 ± 6.10 years, and 1124 (66.3%) were men. After a 45-year follow-up, a total of 201 (11.9%) stroke-related deaths occurred. After adjustment, the Cox proportional hazards model showed that among participants with SBP ≥160 mmHg or DBP ≥100 mmHg in 1976, the risk of stroke-related death increased by 217.5% (hazard ratio [HR] = 3.175, 95% confidence interval [CI]: 2.297–4.388), and the adjusted HRs were higher in male participants. Among participants with hypertension in both 1976 and 1994, the risk of stroke-related death increased by 110.4% (HR = 2.104, 95% CI: 1.632–2.713), and the adjusted HRs for the BP changes were higher in male participants. Body mass index (BMI) significantly mediated the association between SBP and stroke-related death, and this mediating effect was moderated by gender.
Conclusions: Over a 45-year follow-up, high BP and persistent hypertension were associated with stroke-related death, and these associations were even more pronounced in male participants. The paths of association were mediated by BMI and moderated by gender.
Keywords: Blood pressure; Stroke; Mortality; Mediation; Cohort study
MeSH terms: Adult; Blood Pressure; China; Follow-Up Studies; Humans; Hypertension; Male; Middle Aged; Risk Factors; Stroke
Professional Status of Infectious Disease Specialists in Korea: A Nationwide Survey (PMID: 36472083; PMCID: 9723190)

Background: Infectious disease (ID) specialists are skilled facilitators of medical consultation who promote better outcomes in patient survival, antibiotic stewardship, and healthcare safety in pandemic response. This study aimed to assess the working status of ID specialists and identify problems faced by ID professionals in Korea.

Methods: This was a nationwide cross-sectional study in Korea. An online-based survey was conducted over 11 days (December 17-27, 2020), targeting all active adult (n = 281) and pediatric (n = 71) ID specialists in Korea (N = 352). Questions regarding the practice areas of the specialists were divided into five categories: 1) clinical practices of outpatient care, inpatient care, and consultations; 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. We investigated the weekly time-use patterns for these areas of practice.

Results: Of the 352 ID specialists, 195 (55.4%; 51.2% [144/281] adult and 71.8% [51/71] pediatric ID specialists) responded to the survey. Moreover, 144 (73.8%) of the respondents were involved in all practice categories investigated. The most common practice area was outpatient service (93.8%), followed by consultation (91.3%) and inpatient service (87.7%). Specialists worked a median of 61 (interquartile range: 54-71) hours weekly: patient care, 29 (14-37) hours; research, 11 (5-19) hours; infection control, 4 (2-10) hours; antibiotic stewardship, 3 (1-5) hours; and education/training, 2 (2-6) hours.

Conclusion: ID specialists in Korea simultaneously undertake multiple tasks and work long hours, highlighting the need for training and employing more ID specialists.

MeSH terms: Adult; Humans; Child; Cross-Sectional Studies; Specialization; Republic of Korea; Communicable Diseases; Surveys and Questionnaires
INTRODUCTION
The current roles of infectious disease (ID) specialists are diverse, including diagnosis and treatment of various IDs, infection control, antibiotic stewardship, response to disease outbreaks, and vaccination. The social need for ID specialists is higher than ever because of the emergence of antimicrobial-resistant pathogens and recurrent outbreaks of emerging IDs. In particular, coronavirus disease 2019 (COVID-19) has affected more than 150 million people worldwide, including more than 128,000 people in Korea, since January 2020, creating further demand for ID specialists.12 Unfortunately, the number of ID specialists in several countries is suboptimal, and the number of applicants to ID training programs is insufficient.3456 In 2019, there were 242 active adult ID specialists in Korea, representing 0.42/100,000 of the population. One ID specialist in Korea was in charge of 342 hospital beds, more than in other countries, including the US, Europe, and Brazil.3467 Experienced ID specialists improve clinical outcomes; direct system-level improvements through infection control and antimicrobial stewardship enhance patient satisfaction while optimizing the overall quality of care. However, a shortage of ID specialists can lead to unfavorable public health outcomes, including the emergence of antibiotic-resistant bacteria89 and poor response to ID epidemics.10 The shortage can also lead to long work hours and low job satisfaction among ID specialists, in turn yielding few applicants for ID specialist training.1112 Recently, we analyzed the current working status and geographical distribution of adult ID specialists.3 However, there is limited information about their actual scope of work and the time spent relative to their responsibilities.
This study aimed to analyze the areas of practice and time spent in each area among adult and pediatric ID specialists in Korea and to identify commonly encountered problems and propose solutions from the perspective of ID specialists.
METHODS
Study design and population: A survey was conducted on December 17–27, 2020, targeting all adult and pediatric ID specialists (N = 392) in Korea. At the time of the survey, 40 experts who had either retired or passed away were excluded. In total, 352 ID specialists (281 adult ID physicians and 71 pediatric ID specialists) were identified as potential participants. An online-based survey link was forwarded to them via text messages and e-mails by the offices of the Korean Society of Infectious Diseases and the Korean Society of Paediatric Infectious Diseases. To encourage participation, we sent a reminder on the fifth day. Responders were anonymized, and only one response per participant was accepted.

Survey items: Survey items included baseline characteristics of respondents (age, sex, marital status, number of children, and academic degree), type of working institution, job title, and practice area. Questions regarding the practice areas of ID specialists were divided into five categories: 1) clinical practices of outpatient care, inpatient care, and consultations; 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. The education and training area included all kinds of lectures or training for students/residents or health care workers in non-clinical positions. All practices, except research, were based on activities conducted in the year before the survey period (December 2019–November 2020), while research-related activities were based on the three years before the survey period (December 2017–November 2020). We specified that the sum of the weights for each expert's clinical and research fields was 100%. Items related to job satisfaction were surveyed using a 5-point Likert scale. To determine compensation, we investigated vacation benefits and average annual income.

Weekly patterns of time use: Respondents selected one week (Monday to Sunday) between November 2, 2020 and December 6, 2020, and chose the activities they performed from 6 am to midnight. One of nine activities was entered on an hourly basis: 1) outpatient care; 2) consultation; 3) inpatient care/rounding; 4) education/training; 5) research; 6) infection control; 7) antibiotic stewardship; 8) volunteer work; and 9) participation in conferences (except infection control meetings). Additionally, the start and finish times of each daily work period from Monday to Saturday were recorded to investigate weekly working hours.

Statistical analysis: SPSS version 24.0 for Windows (IBM, Armonk, NY, USA) was used for statistical analysis. Chi-square or Fisher's exact tests were used to compare categorical variables. Continuous variables were compared using the Student's t-test or Mann-Whitney U test, as appropriate. P values < 0.05 were considered statistically significant.

Ethics statement: The study protocol was approved by the Institutional Review Board of Soonchunhyang University Seoul Hospital (No. 2020-05-016). Online written informed consent was obtained from all participants.
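The weekly time-use results are reported as medians with interquartile ranges. A minimal sketch of that summary from hourly logs, using hypothetical weekly totals (the data below are illustrative, not from the study, and SPSS may use a different quantile rule, so IQR endpoints can differ slightly):

```python
import statistics

# Hypothetical weekly working hours for five respondents (illustrative only)
weekly_hours = [54, 58, 61, 65, 71]

median = statistics.median(weekly_hours)
# Quartiles via the stdlib default "exclusive" method;
# q[0] and q[2] are Q1 and Q3 of the interquartile range
q = statistics.quantiles(weekly_hours, n=4)

print(f"median {median} h/week, IQR {q[0]:.0f}-{q[2]:.0f}")
# -> median 61 h/week, IQR 56-68
```

The same median/IQR summary pattern applies to each activity category (patient care, research, infection control, and so on).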
RESULTS
Demographic characteristics of respondents: Overall, 195 (55.4%) ID specialists (144 adult and 51 pediatric ID specialists) completed the survey. Detailed baseline characteristics are shown in Table 1. The majority of respondents (192, 98.5%) worked in a clinical position. Most ID specialists (181, 92.8%) worked in acute-care referral hospitals, with two thirds of respondents (127, 65.1%) working in metropolitan areas (Supplementary Table 1). Data are number (%) of patients, unless otherwise indicated. ID = infectious disease, IQR = interquartile range. aNon-clinical areas included pharmaceutical companies (n = 2) and a life science company (n = 1). bOthers included pharmaceutical companies (n = 2), life science companies (n = 1), laboratories (n = 1), and medical schools (n = 1). In total, 144 (73.8%) respondents were involved in all of the following practices: inpatient and outpatient care, consultation, infection control, antibiotic stewardship, research, and education/training. Adult ID specialists were more involved in consultation, infection control, antibiotic stewardship, and participation in the public sector (Table 1). The major areas of specialization were bacterial/viral infections and infection control (Fig. 1A). HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.

Clinical practices of the respondents: The most common area of clinical practice was bacterial/viral diseases, followed by fever of unknown origin and immunocompromised infection (Fig. 1B, Table 2). Human immunodeficiency virus/acquired immune deficiency syndrome, parasitic infection, immunocompromised infection, and occupational exposure were more common among adult ID specialists. In contrast, pediatric ID specialists were more often involved in vaccination/travel clinics. The majority of ID specialists had 6–10 hospitalized patients per day (48.5%); 41.9% of adult ID specialists had 6–10 hospitalized patients, while 61.9% of pediatric ID specialists had fewer than five hospitalized patients per day. The number of patients per outpatient clinic was similar between adult and pediatric ID specialists, with the majority having 11–20 patients per section. A large percentage of adult ID specialists (29.6%) performed >20 formal consultations per day, while the majority of pediatric ID specialists (76.7%) had <5 formal consultations per day. The number of informal consultations per day was similar between the two groups. Consultations were conducted without assistive personnel for 73.0% of the respondents. Twenty-three percent of adult ID specialists participated in pediatric ID consultation, and 7.0% of pediatric ID specialists participated in adult ID consultation. Data are number (%) of patients, unless otherwise indicated. ID = infectious disease, IQR = interquartile range, HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria. aConsultation via formal paperwork. bConsultation via informal communication (e.g., telephone, text messages). Among 183 respondents, 131 (71.6%) answered that their institution took care of hospitalized patients with COVID-19, of whom 105 (80.2%) participated in COVID-19 care (Supplementary Table 2).

Infection control and antibiotic stewardship: Participation in infection control and antibiotic stewardship activities was more common among adult ID specialists than among pediatric ID specialists (90.3% vs. 74.5%, P = 0.009) (Table 1). Among 183 respondents participating in infection control activities, 96 (52.5%) had been an infection control team chair, for a median period of five years (interquartile range [IQR]: 2–10 years) (Supplementary Table 3). The most common infection control activities were attending infection control meetings (97.0%), responding to emerging IDs (91.1%), and responding to unexpected exposure to transmissible diseases (88.7%). The most common antibiotic stewardship activities were review and approval of restricted antibiotics (81.5%), active monitoring of antibiotic prescriptions (51.8%), and review of surgical prophylactic antibiotics (42.8%).

Research: Overall, 164 (84.1%) respondents were involved in research, including clinical research (88.4%), basic research (19.2%), and research in public health/epidemiology (18.0%) (Supplementary Table 4). The main research interests of adult ID specialists were bacterial/viral infections (78.5%), infection control (40.8%), and antibiotic stewardship (27.7%), while those of pediatric ID specialists were bacterial/viral infections (85.7%), vaccination (59.5%), and immunocompromised patients (23.8%). Most respondents (90.2%) had published in Science Citation Index/Expanded (SCI/E) journals, and the median number of publications as first or corresponding author in SCI/E journals within 3 years was three (IQR: 2–6). Of the ID specialists involved in research, 59.8% had acquired research funds. Detailed results by position (professor, clinician, or non-clinical position) are shown in Supplementary Table 5.
The main research interests of adult ID specialists were bacterial/viral infections (78.5%), infection control (40.8%), and antibiotic stewardship (27.7%), while those of pediatric ID specialists were bacterial/viral infections (85.7%), vaccination (59.5%), and immunocompromised patients (23.8%). Most respondents (90.2%) had published in Science Citation Index/Expanded (SCI/E) journals, and the median number of publications as the first or corresponding author to SCI/E journals within 3 years was three (IQR: 2–6). Of the ID specialists involved in research, 59.8% had acquired research funds. Detailed response results according to position (professor, clinician, or non-clinical position) are shown in Supplementary Table 5. Education and training Overall, 153 respondents (78.5%) were involved in education and training. The contents of education/training included ID (93.5%), antibiotics (75.9%), infection control (75.9%), and vaccination (58.2%) (Supplementary Table 6). Overall, 153 respondents (78.5%) were involved in education and training. The contents of education/training included ID (93.5%), antibiotics (75.9%), infection control (75.9%), and vaccination (58.2%) (Supplementary Table 6). Weekly patterns of time use In total, 153 (43.5%) ID specialists reported weekly patterns of time use. Detailed results are shown in Table 3. Weekly working hours were longer among adult ID specialists than among pediatric ID specialists (median: 59.0 vs. 55.0 hours from Monday to Friday, P = 0.005; median: 62.0 vs. 57.5 hours from Monday to Saturday, P = 0.015). Among activities, ID specialists spent the longest hours on patient care, especially outpatient services (median: 12 hours, IQR: 7–16 hours), followed by inpatient services (median: 10 hours, IQR: 6–13 hours) and consultation (median: 8 hours, IQR: 4–14 hours), altogether resulted in a median of 29 hours (IQR: 14–37 hours) each week. 
Adult ID specialists spent more time on consultation, infection control, and antibiotic stewardship, while pediatric ID specialists spent more time on outpatient services, research, and volunteer medical services. Data are presented as median (interquartile range). ID = infectious disease. In total, 153 (43.5%) ID specialists reported weekly patterns of time use. Detailed results are shown in Table 3. Weekly working hours were longer among adult ID specialists than among pediatric ID specialists (median: 59.0 vs. 55.0 hours from Monday to Friday, P = 0.005; median: 62.0 vs. 57.5 hours from Monday to Saturday, P = 0.015). Among activities, ID specialists spent the longest hours on patient care, especially outpatient services (median: 12 hours, IQR: 7–16 hours), followed by inpatient services (median: 10 hours, IQR: 6–13 hours) and consultation (median: 8 hours, IQR: 4–14 hours), altogether resulted in a median of 29 hours (IQR: 14–37 hours) each week. Adult ID specialists spent more time on consultation, infection control, and antibiotic stewardship, while pediatric ID specialists spent more time on outpatient services, research, and volunteer medical services. Data are presented as median (interquartile range). ID = infectious disease. Job satisfaction and compensations Among ID specialists, 37.4% (n = 73) responded that they were satisfied with their current job. The percentage of respondents who answered positively to question whether they would select the ID major if they had to choose again was higher for the adult ID specialist group than for the pediatric ID specialist group (P = 0.004) (Supplementary Table 7). Factors for satisfaction as ID specialists were shown in the Supplementary Table 8. In a multivariable logistic analysis, characteristics with male gender and working area in Seoul, Incheon, or Gyeonggi-do were significantly associated with job satisfaction (Supplementary Table 8). 
Most ID specialists spent 5–10 days of vacation per year (52.5%), earning 4,479–89,572 USD per year (46.2%) (Supplementary Table 9). Respondents answered that the ideal number of hospital beds covered by one adult and pediatric ID specialist was 151–200 beds (30.8%) and 401–500 beds (30.3%), respectively (Supplementary Table 10). Among ID specialists, 37.4% (n = 73) responded that they were satisfied with their current job. The percentage of respondents who answered positively to question whether they would select the ID major if they had to choose again was higher for the adult ID specialist group than for the pediatric ID specialist group (P = 0.004) (Supplementary Table 7). Factors for satisfaction as ID specialists were shown in the Supplementary Table 8. In a multivariable logistic analysis, characteristics with male gender and working area in Seoul, Incheon, or Gyeonggi-do were significantly associated with job satisfaction (Supplementary Table 8). Most ID specialists spent 5–10 days of vacation per year (52.5%), earning 4,479–89,572 USD per year (46.2%) (Supplementary Table 9). Respondents answered that the ideal number of hospital beds covered by one adult and pediatric ID specialist was 151–200 beds (30.8%) and 401–500 beds (30.3%), respectively (Supplementary Table 10). Main problems and complaints To foster ID specialty in Korea, ID specialists suggested that they should be appropriately compensated, especially for infection control or antibiotic stewardship activities (n = 91, 37.0%), and that additional ID specialists are necessary (n = 61, 24.8%). Respondents also suggested that the opinions of ID specialists should be respected and reflected in government policies (n = 34, 13.8%) (Supplementary Table 11). 
To foster ID specialty in Korea, ID specialists suggested that they should be appropriately compensated, especially for infection control or antibiotic stewardship activities (n = 91, 37.0%), and that additional ID specialists are necessary (n = 61, 24.8%). Respondents also suggested that the opinions of ID specialists should be respected and reflected in government policies (n = 34, 13.8%) (Supplementary Table 11).
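The between-group comparisons above were computed in SPSS with chi-square or Fisher's exact tests (see Methods). As a rough illustration only, the infection control participation comparison (90.3% vs. 74.5% of adult vs. pediatric specialists, P = 0.009) can be approximated in pure Python; this is a hedged sketch, not the authors' analysis, and the counts (130/144 adult, 38/51 pediatric) are back-calculated from the reported percentages, so the resulting P value only lands near the reported 0.009.

```python
import math

def chi2_yates(a, b, c, d):
    """Chi-square test with Yates continuity correction for a 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided P for 1 degree of freedom)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n
        stat += (abs(obs - exp) - 0.5) ** 2 / exp
    # Survival function of the chi-square distribution with dof = 1
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts reconstructed from the reported percentages:
# 90.3% of 144 adult and 74.5% of 51 pediatric ID specialists participating.
stat, p = chi2_yates(130, 14, 38, 13)
print(f"chi2 = {stat:.2f}, P = {p:.3f}")  # P is close to the reported 0.009
```

The paper's exact P may differ slightly because SPSS may have used the uncorrected chi-square or Fisher's exact test, and the true cell counts are not published.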
null
null
[ "Study design and population", "Survey items", "Weekly patterns of time use", "Statistical analysis", "Demographic characteristics of respondents", "Clinical practices of the respondents", "Infection control and antibiotic stewardship", "Research", "Education and training", "Weekly patterns of time use", "Job satisfaction and compensations", "Main problems and complaints" ]
[ "A survey was conducted on December 17–27, 2020, targeting all adult and pediatric ID specialists (N = 392) in Korea. At the time of the survey, 40 experts that were either retired or had passed away were excluded. In total, 352 ID specialists (281 adult ID physicians and 71 pediatric ID specialists) were identified as potential participants in the survey. An online-based survey link was forwarded to them via text messages and e-mails by the office of the Korean Society of Infectious Diseases and the Korean Society of Paediatric Infectious Diseases. To encourage participation, we sent a reminder on the fifth day. The responders were anonymized, and only one response from each participant was accepted.", "Survey items included baseline characteristics of respondents (age, sex, marital status, number of children, and academic degree), type of working institution, job title, and practice area. Questions regarding the practice areas of ID specialists were divided into five categories: 1) clinical practices of outpatient care, inpatient care, and consultations; 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. The education and training area includes all kinds of lectures or training for students/residents or non-clinical position health care workers. All practices, except research, were based on activities conducted a year before the survey period (December 2019–November 2020). Meanwhile, research-related activities were based on the three years before the survey period (December 2017–November 2020). We determined that the sum of the weights for each expert’s clinical and research fields was 100%. Items related to job satisfaction were surveyed using a 5-point Likert scale. To determine compensation, we investigated vacation benefits and average annual income.", "Respondents selected one week (from Monday to Sunday) between November 2, 2020 and December 6, 2020, and chose activities they performed from 6 am to midnight. 
One of the nine activities was entered on an hourly basis: 1) outpatient care; 2) consultation; 3) inpatient care/rounding; 4) education/training; 5) research; 6) infection control; 7) antibiotic stewardship; 8) volunteer work; and 9) participation in conferences (except infection control meetings). Additionally, the start and finish times for a daily work period from Monday to Saturday were recorded to investigate the working hours in a week.", "SPSS version 24.0 for Windows (IBM, Armonk, NY, USA) was used for statistical analysis. Chi-square or Fisher’s exact tests were used to compare categorical variables. Continuous variables were compared using the Student’s t-test or Mann-Whitney U test, as appropriate. Variables with P values < 0.05 were considered statistically significant.", "Overall, 195 (55.4%) ID specialists (144 adult ID specialists and 51 pediatric ID specialists) completed the survey. Detailed baseline characteristics are shown in Table 1. Majority of the respondents (192, 98.5%) worked in a clinical position. Most ID specialists (181, 92.8%) worked in acute-care referral hospitals, with two thirds of respondents (127, 65.1%) working in metropolitan areas (Supplementary Table 1).\nData are number (%) of patients, unless otherwise indicated.\nID = infectious disease, IQR = interquartile range.\naNon-clinical areas included pharmaceutical companies (n = 2) and a life science company (n = 1).\nbOthers included pharmaceutical companies (n = 2), life science companies (n = 1), laboratories (n = 1), and medical schools (n = 1).\nIn total, 144 (73.8%) respondents were involved in all of the following practices: inpatient and outpatient care, consultation, infection control, antibiotic stewardship, research, and education/training. Adult ID specialists were more involved in the practices of consultation, infection control, antibiotic stewardship, and participation in the public sector (Table 1). 
The major areas of specialization were bacterial/viral infections and infection control (Fig. 1A).\nHIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.", "The most common area of clinical practice was bacterial/viral diseases, followed by fever of unknown origin, and immunocompromised infection (Fig. 1B, Table 2). Human immunodeficiency virus/acquired immune deficiency syndrome, parasitic infection, immunocompromised infection, and occupational exposure were more common among adult ID specialists. In contrast, pediatric ID specialists were more often involved in vaccination/travel clinics. The majority of ID specialists had 6–10 hospitalized patients per day (48.5%); 41.9% of adult ID specialists had 6–10 hospitalized patients, while 61.9% of pediatric ID specialists had fewer than five hospitalized patients per day. The number of patients per outpatient clinic was similar between adult and pediatric ID specialists, with the majority having 11–20 patients per section. A large percentage of adult ID specialists (29.6%) performed > 20 formal consultations per day, while the majority of pediatric ID specialists (76.7%) had < 5 formal consultations per day. The number of informal consultations per day was similar between the two groups. Consultations were conducted without assistive personnel for 73.0% of the respondents. 
Twenty-three percent of adult ID specialists participated in pediatric ID consultation, and 7.0% of pediatric ID specialists participated in adult ID consultation.\nData are number (%) of patients, unless otherwise indicated.\nID = infectious disease, IQR = interquartile range, HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.\naConsultation via formal paperwork.\nbConsultation via informal communication (e.g., telephone, text messages).\nAmong 183 respondents, 131 (71.6%) answered that their institution took care of hospitalized patients with COVID-19, of whom 105 (80.2%) participated in COVID-19 care (Supplementary Table 2).", "Participation in infection control and antibiotic stewardship activities was more common among adult ID specialists (90.3% vs. 74.5%, P = 0.009) than among pediatric ID specialists (Table 1). Among 183 respondents participating in infection control activities, 96 (52.5%) had been an infection control team chair, with a median period of five years (interquartile range [IQR]: 2–10 years) (Supplementary Table 3). The most common infection control activities were attending infection control meetings (97.0%), responding to emerging ID (91.1%), and responding to unexpected exposure to transmissible diseases (88.7%). The most common antibiotic stewardship activities were review and approval of restricted antibiotics (81.5%), active monitoring of antibiotic prescription (51.8%), and review of surgical prophylactic antibiotics (42.8%).", "Overall, 164 (84.1%) respondents were involved in research, including clinical research (88.4%), basic research (19.2%), and research in public health/epidemiology (18.0%) (Supplementary Table 4). 
The main research interests of adult ID specialists were bacterial/viral infections (78.5%), infection control (40.8%), and antibiotic stewardship (27.7%), while those of pediatric ID specialists were bacterial/viral infections (85.7%), vaccination (59.5%), and immunocompromised patients (23.8%). Most respondents (90.2%) had published in Science Citation Index/Expanded (SCI/E) journals, and the median number of publications as the first or corresponding author to SCI/E journals within 3 years was three (IQR: 2–6). Of the ID specialists involved in research, 59.8% had acquired research funds. Detailed response results according to position (professor, clinician, or non-clinical position) are shown in Supplementary Table 5.", "Overall, 153 respondents (78.5%) were involved in education and training. The contents of education/training included ID (93.5%), antibiotics (75.9%), infection control (75.9%), and vaccination (58.2%) (Supplementary Table 6).", "In total, 153 (43.5%) ID specialists reported weekly patterns of time use. Detailed results are shown in Table 3. Weekly working hours were longer among adult ID specialists than among pediatric ID specialists (median: 59.0 vs. 55.0 hours from Monday to Friday, P = 0.005; median: 62.0 vs. 57.5 hours from Monday to Saturday, P = 0.015). Among activities, ID specialists spent the longest hours on patient care, especially outpatient services (median: 12 hours, IQR: 7–16 hours), followed by inpatient services (median: 10 hours, IQR: 6–13 hours) and consultation (median: 8 hours, IQR: 4–14 hours), altogether resulted in a median of 29 hours (IQR: 14–37 hours) each week. 
Adult ID specialists spent more time on consultation, infection control, and antibiotic stewardship, while pediatric ID specialists spent more time on outpatient services, research, and volunteer medical services.\nData are presented as median (interquartile range).\nID = infectious disease.", "Among ID specialists, 37.4% (n = 73) responded that they were satisfied with their current job. The percentage of respondents who answered positively to question whether they would select the ID major if they had to choose again was higher for the adult ID specialist group than for the pediatric ID specialist group (P = 0.004) (Supplementary Table 7). Factors for satisfaction as ID specialists were shown in the Supplementary Table 8. In a multivariable logistic analysis, characteristics with male gender and working area in Seoul, Incheon, or Gyeonggi-do were significantly associated with job satisfaction (Supplementary Table 8). Most ID specialists spent 5–10 days of vacation per year (52.5%), earning 4,479–89,572 USD per year (46.2%) (Supplementary Table 9). Respondents answered that the ideal number of hospital beds covered by one adult and pediatric ID specialist was 151–200 beds (30.8%) and 401–500 beds (30.3%), respectively (Supplementary Table 10).", "To foster ID specialty in Korea, ID specialists suggested that they should be appropriately compensated, especially for infection control or antibiotic stewardship activities (n = 91, 37.0%), and that additional ID specialists are necessary (n = 61, 24.8%). Respondents also suggested that the opinions of ID specialists should be respected and reflected in government policies (n = 34, 13.8%) (Supplementary Table 11)." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Study design and population", "Survey items", "Weekly patterns of time use", "Statistical analysis", "Ethics statement", "RESULTS", "Demographic characteristics of respondents", "Clinical practices of the respondents", "Infection control and antibiotic stewardship", "Research", "Education and training", "Weekly patterns of time use", "Job satisfaction and compensations", "Main problems and complaints", "DISCUSSION" ]
[ "The current roles of infectious disease (ID) specialists are diverse, including diagnosis and treatment of various IDs, infection control, antibiotic stewardship, response to disease outbreaks, and vaccination. The social need for ID specialists is higher than ever because of the emergence of antimicrobial-resistant pathogens and recurrent outbreaks caused by emerging IDs. In particular, the spread of coronavirus disease 2019 (COVID-19) has affected more than 150 million people worldwide—including more than 128,000 people in Korea—since January 2020. This has created demand for ID specialists.12 Unfortunately, the number of ID specialists in several countries is suboptimal, and the number of applicants into ID training programs is insufficient.3456\nIn 2019, there were 242 active adult ID specialists in Korea, representing 0.42/100,000 of the population. One ID specialist in Korea was in charge of 342 hospital beds, which was higher than that in other countries, including the US, Europe, and Brazil.3467 Experienced ID specialists improve clinical outcomes. Direct system level improvements through infection and antimicrobial stewardship subsequently enhance patient satisfaction while optimizing the overall quality of care. However, the shortage of ID specialists can lead to unfavorable public health outcomes, including the emergence of a number of antibiotic-resistant bacteria89 and poor response to ID epidemics.10 Additionally, the shortage can lead to long work hours and low job satisfaction among ID specialists, leading to few applicants for ID specialist courses.1112\nRecently, we analyzed the current working status and geographical distribution of adult ID specialists.3 However, there is limited information about the actual scope of work and time spent relative to their responsibilities. 
This study aimed to analyze the areas of practice and time spent in each area among adult and pediatric ID specialists in Korea and to identify commonly encountered problems and propose solutions from the perspective of ID specialists.", "Study design and population A survey was conducted on December 17–27, 2020, targeting all adult and pediatric ID specialists (N = 392) in Korea. At the time of the survey, 40 experts that were either retired or had passed away were excluded. In total, 352 ID specialists (281 adult ID physicians and 71 pediatric ID specialists) were identified as potential participants in the survey. An online-based survey link was forwarded to them via text messages and e-mails by the office of the Korean Society of Infectious Diseases and the Korean Society of Paediatric Infectious Diseases. To encourage participation, we sent a reminder on the fifth day. The responders were anonymized, and only one response from each participant was accepted.\nSurvey items Survey items included baseline characteristics of respondents (age, sex, marital status, number of children, and academic degree), type of working institution, job title, and practice area. 
Questions regarding the practice areas of ID specialists were divided into five categories: 1) clinical practices of outpatient care, inpatient care, and consultations; 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. The education and training area includes all kinds of lectures or training for students/residents or non-clinical position health care workers. All practices, except research, were based on activities conducted a year before the survey period (December 2019–November 2020). Meanwhile, research-related activities were based on the three years before the survey period (December 2017–November 2020). We determined that the sum of the weights for each expert’s clinical and research fields was 100%. Items related to job satisfaction were surveyed using a 5-point Likert scale. To determine compensation, we investigated vacation benefits and average annual income.\nWeekly patterns of time use Respondents selected one week (from Monday to Sunday) between November 2, 2020 and December 6, 2020, and chose activities they performed from 6 am to midnight. One of the nine activities was entered on an hourly basis: 1) outpatient care; 2) consultation; 3) inpatient care/rounding; 4) education/training; 5) research; 6) infection control; 7) antibiotic stewardship; 8) volunteer work; and 9) participation in conferences (except infection control meetings). Additionally, the start and finish times for a daily work period from Monday to Saturday were recorded to investigate the working hours in a week.\nStatistical analysis SPSS version 24.0 for Windows (IBM, Armonk, NY, USA) was used for statistical analysis. Chi-square or Fisher’s exact tests were used to compare categorical variables. Continuous variables were compared using the Student’s t-test or Mann-Whitney U test, as appropriate. Variables with P values < 0.05 were considered statistically significant.\nEthics statement The study protocol was approved by the Institutional Review Board of Soonchunhyang University Seoul Hospital (No. 2020-05-016). Online written informed consent was obtained from all participants.", "A survey was conducted on December 17–27, 2020, targeting all adult and pediatric ID specialists (N = 392) in Korea. At the time of the survey, 40 experts that were either retired or had passed away were excluded. In total, 352 ID specialists (281 adult ID physicians and 71 pediatric ID specialists) were identified as potential participants in the survey. An online-based survey link was forwarded to them via text messages and e-mails by the office of the Korean Society of Infectious Diseases and the Korean Society of Paediatric Infectious Diseases. To encourage participation, we sent a reminder on the fifth day. The responders were anonymized, and only one response from each participant was accepted.", "Survey items included baseline characteristics of respondents (age, sex, marital status, number of children, and academic degree), type of working institution, job title, and practice area. Questions regarding the practice areas of ID specialists were divided into five categories: 1) clinical practices of outpatient care, inpatient care, and consultations; 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. The education and training area includes all kinds of lectures or training for students/residents or non-clinical position health care workers. All practices, except research, were based on activities conducted a year before the survey period (December 2019–November 2020). 
Meanwhile, research-related activities were based on the three years before the survey period (December 2017–November 2020). We determined that the sum of the weights for each expert’s clinical and research fields was 100%. Items related to job satisfaction were surveyed using a 5-point Likert scale. To determine compensation, we investigated vacation benefits and average annual income.", "Respondents selected one week (from Monday to Sunday) between November 2, 2020 and December 6, 2020, and chose activities they performed from 6 am to midnight. One of the nine activities was entered on an hourly basis: 1) outpatient care; 2) consultation; 3) inpatient care/rounding; 4) education/training; 5) research; 6) infection control; 7) antibiotic stewardship; 8) volunteer work; and 9) participation in conferences (except infection control meetings). Additionally, the start and finish times for a daily work period from Monday to Saturday were recorded to investigate the working hours in a week.", "SPSS version 24.0 for Windows (IBM, Armonk, NY, USA) was used for statistical analysis. Chi-square or Fisher’s exact tests were used to compare categorical variables. Continuous variables were compared using the Student’s t-test or Mann-Whitney U test, as appropriate. Variables with P values < 0.05 were considered statistically significant.", "The study protocol was approved by the Institutional Review Board of Soonchunhyang University Seoul Hospital (No. 2020-05-016). Online written informed consent was obtained from all participants.", "Demographic characteristics of respondents Overall, 195 (55.4%) ID specialists (144 adult ID specialists and 51 pediatric ID specialists) completed the survey. Detailed baseline characteristics are shown in Table 1. Majority of the respondents (192, 98.5%) worked in a clinical position. 
Most ID specialists (181, 92.8%) worked in acute-care referral hospitals, with two thirds of respondents (127, 65.1%) working in metropolitan areas (Supplementary Table 1).\nData are number (%) of patients, unless otherwise indicated.\nID = infectious disease, IQR = interquartile range.\naNon-clinical areas included pharmaceutical companies (n = 2) and a life science company (n = 1).\nbOthers included pharmaceutical companies (n = 2), life science companies (n = 1), laboratories (n = 1), and medical schools (n = 1).\nIn total, 144 (73.8%) respondents were involved in all of the following practices: inpatient and outpatient care, consultation, infection control, antibiotic stewardship, research, and education/training. Adult ID specialists were more involved in the practices of consultation, infection control, antibiotic stewardship, and participation in the public sector (Table 1). The major areas of specialization were bacterial/viral infections and infection control (Fig. 1A).\nHIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.\nClinical practices of the respondents The most common area of clinical practice was bacterial/viral diseases, followed by fever of unknown origin, and immunocompromised infection (Fig. 1B, Table 2). Human immunodeficiency virus/acquired immune deficiency syndrome, parasitic infection, immunocompromised infection, and occupational exposure were more common among adult ID specialists. In contrast, pediatric ID specialists were more often involved in vaccination/travel clinics. The majority of ID specialists had 6–10 hospitalized patients per day (48.5%); 41.9% of adult ID specialists had 6–10 hospitalized patients, while 61.9% of pediatric ID specialists had fewer than five hospitalized patients per day. The number of patients per outpatient clinic was similar between adult and pediatric ID specialists, with the majority having 11–20 patients per section. 
A large percentage of adult ID specialists (29.6%) performed > 20 formal consultations per day, while the majority of pediatric ID specialists (76.7%) had < 5 formal consultations per day. The number of informal consultations per day was similar between the two groups. Consultations were conducted without assistive personnel for 73.0% of the respondents. Twenty-three percent of adult ID specialists participated in pediatric ID consultation, and 7.0% of pediatric ID specialists participated in adult ID consultation.\nData are number (%) of patients, unless otherwise indicated.\nID = infectious disease, IQR = interquartile range, HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.\naConsultation via formal paperwork.\nbConsultation via informal communication (e.g., telephone, text messages).\nAmong 183 respondents, 131 (71.6%) answered that their institution took care of hospitalized patients with COVID-19, of whom 105 (80.2%) participated in COVID-19 care (Supplementary Table 2).\nInfection control and antibiotic stewardship Participation in infection control and antibiotic stewardship activities was more common among adult ID specialists (90.3% vs. 74.5%, P = 0.009) than among pediatric ID specialists (Table 1). Among 183 respondents participating in infection control activities, 96 (52.5%) had been an infection control team chair, with a median period of five years (interquartile range [IQR]: 2–10 years) (Supplementary Table 3). The most common infection control activities were attending infection control meetings (97.0%), responding to emerging ID (91.1%), and responding to unexpected exposure to transmissible diseases (88.7%). 
The most common antibiotic stewardship activities were review and approval of restricted antibiotics (81.5%), active monitoring of antibiotic prescription (51.8%), and review of surgical prophylactic antibiotics (42.8%).\nResearch Overall, 164 (84.1%) respondents were involved in research, including clinical research (88.4%), basic research (19.2%), and research in public health/epidemiology (18.0%) (Supplementary Table 4). The main research interests of adult ID specialists were bacterial/viral infections (78.5%), infection control (40.8%), and antibiotic stewardship (27.7%), while those of pediatric ID specialists were bacterial/viral infections (85.7%), vaccination (59.5%), and immunocompromised patients (23.8%). Most respondents (90.2%) had published in Science Citation Index/Expanded (SCI/E) journals, and the median number of publications as the first or corresponding author to SCI/E journals within 3 years was three (IQR: 2–6). Of the ID specialists involved in research, 59.8% had acquired research funds. 
Detailed response results according to position (professor, clinician, or non-clinical position) are shown in Supplementary Table 5.\nEducation and training Overall, 153 respondents (78.5%) were involved in education and training. The contents of education/training included ID (93.5%), antibiotics (75.9%), infection control (75.9%), and vaccination (58.2%) (Supplementary Table 6).\nWeekly patterns of time use In total, 153 (43.5%) ID specialists reported weekly patterns of time use. Detailed results are shown in Table 3. Weekly working hours were longer among adult ID specialists than among pediatric ID specialists (median: 59.0 vs. 55.0 hours from Monday to Friday, P = 0.005; median: 62.0 vs. 57.5 hours from Monday to Saturday, P = 0.015). 
Among activities, ID specialists spent the longest hours on patient care, especially outpatient services (median: 12 hours, IQR: 7–16 hours), followed by inpatient services (median: 10 hours, IQR: 6–13 hours) and consultation (median: 8 hours, IQR: 4–14 hours), which together amounted to a median of 29 hours (IQR: 14–37 hours) each week. Adult ID specialists spent more time on consultation, infection control, and antibiotic stewardship, while pediatric ID specialists spent more time on outpatient services, research, and volunteer medical services.\nData are presented as median (interquartile range).\nID = infectious disease.\nJob satisfaction and compensations Among ID specialists, 37.4% (n = 73) responded that they were satisfied with their current job. The percentage of respondents who answered positively to the question of whether they would select the ID specialty again, if they had to choose, was higher in the adult ID specialist group than in the pediatric ID specialist group (P = 0.004) (Supplementary Table 7). 
Factors associated with satisfaction as ID specialists are shown in Supplementary Table 8. In a multivariable logistic analysis, male sex and working in Seoul, Incheon, or Gyeonggi-do were significantly associated with job satisfaction (Supplementary Table 8). Most ID specialists spent 5–10 days of vacation per year (52.5%), earning 4,479–89,572 USD per year (46.2%) (Supplementary Table 9). Respondents answered that the ideal number of hospital beds covered by one adult or pediatric ID specialist was 151–200 beds (30.8%) and 401–500 beds (30.3%), respectively (Supplementary Table 10).\nMain problems and complaints To foster the ID specialty in Korea, ID specialists suggested that they should be appropriately compensated, especially for infection control and antibiotic stewardship activities (n = 91, 37.0%), and that additional ID specialists are necessary (n = 61, 24.8%). 
Respondents also suggested that the opinions of ID specialists should be respected and reflected in government policies (n = 34, 13.8%) (Supplementary Table 11).", "Overall, 195 (55.4%) ID specialists (144 adult ID specialists and 51 pediatric ID specialists) completed the survey. Detailed baseline characteristics are shown in Table 1. The majority of respondents (192, 98.5%) worked in a clinical position. Most ID specialists (181, 92.8%) worked in acute-care referral hospitals, with two thirds of respondents (127, 65.1%) working in metropolitan areas (Supplementary Table 1).\nData are number (%) of patients, unless otherwise indicated.\nID = infectious disease, IQR = interquartile range.\naNon-clinical areas included pharmaceutical companies (n = 2) and a life science company (n = 1).\nbOthers included pharmaceutical companies (n = 2), life science companies (n = 1), laboratories (n = 1), and medical schools (n = 1).\nIn total, 144 (73.8%) respondents were involved in all of the following practices: inpatient and outpatient care, consultation, infection control, antibiotic stewardship, research, and education/training. Adult ID specialists were more involved in the practices of consultation, infection control, antibiotic stewardship, and participation in the public sector (Table 1). The major areas of specialization were bacterial/viral infections and infection control (Fig. 
1A).\nHIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.", "The most common area of clinical practice was bacterial/viral diseases, followed by fever of unknown origin, and immunocompromised infection (Fig. 1B, Table 2). Human immunodeficiency virus/acquired immune deficiency syndrome, parasitic infection, immunocompromised infection, and occupational exposure were more common among adult ID specialists. In contrast, pediatric ID specialists were more often involved in vaccination/travel clinics. The majority of ID specialists had 6–10 hospitalized patients per day (48.5%); 41.9% of adult ID specialists had 6–10 hospitalized patients, while 61.9% of pediatric ID specialists had fewer than five hospitalized patients per day. The number of patients per outpatient clinic was similar between adult and pediatric ID specialists, with the majority having 11–20 patients per section. A large percentage of adult ID specialists (29.6%) performed > 20 formal consultations per day, while the majority of pediatric ID specialists (76.7%) had < 5 formal consultations per day. The number of informal consultations per day was similar between the two groups. Consultations were conducted without assistive personnel for 73.0% of the respondents. 
Twenty-three percent of adult ID specialists participated in pediatric ID consultation, and 7.0% of pediatric ID specialists participated in adult ID consultation.\nData are number (%) of patients, unless otherwise indicated.\nID = infectious disease, IQR = interquartile range, HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.\naConsultation via formal paperwork.\nbConsultation via informal communication (e.g., telephone, text messages).\nAmong 183 respondents, 131 (71.6%) answered that their institution took care of hospitalized patients with COVID-19, of whom 105 (80.2%) participated in COVID-19 care (Supplementary Table 2).", "Participation in infection control and antibiotic stewardship activities was more common among adult ID specialists (90.3% vs. 74.5%, P = 0.009) than among pediatric ID specialists (Table 1). Among 183 respondents participating in infection control activities, 96 (52.5%) had been an infection control team chair, with a median period of five years (interquartile range [IQR]: 2–10 years) (Supplementary Table 3). The most common infection control activities were attending infection control meetings (97.0%), responding to emerging ID (91.1%), and responding to unexpected exposure to transmissible diseases (88.7%). The most common antibiotic stewardship activities were review and approval of restricted antibiotics (81.5%), active monitoring of antibiotic prescription (51.8%), and review of surgical prophylactic antibiotics (42.8%).", "Overall, 164 (84.1%) respondents were involved in research, including clinical research (88.4%), basic research (19.2%), and research in public health/epidemiology (18.0%) (Supplementary Table 4). 
The main research interests of adult ID specialists were bacterial/viral infections (78.5%), infection control (40.8%), and antibiotic stewardship (27.7%), while those of pediatric ID specialists were bacterial/viral infections (85.7%), vaccination (59.5%), and immunocompromised patients (23.8%). Most respondents (90.2%) had published in Science Citation Index/Expanded (SCI/E) journals, and the median number of publications as the first or corresponding author to SCI/E journals within 3 years was three (IQR: 2–6). Of the ID specialists involved in research, 59.8% had acquired research funds. Detailed response results according to position (professor, clinician, or non-clinical position) are shown in Supplementary Table 5.", "Overall, 153 respondents (78.5%) were involved in education and training. The contents of education/training included ID (93.5%), antibiotics (75.9%), infection control (75.9%), and vaccination (58.2%) (Supplementary Table 6).", "In total, 153 (43.5%) ID specialists reported weekly patterns of time use. Detailed results are shown in Table 3. Weekly working hours were longer among adult ID specialists than among pediatric ID specialists (median: 59.0 vs. 55.0 hours from Monday to Friday, P = 0.005; median: 62.0 vs. 57.5 hours from Monday to Saturday, P = 0.015). Among activities, ID specialists spent the longest hours on patient care, especially outpatient services (median: 12 hours, IQR: 7–16 hours), followed by inpatient services (median: 10 hours, IQR: 6–13 hours) and consultation (median: 8 hours, IQR: 4–14 hours), altogether resulted in a median of 29 hours (IQR: 14–37 hours) each week. 
Adult ID specialists spent more time on consultation, infection control, and antibiotic stewardship, while pediatric ID specialists spent more time on outpatient services, research, and volunteer medical services.\nData are presented as median (interquartile range).\nID = infectious disease.", "Among ID specialists, 37.4% (n = 73) responded that they were satisfied with their current job. The percentage of respondents who answered positively to the question of whether they would select the ID specialty again, if they had to choose, was higher in the adult ID specialist group than in the pediatric ID specialist group (P = 0.004) (Supplementary Table 7). Factors associated with satisfaction as ID specialists are shown in Supplementary Table 8. In a multivariable logistic analysis, male sex and working in Seoul, Incheon, or Gyeonggi-do were significantly associated with job satisfaction (Supplementary Table 8). Most ID specialists spent 5–10 days of vacation per year (52.5%), earning 4,479–89,572 USD per year (46.2%) (Supplementary Table 9). Respondents answered that the ideal number of hospital beds covered by one adult or pediatric ID specialist was 151–200 beds (30.8%) and 401–500 beds (30.3%), respectively (Supplementary Table 10).", "To foster the ID specialty in Korea, ID specialists suggested that they should be appropriately compensated, especially for infection control and antibiotic stewardship activities (n = 91, 37.0%), and that additional ID specialists are necessary (n = 61, 24.8%). Respondents also suggested that the opinions of ID specialists should be respected and reflected in government policies (n = 34, 13.8%) (Supplementary Table 11).", "In the present study, we found that most ID specialists in Korea spent the most time treating patients with IDs. Concurrently, they also participated in infection control, antibiotic stewardship, or education/training. 
They worked more than 60 hours weekly, exceeding the legal limit of 52 hours set by the government. Accordingly, the demands of ID specialists have centered on appropriate compensation for their work and increased employment of ID specialists. Considering that the number of ID specialists in Korea is low, the promotion of ID training will not negatively affect the training systems of other specialties.\nAmong the various roles of ID specialists, patient care is essential. Involvement of ID specialists in the care of patients with IDs results in reduced hospitalization rates, mortality, healthcare costs, and hospital stay.13141516 In this study, most ID specialists were engaged in several patient care activities, including inpatient and outpatient care and consultations, in the same week. Furthermore, about one fourth of adult ID specialists were engaged in pediatric ID consultations. The diverse clinical responsibilities of ID specialists may be due to a shortage of ID staff.\nFollowing the outbreak of the Middle East respiratory syndrome in medical institutions in 2015, the medical law in Korea was revised to strengthen the legal regulations for infection control personnel.17 Furthermore, since 2017, the Ministry of Health and Welfare has reimbursed medical institutions for infection prevention and management measures. However, for reimbursement, a hospital must have an ID doctor for every 300 hospital beds, and the doctors in charge of infection control must perform infection control duties for at least 20 h/week and complete 16 hours of training related to infection control.17 Consequently, infection control rounding has become a mandatory activity for ID specialists in Korean hospitals.18 However, the pattern of weekly time use among ID specialists shows limited room for additional infection control activities. Despite this workload, the majority of ID specialists had to undertake multiple responsibilities, including being director of infection control. 
This results in insufficient time for infection control activities.\nParticipation in antibiotic stewardship mainly focused on approval for specific and restricted antibiotics.19 Similar to other activities related to the ID specialty, antibiotic stewardship in Korean hospitals is mostly conducted by one or two ID specialists.19 However, for more comprehensive ID activities, approximately 3.01 personnel are required per 1,000 beds.20 An increase in the emergence of antimicrobial-resistant pathogens emphasizes the importance of appropriate antibiotic use and antibiotic stewardship programs.21 Securing ID specialists is necessary to implement and expand antibiotic stewardship programs in Korean hospitals.\nAccording to our results, ID specialists devoted similar amounts of time to research and patient care. This may be because a large proportion of ID specialists were stationed at university-affiliated hospitals. However, they were unlikely to receive research funds and perform basic research in their early careers. Given that a rapid response to emerging and re-emerging infectious diseases is enhanced by a strong research base,22 promoting ID research is necessary to strengthen public health control efforts. Besides expanding ID research funds, emphasizing basic research in ID training courses should be considered.23 Promoting research in ID will most likely attract more applicants for training in ID specialties.24\nUnfortunately, the actual time that ID specialists allocated to education and training activities was 3 hours per week. This duration may be insufficient for providing adequate educational opportunities. Due to the increase in emerging IDs and IDs caused by antimicrobial-resistant pathogens, it is important to educate students/trainees and healthcare personnel on infection control. An adequately staffed ID workforce is necessary to achieve this.\nID specialists worked an average of 60.5 hours per week. 
Given that more than half of the respondents were women and more than two thirds were married and had children, one can expect that it is difficult for ID specialists to achieve an appropriate work-life balance. In our previous survey,11 only 8.7% of ID specialists reported having a work-life balance. In this study, working hours were based on work and leave hours. Only 2.6% answered that they did not take work home. It is known that long working hours can lead to health risks, including an increased risk of stroke.25\nInterestingly, this study revealed differences between adult and pediatric ID specialists. Adult ID specialists had a greater role in consultation, infection control, and antibiotic stewardship than pediatric ID specialists. Although the satisfaction level was similar between the two types of experts, pediatric ID specialists responded more positively to reselecting the same major. The longer weekly working hours of adult ID specialists may explain this difference in willingness to reselect the specialty; however, further studies should be conducted in this regard.\nThe strength of our study is the high response rate; 55.4% of ID specialists participated in the study. Therefore, the results may adequately represent the status of ID specialists in Korea. Nonetheless, this study had some limitations. First, clinical microbiologists who can be classified as ID specialists were not included in the study. Second, the survey was conducted in the middle of the COVID-19 pandemic, which may have affected the operational times of ID specialists. Third, there could be information or selection bias due to non-response and the questionnaire-based, cross-sectional study design. Further studies with more meticulous designs, such as longitudinal or repeated cross-sectional studies, are needed to guarantee more reliable data.\nWe identified areas of practice and patterns of time use among adult and pediatric ID specialists in Korea. 
Even in the middle of the COVID-19 pandemic, most experts covered all necessary areas (e.g., treatment, education, research, infection control, and antibiotic stewardship) in medical institutions with limited resources. These problems are expected to be solved by appropriately compensating individuals and medical institutions for their less visible activities (including infection control and antibiotic stewardship) and by securing additional human resources." ]
[ "intro", "methods", null, null, null, null, "ethics-statement", "results", null, null, null, null, null, null, null, null, "discussion" ]
[ "Infectious Disease Specialists", "Professional Development", "Workforce", "Adult", "Pediatrics", "Korea" ]
INTRODUCTION: The current roles of infectious disease (ID) specialists are diverse, including diagnosis and treatment of various IDs, infection control, antibiotic stewardship, response to disease outbreaks, and vaccination. The social need for ID specialists is higher than ever because of the emergence of antimicrobial-resistant pathogens and recurrent outbreaks caused by emerging IDs. In particular, the spread of coronavirus disease 2019 (COVID-19) has affected more than 150 million people worldwide (including more than 128,000 people in Korea) since January 2020. This has created demand for ID specialists.12 Unfortunately, the number of ID specialists in several countries is suboptimal, and the number of applicants to ID training programs is insufficient.3456 In 2019, there were 242 active adult ID specialists in Korea, representing 0.42/100,000 of the population. One ID specialist in Korea was in charge of 342 hospital beds, which was higher than that in other countries, including the US, Europe, and Brazil.3467 Experienced ID specialists improve clinical outcomes. Direct system-level improvements through infection control and antimicrobial stewardship subsequently enhance patient satisfaction while optimizing the overall quality of care. However, the shortage of ID specialists can lead to unfavorable public health outcomes, including the emergence of a number of antibiotic-resistant bacteria89 and poor response to ID epidemics.10 Additionally, the shortage can lead to long work hours and low job satisfaction among ID specialists, leading to few applicants for ID specialist courses.1112 Recently, we analyzed the current working status and geographical distribution of adult ID specialists.3 However, there is limited information about the actual scope of work and time spent relative to their responsibilities. 
This study aimed to analyze the areas of practice and time spent in each area among adult and pediatric ID specialists in Korea and to identify commonly encountered problems and propose solutions from the perspective of ID specialists. METHODS: Study design and population A survey was conducted on December 17–27, 2020, targeting all adult and pediatric ID specialists (N = 392) in Korea. At the time of the survey, 40 experts that were either retired or had passed away were excluded. In total, 352 ID specialists (281 adult ID physicians and 71 pediatric ID specialists) were identified as potential participants in the survey. An online-based survey link was forwarded to them via text messages and e-mails by the office of the Korean Society of Infectious Diseases and the Korean Society of Paediatric Infectious Diseases. To encourage participation, we sent a reminder on the fifth day. The responders were anonymized, and only one response from each participant was accepted. Survey items Survey items included baseline characteristics of respondents (age, sex, marital status, number of children, and academic degree), type of working institution, job title, and practice area. 
Questions regarding the practice areas of ID specialists were divided into five categories: 1) clinical practice (outpatient care, inpatient care, and consultations); 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. The education and training category included all lectures or training for students, residents, or health care workers in non-clinical positions. All practices except research were assessed based on activities conducted in the year before the survey period (December 2019–November 2020), whereas research-related activities were assessed based on the three years before the survey period (December 2017–November 2020). The weights each specialist assigned to their clinical and research fields were required to sum to 100%. Items related to job satisfaction were surveyed using a 5-point Likert scale. To determine compensation, we investigated vacation benefits and average annual income.
Weekly patterns of time use: Respondents selected one week (Monday to Sunday) between November 2, 2020 and December 6, 2020, and recorded the activities they performed from 6 am to midnight. One of nine activities was entered on an hourly basis: 1) outpatient care; 2) consultation; 3) inpatient care/rounding; 4) education/training; 5) research; 6) infection control; 7) antibiotic stewardship; 8) volunteer work; and 9) participation in conferences (except infection control meetings). Additionally, the start and finish times of each daily work period from Monday to Saturday were recorded to determine weekly working hours.

Statistical analysis: SPSS version 24.0 for Windows (IBM, Armonk, NY, USA) was used for statistical analysis. The chi-square or Fisher's exact test was used to compare categorical variables. Continuous variables were compared using the Student's t-test or Mann-Whitney U test, as appropriate. P values < 0.05 were considered statistically significant.
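The test-selection protocol described above (chi-square or Fisher's exact test for categorical variables; Student's t-test or Mann-Whitney U test for continuous variables, as appropriate) can be sketched as follows. This is an illustrative example using SciPy, not the authors' SPSS analysis; the helper names and the input counts are assumptions.

```python
# Illustrative sketch of the test-selection protocol described above,
# using SciPy; NOT the authors' SPSS code. Counts are adapted from the
# reported percentages for demonstration only.
import numpy as np
from scipy import stats

def compare_categorical(table):
    """Contingency table of counts: Fisher's exact test for a 2x2 table
    with small expected counts, chi-square test otherwise."""
    table = np.asarray(table)
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if table.shape == (2, 2) and (expected < 5).any():
        _, p = stats.fisher_exact(table)
    return float(p)

def compare_continuous(a, b, normal=False):
    """Student's t-test for normally distributed samples,
    Mann-Whitney U test otherwise."""
    if normal:
        _, p = stats.ttest_ind(a, b)
    else:
        _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return float(p)

# Example: infection control participation, adult vs. pediatric ID
# specialists (130/144 vs. 38/51, approximated from 90.3% vs. 74.5%)
p = compare_categorical([[130, 14], [38, 13]])
print(p < 0.05)  # True: significant at the 0.05 level
```

Note that the branch between the parametric and non-parametric test would in practice be driven by a normality check of the data, which the paper summarizes only as "as appropriate."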
Ethics statement: The study protocol was approved by the Institutional Review Board of Soonchunhyang University Seoul Hospital (No. 2020-05-016). Online written informed consent was obtained from all participants.
RESULTS:

Demographic characteristics of respondents: Overall, 195 (55.4%) ID specialists (144 adult and 51 pediatric ID specialists) completed the survey. Detailed baseline characteristics are shown in Table 1. The majority of respondents (192, 98.5%) worked in a clinical position.
Most ID specialists (181, 92.8%) worked in acute-care referral hospitals, and two thirds of respondents (127, 65.1%) worked in metropolitan areas (Supplementary Table 1). Table 1 footnotes: Data are number (%) of patients, unless otherwise indicated. ID = infectious disease, IQR = interquartile range. aNon-clinical areas included pharmaceutical companies (n = 2) and a life science company (n = 1). bOthers included pharmaceutical companies (n = 2), life science companies (n = 1), laboratories (n = 1), and medical schools (n = 1).

In total, 144 (73.8%) respondents were involved in all of the following practices: inpatient and outpatient care, consultation, infection control, antibiotic stewardship, research, and education/training. Adult ID specialists were more involved in consultation, infection control, antibiotic stewardship, and participation in the public sector (Table 1). The major areas of specialization were bacterial/viral infections and infection control (Fig. 1A). Fig. 1 abbreviations: HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria.
Clinical practices of the respondents: The most common area of clinical practice was bacterial/viral diseases, followed by fever of unknown origin and infection in immunocompromised patients (Fig. 1B, Table 2). Human immunodeficiency virus/acquired immune deficiency syndrome, parasitic infection, immunocompromised infection, and occupational exposure were more common among adult ID specialists. In contrast, pediatric ID specialists were more often involved in vaccination/travel clinics. The majority of ID specialists had 6–10 hospitalized patients per day (48.5%); 41.9% of adult ID specialists had 6–10 hospitalized patients, while 61.9% of pediatric ID specialists had fewer than five hospitalized patients per day. The number of patients per outpatient clinic was similar between adult and pediatric ID specialists, with the majority having 11–20 patients per session. A large percentage of adult ID specialists (29.6%) performed > 20 formal consultations per day, while the majority of pediatric ID specialists (76.7%) had < 5 formal consultations per day. The number of informal consultations per day was similar between the two groups. For 73.0% of respondents, consultations were conducted without assistive personnel. Twenty-three percent of adult ID specialists participated in pediatric ID consultation, and 7.0% of pediatric ID specialists participated in adult ID consultation.
Table 2 footnotes: Data are number (%) of patients, unless otherwise indicated. ID = infectious disease, IQR = interquartile range, HIV = human immunodeficiency virus, AIDS = acquired immune deficiency syndrome, TB = tuberculosis, NTM = nontuberculous mycobacteria. aConsultation via formal paperwork. bConsultation via informal communication (e.g., telephone, text messages).

Among 183 respondents, 131 (71.6%) answered that their institution cared for hospitalized patients with COVID-19, and 105 (80.2%) of these participated in COVID-19 care (Supplementary Table 2).
Infection control and antibiotic stewardship: Participation in infection control and antibiotic stewardship activities was more common among adult ID specialists than among pediatric ID specialists (90.3% vs. 74.5%, P = 0.009) (Table 1). Among the 183 respondents participating in infection control activities, 96 (52.5%) had served as infection control team chair, for a median period of five years (interquartile range [IQR]: 2–10 years) (Supplementary Table 3). The most common infection control activities were attending infection control meetings (97.0%), responding to emerging IDs (91.1%), and responding to unexpected exposure to transmissible diseases (88.7%). The most common antibiotic stewardship activities were the review and approval of restricted antibiotics (81.5%), active monitoring of antibiotic prescriptions (51.8%), and review of surgical prophylactic antibiotics (42.8%).
Research: Overall, 164 (84.1%) respondents were involved in research, including clinical research (88.4%), basic research (19.2%), and research in public health/epidemiology (18.0%) (Supplementary Table 4). The main research interests of adult ID specialists were bacterial/viral infections (78.5%), infection control (40.8%), and antibiotic stewardship (27.7%), while those of pediatric ID specialists were bacterial/viral infections (85.7%), vaccination (59.5%), and immunocompromised patients (23.8%). Most respondents (90.2%) had published in Science Citation Index/Expanded (SCI/E) journals, and the median number of publications as first or corresponding author in SCI/E journals within three years was three (IQR: 2–6). Of the ID specialists involved in research, 59.8% had acquired research funding. Detailed results according to position (professor, clinician, or non-clinical position) are shown in Supplementary Table 5.
Education and training: Overall, 153 respondents (78.5%) were involved in education and training. The contents of education/training included ID (93.5%), antibiotics (75.9%), infection control (75.9%), and vaccination (58.2%) (Supplementary Table 6).

Weekly patterns of time use: In total, 153 (43.5%) ID specialists reported their weekly patterns of time use. Detailed results are shown in Table 3. Weekly working hours were longer among adult ID specialists than among pediatric ID specialists (median: 59.0 vs. 55.0 hours from Monday to Friday, P = 0.005; median: 62.0 vs. 57.5 hours from Monday to Saturday, P = 0.015).
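As a hypothetical illustration (not part of the original analysis), the hourly time-use entries described in the Methods could be reduced to the per-activity medians and IQRs reported here; the data values and function names below are invented for demonstration.

```python
# Hypothetical sketch: aggregating respondents' weekly hourly logs into
# a per-activity median and IQR, as reported in Table 3. The numbers
# below are invented and do not come from the survey data.
import statistics

# Each respondent's chosen week, reduced to total hours per activity
respondents = [
    {"outpatient": 12, "inpatient": 10, "consultation": 8},
    {"outpatient": 16, "inpatient": 6,  "consultation": 4},
    {"outpatient": 7,  "inpatient": 13, "consultation": 14},
]

def summarize(logs, activity):
    """Return (median, (Q1, Q3)) of hours spent on one activity."""
    hours = sorted(r.get(activity, 0) for r in logs)
    q1, med, q3 = statistics.quantiles(hours, n=4)
    return med, (q1, q3)

med, iqr = summarize(respondents, "outpatient")
print(med, iqr)
```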
Among the activities, ID specialists spent the most hours on patient care, particularly outpatient services (median: 12 hours, IQR: 7–16 hours), followed by inpatient services (median: 10 hours, IQR: 6–13 hours) and consultation (median: 8 hours, IQR: 4–14 hours), together amounting to a median of 29 hours (IQR: 14–37 hours) each week. Adult ID specialists spent more time on consultation, infection control, and antibiotic stewardship, while pediatric ID specialists spent more time on outpatient services, research, and volunteer medical services. Table 3 footnotes: Data are presented as median (interquartile range). ID = infectious disease.

Job satisfaction and compensation: Among ID specialists, 37.4% (n = 73) responded that they were satisfied with their current job. The percentage of respondents who answered positively to the question of whether they would select the ID specialty again was higher in the adult ID specialist group than in the pediatric ID specialist group (P = 0.004) (Supplementary Table 7). Factors associated with satisfaction as an ID specialist are shown in Supplementary Table 8. In a multivariable logistic analysis, male sex and working in Seoul, Incheon, or Gyeonggi-do were significantly associated with job satisfaction (Supplementary Table 8). Most ID specialists took 5–10 days of vacation per year (52.5%) and earned 4,479–89,572 USD per year (46.2%) (Supplementary Table 9). Respondents answered that the ideal number of hospital beds covered by one adult or pediatric ID specialist was 151–200 beds (30.8%) and 401–500 beds (30.3%), respectively (Supplementary Table 10).
Main problems and complaints: To foster the ID specialty in Korea, ID specialists suggested that they should be appropriately compensated, especially for infection control and antibiotic stewardship activities (n = 91, 37.0%), and that additional ID specialists are needed (n = 61, 24.8%). Respondents also suggested that the opinions of ID specialists should be respected and reflected in government policies (n = 34, 13.8%) (Supplementary Table 11).
DISCUSSION: In the present study, we found that most ID specialists in Korea spent the largest share of their time treating patients with IDs, while concurrently participating in infection control, antibiotic stewardship, and education/training.
They worked more than 60 hours weekly, exceeding the 52-hour limit set legally by the government. Accordingly, the demands of ID specialists have centered on appropriate compensation for their work and increased employment of ID specialists. Considering that the number of ID specialists in Korea is low, promoting ID training will not negatively affect the training systems of other specialties. Among the various roles of ID specialists, patient care is essential. Involvement of ID specialists in the care of patients with IDs reduces hospitalization rates, mortality, healthcare costs, and length of hospital stay.[13,14,15,16] In this study, most ID specialists were engaged in several patient care activities, including inpatient and outpatient care and consultations, within the same week. Furthermore, about one fourth of adult ID specialists were engaged in pediatric ID consultations. The diverse clinical responsibilities of ID specialists may be due to the shortage of ID staff. Following the outbreak of Middle East respiratory syndrome in medical institutions in 2015, the medical law in Korea was revised to strengthen the legal regulations for infection control personnel.[17] Furthermore, since 2017, the Ministry of Health and Welfare has reimbursed medical institutions for infection prevention and management measures. However, for reimbursement, a hospital requires one ID doctor per 300 hospital beds, and doctors in charge of infection control must perform infection control duties for at least 20 hours/week and complete 16 hours of training related to infection control.[17] Consequently, infection control rounding has become a mandatory activity for ID specialists in Korean hospitals.[18] However, the pattern of weekly time use among ID specialists shows limited room for additional infection control activities. Despite this workload, the majority of ID specialists had to undertake multiple responsibilities, including serving as director of infection control.
This results in insufficient time for infection control activities. Participation in antibiotic stewardship mainly focused on approval of specific and restricted antibiotics.19 As with other activities related to the ID specialty, antibiotic stewardship in Korean hospitals is mostly conducted by one or two ID specialists.19 However, for more comprehensive ID activities, approximately 3.01 personnel are required per 1,000 beds.20 The increasing emergence of antimicrobial-resistant pathogens emphasizes the importance of appropriate antibiotic use and antibiotic stewardship programs.21 Securing ID specialists is necessary to implement and expand antibiotic stewardship programs in Korean hospitals. According to our results, ID specialists devoted similar amounts of time to research and patient care. This may be because a large proportion of ID specialists were stationed at university-affiliated hospitals. However, they were unlikely to receive research funds and perform basic research in their early careers. Given that a rapid response to emerging and re-emerging infectious diseases is enhanced by a strong research base,22 promoting ID research is necessary to strengthen public health control efforts. Besides expanding ID research funds, emphasizing basic research in ID training courses should be considered.23 Promoting research in ID will most likely attract more applicants for training in ID specialties.24 Unfortunately, the actual time that ID specialists allocated to education and training activities was 3 hours per week. This duration may be insufficient for providing adequate educational opportunities. Due to the increase in emerging IDs and IDs caused by antimicrobial-resistant pathogens, it is important to educate students/trainees and healthcare personnel on infection control. An adequately staffed ID workforce is necessary to achieve this. ID specialists worked an average of 60.5 hours per week.
Given that more than half of the respondents were women and more than two thirds were married and had children, it is expected to be difficult for ID specialists to achieve an appropriate work-life balance. In our previous survey,11 only 8.7% of ID specialists reported having a work-life balance. In this study, working hours were based on work and leave hours; only 2.6% answered that they did not take work home. It is known that long working hours can lead to health risks, including an increased risk of stroke.25 Interestingly, this study revealed differences between adult and pediatric ID specialists. Adult ID specialists had a greater role in consultation, infection control, and antibiotic stewardship than pediatric ID specialists. Although the satisfaction level was similar between the two groups, pediatric ID specialists responded more positively to reselecting the same major. The longer weekly working hours of adult ID specialists may explain this difference; however, further studies should be conducted in this regard. The strength of our study is the high response rate: 55.4% of ID specialists participated. Therefore, the results may adequately represent the status of ID specialists in Korea. Nonetheless, this study had some limitations. First, clinical microbiologists, who can be classified as ID specialists, were not included in the study. Second, the survey was conducted in the middle of the COVID-19 pandemic, which may have affected the time use of ID specialists. Third, there could be information or selection bias due to non-response and the questionnaire-based, cross-sectional study design. Further studies with more meticulous designs, such as longitudinal or repeated cross-sectional studies, are needed to provide more reliable data. We identified areas of practice and patterns of time use among adult and pediatric ID specialists in Korea.
Even in the middle of the COVID-19 pandemic, most specialists covered all necessary areas (e.g., treatment, education, research, infection control, and antibiotic stewardship) in medical institutions with limited resources. These problems can be addressed by appropriately compensating individuals and medical institutions for their less visible activities (including infection control and antibiotic stewardship) and by securing additional human resources.
Background: Infectious disease (ID) specialists are skilled facilitators of medical consultation who promote better outcomes in patient survival and antibiotic stewardship, as well as healthcare safety in pandemic response. This study aimed to assess the working status of ID specialists and identify problems faced by ID professionals in Korea. Methods: This was a nationwide cross-sectional study in Korea. An online-based survey was conducted over 11 days (December 17-27, 2020), targeting all active adult (n = 281) and pediatric (n = 71) ID specialists in Korea (N = 352). Questions regarding the practice areas of the specialists were divided into five categories: 1) clinical practices of outpatient care, inpatient care, and consultations; 2) infection control; 3) antibiotic stewardship; 4) research; and 5) education and training. We investigated the weekly time-use patterns for these areas of practice. Results: Of the 352 ID specialists, 195 (55.4%; 51.2% [144/281] adult and 71.8% [51/71] pediatric ID specialists) responded to the survey. Moreover, 144 (73.8%) of the total respondents were involved in all practice categories investigated. The most common practice area was outpatient service (93.8%), followed by consultation (91.3%) and inpatient service (87.7%). Specialists worked a median of 61 (interquartile range: 54-71) hours weekly: patient care, 29 (14-37) hours; research, 11 (5-19) hours; infection control, 4 (2-10) hours; antibiotic stewardship, 3 (1-5) hours; and education/training, 2 (2-6) hours. Conclusions: ID specialists in Korea simultaneously undertake multiple tasks and work long hours, highlighting the need for training and employing more ID specialists.
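The weekly-hours figures above are reported as medians with interquartile ranges. As a quick sketch of how such summaries are computed from raw survey responses (the hours below are made up for illustration, not the study data):

```python
from statistics import median, quantiles

# Hypothetical weekly working hours reported by respondents -- illustrative only.
hours = [54, 58, 60, 61, 63, 66, 71, 75]

med = median(hours)
# "inclusive" treats the sample itself as the population when cutting quartiles.
q1, _, q3 = quantiles(hours, n=4, method="inclusive")
print(f"median {med}, IQR {q1}-{q3}")
```

Reporting median (IQR) rather than mean (SD) is the usual choice for skewed workload data such as working hours.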
null
null
7,785
361
[ 135, 207, 124, 67, 282, 340, 158, 196, 53, 196, 185, 81 ]
17
[ "id", "id specialists", "specialists", "infection", "control", "infection control", "adult", "research", "table", "pediatric" ]
[ "id specialists improve", "korea id specialists", "id specialists shortage", "id specialists countries", "coronavirus disease 2019" ]
null
null
[CONTENT] Infectious Disease Specialists | Professional Development | Workforce | Adult | Pediatrics | Korea [SUMMARY]
[CONTENT] Infectious Disease Specialists | Professional Development | Workforce | Adult | Pediatrics | Korea [SUMMARY]
[CONTENT] Infectious Disease Specialists | Professional Development | Workforce | Adult | Pediatrics | Korea [SUMMARY]
null
[CONTENT] Infectious Disease Specialists | Professional Development | Workforce | Adult | Pediatrics | Korea [SUMMARY]
null
[CONTENT] Adult | Humans | Child | Cross-Sectional Studies | Specialization | Republic of Korea | Communicable Diseases | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Humans | Child | Cross-Sectional Studies | Specialization | Republic of Korea | Communicable Diseases | Surveys and Questionnaires [SUMMARY]
[CONTENT] Adult | Humans | Child | Cross-Sectional Studies | Specialization | Republic of Korea | Communicable Diseases | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] Adult | Humans | Child | Cross-Sectional Studies | Specialization | Republic of Korea | Communicable Diseases | Surveys and Questionnaires [SUMMARY]
null
[CONTENT] id specialists improve | korea id specialists | id specialists shortage | id specialists countries | coronavirus disease 2019 [SUMMARY]
[CONTENT] id specialists improve | korea id specialists | id specialists shortage | id specialists countries | coronavirus disease 2019 [SUMMARY]
[CONTENT] id specialists improve | korea id specialists | id specialists shortage | id specialists countries | coronavirus disease 2019 [SUMMARY]
null
[CONTENT] id specialists improve | korea id specialists | id specialists shortage | id specialists countries | coronavirus disease 2019 [SUMMARY]
null
[CONTENT] id | id specialists | specialists | infection | control | infection control | adult | research | table | pediatric [SUMMARY]
[CONTENT] id | id specialists | specialists | infection | control | infection control | adult | research | table | pediatric [SUMMARY]
[CONTENT] id | id specialists | specialists | infection | control | infection control | adult | research | table | pediatric [SUMMARY]
null
[CONTENT] id | id specialists | specialists | infection | control | infection control | adult | research | table | pediatric [SUMMARY]
null
[CONTENT] id | specialists | id specialists | including | korea | people | applicants id | time spent | countries | outcomes [SUMMARY]
[CONTENT] survey | 2020 | december | research | variables | care | november | november 2020 | items | based [SUMMARY]
[CONTENT] id | specialists | id specialists | table | hours | median | supplementary | supplementary table | adult | infection [SUMMARY]
null
[CONTENT] id | specialists | id specialists | infection | research | control | infection control | hours | table | adult [SUMMARY]
null
[CONTENT] ||| Korea [SUMMARY]
[CONTENT] Korea ||| over 11 days | December 17-27 | 2020 | 281 | 71 | Korea | 352 ||| five | 1 | 2 | 3 | 4 | 5 ||| weekly [SUMMARY]
[CONTENT] 352 | 195 | 55.4% | 51.2% | 144/281 | 71.8% ||| 144 | 73.8% ||| 93.8% | 91.3% | 87.7% ||| 61 | 54-71 | hours | weekly | 29 | 14-37 | 11 | 5-19 | 4 ||| 2-10 | 3 | 1-5 | 2 | 2-6 [SUMMARY]
null
[CONTENT] ||| Korea ||| Korea ||| over 11 days | December 17-27 | 2020 | 281 | 71 | Korea | 352 ||| five | 1 | 2 | 3 | 4 | 5 ||| weekly ||| ||| 352 | 195 | 55.4% | 51.2% | 144/281 | 71.8% ||| 144 | 73.8% ||| 93.8% | 91.3% | 87.7% ||| 61 | 54-71 | hours | weekly | 29 | 14-37 | 11 | 5-19 | 4 ||| 2-10 | 3 | 1-5 | 2 | 2-6 ||| Korea [SUMMARY]
null
Factors Associated With the Quality of the Patient-Doctor Relationship: A Cross-Sectional Study of Ambulatory Mexican Patients With Rheumatic Diseases.
35616508
The patient-doctor relationship (PDR) is a complex phenomenon with strong cultural determinants, which impacts health-related outcomes and, accordingly, does have ethical implications. The study objective was to describe the PDR from medical encounters between 600 Mexican outpatients with rheumatic diseases and their attending rheumatologists, and to identify factors associated with a good PDR.
BACKGROUND
A cross-sectional study was performed. Patients completed the PDRQ-9 (Patient-Doctor Relationship Questionnaire, 9 items), the HAQ-DI (Health Assessment Questionnaire Disability Index), the Short-Form 36 items (SF-36), a pain-visual analog scale, and the Ideal Patient Autonomy Scale. Relevant sociodemographic, disease-related, and treatment-related variables were obtained. Patients assigned a PDRQ-9 score to each patient-doctor encounter. Regression analysis was used to identify factors associated with a good PDR, which was defined based on a cutoff point established using the borderline performance method.
METHODS
Patients were primarily middle-aged female subjects (86%), with substantial disease duration (median, 11.1 years), without disability (HAQ-DI within reference range, 55.3%), and with deteriorated quality of life (SF-36 out of reference range, 73.7%-78.6%). Among them, 36.5% had systemic lupus erythematosus and 31.8% had rheumatoid arthritis. There were 422 patients (70.3%) with a good PDR, and 523 medical encounters (87.2%) involved certified rheumatologists. Patient paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793-5.113), SF-36 score (OR, 1.014; 95% CI, 1.003-1.025), female sex (OR, 0.460; 95% CI, 0.233-0.010), and being certified rheumatologist (OR, 1.526; 95% CI, 1.059-2.200) were associated with a good PDR.
RESULTS
Patient-related factors and the degree of experience of the attending physician impact the quality of the PDR, in Mexican outpatients with rheumatic diseases.
CONCLUSIONS
[ "Cross-Sectional Studies", "Disability Evaluation", "Female", "Humans", "Middle Aged", "Physician-Patient Relations", "Quality of Life", "Rheumatic Diseases", "Surveys and Questionnaires" ]
9169750
null
null
PATIENTS AND METHODS
Ethics The study was approved by the Internal Review Board of the INCMyN-SZ (Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán) (reference number: IRE-3005-19-20-1). All the patients from the Outpatient Clinic of the Department of Immunology and Rheumatology (OCDIR) who agreed to participate provided written informed consent. Before patient enrollment, the study was presented to all the rheumatologists/trainees assigned to the OCDIR during the study period, and all of them agreed to participate. Study Design, Setting, and Study Population The study has been previously described.6 Briefly, it was cross-sectional and performed between July 2019 and February 2020, at the OCDIR of a tertiary-care level and academic center for rheumatic diseases, located in Mexico City. STROBE's guidelines were followed (Appendix, http://links.lww.com/RHU/A394). The INCMyN-SZ belongs to the National Institutes of Health of Mexico. Patients had Federal government health coverage depending on their socioeconomic level, which was defined by social workers after a patient interview and an income-to-needs ratio assessment. Patients had to pay for their medication, medical assistance, laboratories, and diagnostic imaging studies; however, up to 75% of the patients had at least 70% health coverage. Eleven certified rheumatologists and 10 trainees in rheumatology were assigned to the OCDIR, and all were self-defined as Mexican. In addition, approximately 5000 patients with at least 1 visit to the outpatient clinic, self-referred as Mexican and with a variety of rheumatic diseases, attended the OCDIR. At first visit to the OCDIR, patients were assigned a primary rheumatologist, who was maintained during the entire patient's follow-up; however, patients assigned to trainees in rheumatology changed their primary physician every 2 years (training program duration). Patients might be assigned a different primary rheumatologist upon request. The 10 most frequent diagnoses (n = 4476), based on the attending rheumatologist's criteria, were SLE in 1652 patients (33%), RA in 1578 (31.6%), systemic sclerosis in 239 (4.8%), systemic vasculitis in 220 (4.4%), primary Sjögren syndrome (PSS) in 190 (3.8%), spondyloarthritis in 174 (3.5%), inflammatory myopathies and primary antiphospholipid syndrome in 150 patients each (3%), mixed connective tissue disease in 94 patients (1.9%), and adult Still disease in 29 patients (0.6%). Finally, 524 patients (10.5%) had other diagnoses. All the patients who were consecutively seen at the OCDIR during the study period and had a defined rheumatic disease according to the criteria of the attending rheumatologist were invited to participate. Exclusion criteria included patients on palliative care, with overlap syndrome (except secondary Sjögren syndrome), and with uncontrolled comorbid conditions.
Study Maneuvers All included patients were invited to evaluate the PDR at the end of their consultation and to complete the Patient-Doctor Relationship Questionnaire (PDRQ-9)20 and a PDR Likert scale. Patients additionally completed the Spanish version of the Health Assessment Questionnaire–Disability Index (HAQ-DI) to assess disability,21 the Short-Form 36 items (SF-36) to assess health-related quality of life (HRQoL),22 a visual analog scale (VAS) to assess pain, and the Ideal Patient Autonomy Scale (IPAS).6 Physicians assigned to the OCDIR also completed the IPAS. Relevant sociodemographic variables (sex, age, formal education, socioeconomic level, religious beliefs, economic dependency, living with a partner, and access to the social security system), disease-related variables (disease duration, years of follow-up at the OCDIR, comorbid conditions and Charlson comorbidity score,23 participation in clinical trials, and previous hospitalizations and their number), and treatment-related variables (immunosuppressive treatment, number of immunosuppressive drugs per patient, and corticosteroid use) were obtained from all the patients in standardized formats, after a careful chart review and a patient interview to confirm the data. In all cases, interviews, questionnaires, and scales were applied in an area designated for research purposes by personnel not involved in patient care. Instruments Description The PDRQ-920 assessed the quality of the PDR experienced by the patient through the quantification of the patient's opinion regarding communication, satisfaction, trust, and accessibility in dealing with the doctor and the treatment that followed. The questionnaire is based on a 5-point Likert scale ranging from 1 (not at all appropriate) to 5 (totally appropriate). PDRQ-9 scores range from 1 to 5, with higher scores translating into a better PDR. The PDR Likert scale assessed the quality of the PDR experienced by the patient, who is directed to choose among 3 options: inferior, borderline, and superior. The IPAS6 is a self-administered questionnaire that assesses the patient's ideal of autonomy according to 4 subscales that can be further grouped into 2 subscales: patients with an ideal of physician-centered/paternalistic (with information) autonomy and patients with an ideal of patient-centered autonomy. The IPAS can also be applied to the attending physician. Definitions A good PDR was defined based on a cutoff point established with the borderline performance method.24 Briefly, the PDRQ-9 scores of the patients who rated the PDR Likert scale as borderline were selected (n = 267), and their mean score was calculated as 3.73. Patients who scored the PDRQ-9 with a value >3.73 were considered to have a good PDR, and their counterparts were considered to have a deficient PDR. Senior rheumatologists were defined as certified rheumatologists with ≥20 years of clinical experience. Certified rheumatologists were defined as rheumatologists who completed their training program and certification process. Statistical Analysis Descriptive statistics were performed to estimate frequencies and percentages for categorical variables and medians with interquartile ranges (IQRs) for continuous variables, covering the sociodemographic, disease-related, and treatment-related variables, the patient-reported outcomes, and the PDR of the study population. A PDRQ-9 score was assigned to each patient-rheumatologist encounter. Characteristics of patients with a good PDR were compared with those of patients with a deficient PDR, using appropriate tests. Logistic multiple regression analysis was used to establish factors associated with a good PDR, which was considered the dependent variable. The selection of variables to be included was based on statistical significance in the bivariate analysis (p ≤ 0.10), and a limited number of potential confounder variables was also considered. In addition, the number of variables to be included was defined in advance to avoid overfitting the model, and correlations between variables were also analyzed. Missing data were below 1% and applied to the SF-36 questionnaire (2 missing data); no imputation was performed. In addition, only 496 patients (82.6%) had a predominant ideal of autonomy,6 and their data were included in the regression analysis. All statistical analyses were performed using the Statistical Package for the Social Sciences version 21.0 (SPSS; Chicago, IL). A value of p < 0.05 was considered statistically significant.
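The borderline performance method described above reduces to a simple computation: average the PDRQ-9 scores of the patients who rated the relationship as "borderline," then classify every encounter against that cutoff. A minimal sketch with made-up scores (in the study, the cutoff worked out to 3.73):

```python
from statistics import mean

def borderline_cutoff(records):
    """Cutoff = mean PDRQ-9 score of patients whose Likert rating was 'borderline'."""
    return mean(score for score, likert in records if likert == "borderline")

def classify_pdr(records):
    """Label each encounter 'good' if its PDRQ-9 score exceeds the cutoff."""
    cutoff = borderline_cutoff(records)
    return [("good" if score > cutoff else "deficient") for score, _ in records]

# Hypothetical (PDRQ-9 score, Likert rating) pairs -- not the study data.
records = [(4.8, "superior"), (3.5, "borderline"), (4.0, "borderline"),
           (2.9, "inferior"), (4.6, "superior")]
print(borderline_cutoff(records))  # mean of 3.5 and 4.0 -> 3.75
print(classify_pdr(records))
```

Note that a score exactly equal to the cutoff is classified as deficient, matching the paper's strict ">3.73" rule.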
RESULTS
Population Characteristics A total of 691 ambulatory patients were invited to participate, 90 of whom declined the invitation, primarily due to time constraints, and 1 patient did not complete the PDRQ-9. The characteristics of the 600 patients are depicted in Table 1 and had been previously described.6 Briefly, patients were primarily middle-aged women (86%), with a medium-low socioeconomic status (88.8%), long-standing disease (51.8%), comorbid conditions (58.7%), pain under control (68.5%), no disability (55.3%), and HRQoL out of the reference range (73.7%–78.6%), based on published cutoffs for the pain-VAS, HAQ-DI, and SF-36.25 In addition, they were on immunosuppressive drugs (96.5%). Patients scored high on the PDRQ-9, and 30.5% of the patients rated the PDR with the highest score. The patients' diagnoses were as follows: 219 patients (36.5%) had SLE, 191 (31.8%) had RA, 42 (7%) had systemic vasculitis, 23 each (3.8%) had inflammatory myopathy and primary antiphospholipid syndrome, 25 (4.2%) had systemic sclerosis, 28 (4.7%) had spondyloarthritis, 20 each (3.3%) had PSS and mixed connective tissue disease, and 9 patients (1.5%) had adult Still disease. Population's Characteristics (N = 600) Data presented as median (IQR) as otherwise indicated. aNumber (%) of patients. bSSS provides comprehensive health care insurance for public and private employees (including pensioners) and their households, such as outpatient and inpatient health care, hospitalization, paid sick days, disability, and retirement plans. Funded by both the Mexican Federal Government and by contributions from employees and their employers, this system comprises a heterogeneous bundle of autonomous health care institutions, including outpatient clinics, general hospitals, and specialty hospitals (according to Pineda et al 2019, https://doi.org/10.1007/s00296-018-4198-7). cLimited to patients with previous hospitalizations. 
SE, socioeconomic (level); SSS, social security system; IDs, immunosuppressive drugs; MD, missing data; DD, disease duration. A total of 691 ambulatory patients were invited to participate, 90 of whom declined the invitation, primarily due to time constraints, and 1 patient did not complete the PDRQ-9. The characteristics of the 600 patients are depicted in Table 1 and had been previously described.6 Briefly, patients were primarily middle-aged women (86%), with a medium-low socioeconomic status (88.8%), long-standing disease (51.8%), comorbid conditions (58.7%), pain under control (68.5%), no disability (55.3%), and HRQoL out of the reference range (73.7%–78.6%), based on published cutoffs for the pain-VAS, HAQ-DI, and SF-36.25 In addition, they were on immunosuppressive drugs (96.5%). Patients scored high on the PDRQ-9, and 30.5% of the patients rated the PDR with the highest score. The patients' diagnoses were as follows: 219 patients (36.5%) had SLE, 191 (31.8%) had RA, 42 (7%) had systemic vasculitis, 23 each (3.8%) had inflammatory myopathy and primary antiphospholipid syndrome, 25 (4.2%) had systemic sclerosis, 28 (4.7%) had spondyloarthritis, 20 each (3.3%) had PSS and mixed connective tissue disease, and 9 patients (1.5%) had adult Still disease. Population's Characteristics (N = 600) Data presented as median (IQR) as otherwise indicated. aNumber (%) of patients. bSSS provides comprehensive health care insurance for public and private employees (including pensioners) and their households, such as outpatient and inpatient health care, hospitalization, paid sick days, disability, and retirement plans. Funded by both the Mexican Federal Government and by contributions from employees and their employers, this system comprises a heterogeneous bundle of autonomous health care institutions, including outpatient clinics, general hospitals, and specialty hospitals (according to Pineda et al 2019, https://doi.org/10.1007/s00296-018-4198-7). 
cLimited to patients with previous hospitalizations. SE, socioeconomic (level); SSS, social security system; IDs, immunosuppressive drugs; MD, missing data; DD, disease duration. Description of the PDR in the Study Population The median (IQR) of the PDRQ-9 score in the entire population was 4.6 (3.4–5). Twenty patients (3.3%) rated the PDR Likert scale as inferior, 267 (44.5%) as borderline, and the 313 patients left (52.2%) as superior. Table 2 summarizes the PDRQ-9 global score and individual item scores in the entire population, and the comparison among groups defined according to PDR Likert scale response. As expected, the better the PDR Likert scale, the higher the global and individual items PDRQ-9 scores. Global and Individual Item PDRQ-9 Scores in the Entire Population and Comparison Between Patients Classified According to their PDR Likert Scale Items 2 (“My doctor has enough time for me”), 4 (“My doctor understand me”), and 7 (“I can talk to my doctor”) obtained lower scores than the remaining items (p ≤ 0.001 for any comparison), and the differences persisted within the patients grouped according to the PDR Likert scale response (inferior, borderline, and superior), as summarized in Table 2 and Figure. Individual item PDRQ-9 scores in the patients grouped according to their PDR Likert Scale category (inferior, borderline, superior). Color online-figure is available at http://www.jclinrheum.com. Finally, among the 600 patient-doctor encounters, 523 (87.2%) involved certified rheumatologists, whereas the remaining 77 encounters (22.8%) involved trainees in rheumatology. 
Patient-doctor encounters from the former group (with certified rheumatologists) were rated with higher global and individual item PDRQ-9 scores, compared with their counterparts, as summarized in Table 3, except for item 6, “My doctor and I agree on the nature of my medical symptoms,” and item 7, “I can talk to my doctor.” Comparison of Global and Individual Item PDRQ-9 Scores Between Patient-Doctor Encounters that Involved Certified Rheumatologists and Those That Involved Trainees in Rheumatology 
Factors Associated With Good PDR We first compared medical encounters rated by the patients as deficient PDR (defined as PDRQ-9 score ≤3.73, n = 178) with medical encounters rated by the patients as good PDR (n = 422); the results are summarized in the Supplemental Table, http://links.lww.com/RHU/A393. Compared with their counterparts, patients from the former group were more likely to be female (90.4% vs 84.1%, p = 0.053), scored higher on the pain-VAS (18 [3–42.8] vs 11 [0–38.5], p = 0.042), were more likely to have disability based on the HAQ-DI score (58.5% vs 47.8%, p = 0.019), and were less likely to have the SF-36 physical component within the reference range (14.7% vs 24.2%, p = 0.009). Accordingly, patients with deficient PDR had lower SF-36 global scores (56 [43.5–70.5] vs 63.1 [47.9–76.5], p = 0.001); also, they were more frequently involved in patient-doctor encounters with trainees in rheumatology. Finally, they were less likely to have a paternalistic ideal of patient autonomy (74% vs 89.4%, p ≤ 0.001) and less likely to be concordant with their doctor's ideal of autonomy (62.3% vs 77.7%, p = 0.001, and 65.1% vs 80%, p = 0.001 for patient-doctor concordance with a physician-centered/paternalistic ideal of autonomy). 
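The deficient/good split used above comes from the borderline performance method: the cutoff (3.73 in this study) is the mean PDRQ-9 score of the patients who rated the relationship as borderline. A minimal sketch of that procedure, on synthetic scores rather than the study data (the distributions and proportions are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600

# Synthetic PDRQ-9 global scores (1-5 scale) and PDR Likert ratings;
# group proportions loosely mirror those reported (3.3% / 44.5% / 52.2%)
scores = np.clip(rng.normal(4.3, 0.8, n), 1.0, 5.0)
likert = rng.choice(["inferior", "borderline", "superior"], size=n,
                    p=[0.033, 0.445, 0.522])

# Borderline performance method: the cutoff is the mean PDRQ-9 score
# of the patients who rated the PDR Likert scale as "borderline"
cutoff = scores[likert == "borderline"].mean()
good_pdr = scores > cutoff  # in the study, this cutoff worked out to 3.73
```

With real data, `scores` and `likert` would come from the questionnaires; the classification itself is just the comparison against the borderline-group mean.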
The following variables were included in the multiple logistic regression analysis to identify factors associated with a good PDR, which was considered the dependent variable: female sex, pain-VAS and SF-36 scores (highly correlated to the SF-36 emotional component within the reference range), HAQ-DI score out of the reference range, experience of the attending rheumatologist (trainee vs certified rheumatologist), patient-doctor concordance in the ideal of autonomy (highly correlated to patient-doctor concordance in the paternalistic ideal of autonomy), and the patient's paternalistic ideal of autonomy. Results showed that the patient's paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793–5.113; p ≤ 0.001), the patient's SF-36 global score (OR, 1.014; 95% CI, 1.003–1.025; p = 0.011), the patient's female sex (OR, 0.460; 95% CI, 0.233–0.010; p = 0.026), and being a certified/senior rheumatologist (OR, 1.526; 95% CI, 1.059–2.200; p = 0.024) were associated with a good PDR. 
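Odds ratios and 95% CIs like those reported are the exponentiated coefficients (and Wald interval bounds) of a logistic model. A self-contained sketch on synthetic data, assuming a plain Newton-Raphson fit; the predictor names and effect sizes are illustrative, not the study's:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Plain Newton-Raphson fit for logistic regression (no regularization)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    H = X.T @ (X * (p * (1.0 - p))[:, None])
    se = np.sqrt(np.diag(np.linalg.inv(H)))        # Wald standard errors
    return beta, se

rng = np.random.default_rng(0)
n = 600
# Hypothetical predictors echoing the model's variables (all synthetic)
female = rng.binomial(1, 0.86, n).astype(float)
sf36 = rng.uniform(20.0, 100.0, n)                 # SF-36 global score
paternalistic = rng.binomial(1, 0.85, n).astype(float)
certified = rng.binomial(1, 0.87, n).astype(float)
lin = -1.0 + 1.1 * paternalistic + 0.014 * sf36 - 0.8 * female + 0.4 * certified
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin))).astype(float)  # good PDR (0/1)

X = np.column_stack([np.ones(n), female, sf36, paternalistic, certified])
beta, se = fit_logistic(X, y)
odds_ratios = np.exp(beta)                                     # OR per variable
ci_95 = np.exp(np.column_stack([beta - 1.96 * se, beta + 1.96 * se]))
```

The same exponentiation step explains why a continuous predictor such as the SF-36 score can be "significant" with an OR close to 1 (here, 1.014 per point): the effect compounds over the score's range.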
CONCLUSIONS
The PDR is a complex, dynamic, and multidisciplinary phenomenon that needs to be approached from a cultural perspective. The PDR might also be conceived as a highly valuable outcome in itself, the quality of which influences disease outcomes, the patient's satisfaction with care and adherence to treatment, and the clinician's satisfaction at work. In Mexican outpatients with rheumatic diseases, we found factors associated with a good PDR that were related to characteristics of both the patient and the clinician. Insights from this study are of great value for the development of strategies targeted at building solid relationships and improving communication between patients and doctors.
[ "Ethics", "Study Design, Setting, and Study Population", "Study Maneuvers", "Instruments Description", "Definitions", "Statistical Analysis", "Population Characteristics", "Description of the PDR in the Study Population", "Factors Associated With Good PDR" ]
[ "The study was approved by the Internal Review Board of the INCMyN-SZ (Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán) (reference number: IRE-3005-19-20-1). All the patients from the Outpatient Clinic of the Department of Immunology and Rheumatology (OCDIR) who agreed to participate provided written informed consent.\nBefore patient enrollment, the study was presented to all the rheumatologists/trainees assigned to the OCDIR during the study period, and all of them agreed to participate.", "The study has been previously described.6 Briefly, it was cross-sectional and performed between July 2019 and February 2020, at the OCDIR of a tertiary-care level and academic center for rheumatic diseases, located in Mexico City. STROBE's guidelines were followed (Appendix, http://links.lww.com/RHU/A394).\nThe INCMyN-SZ belongs to the National Institutes of Health of Mexico. Patients had Federal government health coverage depending on their socioeconomic level, which was defined by social workers after the patient's interview and assessment of the income-to-needs ratio. Patients had to pay for their medication, medical assistance, laboratories, and diagnostic imaging studies; however, up to 75% of the patients had at least 70% health coverage.\nEleven certified rheumatologists and 10 trainees in rheumatology were assigned to the OCDIR, and all were self-defined as Mexican. In addition, approximately 5000 patients with at least 1 visit to the outpatient clinic, self-referred as Mexican and with a variety of rheumatic diseases, attended the OCDIR. At the first visit to the OCDIR, patients were assigned a primary rheumatologist, who was maintained throughout the patient's follow-up; however, patients assigned to trainees in rheumatology changed their primary physician every 2 years (training program duration). 
Patients could be assigned a different primary rheumatologist upon request.\nThe 10 most frequent diagnoses (n = 4476) based on the attending rheumatologist criteria were SLE in 1652 patients (33%), RA in 1578 (31.6%), systemic sclerosis in 239 (4.8%), systemic vasculitis in 220 (4.4%), primary Sjögren syndrome (PSS) in 190 (3.8%), spondyloarthritis in 174 (3.5%), inflammatory myopathies and primary antiphospholipid syndrome in 150 patients each (3%), mixed connective tissue disease in 94 patients (1.9%), and adult Still disease in 29 patients (0.6%). Finally, 524 patients (10.5%) had other diagnoses.\nAll the patients who were consecutively seen at the OCDIR during the study period, and had a defined rheumatic disease according to the criteria of the attending rheumatologist, were invited to participate. Exclusion criteria included patients on palliative care, with overlap syndrome (except secondary Sjögren syndrome), and with uncontrolled comorbid conditions.", "All included patients were invited to evaluate the PDR at the end of their consultation, and to complete the Patient-Doctor Relationship Questionnaire (PDRQ-9)20 and a PDR Likert Scale. 
Patients additionally completed the Spanish version of the Health Assessment Questionnaire–Disability Index (HAQ-DI) to assess disability,21 the Short-Form 36 items (SF-36) to assess health-related quality of life (HRQoL),22 a visual analog scale (VAS) to assess pain, and the Ideal Patient Autonomy Scale (IPAS).6 Physicians assigned to the OCDIR also completed the IPAS.\nRelevant sociodemographic variables (sex, age, formal education, socioeconomic level, religious beliefs, economic dependency, living with a partner, and access to the social security system), disease-related variables (disease duration, years of follow-up at the OCDIR, comorbid conditions and Charlson comorbidity score,23 participation in clinical trials, and previous hospitalizations and their number), and treatment-related variables (immunosuppressive treatment, number of immunosuppressive drugs per patient, and corticosteroid use) were obtained from all the patients in standardized formats, after a careful chart review and patient interview to confirm the data.\nIn all cases, interviews, questionnaires, and scales were applied in an area designated for research purposes by personnel not involved in patient care.", "The PDRQ-920 assessed the quality of the PDR experienced by the patient through the quantification of the patient's opinion regarding communication, satisfaction, trust, and accessibility in dealing with the doctor and the treatment that followed. The questionnaire is based on a 5-point Likert scale ranging from 1 (not at all appropriate) to 5 (totally appropriate). 
PDRQ-9 scores range from 1 to 5, with higher scores translating into a better PDR.\nThe PDR Likert scale assessed the quality of the PDR experienced by the patient, who is directed to choose among 3 options: inferior, borderline, and superior.\nThe IPAS6 is a self-administered questionnaire that assesses the patient's ideal of autonomy according to 4 subscales that can be further grouped into 2 subscales: patients with an ideal of physician-centered/paternalistic (with information) autonomy and patients with an ideal of patient-centered autonomy. The IPAS can also be applied to the attending physician.", "A good PDR was defined based on a cutoff point established with the borderline performance method.24 Briefly, the PDRQ-9 scores of the patients who rated the PDR Likert scale as borderline were selected (n = 267), and their mean score was calculated as 3.73. Patients who scored the PDRQ-9 with a value >3.73 were considered to have a good PDR, and their counterparts were considered to have a deficient PDR.\nSenior rheumatologists were defined as certified rheumatologists with ≥20 years of clinical experience. Certified rheumatologists were defined as rheumatologists who completed their training program and certification process.", "Descriptive statistics were performed to estimate the frequencies and percentages for categorical variables and the median, interquartile range (IQR) for continuous variables, of the sociodemographic variables, the disease-related and treatment-related variables, the patient-reported outcomes, and the PDR of the study population.\nA PDRQ-9 score was assigned to each patient-rheumatologist encounter. Characteristics of patients with a good PDR were compared with those of patients with deficient PDR, using appropriate tests. Multiple logistic regression analysis was used to establish factors associated with a good PDR, which was considered the dependent variable. 
The selection of the variables to be included was based on statistical significance in the bivariate analysis (p ≤ 0.10), and a limited number of potential confounder variables was also considered. In addition, the number of variables to be included was previously defined to avoid overfitting the model, and correlations between variables were also analyzed.\nMissing data were below 1% and were limited to the SF-36 questionnaire (2 cases); no imputation was performed. In addition, only 496 patients (82.6%) had a predominant ideal of autonomy,6 and their data were included in the regression analysis.\nAll statistical analyses were performed using Statistical Package for the Social Sciences version 21.0 (SPSS; Chicago, IL). A value of p < 0.05 was considered statistically significant.", "A total of 691 ambulatory patients were invited to participate, 90 of whom declined the invitation, primarily due to time constraints, and 1 patient did not complete the PDRQ-9. The characteristics of the 600 patients are depicted in Table 1 and had been previously described.6 Briefly, patients were primarily middle-aged women (86%), with a medium-low socioeconomic status (88.8%), long-standing disease (51.8%), comorbid conditions (58.7%), pain under control (68.5%), no disability (55.3%), and HRQoL out of the reference range (73.7%–78.6%), based on published cutoffs for the pain-VAS, HAQ-DI, and SF-36.25 In addition, they were on immunosuppressive drugs (96.5%). Patients scored high on the PDRQ-9, and 30.5% of the patients rated the PDR with the highest score. 
The patients' diagnoses were as follows: 219 patients (36.5%) had SLE, 191 (31.8%) had RA, 42 (7%) had systemic vasculitis, 23 each (3.8%) had inflammatory myopathy and primary antiphospholipid syndrome, 25 (4.2%) had systemic sclerosis, 28 (4.7%) had spondyloarthritis, 20 each (3.3%) had PSS and mixed connective tissue disease, and 9 patients (1.5%) had adult Still disease.\nPopulation's Characteristics (N = 600)\nData presented as median (IQR) unless otherwise indicated.\naNumber (%) of patients.\nbSSS provides comprehensive health care insurance for public and private employees (including pensioners) and their households, such as outpatient and inpatient health care, hospitalization, paid sick days, disability, and retirement plans. Funded by both the Mexican Federal Government and by contributions from employees and their employers, this system comprises a heterogeneous bundle of autonomous health care institutions, including outpatient clinics, general hospitals, and specialty hospitals (according to Pineda et al 2019, https://doi.org/10.1007/s00296-018-4198-7).\ncLimited to patients with previous hospitalizations.\nSE, socioeconomic (level); SSS, social security system; IDs, immunosuppressive drugs; MD, missing data; DD, disease duration.", "The median (IQR) of the PDRQ-9 score in the entire population was 4.6 (3.4–5). Twenty patients (3.3%) rated the PDR Likert scale as inferior, 267 (44.5%) as borderline, and the remaining 313 patients (52.2%) as superior. Table 2 summarizes the PDRQ-9 global score and individual item scores in the entire population, and the comparison among groups defined according to PDR Likert scale response. 
As expected, the better the PDR Likert scale rating, the higher the global and individual item PDRQ-9 scores.\nGlobal and Individual Item PDRQ-9 Scores in the Entire Population and Comparison Between Patients Classified According to their PDR Likert Scale\nItems 2 (“My doctor has enough time for me”), 4 (“My doctor understands me”), and 7 (“I can talk to my doctor”) obtained lower scores than the remaining items (p ≤ 0.001 for any comparison), and the differences persisted within the patients grouped according to the PDR Likert scale response (inferior, borderline, and superior), as summarized in Table 2 and the Figure.\nIndividual item PDRQ-9 scores in the patients grouped according to their PDR Likert Scale category (inferior, borderline, superior). Color online-figure is available at http://www.jclinrheum.com.\nFinally, among the 600 patient-doctor encounters, 523 (87.2%) involved certified rheumatologists, whereas the remaining 77 encounters (12.8%) involved trainees in rheumatology. Patient-doctor encounters from the former group (with certified rheumatologists) were rated with higher global and individual item PDRQ-9 scores, compared with their counterparts, as summarized in Table 3, except for item 6, “My doctor and I agree on the nature of my medical symptoms,” and item 7, “I can talk to my doctor.”\nComparison of Global and Individual Item PDRQ-9 Scores Between Patient-Doctor Encounters that Involved Certified Rheumatologists and Those That Involved Trainees in Rheumatology", "We first compared medical encounters rated by the patients as deficient PDR (defined as PDRQ-9 score ≤3.73, n = 178) with medical encounters rated by the patients with good PDR (n = 422), and the results are summarized in Supplemental Table, http://links.lww.com/RHU/A393. 
Compared with their counterparts, patients from the former group were more likely to be female (90.4% vs 84.1%, p = 0.053), scored higher on the pain-VAS (18 [3–42.8] vs 11 [0–38.5], p = 0.042), were more likely to have disability based on the HAQ-DI score (58.5% vs 47.8%, p = 0.019), and were less likely to have the SF-36 physical component within the reference range (14.7% vs 24.2%, p = 0.009). Accordingly, patients with deficient PDR had lower SF-36 global scores (56 [43.5–70.5] vs 63.1 [47.9–76.5], p = 0.001); also, they were more frequently involved in patient-doctor encounters with trainees in rheumatology. Finally, they were less likely to have a paternalistic ideal of patient autonomy (74% vs 89.4%, p ≤ 0.001), and were less likely to be concordant with their doctor's ideal of autonomy (62.3% vs 77.7%, p = 0.001, and 65.1% vs 80%, p = 0.001 for patient-doctor concordance with a physician-centered/paternalistic ideal of autonomy).\nThe following variables were included in the multiple logistic regression analysis to identify factors associated with a good PDR, which was considered the dependent variable: female sex, pain-VAS and SF-36 scores (highly correlated to the SF-36 emotional component within the reference range), HAQ-DI score out of the reference range, experience of the attending rheumatologist (trainee vs certified rheumatologist), patient-doctor concordance in the ideal of autonomy (highly correlated to patient-doctor concordance in the paternalistic ideal of autonomy), and the patient's paternalistic ideal of autonomy. 
Results showed that patient paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793–5.113; p ≤ 0.001), patient SF-36 global score (OR, 1.014; 95% CI, 1.003–1.025; p = 0.011), patient's female sex (OR, 0.460; 95% CI, 0.233–0.010; p = 0.026), and being a certified/senior rheumatologist (OR, 1.526; 95% CI, 1.059–2.200; p = 0.024) were associated with a good PDR." ]
[ null, null, null, null, null, null, null, null, null ]
[ "PATIENTS AND METHODS", "Ethics", "Study Design, Setting, and Study Population", "Study Maneuvers", "Instruments Description", "Definitions", "Statistical Analysis", "RESULTS", "Population Characteristics", "Description of the PDR in the Study Population", "Factors Associated With Good PDR", "DISCUSSION", "CONCLUSIONS" ]
[ "Ethics The study was approved by the Internal Review Board of the INCMyN-SZ (Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán) (reference number: IRE-3005-19-20-1). All the patients from the Outpatient Clinic of the Department of Immunology and Rheumatology (OCDIR) who agreed to participate provided written informed consent.\nBefore patient enrollment, the study was presented to all the rheumatologists/trainees assigned to the OCDIR during the study period, and all of them agreed to participate.\nThe study was approved by the Internal Review Board of the INCMyN-SZ (Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán) (reference number: IRE-3005-19-20-1). All the patients from the Outpatient Clinic of the Department of Immunology and Rheumatology (OCDIR) who agreed to participate provided written informed consent.\nBefore patient enrollment, the study was presented to all the rheumatologists/trainees assigned to the OCDIR during the study period, and all of them agreed to participate.\nStudy Design, Setting, and Study Population The study has been previously described.6 Briefly, it was cross-sectional and performed between July 2019 and February 2020, at the OCDIR of a tertiary-care level and academic center for rheumatic diseases, located in Mexico City. STROBE's guidelines were followed (Appendix, http://links.lww.com/RHU/A394).\nThe INCMyN-SZ belongs to the National Institutes of Health of Mexico. Patients had Federal government health coverage depending on their socioeconomic level, which was defined by social workers after patient's interview and income-to-needs ratio's assessment. Patients had to pay for their medication, medical assistance, laboratories, and diagnostic imaging studies; however, up to 75% of the patients had at least 70% health coverage.\nEleven certified rheumatologists and 10 trainees in rheumatology were assigned to the OCDIR, and all were self-defined as Mexican. 
In addition, approximately 5000 patients with at least 1 visit to the outpatient clinic, self-referred as Mexican and with a variety of rheumatic diseases attended the OCDIR. At first visit to the OCDIR, patients were assigned a primary rheumatologist, which was maintained during the entire patient's follow-up, but for patients assigned to trainees in rheumatology, they changed their primary physician every 2 years (training program duration). Patients might be assigned a different primary rheumatologist upon patient's request.\nThe 10 most frequent diagnoses (n = 4476) based on the attending rheumatologist criteria were SLE in 1652 patients (33%), RA in 1578 (31.6%), systemic sclerosis in 239 (4.8%), systemic vasculitis in 220 (4.4%), primary Sjögren syndrome (PSS) in 190 (3.8%), spondyloarthritis in 174 (3.5%), inflammatory myopathies and primary antiphospholipid syndrome in 150 patients each (3%), mixed connective tissue disease in 94 patients (1.9%), and adult Still disease in 29 patients (0.6%). Finally, 524 patients (10.5%) had other diagnosis.\nAll the patients who consecutively who consecutively were seen at the OCDIR during the study period, and had a defined rheumatic disease according to the criteria of the attending rheumatologist, were invited to participate. Exclusion criteria included patients on palliative care, with overlap syndrome (but secondary Sjögren syndrome), and with uncontrolled comorbid conditions.\nThe study has been previously described.6 Briefly, it was cross-sectional and performed between July 2019 and February 2020, at the OCDIR of a tertiary-care level and academic center for rheumatic diseases, located in Mexico City. STROBE's guidelines were followed (Appendix, http://links.lww.com/RHU/A394).\nThe INCMyN-SZ belongs to the National Institutes of Health of Mexico. 
Patients had Federal government health coverage depending on their socioeconomic level, which was defined by social workers after patient's interview and income-to-needs ratio's assessment. Patients had to pay for their medication, medical assistance, laboratories, and diagnostic imaging studies; however, up to 75% of the patients had at least 70% health coverage.\nEleven certified rheumatologists and 10 trainees in rheumatology were assigned to the OCDIR, and all were self-defined as Mexican. In addition, approximately 5000 patients with at least 1 visit to the outpatient clinic, self-referred as Mexican and with a variety of rheumatic diseases attended the OCDIR. At first visit to the OCDIR, patients were assigned a primary rheumatologist, which was maintained during the entire patient's follow-up, but for patients assigned to trainees in rheumatology, they changed their primary physician every 2 years (training program duration). Patients might be assigned a different primary rheumatologist upon patient's request.\nThe 10 most frequent diagnoses (n = 4476) based on the attending rheumatologist criteria were SLE in 1652 patients (33%), RA in 1578 (31.6%), systemic sclerosis in 239 (4.8%), systemic vasculitis in 220 (4.4%), primary Sjögren syndrome (PSS) in 190 (3.8%), spondyloarthritis in 174 (3.5%), inflammatory myopathies and primary antiphospholipid syndrome in 150 patients each (3%), mixed connective tissue disease in 94 patients (1.9%), and adult Still disease in 29 patients (0.6%). Finally, 524 patients (10.5%) had other diagnosis.\nAll the patients who consecutively who consecutively were seen at the OCDIR during the study period, and had a defined rheumatic disease according to the criteria of the attending rheumatologist, were invited to participate. 
Exclusion criteria included patients on palliative care, with overlap syndrome (but secondary Sjögren syndrome), and with uncontrolled comorbid conditions.\nStudy Maneuvers All included patients were invited to evaluate the PDR at the end of their consultation, and to complete the Patient-Doctor Relationship Questionnaire (PDRQ-9)20 and a PDR Likert Scale. Patients additionally completed the Spanish version of the Health Assessment Questionnaire–Disability Index (HAQ-DI) to assess disability21 and the Short-Form 36 items (SF-36) to assess health-related quality of life (HRQoL),22 a visual analog scale (VAS) to assess pain, and the Ideal Patient Autonomy Scale (IPAS).6 Physicians assigned to OCDIR also completed the IPAS.\nRelevant sociodemographic variables (sex, age, formal education, socioeconomic level, religious beliefs, economic dependency, living with a partner, and access to the social security system), disease-related variables (disease duration, years of follow-up at the OCDIR, comorbid conditions and Charlson comorbidity score,23 participation in clinical trials, previous hospitalizations and number), and treatment-related variables (immunosuppressive treatment and number of immunosuppressive drugs/per patient and corticosteroid use) were obtained from all the patients, in standardized formats after a careful chart review and patient interview to confirm the data.\nIn all cases, interviews, questionnaires, and scales were applied in an area designated for research purposes by personnel not involved in patient care.\nAll included patients were invited to evaluate the PDR at the end of their consultation, and to complete the Patient-Doctor Relationship Questionnaire (PDRQ-9)20 and a PDR Likert Scale. 
Patients additionally completed the Spanish version of the Health Assessment Questionnaire–Disability Index (HAQ-DI) to assess disability21 and the Short-Form 36 items (SF-36) to assess health-related quality of life (HRQoL),22 a visual analog scale (VAS) to assess pain, and the Ideal Patient Autonomy Scale (IPAS).6 Physicians assigned to OCDIR also completed the IPAS.\nRelevant sociodemographic variables (sex, age, formal education, socioeconomic level, religious beliefs, economic dependency, living with a partner, and access to the social security system), disease-related variables (disease duration, years of follow-up at the OCDIR, comorbid conditions and Charlson comorbidity score,23 participation in clinical trials, previous hospitalizations and number), and treatment-related variables (immunosuppressive treatment and number of immunosuppressive drugs/per patient and corticosteroid use) were obtained from all the patients, in standardized formats after a careful chart review and patient interview to confirm the data.\nIn all cases, interviews, questionnaires, and scales were applied in an area designated for research purposes by personnel not involved in patient care.\nInstruments Description The PDRQ-920 assessed the quality of the PDR experienced by the patient through the quantification of the patient's opinion regarding communication, satisfaction, trust, and accessibility in dealing with the doctor and the treatment that followed. The questionnaire is based on a 5-point Likert scale ranging from 1 (not at all appropriate) to 5 (totally appropriate). 
PDRQ-9 scores range from 1 to 5, with higher scores translating into a better PDR.\nThe PDR Likert scale assessed the quality of the PDR experienced by the patient, who is directed to choose among 3 options: inferior, borderline, and superior.\nThe IPAS6 is a self-administered questionnaire that assesses patient' ideal of autonomy according to 4 subscales that can be further grouped into 2 subscales: patients with an ideal of physician-centered/paternalistic (with information) autonomy and patients with an ideal of patient-centered autonomy. The IPAS can be applied to the attending physician.\nThe PDRQ-920 assessed the quality of the PDR experienced by the patient through the quantification of the patient's opinion regarding communication, satisfaction, trust, and accessibility in dealing with the doctor and the treatment that followed. The questionnaire is based on a 5-point Likert scale ranging from 1 (not at all appropriate) to 5 (totally appropriate). PDRQ-9 scores range from 1 to 5, with higher scores translating into a better PDR.\nThe PDR Likert scale assessed the quality of the PDR experienced by the patient, who is directed to choose among 3 options: inferior, borderline, and superior.\nThe IPAS6 is a self-administered questionnaire that assesses patient' ideal of autonomy according to 4 subscales that can be further grouped into 2 subscales: patients with an ideal of physician-centered/paternalistic (with information) autonomy and patients with an ideal of patient-centered autonomy. The IPAS can be applied to the attending physician.\nDefinitions A good PDR was defined based on a cutoff point established with the borderline performance method.24 Briefly, the PDRQ-9 of the patients who rated the PDR Likert scale as borderline were selected (n = 267), and their mean score was calculated as 3.73. 
Patients who scored the PDRQ-9 with a value >3.73 were considered to have a good PDR, and their counterparts were considered to have a deficient PDR.
Senior rheumatologists were defined as certified rheumatologists with ≥20 years of clinical experience. Certified rheumatologists were defined as rheumatologists who had completed their training program and certification process.
Statistical Analysis
Descriptive statistics were performed to estimate the frequencies and percentages for categorical variables, and the median and interquartile range (IQR) for continuous variables, of the sociodemographic, disease-related, and treatment-related variables, the patient-reported outcomes, and the PDR of the study population.
A PDRQ-9 score was assigned to each patient-rheumatologist encounter. Characteristics of patients with a good PDR were compared with those of patients with a deficient PDR, using appropriate tests. Multiple logistic regression analysis was used to establish factors associated with a good PDR, which was considered the dependent variable. The selection of the variables to be included was based on statistical significance in the bivariate analysis (p ≤ 0.10), and a limited number of potential confounder variables was also considered. In addition, the number of variables to be included was defined beforehand to avoid overfitting the model, and correlations between variables were also analyzed.
Missing data were below 1% (2 missing values, both in the SF-36 questionnaire); no imputation was performed. In addition, only 496 patients (82.6%) had a predominant ideal of autonomy,6 and their data were included in the regression analysis.
All statistical analyses were performed using the Statistical Package for the Social Sciences version 21.0 (SPSS; Chicago, IL). A value of p < 0.05 was considered statistically significant.
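As an illustration only (not the authors' code), the cutoff step of the borderline performance method described above — take the mean PDRQ-9 score of the patients who rated the PDR Likert scale as borderline, then classify scores strictly above that cutoff as a good PDR — can be sketched as follows; the scores used here are hypothetical:

```python
from statistics import mean

def pdr_cutoff(borderline_scores):
    """Cutoff = mean PDRQ-9 score of patients who rated the PDR Likert scale as borderline."""
    return mean(borderline_scores)

def classify_pdr(score, cutoff):
    """PDRQ-9 scores strictly above the cutoff count as a good PDR."""
    return "good" if score > cutoff else "deficient"

# Hypothetical scores, for illustration; in the study, the 267 borderline
# raters yielded a cutoff of 3.73.
borderline = [3.5, 3.7, 3.9, 3.8, 3.6]
cutoff = pdr_cutoff(borderline)      # 3.7 for these example values
print(classify_pdr(4.6, cutoff))     # prints "good"
print(classify_pdr(3.2, cutoff))     # prints "deficient"
```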
The study was approved by the Internal Review Board of the INCMyN-SZ (Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán) (reference number: IRE-3005-19-20-1). All the patients from the Outpatient Clinic of the Department of Immunology and Rheumatology (OCDIR) who agreed to participate provided written informed consent.
Before patient enrollment, the study was presented to all the rheumatologists/trainees assigned to the OCDIR during the study period, and all of them agreed to participate.
The study has been previously described.6 Briefly, it was cross-sectional and performed between July 2019 and February 2020 at the OCDIR of a tertiary-care, academic center for rheumatic diseases located in Mexico City. STROBE guidelines were followed (Appendix, http://links.lww.com/RHU/A394).
The INCMyN-SZ belongs to the National Institutes of Health of Mexico. Patients had Federal government health coverage depending on their socioeconomic level, which was defined by social workers after a patient interview and an income-to-needs ratio assessment. Patients had to pay for their medication, medical assistance, laboratory tests, and diagnostic imaging studies; however, up to 75% of the patients had at least 70% health coverage.
Eleven certified rheumatologists and 10 trainees in rheumatology were assigned to the OCDIR, and all were self-defined as Mexican. In addition, approximately 5000 patients with at least 1 visit to the outpatient clinic, self-referred as Mexican and with a variety of rheumatic diseases, attended the OCDIR. At the first visit to the OCDIR, patients were assigned a primary rheumatologist, who was maintained during the entire patient's follow-up; however, patients assigned to trainees in rheumatology changed their primary physician every 2 years (training program duration).
Patients might be assigned a different primary rheumatologist upon the patient's request.
The 10 most frequent diagnoses (n = 4476), based on the attending rheumatologist's criteria, were SLE in 1652 patients (33%), RA in 1578 (31.6%), systemic sclerosis in 239 (4.8%), systemic vasculitis in 220 (4.4%), primary Sjögren syndrome (PSS) in 190 (3.8%), spondyloarthritis in 174 (3.5%), inflammatory myopathies and primary antiphospholipid syndrome in 150 patients each (3%), mixed connective tissue disease in 94 patients (1.9%), and adult Still disease in 29 patients (0.6%). Finally, 524 patients (10.5%) had other diagnoses.
All the patients who were consecutively seen at the OCDIR during the study period and had a defined rheumatic disease according to the criteria of the attending rheumatologist were invited to participate. Exclusion criteria were palliative care, overlap syndrome (except secondary Sjögren syndrome), and uncontrolled comorbid conditions.
All included patients were invited to evaluate the PDR at the end of their consultation and to complete the Patient-Doctor Relationship Questionnaire (PDRQ-9)20 and a PDR Likert scale.
Population Characteristics
A total of 691 ambulatory patients were invited to participate; 90 declined the invitation, primarily due to time constraints, and 1 patient did not complete the PDRQ-9. The characteristics of the 600 included patients are depicted in Table 1 and have been previously described.6 Briefly, patients were primarily middle-aged women (86%) with a medium-low socioeconomic status (88.8%), long-standing disease (51.8%), comorbid conditions (58.7%), pain under control (68.5%), no disability (55.3%), and HRQoL out of the reference range (73.7%–78.6%), based on published cutoffs for the pain-VAS, HAQ-DI, and SF-36.25 In addition, they were on immunosuppressive drugs (96.5%). Patients scored high on the PDRQ-9, and 30.5% of the patients rated the PDR with the highest score.
The patients' diagnoses were as follows: 219 patients (36.5%) had SLE, 191 (31.8%) had RA, 42 (7%) had systemic vasculitis, 23 each (3.8%) had inflammatory myopathy and primary antiphospholipid syndrome, 25 (4.2%) had systemic sclerosis, 28 (4.7%) had spondyloarthritis, 20 each (3.3%) had PSS and mixed connective tissue disease, and 9 patients (1.5%) had adult Still disease.
Population's Characteristics (N = 600)
Data presented as median (IQR) unless otherwise indicated.
aNumber (%) of patients.
bThe SSS provides comprehensive health care insurance for public and private employees (including pensioners) and their households, such as outpatient and inpatient health care, hospitalization, paid sick days, disability, and retirement plans. Funded by both the Mexican Federal Government and by contributions from employees and their employers, this system comprises a heterogeneous bundle of autonomous health care institutions, including outpatient clinics, general hospitals, and specialty hospitals (according to Pineda et al 2019, https://doi.org/10.1007/s00296-018-4198-7).
cLimited to patients with previous hospitalizations.
SE, socioeconomic (level); SSS, social security system; IDs, immunosuppressive drugs; MD, missing data; DD, disease duration.
Description of the PDR in the Study Population
The median (IQR) PDRQ-9 score in the entire population was 4.6 (3.4–5). Twenty patients (3.3%) rated the PDR Likert scale as inferior, 267 (44.5%) as borderline, and the remaining 313 patients (52.2%) as superior. Table 2 summarizes the PDRQ-9 global score and individual item scores in the entire population, and the comparison among groups defined according to the PDR Likert scale response.
As expected, the better the PDR Likert scale rating, the higher the global and individual item PDRQ-9 scores.
Global and Individual Item PDRQ-9 Scores in the Entire Population and Comparison Between Patients Classified According to Their PDR Likert Scale
Items 2 (“My doctor has enough time for me”), 4 (“My doctor understands me”), and 7 (“I can talk to my doctor”) obtained lower scores than the remaining items (p ≤ 0.001 for any comparison), and the differences persisted within the patients grouped according to the PDR Likert scale response (inferior, borderline, and superior), as summarized in Table 2 and the Figure.
Individual item PDRQ-9 scores in the patients grouped according to their PDR Likert scale category (inferior, borderline, superior). Color online figure is available at http://www.jclinrheum.com.
Finally, among the 600 patient-doctor encounters, 523 (87.2%) involved certified rheumatologists, whereas the remaining 77 encounters (12.8%) involved trainees in rheumatology. Patient-doctor encounters with certified rheumatologists were rated with higher global and individual item PDRQ-9 scores compared with their counterparts, as summarized in Table 3, except for item 6 (“My doctor and I agree on the nature of my medical symptoms”) and item 7 (“I can talk to my doctor”).
Comparison of Global and Individual Item PDRQ-9 Scores Between Patient-Doctor Encounters That Involved Certified Rheumatologists and Those That Involved Trainees in Rheumatology
Factors Associated With Good PDR
We first compared medical encounters rated by the patients as deficient PDR (defined as PDRQ-9 score ≤3.73, n = 178) with medical encounters rated by the patients as good PDR (n = 422); the results are summarized in the Supplemental Table, http://links.lww.com/RHU/A393.
Compared with their counterparts, patients with a deficient PDR were more likely to be female (90.4% vs 84.1%, p = 0.053), scored higher on the pain-VAS (18 [3–42.8] vs 11 [0–38.5], p = 0.042), were more likely to have disability based on the HAQ-DI score (58.5% vs 47.8%, p = 0.019), and were less likely to have the SF-36 physical component within the reference range (14.7% vs 24.2%, p = 0.009). Accordingly, patients with a deficient PDR had lower SF-36 global scores (56 [43.5–70.5] vs 63.1 [47.9–76.5], p = 0.001); they were also more frequently involved in patient-doctor encounters with trainees in rheumatology. Finally, they were less likely to have a paternalistic ideal of patient autonomy (74% vs 89.4%, p ≤ 0.001) and less likely to be concordant with their doctor's ideal of autonomy (62.3% vs 77.7%, p = 0.001 overall, and 65.1% vs 80%, p = 0.001 for patient-doctor concordance with a physician-centered/paternalistic ideal of autonomy).
The following variables were included in the multiple logistic regression analysis to identify factors associated with a good PDR, which was considered the dependent variable: female sex, pain-VAS and SF-36 scores (highly correlated with the SF-36 emotional component within the reference range), HAQ-DI score out of the reference range, experience of the attending rheumatologist in the patient-doctor encounter (trainee vs certified rheumatologist), patient-doctor concordance in the ideal of autonomy (highly correlated with patient-doctor concordance in a paternalistic ideal of autonomy), and the patient's paternalistic ideal of autonomy.
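For context, a minimal sketch of how an odds ratio with a Wald 95% confidence interval is computed from a 2×2 table; this is an illustration only, with hypothetical counts — the study itself fitted a multivariable logistic regression in SPSS, so the ORs it reports are adjusted estimates, not table-based ones:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (good vs deficient PDR by some binary factor);
# not study data.
or_, lo, hi = odds_ratio_ci(a=90, b=30, c=60, d=60)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```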
Results showed that the patient's paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793–5.113; p ≤ 0.001), the patient's SF-36 global score (OR, 1.014; 95% CI, 1.003–1.025; p = 0.011), the patient's female sex (OR, 0.460; 95% CI, 0.233–0.010; p = 0.026), and being a certified/senior rheumatologist (OR, 1.526; 95% CI, 1.059–2.200; p = 0.024) were associated with a good PDR.
The study focused on the quality of the PDR, in which participants are greatly influenced by the social and cultural factors that define each other.26 Accordingly, the results complement the current knowledge of the topic, which has been conceived based on studies primarily performed in developed countries (the United States, Northern European countries, the United Kingdom, and Japan) and in populations with a different anthropologic background.9–15,27
First, the study revealed that the majority of the primarily Mexican female patients with long-standing rheumatic diseases perceived a good PDR, which was more evident among medical encounters that involved certified rheumatologists. Moreover, some components of the PDR were rated lower by the patients, particularly the perception of the time spent with the clinician, of being understood by the doctor, and of the accessibility to talk to the doctor.
Similar results have been observed27 and could be explained by the substantial follow-up of the underlying rheumatic disease of the patients included, which might have biased the PDRQ-9 scores toward higher values. In addition, 10 certified rheumatologists were involved in the majority of the medical encounters; clinicians' knowledge and clinical expertise shape treatment preferences and have the potential to influence shared decision-making (SDM), which improves the quality of care.28 Meanwhile, trust in the physician develops over time, characterizes a long-term PDR, and impacts patient satisfaction with care, which might be considered a surrogate of a good PDR.27 Moreover, medical encounters with trainees lack physician continuity on repeat clinical visits, which has been associated with a less positive perception of physician style and physician trust, and ultimately affects the PDR.29 Finally, time constraints have been recognized as a caveat of the quality of care in busy medical practice and limit the application of SDM.30,31 In RA patients, a longer consultation time of 10 minutes has been associated with a slightly higher SDM score.32 In addition, patients' lower scores for being understood by and having access to their doctor might reflect the well-known misalignment between patients' and physicians' values, preferences, and perceptions of shared goals.33,34
A second relevant finding was that, in our population, the patient's paternalistic ideal of autonomy, the patient's quality of life, and the degree of experience of the attending rheumatologist were associated with a good PDR, whereas female sex was protective.
It is generally accepted that active rheumatic
patients' participation in their interaction with rheumatologists is associated with health care satisfaction, which might be considered a surrogate for a good PDR.11,27,30 Nonetheless, the PDR is a complex and dynamic construct that is shaped by components highly nuanced by the cultural background.27 Several studies have confirmed that Mexican patients with rheumatic diseases do not desire or undertake an active role at the time of their consultation.6,17,18,35 Singh et al36 found that 40% of United States and Canadian patients with cancer experienced discordance between the preferred and the experienced decision-making role, and highlighted the need to deliver the type of experience that the patients prefer in terms of their decision-making role. In the current study, the majority of patients (and physicians) reported a paternalistic ideal of autonomy,6 which explains its association with a good PDR. Also, and in agreement with our results, Ishikawa et al12 studied 115 Japanese RA patients who were under the continuous care of 8 rheumatologists. They found that, among patients who preferred autonomous decision-making, the likelihood of being understood was positively associated with the extent of reported participation in visit communication, whereas such a relationship was less evident among those with a lower preference for active decision-making.
Studies involving patients with rheumatic diseases suggest that the nature of the PDR can have a significant impact on HRQoL, which can be assessed with the SF-36.11 A possible explanation was proposed by Freburger et al,27 who argued that sicker patients deal with the health care system more frequently and are more likely to have problems with the care they receive and blame the physician because they are not getting better.
The authors evaluated trust in the rheumatologist among 713 patients with RA, osteoarthritis, and fibromyalgia from North Carolina and found that patients with poor health and HRQoL reported lower levels of trust (a component of the PDRQ-9). Similarly, Beusterien et al9 found that positive physician interaction with patients led to greater satisfaction with treatment and more favorable emotional health among 302 SLE patients from the United States.
The association between the degree of experience of the attending rheumatologist and a good PDR might be explained based on 3 arguments. First, experienced rheumatologists might be perceived by patients as paternal authoritative figures, and there is a respect for such figures among Hispanic patients.31 Second, in the Hispanic community, there is an imbalance in social status between the patient and the physician, which favors a high-power distance culture, where patients expect the physician to take a more authoritative approach to the medical encounter, which is in line with the preferred ideal of autonomy of our patients.6,7 Third, as previously stated, experienced rheumatologists might build solid and trustful relationships with their patients, which are particularly relevant for the PDR in Mediterranean and Latin American cultures.29,37
Finally, sex disparities in patients' experiences have received little attention, although our results were confirmed in nonrheumatic populations.38 Men have generally reported better experiences with specific aspects of outpatient care, and the opposite has been true regarding dissatisfaction with nursing care and staff attitude.38 Regarding inpatient experiences, in general, women also reported fewer positive experiences than men, with the exception of doctor communication, which is a component of the PDR.38
There are a few limitations of the study, which need to be considered.
First, we used the PDRQ-9 to assess the PDR, but it is limited to the patients' perspective; in addition, it has a substantial ceiling effect, which translates into a poor capacity to discriminate among patients who scored high on the PDR. Second, the study had a cross-sectional design, and only associations can be inferred. Third, the study was conducted at a single academic center where referred patients might have particular characteristics; therefore, the results may not be generalizable to other populations. Finally, relevant variables that are known to affect the PDR, such as patient-physician sex disparity, were not considered in the regression analysis.
CONCLUSIONS: The PDR is a complex, dynamic, and multidisciplinary phenomenon that needs to be approached from a cultural perspective. The PDR might also be conceived as a highly valuable outcome in itself, the quality of which influences disease outcomes, the patient's satisfaction with care and adherence to treatment, and the clinician's satisfaction at work. In Mexican outpatients with rheumatic diseases, we found factors associated with a good PDR that were related to characteristics of both the patient and the clinician. Insights from this study are of great value for the development of strategies targeted at building solid relationships and improving communication between patients and doctors.
KEYWORDS: patient-doctor relationship; rheumatic diseases; autonomy ideal; paternalism
PATIENTS AND METHODS: Ethics
The study was approved by the Internal Review Board of the INCMyN-SZ (Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán) (reference number: IRE-3005-19-20-1). All the patients from the Outpatient Clinic of the Department of Immunology and Rheumatology (OCDIR) who agreed to participate provided written informed consent. Before patient enrollment, the study was presented to all the rheumatologists/trainees assigned to the OCDIR during the study period, and all of them agreed to participate.
Study Design, Setting, and Study Population
The study has been previously described.6 Briefly, it was cross-sectional and performed between July 2019 and February 2020, at the OCDIR of a tertiary-care level and academic center for rheumatic diseases, located in Mexico City. STROBE's guidelines were followed (Appendix, http://links.lww.com/RHU/A394). The INCMyN-SZ belongs to the National Institutes of Health of Mexico. Patients had Federal government health coverage depending on their socioeconomic level, which was defined by social workers after patient's interview and income-to-needs ratio's assessment. Patients had to pay for their medication, medical assistance, laboratories, and diagnostic imaging studies; however, up to 75% of the patients had at least 70% health coverage. Eleven certified rheumatologists and 10 trainees in rheumatology were assigned to the OCDIR, and all were self-defined as Mexican.
In addition, approximately 5000 patients with at least 1 visit to the outpatient clinic, self-referred as Mexican and with a variety of rheumatic diseases, attended the OCDIR. At first visit to the OCDIR, patients were assigned a primary rheumatologist, who was maintained during the entire patient's follow-up; however, patients assigned to trainees in rheumatology changed their primary physician every 2 years (training program duration). Patients might be assigned a different primary rheumatologist upon patient's request. The 10 most frequent diagnoses (n = 4476) based on the attending rheumatologist criteria were SLE in 1652 patients (33%), RA in 1578 (31.6%), systemic sclerosis in 239 (4.8%), systemic vasculitis in 220 (4.4%), primary Sjögren syndrome (PSS) in 190 (3.8%), spondyloarthritis in 174 (3.5%), inflammatory myopathies and primary antiphospholipid syndrome in 150 patients each (3%), mixed connective tissue disease in 94 patients (1.9%), and adult Still disease in 29 patients (0.6%). Finally, 524 patients (10.5%) had other diagnoses. All the patients who were consecutively seen at the OCDIR during the study period, and had a defined rheumatic disease according to the criteria of the attending rheumatologist, were invited to participate. Exclusion criteria included patients on palliative care, with overlap syndrome (except secondary Sjögren syndrome), and with uncontrolled comorbid conditions.
Study Maneuvers
All included patients were invited to evaluate the PDR at the end of their consultation, and to complete the Patient-Doctor Relationship Questionnaire (PDRQ-9)20 and a PDR Likert scale. Patients additionally completed the Spanish version of the Health Assessment Questionnaire–Disability Index (HAQ-DI) to assess disability,21 the Short-Form 36 items (SF-36) to assess health-related quality of life (HRQoL),22 a visual analog scale (VAS) to assess pain, and the Ideal Patient Autonomy Scale (IPAS).6 Physicians assigned to the OCDIR also completed the IPAS. Relevant sociodemographic variables (sex, age, formal education, socioeconomic level, religious beliefs, economic dependency, living with a partner, and access to the social security system), disease-related variables (disease duration, years of follow-up at the OCDIR, comorbid conditions and Charlson comorbidity score,23 participation in clinical trials, and previous hospitalizations and their number), and treatment-related variables (immunosuppressive treatment, number of immunosuppressive drugs per patient, and corticosteroid use) were obtained from all the patients, in standardized formats, after a careful chart review and patient interview to confirm the data. In all cases, interviews, questionnaires, and scales were applied in an area designated for research purposes by personnel not involved in patient care.
Instruments Description
The PDRQ-9 assessed the quality of the PDR experienced by the patient through the quantification of the patient's opinion regarding communication, satisfaction, trust, and accessibility in dealing with the doctor and the treatment that followed.20 The questionnaire is based on a 5-point Likert scale ranging from 1 (not at all appropriate) to 5 (totally appropriate). PDRQ-9 scores range from 1 to 5, with higher scores translating into a better PDR. The PDR Likert scale assessed the quality of the PDR experienced by the patient, who is directed to choose among 3 options: inferior, borderline, and superior. The IPAS6 is a self-administered questionnaire that assesses the patient's ideal of autonomy according to 4 subscales that can be further grouped into 2 subscales: patients with an ideal of physician-centered/paternalistic (with information) autonomy and patients with an ideal of patient-centered autonomy. The IPAS can also be applied to the attending physician.
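The PDRQ-9 scoring described above (9 items, each on a 1-to-5 Likert scale, global score the mean of the items) can be sketched as follows. This is illustrative code only, not the authors' scoring software; the function name and example responses are hypothetical.

```python
# Illustrative PDRQ-9 scoring sketch: each of the 9 items is rated on a
# 1-5 Likert scale, and the global score (also 1-5) is the item mean,
# with higher scores translating into a better perceived PDR.

def score_pdrq9(items):
    """Return the global PDRQ-9 score: the mean of 9 Likert responses (1-5)."""
    if len(items) != 9:
        raise ValueError("PDRQ-9 requires exactly 9 item responses")
    if any(not 1 <= x <= 5 for x in items):
        raise ValueError("each response must be on the 1-5 Likert scale")
    return sum(items) / 9

# Hypothetical patient rating most items 5 but items 2, 4, and 7 lower,
# mirroring the item pattern reported in the Results.
print(round(score_pdrq9([5, 3, 5, 3, 5, 5, 3, 5, 5]), 2))  # 4.33
```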
Definitions
A good PDR was defined based on a cutoff point established with the borderline performance method.24 Briefly, the PDRQ-9 scores of the patients who rated the PDR Likert scale as borderline were selected (n = 267), and their mean score was calculated as 3.73. Patients who scored the PDRQ-9 with a value >3.73 were considered to have a good PDR, and their counterparts were considered to have a deficient PDR. Senior rheumatologists were defined as certified rheumatologists with ≥20 years of clinical experience. Certified rheumatologists were defined as rheumatologists who completed their training program and certification process.
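The borderline performance method for the cutoff can be sketched as below. The data and function names are hypothetical; in the study itself, the cutoff (3.73) was the mean PDRQ-9 score of the 267 patients who rated the PDR Likert scale as borderline.

```python
# Sketch of the borderline performance method: the cutoff separating a
# "good" from a "deficient" PDR is the mean PDRQ-9 score of the patients
# whose PDR Likert scale rating was "borderline".

def borderline_cutoff(pdrq9_scores, likert_labels):
    """Mean PDRQ-9 score among patients whose Likert rating is 'borderline'."""
    borderline = [s for s, lab in zip(pdrq9_scores, likert_labels)
                  if lab == "borderline"]
    return sum(borderline) / len(borderline)

def classify_pdr(score, cutoff):
    """Scores strictly above the cutoff count as a good PDR."""
    return "good" if score > cutoff else "deficient"

# Hypothetical mini-cohort: the two borderline raters give (3.5 + 3.9) / 2 = 3.7.
scores = [4.8, 3.5, 3.9, 2.9, 4.6]
labels = ["superior", "borderline", "borderline", "inferior", "superior"]
cutoff = borderline_cutoff(scores, labels)
print([classify_pdr(s, cutoff) for s in scores])
```

Anchoring the cutoff to borderline raters, rather than to an arbitrary scale midpoint, ties the definition of a good PDR to how patients themselves use the Likert categories.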
Statistical Analysis
Descriptive statistics were performed to estimate the frequencies and percentages for categorical variables and the median and interquartile range (IQR) for continuous variables, for the sociodemographic, disease-related, and treatment-related variables, the patient-reported outcomes, and the PDR of the study population. A PDRQ-9 score was assigned to each patient-rheumatologist encounter. Characteristics of patients with a good PDR were compared with those of patients with a deficient PDR, using appropriate tests. Multiple logistic regression analysis was used to establish factors associated with a good PDR, which was considered the dependent variable. The selection of the variables to be included was based on statistical significance in the bivariate analysis (p ≤ 0.10), and a limited number of potential confounder variables was also considered. In addition, the number of variables to be included was previously defined to avoid overfitting the model, and correlations between variables were also analyzed. Missing data were below 1% and applied only to the SF-36 questionnaire (2 missing values); no imputation was performed. In addition, only 496 patients (82.6%) had a predominant ideal of autonomy,6 and their data were included in the regression analysis. All statistical analyses were performed using Statistical Package for the Social Sciences version 21.0 (SPSS; Chicago, IL). A value of p < 0.05 was considered statistically significant.
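For readers unfamiliar with how odds ratios (ORs) and their confidence intervals relate to logistic-regression output, the sketch below shows the standard back-transformation from a logit coefficient. The beta and standard error are illustrative values chosen so the result lands near the reported association of the paternalistic ideal of autonomy with a good PDR (OR 3.029; 95% CI 1.793–5.113); they are not numbers taken from the study, which used SPSS.

```python
import math

# Standard back-transformation from a logistic-regression coefficient:
# OR = exp(beta), and the 95% CI is exp(beta +/- 1.96 * SE(beta)).

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, CI lower bound, CI upper bound) for a logit coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative beta/SE pair approximating the reported OR for the
# paternalistic ideal of autonomy.
or_, lo, hi = odds_ratio_ci(beta=1.108, se=0.267)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```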
RESULTS: Population Characteristics
A total of 691 ambulatory patients were invited to participate, 90 of whom declined the invitation, primarily due to time constraints, and 1 patient did not complete the PDRQ-9. The characteristics of the 600 patients are depicted in Table 1 and had been previously described.6 Briefly, patients were primarily middle-aged women (86%), with a medium-low socioeconomic status (88.8%), long-standing disease (51.8%), comorbid conditions (58.7%), pain under control (68.5%), no disability (55.3%), and HRQoL out of the reference range (73.7%–78.6%), based on published cutoffs for the pain-VAS, HAQ-DI, and SF-36.25 In addition, they were on immunosuppressive drugs (96.5%). Patients scored high on the PDRQ-9, and 30.5% of the patients rated the PDR with the highest score.
The patients' diagnoses were as follows: 219 patients (36.5%) had SLE, 191 (31.8%) had RA, 42 (7%) had systemic vasculitis, 23 each (3.8%) had inflammatory myopathy and primary antiphospholipid syndrome, 25 (4.2%) had systemic sclerosis, 28 (4.7%) had spondyloarthritis, 20 each (3.3%) had PSS and mixed connective tissue disease, and 9 patients (1.5%) had adult Still disease.
Population's Characteristics (N = 600). Data presented as median (IQR) unless otherwise indicated. aNumber (%) of patients. bSSS provides comprehensive health care insurance for public and private employees (including pensioners) and their households, such as outpatient and inpatient health care, hospitalization, paid sick days, disability, and retirement plans. Funded by both the Mexican Federal Government and by contributions from employees and their employers, this system comprises a heterogeneous bundle of autonomous health care institutions, including outpatient clinics, general hospitals, and specialty hospitals (according to Pineda et al 2019, https://doi.org/10.1007/s00296-018-4198-7). cLimited to patients with previous hospitalizations. SE, socioeconomic (level); SSS, social security system; IDs, immunosuppressive drugs; MD, missing data; DD, disease duration.
Description of the PDR in the Study Population
The median (IQR) of the PDRQ-9 score in the entire population was 4.6 (3.4–5). Twenty patients (3.3%) rated the PDR Likert scale as inferior, 267 (44.5%) as borderline, and the remaining 313 patients (52.2%) as superior. Table 2 summarizes the PDRQ-9 global score and individual item scores in the entire population, and the comparison among groups defined according to the PDR Likert scale response. As expected, the better the PDR Likert scale rating, the higher the global and individual item PDRQ-9 scores.
Global and Individual Item PDRQ-9 Scores in the Entire Population and Comparison Between Patients Classified According to Their PDR Likert Scale
Items 2 (“My doctor has enough time for me”), 4 (“My doctor understands me”), and 7 (“I can talk to my doctor”) obtained lower scores than the remaining items (p ≤ 0.001 for any comparison), and the differences persisted within the patients grouped according to the PDR Likert scale response (inferior, borderline, and superior), as summarized in Table 2 and the Figure. Individual item PDRQ-9 scores in the patients grouped according to their PDR Likert scale category (inferior, borderline, superior). Color online-figure is available at http://www.jclinrheum.com. Finally, among the 600 patient-doctor encounters, 523 (87.2%) involved certified rheumatologists, whereas the remaining 77 encounters (12.8%) involved trainees in rheumatology. Patient-doctor encounters from the former group (with certified rheumatologists) were rated with higher global and individual item PDRQ-9 scores compared with their counterparts, as summarized in Table 3, except for item 6, “My doctor and I agree on the nature of my medical symptoms,” and item 7, “I can talk to my doctor.”
Comparison of Global and Individual Item PDRQ-9 Scores Between Patient-Doctor Encounters That Involved Certified Rheumatologists and Those That Involved Trainees in Rheumatology
Global and Individual Item PDRQ-9 Scores in the Entire Population and Comparison Between Patients Classified According to their PDR Likert Scale Items 2 (“My doctor has enough time for me”), 4 (“My doctor understand me”), and 7 (“I can talk to my doctor”) obtained lower scores than the remaining items (p ≤ 0.001 for any comparison), and the differences persisted within the patients grouped according to the PDR Likert scale response (inferior, borderline, and superior), as summarized in Table 2 and Figure. Individual item PDRQ-9 scores in the patients grouped according to their PDR Likert Scale category (inferior, borderline, superior). Color online-figure is available at http://www.jclinrheum.com. Finally, among the 600 patient-doctor encounters, 523 (87.2%) involved certified rheumatologists, whereas the remaining 77 encounters (22.8%) involved trainees in rheumatology. Patient-doctor encounters from the former group (with certified rheumatologists) were rated with higher global and individual item PDRQ-9 scores, compared with their counterparts, as summarized in Table 3, but for item 6, “My doctor and I agree on the nature of my medical symptoms” and 7, “I can talk to my doctor.” Comparison of Global and Individual Item PDRQ-9 Scores Between Patient-Doctor Encounters that Involved Certified Rheumatologists and Those That Involved Trainees in Rheumatology Factors Associated With Good PDR We first compared medical encounters rated by the patients as deficient PDR (defined as PDRQ-9 score ≤3.73, n = 178) with medical encounters rated by the patients with good PDR (n = 422), and the results are summarized in Supplemental Table, http://links.lww.com/RHU/A393. 
Compared with their counterparts, patients from the former group were more likely to be female (90.4% vs 84.1%, p = 0.053) and scored higher pain-VAS (18 [3–42.8] vs 11 [0–38.5], p = 0.042), were more likely to have disability based on the HAQ-DI score (58.5% vs 47.8%, p = 0.019), and were less likely to have the SF-36 physical component within the reference range (14.7% vs 24.2%, p = 0.009). Accordingly, patients with deficient PDR had lower SF-36 global scores (56 [43.5–70.5] vs 63.1 [47.9–76.5], p = 0.001); also, they were more frequently involved in patient-doctor encounters with trainees in rheumatology. Finally, they were less likely to have a paternalistic ideal of patient autonomy (74% vs 89.4%, p ≤ 0.001), and were less likely to be concordant with their doctor's ideal of autonomy (62.3% vs 77.7%, p = 0.001, and 65.1% vs 80%, p = 0.001 for patient-doctor concordance with a physician-centered/paternalistic ideal of autonomy). The following variables were included in the multiple logistic regression analysis to identify factors associated with a good PDR, which was considered the dependent variable: whether the patient was female, pain-VAS and SF-36 scores (highly correlated to SF-36 emotional component within the reference range), HAQ-DI score out of reference range, patient-doctor encounters with variable experience of the attending rheumatologist (trainees, certified rheumatologists), patient-doctor concordance in the ideal of autonomy (highly correlated to patient-doctor concordance in paternalistic ideal of autonomy), and patient's paternalistic ideal of autonomy. 
Results showed that patient paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793–5.113; p ≤ 0.001), patient SF-36 global score (OR, 1.014; 95% CI, 1.003–1.025; p = 0.011), patient's female sex (OR, 0.460; 95% CI, 0.233–0.010; p = 0.026), and being a certified/senior rheumatologist (OR, 1.526; 95% CI, 1.059–2.200; p = 0.024) were associated with a good PDR. We first compared medical encounters rated by the patients as deficient PDR (defined as PDRQ-9 score ≤3.73, n = 178) with medical encounters rated by the patients with good PDR (n = 422), and the results are summarized in Supplemental Table, http://links.lww.com/RHU/A393. Compared with their counterparts, patients from the former group were more likely to be female (90.4% vs 84.1%, p = 0.053) and scored higher pain-VAS (18 [3–42.8] vs 11 [0–38.5], p = 0.042), were more likely to have disability based on the HAQ-DI score (58.5% vs 47.8%, p = 0.019), and were less likely to have the SF-36 physical component within the reference range (14.7% vs 24.2%, p = 0.009). Accordingly, patients with deficient PDR had lower SF-36 global scores (56 [43.5–70.5] vs 63.1 [47.9–76.5], p = 0.001); also, they were more frequently involved in patient-doctor encounters with trainees in rheumatology. Finally, they were less likely to have a paternalistic ideal of patient autonomy (74% vs 89.4%, p ≤ 0.001), and were less likely to be concordant with their doctor's ideal of autonomy (62.3% vs 77.7%, p = 0.001, and 65.1% vs 80%, p = 0.001 for patient-doctor concordance with a physician-centered/paternalistic ideal of autonomy). 
The following variables were included in the multiple logistic regression analysis to identify factors associated with a good PDR, which was considered the dependent variable: whether the patient was female, pain-VAS and SF-36 scores (highly correlated to SF-36 emotional component within the reference range), HAQ-DI score out of reference range, patient-doctor encounters with variable experience of the attending rheumatologist (trainees, certified rheumatologists), patient-doctor concordance in the ideal of autonomy (highly correlated to patient-doctor concordance in paternalistic ideal of autonomy), and patient's paternalistic ideal of autonomy. Results showed that patient paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793–5.113; p ≤ 0.001), patient SF-36 global score (OR, 1.014; 95% CI, 1.003–1.025; p = 0.011), patient's female sex (OR, 0.460; 95% CI, 0.233–0.010; p = 0.026), and being a certified/senior rheumatologist (OR, 1.526; 95% CI, 1.059–2.200; p = 0.024) were associated with a good PDR. Population Characteristics: A total of 691 ambulatory patients were invited to participate, 90 of whom declined the invitation, primarily due to time constraints, and 1 patient did not complete the PDRQ-9. The characteristics of the 600 patients are depicted in Table 1 and had been previously described.6 Briefly, patients were primarily middle-aged women (86%), with a medium-low socioeconomic status (88.8%), long-standing disease (51.8%), comorbid conditions (58.7%), pain under control (68.5%), no disability (55.3%), and HRQoL out of the reference range (73.7%–78.6%), based on published cutoffs for the pain-VAS, HAQ-DI, and SF-36.25 In addition, they were on immunosuppressive drugs (96.5%). Patients scored high on the PDRQ-9, and 30.5% of the patients rated the PDR with the highest score. 
The patients' diagnoses were as follows: 219 patients (36.5%) had SLE, 191 (31.8%) had RA, 42 (7%) had systemic vasculitis, 23 each (3.8%) had inflammatory myopathy and primary antiphospholipid syndrome, 25 (4.2%) had systemic sclerosis, 28 (4.7%) had spondyloarthritis, 20 each (3.3%) had PSS and mixed connective tissue disease, and 9 patients (1.5%) had adult Still disease. Population's Characteristics (N = 600) Data presented as median (IQR) as otherwise indicated. aNumber (%) of patients. bSSS provides comprehensive health care insurance for public and private employees (including pensioners) and their households, such as outpatient and inpatient health care, hospitalization, paid sick days, disability, and retirement plans. Funded by both the Mexican Federal Government and by contributions from employees and their employers, this system comprises a heterogeneous bundle of autonomous health care institutions, including outpatient clinics, general hospitals, and specialty hospitals (according to Pineda et al 2019, https://doi.org/10.1007/s00296-018-4198-7). cLimited to patients with previous hospitalizations. SE, socioeconomic (level); SSS, social security system; IDs, immunosuppressive drugs; MD, missing data; DD, disease duration. Description of the PDR in the Study Population: The median (IQR) of the PDRQ-9 score in the entire population was 4.6 (3.4–5). Twenty patients (3.3%) rated the PDR Likert scale as inferior, 267 (44.5%) as borderline, and the 313 patients left (52.2%) as superior. Table 2 summarizes the PDRQ-9 global score and individual item scores in the entire population, and the comparison among groups defined according to PDR Likert scale response. As expected, the better the PDR Likert scale, the higher the global and individual items PDRQ-9 scores. 
Global and Individual Item PDRQ-9 Scores in the Entire Population and Comparison Between Patients Classified According to their PDR Likert Scale Items 2 (“My doctor has enough time for me”), 4 (“My doctor understand me”), and 7 (“I can talk to my doctor”) obtained lower scores than the remaining items (p ≤ 0.001 for any comparison), and the differences persisted within the patients grouped according to the PDR Likert scale response (inferior, borderline, and superior), as summarized in Table 2 and Figure. Individual item PDRQ-9 scores in the patients grouped according to their PDR Likert Scale category (inferior, borderline, superior). Color online-figure is available at http://www.jclinrheum.com. Finally, among the 600 patient-doctor encounters, 523 (87.2%) involved certified rheumatologists, whereas the remaining 77 encounters (22.8%) involved trainees in rheumatology. Patient-doctor encounters from the former group (with certified rheumatologists) were rated with higher global and individual item PDRQ-9 scores, compared with their counterparts, as summarized in Table 3, but for item 6, “My doctor and I agree on the nature of my medical symptoms” and 7, “I can talk to my doctor.” Comparison of Global and Individual Item PDRQ-9 Scores Between Patient-Doctor Encounters that Involved Certified Rheumatologists and Those That Involved Trainees in Rheumatology Factors Associated With Good PDR: We first compared medical encounters rated by the patients as deficient PDR (defined as PDRQ-9 score ≤3.73, n = 178) with medical encounters rated by the patients with good PDR (n = 422), and the results are summarized in Supplemental Table, http://links.lww.com/RHU/A393. 
Compared with their counterparts, patients from the former group were more likely to be female (90.4% vs 84.1%, p = 0.053) and scored higher pain-VAS (18 [3–42.8] vs 11 [0–38.5], p = 0.042), were more likely to have disability based on the HAQ-DI score (58.5% vs 47.8%, p = 0.019), and were less likely to have the SF-36 physical component within the reference range (14.7% vs 24.2%, p = 0.009). Accordingly, patients with deficient PDR had lower SF-36 global scores (56 [43.5–70.5] vs 63.1 [47.9–76.5], p = 0.001); also, they were more frequently involved in patient-doctor encounters with trainees in rheumatology. Finally, they were less likely to have a paternalistic ideal of patient autonomy (74% vs 89.4%, p ≤ 0.001), and were less likely to be concordant with their doctor's ideal of autonomy (62.3% vs 77.7%, p = 0.001, and 65.1% vs 80%, p = 0.001 for patient-doctor concordance with a physician-centered/paternalistic ideal of autonomy). The following variables were included in the multiple logistic regression analysis to identify factors associated with a good PDR, which was considered the dependent variable: whether the patient was female, pain-VAS and SF-36 scores (highly correlated to SF-36 emotional component within the reference range), HAQ-DI score out of reference range, patient-doctor encounters with variable experience of the attending rheumatologist (trainees, certified rheumatologists), patient-doctor concordance in the ideal of autonomy (highly correlated to patient-doctor concordance in paternalistic ideal of autonomy), and patient's paternalistic ideal of autonomy. 
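As a concrete illustration of how odds ratios like those reported above are derived, the sketch below computes a univariate OR with a 95% Wald (Woolf) confidence interval from a 2×2 table. The counts are approximate reconstructions from the reported percentages (89.4% of 422 good-PDR and 74% of 178 deficient-PDR patients with a paternalistic ideal), and the paper's own estimates come from a multivariable model, so this is only a simplified, hypothetical sketch.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Univariate odds ratio with a Wald (Woolf) 95% CI from a 2x2 table.

    a: exposed with outcome, b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Approximate counts: paternalistic ideal (exposure) vs good PDR (outcome)
or_, lo, hi = odds_ratio_ci(a=377, b=45, c=132, d=46)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The univariate estimate lands near the multivariable OR of 3.029 reported in the text; adjustment for the other covariates accounts for the difference.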
DISCUSSION: The study focused on the quality of the PDR, whose participants are greatly influenced by the social and cultural factors that define each other.26 Accordingly, the results complement the current knowledge of the topic, which has been conceived based on studies primarily performed in developed countries (United States, North European countries, United Kingdom, and Japan) and in populations with a different anthropologic background.9–15,27 First, the study revealed that the majority of the primarily Mexican female patients with long-standing rheumatic diseases perceived a good PDR, which was more evident among medical encounters that involved certified rheumatologists. Moreover, some components of the PDR were rated lower by the patients, particularly the perception of the time spent with the clinician, of being understood by the doctor, and of the accessibility to talk to the doctor. Similar results have been observed27 and could be explained by the substantial follow-up of the underlying rheumatic disease of the patients included, which might have biased the PDRQ-9 score toward higher values.
In addition, 10 certified rheumatologists were involved in the majority of the medical encounters; clinicians' knowledge and clinical expertise shape treatment preferences and have the potential to influence shared decision-making (SDM), which improves the quality of care.28 Meanwhile, trust in the physician develops over time, characterizes long-term PDR, and impacts patient satisfaction with care, which might be considered a surrogate of a good PDR.27 Moreover, medical encounters with trainees lack physician continuity on repeat clinical visits, which has been associated with a less positive perception of physician style and lower physician trust, ultimately affecting the PDR.29 Finally, time constraints have been recognized as a caveat of the quality of care in busy medical practice and limit the application of SDM.30,31 In RA patients, a longer consultation time of 10 minutes has been associated with a slightly higher SDM score.32 In addition, a patient's lower scores for being understood by, and for accessibility to, their doctor might reflect the well-known misalignment between patients' and physicians' values, preferences, and perception of shared goals.33,34 A second relevant finding was that, in our population, the patient's paternalistic ideal of autonomy, the patient's quality of life, and the degree of experience of the attending rheumatologist were factors associated with a good PDR, whereas female sex was negatively associated with it.
It is generally accepted that active rheumatic patients' participation in their interaction with rheumatologists is associated with health care satisfaction, which might be considered a surrogate for a good PDR.11,27,30 Nonetheless, the PDR is a complex and dynamic construct that is shaped by components highly nuanced by the cultural background.27 Several studies have confirmed that Mexican patients with rheumatic diseases do not desire or undertake an active role at the time of their consultation.6,17,18,35 Singh et al36 found that 40% of United States and Canadian patients with cancer experienced discordance between their preferred and experienced decision-making roles, and highlighted the need to deliver the type of experience that patients prefer in terms of their decision-making role. In the current study, the majority of patients (and physicians) reported a paternalistic ideal of autonomy,6 which explains its association with a good PDR. Also, and in agreement with our results, Ishikawa et al12 studied 115 Japanese RA patients who were under the continuous care of 8 rheumatologists. They found that, among patients who preferred autonomous decision-making, the likelihood of being understood was positively associated with the extent of reported participation in visit communication, whereas such a relationship was less evident among those with a lower preference for active decision-making. Studies involving patients with rheumatic diseases suggest that the nature of the PDR can have a significant impact on HRQoL, which can be assessed with the SF-36.11 A possible explanation was proposed by Freburger et al,27 who argued that sicker patients deal with the health care system more frequently, are more likely to have problems with the care they receive, and blame the physician because they are not getting better.
The authors evaluated trust in the rheumatologist among 713 patients with RA, osteoarthritis, and fibromyalgia from North Carolina and found that patients with poor health and HRQoL reported lower levels of trust (a component of the PDRQ-9). Similarly, Beusterien et al9 found that positive physician interaction with patients led to greater satisfaction with treatment and more favorable emotional health among 302 SLE patients from the United States. The association between the degree of experience of the attending rheumatologist and a good PDR might be explained based on 3 arguments. First, experienced rheumatologists might be perceived by patients as paternal authoritative figures, and there is a respect for such figures among Hispanic patients.31 Second, in the Hispanic community, there is an imbalance in social status between the patient and the physician, which favors a high-power distance culture, where patients expect the physician to take a more authoritative approach to the medical encounter, which is in line with the preferred ideal of autonomy of our patients.6,7 Third, as previously stated, experienced rheumatologists might build solid and trustful relationships with their patients, which are particularly relevant for the PDR in Mediterranean and Latin American cultures.29,37 Finally, sex disparities in patients' experiences have received little attention, although our results were confirmed in nonrheumatic populations.38 Men have generally reported better experiences with specific aspects of outpatient care, and the opposite has been true regarding dissatisfaction with nursing care and staff attitude.38 Regarding inpatient experiences, in general, women also reported fewer positive experiences than men, with the exception of doctor communication, which is a component of the PDR.38 There are a few limitations of the study, which need to be considered. 
First, we used the PDRQ-9 to assess the PDR, but it is limited to the patients' perspective; in addition, it has a substantial ceiling effect, which translates into a poor capacity to discriminate among patients who scored high on the PDR. Second, the study had a cross-sectional design, and only associations can be inferred. Third, the study was conducted at a single academic center whose referred patients might have particular characteristics; therefore, the results may not be generalizable to other populations. Finally, relevant variables that are known to affect the PDR, such as patient-physician sex disparity, were not considered in the regression analysis. CONCLUSIONS: The PDR is a complex, dynamic, and multidisciplinary phenomenon that needs to be approached from a cultural perspective. The PDR might also be conceived as a highly valuable outcome in itself, the quality of which influences disease outcomes, patients' satisfaction with care and adherence to treatment, and clinician satisfaction at work. In Mexican outpatients with rheumatic diseases, we found factors associated with a good PDR that were related to characteristics of both the patient and the clinician. Insights from this study are of great value for the development of strategies targeted at building solid relationships and improving communication between patients and doctors.
Background: The patient-doctor relationship (PDR) is a complex phenomenon with strong cultural determinants, which impacts health-related outcomes and, accordingly, does have ethical implications. The study objective was to describe the PDR from medical encounters between 600 Mexican outpatients with rheumatic diseases and their attending rheumatologists, and to identify factors associated with a good PDR. Methods: A cross-sectional study was performed. Patients completed the PDRQ-9 (Patient-Doctor Relationship Questionnaire, 9 items), the HAQ-DI (Health Assessment Questionnaire Disability Index), the Short-Form 36 items (SF-36), a pain-visual analog scale, and the Ideal Patient Autonomy Scale. Relevant sociodemographic, disease-related, and treatment-related variables were obtained. Patients assigned a PDRQ-9 score to each patient-doctor encounter. Regression analysis was used to identify factors associated with a good PDR, which was defined based on a cutoff point established using the borderline performance method. Results: Patients were primarily middle-aged female subjects (86%), with substantial disease duration (median, 11.1 years), without disability (HAQ-DI within reference range, 55.3%), and with deteriorated quality of life (SF-36 out of reference range, 73.7%-78.6%). Among them, 36.5% had systemic lupus erythematosus and 31.8% had rheumatoid arthritis. There were 422 patients (70.3%) with a good PDR, and 523 medical encounters (87.2%) involved certified rheumatologists. Patient paternalistic ideal of autonomy (odds ratio [OR], 3.029; 95% confidence interval [CI], 1.793-5.113), SF-36 score (OR, 1.014; 95% CI, 1.003-1.025), female sex (OR, 0.460; 95% CI, 0.233-0.010), and being a certified rheumatologist (OR, 1.526; 95% CI, 1.059-2.200) were associated with a good PDR. Conclusions: Patient-related factors and the degree of experience of the attending physician impact the quality of the PDR in Mexican outpatients with rheumatic diseases.
9,170
399
[ 96, 441, 243, 181, 107, 255, 414, 367, 485 ]
13
[ "patients", "pdr", "patient", "pdrq", "doctor", "ideal", "autonomy", "variables", "rheumatologists", "scale" ]
[ "rheumatologist patient request", "ocdir patients assigned", "rheumatologist invited participate", "evaluated trust rheumatologist", "physicians assigned ocdir" ]
[CONTENT] patient-doctor relationship | rheumatic diseases | autonomy ideal | paternalism [SUMMARY]
[CONTENT] Cross-Sectional Studies | Disability Evaluation | Female | Humans | Middle Aged | Physician-Patient Relations | Quality of Life | Rheumatic Diseases | Surveys and Questionnaires [SUMMARY]
[CONTENT] rheumatologist patient request | ocdir patients assigned | rheumatologist invited participate | evaluated trust rheumatologist | physicians assigned ocdir [SUMMARY]
[CONTENT] patients | pdr | patient | pdrq | doctor | ideal | autonomy | variables | rheumatologists | scale [SUMMARY]
[CONTENT] patients | variables | ocdir | patient | pdr | assigned | study | defined | scale | questionnaire [SUMMARY]
[CONTENT] doctor | vs | patients | scores | encounters | patient | global | individual | item | pdr [SUMMARY]
[CONTENT] clinician | satisfaction | pdr | building | pdr related patient characteristics | outpatients | great | great value | great value development | great value development strategies [SUMMARY]
[CONTENT] patients | pdr | patient | doctor | variables | pdrq | scale | ocdir | rheumatologists | autonomy [SUMMARY]
[CONTENT] ||| Relationship Questionnaire | 9 | SF-36 | the Ideal Patient Autonomy Scale ||| ||| ||| PDR [SUMMARY]
[CONTENT] 86% | 11.1 years | 55.3% | SF-36 | 73.7%-78.6% ||| 36.5% | 31.8% ||| 422 | 70.3% | PDR | 523 | 87.2% ||| ||| ||| 3.029 | 95% ||| CI | 1.793 | 1.014 | 95% | CI | 1.003-1.025 | 0.460 | 95% | CI | 0.233-0.010 | 1.526 | 95% | CI | 1.059 | PDR [SUMMARY]
[CONTENT] PDR | Mexican [SUMMARY]
[CONTENT] PDR ||| PDR | Mexican | PDR ||| ||| Relationship Questionnaire | 9 | SF-36 | the Ideal Patient Autonomy Scale ||| ||| ||| PDR ||| ||| 86% | 11.1 years | 55.3% | SF-36 | 73.7%-78.6% ||| 36.5% | 31.8% ||| 422 | 70.3% | PDR | 523 | 87.2% ||| ||| ||| 3.029 | 95% ||| CI | 1.793 | 1.014 | 95% | CI | 1.003-1.025 | 0.460 | 95% | CI | 0.233-0.010 | 1.526 | 95% | CI | 1.059 | PDR ||| PDR | Mexican [SUMMARY]
Transjugular intrahepatic portosystemic shunt
36405105
Transjugular intrahepatic portosystemic shunt (TIPS) placement is an effective intervention for recurrent tense ascites. Some studies show an increased risk of acute on chronic liver failure (ACLF) associated with TIPS placement. It is not clear whether ACLF in this context is a consequence of TIPS or of the pre-existing liver disease.
BACKGROUND
Two hundred and fourteen patients undergoing their first TIPS placement for recurrent tense ascites at our tertiary-care center between 2007 and 2017 were identified (TIPS group). Three hundred and ninety-eight patients from the same time interval with liver cirrhosis and recurrent tense ascites not undergoing TIPS placement (No TIPS group) were analyzed as a control group. TIPS indication, diagnosis of recurrent ascites, further diagnoses and clinical findings were obtained from a database search and patient records. The in-hospital mortality and ACLF incidence of both groups were compared using 1:1 propensity score matching and multivariate logistic regressions.
METHODS
After propensity score matching, the TIPS and No TIPS groups were comparable in terms of laboratory values and ACLF incidence at hospital admission. There was no detectable difference in mortality (TIPS: 11/214, No TIPS 13/214). During the hospital stay, ACLF occurred more frequently in the TIPS group than in the No TIPS group (TIPS: 70/214, No TIPS: 57/214, P = 0.04). This effect was confined to patients with severely impaired liver function at hospital admission as indicated by a significant interaction term of Child score and TIPS placement in multivariate logistic regression. The TIPS group had a lower ACLF incidence at Child scores < 8 points and a higher ACLF incidence at ≥ 11 points. No significant difference was found between groups in patients with Child scores of 8 to 10 points.
RESULTS
TIPS placement for recurrent tense ascites is associated with an increased rate of ACLF in patients with severely impaired liver function but does not result in higher in-hospital mortality.
CONCLUSION
[ "Humans", "Child", "Ascites", "Portasystemic Shunt, Transjugular Intrahepatic", "Conservative Treatment", "Propensity Score", "Acute-On-Chronic Liver Failure" ]
9669826
INTRODUCTION
Transjugular intrahepatic portosystemic shunt (TIPS) is an effective therapy for complications of portal hypertension, such as ascites or esophageal variceal bleeding. Although TIPS placement is effective against ascites, early studies showed no survival benefit after TIPS placement compared to repeated paracentesis and albumin substitution[1-3]. More recent studies have shown more promising results, such as survival benefit[4-7], improved renal function[8,9] and better quality of life[10,11]. TIPS placement is therefore recommended as the treatment of choice[12,13]. Nevertheless, TIPS placement is an invasive procedure with considerable risks. In addition to hepatic encephalopathy and bleeding complications due to the placement procedure, sudden worsening of liver function is a serious complication. It has been observed after 5% to 10% of TIPS procedures and has a serious prognosis[14,15]. Such an acute deterioration of liver function accompanied by single- or multi-organ-failure is a common complication of advanced liver cirrhosis. This clinical syndrome has been described as acute on chronic liver failure (ACLF)[16]. Due to the risk of liver failure, TIPS placement for ascites is often limited to patients with good liver function and most randomized controlled trials have been conducted in patients with good liver function. It is still unclear how often ACLF occurs after TIPS placement and whether it is due to the TIPS procedure or rather to the severity of the underlying liver disease[17]. Recent recommendations argue against strict cut-off values for MELD, Child or other scoring systems. Instead, they recommend individual decision-making[18]. 
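Since the passage above refers to MELD cut-offs for patient selection, a minimal sketch of the classic (non-sodium) MELD formula may help. This is the widely used UNOS variant with its conventional clamping rules (lower bound of 1.0 for all inputs, creatinine capped at 4.0 mg/dL); it is not code from the study itself, and center-specific variants may differ.

```python
import math

def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    """Classic (non-sodium) MELD score, UNOS variant.

    Inputs below 1.0 are raised to 1.0 and creatinine is capped at
    4.0 mg/dL, following the usual clinical convention.
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(crea)
             + 6.43)
    return round(score)

# Illustrative values only
print(meld(bilirubin_mg_dl=2.5, inr=1.6, creatinine_mg_dl=1.1))
```

Because all three terms are logarithmic, the score is far more sensitive to changes near the lower end of each laboratory range than at the capped upper end.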
To better address the risk of ACLF in this challenging clinical situation the aim of this study was: (1) To determine whether ACLF occurs more often in patients with recurrent tense ascites treated with TIPS than in patients receiving conservative therapy; (2) to compare the outcome of ACLF associated with TIPS placement with the outcome of ACLF in patients receiving conservative therapy; and (3) to evaluate whether the risk of ACLF and death associated with TIPS placement increases disproportionately in patients with marginal liver function.
MATERIALS AND METHODS
Selection of patients: A database was constructed containing ICD and OPS codes as well as laboratory values of all inpatients of the Division of Gastroenterology of the Rostock University Medical Center. Patients who were treated for liver cirrhosis between 2007 and 2017 were identified based on their discharge diagnosis using ICD10 codes K70.3, K70.4, K71.7, K74.6 and K76.6 (2197 cases of 1404 patients). Patients who received TIPS were identified using OPS codes 8-839*. Only cases of patients receiving their first TIPS for recurrent tense ascites were selected; therefore, there was only one case per patient in the TIPS group. Cases of patients who had liver cirrhosis and tense ascites requiring paracentesis, but did not undergo TIPS placement, were selected for comparison (No TIPS group). If several cases were available for the same patient in the No TIPS group (e.g., because of multiple hospital admissions), the latest case was selected. TIPS indication, diagnosis of recurrent tense ascites, further diagnoses and clinical findings were obtained from ICD codes and from patient files. Laboratory values were obtained from the database. Cases with missing data on relevant clinical or laboratory findings were removed (43 cases). Cases with pre-existing renal insufficiency requiring dialysis (30 cases) or with malignant tumors (471 cases) were also excluded. Patient selection resulted in 398 patients in the No TIPS group and 214 patients in the TIPS group. After data collection was completed, all patient data were pseudonymized. Patient selection criteria and reasons for exclusion from data analysis are depicted in Figure 1. The study was approved by the local ethics committee of the Rostock University Medical Center (A2018-0127). Flow diagram showing the study population and reasons for exclusion from data analysis. HE: Hepatic encephalopathy; NA: Not available; TIPS: Transjugular intrahepatic portosystemic shunt.
The MELD score and the ACLF grade as defined by Moreau et al[16] at hospital admission, as well as the highest ACLF grade reached during the hospital stay, were determined for each patient. Furthermore, the in-hospital mortality of both groups was determined. Multivariate logistic regressions revealed that bilirubin, creatinine, INR, CRP, sodium, white blood cell count, albumin and age were predictive of survival, of group membership (TIPS vs No TIPS), or of both; these covariates were therefore chosen for the propensity score matching procedure. The matching (1:1 greedy matching, nearest neighbor, without replacement) resulted in a matched sample of 428 patients (214 in the No TIPS group and 214 in the TIPS group).
Statistical analysis
Statistical evaluation and matching were carried out using R (version 3.6.3[19]) and the R package MatchIt (version 4.1.0[20]). The distribution of most of the continuous data showed significant positive skew; therefore, non-parametric test methods were used. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square or Fisher's exact test. Data on an ordinal scale (ACLF grade, hepatic encephalopathy grade) were treated as continuous.
To account for the loss of statistical independence due to the matching procedure[21,22], comparisons between the matched groups were carried out using the Wilcoxon signed-rank test or the McNemar test. Additional multivariate logistic regressions were performed as a sensitivity analysis and to gain further insight into the effects of liver function, TIPS placement and their interaction on ACLF incidence and in-hospital mortality. The statistical methods of this study were reviewed by Henrik Rudolf from the Rostock University Medical Center, Institute for Biostatistics and Informatics in Medicine and Ageing Research.
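The matching itself was done with MatchIt in R. To make the "1:1 greedy matching, nearest neighbor, without replacement" step concrete, here is a minimal Python sketch that assumes propensity scores have already been estimated; the function name and the descending processing order are illustrative choices, not MatchIt's exact internals:

```python
def greedy_match(treated, controls):
    """1:1 greedy nearest-neighbor matching on the propensity score,
    without replacement.

    treated, controls: dicts mapping case id -> propensity score.
    Returns a list of (treated_id, control_id) pairs."""
    available = dict(controls)
    pairs = []
    # Process treated cases from highest to lowest propensity score
    # (a common greedy heuristic; order is an illustrative assumption).
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        # Nearest remaining control by absolute score distance.
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        pairs.append((t_id, c_id))
        del available[c_id]  # without replacement: each control used once
    return pairs
```

Because matching is without replacement, each control can serve as a partner for only one treated case, which is what makes the matched pairs statistically dependent and motivates the paired tests described above.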
CONCLUSION
The medical limits of TIPS placement for recurrent tense ascites should be evaluated in prospective studies that address the indications, the contraindications and the associated complex decision-making.
INTRODUCTION
Transjugular intrahepatic portosystemic shunt (TIPS) is an effective therapy for complications of portal hypertension, such as ascites or esophageal variceal bleeding. Although TIPS placement is effective against ascites, early studies showed no survival benefit after TIPS placement compared to repeated paracentesis and albumin substitution[1-3]. More recent studies have shown more promising results, such as a survival benefit[4-7], improved renal function[8,9] and better quality of life[10,11]. TIPS placement is therefore recommended as the treatment of choice[12,13].
Nevertheless, TIPS placement is an invasive procedure with considerable risks. In addition to hepatic encephalopathy and bleeding complications due to the placement procedure, sudden worsening of liver function is a serious complication. It has been observed after 5% to 10% of TIPS procedures and has a serious prognosis[14,15]. Such an acute deterioration of liver function accompanied by single- or multi-organ failure is a common complication of advanced liver cirrhosis. This clinical syndrome has been described as acute on chronic liver failure (ACLF)[16]. Due to the risk of liver failure, TIPS placement for ascites is often limited to patients with good liver function, and most randomized controlled trials have been conducted in patients with good liver function. It is still unclear how often ACLF occurs after TIPS placement and whether it is due to the TIPS procedure or rather to the severity of the underlying liver disease[17]. Recent recommendations argue against strict cut-off values for MELD, Child or other scoring systems. Instead, they recommend individual decision-making[18].

RESULTS
Patient characteristics and matching
Patient demographics and liver disease characteristics of the unmatched cohort are summarized in Table 1. Continuous values are given as median and range, categorical values as total number and percentage. Patients receiving TIPS had better liver function as assessed by MELD and Child scores, bilirubin, INR, albumin and severity of hepatic encephalopathy. In addition, CRP, platelets and leukocytes differed significantly. Creatinine did not differ significantly. After propensity score matching, all covariates were balanced in both groups (Table 2) and the variables used for matching no longer predicted group membership in the matched patients.
Baseline characteristics at hospital admission (all patients)
Continuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.
Baseline characteristics at hospital admission (matched groups)
Continuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Wilcoxon signed-rank test and categorical variables using the McNemar test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.
From 2007 to 2017, both covered and uncovered stents were used for TIPS at our institution.
Uncovered stents were placed in 42% and covered stents in 58% of cases. Stents were mostly dilated to 7-8 mm; smaller or larger diameters were rarely chosen (6 mm in 2 patients, 9 or 10 mm in 15 patients). No effect of stent type or stent diameter on any of our endpoints was found in either univariate or multivariate analyses (data not shown).

Incidence of ACLF and in-hospital mortality
Table 3 shows the incidence of ACLF as well as the in-hospital mortality of the matched patients. Patients receiving TIPS more often had ACLF of any grade (TIPS: 70/214 patients vs No TIPS: 57/214 patients) and achieved higher ACLF grades (P = 0.04). An increase in ACLF grade (compared to the ACLF grade at hospital admission) was more common in the TIPS group than in the No TIPS group (38/214 patients vs 23/214 patients). The hospital stay was longer in the TIPS group. The majority of patients in both groups had ACLF grade 1, which was due to renal failure. Organ systems affected in patients with ACLF > 1 were the brain (hepatic encephalopathy grade 3-4) and/or liver function based on bilirubin, in addition to renal failure. ACLF > 1 was mostly due to acute infections.
Changes of acute on chronic liver failure grade during hospital stay and in-hospital mortality (matched groups)
OR: Odds ratio; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.
There was no difference in terms of in-hospital mortality: in the TIPS group 11 of 214 patients died, in the No TIPS group 13 of 214 patients. Mortality increased with the ACLF grade in both groups. Multivariate logistic regressions performed as a sensitivity analysis confirmed that TIPS was a risk factor for ACLF but not for in-hospital mortality (Table 4). Mortality in any ACLF stratum except ACLF 2 was comparable in both groups. For patients with ACLF 2, we found a lower mortality in the TIPS group compared to the No TIPS group (OR 0.09, 95%CI 0.01-0.87). The mortality of TIPS patients whose ACLF increased by 2 or 3 grades after TIPS placement was high (4/10 died).
This also applies to the No TIPS group, with an even higher mortality (4/5 patients with an increase of 2 or 3 ACLF grades compared to the ACLF grade at hospital admission died).
Sensitivity analysis: Multivariate regressions (main effects only)
Dependent variables were in-hospital mortality (upper panel) and any increase in acute on chronic liver failure grade (lower panel). The full models (left side) included all parameters used for propensity score matching as covariates. After stepwise backward elimination by Akaike information criterion, a model (best model, right side) was selected for each dependent variable. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.
Most patients in both groups (No TIPS 89%, TIPS 82%) without ACLF at admission did not develop any ACLF during the hospital stay. Many patients who developed ACLF grade 2 or 3 already had ACLF at hospital admission (5/10 patients in the No TIPS group and 11/20 patients in the TIPS group). Three patients in the TIPS group developed ACLF in the period between hospital admission and TIPS placement, i.e. before TIPS was implanted. Many of the pre-TIPS ACLF episodes resolved after TIPS placement. When comparing the highest ACLF grade before TIPS to the ACLF grade at hospital discharge (assuming ACLF 3 for patients who died), 32 patients (15%) improved their ACLF grade after TIPS placement, while only 21 patients (10%) had a worse ACLF grade at discharge than at the time of TIPS placement.

Estimated in-hospital mortality and risk of ACLF
Using multivariate logistic regression models based on the MELD or Child scores at admission, the probabilities of in-hospital death and of an increase in ACLF grade were estimated for the TIPS and the No TIPS group (Figure 2). The likelihood of death increases with the severity of disease at admission, independent of whether it is assessed by the MELD or the Child score (Figure 2A and B). The regression curves for mortality are almost parallel, indicating that mortality depends only on liver function, not on TIPS placement or an interaction between TIPS placement and liver function. However, the regression curves for an increase in ACLF grade differ clearly between TIPS and No TIPS (Figure 2C and D). The probability of ACLF in the TIPS group is lower than in the No TIPS group at low to moderate MELD and Child levels, but higher at high MELD and Child scores. The intersection of the regression curves suggests an interaction between MELD/Child score and TIPS placement.
In fact, the multivariate logistic regression shows a statistically significant interaction term for Child-score and TIPS (P = 0.03; Table 5). In our model the TIPS group has a lower ACLF incidence at Child scores lower than 8 points and a higher ACLF incidence at 11 points and higher. Between 8 and 11 points the standard errors of both groups overlap, indicating that there is no relevant difference between both groups. The same effect can be observed when using the MELD score instead of the Child score. However, the interaction is weaker and not statistically significant (P = 0.19).\n\nEstimated in-hospital mortality and risk of acute on chronic liver failure depending on liver function. A and B: Estimated probability of dying in hospital depending on liver function at hospital admission; C and D: Estimated probability of acute on chronic liver failure (ACLF) occurring or existing ACLF worsening, depending on liver function at hospital admission. All probabilities were estimated using a multivariate logistic regression model based on the MELD and Child scores at hospital admission. TIPS: Transjugular intrahepatic portosystemic shunt.\nMultivariate logistic regressions with interaction terms\nFor models C and D death was treated as an increase in acute on chronic liver failure. Models A and B show an effect of only the MELD/Child scores on mortality. Transjugular intrahepatic portosystemic shunt (TIPS) and the interaction of TIPS and MELD/Child scores (MELD: TIPS, Child: TIPS) have no significant influence on mortality (A and B). In model D a significant interaction term Child:TIPS exists. In model C the interaction term MELD: TIPS is not significant, indicating a weaker interaction than in model D. 
ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nUsing multivariate logistic regression models based on the MELD or Child scores at admission, the probabilities of death in-hospital and of an increase in ACLF grade were estimated for the TIPS and the No TIPS group (Figure 2). The likelihood of death increases with the severity of the disease at admission; independent of whether this is assessed by MELD or by Child scores (Figure 2A and B). The regression curves for mortality are almost parallel, indicating that mortality depends only on liver function, but not on TIPS placement or an interaction between TIPS placement and the liver function. However, the regression curves for an increase in ACLF grade differ clearly between TIPS and No TIPS (Figure 2C and D). The probability of an ACLF in the TIPS group is lower than in the No TIPS group at low to moderate MELD-and Child-levels, but it is higher than in the No TIPS group at high MELD and Child scores. The intersection of the regression curves suggests an interaction between MELD/Child score and TIPS placement. In fact, the multivariate logistic regression shows a statistically significant interaction term for Child-score and TIPS (P = 0.03; Table 5). In our model the TIPS group has a lower ACLF incidence at Child scores lower than 8 points and a higher ACLF incidence at 11 points and higher. Between 8 and 11 points the standard errors of both groups overlap, indicating that there is no relevant difference between both groups. The same effect can be observed when using the MELD score instead of the Child score. However, the interaction is weaker and not statistically significant (P = 0.19).\n\nEstimated in-hospital mortality and risk of acute on chronic liver failure depending on liver function. 
A and B: Estimated probability of dying in hospital depending on liver function at hospital admission; C and D: Estimated probability of acute on chronic liver failure (ACLF) occurring or existing ACLF worsening, depending on liver function at hospital admission. All probabilities were estimated using a multivariate logistic regression model based on the MELD and Child scores at hospital admission. TIPS: Transjugular intrahepatic portosystemic shunt.\nMultivariate logistic regressions with interaction terms\nFor models C and D death was treated as an increase in acute on chronic liver failure. Models A and B show an effect of only the MELD/Child scores on mortality. Transjugular intrahepatic portosystemic shunt (TIPS) and the interaction of TIPS and MELD/Child scores (MELD: TIPS, Child: TIPS) have no significant influence on mortality (A and B). In model D a significant interaction term Child:TIPS exists. In model C the interaction term MELD: TIPS is not significant, indicating a weaker interaction than in model D. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.", "Patient demographics and liver disease characteristics of the unmatched cohort are summarized in Table 1. Continuous values are given as median and range, categorical values as total number and percentage. Patients receiving TIPS had better liver function as assessed by MELD and Child score, bilirubin, INR, albumin and severity of hepatic encephalopathy. In addition, CRP, platelets and leukocytes differed significantly. Creatinine did not differ significantly. After propensity score matching all covariates were balanced in both groups (Table 2) and all variables used for matching did no longer predict group membership in the matched patients.\nBaseline characteristics at hospital admission (all patients)\nContinuous variables are given as median and range, categorical variables as total number and percentage. 
Continuous variables were compared using the Mann-Whitney-U-test and categorical variables using the chi-square test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nBaseline characteristics at hospital admission (matched groups)\nContinuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using Wilcoxon signed-rank test and categorical variables using McNemar-Test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nFrom 2007 to 2017, both covered and uncovered stents were used for TIPS at our institution. Uncovered stents were placed in 42% and covered stents in 58% of cases. Stents were mostly dilated to 7-8 mm. Smaller or larger diameters were rarely chosen (6mm in 2 patients, 9 or 10 mm in 15 patients). No effect of stent type or stent diameter on any of our endpoints was found in either univariate or multivariate analyses (data not shown).", "Table 3 shows the incidence of ACLF as well as the in-hospital mortality of the matched patients. Patients receiving TIPS more often had ACLF of any grade (TIPS: 70/214 patients vs No TIPS 57/214 patients) and achieved higher ACLF grades (P = 0.04). An increase in ACLF grade (as compared to the ACLF grade at hospital admission) was more common in the TIPS group than in the No TIPS group (in 38/214 patients vs 23/214 patients). The hospital stay was longer in the TIPS group. The majority of patients in both groups had ACLF 1, which was due to renal failure. Organ systems affected in patients with ACLF > 1 were brain (hepatic encephalopathy grade 3-4) and/or liver function based on bilirubin in addition to renal failure. 
ACLF > 1 was mostly due to acute infections.\nChanges of acute on chronic liver failure grade during hospital stay and in-hospital mortality (matched groups)\nOR: Odds ratio; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nThere was no difference in terms of in-hospital mortality. In the TIPS group 11 of 214 patients died, in the No TIPS group 13 of 214 patients died. The mortality increased with the ACLF grade in both groups. Multivariate logistic regressions were performed as a sensitivity analysis and confirmed that TIPS was a risk factor for ACLF but not for in-hospital mortality (Table 4). Mortality in any ACLF stratum except ACLF 2 was comparable in both groups. For patients with ACLF 2, we found a lower mortality in the TIPS group compared to the No TIPS group (OR 0.09, 95%CI 0.01-0.87). The mortality of TIPS patients who increased in ACLF by 2 or 3 grades after TIPS placement was high (4/10 died). This also applies to the No TIPS group with an even higher mortality (4/5 patients with an increase of 2 or 3 ACLF grades compared to ACLF grade at hospital admission died).\nSensitivity analysis: Multivariate regressions (main effects only)\nDependent variables were in-hospital mortality (upper panel) and any increase in acute on chronic liver failure grade (lower panel). The full models (left side) included all parameters used for propensity score matching as covariates. After stepwise backward elimination by Akaike information criterion, a model (best model, right side) was selected for each dependent variable. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nMost patients in both groups (No TIPS 89%, TIPS 82%) without ACLF at admission did not develop any ACLF during hospital stay. Many patients who developed an ACLF grade 2 or 3 already had ACLF at hospital admission (5/10 patients in the No TIPS group and 11/20 patients in the TIPS group). 
Three patients in the TIPS group developed ACLF during the period between hospital admission and TIPS placement, i.e., before TIPS was implanted. Many of the pre-TIPS ACLFs resolved after TIPS placement. When comparing the highest ACLF grade before TIPS to the ACLF grade at hospital discharge (assuming ACLF 3 for patients who died), 32 patients (15%) improved their ACLF grade after TIPS placement, while only 21 patients (10%) had a worse ACLF grade at discharge than at the time of TIPS placement.", "Using multivariate logistic regression models based on the MELD or Child scores at admission, the probabilities of in-hospital death and of an increase in ACLF grade were estimated for the TIPS and the No TIPS group (Figure 2). The likelihood of death increases with the severity of the disease at admission, independent of whether this is assessed by MELD or by Child scores (Figure 2A and B). The regression curves for mortality are almost parallel, indicating that mortality depends only on liver function, but not on TIPS placement or an interaction between TIPS placement and liver function. However, the regression curves for an increase in ACLF grade differ clearly between TIPS and No TIPS (Figure 2C and D). The probability of an ACLF in the TIPS group is lower than in the No TIPS group at low to moderate MELD and Child levels, but it is higher than in the No TIPS group at high MELD and Child scores. The intersection of the regression curves suggests an interaction between MELD/Child score and TIPS placement. In fact, the multivariate logistic regression shows a statistically significant interaction term for Child score and TIPS (P = 0.03; Table 5). In our model, the TIPS group has a lower ACLF incidence at Child scores below 8 points and a higher ACLF incidence at 11 points and above. Between 8 and 11 points, the standard errors of both groups overlap, indicating that there is no relevant difference between the groups. 
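The crossing of the risk curves described above is exactly what a logistic model with a score × TIPS interaction term produces. A minimal numerical sketch with purely hypothetical coefficients (not the fitted values from Table 5), chosen only so that the two curves cross:

```python
import math

def aclf_prob(score, tips, b0=-6.0, b_score=0.4, b_tips=-3.8, b_int=0.4):
    """Hypothetical logistic model:
    P(ACLF increase) = sigmoid(b0 + b_score*score + b_tips*tips + b_int*score*tips).
    All coefficients are illustrative, not estimates from the study."""
    z = b0 + b_score * score + b_tips * tips + b_int * score * tips
    return 1.0 / (1.0 + math.exp(-z))

# With these coefficients the TIPS and No TIPS curves cross where the
# interaction cancels the main TIPS effect: score = -b_tips / b_int = 9.5.
for s in (7, 9.5, 12):
    print(s, round(aclf_prob(s, 0), 3), round(aclf_prob(s, 1), 3))
```

Below the crossing point the hypothetical TIPS curve lies under the No TIPS curve, above it the order reverses, mirroring the qualitative pattern in Figure 2C and D.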
The same effect can be observed when using the MELD score instead of the Child score. However, the interaction is weaker and not statistically significant (P = 0.19).\n\nEstimated in-hospital mortality and risk of acute on chronic liver failure depending on liver function. A and B: Estimated probability of dying in hospital depending on liver function at hospital admission; C and D: Estimated probability of acute on chronic liver failure (ACLF) occurring or existing ACLF worsening, depending on liver function at hospital admission. All probabilities were estimated using a multivariate logistic regression model based on the MELD and Child scores at hospital admission. TIPS: Transjugular intrahepatic portosystemic shunt.\nMultivariate logistic regressions with interaction terms\nFor models C and D, death was treated as an increase in acute on chronic liver failure. Models A and B show an effect of only the MELD/Child scores on mortality. Transjugular intrahepatic portosystemic shunt (TIPS) and the interaction of TIPS and MELD/Child scores (MELD:TIPS, Child:TIPS) have no significant influence on mortality (A and B). In model D, a significant interaction term Child:TIPS exists. In model C, the interaction term MELD:TIPS is not significant, indicating a weaker interaction than in model D. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.", "Most of the randomized controlled trials (RCTs) have been performed in patients with good liver function. This applies in particular to the RCTs that showed a survival benefit. In these studies, the mean MELD was 9.6[6] to 12.1[7]. Therefore, many patients with refractory ascites receive no TIPS due to impaired liver function. Others have considered MELD scores ≥ 18[13,23,24] to ≥ 24[25,26] and bilirubin levels ≥ 51.3 to ≥ 85.5 μmol/L[13,27] as contraindications for TIPS. 
Our TIPS patients had a comparatively poor liver function at hospital admission (MELD median 14, mean 15.2), allowing us to describe mortality and morbidity in this high-risk group.\nIn our cohort of patients with significantly impaired liver function, ACLF incidence and in-hospital mortality were within the range observed in other studies on ACLF[16,28,29]. The in-hospital mortality was neither positively nor negatively influenced by TIPS placement, despite the comparatively poor liver function of our patients. In the matched cohorts, ACLF occurred more frequently in the TIPS group than in conservatively treated patients. The results of the multivariate logistic regressions suggest that this effect depends on the extent of the pre-existing liver damage. In patients with good liver function (Child ≤ 8), an ACLF occurs less frequently in the TIPS group. However, at higher scores (Child ≥ 11), the probability of developing an ACLF is higher in the TIPS group than in the No TIPS group. This interaction blurs the effect of TIPS on ACLF incidence in univariate analyses.\nNot all ACLFs in the TIPS group can be attributed to TIPS. The majority of the ACLFs had already occurred before TIPS placement, and many patients already had at least ACLF grade 1 on hospital admission. Grade 1 ACLFs were almost exclusively due to renal failure. This was to be expected in patients with recurrent tense ascites. Patients whose ACLF increased by 2 or 3 grades during hospital stay had a particularly poor outcome in both groups. A serious deterioration of liver function after TIPS placement is often attributed to the procedure itself. In our patients, such events occurred in both groups when we considered the entire hospital stay (No TIPS group 5/214 patients, TIPS group 10/214 patients). Some of the ACLFs after TIPS placement are likely due to causes other than TIPS, such as bacterial infections or gastrointestinal bleeding. Such events precede most ACLFs and can occur with and without TIPS placement[29]. 
In line with that, TIPS was not a precipitant of ACLF in a recently published study on acute decompensation and ACLF[28]. Furthermore, the majority of pre-TIPS ACLFs resolved after TIPS placement, suggesting that TIPS is more capable of overcoming an ACLF than of causing it. We have studied patients with recurrent tense ascites. The most common cause of ACLF within this group was kidney failure. It is plausible that a TIPS can improve such an ACLF, e.g., since the dose of diuretics can be lowered or diuretics can be discontinued altogether.\nWe did not include an analysis of the effect of TIPS on ascites resolution, since it typically takes up to several months after TIPS placement for the underlying circulatory, renal and neurohumoral dysfunction to normalize[27]. Therefore, the effect of TIPS placement on ascites cannot be reliably assessed during hospital stay.\nWhen interpreting these results, the limitations of a retrospective analysis have to be considered. Since this is a retrospective study, many patients in the No TIPS group lack data on the further course after hospital discharge. For the selected endpoints (highest ACLF during inpatient stay, death during inpatient stay), complete data are available in both groups. Therefore, we had to limit the analysis to the inpatient stay. In this study, propensity score matching was used prior to comparing the TIPS and No TIPS groups. However, even with propensity score matching, a similar distribution of unknown confounders cannot be guaranteed. We only evaluated the short-term outcome during hospital stay. It is well known that the positive impact of a TIPS only takes effect after a few weeks to months[23,27]. In fact, some studies have observed an increased mortality after TIPS placement during the first few weeks[24,30]. Therefore, positive effects of TIPS on survival might be underestimated. On the other hand, our results were confirmed and extended by the multivariate logistic regressions (Table 5). 
The multivariate logistic regression also provided insight into the complex interactions between liver function and TIPS as seen in Figure 2.\nSome ACLFs were already present on admission, some occurred before TIPS, and some ACLFs improved after TIPS. The fact that some patients already had ACLF prior to TIPS complicates the interpretation of the relationship between TIPS and ACLF. As in all retrospective studies, conclusions about the causal relationship between ACLF and TIPS are impossible. Furthermore, we cannot analyze systematically why TIPS was chosen in some patients and not in others. We can only compare the clinical outcome of both groups after very careful propensity score matching.\nOur TIPS patients had a comparatively poor liver function, but a bilirubin of 85.5 μmol/L or a MELD of 24 points was rarely exceeded (approx. 8% and 6% of patients). In addition, in patients with very high MELD scores on hospital admission, TIPS placement was performed only after initial stabilization and after MELD had improved. Since the number of observations in our study is limited for this situation, a decision for TIPS placement should be made with caution in such patients. Nevertheless, as shown in Figure 2 and in accordance with other studies the mortality in the TIPS group is not higher than in the No TIPS group even at the highest MELD and Child scores[17,31-33].\nOur data show an increased risk of ACLF in the TIPS group in patients with severely impaired liver function (Child ≥ 11 points), but not in patients with good or moderately impaired liver function. These findings may explain why TIPS is often considered a risky intervention with potentially unfavorable outcomes in patients with high MELD or Child scores. Nevertheless, we did not find such a negative effect of TIPS placement on in-hospital mortality in patients with high to very high MELD and Child scores. 
We found that many ACLFs in the TIPS group occurred before TIPS placement and often resolved after TIPS placement. Unlike several previous RCTs, we did not find a positive effect of TIPS on mortality. Possible reasons are the comparatively short follow-up and the significantly worse liver function of our TIPS patients compared to the patients in the RCTs. In the presence of moderately to severely impaired liver function, recurrent tense ascites may be a dominant symptom. TIPS is the most effective therapy for recurrent tense ascites. Therefore, we conclude that TIPS is a viable option not only for patients with good liver function but also for patients with high Child scores, after carefully weighing the increased risk of ACLF against the expected benefits.", "TIPS placement for recurrent tense ascites is associated with an increased incidence of ACLF. This effect occurs only in patients with severely impaired liver function (Child score ≥ 11) and does not lead to a higher in-hospital mortality compared with conservative treatment." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Selection of patients", "Statistical analysis", "RESULTS", "Patient characteristics and matching", "Incidence of ACLF and in-hospital mortality", "Estimated in-hospital mortality and risk of ACLF", "DISCUSSION", "CONCLUSION" ]
[ "Transjugular intrahepatic portosystemic shunt (TIPS) is an effective therapy for complications of portal hypertension, such as ascites or esophageal variceal bleeding. Although TIPS placement is effective against ascites, early studies showed no survival benefit after TIPS placement compared to repeated paracentesis and albumin substitution[1-3]. More recent studies have shown more promising results, such as survival benefit[4-7], improved renal function[8,9] and better quality of life[10,11]. TIPS placement is therefore recommended as the treatment of choice[12,13].\nNevertheless, TIPS placement is an invasive procedure with considerable risks. In addition to hepatic encephalopathy and bleeding complications due to the placement procedure, sudden worsening of liver function is a serious complication. It has been observed after 5% to 10% of TIPS procedures and has a serious prognosis[14,15]. Such an acute deterioration of liver function accompanied by single- or multi-organ-failure is a common complication of advanced liver cirrhosis. This clinical syndrome has been described as acute on chronic liver failure (ACLF)[16]. Due to the risk of liver failure, TIPS placement for ascites is often limited to patients with good liver function and most randomized controlled trials have been conducted in patients with good liver function. It is still unclear how often ACLF occurs after TIPS placement and whether it is due to the TIPS procedure or rather to the severity of the underlying liver disease[17]. Recent recommendations argue against strict cut-off values for MELD, Child or other scoring systems. Instead, they recommend individual decision-making[18]. 
To better address the risk of ACLF in this challenging clinical situation, the aims of this study were: (1) To determine whether ACLF occurs more often in patients with recurrent tense ascites treated with TIPS than in patients receiving conservative therapy; (2) to compare the outcome of ACLF associated with TIPS placement with the outcome of ACLF in patients receiving conservative therapy; and (3) to evaluate whether the risk of ACLF and death associated with TIPS placement increases disproportionately in patients with marginal liver function.", " Selection of patients A database was constructed containing ICD and OPS codes as well as laboratory values of all inpatients of the Division of Gastroenterology of the Rostock University Medical Center. Patients who were treated for liver cirrhosis between 2007 and 2017 were identified based on their discharge diagnosis using ICD10 codes K70.3, K70.4, K71.7, K74.6 and K76.6 (2197 cases of 1404 patients). Patients who received TIPS were identified using OPS codes 8-839*. Only cases of patients receiving their first TIPS for recurrent tense ascites were selected; therefore, there was only one case per patient in the TIPS group. Cases of patients who had liver cirrhosis and tense ascites requiring paracentesis but did not undergo TIPS placement were selected for comparison (No TIPS group). If several cases were available for the same patient in the No TIPS group (e.g., because of multiple hospital admissions), the latest case was selected. TIPS indication, diagnosis of recurrent tense ascites, further diagnoses and clinical findings were obtained from ICD codes and from patient files. Laboratory values were obtained from the database. Cases with missing data on relevant clinical or laboratory findings were removed (43 cases). Cases with pre-existing renal insufficiency requiring dialysis (30 cases) or with malignant tumors (471 cases) were also excluded. 
Patient selection resulted in 398 patients in the No TIPS group and 214 patients in the TIPS group. After data collection was completed, all patient data were pseudonymized. Patient selection criteria and reasons for exclusion from data analysis are depicted in Figure 1. The study was approved by the local ethics committee of the Rostock University Medical Center (A2018-0127).\n\nFlow diagram showing the study population and reasons for exclusion from data analysis. HE: Hepatic encephalopathy; NA: Not available; TIPS: Transjugular intrahepatic portosystemic shunt.\nThe MELD-score and ACLF grade as defined by Moreau et al[16] at hospital admission and the highest ACLF grade achieved during hospital stay were determined for each patient. Furthermore, the in-hospital mortality of both groups was determined. Multivariate logistic regressions revealed that bilirubin, creatinine, INR, CRP, sodium, white blood cell count, albumin and age were predictive either for survival or for group membership in TIPS vs No TIPS group or for both. Therefore these covariates were chosen for the propensity score matching procedure. The matching (1:1 greedy matching, nearest neighbor, without replacement) resulted in a matched sample of 428 patients (214 patients in the No TIPS and 214 in the TIPS group).\n Statistical analysis Statistical evaluation and matching were carried out using R (R version 3.6.3[19] and the R Package MatchIt, Version 4.1.0[20]). The distribution of most of the continuous data had significant positive skew, therefore non-parametric test methods were used. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square or Fisher’s exact test. Data on an ordinal scale (ACLF, hepatic encephalopathy) were treated as continuous. To account for the loss of statistical independence due to the matching procedure[21,22], comparisons between the matched groups were carried out using the Wilcoxon signed rank test or McNemar test. Additional multivariate logistic regressions were performed as sensitivity analysis and for further insights into effects of liver function, TIPS placement and their interaction on ACLF incidence and in-hospital mortality. The statistical methods of this study were reviewed by Henrik Rudolf from Rostock University Medical Center, Institute for Biostatistics and Informatics in Medicine and Ageing Research.", "A database was constructed containing ICD and OPS codes as well as laboratory values of all inpatients of the Division of Gastroenterology of the Rostock University Medical Center. Patients who were treated for liver cirrhosis between 2007 and 2017 were identified based on their discharge diagnosis using ICD10 codes K70.3, K70.4, K71.7, K74.6 and K76.6 (2197 cases of 1404 patients). Patients who received TIPS were identified using OPS codes 8-839*. Only cases of patients receiving their first TIPS for recurrent tense ascites were selected. Therefore there was only one case per patient in the TIPS group. Cases of patients who had liver cirrhosis and tense ascites requiring paracentesis, but did not undergo TIPS placement were selected for comparison (No TIPS group). If several cases were available for the same patient in the No TIPS group (e.g., because of multiple hospital admissions), the latest case was selected. TIPS indication, diagnosis of recurrent tense ascites, further diagnoses and clinical findings were obtained from ICD codes and from patient files. Laboratory values were obtained from the database. Cases with missing data on relevant clinical or laboratory findings were removed (43 cases). Cases with pre-existing renal insufficiency requiring dialysis (30 cases) or with malignant tumors (471 cases) were also excluded. Patient selection resulted in 398 patients in the No TIPS group and 214 patients in the TIPS group. After data collection was completed, all patient data were pseudonymized. 
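The 1:1 greedy nearest-neighbor matching without replacement used in this selection pipeline can be sketched as follows. This is an illustration operating on precomputed propensity scores, not the MatchIt implementation the study actually used:

```python
def greedy_match(treated, controls):
    """1:1 greedy nearest-neighbor matching without replacement.
    `treated` and `controls` are lists of propensity scores (floats),
    assumed already estimated (e.g. by logistic regression).
    Returns a list of (treated_index, control_index) pairs."""
    available = dict(enumerate(controls))  # control index -> score
    pairs = []
    for ti, score in enumerate(treated):
        if not available:
            break  # no controls left to match
        # pick the closest remaining control, then remove it from the pool
        ci = min(available, key=lambda c: abs(available[c] - score))
        pairs.append((ti, ci))
        del available[ci]
    return pairs
```

Because each control is consumed once, the result depends on the order in which treated subjects are processed; real implementations such as MatchIt offer ordering options and calipers on top of this basic scheme.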
Patient selection criteria and reasons for exclusion from data analysis are depicted in Figure 1. The study was approved by the local ethics committee of the Rostock University Medical Center (A2018-0127).\n\nFlow diagram showing the study population and reasons for exclusion from data analysis. HE: Hepatic encephalopathy; NA: Not available; TIPS: Transjugular intrahepatic portosystemic shunt.\nThe MELD-score and ACLF grade as defined by Moreau et al[16] at hospital admission and the highest ACLF grade achieved during hospital stay were determined for each patient. Furthermore, the in-hospital mortality of both groups was determined. Multivariate logistic regressions revealed that bilirubin, creatinine, INR, CRP, sodium, white blood cell count, albumin and age were predictive either for survival or for group membership in TIPS vs No TIPS group or for both. Therefore these covariates were chosen for the propensity score matching procedure. The matching (1:1 greedy matching, nearest neighbor, without replacement) resulted in a matched sample of 428 patients (214 patients in the No TIPS and 214 in the TIPS group).", "Statistical evaluation and matching were carried out using R (R version 3.6.3[19] and the R Package MatchIt, Version 4.1.0[20]). The distribution of most of the continuous data had significant positive skew, therefore non-parametric test methods were used. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square or Fisher’s exact test. Data on an ordinal scale (ACLF, hepatic encephalopathy) were treated as continuous. To account for the loss of statistical independence due to the matching procedure[21,22], comparisons between the matched groups were carried out using the Wilcoxon signed rank test or McNemar test. 
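For paired binary outcomes, the McNemar statistic depends only on the discordant pair counts. A minimal sketch (uncorrected, 1 degree of freedom, using the identity that for 1 df the chi-square tail probability equals erfc(sqrt(x/2)); the counts in the usage note are hypothetical):

```python
import math

def mcnemar(b, c):
    """McNemar test for paired binary outcomes. b and c are the two
    discordant-pair counts (outcome present in only one member of a pair).
    Returns the uncorrected chi-square statistic (1 df) and its p-value."""
    stat = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2))  # P(chi2_1 > stat)
    return stat, p
```

For example, hypothetical discordant counts b = 30 and c = 15 give a statistic of 5.0 with p ≈ 0.025.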
Additional multivariate logistic regressions were performed as a sensitivity analysis and for further insights into the effects of liver function, TIPS placement and their interaction on ACLF incidence and in-hospital mortality. The statistical methods of this study were reviewed by Henrik Rudolf from Rostock University Medical Center, Institute for Biostatistics and Informatics in Medicine and Ageing Research.", " Patient characteristics and matching Patient demographics and liver disease characteristics of the unmatched cohort are summarized in Table 1. Continuous values are given as median and range, categorical values as total number and percentage. Patients receiving TIPS had better liver function as assessed by MELD and Child score, bilirubin, INR, albumin and severity of hepatic encephalopathy. In addition, CRP, platelets and leukocytes differed significantly. Creatinine did not differ significantly. After propensity score matching, all covariates were balanced in both groups (Table 2) and the variables used for matching no longer predicted group membership in the matched patients.\nBaseline characteristics at hospital admission (all patients)\nContinuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nBaseline characteristics at hospital admission (matched groups)\nContinuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Wilcoxon signed-rank test and categorical variables using the McNemar test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nFrom 2007 to 2017, both covered and uncovered stents were used for TIPS at our institution. 
Uncovered stents were placed in 42% and covered stents in 58% of cases. Stents were mostly dilated to 7-8 mm. Smaller or larger diameters were rarely chosen (6 mm in 2 patients, 9 or 10 mm in 15 patients). No effect of stent type or stent diameter on any of our endpoints was found in either univariate or multivariate analyses (data not shown).\n Incidence of ACLF and in-hospital mortality Table 3 shows the incidence of ACLF as well as the in-hospital mortality of the matched patients. Patients receiving TIPS more often had ACLF of any grade (TIPS: 70/214 patients vs No TIPS 57/214 patients) and achieved higher ACLF grades (P = 0.04). An increase in ACLF grade (as compared to the ACLF grade at hospital admission) was more common in the TIPS group than in the No TIPS group (38/214 patients vs 23/214 patients). The hospital stay was longer in the TIPS group. The majority of patients in both groups had ACLF 1, which was due to renal failure. Organ systems affected in patients with ACLF > 1 were the brain (hepatic encephalopathy grade 3-4) and/or liver function based on bilirubin, in addition to renal failure. ACLF > 1 was mostly due to acute infections.\nChanges of acute on chronic liver failure grade during hospital stay and in-hospital mortality (matched groups)\nOR: Odds ratio; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nThere was no difference in terms of in-hospital mortality. In the TIPS group, 11 of 214 patients died; in the No TIPS group, 13 of 214 patients died. The mortality increased with the ACLF grade in both groups. Multivariate logistic regressions were performed as a sensitivity analysis and confirmed that TIPS was a risk factor for ACLF but not for in-hospital mortality (Table 4). Mortality in any ACLF stratum except ACLF 2 was comparable in both groups. For patients with ACLF 2, we found a lower mortality in the TIPS group compared to the No TIPS group (OR 0.09, 95%CI 0.01-0.87). The mortality of TIPS patients whose ACLF increased by 2 or 3 grades after TIPS placement was high (4/10 died). 
A similar pattern was seen in the No TIPS group, with an even higher mortality: 4/5 patients whose ACLF increased by 2 or 3 grades compared to the grade at hospital admission died.\nSensitivity analysis: Multivariate regressions (main effects only)\nDependent variables were in-hospital mortality (upper panel) and any increase in acute on chronic liver failure grade (lower panel). The full models (left side) included all parameters used for propensity score matching as covariates. After stepwise backward elimination by Akaike information criterion, a model (best model, right side) was selected for each dependent variable. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nMost patients in both groups without ACLF at admission (No TIPS 89%, TIPS 82%) did not develop any ACLF during the hospital stay. Many patients who developed ACLF grade 2 or 3 already had ACLF at hospital admission (5/10 patients in the No TIPS group and 11/20 patients in the TIPS group). Three patients in the TIPS group developed ACLF between hospital admission and TIPS placement, i.e. before TIPS was implanted. Many of these pre-TIPS ACLFs resolved after TIPS placement. When comparing the highest ACLF grade before TIPS to the ACLF grade at hospital discharge (assuming ACLF 3 for patients who died), 32 patients (15%) improved their ACLF grade after TIPS placement while only 21 patients (10%) had a worse ACLF grade at discharge than at the time of TIPS placement.\nEstimated in-hospital mortality and risk of ACLF\nUsing multivariate logistic regression models based on the MELD or Child scores at admission, the probabilities of in-hospital death and of an increase in ACLF grade were estimated for the TIPS and the No TIPS group (Figure 2). The likelihood of death increases with the severity of disease at admission, regardless of whether it is assessed by the MELD or the Child score (Figure 2A and B). The regression curves for mortality are almost parallel, indicating that mortality depends only on liver function and not on TIPS placement or an interaction between TIPS placement and liver function. However, the regression curves for an increase in ACLF grade differ clearly between TIPS and No TIPS (Figure 2C and D). The probability of ACLF in the TIPS group is lower than in the No TIPS group at low to moderate MELD and Child scores, but higher at high MELD and Child scores. The intersection of the regression curves suggests an interaction between MELD/Child score and TIPS placement.
In fact, the multivariate logistic regression shows a statistically significant interaction term for Child score and TIPS (P = 0.03; Table 5). In our model the TIPS group has a lower ACLF incidence at Child scores below 8 points and a higher ACLF incidence at 11 points and above. Between 8 and 11 points the standard errors of both groups overlap, indicating no relevant difference between the groups. The same effect can be observed when using the MELD score instead of the Child score; however, the interaction is weaker and not statistically significant (P = 0.19).\n\nEstimated in-hospital mortality and risk of acute on chronic liver failure depending on liver function. A and B: Estimated probability of dying in hospital depending on liver function at hospital admission; C and D: Estimated probability of acute on chronic liver failure (ACLF) occurring or existing ACLF worsening, depending on liver function at hospital admission. All probabilities were estimated using a multivariate logistic regression model based on the MELD and Child scores at hospital admission. TIPS: Transjugular intrahepatic portosystemic shunt.\nMultivariate logistic regressions with interaction terms\nFor models C and D, death was treated as an increase in acute on chronic liver failure. Models A and B show an effect of only the MELD/Child scores on mortality. Transjugular intrahepatic portosystemic shunt (TIPS) and the interaction of TIPS and MELD/Child scores (MELD:TIPS, Child:TIPS) have no significant influence on mortality (A and B). In model D a significant interaction term Child:TIPS exists. In model C the interaction term MELD:TIPS is not significant, indicating a weaker interaction than in model D.
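The crossing curves in Figure 2C and D are exactly what a logistic model with an interaction term produces: the TIPS effect on the log-odds, b_tips + b_inter × Child, changes sign as the Child score grows. A minimal sketch with hypothetical coefficients (not the fitted values from Table 5):

```python
import math

def p_aclf(child, tips, b0=-6.0, b_child=0.55, b_tips=-2.7, b_inter=0.30):
    """Logistic model with a Child:TIPS interaction (hypothetical coefficients).

    logit(p) = b0 + b_child*child + b_tips*tips + b_inter*child*tips
    The TIPS log-odds effect is b_tips + b_inter*child, so with these
    illustrative values it flips sign at child = -b_tips/b_inter = 9,
    i.e. between the Child 8 and Child 11 thresholds discussed above.
    """
    logit = b0 + b_child * child + b_tips * tips + b_inter * child * tips
    return 1.0 / (1.0 + math.exp(-logit))

for child in (6, 9, 12):
    print(child, round(p_aclf(child, 0), 3), round(p_aclf(child, 1), 3))
```

Below the crossover the TIPS curve lies under the No TIPS curve; above it, the order reverses, mirroring the pattern in Figure 2C and D.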
ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.", "Patient demographics and liver disease characteristics of the unmatched cohort are summarized in Table 1. Continuous values are given as median and range, categorical values as total number and percentage. Patients receiving TIPS had better liver function as assessed by MELD and Child score, bilirubin, INR, albumin and severity of hepatic encephalopathy. In addition, CRP, platelets and leukocytes differed significantly. Creatinine did not differ significantly. After propensity score matching all covariates were balanced in both groups (Table 2) and all variables used for matching no longer predicted group membership in the matched patients.\nBaseline characteristics at hospital admission (all patients)\nContinuous variables are given as median and range, categorical variables as total number and percentage.
Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nBaseline characteristics at hospital admission (matched groups)\nContinuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Wilcoxon signed-rank test and categorical variables using the McNemar test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.\nFrom 2007 to 2017, both covered and uncovered stents were used for TIPS at our institution. Uncovered stents were placed in 42% and covered stents in 58% of cases. Stents were mostly dilated to 7-8 mm. Smaller or larger diameters were rarely chosen (6 mm in 2 patients, 9 or 10 mm in 15 patients). No effect of stent type or stent diameter on any of our endpoints was found in either univariate or multivariate analyses (data not shown).", "Most of the randomized controlled trials (RCTs) have been performed in patients with good liver function. This applies in particular to the RCTs that showed a survival benefit; in these studies the mean MELD was 9.6[6] to 12.1[7]. Therefore many patients with refractory ascites receive no TIPS due to impaired liver function. Others have considered MELD scores ≥ 18[13,23,24] to ≥ 24[25,26] and bilirubin levels ≥ 51.3 to ≥ 85.5 μmol/L[13,27] as contraindications for TIPS.
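For reference, the MELD cutoffs discussed here come from the standard (sodium-free) MELD formula. The sketch below is a generic implementation of that public formula, not code from the study, and assumes laboratory values in mg/dL with the usual UNOS bounds:

```python
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Classic MELD score (UNOS variant, without the sodium term).

    Each input is floored at 1.0; creatinine is capped at 4.0 mg/dL
    (and set to 4.0 if the patient is on dialysis).
    """
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(crea) + 6.43)
    return round(score)

# e.g., bilirubin 2.0 mg/dL, INR 1.5, creatinine 1.2 mg/dL
print(meld(2.0, 1.5, 1.2))
```

Note that the study reports bilirubin in μmol/L (85.5 μmol/L ≈ 5 mg/dL), so a unit conversion would be needed before applying this formula to those thresholds.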
Our TIPS patients had comparatively poor liver function at hospital admission (MELD median 14, mean 15.2), allowing us to describe mortality and morbidity in this high-risk group.\nIn our cohort of patients with significantly impaired liver function, ACLF incidence and in-hospital mortality were within the range observed in other studies on ACLF[16,28,29]. In-hospital mortality was neither positively nor negatively influenced by TIPS placement despite the comparatively poor liver function of our patients. In the matched cohorts ACLF occurred more frequently in the TIPS group than in conservatively treated patients. The results of the multivariate logistic regressions suggest that this effect depends on the extent of the pre-existing liver damage. In patients with good liver function (Child ≤ 8) an ACLF occurs less frequently in the TIPS group. However, at higher scores (Child ≥ 11), the probability of developing an ACLF is higher in the TIPS group than in the No TIPS group. This interaction blurs the effect of TIPS on ACLF incidence in univariate analyses.\nNot all ACLFs in the TIPS group can be attributed to TIPS. The majority of the ACLFs had already occurred before TIPS placement, and many patients already had at least ACLF grade 1 on hospital admission. ACLF grade 1 was almost exclusively due to renal failure, as is to be expected in patients with recurrent tense ascites. Patients whose ACLF increased by 2 or 3 grades during the hospital stay had a particularly poor outcome in both groups. A serious deterioration of liver function after TIPS placement is often attributed to the procedure. In our patients such events occurred in both groups when the entire hospital stay was considered (No TIPS group 5/214 patients, TIPS group 10/214 patients). Some of the ACLFs after TIPS placement are likely due to causes other than TIPS, such as bacterial infections or gastrointestinal bleeding. Such events precede most ACLFs and can occur with and without TIPS placement[29].
In line with that, TIPS was not a precipitant of ACLF in a recently published study on acute decompensation and ACLF[28]. Furthermore, the majority of pre-TIPS ACLFs resolved after TIPS placement, suggesting that TIPS is more capable of resolving an ACLF than of causing it. We studied patients with recurrent tense ascites, in whom the most common cause of ACLF was kidney failure. It is plausible that a TIPS can improve such an ACLF, e.g., because the dose of diuretics can be lowered or diuretics can be discontinued altogether.\nWe did not include an analysis of the effect of TIPS on ascites resolution, since it typically takes up to several months after TIPS placement for the underlying circulatory, renal and neurohumoral dysfunction to normalize[27]. Therefore, the effect of TIPS placement on ascites cannot be reliably assessed during the hospital stay.\nWhen interpreting these results, the limitations of a retrospective analysis have to be considered. Since this is a retrospective study, many patients in the No TIPS group lack data on the further course after hospital discharge. For the selected endpoints (highest ACLF during the inpatient stay, death during the inpatient stay), complete data are available in both groups; we therefore had to limit the analysis to the inpatient stay. In this study propensity score matching was used prior to comparing the TIPS and No TIPS groups. However, even with propensity score matching, a similar distribution of unknown confounders cannot be guaranteed. We only evaluated the short-term outcome during the hospital stay. It is well known that the positive impact of a TIPS only takes effect after a few weeks to months[23,27]. In fact, some studies have observed an increased mortality during the first few weeks after TIPS placement[24,30]. Therefore, positive effects of TIPS on survival might be underestimated. On the other hand, our results were confirmed and extended by the multivariate logistic regressions (Table 5).
The multivariate logistic regression also provided insight into the complex interactions between liver function and TIPS, as seen in Figure 2.\nSome ACLFs were already present on admission, some occurred before TIPS, and some ACLFs improved after TIPS. The fact that some patients already had ACLF prior to TIPS complicates the interpretation of the relationship between TIPS and ACLF. As in all retrospective studies, conclusions about the causal relationship between ACLF and TIPS are impossible. Furthermore, we cannot systematically analyze why TIPS was chosen in some patients and not in others; we can only compare the clinical outcome of both groups after very careful propensity score matching.\nOur TIPS patients had comparatively poor liver function, but a bilirubin of 85.5 μmol/L or a MELD of 24 points was rarely exceeded (approx. 8% and 6% of patients, respectively). In addition, in patients with very high MELD scores on hospital admission, TIPS placement was performed only after initial stabilization and after the MELD score had improved. Since the number of observations in our study is limited for this situation, a decision for TIPS placement should be made with caution in such patients. Nevertheless, as shown in Figure 2 and in accordance with other studies, mortality in the TIPS group was not higher than in the No TIPS group even at the highest MELD and Child scores[17,31-33].\nOur data show an increased risk of ACLF in the TIPS group in patients with severely impaired liver function (Child ≥ 11 points), but not in patients with good or moderately impaired liver function. These findings may explain why TIPS is often considered a risky intervention with potentially unfavorable outcomes in patients with high MELD or Child scores. Nevertheless, we did not find such a negative effect of TIPS placement on in-hospital mortality in patients with high to very high MELD and Child scores.
We found that many ACLFs in the TIPS group occurred before TIPS placement and often resolved after TIPS placement. Unlike several previous RCTs, we did not find a positive effect of TIPS on mortality. Possible reasons are the comparatively short follow-up and the significantly worse liver function of our TIPS patients compared to the patients in the RCTs. In the presence of moderately to severely impaired liver function, recurrent tense ascites may be a dominant symptom, and TIPS is the most effective therapy for recurrent tense ascites. Therefore, we conclude that TIPS is a viable option not only for patients with good liver function but also for patients with high Child scores, after carefully weighing the increased risk of ACLF against the expected benefits.", "TIPS placement for recurrent tense ascites is associated with an increased incidence of ACLF. This effect occurs only in patients with severely impaired liver function (Child score ≥ 11) and does not lead to a higher in-hospital mortality compared with conservative treatment." ]
[ null, "methods", null, null, null, null, null, null, null, null ]
[ "Liver cirrhosis", "Ascites", "Transjugular intrahepatic portosystemic shunt", "Acute on chronic liver failure", "Mortality", "Propensity score" ]
INTRODUCTION: Transjugular intrahepatic portosystemic shunt (TIPS) is an effective therapy for complications of portal hypertension, such as ascites or esophageal variceal bleeding. Although TIPS placement is effective against ascites, early studies showed no survival benefit after TIPS placement compared to repeated paracentesis and albumin substitution[1-3]. More recent studies have shown more promising results, such as survival benefit[4-7], improved renal function[8,9] and better quality of life[10,11]. TIPS placement is therefore recommended as the treatment of choice[12,13]. Nevertheless, TIPS placement is an invasive procedure with considerable risks. In addition to hepatic encephalopathy and bleeding complications due to the placement procedure, sudden worsening of liver function is a serious complication. It has been observed after 5% to 10% of TIPS procedures and has a serious prognosis[14,15]. Such an acute deterioration of liver function accompanied by single- or multi-organ-failure is a common complication of advanced liver cirrhosis. This clinical syndrome has been described as acute on chronic liver failure (ACLF)[16]. Due to the risk of liver failure, TIPS placement for ascites is often limited to patients with good liver function and most randomized controlled trials have been conducted in patients with good liver function. It is still unclear how often ACLF occurs after TIPS placement and whether it is due to the TIPS procedure or rather to the severity of the underlying liver disease[17]. Recent recommendations argue against strict cut-off values for MELD, Child or other scoring systems. Instead, they recommend individual decision-making[18]. 
To better address the risk of ACLF in this challenging clinical situation, the aim of this study was: (1) To determine whether ACLF occurs more often in patients with recurrent tense ascites treated with TIPS than in patients receiving conservative therapy; (2) to compare the outcome of ACLF associated with TIPS placement with the outcome of ACLF in patients receiving conservative therapy; and (3) to evaluate whether the risk of ACLF and death associated with TIPS placement increases disproportionately in patients with marginal liver function. MATERIALS AND METHODS: Selection of patients: A database was constructed containing ICD and OPS codes as well as laboratory values of all inpatients of the Division of Gastroenterology of the Rostock University Medical Center. Patients who were treated for liver cirrhosis between 2007 and 2017 were identified based on their discharge diagnosis using ICD10 codes K70.3, K70.4, K71.7, K74.6 and K76.6 (2197 cases of 1404 patients). Patients who received TIPS were identified using OPS codes 8-839*. Only cases of patients receiving their first TIPS for recurrent tense ascites were selected; therefore there was only one case per patient in the TIPS group. Cases of patients who had liver cirrhosis and tense ascites requiring paracentesis, but did not undergo TIPS placement, were selected for comparison (No TIPS group). If several cases were available for the same patient in the No TIPS group (e.g., because of multiple hospital admissions), the latest case was selected. TIPS indication, diagnosis of recurrent tense ascites, further diagnoses and clinical findings were obtained from ICD codes and from patient files. Laboratory values were obtained from the database. Cases with missing data on relevant clinical or laboratory findings were removed (43 cases). Cases with pre-existing renal insufficiency requiring dialysis (30 cases) or with malignant tumors (471 cases) were also excluded.
Patient selection resulted in 398 patients in the No TIPS group and 214 patients in the TIPS group. After data collection was completed, all patient data were pseudonymized. Patient selection criteria and reasons for exclusion from data analysis are depicted in Figure 1. The study was approved by the local ethics committee of the Rostock University Medical Center (A2018-0127). Flow diagram showing the study population and reasons for exclusion from data analysis. HE: Hepatic encephalopathy; NA: Not available; TIPS: Transjugular intrahepatic portosystemic shunt. The MELD score and ACLF grade as defined by Moreau et al[16] at hospital admission and the highest ACLF grade achieved during hospital stay were determined for each patient. Furthermore, the in-hospital mortality of both groups was determined. Multivariate logistic regressions revealed that bilirubin, creatinine, INR, CRP, sodium, white blood cell count, albumin and age were predictive either for survival or for group membership in the TIPS vs No TIPS group, or for both. Therefore these covariates were chosen for the propensity score matching procedure. The matching (1:1 greedy matching, nearest neighbor, without replacement) resulted in a matched sample of 428 patients (214 patients in the No TIPS and 214 in the TIPS group). Statistical analysis: Statistical evaluation and matching were carried out using R (version 3.6.3[19]) and the R package MatchIt (version 4.1.0[20]). The distribution of most of the continuous data had significant positive skew, therefore non-parametric test methods were used. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square or Fisher’s exact test. Data on an ordinal scale (ACLF, hepatic encephalopathy) were treated as continuous. To account for the loss of statistical independence due to the matching procedure[21,22], comparisons between the matched groups were carried out using the Wilcoxon signed rank test or McNemar test. Additional multivariate logistic regressions were performed as a sensitivity analysis and for further insights into the effects of liver function, TIPS placement and their interaction on ACLF incidence and in-hospital mortality. The statistical methods of this study were reviewed by Henrik Rudolf from Rostock University Medical Center, Institute for Biostatistics and Informatics in Medicine and Ageing Research.
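The 1:1 greedy nearest-neighbor matching without replacement described here (performed in the study with R's MatchIt) can be sketched in a few lines. This is a generic illustration with made-up propensity scores, not the study's implementation:

```python
def greedy_match(treated, controls):
    """1:1 greedy nearest-neighbor matching on the propensity score,
    without replacement: each treated unit takes the closest remaining
    control, so every control is used at most once."""
    pool = dict(controls)  # id -> propensity score, still available
    pairs = {}
    for t_id, t_ps in treated.items():
        if not pool:
            break  # no controls left to match
        c_id = min(pool, key=lambda c: abs(pool[c] - t_ps))
        pairs[t_id] = c_id
        del pool[c_id]  # without replacement
    return pairs

# Made-up propensity scores for illustration:
treated = {"T1": 0.80, "T2": 0.30}
controls = {"C1": 0.78, "C2": 0.35, "C3": 0.10}
print(greedy_match(treated, controls))
```

Because matching is greedy, the result depends on the order in which treated units are processed; MatchIt offers ordering options to control this.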
Selection of patients: A database was constructed containing ICD and OPS codes as well as laboratory values of all inpatients of the Division of Gastroenterology of the Rostock University Medical Center. Patients who were treated for liver cirrhosis between 2007 and 2017 were identified based on their discharge diagnosis using ICD10 codes K70.3, K70.4, K71.7, K74.6 and K76.6 (2197 cases of 1404 patients). Patients who received TIPS were identified using OPS codes 8-839*. Only cases of patients receiving their first TIPS for recurrent tense ascites were selected; therefore, there was only one case per patient in the TIPS group. Cases of patients who had liver cirrhosis and tense ascites requiring paracentesis but did not undergo TIPS placement were selected for comparison (No TIPS group). If several cases were available for the same patient in the No TIPS group (e.g., because of multiple hospital admissions), the latest case was selected. TIPS indication, diagnosis of recurrent tense ascites, further diagnoses and clinical findings were obtained from ICD codes and from patient files. Laboratory values were obtained from the database. Cases with missing data on relevant clinical or laboratory findings were removed (43 cases). Cases with pre-existing renal insufficiency requiring dialysis (30 cases) or with malignant tumors (471 cases) were also excluded. Patient selection resulted in 398 patients in the No TIPS group and 214 patients in the TIPS group. After data collection was completed, all patient data were pseudonymized.
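For orientation, the MELD score used below to grade liver function can be computed with the classic (pre-2016) UNOS formula. The sketch below is purely illustrative and is not taken from this study's analysis code; note that laboratory inputs are in mg/dL, so bilirubin reported in μmol/L (as in this study) must first be divided by 17.1.

```python
import math

def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    """Classic (pre-2016) MELD score. Inputs in mg/dL; values below 1.0
    are floored at 1.0 and creatinine is capped at 4.0 mg/dL.
    Bilirubin given in umol/L must be divided by 17.1 first."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(crea) + 6.43)
    return round(score)

# Illustrative values, not patients from this study:
print(meld(2.0, 1.5, 1.2))
```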
Patient selection criteria and reasons for exclusion from data analysis are depicted in Figure 1. The study was approved by the local ethics committee of the Rostock University Medical Center (A2018-0127). Flow diagram showing the study population and reasons for exclusion from data analysis. HE: Hepatic encephalopathy; NA: Not available; TIPS: Transjugular intrahepatic portosystemic shunt. The MELD score and ACLF grade as defined by Moreau et al[16] at hospital admission and the highest ACLF grade reached during the hospital stay were determined for each patient. Furthermore, the in-hospital mortality of both groups was determined. Multivariate logistic regressions revealed that bilirubin, creatinine, INR, CRP, sodium, white blood cell count, albumin and age were predictive of survival, of group membership (TIPS vs No TIPS), or of both. Therefore, these covariates were chosen for the propensity score matching procedure. The matching (1:1 greedy matching, nearest neighbor, without replacement) resulted in a matched sample of 428 patients (214 patients in the No TIPS and 214 in the TIPS group). Statistical analysis: Statistical evaluation and matching were carried out using R (R version 3.6.3[19]) and the R package MatchIt (version 4.1.0[20]). The distribution of most of the continuous data had a significant positive skew; therefore, non-parametric test methods were used. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square or Fisher's exact test. Data on an ordinal scale (ACLF, hepatic encephalopathy) were treated as continuous. To account for the loss of statistical independence due to the matching procedure[21,22], comparisons between the matched groups were carried out using the Wilcoxon signed-rank test or the McNemar test.
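The matching and paired-testing pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data (numpy/scipy), not the study's actual R/MatchIt code: propensity scores come from a one-covariate logistic model fitted by Newton's method, matching is greedy 1:1 nearest neighbor without replacement, and a matched covariate is then compared with the Wilcoxon signed-rank test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic cohort (hypothetical numbers, NOT the study data):
# 400 "No TIPS" and 200 "TIPS" patients; TIPS patients tend to have
# lower bilirubin, mimicking the selection bias described above.
bili_no = rng.normal(60.0, 20.0, 400)
bili_tips = rng.normal(45.0, 20.0, 200)
x = np.concatenate([bili_no, bili_tips])
treated = np.concatenate([np.zeros(400, bool), np.ones(200, bool)])
y = treated.astype(float)

# 1. Propensity score: P(TIPS | covariate) from a logistic model
#    fitted by Newton's method (IRLS).
X = np.column_stack([np.ones_like(x), (x - x.mean()) / x.std()])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
ps = 1.0 / (1.0 + np.exp(-X @ beta))

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score,
#    without replacement.
controls = list(np.flatnonzero(~treated))
pairs = []
for t in np.flatnonzero(treated):
    j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
    controls.remove(j)
    pairs.append((t, j))
matched_t = np.array([t for t, _ in pairs])
matched_c = np.array([c for _, c in pairs])

# 3. Compare the covariate across the matched groups with the Wilcoxon
#    signed-rank test, which respects the pairing induced by matching.
stat, pval = stats.wilcoxon(x[matched_t], x[matched_c])
print(f"matched pairs: {len(pairs)}, paired test P = {pval:.3f}")
```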
Additional multivariate logistic regressions were performed as a sensitivity analysis and to gain further insight into the effects of liver function, TIPS placement and their interaction on ACLF incidence and in-hospital mortality. The statistical methods of this study were reviewed by Henrik Rudolf from Rostock University Medical Center, Institute for Biostatistics and Informatics in Medicine and Ageing Research. RESULTS:
Patient characteristics and matching: Patient demographics and liver disease characteristics of the unmatched cohort are summarized in Table 1. Continuous values are given as median and range, categorical values as total number and percentage. Patients receiving TIPS had better liver function as assessed by MELD and Child score, bilirubin, INR, albumin and severity of hepatic encephalopathy. In addition, CRP, platelets and leukocytes differed significantly. Creatinine did not differ significantly. After propensity score matching, all covariates were balanced in both groups (Table 2) and the variables used for matching no longer predicted group membership in the matched patients. Baseline characteristics at hospital admission (all patients) Continuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Mann-Whitney U test and categorical variables using the chi-square test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt.
Baseline characteristics at hospital admission (matched groups) Continuous variables are given as median and range, categorical variables as total number and percentage. Continuous variables were compared using the Wilcoxon signed-rank test and categorical variables using the McNemar test. NA: Not available; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt. From 2007 to 2017, both covered and uncovered stents were used for TIPS at our institution. Uncovered stents were placed in 42% and covered stents in 58% of cases. Stents were mostly dilated to 7-8 mm; smaller or larger diameters were rarely chosen (6 mm in 2 patients, 9 or 10 mm in 15 patients). No effect of stent type or stent diameter on any of our endpoints was found in either univariate or multivariate analyses (data not shown). Incidence of ACLF and in-hospital mortality: Table 3 shows the incidence of ACLF as well as the in-hospital mortality of the matched patients. Patients receiving TIPS more often had ACLF of any grade (TIPS: 70/214 patients vs No TIPS: 57/214 patients) and reached higher ACLF grades (P = 0.04). An increase in ACLF grade (as compared to the ACLF grade at hospital admission) was more common in the TIPS group than in the No TIPS group (38/214 patients vs 23/214 patients). The hospital stay was longer in the TIPS group. The majority of patients in both groups had ACLF grade 1, which was due to renal failure. Organ systems affected in patients with ACLF > 1 were the brain (hepatic encephalopathy grade 3-4) and/or liver function based on bilirubin, in addition to renal failure. ACLF > 1 was mostly due to acute infections. Changes of acute on chronic liver failure grade during hospital stay and in-hospital mortality (matched groups) OR: Odds ratio; ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt. There was no difference in terms of in-hospital mortality.
In the TIPS group, 11 of 214 patients died; in the No TIPS group, 13 of 214 patients died. Mortality increased with the ACLF grade in both groups. Multivariate logistic regressions were performed as a sensitivity analysis and confirmed that TIPS was a risk factor for ACLF but not for in-hospital mortality (Table 4). Mortality in every ACLF stratum except ACLF 2 was comparable in both groups. For patients with ACLF 2, we found a lower mortality in the TIPS group than in the No TIPS group (OR 0.09, 95%CI 0.01-0.87). The mortality of TIPS patients whose ACLF increased by 2 or 3 grades after TIPS placement was high (4/10 died). This also applied to the No TIPS group, with an even higher mortality (4/5 patients whose ACLF increased by 2 or 3 grades compared to the grade at hospital admission died). Sensitivity analysis: Multivariate regressions (main effects only) Dependent variables were in-hospital mortality (upper panel) and any increase in acute on chronic liver failure grade (lower panel). The full models (left side) included all parameters used for propensity score matching as covariates. After stepwise backward elimination by the Akaike information criterion, a model (best model, right side) was selected for each dependent variable. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt. Most patients in both groups (No TIPS 89%, TIPS 82%) without ACLF at admission did not develop ACLF during the hospital stay. Many patients who developed ACLF grade 2 or 3 already had ACLF at hospital admission (5/10 patients in the No TIPS group and 11/20 patients in the TIPS group). Three patients in the TIPS group developed ACLF during the period between hospital admission and TIPS placement, i.e., before TIPS was implanted. Many of the pre-TIPS ACLFs resolved after TIPS placement.
When comparing the highest ACLF grade before TIPS to the ACLF grade at hospital discharge (assuming ACLF grade 3 for patients who died), 32 patients (15%) improved their ACLF grade after TIPS placement, while only 21 patients (10%) had a worse ACLF grade at discharge than at the time of TIPS placement. Estimated in-hospital mortality and risk of ACLF: Using multivariate logistic regression models based on the MELD or Child scores at admission, the probabilities of in-hospital death and of an increase in ACLF grade were estimated for the TIPS and the No TIPS group (Figure 2). The likelihood of death increases with the severity of the disease at admission, independent of whether this is assessed by the MELD or the Child score (Figure 2A and B). The regression curves for mortality are almost parallel, indicating that mortality depends only on liver function, not on TIPS placement or an interaction between TIPS placement and liver function. However, the regression curves for an increase in ACLF grade differ clearly between TIPS and No TIPS (Figure 2C and D). The probability of ACLF in the TIPS group is lower than in the No TIPS group at low to moderate MELD and Child levels, but higher than in the No TIPS group at high MELD and Child scores. The intersection of the regression curves suggests an interaction between the MELD/Child score and TIPS placement. In fact, the multivariate logistic regression shows a statistically significant interaction term for Child score and TIPS (P = 0.03; Table 5). In our model, the TIPS group has a lower ACLF incidence at Child scores below 8 points and a higher ACLF incidence at 11 points and above. Between 8 and 11 points the standard errors of both groups overlap, indicating no relevant difference between the groups. The same effect can be observed when using the MELD score instead of the Child score; however, the interaction is weaker and not statistically significant (P = 0.19).
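The crossing regression curves in Figure 2C and D correspond to a logistic model with a score-by-TIPS interaction term. The sketch below reproduces this qualitative behavior on synthetic data with invented coefficients (it does not use the study data): the fitted model predicts a lower ACLF probability with TIPS at low Child scores and a higher one at high scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data (invented coefficients, NOT the study data) that
# reproduce the crossing curves: ACLF log-odds rise with the Child
# score, and the TIPS effect reverses sign as the score increases.
n = 4000
child = rng.integers(5, 16, n).astype(float)   # Child score, 5-15
tips = rng.integers(0, 2, n).astype(float)
true_logit = -6.0 + 0.5 * child + tips * (-3.5 + 0.37 * child)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Logistic regression with a Child x TIPS interaction term, fitted by
# Newton's method (a tiny ridge term keeps the solve numerically stable).
X = np.column_stack([np.ones(n), child, tips, child * tips])
beta = np.zeros(4)
for _ in range(30):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X) + 1e-8 * np.eye(4),
                            X.T @ (y - p))

def prob(child_score, with_tips):
    """Fitted P(ACLF) for a given Child score, with or without TIPS."""
    z = beta @ [1.0, child_score, float(with_tips),
                child_score * float(with_tips)]
    return 1.0 / (1.0 + np.exp(-z))

for c in (6, 12):
    print(f"Child {c}: P(ACLF) with TIPS {prob(c, 1):.2f}, "
          f"without {prob(c, 0):.2f}")
```

A positive fitted interaction coefficient (beta[3]) produces exactly the pattern described in the text: the TIPS and No TIPS probability curves intersect at an intermediate Child score.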
Estimated in-hospital mortality and risk of acute on chronic liver failure depending on liver function. A and B: Estimated probability of dying in hospital depending on liver function at hospital admission; C and D: Estimated probability of acute on chronic liver failure (ACLF) occurring, or existing ACLF worsening, depending on liver function at hospital admission. All probabilities were estimated using a multivariate logistic regression model based on the MELD and Child scores at hospital admission. TIPS: Transjugular intrahepatic portosystemic shunt. Multivariate logistic regressions with interaction terms For models C and D, death was treated as an increase in acute on chronic liver failure. Models A and B show an effect of only the MELD/Child scores on mortality. Transjugular intrahepatic portosystemic shunt (TIPS) and the interaction of TIPS with the MELD/Child scores (MELD:TIPS, Child:TIPS) have no significant influence on mortality (A and B). In model D a significant interaction term Child:TIPS exists. In model C the interaction term MELD:TIPS is not significant, indicating a weaker interaction than in model D. ACLF: Acute on chronic liver failure; TIPS: Transjugular intrahepatic portosystemic shunt. DISCUSSION: Most of the randomized controlled trials (RCTs) have been performed in patients with good liver function. This applies in particular to the RCTs that showed a survival benefit; in these studies the mean MELD was 9.6[6] to 12.1[7]. Therefore, many patients with refractory ascites receive no TIPS because of impaired liver function. Others have considered MELD scores ≥ 18[13,23,24] to ≥ 24[25,26] and bilirubin levels ≥ 51.3 to ≥ 85.5 μmol/L[13,27] as contraindications for TIPS. Our TIPS patients had comparatively poor liver function at hospital admission (MELD median 14, mean 15.2), allowing us to describe mortality and morbidity in this high-risk group.
In our cohort of patients with significantly impaired liver function, ACLF incidence and in-hospital mortality were within the range observed in other studies on ACLF[16,28,29]. In-hospital mortality was neither positively nor negatively influenced by TIPS placement, despite the comparatively poor liver function of our patients. In the matched cohorts, ACLF occurred more frequently in the TIPS group than in conservatively treated patients. The results of the multivariate logistic regressions suggest that this effect depends on the extent of pre-existing liver damage. In patients with good liver function (Child ≤ 8), ACLF occurs less frequently in the TIPS group. However, at higher scores (Child ≥ 11), the probability of developing ACLF is higher in the TIPS group than in the No TIPS group. This interaction blurs the effect of TIPS on ACLF incidence in univariate analyses. Not all ACLFs in the TIPS group can be attributed to TIPS. The majority of the ACLFs occurred before TIPS placement, and many patients already had at least ACLF grade 1 on hospital admission. ACLF grade 1 was almost exclusively due to renal failure, as was to be expected in patients with recurrent tense ascites. Patients whose ACLF grade increased by 2 or 3 during the hospital stay had a particularly poor outcome in both groups. A serious deterioration of liver function after TIPS placement is often attributed to the TIPS itself. In our patients, such events occurred in both groups when the entire hospital stay was considered (No TIPS group: 5/214 patients; TIPS group: 10/214 patients). Some of the ACLFs after TIPS placement are likely due to causes other than TIPS, such as bacterial infections or gastrointestinal bleeding. Such events precede most ACLFs and can occur with and without TIPS placement[29]. In line with this, TIPS was not a precipitant of ACLF in a recently published study on acute decompensation and ACLF[28].
Furthermore, the majority of pre-TIPS ACLFs resolved after TIPS placement, suggesting that TIPS is more capable of overcoming an ACLF than of causing one. We studied patients with recurrent tense ascites, in whom the most common cause of ACLF was kidney failure. It is plausible that a TIPS can improve such an ACLF, e.g., because the dose of diuretics can be lowered or diuretics can be discontinued altogether. We did not include an analysis of the effect of TIPS on ascites resolution, since it typically takes up to several months after TIPS placement for the underlying circulatory, renal and neurohumoral dysfunction to normalize[27]. Therefore, the effect of TIPS placement on ascites cannot be reliably assessed during the hospital stay. When interpreting these results, the limitations of a retrospective analysis have to be considered. Because this is a retrospective study, many patients in the No TIPS group lack data on the further course after hospital discharge. For the selected endpoints (highest ACLF grade during the inpatient stay, death during the inpatient stay), complete data are available in both groups; therefore, we had to limit the analysis to the inpatient stay. In this study, propensity score matching was used prior to comparing the TIPS and No TIPS groups. However, even with propensity score matching, a similar distribution of unknown confounders cannot be guaranteed. We only evaluated the short-term outcome during the hospital stay. It is well known that the positive impact of a TIPS only takes effect after a few weeks to months[23,27]; in fact, some studies have observed an increased mortality during the first few weeks after TIPS placement[24,30]. Therefore, positive effects of TIPS on survival might be underestimated. On the other hand, our results were confirmed and extended by the multivariate logistic regressions (Table 5). The multivariate logistic regression also provided insight into the complex interactions between liver function and TIPS, as seen in Figure 2.
Some ACLFs were already present on admission, some occurred before TIPS, and some improved after TIPS. The fact that some patients already had ACLF prior to TIPS complicates the interpretation of the relationship between TIPS and ACLF. As in all retrospective studies, firm conclusions about the causal relationship between ACLF and TIPS cannot be drawn. Furthermore, we cannot systematically analyze why TIPS was chosen in some patients and not in others. We can only compare the clinical outcome of both groups after very careful propensity score matching. Our TIPS patients had a comparatively poor liver function, but a bilirubin of 85.5 μmol/L or a MELD score of 24 points was rarely exceeded (approximately 8% and 6% of patients, respectively). In addition, in patients with very high MELD scores on hospital admission, TIPS placement was performed only after initial stabilization and after the MELD score had improved. Since the number of observations in our study is limited for this situation, a decision for TIPS placement should be made with caution in such patients. Nevertheless, as shown in Figure 2 and in accordance with other studies, the mortality in the TIPS group is not higher than in the No TIPS group even at the highest MELD and Child scores[17,31-33]. Our data show an increased risk of ACLF in the TIPS group in patients with severely impaired liver function (Child ≥ 11 points), but not in patients with good or moderately impaired liver function. These findings may explain why TIPS is often considered a risky intervention with potentially unfavorable outcomes in patients with high MELD or Child scores. Nevertheless, we did not find such a negative effect of TIPS placement on in-hospital mortality in patients with high to very high MELD and Child scores. We found that many ACLFs in the TIPS group occurred before TIPS placement and often resolved after TIPS placement. Unlike several previous RCTs, we did not find a positive effect of TIPS on mortality. 
Possible reasons are the comparatively short follow-up and the significantly worse liver function of our TIPS patients compared to the patients in the RCTs. In the presence of moderately to severely impaired liver function, recurrent tense ascites may be a dominant symptom, and TIPS is the most effective therapy for recurrent tense ascites. Therefore, we conclude that TIPS is a viable option not only for patients with good liver function but also for patients with high Child scores, after carefully weighing the increased risk of ACLF against the expected benefits. CONCLUSION: TIPS placement for recurrent tense ascites is associated with an increased incidence of ACLF. This effect occurs only in patients with severely impaired liver function (Child score ≥ 11) and does not lead to a higher in-hospital mortality compared with conservative treatment.
Background: Transjugular intrahepatic portosystemic shunt (TIPS) placement is an effective intervention for recurrent tense ascites. Some studies show an increased risk of acute on chronic liver failure (ACLF) associated with TIPS placement. It is not clear whether ACLF in this context is a consequence of TIPS or of the pre-existing liver disease. Methods: Two hundred and fourteen patients undergoing their first TIPS placement for recurrent tense ascites at our tertiary-care center between 2007 and 2017 were identified (TIPS group). Three hundred and ninety-eight patients of the same time interval with liver cirrhosis and recurrent tense ascites not undergoing TIPS placement (No TIPS group) were analyzed as a control group. TIPS indication, diagnosis of recurrent ascites, further diagnoses and clinical findings were obtained from a database search and patient records. The in-hospital mortality and ACLF incidence of both groups were compared using 1:1 propensity score matching and multivariate logistic regressions. Results: After propensity score matching, the TIPS and No TIPS groups were comparable in terms of laboratory values and ACLF incidence at hospital admission. There was no detectable difference in mortality (TIPS: 11/214, No TIPS 13/214). During the hospital stay, ACLF occurred more frequently in the TIPS group than in the No TIPS group (TIPS: 70/214, No TIPS: 57/214, P = 0.04). This effect was confined to patients with severely impaired liver function at hospital admission as indicated by a significant interaction term of Child score and TIPS placement in multivariate logistic regression. The TIPS group had a lower ACLF incidence at Child scores < 8 points and a higher ACLF incidence at ≥ 11 points. No significant difference was found between groups in patients with Child scores of 8 to 10 points. 
Conclusions: TIPS placement for recurrent tense ascites is associated with an increased rate of ACLF in patients with severely impaired liver function but does not result in higher in-hospital mortality.
INTRODUCTION: Transjugular intrahepatic portosystemic shunt (TIPS) is an effective therapy for complications of portal hypertension, such as ascites or esophageal variceal bleeding. Although TIPS placement is effective against ascites, early studies showed no survival benefit after TIPS placement compared to repeated paracentesis and albumin substitution[1-3]. More recent studies have shown more promising results, such as survival benefit[4-7], improved renal function[8,9] and better quality of life[10,11]. TIPS placement is therefore recommended as the treatment of choice[12,13]. Nevertheless, TIPS placement is an invasive procedure with considerable risks. In addition to hepatic encephalopathy and bleeding complications due to the placement procedure, sudden worsening of liver function is a serious complication. It has been observed after 5% to 10% of TIPS procedures and has a serious prognosis[14,15]. Such an acute deterioration of liver function accompanied by single- or multi-organ-failure is a common complication of advanced liver cirrhosis. This clinical syndrome has been described as acute on chronic liver failure (ACLF)[16]. Due to the risk of liver failure, TIPS placement for ascites is often limited to patients with good liver function and most randomized controlled trials have been conducted in patients with good liver function. It is still unclear how often ACLF occurs after TIPS placement and whether it is due to the TIPS procedure or rather to the severity of the underlying liver disease[17]. Recent recommendations argue against strict cut-off values for MELD, Child or other scoring systems. Instead, they recommend individual decision-making[18]. 
To better address the risk of ACLF in this challenging clinical situation, the aims of this study were: (1) to determine whether ACLF occurs more often in patients with recurrent tense ascites treated with TIPS than in patients receiving conservative therapy; (2) to compare the outcome of ACLF associated with TIPS placement with the outcome of ACLF in patients receiving conservative therapy; and (3) to evaluate whether the risk of ACLF and death associated with TIPS placement increases disproportionately in patients with marginal liver function. CONCLUSION: The medical limits of TIPS placement for recurrent tense ascites should be evaluated in prospective studies, which need to address the indications, contraindications and the associated complex decision making.
Background: Transjugular intrahepatic portosystemic shunt (TIPS) placement is an effective intervention for recurrent tense ascites. Some studies show an increased risk of acute on chronic liver failure (ACLF) associated with TIPS placement. It is not clear whether ACLF in this context is a consequence of TIPS or of the pre-existing liver disease. Methods: Two hundred and fourteen patients undergoing their first TIPS placement for recurrent tense ascites at our tertiary-care center between 2007 and 2017 were identified (TIPS group). Three hundred and ninety-eight patients of the same time interval with liver cirrhosis and recurrent tense ascites not undergoing TIPS placement (No TIPS group) were analyzed as a control group. TIPS indication, diagnosis of recurrent ascites, further diagnoses and clinical findings were obtained from a database search and patient records. The in-hospital mortality and ACLF incidence of both groups were compared using 1:1 propensity score matching and multivariate logistic regressions. Results: After propensity score matching, the TIPS and No TIPS groups were comparable in terms of laboratory values and ACLF incidence at hospital admission. There was no detectable difference in mortality (TIPS: 11/214, No TIPS 13/214). During the hospital stay, ACLF occurred more frequently in the TIPS group than in the No TIPS group (TIPS: 70/214, No TIPS: 57/214, P = 0.04). This effect was confined to patients with severely impaired liver function at hospital admission as indicated by a significant interaction term of Child score and TIPS placement in multivariate logistic regression. The TIPS group had a lower ACLF incidence at Child scores < 8 points and a higher ACLF incidence at ≥ 11 points. No significant difference was found between groups in patients with Child scores of 8 to 10 points. 
Conclusions: TIPS placement for recurrent tense ascites is associated with an increased rate of ACLF in patients with severely impaired liver function but does not result in higher in-hospital mortality.
8,343
369
[ 382, 479, 182, 3054, 340, 636, 536, 1292, 48 ]
10
[ "tips", "aclf", "patients", "hospital", "group", "liver", "tips group", "mortality", "placement", "tips placement" ]
[ "portal hypertension ascites", "transjugular intrahepatic portosystemic", "liver function tips", "intrahepatic portosystemic shunt", "liver failure tips" ]
null
[CONTENT] Liver cirrhosis | Ascites | Transjugular intrahepatic portosystemic shunt | Acute on chronic liver failure | Mortality | Propensity score [SUMMARY]
[CONTENT] Liver cirrhosis | Ascites | Transjugular intrahepatic portosystemic shunt | Acute on chronic liver failure | Mortality | Propensity score [SUMMARY]
null
[CONTENT] Liver cirrhosis | Ascites | Transjugular intrahepatic portosystemic shunt | Acute on chronic liver failure | Mortality | Propensity score [SUMMARY]
[CONTENT] Liver cirrhosis | Ascites | Transjugular intrahepatic portosystemic shunt | Acute on chronic liver failure | Mortality | Propensity score [SUMMARY]
[CONTENT] Liver cirrhosis | Ascites | Transjugular intrahepatic portosystemic shunt | Acute on chronic liver failure | Mortality | Propensity score [SUMMARY]
[CONTENT] Humans | Child | Ascites | Portasystemic Shunt, Transjugular Intrahepatic | Conservative Treatment | Propensity Score | Acute-On-Chronic Liver Failure [SUMMARY]
[CONTENT] Humans | Child | Ascites | Portasystemic Shunt, Transjugular Intrahepatic | Conservative Treatment | Propensity Score | Acute-On-Chronic Liver Failure [SUMMARY]
null
[CONTENT] Humans | Child | Ascites | Portasystemic Shunt, Transjugular Intrahepatic | Conservative Treatment | Propensity Score | Acute-On-Chronic Liver Failure [SUMMARY]
[CONTENT] Humans | Child | Ascites | Portasystemic Shunt, Transjugular Intrahepatic | Conservative Treatment | Propensity Score | Acute-On-Chronic Liver Failure [SUMMARY]
[CONTENT] Humans | Child | Ascites | Portasystemic Shunt, Transjugular Intrahepatic | Conservative Treatment | Propensity Score | Acute-On-Chronic Liver Failure [SUMMARY]
[CONTENT] portal hypertension ascites | transjugular intrahepatic portosystemic | liver function tips | intrahepatic portosystemic shunt | liver failure tips [SUMMARY]
[CONTENT] portal hypertension ascites | transjugular intrahepatic portosystemic | liver function tips | intrahepatic portosystemic shunt | liver failure tips [SUMMARY]
null
[CONTENT] portal hypertension ascites | transjugular intrahepatic portosystemic | liver function tips | intrahepatic portosystemic shunt | liver failure tips [SUMMARY]
[CONTENT] portal hypertension ascites | transjugular intrahepatic portosystemic | liver function tips | intrahepatic portosystemic shunt | liver failure tips [SUMMARY]
[CONTENT] portal hypertension ascites | transjugular intrahepatic portosystemic | liver function tips | intrahepatic portosystemic shunt | liver failure tips [SUMMARY]
[CONTENT] tips | aclf | patients | hospital | group | liver | tips group | mortality | placement | tips placement [SUMMARY]
[CONTENT] tips | aclf | patients | hospital | group | liver | tips group | mortality | placement | tips placement [SUMMARY]
null
[CONTENT] tips | aclf | patients | hospital | group | liver | tips group | mortality | placement | tips placement [SUMMARY]
[CONTENT] tips | aclf | patients | hospital | group | liver | tips group | mortality | placement | tips placement [SUMMARY]
[CONTENT] tips | aclf | patients | hospital | group | liver | tips group | mortality | placement | tips placement [SUMMARY]
[CONTENT] tips | placement | liver | tips placement | patients | aclf | therapy | function | ascites | liver function [SUMMARY]
[CONTENT] cases | tips | patient | data | patients | group | tips group | codes | test | statistical [SUMMARY]
null
[CONTENT] mortality compared conservative treatment | score 11 lead higher | ascites associated | ascites associated increased | ascites associated increased incidence | aclf effect occurs | aclf effect occurs patients | lead | tips placement recurrent tense | tips placement recurrent [SUMMARY]
[CONTENT] tips | patients | aclf | group | tips group | liver | hospital | child | placement | tips placement [SUMMARY]
[CONTENT] tips | patients | aclf | group | tips group | liver | hospital | child | placement | tips placement [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] Two hundred and fourteen | first | tertiary | between 2007 and 2017 ||| Three hundred and ninety-eight ||| ||| 1:1 [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| Two hundred and fourteen | first | tertiary | between 2007 and 2017 ||| Three hundred and ninety-eight ||| ||| 1:1 ||| ||| 11/214 ||| 70/214 | 57/214 | 0.04 ||| Child ||| Child | 8 | ≥ 11 ||| Child | 8 to 10 ||| [SUMMARY]
[CONTENT] ||| ||| ||| Two hundred and fourteen | first | tertiary | between 2007 and 2017 ||| Three hundred and ninety-eight ||| ||| 1:1 ||| ||| 11/214 ||| 70/214 | 57/214 | 0.04 ||| Child ||| Child | 8 | ≥ 11 ||| Child | 8 to 10 ||| [SUMMARY]
Anti-apoptotic gene transcription signature of salivary gland neoplasms.
22313995
Development of accurate therapeutic approaches to salivary gland neoplasms depends on better understanding of their molecular pathogenesis. Tumour growth is regulated by the balance between proliferation and apoptosis. Few studies have investigated apoptosis in salivary tumours, relying almost exclusively on immunohistochemistry or the TUNEL assay. Furthermore, there is no information regarding the mRNA expression profile of apoptotic genes in salivary tumours. Our objective was to investigate the quantitative expression of BCL-2 (anti-apoptotic), BAX and Caspase3 (pro-apoptotic genes) mRNAs in salivary gland neoplasms and examine the association of these data with tumour size, proliferative activity and p53 staining (parameters associated with a poor prognosis in patients with salivary tumours).
BACKGROUND
We investigated the apoptotic profile of salivary neoplasms in twenty fresh samples of benign and seven samples of malignant salivary neoplasms, using quantitative real time PCR. We further assessed p53 and ki-67 immunopositivity and obtained clinical tumour size data.
METHODS
We demonstrated that BCL-2 mRNA is overexpressed in salivary neoplasms, leading to an overall anti-apoptotic profile. We also found an association between the anti-apoptotic index (BCL-2/BAX) with p53 immunoexpression. A higher proliferative activity was found in the malignant tumours. In addition, tumour size was associated with cell proliferation but not with the transcription of apoptotic genes.
RESULTS
In conclusion, we show an anti-apoptotic gene expression profile in salivary neoplasms in association with p53 staining, but independent of cell proliferation and tumour size.
CONCLUSION
[ "Apoptosis", "Caspase 3", "Cell Proliferation", "Gene Expression Profiling", "Humans", "Immunohistochemistry", "Proto-Oncogene Proteins c-bcl-2", "RNA, Messenger", "Reverse Transcriptase Polymerase Chain Reaction", "Salivary Gland Neoplasms", "Tumor Suppressor Protein p53", "bcl-2-Associated X Protein" ]
3293030
Background
Salivary gland tumours have an annual global incidence between 0.4 and 13.5 cases per 100 000 individuals [1]. High proliferative activity, presence of residual tumour and advanced tumour stage were shown to be strong negative predictors of survival in salivary gland neoplasms [2]. Development of targeted therapy calls for a better understanding of their molecular and cellular biology [3]. Apoptosis is a highly regulated active process, characterized by cell shrinkage, chromatin condensation and DNA fragmentation promoted by endonucleases. Induction of apoptosis is a normal defense against loss of growth control which follows DNA mutations. Apoptosis is frequently deregulated in human cancers, being a suitable target for anticancer therapy [4]. The B-cell lymphoma (BCL-2) family comprises different regulators involved in apoptosis. BCL-2 is an important proto-oncogene located at chromosome 18q21 [5]. It was the first gene implicated in the regulation of apoptosis. Its protein is able to stop programmed cell death (apoptosis) facilitating cell survival independent of promoting cell division [6]. BCL-2 is thought to be involved in resistance to conventional cancer treatment and its increased expression has been implicated in a number of cancers [4]. Apparently, many cancers depend on the anti-apoptotic activity of BCL-2 for tumour initiation and maintenance [7]. BAX (Bcl-2-associated protein X) is the most characteristic death-promoting member of the BCL-2 family. The translocation of Bax protein from the cytosol to the mitochondria triggers the activation of the caspases cascade, leading to death [8]. Caspase 3 (CASP3) was first described in 1995 and once activated, is considered to be responsible for the actual demolition of the cell during apoptosis [9,10]. In salivary neoplasms, apoptosis has been investigated almost exclusively by means of immunohistochemistry. 
Bcl-2 and Bax proteins are expressed in most of the salivary gland neoplasms investigated, but Bcl-2 positivity was found in a lower percentage of mucoepidermoid carcinomas [11-15]. In studies using TUNEL in salivary gland neoplasms, apoptotic activity was inversely associated with Bcl-2 immunoexpression [11,15]. In salivary gland carcinomas, TUNEL was associated with a poor prognosis, being correlated with p53 and ki-67 staining [16]. As the transcription of apoptosis-related genes could help to elucidate the pathogenesis of tumours, we propose to investigate the quantitative expression of BCL-2 (anti-apoptotic), BAX and Caspase3 (pro-apoptotic genes) using qPCR. As tumour size, high proliferative activity and p53 staining are associated with a poor prognosis in patients with salivary tumours [2,17], we tested the association of these parameters with the transcription of the apoptotic/anti-apoptotic genes.
Methods
Samples Twenty seven salivary gland neoplasms were included in this study. Fresh tumour samples were obtained from patients who underwent surgical excision of salivary gland neoplasms. The study was approved by the local ethics committee. The diagnoses were reviewed and confirmed: 17 pleomorphic adenomas, one basal cell adenoma, one Warthin tumour, one mucinous cystadenoma, three polymorphous low grade adenocarcinomas, two low grade mucoepidermoid carcinomas, one adenoid cystic carcinoma and one cystadenocarcinoma. Six samples of normal salivary glands obtained from healthy volunteers undergoing surgery for non-neoplastic disease were used as controls. For each sample, a portion of the lesion was stored in RNAHolder (BioAgency Biotecnologia, São Paulo, SP, Brazil) at -80°C, while another portion was fixed in 10% buffered formalin and paraffin embedded. All the samples underwent the same fixation and processing procedures. 
Quantitative reverse transcriptase PCR (qRT-PCR) Total RNA was isolated from samples using Tri-Phasis Reagent (BioAgency, São Paulo, Brazil) and treated with DNase (Invitrogen Life Technologies, Carlsbad, CA, USA). cDNA was synthesized with Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Quantitative PCR analyses were carried out using 1x SYBR Green PCR Master Mix (Applied Biosystems, Warrington, CHS, UK). BCL-2, BAX, CASP3 and ACTB primers were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA) version 3.0. The primer sequences are specified in Table 1. Reactions were performed in duplicate and run on a Step One machine (Applied Biosystems, Foster City, CA, USA). The cycling parameters were 10 min denaturation at 95°C followed by 40 cycles at 95°C for 15 s and 56°C for 1 min. The cycling was followed by melting curve analysis to distinguish specificity of the PCR products. BCL-2, BAX and CASP3 expressions were normalized with actin (ACTB) as internal control. The average threshold cycle (Ct) for two replicates per sample was used to calculate ΔCt. Relative quantification of these genes expressions was calculated with the 2-ΔΔCt method. Normal salivary gland samples were used as a calibrator. qRT-PCR primer sequences and amplicon sizes Apoptosis tendency was estimated by two indexes: Anti-apoptotic index 1 (AI-1), calculated by the ratio between BCL-2/BAX expression and anti-apoptotic index 2 (AI-2), calculated by the ratio between BCL-2/CASP3. Samples with AI-1 or AI-2 higher than 1 were regarded as having higher anti-apoptotic activity than samples exhibiting an AI lower than 1. 
Immunohistochemistry Paraffin-embedded sections (4 μm) were dewaxed in xylene, hydrated with graded ethanol and endogenous peroxidase blocked in 1% hydrogen peroxidase for 15 min. Antigen retrieval was performed with citric acid, pH 6.0. The primary antibodies used were ki-67 (MIB-1) and p53 (DO7), both from DAKO (Carpinteria, CA, USA) and diluted 1:50. Primary antiserum incubation was performed for 30 min at room temperature and binding visualized using a polymer-based system (EnVision, Dako Corporation, Carpinteria, CA, USA) with diaminobenzidine (Sigma, St Louis, MO, USA) as chromogen. For each antibody, positive (squamous cell carcinoma with known reactivity) and negative controls in which the primary antibody was omitted were included. The sections were counterstained with hematoxylin, dehydrated and mounted. The percentage of ki-67 positive nuclei was obtained by counting nuclear staining in 10 high power fields (400 × magnification) including the most positive areas. Neoplasms were divided into two groups with 5% or more positive nuclei being considered as high proliferative activity, or fewer than 5% positive nuclei considered low proliferative activity. p53 stained nuclei were counted in eight fields (400 × magnification); more than 5% of positive nuclei was considered positive [18]. 
Statistical Analyses Mann-Whitney, Fisher Test and Spearman correlation tests were used when appropriate. P values < 0.05 were considered statistically significant. These tests were performed with BioEstat software (Belém, PA, Brazil), version 4.
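The 2-ΔΔCt relative quantification and the two anti-apoptotic indexes described in the methods can be sketched as follows. The Ct values are invented purely for illustration; only the arithmetic (ΔCt normalization to ACTB, ΔΔCt against the normal-gland calibrator, and the BCL-2/BAX and BCL-2/CASP3 ratios) reflects the procedure described above.

```python
def fold_change(ct_target_sample, ct_actb_sample, ct_target_calib, ct_actb_calib):
    """Relative expression by the 2^-ddCt method: normalize the target gene
    to the ACTB internal control, then to the normal-gland calibrator."""
    d_ct_sample = ct_target_sample - ct_actb_sample
    d_ct_calib = ct_target_calib - ct_actb_calib
    dd_ct = d_ct_sample - d_ct_calib
    return 2.0 ** (-dd_ct)

# Invented Ct values (mean of duplicates) for one hypothetical tumour sample
# and the normal-gland calibrator.
bcl2 = fold_change(24.0, 18.0, 27.0, 18.0)   # 8-fold BCL-2 overexpression
bax = fold_change(25.0, 18.0, 26.0, 18.0)    # 2-fold BAX
casp3 = fold_change(26.5, 18.0, 26.5, 18.0)  # CASP3 unchanged (fold = 1)

ai1 = bcl2 / bax    # anti-apoptotic index 1 (BCL-2/BAX)
ai2 = bcl2 / casp3  # anti-apoptotic index 2 (BCL-2/CASP3)
print(ai1 > 1 and ai2 > 1)  # both indexes > 1 -> anti-apoptotic profile
```

Note that because the fold changes are exponential in -ΔΔCt, a 3-cycle earlier threshold for BCL-2 relative to the calibrator corresponds to a 2³ = 8-fold overexpression.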
null
null
Conclusions
The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/12/61/prepub
[ "Background", "Samples", "Quantitative reverse transcriptase PCR (qRT-PCR)", "Immunohistochemistry", "Statistical Analyses", "Results", "BAX, BCL-2 and CASP3 expression", "Anti-apoptotic indexes", "p53 and cell proliferation index", "Tumour size", "Discussion", "Conclusions" ]
[ "Salivary gland tumours have an annual global incidence between 0.4 and 13.5 cases per 100 000 individuals [1]. High proliferative activity, presence of residual tumour and advanced tumour stage were shown to be strong negative predictors of survival in salivary gland neoplasms [2]. Development of targeted therapy calls for a better understanding of their molecular and cellular biology [3].\nApoptosis is a highly regulated active process, characterized by cell shrinkage, chromatin condensation and DNA fragmentation promoted by endonucleases. Induction of apoptosis is a normal defense against loss of growth control which follows DNA mutations. Apoptosis is frequently deregulated in human cancers, being a suitable target for anticancer therapy [4].\nThe B-cell lymphoma (BCL-2) family comprises different regulators involved in apoptosis. BCL-2 is an important proto-oncogene located at chromosome 18q21 [5]. It was the first gene implicated in the regulation of apoptosis. Its protein is able to stop programmed cell death (apoptosis) facilitating cell survival independent of promoting cell division [6]. BCL-2 is thought to be involved in resistance to conventional cancer treatment and its increased expression has been implicated in a number of cancers [4]. Apparently, many cancers depend on the anti-apoptotic activity of BCL-2 for tumour initiation and maintenance [7].\nBAX (Bcl-2-associated protein X) is the most characteristic death-promoting member of the BCL-2 family. The translocation of Bax protein from the cytosol to the mitochondria triggers the activation of the caspases cascade, leading to death [8]. Caspase 3 (CASP3) was first described in 1995 and once activated, is considered to be responsible for the actual demolition of the cell during apoptosis [9,10].\nIn salivary neoplasms, apoptosis has been investigated almost exclusively by means of immunohistochemistry. 
Bcl-2 and Bax proteins are expressed in most of the salivary gland neoplasms investigated, but Bcl-2 positivity was found in a lower percentage of mucoepidermoid carcinomas [11-15]. In the studies with TUNEL in salivary gland neoplasms, apoptotic activity was inversely associated with Bcl-2 immunoexpression [11,15]. In salivary gland carcinomas, TUNEL was associated with a poor prognosis, being correlated with p53 and ki-67 staining [16]. As the transcription of apoptosis related genes could help to elucidate the pathogenesis of tumours, we propose to investigate the quantitative expression of BCL-2 (anti-apoptotic), BAX and Caspase3 (pro-apoptotic genes) using qPCR. As tumour size, high proliferative activity and p53 staining are associated with a poor prognosis of salivary tumours patients [2,17], we tested the association of these parameters with the transcription of the apoptotic/anti-apoptotic genes.", "Twenty seven salivary gland neoplasms were included in this study. Fresh tumour samples were obtained from patients who underwent surgical excision of salivary gland neoplasms. The study was approved by the local ethics committee. The diagnoses were reviewed and confirmed: 17 pleomorphic adenomas, one basal cell adenoma, one Warthin tumour, one mucinous cystadenoma, three polymorphous low grade adenocarcinomas, two low grade mucoepidermoid carcinomas, one adenoid cystic carcinoma and one cystadenocarcinoma. Six samples of normal salivary glands obtained from healthy volunteers undergoing surgery for non-neoplastic disease were used as controls.\nFor each sample, a portion of the lesion was stored in RNAHolder (BioAgency Biotecnologia, São Paulo, SP, Brazil) at -80°C, while another portion was fixed in 10% buffered formalin and paraffin embedded. 
All the samples underwent the same fixation and processing procedures.", "Total RNA was isolated from samples using Tri-Phasis Reagent (BioAgency, São Paulo, Brazil) and treated with DNase (Invitrogen Life Technologies, Carlsbad, CA, USA). cDNA was synthesized with Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Quantitative PCR analyses were carried out using 1x SYBR Green PCR Master Mix (Applied Biosystems, Warrington, CHS, UK). BCL-2, BAX, CASP3 and ACTB primers were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA) version 3.0. The primer sequences are specified in Table 1. Reactions were performed in duplicate and run on a Step One machine (Applied Biosystems, Foster City, CA, USA). The cycling parameters were 10 min denaturation at 95°C followed by 40 cycles at 95°C for 15 s and 56°C for 1 min. The cycling was followed by melting curve analysis to distinguish specificity of the PCR products. BCL-2, BAX and CASP3 expressions were normalized with actin (ACTB) as internal control. The average threshold cycle (Ct) for two replicates per sample was used to calculate ΔCt. Relative quantification of these genes expressions was calculated with the 2-ΔΔCt method. Normal salivary gland samples were used as a calibrator.\nqRT-PCR primer sequences and amplicon sizes\nApoptosis tendency was estimated by two indexes: Anti-apoptotic index 1 (AI-1), calculated by the ratio between BCL-2/BAX expression and anti-apoptotic index 2 (AI-2), calculated by the ratio between BCL-2/CASP3. Samples with AI-1 or AI-2 higher than 1 were regarded as having higher anti-apoptotic activity than samples exhibiting an AI lower than 1.", "Paraffin-embedded sections (4 μm) were dewaxed in xylene, hydrated with graded ethanol and endogenous peroxidase blocked in 1% hydrogen peroxidase for 15 min. Antigen retrieval was performed with citric acid, pH 6.0. 
The primary antibodies used were ki-67 (clone MIB-1) and p53 (clone DO7), both from DAKO (Carpinteria, CA, USA) and diluted 1:50. Primary antibody incubation was performed for 30 min at room temperature and binding was visualized using a polymer-based system (EnVision, Dako Corporation, Carpinteria, CA, USA) with diaminobenzidine (Sigma, St Louis, MO, USA) as the chromogen. For each antibody, a positive control (squamous cell carcinoma with known reactivity) and a negative control in which the primary antibody was omitted were included. The sections were counterstained with hematoxylin, dehydrated and mounted.\nThe percentage of ki-67-positive nuclei was obtained by counting nuclear staining in 10 high-power fields (400× magnification), including the most positive areas. Neoplasms were divided into two groups: 5% or more positive nuclei was considered high proliferative activity, and fewer than 5% positive nuclei low proliferative activity. p53-stained nuclei were counted in eight fields (400× magnification); more than 5% positive nuclei was considered positive [18].", "Mann-Whitney, Fisher's exact and Spearman correlation tests were used as appropriate. P values < 0.05 were considered statistically significant. These tests were performed with BioEstat software (Belém, PA, Brazil), version 4.", " BAX, BCL-2 and CASP3 expression There was no difference in mRNA expression of BAX, BCL-2 and CASP3 between the malignant and benign salivary gland neoplasm groups. However, positive correlations were found (Spearman test) for all the following gene pairs: BAX/BCL-2 (p = 0.010), BAX/CASP3 (p = 0.008) and BCL-2/CASP3 (p < 0.0001). Overall, 78% (21/27) of the salivary neoplasm samples exhibited overexpression of BCL-2 mRNA compared to the expression in normal salivary glands, including 15 out of 17 pleomorphic adenomas. 
The expression of BAX and CASP3 in the salivary tumours, however, was higher than in normal glands in only 52% (14/27) and 44% (12/27) of samples, respectively.\n Anti-apoptotic indexes Anti-apoptotic index results are displayed in Table 2 and illustrated in Figure 1. Eighty-five percent (n = 23) of all salivary neoplasm samples showed a higher AI-1 or AI-2 when compared with normal salivary glands (Figure 1). AI-1 and AI-2 did not show an association with malignancy. p53 immunopositivity was associated with a statistically significant higher AI-1 (p = 0.004) (Figure 2). AI-2 was not associated with any of the investigated parameters.\nBenign and malignant tumour data regarding diagnosis, tumour size, p53 staining, cell proliferation index and anti-apoptotic indexes 1 and 2\na Tumour size 1 refers to tumours ≤ 2 cm and tumour size 2 to tumours > 2 cm. PA = pleomorphic adenoma, MC = mucinous cystadenoma, WT = Warthin tumour, BCA = basal cell adenoma, PLGA = polymorphous low grade adenocarcinoma, ACC = adenoid cystic carcinoma, CA = cystadenocarcinoma, MEC = mucoepidermoid carcinoma. The #27 paraffin-embedded tissue was small and was not used in immunohistochemistry\nAnti-apoptotic index 1 (black bars) and 2 (white bars) in the malignant and benign salivary gland neoplasms. Most of the samples exhibited an anti-apoptotic profile. 
All the pleomorphic adenoma samples exhibited more BCL-2 than CASP3 transcription. Samples #19, #21 and #22, which showed a decreased AI-1 and AI-2, were p53 negative. The x-axis represents the pool of normal salivary gland samples included as the reference sample in all reactions. AI-1 = BCL-2/BAX, AI-2 = BCL-2/CASP3.\nParameters statistically associated with p53 positivity. (A) p53 immunopositivity was associated with a statistically significant higher anti-apoptotic index 1 (p = 0.004). (B) p53 immunostaining in a sample of pleomorphic adenoma. (C) High proliferation index in a sample of polymorphous low grade adenocarcinoma. High proliferative activity was associated with p53 positivity. AI-1 = BCL-2/BAX (original magnification 400×).\n p53 and cell proliferation index p53 and cell proliferation results are displayed in Table 2. The samples that were p53 positive by immunohistochemistry also showed high relative quantification of BCL-2 (p = 0.0003) and CASP3 mRNA (p = 0.0007). p53 positivity was also associated with a high cellular proliferation index (p = 0.002) (Figure 2). In addition, samples exhibiting a high cellular proliferation index were associated with high CASP3 transcriptional levels (p = 0.018). A high proliferation index was associated with malignancy (p = 0.001) as well.\n Tumour size Samples were divided into two groups: tumour size ≤ 2 cm and tumour size > 2 cm. 
Tumour size showed an association with a high cellular proliferation index (p = 0.019), despite showing no association with the transcription of BCL-2, BAX or CASP3, or with the anti-apoptotic indexes.", "Very little is known about the apoptotic index of salivary gland neoplasms. We used two different anti-apoptotic indexes (AI-1 and AI-2) to estimate the apoptotic profile of these lesions. The higher these coefficients are, the more probable an anti-apoptotic profile is. In contrast, an index < 1 indicates an increase in BAX or CASP3 or a decrease in BCL-2 mRNA transcription, favoring apoptosis. It has been shown that the Bcl-2 protein forms heterodimers with the Bax protein such that Bcl-2-Bax inhibits apoptosis, whereas Bax-Bax homodimers favor it [19]. Tumour growth depends on the balance between the proliferation and apoptotic indexes. In the present paper we demonstrated that most of the salivary gland neoplasm samples showed a higher AI-1 and AI-2 when compared to normal salivary glands, suggesting a predominance of anti-apoptotic behavior in neoplastic cells, which in turn contributes to neoplasia growth (Figure 1). This study is the first to demonstrate that salivary gland tumours present an anti-apoptotic transcriptional signature.\nIt was reported, using the 3'-end DNA labeling method (TUNEL), that in salivary gland neoplasms apoptosis is inversely associated with Bcl-2 expression, but not related to Bax expression [11]. This result was strengthened by another publication using the TUNEL method, which described an inverse association between apoptosis and the expression of Bcl-2 in adenoid cystic carcinomas [15]. However, in mucoepidermoid carcinomas no such association was found [13]. In the present paper we have shown increased BCL-2 mRNA transcription, relative to normal salivary glands, in 78% of the salivary tumour samples. 
If we consider only the pleomorphic adenoma samples, BCL-2 overexpression was even higher, corresponding to 88% of these tumours. This result supports the above-mentioned paper by Soini and colleagues (1998) [11], which pointed to a very low apoptotic index (0.01%) in pleomorphic adenomas. Our results are in agreement with another study that described Bcl-2 immunopositivity in 33/35 samples of pleomorphic adenomas investigated [14]. Also, all the pleomorphic adenomas exhibited AI-1 and/or AI-2 higher than normal salivary glands (Figure 1). Altogether, the evidence points to Bcl-2 as an important factor in the pathogenesis of salivary gland neoplasms and as a possible molecular target in future salivary gland tumour treatment.\nWe did not analyze the other benign/malignant lesions as separate groups because, as they are rather unusual (e.g. adenoid cystic carcinoma, cystadenocarcinoma, mucinous cystadenoma), only a few fresh samples were included in the study. However, the expression profile of the two mucoepidermoid samples included in the study was unique, as one of them revealed an apoptotic tendency (#27), and in previous immunohistochemistry-based publications these lesions did not reveal a high percentage of Bcl-2-positive samples [11,13].\nWe demonstrated overall BCL-2 mRNA overexpression, as well as increased AI-1 and AI-2, in the salivary tumours when compared to normal salivary glands, and in addition we found an association between increased tumour size and a high cellular proliferation index. Although the malignant and benign groups of samples did not differ in AI-1/AI-2, the malignant samples showed a statistically significantly higher cellular proliferation index. 
Taking these findings together, it seems that while both benign and malignant tumours tend to evade apoptosis, the malignant samples in addition tend to have higher cell proliferation activity, conferring a growth advantage.\np53-positive samples showed higher BCL-2 transcription levels than the negative ones, indicating an association of p53 immunopositivity with an increased AI-1 and a predominantly anti-apoptotic profile. It has been shown that p53 induces apoptosis by repressing the transcription of the anti-apoptotic gene BCL-2 and activating the transcription of the apoptotic gene BAX [20]. Such apoptotic function can be inactivated by p53 mutations. Mutated p53 is usually more stable than the wild-type protein, leading to higher levels of p53 and to immunohistochemical detection of the protein. Therefore, p53 immunoexpression in the samples analyzed may reflect loss of the apoptosis induction promoted by this protein, which may explain the increased anti-apoptotic activity found in these tumours [21]. According to the latest World Health Organization publication on head and neck tumours, the role of p53 in salivary gland neoplasms remains controversial [1]. In the present paper, we demonstrated an association between malignancy and a high proliferation index, and between proliferation and p53 positivity. This evidence suggests that p53 (clone DO7) positivity may be used as a marker in salivary tumours, as it is associated not only with a higher proliferation index but also with an anti-apoptotic profile, both contributing to tumour growth. This apparent importance of p53 staining as a potentially useful marker in salivary neoplasms reinforces previous findings of an association between higher p53 expression in salivary gland tumours and poor survival [17].", "In conclusion, we demonstrate an overall anti-apoptotic transcriptional signature in salivary gland neoplasms and its association with p53 immunoexpression. 
In addition, the higher proliferative activity found in the malignant tumours suggests that increased cell proliferation confers a growth advantage on malignant salivary tumours. We further demonstrate that tumour size is associated with cell proliferation, but not with the transcription of apoptotic genes." ]
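The relative quantification described in the qRT-PCR methods (ΔCt normalization to ACTB, 2^-ΔΔCt against the normal-gland calibrator) and the two anti-apoptotic indexes can be sketched as follows; this is a minimal illustration, and the Ct values are invented for the example, not data from the study:

```python
# Sketch of the 2^-ΔΔCt relative quantification and the two anti-apoptotic
# indexes (AI-1 = BCL-2/BAX, AI-2 = BCL-2/CASP3) described in the Methods.
# ACTB is the internal control; the normal-gland pool is the calibrator.
# All Ct values below are illustrative, not study data.

def relative_quantity(ct_target, ct_actb, calib_ct_target, calib_ct_actb):
    """2^-ΔΔCt: ΔCt = Ct(target) - Ct(ACTB); ΔΔCt = ΔCt(sample) - ΔCt(calibrator)."""
    delta_ct_sample = ct_target - ct_actb
    delta_ct_calib = calib_ct_target - calib_ct_actb
    return 2 ** -(delta_ct_sample - delta_ct_calib)

# Illustrative mean Ct values (duplicate reactions already averaged).
sample = {"BCL2": 24.0, "BAX": 27.0, "CASP3": 28.0, "ACTB": 18.0}
calib  = {"BCL2": 26.0, "BAX": 26.5, "CASP3": 27.0, "ACTB": 18.0}

rq = {g: relative_quantity(sample[g], sample["ACTB"], calib[g], calib["ACTB"])
      for g in ("BCL2", "BAX", "CASP3")}

ai1 = rq["BCL2"] / rq["BAX"]    # anti-apoptotic index 1
ai2 = rq["BCL2"] / rq["CASP3"]  # anti-apoptotic index 2
profile = "anti-apoptotic" if ai1 > 1 or ai2 > 1 else "apoptotic"
```

With these illustrative values the sample classifies as anti-apoptotic (AI-1 ≈ 5.7, AI-2 = 8), mirroring the profile reported for most tumours in the series.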
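The pairwise Spearman correlations reported for BAX, BCL-2 and CASP3 were computed in BioEstat; the coefficient itself is just the Pearson correlation of midranks and can be sketched in plain Python. The expression vectors below are illustrative, not values from the study:

```python
# Minimal Spearman rank correlation, of the kind used to test pairwise
# association between BAX, BCL-2 and CASP3 relative expression.
# Ties are handled by midranks; expression values are illustrative.

def ranks(values):
    """Average (mid) ranks, 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

bcl2 = [4.0, 2.5, 6.1, 0.8, 3.3]  # illustrative relative quantities
bax  = [1.9, 1.2, 2.8, 0.5, 1.6]
rho = spearman(bcl2, bax)         # concordant ranks here give rho = 1.0
```

In practice one would also need a p-value for rho, which dedicated statistics software (or scipy.stats.spearmanr) provides alongside the coefficient.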
[ "Background", "Methods", "Samples", "Quantitative reverse transcriptase PCR (qRT-PCR)", "Immunohistochemistry", "Statistical Analyses", "Results", "BAX, BCL-2 and CASP3 expression", "Anti-apoptotic indexes", "p53 and cell proliferation index", "Tumour size", "Discussion", "Conclusions" ]
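The immunohistochemical scoring rules from the Methods (5% or more ki-67-positive nuclei over 10 high-power fields → high proliferative activity; more than 5% p53-positive nuclei over eight fields → p53 positive) amount to simple thresholds on pooled counts. A sketch with illustrative counts, not study data:

```python
# Sketch of the ki-67 and p53 scoring thresholds described in the
# immunohistochemistry section. Per-field nucleus counts are illustrative.

def percent_positive(positive_counts, total_counts):
    """Percentage of positive nuclei pooled across all counted fields."""
    return 100.0 * sum(positive_counts) / sum(total_counts)

def ki67_class(positive_counts, total_counts):
    """>= 5% positive nuclei -> high proliferative activity."""
    return "high" if percent_positive(positive_counts, total_counts) >= 5.0 else "low"

def p53_status(positive_counts, total_counts):
    """> 5% positive nuclei -> p53 positive."""
    return percent_positive(positive_counts, total_counts) > 5.0

# Illustrative counts from 10 high-power fields (ki-67).
pos = [12, 8, 15, 9, 11, 7, 10, 13, 6, 9]
tot = [200] * 10
prolif = ki67_class(pos, tot)  # 100/2000 nuclei = 5.0% -> "high"
```

Note the asymmetry of the two cut-offs: exactly 5% counts as high proliferative activity for ki-67 but as negative for p53.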
[ "Salivary gland tumours have an annual global incidence between 0.4 and 13.5 cases per 100 000 individuals [1]. High proliferative activity, the presence of residual tumour and advanced tumour stage were shown to be strong negative predictors of survival in salivary gland neoplasms [2]. Development of targeted therapy calls for a better understanding of their molecular and cellular biology [3].\nApoptosis is a highly regulated active process characterized by cell shrinkage, chromatin condensation and DNA fragmentation promoted by endonucleases. Induction of apoptosis is a normal defense against the loss of growth control that follows DNA mutations. Apoptosis is frequently deregulated in human cancers, making it a suitable target for anticancer therapy [4].\nThe B-cell lymphoma (BCL-2) family comprises different regulators involved in apoptosis. BCL-2 is an important proto-oncogene located at chromosome 18q21 [5]. It was the first gene implicated in the regulation of apoptosis. Its protein is able to stop programmed cell death (apoptosis), facilitating cell survival independently of promoting cell division [6]. BCL-2 is thought to be involved in resistance to conventional cancer treatment, and its increased expression has been implicated in a number of cancers [4]. Apparently, many cancers depend on the anti-apoptotic activity of BCL-2 for tumour initiation and maintenance [7].\nBAX (Bcl-2-associated protein X) is the most characteristic death-promoting member of the BCL-2 family. The translocation of the Bax protein from the cytosol to the mitochondria triggers activation of the caspase cascade, leading to death [8]. Caspase 3 (CASP3) was first described in 1995 and, once activated, is considered to be responsible for the actual demolition of the cell during apoptosis [9,10].\nIn salivary neoplasms, apoptosis has been investigated almost exclusively by means of immunohistochemistry. 
Bcl-2 and Bax proteins are expressed in most of the salivary gland neoplasms investigated, but Bcl-2 positivity was found in a lower percentage of mucoepidermoid carcinomas [11-15]. In the studies with TUNEL in salivary gland neoplasms, apoptotic activity was inversely associated with Bcl-2 immunoexpression [11,15]. In salivary gland carcinomas, TUNEL was associated with a poor prognosis, being correlated with p53 and ki-67 staining [16]. As the transcription of apoptosis related genes could help to elucidate the pathogenesis of tumours, we propose to investigate the quantitative expression of BCL-2 (anti-apoptotic), BAX and Caspase3 (pro-apoptotic genes) using qPCR. As tumour size, high proliferative activity and p53 staining are associated with a poor prognosis of salivary tumours patients [2,17], we tested the association of these parameters with the transcription of the apoptotic/anti-apoptotic genes.", " Samples Twenty seven salivary gland neoplasms were included in this study. Fresh tumour samples were obtained from patients who underwent surgical excision of salivary gland neoplasms. The study was approved by the local ethics committee. The diagnoses were reviewed and confirmed: 17 pleomorphic adenomas, one basal cell adenoma, one Warthin tumour, one mucinous cystadenoma, three polymorphous low grade adenocarcinomas, two low grade mucoepidermoid carcinomas, one adenoid cystic carcinoma and one cystadenocarcinoma. Six samples of normal salivary glands obtained from healthy volunteers undergoing surgery for non-neoplastic disease were used as controls.\nFor each sample, a portion of the lesion was stored in RNAHolder (BioAgency Biotecnologia, São Paulo, SP, Brazil) at -80°C, while another portion was fixed in 10% buffered formalin and paraffin embedded. All the samples underwent the same fixation and processing procedures.\nTwenty seven salivary gland neoplasms were included in this study. 
Fresh tumour samples were obtained from patients who underwent surgical excision of salivary gland neoplasms. The study was approved by the local ethics committee. The diagnoses were reviewed and confirmed: 17 pleomorphic adenomas, one basal cell adenoma, one Warthin tumour, one mucinous cystadenoma, three polymorphous low grade adenocarcinomas, two low grade mucoepidermoid carcinomas, one adenoid cystic carcinoma and one cystadenocarcinoma. Six samples of normal salivary glands obtained from healthy volunteers undergoing surgery for non-neoplastic disease were used as controls.\nFor each sample, a portion of the lesion was stored in RNAHolder (BioAgency Biotecnologia, São Paulo, SP, Brazil) at -80°C, while another portion was fixed in 10% buffered formalin and paraffin embedded. All the samples underwent the same fixation and processing procedures.\n Quantitative reverse transcriptase PCR (qRT-PCR) Total RNA was isolated from samples using Tri-Phasis Reagent (BioAgency, São Paulo, Brazil) and treated with DNase (Invitrogen Life Technologies, Carlsbad, CA, USA). cDNA was synthesized with Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Quantitative PCR analyses were carried out using 1x SYBR Green PCR Master Mix (Applied Biosystems, Warrington, CHS, UK). BCL-2, BAX, CASP3 and ACTB primers were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA) version 3.0. The primer sequences are specified in Table 1. Reactions were performed in duplicate and run on a Step One machine (Applied Biosystems, Foster City, CA, USA). The cycling parameters were 10 min denaturation at 95°C followed by 40 cycles at 95°C for 15 s and 56°C for 1 min. The cycling was followed by melting curve analysis to distinguish specificity of the PCR products. BCL-2, BAX and CASP3 expressions were normalized with actin (ACTB) as internal control. 
The average threshold cycle (Ct) for two replicates per sample was used to calculate ΔCt. Relative quantification of these genes expressions was calculated with the 2-ΔΔCt method. Normal salivary gland samples were used as a calibrator.\nqRT-PCR primer sequences and amplicon sizes\nApoptosis tendency was estimated by two indexes: Anti-apoptotic index 1 (AI-1), calculated by the ratio between BCL-2/BAX expression and anti-apoptotic index 2 (AI-2), calculated by the ratio between BCL-2/CASP3. Samples with AI-1 or AI-2 higher than 1 were regarded as having higher anti-apoptotic activity than samples exhibiting an AI lower than 1.\nTotal RNA was isolated from samples using Tri-Phasis Reagent (BioAgency, São Paulo, Brazil) and treated with DNase (Invitrogen Life Technologies, Carlsbad, CA, USA). cDNA was synthesized with Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Quantitative PCR analyses were carried out using 1x SYBR Green PCR Master Mix (Applied Biosystems, Warrington, CHS, UK). BCL-2, BAX, CASP3 and ACTB primers were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA) version 3.0. The primer sequences are specified in Table 1. Reactions were performed in duplicate and run on a Step One machine (Applied Biosystems, Foster City, CA, USA). The cycling parameters were 10 min denaturation at 95°C followed by 40 cycles at 95°C for 15 s and 56°C for 1 min. The cycling was followed by melting curve analysis to distinguish specificity of the PCR products. BCL-2, BAX and CASP3 expressions were normalized with actin (ACTB) as internal control. The average threshold cycle (Ct) for two replicates per sample was used to calculate ΔCt. Relative quantification of these genes expressions was calculated with the 2-ΔΔCt method. 
Normal salivary gland samples were used as a calibrator.\nqRT-PCR primer sequences and amplicon sizes\nApoptosis tendency was estimated by two indexes: Anti-apoptotic index 1 (AI-1), calculated by the ratio between BCL-2/BAX expression and anti-apoptotic index 2 (AI-2), calculated by the ratio between BCL-2/CASP3. Samples with AI-1 or AI-2 higher than 1 were regarded as having higher anti-apoptotic activity than samples exhibiting an AI lower than 1.\n Immunohistochemistry Paraffin-embedded sections (4 μm) were dewaxed in xylene, hydrated with graded ethanol and endogenous peroxidase blocked in 1% hydrogen peroxidase for 15 min. Antigen retrieval was performed with citric acid, pH 6.0. The primary antibodies used were ki-67 (MIB-1) and p53 (DO7), both from DAKO (Carpinteria, CA, USA) and diluted 1:50. Primary antiserum incubation was performed for 30 min at room temperature and binding visualized using a polymer-based system (EnVision, Dako Corporation, Carpinteria, CA, USA) with diaminobenzidine (Sigma, St Louis, MO, USA) as chromogen. For each antibody, positive (squamous cell carcinoma with known reactivity) and negative controls in which the primary antibody was omitted were included. The sections were counterstained with hematoxylin, dehydrated and mounted.\nThe percentage of ki-67 positive nuclei was obtained by counting nuclear staining in 10 high power fields (400 × magnification) including the most positive areas. Neoplasms were divided into two groups with 5% or more positive nuclei being considered as high proliferative activity, or fewer than 5% positive nuclei considered low proliferative activity. p53 stained nuclei were counted in eight fields (400 × magnification); more than 5% of positive nuclei was considered positive [18].\nParaffin-embedded sections (4 μm) were dewaxed in xylene, hydrated with graded ethanol and endogenous peroxidase blocked in 1% hydrogen peroxidase for 15 min. Antigen retrieval was performed with citric acid, pH 6.0. 
The primary antibodies used were ki-67 (MIB-1) and p53 (DO7), both from DAKO (Carpinteria, CA, USA) and diluted 1:50. Primary antiserum incubation was performed for 30 min at room temperature and binding visualized using a polymer-based system (EnVision, Dako Corporation, Carpinteria, CA, USA) with diaminobenzidine (Sigma, St Louis, MO, USA) as chromogen. For each antibody, positive (squamous cell carcinoma with known reactivity) and negative controls in which the primary antibody was omitted were included. The sections were counterstained with hematoxylin, dehydrated and mounted.\nThe percentage of ki-67 positive nuclei was obtained by counting nuclear staining in 10 high power fields (400 × magnification) including the most positive areas. Neoplasms were divided into two groups with 5% or more positive nuclei being considered as high proliferative activity, or fewer than 5% positive nuclei considered low proliferative activity. p53 stained nuclei were counted in eight fields (400 × magnification); more than 5% of positive nuclei was considered positive [18].\n Statistical Analyses Mann-Whitney, Fisher Test and Spearman correlation tests were used when appropriate. P values < 0.05 were considered statistically significant. These tests were performed with BioEstat software (Belém, PA, Brazil), version 4.\nMann-Whitney, Fisher Test and Spearman correlation tests were used when appropriate. P values < 0.05 were considered statistically significant. These tests were performed with BioEstat software (Belém, PA, Brazil), version 4.", "Twenty seven salivary gland neoplasms were included in this study. Fresh tumour samples were obtained from patients who underwent surgical excision of salivary gland neoplasms. The study was approved by the local ethics committee. 
The diagnoses were reviewed and confirmed: 17 pleomorphic adenomas, one basal cell adenoma, one Warthin tumour, one mucinous cystadenoma, three polymorphous low grade adenocarcinomas, two low grade mucoepidermoid carcinomas, one adenoid cystic carcinoma and one cystadenocarcinoma. Six samples of normal salivary glands obtained from healthy volunteers undergoing surgery for non-neoplastic disease were used as controls.\nFor each sample, a portion of the lesion was stored in RNAHolder (BioAgency Biotecnologia, São Paulo, SP, Brazil) at -80°C, while another portion was fixed in 10% buffered formalin and paraffin embedded. All the samples underwent the same fixation and processing procedures.", "Total RNA was isolated from samples using Tri-Phasis Reagent (BioAgency, São Paulo, Brazil) and treated with DNase (Invitrogen Life Technologies, Carlsbad, CA, USA). cDNA was synthesized with Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Quantitative PCR analyses were carried out using 1x SYBR Green PCR Master Mix (Applied Biosystems, Warrington, CHS, UK). BCL-2, BAX, CASP3 and ACTB primers were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA) version 3.0. The primer sequences are specified in Table 1. Reactions were performed in duplicate and run on a Step One machine (Applied Biosystems, Foster City, CA, USA). The cycling parameters were 10 min denaturation at 95°C followed by 40 cycles at 95°C for 15 s and 56°C for 1 min. The cycling was followed by melting curve analysis to distinguish specificity of the PCR products. BCL-2, BAX and CASP3 expressions were normalized with actin (ACTB) as internal control. The average threshold cycle (Ct) for two replicates per sample was used to calculate ΔCt. Relative quantification of these genes expressions was calculated with the 2-ΔΔCt method. 
Normal salivary gland samples were used as a calibrator.\nqRT-PCR primer sequences and amplicon sizes\nApoptosis tendency was estimated by two indexes: Anti-apoptotic index 1 (AI-1), calculated by the ratio between BCL-2/BAX expression and anti-apoptotic index 2 (AI-2), calculated by the ratio between BCL-2/CASP3. Samples with AI-1 or AI-2 higher than 1 were regarded as having higher anti-apoptotic activity than samples exhibiting an AI lower than 1.", "Paraffin-embedded sections (4 μm) were dewaxed in xylene, hydrated with graded ethanol and endogenous peroxidase blocked in 1% hydrogen peroxidase for 15 min. Antigen retrieval was performed with citric acid, pH 6.0. The primary antibodies used were ki-67 (MIB-1) and p53 (DO7), both from DAKO (Carpinteria, CA, USA) and diluted 1:50. Primary antiserum incubation was performed for 30 min at room temperature and binding visualized using a polymer-based system (EnVision, Dako Corporation, Carpinteria, CA, USA) with diaminobenzidine (Sigma, St Louis, MO, USA) as chromogen. For each antibody, positive (squamous cell carcinoma with known reactivity) and negative controls in which the primary antibody was omitted were included. The sections were counterstained with hematoxylin, dehydrated and mounted.\nThe percentage of ki-67 positive nuclei was obtained by counting nuclear staining in 10 high power fields (400 × magnification) including the most positive areas. Neoplasms were divided into two groups with 5% or more positive nuclei being considered as high proliferative activity, or fewer than 5% positive nuclei considered low proliferative activity. p53 stained nuclei were counted in eight fields (400 × magnification); more than 5% of positive nuclei was considered positive [18].", "Mann-Whitney, Fisher Test and Spearman correlation tests were used when appropriate. P values < 0.05 were considered statistically significant. 
These tests were performed with BioEstat software (Belém, PA, Brazil), version 4.", " BAX, BCL-2 and CASP3 expression There was no difference in mRNA expression of BAX, BCL-2 and CASP3 between the malignant and benign salivary gland neoplasm groups. However, positive correlation was found using the Spearman test for all the following paired groups: BAX/BCL-2 (p = 0.010), BAX/CASP3 (p = 0.008), BCL-2/CASP3 (p < 0.0001). Overall, 78% (21/27) of the salivary neoplasm samples exhibited overexpression of BCL-2 mRNA compared to the expression in normal salivary glands, including 15 out of 17 pleomorphic adenomas. The expression of BAX and CASP3 in the salivary tumours, however, was higher than normal glands only in 52% (14/27) and 44% (12/27) of samples, respectively.\n Anti-apoptotic indexes Anti-apoptotic index results are displayed in Table 2 and illustrated in Figure 1. Eighty-five percent (n = 23) of all salivary neoplasm samples showed a higher AI-1 or AI-2 when compared with normal salivary glands (Figure 1). AI-1 and AI-2 did not show association with malignancy. p53 immunopositivity was associated with a statistically significant higher AI-1 (p = 0.004) (Figure 2). 
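For ties-free data, the Spearman rank correlation used for the paired gene-expression groups reduces to the classic 1 − 6Σd²/(n(n² − 1)) formula. A small illustrative implementation (in practice a statistics package that also reports the p value would be used, and tied values need the rank-averaging variant):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for ties-free data:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the difference between the ranks of xs[i] and ys[i].
    Tied values would require averaged ranks (not handled here)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))


# Perfectly monotone paired expression values give rho = 1.0:
rho = spearman_rho([0.5, 1.2, 2.8, 4.1], [0.9, 1.1, 3.0, 5.6])
```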
AI-2 was not associated with any of the investigated parameters.\nBenign and malignant tumours data regarding diagnosis, tumour size, p53 staining, cell proliferation index and anti-apoptotic indexes 1 and 2\na Tumour size 1 refers to tumours ≤ 2 cm and tumour size 2 are tumours > 2 cm. PA = pleomorphic adenoma, MC = mucinous cystadenoma, WT = Warthin tumour, BCA = basal cell adenoma, PLGA = polymorphous low grade adenocarcinoma, ACC = adenoid cystic carcinoma, CA = cystadenocarcinoma, MEC = mucoepidermoid carcinoma. #27 paraffin embedded tissue was small and was not used in immunohistochemistry\nAnti-apoptotic index 1 (black bars) and 2 (white bars) in the malignant and benign salivary gland neoplasms. Most of the samples exhibited an anti-apoptotic profile. All the pleomorphic adenoma samples exhibited more BCL-2 than CASP3 transcription. Samples #19, #21 and #22, which showed a decreased AI-1 and 2, were p53 negative. X-axis represents the normal salivary glands pool of samples included as reference sample in all the reactions. AI-1 = BCL-2/BAX, AI-2 = BCL-2/CASP3.\nParameters statistically associated with p53 positivity. (A) p53 immunopositivity was associated with a statistically significant higher anti-apoptotic index 1 (p = 0.004). (B) p53 immunostaining in a sample of pleomorphic adenoma (C) high proliferation index in a sample of polymorphous low grade adenocarcinoma. High proliferative activity was associated with p53 positivity. AI-1 = BCL-2/BAX (Original magnification 400×).\n p53 and cell proliferation index p53 and cell proliferation results are displayed in Table 2. The samples that were p53 positive in the immunohistochemistry also showed a high relative quantification of BCL-2 (p = 0.0003) and CASP3 mRNA (p = 0.0007). p53 positivity was also associated with high cellular proliferation index (p = 0.002) (Figure 2). 
In addition, the samples exhibiting a high cellular proliferation index were associated with high CASP3 transcriptional levels (p = 0.018). A high proliferation index was associated with malignancy (p = 0.001) as well.\n Tumour size Samples were divided into two groups: tumour size ≤ 2 cm and tumour size > 2 cm. Tumour size showed association with a high cellular proliferation index (p = 0.019), despite showing no association with the transcription of BCL-2, BAX or CASP3, or with the anti-apoptotic indexes.", "Very little is known about the apoptotic index of salivary gland neoplasms. We used two different anti-apoptotic indexes (AI-1 and AI-2) to estimate the apoptotic profile of these lesions. The higher these coefficients, the more likely an anti-apoptotic profile. In contrast, an index < 1 indicates an increase in BAX or CASP3 or a decrease in BCL-2 mRNA transcription, favoring apoptosis. It has been shown that Bcl-2 protein forms heterodimers with the Bax protein such that Bcl-2-Bax inhibits apoptosis, whereas Bax-Bax homodimers favor it [19]. Tumour growth depends on the balance between proliferation and apoptosis. In the present paper we demonstrated that most of the salivary gland neoplasm samples showed a higher AI-1 and AI-2 when compared to normal salivary glands, suggesting a predominance of anti-apoptotic behavior in neoplastic cells, which in turn contributes to neoplasia growth (Figure 1). 
This study is the first to demonstrate that salivary gland tumours present an anti-apoptotic transcriptional signature.\nIt was reported, using the 3'-end DNA labeling method (TUNEL), that in salivary gland neoplasms apoptosis is inversely associated with Bcl-2 expression but not related to Bax expression [11]. This result was strengthened by another TUNEL-based publication, which described an inverse association between apoptosis and the expression of Bcl-2 in adenoid cystic carcinomas [15]; in mucoepidermoid carcinomas, however, no such association was found [13]. In the present paper we have shown increased BCL-2 mRNA transcription, compared to normal salivary glands, in 78% of the salivary tumour samples. Considering only the pleomorphic adenoma samples, BCL-2 overexpression was even more frequent, corresponding to 88% of these tumours. This result supports the above-mentioned paper by Soini and colleagues (1998) [11], which pointed to a very low apoptotic index (0.01%) in pleomorphic adenomas. Our results also agree with another study that described Bcl-2 immunopositivity in 33/35 samples of pleomorphic adenomas investigated [14]. In addition, all the pleomorphic adenomas exhibited AI-1 and/or AI-2 higher than normal salivary glands (Figure 1). Altogether, the evidence points to Bcl-2 as an important factor in the pathogenesis of salivary gland neoplasms and as a possible future molecular target in the treatment of salivary gland tumours.\nWe did not analyze the other benign/malignant lesions as separate groups because, as they are rather unusual (e.g. adenoid cystic carcinoma, cystadenocarcinoma, mucinous cystadenoma), only a few fresh samples were included in the study. 
However, the expression profile of the two mucoepidermoid samples included in the study was noteworthy: one of them (#27) revealed an apoptotic tendency, whereas in previous immunohistochemistry-based publications these lesions did not show a high percentage of Bcl-2 positive samples [11,13].\nWe demonstrated an overall BCL-2 mRNA overexpression, as well as an increased AI-1 and AI-2, in the salivary tumours when compared to normal salivary glands, and in addition we found an association between increased tumour size and a high cellular proliferation index. Although the malignant and benign groups of samples did not differ in AI-1/AI-2, the malignant samples showed a statistically significantly higher cellular proliferation index. Taken together, these findings suggest that while both benign and malignant tumours tend to evade apoptosis, the malignant samples additionally tend to have higher cell proliferation activity, conferring a growth advantage.\np53 positive samples showed higher BCL-2 transcription levels than the negative ones, indicating an association of p53 immunopositivity with an increased AI-1 and a predominantly anti-apoptotic profile. It has been shown that p53 induces apoptosis by repressing the transcription of the anti-apoptotic gene BCL-2 and activating the transcription of the pro-apoptotic BAX [20]. Such apoptotic function can be inactivated by p53 mutations. Mutated p53 is usually more stable than the wild-type protein, leading to higher p53 levels and to immunohistochemical detection of the protein. Therefore, p53 immunoexpression in the samples analyzed may reflect loss of the apoptosis induction promoted by this protein, which may explain the increased anti-apoptotic activity found in these tumours [21]. According to the latest World Health Organization publication on Head and Neck tumours, the role of p53 in salivary gland neoplasms remains controversial [1]. 
In the present paper, we demonstrated an association between malignancy and a high proliferation index, and between proliferation and p53 positivity. This evidence suggests that p53 (clone DO7) positivity may be used as a marker in salivary tumours, as it is associated not only with a higher proliferation index but also with an anti-apoptotic profile, both contributing to tumour growth. The apparent value of p53 staining as a marker in salivary neoplasms reinforces previous findings of an association between higher p53 expression in salivary gland tumours and poor survival [17].", "In conclusion, we demonstrate an overall anti-apoptotic transcriptional signature in salivary gland neoplasms and its association with p53 immunoexpression. In addition, the higher proliferative activity found in the malignant tumours suggests that cell proliferation is advantageous to the growth of malignant salivary tumours. We further demonstrate that tumour size is associated with cell proliferation, but not with the transcription of apoptotic genes." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[ "Salivary gland neoplasms", "Pleomorphic adenoma", "Apoptosis", "Bcl-2", "Bax", "Caspase 3", "Transcription", "p53", "Cell proliferation", "Tumour size" ]
Background: Salivary gland tumours have an annual global incidence between 0.4 and 13.5 cases per 100 000 individuals [1]. High proliferative activity, presence of residual tumour and advanced tumour stage were shown to be strong negative predictors of survival in salivary gland neoplasms [2]. Development of targeted therapy calls for a better understanding of their molecular and cellular biology [3]. Apoptosis is a highly regulated active process, characterized by cell shrinkage, chromatin condensation and DNA fragmentation promoted by endonucleases. Induction of apoptosis is a normal defense against the loss of growth control that follows DNA mutations. Apoptosis is frequently deregulated in human cancers, making it a suitable target for anticancer therapy [4]. The B-cell lymphoma (BCL-2) family comprises different regulators involved in apoptosis. BCL-2 is an important proto-oncogene located at chromosome 18q21 [5]. It was the first gene implicated in the regulation of apoptosis. Its protein is able to stop programmed cell death (apoptosis), facilitating cell survival independently of promoting cell division [6]. BCL-2 is thought to be involved in resistance to conventional cancer treatment, and its increased expression has been implicated in a number of cancers [4]. Apparently, many cancers depend on the anti-apoptotic activity of BCL-2 for tumour initiation and maintenance [7]. BAX (Bcl-2-associated protein X) is the most characteristic death-promoting member of the BCL-2 family. The translocation of Bax protein from the cytosol to the mitochondria triggers the activation of the caspase cascade, leading to death [8]. Caspase 3 (CASP3) was first described in 1995 and, once activated, is considered to be responsible for the actual demolition of the cell during apoptosis [9,10]. In salivary neoplasms, apoptosis has been investigated almost exclusively by means of immunohistochemistry. 
Bcl-2 and Bax proteins are expressed in most of the salivary gland neoplasms investigated, but Bcl-2 positivity was found in a lower percentage of mucoepidermoid carcinomas [11-15]. In studies using TUNEL in salivary gland neoplasms, apoptotic activity was inversely associated with Bcl-2 immunoexpression [11,15]. In salivary gland carcinomas, TUNEL staining was associated with a poor prognosis, being correlated with p53 and ki-67 staining [16]. As the transcription of apoptosis related genes could help to elucidate the pathogenesis of tumours, we propose to investigate the quantitative expression of BCL-2 (anti-apoptotic), BAX and Caspase 3 (pro-apoptotic genes) using qPCR. As tumour size, high proliferative activity and p53 staining are associated with a poor prognosis in salivary tumour patients [2,17], we tested the association of these parameters with the transcription of the apoptotic/anti-apoptotic genes. Methods: Samples Twenty seven salivary gland neoplasms were included in this study. Fresh tumour samples were obtained from patients who underwent surgical excision of salivary gland neoplasms. The study was approved by the local ethics committee. The diagnoses were reviewed and confirmed: 17 pleomorphic adenomas, one basal cell adenoma, one Warthin tumour, one mucinous cystadenoma, three polymorphous low grade adenocarcinomas, two low grade mucoepidermoid carcinomas, one adenoid cystic carcinoma and one cystadenocarcinoma. Six samples of normal salivary glands obtained from healthy volunteers undergoing surgery for non-neoplastic disease were used as controls. For each sample, a portion of the lesion was stored in RNAHolder (BioAgency Biotecnologia, São Paulo, SP, Brazil) at -80°C, while another portion was fixed in 10% buffered formalin and paraffin embedded. All the samples underwent the same fixation and processing procedures. Quantitative reverse transcriptase PCR (qRT-PCR) Total RNA was isolated from samples using Tri-Phasis Reagent (BioAgency, São Paulo, Brazil) and treated with DNase (Invitrogen Life Technologies, Carlsbad, CA, USA). cDNA was synthesized with the Superscript First-Strand Synthesis System kit (Invitrogen Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions. Quantitative PCR analyses were carried out using 1x SYBR Green PCR Master Mix (Applied Biosystems, Warrington, CHS, UK). BCL-2, BAX, CASP3 and ACTB primers were designed using Primer Express software (Applied Biosystems, Foster City, CA, USA) version 3.0. The primer sequences are specified in Table 1. Reactions were performed in duplicate and run on a Step One machine (Applied Biosystems, Foster City, CA, USA). The cycling parameters were 10 min denaturation at 95°C followed by 40 cycles at 95°C for 15 s and 56°C for 1 min. Cycling was followed by melting curve analysis to verify the specificity of the PCR products. BCL-2, BAX and CASP3 expression was normalized to actin (ACTB) as an internal control. 
The average threshold cycle (Ct) of the two replicates per sample was used to calculate ΔCt. Relative quantification of the expression of these genes was calculated with the 2-ΔΔCt method. Normal salivary gland samples were used as a calibrator. qRT-PCR primer sequences and amplicon sizes Apoptosis tendency was estimated by two indexes: anti-apoptotic index 1 (AI-1), calculated as the ratio of BCL-2 to BAX expression, and anti-apoptotic index 2 (AI-2), calculated as the ratio of BCL-2 to CASP3 expression. Samples with AI-1 or AI-2 higher than 1 were regarded as having higher anti-apoptotic activity than samples exhibiting an AI lower than 1. Immunohistochemistry Paraffin-embedded sections (4 μm) were dewaxed in xylene, hydrated with graded ethanol and endogenous peroxidase blocked with 1% hydrogen peroxide for 15 min. Antigen retrieval was performed with citric acid, pH 6.0. The primary antibodies used were ki-67 (MIB-1) and p53 (DO7), both from DAKO (Carpinteria, CA, USA) and diluted 1:50. Primary antiserum incubation was performed for 30 min at room temperature and binding was visualized using a polymer-based system (EnVision, Dako Corporation, Carpinteria, CA, USA) with diaminobenzidine (Sigma, St Louis, MO, USA) as chromogen. For each antibody, positive controls (squamous cell carcinoma with known reactivity) and negative controls, in which the primary antibody was omitted, were included. The sections were counterstained with hematoxylin, dehydrated and mounted. The percentage of ki-67 positive nuclei was obtained by counting nuclear staining in 10 high power fields (400 × magnification) including the most positive areas. Neoplasms were divided into two groups: 5% or more positive nuclei was considered high proliferative activity, and fewer than 5% positive nuclei low proliferative activity. p53 stained nuclei were counted in eight fields (400 × magnification); more than 5% positive nuclei was considered positive [18]. Statistical Analyses Mann-Whitney, Fisher Test and Spearman correlation tests were used when appropriate. P values < 0.05 were considered statistically significant. These tests were performed with BioEstat software (Belém, PA, Brazil), version 4. Results: BAX, BCL-2 and CASP3 expression There was no difference in mRNA expression of BAX, BCL-2 and CASP3 between the malignant and benign salivary gland neoplasm groups. However, positive correlation was found using the Spearman test for all the following paired groups: BAX/BCL-2 (p = 0.010), BAX/CASP3 (p = 0.008), BCL-2/CASP3 (p < 0.0001). Overall, 78% (21/27) of the salivary neoplasm samples exhibited overexpression of BCL-2 mRNA compared to the expression in normal salivary glands, including 15 out of 17 pleomorphic adenomas. The expression of BAX and CASP3 in the salivary tumours, however, was higher than normal glands only in 52% (14/27) and 44% (12/27) of samples, respectively. Anti-apoptotic indexes Anti-apoptotic index results are displayed in Table 2 and illustrated in Figure 1. Eighty-five percent (n = 23) of all salivary neoplasm samples showed a higher AI-1 or AI-2 when compared with normal salivary glands (Figure 1). AI-1 and AI-2 did not show association with malignancy. p53 immunopositivity was associated with a statistically significant higher AI-1 (p = 0.004) (Figure 2). AI-2 was not associated with any of the investigated parameters. 
Benign and malignant tumour data regarding diagnosis, tumour size, p53 staining, cell proliferation index and anti-apoptotic indexes 1 and 2. a Tumour size 1 refers to tumours ≤ 2 cm and tumour size 2 to tumours > 2 cm. PA = pleomorphic adenoma, MC = mucinous cystadenoma, WT = Warthin tumour, BCA = basal cell adenoma, PLGA = polymorphous low grade adenocarcinoma, ACC = adenoid cystic carcinoma, CA = cystadenocarcinoma, MEC = mucoepidermoid carcinoma. The #27 paraffin-embedded tissue was small and was not used in immunohistochemistry. Anti-apoptotic index 1 (black bars) and 2 (white bars) in the malignant and benign salivary gland neoplasms. Most of the samples exhibited an anti-apoptotic profile. All the pleomorphic adenoma samples exhibited more BCL-2 than CASP3 transcription. Samples #19, #21 and #22, which showed decreased AI-1 and AI-2, were p53 negative. The X-axis represents the pool of normal salivary gland samples included as the reference sample in all reactions. AI-1 = BCL-2/BAX, AI-2 = BCL-2/CASP3. Parameters statistically associated with p53 positivity. (A) p53 immunopositivity was associated with a statistically significantly higher anti-apoptotic index 1 (p = 0.004). (B) p53 immunostaining in a sample of pleomorphic adenoma. (C) High proliferation index in a sample of polymorphous low grade adenocarcinoma. High proliferative activity was associated with p53 positivity. AI-1 = BCL-2/BAX (original magnification 400×). 
p53 and cell proliferation index p53 and cell proliferation results are displayed in Table 2. The samples that were p53 positive by immunohistochemistry also showed high relative quantification of BCL-2 (p = 0.0003) and CASP3 mRNA (p = 0.0007). p53 positivity was also associated with a high cellular proliferation index (p = 0.002) (Figure 2). In addition, the samples exhibiting a high cellular proliferation index were associated with high CASP3 transcriptional levels (p = 0.018). 
High proliferation index was associated with malignancy (p = 0.001) as well. Tumour size Samples were divided into two groups: tumour size ≤ 2 cm and tumour size > 2 cm. Tumour size showed an association with a high cellular proliferation index (p = 0.019), despite showing no association with the transcription of BCL-2, BAX or CASP3, or with the anti-apoptotic indexes. Discussion: Very little is known about the apoptotic index of salivary gland neoplasms. We used two different anti-apoptotic indexes (AI-1 and AI-2) to estimate the apoptotic profile of these lesions. The higher these coefficients, the more likely an anti-apoptotic profile. In contrast, an index < 1 indicates an increase in BAX or CASP3 or a decrease in BCL-2 mRNA transcription, favoring apoptosis. It has been shown that Bcl-2 protein forms heterodimers with the Bax protein such that Bcl-2-Bax inhibits apoptosis, whereas Bax-Bax homodimers favor it [19]. Tumour growth depends on the balance between proliferation and apoptosis. In the present paper we demonstrated that most of the salivary gland neoplasm samples showed a higher AI-1 and AI-2 when compared with normal salivary glands, suggesting a predominance of anti-apoptotic behavior in neoplastic cells, which in turn contributes to neoplasia growth (Figure 1). This study is the first to demonstrate that salivary gland tumours present an anti-apoptotic transcriptional signature. 
It has been reported, using the 3'-end DNA labeling (TUNEL) method, that in salivary gland neoplasms apoptosis is inversely associated with Bcl-2 expression but not related to Bax expression [11]. This result was strengthened by another publication using the TUNEL method, which described an inverse association between apoptosis and the expression of Bcl-2 in adenoid cystic carcinomas [15]. In mucoepidermoid carcinomas, however, such an association did not exist [13]. In the present paper we have shown increased BCL-2 mRNA transcription in the salivary tumours compared with normal salivary glands in 78% of the samples. If we consider only the pleomorphic adenoma samples, BCL-2 overexpression was even higher, corresponding to 88% of these tumours. This result supports the above-mentioned paper by Soini and colleagues (1998) [11], which pointed to a very low apoptotic index (0.01%) in pleomorphic adenomas. Our results are in agreement with another study that described Bcl-2 immunopositivity in 33/35 samples of pleomorphic adenomas investigated [14]. Also, all the pleomorphic adenomas exhibited AI-1 and/or AI-2 higher than normal salivary glands (Figure 1). Altogether, the evidence points to Bcl-2 as an important factor in the pathogenesis of salivary gland neoplasms and as a possible molecular target in future salivary gland tumour treatment. We did not analyze other benign/malignant lesions in separate groups because, as they are rather unusual (e.g. adenoid cystic carcinoma, cystadenocarcinoma, mucinous cystadenoma), only a few fresh samples were included in the study. However, the expression profile of the two mucoepidermoid samples included in the study was unique, as one of them (#27) revealed an apoptotic tendency, and in previous immunohistochemistry-based publications these lesions did not reveal a high percentage of Bcl-2-positive samples [11,13]. 
We demonstrated an overall BCL-2 mRNA overexpression, as well as increased AI-1 and AI-2, in the salivary tumours compared with normal salivary glands, and in addition we found an association between increased tumour size and a high cellular proliferation index. Although the malignant and benign groups of samples did not differ in AI-1/AI-2, the malignant samples showed a statistically significantly higher cellular proliferation index. Taking these findings together, it seems that while both benign and malignant tumours tend to evade apoptosis, the malignant samples in addition tend to have higher cell proliferation activity, guaranteeing a growth advantage. p53-positive samples showed higher BCL-2 transcription levels than the negative ones, indicating an association of p53 immunopositivity with an increased AI-1 and a predominantly anti-apoptotic profile. It has been shown that p53 induces apoptosis by repressing the transcription of the anti-apoptotic gene BCL-2 and activating the transcription of the apoptotic gene BAX [20]. Such apoptotic function could be inactivated by p53 mutations. Mutated p53 is usually more stable than the wild-type protein, leading to higher levels of p53 and to immunohistochemical detection of the protein. Therefore, p53 immunoexpression in the samples analyzed may reflect loss of the apoptosis induction promoted by this protein, which may explain the increased anti-apoptotic activity found in these tumours [21]. According to the last publication of the World Health Organization on Head and Neck tumours, the role of p53 in salivary gland neoplasms is an issue of controversy [1]. In the present paper, we demonstrated an association between malignancy and a high proliferation index and between proliferation and p53 positivity. 
This evidence suggests that p53 (clone DO7) positivity may be used as a marker in salivary tumours, as it is associated not only with a higher proliferation index but also with an anti-apoptotic profile, both contributing to tumour growth. This apparent importance of p53 staining as a potentially useful marker in salivary neoplasms reinforces previous findings of an association between higher p53 expression in salivary gland tumours and poor survival [17]. Conclusions: In conclusion, we demonstrate an overall anti-apoptotic transcriptional signature in salivary gland neoplasms and an association of it with p53 immunoexpression. In addition, the higher proliferative activity found in the malignant tumours suggests that cell proliferation is advantageous to the growth of malignant salivary tumours. We further demonstrate that tumour size is associated with cell proliferation, but not with the transcription of apoptotic genes.
Background: Development of accurate therapeutic approaches to salivary gland neoplasms depends on better understanding of their molecular pathogenesis. Tumour growth is regulated by the balance between proliferation and apoptosis. Few studies have investigated apoptosis in salivary tumours relying almost exclusively on immunohistochemistry or TUNEL assay. Furthermore, there is no information regarding the mRNA expression profile of apoptotic genes in salivary tumors. Our objective was to investigate the quantitative expression of BCL-2 (anti-apoptotic), BAX and Caspase3 (pro-apoptotic genes) mRNAs in salivary gland neoplasms and examine the association of these data with tumour size, proliferative activity and p53 staining (parameters associated with a poor prognosis of salivary tumours patients). Methods: We investigated the apoptotic profile of salivary neoplasms in twenty fresh samples of benign and seven samples of malignant salivary neoplasms, using quantitative real time PCR. We further assessed p53 and ki-67 immunopositivity and obtained clinical tumour size data. Results: We demonstrated that BCL-2 mRNA is overexpressed in salivary neoplasms, leading to an overall anti-apoptotic profile. We also found an association between the anti-apoptotic index (BCL-2/BAX) with p53 immunoexpression. A higher proliferative activity was found in the malignant tumours. In addition, tumour size was associated with cell proliferation but not with the transcription of apoptotic genes. Conclusions: In conclusion, we show an anti-apoptotic gene expression profile in salivary neoplasms in association with p53 staining, but independent of cell proliferation and tumour size.
Background: Salivary gland tumours have an annual global incidence between 0.4 and 13.5 cases per 100 000 individuals [1]. High proliferative activity, presence of residual tumour and advanced tumour stage were shown to be strong negative predictors of survival in salivary gland neoplasms [2]. Development of targeted therapy calls for a better understanding of their molecular and cellular biology [3]. Apoptosis is a highly regulated active process, characterized by cell shrinkage, chromatin condensation and DNA fragmentation promoted by endonucleases. Induction of apoptosis is a normal defense against the loss of growth control that follows DNA mutations. Apoptosis is frequently deregulated in human cancers, making it a suitable target for anticancer therapy [4]. The B-cell lymphoma (BCL-2) family comprises different regulators involved in apoptosis. BCL-2 is an important proto-oncogene located at chromosome 18q21 [5]. It was the first gene implicated in the regulation of apoptosis. Its protein is able to stop programmed cell death (apoptosis), facilitating cell survival independently of promoting cell division [6]. BCL-2 is thought to be involved in resistance to conventional cancer treatment, and its increased expression has been implicated in a number of cancers [4]. Apparently, many cancers depend on the anti-apoptotic activity of BCL-2 for tumour initiation and maintenance [7]. BAX (Bcl-2-associated protein X) is the most characteristic death-promoting member of the BCL-2 family. The translocation of Bax protein from the cytosol to the mitochondria triggers the activation of the caspase cascade, leading to death [8]. Caspase 3 (CASP3) was first described in 1995 and, once activated, is considered to be responsible for the actual demolition of the cell during apoptosis [9,10]. In salivary neoplasms, apoptosis has been investigated almost exclusively by means of immunohistochemistry. 
Bcl-2 and Bax proteins are expressed in most of the salivary gland neoplasms investigated, but Bcl-2 positivity was found in a lower percentage of mucoepidermoid carcinomas [11-15]. In TUNEL studies of salivary gland neoplasms, apoptotic activity was inversely associated with Bcl-2 immunoexpression [11,15]. In salivary gland carcinomas, TUNEL positivity was associated with a poor prognosis, being correlated with p53 and ki-67 staining [16]. As the transcription of apoptosis-related genes could help to elucidate the pathogenesis of tumours, we propose to investigate the quantitative expression of BCL-2 (anti-apoptotic) and of BAX and Caspase3 (pro-apoptotic genes) using qPCR. As tumour size, high proliferative activity and p53 staining are associated with a poor prognosis in patients with salivary tumours [2,17], we tested the association of these parameters with the transcription of the apoptotic/anti-apoptotic genes. Conclusions: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/12/61/prepub
5,944
281
[ 507, 158, 336, 243, 42, 1357, 135, 371, 101, 57, 913, 70 ]
13
[ "bcl", "samples", "salivary", "ai", "p53", "apoptotic", "bax", "index", "anti", "anti apoptotic" ]
[ "apoptosis 10 salivary", "activity bcl tumour", "bcl tumour initiation", "gland neoplasms apoptosis", "salivary neoplasms apoptosis" ]
null
[CONTENT] Salivary gland neoplasms | Pleomorphic adenoma | Apoptosis | Bcl-2 | Bax | Caspase 3 | Transcription | p53 | Cell proliferation | Tumour size [SUMMARY]
null
[CONTENT] Apoptosis | Caspase 3 | Cell Proliferation | Gene Expression Profiling | Humans | Immunohistochemistry | Proto-Oncogene Proteins c-bcl-2 | RNA, Messenger | Reverse Transcriptase Polymerase Chain Reaction | Salivary Gland Neoplasms | Tumor Suppressor Protein p53 | bcl-2-Associated X Protein [SUMMARY]
null
[CONTENT] apoptosis 10 salivary | activity bcl tumour | bcl tumour initiation | gland neoplasms apoptosis | salivary neoplasms apoptosis [SUMMARY]
null
[CONTENT] bcl | samples | salivary | ai | p53 | apoptotic | bax | index | anti | anti apoptotic [SUMMARY]
null
[CONTENT] apoptosis | bcl | salivary | cancers | death | apoptotic | cell | protein | associated | salivary gland [SUMMARY]
[CONTENT] usa | nuclei | pcr | positive | positive nuclei | samples | ai | min | considered | performed [SUMMARY]
null
[CONTENT] demonstrate | cell proliferation | malignant | tumours | proliferation | gland neoplasms association p53 | p53 immunoexpression addition higher | p53 immunoexpression addition | transcriptional signature salivary | transcriptional signature salivary gland [SUMMARY]
[CONTENT] bcl | salivary | ai | samples | apoptotic | p53 | proliferation | index | tumour | bax [SUMMARY]
[CONTENT] ||| ||| ||| ||| BCL-2 | BAX | Caspase3 ||| [SUMMARY]
[CONTENT] twenty | seven | PCR ||| ki-67 [SUMMARY]
null
[CONTENT] [SUMMARY]
[CONTENT] ||| ||| ||| ||| BCL-2 | BAX | Caspase3 ||| ||| twenty | seven | PCR ||| ki-67 ||| ||| BCL-2 ||| BCL-2 ||| ||| ||| [SUMMARY]
Dabrafenib plus trametinib versus dabrafenib monotherapy in patients with metastatic BRAF V600E/K-mutant melanoma: long-term survival and safety analysis of a phase 3 study.
28475671
Previous analysis of COMBI-d (NCT01584648) demonstrated improved progression-free survival (PFS) and overall survival (OS) with combination dabrafenib and trametinib versus dabrafenib monotherapy in BRAF V600E/K-mutant metastatic melanoma. This study was continued to assess 3-year landmark efficacy and safety after ≥36-month follow-up for all living patients.
BACKGROUND
This double-blind, phase 3 study enrolled previously untreated patients with BRAF V600E/K-mutant unresectable stage IIIC or stage IV melanoma. Patients were randomized to receive dabrafenib (150 mg twice daily) plus trametinib (2 mg once daily) or dabrafenib plus placebo. The primary endpoint was PFS; secondary endpoints were OS, overall response, duration of response, safety, and pharmacokinetics.
PATIENTS AND METHODS
Between 4 May and 30 November 2012, a total of 423 of 947 screened patients were randomly assigned to receive dabrafenib plus trametinib (n = 211) or dabrafenib monotherapy (n = 212). At data cut-off (15 February 2016), outcomes remained superior with the combination: 3-year PFS was 22% with dabrafenib plus trametinib versus 12% with monotherapy, and 3-year OS was 44% versus 32%, respectively. Twenty-five patients receiving monotherapy crossed over to combination therapy, with continued follow-up under the monotherapy arm (per intent-to-treat principle). Of combination-arm patients alive at 3 years, 58% remained on dabrafenib plus trametinib. Three-year OS with the combination reached 62% in the most favourable subgroup (normal lactate dehydrogenase and <3 organ sites with metastasis) versus only 25% in the unfavourable subgroup (elevated lactate dehydrogenase). The dabrafenib plus trametinib safety profile was consistent with previous clinical trial observations, and no new safety signals were detected with long-term use.
RESULTS
These data demonstrate that durable (≥3 years) survival is achievable with dabrafenib plus trametinib in patients with BRAF V600-mutant metastatic melanoma and support long-term first-line use of the combination in this setting.
CONCLUSIONS
[ "Antineoplastic Combined Chemotherapy Protocols", "Biomarkers, Tumor", "Disease Progression", "Disease-Free Survival", "Double-Blind Method", "Drug Administration Schedule", "Humans", "Imidazoles", "Kaplan-Meier Estimate", "Melanoma", "Mutation", "Oximes", "Protein Kinase Inhibitors", "Proto-Oncogene Proteins B-raf", "Pyridones", "Pyrimidinones", "Risk Factors", "Skin Neoplasms", "Time Factors", "Treatment Outcome" ]
5834102
Introduction
Before recent therapeutic advances, the prognosis for patients with metastatic melanoma was poor, with a 5-year survival of ∼6% and a median overall survival (OS) of 7.5 months [1]. The anti-cytotoxic T-lymphocyte-associated protein 4 (anti-CTLA-4) therapy ipilimumab was the first agent to show durable clinical benefit lasting ≥5 years in a subset of patients within molecularly unselected advanced melanoma populations [2]. More recently, BRAF and MEK inhibitor (BRAFi/MEKi) combinations and anti-programmed death-1 (anti-PD-1) checkpoint-inhibitor immunotherapy regimens demonstrated significant improvements in clinical outcomes in phase 3 trials of patients with metastatic melanoma; however, extended follow-up in these studies has been limited to ≤2 years [3–9]. Targeted therapies have been purported to be associated with rapid deterioration and death following development of secondary resistance; however, evidence from long-term, large randomized studies is lacking. With multiple treatments now available for BRAF V600-mutant melanoma, a better understanding of the proportion and characteristics of patients who can derive durable benefit and maintain tolerability with long-term use of current therapies is needed for optimizing treatment. Combination dabrafenib and trametinib (D + T) demonstrated improved progression-free survival (PFS) and OS over BRAFi monotherapy in randomized phase 2 and 3 trials in patients with BRAF V600E/K-mutant stage IIIC unresectable or stage IV metastatic melanoma [3, 4, 10–13]. The D + T safety profile has been consistent across these studies, in which the combination has been associated with a reduction in hyperproliferative skin lesions [e.g. squamous cell carcinoma (SCC), keratoacanthoma (KA)] compared with BRAFi monotherapy, while the frequency and severity of pyrexia appear higher [3, 4, 10]. 
In the most recent analysis of COMBI-d, a randomized, double-blind, phase 3 trial of D + T versus dabrafenib monotherapy (dabrafenib plus placebo), with a median follow-up of 20.0 months for the D + T arm and 16.0 months for the monotherapy arm, median PFS was 11.0 versus 8.8 months [HR, 0.67; 95% confidence interval (CI), 0.53–0.84; P = 0.0004], median OS was 25.1 versus 18.7 months (HR, 0.71; 95% CI, 0.55–0.92; P = 0.0107), and 2-year OS was 51% versus 42% [3]. These findings confirmed results from the primary analysis of COMBI-d [12] and were consistent with outcomes observed in the randomized phase 3 COMBI-v study of D + T versus vemurafenib [4]. The longest follow-up to date for D + T in a randomized study (median 45.6 months) was reported for the phase 2 BRF113220 study (part C) evaluating D + T (n = 54) versus dabrafenib monotherapy (n = 54) [11], in which D + T-treated patients had a 2- and 3-year PFS of 25% and 21%, respectively, and a 2- and 3-year OS of 51% and 38%, respectively. Pooled data across these trials (median follow-up of 20.0 months) showed that normal baseline serum lactate dehydrogenase (LDH) and <3 organ sites containing metastasis were the factors most predictive of durable outcomes; patients with both of these characteristics had a 2-year PFS of 46% and a 2-year OS of 75% [14]. Here, we report an updated 3-year landmark analysis for the phase 3 COMBI-d trial, including updated PFS, OS, best response and safety analyses.
Methods
The COMBI-d study (protocol previously published [3] and further described in the supplementary data, available at Annals of Oncology online) was continued after prior primary and OS analyses [3, 12] to provide an updated 3-year landmark analysis of long-term outcomes. Crossover was permitted following the previous OS analysis by patient/physician discretion on the intent-to-treat principle, by which any crossover benefit was applied to the randomized therapy arm estimates. Kaplan–Meier estimations of 2- and 3-year PFS and OS were carried out to describe long-term outcomes. Influences of prognostic factors on patient-derived benefit were explored with descriptive subgroup stratification by baseline factors previously identified as being predictive of outcomes in patients receiving D + T [14].
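Landmark rates such as the 2- and 3-year PFS and OS figures reported below come from Kaplan–Meier estimation. As a sketch of the mechanics only (not the trial's actual analysis, which used stratified models on real patient-level data), here is a minimal stdlib-only product-limit estimator with invented follow-up times:

```python
# Minimal Kaplan-Meier estimator for landmark survival rates.
# Times and event flags are invented illustrations, not COMBI-d data.

def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = event, 0 = censored.
    Returns the survival step function as a list of (time, probability)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, steps = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk  # product-limit update
            steps.append((t, surv))
        n_at_risk -= at_t
        i += at_t
    return steps

def landmark(steps, t):
    """Survival estimate at time t (last step at or before t)."""
    s = 1.0
    for tt, p in steps:
        if tt <= t:
            s = p
    return s

curve = kaplan_meier([6, 12, 18, 24, 40, 40], [1, 1, 0, 1, 0, 0])
print(round(landmark(curve, 36), 3))  # -> 0.444
```

Reading the step function off at 36 months mirrors how the 3-year rates are reported; confidence intervals (e.g. via Greenwood's formula) and stratification are omitted from this sketch.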
Results
Baseline characteristics were well balanced across 423 patients randomly assigned to receive D + T (n = 211) or dabrafenib monotherapy (n = 212; supplementary Figure S1 and Table S1, available at Annals of Oncology online). At data cut-off, 15 February 2016, patients who were alive had ≥36 months of follow-up from time of randomization. Forty (19%) D + T-arm patients versus 6 (3%) monotherapy-arm patients remained on randomized treatment. At data cut-off, 3-year PFS was 22% for the D + T arm and 12% for the monotherapy arm [HR, 0.71 (95% CI, 0.57–0.88)] (Figure 1A), and 3-year OS was 44% and 32%, respectively [HR, 0.75 (95% CI, 0.58–0.96)] (Figure 2A). Notably, 25 (12%) patients in the dabrafenib monotherapy arm crossed over to D + T, of whom 6 (24%) had progressed on monotherapy before crossover. Survival outcomes in these crossover patients, all of whom remained on D + T as of data cut-off, continued to be followed up under the monotherapy arm. Of combination-arm patients who were progression free (n = 31) and alive (n = 76) at 3 years, 28 (90%) and 44 (58%) remained on D + T, respectively.

Figure 1. Progression-free survival (PFS) in the dabrafenib and trametinib (D + T) and dabrafenib monotherapy [D + placebo (Pbo)] arms in (A) the intent-to-treat population and patients with (B) normal baseline lactate dehydrogenase (≤upper limit of normal), (C) normal baseline lactate dehydrogenase and <3 organ sites with metastasis, and (D) elevated baseline lactate dehydrogenase (>upper limit of normal). CI, confidence interval; HR, hazard ratio. aIncludes 25 patients who crossed over from monotherapy to the combination. bOf D + T patients who were progression free at 3 years, 28 (90%) remained on D + T.
Figure 2. Overall survival (OS) in the dabrafenib and trametinib (D + T) and dabrafenib monotherapy [D + placebo (Pbo)] arms in (A) the intent-to-treat population and patients with (B) normal baseline lactate dehydrogenase (≤upper limit of normal), (C) normal baseline lactate dehydrogenase and <3 organ sites with metastasis, and (D) elevated baseline lactate dehydrogenase (>upper limit of normal). CI, confidence interval; HR, hazard ratio. aIncludes 25 patients who crossed over from monotherapy to the combination. bOf patients in the D + T arm alive at 3 years, 44 (58%) remained on D + T.

As expected given the progression rate in each arm, more monotherapy-arm patients than D + T-arm patients received post-progression systemic therapy [130/211 (62%) versus 101/209 (48%); supplementary Table S2, available at Annals of Oncology online]. In both arms, immunotherapy was the most common subsequent anticancer therapy (56% in each arm); ipilimumab was the most common immunotherapy (41% versus 50%), with fewer patients receiving nivolumab (7% versus 5%) or pembrolizumab (13% versus 11%). Long-term PFS and OS consistently favoured D + T over monotherapy, regardless of baseline prognostic factors. Three-year PFS rates in patients with normal baseline LDH levels [≤upper limit of normal (ULN), n = 273/423 (65%)] were 27% in the D + T arm versus 17% in the monotherapy arm [HR, 0.70 (95% CI, 0.53–0.93)] (Figure 1B), and 3-year OS rates were 54% versus 41%, respectively [HR, 0.74 (95% CI, 0.53–1.03)] (Figure 2B). Of 133 D + T-arm patients with LDH ≤ ULN, 61 (46%) were alive at 3 years and 34 (26%) remained on D + T.
The greatest clinical benefit with D + T was observed in patients with LDH ≤ ULN and <3 organ sites with metastasis at baseline [n = 172/423 (41%)], with 3-year PFS rates of 38% in the combination arm versus 16% in the monotherapy arm [HR, 0.53 (95% CI, 0.38–0.76)] (Figure 1C), and 3-year OS rates of 62% versus 45%, respectively [HR, 0.63 (95% CI, 0.41–0.99)] (Figure 2C). Of 76 combination-arm patients with baseline LDH ≤ ULN and <3 organ sites with metastasis, 37 (49%) were alive at 3 years and 23 (30%) remained on D + T. In patients with baseline LDH > ULN [n = 147/423 (35%)], 3-year PFS rates were 13% in the D + T arm and 4% in the monotherapy arm [HR, 0.61 (95% CI, 0.43–0.88)] (Figure 1D), and 3-year OS rates were 25% versus 14% [HR, 0.61 (95% CI, 0.41–0.89)] (Figure 2D). Of 76 D + T-arm patients with LDH > ULN, 15 (20%) were alive at 3 years and 10 (13%) remained on D + T. The confirmed response rates per Response Evaluation Criteria In Solid Tumors (RECIST) were 68% in combination-arm patients versus 55% in monotherapy patients (Table 1), with a complete response (CR) rate of 18% versus 15%, respectively. Median duration of response was 12.0 (95% CI, 9.3–17.1) versus 10.6 (95% CI, 8.3–12.9) months.

Table 1. Confirmed RECIST response

                                           Dabrafenib plus trametinib   Dabrafenib plus placebo
                                           (n = 211)                    (n = 212)
RECIST response, n (%)
  Complete response (CR)                   38 (18)                      31 (15)
  Partial response (PR)                    106 (50)                     85 (40)
  Stable disease                           51 (24)                      68 (32)
  Progressive disease                      12 (6)                       18 (8)
  Not evaluable                            4 (2)                        10 (5)
Response rate (CR + PR), n (%) [95% CI]    144 (68) [61.5–74.5]         116 (55) [47.8–61.5]
Duration of response                       n = 144                      n = 116
  Progressed or died, n (%)                100 (69)                     84 (72)
  Median (95% CI), months                  12.0 (9.3–17.1)              10.6 (8.3–12.9)

CI, confidence interval; RECIST, Response Evaluation Criteria In Solid Tumors.
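The confirmed response rates in Table 1 are binomial proportions with 95% confidence intervals. As a hedged sketch of where such intervals come from (assumption: the trial's reported CIs are likely exact Clopper–Pearson intervals, so the simpler Wald normal approximation below reproduces them only approximately):

```python
import math

def response_rate_ci(responders, n, z=1.96):
    """Response rate with a normal-approximation (Wald) 95% CI.

    Illustrative only: registration trials typically report exact
    (Clopper-Pearson) intervals, which can differ slightly from this
    approximation, especially for small n or extreme proportions.
    """
    p = responders / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a binomial proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# D + T arm: 144 confirmed responders of 211 patients (~68%)
rate, lo, hi = response_rate_ci(144, 211)
```

For the combination arm this gives roughly 62% to 75%, in line with the reported exact interval of 61.5% to 74.5%.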
With a median time on treatment of 11.8 (range, 0.4–43.7) versus 8.3 (range, 0.1–45.3) months in D + T-arm and monotherapy-arm patients, respectively, 49% versus 38% received >12 months of study treatment. Adverse events (AEs) of any grade, regardless of relationship to study drug, were observed in 97% of patients in both arms, with 48% of D + T-arm patients versus 50% of monotherapy patients experiencing ≥1 grade 3/4 AE (supplementary Table S3, available at Annals of Oncology online) and 45% versus 38% experiencing serious AEs (supplementary Table S4, available at Annals of Oncology online). The incidence of several AEs was higher (>10% difference, any grade) in the D + T versus monotherapy arm: pyrexia (59% versus 33%), chills (32% versus 17%), diarrhoea (31% versus 17%), vomiting (26% versus 15%), and peripheral oedema (22% versus 9%) (supplementary Table S4, available at Annals of Oncology online). Conversely, the incidence of hyperkeratosis (35% versus 7%), alopecia (28% versus 9%), and skin papilloma (22% versus 2%) was higher in monotherapy-arm versus combination-arm patients (supplementary Table S3, available at Annals of Oncology online). Palmoplantar hyperkeratosis (18% versus 5%), squamous cell carcinoma/keratoacanthoma (SCC/KA; 7% versus 2%), and basal cell carcinoma (7% versus 4%) also occurred more frequently in monotherapy-arm versus D + T-arm patients (supplementary Table S4, available at Annals of Oncology online). The incidence of other AEs of special interest (e.g. cardiotoxicities, ocular events, haemorrhages) was generally similar across the study arms (supplementary Table S5, available at Annals of Oncology online). Notably, the frequency of the most common D + T-associated AEs, including pyrexia, did not increase by >2% with an additional 13 months of follow-up since the last analysis (supplementary Table S6, available at Annals of Oncology online).
Similarly, the incidence of key skin-related AEs, including palmoplantar hyperkeratosis, SCC/KA, and basal cell carcinoma, did not increase by >1% in the combination arm with extended follow-up, and no new primary melanomas were observed. Additionally, the rates of AEs leading to dose interruption (n = 122; 58%) or permanent discontinuation (n = 29; 14%) in D + T-arm patients increased by only 2 and 3 percentage points, respectively, and no new grade 5 AEs were observed.
null
null
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion", "Supplementary Material" ]
[ "Before recent therapeutic advances, the prognosis for patients with metastatic melanoma was poor, with a 5-year survival of ∼6% and a median overall survival (OS) of 7.5 months [1]. The anti-cytotoxic T-lymphocyte-associated protein 4 (anti-CTLA-4) therapy ipilimumab was the first agent to show durable clinical benefit lasting ≥5 years in a subset of patients within molecularly unselected advanced melanoma populations [2]. More recently, BRAF and MEK inhibitor (BRAFi/MEKi) combinations and anti-programmed death-1 (anti-PD-1) checkpoint-inhibitor immunotherapy regimens demonstrated significant improvements in clinical outcomes in phase 3 trials of patients with metastatic melanoma; however, extended follow-up in these studies has been limited to ≤2 years [3–9]. Targeted therapies have been purported to be associated with rapid deterioration and death following development of secondary resistance; however, evidence from long-term, large randomized studies is lacking. With multiple treatments now available for BRAF V600-mutant melanoma, a better understanding of the proportion and characteristics of patients who can derive durable benefit and maintain tolerability with long-term use of current therapies is needed for optimizing treatment. \nCombination dabrafenib and trametinib (D + T) demonstrated improved progression-free survival (PFS) and OS over BRAFi monotherapy in randomized phase 2 and 3 trials in patients with BRAF V600E/K-mutant stage IIIC unresectable or stage IV metastatic melanoma [3, 4, 10–13]. The D + T safety profile has been consistent across these studies, in which the combination has been associated with a reduction in hyperproliferative skin lesions [e.g. 
squamous cell carcinoma (SCC), keratoacanthoma (KA)] compared with BRAFi monotherapy, while the frequency and severity of pyrexia appear higher [3, 4, 10].\nIn the most recent analysis of COMBI-d, a randomized, double-blind, phase 3 trial of D + T versus dabrafenib monotherapy (dabrafenib plus placebo), with a median follow-up of 20.0 months for the D + T arm and 16.0 months for the monotherapy arm, median PFS was 11.0 versus 8.8 months [HR, 0.67; 95% confidence interval (CI), 0.53–0.84; P = 0.0004], median OS was 25.1 versus 18.7 months (HR, 0.71; 95% CI, 0.55–0.92; P = 0.0107), and 2-year OS was 51% versus 42% [3]. These findings confirmed results from the primary analysis of COMBI-d [12] and were consistent with outcomes observed in the randomized phase 3 COMBI-v study of D + T versus vemurafenib [4]. The longest follow-up to date for D + T in a randomized study (median 45.6 months) was reported for the phase 2 BRF113220 study (part C) evaluating D + T (n = 54) versus dabrafenib monotherapy (n = 54) [11], in which D + T-treated patients had a 2- and 3-year PFS of 25% and 21%, respectively, and a 2- and 3-year OS of 51% and 38%, respectively. Pooled data across these trials (median follow-up of 20.0 months) showed that normal baseline serum lactate dehydrogenase (LDH) and <3 organ sites containing metastasis were the factors most predictive of durable outcomes; patients with both of these characteristics had a 2-year PFS of 46% and a 2-year OS of 75% [14].\nHere, we report an updated 3-year landmark analysis for the phase 3 COMBI-d trial, including updated PFS, OS, best response and safety analyses.", "The COMBI-d study (protocol previously published [3] and further described in the supplementary data, available at Annals of Oncology online) was continued after prior primary and OS analyses [3, 12] to provide an updated 3-year landmark analysis of long-term outcomes. 
Crossover was permitted following the previous OS analysis by patient/physician discretion on the intent-to-treat principle, by which any crossover benefit was applied to the randomized therapy arm estimates. Kaplan–Meier estimations of 2- and 3-year PFS and OS were carried out to describe long-term outcomes. Influences of prognostic factors on patient-derived benefit were explored with descriptive subgroup stratification by baseline factors previously identified as being predictive of outcomes in patients receiving D + T [14].", "Baseline characteristics were well balanced across 423 patients randomly assigned to receive D + T (n = 211) or dabrafenib monotherapy (n = 212; supplementary Figure S1 and Table S1, available at Annals of Oncology online). At data cut-off, 15 February 2016, patients who were alive had ≥36 months of follow-up from time of randomization. Forty (19%) D + T-arm patients versus 6 (3%) monotherapy-arm patients remained on randomized treatment.\nAt data cut-off, 3-year PFS was 22% for the D + T arm and 12% for the monotherapy arm [HR, 0.71 (95% CI, 0.57–0.88)] (Figure 1A), and 3-year OS was 44% and 32%, respectively [HR, 0.75 (95% CI, 0.58–0.96)] (Figure 2A). Notably, 25 (12%) patients in the dabrafenib monotherapy arm crossed over to D + T, of which 6 (24%) had progressed on monotherapy before crossover. Survival outcomes in these crossover patients, all of whom remained on D + T as of data cut-off, continued to be followed up under the monotherapy arm. Of combination-arm patients who were progression free (n = 31) and alive (n = 76) at 3 years, 28 (90%) and 44 (58%) remained on D + T, respectively. 
\n\nProgression-free survival (PFS) in the dabrafenib and trametinib (D + T) and dabrafenib monotherapy [D + placebo (Pbo)] arms in (A) the intent-to-treat population and patients with (B) normal baseline lactate dehydrogenase (≤upper limit of normal), (C) normal baseline lactate dehydrogenase and <3 organ sites with metastasis, and (D) elevated baseline lactate dehydrogenase (>upper limit of normal). CI, confidence interval; HR, hazard ratio. aIncludes 25 patients who crossed over from monotherapy to the combination. bOf D + T patients who were progression free at 3 years, 28 (90%) remained on D + T.\nOverall survival (OS) in the dabrafenib and trametinib (D + T) and dabrafenib monotherapy [D + placebo (Pbo)] arms in (A) the intent-to-treat population and patients with (B) normal baseline lactate dehydrogenase (≤upper limit of normal), (C) normal baseline lactate dehydrogenase and <3 organ sites with metastasis, and (D) elevated baseline lactate dehydrogenase (>upper limit of normal). CI, confidence interval; HR, hazard ratio. aIncludes 25 patients who crossed over from monotherapy to the combination. bOf patients in the D + T arm alive at 3 years, 44 (58%) remained on D + T.\nAs expected per the progression rate in each arm, more monotherapy-arm patients received post-progression systemic therapy versus D + T-arm patients [130/211 (62%) versus 101/209 (48%); supplementary Table S2, available at Annals of Oncology online]. In both the D + T and monotherapy groups, immunotherapy was the most common subsequent anticancer therapy (56% versus 56%, respectively); ipilimumab was the most common immunotherapy (41% versus 50%), with fewer patients receiving nivolumab (7% versus 5%) or pembrolizumab (13% versus 11%).\nLong-term PFS and OS consistently favoured D + T over monotherapy, regardless of baseline prognostic factors. 
Three-year PFS rates in patients with normal baseline LDH levels [≤upper limit of normal (ULN), n = 273/423 (65%)] were 27% in the D + T arm versus 17% in the monotherapy arm [HR, 0.70 (95% CI, 0.53–0.93)] (Figure 1B), and 3-year OS rates were 54% versus 41%, respectively [HR, 0.74 (95% CI, 0.53–1.03)] (Figure 2B). Of 133 D + T-arm patients with LDH ≤ ULN, 61 (46%) were alive at 3 years and 34 (26%) remained on D + T. The greatest clinical benefit with D + T was observed in patients with LDH ≤ ULN and <3 organ sites with metastasis at baseline [n = 172/423 (41%)], with 3-year PFS rates of 38% in the combination arm versus 16% in the monotherapy arm [HR, 0.53 (95% CI, 0.38–0.76)] (Figure 1C), and 3-year OS rates of 62% versus 45%, respectively [HR, 0.63 (95% CI, 0.41–0.99)] (Figure 2C). Of 76 combination-arm patients with baseline LDH ≤ ULN and <3 organ sites with metastasis, 37 (49%) were alive at 3 years and 23 (30%) remained on D + T. In patients with baseline LDH > ULN [n = 147/423 (35%)], 3-year PFS rates were 13% in the D + T arm and 4% in the monotherapy arm [HR, 0.61 (95% CI, 0.43–0.88)] (Figure 1D), and 3-year OS rates were 25% versus 14% [HR, 0.61 (95% CI, 0.41–0.89)] (Figure 2D). Of 76 D + T-arm patients with LDH > ULN, 15 (20%) were alive at 3 years and 10 (13%) remained on D + T.\nThe confirmed response rates per Response Evaluation Criteria In Solid Tumors (RECIST) were 68% in combination-arm patients versus 55% in monotherapy patients (Table 1), with a complete response (CR) rate of 18% versus 15%, respectively. 
Median duration of response was 12.0 (95% CI, 9.3–17.1) versus 10.6 (95% CI, 8.3–12.9) months.\nTable 1Confirmed RECIST responseDabrafenib plus trametinibDabrafenib plus placebo(n  = 211)(n  = 212)RECIST response, n (%) Complete response (CR)38 (18)31 (15) Partial response (PR)106 (50)85 (40) Stable disease51 (24)68 (32) Progressive disease12 (6)18 (8) Not evaluable4 (2)10 (5)Response rate (CR + PR), n (%) [95% CI]144 (68)116 (55)[61.5–74.5][47.8–61.5]Duration of responsen  = 144n  = 116 Progressed or died, n (%)100 (69)84 (72) Median (95% CI), months12.0 (9.3–17.1)10.6 (8.3–12.9)CI, confidence interval; RECIST, Response Evaluation Criteria In Solid Tumors.\nConfirmed RECIST response\nCI, confidence interval; RECIST, Response Evaluation Criteria In Solid Tumors.\nWith a median time on treatment of 11.8 (range, 0.4–43.7) versus 8.3 (range, 0.1–45.3) months in D + T-arm and monotherapy-arm patients, respectively, 49% versus 38% had >12 months of study treatment. Adverse events (AEs) of any grade, regardless of study drug relationship, were observed in 97% of patients (both arms), with 48% of D + T-arm patients versus 50% of monotherapy patients experiencing ≥1 grade 3/4 AE (supplementary Table S3, available at Annals of Oncology online) and 45% versus 38% experiencing serious AEs (supplementary Table S4, available at Annals of Oncology online). The incidence of several AEs was higher (>10% difference, any grade) in the D + T versus monotherapy arm: pyrexia (59% versus 33%), chills (32% versus 17%), diarrhoea (31% versus 17%), vomiting (26% versus 15%), and peripheral oedema (22% vs 9%) (supplementary Table S4, available at Annals of Oncology online). Conversely, the incidence of hyperkeratosis (35% versus 7%), alopecia (28% versus 9%), and skin papilloma (22% versus 2%) was higher in monotherapy-arm versus combination-arm patients (supplementary Table S3, available at Annals of Oncology online). 
Palmoplantar hyperkeratosis (18% versus 5%), SCC/KA (7% versus 2%), and basal cell carcinoma (7% versus 4%) also occurred more frequently in monotherapy-arm versus D + T-arm patients (supplementary Table S4, available at Annals of Oncology online). The incidence of other AEs of special interest (i.e. cardiotoxicities, ocular events, haemorrhages) was generally similar across the study arms (supplementary Table S5, available at Annals of Oncology online).\nNotably, the frequency of the most common D + T-associated AEs, including pyrexia, did not increase by >2% with an additional 13 months of follow-up since the last analysis (supplementary Table S6, available at Annals of Oncology online). Similarly, the incidence of key skin-related AEs, including palmoplantar hyperkeratosis, SCC/KA, and basal cell carcinoma, did not increase by >1% in the combination arm with extended follow-up, and no new primary melanomas were observed. Additionally, occurrence of events leading to dose interruptions (n = 122; 58%) or permanent discontinuation (n = 29; 14%) in D + T-arm patients increased by only 2% and 3%, respectively, and no new grade 5 AEs were observed.", "This 3-year landmark analysis of COMBI-d represents the longest follow-up for any phase 3 BRAFi/MEKi combination therapy trial and provides evidence that long-term clinical benefit and tolerability are achievable with D + T in a subset of patients with previously untreated BRAF V600E/K-mutant metastatic melanoma. Importantly, these findings do not support the idea that most patients treated by mitogen-activated protein kinase inhibitors rapidly develop deterioration due to secondary resistance. At the 3-year landmark, D + T continued to demonstrate superior benefit versus dabrafenib monotherapy (PFS, 22% versus 12%; OS, 44% versus 32%), even though 12% of monotherapy patients crossed over to receive D + T. 
Furthermore, many patients alive at 3 years remained on D + T.\nThe 3-year OS reported for D + T in this large phase 3 trial (44%) confirms preliminary results for the smaller corresponding patient subset in the randomized phase 2 BRF113220 trial (3-year OS, 38%) [11]. More generally, survival observed in the current analysis is consistent with previous findings for D + T in BRAF V600-mutant melanoma, since the 2-year OS reported here (52%) is similar to that reported in the randomized phase 3 COMBI-v study (51%) and in a pooled analysis across registration trials (53%) [14]. In this era of multiple drugs with significant activity in metastatic melanoma, clinical trial OS results may be confounded by availability of these therapies. In this analysis, of patients who received any post-progression systemic therapy, rates of subsequent anti-PD-1 use were similar between the D + T and monotherapy arms, and the rate of subsequent ipilimumab therapy was numerically higher in the monotherapy arm compared with the D + T arm. Thus, the 3-year OS observed with D + T in this study may be mostly attributed to the combination.\nDirect comparisons of survival landmarks across trials of currently available melanoma treatments should be interpreted with caution due to differences in baseline characteristics between study populations, including the requirement for the presence of a BRAF V600E or V600K mutation in targeted therapy trials and the period of time during which studies were conducted (e.g. what treatments were available for subsequent therapy). However, in the absence of prospective head-to-head trials evaluating targeted versus checkpoint-inhibitor immunotherapies, pivotal trials to date can be considered to provide outcomes trends for each drug class. 
Moving forward, it will be important to balance advantages of immunotherapy with anti-PD-1 (±anti-CTLA-4) and BRAFi/MEKi combinations.\nFollow-up for anti-PD-1 checkpoint-inhibitor immunotherapy regimens has lagged behind targeted therapy; 3-year landmark OS results, as reported here, are currently available only for early-phase trials. In a phase 1 study evaluating nivolumab monotherapy in 107 patients with previously treated melanoma, unselected for BRAF mutation status and 36% with elevated LDH, the 2-, 3-, and 5-year OS rates were 48%, 42%, and 34%, respectively [15]. In a phase 1 study of combined nivolumab plus ipilimumab, in 53 treatment-naive (60%) or previously treated (40%) patients with advanced melanoma (38% with elevated LDH), the 3-year OS was 68%; however, it should be noted that these results are preliminary [16] and randomized studies of the combination have shown a consistent 2-year survival of 64% [9, 17], less than this phase 1 landmark. As larger trials evaluating anti-PD-1 regimens in metastatic melanoma continue follow-up, preliminary trends in outcomes in a recent meta-analysis demonstrated no significant difference in OS between first-line BRAFi/MEKi and anti-PD-1 [18].\nAltogether, data across trials of currently available therapies suggest that long-term survival profiles, at least up to 3 years, do not seem to confirm the hypothesis that only checkpoint-inhibitor immunotherapy can provide durable benefit in patients with metastatic melanoma. Although initial clinical activity (e.g. response rates) differs between these therapeutic classes [3–9], the proportion of patients with a 3-year benefit may be similar; however this will need to be confirmed by additional analyses of checkpoint-inhibitor immunotherapies specifically in patients with BRAF-mutant disease. 
Furthermore, it is important to note that the plateau survival pattern observed with ipilimumab [2] has not yet been demonstrated with anti-PD-1 therapies and remains a potential survival pattern for BRAFi/MEKi.\nIt is now well established that efficacy of treatment of metastatic melanoma can differ depending on baseline patient characteristics. Analyses of BRAFi-naive patients treated with D + T in the phase 2 BRF113220 study and in a pooled analysis across D + T registration trials identified significant associations between baseline LDH and number of organ sites containing metastasis and clinical outcomes [11, 14]. Results from the current analysis support these findings, with the highest 3-year OS observed among patients with LDH < ULN and <3 organ sites containing metastasis (D + T, 62%; monotherapy, 45%). Patients with favourable baseline markers treated with frontline D + T are thus more likely to derive long-term benefit from this combination. Moreover, although 3-year survival was much lower in patients with LDH > ULN, the superiority of D + T over dabrafenib monotherapy was maintained (3-year OS, 25% versus 14%).\nWith an additional 13 months of follow-up from the previous OS analysis of COMBI-d, CR was achieved by an additional 5 D + T-arm patients, resulting in an updated CR rate of 18% and an overall response rate of 68% with the combination.\nThe safety profile of D + T with longer follow-up was similar to that observed in previous analyses, in which the combination was associated with a reduction in toxicities related to paradoxical activation of the mitogen-activated protein kinase pathway compared with BRAFi monotherapy [3, 4, 10–13]. Pyrexia remained the most common AE with D + T; however, it has been shown that pyrexia can be managed [19]. 
The frequency of key AEs did not greatly change with additional follow-up, including pyrexia and secondary malignancies, consistent with a recent report that incidence of D + T-associated AEs is highest during the first 6 months of treatment, declining thereafter [20]. Thus, although patients who remain on and benefit from treatment can become an increasingly biased population due to the disappearance of those with very poor tolerance and/or development of secondary resistance, long-term treatment with D + T appears to be well tolerated in the subgroup of patients who benefit.\nThis analysis, representing the longest follow-up for any phase 3 trial evaluating BRAFi/MEKi combination therapy, demonstrated that long-term survival is achievable with D + T in a relevant proportion of patients with BRAF V600-mutant metastatic melanoma and that long-term treatment with D + T is tolerable, with no new safety signals. These results support long-term use of D + T as a first-line treatment strategy for patients with advanced BRAF V600-mutant melanoma. However, a more comprehensive model including clinical factors as described here, along with molecular and/or immune-markers associated with efficacy, is needed to further guide treatment decisions (e.g. BRAFi/MEKi and checkpoint inhibitor immunotherapy sequencing strategies) in this melanoma population. Continued follow-up planned for up to 5 years for COMBI-d will provide further understanding of the extent of benefit achievable with D + T in this setting.", "Click here for additional data file." ]
[ "intro", "methods", "results", "discussion", "supplementary-material" ]
[ "melanoma", "metastatic", "BRAF", "dabrafenib", "trametinib", "durable outcomes" ]
Introduction: Before recent therapeutic advances, the prognosis for patients with metastatic melanoma was poor, with a 5-year survival of ∼6% and a median overall survival (OS) of 7.5 months [1]. The anti-cytotoxic T-lymphocyte-associated protein 4 (anti-CTLA-4) therapy ipilimumab was the first agent to show durable clinical benefit lasting ≥5 years in a subset of patients within molecularly unselected advanced melanoma populations [2]. More recently, BRAF and MEK inhibitor (BRAFi/MEKi) combinations and anti-programmed death-1 (anti-PD-1) checkpoint-inhibitor immunotherapy regimens demonstrated significant improvements in clinical outcomes in phase 3 trials of patients with metastatic melanoma; however, extended follow-up in these studies has been limited to ≤2 years [3–9]. Targeted therapies have been purported to be associated with rapid deterioration and death following development of secondary resistance; however, evidence from long-term, large randomized studies is lacking. With multiple treatments now available for BRAF V600-mutant melanoma, a better understanding of the proportion and characteristics of patients who can derive durable benefit and maintain tolerability with long-term use of current therapies is needed for optimizing treatment. Combination dabrafenib and trametinib (D + T) demonstrated improved progression-free survival (PFS) and OS over BRAFi monotherapy in randomized phase 2 and 3 trials in patients with BRAF V600E/K-mutant stage IIIC unresectable or stage IV metastatic melanoma [3, 4, 10–13]. The D + T safety profile has been consistent across these studies, in which the combination has been associated with a reduction in hyperproliferative skin lesions [e.g. squamous cell carcinoma (SCC), keratoacanthoma (KA)] compared with BRAFi monotherapy, while the frequency and severity of pyrexia appear higher [3, 4, 10]. 
In the most recent analysis of COMBI-d, a randomized, double-blind, phase 3 trial of D + T versus dabrafenib monotherapy (dabrafenib plus placebo), with a median follow-up of 20.0 months for the D + T arm and 16.0 months for the monotherapy arm, median PFS was 11.0 versus 8.8 months [HR, 0.67; 95% confidence interval (CI), 0.53–0.84; P = 0.0004], median OS was 25.1 versus 18.7 months (HR, 0.71; 95% CI, 0.55–0.92; P = 0.0107), and 2-year OS was 51% versus 42% [3]. These findings confirmed results from the primary analysis of COMBI-d [12] and were consistent with outcomes observed in the randomized phase 3 COMBI-v study of D + T versus vemurafenib [4]. The longest follow-up to date for D + T in a randomized study (median 45.6 months) was reported for the phase 2 BRF113220 study (part C) evaluating D + T (n = 54) versus dabrafenib monotherapy (n = 54) [11], in which D + T-treated patients had a 2- and 3-year PFS of 25% and 21%, respectively, and a 2- and 3-year OS of 51% and 38%, respectively. Pooled data across these trials (median follow-up of 20.0 months) showed that normal baseline serum lactate dehydrogenase (LDH) and <3 organ sites containing metastasis were the factors most predictive of durable outcomes; patients with both of these characteristics had a 2-year PFS of 46% and a 2-year OS of 75% [14]. Here, we report an updated 3-year landmark analysis for the phase 3 COMBI-d trial, including updated PFS, OS, best response and safety analyses. Methods: The COMBI-d study (protocol previously published [3] and further described in the supplementary data, available at Annals of Oncology online) was continued after prior primary and OS analyses [3, 12] to provide an updated 3-year landmark analysis of long-term outcomes. Crossover was permitted following the previous OS analysis by patient/physician discretion on the intent-to-treat principle, by which any crossover benefit was applied to the randomized therapy arm estimates. 
Kaplan–Meier estimations of 2- and 3-year PFS and OS were carried out to describe long-term outcomes. Influences of prognostic factors on patient-derived benefit were explored with descriptive subgroup stratification by baseline factors previously identified as being predictive of outcomes in patients receiving D + T [14]. Results: Baseline characteristics were well balanced across 423 patients randomly assigned to receive D + T (n = 211) or dabrafenib monotherapy (n = 212; supplementary Figure S1 and Table S1, available at Annals of Oncology online). At data cut-off, 15 February 2016, patients who were alive had ≥36 months of follow-up from time of randomization. Forty (19%) D + T-arm patients versus 6 (3%) monotherapy-arm patients remained on randomized treatment. At data cut-off, 3-year PFS was 22% for the D + T arm and 12% for the monotherapy arm [HR, 0.71 (95% CI, 0.57–0.88)] (Figure 1A), and 3-year OS was 44% and 32%, respectively [HR, 0.75 (95% CI, 0.58–0.96)] (Figure 2A). Notably, 25 (12%) patients in the dabrafenib monotherapy arm crossed over to D + T, of which 6 (24%) had progressed on monotherapy before crossover. Survival outcomes in these crossover patients, all of whom remained on D + T as of data cut-off, continued to be followed up under the monotherapy arm. Of combination-arm patients who were progression free (n = 31) and alive (n = 76) at 3 years, 28 (90%) and 44 (58%) remained on D + T, respectively. Progression-free survival (PFS) in the dabrafenib and trametinib (D + T) and dabrafenib monotherapy [D + placebo (Pbo)] arms in (A) the intent-to-treat population and patients with (B) normal baseline lactate dehydrogenase (≤upper limit of normal), (C) normal baseline lactate dehydrogenase and <3 organ sites with metastasis, and (D) elevated baseline lactate dehydrogenase (>upper limit of normal). CI, confidence interval; HR, hazard ratio. aIncludes 25 patients who crossed over from monotherapy to the combination. 
bOf D + T patients who were progression free at 3 years, 28 (90%) remained on D + T. Overall survival (OS) in the dabrafenib and trametinib (D + T) and dabrafenib monotherapy [D + placebo (Pbo)] arms in (A) the intent-to-treat population and patients with (B) normal baseline lactate dehydrogenase (≤upper limit of normal), (C) normal baseline lactate dehydrogenase and <3 organ sites with metastasis, and (D) elevated baseline lactate dehydrogenase (>upper limit of normal). CI, confidence interval; HR, hazard ratio. aIncludes 25 patients who crossed over from monotherapy to the combination. bOf patients in the D + T arm alive at 3 years, 44 (58%) remained on D + T. As expected per the progression rate in each arm, more monotherapy-arm patients received post-progression systemic therapy versus D + T-arm patients [130/211 (62%) versus 101/209 (48%); supplementary Table S2, available at Annals of Oncology online]. In both the D + T and monotherapy groups, immunotherapy was the most common subsequent anticancer therapy (56% versus 56%, respectively); ipilimumab was the most common immunotherapy (41% versus 50%), with fewer patients receiving nivolumab (7% versus 5%) or pembrolizumab (13% versus 11%). Long-term PFS and OS consistently favoured D + T over monotherapy, regardless of baseline prognostic factors. Three-year PFS rates in patients with normal baseline LDH levels [≤upper limit of normal (ULN), n = 273/423 (65%)] were 27% in the D + T arm versus 17% in the monotherapy arm [HR, 0.70 (95% CI, 0.53–0.93)] (Figure 1B), and 3-year OS rates were 54% versus 41%, respectively [HR, 0.74 (95% CI, 0.53–1.03)] (Figure 2B). Of 133 D + T-arm patients with LDH ≤ ULN, 61 (46%) were alive at 3 years and 34 (26%) remained on D + T. 
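The landmark PFS and OS percentages reported above are Kaplan–Meier estimates at fixed time points. A minimal sketch of the product-limit calculation; the follow-up times and event flags below are synthetic and purely illustrative, not COMBI-d data:

```python
# Minimal Kaplan-Meier (product-limit) estimator for landmark survival.
# All times/events below are synthetic, for illustration only.

def kaplan_meier(times, events):
    """Return a step function S(t) from follow-up data.

    times  -- follow-up time for each subject (e.g. months)
    events -- 1 if the event (progression/death) was observed, 0 if censored
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    steps = []  # (event time, survival estimate just after it)
    for i in order:
        if events[i]:
            # Each observed event multiplies S by (n_at_risk - 1) / n_at_risk.
            surv *= (at_risk - 1) / at_risk
            steps.append((times[i], surv))
        at_risk -= 1  # censored subjects simply leave the risk set

    def S(t):
        s = 1.0
        for time, value in steps:
            if time <= t:
                s = value
            else:
                break
        return s

    return S

# Illustrative cohort: months of follow-up, with 0 = censored.
times  = [3, 6, 6, 12, 18, 24, 30, 36, 40, 42]
events = [1, 1, 0,  1,  1,  0,  1,  0,  1,  0]
S = kaplan_meier(times, events)
print(round(S(36), 3))  # 3-year landmark survival estimate -> 0.429
```

Censoring is why a landmark rate is not simply "patients event-free at 36 months divided by n": censored subjects contribute to the risk sets up to their last follow-up.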
The greatest clinical benefit with D + T was observed in patients with LDH ≤ ULN and <3 organ sites with metastasis at baseline [n = 172/423 (41%)], with 3-year PFS rates of 38% in the combination arm versus 16% in the monotherapy arm [HR, 0.53 (95% CI, 0.38–0.76)] (Figure 1C), and 3-year OS rates of 62% versus 45%, respectively [HR, 0.63 (95% CI, 0.41–0.99)] (Figure 2C). Of 76 combination-arm patients with baseline LDH ≤ ULN and <3 organ sites with metastasis, 37 (49%) were alive at 3 years and 23 (30%) remained on D + T. In patients with baseline LDH > ULN [n = 147/423 (35%)], 3-year PFS rates were 13% in the D + T arm and 4% in the monotherapy arm [HR, 0.61 (95% CI, 0.43–0.88)] (Figure 1D), and 3-year OS rates were 25% versus 14% [HR, 0.61 (95% CI, 0.41–0.89)] (Figure 2D). Of 76 D + T-arm patients with LDH > ULN, 15 (20%) were alive at 3 years and 10 (13%) remained on D + T. The confirmed response rates per Response Evaluation Criteria In Solid Tumors (RECIST) were 68% in combination-arm patients versus 55% in monotherapy patients (Table 1), with a complete response (CR) rate of 18% versus 15%, respectively. Median duration of response was 12.0 (95% CI, 9.3–17.1) versus 10.6 (95% CI, 8.3–12.9) months.

Table 1. Confirmed RECIST response

                                          Dabrafenib plus trametinib (n = 211)   Dabrafenib plus placebo (n = 212)
RECIST response, n (%)
  Complete response (CR)                  38 (18)                                31 (15)
  Partial response (PR)                   106 (50)                               85 (40)
  Stable disease                          51 (24)                                68 (32)
  Progressive disease                     12 (6)                                 18 (8)
  Not evaluable                           4 (2)                                  10 (5)
Response rate (CR + PR), n (%) [95% CI]   144 (68) [61.5–74.5]                   116 (55) [47.8–61.5]
Duration of response                      n = 144                                n = 116
  Progressed or died, n (%)               100 (69)                               84 (72)
  Median (95% CI), months                 12.0 (9.3–17.1)                        10.6 (8.3–12.9)

CI, confidence interval; RECIST, Response Evaluation Criteria In Solid Tumors. 
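The bracketed intervals on the response rates in Table 1 are binomial confidence intervals. The exact method is not stated in this excerpt (registration trials typically use exact Clopper–Pearson limits), so the sketch below uses the Wilson score interval as an approximation; it lands close to, but not exactly on, the reported 61.5–74.5 for 144/211:

```python
import math

# Wilson score 95% CI for a binomial proportion (approximation).
# The paper's interval was presumably computed with an exact method,
# so the endpoints here differ slightly from the reported 61.5-74.5.

def wilson_ci(k, n, z=1.96):
    """Return (lower, upper) of the Wilson score interval for k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(144, 211)
print(f"{100*lo:.1f}-{100*hi:.1f}%")  # -> 61.7-74.2%, near the reported 61.5-74.5
```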
With a median time on treatment of 11.8 (range, 0.4–43.7) versus 8.3 (range, 0.1–45.3) months in D + T-arm and monotherapy-arm patients, respectively, 49% versus 38% had >12 months of study treatment. Adverse events (AEs) of any grade, regardless of study drug relationship, were observed in 97% of patients (both arms), with 48% of D + T-arm patients versus 50% of monotherapy patients experiencing ≥1 grade 3/4 AE (supplementary Table S3, available at Annals of Oncology online) and 45% versus 38% experiencing serious AEs (supplementary Table S4, available at Annals of Oncology online). The incidence of several AEs was higher (>10% difference, any grade) in the D + T versus monotherapy arm: pyrexia (59% versus 33%), chills (32% versus 17%), diarrhoea (31% versus 17%), vomiting (26% versus 15%), and peripheral oedema (22% versus 9%) (supplementary Table S4, available at Annals of Oncology online). Conversely, the incidence of hyperkeratosis (35% versus 7%), alopecia (28% versus 9%), and skin papilloma (22% versus 2%) was higher in monotherapy-arm versus combination-arm patients (supplementary Table S3, available at Annals of Oncology online). Palmoplantar hyperkeratosis (18% versus 5%), squamous cell carcinoma/keratoacanthoma (SCC/KA; 7% versus 2%), and basal cell carcinoma (7% versus 4%) also occurred more frequently in monotherapy-arm versus D + T-arm patients (supplementary Table S4, available at Annals of Oncology online). The incidence of other AEs of special interest (i.e. cardiotoxicities, ocular events, haemorrhages) was generally similar across the study arms (supplementary Table S5, available at Annals of Oncology online). Notably, the frequency of the most common D + T-associated AEs, including pyrexia, did not increase by >2% with an additional 13 months of follow-up since the last analysis (supplementary Table S6, available at Annals of Oncology online). 
Similarly, the incidence of key skin-related AEs, including palmoplantar hyperkeratosis, SCC/KA, and basal cell carcinoma, did not increase by >1% in the combination arm with extended follow-up, and no new primary melanomas were observed. Additionally, the occurrence of events leading to dose interruptions (n = 122; 58%) or permanent discontinuation (n = 29; 14%) in D + T-arm patients increased by only 2% and 3%, respectively, and no new grade 5 AEs were observed. Discussion: This 3-year landmark analysis of COMBI-d represents the longest follow-up for any phase 3 BRAFi/MEKi combination therapy trial and provides evidence that long-term clinical benefit and tolerability are achievable with D + T in a subset of patients with previously untreated BRAF V600E/K-mutant metastatic melanoma. Importantly, these findings do not support the idea that most patients treated with mitogen-activated protein kinase inhibitors rapidly deteriorate owing to secondary resistance. At the 3-year landmark, D + T continued to demonstrate superior benefit versus dabrafenib monotherapy (PFS, 22% versus 12%; OS, 44% versus 32%), even though 12% of monotherapy patients crossed over to receive D + T. Furthermore, many patients alive at 3 years remained on D + T. The 3-year OS reported for D + T in this large phase 3 trial (44%) confirms preliminary results for the smaller corresponding patient subset in the randomized phase 2 BRF113220 trial (3-year OS, 38%) [11]. More generally, survival observed in the current analysis is consistent with previous findings for D + T in BRAF V600-mutant melanoma, since the 2-year OS reported here (52%) is similar to that reported in the randomized phase 3 COMBI-v study (51%) and in a pooled analysis across registration trials (53%) [14]. In this era of multiple drugs with significant activity in metastatic melanoma, clinical trial OS results may be confounded by the availability of these therapies. 
In this analysis, of patients who received any post-progression systemic therapy, rates of subsequent anti-PD-1 use were similar between the D + T and monotherapy arms, and the rate of subsequent ipilimumab therapy was numerically higher in the monotherapy arm compared with the D + T arm. Thus, the 3-year OS observed with D + T in this study may be mostly attributed to the combination. Direct comparisons of survival landmarks across trials of currently available melanoma treatments should be interpreted with caution due to differences in baseline characteristics between study populations, including the requirement for the presence of a BRAF V600E or V600K mutation in targeted therapy trials and the period of time during which studies were conducted (e.g. what treatments were available for subsequent therapy). However, in the absence of prospective head-to-head trials evaluating targeted versus checkpoint-inhibitor immunotherapies, pivotal trials to date can be considered to provide outcomes trends for each drug class. Moving forward, it will be important to balance advantages of immunotherapy with anti-PD-1 (±anti-CTLA-4) and BRAFi/MEKi combinations. Follow-up for anti-PD-1 checkpoint-inhibitor immunotherapy regimens has lagged behind targeted therapy; 3-year landmark OS results, as reported here, are currently available only for early-phase trials. In a phase 1 study evaluating nivolumab monotherapy in 107 patients with previously treated melanoma, unselected for BRAF mutation status and 36% with elevated LDH, the 2-, 3-, and 5-year OS rates were 48%, 42%, and 34%, respectively [15]. 
In a phase 1 study of combined nivolumab plus ipilimumab, in 53 treatment-naive (60%) or previously treated (40%) patients with advanced melanoma (38% with elevated LDH), the 3-year OS was 68%; however, it should be noted that these results are preliminary [16] and randomized studies of the combination have shown a consistent 2-year survival of 64% [9, 17], less than this phase 1 landmark. As larger trials evaluating anti-PD-1 regimens in metastatic melanoma continue follow-up, preliminary trends in outcomes in a recent meta-analysis demonstrated no significant difference in OS between first-line BRAFi/MEKi and anti-PD-1 [18]. Altogether, data across trials of currently available therapies suggest that long-term survival profiles, at least up to 3 years, do not seem to confirm the hypothesis that only checkpoint-inhibitor immunotherapy can provide durable benefit in patients with metastatic melanoma. Although initial clinical activity (e.g. response rates) differs between these therapeutic classes [3–9], the proportion of patients with a 3-year benefit may be similar; however this will need to be confirmed by additional analyses of checkpoint-inhibitor immunotherapies specifically in patients with BRAF-mutant disease. Furthermore, it is important to note that the plateau survival pattern observed with ipilimumab [2] has not yet been demonstrated with anti-PD-1 therapies and remains a potential survival pattern for BRAFi/MEKi. It is now well established that efficacy of treatment of metastatic melanoma can differ depending on baseline patient characteristics. Analyses of BRAFi-naive patients treated with D + T in the phase 2 BRF113220 study and in a pooled analysis across D + T registration trials identified significant associations between baseline LDH and number of organ sites containing metastasis and clinical outcomes [11, 14]. 
Results from the current analysis support these findings, with the highest 3-year OS observed among patients with LDH ≤ ULN and <3 organ sites containing metastasis (D + T, 62%; monotherapy, 45%). Patients with favourable baseline markers treated with frontline D + T are thus more likely to derive long-term benefit from this combination. Moreover, although 3-year survival was much lower in patients with LDH > ULN, the superiority of D + T over dabrafenib monotherapy was maintained (3-year OS, 25% versus 14%). With an additional 13 months of follow-up from the previous OS analysis of COMBI-d, CR was achieved by an additional 5 D + T-arm patients, resulting in an updated CR rate of 18% and an overall response rate of 68% with the combination. The safety profile of D + T with longer follow-up was similar to that observed in previous analyses, in which the combination was associated with a reduction in toxicities related to paradoxical activation of the mitogen-activated protein kinase pathway compared with BRAFi monotherapy [3, 4, 10–13]. Pyrexia remained the most common AE with D + T; however, it has been shown that pyrexia can be managed [19]. The frequency of key AEs, including pyrexia and secondary malignancies, did not change greatly with additional follow-up, consistent with a recent report that the incidence of D + T-associated AEs is highest during the first 6 months of treatment, declining thereafter [20]. Thus, although patients who remain on and benefit from treatment can become an increasingly biased population owing to the disappearance of those with very poor tolerance and/or development of secondary resistance, long-term treatment with D + T appears to be well tolerated in the subgroup of patients who benefit. 
This analysis, representing the longest follow-up for any phase 3 trial evaluating BRAFi/MEKi combination therapy, demonstrated that long-term survival is achievable with D + T in a relevant proportion of patients with BRAF V600-mutant metastatic melanoma and that long-term treatment with D + T is tolerable, with no new safety signals. These results support long-term use of D + T as a first-line treatment strategy for patients with advanced BRAF V600-mutant melanoma. However, a more comprehensive model including the clinical factors described here, along with molecular and/or immune markers associated with efficacy, is needed to further guide treatment decisions (e.g. BRAFi/MEKi and checkpoint inhibitor immunotherapy sequencing strategies) in this melanoma population. Continued follow-up planned for up to 5 years for COMBI-d will provide further understanding of the extent of benefit achievable with D + T in this setting. Supplementary Material: available at Annals of Oncology online.
Background: Previous analysis of COMBI-d (NCT01584648) demonstrated improved progression-free survival (PFS) and overall survival (OS) with combination dabrafenib and trametinib versus dabrafenib monotherapy in BRAF V600E/K-mutant metastatic melanoma. This study was continued to assess 3-year landmark efficacy and safety after ≥36-month follow-up for all living patients. Methods: This double-blind, phase 3 study enrolled previously untreated patients with BRAF V600E/K-mutant unresectable stage IIIC or stage IV melanoma. Patients were randomized to receive dabrafenib (150 mg twice daily) plus trametinib (2 mg once daily) or dabrafenib plus placebo. The primary endpoint was PFS; secondary endpoints were OS, overall response, duration of response, safety, and pharmacokinetics. Results: Between 4 May and 30 November 2012, a total of 423 of 947 screened patients were randomly assigned to receive dabrafenib plus trametinib (n = 211) or dabrafenib monotherapy (n = 212). At data cut-off (15 February 2016), outcomes remained superior with the combination: 3-year PFS was 22% with dabrafenib plus trametinib versus 12% with monotherapy, and 3-year OS was 44% versus 32%, respectively. Twenty-five patients receiving monotherapy crossed over to combination therapy, with continued follow-up under the monotherapy arm (per intent-to-treat principle). Of combination-arm patients alive at 3 years, 58% remained on dabrafenib plus trametinib. Three-year OS with the combination reached 62% in the most favourable subgroup (normal lactate dehydrogenase and <3 organ sites with metastasis) versus only 25% in the unfavourable subgroup (elevated lactate dehydrogenase). The dabrafenib plus trametinib safety profile was consistent with previous clinical trial observations, and no new safety signals were detected with long-term use. 
Conclusions: These data demonstrate that durable (≥3 years) survival is achievable with dabrafenib plus trametinib in patients with BRAF V600-mutant metastatic melanoma and support long-term first-line use of the combination in this setting.
null
null
4,373
409
[]
5
[ "patients", "versus", "arm", "monotherapy", "year", "os", "baseline", "ci", "melanoma", "combination" ]
[ "melanoma treatments interpreted", "metastatic melanoma continue", "sequencing strategies melanoma", "meki checkpoint inhibitor", "checkpoint inhibitor immunotherapies" ]
null
null
[CONTENT] melanoma | metastatic | BRAF | dabrafenib | trametinib | durable outcomes [SUMMARY]
[CONTENT] melanoma | metastatic | BRAF | dabrafenib | trametinib | durable outcomes [SUMMARY]
[CONTENT] melanoma | metastatic | BRAF | dabrafenib | trametinib | durable outcomes [SUMMARY]
null
[CONTENT] melanoma | metastatic | BRAF | dabrafenib | trametinib | durable outcomes [SUMMARY]
null
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Biomarkers, Tumor | Disease Progression | Disease-Free Survival | Double-Blind Method | Drug Administration Schedule | Humans | Imidazoles | Kaplan-Meier Estimate | Melanoma | Mutation | Oximes | Protein Kinase Inhibitors | Proto-Oncogene Proteins B-raf | Pyridones | Pyrimidinones | Risk Factors | Skin Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Biomarkers, Tumor | Disease Progression | Disease-Free Survival | Double-Blind Method | Drug Administration Schedule | Humans | Imidazoles | Kaplan-Meier Estimate | Melanoma | Mutation | Oximes | Protein Kinase Inhibitors | Proto-Oncogene Proteins B-raf | Pyridones | Pyrimidinones | Risk Factors | Skin Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Biomarkers, Tumor | Disease Progression | Disease-Free Survival | Double-Blind Method | Drug Administration Schedule | Humans | Imidazoles | Kaplan-Meier Estimate | Melanoma | Mutation | Oximes | Protein Kinase Inhibitors | Proto-Oncogene Proteins B-raf | Pyridones | Pyrimidinones | Risk Factors | Skin Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
null
[CONTENT] Antineoplastic Combined Chemotherapy Protocols | Biomarkers, Tumor | Disease Progression | Disease-Free Survival | Double-Blind Method | Drug Administration Schedule | Humans | Imidazoles | Kaplan-Meier Estimate | Melanoma | Mutation | Oximes | Protein Kinase Inhibitors | Proto-Oncogene Proteins B-raf | Pyridones | Pyrimidinones | Risk Factors | Skin Neoplasms | Time Factors | Treatment Outcome [SUMMARY]
null
[CONTENT] melanoma treatments interpreted | metastatic melanoma continue | sequencing strategies melanoma | meki checkpoint inhibitor | checkpoint inhibitor immunotherapies [SUMMARY]
[CONTENT] melanoma treatments interpreted | metastatic melanoma continue | sequencing strategies melanoma | meki checkpoint inhibitor | checkpoint inhibitor immunotherapies [SUMMARY]
[CONTENT] melanoma treatments interpreted | metastatic melanoma continue | sequencing strategies melanoma | meki checkpoint inhibitor | checkpoint inhibitor immunotherapies [SUMMARY]
null
[CONTENT] melanoma treatments interpreted | metastatic melanoma continue | sequencing strategies melanoma | meki checkpoint inhibitor | checkpoint inhibitor immunotherapies [SUMMARY]
null
[CONTENT] patients | versus | arm | monotherapy | year | os | baseline | ci | melanoma | combination [SUMMARY]
[CONTENT] patients | versus | arm | monotherapy | year | os | baseline | ci | melanoma | combination [SUMMARY]
[CONTENT] patients | versus | arm | monotherapy | year | os | baseline | ci | melanoma | combination [SUMMARY]
null
[CONTENT] patients | versus | arm | monotherapy | year | os | baseline | ci | melanoma | combination [SUMMARY]
null
[CONTENT] phase | median | months | melanoma | versus | os | patients | year | monotherapy | anti [SUMMARY]
[CONTENT] term outcomes | long term outcomes | os | outcomes | previously | patient | crossover | benefit | long term | factors [SUMMARY]
[CONTENT] versus | arm | patients | monotherapy | ci | arm patients | table | 95 ci | 95 | figure [SUMMARY]
null
[CONTENT] patients | versus | os | year | monotherapy | additional data file | file | additional data | click additional data | data file [SUMMARY]
null
[CONTENT] NCT01584648 ||| 3-year [SUMMARY]
[CONTENT] 3 | IIIC ||| 150 | 2 ||| [SUMMARY]
[CONTENT] Between 4 May and 30 November 2012 | 423 | 947 | 211 | 212 ||| 15 February 2016 | 3-year | 22% | 12% | 3-year | 44% | 32% ||| Twenty-five ||| 3 years | 58% ||| Three-year | 62% | only 25% ||| [SUMMARY]
null
[CONTENT] NCT01584648 ||| 3-year ||| 3 | IIIC ||| 150 | 2 ||| ||| Between 4 May and 30 November 2012 | 423 | 947 | 211 | 212 ||| 15 February 2016 | 3-year | 22% | 12% | 3-year | 44% | 32% ||| Twenty-five ||| 3 years | 58% ||| Three-year | 62% | only 25% ||| ||| V600 | first [SUMMARY]
null
Normal Reference Plots for the Bioelectrical Impedance Vector in Healthy Korean Adults.
31373183
Accurate volume measurement is important in the management of patients with congestive heart failure or renal insufficiency. A bioimpedance analyser can estimate total body water in litres and has been widely used in clinical practice due to its non-invasiveness and ease of results interpretation. To change impedance data to volumetric data, bioimpedance analysers use equations derived from data from healthy subjects, which may not apply to patients with other conditions. Bioelectrical impedance vector analysis (BIVA) was developed to overcome the dependence on those equations by constructing vector plots using raw impedance data. BIVA requires normal reference plots for the proper interpretation of individual vectors. The aim of this study was to construct normal reference vector plots of bioelectrical impedance for Koreans.
BACKGROUND
Bioelectrical impedance measurements were collected from apparently healthy subjects screened according to a comprehensive physical examination and medical history performed by trained physicians. Reference vector contours were plotted on the RXc graph using the probability density function of the bivariate normal distribution. We further compared them with those of other ethnic groups.
METHODS
A total of 242 healthy subjects aged 22 to 83 were recruited (137 men and 105 women) between December 2015 and November 2016. The centers of the tolerance ellipses were 306.3 Ω/m and 34.9 Ω/m for men and 425.6 Ω/m and 39.7 Ω/m for women. The ellipses were wider for women than for men. The confidence ellipses for Koreans were located between those for Americans and Spaniards without overlap for both genders.
RESULTS
This study presented gender-specific normal reference BIVA plots and corresponding tolerance and confidence ellipses on the RXc graph, which is important for the interpretation of BIA-reported volume status in patients with congestive heart failure or renal insufficiency. There were noticeable differences in reference ellipses with regard to gender and ethnic groups.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Body Composition", "Electric Impedance", "Female", "Heart Failure", "Humans", "Male", "Middle Aged", "Renal Insufficiency", "Republic of Korea", "Young Adult" ]
6676004
INTRODUCTION
Assessment of accurate volume status is of great importance in patients with various cardiac conditions or renal insufficiency with regard to diagnosis, monitoring the response to therapy, and prognosis [1, 2]. Bioelectrical impedance analysis (BIA) is a widely used tool to assess the volume status of patients due to its low cost, non-invasiveness and ease of use [3]. BIA analysers report the amount of body fluid in volumetric units (usually litres) using their own algorithms to convert the electrical measurements from human cells into volumetric numbers. However, those algorithms are derived from various mathematical regression models and assumptions that are usually derived from and validated in healthy and steady-state populations. Thus, the accuracy of the estimated values from conventional BIA may be compromised in patients with abnormal conditions, such as congestive heart failure or chronic and acute renal insufficiency, in whom the use of raw electrical data would have greater strength [4-8]. Bioelectrical impedance vector analysis (BIVA) is an alternative method to overcome such limitations of conventional BIA methods. BIVA creates vector plots of impedance (Z) on height-standardized resistance (R) and reactance (Xc) axes, which is called the RXc graph, from raw impedance data instead of reporting volumetric values as is done in conventional BIA [9]. For the correct interpretation of a patient's volume status using a BIVA plot, normal reference values are needed, which might differ according to gender and ethnicity. Several studies in other countries have reported normal reference vector plots for interpreting the volume status of patients with disease. However, to the best of our knowledge, there have been no studies reporting standardized reference vector plots for Koreans. The aim of this study was to establish gender-specific normal reference vector plots for healthy Koreans and compare them with those of other ethnic groups.
METHODS
Subjects with no history of medical illness except hypertension were recruited. Well-trained and dedicated physicians conducted physical examinations to ascertain the health and volume status of subjects and assessed functional limitations of daily activities. The subjects were considered to be euvolemic when they had a steady-state body weight and exhibited normal skin turgor without pitting oedema or jugular venous distention. We excluded subjects who had abnormal chest and heart sounds, irregular heartbeats, pale conjunctiva and icteric sclera. A total of 242 healthy subjects were recruited (137 men and 105 women) from Busan metropolitan area between December 2015 and November 2016. Measurements: The height and weight of the participants were measured using a digital scale equipped with a stadiometer. Upper arm circumference was measured on the non-dominant arm at the mid-point between the shoulder and elbow using a tape measure. Blood pressure was measured on both arms, and the higher reading was recorded. Bioelectrical impedance was measured using a tetra-polar eight-point tactile electrode system (InBody S10®; InBody Co., Ltd., Seoul, Korea) that provides a set of raw bioelectrical measurements of Z and Xc, each for five parts of the body (both arms and legs and the trunk) in multiple frequencies ranging from 1 kHz to 1,000 kHz. Whole-body Z and Xc values were each the sum of those readings for the right arm, right leg and trunk at 50 kHz. Whole-body R and phase angle (PhA) values were obtained using the following equations: Z² = R² + Xc² and PhA = arctan(Xc/R) with a conversion factor of 180°/π. The volumetric readings, which were provided with the built-in BIA algorithm of the InBody S10, were collected for total body water, intracellular water, and extracellular water. 
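The conversion from raw Z and Xc to R, PhA, and the height-standardized RXc coordinates follows directly from these equations. A minimal sketch; the Z, Xc, and height values below are hypothetical, not study data:

```python
import math

# Derive whole-body resistance (R) and phase angle (PhA) from raw
# impedance (Z) and reactance (Xc), per Z^2 = R^2 + Xc^2 and
# PhA = arctan(Xc/R) * 180 / pi. Inputs below are illustrative only.

def resistance(z_ohm, xc_ohm):
    """Resistance in ohms at the same frequency as Z and Xc."""
    return math.sqrt(z_ohm**2 - xc_ohm**2)

def phase_angle(z_ohm, xc_ohm):
    """Phase angle in degrees."""
    r = resistance(z_ohm, xc_ohm)
    return math.degrees(math.atan(xc_ohm / r))

# Hypothetical 50 kHz reading for a subject of height 1.75 m:
z, xc, height_m = 500.0, 55.0, 1.75
r = resistance(z, xc)
print(round(r, 1), round(phase_angle(z, xc), 1))      # -> 497.0 6.3
print(round(r / height_m, 1), round(xc / height_m, 1))  # RXc coordinates (ohm/m)
```

Standardizing R and Xc by height (ohm/m) is what places an individual vector on the RXc graph for comparison against the reference ellipses.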
Special care was given to clean with wet tissues the skin in contact with the electrodes and to spread out the arms and legs such that they did not touch any other part of the body in the supine position. Statistical analysis: Continuous variables are expressed as the means ± standard deviations (SDs) and were compared using Student's t-test. Categorical variables are presented as frequencies with percentages in parentheses and were compared using Fisher's exact probability test. The Pearson correlation coefficient was calculated to examine the association between two continuous variables. 
The impedance measurements were standardized with heights (H) and plotted on an R/H versus Xc/H graph (RXc graph). Given the assumption that the two variables were normally distributed and correlated with each other, the ellipsoid joint probability contours were constructed according to the probability density function of the multivariate normal distribution (Supplementary Data 1) [10]. Three gender-specific tolerance ellipses were drawn within which the vector for an individual subject falls with a probability of 50%, 75%, and 95%. Gender-specific 95% confidence ellipses for mean vectors were constructed using the means and SDs of R/H and Xc/H. The differences between mean vectors were considered significant when their 95% confidence ellipses did not overlap according to Hotelling's T² test. Statistical significance was defined as P < 0.05. All statistical calculations were performed using SPSS software version 22.0 (IBM SPSS Inc., Chicago, IL, USA). Ethics statement: This study was approved by the Institutional Review Board of the Pusan National University Hospital (H-1601-005-037), and written informed consent was obtained from each patient.
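Under the bivariate normal assumption, a p-probability tolerance ellipse has its axes along the eigenvectors of the covariance matrix of (R/H, Xc/H), with semi-axis lengths scaled by the chi-square (2 df) quantile, for which the closed form is -2·ln(1 - p). A minimal sketch of that geometry; the SDs and correlation below are illustrative, not the study's values:

```python
import math

# Tolerance-ellipse geometry on the RXc plane for bivariate normal
# (R/H, Xc/H). A p-probability contour has squared Mahalanobis radius
# -2*ln(1-p), the chi-square quantile with 2 degrees of freedom.
# The SDs and correlation below are illustrative only.

def ellipse_params(sd_x, sd_y, corr, p):
    """Return (semi_major, semi_minor, angle_deg) of the p-tolerance ellipse."""
    cov = corr * sd_x * sd_y
    vx, vy = sd_x**2, sd_y**2
    # Eigenvalues of the 2x2 covariance matrix via trace/determinant.
    tr, det = vx + vy, vx * vy - cov**2
    lam1 = tr / 2 + math.sqrt((tr / 2) ** 2 - det)
    lam2 = tr / 2 - math.sqrt((tr / 2) ** 2 - det)
    q = -2.0 * math.log(1.0 - p)  # chi-square(2 df) quantile for probability p
    angle = 0.5 * math.degrees(math.atan2(2 * cov, vx - vy))
    return math.sqrt(lam1 * q), math.sqrt(lam2 * q), angle

major, minor, angle = ellipse_params(sd_x=55.0, sd_y=5.0, corr=0.6, p=0.95)
print(round(major, 1), round(minor, 1), round(angle, 1))
```

The positive correlation between R/H and Xc/H is what tilts the major axis upward, matching the inclined ellipses reported on the RXc graph.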
RESULTS
The mean ages of the men and women were 37.3 ± 11.5 (range, 22–83) and 37.9 ± 14.9 (range, 21–82) years, respectively, without a significant difference (P = 0.720) (Table 1). The men were significantly taller and heavier than the women. The body mass index (BMI) and systolic and diastolic blood pressures were significantly higher in the men, but the prevalence of hypertension was similar between the genders. In the women, Z, R, Xc and their standardized values (R/H and Xc/H) were significantly higher, while the phase angle (PhA) was significantly lower, than the corresponding values for the men. Regarding the estimated body fluid volume using the built-in conventional BIA algorithm, the mean TBW was 42.75 ± 4.96 litres in the men and 28.92 ± 2.89 litres in the women (P < 0.001). All the estimates of body fluid compartments were significantly larger, although the ratios of ECW/TBW and ECW/ICW were significantly lower, in men (Supplementary Table 1). The associations between impedance values and age were presented in Supplementary Table 2. Results are expressed as the means ± standard deviations or frequencies (percentages). BMI = body mass index, Z = impedance, R = resistance, Xc = reactance, R/H = resistance normalized by height, Xc/H = reactance normalized by height, PhA = phase angle. The gender-specific normalized bioimpedance vectors were plotted on the RXc graph. The tolerance ellipses for men (Fig. 1) had a center of 306.3 Ω/m and 34.9 Ω/m, and the lengths of the semi-major and minor axes were 85.1 Ω/m and 7.0 Ω/m for the 95% tolerance, 57.9 Ω/m and 4.8 Ω/m for the 75% tolerance, and 41.0 Ω/m and 3.4 Ω/m for the 50% tolerance ellipses, respectively. The slopes of the major and minor axes were 42.5° and −89.5°, respectively. The tolerance ellipses for women (Fig. 2) had a center of 425.6 Ω/m and 39.7 Ω/m. 
The lengths of the semi-major and minor axes were 113.2 Ω/m and 8.1 Ω/m for the 95% tolerance, 77.0 Ω/m and 5.5 Ω/m for the 75% tolerance, and 54.4 Ω/m and 3.9 Ω/m for the 50% tolerance ellipses, respectively. The slopes of the major and minor axes were 36.8° and −89.6°, respectively. The ellipses for the women were larger, and their mean vector deviated significantly toward the upper right compared with that for the men (Hotelling's T² = 677.087; P < 0.001). The confidence ellipses across the age groups are depicted in Supplementary Fig. 1. R/H = resistance normalized by height, Xc/H = reactance normalized by height, R = resistance, Xc = reactance. Comparison with other ethnic groups The age, BMI, height, and height-standardized impedance values of Northern Italians, Spaniards, and Non-Hispanic Whites, Non-Hispanic Blacks, and Mexican Americans who lived in the United States were collected from published studies and compared with the values of Koreans in Table 2.111213 The men were taller and had lower values of both R/H and Xc/H than the women across all the populations. The 95% confidence ellipses for those groups were constructed using the reported means and SDs of R/H and Xc/H (Fig. 3). The ellipses for Korean men and women were present to the lower right side of those for Mexican Americans, Non-Hispanic individuals of African descent, and Non-Hispanic Caucasians and to the upper right side of those for Spaniards and Northern Italians without an overlap. Variables are expressed as ranges or the means ± standard deviations. BMI = body mass index, R/H = resistance normalized by height, Xc = reactance, Xc/H = reactance normalized by height, r = Pearson's correlation coefficient between R/H and Xc/H, NA = not applicable. R/H = resistance normalized by height, Xc/H = reactance normalized by height.
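With the reported ellipse parameters, an individual's height-normalized vector can be classified as inside or outside a given tolerance ellipse. A minimal geometric sketch in Python (hypothetical helper, not from the article; defaults are the Korean male 75% tolerance values reported above, and it assumes the major and minor axes are orthogonal in the Ω/m plane, which only approximates the reported slopes of 42.5° and −89.5°):

```python
import math

def inside_tolerance_ellipse(r_h, xc_h, center=(306.3, 34.9),
                             semi_major=57.9, semi_minor=4.8,
                             angle_deg=42.5):
    """Return True if a subject's (R/H, Xc/H) vector, in ohm/m, falls
    within the tolerance ellipse. Defaults: 75% ellipse for Korean men;
    swap in the 50%/95% axes or the female values as needed."""
    theta = math.radians(angle_deg)
    dx, dy = r_h - center[0], xc_h - center[1]
    # project the offset onto the major and minor axes of the ellipse
    u = dx * math.cos(theta) + dy * math.sin(theta)
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return (u / semi_major) ** 2 + (v / semi_minor) ** 2 <= 1.0
```

This mirrors the clinical use suggested for BIVA, where the 75% tolerance ellipse of a healthy reference group serves as the boundary for normal hydration.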
null
null
[ "Measurements", "Statistical analysis", "Ethics statement", "Comparison with other ethnic groups" ]
[ "The height and weight of the participants were measured using a digital scale equipped with a stadiometer. Upper arm circumference was measured on the non-dominant arm at the mid-point between the shoulder and elbow using a tape measure. Blood pressure was measured on both arms, and the higher reading was recorded. Bioelectrical impedance was measured using a tetra-polar eight-point tactile electrode system (InBody S10®; InBody Co., Ltd., Seoul, Korea) that provides a set of raw bioelectrical measurements of Z and Xc, each for five parts of the body (both arms and legs and the trunk) in multiple frequencies ranging from 1 kHz to 1,000 kHz. Whole-body Z and Xc values were each the sum of those readings for the right arm, right leg and trunk at 50 kHz. Whole-body R and phase angle (PhA) values were obtained using the following equations: Z² = R² + Xc² and arctangent (Xc/R) with a conversion factor of 180°/π. The volumetric readings, which were provided with the built-in BIA algorithm of InBody S10, were collected for total body water, intracellular water, and extracellular water. Special care was given to clean with wet tissues the skin in contact with the electrodes and to spread out the arms and legs such that they did not touch any other part of the body in the supine position.", "Continuous variables are expressed as the means ± standard deviations (SDs) and were compared using Student's t-test. Categorical variables are presented as frequencies with percentages in parenthesis and were compared using Fisher's exact probability test. The Pearson correlation coefficient was calculated to examine the association between two continuous variables. The impedance measurements were standardized with heights (H) and plotted on an R/H versus Xc/H graph (RXc graph). 
Given the assumption that the two variables were normally distributed and correlated with each other, the ellipsoid joint probability contours were constructed according to the probability density function of the multivariate normal distribution (Supplementary Data 1).10 Three gender-specific tolerance ellipses were drawn within which the vector for an individual subject falls with a probability of 50%, 75%, and 95%. Gender-specific 95% confidence ellipses for mean vectors were constructed using the means and SDs of R/H and Xc/H. The differences between mean vectors were considered significant when their 95% confidence ellipses did not overlap according to Hotelling's T² test. Statistical significance was defined as P < 0.05. All statistical calculations were performed using SPSS software version 22.0 (IBM SPSS Inc., Chicago, IL, USA).", "This study was approved by the Institutional Review Board of the Pusan National University Hospital (H-1601-005-037), and written informed consent was obtained from each patient.", "The age, BMI, height, and height-standardized impedance values of Northern Italians, Spaniards, and Non-Hispanic Whites, Non-Hispanic Blacks, and Mexican Americans who lived in the United States were collected from published studies and compared with the values of Koreans in Table 2.111213 The men were taller and had lower values of both R/H and Xc/H than the women across all the populations. The 95% confidence ellipses for those groups were constructed using the reported means and SDs of R/H and Xc/H (Fig. 3). 
The ellipses for Korean men and women were present to the lower right side of those for Mexican Americans, Non-Hispanic individuals of African descent, and Non-Hispanic Caucasians and to the upper right side of those for Spaniards and Northern Italians without an overlap.\nVariables are expressed as ranges or the means ± standard deviations.\nBMI = body mass index, R/H = resistance normalized by height, Xc = reactance, Xc/H = reactance normalized by height, r = Pearson's correlation coefficient between R/H and Xc/H, NA = not applicable.\nR/H = resistance normalized by height, Xc/H = reactance normalized by height." ]
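The ellipse construction described in the statistical-analysis text (joint probability contours of a bivariate normal; the authors' exact procedure is in their Supplementary Data 1, not reproduced here) can be sketched via eigen-decomposition of the covariance matrix of R/H and Xc/H. For 2 degrees of freedom, the chi-square quantile has the closed form −2·ln(1 − p). A hedged Python sketch:

```python
import math
import numpy as np

def tolerance_ellipse(sd_r, sd_xc, r, p):
    """Semi-axis lengths and major-axis angle (degrees) of the ellipse
    enclosing a fraction p of a bivariate normal with standard
    deviations sd_r, sd_xc and Pearson correlation r."""
    cov = np.array([[sd_r**2, r * sd_r * sd_xc],
                    [r * sd_r * sd_xc, sd_xc**2]])
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    k = -2.0 * math.log(1.0 - p)           # chi-square quantile, 2 df
    semi_minor, semi_major = np.sqrt(k * evals)
    major = evecs[:, 1]                    # eigenvector of largest eigenvalue
    angle = math.degrees(math.atan2(major[1], major[0]))
    return semi_major, semi_minor, angle
```

Note that the angle returned here is geometric; the slopes reported in the article are drawn on an anisometrically scaled RXc plot, a point the discussion addresses.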
[ null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Measurements", "Statistical analysis", "Ethics statement", "RESULTS", "Comparison with other ethnic groups", "DISCUSSION" ]
[ "Assessment of accurate volume status is of great importance in patients with various cardiac conditions or renal insufficiency regarding diagnosis, monitoring the response to therapy, and prognosis.12 Bioelectrical impedance analysis (BIA) is a widely used tool to assess the volume status of patients due to its low cost, non-invasiveness and ease of use.3 BIA analysers report the amount of body fluid in volumetric units (usually litres) using their own algorithms to convert the electrical measurements from human cells into volumetric numbers. However, those algorithms are derived from various mathematical regression models and assumptions that are usually derived from and validated in healthy and steady-state populations. Thus, the accuracy of the estimated values from conventional BIA may be compromised in patients with abnormal conditions, such as congestive heart failure or chronic and acute renal insufficiency, in whom the use of raw electrical data would have greater strength.45678 Bioelectrical impedance vector analysis (BIVA) is an alternative method to overcome such limitations of conventional BIA methods. BIVA creates vector plots of impedance (Z) on height standardized resistance (R) and reactance (Xc) axes, which is called the RXc graph, from raw impedance data instead of reporting volumetric values as is done in conventional BIA.9\nFor the correct interpretation of a patient's volume status using a BIVA plot, normal reference values are needed, which might be different according to gender and ethnicity. There were several articles that showed normal electrical vector plots for the interpretation of volume status of diseased patients in other countries. However, to the best of our knowledge, there have been no studies reporting standardized reference vector plots for Koreans. 
The aim of this study was to establish gender-specific normal reference vector plots for healthy Koreans and compare them with those of other ethnic groups.", "Subjects with no history of medical illness except hypertension were recruited. Well-trained and dedicated physicians conducted physical examinations to ascertain the health and volume status of subjects and assessed functional limitations of daily activities. The subjects were considered to be euvolemic when they had a steady-state body weight and exhibited normal skin turgor without pitting oedema or jugular venous distention. We excluded subjects who had abnormal chest and heart sounds, irregular heartbeats, pale conjunctiva and icteric sclera. A total of 242 healthy subjects were recruited (137 men and 105 women) from Busan metropolitan area between December 2015 and November 2016.\n Measurements The height and weight of the participants were measured using a digital scale equipped with a stadiometer. Upper arm circumference was measured on the non-dominant arm at the mid-point between the shoulder and elbow using a tape measure. Blood pressure was measured on both arms, and the higher reading was recorded. Bioelectrical impedance was measured using a tetra-polar eight-point tactile electrode system (InBody S10®; InBody Co., Ltd., Seoul, Korea) that provides a set of raw bioelectrical measurements of Z and Xc, each for five parts of the body (both arms and legs and the trunk) in multiple frequencies ranging from 1 kHz to 1,000 kHz. Whole-body Z and Xc values were each the sum of those readings for the right arm, right leg and trunk at 50 kHz. Whole-body R and phase angle (PhA) values were obtained using the following equations: Z² = R² + Xc² and arctangent (Xc/R) with a conversion factor of 180°/π. The volumetric readings, which were provided with the built-in BIA algorithm of InBody S10, were collected for total body water, intracellular water, and extracellular water. 
Special care was given to clean with wet tissues the skin in contact with the electrodes and to spread out the arms and legs such that they did not touch any other part of the body in the supine position.\n Statistical analysis Continuous variables are expressed as the means ± standard deviations (SDs) and were compared using Student's t-test. Categorical variables are presented as frequencies with percentages in parenthesis and were compared using Fisher's exact probability test. The Pearson correlation coefficient was calculated to examine the association between two continuous variables. 
The impedance measurements were standardized with heights (H) and plotted on an R/H versus Xc/H graph (RXc graph). Given the assumption that the two variables were normally distributed and correlated with each other, the ellipsoid joint probability contours were constructed according to the probability density function of the multivariate normal distribution (Supplementary Data 1).10 Three gender-specific tolerance ellipses were drawn within which the vector for an individual subject falls with a probability of 50%, 75%, and 95%. Gender-specific 95% confidence ellipses for mean vectors were constructed using the means and SDs of R/H and Xc/H. The differences between mean vectors were considered significant when their 95% confidence ellipses did not overlap according to Hotelling's T² test. Statistical significance was defined as P < 0.05. All statistical calculations were performed using SPSS software version 22.0 (IBM SPSS Inc., Chicago, IL, USA).\n Ethics statement This study was approved by the Institutional Review Board of the Pusan National University Hospital (H-1601-005-037), and written informed consent was obtained from each patient.", "The height and weight of the participants were measured using a digital scale equipped with a stadiometer. Upper arm circumference was measured on the non-dominant arm at the mid-point between the shoulder and elbow using a tape measure. Blood pressure was measured on both arms, and the higher reading was recorded. Bioelectrical impedance was measured using a tetra-polar eight-point tactile electrode system (InBody S10®; InBody Co., Ltd., Seoul, Korea) that provides a set of raw bioelectrical measurements of Z and Xc, each for five parts of the body (both arms and legs and the trunk) in multiple frequencies ranging from 1 kHz to 1,000 kHz. Whole-body Z and Xc values were each the sum of those readings for the right arm, right leg and trunk at 50 kHz. Whole-body R and phase angle (PhA) values were obtained using the following equations: Z² = R² + Xc² and arctangent (Xc/R) with a conversion factor of 180°/π. The volumetric readings, which were provided with the built-in BIA algorithm of InBody S10, were collected for total body water, intracellular water, and extracellular water. 
Special care was given to clean with wet tissues the skin in contact with the electrodes and to spread out the arms and legs such that they did not touch any other part of the body in the supine position.", "Continuous variables are expressed as the means ± standard deviations (SDs) and were compared using Student's t-test. Categorical variables are presented as frequencies with percentages in parenthesis and were compared using Fisher's exact probability test. The Pearson correlation coefficient was calculated to examine the association between two continuous variables. The impedance measurements were standardized with heights (H) and plotted on an R/H versus Xc/H graph (RXc graph). Given the assumption that the two variables were normally distributed and correlated with each other, the ellipsoid joint probability contours were constructed according to the probability density function of the multivariate normal distribution (Supplementary Data 1).10 Three gender-specific tolerance ellipses were drawn within which the vector for an individual subject falls with a probability of 50%, 75%, and 95%. Gender-specific 95% confidence ellipses for mean vectors were constructed using the means and SDs of R/H and Xc/H. The differences between mean vectors were considered significant when their 95% confidence ellipses did not overlap according to Hotelling's T² test. Statistical significance was defined as P < 0.05. All statistical calculations were performed using SPSS software version 22.0 (IBM SPSS Inc., Chicago, IL, USA).", "This study was approved by the Institutional Review Board of the Pusan National University Hospital (H-1601-005-037), and written informed consent was obtained from each patient.", "The mean ages of the men and women were 37.3 ± 11.5 (range, 22–83) and 37.9 ± 14.9 (range, 21–82) years, respectively, without a significant difference (P = 0.720) (Table 1). The men were significantly taller and heavier than the women. 
The body mass index (BMI) and systolic and diastolic blood pressures were significantly higher in the men, but the prevalence of hypertension was similar between the genders. In the women, Z, R, Xc and their standardized values (R/H and Xc/H) were significantly higher, and the phase angle (PhA) significantly lower, than the corresponding values in the men. Regarding the estimated body fluid volume using the built-in conventional BIA algorithm, the mean total body water (TBW) was 42.75 ± 4.96 litres in the men and 28.92 ± 2.89 litres in the women (P < 0.001). All the estimates of body fluid compartments were significantly larger in men, although the ratios of ECW/TBW and ECW/ICW were significantly lower (Supplementary Table 1). The associations between impedance values and age are presented in Supplementary Table 2.\nResults are expressed as the means ± standard deviations or frequencies (percentages).\nBMI = body mass index, Z = impedance, R = resistance, Xc = reactance, R/H = resistance normalized by height, Xc/H = reactance normalized by height, PhA = phase angle.\nThe gender-specific normalized bioimpedance vectors were plotted on the RXc graph. The tolerance ellipses for men (Fig. 1) had a center at 306.3 Ω/m (R/H) and 34.9 Ω/m (Xc/H), and the lengths of the semi-major and minor axes were 85.1 Ω/m and 7.0 Ω/m for the 95% tolerance, 57.9 Ω/m and 4.8 Ω/m for the 75% tolerance, and 41.0 Ω/m and 3.4 Ω/m for the 50% tolerance ellipses, respectively. The slopes of the major and minor axes were 42.5° and −89.5°, respectively. The tolerance ellipses for women (Fig. 2) had a center at 425.6 Ω/m (R/H) and 39.7 Ω/m (Xc/H). The lengths of the semi-major and minor axes were 113.2 Ω/m and 8.1 Ω/m for the 95% tolerance, 77.0 Ω/m and 5.5 Ω/m for the 75% tolerance, and 54.4 Ω/m and 3.9 Ω/m for the 50% tolerance ellipses, respectively. The slopes of the major and minor axes were 36.8° and −89.6°, respectively. 
The ellipses for the women were larger, and their mean vector deviated significantly toward the upper right compared with that for the men (Hotelling's T² = 677.087; P < 0.001). The confidence ellipses across the age groups are depicted in Supplementary Fig. 1.\nR/H = resistance normalized by height, Xc/H = reactance normalized by height, R = resistance, Xc = reactance.\n Comparison with other ethnic groups The age, BMI, height, and height-standardized impedance values of Northern Italians, Spaniards, and Non-Hispanic Whites, Non-Hispanic Blacks, and Mexican Americans who lived in the United States were collected from published studies and compared with the values of Koreans in Table 2.111213 The men were taller and had lower values of both R/H and Xc/H than the women across all the populations. The 95% confidence ellipses for those groups were constructed using the reported means and SDs of R/H and Xc/H (Fig. 3). 
The ellipses for Korean men and women were present to the lower right side of those for Mexican Americans, Non-Hispanic individuals of African descent, and Non-Hispanic Caucasians and to the upper right side of those for Spaniards and Northern Italians without an overlap.\nVariables are expressed as ranges or the means ± standard deviations.\nBMI = body mass index, R/H = resistance normalized by height, Xc = reactance, Xc/H = reactance normalized by height, r = Pearson's correlation coefficient between R/H and Xc/H, NA = not applicable.\nR/H = resistance normalized by height, Xc/H = reactance normalized by height.", "The age, BMI, height, and height-standardized impedance values of Northern Italians, Spaniards, and Non-Hispanic Whites, Non-Hispanic Blacks, and Mexican Americans who lived in the United States were collected from published studies and compared with the values of Koreans in Table 2.111213 The men were taller and had lower values of both R/H and Xc/H than the women across all the populations. The 95% confidence ellipses for those groups were constructed using the reported means and SDs of R/H and Xc/H (Fig. 3). 
The ellipses for Korean men and women were present to the lower right side of those for Mexican Americans, Non-Hispanic individuals of African descent, and Non-Hispanic Caucasians and to the upper right side of those for Spaniards and Northern Italians without an overlap.\nVariables are expressed as ranges or the means ± standard deviations.\nBMI = body mass index, R/H = resistance normalized by height, Xc = reactance, Xc/H = reactance normalized by height, r = Pearson's correlation coefficient between R/H and Xc/H, NA = not applicable.\nR/H = resistance normalized by height, Xc/H = reactance normalized by height.", "BIA has been used widely to estimate the volume status of various patients in clinics.1415 However, the accuracy of BIA may be compromised when the volumetric reports are used rather than the directly measured raw indexes, especially in patients with congestive heart failure or renal insufficiency because the converting algorithms are derived from and validated in healthy people and are highly dependent on body weight value as well.246 By contrast, BIVA does not require those algorithms and body weight because it uses raw electrical measurements and height, which is a more constant variable. Thus, there are advantages of BIVA in the assessment of such patients with abnormal health conditions. However, BIVA requires normal reference plots for determination of the position of individual vectors.91617\nIn previous studies, it was demonstrated that the position of impedance vectors corresponds well to the change in individual volume and nutritional status, as the vector is displaced parallel to the major axis of the reference tolerance ellipses, indicating a change in hydration status, and to the minor axis, indicating a change in cell mass and quality.2131819 Piccoli et al.20 suggested the reference 75% tolerance ellipse of a healthy population as the boundary for normal hydration status in dialysis-dependent patients. 
Based on those properties of BIVA, its clinical applications for adjusting optimal dry weight and estimating prognosis in patients dependent on hemodialysis have been demonstrated.21 In other studies, BIVA was able to assess the hydration status and its shifts appropriately over the clinical course, implying a role in the diagnosis and treatment of patients with heart failure.2223\nIn this study, gender-specific normal reference vector plots were suggested using InBody S10. We presented three different reference tolerance ellipses on the RXc graph for the probabilities of 50%, 75%, and 95% for each gender of the Korean population. The tolerance ellipses for women were situated on the upper right side and spread more widely than those for men. This pattern has been observed in other ethnic populations such as Northern Italians and Spaniards, and it appears to be caused by the larger mean values and greater variations of the bioimpedance measurements (Z, R, and Xc) in women than in men.24\nAlthough the same pattern of gender differences in the tolerance ellipses was consistently observed across all other ethnic groups, the 95% confidence ellipses were located differently from each other. This result suggested that one set of reference values drawn from one study group could not be applied to the other groups interchangeably. 
The differences may be caused by the disparities in body composition, ethnicity, health status, and devices used across studies.1425 In a United States study from which the bioimpedance data were collected for Non-Hispanic Caucasians, Non-Hispanic individuals of African descent, and Mexican-Americans using the third National Health and Nutrition Examination Survey, 23% of the participants had abnormal health conditions.13 This result was in stark contrast to the fact that this study recruited only healthy subjects without apparent abnormal physical signs.\nIt is noteworthy that the minor axis of the ellipses in our study was more vertical, i.e., less orthogonal to the major axis, on the RXc graph. This pattern differed markedly from that depicted in previous studies, in which the minor axis was more slanted at a right angle to the major axis. However, given the anisometric scale of the x (ranging from 0 to 600) and y (ranging from 0 to 60) axes on the RXc graph, we believe that the angle between the major and the minor axes should not be a right angle on this scale of the graph. This difference could partly be explained by the methodological difference used to draw the ellipses. In the first report by Piccoli et al.,9 they used their own modified equations for statistical calculations. By contrast, we used the joint probability density function for the bivariate normal distribution and calculated eigenvectors and eigenvalues to draw the ellipses without any arbitrary modifications (Supplementary Data 1).10\nOne of the limitations of BIVA compared with conventional BIA is that it cannot discriminate the intracellular and extracellular water components from the total body water component. Multifrequency bioimpedance analysis may have the potential to discriminate those components, but its relevance should be demonstrated in further studies. Otherwise, both BIVA and BIA could be used clinically in a complementary manner (Supplementary Fig. 2).\nThere are several factors to be considered when interpreting the results of this study. First, the number of subjects recruited for this study may not be sufficiently large to represent the entire Korean population. However, the fact that we enrolled only healthy subjects, screened by a thorough physical examination, renders our results more accurate for healthy populations. Second, this study used only one device to measure the electrical properties. Thus, our results may not be comparable to the data obtained by other devices.2627 To the best of our knowledge, this is the first study establishing reference ellipses on the RXc graph for healthy Koreans.\nIn conclusion, this study presented normal reference BIA parameters and corresponding tolerance and confidence ellipses on the RXc graph, which is of paramount importance for the clinical interpretation of an individual vector position. There were also noticeable differences in reference ellipses with regard to gender and ethnic groups. We believe that these basic data could be used for the accurate interpretation of BIA-assessed volume status in Korean patients with heart failure or renal disease." ]
[ "intro", "methods", null, null, null, "results", null, "discussion" ]
[ "Body Fluid Compartments", "Blood Volume", "Electric Impedance", "Congestive Heart Failure", "Renal Insufficiency", "Vector" ]
INTRODUCTION: Assessment of accurate volume status is of great importance in patients with various cardiac conditions or renal insufficiency regarding diagnosis, monitoring the response to therapy, and prognosis.12 Bioelectrical impedance analysis (BIA) is a widely used tool to assess the volume status of patients due to its low cost, non-invasiveness and ease of use.3 BIA analysers report the amount of body fluid in volumetric units (usually litres) using their own algorithms to convert the electrical measurements from human cells into volumetric numbers. However, those algorithms are derived from various mathematical regression models and assumptions that are usually derived from and validated in healthy and steady-state populations. Thus, the accuracy of the estimated values from conventional BIA may be compromised in patients with abnormal conditions, such as congestive heart failure or chronic and acute renal insufficiency, in whom the use of raw electrical data would have greater strength.45678 Bioelectrical impedance vector analysis (BIVA) is an alternative method to overcome such limitations of conventional BIA methods. BIVA creates vector plots of impedance (Z) on height standardized resistance (R) and reactance (Xc) axes, which is called the RXc graph, from raw impedance data instead of reporting volumetric values as is done in conventional BIA.9 For the correct interpretation of a patient's volume status using a BIVA plot, normal reference values are needed, which might be different according to gender and ethnicity. There were several articles that showed normal electrical vector plots for the interpretation of volume status of diseased patients in other countries. However, to the best of our knowledge, there have been no studies reporting standardized reference vector plots for Koreans. The aim of this study was to establish gender-specific normal reference vector plots for healthy Koreans and compare them with those of other ethnic groups. 
METHODS: Subjects with no history of medical illness except hypertension were recruited. Well-trained and dedicated physicians conducted physical examinations to ascertain the health and volume status of subjects and assessed functional limitations of daily activities. The subjects were considered euvolemic when they had a steady-state body weight and exhibited normal skin turgor without pitting oedema or jugular venous distention. We excluded subjects who had abnormal chest and heart sounds, irregular heartbeats, pale conjunctivae or icteric sclerae. A total of 242 healthy subjects (137 men and 105 women) were recruited from the Busan metropolitan area between December 2015 and November 2016. Measurements: The height and weight of the participants were measured using a digital scale equipped with a stadiometer. Upper arm circumference was measured on the non-dominant arm at the mid-point between the shoulder and elbow using a tape measure. Blood pressure was measured on both arms, and the higher reading was recorded. Bioelectrical impedance was measured using a tetra-polar eight-point tactile electrode system (InBody S10®; InBody Co., Ltd., Seoul, Korea) that provides a set of raw bioelectrical measurements of Z and Xc for each of five parts of the body (both arms, both legs and the trunk) at multiple frequencies ranging from 1 kHz to 1,000 kHz. Whole-body Z and Xc values were each the sum of the readings for the right arm, right leg and trunk at 50 kHz. Whole-body R and phase angle (PhA) values were obtained from the equations Z² = R² + Xc² and PhA = arctangent(Xc/R) with a conversion factor of 180°/π. The volumetric readings provided by the built-in BIA algorithm of the InBody S10 were collected for total body water, intracellular water, and extracellular water. 
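The conversion from a measured (Z, Xc) pair to whole-body R and PhA described under Measurements can be sketched as follows; this is a minimal illustration with hypothetical input values, not study data:

```python
import math

def resistance_and_phase_angle(z_ohm: float, xc_ohm: float) -> tuple[float, float]:
    """Derive resistance R (ohm) and phase angle PhA (degrees) from
    impedance Z and reactance Xc at one frequency, using
    Z^2 = R^2 + Xc^2 and PhA = arctan(Xc/R) * 180/pi."""
    r_ohm = math.sqrt(z_ohm ** 2 - xc_ohm ** 2)
    pha_deg = math.atan(xc_ohm / r_ohm) * 180.0 / math.pi
    return r_ohm, pha_deg

# Hypothetical 50 kHz whole-body readings: Z = 500 ohm, Xc = 50 ohm
r, pha = resistance_and_phase_angle(500.0, 50.0)
```

In practice the whole-body Z and Xc would first be summed from the right arm, right leg and trunk segments, as described above.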
Special care was taken to clean the skin in contact with the electrodes with wet tissues and to spread out the arms and legs so that they did not touch any other part of the body in the supine position. Statistical analysis: Continuous variables are expressed as the means ± standard deviations (SDs) and were compared using Student's t-test. Categorical variables are presented as frequencies with percentages in parentheses and were compared using Fisher's exact probability test. The Pearson correlation coefficient was calculated to examine the association between two continuous variables. The impedance measurements were standardized by height (H) and plotted on an R/H versus Xc/H graph (RXc graph). Under the assumption that the two variables were normally distributed and correlated with each other, ellipsoid joint probability contours were constructed according to the probability density function of the multivariate normal distribution (Supplementary Data 1).10 Three gender-specific tolerance ellipses were drawn within which the vector for an individual subject falls with a probability of 50%, 75%, and 95%. Gender-specific 95% confidence ellipses for the mean vectors were constructed using the means and SDs of R/H and Xc/H. The differences between mean vectors were considered significant when their 95% confidence ellipses did not overlap according to Hotelling's T2 test. Statistical significance was defined as P < 0.05. All statistical calculations were performed using SPSS software version 22.0 (IBM SPSS Inc., Chicago, IL, USA). Ethics statement: This study was approved by the Institutional Review Board of the Pusan National University Hospital (H-1601-005-037), and written informed consent was obtained from each patient. 
RESULTS: The mean ages of the men and women were 37.3 ± 11.5 (range, 22–83) and 37.9 ± 14.9 (range, 21–82) years, respectively, without a significant difference (P = 0.720) (Table 1). 
The men were significantly taller and heavier than the women. The body mass index (BMI) and systolic and diastolic blood pressures were significantly higher in the men, but the prevalence of hypertension was similar between the genders. In the women, Z, R, Xc and their standardized values (R/H and Xc/H) were significantly higher, while the phase angle (PhA) was significantly lower, than the corresponding values for the men. Regarding the estimated body fluid volume using the built-in conventional BIA algorithm, the mean TBW was 42.75 ± 4.96 litres in the men and 28.92 ± 2.89 litres in the women (P < 0.001). All the estimates of body fluid compartments were significantly larger, although the ratios of ECW/TBW and ECW/ICW were significantly lower, in men (Supplementary Table 1). The associations between impedance values and age were presented in Supplementary Table 2. Results are expressed as the means ± standard deviations or frequencies (percentages). BMI = body mass index, Z = impedance, R = resistance, Xc = reactance, R/H = resistance normalized by height, Xc/H = reactance normalized by height, PhA = phase angle. The gender-specific normalized bioimpedance vectors were plotted on the RXc graph. The tolerance ellipses for men (Fig. 1) had a center of 306.3 Ω/m and 34.9 Ω/m, and the lengths of the semi-major and minor axes were 85.1 Ω/m and 7.0 Ω/m for the 95% tolerance, 57.9 Ω/m and 4.8 Ω/m for the 75% tolerance, and 41.0 Ω/m and 3.4 Ω/m for the 50% tolerance ellipses, respectively. The slopes of the major and minor axes were 42.5° and −89.5°, respectively. The tolerance ellipses for women (Fig. 2) had a center of 425.6 Ω/m and 39.7 Ω/m. The lengths of the semi-major and minor axes were 113.2 Ω/m and 8.1 Ω/m for the 95% tolerance, 77.0 Ω/m and 5.5 Ω/m for the 75% tolerance, and 54.4 Ω/m and 3.9 Ω/m for the 50% tolerance ellipses, respectively. The slopes of the major and minor axes were 36.8° and −89.6°, respectively. 
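Given gender-specific SDs and the correlation between R/H and Xc/H, the tolerance ellipses above can be reconstructed from the bivariate normal density. The sketch below is our own minimal implementation (not the authors' Supplementary Data 1 code), using the closed-form chi-square quantile for 2 degrees of freedom, −2 ln(1 − p); input values in the usage line are hypothetical. Note that the slopes reported in the Results are read off the anisometric axis scale of the RXc graph, so they differ from the geometric angle returned here.

```python
import math

def tolerance_ellipse(sd_x: float, sd_y: float, r: float, p: float):
    """Semi-major/semi-minor axis lengths and major-axis angle (degrees)
    of a bivariate-normal tolerance ellipse expected to contain a
    fraction p of individual vectors (2 df quantile: -2*ln(1-p))."""
    q = -2.0 * math.log(1.0 - p)
    sxx, syy, sxy = sd_x ** 2, sd_y ** 2, r * sd_x * sd_y
    # Eigenvalues of the 2x2 covariance matrix, in closed form
    mean = (sxx + syy) / 2.0
    disc = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    lam_major, lam_minor = mean + disc, mean - disc
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.sqrt(q * lam_major), math.sqrt(q * lam_minor), math.degrees(angle)

# Hypothetical values: SD(R/H) = 35 ohm/m, SD(Xc/H) = 4 ohm/m, r = 0.6
major, minor, deg = tolerance_ellipse(35.0, 4.0, 0.6, 0.95)
```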
The sizes of the ellipses for the women were larger, and the mean vector was significantly deviated to the upper right compared with that for the men (Hotelling's T2 = 677.087; P < 0.001). The confidence ellipses across the age groups are depicted in Supplementary Fig. 1. R/H = resistance normalized by height, Xc/H = reactance normalized by height, R = resistance, Xc = reactance. Comparison with other ethnic groups: The age, BMI, height, and height-standardized impedance values of Northern Italians, Spaniards, and Non-Hispanic Whites, Non-Hispanic Blacks, and Mexican Americans living in the United States were collected from published studies and compared with the values for Koreans in Table 2.111213 The men were taller and had lower values of both R/H and Xc/H than the women across all the populations. The 95% confidence ellipses for those groups were constructed using the reported means and SDs of R/H and Xc/H (Fig. 3). The ellipses for Korean men and women lay to the lower right of those for Mexican Americans, Non-Hispanic individuals of African descent, and Non-Hispanic Caucasians and to the upper right of those for Spaniards and Northern Italians, without overlap. Variables are expressed as ranges or the means ± standard deviations. BMI = body mass index, R/H = resistance normalized by height, Xc = reactance, Xc/H = reactance normalized by height, r = Pearson's correlation coefficient between R/H and Xc/H, NA = not applicable. R/H = resistance normalized by height, Xc/H = reactance normalized by height. 
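The two-sample Hotelling's T² comparison of mean vectors (such as the male-female comparison above) can be sketched for the bivariate case as follows; a minimal pooled-covariance implementation of the standard statistic, with illustrative inputs only:

```python
def hotelling_t2(n1, mean1, cov1, n2, mean2, cov2):
    """Two-sample Hotelling's T^2 for 2-D mean vectors, e.g. (R/H, Xc/H),
    using the pooled covariance matrix."""
    # Pool the two sample covariance matrices
    sp = [[((n1 - 1) * cov1[i][j] + (n2 - 1) * cov2[i][j]) / (n1 + n2 - 2)
           for j in range(2)] for i in range(2)]
    # Invert the 2x2 pooled matrix
    det = sp[0][0] * sp[1][1] - sp[0][1] * sp[1][0]
    inv = [[sp[1][1] / det, -sp[0][1] / det],
           [-sp[1][0] / det, sp[0][0] / det]]
    # Quadratic form on the mean difference
    d = [mean1[0] - mean2[0], mean1[1] - mean2[1]]
    quad = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    return (n1 * n2) / (n1 + n2) * quad
```

The study itself declared mean vectors different when their 95% confidence ellipses did not overlap, with the T² statistic providing the formal test.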
DISCUSSION: BIA has been widely used to estimate the volume status of various patients in clinics.1415 However, the accuracy of BIA may be compromised when the volumetric reports are used rather than the directly measured raw indexes, especially in patients with congestive heart failure or renal insufficiency, because the converting algorithms are derived from and validated in healthy people and are highly dependent on body weight.246 By contrast, BIVA does not require those algorithms or body weight because it uses raw electrical measurements and height, which is a more constant variable. Thus, BIVA has advantages in the assessment of patients with abnormal health conditions. However, BIVA requires normal reference plots for determination of the position of individual vectors.91617 Previous studies demonstrated that the position of the impedance vector corresponds well to changes in individual volume and nutritional status: displacement parallel to the major axis of the reference tolerance ellipses indicates a change in hydration status, and displacement along the minor axis indicates a change in cell mass and quality.2131819 Piccoli et al.20 suggested the reference 75% tolerance ellipse of a healthy population as the boundary for normal hydration status in dialysis-dependent patients. 
Based on those properties of BIVA, clinical applications for adjusting optimal dry weight and estimating prognosis in patients dependent on hemodialysis have been demonstrated.21 In other studies, BIVA was able to assess hydration status and its shifts appropriately over the clinical course, implying a role in the diagnosis and treatment of patients with heart failure.2223 In this study, gender-specific normal reference vector plots were established using the InBody S10. We presented three reference tolerance ellipses on the RXc graph, for probabilities of 50%, 75%, and 95%, for each gender of the Korean population. The tolerance ellipses for women were situated on the upper right side and spread more widely than those for men. This pattern has been observed in other ethnic populations, such as Northern Italians and Spaniards, and appears to be caused by the larger mean values and greater variation of the bioimpedance measurements (Z, R, and Xc) in women than in men.24 Although the same pattern of gender differences in the tolerance ellipses was consistently observed across all other ethnic groups, the 95% confidence ellipses were located differently from each other. This result suggests that one set of reference values drawn from one study group cannot be applied to other groups interchangeably. The differences may be caused by disparities in body composition, ethnicity, health status, and the devices used across studies.1425 In the United States study from which the bioimpedance data were collected for Non-Hispanic Caucasians, Non-Hispanic individuals of African descent, and Mexican Americans using the third National Health and Nutrition Examination Survey, 23% of the participants had abnormal health conditions.13 This is in stark contrast to the present study, which recruited only healthy subjects without apparent abnormal physical signs. 
It is noteworthy that the minor axis of the ellipses in our study was more vertical, that is, in a less orthogonal position relative to the major axis on the RXc graph. This pattern differed markedly from that depicted in previous studies, in which the minor axis was slanted closer to a right angle to the major axis. However, given the anisometric scale of the x (ranging from 0 to 600) and y (ranging from 0 to 60) axes on the RXc graph, we believe that the angle between the major and minor axes should not appear as a right angle at this scale. The difference could also partly be explained by the methodology used to draw the ellipses. In the first report, Piccoli et al.9 used their own modified equations for the statistical calculations. By contrast, we used the joint probability density function of the bivariate normal distribution and calculated eigenvectors and eigenvalues to draw the ellipses without any arbitrary modifications (Supplementary Data 1).10 One limitation of BIVA compared with conventional BIA is that it cannot discriminate the intracellular and extracellular water components within total body water. Multifrequency bioimpedance analysis may have the potential to discriminate those components, but its relevance should be demonstrated in further studies. Alternatively, BIVA and BIA could be used clinically in a complementary manner (Supplementary Fig. 2). Several factors should be considered when interpreting the results of this study. First, the number of subjects recruited may not be sufficiently large to represent the entire Korean population. However, the fact that we enrolled only healthy subjects, screened by a thorough physical examination, renders our results more accurate for healthy populations. Second, this study used only one device to measure the electrical properties. 
Thus, our results may not be comparable to data obtained with other devices.2627 To the best of our knowledge, this is the first study establishing reference ellipses on the RXc graph for healthy Koreans. In conclusion, this study presented normal reference BIA parameters and the corresponding tolerance and confidence ellipses on the RXc graph, which are of paramount importance for the clinical interpretation of an individual vector position. There were also noticeable differences in the reference ellipses with regard to gender and ethnic group. We believe that these basic data can be used for the accurate interpretation of BIA-assessed volume status in Korean patients with heart failure or renal disease.
Background: Accurate volume measurement is important in the management of patients with congestive heart failure or renal insufficiency. A bioimpedance analyser can estimate total body water in litres and has been widely used in clinical practice due to its non-invasiveness and ease of results interpretation. To change impedance data to volumetric data, bioimpedance analysers use equations derived from data from healthy subjects, which may not apply to patients with other conditions. Bioelectrical impedance vector analysis (BIVA) was developed to overcome the dependence on those equations by constructing vector plots using raw impedance data. BIVA requires normal reference plots for the proper interpretation of individual vectors. The aim of this study was to construct normal reference vector plots of bioelectrical impedance for Koreans. Methods: Bioelectrical impedance measurements were collected from apparently healthy subjects screened according to a comprehensive physical examination and medical history performed by trained physicians. Reference vector contours were plotted on the RXc graph using the probability density function of the bivariate normal distribution. We further compared them with those of other ethnic groups. Results: A total of 242 healthy subjects aged 22 to 83 were recruited (137 men and 105 women) between December 2015 and November 2016. The centers of the tolerance ellipses were 306.3 Ω/m and 34.9 Ω/m for men and 425.6 Ω/m and 39.7 Ω/m for women. The ellipses were wider for women than for men. The confidence ellipses for Koreans were located between those for Americans and Spaniards without overlap for both genders. Conclusions: This study presented gender-specific normal reference BIVA plots and corresponding tolerance and confidence ellipses on the RXc graph, which is important for the interpretation of BIA-reported volume status in patients with congestive heart failure or renal insufficiency. 
There were noticeable differences in reference ellipses with regard to gender and ethnic groups.
null
null
4,442
347
[ 268, 236, 33, 240 ]
8
[ "xc", "ellipses", "height", "body", "values", "normalized", "non", "normalized height", "men", "tolerance" ]
[ "bioelectrical measurements", "raw bioelectrical measurements", "impedance analysis bia", "bia assessed volume", "bioelectrical impedance vector" ]
null
null
[CONTENT] Body Fluid Compartments | Blood Volume | Electric Impedance | Congestive Heart Failure | Renal Insufficiency | Vector [SUMMARY]
[CONTENT] Body Fluid Compartments | Blood Volume | Electric Impedance | Congestive Heart Failure | Renal Insufficiency | Vector [SUMMARY]
[CONTENT] Body Fluid Compartments | Blood Volume | Electric Impedance | Congestive Heart Failure | Renal Insufficiency | Vector [SUMMARY]
null
[CONTENT] Body Fluid Compartments | Blood Volume | Electric Impedance | Congestive Heart Failure | Renal Insufficiency | Vector [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Body Composition | Electric Impedance | Female | Heart Failure | Humans | Male | Middle Aged | Renal Insufficiency | Republic of Korea | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Body Composition | Electric Impedance | Female | Heart Failure | Humans | Male | Middle Aged | Renal Insufficiency | Republic of Korea | Young Adult [SUMMARY]
[CONTENT] Adult | Aged | Aged, 80 and over | Body Composition | Electric Impedance | Female | Heart Failure | Humans | Male | Middle Aged | Renal Insufficiency | Republic of Korea | Young Adult [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Body Composition | Electric Impedance | Female | Heart Failure | Humans | Male | Middle Aged | Renal Insufficiency | Republic of Korea | Young Adult [SUMMARY]
null
[CONTENT] bioelectrical measurements | raw bioelectrical measurements | impedance analysis bia | bia assessed volume | bioelectrical impedance vector [SUMMARY]
[CONTENT] bioelectrical measurements | raw bioelectrical measurements | impedance analysis bia | bia assessed volume | bioelectrical impedance vector [SUMMARY]
[CONTENT] bioelectrical measurements | raw bioelectrical measurements | impedance analysis bia | bia assessed volume | bioelectrical impedance vector [SUMMARY]
null
[CONTENT] bioelectrical measurements | raw bioelectrical measurements | impedance analysis bia | bia assessed volume | bioelectrical impedance vector [SUMMARY]
null
[CONTENT] xc | ellipses | height | body | values | normalized | non | normalized height | men | tolerance [SUMMARY]
[CONTENT] xc | ellipses | height | body | values | normalized | non | normalized height | men | tolerance [SUMMARY]
[CONTENT] xc | ellipses | height | body | values | normalized | non | normalized height | men | tolerance [SUMMARY]
null
[CONTENT] xc | ellipses | height | body | values | normalized | non | normalized height | men | tolerance [SUMMARY]
null
[CONTENT] plots | patients | vector plots | volume status | status | bia | vector | volume | biva | electrical [SUMMARY]
[CONTENT] measured | probability | body | variables | test | khz | arm | arms | xc | inbody [SUMMARY]
[CONTENT] normalized | normalized height | xc reactance | height | xc | reactance | men | resistance | significantly | women [SUMMARY]
null
[CONTENT] xc | ellipses | normalized | height | normalized height | body | values | xc reactance | reactance | non hispanic [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| ||| Koreans [SUMMARY]
[CONTENT] ||| ||| [SUMMARY]
[CONTENT] 242 | 22 to 83 | 137 | 105 | between December 2015 and November 2016 ||| 306.3 | 34.9 | 425.6 | 39.7 ||| ||| Koreans | Americans | Spaniards [SUMMARY]
null
[CONTENT] ||| ||| ||| ||| ||| Koreans ||| ||| ||| ||| ||| 242 | 22 to 83 | 137 | 105 | between December 2015 and November 2016 ||| 306.3 | 34.9 | 425.6 | 39.7 ||| ||| Koreans | Americans | Spaniards ||| ||| [SUMMARY]
null
Distinguishing and overlapping laboratory results of thrombotic microangiopathies in HIV infection: Can scoring systems assist?
36054148
Patients with Human Immunodeficiency Virus (HIV) infection are at risk of thrombotic microangiopathies (TMAs) notably thrombotic thrombocytopenic purpura (TTP) and disseminated intravascular coagulation (DIC). Overlap between laboratory results exists resulting in diagnostic ambiguity.
BACKGROUND
Routine laboratory results of 71 patients with HIV-associated TTP (HIV-TTP) and 81 with DIC with concomitant HIV infection (HIV-DIC) admitted between 2015 and 2021 to academic hospitals in Johannesburg, South Africa were retrospectively reviewed. Both the PLASMIC and the International Society of Thrombosis and Haemostasis (ISTH) DIC scores were calculated.
METHODS
Patients with HIV-TTP had significantly (P < .001) increased schistocytes and features of hemolysis including elevated lactate dehydrogenase (LDH)/upper-limit-of-normal ratio (median of 9 (interquartile range [IQR] 5-12) vs 3 (IQR 2-5)) but unexpectedly lower fibrinogen (median 2.8 (IQR 2.2-3.4) vs 4 g/L (IQR 2.5-9.2)) and higher D-dimer (median 4.8 (IQR 2.4-8.1) vs 3.6 g/L (IQR 1.7-6.2)) levels vs the HIV-DIC cohort. Patients with HIV-DIC were more immunocompromised with frequent secondary infections, higher platelet and hemoglobin levels, more deranged coagulation parameters and less hemolysis. Overlap in scoring systems was however observed.
RESULTS
The laboratory parameter overlap between HIV-DIC and HIV-TTP might reflect a shared pathogenesis including endothelial dysfunction and inflammation, and further research is required. Fibrinogen in DIC may be elevated as an acute phase reactant, and D-dimers may reflect the extensive hemostatic activation in HIV-TTP. Inclusion of additional parameters in TMA scoring systems, such as the LDH/upper-limit-of-normal ratio and schistocyte count, together with wider access to ADAMTS-13 testing, may enhance diagnostic accuracy and ensure appropriate utilization of plasma.
CONCLUSION
[ "ADAMTS13 Protein", "Acute-Phase Proteins", "Dacarbazine", "Disseminated Intravascular Coagulation", "HIV Infections", "Hemoglobins", "Hemolysis", "Hemostatics", "Humans", "Lactate Dehydrogenases", "Purpura, Thrombotic Thrombocytopenic", "Retrospective Studies", "South Africa", "Thiamine", "Thrombotic Microangiopathies" ]
9804888
INTRODUCTION
INTRODUCTION
Thrombotic microangiopathy (TMA) is a clinical syndrome characterized by hemolytic anemia, thrombocytopenia and microvascular thrombosis resulting in life‐threatening multi‐organ failure. 1, 2 TMAs are heterogeneous and include congenital and acquired thrombotic thrombocytopenic purpura (TTP) and TTP‐like syndromes, hemolytic uremic syndrome (HUS) and the atypical form of this disease, aHUS. 1, 2, 3 Disseminated intravascular coagulation (DIC) can also be classified as a TMA. 4 TMAs can be manifestations of common disease processes such as hypertension and malignancy, and can also develop in relation to drug exposure. 1, 4 Although the distinction between the different TMA syndromes is often difficult, authors have advised against grouping these disorders under a single pathological entity, underlining the need for further studies to improve patient outcomes. 3

More than 7.7 million people in South Africa are infected with human immunodeficiency virus (HIV). 5 Antiretroviral therapy (ART) is often initiated late in these patients, who consequently present with advanced HIV infection, high rates of non‐communicable diseases such as malignancy and cardiovascular disease, and opportunistic infections, as well as associated complications such as TMAs. 5, 6, 7, 8 HIV‐infected patients with laboratory features of a TMA pose a diagnostic dilemma since HIV infection predisposes to a number of these disease processes, particularly secondary TTP (HIV‐TTP) and DIC with background HIV infection (HIV‐DIC). 9, 10, 11, 12, 13, 14 Distinguishing these conditions is important since treatment differs: DIC is managed by treating the underlying pathogenic cause, whereas HIV‐TTP is treated with therapeutic plasma exchange (TPE) or plasma infusion. 1, 12, 13, 15, 16 Treatment of patients with HIV‐TTP is the most frequent indication for TPE in South Africa. 17 Plasma infusion alone is also of therapeutic value in HIV‐TTP, 16 but administration of insufficient amounts of plasma, owing to the risk of fluid overload and the limited availability of plasma, frequently results in poor responses and a need to convert to TPE. 16 Adverse events related to apheresis therapy and plasma exposure still occur despite technological and procedural developments that have made operational systems safer. 18 For these reasons, and to ensure the best patient outcomes, correct distinction between secondary TTP and DIC is paramount.

The microvascular thrombosis in TTP and in DIC differs in both pathogenesis and composition of the thrombi. 1, 19 In acquired TTP, cleavage of von Willebrand factor (VWF) multimers released by the endothelium may be impaired by an autoantibody‐mediated reduction in the activity of the VWF proteolytic enzyme, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13 (ADAMTS‐13). 1 Excessive release of high molecular weight VWF multimers from damaged endothelium, resulting in a relative deficiency of ADAMTS‐13, is another postulated pathogenic factor in secondary TTP, termed TTP‐like syndrome. 13, 20 The resultant thrombi in TTP are therefore rich in VWF and platelets, with abundant red blood cell (RBC) fragments (schistocytes) and severe thrombocytopenia. 1, 9, 12 The microvascular thromboses in DIC, in contrast, consist mainly of fibrin‐platelet clots following the exposure of coagulation factors to tissue factor secondary to an initiating process such as sepsis or trauma. 10, 21 Excessive bleeding occurs frequently in DIC secondary to the consumption of coagulation factors as well as platelets. Intravascular clot formation is further accelerated in DIC by the loss of natural anticoagulant and fibrinolytic activity. 15 Schistocytes are present in DIC but usually constitute <10% of the RBCs. 15 In both disease processes, endothelial damage and dysregulation of the coagulation cascade also contribute to pathogenesis. 22 In addition, HIV‐infected patients often have background hematological abnormalities, including cytopenias, underlying bone marrow dyshematopoiesis and baseline activation of the hemostatic system, contributing to diagnostic uncertainty. 6, 22, 23, 24

The PLASMIC score (platelet count, hemolysis, active cancer, MCV (mean corpuscular volume), international normalized ratio (INR) and creatinine; Table 1) is based on clinical and routine laboratory parameters and predicts the likelihood of severe ADAMTS‐13 deficiency in patients with a TMA, since testing for this parameter is not widely available. 25 This score was designed to enable the distinction between TTP and other TMAs. 25 The International Society on Thrombosis and Haemostasis (ISTH) DIC score (Table 1) is a diagnostic tool to assist in the diagnosis of DIC in an appropriate clinical setting. 26 The utility of these scoring systems in HIV‐infected patients with TMAs has not been comprehensively assessed, and bedside treatment decisions are often inconsistent. It is further possible that the background hemostatic changes in HIV‐infected patients may alter TMA scoring system performance. 4, 13, 14, 27 The objective of the current study was to identify distinguishing clinical and laboratory parameters to assist with the accurate diagnosis of HIV‐infected patients who present with a TMA suspected to be either HIV‐TTP or HIV‐DIC.
Table 1. The ISTH DIC score and the PLASMIC score for prediction of thrombotic microangiopathy associated with severe ADAMTS‐13 deficiency. 25, 26

ISTH DIC score interpretation:
• ≥5: compatible with overt DIC; repeat the score daily
• <5: suggestive of non‐overt DIC; repeat in the next 1 to 2 days

PLASMIC likelihood of severe ADAMTS‐13 deficiency:
• 0‐4: low likelihood
• 5: intermediate likelihood
• 6 or 7: high likelihood

Abbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; INR, international normalized ratio; MCV, mean corpuscular volume.
Moderate D‐dimer increase: 0.25‐1 mg/L; strong (marked) D‐dimer increase: ≥1 mg/L. 26
Hemolysis criterion: reticulocyte count >2.5%, or undetectable haptoglobin, or indirect bilirubin >12.0 μmol/L.
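Both scoring systems in Table 1 are simple additive checklists, so their logic can be sketched in code. The following is a minimal, illustrative sketch assuming the commonly published cut‐offs (the PLASMIC criteria per Bendapudi et al and the ISTH overt‐DIC criteria); the function names and unit choices are ours, and local laboratory thresholds should be confirmed against the original references (25, 26).

```python
def plasmic_score(platelets_e9_l, hemolysis, active_cancer, transplant_history,
                  mcv_fl, inr, creatinine_mg_dl):
    """PLASMIC score (0-7): one point per criterion met.

    Interpretation: 0-4 low, 5 intermediate, 6-7 high likelihood
    of severe ADAMTS-13 deficiency.
    """
    return sum([
        platelets_e9_l < 30,      # platelet count <30 x10^9/L
        hemolysis,                # reticulocytes >2.5%, undetectable haptoglobin,
                                  # or raised indirect bilirubin
        not active_cancer,        # no active cancer in the preceding year
        not transplant_history,   # no solid-organ or stem-cell transplant
        mcv_fl < 90,              # MCV <90 fL
        inr < 1.5,
        creatinine_mg_dl < 2.0,
    ])


def isth_dic_score(platelets_e9_l, d_dimer_mg_l, pt_prolongation_s, fibrinogen_g_l):
    """ISTH overt-DIC score; a total of >=5 is compatible with overt DIC."""
    score = 0
    # Platelet count (x10^9/L)
    score += 2 if platelets_e9_l < 50 else (1 if platelets_e9_l < 100 else 0)
    # Fibrin-related marker, using the D-dimer bands from Table 1 (mg/L)
    score += 3 if d_dimer_mg_l >= 1.0 else (2 if d_dimer_mg_l >= 0.25 else 0)
    # Prothrombin time prolongation (seconds above the reference)
    score += 2 if pt_prolongation_s > 6 else (1 if pt_prolongation_s > 3 else 0)
    # Fibrinogen (g/L)
    score += 1 if fibrinogen_g_l < 1.0 else 0
    return score
```

For example, a presentation with platelets 20 ×10⁹/L, hemolysis, no cancer or transplant history, MCV 85 fL, INR 1.1 and creatinine 1.0 mg/dL scores 7 on PLASMIC (high likelihood), while platelets 40 ×10⁹/L, D‐dimer 2.0 mg/L, PT prolonged by 7 seconds and fibrinogen 0.8 g/L scores 8 on the ISTH DIC score (overt DIC).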
METHODS
Approval for this study was obtained from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (certificate numbers M160134 and M160839). Informed individual patient consent was waived for this retrospective record review, in which all patient identifiers were removed. The authors independently and retrospectively applied both the PLASMIC and the ISTH DIC scores to the available results of consecutive HIV‐infected patients diagnosed with either HIV‐associated TTP (HIV‐TTP) (n = 71) or overt, uncompensated DIC with background HIV infection (HIV‐DIC) (n = 81) between 2015 and 2021 at the three academic hospitals affiliated with Wits. The diagnoses were made by treating physicians based on clinical and routine laboratory parameters. A diagnosis of HIV‐TTP was based on laboratory features of severe thrombocytopenia (platelets <30 × 10⁹/L) and abundant schistocytes (constituting >10% of the RBCs on the peripheral film), in the absence of features suggestive of another TMA in most cases. ADAMTS‐13 activity and autoantibody levels were not included in the initial diagnosis; where possible, stored plasma was sent for batched ADAMTS‐13 activity and autoantibody testing at the University of the Free State Research Coagulation Laboratory. The diagnosis of DIC was made in the correct clinical context by applying the ISTH DIC score. The available results for both cohorts were collected from the accredited National Health Laboratory Service (NHLS) laboratory as part of routine patient management, including the full blood count (FBC) (performed on Sysmex XN analysers; Sysmex, Japan), peripheral smear findings, hemolytic and inflammatory markers (performed on Roche Cobas analysers; Roche, Switzerland) and coagulation assays (performed on STAGO STA‐R MAX analysers; Diagnostica Stago, France). Summary statistics, including the median and interquartile range (IQR), were computed for all parameters.
Results were compared using GraphPad Prism version 9 (GraphPad Software, San Diego, CA).
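The per‐parameter summary statistics described above (median with 25th‐75th percentile IQR) can be reproduced with the Python standard library. This is an illustrative sketch, not the authors' analysis code; it assumes the "inclusive" quartile convention, which may differ slightly from the method GraphPad Prism applies.

```python
from statistics import quantiles


def median_iqr(values):
    """Return (median, (Q1, Q3)) for a sample: the median and the
    25th-75th percentile interquartile range reported per parameter."""
    # quantiles(n=4) returns the three quartile cut points Q1, median, Q3
    q1, med, q3 = quantiles(values, n=4, method="inclusive")
    return med, (q1, q3)
```

For example, `median_iqr([1, 2, 3, 4, 5])` returns a median of 3.0 with an IQR of (2.0, 4.0).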
RESULTS
The results of the 71 patients diagnosed with HIV‐associated TTP (HIV‐TTP) and the 81 patients diagnosed with overt DIC with a background of HIV infection (HIV‐DIC) are included in Table 2. The patients with laboratory‐confirmed DIC were less likely to have virological control and had significantly more pronounced immunodeficiency. The hemoglobin and platelet counts were also significantly higher, and the prolongation of the prothrombin time (PT) more pronounced, in the DIC cohort. Although patients diagnosed with HIV‐TTP showed less pronounced derangement of the coagulation system, that is, less prolongation of the PT, they presented with significantly higher D‐dimer and significantly lower fibrinogen levels compared to the HIV‐DIC cohort. An underlying infection was identified in 68 (84%) of the DIC cohort; identified pathogens included bacterial septicemia and Mycobacterium tuberculosis. In contrast, no secondary infection could be identified in 62 (88%) of the patients with HIV‐TTP despite extensive investigations.

Table 2. Baseline median (IQR) results of the 71 patients diagnosed with HIV‐TTP (including 43 (61%) with confirmed reduced ADAMTS‐13 levels) and the 81 patients with HIV‐DIC. Abbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; HIV‐TTP, HIV‐associated thrombotic thrombocytopenic purpura; HIV‐DIC, disseminated intravascular coagulation (DIC) with background HIV infection; IQR, interquartile range (25‐75%); n, number of available results where not available in all patients; N/S, not significant; RI, normal reference interval. P ≤ .05 was deemed significant.

The diagnosis of HIV‐TTP was made clinically in conjunction with routine results. In 43 of these patients (61%), ADAMTS‐13 activity levels were measured retrospectively. A sub‐analysis was performed comparing the results of routine parameters in patients with suspected TTP with and without confirmed ADAMTS‐13 deficiency and those diagnosed with HIV‐DIC.
This sub‐analysis confirmed that the differences between HIV‐DIC and HIV‐TTP persisted even when patients without confirmed ADAMTS‐13 levels were excluded (P < .001), and there was no significant difference in routine test results between the HIV‐TTP groups with and without ADAMTS‐13 results (P > .9). Clinically significant levels of autoantibodies to ADAMTS‐13 were present in all 43 (61%) of the patients with HIV‐TTP in whom ADAMTS‐13 levels were measured. No ADAMTS‐13 levels were measured in the patients diagnosed with HIV‐DIC.

Although the PLASMIC score was high in 99% of the patients diagnosed with HIV‐TTP (n = 71), 18 (31%) of these patients also had an ISTH DIC score of 5 or greater, which is compatible with an underlying overt DIC. The PLASMIC score was also applied to the cohort of HIV‐infected patients diagnosed with an overt DIC as per the ISTH DIC score (n = 81): 14 (17%) of these patients had a PLASMIC score of 5 (intermediate likelihood of severe ADAMTS‐13 deficiency) and 19 (23%) had a PLASMIC score of 6 or higher (high likelihood of severe ADAMTS‐13 deficiency). ADAMTS‐13 levels were retrospectively available in 43 (61%) of the patients with HIV‐TTP; all had levels below 15%, that is, severe ADAMTS‐13 deficiency. ADAMTS‐13 levels were unfortunately not available in the remaining 28 patients. Importantly, 69 (97%) of patients diagnosed with HIV‐TTP responded to plasma therapy. Notably, exclusion of the patients without documented ADAMTS‐13 levels from the final data analysis did not alter the statistical difference in parameter results between the HIV‐DIC and HIV‐TTP cohorts.

The most prominent laboratory feature in the cohort of patients with HIV‐TTP was marked peripheral schistocytosis (>10% of RBCs), which was present on admission in 65 of 71 patients (91.5%) and developed within 24 hours in five additional patients.
The LDH/upper‐limit‐of‐normal ratio was also significantly elevated in the patients with HIV‐TTP compared to the patients with HIV‐DIC; LDH levels were, however, only performed in 51 (71%) patients in the DIC cohort. Sixty‐seven (94%) of the patients with HIV‐TTP were treated with fresh frozen plasma (FFP), with 64 (90%) receiving TPE and 3 (4%) plasma infusion only, for a median of 10 days (IQR 7‐13). Sixty‐nine (97%) of the patients who received plasma therapy responded; 2 (3%) deteriorated and died in hospital despite plasma therapy, ART and additional supportive care.
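The LDH/upper‐limit‐of‐normal ratio used in this comparison is a simple normalization that makes LDH values comparable across assays with different reference intervals. A minimal sketch (the function name is ours):

```python
def ldh_uln_ratio(ldh_u_l, upper_limit_normal_u_l):
    """LDH divided by the assay's upper limit of normal (ULN), so that a
    value of 1.0 marks the top of the reference interval regardless of
    the reagent or laboratory that produced the result."""
    if upper_limit_normal_u_l <= 0:
        raise ValueError("ULN must be positive")
    return ldh_u_l / upper_limit_normal_u_l
```

For example, an LDH of 1500 U/L against an assay ULN of 250 U/L gives a ratio of 6.0, irrespective of the laboratory's reference interval.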
CONCLUSION
HIV infection is prevalent in the African context, 5 with secondary HIV‐associated TTP and DIC in the background of HIV infection constituting the most prevalent TMAs in this group of patients. 2, 22 The diagnostic distinction between these conditions can be ambiguous, resulting in inappropriate treatment, owing to the background activation of the coagulation system and inflammation in HIV‐infected patients. 9, 11, 40 The addition of the LDH/upper‐limit‐of‐normal ratio and objective, automated quantification of schistocytes would probably improve the accuracy of the PLASMIC score, 28, 41 since the LDH/upper‐limit‐of‐normal ratio standardizes LDH results across different reagents and reference intervals. The value of longitudinal, repeated application of the scoring systems in patients with a TMA in our setting must also be evaluated, and the cause and significance of the elevated D‐dimers in patients with HIV‐associated TTP require further investigation. 13, 31 Based on the results of this study, the authors support the addition of the LDH/upper‐limit‐of‐normal ratio to the PLASMIC score, as proposed by Zhao et al, 36 to improve diagnostic accuracy and to guide the urgent, but appropriate, institution of therapeutic plasma exchange (TPE).
AUTHOR CONTRIBUTIONS
Susan Louw: study design, data collection and analysis, manuscript writing and critical review, and approval of submission. Barry Frank Jacobson: study design, critical review, and approval of submission. Elizabeth Sarah Mayne: study design, data collection and analysis, manuscript writing and critical review, and approval of submission.

FUNDING INFORMATION
No funding was received for this manuscript.

ETHICS STATEMENT
Approval was obtained from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (certificate numbers M160134 and M160839). Individual patient consent was waived for this retrospective record review.
[ "Thrombotic microangiopathy (TMA) is a clinical syndrome characterized by hemolytic anemia, thrombocytopenia and microvascular thrombosis resulting in life‐threatening multi‐organ failure.\n1\n, \n2\n TMAs are heterogeneous and include congenital and acquired thrombotic thrombocytopenic purpura (TTP) and TTP‐like syndromes, hemolytic uremic syndrome (HUS) and the atypical form of this disease, aHUS.\n1\n, \n2\n, \n3\n Disseminated intravascular coagulation (DIC) can also be classified as a TMA.\n4\n TMAs can be the manifestation of common disease processes such as hypertension and malignancy as well as develop in relation to drug exposure.\n1\n, \n4\n Although the distinction between different TMA syndromes is often difficult, authors have advised against grouping of these disorders under a single pathological entity underlining the need for further studies in order to improve patient outcomes.\n3\n\n\nThere are more than 7.7 million people in South Africa infected with human immunodeficiency virus (HIV).\n5\n Antiretroviral therapy (ART) is often initiated late in these patients who consequently present with advanced HIV infection and high rates of non‐communicable disease, like malignancy and cardiovascular disease, and opportunistic infections as well as associated complications such as TMAs.\n5\n, \n6\n, \n7\n, \n8\n HIV‐infected patients with laboratory features of a TMA pose a diagnostic dilemma since infection with HIV predisposes to a number of these disease processes particularly secondary TTP (HIV‐TTP) and DIC with background HIV infection (HIV‐DIC).\n9\n, \n10\n, \n11\n, \n12\n, \n13\n, \n14\n Distinguishing these conditions is important since treatment differs. 
DIC is managed by treatment of the underlying pathogenic cause and HIV‐TTP with therapeutic plasma exchange (TPE) or plasma infusion.\n1\n, \n12\n, \n13\n, \n15\n, \n16\n Treatment of patients with HIV‐TTP is the most frequent request for TPE in South Africa.\n17\n Plasma infusion alone is also of therapeutic value in patients with HIV‐TTP\n16\n but administration of insufficient amounts of plasma due to the risk of fluid overload and limited availability of plasma frequently results in poor responses and a need to convert to TPE.\n16\n Adverse events related to apheresis therapy and exposure to plasma still occur despite technological and procedural developments which have made operational systems safer.\n18\n For these reasons and to ensure best patient outcomes, correct distinction between secondary TTP and DIC is paramount.\nThe microvascular thrombosis in TTP and in DIC differs in both pathogenesis and in composition of the thrombi.\n1\n, \n19\n In acquired TTP, the cleavage of von Willebrand Factor (VWF) multimers released by the endothelium may be impaired by a reduction in activity of the VWF proteolytic enzyme, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13 (ADAMTS‐13), mediated by auto‐antibodies.\n1\n Excessive release of high molecular weight VWF multimers from damaged endothelium resulting in a relative deficiency of ADAMTS‐13 is another postulated pathogenic factor in secondary TTP termed TTP‐like syndrome.\n13\n, \n20\n The resultant thrombi in TTP are therefore rich in VWF and platelets with abundant red blood cell (RBC) fragments (schistocytes) and severe thrombocytopenia.\n1\n, \n9\n, \n12\n The microvascular thromboses in DIC in contrast consist mainly of fibrin‐platelet clots following the exposure of coagulation factors to tissue factor secondary to an initiating process such as sepsis or trauma.\n10\n, \n21\n Excessive bleeding occurs frequently in DIC secondary to the consumption of coagulation factors as well as platelets. 
Intravascular clot formation is further accelerated in DIC by the loss of natural anticoagulant and fibrinolytic activity.\n15\n Schistocytes are present in DIC but usually constitute <10% of the RBCs.\n15\n In both of these disease processes, endothelial damage and dysregulation of the coagulation cascade also contribute to disease pathogenesis.\n22\n In addition, HIV‐infected patients often have background hematological abnormalities including cytopenias, underlying bone marrow dyshematopoiesis and baseline activation of the hemostatic system contributing to diagnostic uncertainty.\n6\n, \n22\n, \n23\n, \n24\n\n\nThe PLASMIC (platelet count, hemolysis, active cancer, MCV (mean red blood cell (RBC) volume), international normalized ratio (INR) and creatinine) score (Table 1) is based on clinical and routine laboratory parameters and predicts the likelihood of severe ADAMTS‐13 deficiency in patients with a TMA since testing for this parameter is not widely available.\n25\n This score was designed to enable the distinction between TTP and other TMAs.\n25\n The International Society of Thrombosis and Haemostasis (ISTH) DIC score (Table 1), is a diagnostic tool to assist in the diagnosis of DIC in an appropriate clinical setting.\n26\n The utility of these scoring systems in HIV‐infected patients with TMAs has not been comprehensively assessed and bedside treatment decisions are often inconsistent. 
It is further possible that the background hemostatic changes in HIV infected patients may alter TMA scoring system performance.\n4\n, \n13\n, \n14\n, \n27\n The objective of the current study was to identify distinguishing clinical and laboratory parameters to assist with the accurate diagnosis of HIV infected patients who present with a TMA suspected to be either HIV‐TTP or HIV‐DIC.\nThe ISTH DIC score and the PLASMIC score for prediction of thrombotic microangiopathy associated with severe ADAMTS‐13 deficiency\n25\n, \n26\n\n\nScore:\n≥5: compatible with overt DIC: repeat score daily\n<5: suggestive for non‐overt DIC: repeat next 1 to 2 days\nLikelihood score for severe ADAMTS‐13 deficiency:\n• 0‐4: low likelihood\n• 5: intermediate likelihood\n• 6 or 7: high likelihood\nAbbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; INR, international normalized ratio; MCV, mean corpuscular volume.\nModerate D‐dimer increase: = 0.25‐1 D‐dimer units (mg/L)/ Strong (marked) D‐dimer increase: ≥ 1 D‐Dimer units (mg/L).\n26\n\n\nReticulocyte count >2.5%, or haptoglobin undetectable, or indirect bilirubin > 12.0 μmol/L.", "Approval for this study was obtained from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (Certificate numbers: M160134 and M160839). Informed individual patient consent was waived for this retrospective record review in which all patient identifiers were removed. The authors independently and retrospectively applied both the PLASMIC and the ISTH DIC scores to the available results of consecutive HIV‐infected patients who were diagnosed with either HIV‐associated TTP (HIV‐TTP) (n = 71) or overt, uncompensated DIC with background HIV infection (HIV‐DIC) (n = 81) between 2015 and 2021 at the 3 academic hospitals affiliated to Wits. The diagnoses were made by treating physicians based on clinical and routine laboratory parameters. 
A diagnosis of HIV‐TTP was made based on laboratory features of severe thrombocytopenia (Platelets <30 × 10\n9\n/L) and abundant schistocytes (constituting >10% of the RBCs on the peripheral film) in the absence of features suggestive of another TMA in most cases. ADAMTS‐13 activity and autoantibody levels were not included in the initial diagnosis. Where possible, stored plasma was sent for batch ADAMTS‐13 activity and autoantibody levels performed at the University of the Free State, Research Coagulation Laboratory. Diagnosis of DIC was made in patients in the correct clinical context by applying the ISTH‐DIC score. The available results for both cohorts, including full blood count (FBC) (performed on Sysmex XN analysers, Sysmex, Japan), peripheral smear findings, hemolytic and inflammatory markers (performed on Roche Cobas analysers, Roche, Switzerland) and coagulation assays (performed on a STAGO STA‐R MAX analysers, Diagnostica Stago, France) from the accredited National Health Laboratory Service (NHLS) laboratory as part of routine patient management were collected. Summary statistics were computed for all parameters including a median and interquartile range (IQR). Results were compared using Graphpad Prism version 9 (Graphpad software, San Diego).", "The results of the 71 patients diagnosed with HIV‐associated TTP (HIV‐TTP) and 81 patients diagnosed with overt DIC with background of HIV infection (HIV‐DIC) are included in Table 2. The patients with laboratory‐confirmed DIC were less likely to have virological control and had significantly more pronounced immunodeficiency. The hemoglobin and platelet counts were also significantly higher and the prolongation of the PT was more pronounced in the DIC cohort. 
Although patients diagnosed with HIV‐TTP showed less pronounced derangement of the coagulation system, that is, less prolongation of the prothrombin time (PT), they presented with significantly higher D‐dimer and significantly lower fibrinogen levels compared to the cohort with HIV‐DIC. Underlying infection was identified in 68 (84%) of the DIC cohort. Identified pathogens included bacterial septicemia and Mycobacterium tuberculosis. In contrast, no secondary infection could be identified in 62 (88%) of the patients with HIV‐TTP despite extensive investigations.\nBaseline median (IQR) results of 71 patients diagnosed with HIV‐TTP (including 43 (61%) with confirmed reduced ADAMTS‐13 levels) and 81 with HIV‐DIC\nAbbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; HIV‐TTP, HIV‐associated thrombotic thrombocytopenic purpura; HIV‐DIC, disseminated intravascular coagulation (DIC) with background HIV infection; IQR, interquartile range (25‐75%); n, number of available results if not available in all patients; N/S, not significant; RI, normal reference interval.\n\nP ≤ .05 was deemed significant.\nThe diagnosis of HIV‐TTP was made clinically in conjunction with routine results. In 43 of these patients (61%), ADAMTS‐13 activity levels were measured retrospectively. A sub‐analysis was performed comparing the results of routine parameters in patients with suspected TTP with and without confirmed ADAMTS‐13 deficiency and those diagnosed with HIV‐DIC. This sub‐analysis confirmed that the differences persisted between HIV‐DIC and HIV‐TTP even when patients without confirmed ADAMTS‐13 levels were excluded (P‐value <.001). There was therefore no significant difference in the results of routine tests between the HIV‐TTP groups with and without ADAMTS‐13 results (P > .9). Clinically significant levels of autoantibodies to ADAMTS‐13 were present in the 43 (61%) of the patients with HIV‐TTP in whom ADAMTS‐13 levels were measured. 
No ADAMTS‐13 levels were measured in the patients who were diagnosed with HIV‐DIC.\nAlthough the PLASMIC score was high in 99% of the patients diagnosed with HIV‐TTP (n = 71), 18 (31%) of these patients also had an ISTH DIC score of 5 or greater which is compatible with an underlying overt DIC. The PLASMIC score was also applied to the cohort of HIV infected patients diagnosed with an overt DIC as per the ISTH DIC score (n = 81) and 14 (17%) of these patients had a PLASMIC score of 5 (intermediate likelihood of severe ADAMTS‐13 deficiency) and 19 (23%) had a PLASMIC score of 6 or higher (high likelihood of severe ADAMTS‐13 deficiency). ADAMTS‐13 levels were retrospectively available in 43 (61%) of the patients with HIV‐TTP. All of these patients had levels below 15%, that is, severe ADAMTS‐13 deficiency. Unfortunately, ADAMTS‐13 levels were not available in the remaining 28 patients. Importantly, 69 (97%) of patients diagnosed with HIV‐TTP responded to plasma therapy. Notable, exclusion of the patients without documented ADAMTS‐13 levels from the final data analysis did not alter the statistical difference in parameter results between the HIV‐DIC and the HIV‐TTP cohorts.\nThe most prominent laboratory features in the cohort of patients with HIV‐TTP were marked peripheral schistocytosis (>10% of RBCs) which was present on admission in 65 of 71 patients (91.5%) and developed within 24‐h in five additional patients. The LDH/upper‐limit‐of‐normal ratio was also significantly elevated in the patients with HIV‐TTP compared to the patients with a HIV‐DIC. LDH levels were however only performed in 51 (71%) patients in the DIC cohort.\n67 (94%) of the patients with HIV‐TTP were treated with fresh frozen plasma (FFP) with 64 (90%) receiving TPE and 3 (4%) plasma infusion only for a median of 10 days (IQR 7‐13). 
69 (97%) of the patients who received plasma therapy responded and 2 (3%) deteriorated and demised in hospital despite plasma therapy, ART and additional supportive care", "The differentiation between TTP and DIC represents an important diagnostic decision since TTP is managed primarily with TPE in our treatment center and delays in initiation of therapy may adversely impact patient outcomes.\n1\n, \n28\n Although plasma infusions may be used in DIC to correct severe hemostatic abnormalities, primary management is treatment of the underlying pathogenic cause.\n15\n\n\nHIV represents a significant risk factor for both secondary TTP and DIC.\n10\n, \n12\n, \n22\n, \n29\n The HIV viral load results were significantly higher in the HIV‐TTP group compared with the HIV‐DIC cohort but despite better HIV viral control in the HIV‐DIC cohort, the CD4 positive T‐cell counts were lower (P < .001). This finding probably reflects acute concomitant infections in the HIV‐DIC cohort.\nNormal D‐dimer levels were previously considered a feature of TTP and, that together with preserved time‐to‐clot formation assays, for example, PT as well as antithrombin (AT), were suggested to be useful in distinguishing between these conditions in HIV‐uninfected patients.\n30\n In the current study, patients with HIV‐TTP however presented with significantly elevated D‐dimer levels suggesting widespread microthrombosis although mucocutaneous bleeding was probably also contributory.\nIn this study, we demonstrate that there is significant overlap between the laboratory parameters included in diagnostic scores in patients with HIV‐TTP and those with HIV‐DIC with 51 of the 152 patients having scores which were diagnostic for both conditions. Important differentiators in these patients included the abundance of schistocytes and the elevated LDH/upper‐limit‐of‐normal ratios which appeared to show a higher specificity for TTP. 
The prothrombin time in patients with DIC was significantly more prolonged vs the HIV‐TTP cohort. Importantly, D‐dimers were a poor discriminator between the two populations with TTP patients showing higher D‐dimer levels than patients with DIC. Elevated D‐dimer levels in patients with HIV‐associated TTP have also been observed in other studies.\n13\n, \n31\n Median fibrinogen levels were within the normal reference range in both cohorts but were significantly higher in patients with HIV‐DIC mirroring the CRP levels most likely reflecting increased production of fibrinogen as an acute phase reactant. CRP was a distinguishing parameter between the two cohorts with elevated levels in the HIV‐DIC cohort probably related to underlying concomitant infections and this routine parameter therefore could have clinical utility in distinguishing between HIV‐DIC and HIV‐TTP. D‐dimers and fibrinogen form important components of the ISTH DIC score and should be interpreted with caution in HIV infected patients with a TMA.\n11\n, \n32\n The authors caution against favoring a diagnosis of HIV‐DIC instead of HIV‐TTP based on elevated D‐dimer levels when additional features are compatible with HIV‐TTP.\nThe overlap in laboratory parameters between acquired TTP and DIC in HIV infected patients may reflect a shared pathogenesis. 
Contributory factors include chronic inflammation with baseline activation of the hemostatic and complement systems as a result of ongoing viral replication, microbial translocation across a disrupted gastrointestinal mucosal barrier and opportunistic infections. 23 , 33 , 34 Inflammation and complement activation cause endothelial damage, which predisposes to coagulopathies including TMAs. 6 The background derangements of the coagulation and hematopoietic systems in patients with underlying HIV infection should also be considered when making diagnostic and treatment decisions in patients with HIV‐TMAs. 24 , 35 Scoring systems standardize diagnoses to ensure appropriate therapy and improve patient outcomes. 25 , 27 The PLASMIC score is based on clinical parameters and the results of routine tests to predict the likelihood of significant ADAMTS‐13 deficiency, which is indicative of the presence of TTP in a patient with laboratory features of a TMA. 25 Although the PLASMIC score predicted a high probability of severe ADAMTS‐13 deficiency in 99% of the cohort diagnosed with HIV‐TTP, it also predicted a similar risk in 23% of HIV‐infected patients with an overt DIC based on the ISTH DIC score. The PLASMIC score may therefore not have sufficient specificity to delineate between HIV‐TTP and HIV‐DIC in all cases, and inclusion of the LDH/upper‐limit‐of‐normal ratio is likely to improve the specificity and accuracy. Zhao et al 36 also demonstrated that inclusion of the LDH/upper‐limit‐of‐normal ratio improved the accuracy of the PLASMIC score in identifying patients who suffered from TTP. 
Increased schistocyte count was also a distinguishing feature between the TTP and DIC cohorts, but this parameter is poorly standardized, with considerable inter‐observer variability, since it often relies on the subjective methodology of light microscopy and manual counting of cells. 37 , 38 Wider access to ADAMTS‐13 testing, possibly even on a Point‐of‐Care‐Testing (POCT) platform, could also improve the accuracy of the diagnosis of the pathophysiological cause in patients presenting with a TMA. 39 Although all 43 (61%) patients in the HIV‐TTP cohort who were tested for ADAMTS‐13 autoantibodies had clinically significant levels, the diagnostic utility of this parameter is uncertain as it probably forms part of the HIV polygammaglobulinemia in HIV‐infected individuals and is present even in the absence of TTP. 12 The limitations of this study include its retrospective nature, which resulted in some results being unavailable. No ADAMTS‐13 levels were performed in the DIC cohort of patients. The details of the treatment administered and the patient outcomes for the HIV‐DIC cohort were also not available and, based on the PLASMIC scores, some of these patients may have benefited from plasma treatment. Unfortunately, the details of the ART regimens and duration of treatment in the HIV‐DIC cohort were not available. ART status could therefore not be evaluated as a distinguishing feature between the two TMAs. Further studies in this regard are required. The study data, however, reflect the diagnostic and treatment decisions made on admission in the patient cohorts. All requests for DIC screen analysis were available to the authors, but patients with a diagnosis of HIV‐TTP may have been treated by attending physicians without the knowledge of the authors and were therefore not included in the study. 
Irrespective of these limitations, we are of the opinion that the study results reflect the overlapping findings of these serious conditions in our population with HIV infection.", "HIV infection is prevalent in the African context 5 with secondary HIV‐associated TTP and DIC in the background of HIV infection constituting the most prevalent TMAs in this group of patients. 2 , 22 The diagnostic distinction between these conditions can be ambiguous, resulting in inappropriate treatment due to the background activation of the coagulation system and inflammation in HIV‐infected patients. 9 , 11 , 40 The addition of the LDH/upper‐limit‐of‐normal ratio and objective, automated quantification of schistocytes will probably improve the accuracy of the PLASMIC score. 28 , 41 The LDH/upper‐limit‐of‐normal ratio standardizes LDH results across different reagents and reference intervals. The value of longitudinal, repeated application of scoring systems in patients with a TMA in our setting must also be evaluated. The cause and significance of the elevation of D‐dimers in patients with HIV‐associated TTP also require further investigation. 13 , 31 Based on the results of the study, the authors support the addition of the LDH/upper‐limit‐of‐normal ratio to the PLASMIC score for improved diagnostic accuracy and to guide urgent, but appropriate, institution of therapeutic plasma exchange (TPE), as was proposed by Zhao et al. 36 ", "Susan Louw: study design, data collection and analysis, manuscript writing and critical review, and approval of submission. Barry Frank Jacobson: study design, critical review, and approval of submission. 
Elizabeth Sarah Mayne: study design, data collection and analysis, manuscript writing and critical review, and approval of submission.", "No funding was received for this manuscript.", "The authors declare no conflict of interest pertaining to the study.", "Approval was obtained from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (Certificate numbers: M160134 and M160839). Individual patient consent was waived for this retrospective record review." ]
[ null, "methods", "results", "discussion", "conclusions", null, null, "COI-statement", null ]
[ "diagnostic scoring systems", "disseminated intravascular coagulation (DIC)", "thrombotic thrombocytopenic purpura (TTP)", "treatment decisions" ]
INTRODUCTION: Thrombotic microangiopathy (TMA) is a clinical syndrome characterized by hemolytic anemia, thrombocytopenia and microvascular thrombosis resulting in life‐threatening multi‐organ failure. 1 , 2 TMAs are heterogeneous and include congenital and acquired thrombotic thrombocytopenic purpura (TTP) and TTP‐like syndromes, hemolytic uremic syndrome (HUS) and the atypical form of this disease, aHUS. 1 , 2 , 3 Disseminated intravascular coagulation (DIC) can also be classified as a TMA. 4 TMAs can be the manifestation of common disease processes such as hypertension and malignancy as well as develop in relation to drug exposure. 1 , 4 Although the distinction between different TMA syndromes is often difficult, authors have advised against grouping of these disorders under a single pathological entity underlining the need for further studies in order to improve patient outcomes. 3 There are more than 7.7 million people in South Africa infected with human immunodeficiency virus (HIV). 5 Antiretroviral therapy (ART) is often initiated late in these patients who consequently present with advanced HIV infection and high rates of non‐communicable disease, like malignancy and cardiovascular disease, and opportunistic infections as well as associated complications such as TMAs. 5 , 6 , 7 , 8 HIV‐infected patients with laboratory features of a TMA pose a diagnostic dilemma since infection with HIV predisposes to a number of these disease processes particularly secondary TTP (HIV‐TTP) and DIC with background HIV infection (HIV‐DIC). 9 , 10 , 11 , 12 , 13 , 14 Distinguishing these conditions is important since treatment differs. DIC is managed by treatment of the underlying pathogenic cause and HIV‐TTP with therapeutic plasma exchange (TPE) or plasma infusion. 1 , 12 , 13 , 15 , 16 Treatment of patients with HIV‐TTP is the most frequent request for TPE in South Africa. 
17 Plasma infusion alone is also of therapeutic value in patients with HIV‐TTP 16 but administration of insufficient amounts of plasma due to the risk of fluid overload and limited availability of plasma frequently results in poor responses and a need to convert to TPE. 16 Adverse events related to apheresis therapy and exposure to plasma still occur despite technological and procedural developments which have made operational systems safer. 18 For these reasons and to ensure best patient outcomes, correct distinction between secondary TTP and DIC is paramount. The microvascular thrombosis in TTP and in DIC differs in both pathogenesis and in composition of the thrombi. 1 , 19 In acquired TTP, the cleavage of von Willebrand Factor (VWF) multimers released by the endothelium may be impaired by a reduction in activity of the VWF proteolytic enzyme, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13 (ADAMTS‐13), mediated by auto‐antibodies. 1 Excessive release of high molecular weight VWF multimers from damaged endothelium resulting in a relative deficiency of ADAMTS‐13 is another postulated pathogenic factor in secondary TTP termed TTP‐like syndrome. 13 , 20 The resultant thrombi in TTP are therefore rich in VWF and platelets with abundant red blood cell (RBC) fragments (schistocytes) and severe thrombocytopenia. 1 , 9 , 12 The microvascular thromboses in DIC in contrast consist mainly of fibrin‐platelet clots following the exposure of coagulation factors to tissue factor secondary to an initiating process such as sepsis or trauma. 10 , 21 Excessive bleeding occurs frequently in DIC secondary to the consumption of coagulation factors as well as platelets. Intravascular clot formation is further accelerated in DIC by the loss of natural anticoagulant and fibrinolytic activity. 15 Schistocytes are present in DIC but usually constitute <10% of the RBCs. 
15 In both of these disease processes, endothelial damage and dysregulation of the coagulation cascade also contribute to disease pathogenesis. 22 In addition, HIV‐infected patients often have background hematological abnormalities including cytopenias, underlying bone marrow dyshematopoiesis and baseline activation of the hemostatic system contributing to diagnostic uncertainty. 6 , 22 , 23 , 24 The PLASMIC (platelet count, hemolysis, active cancer, MCV (mean red blood cell (RBC) volume), international normalized ratio (INR) and creatinine) score (Table 1) is based on clinical and routine laboratory parameters and predicts the likelihood of severe ADAMTS‐13 deficiency in patients with a TMA since testing for this parameter is not widely available. 25 This score was designed to enable the distinction between TTP and other TMAs. 25 The International Society of Thrombosis and Haemostasis (ISTH) DIC score (Table 1), is a diagnostic tool to assist in the diagnosis of DIC in an appropriate clinical setting. 26 The utility of these scoring systems in HIV‐infected patients with TMAs has not been comprehensively assessed and bedside treatment decisions are often inconsistent. It is further possible that the background hemostatic changes in HIV infected patients may alter TMA scoring system performance. 4 , 13 , 14 , 27 The objective of the current study was to identify distinguishing clinical and laboratory parameters to assist with the accurate diagnosis of HIV infected patients who present with a TMA suspected to be either HIV‐TTP or HIV‐DIC. 
Table 1. The ISTH DIC score and the PLASMIC score for prediction of thrombotic microangiopathy associated with severe ADAMTS‐13 deficiency 25 , 26
ISTH DIC score interpretation: ≥5, compatible with overt DIC (repeat score daily); <5, suggestive of non‐overt DIC (repeat in the next 1 to 2 days).
PLASMIC likelihood score for severe ADAMTS‐13 deficiency: 0‐4, low likelihood; 5, intermediate likelihood; 6 or 7, high likelihood.
Abbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; INR, international normalized ratio; MCV, mean corpuscular volume.
Moderate D‐dimer increase: 0.25‐1 D‐dimer units (mg/L); strong (marked) D‐dimer increase: ≥1 D‐dimer units (mg/L). 26 Hemolysis criterion: reticulocyte count >2.5%, or haptoglobin undetectable, or indirect bilirubin >12.0 μmol/L.
METHODS: Approval for this study was obtained from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (Certificate numbers: M160134 and M160839). Informed individual patient consent was waived for this retrospective record review in which all patient identifiers were removed. The authors independently and retrospectively applied both the PLASMIC and the ISTH DIC scores to the available results of consecutive HIV‐infected patients who were diagnosed with either HIV‐associated TTP (HIV‐TTP) (n = 71) or overt, uncompensated DIC with background HIV infection (HIV‐DIC) (n = 81) between 2015 and 2021 at the 3 academic hospitals affiliated to Wits. The diagnoses were made by treating physicians based on clinical and routine laboratory parameters. A diagnosis of HIV‐TTP was made based on laboratory features of severe thrombocytopenia (platelets <30 × 10⁹/L) and abundant schistocytes (constituting >10% of the RBCs on the peripheral film) in the absence of features suggestive of another TMA in most cases. ADAMTS‐13 activity and autoantibody levels were not included in the initial diagnosis. 
Where possible, stored plasma was sent for batch ADAMTS‐13 activity and autoantibody levels performed at the University of the Free State, Research Coagulation Laboratory. Diagnosis of DIC was made in patients in the correct clinical context by applying the ISTH‐DIC score. The available results for both cohorts, including full blood count (FBC) (performed on Sysmex XN analysers, Sysmex, Japan), peripheral smear findings, hemolytic and inflammatory markers (performed on Roche Cobas analysers, Roche, Switzerland) and coagulation assays (performed on a STAGO STA‐R MAX analysers, Diagnostica Stago, France) from the accredited National Health Laboratory Service (NHLS) laboratory as part of routine patient management were collected. Summary statistics were computed for all parameters including a median and interquartile range (IQR). Results were compared using Graphpad Prism version 9 (Graphpad software, San Diego). RESULTS: The results of the 71 patients diagnosed with HIV‐associated TTP (HIV‐TTP) and 81 patients diagnosed with overt DIC with background of HIV infection (HIV‐DIC) are included in Table 2. The patients with laboratory‐confirmed DIC were less likely to have virological control and had significantly more pronounced immunodeficiency. The hemoglobin and platelet counts were also significantly higher and the prolongation of the PT was more pronounced in the DIC cohort. Although patients diagnosed with HIV‐TTP showed less pronounced derangement of the coagulation system, that is, less prolongation of the prothrombin time (PT), they presented with significantly higher D‐dimer and significantly lower fibrinogen levels compared to the cohort with HIV‐DIC. Underlying infection was identified in 68 (84%) of the DIC cohort. Identified pathogens included bacterial septicemia and Mycobacterium tuberculosis. In contrast, no secondary infection could be identified in 62 (88%) of the patients with HIV‐TTP despite extensive investigations. 
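Both scoring systems applied in the methods are simple additive checklists, so their calculation can be sketched directly. The following is an illustrative sketch only, not the authors' code: the thresholds follow the published ISTH overt DIC score and the seven‑item PLASMIC score (the PLASMIC item crediting the absence of a solid‑organ or stem‑cell transplant history is part of the original publication, although it is not listed in Table 1 here), and the unit conventions (platelets in ×10⁹/L, D‑dimer in mg/L, creatinine in mg/dL) are assumptions.

```python
# Illustrative scoring helpers for the two systems in Table 1 (not the study's code).

def isth_dic_score(platelets_e9, d_dimer_mg_l, pt_prolongation_s, fibrinogen_g_l):
    """ISTH overt DIC score; a total of >=5 is compatible with overt DIC."""
    score = 0
    # Platelet count (x10^9/L): >100 = 0 points, 50-100 = 1, <50 = 2
    if platelets_e9 < 50:
        score += 2
    elif platelets_e9 <= 100:
        score += 1
    # Fibrin-related marker (D-dimer, mg/L): moderate 0.25-1 = 2, strong >=1 = 3
    if d_dimer_mg_l >= 1:
        score += 3
    elif d_dimer_mg_l >= 0.25:
        score += 2
    # Prothrombin time prolongation (seconds): <3 = 0, 3-6 = 1, >6 = 2
    if pt_prolongation_s > 6:
        score += 2
    elif pt_prolongation_s >= 3:
        score += 1
    # Fibrinogen (g/L): <1 = 1 point
    if fibrinogen_g_l < 1:
        score += 1
    return score

def plasmic_score(platelets_e9, hemolysis, no_active_cancer, no_transplant,
                  mcv_fl, inr, creatinine_mg_dl):
    """PLASMIC score (0-7): 0-4 low, 5 intermediate, 6-7 high likelihood
    of severe ADAMTS-13 deficiency. One point per satisfied item."""
    return sum([
        platelets_e9 < 30,      # severe thrombocytopenia
        hemolysis,              # reticulocytes >2.5%, haptoglobin undetectable,
                                # or indirect bilirubin raised
        no_active_cancer,       # no active cancer
        no_transplant,          # no solid-organ/stem-cell transplant history
        mcv_fl < 90,            # MCV <90 fL
        inr < 1.5,              # INR <1.5
        creatinine_mg_dl < 2.0, # creatinine <2.0 mg/dL
    ])
```

As a worked example, a patient with platelets of 20 × 10⁹/L, a D‑dimer of 5 mg/L, a PT prolonged by 1 second and fibrinogen of 2 g/L totals 2 + 3 + 0 + 0 = 5 on the ISTH system, compatible with overt DIC; the same additive logic underlies the overlap the study observes, since severe thrombocytopenia and a strong D‑dimer increase alone reach the overt DIC threshold.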
Table 2. Baseline median (IQR) results of 71 patients diagnosed with HIV‐TTP (including 43 (61%) with confirmed reduced ADAMTS‐13 levels) and 81 with HIV‐DIC. Abbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; HIV‐TTP, HIV‐associated thrombotic thrombocytopenic purpura; HIV‐DIC, disseminated intravascular coagulation (DIC) with background HIV infection; IQR, interquartile range (25‐75%); n, number of available results if not available in all patients; N/S, not significant; RI, normal reference interval. P ≤ .05 was deemed significant. The diagnosis of HIV‐TTP was made clinically in conjunction with routine results. In 43 of these patients (61%), ADAMTS‐13 activity levels were measured retrospectively. A sub‐analysis was performed comparing the results of routine parameters in patients with suspected TTP with and without confirmed ADAMTS‐13 deficiency and those diagnosed with HIV‐DIC. This sub‐analysis confirmed that the differences persisted between HIV‐DIC and HIV‐TTP even when patients without confirmed ADAMTS‐13 levels were excluded (P‐value <.001). Furthermore, there was no significant difference in the results of routine tests between the HIV‐TTP groups with and without ADAMTS‐13 results (P > .9). Clinically significant levels of autoantibodies to ADAMTS‐13 were present in the 43 (61%) patients with HIV‐TTP in whom ADAMTS‐13 levels were measured. No ADAMTS‐13 levels were measured in the patients who were diagnosed with HIV‐DIC. Although the PLASMIC score was high in 99% of the patients diagnosed with HIV‐TTP (n = 71), 18 (31%) of these patients also had an ISTH DIC score of 5 or greater, which is compatible with an underlying overt DIC. 
The PLASMIC score was also applied to the cohort of HIV‐infected patients diagnosed with an overt DIC as per the ISTH DIC score (n = 81): 14 (17%) of these patients had a PLASMIC score of 5 (intermediate likelihood of severe ADAMTS‐13 deficiency) and 19 (23%) had a PLASMIC score of 6 or higher (high likelihood of severe ADAMTS‐13 deficiency). ADAMTS‐13 levels were retrospectively available in 43 (61%) of the patients with HIV‐TTP. All of these patients had levels below 15%, that is, severe ADAMTS‐13 deficiency. Unfortunately, ADAMTS‐13 levels were not available in the remaining 28 patients. Importantly, 69 (97%) of patients diagnosed with HIV‐TTP responded to plasma therapy. Notably, exclusion of the patients without documented ADAMTS‐13 levels from the final data analysis did not alter the statistical difference in parameter results between the HIV‐DIC and the HIV‐TTP cohorts. The most prominent laboratory feature in the cohort of patients with HIV‐TTP was marked peripheral schistocytosis (>10% of RBCs), which was present on admission in 65 of 71 patients (91.5%) and developed within 24 hours in five additional patients. The LDH/upper‐limit‐of‐normal ratio was also significantly elevated in the patients with HIV‐TTP compared with the patients with HIV‐DIC. LDH levels were, however, only performed in 51 (71%) patients in the DIC cohort. 67 (94%) of the patients with HIV‐TTP were treated with fresh frozen plasma (FFP), with 64 (90%) receiving TPE and 3 (4%) plasma infusion only, for a median of 10 days (IQR 7‐13). 69 (97%) of the patients who received plasma therapy responded and 2 (3%) deteriorated and died in hospital despite plasma therapy, ART and additional supportive care. DISCUSSION: The differentiation between TTP and DIC represents an important diagnostic decision since TTP is managed primarily with TPE in our treatment center and delays in initiation of therapy may adversely impact patient outcomes. 
1 , 28 Although plasma infusions may be used in DIC to correct severe hemostatic abnormalities, primary management is treatment of the underlying pathogenic cause. 15 HIV represents a significant risk factor for both secondary TTP and DIC. 10 , 12 , 22 , 29 The HIV viral load results were significantly higher in the HIV‐TTP group compared with the HIV‐DIC cohort but, despite the better HIV viral control in the HIV‐DIC cohort, the CD4‐positive T‐cell counts were lower (P < .001). This finding probably reflects acute concomitant infections in the HIV‐DIC cohort. Normal D‐dimer levels, together with preserved time‐to‐clot formation assays (for example, the PT) and preserved antithrombin (AT) levels, were previously considered features of TTP and were suggested to be useful in distinguishing between these conditions in HIV‐uninfected patients. 30 In the current study, however, patients with HIV‐TTP presented with significantly elevated D‐dimer levels suggesting widespread microthrombosis, although mucocutaneous bleeding was probably also contributory. In this study, we demonstrate that there is significant overlap between the laboratory parameters included in diagnostic scores in patients with HIV‐TTP and those with HIV‐DIC, with 51 of the 152 patients having scores which were diagnostic for both conditions. Important differentiators in these patients included the abundance of schistocytes and the elevated LDH/upper‐limit‐of‐normal ratios, which appeared to show a higher specificity for TTP. The prothrombin time in patients with DIC was significantly more prolonged than in the HIV‐TTP cohort. Importantly, D‐dimers were a poor discriminator between the two populations, with TTP patients showing higher D‐dimer levels than patients with DIC. Elevated D‐dimer levels in patients with HIV‐associated TTP have also been observed in other studies. 
13 , 31 Median fibrinogen levels were within the normal reference range in both cohorts but were significantly higher in patients with HIV‐DIC, mirroring the CRP levels and most likely reflecting increased production of fibrinogen as an acute‐phase reactant. CRP was a distinguishing parameter between the two cohorts, with elevated levels in the HIV‐DIC cohort probably related to underlying concomitant infections; this routine parameter could therefore have clinical utility in distinguishing between HIV‐DIC and HIV‐TTP. D‐dimers and fibrinogen form important components of the ISTH DIC score and should be interpreted with caution in HIV‐infected patients with a TMA. 11 , 32 The authors caution against favoring a diagnosis of HIV‐DIC instead of HIV‐TTP based on elevated D‐dimer levels when additional features are compatible with HIV‐TTP. The overlap in laboratory parameters between acquired TTP and DIC in HIV‐infected patients may reflect a shared pathogenesis. Contributory factors include chronic inflammation with baseline activation of the hemostatic and complement systems as a result of ongoing viral replication, microbial translocation across a disrupted gastrointestinal mucosal barrier and opportunistic infections. 23 , 33 , 34 Inflammation and complement activation cause endothelial damage, which predisposes to coagulopathies including TMAs. 6 The background derangements of the coagulation and hematopoietic systems in patients with underlying HIV infection should also be considered when making diagnostic and treatment decisions in patients with HIV‐TMAs. 24 , 35 Scoring systems standardize diagnoses to ensure appropriate therapy and improve patient outcomes. 25 , 27 The PLASMIC score is based on clinical parameters and the results of routine tests to predict the likelihood of significant ADAMTS‐13 deficiency, which is indicative of the presence of TTP in a patient with laboratory features of a TMA. 
25 Although the PLASMIC score predicted a high probability of severe ADAMTS‐13 deficiency in 99% of the cohort diagnosed with HIV‐TTP, it also predicted a similar risk in 23% of HIV‐infected patients with an overt DIC based on the ISTH DIC score. The PLASMIC score may therefore not have sufficient specificity to delineate between HIV‐TTP and HIV‐DIC in all cases, and inclusion of the LDH/upper‐limit‐of‐normal ratio is likely to improve the specificity and accuracy. Zhao et al 36 also demonstrated that inclusion of the LDH/upper‐limit‐of‐normal ratio improved the accuracy of the PLASMIC score in identifying patients who suffered from TTP. Increased schistocyte count was also a distinguishing feature between the TTP and DIC cohorts, but this parameter is poorly standardized, with considerable inter‐observer variability, since it often relies on the subjective methodology of light microscopy and manual counting of cells. 37 , 38 Wider access to ADAMTS‐13 testing, possibly even on a Point‐of‐Care‐Testing (POCT) platform, could also improve the accuracy of the diagnosis of the pathophysiological cause in patients presenting with a TMA. 39 Although all 43 (61%) patients in the HIV‐TTP cohort who were tested for ADAMTS‐13 autoantibodies had clinically significant levels, the diagnostic utility of this parameter is uncertain as it probably forms part of the HIV polygammaglobulinemia in HIV‐infected individuals and is present even in the absence of TTP. 12 The limitations of this study include its retrospective nature, which resulted in some results being unavailable. No ADAMTS‐13 levels were performed in the DIC cohort of patients. The details of the treatment administered and the patient outcomes for the HIV‐DIC cohort were also not available and, based on the PLASMIC scores, some of these patients may have benefited from plasma treatment. Unfortunately, the details of the ART regimens and duration of treatment in the HIV‐DIC cohort were not available. 
ART status could therefore not be evaluated as a distinguishing feature between the two TMAs. Further studies in this regard are required. The study data, however, reflect the diagnostic and treatment decisions made on admission in the patient cohorts. All requests for DIC screen analysis were available to the authors, but patients with a diagnosis of HIV‐TTP may have been treated by attending physicians without the knowledge of the authors and were therefore not included in the study. Irrespective of these limitations, we are of the opinion that the study results reflect the overlapping findings of these serious conditions in our population with HIV infection. CONCLUSION: HIV infection is prevalent in the African context 5 with secondary HIV‐associated TTP and DIC in the background of HIV infection constituting the most prevalent TMAs in this group of patients. 2 , 22 The diagnostic distinction between these conditions can be ambiguous, resulting in inappropriate treatment due to the background activation of the coagulation system and inflammation in HIV‐infected patients. 9 , 11 , 40 The addition of the LDH/upper‐limit‐of‐normal ratio and objective, automated quantification of schistocytes will probably improve the accuracy of the PLASMIC score. 28 , 41 The LDH/upper‐limit‐of‐normal ratio standardizes LDH results across different reagents and reference intervals. The value of longitudinal, repeated application of scoring systems in patients with a TMA in our setting must also be evaluated. The cause and significance of the elevation of D‐dimers in patients with HIV‐associated TTP also require further investigation. 13 , 31 Based on the results of the study, the authors support the addition of the LDH/upper‐limit‐of‐normal ratio to the PLASMIC score for improved diagnostic accuracy and to guide urgent, but appropriate, institution of therapeutic plasma exchange (TPE), as was proposed by Zhao et al. 
36 AUTHOR CONTRIBUTIONS: Susan Louw: study design, data collection and analysis, manuscript writing and critical review, and approval of submission. Barry Frank Jacobson: study design, critical review, and approval of submission. Elizabeth Sarah Mayne: study design, data collection and analysis, manuscript writing and critical review, and approval of submission. FUNDING INFORMATION: No funding was received for this manuscript. CONFLICT OF INTEREST: The authors declare no conflict of interest pertaining to the study. ETHICS STATEMENT: Approval was obtained from the Human Research Ethics Committee of the University of the Witwatersrand (Wits) (Certificate numbers: M160134 and M160839). Individual patient consent was waived for this retrospective record review.
Background: Patients with Human Immunodeficiency Virus (HIV) infection are at risk of thrombotic microangiopathies (TMAs), notably thrombotic thrombocytopenic purpura (TTP) and disseminated intravascular coagulation (DIC). Overlap between laboratory results exists, resulting in diagnostic ambiguity. Methods: Routine laboratory results of 71 patients with HIV-associated TTP (HIV-TTP) and 81 with DIC with concomitant HIV infection (HIV-DIC), admitted between 2015 and 2021 to academic hospitals in Johannesburg, South Africa, were retrospectively reviewed. Both the PLASMIC and the International Society of Thrombosis and Haemostasis (ISTH) DIC scores were calculated. Results: Patients with HIV-TTP had significantly (P < .001) increased schistocytes and features of hemolysis, including an elevated lactate dehydrogenase (LDH)/upper-limit-of-normal ratio (median of 9 (interquartile range [IQR] 5-12) vs 3 (IQR 2-5)), but unexpectedly lower fibrinogen (median 2.8 (IQR 2.2-3.4) vs 4 g/L (IQR 2.5-9.2)) and higher D-dimer (median 4.8 (IQR 2.4-8.1) vs 3.6 g/L (IQR 1.7-6.2)) levels vs the HIV-DIC cohort. Patients with HIV-DIC were more immunocompromised, with frequent secondary infections, higher platelet and hemoglobin levels, more deranged coagulation parameters and less hemolysis. Overlap in scoring systems was however observed. Conclusions: The laboratory parameter overlap between HIV-DIC and HIV-TTP might reflect a shared pathogenesis including endothelial dysfunction and inflammation, and further research is required. Fibrinogen in DIC may be elevated as an acute phase reactant and D-dimers may reflect the extensive hemostatic activation in HIV-TTP. Inclusion of additional parameters in TMA scoring systems, such as the LDH/upper-limit-of-normal ratio and the schistocyte count, together with wider access to ADAMTS-13 testing, may enhance diagnostic accuracy and ensure appropriate utilization of plasma.
INTRODUCTION: Thrombotic microangiopathy (TMA) is a clinical syndrome characterized by hemolytic anemia, thrombocytopenia and microvascular thrombosis resulting in life‐threatening multi‐organ failure. 1 , 2 TMAs are heterogeneous and include congenital and acquired thrombotic thrombocytopenic purpura (TTP) and TTP‐like syndromes, hemolytic uremic syndrome (HUS) and the atypical form of this disease, aHUS. 1 , 2 , 3 Disseminated intravascular coagulation (DIC) can also be classified as a TMA. 4 TMAs can be the manifestation of common disease processes such as hypertension and malignancy as well as develop in relation to drug exposure. 1 , 4 Although the distinction between different TMA syndromes is often difficult, authors have advised against grouping of these disorders under a single pathological entity underlining the need for further studies in order to improve patient outcomes. 3 There are more than 7.7 million people in South Africa infected with human immunodeficiency virus (HIV). 5 Antiretroviral therapy (ART) is often initiated late in these patients who consequently present with advanced HIV infection and high rates of non‐communicable disease, like malignancy and cardiovascular disease, and opportunistic infections as well as associated complications such as TMAs. 5 , 6 , 7 , 8 HIV‐infected patients with laboratory features of a TMA pose a diagnostic dilemma since infection with HIV predisposes to a number of these disease processes particularly secondary TTP (HIV‐TTP) and DIC with background HIV infection (HIV‐DIC). 9 , 10 , 11 , 12 , 13 , 14 Distinguishing these conditions is important since treatment differs. DIC is managed by treatment of the underlying pathogenic cause and HIV‐TTP with therapeutic plasma exchange (TPE) or plasma infusion. 1 , 12 , 13 , 15 , 16 Treatment of patients with HIV‐TTP is the most frequent request for TPE in South Africa. 
17 Plasma infusion alone is also of therapeutic value in patients with HIV‐TTP 16 but administration of insufficient amounts of plasma due to the risk of fluid overload and limited availability of plasma frequently results in poor responses and a need to convert to TPE. 16 Adverse events related to apheresis therapy and exposure to plasma still occur despite technological and procedural developments which have made operational systems safer. 18 For these reasons and to ensure best patient outcomes, correct distinction between secondary TTP and DIC is paramount. The microvascular thrombosis in TTP and in DIC differs in both pathogenesis and in composition of the thrombi. 1 , 19 In acquired TTP, the cleavage of von Willebrand Factor (VWF) multimers released by the endothelium may be impaired by a reduction in activity of the VWF proteolytic enzyme, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13 (ADAMTS‐13), mediated by auto‐antibodies. 1 Excessive release of high molecular weight VWF multimers from damaged endothelium resulting in a relative deficiency of ADAMTS‐13 is another postulated pathogenic factor in secondary TTP termed TTP‐like syndrome. 13 , 20 The resultant thrombi in TTP are therefore rich in VWF and platelets with abundant red blood cell (RBC) fragments (schistocytes) and severe thrombocytopenia. 1 , 9 , 12 The microvascular thromboses in DIC in contrast consist mainly of fibrin‐platelet clots following the exposure of coagulation factors to tissue factor secondary to an initiating process such as sepsis or trauma. 10 , 21 Excessive bleeding occurs frequently in DIC secondary to the consumption of coagulation factors as well as platelets. Intravascular clot formation is further accelerated in DIC by the loss of natural anticoagulant and fibrinolytic activity. 15 Schistocytes are present in DIC but usually constitute <10% of the RBCs. 
15 In both of these disease processes, endothelial damage and dysregulation of the coagulation cascade also contribute to disease pathogenesis. 22 In addition, HIV‐infected patients often have background hematological abnormalities, including cytopenias, underlying bone marrow dyshematopoiesis and baseline activation of the hemostatic system, contributing to diagnostic uncertainty. 6 , 22 , 23 , 24 The PLASMIC (platelet count, hemolysis, active cancer, MCV (mean red blood cell (RBC) volume), international normalized ratio (INR) and creatinine) score (Table 1) is based on clinical and routine laboratory parameters and predicts the likelihood of severe ADAMTS‐13 deficiency in patients with a TMA, since direct ADAMTS‐13 testing is not widely available. 25 This score was designed to enable the distinction between TTP and other TMAs. 25 The International Society of Thrombosis and Haemostasis (ISTH) DIC score (Table 1) is a diagnostic tool to assist in the diagnosis of DIC in an appropriate clinical setting. 26 The utility of these scoring systems in HIV‐infected patients with TMAs has not been comprehensively assessed, and bedside treatment decisions are often inconsistent. It is further possible that the background hemostatic changes in HIV‐infected patients may alter the performance of TMA scoring systems. 4 , 13 , 14 , 27 The objective of the current study was to identify distinguishing clinical and laboratory parameters to assist with the accurate diagnosis of HIV‐infected patients who present with a TMA suspected to be either HIV‐TTP or HIV‐DIC. 
The ISTH DIC score and the PLASMIC score for prediction of thrombotic microangiopathy associated with severe ADAMTS‐13 deficiency 25 , 26
ISTH DIC score interpretation: ≥5: compatible with overt DIC (repeat score daily); <5: suggestive of non‐overt DIC (repeat in the next 1 to 2 days).
PLASMIC likelihood score for severe ADAMTS‐13 deficiency: 0‐4: low likelihood; 5: intermediate likelihood; 6 or 7: high likelihood.
Abbreviations: ADAMTS‐13, a‐disintegrin‐and‐metalloproteinase‐with‐thrombospondin‐motifs 13; INR, international normalized ratio; MCV, mean corpuscular volume.
Moderate D‐dimer increase: 0.25‐1 D‐dimer units (mg/L); strong (marked) D‐dimer increase: ≥1 D‐dimer units (mg/L). 26
Hemolysis criterion: reticulocyte count >2.5%, or undetectable haptoglobin, or indirect bilirubin >12.0 μmol/L.
CONCLUSION: HIV infection is prevalent in the African context, 5 with secondary HIV‐associated TTP and DIC in the background of HIV infection constituting the most prevalent TMAs in this group of patients. 2 , 22 The diagnostic distinction between these conditions can be ambiguous, resulting in inappropriate treatment, owing to the background activation of the coagulation system and inflammation in HIV‐infected patients. 9 , 11 , 40 The addition of the LDH/upper‐limit‐of‐normal ratio and objective, automated quantification of schistocytes will probably improve the accuracy of the PLASMIC score. 28 , 41 The LDH/upper‐limit‐of‐normal ratio standardizes LDH values across different reagents and reference ranges. The value of longitudinal, repeated application of scoring systems in patients with a TMA in our setting must also be evaluated. The cause and significance of the elevation of D‐dimers in patients with HIV‐associated TTP also requires further investigation. 
13 , 31 Based on the results of the study, the authors support the addition of the LDH/upper‐limit‐of‐normal ratio to the PLASMIC score for improved diagnostic accuracy and to guide urgent, but appropriate, institution of therapeutic plasma exchange (TPE) as was proposed by Zhao et al. 36
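The decision rules of the two scoring systems discussed above can be sketched in code. This is a minimal illustration only: the component thresholds are taken from the standard published PLASMIC and ISTH overt‐DIC criteria rather than from the truncated table in this text, and all function and parameter names are illustrative, not part of the source.

```python
def plasmic_score(platelets_e9, hemolysis, active_cancer, transplant_history,
                  mcv_fl, inr, creatinine_mg_dl):
    """Illustrative PLASMIC score: one point per criterion met (0-7)."""
    criteria = [
        platelets_e9 < 30,       # platelet count < 30 x 10^9/L
        hemolysis,               # reticulocytes >2.5%, undetectable haptoglobin,
                                 # or raised indirect bilirubin
        not active_cancer,       # no active cancer in the past year
        not transplant_history,  # no solid-organ or stem-cell transplant
        mcv_fl < 90,             # MCV < 90 fL
        inr < 1.5,
        creatinine_mg_dl < 2.0,
    ]
    return sum(criteria)

def plasmic_likelihood(score):
    """Likelihood of severe ADAMTS-13 deficiency for a given PLASMIC score."""
    if score <= 4:
        return "low"
    if score == 5:
        return "intermediate"
    return "high"  # score of 6 or 7

def isth_dic_score(platelets_e9, d_dimer_mg_l, pt_prolongation_s, fibrinogen_g_l):
    """Illustrative ISTH overt-DIC score; >=5 is compatible with overt DIC."""
    score = 0
    score += 2 if platelets_e9 < 50 else (1 if platelets_e9 < 100 else 0)
    # fibrin-related marker: moderate increase (0.25-1 mg/L) = 2, strong (>=1) = 3
    score += 3 if d_dimer_mg_l >= 1.0 else (2 if d_dimer_mg_l >= 0.25 else 0)
    score += 2 if pt_prolongation_s > 6 else (1 if pt_prolongation_s >= 3 else 0)
    score += 1 if fibrinogen_g_l < 1.0 else 0
    return score
```

For example, a patient with a platelet count of 20 × 10⁹/L, laboratory evidence of hemolysis, no cancer or transplant history, an MCV of 85 fL, an INR of 1.2 and a creatinine of 1.0 mg/dL scores 7, i.e. a high likelihood of severe ADAMTS‐13 deficiency.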
Background: Patients with Human Immunodeficiency Virus (HIV) infection are at risk of thrombotic microangiopathies (TMAs), notably thrombotic thrombocytopenic purpura (TTP) and disseminated intravascular coagulation (DIC). Overlap between laboratory results exists, resulting in diagnostic ambiguity. Methods: Routine laboratory results of 71 patients with HIV-associated TTP (HIV-TTP) and 81 with DIC with concomitant HIV infection (HIV-DIC), admitted between 2015 and 2021 to academic hospitals in Johannesburg, South Africa, were retrospectively reviewed. Both the PLASMIC and the International Society of Thrombosis and Haemostasis (ISTH) DIC scores were calculated. Results: Patients with HIV-TTP had significantly (P < .001) increased schistocytes and features of hemolysis, including an elevated lactate dehydrogenase (LDH)/upper-limit-of-normal ratio (median of 9 (interquartile range [IQR] 5-12) vs 3 (IQR 2-5)), but unexpectedly lower fibrinogen (median 2.8 (IQR 2.2-3.4) vs 4 g/L (IQR 2.5-9.2)) and higher D-dimer (median 4.8 (IQR 2.4-8.1) vs 3.6 g/L (IQR 1.7-6.2)) levels vs the HIV-DIC cohort. Patients with HIV-DIC were more immunocompromised, with frequent secondary infections, higher platelet and hemoglobin levels, more deranged coagulation parameters and less hemolysis. Overlap in scoring systems was however observed. Conclusions: The laboratory parameter overlap between HIV-DIC and HIV-TTP might reflect a shared pathogenesis, including endothelial dysfunction and inflammation, and further research is required. Fibrinogen in DIC may be elevated as an acute phase reactant, and D-dimers may reflect the extensive hemostatic activation in HIV-TTP. Inclusion of additional parameters in TMA scoring systems, such as the LDH/upper-limit-of-normal ratio and schistocyte count, and wider access to ADAMTS-13 testing may enhance diagnostic accuracy and ensure appropriate utilization of plasma.
3,866
369
[ 1171, 60, 8, 38 ]
9
[ "hiv", "patients", "dic", "ttp", "13", "hiv ttp", "adamts 13", "adamts", "levels", "hiv dic" ]
[ "thrombocytopenia microvascular thrombosis", "thrombocytopenia microvascular", "microangiopathy tma", "microangiopathy tma clinical", "hiv associated thrombotic" ]
[CONTENT] diagnostic scoring systems | disseminated intravascular coagulation (DIC) | thrombotic thrombocytopenic purpura (TTP) | treatment decisions [SUMMARY]
[CONTENT] diagnostic scoring systems | disseminated intravascular coagulation (DIC) | thrombotic thrombocytopenic purpura (TTP) | treatment decisions [SUMMARY]
[CONTENT] diagnostic scoring systems | disseminated intravascular coagulation (DIC) | thrombotic thrombocytopenic purpura (TTP) | treatment decisions [SUMMARY]
[CONTENT] diagnostic scoring systems | disseminated intravascular coagulation (DIC) | thrombotic thrombocytopenic purpura (TTP) | treatment decisions [SUMMARY]
[CONTENT] diagnostic scoring systems | disseminated intravascular coagulation (DIC) | thrombotic thrombocytopenic purpura (TTP) | treatment decisions [SUMMARY]
[CONTENT] diagnostic scoring systems | disseminated intravascular coagulation (DIC) | thrombotic thrombocytopenic purpura (TTP) | treatment decisions [SUMMARY]
[CONTENT] ADAMTS13 Protein | Acute-Phase Proteins | Dacarbazine | Disseminated Intravascular Coagulation | HIV Infections | Hemoglobins | Hemolysis | Hemostatics | Humans | Lactate Dehydrogenases | Purpura, Thrombotic Thrombocytopenic | Retrospective Studies | South Africa | Thiamine | Thrombotic Microangiopathies [SUMMARY]
[CONTENT] ADAMTS13 Protein | Acute-Phase Proteins | Dacarbazine | Disseminated Intravascular Coagulation | HIV Infections | Hemoglobins | Hemolysis | Hemostatics | Humans | Lactate Dehydrogenases | Purpura, Thrombotic Thrombocytopenic | Retrospective Studies | South Africa | Thiamine | Thrombotic Microangiopathies [SUMMARY]
[CONTENT] ADAMTS13 Protein | Acute-Phase Proteins | Dacarbazine | Disseminated Intravascular Coagulation | HIV Infections | Hemoglobins | Hemolysis | Hemostatics | Humans | Lactate Dehydrogenases | Purpura, Thrombotic Thrombocytopenic | Retrospective Studies | South Africa | Thiamine | Thrombotic Microangiopathies [SUMMARY]
[CONTENT] ADAMTS13 Protein | Acute-Phase Proteins | Dacarbazine | Disseminated Intravascular Coagulation | HIV Infections | Hemoglobins | Hemolysis | Hemostatics | Humans | Lactate Dehydrogenases | Purpura, Thrombotic Thrombocytopenic | Retrospective Studies | South Africa | Thiamine | Thrombotic Microangiopathies [SUMMARY]
[CONTENT] ADAMTS13 Protein | Acute-Phase Proteins | Dacarbazine | Disseminated Intravascular Coagulation | HIV Infections | Hemoglobins | Hemolysis | Hemostatics | Humans | Lactate Dehydrogenases | Purpura, Thrombotic Thrombocytopenic | Retrospective Studies | South Africa | Thiamine | Thrombotic Microangiopathies [SUMMARY]
[CONTENT] ADAMTS13 Protein | Acute-Phase Proteins | Dacarbazine | Disseminated Intravascular Coagulation | HIV Infections | Hemoglobins | Hemolysis | Hemostatics | Humans | Lactate Dehydrogenases | Purpura, Thrombotic Thrombocytopenic | Retrospective Studies | South Africa | Thiamine | Thrombotic Microangiopathies [SUMMARY]
[CONTENT] thrombocytopenia microvascular thrombosis | thrombocytopenia microvascular | microangiopathy tma | microangiopathy tma clinical | hiv associated thrombotic [SUMMARY]
[CONTENT] thrombocytopenia microvascular thrombosis | thrombocytopenia microvascular | microangiopathy tma | microangiopathy tma clinical | hiv associated thrombotic [SUMMARY]
[CONTENT] thrombocytopenia microvascular thrombosis | thrombocytopenia microvascular | microangiopathy tma | microangiopathy tma clinical | hiv associated thrombotic [SUMMARY]
[CONTENT] thrombocytopenia microvascular thrombosis | thrombocytopenia microvascular | microangiopathy tma | microangiopathy tma clinical | hiv associated thrombotic [SUMMARY]
[CONTENT] thrombocytopenia microvascular thrombosis | thrombocytopenia microvascular | microangiopathy tma | microangiopathy tma clinical | hiv associated thrombotic [SUMMARY]
[CONTENT] thrombocytopenia microvascular thrombosis | thrombocytopenia microvascular | microangiopathy tma | microangiopathy tma clinical | hiv associated thrombotic [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | 13 | hiv ttp | adamts 13 | adamts | levels | hiv dic [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | 13 | hiv ttp | adamts 13 | adamts | levels | hiv dic [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | 13 | hiv ttp | adamts 13 | adamts | levels | hiv dic [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | 13 | hiv ttp | adamts 13 | adamts | levels | hiv dic [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | 13 | hiv ttp | adamts 13 | adamts | levels | hiv dic [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | 13 | hiv ttp | adamts 13 | adamts | levels | hiv dic [SUMMARY]
[CONTENT] hiv | dic | ttp | disease | 13 | patients | score | tma | vwf | adamts [SUMMARY]
[CONTENT] hiv | laboratory | analysers | performed | dic | graphpad | roche | activity autoantibody | activity autoantibody levels | 13 activity autoantibody levels [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | hiv ttp | 13 | adamts 13 | adamts | levels | patients diagnosed [SUMMARY]
[CONTENT] hiv | patients | ldh upper limit normal | limit normal | ldh upper limit | ldh upper | ldh | limit normal ratio | normal | limit [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | manuscript | funding received manuscript | funding received | received manuscript | funding | 13 [SUMMARY]
[CONTENT] hiv | patients | dic | ttp | manuscript | funding received manuscript | funding received | received manuscript | funding | 13 [SUMMARY]
[CONTENT] TTP ||| [SUMMARY]
[CONTENT] 71 | TTP | 81 | DIC | between 2015 and 2021 | Johannesburg | South Africa ||| the International Society of Thrombosis and Haemostasis ||| DIC [SUMMARY]
[CONTENT] 9 | 5 | 3 | 2.8 | IQR | 4 | g/L | 4.8 | 3.6 ||| ||| [SUMMARY]
[CONTENT] ||| DIC ||| TMA [SUMMARY]
[CONTENT] TTP ||| ||| 71 | TTP | 81 | DIC | between 2015 and 2021 | Johannesburg | South Africa ||| the International Society of Thrombosis and Haemostasis ||| DIC ||| ||| 9 | 5 | 3 | 2.8 | IQR | 4 | g/L | 4.8 | 3.6 ||| ||| ||| ||| DIC ||| TMA [SUMMARY]
[CONTENT] TTP ||| ||| 71 | TTP | 81 | DIC | between 2015 and 2021 | Johannesburg | South Africa ||| the International Society of Thrombosis and Haemostasis ||| DIC ||| ||| 9 | 5 | 3 | 2.8 | IQR | 4 | g/L | 4.8 | 3.6 ||| ||| ||| ||| DIC ||| TMA [SUMMARY]
Visualising the urethra for prostate radiotherapy planning.
34028976
The prostatic urethra is an organ at risk for prostate radiotherapy with genitourinary toxicities a common side effect. Many external beam radiation therapy protocols call for urethral sparing, and with modulated radiotherapy techniques, the radiation dose distribution can be controlled so that maximum doses do not fall within the prostatic urethral volume. Whilst traditional diagnostic MRI sequences provide excellent delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland. This study aims to assess if a high-resolution isotropic 3D T2 MRI series can reduce inter-observer variability in urethral delineation for radiotherapy planning.
INTRODUCTION
Five independent observers contoured the prostatic urethra for ten patients on three data sets; a 2 mm axial CT, a diagnostic 3 mm axial T2 TSE MRI and a 0.9 mm isotropic 3D T2 SPACE MRI. The observers were blinded from each other's contours. A Dice Similarity Coefficient (DSC) score was calculated using the intersection and union of the five observer contours vs an expert reference contour for each data set.
METHODS
The mean DSC of the observer vs reference contours was 0.47 for CT, 0.62 for T2 TSE and 0.78 for T2 SPACE (P < 0.001).
RESULTS
The introduction of a 0.9 mm isotropic 3D T2 SPACE MRI for treatment planning provides improved urethral visualisation and can lead to a significant reduction in inter-observer variation in prostatic urethral contouring.
CONCLUSIONS
[ "Humans", "Magnetic Resonance Imaging", "Male", "Observer Variation", "Prostatic Neoplasms", "Radiotherapy Dosage", "Radiotherapy Planning, Computer-Assisted", "Urethra" ]
8424315
Introduction
Genitourinary (GU) toxicities are a common side effect of prostate radiotherapy.1, 2, 3, 4, 5 Attempts at correlating bladder dose to GU toxicity have not shown a consistent relationship.6 Urethral strictures have long been recognised as a complication of prostate brachytherapy, and as such it is routine practice to try to limit dose delivery to the urethra in modern brachytherapy regimens.7, 8 Similarly, strictures around the urethral anastomosis are a common concern amongst urologists regarding adjuvant or salvage radiotherapy following a radical prostatectomy.9 Although the prostatic urethra has not traditionally been defined as an organ at risk (OAR) for prostate external beam radiation therapy (EBRT), a combination of the above evidence and higher dosed schedules suggests that it would be prudent to take steps to accurately identify and limit dose to this structure. Several contemporary ultrahypofractionated prostate EBRT regimes are beginning to call for urethral dose limiting and dose reporting.10, 11, 12 With modern intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques combined with online and real‐time image‐guided radiation therapy (IGRT), the distribution of dose can be controlled so that maximum dose regions do not fall within the prostatic urethral volume whilst still maintaining the minimum therapeutic dose to the entire prostate gland.13 Historically, the urethra has been a difficult OAR to accurately define for radiotherapy planning purposes. Ultrasound‐based studies have shown the cranio‐caudal urethral path and prostatic urethral angle can demonstrate considerable anatomical variations between subjects.14 Whilst traditional diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) provide excellent geometrical delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland itself. 
Modern ultrahypofractionated trials have permitted an estimated contour of the urethral position with a subsequent radial expansion to create a planning organ at risk volume (PRV).10, 11, 12 Our institutional planning process historically involved obtaining diagnostic images from external MRI sources and performing a rigid fiducial registration to the planning CT for contouring. A 3 mm axial T2‐weighted turbo spin echo (TSE) image provides excellent anatomical contrast and is historically used for target volume and OAR definition.15 However, the non‐isotropic voxels restrict high resolution viewing in the treatment planning system (TPS) to the axial plane only, as shown in Figure 1. A conventional diagnostic 3 mm T2 TSE series of the prostate as displayed in Eclipse TPS axially (A), and the subsequent degradation in image quality with coronal (B) and sagittal (C) reconstruction. The implementation of ultrahypofractionated stereotactic prostate treatments in our department highlighted the potential benefits of improved urethral visualisation. We initially employed in‐dwelling catheters (IDC) at the CT simulation session, followed by a diagnostic T2 TSE MRI with the IDC remaining in situ.10 This technique provides clear urethral visualisation for dose limiting, but the benefits are confounded by the invasiveness of the procedure, which carries increased infection risk, and is often not well tolerated by patients.16, 17 This is combined with increased staffing requirements during the simulation sessions. There is also the potential risk with this approach that IDC insertion results in deformation of the natural urethral anatomy,18 which may be problematic given the IDC was not re‐inserted for subsequent treatment appointments. 
Studies have shown that a T2‐weighted MRI sequence can display the prostatic urethra with hyper‐intense contrast compared to the surrounding glandular tissue; however, the voxel size and slice thickness of the diagnostic series did not provide acceptable multiplane resolution for radiotherapy planning and required specialist urogenital radiologist input.17 Recent recommendations that a three‐dimensional (3D) isotropic T2‐weighted axial acquisition is justified for prostate radiation therapy planning have also been published.19, 20 We verified that an isotropic T2 SPACE (Sampling Perfection with Application optimised Contrasts using different flip angle Evolution) sequence could produce a 3D prostate image of satisfactory quality for clinical use by our GU radiation oncologists (ROs). We found a 0.9 mm isotropic scan provided optimal image quality for multiplane RT planning (Fig. 2), with a clinically acceptable acquisition time (˜5‐6 min). A T2 SPACE series of the prostate as displayed in Eclipse TPS. The 0.9 mm isotropic voxel provides consistent resolution in axial (A) and coronal (B) planes with the arrow indicating the urethra on the sagittal image (C). This study investigates the potential impact of implementing an MRI imaging protocol for urethral contouring by assessing inter‐observer variability for radiotherapy planning by comparing 3D T2 SPACE, conventional CT and axial T2 TSE MRI planning series.
Methods
Ten male patients with histologically proven prostate carcinoma and an intact prostate gland who had consented to prostate radiotherapy were identified. All patients provided written informed consent prior to study enrolment. The Hunter New England Human Research Ethics Committee approved this study, reference number 2020/STE01574. A 120 kV, 2 mm axial CT (SOMATOM CONFIDENCE, Siemens Healthcare, Erlangen, Germany) for treatment planning was acquired in accordance with standard departmental practice and was followed immediately by MRI imaging in the department’s dedicated 3‐Tesla MRI Simulator (MAGNETOM Skyra, Siemens Healthcare, Erlangen, Germany). For the MRI, patients were immobilised with identical radiotherapy positioning equipment on a QFix Insight™ flat couch overlay with a Siemens Body 18 flex coil fixed to a QFix Insight™ Body Coil Holder. The MR imaging protocol consisted of a 3 mm axial T2 TSE, a 2 mm axial T1 gradient echo (GRE) for fiducial marker visualisation and the additional 0.9 mm isotropic 3D T2 SPACE scan. Table 1 lists the specific MRI acquisition parameters. [Table 1. MRI acquisition parameters for the 2D axial TSE and 3D SPACE sequences. Abbreviations: GRE = gradient echo; TSE = turbo spin echo; SPACE = sampling perfection with application optimised contrast using different flip angle evolution; TE = echo time; TR = repetition time; FOV = field of view.] All scans were imported into the Eclipse™ treatment planning system (Varian Medical Systems, Palo Alto, CA, USA), and rigid registration of CT, T1 GRE, T2 TSE and T2 SPACE using gold seed fiducial markers was performed for each patient. The clinical target volume (CTV) was contoured by the RO on the CT using the T2 TSE series in a blended window as per current clinical practice. The CTV was duplicated onto each T2‐weighted series. The radiation oncologist created a reference urethral PRV contour in consensus with a GU clinical specialist radiation therapist using both T2‐weighted series via the prescribed method below. 
The reference contour was copied to all three data sets and approved as a clinically acceptable urethral position. Five radiation therapists (with a range of 5–19 years experience) sub‐specialising in GU radiation therapy contoured the prostatic urethra of the ten patients on each data set; planning CT, T2 TSE and the T2 SPACE series using the same prescribed method. The observers were blinded to all other urethra contours. The data sets were contoured in the above sequential order. On each series, the observers were instructed to contour the urethra within the CTV volume in the sagittal window from bladder neck to the apex of the prostate using the 3D brush tool set to a static 2 mm diameter (Fig. 3). The urethra contour was drawn on the sagittal plane, with the axial and coronal planes also available for viewing to help guide the observer in all series. The urethra contours were set as ‘high resolution structures’ in the contour properties. Contouring of the urethra across multiple sagittal slices was permitted if the urethral path appeared convoluted through the prostate gland. Any large transurethral resection of the prostate (TURP) voids were also contoured as part of the prostatic urethra if they fell inside the CTV volume. In order to create a conventional cylindrical urethral PRV, a further 3 mm radial expansion was applied to the contours. Any volume extending outside of the CTV in the superior–inferior direction was cropped. The final ˜8‐mm‐diameter cylindrical urethra PRV structure was then used for analysis. Sagittal example of one patient data set with observer urethra PRV contours (blue) and reference contour (red) for CT (A), T2 TSE (B) and T2 SPACE (C). A further two contours were then created for every observer on each of the three imaging methods for the ten patients. This consisted of (a) the intersecting volume of the observer and reference contour (A ∩ B) and (b) the union of the observer contour and the reference contour (A ∪ B). 
The diagram in Figure 4 represents the volumes created for each observer contour vs the reference contour. Shaded regions representing volumes created for DSC scoring; (a) intersecting volume of observer & reference contour (A ∩ B) & (b) union of observer and reference contour (A ∪ B). The volume of the intersection and union contours was recorded. A Dice Similarity Coefficient (DSC) was calculated as DSC = 2(A ∩ B) / (|A| + |B|); since |A| + |B| = (A ∪ B) + (A ∩ B), the coefficient can be computed directly from the recorded intersection and union volumes. The DSC score was then used to assess the inter‐observer similarity of the identified urethral volumes. DSC comparisons have been widely used as a metric to evaluate spatial overlap between multiple volumes in radiation oncology settings.21, 22, 23, 24 DSC scores are displayed as a value between 0 – representing no spatial overlap, and 1 – representing perfect spatial overlap. A DSC score of >0.70 has been reported as demonstrating ‘good’ spatial and volumetric similarity.25 Two factor statistical analysis was performed using Friedman’s two‐way repeated ANOVA to assess main effect difference, based on ten patients and three image acquisition types for five independent observers. Subsequent post hoc multiple comparison was performed using Fisher’s Least Significant Difference test to compare each pair of the three different imaging methods.
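The DSC calculation from the recorded intersection and union volumes can be sketched as follows. This is a minimal illustration assuming volumes in cubic centimetres as reported by the TPS, with an illustrative function name; it uses the identity |A| + |B| = (A ∪ B) + (A ∩ B), so that perfect overlap yields 1.0 and no overlap yields 0.0.

```python
def dice_from_volumes(intersection_cc, union_cc):
    """Dice Similarity Coefficient from recorded intersection and union volumes.

    Classic Dice is 2|A ∩ B| / (|A| + |B|). Because |A| + |B| equals
    (A ∪ B) + (A ∩ B), the score can be derived from the two recorded
    volumes without knowing |A| and |B| individually.
    """
    denominator = union_cc + intersection_cc
    if denominator == 0:
        return 0.0  # both contours empty: define as no overlap
    return 2.0 * intersection_cc / denominator
```

For identical observer and reference contours the intersection equals the union and the function returns 1.0; for disjoint contours the intersection is zero and it returns 0.0.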
Results
The mean DSC of the observer vs reference contours was 0.47 for CT, 0.62 for T2 TSE and 0.78 for T2 SPACE, as shown in Table 2. The calculated Friedman’s two‐way repeated ANOVA P‐value of <0.001 suggests that there is a significant overall difference between the three groups. DSC scores improved from CT to T2 TSE (mean DSC improvement = +0.15), and then further improvements were seen in the T2 TSE to T2 SPACE comparison (mean DSC improvement = +0.16). Mean Dice Similarity Coefficient (DSC) of observer contours (n = 5) vs reference contour. Post hoc multiple comparison of the three groups resulted in: CT‐T2 TSE P = 0.23; CT‐T2 SPACE P < 0.001; T2 TSE‐T2 SPACE P = 0.023. These results demonstrate a significant difference in mean value between T2 SPACE when compared to both T2 TSE and CT DSC groups. A graphical representation in Figure 5 shows clear improvements in DSC score for T2 SPACE compared to the conventional imaging series. Graphical representation of mean DSC scores showing a reduction in inter‐observer variation for T2 SPACE.
Conclusion
The introduction of a 0.9 mm isotropic 3D T2 SPACE planning MRI provides improved urethral visualisation and can lead to a marked reduction in inter‐observer variation of prostatic urethral contours compared to conventional planning CT and diagnostic T2 TSE MRI. The improvements in urethral visualisation were best appreciated in the sagittal plane in the TPS. We have adopted the T2 SPACE as a standard contouring sequence for prostate radiotherapy planning in the MRI simulator.
[ "Introduction" ]
[ "Genitourinary (GU) toxicities are a common side effect of prostate radiotherapy.1, 2, 3, 4, 5 Attempts at correlating bladder dose to GU toxicity have not shown a consistent relationship.6 Urethral strictures have long been recognised as a complication of prostate brachytherapy, and as such it is routine practice to try to limit dose delivery to the urethra in modern brachytherapy regimens.7, 8 Similarly, strictures around the urethral anastomosis are a common concern amongst urologists regarding adjuvant or salvage radiotherapy following a radical prostatectomy.9 Although the prostatic urethra has not traditionally been defined as an organ at risk (OAR) for prostate external beam radiation therapy (EBRT), a combination of the above evidence and higher dosed schedules suggests that it would be prudent to take steps to accurately identify and limit dose to this structure.\nSeveral contemporary ultrahypofractionated prostate EBRT regimes are beginning to call for urethral dose limiting and dose reporting.10, 11, 12 With modern intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques combined with online and real‐time image‐guided radiation therapy (IGRT), the distribution of dose can be controlled so that maximum dose regions do not fall within the prostatic urethral volume whilst still maintaining the minimum therapeutic dose to the entire prostate gland.13\n\nHistorically, the urethra has been a difficult OAR to accurately define for radiotherapy planning purposes. Ultrasound‐based studies have shown the cranio‐caudal urethral path and prostatic urethral angle can demonstrate considerable anatomical variations between subjects.14 Whilst traditional diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) provide excellent geometrical delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland itself. 
Modern ultrahypofractionated trials have permitted an estimated contour of the urethral position with a subsequent radial expansion to create a planning organ at risk volume (PRV).10, 11, 12 Our institutional planning process historically involved obtaining diagnostic images from external MRI sources and performing a rigid fiducial registration to the planning CT for contouring. A 3 mm axial T2‐weighted turbo spin echo (TSE) image provides excellent anatomical contrast and is historically used for target volume and OAR definition.15 However, the non‐isotropic voxels restrict high resolution viewing in the treatment planning system (TPS) to the axial plane only, as shown in Figure 1.\nA conventional diagnostic 3 mm T2 TSE series of the prostate as displayed in Eclipse TPS axially (A), and the subsequent degradation in image quality with coronal (B) and sagittal (C) reconstruction.\nThe implementation of ultrahypofractionated stereotactic prostate treatments in our department highlighted the potential benefits of improved urethral visualisation. We initially employed in‐dwelling catheters (IDC) at the CT simulation session, followed by a diagnostic T2 TSE MRI with the IDC remaining in situ.10 This technique provides clear urethral visualisation for dose limiting, but the benefits are confounded by the invasiveness of the procedure, which carries increased infection risk, and is often not well tolerated by patients.16, 17 This is combined with increased staffing requirements during the simulation sessions. 
There is also the potential risk with this approach that IDC insertion results in deformation of the natural urethral anatomy,18 which may be problematic given the IDC was not re‐inserted for subsequent treatment appointments.\nStudies have shown that a T2‐weighted MRI sequence can display the prostatic urethra with hyper‐intense contrast compared to the surrounding glandular tissue; however, the voxel size and slice thickness of the diagnostic series did not provide acceptable multiplane resolution for radiotherapy planning and required specialist urogenital radiologist input.17 Recent recommendations that a three‐dimensional (3D) isotropic T2‐weighted axial acquisition is justified for prostate radiation therapy planning have also been published.19, 20\n\nWe verified that an isotropic T2 SPACE (Sampling Perfection with Application optimised Contrasts using different flip angle Evolution) sequence could produce a 3D prostate image of satisfactory quality for clinical use by our GU radiation oncologists (ROs). We found a 0.9 mm isotropic scan provided optimal image quality for multiplane RT planning (Fig. 2), with a clinically acceptable acquisition time (˜5‐6 min).\nA T2 SPACE series of the prostate as displayed in Eclipse TPS. The 0.9 mm isotropic voxel provides consistent resolution in axial (A) and coronal (B) planes with the arrow indicating the urethra on the sagittal image (C).\nThis study investigates the potential impact of implementing an MRI imaging protocol for urethral contouring by assessing inter‐observer variability for radiotherapy planning by comparing 3D T2 SPACE, conventional CT and axial T2 TSE MRI planning series." ]
[ null ]
[ "Introduction", "Methods", "Results", "Discussion", "Conclusion", "Conflict of Interest" ]
[ "Genitourinary (GU) toxicities are a common side effect of prostate radiotherapy.1, 2, 3, 4, 5 Attempts at correlating bladder dose to GU toxicity have not shown a consistent relationship.6 Urethral strictures have long been recognised as a complication of prostate brachytherapy, and as such it is routine practice to try to limit dose delivery to the urethra in modern brachytherapy regimens.7, 8 Similarly, strictures around the urethral anastomosis are a common concern amongst urologists regarding adjuvant or salvage radiotherapy following a radical prostatectomy.9 Although the prostatic urethra has not traditionally been defined as an organ at risk (OAR) for prostate external beam radiation therapy (EBRT), a combination of the above evidence and higher dosed schedules suggests that it would be prudent to take steps to accurately identify and limit dose to this structure.\nSeveral contemporary ultrahypofractionated prostate EBRT regimes are beginning to call for urethral dose limiting and dose reporting.10, 11, 12 With modern intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques combined with online and real‐time image‐guided radiation therapy (IGRT), the distribution of dose can be controlled so that maximum dose regions do not fall within the prostatic urethral volume whilst still maintaining the minimum therapeutic dose to the entire prostate gland.13\n\nHistorically, the urethra has been a difficult OAR to accurately define for radiotherapy planning purposes. Ultrasound‐based studies have shown the cranio‐caudal urethral path and prostatic urethral angle can demonstrate considerable anatomical variations between subjects.14 Whilst traditional diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) provide excellent geometrical delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland itself. 
Modern ultrahypofractionated trials have permitted an estimated contour of the urethral position with a subsequent radial expansion to create a planning organ at risk volume (PRV).10, 11, 12 Our institutional planning process historically involved obtaining diagnostic images from external MRI sources and performing a rigid fiducial registration to the planning CT for contouring. A 3 mm axial T2‐weighted turbo spin echo (TSE) image provides excellent anatomical contrast and is historically used for target volume and OAR definition.15 However, the non‐isotropic voxels restrict high resolution viewing in the treatment planning system (TPS) to the axial plane only, as shown in Figure 1.\nA conventional diagnostic 3 mm T2 TSE series of the prostate as displayed in Eclipse TPS axially (A), and the subsequent degradation in image quality with coronal (B) and sagittal (C) reconstruction.\nThe implementation of ultrahypofractionated stereotactic prostate treatments in our department highlighted the potential benefits of improved urethral visualisation. We initially employed in‐dwelling catheters (IDC) at the CT simulation session, followed by a diagnostic T2 TSE MRI with the IDC remaining in situ.10 This technique provides clear urethral visualisation for dose limiting, but the benefits are confounded by the invasiveness of the procedure, which carries increased infection risk, and is often not well tolerated by patients.16, 17 This is combined with increased staffing requirements during the simulation sessions. 
There is also the potential risk with this approach that IDC insertion results in deformation of the natural urethral anatomy,18 which may be problematic given the IDC was not re‐inserted for subsequent treatment appointments.\nStudies have shown that a T2‐weighted MRI sequence can display the prostatic urethra with hyper‐intense contrast compared to the surrounding glandular tissue; however, the voxel size and slice thickness of the diagnostic series did not provide acceptable multiplane resolution for radiotherapy planning and required specialist urogenital radiologist input.17 Recent recommendations that a three‐dimensional (3D) isotropic T2‐weighted axial acquisition is justified for prostate radiation therapy planning have also been published.19, 20\n\nWe verified that an isotropic T2 SPACE (Sampling Perfection with Application optimised Contrasts using different flip angle Evolution) sequence could produce a 3D prostate image of satisfactory quality for clinical use by our GU radiation oncologists (ROs). We found a 0.9 mm isotropic scan provided optimal image quality for multiplane RT planning (Fig. 2), with a clinically acceptable acquisition time (˜5‐6 min).\nA T2 SPACE series of the prostate as displayed in Eclipse TPS. The 0.9 mm isotropic voxel provides consistent resolution in axial (A) and coronal (B) planes with the arrow indicating the urethra on the sagittal image (C).\nThis study investigates the potential impact of implementing an MRI imaging protocol for urethral contouring by assessing inter‐observer variability for radiotherapy planning by comparing 3D T2 SPACE, conventional CT and axial T2 TSE MRI planning series.", "Ten male patients with histologically proven prostate carcinoma, an intact prostate gland and who had consented for prostate radiotherapy were identified. All patients provided written informed consent prior to study enrolment. 
The Hunter New England Human Research Ethics Committee approved this study, reference number 2020/STE01574.\nA 120 kV, 2 mm axial CT (SOMATOM CONFIDENCE, Siemens Healthcare, Erlangen, Germany) for treatment planning was acquired in accordance with standard departmental practice and was followed immediately by MRI imaging in the department’s dedicated 3‐Tesla MRI Simulator (MAGNETOM Skyra, Siemens Healthcare, Erlangen, Germany). For the MRI, patients were immobilised with identical radiotherapy positioning equipment on a QFix Insight™ flat couch overlay with a Siemens Body 18 flex coil fixed to a QFix Insight™ Body Coil Holder. The MR imaging protocol consisted of a 3 mm axial T2 TSE, a 2 mm axial T1 gradient echo (GRE) for fiducial marker visualisation and the additional 0.9 mm isotropic 3D T2 SPACE scan. Table 1 lists the specific MRI acquisition parameters.\nMRI acquisition parameters.\n2D Axial\nTSE\n3D SPACE\nTSE\nGRE = gradient echo; TSE = turbo spin echo; SPACE = sampling perfection with application optimised contrast using different flip angle evolution; TE = echo time; TR = repetition time; FOV = field of view.\nAll scans were imported into the Eclipse™ treatment planning system (Varian Medical Systems, Palo Alto, CA, USA), and rigid registration of CT, T1 GRE, T2 TSE and T2 SPACE using gold seed fiducial markers was performed for each patient. The clinical target volume (CTV) was contoured by the RO on the CT using the T2 TSE series in a blended window as per current clinical practice. The CTV was duplicated onto each T2‐weighted series. The radiation oncologist created a reference urethral PRV contour in consensus with a GU clinical specialist radiation therapist using both T2‐weighted series via the prescribed method below. 
The reference contour was copied to all three data sets and approved as a clinically acceptable urethral position.\nFive radiation therapists (with a range of 5–19 years’ experience) sub‐specialising in GU radiation therapy contoured the prostatic urethra of the ten patients on each data set: planning CT, T2 TSE and the T2 SPACE series, using the same prescribed method. The observers were blinded to all other urethra contours. The data sets were contoured in the above sequential order.\nOn each series, the observers were instructed to contour the urethra within the CTV volume in the sagittal window from bladder neck to the apex of the prostate using the 3D brush tool set to a static 2 mm diameter (Fig. 3). The urethra contour was drawn on the sagittal plane, with the axial and coronal planes also available for viewing to help guide the observer in all series. The urethra contours were set as ‘high resolution structures’ in the contour properties. Contouring of the urethra across multiple sagittal slices was permitted if the urethral path appeared convoluted through the prostate gland. Any large transurethral resection of the prostate (TURP) voids were also contoured as part of the prostatic urethra if they fell inside the CTV volume. In order to create a conventional cylindrical urethral PRV, a further 3 mm radial expansion was applied to the contours. Any volume extending outside of the CTV in the superior–inferior direction was cropped. The final ˜8‐mm‐diameter cylindrical urethra PRV structure was then used for analysis.\nSagittal example of one patient data set with observer urethra PRV contours (blue) and reference contour (red) for CT (A), T2 TSE (B) and T2 SPACE (C).\nA further two contours were then created for every observer on each of the three imaging methods for the ten patients. This consisted of (a) the intersecting volume of the observer and reference contour (A ∩ B) and (b) the union of the observer contour and the reference contour (A ∪ B). 
The diagram in Figure 4 represents the volumes created for each observer contour vs the reference contour.\nShaded regions representing volumes created for DSC scoring; (a) intersecting volume of observer & reference contour (A ∩ B) & (b) union of observer and reference contour (A ∪ B).\nThe volume of the intersection and union contours was recorded. A Dice Similarity Coefficient (DSC) was calculated using the equation: DSC = 2(A ∩ B) / ((A ∪ B) + (A ∩ B)), which is equivalent to the standard form 2|A ∩ B| / (|A| + |B|). The DSC score was then used to assess the inter‐observer similarity of the identified urethral volumes. DSC comparisons have been widely used as a metric to evaluate spatial overlap between multiple volumes in radiation oncology settings.21, 22, 23, 24 DSC scores are displayed as a value between 0 – representing no spatial overlap, and 1 – representing perfect spatial overlap. A DSC score of >0.70 has been reported as demonstrating ‘good’ spatial and volumetric similarity.25\n\nTwo‐factor statistical analysis was performed using Friedman’s two‐way repeated ANOVA to assess main effect difference, based on ten patients and three image acquisition types for five independent observers. Subsequent post hoc multiple comparison was performed using Fisher’s Least Significant Difference test to compare each pair of the three different imaging methods.", "The mean DSC of the observer vs reference contours was 0.47 for CT, 0.62 for T2 TSE and 0.78 for T2 SPACE, as shown in Table 2. The calculated Friedman’s two‐way repeated ANOVA P‐value of <0.001 suggests that there is a significant overall difference between the three groups. DSC scores improved from CT to T2 TSE (mean DSC improvement = +0.15), and then further improvements were seen in the T2 TSE to T2 SPACE comparison (mean DSC improvement = +0.16).\nMean Dice Similarity Coefficient (DSC) of observer contours (n = 5) vs reference contour.\nPost hoc multiple comparison of the three groups resulted in: CT‐T2 TSE P = 0.23; CT‐T2 SPACE P < 0.001; T2 TSE‐T2 SPACE P = 0.023. 
These results demonstrate a significant difference in mean DSC between the T2 SPACE group and both the T2 TSE and CT groups.\nA graphical representation in Figure 5 shows clear improvements in DSC score for T2 SPACE compared to the conventional imaging series.\nGraphical representation of mean DSC scores showing a reduction in inter‐observer variation for T2 SPACE.", "This study has demonstrated that the improvement in image quality achieved through the use of a 0.9 mm isotropic 3D T2 SPACE sequence can result in less inter‐observer variability in contouring of the prostatic urethra PRV, when compared to conventional CT and diagnostic T2 TSE MRI approaches. Note that no patient had a better mean DSC for the T2 TSE compared to the T2 SPACE approach. The authors acknowledge this study involves a relatively small patient cohort and the results presented pertain to the observers in this study and may not generalise to other observers. However, the consistency and strong statistical evidence of the results suggest it is unlikely that more patients would lead to a change in outcome, particularly in relation to T2 SPACE vs conventional planning CT, where a highly significant difference was observed between mean DSC (P < 0.001).\nWhilst the T2 SPACE series does show reduced inter‐observer variability, contouring the urethra can still require a degree of estimation. In patients with a convoluted urethral path or large body habitus, the urethra may still be difficult to visualise. Conversely, patients with significant TURP voids could likely be contoured accurately with a standard T2 TSE diagnostic series alone, and this was observed in patient #5 in the cohort. During the course of this investigation, the significance of ensuring a quality patient set‐up has been reinforced, with an anatomically straight and level pelvis showing obvious improvement in urethral visualisation. 
Also of note, the increased scan acquisition time of T2 SPACE compared to T2 TSE (2–3 min) can introduce motion artefact, with patient discomfort from bladder filling being a potential factor. Furthermore, contouring on an MR image in the sagittal viewing plane of the TPS may be a novel technique for some as the axial plane has long been the primary viewing window for RT contouring and planning. Therefore, protocol‐based user education is critical to ensure best value is obtained from its implementation.23\n\nThe alternative approach of placing an IDC is also sometimes deployed for ultrahypofractionated regimens, with some protocols exploring cooling of the urethra.13 That said, the rate of moderate‐to‐severe GU toxicity appears to be low with RT doses <40 Gy in 5 fractions delivered without any specific urethral sparing.26, 27 As such, the main advantages of such an approach are likely to be in emerging indications such as boosting of the dominant intraprostatic nodule (DIL), prostate re‐irradiation and further dose escalation such as on virtual HDR boost protocols.28, 29, 30, 31 Patients with intraprostatic recurrence after radiotherapy can be managed with re‐irradiation32 and may benefit from a more reliable approach to urethral delineation as meticulous attention to urethral doses is often mandated in the SBRT retreatment scenario.33 By reducing the inherent uncertainty around the urethral PRV and using a consistent contouring approach, we see potential to more confidently investigate future urethral dose sparing opportunities.\nRecommendations to tailor MRI imaging protocols towards a radiotherapy focus have been reported and have the potential to provide tangible benefit to patients undergoing radiotherapy.20 The installation of a dedicated MRI simulator continues to expedite these adaptations for our department. 
It opens the scope to an MRI only planning process, which is a current research focus in our unit.34 The authors also feel that this small study supports the need for multidisciplinary collaboration to best utilise the MRI for quality improvement in day‐to‐day radiotherapy imaging.35\n", "The introduction of a 0.9 mm isotropic 3D T2 SPACE planning MRI provides improved urethral visualisation and can lead to a marked reduction in inter‐observer variation of prostatic urethral contours compared to conventional planning CT and diagnostic T2 TSE MRI. The improvements in urethral visualisation were best appreciated in the sagittal plane in the TPS. We have adopted the T2 SPACE as a standard contouring sequence for prostate radiotherapy planning in the MRI simulator.", "The authors declare no conflict of interest." ]
[ null, "methods", "results", "discussion", "conclusions", "COI-statement" ]
[ "MRI", "prostate", "radiation therapy", "SBRT", "urethra" ]
Introduction: Genitourinary (GU) toxicities are a common side effect of prostate radiotherapy.1, 2, 3, 4, 5 Attempts at correlating bladder dose to GU toxicity have not shown a consistent relationship.6 Urethral strictures have long been recognised as a complication of prostate brachytherapy, and as such it is routine practice to try to limit dose delivery to the urethra in modern brachytherapy regimens.7, 8 Similarly, strictures around the urethral anastomosis are a common concern amongst urologists regarding adjuvant or salvage radiotherapy following a radical prostatectomy.9 Although the prostatic urethra has not traditionally been defined as an organ at risk (OAR) for prostate external beam radiation therapy (EBRT), a combination of the above evidence and higher dosed schedules suggests that it would be prudent to take steps to accurately identify and limit dose to this structure. Several contemporary ultrahypofractionated prostate EBRT regimes are beginning to call for urethral dose limiting and dose reporting.10, 11, 12 With modern intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques combined with online and real‐time image‐guided radiation therapy (IGRT), the distribution of dose can be controlled so that maximum dose regions do not fall within the prostatic urethral volume whilst still maintaining the minimum therapeutic dose to the entire prostate gland.13 Historically, the urethra has been a difficult OAR to accurately define for radiotherapy planning purposes. Ultrasound‐based studies have shown the cranio‐caudal urethral path and prostatic urethral angle can demonstrate considerable anatomical variations between subjects.14 Whilst traditional diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) provide excellent geometrical delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland itself. 
Modern ultrahypofractionated trials have permitted an estimated contour of the urethral position with a subsequent radial expansion to create a planning organ at risk volume (PRV).10, 11, 12 Our institutional planning process historically involved obtaining diagnostic images from external MRI sources and performing a rigid fiducial registration to the planning CT for contouring. A 3 mm axial T2‐weighted turbo spin echo (TSE) image provides excellent anatomical contrast and is historically used for target volume and OAR definition.15 However, the non‐isotropic voxels restrict high resolution viewing in the treatment planning system (TPS) to the axial plane only, as shown in Figure 1. A conventional diagnostic 3 mm T2 TSE series of the prostate as displayed in Eclipse TPS axially (A), and the subsequent degradation in image quality with coronal (B) and sagittal (C) reconstruction. The implementation of ultrahypofractionated stereotactic prostate treatments in our department highlighted the potential benefits of improved urethral visualisation. We initially employed in‐dwelling catheters (IDC) at the CT simulation session, followed by a diagnostic T2 TSE MRI with the IDC remaining in situ.10 This technique provides clear urethral visualisation for dose limiting, but the benefits are confounded by the invasiveness of the procedure, which carries increased infection risk, and is often not well tolerated by patients.16, 17 This is combined with increased staffing requirements during the simulation sessions. There is also the potential risk with this approach that IDC insertion results in deformation of the natural urethral anatomy,18 which may be problematic given the IDC was not re‐inserted for subsequent treatment appointments. 
Studies have shown that a T2‐weighted MRI sequence can display the prostatic urethra with hyper‐intense contrast compared to the surrounding glandular tissue; however, the voxel size and slice thickness of the diagnostic series did not provide acceptable multiplane resolution for radiotherapy planning and required specialist urogenital radiologist input.17 Recent recommendations that a three‐dimensional (3D) isotropic T2‐weighted axial acquisition is justified for prostate radiation therapy planning have also been published.19, 20 We verified that an isotropic T2 SPACE (Sampling Perfection with Application optimised Contrasts using different flip angle Evolution) sequence could produce a 3D prostate image of satisfactory quality for clinical use by our GU radiation oncologists (ROs). We found a 0.9 mm isotropic scan provided optimal image quality for multiplane RT planning (Fig. 2), with a clinically acceptable acquisition time (˜5‐6 min). A T2 SPACE series of the prostate as displayed in Eclipse TPS. The 0.9 mm isotropic voxel provides consistent resolution in axial (A) and coronal (B) planes with the arrow indicating the urethra on the sagittal image (C). This study investigates the potential impact of implementing an MRI imaging protocol for urethral contouring by assessing inter‐observer variability for radiotherapy planning by comparing 3D T2 SPACE, conventional CT and axial T2 TSE MRI planning series. Methods: Ten male patients with histologically proven prostate carcinoma, an intact prostate gland and who had consented for prostate radiotherapy were identified. All patients provided written informed consent prior to study enrolment. The Hunter New England Human Research Ethics Committee approved this study, reference number 2020/STE01574. 
A 120 kV, 2 mm axial CT (SOMATOM CONFIDENCE, Siemens Healthcare, Erlangen, Germany) for treatment planning was acquired in accordance with standard departmental practice and was followed immediately by MRI imaging in the department’s dedicated 3‐Tesla MRI Simulator (MAGNETOM Skyra, Siemens Healthcare, Erlangen, Germany). For the MRI, patients were immobilised with identical radiotherapy positioning equipment on a QFix Insight™ flat couch overlay with a Siemens Body 18 flex coil fixed to a QFix Insight™ Body Coil Holder. The MR imaging protocol consisted of a 3 mm axial T2 TSE, a 2 mm axial T1 gradient echo (GRE) for fiducial marker visualisation and the additional 0.9 mm isotropic 3D T2 SPACE scan. Table 1 lists the specific MRI acquisition parameters. MRI acquisition parameters. 2D Axial TSE 3D SPACE TSE GRE = gradient echo; TSE = turbo spin echo; SPACE = sampling perfection with application optimised contrast using different flip angle evolution; TE = echo time; TR = repetition time; FOV = field of view. All scans were imported into the Eclipse™ treatment planning system (Varian Medical Systems, Palo Alto, CA, USA), and rigid registration of CT, T1 GRE, T2 TSE and T2 SPACE using gold seed fiducial markers was performed for each patient. The clinical target volume (CTV) was contoured by the RO on the CT using the T2 TSE series in a blended window as per current clinical practice. The CTV was duplicated onto each T2‐weighted series. The radiation oncologist created a reference urethral PRV contour in consensus with a GU clinical specialist radiation therapist using both T2‐weighted series via the prescribed method below. The reference contour was copied to all three data sets and approved as a clinically acceptable urethral position. 
Five radiation therapists (with a range of 5–19 years’ experience) sub‐specialising in GU radiation therapy contoured the prostatic urethra of the ten patients on each data set: planning CT, T2 TSE and the T2 SPACE series, using the same prescribed method. The observers were blinded to all other urethra contours. The data sets were contoured in the above sequential order. On each series, the observers were instructed to contour the urethra within the CTV volume in the sagittal window from bladder neck to the apex of the prostate using the 3D brush tool set to a static 2 mm diameter (Fig. 3). The urethra contour was drawn on the sagittal plane, with the axial and coronal planes also available for viewing to help guide the observer in all series. The urethra contours were set as ‘high resolution structures’ in the contour properties. Contouring of the urethra across multiple sagittal slices was permitted if the urethral path appeared convoluted through the prostate gland. Any large transurethral resection of the prostate (TURP) voids were also contoured as part of the prostatic urethra if they fell inside the CTV volume. In order to create a conventional cylindrical urethral PRV, a further 3 mm radial expansion was applied to the contours. Any volume extending outside of the CTV in the superior–inferior direction was cropped. The final ˜8‐mm‐diameter cylindrical urethra PRV structure was then used for analysis. Sagittal example of one patient data set with observer urethra PRV contours (blue) and reference contour (red) for CT (A), T2 TSE (B) and T2 SPACE (C). A further two contours were then created for every observer on each of the three imaging methods for the ten patients. This consisted of (a) the intersecting volume of the observer and reference contour (A ∩ B) and (b) the union of the observer contour and the reference contour (A ∪ B). The diagram in Figure 4 represents the volumes created for each observer contour vs the reference contour. 
Shaded regions representing volumes created for DSC scoring; (a) intersecting volume of observer & reference contour (A ∩ B) & (b) union of observer and reference contour (A ∪ B). The volume of the intersection and union contours was recorded. A Dice Similarity Coefficient (DSC) was calculated using the equation: DSC = 2(A ∩ B) / ((A ∪ B) + (A ∩ B)), which is equivalent to the standard form 2|A ∩ B| / (|A| + |B|). The DSC score was then used to assess the inter‐observer similarity of the identified urethral volumes. DSC comparisons have been widely used as a metric to evaluate spatial overlap between multiple volumes in radiation oncology settings.21, 22, 23, 24 DSC scores are displayed as a value between 0 – representing no spatial overlap, and 1 – representing perfect spatial overlap. A DSC score of >0.70 has been reported as demonstrating ‘good’ spatial and volumetric similarity.25 Two‐factor statistical analysis was performed using Friedman’s two‐way repeated ANOVA to assess main effect difference, based on ten patients and three image acquisition types for five independent observers. Subsequent post hoc multiple comparison was performed using Fisher’s Least Significant Difference test to compare each pair of the three different imaging methods. Results: The mean DSC of the observer vs reference contours was 0.47 for CT, 0.62 for T2 TSE and 0.78 for T2 SPACE, as shown in Table 2. The calculated Friedman’s two‐way repeated ANOVA P‐value of <0.001 suggests that there is a significant overall difference between the three groups. DSC scores improved from CT to T2 TSE (mean DSC improvement = +0.15), and then further improvements were seen in the T2 TSE to T2 SPACE comparison (mean DSC improvement = +0.16). Mean Dice Similarity Coefficient (DSC) of observer contours (n = 5) vs reference contour. Post hoc multiple comparison of the three groups resulted in: CT‐T2 TSE P = 0.23; CT‐T2 SPACE P < 0.001; T2 TSE‐T2 SPACE P = 0.023. 
These results demonstrate a significant difference in mean DSC between the T2 SPACE group and both the T2 TSE and CT groups. A graphical representation in Figure 5 shows clear improvements in DSC score for T2 SPACE compared to the conventional imaging series. Graphical representation of mean DSC scores showing a reduction in inter‐observer variation for T2 SPACE. Discussion: This study has demonstrated that the improvement in image quality achieved through the use of a 0.9 mm isotropic 3D T2 SPACE sequence can result in less inter‐observer variability in contouring of the prostatic urethra PRV, when compared to conventional CT and diagnostic T2 TSE MRI approaches. Note that no patient had a better mean DSC for the T2 TSE compared to the T2 SPACE approach. The authors acknowledge this study involves a relatively small patient cohort and the results presented pertain to the observers in this study and may not generalise to other observers. However, the consistency and strong statistical evidence of the results suggest it is unlikely that more patients would lead to a change in outcome, particularly in relation to T2 SPACE vs conventional planning CT, where a highly significant difference was observed between mean DSC (P < 0.001). Whilst the T2 SPACE series does show reduced inter‐observer variability, contouring the urethra can still require a degree of estimation. In patients with a convoluted urethral path or large body habitus, the urethra may still be difficult to visualise. Conversely, patients with significant TURP voids could likely be contoured accurately with a standard T2 TSE diagnostic series alone, and this was observed in patient #5 in the cohort. During the course of this investigation, the significance of ensuring a quality patient set‐up has been reinforced, with an anatomically straight and level pelvis showing obvious improvement in urethral visualisation. 
Also of note, the increased scan acquisition time of T2 SPACE compared to T2 TSE (2–3 min) can introduce motion artefact, with patient discomfort from bladder filling being a potential factor. Furthermore, contouring on an MR image in the sagittal viewing plane of the TPS may be a novel technique for some as the axial plane has long been the primary viewing window for RT contouring and planning. Therefore, protocol‐based user education is critical to ensure best value is obtained from its implementation.23 The alternative approach of placing an IDC is also sometimes deployed for ultrahypofractionated regimens, with some protocols exploring cooling of the urethra.13 That said, the rate of moderate‐to‐severe GU toxicity appears to be low with RT doses <40 Gy in 5 fractions delivered without any specific urethral sparing.26, 27 As such, the main advantages of such an approach are likely to be in emerging indications such as boosting of the dominant intraprostatic nodule (DIL), prostate re‐irradiation and further dose escalation such as on virtual HDR boost protocols.28, 29, 30, 31 Patients with intraprostatic recurrence after radiotherapy can be managed with re‐irradiation32 and may benefit from a more reliable approach to urethral delineation as meticulous attention to urethral doses is often mandated in the SBRT retreatment scenario.33 By reducing the inherent uncertainty around the urethral PRV and using a consistent contouring approach, we see potential to more confidently investigate future urethral dose sparing opportunities. Recommendations to tailor MRI imaging protocols towards a radiotherapy focus have been reported and have the potential to provide tangible benefit to patients undergoing radiotherapy.20 The installation of a dedicated MRI simulator continues to expedite these adaptations for our department. 
It opens the scope to an MRI only planning process, which is a current research focus in our unit.34 The authors also feel that this small study supports the need for multidisciplinary collaboration to best utilise the MRI for quality improvement in day‐to‐day radiotherapy imaging.35 Conclusion: The introduction of a 0.9 mm isotropic 3D T2 SPACE planning MRI provides improved urethral visualisation and can lead to a marked reduction in inter‐observer variation of prostatic urethral contours compared to conventional planning CT and diagnostic T2 TSE MRI. The improvements in urethral visualisation were best appreciated in the sagittal plane in the TPS. We have adopted the T2 SPACE as a standard contouring sequence for prostate radiotherapy planning in the MRI simulator. Conflict of Interest: The authors declare no conflict of interest.
Background: The prostatic urethra is an organ at risk for prostate radiotherapy with genitourinary toxicities a common side effect. Many external beam radiation therapy protocols call for urethral sparing, and with modulated radiotherapy techniques, the radiation dose distribution can be controlled so that maximum doses do not fall within the prostatic urethral volume. Whilst traditional diagnostic MRI sequences provide excellent delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland. This study aims to assess whether a high-resolution isotropic 3D T2 MRI series can reduce inter-observer variability in urethral delineation for radiotherapy planning. Methods: Five independent observers contoured the prostatic urethra for ten patients on three data sets: a 2 mm axial CT, a diagnostic 3 mm axial T2 TSE MRI and a 0.9 mm isotropic 3D T2 SPACE MRI. The observers were blinded to each other's contours. A Dice Similarity Coefficient (DSC) score was calculated using the intersection and union of the five observer contours vs an expert reference contour for each data set. Results: The mean DSC of the observer vs reference contours was 0.47 for CT, 0.62 for T2 TSE and 0.78 for T2 SPACE (P < 0.001). Conclusions: The introduction of a 0.9 mm isotropic 3D T2 SPACE MRI for treatment planning provides improved urethral visualisation and can lead to a significant reduction in inter-observer variation in prostatic urethral contouring.
Introduction: Genitourinary (GU) toxicities are a common side effect of prostate radiotherapy.1, 2, 3, 4, 5 Attempts at correlating bladder dose to GU toxicity have not shown a consistent relationship.6 Urethral strictures have long been recognised as a complication of prostate brachytherapy, and as such it is routine practice to try to limit dose delivery to the urethra in modern brachytherapy regimens.7, 8 Similarly, strictures around the urethral anastomosis are a common concern amongst urologists regarding adjuvant or salvage radiotherapy following a radical prostatectomy.9 Although the prostatic urethra has not traditionally been defined as an organ at risk (OAR) for prostate external beam radiation therapy (EBRT), a combination of the above evidence and higher dosed schedules suggests that it would be prudent to take steps to accurately identify and limit dose to this structure. Several contemporary ultrahypofractionated prostate EBRT regimes are beginning to call for urethral dose limiting and dose reporting.10, 11, 12 With modern intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) techniques combined with online and real‐time image‐guided radiation therapy (IGRT), the distribution of dose can be controlled so that maximum dose regions do not fall within the prostatic urethral volume whilst still maintaining the minimum therapeutic dose to the entire prostate gland.13 Historically, the urethra has been a difficult OAR to accurately define for radiotherapy planning purposes. Ultrasound‐based studies have shown the cranio‐caudal urethral path and prostatic urethral angle can demonstrate considerable anatomical variations between subjects.14 Whilst traditional diagnostic computed tomography (CT) and magnetic resonance imaging (MRI) provide excellent geometrical delineation of the prostate, uncertainty often remains as to the true path of the urethra within the gland itself. 
Modern ultrahypofractionated trials have permitted an estimated contour of the urethral position with a subsequent radial expansion to create a planning organ at risk volume (PRV).10, 11, 12 Our institutional planning process historically involved obtaining diagnostic images from external MRI sources and performing a rigid fiducial registration to the planning CT for contouring. A 3 mm axial T2‐weighted turbo spin echo (TSE) image provides excellent anatomical contrast and is historically used for target volume and OAR definition.15 However, the non‐isotropic voxels restrict high resolution viewing in the treatment planning system (TPS) to the axial plane only, as shown in Figure 1.
Figure 1: A conventional diagnostic 3 mm T2 TSE series of the prostate as displayed in Eclipse TPS axially (A), with the subsequent degradation in image quality on coronal (B) and sagittal (C) reconstruction.
The implementation of ultrahypofractionated stereotactic prostate treatments in our department highlighted the potential benefits of improved urethral visualisation. We initially employed in‐dwelling catheters (IDC) at the CT simulation session, followed by a diagnostic T2 TSE MRI with the IDC remaining in situ.10 This technique provides clear urethral visualisation for dose limiting, but the benefits are offset by the invasiveness of the procedure, which carries an increased infection risk, is often not well tolerated by patients16, 17 and adds staffing requirements during the simulation sessions. There is also the potential risk with this approach that IDC insertion results in deformation of the natural urethral anatomy,18 which may be problematic given the IDC was not re‐inserted for subsequent treatment appointments.
Studies have shown that a T2‐weighted MRI sequence can display the prostatic urethra with hyper‐intense contrast compared to the surrounding glandular tissue; however, the voxel size and slice thickness of the diagnostic series did not provide acceptable multiplane resolution for radiotherapy planning and required specialist urogenital radiologist input.17 Recent recommendations that a three‐dimensional (3D) isotropic T2‐weighted axial acquisition is justified for prostate radiation therapy planning have also been published.19, 20 We verified that an isotropic T2 SPACE (Sampling Perfection with Application optimised Contrasts using different flip angle Evolution) sequence could produce a 3D prostate image of satisfactory quality for clinical use by our GU radiation oncologists (ROs). We found a 0.9 mm isotropic scan provided optimal image quality for multiplane RT planning (Fig. 2), with a clinically acceptable acquisition time (~5–6 min).
Figure 2: A T2 SPACE series of the prostate as displayed in Eclipse TPS. The 0.9 mm isotropic voxel provides consistent resolution in the axial (A) and coronal (B) planes, with the arrow indicating the urethra on the sagittal image (C).
This study investigates the potential impact of implementing an MRI protocol for urethral contouring by assessing inter‐observer variability across three planning series: 3D T2 SPACE, conventional CT and axial T2 TSE MRI. Conclusion: The introduction of a 0.9 mm isotropic 3D T2 SPACE planning MRI provides improved urethral visualisation and can lead to a marked reduction in inter‐observer variation of prostatic urethral contours compared to conventional planning CT and diagnostic T2 TSE MRI. The improvements in urethral visualisation were best appreciated in the sagittal plane in the TPS. We have adopted the T2 SPACE as a standard contouring sequence for prostate radiotherapy planning in the MRI simulator.
2,808
275
[ 838 ]
6
[ "t2", "urethral", "space", "tse", "t2 space", "mri", "prostate", "planning", "t2 tse", "urethra" ]
[ "urethral dose", "sequence prostate radiotherapy", "effect prostate radiotherapy", "prostate radiotherapy planning", "urethral dose limiting" ]
[CONTENT] MRI | prostate | radiation therapy | SBRT | urethra [SUMMARY]
[CONTENT] Humans | Magnetic Resonance Imaging | Male | Observer Variation | Prostatic Neoplasms | Radiotherapy Dosage | Radiotherapy Planning, Computer-Assisted | Urethra [SUMMARY]
[CONTENT] urethral dose | sequence prostate radiotherapy | effect prostate radiotherapy | prostate radiotherapy planning | urethral dose limiting [SUMMARY]
[CONTENT] t2 | urethral | space | tse | t2 space | mri | prostate | planning | t2 tse | urethra [SUMMARY]
[CONTENT] dose | prostate | urethral | planning | t2 | urethra | image | radiation | therapy | risk [SUMMARY]
[CONTENT] contour | reference | urethra | volume | reference contour | t2 | ctv | dsc | observer | patients [SUMMARY]
[CONTENT] t2 | dsc | mean | t2 space | space | mean dsc | tse | t2 tse | groups | ct [SUMMARY]
[CONTENT] planning mri | urethral | mri | planning | t2 | urethral visualisation | visualisation | t2 space | space | improvements urethral visualisation [SUMMARY]
[CONTENT] t2 | urethral | space | t2 space | dsc | mri | planning | tse | authors declare conflict | authors declare [SUMMARY]
[CONTENT] ||| ||| ||| T2 [SUMMARY]
[CONTENT] Five | ten | three | 2 mm | CT | 3 mm | T2 TSE | 0.9 mm | T2 ||| ||| five [SUMMARY]
[CONTENT] 0.47 | CT | 0.62 | T2 | TSE | 0.78 | T2 [SUMMARY]
[CONTENT] 0.9 mm | T2 [SUMMARY]
[CONTENT] ||| ||| ||| T2 ||| Five | ten | three | 2 mm | CT | 3 mm | T2 TSE | 0.9 mm | T2 ||| ||| five ||| 0.47 | CT | 0.62 | T2 | TSE | 0.78 | T2 ||| 0.9 mm | T2 [SUMMARY]
Spinach consumption and nonalcoholic fatty liver disease among adults: a case-control study.
33933019
Spinach is rich in antioxidants and polyphenols and has shown protective effects against liver diseases in experimental studies. We aimed to assess the association between dietary intake of spinach and odds of nonalcoholic fatty liver disease (NAFLD) in a case-control study among Iranian adults.
BACKGROUND
Totally 225 newly diagnosed NAFLD patients and 450 controls, aged 20-60 years, were recruited in this study. Participants' dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ). The logistic regression test was used for assessing the association between total, raw, and boiled dietary spinach with the odds of NAFLD.
METHODS
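The tertile-based exposure coding used in the logistic regression analysis can be sketched as below; the cut-point convention (lower third vs middle vs upper third of energy-adjusted intake) and the sample values are illustrative assumptions, not the study's data:

```python
# Energy-adjusted spinach intake (g/day per 1000 kcal) split into tertiles,
# as described in Methods. Cut-point handling here is one common convention.
def tertile_cutpoints(values):
    s = sorted(values)
    n = len(s)
    return s[n // 3], s[2 * n // 3]

def assign_tertile(value, cut1, cut2):
    if value < cut1:
        return 1
    if value < cut2:
        return 2
    return 3

# Illustrative (spinach g/day, reported energy kcal/day) pairs
intakes = [(1.7, 2369), (2.3, 2227), (5.6, 2000), (0.7, 1800), (3.6, 2500), (1.0, 2100)]
adjusted = [g / (kcal / 1000) for g, kcal in intakes]
c1, c2 = tertile_cutpoints(adjusted)
print([assign_tertile(v, c1, c2) for v in adjusted])  # [2, 2, 3, 1, 3, 1]
```

The resulting tertile label (with T1 as the reference category) is what enters the logistic regression as the exposure variable.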
The mean (SD) age and BMI of participants (53% male) were 38.1 (8.8) years and 26.8 (4.3) kg/m2, respectively. In the final model adjusted for potential confounders, the odds (95% CI) of NAFLD in individuals in the highest tertile of daily total and raw spinach intake were [0.36 (0.19-0.71), P_trend = 0.001] and [0.47 (0.24-0.89), P_trend = 0.008], respectively, compared with those in the lowest tertile. Furthermore, in the adjusted analyses, an inverse association was observed between the highest yearly intake versus no raw spinach consumption and odds of NAFLD [(OR 0.41; 95% CI 0.18-0.96), P for trend = 0.013]. However, there was no significant association between higher boiled spinach intake and odds of NAFLD.
RESULTS
The present study found an inverse association between total and raw spinach intake and the odds of NAFLD.
CONCLUSIONS
[ "Adult", "Case-Control Studies", "Diet", "Female", "Humans", "Iran", "Male", "Middle Aged", "Non-alcoholic Fatty Liver Disease", "Spinacia oleracea", "Young Adult" ]
8088717
Background
Nonalcoholic fatty liver disease (NAFLD) refers to the state of accumulation of fat in hepatocytes in persons who do not consume excessive alcohol [1]. This disease includes a wide range of conditions from fatty liver to nonalcoholic steatohepatitis (NASH), fibrosis, and cirrhosis [2]. The NAFLD pathogenesis is defined under the term “Multiple-hit theory” [1], in which several factors including genetic susceptibility, insulin resistance, adipose hormones, gut microbiome, diet, and lifestyle can affect the risk of NAFLD development. For example, it is reported that some gene variants in glutathione-S-transferase [3], glutamate-cysteine ligase [4], peroxisome proliferator-activated receptors (PPARs) [5], etc. are associated with NAFLD risk in the Iranian population. NAFLD is associated with other metabolic abnormalities, such as insulin resistance, high blood glucose level, dyslipidemia, central adiposity, and hypertension [6]. Accordingly, it is believed that NAFLD is the hepatic manifestation of metabolic syndrome. Some underlying conditions can increase NAFLD development risk, including obesity, type 2 diabetes mellitus (T2DM), and older age [7]. The average prevalence of NAFLD among the general population is estimated at 25% worldwide [8]. According to a recent meta-analysis, middle-east and Asian populations have higher NAFLD rates than the global average [9]. The prevalence in Iranian adults is reported between 20 and 50% [6]. Reports show a growing prevalence of NAFLD worldwide, attributed to the upward trend of adverse lifestyle changes, including unhealthy diet, sedentary behavior, and overweight [10]. Different aspects of diet, including dietary patterns, various food groups such as fruits, vegetables, and whole and refined grains, and nutrients like types of fatty acids and fructose, have been investigated in relation to NAFLD risk [11, 12].
Also, the relationship between some single vegetables and chronic diseases, such as carrot and breast cancer [13], potato and diabetes [14], and green leafy vegetables and cardiovascular disease (CVD) [15], has received particular attention recently. In line with these studies, the role of spinach, a broadleaf green rich in nutrients such as folates, vitamins A, C, and K, minerals such as iron, magnesium, and manganese, and polyphenols, especially lutein, zeaxanthin, and β-carotene, has been considered in relation to the risk of NAFLD [16]. Plenty of interventional and experimental studies have investigated spinach’s antioxidant and anti-inflammatory properties [16, 17]. To the best of our knowledge, although the association of dietary spinach with the risk of NAFLD has not been assessed in observational studies, it has been shown that moderate intake of spinach may have protective effects against DNA oxidation [18]. Accordingly, it may be expected that spinach can reduce chronic disease risk, mostly related to oxidative stress. For example, some previous studies with controversial results have investigated the association of spinach with the odds of breast (BrCa) [19, 20] and prostate cancer [21]. Furthermore, arterial stiffness [22], intrahepatic stones [23], and gallstones [24] are other conditions against which spinach has been reported to have a protective effect. A recent animal study has shown that a high spinach intake significantly reduces the adverse effects of a high-fat diet on the gut microbiome, blood glucose, lipid profile, and cholesterol accumulation in the liver [25]. Two experimental studies have also shown the anti-hyperlipidemic effects of spinach in rats fed a high-cholesterol diet [26] and its anti-inflammatory effects in rats with a regular diet [27]. Using common foods like spinach to improve diet quality may provide a simple and inexpensive way to reduce the risk of chronic diseases such as NAFLD.
However, whether individuals with higher dietary spinach intake have a different risk of developing NAFLD than those with lower intake has not been assessed in previous studies. Besides, previous studies have shown that heating vegetables can have beneficial or detrimental effects on their nutritional value. Accordingly, there is some evidence of higher bioavailability and stability of active nutrients such as phytochemicals from cooked spinach compared to raw spinach [28, 29], suggesting that the two methods of consuming spinach may differ in their effect on liver status. Therefore, we sought to investigate the association between dietary intake of spinach, raw, boiled, and total, and NAFLD risk in a case–control study among Iranian adults.
null
null
Results
The mean (SD) age and BMI of participants (53% male) were 38.1 (8.8) years and 26.8 (4.3) kg/m2, respectively. Table 1 shows the general characteristics and dietary intakes of cases and controls. Smoking was more common among NAFLD patients than controls (p value = 0.006), and was also more common among men than women (p value = 0.006, data not shown). NAFLD patients had higher BMI (p value < 0.001) and SES scores (p value = 0.043) and lower physical activity (p value < 0.001) than the control group. The NAFLD patients also had a higher dietary intake of energy (p value = 0.006) and high-fat dairy products (p value < 0.001), whereas they consumed fewer vegetables (p value = 0.001), less total and raw spinach (p value = 0.001), and less boiled spinach (p value = 0.046) than controls. There were no significant differences between the two groups in age and sex distribution or in dietary intakes of carbohydrate, protein, fat, fiber, whole grains, fruits, nuts, and legumes (P > 0.05).

Table 1. Characteristics and dietary intakes among cases and controls

                                      Cases (n = 225)    Controls (n = 450)   P value
Age (years)                           38.6 ± 8.7         37.8 ± 8.9           0.293
Male, n (%)                           125 (55.6)         233 (51.8)           0.354
BMI (kg/m2)                           30.5 ± 4.0         25.0 ± 3.0           < 0.001
Smoking, n (%)                        16 (7.1)           12 (2.7)             0.006
Physical activity (MET/min/week)      1119 ± 616         1590 ± 949           < 0.001
SES, n (%)                                                                    0.043
  Low                                 65 (28.9)          158 (35.1)
  Middle                              104 (46.2)         206 (45.8)
  High                                56 (24.9)          86 (19.1)
Macro- and micronutrients
  Energy intake (kcal/day)            2369 ± 621         2227 ± 645           0.006
  Carbohydrate (% of energy)          55.9 ± 7.4         55.9 ± 6.5           0.917
  Protein (% of energy)               13.2 ± 2.4         13.2 ± 2.2           0.941
  Fat (% of energy)                   30.7 ± 7.4         30.8 ± 6.5           0.927
  Fiber (g/1000 kcal)                 16.7 ± 8.3         15.8 ± 6.4           0.154
  Calcium (mg/1000 kcal)              528 ± 163          533 ± 153            0.723
  Sodium (mg/1000 kcal)               1990 ± 1960        2040 ± 1385          0.736
  Potassium (mg/1000 kcal)            1546 ± 365         1596 ± 373           0.093
Food groups
  Whole grains (g/day)                64.0 (30.7–115.3)  53.8 (25.1–112.8)    0.711
  High-fat dairy products (g/day)     237 ± 153          118 ± 107            < 0.001
  Fruits (g/day)                      327 ± 227          318 ± 228            0.628
  Vegetables (g/day)                  262 ± 138          302 ± 148            0.001
  Nuts and legumes (g/day)            22.1 ± 21.1        21.2 ± 18.5          0.596
  Red meat (g/day)                    18.1 ± 16.2        18.9 ± 17.4          0.577
  Total spinach (g/day)               1.7 (0.7–3.6)      2.3 (1.0–5.6)        0.001
  Raw spinach (g/day)                 0.7 (0.2–1.8)      1.0 (0.5–3.0)        0.001
  Boiled spinach (g/day)              0.4 (0.1–1.5)      0.6 (0.3–1.8)        0.046
  Raw spinach (serving/year)          2.5 (1.2–7.6)      9.1 (3.0–22.8)       < 0.001
  Boiled spinach (serving/year)       1.9 (0.6–6.3)      12.0 (6.0–36.5)      0.046

P values were computed using the independent sample t test and chi-square test for continuous and categorical variables, respectively.

Table 2 shows the association between tertiles of total, raw, and boiled spinach consumption (g/day) and odds of NAFLD. In the crude model, all three spinach categories in the highest compared with the lowest tertile were associated with lower odds of NAFLD. In the final model adjusted for confounding variables, including BMI, physical activity, smoking, SES, and dietary intake of energy, high-fat dairy, and other vegetables (except spinach), the odds ratios for NAFLD in the highest compared to the lowest tertile of total and raw spinach were [(OR 0.36; 95% CI 0.19–0.71), (P for trend = 0.001)] and [(OR 0.47; 95% CI 0.24–0.89), (P for trend = 0.008)], respectively. However, in the final adjusted model, the intake of boiled spinach was not significantly associated with the odds of NAFLD [(OR 0.76; 95% CI 0.42–1.38), P for trend = 0.508].
Table 2. Odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD based on tertiles of dietary spinach

                                        T1          T2                T3                P-trend
Total spinach
  Median intake (g/day per 1000 kcal)   0.35        1.09              3.67
  NAFLD/control                         101/150     83/150            41/150
  Crude model                           1.00 (Ref)  0.80 (0.55–1.16)  0.41 (0.26–0.62)  < 0.001
  Model 1†                              1.00 (Ref)  0.91 (0.55–1.52)  0.33 (0.18–0.59)  < 0.001
  Model 2‡                              1.00 (Ref)  1.13 (0.63–2.00)  0.36 (0.19–0.71)  0.001
Raw spinach
  Median intake (g/day per 1000 kcal)   0.12        0.49              1.92
  NAFLD/control                         97/149      83/151            45/150
  Crude model                           1.00 (Ref)  0.81 (0.56–1.18)  0.45 (0.29–0.68)  < 0.001
  Model 1†                              1.00 (Ref)  0.90 (0.54–1.51)  0.38 (0.22–0.68)  0.001
  Model 2‡                              1.00 (Ref)  1.22 (0.68–2.19)  0.47 (0.24–0.89)  0.008
Boiled spinach
  Median intake (g/day per 1000 kcal)   0.06        0.30              1.14
  NAFLD/control                         107/148     51/152            67/150
  Crude model                           1.00 (Ref)  0.45 (0.30–0.68)  0.62 (0.42–0.91)  0.100
  Model 1†                              1.00 (Ref)  0.62 (0.36–1.07)  0.67 (0.40–1.13)  0.242
  Model 2‡                              1.00 (Ref)  0.72 (0.39–1.31)  0.76 (0.42–1.38)  0.508

† Model 1: adjusted for BMI, physical activity, smoking, SES, and energy intake.
‡ Model 2: additionally adjusted for dietary intake of high-fat dairy and other vegetables except spinach.

The association of yearly serving intake of raw and boiled spinach with the odds of NAFLD, comparing tertiles of consumers with those who had no spinach intake in the last year, is presented in Table 3. In the crude model, the odds (95% CI) of NAFLD in subjects in the highest tertile of raw and boiled spinach were 0.56 (0.33–0.95), P for trend = 0.008, and 0.60 (0.37–0.99), P for trend = 0.072, respectively, compared with those who had no consumption.
In the final model, after adjusting for potential confounders, the odds (95% CI) of NAFLD in individuals in the highest tertile of raw spinach compared with those who had no consumption remained significant [(OR 0.41; 95% CI 0.18–0.96), (P for trend = 0.013)]. However, in the final model, the yearly serving intake of boiled spinach was not associated with the odds of NAFLD.

Table 3. Odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD according to yearly intake of raw and boiled spinach

                                 0           T1                T2                T3                P-trend
Raw spinach
  Median intake (serving/year)   0           6.0               13.5              39.1
  NAFLD/control                  34/57       55/104            84/139            52/150
  Crude model                    1.00 (Ref)  0.89 (0.52–1.53)  1.00 (0.60–1.65)  0.56 (0.33–0.95)  0.008
  Model 1†                       1.00 (Ref)  0.66 (0.32–1.38)  0.96 (0.48–1.92)  0.35 (0.17–0.73)  0.002
  Model 2‡                       1.00 (Ref)  0.76 (0.34–1.72)  1.12 (0.51–2.44)  0.41 (0.18–0.96)  0.013
Boiled spinach
  Median intake (serving/year)   0           1.3               3.2               12.0
  NAFLD/control                  47/67       62/113            61/135            55/135
  Crude model                    1.00 (Ref)  0.83 (0.51–1.35)  0.66 (0.40–1.07)  0.60 (0.37–0.99)  0.072
  Model 1†                       1.00 (Ref)  0.87 (0.44–1.72)  0.88 (0.45–1.71)  0.66 (0.34–1.30)  0.211
  Model 2‡                       1.00 (Ref)  0.72 (0.33–1.57)  0.83 (0.39–1.76)  0.69 (0.32–1.47)  0.479

† Model 1: adjusted for BMI, physical activity, smoking, SES, and energy intake.
‡ Model 2: additionally adjusted for dietary intake of high-fat dairy and other vegetables except spinach.
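The crude (unadjusted) ORs in Table 2 follow directly from the case/control counts in each tertile. As a stdlib-only sketch, the standard log-odds (Woolf) confidence interval reproduces the reported highest-vs-lowest tertile estimate for total spinach; the function name is illustrative:

```python
import math

def crude_odds_ratio(cases_exp, controls_exp, cases_ref, controls_ref):
    """Crude OR for an exposure tertile vs the reference tertile,
    with a 95% CI from the standard log-odds (Woolf) standard error."""
    or_ = (cases_exp * controls_ref) / (controls_exp * cases_ref)
    se = math.sqrt(1 / cases_exp + 1 / controls_exp + 1 / cases_ref + 1 / controls_ref)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Highest vs lowest tertile of total spinach (Table 2): 41/150 vs 101/150
or_, lo, hi = crude_odds_ratio(41, 150, 101, 150)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 0.41 (95% CI 0.26-0.62)
```

The adjusted Model 1/Model 2 estimates additionally condition on the listed covariates and therefore require fitting the full multivariable logistic regression rather than this 2x2 calculation.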
Conclusions
The present study found an inverse association between total and raw spinach intake and the odds of NAFLD. However, there was no significant association between higher boiled spinach intake and odds of NAFLD. Spinach is one of the richest sources of ingredients such as polyphenols and antioxidants. If its beneficial effects on chronic disease are confirmed in future studies, it could easily be used as a powder to enrich the nutritional value of homemade foods or products such as dairy or other foods. We suggest that our hypothesis of an association between dietary spinach and NAFLD odds be examined in further studies with stronger designs, such as large cohort studies and clinical trials.
[ "Background", "Methods", "Study population", "Dietary assessment", "Anthropometric measurements", "Assessment of other variables", "Statistical analysis" ]
[ "Nonalcoholic fatty liver disease (NAFLD) refers to the state of accumulation of fat in hepatocytes in persons who do not consume excessive alcohol [1]. This disease includes a wide range of conditions from fatty liver to nonalcoholic steatohepatitis (NASH), fibrosis, and cirrhosis [2]. The NAFLD pathogenesis is defined under the term “Multiple-hit theory” [1], in which several factors including genetic susceptibility, insulin resistance, adipose hormones, gut microbiome, diet, and lifestyle can affect the risk of NAFLD development. For example, it is reported some gene variants in glutathione-S-transferase [3], glutamate-cysteine ligase [4], peroxisome proliferator-activated receptors (PPARs) [5], etc. are associated with NAFLD risk in the Iranian population.\nNAFLD is associated with other metabolic abnormalities, such as insulin resistance, high blood glucose level, dyslipidemia, central adiposity, and hypertension [6]. Accordingly, it is believed that NAFLD is the hepatic manifestation of metabolic syndrome. Some underlying conditions can increase NAFLD development risk, including obesity, type 2 diabetes mellitus (T2DM), and older age [7].\nThe average prevalence of NAFLD among the general population is estimated at 25% worldwide [8]. According to a recent meta-analysis, middle-east and Asian populations have higher NAFLD rates than the global average [9]. The prevalence in Iranian adults is reported between 20 and 50% [6] reports show a growing prevalence of NAFLD worldwide, attributed to the upward trend of adverse lifestyle changes, including unhealthy diet, sedentary behavior, and overweight [10].\nDifferent aspects of diet, including dietary patterns, various food groups such as fruits, vegetables, and whole and refined grains, and nutrients like types of fatty acids and fructose, have been investigated with NAFLD’s risk [11, 12]. 
Also, the relationship between some single vegetables and chronic diseases, such as carrot and breast cancer [13], potato and diabetes [14], green leafy vegetables, and cardiovascular disease (CVD) [15], etc., have received particular attention, recently. In line with these studies, the role of spinach, as a broadleaf green, rich in nutrients, such as folates, vitamin A, C, and K, and minerals such as iron, magnesium, and manganese, and polyphenols, especially lutein, zeaxanthin, and β-carotene have been considered about the risk of NAFLD [16]. Plenty of interventional and experimental studies, have investigated spinach’s antioxidant and anti-inflammatory properties [16, 17]. To the best of our knowledge, although the association of dietary spinach with the risk of NAFLD was not assessed in observational studies, but it is shown that moderate intake of spinach may have protective effects against DNA oxidation [18]. Accordingly, it may be expected that spinach can reduce chronic disease risk, mostly related to oxidative stress. For example, some previous studies with controversial results have investigated the association of spinach with the odds of the breast (BrCa) [19, 20] and prostate cancer [21]. Furthermore, atrial stiffness [22], intrahepatic stone [23], and gallstone [24] are the other disease indicated that spinach has a protective impact against them.\nA recent animal study has shown that a high spinach intake significantly reduces the adverse effects of a high-fat diet on the gut microbiome, blood glucose, lipid profile, and cholesterol accumulation in the liver [25]. Two experimental studies have also shown the anti-hyperlipidemic effects of spinach in rats fed high cholesterol diet [26] and its anti-inflammatory effects in rats with a regular diet [27].\nUsing common foods like spinach to improve diet quality may provide a simple and inexpensive way to reduce the risk of chronic diseases such as NAFLD. 
However, whether individuals receiving higher dietary spinach than those who did not have a different risk of developing NAFLD has not been assessed in previous studies. Besides, previous studies have shown that heating vegetables can have beneficial or detrimental effects on their nutritional status. Accordingly, some evidence of higher bioavailability and stability of active nutrients such as phytochemicals from cooked spinach compared to raw spinach [28, 29], suggesting that the two methods of consuming spinach may be different on the risk of liver status. Therefore, we sought to investigate the association between spinach’s dietary intake, either raw, boiled, and total, and NAFLD risk in a case–control study among Iranian adults.", "Study population The present study was conducted in the Metabolic Liver Disease Research Center as a referral center affiliated to Isfahan University of Medical Sciences with a case–control design. Participants were obtained through convenience sampling. Totally 225 newly diagnosed NAFLD patients and 450 controls, aged 20–60 years, were recruited in this study. NAFLD diagnosis was confirmed by the absence of alcohol consumption and other liver disease etiologies, and an ultrasonography scan of the liver was compatible with NAFLD (Grade II, III as a definite diagnosis). The control group was selected among healthy individuals based on the liver ultrasonography (not suffering from any hepatic steatosis stages).\nAlthough Liver biopsy, recognized as the gold standard for diagnosing and staging fibrosis and inflammation, has significant limitations such as bleeding, sampling error, or interobserver variability and is not readily accepted by all patients. 
Nowadays, several noninvasive techniques are used for the diagnosis of fibrosis and hepatic inflammation, including biochemical and hematological tests, scoring systems combining clinical and laboratory tests, ultrasonography-based tests, and also double-contrast magnetic resonance (MR) imaging, MR elastography, MR spectroscopy, and diffusion-weighted MR imaging [30]. Each method has its benefits and limitations. The results of laboratory and biochemical tests and scores are overlapping and cannot predict necroinflammatory activity. Despite the high accuracy of MR imaging techniques for quantifying the degree of necroinflammation, they are expensive and not widely available in clinical practice [30, 31]. Although the ultrasonography test is operator-dependent and may be difficult in obese patients with a narrow intercostal space, it is low cost, safe, and more accessible. A previous meta-analysis indicated that ultrasonography allows reliable and accurate detection of moderate-severe fatty liver compared to histology [32]. Ultrasound is likely the imaging technique of choice for screening for fatty liver in clinical and population settings, and an experienced physician performed and analysed the ultrasonography in this study. In the present study, patients who were referred for ultrasonography screening for NAFLD, because of an abnormal or slight elevation in liver enzymes, being at risk of metabolic syndrome, or having metabolic syndrome, were evaluated against the study eligibility criteria, and patients who were willing to cooperate were included in the study. The inclusion criteria were: no special diet (due to a particular disease or weight loss) and no history of renal or hepatic diseases (Wilson's disease, autoimmune liver disease, hemochromatosis, virus infection, and alcoholic fatty liver), CVD, diabetes, malignancy, thyroid disorders, or autoimmune disease.
Also, individuals who used potentially hepatotoxic or steatogenic drugs were not included in the current study. Participants who completed less than 35 items of the food frequency questionnaire and those with under or over-reported daily energy intake (≤ 800 or ≥ 4500 kcal/d) were excluded (8 participants) and were replaced. All subjects provided written informed consent before the study enrollment.\nThe present study was conducted in the Metabolic Liver Disease Research Center as a referral center affiliated to Isfahan University of Medical Sciences with a case–control design. Participants were obtained through convenience sampling. Totally 225 newly diagnosed NAFLD patients and 450 controls, aged 20–60 years, were recruited in this study. NAFLD diagnosis was confirmed by the absence of alcohol consumption and other liver disease etiologies, and an ultrasonography scan of the liver was compatible with NAFLD (Grade II, III as a definite diagnosis). The control group was selected among healthy individuals based on the liver ultrasonography (not suffering from any hepatic steatosis stages).\nAlthough Liver biopsy, recognized as the gold standard for diagnosing and staging fibrosis and inflammation, has significant limitations such as bleeding, sampling error, or interobserver variability and is not readily accepted by all patients. Nowadays, several noninvasive techniques, including biochemical and hematological tests, the scoring system using a combination of clinical and laboratory tests, Ultrasonography-based testes, and also double-contrast magnetic resonance (MR) imaging, MR elastography, and MR spectroscopy, and Diffusion-weighted MR imaging are used for diagnosis of fibrosis and hepatic inflammation [30]. Each method has its benefits and limitations. The results of laboratory and biochemical tests and scores are overlapping and cannot predict necroinflammatory activity. 
Dietary assessment Participants’ dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ) [33]. 
The FFQ listed a set of common Iranian foods with standard serving sizes. Participants were asked to report their mean dietary intake during the past year by choosing one of the following categories: never or less than once a month, 3–4 times per month, once a week, 2–4 times per week, 5–6 times per week, once daily, 2–3 times per day, 4–5 times per day, and 6 or more times a day. Portion sizes of each food item were converted into grams using standard Iranian household measures [34]. Daily energy and nutrient intakes for each individual were calculated using the United States Department of Agriculture’s (USDA) Food Composition Table (FCT) [35]. The Iranian FCT was used for some traditional foods that do not exist in the USDA FCT [36]. The consumed food frequencies were then transformed into a daily intake scale. One serving of dietary spinach was defined as a cup of raw spinach (30 g) or a 1/2 cup of cooked-boiled spinach (90 g), respectively (serving sizes calculated using the USDA National Nutrient Database for Standard Reference, http://www.ars.usda.gov/ba/bhnrc/ndl). 
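The frequency-to-daily-intake conversion described above can be sketched as follows. The serving sizes (30 g raw, 90 g cooked-boiled spinach) come from the text; the category-to-frequency midpoints below are an illustrative assumption, since the FFQ itself defines the exact conversion, as is the energy adjustment to grams per 1000 kcal used later in the analysis.

```python
# Sketch of converting FFQ frequency categories to approximate grams/day.
# Category midpoints are assumed for illustration; serving sizes are from
# the text (30 g raw spinach, 90 g cooked-boiled spinach).

FREQ_PER_DAY = {
    "never or less than once a month": 0.0,
    "3-4 times per month": 3.5 / 30,
    "once a week": 1 / 7,
    "2-4 times per week": 3 / 7,
    "5-6 times per week": 5.5 / 7,
    "once daily": 1.0,
    "2-3 times per day": 2.5,
    "4-5 times per day": 4.5,
    "6 or more times a day": 6.0,
}

SERVING_G = {"raw_spinach": 30.0, "boiled_spinach": 90.0}

def grams_per_day(food: str, category: str) -> float:
    """Approximate daily intake in grams for one FFQ item."""
    return FREQ_PER_DAY[category] * SERVING_G[food]

def per_1000_kcal(grams: float, energy_kcal: float) -> float:
    """Energy-adjusted intake (g per 1000 kcal), as used in the tertile analysis."""
    return grams * 1000.0 / energy_kcal

raw = grams_per_day("raw_spinach", "2-4 times per week")  # ~12.9 g/d
print(round(per_1000_kcal(raw, 2300.0), 2))  # -> 5.59
```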
Anthropometric measurements An experienced dietician performed anthropometric measurements. Weight was measured with a standard digital Seca scale (made in Germany), with minimal clothing and without shoes, and recorded to the nearest 100 g. Height was measured with a mounted non-elastic tape measure in a relaxed-shoulder standing position without shoes, to the nearest 0.5 cm. Body mass index (BMI) was computed as weight (kg) divided by height squared (m2).\nAssessment of other variables Information on other variables, including age, sex, marital status, socioeconomic status (SES), and smoking status, was collected using a standard demographic questionnaire. The SES score, as an index of socioeconomic status, was calculated from three variables: family size (≤ 4, > 4 people), education (academic or non-academic), and property (house ownership or not). For each of these variables, participants were given a score of 1 (if their family had ≤ 4 members, they were academically educated, or they owned a house) or a score of 0 (if their family had > 4 members, they had non-academic education, or they leased their property). 
The total SES score was then computed by summing the assigned scores (from a minimum of 0 to a maximum of 3). Participants with a score of 3, a score of 2, and scores of 1 or 0 were classified as high, moderate, and low SES, respectively.\nPhysical activity was measured using the International Physical Activity Questionnaire (IPAQ) through face-to-face interviews. All IPAQ results were expressed as Metabolic Equivalents per week (METs/week).\nStatistical analysis Statistical analysis was performed using the Statistical Package for the Social Sciences, version 21 (SPSS Inc., Chicago, IL, USA). The normality of the data was tested using the Kolmogorov–Smirnov test and histogram charts. 
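The BMI computation and the SES scoring and classification described above amount to a few lines of arithmetic. The sketch below restates them; function names are illustrative, not from any study code.

```python
# Minimal sketch of the BMI and SES computations described in the text.
# SES sums three 0/1 indicators (family size <= 4, academic education,
# house ownership); scores of 3, 2, and 0-1 map to high, moderate, and
# low SES, respectively. Names are illustrative.

def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

def ses_score(family_size: int, academic_education: bool, owns_house: bool) -> int:
    return int(family_size <= 4) + int(academic_education) + int(owns_house)

def ses_class(score: int) -> str:
    if score == 3:
        return "high"
    if score == 2:
        return "moderate"
    return "low"  # scores 0 and 1 are pooled

print(round(bmi(80.0, 175.0), 1))            # -> 26.1
print(ses_class(ses_score(3, True, False)))  # -> moderate
```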
Participants’ baseline characteristics and dietary intakes were expressed as mean ± SD or median (25th–75th interquartile range) for quantitative variables and as frequencies (percentages) for qualitative variables. Data were compared between the two groups by the independent-samples t test for continuous variables and the chi-square test for categorical variables. Logistic regression was used to assess the association of total, raw, and boiled dietary spinach with the odds of NAFLD. To choose potential confounders, we first conducted a univariate analysis of each candidate confounding variable with NAFLD. Variables with a univariate p-value lower than 0.20 were entered into the logistic regression analysis as confounders [37]. Age and sex did not differ between cases and controls; based on the univariate analysis they were not significantly associated with NAFLD and did not substantially change the logistic regression results, so they were not adjusted for in the analysis.\nThe analysis was adjusted for potential confounders, including BMI, physical activity, smoking, SES, and dietary intakes of energy, high-fat dairy, and vegetables other than spinach. The odds ratio (OR) with 95% confidence interval (CI) of NAFLD across tertiles of total, raw, and boiled dietary spinach (grams per 1000 kcal of energy intake) was reported. P-values < 0.05 were considered statistically significant. We also conducted an additional analysis testing the odds of NAFLD across tertiles of yearly serving intake of raw or boiled spinach, with participants who did not consume raw or boiled spinach in the last year as the reference category. 
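The exposure handling described above — splitting energy-adjusted intake into tertiles and reporting an OR with 95% CI — can be sketched as below. The tertile cutpoints and the regression coefficient/standard error are made-up illustrations, not study results; the OR and CI follow the standard exp(beta ± 1.96·SE) transformation of a fitted logistic coefficient.

```python
# Sketch: assign tertiles of energy-adjusted spinach intake (g/1000 kcal)
# and convert an assumed logistic regression coefficient for a tertile
# indicator to an odds ratio with a 95% CI. Numbers are illustrative only.
import math

def tertile(value: float, cutpoints: tuple) -> int:
    """Return tertile 1-3 given the two cutpoints (33rd and 67th percentiles)."""
    t1, t2 = cutpoints
    if value <= t1:
        return 1
    return 2 if value <= t2 else 3

def odds_ratio_ci(beta: float, se: float, z: float = 1.96) -> tuple:
    """OR and 95% CI from a logistic regression coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

print(tertile(5.6, (3.0, 8.0)))          # -> 2
or_, lo, hi = odds_ratio_ci(-0.5, 0.2)   # invented beta and SE
print(round(or_, 2), round(lo, 2), round(hi, 2))  # -> 0.61 0.41 0.9
```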
As only 35 participants (20 controls, 15 cases) reported no total spinach intake, which was too small a sample for the reference category, total spinach was excluded from this additional analysis.", "The present study was conducted with a case–control design in the Metabolic Liver Disease Research Center, a referral center affiliated with Isfahan University of Medical Sciences. Participants were obtained through convenience sampling. In total, 225 newly diagnosed NAFLD patients and 450 controls, aged 20–60 years, were recruited. NAFLD diagnosis was confirmed by the absence of alcohol consumption and other liver disease etiologies, together with a liver ultrasonography scan compatible with NAFLD (Grade II or III as a definite diagnosis). The control group was selected among healthy individuals based on liver ultrasonography (not showing any stage of hepatic steatosis).\nLiver biopsy, recognized as the gold standard for diagnosing and staging fibrosis and inflammation, has significant limitations, such as bleeding, sampling error, and interobserver variability, and is not readily accepted by all patients. Nowadays, several noninvasive techniques are used to diagnose fibrosis and hepatic inflammation, including biochemical and hematological tests, scoring systems that combine clinical and laboratory tests, ultrasonography-based tests, double-contrast magnetic resonance (MR) imaging, MR elastography, MR spectroscopy, and diffusion-weighted MR imaging [30]. Each method has its benefits and limitations. The results of laboratory and biochemical tests and scores overlap and cannot predict necroinflammatory activity. 
Despite the high accuracy of MR imaging techniques for quantifying the degree of necroinflammation, they are expensive and not widely available in clinical practice [30, 31]. Although ultrasonography is operator-dependent and may be difficult in obese patients with narrow intercostal spaces, it is low cost, safe, and more accessible. A previous meta-analysis indicated that ultrasonography allows reliable and accurate detection of moderate-to-severe fatty liver compared with histology [32]. Ultrasound is likely the imaging technique of choice for fatty liver screening in clinical and population settings, and an experienced physician performed and interpreted the ultrasonography in this study.\nIn the present study, patients referred for ultrasonography screening of probable NAFLD (because of abnormal or slightly elevated liver enzymes, being at risk of metabolic syndrome, having metabolic syndrome, etc.) were evaluated against the study eligibility criteria, and patients who were willing to cooperate were included. The inclusion criteria were not following a special diet (due to a particular disease or for weight loss) and having no history of renal or hepatic diseases (Wilson’s disease, autoimmune liver disease, hemochromatosis, viral infection, or alcoholic fatty liver), CVD, diabetes, malignancy, thyroid disorders, or autoimmune diseases. Also, individuals who used potentially hepatotoxic or steatogenic drugs were not included in the current study. Participants who completed fewer than 35 items of the food frequency questionnaire and those with under- or over-reported daily energy intake (≤ 800 or ≥ 4500 kcal/d) were excluded (8 participants) and replaced. All subjects provided written informed consent before study enrollment.", "Participants’ dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ) [33]. 
The FFQ listed a set of common Iranian foods with standard serving sizes. Participants were asked to report their mean dietary intake during the past year by choosing one of the following categories: never or less than once a month, 3–4 times per month, once a week, 2–4 times per week, 5–6 times per week, once daily, 2–3 times per day, 4–5 times per day, and 6 or more times a day. Portion sizes of each food item were converted into grams using standard Iranian household measures [34]. Daily energy and nutrient intakes for each individual were calculated using the United States Department of Agriculture’s (USDA) Food Composition Table (FCT) [35]. The Iranian FCT was used for some traditional foods that do not exist in the USDA FCT [36]. The consumed food frequencies were then transformed into a daily intake scale. One serving of dietary spinach was defined as a cup of raw spinach (30 g) or a 1/2 cup of cooked-boiled spinach (90 g), respectively (serving sizes calculated using the USDA National Nutrient Database for Standard Reference, http://www.ars.usda.gov/ba/bhnrc/ndl).", "An experienced dietician performed anthropometric measurements. Weight was measured with a standard digital Seca scale (made in Germany), with minimal clothing and without shoes, and recorded to the nearest 100 g. Height was measured with a mounted non-elastic tape measure in a relaxed-shoulder standing position without shoes, to the nearest 0.5 cm. Body mass index (BMI) was computed as weight (kg) divided by height squared (m2).", "Information on other variables, including age, sex, marital status, socioeconomic status (SES), and smoking status, was collected using a standard demographic questionnaire. The SES score, as an index of socioeconomic status, was calculated from three variables: family size (≤ 4, > 4 people), education (academic or non-academic), and property (house ownership or not). 
For each of these variables, participants were given a score of 1 (if their family had ≤ 4 members, they were academically educated, or they owned a house) or a score of 0 (if their family had > 4 members, they had non-academic education, or they leased their property). The total SES score was then computed by summing the assigned scores (from a minimum of 0 to a maximum of 3). Participants with a score of 3, a score of 2, and scores of 1 or 0 were classified as high, moderate, and low SES, respectively.\nPhysical activity was measured using the International Physical Activity Questionnaire (IPAQ) through face-to-face interviews. All IPAQ results were expressed as Metabolic Equivalents per week (METs/week).", "Statistical analysis was performed using the Statistical Package for the Social Sciences, version 21 (SPSS Inc., Chicago, IL, USA). The normality of the data was tested using the Kolmogorov–Smirnov test and histogram charts. Participants’ baseline characteristics and dietary intakes were expressed as mean ± SD or median (25th–75th interquartile range) for quantitative variables and as frequencies (percentages) for qualitative variables. Data were compared between the two groups by the independent-samples t test for continuous variables and the chi-square test for categorical variables. Logistic regression was used to assess the association of total, raw, and boiled dietary spinach with the odds of NAFLD. To choose potential confounders, we first conducted a univariate analysis of each candidate confounding variable with NAFLD. Variables with a univariate p-value lower than 0.20 were entered into the logistic regression analysis as confounders [37]. Age and sex did not differ between cases and controls. 
Based on the univariate analysis, these variables were not significantly associated with NAFLD and did not substantially change the logistic regression results, so they were not adjusted for in the analysis.\nThe analysis was adjusted for potential confounders, including BMI, physical activity, smoking, SES, and dietary intakes of energy, high-fat dairy, and vegetables other than spinach. The odds ratio (OR) with 95% confidence interval (CI) of NAFLD across tertiles of total, raw, and boiled dietary spinach (grams per 1000 kcal of energy intake) was reported. P-values < 0.05 were considered statistically significant. We also conducted an additional analysis testing the odds of NAFLD across tertiles of yearly serving intake of raw or boiled spinach, with participants who did not consume raw or boiled spinach in the last year as the reference category. As only 35 participants (20 controls, 15 cases) reported no total spinach intake, which was too small a sample for the reference category, total spinach was excluded from this additional analysis." ]
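The confounder-selection rule in the statistical analysis above — a covariate enters the multivariable logistic model only if its univariate association with NAFLD has p < 0.20 — can be sketched as a one-line filter. The p-values below are invented placeholders for illustration, not study results.

```python
# Sketch of the univariate screening rule (keep covariates with p < 0.20)
# described in the statistical analysis. Example p-values are invented.

def select_confounders(univariate_p: dict, threshold: float = 0.20) -> list:
    """Keep covariates whose univariate p-value is below the threshold."""
    return sorted(name for name, p in univariate_p.items() if p < threshold)

p_values = {"BMI": 0.001, "smoking": 0.15, "age": 0.64, "sex": 0.81}
print(select_confounders(p_values))  # -> ['BMI', 'smoking']
```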
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Dietary assessment", "Anthropometric measurements", "Assessment of other variables", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "Nonalcoholic fatty liver disease (NAFLD) refers to the accumulation of fat in hepatocytes in persons who do not consume excessive alcohol [1]. This disease includes a wide range of conditions, from fatty liver to nonalcoholic steatohepatitis (NASH), fibrosis, and cirrhosis [2]. The pathogenesis of NAFLD is described by the “multiple-hit theory” [1], in which several factors, including genetic susceptibility, insulin resistance, adipose hormones, the gut microbiome, diet, and lifestyle, can affect the risk of NAFLD development. For example, gene variants in glutathione S-transferase [3], glutamate-cysteine ligase [4], peroxisome proliferator-activated receptors (PPARs) [5], etc. have been reported to be associated with NAFLD risk in the Iranian population.\nNAFLD is associated with other metabolic abnormalities, such as insulin resistance, high blood glucose levels, dyslipidemia, central adiposity, and hypertension [6]. Accordingly, NAFLD is believed to be the hepatic manifestation of metabolic syndrome. Some underlying conditions, including obesity, type 2 diabetes mellitus (T2DM), and older age, can increase the risk of developing NAFLD [7].\nThe average prevalence of NAFLD in the general population is estimated at 25% worldwide [8]. According to a recent meta-analysis, Middle Eastern and Asian populations have higher NAFLD rates than the global average [9]. The prevalence in Iranian adults is reported to be between 20 and 50% [6]. Reports show a growing prevalence of NAFLD worldwide, attributed to the upward trend of adverse lifestyle changes, including unhealthy diet, sedentary behavior, and overweight [10].\nDifferent aspects of diet, including dietary patterns, various food groups such as fruits, vegetables, and whole and refined grains, and nutrients such as types of fatty acids and fructose, have been investigated in relation to NAFLD risk [11, 12]. 
Also, the relationship between individual vegetables and chronic diseases, such as carrots and breast cancer [13], potatoes and diabetes [14], and green leafy vegetables and cardiovascular disease (CVD) [15], has received particular attention recently. In line with these studies, the role of spinach, a broadleaf green rich in nutrients such as folates, vitamins A, C, and K, minerals such as iron, magnesium, and manganese, and polyphenols, especially lutein, zeaxanthin, and β-carotene, has been considered in relation to the risk of NAFLD [16]. Plenty of interventional and experimental studies have investigated spinach’s antioxidant and anti-inflammatory properties [16, 17]. To the best of our knowledge, although the association of dietary spinach with the risk of NAFLD has not been assessed in observational studies, it has been shown that moderate intake of spinach may have protective effects against DNA oxidation [18]. Accordingly, spinach may be expected to reduce the risk of chronic diseases largely related to oxidative stress. For example, some previous studies, with controversial results, have investigated the association of spinach with the odds of breast (BrCa) [19, 20] and prostate [21] cancer. Furthermore, arterial stiffness [22], intrahepatic stones [23], and gallstones [24] are other conditions against which spinach has been reported to have a protective impact.\nA recent animal study has shown that high spinach intake significantly reduces the adverse effects of a high-fat diet on the gut microbiome, blood glucose, lipid profile, and cholesterol accumulation in the liver [25]. Two experimental studies have also shown the anti-hyperlipidemic effects of spinach in rats fed a high-cholesterol diet [26] and its anti-inflammatory effects in rats on a regular diet [27].\nUsing common foods like spinach to improve diet quality may provide a simple and inexpensive way to reduce the risk of chronic diseases such as NAFLD. 
However, whether individuals with higher dietary spinach intake have a different risk of developing NAFLD than those with lower intake has not been assessed in previous studies. Besides, previous studies have shown that heating vegetables can have beneficial or detrimental effects on their nutritional value. Accordingly, there is some evidence of higher bioavailability and stability of active nutrients, such as phytochemicals, from cooked spinach compared with raw spinach [28, 29], suggesting that the two methods of consuming spinach may affect liver status differently. Therefore, we sought to investigate the association between dietary spinach intake, whether raw, boiled, or total, and NAFLD risk in a case–control study among Iranian adults.", "Study population The present study was conducted with a case–control design in the Metabolic Liver Disease Research Center, a referral center affiliated with Isfahan University of Medical Sciences. Participants were obtained through convenience sampling. In total, 225 newly diagnosed NAFLD patients and 450 controls, aged 20–60 years, were recruited. NAFLD diagnosis was confirmed by the absence of alcohol consumption and other liver disease etiologies, together with a liver ultrasonography scan compatible with NAFLD (Grade II or III as a definite diagnosis). The control group was selected among healthy individuals based on liver ultrasonography (not showing any stage of hepatic steatosis).\nLiver biopsy, recognized as the gold standard for diagnosing and staging fibrosis and inflammation, has significant limitations, such as bleeding, sampling error, and interobserver variability, and is not readily accepted by all patients. 
Nowadays, several noninvasive techniques are used to diagnose fibrosis and hepatic inflammation, including biochemical and hematological tests, scoring systems that combine clinical and laboratory tests, ultrasonography-based tests, double-contrast magnetic resonance (MR) imaging, MR elastography, MR spectroscopy, and diffusion-weighted MR imaging [30]. Each method has its benefits and limitations. The results of laboratory and biochemical tests and scores overlap and cannot predict necroinflammatory activity. Despite the high accuracy of MR imaging techniques for quantifying the degree of necroinflammation, they are expensive and not widely available in clinical practice [30, 31]. Although ultrasonography is operator-dependent and may be difficult in obese patients with narrow intercostal spaces, it is low cost, safe, and more accessible. A previous meta-analysis indicated that ultrasonography allows reliable and accurate detection of moderate-to-severe fatty liver compared with histology [32]. Ultrasound is likely the imaging technique of choice for fatty liver screening in clinical and population settings, and an experienced physician performed and interpreted the ultrasonography in this study.\nIn the present study, patients referred for ultrasonography screening of probable NAFLD (because of abnormal or slightly elevated liver enzymes, being at risk of metabolic syndrome, having metabolic syndrome, etc.) were evaluated against the study eligibility criteria, and patients who were willing to cooperate were included. The inclusion criteria were not following a special diet (due to a particular disease or for weight loss) and having no history of renal or hepatic diseases (Wilson’s disease, autoimmune liver disease, hemochromatosis, viral infection, or alcoholic fatty liver), CVD, diabetes, malignancy, thyroid disorders, or autoimmune diseases. 
Also, individuals who used potentially hepatotoxic or steatogenic drugs were not included in the current study. Participants who completed fewer than 35 items of the food frequency questionnaire and those with under- or over-reported daily energy intake (≤ 800 or ≥ 4500 kcal/d) were excluded (8 participants) and replaced. All subjects provided written informed consent before study enrollment.\nDietary assessment Participants’ dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ) [33]. 
The FFQ listed a set of common Iranian foods with standard serving sizes. Participants were asked to report their mean dietary intake during the past year by choosing one of the following categories: never or less than once a month, 3–4 times per month, once a week, 2–4 times per week, 5–6 times per week, once daily, 2–3 times per day, 4–5 times per day, and 6 or more times a day. Portion sizes of each food item were converted into grams using standard Iranian household measures [34]. Daily energy and nutrient intakes for each individual were calculated using the United States Department of Agriculture’s (USDA) Food Composition Table (FCT) [35]. The Iranian FCT was used for some traditional foods that do not exist in the USDA FCT [36]. The consumed food frequencies were then transformed into a daily intake scale. One serving of dietary spinach was defined as a cup of raw spinach (30 g) or a 1/2 cup of cooked-boiled spinach (90 g), respectively (serving sizes calculated using the USDA National Nutrient Database for Standard Reference, http://www.ars.usda.gov/ba/bhnrc/ndl). 
Anthropometric measurements An experienced dietician performed anthropometric measurements. Weight was measured with a standard digital Seca scale (made in Germany), with minimal clothing and without shoes, and recorded to the nearest 100 g. Height was measured with a mounted non-elastic tape measure, in a relaxed-shoulder standing position without shoes, to the nearest 0.5 cm. Body mass index (BMI) was computed as weight (kg) divided by height squared (m2).
Assessment of other variables Information on other variables, including age, sex, marital status, socioeconomic status (SES), and smoking status, was collected using a standard demographic questionnaire. The SES score, as an index of socioeconomic status, was calculated from three variables: family size (≤ 4 vs. > 4 people), education (academic vs. non-academic), and housing (house ownership or not). For each of these variables, participants were given a score of 1 (family of ≤ 4 members, academic education, or house ownership) or a score of 0 (family of > 4 members, non-academic education, or leasehold property).
The total SES score was then computed by summing the assigned scores (minimum score of 0 to maximum score of 3). Participants with a score of 3, a score of 2, and a score of 1 or 0 were classified as high, moderate, and low SES, respectively.
Physical activity was measured using the International Physical Activity Questionnaire (IPAQ) through face-to-face interviews. All IPAQ results were expressed as Metabolic Equivalents per week (METs/week).
Statistical analysis Statistical analysis was performed using the Statistical Package for the Social Sciences, version 21 (SPSS Inc., Chicago, IL, USA). The normality of the data was tested using the Kolmogorov-Smirnov test and histogram charts.
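The SES scoring rule described above (three binary indicators summed to 0–3, then collapsed into three categories) can be sketched as a small function; the function and argument names are illustrative, not from the study's actual code.

```python
# Sketch of the SES scoring described in the text (illustrative names).
def ses_score(family_size: int, academic_education: bool, owns_house: bool) -> int:
    """Sum of three binary indicators, giving a score from 0 to 3."""
    return (
        (1 if family_size <= 4 else 0)
        + (1 if academic_education else 0)
        + (1 if owns_house else 0)
    )

def ses_category(score: int) -> str:
    """Score 3 -> high, 2 -> moderate, 1 or 0 -> low."""
    if score == 3:
        return "high"
    if score == 2:
        return "moderate"
    return "low"
```

So a participant from a four-member family with academic education who owns a house scores 3 and is classified as high SES.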
Participants’ baseline characteristics and dietary intakes were expressed as mean ± SD or median (25th–75th interquartile range) for quantitative variables and frequency (percentage) for qualitative variables. Data were compared between the two groups by the independent-sample t test and the chi-square test for continuous and categorical variables, respectively. Logistic regression was used to assess the association of total, raw, and boiled dietary spinach with the odds of NAFLD. We first conducted a univariate analysis of each candidate confounding variable with NAFLD to choose the potential confounders; variables with a p-value lower than 0.20 were entered into the logistic regression analysis as confounders [37]. Age and sex did not differ between cases and controls, were not significantly associated with NAFLD in the univariate analysis, and did not substantially change the logistic regression results, so they were not adjusted for in the analysis.
The analysis was adjusted for potential confounders, including BMI, physical activity, smoking, SES, and dietary intakes of energy, high-fat dairy, and vegetables other than spinach. The odds ratio (OR) with 95% confidence interval (CI) of NAFLD across tertiles of total, raw, and boiled dietary spinach (grams per 1000 kcal of energy intake) was reported. P-values < 0.05 were considered statistically significant. We also conducted an additional analysis testing the odds of NAFLD across tertiles of yearly serving intake of raw or boiled spinach, with participants who did not consume raw or boiled spinach in the last year as the reference category.
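As an illustration of the reported effect measure, a crude odds ratio with a Wald 95% CI can be computed directly from the case/control counts of two tertiles. The helper below is a generic textbook formula, not the SPSS procedure the authors ran; the counts in the example are taken from Table 2 (top vs. bottom tertile of total spinach).

```python
import math

def odds_ratio_ci(cases_exp, controls_exp, cases_ref, controls_ref, z=1.96):
    """Crude odds ratio and Wald 95% CI from 2x2 case/control counts."""
    or_ = (cases_exp * controls_ref) / (controls_exp * cases_ref)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / cases_exp + 1 / controls_exp + 1 / cases_ref + 1 / controls_ref)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Top vs. bottom tertile of total spinach (Table 2): 41/150 cases/controls vs. 101/150.
or_, lo, hi = odds_ratio_ci(41, 150, 101, 150)
# or_ ≈ 0.41 with CI ≈ (0.26, 0.62), matching the crude model in Table 2.
```

The adjusted models additionally condition on the listed confounders, which this simple 2×2 calculation cannot reproduce.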
Because only 35 participants (20 controls, 15 cases) reported no total spinach intake, which was too small a sample for the reference category, total spinach was excluded from this additional analysis.
", "The present study was conducted with a case–control design in the Metabolic Liver Disease Research Center, a referral center affiliated with Isfahan University of Medical Sciences. Participants were obtained through convenience sampling. In total, 225 newly diagnosed NAFLD patients and 450 controls, aged 20–60 years, were recruited. The NAFLD diagnosis was confirmed by the absence of alcohol consumption and other liver disease etiologies, together with a liver ultrasonography scan compatible with NAFLD (grade II or III as a definite diagnosis). The control group was selected among healthy individuals based on liver ultrasonography (not suffering from any stage of hepatic steatosis).
Liver biopsy, recognized as the gold standard for diagnosing and staging fibrosis and inflammation, has significant limitations, such as bleeding, sampling error, and interobserver variability, and is not readily accepted by all patients. Nowadays, several noninvasive techniques, including biochemical and hematological tests, scoring systems combining clinical and laboratory tests, ultrasonography-based tests, double-contrast magnetic resonance (MR) imaging, MR elastography, MR spectroscopy, and diffusion-weighted MR imaging, are used for the diagnosis of fibrosis and hepatic inflammation [30]. Each method has its benefits and limitations. The results of laboratory and biochemical tests and scores overlap and cannot predict necroinflammatory activity.
", "The mean (SD) age and BMI of participants (53% male) were 38.1 (8.8) years and 26.8 (4.3) kg/m2, respectively. Table 1 shows the general characteristics and dietary intakes of cases and controls. Smoking was more prevalent among NAFLD patients than controls (p = 0.006) and among men than women (p = 0.006, data not shown). NAFLD patients also had a higher BMI (p < 0.001), higher SES scores (p = 0.043), and lower physical activity (p < 0.001) than the control group. NAFLD patients had higher dietary intakes of energy (p = 0.006) and high-fat dairy products (p < 0.001), whereas they had lower intakes of vegetables (p = 0.001), total and raw spinach (p = 0.001), and boiled spinach (p = 0.046) than controls.
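The categorical group comparisons above can be reproduced from the counts in Table 1. The helper below is a plain 2×2 Pearson chi-square (df = 1, no continuity correction), shown as a sketch rather than the exact SPSS procedure used; the example uses the smoking counts (16 of 225 cases vs. 12 of 450 controls).

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (df=1, no continuity correction) for the 2x2 table
    [[a, b], [c, d]], with its p-value from the chi-square survival function."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Smoking: 16 smokers among 225 cases vs. 12 among 450 controls (Table 1).
chi2, p = chi_square_2x2(16, 225 - 16, 12, 450 - 12)
# chi2 ≈ 7.45, p ≈ 0.006, consistent with the reported p value.
```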
There were no significant differences between the two groups in age, sex distribution, or dietary intakes of carbohydrate, protein, fat, fiber, whole grains, fruits, nuts, and legumes (P > 0.05).

Table 1 Characteristics and dietary intakes among cases and controls

| | Cases (n = 225) | Controls (n = 450) | P value |
|---|---|---|---|
| Age (years) | 38.6 ± 8.7 | 37.8 ± 8.9 | 0.293 |
| Male, n (%) | 125 (55.6) | 233 (51.8) | 0.354 |
| BMI (kg/m2) | 30.5 ± 4.0 | 25.0 ± 3.0 | < 0.001 |
| Smoking, n (%) | 16 (7.1) | 12 (2.7) | 0.006 |
| Physical activity (MET/min/week) | 1119 ± 616 | 1590 ± 949 | < 0.001 |
| SES, n (%) | | | 0.043 |
| — Low | 65 (28.9) | 158 (35.1) | |
| — Middle | 104 (46.2) | 206 (45.8) | |
| — High | 56 (24.9) | 86 (19.1) | |
| Macro- and micronutrients | | | |
| Energy intake (kcal/day) | 2369 ± 621 | 2227 ± 645 | 0.006 |
| Carbohydrate (% of energy) | 55.9 ± 7.4 | 55.9 ± 6.5 | 0.917 |
| Protein (% of energy) | 13.2 ± 2.4 | 13.2 ± 2.2 | 0.941 |
| Fat (% of energy) | 30.7 ± 7.4 | 30.8 ± 6.5 | 0.927 |
| Fiber (g/1000 kcal) | 16.7 ± 8.3 | 15.8 ± 6.4 | 0.154 |
| Calcium (mg/1000 kcal) | 528 ± 163 | 533 ± 153 | 0.723 |
| Sodium (mg/1000 kcal) | 1990 ± 1960 | 2040 ± 1385 | 0.736 |
| Potassium (mg/1000 kcal) | 1546 ± 365 | 1596 ± 373 | 0.093 |
| Food groups | | | |
| Whole grains (g/day) | 64.0 (30.7–115.3) | 53.8 (25.1–112.8) | 0.711 |
| High-fat dairy products (g/day) | 237 ± 153 | 118 ± 107 | < 0.001 |
| Fruits (g/day) | 327 ± 227 | 318 ± 228 | 0.628 |
| Vegetables (g/day) | 262 ± 138 | 302 ± 148 | 0.001 |
| Nuts and legumes (g/day) | 22.1 ± 21.1 | 21.2 ± 18.5 | 0.596 |
| Red meat (g/day) | 18.1 ± 16.2 | 18.9 ± 17.4 | 0.577 |
| Total spinach (g/day) | 1.7 (0.7–3.6) | 2.3 (1.0–5.6) | 0.001 |
| Raw spinach (g/day) | 0.7 (0.2–1.8) | 1.0 (0.5–3.0) | 0.001 |
| Boiled spinach (g/day) | 0.4 (0.1–1.5) | 0.6 (0.3–1.8) | 0.046 |
| Raw spinach (serving/year) | 2.5 (1.2–7.6) | 9.1 (3.0–22.8) | < 0.001 |
| Boiled spinach (serving/year) | 1.9 (0.6–6.3) | 12.0 (6.0–36.5) | 0.046 |

P values were computed using the independent-sample t test and chi-square test for continuous and categorical variables, respectively.

Table 2 shows the association between tertiles of total, raw, and boiled spinach consumption (g/d) and odds
of NAFLD. In the crude model, all three spinach categories in the highest compared with the lowest tertile were associated with lower odds of NAFLD. In the final model adjusted for confounding variables, including BMI, physical activity, smoking, SES, and dietary intakes of energy, high-fat dairy, and other vegetables (except spinach), the odds ratios for NAFLD in the highest compared with the lowest tertile were (OR 0.36; 95% CI 0.19–0.71; P for trend = 0.001) for total spinach and (OR 0.47; 95% CI 0.24–0.89; P for trend = 0.008) for raw spinach. However, based on the final adjusted model, the intake of boiled spinach was not significantly associated with the odds of NAFLD (OR 0.76; 95% CI 0.42–1.38; P for trend = 0.508).

Table 2 Odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD based on tertiles of dietary spinach

| | T1 | T2 | T3 | P-trend |
|---|---|---|---|---|
| Total spinach | | | | |
| — Median intake (g/day, per 1000 kcal) | 0.35 | 1.09 | 3.67 | |
| — NAFLD/control | 101/150 | 83/150 | 41/150 | |
| — Crude model | 1.00 (Ref) | 0.80 (0.55–1.16) | 0.41 (0.26–0.62) | < 0.001 |
| — Model 1† | 1.00 (Ref) | 0.91 (0.55–1.52) | 0.33 (0.18–0.59) | < 0.001 |
| — Model 2‡ | 1.00 (Ref) | 1.13 (0.63–2.00) | 0.36 (0.19–0.71) | 0.001 |
| Raw spinach | | | | |
| — Median intake (g/day, per 1000 kcal) | 0.12 | 0.49 | 1.92 | |
| — NAFLD/control | 97/149 | 83/151 | 45/150 | |
| — Crude model | 1.00 (Ref) | 0.81 (0.56–1.18) | 0.45 (0.29–0.68) | < 0.001 |
| — Model 1† | 1.00 (Ref) | 0.90 (0.54–1.51) | 0.38 (0.22–0.68) | 0.001 |
| — Model 2‡ | 1.00 (Ref) | 1.22 (0.68–2.19) | 0.47 (0.24–0.89) | 0.008 |
| Boiled spinach | | | | |
| — Median intake (g/day, per 1000 kcal) | 0.06 | 0.30 | 1.14 | |
| — NAFLD/control | 107/148 | 51/152 | 67/150 | |
| — Crude model | 1.00 (Ref) | 0.45 (0.30–0.68) | 0.62 (0.42–0.91) | 0.100 |
| — Model 1† | 1.00 (Ref) | 0.62 (0.36–1.07) | 0.67 (0.40–1.13) | 0.242 |
| — Model 2‡ | 1.00 (Ref) | 0.72 (0.39–1.31) | 0.76 (0.42–1.38) | 0.508 |

†Model 1: adjusted for
BMI, physical activity, smoking, SES, and energy intake. ‡Model 2: additionally adjusted for dietary intakes of high-fat dairy and other vegetables except spinach.

The association of yearly dietary servings of raw and boiled spinach with the odds of NAFLD, across tertiles of participants who consumed these foods compared with those who had no spinach intake in the last year, is presented in Table 3. In the crude model, the odds (95% CI) of NAFLD for subjects in the highest tertile of raw and boiled spinach were 0.56 (0.33–0.95; P for trend = 0.008) and 0.60 (0.37–0.99; P for trend = 0.072) compared with those with no consumption, respectively. In the final model, after adjusting for potential confounders, the odds (95% CI) of NAFLD for individuals in the highest tertile of raw spinach compared with those with no consumption remained significant (OR 0.41; 95% CI 0.18–0.96; P for trend = 0.013). However, in the final model, the yearly serving intake of boiled spinach was not associated with the odds of NAFLD.

Table 3 Odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD according to yearly intake of raw and boiled spinach

| | 0 | T1 | T2 | T3 | P-trend |
|---|---|---|---|---|---|
| Raw spinach | | | | | |
| — Median intake (serving/year) | 0 | 6.0 | 13.5 | 39.1 | |
| — NAFLD/control | 34/57 | 55/104 | 84/139 | 52/150 | |
| — Crude model | 1.00 (Ref) | 0.89 (0.52–1.53) | 1.00 (0.60–1.65) | 0.56 (0.33–0.95) | 0.008 |
| — Model 1† | 1.00 (Ref) | 0.66 (0.32–1.38) | 0.96 (0.48–1.92) | 0.35 (0.17–0.73) | 0.002 |
| — Model 2‡ | 1.00 (Ref) | 0.76 (0.34–1.72) | 1.12 (0.51–2.44) | 0.41 (0.18–0.96) | 0.013 |
| Boiled spinach | | | | | |
| — Median intake (serving/year) | 0 | 1.3 | 3.2 | 12.0 | |
| — NAFLD/control | 47/67 | 62/113 | 61/135 | 55/135 | |
| — Crude model | 1.00 (Ref) | 0.83 (0.51–1.35) | 0.66 (0.40–1.07) | 0.60 (0.37–0.99) | 0.072 |
| — Model 1† | 1.00 (Ref) | 0.87 (0.44–1.72) | 0.88 (0.45–1.71) | 0.66 (0.34–1.30) | 0.211 |
| — Model 2‡ | 1.00 (Ref) | 0.72 (0.33–1.57) | 0.83 (0.39–1.76) | 0.69 (0.32–1.47) | 0.479 |

†Model 1: adjusted for BMI, physical activity, smoking, SES, and energy intake. ‡Model 2: additionally adjusted for dietary intakes of high-fat dairy and other vegetables except spinach.
", "In the present case–control study, we found that higher total and raw spinach intake was associated with lower odds of NAFLD. However, there was no significant association between boiled spinach intake and the odds of NAFLD. In addition, participants in the highest category of raw spinach consumption had lower odds of NAFLD than those with no intake, whereas the highest versus no boiled spinach consumption showed no significant association with NAFLD odds.
To date, no study has examined the association of spinach intake from the usual diet with NAFLD. However, studies with conflicting results have assessed the association of spinach intake with the risk of some chronic diseases [20, 23, 24]. A previous population-based case–control study among American participants showed that individuals with a high annual intake of raw spinach had lower odds of breast cancer than those with no intake; however, the highest intake of boiled spinach versus no intake showed no association with breast cancer. Furthermore, consuming carrots and raw and cooked spinach twice weekly, compared with not consuming them, was associated with 46% lower odds of breast cancer [20]. Another case–control study, among Korean women, showed no relationship between higher spinach intake and breast cancer risk; however, raw and cooked spinach were not analyzed separately. The association between spinach intake and intrahepatic and gallbladder stones was investigated in two case–control studies [23, 24]. Spinach consumption two times weekly or more, versus less than once monthly, among Taiwanese participants was related to 65% and 84% lower odds of intrahepatic stones for men and women, respectively [23].
In contrast, a study conducted in the Netherlands showed no association between spinach intake and gallstone risk [24]. A cohort study assessing the relationship of dietary antioxidants and fruit and vegetable subclasses with prostate cancer risk indicated that a higher intake of vitamin C-rich vegetables, including spinach, pepper, and broccoli, was inversely related to the risk of prostate cancer incidence [21].
Although spinach has not been individually assessed in relation to NAFLD, it has been investigated in relation to liver disease risk as part of healthy diets such as the Mediterranean diet (MD) [38] or the Dietary Approaches to Stop Hypertension (DASH) diet [39], as part of total vegetables, or as one of the main components of non-starchy vegetables, leafy green vegetables, and allium vegetables. Higher adherence to the Mediterranean and DASH diets, which have a high spinach content, was associated with improved liver imaging and liver fibrosis scores and showed an inverse association with liver diseases and related metabolic complications [38, 39]. It has previously been reported that vegetable consumption, especially of beta-carotene-rich vegetables, is associated with lower visceral or liver fat content and improved insulin sensitivity [40, 41]. It has been proposed that leafy green vegetables, which mainly include spinach and lettuce, have protective effects against NAFLD through the prevention of intrahepatic triglyceride (IHTG) accumulation [42] and hepatic steatosis, and by maintaining blood glucose, insulin, and free fatty acids within normal ranges [43]. Furthermore, some evidence supports the protective effects of allium vegetables on NAFLD and other related disorders such as hypertension, type 2 diabetes, and metabolic syndrome (MetS) [44, 45].
Despite limited evidence from human research, several animal studies have reported a possible ameliorative effect of spinach on NAFLD development [25, 46].
It has been shown that spinach consumption significantly reduced the adverse effects of a high-fat diet on blood glucose, the lipid profile, and cholesterol accumulation in the liver [25, 46]. Two animal studies have also observed anti-hyperlipidemic effects in rats fed a high-cholesterol diet [26] and anti-inflammatory effects of spinach, through reduced serum TNF alpha and beta levels, in rats on a regular diet [27].
Several mechanisms have been proposed to explain the protective effects of leafy greens such as spinach on NAFLD; their nitrate and polyphenol contents appear to play a crucial role. Supplementing breakfast with spinach among older women increased plasma values of polyphenols and carotenoids, including lutein, zeaxanthin, and β-carotene, compared with the control group, and to a greater extent than strawberries and red wine [47]. Polyphenols are an important group of bioactive ingredients with many advantageous effects, such as hypolipidemic, anti-inflammatory, anti-fibrotic, and insulin-sensitizing properties [48]. It has also been demonstrated that polyphenols inhibit de novo lipogenesis via SREBP1c downregulation and stimulate β-oxidation in NAFLD models [42]. Nitrate is another important bioactive compound; an estimated 80–95% of dietary nitrate intake is supplied by vegetables, mainly green leafy vegetables such as spinach [49]. Previous studies have demonstrated that dietary nitrate protects against inflammation and oxidative stress through its ability to activate AMPK via a rise in xanthine oxidase-dependent NO production, while cGMP signaling lowers superoxide levels through NADPH oxidases [50]. Wang et al. found that lower serum nitrate levels are directly associated with aging-related liver degeneration, whereas dietary nitrate can restore serum nitrate levels and reverse this process in aging mice [51].
Another animal study reported that spinach nitrate intake could regulate lipid homeostasis, inflammatory status, and endothelial function, making it an excellent dietary constituent for insulin resistance prevention [52].
Boiled spinach showed a non-significant association with NAFLD in the present study. Two reasons may explain this: first, the lower consumption of boiled compared with raw spinach (about 10% by daily gram intake, and about five times by yearly serving intake) in the present study; second, it has previously been observed that the nitrate and total polyphenolic content and the antioxidant activity of some vegetables, including spinach, are reduced during cooking [53, 54].
Compared with other studies investigating spinach and liver disorders, our study has some advantages: we directly assessed the association of spinach intake alone, independent of other vegetables, with NAFLD; we analyzed all types of spinach consumption (total, raw, and cooked); and we compared NAFLD risk among participants in the highest vs. lowest intake and the highest vs. no spinach consumption.
Beyond these advantages, our study has several strengths. This is the first study to investigate the association of spinach intake, as a single dietary component, with the odds of NAFLD. In addition, dietary data were collected by trained interviewers in face-to-face interviews, using a validated and reproducible 168-item food frequency questionnaire (FFQ) [55], which decreases measurement bias.
We also had some limitations. First, causal relationships cannot be established, among other concerns, given the study’s case–control design. Second, the NAFLD diagnosis was based on ultrasonography, whereas liver biopsy is the gold standard and MR imaging techniques are more accurate.
Of course, this issue can be neglected because, given the limitations and complications of biopsy and the high cost and low availability of MR imaging techniques, noninvasive methods such as ultrasonography are reliable and applicable in clinical practice [7]. The third limitation was the lack of matching of cases and controls on three essential variables: age, sex, and BMI; however, there was no significant difference in age or sex between cases and controls, and these variables were adjusted for in the analysis. We had no data on pubertal status, number of pregnancies, hormonal conditions of participants, genetic data, etc. Therefore, despite adjustment for several potential confounders, our study design cannot eliminate all potential confounding, and some residual confounding may have occurred.", "The present study found an inverse association between total and raw spinach intake and the odds of NAFLD. However, there was no significant association between higher boiled spinach intake and the odds of NAFLD. Spinach is one of the richest sources of ingredients such as polyphenols and antioxidants. If its beneficial effects on chronic disease are confirmed in future studies, it could easily be used as a powder to enrich the nutritional value of homemade foods or products such as dairy. We suggest that the hypothesized association between dietary spinach and NAFLD odds be examined in further studies with stronger designs, such as large cohort studies and clinical trials." ]
Keywords: Spinach; Nonalcoholic fatty liver disease
Background: Nonalcoholic fatty liver disease (NAFLD) refers to the accumulation of fat in hepatocytes in persons who do not consume excessive alcohol [1]. This disease includes a wide range of conditions, from fatty liver to nonalcoholic steatohepatitis (NASH), fibrosis, and cirrhosis [2]. NAFLD pathogenesis is described by the "multiple-hit theory" [1], in which several factors, including genetic susceptibility, insulin resistance, adipose hormones, the gut microbiome, diet, and lifestyle, can affect the risk of NAFLD development. For example, variants in genes such as glutathione-S-transferase [3], glutamate-cysteine ligase [4], and peroxisome proliferator-activated receptors (PPARs) [5] have been reported to be associated with NAFLD risk in the Iranian population. NAFLD is associated with other metabolic abnormalities, such as insulin resistance, high blood glucose, dyslipidemia, central adiposity, and hypertension [6]. Accordingly, NAFLD is believed to be the hepatic manifestation of metabolic syndrome. Some underlying conditions can increase the risk of NAFLD development, including obesity, type 2 diabetes mellitus (T2DM), and older age [7]. The average prevalence of NAFLD in the general population is estimated at 25% worldwide [8]. According to a recent meta-analysis, Middle Eastern and Asian populations have higher NAFLD rates than the global average [9]. The prevalence in Iranian adults is reported to be between 20 and 50% [6]. Reports show a growing prevalence of NAFLD worldwide, attributed to the upward trend of adverse lifestyle changes, including unhealthy diet, sedentary behavior, and overweight [10]. Different aspects of diet, including dietary patterns, various food groups such as fruits, vegetables, and whole and refined grains, and nutrients such as types of fatty acids and fructose, have been investigated in relation to NAFLD risk [11, 12].
Also, the relationship between individual vegetables and chronic diseases, such as carrots and breast cancer [13], potatoes and diabetes [14], and green leafy vegetables and cardiovascular disease (CVD) [15], has recently received particular attention. In line with these studies, the role of spinach, a broadleaf green rich in nutrients such as folate and vitamins A, C, and K, minerals such as iron, magnesium, and manganese, and phytochemicals, especially lutein, zeaxanthin, and β-carotene, has been considered in relation to the risk of NAFLD [16]. Plenty of interventional and experimental studies have investigated spinach's antioxidant and anti-inflammatory properties [16, 17]. To the best of our knowledge, the association of dietary spinach with the risk of NAFLD has not been assessed in observational studies, but moderate intake of spinach has been shown to have protective effects against DNA oxidation [18]. Accordingly, spinach might be expected to reduce the risk of chronic diseases that are largely related to oxidative stress. For example, some previous studies, with controversial results, have investigated the association of spinach with the odds of breast (BrCa) [19, 20] and prostate [21] cancer. Furthermore, arterial stiffness [22], intrahepatic stones [23], and gallstones [24] are other conditions against which spinach has shown a protective effect. A recent animal study has shown that a high spinach intake significantly reduces the adverse effects of a high-fat diet on the gut microbiome, blood glucose, lipid profile, and cholesterol accumulation in the liver [25]. Two experimental studies have also shown the anti-hyperlipidemic effects of spinach in rats fed a high-cholesterol diet [26] and its anti-inflammatory effects in rats on a regular diet [27]. Using common foods like spinach to improve diet quality may provide a simple and inexpensive way to reduce the risk of chronic diseases such as NAFLD.
However, whether individuals with higher dietary spinach intake have a different risk of developing NAFLD from those with lower intake has not been assessed in previous studies. Besides, previous studies have shown that heating vegetables can have beneficial or detrimental effects on their nutritional value. There is some evidence of higher bioavailability and stability of active nutrients, such as phytochemicals, in cooked compared with raw spinach [28, 29], suggesting that the two methods of consuming spinach may differ in their effects on liver status. Therefore, we sought to investigate the association between dietary spinach intake (raw, boiled, and total) and NAFLD risk in a case–control study among Iranian adults.

Methods:
Study population: The present study was conducted at the Metabolic Liver Disease Research Center, a referral center affiliated with Isfahan University of Medical Sciences, using a case–control design. Participants were obtained through convenience sampling. In total, 225 newly diagnosed NAFLD patients and 450 controls, aged 20–60 years, were recruited. NAFLD diagnosis was confirmed by the absence of alcohol consumption and other liver disease etiologies, together with a liver ultrasonography scan compatible with NAFLD (grade II or III as a definite diagnosis). The control group was selected among healthy individuals based on liver ultrasonography (not suffering from any stage of hepatic steatosis). Liver biopsy, recognized as the gold standard for diagnosing and staging fibrosis and inflammation, has significant limitations, such as bleeding, sampling error, and interobserver variability, and is not readily accepted by all patients.
Nowadays, several noninvasive techniques, including biochemical and hematological tests, scoring systems combining clinical and laboratory tests, ultrasonography-based tests, and double-contrast magnetic resonance (MR) imaging, MR elastography, MR spectroscopy, and diffusion-weighted MR imaging, are used for the diagnosis of fibrosis and hepatic inflammation [30]. Each method has its benefits and limitations. The results of laboratory and biochemical tests and scores overlap and cannot predict necroinflammatory activity. Despite their high accuracy for quantifying the degree of necroinflammation, MR imaging techniques are expensive and not widely available in clinical practice [30, 31]. Although the ultrasonography test is operator-dependent and may be difficult in obese patients with narrow intercostal spaces, it is low cost, safe, and more accessible. A previous meta-analysis indicated that ultrasonography allows reliable and accurate detection of moderate-to-severe fatty liver compared with histology [32]. Ultrasound is likely the imaging technique of choice for screening for fatty liver in clinical and population settings, and an experienced physician performed and analyzed the ultrasonography. In the present study, patients referred for ultrasonography screening for NAFLD (because of abnormal or slightly elevated liver enzymes, being at risk of metabolic syndrome, having metabolic syndrome, etc.) were evaluated against the study eligibility criteria, and patients willing to cooperate were included. The inclusion criteria were not following a special diet (for a particular disease or for weight loss) and no history of renal or hepatic diseases (Wilson's disease, autoimmune liver disease, hemochromatosis, viral infection, or alcoholic fatty liver), CVD, diabetes, malignancy, thyroid disorders, or autoimmune disorders.
Also, individuals who used potentially hepatotoxic or steatogenic drugs were not included in the current study. Participants who completed less than 35 items of the food frequency questionnaire and those with under or over-reported daily energy intake (≤ 800 or ≥ 4500 kcal/d) were excluded (8 participants) and were replaced. All subjects provided written informed consent before the study enrollment.
Dietary assessment: Participants' dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ) [33].
The FFQ listed a set of common Iranian foods with standard serving sizes. Participants were asked to express their mean dietary intake during the past year by choosing one of the following categories: never or less than once a month, 3–4 times per month, once a week, 2–4 times per week, 5–6 times per week, once daily, 2–3 times per day, 4–5 times per day, and 6 or more times a day. Portion sizes of each food item were converted into grams using standard Iranian household measures [34]. Daily energy and nutrient intakes for each individual were calculated using the United States Department of Agriculture's (USDA) Food Composition Table (FCT) [35]. The Iranian FCT was used for some traditional foods that do not exist in the USDA FCT [36]. The consumed food frequencies were then transformed into a daily intake scale. Each serving of dietary spinach was computed as one cup of raw spinach (30 g) or 1/2 cup of cooked-boiled spinach (90 g), respectively (serving sizes calculated using the USDA National Nutrient Database for Standard Reference, http://www.ars.usda.gov/ba/bhnrc/ndl).
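As an illustration of this frequency-to-grams conversion, the sketch below maps FFQ categories to daily multipliers and applies the spinach serving sizes given above (30 g raw, 90 g boiled). The per-day multipliers are assumed midpoint conventions, not the study's exact values:

```python
# Sketch of the FFQ frequency-to-daily-grams conversion described above.
# The per-day multipliers are assumed midpoint conventions (e.g. "2-4 times
# per week" -> 3/week), not the study's exact values.

PER_DAY = {
    "never or less than once a month": 0.5 / 30,
    "3-4 times per month": 3.5 / 30,
    "once a week": 1 / 7,
    "2-4 times per week": 3 / 7,
    "5-6 times per week": 5.5 / 7,
    "once daily": 1.0,
    "2-3 times per day": 2.5,
    "4-5 times per day": 4.5,
    "6 or more times a day": 6.0,
}

# Serving sizes from the text: 1 cup raw spinach = 30 g, 1/2 cup boiled = 90 g.
PORTION_G = {"raw spinach": 30.0, "boiled spinach": 90.0}

def grams_per_day(item: str, category: str) -> float:
    """Daily intake (g/day) of one FFQ item from its frequency category."""
    return PORTION_G[item] * PER_DAY[category]
```

Under these assumptions, raw spinach reported as "2-4 times per week" would contribute about 12.9 g/day.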
Anthropometric measurements: An experienced dietician performed anthropometric measurements. Weight was measured via a standard digital Seca scale (made in Germany), with minimum clothes and without shoes, and recorded to the nearest 100 g. Height was measured by a mounted non-elastic tape meter in a relaxed shoulder standing position with no shoes to the nearest 0.5 cm. Body mass index (BMI) was computed as weight (kg) divided by height in square meters (m2).
Assessment of other variables: Information on other variables, including age, sex, marital status, socioeconomic status (SES), and smoking status, was collected using a standard demographic questionnaire. The SES score, as an index of socioeconomic status, was calculated based on three variables: family size (≤ 4, > 4 people), education (academic or non-academic), and acquisition (house ownership or not). For each of these variables, participants were given a score of 1 (if their family members were ≤ 4, they were academically educated, or they owned a house) or a score of 0 (if their family members were > 4, they had non-academic education, or they leased their property).
The total SES score was then computed by summing up the assigned scores (minimum SES score of 0 to a maximum score of 3). Participants with a score of 3 were classified as high SES, those with a score of 2 as moderate SES, and those with a score of 1 or 0 as low SES. Physical activity was measured using the International Physical Activity Questionnaire (IPAQ) through face-to-face interviews. All results of the IPAQ were expressed as Metabolic Equivalents per week (METs/week).
Statistical analysis: Statistical analysis was performed using the Statistical Package for the Social Sciences, version 21 (SPSS Inc., Chicago, IL, USA). The normality of the data was tested using the Kolmogorov-Smirnov test and histogram charts.
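The SES scoring and classification described under "Assessment of other variables" can be sketched as follows (function names are illustrative, not from the study):

```python
# Sketch of the SES scoring described above: one point each for family size
# <= 4, academic education, and house ownership; score 3 = high SES,
# 2 = moderate, 0-1 = low. Function names are illustrative.

def ses_score(family_size: int, academic_education: bool, owns_house: bool) -> int:
    return int(family_size <= 4) + int(academic_education) + int(owns_house)

def ses_class(score: int) -> str:
    return {3: "high", 2: "moderate"}.get(score, "low")
```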
Participants' baseline characteristics and dietary intakes were expressed as mean ± SD or median (25th–75th percentile) for quantitative variables and frequency (percentage) for qualitative variables. Data were compared between the two groups by independent-sample t test and chi-square test for continuous and categorical variables, respectively. Logistic regression was used to assess the association between total, raw, and boiled dietary spinach and the odds of NAFLD. We first conducted a univariate analysis of each candidate confounding variable with NAFLD to choose the potential confounders; variables with a p-value below 0.20 were entered into the logistic regression as confounders [37]. Age and sex did not differ between cases and controls; based on the univariate analysis, they were not significantly associated with NAFLD and did not substantially change the logistic regression results, so they were not adjusted for in the analysis. The analysis was adjusted for potential confounders, including BMI, physical activity, smoking, SES, and dietary intake of energy, high-fat dairy, and vegetables other than spinach. The odds ratio (OR) with 95% confidence interval (CI) of NAFLD across tertiles of total, raw, and boiled dietary spinach (grams per 1000 kcal of energy intake) was reported. P-values < 0.05 were considered statistically significant. We also conducted an additional analysis testing the odds of NAFLD across tertiles of yearly serving intake of raw or boiled spinach, with participants who consumed no raw or boiled spinach in the last year as the reference category. As only 35 participants (20 controls, 15 cases) reported no total spinach intake, too small a sample for the reference category, total spinach was excluded from this additional analysis.
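A minimal sketch of the exposure handling described above, i.e. energy adjustment to grams per 1000 kcal followed by tertile assignment; the cutpoint convention (empirical 33rd/67th percentiles via rank indices) is an assumption, not the study's exact procedure:

```python
# Sketch of the exposure handling described above: energy adjustment to
# grams per 1000 kcal, then tertile assignment. The cutpoint convention
# (empirical 33rd/67th percentiles via rank indices) is an assumption.

def per_1000_kcal(intake_g_per_day: float, energy_kcal_per_day: float) -> float:
    return intake_g_per_day / energy_kcal_per_day * 1000.0

def tertile_cutpoints(values):
    s = sorted(values)
    n = len(s)
    return s[n // 3], s[2 * n // 3]

def tertile(value, cutpoints) -> int:
    c1, c2 = cutpoints
    return 1 if value < c1 else (2 if value < c2 else 3)
```

Each participant's spinach intake would be scaled with per_1000_kcal and placed into tertile 1–3, with tertile 1 serving as the reference category in the logistic models.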
Results: The mean (SD) age and BMI of participants (53% male) were 38.1 (8.8) years and 26.8 (4.3) kg/m2, respectively. Table 1 shows the general characteristics and dietary intakes of cases and controls. Smoking was more prevalent among NAFLD patients than controls (p = 0.006) and among men than women (p = 0.006, data not shown). NAFLD patients also had a higher BMI (p = 0.006) and SES score (p = 0.043) and lower physical activity (p < 0.001) than the control group. In addition, NAFLD patients had higher dietary intakes of energy (p = 0.006) and high-fat dairy products (p < 0.001), whereas they consumed fewer vegetables (p = 0.001) and less total and raw spinach (p = 0.001) and boiled spinach (p = 0.046) than controls. There were no significant differences between the two groups in age, sex distribution, or dietary intakes of carbohydrate, protein, fat, fiber, whole grains, fruits, nuts, and legumes (P > 0.05).
Table 1. Characteristics and dietary intakes among cases and controls

Variable | Cases (n = 225) | Controls (n = 450) | P value
Age (years) | 38.6 ± 8.7 | 37.8 ± 8.9 | 0.293
Male, n (%) | 125 (55.6) | 233 (51.8) | 0.354
BMI (kg/m2) | 30.5 ± 4.0 | 25.0 ± 3.0 | < 0.001
Smoking, n (%) | 16 (7.1) | 12 (2.7) | 0.006
Physical activity (MET/min/week) | 1119 ± 616 | 1590 ± 949 | < 0.001
SES, n (%) | | | 0.043
  Low | 65 (28.9) | 158 (35.1) |
  Middle | 104 (46.2) | 206 (45.8) |
  High | 56 (24.9) | 86 (19.1) |
Macro- and micronutrients
Energy intake (kcal/day) | 2369 ± 621 | 2227 ± 645 | 0.006
Carbohydrate (% of energy) | 55.9 ± 7.4 | 55.9 ± 6.5 | 0.917
Protein (% of energy) | 13.2 ± 2.4 | 13.2 ± 2.2 | 0.941
Fat (% of energy) | 30.7 ± 7.4 | 30.8 ± 6.5 | 0.927
Fiber (g/1000 kcal) | 16.7 ± 8.3 | 15.8 ± 6.4 | 0.154
Calcium (mg/1000 kcal) | 528 ± 163 | 533 ± 153 | 0.723
Sodium (mg/1000 kcal) | 1990 ± 1960 | 2040 ± 1385 | 0.736
Potassium (mg/1000 kcal) | 1546 ± 365 | 1596 ± 373 | 0.093
Food groups
Whole grains (g/day) | 64.0 (30.7–115.3) | 53.8 (25.1–112.8) | 0.711
High-fat dairy products (g/day) | 237 ± 153 | 118 ± 107 | < 0.001
Fruits (g/day) | 327 ± 227 | 318 ± 228 | 0.628
Vegetables (g/day) | 262 ± 138 | 302 ± 148 | 0.001
Nuts and legumes (g/day) | 22.1 ± 21.1 | 21.2 ± 18.5 | 0.596
Red meat (g/day) | 18.1 ± 16.2 | 18.9 ± 17.4 | 0.577
Total spinach (g/day) | 1.7 (0.7–3.6) | 2.3 (1.0–5.6) | 0.001
Raw spinach (g/day) | 0.7 (0.2–1.8) | 1.0 (0.5–3.0) | 0.001
Boiled spinach (g/day) | 0.4 (0.1–1.5) | 0.6 (0.3–1.8) | 0.046
Raw spinach (serving/year) | 2.5 (1.2–7.6) | 9.1 (3.0–22.8) | < 0.001
Boiled spinach (serving/year) | 1.9 (0.6–6.3) | 12.0 (6.0–36.5) | 0.046

P values were computed using the independent-sample t test and chi-square test for continuous and categorical variables, respectively.

Table 2 shows the association between tertiles of total, raw, and boiled spinach consumption (g/day) and the odds of NAFLD. In the crude model, the highest compared with the lowest tertile of all three spinach categories was related to lower odds of NAFLD.
In the final model, adjusted for confounding variables including BMI, physical activity, smoking, SES, and dietary intake of energy, high-fat dairy, and other vegetables (except spinach), the odds ratios for NAFLD in the highest compared with the lowest tertile of total spinach and raw spinach were [(OR 0.36; 95% CI 0.19–0.71), (P for trend = 0.001)] and [(OR 0.47; 95% CI 0.24–0.89), (P for trend = 0.008)], respectively. However, based on the final adjusted model, the intake of boiled spinach was not significantly associated with the odds of NAFLD [(OR 0.76; 95% CI 0.42–1.38), P for trend = 0.508].

Table 2. Odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD based on tertiles of dietary spinach

Total spinach | T1 | T2 | T3 | P-trend
Median intake (g/day), per 1000 Kcal | 0.35 | 1.09 | 3.67 |
NAFLD/control | 101/150 | 83/150 | 41/150 |
Crude model | 1.00 (Ref) | 0.80 (0.55–1.16) | 0.41 (0.26–0.62) | < 0.001
Model 1† | 1.00 (Ref) | 0.91 (0.55–1.52) | 0.33 (0.18–0.59) | < 0.001
Model 2‡ | 1.00 (Ref) | 1.13 (0.63–2.00) | 0.36 (0.19–0.71) | 0.001

Raw spinach
Median intake (g/day), per 1000 Kcal | 0.12 | 0.49 | 1.92 |
NAFLD/control | 97/149 | 83/151 | 45/150 |
Crude model | 1.00 (Ref) | 0.81 (0.56–1.18) | 0.45 (0.29–0.68) | < 0.001
Model 1† | 1.00 (Ref) | 0.90 (0.54–1.51) | 0.38 (0.22–0.68) | 0.001
Model 2‡ | 1.00 (Ref) | 1.22 (0.68–2.19) | 0.47 (0.24–0.89) | 0.008

Boiled spinach
Median intake (g/day), per 1000 Kcal | 0.06 | 0.30 | 1.14 |
NAFLD/control | 107/148 | 51/152 | 67/150 |
Crude model | 1.00 (Ref) | 0.45 (0.30–0.68) | 0.62 (0.42–0.91) | 0.100
Model 1† | 1.00 (Ref) | 0.62 (0.36–1.07) | 0.67 (0.40–1.13) | 0.242
Model 2‡ | 1.00 (Ref) | 0.72 (0.39–1.31) | 0.76 (0.42–1.38) | 0.508

†Model 1: adjusted for BMI, physical activity, smoking, SES, and energy intake. ‡Model 2: additionally adjusted for dietary intake of high-fat dairy and other vegetables except spinach.

The association of yearly dietary servings of raw and boiled spinach with the odds of NAFLD, comparing tertiles of consumers with participants who had no spinach intake in the last year, is presented in Table 3. In the crude model, the odds (95% CI) of NAFLD among subjects in the highest tertile of raw and boiled spinach were 0.56 (0.33–0.95), P for trend = 0.008, and 0.60 (0.37–0.99), P for trend = 0.072, respectively, compared with those who had no consumption. In the final model, after adjusting for potential confounders, the odds (95% CI) of NAFLD in individuals in the highest tertile of raw spinach compared with those who had no consumption remained significant [(OR 0.41; 95% CI 0.18–0.96), (P for trend = 0.013)]. However, in the final model, the yearly serving intake of boiled spinach was not associated with the odds of NAFLD.

Table 3. Odds ratios (ORs) and 95% confidence intervals (CIs) for NAFLD according to yearly intake of raw and boiled spinach

Raw spinach | 0 | T1 | T2 | T3 | P-trend
Median intake (serving/year) | 0 | 6.0 | 13.5 | 39.1 |
NAFLD/control | 34/57 | 55/104 | 84/139 | 52/150 |
Crude model | 1.00 (Ref) | 0.89 (0.52–1.53) | 1.00 (0.60–1.65) | 0.56 (0.33–0.95) | 0.008
Model 1† | 1.00 (Ref) | 0.66 (0.32–1.38) | 0.96 (0.48–1.92) | 0.35 (0.17–0.73) | 0.002
Model 2‡ | 1.00 (Ref) | 0.76 (0.34–1.72) | 1.12 (0.51–2.44) | 0.41 (0.18–0.96) | 0.013

Boiled spinach
Median intake (serving/year) | 0 | 1.3 | 3.2 | 12.0 |
NAFLD/control | 47/67 | 62/113 | 61/135 | 55/135 |
Crude model | 1.00 (Ref) | 0.83 (0.51–1.35) | 0.66 (0.40–1.07) | 0.60 (0.37–0.99) | 0.072
Model 1† | 1.00 (Ref) | 0.87 (0.44–1.72) | 0.88 (0.45–1.71) | 0.66 (0.34–1.30) | 0.211
Model 2‡ | 1.00 (Ref) | 0.72 (0.33–1.57) | 0.83 (0.39–1.76) | 0.69 (0.32–1.47) | 0.479

†Model 1: adjusted for BMI, physical activity, smoking, SES, and energy intake. ‡Model 2: additionally adjusted for dietary intake of high-fat dairy and other vegetables except spinach.

Discussion: In the present case–control study, we found that higher total and raw spinach intake was associated with lower odds of NAFLD. However, there was no significant association between boiled spinach intake and the odds of NAFLD. Likewise, participants in the highest category of raw spinach consumption had lower odds of NAFLD than those with no intake, whereas the highest boiled spinach consumption versus no consumption showed no significant association with NAFLD odds. To date, no study has examined the association of spinach intake from the usual diet with NAFLD. However, studies with conflicting results have assessed the association of spinach intake with the risk of some chronic diseases [20, 23, 24]. A previous population-based case–control study among American participants showed that individuals with a high annual intake of raw spinach had lower odds of breast cancer than those with no intake; however, the highest intake of boiled spinach versus no intake showed no association with breast cancer. Furthermore, consuming carrots and raw or cooked spinach twice weekly, compared with not consuming them, was associated with 46% lower odds of breast cancer [20]. Another case–control study, among Korean women, showed no relationship between higher spinach intake and breast cancer risk; however, that study did not analyze raw and cooked spinach separately. The association between spinach intake and intrahepatic and gallbladder stones was investigated in two case–control studies [23, 24]. Spinach consumption twice weekly or more, versus less than once monthly, among Taiwanese participants was associated with 65% and 84% lower odds of intrahepatic stones for men and women, respectively [23]. In contrast, a study conducted in the Netherlands showed no association between spinach intake and gallstone risk [24].
A cohort study assessing the relationship of dietary antioxidants and fruit and vegetable subclasses with prostate cancer risk indicated that a higher intake of vitamin C-rich vegetables, including spinach along with pepper and broccoli, was inversely related to the risk of prostate cancer incidence [21]. Although spinach has not been individually assessed in relation to NAFLD, it has been investigated in relation to liver disease risk as a part of healthy diets such as the Mediterranean diet (MD) [38] or the Dietary Approaches to Stop Hypertension (DASH) diet [39], as part of total vegetables, or as one of the main components of non-starchy vegetables, leafy green vegetables, and allium vegetables. Higher adherence to the Mediterranean and DASH diets, which have a high spinach content, was associated with improvements in liver imaging and liver fibrosis scores and showed an inverse association with liver diseases and related metabolic complications [38, 39]. It has previously been reported that vegetable consumption, especially of beta-carotene-rich vegetables, is associated with lower visceral or liver fat content and improved insulin sensitivity [40, 41]. It has been proposed that leafy green vegetables, chiefly spinach and lettuce, have protective effects against NAFLD through prevention of intrahepatic triglyceride (IHTG) accumulation [42] and hepatic steatosis, and by maintaining blood glucose, insulin, and free fatty acids within normal ranges [43]. Furthermore, some evidence supports the protective effects of allium vegetables on NAFLD and related disorders such as hypertension, type 2 diabetes, and metabolic syndrome (MetS) [44, 45]. Despite limited evidence in human research, several animal studies have reported a possible ameliorative effect of spinach on NAFLD development [25, 46]. It has been shown that spinach consumption significantly reduced the adverse effects of a high-fat diet on blood glucose, lipid profile, and cholesterol accumulation in the liver [25, 46].
Two animal studies have also observed anti-hyperlipidemic effects of spinach in rats fed a high-cholesterol diet [26] and anti-inflammatory effects, through reduced serum TNF alpha and beta levels, in rats on a regular diet [27]. Several mechanisms have been proposed to explain the protective effects of leafy greens such as spinach on NAFLD; their nitrate and polyphenol contents appear to play a crucial role in this relationship. Supplementing breakfast with spinach among older women increased plasma values of polyphenols and carotenoids, including lutein, zeaxanthin, and β-carotene, compared with the control group, with increases greater than those seen with strawberries and red wine [47]. Polyphenols are an important group of bioactive ingredients with many advantageous effects, such as hypolipidemic, anti-inflammatory, anti-fibrotic, and insulin-sensitizing properties [48]. It has also been demonstrated that polyphenols inhibit de novo lipogenesis via SREBP1c downregulation and stimulate β-oxidation in NAFLD models [42]. Nitrate is another important bioactive compound; an estimated 80–95% of its dietary intake is supplied through vegetables, mainly green leafy vegetables like spinach [49]. Previous studies have demonstrated that dietary nitrate protects against inflammation and oxidative stress through its ability to activate AMPK via a rise in xanthine oxidase-dependent NO production, and through cGMP signaling, which lowers superoxide levels generated by NADPH oxidases [50]. Wang et al. found that lower serum nitrate levels are directly associated with aging-related liver degeneration, whereas dietary nitrate can restore serum nitrate levels and reverse this process in aging mice [51]. Another animal study suggested that spinach-derived nitrate intake can regulate lipid homeostasis, inflammatory status, and endothelial function, making it an excellent dietary constituent for insulin resistance prevention [52].
Boiled spinach showed a non-significant association with NAFLD in the present study. Two reasons may explain this. First, consumption of boiled spinach was lower than that of raw spinach in the present study (by about 10% in daily gram intake and about fivefold in yearly servings). Second, it has previously been observed that the nitrate and total polyphenolic contents and antioxidant activity of some vegetables, including spinach, are reduced during cooking [53, 54]. Compared with other studies investigating spinach and liver disorders, our study has some advantages: we directly assessed the association of spinach intake itself, independent of other vegetables, with NAFLD; we analyzed all types of spinach consumption (total, raw, and cooked); and we compared NAFLD risk both between the highest and lowest intakes and between the highest intake and no spinach consumption. Our study also has several strengths; it is the first study to investigate the association of spinach intake as a single dietary component with the odds of NAFLD. Besides, dietary data were collected by trained interviewers in face-to-face interviews using a validated and reproducible 168-item food frequency questionnaire (FFQ) [55], which decreases measurement bias. We also had some limitations. Firstly, causal relationships cannot be established, among other concerns inherent to the study's case–control design. Secondly, NAFLD diagnosis was based on ultrasonography, whereas the gold standard is liver biopsy, and magnetic resonance (MR) imaging techniques are more accurate. This issue can largely be set aside because, given the limitations and complications of biopsy and the high cost and low availability of MR imaging, noninvasive ultrasonography is today reliable and applicable in clinical practice [7].
The third limitation was that cases and controls were not matched on three essential variables, namely age, sex, and BMI; however, there were no significant differences in age and sex between cases and controls, and these variables were adjusted for in the analysis. We had no data on pubertal status, number of pregnancies, hormonal conditions of participants, genetics, etc. Therefore, despite adjustment for several potential confounders, our study design cannot eliminate all potential confounding, and some residual confounding may have occurred.

Conclusions: The present study found an inverse association between total and raw spinach intake and the odds of NAFLD. However, there was no significant association between higher boiled spinach intake and the odds of NAFLD. Spinach is one of the richest sources of ingredients such as polyphenols and antioxidants. If its beneficial effects on chronic disease are confirmed in future studies, it could easily be used, for example as a powder, to enrich the nutritional value of homemade foods or products such as dairy. We suggest that our hypothesis of an association between dietary spinach and NAFLD odds be examined in further studies with greater design power, such as large cohort studies and clinical trials.
Background: Spinach is high in antioxidants and polyphenols and has shown protective effects against liver diseases in experimental studies. We aimed to assess the association between dietary intake of spinach and odds of nonalcoholic fatty liver disease (NAFLD) in a case-control study among Iranian adults. Methods: In total, 225 newly diagnosed NAFLD patients and 450 controls, aged 20-60 years, were recruited in this study. Participants' dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ). Logistic regression was used to assess the association of total, raw, and boiled dietary spinach with the odds of NAFLD. Results: The mean (SD) age and BMI of participants (53% male) were 38.1 (8.8) years and 26.8 (4.3) kg/m2, respectively. In the final model adjusted for potential confounders, the odds (95% CI) of NAFLD in individuals in the highest tertile of daily total and raw spinach intake were [0.36 (0.19-0.71), P_trend = 0.001] and [0.47 (0.24-0.89), P_trend = 0.008], respectively, compared with those in the lowest tertile. Furthermore, in the adjusted analyses, an inverse association was observed between the highest yearly intake versus no raw spinach consumption and odds of NAFLD [(OR 0.41; 95% CI 0.18-0.96), P for trend = 0.013]. However, there was no significant association between higher boiled spinach intake and odds of NAFLD. Conclusions: The present study found an inverse association between total and raw spinach intake and the odds of NAFLD.
Background: Nonalcoholic fatty liver disease (NAFLD) refers to the accumulation of fat in hepatocytes in persons who do not consume excessive alcohol [1]. The disease encompasses a wide range of conditions, from simple fatty liver to nonalcoholic steatohepatitis (NASH), fibrosis, and cirrhosis [2]. NAFLD pathogenesis is described by the "multiple-hit theory" [1], in which several factors, including genetic susceptibility, insulin resistance, adipose hormones, the gut microbiome, diet, and lifestyle, can affect the risk of NAFLD development. For example, gene variants in glutathione-S-transferase [3], glutamate-cysteine ligase [4], peroxisome proliferator-activated receptors (PPARs) [5], etc. have been reported to be associated with NAFLD risk in the Iranian population. NAFLD is associated with other metabolic abnormalities, such as insulin resistance, high blood glucose levels, dyslipidemia, central adiposity, and hypertension [6]. Accordingly, NAFLD is believed to be the hepatic manifestation of metabolic syndrome. Some underlying conditions can increase the risk of NAFLD development, including obesity, type 2 diabetes mellitus (T2DM), and older age [7]. The average prevalence of NAFLD in the general population is estimated at 25% worldwide [8]. According to a recent meta-analysis, Middle Eastern and Asian populations have higher NAFLD rates than the global average [9], and the prevalence in Iranian adults is reported to be between 20 and 50% [6]. Reports show a growing prevalence of NAFLD worldwide, attributed to the upward trend of adverse lifestyle changes, including unhealthy diet, sedentary behavior, and overweight [10]. Different aspects of diet, including dietary patterns, various food groups such as fruits, vegetables, and whole and refined grains, and nutrients such as types of fatty acids and fructose, have been investigated in relation to NAFLD risk [11, 12].
Recently, the relationship between individual vegetables and chronic diseases, such as carrots and breast cancer [13], potatoes and diabetes [14], and green leafy vegetables and cardiovascular disease (CVD) [15], has also received particular attention. In line with these studies, the role of spinach, a broadleaf green rich in nutrients such as folate, vitamins A, C, and K, minerals such as iron, magnesium, and manganese, and bioactive compounds such as lutein, zeaxanthin, and β-carotene, has been considered in relation to the risk of NAFLD [16]. Plenty of interventional and experimental studies have investigated spinach's antioxidant and anti-inflammatory properties [16, 17]. To the best of our knowledge, although the association of dietary spinach with the risk of NAFLD has not been assessed in observational studies, moderate intake of spinach has been shown to have possible protective effects against DNA oxidation [18]. Accordingly, spinach might be expected to reduce the risk of chronic diseases related mostly to oxidative stress. For example, some previous studies, with controversial results, have investigated the association of spinach with the odds of breast (BrCa) [19, 20] and prostate [21] cancer. Furthermore, arterial stiffness [22], intrahepatic stones [23], and gallstones [24] are other conditions against which spinach has been suggested to have a protective effect. A recent animal study showed that high spinach intake significantly reduces the adverse effects of a high-fat diet on the gut microbiome, blood glucose, lipid profile, and cholesterol accumulation in the liver [25]. Two experimental studies have also shown anti-hyperlipidemic effects of spinach in rats fed a high-cholesterol diet [26] and anti-inflammatory effects in rats on a regular diet [27]. Using common foods like spinach to improve diet quality may provide a simple and inexpensive way to reduce the risk of chronic diseases such as NAFLD.
However, whether individuals with higher dietary spinach intake have a different risk of developing NAFLD than those with lower intake has not been assessed in previous studies. Besides, previous studies have shown that heating vegetables can have beneficial or detrimental effects on their nutritional status. Accordingly, there is some evidence of higher bioavailability and stability of active nutrients, such as phytochemicals, from cooked compared with raw spinach [28, 29], suggesting that the two methods of consuming spinach may affect liver status differently. Therefore, we sought to investigate the association between dietary intake of raw, boiled, and total spinach and NAFLD risk in a case–control study among Iranian adults. Conclusions: The present study found an inverse association between total and raw spinach intake and the odds of NAFLD. However, there was no significant association between higher boiled spinach intake and the odds of NAFLD. Spinach is one of the richest sources of ingredients such as polyphenols and antioxidants. If its beneficial effects on chronic disease are confirmed in future studies, it could easily be used, for example as a powder, to enrich the nutritional value of homemade foods or products such as dairy. We suggest that our hypothesis of an association between dietary spinach and NAFLD odds be examined in further studies with greater design power, such as large cohort studies and clinical trials.
Background: Spinach is high in antioxidants and polyphenols and has shown protective effects against liver diseases in experimental studies. We aimed to assess the association between dietary intake of spinach and odds of nonalcoholic fatty liver disease (NAFLD) in a case-control study among Iranian adults. Methods: In total, 225 newly diagnosed NAFLD patients and 450 controls, aged 20-60 years, were recruited in this study. Participants' dietary intakes were collected using a valid and reliable 168-item semi-quantitative food frequency questionnaire (FFQ). Logistic regression was used to assess the association of total, raw, and boiled dietary spinach with the odds of NAFLD. Results: The mean (SD) age and BMI of participants (53% male) were 38.1 (8.8) years and 26.8 (4.3) kg/m2, respectively. In the final model adjusted for potential confounders, the odds (95% CI) of NAFLD in individuals in the highest tertile of daily total and raw spinach intake were [0.36 (0.19-0.71), P_trend = 0.001] and [0.47 (0.24-0.89), P_trend = 0.008], respectively, compared with those in the lowest tertile. Furthermore, in the adjusted analyses, an inverse association was observed between the highest yearly intake versus no raw spinach consumption and odds of NAFLD [(OR 0.41; 95% CI 0.18-0.96), P for trend = 0.013]. However, there was no significant association between higher boiled spinach intake and odds of NAFLD. Conclusions: The present study found an inverse association between total and raw spinach intake and the odds of NAFLD.
8,584
321
[ 882, 3079, 576, 243, 87, 241, 382 ]
10
[ "spinach", "nafld", "intake", "dietary", "study", "liver", "participants", "analysis", "raw", "variables" ]
[ "nafld pathogenesis defined", "fibrosis cirrhosis nafld", "nafld prevention intrahepatic", "metabolic liver disease", "fatty liver disease" ]
null
[CONTENT] Spinach | Nonalcoholic fatty liver disease [SUMMARY]
null
[CONTENT] Spinach | Nonalcoholic fatty liver disease [SUMMARY]
[CONTENT] Spinach | Nonalcoholic fatty liver disease [SUMMARY]
[CONTENT] Spinach | Nonalcoholic fatty liver disease [SUMMARY]
[CONTENT] Spinach | Nonalcoholic fatty liver disease [SUMMARY]
[CONTENT] Adult | Case-Control Studies | Diet | Female | Humans | Iran | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Spinacia oleracea | Young Adult [SUMMARY]
null
[CONTENT] Adult | Case-Control Studies | Diet | Female | Humans | Iran | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Spinacia oleracea | Young Adult [SUMMARY]
[CONTENT] Adult | Case-Control Studies | Diet | Female | Humans | Iran | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Spinacia oleracea | Young Adult [SUMMARY]
[CONTENT] Adult | Case-Control Studies | Diet | Female | Humans | Iran | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Spinacia oleracea | Young Adult [SUMMARY]
[CONTENT] Adult | Case-Control Studies | Diet | Female | Humans | Iran | Male | Middle Aged | Non-alcoholic Fatty Liver Disease | Spinacia oleracea | Young Adult [SUMMARY]
[CONTENT] nafld pathogenesis defined | fibrosis cirrhosis nafld | nafld prevention intrahepatic | metabolic liver disease | fatty liver disease [SUMMARY]
null
[CONTENT] nafld pathogenesis defined | fibrosis cirrhosis nafld | nafld prevention intrahepatic | metabolic liver disease | fatty liver disease [SUMMARY]
[CONTENT] nafld pathogenesis defined | fibrosis cirrhosis nafld | nafld prevention intrahepatic | metabolic liver disease | fatty liver disease [SUMMARY]
[CONTENT] nafld pathogenesis defined | fibrosis cirrhosis nafld | nafld prevention intrahepatic | metabolic liver disease | fatty liver disease [SUMMARY]
[CONTENT] nafld pathogenesis defined | fibrosis cirrhosis nafld | nafld prevention intrahepatic | metabolic liver disease | fatty liver disease [SUMMARY]
[CONTENT] spinach | nafld | intake | dietary | study | liver | participants | analysis | raw | variables [SUMMARY]
null
[CONTENT] spinach | nafld | intake | dietary | study | liver | participants | analysis | raw | variables [SUMMARY]
[CONTENT] spinach | nafld | intake | dietary | study | liver | participants | analysis | raw | variables [SUMMARY]
[CONTENT] spinach | nafld | intake | dietary | study | liver | participants | analysis | raw | variables [SUMMARY]
[CONTENT] spinach | nafld | intake | dietary | study | liver | participants | analysis | raw | variables [SUMMARY]
[CONTENT] nafld | spinach | risk | studies | diet | effects | accordingly | risk nafld | prevalence | studies shown [SUMMARY]
null
[CONTENT] 00 | ref | 00 ref | model | spinach | intake | day | 95 | value | nafld [SUMMARY]
[CONTENT] studies | spinach | spinach intake odds nafld | spinach intake odds | intake odds | intake odds nafld | odds | association | nafld | spinach intake [SUMMARY]
[CONTENT] spinach | nafld | intake | liver | study | dietary | score | analysis | variables | odds [SUMMARY]
[CONTENT] spinach | nafld | intake | liver | study | dietary | score | analysis | variables | odds [SUMMARY]
[CONTENT] ||| Iranian [SUMMARY]
null
[CONTENT] BMI | 53% | 38.1 | 8.8) years | 26.8 | 4.3) kg | m2 ||| 95% | CI | NAFLD | daily | 0.36 | 0.19-0.71 | 0.001 | 0.47 | 0.24-0.89 | 0.008 ||| NAFLD ||| 0.41 | 95% | CI | 0.18-0.96 | 0.013 ||| NAFLD [SUMMARY]
[CONTENT] NAFLD [SUMMARY]
[CONTENT] ||| Iranian ||| 225 | NAFLD | 450 | 20-60 years ||| 168 ||| NAFLD ||| BMI | 53% | 38.1 | 8.8) years | 26.8 | 4.3) kg | m2 ||| 95% | CI | NAFLD | daily | 0.36 | 0.19-0.71 | 0.001 | 0.47 | 0.24-0.89 | 0.008 ||| NAFLD ||| 0.41 | 95% | CI | 0.18-0.96 | 0.013 ||| NAFLD ||| NAFLD [SUMMARY]
[CONTENT] ||| Iranian ||| 225 | NAFLD | 450 | 20-60 years ||| 168 ||| NAFLD ||| BMI | 53% | 38.1 | 8.8) years | 26.8 | 4.3) kg | m2 ||| 95% | CI | NAFLD | daily | 0.36 | 0.19-0.71 | 0.001 | 0.47 | 0.24-0.89 | 0.008 ||| NAFLD ||| 0.41 | 95% | CI | 0.18-0.96 | 0.013 ||| NAFLD ||| NAFLD [SUMMARY]
Good ingredients from foods to vegan cosmetics after COVID-19 pandemic.
35486443
New changes are taking place in the beauty and cosmetology market due to changes in daily life caused by coronavirus disease-19 (COVID-19) and environmental changes driven by the spread of live commerce.
BACKGROUND
This review paper is a critical literature review, and a narrative review approach has been used for this study. A total of 300-400 references were initially identified using representative journal search websites such as PubMed, Google Scholar, Scopus, RISS, and ResearchGate, from which a total of 45 papers published between 2009 and 2022 were selected in the final stage.
METHODS
As environmental problems increased after the COVID-19 pandemic, we sought to understand consumer needs for vegan cosmetics, that is, good cosmetics made with good ingredients. Accordingly, this narrative review clearly shows that, owing to the global COVID-19 pandemic, consumers in the beauty and cosmetics industry increasingly pursue good consumption.
RESULT
Accordingly, this literature review identifies consumer needs for vegan cosmetics, a trend that started with vegan foods, and explores applications for developing customized inner beauty products, customized vegan inner beauty products, and/or customized vegan cosmetics. It is expected to serve as important marketing material for the global vegan cosmetics market, confirming new changes in the cosmetics market.
CONCLUSION
[ "COVID-19", "Cosmetics", "Humans", "Marketing", "Pandemics", "Vegans" ]
9115085
INTRODUCTION
The ongoing coronavirus disease-19 (COVID-19) pandemic is endangering millions of people in more and more countries and is a serious public health threat worldwide. Recently, extensive research and clinical trials have been conducted to develop antiviral drugs, vaccines, anti-severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibody treatments for COVID-19, convalescent plasma treatment, and nanoparticle-based treatments. Even so, the spread of the COVID-19 virus continues. 1 A newly developed rapid point-of-care test is underway; it is contributing to controlling the spread of the COVID-19 virus by facilitating large-scale testing of the population. A population-based cross-sectional study conducted in Cantabria, Spain between April and May 2020 also evaluated the applicability of a self-testing strategy for SARS-CoV-2. In the early days of the COVID-19 outbreak, lung health was a major focus of research. Recently, COVID-19 self-diagnosis has been considered and used as a screening tool. Vaccines and treatments have been developed for COVID-19, but the pandemic shows no sign of ending yet. 2 In recent years, e-commerce such as online purchasing has been growing steadily, and due to this social situation, the e-commerce market is growing rapidly with the transition to a non-face-to-face society. 3 In Prof. Kim's book, "It is not a question of who owns more; a new measure of life's abundance is scaled to 'who has more experience'." "Streaming" essentially means playing audio or video in real time without downloading it. It is called streaming because it treats data as if it were flowing like water. The most fundamental difference between streaming and downloading is ownership: the defining feature of streaming is that you can have the experience whenever you want, even without owning it. The era of enjoying such a streaming life has arrived, and live commerce is growing along with it in the beauty market.
It has emerged as a major trend in the industry. The non-face-to-face service method, which provides information while minimizing contact with people, has become standard. As we enter this new era, the frequency of purchasing customized cosmetics through untact mobile shopping is increasing, and research has shown that this correlates strongly with interest in skin and the perception of customized cosmetics. In the post-COVID-19 era, cosmetics consumption through mobile shopping is expected to increase further. 4, 5 Unlike in the past, the social media environment dominates our daily life; everything is changing rapidly, and nothing seems to last long. In such a rapidly changing era, it is socially and culturally meaningful to compare generations that share the same emotions and the differences in perception within a generation. 6, 7, 8 It is necessary to diagnose the reality of the beauty industry, which has grown in importance alongside the Korean economy and the global Hallyu wave, and to suggest realistic development directions. The production and use of e-commerce packaging has steadily increased in recent years due to the rise in online purchases, and its impact on the environment has increased accordingly. Humanity faces climate change, pollution, and the degradation and/or destruction of air, soil, water, and ecosystems. The climate and environmental crisis will be one of the greatest challenges in human history. As a result, consumers are becoming more attentive to good consumption and good ingredients, and the trend is gradually spreading from vegan food to vegan cosmetics. 9, 10 Therefore, as environmental issues have increased after the COVID-19 pandemic, this review identifies consumer needs for vegan cosmetics, which are good cosmetics with good ingredients, and focuses on the future value and direction of vegan cosmetics, from food to cosmetics.
null
null
RESULTS
Changes in daily life due to the COVID-19 pandemic

Effective vaccines and therapeutics to prevent COVID-19 have been released, yet the world continues to rely on social distancing, sanitation measures, and repurposed drugs. The world has developed effective SARS-CoV-2 vaccines: by the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 vaccines were in various stages of development. In one survey, 71.5% of respondents were either "very" or "somewhat" likely to receive the COVID-19 vaccine, and 48.1% said they would follow their employer's recommendations. Acceptance rates varied from nearly 90% (China) to below 55% (Russia). Respondents with higher confidence in government sources of information were more likely to be vaccinated and to follow employer recommendations. Many vaccinations have since been carried out, but the world is still in a state of panic. 11, 12 A social distancing policy encouraging people to spend more time at home during the COVID-19 period has been implemented. 13 As a result, the transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees. Maintaining the option to work from home after COVID-19 can help reduce burnout in the long term. Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home. 14 Telecommuting, which involves working from home rather than in a company building, is a future-oriented work method that benefits both the organization and its members and is spreading widely with the development of information infrastructure. It has various advantages, such as increased productivity, higher job satisfaction, cost reduction, and flexibility of work and workplace, and it is also important as a means of enhancing organizational competitiveness.
It is expected to provide opportunities for women who have difficulties in raising children to perform work and housework at the same time at home. In the study of factors affecting attitudes toward telecommuting, the Korean Intellectual Property Office examiners’ telecommuting consciousness survey was conducted targeting examiners of the Korean Intellectual Property Office, which is the most successful government agency conducting telecommuting. The advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) all have a positive effect on the attitude toward telecommuting, and the disadvantages (creating a sense of incongruity within the organization, difficulties in managing employees by the department head, not knowing the organizational situation) was found to have a negative effect. However, compared to those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting. 15 Effective vaccines and therapeutics to prevent COVID‐19 have been released. Yet the world continues to rely on social distancing, sanitation measures and repurposing drugs. The world has developed an effective SARS‐CoV‐2 vaccine. By the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 vaccines had undergone various stages of development. As a result, 71.5% were either “very high” or “somewhat likely” to receive the COVID‐19 vaccine. 48.1% said they would follow their employer's recommendations. The difference in acceptance rates varied from nearly 90% (China) to <55% (Russia). Respondents with higher confidence in government sources of information are more likely to be vaccinated and follow employer recommendations. As a result, many vaccinations have been carried out, but the world is still in a state of panic. 11 , 12 A social distancing policy to spend more time at home during the COVID‐19 period has been implemented. 
13 As a result, the transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees. Maintaining the option to work from home after COVID‐19 can help reduce burnout in the long term. Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home. 14  Telecommuting, which requires working from home rather than in a company building, is a future‐oriented work method that is beneficial to both the organization and its members and is spreading widely with the development of information infrastructure. It has various advantages such as productivity increase, job satisfaction increase, cost reduction, the flexibility of work and workplace, etc., and is also important as a means of enhancing organizational competitiveness. It is expected to provide opportunities for women who have difficulties in raising children to perform work and housework at the same time at home. In the study of factors affecting attitudes toward telecommuting, the Korean Intellectual Property Office examiners’ telecommuting consciousness survey was conducted targeting examiners of the Korean Intellectual Property Office, which is the most successful government agency conducting telecommuting. The advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) all have a positive effect on the attitude toward telecommuting, and the disadvantages (creating a sense of incongruity within the organization, difficulties in managing employees by the department head, not knowing the organizational situation) was found to have a negative effect. However, compared to those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting. 
15 Environmental issues due to the spread of live e‐commerce As the number of single‐person households and the number of single people increases, the number of people who want to cut off the changing and inconvenient communication of life culture has increased, resulting in untact marketing in the retail industry. Here, COVID‐19 quickly penetrated untact culture and trends and established untact marketing. Untact culture and untact marketing in Japan and Korea were examined through case studies, and similarities and differences were compared. The untact culture and the development of technology were combined to create novel untact marketing, but in Japan, the traditional culture of face‐to‐face and contact changed to an untact culture. Untact marketing using digital technology is applied in various ways in Korea. As COVID‐19 continues for a long time, efforts to make untact marketing efficient in this untact culture are accelerating. 16  The development of mobile technology is an era in which modern people have become essential in all areas of life. You can easily access various devices and concepts that makeup not only the media, but also all aspects of society and the entire culture, including popular art, music, video, art, and daily food, clothing, and shelter culture anytime, anywhere. The development of these contemporary technologies and the possibility of expansion of the untact space due to the development of mobile technology, which is universal and rapidly developing in modern society, were paid attention to. The development of mobile technology, centered on non‐face‐to‐face, has the potential to develop into a field of untact exhibition and space design. Untact, a service that implies unmanned services such as artificial intelligence, big data, and IoT, which is the 4th industrial revolution technology, is developing into a more sophisticated and new technology. 
17 Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have evolved to become smarter by integrating new electronic products with wireless communication and cloud data solutions. There are many factors that cause food loss and waste problems throughout the food supply chain. However, the development of smart packaging systems has developed recently, and there are several articles showing breakthroughs and mentioning that there are challenges for sustainability. 18 Global warming and obesity: A systematic review study found that there was a common correlation between global warming and obesity epidemics. The following studies found: Global warming affects obesity prevalence; Obesity epidemic contributes to global warming; Global warming and obesity epidemics influence each other. Increased energy consumption affects global warming. Policies that support clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. Policies that support the deployment of clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. 19  Table 1 summarizes environmental problems caused by the spread of live commerce. Environmental issues due to the spread of live e‐commerce As the number of single‐person households and the number of single people increases, the number of people who want to cut off the changing and inconvenient communication of life culture has increased, resulting in untact marketing in the retail industry. Here, COVID‐19 quickly penetrated untact culture and trends and established untact marketing. Untact culture and untact marketing in Japan and Korea were examined through case studies, and similarities and differences were compared. 
The untact culture and the development of technology were combined to create novel untact marketing, but in Japan, the traditional culture of face‐to‐face and contact changed to an untact culture. Untact marketing using digital technology is applied in various ways in Korea. As COVID‐19 continues for a long time, efforts to make untact marketing efficient in this untact culture are accelerating. 16  The development of mobile technology is an era in which modern people have become essential in all areas of life. You can easily access various devices and concepts that makeup not only the media, but also all aspects of society and the entire culture, including popular art, music, video, art, and daily food, clothing, and shelter culture anytime, anywhere. The development of these contemporary technologies and the possibility of expansion of the untact space due to the development of mobile technology, which is universal and rapidly developing in modern society, were paid attention to. The development of mobile technology, centered on non‐face‐to‐face, has the potential to develop into a field of untact exhibition and space design. Untact, a service that implies unmanned services such as artificial intelligence, big data, and IoT, which is the 4th industrial revolution technology, is developing into a more sophisticated and new technology. 17 Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have evolved to become smarter by integrating new electronic products with wireless communication and cloud data solutions. There are many factors that cause food loss and waste problems throughout the food supply chain. However, the development of smart packaging systems has developed recently, and there are several articles showing breakthroughs and mentioning that there are challenges for sustainability. 
18 Global warming and obesity: A systematic review study found that there was a common correlation between global warming and obesity epidemics. The following studies found: Global warming affects obesity prevalence; Obesity epidemic contributes to global warming; Global warming and obesity epidemics influence each other. Increased energy consumption affects global warming. Policies that support clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. Policies that support the deployment of clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. 19  Table 1 summarizes environmental problems caused by the spread of live commerce. Environmental issues due to the spread of live e‐commerce Increasing interests in good ingredients and vegan cosmetics As the environmental pollution caused by industrial wastes becomes more serious, veganism is emerging as a social problem. To raise awareness and responsibility for environmental protection, more consumers are turning to vegetarian cosmetics in the beauty industry. 20 Recently, veganism is a food consumption pattern. Vegetarianism usually includes several diets depending on the degree of exclusion in part or all of animal products such as meat and dairy. Among them, flexible, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw and fruit diets are classified in order of restriction. These last three modes can be extended to a lifestyle called veganism, which is defined as not using animal products (cosmetics, clothing, ingredients, etc.) in daily life. 
21 Due to COVID‐19, research on sustainable changes in the safety‐oriented beauty market trend is coming to an era where the perspective of sustainable safety can be applied to the entire beauty and cosmetology industry, and as customers’ perceptions change, safe ingredients are needed. Here, “good ingredients” means safe ingredients according to changes in customers’ perceptions. 22  This meaning also includes environmental issues. Recently, due to the increase in the use of masks due to environmental problems or infectious diseases, skin troubles are rapidly increasing due to changes in the use of cosmetics around the world. Therefore, research was conducted to suggest natural products related to research on skin soothing ingredients and makeup products. 23 A study on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. In the effect of vegan cosmetics on consumption value, the higher the prudence and compliance type among the sub‐factors of consumer decision‐making type, the higher the consumption value. It was found to have a significant effect on sub‐factors, functional value, economic value, social value, conditional value, cognitive value, and emotional value. The purchase decision attributes, purchase behavioral intentions, and consumption values of the MZ generation of vegan cosmetics for infants and toddlers are constantly changing due to the spread of vegan cosmetics for infants and children, and various environmental and social impacts. It was found to have a significant effect on the purchase intention of vegan cosmetics as a determinant. 25 As the environmental pollution caused by industrial wastes becomes more serious, veganism is emerging as a social problem. To raise awareness and responsibility for environmental protection, more consumers are turning to vegetarian cosmetics in the beauty industry. 20 Recently, veganism is a food consumption pattern. 
Vegetarianism usually includes several diets depending on the degree of exclusion in part or all of animal products such as meat and dairy. Among them, flexible, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw and fruit diets are classified in order of restriction. These last three modes can be extended to a lifestyle called veganism, which is defined as not using animal products (cosmetics, clothing, ingredients, etc.) in daily life. 21 Due to COVID‐19, research on sustainable changes in the safety‐oriented beauty market trend is coming to an era where the perspective of sustainable safety can be applied to the entire beauty and cosmetology industry, and as customers’ perceptions change, safe ingredients are needed. Here, “good ingredients” means safe ingredients according to changes in customers’ perceptions. 22  This meaning also includes environmental issues. Recently, due to the increase in the use of masks due to environmental problems or infectious diseases, skin troubles are rapidly increasing due to changes in the use of cosmetics around the world. Therefore, research was conducted to suggest natural products related to research on skin soothing ingredients and makeup products. 23 A study on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. In the effect of vegan cosmetics on consumption value, the higher the prudence and compliance type among the sub‐factors of consumer decision‐making type, the higher the consumption value. It was found to have a significant effect on sub‐factors, functional value, economic value, social value, conditional value, cognitive value, and emotional value. The purchase decision attributes, purchase behavioral intentions, and consumption values of the MZ generation of vegan cosmetics for infants and toddlers are constantly changing due to the spread of vegan cosmetics for infants and children, and various environmental and social impacts. 
It was found to have a significant effect on the purchase intention of vegan cosmetics as a determinant. 25 Good ingredients for vegan and vegan cosmetics originated from foods Recently, research on "natural" and "vegetarian" cosmetics has been actively conducted. 26 Table 2 summarizes the good ingredients of vegan and vegan cosmetics derived from food. Glycine max, also known as soybean or soybean, is a type of legume native to East Asia. Soybeans contain many functional components including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman‐Burk inhibitors, and soybean trypsin inhibitors) tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fraction have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory, collagen stimulatory effect, powerful anti‐oxidant scavenging peroxyl radical activities, skin whitening and sun protective effects. 27 Soybean isoflavone extracts have also been studied to inhibit 2,4‐dinitrochlorobenzene‐induced contact dermatitis. Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. A ISO‐1 is promising for amelioration of DNCB‐induced experimental inflammatory model and skin barrier damage, suggesting potential application of topical ISO‐1 for inflammatory dermatosis. 28  The concentration of major proteins and carbohydrates such as polysaccharides was measured by Forint phenol assay and phenol‐sulfuric acid assay, respectively, in studies on the safety of black bean sprouts and cosmetics use. As a result, it was concluded that the extract of black bean sprouts is safe and can be used as an additive for anti‐aging and whitening effective cosmetics. 
29  Palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO) and sunflower oil (SFO), which have different saturation levels, are the main oils in the stability evaluation study of various vegetable oil‐based emulsions. As a result, it was confirmed that the saturation level of the vegetable oil had a significant effect on the emulsion stability. 30  There is a study result that Korean red ginseng hot water extract relieved atopic dermatitis‐like inflammatory response by negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway in vivo. The Korean red ginseng is a traditional Korean medicinal plant and is often consumed as foods. Increased levels and decreased serum IgE levels, epidermal thickness, mast cell infiltration and ceramidase release. 31 Studies have been conducted on the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants. Only a few of these have been thoroughly analyzed and far fewer have found industrial applications as bio‐surfactants. In this contribution, it will be screened 45 plants from different families that were reported to be rich in saponins for their surface activity and foam properties. For this purpose, room temperature aqueous extracts such as maceration solutions of plant organs rich in saponins were prepared and spray dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids. The 15 selected plants were also extracted using hot water, but the high temperature lowered the surface activity of the extract in most cases. 32 Good ingredients for vegan and vegan cosmetics originated from foods Recently, research on "natural" and "vegetarian" cosmetics has been actively conducted. 26 Table 2 summarizes the good ingredients of vegan and vegan cosmetics derived from food. 
Glycine max, also known as soybean or soybean, is a type of legume native to East Asia. Soybeans contain many functional components including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman‐Burk inhibitors, and soybean trypsin inhibitors) tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fraction have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory, collagen stimulatory effect, powerful anti‐oxidant scavenging peroxyl radical activities, skin whitening and sun protective effects. 27 Soybean isoflavone extracts have also been studied to inhibit 2,4‐dinitrochlorobenzene‐induced contact dermatitis. Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. A ISO‐1 is promising for amelioration of DNCB‐induced experimental inflammatory model and skin barrier damage, suggesting potential application of topical ISO‐1 for inflammatory dermatosis. 28  The concentration of major proteins and carbohydrates such as polysaccharides was measured by Forint phenol assay and phenol‐sulfuric acid assay, respectively, in studies on the safety of black bean sprouts and cosmetics use. As a result, it was concluded that the extract of black bean sprouts is safe and can be used as an additive for anti‐aging and whitening effective cosmetics. 29  Palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO) and sunflower oil (SFO), which have different saturation levels, are the main oils in the stability evaluation study of various vegetable oil‐based emulsions. As a result, it was confirmed that the saturation level of the vegetable oil had a significant effect on the emulsion stability. 
30  There is a study result that Korean red ginseng hot water extract relieved atopic dermatitis‐like inflammatory response by negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway in vivo. The Korean red ginseng is a traditional Korean medicinal plant and is often consumed as foods. Increased levels and decreased serum IgE levels, epidermal thickness, mast cell infiltration and ceramidase release. 31 Studies have been conducted on the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants. Only a few of these have been thoroughly analyzed and far fewer have found industrial applications as bio‐surfactants. In this contribution, it will be screened 45 plants from different families that were reported to be rich in saponins for their surface activity and foam properties. For this purpose, room temperature aqueous extracts such as maceration solutions of plant organs rich in saponins were prepared and spray dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids. The 15 selected plants were also extracted using hot water, but the high temperature lowered the surface activity of the extract in most cases. 32 Good ingredients for vegan and vegan cosmetics originated from foods
CONCLUSIONS
Based on the results of this study, it will be necessary to identify consumer needs for vegan cosmetics that started from vegan food and develop applications for the development of customized vegan inner beauty products and customized vegan cosmetics using customized inner beauty products and/or customized cosmetics. This is expected to be used as an important marketing material for the global vegan cosmetics market that confirms new changes in the cosmetics market.
[ "INTRODUCTION", "Changes in daily life due to COVID‐19 pandemic", "Environmental issues due to the spread of live e‐commerce", "Increasing interests in good ingredients and vegan cosmetics", "Good ingredients for vegan and vegan cosmetics originated from foods" ]
[ "The ongoing coronavirus disease‐19 (COVID‐19) pandemic is endangering millions of people in more and more countries and is a serious public health threat worldwide. Recently, extensive research and clinical trials have been conducted to develop antiviral drugs, vaccines, and anti‐syndromic coronavirus 2 (SARS‐CoV‐2) antibody treatment for the treatment of COVID‐19, plasma treatment for convalescence, and nanoparticle‐based treatment. As a result, the spread of the COVID‐19 virus continues.\n1\n A newly developed rapid point‐of‐care test is underway. This is contributing to controlling the spread of the COVID‐19 virus by facilitating large‐scale testing on the population. A population‐based cross‐sectional study conducted in Cantabria, Spain between April and May 2020 also evaluated the applicability of a self‐testing strategy for SARS‐CoV‐2. In the early days of the COVID‐19 outbreak, lung health was a major focus of research. Recently, COVID‐19 self‐diagnosis has been considered and used as a screening test tool. Vaccines and treatments have been developed for COVID‐19, but the pandemic shows no sign of ending yet.\n2\n\n\nIn recent years, e‐commerce such as online purchase has been growing steadily, and due to this social situation, the e‐commerce market is growing rapidly due to the transition to a non‐face‐to‐face society.\n3\n In Prof. Kim's book, “It is not a question of who owns more, but a new measure of life's abundance is scaled to ‘who has more experience’.” “Streaming” essentially means playing audio or video in real time without downloading it. It is called streaming because it treats data as if it were flowing through water. The most fundamental difference between streaming and downloading is ownership. The streaming is the biggest feature that you can experience whenever you want, even if you do not own it. The era of enjoying such a streaming life has arrived, and live commerce is also growing at the same time in the beauty market. 
It has emerged as a major trend in the industry. The service method that provides information in a non‐face‐to‐face manner means minimizing contact with people. As we enter the unknown era, the frequency of purchasing customized cosmetics through untact mobile shopping is increasing, and research results have shown that this has a high correlation with interest in skin and perception of customized cosmetics. In the unexplored era after COVID‐19, cosmetics consumption through mobile shopping is expected to increase.\n4\n, \n5\n Unlike in the past, social media environment dominates our daily life and everything is changing rapidly, and nothing seems to change long. In a rapidly changing era, it is very meaningful socially and culturally to compare the appearance of generations who share the same emotions with the differences in perceptions of one generation.\n6\n, \n7\n, \n8\n The Korean economy and the global Hallyu wave. It is necessary to diagnose the reality of the beauty industry, which has grown in importance over time, and to suggest realistic development directions. The production and use of e‐commerce packaging has steadily increased in recent years due to the increase in online purchases. As a result, the impact on the environment has also increased. Humanity faces climate change, pollution, environmental degradation and/or destruction of air, soil, water, and ecosystems. The climate and environmental crisis will be one of the greatest challenges in human history. 
As a result, consumers become more obsessed with good consumption and good ingredients, and the trend is gradually spreading from vegan food to vegan cosmetics.\n9\n, \n10\n\n\nTherefore, as environmental issues have been increasing after the COVID‐19 pandemic, it will be identified the needs of consumers for vegan cosmetics, which are good ingredients and good cosmetics, and focused on the future value and direction of vegan cosmetics from food to cosmetics.", "Effective vaccines and therapeutics to prevent COVID‐19 have been released. Yet the world continues to rely on social distancing, sanitation measures and repurposing drugs. The world has developed an effective SARS‐CoV‐2 vaccine. By the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 vaccines had undergone various stages of development. As a result, 71.5% were either “very high” or “somewhat likely” to receive the COVID‐19 vaccine. 48.1% said they would follow their employer's recommendations. The difference in acceptance rates varied from nearly 90% (China) to <55% (Russia). Respondents with higher confidence in government sources of information are more likely to be vaccinated and follow employer recommendations. As a result, many vaccinations have been carried out, but the world is still in a state of panic.\n11\n, \n12\n A social distancing policy to spend more time at home during the COVID‐19 period has been implemented.\n13\n As a result, the transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees. Maintaining the option to work from home after COVID‐19 can help reduce burnout in the long term. 
Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home.\n14\n Telecommuting, which requires working from home rather than in a company building, is a future‐oriented work method that is beneficial to both the organization and its members and is spreading widely with the development of information infrastructure. It has various advantages such as productivity increase, job satisfaction increase, cost reduction, the flexibility of work and workplace, etc., and is also important as a means of enhancing organizational competitiveness. It is expected to provide opportunities for women who have difficulties in raising children to perform work and housework at the same time at home. In the study of factors affecting attitudes toward telecommuting, the Korean Intellectual Property Office examiners’ telecommuting consciousness survey was conducted targeting examiners of the Korean Intellectual Property Office, which is the most successful government agency conducting telecommuting. The advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) all have a positive effect on the attitude toward telecommuting, and the disadvantages (creating a sense of incongruity within the organization, difficulties in managing employees by the department head, not knowing the organizational situation) was found to have a negative effect. However, compared to those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting.\n15\n\n", "As the number of single‐person households and the number of single people increases, the number of people who want to cut off the changing and inconvenient communication of life culture has increased, resulting in untact marketing in the retail industry. Here, COVID‐19 quickly penetrated untact culture and trends and established untact marketing. 
Untact culture and untact marketing in Japan and Korea were examined through case studies, and similarities and differences were compared. The untact culture and the development of technology were combined to create novel untact marketing, but in Japan, the traditional culture of face‐to‐face and contact changed to an untact culture. Untact marketing using digital technology is applied in various ways in Korea. As COVID‐19 continues for a long time, efforts to make untact marketing efficient in this untact culture are accelerating.\n16\n The development of mobile technology is an era in which modern people have become essential in all areas of life. You can easily access various devices and concepts that makeup not only the media, but also all aspects of society and the entire culture, including popular art, music, video, art, and daily food, clothing, and shelter culture anytime, anywhere. The development of these contemporary technologies and the possibility of expansion of the untact space due to the development of mobile technology, which is universal and rapidly developing in modern society, were paid attention to. The development of mobile technology, centered on non‐face‐to‐face, has the potential to develop into a field of untact exhibition and space design. Untact, a service that implies unmanned services such as artificial intelligence, big data, and IoT, which is the 4th industrial revolution technology, is developing into a more sophisticated and new technology.\n17\n Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have evolved to become smarter by integrating new electronic products with wireless communication and cloud data solutions. There are many factors that cause food loss and waste problems throughout the food supply chain. 
Smart packaging systems have nevertheless advanced rapidly in recent years, and several articles report breakthroughs while noting remaining challenges for sustainability.\n18\n Global warming and obesity: a systematic review found a common correlation between the global warming and obesity epidemics. The reviewed studies found that global warming affects obesity prevalence, that the obesity epidemic contributes to global warming, and that the two influence each other, with increased energy consumption driving global warming. Policies that support the deployment of clean and sustainable energy sources, together with urban designs that encourage active lifestyles, are likely to alleviate the social burden of both global warming and obesity.\n19\n Table 1 summarizes environmental problems caused by the spread of live commerce.\nEnvironmental issues due to the spread of live e‐commerce", "As the environmental pollution caused by industrial waste becomes more serious, veganism is emerging as a social issue. To raise awareness of and responsibility for environmental protection, more consumers in the beauty industry are turning to vegetarian cosmetics.\n20\n Veganism is, at its root, a food consumption pattern. Vegetarianism includes several diets depending on the degree to which animal products such as meat and dairy are partly or wholly excluded; in order of increasing restriction, these are flexitarian, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw, and fruitarian diets. The last three can extend to a lifestyle called veganism, defined as not using animal products (cosmetics, clothing, ingredients, etc.) 
in daily life.\n21\n Due to COVID‐19, research on sustainable changes in the safety‐oriented beauty market suggests that the perspective of sustainable safety can now be applied to the entire beauty and cosmetology industry, and as customers' perceptions change, safe ingredients are needed. Here, "good ingredients" means ingredients regarded as safe under customers' changing perceptions.\n22\n This meaning also covers environmental issues. Recently, with increased mask use driven by environmental problems and infectious diseases, skin troubles caused by changes in cosmetics use are rapidly increasing around the world. Research has therefore been conducted to suggest natural products for skin‐soothing ingredients and makeup products.\n23\n A study on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. Regarding the effect of vegan cosmetics on consumption value, the stronger the prudence and compliance types among the sub‐factors of consumer decision‐making style, the higher the consumption value, with significant effects on the sub‐factors of functional, economic, social, conditional, cognitive, and emotional value. The purchase decision attributes, purchase intentions, and consumption values of the MZ generation regarding vegan cosmetics for infants and toddlers are constantly changing with the spread of such products and with various environmental and social impacts, and these were found to be significant determinants of purchase intention for vegan cosmetics.\n25\n\n", "Recently, research on \"natural\" and \"vegetarian\" cosmetics has been actively conducted.\n26\n Table 2 summarizes the good ingredients of vegan and vegan cosmetics derived from food. Glycine max, commonly known as soybean or soya bean, is a legume native to East Asia. 
Soybeans contain many functional components, including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman‐Birk inhibitors and soybean trypsin inhibitors), tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fractions have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory and collagen‐stimulating effects, potent antioxidant activity scavenging peroxyl radicals, skin whitening, and sun protection.\n27\n Soybean isoflavone extracts have also been studied for inhibition of 2,4‐dinitrochlorobenzene (DNCB)‐induced contact dermatitis. Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. ISO‐1 is promising for ameliorating DNCB‐induced experimental inflammation and skin barrier damage, suggesting potential application of topical ISO‐1 for inflammatory dermatoses.\n28\n In studies on the safety and cosmetic use of black bean sprouts, the concentrations of major proteins and of carbohydrates such as polysaccharides were measured by the Folin phenol assay and the phenol‐sulfuric acid assay, respectively. It was concluded that black bean sprout extract is safe and can be used as an additive in anti‐aging and whitening cosmetics.\n29\n A stability evaluation study of vegetable oil‐based emulsions examined palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO), and sunflower oil (SFO), which have different saturation levels. 
As a result, it was confirmed that the saturation level of the vegetable oil had a significant effect on emulsion stability.\n30\n One study reported that Korean red ginseng hot‐water extract relieved atopic dermatitis‐like inflammatory responses in vivo through negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway. Korean red ginseng is a traditional Korean medicinal plant that is often consumed as food; the extract decreased serum IgE levels, epidermal thickness, mast cell infiltration, and ceramidase release.\n31\n Studies have also examined the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants; only a few have been thoroughly analyzed, and far fewer have found industrial application as bio‐surfactants. In one study, 45 plants from different families reported to be rich in saponins were screened for surface activity and foam properties. Room‐temperature aqueous extracts, such as maceration solutions of saponin‐rich plant organs, were prepared and spray‐dried under identical conditions with sodium benzoate and potassium sorbate as preservatives and drying aids. Fifteen selected plants were also extracted with hot water, but in most cases the high temperature lowered the surface activity of the extract.\n32\n\n\nGood ingredients for vegan and vegan cosmetics originated from foods" ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "RESULTS", "Changes in daily life due to COVID‐19 pandemic", "Environmental issues due to the spread of live e‐commerce", "Increasing interests in good ingredients and vegan cosmetics", "Good ingredients for vegan and vegan cosmetics originated from foods", "DISCUSSIONS", "CONCLUSIONS", "Supporting information" ]
[ "The ongoing coronavirus disease‐19 (COVID‐19) pandemic is endangering millions of people in more and more countries and is a serious public health threat worldwide. Recently, extensive research and clinical trials have been conducted to develop antiviral drugs, vaccines, anti‐severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) antibody treatments, convalescent plasma therapy, and nanoparticle‐based treatments for COVID‐19. Nevertheless, the virus continues to spread.\n1\n Newly developed rapid point‐of‐care tests are contributing to controlling the spread of the virus by facilitating large‐scale testing of the population. A population‐based cross‐sectional study conducted in Cantabria, Spain between April and May 2020 also evaluated the applicability of a self‐testing strategy for SARS‐CoV‐2. In the early days of the COVID‐19 outbreak, lung health was a major focus of research; more recently, COVID‐19 self‐diagnosis has been considered and used as a screening tool. Vaccines and treatments have been developed for COVID‐19, but the pandemic shows no sign of ending yet.\n2\n\n\nIn recent years, e‐commerce such as online purchasing has been growing steadily, and the e‐commerce market is expanding rapidly with the transition to a non‐face‐to‐face society.\n3\n In Prof. Kim's book, the new measure of life's abundance is described not as "who owns more" but as "who has more experience." "Streaming" essentially means playing audio or video in real time without downloading it; it is called streaming because data is treated as if it were flowing like water. The most fundamental difference between streaming and downloading is ownership: the defining feature of streaming is that you can experience content whenever you want without owning it. The era of such a streaming life has arrived, and live commerce is growing alongside it in the beauty market. 
It has emerged as a major trend in the industry. A service method that provides information in a non‐face‐to‐face manner minimizes contact between people. As we enter the untact era, the frequency of purchasing customized cosmetics through untact mobile shopping is increasing, and research has shown that this correlates strongly with interest in skin and with the perception of customized cosmetics. In the untact era after COVID‐19, cosmetics consumption through mobile shopping is expected to increase.\n4\n, \n5\n Unlike in the past, the social media environment dominates our daily life; everything changes rapidly, and nothing seems to last long. In such a rapidly changing era, it is socially and culturally meaningful to compare generations that share the same sensibilities and the differences in perception within a generation.\n6\n, \n7\n, \n8\n Given the Korean economy and the global Hallyu wave, it is necessary to diagnose the reality of the beauty industry, whose importance has grown over time, and to suggest realistic directions for its development. The production and use of e‐commerce packaging has steadily increased in recent years with the growth of online purchasing, and its impact on the environment has increased accordingly. Humanity faces climate change, pollution, and the degradation and/or destruction of air, soil, water, and ecosystems. The climate and environmental crisis will be one of the greatest challenges in human history. 
As a result, consumers are paying more attention to good consumption and good ingredients, and the trend is gradually spreading from vegan food to vegan cosmetics.\n9\n, \n10\n\n\nTherefore, as environmental issues have been increasing since the COVID‐19 pandemic, this review identifies consumer needs for vegan cosmetics, as good ingredients and good cosmetics, and focuses on the future value and direction of vegan cosmetics from food to cosmetics.", "This review paper is a critical literature review, and a narrative review approach was used for this study. A total of 300–400 references were retrieved using representative journal search websites such as PubMed, Google Scholar, Scopus, RISS, and ResearchGate, from which 45 papers published between 2009 and 2022 were selected in the final stage. The PRISMA flow diagram is shown in Figure S1.", "Changes in daily life due to COVID‐19 pandemic Effective vaccines and therapeutics to prevent COVID‐19 have been released, yet the world continues to rely on social distancing, sanitation measures, and repurposed drugs. An effective SARS‐CoV‐2 vaccine has been developed; by the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 were in various stages of development. In a global survey, 71.5% of respondents were either "very high" or "somewhat likely" to receive the COVID‐19 vaccine, and 48.1% said they would follow their employer's recommendations. Acceptance rates varied from nearly 90% (China) to below 55% (Russia). Respondents with higher confidence in government sources of information are more likely to be vaccinated and to follow employer recommendations. 
As a result, many vaccinations have been carried out, but the world is still in a state of panic.\n11\n, \n12\n A social distancing policy encouraging people to spend more time at home during the COVID‐19 period has been implemented.\n13\n The transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees, and maintaining the option to work from home after COVID‐19 can help reduce burnout in the long term. Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home.\n14\n Telecommuting, which involves working from home rather than in a company building, is a future‐oriented work method that benefits both the organization and its members and is spreading widely with the development of information infrastructure. It offers various advantages, such as increased productivity, higher job satisfaction, cost reduction, and flexibility in working hours and workplace, and is also important as a means of enhancing organizational competitiveness. It is expected to give women who have difficulty raising children the opportunity to handle work and housework at home at the same time. In a study of factors affecting attitudes toward telecommuting, a telecommuting consciousness survey was conducted among examiners of the Korean Intellectual Property Office, the government agency with the most successful telecommuting program. The advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) all had a positive effect on attitudes toward telecommuting, while the disadvantages (a sense of incongruity within the organization, difficulties for department heads in managing employees, and being out of touch with the organizational situation) had a negative effect. 
However, compared to those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting.\n15\n\n\nEnvironmental issues due to the spread of live e‐commerce As the number of single‐person households and single people increases, more people have sought to cut off the inconvenient interpersonal communication of daily life, giving rise to untact marketing in the retail industry. COVID‐19 then quickly penetrated this untact culture and firmly established untact marketing. Untact culture and untact marketing in Japan and Korea were examined through case studies, and their similarities and differences were compared. In both countries, untact culture combined with technological development to create novel untact marketing, but in Japan the traditional culture of face‐to‐face contact changed to an untact culture. 
In Korea, untact marketing using digital technology is applied in various ways. As COVID‐19 persists, efforts to make untact marketing efficient within this untact culture are accelerating.\n16\n Mobile technology has become essential in all areas of modern life. People can access, anytime and anywhere, the devices and content that make up not only the media but all aspects of society and culture, including popular art, music, video, and everyday food, clothing, and housing. Attention has therefore turned to the expansion of untact spaces made possible by mobile technology, which is universal and rapidly developing in modern society. Mobile technology centered on non‐face‐to‐face interaction has the potential to develop into the field of untact exhibition and space design. Untact services implying unmanned operation, based on fourth‐industrial‐revolution technologies such as artificial intelligence, big data, and the Internet of Things (IoT), are developing into ever more sophisticated new technologies.\n17\n Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have become smarter by integrating new electronics with wireless communication and cloud data solutions. Many factors cause food loss and waste throughout the food supply chain. Smart packaging systems have nevertheless advanced rapidly in recent years, and several articles report breakthroughs while noting remaining challenges for sustainability.\n18\n Global warming and obesity: a systematic review found a common correlation between the global warming and obesity epidemics. 
The reviewed studies found that global warming affects obesity prevalence, that the obesity epidemic contributes to global warming, and that the two influence each other, with increased energy consumption driving global warming. Policies that support the deployment of clean and sustainable energy sources, together with urban designs that encourage active lifestyles, are likely to alleviate the social burden of both global warming and obesity.\n19\n Table 1 summarizes environmental problems caused by the spread of live commerce.\nEnvironmental issues due to the spread of live e‐commerce\nIncreasing interests in good ingredients and vegan cosmetics As the environmental pollution caused by industrial waste becomes more serious, veganism is emerging as a social issue. To raise awareness of and responsibility for environmental protection, more consumers in the beauty industry are turning to vegetarian cosmetics.\n20\n Veganism is, at its root, a food consumption pattern. Vegetarianism includes several diets depending on the degree to which animal products such as meat and dairy are partly or wholly excluded; in order of increasing restriction, these are flexitarian, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw, and fruitarian diets. The last three can extend to a lifestyle called veganism, defined as not using animal products (cosmetics, clothing, ingredients, etc.) in daily life.\n21\n Due to COVID‐19, research on sustainable changes in the safety‐oriented beauty market suggests that the perspective of sustainable safety can now be applied to the entire beauty and cosmetology industry, and as customers' perceptions change, safe ingredients are needed. Here, "good ingredients" means ingredients regarded as safe under customers' changing perceptions.\n22\n This meaning also covers environmental issues. Recently, with increased mask use driven by environmental problems and infectious diseases, skin troubles caused by changes in cosmetics use are rapidly increasing around the world. 
Research has therefore been conducted to suggest natural products for skin‐soothing ingredients and makeup products.\n23\n A study on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. Regarding the effect of vegan cosmetics on consumption value, the stronger the prudence and compliance types among the sub‐factors of consumer decision‐making style, the higher the consumption value, with significant effects on the sub‐factors of functional, economic, social, conditional, cognitive, and emotional value. The purchase decision attributes, purchase intentions, and consumption values of the MZ generation regarding vegan cosmetics for infants and toddlers are constantly changing with the spread of such products and with various environmental and social impacts, and these were found to be significant determinants of purchase intention for vegan cosmetics.\n25\n\n\nGood ingredients for vegan and vegan cosmetics originated from foods Recently, research on \"natural\" and \"vegetarian\" cosmetics has been actively conducted.\n26\n Table 2 summarizes the good ingredients of vegan and vegan cosmetics derived from food. 
Glycine max, commonly known as soybean or soya bean, is a legume native to East Asia. Soybeans contain many functional components, including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman‐Birk inhibitors and soybean trypsin inhibitors), tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fractions have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory and collagen‐stimulating effects, potent antioxidant activity scavenging peroxyl radicals, skin whitening, and sun protection.\n27\n Soybean isoflavone extracts have also been studied for inhibition of 2,4‐dinitrochlorobenzene (DNCB)‐induced contact dermatitis. Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. ISO‐1 is promising for ameliorating DNCB‐induced experimental inflammation and skin barrier damage, suggesting potential application of topical ISO‐1 for inflammatory dermatoses.\n28\n In studies on the safety and cosmetic use of black bean sprouts, the concentrations of major proteins and of carbohydrates such as polysaccharides were measured by the Folin phenol assay and the phenol‐sulfuric acid assay, respectively. It was concluded that black bean sprout extract is safe and can be used as an additive in anti‐aging and whitening cosmetics.\n29\n A stability evaluation study of vegetable oil‐based emulsions examined palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO), and sunflower oil (SFO), which have different saturation levels. 
As a result, it was confirmed that the saturation level of the vegetable oil had a significant effect on emulsion stability.\n30\n One study reported that Korean red ginseng hot water extract relieved atopic dermatitis‐like inflammatory responses in vivo through negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway. Korean red ginseng is a traditional Korean medicinal plant and is often consumed as food. The extract decreased serum IgE levels, epidermal thickness, mast cell infiltration, and ceramidase release.\n31\n Studies have been conducted on the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants. Only a few of these have been thoroughly analyzed, and far fewer have found industrial applications as bio‐surfactants. In that contribution, 45 plants from different families reported to be rich in saponins were screened for surface activity and foaming properties. For this purpose, room‐temperature aqueous extracts (maceration solutions) of saponin‐rich plant organs were prepared and spray dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids. Fifteen selected plants were also extracted using hot water, but in most cases the high temperature lowered the surface activity of the extract.\n32\n\n\nGood ingredients for vegan and vegan cosmetics originated from foods", "Effective vaccines and therapeutics to prevent COVID‐19 have been released, yet the world continues to rely on social distancing, sanitation measures, and repurposed drugs. By the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 vaccines had undergone various stages of development. In one global survey, 71.5% of respondents reported that they would be very or somewhat likely to receive a COVID‐19 vaccine.
48.1% said they would follow their employer's recommendations. Acceptance rates varied from nearly 90% (China) to below 55% (Russia). Respondents with higher confidence in government sources of information were more likely to accept vaccination and follow employer recommendations. Many vaccinations have since been carried out, but the world is still in a state of panic.\n11\n, \n12\n A social distancing policy encouraging people to spend more time at home during the COVID‐19 period has been implemented.\n13\n As a result, the transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees. Maintaining the option to work from home after COVID‐19 can help reduce burnout in the long term. Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home.\n14\n Telecommuting, which involves working from home rather than in a company building, is a future‐oriented work method that benefits both the organization and its members and is spreading widely with the development of information infrastructure. It has various advantages such as increased productivity, increased job satisfaction, cost reduction, and flexibility of work and workplace, and is also important as a means of enhancing organizational competitiveness. It is expected to provide opportunities for women who have difficulty raising children to perform work and housework at the same time at home. In a study of factors affecting attitudes toward telecommuting, a telecommuting consciousness survey was conducted targeting examiners of the Korean Intellectual Property Office, the most successful government agency conducting telecommuting.
The advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) were all found to have a positive effect on attitudes toward telecommuting, while the disadvantages (creating a sense of incongruity within the organization, difficulties in managing employees by the department head, and not knowing the organizational situation) were found to have a negative effect. However, compared to those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting.\n15\n\n", "As single‐person households and the number of single people increase, more people want to avoid the changing and inconvenient interpersonal communication of daily life, giving rise to untact (contactless) marketing in the retail industry. COVID‐19 quickly penetrated this untact culture and its trends and established untact marketing. Untact culture and untact marketing in Japan and Korea were examined through case studies, and their similarities and differences were compared. In Korea, untact culture and the development of technology combined to create novel untact marketing, whereas in Japan the traditional culture of face‐to‐face contact changed into an untact culture. Untact marketing using digital technology is applied in various ways in Korea. As COVID‐19 continues, efforts to make untact marketing efficient within this untact culture are accelerating.\n16\n With the development of mobile technology, mobile devices have become essential in all areas of modern life. People can easily access, anytime and anywhere, the various devices and concepts that make up not only the media but all aspects of society and culture, including popular art, music, video, art, and the daily culture of food, clothing, and shelter.
Attention has been paid to the development of these contemporary technologies and to the possibility of expanding untact spaces through mobile technology, which is universal and rapidly developing in modern society. The development of mobile technology, centered on non‐face‐to‐face interaction, has the potential to develop into a field of untact exhibition and space design. Untact services, which imply unmanned services based on fourth‐industrial‐revolution technologies such as artificial intelligence, big data, and the Internet of Things (IoT), are developing into more sophisticated new technologies.\n17\n Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have evolved to become smarter by integrating new electronic products with wireless communication and cloud data solutions. Many factors cause food loss and waste throughout the food supply chain. However, smart packaging systems have developed recently, and several articles report breakthroughs while noting that challenges for sustainability remain.\n18\n A systematic review on global warming and obesity found a common correlation between global warming and the obesity epidemic. The included studies found that global warming affects obesity prevalence, that the obesity epidemic contributes to global warming, and that global warming and the obesity epidemic influence each other. Increased energy consumption contributes to global warming. Policies that support clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity.
\n19\n Table 1 summarizes environmental problems caused by the spread of live commerce.\nEnvironmental issues due to the spread of live e‐commerce", "As the environmental pollution caused by industrial waste becomes more serious, veganism is emerging as a social issue. To raise awareness of and responsibility for environmental protection, more consumers are turning to vegetarian cosmetics in the beauty industry.\n20\n Veganism has recently emerged as a food consumption pattern. Vegetarianism usually includes several diets depending on the degree of partial or total exclusion of animal products such as meat and dairy. These diets are classified, in order of increasing restriction, as flexitarian, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw‐food, and fruitarian. The last three modes can be extended to a lifestyle called veganism, which is defined as not using animal products (cosmetics, clothing, ingredients, etc.) in daily life.\n21\n Due to COVID‐19, research on sustainable changes in the safety‐oriented beauty market suggests an era in which the perspective of sustainable safety can be applied to the entire beauty and cosmetology industry; as customers' perceptions change, safe ingredients are needed. Here, "good ingredients" means ingredients regarded as safe according to changes in customers' perceptions.\n22\n This meaning also encompasses environmental issues. Recently, because of environmental problems and the increased use of masks due to infectious disease, skin troubles are rapidly increasing along with changes in cosmetics use around the world.
Therefore, research has been conducted to suggest natural products as skin‐soothing ingredients and for makeup products.\n23\n A study on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. Regarding the effect of vegan cosmetics on consumption value, the higher the prudence and compliance types among the sub‐factors of consumer decision‐making style, the higher the consumption value. Significant effects were found on the sub‐factors of consumption value: functional, economic, social, conditional, cognitive, and emotional value. The purchase decision attributes, purchase behavioral intentions, and consumption values of the MZ generation regarding vegan cosmetics for infants and toddlers are constantly changing due to the spread of such products and various environmental and social influences. These were found to have a significant effect, as determinants, on the purchase intention of vegan cosmetics.\n25\n\n", "Recently, research on \"natural\" and \"vegetarian\" cosmetics has been actively conducted.\n26\n Table 2 summarizes the good ingredients of vegan and vegan cosmetics derived from food. Glycine max, also known as soybean or soya bean, is a type of legume native to East Asia. Soybeans contain many functional components including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman–Birk inhibitors and soybean trypsin inhibitors), tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fraction have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory and collagen‐stimulatory effects, powerful antioxidant activity in scavenging peroxyl radicals, skin whitening, and sun‐protective effects.\n27\n Soybean isoflavone extracts have also been studied for their ability to inhibit 2,4‐dinitrochlorobenzene (DNCB)‐induced contact dermatitis.
Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. ISO‐1 is promising for ameliorating DNCB‐induced experimental inflammation and skin barrier damage, suggesting potential application of topical ISO‐1 for inflammatory dermatoses.\n28\n In studies on the safety of black bean sprout extract for cosmetic use, the concentrations of major proteins and carbohydrates such as polysaccharides were measured by the Folin phenol assay and the phenol‐sulfuric acid assay, respectively. It was concluded that black bean sprout extract is safe and can be used as an additive in anti‐aging and whitening cosmetics.\n29\n Palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO), and sunflower oil (SFO), which have different saturation levels, were the main oils in a stability evaluation study of various vegetable oil‐based emulsions. As a result, it was confirmed that the saturation level of the vegetable oil had a significant effect on emulsion stability.\n30\n One study reported that Korean red ginseng hot water extract relieved atopic dermatitis‐like inflammatory responses in vivo through negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway. Korean red ginseng is a traditional Korean medicinal plant and is often consumed as food. The extract decreased serum IgE levels, epidermal thickness, mast cell infiltration, and ceramidase release.\n31\n Studies have been conducted on the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants. Only a few of these have been thoroughly analyzed, and far fewer have found industrial applications as bio‐surfactants.
In that contribution, 45 plants from different families reported to be rich in saponins were screened for surface activity and foaming properties. For this purpose, room‐temperature aqueous extracts (maceration solutions) of saponin‐rich plant organs were prepared and spray dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids. Fifteen selected plants were also extracted using hot water, but in most cases the high temperature lowered the surface activity of the extract.\n32\n\n\nGood ingredients for vegan and vegan cosmetics originated from foods", "The coronavirus outbreak poses multiple challenges for people around the world trying to stay healthy while negotiating disease risks and harsh social distancing measures. One study examined positive and negative emotions arising from satisfaction and frustration of the basic psychological needs for autonomy, competence, and relatedness, as well as life satisfaction and overall distress. It was performed from the perspective of self‐determination theory on a sample of 965 Serbian participants (average age 29 years). A Serbian emotion list based on the PANAS‐X was used, together with the Basic Psychological Needs Satisfaction and Frustration Scale, the Satisfaction with Life Scale, and the Depression Anxiety Stress Scale‐12. The indirect relationships between positive and negative emotions, life satisfaction, and overall distress were successively mediated by autonomy satisfaction, competence frustration, and relatedness satisfaction and frustration. An important question has been raised about how the tendency to experience positive or negative emotions affects changes in subjective well‐being. Consistent with the assumptions of self‐determination theory, the results suggest that satisfaction and frustration of basic psychological needs may play an important role in achieving optimal well‐being.
Therefore, it is reported that our understanding of human functioning in special circumstances has improved during the pandemic.\n33\n, \n34\n Schools across the world were closed during the COVID‐19 pandemic. However, there are few data on the transmission of SARS‐CoV‐2 among children and in educational settings. Most schools in Australia remained open during the first epidemic wave, despite a decline in student attendance at the peak of the pandemic.\n35\n A modeling study of the risk of a second COVID‐19 wave was conducted in the UK, examining the impact of school reopening strategies together with testing and contact‐tracing interventions. If schools were to open full‐time in September, alongside a gradual easing of lockdown measures, the results suggested a second wave peaking in December 2020. In that scenario, the second wave of infection would be 2.0–2.3 times the size of the original COVID‐19 wave. Even when the infectiousness of children and young adults is varied from 100% to 50% of that of older adults, a comprehensive and effective testing, tracing, and quarantine strategy is still needed to avoid a second COVID‐19 wave.\n36\n\n\nIn this era, telecommuting has become a necessity rather than an option. Telecommuting, which was gradually expanding in cyber‐physical space, was further accelerated by the COVID‐19 pandemic. Various organizations at home and abroad are encouraging telecommuting, and it is predicted that telecommuting will become established as a standard type of work after the COVID‐19 pandemic.\n37\n In the contemporary media environment, it is necessary to pay attention to the phenomenon of the personalization of images. As the number of zero‐TV households increases, the number of people who seek out content on their own time with computers, tablet PCs, and smartphones instead of traditional TV sets has grown rapidly. At present, YouTube and various social networking services (SNS) are the most frequently used platforms.
Now, individuals go beyond the level of viewing to the level of creation. In addition, manipulable image interfaces coexist with the traditional screen interface represented by cinema. In short, the use of digital devices has become so common that our daily lives can be called “screen life.”\n38\n With millions of viewers worldwide, live streaming is a new social medium that provides real‐time video content with a variety of social interaction features. One study aimed to understand the personality traits, motivations, and user behaviors of active live streaming viewers in the general Chinese population. As a result, extraversion was negatively associated with live streaming use, while openness was positively associated with it. The main motivations for watching live streaming were social interaction, information gathering, and entertainment, which were associated with different frequencies of use and genre selection.\n39\n\n\nHowever, there are also studies showing that live streaming influences mental health and depression. Online live streaming is gaining popularity in media entertainment, and micro‐celebrities such as video streamers can have an impact on their audiences similar to that of traditional celebrities. An online survey of 470 people examined how a streamer's disclosure of depression affects streamers' and viewers' perceptions of depression. In addition, parasocial relationships, parasocial interactions, and identification with streamers were investigated: streamer disclosure was associated with (1) the viewer's perceived authenticity of and trust in the streamer and (2) an increase in the viewer's perceived prevalence, risk sensitivity, and risk severity of depression. There is a strong link between streamers' mental health disclosure and public perception of depression, and celebrity influence on mental health has also been discussed in previous studies.\n40\n Live commerce content is being strengthened throughout the beauty distribution industry.
The live commerce market is growing rapidly as non‐face‐to‐face consumption increases due to the spread of the novel coronavirus infection, COVID‐19. The Korean market is estimated at 3 trillion won this year and is expected to grow to 10 trillion won by 2023. In addition to collaborating with large companies such as Naver and Kakao Corporation, retailers are actively developing their own live commerce platforms. They also sell products planned with YouTube creators and are increasing communication with the MZ generation, a major consumer group. YouTube creators have fandoms as devoted as those of idols, and the MZ generation, accustomed to video media, does not perceive live commerce broadcasting as a simple product purchase channel. They regard it as a new communication channel and pursue fun in it, and live commerce is expected to become more active in the future, as it can attract young customers and expand points of contact.\n41\n, \n42\n\n\nHowever, in the present era, electricity, transport, and agricultural production lead to excessive greenhouse gas emissions. Global warming has a direct impact on obesity by causing nutritional transition and physical inactivity through food supply shocks, price shocks, and adaptive thermogenesis, while the obesity epidemic contributes to global warming through increased energy consumption. Policies that support the deployment of clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity.\n19\n From the perspective of sustainable safety, a study on the overall change of the beauty industry market in the post‐COVID‐19 era argued that harmful and safe edible ingredients should be comparatively analyzed from the perspective of clean beauty. It also argued that the importance of sustainable value should be considered from the point of view of clean beauty.
It was concluded that, as customer perceptions change, the procurement of raw materials, manufacturing processes, product testing, and more are all part of clean beauty, which seeks to minimize carbon emissions and water use, recycle product containers, and reduce waste.\n22\n Lately, as consumers become increasingly interested in products made with safe ingredients, vegan‐certified cosmetics are increasing in the cosmetics market. In this situation, consumers analyze cosmetic ingredients or obtain product information through various content channels such as social media and applications to meet their needs for products made with safe ingredients.\n43\n Table 3 summarizes the growing interest in good ingredients and vegan cosmetics. An online survey was conducted on female consumers in their 20s to 40s who are interested in vegan cosmetics. Among the sub‐factors of the vegan cosmetics consumer's planned behavior model, attitude, subjective norm, perceived behavioral control, ethical responsibility, and self‐identity were found to have statistically significant positive effects on brand image and purchase intention, and brand image was found to have a statistically significant positive effect on purchase intention.\n20\n Accordingly, various studies on vegan ingredients, which are good ingredients, are being conducted. The surface activity and foaming properties of plant extracts rich in saponins have been studied. Based on their overall characteristics, Saponaria officinalis L. (soapwort), Avena sativa L. (oat), Aesculus hippocastanum L. (horse chestnut), Chenopodium quinoa Willd. (quinoa), Vaccaria hispanica (Mill.) Rauschert (cowherb), and Glycine max (L.) Merr. (soybean) have been proposed as the best potential sources of saponins for surfactant applications in natural cosmetics and household products.
Vitamin B12 applied orally through toothpaste enters the bloodstream and corrects vitamin B12 markers in the blood of vegans, who are at high risk of vitamin B12 deficiency. Vitamin B12‐fortified toothpaste has been shown to improve vitamin status in vegans.\n44\n To address these problems and meet the needs of consumers, additional research will be required on customized inner beauty and customized cosmetics, and on the development of a vegan cosmetics matching convergence service application that reflects changes in the beauty consumption market.\n26\n, \n43\n, \n45\n Table 4 summarizes the need for developing a vegan cosmetics matching fusion service application.\nIncreasing interests in good ingredients and vegan cosmetics\nDue to the seriousness of environmental pollution caused by industrial waste, veganism has become an important social issue.\nTo promote awareness and responsibility for environmental protection, more and more consumers have shown interest in vegetarian cosmetics in the beauty industry.\nThe necessity of developing vegan cosmetic matching fusion service application\nTherefore, this narrative review clearly indicates that consumers in the beauty and cosmetics industry are pursuing good consumption due to the COVID‐19 pandemic. However, a limitation of this study is that research on the cosmetics market is still ongoing, because the COVID‐19 pandemic has not yet ended. Accordingly, it will be necessary to continue research on the possibility of developing cosmetics with safe ingredients and vegan foods, reflecting the needs of consumers in the future.", "Based on the results of this study, it will be necessary to identify consumer needs for vegan cosmetics, which started from vegan food, and to develop applications for customized vegan inner beauty products and customized vegan cosmetics using customized inner beauty products and/or customized cosmetics.
This is expected to serve as important marketing material for the global vegan cosmetics market, confirming new changes in the cosmetics market.", "Fig S1\nClick here for additional data file." ]
[ null, "materials-and-methods", "results", null, null, null, null, "discussion", "conclusions", "supplementary-material" ]
[ "COVID‐19", "good consumption", "live commerce", "vegan cosmetics", "vegan inner beauty" ]
INTRODUCTION: The ongoing coronavirus disease‐19 (COVID‐19) pandemic is endangering millions of people in more and more countries and is a serious public health threat worldwide. Recently, extensive research and clinical trials have been conducted to develop antiviral drugs, vaccines, anti‐severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) antibody treatments, convalescent plasma therapy, and nanoparticle‐based treatments for COVID‐19. Nevertheless, the spread of the COVID‐19 virus continues. 1 Newly developed rapid point‐of‐care tests are contributing to controlling the spread of the COVID‐19 virus by facilitating large‐scale testing of the population. A population‐based cross‐sectional study conducted in Cantabria, Spain between April and May 2020 also evaluated the applicability of a self‐testing strategy for SARS‐CoV‐2. In the early days of the COVID‐19 outbreak, lung health was a major focus of research. Recently, COVID‐19 self‐diagnosis has been considered and used as a screening tool. Vaccines and treatments have been developed for COVID‐19, but the pandemic shows no sign of ending yet. 2 In recent years, e‐commerce such as online purchasing has been growing steadily, and with the transition to a non‐face‐to‐face society, the e‐commerce market is growing rapidly. 3 In Prof. Kim's book, “It is not a question of who owns more; rather, the new measure of life's abundance is ‘who has more experience’.” “Streaming” essentially means playing audio or video in real time without downloading it. It is called streaming because it treats data as if it were flowing like water. The most fundamental difference between streaming and downloading is ownership. The biggest feature of streaming is that you can experience content whenever you want, even if you do not own it. The era of enjoying such a streaming life has arrived, and live commerce is growing at the same time in the beauty market.
It has emerged as a major trend in the industry. A service method that provides information in a non‐face‐to‐face manner minimizes contact between people. As we enter the untact era, the frequency of purchasing customized cosmetics through untact mobile shopping is increasing, and research has shown that this correlates highly with interest in skin and the perception of customized cosmetics. In the untact era after COVID‐19, cosmetics consumption through mobile shopping is expected to increase. 4 , 5 Unlike in the past, the social media environment dominates our daily life; everything is changing rapidly, and nothing seems to stay the same for long. In such a rapidly changing era, it is socially and culturally meaningful to compare generations that share the same emotions and the differences in perception within a single generation. 6 , 7 , 8 Given the Korean economy and the global Hallyu wave, it is necessary to diagnose the reality of the beauty industry, which has grown in importance over time, and to suggest realistic directions for its development. The production and use of e‐commerce packaging has steadily increased in recent years due to the increase in online purchases, and its impact on the environment has increased accordingly. Humanity faces climate change, pollution, and the degradation and/or destruction of air, soil, water, and ecosystems. The climate and environmental crisis will be one of the greatest challenges in human history. As a result, consumers are becoming more attached to good consumption and good ingredients, and the trend is gradually spreading from vegan food to vegan cosmetics. 9 , 10 Therefore, as environmental issues have increased after the COVID‐19 pandemic, this review identifies consumer needs for vegan cosmetics, which are good cosmetics made with good ingredients, and focuses on the future value and direction of vegan cosmetics, from food to cosmetics.
MATERIALS AND METHODS: This review paper is a critical literature review, and a narrative review approach was used for this study. A total of 300–400 references were initially identified using representative journal search websites such as PubMed, Google Scholar, Scopus, RISS, and ResearchGate, from which a total of 45 papers published between 2009 and 2022 were selected in the final stage. The PRISMA flow diagram is shown in Figure S1. RESULTS: Changes in daily life due to COVID‐19 pandemic Effective vaccines and therapeutics to prevent COVID‐19 have been released, yet the world continues to rely on social distancing, sanitation measures, and repurposed drugs. By the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 vaccines had undergone various stages of development. In one global survey, 71.5% of respondents reported that they would be very or somewhat likely to receive a COVID‐19 vaccine. 48.1% said they would follow their employer's recommendations. Acceptance rates varied from nearly 90% (China) to below 55% (Russia). Respondents with higher confidence in government sources of information were more likely to accept vaccination and follow employer recommendations. Many vaccinations have since been carried out, but the world is still in a state of panic. 11 , 12 A social distancing policy encouraging people to spend more time at home during the COVID‐19 period has been implemented. 13 As a result, the transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees. Maintaining the option to work from home after COVID‐19 can help reduce burnout in the long term. Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home.
14  Telecommuting, working from home rather than in a company building, is a future‐oriented work method that benefits both the organization and its members, and it is spreading widely with the development of information infrastructure. It offers advantages such as increased productivity, higher job satisfaction, cost reduction, and flexibility of work and workplace, and it is also an important means of enhancing organizational competitiveness. It is expected to give women who face difficulties in child‐rearing the opportunity to combine work and housework at home. In a study of factors affecting attitudes toward telecommuting, a telecommuting consciousness survey was conducted among examiners of the Korean Intellectual Property Office, the most successful government agency practicing telecommuting. The perceived advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) all had a positive effect on attitudes toward telecommuting, whereas the perceived disadvantages (a sense of incongruity within the organization, difficulties for department heads in managing employees, and not knowing the organizational situation) had a negative effect. Compared with those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting. 15
Environmental issues due to the spread of live e‐commerce: As the number of single‐person households and single people increases, more people want to avoid the increasingly inconvenient communication of daily life, which has given rise to untact marketing in the retail industry. COVID‐19 quickly penetrated this untact culture and established untact marketing. Untact culture and untact marketing in Japan and Korea have been examined through case studies, and their similarities and differences compared. Untact culture and technological development combined to create novel untact marketing; in Japan, the traditional culture of face‐to‐face contact shifted toward an untact culture, while in Korea untact marketing using digital technology is applied in various ways. As COVID‐19 has persisted, efforts to make untact marketing efficient within this untact culture are accelerating. 16  With the development of mobile technology, such technology has become essential in all areas of modern life. People can easily access, anytime and anywhere, the devices and content that make up not only the media but all aspects of society and culture, including popular art, music, video, and everyday food, clothing, and shelter.
The development of these contemporary technologies, and the possibility of expanding untact space through mobile technology, which is universal and rapidly advancing in modern society, have attracted attention. Mobile technology centered on non‐face‐to‐face interaction has the potential to develop into a field of untact exhibition and space design. Untact services, which imply unmanned services based on fourth‐industrial‐revolution technologies such as artificial intelligence, big data, and the IoT, are developing into more sophisticated new technologies. 17 Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have become smarter by integrating new electronics with wireless communication and cloud data solutions. Many factors cause food loss and waste throughout the food supply chain; smart packaging systems have nevertheless developed rapidly in recent years, and several articles report breakthroughs while noting remaining challenges for sustainability. 18 Global warming and obesity: A systematic review found a common correlation between global warming and the obesity epidemic: global warming affects obesity prevalence; the obesity epidemic contributes to global warming; and the two influence each other. Increased energy consumption contributes to global warming. Policies that support the deployment of clean and sustainable energy sources, together with urban designs that encourage active lifestyles, are likely to alleviate the combined social burden of global warming and obesity.
19  Table 1 summarizes environmental problems caused by the spread of live commerce. Increasing interests in good ingredients and vegan cosmetics: As environmental pollution caused by industrial waste becomes more serious, veganism is emerging as a social issue. To raise awareness of and responsibility for environmental protection, more consumers in the beauty industry are turning to vegan cosmetics. 20 Recently, veganism has also become a notable food consumption pattern.
Vegetarianism includes several diets depending on the degree to which animal products such as meat and dairy are partly or fully excluded. In order of increasing restriction, these include flexitarian, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw‐food, and fruitarian diets. The last three can be extended to a lifestyle called veganism, defined as not using animal products (in cosmetics, clothing, ingredients, etc.) in daily life. 21 Since COVID‐19, research on sustainable change in the safety‐oriented beauty market suggests that the perspective of sustainable safety can be applied to the entire beauty and cosmetology industry, and that safe ingredients are needed as customers' perceptions change. Here, "good ingredients" means ingredients that are safe according to customers' changing perceptions. 22  This meaning also encompasses environmental issues. Recently, owing to environmental problems and the increased use of masks during infectious disease outbreaks, skin troubles have been increasing rapidly with changes in cosmetics use around the world; research has therefore been conducted to suggest natural products as skin‐soothing ingredients and makeup products. 23 Research on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. Among the sub‐factors of consumer decision‐making style, higher prudence and compliance were associated with higher consumption value, with significant effects on the sub‐factors of functional, economic, social, conditional, cognitive, and emotional value. The purchase decision attributes, purchase behavioral intentions, and consumption values of the MZ generation regarding vegan cosmetics for infants and toddlers are constantly changing with the spread of such products and with various environmental and social influences.
These factors were found to be significant determinants of the purchase intention of vegan cosmetics. 25
Good ingredients for vegan cosmetics originating from foods: Recently, research on "natural" and "vegetarian" cosmetics has been actively conducted. 26 Table 2 summarizes good ingredients, derived from food, for vegan cosmetics. Glycine max, also known as soybean or soya bean, is a legume native to East Asia. Soybeans contain many functional components, including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman‐Birk inhibitor and soybean trypsin inhibitor), tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fractions have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory and collagen‐stimulating effects, powerful antioxidant peroxyl‐radical‐scavenging activity, skin whitening, and sun protection. 27 Soybean isoflavone extracts have also been studied for inhibition of 2,4‐dinitrochlorobenzene (DNCB)‐induced contact dermatitis. Numerous epidemiological and clinical studies have demonstrated a protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. ISO‐1 is promising for ameliorating DNCB‐induced experimental inflammation and skin barrier damage, suggesting potential application of topical ISO‐1 for inflammatory dermatoses.
28  In studies on the safety and cosmetic use of black bean sprouts, the concentrations of major proteins and of carbohydrates such as polysaccharides were measured by the Folin phenol assay and the phenol‐sulfuric acid assay, respectively. It was concluded that black bean sprout extract is safe and can be used as an additive in anti‐aging and whitening cosmetics. 29  Palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO), and sunflower oil (SFO), which have different saturation levels, were the main oils in a stability evaluation of various vegetable‐oil‐based emulsions; the saturation level of the vegetable oil was confirmed to have a significant effect on emulsion stability. 30  One study reported that Korean red ginseng hot‐water extract relieved atopic dermatitis‐like inflammatory responses in vivo through negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway. Korean red ginseng is a traditional Korean medicinal plant that is often consumed as food; treatment decreased serum IgE levels, epidermal thickness, mast cell infiltration, and ceramidase release. 31 Studies have also examined the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants, but only a few have been thoroughly analyzed, and far fewer have found industrial application as bio‐surfactants. In one contribution, 45 plants from different families reported to be rich in saponins were screened for surface activity and foam properties. For this purpose, room‐temperature aqueous extracts (maceration solutions) of saponin‐rich plant organs were prepared and spray‐dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids.
The 15 selected plants were also extracted using hot water, but in most cases the high temperature lowered the surface activity of the extract. 32
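The Folin phenol and phenol‐sulfuric acid assays mentioned above are both colorimetric: analyte concentration is read off a linear standard curve fitted to the absorbance of known standards. As a minimal sketch of that calculation (the glucose standards and absorbance values below are illustrative assumptions, not data from the cited study), the fit is ordinary least squares followed by inversion of the curve:

```python
import numpy as np

# Hypothetical standard curve for a phenol-sulfuric acid assay:
# absorbance at 490 nm measured for glucose standards (mg/mL).
standards = np.array([0.0, 0.1, 0.2, 0.4, 0.8])        # known concentrations
absorbance = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # measured A490

# Fit A = slope * C + intercept by least squares.
slope, intercept = np.polyfit(standards, absorbance, 1)

def concentration(a490: float) -> float:
    """Invert the standard curve to estimate concentration (mg/mL)."""
    return (a490 - intercept) / slope

# Estimate the concentration of an unknown sample from its absorbance.
sample_a490 = 0.35
print(round(concentration(sample_a490), 3))  # → 0.346
```

The same pattern applies to the Folin assay; only the standard (e.g., bovine serum albumin for protein) and the measurement wavelength change.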
29  Palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO) and sunflower oil (SFO), which have different saturation levels, are the main oils in the stability evaluation study of various vegetable oil‐based emulsions. As a result, it was confirmed that the saturation level of the vegetable oil had a significant effect on the emulsion stability. 30  There is a study result that Korean red ginseng hot water extract relieved atopic dermatitis‐like inflammatory response by negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway in vivo. The Korean red ginseng is a traditional Korean medicinal plant and is often consumed as foods. Increased levels and decreased serum IgE levels, epidermal thickness, mast cell infiltration and ceramidase release. 31 Studies have been conducted on the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants. Only a few of these have been thoroughly analyzed and far fewer have found industrial applications as bio‐surfactants. In this contribution, it will be screened 45 plants from different families that were reported to be rich in saponins for their surface activity and foam properties. For this purpose, room temperature aqueous extracts such as maceration solutions of plant organs rich in saponins were prepared and spray dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids. The 15 selected plants were also extracted using hot water, but the high temperature lowered the surface activity of the extract in most cases. 32 Good ingredients for vegan and vegan cosmetics originated from foods Changes in daily life due to COVID‐19 pandemic: Effective vaccines and therapeutics to prevent COVID‐19 have been released. Yet the world continues to rely on social distancing, sanitation measures and repurposing drugs. 
The world has developed an effective SARS‐CoV‐2 vaccine. By the end of August 2020, 30 vaccines had already entered clinical trials, and more than 200 vaccines had undergone various stages of development. As a result, 71.5% were either “very high” or “somewhat likely” to receive the COVID‐19 vaccine. 48.1% said they would follow their employer's recommendations. The difference in acceptance rates varied from nearly 90% (China) to <55% (Russia). Respondents with higher confidence in government sources of information are more likely to be vaccinated and follow employer recommendations. As a result, many vaccinations have been carried out, but the world is still in a state of panic. 11 , 12 A social distancing policy to spend more time at home during the COVID‐19 period has been implemented. 13 As a result, the transition to telecommuting has been positive for most, with potential benefits in reducing fatigue for many employees. Maintaining the option to work from home after COVID‐19 can help reduce burnout in the long term. Addressing information technologies that can personalize options and ensure functionality has been found to be important for those who cannot work effectively from home. 14  Telecommuting, which requires working from home rather than in a company building, is a future‐oriented work method that is beneficial to both the organization and its members and is spreading widely with the development of information infrastructure. It has various advantages such as productivity increase, job satisfaction increase, cost reduction, the flexibility of work and workplace, etc., and is also important as a means of enhancing organizational competitiveness. It is expected to provide opportunities for women who have difficulties in raising children to perform work and housework at the same time at home. 
In the study of factors affecting attitudes toward telecommuting, the Korean Intellectual Property Office examiners’ telecommuting consciousness survey was conducted targeting examiners of the Korean Intellectual Property Office, which is the most successful government agency conducting telecommuting. The advantages of telecommuting (increased productivity, psychological freedom, and improved family relationships) all have a positive effect on the attitude toward telecommuting, and the disadvantages (creating a sense of incongruity within the organization, difficulties in managing employees by the department head, not knowing the organizational situation) was found to have a negative effect. However, compared to those who did not expect to work from home, the other three groups had a more positive attitude toward telecommuting. 15 Environmental issues due to the spread of live e‐commerce: As the number of single‐person households and the number of single people increases, the number of people who want to cut off the changing and inconvenient communication of life culture has increased, resulting in untact marketing in the retail industry. Here, COVID‐19 quickly penetrated untact culture and trends and established untact marketing. Untact culture and untact marketing in Japan and Korea were examined through case studies, and similarities and differences were compared. The untact culture and the development of technology were combined to create novel untact marketing, but in Japan, the traditional culture of face‐to‐face and contact changed to an untact culture. Untact marketing using digital technology is applied in various ways in Korea. As COVID‐19 continues for a long time, efforts to make untact marketing efficient in this untact culture are accelerating. 16  The development of mobile technology is an era in which modern people have become essential in all areas of life. 
You can easily access various devices and concepts that makeup not only the media, but also all aspects of society and the entire culture, including popular art, music, video, art, and daily food, clothing, and shelter culture anytime, anywhere. The development of these contemporary technologies and the possibility of expansion of the untact space due to the development of mobile technology, which is universal and rapidly developing in modern society, were paid attention to. The development of mobile technology, centered on non‐face‐to‐face, has the potential to develop into a field of untact exhibition and space design. Untact, a service that implies unmanned services such as artificial intelligence, big data, and IoT, which is the 4th industrial revolution technology, is developing into a more sophisticated and new technology. 17 Studies on the role of smart packaging systems in the food supply chain also suggest that they can affect food quality, safety, and sustainability. Packaging systems have evolved to become smarter by integrating new electronic products with wireless communication and cloud data solutions. There are many factors that cause food loss and waste problems throughout the food supply chain. However, the development of smart packaging systems has developed recently, and there are several articles showing breakthroughs and mentioning that there are challenges for sustainability. 18 Global warming and obesity: A systematic review study found that there was a common correlation between global warming and obesity epidemics. The following studies found: Global warming affects obesity prevalence; Obesity epidemic contributes to global warming; Global warming and obesity epidemics influence each other. Increased energy consumption affects global warming. Policies that support clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. 
Policies that support the deployment of clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. 19  Table 1 summarizes environmental problems caused by the spread of live commerce. Environmental issues due to the spread of live e‐commerce Increasing interests in good ingredients and vegan cosmetics: As the environmental pollution caused by industrial wastes becomes more serious, veganism is emerging as a social problem. To raise awareness and responsibility for environmental protection, more consumers are turning to vegetarian cosmetics in the beauty industry. 20 Recently, veganism is a food consumption pattern. Vegetarianism usually includes several diets depending on the degree of exclusion in part or all of animal products such as meat and dairy. Among them, flexible, semi‐vegetarian, pesco‐vegetarian, lacto‐ovo‐vegetarian, vegan, raw and fruit diets are classified in order of restriction. These last three modes can be extended to a lifestyle called veganism, which is defined as not using animal products (cosmetics, clothing, ingredients, etc.) in daily life. 21 Due to COVID‐19, research on sustainable changes in the safety‐oriented beauty market trend is coming to an era where the perspective of sustainable safety can be applied to the entire beauty and cosmetology industry, and as customers’ perceptions change, safe ingredients are needed. Here, “good ingredients” means safe ingredients according to changes in customers’ perceptions. 22  This meaning also includes environmental issues. Recently, due to the increase in the use of masks due to environmental problems or infectious diseases, skin troubles are rapidly increasing due to changes in the use of cosmetics around the world. Therefore, research was conducted to suggest natural products related to research on skin soothing ingredients and makeup products. 
23 A study on the influence of consumer decision‐making style on the consumption value of vegan cosmetics is in progress. In the effect of vegan cosmetics on consumption value, the higher the prudence and compliance type among the sub‐factors of consumer decision‐making type, the higher the consumption value. It was found to have a significant effect on sub‐factors, functional value, economic value, social value, conditional value, cognitive value, and emotional value. The purchase decision attributes, purchase behavioral intentions, and consumption values of the MZ generation of vegan cosmetics for infants and toddlers are constantly changing due to the spread of vegan cosmetics for infants and children, and various environmental and social impacts. It was found to have a significant effect on the purchase intention of vegan cosmetics as a determinant. 25 Good ingredients for vegan and vegan cosmetics originated from foods: Recently, research on "natural" and "vegetarian" cosmetics has been actively conducted. 26 Table 2 summarizes the good ingredients of vegan and vegan cosmetics derived from food. Glycine max, also known as soybean or soybean, is a type of legume native to East Asia. Soybeans contain many functional components including phenolic acids, flavonoids, isoflavonoids (quercetin, genistein, and daidzein), small proteins (Bowman‐Burk inhibitors, and soybean trypsin inhibitors) tannins, and proanthocyanidins. Soybean seed extract and fresh soymilk fraction have been reported to have cosmetic and dermatological benefits such as anti‐inflammatory, collagen stimulatory effect, powerful anti‐oxidant scavenging peroxyl radical activities, skin whitening and sun protective effects. 27 Soybean isoflavone extracts have also been studied to inhibit 2,4‐dinitrochlorobenzene‐induced contact dermatitis. 
Numerous epidemiological and clinical studies have demonstrated the protective role of dietary isoflavones in the pathogenesis of several chronic diseases such as inflammatory bowel disease. ISO‐1 is promising for the amelioration of DNCB‐induced experimental inflammation and skin barrier damage, suggesting a potential application of topical ISO‐1 for inflammatory dermatosis. 28  In studies on the safety and cosmetic use of black bean sprouts, the concentrations of major proteins and carbohydrates such as polysaccharides were measured by the Folin phenol assay and the phenol–sulfuric acid assay, respectively. It was concluded that black bean sprout extract is safe and can be used as an additive in anti‐aging and whitening cosmetics. 29  Palm olein (POo), olive oil (OO), safflower oil (SAF), grape seed oil (GSO), soybean oil (SBO), and sunflower oil (SFO), which have different saturation levels, were the main oils in a stability evaluation study of various vegetable oil‐based emulsions. The study confirmed that the saturation level of the vegetable oil had a significant effect on emulsion stability. 30  One study reported that Korean red ginseng hot water extract relieved an atopic dermatitis‐like inflammatory response in vivo through negative regulation of the mitogen‐activated protein kinase (MAPK) signaling pathway. Korean red ginseng is a traditional Korean medicinal plant and is often consumed as food. The extract decreased serum IgE levels, epidermal thickness, mast cell infiltration, and ceramidase release. 31 Studies have also examined the surface activity and foaming properties of plant extracts. Saponins are amphiphilic glycoside secondary metabolites produced by many plants. Only a few of these have been thoroughly analyzed, and far fewer have found industrial applications as bio‐surfactants. 
In that contribution, 45 plants from different families reported to be rich in saponins were screened for surface activity and foam properties. For this purpose, room‐temperature aqueous extracts (maceration solutions) of saponin‐rich plant organs were prepared and spray‐dried under the same conditions in the presence of sodium benzoate and potassium sorbate as preservatives and drying aids. Fifteen selected plants were also extracted using hot water, but the high temperature lowered the surface activity of the extract in most cases. 32 Good ingredients for vegan and vegetarian cosmetics originating from foods DISCUSSION: The coronavirus outbreak poses multiple challenges for people around the world trying to stay healthy while negotiating disease risks and harsh social‐distancing measures. One study examined positive and negative emotions arising from satisfaction and frustration of the basic psychological needs for autonomy, competence, and relatedness, as well as life satisfaction and overall distress. It was performed from the perspective of self‐determination theory with a sample of 965 Serbian women (mean age 29 years). A Serbian emotion list, the Basic Psychological Needs Satisfaction and Frustration Scale, the Satisfaction with Life Scale, the Depression Anxiety Stress Scale‐12, and the PANAS‐X were used. The indirect relationships between positive and negative emotions, life satisfaction, and overall distress were successively mediated by autonomy satisfaction, competence frustration, and relatedness satisfaction and frustration. An important question has been raised about how the tendency to experience positive or negative emotions affects changes in subjective well‐being. Consistent with the assumptions of self‐determination theory, the results suggest that satisfaction and frustration of basic psychological needs may play an important role in achieving optimal well‐being. 
Therefore, it is reported that our understanding of human functioning in special circumstances has improved during the pandemic. 33 , 34 Schools across the world were closed during the COVID‐19 pandemic. However, there are few data on the transmission of SARS‐CoV‐2 among children and in educational settings. Most schools in Australia remained open during the first epidemic wave, despite a decline in in‐person student attendance at the peak of the pandemic. 35 A modeling study of the risk of a second COVID‐19 wave was conducted in the UK, assessing the impact of school reopening alongside optimal testing and contact‐tracing strategies. If schools were to open full‐time in September, together with a gradual easing of other closures, the results suggested a second wave peaking in December 2020. In that scenario, the second wave of infection would be 2.0–2.3 times the size of the original COVID‐19 wave. Even when the infectiousness of children and young adults is assumed to vary from 100% down to 50% of that of older adults, a comprehensive and effective testing, follow‐up, and quarantine strategy is still needed to avoid a second COVID‐19 wave. 36 In this era, telecommuting has become a necessity rather than an option. Telecommuting, which had been gradually expanding in cyber‐physical space, was further accelerated by the COVID‐19 pandemic. Various organizations at home and abroad are encouraging telecommuting, and it is predicted that telecommuting will become established as a standard type of work as a result of the COVID‐19 pandemic. 37  In the contemporary media environment, it is necessary to pay attention to the personalization of images. As the number of zero‐TV households increases, the number of people seeking content at their own time on computers, tablet PCs, and smartphones instead of traditional TV sets has grown rapidly. At present, YouTube and various social networking services (SNS) are the most frequently used platforms. 
Now, individuals go beyond viewing to creating. In addition, manipulable image interfaces coexist with the traditional screen interface represented by film. In short, the use of digital devices has become so common that our daily lives can be called "screen life." 38  With millions of viewers worldwide, live streaming is a new social medium that provides real‐time video content with a variety of social interaction features. One study aimed to understand the personality traits, motivations, and user behaviors of active live streaming viewers in the general Chinese population. Extraversion was negatively associated with live streaming use, while openness was positively associated with it. The main motivations for watching live streaming were social interaction, information gathering, and entertainment, which were associated with different frequencies of use and genre choices. 39 However, other studies show that live streaming influences mental health and depression. Online live streaming is gaining popularity in media entertainment, and micro‐celebrities such as video streamers can have an influence on their audiences similar to that of celebrities. An online survey of 470 people examined how a streamer's disclosure of depression affects streamers' and viewers' perceptions of depression. In addition, parasocial relationships, parasocial interactions, and identification with streamers were associated with (1) the viewer's perceived authenticity of and trust in the streamer; and (2) an increase in the viewer's perceived prevalence, risk sensitivity, and risk severity. The findings show a strong link between streamers' health disclosures and public perceptions of depression, echoing previous studies of celebrity influence. 40 Live commerce content is being strengthened throughout the beauty distribution industry. 
The live commerce market is growing rapidly as non‐face‐to‐face consumption increases with the spread of the novel coronavirus infection COVID‐19. This year the market is worth 3 trillion won, and it is expected to grow to 10 trillion won by 2023. In addition to collaborating with large companies such as Naver and Kakao Corporation, retailers are actively developing their own live commerce platforms. They also sell products planned with YouTube creators and are increasing communication with the MZ generation, a major consumer group. YouTube creators command fandoms as large as those of idols, and the MZ generation, accustomed to video media, does not perceive live commerce broadcasting as a simple product purchase channel. They treat it as a new communication channel and pursue fun in it, and live commerce is expected to become even more active in the future as it can attract young customers and expand contact points. 41 , 42 However, in these times, electricity, transport, and agricultural production lead to excessive greenhouse gas emissions. Global warming directly affects obesity through food supply and price shocks, nutritional shifts, physical inactivity, and adaptive thermogenesis, while the obesity epidemic in turn contributes to global warming through increased energy consumption. Policies that support the deployment of clean and sustainable energy sources and urban designs that encourage active lifestyles are likely to alleviate the social burden of global warming and obesity. 19 From the perspective of sustainable safety, a study on the overall change of the beauty industry market after the COVID‐19 pandemic argued that harmful and safe edible ingredients should be comparatively analyzed from the perspective of clean beauty. It also argued that the importance of sustainable value should be considered from the point of view of clean beauty. 
It was concluded that, reflecting changes in customer perception, the procurement of raw materials, manufacturing processes, product testing, and related steps are all part of clean beauty, which aims to minimize carbon emissions, reduce water use, recycle product containers, and cut waste. 22 Lately, as consumers have become increasingly interested in products made with safe ingredients, vegan‐certified cosmetics are increasing in the cosmetics market. In this situation, consumers analyze cosmetic ingredients or obtain product information through various channels such as social media and applications to meet their need for products made with safe ingredients. 43 Table 3 summarizes the growing interest in good ingredients and vegan cosmetics. An online survey was conducted among female consumers in their 20s to 40s who are interested in vegan cosmetics. Among the sub‐factors of the vegan cosmetics consumer's planned behavior model, attitude, subjective norm, perceived behavioral control, ethical responsibility, and self‐identity were found to have statistically significant positive effects on brand image and purchase intention, and brand image in turn had a statistically significant positive effect on purchase intention. 20 Accordingly, various studies on vegan ingredients, which are good ingredients, are being conducted. The surface activity and foaming properties of plant extracts rich in saponins have been studied. Depending on their overall characteristics, Saponaria officinalis L. (soapwort), Avena sativa L. (oat), Aesculus hippocastanum L. (horse chestnut), Chenopodium quinoa Willd. (quinoa), Vaccaria hispanica (Mill.) Rauschert (cowherb), and Glycine max (L.) Merr. (soybean) have been proposed as the best potential sources of saponins for surfactant applications in natural cosmetics and household products. 
Vitamin B12 applied orally via toothpaste enters the bloodstream and corrects vitamin B12 markers in the blood of vegans at high risk of vitamin B12 deficiency; vitamin B12‐fortified toothpaste has thus been shown to improve vitamin status in vegans. 44 To address these problems and meet consumer needs, additional research on customized inner beauty and customized cosmetics, and on the development of a vegan cosmetics matching convergence service application responding to changes in the beauty consumption market, will be required. 26 , 43 , 45  Table 4 summarizes the need for developing a vegan cosmetics matching fusion service application. Increasing interest in good ingredients and vegan cosmetics Because of the seriousness of environmental pollution caused by industrial waste, veganism has become an important social issue. To promote awareness of and responsibility for environmental protection, more and more consumers have shown interest in vegetarian cosmetics in the beauty industry. The necessity of developing a vegan cosmetics matching fusion service application Therefore, this narrative review clearly indicates the need of consumers in the beauty and cosmetics industry to pursue good consumption because of the COVID‐19 pandemic. A limitation of this study, however, is that research on the cosmetics market cannot yet be complete because the COVID‐19 pandemic is ongoing. Accordingly, it will be necessary to continue research on the possibility of developing cosmetics with safe ingredients and vegan foods, reflecting the needs of consumers in the future. CONCLUSIONS: Based on the results of this study, it will be necessary to identify consumer needs for vegan cosmetics, which started from vegan food, and to develop applications for customized vegan inner beauty products and customized vegan cosmetics using customized inner beauty products and/or customized cosmetics. 
This is expected to serve as important marketing material for the global vegan cosmetics market, confirming new changes in the cosmetics market. Supporting information: Fig S1
Background: New changes are taking place in the beauty and cosmetology market due to changes in daily life caused by coronavirus disease-19 (COVID-19) and environmental alteration caused by the spread of live commerce. Methods: This review paper is a critical literature review, and a narrative review approach was used. A total of 300-400 references were retrieved using representative journal search websites such as PubMed, Google Scholar, Scopus, RISS, and ResearchGate, of which 45 papers published from 2009 to 2022 were selected in the final stage. Results: As environmental problems increased after the COVID-19 pandemic, we sought to understand consumers' needs for vegan cosmetics, which offer good ingredients and good cosmetics. This narrative review therefore clearly shows the need of beauty and cosmetics industry consumers to pursue good consumption due to the global COVID-19 pandemic. Conclusions: Accordingly, it will be necessary to identify consumer needs for vegan cosmetics, which started from vegan foods, and to develop applications for customized inner beauty products, customized vegan inner beauty products, and/or customized vegan cosmetics. This is expected to serve as important marketing material for the global vegan cosmetics market, confirming new changes in the cosmetics market.
INTRODUCTION: The ongoing coronavirus disease‐19 (COVID‐19) pandemic is endangering millions of people in more and more countries and is a serious public health threat worldwide. Recently, extensive research and clinical trials have been conducted to develop antiviral drugs, vaccines, anti‐severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) antibody treatments, convalescent plasma therapy, and nanoparticle‐based treatments for COVID‐19. Nevertheless, the COVID‐19 virus continues to spread. 1 Newly developed rapid point‐of‐care tests are contributing to controlling the spread of the COVID‐19 virus by facilitating large‐scale testing of the population. A population‐based cross‐sectional study conducted in Cantabria, Spain between April and May 2020 also evaluated the applicability of a self‐testing strategy for SARS‐CoV‐2. In the early days of the COVID‐19 outbreak, lung health was a major focus of research; more recently, COVID‐19 self‐diagnosis has been considered and used as a screening tool. Vaccines and treatments have been developed for COVID‐19, but the pandemic shows no sign of ending yet. 2 In recent years, e‐commerce such as online purchasing has been growing steadily, and in this social situation the e‐commerce market is growing rapidly with the transition to a non‐face‐to‐face society. 3 In Prof. Kim's book, it is written that "it is not a question of who owns more; a new measure of life's abundance is 'who has more experience'." "Streaming" essentially means playing audio or video in real time without downloading it; it is called streaming because it treats data as if it were flowing like water. The most fundamental difference between streaming and downloading is ownership: the defining feature of streaming is that you can have the experience whenever you want, even without owning it. The era of enjoying such a streaming life has arrived, and live commerce is growing at the same time in the beauty market. 
It has emerged as a major trend in the industry. A service method that provides information in a non‐face‐to‐face manner minimizes contact between people. As we enter the untact era, the frequency of purchasing customized cosmetics through untact mobile shopping is increasing, and research has shown that this correlates strongly with interest in skin and awareness of customized cosmetics. In the untact era after COVID‐19, cosmetics consumption through mobile shopping is expected to increase. 4 , 5 Unlike in the past, the social media environment dominates our daily life; everything changes rapidly, and nothing seems to stay the same for long. In such a rapidly changing era, it is socially and culturally meaningful to compare generations that share the same sensibilities with the differences in perception within a single generation. 6 , 7 , 8  Given the Korean economy and the global Hallyu wave, it is necessary to diagnose the reality of the beauty industry, which has grown in importance over time, and to suggest realistic directions for its development. The production and use of e‐commerce packaging has steadily increased in recent years with the rise of online purchasing, and its environmental impact has increased accordingly. Humanity faces climate change, pollution, and the degradation and/or destruction of air, soil, water, and ecosystems. The climate and environmental crisis will be one of the greatest challenges in human history. As a result, consumers are becoming more attentive to good consumption and good ingredients, and the trend is gradually spreading from vegan food to vegan cosmetics. 9 , 10 Therefore, as environmental issues have been increasing after the COVID‐19 pandemic, this review identifies consumers' needs for vegan cosmetics, which offer good ingredients and good cosmetics, and focuses on the future value and direction of vegan cosmetics from food to cosmetics. 
CONCLUSIONS: Based on the results of this study, it will be necessary to identify consumer needs for vegan cosmetics, which started from vegan food, and to develop applications for customized vegan inner beauty products and customized vegan cosmetics using customized inner beauty products and/or customized cosmetics. This is expected to serve as important marketing material for the global vegan cosmetics market, confirming new changes in the cosmetics market.
Background: New changes are taking place in the beauty and cosmetology market due to changes in daily life caused by coronavirus disease-19 (COVID-19) and environmental alteration caused by the spread of live commerce. Methods: This review paper is a critical literature review, and a narrative review approach was used. A total of 300-400 references were retrieved using representative journal search websites such as PubMed, Google Scholar, Scopus, RISS, and ResearchGate, of which 45 papers published from 2009 to 2022 were selected in the final stage. Results: As environmental problems increased after the COVID-19 pandemic, we sought to understand consumers' needs for vegan cosmetics, which offer good ingredients and good cosmetics. This narrative review therefore clearly shows the need of beauty and cosmetics industry consumers to pursue good consumption due to the global COVID-19 pandemic. Conclusions: Accordingly, it will be necessary to identify consumer needs for vegan cosmetics, which started from vegan foods, and to develop applications for customized inner beauty products, customized vegan inner beauty products, and/or customized vegan cosmetics. This is expected to serve as important marketing material for the global vegan cosmetics market, confirming new changes in the cosmetics market.
8,981
242
[ 710, 497, 561, 421, 584 ]
10
[ "cosmetics", "vegan", "19", "untact", "covid", "covid 19", "ingredients", "vegan cosmetics", "social", "global" ]
[ "developed covid 19", "coronavirus sars cov", "covid 19 outbreak", "covid 19 vaccine", "coronavirus disease 19" ]
null
[CONTENT] COVID‐19 | good consumption | live commerce | vegan cosmetics | vegan inner beauty [SUMMARY]
null
[CONTENT] COVID‐19 | good consumption | live commerce | vegan cosmetics | vegan inner beauty [SUMMARY]
[CONTENT] COVID‐19 | good consumption | live commerce | vegan cosmetics | vegan inner beauty [SUMMARY]
[CONTENT] COVID‐19 | good consumption | live commerce | vegan cosmetics | vegan inner beauty [SUMMARY]
[CONTENT] COVID‐19 | good consumption | live commerce | vegan cosmetics | vegan inner beauty [SUMMARY]
[CONTENT] COVID-19 | Cosmetics | Humans | Marketing | Pandemics | Vegans [SUMMARY]
null
[CONTENT] COVID-19 | Cosmetics | Humans | Marketing | Pandemics | Vegans [SUMMARY]
[CONTENT] COVID-19 | Cosmetics | Humans | Marketing | Pandemics | Vegans [SUMMARY]
[CONTENT] COVID-19 | Cosmetics | Humans | Marketing | Pandemics | Vegans [SUMMARY]
[CONTENT] COVID-19 | Cosmetics | Humans | Marketing | Pandemics | Vegans [SUMMARY]
[CONTENT] developed covid 19 | coronavirus sars cov | covid 19 outbreak | covid 19 vaccine | coronavirus disease 19 [SUMMARY]
null
[CONTENT] developed covid 19 | coronavirus sars cov | covid 19 outbreak | covid 19 vaccine | coronavirus disease 19 [SUMMARY]
[CONTENT] developed covid 19 | coronavirus sars cov | covid 19 outbreak | covid 19 vaccine | coronavirus disease 19 [SUMMARY]
[CONTENT] developed covid 19 | coronavirus sars cov | covid 19 outbreak | covid 19 vaccine | coronavirus disease 19 [SUMMARY]
[CONTENT] developed covid 19 | coronavirus sars cov | covid 19 outbreak | covid 19 vaccine | coronavirus disease 19 [SUMMARY]
[CONTENT] cosmetics | vegan | 19 | untact | covid | covid 19 | ingredients | vegan cosmetics | social | global [SUMMARY]
null
[CONTENT] cosmetics | vegan | 19 | untact | covid | covid 19 | ingredients | vegan cosmetics | social | global [SUMMARY]
[CONTENT] cosmetics | vegan | 19 | untact | covid | covid 19 | ingredients | vegan cosmetics | social | global [SUMMARY]
[CONTENT] cosmetics | vegan | 19 | untact | covid | covid 19 | ingredients | vegan cosmetics | social | global [SUMMARY]
[CONTENT] cosmetics | vegan | 19 | untact | covid | covid 19 | ingredients | vegan cosmetics | social | global [SUMMARY]
[CONTENT] 19 | covid 19 | covid | cosmetics | streaming | treatment | face | commerce | growing | era [SUMMARY]
null
[CONTENT] untact | culture | cosmetics | vegan | value | technology | oil | telecommuting | warming | global warming [SUMMARY]
[CONTENT] customized | cosmetics | vegan | inner beauty products | customized vegan | inner beauty products customized | products customized | beauty products | cosmetics market | beauty products customized [SUMMARY]
[CONTENT] cosmetics | vegan | untact | 19 | vegan cosmetics | covid | covid 19 | telecommuting | value | ingredients [SUMMARY]
[CONTENT] cosmetics | vegan | untact | 19 | vegan cosmetics | covid | covid 19 | telecommuting | value | ingredients [SUMMARY]
[CONTENT] disease-19 | COVID-19 [SUMMARY]
null
[CONTENT] COVID-19 ||| COVID-19 [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] disease-19 | COVID-19 ||| ||| 300-400 | PubMed | Google Scholar | ResearchGate | 45 | 2009 to 2022 ||| ||| COVID-19 ||| COVID-19 ||| ||| [SUMMARY]
[CONTENT] disease-19 | COVID-19 ||| ||| 300-400 | PubMed | Google Scholar | ResearchGate | 45 | 2009 to 2022 ||| ||| COVID-19 ||| COVID-19 ||| ||| [SUMMARY]
Age-specific prevalence and genotype distribution of human papillomavirus in women from Northwest China.
35365956
Human papillomavirus (HPV), of which there are more than 200 genotypes, is the leading cause of cervical cancer. Different genotypes differ in their potential to cause premalignant lesions and cervical cancer. In this study, we investigated the age-specific prevalence and genotype distribution of HPV in Northwest China.
BACKGROUND
We recruited 145,918 unvaccinated women from Northwest China for a population-based HPV DNA screening test between June 2015 and December 2020. A lab-based test was performed for each volunteer using flow fluorescence technology to identify HPV genotypes.
MATERIALS AND METHODS
The overall infection rate of HPV was 22.97%. With the participants divided into 12 groups according to age, a bimodal curve of infection rate was obtained, with peaks in the younger-than-20 and 61-65 groups. The five most common HPV genotypes among all participants were HPV 16, 58, 52, 53, and 61, in descending order of frequency. Among women younger than 25 years, HPV 6 and 11 were more common, even exceeding some of the genotypes listed above. Among women older than 65 years, HPV 18 and 66 were as common as or more common than the six most common genotypes in the overall population. Additionally, the distribution of single and multiple infections differed across age groups.
RESULTS
The baseline prevalence and genotype distribution of HPV in Northwest China were uncovered for the first time. Age was related to the epidemiology of different HPV genotypes. These results will be of great significance for future healthcare services.
CONCLUSION
[ "Female", "Humans", "Adult", "Aged", "Papillomaviridae", "Alphapapillomavirus", "Papillomavirus Infections", "Prevalence", "Uterine Cervical Neoplasms", "Genotype", "Age Factors", "China", "Uterine Cervical Dysplasia" ]
9678107
INTRODUCTION
Human papillomavirus (HPV) is the leading cause of cervical cancer, which is the fourth most common female cancer worldwide. 1 HPV infection is the most common sexually transmitted infection, and approximately 70% of sexually active females will be infected with HPV during their lifetime. 2 Although most HPV infections are asymptomatic, persistent infection can induce cervical cancer. 3 , 4 Hitherto, more than 200 HPV genotypes have been identified, which differ in their potential to cause premalignant lesions and cervical cancer. Among them, 12 genotypes, including HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59, are classified as high‐risk (HR) genotypes, and another 12, including HPV 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and CP6108, as low‐risk (LR) genotypes. 5 , 6 Specifying the prevalence of different HPV genotypes can predict the cancer risk in a population. Preventing HPV infection among women appears to be a promising route to eliminating cervical cancer. Since the development of the first HPV vaccine, 7 vaccination programs have been rolled out in some developed countries to reach women before they are exposed to HPV. 8 , 9 , 10 , 11 To date, three commercial HPV vaccines are available in China: the bivalent vaccine (Cervarix) targeting HPV 16 and 18; the quadrivalent vaccine (Gardasil) targeting HPV 6, 11, 16, and 18; and the 9‐valent vaccine (Gardasil 9) targeting HPV 6, 11, 16, 18, 31, 33, 45, 52, and 58. Furthermore, more HPV vaccines developed by Chinese domestic enterprises are coming to market. 12 With such a wide variety of vaccines, it is difficult for the public to choose. The prevalence of HPV genotypes depends on geographic region, 13 so knowledge of the geographical prevalence of HPV genotypes provides important information for vaccine selection. 
The geographical prevalence of HPV genotypes has been widely investigated in previous studies, revealing different prevalence patterns in different areas. Globally, HPV 16 and 18 are the most prevalent. 14 The most common HPV genotypes among the Asian population with cervical cancer are HPV 16, 18, 45, 52, and 58. 15 However, according to data from the WHO, HPV 16, 18, 33, 52, and 58 are the five most common genotypes in patients with cervical cancer in Eastern Asia. 16 Data from several provinces in China, such as Guangdong, Jiangsu, Sichuan, Yunnan, Hunan, and Shandong, suggest that the highly prevalent HPV genotypes differ among provinces. 17 , 18 , 19 , 20 , 21 Up to now, studies on HPV genotypes have all been conducted in Southwest, Central South, Southeast, or Eastern China, but not in Northwest China. In Northwest China, a less‐developed area, the epidemiology of HPV is considered to be different from that of the developed areas, and cost‐effectiveness in the prevention of cervical cancer receives much more attention there. 22 A study of the prevalence and genotype distribution of HPV is urgently needed to control the economic burden of cervical cancer on public health in Northwest China. Therefore, we aimed to provide large‐scale epidemiologic data on the genotype distribution and prevalence of HPV among women in Northwest China. All samples in this study were collected from women volunteers who had never received an HPV vaccine. Hence, the prevalence and distribution of HPV genotypes in Northwest China are elucidated for the first time, and age‐related differences are uncovered.
null
null
RESULTS
HPV infection rates and age‐specific prevalence There were 145,918 women (15 to 82 years) included, and 33,522 were HPV positive (Table 1), an infection rate of 22.97%. All subjects were divided into 12 groups according to age, and the infection rate showed a U‐shape from the women younger than 20 through those aged 61–65. Among subjects aged 66–70 and older than 70, the infection rate showed a decreasing trend compared with those aged 56–60 and 61–65, but it remained higher than in the younger groups (Figure 1). The lowest and highest infection rates were 19.30% and 35.66%, in the 26–30 and 61–65 groups, respectively. Among subjects between 21 and 45 years old, the infection rate fluctuated between 19.30% and 22.11%, below the overall infection rate; the other groups were the opposite. Basic character of the study population Abbreviation: HPV, human papillomavirus. 
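The 12 age groups used above (younger than 20, then 5-year bins from 21–25 up to 66–70, and older than 70) can be sketched as a small binning helper. This is an illustrative sketch, not code from the study; the function name and the handling of the age-20 boundary are assumptions:

```python
def age_group(age: int) -> str:
    """Map an age to one of the 12 groups used in the analysis:
    <=20 (boundary handling assumed), 5-year bins 21-25 ... 66-70, and >70."""
    if age <= 20:
        return "<=20"
    if age > 70:
        return ">70"
    lo = 21 + 5 * ((age - 21) // 5)  # lower bound of the 5-year bin
    return f"{lo}-{lo + 4}"
```

Tallying `age_group(a)` over the cohort's ages (e.g. with `collections.Counter`) yields the denominator for each age-specific infection rate.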
Genotype distribution of HPV and age‐specific prevalence The prevalence of all 27 genotypes in the population is detailed in Supplementary S1. HPV 16 was the most prevalent (5.18%, 7564/145,918), followed by HPV 58 (3.10%, 4521/145,918), HPV 52 (2.75%, 4013/145,918), HPV 53 (2.18%, 3181/145,918), and HPV 61 (1.74%, 2532/145,918). Together, these were the five most common HPV genotypes in the population; of the five, only HPV 61 is an LR‐HPV genotype. As shown in Figure 2, in women younger than 20, the five most common HR‐HPV genotypes were HPV 16 (6.70%, 45/672), HPV 58 (3.57%, 24/672), HPV 52 (3.13%, 21/672), HPV 18 (2.53%, 17/672), and HPV 56 (2.38%, 16/672), and HPV 11 (5.80%, 39/672) and HPV 6 (5.06%, 34/672) were the two most common LR‐HPV genotypes. Similarly, in women aged 21–25, HPV 6 and 11 remained the most common LR‐HPV genotypes, followed by HPV 61; in all other age groups, HPV 61 was the most common LR‐HPV genotype. Among the HR‐HPV genotypes, HPV 16, 58, and 52 were the most prevalent in every age group. In women older than 36, the five most common HR‐HPV genotypes were highly consistent: HPV 16, 58, 52, 53, and 56; the three most common LR‐HPV genotypes in these women were HPV 61, 55, and 81. Genotype distribution of HPV in each age group. (A) HR genotypes. (B) LR genotypes. HPV, human papillomavirus; HR, high‐risk; LR, low‐risk The prevalence of HR‐ and LR‐HPV genotypes in each age group was obtained by combining the infections with the individual genotypes. As with the age‐specific infection rates, both HR‐ and LR‐HPV prevalence followed U‐shaped curves (Figure 3) and decreased significantly in women older than 70. In all age groups, the prevalence of LR‐HPV was below the overall infection rate of 20.52%, whereas HR‐HPV was more prevalent than the overall infection rate in most groups, the exceptions being women aged 26–30 and 41–45; the lowest HR‐HPV prevalence was 19.61%, in women aged 26–30. Age‐specific prevalence of HR and LR‐HPV genotypes 
Distribution of single and multiple infections As some subjects were infected with more than one HPV genotype, the single and multiple infection types were analyzed. A total of 25,181 subjects had a single infection, accounting for 75.38% of all HPV‐positive cases, with an overall prevalence in the population of 17.26% (Table 1). The prevalences of double, triple, and multiple infections in the population were 4.25%, 1.09%, and 0.38%, accounting for 18.22%, 4.75%, and 1.64% of all HPV‐positive cases, respectively. The proportions of single, double, triple, and multiple infections in each age group were further analyzed (Figure 4). Multiple infection was the least common type in every age group except women younger than 20, in whom its proportion, 8.94%, was the highest of all age groups; the proportion of single infection in this group, 59.22%, was correspondingly the lowest. In the other age groups, the proportions of single infection were all above 70%, except in women aged 56–70. The proportions of double and triple infection were relatively stable except for two peaks, in women younger than 20 and those aged 56–70; the highest proportions of double and triple infections were 25.70% and 10.11%, in women younger than 20 and those aged 66–70, respectively. Age‐specific distribution of single and multiple infections
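The genotype classification behind these tallies can be expressed as a small helper, using the HR/LR genotype sets defined in the Methods and the single/double/triple/multiple labels used above. The function names are hypothetical; this is a sketch of the bookkeeping, not the study's code.

```python
# HR and LR genotype sets as defined in the Methods (17 HR, 10 LR).
HR_GENOTYPES = {16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, 82}
LR_GENOTYPES = {6, 11, 40, 42, 43, 44, 55, 61, 81, 83}

def classify_multiplicity(genotypes):
    """Label a subject's detected genotypes as in the Results:
    single (1 genotype), double (2), triple (3), or multiple (4 or more)."""
    n = len(set(genotypes))
    if n == 0:
        return "negative"
    return {1: "single", 2: "double", 3: "triple"}.get(n, "multiple")

def risk_flags(genotypes):
    """Return (has_HR, has_LR), so HR- and LR-HPV prevalence can be
    tallied separately per age group (a subject can count toward both)."""
    detected = set(genotypes)
    return bool(detected & HR_GENOTYPES), bool(detected & LR_GENOTYPES)
```

A subject positive for HPV 16, 58, 52, and 61, for example, would be counted once in the "multiple" category and would contribute to both the HR‐ and LR‐HPV prevalence of her age group.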
CONCLUSION
In conclusion, this study is the first to characterize the prevalence and distribution of HPV genotypes in Northwest China. Age is an influential factor in the epidemiology of HPV genotypes. These results provide a basis for future medical interventions and important information for the development of next‐generation HPV vaccines.
[ "INTRODUCTION", "SUBJECTS AND METHODS", "Subjects", "\nHPV detection and genotyping", "Statistical analysis", "\nHPV infection rates and age‐specific prevalence", "Genotype distribution of HPV and age‐specific prevalence", "Distribution of single and multiple infections", "AUTHOR CONTRIBUTIONS" ]
[ "Human papillomavirus (HPV) is the leading cause of cervical cancer, which is the fourth most common female cancer worldwide.\n1\n HPV infection is the most common sexually transmitted infection, and approximately 70% of females having sex will be infected with HPV during the whole lifetime.\n2\n Although most HPV infections are asymptomatic, the persistent infection could induce cervical cancer.\n3\n, \n4\n Hitherto, there are more than 200 HPV genotypes, which are different in respect of the potential to cause premalignant lesions and cervical cancers. Among them, the 12 genotypes, including HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59, were classified as high‐risk (HR) genotypes, and other 12 genotypes, including HPV 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and CP6108, as low‐risk (LR) genotypes.\n5\n, \n6\n Specifying the prevalence of different HPV genotypes could predict the cancer risk in the population.\nIt seems to be a promising method to eliminate cervical cancer by preventing HPV infections among women. Since the development of the first HPV vaccine,\n7\n vaccination programs have been spread among women in some developed countries before they get exposed to HPV.\n8\n, \n9\n, \n10\n, \n11\n To date, three commercial HPV vaccines are available in China, including the bivalent vaccine (Cervarix) targeting HPV 16 and 18; the quadrivalent HPV vaccine (Gardasil) targeting HPV 6, 11, 16, and 18; and the 9‐valent HPV vaccine (Gardasil) targeting HPV 6, 11, 16, 18, 31, 33, 45, 52, and 58. Furthermore, more HPV vaccines developed by Chinese domestic enterprises are coming to the market.\n12\n With the wide variety of vaccines, it is difficult for the public to choose. 
The prevalence of HPV genotypes is dependent on the geographic region,\n13\n so that knowledge of the geographical prevalence of HPV genotypes would provide important information for vaccine selection.\nThe geographical prevalence of HPV genotypes had been widely investigated in previous studies, leading to different prevalence patterns in different areas. Globally, HPV 16 and 18 are most prevalent.\n14\n Additionally, the most common HPV genotypes among the Asian population with cervical cancer are HPV 16, 18, 45, 52, and 58.\n15\n However, according to the data from WHO, HPV 16, 18, 33, 52, and 58 are the five most common HPV genotypes in patients with cervical cancer in Eastern Asia.\n16\n Data from several provinces in China, such as Guangdong, Jiangsu, Sichuan, Yunnan, Hunan, and Shandong, suggest that the HPV genotypes with high prevalence in different provinces are different.\n17\n, \n18\n, \n19\n, \n20\n, \n21\n Up to now, studies on HPV genotypes are all conducted in Southwest, Central South, Southeast, or Eastern China, not in Northwest China. In Northwest China, the less‐developed area, the epidemiology of HPV is considered to be different from that of the developed areas. The cost‐effectiveness in the prevention of cervical cancer is of much more attention in Northwest China.\n22\n The study for prevalence and genotype distribution of different HPV genotypes is urgent for controlling the economic burden of cervical cancer on public health in Northwest China.\nTherefore, we aimed to provide large‐scale epidemiologic data on genotype distribution and prevalence of HPV among women in Northwest China. In total, all samples of this study were collected from women volunteers who had never been vaccinated with HPV vaccines. 
Hence, the prevalence and distribution of HPV genotypes in Northwest China had been elucidated for the first time, and the age‐related differences were uncovered.", "Subjects Women volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women who had an intact cervix; (2) women who did not receive cauterization or surgery; (3) women without pregnancy. Exclusion criteria: (1) women who had previous diagnosis or treatment for the cervical or vaginal disease; (2) women who had been vaccinated with any HPV vaccines before. Finally, there were 145,918 women for HPV genotypes DNA screening test. All subjects freely signed informed consent. We have obtained approval from the Ethical Research Committee of Xijing Hospital.\nWomen volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women who had an intact cervix; (2) women who did not receive cauterization or surgery; (3) women without pregnancy. Exclusion criteria: (1) women who had previous diagnosis or treatment for the cervical or vaginal disease; (2) women who had been vaccinated with any HPV vaccines before. Finally, there were 145,918 women for HPV genotypes DNA screening test. All subjects freely signed informed consent. We have obtained approval from the Ethical Research Committee of Xijing Hospital.\n\nHPV detection and genotyping Exfoliated endocervical cells were obtained by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were all conducted with High Throughput HPV genotyping Kits (Tellgen Corporation) by flow fluorescent technology according to the manufacturer's instructions. There were 27 distinguished HPV genotypes in one test. 
After experiments, 17 HR‐HPV genotypes included HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82.\n23\n The other 10 genotypes were defined as LR‐HPV genotypes, including HPV 6, HPV 11, HPV 40, HPV 42, HPV 43, HPV 44, HPV 55, HPV 61, HPV81, and HPV 83.\nExfoliated endocervical cells were obtained by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were all conducted with High Throughput HPV genotyping Kits (Tellgen Corporation) by flow fluorescent technology according to the manufacturer's instructions. There were 27 distinguished HPV genotypes in one test. After experiments, 17 HR‐HPV genotypes included HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82.\n23\n The other 10 genotypes were defined as LR‐HPV genotypes, including HPV 6, HPV 11, HPV 40, HPV 42, HPV 43, HPV 44, HPV 55, HPV 61, HPV81, and HPV 83.\nStatistical analysis SAS statistical software, version 9.4 (SAS Institute Inc.) was carried out for statistical analyses. p < 0.05 was considered as statistical significance. The prevalence of HPV infections among groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70 and >70 years old) and overall was calculated and compared using Chi‐square test or Fisher's exact test. Then, the prevalence of 27 HPV genotypes was calculated in the whole population and each age group. The prevalence of the HR‐HPV and LR‐HPV genotypes in each age group were combined. Finally, the prevalence of single, double (infected with two HPV genotypes), triple (infected with three HPV genotypes), and multiple HPV infections (infected with four or more HPV genotypes) were calculated in population and each age group, respectively.\nSAS statistical software, version 9.4 (SAS Institute Inc.) was carried out for statistical analyses. p < 0.05 was considered as statistical significance. 
The prevalence of HPV infections among groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70 and >70 years old) and overall was calculated and compared using Chi‐square test or Fisher's exact test. Then, the prevalence of 27 HPV genotypes was calculated in the whole population and each age group. The prevalence of the HR‐HPV and LR‐HPV genotypes in each age group were combined. Finally, the prevalence of single, double (infected with two HPV genotypes), triple (infected with three HPV genotypes), and multiple HPV infections (infected with four or more HPV genotypes) were calculated in population and each age group, respectively.", "Women volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women who had an intact cervix; (2) women who did not receive cauterization or surgery; (3) women without pregnancy. Exclusion criteria: (1) women who had previous diagnosis or treatment for the cervical or vaginal disease; (2) women who had been vaccinated with any HPV vaccines before. Finally, there were 145,918 women for HPV genotypes DNA screening test. All subjects freely signed informed consent. We have obtained approval from the Ethical Research Committee of Xijing Hospital.", "Exfoliated endocervical cells were obtained by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were all conducted with High Throughput HPV genotyping Kits (Tellgen Corporation) by flow fluorescent technology according to the manufacturer's instructions. There were 27 distinguished HPV genotypes in one test. 
After experiments, 17 HR‐HPV genotypes included HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82.\n23\n The other 10 genotypes were defined as LR‐HPV genotypes, including HPV 6, HPV 11, HPV 40, HPV 42, HPV 43, HPV 44, HPV 55, HPV 61, HPV81, and HPV 83.", "SAS statistical software, version 9.4 (SAS Institute Inc.) was carried out for statistical analyses. p < 0.05 was considered as statistical significance. The prevalence of HPV infections among groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70 and >70 years old) and overall was calculated and compared using Chi‐square test or Fisher's exact test. Then, the prevalence of 27 HPV genotypes was calculated in the whole population and each age group. The prevalence of the HR‐HPV and LR‐HPV genotypes in each age group were combined. Finally, the prevalence of single, double (infected with two HPV genotypes), triple (infected with three HPV genotypes), and multiple HPV infections (infected with four or more HPV genotypes) were calculated in population and each age group, respectively.", "There were 145,918 women (15 to 82 years) included, and 33,522 were HPV positive (Table 1), with an infection rate of 22.97%. All subjects were divided into 12 groups according to age, and the infection rate showed a U‐shape among women younger than 20 to those in 61–65. And among subjects who were 66–70 and older than 70 years old, the infection rate showed a decreasing trend compared with those who were 56–60 and 61–65 years old, but it was higher than the younger (Figure 1). The lowest and highest infection rates were 19.30% and 35.66% in the ages of 26–30 and 61–65 groups, respectively. 
Among subjects who were between 21 and 45 years old, the infection rate fluctuated between 19.30% and 22.11%, remaining below the overall infection rate; the other groups were above it.\nBasic character of the study population\nAbbreviation: HPV, human papillomavirus.\nHPV infection rates and age‐specific prevalence", "The prevalence of all 27 genotypes in the population is detailed in Supplementary S1. HPV 16 was the most prevalent (5.18%, 7564/145,918), followed by HPV 58 (3.10%, 4521/145,918), HPV 52 (2.75%, 4013/145,918), HPV 53 (2.18%, 3181/145,918), and HPV 61 (1.74%, 2532/145,918). Taken together, they were the five most common HPV genotypes in the population, and among the five, only HPV 61 was an LR‐HPV genotype.\nAs shown in Figure 2, in women younger than 20, the five most common HR‐HPV genotypes were HPV 16 (6.70%, 45/672), HPV 58 (3.57%, 24/672), HPV 52 (3.13%, 21/672), HPV 18 (2.53%, 17/672), and HPV 56 (2.38%, 16/672). Moreover, HPV 11 (5.80%, 39/672) and HPV 6 (5.06%, 34/672) were the two most common LR‐HPV genotypes in this age group. Similarly, in women aged 21–25, HPV 6 and 11 were still the most common LR‐HPV genotypes, followed by HPV 61. However, in other age groups, HPV 61 was the most common LR‐HPV genotype. Among all HR‐HPV genotypes, HPV 16, 58, and 52 were the most prevalent in all age groups. In women older than 36, the five most common HR‐HPV genotypes showed strong consistency: HPV 16, 58, 52, 53, and 56. In addition, the three most common LR‐HPV genotypes were HPV 61, 55, and 81 in women older than 36.\nGenotype distribution of HPV in each age group. (A) HR genotypes. (B) LR genotypes. HPV, human papillomavirus; HR, high‐risk; LR, low‐risk\nThe prevalence of HR‐ and LR‐HPV genotypes in different age groups was investigated by combining the infections with each HPV genotype. 
Like the age‐specific infection rates, both HR‐ and LR‐HPV genotypes showed U‐shaped prevalence curves (Figure 3), with significant decreases in women older than 70. In all age groups, the prevalence of LR‐HPV was below the overall infection rate of 20.52%. However, HR‐HPV was more prevalent than the overall infection rate in most groups, except in women aged 26–30 and 41–45; the lowest prevalence of HR‐HPV genotypes was 19.61%, in women aged 26–30.\nAge‐specific prevalence of HR and LR‐HPV genotypes", "As some subjects were infected with more than one HPV genotype, the single and multiple infection types were analyzed. A total of 25,181 subjects had a single infection, accounting for 75.38% of all HPV‐positive cases, with an overall prevalence in the population of 17.26% (Table 1). Moreover, the prevalences of double, triple, and multiple infections in the population were 4.25%, 1.09%, and 0.38%, accounting for 18.22%, 4.75%, and 1.64% of all HPV‐positive cases, respectively.\nThe proportions of single, double, triple, and multiple infections in each age group were further analyzed (Figure 4). Multiple infection was the least common type in all age groups except women younger than 20, in whom its proportion, 8.94%, was the highest of all age groups. The proportion of single infection in this age group, 59.22%, was the lowest of all age groups. In the other age groups, the proportions of single infection were all above 70%, except in women aged 56–70. The proportions of double and triple infection were relatively stable except for two peaks, in women younger than 20 and those aged 56–70. 
The highest proportions of double and triple infections were 25.70% and 10.11%, appearing in the women who were younger than 20 and among 66–70, respectively.\nAge‐specific distribution of single and multiple infections", "Xiaohong Lin, Jia Li, Jianfang Zhang, and Hong Yang designed the research. Xiaohong Lin, Liu Chen, and Jianfang Zhang collected the data. Xiaohong Lin, Feng Yan, and Jia Li analyzed the data. Xiaohong Lin, Jia Li, and Jianfang Zhang wrote the manuscript. Hong Yang revised the manuscript into the published version. All authors have read and agreed to the published version of the manuscript." ]
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "SUBJECTS AND METHODS", "Subjects", "\nHPV detection and genotyping", "Statistical analysis", "RESULTS", "\nHPV infection rates and age‐specific prevalence", "Genotype distribution of HPV and age‐specific prevalence", "Distribution of single and multiple infections", "DISCUSSION", "CONCLUSION", "CONFLICT OF INTEREST", "AUTHOR CONTRIBUTIONS", "Supporting information" ]
[ "Human papillomavirus (HPV) is the leading cause of cervical cancer, which is the fourth most common female cancer worldwide.\n1\n HPV infection is the most common sexually transmitted infection, and approximately 70% of females having sex will be infected with HPV during the whole lifetime.\n2\n Although most HPV infections are asymptomatic, the persistent infection could induce cervical cancer.\n3\n, \n4\n Hitherto, there are more than 200 HPV genotypes, which are different in respect of the potential to cause premalignant lesions and cervical cancers. Among them, the 12 genotypes, including HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59, were classified as high‐risk (HR) genotypes, and other 12 genotypes, including HPV 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and CP6108, as low‐risk (LR) genotypes.\n5\n, \n6\n Specifying the prevalence of different HPV genotypes could predict the cancer risk in the population.\nIt seems to be a promising method to eliminate cervical cancer by preventing HPV infections among women. Since the development of the first HPV vaccine,\n7\n vaccination programs have been spread among women in some developed countries before they get exposed to HPV.\n8\n, \n9\n, \n10\n, \n11\n To date, three commercial HPV vaccines are available in China, including the bivalent vaccine (Cervarix) targeting HPV 16 and 18; the quadrivalent HPV vaccine (Gardasil) targeting HPV 6, 11, 16, and 18; and the 9‐valent HPV vaccine (Gardasil) targeting HPV 6, 11, 16, 18, 31, 33, 45, 52, and 58. Furthermore, more HPV vaccines developed by Chinese domestic enterprises are coming to the market.\n12\n With the wide variety of vaccines, it is difficult for the public to choose. 
The prevalence of HPV genotypes is dependent on the geographic region,\n13\n so that knowledge of the geographical prevalence of HPV genotypes would provide important information for vaccine selection.\nThe geographical prevalence of HPV genotypes had been widely investigated in previous studies, leading to different prevalence patterns in different areas. Globally, HPV 16 and 18 are most prevalent.\n14\n Additionally, the most common HPV genotypes among the Asian population with cervical cancer are HPV 16, 18, 45, 52, and 58.\n15\n However, according to the data from WHO, HPV 16, 18, 33, 52, and 58 are the five most common HPV genotypes in patients with cervical cancer in Eastern Asia.\n16\n Data from several provinces in China, such as Guangdong, Jiangsu, Sichuan, Yunnan, Hunan, and Shandong, suggest that the HPV genotypes with high prevalence in different provinces are different.\n17\n, \n18\n, \n19\n, \n20\n, \n21\n Up to now, studies on HPV genotypes are all conducted in Southwest, Central South, Southeast, or Eastern China, not in Northwest China. In Northwest China, the less‐developed area, the epidemiology of HPV is considered to be different from that of the developed areas. The cost‐effectiveness in the prevention of cervical cancer is of much more attention in Northwest China.\n22\n The study for prevalence and genotype distribution of different HPV genotypes is urgent for controlling the economic burden of cervical cancer on public health in Northwest China.\nTherefore, we aimed to provide large‐scale epidemiologic data on genotype distribution and prevalence of HPV among women in Northwest China. In total, all samples of this study were collected from women volunteers who had never been vaccinated with HPV vaccines. 
Hence, the prevalence and distribution of HPV genotypes in Northwest China had been elucidated for the first time, and the age‐related differences were uncovered.", "Subjects Women volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women who had an intact cervix; (2) women who did not receive cauterization or surgery; (3) women without pregnancy. Exclusion criteria: (1) women who had previous diagnosis or treatment for the cervical or vaginal disease; (2) women who had been vaccinated with any HPV vaccines before. Finally, there were 145,918 women for HPV genotypes DNA screening test. All subjects freely signed informed consent. We have obtained approval from the Ethical Research Committee of Xijing Hospital.\nWomen volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women who had an intact cervix; (2) women who did not receive cauterization or surgery; (3) women without pregnancy. Exclusion criteria: (1) women who had previous diagnosis or treatment for the cervical or vaginal disease; (2) women who had been vaccinated with any HPV vaccines before. Finally, there were 145,918 women for HPV genotypes DNA screening test. All subjects freely signed informed consent. We have obtained approval from the Ethical Research Committee of Xijing Hospital.\n\nHPV detection and genotyping Exfoliated endocervical cells were obtained by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were all conducted with High Throughput HPV genotyping Kits (Tellgen Corporation) by flow fluorescent technology according to the manufacturer's instructions. There were 27 distinguished HPV genotypes in one test. 
After experiments, 17 HR‐HPV genotypes included HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82.\n23\n The other 10 genotypes were defined as LR‐HPV genotypes, including HPV 6, HPV 11, HPV 40, HPV 42, HPV 43, HPV 44, HPV 55, HPV 61, HPV81, and HPV 83.\nExfoliated endocervical cells were obtained by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were all conducted with High Throughput HPV genotyping Kits (Tellgen Corporation) by flow fluorescent technology according to the manufacturer's instructions. There were 27 distinguished HPV genotypes in one test. After experiments, 17 HR‐HPV genotypes included HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82.\n23\n The other 10 genotypes were defined as LR‐HPV genotypes, including HPV 6, HPV 11, HPV 40, HPV 42, HPV 43, HPV 44, HPV 55, HPV 61, HPV81, and HPV 83.\nStatistical analysis SAS statistical software, version 9.4 (SAS Institute Inc.) was carried out for statistical analyses. p < 0.05 was considered as statistical significance. The prevalence of HPV infections among groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70 and >70 years old) and overall was calculated and compared using Chi‐square test or Fisher's exact test. Then, the prevalence of 27 HPV genotypes was calculated in the whole population and each age group. The prevalence of the HR‐HPV and LR‐HPV genotypes in each age group were combined. Finally, the prevalence of single, double (infected with two HPV genotypes), triple (infected with three HPV genotypes), and multiple HPV infections (infected with four or more HPV genotypes) were calculated in population and each age group, respectively.\nSAS statistical software, version 9.4 (SAS Institute Inc.) was carried out for statistical analyses. p < 0.05 was considered as statistical significance. 
The prevalence of HPV infections among groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70 and >70 years old) and overall was calculated and compared using Chi‐square test or Fisher's exact test. Then, the prevalence of 27 HPV genotypes was calculated in the whole population and each age group. The prevalence of the HR‐HPV and LR‐HPV genotypes in each age group were combined. Finally, the prevalence of single, double (infected with two HPV genotypes), triple (infected with three HPV genotypes), and multiple HPV infections (infected with four or more HPV genotypes) were calculated in population and each age group, respectively.", "Women volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women who had an intact cervix; (2) women who did not receive cauterization or surgery; (3) women without pregnancy. Exclusion criteria: (1) women who had previous diagnosis or treatment for the cervical or vaginal disease; (2) women who had been vaccinated with any HPV vaccines before. Finally, there were 145,918 women for HPV genotypes DNA screening test. All subjects freely signed informed consent. We have obtained approval from the Ethical Research Committee of Xijing Hospital.", "Exfoliated endocervical cells were obtained by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were all conducted with High Throughput HPV genotyping Kits (Tellgen Corporation) by flow fluorescent technology according to the manufacturer's instructions. There were 27 distinguished HPV genotypes in one test. 
After experiments, 17 HR‐HPV genotypes included HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82.\n23\n The other 10 genotypes were defined as LR‐HPV genotypes, including HPV 6, HPV 11, HPV 40, HPV 42, HPV 43, HPV 44, HPV 55, HPV 61, HPV81, and HPV 83.", "SAS statistical software, version 9.4 (SAS Institute Inc.) was carried out for statistical analyses. p < 0.05 was considered as statistical significance. The prevalence of HPV infections among groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70 and >70 years old) and overall was calculated and compared using Chi‐square test or Fisher's exact test. Then, the prevalence of 27 HPV genotypes was calculated in the whole population and each age group. The prevalence of the HR‐HPV and LR‐HPV genotypes in each age group were combined. Finally, the prevalence of single, double (infected with two HPV genotypes), triple (infected with three HPV genotypes), and multiple HPV infections (infected with four or more HPV genotypes) were calculated in population and each age group, respectively.", "\nHPV infection rates and age‐specific prevalence There were 145,918 women (15 to 82 years) included, and 33,522 were HPV positive (Table 1), with an infection rate of 22.97%. All subjects were divided into 12 groups according to age, and the infection rate showed a U‐shape among women younger than 20 to those in 61–65. And among subjects who were 66–70 and older than 70 years old, the infection rate showed a decreasing trend compared with those who were 56–60 and 61–65 years old, but it was higher than the younger (Figure 1). The lowest and highest infection rates were 19.30% and 35.66% in the ages of 26–30 and 61–65 groups, respectively. 
In subjects aged 21–45, the infection rate fluctuated between 19.30% and 22.11%, below the overall rate, whereas the remaining groups exceeded it.
Basic character of the study population
Abbreviation: HPV, human papillomavirus.
Genotype distribution of HPV and age‐specific prevalence
The prevalence of each of the 27 genotypes is detailed in Supplementary S1. HPV 16 was the most prevalent (5.18%, 7564/145,918), followed by HPV 58 (3.10%, 4521/145,918), HPV 52 (2.75%, 4013/145,918), HPV 53 (2.18%, 3181/145,918), and HPV 61 (1.74%, 2532/145,918); together these were the five most common genotypes in the population, of which only HPV 61 is low‐risk. As shown in Figure 2, in women younger than 20, the five most common HR‐HPV genotypes were HPV 16 (6.70%, 45/672), HPV 58 (3.57%, 24/672), HPV 52 (3.13%, 21/672), HPV 18 (2.53%, 17/672), and HPV 56 (2.38%, 16/672).
Moreover, HPV 11 (5.80%, 39/672) and HPV 6 (5.06%, 34/672) were the two most common LR‐HPV genotypes in this age group. Similarly, in women aged 21–25, HPV 6 and 11 remained the most common LR‐HPV genotypes, followed by HPV 61; in all other age groups, HPV 61 was the most common LR‐HPV genotype. Among HR‐HPV genotypes, HPV 16, 58, and 52 were the most prevalent in every age group. In women older than 36, the five most common HR‐HPV genotypes were highly consistent: HPV 16, 58, 52, 53, and 56. In addition, the three most common LR‐HPV genotypes in women older than 36 were HPV 61, 55, and 81.
Genotype distribution of HPV in each age group. (A) HR genotypes. (B) LR genotypes. HPV, human papillomavirus; HR, high‐risk; LR, low‐risk
The prevalence of HR‐ and LR‐HPV genotypes in each age group was obtained by combining the infections of the individual genotypes. As with the age‐specific infection rates, both HR‐ and LR‐HPV prevalence followed U‐shaped curves (Figure 3), decreasing significantly in women older than 70. In all age groups, the prevalence of LR‐HPV was below the overall infection rate of 20.52%, whereas HR‐HPV prevalence exceeded the overall infection rate in most groups, except women aged 26–30 and 41–45; the lowest HR‐HPV prevalence, 19.61%, occurred in women aged 26–30.
Age‐specific prevalence of HR and LR‐HPV genotypes
Distribution of single and multiple infections
As some subjects were infected with more than one HPV genotype, single and multiple infection patterns were analyzed.
A total of 25,181 subjects had a single‐genotype infection, accounting for 75.38% of all HPV‐positive cases and an overall prevalence of 17.26% in the population (Table 1). The prevalences of double, triple, and multiple infections in the population were 4.25%, 1.09%, and 0.38%, accounting for 18.22%, 4.75%, and 1.64% of all HPV‐positive cases, respectively.
The proportions of single, double, triple, and multiple infections in each age group were further analyzed (Figure 4). Multiple infection was the least common category in every age group except women younger than 20; in that group its proportion, 8.94%, was the highest of any age group, while the proportion of single infection, 59.22%, was the lowest. In the other age groups, the proportion of single infection exceeded 70%, except among women aged 56–70. The proportions of double and triple infections were relatively stable apart from two peaks, in women younger than 20 and in those aged 56–70; the highest proportions of double and triple infections were 25.70% and 10.11%, in women younger than 20 and those aged 66–70, respectively.
Age‐specific distribution of single and multiple infections
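The single/double/triple/multiple categorization described above can be sketched in Python; the per‐subject genotype lists here are hypothetical, not study data:

```python
# Group HPV-positive subjects by the number of detected genotypes:
# single (1), double (2), triple (3), multiple (>=4). Illustrative sketch.
from collections import Counter

def infection_category(n_genotypes):
    if n_genotypes >= 4:
        return "multiple"
    return {1: "single", 2: "double", 3: "triple"}[n_genotypes]

# hypothetical genotype calls for five HPV-positive subjects
subjects = [[16], [52, 58], [16, 18, 53], [6, 11, 61, 81], [61]]
counts = Counter(infection_category(len(g)) for g in subjects)
proportions = {cat: n / len(subjects) for cat, n in counts.items()}
print(counts)
```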
", "The HPV distribution and prevalence differ among populations and regions.24, 25 As one of the fundamental steps toward controlling cervical cancer, reliable population‐based data on the prevalence and genotype distribution of HPV are needed for specific areas.
Although several studies have examined the prevalence and genotype distribution of HPV in different regions of China, there has been no report from Northwest China.22, 26, 27, 28, 29, 30 The lack of such data is not only a burden for public health but also an obstacle to introducing efficient HPV vaccines, as more and more vaccines become available.12 In this study, the prevalence of HPV genotypes was determined in a large population of 145,918 women in Northwest China, and age‐related differences in prevalence and genotype distribution were characterized. Because no participant had ever received an HPV vaccine, this study provides the first baseline prevalence of HPV genotypes in this area. Our results offer a basis for controlling cervical cancer and a foundation for evaluating the effects of HPV vaccines in the future.
The overall HPV infection rate was 22.97% among the 145,918 women from Northwest China, higher than in developed regions of China such as Tianjin (14.7%), Beijing (9.9%), and Zhejiang (13.3%).26, 27, 31 Persistent HPV infection is the main cause of cervical cancer, so a higher HPV infection rate implies a higher incidence of cervical cancer. Considering that most cervical cancer deaths occur in low‐ and middle‐income areas,32 cervical cancer remains an important threat to public health in Northwest China, the major less‐developed area of the country.
Across the age groups, the infection rate followed a bimodal curve. The trough of the age‐related prevalence curve occurred in women aged 21 to 45 years, whose rates were below the overall infection rate, while in women older than 51 the infection rate was higher than in the younger groups.
Although the two peaks may appear in different age groups, a U‐shaped curve of age‐related HPV prevalence has been observed in many other studies.33, 34, 35 It may result from the spontaneous regression of HPV infections in women aged between 21 and 45 years.36 We are the first to report a decrease in infection rate in women older than 66, following the consistent increase between ages 46 and 65.
As different HPV genotypes have different carcinogenic potentials, specifying the prevalence of each genotype is also important for strategies to prevent and manage cervical cancer.37 Although the prevalent HPV genotypes vary by region worldwide, HPV 16 and 18 are the genotypes most commonly associated with cancer.15 Consistent with this, HPV 16 was also the most common genotype in Northwest China.5 HPV 18, however, was not prevalent in Northwest China, which could reflect different sampling standards, as this study is population‐based; a previous study likewise found that HPV 18 is not a major prevalent genotype in China.38 In contrast, the prevalences of HPV 58 and 52 were second only to HPV 16 in this study, supporting the view that HPV 16, 58, and 52 are the three most common HPV genotypes in China39 and suggesting that HPV 58 and 52 deserve more attention in Asia.40, 41 The situation in Northwest China is also distinctive, with a high prevalence of HPV 53 and HPV 61 after the three most common genotypes.17, 28, 29, 30 Although two LR‐HPV genotypes, HPV 6 and 11, are covered by some vaccines, HPV 61 was the predominant LR‐HPV genotype in Northwest China in this study. These results provide a basis for developing next‐generation HPV vaccines.
With the large population included, the relationship between the distribution of HPV genotypes and age was also clarified.
Although the five most prevalent HR‐HPV genotypes (HPV 16, 58, 52, 53, and 56) and the three most prevalent LR‐HPV genotypes (HPV 61, 55, and 81) were consistent across women older than 36, the prevalence of some other genotypes, such as HPV 39, 51, 18, 6, and 11, was as high as or higher than these in the younger age groups. Considering that HPV infection in young women is often transient and resolves spontaneously, the genotypes prevalent in women older than 36 (HPV 16, 58, 52, 53, 56, 61, 55, and 81) are more likely to represent persistent infections. This result also offers guidance for choosing suitable vaccines for women of different ages. Although HPV 18 is not as prevalent in Chinese women as reported in women from other countries,42 its carcinogenicity in Chinese women still needs further study. Moreover, a high risk of cervical carcinoma would be expected in women from Northwest China, given the higher infection rate of HR‐HPV than LR‐HPV both overall and in each age group, as shown by this study. The single and multiple infection patterns also showed age‐related differences. Single‐genotype infection was the most common pattern among HPV‐positive cases, consistent with previous studies.23, 43 The peak of single infection occurred in women aged 21 to 55, which coincided with the trough of the overall infection rate; these women also showed the lowest proportions of double, triple, and multiple infections. Beyond potentially competitive and/or cooperative interactions among genotypes in HPV coinfections, this could reflect stronger immunity in these women. The mechanisms behind the age‐related infection patterns require further study.
However, there are still some limitations.
Previous studies have shown differences in the prevalence and distribution of HPV genotypes between population‐based surveys and cervical carcinoma case investigations in the same area.18, 30 Therefore, the correlation between HPV genotypes and cervical cytology or histology results in Northwest China needs further exploration.", "In conclusion, the prevalence and distribution of HPV genotypes were investigated in Northwest China for the first time. Age is an influencing factor in the epidemiology of HPV genotypes. Our results provide a basis for future medical intervention and important information for the development of next‐generation HPV vaccines.", "All the authors declare no conflict of interest.", "Xiaohong Lin, Jia Li, Jianfang Zhang, and Hong Yang designed the research. Xiaohong Lin, Liu Chen, and Jianfang Zhang collected the data. Xiaohong Lin, Feng Yan, and Jia Li analyzed the data. Xiaohong Lin, Jia Li, and Jianfang Zhang wrote the manuscript. Hong Yang revised the manuscript into the published version. All authors have read and agreed to the published version of the manuscript.", "
Data S1

Click here for additional data file." ]
[ "age‐specific prevalence", "genotype distribution", "human papillomavirus", "Northwest China", "women" ]
INTRODUCTION: Human papillomavirus (HPV) is the leading cause of cervical cancer, the fourth most common female cancer worldwide.1 HPV infection is the most common sexually transmitted infection, and approximately 70% of sexually active women will be infected with HPV during their lifetime.2 Although most HPV infections are asymptomatic, persistent infection can induce cervical cancer.3, 4 To date, more than 200 HPV genotypes have been identified, differing in their potential to cause premalignant lesions and cervical cancer. Among them, 12 genotypes (HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59) are classified as high‐risk (HR), and another 12 (HPV 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and CP6108) as low‐risk (LR).5, 6 Specifying the prevalence of the different HPV genotypes can predict the cancer risk in a population, and preventing HPV infection among women is a promising route to eliminating cervical cancer. Since the development of the first HPV vaccine,7 vaccination programs have spread among women in some developed countries before they are exposed to HPV.8, 9, 10, 11 To date, three commercial HPV vaccines are available in China: the bivalent vaccine (Cervarix) targeting HPV 16 and 18; the quadrivalent vaccine (Gardasil) targeting HPV 6, 11, 16, and 18; and the 9‐valent vaccine (Gardasil) targeting HPV 6, 11, 16, 18, 31, 33, 45, 52, and 58. Furthermore, more HPV vaccines developed by Chinese domestic enterprises are coming to market.12 With this wide variety of vaccines, choosing among them is difficult for the public. Because the prevalence of HPV genotypes depends on geographic region,13 knowledge of the local prevalence of HPV genotypes provides important information for vaccine selection. The geographical prevalence of HPV genotypes has been widely investigated in previous studies, revealing different prevalence patterns in different areas. Globally, HPV 16 and 18 are the most prevalent.14 The most common HPV genotypes among Asian women with cervical cancer are HPV 16, 18, 45, 52, and 58.15 However, according to WHO data, HPV 16, 18, 33, 52, and 58 are the five most common genotypes in patients with cervical cancer in Eastern Asia.16 Data from several Chinese provinces, such as Guangdong, Jiangsu, Sichuan, Yunnan, Hunan, and Shandong, suggest that the highly prevalent genotypes differ by province.17, 18, 19, 20, 21 To date, studies on HPV genotypes have been conducted in Southwest, Central South, Southeast, and Eastern China, but not in Northwest China. In Northwest China, a less‐developed area, the epidemiology of HPV is considered to differ from that of the developed areas, and cost‐effectiveness in cervical cancer prevention receives particular attention there.22 Data on the prevalence and genotype distribution of HPV are therefore urgently needed to control the economic burden of cervical cancer on public health in Northwest China. Accordingly, we aimed to provide large‐scale epidemiologic data on the genotype distribution and prevalence of HPV among women in Northwest China. All samples were collected from women volunteers who had never received an HPV vaccine. Hence, the prevalence and distribution of HPV genotypes in Northwest China are elucidated for the first time, and age‐related differences are uncovered. SUBJECTS AND METHODS: Subjects Women volunteers who attended Xijing Hospital from June 2015 to December 2020 were enrolled. Inclusion criteria: (1) women with an intact cervix; (2) women who had not undergone cauterization or surgery; (3) women who were not pregnant.
Exclusion criteria: (1) women with a previous diagnosis of, or treatment for, cervical or vaginal disease; (2) women previously vaccinated with any HPV vaccine. In total, 145,918 women underwent the HPV DNA genotyping test. All subjects freely signed informed consent, and the study was approved by the Ethical Research Committee of Xijing Hospital. HPV detection and genotyping: Exfoliated endocervical cells were collected by pathologists with a cervical sampling brush (Jiangsu Jiangyou Medical Science and Technology Corporation). DNA extraction, PCR amplification, hybridization, and HPV genotyping were performed with High Throughput HPV Genotyping Kits (Tellgen Corporation) using flow fluorescence technology according to the manufacturer's instructions. Each test distinguished 27 HPV genotypes: 17 HR‐HPV genotypes (HPV 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, and 82)23 and 10 LR‐HPV genotypes (HPV 6, 11, 40, 42, 43, 44, 55, 61, 81, and 83). Statistical analysis: Statistical analyses were performed with SAS software, version 9.4 (SAS Institute Inc.), and p < 0.05 was considered statistically significant. The prevalence of HPV infection was calculated overall and compared among age groups (≤20, 21–25, 26–30, 31–35, 36–40, 41–45, 46–50, 51–55, 56–60, 61–65, 66–70, and >70 years) using the Chi‐square test or Fisher's exact test. The prevalence of each of the 27 HPV genotypes was then calculated in the whole population and in each age group, and the combined prevalence of the HR‐HPV and LR‐HPV genotypes was derived for each age group.
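The combination of individual genotype calls into HR‐ and LR‐HPV prevalence described above could be sketched as follows (illustrative Python, not the authors' pipeline; the example subject is hypothetical):

```python
# HR/LR genotype sets as defined in the methods; a subject counts toward
# HR-HPV prevalence if any detected genotype is high-risk (likewise for LR).
HR_GENOTYPES = {16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, 82}
LR_GENOTYPES = {6, 11, 40, 42, 43, 44, 55, 61, 81, 83}

def classify(detected):
    """Return (has_hr, has_lr) for one subject's detected genotypes."""
    detected = set(detected)
    return bool(detected & HR_GENOTYPES), bool(detected & LR_GENOTYPES)

# a subject positive for HPV 16 and HPV 61 counts toward both HR and LR
print(classify([16, 61]))  # (True, True)
```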
Finally, the prevalence of single (one genotype), double (two genotypes), triple (three genotypes), and multiple (four or more genotypes) HPV infections was calculated in the whole population and in each age group.
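The Chi‐square comparison of infection rates across age groups could be sketched in plain Python (the study used SAS 9.4; the counts below are hypothetical, and a real analysis would compare the statistic to the chi‐square distribution with k − 1 degrees of freedom, or use Fisher's exact test for sparse cells):

```python
# Pearson chi-square statistic for a 2 x k table of HPV-positive/negative
# counts per age group; illustrative sketch with hypothetical counts.
def chi_square_statistic(positives, totals):
    negatives = [t - p for p, t in zip(positives, totals)]
    grand = sum(totals)
    pos_total, neg_total = sum(positives), sum(negatives)
    chi2 = 0.0
    for pos, neg, col_total in zip(positives, negatives, totals):
        for observed, row_total in ((pos, pos_total), (neg, neg_total)):
            expected = row_total * col_total / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

positives = [231, 1200, 1900]   # hypothetical HPV-positive counts per band
totals = [672, 6000, 9800]      # hypothetical band sizes
prevalence = [p / t for p, t in zip(positives, totals)]
chi2 = chi_square_statistic(positives, totals)
```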
RESULTS: HPV infection rates and age‐specific prevalence In total, 145,918 women (15 to 82 years) were included, of whom 33,522 were HPV positive (Table 1), an infection rate of 22.97%. Subjects were divided into 12 age groups, and the infection rate followed a U‐shape from women younger than 20 to those aged 61–65. Among subjects aged 66–70 and older than 70, the infection rate decreased compared with those aged 56–60 and 61–65 but remained higher than in the younger groups (Figure 1). The lowest and highest infection rates were 19.30% and 35.66%, in the 26–30 and 61–65 age groups, respectively. In subjects aged 21–45, the infection rate fluctuated between 19.30% and 22.11%, below the overall rate, whereas the remaining groups exceeded it. Basic character of the study population Abbreviation: HPV, human papillomavirus. Genotype distribution of HPV and age‐specific prevalence The prevalence of each of the 27 genotypes is detailed in Supplementary S1. HPV 16 was the most prevalent (5.18%, 7564/145,918), followed by HPV 58 (3.10%, 4521/145,918), HPV 52 (2.75%, 4013/145,918), HPV 53 (2.18%, 3181/145,918), and HPV 61 (1.74%, 2532/145,918); together these were the five most common genotypes in the population, of which only HPV 61 is low‐risk. As shown in Figure 2, in women younger than 20, the five most common HR‐HPV genotypes were HPV 16 (6.70%, 45/672), HPV 58 (3.57%, 24/672), HPV 52 (3.13%, 21/672), HPV 18 (2.53%, 17/672), and HPV 56 (2.38%, 16/672). Moreover, HPV 11 (5.80%, 39/672) and HPV 6 (5.06%, 34/672) were the two most common LR‐HPV genotypes in this age group. Similarly, in women aged 21–25, HPV 6 and 11 remained the most common LR‐HPV genotypes, followed by HPV 61; in all other age groups, HPV 61 was the most common LR‐HPV genotype. Among HR‐HPV genotypes, HPV 16, 58, and 52 were the most prevalent in every age group. In women older than 36, the five most common HR‐HPV genotypes were highly consistent: HPV 16, 58, 52, 53, and 56.
Genotype distribution of HPV in each age group. (A) HR genotypes. (B) LR genotypes. HPV, human papillomavirus; HR, high-risk; LR, low-risk.
The prevalence of HR- and LR-HPV genotypes in different age groups was determined by combining the infections with the individual HPV genotypes. As with the age-specific infection rates, both HR- and LR-HPV genotypes showed U-shaped curves (Figure 3), which decreased significantly in women older than 70. In all age groups, the prevalence of LR-HPV was below the overall infection rate of 20.52%, whereas the prevalence of HR-HPV exceeded the overall infection rate in most age groups, the exceptions being women aged 26–30 and 41–45. The lowest prevalence of HR-HPV genotypes, 19.61%, occurred in women aged 26–30.
Age-specific prevalence of HR and LR-HPV genotypes
Distribution of single and multiple infections
Because some subjects were infected with more than one HPV genotype, single and multiple infection patterns were analyzed. A total of 25,181 subjects had a single infection, accounting for 75.38% of all HPV-positive cases, with an overall population prevalence of 17.26% (Table 1). The prevalences of double, triple, and multiple infections in the population were 4.25%, 1.09%, and 0.38%, accounting for 18.22%, 4.75%, and 1.64% of all HPV-positive cases, respectively. The proportions of single, double, triple, and multiple infections were further analyzed in each age group (Figure 4). Multiple infection was the least common pattern in every age group except women younger than 20, in whom its proportion (8.94%) was the highest of any age group; conversely, the proportion of single infection in this age group (59.22%) was the lowest of any age group. In the other age groups, the proportion of single infection was above 70%, except in women aged 56–70. The proportions of double and triple infection were relatively stable apart from two peaks, in women younger than 20 and in those aged 56–70; the highest proportions of double (25.70%) and triple (10.11%) infections appeared in women younger than 20 and those aged 66–70, respectively.
Age-specific distribution of single and multiple infections
DISCUSSION: The distribution and prevalence of HPV differ among populations and regions. 24 , 25 As one of the fundamental steps toward controlling cervical cancer, reliable population-based data on the prevalence and genotype distribution of HPV are needed for specific areas. Although several studies have examined the prevalence and genotype distribution of HPV in different regions of China, no report has covered Northwest China. 22 , 26 , 27 , 28 , 29 , 30 The lack of such data is not only a burden for public health but also an obstacle to introducing efficient HPV vaccines, as more and more vaccines become available. 12 In this study, the prevalence of HPV genotypes was characterized in a large population of 145,918 women in Northwest China, and age-related differences in prevalence and genotype distribution were identified. Because no participant had ever received an HPV vaccine, the baseline prevalence of HPV genotypes in this area was obtained for the first time. Our results provide a basis both for controlling cervical cancer and for evaluating the effects of HPV vaccines in the future.
In this study, the overall HPV infection rate in 145,918 women from Northwest China was 22.97%, higher than in developed regions of China such as Tianjin (14.7%), Beijing (9.9%), and Zhejiang (13.3%). 26 , 27 , 31 Persistent HPV infection is the main cause of cervical cancer, so a higher HPV infection rate implies a higher incidence of cervical cancer. Considering that most deaths from cervical cancer occur in low- and middle-income areas, 32 cervical cancer remains an important threat to public health in Northwest China, the major less-developed area of the country. Across the age groups, the infection rate followed a bimodal curve. The trough of the age-related HPV prevalence curve appeared in women aged between 21 and 45 years, whose infection rate was below the overall rate, whereas in women older than 51 the infection rate was higher than in the younger groups. Although the two peaks may appear at different ages in different settings, the U-shaped curve of age-related HPV prevalence has been observed in many other studies 33 , 34 , 35 and could result from the spontaneous regression of HPV infections in women aged between 21 and 45 years. 36 We report for the first time a decrease in the infection rate in women older than 66, following the consistent increase between ages 46 and 65. As different HPV genotypes have different carcinogenic potentials, specifying the prevalence of individual genotypes is also important for strategies to prevent and manage cervical cancer. 37 Although the prevalent HPV genotypes vary by region worldwide, HPV 16 and 18 are the genotypes most strongly associated with cancer. 15 Consistent with this, HPV 16 was the most common HPV genotype in Northwest China. 5 HPV 18, however, was not prevalent in Northwest China, which could reflect differences in sampling standards, as this study was population based.
Indeed, a previous study found that HPV 18 is not a major prevalent genotype in China. 38 By contrast, in this study the prevalences of HPV 58 and 52 in women from Northwest China were second only to that of HPV 16, supporting the view that HPV 16, 58, and 52 are the three most common HPV genotypes in China 39 and suggesting that HPV 58 and 52 deserve more attention in Asia. 40 , 41 The situation in Northwest China is also distinctive, with a high prevalence of HPV 53 and HPV 61 after the three most common genotypes mentioned above. 17 , 28 , 29 , 30 Although two LR-HPV genotypes, HPV 6 and 11, are covered by some vaccines, HPV 61 was the predominant LR-HPV genotype in Northwest China in this study. These results provide a basis for developing next-generation HPV vaccines. With the large population included, the relationship between the distribution of HPV genotypes and age was also clarified. Although the five most prevalent HR-HPV genotypes (HPV 16, 58, 52, 53, and 56) and the three most prevalent LR-HPV genotypes (HPV 61, 55, and 81) were consistent in women older than 36, the prevalences of some other genotypes, such as HPV 39, 51, 18, 6, and 11, were as high as or even higher than these in the younger age groups. Considering that HPV infection in young women is often transient and resolves spontaneously, the genotypes prevalent in women older than 36 (HPV 16, 58, 52, 53, 56, 61, 55, and 81) appear more likely to establish persistent infections. This result also offers guidance for choosing appropriate vaccines for women of different ages. Although HPV 18 is not as prevalent in Chinese women as reported in women from other countries, 42 its carcinogenicity in Chinese women still needs further study. Moreover, a high risk of cervical carcinoma would be expected in women from Northwest China, given the higher infection rate of HR-HPV than LR-HPV both overall and in each age group, as shown in this study.
In addition, the single and multiple infection patterns also showed age-related differences in this study. The results indicate that single-genotype infection is the most common pattern among HPV-positive cases, consistent with previous studies. 23 , 43 The peak of single infection appeared in women aged between 21 and 55, which coincided with the trough of the infection rate; these women also showed the lowest proportions of double, triple, and multiple infections. Beyond the potentially competitive and/or cooperative interactions among genotypes in HPV coinfections, this pattern could also reflect stronger immunity in these women. The mechanisms behind the age-related infection patterns still require further study. This study also has limitations. Previous studies have shown differences in the prevalence and distribution of HPV genotypes between population-based surveys and cervical carcinoma case investigations in the same area. 18 , 30 Therefore, the correlation between HPV genotypes and cervical cytology or histology results in Northwest China needs to be explored further. CONCLUSION: In conclusion, the prevalence and distribution of HPV genotypes were investigated in Northwest China for the first time. Age is an influencing factor in the epidemiology of HPV genotypes. Our results provide a basis for future medical intervention and important information for the development of next-generation HPV vaccines. CONFLICT OF INTEREST: All the authors declare no conflict of interest. AUTHOR CONTRIBUTIONS: Xiaohong Lin, Jia Li, Jianfang Zhang, and Hong Yang designed the research. Xiaohong Lin, Liu Chen, and Jianfang Zhang collected the data. Xiaohong Lin, Feng Yan, and Jia Li analyzed the data. Xiaohong Lin, Jia Li, and Jianfang Zhang wrote the manuscript. Hong Yang revised the manuscript into the published version. All authors have read and agreed to the published version of the manuscript.
Supporting information: Data S1 Click here for additional data file.
Background: Human papillomavirus (HPV) is the leading cause of cervical cancer, with more than 200 genotypes. Different genotypes differ in their potential to cause premalignant lesions and cervical cancers. In this study, we investigated the age-specific prevalence and genotype distribution of HPV in Northwest China. Methods: We recruited 145,918 unvaccinated women from Northwest China for a population-based HPV DNA screening test from June 2015 to December 2020. A laboratory test using flow fluorescence technology was performed for each volunteer to identify HPV genotypes. Results: The overall HPV infection rate was 22.97%. With the participants divided into 12 age groups, a bimodal curve of infection rate was obtained, with peaks in the younger-than-20 and 61-65 groups. The five most common HPV genotypes in all participants were, in descending order of frequency, HPV 16, 58, 52, 53, and 61. Among women younger than 25 years, HPV 6 and 11 were more common, exceeding some of the genotypes above. Among women older than 65 years, HPV 18 and 66 were as common as or more common than the six most common genotypes in the whole population. Additionally, the distribution of single and multiple infections differed across age groups. Conclusions: The baseline prevalence and genotype distribution of HPV in Northwest China were uncovered for the first time. Age was related to the epidemiology of different HPV genotypes. These results will be of great significance for future healthcare services.
INTRODUCTION: Human papillomavirus (HPV) is the leading cause of cervical cancer, the fourth most common cancer among women worldwide. 1 HPV infection is the most common sexually transmitted infection, and approximately 70% of sexually active women will be infected with HPV during their lifetime. 2 Although most HPV infections are asymptomatic, persistent infection can induce cervical cancer. 3 , 4 To date, more than 200 HPV genotypes have been identified, which differ in their potential to cause premalignant lesions and cervical cancers. Among them, 12 genotypes (HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59) are classified as high-risk (HR) genotypes, and another 12 (HPV 6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and CP6108) as low-risk (LR) genotypes. 5 , 6 Specifying the prevalence of different HPV genotypes can predict the cancer risk in a population, and preventing HPV infection among women appears to be a promising route to eliminating cervical cancer. Since the development of the first HPV vaccine, 7 vaccination programs have been rolled out among women in some developed countries before they are exposed to HPV. 8 , 9 , 10 , 11 To date, three commercial HPV vaccines are available in China: the bivalent vaccine (Cervarix) targeting HPV 16 and 18; the quadrivalent vaccine (Gardasil) targeting HPV 6, 11, 16, and 18; and the 9-valent vaccine (Gardasil 9) targeting HPV 6, 11, 16, 18, 31, 33, 45, 52, and 58. Furthermore, more HPV vaccines developed by Chinese domestic enterprises are coming to market. 12 With this wide variety of vaccines, choosing among them is difficult for the public. Because the prevalence of HPV genotypes depends on geographic region, 13 knowledge of the local prevalence of HPV genotypes provides important information for vaccine selection.
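The HR/LR grouping listed above can be captured as a small lookup table. This is an illustrative sketch, not code from the study; the `classify` helper and its labels are our own.

```python
# Risk classification of HPV genotypes exactly as grouped above.
# "CP6108" is the low-risk type sometimes reported as HPV 89.
HIGH_RISK = {"16", "18", "31", "33", "35", "39", "45", "51", "52", "56", "58", "59"}
LOW_RISK = {"6", "11", "40", "42", "43", "44", "54", "61", "70", "72", "81", "CP6108"}

def classify(genotype: str) -> str:
    """Return 'HR', 'LR', or 'other' for a genotype label such as '16'."""
    if genotype in HIGH_RISK:
        return "HR"
    if genotype in LOW_RISK:
        return "LR"
    return "other"

print(classify("16"))  # HR
print(classify("61"))  # LR
```

Note that genotypes outside these two lists of 12 (e.g. HPV 53, treated as HR in the study's figures) fall into the "other" bucket under this intro's strict grouping.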
The geographical prevalence of HPV genotypes has been widely investigated, revealing different prevalence patterns in different areas. Globally, HPV 16 and 18 are the most prevalent. 14 The most common HPV genotypes among Asian women with cervical cancer are HPV 16, 18, 45, 52, and 58. 15 According to WHO data, however, HPV 16, 18, 33, 52, and 58 are the five most common genotypes in patients with cervical cancer in Eastern Asia. 16 Data from several Chinese provinces, such as Guangdong, Jiangsu, Sichuan, Yunnan, Hunan, and Shandong, suggest that the highly prevalent HPV genotypes differ by province. 17 , 18 , 19 , 20 , 21 To date, studies on HPV genotypes have all been conducted in Southwest, Central South, Southeast, or Eastern China, not in Northwest China. In Northwest China, a less-developed area, the epidemiology of HPV is considered to differ from that of the developed areas, and the cost-effectiveness of cervical cancer prevention receives much more attention there. 22 A study of the prevalence and genotype distribution of HPV is therefore urgently needed to control the economic burden of cervical cancer on public health in Northwest China. Accordingly, we aimed to provide large-scale epidemiologic data on the genotype distribution and prevalence of HPV among women in Northwest China. All samples were collected from women volunteers who had never received an HPV vaccine. The prevalence and distribution of HPV genotypes in Northwest China were thus elucidated for the first time, and age-related differences were uncovered.
6,368
302
[ 747, 874, 114, 154, 163, 191, 470, 280, 79 ]
14
[ "hpv", "genotypes", "hpv genotypes", "women", "age", "prevalence", "infection", "lr", "hr", "61" ]
[ "prevalence hpv genotypes", "cervical cancer preventing", "hpv genotypes important", "human papillomavirus hpv", "cervical cancer hpv" ]
[CONTENT] age‐specific prevalence | genotype distribution | human papillomavirus | Northwest China | women [SUMMARY]
[CONTENT] Female | Humans | Adult | Aged | Papillomaviridae | Alphapapillomavirus | Papillomavirus Infections | Prevalence | Uterine Cervical Neoplasms | Genotype | Age Factors | China | Uterine Cervical Dysplasia [SUMMARY]
[CONTENT] prevalence hpv genotypes | cervical cancer preventing | hpv genotypes important | human papillomavirus hpv | cervical cancer hpv [SUMMARY]
[CONTENT] hpv | genotypes | hpv genotypes | women | age | prevalence | infection | lr | hr | 61 [SUMMARY]
[CONTENT] hpv | cancer | genotypes | china | cervical cancer | different | hpv genotypes | cervical | vaccine | 16 18 [SUMMARY]
[CONTENT] hpv | infection | age | genotypes | women | 672 | lr | hr | groups | lr hpv [SUMMARY]
[CONTENT] hpv | factor epidemiology hpv | intervention provides important information | provides important | provides important information | provides important information development | investigated northwest | medical intervention provides important | medical intervention provides | medical intervention [SUMMARY]
[CONTENT] hpv | genotypes | women | hpv genotypes | infection | age | prevalence | data | china | rate [SUMMARY]
[CONTENT] HPV | more than 200 ||| ||| HPV | Northwest China [SUMMARY]
[CONTENT] HPV | 22.97% ||| 12 ||| two | 61 ||| five | HPV | HPV 16 | 58 | 52 | 53 | 61 ||| 25 years old | 6 | 11 ||| 65 years old | HPV 18 and | 66 | six ||| [SUMMARY]
[CONTENT] HPV | Northwest China | first ||| HPV ||| [SUMMARY]
[CONTENT] HPV | more than 200 ||| ||| HPV | Northwest China ||| 145,918 | Northwest China | HPV | June 2015 to December 2020 ||| HPV ||| ||| HPV | 22.97% ||| 12 ||| two | 61 ||| five | HPV | HPV 16 | 58 | 52 | 53 | 61 ||| 25 years old | 6 | 11 ||| 65 years old | HPV 18 and | 66 | six ||| ||| HPV | Northwest China | first ||| HPV ||| [SUMMARY]
Epidemiology of Pertussis Among Young Pakistani Infants: A Community-Based Prospective Surveillance Study.
27838667
 Pertussis remains a cause of morbidity and mortality among young infants. There are limited data on the pertussis disease burden in this age group from low- and lower-middle-income countries, including in South Asia.
BACKGROUND
 We conducted an active community-based surveillance study from February 2015 to April 2016 among 2 cohorts of young infants in 4 low-income settlements in Karachi, Pakistan. Infants were enrolled either at birth (closed cohort) or at ages up to 10 weeks (open cohort) and followed until 18 weeks of age. Nasopharyngeal swab specimens were obtained from infants who met a standardized syndromic case definition and tested for Bordetella pertussis using real-time polymerase chain reaction. We determined the incidence of pertussis using a protocol-defined case definition, as well as the US Centers for Disease Control and Prevention (CDC) definitions for confirmed and probable pertussis.
METHODS
 Of 2021 infants enrolled into the study, 8 infants met the protocol-defined pertussis case definition, for an incidence of 3.96 (95% confidence interval [CI], 1.84-7.50) cases per 1000 infants. Seven of the pertussis cases met the CDC pertussis case definition (5 confirmed, 2 probable), for incidences of CDC-defined confirmed pertussis of 2.47 (95% CI, .90-5.48) cases per 1000 infants, and probable pertussis of 0.99 (95% CI, .17-3.27) cases per 1000 infants. Three of the pertussis cases were severe according to the Modified Preziosi Scale score.
RESULTS
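The incidence figures above pair a point estimate per 1000 infants with a 95% CI. The paper does not state which interval method it used; a common choice is the Clopper-Pearson exact binomial interval, sketched below. Bounds computed this way may differ slightly from the published ones (e.g. 3.96 [1.84-7.50] for 8 cases among 2021 infants).

```python
# Exact (Clopper-Pearson) 95% CI for a proportion, expressed per 1000 infants.
# Illustrative only: the study's actual CI method is not specified in the text.
from scipy.stats import beta

def incidence_ci(cases, n, alpha=0.05, per=1000):
    point = per * cases / n
    lo = per * beta.ppf(alpha / 2, cases, n - cases + 1) if cases > 0 else 0.0
    hi = per * beta.ppf(1 - alpha / 2, cases + 1, n - cases)
    return point, lo, hi

# 8 protocol-defined pertussis cases among 2021 enrolled infants:
point, lo, hi = incidence_ci(8, 2021)
print(f"{point:.2f} per 1000 (95% CI {lo:.2f}-{hi:.2f})")
```

The same function applies to the CDC-defined confirmed (5/2021) and probable (2/2021) case counts.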
 In one of the first prospective surveillance studies of infant pertussis in a developing country, we identified a moderate burden of pertussis disease in early infancy in Pakistan.
CONCLUSIONS
[ "Bordetella pertussis", "Female", "Humans", "Incidence", "Infant", "Infant, Newborn", "Male", "Pakistan", "Population Surveillance", "Prospective Studies", "Seasons", "Severity of Illness Index", "Socioeconomic Factors", "Whooping Cough" ]
5106628
METHODS
The Prevention of Pertussis in Young Infants in Pakistan (PrePY) Baseline Surveillance study was conducted in 4 low-income settlements of Karachi (Rehri Goth, Ibrahim Hyderi, Bhains Colony, and Ali Akbar Shah) where the Department of Paediatrics and Child Health of The Aga Khan University (AKU) has been running primary healthcare centers (staffed with physicians, lady health visitors, and community health workers) for several years. AKU has an active population based Demographic Surveillance System (DSS) in the study areas with a total surveillance catchment population of approximately 220 000. Enrollment for this surveillance study in the 4 study sites in Karachi began on 21 February 2015 and the last follow-up occurred on 12 April 2016. Surveillance was conducted among both an open cohort, with infants enrolled at ages up to 10 weeks and followed through 18 weeks of age, and a smaller closed cohort, with pregnant women enrolled on or after 27 weeks’ gestation or mothers enrolled who gave birth within the prior 72 hours; infants born to these women were followed through 18 weeks of age. For both cohorts, infants were routinely evaluated for symptoms associated with a syndromic screening definition (described later), and for infants who met the syndromic screening definition, nasopharyngeal swabs and blood samples were collected for laboratory testing. Infant surveillance occurred through routine scheduled in-person visits, telephone follow-up, and additional unscheduled visits and calls, according to the following schedule (Figure 1): Infant follow-up home visits were made twice a week from birth through 2 weeks of age. From 3 to 7 weeks of age, follow-up home visits occurred once a week; from 8 through 18 weeks of age, follow-up home visits were conducted every 2 weeks. In addition to home visit follow-ups, phone calls were made twice weekly from delivery through 4 weeks of age. Weekly phone calls were made from 4 to 18 weeks of age. 
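The visit and call schedule above can be summarized as a pair of lookup functions. This is a simplified sketch (function names are ours) that ignores the unscheduled visits and calls mentioned in the text.

```python
# Scheduled contact frequency by infant age, per the surveillance protocol above.
def home_visits_per_week(age_weeks):
    if age_weeks <= 2:
        return 2      # twice weekly from birth through 2 weeks
    if age_weeks <= 7:
        return 1      # weekly from 3 to 7 weeks
    if age_weeks <= 18:
        return 0.5    # every 2 weeks from 8 through 18 weeks
    return 0          # outside the surveillance window

def phone_calls_per_week(age_weeks):
    if age_weeks <= 4:
        return 2      # twice weekly from delivery through 4 weeks
    if age_weeks <= 18:
        return 1      # weekly from 4 to 18 weeks
    return 0
```

Combined, an infant in the first 2 weeks of life was therefore contacted up to 4 times per week, dropping to roughly 1.5 contacts per week by the end of follow-up.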
Figure 1. Open cohort study schedule. a Schedule for surveillance visits is based on the infant's age at enrollment, not time since enrollment; b three milliliters of blood to be collected for infant specimens. Home visit key: X1 = twice weekly; X2 = weekly; X3 = fortnightly. Abbreviation: CBC, complete blood count.

Basic demographic data were obtained from mothers and infants at the time of enrollment, including age at enrollment, anthropometric measurements (infant length, weight, and head circumference), and maternal history of tetanus toxoid (TT) receipt. At all surveillance visits, infants were assessed against the standardized syndromic screening criteria, defined as an infant presenting with any of the following symptoms: cough (lasting at least 1 day), coryza, whoop, apnea, posttussive emesis, cyanosis, seizure, tachypnea (>50 breaths/minute for infants >2 months or >60 breaths/minute for infants <2 months), severe chest indrawing, movement only when stimulated (or an alternative definition of lethargy), poor feeding (confirmed by poor suck), close exposure to any family member with a prolonged afebrile cough illness, or axillary temperature ≥38°C. Our analysis was based on 2 outcome definitions: (1) the infant met the syndromic definition and had a positive polymerase chain reaction (PCR) test for Bordetella pertussis; and (2) the infant met the US Centers for Disease Control and Prevention (CDC) case definition of probable or confirmed pertussis (see Supplementary Data).
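The syndromic screening definition above is effectively an any-of rule with one age-dependent threshold (tachypnea). A minimal sketch, assuming illustrative symptom names rather than the study's actual data dictionary:

```python
# Illustrative symptom labels; the study's own coding scheme is not published here.
SCREENING_SYMPTOMS = {
    "cough", "coryza", "whoop", "apnea", "posttussive_emesis", "cyanosis",
    "seizure", "severe_chest_indrawing", "lethargy", "poor_feeding",
    "household_afebrile_cough_exposure",
}

def tachypnea(breaths_per_min, age_months):
    """Age-specific cutoff from the screening definition:
    >60 breaths/min under 2 months, >50 breaths/min at 2 months or older."""
    return breaths_per_min > (60 if age_months < 2 else 50)

def meets_screening(symptoms, breaths_per_min, age_months, temp_c):
    """True if the infant meets any syndromic screening criterion (a sketch)."""
    return (bool(SCREENING_SYMPTOMS & set(symptoms))
            or tachypnea(breaths_per_min, age_months)
            or temp_c >= 38.0)
```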
To ensure the most accurate clinical description of each case, we identified syndromic symptoms within the time frame from cough onset through diagnosis, together with any continued symptoms presenting without a break that would be considered part of the same illness episode. To identify new illness episodes as discrete, we required 7 days without symptoms. Because the CDC clinical case definition requires symptom assessment over time (a cough lasting ≥2 weeks), this approach is consistent with the CDC clinical criteria, and it enabled a longitudinal assessment that captured all symptoms within a clinical episode rather than a snapshot of symptoms at a single visit. Infants meeting the syndromic screening definition had nasopharyngeal swabs obtained by trained physicians using sterile, individually wrapped Copan FLOQ Minitip Nylon Flocked Dry Swabs. These swabs perform comparably to rayon swabs [3, 4], and their use is recommended by CDC for optimal specimen collection for PCR testing for B. pertussis. To minimize exposure, the physician obtaining the swab wore a surgical mask and clean gloves. Swabs were inserted nasally and advanced along the floor of the nose until they reached the nasopharynx, then held against the posterior nasopharynx for a few seconds. Swabs were collected and stored in labeled universal transport medium cryovials, transported to the AKU Infectious Disease Research Laboratory at 4°C, and stored at −70°C until total nucleic acid extraction for PCR. The PCR procedures followed the CDC protocols for B. pertussis PCR and were adopted in consultation with the CDC [5, 6]. Total nucleic acid was extracted from the frozen aliquots using the MagNA Pure Compact Nucleic Acid Isolation Kit I (Roche Life Science, Indianapolis, Indiana).
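The episode-separation rule (7 symptom-free days define a new illness episode) can be sketched as a grouping over symptomatic days. Here days are integers since some reference date, and a gap of more than 7 days between consecutive symptomatic days (i.e., at least 7 symptom-free days in between) starts a new episode; the function name and gap arithmetic are assumptions, not the study's code.

```python
def split_episodes(symptom_days, washout=7):
    """Group symptomatic days into discrete illness episodes.
    A new episode starts once at least `washout` consecutive
    symptom-free days separate two symptomatic days."""
    episodes = []
    for day in sorted(symptom_days):
        # day - last == washout + 1 means exactly `washout` symptom-free days
        if episodes and day - episodes[-1][-1] <= washout:
            episodes[-1].append(day)
        else:
            episodes.append([day])
    return episodes
```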
Leftover DNA samples following initial testing were archived at AKU's Infectious Disease Research Laboratory for at least 2 years at −80°C. DNA extracts were tested by PCR for evidence of B. pertussis infection using a real-time PCR detection system (Applied Biosystems 7500, Thermo Fisher Scientific, Waltham, Massachusetts). All assays were run with positive and negative controls using standardized preparations of B. pertussis DNA, as well as PCR for the RNase P enzyme, a quality control measure confirming that nasopharyngeal swabs successfully contacted human mucosa during sampling. In line with the CDC analysis criteria for monoplex real-time PCR, cycle threshold (Ct) values <35 for IS481 were considered positive for B. pertussis, with IS481 Ct values ≥35 and <40 requiring further confirmation by ptxS1 testing. We tested all IS481-positive samples with the ptxS1 assay, and ptxS1 Ct values <40 were considered a positive reaction. For all infants, accumulated person-time was computed in person-months, and month-specific person-time was computed based on month of enrollment. Confirmed pertussis cases were identified by month of occurrence, and calendar month-specific and overall incidence rates with 95% confidence intervals (CIs) were calculated. We computed a pertussis severity score based on the presence or absence of specific symptoms using the Preziosi scoring system [7], as well as a modified version that includes additional symptoms (Supplementary Figure 1). Infants were categorized as having severe pertussis if their Modified Preziosi Scale score was ≥7, and as having moderate pertussis if their score ranged from 1 to 6.
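The Ct-value decision rule and the severity cutoffs above can be expressed directly. A hedged sketch; the function names and the label returned for unconfirmed IS481 results are illustrative, not part of the CDC criteria:

```python
def interpret_pcr(is481_ct, ptxs1_ct=None):
    """Monoplex real-time PCR criteria as described above:
    IS481 Ct < 35 -> positive; 35 <= Ct < 40 -> ptxS1 confirmation
    required (ptxS1 Ct < 40 -> positive); otherwise negative."""
    if is481_ct < 35:
        return "positive"
    if 35 <= is481_ct < 40:
        if ptxs1_ct is None:
            return "needs ptxS1 confirmation"
        return "positive" if ptxs1_ct < 40 else "negative"
    return "negative"

def classify_severity(modified_preziosi_score):
    """Severity categories used above: >=7 severe, 1-6 moderate, 0 none."""
    if modified_preziosi_score >= 7:
        return "severe"
    if modified_preziosi_score >= 1:
        return "moderate"
    return "none"
```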
RESULTS
Study Population and Descriptive Statistics

Of the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort and 221 (10.9%) into the closed cohort. The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar between infants with and without a positive PCR test for B. pertussis (Table 1).

Table 1. Baseline Characteristics of Infants

Characteristic | Overall | Without Positive PCR for B. pertussis (n = 2013) | With Positive PCR for B. pertussis (n = 8)
Age at enrollment, d, median (IQR) (n = 2017) | 20 (9–41) | 20 (9–41) | 18 (14–26.5)
Weight at enrollment, g, median (IQR) (n = 1900) | 3320 (2820–3970) | 3320 (2820–3970) | 3001 (2600–3350)
Length at enrollment, cm, median (IQR) (n = 1900) | 51.5 (49.0–53.9) | 51.5 (49.0–54.0) | 51.3 (46.3–51.5)
Head circumference at enrollment, cm, median (IQR) | 35.3 (33.8–36.5) | 35.3 (33.8–36.5) | 35.0 (33.0–35.7)
Male sex | 52.3% | 52.2% | 62.5%
Birth weight, g, median (IQR)a | 2800 (2500–3000) | 2800 (2500–3000) | 2600 (2600–2600)
Birth immunizations received, No. (%): BCG | 1706 (84.4) | 1701 (84.5) | 5 (62.5)
Birth immunizations received, No. (%): OPV | 1705 (84.4) | 1700 (84.5) | 5 (62.5)
Abbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.
a Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis.
Of the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition: they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed cases and 2 for CDC probable cases); the remaining infant did not meet the CDC case criteria because it had no cough, with syndromic screening identifying only coryza and chest indrawing. A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests.

The incidence of pertussis according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).

Table 2. Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria

Category | No. of Infants | Person-time, mo | Incidence Rate per 1000 Person-months (95% CI) | Incidence per 1000 Infants (95% CI)
All positive B. pertussis PCR assaysa: All pertussis | 8 | 6654.5 | 1.14 (.57–2.28) | 3.96 (1.84–7.50)
All positive B. pertussis PCR assaysa: Nonsevere pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)
All positive B. pertussis PCR assaysa: Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)
CDC confirmed criteria: All pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)
CDC confirmed criteria: Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
CDC confirmed criteria: Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)
CDC probable criteria: All pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
CDC probable criteria: Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
CDC probable criteria: Severe pertussis | 0 | 6654.5 | 0.0 (NA) | 0.0 (NA)
Abbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.
a Any infant meeting the syndromic screening definition with a positive PCR test.

The incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. The rate according to the CDC confirmed case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2). Three cases met the severe pertussis criterion of a Modified Preziosi Scale score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), all of which met the CDC confirmed case definition (Table 2).
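The per-1000-infant point estimates above follow from the 2021 enrolled infants (e.g., 8/2021 × 1000 ≈ 3.96). The confidence intervals would require an exact Poisson method (e.g., from scipy), so this minimal helper, with an assumed name and simple rounding, covers point estimates only:

```python
def incidence(cases, denominator, per=1000):
    """Point incidence per `per` units of the denominator
    (infants for cumulative incidence, person-months for rates)."""
    return round(per * cases / denominator, 2)
```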
Pertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.

Figure 2. A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus a positive polymerase chain reaction test for Bordetella pertussis.

The most common symptoms were cough (7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases each). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). Among the 3 severe pertussis cases, 3 specified symptoms were present in all cases—cough, coryza, and severe chest indrawing; whoop and tachypnea were seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). There were no hospitalizations among the 8 pertussis cases. There was 1 death among the pertussis-positive cases: an infant diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis, who died the same week.
A detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases were in male infants and 3 in female infants, similar to the slight excess of males in the total surveillance study. The median time to diagnosis from enrollment was 6 weeks.

Table 3. Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases

ID | Sex | Age at Enrollment, wk | Age at Diagnosis, wk | Time to Diagnosis, wk | Month of Diagnosis | Modified Preziosi Scale Score | No. of Pentavalent Vaccine Doses Received | Age, wk, of Each Pentavalent Vaccine Dose Received | Case Typea
32311 | Female | 2 | 3 | 1 | November | 14 | 2 | 10, 15 | Confirmed
30361 | Male | 2 | 6 | 4 | July | 10 | 2 | 8, 16 | Confirmed
18111 | Male | 1 | 9 | 8 | December | 9 | 0 | NA | Confirmed
42201 | Male | 3 | 13 | 10 | September | 5 | 1 | 13 | Confirmed
53401 | Female | 2 | 5 | 3 | November | 6 | 1 | 13 | Confirmed
50701 | Male | 8 | 18 | 10 | June | 6 | 0 | NA | Probable
42291 | Male | 1 | 15 | 14 | September | 0 | 1 | 11 | Probable
53321 | Female | 3 | 5 | 2 | September | 5 | 0 | NA | Syndromic
Abbreviation: NA, not applicable.
a "Confirmed" represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; "Probable" represents cases meeting the CDC probable case classification; "Syndromic" represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation.
null
null
[ "Study Population and Descriptive Statistics" ]
[ "Of the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort, and 221 (10.9%) enrolled into the closed cohort. The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not have a positive PCR test for B. pertussis (Table 1).\nTable 1.Baseline Characteristics of InfantsInfant Characteristics and No. of Infants Assessed for Overall ComparisonOverallInfants Without Positive PCR for B. pertussis (n = 2013)Infants With Positive PCR for B. pertussis (n = 8)Age at enrollment, d, median (IQR) (n = 2017)20 (9–41)20 (9–41)18 (14–26.5)Weight at enrollment, g, median (IQR) (n = 1900)3320 (2820–3970)3320 (2820–3970)3001 (2600–3350)Length at enrollment, cm, median (IQR) (n = 1900)51.5 (49.0–53.9)51.5 (49.0–54.0)51.3 (46.3–51.5)Head circumference at enrollment, cm, median (IQR)35.3 (33.8–36.5)35.3 (33.8–36.5)35.0 (33.0–35.7)Male sex52.3%62.5%52.2%Birth weight, g, median (IQR)a2800 (2500–3000)2800 (2500–3000)2600 (2600–2600)Birth immunizations received, No. (%) BCG1706 (84.4)1701 (84.5)5 (62.5) OPV1705 (84.4)1700 (84.5)5 (62.5)Abbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.a Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis.\nBaseline Characteristics of Infants\nAbbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.\na Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. 
pertussis.\nOf the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of these 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests.\nThe incidence of pertussis per 1000 infants according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence of pertussis among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).\nTable 2.Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case CriteriaCategoryNo. 
of InfantsPerson-time, moIncidence Rate per 1000 Person-months (95% CI)Incidence per 1000 Infants (95% CI)All positive Bordetella pertussis PCR assaysa All pertussis86654.51.14 (.57–2.28)3.96 (1.84–7.50) Nonsevere pertussis56654.50.75 (.31–1.81)2.47 (.90–5.48) Severe pertussis36654.50.43 (.14–1.33)1.48 (.38–4.03)Infants meeting CDC confirmed pertussis diagnostic criteria All pertussis56654.50.75 (.31–1.81)2.47 (.90–5.48) Nonsevere pertussis26654.50.29 (.07–1.14)0.99 (.17–3.27) Severe pertussis36654.50.43 (.14–1.33)1.48 (.38–4.03)Infants meeting CDC probable pertussis diagnostic criteria All pertussis26654.50.29 (.07–1.14)0.99 (.17–3.27) Nonsevere pertussis26654.50.29 (.07–1.14)0.99 (.17–3.27) Severe pertussis06654.50.0 (NA)0.0 (NA)Abbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.a Any infant meeting the syndromic screening definition with a positive PCR test.\nIncidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria\nAbbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.\na Any infant meeting the syndromic screening definition with a positive PCR test.\nThe incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. 
The incidence rate of pertussis according to the CDC confirmed pertussis case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2).\nThree cases met the severe pertussis criteria of a (modified) Preziosi score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), with all of these cases meeting the CDC confirmed case definition (Table 2).\nPertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.\nFigure 2.A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus positive polymerase chain reaction test for Bordetella pertussis.\nA, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus positive polymerase chain reaction test for Bordetella pertussis.\nThe most common symptoms were cough (occurring in 7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases for both). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). 
Among the 3 severe pertussis cases, there were 3 specified symptoms that were present in all cases—cough, coryza, and severe chest indrawing, whereas whoop and tachypnea were seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). There were no hospitalizations among the 8 pertussis cases. There was 1 death among the pertussis positive cases. This infant was diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis, and passed away the same week.\nA detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases were in male infants, and 3 in female infants, similar to the slight excess of males in the total surveillance study. The median time to diagnosis from enrollment was 6 weeks.\nTable 3.Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis CasesIDSexAge at Enrollment, wkAge at Diagnosis, wkTime to Diagnosis, wkMonth of DiagnosisModified Preziosi Scale ScoreNo. 
of Pentavalent Vaccine Doses ReceivedAge, wk, of Each Pentavalent Vaccine Dose ReceivedCase Typea32311Female231November14210, 15Confirmed30361Male264July1028, 16Confirmed18111Male198December90NAConfirmed42201Male31310September5113Confirmed53401Female253November6113Confirmed50701Male81810June60NAProbable42291Male11514September0111Probable53321Female352September50NASyndromicAbbreviation: NA, not applicable.a “Confirmed” represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; “Probable” represents cases meeting the CDC probable case classification; “Syndromic” represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation.\nDescriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases\nAbbreviation: NA, not applicable.\na “Confirmed” represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; “Probable” represents cases meeting the CDC probable case classification; “Syndromic” represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation." ]
[ null ]
[ "METHODS", "RESULTS", "Study Population and Descriptive Statistics", "DISCUSSION", "Supplementary Data" ]
[ "The Prevention of Pertussis in Young Infants in Pakistan (PrePY) Baseline Surveillance study was conducted in 4 low-income settlements of Karachi (Rehri Goth, Ibrahim Hyderi, Bhains Colony, and Ali Akbar Shah) where the Department of Paediatrics and Child Health of The Aga Khan University (AKU) has been running primary healthcare centers (staffed with physicians, lady health visitors, and community health workers) for several years. AKU has an active population based Demographic Surveillance System (DSS) in the study areas with a total surveillance catchment population of approximately 220 000.\nEnrollment for this surveillance study in the 4 study sites in Karachi began on 21 February 2015 and the last follow-up occurred on 12 April 2016. Surveillance was conducted among both an open cohort, with infants enrolled at ages up to 10 weeks and followed through 18 weeks of age, and a smaller closed cohort, with pregnant women enrolled on or after 27 weeks’ gestation or mothers enrolled who gave birth within the prior 72 hours; infants born to these women were followed through 18 weeks of age. For both cohorts, infants were routinely evaluated for symptoms associated with a syndromic screening definition (described later), and for infants who met the syndromic screening definition, nasopharyngeal swabs and blood samples were collected for laboratory testing.\nInfant surveillance occurred through routine scheduled in-person visits, telephone follow-up, and additional unscheduled visits and calls, according to the following schedule (Figure 1): Infant follow-up home visits were made twice a week from birth through 2 weeks of age. From 3 to 7 weeks of age, follow-up home visits occurred once a week; from 8 through 18 weeks of age, follow-up home visits were conducted every 2 weeks. In addition to home visit follow-ups, phone calls were made twice weekly from delivery through 4 weeks of age. 
Weekly phone calls were made from 4 to 18 weeks of age.\nFigure 1. Open cohort study schedule. aSchedule for surveillance visits will be based on the infant's age at enrollment, not time since enrollment; bThree milliliters of blood to be collected for infant specimens. Home visit key: X1 = twice weekly; X2 = weekly; X3 = fortnightly. Abbreviation: CBC, complete blood count.\nBasic demographic data were obtained from mothers and infants at the time of enrollment, including age at enrollment, anthropometric measurements (infant length, weight, and head circumference), and maternal history of tetanus toxoid (TT) receipt. At all surveillance visits, infants were assessed against the standardized syndromic criteria, defined as an infant presenting with any of the following symptoms: cough (lasting at least 1 day), coryza, whoop, apnea, posttussive emesis, cyanosis, seizure, tachypnea (>50 breaths/minute for infants >2 months or >60 breaths/minute for infants <2 months), severe chest indrawing, movement only when stimulated (or an alternative definition of lethargy), poor feeding (confirmed by poor suck), close exposure to any family member with a prolonged afebrile cough illness, or axillary temperature ≥38°C.\nOur analysis was based on 2 outcome definitions: (1) the infant met the syndromic definition and had a positive polymerase chain reaction (PCR) test for Bordetella pertussis; and (2) the infant met the US Centers for Disease Control and Prevention (CDC) case definition of probable or confirmed pertussis (see Supplementary Data).\nTo ensure the most accurate clinical description of each case, we identified syndromic symptoms within the time frame
from cough onset through diagnosis and any continued symptoms without a break in symptom presentation that would be considered part of that illness episode. To identify new illness episodes as discrete, we required 7 days without symptoms. Because the CDC clinical case definition requires symptom assessment over time, based on a cough with a duration of ≥2 weeks, this approach is in line with the CDC clinical criteria. This provided us the ability to conduct a longitudinal assessment that captured all symptoms within the clinical episode, rather than being limited to a snapshot of symptoms at only 1 visit.\nInfants meeting the syndromic screening definition had nasopharyngeal swabs obtained by trained physicians using sterile, individually wrapped Copan FLOQ Minitip Nylon Flocked Dry Swabs. These swabs have comparable performance to rayon swabs [3, 4], and their use is recommended by CDC for optimal specimen collection for PCR testing for B. pertussis. To minimize exposure, the physician obtaining the swab wore a surgical mask and clean gloves. Swabs were inserted nasally and advanced along the floor of the nose, until they reached the nasopharynx. Once at the nasopharynx, the swabs were held against the posterior nasopharynx for a few seconds. Swabs were collected and stored in labeled universal transport medium cryovials and transported to the AKU Infectious Disease Research Laboratory at 4°C. Samples were stored at −70°C until total nucleic acid extraction for PCR.\nThe PCR procedures were in line with the CDC protocols for B. pertussis PCR and were adopted in consultation with the CDC [5, 6]. Total nucleic acid was extracted from the frozen aliquots using MagNA Pure Compact Nucleic Acid Isolation Kit I (Roche Life Science, Indianapolis, Indiana). Leftover DNA samples following initial testing were archived at AKU's Infectious Disease Research Laboratory, with storage of at least 2 years at −80°C. DNA extracts were tested by PCR for evidence of B. 
pertussis infection, using a real-time PCR detection system (Applied Biosystems 7500, Thermo Fisher Scientific, Waltham, Massachusetts).\nAll assays were run with positive and negative controls using standardized preparations of B. pertussis DNA, as well as PCR for the RNAse-P enzyme, which is used as a quality control measure to confirm that nasopharyngeal swabs successfully contacted human mucosa during sampling. In line with the CDC analysis criteria for monoplex real-time PCR, cycle threshold (Ct) values <35 for IS481 were considered positive for B. pertussis, with IS481 Ct values ≥35 and <40 requiring further confirmation with ptxS1 testing. We tested all samples positive for IS481 using the ptxS1 assay, and ptxS1 Ct values <40 were considered a positive reaction.\nFor all infants, accumulated person-time, measured in person-months, was computed, and, based on month of enrollment, month-specific accumulated person-time was computed. Overall confirmed pertussis cases were identified by month of occurrence, and calendar month-specific and overall incidence rates with 95% confidence intervals (CIs) were calculated.\nWe computed the Preziosi score for pertussis severity based on the presence or absence of specific symptoms, as well as a modified version of the Preziosi scoring system [7] that includes additional symptoms (Supplementary Figure 1). Infants were categorized as having severe pertussis if their Modified Preziosi Scale score was ≥7, and categorized as having moderate pertussis if their score ranged from 1 to 6.", " Study Population and Descriptive Statistics Of the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort, and 221 (10.9%) enrolled into the closed cohort. The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. 
Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not (Table 1).\nTable 1. Baseline Characteristics of Infants\nInfant Characteristics and No. of Infants Assessed for Overall Comparison | Overall | Infants Without Positive PCR for B. pertussis (n = 2013) | Infants With Positive PCR for B. pertussis (n = 8)\nAge at enrollment, d, median (IQR) (n = 2017) | 20 (9–41) | 20 (9–41) | 18 (14–26.5)\nWeight at enrollment, g, median (IQR) (n = 1900) | 3320 (2820–3970) | 3320 (2820–3970) | 3001 (2600–3350)\nLength at enrollment, cm, median (IQR) (n = 1900) | 51.5 (49.0–53.9) | 51.5 (49.0–54.0) | 51.3 (46.3–51.5)\nHead circumference at enrollment, cm, median (IQR) | 35.3 (33.8–36.5) | 35.3 (33.8–36.5) | 35.0 (33.0–35.7)\nMale sex | 52.3% | 52.2% | 62.5%\nBirth weight, g, median (IQR)a | 2800 (2500–3000) | 2800 (2500–3000) | 2600 (2600–2600)\nBirth immunizations received, No. (%): BCG | 1706 (84.4) | 1701 (84.5) | 5 (62.5)\nBirth immunizations received, No. (%): OPV | 1705 (84.4) | 1700 (84.5) | 5 (62.5)\nAbbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.\na Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis.\nOf the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. 
Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of the 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests.\nThe incidence of pertussis according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).\nTable 2. Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria\nCategory | No. of Infants | Person-time, mo | Incidence Rate per 1000 Person-months (95% CI) | Incidence per 1000 Infants (95% CI)\nAll positive Bordetella pertussis PCR assaysa\n All pertussis | 8 | 6654.5 | 1.14 (.57–2.28) | 3.96 (1.84–7.50)\n Nonsevere pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)\n Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)\nInfants meeting CDC confirmed pertussis diagnostic criteria\n All pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)\n Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)\nInfants meeting CDC probable pertussis diagnostic criteria\n All pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Severe pertussis | 0 | 6654.5 | 0.0 (NA) | 0.0 (NA)\nAbbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.\na Any infant meeting the syndromic screening definition with a positive PCR test.\nThe incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. The incidence rate according to the CDC confirmed pertussis case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2).\nThree cases met the severe pertussis criteria of a Modified Preziosi Scale score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), with all of these cases meeting the CDC confirmed case definition (Table 2).\nPertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.\nFigure 2. A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus a positive polymerase chain reaction test for Bordetella pertussis.\nThe most common symptoms were cough (occurring in 7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases each). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). Among the 3 severe pertussis cases, 3 symptoms were present in all cases (cough, coryza, and severe chest indrawing), whereas whoop and tachypnea were each seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). None of the 8 pertussis cases was hospitalized. There was 1 death among the pertussis-positive cases: this infant was diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis and died the same week.\nA detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases occurred in male infants and 3 in female infants, mirroring the slight excess of males in the total surveillance cohort. The median time from enrollment to diagnosis was 6 weeks.\nTable 3. Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases\nID | Sex | Age at Enrollment, wk | Age at Diagnosis, wk | Time to Diagnosis, wk | Month of Diagnosis | Modified Preziosi Scale Score | No. of Pentavalent Vaccine Doses Received | Age, wk, of Each Pentavalent Vaccine Dose Received | Case Typea\n32311 | Female | 2 | 3 | 1 | November | 14 | 2 | 10, 15 | Confirmed\n30361 | Male | 2 | 6 | 4 | July | 10 | 2 | 8, 16 | Confirmed\n18111 | Male | 1 | 9 | 8 | December | 9 | 0 | NA | Confirmed\n42201 | Male | 3 | 13 | 10 | September | 5 | 1 | 13 | Confirmed\n53401 | Female | 2 | 5 | 3 | November | 6 | 1 | 13 | Confirmed\n50701 | Male | 8 | 18 | 10 | June | 6 | 0 | NA | Probable\n42291 | Male | 1 | 15 | 14 | September | 0 | 1 | 11 | Probable\n53321 | Female | 3 | 5 | 2 | September | 5 | 0 | NA | Syndromic\nAbbreviation: NA, not applicable.\na “Confirmed” represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; “Probable” represents cases meeting the CDC probable case classification; “Syndromic” represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation.\nOf the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort, and 221 (10.9%) enrolled into the closed cohort. The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not (Table 1).\nTable 1. Baseline Characteristics of Infants\nInfant Characteristics and No. of Infants Assessed for Overall Comparison | Overall | Infants Without Positive PCR for B. pertussis (n = 2013) | Infants With Positive PCR for B. pertussis (n = 8)\nAge at enrollment, d, median (IQR) (n = 2017) | 20 (9–41) | 20 (9–41) | 18 (14–26.5)\nWeight at enrollment, g, median (IQR) (n = 1900) | 3320 (2820–3970) | 3320 (2820–3970) | 3001 (2600–3350)\nLength at enrollment, cm, median (IQR) (n = 1900) | 51.5 (49.0–53.9) | 51.5 (49.0–54.0) | 51.3 (46.3–51.5)\nHead circumference at enrollment, cm, median (IQR) | 35.3 (33.8–36.5) | 35.3 (33.8–36.5) | 35.0 (33.0–35.7)\nMale sex | 52.3% | 52.2% | 62.5%\nBirth weight, g, median (IQR)a | 2800 (2500–3000) | 2800 (2500–3000) | 2600 (2600–2600)\nBirth immunizations received, No. (%): BCG | 1706 (84.4) | 1701 (84.5) | 5 (62.5)\nBirth immunizations received, No. (%): OPV | 1705 (84.4) | 1700 (84.5) | 5 (62.5)\nAbbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.\na Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis.\nOf the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of the 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. 
A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests.\nThe incidence of pertussis according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).\nTable 2. Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria\nCategory | No. of Infants | Person-time, mo | Incidence Rate per 1000 Person-months (95% CI) | Incidence per 1000 Infants (95% CI)\nAll positive Bordetella pertussis PCR assaysa\n All pertussis | 8 | 6654.5 | 1.14 (.57–2.28) | 3.96 (1.84–7.50)\n Nonsevere pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)\n Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)\nInfants meeting CDC confirmed pertussis diagnostic criteria\n All pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)\n Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)\nInfants meeting CDC probable pertussis diagnostic criteria\n All pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Severe pertussis | 0 | 6654.5 | 0.0 (NA) | 0.0 (NA)\nAbbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.\na Any infant meeting the syndromic screening definition with a positive PCR test.\nThe incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. The incidence rate according to the CDC confirmed pertussis case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2).\nThree cases met the severe pertussis criteria of a Modified Preziosi Scale score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), with all of these cases meeting the CDC confirmed case definition (Table 2).\nPertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.\nFigure 2. A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus a positive polymerase chain reaction test for Bordetella pertussis.\nThe most common symptoms were cough (occurring in 7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases each). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). 
Among the 3 severe pertussis cases, 3 symptoms were present in all cases (cough, coryza, and severe chest indrawing), whereas whoop and tachypnea were each seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). None of the 8 pertussis cases was hospitalized. There was 1 death among the pertussis-positive cases: this infant was diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis and died the same week.\nA detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases occurred in male infants and 3 in female infants, mirroring the slight excess of males in the total surveillance cohort. The median time from enrollment to diagnosis was 6 weeks.\nTable 3. Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases\nID | Sex | Age at Enrollment, wk | Age at Diagnosis, wk | Time to Diagnosis, wk | Month of Diagnosis | Modified Preziosi Scale Score | No. of Pentavalent Vaccine Doses Received | Age, wk, of Each Pentavalent Vaccine Dose Received | Case Typea\n32311 | Female | 2 | 3 | 1 | November | 14 | 2 | 10, 15 | Confirmed\n30361 | Male | 2 | 6 | 4 | July | 10 | 2 | 8, 16 | Confirmed\n18111 | Male | 1 | 9 | 8 | December | 9 | 0 | NA | Confirmed\n42201 | Male | 3 | 13 | 10 | September | 5 | 1 | 13 | Confirmed\n53401 | Female | 2 | 5 | 3 | November | 6 | 1 | 13 | Confirmed\n50701 | Male | 8 | 18 | 10 | June | 6 | 0 | NA | Probable\n42291 | Male | 1 | 15 | 14 | September | 0 | 1 | 11 | Probable\n53321 | Female | 3 | 5 | 2 | September | 5 | 0 | NA | Syndromic\nAbbreviation: NA, not applicable.\na “Confirmed” represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; “Probable” represents cases meeting the CDC probable case classification; “Syndromic” represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation.", "Of the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort, and 221 (10.9%) enrolled into the closed cohort. The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not (Table 1).\nTable 1. Baseline Characteristics of Infants\nInfant Characteristics and No. of Infants Assessed for Overall Comparison | Overall | Infants Without Positive PCR for B. pertussis (n = 2013) | Infants With Positive PCR for B. pertussis (n = 8)\nAge at enrollment, d, median (IQR) (n = 2017) | 20 (9–41) | 20 (9–41) | 18 (14–26.5)\nWeight at enrollment, g, median (IQR) (n = 1900) | 3320 (2820–3970) | 3320 (2820–3970) | 3001 (2600–3350)\nLength at enrollment, cm, median (IQR) (n = 1900) | 51.5 (49.0–53.9) | 51.5 (49.0–54.0) | 51.3 (46.3–51.5)\nHead circumference at enrollment, cm, median (IQR) | 35.3 (33.8–36.5) | 35.3 (33.8–36.5) | 35.0 (33.0–35.7)\nMale sex | 52.3% | 52.2% | 62.5%\nBirth weight, g, median (IQR)a | 2800 (2500–3000) | 2800 (2500–3000) | 2600 (2600–2600)\nBirth immunizations received, No. (%): BCG | 1706 (84.4) | 1701 (84.5) | 5 (62.5)\nBirth immunizations received, No. (%): OPV | 1705 (84.4) | 1700 (84.5) | 5 (62.5)\nAbbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.\na Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis.\nOf the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of the 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. 
A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests.\nThe incidence of pertussis according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).\nTable 2. Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria\nCategory | No. of Infants | Person-time, mo | Incidence Rate per 1000 Person-months (95% CI) | Incidence per 1000 Infants (95% CI)\nAll positive Bordetella pertussis PCR assaysa\n All pertussis | 8 | 6654.5 | 1.14 (.57–2.28) | 3.96 (1.84–7.50)\n Nonsevere pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)\n Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)\nInfants meeting CDC confirmed pertussis diagnostic criteria\n All pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)\n Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)\nInfants meeting CDC probable pertussis diagnostic criteria\n All pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)\n Severe pertussis | 0 | 6654.5 | 0.0 (NA) | 0.0 (NA)\nAbbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.\na Any infant meeting the syndromic screening definition with a positive PCR test.\nThe incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. The incidence rate according to the CDC confirmed pertussis case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2).\nThree cases met the severe pertussis criteria of a Modified Preziosi Scale score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), with all of these cases meeting the CDC confirmed case definition (Table 2).\nPertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.\nFigure 2. A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus a positive polymerase chain reaction test for Bordetella pertussis.\nThe most common symptoms were cough (occurring in 7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases each). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). 
Among the 3 severe pertussis cases, 3 symptoms were present in all cases (cough, coryza, and severe chest indrawing), whereas whoop and tachypnea were each seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). None of the 8 pertussis cases was hospitalized. There was 1 death among the pertussis-positive cases: this infant was diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis and died the same week.\nA detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases occurred in male infants and 3 in female infants, mirroring the slight excess of males in the total surveillance cohort. The median time from enrollment to diagnosis was 6 weeks.\nTable 3. Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases\nID | Sex | Age at Enrollment, wk | Age at Diagnosis, wk | Time to Diagnosis, wk | Month of Diagnosis | Modified Preziosi Scale Score | No. of Pentavalent Vaccine Doses Received | Age, wk, of Each Pentavalent Vaccine Dose Received | Case Typea\n32311 | Female | 2 | 3 | 1 | November | 14 | 2 | 10, 15 | Confirmed\n30361 | Male | 2 | 6 | 4 | July | 10 | 2 | 8, 16 | Confirmed\n18111 | Male | 1 | 9 | 8 | December | 9 | 0 | NA | Confirmed\n42201 | Male | 3 | 13 | 10 | September | 5 | 1 | 13 | Confirmed\n53401 | Female | 2 | 5 | 3 | November | 6 | 1 | 13 | Confirmed\n50701 | Male | 8 | 18 | 10 | June | 6 | 0 | NA | Probable\n42291 | Male | 1 | 15 | 14 | September | 0 | 1 | 11 | Probable\n53321 | Female | 3 | 5 | 2 | September | 5 | 0 | NA | Syndromic\nAbbreviation: NA, not applicable.\na “Confirmed” represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; “Probable” represents cases meeting the CDC probable case classification; “Syndromic” represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation.", "Ours is one of the first prospective surveillance studies to evaluate the burden of pertussis in young infants in developing countries. We found a moderate burden of pertussis disease in our surveillance catchment area and have identified pertussis as a pathogen responsible for considerable disease among infants in Karachi. This is a first step in estimating the public health impact of pertussis in young infants in low- and lower-middle-income countries. The next logical step would be to estimate the extent to which pertussis contributes to severe disease, hospitalizations, and deaths among young infants in these low-resource settings. 
There are indications that, at least in some other low-resource settings such as Johannesburg, South Africa, pertussis is associated with hospitalizations of young infants (see Nunes et al in this supplement).

There are limited comparable high-quality burden data on infant pertussis from other low- and lower-middle-income countries. In a recent review of the literature, Tan et al found few published epidemiologic data from the WHO Africa, Eastern Mediterranean, Southeast Asia, and Western Pacific regions [8]. Moreover, as much of the available data are not collected through established surveillance systems for pertussis, incidence rates are often estimated based on retrospective studies of hospitalized infants. Nevertheless, data from high-income countries indicate that pertussis incidence is higher in disadvantaged populations. For example, Vitek et al reported that, in the 1990s, pertussis-associated mortality was at least 2.6 times higher in Hispanic infants compared with non-Hispanic infants [9]. Similarly, between 2002 and 2004, the incidence of pertussis-associated hospitalizations was 101 per 100 000 among Native American and Alaskan infants compared with 68 per 100 000 among the general US infant population [10].

There have been several attempts to estimate the global burden of pertussis. In 1999, Crowcroft et al produced a global estimate of 48.5 million cases and 295 000 deaths [11]. Later models produced substantially lower estimates of the global burden of pertussis. For example, in 2010, Black et al estimated that 16 million cases of pertussis and 195 000 pertussis-associated deaths occurred globally in 2008 [12]. Despite these estimates, actual reported cases have been only a fraction of the estimated total. For example, in 2014 only 139 786 cases of pertussis were reported globally [13]. 
There are many reasons for uncertainty in pertussis burden estimates, including secular trends in surveillance intensity and approaches, evolution in diagnostic methods, changes in national vaccine schedules, vaccine products used, and cyclical trends in pertussis incidence [13–15]. However, the most significant contributor to nonrobust pertussis burden estimates is the lack of data from low- and lower-middle-income countries. Studies such as ours will help fill this data gap.

Among the 8 pertussis cases in our community-based study, 3 were classified as severe based on a modified Preziosi score. Yet, none of these severe cases were hospitalized. Importantly, the original Preziosi scale was designed to identify combinations of presenting signs and symptoms that would dichotomize pertussis cases into severe and nonsevere illnesses, with the threshold defined by the median score in the study population; it was not necessarily meant to be predictive of clinical outcomes [7]. Moreover, as shown here, the age group followed in this study can present with symptoms not common among the population originally studied by Preziosi et al. Two subsequent retrospective studies were able to identify risk factors or clinical predictors of severe disease [16] (as defined by clinical outcomes) or death [17] due to pertussis, including young age (<2 months), prematurity, fever at presentation, and peak white blood cell and lymphocyte counts. However, both studies were conducted in settings (Australia and the United States) where pertussis transmission, healthcare-seeking behavior, and pediatric healthcare services may not be representative of developing countries. 
Prospective surveillance of hospitalized infants with suspected or confirmed pertussis in our setting would help generate evidence for a more objective pertussis severity assessment in low- and lower-middle-income countries.

The infant immunization schedule in Pakistan recommends vaccination with diphtheria, tetanus, and whole-cell pertussis (DTwP), Haemophilus influenzae type b, and hepatitis B at 6, 10, and 14 weeks of age [18]. Of the 3 severe pertussis cases, all were too young to be fully vaccinated. This is in line with the relative age distribution of pertussis reported from a variety of settings. This distribution of cases provides support for a maternal pertussis immunization strategy to reduce the infant pertussis burden. Moreover, our syndromic case definition, designed to be as sensitive as possible, performed as well as the standard US CDC case definition. Of the 8 PCR-confirmed pertussis cases identified to date in this surveillance cohort, 7 met the CDC pertussis case definition, including 5 that met the CDC confirmed case definition and 2 that met the CDC probable case definition.

There are a few potential limitations of our study. First, the surveillance period spanned approximately 1 year, even though pertussis is known to exhibit multiyear periodicity, with cycles occurring every 2–4 years [19]. Hence, our findings should be interpreted as a snapshot of pertussis epidemiology in an urban, South Asian population. Moreover, our study was conducted in a setting with low whole-cell pertussis vaccine (DTwP) coverage. The estimated 3-dose DTwP coverage at our study sites is 40%–50%, as measured by the demographic and surveillance system at these sites. This could limit the generalizability of our findings to populations with similarly suboptimal infant DTwP coverage. However, such populations form a substantial portion of the birth cohorts in low-income settings. 
Moreover, given that there is evidence that national immunization programs tend to overestimate vaccine coverage, our coverage estimates are likely to be closer to the “real” coverage. Hence, our findings are likely to be generalizable to large parts of developing country populations. Additionally, many surveillance studies have limitations due to healthcare-seeking behavior in the community under surveillance. However, given our intensive surveillance, it is unlikely that healthcare-seeking behavior had an impact on estimates of pertussis incidence in our study.\nIn conclusion, while the current study established that pertussis is a cause of disease in early infancy in a low-income South Asian setting, there is a need to better characterize the burden of pertussis in hospitalized cases. Moreover, given that pertussis often has multiyear cycles, the next steps could include continuing surveillance with an emphasis on identifying severe disease in hospitalized infants.", "Supplementary materials are available at http://cid.oxfordjournals.org. Consisting of data provided by the author to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the author, so questions or comments should be addressed to the author." ]
[ "methods", "results", null, "discussion", "supplementary-material" ]
[ "pertussis", "maternal vaccine", "Tdap", "Pakistan", "surveillance" ]
METHODS: The Prevention of Pertussis in Young Infants in Pakistan (PrePY) Baseline Surveillance study was conducted in 4 low-income settlements of Karachi (Rehri Goth, Ibrahim Hyderi, Bhains Colony, and Ali Akbar Shah) where the Department of Paediatrics and Child Health of The Aga Khan University (AKU) has been running primary healthcare centers (staffed with physicians, lady health visitors, and community health workers) for several years. AKU has an active population-based Demographic Surveillance System (DSS) in the study areas, with a total surveillance catchment population of approximately 220 000. Enrollment for this surveillance study in the 4 study sites in Karachi began on 21 February 2015, and the last follow-up occurred on 12 April 2016. Surveillance was conducted among both an open cohort, with infants enrolled at ages up to 10 weeks and followed through 18 weeks of age, and a smaller closed cohort, with pregnant women enrolled on or after 27 weeks’ gestation or mothers who gave birth within the prior 72 hours; infants born to these women were followed through 18 weeks of age. For both cohorts, infants were routinely evaluated for symptoms associated with a syndromic screening definition (described later), and for infants who met the syndromic screening definition, nasopharyngeal swabs and blood samples were collected for laboratory testing. Infant surveillance occurred through routine scheduled in-person visits, telephone follow-up, and additional unscheduled visits and calls, according to the following schedule (Figure 1): Infant follow-up home visits were made twice a week from birth through 2 weeks of age. From 3 to 7 weeks of age, follow-up home visits occurred once a week; from 8 through 18 weeks of age, follow-up home visits were conducted every 2 weeks. In addition to home visit follow-ups, phone calls were made twice weekly from delivery through 4 weeks of age. Weekly phone calls were made from 4 to 18 weeks of age. 
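The visit-and-call cadence described above can be expressed as a small lookup. This is an illustrative sketch based only on the schedule as written, not study code; the function names are invented, and the exact boundary week between "twice weekly" and "weekly" phone calls (the text says both "through 4 weeks" and "from 4 weeks") is an assumption.

```python
def home_visit_frequency(age_weeks: int) -> str:
    """Scheduled home-visit frequency for an infant of the given age in weeks,
    per the surveillance schedule described in the text (birth through 18 weeks)."""
    if not 0 <= age_weeks <= 18:
        raise ValueError("surveillance covers birth through 18 weeks of age")
    if age_weeks <= 2:
        return "twice weekly"
    if age_weeks <= 7:
        return "weekly"
    return "fortnightly"


def phone_call_frequency(age_weeks: int) -> str:
    """Scheduled phone-call frequency: twice weekly through 4 weeks of age,
    then weekly through 18 weeks (boundary at 4 weeks assumed inclusive)."""
    if not 0 <= age_weeks <= 18:
        raise ValueError("surveillance covers birth through 18 weeks of age")
    return "twice weekly" if age_weeks <= 4 else "weekly"
```
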
Figure 1. Open cohort study schedule. a Schedule for surveillance visits will be based on the infant's age at enrollment, not time since enrollment; b three milliliters of blood to be collected for infant specimens. Home visit key: X1 = twice weekly; X2 = weekly; X3 = fortnightly. Abbreviation: CBC, complete blood count.

Basic demographic data were obtained from mothers and infants at the time of enrollment, including age at enrollment, anthropometric measurements (infant length, weight, and head circumference), and maternal history of tetanus toxoid (TT) receipt. At all surveillance visits, infants were assessed against the standardized syndromic criteria, defined as an infant presenting with any of the following symptoms: cough (lasting at least 1 day), coryza, whoop, apnea, posttussive emesis, cyanosis, seizure, tachypnea (>50 breaths/minute for infants >2 months or >60 breaths/minute for infants <2 months), severe chest indrawing, movement only when stimulated (or an alternative definition of lethargy), poor feeding (confirmed by poor suck), close exposure to any family member with a prolonged afebrile cough illness, or axillary temperature ≥38°C. Our analysis was based on 2 outcome definitions: (1) The infant met the syndromic definition and had a positive polymerase chain reaction (PCR) test for Bordetella pertussis; and (2) the infant met the US Centers for Disease Control and Prevention (CDC) case definition of probable or confirmed pertussis (see Supplementary Data). 
To ensure the most accurate clinical description of each case, we identified syndromic symptoms within the time frame from cough onset through diagnosis and any continued symptoms without a break in symptom presentation that would be considered part of that illness episode. To identify new illness episodes as discrete, we required 7 days without symptoms. Because the CDC clinical case definition requires symptom assessment over time, based on a cough with a duration of ≥2 weeks, this approach is in line with the CDC clinical criteria. This provided us the ability to conduct a longitudinal assessment that captured all symptoms within the clinical episode, rather than being limited to a snapshot of symptoms at only 1 visit. Infants meeting the syndromic screening definition had nasopharyngeal swabs obtained by trained physicians using sterile, individually wrapped Copan FLOQ Minitip Nylon Flocked Dry Swabs. These swabs have comparable performance to rayon swabs [3, 4], and their use is recommended by CDC for optimal specimen collection for PCR testing for B. pertussis. To minimize exposure, the physician obtaining the swab wore a surgical mask and clean gloves. Swabs were inserted nasally and advanced along the floor of the nose, until they reached the nasopharynx. Once at the nasopharynx, the swabs were held against the posterior nasopharynx for a few seconds. Swabs were collected and stored in labeled universal transport medium cryovials and transported to the AKU Infectious Disease Research Laboratory at 4°C. Samples were stored at −70°C until total nucleic acid extraction for PCR. The PCR procedures were in line with the CDC protocols for B. pertussis PCR and were adopted in consultation with the CDC [5, 6]. Total nucleic acid was extracted from the frozen aliquots using MagNA Pure Compact Nucleic Acid Isolation Kit I (Roche Life Science, Indianapolis, Indiana). 
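The 7-day episode-separation rule described above (a new illness episode requires 7 symptom-free days) can be sketched as a small grouping function. This is a hedged reconstruction, not study code; the input format, an iterable of dates on which the infant was symptomatic, is an assumption.

```python
from datetime import date


def group_into_episodes(symptomatic_days, gap_days: int = 7):
    """Group dated symptom observations into discrete illness episodes.

    Two symptomatic days belong to the same episode unless at least
    `gap_days` symptom-free days fall between them (i.e., their dates are
    more than gap_days + 1 apart), mirroring the 7-day rule in the text.
    """
    days = sorted(set(symptomatic_days))
    episodes: list[list[date]] = []
    for d in days:
        # diff <= gap_days means fewer than gap_days symptom-free days elapsed
        if episodes and (d - episodes[-1][-1]).days <= gap_days:
            episodes[-1].append(d)
        else:
            episodes.append([d])
    return episodes
```
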
Leftover DNA samples following initial testing were archived at AKU's Infectious Disease Research Laboratory, with storage of at least 2 years at −80°C. DNA extracts were tested by PCR for evidence of B. pertussis infection, using a real-time PCR detection system (Applied Biosystems 7500, Thermo Fisher Scientific, Waltham, Massachusetts). All assays were run with positive and negative controls using standardized preparations of B. pertussis DNA, as well as PCR for human RNase P, which is used as a quality control measure to confirm that nasopharyngeal swabs successfully contacted human mucosa during sampling. In line with the CDC analysis criteria for monoplex real-time PCR, cycle threshold (Ct) values <35 for IS481 were considered positive for B. pertussis, with IS481 Ct values ≥35 and <40 requiring further confirmation with ptxS1 testing. We tested all samples positive for IS481 using the ptxS1 assay, and ptxS1 Ct values <40 were considered a positive reaction. For all infants, accumulated person-time was measured in person-months, and month-specific person-time was computed based on month of enrollment. Overall confirmed pertussis cases were identified by month of occurrence, and calendar month-specific and overall incidence rates with 95% confidence intervals (CIs) were calculated. We computed the original Preziosi score for pertussis severity, based on the presence or absence of specific symptoms, as well as a modified version of the scoring system [7] that includes additional symptoms (Supplementary Figure 1). Infants were categorized as having severe pertussis if their Modified Preziosi Scale score was ≥7, and categorized as having moderate pertussis if their score ranged from 1 to 6. RESULTS: Study Population and Descriptive Statistics. Of the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort, and 221 (10.9%) enrolled into the closed cohort. 
The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not (Table 1).

Table 1. Baseline Characteristics of Infants

Characteristic | Overall | Without Positive PCR for B. pertussis (n = 2013) | With Positive PCR for B. pertussis (n = 8)
Age at enrollment, d, median (IQR) (n = 2017) | 20 (9–41) | 20 (9–41) | 18 (14–26.5)
Weight at enrollment, g, median (IQR) (n = 1900) | 3320 (2820–3970) | 3320 (2820–3970) | 3001 (2600–3350)
Length at enrollment, cm, median (IQR) (n = 1900) | 51.5 (49.0–53.9) | 51.5 (49.0–54.0) | 51.3 (46.3–51.5)
Head circumference at enrollment, cm, median (IQR) | 35.3 (33.8–36.5) | 35.3 (33.8–36.5) | 35.0 (33.0–35.7)
Male sex | 52.3% | 52.2% | 62.5%
Birth weight, g, median (IQR)^a | 2800 (2500–3000) | 2800 (2500–3000) | 2600 (2600–2600)
BCG received, No. (%) | 1706 (84.4) | 1701 (84.5) | 5 (62.5)
OPV received, No. (%) | 1705 (84.4) | 1700 (84.5) | 5 (62.5)

Abbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.
^a Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis. 
Of the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of these 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests. The incidence of pertussis according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence of pertussis among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).

Table 2. Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria

Category | No. of Infants | Person-time, mo | Incidence Rate per 1000 Person-months (95% CI) | Incidence per 1000 Infants (95% CI)
All positive Bordetella pertussis PCR assays^a:
  All pertussis | 8 | 6654.5 | 1.14 (.57–2.28) | 3.96 (1.84–7.50)
  Nonsevere pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)
  Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)
Infants meeting CDC confirmed pertussis diagnostic criteria:
  All pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)
  Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
  Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)
Infants meeting CDC probable pertussis diagnostic criteria:
  All pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
  Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
  Severe pertussis | 0 | 6654.5 | 0.0 (NA) | 0.0 (NA)

Abbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.
^a Any infant meeting the syndromic screening definition with a positive PCR test.

The incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. The incidence rate of pertussis according to the CDC confirmed pertussis case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2). Three cases met the severe pertussis criteria of a (modified) Preziosi score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), with all of these cases meeting the CDC confirmed case definition (Table 2). 
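As a cross-check, the cumulative incidence figures in Table 2 can be recovered from the raw counts (8 PCR-positive cases overall, 5 CDC confirmed, 2 CDC probable, 3 severe, among 2021 enrolled infants). The helper below is illustrative and reproduces the point estimates only, not the reported confidence intervals.

```python
def incidence_per_1000_infants(cases: int, cohort_size: int) -> float:
    """Cumulative incidence per 1000 enrolled infants, rounded to two
    decimals as presented in Table 2 (point estimate only)."""
    return round(1000 * cases / cohort_size, 2)


COHORT = 2021  # total infants enrolled in the surveillance study
```

Applied to the Table 2 counts, this reproduces 3.96, 2.47, 0.99, and 1.48 cases per 1000 infants, matching the published point estimates.
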
Pertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.

Figure 2. A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus a positive polymerase chain reaction test for Bordetella pertussis.

The most common symptoms were cough (occurring in 7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases each). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). Among the 3 severe pertussis cases, 3 symptoms (cough, coryza, and severe chest indrawing) were present in all cases, whereas whoop and tachypnea were seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). There were no hospitalizations among the 8 pertussis cases. There was 1 death among the pertussis-positive cases: this infant was diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis and died during the same week. 
A detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases were in male infants and 3 in female infants, similar to the slight excess of males in the total surveillance cohort. The median time to diagnosis from enrollment was 6 weeks.

Table 3. Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases

ID    | Sex    | Age at Enrollment, wk | Age at Diagnosis, wk | Time to Diagnosis, wk | Month of Diagnosis | Modified Preziosi Scale Score | Pentavalent Vaccine Doses Received, No. | Age, wk, at Each Dose | Case Type^a
32311 | Female | 2 | 3  | 1  | November  | 14 | 2 | 10, 15 | Confirmed
30361 | Male   | 2 | 6  | 4  | July      | 10 | 2 | 8, 16  | Confirmed
18111 | Male   | 1 | 9  | 8  | December  | 9  | 0 | NA     | Confirmed
42201 | Male   | 3 | 13 | 10 | September | 5  | 1 | 13     | Confirmed
53401 | Female | 2 | 5  | 3  | November  | 6  | 1 | 13     | Confirmed
50701 | Male   | 8 | 18 | 10 | June      | 6  | 0 | NA     | Probable
42291 | Male   | 1 | 15 | 14 | September | 0  | 1 | 11     | Probable
53321 | Female | 3 | 5  | 2  | September | 5  | 0 | NA     | Syndromic

Abbreviation: NA, not applicable.
^a “Confirmed” represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; “Probable” represents cases meeting the CDC probable case classification; “Syndromic” represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation. 
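As a quick consistency check on Table 3, the reported median time to diagnosis (6 weeks) can be recomputed from the eight per-case values.

```python
from statistics import median

# Time to diagnosis, in weeks, for the 8 PCR-confirmed cases, read row by
# row from Table 3
times_to_diagnosis_wk = [1, 4, 8, 10, 3, 10, 14, 2]

median_time_wk = median(times_to_diagnosis_wk)  # matches the reported 6 weeks
```
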
The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not have a positive PCR test for B. pertussis (Table 1). Table 1.Baseline Characteristics of InfantsInfant Characteristics and No. of Infants Assessed for Overall ComparisonOverallInfants Without Positive PCR for B. pertussis (n = 2013)Infants With Positive PCR for B. pertussis (n = 8)Age at enrollment, d, median (IQR) (n = 2017)20 (9–41)20 (9–41)18 (14–26.5)Weight at enrollment, g, median (IQR) (n = 1900)3320 (2820–3970)3320 (2820–3970)3001 (2600–3350)Length at enrollment, cm, median (IQR) (n = 1900)51.5 (49.0–53.9)51.5 (49.0–54.0)51.3 (46.3–51.5)Head circumference at enrollment, cm, median (IQR)35.3 (33.8–36.5)35.3 (33.8–36.5)35.0 (33.0–35.7)Male sex52.3%62.5%52.2%Birth weight, g, median (IQR)a2800 (2500–3000)2800 (2500–3000)2600 (2600–2600)Birth immunizations received, No. (%) BCG1706 (84.4)1701 (84.5)5 (62.5) OPV1705 (84.4)1700 (84.5)5 (62.5)Abbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.a Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis. Baseline Characteristics of Infants Abbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction. a Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis. 
Of the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of these 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests. The incidence of pertussis per 1000 infants according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence of pertussis among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2). Table 2.Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case CriteriaCategoryNo. 
Study Population and Descriptive Statistics: Of the 2021 infants enrolled into the surveillance study, 1800 (89.1%) were enrolled into the open cohort, and 221 (10.9%) into the closed cohort.
The total surveillance cohort contained slightly more male than female infants (52.3% vs 47.7%, respectively). Detailed demographics (infant anthropometric measurements and receipt of birth vaccines) are shown in Table 1. Age at enrollment and anthropometric measures were similar among infants who had a positive PCR test for B. pertussis compared to those who did not (Table 1).

Table 1. Baseline Characteristics of Infants

Infant Characteristic (No. of Infants Assessed for Overall Comparison) | Overall | Without Positive PCR for B. pertussis (n = 2013) | With Positive PCR for B. pertussis (n = 8)
Age at enrollment, d, median (IQR) (n = 2017) | 20 (9–41) | 20 (9–41) | 18 (14–26.5)
Weight at enrollment, g, median (IQR) (n = 1900) | 3320 (2820–3970) | 3320 (2820–3970) | 3001 (2600–3350)
Length at enrollment, cm, median (IQR) (n = 1900) | 51.5 (49.0–53.9) | 51.5 (49.0–54.0) | 51.3 (46.3–51.5)
Head circumference at enrollment, cm, median (IQR) | 35.3 (33.8–36.5) | 35.3 (33.8–36.5) | 35.0 (33.0–35.7)
Male sex | 52.3% | 52.2% | 62.5%
Birth weight, g, median (IQR)a | 2800 (2500–3000) | 2800 (2500–3000) | 2600 (2600–2600)
Birth immunizations received, No. (%): BCG | 1706 (84.4) | 1701 (84.5) | 5 (62.5)
Birth immunizations received, No. (%): OPV | 1705 (84.4) | 1700 (84.5) | 5 (62.5)

Abbreviations: B. pertussis, Bordetella pertussis; IQR, interquartile range; OPV, oral polio vaccine; PCR, polymerase chain reaction.
a Specific to infants enrolled in the closed cohort (n = 221) only. Note that only 1 infant in the closed cohort had a positive PCR test for B. pertussis.
Of the 8 infants with positive pertussis tests, all met our protocol-defined initial pertussis case definition, namely, they met the syndromic screening definition and had a positive PCR test for pertussis. Of these 8 infants, 7 met the CDC pertussis case definition (5 met the criteria for CDC confirmed pertussis cases, 2 met the criteria for CDC probable pertussis cases); only 1 of these 8 did not meet the CDC pertussis case criteria, as this infant did not have cough, with syndromic screening identifying only coryza and chest indrawing. A total of 1311 infants met the syndromic screening definition, of whom 1303 (99.4%) did not have PCR-positive tests. The incidence of pertussis according to our pertussis case definition was 3.96 (95% CI, 1.84–7.50) cases per 1000 infants. The incidence of pertussis among infants meeting the CDC confirmed case definition was 2.47 (95% CI, .90–5.48) cases per 1000 infants, and among infants meeting the CDC probable case definition was 0.99 (95% CI, .17–3.27) cases per 1000 infants (Table 2).

Table 2. Incidence of Severe and Nonsevere Pertussis, Overall and by Centers for Disease Control and Prevention Diagnostic Case Criteria

Category | No. of Infants | Person-time, mo | Incidence Rate per 1000 Person-months (95% CI) | Incidence per 1000 Infants (95% CI)
All positive Bordetella pertussis PCR assaysa
  All pertussis | 8 | 6654.5 | 1.14 (.57–2.28) | 3.96 (1.84–7.50)
  Nonsevere pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)
  Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)
Infants meeting CDC confirmed pertussis diagnostic criteria
  All pertussis | 5 | 6654.5 | 0.75 (.31–1.81) | 2.47 (.90–5.48)
  Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
  Severe pertussis | 3 | 6654.5 | 0.43 (.14–1.33) | 1.48 (.38–4.03)
Infants meeting CDC probable pertussis diagnostic criteria
  All pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
  Nonsevere pertussis | 2 | 6654.5 | 0.29 (.07–1.14) | 0.99 (.17–3.27)
  Severe pertussis | 0 | 6654.5 | 0.0 (NA) | 0.0 (NA)

Abbreviations: CDC, Centers for Disease Control and Prevention; CI, confidence interval; NA, not applicable; PCR, polymerase chain reaction.
a Any infant meeting the syndromic screening definition with a positive PCR test.

The incidence rate of pertussis according to our pertussis case definition was 1.14 (95% CI, .57–2.28) cases per 1000 person-months. The incidence rate according to the CDC confirmed pertussis case definition was 0.75 (95% CI, .31–1.81) cases per 1000 person-months, and among infants meeting the CDC probable case definition it was 0.30 (95% CI, .08–1.20) cases per 1000 person-months (Table 2). Three cases met the severe pertussis criteria of a (modified) Preziosi score ≥7 (incidence rate of severe pertussis, 0.43 [95% CI, .14–1.33] cases per 1000 person-months), with all of these cases meeting the CDC confirmed case definition (Table 2).
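As an illustration of the arithmetic behind these estimates, the sketch below recomputes the cumulative incidence per 1000 infants (8 cases among 2021 enrolled infants) together with an approximate 95% confidence interval. Note an assumption here: the paper does not state its CI method (its intervals are likely exact Poisson limits), so this log-normal approximation reproduces the point estimates but its interval bounds differ slightly from Table 2.

```python
import math

def incidence_per_1000(cases, denominator, z=1.96):
    """Point estimate and approximate 95% CI for incidence per 1000.

    Uses a log-normal approximation for the Poisson count: the standard
    error of log(count) is roughly 1/sqrt(count)."""
    rate = 1000.0 * cases / denominator
    half_width = z / math.sqrt(cases)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)

# Cumulative incidence per 1000 infants: 8 pertussis cases among 2021 enrolled
rate, lo, hi = incidence_per_1000(8, 2021)
print(f"{rate:.2f} per 1000 infants (approx. 95% CI, {lo:.2f}-{hi:.2f})")
```

The same helper reproduces the other point estimates in the text (5 cases gives 2.47 per 1000 infants, 2 cases gives 0.99 per 1000 infants).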
Pertussis cases occurred between June and December 2015, with 1 case each in June and July, 3 cases in September, 2 cases in November, and 1 case in December. The 3 severe pertussis cases occurred in July, November, and December (Figure 2). We also evaluated pertussis cases by month of age at diagnosis, with age-specific person-time computed to estimate age-specific pertussis incidence rates.

Figure 2. A, Incidence of severe and nonsevere pertussis by calendar month. B, Incidence of severe and nonsevere pertussis by infant age, in months. Pertussis incidence presented here includes all pertussis cases, defined as meeting the syndromic case definition plus a positive polymerase chain reaction test for Bordetella pertussis.

The most common symptoms were cough (occurring in 7 of 8 cases), severe chest indrawing (6 of 8 cases), and tachypnea and coryza (4 of 8 cases each). Additionally, 5 of the 8 cases presented with upper respiratory symptoms not otherwise specified (Supplementary Table 2). Among the 3 severe pertussis cases, 3 specified symptoms (cough, coryza, and severe chest indrawing) were present in all cases, whereas whoop and tachypnea were seen in 2 of the 3 severe cases. Among the 5 nonsevere cases, the most common symptoms were cough (n = 4) and severe chest indrawing (n = 3). There were no hospitalizations among the 8 pertussis cases. There was 1 death among the pertussis-positive cases: this infant was diagnosed at 5 weeks of age with a nonsevere syndromic case of pertussis and died during the same week.
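The severity split and the median time to diagnosis reported for these cases can be reproduced from the per-case values in Table 3. The sketch below applies the modified Preziosi threshold used in this study (severe if score ≥7) to the eight cases' scores and computes the median enrollment-to-diagnosis interval; the two lists are transcribed from Table 3.

```python
from statistics import median

# Per-case values transcribed from Table 3 (in the order listed there)
preziosi_scores = [14, 10, 9, 5, 6, 6, 0, 5]
weeks_to_diagnosis = [1, 4, 8, 10, 3, 10, 14, 2]

# Modified Preziosi threshold: a score of 7 or more is classed as severe
severe = [s for s in preziosi_scores if s >= 7]
print(len(severe), len(preziosi_scores) - len(severe))  # 3 severe, 5 nonsevere
print(median(weeks_to_diagnosis))                       # 6.0 weeks
```

Both results match the narrative: 3 severe cases and a median time to diagnosis of 6 weeks.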
A detailed summary of demographic and case classification findings for the 8 PCR-confirmed cases is presented in Table 3. Notably, 5 pertussis cases were in male infants and 3 in female infants, similar to the slight excess of males in the total surveillance study. The median time to diagnosis from enrollment was 6 weeks.

Table 3. Descriptive Summary of Demographic and Case Classification Findings for Polymerase Chain Reaction–Confirmed Pertussis Cases

ID | Sex | Age at Enrollment, wk | Age at Diagnosis, wk | Time to Diagnosis, wk | Month of Diagnosis | Modified Preziosi Scale Score | No. of Pentavalent Vaccine Doses Received | Age, wk, of Each Pentavalent Vaccine Dose Received | Case Typea
32311 | Female | 2 | 3 | 1 | November | 14 | 2 | 10, 15 | Confirmed
30361 | Male | 2 | 6 | 4 | July | 10 | 2 | 8, 16 | Confirmed
18111 | Male | 1 | 9 | 8 | December | 9 | 0 | NA | Confirmed
42201 | Male | 3 | 13 | 10 | September | 5 | 1 | 13 | Confirmed
53401 | Female | 2 | 5 | 3 | November | 6 | 1 | 13 | Confirmed
50701 | Male | 8 | 18 | 10 | June | 6 | 0 | NA | Probable
42291 | Male | 1 | 15 | 14 | September | 0 | 1 | 11 | Probable
53321 | Female | 3 | 5 | 2 | September | 5 | 0 | NA | Syndromic

Abbreviation: NA, not applicable.
a "Confirmed" represents cases meeting the Centers for Disease Control and Prevention (CDC) confirmed case classification; "Probable" represents cases meeting the CDC probable case classification; "Syndromic" represents cases meeting the syndromic case definition plus polymerase chain reaction confirmation.

DISCUSSION: Ours is one of the first prospective surveillance studies to evaluate the burden of pertussis in young infants in developing countries.
We found a moderate burden of pertussis disease in our surveillance catchment area and have identified pertussis as a pathogen responsible for considerable disease among infants in Karachi. This is a first step in estimating the public health impact of pertussis in young infants in low- and lower-middle-income countries. The next logical step would be to estimate the extent to which pertussis contributes to severe disease, hospitalizations, and deaths among young infants in these low-resource settings. There are indications that at least in some other low-resource settings, such as Johannesburg, South Africa, pertussis is associated with hospitalizations of young infants (see Nunes et al in this supplement). There are limited comparable high-quality burden data on infant pertussis from other low- and lower-middle-income countries. In a recent review of the literature, Tan et al found few published epidemiologic data from the WHO Africa, Eastern Mediterranean, Southeast Asia, and Western Pacific regions [8]. Moreover, as much of the available data are generally not collected through established surveillance systems for pertussis, incidence rates are often estimated based on retrospective studies of hospitalized infants. Nevertheless, data from high-income countries indicate that pertussis incidence is higher in disadvantaged populations. For example, Vitek et al reported that, in the 1990s, pertussis-associated mortality was at least 2.6 times higher in Hispanic infants compared with non-Hispanic infants [9]. Similarly, between 2002 and 2004, the incidence of pertussis-associated hospitalizations was 101 per 100 000 among Native American and Alaskan infants compared with 68 per 100 000 among the general US infant population [10]. There have been several attempts to estimate the global burden of pertussis. In 1999, Crowcroft et al produced a global estimate of 48.5 million cases and 295 000 deaths [11]. 
Later models produced substantially lower estimates of the global burden of pertussis. For example, in 2010, Black et al estimated that 16 million cases of pertussis and 195 000 pertussis-associated deaths occurred globally in 2008 [12]. Despite these estimates, reported cases have been only a fraction of estimated cases; for example, in 2014 only 139 786 cases of pertussis were reported globally [13]. There are many reasons for uncertainty in pertussis burden estimates, including secular trends in surveillance intensity and approaches, evolution in diagnostic methods, changes in national vaccine schedules and vaccine products used, and cyclical trends in pertussis incidence [13–15]. However, the most significant contributor to nonrobust pertussis burden estimates is the lack of data from low- and lower-middle-income countries. Studies such as ours will help fill this data gap. Among the 8 pertussis cases in our community-based study, 3 were classified as severe based on a modified Preziosi score. Yet, none of these severe cases were hospitalized. Importantly, the original Preziosi scale was designed to identify combinations of presenting signs and symptoms that would dichotomize pertussis cases into severe and nonsevere illnesses, with the threshold defined by the median score in the study population; it was not necessarily meant to be predictive of clinical outcomes [7]. Moreover, as shown here, the age group followed in this study can present with symptoms not common among the population originally studied by Preziosi et al. Two subsequent retrospective studies were able to identify risk factors or clinical predictors of severe disease [16] (as defined by clinical outcomes) or death [17] due to pertussis, including young age (<2 months), prematurity, fever at presentation, and peak white blood cell and lymphocyte counts.
However, both studies were conducted in settings (Australia and the United States) where pertussis transmission, healthcare-seeking behavior, and pediatric healthcare services may not be representative of developing countries. Prospective surveillance of hospitalized infants with suspected or confirmed pertussis in our setting would help generate evidence for a more objective pertussis severity assessment in low- and lower-middle-income countries. The infant immunization schedule in Pakistan recommends vaccination with diphtheria, tetanus, and whole-cell pertussis (DTwP), Haemophilus influenzae type b, and hepatitis B at 6, 10, and 14 weeks of age [18]. All 3 severe pertussis cases occurred in infants too young to be fully vaccinated. This is in line with the relative age distribution of pertussis reported from a variety of settings. This distribution of cases provides support for a maternal pertussis immunization strategy to reduce the infant pertussis burden. Moreover, our syndromic case definition, designed to be as sensitive as possible, performed as well as the standard US CDC case definition. Of the 8 PCR-confirmed pertussis cases identified to date in this surveillance cohort, 7 met the CDC pertussis case definition, including 5 that met the CDC confirmed case definition and 2 that met the CDC probable case definition. There are a few potential limitations of our study. First, the surveillance period spanned approximately 1 year, even though pertussis is known to exhibit multiyear periodicity, with cycles occurring every 2–4 years [19]. Hence, our findings should be interpreted as a snapshot of pertussis epidemiology in an urban, South Asian population. Moreover, our study was conducted in a setting with low whole-cell pertussis vaccine (DTwP) coverage. The estimated 3-dose DTwP coverage at our study sites is 40%–50%, as measured by the demographic and surveillance system at these sites.
This could ostensibly limit the generalizability of our findings to other populations with similarly suboptimal infant DTwP coverage. However, such populations form a substantial portion of the birth cohorts in low-income settings. Moreover, given the evidence that national immunization programs tend to overestimate vaccine coverage, our coverage estimates are likely to be closer to the "real" coverage. Hence, our findings are likely to be generalizable to large parts of developing country populations. Additionally, many surveillance studies are limited by healthcare-seeking behavior in the community under surveillance. However, given our intensive surveillance, it is unlikely that healthcare-seeking behavior had an impact on estimates of pertussis incidence in our study. In conclusion, while the current study established that pertussis is a cause of disease in early infancy in a low-income South Asian setting, there is a need to better characterize the burden of pertussis in hospitalized cases. Moreover, given that pertussis often has multiyear cycles, next steps could include continuing surveillance with an emphasis on identifying severe disease in hospitalized infants. Supplementary Data: Supplementary materials are available at http://cid.oxfordjournals.org. Consisting of data provided by the author to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the author, so questions or comments should be addressed to the author.
Background:  Pertussis remains a cause of morbidity and mortality among young infants. There are limited data on the pertussis disease burden in this age group from low- and lower-middle-income countries, including in South Asia. Methods:  We conducted an active community-based surveillance study from February 2015 to April 2016 among 2 cohorts of young infants in 4 low-income settlements in Karachi, Pakistan. Infants were enrolled either at birth (closed cohort) or at ages up to 10 weeks (open cohort) and followed until 18 weeks of age. Nasopharyngeal swab specimens were obtained from infants who met a standardized syndromic case definition and tested for Bordetella pertussis using real-time polymerase chain reaction. We determined the incidence of pertussis using a protocol-defined case definition, as well as the US Centers for Disease Control and Prevention (CDC) definitions for confirmed and probable pertussis. Results:  Of 2021 infants enrolled into the study, 8 infants met the protocol-defined pertussis case definition, for an incidence of 3.96 (95% confidence interval [CI], 1.84-7.50) cases per 1000 infants. Seven of the pertussis cases met the CDC pertussis case definition (5 confirmed, 2 probable), for incidences of CDC-defined confirmed pertussis of 2.47 (95% CI, .90-5.48) cases per 1000 infants, and probable pertussis of 0.99 (95% CI, .17-3.27) cases per 1000 infants. Three of the pertussis cases were severe according to the Modified Preziosi Scale score. Conclusions:  In one of the first prospective surveillance studies of infant pertussis in a developing country, we identified a moderate burden of pertussis disease in early infancy in Pakistan.
null
null
7,445
327
[ 1574 ]
5
[ "pertussis", "cases", "infants", "case", "definition", "cdc", "severe", "pcr", "incidence", "positive" ]
[ "pertussis infant met", "infant pertussis burden", "data infant pertussis", "pertussis disease surveillance", "disease infants karachi" ]
null
null
null
null
[CONTENT] pertussis | maternal vaccine | Tdap | Pakistan | surveillance [SUMMARY]
[CONTENT] pertussis | maternal vaccine | Tdap | Pakistan | surveillance [SUMMARY]
null
[CONTENT] pertussis | maternal vaccine | Tdap | Pakistan | surveillance [SUMMARY]
null
null
[CONTENT] Bordetella pertussis | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Pakistan | Population Surveillance | Prospective Studies | Seasons | Severity of Illness Index | Socioeconomic Factors | Whooping Cough [SUMMARY]
[CONTENT] Bordetella pertussis | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Pakistan | Population Surveillance | Prospective Studies | Seasons | Severity of Illness Index | Socioeconomic Factors | Whooping Cough [SUMMARY]
null
[CONTENT] Bordetella pertussis | Female | Humans | Incidence | Infant | Infant, Newborn | Male | Pakistan | Population Surveillance | Prospective Studies | Seasons | Severity of Illness Index | Socioeconomic Factors | Whooping Cough [SUMMARY]
null
null
[CONTENT] pertussis infant met | infant pertussis burden | data infant pertussis | pertussis disease surveillance | disease infants karachi [SUMMARY]
[CONTENT] pertussis infant met | infant pertussis burden | data infant pertussis | pertussis disease surveillance | disease infants karachi [SUMMARY]
null
[CONTENT] pertussis infant met | infant pertussis burden | data infant pertussis | pertussis disease surveillance | disease infants karachi [SUMMARY]
null
null
[CONTENT] pertussis | cases | infants | case | definition | cdc | severe | pcr | incidence | positive [SUMMARY]
[CONTENT] pertussis | cases | infants | case | definition | cdc | severe | pcr | incidence | positive [SUMMARY]
null
[CONTENT] pertussis | cases | infants | case | definition | cdc | severe | pcr | incidence | positive [SUMMARY]
null
null
[CONTENT] swabs | visits | infants | pertussis | weeks | time | follow | weekly | home | based [SUMMARY]
[CONTENT] pertussis | cases | case | infants | meeting | positive | cdc | definition | severe | pcr [SUMMARY]
null
[CONTENT] pertussis | cases | infants | case | definition | cdc | author | severe | pcr | positive [SUMMARY]
null
null
[CONTENT] ||| February 2015 to April 2016 | 2 | 4 | Karachi | Pakistan ||| up to 10 weeks | 18 weeks of age ||| Nasopharyngeal ||| the US Centers for Disease Control and Prevention | CDC [SUMMARY]
[CONTENT] 2021 | 8 | 3.96 | 95% | CI | 1.84 | 1000 ||| Seven | CDC | 5 | 2 | CDC | 2.47 | 95% | CI | 1000 | 0.99 | 95% | CI | 1000 ||| Three | the Modified Preziosi Scale [SUMMARY]
null
[CONTENT] ||| South Asia ||| February 2015 to April 2016 | 2 | 4 | Karachi | Pakistan ||| up to 10 weeks | 18 weeks of age ||| Nasopharyngeal ||| the US Centers for Disease Control and Prevention | CDC ||| 2021 | 8 | 3.96 | 95% | CI | 1.84 | 1000 ||| Seven | CDC | 5 | 2 | CDC | 2.47 | 95% | CI | 1000 | 0.99 | 95% | CI | 1000 ||| Three | the Modified Preziosi Scale ||| ||| one | first | Pakistan [SUMMARY]
null
Active angiogenesis in metastatic renal cell carcinoma predicts clinical benefit to sunitinib-based therapy.
24786599
Sunitinib represents a widely used therapy for metastatic renal cell carcinoma patients. Even so, there is a group of patients who show toxicity without clinical benefit. In this work, we have analysed pivotal molecular targets involved in angiogenesis (vascular endothelial growth factor (VEGF)-A, VEGF receptor 2 (KDR), phosphorylated (p)KDR and microvascular density (MVD)) to test their potential value as predictive biomarkers of clinical benefit in sunitinib-treated renal cell carcinoma patients.
BACKGROUND
Vascular endothelial growth factor-A, KDR and pKDR-Y1775 expression as well as CD31, for MVD visualisation, were determined by immunohistochemistry in 48 renal cell carcinoma patients, including 23 metastatic cases treated with sunitinib. Threshold was defined for each biomarker, and univariate and multivariate analyses for progression-free survival (PFS) and overall survival (OS) were carried out.
METHODS
The HistoScore mean value obtained for VEGF-A was 121.6 (range, 10-300); for KDR 258.5 (range, 150-300); for pKDR-Y1775 10.8 (range, 0-65) and the mean value of CD31-positive structures for MVD visualisation was 49 (range, 10-126). Statistical differences for PFS (P=0.01) and OS (P=0.007) were observed for pKDR-Y1775 in sunitinib-treated patients. Importantly, pKDR-Y1775 expression remained significant after multivariate Cox analysis for PFS (P=0.01; HR: 5.35, 95% CI, 1.49-19.13) and for OS (P=0.02; HR: 5.13, 95% CI, 1.25-21.05).
RESULTS
Our results suggest that the expression of phosphorylated (i.e., activated) KDR in tumour stroma might be used as predictive biomarker for the clinical outcome in renal cell carcinoma first-line sunitinib-treated patients.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Angiogenesis Inhibitors", "Biomarkers, Tumor", "Carcinoma, Renal Cell", "Cohort Studies", "Disease-Free Survival", "Female", "Humans", "Indoles", "Kaplan-Meier Estimate", "Kidney Neoplasms", "Male", "Microvessels", "Middle Aged", "Multivariate Analysis", "Neovascularization, Pathologic", "Phosphoproteins", "Proportional Hazards Models", "Pyrroles", "Sunitinib", "Vascular Endothelial Growth Factor A", "Vascular Endothelial Growth Factor Receptor-2" ]
4037833
null
null
null
null
Results
Patient characteristics
Recruited data from patients at baseline are summarized in Table 1. The distribution of patients according to sex was similar: 52% females and 48% males. The median age for this cohort of patients was 62 years. In terms of ECOG performance status, most patients (61%) were clustered as equal to 1. Previous nephrectomy was carried out in 87% of the cases. Sites of metastases were diverse, including lung 35%, liver 13%, bone 22%, brain 4% and lymph nodes 26%. Number of disease sites was established as 1, 2 and ⩾3 (48%, 35% and 17%, respectively). Patients were grouped according to their MSKCC risk factor classification as favourable (61%) and intermediate (39%). The control group comprised 25 biopsies from non-metastatic RCC patients without treatment. The median age of this group was 67 years, and the distribution of patients according to sex was 60% males and 40% females.
Vascular endothelial growth factor-A, KDR, pKDR-Y1775 and MVD in RCC
To evaluate the expression of the selected proteins, immunohistochemistry assays were performed in patients treated with SU11248. A control group was included to establish a reference value for each marker. Vascular endothelial growth factor-A expression was diffusely detected in the cytoplasm of tumour cells, as well as in stromal cells, including fibroblasts, and in endothelial cells. Most of the cases showed stronger staining in the tumour than in the stroma. Expression of KDR was seen in endothelial cells, preferentially in tumour stroma. In addition, KDR was also detected in isolated fibroblasts and malignant cells. Expression of pKDR-Y1775 was only observed in the endothelial cells of vascular structures in the tumour. Conversely, endothelial cells of vessels in adjacent non-tumoral renal tissue did not express pKDR-Y1775. Finally, CD31 expression was present in all vascular structures, both in tumour and non-tumoral renal tissue (Figure 1). HScore values of all patients for VEGF-A, KDR and pKDR-Y1775, as well as the absolute number of CD31-positive structures for MVD visualisation, are represented in histograms (Figure 1). The HScore mean value obtained for VEGF-A staining was 121.6 (range, 10–300); for KDR in endothelial cells 258.5 (range, 150–300); for pKDR-Y1775 10.8 (range, 0–65); and the mean value of CD31-positive vascular structures for MVD staining was 49 (range, 10–126). To determine the predictive potential of these proteins in metastatic RCC patients treated with sunitinib in first line, we estimated a cut-off point of 60 for VEGF-A, of 200 for KDR, of 0 for pKDR-Y1775 and of 48 for MVD.
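The HScore (histoscore) values reported above run from 0 to 300, which matches the common convention of summing, over staining intensities, the intensity level (1–3) times the percentage of cells staining at that intensity. The paper does not spell out its exact formula, so the sketch below is an assumption based on that standard convention, not the authors' scoring code.

```python
def hscore(pct_by_intensity):
    """Conventional histoscore: sum over staining intensities of
    intensity (1, 2 or 3) x percentage of cells at that intensity.
    Ranges from 0 (all cells negative) to 300 (100% strong staining)."""
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

# Example: 50% weak (1+), 25% moderate (2+), 10% strong (3+) staining
print(hscore({1: 50, 2: 25, 3: 10}))  # 130
print(hscore({3: 100}))               # 300, the maximum of the scale
```

Under this convention a cut-off such as the 200 used for KDR corresponds to a tumour where, for example, at least two-thirds of cells show strong (3+) staining.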
pKDR-Y1775 in tumour stroma predicts clinical outcome
On the basis of these cut-off points, Kaplan–Meier analysis for categorical values of each marker was performed to assess the correlation between expression levels and prognosis in patients treated with sunitinib, in terms of PFS and OS. Vascular endothelial growth factor-A, KDR and MVD did not show statistical differences in terms of PFS and OS (Figures 2 and 3) (Tables 2 and 3).
In relation to pKDR-Y1775, log rank test showed statistical differences for this biomarker in terms of both PFS (log rank 0.01) and OS (log rank 0.007). The median survival time for the patients without the expression of pKDR (negative) was 23.4 months (range, 5–88) for PFS and 27.6 months (range, 8–88) for OS, whereas those cases with positive pKDR-Y1775 expression were associated with worse outcome, with a median survival time of 15.8 for PFS (range, 4–36) and 25.9 months (range, 4–51) for OS. Univariate analysis showed statistical differences for both PFS (P=0.017, HR: 4.02, 95% CI, 1.28–12.63) (Figure 2 and Table 2) and OS (P=0.015 HR: 5.34, 95% CI, 1.39–20.5) (Figure 3 and Table 3) After multivariate Cox proportional hazards regression analysis, pKDR-Y1775 expression remained significant for both PFS (P=0.01; HR: 5.35, 95% CI, 1.49–19.13) and OS (P=0.02; HR: 5.13, 95% CI, 1.25–21.05) suggesting that phosphorylation of KDR in Y1175 could be an independent predictive factor of sunitinib response in patients with clear cell metastatic RCC (Tables 2 and 3). On the basis of these cut-off points, Kaplan–Meier analysis for categorical values of each marker was performed to assess the correlation between the expression levels and prognosis status in patients treated with sunitinib in terms of PFS and OS. Vascular endothelial growth factor-A , KDR and MVD did not show statistical difference in terms of PFS and OS (Figures 2 and 3) (Tables 2 and 3). In relation to pKDR-Y1775, log rank test showed statistical differences for this biomarker in terms of both PFS (log rank 0.01) and OS (log rank 0.007). The median survival time for the patients without the expression of pKDR (negative) was 23.4 months (range, 5–88) for PFS and 27.6 months (range, 8–88) for OS, whereas those cases with positive pKDR-Y1775 expression were associated with worse outcome, with a median survival time of 15.8 for PFS (range, 4–36) and 25.9 months (range, 4–51) for OS. 
Univariate analysis showed statistical differences for both PFS (P=0.017, HR: 4.02, 95% CI, 1.28–12.63) (Figure 2 and Table 2) and OS (P=0.015 HR: 5.34, 95% CI, 1.39–20.5) (Figure 3 and Table 3) After multivariate Cox proportional hazards regression analysis, pKDR-Y1775 expression remained significant for both PFS (P=0.01; HR: 5.35, 95% CI, 1.49–19.13) and OS (P=0.02; HR: 5.13, 95% CI, 1.25–21.05) suggesting that phosphorylation of KDR in Y1175 could be an independent predictive factor of sunitinib response in patients with clear cell metastatic RCC (Tables 2 and 3).
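The cut-off-based grouping described above can be illustrated with a short sketch. This is not the authors' analysis code (the study used SPSS), and the patient values below are hypothetical:

```python
# Sketch only (the study's analysis used SPSS; this is not the authors' code).
# Dichotomize marker values at the cut-offs reported in the text.
CUTOFFS = {"VEGF-A": 60, "KDR": 200, "pKDR-Y1775": 0, "MVD": 48}

def classify(scores):
    """Label each marker 'high' if it exceeds its cut-off, else 'low'.
    For pKDR-Y1775 (cut-off 0) this reproduces the positive/negative split."""
    return {m: ("high" if v > CUTOFFS[m] else "low") for m, v in scores.items()}

# Hypothetical patient at the cohort mean values reported above
print(classify({"VEGF-A": 121.6, "KDR": 258.5, "pKDR-Y1775": 0, "MVD": 49}))
```

Under this split, a pKDR-Y1775 HScore of 0 falls in the "low" (negative) group, matching the positive/negative dichotomy used for that marker.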
null
null
[ "Patients", "Immunohistochemistry", "Statistical analysis", "Patient characteristics", "Vascular endothelial growth factor-A, KDR, pKDR-Y1775 and MVD in RCC", "pKDR-Y1775 in tumour stroma predicts clinical outcome" ]
[ "The study involved 23 biopsies from consecutive cases of clear cell metastatic RCC treated with sunitinib in first line between 2008 and 2013 obtained from the Biobank of Fundación Jiménez Díaz Hospital (Spain). To compare biomarkers' expression with baseline data, we included a control group consisting of biopsies from non-metastatic RCC patients without treatment (n=25). All patients gave written informed consent and sample collection was made with the approval of the Institutional Scientific and Ethical Committee.\nClinical–pathological data were obtained from the patient medical records and included sex, age, Eastern Cooperative Oncology group (ECOG) performance status, previous nephrectomy, site of metastases, number of disease sites and Memorial Sloan-Kettering Cancer Center risk classification (MSKCC risk factors), which stratifies patients with metastatic RCC into risk categories based on the number of adverse clinical and laboratory parameters present such as levels of serum haemoglobin, serum calcium and serum lactate dehydrogenase, ECOG performance status and time between diagnosis and treatment (Motzer et al, 2002).", "Consecutive 4 μm tissue sections were obtained from formalin-fixed paraffin-embedded samples. Antigen retrieval was performed in PT-Link (Dako, Glostrup, Denmark) for 20 min at 95 °C in high pH buffered solution (Dako). Endogenous peroxidase was blocked by immersing the sections in 0.03% hydrogen peroxide for 5 min. Slides were washed for 5 min with Tris-buffered saline solution containing Tween 20 at pH 7.6 and incubated with the primary antibodies (VEGF-A (Clone VG1 M7273, Dako) specific labels VEGF-A121, VEGF-A165 and VEGF-A189 isoforms), VEGF receptor 2 (Ref. 2479, Cell Signaling Technology, Inc., Danvers, MA, USA), phosphorylated-VEGF receptor 2 at Tyr1175 (Ref. 2478, Cell Signaling Technology, Inc.) 
and CD31 (Clone JC70A, Dako) for 20 min at room temperature, followed by incubation with the appropriate anti-Ig horseradish peroxidase-conjugated polymer (EnVision, Dako) to detect the antigen–antibody complex. Sections were then visualized with 3,3′-diaminobenzidine as a chromogen for 5 min and counterstained with haematoxylin. All immunohistochemical stainings were performed in a Dako Autostainer, and the same sections incubated with non-immunized serum were used as negative controls. As a positive control, sections of a human renal tumour with known expression of the markers were stained.\nExpression of the studied markers was assessed in a blinded fashion by two investigators (FR and SZ). Vascular endothelial growth factor-A was expressed in the cytoplasm of tumour cells. Vascular endothelial growth factor receptor 2 was detected in the membrane and cytoplasm of endothelial cells and, occasionally, in activated fibroblasts of the tumour stroma and in malignant cells. Only expression in endothelial cells was considered for the analysis. For pKDR and CD31, staining in endothelial cells was required for considering a tumour as positive. For VEGF-A, KDR and pKDR, a semiquantitative HistoScore (HScore) was calculated. The HScore was determined by estimation of the percentage of cells positively stained with low, medium or high staining intensity. The final score was determined after applying a weighting factor to each estimate. The following formula was used: HScore=(low%) × 1+(medium%) × 2+(high%) × 3, and the results ranged from 0 to 300. Microvascular density was calculated by the Chalkley counting procedure (Pallares et al, 2006). Briefly, a 25-point Chalkley eyepiece graticule (Olympus X250, Tokyo, Japan; Chalkley grid area 0.196 mm2) was applied to the ocular of the microscope and, at medium magnification ( × 200), the three most vascular areas of the tumour were quantified.", "All statistical analyses were performed using SPSS software version 20.0 (SPSS Inc., Chicago, IL, USA). 
Clinical and histopathologic information as well as the immunohistochemical results were collected in a database.\nFor potential VEGF-A and KDR association with the disease outcome, patients were divided into three expression groups (tertiles: low, medium, high) on the basis of their HScores. For MVD analysis, patients were divided according to the absolute number of CD31-positive structures. To evaluate the prognostic value of VEGF-A, KDR and MVD in our cohort, survival curves were estimated using the Kaplan–Meier method with the three groups as a factor. Significant survival differences between groups were determined by the log rank test. For MVD, the third tertile was established as the cut-off point, leaving low- and high-risk patient groups. The same approach was applied for VEGF-A and KDR, establishing the first tertile as the cut-off point.\nFor pKDR-Y1175 analysis, a cut-off point defining positive (pKDR-Y1175>0) and negative expression (pKDR-Y1175=0) was used. Patients were divided into two groups, survival curves were estimated and differences between groups were determined by the log rank test.\nThose variables with potential prognostic value suggested by univariate analysis were subjected to multivariate analysis with the Cox proportional hazards regression model. Overall survival (OS) and PFS were calculated from the date of diagnosis to the date of death or the last follow-up and to the date of sunitinib progression, respectively. A P-value <0.05 was considered statistically significant.", "Recruited data from patients at baseline are summarized in Table 1. The distribution of patients according to sex was similar: 52% females and 48% males. The median age for this cohort of patients was 62 years. In terms of ECOG performance status, most patients (61%) had a status of 1. Previous nephrectomy was carried out in 87% of the cases. Sites of metastases were diverse, including lung 35%, liver 13%, bone 22%, brain 4% and lymph nodes 26%. 
Number of disease sites was established as 1, 2 and ⩾3 (48%, 35% and 17%, respectively). Patients were grouped according to their MSKCC risk factor classification as favourable (61%) and intermediate (39%).\nThe control group comprised 25 biopsies from non-metastatic RCC patients without treatment. The median age of this group was 67 years, and the distribution of patients according to sex was 60% males and 40% females.", "To evaluate the expression of the selected proteins, immunohistochemistry assays were performed in patients treated with SU11248. A control group was included to establish a reference value for each marker. Vascular endothelial growth factor-A expression was diffusely detected in the cytoplasm of tumour cells, as well as in stromal cells, including fibroblasts, and in endothelial cells. Most of the cases showed stronger staining in the tumour than in the stroma. Expression of KDR was seen in endothelial cells, preferentially in the tumour stroma. In addition, KDR was also detected in isolated fibroblasts and malignant cells. Expression of pKDR-Y1775 was observed only in the endothelial cells of vascular structures in the tumour. Conversely, endothelial cells of vessels in adjacent non-tumoral renal tissue did not express pKDR-Y1775. Finally, CD31 expression was present in all vascular structures, in both tumour and non-tumoral renal tissue (Figure 1).\nHScore values of all patients for VEGF-A, KDR and pKDR-Y1775, as well as the absolute number of CD31-positive structures for MVD visualisation, are represented in histograms (Figure 1). 
The mean HScore for VEGF-A staining was 121.6 (range, 10–300); for KDR in endothelial cells, 258.5 (range, 150–300); and for pKDR-Y1775, 10.8 (range, 0–65); the mean number of CD31-positive vascular structures for MVD staining was 49 (range, 10–126).\nTo determine the predictive potential of these proteins in metastatic RCC patients treated with first-line sunitinib, we estimated cut-off points of 60 for VEGF-A, 200 for KDR, 0 for pKDR-Y1775 and 48 for MVD.", "On the basis of these cut-off points, Kaplan–Meier analysis for categorical values of each marker was performed to assess the correlation between expression levels and prognosis in patients treated with sunitinib, in terms of PFS and OS.\nVascular endothelial growth factor-A, KDR and MVD did not show statistical differences in terms of PFS or OS (Figures 2 and 3) (Tables 2 and 3).\nIn relation to pKDR-Y1775, the log rank test showed statistical differences for this biomarker in terms of both PFS (log rank P=0.01) and OS (log rank P=0.007). The median survival time for patients without pKDR expression (negative) was 23.4 months (range, 5–88) for PFS and 27.6 months (range, 8–88) for OS, whereas cases with positive pKDR-Y1775 expression were associated with worse outcome, with median survival times of 15.8 months for PFS (range, 4–36) and 25.9 months for OS (range, 4–51). Univariate analysis showed statistical differences for both PFS (P=0.017; HR: 4.02; 95% CI, 1.28–12.63) (Figure 2 and Table 2) and OS (P=0.015; HR: 5.34; 95% CI, 1.39–20.5) (Figure 3 and Table 3).\nAfter multivariate Cox proportional hazards regression analysis, pKDR-Y1775 expression remained significant for both PFS (P=0.01; HR: 5.35; 95% CI, 1.49–19.13) and OS (P=0.02; HR: 5.13; 95% CI, 1.25–21.05), suggesting that phosphorylation of KDR at Y1175 could be an independent predictive factor of sunitinib response in patients with clear cell metastatic RCC (Tables 2 and 3)."
]
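The HScore formula given in the methods (HScore = (low%) × 1 + (medium%) × 2 + (high%) × 3, range 0–300) is a simple weighted sum; a minimal sketch, with illustrative percentages rather than study data:

```python
def hscore(low_pct, medium_pct, high_pct):
    """Semiquantitative HistoScore: weight the percentage of cells staining
    at low, medium and high intensity by 1, 2 and 3 (result range 0-300)."""
    assert 0 <= low_pct + medium_pct + high_pct <= 100
    return low_pct * 1 + medium_pct * 2 + high_pct * 3

# Example: 20% low, 30% medium, 10% high intensity -> 20 + 60 + 30 = 110
print(hscore(20, 30, 10))  # 110
```

A tumour with 100% of cells at high intensity reaches the maximum of 300, matching the upper end of the KDR range reported above.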
[ null, null, null, null, null, null ]
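The survival comparisons reported here rest on the Kaplan–Meier product-limit estimator; below is a minimal pure-Python sketch of that estimator (the study itself used SPSS, and the follow-up times and event flags are invented for illustration):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.
    times: follow-up durations (e.g. months); events: 1 = event observed
    (progression/death), 0 = censored. Returns a list of
    (time, survival probability) pairs at each observed event time."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted({tt for tt, e in data if e == 1}):
        n = sum(1 for tt, _ in data if tt >= t)   # subjects still at risk at t
        d = sum(e for tt, e in data if tt == t)   # events occurring at t
        surv *= 1 - d / n
        curve.append((t, surv))
    return curve

# Toy cohort: four patients with PFS of 5, 10, 10 (censored) and 20 months
print(kaplan_meier([5, 10, 10, 20], [1, 1, 0, 1]))
```

The log rank test then compares two such curves (e.g. pKDR-positive vs pKDR-negative) by contrasting observed against expected event counts at each event time.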
[ "Materials and Methods", "Patients", "Immunohistochemistry", "Statistical analysis", "Results", "Patient characteristics", "Vascular endothelial growth factor-A, KDR, pKDR-Y1775 and MVD in RCC", "pKDR-Y1775 in tumour stroma predicts clinical outcome", "Discussion" ]
[ " Patients The study involved 23 biopsies from consecutive cases of clear cell metastatic RCC treated with sunitinib in first line between 2008 and 2013 obtained from the Biobank of Fundación Jiménez Díaz Hospital (Spain). To compare biomarkers' expression with baseline data, we included a control group consisting of biopsies from non-metastatic RCC patients without treatment (n=25). All patients gave written informed consent and sample collection was made with the approval of the Institutional Scientific and Ethical Committee.\nClinical–pathological data were obtained from the patient medical records and included sex, age, Eastern Cooperative Oncology group (ECOG) performance status, previous nephrectomy, site of metastases, number of disease sites and Memorial Sloan-Kettering Cancer Center risk classification (MSKCC risk factors), which stratifies patients with metastatic RCC into risk categories based on the number of adverse clinical and laboratory parameters present such as levels of serum haemoglobin, serum calcium and serum lactate dehydrogenase, ECOG performance status and time between diagnosis and treatment (Motzer et al, 2002).\nThe study involved 23 biopsies from consecutive cases of clear cell metastatic RCC treated with sunitinib in first line between 2008 and 2013 obtained from the Biobank of Fundación Jiménez Díaz Hospital (Spain). To compare biomarkers' expression with baseline data, we included a control group consisting of biopsies from non-metastatic RCC patients without treatment (n=25). 
All patients gave written informed consent and sample collection was made with the approval of the Institutional Scientific and Ethical Committee.\nClinical–pathological data were obtained from the patient medical records and included sex, age, Eastern Cooperative Oncology group (ECOG) performance status, previous nephrectomy, site of metastases, number of disease sites and Memorial Sloan-Kettering Cancer Center risk classification (MSKCC risk factors), which stratifies patients with metastatic RCC into risk categories based on the number of adverse clinical and laboratory parameters present such as levels of serum haemoglobin, serum calcium and serum lactate dehydrogenase, ECOG performance status and time between diagnosis and treatment (Motzer et al, 2002).\n Immunohistochemistry Consecutive 4 μm tissue sections were obtained from formalin-fixed paraffin-embedded samples. Antigen retrieval was performed in PT-Link (Dako, Glostrup, Denmark) for 20 min at 95 °C in high pH buffered solution (Dako). Endogenous peroxidase was blocked by immersing the sections in 0.03% hydrogen peroxide for 5 min. Slides were washed for 5 min with Tris-buffered saline solution containing Tween 20 at pH 7.6 and incubated with the primary antibodies (VEGF-A (Clone VG1 M7273, Dako) specific labels VEGF-A121, VEGF-A165 and VEGF-A189 isoforms), VEGF receptor 2 (Ref. 2479, Cell Signaling Technology, Inc., Danvers, MA, USA), phosphorylated-VEGF receptor 2 at Tyr1175 (Ref. 2478, Cell Signaling Technology, Inc.) and CD31 (Clone JC70A, Dako) for 20 min at room temperature, followed by incubation with the appropriate anti-Ig horseradish peroxidase-conjugated polymer (EnVision, Dako) to detect antigen–antibody. Sections were then visualized with 3,3′-diaminobenzidine as a chromogen for 5 min and counterstained with haematoxylin. All immunohistochemical stainings were performed in a Dako Autostainer and the same sections incubated with non-immunized serum were used as negative controls. 
As positive control, sections of a renal human tumour with known expression of the markers were stained.\nExpression of the studied markers was assessed in a blinded fashion by two investigators (FR and SZ). Vascular endothelial growth factor-A was expressed in the cytoplasm of tumour cells. Vascular endothelial growth factor receptor 2 was detected in the membrane and cytoplasm of endothelial cells, and, occasionally, in activated fibroblast of tumour stroma and malignant cells. Only expression in endothelial cells was considered for the analysis. For pKDR and CD31, staining in endothelial cells was required for considering a tumour as positive. For VEGF-A, KDR and pKDR, a semiquantitative HistoScore (HScore) was calculated. The HScore was determined by estimation of the percentage of cells positively stained with low, medium or high staining intensity. The final score was determined after applying a weighting factor to each estimate. The following formula was used: HScore=(low%) × 1+(medium%) × 2+(high%) × 3 and the results ranged from 0 to 300. Microvascular density was calculated by the Chalkley counting procedure (Pallares et al, 2006). Briefly, a 25-point Chalkley eyepiece graticule (Olympus X250, Tokyo, Japan; Chalkley grid area 0.196 mm2) was applied to the ocular of the microscope and at medium magnification ( × 200); the three most vascular areas of the tumour were quantified.\nConsecutive 4 μm tissue sections were obtained from formalin-fixed paraffin-embedded samples. Antigen retrieval was performed in PT-Link (Dako, Glostrup, Denmark) for 20 min at 95 °C in high pH buffered solution (Dako). Endogenous peroxidase was blocked by immersing the sections in 0.03% hydrogen peroxide for 5 min. Slides were washed for 5 min with Tris-buffered saline solution containing Tween 20 at pH 7.6 and incubated with the primary antibodies (VEGF-A (Clone VG1 M7273, Dako) specific labels VEGF-A121, VEGF-A165 and VEGF-A189 isoforms), VEGF receptor 2 (Ref. 
2479, Cell Signaling Technology, Inc., Danvers, MA, USA), phosphorylated-VEGF receptor 2 at Tyr1175 (Ref. 2478, Cell Signaling Technology, Inc.) and CD31 (Clone JC70A, Dako) for 20 min at room temperature, followed by incubation with the appropriate anti-Ig horseradish peroxidase-conjugated polymer (EnVision, Dako) to detect antigen–antibody. Sections were then visualized with 3,3′-diaminobenzidine as a chromogen for 5 min and counterstained with haematoxylin. All immunohistochemical stainings were performed in a Dako Autostainer and the same sections incubated with non-immunized serum were used as negative controls. As positive control, sections of a renal human tumour with known expression of the markers were stained.\nExpression of the studied markers was assessed in a blinded fashion by two investigators (FR and SZ). Vascular endothelial growth factor-A was expressed in the cytoplasm of tumour cells. Vascular endothelial growth factor receptor 2 was detected in the membrane and cytoplasm of endothelial cells, and, occasionally, in activated fibroblast of tumour stroma and malignant cells. Only expression in endothelial cells was considered for the analysis. For pKDR and CD31, staining in endothelial cells was required for considering a tumour as positive. For VEGF-A, KDR and pKDR, a semiquantitative HistoScore (HScore) was calculated. The HScore was determined by estimation of the percentage of cells positively stained with low, medium or high staining intensity. The final score was determined after applying a weighting factor to each estimate. The following formula was used: HScore=(low%) × 1+(medium%) × 2+(high%) × 3 and the results ranged from 0 to 300. Microvascular density was calculated by the Chalkley counting procedure (Pallares et al, 2006). 
Briefly, a 25-point Chalkley eyepiece graticule (Olympus X250, Tokyo, Japan; Chalkley grid area 0.196 mm2) was applied to the ocular of the microscope and at medium magnification ( × 200); the three most vascular areas of the tumour were quantified.\n Statistical analysis All statistical analyses were performed using SPSS software version 20.0 (SPSS Inc., Chicago, IL, USA). Clinical and histopathologic information as well the immunohistochemical results were collected in a database.\nFor potential VEGF-A and KDR association with the disease outcome, patients were divided into three expression groups (tertiles: low, medium, high) on the basis of their HScores. For MVD analysis, patients were divided according to its absolute number of CD31-positive structures. To evaluate the prognostic value of VEGF-A, KDR and MVD in our cohort, survival curves were estimated using the Kaplan–Meier method with the three groups as a factor. Significant survival differences between groups were determined by the log rank test. The third tertile was established as the cut-off point, leaving low- and high-risk patient groups, for MVD. The same approach was applied for VEGF-A and KDR, establishing the first tertile as the cut-off point.\nFor pKDR-Y1175 analysis, a cut-off point determined as positive (pKDR-Y1175>0) and negative expression (pKDR-Y1175=0) was used. Patients were divided into two groups, survival curves were estimated and differences between groups were determined by the log rank test.\nThose variables that had potential prognostic suggested by univariate analysis were subjected to multivariate analysis with the Cox proportional hazards regression model. Overall survival (OS) and PFS were calculated from the date of diagnosis to the date of death or the last follow-up and to the date of sunitinib progression, respectively. 
A P-value <0.05 was considered as statistically significant.\nAll statistical analyses were performed using SPSS software version 20.0 (SPSS Inc., Chicago, IL, USA). Clinical and histopathologic information as well the immunohistochemical results were collected in a database.\nFor potential VEGF-A and KDR association with the disease outcome, patients were divided into three expression groups (tertiles: low, medium, high) on the basis of their HScores. For MVD analysis, patients were divided according to its absolute number of CD31-positive structures. To evaluate the prognostic value of VEGF-A, KDR and MVD in our cohort, survival curves were estimated using the Kaplan–Meier method with the three groups as a factor. Significant survival differences between groups were determined by the log rank test. The third tertile was established as the cut-off point, leaving low- and high-risk patient groups, for MVD. The same approach was applied for VEGF-A and KDR, establishing the first tertile as the cut-off point.\nFor pKDR-Y1175 analysis, a cut-off point determined as positive (pKDR-Y1175>0) and negative expression (pKDR-Y1175=0) was used. Patients were divided into two groups, survival curves were estimated and differences between groups were determined by the log rank test.\nThose variables that had potential prognostic suggested by univariate analysis were subjected to multivariate analysis with the Cox proportional hazards regression model. Overall survival (OS) and PFS were calculated from the date of diagnosis to the date of death or the last follow-up and to the date of sunitinib progression, respectively. A P-value <0.05 was considered as statistically significant.", "The study involved 23 biopsies from consecutive cases of clear cell metastatic RCC treated with sunitinib in first line between 2008 and 2013 obtained from the Biobank of Fundación Jiménez Díaz Hospital (Spain). 
To compare biomarker expression with baseline data, we included a control group consisting of biopsies from non-metastatic RCC patients without treatment (n=25). All patients gave written informed consent and sample collection was made with the approval of the Institutional Scientific and Ethical Committee.\nClinical–pathological data were obtained from the patient medical records and included sex, age, Eastern Cooperative Oncology Group (ECOG) performance status, previous nephrectomy, site of metastases, number of disease sites and Memorial Sloan-Kettering Cancer Center risk classification (MSKCC risk factors), which stratifies patients with metastatic RCC into risk categories based on the number of adverse clinical and laboratory parameters present, such as levels of serum haemoglobin, serum calcium and serum lactate dehydrogenase, ECOG performance status and time between diagnosis and treatment (Motzer et al, 2002).", "Consecutive 4 μm tissue sections were obtained from formalin-fixed paraffin-embedded samples. Antigen retrieval was performed in PT-Link (Dako, Glostrup, Denmark) for 20 min at 95 °C in high-pH buffered solution (Dako). Endogenous peroxidase was blocked by immersing the sections in 0.03% hydrogen peroxide for 5 min. Slides were washed for 5 min with Tris-buffered saline solution containing Tween 20 at pH 7.6 and incubated with the primary antibodies: VEGF-A (Clone VG1, M7273, Dako; which specifically labels the VEGF-A121, VEGF-A165 and VEGF-A189 isoforms), VEGF receptor 2 (Ref. 2479, Cell Signaling Technology, Inc., Danvers, MA, USA), phosphorylated VEGF receptor 2 at Tyr1175 (Ref. 2478, Cell Signaling Technology, Inc.) and CD31 (Clone JC70A, Dako) for 20 min at room temperature, followed by incubation with the appropriate anti-Ig horseradish peroxidase-conjugated polymer (EnVision, Dako) to detect the antigen–antibody complex. Sections were then visualized with 3,3′-diaminobenzidine as a chromogen for 5 min and counterstained with haematoxylin. 
All immunohistochemical stainings were performed in a Dako Autostainer, and the same sections incubated with non-immunized serum were used as negative controls. As a positive control, sections of a human renal tumour with known expression of the markers were stained.\nExpression of the studied markers was assessed in a blinded fashion by two investigators (FR and SZ). Vascular endothelial growth factor-A was expressed in the cytoplasm of tumour cells. Vascular endothelial growth factor receptor 2 was detected in the membrane and cytoplasm of endothelial cells and, occasionally, in activated fibroblasts of the tumour stroma and in malignant cells. Only expression in endothelial cells was considered for the analysis. For pKDR and CD31, staining in endothelial cells was required for considering a tumour as positive. For VEGF-A, KDR and pKDR, a semiquantitative HistoScore (HScore) was calculated. The HScore was determined by estimation of the percentage of cells positively stained with low, medium or high staining intensity. The final score was determined after applying a weighting factor to each estimate. The following formula was used: HScore=(low%) × 1+(medium%) × 2+(high%) × 3, and the results ranged from 0 to 300. Microvascular density was calculated by the Chalkley counting procedure (Pallares et al, 2006). Briefly, a 25-point Chalkley eyepiece graticule (Olympus X250, Tokyo, Japan; Chalkley grid area 0.196 mm2) was applied to the ocular of the microscope and, at medium magnification ( × 200), the three most vascular areas of the tumour were quantified.", "All statistical analyses were performed using SPSS software version 20.0 (SPSS Inc., Chicago, IL, USA). Clinical and histopathologic information as well as the immunohistochemical results were collected in a database.\nFor potential VEGF-A and KDR association with the disease outcome, patients were divided into three expression groups (tertiles: low, medium, high) on the basis of their HScores. 
For MVD analysis, patients were divided according to the absolute number of CD31-positive structures. To evaluate the prognostic value of VEGF-A, KDR and MVD in our cohort, survival curves were estimated using the Kaplan–Meier method with the three groups as a factor. Significant survival differences between groups were determined by the log rank test. For MVD, the third tertile was established as the cut-off point, leaving low- and high-risk patient groups. The same approach was applied for VEGF-A and KDR, establishing the first tertile as the cut-off point.\nFor pKDR-Y1175 analysis, a cut-off point defining positive (pKDR-Y1175>0) and negative expression (pKDR-Y1175=0) was used. Patients were divided into two groups, survival curves were estimated and differences between groups were determined by the log rank test.\nThose variables with potential prognostic value suggested by univariate analysis were subjected to multivariate analysis with the Cox proportional hazards regression model. Overall survival (OS) and PFS were calculated from the date of diagnosis to the date of death or the last follow-up and to the date of sunitinib progression, respectively. A P-value <0.05 was considered statistically significant.", " Patient characteristics Recruited data from patients at baseline are summarized in Table 1. The distribution of patients according to sex was similar: 52% females and 48% males. The median age for this cohort of patients was 62 years. In terms of ECOG performance status, most patients (61%) had a status of 1. Previous nephrectomy was carried out in 87% of the cases. Sites of metastases were diverse, including lung 35%, liver 13%, bone 22%, brain 4% and lymph nodes 26%. Number of disease sites was established as 1, 2 and ⩾3 (48%, 35% and 17%, respectively). 
Patients were grouped according to their MSKCC risk factor classification as favourable (61%) and intermediate (39%).\nThe control group comprised of 25 biopsies from non-metastatic RCC patients without treatment. The median age of this group was 67 years, and the distribution of patients according to sex was 60% males and 40% females.\nRecruited data from patients at baseline are summarized in Table 1. The distribution of patients according to sex was similar; 52% females and 48% males. The median age for this cohort of patients was 62 years. In terms of ECOG performance status, most of patients, 61%, were clustered as equal to 1. Previous nephrectomy was carried out in 87% of the cases. Sites of metastases were diverse, including lung 35%, liver 13%, bone 22%, brain 4% and lymph nodes 26%. Number of disease sites was established as 1, 2 and ⩾3 (48%, 35% and 17%, respectively). Patients were grouped according to their MSKCC risk factor classification as favourable (61%) and intermediate (39%).\nThe control group comprised of 25 biopsies from non-metastatic RCC patients without treatment. The median age of this group was 67 years, and the distribution of patients according to sex was 60% males and 40% females.\n Vascular endothelial growth factor-A, KDR, pKDR-Y1775 and MVD in RCC To evaluate the expression of the selected proteins, immunohistochemistry assays were performed in patients treated with SU11248. A control group was included to establish a reference value for each marker. Vascular endothelial growth factor-A expression was diffusely detected in the cytoplasm of tumour cells, as well as in the stromal, including fibroblasts, and endothelial cells. Most of the cases showed stronger staining in the tumour than stroma. Expression of KDR was seen in endothelial cells, preferentially in tumour stroma. In addition, KDR was also detected in isolated fibroblasts and malignant cells. 
Expression of pKDR-Y1775 was only observed in the endothelial cells of vascular structures in the tumour. Conversely, endothelial cells of vessels in adjacent non-tumoral renal tissue did not express pKDR-Y1775. Finally, CD31 expression was present in all vascular structures, both in tumour and non-tumoral renal tissue (Figure 1).\nHScore values of all patients for VEGF-A, KDR and pKDR-Y1775 as well as absolute number of CD31-positive structures for MVD visualisation are represented in histograms (Figure 1). The HScore mean value obtained for VEGF-A staining was 121.6 (range, 10–300); for KDR in endothelial cells 258.5 (range, 150–300); for pKDR-Y1775 10.8 (range, 0–65); and the mean value of CD31-positive vascular structures for MVD staining was 49 (range, 10–126).\nTo determine the predictive potential of these proteins in metastatic RCC patients treated with sunitinib in first line, we estimated a cut-off point of 60 for VEGF-A, of 200 for KDR, of 0 for pKDR-Y1775 and of 48 for MVD.\nTo evaluate the expression of the selected proteins, immunohistochemistry assays were performed in patients treated with SU11248. A control group was included to establish a reference value for each marker. Vascular endothelial growth factor-A expression was diffusely detected in the cytoplasm of tumour cells, as well as in the stromal, including fibroblasts, and endothelial cells. Most of the cases showed stronger staining in the tumour than stroma. Expression of KDR was seen in endothelial cells, preferentially in tumour stroma. In addition, KDR was also detected in isolated fibroblasts and malignant cells. Expression of pKDR-Y1775 was only observed in the endothelial cells of vascular structures in the tumour. Conversely, endothelial cells of vessels in adjacent non-tumoral renal tissue did not express pKDR-Y1775. 
Finally, CD31 expression was present in all vascular structures, both in tumour and non-tumoral renal tissue (Figure 1).\nHScore values of all patients for VEGF-A, KDR and pKDR-Y1775 as well as absolute number of CD31-positive structures for MVD visualisation are represented in histograms (Figure 1). The HScore mean value obtained for VEGF-A staining was 121.6 (range, 10–300); for KDR in endothelial cells 258.5 (range, 150–300); for pKDR-Y1775 10.8 (range, 0–65); and the mean value of CD31-positive vascular structures for MVD staining was 49 (range, 10–126).\nTo determine the predictive potential of these proteins in metastatic RCC patients treated with sunitinib in first line, we estimated a cut-off point of 60 for VEGF-A, of 200 for KDR, of 0 for pKDR-Y1775 and of 48 for MVD.\n pKDR-Y1775 in tumour stroma predicts clinical outcome On the basis of these cut-off points, Kaplan–Meier analysis for categorical values of each marker was performed to assess the correlation between the expression levels and prognosis status in patients treated with sunitinib in terms of PFS and OS.\nVascular endothelial growth factor-A , KDR and MVD did not show statistical difference in terms of PFS and OS (Figures 2 and 3) (Tables 2 and 3).\nIn relation to pKDR-Y1775, log rank test showed statistical differences for this biomarker in terms of both PFS (log rank 0.01) and OS (log rank 0.007). The median survival time for the patients without the expression of pKDR (negative) was 23.4 months (range, 5–88) for PFS and 27.6 months (range, 8–88) for OS, whereas those cases with positive pKDR-Y1775 expression were associated with worse outcome, with a median survival time of 15.8 for PFS (range, 4–36) and 25.9 months (range, 4–51) for OS. 
Univariate analysis showed statistically significant differences for both PFS (P=0.017; HR: 4.02, 95% CI: 1.28–12.63) (Figure 2 and Table 2) and OS (P=0.015; HR: 5.34, 95% CI: 1.39–20.5) (Figure 3 and Table 3). After multivariate Cox proportional hazards regression analysis, pKDR-Y1175 expression remained significant for both PFS (P=0.01; HR: 5.35, 95% CI: 1.49–19.13) and OS (P=0.02; HR: 5.13, 95% CI: 1.25–21.05), suggesting that phosphorylation of KDR at Y1175 could be an independent predictive factor of sunitinib response in patients with clear cell metastatic RCC (Tables 2 and 3).
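The dichotomised survival comparison used here can be illustrated with a minimal Kaplan–Meier sketch (hypothetical helper names and toy data for illustration only; the authors' analysis was run in SPSS, and the survival figures reported above are not reproduced by this example):

```python
def km_curve(times, events):
    """Kaplan-Meier survival estimates S(t) at each distinct event time.

    times: follow-up in months; events: 1 = event observed, 0 = censored.
    """
    s, curve = 1.0, []
    for t in sorted({ti for ti, e in zip(times, events) if e == 1}):
        n_at_risk = sum(1 for ti in times if ti >= t)
        n_events = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        s *= 1 - n_events / n_at_risk
        curve.append((t, s))
    return curve


def km_median(times, events):
    """First time point at which the survival estimate falls to 0.5 or below."""
    for t, s in km_curve(times, events):
        if s <= 0.5:
            return t
    return None  # median not reached within follow-up


# Toy example: four patients, all with observed progression
print(km_curve([4, 9, 15, 23], [1, 1, 1, 1]))
```

Groups (for example, pKDR-Y1175-negative versus -positive) would each be passed through `km_curve` separately and their median survival times compared.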
Patient characteristics

Baseline data of the recruited patients are summarized in Table 1. The sex distribution was balanced: 52% female and 48% male. The median age of this cohort was 62 years. In terms of ECOG performance status, most patients (61%) were classified as ECOG 1. Previous nephrectomy had been carried out in 87% of cases. Sites of metastases were diverse, including lung (35%), liver (13%), bone (22%), brain (4%) and lymph nodes (26%). The number of disease sites was 1, 2 or ⩾3 in 48%, 35% and 17% of patients, respectively. Patients were grouped according to their MSKCC risk classification as favourable (61%) or intermediate (39%).

The control group comprised 25 biopsies from non-metastatic RCC patients without treatment. The median age of this group was 67 years, and the sex distribution was 60% male and 40% female.
Discussion

This study evaluates the role of VEGF-A, KDR, pKDR-Y1175 and MVD in metastatic RCC treated with sunitinib and their potential to predict significant clinical benefit in terms of statistically longer PFS and OS.

The vascular endothelial growth factor pathway has been extensively characterised in RCC as a key mechanism in the development of angiogenesis (Takahashi et al, 1994; Nakagawa et al, 1997; Tomisawa et al, 1999) and, as a result, as a relevant therapeutic target (Rini, 2009). Sunitinib is a multitargeted receptor tyrosine kinase inhibitor of the VEGF receptors, among others, which interacts with the ATP-binding pocket of these kinases and acts as a competitive inhibitor with respect to ATP.
Its efficacy in patients with RCC refractory to cytokine-based therapy was demonstrated in two phase II trials (Motzer et al, 2006a, 2006b), as well as in previously untreated patients in a phase III trial (Motzer et al, 2007).

Although anti-angiogenic therapy has revolutionised the treatment of metastatic RCC, the response varies widely from patient to patient in terms of PFS and OS, and no apparent explanation is found in most cases (Motzer et al, 2007, 2009). It is precisely this differential outcome that justifies the need to identify biomarkers that can predict the clinical benefit of sunitinib.

In addition to the clinical and laboratory-based factors used as prognostic criteria, of which the MSKCC classification is the best known (Motzer et al, 1999), several molecules have been explored as potential biological indicators of response to SU11248. Some of these studies have shown an association between levels of soluble VEGF-A isoforms and PFS (Paule et al, 2010; Porta et al, 2010). Other recent studies have found an association between several hypoxia-related proteins and SU11248 efficacy, as well as an association between low VEGFR3 expression and worse outcome (Garcia-Donas et al, 2013). Circulating endothelial cells and circulating bone marrow-derived progenitor cells have also been explored as potential biomarkers (Gruenwald et al, 2010; Farace et al, 2011). Even at the genetic level, recent studies have revealed differential outcomes based on the presence of polymorphisms in the VEGF and VEGFR genes (Scartozzi et al, 2013) or on miRNA expression profiles (Gamez-Pozo et al, 2012). Terakawa et al (2013) suggested that KDR expression levels could be useful for identifying the metastatic RCC patients most likely to benefit from treatment with sunitinib; although several biomarkers were studied, only VEGFR2 expression appeared to be independently related to PFS and OS on multivariate analysis.
In the analysis carried out in our panel of patients, we describe for the first time the correlation of pKDR-Y1175 expression with PFS and OS, in terms of the clinical benefit of sunitinib-based therapy, in patients with metastatic RCC.

At present, little is known about the predictive role of pKDR-Y1175 in response to treatment. The phosphorylation profile and intracellular location of KDR have been investigated in both normal and neoplastic kidneys (Fox et al, 2004). Although the phosphorylated epitopes studied (Y1059 and Y1214) differed from our marker, that study showed that pKDR is present in a wide variety of renal tumours, suggesting that anti-VEGFR therapy might have direct effects on tumour cells. Furthermore, pKDR-Y1175 has been associated with poor prognosis in endometrial carcinomas (Giatromanolaki et al, 2006).

Angiogenesis and its signalling proteins have been extensively studied in several tumour types, and their importance in tumour progression is widely accepted. However, their role in modulating the response to anti-angiogenic therapies in cancer is still under debate. Recent evidence has shown correlations between angiogenesis and response to tyrosine kinase inhibitors that target angiogenesis receptors (Rosa et al, 2013), including sunitinib. Supporting this line of research, our analysis provides novel data on the role of active angiogenesis in predicting the benefit of sunitinib in RCC patients. These findings require further validation in additional clinical series to confirm their potential impact on outcome prediction.
Keywords: renal cell carcinoma; sunitinib; biomarker; VEGF-A; KDR; angiogenesis; microvascular density; progression-free survival; overall survival
Materials and Methods

Patients

The study involved 23 biopsies from consecutive cases of clear cell metastatic RCC treated with first-line sunitinib between 2008 and 2013, obtained from the Biobank of Fundación Jiménez Díaz Hospital (Spain). To compare biomarker expression with baseline data, we included a control group consisting of biopsies from non-metastatic RCC patients without treatment (n=25). All patients gave written informed consent, and sample collection was made with the approval of the Institutional Scientific and Ethical Committee. Clinical–pathological data were obtained from the patient medical records and included sex, age, Eastern Cooperative Oncology Group (ECOG) performance status, previous nephrectomy, site of metastases, number of disease sites and Memorial Sloan-Kettering Cancer Center (MSKCC) risk classification, which stratifies patients with metastatic RCC into risk categories based on the number of adverse clinical and laboratory parameters present, such as serum haemoglobin, serum calcium and serum lactate dehydrogenase levels, ECOG performance status and time between diagnosis and treatment (Motzer et al, 2002).
Immunohistochemistry

Consecutive 4 μm tissue sections were obtained from formalin-fixed, paraffin-embedded samples. Antigen retrieval was performed in a PT-Link (Dako, Glostrup, Denmark) for 20 min at 95 °C in high-pH buffered solution (Dako). Endogenous peroxidase was blocked by immersing the sections in 0.03% hydrogen peroxide for 5 min. Slides were washed for 5 min with Tris-buffered saline containing Tween 20 at pH 7.6 and incubated for 20 min at room temperature with the primary antibodies: VEGF-A (clone VG1, M7273, Dako; which specifically labels the VEGF-A121, VEGF-A165 and VEGF-A189 isoforms), VEGF receptor 2 (ref. 2479, Cell Signaling Technology, Inc., Danvers, MA, USA), phosphorylated VEGF receptor 2 at Tyr1175 (ref. 2478, Cell Signaling Technology, Inc.) and CD31 (clone JC70A, Dako), followed by incubation with the appropriate anti-Ig horseradish peroxidase-conjugated polymer (EnVision, Dako) for antigen–antibody detection. Sections were then visualised with 3,3′-diaminobenzidine as chromogen for 5 min and counterstained with haematoxylin. All immunohistochemical staining was performed in a Dako Autostainer, and the same sections incubated with non-immunised serum were used as negative controls. As positive controls, sections of a human renal tumour with known expression of the markers were stained.
Expression of the studied markers was assessed in a blinded fashion by two investigators (FR and SZ). Vascular endothelial growth factor-A was expressed in the cytoplasm of tumour cells. Vascular endothelial growth factor receptor 2 was detected in the membrane and cytoplasm of endothelial cells and, occasionally, in activated fibroblasts of the tumour stroma and in malignant cells. Only expression in endothelial cells was considered for the analysis. For pKDR and CD31, staining in endothelial cells was required to consider a tumour positive. For VEGF-A, KDR and pKDR, a semiquantitative HistoScore (HScore) was calculated. The HScore was determined by estimating the percentage of cells positively stained with low, medium or high intensity. The final score was determined after applying a weighting factor to each estimate, using the formula HScore = (low%) × 1 + (medium%) × 2 + (high%) × 3, with results ranging from 0 to 300. Microvascular density was calculated by the Chalkley counting procedure (Pallares et al, 2006). Briefly, a 25-point Chalkley eyepiece graticule (Olympus X250, Tokyo, Japan; Chalkley grid area 0.196 mm2) was applied to the ocular of the microscope and, at medium magnification (×200), the three most vascular areas of the tumour were quantified.
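As a quick illustration, the HScore weighting can be written as a small helper (hypothetical function name; the percentages are the share of cells stained at each intensity):

```python
def hscore(low_pct, medium_pct, high_pct):
    """Semiquantitative HistoScore: HScore = low% x 1 + medium% x 2 + high% x 3,
    ranging from 0 (no staining) to 300 (100% of cells stained at high intensity)."""
    if not 0 <= low_pct + medium_pct + high_pct <= 100:
        raise ValueError("intensity percentages must total between 0 and 100")
    return low_pct * 1 + medium_pct * 2 + high_pct * 3


# e.g. a tumour with 30% low-, 20% medium- and 10% high-intensity cells:
print(hscore(30, 20, 10))  # -> 100
```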
Statistical analysis

All statistical analyses were performed using SPSS software version 20.0 (SPSS Inc., Chicago, IL, USA). Clinical and histopathological information, as well as the immunohistochemical results, were collected in a database. To assess the potential association of VEGF-A and KDR with disease outcome, patients were divided into three expression groups (tertiles: low, medium, high) on the basis of their HScores. For the MVD analysis, patients were divided according to the absolute number of CD31-positive structures. To evaluate the prognostic value of VEGF-A, KDR and MVD in our cohort, survival curves were estimated using the Kaplan–Meier method with the three groups as a factor. Significant survival differences between groups were determined by the log-rank test. The third tertile was established as the cut-off point for MVD, yielding low- and high-risk patient groups; the same approach was applied for VEGF-A and KDR, establishing the first tertile as the cut-off point. For the pKDR-Y1175 analysis, a cut-off separating positive (pKDR-Y1175>0) and negative (pKDR-Y1175=0) expression was used. Patients were divided into two groups, survival curves were estimated and differences between groups were determined by the log-rank test. Variables with potential prognostic value suggested by univariate analysis were subjected to multivariate analysis with the Cox proportional hazards regression model. Overall survival (OS) and PFS were calculated from the date of diagnosis to the date of death or last follow-up, and to the date of progression on sunitinib, respectively. A P-value <0.05 was considered statistically significant.
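The two-group log-rank comparison described above can be sketched in pure Python (a didactic implementation using the standard hypergeometric-variance formulation with one degree of freedom, not the SPSS routine the authors used):

```python
import math


def logrank_test(times_a, events_a, times_b, events_b):
    """Two-sample log-rank test (1 degree of freedom).

    times: follow-up in months; events: 1 = event observed, 0 = censored.
    Returns (chi-square statistic, two-sided P-value).
    """
    pooled = [(t, e, "A") for t, e in zip(times_a, events_a)]
    pooled += [(t, e, "B") for t, e in zip(times_b, events_b)]
    observed_a = expected_a = variance = 0.0
    for t in sorted({ti for ti, e, _ in pooled if e == 1}):
        n = sum(1 for ti, _, _ in pooled if ti >= t)                 # at risk overall
        n_a = sum(1 for ti, _, g in pooled if ti >= t and g == "A")  # at risk in A
        d = sum(1 for ti, e, _ in pooled if ti == t and e == 1)      # events at t
        d_a = sum(1 for ti, e, g in pooled if ti == t and e == 1 and g == "A")
        observed_a += d_a
        expected_a += d * n_a / n
        if n > 1:  # hypergeometric variance of d_a at this event time
            variance += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    if variance == 0:
        return 0.0, 1.0
    chi2 = (observed_a - expected_a) ** 2 / variance
    return chi2, math.erfc(math.sqrt(chi2 / 2))  # chi-square(1) upper tail
```

A positive-versus-negative pKDR split would be passed as the two groups; identical survival experience yields a statistic near zero and a P-value near 1.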
Most of the cases showed stronger staining in the tumour than stroma. Expression of KDR was seen in endothelial cells, preferentially in tumour stroma. In addition, KDR was also detected in isolated fibroblasts and malignant cells. Expression of pKDR-Y1775 was only observed in the endothelial cells of vascular structures in the tumour. Conversely, endothelial cells of vessels in adjacent non-tumoral renal tissue did not express pKDR-Y1775. Finally, CD31 expression was present in all vascular structures, both in tumour and non-tumoral renal tissue (Figure 1). HScore values of all patients for VEGF-A, KDR and pKDR-Y1775 as well as absolute number of CD31-positive structures for MVD visualisation are represented in histograms (Figure 1). The HScore mean value obtained for VEGF-A staining was 121.6 (range, 10–300); for KDR in endothelial cells 258.5 (range, 150–300); for pKDR-Y1775 10.8 (range, 0–65); and the mean value of CD31-positive vascular structures for MVD staining was 49 (range, 10–126). To determine the predictive potential of these proteins in metastatic RCC patients treated with sunitinib in first line, we estimated a cut-off point of 60 for VEGF-A, of 200 for KDR, of 0 for pKDR-Y1775 and of 48 for MVD. pKDR-Y1775 in tumour stroma predicts clinical outcome: On the basis of these cut-off points, Kaplan–Meier analysis for categorical values of each marker was performed to assess the correlation between the expression levels and prognosis status in patients treated with sunitinib in terms of PFS and OS. Vascular endothelial growth factor-A , KDR and MVD did not show statistical difference in terms of PFS and OS (Figures 2 and 3) (Tables 2 and 3). In relation to pKDR-Y1775, log rank test showed statistical differences for this biomarker in terms of both PFS (log rank 0.01) and OS (log rank 0.007). 
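The HScore values summarized above (range 0–300) follow the standard semi-quantitative immunohistochemistry score, which weights staining intensity by the percentage of cells at each intensity. The paper does not spell out its exact formula, so the sketch below assumes the common intensity × percentage definition; the `hscore` helper and the VEGF-A example values are illustrative only.

```python
def hscore(pct_by_intensity):
    """Semi-quantitative HScore: staining intensity (0-3) weighted by the
    percentage of cells stained at that intensity (percentages sum to 100).
    Yields a value in [0, 300]."""
    assert abs(sum(pct_by_intensity.values()) - 100) < 1e-9, "percentages must sum to 100"
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

# Example: 40% of cells unstained, 30% weak, 20% moderate, 10% strong
score = hscore({0: 40, 1: 30, 2: 20, 3: 10})  # 0*40 + 1*30 + 2*20 + 3*10 = 100
positive_for_vegfa = score > 60  # dichotomise at the VEGF-A cut-off used above
```

A marker is then treated as "positive" or "negative" by comparing the score against the study's cut-off (60 for VEGF-A, 200 for KDR, 0 for pKDR-Y1775).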
The median survival time for patients without pKDR-Y1775 expression (negative) was 23.4 months (range, 5–88) for PFS and 27.6 months (range, 8–88) for OS, whereas cases with positive pKDR-Y1775 expression were associated with worse outcome, with a median survival time of 15.8 months (range, 4–36) for PFS and 25.9 months (range, 4–51) for OS. Univariate analysis showed statistical differences for both PFS (P=0.017; HR: 4.02, 95% CI, 1.28–12.63) (Figure 2 and Table 2) and OS (P=0.015; HR: 5.34, 95% CI, 1.39–20.5) (Figure 3 and Table 3). After multivariate Cox proportional hazards regression analysis, pKDR-Y1775 expression remained significant for both PFS (P=0.01; HR: 5.35, 95% CI, 1.49–19.13) and OS (P=0.02; HR: 5.13, 95% CI, 1.25–21.05), suggesting that phosphorylation of KDR at Y1175 could be an independent predictive factor of sunitinib response in patients with clear cell metastatic RCC (Tables 2 and 3). Discussion: This study evaluates the role of VEGF-A, KDR, pKDR-Y1175 and MVD in metastatic RCC after sunitinib treatment and their potential to predict significant clinical benefit in terms of statistically longer PFS and OS. The VEGF pathway has been extensively characterised in RCC as a key mechanism in the development of angiogenesis (Takahashi et al, 1994; Nakagawa et al, 1997; Tomisawa et al, 1999) and, as a result, a relevant therapeutic target (Rini, 2009). Sunitinib is a multitargeted receptor tyrosine kinase inhibitor of the VEGF receptors, among others, which interacts with the ATP-binding pocket of these kinases and acts as a competitive inhibitor of ATP. Its efficacy in patients with RCC refractory to cytokine-based therapy was demonstrated in two phase II trials (Motzer et al, 2006a, 2006b), as well as in previously untreated patients in a phase III trial (Motzer et al, 2007).
Although anti-angiogenic therapy has revolutionised the treatment of metastatic RCC, the response varies widely from patient to patient in terms of PFS and OS, and in most cases no apparent explanation is found (Motzer et al, 2007, 2009). It is this variability in outcome that justifies the need to identify biomarkers that can predict the clinical benefit of sunitinib. In addition to the clinical and laboratory-based factors used as prognostic criteria, the MSKCC classification being the best known (Motzer et al, 1999), several molecules have been explored as potential biological indicators of response to SU11248. Some of these studies have shown an association between the levels of soluble VEGF-A isoforms and PFS (Paule et al, 2010; Porta et al, 2010). Other recent studies have found an association between several proteins involved in hypoxia and SU11248 efficacy, as well as low VEGFR3 expression associated with worse outcome (Garcia-Donas et al, 2013). Circulating endothelial cells as well as circulating bone marrow-derived progenitor cells have also been explored as valuable biomarkers (Gruenwald et al, 2010; Farace et al, 2011). Even at the genetic level, novel studies have revealed a differential outcome based on the presence of polymorphisms in VEGF and VEGFR genes (Scartozzi et al, 2013) or on miRNA expression profiles (Gamez-Pozo et al, 2012). Terakawa et al (2013) suggested that it would be useful to consider KDR expression levels to identify the metastatic RCC patients likely to benefit from treatment with sunitinib; although several biomarkers were studied, only VEGFR2 expression appeared to be independently related to PFS as well as OS on multivariate analysis. In the analysis carried out in our panel of patients, we describe for the first time the correlation of pKDR-Y1175 expression with PFS and OS in patients with metastatic RCC in terms of the clinical benefit of sunitinib-based therapy.
Presently, little is known about the predictive role of pKDR-Y1175 in response to treatment. The phosphorylation profile and the intracellular location of KDR have been investigated in both normal and neoplastic kidneys (Fox et al, 2004). Although the phosphorylated epitopes were different from our marker (Y1059 and Y1214), that study showed that pKDR is present in a wide variety of renal tumours, suggesting that anti-VEGFR therapy might have direct effects on tumour cells. Furthermore, pKDR-Y1775 has been associated with poor prognosis in endometrial carcinomas (Giatromanolaki et al, 2006). Angiogenesis and its signalling proteins have been studied extensively in several tumour types, and their importance in tumour progression is widely accepted. However, their role in modulating the response to anti-angiogenic therapies in cancer is still under debate. Recent evidence has shown correlations between angiogenesis and response to tyrosine kinase inhibitors that target angiogenesis receptors (Rosa et al, 2013), including sunitinib. Supporting this research, our analysis provides novel data on the role of active angiogenesis in RCC patients in predicting the benefit of sunitinib. These findings require further validation in additional clinical series to confirm their potential impact in terms of outcome prediction.
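The median PFS/OS figures reported in the results come from Kaplan–Meier analysis of right-censored follow-up. As a minimal, dependency-free sketch of how such curves and medians are obtained (the `kaplan_meier` and `median_survival` helpers are hypothetical, assuming the usual product-limit estimator; a real analysis would use a survival package):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up in months; events: 1 = event (progression/death), 0 = censored.
    Returns a list of (time, survival_probability) steps at each event time."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        at_risk = len(data) - i                       # subjects still under observation
        deaths = sum(e for tt, e in data if tt == t)  # events at this time point
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        while i < len(data) and data[i][0] == t:      # consume ties (events or censorings)
            i += 1
    return curve

def median_survival(curve):
    """First time at which the estimated survival drops to 0.5 or below."""
    return next((t for t, s in curve if s <= 0.5), None)

# Toy example: four patients, all progressing at 5, 10, 15 and 20 months;
# the survival estimate reaches 0.5 at the second event, so median PFS is 10 months
curve = kaplan_meier([5, 10, 15, 20], [1, 1, 1, 1])
```

Group differences between such curves (e.g. pKDR-Y1775 positive vs negative) are then tested with the log-rank statistic, and adjusted hazard ratios with a Cox proportional hazards model, as in the study.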
Background: Sunitinib represents a widely used therapy for metastatic renal cell carcinoma patients. Even so, there is a group of patients who show toxicity without clinical benefit. In this work, we have analysed pivotal molecular targets involved in angiogenesis (vascular endothelial growth factor (VEGF)-A, VEGF receptor 2 (KDR), phosphorylated (p)KDR and microvascular density (MVD)) to test their potential value as predictive biomarkers of clinical benefit in sunitinib-treated renal cell carcinoma patients. Methods: Vascular endothelial growth factor-A, KDR and pKDR-Y1775 expression as well as CD31, for MVD visualisation, were determined by immunohistochemistry in 48 renal cell carcinoma patients, including 23 metastatic cases treated with sunitinib. Threshold was defined for each biomarker, and univariate and multivariate analyses for progression-free survival (PFS) and overall survival (OS) were carried out. Results: The HistoScore mean value obtained for VEGF-A was 121.6 (range, 10-300); for KDR 258.5 (range, 150-300); for pKDR-Y1775 10.8 (range, 0-65) and the mean value of CD31-positive structures for MVD visualisation was 49 (range, 10-126). Statistical differences for PFS (P=0.01) and OS (P=0.007) were observed for pKDR-Y1775 in sunitinib-treated patients. Importantly, pKDR-Y1775 expression remained significant after multivariate Cox analysis for PFS (P=0.01; HR: 5.35, 95% CI, 1.49-19.13) and for OS (P=0.02; HR: 5.13, 95% CI, 1.25-21.05). Conclusions: Our results suggest that the expression of phosphorylated (i.e., activated) KDR in tumour stroma might be used as predictive biomarker for the clinical outcome in renal cell carcinoma first-line sunitinib-treated patients.
null
null
6,407
348
[ 192, 506, 311, 195, 320, 316 ]
9
[ "patients", "pkdr", "expression", "vegf", "cells", "endothelial", "kdr", "tumour", "analysis", "pkdr y1775" ]
[ "sunitinib response patients", "sunitinib biomarkers", "sunitinib biomarkers studied", "metastatic rcc risk", "metastatic rcc patients" ]
null
null
null
null
null
null
[CONTENT] renal cell carcinoma | sunitinib | biomarker | VEGF-A | KDR | angiogenesis | microvascular density | progression-free survival | overall survival [SUMMARY]
null
[CONTENT] renal cell carcinoma | sunitinib | biomarker | VEGF-A | KDR | angiogenesis | microvascular density | progression-free survival | overall survival [SUMMARY]
null
null
null
[CONTENT] Adult | Aged | Aged, 80 and over | Angiogenesis Inhibitors | Biomarkers, Tumor | Carcinoma, Renal Cell | Cohort Studies | Disease-Free Survival | Female | Humans | Indoles | Kaplan-Meier Estimate | Kidney Neoplasms | Male | Microvessels | Middle Aged | Multivariate Analysis | Neovascularization, Pathologic | Phosphoproteins | Proportional Hazards Models | Pyrroles | Sunitinib | Vascular Endothelial Growth Factor A | Vascular Endothelial Growth Factor Receptor-2 [SUMMARY]
null
[CONTENT] Adult | Aged | Aged, 80 and over | Angiogenesis Inhibitors | Biomarkers, Tumor | Carcinoma, Renal Cell | Cohort Studies | Disease-Free Survival | Female | Humans | Indoles | Kaplan-Meier Estimate | Kidney Neoplasms | Male | Microvessels | Middle Aged | Multivariate Analysis | Neovascularization, Pathologic | Phosphoproteins | Proportional Hazards Models | Pyrroles | Sunitinib | Vascular Endothelial Growth Factor A | Vascular Endothelial Growth Factor Receptor-2 [SUMMARY]
null
null
null
[CONTENT] sunitinib response patients | sunitinib biomarkers | sunitinib biomarkers studied | metastatic rcc risk | metastatic rcc patients [SUMMARY]
null
[CONTENT] sunitinib response patients | sunitinib biomarkers | sunitinib biomarkers studied | metastatic rcc risk | metastatic rcc patients [SUMMARY]
null
null
null
[CONTENT] patients | pkdr | expression | vegf | cells | endothelial | kdr | tumour | analysis | pkdr y1775 [SUMMARY]
null
[CONTENT] patients | pkdr | expression | vegf | cells | endothelial | kdr | tumour | analysis | pkdr y1775 [SUMMARY]
null
null
null
[CONTENT] range | pkdr y1775 | y1775 | patients | pkdr | cells | pfs | os | endothelial | expression [SUMMARY]
null
[CONTENT] patients | cells | pkdr | expression | vegf | endothelial | tumour | range | kdr | y1775 [SUMMARY]
null
null
null
[CONTENT] HistoScore | VEGF-A | 121.6 | 10-300 | KDR | 150-300 | 10.8 | 0-65 | MVD | 49 | 10-126 ||| ||| 5.35 | 95% | CI | 1.49-19.13 | 5.13 | 95% | CI | 1.25 [SUMMARY]
null
[CONTENT] ||| ||| VEGF)-A | VEGF | 2 | KDR | MVD ||| KDR | MVD | 48 | 23 ||| ||| ||| HistoScore | VEGF-A | 121.6 | 10-300 | KDR | 150-300 | 10.8 | 0-65 | MVD | 49 | 10-126 ||| ||| 5.35 | 95% | CI | 1.49-19.13 | 5.13 | 95% | CI | 1.25 ||| KDR | stroma | first [SUMMARY]
null
Upregulation of hsa_circ_0004812 promotes COVID-19 cytokine storm via hsa-miR-1287-5p/IL6R, RIG-I axis.
35989496
SARS-CoV-2 is one of the most contagious viruses in the Coronaviridae (CoV) family, which has become a pandemic. The aim of this study is to understand more about the role of hsa_circ_0004812 in the SARS-CoV-2 related cytokine storm and its associated molecular mechanisms.
BACKGROUND
cDNA synthesis was performed after total RNA was extracted from the peripheral blood mononuclear cells (PBMC) of 46 patients with symptomatic COVID-19, 46 patients with asymptomatic COVID-19, and 46 healthy controls. The expression levels of hsa_circ_0004812, hsa-miR-1287-5p, IL6R, and RIG-I were determined using qRT-PCR, and the potential interaction between these molecules was confirmed by bioinformatics tools and correlation analysis.
MATERIALS AND METHODS
hsa_circ_0004812, IL6R, and RIG-I are expressed higher in the severe symptom group compared with the negative control group. Also, the relative expression of these genes in the asymptomatic group is lower than in the severe symptom group. The expression level of hsa-miR-1287-5p was positively correlated with symptoms in patients. The results of the bioinformatics analysis predicted the sponging effect of hsa_circ_0004812 as a competing endogenous RNA on hsa-miR-1287-5p. Moreover, there was a significant positive correlation between hsa_circ_0004812, RIG-I, and IL-6R expressions, and also a negative expression correlation between hsa_circ_0004812 and hsa-miR-1287-5p and between hsa-miR-1287-5p, RIG-I, and IL-6R.
RESULTS
The results of this in-vitro and in silico study show that hsa_circ_0004812/hsa-miR-1287-5p/IL6R, RIG-I can play an important role in the outcome of COVID-19.
CONCLUSION
[ "COVID-19", "Cell Proliferation", "Cytokine Release Syndrome", "DNA, Complementary", "Humans", "Leukocytes, Mononuclear", "MicroRNAs", "RNA, Circular", "Receptors, Cell Surface", "Receptors, Interleukin-6", "SARS-CoV-2", "Up-Regulation" ]
9538103
BACKGROUND
The COVID‐19 pandemic has become a very challenging and controversial problem. The severe acute respiratory syndrome coronavirus‐2 (SARS‐CoV‐2) virus is a newly emerging single‐stranded RNA virus that belongs to the beta‐coronavirus family and causes COVID‐19 (coronavirus disease 2019). COVID‐19, with a dramatic fatality rate, 1 , 2 , 3 triggers the immune system to respond and simultaneously represses the immune response to allow viral replication. The interplay among the infected cells, viruses, and immune system induces a high level of cytokine and chemokine secretion known as the “cytokine storm”. 3 , 4 CircRNAs are created through the back‐splicing process, which results in a closed‐loop structure without poly(A) tails and 5′ caps. 5 These RNA molecules are highly stable and resistant to exonuclease‐mediated degradation. It has been found that circRNAs, as key regulators, can modulate gene expression through various molecular mechanisms, including a sponging effect on miRNAs or acting as protein scaffolds. 6 It has been reported that noncoding RNAs can affect the inflammatory cytokine storm. 7 CircRNAs, which are regarded as endogenous short RNAs that may be classified into coding and noncoding circRNAs, 8 play a proven role in COVID‐19‐related pathogenesis. 9 For example, Wu et al. 9 found that differentially expressed circRNAs in COVID‐19 patients with recurrent disease are mainly involved in the regulation of host cell immunity and inflammation, substance and energy metabolism, the cell cycle, and cell apoptosis. The most well‐known function of circRNAs is their activity as competing endogenous RNAs (ceRNAs) through a circRNA/miRNA/mRNA regulatory network. 
10 , 11 Recent evidence has suggested a role for circRNAs in various viral infections, such as human papillomavirus, herpes simplex virus, Epstein–Barr virus, human immunodeficiency virus, Middle East respiratory syndrome coronavirus, and hepatitis B virus infection, and confirmed them as potential biomarkers between viral and non‐viral states. 12 One of the circRNAs recognized for its contribution to human viral infection is hsa_circ_0004812. Hsa_circ_0004812 originates from the NINL gene and is located on chromosome 20. 13 In 2020, Zhang and colleagues revealed that hsa_circ_0004812 was upregulated in chronic hepatitis B (CHB) and HBV‐infected hepatoma cells. Additionally, this circRNA was associated with immune suppression by HBV infection through the hsa_circ_0004812/hsa‐miR‐1287‐5p/FSTL1 pathway. 14 According to some studies, circRNAs play an important role in gene expression as microRNA sponges via their microRNA response elements (MREs). 5 MicroRNAs have also been indicated as potential biomarkers for COVID‐19 diagnosis. 15 In a 2020 study, high‐throughput sequencing was used to evaluate the expression levels of different miRNAs, revealing that 35 miRNAs were upregulated and 38 miRNAs were downregulated in human patients with COVID‐19. 16 MicroRNAs are small noncoding RNAs, about 20 to 25 nucleotides in length, that act as regulators of gene expression by affecting the stability and translation of their target mRNAs. 17 In this study, we selected hsa‐miR‐1287‐5p, located on chromosome 10, which was previously predicted to have a significant binding site in the SARS‐CoV‐2 genome. 18 JAK/STAT is one of the pathways involved in the inflammatory responses of COVID‐19, such as the cytokine storm. 14 Interleukin 6 (IL‐6) and other inflammatory mediators such as IL‐12 and TNF are important components of the cytokine storm and play a role in the pathogenesis of COVID‐19‐associated pneumonia. 
19 IL‐6 can activate STAT3 during inflammatory processes, and both play a regulatory role in the cytokine storm in COVID‐19 infection. 20 Some studies indicate that IL‐6R antagonists can improve the hyper‐inflammation state in hospitalized COVID‐19 patients. 21 Retinoic acid‐inducible gene I (RIG‐I) is a member of the RIG‐I‐like receptor (RLR) family, playing an important role in sensing viral nucleic acids and in the production of pro‐inflammatory and antiviral proteins. 22 Interestingly, we found that hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I have an impact on the cytokine storm through the JAK/STAT and STAT3 signaling pathways. The aim of this study was to develop a better understanding of the molecular mechanisms involved in COVID‐19 inflammatory responses. Based on the primary in‐silico analysis and the literature, we focused specifically on the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis to evaluate its potential role in COVID‐19 inflammatory responses. In this regard, we assessed the expression of the members of our candidate axis, and its correlation with the JAK/STAT and STAT3 signaling pathways was measured.
null
null
RESULTS
Basic and demographic information of patients: Participants in the current study comprised: group (1), 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years; group (2), 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years; and group (3), the negative controls, which included 24 (52%) males and 22 (48%) females, with a mean age of 42.65 years. The patient and control groups were matched in terms of age, sex, and blood group (Table 2). The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease, with 10 (22%) and 12 (26%) patients, and immunodeficiency disease, with 4 (9%) and 0 (0%) patients, respectively (Table 2). Table 3 demonstrates the different symptoms of the patients with symptomatic COVID‐19 participating in this study. Participants' baseline and demographic information (p‐values 1: in comparison with negative control, 2: in comparison with asymptomatic COVID‐19). Features of symptomatic COVID‐19 patients. 
Evaluation of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls: We used quantitative real‐time PCR to evaluate gene expression levels in two subgroups of COVID‐19 patients and in negative controls, with a total of 46 samples in each group. The expression of hsa_circ_0004812 was significantly higher in symptomatic COVID‐19 samples than in the other groups, and it was also upregulated in asymptomatic patients compared with negative controls (Table 4, Figure 1A). The hsa‐miR‐1287‐5p expression was markedly lower in symptomatic COVID‐19 patients compared with negative controls and asymptomatic patients (Table 4, Figure 1B). Furthermore, the results indicate that RIG‐I and IL6R were significantly upregulated in symptomatic and asymptomatic patients compared with negative controls (Table 4, Figure 1C,D). Expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R in asymptomatic and symptomatic COVID‐19 patients, and negative controls (p‐values computed from the Kruskal–Wallis test; p‐value 1: in comparison with negative control; p‐value 2: in comparison with asymptomatic COVID‐19). Relative expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, IL6R, and RIG‐I are shown in a box‐and‐whisker plot (10–90 percentile) (ns: p > .05; * p ≤ .05; *** p ˂ .001). (A) Upregulation of hsa_circ_0004812 in COVID‐19 patients compared with healthy controls. (B) Downregulation of hsa‐miR‐1287‐5p in COVID‐19 patients compared with healthy controls. (C) Upregulation of RIG‐I in COVID‐19 patients compared with healthy controls. (D) Upregulation of IL6R in COVID‐19 patients compared with healthy controls. 
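The group comparisons above are reported with Kruskal–Wallis p-values. For illustration, the H statistic underlying that test can be computed from pooled ranks as sketched below; this is a stdlib-only toy without the tie correction that a real implementation (e.g. `scipy.stats.kruskal`) applies:

```python
def average_ranks(values):
    """Rank all observations (1-based), assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the run of tied values
        avg = (i + j) / 2 + 1           # average of the 1-based rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups (no tie correction)."""
    pooled = [x for g in groups for x in g]
    ranks = average_ranks(pooled)
    n = len(pooled)
    h, start = 0.0, 0
    for g in groups:
        rank_sum = sum(ranks[start:start + len(g)])
        h += rank_sum * rank_sum / len(g)
        start += len(g)
    return 12 / (n * (n + 1)) * h - 3 * (n + 1)
```

The p-value is then obtained from a chi-squared distribution with k−1 degrees of freedom; well-separated groups yield a larger H, identical groups an H near zero.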
The hsa_circ_0004812 as a potential regulator of RIG‐I and IL6R through sponging hsa‐miR‐1287‐5p: To confirm whether hsa_circ_0004812 may act as a molecular sponge for hsa‐miR‐1287‐5p, the expression correlation between these ncRNAs was evaluated in the samples. There was a significant negative correlation between the expression of hsa_circ_0004812 and hsa‐miR‐1287‐5p (r = −0.283, p value = .001, Figure 2B). 
Using bioinformatics tools, we predicted that RIG‐I and IL6R are potential targets of the hsa_circ_0004812/hsa‐miR‐1287‐5p pair, and correlation analysis of the scaled RT‐PCR results revealed a positive expression correlation between hsa_circ_0004812 and IL6R and RIG‐I (r = 0.760, p value ˂ .001 / r = 0.236, p value = .005; Figure 2C,D). In addition, a negative expression correlation between hsa‐miR‐1287‐5p and IL6R and RIG‐I was observed (r = −0.234, p value = .006 / r = −0.514, p value ˂ .001; Figure 2E,F). Based on the expression pattern of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis, a distinct expression profile between COVID‐19 samples and negative controls is shown by the heatmap analysis (Figure 2A). The final output confirms the significance of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R pathway in the COVID‐19 cytokine storm. (A) LogFC heatmap of gene expression. (B) The negative correlation between hsa_circ_0004812 and hsa‐miR‐1287‐5p expression levels in COVID‐19 (r = −0.283, p value = .001). (C) The positive correlation between hsa_circ_0004812 and IL6R expression levels in COVID‐19 (r = 0.760, p value ˂ .001). (D) The positive correlation between hsa_circ_0004812 and RIG‐I expression levels in COVID‐19 (r = 0.236, p value = .005). (E) The negative correlation between hsa‐miR‐1287‐5p and IL6R expression levels in COVID‐19 (r = −0.234, p value = .006). (F) The negative correlation between hsa‐miR‐1287‐5p and RIG‐I expression levels in COVID‐19 (r = −0.514, p value ˂ .001). 
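The r values quoted for Figure 2 are correlation coefficients; the text does not state whether Pearson's or Spearman's coefficient was used. As an illustrative sketch, here is plain Pearson's r in the standard library (Spearman's version would apply the same formula to the ranks of each variable); the example data are invented:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # covariance term and the two standard-deviation terms (unnormalised)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# A perfectly inverse relationship (as expected between a circRNA and the
# miRNA it sponges) gives r = -1
r = pearson_r([1.0, 2.0, 3.0], [6.0, 4.0, 2.0])  # → -1.0
```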
Functional enrichment analysis of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis: Using the circRNA‐miRNA pair and the miRNA‐mRNA pair, we generated the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R triple network. We used the miRPathDB and Enrichr databases to evaluate the pathways related to the miRNA and the mRNAs, respectively. 
26 , 27 “Regulatory circuits of the STAT3 signaling pathway” is the significant pathway related to hsa‐miR‐1287‐5p according to WikiPathways data (p‐value = .006). Additionally, the significant gene ontology (GO) terms associated with IL6R and RIG‐I are shown in Figure 3. GO analysis displayed that IL6R, RIG‐I, and hsa‐miR‐1287‐5p in our proposed ceRNA regulatory network were involved in the regulation of the JAK–STAT and PI3K‐AKT signaling pathways. Overall workflow of bioinformatics analyses.
CONCLUSION
In conclusion, the interactions of the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis were supported by this bioinformatics and expression study. Gene ontology analysis indicated that the ceRNA axis is involved in the regulation of the JAK–STAT and PI3K‐AKT signaling pathways. The circRNA‐miRNA‐mRNA pathway analysis in COVID‐19 patients suggested that hsa_circ_0004812 regulates RIG‐I and IL6R by sponging hsa‐miR‐1287‐5p. Moreover, the correlation analysis revealed that this candidate axis is more prominent in patients with more severe COVID‐19 symptoms and could be considered a potential target for understanding, identifying, and treating COVID‐19.
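The expression differences underlying these conclusions were quantified with the Livak 2^−ΔΔCt method described in the methods. A minimal sketch of that calculation follows; the Ct values are hypothetical, not taken from the study:

```python
def relative_expression(ct_target_case, ct_ref_case, ct_target_control, ct_ref_control):
    # Livak method: fold change = 2 ** -(ddCt), where
    # ddCt = (Ct_target - Ct_reference)_case - (Ct_target - Ct_reference)_control.
    d_case = ct_target_case - ct_ref_case
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_case - d_control)

# Hypothetical Ct values: relative to the reference gene (e.g., ACTB or U48),
# the target crosses threshold 2 cycles earlier in the case sample.
fold = relative_expression(25.0, 20.0, 27.0, 20.0)
print(fold)  # → 4.0 (target ~4-fold upregulated in the case sample)
```

Because each qPCR cycle roughly doubles the product, a ΔΔCt of −2 corresponds to a 2² = 4-fold increase, which is why the method reports fold changes on a power-of-two scale.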
[ "Patient characteristics and sample collection", "Isolation of total RNA and cDNA synthesis", "Real time PCR", "Statistical analysis", "Bioinformatics Predictions", "Basic and demographic information of patients", "Evaluation of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls", "The hsa_circ_0004812 as a potential regulator of RIG‐I and IL6R through sponging hsa‐miR 1287‐5p", "Functional enrichment analysis of hsa_circ_0004812/hsa‐miR‐1287‐5p/ RIG‐I, IL6R axis", "FUNDING INFORMATION", "PATIENT CONSENT STATEMENT" ]
[ "To investigate the role of hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis in COVID‐19 viral infection, patients with SARS‐CoV‐2 severe symptoms (n = 46), without symptoms (n = 46), and a negative control group (n = 46) were selected for the study. The first group of patients was hospitalized at Valiasr and Shariati hospitals (Fasa, Fars, Iran). All samples were confirmed by reverse transcription‐polymerase chain reaction (RT‐PCR). Individuals presenting as negative controls had no respiratory insufficiency or hyperinflammation, such as bacterial and fungal infections, inflammatory bowel disease, autoimmune disorders, or cancer. We worked on peripheral blood mononuclear cells (PBMC) isolated from whole blood; 5 μl blood samples were collected in EDTA‐containing tubes and processed in the shortest possible time. All of the participants signed a consent form, and this study was approved by the ethics committee of FUMS (IR.FUMS.REC.1399.182).", "After exposing the blood cells to the reagent, PBMC were isolated using the density‐gradient method. Trizol isolation reagent (Invitrogen, Thermo Fisher) was used to extract total RNA. Then, RNA was extracted from PBMC according to the manufacturer's instructions. The RNA purity was assessed with a NanoDrop spectrophotometer (BioTek, HTX multi‐mode reader) and gel electrophoresis. The first‐strand cDNA was synthesized using the PrimeScript™ RT Reagent Kit (BioFact™, Cat. No: BR441‐096) following the manufacturer's recommendations.", "The relative expression of genes was determined using Power SYBR® Green PCR Master Mix (ABI, USA) on the 7500 real‐time PCR system (ABI, life technology). A 15 μl reaction contained 1 μl of cDNA, 7.5 μl BioFACT™ master mix including SYBR Green (Ampliqon, Cat. No: A325402‐25), 0.75 μl of each primer, and 5 μl DNase‐free deionized H2O. Thermal cycling was performed at 45 cycles of 95°C for 20 s and then 60°C for 30 s. We used two internal controls, including ACTB and U48. 
The sequences of each primer are stated in Table 1. The Livak method (2−∆∆Ct) was utilized to calculate the relative expression.\nPrimer sequences\nAbbreviations: ACTB, actin beta; IL6R, interleukin 6 receptor; RIG‐I, RNA sensor RIG‐I; U48, SNORD48, small nucleolar RNA.", "Data analysis was performed in SPSS software v.26, and graphs were drawn with GraphPad Prism v.8. Comparison of expression between the three groups of samples, including symptomatic COVID‐19, asymptomatic COVID‐19, and negative control, was evaluated using the Kruskal–Wallis H non‐parametric test. The Chi‐square test was applied to compare sex and blood type between groups, and the one‐way ANOVA test was used to compare age. The correlation of the elements in the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I pathway was measured by the Spearman correlation coefficient test. A p‐value less than .05 was considered a significant level.", "Hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis was constructed based on circRNA‐miRNA pair, miRNA‐mRNA pairs. CircRNA‐miRNA interaction was downloaded from the database Circinteractome.\n23\n The MiRWalk2.0\n24\n and miRTargetLink\n25\n databases were used to predict the interaction between miRNA and mRNAs. Then, the pathway enrichment analysis of miRNA and mRNAs was performed using miRPathDB 2.0\n26\n and Enrichr\n27\n databases.", "Participants in the current study included: group (1) 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years, group (2) 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years, and group (3) the negative control group, which included 24 (52%) males and 22 (48%) females, with a mean age of 42.65. 
The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease with 10 (22%), 12 (26%) patients and immunodeficiency disease with 4 (9%), 0 (0%) patients, respectively (Table 2). Table 3 demonstrates the different symptoms of patients with symptomatic COVID‐19 participating in this study.\nParticipants' baseline and demographic information (p‐values 1: in comparison with negative control, 2: in comparison with asymptomatic COVID‐19)\nFeatures of symptomatic COVID‐19 patients", "We used quantitative real‐time PCR to evaluate the gene expression levels in two different subgroups of COVID‐19 patients and negative controls, with a total of 46 samples in each group. The expression of hsa_circ_0004812 is significantly higher in symptomatic COVID‐19 samples in comparison with other groups, and it was also upregulated in asymptomatic patients compared with negative controls (Table 4, Figure 1A). The hsa‐miR‐1287‐5p expression is obviously lower in symptomatic COVID‐19 patients compared with negative controls and asymptomatic patients (Table 4, Figure 1B). Furthermore, the results indicate that RIG‐I and IL6R are significantly upregulated in symptomatic and asymptomatic patients compared with negative controls (Table 3, Figure 1C,D).\nExpression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R in asymptomatic and symptomatic COVID‐19 patients, and negative controls (p‐value computed from Kruskal–Wallis Test. p‐value 1: In comparison with negative control. p‐value 2: In comparison to asymptomatic covid)\nRelative expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, IL6R, and RIG‐I were shown in a box and whisker plot (10–90 percentile). (ns: p > .05; * p ≤ .05; *** p ˂ .001). (A) Upregulation of hsa_circ_0004812 in COVID‐19 patients compared with healthy. (B) Down expression of hsa‐miR‐1287‐5p in COVID‐19 patients compared with healthy controls. 
(C) Upregulation of RIG‐I in COVID‐19 patients compared with healthy controls. (D) Upregulation of IL6R in COVID‐19 patients compared with healthy controls.", "To confirm whether hsa_circ_0004812 may act as the molecular sponge for hsa‐miR‐1287‐5p, the expression correlation between these ncRNAs was evaluated in the samples. There was a significant negative correlation between the expression of hsa_circ_0004812 and hsa‐miR‐1287‐5p (r = −0.283, p value = .001, Figure 2B). Using bioinformatics tools, we have predicted that RIG‐I and IL6R are potential targets of the hsa_circ_0004812/hsa‐miR‐1287‐5p, and correlation analysis of the RT‐PCR results revealed a positive expression correlation between hsa_circ_0004812 and IL6R and RIG‐I (r = 0.760, p value < .001/r = 0.236, p value = .005, Figure 2C,D). In addition, the negative expression correlation between hsa‐miR‐1287‐5p and IL6R and RIG‐I was observed (r = −0.234, p value = .006/r = −0.514, p value < .001, Figure 2E,F). According to the results from the expression pattern of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis, a distinct expression profile between COVID‐19 samples and negative controls is shown by the heatmap analysis (Figure 2A). The final output confirms the significance of this hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R pathway in the COVID‐19 cytokine storm.\n(A) LogFC heatmap of gene expression. (B) The negative correlation between hsa_circ_0004812 and hsa‐miR‐1287‐5p expression levels in COVID‐19 (r = −0.283, p value = .001). (C) The positive correlation between hsa_circ_0004812 and IL6R expression levels in COVID‐19 (r = 0.760, p value < .001). (D) The positive correlation between hsa_circ_0004812 and RIG‐I expression levels in COVID‐19 (r = 0.236, p value = .005). (E) The negative correlation between hsa‐miR‐1287‐5p and IL6R expression levels in COVID‐19 (r = −0.234, p value = .006). 
(F) The negative correlation between hsa‐miR‐1287‐5p and RIG‐I expression levels in COVID‐19 (r = −0.514, p value = ˂.001).", "Using the circRNA‐miRNA pair and the miRNA‐mRNA pair, we generated the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R triple network. We used the miRPathDB and Enrichr databases to evaluate the association between miRNA and mRNA related pathways, respectively.\n26\n, \n27\n “Regulatory circuits of the STAT3 signaling pathway” is the significant pathway related to hsa‐miR‐1287‐5p via WikiPathways data (p‐value = .006). Additionally, the significant gene ontology (GO) associated with IL6R and RIG‐I has been shown in Figure 3. GO analysis displayed that IL6R, RIG‐I, and hsa‐miR‐1287‐5p in our proposed ceRNA regulatory network were involved in the regulation of JAK–STAT and PI3K‐AKT signaling pathways.\nOverall workflow of bioinformatics analyses", "This work was supported by Fasa University of Medical Sciences under Grant number 99157.", "Informed consent was obtained from all individual participants included in the study." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "BACKGROUND", "MATERIALS AND METHODS", "Patient characteristics and sample collection", "Isolation of total RNA and cDNA synthesis", "Real time PCR", "Statistical analysis", "Bioinformatics Predictions", "RESULTS", "Basic and demographic information of patients", "Evaluation of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls", "The hsa_circ_0004812 as a potential regulator of RIG‐I and IL6R through sponging hsa‐miR 1287‐5p", "Functional enrichment analysis of hsa_circ_0004812/hsa‐miR‐1287‐5p/ RIG‐I, IL6R axis", "DISCUSSION", "CONCLUSION", "FUNDING INFORMATION", "CONFLICT OF INTEREST", "PATIENT CONSENT STATEMENT" ]
[ "The COVID‐19 pandemic has become a very challenging and controversial problem. The severe acute respiratory syndrome coronavirus‐2 (SARS‐CoV‐2) virus is a newly emerging single‐stranded RNA virus that belongs to the beta‐coronavirus family and causes COVID‐19 (coronavirus disease 2019). COVID‐19, with a dramatic fatality rate,\n1\n, \n2\n, \n3\n triggers the immune system to respond and simultaneously represses the immune response to allow the virus replication. The interplay among the infected cells, viruses, and immune system induces a high level of cytokine and chemokine secretion known as the “cytokine storm”.\n3\n, \n4\n\n\nCircRNAs are created through the back‐splicing process, which results in a closed‐loop structure without poly(A) tails and 5′ caps.\n5\n These RNA molecules are highly stable and resistant to exonuclease‐mediated degradation. It has been found that circRNAs, as the key regulators, could modulate gene expression through various molecular mechanisms, including a sponging effect on miRNA or acting as protein scaffolds.\n6\n\n\nIt has been reported that noncoding RNAs could have an effect on the inflammatory cytokine storm.\n7\n CircRNAs, which are regarded as endogenous short RNAs that may be classified into coding and noncoding circRNAs,\n8\n play a proven role in COVID‐19‐related pathogenesis.\n9\n For example, Wu et al.\n9\n found that differentially expressed circRNAs in COVID‐19 patients with recurrent disease are mainly involved in the regulation of host cell immunity and inflammation, substance and energy metabolism, cell cycle, and cell apoptosis .\nThe most well‐known function of circRNAs is their activity as competing endogenous RNAs (ceRNAs) through a circRNA/miRNA/mRNA regulatory network.\n10\n, \n11\n Recent evidence suggested the role of circRNAs in various viral infections such as human papilloma virus infection, herpes simplex virus, Epstein–Barr virus, human immunodeficiency virus, Middle East respiratory syndrome 
coronavirus, and hepatitis B virus infection and confirmed it as a potential biomarker between viral and non‐viral states.\n12\n\n\nOne of the circRNAs that has been recognized for its contribution to human viral‐associated infection is hsa_circ_0004812. Hsa_circ_0004812 originated from the NINL gene and is located on chromosome 20.\n13\n In 2020, Zhang and colleagues revealed that hsa_circ_0004812 was upregulated in chronic hepatitis B (CHB) and HBV infected hepatoma cells. Additionally, this circRNA was associated with immune suppression by HBV infection through the hsa_circ_0004812/hsa‐miR‐1287‐5p/FSTL1 pathway.\n14\n According to some studies, CircRNAs play an important role in gene expression as microRNA sponges via their microRNA response elements (MREs).\n5\n\n\nMicroRNAs have also been indicated as potential biomarkers for COVID‐19 diagnosis.\n15\n High‐throughput sequencing was used during the study in 2020 to evaluate the expression levels of different miRNAs. Then, they revealed that in human patients with COVID‐19, 35 miRNAs were upregulated, and 38 miRNAs were downregulated.\n16\n\n\nMicroRNAs are small noncoding RNAs that are about 20 to 25 nucleotides in length and act as regulators of gene expression by affecting the stability and translation of their target mRNAs.\n17\n In this study, hsa‐miR‐1287‐5p located on chromosome 10 was selected, which was previously predicted to have a significant binding site in the SARS‐CoV‐2 genome.\n18\n\n\nJAK/STAT is one of the pathways involved in inflammatory responses of COVID‐19, such as cytokine storm.\n14\n Interleukin 6 (IL‐6) and other inflammatory components such as IL‐12 and TNF are important components of cytokine storm and play a role in the pathogenesis of COVID‐19 associated pneumonia.\n19\n IL‐6 can activate STAT3 during inflammatory processes, and both of them play a regulatory role in the cytokine storm in COVID‐19 infection.\n20\n Some studies indicate that IL‐6R antagonists can improve the 
hyper‐inflammation state in hospitalized COVID‐19 patients.\n21\n Retinoid‐inducible gene 1 (RIG‐I) is a member of the retinoic acid‐inducible gene I (RIG‐I)‐like receptor (RLR) family, playing an important role in sensing viral nucleic acids and in the production of pro‐inflammatory and antiviral proteins.\n22\n Interestingly, we found that hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I have an impact on the cytokine storm through the JAK/STAT and STAT3 signaling pathways.\nThe aim of this study was to develop a better understanding of the molecular mechanisms involved in COVID‐19 inflammatory responses. According to the primary in‐silico analysis and literature studies, we focused specifically on the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis to evaluate its potential role in COVID‐19 inflammatory responses. In this regard, we assessed the expression of the members of our candidate axis, and its correlation with the JAK/STAT and STAT3 signaling pathways was measured.", " Patient characteristics and sample collection To investigate the role of hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis in COVID‐19 viral infection, patients with SARS‐CoV‐2 severe symptoms (n = 46), without symptoms (n = 46), and a negative control group (n = 46) were selected for the study. The first group of patients was hospitalized at Valiasr and Shariati hospitals (Fasa, Fars, Iran). All samples were confirmed by reverse transcription‐polymerase chain reaction (RT‐PCR). Individuals presenting as negative controls had no respiratory insufficiency or hyperinflammation, such as bacterial and fungal infections, inflammatory bowel disease, autoimmune disorders, or cancer. We worked on peripheral blood mononuclear cells (PBMC) isolated from whole blood; 5 μl blood samples were collected in EDTA‐containing tubes and processed in the shortest possible time. 
All of the participants signed a consent form, and this study was approved by the ethics committee of FUMS (IR.FUMS.REC.1399.182).\n Isolation of total RNA and cDNA synthesis After exposing the blood cells to the reagent, PBMC were isolated using the density‐gradient method. Trizol isolation reagent (Invitrogen, Thermo Fisher) was used to extract total RNA. Then, RNA was extracted from PBMC according to the manufacturer's instructions. The RNA purity was assessed with a NanoDrop spectrophotometer (BioTek, HTX multi‐mode reader) and gel electrophoresis. The first‐strand cDNA was synthesized using the PrimeScript™ RT Reagent Kit (BioFact™, Cat. No: BR441‐096) following the manufacturer's recommendations.\n Real time PCR The relative expression of genes was determined using Power SYBR® Green PCR Master Mix (ABI, USA) on the 7500 real‐time PCR system (ABI, life technology). A 15 μl reaction contained 1 μl of cDNA, 7.5 μl BioFACT™ master mix including SYBR Green (Ampliqon, Cat. No: A325402‐25), 0.75 μl of each primer, and 5 μl DNase‐free deionized H2O. Thermal cycling was performed at 45 cycles of 95°C for 20 s and then 60°C for 30 s. We used two internal controls, including ACTB and U48. The sequences of each primer are stated in Table 1. The Livak method (2−∆∆Ct) was utilized to calculate the relative expression.\nprimers sequences\nAbbreviations: ACTB, actin beta; IL6R, interleukin 6 receptor; RIG‐I, RNA sensor RIG‐I; U48, SNORD48, small nucleolar RNA.\n Statistical analysis Data analysis was performed in spss software v.26, and graphs were drawn with graphpad prism v.8. Comparison of expression between the three groups of samples, including symptomatic COVID‐19, asymptomatic COVID‐19, and negative control, was evaluated using the Kruskal–Wallis H non‐parametric test. The Chi‐square test was applied to compare sex and blood type between groups, and the one‐way anova test was used to compare age. The correlation of the elements in the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I pathway was measured by the Spearman correlation coefficient test. A p‐value less than .05 was considered a significant level.\n Bioinformatics Predictions Hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis was constructed based on circRNA‐miRNA pair, miRNA‐mRNA pairs. CircRNA‐miRNA interaction was downloaded from the database Circinteractome.\n23\n The MiRWalk2.0\n24\n and miRTargetLink\n25\n databases were used to predict the interaction between miRNA and mRNAs. Then, the pathway enrichment analysis of miRNA and mRNAs was performed using miRPathDB 2.0\n26\n and Enrichr\n27\n databases.", "To investigate the role of hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis in COVID‐19 viral infection, patients with SARS‐CoV‐2 severe symptoms (n = 46), without symptoms (n = 46), and a negative control group (n = 46) were selected for the study. The first group of patients was hospitalized at Valiasr and Shariati (Fasa, Fars, Iran). All samples were confirmed by reverse transcription‐polymerase chain reaction (RT‐PCR). Individuals presenting as negative controls had no respiratory insufficiency or hyperinflammation, such as bacterial and fungal infections, inflammatory bowel disease, autoimmune disorders, or cancer. We work on peripheral blood mononuclear cells (PBMC) whole blood from samples after collecting 5 μl blood samples in tubes containing EDTA in the shortest possible time. All of the participants signed a consent form, and this study was approved by the ethics committee of FUMS (IR.FUMS.REC.1399.182).", "After exposing the blood cells to the reagent, PBMC were isolated using the density‐gradient method. Trizol isolation reagent (Invitrogen, Thermo Fisher) was used to extract total RNA. Then, RNA was extracted from PBMC according to the manufacturer's instructions. The RNA purity was assessed with a NanoDrop spectrophotometer (BioTek, HTX multi‐mode reader) and gel electrophoresis. The first‐strand cDNA was synthesized using the PrimeScript™ RT Reagent Kit (BioFact™, Cat. No: BR441‐096) following the manufacturer's recommendations.", "The relative expression of genes was determined using Power SYBR® Green PCR Master Mix (ABI, USA) on the 7500 real‐time PCR system (ABI, life technology). A 15 μl reaction contained 1 μl of cDNA, 7.5 μl BioFACT™ master mix including SYBR Green (Ampliqon, Cat. No: A325402‐25), 0.75 μl of each primer, and 5 μl DNase‐free deionized H2O. 
Thermal cycling was performed at 45 cycles of 95°C for 20 s and then 60°C for 30 s. We used two internal controls, including ACTB and U48. The sequences of each primer are stated in Table 1. The Livak method (2−∆∆Ct) was utilized to calculate the relative expression.\nprimers sequences\nAbbreviations: ACTB, actin beta; IL6R, interleukin 6 receptor; RIG‐I, RNA sensor RIG‐I; U48, SNORD48, small nucleolar RNA.", "Data analysis was performed in spss software v.26, and graphs were drawn with graphpad prism v.8. Comparison of expression between the three groups of samples, including symptomatic COVID‐19, asymptomatic COVID‐19, and negative control, was evaluated using the Kruskal–Wallis H non‐parametric test. The Chi‐square test was applied to compare sex and blood type between groups, and the one‐way anova test was used to compare age. The correlation of the elements in the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I pathway was measured by the Spearman correlation coefficient test. A p‐value less than .05 was considered a significant level.", "Hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis was constructed based on circRNA‐miRNA pair, miRNA‐mRNA pairs. CircRNA‐miRNA interaction was downloaded from the database Circinteractome.\n23\n The MiRWalk2.0\n24\n and miRTargetLink\n25\n databases were used to predict the interaction between miRNA and mRNAs. Then, the pathway enrichment analysis of miRNA and mRNAs was performed using miRPathDB 2.0\n26\n and Enrichr\n27\n databases.", " Basic and demographic information of patients Participants in the current study included: group (1) 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years, group (2) 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years, and group (3) the negative control which included 24 (52%) males and 22 (48%), females, with a mean age of 42.65. 
The patients and control groups were matched in terms of age, sex, and blood group (Table 2). The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease with 10 (22%), 12 (26%) patients and immunodeficiency disease with 4 (9%), 0 (0%) patients, respectively (Table 2). Table 3 demonstrates the different symptoms of patients with symptomatic COVID‐19 participating in this study.\nParticipants' baseline and demographic information (p‐values 1: in comparison with negative control, 2: in comparison with asymptomatic COVID‐19)\nFeatures of symptomatic COVID‐19 patients\nParticipants in the current study included: group (1) 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years, group (2) 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years, and group (3) the negative control which included 24 (52%) males and 22 (48%), females, with a mean age of 42.65. The patients and control groups were matched in terms of age, sex, and blood group (Table 2). The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease with 10 (22%), 12 (26%) patients and immunodeficiency disease with 4 (9%), 0 (0%) patients, respectively (Table 2). 
Table 3 demonstrates the different symptoms of patients with symptomatic COVID‐19 participating in this study.\nParticipants' baseline and demographic information (p‐values 1: in comparison with negative control, 2: in comparison with asymptomatic COVID‐19)\nFeatures of symptomatic COVID‐19 patients\n Evaluation of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls We used quantitative real‐time PCR to evaluate the gene expression levels in two different subgroups of COVID‐19 patients and negative controls, with a total of 46 samples in each group. The expression of hsa_circ_0004812 is significantly higher in symptomatic COVID‐19 samples in comparison with other groups, and it was also upregulated in asymptomatic patients compared with negative controls (Table 4, Figure 1A). The hsa‐miR‐1287‐5p expression is obviously lower in symptomatic COVID‐19 patients compared with negative controls and asymptomatic patients (Table 4, Figure 1B). Furthermore, the results indicate that RIG‐I and IL6R are significantly upregulated in symptomatic and asymptomatic patients compared with negative controls (Table 3, Figure 1C,D).\nExpression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R in asymptomatic and symptomatic COVID‐19 patients, and negative controls (p‐value computed from Kruskal–Wallis Test. p‐value 1: In comparison with negative control. p‐value 2: In comparison to asymptomatic covid)\nRelative expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, IL6R, and RIG‐I were shown in a box and whisker plot (10–90 percentile). (ns: p > .05; * p ≤ .05; *** p ˂ .001). (A) Upregulation of hsa_circ_0004812 in COVID‐19 patients compared with healthy. (B) Down expression of hsa‐miR‐1287‐5p in COVID‐19 patients compared with healthy controls. (C) Upregulation of RIG‐I in COVID‐19 patients compared with healthy controls. 
(D) Upregulation of IL6R in COVID‐19 patients compared with healthy controls.\nWe used quantitative real‐time PCR to evaluate the gene expression levels in two different subgroups of COVID‐19 patients and negative controls, with a total of 46 samples in each group. The expression of hsa_circ_0004812 is significantly higher in symptomatic COVID‐19 samples in comparison with other groups, and it was also upregulated in asymptomatic patients compared with negative controls (Table 4, Figure 1A). The hsa‐miR‐1287‐5p expression is obviously lower in symptomatic COVID‐19 patients compared with negative controls and asymptomatic patients (Table 4, Figure 1B). Furthermore, the results indicate that RIG‐I and IL6R are significantly upregulated in symptomatic and asymptomatic patients compared with negative controls (Table 3, Figure 1C,D).\nExpression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R in asymptomatic and symptomatic COVID‐19 patients, and negative controls (p‐value computed from Kruskal–Wallis Test. p‐value 1: In comparison with negative control. p‐value 2: In comparison to asymptomatic covid)\nRelative expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, IL6R, and RIG‐I were shown in a box and whisker plot (10–90 percentile). (ns: p > .05; * p ≤ .05; *** p ˂ .001). (A) Upregulation of hsa_circ_0004812 in COVID‐19 patients compared with healthy. (B) Down expression of hsa‐miR‐1287‐5p in COVID‐19 patients compared with healthy controls. (C) Upregulation of RIG‐I in COVID‐19 patients compared with healthy controls. (D) Upregulation of IL6R in COVID‐19 patients compared with healthy controls.\n The hsa_circ_0004812 as a potential regulator of RIG‐I and IL6R through sponging hsa‐miR 1287‐5p To confirm whether hsa_circ_0004812 may act as the molecular sponge for hsa‐miR‐1287‐5p, the expression correlation between these ncRNAs was evaluated in the samples. 
There was a significant negative correlation between the expression of hsa_circ_0004812 and hsa‐miR‐1287‐5p (r = −0.283, p value = .001, Figure 2B). Using bioinformatics tools, we predicted that RIG‐I and IL6R are potential targets of the hsa_circ_0004812/hsa‐miR‐1287‐5p axis, and correlation analysis of the scaled RT‐PCR results revealed a positive expression correlation between hsa_circ_0004812 and both IL6R and RIG‐I (r = 0.760, p value < .001 / r = 0.236, p value = .005, Figure 2C,D). In addition, a negative expression correlation was observed between hsa‐miR‐1287‐5p and both IL6R and RIG‐I (r = −0.234, p value = .006 / r = −0.514, p value < .001, Figure 2E,F). Consistent with the expression pattern of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis, a distinct expression profile between COVID‐19 samples and negative controls is shown by the heatmap analysis (Figure 2A). These results support the relevance of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R pathway in the COVID‐19 cytokine storm.\n(A) LogFC heatmap of gene expression. (B) The negative correlation between hsa_circ_0004812 and hsa‐miR‐1287‐5p expression levels in COVID‐19 (r = −0.283, p value = .001). (C) The positive correlation between hsa_circ_0004812 and IL6R expression levels in COVID‐19 (r = 0.760, p value < .001). (D) The positive correlation between hsa_circ_0004812 and RIG‐I expression levels in COVID‐19 (r = 0.236, p value = .005). (E) The negative correlation between hsa‐miR‐1287‐5p and IL6R expression levels in COVID‐19 (r = −0.234, p value = .006). (F) The negative correlation between hsa‐miR‐1287‐5p and RIG‐I expression levels in COVID‐19 (r = −0.514, p value < .001).\n Functional enrichment analysis of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis Using the circRNA‐miRNA pair and the miRNA‐mRNA pairs, we generated the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R triple network.
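The pairwise Spearman coefficients reported above can be computed with SciPy; the paired values below are hypothetical placeholders, not the study's measurements:

```python
from scipy import stats

# Hypothetical paired expression values per patient (illustration only)
circ_0004812 = [1.0, 1.4, 2.1, 2.8, 3.5, 4.2]
mir_1287_5p = [2.9, 2.5, 2.6, 1.8, 1.2, 1.0]

# Spearman rank correlation; a negative rho is consistent with a sponging relationship
rho, p = stats.spearmanr(circ_0004812, mir_1287_5p)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```

Spearman (rather than Pearson) matches the study's non‐parametric analysis, since ranks make no normality assumption about qPCR fold changes.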
We used the miRPathDB and Enrichr databases to evaluate the pathways associated with the miRNA and the mRNAs, respectively.26, 27 “Regulatory circuits of the STAT3 signaling pathway” was the significant pathway related to hsa‐miR‐1287‐5p according to WikiPathways data (p‐value = .006). Additionally, the significant gene ontology (GO) terms associated with IL6R and RIG‐I are shown in Figure 3. GO analysis showed that IL6R, RIG‐I, and hsa‐miR‐1287‐5p in our proposed ceRNA regulatory network are involved in the regulation of the JAK–STAT and PI3K‐AKT signaling pathways.\nOverall workflow of bioinformatics analyses\nParticipants in the current study included: group (1) 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years; group (2) 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years; and group (3) the negative control group, which included 24 (52%) males and 22 (48%) females, with a mean age of 42.65 years. The patient and control groups were matched in terms of age, sex, and blood group (Table 2).
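The sex matching across the three groups can be checked with a Chi‐square test of the reported counts (19/27, 21/25, and 22/24 females/males). The study used SPSS, but `scipy.stats.chi2_contingency` performs the same test:

```python
from scipy.stats import chi2_contingency

# Reported sex counts (females, males) per group
table = [
    [19, 27],  # symptomatic COVID-19
    [21, 25],  # asymptomatic COVID-19
    [22, 24],  # negative control
]

# Chi-square test of independence on the 3x2 contingency table
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

A non‐significant p here is the desired outcome: it indicates the groups are comparable in sex distribution, so sex is unlikely to confound the expression comparisons.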
The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease, with 10 (22%) and 12 (26%) patients, and immunodeficiency disease, with 4 (9%) and 0 (0%) patients, respectively (Table 2). Table 3 lists the symptoms of the patients with symptomatic COVID‐19 participating in this study.\nParticipants' baseline and demographic information (p‐values 1: in comparison with negative control; 2: in comparison with asymptomatic COVID‐19)\nFeatures of symptomatic COVID‐19 patients\nDespite the rapid progress in COVID‐19 research, COVID‐19 remains a global crisis owing to the emergence of new SARS‐CoV‐2 variants. This is largely due to our limited understanding of the virus's molecular mechanisms of pathogenesis. While the manifestations of COVID‐19 range widely from symptomatic to asymptomatic, a cytokine storm is triggered in the respiratory system of some patients.1, 2 Thus, it is crucial to conduct original research on the genetic factors that cause a dysfunctional innate immune response.\nDysregulation of circRNA expression has been linked to various diseases, including cancer, heart disease, neurological disorders, and some viral infections, although the underlying mechanisms remain unknown. For example, Wu et al.9 found 114 differentially expressed circRNAs in exosomes extracted from COVID‐19 patients and healthy people. In this study, using qRT‐PCR, we assessed hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression among the three groups of participants.
Further analysis was performed to discover circRNA‐miRNA, miRNA‐mRNA, and circRNA‐mRNA interactions. To corroborate the qPCR results, in silico validation was also conducted using several databases.\nCircular RNAs (circRNAs) can bind to miRNAs and act as sponges or decoys, thereby enhancing the expression of miRNA target genes.28 Recently, circRNAs have been proposed as biomarkers to differentiate viral from non‐viral pneumonia. Circular RNAs are involved in various activities, including immune tolerance and escape, and thus could be informative in the emerging COVID‐19 infection.9 The cellular mechanisms underlying circRNA dysregulation in SARS‐CoV‐2 infection are still poorly understood. An in silico analysis by Arora et al.29 identified a ceRNA network in SARS‐CoV‐infected cells. Zhou et al.30 carried out RNA sequencing and discovered 99 dysregulated circRNAs linked to chronic hepatitis B (CHB); analysis of the CHB‐related circRNA‐miRNA‐mRNA pathway suggested that TGFb2 is controlled by hsa_circ_0000650 through miR‐6873‐3p sponging. Among the circRNAs shown to play a role in infectious diseases, we chose hsa_circ_0004812, which is generated from the ninein‐like (NINL) gene, for further investigation.13\nThe experimental data showed that hsa_circ_0004812 is significantly overexpressed in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls. In line with our analysis, previous research indicated that upregulated hsa_circ_0004812 could modulate the expression of follistatin‐like 1 (FSTL1) through sponging of miR‐1287‐5p in CHB patients and HBV‐infected hepatoma cells.14\nN. Wang et al.31 implicated the lncRNA LPAL2 in thyroid eye disease (TED) through sponging of hsa‐miR‐1287‐5p. In contrast to our study, W. Hao argued that increased miR‐1287‐5p levels may inhibit the release of pro‐inflammatory cytokines from LPS‐induced human nasal epithelial cells (HNECs).32 Yajie Hu et al.33 reported 47 novel differentially expressed miRNAs in Enterovirus 71 (EV71) and coxsackievirus A16 (CA16) infections using high‐throughput sequencing; our qRT‐PCR result for hsa‐miR‐1287‐5p was in accordance with that high‐throughput dataset. Caixia Li et al. (2020) identified 70 dysregulated miRNAs in human patients with COVID‐19 by high‐throughput sequencing.16 In another study, using next‐generation sequencing and bioinformatics tools, 55 altered miRNAs were identified in 10 COVID‐19 patients.34\nWe know that circRNAs act as microRNA sponges and modulate gene expression indirectly.35 The present study explored the relationship between circRNA and microRNA using bioinformatics databases and selected hsa‐miR‐1287‐5p. The CircInteractome database indicates a 7mer‐1a site type between hsa_circ_0004812 and hsa‐miR‐1287‐5p.23 Our results demonstrate that hsa‐miR‐1287‐5p is downregulated in COVID‐19 patients with severe symptoms compared with asymptomatic COVID‐19 patients and negative controls.\nThe two mRNAs that we propose as part of the ceRNA network are IL6R and RIG‐I. Potential roles of these genes have been determined in SARS‐CoV‐2 infection.36, 37 The innate immune system is also activated by SARS‐CoV‐2, and macrophage activation causes an overproduction of pro‐inflammatory cytokines such as IL‐6.38 Previous studies found that interleukin 6 (IL‐6) is a pleiotropic cytokine that controls cell proliferation and differentiation along with the immune response.
Many other cytokines share this receptor component with the IL‐6 receptor (IL6R) and interleukin 6 signal transducer (IL6ST/GP130/IL6‐beta). Innate immune responses are generated by specific families of pattern recognition receptors. A class of cytosolic RNA helicases known as RIG‐I‐like receptors (RLRs) recognizes non‐self RNA that enters a cell as a result of intracellular virus replication. RIG‐I, MDA5, and LGP2 are RLR proteins, expressed in both immune and non‐immune cells.39, 40 Retinoic acid‐inducible gene I (RIG‐I) can activate type I IFN in response to SARS‐CoV‐2 in fibroblasts and dendritic cells by activating interferon regulatory factor 3 (IRF3) via kinases.41\nWe investigated the interactions between the miRNA and the mRNAs using three databases and predicted bioinformatically that hsa‐miR‐1287‐5p could interact with IL6R and RIG‐I. In the present study, we then observed that the expression of IL6R and RIG‐I was increased in SARS‐CoV‐2‐infected patients with severe symptoms compared with the asymptomatic and negative control groups. By Ratner's criteria, however, some of the observed expression correlations are weak, and additional experiments will be needed to confirm and extend these results.42\nWe evaluated the correlations between the expression levels of the genes in the ceRNA regulatory network: between hsa_circ_0004812 and hsa‐miR‐1287‐5p, and between hsa_circ_0004812 and both IL6R and RIG‐I. There was a significant negative correlation between hsa_circ_0004812 and hsa‐miR‐1287‐5p expression. Additionally, our findings showed a significant positive correlation between the expression of hsa_circ_0004812 and both IL6R and RIG‐I.
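Ratner's commonly cited interpretation bands (|r| below 0.3 weak, 0.3 to 0.7 moderate, above 0.7 strong) can be applied to the reported coefficients. A small helper, assuming those bands:

```python
def correlation_strength(r: float) -> str:
    """Label the magnitude of a correlation coefficient using the bands
    commonly attributed to Ratner: <0.3 weak, 0.3-0.7 moderate, >=0.7 strong."""
    magnitude = abs(r)
    if magnitude < 0.3:
        return "weak"
    if magnitude < 0.7:
        return "moderate"
    return "strong"

# The study's reported coefficients, labeled with these bands
pairs = [("circ_0004812/miR-1287-5p", -0.283),
         ("circ_0004812/IL6R", 0.760),
         ("circ_0004812/RIG-I", 0.236)]
for name, r in pairs:
    print(name, correlation_strength(r))
```

By these bands, only the hsa_circ_0004812/IL6R correlation (r = 0.760) is strong, which is why the text above tempers the interpretation of the weaker coefficients.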
Interestingly, Zhang and colleagues demonstrated the relationship between hsa_circ_0004812 and miR‐1287‐5p by luciferase assays in cells transfected with pHBV.14 Similarly, our results suggest that hsa‐miR‐1287‐5p negatively regulates IL6R and RIG‐I levels in SARS‐CoV‐2‐infected patients.\nIn conclusion, the interaction of the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis has been verified by a bioinformatics study. Gene ontology analysis indicated that the ceRNA axis is involved in the regulation of the JAK–STAT and PI3K‐AKT signaling pathways. CircRNA‐miRNA‐mRNA pathway analysis in COVID‐19 patients suggested that hsa_circ_0004812 regulates RIG‐I and IL6R by sponging hsa‐miR‐1287‐5p. Moreover, on the basis of the correlation analysis, this candidate axis is more significant in patients with more severe COVID‐19 symptoms and could be considered a potential target for understanding, identifying, and treating COVID‐19.\nThis work was supported by Fasa University of Medical Sciences under Grant number 99157.\nThe authors declare that there is no conflict of interest regarding the publication of this paper.\nInformed consent was obtained from all individual participants included in the study.
Keywords: cytokine storm, hsa_circ_0004812, hsa‐miR‐1287‐5p, SARS‐CoV‐2, sponging effect
BACKGROUND: The COVID‐19 pandemic has become a very challenging and controversial problem. The severe acute respiratory syndrome coronavirus‐2 (SARS‐CoV‐2) virus is a newly emerging single‐stranded RNA virus that belongs to the beta‐coronavirus family and causes COVID‐19 (coronavirus disease 2019). COVID‐19, with a dramatic fatality rate,1, 2, 3 triggers the immune system to respond while simultaneously repressing the immune response to allow virus replication. The interplay among the infected cells, the virus, and the immune system induces a high level of cytokine and chemokine secretion known as the “cytokine storm”.3, 4 CircRNAs are created through the back‐splicing process, which results in a closed‐loop structure without poly(A) tails and 5′ caps.5 These RNA molecules are highly stable and resistant to exonuclease‐mediated degradation. It has been found that circRNAs, as key regulators, can modulate gene expression through various molecular mechanisms, including a sponging effect on miRNAs or acting as protein scaffolds.6 It has also been reported that noncoding RNAs can affect the inflammatory cytokine storm.7 CircRNAs, which are regarded as endogenous short RNAs that may be classified into coding and noncoding circRNAs,8 play a proven role in COVID‐19‐related pathogenesis.9 For example, Wu et al.9 found that differentially expressed circRNAs in COVID‐19 patients with recurrent disease are mainly involved in the regulation of host cell immunity and inflammation, substance and energy metabolism, the cell cycle, and cell apoptosis. The most well‐known function of circRNAs is their activity as competing endogenous RNAs (ceRNAs) through a circRNA/miRNA/mRNA regulatory network.
10, 11 Recent evidence has suggested a role for circRNAs in various viral infections, such as human papillomavirus, herpes simplex virus, Epstein–Barr virus, human immunodeficiency virus, Middle East respiratory syndrome coronavirus, and hepatitis B virus infection, and has confirmed them as potential biomarkers distinguishing viral from non‐viral states.12 One of the circRNAs recognized for its contribution to human virus‐associated infection is hsa_circ_0004812, which originates from the NINL gene and is located on chromosome 20.13 In 2020, Zhang and colleagues revealed that hsa_circ_0004812 was upregulated in chronic hepatitis B (CHB) and HBV‐infected hepatoma cells. Additionally, this circRNA was associated with immune suppression by HBV infection through the hsa_circ_0004812/hsa‐miR‐1287‐5p/FSTL1 pathway.14 According to some studies, circRNAs play an important role in gene expression as microRNA sponges via their microRNA response elements (MREs).5 MicroRNAs have also been indicated as potential biomarkers for COVID‐19 diagnosis.15 In a 2020 study, high‐throughput sequencing was used to evaluate the expression levels of different miRNAs, revealing that 35 miRNAs were upregulated and 38 miRNAs were downregulated in human patients with COVID‐19.16 MicroRNAs are small noncoding RNAs of about 20 to 25 nucleotides in length that act as regulators of gene expression by affecting the stability and translation of their target mRNAs.17 In this study, hsa‐miR‐1287‐5p, located on chromosome 10 and previously predicted to have a significant binding site in the SARS‐CoV‐2 genome, was selected.18 JAK/STAT is one of the pathways involved in the inflammatory responses of COVID‐19, such as the cytokine storm.14 Interleukin 6 (IL‐6) and other inflammatory mediators such as IL‐12 and TNF are important components of the cytokine storm and play a role in the pathogenesis of COVID‐19‐associated pneumonia.
19 IL‐6 can activate STAT3 during inflammatory processes, and both play a regulatory role in the cytokine storm of COVID‐19 infection.20 Some studies indicate that IL‐6R antagonists can improve the hyperinflammatory state in hospitalized COVID‐19 patients.21 Retinoic acid‐inducible gene I (RIG‐I) is a member of the RIG‐I‐like receptor (RLR) family, playing an important role in sensing viral nucleic acids and in the production of pro‐inflammatory and antiviral proteins.22 Interestingly, we find that the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis has an impact on the cytokine storm through the JAK/STAT and STAT3 signaling pathways. The aim of this study was to develop a better understanding of the molecular mechanisms involved in COVID‐19 inflammatory responses. Based on primary in silico analysis and literature studies, we focused specifically on the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis to evaluate its potential role in COVID‐19 inflammatory responses. In this regard, we assessed the expression of the members of our candidate axis and measured its correlation with the JAK/STAT and STAT3 signaling pathways. MATERIALS AND METHODS: Patient characteristics and sample collection To investigate the role of the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis in COVID‐19 viral infection, patients with severe SARS‐CoV‐2 symptoms (n = 46), patients without symptoms (n = 46), and a negative control group (n = 46) were selected for the study. The first group of patients was hospitalized at Valiasr and Shariati hospitals (Fasa, Fars, Iran). All samples were confirmed by reverse transcription‐polymerase chain reaction (RT‐PCR). Individuals serving as negative controls had no respiratory insufficiency or hyperinflammatory conditions such as bacterial or fungal infections, inflammatory bowel disease, autoimmune disorders, or cancer.
We worked on peripheral blood mononuclear cells (PBMC) from whole blood, collecting 5 μl blood samples in tubes containing EDTA in the shortest possible time. All of the participants signed a consent form, and this study was approved by the ethics committee of FUMS (IR.FUMS.REC.1399.182). Isolation of total RNA and cDNA synthesis After exposing the blood cells to the reagent, PBMC were isolated using the density‐gradient method. Trizol isolation reagent (Invitrogen, Thermo Fisher) was used to extract total RNA.
Then, RNA was extracted from PBMC according to the manufacturer's instructions. The RNA purity was assessed with a NanoDrop spectrophotometer (BioTek, HTX multi‐mode reader) and gel electrophoresis. The first‐strand cDNA was synthesized using the PrimeScript™ RT Reagent Kit (BioFact™, Cat. No: BR441‐096) following the manufacturer's recommendations. Real time PCR The relative expression of genes was determined using Power SYBR® Green PCR Master Mix (ABI, USA) on the 7500 real‐time PCR system (ABI, Life Technologies). A 15 μl reaction contained 1 μl of cDNA, 7.5 μl BioFACT™ master mix including SYBR Green (Ampliqon, Cat. No: A325402‐25), 0.75 μl of each primer, and 5 μl DNase‐free deionized H2O. Thermal cycling was performed for 45 cycles of 95°C for 20 s and 60°C for 30 s. We used two internal controls, ACTB and U48. The sequences of each primer are stated in Table 1. The Livak method (2−∆∆Ct) was utilized to calculate the relative expression. Primer sequences Abbreviations: ACTB, actin beta; IL6R, interleukin 6 receptor; RIG‐I, RNA sensor RIG‐I; U48, SNORD48, small nucleolar RNA. Statistical analysis Data analysis was performed in SPSS software v.26, and graphs were drawn with GraphPad Prism v.8.
Comparison of expression among the three groups of samples (symptomatic COVID‐19, asymptomatic COVID‐19, and negative control) was evaluated using the Kruskal–Wallis H non‐parametric test. The Chi‐square test was applied to compare sex and blood type between groups, and one‐way ANOVA was used to compare age. The correlation of the elements in the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I pathway was measured by the Spearman correlation coefficient test. A p‐value less than .05 was considered significant. Bioinformatics Predictions The hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis was constructed based on the circRNA‐miRNA pair and the miRNA‐mRNA pairs. The circRNA‐miRNA interaction was downloaded from the CircInteractome database.23 The MiRWalk2.024 and miRTargetLink25 databases were used to predict the interactions between the miRNA and the mRNAs. Then, pathway enrichment analysis of the miRNA and mRNAs was performed using the miRPathDB 2.026 and Enrichr27 databases.
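The relative‐expression calculation named in the Real time PCR section, the Livak 2^−ΔΔCt method, can be sketched in a few lines; the Ct values below are hypothetical, for illustration only:

```python
def livak(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative quantification.
    Each target Ct is first normalized to an internal control
    (e.g., ACTB or U48), then the sample is compared to the calibrator."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target/reference in a patient vs. a control sample
fold_change = livak(24.0, 18.0, 26.0, 18.0)
print(fold_change)  # 4.0: the target is 4-fold overexpressed relative to the calibrator
```

Lower Ct means earlier amplification, so a negative ΔΔCt yields a fold change above 1, i.e., upregulation relative to the calibrator group.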
Then, the pathway enrichment analysis of miRNA and mRNAs was performed using miRPathDB 2.0 26 and Enrichr 27 databases. Patient characteristics and sample collection: To investigate the role of hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis in COVID‐19 viral infection, patients with SARS‐CoV‐2 severe symptoms (n = 46), without symptoms (n = 46), and a negative control group (n = 46) were selected for the study. The first group of patients was hospitalized at Valiasr and Shariati (Fasa, Fars, Iran). All samples were confirmed by reverse transcription‐polymerase chain reaction (RT‐PCR). Individuals presenting as negative controls had no respiratory insufficiency or hyperinflammation, such as bacterial and fungal infections, inflammatory bowel disease, autoimmune disorders, or cancer. We work on peripheral blood mononuclear cells (PBMC) whole blood from samples after collecting 5 μl blood samples in tubes containing EDTA in the shortest possible time. All of the participants signed a consent form, and this study was approved by the ethics committee of FUMS (IR.FUMS.REC.1399.182). Isolation of total RNA and cDNA synthesis: After exposing the blood cells to the reagent, PBMC were isolated using the density‐gradient method. Trizol isolation reagent (Invitrogen, Thermo Fisher) was used to extract total RNA. Then, RNA was extracted from PBMC according to the manufacturer's instructions. The RNA purity was assessed with a NanoDrop spectrophotometer (BioTek, HTX multi‐mode reader) and gel electrophoresis. The first‐strand cDNA was synthesized using the PrimeScript™ RT Reagent Kit (BioFact™, Cat. No: BR441‐096) following the manufacturer's recommendations. Real time PCR: The relative expression of genes was determined using Power SYBR® Green PCR Master Mix (ABI, USA) on the 7500 real‐time PCR system (ABI, life technology). A 15 μl reaction contained 1 μl of cDNA, 7.5 μl BioFACT™ master mix including SYBR Green (Ampliqon, Cat. 
No: A325402‐25), 0.75 μl of each primer, and 5 μl DNase‐free deionized H2O. Thermal cycling was performed at 45 cycles of 95°C for 20 s and then 60°C for 30 s. We used two internal controls, including ACTB and U48. The sequences of each primer are stated in Table 1. The Livak method (2−∆∆Ct) was utilized to calculate the relative expression. primers sequences Abbreviations: ACTB, actin beta; IL6R, interleukin 6 receptor; RIG‐I, RNA sensor RIG‐I; U48, SNORD48, small nucleolar RNA. Statistical analysis: Data analysis was performed in spss software v.26, and graphs were drawn with graphpad prism v.8. Comparison of expression between the three groups of samples, including symptomatic COVID‐19, asymptomatic COVID‐19, and negative control, was evaluated using the Kruskal–Wallis H non‐parametric test. The Chi‐square test was applied to compare sex and blood type between groups, and the one‐way anova test was used to compare age. The correlation of the elements in the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I pathway was measured by the Spearman correlation coefficient test. A p‐value less than .05 was considered a significant level. Bioinformatics Predictions: Hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis was constructed based on circRNA‐miRNA pair, miRNA‐mRNA pairs. CircRNA‐miRNA interaction was downloaded from the database Circinteractome. 23 The MiRWalk2.0 24 and miRTargetLink 25 databases were used to predict the interaction between miRNA and mRNAs. Then, the pathway enrichment analysis of miRNA and mRNAs was performed using miRPathDB 2.0 26 and Enrichr 27 databases. 
RESULTS: Basic and demographic information of patients Participants in the current study included: group (1) 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years, group (2) 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years, and group (3) the negative control which included 24 (52%) males and 22 (48%), females, with a mean age of 42.65. The patients and control groups were matched in terms of age, sex, and blood group (Table 2). The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease with 10 (22%), 12 (26%) patients and immunodeficiency disease with 4 (9%), 0 (0%) patients, respectively (Table 2). Table 3 demonstrates the different symptoms of patients with symptomatic COVID‐19 participating in this study. Participants' baseline and demographic information (p‐values 1: in comparison with negative control, 2: in comparison with asymptomatic COVID‐19) Features of symptomatic COVID‐19 patients Participants in the current study included: group (1) 46 patients with symptomatic COVID‐19, including 19 (41%) females and 27 (59%) males, with a mean age of 41.54 years, group (2) 46 patients with asymptomatic COVID‐19, with 21 (46%) females and 25 (54%) males, with a mean age of 47.90 years, and group (3) the negative control which included 24 (52%) males and 22 (48%), females, with a mean age of 42.65. The patients and control groups were matched in terms of age, sex, and blood group (Table 2). The most common underlying diseases in the symptomatic and asymptomatic COVID‐19 groups were cardiovascular disease with 10 (22%), 12 (26%) patients and immunodeficiency disease with 4 (9%), 0 (0%) patients, respectively (Table 2). Table 3 demonstrates the different symptoms of patients with symptomatic COVID‐19 participating in this study. 
Evaluation of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls. We used quantitative real‐time PCR to evaluate gene expression levels in two subgroups of COVID‐19 patients and in negative controls, with 46 samples per group. The expression of hsa_circ_0004812 was significantly higher in symptomatic COVID‐19 samples than in the other groups, and it was also upregulated in asymptomatic patients compared with negative controls (Table 4, Figure 1A). Hsa‐miR‐1287‐5p expression was markedly lower in symptomatic COVID‐19 patients than in negative controls and asymptomatic patients (Table 4, Figure 1B). Furthermore, RIG‐I and IL6R were significantly upregulated in symptomatic and asymptomatic patients compared with negative controls (Table 4, Figure 1C,D). Expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R in asymptomatic and symptomatic COVID‐19 patients and negative controls (p‐values computed from the Kruskal–Wallis test; p‐value 1: in comparison with negative control; p‐value 2: in comparison with asymptomatic COVID‐19). Relative expression levels of hsa_circ_0004812, hsa‐miR‐1287‐5p, IL6R, and RIG‐I shown as box‐and‐whisker plots (10–90 percentile). (ns: p > .05; *p ≤ .05; ***p < .001). (A) Upregulation of hsa_circ_0004812 in COVID‐19 patients compared with healthy controls. (B) Downregulation of hsa‐miR‐1287‐5p in COVID‐19 patients compared with healthy controls. (C) Upregulation of RIG‐I in COVID‐19 patients compared with healthy controls. (D) Upregulation of IL6R in COVID‐19 patients compared with healthy controls.
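The three‐group comparison above can be illustrated with SciPy's Kruskal–Wallis implementation. The expression values below are hypothetical stand‐ins, not the study's measurements:

```python
# Hedged sketch of the three-group comparison: Kruskal-Wallis H test on
# hypothetical relative-expression values (assumes scipy is installed).
from scipy.stats import kruskal

symptomatic  = [4.1, 3.8, 5.2, 4.7, 3.9, 4.4]   # made-up fold changes
asymptomatic = [2.0, 2.4, 1.8, 2.2, 2.6, 1.9]
controls     = [1.0, 1.1, 0.9, 1.2, 0.8, 1.0]

h_stat, p_value = kruskal(symptomatic, asymptomatic, controls)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A p-value below .05 indicates at least one group differs in median
# expression; pairwise post-hoc tests would then localize the difference.
```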
Hsa_circ_0004812 as a potential regulator of RIG‐I and IL6R through sponging of hsa‐miR‐1287‐5p. To test whether hsa_circ_0004812 may act as a molecular sponge for hsa‐miR‐1287‐5p, the expression correlation between these ncRNAs was evaluated in the samples. There was a significant negative correlation between the expression of hsa_circ_0004812 and hsa‐miR‐1287‐5p (r = −0.283, p = .001, Figure 2B).
Using bioinformatics tools, we predicted that RIG‐I and IL6R are potential targets of the hsa_circ_0004812/hsa‐miR‐1287‐5p pair, and correlation analysis of the qRT‐PCR results revealed positive expression correlations of hsa_circ_0004812 with IL6R and RIG‐I (r = 0.760, p < .001 and r = 0.236, p = .005; Figure 2C,D). In addition, negative expression correlations of hsa‐miR‐1287‐5p with IL6R and RIG‐I were observed (r = −0.234, p = .006 and r = −0.514, p < .001; Figure 2E,F). Consistent with the expression pattern of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis, heatmap analysis shows a distinct expression profile between COVID‐19 samples and negative controls (Figure 2A). Together, these results support a role for the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R pathway in the COVID‐19 cytokine storm. (A) LogFC heatmap of gene expression. (B) Negative correlation between hsa_circ_0004812 and hsa‐miR‐1287‐5p expression levels in COVID‐19 (r = −0.283, p = .001). (C) Positive correlation between hsa_circ_0004812 and IL6R expression levels in COVID‐19 (r = 0.760, p < .001). (D) Positive correlation between hsa_circ_0004812 and RIG‐I expression levels in COVID‐19 (r = 0.236, p = .005). (E) Negative correlation between hsa‐miR‐1287‐5p and IL6R expression levels in COVID‐19 (r = −0.234, p = .006). (F) Negative correlation between hsa‐miR‐1287‐5p and RIG‐I expression levels in COVID‐19 (r = −0.514, p < .001).
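The sponge correlation analysis amounts to a Spearman rank correlation between paired expression values. A toy sketch with hypothetical data (SciPy assumed available):

```python
# Illustrative sketch (toy data, not the study's measurements):
# Spearman rank correlation between circRNA and miRNA expression.
from scipy.stats import spearmanr

circ_expr = [1.2, 2.5, 3.1, 4.8, 5.0, 6.3]   # hsa_circ_0004812 (hypothetical)
mir_expr  = [3.9, 3.1, 2.8, 1.5, 1.9, 0.7]   # hsa-miR-1287-5p (hypothetical)

rho, p = spearmanr(circ_expr, mir_expr)
print(f"rho = {rho:.3f}, p = {p:.4f}")
# A negative rho is consistent with the sponge hypothesis:
# higher circRNA expression, lower free miRNA.
```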
Functional enrichment analysis of the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R axis. Using the circRNA‐miRNA pair and the miRNA‐mRNA pairs, we generated the hsa_circ_0004812/hsa‐miR‐1287‐5p/RIG‐I, IL6R triple network. We used the miRPathDB and Enrichr databases to evaluate the pathways associated with the miRNA and the mRNAs, respectively.
26 , 27 “Regulatory circuits of the STAT3 signaling pathway” was the significant pathway related to hsa‐miR‐1287‐5p according to WikiPathways data (p = .006). Additionally, the significant gene ontology (GO) terms associated with IL6R and RIG‐I are shown in Figure 3. GO analysis showed that IL6R, RIG‐I, and hsa‐miR‐1287‐5p in our proposed ceRNA regulatory network are involved in the regulation of the JAK–STAT and PI3K‐AKT signaling pathways. Overall workflow of bioinformatics analyses.
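The triple‐network construction described above can be sketched as a set intersection over database predictions. The target lists below are hypothetical stand‐ins for Circinteractome/miRWalk/miRTargetLink output, not actual database records:

```python
# Conceptual sketch: assemble a circRNA/miRNA/mRNA (ceRNA) axis by
# keeping only mRNA targets predicted by both prediction tools.
circ_to_mirnas = {"hsa_circ_0004812": {"hsa-miR-1287-5p"}}

# Hypothetical prediction outputs from two databases.
mirwalk_targets       = {"hsa-miR-1287-5p": {"IL6R", "RIG-I", "FSTL1", "GENE_X"}}
mirtargetlink_targets = {"hsa-miR-1287-5p": {"IL6R", "RIG-I", "GENE_Y"}}

axis = {}
for circ, mirnas in circ_to_mirnas.items():
    for mir in mirnas:
        # consensus targets = predicted by both databases
        consensus = (mirwalk_targets.get(mir, set())
                     & mirtargetlink_targets.get(mir, set()))
        axis[(circ, mir)] = sorted(consensus)

print(axis)  # {('hsa_circ_0004812', 'hsa-miR-1287-5p'): ['IL6R', 'RIG-I']}
```

The consensus targets would then be carried into enrichment analysis (e.g. with Enrichr or miRPathDB).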
DISCUSSION: Despite rapid progress in COVID‐19 research, the emergence of new SARS‐CoV‐2 variants has kept COVID‐19 a global crisis, largely owing to our incomplete understanding of the molecular mechanisms of viral pathogenesis. While the presentation of COVID‐19 ranges widely from symptomatic to asymptomatic, a cytokine storm is triggered in the respiratory system of some patients. 1 , 2 It is therefore crucial to investigate the genetic factors that cause a dysfunctional innate immune response. Dysregulation of circRNA expression has been linked to various diseases, including cancer, heart disease, neurological disorders, and some viral infections, although the underlying mechanisms remain unknown. For example, Wu et al.
9 identified 114 differentially expressed circRNAs in exosomes extracted from COVID‐19 patients and healthy individuals. In the present study, we used qRT‐PCR to assess hsa_circ_0004812, hsa‐miR‐1287‐5p, RIG‐I, and IL6R expression among the three groups of participants. Further analysis was performed to identify circRNA‐miRNA, miRNA‐mRNA, and circRNA‐mRNA interactions, and in silico validation using several databases was conducted to corroborate the qPCR results. Circular RNAs (circRNAs) can bind miRNAs and act as sponges or decoys, thereby enhancing the expression of miRNA target genes. 28 Recently, circRNAs have been proposed as biomarkers to differentiate viral from non‐viral pneumonia. CircRNAs are involved in various processes, including immune tolerance and escape, and thus could be informative in the emerging COVID‐19 infection. 9 The cellular mechanisms underlying circRNA dysregulation in SARS‐CoV‐2 infection are still poorly understood. An in silico analysis by Arora et al. 29 identified a ceRNA network in SARS‐CoV‐infected cells. Zhou et al. performed RNA sequencing and discovered 99 dysregulated circRNAs linked to chronic hepatitis B (CHB); analysis of the CHB‐related circRNA‐miRNA‐mRNA pathway suggested that TGFb2 is controlled by hsa_circ_0000650 through miR‐6873‐3p sponging. 30 Among the circRNAs shown to play a role in infectious diseases, we chose hsa_circ_0004812, which is generated from the ninein‐like (NINL) gene, for further investigation. 13 Our experimental data showed that hsa_circ_0004812 is significantly overexpressed in symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls. In line with our analysis, previous research indicated that upregulated hsa_circ_0004812 could modulate the expression of follistatin‐like 1 (FSTL1) through sponging of miR‐1287‐5p in CHB patients and HBV‐infected hepatoma cells.
14 N. Wang et al. 31 reported that the lncRNA LPAL2 acts in thyroid eye disease (TED) by sponging hsa‐miR‐1287‐5p. In contrast to our study, W. Hao argued that increased miR‐1287‐5p levels may inhibit the release of pro‐inflammatory cytokines from LPS‐induced human nasal epithelial cells (HNECs). 32 Yajie Hu et al. 33 reported 47 novel differentially expressed miRNAs in Enterovirus 71 (EV71) and coxsackievirus A16 (CA16) infections using high‐throughput sequencing; our qRT‐PCR result for hsa‐miR‐1287‐5p was in accordance with that high‐throughput dataset. Caixia Li et al. (2020) identified 70 miRNAs dysregulated in patients with COVID‐19 by high‐throughput sequencing. 16 In another study, using next‐generation sequencing and bioinformatics tools, 55 altered miRNAs were identified in 10 COVID‐19 patients. 34 CircRNAs are known to act as microRNA sponges and thereby modulate gene expression indirectly. 35 The present study explored the relationship between circRNA and microRNA using bioinformatics databases and selected hsa‐miR‐1287‐5p; the Circinteractome database indicates a 7mer‐1a site type between hsa_circ_0004812 and hsa‐miR‐1287‐5p. 23 Our results demonstrate that hsa‐miR‐1287‐5p is downregulated in severely symptomatic COVID‐19 patients compared with asymptomatic COVID‐19 patients and negative controls. The two mRNAs that we propose belong to this ceRNA network are IL6R and RIG‐I, whose potential roles in SARS‐CoV‐2 infection have been established. 36 , 37 The innate immune system is also activated by SARS‐CoV‐2, and macrophage activation causes an overproduction of pro‐inflammatory cytokines such as IL‐6.
38 Previous studies have shown that interleukin 6 (IL‐6) is a pleiotropic cytokine that controls cell proliferation and differentiation as well as the immune response. Many other cytokines share a receptor component with the IL‐6 receptor (IL6R) and interleukin 6 signal transducer (IL6ST/GP130/IL6‐beta). Innate immune responses are generated by specific families of pattern recognition receptors. A class of cytosolic RNA helicases known as RIG‐I‐like receptors (RLRs) recognizes non‐self RNA that enters a cell as a result of intracellular virus replication. RIG‐I, MDA5, and LGP2 are RLR proteins, expressed in both immune and non‐immune cells. 39 , 40 Retinoic acid‐inducible gene I (RIG‐I) can activate type I IFN in response to SARS‐CoV‐2 in fibroblasts and dendritic cells by activating interferon regulatory factor 3 (IRF3) via kinases. 41 We investigated the interaction between the miRNA and the mRNAs using three databases and predicted bioinformatically that hsa‐miR‐1287‐5p could interact with IL6R and RIG‐I. In the present study, we then observed that the expression of IL6R and RIG‐I was increased in SARS‐CoV‐2‐infected patients with severe symptoms compared with the asymptomatic and negative control groups. According to the criteria described by Ratner, some of the observed expression correlations are weak; additional experiments will be needed to confirm and extend these results. 42 We evaluated the correlations between the expression levels of the genes in the ceRNA regulatory network: between hsa_circ_0004812 and hsa‐miR‐1287‐5p, and between hsa_circ_0004812 and IL6R and RIG‐I. There was a significant negative correlation between hsa_circ_0004812 and hsa‐miR‐1287‐5p expression. Additionally, our findings showed a significant positive correlation between the expression of hsa_circ_0004812 and that of IL6R and RIG‐I.
Interestingly, Zhang and colleagues confirmed the relationship between hsa_circ_0004812 and miR‐1287‐5p by luciferase assays in cells transfected with pHBV. 14 Similarly, our results demonstrate that hsa‐miR‐1287‐5p negatively regulates IL6R and RIG‐I levels in SARS‐CoV‐2‐infected patients. CONCLUSION: In conclusion, the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis has been supported by bioinformatics analysis. Gene ontology analysis indicated that this ceRNA axis is involved in the regulation of the JAK–STAT and PI3K‐AKT signaling pathways. CircRNA‐miRNA‐mRNA pathway analysis in COVID‐19 patients suggested that hsa_circ_0004812 regulates RIG‐I and IL6R by sponging hsa‐miR‐1287‐5p. Moreover, the correlation analysis revealed that this candidate axis is more pronounced in patients with more severe COVID‐19 symptoms and could be considered a potential target for understanding, identifying, and treating COVID‐19. FUNDING INFORMATION: This work was supported by Fasa University of Medical Sciences under Grant number 99157. CONFLICT OF INTEREST: The authors declare that there is no conflict of interest regarding the publication of this paper. PATIENT CONSENT STATEMENT: Informed consent was obtained from all individual participants included in the study.
Background: SARS-CoV-2 is one of the most contagious viruses in the Coronaviridae (CoV) family and has caused a pandemic. The aim of this study is to better understand the role of hsa_circ_0004812 in the SARS-CoV-2-related cytokine storm and its associated molecular mechanisms. Methods: cDNA synthesis was performed after total RNA was extracted from the peripheral blood mononuclear cells (PBMCs) of 46 patients with symptomatic COVID-19, 46 patients with asymptomatic COVID-19, and 46 healthy controls. The expression levels of hsa_circ_0004812, hsa-miR-1287-5p, IL6R, and RIG-I were determined using qRT-PCR, and the potential interactions between these molecules were supported by bioinformatics tools and correlation analysis. Results: hsa_circ_0004812, IL6R, and RIG-I were expressed at higher levels in the severe symptom group than in the negative control group, and the relative expression of these genes in the asymptomatic group was lower than in the severe symptom group. The expression level of hsa-miR-1287-5p was negatively correlated with symptom severity. Bioinformatics analysis predicted a sponging effect of hsa_circ_0004812, as a competing endogenous RNA, on hsa-miR-1287-5p. Moreover, there was a significant positive correlation between hsa_circ_0004812, RIG-I, and IL6R expression, as well as negative expression correlations between hsa_circ_0004812 and hsa-miR-1287-5p and between hsa-miR-1287-5p and RIG-I and IL6R. Conclusions: The results of this in vitro and in silico study show that the hsa_circ_0004812/hsa-miR-1287-5p/IL6R, RIG-I axis can play an important role in the outcome of COVID-19.
BACKGROUND: The COVID‐19 pandemic has become a very challenging and controversial problem. Severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is a newly emerged single‐stranded RNA virus that belongs to the beta‐coronavirus family and causes COVID‐19 (coronavirus disease 2019). COVID‐19, with a dramatic fatality rate, 1 , 2 , 3 triggers the immune system to respond while simultaneously repressing the immune response to allow viral replication. The interplay among infected cells, virus, and immune system induces high levels of cytokine and chemokine secretion known as the “cytokine storm.” 3 , 4 CircRNAs are created through back‐splicing, which results in a closed‐loop structure without poly(A) tails or 5′ caps. 5 These RNA molecules are highly stable and resistant to exonuclease‐mediated degradation. CircRNAs, as key regulators, can modulate gene expression through various molecular mechanisms, including a sponging effect on miRNAs or acting as protein scaffolds. 6 Noncoding RNAs have been reported to affect the inflammatory cytokine storm. 7 CircRNAs, endogenous RNAs that may be classified into coding and noncoding forms, 8 play a proven role in COVID‐19‐related pathogenesis. 9 For example, Wu et al. 9 found that differentially expressed circRNAs in COVID‐19 patients with recurrent disease are mainly involved in the regulation of host cell immunity and inflammation, substance and energy metabolism, the cell cycle, and apoptosis. The most well‐known function of circRNAs is their activity as competing endogenous RNAs (ceRNAs) within a circRNA/miRNA/mRNA regulatory network.
10 , 11 Recent evidence suggests a role for circRNAs in various viral infections, such as human papillomavirus, herpes simplex virus, Epstein–Barr virus, human immunodeficiency virus, Middle East respiratory syndrome coronavirus, and hepatitis B virus infection, and supports them as potential biomarkers distinguishing viral from non‐viral states. 12 One circRNA recognized for its contribution to human viral infection is hsa_circ_0004812, which originates from the NINL gene on chromosome 20. 13 In 2020, Zhang and colleagues revealed that hsa_circ_0004812 was upregulated in chronic hepatitis B (CHB) and HBV‐infected hepatoma cells; this circRNA was also associated with immune suppression during HBV infection through the hsa_circ_0004812/hsa‐miR‐1287‐5p/FSTL1 pathway. 14 According to some studies, circRNAs play an important role in gene expression as microRNA sponges via their microRNA response elements (MREs). 5 MicroRNAs have also been indicated as potential biomarkers for COVID‐19 diagnosis. 15 In a 2020 study, high‐throughput sequencing was used to evaluate the expression levels of different miRNAs, revealing that 35 miRNAs were upregulated and 38 downregulated in patients with COVID‐19. 16 MicroRNAs are small noncoding RNAs, about 20 to 25 nucleotides in length, that regulate gene expression by affecting the stability and translation of their target mRNAs. 17 In this study, we selected hsa‐miR‐1287‐5p, located on chromosome 10, which was previously predicted to have a significant binding site in the SARS‐CoV‐2 genome. 18 JAK/STAT is one of the pathways involved in the inflammatory responses of COVID‐19, such as the cytokine storm. 14 Interleukin 6 (IL‐6) and other inflammatory components such as IL‐12 and TNF are important constituents of the cytokine storm and play a role in the pathogenesis of COVID‐19‐associated pneumonia.
19 IL‐6 can activate STAT3 during inflammatory processes, and both play regulatory roles in the cytokine storm of COVID‐19 infection. 20 Some studies indicate that IL‐6R antagonists can improve the hyper‐inflammatory state in hospitalized COVID‐19 patients. 21 Retinoic acid‐inducible gene I (RIG‐I) is a member of the RIG‐I‐like receptor (RLR) family and plays an important role in sensing viral nucleic acids and producing pro‐inflammatory and antiviral proteins. 22 We hypothesized that the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis may affect the cytokine storm through the JAK/STAT and STAT3 signaling pathways. The aim of this study was to develop a better understanding of the molecular mechanisms involved in COVID‐19 inflammatory responses. Based on primary in silico analysis and the literature, we focused on the hsa_circ_0004812/hsa‐miR‐1287‐5p/IL6R, RIG‐I axis to evaluate its potential role in COVID‐19 inflammatory responses. To this end, we assessed the expression of the members of our candidate axis and measured its correlation with the JAK/STAT and STAT3 signaling pathways.
KEYWORDS: cytokine storm | hsa_circ_0004812 | hsa‐miR‐1287‐5p | SARS‐CoV‐2 | sponging effect
[CONTENT] COVID-19 | Cell Proliferation | Cytokine Release Syndrome | DNA, Complementary | Humans | Leukocytes, Mononuclear | MicroRNAs | RNA, Circular | Receptors, Cell Surface | Receptors, Interleukin-6 | SARS-CoV-2 | Up-Regulation [SUMMARY]
null
[CONTENT] rnas circrnas bind | mirna mrnas circrna | role circrnas viral | circrnas covid 19 | cytokine storm circrnas [SUMMARY]
null
[CONTENT] 19 | covid | covid 19 | expression | mir | 1287 5p | mir 1287 | mir 1287 5p | 1287 | 5p [SUMMARY]
null
[CONTENT] circrnas | virus | 19 | covid | covid 19 | role | cytokine | inflammatory | cytokine storm | storm [SUMMARY]
null
[CONTENT] 19 | covid | covid 19 | patients | expression | value | hsa | hsa mir 1287 | hsa mir 1287 5p | 1287 [SUMMARY]
[CONTENT] axis | analysis | covid 19 | 19 | covid | patients | rig1 il6r | 1287 5p basis correlation | regulated rig1 il6r sponging | potential target understanding identifying [SUMMARY]
[CONTENT] 19 | covid | covid 19 | patients | expression | mir | mir 1287 5p | mir 1287 | 5p | 1287 5p [SUMMARY]
[CONTENT] ||| [SUMMARY]
null
[CONTENT] IL6R ||| ||| miR-1287 ||| RNA | miR-1287 ||| IL-6R | miR-1287 | miR-1287 [SUMMARY]
[CONTENT] miR-1287 | COVID-19 [SUMMARY]
[CONTENT] ||| ||| RNA | 46 | COVID-19 | 46 | COVID-19 | 46 ||| miR-1287 | 5p | IL6R ||| IL6R ||| ||| miR-1287 ||| RNA | miR-1287 ||| IL-6R | miR-1287 | miR-1287 | miR-1287 | COVID-19 [SUMMARY]
One-stage posterior surgery combined with anti-Brucella therapy in the management of lumbosacral brucellosis spondylitis: a retrospective study.
36401260
This study aimed to assess the clinical efficacy of one-stage posterior surgery combined with anti-Brucella therapy in the treatment of lumbosacral brucellosis spondylitis (LBS).
BACKGROUND
From June 2010 to June 2020, the clinical and radiographic data of patients with LBS treated by one-stage posterior surgery combined with anti-Brucella therapy were retrospectively analyzed. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) score, and Oswestry Disability Index (ODI) were used to evaluate clinical outcomes. Frankel's classification system was employed to assess initial and final neurologic function. Fusion of the bone graft was classified by Bridwell's grading system.
METHODS
A total of 55 patients were included in this study, with a mean postoperative follow-up of 2.6 ± 0.8 years (range, 2 to 5). There were 40 males and 15 females with a mean age of 39.8 ± 14.7 years (range, 27 to 57). The Brucella agglutination test was ≥ 1:160 in all patients, while blood culture was positive in 43 patients (78.1%). Statistically significant improvements in ESR, CRP, VAS, ODI, and JOA were observed between the preoperative assessment and final follow-up (P < 0.05). Neurological function improved significantly after surgery in 20 patients with preoperative neurological dysfunction. According to Bridwell's grading system, bone-graft fusion was grade I in 48 cases (87.2%) and grade II in 7 cases (12.7%). No recurrence of infection was observed.
RESULTS
One-stage posterior surgery combined with anti-Brucella therapy is a practical method for treating LBS with severe neurological compression and spinal sagittal imbalance.
CONCLUSION
[ "Humans", "Male", "Female", "Adult", "Middle Aged", "Brucella", "Retrospective Studies", "Spinal Fusion", "Lumbar Vertebrae", "Debridement", "Spondylitis", "Brucellosis" ]
9673312
Background
Human brucellosis is an infectious-allergic zoonotic disease caused by Brucella [1], usually transmitted by occupational contact (e.g., veterinarians, slaughterhouse workers, animal husbandry) or through the digestive tract (consumption of contaminated products). It remains a serious public health problem in livestock-raising regions such as northern China, Australia, the Mediterranean region, and India [2, 3]. A total of 240,000 people worldwide are at risk, with more than 500,000 new cases annually, and 10–85% of patients may have involvement of the skeletal system [4–7]. The lumbosacral spine is the most common site of spinal Brucella spondylitis [8, 9], with an incidence of 2–53% [10], particularly at the L4–5 and L5–S1 levels [11, 12]. However, the insidious progression of brucellosis lesions makes timely anti-Brucella therapy difficult, resulting in irreversible destruction of the lumbar vertebral body, including abscess formation, disc destruction, and vertebral sclerosis [13]. Failure to diagnose and treat LBS promptly may result in serious sequelae such as chronic low back pain, neurological dysfunction, and even kyphotic deformity [13, 14]. In clinical practice, therefore, planning treatment for patients with lumbosacral Brucella spondylitis (LBS) combined with spinal cord compression or kyphotic deformity remains a great challenge for clinicians. At present, the standard treatment of LBS is non-surgical (antibiotic chemotherapy: doxycycline, rifampicin). Surgical intervention should be considered when spinal cord compression or kyphotic deformity occurs; the principles are to remove the lesion, relieve the spinal cord compression, and restore spinal sagittal balance. When surgery is the treatment of choice, the indication for the surgical approach (anterior, posterior, or combined anterior-posterior) remains controversial.
In addition, percutaneous ultrasound- or CT-guided evacuation of paravertebral collections has been reported [13], but recurrence of infection remains a concern given the limited visual field of that procedure. Posterior surgery has been advocated for its efficacy in removing lesions, decompressing neural elements, correcting deformity, and restoring spinal sagittal balance, especially in patients with marked bony destruction and intractable back pain. The purpose of this study was therefore to retrospectively analyze the clinical efficacy of one-stage posterior surgery combined with anti-Brucella therapy in patients with LBS treated at our hospital, and to summarize the surgical indications for this treatment strategy.
null
null
Results
A total of 55 patients were included in this study with a mean postoperative follow-up time of 2.6 ± 0.8 years (range, 2 to 5). There were 40 males and 15 females with a mean age of 39.8 ± 14.7 years (range, 27 to 57, Table 1). All patients were hampered by lower back pain and limited waist mobility. Further, there were 28 patients (50.9%) with radiating pain in the lower limb and 41 patients (74.5%) with a history of night sweats. Destruction of the vertebral body was observed in 30 patients (54.5%), spinal canal stenosis in 32 patients (58.1%), paravertebral abscess formation in 32 patients (58.1%), paravertebral soft tissue involvement in 27 patients (49%), and epidural granulation tissue or abscess in 19 patients (34.5%). The preoperative serum agglutination test was ≥ 1:160 in all patients and the blood culture was positive in 43 patients (78.1%). Thirty-seven patients (67.2%) were infected with Brucella melitensis, 5 patients (9%) with Brucella abortus, and one patient (1.8%) with Brucella suis. The mean serum levels of ESR and CRP were 41.3 ± 15.5 mm/h (range, 25 to 57), and 33.6 ± 18.5 mg/L (range, 14 to 52) respectively. 
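A quick arithmetic check (my own sketch, not from the paper) suggests the percentages reported in this paragraph are truncated, not rounded, to one decimal place: 43/55 is 78.18…%, reported as 78.1%. The function below reproduces the reported figures under that assumption.

```python
# Sanity check of reported percentages from the stated counts (n = 55).
# Assumption (mine, not the authors'): values are truncated to one decimal.
import math

def pct(count, total, digits=1):
    """Percentage of count/total, truncated (floored) to `digits` decimals."""
    scale = 10 ** digits
    return math.floor(count / total * 100 * scale) / scale

print(pct(43, 55))  # blood-culture positive  -> 78.1
print(pct(37, 55))  # Brucella melitensis     -> 67.2
print(pct(48, 55))  # Bridwell grade I fusion -> 87.2
```

Rounding instead of truncating would give 78.2, 67.3, and 87.3, which do not match the reported values; hence the truncation assumption.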
Table 1. Clinical data of patients

| Patient | Age (years) | Gender | Affected level | Pathogen | Extra-spinal infection | Postoperative Frankel grade | Follow-up (years) | Outcome |
|---|---|---|---|---|---|---|---|---|
| 1 | 40–45 | M | L2–L3 | BM | Fever | E | 4 | FOD |
| 2 | 27–32 | M | L3–L5 | BA | Fever + S | E | 3 | FOD |
| 3 | 45–50 | F | L4–L5 | BM | Fever + H + S | C | 3 | ND |
| 4 | 45–50 | M | L2–L4 | Neg | Fever | E | 2 | FOD |
| 5 | 30–35 | M | L2–L3 | BM | Fever | E | 3 | FOD |
| 6 | 32–37 | M | L4–L5 | BM | Fever | E | 5 | FOD |
| 7 | 40–45 | M | L2–L3 | BM | Fever | E | 3 | FOD |
| 8 | 45–50 | M | T12–L3 | BM | Fever | E | 4 | FOD |
| 9 | 35–40 | M | L4–L5 | BM | Fever | E | 2 | FOD |
| 10 | 50–55 | F | L5–S1 | BA | Fever + H | E | 5 | FOD |
| 11 | 40–45 | M | L2–L4 | BM | Fever | E | 3 | FOD |
| 12 | 47–52 | M | L3–L4 | Neg | Fever | E | 2 | FOD |
| 13 | 45–50 | F | L3–L5 | BM | Fever + H | E | 2 | FOD |
| 14 | 35–40 | M | T11–L2 | BA | Fever + H + S | D | 4 | FOD |
| 15 | 40–45 | M | L3–L5 | BM | Fever | E | 2 | FOD |
| 16 | 32–37 | M | L4–L5 | Neg | Fever + H | E | 3 | FOD |
| 17 | 40–45 | M | L5–S1 | Neg | Fever + H + S | C | 4 | ND |
| 18 | 40–45 | F | T10–L2 | BM | Fever | E | 2 | FOD |
| 19 | 40–45 | M | L3–L5 | BM | Fever + H | E | 3 | FOD |
| 20 | 40–45 | M | L3 | BA | Fever | E | 2 | FOD |
| 21 | 30–35 | M | S1 | BM | Fever | E | 5 | FOD |
| 22 | 40–45 | F | L5–S1 | BM | Fever | E | 4 | FOD |
| 23 | 45–50 | M | L4–L5 | BS | Fever | E | 2 | FOD |
| 24 | 42–47 | F | L1–L4 | BM | Fever | E | 2 | FOD |
| 25 | 50–60 | M | L3 | BA | Fever + H | E | 3 | FOD |
| 26 | 42–47 | M | L5–S1 | BM | Fever | E | 2 | FOD |
| 27 | 35–40 | M | L3 | BM | Fever + H + S | D | 2 | FOD |
| 28 | 35–40 | M | L5 | Neg | Fever + H | E | 4 | FOD |
| 29 | 45–50 | M | L1–L3 | BM | Fever | E | 3 | FOD |
| 30 | 32–37 | F | T12 | BM | Fever | E | 2 | FOD |
| 31 | 40–45 | M | L2–L3 | Neg | Fever | E | 3 | FOD |
| 32 | 35–40 | F | T12 | BM | Fever | E | 2 | FOD |
| 33 | 25–30 | M | L1–L2 | BM | Fever + H | E | 4 | FOD |
| 34 | 38–42 | M | L2–L4 | BM | Fever | E | 2 | FOD |
| 35 | 35–40 | M | T12–L2 | BM | Fever + H + S | E | 3 | FOD |
| 36 | 38–42 | F | L1–L3 | Neg | Fever | E | 2 | FOD |
| 37 | 35–40 | M | L4–L5 | BM | Fever | E | 4 | FOD |
| 38 | 25–30 | F | L5–S1 | BM | Fever + H | E | 3 | FOD |
| 39 | 25–30 | F | T12–L2 | Neg | Fever | E | 5 | FOD |
| 40 | 30–35 | M | L3–L5 | BM | Fever | E | 2 | FOD |
| 41 | 45–50 | M | T12 | BM | Fever + H + S | E | 3 | FOD |
| 42 | 42–47 | M | S1–S2 | BM | Fever + H | E | 3 | FOD |
| 43 | 52–57 | M | L5–S1 | BM | Fever | E | 4 | FOD |
| 44 | 42–47 | M | L2–L3 | Neg | Fever | E | 3 | FOD |
| 45 | 35–40 | F | T12–L2 | BM | Fever | E | 3 | FOD |
| 46 | 32–37 | M | L4–L5 | BM | Fever + H | E | 2 | FOD |
| 47 | 45–50 | F | L4–L5 | BM | Fever + H + S | E | 3 | FOD |
| 48 | 30–35 | F | T12–L2 | BM | Fever + H + S | E | 4 | FOD |
| 49 | 40–45 | M | L2–L3 | Neg | Fever + H + S | E | 3 | FOD |
| 50 | 40–45 | M | L3–L5 | BM | Fever + H | E | 2 | FOD |
| 51 | 45–50 | M | L5–S1 | Neg | Fever | E | 4 | FOD |
| 52 | 45–50 | F | T12–L2 | BM | Fever | E | 3 | FOD |
| 53 | 40–45 | M | L2–L3 | Neg | Fever | E | 3 | FOD |
| 54 | 30–35 | M | L2–L4 | BM | Fever | E | 2 | FOD |
| 55 | 28–32 | M | L5 | BM | Fever + H | E | 5 | FOD |

BA, Brucella abortus; BM, Brucella melitensis; BS, Brucella suis; F, female; FOD, free of disease; H, hepatomegaly; M, male; Neg, negative; ND, neurological dysfunction; S, splenomegaly.

The poisoning symptoms were relieved in all patients after posterior surgery combined with anti-Brucella therapy, with no local spinal tenderness or percussion pain at follow-up. The mean operation time was 138.7 ± 63.8 min (range, 75 to 205), with a mean intraoperative blood loss of 215.4 ± 77.1 mL (range, 135 to 300). The average hospitalization time was 12.7 ± 6.2 days (range, 6 to 19). ESR, CRP, VAS, ODI, and JOA improved after surgery, with statistically significant differences between the preoperative values and final follow-up (P < 0.05, Table 2). Typical cases are shown in Figs. 1 and 2.

Table 2. Comparison of preoperative and postoperative VAS, ODI, and JOA scores and inflammatory indicators

| Variable | Preoperative | Three postoperative months | Final follow-up | Improvement rate (%) |
|---|---|---|---|---|
| ESR | 41.35 ± 15.50 | 9.15 ± 3.17* | 7.31 ± 2.34*# | 91.6 |
| CRP | 33.61 ± 18.54 | 5.18 ± 1.79* | 2.04 ± 0.71*# | 86.3 |
| VAS | 6.04 ± 1.49 | 1.69 ± 0.57* | 0.72 ± 0.53*# | 92.8 |
| ODI (%) | 54.08 ± 9.92 | 15.87 ± 5.93* | 10.44 ± 5.04*# | 83.1 |
| JOA | 15.12 ± 3.89 | 23.47 ± 3.13* | 25.43 ± 3.49*# | 80.5 |

CRP, C-reactive protein; ESR, erythrocyte sedimentation rate; JOA, Japanese Orthopaedic Association; ODI, Oswestry Disability Index; VAS, visual analogue scale. *Compared with preoperative values, P < 0.05. #Compared with three postoperative months, P < 0.05.

Fig. 1. A 44-year-old female with lumbosacral Brucella spondylitis. a–d The lesion of the lumbosacral spine (L3, L4) shown by preoperative anteroposterior and lateral X-ray, CT sagittal reconstruction, and MRI. e, f X-ray at 3 postoperative months showing the vertebral body firmly fixed by the screws. g, h CT sagittal and three-dimensional reconstruction at 6 postoperative months demonstrating complete removal of the lesion and stable internal fixation without recurrence.

Fig. 2. A 57-year-old female with lumbosacral Brucella spondylitis. a–d L5 and S1 vertebral body destruction and intervertebral space narrowing caused by infection, shown by anteroposterior and lateral X-ray, CT sagittal reconstruction, and MRI. e, f X-ray at 3 postoperative months showing the vertebral body firmly fixed by the screws. g, h CT sagittal and three-dimensional reconstruction at 6 postoperative months demonstrating complete removal of the lesion and internal fixation in a satisfactory position without recurrence of infection.

Neurological function improved significantly after surgery in 20 patients with preoperative neurological dysfunction. In brief, two patients with preoperative Frankel grade C recovered to grade D at 1 postoperative month, and one patient with preoperative grade C recovered to grade E at 6 postoperative months. Seven of the 17 patients with grade D recovered to grade E at 1 postoperative month, and the remaining cases recovered gradually to grade E during follow-up. Only 2 patients with preoperative neurological dysfunction (Frankel grade C) did not improve after surgery (Table 3). The mean fusion time was 6.9 ± 0.7 months (range, 6 to 8). According to Bridwell's grading system, bone-graft fusion was grade I in 48 cases (87.2%) and grade II in 7 cases (12.7%). No loosening or breakage of the internal fixation was found during follow-up.

Table 3. Comparison of neurological outcomes after surgery

| Frankel grade* | Preoperative | One postoperative month | Three postoperative months | Six postoperative months | Final follow-up |
|---|---|---|---|---|---|
| A | 0 | 0 | 0 | 0 | 0 |
| B | 0 | 0 | 0 | 0 | 0 |
| C | 3 | 1 | 0 | 0 | 0 |
| D | 17 | 12 | 5 | 3 | 2 |
| E | 35 | 42 | 50 | 52 | 53 |

*Frankel classification.
Conclusion
Standard anti-Brucella therapy is indispensable for infection control in the early stage of LBS. One-stage posterior surgery combined with anti-Brucella therapy is a practical method for treating LBS with severe neurological compression and spinal sagittal imbalance.
[ "Background", "Patients and methods", "Surgical technique", "Postoperative management", "Statistical analysis" ]
[ "Human brucellosis disease was an infectious zoonotic allergic disease caused by Brucella [1], which was usually transmitted by occupational contact (e.g., veterinarians, slaughterhouses, animal husbandry) and the digestive tract (consumption of contaminated products). It remained a serious public health problem in livestock regions, such as northern China, Australia, the Mediterranean region, and India [2, 3]. A total of 240,000 people worldwide were at risk, with more than 500,000 new cases annually, and 10–85% of patients might be accompanied by involvement of the skeletal system [4–7].\nLumbosacral was the common region of the spinal Brucella spondylitis [8, 9], with an incidence of 2–53% [10], especially L4–5 level, and L5–S1 level [11, 12]. However, the insidious progression of brucellosis lesion made anti-Brucella therapy hardly intervene promptly, resulting in irreversible destruction of the lumbar vertebral body, including abscess formation, disc destruction, and vertebral sclerosis [13]. Failure to diagnose and treat LBS promptly might result in serious sequelae, such as chronic low back pain, neurological dysfunction, and even kyphotic deformity [13, 14]. In clinical practice, hence, the treatment plan for patients with lumbosacral Brucella spondylitis (LBS) combined with spinal cord compression symptoms or kyphotic deformity remains a great challenge for clinicians.\nAt present, the standard treatment of LBS was non-surgical interventions (antibiotics chemotherapy: doxycycline, rifamycin). Surgical intervention should be considered when the spinal cord compression symptoms or kyphotic deformity occurred, and the principle was to remove the lesion, relieve the spinal cord compression and restore the spinal sagittal balance. When surgery was the treatment of choice, the indication of surgical procedure (anterior, posterior and combined anterior and posterior surgery) remains controversial. 
Besides, the clinical efficacy of the percutaneous ultrasonic or CT-guided evacuation of paravertebral collections has also been reported [13], but the recurrence of infection still exists since the limited visual field of the surgical procedure. Posterior surgery was suggested since its satisfactory efficacy in removing lesions, decompression, deformity correction, and restoring the spinal sagittal balance, especially for patients with significant lesion destruction and intractable back pain. Therefore, the purpose of this study was to retrospectively analyze the clinical efficacy of patients with LBS managed by one-stage posterior surgery combined with anti-Brucella therapy in our hospital and summarize the surgical indications for the treatment strategy.", "After receiving written informed consent from participants and approval from the Ethics Committee of our institute, the clinical data of patients with LBS treated by one-stage posterior surgery combined with anti-Brucella therapy were retrospectively collected and evaluated, from June 2010 to June 2020. Inclusion criteria: brucellosis poisoning symptoms [back pain, fever (high “spikes” in the afternoon), night sweats, body-wide aches, headache]; serum agglutination test ≥ 1:160; abscess formation in the paraspinal or psoas muscle; vertebral body disruption, sclerosis of the residual bone and osteophyte formation (“beak” shape of vertebrae anterior edge) confirmed by imaging films; managed by one-stage posterior surgery combined with anti-Brucella therapy; follow-up time > 1 year. 
Patients were excluded for incomplete medical records, poor compliance, combined with other immune or parasitic diseases, or follow-up time less than 1 year.\nThe demographic data, pharmacologic treatment records, biopsy or culture results of the cyst, index of C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR) were documented.\nSurgical technique A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. 
The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially.\nPostoperative management Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index (ODI) scores were used to evaluate the clinical outcomes. Frankel's classification system was employed to assess the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell's grading system.\nStatistical analysis Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered statistically significant.", "A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. 
The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially.", "Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel’s classification system was employed to access the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell’s grading system.", "Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. 
Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered a statistical significance." ]
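The paired comparison described under "Statistical analysis" can be sketched as follows. This is a minimal illustration of the paired t statistic only (the Shapiro-Wilk normality check would need a statistics library such as SciPy); the VAS values are hypothetical, not the study's raw data.

```python
# Paired t statistic: t = mean(d) / (stdev(d) / sqrt(n)), where d are the
# per-patient differences. Values below are invented for illustration.
import math
from statistics import mean, stdev

pre_vas = [6, 7, 5, 8, 6, 7, 5, 6]    # hypothetical preoperative VAS scores
final_vas = [1, 1, 0, 2, 1, 1, 0, 1]  # hypothetical final follow-up VAS scores

def paired_t(before, after):
    """Return the paired t statistic for two matched samples."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

t = paired_t(pre_vas, final_vas)
print(f"t = {t:.2f} with df = {len(pre_vas) - 1}")
```

The resulting t statistic would then be compared against the t distribution with n - 1 degrees of freedom to obtain the P value reported in the paper's comparisons.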
[ null, null, null, null, null ]
[ "Background", "Patients and methods", "Surgical technique", "Postoperative management", "Statistical analysis", "Results", "Discussion", "Conclusion" ]
[ "Human brucellosis disease was an infectious zoonotic allergic disease caused by Brucella [1], which was usually transmitted by occupational contact (e.g., veterinarians, slaughterhouses, animal husbandry) and the digestive tract (consumption of contaminated products). It remained a serious public health problem in livestock regions, such as northern China, Australia, the Mediterranean region, and India [2, 3]. A total of 240,000 people worldwide were at risk, with more than 500,000 new cases annually, and 10–85% of patients might be accompanied by involvement of the skeletal system [4–7].\nLumbosacral was the common region of the spinal Brucella spondylitis [8, 9], with an incidence of 2–53% [10], especially L4–5 level, and L5–S1 level [11, 12]. However, the insidious progression of brucellosis lesion made anti-Brucella therapy hardly intervene promptly, resulting in irreversible destruction of the lumbar vertebral body, including abscess formation, disc destruction, and vertebral sclerosis [13]. Failure to diagnose and treat LBS promptly might result in serious sequelae, such as chronic low back pain, neurological dysfunction, and even kyphotic deformity [13, 14]. In clinical practice, hence, the treatment plan for patients with lumbosacral Brucella spondylitis (LBS) combined with spinal cord compression symptoms or kyphotic deformity remains a great challenge for clinicians.\nAt present, the standard treatment of LBS was non-surgical interventions (antibiotics chemotherapy: doxycycline, rifamycin). Surgical intervention should be considered when the spinal cord compression symptoms or kyphotic deformity occurred, and the principle was to remove the lesion, relieve the spinal cord compression and restore the spinal sagittal balance. When surgery was the treatment of choice, the indication of surgical procedure (anterior, posterior and combined anterior and posterior surgery) remains controversial. 
Besides, the clinical efficacy of the percutaneous ultrasonic or CT-guided evacuation of paravertebral collections has also been reported [13], but the recurrence of infection still exists since the limited visual field of the surgical procedure. Posterior surgery was suggested since its satisfactory efficacy in removing lesions, decompression, deformity correction, and restoring the spinal sagittal balance, especially for patients with significant lesion destruction and intractable back pain. Therefore, the purpose of this study was to retrospectively analyze the clinical efficacy of patients with LBS managed by one-stage posterior surgery combined with anti-Brucella therapy in our hospital and summarize the surgical indications for the treatment strategy.", "After receiving written informed consent from participants and approval from the Ethics Committee of our institute, the clinical data of patients with LBS treated by one-stage posterior surgery combined with anti-Brucella therapy were retrospectively collected and evaluated, from June 2010 to June 2020. Inclusion criteria: brucellosis poisoning symptoms [back pain, fever (high “spikes” in the afternoon), night sweats, body-wide aches, headache]; serum agglutination test ≥ 1:160; abscess formation in the paraspinal or psoas muscle; vertebral body disruption, sclerosis of the residual bone and osteophyte formation (“beak” shape of vertebrae anterior edge) confirmed by imaging films; managed by one-stage posterior surgery combined with anti-Brucella therapy; follow-up time > 1 year. 
Patients were excluded for incomplete medical records, poor compliance, combined with other immune or parasitic diseases, or follow-up time less than 1 year.\nThe demographic data, pharmacologic treatment records, biopsy or culture results of the cyst, index of C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR) were documented.\nSurgical technique A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. 
The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially.\nPostoperative management Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel’s classification system was employed to assess the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell’s grading system.\nStatistical analysis Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered statistically significant.", "A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. 
The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially.", "Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel’s classification system was employed to assess the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell’s grading system.", "Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. 
Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered statistically significant.", "A total of 55 patients were included in this study with a mean postoperative follow-up time of 2.6 ± 0.8 years (range, 2 to 5). There were 40 males and 15 females with a mean age of 39.8 ± 14.7 years (range, 27 to 57, Table 1). All patients were hampered by lower back pain and limited waist mobility. Further, there were 28 patients (50.9%) with radiating pain in the lower limb and 41 patients (74.5%) with a history of night sweats. Destruction of the vertebral body was observed in 30 patients (54.5%), spinal canal stenosis in 32 patients (58.1%), paravertebral abscess formation in 32 patients (58.1%), paravertebral soft tissue involvement in 27 patients (49%), and epidural granulation tissue or abscess in 19 patients (34.5%). The preoperative serum agglutination test was ≥ 1:160 in all patients and the blood culture was positive in 43 patients (78.1%). Thirty-seven patients (67.2%) were infected with Brucella melitensis, 5 patients (9%) with Brucella abortus, and one patient (1.8%) with Brucella suis. 
The mean serum levels of ESR and CRP were 41.3 ± 15.5 mm/h (range, 25 to 57), and 33.6 ± 18.5 mg/L (range, 14 to 52) respectively.\n\nTable 1 Clinical data of patients\nPatient | Age (range, year) | Gender (M/F) | Affected level | Pathogen | Extra-spine infestation | Postoperative grade of FC | Follow-up time (year) | Outcome\n1 | 40–45 | M | L2–L3 | BM | Fever | E | 4 | FOD\n2 | 27–32 | M | L3–L5 | BA | Fever + S | E | 3 | FOD\n3 | 45–50 | F | L4–L5 | BM | Fever + H + S | C | 3 | ND\n4 | 45–50 | M | L2–L4 | Neg | Fever | E | 2 | FOD\n5 | 30–35 | M | L2–L3 | BM | Fever | E | 3 | FOD\n6 | 32–37 | M | L4–L5 | BM | Fever | E | 5 | FOD\n7 | 40–45 | M | L2–L3 | BM | Fever | E | 3 | FOD\n8 | 45–50 | M | T12–L3 | BM | Fever | E | 4 | FOD\n9 | 35–40 | M | L4–L5 | BM | Fever | E | 2 | FOD\n10 | 50–55 | F | L5–S1 | BA | Fever + H | E | 5 | FOD\n11 | 40–45 | M | L2–L4 | BM | Fever | E | 3 | FOD\n12 | 47–52 | M | L3–L4 | Neg | Fever | E | 2 | FOD\n13 | 45–50 | F | L3–L5 | BM | Fever + H | E | 2 | FOD\n14 | 35–40 | M | T11–L2 | BA | Fever + H + S | D | 4 | FOD\n15 | 40–45 | M | L3–L5 | BM | Fever | E | 2 | FOD\n16 | 32–37 | M | L4–L5 | Neg | Fever + H | E | 3 | FOD\n17 | 40–45 | M | L5–S1 | Neg | Fever + H + S | C | 4 | ND\n18 | 40–45 | F | T10–L2 | BM | Fever | E | 2 | FOD\n19 | 40–45 | M | L3–L5 | BM | Fever + H | E | 3 | FOD\n20 | 40–45 | M | L3 | BA | Fever | E | 2 | FOD\n21 | 30–35 | M | S1 | BM | Fever | E | 5 | FOD\n22 | 40–45 | F | L5–S1 | BM | Fever | E | 4 | FOD\n23 | 45–50 | M | L4–L5 | BS | Fever | E | 2 | FOD\n24 | 42–47 | F | L1–L4 | BM | Fever | E | 2 | FOD\n25 | 50–60 | M | L3 | BA | Fever + H | E | 3 | FOD\n26 | 42–47 | M | L5–S1 | BM | Fever | E | 2 | FOD\n27 | 35–40 | M | L3 | BM | Fever + H + S | D | 2 | FOD\n28 | 35–40 | M | L5 | Neg | Fever + H | E | 4 | FOD\n29 | 45–50 | M | L1–L3 | BM | Fever | E | 3 | FOD\n30 | 32–37 | F | T12 | BM | Fever | E | 2 | FOD\n31 | 40–45 | M | L2–L3 | Neg | Fever | E | 3 | FOD\n32 | 35–40 | F | T12 | BM | Fever | E | 2 | FOD\n33 | 25–30 | M | L1–L2 | BM | Fever + H | E | 4 | FOD\n34 | 38–42 | M | L2–L4 | BM | Fever | E | 2 | FOD\n35 | 35–40 | M | T12–L2 | BM | Fever + H + S | E | 3 | FOD\n36 | 38–42 | F | L1–L3 | Neg | Fever | E | 2 | FOD\n37 | 35–40 | M | L4–L5 | BM | Fever | E | 4 | FOD\n38 | 25–30 | F | L5–S1 | BM | Fever + H | E | 3 | FOD\n39 | 25–30 | F | T12–L2 | Neg | Fever | E | 5 | FOD\n40 | 30–35 | M | L3–L5 | BM | Fever | E | 2 | FOD\n41 | 45–50 | M | T12 | BM | Fever + H + S | E | 3 | FOD\n42 | 42–47 | M | S1–S2 | BM | Fever + H | E | 3 | FOD\n43 | 52–57 | M | L5–S1 | BM | Fever | E | 4 | FOD\n44 | 42–47 | M | L2–L3 | Neg | Fever | E | 3 | FOD\n45 | 35–40 | F | T12–L2 | BM | Fever | E | 3 | FOD\n46 | 32–37 | M | L4–L5 | BM | Fever + H | E | 2 | FOD\n47 | 45–50 | F | L4–L5 | BM | Fever + H + S | E | 3 | FOD\n48 | 30–35 | F | T12–L2 | BM | Fever + H + S | E | 4 | FOD\n49 | 40–45 | M | L2–L3 | Neg | Fever + H + S | E | 3 | FOD\n50 | 40–45 | M | L3–L5 | BM | Fever + H | E | 2 | FOD\n51 | 45–50 | M | L5–S1 | Neg | Fever | E | 4 | FOD\n52 | 45–50 | F | T12–L2 | BM | Fever | E | 3 | FOD\n53 | 40–45 | M | L2–L3 | Neg | Fever | E | 3 | FOD\n54 | 30–35 | M | L2–L4 | BM | Fever | E | 2 | FOD\n55 | 28–32 | M | L5 | BM | Fever + H | E | 5 | FOD\nBA, Brucella abortus; BM, Brucella melitensis; BS, Brucella suis; F, female; FOD, free of disease; H, hepatomegaly; M, male; Neg, negative; ND, neurological dysfunction; S, splenomegaly\nThe poisoning symptoms were relieved in all patients after posterior surgery combined with anti-Brucella therapy, without local spine tenderness or percussion pain at follow-up. The mean operation time was 138.7 ± 63.8 min (range, 75 to 205) with a mean intraoperative blood loss of 215.4 ± 77.1 mL (range, 135 to 300). The average hospitalization time was 12.7 ± 6.2 days (range, 6 to 19). ESR, CRP, VAS, ODI, and JOA were improved after surgery, and a statistical difference was observed between preoperative and final follow-up (P < 0.05, Table 2). The typical cases described in this study were referred to in Figs. 1 and 2.\n\nTable 2 Comparison of preoperative, postoperative VAS, ODI, JOA scores, and inflammatory indicators\nVariable | Preoperative | Three postoperative months | Final follow-up | Improvement rate (%)\nESR | 41.35 ± 15.50 | 9.15 ± 3.17* | 7.31 ± 2.34*# | 91.6\nCRP | 33.61 ± 18.54 | 5.18 ± 1.79* | 2.04 ± 0.71*# | 86.3\nVAS | 6.04 ± 1.49 | 1.69 ± 0.57* | 0.72 ± 0.53*# | 92.8\nODI (%) | 54.08 ± 9.92 | 15.87 ± 5.93* | 10.44 ± 5.04*# | 83.1\nJOA | 15.12 ± 3.89 | 23.47 ± 3.13* | 25.43 ± 3.49*# | 80.5\nCRP C-reactive protein, ESR Erythrocyte sedimentation rate, JOA Japanese Orthopaedic Association, ODI Oswestry disability index, VAS Visual analogue scale\n*Comparison of preoperative, P < 0.05\n#Comparison of three postoperative months, P < 0.05\n\nFig. 1 A 44-year-old female with lumbosacral Brucella spondylitis. 
a–d The lesion of the lumbosacral spine (L3, L4) was shown by the preoperative anteroposterior and lateral X-ray, CT sagittal reconstruction, and MRI. e, f The vertebral body was fixed firmly by the screw at 3 postoperative months, which was presented by X-ray. g, h CT sagittal and three-dimensional reconstruction demonstrated that the lesion was removed completely, and the internal fixation was stable without recurrence of the lesion at 6 postoperative months\n\nFig. 2 A 57-year-old female with lumbosacral Brucella spondylitis. a–d L5, S1 vertebral body destruction, and intervertebral space narrowing caused by infection were indicated by the anteroposterior and lateral X-ray, CT sagittal reconstruction, and MRI. e, f The vertebral body was fixed firmly by the screw at 3 postoperative months, which was presented by X-ray. g, h CT sagittal and three-dimensional reconstruction demonstrated that the lesion was removed completely, and internal fixation was in a satisfactory position without recurrence of infection at 6 postoperative months\nNeurological function was significantly improved in 20 patients with preoperative neurological dysfunction after surgery. In short, two patients with preoperative Frankel’s grade C recovered to grade D at 1 postoperative month, and one patient with preoperative Frankel’s grade C recovered to grade E at 6 postoperative months. Seven of the 17 patients with Frankel’s grade D recovered to grade E at 1 postoperative month, and the remaining cases recovered gradually to grade E at the follow-up. Only 2 patients with preoperative neurological dysfunction (Frankel’s grade C) were not improved after surgery (Table 3). The mean fusion time was 6.9 ± 0.7 months (range, 6 to 8). According to Bridwell’s grading system, the fusion of bone grafting in 48 cases (87.2%) was defined as grade I, and grade II in 7 cases (12.7%). None of the internal fixation loosening and breakage was found during the follow-up.\n\nTable 3 Comparison of neurological outcomes after surgery\nFrankel’s grade* | Preoperative | One postoperative month | Three postoperative months | Six postoperative months | Final follow-up\nA | 0 | 0 | 0 | 0 | 0\nB | 0 | 0 | 0 | 0 | 0\nC | 3 | 1 | 0 | 0 | 0\nD | 17 | 12 | 5 | 3 | 2\nE | 35 | 42 | 50 | 52 | 53\n*Frankel classification", "The pathological basis of LBS was chronic degeneration of the intervertebral disc and vertebral bone destruction, and intractable back pain as the main clinical manifestations [15]. Intervertebral space stenosis was the common presentation of radiography, presented in 32 patients (58.1%) in this study. In the view of anatomy, the intervertebral joint was the stress concentration area behind the spine, which might be easily affected by intervertebral space stenosis. 
The lesions might slowly invade the articular surface of the vertebrae body and resulted in the proliferation and hardening of the articular surface when the anti-Brucella therapy was not intervened timely. The biomechanical structure stability of the intervertebral joint and spine sagittal balance might be destroyed if the progression continued. Via published studies [16–18], the phenomenon that invasion of the synovium and cartilage surface of joints by Brucella was more common. Posterior joint destruction combined with disc degeneration might result in vertebral slippage. On that occasion, intractable back pain could be worsened by spinal sagittal imbalance and severe vertebral slippage, as well as the injury of the nerve root. Fortunately, the velocity of infiltrative bone destruction in Brucella infestation was slow. The process of bone destruction was accompanied by the process of bone repair, so the sequestrum was not commonly formed [17, 19]. Hence, the preservation of the vertebrae’s structural morphology was a special character of LBS, which was different from spinal tuberculosis [19]. The spinal stability of patients with LBS was usually better than that of spinal tuberculosis, and that’s why kyphotic deformity was rare among patients with LBS.\nBlood culture remained the gold standard for diagnosis of Brucella infestation [1, 3]. Yet, the sensitivity of blood culture depended on several factors, especially the disease phase and previous antibiotics usage. In the acute phase, the sensitivity of blood culture might be more than 80%, while for patients with chronic infestation, its sensitivity was approximately 30–70% [20]. Although the population was susceptible to Brucella, most of the clinical symptoms could be effectively relieved by prompt and standard antibacterial therapy [21, 22]. The indications and timing of surgical intervention were still controversial. 
But the current recognition was that the surgery should be prepared for patients whose Brucella poisoning symptoms cannot be effectively improved by anti-Brucella therapy (6 weeks of medication and 1 week of drug withdrawal), and combined with one of the following symptoms [7, 23, 24]: paravertebral abscess or psoas abscess; intervertebral disc destruction; spinal structure instability; accompanied by other bacterial infections.\nThe purposes of surgery were radically removing the lesion, improving the local blood circulation, relieving the nerve root compression symptoms, and restoring the spinal sagittal balance to promote the early limbs’ function recovery. At present, the surgical procedures mainly consisted of anterior, posterior, and combined anterior and posterior surgery. The choice of approach should be based on the location of the spinal lesion, degree of vertebral destruction, level of spinal nerve compression, and surgeon’s technical proficiency. It had been reported in the literature that posterior surgery was more practical for intraspinal granulation and abscess removal, especially for patients with intraspinal nerve damage caused by posterior column lesions. While the combined anterior and posterior surgery was recommended for patients with perivertebral abscess, psoas abscess, or greater anterior column destruction [7]. In this study, the posterior surgery was successfully performed in all patients, since the paravertebral abscess combined with spinal nerve compression symptoms occurred in most patients (83.6%). According to Frankel’s classification, the spinal nerve compression symptoms were improved from grade C/D to grade E in 53 patients (96.3%) at the final follow-up.\nTo our knowledge, anterior surgery had also been recommended by previous studies. 
However, this method not only required meticulous surgical technique with a prolonged operative time but also left the risk of damaging the iliac vessels and sympathetic nerves of the complex anatomy of the anterior lumbosacral spine. Yin et al. [25] reported a case series of 16 patients with Brucella spondylitis managed by anterior surgery with a mean operation time and intraoperative blood loss of 237.4 min and 580.2 mL, respectively. In this cohort, the mean operation time was 138.7 min (range, 80 to 200) with a mean intraoperative blood loss of 215.4 mL (range, 60 to 370), which was significantly less than Yin’s study. Additionally, there was no iatrogenic back pain, and the fusion of bone grafting in 48 cases (87.2%) was defined as grade I, and grade II in 7 cases (12.7%).\nAlthough the posterior surgery made up for the lack of anterior surgery, the spinal sagittal imbalance caused by serious destruction of the anterior column could not be ignored. The persistent nerve compression symptoms (back pain or numbness) might also be caused by the long period of the insidious development of vertebrae destruction. The intervertebral space and the upper and lower endplates of adjacent vertebral bodies were usually involved, but the distribution of abscesses was limited, which rarely exceeded the edge of the vertebral body [1, 6]. A retrospective comparative study published by Ulu-Kilic et al. [4] also showed that the extent of paravertebral abscesses in thoracolumbar Brucella spondylitis generally did not exceed the upper and lower edges of the destroyed vertebral body. Some patients with nerve root compression symptoms caused by intervertebral discs bulging from intraspinal abscesses or swelling could also be effectively treated by prolonging the antibacterial therapy period. Thus, the completeness of lesion removal should not be overemphasized [23]. Chen et al. 
[8] reported that 24 patients with Brucella spondylitis were treated with one-stage posterior surgery and received satisfactory postoperative results. In this study, the neurological compression symptoms of two patients were not improved. We considered that the irreversible nerve damage might be caused by their long period of chronic infestation. In our experience, hence, the earlier the anti-Brucella therapy intervention, the lower the incidence of vertebrae destruction and neurological compression symptoms. Clinicians in the endemic area should become aware of brucellosis in the differential diagnosis of febrile diseases with peculiar musculoskeletal involvement to prevent the increased medical burden. Yet, it was necessary to perform the surgery when spinal sagittal imbalance occurred due to the development of the infestation.\nESR and CRP returned to a normal level in the 3rd postoperative month (P < 0.05). Compared with the preoperative values, the pain symptoms and neurological dysfunction were improved (P < 0.05). In our opinion, posterior surgery was recommended for patients without neurological dysfunction to effectively avoid excessive damage to the structure of the posterior column of the spine, which also decreased the risk of intraoperative injury to the nerve roots and dissemination of infection. Besides, for patients with neurological compression symptoms, hemi-spinal fenestration and resection of the facet joint were suggested to be performed first on the more severely affected side. Once the severe side was completely decompressed, it was easy to decompress the mild side. The decompression should be carefully manipulated to avoid the fracture of the contralateral lamina or excessive destruction of the facet joints.\nLast but not least, the results of this study might be affected by potential limitations given its retrospective and single-centre nature. There was also no standardized surgical method for the treatment of advanced LBS. 
Hence, a prospective, multi-centre study with a larger sample would be helpful for the management of LBS.", "Standard anti-Brucella therapy was indispensable for infestation control in the early stage of LBS. One-stage posterior surgery combined with anti-Brucella therapy was a practical method in the treatment of LBS with severe neurological compression and spinal sagittal imbalance." ]
[ null, null, null, null, null, "results", "discussion", "conclusion" ]
[ "Brucellosis", "Infection", "Lumbosacral", "Spine", "Surgery" ]
Background: Human brucellosis disease was an infectious zoonotic allergic disease caused by Brucella [1], which was usually transmitted by occupational contact (e.g., veterinarians, slaughterhouses, animal husbandry) and the digestive tract (consumption of contaminated products). It remained a serious public health problem in livestock regions, such as northern China, Australia, the Mediterranean region, and India [2, 3]. A total of 240,000 people worldwide were at risk, with more than 500,000 new cases annually, and 10–85% of patients might be accompanied by involvement of the skeletal system [4–7]. Lumbosacral was the common region of the spinal Brucella spondylitis [8, 9], with an incidence of 2–53% [10], especially L4–5 level, and L5–S1 level [11, 12]. However, the insidious progression of brucellosis lesion made anti-Brucella therapy hardly intervene promptly, resulting in irreversible destruction of the lumbar vertebral body, including abscess formation, disc destruction, and vertebral sclerosis [13]. Failure to diagnose and treat LBS promptly might result in serious sequelae, such as chronic low back pain, neurological dysfunction, and even kyphotic deformity [13, 14]. In clinical practice, hence, the treatment plan for patients with lumbosacral Brucella spondylitis (LBS) combined with spinal cord compression symptoms or kyphotic deformity remains a great challenge for clinicians. At present, the standard treatment of LBS was non-surgical interventions (antibiotics chemotherapy: doxycycline, rifamycin). Surgical intervention should be considered when the spinal cord compression symptoms or kyphotic deformity occurred, and the principle was to remove the lesion, relieve the spinal cord compression and restore the spinal sagittal balance. When surgery was the treatment of choice, the indication of surgical procedure (anterior, posterior and combined anterior and posterior surgery) remains controversial. 
Besides, the clinical efficacy of the percutaneous ultrasonic or CT-guided evacuation of paravertebral collections has also been reported [13], but the recurrence of infection still exists since the limited visual field of the surgical procedure. Posterior surgery was suggested since its satisfactory efficacy in removing lesions, decompression, deformity correction, and restoring the spinal sagittal balance, especially for patients with significant lesion destruction and intractable back pain. Therefore, the purpose of this study was to retrospectively analyze the clinical efficacy of patients with LBS managed by one-stage posterior surgery combined with anti-Brucella therapy in our hospital and summarize the surgical indications for the treatment strategy. Patients and methods: After receiving written informed consent from participants and approval from the Ethics Committee of our institute, the clinical data of patients with LBS treated by one-stage posterior surgery combined with anti-Brucella therapy were retrospectively collected and evaluated, from June 2010 to June 2020. Inclusion criteria: brucellosis poisoning symptoms [back pain, fever (high “spikes” in the afternoon), night sweats, body-wide aches, headache]; serum agglutination test ≥ 1:160; abscess formation in the paraspinal or psoas muscle; vertebral body disruption, sclerosis of the residual bone and osteophyte formation (“beak” shape of vertebrae anterior edge) confirmed by imaging films; managed by one-stage posterior surgery combined with anti-Brucella therapy; follow-up time > 1 year. Patients were excluded for incomplete medical records, poor compliance, combined with other immune or parasitic diseases, or follow-up time less than 1 year. The demographic data, pharmacologic treatment records, biopsy or culture results of the cyst, index of C-reactive protein (CRP), and erythrocyte sedimentation rate (ESR) were documented. 
Surgical technique A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially. A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. 
Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially. Postoperative management Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. 
All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel’s classification system was employed to access the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell’s grading system. Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel’s classification system was employed to access the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell’s grading system. Statistical analysis Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered a statistical significance. Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). 
Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered a statistical significance. Surgical technique: A posterior midline incision was performed to expose the spinous process, lamina, articular process, and screw insertion entrance point of the diseased vertebra. Two pedicle screws were respectively inserted above and below the lesion after confirming a satisfactory position. Temporary rod fixation was applied to the milder symptom side. Fenestration decompression of the vertebral plate was performed on the side with severe symptoms (part of the superior and inferior facets could be removed if necessary). The intervertebral space was removed thoroughly, and the lesion was sent for pathological examination. Decompression of the vertebral plate fenestration and removal of part of the superior and inferior facets were performed on the compression symptom severer side. For patients with compression symptoms of the double-side nerve root, sneak decompression should be performed on the contralateral side. The base of the spinous process of the vertebral body was removed by the forceps and curette to enlarge the central canal, and the sac should be distracted by a nerve dissector. The cartilage endplate was removed to expose the subchondral bone, and the removed uninfected bone was bitten into small pieces for the mixture with streptomycin. Then these were implanted into the intervertebral space. If the amount of bone graft was insufficient, the autologous iliac bone could be considered for the supplement. Finally, a connecting rod and screw cap were installed, after confirming the satisfactory fixation position by fluoroscopy again. 
The incision was flushed with sufficient 0.9% saline, a drainage tube was placed in the surgical area, and the incision was closed sequentially. Postoperative management: Antibiotics were managed for 2 or 3 postoperative days, and the surgical area drainage tube was removed when drainage volume was < 30 mL/day. Furthermore, the lumbosacral brace was applied for 3 months for helping with postoperative rehabilitation. Anti-Brucella therapy was managed for a minimum of 6 postoperative weeks following the standard WHO-recommended oral regimen: rifampicin (600 mg/day), and doxycycline (200 mg/day). Subsequently, radiography, ESR, and CRP were examined at 1, 6, 12, 18, and 24 postoperative months. All patients were followed up by special recovery questionnaires using the smartphone after being discharged. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel’s classification system was employed to access the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell’s grading system. Statistical analysis: Data were analyzed by the SPSS 21.0 software package (Chicago, IL, USA). Continuous variables were expressed as mean ± standard deviation (SD), and the distribution of the data was evaluated by the Shapiro–Wilk test. Comparisons between groups (preoperative vs. three postoperative months, and preoperative vs. final follow-up) were performed using the Chi-square test or paired t-test. P < 0.05 was considered a statistical significance. Results: A total of 55 patients were included in this study with a mean postoperative follow-up time of 2.6 ± 0.8 years (range, 2 to 5). There were 40 males and 15 females with a mean age of 39.8 ± 14.7 years (range, 27 to 57, Table 1). All patients were hampered by lower back pain and limited waist mobility. 
Further, there were 28 patients (50.9%) with radiating pain in the lower limb and 41 patients (74.5%) with a history of night sweats. Destruction of the vertebral body was observed in 30 patients (54.5%), spinal canal stenosis in 32 patients (58.1%), paravertebral abscess formation in 32 patients (58.1%), paravertebral soft tissue involvement in 27 patients (49%), and epidural granulation tissue or abscess in 19 patients (34.5%). The preoperative serum agglutination test was ≥ 1:160 in all patients and the blood culture was positive in 43 patients (78.1%). Thirty-seven patients (67.2%) were infected with Brucella melitensis, 5 patients (9%) with Brucella abortus, and one patient (1.8%) with Brucella suis. The mean serum levels of ESR and CRP were 41.3 ± 15.5 mm/h (range, 25 to 57), and 33.6 ± 18.5 mg/L (range, 14 to 52) respectively. Table 1Clinical data of patientsPatientAge (range, year)Gender (M/F)Affected levelPathogenExtra-spine infestationPostoperative grade of FCFollow-up time (year)Outcome140–45ML2–L3BMFeverE4FOD227–32ML3–L5BAFever + SE3FOD345–50FL4–L5BMFever + H + SC3ND445–50ML2–L4NegFeverE2FOD530–35ML2–L3BMFeverE3FOD632–37ML4–L5BMFeverE5FOD740–45ML2–L3BMFeverE3FOD845–50MT12–L3BMFeverE4FOD935–40ML4–L5BMFeverE2FOD1050–55FL5–S1BAFever + HE5FOD1140–45ML2–L4BMFeverE3FOD1247–52ML3–L4NegFeverE2FOD1345–50FL3–L5BMFever + HE2FOD1435–40MT11–L2BAFever + H + SD4FOD1540–45ML3–L5BMFeverE2FOD1632–37ML4–L5NegFever + HE3FOD1740–45ML5–S1NegFever + H + SC4ND1840–45FT10–L2BMFeverE2FOD1940–45ML3–L5BMFever + HE3FOD2040–45ML3BAFeverE2FOD2130–35MS1BMFeverE5FOD2240–45FL5–S1BMFeverE4FOD2345–50ML4–L5BSFeverE2FOD2442–47FL1–L4BMFeverE2FOD2550–60ML3BAFever + HE3FOD2642–47ML5–S1BMFeverE2FOD2735–40ML3BMFever + H + SD2FOD2835–40ML5NegFever + HE4FOD2945–50ML1–L3BMFeverE3FOD3032–37FT12BMFeverE2FOD3140–45ML2–L3NegFeverE3FOD3235–40FT12BMFeverE2FOD3325–30ML1–L2BMFever + HE4FOD3438–42ML2–L4BMFeverE2FOD3535–40MT12–L2BMFever + H + 
SE3FOD3638–42FL1–L3NegFeverE2FOD3735–40ML4–L5BMFeverE4FOD3825–30FL5–S1BMFever + HE3FOD3925–30FT12–L2NegFeverE5FOD4030–35ML3–L5BMFeverE2FOD4145–50MT12BMFever + H + SE3FOD4242–47MS1–S2BMFever + HE3FOD4352–57ML5–S1BMFeverE4FOD4442–47ML2–L3NegFeverE3FOD4535–40FT12–L2BMFeverE3FOD4632–37ML4–L5BMFever + HE2FOD4745–50FL4–L5BMFever + H + SE3FOD4830–35FT12–L2BMFever + H + SE4FOD4940–45ML2–L3NegFever + H + SE3FOD5040–45ML3–L5BMFever + HE2FOD5145–50ML5–S1NegFeverE4FOD5245–50FT12–L2BMFeverE3FOD5340–45ML2–L3NegFeverE3FOD5430–35ML2–L4BMFeverE2FOD5528–32ML5BMFever + HE5FODBA, Brucella abortus; BM, Brucella melitensis; BS, Brucella suis; F, female; FOD, free of disease; H, hepatomegaly; M, male; Neg, negative; ND, neurological dysfunction; S, splenomegaly Clinical data of patients BA, Brucella abortus; BM, Brucella melitensis; BS, Brucella suis; F, female; FOD, free of disease; H, hepatomegaly; M, male; Neg, negative; ND, neurological dysfunction; S, splenomegaly The poisoning symptoms were relieved in all patients after posterior surgery combined with anti-Brucella therapy, without local spine tenderness or percussion pain at follow-up. The mean operation time was 138.7 ± 63.8 min (range, 75 to 205) with a mean intraoperative blood loss of 215.4 ± 77.1 mL (range, 135 to 300). The average hospitalization time was 12.7 ± 6.2 days (range, 6 to 19). ESR, CRP, VAS, ODI, and JOA were improved after surgery, and a statistical difference was observed between preoperative and final follow-up (P < 0.05, Table 2). The typical cases described in this study were referred to in Figs. 1 and 2. 
Table 2Comparison of preoperative, postoperative VAS, ODI, JOA scores, and inflammatory indicatorsVariablePreoperativeThree postoperative monthsFinal follow-upImprovementrate (%)ESR41.35 ± 15.509.15 ± 3.17*7.31 ± 2.34*#91.6CRP33.61 ± 18.545.18 ± 1.79*2.04 ± 0.71*#86.3VAS6.04 ± 1.491.69 ± 0.57*0.72 ± 0.53*#92.8ODI(%)54.08 ± 9.9215.87 ± 5.93*10.44 ± 5.04*#83.1JOA15.12 ± 3.8923.47 ± 3.13*25.43 ± 3.49*#80.5CRP C-reactive protein, ESR Erythrocyte sedimentation rate, JOA Japanese Orthopaedic Association, ODI Oswestry disability index, VAS Visual analogue scale*Comparison of preoperative, P < 0.05#Comparison of three postoperative months, P < 0.05 Comparison of preoperative, postoperative VAS, ODI, JOA scores, and inflammatory indicators CRP C-reactive protein, ESR Erythrocyte sedimentation rate, JOA Japanese Orthopaedic Association, ODI Oswestry disability index, VAS Visual analogue scale *Comparison of preoperative, P < 0.05 #Comparison of three postoperative months, P < 0.05 Fig. 1  A 44-year-old female with lumbosacral Brucella spondylitis. a–d The lesion of the lumbosacral spine (L3, L4) was shown by the preoperative positive and lateral X-ray, CT sagittal reconstruction, and MRI. e, f The vertebral body was fixed firmly by the screw at 3 postoperative months, which was presented by X-ray. g, h CT sagittal and three-dimensional reconstruction demonstrated that the lesion was removed completely, and the internal fixation was stable without recurrence of the lesion at 6 postoperative months  A 44-year-old female with lumbosacral Brucella spondylitis. a–d The lesion of the lumbosacral spine (L3, L4) was shown by the preoperative positive and lateral X-ray, CT sagittal reconstruction, and MRI. e, f The vertebral body was fixed firmly by the screw at 3 postoperative months, which was presented by X-ray. 
g, h CT sagittal and three-dimensional reconstruction demonstrated that the lesion was removed completely, and the internal fixation was stable without recurrence of the lesion at 6 postoperative months Fig. 2A 57-year-old female with lumbosacral Brucella spondylitis. a–d L5, S1 vertebral body destruction, and intervertebral space narrowing caused by infection were indicated by the anteroposterior and lateral X-ray, CT sagittal reconstruction, and MRI. e, f The vertebral body was fixed firmly by the screw at 3 postoperative months, which was presented by X-ray. g, h CT sagittal and three-dimensional reconstruction demonstrated that the lesion was removed completely, and internal fixation was in a satisfactory position without recurrence of infection at the 6 postoperative months A 57-year-old female with lumbosacral Brucella spondylitis. a–d L5, S1 vertebral body destruction, and intervertebral space narrowing caused by infection were indicated by the anteroposterior and lateral X-ray, CT sagittal reconstruction, and MRI. e, f The vertebral body was fixed firmly by the screw at 3 postoperative months, which was presented by X-ray. g, h CT sagittal and three-dimensional reconstruction demonstrated that the lesion was removed completely, and internal fixation was in a satisfactory position without recurrence of infection at the 6 postoperative months Neurological function was significantly improved in 20 patients with preoperative neurological dysfunction after surgery. In short, two patients with preoperative Frankel’s grade C recovered to grade D at 1 postoperative month, and one patient with preoperative Frankel’s grade C recovered to grade E at 6 postoperative months. Seven of the 17 patients with Frankel’s grade D recovered to grade E at 1 postoperative month, and the remaining cases recovered gradually to grade E at the follow-up. Only 2 patients with preoperative neurological dysfunction (Frankel’s grade C) were not improved after surgery (Table 3). 
The mean fusion time was 6.9 ± 0.7 months (range, 6 to 8). According to Bridwell’s grading system, the fusion of bone grafting in 48 cases (87.2%) was defined as grade I, and grade II in 7 cases (12.7%). None of the internal fixation loosening and breakage was found during the follow-up. Table 3Comparison of neurological outcomes after surgeryFrankel’ grade*PreoperativeOne postoperative monthThree postoperative monthsSix postoperative monthsFinal follow-upA00000B00000C31000D1712532E3542505253*Frankel classification Comparison of neurological outcomes after surgery *Frankel classification Discussion: The pathological basis of LBS was chronic degeneration of the intervertebral disc and vertebral bone destruction, and intractable back pain as the main clinical manifestations [15]. Intervertebral space stenosis was the common presentation of radiography, presented in 32 patients (58.1%) in this study. In the view of anatomy, the intervertebral joint was the stress concentration area behind the spine, which might be easily affected by intervertebral space stenosis. The lesions might slowly invade the articular surface of the vertebrae body and resulted in the proliferation and hardening of the articular surface when the anti-Brucella therapy was not intervened timely. The biomechanical structure stability of the intervertebral joint and spine sagittal balance might be destroyed if the progression continued. Via published studies [16–18], the phenomenon that invasion of the synovium and cartilage surface of joints by Brucella was more common. Posterior joint destruction combined with disc degeneration might result in vertebral slippage. On that occasion, intractable back pain could be worsened by spinal sagittal imbalance and severe vertebral slippage, as well as the injury of the nerve root. Fortunately, the velocity of infiltrative bone destruction in Brucella infestation was slow. 
The process of bone destruction was accompanied by the process of bone repair, so the sequestrum was not commonly formed [17, 19]. Hence, the preservation of the vertebrae’s structural morphology was a special character of LBS, which was different from spinal tuberculosis [19]. The spinal stability of patients with LBS was usually better than that of spinal tuberculosis, and that’s why kyphotic deformity was rare among patients with LBS. Blood culture remained the gold standard for diagnosis of Brucella infestation [1, 3]. Yet, the sensitivity of blood culture depended on several factors, especially the disease phase and previous antibiotics usage. In the acute phase, the sensitivity of blood culture might be more than 80%, while for patients with chronic infestation, its sensitivity was approximately 30–70% [20]. Although the population was susceptible to Brucella, most of the clinical symptoms could be effectively relieved by prompt and standard antibacterial therapy [21, 22]. The indications and timing of surgical intervention were still controversial. But the current recognition was that the surgery should be prepared for patients whose Brucella poisoning symptoms cannot be effectively improved by anti-Brucella therapy (6 weeks of medication and 1 week of drug withdrawal), and combined with one of the following symptoms [7, 23, 24]: paravertebral abscess or psoas abscess; intervertebral disc destruction; spinal structure instability; accompanied by other bacterial infections. The purposes of surgery were radically removing the lesion, improving the local blood circulation, relieving the nerve root compression symptoms, and restoring the spinal sagittal balance to promote the early limbs’ function recovery. At present, the surgical procedures mainly consisted of anterior, posterior, and combined anterior and posterior surgery. 
The choice of approach should be based on the location of the spinal lesion, degree of vertebral destruction, level of spinal nerve compression, and surgeon’s technical proficiency. It had been reported in the literature that posterior surgery was more practical for intraspinal granulation and abscess removal, especially for patients with intraspinal nerve damage caused by posterior column lesions. While the combined anterior and posterior surgery was recommended for patients with perivertebral abscess, psoas abscess, or greater anterior column destruction [7]. In this study, the posterior surgery was successfully performed in all patients, since the paravertebral abscess combined with spinal nerve compression symptoms occurred in most patients (83.6%). According to Frankel’s classification, the spinal nerve compression symptoms were improved from grade C/D to grade E in 53 patients (96.3%) at the final follow-up. To our knowledge, anterior surgery had also been recommended by previous studies. However, this method not only required meticulous surgical technique with a prolonged operative time but also left the risk of damaging the iliac vessels and sympathetic nerves of the complex anatomy of the anterior lumbosacral spine. Yin et al. [25] reported a case series of 16 patients with Bucella spondylitis managed by anterior surgery with a mean operation time and intraoperative blood loss of 237.4 min and 580.2 mL, respectively. In this cohort, the mean operation time was 138.7 min (range, 80 to 200) with a mean intraoperative blood loss of 215.4 mL (range, 60 to 370), which was significantly less than Yin’s study. Additionally, there was no back pain caused by iatrogenic, and the fusion of bone grafting in 48 cases (87.2%) was defined as grade I, and grade II in 7 cases (12.7%). Although the posterior surgery made up for the lack of anterior surgery, the spinal sagittal imbalance caused by serious destruction of the anterior column could not be ignored. 
The persistent nerve compression symptoms (back pain or numbness) might also be caused by the long period of the insidious development of vertebrae destruction. The intervertebral space and the upper and lower endplates of adjacent vertebral bodies were usually involved, but the distribution of abscesses was limited, which rarely exceeded the edge of the vertebral body [1, 6]. A retrospective comparative study published by Ulu-Kilic et al. [4] also showed that the extent of paravertebral abscesses in thoracolumbar Brucella spondylitis generally did not exceed the upper and lower edges of the destroyed vertebral body. Some patients with nerve root compression symptoms caused by intervertebral discs bulging from intraspinal abscesses or swelling could also be effectively treated by prolonging the antibacterial therapy period. Thus, the completeness of lesion removal should not be overemphasized [23]. Chen et al. [8] reported that 24 patients with Brucella spondylitis were treated with one-stage posterior surgery and received satisfactory postoperative results. In this study, the neurological compression symptoms of two patients were not improved. We considered that the irreversible nerve damage might be caused by their long period of chronic infestation. In our experience, hence, the earlier anti-Brucella therapy intervention, the less incidence of vertebrae destruction and neurological compression symptoms. Clinicians in the endemic area should become aware of brucellosis in the differential diagnosis of febrile diseases with peculiar musculoskeletal to prevent the increased medical burden. Yet, it was necessary to perform the surgery when the spinal sagittal imbalance occurred caused by the development of infestation. ESR and CRP returned to a normal level in the 3rd postoperative month (P < 0.05). In the comparison of the preoperative, the pain symptoms and neurological dysfunction were improved (P < 0.05). 
In our opinion, posterior surgery was recommended for patients without neurological dysfunction to effectively avoid excessive damage to the structure of the posterior column of the spine, which also decreased the risk of intraoperative injury to the nerve roots and dissemination of infection. Besides, the surgical procedure was suggested to be performed on the severer side for hemi-spinal fenestration and resection of the facet joint selected for patients with neurological compression symptoms. Once the severe side was completely decompressed, it was easy to decompress the mild side. The decompression should be carefully manipulated to avoid the fracture of the contralateral lamina or excessive destruction of the facet joints. Last but not the least, the results of this study might be affected by potential limitations since its retrospective and single-centre nature. There was also no standardized surgical method for the treatment of advanced LBS. Hence, a prospective study with larger-sample and multi-centre would be helpful for the management of LBS. Conclusion: Standard anti-Brucella therapy was indispensable for infestation control in the early stage of LBS. One-stage posterior surgery combined with anti-Brucella therapy was a practical method in the treatment of LBS with severe neurological compression and spinal sagittal imbalance.
Background: This study aimed to assess the clinical efficacy of one-stage posterior surgery combined with anti-Brucella therapy in the treatment of lumbosacral brucellosis spondylitis (LBS). Methods: From June 2010 to June 2020, the clinical and radiographic data of patients with LBS treated by one-stage posterior surgery combined with anti-Brucella therapy were retrospectively analyzed. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel's classification system was employed to access the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell's grading system. Results: A total of 55 patients were included in this study with a mean postoperative follow-up time of 2.6 ± 0.8 years (range, 2 to 5). There were 40 males and 15 females with a mean age of 39.8 ± 14.7 years (range, 27 to 57). The Brucella agglutination test was ≥ 1:160 in all patients, but the blood culture was positive in 43 patients (78.1%). A statistical difference was observed in ESR, CRP, VAS, ODI, and JOA between preoperative and final follow-up (P < 0.05). Neurological function was significantly improved in 20 patients with preoperative neurological dysfunction after surgery. According to Bridwell's grading system, the fusion of bone grafting in 48 cases (87.2%) was defined as grade I, and grade II in 7 cases (12.7%). None of the infestation recurrences was observed. Conclusions: One-stage posterior surgery combined with anti-Brucella therapy was a practical method in the treatment of LBS with severe neurological compression and spinal sagittal imbalance.
Background: Human brucellosis disease was an infectious zoonotic allergic disease caused by Brucella [1], which was usually transmitted by occupational contact (e.g., veterinarians, slaughterhouses, animal husbandry) and the digestive tract (consumption of contaminated products). It remained a serious public health problem in livestock regions, such as northern China, Australia, the Mediterranean region, and India [2, 3]. A total of 240,000 people worldwide were at risk, with more than 500,000 new cases annually, and 10–85% of patients might be accompanied by involvement of the skeletal system [4–7]. Lumbosacral was the common region of the spinal Brucella spondylitis [8, 9], with an incidence of 2–53% [10], especially L4–5 level, and L5–S1 level [11, 12]. However, the insidious progression of brucellosis lesion made anti-Brucella therapy hardly intervene promptly, resulting in irreversible destruction of the lumbar vertebral body, including abscess formation, disc destruction, and vertebral sclerosis [13]. Failure to diagnose and treat LBS promptly might result in serious sequelae, such as chronic low back pain, neurological dysfunction, and even kyphotic deformity [13, 14]. In clinical practice, hence, the treatment plan for patients with lumbosacral Brucella spondylitis (LBS) combined with spinal cord compression symptoms or kyphotic deformity remains a great challenge for clinicians. At present, the standard treatment of LBS was non-surgical interventions (antibiotics chemotherapy: doxycycline, rifamycin). Surgical intervention should be considered when the spinal cord compression symptoms or kyphotic deformity occurred, and the principle was to remove the lesion, relieve the spinal cord compression and restore the spinal sagittal balance. When surgery was the treatment of choice, the indication of surgical procedure (anterior, posterior and combined anterior and posterior surgery) remains controversial. 
Besides, the clinical efficacy of the percutaneous ultrasonic or CT-guided evacuation of paravertebral collections has also been reported [13], but the recurrence of infection still exists since the limited visual field of the surgical procedure. Posterior surgery was suggested since its satisfactory efficacy in removing lesions, decompression, deformity correction, and restoring the spinal sagittal balance, especially for patients with significant lesion destruction and intractable back pain. Therefore, the purpose of this study was to retrospectively analyze the clinical efficacy of patients with LBS managed by one-stage posterior surgery combined with anti-Brucella therapy in our hospital and summarize the surgical indications for the treatment strategy. Conclusion: Standard anti-Brucella therapy was indispensable for infestation control in the early stage of LBS. One-stage posterior surgery combined with anti-Brucella therapy was a practical method in the treatment of LBS with severe neurological compression and spinal sagittal imbalance.
Background: This study aimed to assess the clinical efficacy of one-stage posterior surgery combined with anti-Brucella therapy in the treatment of lumbosacral brucellosis spondylitis (LBS). Methods: From June 2010 to June 2020, the clinical and radiographic data of patients with LBS treated by one-stage posterior surgery combined with anti-Brucella therapy were retrospectively analyzed. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) and Oswestry Disability Index scores (ODI) were used to evaluate the clinical outcomes. Frankel's classification system was employed to access the initial and final neurologic function. Fusion of the bone grafting was classified by Bridwell's grading system. Results: A total of 55 patients were included in this study with a mean postoperative follow-up time of 2.6 ± 0.8 years (range, 2 to 5). There were 40 males and 15 females with a mean age of 39.8 ± 14.7 years (range, 27 to 57). The Brucella agglutination test was ≥ 1:160 in all patients, but the blood culture was positive in 43 patients (78.1%). A statistical difference was observed in ESR, CRP, VAS, ODI, and JOA between preoperative and final follow-up (P < 0.05). Neurological function was significantly improved in 20 patients with preoperative neurological dysfunction after surgery. According to Bridwell's grading system, the fusion of bone grafting in 48 cases (87.2%) was defined as grade I, and grade II in 7 cases (12.7%). None of the infestation recurrences was observed. Conclusions: One-stage posterior surgery combined with anti-Brucella therapy was a practical method in the treatment of LBS with severe neurological compression and spinal sagittal imbalance.
5,510
341
[ 469, 1348, 286, 184, 90 ]
8
[ "patients", "postoperative", "brucella", "vertebral", "surgery", "posterior", "removed", "bone", "symptoms", "months" ]
[ "brucella spondylitis l5", "brucella spondylitis treated", "patients lumbosacral brucella", "brucella spondylitis incidence", "spinal brucella spondylitis" ]
null
[CONTENT] Brucellosis | Infection | Lumbosacral | Spine | Surgery [SUMMARY]
null
[CONTENT] Brucellosis | Infection | Lumbosacral | Spine | Surgery [SUMMARY]
[CONTENT] Brucellosis | Infection | Lumbosacral | Spine | Surgery [SUMMARY]
[CONTENT] Brucellosis | Infection | Lumbosacral | Spine | Surgery [SUMMARY]
[CONTENT] Brucellosis | Infection | Lumbosacral | Spine | Surgery [SUMMARY]
[CONTENT] Humans | Male | Female | Adult | Middle Aged | Brucella | Retrospective Studies | Spinal Fusion | Lumbar Vertebrae | Debridement | Spondylitis | Brucellosis [SUMMARY]
null
[CONTENT] Humans | Male | Female | Adult | Middle Aged | Brucella | Retrospective Studies | Spinal Fusion | Lumbar Vertebrae | Debridement | Spondylitis | Brucellosis [SUMMARY]
[CONTENT] Humans | Male | Female | Adult | Middle Aged | Brucella | Retrospective Studies | Spinal Fusion | Lumbar Vertebrae | Debridement | Spondylitis | Brucellosis [SUMMARY]
[CONTENT] Humans | Male | Female | Adult | Middle Aged | Brucella | Retrospective Studies | Spinal Fusion | Lumbar Vertebrae | Debridement | Spondylitis | Brucellosis [SUMMARY]
[CONTENT] Humans | Male | Female | Adult | Middle Aged | Brucella | Retrospective Studies | Spinal Fusion | Lumbar Vertebrae | Debridement | Spondylitis | Brucellosis [SUMMARY]
[CONTENT] brucella spondylitis l5 | brucella spondylitis treated | patients lumbosacral brucella | brucella spondylitis incidence | spinal brucella spondylitis [SUMMARY]
null
[CONTENT] brucella spondylitis l5 | brucella spondylitis treated | patients lumbosacral brucella | brucella spondylitis incidence | spinal brucella spondylitis [SUMMARY]
[CONTENT] brucella spondylitis l5 | brucella spondylitis treated | patients lumbosacral brucella | brucella spondylitis incidence | spinal brucella spondylitis [SUMMARY]
[CONTENT] brucella spondylitis l5 | brucella spondylitis treated | patients lumbosacral brucella | brucella spondylitis incidence | spinal brucella spondylitis [SUMMARY]
[CONTENT] brucella spondylitis l5 | brucella spondylitis treated | patients lumbosacral brucella | brucella spondylitis incidence | spinal brucella spondylitis [SUMMARY]
[CONTENT] patients | postoperative | brucella | vertebral | surgery | posterior | removed | bone | symptoms | months [SUMMARY]
null
[CONTENT] patients | postoperative | brucella | vertebral | surgery | posterior | removed | bone | symptoms | months [SUMMARY]
[CONTENT] patients | postoperative | brucella | vertebral | surgery | posterior | removed | bone | symptoms | months [SUMMARY]
[CONTENT] patients | postoperative | brucella | vertebral | surgery | posterior | removed | bone | symptoms | months [SUMMARY]
[CONTENT] patients | postoperative | brucella | vertebral | surgery | posterior | removed | bone | symptoms | months [SUMMARY]
[CONTENT] spinal | deformity | efficacy | spinal cord compression | spinal cord | cord compression | cord | surgical | treatment | lbs [SUMMARY]
null
[CONTENT] postoperative | grade | patients | reconstruction | ray ct sagittal | ray ct | ray | ct sagittal | months | preoperative [SUMMARY]
[CONTENT] stage | lbs | lbs severe | control | infestation control early | infestation control | brucella therapy indispensable | neurological compression spinal | stage lbs | stage lbs stage [SUMMARY]
[CONTENT] postoperative | brucella | patients | removed | spinal | surgery | performed | bone | months | vertebral [SUMMARY]
[CONTENT] postoperative | brucella | patients | removed | spinal | surgery | performed | bone | months | vertebral [SUMMARY]
[CONTENT] one | anti-Brucella | lumbosacral [SUMMARY]
null
[CONTENT] 55 | 2.6 ± | 0.8 years | 2 ||| 40 | 15 | 39.8 ± | 14.7 years | 27 ||| Brucella | ≥ | 1:160 | 43 | 78.1% ||| ESR | CRP | VAS | JOA | 0.05 ||| 20 ||| Bridwell | 48 | 87.2% | II | 7 | 12.7% ||| [SUMMARY]
[CONTENT] One | anti-Brucella [SUMMARY]
[CONTENT] one | anti-Brucella | lumbosacral ||| June 2010 to June 2020 | one | anti-Brucella ||| Japanese Orthopaedic Association | JOA | Oswestry Disability ||| ||| Bridwell ||| 55 | 2.6 ± | 0.8 years | 2 ||| 40 | 15 | 39.8 ± | 14.7 years | 27 ||| Brucella | ≥ | 1:160 | 43 | 78.1% ||| ESR | CRP | VAS | JOA | 0.05 ||| 20 ||| Bridwell | 48 | 87.2% | II | 7 | 12.7% ||| ||| One | anti-Brucella [SUMMARY]
[CONTENT] one | anti-Brucella | lumbosacral ||| June 2010 to June 2020 | one | anti-Brucella ||| Japanese Orthopaedic Association | JOA | Oswestry Disability ||| ||| Bridwell ||| 55 | 2.6 ± | 0.8 years | 2 ||| 40 | 15 | 39.8 ± | 14.7 years | 27 ||| Brucella | ≥ | 1:160 | 43 | 78.1% ||| ESR | CRP | VAS | JOA | 0.05 ||| 20 ||| Bridwell | 48 | 87.2% | II | 7 | 12.7% ||| ||| One | anti-Brucella [SUMMARY]
A comparative evaluation between the reliability of gypsum casts and digital greyscale intra-oral scans for the scoring of tooth wear using the Tooth Wear Evaluation System (TWES).
33370476
The Tooth Wear Evaluation System (TWES) is a type of tooth wear index. To date, there is a lack of data comparing the reliability of applying this index to gypsum cast records and to digital greyscale intra-oral scan records.
BACKGROUND
Records for 10 patients with moderate to severe tooth wear (TWES ≥ 2) were randomly selected from a larger clinical trial. TWES grading of the occlusal/incisal, buccal and palatal/lingual surfaces was performed to determine the levels of intra- and interobserver agreement. Intra-observer reproducibility was based on the findings of one examiner only. For the interobserver reproducibility, the findings of two examiners were considered. One set of models/records was used per patient. Cohen's weighted kappa (κW) was used to ascertain agreement between and within the observers. Comparison of agreement was performed using t tests (P < .05).
METHODS
For the scoring of the total occlusal/incisal surfaces, the overall levels of intra- and interobserver agreement were significantly higher using the gypsum cast records than with the digital greyscale intra-oral scan records (P < .001 and P < .001, respectively). For the overall buccal surfaces, a significant difference was found only in the intra-observer agreement, which was higher using gypsum casts (P = .013). For the palatal/lingual surfaces, a significant difference was reported only in the interobserver agreement, which was higher using gypsum casts (P = .043). At the occlusal/incisal surfaces, grading performed using gypsum casts culminated in significantly higher TWES scores than with the use of the digital greyscale intra-oral scans (P < .001). At the buccal and palatal/lingual surfaces, significantly higher wear scores were obtained using digital greyscale intra-oral scan records (P < .009).
RESULTS
The TWES can offer a reliable means for the scoring of worn occlusal/incisal surfaces using gypsum casts. The reliability offered by digital greyscale intra-oral scans for consecutive scoring was, in general, inferior.
CONCLUSIONS
[ "Calcium Sulfate", "Humans", "Reproducibility of Results", "Tooth Attrition", "Tooth Wear" ]
8248338
BACKGROUND
In 2018, an estimated mean global prevalence of erosive tooth wear in permanent teeth between 20% and 45% was described. 1 Tooth wear can result in a variety of dentofacially related symptoms, to include, aesthetic impairment, sensitivity, pain, discomfort and/ or functional problems. 2 , 3 More severe forms of tooth wear may also have an adverse impact on a patient's quality of life. 4 , 5 , 6 Restorative intervention is sometimes prescribed for patients with tooth wear. 3 However, treatment (with a direct resin composite technique, or indirect techniques) may prove to be costly and complex. 7 There may also be some ambiguity with the optimal timing for restorative intervention. 3 , 8 Whilst counselling and monitoring are advised for all patients with pathological tooth wear, restorative intervention may be indicated when the presenting tooth wear is a clear concern for the patient and/or the clinician, where there may be functional, or aesthetic concerns and/or symptoms of pain, or discomfort. 3 However, definitive dental restorations for tooth wear management should not be prescribed until any active dental pathology has been effectively managed and full patient commitment is available. 3 Where the presenting pathological tooth wear is not progressive and with the lack of any further concerns, restorative intervention may not be necessary and management with vigilant monitoring and counselling, may be continued. 3 Determining the most appropriate time to prescribe restorative intervention should also consider the progression of the wear process. 3 The need for pragmatic and reliable means to assess the rate of tooth wear progression (between appointments, as well as between different clinicians) is therefore relevant. 
Tooth wear assessment is most frequently undertaken by periodic clinical (chairside) assessment; however, photographs, serial (consecutive) dental casts and serial digital 3D data scans may also be used to undertake assessment, each with their own limitations. 3 , 9 A plethora of tooth wear indices have been introduced for the scoring of the severity of the tooth wear present, 10 , 11 , 12 , 13 , 14 , 15 but the universal acceptance of a grading scale for erosive tooth wear in general dental practice, is lacking. 16 A clinical tooth wear index should ideally offer the potential to undertake scoring using indirect methods such as intra‐oral photographs, traditional gypsum dental casts and on digital intra‐oral scans, 17 thereby enabling some extra‐oral assessment. This may be particularly beneficial when the available clinical chairside time may be constrained. The Tooth Wear Evaluation System (TWES) is a modular clinical guideline that can be used for the assessment of tooth wear and to assist with diagnosis and patient management. 14 , 15 , 18 The TWES was revised in 2020 and a new taxonomy was proposed—TWES 2.0. 18 The TWES index in general includes the application of an 8‐point occlusal/incisal ordinal grading scale and a 3‐point non‐occlusal/ non‐incisal grading scale for the scoring of the respective surfaces. The TWES has been reported to offer adequate levels of reliability with tooth wear grading when applied clinically, as well as when using dental cast records. 19 Furthermore, when undertaking occlusal/ incisal surface grading using dental casts and intra‐oral photographic records, the TWES has been described to offer the necessary sensitivity to enable the detection of changes in the pattern on tooth wear on a sequential basis and, thereby, help monitor disease progression. 
17 , 20 The aim of this study was to undertake a comparative evaluation between the use of gypsum casts and digital greyscale (black‐white) intra‐oral scan records with the reliability of grading tooth wear using the TWES, applied to patient records that were demonstrative of moderate to severe forms of tooth wear.
null
null
RESULTS
Table 2 provides a combined overview of the TWES scores for all ten patient records at the occlusal/incisal, the buccal and the palatal/lingual surfaces. The patient records showed the presence of significant amounts of tooth wear at all teeth. The majority of the scores at the occlusal/incisal surfaces were between TWES 2 (showing wear with dentine exposure and loss of clinical crown height <1/3) and TWES 3b (wear with dentine exposure and loss of clinical crown height >1/2‐2/3). Twenty‐one teeth included in the patient records were scored TWES 4, presenting with dentine exposure and the loss of clinical crown height of >2/3. Descriptives of tooth wear scores using the TWES at all tooth surfaces, measured on gypsum casts (n = 10) Details of the levels of intra‐observer agreement (O1) and interobserver agreement (O1‐O2) (Kappa scores) for the consecutive scoring of tooth wear applying the TWES on the gypsum cast records and the digital intra‐oral scan records that were included in this investigation are presented in Table 3. Table 3 also provides information relating to the comparative evaluation between the use of gypsum cast records and digital greyscale intra‐oral scan records with the reproducibility of tooth wear scoring with the TWES. For the grading of the overall occlusal/ incisal surfaces using gypsum cast records, the levels of intra‐observer agreement (O1) and interobserver agreement (O1‐O2) were significantly higher compared with the agreement in the scoring of the same surfaces using the digital greyscale intra‐oral scan records (P < .001 and P < .001, respectively). 
For the grading of the overall buccal and palatal/lingual surfaces, other than significantly higher levels of O1 agreement in the scoring of the buccal surfaces using gypsum casts (P = .013) and the O1‐O2 agreement in the scoring of the palatal/lingual surfaces with gypsum cast records (P = .043), no other significant difference was found between the types of record used on the reliability of scoring with the TWES. Intra‐ and interobserver agreements (Kappa scores) using the TWES on gypsum cast records and the digital intra‐oral scans Cohen's Kappa (κW) of intra‐ and interobserver measurements per location (01 = Observer 1, 02 = Observer 2) Differences between the Kappa scores on gypsum cast records versus digital intra‐oral scans, expressed with P‐value and 95%CI. N/A: no statistical test was possible. Bold denotes a value that was statistically significant. Measurement showed a perfect agreement on a single score. Table 4 provides information about the effect of the type of record on the tooth wear score. This was expressed as the mean difference in the tooth wear grading on gypsum casts and the digital greyscale intra‐oral scan records using the TWES. For the overall scores at the occlusal/ incisal surfaces, grading of the gypsum casts culminated in significantly higher TWES scores compared with the use of the digital greyscale intra‐oral scan records (P < .001; 95% CI = [0.084…0.272]). However, the overall scores at the buccal and palatal/lingual surfaces showed significantly higher values using the digital intra‐oral scan records when undertaking tooth wear grading than with the use of gypsum cast records (P = .009; 95% CI = [−0.294…0.042] and P = .001; 95% CI = [−0.342…0.084]), respectively. Showing the differences in TWES scores between measurements on gypsum casts and digital intra‐oral scans The table presents the mean difference between tooth wear gradings on gypsum models versus digital scans using the TWES, together with the P‐value and the 95%CI. 
To test the differences, the TWES index was converted into an 8‐point scale (0 = 1, 1a = 2, 1b = 3, 1c = 4, 2 = 5, 3a = 6, 3b = 7, and 4 = 8). A positive score means a higher tooth wear score on the gypsum models compared to the digital scan records. Bold denotes a value that was statistically significant.
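The grade-to-number conversion described above can be sketched in Python; the mapping follows the 8-point scale stated in the text, while the dictionary and function names are illustrative only, not part of the study's software.

```python
# Illustrative sketch of the TWES occlusal/incisal grade conversion used for
# the statistical comparisons (mapping from the text; names are hypothetical).
TWES_TO_NUMERIC = {
    "0": 1, "1a": 2, "1b": 3, "1c": 4,
    "2": 5, "3a": 6, "3b": 7, "4": 8,
}

def to_numeric(grades):
    """Convert ordinal TWES grades to the 1-8 numerical scale."""
    return [TWES_TO_NUMERIC[g] for g in grades]
```

With the grades on a numerical scale, paired comparisons such as the paired t test reported in the study become straightforward to compute.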
CONCLUSIONS
It was concluded that the scores obtained with the grading scales of the TWES on gypsum casts can offer reliability, especially for the grading of the occlusal/incisal surfaces of teeth with signs of moderate to severe wear. The level of reproducibility offered using digital greyscale intra‐oral scan records to carry out tooth wear assessments with the TWES was generally inferior to that offered by the use of gypsum casts.
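The agreement statistic used throughout this study, Cohen's kappa with squared (quadratic) weights, was computed by the authors in R with the Kappa function of the vcd package. The following is a minimal pure-Python sketch of the same statistic for ordinal scores 1..k; it is an assumption-labelled illustration, not the authors' code.

```python
def quadratic_weighted_kappa(ratings_a, ratings_b, k):
    """Cohen's kappa with squared (quadratic) weights for ordinal scores 1..k."""
    n = len(ratings_a)
    # Observed joint proportions of the two raters' scores.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[a - 1][b - 1] += 1.0 / n
    # Marginal distributions of each rater.
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            # Squared-distance disagreement weight, normalised to [0, 1].
            w = (i - j) ** 2 / (k - 1) ** 2
            num += w * obs[i][j]
            den += w * pa[i] * pb[j]  # chance-expected disagreement
    return 1.0 - num / den
```

Perfect agreement yields kappa = 1, and values are read against the bands used in the study (0.61-0.80 substantial, 0.81-1 almost perfect).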
[ "MATERIALS & METHODS", "Tooth wear evaluation system", "Subjects", "Scoring and the intra‐ and interobserver agreement", "Statistical analyses", "AUTHORS’ CONTRIBUTION" ]
[ " Tooth wear evaluation system The Tooth Wear Evaluation (TWES) was used as the grading system in this investigation.\n14\n, \n15\n, \n18\n For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. The grades defined as grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3‐1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2‐2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height of >2/3. For scoring at the non‐occlusal/non‐incisal surfaces, a 3‐point ordinal scale was applied. The grades are described as grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES. It did not include the extensions from the TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy.\nThe Tooth Wear Evaluation (TWES) was used as the grading system in this investigation.\n14\n, \n15\n, \n18\n For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. 
The grades defined as grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3‐1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2‐2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height of >2/3. For scoring at the non‐occlusal/non‐incisal surfaces, a 3‐point ordinal scale was applied. The grades are described as grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES. It did not include the extensions from the TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy.\n Subjects The current investigation is part of a larger clinical trial on the management of erosive tooth wear, the Radboud Tooth Wear Project, in which 200 patients are included.\n21\n The records of ten patients were randomly selected for the present study. Inclusion criteria were the presence of moderate to severe tooth wear, with at least one score of TWES ≥ 2. The records applied in this investigation were limited to gypsum casts and 3D (three dimensional) digital intra‐oral scans. The study was carried out in accordance with the Declaration of Helsinki for research involving humans and ethical approval was obtained (ABR codes: NL31401.091.10, NL30346.091.10 and NL31371.091.10). 
All patients agreed to participate in the research project, and written informed consent was attained prior to entering the Radboud Tooth Wear Project.\nThe baseline dental condition of each participant had been fully documented, and full‐arch gypsum casts of the upper and lower dental arches were fabricated. Dental impressions were taken using a vinyl polysiloxane impression material, (Ivoclar Virtual 380, Ivoclar Vivodent, Liechtenstein, Europe) comprising two consistencies, a Heavy body and Monophase applied in a single stage. The impressions were cast in Type III dental stone (SLR Dental GmbH, Germany) within 24 hours, according to the manufacturer's instructions. A yellow‐coloured dental stone material was used. During the same appointment, digital intra‐oral scans were obtained using the LAVA COS Intraoral Scanner (3M). Both the digital and dental impressions were captured by the same trained operator. The scanning procedure was undertaken in accordance with the manufacturer's instructions. Scans were made with the patient in a supine position, a latex‐free lip and cheek retractor was applied, Optragate (Ivoclar Vivadent, Liechtenstein), teeth were rinsed, air‐dried and lightly powdered with titanium dioxide. The LAVA COS scanner was used to capture the digital impression, including the bite registration scan. The scans were digitally stored in the web‐based platform, Casemanager (3M). The 3D models of the scans (‘digital intra‐oral scans/ digital models’) were amenable to downloading from this platform and these open STL files could be easily imported into the free‐software, MeshLab (www.meshlab.net). 
Figure 1 is a representation of the MeshLab user interface.\nThe use of Meshlab to score intra‐oral scans with the TWES\n Scoring and the intra‐ and interobserver agreement In advance of this study, Observer 1 (O1), a final‐year undergraduate dental student, was trained and calibrated over the course of two training sessions with the use of the TWES by Observer 2 (O2). Observer 2 was an experienced dental practitioner and researcher.\nThe gypsum cast records included in this investigation were scored using the TWES in the same environment and appraised under consistent, standard room lighting conditions. Under the same conditions, the digital intra‐oral scan records were visualised in greyscale on a computer screen (resolution: 1920x1080) with MeshLab, enabling the assessor to rotate and zoom in on the models. As the output of the 3D models when using the LAVA COS scanner is in greyscale, this formed the rationale for the use of greyscale records in this investigation. The sequence of scoring for all records was: the first quadrant, followed by the second, the third and finally, the fourth. No time limit was set for the evaluations.\nTeeth with fixed prosthodontic restorations (eg crowns and bridges), or large intra‐coronal restorations were excluded from the analysis. 
Teeth that were not clearly visible (inclusive of teeth that were unclear on the digital intra‐oral scans), or where they were broken/ or damaged on the casts were also excluded from the analysis.\nFor the intra‐observer measurements, the ten sets of gypsum casts and digital intra‐oral scan records were scored twice by Observer 1 with a minimum interval of 2 weeks between the consecutive observations. Comparisons were made between the consecutive scores for the full mouth (overall scores), as well as for anterior and posterior areas. Assessments were then undertaken by Observer 2 applying the same protocols; however, for the purpose of evaluating the interobserver agreement, only one round of scoring was performed by O2. To study the interobserver agreement (O1‐O2), the gypsum casts and digital greyscale intra‐oral scan records for the same ten cases were scored once by both observers, O1 and O2. The observers were blinded to each other's scores and in the case of O1 blinded to the outcomes of their former observations when carrying out the second round of their assessments. In Figure 2, a flow diagram has been provided to summarise the assessment protocol.\nFlowchart of assessment protocol: intra‐ and inter‐observer agreement\nTo evaluate the effect of the ‘type of record’ (gypsum models or digital greyscale intra‐oral scans) on the scoring with the TWES, the differences in Observer 1’s tooth wear scores at each of the surfaces assessed using the gypsum models and digital scan records were determined.\n Statistical analyses To describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement.\n22\n Scores were presented for the ‘overall’ (total) occlusal/ incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type, hence, anterior teeth (incisors and canines) and posterior teeth (premolars and molar teeth), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with confidence intervals (95% CI) and P‐values (P < .05).\nTo determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen in Table 1, a TWES outcome of ‘0’ would be scored ‘1’, 1a as ‘2’, 1b as ‘3’, etc. For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces was evaluated for the two types of records; a positive score would indicate that scoring using a gypsum model would result in a higher TWES score. 
All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7).\nConversion of the TWES grades into numerical scores, as applied in this investigation\nTWES grades as per Wetselaar & Lobbezoo, 2016.", "The Tooth Wear Evaluation (TWES) was used as the grading system in this investigation.\n14\n, \n15\n, \n18\n For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. The grades were defined as grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3‐1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2‐2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height of >2/3. For scoring at the non‐occlusal/non‐incisal surfaces, a 3‐point ordinal scale was applied. The grades are described as grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES. It did not include the extensions from the TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy.", "The current investigation is part of a larger clinical trial on the management of erosive tooth wear, the Radboud Tooth Wear Project, in which 200 patients are included.\n21\n The records of ten patients were randomly selected for the present study. Inclusion criteria were the presence of moderate to severe tooth wear, with at least one score of TWES ≥ 2. The records applied in this investigation were limited to gypsum casts and 3D (three dimensional) digital intra‐oral scans. 
The study was carried out in accordance with the Declaration of Helsinki for research involving humans and ethical approval was obtained (ABR codes: NL31401.091.10, NL30346.091.10 and NL31371.091.10). All patients agreed to participate in the research project, and written informed consent was obtained prior to entering the Radboud Tooth Wear Project.\nThe baseline dental condition of each participant had been fully documented, and full‐arch gypsum casts of the upper and lower dental arches were fabricated. Dental impressions were taken using a vinyl polysiloxane impression material (Ivoclar Virtual 380, Ivoclar Vivadent, Liechtenstein) comprising two consistencies, a heavy body and a monophase, applied in a single stage. The impressions were cast in Type III dental stone (SLR Dental GmbH, Germany) within 24 hours, according to the manufacturer's instructions. A yellow‐coloured dental stone material was used. During the same appointment, digital intra‐oral scans were obtained using the LAVA COS Intraoral Scanner (3M). Both the digital and conventional impressions were captured by the same trained operator. The scanning procedure was undertaken in accordance with the manufacturer's instructions. Scans were made with the patient in a supine position; a latex‐free lip and cheek retractor (Optragate, Ivoclar Vivadent, Liechtenstein) was applied, and the teeth were rinsed, air‐dried and lightly powdered with titanium dioxide. The LAVA COS scanner was used to capture the digital impression, including the bite registration scan. The scans were digitally stored in the web‐based platform, Casemanager (3M). The 3D models of the scans (‘digital intra‐oral scans/digital models’) could be downloaded from this platform, and these open STL files could be easily imported into the free software MeshLab (www.meshlab.net).
Figure 1 is a representation of the MeshLab user interface.\nThe use of Meshlab to score intra‐oral scans with the TWES", "In advance of this study, Observer 1 (O1), a final‐year undergraduate dental student, was trained and calibrated in the use of the TWES over the course of two training sessions by Observer 2 (O2). Observer 2 was an experienced dental practitioner and researcher.\nThe gypsum cast records included in this investigation were scored using the TWES in the same environment and appraised under consistent, standard room lighting conditions. Under the same conditions, the digital intra‐oral scan records were visualised in greyscale on a computer screen (resolution: 1920x1080) with MeshLab, enabling the assessor to rotate and zoom in on the models. As the output of the 3D models from the LAVA COS scanner is in greyscale, this formed the rationale for the use of greyscale records in this investigation. The sequence of scoring for all records was the first quadrant, followed by the second, the third and, finally, the fourth. No time limit was set for the evaluations.\nTeeth with fixed prosthodontic restorations (eg crowns and bridges), or large intra‐coronal restorations, were excluded from the analysis. Teeth that were not clearly visible (inclusive of teeth that were unclear on the digital intra‐oral scans), or that were broken or damaged on the casts, were also excluded from the analysis.\nFor the intra‐observer measurements, the ten sets of gypsum casts and digital intra‐oral scan records were scored twice by Observer 1, with a minimum interval of 2 weeks between the consecutive observations. Comparisons were made between the consecutive scores for the full mouth (overall scores), as well as for the anterior and posterior areas. Assessments were then undertaken by Observer 2 applying the same protocols; however, for the purpose of evaluating the interobserver agreement, only one round of scoring was performed by O2.
To study the interobserver agreement (O1‐O2), the gypsum casts and digital greyscale intra‐oral scan records for the same ten cases were scored once by both observers, O1 and O2. The observers were blinded to each other's scores and, in the case of O1, blinded to the outcomes of their former observations when carrying out the second round of their assessments. In Figure 2, a flow diagram has been provided to summarise the assessment protocol.\nFlowchart of assessment protocol: intra‐ and inter‐observer agreement\nTo evaluate the effect of the ‘type of record’ (gypsum models or digital greyscale intra‐oral scans) on the scoring with the TWES, the differences in Observer 1’s tooth wear scores at each of the surfaces assessed using the gypsum models and digital scan records were determined.", "To describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement.\n22\n Scores were presented for the ‘overall’ (total) occlusal/incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type: anterior teeth (incisors and canines) and posterior teeth (premolars and molars), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with 95% confidence intervals (CI) and P‐values (P < .05).\nTo determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen in Table 1, a TWES outcome of ‘0’ would be scored ‘1’, 1a as ‘2’, 1b as ‘3’, etc. For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces was evaluated for the two types of records; a positive score would indicate that scoring using a gypsum model resulted in a higher TWES score. All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7).\nConversion of the TWES grades into numerical scores, as applied in this investigation\nTWES grades as per Wetselaar & Lobbezoo, 2016.", "SB Mehta contributed to data interpretation, drafted and critically revised the manuscript. EM Bronkhorst contributed to design and data interpretation, performed all statistical analyses and critically revised the manuscript. L. Crins contributed to data interpretation and critically revised the manuscript. P. Wetselaar contributed to conception, design and data collection, and critically revised the manuscript. MCDNJM Huysmans contributed to conception, design and data interpretation, and critically revised the manuscript. BAC Loomans is the project leader of the Radboud Tooth Wear Project, contributed to conception, design, enrolment of patients, data acquisition and interpretation, and critically revised the manuscript. All authors gave their final approval and agree to be accountable for all aspects of the work." ]
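The squared-weight (quadratic) Cohen's kappa described in the statistical analyses above was computed in R with the Kappa function of the vcd package. As an illustrative cross-check, the same statistic can be sketched in pure Python; the two four-surface grade lists below are hypothetical examples, not study data:

```python
def weighted_kappa(rater1, rater2, categories):
    """Cohen's weighted kappa with squared (quadratic) weights for two
    raters' ordinal scores, e.g. TWES occlusal/incisal grades."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater1)
    # Observed joint proportion matrix.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal distributions of each rater.
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Squared disagreement weights: 0 on the diagonal, growing with distance.
    w = [[(i - j) ** 2 / (k - 1) ** 2 for j in range(k)] for i in range(k)]
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# The 8-point TWES occlusal/incisal scale, in order of severity.
grades = ['0', '1a', '1b', '1c', '2', '3a', '3b', '4']
kw = weighted_kappa(['0', '1a', '2', '3b'], ['0', '1b', '2', '3a'], grades)
# kw is approximately 0.947 for this illustrative pair of score lists
```

In practice, scikit-learn's `cohen_kappa_score` with `weights='quadratic'` computes the same quantity.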
[ null, null, null, null, null, null ]
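The grade-to-score conversion and the paired comparison described in the statistical analyses can be sketched as follows. The mapping follows the stated rule ('0' scored as 1, 1a as 2, up to 4 as 8); the per-surface grade lists in the example are hypothetical. A p-value would in practice come from the t distribution with n-1 degrees of freedom (e.g. `scipy.stats.ttest_rel`):

```python
import statistics
from math import sqrt

# Ordinal TWES occlusal/incisal grades mapped to numerical scores 1-8.
TWES_TO_SCORE = {'0': 1, '1a': 2, '1b': 3, '1c': 4,
                 '2': 5, '3a': 6, '3b': 7, '4': 8}

def paired_t_statistic(gypsum_grades, scan_grades):
    """Mean difference and paired t statistic for per-surface TWES grades.

    A positive mean difference indicates that the gypsum cast record
    received the higher (more severe) score.
    """
    d = [TWES_TO_SCORE[g] - TWES_TO_SCORE[s]
         for g, s in zip(gypsum_grades, scan_grades)]
    mean_d = statistics.mean(d)
    t = mean_d / (statistics.stdev(d) / sqrt(len(d)))
    return mean_d, t

# Hypothetical per-surface grades for one record pair.
mean_d, t = paired_t_statistic(['2', '3a', '1c', '3b'],
                               ['1c', '3a', '1b', '3a'])
# mean_d = 0.75, t = 3.0
```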
[ "BACKGROUND", "MATERIALS & METHODS", "Tooth wear evaluation system", "Subjects", "Scoring and the intra‐ and interobserver agreement", "Statistical analyses", "RESULTS", "DISCUSSION", "CONCLUSIONS", "CONFLICT OF INTEREST", "AUTHORS’ CONTRIBUTION" ]
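The digital models described in the methods were exported as open STL files for viewing in MeshLab, which handles the import itself. As a minimal, self-contained illustration of the binary STL layout (an 80-byte header, then a little-endian uint32 facet count, then 50 bytes per facet), a facet-count reader might look like this; the demo file is synthetic:

```python
import os
import struct
import tempfile

def stl_triangle_count(path):
    """Return the facet count of a binary STL file.

    Binary STL layout: an 80-byte header, a little-endian uint32
    facet count, then 50 bytes per facet.
    """
    with open(path, "rb") as f:
        f.seek(80)                      # skip the fixed-size header
        (count,) = struct.unpack("<I", f.read(4))
    return count

# Tiny synthetic binary STL claiming two facets.
with tempfile.NamedTemporaryFile(suffix=".stl", delete=False) as tmp:
    tmp.write(b"\x00" * 80 + struct.pack("<I", 2) + b"\x00" * (2 * 50))
    demo_path = tmp.name

n_facets = stl_triangle_count(demo_path)  # 2
os.remove(demo_path)
```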
[ "In 2018, an estimated mean global prevalence of erosive tooth wear in permanent teeth of between 20% and 45% was described.\n1\n Tooth wear can result in a variety of dentofacial symptoms, including aesthetic impairment, sensitivity, pain, discomfort and/or functional problems.\n2\n, \n3\n More severe forms of tooth wear may also have an adverse impact on a patient's quality of life.\n4\n, \n5\n, \n6\n\n\nRestorative intervention is sometimes prescribed for patients with tooth wear.\n3\n However, treatment (with a direct resin composite technique, or indirect techniques) may prove to be costly and complex.\n7\n There may also be some ambiguity with the optimal timing for restorative intervention.\n3\n, \n8\n Whilst counselling and monitoring are advised for all patients with pathological tooth wear, restorative intervention may be indicated when the presenting tooth wear is a clear concern for the patient and/or the clinician, where there are functional or aesthetic concerns and/or symptoms of pain or discomfort.\n3\n However, definitive dental restorations for tooth wear management should not be prescribed until any active dental pathology has been effectively managed and full patient commitment is available.\n3\n Where the presenting pathological tooth wear is not progressive and in the absence of any further concerns, restorative intervention may not be necessary and management with vigilant monitoring and counselling may be continued.\n3\n\n\nDetermining the most appropriate time to prescribe restorative intervention should also consider the progression of the wear process.\n3\n Pragmatic and reliable means to assess the rate of tooth wear progression (between appointments, as well as between different clinicians) are therefore needed.
Tooth wear assessment is most frequently undertaken by periodic clinical (chairside) examination; however, photographs, serial (consecutive) dental casts and serial digital 3D data scans may also be used, each with its own limitations.\n3\n, \n9\n\n\nA plethora of tooth wear indices have been introduced for scoring the severity of tooth wear,\n10\n, \n11\n, \n12\n, \n13\n, \n14\n, \n15\n but universal acceptance of a grading scale for erosive tooth wear in general dental practice is lacking.\n16\n A clinical tooth wear index should ideally offer the potential to undertake scoring using indirect methods such as intra‐oral photographs, traditional gypsum dental casts and digital intra‐oral scans,\n17\n thereby enabling some extra‐oral assessment. This may be particularly beneficial when available clinical chairside time is constrained.\nThe Tooth Wear Evaluation System (TWES) is a modular clinical guideline that can be used for the assessment of tooth wear and to assist with diagnosis and patient management.\n14\n, \n15\n, \n18\n The TWES was revised in 2020 and a new taxonomy was proposed: TWES 2.0.\n18\n The TWES index in general includes the application of an 8‐point occlusal/incisal ordinal grading scale and a 3‐point non‐occlusal/non‐incisal grading scale for the scoring of the respective surfaces.
The TWES has been reported to offer adequate levels of reliability with tooth wear grading when applied clinically, as well as when using dental cast records.\n19\n Furthermore, when undertaking occlusal/incisal surface grading using dental casts and intra‐oral photographic records, the TWES has been described to offer the necessary sensitivity to enable the detection of changes in the pattern of tooth wear on a sequential basis and, thereby, help monitor disease progression.\n17\n, \n20\n The aim of this study was to compare the reliability of grading tooth wear with the TWES between gypsum casts and digital greyscale (black‐white) intra‐oral scan records, applied to patient records demonstrating moderate to severe forms of tooth wear.", " Tooth wear evaluation system The Tooth Wear Evaluation System (TWES) was used as the grading system in this investigation.\n14\n, \n15\n, \n18\n For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. The grades were defined as: grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3‐1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2‐2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height >2/3. For scoring at the non‐occlusal/non‐incisal surfaces, a 3‐point ordinal scale was applied: grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES.
It did not include the extensions from the TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy.\n Subjects The current investigation is part of a larger clinical trial on the management of erosive tooth wear, the Radboud Tooth Wear Project, in which 200 patients are included.\n21\n The records of ten patients were randomly selected for the present study. Inclusion criteria were the presence of moderate to severe tooth wear, with at least one score of TWES ≥ 2. The records applied in this investigation were limited to gypsum casts and 3D (three dimensional) digital intra‐oral scans.
The study was carried out in accordance with the Declaration of Helsinki for research involving humans and ethical approval was obtained (ABR codes: NL31401.091.10, NL30346.091.10 and NL31371.091.10). All patients agreed to participate in the research project, and written informed consent was obtained prior to entering the Radboud Tooth Wear Project.\nThe baseline dental condition of each participant had been fully documented, and full‐arch gypsum casts of the upper and lower dental arches were fabricated. Dental impressions were taken using a vinyl polysiloxane impression material (Ivoclar Virtual 380, Ivoclar Vivadent, Liechtenstein) comprising two consistencies, a heavy body and a monophase, applied in a single stage. The impressions were cast in Type III dental stone (SLR Dental GmbH, Germany) within 24 hours, according to the manufacturer's instructions. A yellow‐coloured dental stone material was used. During the same appointment, digital intra‐oral scans were obtained using the LAVA COS Intraoral Scanner (3M). Both the digital and conventional impressions were captured by the same trained operator. The scanning procedure was undertaken in accordance with the manufacturer's instructions. Scans were made with the patient in a supine position; a latex‐free lip and cheek retractor (Optragate, Ivoclar Vivadent, Liechtenstein) was applied, and the teeth were rinsed, air‐dried and lightly powdered with titanium dioxide. The LAVA COS scanner was used to capture the digital impression, including the bite registration scan. The scans were digitally stored in the web‐based platform, Casemanager (3M). The 3D models of the scans (‘digital intra‐oral scans/digital models’) could be downloaded from this platform, and these open STL files could be easily imported into the free software MeshLab (www.meshlab.net).
Figure 1 is a representation of the MeshLab user interface.\nThe use of Meshlab to score intra‐oral scans with the TWES\n Scoring and the intra‐ and interobserver agreement In advance of this study, Observer 1 (O1), a final‐year undergraduate dental student, was trained and calibrated in the use of the TWES over the course of two training sessions by Observer 2 (O2). Observer 2 was an experienced dental practitioner and researcher.\nThe gypsum cast records included in this investigation were scored using the TWES in the same environment and appraised under consistent, standard room lighting conditions. Under the same conditions, the digital intra‐oral scan records were visualised in greyscale on a computer screen (resolution: 1920x1080) with MeshLab, enabling the assessor to rotate and zoom in on the models. As the output of the 3D models from the LAVA COS scanner is in greyscale, this formed the rationale for the use of greyscale records in this investigation. The sequence of scoring for all records was the first quadrant, followed by the second, the third and, finally, the fourth. No time limit was set for the evaluations.\nTeeth with fixed prosthodontic restorations (eg crowns and bridges), or large intra‐coronal restorations, were excluded from the analysis.
Teeth that were not clearly visible (inclusive of teeth that were unclear on the digital intra‐oral scans), or that were broken or damaged on the casts, were also excluded from the analysis.\nFor the intra‐observer measurements, the ten sets of gypsum casts and digital intra‐oral scan records were scored twice by Observer 1, with a minimum interval of 2 weeks between the consecutive observations. Comparisons were made between the consecutive scores for the full mouth (overall scores), as well as for the anterior and posterior areas. Assessments were then undertaken by Observer 2 applying the same protocols; however, for the purpose of evaluating the interobserver agreement, only one round of scoring was performed by O2. To study the interobserver agreement (O1‐O2), the gypsum casts and digital greyscale intra‐oral scan records for the same ten cases were scored once by both observers, O1 and O2. The observers were blinded to each other's scores and, in the case of O1, blinded to the outcomes of their former observations when carrying out the second round of their assessments. In Figure 2, a flow diagram has been provided to summarise the assessment protocol.\nFlowchart of assessment protocol: intra‐ and inter‐observer agreement\nTo evaluate the effect of the ‘type of record’ (gypsum models or digital greyscale intra‐oral scans) on the scoring with the TWES, the differences in Observer 1’s tooth wear scores at each of the surfaces assessed using the gypsum models and digital scan records were determined.\n Statistical analyses To describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement.\n22\n Scores were presented for the ‘overall’ (total) occlusal/incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type: anterior teeth (incisors and canines) and posterior teeth (premolars and molars), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with 95% confidence intervals (CI) and P‐values (P < .05).\nTo determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen in Table 1, a TWES outcome of ‘0’ would be scored ‘1’, 1a as ‘2’, 1b as ‘3’, etc. For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces was evaluated for the two types of records; a positive score would indicate that scoring using a gypsum model resulted in a higher TWES score.
All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7).\nConversion of the TWES grades into numerical scores, as applied in this investigation\nTWES grades as per Wetselaar & Lobbezoo, 2016.\nTo describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement.\n22\n Scores were presented for the ‘overall’ (total) occlusal/ incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type, hence, anterior teeth (incisors and canines) and posterior teeth (premolars and molar teeth), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with confidence intervals, (95% ci) and P‐values (P < .05).\nTo determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen by Table 1, a TWES outcome of ‘0’ would be scored ‘1’, 1a as ‘2’, 1b as ‘3’ etc For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces were evaluated for the two types of records; a positive score would indicate scoring using a gypsum model would result in a higher TWES score. All analyses were performed using R (version 3.6.1). 
Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7).\nConversion of the TWES grades into numerical scores, as applied in this investigation\nTWES grades as per Wetselaar & Lobbezoo, 2016.", "The Tooth Wear Evaluation (TWES) was used as the grading system in this investigation.\n14\n, \n15\n, \n18\n For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. The grades defined as grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3‐1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2‐2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height of >2/3. For scoring at the non‐occlusal/non‐incisal surfaces, a 3‐point ordinal scale was applied. The grades are described as grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES. It did not include the extensions from the TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy.", "The current investigation is part of a larger clinical trial on the management of erosive tooth wear, the Radboud Tooth Wear Project, in which 200 patients are included.\n21\n The records of ten patients were randomly selected for the present study. Inclusion criteria were the presence of moderate to severe tooth wear, with at least one score of TWES ≥ 2. The records applied in this investigation were limited to gypsum casts and 3D (three dimensional) digital intra‐oral scans. 
The study was carried out in accordance with the Declaration of Helsinki for research involving humans and ethical approval was obtained (ABR codes: NL31401.091.10, NL30346.091.10 and NL31371.091.10). All patients agreed to participate in the research project, and written informed consent was attained prior to entering the Radboud Tooth Wear Project.\nThe baseline dental condition of each participant had been fully documented, and full‐arch gypsum casts of the upper and lower dental arches were fabricated. Dental impressions were taken using a vinyl polysiloxane impression material, (Ivoclar Virtual 380, Ivoclar Vivodent, Liechtenstein, Europe) comprising two consistencies, a Heavy body and Monophase applied in a single stage. The impressions were cast in Type III dental stone (SLR Dental GmbH, Germany) within 24 hours, according to the manufacturer's instructions. A yellow‐coloured dental stone material was used. During the same appointment, digital intra‐oral scans were obtained using the LAVA COS Intraoral Scanner (3M). Both the digital and dental impressions were captured by the same trained operator. The scanning procedure was undertaken in accordance with the manufacturer's instructions. Scans were made with the patient in a supine position, a latex‐free lip and cheek retractor was applied, Optragate (Ivoclar Vivadent, Liechtenstein), teeth were rinsed, air‐dried and lightly powdered with titanium dioxide. The LAVA COS scanner was used to capture the digital impression, including the bite registration scan. The scans were digitally stored in the web‐based platform, Casemanager (3M). The 3D models of the scans (‘digital intra‐oral scans/ digital models’) were amenable to downloading from this platform and these open STL files could be easily imported into the free‐software, MeshLab (www.meshlab.net). 
Figure 1 is a representation of the MeshLab user interface.\nThe use of Meshlab to score intra‐oral scans with the TWES", "In advance of this study, Observer 1 (O1), a final‐year undergraduate dental student, was trained and calibrated over the course of two training sessions with the use of the TWES by Observer 2 (O2). Observer 2 was an experienced dental practitioner and researcher.\nThe gypsum cast records included in this investigation were scored using the TWES in the same environment and appraised under consistent, standard room lighting conditions. Under same conditions, the digital intra‐oral scan records were visualised in greyscale on a computer screen (resolution: 1920x1080) with MeshLab, enabling the assessor to rotate and zoom in on the models. As the output of the 3D models when using the LAVA COS scanner is in greyscale, this formed the rationale for the use of greyscale records in this investigation. The sequence of scoring for all records was, the first quadrant, followed by the second, the third and finally, the fourth. No time limit was set for the evaluations.\nTeeth with fixed prosthodontic restorations (eg crowns and bridges), or large intra‐coronal restorations were excluded from the analysis. Teeth that were not clearly visible (inclusive of teeth that were unclear on the digital intra‐oral scans), or where they were broken/ or damaged on the casts were also excluded from the analysis.\nFor the intra‐observer measurements, the ten sets of gypsum casts and digital intra‐oral scan records were scored twice by Observer 1 with a minimum interval of 2 weeks between the consecutive observations. Comparisons were made between the consecutive scores for the full mouth (overall scores), as well as for anterior and posterior areas. Assessments were then undertaken by Observer 2 applying the same protocols; however, for the purpose of evaluating the interobserver agreement, only one round of scoring was performed by O2. 
To study the interobserver agreement (O1‐O2), the gypsum cast and digital greyscale intra‐oral scan records for the same ten cases were scored once by both observers, O1 and O2. The observers were blinded to each other's scores and, in the case of O1, to the outcomes of their former observations when carrying out the second round of their assessments. In Figure 2, a flow diagram has been provided to summarise the assessment protocol.\nFlowchart of assessment protocol: intra‐ and inter‐observer agreement\nTo evaluate the effect of the ‘type of record’ (gypsum models or digital greyscale intra‐oral scans) on the scoring with the TWES, the differences in Observer 1’s tooth wear scores at each of the surfaces assessed using the gypsum models and digital scan records were determined.", "To describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement.\n22\n Scores were presented for the ‘overall’ (total) occlusal/ incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type, hence, for anterior teeth (incisors and canines) and posterior teeth (premolars and molars), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with confidence intervals (95% CI) and P‐values (P < .05).\nTo determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. 
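The weighted Kappa statistic described above can be illustrated outside R; the sketch below uses Python's scikit-learn as a stand-in (the study itself used the Kappa function of R's vcd library), with hypothetical observer scores rather than study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical numeric scores (1-8 occlusal/incisal scale) from two observers
# on the same surfaces -- illustrative values only, not study data.
observer1 = [5, 6, 7, 5, 4, 6, 5, 7, 6, 5]
observer2 = [5, 5, 7, 5, 4, 6, 6, 7, 6, 4]

# Cohen's weighted Kappa with squared ("quadratic") weights, as applied
# in all Kappa analyses of the study.
kappa_w = cohen_kappa_score(observer1, observer2, weights="quadratic")

def interpret_kappa(k):
    """Interpretation bands quoted in the study (reference 22)."""
    if k < 0:
        return "no agreement"
    if k <= 0.20:
        return "slight"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "substantial"
    return "almost perfect"
```

Quadratic weights penalise large disagreements more heavily than off-by-one disagreements, which suits an ordinal 8-point scale.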
Hence, as seen in Table 1, a TWES outcome of ‘0’ was scored as ‘1’, ‘1a’ as ‘2’, ‘1b’ as ‘3’, and so on. For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces was evaluated for the two types of records; a positive score would indicate that scoring using a gypsum model resulted in a higher TWES score. All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7).\nConversion of the TWES grades into numerical scores, as applied in this investigation\nTWES grades as per Wetselaar & Lobbezoo, 2016.", "Table 2 provides a combined overview of the TWES scores for all ten patient records at the occlusal/incisal, the buccal and the palatal/lingual surfaces. The patient records showed the presence of significant amounts of tooth wear at all teeth. The majority of the scores at the occlusal/incisal surfaces were between TWES 2 (wear with dentine exposure and loss of clinical crown height <1/3) and TWES 3b (wear with dentine exposure and loss of clinical crown height >1/2‐2/3). Twenty‐one teeth included in the patient records were scored TWES 4, presenting with dentine exposure and loss of clinical crown height of >2/3.\nDescriptives of tooth wear scores using the TWES at all tooth surfaces, measured on gypsum casts (n = 10)\nDetails of the levels of intra‐observer agreement (O1) and interobserver agreement (O1‐O2) (Kappa scores) for the consecutive scoring of tooth wear applying the TWES on the gypsum cast records and the digital intra‐oral scan records included in this investigation are presented in Table 3. Table 3 also provides information relating to the comparative evaluation between the use of gypsum cast records and digital greyscale intra‐oral scan records with respect to the reproducibility of tooth wear scoring with the TWES. 
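The Table 1 grade-to-score conversion and the paired comparison of record types described in the methods can be sketched as follows; this is an illustrative Python equivalent (the study's analyses were performed in R), and the grades shown are made up for the example, not taken from the study.

```python
from scipy.stats import ttest_rel

# Conversion of TWES occlusal/incisal grades to numerical scores (Table 1)
TWES_NUMERIC = {"0": 1, "1a": 2, "1b": 3, "1c": 4, "2": 5, "3a": 6, "3b": 7, "4": 8}

# Hypothetical grades for the same surfaces on the two record types
gypsum_grades  = ["2", "3a", "3b", "2", "1c", "3a", "3b", "2"]
digital_grades = ["2", "2",  "3a", "2", "1c", "2",  "3a", "2"]

gypsum = [TWES_NUMERIC[g] for g in gypsum_grades]
digital = [TWES_NUMERIC[g] for g in digital_grades]

# Paired t-test on the numeric scores; a positive mean difference
# (gypsum - digital) indicates higher scoring on the gypsum casts.
mean_diff = sum(g - d for g, d in zip(gypsum, digital)) / len(gypsum)
t_stat, p_value = ttest_rel(gypsum, digital)
```

Pairing by surface is essential here: the two record types describe the same teeth, so an unpaired test would discard that structure.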
For the grading of the overall occlusal/ incisal surfaces using gypsum cast records, the levels of intra‐observer agreement (O1) and interobserver agreement (O1‐O2) were significantly higher compared with the agreement in the scoring of the same surfaces using the digital greyscale intra‐oral scan records (P < .001 and P < .001, respectively). For the grading of the overall buccal and palatal/lingual surfaces, other than significantly higher levels of O1 agreement in the scoring of the buccal surfaces using gypsum cast records (P = .013) and of O1‐O2 agreement in the scoring of the palatal/lingual surfaces with gypsum cast records (P = .043), no other significant effect of the type of record used on the reliability of scoring with the TWES was found.\nIntra‐ and interobserver agreements (Kappa scores) using the TWES on gypsum cast records and digital intra‐oral scans: Cohen's Kappa (κW) of intra‐ and interobserver measurements per location (O1 = Observer 1, O2 = Observer 2)\nDifferences between the Kappa scores on gypsum cast records versus digital intra‐oral scans, expressed with P‐value and 95% CI.\nN/A: no statistical test was possible.\nBold denotes a value that was statistically significant.\nMeasurement showed a perfect agreement on a single score.\nTable 4 provides information about the effect of the type of record on the tooth wear score. This was expressed as the mean difference in the tooth wear grading on gypsum casts and the digital greyscale intra‐oral scan records using the TWES. For the overall scores at the occlusal/ incisal surfaces, grading of the gypsum casts culminated in significantly higher TWES scores compared with the use of the digital greyscale intra‐oral scan records (P < .001; 95% CI = [0.084…0.272]). 
However, the overall scores at the buccal and palatal/lingual surfaces were significantly higher using the digital intra‐oral scan records than with the use of gypsum cast records (P = .009; 95% CI = [−0.294…−0.042] and P = .001; 95% CI = [−0.342…−0.084], respectively).\nDifferences in TWES scores between measurements on gypsum casts and digital intra‐oral scans\nThe table presents the mean difference between tooth wear gradings on gypsum models versus digital scans using the TWES, together with the P‐value and the 95% CI.\nTo test the differences, the TWES index was converted into an 8‐point scale (0 = 1, 1a = 2, 1b = 3, 1c = 4, 2 = 5, 3a = 6, 3b = 7, and 4 = 8).\nA positive score means a higher tooth wear score on the gypsum models compared with the digital scan records.\nBold denotes a value that was statistically significant.", "This study has reported high levels of agreement (both intra‐ and interobserver) in the scoring of the occlusal/incisal surfaces using gypsum cast records, applying the 8‐point grading scale of the TWES to the dental records of ten randomly selected patients with signs of moderate to severe tooth wear. The superiority of using gypsum cast records compared with digital greyscale intra‐oral scan records at the occlusal/incisal surfaces was statistically significant. Moreover, significantly higher tooth wear scores were recorded when applying the gypsum cast records for the grading of the occlusal/ incisal surfaces, whereas the opposite was reported for the buccal/palatal surfaces. As with the present investigation, several previous studies have also reported favourable reliability applying the 8‐point occlusal/incisal grading scale of the TWES for the assessment of worn occlusal/ incisal surfaces using traditional dental casts.\n17\n, \n19\n, \n20\n However, in each of these previous investigations, Intraclass Correlation Coefficients (ICCs) were used to determine reliability. 
ICCs have been developed for the analysis of continuous outcomes. Furthermore, given that the results of an ICC calculation may be significantly affected by the choice to investigate agreement (as in this case, rather than consistency), the decision was taken to use weighted Kappa scores.\nAlthough tooth wear assessment using the TWES chairside has been shown to be more reliable than assessments carried out using dental cast records alone,\n19\n reliability investigations have shown the outcomes offered by the use of intra‐oral photographs for occlusal/incisal grading to be comparable to the use of gypsum casts.\n17\n The presence or absence of initial dentine exposure will, however, be more challenging to ascertain using dental casts alone,\n9\n as the visual colour changes and subtle tactile alterations at the dental hard tissues that accompany the wear process (and are often associated with the early stages of tooth wear) may not be as readily detectable as with chairside assessment. As a limitation of the present study, no patient records were included of cases demonstrating lower levels, or no signs, of tooth wear. Furthermore, yellow‐coloured Type III dental stone was used for the fabrication of the cast records. Whilst Type III dental stone is intended for the construction of dental casts, the use of a Type IV gypsum material (typically used for the fabrication of dental dies), which can offer higher abrasion resistance and possibly finer surface detail, may have had an impact on the observations reported. This may be an area for future investigation, as may be the influence of the colour of the gypsum product on the scoring outcomes.\nIn the current study, using the gypsum cast records, lower levels of intra‐ and interobserver agreement were reported for the scoring of tooth wear at the occlusal/incisal surfaces of the posterior teeth than at the anterior teeth. 
Given the practical application of an 8‐point ordinal scale for the scoring of the occlusal/ incisal surfaces, with multiple options available and subtle differences, especially between the various sub‐scales of the TWES, some variation in the scoring between consecutive assessments (both intra‐ and interexaminer) is perhaps inevitable.\nThe results of this study also showed comparatively higher levels of intra‐observer agreement for the scoring of the posterior teeth compared with the anterior teeth when applying the digital greyscale intra‐oral scan records. This observation was independent of the surface scored. Digital intra‐oral scans offer the opportunity for the assessor to view the records in multiple directions and also allow zooming in on areas of further interest; however, unlike gypsum casts, they do not permit any tactile assessment. Digital models in greyscale (black‐white, as in this investigation) also do not permit adequate visualisation of the hard tissue colour changes, which may be relevant for the accurate assessment of less severe patterns of tooth wear, or of tooth wear at the non‐occluding surfaces of the anterior teeth, as discussed above. Although the use of coloured 3D scans may help improve this aspect and permit the visualisation of exposed dentine, the currently available coloured scans appear to provide a sub‐optimal contrast of the tooth surfaces. 
The need for the visual assessment of the colour changes that accompany the tooth wear process may have accounted for the observations at the buccal and palatal/lingual surfaces of the anterior teeth included; however, the precise reason for the higher tooth wear scores attained at the anterior buccal and palatal/lingual surfaces when using digital greyscale intra‐oral scans is not known.\nIn this investigation, where the buccal and palatal/lingual surfaces were graded using the 3‐point ordinal scale of the TWES, in general, lower levels of agreement were described compared with the assessments undertaken at the occlusal/ incisal surfaces. This observation was independent of the type of patient record used. However, some caution needs to be applied with the interpretation of the data attained for the scoring of the buccal and palatal/lingual surfaces, as on occasion, exceptionally high levels of agreement (κW = 1.0) were reported for the anterior teeth included within the sample (Table 3). In general, Cohen's kappa score requires further consideration if one outcome is extremely dominant and the other categories are only encountered sporadically. Furthermore, it may not be appropriate to compare the outcomes in agreement at the differing surfaces when an 8‐point ordinal scale is used at one type of surface and a 3‐point scale at another. 
Previous investigations have also reported considerably lower reliability scores for the grading of non‐occlusal/ non‐incisal surfaces using the TWES on dental casts.\n17\n, \n19\n, \n20\n These findings have been postulated to be accounted for by the levels of training the observers may have received to carry out appropriate evaluations at these surfaces, or to be a possible reflection of a flaw of the TWES grading system itself when applied at such surfaces.\n19\n\n\nRecoding the TWES into a numerical scale and subsequently analysing the differences between gypsum and digital scores, for the purpose of investigating the effect of the type of record on the scoring, is an approach that may be questioned. Recoding silently assumes the difference between any two consecutive grades of the TWES to be of the same size. However, this is not necessarily the case. Two alternatives to the approach applied in this investigation were considered. The first was an extension of the McNemar test, the McNemar‐Bowker test; however, due to the large number of categories in relation to the size of the study, this analysis was not effective. As the second alternative, the Wilcoxon signed‐rank test was applied. The latter test, whilst suitable for comparing the gypsum cast and digital intra‐oral scan scores, is not able to provide a clinically interpretable estimation of the differences between the scores. However, the Wilcoxon signed‐rank test was applied to perform a sensitivity analysis, and the outcomes were compared with the P‐values attained from the paired t tests. In all cases, the P‐values reported were similar. A situation in which one test gave a statistically significant difference and the other test labelled the difference as not statistically significant was not observed. 
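A sensitivity check of the kind described, comparing the paired t test with the Wilcoxon signed-rank test on the same paired scores, can be sketched as below. This is an illustrative Python version with made-up scores (the study's analyses were run in R); the point is only that both tests are applied to the identical pairs.

```python
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical paired numeric TWES scores (gypsum vs digital) for one
# observer -- illustrative values only, not study data.
gypsum  = [5, 6, 7, 5, 4, 6, 7, 5, 6, 4]
digital = [4, 4, 6, 4, 2, 5, 6, 3, 5, 3]

# Primary analysis: paired t-test (gives a clinically interpretable
# estimate of the mean difference between record types).
t_p = ttest_rel(gypsum, digital).pvalue

# Sensitivity analysis: Wilcoxon signed-rank test on the same pairs
# (makes no equal-interval assumption about the recoded ordinal scale).
w_p = wilcoxon(gypsum, digital).pvalue

# The study reports the two tests never disagreed on significance.
agree = (t_p < 0.05) == (w_p < 0.05)
```

If the two tests routinely disagreed, the equal-interval assumption behind the recoding would be suspect; agreement supports reporting the more interpretable t test.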
Consequently, the authors considered the more easily interpretable paired t test to offer a level of reliability that was deemed sufficient for the purpose of undertaking the analysis.\nThere are some further limitations to the current study. Previous investigations (often using other clinical tooth wear indices) have reported challenges with the accurate grading of early tooth wear using study casts,\n15\n clinical photographs\n23\n or both.\n24\n The clinical background of the observers has also been shown to influence the outcomes of scoring tooth wear using study casts and photographs.\n24\n In the present study, both observers were of the same discipline. Furthermore, when considering the effect of the type of record on the scoring, only the outcomes of a single observer's assessments were used (O1). The impact of the resolution offered by the intra‐oral scanning device used in this investigation is also not known.\nAlthough the merits of occlusal/ incisal grading using the TWES on gypsum casts have been highlighted, compliance with the taking of study models in the primary care sector to monitor wear has been shown to be relatively low.\n25\n Some caution is also required when undertaking assessments of tooth wear using sequential gypsum casts, due to the risks of distortion of the dental materials used, and the effect of the actual dental material(s) selected.\n26\n Based on the results of this investigation, it may also be challenging to make accurate comparisons between consecutive gypsum cast records and digital intra‐oral scans.\nIn the future, with the increasing popularity of intra‐oral scanners in dental practice, some clinicians may preferentially choose to use digital scans/ models for the purpose of the sequential monitoring of tooth wear, to overcome the challenges associated with traditional gypsum models, including storage. 
The use of an intra‐oral scanner may offer the scope to monitor tooth wear progression consistently and accurately,\n27\n, \n28\n, \n29\n inclusive of the use of subtraction techniques that have been more recently reported.\n30\n This may help overcome some of the drawbacks commonly associated with the fabrication of gypsum dental casts. However, as there are some clear barriers to the routine use of intra‐oral scanners in the primary care setting (including economic factors), the importance of using an appropriate tooth wear index to monitor the progression of wear is likely to remain, at least in the short to medium term.", "It was concluded that the scores obtained with the grading scales of the TWES on gypsum casts can offer reliability, especially for the grading of the occlusal/incisal surfaces of teeth with signs of moderate to severe wear. The level of reproducibility offered by digital greyscale intra‐oral scan records for carrying out tooth wear assessments with the TWES was generally inferior to that offered by the use of gypsum casts.", "None of the authors have any conflicts of interest to declare.", "SB Mehta contributed to data interpretation, drafted and critically revised the manuscript. EM Bronkhorst contributed to design and data interpretation, performed all statistical analyses and critically revised the manuscript. L. Crins contributed to data interpretation and critically revised the manuscript. P. Wetselaar contributed to conception, design and data collection, and critically revised the manuscript. MCDNJM Huysmans contributed to conception, design and data interpretation, and critically revised the manuscript. BAC Loomans is the project leader of the Radboud Tooth Wear Project, contributed to conception, design, enrolment of patients, data acquisition and interpretation, and critically revised the manuscript. All authors gave their final approval and agree to be accountable for all aspects of the work." ]
[ "background", null, null, null, null, null, "results", "discussion", "conclusions", "COI-statement", null ]
[ "assessment tools", "dental casts", "digital casts", "grading scales", "reliability", "tooth wear", "tooth wear evaluation system (TWES)" ]
BACKGROUND: In 2018, an estimated mean global prevalence of erosive tooth wear in permanent teeth between 20% and 45% was described. 1 Tooth wear can result in a variety of dentofacially related symptoms, including aesthetic impairment, sensitivity, pain, discomfort and/or functional problems. 2 , 3 More severe forms of tooth wear may also have an adverse impact on a patient's quality of life. 4 , 5 , 6 Restorative intervention is sometimes prescribed for patients with tooth wear. 3 However, treatment (with a direct resin composite technique or indirect techniques) may prove to be costly and complex. 7 There may also be some ambiguity with the optimal timing for restorative intervention. 3 , 8 Whilst counselling and monitoring are advised for all patients with pathological tooth wear, restorative intervention may be indicated when the presenting tooth wear is a clear concern for the patient and/or the clinician, where there may be functional or aesthetic concerns and/or symptoms of pain or discomfort. 3 However, definitive dental restorations for tooth wear management should not be prescribed until any active dental pathology has been effectively managed and full patient commitment is available. 3 Where the presenting pathological tooth wear is not progressive and there are no further concerns, restorative intervention may not be necessary and management with vigilant monitoring and counselling may be continued. 3 Determining the most appropriate time to prescribe restorative intervention should also consider the progression of the wear process. 3 The need for pragmatic and reliable means to assess the rate of tooth wear progression (between appointments, as well as between different clinicians) is therefore relevant. 
Tooth wear assessment is most frequently undertaken by periodic clinical (chairside) assessment; however, photographs, serial (consecutive) dental casts and serial digital 3D data scans may also be used to undertake assessment, each with their own limitations. 3 , 9 A plethora of tooth wear indices have been introduced for the scoring of the severity of the tooth wear present, 10 , 11 , 12 , 13 , 14 , 15 but the universal acceptance of a grading scale for erosive tooth wear in general dental practice is lacking. 16 A clinical tooth wear index should ideally offer the potential to undertake scoring using indirect methods such as intra‐oral photographs, traditional gypsum dental casts and digital intra‐oral scans, 17 thereby enabling some extra‐oral assessment. This may be particularly beneficial when the available clinical chairside time may be constrained. The Tooth Wear Evaluation System (TWES) is a modular clinical guideline that can be used for the assessment of tooth wear and to assist with diagnosis and patient management. 14 , 15 , 18 The TWES was revised in 2020 and a new taxonomy was proposed: TWES 2.0. 18 The TWES index in general includes the application of an 8‐point occlusal/incisal ordinal grading scale and a 3‐point non‐occlusal/ non‐incisal grading scale for the scoring of the respective surfaces. The TWES has been reported to offer adequate levels of reliability with tooth wear grading when applied clinically, as well as when using dental cast records. 19 Furthermore, when undertaking occlusal/ incisal surface grading using dental casts and intra‐oral photographic records, the TWES has been described to offer the necessary sensitivity to enable the detection of changes in the pattern of tooth wear on a sequential basis and, thereby, help monitor disease progression. 
17 , 20 The aim of this study was to undertake a comparative evaluation between the use of gypsum casts and digital greyscale (black‐white) intra‐oral scan records with the reliability of grading tooth wear using the TWES, applied to patient records that were demonstrative of moderate to severe forms of tooth wear. MATERIALS & METHODS: Tooth wear evaluation system The Tooth Wear Evaluation System (TWES) was used as the grading system in this investigation. 14 , 15 , 18 For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. The grades were defined as grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3‐1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2‐2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height of >2/3. For scoring at the non‐occlusal/non‐incisal surfaces, a 3‐point ordinal scale was applied. The grades are described as grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES. It did not include the extensions from the TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy. Subjects The current investigation is part of a larger clinical trial on the management of erosive tooth wear, the Radboud Tooth Wear Project, in which 200 patients are included. 21 The records of ten patients were randomly selected for the present study. Inclusion criteria were the presence of moderate to severe tooth wear, with at least one score of TWES ≥ 2. The records applied in this investigation were limited to gypsum casts and 3D (three‐dimensional) digital intra‐oral scans. The study was carried out in accordance with the Declaration of Helsinki for research involving humans and ethical approval was obtained (ABR codes: NL31401.091.10, NL30346.091.10 and NL31371.091.10). All patients agreed to participate in the research project, and written informed consent was attained prior to entering the Radboud Tooth Wear Project. 
The baseline dental condition of each participant had been fully documented, and full‐arch gypsum casts of the upper and lower dental arches were fabricated. Dental impressions were taken using a vinyl polysiloxane impression material (Ivoclar Virtual 380, Ivoclar Vivadent, Liechtenstein) comprising two consistencies, a heavy body and a monophase, applied in a single stage. The impressions were cast in Type III dental stone (SLR Dental GmbH, Germany) within 24 hours, according to the manufacturer's instructions. A yellow‐coloured dental stone material was used. During the same appointment, digital intra‐oral scans were obtained using the LAVA COS Intraoral Scanner (3M). Both the digital and dental impressions were captured by the same trained operator. The scanning procedure was undertaken in accordance with the manufacturer's instructions. Scans were made with the patient in a supine position; a latex‐free lip and cheek retractor (OptraGate, Ivoclar Vivadent, Liechtenstein) was applied, and the teeth were rinsed, air‐dried and lightly powdered with titanium dioxide. The LAVA COS scanner was used to capture the digital impression, including the bite registration scan. The scans were digitally stored in the web‐based platform, Casemanager (3M). The 3D models of the scans (‘digital intra‐oral scans/ digital models’) could be downloaded from this platform as open STL files and easily imported into the free software MeshLab (www.meshlab.net). Figure 1 is a representation of the MeshLab user interface. The use of Meshlab to score intra‐oral scans with the TWES Scoring and the intra‐ and interobserver agreement In advance of this study, Observer 1 (O1), a final‐year undergraduate dental student, was trained and calibrated over the course of two training sessions in the use of the TWES by Observer 2 (O2). Observer 2 was an experienced dental practitioner and researcher. The gypsum cast records included in this investigation were scored using the TWES in the same environment and appraised under consistent, standard room lighting conditions. Under the same conditions, the digital intra‐oral scan records were visualised in greyscale on a computer screen (resolution: 1920x1080) with MeshLab, enabling the assessor to rotate and zoom in on the models. As the output of the 3D models when using the LAVA COS scanner is in greyscale, this formed the rationale for the use of greyscale records in this investigation. The sequence of scoring for all records was the first quadrant, followed by the second, the third and, finally, the fourth. No time limit was set for the evaluations. Teeth with fixed prosthodontic restorations (eg crowns and bridges) or large intra‐coronal restorations were excluded from the analysis. Teeth that were not clearly visible (inclusive of teeth that were unclear on the digital intra‐oral scans), or that were broken or damaged on the casts, were also excluded from the analysis. For the intra‐observer measurements, the ten sets of gypsum cast and digital intra‐oral scan records were scored twice by Observer 1 with a minimum interval of 2 weeks between the consecutive observations. Comparisons were made between the consecutive scores for the full mouth (overall scores), as well as for the anterior and posterior areas. 
Assessments were then undertaken by Observer 2 applying the same protocols; however, for the purpose of evaluating the interobserver agreement, only one round of scoring was performed by O2. To study the interobserver agreement (O1‐O2), the gypsum cast and digital greyscale intra‐oral scan records for the same ten cases were scored once by both observers, O1 and O2. The observers were blinded to each other's scores and, in the case of O1, to the outcomes of their former observations when carrying out the second round of their assessments. In Figure 2, a flow diagram has been provided to summarise the assessment protocol. Flowchart of assessment protocol: intra‐ and inter‐observer agreement To evaluate the effect of the ‘type of record’ (gypsum models or digital greyscale intra‐oral scans) on the scoring with the TWES, the differences in Observer 1’s tooth wear scores at each of the surfaces assessed using the gypsum models and digital scan records were determined. Statistical analyses To describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. 
Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement. 22 Scores were presented for the ‘overall’ (total) occlusal/ incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type, hence, anterior teeth (incisors and canines) and posterior teeth (premolars and molar teeth), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with confidence intervals, (95% ci) and P‐values (P < .05). To determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen by Table 1, a TWES outcome of ‘0’ would be scored ‘1’, 1a as ‘2’, 1b as ‘3’ etc For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces were evaluated for the two types of records; a positive score would indicate scoring using a gypsum model would result in a higher TWES score. All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7). Conversion of the TWES grades into numerical scores, as applied in this investigation TWES grades as per Wetselaar & Lobbezoo, 2016. To describe the agreement between the intra‐ and interobserver scores for the assessments using the gypsum cast records or digital intra‐oral scans, Cohen's weighted Kappa (κW) was used. In all Kappa analyses, squared weights were applied. 
Kappa measures were interpreted as follows: <0 as indicating ‘no agreement,’ 0‐0.20 as slight, 0.21‐0.40 as fair, 0.41‐0.60 as moderate, 0.61‐0.80 as substantial and 0.81‐1 as almost perfect agreement. 22 Scores were presented for the ‘overall’ (total) occlusal/ incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type, hence, anterior teeth (incisors and canines) and posterior teeth (premolars and molar teeth), irrespective of the arch. Differences in Kappa scores were analysed using t tests and the data expressed as mean values, with confidence intervals, (95% ci) and P‐values (P < .05). To determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen by Table 1, a TWES outcome of ‘0’ would be scored ‘1’, 1a as ‘2’, 1b as ‘3’ etc For all measurements, the scores of the digital intra‐oral scans and the gypsum cast records were compared with a paired t test. The mean difference in the TWES scores at the various surfaces were evaluated for the two types of records; a positive score would indicate scoring using a gypsum model would result in a higher TWES score. All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4‐7). Conversion of the TWES grades into numerical scores, as applied in this investigation TWES grades as per Wetselaar & Lobbezoo, 2016. Tooth wear evaluation system: The Tooth Wear Evaluation (TWES) was used as the grading system in this investigation. 14 , 15 , 18 For the scoring of the occlusal and incisal surfaces, an 8‐point ordinal scale was used. 
The grades were defined as: grade 0 = no (visible) wear; grade 1a = minimal wear of cusps or incisal tips, within the enamel; grade 1b = facets parallel to the normal planes of contour, within the enamel; grade 1c = noticeable flattening of cusps or incisal edges, within the enamel; grade 2 = wear with dentine exposure and loss of clinical crown height <1/3; grade 3a = wear with dentine exposure and loss of clinical crown height 1/3-1/2; grade 3b = wear with dentine exposure and loss of clinical crown height >1/2-2/3; and grade 4 = wear with dentine exposure and loss of clinical crown height of >2/3. For scoring at the non-occlusal/non-incisal surfaces, a 3-point ordinal scale was applied. The grades are described as: grade 0 = no (visible) wear; grade 1 = wear confined to the enamel; and grade 2 = wear into the dentine. The scope of this study is based on the TWES; it did not include the extensions from TWES 2.0, as data collection commenced prior to the introduction of the updated taxonomy. Subjects: The current investigation is part of a larger clinical trial on the management of erosive tooth wear, the Radboud Tooth Wear Project, in which 200 patients are included. 21 The records of ten patients were randomly selected for the present study. Inclusion criteria were the presence of moderate to severe tooth wear, with at least one score of TWES ≥ 2. The records applied in this investigation were limited to gypsum casts and 3D (three-dimensional) digital intra-oral scans. The study was carried out in accordance with the Declaration of Helsinki for research involving humans, and ethical approval was obtained (ABR codes: NL31401.091.10, NL30346.091.10 and NL31371.091.10). All patients agreed to participate in the research project, and written informed consent was obtained prior to entering the Radboud Tooth Wear Project.
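The two TWES ordinal scales described above lend themselves to a simple lookup representation. The sketch below is purely illustrative: the dictionary names and the helper function are our own, not part of the study's tooling.

```python
# Illustrative lookup tables for the TWES ordinal scales described above.
# Grade labels follow the paper; the data structure itself is hypothetical.
OCCLUSAL_INCISAL_SCALE = {
    "0":  "no (visible) wear",
    "1a": "minimal wear of cusps or incisal tips, within the enamel",
    "1b": "facets parallel to the normal planes of contour, within the enamel",
    "1c": "noticeable flattening of cusps or incisal edges, within the enamel",
    "2":  "dentine exposure, loss of clinical crown height <1/3",
    "3a": "dentine exposure, loss of clinical crown height 1/3-1/2",
    "3b": "dentine exposure, loss of clinical crown height >1/2-2/3",
    "4":  "dentine exposure, loss of clinical crown height >2/3",
}

NON_OCCLUSAL_SCALE = {
    "0": "no (visible) wear",
    "1": "wear confined to the enamel",
    "2": "wear into the dentine",
}

def describe(grade: str, occlusal: bool = True) -> str:
    """Return the textual definition of a TWES grade."""
    scale = OCCLUSAL_INCISAL_SCALE if occlusal else NON_OCCLUSAL_SCALE
    return scale[grade]
```

A representation of this kind makes explicit that the occlusal/incisal scale has eight ordered categories while the non-occlusal/non-incisal scale has only three, which is relevant when comparing agreement statistics across surface types.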
The baseline dental condition of each participant had been fully documented, and full-arch gypsum casts of the upper and lower dental arches were fabricated. Dental impressions were taken using a vinyl polysiloxane impression material (Ivoclar Virtual 380, Ivoclar Vivadent, Liechtenstein, Europe) comprising two consistencies, a heavy body and a monophase, applied in a single stage. The impressions were cast in Type III dental stone (SLR Dental GmbH, Germany) within 24 hours, according to the manufacturer's instructions. A yellow-coloured dental stone material was used. During the same appointment, digital intra-oral scans were obtained using the LAVA COS Intraoral Scanner (3M). Both the digital and dental impressions were captured by the same trained operator. The scanning procedure was undertaken in accordance with the manufacturer's instructions. Scans were made with the patient in a supine position; a latex-free lip and cheek retractor (OptraGate, Ivoclar Vivadent, Liechtenstein) was applied, and the teeth were rinsed, air-dried and lightly powdered with titanium dioxide. The LAVA COS scanner was used to capture the digital impression, including the bite registration scan. The scans were digitally stored in the web-based platform Casemanager (3M). The 3D models of the scans ('digital intra-oral scans/digital models') were amenable to downloading from this platform, and these open STL files could easily be imported into the free software MeshLab (www.meshlab.net). Figure 1 is a representation of the MeshLab user interface. The use of MeshLab to score intra-oral scans with the TWES Scoring and the intra- and interobserver agreement: In advance of this study, Observer 1 (O1), a final-year undergraduate dental student, was trained and calibrated over the course of two training sessions with the use of the TWES by Observer 2 (O2). Observer 2 was an experienced dental practitioner and researcher.
The gypsum cast records included in this investigation were scored using the TWES in the same environment and appraised under consistent, standard room lighting conditions. Under the same conditions, the digital intra-oral scan records were visualised in greyscale on a computer screen (resolution: 1920 x 1080) with MeshLab, enabling the assessor to rotate and zoom in on the models. As the output of the 3D models when using the LAVA COS scanner is in greyscale, this formed the rationale for the use of greyscale records in this investigation. The sequence of scoring for all records was the first quadrant, followed by the second, the third and, finally, the fourth. No time limit was set for the evaluations. Teeth with fixed prosthodontic restorations (e.g. crowns and bridges) or large intra-coronal restorations were excluded from the analysis. Teeth that were not clearly visible (inclusive of teeth that were unclear on the digital intra-oral scans), or that were broken or damaged on the casts, were also excluded from the analysis. For the intra-observer measurements, the ten sets of gypsum casts and digital intra-oral scan records were scored twice by Observer 1, with a minimum interval of 2 weeks between the consecutive observations. Comparisons were made between the consecutive scores for the full mouth (overall scores), as well as for the anterior and posterior areas. Assessments were then undertaken by Observer 2 applying the same protocols; however, for the purpose of evaluating the interobserver agreement, only one round of scoring was performed by O2. To study the interobserver agreement (O1-O2), the gypsum casts and digital greyscale intra-oral scan records for the same ten cases were scored once by both observers, O1 and O2. The observers were blinded to each other's scores and, in the case of O1, blinded to the outcomes of their former observations when carrying out the second round of their assessments.
In Figure 2, a flow diagram has been provided to summarise the assessment protocol. Flowchart of assessment protocol: intra- and inter-observer agreement To evaluate the effect of the 'type of record' (gypsum models or digital greyscale intra-oral scans) on the scoring with the TWES, the differences in Observer 1's tooth wear scores at each of the surfaces assessed using the gypsum models and digital scan records were determined. Statistical analyses: To describe the agreement between the intra- and interobserver scores for the assessments using the gypsum cast records or digital intra-oral scans, Cohen's weighted kappa (κW) was used. In all kappa analyses, squared weights were applied. Kappa measures were interpreted as follows: <0 as indicating 'no agreement', 0-0.20 as slight, 0.21-0.40 as fair, 0.41-0.60 as moderate, 0.61-0.80 as substantial and 0.81-1 as almost perfect agreement. 22 Scores were presented for the 'overall' (total) occlusal/incisal surfaces, for the buccal surfaces and for the palatal/lingual surfaces. Scores were also presented by tooth type, hence anterior teeth (incisors and canines) and posterior teeth (premolars and molars), irrespective of the arch. Differences in kappa scores were analysed using t tests and the data expressed as mean values, with confidence intervals (95% CI) and P-values (P < .05). To determine the effect of the type of record on the scoring outcome, the TWES scores of the occlusal/incisal scale (0, 1a, 1b, etc) were converted into numerical scores, ranging from 1 to 8 inclusive. Hence, as seen in Table 1, a TWES outcome of '0' would be scored '1', '1a' as '2', '1b' as '3', and so on. For all measurements, the scores of the digital intra-oral scans and the gypsum cast records were compared with a paired t test. The mean differences in the TWES scores at the various surfaces were evaluated for the two types of records; a positive value indicates that scoring using a gypsum model resulted in a higher TWES score.
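The analysis steps described above (the Table 1 grade conversion, Cohen's kappa with squared weights, and the paired t statistic) can be sketched in a few lines. This is an illustrative re-implementation in Python; the study itself used R 3.6.1 with the Kappa function of the vcd package, and the function names and degenerate-case handling below are our own assumptions:

```python
import math
from collections import Counter
from statistics import mean, stdev

# Table 1 mapping of TWES occlusal/incisal grades to numeric scores (1-8).
TWES_TO_NUMERIC = {"0": 1, "1a": 2, "1b": 3, "1c": 4,
                   "2": 5, "3a": 6, "3b": 7, "4": 8}

def quadratic_weighted_kappa(x, y, k=8):
    """Cohen's kappa with squared (quadratic) weights for two series of
    ordinal scores in 1..k. Returns NaN when the marginals admit no
    possible disagreement (analogous to the 'N/A' cells in Table 3)."""
    n = len(x)
    obs = Counter(zip(x, y))          # observed (score_x, score_y) pair counts
    mx, my = Counter(x), Counter(y)   # marginal counts per series
    observed = sum((i - j) ** 2 * obs[(i, j)]
                   for i in range(1, k + 1) for j in range(1, k + 1))
    expected = sum((i - j) ** 2 * mx[i] * my[j] / n
                   for i in range(1, k + 1) for j in range(1, k + 1))
    return float("nan") if expected == 0 else 1.0 - observed / expected

def paired_t_statistic(gypsum, digital):
    """t statistic for paired scores (df = n - 1); a positive mean
    difference means the gypsum record received the higher TWES score."""
    d = [g - s for g, s in zip(gypsum, digital)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))
```

For example, converting the grades '0', '2' and '4' yields the scores 1, 5 and 8; two identical score series give κW = 1.0, and a complete ordinal reversal gives κW = -1.0. A p-value for the t statistic would be read from a t distribution with n - 1 degrees of freedom (the Python standard library has no t CDF, hence only the statistic is computed here).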
All analyses were performed using R (version 3.6.1). Weighted kappa values were calculated using the Kappa function of the vcd library (version 1.4-7). Conversion of the TWES grades into numerical scores, as applied in this investigation (TWES grades as per Wetselaar & Lobbezoo, 2016). RESULTS: Table 2 provides a combined overview of the TWES scores for all ten patient records at the occlusal/incisal, the buccal and the palatal/lingual surfaces. The patient records showed the presence of significant amounts of tooth wear at all teeth. The majority of the scores at the occlusal/incisal surfaces were between TWES 2 (showing wear with dentine exposure and loss of clinical crown height <1/3) and TWES 3b (wear with dentine exposure and loss of clinical crown height >1/2-2/3). Twenty-one teeth included in the patient records were scored TWES 4, presenting with dentine exposure and the loss of clinical crown height of >2/3. Descriptives of tooth wear scores using the TWES at all tooth surfaces, measured on gypsum casts (n = 10) Details of the levels of intra-observer agreement (O1) and interobserver agreement (O1-O2) (kappa scores) for the consecutive scoring of tooth wear applying the TWES on the gypsum cast records and the digital intra-oral scan records included in this investigation are presented in Table 3. Table 3 also provides information relating to the comparative evaluation between the use of gypsum cast records and digital greyscale intra-oral scan records with regard to the reproducibility of tooth wear scoring with the TWES. For the grading of the overall occlusal/incisal surfaces using gypsum cast records, the levels of intra-observer agreement (O1) and interobserver agreement (O1-O2) were significantly higher compared with the agreement in the scoring of the same surfaces using the digital greyscale intra-oral scan records (P < .001 and P < .001, respectively).
For the grading of the overall buccal and palatal/lingual surfaces, other than significantly higher levels of O1 agreement in the scoring of the buccal surfaces using gypsum cast records (P = .013) and higher O1-O2 agreement in the scoring of the palatal/lingual surfaces with gypsum cast records (P = .043), no other significant difference was found for the effect of the type of record used on the reliability of scoring with the TWES. Intra- and interobserver agreements (kappa scores) using the TWES on gypsum cast records and the digital intra-oral scans Cohen's kappa (κW) of intra- and interobserver measurements per location (O1 = Observer 1, O2 = Observer 2) Differences between the kappa scores on gypsum cast records versus digital intra-oral scans, expressed with P-value and 95% CI. N/A: no statistical test was possible. Bold denotes a value that was statistically significant. Measurement showed a perfect agreement on a single score. Table 4 provides information about the effect of the type of record on the tooth wear score. This was expressed as the mean difference in the tooth wear grading on gypsum casts and the digital greyscale intra-oral scan records using the TWES. For the overall scores at the occlusal/incisal surfaces, grading of the gypsum casts culminated in significantly higher TWES scores compared with the use of the digital greyscale intra-oral scan records (P < .001; 95% CI = [0.084…0.272]). However, the overall scores at the buccal and palatal/lingual surfaces showed significantly higher values using the digital intra-oral scan records when undertaking tooth wear grading than with the use of gypsum cast records (P = .009; 95% CI = [−0.294…−0.042] and P = .001; 95% CI = [−0.342…−0.084], respectively). Differences in TWES scores between measurements on gypsum casts and digital intra-oral scans The table presents the mean difference between tooth wear gradings on gypsum models versus digital scans using the TWES, together with the P-value and the 95% CI.
To test the differences, the TWES index was converted into an 8-point scale (0 = 1, 1a = 2, 1b = 3, 1c = 4, 2 = 5, 3a = 6, 3b = 7, and 4 = 8). A positive score means a higher tooth wear score on the gypsum models compared to the digital scan records. Bold denotes a value that was statistically significant. DISCUSSION: This study has reported high levels of agreement (both intra- and interobserver) in the scoring of the occlusal/incisal surfaces using gypsum cast records, applying the 8-point grading scale of the TWES on the dental records of ten randomly selected patients with signs of moderate to severe tooth wear. The superiority of using gypsum cast records compared with digital greyscale intra-oral scan records at the occlusal/incisal surfaces was statistically significant. Moreover, significantly higher tooth wear scores were recorded when applying the gypsum cast records for the grading of the occlusal/incisal surfaces, whereas the opposite was reported for the buccal/palatal surfaces. As with the present investigation, several previous studies have also reported favourable reliability applying the 8-point occlusal/incisal grading scale of the TWES for the assessment of worn occlusal/incisal surfaces using traditional dental casts. 17 , 19 , 20 However, in each of these previous investigations, Intraclass Correlation Coefficients (ICCs) were used to determine reliability. ICCs have been developed for the analysis of continuous outcomes. Furthermore, given that the results of the ICC calculation may be significantly affected by the choice to investigate agreement (as in this case, rather than consistency), the decision was taken to use weighted kappa scores.
Although tooth wear assessment using the TWES chairside has been shown to be more reliable than assessments carried out using dental cast records alone, 19 reliability investigations have shown the outcomes offered by the use of intra-oral photographs for occlusal/incisal grading to be comparable to the use of gypsum casts. 17 The presence or absence of initial dentine exposure will, however, be more challenging to ascertain using dental casts alone, 9 as the identification of the visual colour changes and subtle tactile alterations at the dental hard tissues that accompany the wear process (and are often associated with the early stages of tooth wear) may not be as readily detectable as with chairside assessment. As a limitation of the present study, no patient records were included from cases demonstrating lower levels of tooth wear, or no signs of tooth wear. Furthermore, yellow-coloured Type III dental stone was used for the fabrication of the cast records. Whilst Type III dental stone is intended for the construction of dental casts, the use of a Type IV gypsum material (typically used for the fabrication of dental dies), which can offer higher abrasion resistance and possibly finer surface detail, may have had an impact on the observations reported. This may be an area for future investigation, as may be the influence of the colour of the gypsum product on the scoring outcomes. In the current study, using the gypsum cast records, lower levels of intra- and interobserver agreement were reported with the scoring of tooth wear at the occlusal/incisal surfaces of the posterior teeth than at the anterior teeth. Given the practical application of an 8-point ordinal scale for the scoring of the occlusal/incisal surfaces, with multiple options available and the subtle differences especially between the various sub-scales of the TWES, some variation in the scoring between consecutive assessments (both intra- and interexaminer) is perhaps inevitable.
The results of this study also showed comparatively higher levels of intra-observer agreement with the scoring of the posterior teeth compared with the anterior teeth when applying the digital greyscale intra-oral scan records. This observation was independent of the surface scored. Digital intra-oral scans offer the opportunity for the assessor to view the records in multiple directions and also allow zooming in on areas of further interest; however, unlike gypsum casts, they do not permit any tactile assessment. Digital models in greyscale (black-white, as in this investigation) do not permit adequate visualisation of the hard tissue colour changes, which may be relevant for the accurate assessment of less severe patterns of tooth wear, or of tooth wear at the non-occluding surfaces of the anterior teeth, as discussed above. Although the use of coloured 3D scans may help improve this aspect and permit the visualisation of exposed dentine, the currently available coloured scans appear to provide a sub-optimal contrast of the tooth surfaces. The need for the visual assessment of the colour changes that accompany the tooth wear process may have accounted for the observations at the buccal and palatal/lingual surfaces of the anterior teeth; however, the precise reason why the use of digital greyscale intra-oral scans resulted in higher tooth wear scores at the anterior buccal and palatal/lingual surfaces is not known. In this investigation, where the buccal and palatal/lingual surfaces were graded using the 3-point ordinal scale of the TWES, in general, lower levels of agreement were described compared with the assessments undertaken at the occlusal/incisal surfaces. This observation was independent of the type of patient record used.
However, some caution needs to be applied with the interpretation of the data attained for the scoring of the buccal and palatal/lingual surfaces, as on occasion, exceptionally high levels of agreement (κW = 1.0) were reported for the anterior teeth included within the sample (Table 3). In general, the Cohen's kappa score requires further consideration if one outcome is extremely dominant and other variables are only encountered sporadically. Furthermore, it may not be appropriate to compare the outcomes in agreement at the differing surfaces when using the 8-point ordinal scale at one type of surface and the 3-point scale at another. Previous investigations have also reported considerably lower reliability scores for the grading of non-occlusal/non-incisal surfaces using the TWES on dental casts. 17 , 19 , 20 These findings have been postulated to be accounted for by the levels of training the observers may have received to carry out appropriate evaluations at these surfaces, or as a possible reflection of a flaw of the TWES grading system itself when applied at such surfaces. 19 Recoding the TWES into a numerical scale and subsequently analysing the differences between gypsum and digital scores for the purpose of investigating the effect of the type of record on the scoring is an approach that may be questioned. The recoding process silently assumes the difference between any two consecutive grades of the TWES to be of the same size. However, this is not necessarily the case. Two alternatives to the approach applied in this investigation were considered. Firstly, an extension of the McNemar test, the McNemar-Bowker test, was considered; however, due to the large number of categories in relation to the size of the study, this analysis was not effective. For the second alternative, the Wilcoxon signed-rank test was applied.
The latter test, whilst suitable for comparing the gypsum cast and digital intra-oral scan scores, is not able to provide a clinically interpretable estimation of the differences between the scores. However, the Wilcoxon signed-rank test was applied to perform a sensitivity analysis, and the outcomes were compared with the P-values attained from the paired t tests. In all cases, the P-values reported were similar. A situation in which one test indicated a statistically significant difference while the other did not was never observed. Consequently, the authors considered the more easily interpretable paired t test to offer a level of reliability that was deemed sufficient for the purpose of undertaking the analysis. There are some further limitations with the current study. Previous investigations (often using other clinical tooth wear indices) have reported challenges with the accurate grading of early tooth wear using study casts, 15 clinical photographs 23 or both. 24 The clinical background of the observers has also been shown to influence the outcomes of scoring tooth wear using study casts and photographs. 24 In the present study, both observers were of the same discipline. Furthermore, when considering the effect of the type of record on the scoring, only the outcomes of a single observer's assessments were used (O1). The impact of the resolution offered by the intra-oral scanning device used in this investigation is also not known. Although the merits of occlusal/incisal grading using the TWES on gypsum casts have been highlighted, compliance with the taking of study models in the primary care sector to monitor wear has been shown to be relatively low. 25 Some caution is also required when undertaking assessments of tooth wear using sequential gypsum casts, due to the risks of distortion of the dental materials used, and the effect of the actual dental material(s) selected.
26 Based on the results of this investigation, it may also be challenging to make accurate comparisons between consecutive gypsum cast records and digital intra-oral scans. In the future, with the increasing popularity of intra-oral scanners in dental practice, some clinicians may preferentially choose to use digital scans/models for the purpose of sequentially monitoring tooth wear, to overcome the challenges associated with traditional gypsum models, including storage. The use of an intra-oral scanner may offer the scope to monitor tooth wear progression consistently and accurately, 27 , 28 , 29 inclusive of the use of subtraction techniques that have been more recently reported. 30 This may help overcome some of the drawbacks commonly associated with the fabrication of gypsum dental casts. However, as there are some clear barriers to the current routine use of intra-oral scanners in the primary care setting (including economic factors), the importance of using an appropriate tooth wear index to monitor the progression of wear is likely to remain, at least in the short to medium term. CONCLUSIONS: It was concluded that the scores obtained with the grading scales of the TWES on gypsum casts can offer reliability, especially for the grading of the occlusal/incisal surfaces of teeth with signs of moderate to severe wear. The level of reproducibility offered using digital greyscale intra-oral scan records to carry out tooth wear assessments with the TWES was generally inferior to that offered by the use of gypsum casts. CONFLICT OF INTEREST: None of the authors have any conflicts of interest to declare. AUTHORS' CONTRIBUTION: SB Mehta contributed to data interpretation, drafted and critically revised the manuscript. EM Bronkhorst contributed to design and data interpretation, performed all statistical analyses and critically revised the manuscript. L. Crins contributed to data interpretation and critically revised the manuscript. P.
Wetselaar contributed to conception, design and data collection, and critically revised the manuscript. MCDNJM Huysmans contributed to conception, design and data interpretation, and critically revised the manuscript. BAC Loomans is the project leader of the Radboud Tooth Wear Project, contributed to conception, design, enrolment of patients, data acquisition and interpretation, and critically revised the manuscript. All authors gave their final approval and agree to be accountable for all aspects of the work.
ABSTRACT: Background: The Tooth Wear Evaluation System (TWES) is a type of tooth wear index. To date, there is a lack of data comparing the reliability of the application of this index on gypsum cast records and digital greyscale intra-oral scan records. Methods: Records for 10 patients with moderate to severe tooth wear (TWES ≥ 2) were randomly selected from a larger clinical trial. TWES grading of the occlusal/incisal, buccal and palatal/lingual surfaces was performed to determine the levels of intra- and interobserver agreement. Intra-observer reproducibility was based on the findings of one examiner only. For the interobserver reproducibility, the findings of two examiners were considered. One set of models/records was used per patient. Cohen's weighted kappa (κW) was used to ascertain agreement between and within the observers. Comparison of agreement was performed using t tests (P < .05). Results: For the scoring of the total occlusal/incisal surfaces, the overall levels of intra- and interobserver agreement were significantly higher using the gypsum cast records than with the digital greyscale intra-oral scan records (P < .001 and P < .001, respectively). For the overall buccal surfaces, a significant difference was only found in the intra-observer agreement using gypsum casts (P = .013). For the palatal/lingual surfaces, a significant difference was only reported in the interobserver agreement using gypsum casts (P = .043). At the occlusal/incisal surfaces, grading performed using gypsum casts culminated in significantly higher TWES scores than with the use of the digital greyscale intra-oral scans (P < .001). At the buccal and palatal/lingual surfaces, significantly higher wear scores were obtained using digital greyscale intra-oral scan records (P < .009). Conclusions: The TWES can offer a reliable means for the scoring of worn occlusal/incisal surfaces using gypsum casts.
The reliability offered by digital greyscale intra-oral scans for consecutive scoring was, in general, inferior.
BACKGROUND: In 2018, an estimated mean global prevalence of erosive tooth wear in permanent teeth of between 20% and 45% was described. 1 Tooth wear can result in a variety of dentofacially related symptoms, including aesthetic impairment, sensitivity, pain, discomfort and/or functional problems. 2 , 3 More severe forms of tooth wear may also have an adverse impact on a patient's quality of life. 4 , 5 , 6 Restorative intervention is sometimes prescribed for patients with tooth wear. 3 However, treatment (with a direct resin composite technique, or indirect techniques) may prove to be costly and complex. 7 There may also be some ambiguity about the optimal timing for restorative intervention. 3 , 8 Whilst counselling and monitoring are advised for all patients with pathological tooth wear, restorative intervention may be indicated when the presenting tooth wear is a clear concern for the patient and/or the clinician, where there may be functional or aesthetic concerns and/or symptoms of pain or discomfort. 3 However, definitive dental restorations for tooth wear management should not be prescribed until any active dental pathology has been effectively managed and full patient commitment is available. 3 Where the presenting pathological tooth wear is not progressive and in the absence of any further concerns, restorative intervention may not be necessary, and management with vigilant monitoring and counselling may be continued. 3 Determining the most appropriate time to prescribe restorative intervention should also consider the progression of the wear process. 3 The need for pragmatic and reliable means to assess the rate of tooth wear progression (between appointments, as well as between different clinicians) is therefore relevant.
Tooth wear assessment is most frequently undertaken by periodic clinical (chairside) assessment; however, photographs, serial (consecutive) dental casts and serial digital 3D scans may also be used, each with their own limitations.3,9 A plethora of tooth wear indices have been introduced for scoring the severity of the tooth wear present,10-15 but universal acceptance of a grading scale for erosive tooth wear in general dental practice is lacking.16 A clinical tooth wear index should ideally offer the potential to undertake scoring using indirect methods such as intra-oral photographs, traditional gypsum dental casts and digital intra-oral scans,17 thereby enabling some extra-oral assessment. This may be particularly beneficial when the available clinical chairside time is constrained. The Tooth Wear Evaluation System (TWES) is a modular clinical guideline that can be used for the assessment of tooth wear and to assist with diagnosis and patient management.14,15,18 The TWES was revised in 2020 and a new taxonomy was proposed: TWES 2.0.18 The TWES index in general includes an 8-point occlusal/incisal ordinal grading scale and a 3-point non-occlusal/non-incisal grading scale for the scoring of the respective surfaces. The TWES has been reported to offer adequate levels of reliability with tooth wear grading when applied clinically, as well as when using dental cast records.19 Furthermore, when undertaking occlusal/incisal surface grading using dental casts and intra-oral photographic records, the TWES has been described as offering the necessary sensitivity to enable the detection of changes in the pattern of tooth wear on a sequential basis and thereby help monitor disease progression.17,20
The aim of this study was to undertake a comparative evaluation between the use of gypsum casts and digital greyscale (black-white) intra-oral scan records with respect to the reliability of grading tooth wear using the TWES, applied to patient records demonstrative of moderate to severe forms of tooth wear. CONCLUSIONS: It was concluded that the scores obtained with the grading scales of the TWES on gypsum casts can offer reliability, especially for the grading of the occlusal/incisal surfaces of teeth with signs of moderate to severe wear. The level of reproducibility offered when using digital greyscale intra-oral scan records to carry out tooth wear assessments with the TWES was generally inferior to that offered by the use of gypsum casts.
[ "assessment tools", "dental casts", "digital casts", "grading scales", "reliability", "tooth wear", "tooth wear evaluation system (TWES)" ]
[ "Calcium Sulfate", "Humans", "Reproducibility of Results", "Tooth Attrition", "Tooth Wear" ]
MYLIP p.N342S polymorphism is not associated with lipid profile in the Brazilian population.
22741812
A recent study investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants of this locus. The p.N342S polymorphism was identified as the underlying functional variant accounting for one of the previous genome-wide association study signals, and the N342 allele was associated with higher cholesterol concentrations in Mexican dyslipidemic individuals. To date, there has been no further evaluation of this genotype-phenotype association in the literature. In this scenario, and because MYLIP is a possible pharmacotherapeutic target in dyslipidemia, the main aim of this study was to assess the influence of the MYLIP p.N342S polymorphism on lipid profile in Brazilian individuals.
BACKGROUND
1295 subjects of the general population and 1425 consecutive patients submitted to coronary angiography were selected. General characteristics, biochemical tests, blood pressures, pulse wave velocity, and coronary artery disease scores were analyzed. Genotypes for the MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism were detected by high resolution melting analysis.
METHODS
No association of the MYLIP rs9370867 genotypes with lipid profile, hemodynamic data, or coronary angiographic data was found. Analysis stratified by hyperlipidemia, gender, and ethnicity was also performed, and the sub-groups presented similar results. In both the general population and patient samples, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity. In the general population, subjects carrying GG genotypes had higher systolic blood pressure (BP), diastolic BP, and mean BP values (129.0 ± 23.3; 84.9 ± 14.6; 99.5 ± 16.8 mmHg) compared with subjects carrying AA genotypes (123.7 ± 19.5; 81.6 ± 11.8; 95.6 ± 13.6 mmHg) (p = 0.01, p = 0.02, and p = 0.01, respectively), even after adjustment for covariates. However, in the analysis stratified by ethnicity, this finding was not replicated, and there is no evidence that the polymorphism influences BP.
RESULTS
Our findings indicate that association studies involving this MYLIP variant can present distinct results according to the studied population. At present, further studies are needed to confirm whether the MYLIP p.N342S polymorphism is functional, and to identify other functional markers within this gene.
CONCLUSION
[ "Adult", "Aged", "Amino Acid Substitution", "Brazil", "Coronary Angiography", "Coronary Artery Disease", "Female", "Genetic Association Studies", "Humans", "Hyperlipidemias", "Hypertension", "Lipids", "Male", "Middle Aged", "Phenotype", "Polymorphism, Single Nucleotide", "Radionuclide Imaging", "Sequence Analysis, DNA", "Ubiquitin-Protein Ligases", "Urban Population", "Vascular Stiffness" ]
3439349
Background
Lipid profile disorders have been significantly associated with risk of cardiovascular disease (CVD), which is also influenced by genetic factors, hypertension, type 2 diabetes mellitus, obesity, and smoking. CVD is the main cause of morbidity and mortality in developed countries, and its financial cost is enormous. Thus, guidelines from the National Cholesterol Education Program (NCEP) rely on low-density lipoprotein cholesterol (LDL-C) for the prevention of CVD [1-6]. A conventional lipid panel reports several parameters, including total cholesterol (TC), LDL-C, high-density lipoprotein cholesterol (HDL-C), and triglycerides. Of these, the NCEP and the American Heart Association recommend using LDL-C as a primary target of therapy in conjunction with assessing cardiovascular risk factors. The Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III, or ATP III - NCEP) updated clinical guidelines for cholesterol testing, such as high TC (≥ 240 mg/dL), high LDL-C (≥ 160 mg/dL), and low HDL-C (< 40 mg/dL) [3,7,8]. Population genetic and epidemiological studies could help to assess the etiologic role of the lipid profile in CVD, and novel genetic determinants of blood lipids can also provide new insights into biological pathways and identify novel therapeutic targets. In this way, recent genome-wide association studies (GWAS) have identified genetic loci contributing to inter-individual variation in serum lipid concentrations [9-12]. Some GWAS, using cohorts of mixed European descent, identified non-coding polymorphisms in the region of the MYLIP gene that were associated with LDL-C concentrations [12-14]. The MYLIP functional variant and the mechanistic basis of these associations were recently postulated by Weissglas-Volkov et al. [15].
The MYLIP gene encodes a regulator of the LDL receptor pathway for cellular cholesterol uptake called MYLIP (myosin regulatory light chain interacting protein; also known as IDOL). Weissglas-Volkov et al. investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants. They identified the rs9370867 non-synonymous polymorphism (p.N342S) as the underlying functional variant accounting for one of the previous significant GWAS signals and associated the N342 allele with higher TC concentrations in Mexican dyslipidemic individuals [15]. To date, there has been no further evaluation of this genotype-phenotype association (MYLIP p.N342S - lipid profile). In this scenario, the main aim of this study was to assess the influence of the MYLIP polymorphism on lipid profile in Brazilian individuals.
Methods
General population
One thousand two hundred ninety-five subjects of the general urban population were selected from Vitoria, Brazil [16]. The study design was based on cross-sectional research methodology and was developed by means of surveying and analyzing socioeconomic and health data in a probabilistic sample of residents from the municipality of Vitoria, Espirito Santo, Brazil. The sampling plan had the objective of ensuring that the research would be socioeconomically, geographically, and demographically representative of the residents of this municipality. The study protocol was approved by the involved Institutional Ethics Committees and written informed consent was obtained from all participants prior to entering the study.
Patients submitted to coronary angiography
One thousand four hundred twenty-five consecutive patients submitted to coronary angiography for the first time, to investigate suspected coronary artery disease, were selected at the Laboratory of Hemodynamics, Heart Institute (Incor), Sao Paulo, Brazil. All patients had a clinical diagnosis of angina pectoris and stable angina. No patient enrolled in this study was currently experiencing an acute coronary syndrome.
Patients with previous acute ischemic events, heart failure classes III-IV, hepatic dysfunction, familial hypercholesterolemia, previous heart or kidney transplantation, or under antiviral treatment were excluded [17-19]. Patients answered a clinical questionnaire that covered questions regarding personal medical history, family antecedents of CVD, sedentarism, smoking status, hypertension, obesity, dyslipidemia, diabetes, and current treatment. All patients signed an informed consent form and the study was approved by the local Ethics Committee.
Demographic data and laboratory tests
Weight and height were measured according to a standard protocol, and body mass index (BMI) was calculated. Individuals answered a clinical questionnaire that covered questions regarding smoking status and current medical treatment. Individuals who had ever smoked more than five cigarettes per day for the last year were classified as smokers [1,20].
Ethnicity was classified with a validated questionnaire for the Brazilian population according to a set of phenotypic characteristics (such as skin color, hair texture, shape of the nose and aspect of the lips), and individuals were classified as White, Intermediate (meaning Brown, Pardo in Portuguese), Black, Amerindian or Oriental descent [16,21,22]. Triglycerides (TG), TC, HDL-C, LDL-C, and glucose were evaluated by standard techniques in 12-h fasting blood samples. Diabetes mellitus was diagnosed by the presence of fasting glucose ≥ 126 mg/dL or the use of antidiabetic drugs [23]. Hyperlipidemia was defined as TC ≥ 240 mg/dL, LDL-C ≥ 160 mg/dL, and/or use of hypolipidemic drugs [7].
Blood pressure phenotypes
Blood pressure was measured in the sitting position with the use of a standard mercury sphygmomanometer on the left arm after 5 min of rest.
The first and fifth phases of Korotkoff sounds were used for systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively. The SBP and DBP were calculated from two readings taken a minimal interval of 10 min apart. Hypertension was defined as mean SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg and/or antihypertensive drug use [24]. The mean blood pressure (MBP) was calculated as the DBP plus one-third of the pulse pressure.
Pulse wave velocity and arterial stiffness
Carotid-femoral pulse wave velocity (PWV) was analyzed with an automatic device (Complior®; Colson) by an experienced observer blinded to clinical characteristics. Briefly, common carotid artery and femoral artery pressure waveforms were recorded non-invasively using a pressure-sensitive transducer (TY-306-Fukuda®; Fukuda; Tokyo, Japan). The distance between the recording sites (D) was measured, and PWV was automatically calculated as PWV = D/t, where t is the pulse transit time. Measurements were repeated over 10 different cardiac cycles, and the mean was used for the final analysis. The validation and reproducibility of this method have been previously described, and increased arterial stiffness was defined as PWV ≥ 12 m/s [16,25].
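The hemodynamic quantities described above reduce to simple arithmetic. The sketch below (illustrative helper names, not code from the study) computes mean blood pressure as DBP plus one-third of the pulse pressure, and carotid-femoral PWV as distance over transit time averaged across repeated cardiac cycles, with the ≥ 12 m/s stiffness cut-off used in this study. Whether the device averages velocities or transit times is an assumption here.

```python
STIFFNESS_CUTOFF_M_S = 12.0  # increased arterial stiffness if PWV >= 12 m/s

def mean_bp(sbp, dbp):
    """MBP (mmHg) = DBP + one-third of the pulse pressure (SBP - DBP)."""
    return dbp + (sbp - dbp) / 3.0

def pulse_wave_velocity(distance_m, transit_times_s):
    """PWV = D / t (m/s), averaged over repeated cardiac cycles."""
    velocities = [distance_m / t for t in transit_times_s]
    return sum(velocities) / len(velocities)

def increased_stiffness(pwv):
    return pwv >= STIFFNESS_CUTOFF_M_S
```

As a sanity check against the Results, the GG group's reported SBP/DBP of 129.0/84.9 mmHg gives an MBP of about 99.6 mmHg, close to the reported 99.5.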
Coronary artery disease scores
Twenty coronary segments were scored: each vessel was divided into three segments (proximal, medial, and distal), except for the secondary branches of the right coronary artery (posterior ventricular and posterior descending), which were divided into proximal and distal segments. Stenosis higher than 50% in any coronary segment was graded 1 point, and the sum of points for all 20 segments constituted the Extension Score. Lesion severity was scored as follows: none and irregularities, 0 points; <50%, 0.3 points; 50-70%, 0.6 points; >70-90%, 0.8 points; and >90-100%, 0.95 points. The Severity Score was calculated as the sum of points for all 20 coronary segments [17].
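The two angiographic scores can be expressed directly in code. This sketch (hypothetical function names; boundary handling at exactly 50%, 70% and 90% is an interpretation of the quoted ranges) maps each segment's stenosis percentage to the point scheme above: the Extension Score counts segments with stenosis above 50%, and the Severity Score sums the graded points over all 20 segments.

```python
def severity_points(stenosis_pct):
    """Points for one segment, following the grading quoted in the text."""
    if stenosis_pct <= 0:    # none / irregularities
        return 0.0
    if stenosis_pct < 50:
        return 0.3
    if stenosis_pct <= 70:   # 50-70%
        return 0.6
    if stenosis_pct <= 90:   # >70-90%
        return 0.8
    return 0.95              # >90-100%

def extension_score(segments):
    """One point per segment with stenosis higher than 50%."""
    return sum(1 for s in segments if s > 50)

def severity_score(segments):
    """Sum of severity points over all scored segments."""
    return sum(severity_points(s) for s in segments)
```

For example, a patient with stenoses of 60%, 95% and 40% in three segments (and none elsewhere) has an Extension Score of 2 and a Severity Score of 1.85.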
Genotyping
Genomic DNA from subjects was extracted from peripheral blood following a standard salting-out procedure. Genotypes for the MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism were detected by polymerase chain reaction (PCR) followed by high resolution melting (HRM) analysis with the Rotor Gene 6000® instrument (Qiagen, Courtaboeuf, France) [26,27]. The QIAgility® (Qiagen, Courtaboeuf, France), an automated instrument, was used according to instructions to optimize the sample preparation step. One disc is able to genotype 96 samples for this polymorphism [28]. Amplification of the 80-base-pair fragment was performed using the sense primer 5'-TTGTGGACCTCGTTTCAAGA-3' and the antisense primer 5'-GCTGCAGTTCATGCTGCT-3' for the rs9370867. A 40-cycle PCR was carried out under the following conditions: initial denaturation of the template DNA at 94°C for 120 s, then denaturation at 94°C for 20 s, annealing at 53.4°C for 20 s, and extension at 72°C for 22 s. PCR was performed using a 10 μL reaction solution (10 mM Tris-HCl, 50 mM KCl, pH 9.0; 2.0 mM MgCl2; 200 μM of each dNTP; 0.5 U Taq DNA Polymerase; 200 nM of each primer; 10 ng of genomic DNA template) with the addition of the fluorescent DNA-intercalating dye SYTO9® (1.5 μM; Invitrogen, Carlsbad, USA). In the HRM phase, the Rotor Gene 6000® measured the fluorescence at each 0.1°C temperature increase over the range of 73-85°C. Melting curves were generated by the decrease in fluorescence with the increase in temperature; on analysis, nucleotide changes result in three different curve patterns (Figure 1). Samples of the three observed curve patterns were analyzed using bidirectional sequencing as a validation procedure (ABI Terminator Sequencing Kit® and ABI 3500XL Sequencer® - Applied Biosystems, Foster City, CA, USA). The two methods gave identical results in all tests. The wild-type, heterozygous and mutant homozygous genotypes for the rs9370867 were easily discernible by HRM analysis.
In addition, 4% of the samples were randomly selected and reanalyzed as quality controls, giving identical results.
Figure 1. Graphs of the MYLIP rs9370867 (p.N342S, c.G1025A) nucleotide changes resulting in different curve patterns on high resolution melting analysis. A: Graph of normalized fluorescence by temperature. B: Graph of normalized fluorescence (based on genotype 2) by temperature. C: Graph of melting curve analysis (fluorescence differential/temperature differential). 1: wild-type genotype (GG); 2: heterozygous genotype (GA); 3: mutant homozygous genotype (AA).
Statistical analysis
Categorical variables are presented as percentages, while continuous variables are presented as mean ± standard deviation. The chi-square test was performed for comparative analysis of gender, ethnicity, hypertension, diabetes, hyperlipidemia, increased arterial stiffness, and smoking frequencies according to the MYLIP polymorphism. The chi-square test was also performed for Hardy-Weinberg equilibrium. ANOVA was performed to compare the means of age, BMI, biochemical data, blood pressures, PWV, and angiographic data according to the MYLIP polymorphism. Tukey's post hoc test was performed to identify the differing group. Biochemical data, blood pressures, and angiographic data were adjusted for age, gender, and ethnicity. PWV was adjusted for age, gender, MBP, and ethnicity.
The analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.'s inclusion and exclusion criteria [8,15]. Fasting serum TG > 200 mg/dL was required for the cases and TG < 150 mg/dL for the controls, and subjects with morbid obesity (BMI > 40 kg/m2) or type 2 diabetes mellitus were excluded. The subjects were classified as high-TC if their serum TC levels were ≥ 240 mg/dL and as normal-TC if their serum TC levels were < 240 mg/dL in the absence of lipid-lowering medication. The subjects were also classified as combined hyperlipidemia if their serum TC and TG levels were ≥ 240 mg/dL and > 200 mg/dL, and as controls if TC and TG levels were < 240 mg/dL and < 150 mg/dL, respectively [8,15]. All statistical analyses were carried out using the SPSS software (v. 16.0), with the level of significance set at p < 0.05.
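The inclusion/exclusion logic of the stratified analysis can be summarised as a small classifier. The function below is an illustrative reading of the criteria quoted above (thresholds in mg/dL, BMI in kg/m²; the function name and the treatment of intermediate TG values as excluded are assumptions), not code from the study.

```python
def classify_lipid_stratum(tc, tg, bmi, diabetic, on_lipid_lowering):
    """Assign a subject to a stratum of the hyperlipidemia analysis,
    or return None when the subject is excluded.

    tc, tg : fasting serum total cholesterol and triglycerides (mg/dL)
    bmi    : body mass index (kg/m^2)
    """
    # exclusions: morbid obesity, type 2 diabetes, lipid-lowering drugs
    if bmi > 40 or diabetic or on_lipid_lowering:
        return None
    if tc >= 240 and tg > 200:
        return "combined hyperlipidemia"
    if tc < 240 and tg < 150:
        return "control"
    # intermediate profiles fall outside both strata
    return None
```

Subjects with TG between 150 and 200 mg/dL satisfy neither the case nor the control TG criterion, so they are left unclassified here.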
Results
General characteristics according to MYLIP polymorphism
Table 1 summarizes the general characteristics of both studied samples (Table 1: General characteristics according to MYLIP rs9370867 genotypes). aEthnicity was categorized as White, Black, Intermediate (persons with admixture between White and Black), and other (Amerindian and Oriental descent). bHypertension was defined as mean systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg and/or use of anti-hypertensive drugs. cDiabetes was defined as fasting glucose ≥ 126 mg/dL and/or use of hypoglycemic drugs. dHyperlipidemia was defined as total cholesterol ≥ 240 mg/dL, low-density lipoprotein cholesterol ≥ 160 mg/dL, and/or use of hypolipidemic drugs. eIncreased arterial stiffness was defined as pulse wave velocity ≥ 12 m/s. fIndividuals who had smoked more than five cigarettes per day over the last year were classified as smokers. gAdjusted for age and gender. No difference in the frequencies of hyperlipidemia, diabetes, increased arterial stiffness, or smoking status according to MYLIP polymorphism was found; only hypertension frequency showed a trend (p = 0.05) in the general population (Table 1). In both the general population and patient samples, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity (Table 1). In the general population, the frequencies of the MYLIP rs9370867 A variant allele and of the homozygous AA genotype were higher in Whites (45.8% and 22.1%) than in Blacks (15.2% and 3.1%) (p < 0.001 for both). In the patients submitted to coronary angiography, the corresponding frequencies were likewise higher in Whites (44.6% and 20.7%) than in Blacks (21.3% and 5.9%) (p < 0.001 for both). The genotypic distribution of the MYLIP rs9370867 polymorphism was in Hardy-Weinberg equilibrium within each ethnic group.
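The per-ethnic-group Hardy-Weinberg check reported here can be sketched as a chi-square goodness-of-fit test for a biallelic SNP such as rs9370867. This is a minimal stdlib-only sketch; the genotype counts in the example are illustrative, not the study's data.

```python
# Minimal Hardy-Weinberg equilibrium chi-square sketch for a biallelic
# SNP (GG/GA/AA counts). Counts are illustrative, not the study's data.
import math

def hwe_chi_square(n_gg, n_ga, n_aa):
    """Return (chi-square statistic, p-value) for HWE with 1 df."""
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)           # G allele frequency
    q = 1 - p                                  # A allele frequency
    observed = [n_gg, n_ga, n_aa]
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(x / 2)), so no SciPy dependency is needed.
    return stat, math.erfc(math.sqrt(stat / 2))

# A sample exactly in HWE (allele frequency 0.6) gives p close to 1:
stat, p_value = hwe_chi_square(360, 480, 160)
```

The closed form `erfc(sqrt(x / 2))` is the exact 1-df chi-square tail probability, which keeps the sketch dependency-free; with real data one would typically call a library routine instead.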
Biochemical, hemodynamic, and angiographic data according to MYLIP polymorphism
Table 2 summarizes the biochemical, hemodynamic, and angiographic data of both studied samples (Table 2: Biochemical, hemodynamic, and angiographic data according to MYLIP rs9370867 genotypes). Continuous data are expressed as mean ± standard deviation; values with different superscript letters are significantly different (Tukey's post hoc test). HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol. Biochemical data, blood pressures, ejection fraction, and angiographic scores were adjusted for age, gender, and ethnicity; pulse wave velocity data were adjusted for age, gender, mean blood pressure, and ethnicity. There was no association of the MYLIP rs9370867 genotypes with TC, LDL-C, HDL-C, triglyceride, or glycemia values in either sample (Table 2). In the general population, subjects carrying the GG genotype had higher SBP, DBP, and MBP values (129.0 ± 23.3, 84.9 ± 14.6, and 99.5 ± 16.8 mmHg) than subjects carrying the AA genotype (123.7 ± 19.5, 81.6 ± 11.8, and 95.6 ± 13.6 mmHg) (p = 0.01, p = 0.02, and p = 0.01, respectively), even after adjustment for age, gender, and ethnicity (Table 2). This difference was not found in the patient sample. No association of the studied polymorphism with PWV values (available for the general population sample) or with angiographic data (extension and severity scores, available for the patient sample) was observed (Table 2).
Analysis stratified by hyperlipidemia, gender, and ethnicity
The analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.'s inclusion and exclusion criteria (see Methods for details) [8,15]. In this analysis, the sub-groups (controls and dyslipidemic subjects) showed the same results as the total sample in both the general population and patient samples, even after adjustment for covariates. In the sub-groups, no association of the variables with genotype was found, and no difference in the variant allele frequency between sub-groups was observed (p > 0.05). In addition, the analyses stratified by gender and ethnicity did not identify significant results in either sample.
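The genotype-frequency comparisons between groups reported in this section are Pearson chi-square tests on a contingency table (group x GG/GA/AA genotype). A minimal stdlib-only sketch, with purely illustrative counts:

```python
# Pearson chi-square sketch for a 2 x 3 contingency table
# (group x rs9370867 genotype). Counts are illustrative only.
import math

def chi_square_2x3(row_a, row_b):
    """Return (statistic, p-value) for a 2x3 table.

    For a 2x3 table df = (2-1) * (3-1) = 2, and the chi-square
    survival function with 2 degrees of freedom is exp(-x / 2).
    """
    table = [row_a, row_b]
    total = sum(map(sum, table))
    col_sums = [sum(col) for col in zip(*table)]
    stat = 0.0
    for row in table:
        row_sum = sum(row)
        for obs, col_sum in zip(row, col_sums):
            exp = row_sum * col_sum / total
            stat += (obs - exp) ** 2 / exp
    return stat, math.exp(-stat / 2)

# Hypothetical GG / GA / AA counts for two groups with very different
# genotype distributions yield a highly significant result:
stat, p = chi_square_2x3([300, 420, 180], [450, 160, 20])
```

With identical genotype distributions in the two rows the statistic is 0 and p = 1; the continuous-trait comparisons in Table 2 (ANOVA with Tukey's post hoc test) would use a different routine, e.g. a statistics library's one-way ANOVA.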
Conclusion
Our findings indicate that association studies involving this MYLIP variant can yield distinct results depending on the population studied. At present, further studies are needed to confirm whether the MYLIP p.N342S polymorphism is functional and to identify other functional markers at this locus.
[ "Background", "General population", "Patients submitted to coronary angiography", "Demographic data and laboratory tests", "Blood pressure phenotypes", "Pulse wave velocity and arterial stiffness", "Coronary artery disease scores", "Genotyping", "Statistical analysis", "General characteristics according to MYLIP polymorphism", "Biochemical, hemodynamic, and angiographic data according to MYLIP polymorphism", "Analysis stratified by hyperlipidemia, gender, and ethnicity", "Competing interests", "Authors' contributions" ]
[ "Lipid profile disorders have been significantly associated with risk of cardiovascular disease (CVD), which is also influenced by genetic factors, hypertension, type 2 diabetes mellitus, obesity, and smoking. CVD are the main cause of morbidity and mortality in developed countries and the financial cost is enormous. Thus, guidelines from the National Cholesterol Education Program (NCEP) rely on low-density lipoprotein cholesterol (LDL-C) for the prevention of CVD [1-6].\nA conventional lipid panel reports several parameters, including total cholesterol (TC), LDL-C, high-density lipoprotein cholesterol (HDL-C), and triglycerides. Of these, the NCEP and the American Heart Association recommend using LDL-C as a primary target of therapy in conjunction with assessing cardiovascular risk factors. The Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III, or ATP III - NCEP) updated clinical guidelines for cholesterol testing such as high TC (≥ 240 mg/dL), high LDL-C (≥ 160 mg/dL), and low HDL-C (< 40 mg/dL) [3,7,8].\nPopulation genetic and epidemiological studies could help to assess the etiologic role of lipid profile in CVD and, novel genetic determinants of blood lipids can also help to provide new insights into the biological pathways and identify novel therapeutic targets. In this way, current genome-wide association studies (GWAS) have identified genetic loci contributing to inter-individual variation in serum concentration of lipids [9-12]. Some GWAS, using cohorts of mixed European descent, identified non-coding polymorphisms in the region of the MYLIP gene that were associated with LDL-C concentrations [12-14]. The MYLIP functional variant and the mechanistic basis of these associations were recently postulated by Weissglas-Volkov et al. 
[15].\nMYLIP gene encodes a regulator of the LDL receptor pathway for cellular cholesterol uptake called MYLIP (myosin regulatory light chain interacting protein; also known as IDOL). Weissglas-Volkov et al. investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants. They identified the rs9370867 non-synonymous polymorphism (p.N342S) as the underlying functional variant accounting for one of the previous GWAS significant signals and associated N342 allele with higher TC concentrations in Mexican dyslipidemic individuals [15].\nTo date, there is no further evaluation on this genotype-phenotype association (MYLIP p.N342S – lipid profile). In this scenario, the main aim of this study was to assess the influence of the MYLIP polymorphism on lipid profile in Brazilian individuals.", "One thousand two hundred ninety-five subjects of the general urban population were selected from Vitoria, Brazil [16]. The study design was based on cross-sectional research methodology and was developed by means of surveying and analyzing socioeconomic and health data in a probabilistic sample of residents from the municipality of Vitoria, Espirito Santo, Brazil. The sampling plan had the objective of ensuring that the research would be socioeconomically, geographically, and demographically representative of the residents of this municipality. The study protocol was approved by the involved Institutional Ethics Committees and written informed consent was obtained from all participants prior to enter the study.", "One thousand four hundred twenty-five consecutive patients submitted to coronary angiography for the first time to study suggestive coronary artery disease etiology were selected at the Laboratory of Hemodinamics, Heart Institute (Incor), Sao Paulo, Brazil. All patients had a clinical diagnosis of angina pectoris and stable angina. No patient enrolled in this study was currently experiencing an acute coronary syndrome. 
Patients with previous acute ischemic events, heart failure classes III–IV, hepatic dysfunction, familiar hypercholesterolemia, previous heart or kidney transplantation, and in antiviral treatment were excluded [17-19]. Patients answered a clinical questionnaire that covered questions regarding personal medical history, family antecedents of CVD, sedentarism, smoking status, hypertension, obesity, dyslipidemia, diabetes, and current treatment. All patients signed an informed consent form and the study has been approved by the local Ethics committee.", "Weight and height were measured according to a standard protocol, and body mass index (BMI) was calculated. Individuals answered a clinical questionnaire that covered questions regarding smoking status and current medical treatment. Individuals who had ever smoked more than five cigarettes per day for the last year were classified as smokers [1,20]. Ethnicity was classified with a validated questionnaire for the Brazilian population according to a set of phenotypic characteristics (such as skin color, hair texture, shape of the nose and aspect of the lip) and individuals were classified as White, Intermediate (meaning Brown, Pardo in Portuguese), Black, Amerindian or Oriental descent [16,21,22].\nTriglycerides (TG), TC, HDL-C, LDL-C, and glucose were evaluated by standard techniques in 12-h fasting blood samples. Diabetes mellitus was diagnosed by the presence of fasting glucose ≥ 126 mg/dL or the use of antidiabetic drugs [23]. Hyperlipidemia was defined as TC ≥ 240 mg/dL, LDL-C ≥ 160 mg/dL, and/or use of hypolipidemic drugs [7].", "Blood pressure was measured in the sitting position with the use of a standard mercury sphygmomanometer on the left arm after 5 min rest. The first and fifth phases of Korotkoff sounds were used for systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively. The SBP and DBP were calculated from two readings with a minimal interval of 10 min apart. 
Hypertension was defined as mean SBP ≥140 mmHg and/or DBP ≥90 mm Hg and/or antihypertensive drug use [24]. The mean blood pressure (MBP) was calculated as the mean pulse pressure added to one-third of the DBP.", "Carotid-femoral pulse wave velocity (PWV) was analyzed with an automatic device (Complior®; Colson) by an experienced observer blinded to clinical characteristics. Briefly, common carotid artery and femoral artery pressure waveforms were recorded non-invasively using a pressure-sensitive transducer (TY-306-Fukuda®; Fukuda; Tokyo, Japan). The distance between the recording sites (D) was measured, and PWV was automatically calculated as PWV = D/t, where (t) means pulse transit time. Measurements were repeated over 10 different cardiac cycles, and the mean was used for the final analysis. The validation and its reproducibility have been previously described, and increased arterial stiffness was defined as PWV ≥ 12 m/s [16,25].", "Twenty coronary segments were scored: each vessel was divided into three segments (proximal, medial, and distal), except for the secondary branches of the right coronary artery (posterior ventricular and posterior descending), which were divided into proximal and distal segments. Stenosis higher than 50% in any coronary segment was graded 1 point and the sum of points for all 20 segments constituted the Extension Score. Lesion severity was calculated as follows: none and irregularities, 0 points; <50%, 0.3 points; 50–70%, 0.6 points; >70–90%, 0.8 points; and >90–100%, 0.95 points. The Severity Score was calculated through the sum of points for all 20 coronary segments [17].", "Genomic DNA from subjects was extracted from peripheral blood following standard salting-out procedure. Genotypes for the MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism was detected by polymerase chain reaction (PCR) followed by high resolution melting (HRM) analysis with the Rotor Gene 6000® instrument (Qiagen, Courtaboeuf, France) [26,27]. 
The QIAgility® (Qiagen, Courtaboeuf, France), an automated instrument, was used according to instructions to optimize the sample preparation step. One specific disc is able to genotype 96 samples for this polymorphism [28].\nAmplification of the fragment was performed using the primer sense 5’- TTGTGGACCTCGTTTCAAGA -3’ and antisense 5’- GCTGCAGTTCATGCTGCT -3’ (80 pairs base) for the rs9370867. A 40-cycle PCR was carried out with the following conditions: denaturation of the template DNA for first cycle of 94°C for 120 s, denaturation of 94°C for 20 s, annealing of 53.4°C for 20 s, and extension of 72°C for 22 s. PCR was performed using a 10 μL reactive solution (10 mM Tris–HCl, 50 mM KCl, pH 9.0; 2.0 mM MgCl2; 200 μM of each dNTP; 0.5 U Taq DNA Polymerase; 200 nM of each primer; 10 ng of genomic DNA template) with addition of fluorescent DNA-intercalating SYTO9® (1.5 μM; Invitrogen, Carlsbad, USA).\nIn the HRM phase, the Rotor Gene 6000® measured the fluorescence in each 0.1°C temperature increase in the range of 73-85°C. Melting curves were generated by the decrease in fluorescence with the increase in the temperature; and in analysis, nucleotide changes result in three different curve patterns (Figure 1). Samples of the three observed curves were analyzed using bidirectional sequencing as a validation procedure (ABI Terminator Sequencing Kit® and ABI 3500XL Sequencer® - Applied Biosystems, Foster City, CA, USA). The two methods gave identical results in all tests. The wild-type, heterozygous and mutant homozygous genotypes for the rs9370867 could be easily discernible by HRM analysis. In addition, 4% of the samples were randomly selected and reanalyzed as quality controls and gave identical results.\nGraphs of the MYLIP rs9370867 (p.N342S, c.G1025A) nucleotide changes results in different curve patterns using high resolution melting analysis. A: Graph of normalized fluorescence by temperature. 
B: Graph of normalized fluorescence (based on genotype 2) by temperature. C: Graph of melting curve analysis (fluorescence differential/temperature differential). 1: wild-type genotype (GG); 2: heterozygous genotype (GA); 3: mutant homozygous genotype (AA).", "Categorical variables are presented as percentage while continuous variables are presented as mean ± standard deviation. Chi-square test was performed for comparative analysis of gender, ethnicity, hypertension, diabetes, hyperlipidemia, increased arterial stiffness, and smoking frequencies according to MYLIP polymorphism. Chi-square test was also performed for the Hardy-Weinberg equilibrium. ANOVA test was performed for comparing the age, BMI, biochemical data, blood pressures, PWV, and angiographic data means according to MYLIP polymorphism. Tukey's post hoc test was performed to identify the different group. Biochemical data, blood pressures, and angiographic data were adjusted for age, gender, and ethnicity. PWV was adjusted for age, gender, MBP, and ethnicity.\nThe analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.‘s inclusion and exclusion criteria [8,15]. Fasting serum TG > 200 mg/dL for the cases, TG < 150 mg/dL for the controls, and subjects with morbid obesity (BMI > 40 kg/m2) or type 2 diabetes mellitus were excluded. The subjects were classified as high-TC if their serum TC levels were ≥ 240 mg/dL and normal TC if their serum TC levels were < 240 mg/dL in the absence of lipid lowering medication. The subjects were also classified as combined hyperlipidemia if their serum TC and TG levels were ≥ 240 mg/dL and > 200 mg/dL, and as controls if TC and TG levels were < 240 mg/dL and < 150 mg/dL, respectively [8,15]. All statistical analyses were carried out using the SPSS software (v. 
16.0), with the level of significance set at p < 0.05.", "Table 1 summarizes general characteristics of both studied samples.\n\nGeneral characteristics according to\n\nMYLIP\n\nrs9370867 genotypes\n\naEthnicity was categorized in White, Black, Intermediate (person with admixture between White and Black) and other (Amerindians and Oriental descent).\nbHypertension was defined as mean systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg and/or use of anti-hypertension drugs.\ncDiabetes was defined as fasting glucose ≥ 126 mg/dL and/or use of hypoglycemic drugs.\ndHyperlipidemia was defined as total-cholesterol ≥ 240 mg/dL, low density lipoprotein-cholesterol ≥ 160 mg/dL, and/or use of hypolipidemic drugs.\neIncreased arterial stiffness was defined as pulse wave velocity ≥ 12 m/s.\nfIndividuals who had ever smoked more than five cigarettes per day for the last a year were classified as smokers.\ngAdjusted for age and gender.\nNo difference in the frequencies of hyperlipidemia, diabetes, increased arterial stiffness, and smoking status according to MYLIP polymorphism was found. Only hypertension frequency presented a trend (p = 0.05) in the general population (Table 1).\nIn both general population and patient samples, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity (Table 1). In the general population, the frequencies of the MYLIP rs9370867 A variant allele and of the homozygous genotype (AA) was higher in Whites (45.8% and 22.1%) compared with Blacks (15.2% and 3.1%) (p < 0.001 and p < 0.001, respectively). In the patients submitted to coronary angiography, the frequencies of the MYLIP rs9370867 A variant allele and of the homozygous genotype (AA) was higher in Whites (44.6% and 20.7%) compared with Blacks (21.3% and 5.9%) (p < 0.001 and p < 0.001, respectively). 
The genotypic distribution for the MYLIP rs9370867 polymorphism was in Hardy–Weinberg equilibrium according to ethnic groups.", "Table 2 summarizes biochemical, hemodynamic, and angiographic data of both studied samples.\n\nBiochemical, hemodynamic, and angiographic data according to\n\nMYLIP\n\nrs9370867 genotypes\n\nContinuous data are expressed as mean ± standard deviation.\nValues with different superscript letters are significantly different (Tukey’s post hoc test).\nHDL-C: high density lipoprotein; LDL-C: low density lipoprotein.\nBiochemical data, blood pressures, ejection fraction, and angiographic scores were adjusted for age, gender, and ethnicity.\nPulse wave velocity data was adjusted for age, gender, mean blood pressure, and ethnicity.\nThere was no association of the MYLIP rs9370867 genotypes with TC, LDL-C, HDL-C, triglycerides, and glycemia values in both samples (Table 2).\nIn the general population, subjects carrying GG genotypes had higher SBP, DBP, and MBP values (129.0 ± 23.3; 84.9 ± 14.6; 99.5 ± 16.8 mmHg) compared with subjects carrying AA genotypes (123.7 ± 19.5; 81.6 ± 11.8; 95.6 ± 13.6 mmHg) (p = 0.01; p = 0.02; p = 0.01, respectively), even after adjustment for age, gender, and ethnicity (Table 2). This difference was not found in the patient sample.\nNo association of the studied polymorphism with PWV values (available for the general population sample) or with angiographic data (extension and severity scores, available for the patient sample) was observed (Table 2).", "The analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.‘s inclusion and exclusion criteria (see Methods section for details) [8,15]. In this analysis, the sub-groups (controls and dyslipidemic subjects) presented similar result as the total sample in both general population and patient samples, even after adjustment for covariates. 
In the sub-groups, no association of the variables according to genotypes was found and no difference in the variant allele frequency between sub-groups was observed (p > 0.05). In addition, the analysis stratified by gender and ethnicity did not identify significant results in both studied samples.", "The authors declare that they have no competing interests.", "PCJLS carried out the molecular genetic studies, statistical analysis and drafted the manuscript. TGMO carried out the molecular genetic studies. ACP participated in the design of the study, statistical analysis and manuscript preparation. ACP, PAL, JGM, JEK participated in the design of the study and were responsible for individual selection and characterization. All authors contributed critically to the manuscript, whose present version was read and approved by all." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "General population", "Patients submitted to coronary angiography", "Demographic data and laboratory tests", "Blood pressure phenotypes", "Pulse wave velocity and arterial stiffness", "Coronary artery disease scores", "Genotyping", "Statistical analysis", "Results", "General characteristics according to MYLIP polymorphism", "Biochemical, hemodynamic, and angiographic data according to MYLIP polymorphism", "Analysis stratified by hyperlipidemia, gender, and ethnicity", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Lipid profile disorders have been significantly associated with risk of cardiovascular disease (CVD), which is also influenced by genetic factors, hypertension, type 2 diabetes mellitus, obesity, and smoking. CVD are the main cause of morbidity and mortality in developed countries and the financial cost is enormous. Thus, guidelines from the National Cholesterol Education Program (NCEP) rely on low-density lipoprotein cholesterol (LDL-C) for the prevention of CVD [1-6].\nA conventional lipid panel reports several parameters, including total cholesterol (TC), LDL-C, high-density lipoprotein cholesterol (HDL-C), and triglycerides. Of these, the NCEP and the American Heart Association recommend using LDL-C as a primary target of therapy in conjunction with assessing cardiovascular risk factors. The Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III, or ATP III - NCEP) updated clinical guidelines for cholesterol testing such as high TC (≥ 240 mg/dL), high LDL-C (≥ 160 mg/dL), and low HDL-C (< 40 mg/dL) [3,7,8].\nPopulation genetic and epidemiological studies could help to assess the etiologic role of lipid profile in CVD and, novel genetic determinants of blood lipids can also help to provide new insights into the biological pathways and identify novel therapeutic targets. In this way, current genome-wide association studies (GWAS) have identified genetic loci contributing to inter-individual variation in serum concentration of lipids [9-12]. Some GWAS, using cohorts of mixed European descent, identified non-coding polymorphisms in the region of the MYLIP gene that were associated with LDL-C concentrations [12-14]. The MYLIP functional variant and the mechanistic basis of these associations were recently postulated by Weissglas-Volkov et al. 
[15].\nMYLIP gene encodes a regulator of the LDL receptor pathway for cellular cholesterol uptake called MYLIP (myosin regulatory light chain interacting protein; also known as IDOL). Weissglas-Volkov et al. investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants. They identified the rs9370867 non-synonymous polymorphism (p.N342S) as the underlying functional variant accounting for one of the previous GWAS significant signals and associated N342 allele with higher TC concentrations in Mexican dyslipidemic individuals [15].\nTo date, there is no further evaluation on this genotype-phenotype association (MYLIP p.N342S – lipid profile). In this scenario, the main aim of this study was to assess the influence of the MYLIP polymorphism on lipid profile in Brazilian individuals.", " General population One thousand two hundred ninety-five subjects of the general urban population were selected from Vitoria, Brazil [16]. The study design was based on cross-sectional research methodology and was developed by means of surveying and analyzing socioeconomic and health data in a probabilistic sample of residents from the municipality of Vitoria, Espirito Santo, Brazil. The sampling plan had the objective of ensuring that the research would be socioeconomically, geographically, and demographically representative of the residents of this municipality. The study protocol was approved by the involved Institutional Ethics Committees and written informed consent was obtained from all participants prior to enter the study.\nOne thousand two hundred ninety-five subjects of the general urban population were selected from Vitoria, Brazil [16]. The study design was based on cross-sectional research methodology and was developed by means of surveying and analyzing socioeconomic and health data in a probabilistic sample of residents from the municipality of Vitoria, Espirito Santo, Brazil. 
The sampling plan had the objective of ensuring that the research would be socioeconomically, geographically, and demographically representative of the residents of this municipality. The study protocol was approved by the involved Institutional Ethics Committees and written informed consent was obtained from all participants prior to enter the study.\n Patients submitted to coronary angiography One thousand four hundred twenty-five consecutive patients submitted to coronary angiography for the first time to study suggestive coronary artery disease etiology were selected at the Laboratory of Hemodinamics, Heart Institute (Incor), Sao Paulo, Brazil. All patients had a clinical diagnosis of angina pectoris and stable angina. No patient enrolled in this study was currently experiencing an acute coronary syndrome. Patients with previous acute ischemic events, heart failure classes III–IV, hepatic dysfunction, familiar hypercholesterolemia, previous heart or kidney transplantation, and in antiviral treatment were excluded [17-19]. Patients answered a clinical questionnaire that covered questions regarding personal medical history, family antecedents of CVD, sedentarism, smoking status, hypertension, obesity, dyslipidemia, diabetes, and current treatment. All patients signed an informed consent form and the study has been approved by the local Ethics committee.\nOne thousand four hundred twenty-five consecutive patients submitted to coronary angiography for the first time to study suggestive coronary artery disease etiology were selected at the Laboratory of Hemodinamics, Heart Institute (Incor), Sao Paulo, Brazil. All patients had a clinical diagnosis of angina pectoris and stable angina. No patient enrolled in this study was currently experiencing an acute coronary syndrome. 
Patients with previous acute ischemic events, heart failure classes III–IV, hepatic dysfunction, familiar hypercholesterolemia, previous heart or kidney transplantation, and in antiviral treatment were excluded [17-19]. Patients answered a clinical questionnaire that covered questions regarding personal medical history, family antecedents of CVD, sedentarism, smoking status, hypertension, obesity, dyslipidemia, diabetes, and current treatment. All patients signed an informed consent form and the study has been approved by the local Ethics committee.\n Demographic data and laboratory tests Weight and height were measured according to a standard protocol, and body mass index (BMI) was calculated. Individuals answered a clinical questionnaire that covered questions regarding smoking status and current medical treatment. Individuals who had ever smoked more than five cigarettes per day for the last year were classified as smokers [1,20]. Ethnicity was classified with a validated questionnaire for the Brazilian population according to a set of phenotypic characteristics (such as skin color, hair texture, shape of the nose and aspect of the lip) and individuals were classified as White, Intermediate (meaning Brown, Pardo in Portuguese), Black, Amerindian or Oriental descent [16,21,22].\nTriglycerides (TG), TC, HDL-C, LDL-C, and glucose were evaluated by standard techniques in 12-h fasting blood samples. Diabetes mellitus was diagnosed by the presence of fasting glucose ≥ 126 mg/dL or the use of antidiabetic drugs [23]. Hyperlipidemia was defined as TC ≥ 240 mg/dL, LDL-C ≥ 160 mg/dL, and/or use of hypolipidemic drugs [7].\nWeight and height were measured according to a standard protocol, and body mass index (BMI) was calculated. Individuals answered a clinical questionnaire that covered questions regarding smoking status and current medical treatment. Individuals who had ever smoked more than five cigarettes per day for the last year were classified as smokers [1,20]. 
Blood pressure phenotypes

Blood pressure was measured in the sitting position with a standard mercury sphygmomanometer on the left arm after 5 min of rest. The first and fifth phases of the Korotkoff sounds were used for systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively. SBP and DBP were calculated from two readings taken at least 10 min apart. Hypertension was defined as mean SBP ≥ 140 mmHg and/or mean DBP ≥ 90 mmHg and/or antihypertensive drug use [24]. The mean blood pressure (MBP) was calculated as the DBP plus one-third of the pulse pressure.
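The blood-pressure phenotyping can be sketched as below; this is an illustrative reconstruction (function name and inputs are hypothetical), using the standard mean-arterial-pressure formula (DBP plus one-third of the pulse pressure):

```python
def blood_pressure_phenotype(readings, on_antihypertensive=False):
    """readings: list of (SBP, DBP) pairs taken at least 10 min apart.

    Hypertension: mean SBP >= 140 mmHg and/or mean DBP >= 90 mmHg
    and/or antihypertensive drug use.
    """
    sbp = sum(r[0] for r in readings) / len(readings)
    dbp = sum(r[1] for r in readings) / len(readings)
    # Standard formula: MBP = DBP + (SBP - DBP) / 3
    mbp = dbp + (sbp - dbp) / 3
    hypertensive = sbp >= 140 or dbp >= 90 or on_antihypertensive
    return sbp, dbp, round(mbp, 1), hypertensive

print(blood_pressure_phenotype([(142, 88), (138, 86)]))
```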
Pulse wave velocity and arterial stiffness

Carotid-femoral pulse wave velocity (PWV) was analyzed with an automatic device (Complior®; Colson) by an experienced observer blinded to the clinical characteristics. Briefly, common carotid artery and femoral artery pressure waveforms were recorded non-invasively using a pressure-sensitive transducer (TY-306-Fukuda®; Fukuda; Tokyo, Japan). The distance between the recording sites (D) was measured, and PWV was automatically calculated as PWV = D/t, where t is the pulse transit time. Measurements were repeated over 10 different cardiac cycles, and the mean was used for the final analysis. The validation and reproducibility of the method have been described previously, and increased arterial stiffness was defined as PWV ≥ 12 m/s [16,25].

Coronary artery disease scores

Twenty coronary segments were scored: each vessel was divided into three segments (proximal, medial, and distal), except for the secondary branches of the right coronary artery (posterior ventricular and posterior descending), which were divided into proximal and distal segments.
Stenosis greater than 50% in any coronary segment was graded 1 point, and the sum of points for all 20 segments constituted the Extension Score. Lesion severity was scored as follows: none or irregularities, 0 points; <50%, 0.3 points; 50–70%, 0.6 points; >70–90%, 0.8 points; and >90–100%, 0.95 points. The Severity Score was calculated as the sum of these points over all 20 coronary segments [17].

Genotyping

Genomic DNA was extracted from peripheral blood following a standard salting-out procedure. The genotype for the MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism was determined by polymerase chain reaction (PCR) followed by high resolution melting (HRM) analysis on the Rotor Gene 6000® instrument (Qiagen, Courtaboeuf, France) [26,27]. The QIAgility® (Qiagen, Courtaboeuf, France), an automated instrument, was used according to the manufacturer's instructions to optimize the sample preparation step. A single disc can genotype 96 samples for this polymorphism [28].

The fragment was amplified using the sense primer 5’-TTGTGGACCTCGTTTCAAGA-3’ and antisense primer 5’-GCTGCAGTTCATGCTGCT-3’ (80 base pairs) for rs9370867.
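The angiographic Extension and Severity scores described above can be sketched as follows; this is an illustrative reading of the published scheme (the function name and percent-stenosis input are assumptions), not the authors' implementation:

```python
def cad_scores(stenoses):
    """stenoses: percent stenosis for each of the 20 coronary segments.

    Extension Score: 1 point per segment with stenosis > 50%.
    Severity Score: sum of graded per-segment points.
    """
    def severity_points(pct):
        if pct < 50:                 # none/irregularities (0) or < 50% (0.3)
            return 0.0 if pct == 0 else 0.3
        if pct <= 70:                # 50-70%
            return 0.6
        if pct <= 90:                # >70-90%
            return 0.8
        return 0.95                  # >90-100%

    extension = sum(1 for pct in stenoses if pct > 50)
    severity = round(sum(severity_points(pct) for pct in stenoses), 2)
    return extension, severity

# 17 normal segments plus lesions of 30%, 60%, and 95%
print(cad_scores([0] * 17 + [30, 60, 95]))
```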
A 40-cycle PCR was carried out under the following conditions: initial denaturation at 94°C for 120 s, then denaturation at 94°C for 20 s, annealing at 53.4°C for 20 s, and extension at 72°C for 22 s per cycle. PCR was performed in a 10 μL reaction (10 mM Tris–HCl, 50 mM KCl, pH 9.0; 2.0 mM MgCl2; 200 μM of each dNTP; 0.5 U Taq DNA polymerase; 200 nM of each primer; 10 ng of genomic DNA template) with the addition of the fluorescent DNA-intercalating dye SYTO9® (1.5 μM; Invitrogen, Carlsbad, USA).

In the HRM phase, the Rotor Gene 6000® measured fluorescence at each 0.1°C temperature increment in the range of 73–85°C. Melting curves were generated from the decrease in fluorescence with increasing temperature; in the analysis, nucleotide changes result in three different curve patterns (Figure 1). Samples from each of the three observed curve patterns were analyzed by bidirectional sequencing as a validation procedure (ABI Terminator Sequencing Kit® and ABI 3500XL Sequencer®; Applied Biosystems, Foster City, CA, USA). The two methods gave identical results in all tests, and the wild-type, heterozygous, and mutant homozygous genotypes for rs9370867 were easily discernible by HRM analysis. In addition, 4% of the samples were randomly selected and reanalyzed as quality controls, with identical results.

Figure 1. Graphs of the MYLIP rs9370867 (p.N342S, c.G1025A) nucleotide changes resulting in different curve patterns in high resolution melting analysis. A: normalized fluorescence by temperature. B: normalized fluorescence (based on genotype 2) by temperature. C: melting curve analysis (fluorescence differential/temperature differential). 1: wild-type genotype (GG); 2: heterozygous genotype (GA); 3: mutant homozygous genotype (AA).
Statistical analysis

Categorical variables are presented as percentages and continuous variables as mean ± standard deviation. The chi-square test was used to compare the frequencies of gender, ethnicity, hypertension, diabetes, hyperlipidemia, increased arterial stiffness, and smoking according to the MYLIP polymorphism, and to test for Hardy-Weinberg equilibrium. ANOVA was used to compare means of age, BMI, biochemical data, blood pressures, PWV, and angiographic data according to the MYLIP polymorphism, with Tukey's post hoc test to identify the differing group. Biochemical data, blood pressures, and angiographic data were adjusted for age, gender, and ethnicity; PWV was adjusted for age, gender, MBP, and ethnicity.

The analysis stratified by hyperlipidemia followed the inclusion and exclusion criteria of Weissglas-Volkov et al. [8,15]: fasting serum TG > 200 mg/dL for cases and TG < 150 mg/dL for controls, with subjects with morbid obesity (BMI > 40 kg/m2) or type 2 diabetes mellitus excluded. Subjects were classified as high-TC if serum TC was ≥ 240 mg/dL and as normal-TC if serum TC was < 240 mg/dL, in the absence of lipid-lowering medication.
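A simplified sketch of these stratification criteria (from Weissglas-Volkov et al. [8,15]) is shown below; the function name and inputs are hypothetical, the combined-hyperlipidemia and control categories from the same criteria are folded in, and lipid values are assumed to be in mg/dL:

```python
def tc_stratum(tc, tg, bmi, type2_diabetes, on_lipid_lowering):
    """Illustrative stratification; returns a label, or None if excluded."""
    # Exclusions: morbid obesity, type 2 diabetes, or lipid-lowering drugs
    if bmi > 40 or type2_diabetes or on_lipid_lowering:
        return None
    if tc >= 240 and tg > 200:       # TC >= 240 mg/dL and TG > 200 mg/dL
        return "combined hyperlipidemia"
    if tc < 240 and tg < 150:        # TC < 240 mg/dL and TG < 150 mg/dL
        return "control"
    return "high-TC" if tc >= 240 else "normal-TC"

print(tc_stratum(250, 220, 27, False, False))
```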
Subjects were also classified as having combined hyperlipidemia if serum TC was ≥ 240 mg/dL and TG > 200 mg/dL, and as controls if TC was < 240 mg/dL and TG < 150 mg/dL [8,15]. All statistical analyses were carried out with SPSS software (v. 16.0), with the level of significance set at p < 0.05.
General characteristics according to MYLIP polymorphism

Table 1 summarizes the general characteristics of both study samples.

Table 1. General characteristics according to MYLIP rs9370867 genotypes.
aEthnicity was categorized as White, Black, Intermediate (admixture between White and Black), and other (Amerindian and Oriental descent).
bHypertension was defined as mean systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg and/or use of antihypertensive drugs.
cDiabetes was defined as fasting glucose ≥ 126 mg/dL and/or use of hypoglycemic drugs.
dHyperlipidemia was defined as total cholesterol ≥ 240 mg/dL, low-density lipoprotein cholesterol ≥ 160 mg/dL, and/or use of hypolipidemic drugs.
eIncreased arterial stiffness was defined as pulse wave velocity ≥ 12 m/s.
fIndividuals who had smoked more than five cigarettes per day during the last year were classified as smokers.
gAdjusted for age and gender.

No difference in the frequencies of hyperlipidemia, diabetes, increased arterial stiffness, or smoking status according to the MYLIP polymorphism was found; only hypertension frequency showed a trend (p = 0.05) in the general population (Table 1).

In both the general population and the patient sample, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity (Table 1). In the general population, the frequencies of the rs9370867 A variant allele and of the homozygous (AA) genotype were higher in Whites (45.8% and 22.1%) than in Blacks (15.2% and 3.1%) (p < 0.001 for both). Among patients submitted to coronary angiography, the A allele and AA genotype frequencies were likewise higher in Whites (44.6% and 20.7%) than in Blacks (21.3% and 5.9%) (p < 0.001 for both).
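The allele frequencies and Hardy-Weinberg check reported here follow directly from the genotype counts; a minimal sketch (with made-up example counts, not the study's data) is:

```python
def allele_freq_and_hwe(n_gg, n_ga, n_aa):
    """From GG/GA/AA genotype counts, return the A-allele frequency and the
    chi-square statistic against Hardy-Weinberg expected counts (1 df)."""
    n = n_gg + n_ga + n_aa
    p_a = (2 * n_aa + n_ga) / (2 * n)          # A-allele frequency
    p_g = 1 - p_a
    # Expected genotype counts under Hardy-Weinberg equilibrium
    expected = (n * p_g ** 2, 2 * n * p_g * p_a, n * p_a ** 2)
    chi2 = sum((obs - exp) ** 2 / exp
               for obs, exp in zip((n_gg, n_ga, n_aa), expected))
    return round(p_a, 3), round(chi2, 2)

# Example counts that happen to sit exactly at HWE proportions
print(allele_freq_and_hwe(36, 48, 16))
```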
The genotypic distribution of the MYLIP rs9370867 polymorphism was in Hardy–Weinberg equilibrium within each ethnic group.
Biochemical, hemodynamic, and angiographic data according to MYLIP polymorphism

Table 2 summarizes the biochemical, hemodynamic, and angiographic data of both study samples.

Table 2. Biochemical, hemodynamic, and angiographic data according to MYLIP rs9370867 genotypes.
Continuous data are expressed as mean ± standard deviation. Values with different superscript letters are significantly different (Tukey's post hoc test). HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol. Biochemical data, blood pressures, ejection fraction, and angiographic scores were adjusted for age, gender, and ethnicity; pulse wave velocity was adjusted for age, gender, mean blood pressure, and ethnicity.

There was no association of the MYLIP rs9370867 genotypes with TC, LDL-C, HDL-C, triglyceride, or glycemia values in either sample (Table 2).

In the general population, subjects carrying the GG genotype had higher SBP, DBP, and MBP values (129.0 ± 23.3, 84.9 ± 14.6, and 99.5 ± 16.8 mmHg) than subjects carrying the AA genotype (123.7 ± 19.5, 81.6 ± 11.8, and 95.6 ± 13.6 mmHg) (p = 0.01, p = 0.02, and p = 0.01, respectively), even after adjustment for age, gender, and ethnicity (Table 2).
This difference was not found in the patient sample.

No association of the studied polymorphism with PWV values (available for the general population sample) or with angiographic data (extension and severity scores, available for the patient sample) was observed (Table 2).

Analysis stratified by hyperlipidemia, gender, and ethnicity

The analysis stratified by hyperlipidemia was performed according to the inclusion and exclusion criteria of Weissglas-Volkov et al. (see Methods for details) [8,15].
In this analysis, the sub-groups (controls and dyslipidemic subjects) showed results similar to those of the total sample in both the general population and the patient sample, even after adjustment for covariates. Within the sub-groups, no association of the variables with the genotypes was found, and no difference in variant allele frequency between sub-groups was observed (p > 0.05). In addition, the analyses stratified by gender and ethnicity did not identify significant results in either sample.
In addition, the analysis stratified by gender and ethnicity did not identify significant results in both studied samples.", "Table 1 summarizes general characteristics of both studied samples.\n\nGeneral characteristics according to\n\nMYLIP\n\nrs9370867 genotypes\n\naEthnicity was categorized in White, Black, Intermediate (person with admixture between White and Black) and other (Amerindians and Oriental descent).\nbHypertension was defined as mean systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg and/or use of anti-hypertension drugs.\ncDiabetes was defined as fasting glucose ≥ 126 mg/dL and/or use of hypoglycemic drugs.\ndHyperlipidemia was defined as total-cholesterol ≥ 240 mg/dL, low density lipoprotein-cholesterol ≥ 160 mg/dL, and/or use of hypolipidemic drugs.\neIncreased arterial stiffness was defined as pulse wave velocity ≥ 12 m/s.\nfIndividuals who had ever smoked more than five cigarettes per day for the last a year were classified as smokers.\ngAdjusted for age and gender.\nNo difference in the frequencies of hyperlipidemia, diabetes, increased arterial stiffness, and smoking status according to MYLIP polymorphism was found. Only hypertension frequency presented a trend (p = 0.05) in the general population (Table 1).\nIn both general population and patient samples, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity (Table 1). In the general population, the frequencies of the MYLIP rs9370867 A variant allele and of the homozygous genotype (AA) was higher in Whites (45.8% and 22.1%) compared with Blacks (15.2% and 3.1%) (p < 0.001 and p < 0.001, respectively). In the patients submitted to coronary angiography, the frequencies of the MYLIP rs9370867 A variant allele and of the homozygous genotype (AA) was higher in Whites (44.6% and 20.7%) compared with Blacks (21.3% and 5.9%) (p < 0.001 and p < 0.001, respectively). 
The genotypic distribution of the MYLIP rs9370867 polymorphism was in Hardy-Weinberg equilibrium within each ethnic group.

Table 2 summarizes the biochemical, hemodynamic, and angiographic data of both studied samples (biochemical, hemodynamic, and angiographic data according to MYLIP rs9370867 genotypes). Continuous data are expressed as mean ± standard deviation; values with different superscript letters are significantly different (Tukey's post hoc test). HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol. Biochemical data, blood pressures, ejection fraction, and angiographic scores were adjusted for age, gender, and ethnicity; pulse wave velocity data were adjusted for age, gender, mean blood pressure, and ethnicity.
There was no association of the MYLIP rs9370867 genotypes with TC, LDL-C, HDL-C, triglyceride, or glycemia values in either sample (Table 2).
In the general population, subjects carrying the GG genotype had higher SBP, DBP, and MBP values (129.0 ± 23.3, 84.9 ± 14.6, and 99.5 ± 16.8 mmHg) than subjects carrying the AA genotype (123.7 ± 19.5, 81.6 ± 11.8, and 95.6 ± 13.6 mmHg) (p = 0.01, p = 0.02, and p = 0.01, respectively), even after adjustment for age, gender, and ethnicity (Table 2). This difference was not found in the patient sample.
No association of the studied polymorphism with PWV values (available for the general population sample) or with angiographic data (extension and severity scores, available for the patient sample) was observed (Table 2).
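The Hardy-Weinberg check mentioned above compares observed genotype counts against those expected from the allele frequencies. A minimal stdlib sketch with hypothetical counts (the per-group counts are not given in the text):

```python
def hwe_chi2(n_gg, n_ga, n_aa):
    """Chi-square statistic (1 df) for Hardy-Weinberg equilibrium from genotype counts."""
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)  # G allele frequency
    q = 1.0 - p                      # A allele frequency
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_gg, n_ga, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts exactly at equilibrium (p = q = 0.5):
print(hwe_chi2(250, 500, 250))          # 0.0 -> consistent with equilibrium
# A clear heterozygote deficit departs from equilibrium:
print(hwe_chi2(400, 200, 400) > 3.841)  # True -> p < 0.05
```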
A recent study reported that the MYLIP rs9370867 polymorphism was associated with TC levels in a Mexican dyslipidemic sample, and this genetic finding was supported by functional data demonstrating that rs9370867 influences plasma cholesterol levels by modifying degradation of the LDL receptor [15]. In this context, and in an attempt to replicate this important association, our main finding was that the polymorphism did not influence the lipid profile in either of the Brazilian samples studied: the general population and the patients submitted to coronary angiography.
Here, the frequency of the A allele was much higher in Whites than in Blacks. Previous studies reported that in African and Asian groups the frequency is relatively low, at 2%-8%, whereas in groups of European descent it is much higher, at 49%-60% [15,29,30]. The Brazilian population is one of the most heterogeneous in the world, a mixture of different ethnic groups composed mainly of European descendants, African descendants, and Amerindians. Thus, adjustment for ethnicity plus the other covariates was performed, and an analysis stratified by ethnicity was made; nevertheless, no significant result was found. One limitation applies here: genetic markers of ancestry were not used, although a validated questionnaire for ethnicity classification was.
Likewise, this variation in allele frequency across ethnic groups influenced the blood pressure data in the general population sample: SBP, DBP, and MBP values were higher in the GG genotype group (which had the largest proportion of Blacks) and lower in the AA genotype group (which had the smallest proportion of Blacks).
Adjustment for the covariates plus ethnicity could not exclude a contribution of ethnicity to the observed results; however, in the analysis stratified by ethnicity, this finding was not present in any ethnic group. Corroborating this observation, our group demonstrated in a recent study that SBP, DBP, and MBP values were higher in Black individuals than in the other ethnic groups of the Brazilian general population (p < 0.001) [16]. Thus, there is no evidence that the studied MYLIP polymorphism influences blood pressure.
In this study, the previously identified association was not replicated in either the general population or the patient sample. Two GWAS of populations of European descent have identified MYLIP genetic loci contributing to variation in serum lipids [12,13]. Weissglas-Volkov et al. investigated the MYLIP region in Mexican individuals in order to narrow the associated region, identified the variant p.N342S (rs9370867), and associated this substitution with cholesterol levels in a Mexican dyslipidemic study sample [15].
It is important to note that the cited study observed the association only in Mexican dyslipidemic individuals. Our Brazilian general population sample allowed a first assessment of this association in a general population, and our second sample, patients submitted to coronary angiography, allowed an analysis with a higher frequency of dyslipidemic individuals. In both samples, even after performing the analysis according to Weissglas-Volkov et al.'s criteria (see Methods section), no association was observed.
The patterns of linkage disequilibrium vary across populations and ethnicities according to previous studies [15,29,30]; thus, it is plausible that one or more functional MYLIP polymorphisms are differently distributed across populations, leading to distinct findings.

Our findings indicate that association studies involving this MYLIP variant can yield distinct results depending on the studied population. Further studies are needed to confirm whether the MYLIP p.N342S polymorphism is functional and to identify other functional markers in this locus.

Competing interests: The authors declare that they have no competing interests.

Authors' contributions: PCJLS carried out the molecular genetic studies and the statistical analysis and drafted the manuscript. TGMO carried out the molecular genetic studies. ACP participated in the design of the study, the statistical analysis, and manuscript preparation. ACP, PAL, JGM, and JEK participated in the design of the study and were responsible for individual selection and characterization. All authors contributed critically to the manuscript and read and approved the final version.
Keywords: MYLIP, p.N342S, rs9370867, Lipid profile, Cholesterol, Ethnicity, Brazilian
Background: Lipid profile disorders have been significantly associated with the risk of cardiovascular disease (CVD), which is also influenced by genetic factors, hypertension, type 2 diabetes mellitus, obesity, and smoking. CVD is the main cause of morbidity and mortality in developed countries, and its financial cost is enormous. Guidelines from the National Cholesterol Education Program (NCEP) therefore rely on low-density lipoprotein cholesterol (LDL-C) for the prevention of CVD [1-6]. A conventional lipid panel reports several parameters, including total cholesterol (TC), LDL-C, high-density lipoprotein cholesterol (HDL-C), and triglycerides. Of these, the NCEP and the American Heart Association recommend using LDL-C as the primary target of therapy, in conjunction with assessing cardiovascular risk factors. The Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III, or ATP III - NCEP) updated the clinical guidelines for cholesterol testing, defining high TC (≥ 240 mg/dL), high LDL-C (≥ 160 mg/dL), and low HDL-C (< 40 mg/dL) [3,7,8]. Population genetic and epidemiological studies can help to assess the etiologic role of the lipid profile in CVD, and novel genetic determinants of blood lipids can provide new insights into the underlying biological pathways and identify novel therapeutic targets. Accordingly, genome-wide association studies (GWAS) have identified genetic loci contributing to inter-individual variation in serum lipid concentrations [9-12]. Some GWAS, using cohorts of mixed European descent, identified non-coding polymorphisms in the region of the MYLIP gene that were associated with LDL-C concentrations [12-14]. The MYLIP functional variant and the mechanistic basis of these associations were recently postulated by Weissglas-Volkov et al. [15].
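The ATP III cut-offs quoted above translate directly into a simple screening rule; a minimal sketch (the function name is illustrative, not from the guideline):

```python
def atp3_flags(tc, ldl_c, hdl_c):
    """Flag a lipid panel (mg/dL) against the ATP III cut-offs cited in the text:
    high TC >= 240, high LDL-C >= 160, low HDL-C < 40."""
    return {
        "high_TC": tc >= 240,
        "high_LDL_C": ldl_c >= 160,
        "low_HDL_C": hdl_c < 40,
    }

print(atp3_flags(tc=250, ldl_c=150, hdl_c=38))
# {'high_TC': True, 'high_LDL_C': False, 'low_HDL_C': True}
```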
The MYLIP gene encodes a regulator of the LDL-receptor pathway for cellular cholesterol uptake called MYLIP (myosin regulatory light chain interacting protein, also known as IDOL). Weissglas-Volkov et al. investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants. They identified the non-synonymous polymorphism rs9370867 (p.N342S) as the underlying functional variant accounting for one of the previous significant GWAS signals and associated the N342 allele with higher TC concentrations in Mexican dyslipidemic individuals [15]. To date, there has been no further evaluation of this genotype-phenotype association (MYLIP p.N342S and lipid profile). In this scenario, the main aim of this study was to assess the influence of the MYLIP polymorphism on the lipid profile of Brazilian individuals.

Methods: General population. One thousand two hundred ninety-five subjects of the general urban population were selected from Vitoria, Brazil [16]. The study was based on a cross-sectional design and was developed by surveying and analyzing socioeconomic and health data in a probabilistic sample of residents of the municipality of Vitoria, Espirito Santo, Brazil. The sampling plan was designed to ensure that the sample would be socioeconomically, geographically, and demographically representative of the residents of this municipality. The study protocol was approved by the Institutional Ethics Committees involved, and written informed consent was obtained from all participants before entering the study.
Patients submitted to coronary angiography. One thousand four hundred twenty-five consecutive patients submitted to coronary angiography for the first time, to investigate suspected coronary artery disease, were selected at the Laboratory of Hemodynamics, Heart Institute (Incor), Sao Paulo, Brazil. All patients had a clinical diagnosis of angina pectoris and stable angina; no patient enrolled in this study was experiencing an acute coronary syndrome. Patients with previous acute ischemic events, heart failure classes III-IV, hepatic dysfunction, familial hypercholesterolemia, or previous heart or kidney transplantation, and patients under antiviral treatment, were excluded [17-19]. Patients answered a clinical questionnaire covering personal medical history, family antecedents of CVD, sedentary lifestyle, smoking status, hypertension, obesity, dyslipidemia, diabetes, and current treatment. All patients signed an informed consent form, and the study was approved by the local Ethics Committee.
Demographic data and laboratory tests. Weight and height were measured according to a standard protocol, and body mass index (BMI) was calculated. Individuals answered a clinical questionnaire covering smoking status and current medical treatment; individuals who had ever smoked more than five cigarettes per day during the last year were classified as smokers [1,20]. Ethnicity was classified with a questionnaire validated for the Brazilian population, based on a set of phenotypic characteristics (such as skin color, hair texture, shape of the nose, and aspect of the lips); individuals were classified as White, Intermediate (meaning Brown, Pardo in Portuguese), Black, Amerindian, or of Oriental descent [16,21,22]. Triglycerides (TG), TC, HDL-C, LDL-C, and glucose were measured by standard techniques in 12-h fasting blood samples. Diabetes mellitus was diagnosed by fasting glucose ≥ 126 mg/dL or use of antidiabetic drugs [23]. Hyperlipidemia was defined as TC ≥ 240 mg/dL, LDL-C ≥ 160 mg/dL, and/or use of hypolipidemic drugs [7].
Blood pressure phenotypes. Blood pressure was measured in the sitting position with a standard mercury sphygmomanometer on the left arm after 5 min of rest. The first and fifth Korotkoff sounds were used for systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively. SBP and DBP were calculated from two readings taken at least 10 min apart. Hypertension was defined as mean SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg and/or antihypertensive drug use [24]. The mean blood pressure (MBP) was calculated as the DBP plus one-third of the pulse pressure.
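The blood pressure definitions above can be expressed as two small helpers (the function names are illustrative):

```python
def mean_blood_pressure(sbp, dbp):
    """MBP = DBP plus one-third of the pulse pressure (all values in mmHg)."""
    return dbp + (sbp - dbp) / 3.0

def is_hypertensive(sbp, dbp, on_antihypertensives=False):
    """Hypertension per the definition above: SBP >= 140 and/or DBP >= 90
    and/or antihypertensive drug use."""
    return sbp >= 140 or dbp >= 90 or on_antihypertensives

print(round(mean_blood_pressure(129.0, 84.9), 1))  # 99.6
print(is_hypertensive(135, 85), is_hypertensive(135, 85, on_antihypertensives=True))
```

For the GG-group means of Table 2 (SBP 129.0, DBP 84.9 mmHg) this yields MBP ≈ 99.6 mmHg, close to the reported mean of 99.5 ± 16.8 (the paper averages per-subject MBP, so exact agreement is not expected).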
Pulse wave velocity and arterial stiffness. Carotid-femoral pulse wave velocity (PWV) was analyzed with an automatic device (Complior®; Colson) by an experienced observer blinded to the clinical characteristics. Briefly, common carotid artery and femoral artery pressure waveforms were recorded non-invasively using a pressure-sensitive transducer (TY-306-Fukuda®; Fukuda; Tokyo, Japan). The distance between the recording sites (D) was measured, and PWV was automatically calculated as PWV = D/t, where t is the pulse transit time. Measurements were repeated over 10 different cardiac cycles, and the mean was used for the final analysis. The validation and reproducibility of this procedure have been described previously, and increased arterial stiffness was defined as PWV ≥ 12 m/s [16,25].
Coronary artery disease scores. Twenty coronary segments were scored: each vessel was divided into three segments (proximal, medial, and distal), except for the secondary branches of the right coronary artery (posterior ventricular and posterior descending), which were divided into proximal and distal segments. Stenosis higher than 50% in any coronary segment was graded 1 point, and the sum of points over all 20 segments constituted the Extension Score. Lesion severity was scored as follows: none or irregularities, 0 points; < 50%, 0.3 points; 50-70%, 0.6 points; > 70-90%, 0.8 points; and > 90-100%, 0.95 points. The Severity Score was calculated as the sum of points over all 20 coronary segments [17].
Genotyping. Genomic DNA was extracted from peripheral blood following a standard salting-out procedure. The MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism was genotyped by polymerase chain reaction (PCR) followed by high-resolution melting (HRM) analysis on the Rotor Gene 6000® instrument (Qiagen, Courtaboeuf, France) [26,27]. The QIAgility® (Qiagen, Courtaboeuf, France), an automated instrument, was used according to the manufacturer's instructions to optimize the sample preparation step; one disc can genotype 96 samples for this polymorphism [28]. The fragment (80 base pairs) was amplified with the sense primer 5'-TTGTGGACCTCGTTTCAAGA-3' and the antisense primer 5'-GCTGCAGTTCATGCTGCT-3'. A 40-cycle PCR was carried out under the following conditions: initial denaturation of the template DNA at 94°C for 120 s, then denaturation at 94°C for 20 s, annealing at 53.4°C for 20 s, and extension at 72°C for 22 s.
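Stepping back to the coronary scoring scheme above: the Extension and Severity scores reduce to a simple fold over the per-segment stenosis percentages. A minimal sketch (the per-segment input encoding is illustrative):

```python
def cad_scores(stenoses):
    """Extension and Severity scores from per-segment percent stenosis (up to 20 values).

    Extension: 1 point per segment with stenosis > 50%.
    Severity per segment: 0 (none/irregularities), 0.3 (< 50%), 0.6 (50-70%),
    0.8 (> 70-90%), 0.95 (> 90-100%), per the scheme described in the text.
    """
    def severity_points(s):
        if s <= 0:
            return 0.0
        if s < 50:
            return 0.3
        if s <= 70:
            return 0.6
        if s <= 90:
            return 0.8
        return 0.95

    extension = sum(1 for s in stenoses if s > 50)
    severity = sum(severity_points(s) for s in stenoses)
    return extension, severity

# Four diseased segments; the remaining 16 are assumed lesion-free:
ext, sev = cad_scores([0, 30, 60, 95])
print(ext, round(sev, 2))  # 2 1.85
```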
PCR was performed in a 10 μL reaction (10 mM Tris-HCl, 50 mM KCl, pH 9.0; 2.0 mM MgCl2; 200 μM of each dNTP; 0.5 U Taq DNA polymerase; 200 nM of each primer; 10 ng of genomic DNA template) with the addition of the fluorescent DNA-intercalating dye SYTO9® (1.5 μM; Invitrogen, Carlsbad, USA). In the HRM phase, the Rotor Gene 6000® measured fluorescence at each 0.1°C temperature increase over the range 73-85°C. Melting curves were generated from the decrease in fluorescence with increasing temperature; in the analysis, nucleotide changes produce three distinct curve patterns (Figure 1). Samples representing the three observed curves were analyzed by bidirectional sequencing as a validation procedure (ABI Terminator Sequencing Kit® and ABI 3500XL Sequencer®; Applied Biosystems, Foster City, CA, USA); the two methods gave identical results in all tests. The wild-type, heterozygous, and mutant homozygous genotypes for rs9370867 were easily discernible by HRM analysis. In addition, 4% of the samples were randomly selected and reanalyzed as quality controls, giving identical results.
Figure 1. The MYLIP rs9370867 (p.N342S, c.G1025A) nucleotide changes produce different curve patterns in high-resolution melting analysis. A: normalized fluorescence by temperature. B: normalized fluorescence (relative to genotype 2) by temperature. C: melting curve analysis (fluorescence differential/temperature differential). 1: wild-type genotype (GG); 2: heterozygous genotype (GA); 3: mutant homozygous genotype (AA).
Statistical analysis. Categorical variables are presented as percentages, and continuous variables as mean ± standard deviation. The chi-square test was used to compare the frequencies of gender, ethnicity, hypertension, diabetes, hyperlipidemia, increased arterial stiffness, and smoking according to the MYLIP polymorphism, and to test for Hardy-Weinberg equilibrium. ANOVA was used to compare the means of age, BMI, biochemical data, blood pressures, PWV, and angiographic data according to the MYLIP polymorphism, with Tukey's post hoc test to identify the differing group. Biochemical data, blood pressures, and angiographic data were adjusted for age, gender, and ethnicity; PWV was adjusted for age, gender, MBP, and ethnicity. The analysis stratified by hyperlipidemia followed Weissglas-Volkov et al.'s inclusion and exclusion criteria [8,15]: fasting serum TG > 200 mg/dL for cases and TG < 150 mg/dL for controls, with subjects with morbid obesity (BMI > 40 kg/m2) or type 2 diabetes mellitus excluded. Subjects were classified as high-TC if serum TC was ≥ 240 mg/dL and as normal-TC if serum TC was < 240 mg/dL in the absence of lipid-lowering medication. Subjects were also classified as combined hyperlipidemia if serum TC ≥ 240 mg/dL and TG > 200 mg/dL, and as controls if TC < 240 mg/dL and TG < 150 mg/dL, respectively [8,15]. All statistical analyses were carried out with SPSS software (v. 16.0), with significance set at p < 0.05.
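The stratification criteria above amount to a small classifier; a sketch under the stated thresholds (the function name and return labels are illustrative):

```python
def stratify_subject(tc, tg, bmi, has_t2dm, on_lipid_drugs):
    """Classify a subject per the Weissglas-Volkov et al. criteria quoted above.

    tc, tg in mg/dL; bmi in kg/m2. Returns 'excluded', 'combined_hyperlipidemia',
    'control', or None when the subject fits neither arm.
    """
    # Exclusions: morbid obesity or type 2 diabetes; TC-based classification
    # applies only in the absence of lipid-lowering medication.
    if bmi > 40 or has_t2dm or on_lipid_drugs:
        return "excluded"
    if tc >= 240 and tg > 200:
        return "combined_hyperlipidemia"
    if tc < 240 and tg < 150:
        return "control"
    return None

print(stratify_subject(tc=255, tg=220, bmi=27, has_t2dm=False, on_lipid_drugs=False))
# combined_hyperlipidemia
print(stratify_subject(tc=190, tg=120, bmi=24, has_t2dm=False, on_lipid_drugs=False))
# control
```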
Chi-square test was performed for comparative analysis of gender, ethnicity, hypertension, diabetes, hyperlipidemia, increased arterial stiffness, and smoking frequencies according to MYLIP polymorphism. Chi-square test was also performed for the Hardy-Weinberg equilibrium. ANOVA test was performed for comparing the age, BMI, biochemical data, blood pressures, PWV, and angiographic data means according to MYLIP polymorphism. Tukey's post hoc test was performed to identify the different group. Biochemical data, blood pressures, and angiographic data were adjusted for age, gender, and ethnicity. PWV was adjusted for age, gender, MBP, and ethnicity. The analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.‘s inclusion and exclusion criteria [8,15]. Fasting serum TG > 200 mg/dL for the cases, TG < 150 mg/dL for the controls, and subjects with morbid obesity (BMI > 40 kg/m2) or type 2 diabetes mellitus were excluded. The subjects were classified as high-TC if their serum TC levels were ≥ 240 mg/dL and normal TC if their serum TC levels were < 240 mg/dL in the absence of lipid lowering medication. The subjects were also classified as combined hyperlipidemia if their serum TC and TG levels were ≥ 240 mg/dL and > 200 mg/dL, and as controls if TC and TG levels were < 240 mg/dL and < 150 mg/dL, respectively [8,15]. All statistical analyses were carried out using the SPSS software (v. 16.0), with the level of significance set at p < 0.05. General population: One thousand two hundred ninety-five subjects of the general urban population were selected from Vitoria, Brazil [16]. The study design was based on cross-sectional research methodology and was developed by means of surveying and analyzing socioeconomic and health data in a probabilistic sample of residents from the municipality of Vitoria, Espirito Santo, Brazil. 
The sampling plan had the objective of ensuring that the research would be socioeconomically, geographically, and demographically representative of the residents of this municipality. The study protocol was approved by the involved Institutional Ethics Committees and written informed consent was obtained from all participants prior to enter the study. Patients submitted to coronary angiography: One thousand four hundred twenty-five consecutive patients submitted to coronary angiography for the first time to study suggestive coronary artery disease etiology were selected at the Laboratory of Hemodinamics, Heart Institute (Incor), Sao Paulo, Brazil. All patients had a clinical diagnosis of angina pectoris and stable angina. No patient enrolled in this study was currently experiencing an acute coronary syndrome. Patients with previous acute ischemic events, heart failure classes III–IV, hepatic dysfunction, familiar hypercholesterolemia, previous heart or kidney transplantation, and in antiviral treatment were excluded [17-19]. Patients answered a clinical questionnaire that covered questions regarding personal medical history, family antecedents of CVD, sedentarism, smoking status, hypertension, obesity, dyslipidemia, diabetes, and current treatment. All patients signed an informed consent form and the study has been approved by the local Ethics committee. Demographic data and laboratory tests: Weight and height were measured according to a standard protocol, and body mass index (BMI) was calculated. Individuals answered a clinical questionnaire that covered questions regarding smoking status and current medical treatment. Individuals who had ever smoked more than five cigarettes per day for the last year were classified as smokers [1,20]. 
Ethnicity was classified with a validated questionnaire for the Brazilian population according to a set of phenotypic characteristics (such as skin color, hair texture, shape of the nose and aspect of the lip) and individuals were classified as White, Intermediate (meaning Brown, Pardo in Portuguese), Black, Amerindian or Oriental descent [16,21,22]. Triglycerides (TG), TC, HDL-C, LDL-C, and glucose were evaluated by standard techniques in 12-h fasting blood samples. Diabetes mellitus was diagnosed by the presence of fasting glucose ≥ 126 mg/dL or the use of antidiabetic drugs [23]. Hyperlipidemia was defined as TC ≥ 240 mg/dL, LDL-C ≥ 160 mg/dL, and/or use of hypolipidemic drugs [7]. Blood pressure phenotypes: Blood pressure was measured in the sitting position with the use of a standard mercury sphygmomanometer on the left arm after 5 min rest. The first and fifth phases of Korotkoff sounds were used for systolic blood pressure (SBP) and diastolic blood pressure (DBP), respectively. The SBP and DBP were calculated from two readings with a minimal interval of 10 min apart. Hypertension was defined as mean SBP ≥140 mmHg and/or DBP ≥90 mm Hg and/or antihypertensive drug use [24]. The mean blood pressure (MBP) was calculated as the mean pulse pressure added to one-third of the DBP. Pulse wave velocity and arterial stiffness: Carotid-femoral pulse wave velocity (PWV) was analyzed with an automatic device (Complior®; Colson) by an experienced observer blinded to clinical characteristics. Briefly, common carotid artery and femoral artery pressure waveforms were recorded non-invasively using a pressure-sensitive transducer (TY-306-Fukuda®; Fukuda; Tokyo, Japan). The distance between the recording sites (D) was measured, and PWV was automatically calculated as PWV = D/t, where (t) means pulse transit time. Measurements were repeated over 10 different cardiac cycles, and the mean was used for the final analysis. 
The validation and its reproducibility have been previously described, and increased arterial stiffness was defined as PWV ≥ 12 m/s [16,25]. Coronary artery disease scores: Twenty coronary segments were scored: each vessel was divided into three segments (proximal, medial, and distal), except for the secondary branches of the right coronary artery (posterior ventricular and posterior descending), which were divided into proximal and distal segments. Stenosis higher than 50% in any coronary segment was graded 1 point and the sum of points for all 20 segments constituted the Extension Score. Lesion severity was calculated as follows: none and irregularities, 0 points; <50%, 0.3 points; 50–70%, 0.6 points; >70–90%, 0.8 points; and >90–100%, 0.95 points. The Severity Score was calculated through the sum of points for all 20 coronary segments [17]. Genotyping: Genomic DNA from subjects was extracted from peripheral blood following standard salting-out procedure. Genotypes for the MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism was detected by polymerase chain reaction (PCR) followed by high resolution melting (HRM) analysis with the Rotor Gene 6000® instrument (Qiagen, Courtaboeuf, France) [26,27]. The QIAgility® (Qiagen, Courtaboeuf, France), an automated instrument, was used according to instructions to optimize the sample preparation step. One specific disc is able to genotype 96 samples for this polymorphism [28]. Amplification of the fragment was performed using the primer sense 5’- TTGTGGACCTCGTTTCAAGA -3’ and antisense 5’- GCTGCAGTTCATGCTGCT -3’ (80 pairs base) for the rs9370867. A 40-cycle PCR was carried out with the following conditions: denaturation of the template DNA for first cycle of 94°C for 120 s, denaturation of 94°C for 20 s, annealing of 53.4°C for 20 s, and extension of 72°C for 22 s. 
PCR was performed using a 10 μL reaction solution (10 mM Tris–HCl, 50 mM KCl, pH 9.0; 2.0 mM MgCl2; 200 μM of each dNTP; 0.5 U Taq DNA polymerase; 200 nM of each primer; 10 ng of genomic DNA template) with the addition of the fluorescent DNA-intercalating dye SYTO9® (1.5 μM; Invitrogen, Carlsbad, USA). In the HRM phase, the Rotor Gene 6000® measured fluorescence at each 0.1°C temperature increment over the range of 73–85°C. Melting curves were generated by the decrease in fluorescence with increasing temperature; in the analysis, nucleotide changes result in three different curve patterns (Figure 1). Samples representing the three observed curves were analyzed by bidirectional sequencing as a validation procedure (ABI Terminator Sequencing Kit® and ABI 3500XL Sequencer® - Applied Biosystems, Foster City, CA, USA). The two methods gave identical results in all tests. The wild-type, heterozygous, and mutant homozygous genotypes for rs9370867 were easily discernible by HRM analysis. In addition, 4% of the samples were randomly selected and reanalyzed as quality controls, again with identical results. Figure 1: MYLIP rs9370867 (p.N342S, c.G1025A) nucleotide changes result in different curve patterns in high resolution melting analysis. A: Graph of normalized fluorescence by temperature. B: Graph of normalized fluorescence (based on genotype 2) by temperature. C: Graph of melting curve analysis (fluorescence differential/temperature differential). 1: wild-type genotype (GG); 2: heterozygous genotype (GA); 3: mutant homozygous genotype (AA). Statistical analysis: Categorical variables are presented as percentages, and continuous variables as mean ± standard deviation. The chi-square test was used for comparative analysis of gender, ethnicity, hypertension, diabetes, hyperlipidemia, increased arterial stiffness, and smoking frequencies according to MYLIP polymorphism. The chi-square test was also used to assess Hardy-Weinberg equilibrium.
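The Hardy-Weinberg check mentioned above is a one-degree-of-freedom chi-square goodness-of-fit test. The study used SPSS, so the following is an illustrative sketch only, for a biallelic G/A marker with hypothetical genotype counts:

```python
def hwe_chi_square(n_gg, n_ga, n_aa):
    """Chi-square goodness-of-fit statistic for Hardy-Weinberg equilibrium.

    Expected counts come from the observed allele frequencies:
    E(GG) = p^2 * n, E(GA) = 2pq * n, E(AA) = q^2 * n.
    A statistic above 3.84 (df = 1, alpha = 0.05) suggests departure from HWE.
    """
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)  # frequency of the G allele
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_gg, n_ga, n_aa]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts exactly at HWE proportions give a statistic of 0:
print(hwe_chi_square(250, 500, 250))  # 0.0
```

In practice the equivalent test is available as `scipy.stats.chisquare` once the expected counts are computed as above.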
ANOVA was performed to compare the means of age, BMI, biochemical data, blood pressures, PWV, and angiographic data according to MYLIP polymorphism, with Tukey's post hoc test used to identify which groups differed. Biochemical data, blood pressures, and angiographic data were adjusted for age, gender, and ethnicity; PWV was adjusted for age, gender, MBP, and ethnicity. The analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.'s inclusion and exclusion criteria [8,15]: fasting serum TG > 200 mg/dL defined the cases and TG < 150 mg/dL the controls, and subjects with morbid obesity (BMI > 40 kg/m2) or type 2 diabetes mellitus were excluded. Subjects were classified as high-TC if their serum TC levels were ≥ 240 mg/dL and as normal-TC if their serum TC levels were < 240 mg/dL in the absence of lipid-lowering medication. Subjects were also classified as having combined hyperlipidemia if their serum TC and TG levels were ≥ 240 mg/dL and > 200 mg/dL, respectively, and as controls if TC and TG levels were < 240 mg/dL and < 150 mg/dL, respectively [8,15]. All statistical analyses were carried out using SPSS software (v. 16.0), with the level of significance set at p < 0.05. Results: General characteristics according to MYLIP polymorphism Table 1 summarizes the general characteristics of both studied samples. General characteristics according to MYLIP rs9370867 genotypes aEthnicity was categorized as White, Black, Intermediate (persons with admixture between White and Black), and other (Amerindian and Oriental descent). bHypertension was defined as mean systolic blood pressure ≥ 140 mmHg and/or diastolic blood pressure ≥ 90 mmHg and/or use of antihypertensive drugs. cDiabetes was defined as fasting glucose ≥ 126 mg/dL and/or use of hypoglycemic drugs. dHyperlipidemia was defined as total cholesterol ≥ 240 mg/dL, low density lipoprotein cholesterol ≥ 160 mg/dL, and/or use of hypolipidemic drugs.
eIncreased arterial stiffness was defined as pulse wave velocity ≥ 12 m/s. fIndividuals who had smoked more than five cigarettes per day during the past year were classified as smokers. gAdjusted for age and gender. No difference in the frequencies of hyperlipidemia, diabetes, increased arterial stiffness, or smoking status according to MYLIP polymorphism was found; only hypertension frequency showed a trend (p = 0.05) in the general population (Table 1). In both the general population and patient samples, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity (Table 1). In the general population, the frequencies of the MYLIP rs9370867 A variant allele and of the homozygous genotype (AA) were higher in Whites (45.8% and 22.1%) than in Blacks (15.2% and 3.1%) (p < 0.001 for both). In the patients submitted to coronary angiography, the frequencies of the A variant allele and of the AA genotype were likewise higher in Whites (44.6% and 20.7%) than in Blacks (21.3% and 5.9%) (p < 0.001 for both). The genotypic distribution of the MYLIP rs9370867 polymorphism was in Hardy–Weinberg equilibrium within each ethnic group.
Biochemical, hemodynamic, and angiographic data according to MYLIP polymorphism Table 2 summarizes the biochemical, hemodynamic, and angiographic data of both studied samples. Biochemical, hemodynamic, and angiographic data according to MYLIP rs9370867 genotypes Continuous data are expressed as mean ± standard deviation. Values with different superscript letters are significantly different (Tukey's post hoc test). HDL-C: high density lipoprotein cholesterol; LDL-C: low density lipoprotein cholesterol. Biochemical data, blood pressures, ejection fraction, and angiographic scores were adjusted for age, gender, and ethnicity. Pulse wave velocity data were adjusted for age, gender, mean blood pressure, and ethnicity. There was no association of the MYLIP rs9370867 genotypes with TC, LDL-C, HDL-C, triglyceride, or glycemia values in either sample (Table 2).
In the general population, subjects carrying GG genotypes had higher SBP, DBP, and MBP values (129.0 ± 23.3; 84.9 ± 14.6; 99.5 ± 16.8 mmHg) than subjects carrying AA genotypes (123.7 ± 19.5; 81.6 ± 11.8; 95.6 ± 13.6 mmHg) (p = 0.01; p = 0.02; p = 0.01, respectively), even after adjustment for age, gender, and ethnicity (Table 2). This difference was not found in the patient sample. No association of the studied polymorphism with PWV values (available for the general population sample) or with angiographic data (extension and severity scores, available for the patient sample) was observed (Table 2).
Analysis stratified by hyperlipidemia, gender, and ethnicity The analysis stratified by hyperlipidemia was performed according to Weissglas-Volkov et al.'s inclusion and exclusion criteria (see Methods section for details) [8,15]. In this analysis, the sub-groups (controls and dyslipidemic subjects) presented results similar to those of the total sample in both the general population and patient samples, even after adjustment for covariates. Within the sub-groups, no association of the variables with genotypes was found, and no difference in variant allele frequency between sub-groups was observed (p > 0.05). In addition, the analyses stratified by gender and ethnicity did not identify significant results in either studied sample.
Discussion: A recent study reported that the MYLIP rs9370867 polymorphism was associated with TC levels in a Mexican dyslipidemic sample; this genetic finding was supported by functional data demonstrating that rs9370867 influences plasma cholesterol levels by modifying the degradation of the LDL receptor [15].
In this context, and in an attempt to replicate this important association, our main finding was that the polymorphism did not influence the lipid profile in either of the Brazilian samples studied: the general population and patients submitted to coronary angiography. Here, the frequency of the A allele was much higher in Whites than in Blacks. Previous studies reported that in African and Asian groups the frequency is relatively low (2% - 8%), whereas in individuals of European descent it is much higher (49% - 60%) [15,29,30]. The Brazilian population is one of the most heterogeneous in the world, a mixture of different ethnic groups composed mainly of European descendants, African descendants, and Amerindians. Thus, adjustment for ethnicity plus other covariates was performed, and an analysis stratified by ethnicity was made; nevertheless, no significant result was found. One limitation applies here: genetic markers of ancestry were not used; however, a validated questionnaire for ethnicity classification was employed. Likewise, this variation in allele frequency across ethnic groups influenced the blood pressure data in the general population sample. SBP, DBP, and MBP values were higher in the GG genotype group (which had the highest frequency of Blacks), while lower values were observed in the AA genotype group (which had the lowest frequency of Blacks). Adjustment for covariates plus ethnicity could not exclude the contribution of ethnicity to the observed results, and in the analysis stratified by ethnicity this finding was not reproduced in any ethnic group. Corroborating this observation, our group demonstrated in a recent study that SBP, DBP, and MBP values were higher in Black individuals than in the other ethnic groups in the Brazilian general population (p < 0.001) [16]. Thus, there is no evidence that the studied MYLIP polymorphism influences blood pressure.
In this study, the previously identified association was not replicated in either the general population or the patient sample. Two GWAS of populations of European descent have identified MYLIP genetic loci contributing to variation in serum lipids [12,13]. Weissglas-Volkov et al. investigated the MYLIP region in Mexican individuals in order to narrow the associated region and identified the variant p.N342S (rs9370867), associating this substitution with cholesterol levels in a Mexican dyslipidemic study sample [15]. It is important to note that the cited study observed the association only in Mexican dyslipidemic individuals. Our Brazilian general population sample allowed a first assessment of this association in a general population, and our second sample, patients submitted to coronary angiography, allowed an analysis with a higher frequency of dyslipidemic individuals. In both samples, even after performing the analysis according to Weissglas-Volkov et al.'s criteria (see Methods section), no association was observed. Patterns of linkage disequilibrium vary across populations and ethnicities according to previous studies [15,29,30]; thus, it is plausible that one or more MYLIP functional polymorphisms are differently distributed, leading to distinct findings. Conclusion: Our findings indicate that association studies involving this MYLIP variant can present distinct results according to the studied population. At present, further studies are needed to confirm whether the MYLIP p.N342S polymorphism is functional and to identify other functional markers in this locus. Competing interests: The authors declare that they have no competing interests. Authors' contributions: PCJLS carried out the molecular genetic studies and statistical analysis and drafted the manuscript. TGMO carried out the molecular genetic studies. ACP participated in the design of the study, statistical analysis, and manuscript preparation.
ACP, PAL, JGM, and JEK participated in the design of the study and were responsible for individual selection and characterization. All authors contributed critically to the manuscript and read and approved the final version.
Background: A recent study investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants of this locus. The p.N342S polymorphism was identified as the underlying functional variant accounting for one of the previous signals of genome-wide association studies, and the N342 allele was associated with higher cholesterol concentrations in Mexican dyslipidemic individuals. To date, there has been no further evaluation of this genotype-phenotype association in the literature. In this scenario, and because of a possible pharmacotherapeutic target for dyslipidemia, the main aim of this study was to assess the influence of the MYLIP p.N342S polymorphism on the lipid profile in Brazilian individuals. Methods: A total of 1295 subjects from the general population and 1425 consecutive patients submitted to coronary angiography were selected. General characteristics, biochemical tests, blood pressures, pulse wave velocity, and coronary artery disease scores were analyzed. Genotypes for the MYLIP rs9370867 (p.N342S, c.G1025A) polymorphism were detected by high resolution melting analysis. Results: No association of the MYLIP rs9370867 genotypes with lipid profile, hemodynamic data, or coronary angiographic data was found. Analyses stratified by hyperlipidemia, gender, and ethnicity were also performed, and the sub-groups presented similar results. In both the general population and patient samples, the MYLIP rs9370867 polymorphism was differently distributed according to ethnicity. In the general population, subjects carrying GG genotypes had higher systolic blood pressure (BP), diastolic BP, and mean BP values (129.0 ± 23.3; 84.9 ± 14.6; 99.5 ± 16.8 mmHg) than subjects carrying AA genotypes (123.7 ± 19.5; 81.6 ± 11.8; 95.6 ± 13.6 mmHg) (p = 0.01; p = 0.02; p = 0.01, respectively), even after adjustment for covariates. However, in the analysis stratified by ethnicity, this finding was not observed, and there is no evidence that the polymorphism influences BP.
Conclusions: Our findings indicate that association studies involving this MYLIP variant can present distinct results according to the studied population. At present, further studies are needed to confirm whether the MYLIP p.N342S polymorphism is functional and to identify other functional markers within this gene.
Background: Lipid profile disorders have been significantly associated with risk of cardiovascular disease (CVD), which is also influenced by genetic factors, hypertension, type 2 diabetes mellitus, obesity, and smoking. CVD is the main cause of morbidity and mortality in developed countries, and its financial cost is enormous. Thus, guidelines from the National Cholesterol Education Program (NCEP) rely on low-density lipoprotein cholesterol (LDL-C) for the prevention of CVD [1-6]. A conventional lipid panel reports several parameters, including total cholesterol (TC), LDL-C, high-density lipoprotein cholesterol (HDL-C), and triglycerides. Of these, the NCEP and the American Heart Association recommend using LDL-C as the primary target of therapy in conjunction with assessing cardiovascular risk factors. The Third Report of the Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III, or ATP III - NCEP) updated the clinical cut-offs for cholesterol testing: high TC (≥ 240 mg/dL), high LDL-C (≥ 160 mg/dL), and low HDL-C (< 40 mg/dL) [3,7,8]. Population genetic and epidemiological studies can help to assess the etiologic role of the lipid profile in CVD, and novel genetic determinants of blood lipids can provide new insights into biological pathways and identify novel therapeutic targets. Accordingly, recent genome-wide association studies (GWAS) have identified genetic loci contributing to inter-individual variation in serum lipid concentrations [9-12]. Some GWAS, using cohorts of mixed European descent, identified non-coding polymorphisms in the region of the MYLIP gene that were associated with LDL-C concentrations [12-14]. The MYLIP functional variant and the mechanistic basis of these associations were recently proposed by Weissglas-Volkov et al. [15].
The MYLIP gene encodes a regulator of the LDL receptor pathway for cellular cholesterol uptake called MYLIP (myosin regulatory light chain interacting protein; also known as IDOL). Weissglas-Volkov et al. investigated the MYLIP region in the Mexican population in order to fine-map the actual susceptibility variants. They identified the non-synonymous polymorphism rs9370867 (p.N342S) as the underlying functional variant accounting for one of the previous genome-wide significant signals and associated the N342 allele with higher TC concentrations in Mexican dyslipidemic individuals [15]. To date, there has been no further evaluation of this genotype-phenotype association (MYLIP p.N342S – lipid profile). In this scenario, the main aim of this study was to assess the influence of the MYLIP polymorphism on the lipid profile in Brazilian individuals.
Keywords: MYLIP | p.N342S | rs9370867 | Lipid profile | Cholesterol | Ethnicity | Brazilian
MeSH terms: Adult | Aged | Amino Acid Substitution | Brazil | Coronary Angiography | Coronary Artery Disease | Female | Genetic Association Studies | Humans | Hyperlipidemias | Hypertension | Lipids | Male | Middle Aged | Phenotype | Polymorphism, Single Nucleotide | Radionuclide Imaging | Sequence Analysis, DNA | Ubiquitin-Protein Ligases | Urban Population | Vascular Stiffness
[CONTENT] density lipoprotein cholesterol | cholesterol hdl triglycerides | guidelines cholesterol testing | cholesterol ldl prevention | guidelines national cholesterol [SUMMARY]
[CONTENT] density lipoprotein cholesterol | cholesterol hdl triglycerides | guidelines cholesterol testing | cholesterol ldl prevention | guidelines national cholesterol [SUMMARY]
[CONTENT] density lipoprotein cholesterol | cholesterol hdl triglycerides | guidelines cholesterol testing | cholesterol ldl prevention | guidelines national cholesterol [SUMMARY]
[CONTENT] density lipoprotein cholesterol | cholesterol hdl triglycerides | guidelines cholesterol testing | cholesterol ldl prevention | guidelines national cholesterol [SUMMARY]
[CONTENT] density lipoprotein cholesterol | cholesterol hdl triglycerides | guidelines cholesterol testing | cholesterol ldl prevention | guidelines national cholesterol [SUMMARY]
[CONTENT] density lipoprotein cholesterol | cholesterol hdl triglycerides | guidelines cholesterol testing | cholesterol ldl prevention | guidelines national cholesterol [SUMMARY]
[CONTENT] mylip | according | dl | mg dl | mg | analysis | blood | data | general | rs9370867 [SUMMARY]
[CONTENT] cholesterol | ldl | lipid | mylip | profile | lipid profile | genetic | cvd | ncep | panel [SUMMARY]
[CONTENT] mg dl | dl | mg | points | coronary | pressure | tc | analysis | dna | segments [SUMMARY]
[CONTENT] table | general | mylip | mylip rs9370867 | rs9370867 | data | according | genotypes | general population | gender [SUMMARY]
[CONTENT] functional | studies | mylip | distinct results according | distinct results according studied | studied population | studied population moment | studied population moment studies | according studied population | according studied population moment [SUMMARY]
[CONTENT] mylip | mg dl | mg | dl | analysis | according | data | pressure | study | general [SUMMARY]
[CONTENT] MYLIP | Mexican ||| one | N342 | Mexican ||| ||| Brazilian [SUMMARY]
[CONTENT] 1295 | 1425 ||| ||| rs9370867 | c.G1025A [SUMMARY]
[CONTENT] rs9370867 ||| ||| rs9370867 ||| 129.0 | 23.3 | 84.9 | 14.6 | 99.5 | 16.8 | 123.7 | 81.6 | 11.8 | 95.6 | 13.6 | 0.01 | 0.02 | 0.01 ||| BP [SUMMARY]
[CONTENT] ||| [SUMMARY]
[CONTENT] MYLIP | Mexican ||| one | N342 | Mexican ||| ||| Brazilian ||| 1425 ||| ||| rs9370867 | c.G1025A ||| ||| rs9370867 ||| ||| rs9370867 ||| 129.0 | 23.3 | 84.9 | 14.6 | 99.5 | 16.8 | 123.7 | 81.6 | 11.8 | 95.6 | 13.6 | 0.01 | 0.02 | 0.01 ||| BP ||| ||| [SUMMARY]
A Clinical Risk Score to Predict In-hospital Mortality from COVID-19 in South Korea.
33876588
Early identification of patients with coronavirus disease 2019 (COVID-19) who are at high risk of mortality is of vital importance for appropriate clinical decision making and delivering optimal treatment. We aimed to develop and validate a clinical risk score for predicting mortality at the time of admission of patients hospitalized with COVID-19.
BACKGROUND
Collaborating with the Korea Centers for Disease Control and Prevention (KCDC), we established a prospective consecutive cohort of 5,628 patients with confirmed COVID-19 infection who were admitted to 120 hospitals in Korea between January 20, 2020, and April 30, 2020. The cohort was randomly divided using a 7:3 ratio into a development (n = 3,940) and validation (n = 1,688) set. Clinical information and complete blood count (CBC) findings obtained at admission were analyzed using Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression to construct a predictive risk score (COVID-Mortality Score). The discriminative power of the risk model was assessed by calculating the area under the curve (AUC) of the receiver operating characteristic curves.
METHODS
The incidence of mortality was 4.3% in both the development and validation set. A COVID-Mortality Score consisting of age, sex, body mass index, combined comorbidity, clinical symptoms, and CBC was developed. AUCs of the scoring system were 0.96 (95% confidence interval [CI], 0.85-0.91) and 0.97 (95% CI, 0.84-0.93) in the development and validation set, respectively. If the model was optimized for > 90% sensitivity, accuracies were 81.0% and 80.2% with sensitivities of 91.7% and 86.1% in the development and validation set, respectively. The optimized scoring system has been applied to the public online risk calculator (https://www.diseaseriskscore.com).
RESULTS
This clinically developed and validated COVID-Mortality Score, using clinical data available at the time of admission, will aid clinicians in predicting in-hospital mortality.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "COVID-19", "Child", "Child, Preschool", "Female", "Hospital Mortality", "Humans", "Infant", "Infant, Newborn", "Logistic Models", "Male", "Middle Aged", "Prospective Studies", "Republic of Korea", "SARS-CoV-2", "Young Adult" ]
8055508
INTRODUCTION
The outbreak of coronavirus disease 2019 (COVID-19) in Wuhan, China in December 2019 was rapidly followed by worldwide outbreaks.123 As of March 22, 2021, the World Health Organization (WHO) reported a total of 123,877,740 COVID-19 cases globally, with an average mortality of 2.2%. In many patients the disease is mild or self-limiting; in a considerable proportion, however, it is severe and fatal. Consequently, it is vital to identify in advance those patients who are at greatest risk of mortality, to enable prompt referral to appropriate care settings and improve outcomes. Recently, clinical scores to predict critical illness and/or a fatal outcome from COVID-19 were developed in a cohort of 1,590 Chinese patients treated in more than 575 centers throughout China.4 Liang et al.4 identified 10 independent predictive factors (abnormal chest radiograph, age, hemoptysis, dyspnea, unconsciousness, number of comorbidities, cancer history, neutrophil-to-lymphocyte ratio, lactate dehydrogenase, and direct bilirubin), which were used to produce a risk score with a mean area under the curve (AUC) of 0.88 in both the development (95% confidence interval [CI], 0.85–0.91) and validation cohorts (95% CI, 0.84–0.93) for estimating the risk that a hospitalized patient with COVID-19 will develop critical illness. Further reports have described methods and new severity scores for assessing the disease severity and mortality of COVID-19 infection (Table 1).56789 However, the mortality of COVID-19 varies according to race and ethnicity, and the accuracy of risk scores is therefore not necessarily transferable between countries.10 Accordingly, the present study aimed to develop a novel COVID-19 in-hospital mortality risk score (hereafter referred to as the COVID-Mortality Score), based on data rapidly obtainable soon after hospital admission in South Korea.
COVID-19 = coronavirus disease 2019, CI = confidence interval.
METHODS
Data sources and processing
We obtained medical records from laboratory-confirmed hospitalized cases with COVID-19 reported to the Korea Centers for Disease Control and Prevention (KCDC) between January 2020 and April 2020. All 120 hospitals in Korea that were assigned to treat COVID-19 patients submitted the clinical data of all their hospitalized cases with laboratory-confirmed COVID-19 infection to the KCDC by April 30, 2020. All patients with COVID-19 were diagnosed and treated according to the guidelines published by the KCDC (http://www.cdc.go.kr). According to the KCDC guidelines, laboratory confirmation of COVID-19 infection was defined as a positive result on a real-time reverse-transcription polymerase-chain-reaction assay of nasal and oropharyngeal swabs. All the patients analyzed either died in hospital or were discharged home, although a limitation is that we could not determine whether in-hospital deaths were from causes other than COVID-19 or underlying disease. We collected data on the clinical status of all COVID-19 patients at hospitalization (clinical symptoms and signs, complete blood count (CBC) findings, disease severity, and discharge status). The patients' data were collected up to death or discharge from the hospital. Ordinal variables were converted into separate dichotomous variables. We randomly divided the cohort into a development set and a validation set at a 7:3 ratio. Imputation for missing variables was considered if missing values were less than 30%. We used a predictive random forest algorithm for the imputation.
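The 7:3 random division described above can be sketched in a few lines (an illustrative sketch, not the authors' code; the function name and seed are arbitrary):

```python
import random

def split_cohort(patients, dev_ratio=0.7, seed=42):
    """Randomly divide a patient cohort into development and validation sets."""
    shuffled = list(patients)              # copy so the original order is untouched
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * dev_ratio)
    return shuffled[:cut], shuffled[cut:]

# 5,628 patients split at 7:3 yields 3,940 / 1,688, matching the study's sets
cohort = list(range(5628))
dev, val = split_cohort(cohort)
```

A fixed seed keeps the split reproducible; the study does not report its randomization procedure, so this is only the generic approach.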
Potential predictive variables
The 35 potential predictive variables included the following patient characteristics at hospital admission: demographic variables and body mass index (BMI), medical history, clinical signs and symptoms, and CBC findings. Demographic variables (age and sex) and BMI were collected for the study. Medical history included diabetes mellitus, hypertension, heart failure, cardiovascular disease, bronchial asthma, chronic obstructive pulmonary disease, chronic renal disease, malignancy, chronic liver disease, autoimmune disease, and dementia.
Clinical signs and symptoms included categorical and continuous variables as follows: systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate, body temperature, fever, cough, sputum, sore throat, rhinorrhea, myalgia, fatigue, dyspnea, headache, unconsciousness, nausea/vomiting, and diarrhea. CBC findings included hemoglobin, hematocrit, lymphocyte, platelet, and white blood cell (WBC) counts.
Outcomes
We adopted in-hospital death as the end point because death is the most serious outcome of COVID-19.
Selection of variables and construction of scoring system
We included all 3,940 patients hospitalized with COVID-19 in the development set for variable selection and development of the mortality score. As previously stated, 35 variables entered the selection process. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for the initial variable selection.
For datasets with a low events-per-variable ratio, LASSO is more appropriate than stepwise regression analysis, and it is better suited to regression models with high-dimensional predictors. It is a penalized logistic regression model that selects the subset of predictors minimizing prediction error. In penalized regression, a constant λ must be specified to adjust the amount of coefficient shrinkage: with larger penalties, the estimates of weaker factors shrink toward zero, so that only the strongest predictors remain in the model.1 We used LASSO regression augmented with 10-fold cross-validation for internal validation. The penalty value that minimized the cross-validation prediction error rate was taken as the minimum (λ min). The R package "glmnet" (R Foundation) was used to perform the LASSO regression. The variables identified by LASSO regression that remained independently significant in logistic regression analysis were used to generate the risk prediction model (COVID-Mortality Score). The COVID-Mortality Score was generated on the basis of coefficients from the logistic model, using the following equation to estimate the probability: probability = exp(Σβ × X)/[1 + exp(Σβ × X)].
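The probability equation above is the standard logistic transform of the linear predictor; a minimal sketch (the coefficients below are made up for illustration, not the published model weights):

```python
import math

def logistic_probability(coefficients, features, intercept=0.0):
    """Estimate probability = exp(sum(b*x)) / (1 + exp(sum(b*x))),
    i.e. the logistic transform of the linear predictor sum(b*x)."""
    linear = intercept + sum(b * x for b, x in zip(coefficients, features))
    return math.exp(linear) / (1.0 + math.exp(linear))

# Illustrative two-variable example with hypothetical coefficients
p = logistic_probability([0.9, -0.4], [1.0, 2.0], intercept=-1.5)
```

With a zero linear predictor the transform returns exactly 0.5, which is a convenient sanity check on any implementation.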
Assessment of accuracy of prediction model
The accuracy of the COVID-Mortality Score was assessed using the AUC of the receiver-operating-characteristic curve. For internal validation, we used 200 bootstrap resamplings. Statistical analysis was performed with R software (version 4.0.2, R Foundation), and P < 0.05 was considered statistically significant. To validate the generalizability of the COVID-Mortality Score, we used data from the 1,688 patients not included in the development set.
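The AUC used here can be computed directly as the Mann-Whitney rank statistic, i.e. the probability that a randomly chosen death receives a higher score than a randomly chosen survivor; a small illustrative sketch (not the authors' R code):

```python
def auc_score(labels, scores):
    """AUC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive outranks the negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Perfect separation of deaths from survivors gives an AUC of 1.0, while a random score hovers near 0.5.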
Ethics statement
This study was approved by the ethics committee of the Korea Centers for Disease Control and Prevention (KCDC), and written informed consent was waived because of the de-identified retrospective nature of the publicly available data.
RESULTS
Characteristics of the development set
In the development set, on hospital admission, 134 of 3,940 patients (3.4%) were admitted directly to the intensive care unit (ICU), with the rest (96.6%) admitted to the general ward. The end point of mortality occurred in 4.3% (n = 169). Patients who died were more likely to be men and to have a BMI < 18.5, hypertension, an ICU admission, and more comorbidities than those who survived (Table 2). Fever (23.1%), cough (42.0%), sputum (29.0%), sore throat (15.8%), rhinorrhea (10.9%), myalgia (16.3%), fatigue (4.0%), dyspnea (12.3%), headache (16.8%), unconsciousness (0.6%), nausea/vomiting (4.5%), and diarrhea (9.1%) were the commonest symptoms. CBC findings of the development set are also presented in Table 2. When CBC findings such as WBC, lymphocyte, platelet, and hemoglobin were evaluated as continuous variables, they did not exhibit a significant U-shaped pattern.
Data are mean ± standard deviation or number (percentage), where No. is the total number of patients with available data. WBC = white blood cell.
Predictor selection
The 35 variables measured at hospital admission (Table 2; see Methods) were included in the LASSO regression. After LASSO regression selection (Supplementary Fig. 1), 17 variables remained as significant predictors of death: age, sex, BMI, SBP, heart rate, body temperature, rhinorrhea, dyspnea, unconsciousness, diabetes mellitus, hypertension, chronic obstructive pulmonary disease, malignancy, WBC, lymphocyte, platelet, and hemoglobin. Inclusion of these 17 variables in a logistic regression model yielded 13 variables that were independently statistically significant predictors of death and were included in the risk score. These were age 60–69 years (odds ratio [OR], 3.63; 95% CI, 1.64–8.01; P = 0.001), age 70–79 years (OR, 6.12; 95% CI, 2.84–13.16; P < 0.001), age ≥ 80 years (OR, 21.24; 95% CI, 9.65–46.74; P < 0.001), male sex (OR, 1.67; 95% CI, 1.04–2.67; P = 0.034), BMI < 18.5 (OR, 3.38; 95% CI, 1.64–6.95; P < 0.001), diabetes mellitus (OR, 2.10; 95% CI, 1.33–3.31; P = 0.001), malignancy history (OR, 2.78; 95% CI, 1.14–6.79; P = 0.025), dementia (OR, 2.67; 95% CI, 1.49–4.78; P < 0.001), rhinorrhea (OR, 0.27; 95% CI, 0.08–0.91; P = 0.035), dyspnea (OR, 4.03; 95% CI, 2.50–6.48; P < 0.001), unconsciousness (OR, 25.10; 95% CI, 6.55–96.18; P < 0.001), WBC (OR per 10³/μL, 1.10; 95% CI, 1.04–1.17; P < 0.001), lower lymphocyte proportion (OR per %, 0.92; 95% CI, 0.89–0.94; P < 0.001), lower platelet count (OR per 10⁴/μL, 0.90; 95% CI, 0.88–0.93; P < 0.001), and lower hemoglobin level (OR per g/dL, 0.81; 95% CI, 0.72–0.92; P = 0.001) (Table 3). Each age group was compared with all other age groups. OR = odds ratio, BMI = body mass index, WBC = white blood cell.
The performance, validation and optimization of COVID-Mortality score
The clinical characteristics and CBC findings of the validation set are presented in Supplementary Table 1.
The AUCs of the model with the development set and validation set were 0.97 (95% CI, 0.85–0.91, Fig. 1) and 0.96 (95% CI, 0.84–0.93, Fig. 2), respectively. When the scoring system was optimized for > 90% sensitivity by stratifying age groups, the accuracy was 81.0% with 91.7% sensitivity and 80.5% specificity in the development set. In the validation set, the accuracy was 80.2% with 86.1% sensitivity and 80.0% specificity. The optimized scoring system was used to construct the online risk calculator (https://www.diseaseriskscore.com). The online risk calculator determines whether a patient belongs to the high-risk or low-risk group and presents the hazard ratio and the patient's rank percentile for mortality.
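Optimizing a score for > 90% sensitivity amounts to choosing the largest cutoff that still captures the target fraction of deaths; a hypothetical sketch of that idea (not the authors' actual procedure, which also stratified by age group):

```python
import math

def threshold_for_sensitivity(labels, scores, target=0.90):
    """Pick the largest cutoff whose sensitivity (recall on deaths)
    still meets the target, trading specificity for sensitivity.
    Scores >= the returned cutoff are classified as high risk."""
    pos = sorted((s for y, s in zip(labels, scores) if y == 1), reverse=True)
    k = math.ceil(target * len(pos))   # number of deaths that must be captured
    return pos[k - 1]
```

Raising the cutoff above this value would miss too many deaths; lowering it only sacrifices specificity.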
The model-derived score thresholds used for the optimized scoring system were 0.51 for age under 40, 0.00252 for the 40s, 0.00176 for the 50s, 0.02302 for the 60s, 0.09532 for the 70s, and 0.13311 for age 80 or older.
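Applying these age-stratified cutoffs to classify a patient could look like the following sketch (only the threshold values come from the text; the function names and dictionary layout are hypothetical):

```python
# Model-derived score thresholds reported in the text, keyed by age band.
THRESHOLDS = {
    "<40": 0.51, "40s": 0.00252, "50s": 0.00176,
    "60s": 0.02302, "70s": 0.09532, ">=80": 0.13311,
}

def age_band(age):
    """Map an age in years to the band used for threshold lookup."""
    if age < 40:
        return "<40"
    if age >= 80:
        return ">=80"
    return f"{(age // 10) * 10}s"

def risk_group(age, score):
    """Classify a patient as high- or low-risk by comparing the model
    probability with the age-stratified cutoff."""
    return "high" if score >= THRESHOLDS[age_band(age)] else "low"
```

The same model probability can therefore place a 35-year-old in the low-risk group while placing a 65-year-old in the high-risk group, reflecting the much lower baseline mortality in younger patients.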
[ "Data sources and processing", "Potential predictive variables", "Outcomes", "Selection of variables and construction of scoring system", "Assessment of accuracy of prediction model", "Ethics statement", "Characteristics of the development set", "Predictor selection", "The performance, validation and optimization of COVID-Mortality score" ]
[ "We obtained medical records from laboratory-confirmed hospitalized cases with COVID-19 reported to the Korea Centers for Disease Control and Prevention (KCDC) between January 2020 and April 2020. All 120 hospitals in Korea that were assigned to treat COVID-19 patients submitted the clinical data of all their hospitalized cases with laboratory-confirmed COVID-19 infection to the KCDC by April 30, 2020. All patients with COVID-19 were diagnosed and treated according to the guidelines published by the KCDC (http://www.cdc.go.kr). According to the KCDC guidelines, laboratory confirmation for COVID-19 infection was defined as a positive result on real-time reverse-transcription polymerase-chain-reaction assay of nasal and oropharyngeal swabs. All the patients analyzed either died in hospital or were discharged home despite the limitations that we could not acquire the information for in-hospital mortality due to other causes except COVID-19 or underlying diseases. We collected the data of all COVID-19 patients on clinical status at hospitalization (clinical symptoms and signs, complete blood count (CBC) findings, disease severity, and discharge status). The patients' data were collected up to death or discharge from the hospital. Ordinary variables were converted into separated dichotomous variable. We randomly selected a development and validation set using a 7:3 ratio, respectively. Imputation for missing variables was considered if missing values were less than 30%. We used predictive random forest algorithm for the imputation.", "The 35 potential predictive variables included the following patient characteristics at hospital admission: demographic variables and body mass index (BMI), medical history, clinical signs and symptoms, and CBC findings. Demographic variables such as age and sex and BMI were collected for the study. 
Medical history included diabetes mellitus, hypertension, heart failure, cardiovascular disease, bronchial asthma, chronic obstructive pulmonary disease, chronic renal disease, malignancy, chronic liver disease, autoimmune disease, and dementia. Clinical signs and symptoms included categorical and continuous variables as follows: systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate, body temperature, fever, cough, sputum, sore throat, rhinorrhea, myalgia, fatigue, dyspnea, headache, unconsciousness, nausea/vomiting, and diarrhea. CBC findings were included as follows: hemoglobin, hematocrit, lymphocyte, platelet, and white blood cell (WBC).", "We adopted in-hospital death as the end point because death is the most serious outcome of COVID-19.", "We included all 3,940 patients hospitalized with COVID-19 in the development set for selection of variables and development of mortality score. As previously stated, the 35 variables were involved in the selection process. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for the initial variable selection. For datasets with a low events per variables ratio, LASSO is more appropriate than the stepwise regression analysis, and it is more satisfactory for regression models with high-dimensional predictors. This is a logistic regression model that obtains the subset of predictors that minimized prediction error for a quantitative variable. In penalized regression, it is needed to specify a constant λ to adjust the amount of the coefficient shrinkage. With larger penalties, the estimates of weaker factors shrink toward zero so that only the strongest predictors remain in the model.1 We used LASSO regression augmented with 10-fold cross-validation for internal validation. The best covariates that minimized the cross-validation prediction error rate were defined as the minimum (λ min). 
The R package “glmnet” statistical software (R Foundation) was used to perform the LASSO regression. The variables identified by LASSO regression analysis that were independently significant in logistic regression analysis were used to generate the risk prediction model (COVID-Mortality Score). The COVID-Mortality Score was generated on the basis of coefficients from the logistic model. We used the following equation to estimate the probability: probability = exp(Σβ × X)/[1 + exp (Σβ × X)].", "The accuracy of COVID-Mortality Score was analyzed for using the AUC of the receiver-operator characteristic curve. For internal validation, we used 200 bootstrap resamplings. Statistical analysis was performed with R software (version 4.0.2, R Foundation), and P < 0.05 was considered statistically significant. To validate the generalizability of COVID-Mortality Score, we used data from the 1,688 patients not included in the development set.", "This study was approved by the ethics committee of the Korea Centers for Disease Control and Prevention (KCDC) and written informed consent was exempted because of the de-identified retrospective nature of the publicly available data.", "In the development set, on hospital admission, 134 of 3,940 patients (3.4%) were admitted directly to the intensive care unit (ICU), with the rest (96.6%) admitted to the general ward. The end point of mortality occurred in 4.3% (n = 169). Patients who died were more likely to be men, and were more likely to have a BMI < 18.5, hypertension, an admission to the ICU and more comorbidities than those who lived (Table 2). Fever (23.1%), cough (42.0%), sputum (29.0%), sore throat (15.8%), rhinorrhea (10.9%), myalgia (16.3%), fatigue (4.0%), dyspnea (12.3%), headache (16.8%), unconsciousness (0.6%), nausea/vomiting (4.5%), and diarrhea (9.1%) were the commonest symptoms. CBC findings of the development set are also presented in Table 2. 
When CBC findings such as WBC, lymphocyte, platelet, and hemoglobin were evaluated as continuous variables, they did not exhibit a significant U-shaped pattern.\nData are mean ± standard deviation or number (percentage), where No. is the total number of patients with available data.\nWBC = white blood cell.", "The 35 variables measured at hospital admission (Table 2; see Methods) were included in the LASSO regression. After LASSO selection (Supplementary Fig. 1), 17 variables remained as significant predictors of death: age, sex, BMI, SBP, heart rate, body temperature, the symptoms rhinorrhea, dyspnea, and unconsciousness, the comorbidities diabetes mellitus, hypertension, chronic obstructive pulmonary disease, and malignancy, and the CBC findings WBC, lymphocyte, platelet, and hemoglobin. Entering these 17 variables into a logistic regression model yielded 13 variables that were independently statistically significant predictors of death; these were included in the risk score. These variables were age 60–69 years (odds ratio [OR], 3.63; 95% CI, 1.64–8.01; P = 0.001), age 70–79 years (OR, 6.12; 95% CI, 2.84–13.16; P < 0.001), age ≥ 80 years (OR, 21.24; 95% CI, 9.65–46.74; P < 0.001), male sex (OR, 1.67; 95% CI, 1.04–2.67; P = 0.034), BMI < 18.5 (OR, 3.38; 95% CI, 1.64–6.95; P < 0.001), diabetes mellitus (OR, 2.10; 95% CI, 1.33–3.31; P = 0.001), malignancy history (OR, 2.78; 95% CI, 1.14–6.79; P = 0.025), dementia (OR, 2.67; 95% CI, 1.49–4.78; P < 0.001), rhinorrhea (OR, 0.27; 95% CI, 0.08–0.91; P = 0.035), dyspnea (OR, 4.03; 95% CI, 2.50–6.48; P < 0.001), unconsciousness (OR, 25.10; 95% CI, 6.55–96.18; P < 0.001), WBC (OR per 10³/μL, 1.10; 95% CI, 1.04–1.17; P < 0.001), lower lymphocyte proportion (OR per %, 0.92; 95% CI, 0.89–0.94; P < 0.001), lower platelet count (OR per 10⁴/μL, 0.90; 95% CI, 0.88–0.93; P < 0.001), and lower hemoglobin level (OR per g/dL, 0.81; 95% CI, 0.72–0.92; P = 0.001) (Table 3).\nEach age group was compared with all other age groups.\nOR = odds ratio, BMI = body mass index, WBC = white blood cell.", "The clinical characteristics and CBC findings of the validation set are presented in Supplementary Table 1. The AUCs of the model in the development and validation sets were 0.97 (95% CI, 0.85–0.91; Fig. 1) and 0.96 (95% CI, 0.84–0.93; Fig. 2), respectively. When the scoring system was optimized for > 90% sensitivity by stratifying age groups, the accuracy was 81.0%, with 91.7% sensitivity and 80.5% specificity, in the development set. In the validation set, the accuracy was 80.2%, with 86.1% sensitivity and 80.0% specificity. The optimized scoring system was used to construct the online risk calculator (https://www.diseaseriskscore.com). The online risk calculator determines whether a patient belongs to the high-risk or low-risk group and presents the hazard ratio and rank percentile for mortality. The model-derived score thresholds used for the optimized scoring system were 0.51 for age under 40 years, 0.00252 for ages 40–49, 0.00176 for ages 50–59, 0.02302 for ages 60–69, 0.09532 for ages 70–79, and 0.13311 for age 80 years or older." ]
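To make the scoring logic concrete, the sketch below shows how logistic coefficients are turned into a probability via probability = exp(Σβ × X)/[1 + exp(Σβ × X)] and then compared against an age-stratified cutoff, as the optimized system does. The coefficients here are hypothetical placeholders for illustration only; the single cutoff value 0.13311 (the 80-and-over stratum) is the only number taken from the text.

```python
import math

# Hypothetical coefficients for illustration only -- NOT the published
# COVID-Mortality Score weights (those appear in the paper's Table 3).
BETA = {"intercept": -6.0, "age_ge_80": 3.06, "male": 0.51, "dyspnea": 1.39}

def mortality_probability(features):
    """Logistic model: p = exp(sum(beta * x)) / (1 + exp(sum(beta * x)))."""
    z = BETA["intercept"] + sum(BETA[k] * v for k, v in features.items())
    return math.exp(z) / (1.0 + math.exp(z))

def classify(prob, age_group_cutoff):
    """Age-stratified cutoff, as in the optimized scoring system."""
    return "high risk" if prob >= age_group_cutoff else "low risk"

# Example: an 80+ male patient with dyspnea, scored against the
# 0.13311 cutoff the text reports for the 80-and-over stratum.
p = mortality_probability({"age_ge_80": 1, "male": 1, "dyspnea": 1})
group = classify(p, 0.13311)
```

Because the logistic transform is monotone in the linear predictor, any positive coefficient raises the estimated probability, which is why the cutoffs can be tuned per age stratum without refitting the model.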
[ "INTRODUCTION", "METHODS", "Data sources and processing", "Potential predictive variables", "Outcomes", "Selection of variables and construction of scoring system", "Assessment of accuracy of prediction model", "Ethics statement", "RESULTS", "Characteristics of the development set", "Predictor selection", "The performance, validation and optimization of COVID-Mortality score", "DISCUSSION" ]
[ "Since the outbreak of coronavirus disease 2019 (COVID-19) in Wuhan, China in December 2019, worldwide outbreaks rapidly followed.1,2,3 As of March 22, 2021, the World Health Organization (WHO) reported a total of 123,877,740 COVID-19 cases globally, with an average mortality of 2.2%. In many patients the disease is mild or self-limiting; however, in a considerable proportion of patients it is severe and fatal. Consequently, it is vital to identify in advance those patients who are at the greatest risk of mortality, to enable prompt referral to appropriate care settings and thereby improve outcomes.\nRecently, clinical scores to predict the occurrence of critical illness and/or a fatal outcome of COVID-19 were developed in a cohort of 1,590 Chinese patients treated in more than 575 centers throughout China.4 Liang et al.4 identified 10 independent predictive factors (abnormal chest radiograph, age, hemoptysis, dyspnea, unconsciousness, number of comorbidities, cancer history, neutrophil-to-lymphocyte ratio, lactate dehydrogenase, and direct bilirubin), which were used to produce a risk score with a mean area under the curve (AUC) of 0.88 in both the development (95% confidence interval [CI], 0.85–0.91) and validation (95% CI, 0.84–0.93) cohorts for estimating the risk that a hospitalized patient with COVID-19 will develop critical illness. 
Further reports have described methods or new severity scores for assessing the disease severity and mortality of COVID-19 infection (Table 1).5,6,7,8,9 However, it is important to understand that the mortality of COVID-19 varies according to race and ethnicity, and therefore the accuracy of risk scores is not necessarily transferrable between countries.10 Therefore, the present study aimed to develop a novel COVID-19 in-hospital mortality risk score (hereafter referred to as the COVID-Mortality Score) based on data rapidly obtainable soon after hospital admission in South Korea.\nCOVID-19 = coronavirus disease 2019, CI = confidence interval.", "Data sources and processing We obtained the medical records of laboratory-confirmed hospitalized cases of COVID-19 reported to the Korea Centers for Disease Control and Prevention (KCDC) between January 2020 and April 2020. All 120 hospitals in Korea that were assigned to treat COVID-19 patients submitted the clinical data of all their hospitalized cases with laboratory-confirmed COVID-19 infection to the KCDC by April 30, 2020. All patients with COVID-19 were diagnosed and treated according to the guidelines published by the KCDC (http://www.cdc.go.kr). According to the KCDC guidelines, laboratory confirmation of COVID-19 infection was defined as a positive result on a real-time reverse-transcription polymerase-chain-reaction assay of nasal and oropharyngeal swabs. All the patients analyzed either died in hospital or were discharged home; a limitation is that we could not determine whether in-hospital deaths were attributable to causes other than COVID-19 or the underlying diseases. We collected data on the clinical status of all COVID-19 patients at hospitalization (clinical symptoms and signs, complete blood count (CBC) findings, disease severity, and discharge status). The patients' data were collected up to death or discharge from the hospital. Ordinal variables were converted into separate dichotomous variables. We randomly split the patients into development and validation sets at a 7:3 ratio. Imputation for missing variables was considered if missing values were less than 30%. 
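The 7:3 split and random-forest imputation described here can be sketched as follows. This is an illustrative scikit-learn analogue on synthetic data, not the authors' code; the variable names `age` and `wbc` are stand-ins for the real admission variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for admission data: one complete column (age) and one
# column (WBC) with 10% missing values, below the 30% imputation threshold.
n = 200
age = rng.uniform(20, 90, n)
wbc = 6.0 + 0.02 * age + rng.normal(0, 1, n)
wbc[rng.choice(n, 20, replace=False)] = np.nan

# 7:3 development/validation split, as in the study.
dev_idx, val_idx = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Random-forest imputation: predict the missing column from observed columns.
obs = ~np.isnan(wbc)
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(age[obs].reshape(-1, 1), wbc[obs])
wbc_imputed = wbc.copy()
wbc_imputed[~obs] = rf.predict(age[~obs].reshape(-1, 1))
```

A production missForest-style imputer would iterate this prediction step over all incomplete columns until convergence; the single pass above is the core idea.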
We used a predictive random forest algorithm for the imputation.\nPotential predictive variables The 35 potential predictive variables comprised the following patient characteristics at hospital admission: demographic variables and body mass index (BMI), medical history, clinical signs and symptoms, and CBC findings. The demographic variables age and sex, together with BMI, were collected for the study. Medical history included diabetes mellitus, hypertension, heart failure, cardiovascular disease, bronchial asthma, chronic obstructive pulmonary disease, chronic renal disease, malignancy, chronic liver disease, autoimmune disease, and dementia. Clinical signs and symptoms included the following categorical and continuous variables: systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate, body temperature, fever, cough, sputum, sore throat, rhinorrhea, myalgia, fatigue, dyspnea, headache, unconsciousness, nausea/vomiting, and diarrhea. CBC findings comprised hemoglobin, hematocrit, lymphocyte, platelet, and white blood cell (WBC) counts.\nOutcomes We adopted in-hospital death as the end point because death is the most serious outcome of COVID-19.\nSelection of variables and construction of scoring system We included all 3,940 patients hospitalized with COVID-19 in the development set for the selection of variables and the development of the mortality score. As previously stated, 35 variables entered the selection process. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for the initial variable selection. For datasets with a low events-per-variable ratio, LASSO is more appropriate than stepwise regression analysis, and it is better suited to regression models with high-dimensional predictors. LASSO obtains the subset of predictors that minimizes the prediction error for the outcome. In penalized regression, a constant λ must be specified to adjust the amount of coefficient shrinkage. With larger penalties, the estimates of weaker factors shrink toward zero, so that only the strongest predictors remain in the model.1 We used LASSO regression with 10-fold cross-validation for internal validation. The penalty that minimized the cross-validation prediction error rate was selected (λmin). The R package “glmnet” (R Foundation) was used to perform the LASSO regression. The variables identified by LASSO regression that were independently significant in logistic regression analysis were used to generate the risk prediction model (COVID-Mortality Score). The COVID-Mortality Score was generated on the basis of the coefficients from the logistic model. 
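The LASSO step can be illustrated with scikit-learn's L1-penalized logistic regression, which mirrors the R glmnet workflow the study describes: cross-validate over the penalty path and keep the predictors whose coefficients remain nonzero at the best penalty. The data below are synthetic and the setup is a sketch, not the published analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(42)

# Synthetic admission data: 3 informative predictors plus 7 pure-noise ones.
n = 500
X = rng.normal(size=(n, 10))
logits = X[:, :3] @ np.array([1.5, -1.0, 0.8]) - 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# L1-penalized logistic regression with 10-fold CV over the penalty path;
# this parallels glmnet's lambda.min selection (C is the inverse of lambda).
model = LogisticRegressionCV(
    Cs=10, cv=10, penalty="l1", solver="liblinear", random_state=0
).fit(X, y)

# Predictors surviving the shrinkage (nonzero coefficients).
selected = np.flatnonzero(model.coef_[0] != 0)
```

As in the paper's pipeline, the surviving predictors would then be re-entered into an ordinary logistic model, and only the independently significant ones kept for the score.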
We used the following equation to estimate the probability of death: probability = exp(Σβ × X)/[1 + exp(Σβ × X)].\nAssessment of accuracy of prediction model The accuracy of the COVID-Mortality Score was assessed using the AUC of the receiver operating characteristic curve. For internal validation, we used 200 bootstrap resamplings. 
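Internal validation by bootstrap resampling of the AUC can be sketched as below. The scores and outcomes are synthetic; this illustrates the general technique (200 resamples with replacement, percentile interval), not the authors' R code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic predicted probabilities and outcomes (~5% mortality).
n = 1000
y = (rng.random(n) < 0.05).astype(int)
scores = np.clip(0.05 + 0.4 * y + rng.normal(0, 0.15, n), 0, 1)

# 200 bootstrap resamplings of the AUC, as described for internal validation.
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)          # resample patients with replacement
    if y[idx].min() == y[idx].max():     # AUC needs both classes present
        continue
    aucs.append(roc_auc_score(y[idx], scores[idx]))

# Percentile-based 95% interval of the bootstrapped AUCs.
auc_ci = (np.percentile(aucs, 2.5), np.percentile(aucs, 97.5))
```

The spread of the bootstrapped AUCs gives an optimism-aware sense of how stable the apparent discrimination is within the development data.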
Statistical analysis was performed with R software (version 4.0.2, R Foundation), and P < 0.05 was considered statistically significant. To validate the generalizability of the COVID-Mortality Score, we used data from the 1,688 patients not included in the development set.\nEthics statement This study was approved by the ethics committee of the Korea Centers for Disease Control and Prevention (KCDC), and written informed consent was waived because of the de-identified retrospective nature of the publicly available data.", "Characteristics of the development set In the development set, on hospital admission, 134 of 3,940 patients (3.4%) were admitted directly to the intensive care unit (ICU), with the rest (96.6%) admitted to the general ward. The end point of mortality occurred in 4.3% (n = 169). 
Patients who died were more likely to be men and to have a BMI < 18.5, hypertension, an ICU admission, and more comorbidities than those who survived (Table 2). Fever (23.1%), cough (42.0%), sputum (29.0%), sore throat (15.8%), rhinorrhea (10.9%), myalgia (16.3%), fatigue (4.0%), dyspnea (12.3%), headache (16.8%), unconsciousness (0.6%), nausea/vomiting (4.5%), and diarrhea (9.1%) were the most common symptoms. The CBC findings of the development set are also presented in Table 2. When CBC findings such as WBC, lymphocyte, platelet, and hemoglobin were evaluated as continuous variables, they did not exhibit a significant U-shaped pattern.\nData are mean ± standard deviation or number (percentage), where No. is the total number of patients with available data.\nWBC = white blood cell.\nPredictor selection The 35 variables measured at hospital admission (Table 2; see Methods) were included in the LASSO regression. After LASSO selection (Supplementary Fig. 1), 17 variables remained as significant predictors of death: age, sex, BMI, SBP, heart rate, body temperature, the symptoms rhinorrhea, dyspnea, and unconsciousness, the comorbidities diabetes mellitus, hypertension, chronic obstructive pulmonary disease, and malignancy, and the CBC findings WBC, lymphocyte, platelet, and hemoglobin. Entering these 17 variables into a logistic regression model yielded 13 variables that were independently statistically significant predictors of death; these were included in the risk score. These variables were age 60–69 years (odds ratio [OR], 3.63; 95% CI, 1.64–8.01; P = 0.001), age 70–79 years (OR, 6.12; 95% CI, 2.84–13.16; P < 0.001), age ≥ 80 years (OR, 21.24; 95% CI, 9.65–46.74; P < 0.001), male sex (OR, 1.67; 95% CI, 1.04–2.67; P = 0.034), BMI < 18.5 (OR, 3.38; 95% CI, 1.64–6.95; P < 0.001), diabetes mellitus (OR, 2.10; 95% CI, 1.33–3.31; P = 0.001), malignancy history (OR, 2.78; 95% CI, 1.14–6.79; P = 0.025), dementia (OR, 2.67; 95% CI, 1.49–4.78; P < 0.001), rhinorrhea (OR, 0.27; 95% CI, 0.08–0.91; P = 0.035), dyspnea (OR, 4.03; 95% CI, 2.50–6.48; P < 0.001), unconsciousness (OR, 25.10; 95% CI, 6.55–96.18; P < 0.001), WBC (OR per 10³/μL, 1.10; 95% CI, 1.04–1.17; P < 0.001), lower lymphocyte proportion (OR per %, 0.92; 95% CI, 0.89–0.94; P < 0.001), lower platelet count (OR per 10⁴/μL, 0.90; 95% CI, 0.88–0.93; P < 0.001), and lower hemoglobin level (OR per g/dL, 0.81; 95% CI, 0.72–0.92; P = 0.001) (Table 3).\nEach age group was compared with all other age groups.\nOR = odds ratio, BMI = body mass index, WBC = white blood cell.\nThe performance, validation and optimization of COVID-Mortality Score The clinical characteristics and CBC findings of the validation set are presented in Supplementary Table 1. The AUCs of the model in the development and validation sets were 0.97 (95% CI, 0.85–0.91; Fig. 1) and 0.96 (95% CI, 0.84–0.93; Fig. 2), respectively. When the scoring system was optimized for > 90% sensitivity by stratifying age groups, the accuracy was 81.0%, with 91.7% sensitivity and 80.5% specificity, in the development set. In the validation set, the accuracy was 80.2%, with 86.1% sensitivity and 80.0% specificity. The optimized scoring system was used to construct the online risk calculator (https://www.diseaseriskscore.com). The online risk calculator determines whether a patient belongs to the high-risk or low-risk group and presents the hazard ratio and rank percentile for mortality. 
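Optimizing a cutoff for > 90% sensitivity can be illustrated with an ROC curve: pick the highest score threshold whose true-positive rate still reaches 0.90, which maximizes specificity subject to the sensitivity constraint. The sketch below uses synthetic scores and is not the published calibration.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)

# Synthetic risk scores: deaths (y = 1) tend to score higher than survivors.
n = 2000
y = (rng.random(n) < 0.05).astype(int)
scores = np.clip(0.1 + 0.5 * y + rng.normal(0, 0.2, n), 0, 1)

# roc_curve returns thresholds in decreasing order, with TPR nondecreasing,
# so the first threshold with TPR >= 0.90 is the largest (most specific) one.
fpr, tpr, thresholds = roc_curve(y, scores)
ok = tpr >= 0.90
cutoff = thresholds[ok][0]
sensitivity = tpr[ok][0]
specificity = 1.0 - fpr[ok][0]
```

Repeating this per age stratum would yield stratum-specific cutoffs of the kind the optimized scoring system uses.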
The model-derived score thresholds used for the optimized scoring system were 0.51 for age under 40 years, 0.00252 for ages 40–49, 0.00176 for ages 50–59, 0.02302 for ages 60–69, 0.09532 for ages 70–79, and 0.13311 for age 80 years or older.", "This study developed and validated a clinical risk score (the COVID-Mortality Score) to predict mortality among patients hospitalized with COVID-19 infection. Importantly, the 13 variables required to calculate the risk of mortality with this score are all generally readily available on hospital admission. Practically, if a patient's estimated risk of death is low, the clinician may choose careful monitoring, whereas a high-risk estimate might require aggressive treatment or immediate ICU care. In this context, we optimized the scoring model to achieve higher sensitivity, to the detriment of accuracy, given the potentially lethal clinical outcome of COVID-19.\nFurthermore, this score-based model could assist clinicians in making decisions. For example, clinicians may treat patients with a high-risk score more intensively in an emergency if resources and ICU beds are limited. Older age, sex, BMI, diabetes mellitus, malignancy, dementia, rhinorrhea, dyspnea, unconsciousness, WBC, lymphocyte, platelet, and hemoglobin were all included in the COVID-Mortality Score. Previous studies have found several of these variables to be risk factors for severe illness related to COVID-19 (Table 1). Wu et al.11 found that older age and more comorbidities were associated with a higher risk of developing acute respiratory distress syndrome (ARDS) in patients infected with COVID-19. Recently, Liang et al.4 developed a risk score based on the characteristics of COVID-19 patients at the time of hospital admission to predict a patient's risk of developing critical illness. 
They found that, from 72 potential predictors, 10 variables were independent predictive factors and were included in a risk score that had a mean AUC of 0.88 in both the development (95% CI, 0.85–0.91) and validation (95% CI, 0.84–0.93) cohorts. Some of the variables in the Chinese model, such as age, dyspnea, unconsciousness, and cancer history, were also included in the COVID-Mortality Score, and despite its development in a Korean population, which could limit its generalizability to other areas of the world, the present results show a similar AUC of 96–97%. Nevertheless, the COVID-Mortality Score will need to be validated externally in cohorts with heterogeneous baseline characteristics, because its predictors were limited to those available in the current data.\nWhile mortality prediction is neither perfect nor absolute, a simple score that predicts the severity of a patient's illness and hospital course will aid admitting and emergency room physicians in triaging and predicting the prognosis of COVID-19 infection, which we now recognize has a very broad spectrum of severity. It can also be used to guide recommendations for palliative care consultations early in a patient's hospital course.\nAlthough this study includes a large sample for constructing the risk score and a relatively large sample for validation, the data for score development and validation are entirely from Korea and are limited to the specified predictors and to mortality as the outcome, limiting the generalizability of the risk score to other areas of the world. However, despite these differences in race, the risk score remained valid in predicting in-hospital mortality.\nIn conclusion, we developed a risk score to estimate the risk of mortality among patients with COVID-19 based on 13 variables commonly measured on admission to hospital. This score could help identify patients in need of more supportive treatment or assist with optimizing the use of medical resources." ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, "discussion" ]
Keywords: COVID-19, In-hospital Mortality, Death, Prediction, Risk Score
INTRODUCTION: Since the outbreak of coronavirus disease 2019 (COVID-19) in Wuhan, China, in December 2019, the disease has spread rapidly worldwide.1,2,3 As of March 22, 2021, the World Health Organization (WHO) reported a total of 123,877,740 COVID-19 cases globally, with an average mortality of 2.2%. In many patients the disease is mild or self-limiting; however, in a considerable proportion it is severe and fatal. Consequently, it is vital to identify in advance those patients who are at greatest risk of mortality, to enable prompt referral to appropriate care settings and improve outcomes. Recently, clinical scores to predict critical illness and/or a fatal outcome of COVID-19 were developed in a cohort of 1,590 Chinese patients treated in more than 575 centers throughout China.4 Liang et al.4 identified 10 independent predictive factors (abnormal chest radiograph, age, hemoptysis, dyspnea, unconsciousness, number of comorbidities, cancer history, neutrophil-to-lymphocyte ratio, lactate dehydrogenase, and direct bilirubin), which were used to produce a risk score with a mean area under the curve (AUC) of 0.88 in both the development (95% confidence interval [CI], 0.85–0.91) and validation (95% CI, 0.84–0.93) cohorts for estimating the risk that a hospitalized patient with COVID-19 will develop critical illness. Further reports have described methods or new severity scores to assess the disease severity and mortality of COVID-19 infection (Table 1).5,6,7,8,9 However, it is important to understand that the mortality of COVID-19 varies according to race and ethnicity, and therefore the accuracy of risk scores is not necessarily transferrable between countries.10 Therefore, the present study aimed to develop a novel COVID-19 in-hospital mortality risk score (hereafter referred to as the COVID-Mortality Score) based on data rapidly obtainable soon after hospital admission in South Korea.
COVID-19 = coronavirus disease 2019, CI = confidence interval.

METHODS:

Data sources and processing: We obtained medical records from laboratory-confirmed hospitalized cases of COVID-19 reported to the Korea Centers for Disease Control and Prevention (KCDC) between January 2020 and April 2020. All 120 hospitals in Korea that were assigned to treat COVID-19 patients submitted the clinical data of all their hospitalized cases with laboratory-confirmed COVID-19 infection to the KCDC by April 30, 2020. All patients with COVID-19 were diagnosed and treated according to the guidelines published by the KCDC (http://www.cdc.go.kr). According to the KCDC guidelines, laboratory confirmation of COVID-19 infection was defined as a positive result on a real-time reverse-transcription polymerase-chain-reaction assay of nasal and oropharyngeal swabs. All the patients analyzed either died in hospital or were discharged home, although we could not obtain information on in-hospital mortality from causes other than COVID-19 or underlying diseases. We collected data on the clinical status of all COVID-19 patients at hospitalization (clinical symptoms and signs, complete blood count [CBC] findings, disease severity, and discharge status). The patients' data were collected up to death or discharge from the hospital. Ordinal variables were converted into separate dichotomous variables. We randomly divided the cohort into a development and a validation set using a 7:3 ratio. Imputation was considered for variables with less than 30% missing values; we used a predictive random forest algorithm for the imputation.

Potential predictive variables: The 35 potential predictive variables included the following patient characteristics at hospital admission: demographic variables and body mass index (BMI), medical history, clinical signs and symptoms, and CBC findings. Demographic variables such as age, sex, and BMI were collected for the study. Medical history included diabetes mellitus, hypertension, heart failure, cardiovascular disease, bronchial asthma, chronic obstructive pulmonary disease, chronic renal disease, malignancy, chronic liver disease, autoimmune disease, and dementia. Clinical signs and symptoms included categorical and continuous variables as follows: systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate, body temperature, fever, cough, sputum, sore throat, rhinorrhea, myalgia, fatigue, dyspnea, headache, unconsciousness, nausea/vomiting, and diarrhea. CBC findings were included as follows: hemoglobin, hematocrit, lymphocyte, platelet, and white blood cell (WBC).

Outcomes: We adopted in-hospital death as the end point because death is the most serious outcome of COVID-19.

Selection of variables and construction of the scoring system: We included all 3,940 patients hospitalized with COVID-19 in the development set for the selection of variables and development of the mortality score. As previously stated, the 35 variables were involved in the selection process. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for the initial variable selection.
For datasets with a low events-per-variable ratio, LASSO is more appropriate than stepwise regression analysis, and it is more satisfactory for regression models with high-dimensional predictors. This is a logistic regression model that obtains the subset of predictors that minimizes prediction error for a quantitative variable. In penalized regression, a constant λ must be specified to adjust the amount of coefficient shrinkage. With larger penalties, the estimates of weaker factors shrink toward zero so that only the strongest predictors remain in the model.1 We used LASSO regression augmented with 10-fold cross-validation for internal validation. The value of λ that minimized the cross-validation prediction error rate was selected (λ min). The R package "glmnet" (R Foundation) was used to perform the LASSO regression. The variables identified by LASSO regression that remained independently significant in logistic regression analysis were used to generate the risk prediction model (COVID-Mortality Score). The COVID-Mortality Score was generated on the basis of the coefficients from the logistic model. We used the following equation to estimate the probability: probability = exp(Σβ × X)/[1 + exp(Σβ × X)].

Assessment of accuracy of the prediction model: The accuracy of the COVID-Mortality Score was assessed using the AUC of the receiver operating characteristic curve. For internal validation, we used 200 bootstrap resamplings. Statistical analysis was performed with R software (version 4.0.2, R Foundation), and P < 0.05 was considered statistically significant. To validate the generalizability of the COVID-Mortality Score, we used data from the 1,688 patients not included in the development set.
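The probability equation above is the standard logistic transformation of the linear predictor Σβ × X. A minimal sketch of how such a score produces a mortality probability, using invented coefficients (NOT the published COVID-Mortality Score values), could look like this:

```python
import math

def mortality_probability(beta, x):
    """Logistic model: p = exp(z) / (1 + exp(z)), where z = sum(b_i * x_i)."""
    z = sum(b * xi for b, xi in zip(beta, x))
    return math.exp(z) / (1.0 + math.exp(z))

# Hypothetical coefficients (intercept first) -- illustrative only,
# not the fitted coefficients of the published model.
beta = [-4.0, 1.3, 0.5]  # intercept, indicator age >= 80, indicator male sex
x = [1, 1, 0]            # one patient: intercept term, age >= 80, female
p = mortality_probability(beta, x)  # sigmoid(-2.7), about 0.063
```

Note that exp(z)/(1 + exp(z)) is algebraically identical to 1/(1 + exp(−z)); the published score applies this transformation to the 13 selected predictors with their fitted coefficients.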
Ethics statement: This study was approved by the ethics committee of the Korea Centers for Disease Control and Prevention (KCDC), and written informed consent was waived because of the de-identified, retrospective nature of the publicly available data.

RESULTS:

Characteristics of the development set: In the development set, on hospital admission, 134 of 3,940 patients (3.4%) were admitted directly to the intensive care unit (ICU), with the rest (96.6%) admitted to the general ward. The end point of mortality occurred in 4.3% (n = 169).
Patients who died were more likely to be men and more likely to have a BMI < 18.5, hypertension, an admission to the ICU, and more comorbidities than those who survived (Table 2). Fever (23.1%), cough (42.0%), sputum (29.0%), sore throat (15.8%), rhinorrhea (10.9%), myalgia (16.3%), fatigue (4.0%), dyspnea (12.3%), headache (16.8%), unconsciousness (0.6%), nausea/vomiting (4.5%), and diarrhea (9.1%) were the commonest symptoms. CBC findings of the development set are also presented in Table 2. When CBC findings such as WBC, lymphocyte, platelet, and hemoglobin were evaluated as continuous variables, they did not exhibit a significant U-shaped pattern. Data are mean ± standard deviation or number (percentage), where No. is the total number of patients with available data. WBC = white blood cell.

Predictor selection: The 35 variables measured at hospital admission (Table 2; see Methods) were included in the LASSO regression.
After LASSO regression selection (Supplementary Fig. 1), 17 variables remained as significant predictors of death: age, sex, BMI, SBP, heart rate, body temperature, rhinorrhea, dyspnea, unconsciousness, diabetes mellitus, hypertension, chronic obstructive pulmonary disease, malignancy, WBC, lymphocyte, platelet, and hemoglobin. When these 17 variables were entered into a logistic regression model, 13 remained independently statistically significant predictors of death and were included in the risk score. These variables included age by three groups: 60–69 years (odds ratio [OR], 3.63; 95% CI, 1.64–8.01; P = 0.001), 70–79 years (OR, 6.12; 95% CI, 2.84–13.16; P < 0.001), and ≥ 80 years (OR, 21.24; 95% CI, 9.65–46.74; P < 0.001); male sex (OR, 1.67; 95% CI, 1.04–2.67; P = 0.034); BMI < 18.5 (OR, 3.38; 95% CI, 1.64–6.95; P < 0.001); diabetes mellitus (OR, 2.10; 95% CI, 1.33–3.31; P = 0.001); malignancy history (OR, 2.78; 95% CI, 1.14–6.79; P = 0.025); dementia (OR, 2.67; 95% CI, 1.49–4.78; P < 0.001); rhinorrhea (OR, 0.27; 95% CI, 0.08–0.91; P = 0.035); dyspnea (OR, 4.03; 95% CI, 2.50–6.48; P < 0.001); unconsciousness (OR, 25.10; 95% CI, 6.55–96.18; P < 0.001); WBC (OR per 10³/μL, 1.10; 95% CI, 1.04–1.17; P < 0.001); lower lymphocyte proportion (OR per %, 0.92; 95% CI, 0.89–0.94; P < 0.001); lower platelet count (OR per 10⁴/μL, 0.90; 95% CI, 0.88–0.93; P < 0.001); and lower hemoglobin level (OR per g/dL, 0.81; 95% CI, 0.72–0.92; P = 0.001) (Table 3). Each age group was compared with all other age groups. OR = odds ratio, BMI = body mass index, WBC = white blood cell.

The performance, validation, and optimization of the COVID-Mortality Score: The clinical characteristics and CBC findings of the validation set are presented in Supplementary Table 1. The AUCs of the model in the development set and validation set were 0.97 (95% CI, 0.85–0.91; Fig. 1) and 0.96 (95% CI, 0.84–0.93; Fig. 2), respectively.
When the scoring system was optimized for > 90% sensitivity by stratifying age groups, accuracy was 81.0%, with 91.7% sensitivity and 80.5% specificity, in the development set. In the validation set, accuracy was 80.2%, with 86.1% sensitivity and 80.0% specificity. The optimized scoring system was used to construct an online risk calculator (https://www.diseaseriskscore.com), which determines whether a patient belongs to the high-risk or low-risk group and presents the hazard ratio and high-rank percentage for mortality. The model-derived score thresholds used for the optimized scoring system were 0.51 for age under 40, 0.00252 for ages 40–49, 0.00176 for ages 50–59, 0.02302 for ages 60–69, 0.09532 for ages 70–79, and 0.13311 for age 80 or older.
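The sensitivity-first optimization described above can be sketched as follows: within each age stratum, choose the highest score cutoff that still captures at least 90% of the observed deaths. This is a hedged illustration of the approach, not the authors' published code, and the toy scores below are invented:

```python
import math

def pick_threshold(y_true, y_score, target_sens=0.90):
    """Highest cutoff (score >= cutoff predicts death) whose sensitivity
    on the given data is at least target_sens."""
    pos_scores = sorted((s for s, y in zip(y_score, y_true) if y == 1), reverse=True)
    # Number of deaths that must score at or above the cutoff;
    # the small epsilon guards against floating-point round-up.
    k = math.ceil(target_sens * len(pos_scores) - 1e-9)
    return pos_scores[k - 1]

def sens_spec(y_true, y_score, cutoff):
    """Sensitivity and specificity at a given cutoff."""
    tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= cutoff)
    tn = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s < cutoff)
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    return tp / n_pos, tn / n_neg

# Invented toy data for one age stratum (10 deaths, 5 survivors).
deaths    = [0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.10]
survivors = [0.05, 0.20, 0.30, 0.50, 0.60]
y_true  = [1] * len(deaths) + [0] * len(survivors)
y_score = deaths + survivors

cutoff = pick_threshold(y_true, y_score)         # 0.55 for this toy stratum
sens, spec = sens_spec(y_true, y_score, cutoff)  # sensitivity 0.9, specificity 0.8
```

Choosing the highest cutoff that meets the sensitivity target keeps specificity as high as possible, which mirrors the trade-off the authors describe (sensitivity prioritized over overall accuracy).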
DISCUSSION: This study developed and validated a clinical risk score (COVID-Mortality Score) to predict mortality among patients hospitalized with COVID-19 infection. Importantly, the 13 variables required for calculating the risk of mortality using this score are all generally readily available on hospital admission. Practically, if a patient's estimated risk of death is low, the clinician may choose careful monitoring, whereas high-risk estimates might require aggressive treatment or immediate ICU care. In this context, we optimized the scoring model to achieve higher sensitivity at the expense of accuracy, given the potentially lethal clinical outcome of COVID-19.

Furthermore, this score-based model could assist clinicians in making decisions. For example, clinicians may treat patients with a high-risk score more intensively in an emergency if resources and ICU beds are limited. Older age, sex, BMI, diabetes mellitus, malignancy, dementia, rhinorrhea, dyspnea, unconsciousness, WBC, lymphocyte, platelet, and hemoglobin were all included in the COVID-Mortality Score. Previous studies have found several of these variables to be risk factors for severe illness related to COVID-19 (Table 1). Wu et al.11 found that older age and more comorbidities were associated with a higher risk of developing acute respiratory distress syndrome (ARDS) in patients infected with COVID-19. Recently, Liang et al.4 developed a risk score based on the characteristics of COVID-19 patients at the time of hospital admission to predict a patient's risk of developing critical illness.
They found that, of 72 potential predictors, 10 variables were independent predictive factors and were included in a risk score with a mean AUC of 0.88 in both the development (95% CI, 0.85–0.91) and validation (95% CI, 0.84–0.93) cohorts. Some of the variables in the Chinese model, such as age, dyspnea, unconsciousness, and cancer history, were also included in the COVID-Mortality Score, and despite its development in a Korean population, which could limit its generalizability to other areas of the world, the present results show similarly high AUCs of 0.96 to 0.97. Nevertheless, the COVID-Mortality Score will need to be validated externally in cohorts with heterogeneous baseline characteristics, because it is limited to the predictors available in the current data.

While mortality prediction is neither perfect nor absolute, a simple score that predicts the severity of a patient's illness and hospital course will aid admitting and emergency room physicians in triaging and predicting the prognosis of COVID-19 infection, which has a very broad spectrum of severity. It can also be used to guide recommendations for palliative care consultations early in a patient's hospital course.

Although this study includes a large sample for constructing the risk score and a relatively large sample for validation, the data for score development and validation are entirely from Korea and are limited to the specified predictors and to mortality as the outcome, limiting the generalizability of the risk score to other areas of the world. However, despite differences in race, the risk score remained valid in predicting in-hospital mortality.

In conclusion, we developed a risk score to estimate the risk of mortality among patients with COVID-19 based on 13 variables commonly measured on admission to hospital.
This score could help identify patients in need of more supportive treatment or assist with optimizing the use of medical resources.
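The age-stratified thresholds reported above amount to a simple lookup: a patient is flagged as high risk when the model's predicted mortality probability meets or exceeds the threshold for their age group. The sketch below is a hypothetical illustration of that logic only — the `classify` function and the "high risk"/"low risk" labels are assumptions, not the authors' implementation; the official calculator is at https://www.diseaseriskscore.com.

```python
# Hypothetical sketch of the age-stratified threshold lookup.
# Threshold values are taken from the text; everything else is illustrative.
AGE_THRESHOLDS = [
    (40, 0.51),               # age < 40 years
    (50, 0.00252),            # ages 40-49
    (60, 0.00176),            # ages 50-59
    (70, 0.02302),            # ages 60-69
    (80, 0.09532),            # ages 70-79
    (float("inf"), 0.13311),  # age >= 80
]

def classify(age: int, predicted_probability: float) -> str:
    """Flag a patient as high risk when the model's predicted mortality
    probability meets or exceeds the threshold for their age group."""
    for upper, threshold in AGE_THRESHOLDS:
        if age < upper:
            return "high risk" if predicted_probability >= threshold else "low risk"
    raise ValueError("age must be a non-negative number")
```

Note how the thresholds for middle-aged groups are far lower than for the youngest group, reflecting the sensitivity-over-accuracy optimization described in the text: even a small predicted probability triggers closer attention in groups where deaths actually occur.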
Background: Early identification of patients with coronavirus disease 2019 (COVID-19) who are at high risk of mortality is of vital importance for appropriate clinical decision making and delivering optimal treatment. We aimed to develop and validate a clinical risk score for predicting mortality at the time of admission of patients hospitalized with COVID-19. Methods: Collaborating with the Korea Centers for Disease Control and Prevention (KCDC), we established a prospective consecutive cohort of 5,628 patients with confirmed COVID-19 infection who were admitted to 120 hospitals in Korea between January 20, 2020, and April 30, 2020. The cohort was randomly divided using a 7:3 ratio into a development (n = 3,940) and validation (n = 1,688) set. Clinical information and complete blood count (CBC) detected at admission were investigated using Least Absolute Shrinkage and Selection Operator (LASSO) and logistic regression to construct a predictive risk score (COVID-Mortality Score). The discriminative power of the risk model was assessed by calculating the area under the curve (AUC) of the receiver operating characteristic curves. Results: The incidence of mortality was 4.3% in both the development and validation set. A COVID-Mortality Score consisting of age, sex, body mass index, combined comorbidity, clinical symptoms, and CBC was developed. AUCs of the scoring system were 0.96 (95% confidence interval [CI], 0.85-0.91) and 0.97 (95% CI, 0.84-0.93) in the development and validation set, respectively. If the model was optimized for > 90% sensitivity, accuracies were 81.0% and 80.2% with sensitivities of 91.7% and 86.1% in the development and validation set, respectively. The optimized scoring system has been applied to the public online risk calculator (https://www.diseaseriskscore.com). 
Conclusions: This clinically developed and validated COVID-Mortality Score, using clinical data available at the time of admission, will aid clinicians in predicting in-hospital mortality.
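The modelling pipeline described in the Methods (LASSO-based variable selection followed by logistic regression, evaluated by AUC on a held-out split) can be sketched generically. This is a minimal illustration on synthetic data, not the KCDC cohort or the authors' code; the 70/30 split and penalty strength are assumptions for demonstration.

```python
# Hypothetical sketch: LASSO-style selection + logistic regression + AUC,
# mirroring the pipeline described in the Methods, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 13))                 # 13 candidate predictors
logit = 1.5 * X[:, 0] - X[:, 1] + rng.normal(size=1000)
y = (logit > 1.0).astype(int)                   # binary outcome

# 7:3 development/validation split, as in the study design
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# L1-penalized logistic fit shrinks uninformative coefficients to zero (LASSO)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_dev, y_dev)
selected = np.flatnonzero(lasso.coef_[0])       # retained predictors

# Refit an ordinary logistic model on the selected predictors, score by AUC
model = LogisticRegression().fit(X_dev[:, selected], y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val[:, selected])[:, 1])
```

The two-stage structure (penalized selection, then an unpenalized refit) keeps the final score interpretable as a small set of coefficients, which is what allows it to be packaged as a bedside calculator.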
null
null
6,424
373
[ 260, 172, 20, 281, 79, 40, 253, 446, 208 ]
13
[ "covid", "95", "variables", "ci", "95 ci", "age", "score", "19", "covid 19", "mortality" ]
[ "hospitalized covid 19", "19 coronavirus disease", "coronavirus disease 2019", "mortality covid 19", "prognosis covid 19" ]
null
null
[CONTENT] COVID-19 | In-hospital Mortality | Death | Prediction | Risk Score [SUMMARY]
[CONTENT] COVID-19 | In-hospital Mortality | Death | Prediction | Risk Score [SUMMARY]
[CONTENT] COVID-19 | In-hospital Mortality | Death | Prediction | Risk Score [SUMMARY]
null
[CONTENT] COVID-19 | In-hospital Mortality | Death | Prediction | Risk Score [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | COVID-19 | Child | Child, Preschool | Female | Hospital Mortality | Humans | Infant | Infant, Newborn | Logistic Models | Male | Middle Aged | Prospective Studies | Republic of Korea | SARS-CoV-2 | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | COVID-19 | Child | Child, Preschool | Female | Hospital Mortality | Humans | Infant | Infant, Newborn | Logistic Models | Male | Middle Aged | Prospective Studies | Republic of Korea | SARS-CoV-2 | Young Adult [SUMMARY]
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | COVID-19 | Child | Child, Preschool | Female | Hospital Mortality | Humans | Infant | Infant, Newborn | Logistic Models | Male | Middle Aged | Prospective Studies | Republic of Korea | SARS-CoV-2 | Young Adult [SUMMARY]
null
[CONTENT] Adolescent | Adult | Aged | Aged, 80 and over | COVID-19 | Child | Child, Preschool | Female | Hospital Mortality | Humans | Infant | Infant, Newborn | Logistic Models | Male | Middle Aged | Prospective Studies | Republic of Korea | SARS-CoV-2 | Young Adult [SUMMARY]
null
[CONTENT] hospitalized covid 19 | 19 coronavirus disease | coronavirus disease 2019 | mortality covid 19 | prognosis covid 19 [SUMMARY]
[CONTENT] hospitalized covid 19 | 19 coronavirus disease | coronavirus disease 2019 | mortality covid 19 | prognosis covid 19 [SUMMARY]
[CONTENT] hospitalized covid 19 | 19 coronavirus disease | coronavirus disease 2019 | mortality covid 19 | prognosis covid 19 [SUMMARY]
null
[CONTENT] hospitalized covid 19 | 19 coronavirus disease | coronavirus disease 2019 | mortality covid 19 | prognosis covid 19 [SUMMARY]
null
[CONTENT] covid | 95 | variables | ci | 95 ci | age | score | 19 | covid 19 | mortality [SUMMARY]
[CONTENT] covid | 95 | variables | ci | 95 ci | age | score | 19 | covid 19 | mortality [SUMMARY]
[CONTENT] covid | 95 | variables | ci | 95 ci | age | score | 19 | covid 19 | mortality [SUMMARY]
null
[CONTENT] covid | 95 | variables | ci | 95 ci | age | score | 19 | covid 19 | mortality [SUMMARY]
null
[CONTENT] covid 19 | 19 | covid | scores | 2019 | mortality | risk | disease | coronavirus disease 2019 | fatal [SUMMARY]
[CONTENT] covid | regression | variables | 19 | covid 19 | disease | kcdc | lasso | patients | included [SUMMARY]
[CONTENT] 95 | ci | 95 ci | 001 | age | set | 80 | variables | wbc | table [SUMMARY]
null
[CONTENT] covid | 19 | covid 19 | 95 | ci | 95 ci | variables | score | 001 | age [SUMMARY]
null
[CONTENT] 2019 | COVID-19 ||| COVID-19 [SUMMARY]
[CONTENT] the Korea Centers for Disease Control and Prevention | KCDC | 5,628 | COVID-19 | 120 | Korea | between January 20, 2020 | April 30, 2020 ||| 7:3 | 3,940 | 1,688 ||| CBC | Least Absolute Shrinkage and Selection Operator | LASSO | COVID-Mortality Score ||| [SUMMARY]
[CONTENT] 4.3% ||| CBC ||| 0.96 | 95% ||| CI | 0.85-0.91 | 0.97 | 95% | CI | 0.84-0.93 ||| 90% | 81.0% | 80.2% | 91.7% | 86.1% ||| [SUMMARY]
null
[CONTENT] 2019 | COVID-19 ||| COVID-19 ||| the Korea Centers for Disease Control and Prevention | KCDC | 5,628 | COVID-19 | 120 | Korea | between January 20, 2020 | April 30, 2020 ||| 7:3 | 3,940 | 1,688 ||| CBC | Least Absolute Shrinkage and Selection Operator | LASSO | COVID-Mortality Score ||| ||| ||| 4.3% ||| CBC ||| 0.96 | 95% ||| CI | 0.85-0.91 | 0.97 | 95% | CI | 0.84-0.93 ||| 90% | 81.0% | 80.2% | 91.7% | 86.1% ||| ||| COVID-Mortality Score | clinicians [SUMMARY]
null
Leukoencephalopathy with accumulated succinate is indicative of SDHAF1 related complex II deficiency.
22995659
Deficiency of complex II (succinate dehydrogenase, SDH) represents a rare cause of mitochondrial disease and is associated with a wide range of clinical symptoms. Recently, mutations of SDHAF1, the gene encoding for the SDH assembly factor 1, were reported in SDH-defective infantile leukoencephalopathy. Our goal was to identify SDHAF1 mutations in further patients and to delineate the clinical phenotype.
BACKGROUND
In a retrospective data collection study we identified nine children with biochemically proven complex II deficiency among our cohorts of patients with mitochondrial disorders. The cohort comprised five patients from three families affected by SDH-defective infantile leukoencephalopathy with accumulation of succinate in disordered cerebral white matter, as detected by in vivo proton MR spectroscopy. One of these patients had neuropathological features of Leigh syndrome. Four further unrelated patients of the cohort showed diverse clinical phenotypes without leukoencephalopathy. SDHAF1 was sequenced in all nine patients.
METHODS
Homozygous mutations of SDHAF1 were detected in all five patients affected by leukoencephalopathy with accumulated succinate, but not in any of the four patients with other, diverse clinical phenotypes. Two sisters had a mutation reported previously; in the other three patients, two novel mutations were found.
RESULTS
Leukoencephalopathy with accumulated succinate is a key symptom of defective complex II assembly due to SDHAF1 mutations.
CONCLUSION
[ "Adolescent", "Child", "Child, Preschool", "Female", "Humans", "Leukoencephalopathies", "Magnetic Resonance Spectroscopy", "Male", "Proteins", "Succinic Acid" ]
3492161
Background
Deficiency of complex II (succinate dehydrogenase, SDH) is a rare cause of disordered oxidative phosphorylation (OXPHOS) and is associated with a wide range of clinical symptoms. Among our cohorts of more than 1200 patients with defects in OXPHOS we found only nine with biochemically proven deficiency of complex II. In 2001 we reported a characteristic finding in localized in vivo cerebral proton magnetic resonance spectroscopy (MRS) in three patients from two unrelated families, two German sisters of Turkish origin (family A) and one Norwegian boy (family B), presenting with symptoms and MRI signs of leukoencephalopathy [1]. MRS revealed a prominent singlet at 2.40 ppm in cerebral and cerebellar white matter originating from accumulated succinate in affected white matter. Biochemical investigations demonstrated isolated deficiency of complex II in muscle and fibroblasts of these patients. Recently, homozygous mutations in SDHAF1, encoding a new LYR-motif protein, were detected in 2 families from Turkey and Italy with several children affected by infantile leukoencephalopathy with defective SDH [2,3]. We investigated whether SDHAF1 is mutated in our nine patients with complex II deficiency with either SDH-defective leukoencephalopathy or other, diverse clinical phenotypes without leukoencephalopathy.
Patients and methods
Table 1 summarizes the clinical, neuroradiological, and biochemical features of five patients from three families with SDH-defective leukoencephalopathy and four unrelated patients with other clinical phenotypes of complex II deficiency. Details of clinical and neuroradiological features of patients #1, #2 and #3 were described previously [1]. Table 1 legend: Clinical, neuroradiological, biochemical and genetic features of five patients from three families with SDH-defective leukoencephalopathy and four unrelated patients with other, diverse phenotypes of complex II deficiency. Abbreviations: SDH, succinate dehydrogenase; CS, citrate synthase; (T), Turkish origin; (N), Norwegian origin; (P), Palestinian origin; (G), German origin; (J), Jewish origin; f, female; m, male; mo, months; yrs, years; + = present; - = absent/not done; LE = leukoencephalopathy; a, mutation reported previously [2]; b, mutation not reported previously. In family A, patients #1 and #2 were the second and fifth children of healthy, consanguineous (first cousins) German parents of Turkish origin belonging to a large pedigree in which SDHAF1 mutations were previously described [2]. Both sisters presented with motor deterioration and spasticity in the second half of their first year of life. MRI of the brain revealed extensive T2-hyperintensities in cerebral and cerebellar white matter, and cerebral proton MRS demonstrated accumulation of succinate. The elder sister died from multiorgan failure with severe lactic acidosis at age 18 months. Postmortem examination revealed histopathological features and topographic patterns of a multifocal spongiform encephalomyelopathy consistent with Leigh syndrome [1]. The younger sister showed severe motor disability with marked spastic tetraparesis and relatively preserved cognitive abilities. She died at age 11 years. In family B, patient #3 was the first of three children of allegedly unrelated Norwegian parents coming from neighboring areas. 
With onset at 20 months increasing spasticity and clumsiness were observed. Cerebral MRI and proton MRS indicated a leukoencephalopathy with accumulation of succinate. Biochemical analysis of fibroblasts demonstrated an isolated deficiency of complex II. At present, at age 16 years, his main clinical feature is spastic paraplegia. He has suffered single epileptic seizures but has no preventive treatment. The cognitive function is tested to be within normal range. His fine motor skills and language function are good, and he is attending ordinary school with some facilitation. A follow-up cranial MRI performed at age 9 years showed leukoencephalopathy with supratentorial bilateral T2-hyperintensities (Figure 1), largely unchanged compared with neuroimaging performed 5 years before [1]. (A-C) Axial T2-weighted and (D) coronal FLAIR-weighted MR images of patient 3 at 9 years of age show widespread bilateral T2-hyperintensities in cerebral periventricular white matter. Involvement of the pons (C, full white arrow) and cystic lesions (D, open arrow) are visible. Peripheral U-fibers are spared. Lesions are largely unchanged compared to neuroimaging at age 4 years1. In family C, patients #4 and #5 were the first and second of three daughters of consanguineous Palestinian parents. With onset at 14 and 4 months, respectively, they showed motor regression and spasticity. In patient #4, best motor function was standing up, best mental function was speaking a few words. In patient #5, best motor function was head control at 5 months, best social function was smiling at 3 months. In both girls MRI and proton MRS of the brain revealed bilateral leukoencephalopathy and accumulation of succinate. Complex II deficiency was demonstrated in muscle or lymphocytes or both. Patient #4 died at age 5 years from pneumonia. Postmortem was not performed. Additional four patients (#6 to #9) presented with other, diverse clinical features. 
Patient #6 showed exercise intolerance and SDH-defective myopathy. He had normal cognitive abilities with very good performance at school as well as normal MRI and MRS of the brain. After initial normal development, patient #7 suffered from acute liver failure requiring liver transplantation with fatal outcome at age 2 years. Psychomotor retardation with muscle weakness and hearing impairment were main clinical features of patient #8. Patient #9 showed psychomotor retardation as well as muscular hypotonia and weakness. In all of these four patients, complex II deficiency was demonstrated biochemically in muscle or liver. MRI of the brain was performed in patients #6, #8, and #9 (with additional proton MRS of the brain in #6) and did not reveal leukoencephalopathy or any other abnormalities. Genetic analysis: Total genomic DNA was extracted from peripheral blood leukocytes by standard techniques. Primers for DNA amplification and sequencing were designed to cover exon 1 of SDHAF1 along with flanking segments (GenBank Reference No. NC_000019.9). Screening for sequence variants was performed using the BigDyeTM Terminator Ready Reaction chemistry on an ABI PRISM 3100 Avant genetic analyzer (Applied Biosystems, Darmstadt, Germany). All identified mutations were confirmed by direct sequencing of two different PCR amplification products on forward and reverse strands. Primer sequences as well as PCR and sequencing conditions are available on request. Informed consent was obtained for each patient from the parents. The Institutional Review Board approved the study.
Results
Patients #1 and #2, siblings of Turkish origin, were homozygous for a missense mutation c.164 G > C, corresponding to p.Arg55Pro, with both unaffected parents being heterozygous for the mutation. We found a homozygous c.22C > T nonsense mutation in patient #3 of Norwegian origin predicted to result in a premature stop of translation (p.Gln8X). Two healthy siblings as well as the unaffected parents were heterozygous carriers of this variant. In patients #4 and #5, siblings from consanguineous parents of Palestinian origin, a missense mutation c.170 G > A, corresponding to p.Gly57Glu, was detected. This mutation affects the same highly conserved residue, Gly57, as the mutation Gly57Arg reported previously in an Italian family [2]. This observation indicates a fundamental role for Gly57 in the function of SDHAF1. Molecular analysis of the coding sequence as well as of adjacent promoter and 3′UT regions revealed no sequence alterations in patients #6 to #9.
null
null
[ "Background", "Genetic analysis", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Deficiency of complex II (succinate dehydrogenase, SDH) is a rare cause of disordered oxidative phosphorylation (OXPHOS) and is associated with a wide range of clinical symptoms. Among our cohorts of more than 1200 patients with defects in OXPHOS we found only nine with biochemically proven deficiency of complex II.\nIn 2001 we reported a characteristic finding in localized in vivo cerebral proton magnetic resonance spectroscopy (MRS) in three patients from two unrelated families, two German sisters of Turkish origin (family A) and one Norwegian boy (family B), presenting with symptoms and MRI signs of leukoencephalopathy [1]. MRS revealed a prominent singlet at 2.40 ppm in cerebral and cerebellar white matter originating from accumulated succinate in affected white matter. Biochemical investigations demonstrated isolated deficiency of complex II in muscle and fibroblasts of these patients.\nRecently, homozygous mutations in SDHAF1, encoding a new LYR-motif protein, were detected in 2 families from Turkey and Italy with several children affected by infantile leukoencephalopathy with defective SDH [2,3].\nWe investigated whether SDHAF1 is mutated in our nine patients with complex II deficiency with either SDH-defective leukoencephalopathy or other, diverse clinical phenotypes without leukoencephalopathy.", "Total genomic DNA was extracted from peripheral blood leukocytes by standard techniques. Primers for DNA amplification and sequencing were designed to cover exon 1 of SDHAF1 along with flanking segments (GenBank Reference No. NC_000019.9). Screening for sequence variants was performed using the BigDyeTM Terminator Ready Reaction chemistry on an ABI PRISM 3100 Avant genetic analyzer (Applied Biosystems, Darmstadt, Germany). All identified mutations were confirmed by direct sequencing of two different PCR amplification products on forward and reverse strands. Primer sequences as well as PCR and sequencing conditions are available on request. 
Informed consent was obtained for each patient from the parents. The Institutional Review Board approved the study.", "SDH: Succinate dehydrogenase; SDHAF1: SDH assembly factor 1; OXPHOS: Oxidative phosphorylation; MRI: Magnetic resonance imaging; MRS: Magnetic resonance spectroscopy; 3′UT region: 3′ untranslated region; LYR-motif: Pattern in protein structure consisting of leucine, tyrosine, and arginine.", "The authors declare that they have no competing interests.", "AO carried out the molecular genetic studies. SE, AB, JS contributed with clinical data. AS carried out the biochemical investigations. OE contributed with clinical and genetic information and interpretation of data. JG participated in the design of the study and interpretation of data. KB conceived of the study, provided clinical information and drafted the manuscript. All authors participated in finalizing the manuscript and read and approved the final manuscript." ]
[ null, null, null, null, null ]
[ "Background", "Patients and methods", "Genetic analysis", "Results", "Discussion", "Abbreviations", "Competing interests", "Authors’ contributions" ]
[ "Deficiency of complex II (succinate dehydrogenase, SDH) is a rare cause of disordered oxidative phosphorylation (OXPHOS) and is associated with a wide range of clinical symptoms. Among our cohorts of more than 1200 patients with defects in OXPHOS we found only nine with biochemically proven deficiency of complex II.\nIn 2001 we reported a characteristic finding in localized in vivo cerebral proton magnetic resonance spectroscopy (MRS) in three patients from two unrelated families, two German sisters of Turkish origin (family A) and one Norwegian boy (family B), presenting with symptoms and MRI signs of leukoencephalopathy [1]. MRS revealed a prominent singlet at 2.40 ppm in cerebral and cerebellar white matter originating from accumulated succinate in affected white matter. Biochemical investigations demonstrated isolated deficiency of complex II in muscle and fibroblasts of these patients.\nRecently, homozygous mutations in SDHAF1, encoding a new LYR-motif protein, were detected in 2 families from Turkey and Italy with several children affected by infantile leukoencephalopathy with defective SDH [2,3].\nWe investigated whether SDHAF1 is mutated in our nine patients with complex II deficiency with either SDH-defective leukoencephalopathy or other, diverse clinical phenotypes without leukoencephalopathy.", "The Table 1 summarizes the clinical, neuroradiological, and biochemical features of five patients from three families with SDH-defective leukoencephalopathy and 4 unrelated patients with other clinical phenotypes of complex II deficiency. Details of clinical and neuroradiological features of patients #1, #2 and #3 were described previously [1]. 
\nClinical, neuroradiological, biochemical and genetic features of five patients from three families with SDH-defective leukoencephalopathy and four unrelated patients with other, diverse phenotypes of complex II deficiency\nSDH succinate dehydrogenase; CS citrate synthase; (T) Turkish origin; (N) Norwegian origin; (P) Palestinian origin; (G) German origin; (J) Jewish origin; f female; m male; mo months; yrs years; + = present; - = absent/not done; LE = leukoencephalopathy; a mutation reported previously [2], b mutation not reported previously.\nIn family A, patients #1 and #2 were the second and fifth children of healthy, consanguineous (first cousins) German parents of Turkish origin belonging to a big pedigree where SDHAF1 mutations were previously described [2]. Both sisters presented with motor deterioration and spasticity in the 2nd half of their first year of life. MRI of the brain revealed extensive T2-hyperintensities in cerebral and cerebellar white matter, and cerebral proton MRS demonstrated accumulation of succinate. The elder sister died from multiorgan failure with severe lactic acidosis at age 18 months. Postmortem examination revealed histopathological features and topic patterns of a multifocal spongiform encephalomyelopathy consistent with Leigh syndrome [1]. The younger sister showed severe motor disability with marked spastic tetraparesis and relatively preserved cognitive abilities. She died at age 11 years.\nIn family B, patient #3 was the first of three children of allegedly unrelated Norwegian parents coming from neighboring areas. With onset at 20 months increasing spasticity and clumsiness were observed. Cerebral MRI and proton MRS indicated a leukoencephalopathy with accumulation of succinate. Biochemical analysis of fibroblasts demonstrated an isolated deficiency of complex II. At present, at age 16 years, his main clinical feature is spastic paraplegia. He has suffered single epileptic seizures but has no preventive treatment. 
The cognitive function is tested to be within normal range. His fine motor skills and language function are good, and he is attending ordinary school with some facilitation. A follow-up cranial MRI performed at age 9 years showed leukoencephalopathy with supratentorial bilateral T2-hyperintensities (Figure 1), largely unchanged compared with neuroimaging performed 5 years before [1]. \n(A-C) Axial T2-weighted and (D) coronal FLAIR-weighted MR images of patient 3 at 9 years of age show widespread bilateral T2-hyperintensities in cerebral periventricular white matter. Involvement of the pons (C, full white arrow) and cystic lesions (D, open arrow) are visible. Peripheral U-fibers are spared. Lesions are largely unchanged compared to neuroimaging at age 4 years1.\nIn family C, patients #4 and #5 were the first and second of three daughters of consanguineous Palestinian parents. With onset at 14 and 4 months, respectively, they showed motor regression and spasticity. In patient #4, best motor function was standing up, best mental function was speaking a few words. In patient #5, best motor function was head control at 5 months, best social function was smiling at 3 months. In both girls MRI and proton MRS of the brain revealed bilateral leukoencephalopathy and accumulation of succinate. Complex II deficiency was demonstrated in muscle or lymphocytes or both. Patient #4 died at age 5 years from pneumonia. Postmortem was not performed.\nAdditional four patients (#6 to #9) presented with other, diverse clinical features. Patient #6 showed exercise intolerance and SDH-defective myopathy. He had normal cognitive abilities with very good performance at school as well as normal MRI and MRS of the brain. After initial normal development, patient #7 suffered from acute liver failure requiring liver transplantation with fatal outcome at age 2 years. Psychomotor retardation with muscle weakness and hearing impairment were main clinical features of patient #8. 
Patient #9 showed psychomotor retardation as well as muscular hypotonia and weakness. In all of these four patients, complex II deficiency was demonstrated biochemically in muscle or liver. MRI of the brain was performed in patients #6, #8, and #9 (with additional proton MRS of the brain in #6) and did not reveal leukoencephalopathy or any other abnormalities.\n Genetic analysis Total genomic DNA was extracted from peripheral blood leukocytes by standard techniques. Primers for DNA amplification and sequencing were designed to cover exon 1 of SDHAF1 along with flanking segments (GenBank Reference No. NC_000019.9). Screening for sequence variants was performed using the BigDyeTM Terminator Ready Reaction chemistry on an ABI PRISM 3100 Avant genetic analyzer (Applied Biosystems, Darmstadt, Germany). All identified mutations were confirmed by direct sequencing of two different PCR amplification products on forward and reverse strands. Primer sequences as well as PCR and sequencing conditions are available on request. Informed consent was obtained for each patient from the parents. The Institutional Review Board approved the study.", "Total genomic DNA was extracted from peripheral blood leukocytes by standard techniques. Primers for DNA amplification and sequencing were designed to cover exon 1 of SDHAF1 along with flanking segments (GenBank Reference No. NC_000019.9). Screening for sequence variants was performed using the BigDyeTM Terminator Ready Reaction chemistry on an ABI PRISM 3100 Avant genetic analyzer (Applied Biosystems, Darmstadt, Germany). All identified mutations were confirmed by direct sequencing of two different PCR amplification products on forward and reverse strands. Primer sequences as well as PCR and sequencing conditions are available on request. Informed consent was obtained for each patient from the parents. The Institutional Review Board approved the study.", "Patients #1 and #2, siblings of Turkish origin, were homozygous for a missense mutation c.164 G > C, corresponding to p.Arg55Pro, with both unaffected parents being heterozygous for the mutation.\nWe found a homozygous c.22C > T nonsense mutation in patient #3 of Norwegian origin predicted to result in a premature stop of translation (p.Gln8X). Two healthy siblings as well as the unaffected parents were heterozygous carriers of this variant.\nIn patients #4 and #5, siblings from consanguineous parents of Palestinian origin, a missense mutation c.170 G > A, corresponding to p.Gly57Glu was detected. This mutation affects the same highly conserved residue, Gly57, as the mutation Gly57Arg reported previously in an Italian family [2]. This observation indicates a fundamental role for Gly57 for the function of SDHAF1.\nMolecular analysis of the coding sequence as well as of adjacent promoter and 3′UT regions revealed no sequence alterations in patients #6 to #9.", "Mutation analysis of the SDHAF1 gene revealed mutations in five patients with SDH-defective infantile leukoencephalopathy. 
keywords: [ "Succinate dehydrogenase", "Leukoencephalopathy", "SDHAF1", "Leigh syndrome", "Complex II deficiency", "Assembly factor" ]
Background: Deficiency of complex II (succinate dehydrogenase, SDH) is a rare cause of disordered oxidative phosphorylation (OXPHOS) and is associated with a wide range of clinical symptoms. Among our cohorts of more than 1200 patients with defects in OXPHOS we found only nine with biochemically proven deficiency of complex II. In 2001 we reported a characteristic finding on localized in vivo cerebral proton magnetic resonance spectroscopy (MRS) in three patients from two unrelated families, two German sisters of Turkish origin (family A) and one Norwegian boy (family B), presenting with symptoms and MRI signs of leukoencephalopathy [1]. MRS revealed a prominent singlet at 2.40 ppm in cerebral and cerebellar white matter, originating from succinate accumulated in affected white matter. Biochemical investigations demonstrated isolated deficiency of complex II in muscle and fibroblasts of these patients. Recently, homozygous mutations in SDHAF1, encoding a new LYR-motif protein, were detected in two families from Turkey and Italy with several children affected by infantile leukoencephalopathy with defective SDH [2,3]. We investigated whether SDHAF1 is mutated in our nine patients with complex II deficiency, who presented either with SDH-defective leukoencephalopathy or with other, diverse clinical phenotypes without leukoencephalopathy. Patients and methods: Table 1 summarizes the clinical, neuroradiological, and biochemical features of five patients from three families with SDH-defective leukoencephalopathy and four unrelated patients with other clinical phenotypes of complex II deficiency. Details of the clinical and neuroradiological features of patients #1, #2 and #3 were described previously [1].
Table 1. Clinical, neuroradiological, biochemical and genetic features of five patients from three families with SDH-defective leukoencephalopathy and four unrelated patients with other, diverse phenotypes of complex II deficiency. SDH = succinate dehydrogenase; CS = citrate synthase; (T) Turkish origin; (N) Norwegian origin; (P) Palestinian origin; (G) German origin; (J) Jewish origin; f = female; m = male; mo = months; yrs = years; + = present; - = absent/not done; LE = leukoencephalopathy; a = mutation reported previously [2]; b = mutation not reported previously. In family A, patients #1 and #2 were the second and fifth children of healthy, consanguineous (first-cousin) German parents of Turkish origin belonging to a large pedigree in which SDHAF1 mutations had previously been described [2]. Both sisters presented with motor deterioration and spasticity in the second half of their first year of life. MRI of the brain revealed extensive T2-hyperintensities in cerebral and cerebellar white matter, and cerebral proton MRS demonstrated accumulation of succinate. The elder sister died from multiorgan failure with severe lactic acidosis at age 18 months. Postmortem examination revealed histopathological features and topographic patterns of a multifocal spongiform encephalomyelopathy consistent with Leigh syndrome [1]. The younger sister showed severe motor disability with marked spastic tetraparesis and relatively preserved cognitive abilities. She died at age 11 years. In family B, patient #3 was the first of three children of allegedly unrelated Norwegian parents coming from neighboring areas. With onset at 20 months, increasing spasticity and clumsiness were observed. Cerebral MRI and proton MRS indicated a leukoencephalopathy with accumulation of succinate. Biochemical analysis of fibroblasts demonstrated an isolated deficiency of complex II. At present, at age 16 years, his main clinical feature is spastic paraplegia. He has suffered isolated epileptic seizures but receives no preventive treatment.
On formal testing, his cognitive function is within the normal range. His fine motor skills and language function are good, and he is attending an ordinary school with some facilitation. A follow-up cranial MRI performed at age 9 years showed leukoencephalopathy with supratentorial bilateral T2-hyperintensities (Figure 1), largely unchanged compared with neuroimaging performed 5 years before [1]. Figure 1. (A-C) Axial T2-weighted and (D) coronal FLAIR-weighted MR images of patient #3 at 9 years of age show widespread bilateral T2-hyperintensities in cerebral periventricular white matter. Involvement of the pons (C, full white arrow) and cystic lesions (D, open arrow) are visible. Peripheral U-fibers are spared. Lesions are largely unchanged compared to neuroimaging at age 4 years [1]. In family C, patients #4 and #5 were the first and second of three daughters of consanguineous Palestinian parents. With onset at 14 and 4 months, respectively, they showed motor regression and spasticity. In patient #4, best motor function was standing up; best mental function was speaking a few words. In patient #5, best motor function was head control at 5 months; best social function was smiling at 3 months. In both girls, MRI and proton MRS of the brain revealed bilateral leukoencephalopathy and accumulation of succinate. Complex II deficiency was demonstrated in muscle, lymphocytes, or both. Patient #4 died at age 5 years from pneumonia. A postmortem examination was not performed. Four additional patients (#6 to #9) presented with other, diverse clinical features. Patient #6 showed exercise intolerance and SDH-defective myopathy. He had normal cognitive abilities with very good performance at school, as well as normal MRI and MRS of the brain. After initially normal development, patient #7 suffered acute liver failure requiring liver transplantation, with fatal outcome at age 2 years. Psychomotor retardation with muscle weakness and hearing impairment were the main clinical features of patient #8.
Patient #9 showed psychomotor retardation as well as muscular hypotonia and weakness. In all four of these patients, complex II deficiency was demonstrated biochemically in muscle or liver. MRI of the brain was performed in patients #6, #8, and #9 (with additional proton MRS of the brain in #6) and did not reveal leukoencephalopathy or any other abnormalities. Genetic analysis: Total genomic DNA was extracted from peripheral blood leukocytes by standard techniques. Primers for DNA amplification and sequencing were designed to cover exon 1 of SDHAF1 along with flanking segments (GenBank Reference No. NC_000019.9). Screening for sequence variants was performed using the BigDye™ Terminator Ready Reaction chemistry on an ABI PRISM 3100 Avant genetic analyzer (Applied Biosystems, Darmstadt, Germany). All identified mutations were confirmed by direct sequencing of two different PCR amplification products on forward and reverse strands. Primer sequences as well as PCR and sequencing conditions are available on request. Informed consent was obtained for each patient from the parents. The Institutional Review Board approved the study.
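The sequencing workflow above reports variants in coding-DNA (c.) notation; the protein-level (p.) consequences listed in the Results follow from the reading frame of the coding sequence. The sketch below illustrates that mapping. Note that the abridged codon table and the toy CDS are hypothetical illustrations, not the actual SDHAF1 reference sequence.

```python
# Sketch: map an HGVS-style coding substitution (e.g. c.22C>T) to its
# predicted protein consequence (p.Gln8X). Codon contexts here are
# hypothetical; the real SDHAF1 coding sequence is not reproduced.

# Abridged codon table covering only the codons used below ("X" = stop).
CODON_TABLE = {
    "ATG": "Met", "GCT": "Ala", "CAA": "Gln", "TAA": "X",
    "CGG": "Arg", "CCG": "Pro", "GGA": "Gly", "GAA": "Glu",
}

def protein_consequence(cds, pos, ref, alt):
    """Predict the p.-style consequence of substituting ref -> alt at
    1-based coding position `pos` of the coding sequence `cds`."""
    assert cds[pos - 1] == ref, "reference base mismatch"
    codon_idx = (pos - 1) // 3                      # 0-based codon number
    codon = cds[3 * codon_idx : 3 * codon_idx + 3]  # codon containing `pos`
    offset = (pos - 1) % 3                          # position within codon
    mutated = codon[:offset] + alt + codon[offset + 1:]
    return "p.%s%d%s" % (CODON_TABLE[codon], codon_idx + 1, CODON_TABLE[mutated])

# Toy CDS whose eighth codon is CAA (Gln); c.22 is the first base of codon 8,
# so c.22C>T turns CAA into the stop codon TAA -- the p.Gln8X pattern.
toy_cds = "ATG" + "GCT" * 6 + "CAA"
print(protein_consequence(toy_cds, 22, "C", "T"))  # p.Gln8X
```

The same arithmetic shows why c.164G>C and c.170G>A hit the second base of codons 55 and 57 ((164 - 1) // 3 = 54 and (170 - 1) // 3 = 56, i.e. codons 55 and 57), consistent with p.Arg55Pro and p.Gly57Glu.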
Results: Patients #1 and #2, siblings of Turkish origin, were homozygous for the missense mutation c.164G>C, corresponding to p.Arg55Pro, with both unaffected parents being heterozygous for the mutation. We found a homozygous c.22C>T nonsense mutation in patient #3, of Norwegian origin, predicted to result in a premature stop of translation (p.Gln8X). Two healthy siblings as well as the unaffected parents were heterozygous carriers of this variant. In patients #4 and #5, siblings from consanguineous parents of Palestinian origin, a missense mutation c.170G>A, corresponding to p.Gly57Glu, was detected. This mutation affects the same highly conserved residue, Gly57, as the mutation Gly57Arg reported previously in an Italian family [2]. This observation indicates a fundamental role of Gly57 in the function of SDHAF1. Molecular analysis of the coding sequence as well as of the adjacent promoter and 3′UT regions revealed no sequence alterations in patients #6 to #9. Discussion: Mutation analysis of the SDHAF1 gene revealed mutations in five patients with SDH-defective infantile leukoencephalopathy.
The missense mutation detected in patients #1 and #2 was reported previously in a large multiconsanguineous kindred of Turkish origin with several affected children [2,3]. The family history of the siblings described here (patients #1 and #2) indicates common ancestry with those patients. The homozygous nonsense mutation demonstrated in patient #3 and the homozygous missense mutation detected in patients #4 and #5 have not been reported before. Succinate dehydrogenase participates in electron transfer in the respiratory chain and in succinate catabolism in the Krebs cycle, and consists of four subunits, all encoded by the nuclear genome [4]. Isolated complex II deficiency is a relatively rare cause of mitochondrial disease compared with other respiratory chain defects, but is associated with a wide range of clinical features [5]. Mutations in all four genes, SDHA, SDHB, SDHC, and SDHD, have been reported, with remarkably diverse phenotypes. Mutations in the SDHA gene have been associated with Leigh syndrome [6], late-onset neurodegenerative disease [7] and dilated cardiomyopathy [8]. Heterozygous germline mutations in SDHA, SDHB, SDHC, and SDHD cause hereditary paragangliomas and pheochromocytomas [9], and germline mutations in SDHB and SDHC have been associated with gastrointestinal stromal tumors [4]. Recently, a mitochondrial encephalopathy was reported to be caused by SDHD mutations [10]. Whereas an increasing number of assembly factors have been identified for complexes I and III and for cytochrome oxidase, little was known about the assembly of complex II. Recently, two genes involved in this process were identified in humans; the first, termed SDHAF1, was found by linkage analysis in two families with SDH-defective infantile leukoencephalopathy [2]. Experiments in yeast indicated that the protein encoded by this gene is required for the stable assembly and full function of the SDH complex. The protein was thus termed SDH assembly factor 1 (SDHAF1) [2].
In the same year, mutations in the SDHAF2 (SDH5) gene, encoding a protein necessary for the flavination of the subunit SDHA, were detected in patients with paraganglioma [10]. The cause of such diverse phenotypes associated with defective assembly factors of complex II remains enigmatic to date. Our results confirm the pathogenicity of SDHAF1 mutations in infantile leukoencephalopathy due to defective succinate dehydrogenase. Our patients with complex II deficiency not associated with leukoencephalopathy but with other, diverse clinical phenotypes, including myopathy with exercise intolerance, acute liver failure, psychomotor delay, muscle weakness, and hearing impairment, did not carry a SDHAF1 mutation. Further studies will clarify whether infantile leukoencephalopathy with accumulation of succinate, readily detectable by in vivo proton MR spectroscopy of the brain, is pathognomonic for SDHAF1 deficiency. To date, clinical features comprising motor regression with spasticity, together with neuroradiological features including bilateral leukoencephalopathy with elevated succinate on cerebral proton MRS, are the suggestive findings pointing to a SDHAF1 mutation. Treatment with riboflavin has been found to be effective in selected mitochondrial disorders, including SDH deficiency [3]. In our cohort, riboflavin treatment was applied in patients #2, #4, and #5 (SDHAF1 mutations) as well as in patient #6 (SDH-defective myopathy with exercise intolerance). Riboflavin treatment had no discernible effect in the three patients with SDHAF1 mutations. Patient #6 had clear benefit from this treatment, with markedly prolonged motor endurance. The clinical course in our five patients with SDHAF1 mutations is strikingly diverse. Patients #1, #2, and #4, all carrying missense mutations, died at ages 18 months, 11 years, and 5 years, respectively, with histopathological features of Leigh syndrome in one of them.
In contrast, patient #3, who carries a stop mutation, follows a milder course, with spastic paraparesis as the main clinical feature 14 years after onset and stable white matter changes on MRI over many years. We hypothesize that an alternative start codon may be used, resulting in synthesis of a partially functional protein. Further studies are needed to elucidate the pathomechanism of this stop mutation. This variability points to additional genetic or epigenetic factors shaping the phenotype of defective complex II assembly due to mutated SDHAF1. Abbreviations: SDH: succinate dehydrogenase; SDHAF1: SDH assembly factor 1; OXPHOS: oxidative phosphorylation; MRI: magnetic resonance imaging; MRS: magnetic resonance spectroscopy; 3′UT region: 3′ untranslated region; LYR-motif: pattern in protein structure consisting of leucine, tyrosine, and arginine. Competing interests: The authors declare that they have no competing interests. Authors' contributions: AO carried out the molecular genetic studies. SE, AB, and JS contributed clinical data. AS carried out the biochemical investigations. OE contributed clinical and genetic information and interpretation of data. JG participated in the design of the study and in the interpretation of data. KB conceived of the study, provided clinical information, and drafted the manuscript. All authors participated in finalizing the manuscript and read and approved the final manuscript.
Background: Deficiency of complex II (succinate dehydrogenase, SDH) represents a rare cause of mitochondrial disease and is associated with a wide range of clinical symptoms. Recently, mutations of SDHAF1, the gene encoding SDH assembly factor 1, were reported in SDH-defective infantile leukoencephalopathy. Our goal was to identify SDHAF1 mutations in further patients and to delineate the clinical phenotype. Methods: In a retrospective data collection study we identified nine children with biochemically proven complex II deficiency among our cohorts of patients with mitochondrial disorders. The cohort comprised five patients from three families affected by SDH-defective infantile leukoencephalopathy with accumulation of succinate in disordered cerebral white matter, as detected by in vivo proton MR spectroscopy. One of these patients had neuropathological features of Leigh syndrome. Four further unrelated patients of the cohort showed diverse clinical phenotypes without leukoencephalopathy. SDHAF1 was sequenced in all nine patients. Results: Homozygous mutations of SDHAF1 were detected in all five patients affected by leukoencephalopathy with accumulated succinate, but not in any of the four patients with other, diverse clinical phenotypes. Two sisters carried a mutation reported previously; in the other three patients, two novel mutations were found. Conclusions: Leukoencephalopathy with accumulated succinate is a key symptom of defective complex II assembly due to SDHAF1 mutations.
whole_article_text_length: 2,631
whole_article_abstract_length: 244
other_sections_lengths: [ 223, 122, 53, 10, 79 ]
num_sections: 8
most_frequent_words: [ "patients", "sdhaf1", "clinical", "mutations", "leukoencephalopathy", "complex", "patient", "sdh", "mutation", "ii" ]
keybert_topics: [ "le leukoencephalopathy mutation", "reveal leukoencephalopathy abnormalities", "phenotypes leukoencephalopathy", "deficiency associated leukoencephalopathy", "leukoencephalopathy defective succinate" ]
[CONTENT] Succinate dehydrogenase | Leukoencephalopathy | SDHAF1 | Leigh syndrome | Complex II deficiency | Assembly factor [SUMMARY]
[CONTENT] Adolescent | Child | Child, Preschool | Female | Humans | Leukoencephalopathies | Magnetic Resonance Spectroscopy | Male | Proteins | Succinic Acid [SUMMARY]
[CONTENT] le leukoencephalopathy mutation | reveal leukoencephalopathy abnormalities | phenotypes leukoencephalopathy | deficiency associated leukoencephalopathy | leukoencephalopathy defective succinate [SUMMARY]
[CONTENT] patients | sdhaf1 | clinical | mutations | leukoencephalopathy | complex | patient | sdh | mutation | ii [SUMMARY]
[CONTENT] deficiency | ii | complex | complex ii | leukoencephalopathy | patients | deficiency complex | deficiency complex ii | symptoms | sdh [SUMMARY]
[CONTENT] patient | years | age | patients | leukoencephalopathy | performed | months | motor | sequencing | features [SUMMARY]
[CONTENT] mutation | siblings | parents | unaffected | gly57 | parents heterozygous | corresponding | patients siblings | unaffected parents | unaffected parents heterozygous [SUMMARY]
[CONTENT] patients | mutation | sdh | clinical | authors | leukoencephalopathy | authors declare competing interests | declare competing | declare | declare competing interests [SUMMARY]
[CONTENT] SDH ||| SDHAF1 | SDH | 1 | SDH ||| SDHAF1 [SUMMARY]
[CONTENT] nine | II ||| five | three ||| One | Leigh syndrome ||| Four ||| SDHAF1 | nine [SUMMARY]
[CONTENT] SDHAF1 | five | four ||| Two | three | two [SUMMARY]
[CONTENT] SDH ||| SDHAF1 | SDH | 1 | SDH ||| SDHAF1 ||| nine | II ||| five | three ||| One | Leigh syndrome ||| Four ||| SDHAF1 | nine ||| ||| SDHAF1 | five | four ||| Two | three | two ||| II | SDHAF1 [SUMMARY]